Thank you very much, Stefan, for the answer.
I will have to check whether your solution is faster than executing the
select query 500-1000 times in a loop from Python.
But your solution definitely has good optimization potential in terms
of precomputing the values.
2012/11/22 Stefan de Konink
On 11/22/12 00:05, Адель Ризаев wrote:
I'm doing an averaging over one column using a specific range in another column, multiple times. The query looks like: select avg(value) from myTable where time < N and time > M. I'm calling that query from a Python script multiple times. Can I call the query once in order to reduce the number of requests to MonetDB?
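For reference, the per-range loop being described might look roughly like this. This is only a minimal sketch, assuming the pymonetdb DB-API driver (with its pyformat parameter binding); the connection settings and the ranges list are placeholders standing in for the 500-1000 real intervals:

import pymonetdb

# Placeholder connection parameters; adjust to the actual database.
conn = pymonetdb.connect(database='demo', hostname='localhost',
                         username='monetdb', password='monetdb')
cur = conn.cursor()

# Hypothetical (n, m) pairs; the real script has 500-1000 of them.
ranges = [('23:00', '0:00'), ('20:00', '1:00')]

# One round trip to MonetDB per range -- the loop in question.
averages = []
for n, m in ranges:
    cur.execute("select avg(value) from myTable "
                "where time < %(n)s and time > %(m)s",
                {'n': n, 'm': m})
    averages.append(cur.fetchone()[0])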
You might rewrite the query as a bulk operation: keep a secondary table containing all the N,M pairs, join it against your table, and group by the pair. For example:
create table myTable (time time, value integer);
create table myArguments (n time, m time);

insert into mytable values ('13:00', 1);
insert into mytable values ('2:00', 2);
insert into mytable values ('5:00', 3);
insert into mytable values ('1:00', 4);
insert into mytable values ('0:00', 5);

insert into myarguments values ('1:00', '2:00');
insert into myarguments values ('23:00', '0:00');
insert into myarguments values ('20:00', '1:00');

select avg(value), n, m from myTable, myArguments where time < N and time > M group by N, M;

+--------------------------+----------+----------+
| L1                       | n        | m        |
+==========================+==========+==========+
|                      2.5 | 23:00:00 | 00:00:00 |
|                        2 | 20:00:00 | 01:00:00 |
+--------------------------+----------+----------+
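On the Python side, the bulk variant would then come down to one executemany to fill myArguments and a single select for all the averages. Again only a sketch, under the same assumptions (pymonetdb driver, placeholder connection settings, hypothetical ranges list):

import pymonetdb

conn = pymonetdb.connect(database='demo', hostname='localhost',
                         username='monetdb', password='monetdb')
cur = conn.cursor()

# Hypothetical (n, m) pairs; in practice the 500-1000 ranges from the script.
ranges = [('1:00', '2:00'), ('23:00', '0:00'), ('20:00', '1:00')]

# Refresh the argument table in one batch instead of querying per range.
cur.execute("delete from myArguments")
cur.executemany("insert into myArguments values (%(n)s, %(m)s)",
                [{'n': n, 'm': m} for n, m in ranges])

# Single round trip returning one average per (n, m) pair.
cur.execute("select avg(value), n, m from myTable, myArguments "
            "where time < n and time > m group by n, m")
for avg_value, n, m in cur.fetchall():
    print(n, m, avg_value)

conn.commit()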
Acceptable?
Stefan