Hi,

Without insight into your benchmark queries and the database schema used for the row stores, little can be said. A typical pitfall is to use row-oriented queries such as "SELECT * FROM tableexpr", which calls for expensive tuple reconstruction of all columns; a sketch of the difference follows below. Another explanation could be a multidimensional index maintained in the row store to speed up grouping. Last but not least, it could be the grouping order or a skewed data distribution.

MonetDB provides some tools to see where the time goes, which may give a hint:
http://www.monetdb.org/Documentation/Manuals/MonetDB/Profiler/Stethoscope

The table sizes you report are not extreme at all.
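For illustration, a minimal sketch of the projection point using mclient; the table and column names ("sales", "cust_id", "amount") and the database name "mydb" are hypothetical:

    # Forces tuple reconstruction of every column -- expensive in a column store:
    mclient -d mydb -s "SELECT * FROM sales"

    # Touches only the two columns the aggregation actually needs:
    mclient -d mydb -s "SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id"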
regards,
Martin

On 7/11/13 10:47 AM, Franck Routier wrote:

Hi,
I am benchmarking different DB alternatives for our BI application. We are using Mondrian/JPivot on a dataset with ~300 million rows and several parent/child dimensions.
We are running some 50 real-life scenarios (i.e. 50 different cubes using parts of this dataset) and measuring user response time on a server with several striped SATA3 SSDs, 32 GB RAM and an i7-3820 CPU @ 3.60GHz (6 cores).
During the tests, MonetDB performed by far the worst of the three candidates (PostgreSQL, VectorWise and, obviously, MonetDB). Things that take 10 seconds with PostgreSQL take 450 seconds with MonetDB... This was not expected at all, so looking at the server, I saw that MonetDB was consuming all memory (why not), including swap: a bad idea, I think. I was expecting MonetDB to use memory-mapped files when needed, and real memory, but not to use swap as if it were real memory.

So here is my question: with a (quite) big dataset and 32 GB of RAM, is there anything I can do to help MonetDB behave as expected?
(Maybe I should use ulimit? Something like the sketch below is what I have in mind.)
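The gdk_* settings and the 24 GB cap here are guesses on my part, not verified:

    # Cap the shell's (and thus mserver5's) virtual memory to 24 GB (value in KB):
    ulimit -v 25165824

    # Or start the server with explicit limits, if these settings apply
    # (the setting names, values and the dbpath are placeholders):
    mserver5 --dbpath=/path/to/db \
             --set gdk_mem_maxsize=24000000000 \
             --set gdk_vm_maxsize=30000000000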
Thanks for your input,
Franck
_______________________________________________ users-list mailing list users-list@monetdb.org http://mail.monetdb.org/mailman/listinfo/users-list