[MonetDB-users] query slowdown with concurrent write
Hi, MonetDB has great response times for the queries I have been running, but once I start inserting data into the table while concurrently running the queries, the queries slow down. To give you concrete numbers: I initialize the database with 8 million records and then run random aggregation queries repeatedly. Before I insert additional data, the maximum query time is 0.3 seconds. If I then insert 10 rows, the queries slow to a maximum of 1.5 s. The only way I have found to restore the original query speed is to drop the data and reload the 8 million rows into the table. I have also noticed that if I run my inserts while not concurrently querying the database, I don't run into the slowness, and the queries keep their 0.3 s response time. Has anyone else encountered this problem before? Is there a way around it? I have also tried using the COPY command instead of INSERTs, but I get the same result. Thanks, Bennett
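For concreteness, the workload described above might look roughly like the following sketch. The table, column names, and file path are hypothetical (not from the original report); the COPY and INSERT statements use standard MonetDB SQL syntax.

```sql
-- Hypothetical schema (illustrative only):
CREATE TABLE events (ts TIMESTAMP, region VARCHAR(32), amount DOUBLE);

-- Bulk-load the initial 8 million rows (fast path; path is a placeholder):
COPY 8000000 RECORDS INTO events
FROM '/tmp/events.csv' USING DELIMITERS ',', '\n';

-- Trickle inserts run concurrently with the queries (the reported trigger
-- of the slowdown):
INSERT INTO events VALUES (now(), 'eu-west', 42.0);

-- A representative aggregation query run repeatedly:
SELECT region, COUNT(*), AVG(amount)
FROM events
GROUP BY region;
```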
Hello, It is hard to answer your question from the information given. What MonetDB version are you running, and on what platform? If the database schema and queries are simple, you might show them. At first glance, you seem to be hitting the consequences of how updates are dealt with. As long as they have not been committed to the base table, subsequent (concurrent) queries have to build a consistent snapshot of your 8M-row table. This easily drains your resources when running concurrent users. One option to try is enabling the recycler optimizer, which might reduce the problem significantly, at least between two consecutive updates. regards, Martin
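Martin's recycler suggestion can be tried per session. The sketch below assumes a MonetDB release of that era (around Feb2010/Oct2010) that ships a "recycler_pipe" optimizer pipeline; the exact pipeline name may differ per version, so check the available pipelines first.

```sql
-- List the optimizer pipelines this server knows about:
SELECT * FROM sys.optimizers();

-- Switch the current session to the recycler pipeline
-- (assumed pipeline name; verify it appears in the list above):
SET optimizer = 'recycler_pipe';
```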
------------------------------------------------------------------------------ Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)! Finally, a world-class log management solution at an even better price-free! Download using promo code Free_Logger_4_Dev2Dev. Offer expires February 28th, so secure your free ArcSight Logger TODAY! http://p.sf.net/sfu/arcsight-sfd2d
_______________________________________________ MonetDB-users mailing list MonetDB-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/monetdb-users
Hi Bennett, To be able to analyse the problem, I have some questions:
- Could you run one of your aggregation queries both before the inserts (when it runs fast) and after the inserts (when it runs slow), and possibly also during the inserts, in TRACE mode (i.e., prefix the query with the keyword "TRACE") to get detailed profiling information, and provide us with the output?
- Did/could you also try, after inserting and querying concurrently, letting the server "cool down" (by running neither inserts nor queries) for some time, say, one minute (or up to 10 minutes), and then checking whether queries are afterwards still slow, or fast again?
- If the "cool down" does not help, could you try whether stopping and restarting the server makes your queries fast again after concurrent queries and inserts?
I understand that the latter two are not solutions, but they could help us analyse and understand the problem, which is a prerequisite for finding a suitable solution. Thanks! Stefan
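Applying Stefan's TRACE suggestion only requires prefixing the query with the keyword; for example (the table and query are illustrative, not from the original report):

```sql
-- Prefix any SQL query with TRACE to get detailed per-operator
-- profiling output instead of just the result:
TRACE SELECT region, COUNT(*), AVG(amount)
FROM events          -- hypothetical table name
GROUP BY region;
```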
--
| Stefan.Manegold @ CWI.nl | DB Architectures (INS1) |
| http://CWI.nl/~manegold/ | Science Park 123 (L321) |
| Tel.: +31 (0)20 592-4212 | 1098 XG Amsterdam (NL)  |
participants (3)
- Bennett Hiles
- Martin Kersten
- Stefan.Manegold@cwi.nl