On 19 Jul 2017, at 14:39, Stefano Piani
wrote: Hello Jennie, I found your last comment extremely interesting, because inserting individual tuples into a table with a primary key is exactly what I do most of the time in my application. Could you please elaborate on this point? Do I have any alternatives?
Let me explain exactly what I want to do. In my database I have two tables, created in the following way:

CREATE SEQUENCE "retrievalid_seq" AS BIGINT;

CREATE TABLE retrievals (
    id           BIGINT DEFAULT NEXT VALUE FOR "retrievalid_seq",
    created_at   TIMESTAMP,
    geolocalized BOOLEAN NOT NULL,
    latitude     DECIMAL(8,5),
    longitude    DECIMAL(8,5),
    user_id      BIGINT NOT NULL,
    data         VARCHAR(500),
    PRIMARY KEY (id)
);

CREATE TABLE retrieval_users (
    user_id          BIGINT,
    user_name        VARCHAR(100),
    user_description VARCHAR(500),
    user_location    VARCHAR(300),
    PRIMARY KEY (user_id)
);
I receive about ten retrievals per second, and I want to save them in my "retrievals" table. Moreover, if a retrieval comes from a user that is not yet registered in the "retrieval_users" table, I also add an entry to that table.
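Concretely, each incoming retrieval is handled roughly like this (just a sketch with made-up values, e.g. user 42; in the application the actual values are of course supplied by the client):

-- check from the client whether the user is already known
SELECT 1 FROM retrieval_users WHERE user_id = 42;

-- only if the query above returned no row:
INSERT INTO retrieval_users (user_id, user_name, user_description, user_location)
VALUES (42, 'example_user', 'example description', 'example location');

-- then, for every incoming retrieval:
INSERT INTO retrievals (created_at, geolocalized, latitude, longitude, user_id, data)
VALUES (NOW(), true, 52.37403, 4.88969, 42, 'example payload');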
What is your advice? Do you think that I should somehow disable the index on the primary key of the tables (actually, I am not even sure that this is possible)?
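For instance, I imagine something along the following lines; this is only a sketch, it assumes the primary key constraint had been given an explicit name (in my schema above it has not), and I have no idea whether it would actually help:

-- hypothetical: assumes the table had been created with
--   CONSTRAINT retrievals_pk PRIMARY KEY (id)
ALTER TABLE retrievals DROP CONSTRAINT retrievals_pk;
-- ... perform the inserts ...
ALTER TABLE retrievals ADD CONSTRAINT retrievals_pk PRIMARY KEY (id);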
Hi Stefano,
I tried to save the retrievals in bunches, so that each transaction saves one hundred or one thousand retrievals,
We’d definitely recommend such bulk operations.
but it didn't help.
In what sense did it not help? Performance? An error? Jennie
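To expand on the bulk-operation recommendation above, a batch could look roughly like the following (made-up values; a size of a few hundred to a few thousand rows per transaction is only indicative):

START TRANSACTION;
INSERT INTO retrievals (created_at, geolocalized, latitude, longitude, user_id, data)
VALUES (NOW(), true,  52.37403, 4.88969, 42, 'example payload 1'),
       (NOW(), false, NULL,     NULL,    43, 'example payload 2');
-- ... keep adding rows, or repeat the INSERT, until the batch is complete ...
COMMIT;

For loading larger batches from files, COPY INTO may also be worth a look.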
Thank you for your time,
Stefano
On Tue, Jul 18, 2017 at 11:09 AM, Ying Zhang
wrote: On 17 Jul 2017, at 14:18, Stefano Piani
wrote: Hello, I think that I have faced the same issue. I have a database with a table that contains a few million entries, and I have been trying to add about ten new entries per second to that table. At the beginning the database is able to keep pace with the data, but after a while it starts to consume more and more RAM and to slow down. In that situation it also becomes extremely slow to answer queries. If I restart the database, everything seems to work fine again.
I am using Ubuntu 16.04, and the MonetDB instance is running inside a Docker container (again with Ubuntu 16.04).
This is what I have noticed so far:
* It takes a while before the problem starts to appear.
* The bigger the table, the sooner the problem appears.
* The time required for an insertion stays more or less the same, while the time required for a query increases.
Hello Stefano,
I was just thinking that the increasing response time of your queries might be caused by the fact that you insert individual tuples into a table with a primary key (which will certainly become slower over time), until I saw this remark of yours.
I’ll come back with more information for your issues in your other thread.
Regards, Jennie
As soon as I can, I will try to prepare a few scripts to check whether I can reproduce the problem that I have with my application. Then I will give you a more accurate description of the problem.
Best regards, Stefano
On Sat, Jul 15, 2017 at 7:31 AM, Sharma, Sreejith
wrote: Hi,
Has anyone encountered a memory bloat issue under high concurrency? Also, I don't see the memory coming down when the system is idle. When it bloats up, the response time slows down.
Regards,
Sreejith
_______________________________________________ users-list mailing list users-list@monetdb.org https://www.monetdb.org/mailman/listinfo/users-list