[MonetDB-users] MonetDB memory issue

Hello, All,

We began to use MonetDB several weeks ago. As far as I can see, people have been using MonetDB for several years and operate with millions of records. But we could not insert even several thousand records. After several minutes MonetDB eats all 7 GB of memory and crashes without any message. It seems that we use MonetDB incorrectly. Do there exist any special options for starting MonetDB?

MonetDB version:
MonetDB server v5.20.5 (64-bit), based on kernel v1.38.5 (32-bit oids)
Release Jun2010-SP2
Copyright (c) 1993-July 2008 CWI
Copyright (c) August 2008-2010 MonetDB B.V., all rights reserved
Visit http://monetdb.cwi.nl/ for further information
Found 7.9GiB available memory
Configured for prefix: C:\Users\sjoerd\Documents\src\stable\vs64\NTv64
Libraries: libpcre: 7.9 2009-04-11; openssl: OpenSSL 1.0.0a 1 Jun 2010; libxml2: 2.7.7
Compiled by: sjoerd@ALGOL (i686-pc-win32)
Compilation: cl -GF -W3 -WX -wd4273 -wd4102 -MD -nologo -Ox
Linking : cl -GF -W3 -WX -wd4273 -wd4102 -MD -nologo -Ox

OS: Windows 7 x64 (Version 6.1.7600 Build 7600)
(We also tried it on Ubuntu 10.04 - same effect)

-- Best regards, Andrey.

Hi Andrey, On Fri, Nov 12, 2010 at 10:32:15AM +0300, Andrey Timofeyev wrote:
Hello, All,
We began to use MonetDB several weeks ago. As far as I can see, people have been using MonetDB for several years and operate with millions of records.
Indeed. E.g., on a standard 8 GB Linux desktop, we can easily load and query a 100 GB TPC-H instance; the largest table has 600 M records and is about 70 GB in size.
But we could not insert even several thousand records.
What do your records and/or table schema look like? How many columns, which types, how wide (bytes per record)? How do you insert them? Bulk load via COPY INTO from a CSV file, or a single INSERT statement per tuple? Which interface do you use? Plain mclient, ODBC, JDBC, ...? Can you successfully load less data / fewer records? If so, how many can you load successfully before the problems occur?
After several minutes MonetDB eats all 7 GB memory
That is to be expected if your data size is (that) large, i.e., MonetDB (efficiently & effectively) does exploit all available physical memory, if required --- what else does a DB server machine have memory for? ;-)
and crashes without any message.
That is unexpected and strange. No message at all? On the server console or in the terminal window / shell that the server was started in? Not even a segfault? Also not on Linux? What is the errorlevel (Windows) or return code (Linux) of your mserver5 process after it stops (crashes)? On Linux, did you try to start the server via merovingian and then check the merovingian logs?
It seems that we use MonetDB incorrectly. Do there exist any special options for starting MonetDB?
No, MonetDB should in general run "knob-less", i.e., without any special tuning options.
MonetDB version: MonetDB server v5.20.5 (64-bit), based on kernel v1.38.5 (32-bit oids) Release Jun2010-SP2 Copyright (c) 1993-July 2008 CWI Copyright (c) August 2008-2010 MonetDB B.V., all rights reserved Visit http://monetdb.cwi.nl/ for further information Found 7.9GiB available memory Configured for prefix: C:\Users\sjoerd\Documents\src\stable\vs64\NTv64 Libraries: libpcre: 7.9 2009-04-11 openssl: OpenSSL 1.0.0a 1 Jun 2010 libxml2: 2.7.7 Compiled by: sjoerd@ALGOL (i686-pc-win32) Compilation: cl -GF -W3 -WX -wd4273 -wd4102 -MD -nologo -Ox Linking : cl -GF -W3 -WX -wd4273 -wd4102 -MD -nologo -Ox
OS: Windows 7 x64 (Version 6.1.7600 Build 7600) (We also tried it on Ubuntu 10.04 - same effect)
-- Best regards, Andrey.
-- | Stefan.Manegold @ CWI.nl | DB Architectures (INS1) | | http://CWI.nl/~manegold/ | Science Park 123 (L321) | | Tel.: +31 (0)20 592-4212 | 1098 XG Amsterdam (NL) |

Hi Stefan, Thank you for your answer. What do your records and/or table schema look like?
How many columns, which types, how wide (bytes per record)?
Here is our schema:
CREATE TABLE entity1(entity1_id int PRIMARY KEY, entity1_name varchar(20) UNIQUE);
CREATE TABLE entity2(entity2_id int PRIMARY KEY, entity2_name varchar(200), entity2_type varchar(20));
CREATE TABLE entity2_entity1(entity2_id int, entity1_id int);
CREATE TABLE entity2_entity2(parent_entity2_id int, child_entity2_id int);
ALTER TABLE entity2 ADD CONSTRAINT entity2_name_type_unique UNIQUE(entity2_name, entity2_type);
ALTER TABLE entity2_entity1 ADD CONSTRAINT entity2_entity1_2_fk FOREIGN KEY(entity2_id) REFERENCES entity2s (entity2_id);
ALTER TABLE entity2_entity1 ADD CONSTRAINT entity2_entity1_1_fk FOREIGN KEY(entity1_id) REFERENCES entity1 (entity1_id);
ALTER TABLE entity2_entity2 ADD CONSTRAINT entity2_entity2_p2_fk FOREIGN KEY(parent_entity2_id) REFERENCES entity2 (entity2_id);
ALTER TABLE entity2_entity2 ADD CONSTRAINT entity2_entity2_c2_fk FOREIGN KEY(child_entity2_id) REFERENCES entity2 (entity2_id);
How do you insert them? Bulk load via COPY INTO from a CSV file, or a single INSERT statement per tuple?
Insert via single insert statements. Which interface do you use?
Plain mclient, ODBC, JDBC, ...?
We tried mclient + sql script and JDBC. Can you successfully load less data / fewer records?
If so, how many can you load successfully before the problems occur?
We can load about 5000 records into each table (20000 in total). We can see the same effect with memory when we do many SELECT statements through JDBC.
After several minutes MonetDB eats all 7 GB memory That is to be expected if your data size is (that) large, i.e., MonetDB (efficiently & effectively) does exploit all available physical memory, if required --- what else does a DB server machine have memory for? ;-)
Our 20000 records should take less than 10 MB + memory for indexes. That is unexpected and strange.
No message at all? On the server console or in the terminal window / shell that the server was started in? Not even a segfault?
On Linux we got a segmentation fault. I will tell more precisely a little bit later. -- Best regards, Andrey.

Hi Andrey, On Fri, Nov 12, 2010 at 12:53:13PM +0300, Andrey Timofeyev wrote:
Hi Stefan,
Thank you for your answer.
What do your records and/or table schema look like?
How many columns, which types, how wide (bytes per record)?
Here is our schema: CREATE TABLE entity1(entity1_id int PRIMARY KEY, entity1_name varchar(20) UNIQUE); CREATE TABLE entity2(entity2_id int PRIMARY KEY, entity2_name varchar(200), entity2_type varchar(20)); CREATE TABLE entity2_entity1(entity2_id int, entity1_id int); CREATE TABLE entity2_entity2(parent_entity2_id int, child_entity2_id int);
ALTER TABLE entity2 ADD CONSTRAINT entity2_name_type_unique UNIQUE(entity2_name, entity2_type);
ALTER TABLE entity2_entity1 ADD CONSTRAINT entity2_entity1_2_fk FOREIGN KEY(entity2_id) REFERENCES entity2s (entity2_id); ALTER TABLE entity2_entity1 ADD CONSTRAINT entity2_entity1_1_fk FOREIGN KEY(entity1_id) REFERENCES entity1 (entity1_id);
ALTER TABLE entity2_entity2 ADD CONSTRAINT entity2_entity2_p2_fk FOREIGN KEY(parent_entity2_id) REFERENCES entity2 (entity2_id); ALTER TABLE entity2_entity2 ADD CONSTRAINT entity2_entity2_c2_fk FOREIGN KEY(child_entity2_id) REFERENCES entity2 (entity2_id);
How do you insert them? Bulk load via COPY INTO from a CSV file, or a single INSERT statement per tuple?
Insert via single insert statements.
While this should indeed also work, for performance reasons we strongly recommend bulk-loading data using COPY INTO (provided your application scenario allows that, i.e., you have data to bulk-load). In that case, we also recommend to initially create the tables without primary keys, foreign keys, and other constraints, and to add them (using ALTER TABLE, as you already do for foreign keys and unique constraints) only once the data is loaded.
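For illustration, a minimal sketch of such a bulk load through JDBC might look like the following; the CSV path, the delimiters, and the added constraint are assumptions for illustration only, and the file must be readable by the server process:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:monetdb://127.0.0.1:50000/test", "monetdb", "monetdb");
        Statement st = con.createStatement();
        // Bulk-load a CSV file that lives on the server machine
        // (path and delimiters are placeholders).
        st.execute("COPY INTO entity1 FROM '/tmp/entity1.csv' "
                 + "USING DELIMITERS ',', '\\n'");
        // Add constraints only after the data is loaded.
        st.execute("ALTER TABLE entity1 ADD CONSTRAINT entity1_name_unique "
                 + "UNIQUE (entity1_name)");
        st.close();
        con.close();
    }
}

The same COPY INTO statement can of course also be run from an mclient SQL script instead of JDBC.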
Which interface do you use?
Plain mclient, ODBC, JDBC, ...?
We tried mclient + sql script and JDBC.
Can you successfully load less data / fewer records?
If so, how many can you load successfully before the problems occur?
We can load about 5000 records into each table (20000 in total). We can see the same effect with memory when we do many SELECT statements through JDBC.
Would you be able to share your datasets (and query workload) with us (not publicly via this list, but privately), such that we could analyse the behavior ourselves?
After several minutes MonetDB eats all 7 GB memory That is to be expected if your data size is (that) large, i.e., MonetDB (efficiently & effectively) does exploit all available physical memory, if required --- what else does a DB server machine have memory for? ;-)
Our 20000 records should take less than 10 MB + memory for indexes.
That is unexpected and strange.
No message at all? On the server console or in the terminal window / shell that the server was started in? Not even a segfault?
On Linux we got a segmentation fault. I will tell more precisely a little bit later.
It would be great if you could create a debug build of MonetDB on Linux (configured with --enable-debug), and either run it in the debugger or have it create a core dump (`ulimit -c unlimited`), and send us the stack trace after the crash. Stefan
-- Best regards, Andrey.
-- | Stefan.Manegold @ CWI.nl | DB Architectures (INS1) | | http://CWI.nl/~manegold/ | Science Park 123 (L321) | | Tel.: +31 (0)20 592-4212 | 1098 XG Amsterdam (NL) |

Thank you very much.
When we created the tables without constraints, mclient + sql script works fine.
But with JDBC, mserver continues to eat memory.
I will try a debug build on Linux and, if we do not solve the problem, will send you more information (with the JDBC client that inserts data).
-- Best regards, Andrey.

Hello Stefan,
The memory consumption grows with every JDBC query.
Here is an example:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SimpleTest {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:monetdb://127.0.0.1:50000/test", "monetdb", "monetdb");
        PreparedStatement st = con.prepareStatement(
                "INSERT into test_data (id, name) values(?, ?)");
        for (int i = 0; i < 1000000; i++) {
            st.setInt(1, i);
            st.setString(2, Integer.toString(i));
            st.addBatch();
            if (i % 10000 == 0) {
                System.out.println("10000 inserted");
            }
        }
        st.executeBatch();
        for (int i = 0; i < 1000000; i++) {
            PreparedStatement select = con.prepareStatement(
                    "SELECT * from test_data where id = ?");
            select.setInt(1, i);
            ResultSet rs = select.executeQuery();
            if (i % 10000 == 0) {
                System.out.println("10000 selected");
            }
            rs.close();
            select.close();
        }
    }
}
Can you reproduce it on your database?
-- Best regards, Andrey.

On 12-11-2010 21:47:43 +0300, Andrey Timofeyev wrote:
Hello Stefan,
The memory consumption grows with every JDBC query.
Here is example:
public class SimpleTest {
public static void main(String[] args) throws ClassNotFoundException, SQLException {
Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
Connection con = DriverManager.getConnection("jdbc:monetdb://127.0.0.1:50000/test", "monetdb", "monetdb");
PreparedStatement st = con.prepareStatement("INSERT into test_data (id, name) values(?, ?)");
for (int i = 0; i < 1000000; i++) {
st.setInt(1, i);
st.setString(2, Integer.toString(i));
st.addBatch();
if(i % 10000 == 0){
System.out.println("10000 inserted");
}
}
st.executeBatch();
You execute everything in one very large batch; why not execute (and clear) the batch at smaller intervals (see the sketch at the end of this message)? You print "10000 inserted", but you have actually only queued up that many inserts, not sent anything to the database.
for (int i = 0; i < 1000000; i++) {
PreparedStatement select = con.prepareStatement("SELECT * from test_data where id = ?");
You really want to make a million prepared statements here? The server has to store them all...
select.setInt(1, i);
ResultSet rs = select.executeQuery();
if(i % 10000 == 0){
System.out.println("10000 selected");
}
rs.close();
select.close();
}
}
}
Can you reproduce it on your database?
I guess so, since you're leaking prepared handles like mad
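A minimal sketch of what executing (and clearing) the batch at smaller intervals could look like; the connection details and the batch size of 10000 are taken from the example above, everything else is just an illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInsertSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:monetdb://127.0.0.1:50000/test", "monetdb", "monetdb");
        PreparedStatement st = con.prepareStatement(
                "INSERT INTO test_data (id, name) VALUES (?, ?)");
        for (int i = 0; i < 1000000; i++) {
            st.setInt(1, i);
            st.setString(2, Integer.toString(i));
            st.addBatch();
            if (i % 10000 == 9999) {
                // Actually send this chunk to the server and clear the batch,
                // instead of queueing all one million rows in one go.
                st.executeBatch();
                st.clearBatch();
                System.out.println((i + 1) + " inserted");
            }
        }
        st.executeBatch(); // flush any remaining rows
        st.close();
        con.close();
    }
}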

you execute everything in one very large batch, why not execute (and clear) the batch at smaller intervals? You print "10000 inserted", but you actually only queued up that amount of inserts, not sending anything to the database.
Yes, and one very large batch works OK. The problem here is in creating PreparedStatements.
You really want to make a million prepared statements here? The server has to store them all... I guess so, since you're leaking prepared handles like mad
We create a separate PreparedStatement for every query in our project. There is no memory leak on the client side (we close every PreparedStatement). Why should the server store PreparedStatements forever? Is it possible to configure the server not to store them? Or does it work that way by design, and we should cache our PreparedStatements? -- Best regards, Andrey.

On Fri, Nov 12, 2010 at 10:21:52PM +0300, Andrey Timofeyev wrote:
you execute everything in one very large batch, why not execute (and clear) the batch at smaller intervals? You print "10000 inserted", but you actually only queued up that amount of inserts, not sending anything to the database.
Yes, and one very large batch works OK. The problem here is in creating PreparedStatements.
You really want to make a million prepared statements here? The server has to store them all... I guess so, since you're leaking prepared handles like mad
We create a separate PreparedStatement for every query in our project. There is no memory leak on the client side (we close every PreparedStatement). Why should the server store PreparedStatements forever? Is it possible to configure the server not to store them? Or does it work that way by design, and we should cache our PreparedStatements?
The idea of prepared statements is to prepare a query template once and then call (re-use) it with different parameters multiple times, i.e., for 1000 inserts you would do one prepareStatement (and hence pay the cost of translating and optimizing that query and storing its execution plan only once) and then call (execute) it multiple times with (potentially) different parameters (arguments), at very low cost, since it then is merely calling a pre-compiled function ... This is a general idea, not MonetDB specific ... Stefan
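For illustration, a minimal sketch of the select loop from the earlier test program, rewritten to prepare the statement once outside the loop and merely re-bind and execute it inside (connection URL, credentials, and table name are copied from that program):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReusedSelectSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:monetdb://127.0.0.1:50000/test", "monetdb", "monetdb");
        // Prepare the query template once, outside the loop ...
        PreparedStatement select = con.prepareStatement(
                "SELECT * FROM test_data WHERE id = ?");
        for (int i = 0; i < 1000000; i++) {
            // ... and only re-bind the parameter and execute it inside the loop.
            select.setInt(1, i);
            ResultSet rs = select.executeQuery();
            rs.close();
        }
        select.close();
        con.close();
    }
}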
-- Best regards, Andrey.
-- | Stefan.Manegold @ CWI.nl | DB Architectures (INS1) | | http://CWI.nl/~manegold/ | Science Park 123 (L321) | | Tel.: +31 (0)20 592-4212 | 1098 XG Amsterdam (NL) |

On 12-11-2010 20:46:51 +0100, Stefan Manegold wrote:
You really want to make a million prepared statements here? The server has to store them all... I guess so, since you're leaking prepared handles like mad
We create a separate PreparedStatement for every query in our project. There is no memory leak on the client side (we close every PreparedStatement). Why should the server store PreparedStatements forever? Is it possible to configure the server not to store them? Or does it work that way by design, and we should cache our PreparedStatements?
The idea of prepared statements is to prepare a query template once and then call (re-use) it with different parameters multiple times, i.e., for 1000 inserts you would do one prepareStatement (and hence pay the cost of translating and optimizing that query and storing its execution plan only once) and then call (execute) it multiple times with (potentially) different parameters (arguments), at very low cost, since it then is merely calling a pre-compiled function ...
This is a general idea, not MonetDB specific ...
No, but the problem of actually not being able to destroy them server side while staying connected /is/ MonetDB specific. I just opened a feature request for this, since this usage of PreparedStatements is actually fairly common.

On 12-11-2010 21:08:57 +0100, Fabian Groffen wrote:
No, but the problem of actually not being able to destroy them server side while staying connected /is/ MonetDB specific.
I just opened a feature request for this, since this usage of PreparedStatements is actually fairly common.
Hibernate, for example, uses this strategy as well, since it allows Hibernate to let the driver perform all the mappings, quotings, escapings, and whatever else is necessary to convert a given Java Object (value) to the database-specific notation to use in e.g. an INSERT statement.

The idea of prepared statements is to prepare a query template once and then call (re-use) it with different parameters multiple times, i.e., for 1000 inserts you would do one prepareStatement (and hence pay the cost of translating and optimizing that query and storing its execution plan only once) and then call (execute) it multiple times with (potentially) different parameters (arguments), at very low cost, since it then is merely calling a pre-compiled function ...
This is a general idea, not MonetDB specific ...
Thank you for your help in solving our problem, and sorry for my persistence. But it would be good if MonetDB could detect equal PreparedStatements.

We have a client-server application: the client side uses the JDBC driver, the server side is MonetDB. The server side receives requests from the client with different parameters; one of them is PreparedStatements. The client side can behave optimally (reuse PreparedStatement objects) or not optimally (create a new PreparedStatement for every request, as the Spring Framework does by default). Or, for example, the client has a connection pool, and for each connection we will have one PreparedStatement (let's say 10 connections). But independently of how the client works, the server should work optimally. And if it creates precompiled functions according to some parameters, it would be very good if the server did not duplicate the same precompiled function for equal parameters. Is it practically possible to determine that two prepared statements are equal?

-- Best regards, Andrey.
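As an illustration of the client-side caching discussed above, a minimal sketch of a per-connection statement cache might look like the following; the class is invented purely for illustration and is not part of the MonetDB JDBC driver:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: keeps one PreparedStatement per SQL string for one
// connection, so the server only ever sees one prepared handle per query text.
public class StatementCache {
    private final Connection con;
    private final Map<String, PreparedStatement> cache =
            new HashMap<String, PreparedStatement>();

    public StatementCache(Connection con) {
        this.con = con;
    }

    // Return the cached statement for this SQL text, preparing it only once.
    public PreparedStatement get(String sql) throws SQLException {
        PreparedStatement ps = cache.get(sql);
        if (ps == null) {
            ps = con.prepareStatement(sql);
            cache.put(sql, ps);
        }
        return ps;
    }

    // Close all cached statements, e.g. before the connection is returned to a pool.
    public void close() throws SQLException {
        for (PreparedStatement ps : cache.values()) {
            ps.close();
        }
        cache.clear();
    }
}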
Participants (3):
- Andrey Timofeyev
- Fabian Groffen
- Stefan Manegold