Hi,

I had a look at MonetDB a while (a year or so) ago and stumbled upon a few crashes/memory issues back then. Apart from that it made a very promising impression on me. Today I thought I would give it another try to see whether those issues have been fixed, and found that COPY INTO still seems to have memory issues, and I wonder if I am the only one hitting them.

I am experimenting with dummy data created by a script. The data is created the following way:

  20 smallints:  values 1-10
  100 tinyints:  values 0-1
  20 ints:       values 1-10000
  40 varchars:   md5 of some random value
  20 floats:     rand(1000000)/1000000

I created a CSV file with ~2.1 million rows and then tried to insert the data. My first attempt was:

sql>copy into md_data FROM '/home/mop/performance/hans.csv' delimiters ',','\n' null as '';
MAPI  = monetdb@localhost:50000
ACTION= read_line
QUERY = copy into md_data FROM '/home/mop/performance/hans.csv' delimiters ',','\n' null as '';
ERROR = Connection terminated

This was of course the quick and dirty way. What happened in the background was that MonetDB consumed all memory. My system is Debian Lenny 64-bit; the server has 4GB of RAM and decent CPUs. Why does it need all my memory here?

I then did the same but loaded only part of the file (1,000,000 rows):

sql>copy 1000000 records into md_data FROM '/home/mop/performance/hans.csv' delimiters ',','\n' null as '';

This worked, but the server used ~2-3GB of my RAM during the process, and, more importantly, it kept that amount even after the copy process had finished. Why does it hold on to the RAM when it is no longer needed?

Then I tried to insert 1,000,000 more rows, expecting that MonetDB would reuse the memory it had allocated (and not freed) during the first load. That was not the case. Again MonetDB consumed all my memory, and when loading finished it indicated that it was still doing something (CPU at ~20%). When I tried:

sql>SELECT COUNT(*) FROM md_data;

it hung completely.
The same happened when trying to establish a new connection. I stopped the server and restarted it; everything was fine again. I then inserted the rest of the data (memory usage rose again, but only 100,000 rows were left, so there was no problem this time).

Am I the only one hitting such issues? One could argue that I should import fewer records at once, but even then there seems to be some nasty memory leak in the background. Or is this a configuration issue? Or am I doing something completely wrong?

I am using the .debs from: http://monetdb.cwi.nl/downloads/Debian/

Thanks in advance,

Andreas Streichardt
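P.S. In case it helps with reproducing: the dummy data is generated by a script roughly along these lines. This is only a sketch, not the exact script; the function name, file path, and row count parameter are made up, only the column layout follows the description above.

```python
import csv
import hashlib
import random

def make_dummy_csv(path, rows):
    """Write `rows` lines of the 200-column dummy layout:
    20 smallints, 100 tinyints, 20 ints, 40 varchars, 20 floats."""
    with open(path, 'w', newline='') as f:
        w = csv.writer(f)
        for _ in range(rows):
            row = [random.randint(1, 10) for _ in range(20)]        # 20 smallints: 1-10
            row += [random.randint(0, 1) for _ in range(100)]       # 100 tinyints: 0-1
            row += [random.randint(1, 10000) for _ in range(20)]    # 20 ints: 1-10000
            row += [hashlib.md5(str(random.random()).encode()).hexdigest()
                    for _ in range(40)]                             # 40 varchars: md5 of a random value
            row += [random.randint(0, 1000000) / 1000000.0
                    for _ in range(20)]                             # 20 floats: rand(1000000)/1000000
            w.writerow(row)

if __name__ == '__main__':
    make_dummy_csv('hans.csv', 2100000)  # ~2.1 million rows, as in the tests above
```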