4 Feb 2013, 5:59 p.m.
Hi,

I'm loading data into a table with >20,000 columns and several hundred thousand rows using "COPY INTO". It works, but I can't seem to make it take less than about 30 GB of memory, even when I specify the number of records up front. Is there any way to reduce memory usage during the load, or is this much memory expected? It's a shame, because our queries take very little memory; we have to size our machine simply to handle the load.

It's impressive (and very convenient) that I can have a table with so many columns, but are there any other limitations I might run into? I've been trying to keep the column count in the queries low, since a "select *" seems to crash MonetDB.

Thanks,
Andrew
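
P.S. For reference, the load is roughly of this form (the table name, file path, and record count here are placeholders, not our real ones):

    -- bulk load with the record count given up front
    COPY 500000 RECORDS INTO wide_table
    FROM '/data/wide_table.csv'
    USING DELIMITERS ',', '\n';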