Thanks. This means the expected database chunk size is capped at 500K * (7*8 + 2*20 + 6*10 + 15 + 144*8) bytes, i.e. 500K * 1323 ≈ 0.66 GB.
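Just to make the back-of-the-envelope bound explicit, here is the same arithmetic as a small Python sketch; the per-row byte widths are copied from the expression above, and the reading of them as column widths (7 eight-byte columns, 2 twenty-byte strings, etc.) is my interpretation, not something stated in the thread:

    # Rough check of the chunk-size bound quoted above.
    ROWS = 500_000

    # per-row widths taken from the expression in the mail
    bytes_per_row = 7 * 8 + 2 * 20 + 6 * 10 + 15 + 144 * 8   # = 1323 bytes

    chunk_bytes = ROWS * bytes_per_row
    print(f"{bytes_per_row} bytes/row -> {chunk_bytes / 1e9:.3f} GB per 500K-row chunk")
    # 1323 bytes/row -> 0.662 GB per 500K-row chunk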
Chunk-size-wise there does not seem to be a problem. However, the virtual memory footprint quickly increases, and I am wondering whether we are hitting an OS limit on memory-mapped files.
Is there something I can monitor to see whether this limit is being reached? The system load graph (in Task Manager, left bar) usually reflects cumulative system load including virtual memory utilization, but it did not seem to show substantially more than the current physical memory use.
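Not something from the thread, but one way to watch this from outside Task Manager is to poll the server process's virtual memory size directly. A minimal Python sketch using psutil; the process name "mserver5" is an assumption, adjust it to your setup:

    # Sketch: poll the virtual-memory size (VMS) of the database server process
    # so growth of the memory-mapped footprint becomes visible over time.
    import time
    import psutil

    def find_server(name="mserver5"):
        # "mserver5" is assumed; change to whatever the server process is called
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == name:
                return proc
        raise RuntimeError(f"no process named {name} found")

    server = find_server()
    while True:
        mem = server.memory_info()
        print(f"rss={mem.rss / 2**20:.0f} MiB  vms={mem.vms / 2**20:.0f} MiB")
        time.sleep(5)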
I would check a single batch for consistency, then build a merge table over the smaller batches. In a later stage you can glue the pieces together until we hit another limit.
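For illustration only, a rough sketch of that glue step, assuming each batch has already been loaded into its own table (batch_0, batch_1, ...) with an identical schema, and assuming a MonetDB version with merge-table support plus the pymonetdb client; none of these specifics come from the thread:

    # Sketch: stitch per-batch tables together under one merge table.
    import pymonetdb

    NUM_BATCHES = 10  # hypothetical number of loaded batches

    conn = pymonetdb.connect(username="monetdb", password="monetdb",
                             hostname="localhost", database="demo")
    cur = conn.cursor()

    # The merge table must declare the same column layout as the batch tables;
    # the two columns here are placeholders.
    cur.execute("CREATE MERGE TABLE alldata (id BIGINT, payload VARCHAR(20))")

    # Attach every batch table to the merge table.
    for i in range(NUM_BATCHES):
        cur.execute(f"ALTER TABLE alldata ADD TABLE batch_{i}")

    conn.commit()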
That's a good idea to try. It would also be great to add a batch-size capability to COPY TO, so that it can load a multi-million-row table without me having to slice it myself.
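Until something like that exists, the slicing itself can at least be scripted. A minimal Python sketch that splits a large delimited file into fixed-size batch files, so each piece can then be loaded with its own COPY statement; the file name and batch size are placeholders:

    # Sketch: split a big CSV into fixed-size batches for separate loads.
    from itertools import islice

    BATCH_ROWS = 500_000          # rows per batch file (placeholder)
    SOURCE = "bigtable.csv"       # input file (placeholder)

    with open(SOURCE, "r") as src:
        batch_no = 0
        while True:
            rows = list(islice(src, BATCH_ROWS))
            if not rows:
                break
            with open(f"{SOURCE}.part{batch_no}", "w") as out:
                out.writelines(rows)
            batch_no += 1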