On Fri, Feb 22, 2008 at 10:52:07AM -0800, mobigital1 wrote:
> Thanks. This means that the expected database chunk size tops out at 500K * (7*8 + 2*20 + 6*10 + 15 + 144*8) bytes, i.e. 500K * 1323, about 0.66 GB.
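[Spelling the arithmetic out, assuming "500K" means 500,000 rows:

    500,000 * (7*8 + 2*20 + 6*10 + 15 + 144*8)
  = 500,000 * (56 + 40 + 60 + 15 + 1152)
  = 500,000 * 1323
  = 661,500,000 bytes, i.e. roughly 0.66 GB]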
> Chunk-size-wise there does not seem to be a problem. However, the virtual memory footprint increases quickly, and I am wondering if we are hitting an OS limit regarding memory-mapped files.
> Is there something I can monitor to see whether this limit is being reached? System load (the left bar in Task Manager) usually reflects cumulative system load including virtual memory utilization, but it did not seem to show substantially more than the current physical memory use.
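[One concrete thing to watch, assuming the server process is mserver5.exe, is the Process\Virtual Bytes counter in Windows Performance Monitor (perfmon): it tracks the process's reserved virtual address space, which includes memory-mapped files, whereas Task Manager's default columns show physical/committed memory. Note also that on a 32-bit system the hard ceiling is the process address space itself, 2 GB per process by default on Windows, regardless of how much physical memory is installed.]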
>> I would check a single batch for consistency, then build a merge table over the smaller batches. In a later stage you can glue the pieces together until we hit another limit.
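[A sketch of the gluing step in plain SQL, with a hypothetical stand-in schema and a UNION ALL view rather than any MonetDB-specific merge-table feature:

    -- each batch is loaded into its own table and checked in isolation
    CREATE TABLE batch_1 (id INT, val DOUBLE);
    CREATE TABLE batch_2 (id INT, val DOUBLE);
    -- once the batches are verified, glue them together behind one name
    CREATE VIEW all_batches AS
        SELECT * FROM batch_1
        UNION ALL
        SELECT * FROM batch_2;
]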
> That's a good idea to try.
> It would also be great to add a batch-size capability to COPY INTO, so that I can load a multi-million-row table without having to slice it myself.
That feature exists ;-) For example:

    COPY 5 OFFSET 5 RECORDS INTO my_test FROM stdin USING DELIMITERS '|','\n';

This starts at row number 5 and reads that row plus the 4 that follow, i.e. 5 records in total.
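[So a multi-million-row file can be loaded in slices without external tools; a sketch, assuming the data sits in a hypothetical server-side file /tmp/data.csv:

    COPY 1000000 OFFSET 1       RECORDS INTO my_test FROM '/tmp/data.csv' USING DELIMITERS '|','\n';
    COPY 1000000 OFFSET 1000001 RECORDS INTO my_test FROM '/tmp/data.csv' USING DELIMITERS '|','\n';
]

Niels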
--
Niels Nes, Centre for Mathematics and Computer Science (CWI)
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
room C0.02, phone ++31 20 592-4098, fax ++31 20 592-4312
url: http://www.cwi.nl/~niels  e-mail: Niels.Nes@cwi.nl