I would check a single batch for consistency first, then build a merge table over the smaller batches. In a later stage you can glue the pieces together until you hit the next limit.
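A minimal sketch of that idea, assuming a MonetDB release that supports CREATE MERGE TABLE and ALTER TABLE ... ADD TABLE (the table names, columns, file path, and delimiters below are made up for illustration):

    -- load and sanity-check one batch on its own
    CREATE TABLE batch_1 (id INT, val REAL);
    COPY INTO batch_1 FROM '/data/batch_1.csv' USING DELIMITERS ',', '\n';
    SELECT COUNT(*) FROM batch_1;

    -- then attach the verified batches to a merge table
    CREATE MERGE TABLE all_batches (id INT, val REAL);
    ALTER TABLE all_batches ADD TABLE batch_1;
    -- repeat for batch_2, batch_3, ...; queries against all_batches
    -- then see the union of all attached batches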
That's a good idea to try.
It would also be great to add a batch-size capability to COPY TO, so that it can load a multi-million-row table without my having to slice the input file myself.
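As a rough illustration of what slicing by hand can look like, COPY INTO takes an optional record count, and some releases also take an OFFSET; whether either form is available depends on the server version, and the table name, file path, and delimiters here are only placeholders:

    -- load only the first 250000 records of the file
    COPY 250000 RECORDS INTO big_table FROM '/data/big.csv'
        USING DELIMITERS ',', '\n', '"';

    -- where OFFSET is supported, load the next slice (OFFSET counts records from 1)
    COPY 250000 OFFSET 250001 RECORDS INTO big_table FROM '/data/big.csv'
        USING DELIMITERS ',', '\n', '"';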
I've been having the same issue in my other environment: 32-bit Windows 2003 with 8 GB of RAM, trying to load a table with the following columns:

    33 columns of type "int"
    27 columns of type "real"
    1 column of type "numeric(10,0)"
    1 column of type "varchar(6)"
    19 columns of type "varchar(1)"
    5 columns of type "varchar(3)"
    2 columns of type "varchar(12)"
    6 columns of type "varchar(2)"
    1 column of type "varchar(18)"
    5 columns of type "varchar(5)"
    1 column of type "varchar(15)"
    2 columns of type "varchar(30)"

At around 9M+ rows the server dies and some data gets corrupted. I tried the method above and loaded 250K rows at a time into a staging table: in each batch I did a COPY TO into the staging table (250K rows at a time), then executed an insert from the staging table into the final table, followed by a delete from the staging table. Whether I went through the staging table or loaded directly, the problem showed the same symptoms. A few times the mserver5.exe process did not crash, but it did not let go of the COPY TO source data file; I knew that because I could not delete the file to load the next batch of data. The database had to be discarded and created from scratch again.
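For reference, the per-batch staging cycle described above would look roughly like this in SQL (a sketch only; the staging and final_table names, row count, and file path are made up, and the delimiters will differ per file):

    -- load one 250K-row slice into the staging table
    -- (same schema as the final table)
    COPY 250000 RECORDS INTO staging FROM '/data/batch_01.csv'
        USING DELIMITERS ',', '\n', '"';

    -- move the slice to the final table and empty the staging
    -- table before the next slice is loaded
    INSERT INTO final_table SELECT * FROM staging;
    DELETE FROM staging;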