Re: Failed to load 128GB CSV file using Oct2012
Jennie,
just to be sure: is there sufficient free disk space on your machine, say, twice the expected database size, before starting the load?
How large is your database (and sql_logs) just after the failure while the server is still running? How much disk space is still free at that time?
Stefan
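(For reference, a quick way to collect the numbers Stefan asks about, using standard tools; the dbfarm path below is the one reported later in this thread, so adapt it to your own setup:)

$ df -h /export/scratch2/zhang/dbfarm                  # free space on the file system holding the dbfarm
$ du -sh /export/scratch2/zhang/dbfarm/ssdb            # total size of the database directory
$ du -sh /export/scratch2/zhang/dbfarm/ssdb/sql_logs   # size of the sql_logs (write-ahead log) directory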
Ying Zhang
Hi Stefan,

Before loading, I had 414GB of free disk space, which should be sufficient. (I didn't check in September how large the database was after a successful load; according to the SSDB benchmark definition, it should be about 100GB.) Database sizes just after the failure are as follows:

$ duh
Disk usage of directory /export/scratch2/zhang/dbfarm
0.015625 MB: merovingian.log
179464.648438 MB: ssdb/
$ cd ssdb/
$ duh
Disk usage of directory /export/scratch2/zhang/dbfarm/ssdb
0.000000 MB: 31c67d12-147b-4dd1-bc1b-c6c0e4b3edbf
179464.609375 MB: bat/
0.003906 MB: box/
0.011719 MB: sql_logs/

There is still 239GB free on the disk. mserver5 didn't crash; my connection was just lost.

Thanks!
Jennie

On Dec 05, 2012, at 10:06, Stefan Manegold wrote:
Jennie,
just to be sure: is there sufficient free disk space on your machine, say, twice the expected database size, before starting the load?
How large is your database (and sql_logs) just after the failure while the server is still running? How much disk space is still free at that time?
Stefan
Ying Zhang wrote:
Hi all,
I tried to load a CSV file of ~128GB in the ssdb branch (based on Oct2012), but after quite some time it failed with:
#GDKrealloc(19598973840) fails, try to free up space [memory in use=22695528216,virtual memory in use=138875848472]
#GDKrealloc(19598973840) result [mem=22695528216,vm=138875848472]
Last September, I was able to execute the same query with the same data, but now it takes much longer and ends in a crash. For INS1-ers, the command to start the server is:
/export/scratch2/zhang/monet-install/ssdb/debug/bin/mserver5 --set gdk_dbfarm=/export/scratch2/zhang/dbfarm --dbname=ssdb --set mapi_usock=/export/scratch2/zhang/dbfarm/ssdb/.mapi.sock --set monet_vault_key=/export/scratch2/zhang/dbfarm/ssdb/.vaultkey --set gdk_nr_threads=8 --set max_clients=64 --set sql_optimizer=default_pipe --set mapi_port=60000
The command to start the query:
cd /ufs/zhang/papers/ssdb/monetdb
./run.sh loadimg small output
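The COPY statement issued inside run.sh is not shown here, so purely for reference: a bulk load of this size would typically be a single COPY INTO with an explicit record count, which lets the server pre-allocate its BATs instead of growing them with repeated reallocations. The table name, file path, record count, and delimiters below are hypothetical placeholders, not taken from run.sh:

$ mclient -d ssdb -s "COPY 1000000000 RECORDS INTO images FROM '/path/to/data.csv' USING DELIMITERS ',','\n';"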
Does anyone have an idea? If any more information is needed, please let me know. I'm re-running the query to get a GDB stack trace.
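(For anyone reproducing this, a minimal sketch of grabbing that stack trace from a running mserver5 with GDB; it assumes a single mserver5 process, otherwise substitute the actual PID:)

$ gdb -p $(pgrep mserver5)
(gdb) thread apply all bt
(gdb) detach
(gdb) quit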
Thanks!
Jennie
Hello,

I ran into a similar error with MonetDB 11.13.5-20121116 on Debian 6.0:

2012-12-02 18:40:14 ERROR mito - MAPI = (monetdb) /tmp/.s.monetdb.50000
2012-12-02 18:40:14 ERROR mito - QUERY = COPY 8224089 RECORDS INTO x.y_20120628 FROM '/var/tmp/mito_IuKY9k' USING DELIMITERS '\t','\n';
2012-12-02 18:40:14 ERROR mito - ERROR = !failed to import table
2012-12-02 18:40:14 ERROR mito - !HEAPextend: failed to extend to 32896356 for 62/71/627176tail
2012-12-02 18:40:14 ERROR mito - !TABLETcreate_bats: Failed to create bat of size 8224089
2012-12-02 18:40:15 MSG db[2257]: #GDKmmap(32899072) fails, try to free up space [memory in use=3810103464,virtual memory in use=3623133155496]
2012-12-02 18:40:15 MSG db[2257]: #GDKmmap(32899072) result [mem=3810103464,vm=3623133155496]
2012-12-02 18:40:15 MSG db[2257]: #GDKmmap(32899072) fails, try to free up space [memory in use=3810102384,virtual memory in use=3623133154416]
2012-12-02 18:40:15 MSG db[2257]: #GDKmmap(32899072) result [mem=3810102384,vm=3623133154416]

This happened after running 500 similar "copy into" statements. Any further "copy into" failed with the same error message until MonetDB was restarted. There was plenty of disk space available all the time.

Kind regards,
Christian.
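(Since disk space was not the limit in this case, the GDKmmap failures may rather indicate memory or address-space pressure on the server process. A few generic checks one could run on the affected host while the server is in this state; this is only a sketch, not MonetDB-specific diagnostics, and it assumes a single mserver5 process:)

$ grep -i 'address space' /proc/$(pgrep mserver5)/limits   # address-space limit of the running server
$ cat /proc/sys/vm/overcommit_memory                       # kernel overcommit policy (0, 1 or 2)
$ grep -E 'VmSize|VmRSS' /proc/$(pgrep mserver5)/status    # current virtual and resident size of mserver5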
participants (3)
- Christian Braun
- Stefan Manegold
- Ying Zhang