Re: [Monetdb-developers] Could not create hash table for key test
Hi Stefan,

Like Martin indicated, MonetDB runs out of memory here when trying to obtain a contiguous area of 14GB. The error messages show that MonetDB is pulling out all the stops to get this memory from the system: a so-called "trim" buffer-manager thread tries to unload all other unpinned tables. To no avail here. You only see the activity of the trim thread when a GDKmalloc has already failed and the system tries again after unloading what it can.

The fact that you see the GDKload error message, even though it tries to GDKmalloc, indicates that it actually tries to get the 14GB by creating a file and memory-mapping it.

The cause of failure can thus be:
(1) your file system is full (test just after the error message appears, because a MonetDB restart first clears out all temp files it created on the last run)
(2) you try to process such large loads on a 32-bit machine (naughty naughty)

For these large sizes, and the way MonetDB manages memory, you need a 64-bit OS.

Peter
Message: 4
Date: Sun, 23 Nov 2008 02:04:09 +0100
From: Stefan de Konink
Subject: [Monetdb-developers] Could not create hash table for key test
To: monetdb-dev
Message-ID: <4928AC09.7010205@konink.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=14840896,vm=27527806976]
#BBPTRIM_ENTER: memsize=14840896,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=11538120,vm=27527806976]
#BATpropcheck: BAT tmp_1260(-688): could not allocate hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=11537664,vm=27527806976]
#BBPTRIM_ENTER: memsize=11537664,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=10416208,vm=27527806976]
#BATpropcheck: BAT tmp_1135(-605): could not allocate hash table for key test
MAPI  = monetdb@localhost:50000
QUERY = COPY 773410980 RECORDS INTO node_tags from '/mnt/data2/csv/node_tags.csv' USING DELIMITERS ',', '\n', '''';
ERROR = !MALException: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1252, ext=thash
!ERROR: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1253, ext=thash
What kind of voodoo is this?
I'm trying to import a 42GB CSV file using COPY INTO, on a native system with 2GB of memory and 4GB of swap. Latest CVS version, this time compiled for Core Duo, without debugging, with optimisation.
Stefan
------------------------------
Message: 5
Date: Sun, 23 Nov 2008 03:02:18 +0100
From: Stefan de Konink
Subject: Re: [Monetdb-developers] Could not create hash table for key test
To: monetdb-dev
Message-ID: <4928B9AA.5060401@konink.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed
This one also doesn't make me happy. The initial tables that are inserted without a problem are 19GB and 1GB.
The table that fails after them is 7GB.
Now obviously we are talking about a system that could use some more memory. But I don't really see why it inserts 20GB without a problem, and then borks on inserting 7GB, a non-string table at that.
Stefan
------------------------------
Message: 6
Date: Sun, 23 Nov 2008 08:32:00 +0100
From: Martin Kersten
Subject: Re: [Monetdb-developers] Could not create hash table for key test
To: Stefan de Konink, monetdb-dev
Message-ID: <492906F0.5060209@cwi.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Dear Stefan,

Let's explain as far as possible.
Stefan de Konink wrote:
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=14840896,vm=27527806976]
#BBPTRIM_ENTER: memsize=14840896,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
The system is trying to malloc space for (in this case) a hash table, but that instruction fails. Then it attempts to free up memory by swapping out tables. The fact that it still fails after this sweep indicates that your swap memory has become fragmented, such that there is no contiguous area of 14 GB left.
For large table loads (relative to the swap size) it is advisable to load in multiple steps. First load the data into the tables without (foreign) key checks. After this step ALTER the tables to respect the (foreign) keys.
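A minimal sketch of that staged approach in SQL, using the node_tags table and CSV path from the failing query above; the column names, types, and the referenced nodes table are hypothetical, since they do not appear in this thread:

-- Step 1: create the table without any key constraints, so the bulk load
-- does not need the large hash tables that the key checks allocate.
-- (node_id, k and v are assumed columns, purely for illustration.)
CREATE TABLE node_tags (
    node_id BIGINT,
    k       VARCHAR(255),
    v       VARCHAR(255)
);

-- Step 2: bulk load; the RECORDS count hint lets MonetDB pre-allocate storage.
COPY 773410980 RECORDS INTO node_tags
FROM '/mnt/data2/csv/node_tags.csv'
USING DELIMITERS ',', '\n', '''';

-- Step 3: only now add the keys, so the checks run once over the loaded data
-- instead of during the load itself.
ALTER TABLE node_tags ADD CONSTRAINT node_tags_pk PRIMARY KEY (node_id, k);
ALTER TABLE node_tags ADD CONSTRAINT node_tags_fk FOREIGN KEY (node_id)
    REFERENCES nodes (id);

Whether the ALTER steps themselves fit comfortably in memory still depends on the machine, but this keeps the key checks out of the load phase, as described above.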
Better solutions are being worked on.
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=11538120,vm=27527806976]
#BATpropcheck: BAT tmp_1260(-688): could not allocate hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=11537664,vm=27527806976]
#BBPTRIM_ENTER: memsize=11537664,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=10416208,vm=27527806976]
#BATpropcheck: BAT tmp_1135(-605): could not allocate hash table for key test
MAPI  = monetdb@localhost:50000
QUERY = COPY 773410980 RECORDS INTO node_tags from '/mnt/data2/csv/node_tags.csv' USING DELIMITERS ',', '\n', '''';
ERROR = !MALException: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1252, ext=thash
!ERROR: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1253, ext=thash
What kind of voodoo is this?
I'm trying to import a 42GB CSV file using COPY INTO, on a native system with 2GB of memory and 4GB of swap. Latest CVS version, this time compiled for Core Duo, without debugging, with optimisation.
Stefan
------------------------------
Message: 7
Date: Sun, 23 Nov 2008 09:32:30 +0100
From: Martin Kersten
Subject: Re: [Monetdb-developers] Could not create hash table for key test
To: Stefan de Konink, monetdb-dev
Message-ID: <4929151E.2040909@cwi.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Stefan de Konink wrote:
#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed
^^^^^^^^ This means the process was victimized by the Linux kernel: its out-of-memory killer terminated it.
This one also doesn't make me happy. The initial tables that are inserted without a problem are 19GB and 1GB.
The table that fails after them is 7GB.
Now obviously we are talking about a system that could use some more memory. But I don't really see why it inserts 20GB without a problem, and then borks on inserting 7GB, a non-string table at that.
Stefan
------------------------------
End of Monetdb-developers Digest, Vol 30, Issue 9
*************************************************
Hi Peter,

Peter Boncz wrote:
Like Martin indicated, MonetDB runs out of memory here, when trying to obtain a contiguous area of 14GB.
But is a contiguous area of 14GB required for this? That is a lot of memory!
The error messages show that MonetDB is pulling out all the stops to get this memory from the system. That is, a so-called "trim" buffer-manager thread tries to unload all other unpinned tables. To no avail here. You only see the activity of the trim thread when a GDKmalloc has already failed and the system tries again after unloading what it can.
I'm now trying to import the relatively 'smaller' integer-only tables without references, but it also ends up in:

#BBPTRIM_ENTER: memsize=180224,vmsize=4443275264
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=14925824,vmsize=4443275264
#BBPTRIM_ENTER: memsize=180224,vmsize=4443275264
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=14925824,vmsize=4443275264
#GDKmalloc(1374238248) fail => BBPtrim(enter) usage[mem=180224,vm=4443275264]
#BBPTRIM_ENTER: memsize=180224,vmsize=4443275264
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=14925824,vmsize=4443275264
#GDKmalloc(1374238248) fail => BBPtrim(ready) usage[mem=180224,vm=4443275264]
!ERROR: GDKmallocmax: failed for 1374238248 bytes
!ERROR: GDKload: failed name=31/3141, ext=tail
The fact that you see the GDKload error message, even though it tries to GDKmalloc, indicates that it actually tries to get the 14GB by creating a file and memory-mapping it.
The cause of failure can thus be: (1) your file system is full (test just after the error message appears, because a MonetDB restart first clears out all temp files it created on the last run)
MonetDB runs on a file system with 95GB free, resulting in 16GB used by MonetDB5 after having inserted 19GB worth of textual content.
(2) you try to process such large loads on a 32-bit machine (naughty naughty)
I'm not naughty, I only have the sick mind to try to run this amount of data on only 2GB of RAM.

Linux srv1 2.6.27-gentoo-r4 #2 SMP Sat Nov 22 18:34:42 CET 2008 x86_64 Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz GenuineIntel GNU/Linux

Stefan
Participants (2):
- Peter Boncz
- Stefan de Konink