[Monetdb-developers] Could not create hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=14840896,vm=27527806976]
#BBPTRIM_ENTER: memsize=14840896,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=11538120,vm=27527806976]
#BATpropcheck: BAT tmp_1260(-688): could not allocate hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=11537664,vm=27527806976]
#BBPTRIM_ENTER: memsize=11537664,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=10416208,vm=27527806976]
#BATpropcheck: BAT tmp_1135(-605): could not allocate hash table for key test
MAPI  = monetdb@localhost:50000
QUERY = COPY 773410980 RECORDS INTO node_tags from '/mnt/data2/csv/node_tags.csv' USING DELIMITERS ',', '\n', '''';
ERROR = !MALException: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1252, ext=thash
!ERROR: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1253, ext=thash

What kind of voodoo is this?

I'm trying to import a 42GB CSV file using COPY INTO on a native system with 2GB of memory and 4GB of swap: latest CVS version, this time compiled for Core Duo, without debugging, with optimisation.

Stefan
#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed

This one also doesn't make me happy. The initial tables, which are inserted without a problem, are 19GB and 1GB. The table that fails after them is 7GB.

Now obviously we are talking about a system that could use some more memory. But I don't really see why it inserts 20GB without a problem, and then borks on the insertion of a 7GB, non-string table.

Stefan
Stefan de Konink wrote:
#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed
^^^^^^^^ This means the process is victimized by the Linux kernel.
This one also doesn't make me happy. The initial tables, which are inserted without a problem, are 19GB and 1GB.
The table that fails after them is 7GB.
Now obviously we are talking about a system that could use some more memory. But I don't really see why it inserts 20GB without a problem, and then borks on the insertion of a 7GB, non-string table.
Stefan
On Sun, 23 Nov 2008, Martin Kersten wrote:
Stefan de Konink wrote:
#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed
^^^^^^^^ This means the process is victimized by the Linux kernel.
The last line was the only thing I actually understood ;) When the OOM killer is working and not killing my sshd, I'm very happy with it ;)

Stefan
Stefan de Konink wrote:
On Sun, 23 Nov 2008, Martin Kersten wrote:
Stefan de Konink wrote:
#BBPTRIM_ENTER: memsize=180224,vmsize=14397997056
#BBPTRIM: memtarget=0 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=14397997056
#BBPTRIM_ENTER: memsize=2496368440,vmsize=16874000184
#BBPTRIM: memtarget=778092856 vmtarget=0
#BBPTRIM_EXIT: memsize=20365312,vmsize=10150287928
Killed
^^^^^^^^ This means the process is victimized by the Linux kernel.
The last line was the only thing I actually understood ;) When the OOM killer is
The others are indeed debugging statements. In this case, BBPtrim succeeded in shuffling data around, thereby letting the GDKmalloc succeed. The debugging output has since been moved behind a corresponding flag.
working and not killing my sshd, I'm very happy with it ;)
Thanks, we are working on the remaining issues ;)
Stefan
Dear Stefan,

Let me explain as far as possible.

Stefan de Konink wrote:
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=14840896,vm=27527806976]
#BBPTRIM_ENTER: memsize=14840896,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
The system is trying to malloc space for (in this case) a hash table, but that allocation fails. It then attempts to free up memory by swapping out tables. The fact that it still fails after this sweep indicates that your swap memory has become fragmented, such that there is no consecutive area of 14 GB left.

For large table loads (relative to the swap size) it is advisable to load in multiple steps: first load the data into the tables without (foreign) key checks, and after this step ALTER the tables to respect the (foreign) keys, as in the sketch below. Better solutions are being worked on.
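A minimal sketch of the first step, assuming a hypothetical layout for the node_tags table (the actual column definitions never appear in this thread). The table is created without any key constraints, so the bulk load does not have to build the large hash tables used for key checks; the COPY statement itself is the one from the trace above:

-- Step 1: create the table with no PRIMARY KEY or FOREIGN KEY constraints
-- (column names and types are assumptions, not the real schema).
CREATE TABLE node_tags (
    node_id BIGINT,
    k       VARCHAR(255),
    v       VARCHAR(255)
);

-- Step 2: bulk load; with no key checks, no hash tables are allocated.
COPY 773410980 RECORDS INTO node_tags
FROM '/mnt/data2/csv/node_tags.csv'
USING DELIMITERS ',', '\n', '''';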
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=11538120,vm=27527806976]
#BATpropcheck: BAT tmp_1260(-688): could not allocate hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=11537664,vm=27527806976]
#BBPTRIM_ENTER: memsize=11537664,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=10416208,vm=27527806976]
#BATpropcheck: BAT tmp_1135(-605): could not allocate hash table for key test
MAPI  = monetdb@localhost:50000
QUERY = COPY 773410980 RECORDS INTO node_tags from '/mnt/data2/csv/node_tags.csv' USING DELIMITERS ',', '\n', '''';
ERROR = !MALException: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1252, ext=thash
!ERROR: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1253, ext=thash
What kind of voodoo is this?
I'm trying to import a 42GB CSV file using COPY INTO on a native system with 2GB of memory and 4GB of swap: latest CVS version, this time compiled for Core Duo, without debugging, with optimisation.
Stefan
On Sun, 23 Nov 2008, Martin Kersten wrote:
The system is trying to malloc space for (in this case) a hash table, but that allocation fails. It then attempts to free up memory by swapping out tables. The fact that it still fails after this sweep indicates that your swap memory has become fragmented, such that there is no consecutive area of 14 GB left.
Maybe a very strange idea, but it could be your better solution: if these operations are mmap'ed by default, they would be taken care of by a disk-based memory extension, hence being independent of RAM/swap while having all the pros of both.
For large table loads (relative to the swap size) it is advisable to load in multiple steps: first load the data into the tables without (foreign) key checks, and after this step ALTER the tables to respect the (foreign) keys.
You are again right, for many reasons, to do it in this way. Sadly, my 'data provider' is not so strict on data integrity, I found out again, and this is probably the only efficient way to load the tables and find all non-existing relations.

Stefan
Martin Kersten wrote:
For large table loads (relative to the swap size) it is advisable to load in multiple steps: first load the data into the tables without (foreign) key checks, and after this step ALTER the tables to respect the (foreign) keys.
Sadly, the 49GB table will also not insert without foreign keys.

Stefan
Martin Kersten wrote:
For large table loads (relative to the swap size) it is advisable to load in multiple steps: first load the data into the tables without (foreign) key checks, and after this step ALTER the tables to respect the (foreign) keys.
I don't know if it was the swap space or the removal of *all* constraints, including primary keys, but I have the tables inside MonetDB :) I'm happy :) Now adding constraints :)

Stefan
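For the record, re-adding the constraints after a successful load would look roughly like this; the constraint names, key columns, and referenced table are illustrative assumptions, since the real schema is not shown in the thread:

-- Re-add the primary key first (hypothetical key columns).
ALTER TABLE node_tags
  ADD CONSTRAINT node_tags_pk PRIMARY KEY (node_id, k);

-- Then the foreign key; this is also where the dirty data shows up,
-- as rows pointing at non-existing parents make the ALTER fail
-- (hypothetical parent table "nodes").
ALTER TABLE node_tags
  ADD CONSTRAINT node_tags_node_fk
      FOREIGN KEY (node_id) REFERENCES nodes (node_id);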
On Sun, Nov 23, 2008 at 02:04:09AM +0100, Stefan de Konink wrote:
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=14840896,vm=27527806976]
#BBPTRIM_ENTER: memsize=14840896,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=11538120,vm=27527806976]
#BATpropcheck: BAT tmp_1260(-688): could not allocate hash table for key test
#GDKmalloc(14777222432) fail => BBPtrim(enter) usage[mem=11537664,vm=27527806976]
#BBPTRIM_ENTER: memsize=11537664,vmsize=27527806976
#BBPTRIM: memtarget=4611686018427387904 vmtarget=0
#BBPTRIM_EXIT: memsize=19120128,vmsize=27527806976
#GDKmalloc(14777222432) fail => BBPtrim(ready) usage[mem=10416208,vm=27527806976]
#BATpropcheck: BAT tmp_1135(-605): could not allocate hash table for key test
MAPI  = monetdb@localhost:50000
QUERY = COPY 773410980 RECORDS INTO node_tags from '/mnt/data2/csv/node_tags.csv' USING DELIMITERS ',', '\n', '''';
ERROR = !MALException: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1252, ext=thash
!ERROR: GDKmallocmax: failed for 1892320544 bytes
!ERROR: GDKload: failed name=12/1253, ext=thash
What kind of voodoo is this?
I'm trying to import a 42GB CSV file using COPY INTO on a native system with 2GB of memory and 4GB of swap: latest CVS version, this time compiled for Core Duo, without debugging, with optimisation.
From the trace above, I conclude that MonetDB is unable to allocate a 14GB chunk --- first (obvious) questions to ask and answer:
Is your CoreDuo (or Core2Duo?) a 64-bit CPU?
If so, are you running a 64-bit OS (which?)?
If so, are you using a 64-bit build of MonetDB?
If so, is your MonetDB configured with 64-bit or 32-bit OIDs?
The output of `mserver5 --version` should answer most of these questions ...

Stefan
Stefan
-- | Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl | | CWI, P.O.Box 94079 | http://www.cwi.nl/~manegold/ | | 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 | | The Netherlands | Fax : +31 (20) 592-4312 |
On Sun, 23 Nov 2008, Stefan Manegold wrote:
From the trace above, I conclude that MonetDB is unable to allocate a 14GB chunk --- first (obvious) questions to ask and answer: Is your CoreDuo (or Core2Duo?) a 64-bit CPU?
64-bit kernel/userland. I don't think it is actually either of those; it is one of those Intel low-power machines, an E2180.
If so, are you using a 64-bit build of MonetDB?
Yes.
If so, is your MonetDB configured with 64-bit or 32-bit OIDs?
64-bit.
The output of `mserver5 --version` should answer most of these questions ...
As attached; my copy-paste skills on the console are terrible.

Stefan
participants (3):
- Martin Kersten
- Stefan de Konink
- Stefan Manegold