Hello,
I had a look at the code just now... looking for why so much memory
was used (I think mclient was using 100 GB of memory in the end).
I am not familiar with the mapiclient, but perhaps the following
diff is a solution?
Index: src/mapiclient/MapiClient.mx
===================================================================
RCS file: /cvsroot/monetdb/clients/src/mapiclient/MapiClient.mx,v
retrieving revision 1.141
diff -u -r1.141 MapiClient.mx
--- src/mapiclient/MapiClient.mx 19 May 2009 12:02:59 -0000 1.141
+++ src/mapiclient/MapiClient.mx 27 May 2009 22:25:24 -0000
@@ -2048,7 +2048,7 @@
 		fprintf(stderr,"%s\n",mapi_error_str(mid));
 		exit(2);
 	}
-	mapi_cache_limit(mid, -1);
+	/* mapi_cache_limit(mid, -1); */
 	if (dump) {
 		if (mode == SQL) {
 			dump_tables(mid, toConsole, 0);
This seems to work for me (at least mclient's memory consumption now
remains constant), but I can't fully judge the consequences. Could
somebody perhaps say something sensible about it?
Reasoning behind it: this call to mapi_cache_limit sets rowlimit to -1,
which, together with cacheall == 0, makes mapi_extend_cache (in
Mapi.mx) allocate more memory on every call, so the cache grows as
large as the largest table. Without the call "mapi_cache_limit(mid, -1);"
the rowlimit keeps its default of 100 lines, so with this change the
cache gets flushed every 100 lines.
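To make that concrete, here is a small standalone sketch of the pattern
I believe I am seeing (the struct and function names below are my own
invention for illustration, not the real Mapi.mx internals):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: invented names, not the actual Mapi.mx code. */
    struct cache {
        char **rows;     /* cached result rows */
        int count;       /* rows currently held */
        int capacity;    /* allocated slots */
        int rowlimit;    /* -1 = unlimited, otherwise flush threshold */
    };

    static void cache_store(struct cache *c, const char *row)
    {
        if (c->rowlimit > 0 && c->count >= c->rowlimit) {
            /* bounded mode (default rowlimit = 100): discard the
             * cached rows, so memory use stays roughly constant */
            for (int i = 0; i < c->count; i++)
                free(c->rows[i]);
            c->count = 0;
        }
        if (c->count >= c->capacity) {
            /* unbounded mode (rowlimit == -1): the flush above never
             * triggers, so the cache keeps extending until the whole
             * result set is resident, i.e. as large as the largest
             * table */
            int newcap = c->capacity ? 2 * c->capacity : 100;
            char **tmp = realloc(c->rows, newcap * sizeof(*c->rows));
            assert(tmp != NULL);  /* cf. the p.s. on unchecked allocs */
            c->rows = tmp;
            c->capacity = newcap;
        }
        c->rows[c->count++] = strdup(row);
    }

    int main(void)
    {
        struct cache c = { .rowlimit = -1 };  /* grows without bound */
        for (int i = 0; i < 1000000; i++)     /* with .rowlimit = 100 */
            cache_store(&c, "some row text"); /* footprint stays flat */
        return 0;
    }

With rowlimit == -1 the flush branch is dead code, which would match
the steady growth I see during the dump.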
I think I should have filed a bug :)
Wouter
p.s. While investigating this issue I tried to limit the amount of
memory that mclient could get, using "ulimit -v $((256*1024))". This
revealed that there are a number of places in Mapi.mx where a
(m)alloc call goes unchecked. I don't know the MonetDB coding policy
here, but perhaps they should all at least have an accompanying
assert (see the sketch below)? The following one-liner in the clients
package reveals some issues:
$ grep "alloc(" -A2 src/mapilib/Mapi.mx
2009/5/25 Wouter Alink:
> Hello,
>
> Question: is there any reason for mclient to use (large) amounts of
> memory during a dump of an SQL database?
>
> Syntax used: $ mclient -lsql -D -dsomedatabase > dump.sql
>
> I observe >12 GB of resident memory use when dumping a 2 GB (in dump
> text format) database (it steadily grows), using the May2009 stable
> branch (of last week).
>
> Top shows:
> 28371 walink 16 0 12.2g 12g 2944 R 87 4.0 10:48.58 mclient
>
> I haven't investigated it any further, but I was first of all
> wondering whether it actually needs this much memory.
>
> Greetings, Wouter