McKennirey.Matthew wrote:
Is there any strategy that would allow MonetDB to operate in a situation where the size of the columns to be read significantly exceeds the available memory on the machine?
It is part of ongoing research work on partitioned and distributed versions. The primitives for building a distributed version are already in the kernel, e.g. remote execution of MAL blocks, but this is all experimental. The current state of affairs is that the SQL compiler can also handle partitioned tables, i.e. smaller BATs, which both improves memory handling on a single processor and enables parallel and distributed execution. We need, however, more confidence in the stability of the often novel approaches taken here.

On Linux systems, the server includes a thread that keeps an eye on memory consumption and evicts pages when necessary. The performance you see in out-of-memory situations is often what you can expect from any system in that case. See the TPC-H performance figures at http://monetdb.cwi.nl/projects/monetdb//SQL/Benchmark/TPCH/ An updated version of that table is already available and will be brought online shortly; the message remains the same.
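To make the partitioning idea concrete: the point of splitting a table into smaller BATs is that each partition can be scanned on its own, so peak memory is bounded by one partition rather than the whole column. A minimal conceptual sketch in Python (this is only an illustration of the access pattern, not MonetDB kernel code; the `chunked_sum` function and its parameters are invented for this example):

```python
import array

def chunked_sum(path, chunk_items=1_000_000):
    """Sum a large on-disk column of 32-bit ints one chunk at a
    time. Peak memory is bounded by one chunk, not the column.
    Conceptual sketch only -- MonetDB's partitioned BATs are
    managed inside the kernel, not at this level."""
    total = 0
    with open(path, "rb") as f:
        while True:
            a = array.array("i")          # 32-bit signed ints
            data = f.read(chunk_items * a.itemsize)
            if not data:                  # end of column reached
                break
            a.frombytes(data)
            total += sum(a)               # aggregate this partition
    return total
```

The same pattern generalizes to parallel and distributed execution: each partition is an independent unit of work that can be shipped to another thread or node and aggregated afterwards.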
On Wednesday 30 April 2008 01:21:56 Martin Kersten wrote:
Indeed, there is no way to limit the memory use. All columns are memory-mapped files, so the available virtual address space gives an upper bound on their size. This is particularly problematic on 32-bit machines with very large columns.
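A small sketch of why the address space is the limiting factor: mapping a file reserves virtual addresses for its entire length up front, so on a 32-bit machine the roughly 2-3 GB of usable user address space caps how much column data can be mapped at once. (The `map_column` helper below is hypothetical, for illustration only.)

```python
import mmap

def map_column(path):
    """Map an on-disk column read-only. length=0 maps the whole
    file, so a sufficiently large column would fail here on a
    32-bit machine once the virtual address space is exhausted."""
    with open(path, "rb") as f:
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
```

Because the mapping covers the whole file regardless of how much of it is actually touched, several multi-gigabyte columns cannot coexist in a 32-bit process even when physical memory would otherwise suffice.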
_______________________________________________
MonetDB-users mailing list
MonetDB-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/monetdb-users