On Wed, 22 Jul 2009, Charles Samuels wrote:
> This runs on a Linux dual quad-core with 64G of RAM and lots of disk space. Of course, at some point you will notice I/O behavior ;)
> For what kind of operations do you need that amount of memory? If you were to do a select on just a few thousand rows on that same mega-database, would it be happy with more "mundane" amounts of memory?
There are several reasons why you would like more memory on big databases. The most trivial one is avoiding disk I/O: your OS will fill its block cache with data it has already read, which makes subsequent scans faster. Since Monet uses memory mapping, any mapped region that can actually be held in memory (just as your swap space is read into memory when it is used) saves you a great deal of time. Memory mapping can be seen as a way to address more data than you have physical memory for; because it is backed by disk, it clearly benefits when the amount of disk interaction is limited.

Monet has more 'tricks', like storing computed results. Wouldn't it be good to remember costly operations instead of doing them over and over again?

So will it work without this amount of memory? Yes, if the required intermediate results can be written to disk at reasonable speed (otherwise that becomes the bottleneck) and can be read back reasonably fast as well. For some workloads disk I/O is never an issue; with 6 disks of 1 TB each, it is.

Stefan
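P.S. The lazy, disk-backed behaviour of a memory mapping described above can be sketched in a few lines of Python. This is a toy illustration, not Monet code; the scratch file and its size are made up for the example:

```python
import mmap
import os
import tempfile

# Create a hypothetical 1 MiB scratch file to stand in for a database column.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\x00" * (1 << 20))

with open(path, "rb") as f:
    # Map the whole file read-only. No data is copied up front: pages are
    # faulted in from disk only when they are actually touched, so the
    # mapping behaves like "extra" memory that is backed by disk.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touching this slice faults the corresponding pages into the OS block
    # cache; a second read of the same range is served from memory.
    first = mm[:4096]
    mm.close()

os.remove(path)
print(len(first))  # 4096
```

The same mechanism is why limiting disk interaction matters: the first touch of each page costs a disk read, while repeated scans over already-cached pages do not.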