Hey all-

I've been benchmarking my application against MySQL and Postgres. My test application has some 1 million rows of data, and my sample query returns some 260K rows. Here are the stats:

Query speed:
    Postgres 8.1               15 sec
    MonetDB (nightly stable)   33 sec
    MySQL 5.0                 100 sec

Bulk load speed:
    Postgres   12 min
    MonetDB     1 min
    MySQL       2 min

Memory during bulk load:
    Postgres    20 MB
    MonetDB    700 MB!!! (and it doesn't come back down)
    MySQL       20 MB

Memory during query:
    Postgres    30 MB
    MonetDB    180 MB (and it grows by 10-20 MB per query)
    MySQL       30 MB

Niels has some tricks up his sleeve that bring the query speed down (using the nightly current). But I'm concerned about the memory use. It's fine to use memory if you're going to be fast, but if I can't load my data (I have 67+ million rows in all), or have to restart the server after a few queries, that's a problem.

I've submitted bugs with valgrind reports from the load/query runs, but I'm wondering whether other steps can be taken to help with the memory issue. I know that both MySQL and Postgres (and a host of other open source projects) have benefited from Coverity's scanning offer [0], and from Klocwork's [1] as well. Have these been looked into? It might be worth having some automated "eyes" alongside the "many eyes" of the open source community that make all bugs shallow.

Some food for thought.

cheers,
-matt

0 - http://scan.coverity.com/
1 - http://www.klocwork.com/company/releases/06_26_06.asp
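
P.S. If anyone wants to reproduce the memory numbers, below is a minimal sketch of the kind of sampler I mean. It assumes Linux /proc and that you pass it the PID of the database server process (Mserver, postmaster, or mysqld); the script name, the one-second interval, and the output format are just illustrative choices, not part of any of the databases' tooling.

    #!/usr/bin/env python3
    """Sample a process's resident set size once a second and track the peak.

    Usage: watch_rss.py <pid>
    Reads VmRSS from /proc/<pid>/status, so this is Linux-only.
    """
    import sys
    import time

    def rss_kb(pid):
        # /proc/<pid>/status reports VmRSS in kB, e.g. "VmRSS:  123456 kB".
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0  # field missing (e.g. process just exited)

    def main():
        pid = int(sys.argv[1])
        peak = 0
        try:
            while True:
                cur = rss_kb(pid)
                peak = max(peak, cur)
                print("rss=%d kB  peak=%d kB" % (cur, peak))
                time.sleep(1)
        except KeyboardInterrupt:
            print("final peak: %d kB" % peak)

    if __name__ == "__main__":
        main()

Run it against the server PID, kick off the bulk load or the query in another terminal, and Ctrl-C when it finishes; the "doesn't come back down" behavior shows up as rss staying at the peak long after the load completes.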