On 2013-07-16 18:23, Alistair Sutherland wrote:
Hi,
I've recently been conducting some performance testing of MonetDB on a variety of servers (from 4 cores / 8 GB RAM up to 20 cores / 64 GB RAM) and with various data sizes, in an attempt to gain a better understanding of how MonetDB scales.
During the performance tests it became obvious that much of the processing was I/O bound due to:
1) Columns being unmapped from memory overly aggressively (even when there was plenty of memory still available).
2) The constant mapping/unmapping of memory-mapped BAT files for intermediate results.
I've attached a patch which attempts to address both issues. The first change (to gdk_utils.c) raises the memory limit at which GDKvmtrim kicks in to 80% memory usage. The second change (to gdk_heap.c) limits the number of mmap/munmap calls via the existing 'heap caching' mechanism, which was not working at all! In addition to fixing up the caching code, I've also wired the heap cache into the case where extending a malloced heap results in a switch over to memory-mapped storage.
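To make the first change concrete, the new trigger amounts to a check along the following lines (a rough sketch only, not the patch itself; the function and parameter names here are hypothetical rather than actual GDK identifiers):

#include <stddef.h>

/* Hypothetical sketch of an 80%-of-RAM trigger for the vmtrim thread:
 * only start unmapping columns once memory usage passes 80% of the
 * physical memory available to the process. */
static int
vmtrim_should_run(size_t mem_in_use, size_t mem_total)
{
    return mem_in_use > mem_total / 10 * 8;   /* 80% threshold */
}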
After applying the patch I was seeing approximately a 40% performance improvement (your mileage may vary!). If the changes are deemed to be useful, how do I go about getting them accepted into the MonetDB source repository?
Thanks, Alistair Sutherland
Probably a better place to have a discussion about this is the bug tracker (bugs.monetdb.org). Can you submit this (with patch) there, please?

Two notes I want to make right away.

The heapcache isn't without problems. One serious issue is that the heaps that end up there were used for temporary BATs whose contents we are no longer interested in. However, the kernel doesn't know we're not interested in the contents, so it will write the data to disk. Maybe not immediately, but eventually. This is unnecessary I/O.

The other issue I want to raise is that I have made a change to the Feb2013 branch after the latest release (SP3) which addresses some mmap inefficiencies. On Linux there is an mremap system call which can be used to grow (and possibly move) a memory map. If that system call is available, we now use it. On systems where it is not available, we now try to mmap extra space directly after the existing mapping, and if that fails, we munmap the whole area and mmap it larger somewhere else. The upshot of this is that we no longer write the data to disk when we need to grow a memory map. I am curious how this change performs in your tests.

-- Sjoerd Mullender
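For reference, the mremap-based growth strategy Sjoerd describes corresponds roughly to the sketch below. This is not the actual gdk_heap.c code; the function name, signature, and error handling are simplified assumptions for illustration, and the sizes are assumed to be page-aligned.

#define _GNU_SOURCE            /* for mremap on Linux */
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Grow a file-backed mapping from old_size to new_size without first
 * writing the data back to disk.  Returns the (possibly moved) address,
 * or MAP_FAILED on error. */
static void *
grow_mapping(void *old, size_t old_size, size_t new_size, int fd)
{
    if (ftruncate(fd, (off_t) new_size) < 0)   /* extend the backing file */
        return MAP_FAILED;

#ifdef MREMAP_MAYMOVE
    /* Linux: let the kernel grow (and possibly move) the mapping. */
    void *p = mremap(old, old_size, new_size, MREMAP_MAYMOVE);
    if (p != MAP_FAILED)
        return p;
#endif

    /* Otherwise, try to map the extra space directly after the
     * existing mapping (hint address, no MAP_FIXED). */
    void *tail = mmap((char *) old + old_size, new_size - old_size,
                      PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, (off_t) old_size);
    if (tail == (char *) old + old_size)
        return old;                            /* extended in place */
    if (tail != MAP_FAILED)
        munmap(tail, new_size - old_size);     /* landed elsewhere: undo */

    /* Last resort: unmap the whole area and map it larger elsewhere. */
    munmap(old, old_size);
    return mmap(NULL, new_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}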
Raised bug (with additional information): http://bugs.monetdb.org/show_bug.cgi?id=3323
participants (2)
- Alistair Sutherland
- Sjoerd Mullender