Hi Arjen,
Thanks for this input! What you say is reasonable.
I was not very precise about the situation: what I really meant is that the server is mostly meant to run *several instances of* MonetDB. That makes it a lot harder to use memory efficiently, because the instances are not really designed to play nicely with each other.
We limit the memory that each instance can see via cgroups. This has the advantage that no single instance can starve the others.
We don't just divide the available memory evenly; we overbook a bit. The cgroup setting is only a cap, not preallocated memory, so the sum of the caps is larger than the physically available memory. That is OK because the instances won't all use their full quota at the same time.
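For concreteness, here is a minimal sketch of how such overbooked caps could be applied with cgroup v2. The instance names and the 64G figure are hypothetical (imagine three instances on a 128 GiB machine, so the caps sum to 192 GiB), and writing to memory.max requires appropriate privileges:

    #include <stdio.h>

    /* Write a cap into a cgroup v2 memory.max file. memory.max is a
       ceiling, not a reservation: the kernel only enforces it when the
       group actually tries to grow past it, which is why the caps of
       all groups together may exceed physical memory. */
    static int set_cap(const char *cgroup, const char *cap) {
        char path[256];
        snprintf(path, sizeof path, "/sys/fs/cgroup/%s/memory.max", cgroup);
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        int rc = fprintf(f, "%s\n", cap) < 0 ? -1 : 0;
        if (fclose(f) != 0) rc = -1;
        return rc;
    }

    int main(void) {
        /* Hypothetical: three MonetDB instances, capped at 64 GiB each,
           on a 128 GiB machine -- overbooked by 50%. */
        const char *instances[] = { "monetdb-a", "monetdb-b", "monetdb-c" };
        for (int i = 0; i < 3; i++)
            if (set_cap(instances[i], "64G") != 0)
                return 1;
        return 0;
    }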
Still, it does happen that some intermediates are large, or that several instances hit their peak at the same time, and then swapping kicks in. This is much harder to predict than in a single-instance scenario.
This multi-instance setup, where the overbooking occasionally fails, is where I wonder what effect those kernel settings could have.
About swappiness: would reducing it help? Probably it would only delay the moment at which swapping starts. And once swapping is actually needed, the only options are to swap or to get killed by the OOM killer, I guess?
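For reference, the knob is exposed as /proc/sys/vm/swappiness; a tiny illustrative sketch to read it (nothing MonetDB-specific, just showing what the setting is):

    #include <stdio.h>

    int main(void) {
        /* vm.swappiness biases page reclaim between dropping page cache
           and swapping out anonymous memory (higher = more willing to
           swap). It does not forbid swapping: under real memory pressure
           the kernel swaps anyway, which is why lowering it mostly just
           postpones the problem. */
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        if (!f) { perror("swappiness"); return 1; }
        int v;
        if (fscanf(f, "%d", &v) == 1)
            printf("vm.swappiness = %d\n", v);
        fclose(f);
        return 0;
    }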
About overcommit_memory: I'm still not sure whether MonetDB could benefit from always allowing overcommit (vm.overcommit_memory=1).
Redis, for example, strongly advises that setting, so that the fork() behind its background save does not fail: the child shares the parent's pages copy-on-write, and without full overcommit the kernel may refuse the fork because it has to account for a potential second copy of every private page.
MonetDB also uses CoW in many situations (I'm thinking of shared heaps), so I wonder whether this setting would have any effect.
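To make that concrete, here is a small self-contained sketch (my own illustration, not MonetDB or Redis code) of the pattern: a large private mapping that is mostly untouched, followed by a fork(). With vm.overcommit_memory=2 ("never overcommit") the kernel must reserve commit space for a full second copy of the mapping at fork time, so the mmap or the fork can fail with ENOMEM even though almost no physical memory is in use; with =1 both always succeed, and the default heuristic mode (0) sits somewhere in between:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Reserve 8 GiB of address space (hypothetical size). With a
           private anonymous mapping, physical pages are allocated
           lazily, on first write. */
        size_t sz = (size_t)8 << 30;
        char *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch only 1 MiB, so the resident size stays tiny. */
        memset(buf, 1, 1 << 20);

        /* fork() marks the whole mapping copy-on-write. Nothing is
           copied yet, but a strict-accounting kernel must charge a
           full second copy here and may refuse with ENOMEM. */
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            buf[0] = 2;   /* child writes one page: one page copied */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        puts("large CoW mapping survived fork()");
        return 0;
    }

Whether the same reasoning applies to MonetDB's shared heaps depends on how those mappings are created; I only mean this as the mechanism the Redis advice is about.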