Hi Roberto,

Fine-tuning the optimizer pipeline is on the short list of things to look into in the near future. Perhaps insight into your queries could help us here. There are as many workers as there are cores, plus one extra for each connection, so going after 16x more workers is not needed. In the upcoming release you can also set the maximum number of workers that a user can deploy. The same holds for the memory footprint; it is now a matter of making it part of the user profile in the system catalog.

regards, Martin

On 06/06/2021 18:22, Brian Hood wrote:
The other thing you need to look at is the kernel buffer settings for the network. You can get some major performance benefits there; it takes a bit of tweaking to get the results, but I would set it to 12MB and play with lower or higher amounts from there. It's good fun.
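For reference, a minimal sketch of the kind of tuning this suggests, assuming the usual net.core/net.ipv4 socket buffer limits are what is meant; the 12MB figure and the file name are illustrative starting points, not verified recommendations:

    # /etc/sysctl.d/90-net-buffers.conf  (hypothetical file name)
    # Raise the per-socket buffer ceilings to ~12MB and let TCP auto-tune up to that.
    net.core.rmem_max = 12582912
    net.core.wmem_max = 12582912
    net.ipv4.tcp_rmem = 4096 87380 12582912
    net.ipv4.tcp_wmem = 4096 65536 12582912

    # Apply without rebooting:
    sudo sysctl --system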
On Mon, 1 Mar 2021 at 14:41, Roberto Cornacchia <roberto.cornacchia@gmail.com> wrote:
Thanks for the suggestion, Martin. We actually don't use in-query parallelism in production, for a couple of reasons. First, because so far mitosis has never played well with our large queries (the MAL plans get too large). But this is something I'm re-evaluating, as it might have improved. Second, because, including all sessions from all mserver5 instances, the typical server already deals with about 10-16 concurrent queries. Adding in-query concurrency would multiply this number by a factor of 6-8. That doesn't seem healthy. But you are right that I may need more detailed insights.
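For completeness, a hedged sketch of how in-query parallelism is typically capped; it assumes the gdk_nr_threads server option and the sequential_pipe optimizer pipeline are available in the MonetDB release in use, and the paths and database names are placeholders:

    # Cap the number of GDK worker threads at server start
    # (assumption: gdk_nr_threads is the relevant knob in this release):
    mserver5 --dbpath=/path/to/db --set gdk_nr_threads=4

    # Or, per session, fall back to the non-parallel optimizer pipeline
    # (assumption: sequential_pipe is available here):
    mclient -d mydb -s "SET optimizer = 'sequential_pipe';"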
On Fri, 26 Feb 2021 at 22:00, Martin Kersten <martin.kersten@cwi.nl> wrote:
The current resource manager looks mostly at the RSS level; you could also focus purely on the total temporaries lying around and stop parallel work until this has dropped.
On 26 Feb 2021, at 21:51, Martin Kersten <Martin.Kersten@cwi.nl> wrote:
A better solution is to 1) grab traces to understand where/when the memory pressure occurs, 2) trim the current resource manager code in the kernel, and 3) simulate the query on a model to inject resource-trimming operations.
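A possible sketch of step 1, assuming the TRACE and EXPLAIN SQL prefixes (run through mclient) are the intended way to grab such traces; the database and table names are placeholders:

    # Per-instruction execution trace of a query (timings per MAL instruction),
    # assuming the TRACE prefix is available in this MonetDB release:
    mclient -d mydb -s "TRACE SELECT count(*) FROM mytable;"

    # The generated MAL plan itself, to see how far mitosis splits the work:
    mclient -d mydb -s "EXPLAIN SELECT count(*) FROM mytable;"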
On 26 Feb 2021, at 18:46, Roberto Cornacchia <roberto.cornacchia@gmail.com> wrote:
Hi Arjen,
Thanks for this input! What you say is reasonable. I was not very precise about the situation: what I really meant is that the server is mostly meant for *several instances of* MonetDB. Then it is a lot harder to make them use memory efficiently - they are not really designed to play nicely with each other.

We limit the memory that each instance can see via cgroups. This has the advantage that no instance can make the others starve. We don't just divide the available memory, we overbook a bit: the cgroups setting is just a cap, not preallocated memory, so the sum of the caps is larger than the available memory. That's ok, because they won't all use their full quota at the same time. Still, it does happen that some intermediates are large, or that the instances happen to use more memory at the same time. And then swapping happens. This is much harder to predict compared to a 1-instance scenario.
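To make the per-instance cap concrete, a hedged sketch assuming cgroups v2; the service/cgroup names and the 32G size are purely illustrative:

    # If each instance runs as a systemd service (hypothetical unit name);
    # MemoryMax is a cap, not a reservation:
    sudo systemctl set-property monetdb-instance1.service MemoryMax=32G

    # The raw cgroups-v2 equivalent, if the instances are started by hand:
    sudo mkdir -p /sys/fs/cgroup/monetdb1
    echo 32G | sudo tee /sys/fs/cgroup/monetdb1/memory.max
    echo <mserver5-pid> | sudo tee /sys/fs/cgroup/monetdb1/cgroup.procs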
This multi-server scenario with occasional overbooking failures is where I wonder what the effects of those kernel settings could be.
About swappiness: would reducing swappiness help? Probably it would only delay the moment at which swapping happens. But if swapping becomes needed, then the only options are either to do it or to get killed, I guess?
About overcommit_memory: I'm still not sure whether MonetDB could benefit from always allowing it (vm.overcommit_memory=1). For example, Redis strongly advises that, to support the copy-on-write mechanism of its background backup. MonetDB actually uses CoW in many situations (I'm thinking of shared heaps). I wonder whether this setting would have any effect.
In general, it would be great to have a simple set of recommendations, like this one for example: https://redis.io/topics/admin
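Something along those lines might look like the sketch below; the values are just the ones discussed in this thread (swappiness 10, overcommit always allowed), not an official MonetDB recommendation, and the file name is illustrative:

    # /etc/sysctl.d/90-monetdb.conf  (hypothetical file name)
    # Prefer reclaiming page cache over swapping out anonymous memory:
    vm.swappiness = 10
    # Always allow overcommit, so large mmap/malloc reservations never fail up front;
    # the OOM killer only steps in when memory is actually exhausted:
    vm.overcommit_memory = 1

    # Apply:
    sudo sysctl --system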
On Fri, 26 Feb 2021 at 09:16, Arjen de Rijke <arjen.de.rijke@cwi.nl> wrote:
Hi Roberto,
I don't have specific recommendations, but I can share my experience with the administration of the scilens cluster. As far as I can remember, I never received requests to change anything related to the vm.overcommit settings. I did receive questions about the swappiness. I never changed the default that Fedora uses, but some people wanted to change the value to check some specific use case, including MonetDB, if I remember correctly. But I don't think it generally matters a lot; I mean, I never heard of it making a big difference. So unless some vendor has a specific reason to suggest a specific value, I would keep the default.
We also kept the amount of swap space very low, intentionally. In the situation you describe, where MonetDB is the "only" program running, it can use all of the available memory. If the application needs more memory and starts to swap, performance drops significantly, and the query likely never finishes, because in most cases it is some kind of bug. Having a lot of swap only postpones the inevitable: the system runs out of memory. With a lot of swap, it just takes much longer to crash.
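A quick sketch of checking and deliberately keeping swap small; the 2GB size is illustrative only, not a tested recommendation:

    # Show what swap devices/files are active and how large they are:
    swapon --show
    free -h

    # Example of a deliberately small swap: replace whatever is there with a 2GB file:
    sudo swapoff -a
    sudo fallocate -l 2G /swapfile && sudo chmod 600 /swapfile
    sudo mkswap /swapfile && sudo swapon /swapfile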
Arjen de Rijke
----- Original Message -----
> From: "Roberto Cornacchia" <roberto.cornacchia@gmail.com>
> To: "Communication channel for MonetDB users" <users-list@monetdb.org>
> Sent: Thursday, February 25, 2021 6:31:18 PM
> Subject: Linux kernel vm settings
>
> Hi there,
>
> I looked around but couldn't find any recommendation about kernel vm settings in
> Linux for MonetDB.
>
> In particular:
>
> - vm.overcommit_memory:
>   0 (default): a heuristic decides whether overcommitting is allowed
>   1: no check, overcommit is always allowed
>   2: overcommitting is regulated by vm.overcommit_ratio (default = 50%)
>
> Do I understand correctly that using vm.overcommit_memory=1 will only make the
> OOM killer kill mserver5 when the total VM available is exhausted?
>
> If that is true, should it be reasonably safe to use on a server that is mainly
> intended for MonetDB, as long as sufficient disk space is available?
>
> - vm.swappiness
>   Generic recommendations are usually 60 for a desktop and 30 for a server.
>   Oracle recommends 10.
>   Redis recommends 1.
>   Are there studies / recommendations for MonetDB?
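For reference, the current values of the settings asked about above can be inspected directly; a trivial sketch, nothing MonetDB-specific:

    # Read the live values:
    sysctl vm.overcommit_memory vm.overcommit_ratio vm.swappiness
    # Equivalent via /proc:
    cat /proc/sys/vm/overcommit_memory /proc/sys/vm/swappiness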