On Tue, Sep 20, 2016 at 7:40 AM Stefano Fioravanzo <fioravanzos@gmail.com> wrote:
Hello,

In my database I have a table with ~300 million rows on which I have to periodically perform ~1k aggregation queries.
I have noticed that if I delete even a small number of rows from the table, performance is greatly reduced compared to when no deletes have been made.
Specifically, with no deletes the 1k queries complete in about 2 hours; if I delete a few records, the task takes about 10-12 hours.

Correct. After a delete, the table keeps a list of deleted rows, and any query done after the delete has to check the "dbat" to make sure the rows it is looking at are still valid. This is mentioned in the documentation, e.g. https://www.monetdb.org/Documentation/Manuals/SQLreference/Transactions .
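
For illustration (tbl, id, and val are placeholder names, not anything from your schema):

    DELETE FROM tbl WHERE id = 42;  -- even one deleted row leaves a non-empty dbat
    SELECT SUM(val) FROM tbl;       -- every later scan also has to consult the dbat

A table that has never seen a delete skips that check entirely, which is why a single small delete is enough to slow down the whole batch.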
 
Inserting new rows into the table does not cause the issue. Moreover, if I dump and restore the table after the delete, everything is back to normal and the queries execute fast again.
 
So the issue seems to be related to the fact that MonetDB does not actually remove the records from the BAT file, right? Are you aware of this issue?

It's documented.  I am another user and do not speak for the project.
 
Is there some kind of vacuum operation available? If not, what is the fastest way to dump (or copy) and restore the table? Or is there any other trick that may solve the issue?

There is an undocumented sys.vacuum function which might do what you want.  If not, you can use "CREATE TABLE tbl_new AS SELECT * FROM tbl_old; DROP TABLE tbl_old".  Of course, the DROP deletes a record from sys._tables, which might itself have a performance impact once the dbat for sys._tables gets long enough.
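
Roughly like this (the sys.vacuum signature is from memory since it is undocumented, so verify it against your version; I spell out WITH DATA because some versions do not copy rows by default):

    CALL sys.vacuum('sys', 'tbl_old');  -- assumed args: (schema, table)

    -- or rebuild the table by hand:
    CREATE TABLE tbl_new AS SELECT * FROM tbl_old WITH DATA;
    DROP TABLE tbl_old;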

ALTER TABLE ... SET READ ONLY might also help in some cases.
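
Something like this, assuming you can fence off a delete-free window for the query batch:

    ALTER TABLE tbl SET READ ONLY;
    -- ... run the ~1k aggregation queries ...
    ALTER TABLE tbl SET READ WRITE;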

-s