how to take output from DELETE and feed it to INSERT?
Dear experts,

Does anyone know how to take the output of a DELETE and feed it to an INSERT INTO another table?

My pipeline has reached a point where 37 seconds of its 48 seconds of total run time are spent on the following two queries:

INSERT INTO old_to_old
SELECT uniqueid, targetid, distance_arcsec
FROM associatedsource
WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many)
ORDER BY uniqueid, targetid;
42650 affected rows (20.0s)

DELETE FROM associatedsource
WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many);
42650 affected rows (16.6s)

Here associatedsource is a large table with 1219234829 rows (its size at the end of the second day), while one_to_many is quite small, only 30 rows. associatedsource accumulates processed data day by day; at the end of the first day it had 603553616 rows.

As associatedsource grows, these two queries have severely degraded my pipeline: on the first day they took 9.1s and 7.8s. We can see the MonetDB server engine access the same data twice, and for a large table and the limited time allocated to one run of the pipeline this is very expensive. If we could combine the two queries into one, the logic about which rows to move would be neater and the performance should roughly double.

Thank you and best wishes,
Meng
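One way to tighten this up without schema changes is to make the move a single atomic unit and materialize the key list once, inside one transaction. This is only a rough, untested sketch (the moved_ids temporary table and the BIGINT type for uniqueid are assumptions, the ORDER BY is dropped on the assumption that the stored table does not need it, and associatedsource is still touched for both the copy and the delete, so it mainly buys atomicity and neater logic rather than halving the work):

START TRANSACTION;

-- materialize the 30-odd keys once
CREATE LOCAL TEMPORARY TABLE moved_ids (uniqueid BIGINT) ON COMMIT PRESERVE ROWS;
INSERT INTO moved_ids SELECT old_uniqueid FROM one_to_many;

-- copy the matching rows to the archive table
INSERT INTO old_to_old
SELECT uniqueid, targetid, distance_arcsec
FROM associatedsource
WHERE uniqueid IN (SELECT uniqueid FROM moved_ids);

-- then remove them from the source table
DELETE FROM associatedsource
WHERE uniqueid IN (SELECT uniqueid FROM moved_ids);

COMMIT;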
Hi all,
I think a TRIGGER might be exactly what I need for combining the DELETE and INSERT:
CREATE TRIGGER Trigger_1
BEFORE DELETE ON associatedsource REFERENCING OLD ROW AS old_row_name
FOR EACH ROW
INSERT INTO another_table SELECT * FROM old_row_name;
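If MonetDB accepts this trigger and it fires as intended (worth checking against the release in use, since trigger support details vary), the pipeline step would collapse to a single statement and the copy into the other table would happen implicitly:

-- with the trigger in place, only the delete is issued explicitly
DELETE FROM associatedsource
WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many);

The trade-off is that the copy-then-delete logic moves out of the pipeline script and into the schema, so the trigger's target table and column list have to be kept in line with associatedsource.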
Best,
Meng
Why do you need an ORDER BY in the INSERT statement? It only applies to the result of that query, not to the complete old_to_old table (unless you delete that table in each step). Alternatively, mark the state of an object in a separate column, rather than moving it to another table.

Martin
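A rough sketch of that state-column alternative (the column name moved and its BOOLEAN type are placeholders, not part of the original schema):

-- one-off schema change
ALTER TABLE associatedsource ADD COLUMN moved BOOLEAN DEFAULT false;

-- instead of copying and deleting, flag the rows in place
UPDATE associatedsource
SET moved = true
WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many);

-- downstream queries filter on the flag
SELECT uniqueid, targetid, distance_arcsec
FROM associatedsource
WHERE moved = false;

This replaces the two passes over the large table (copy plus delete) with a single UPDATE, at the cost of the flagged rows staying in associatedsource; whether the UPDATE is cheap enough on a table this size would need measuring.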
participants (2)
- integrity
- Martin Kersten