Hi folks,

In the following scenario I see very slow DB performance after data insertion. By "slow performance" I mean that "mclient -u XXX -d XXX" blocks for roughly 10 minutes before it lets me access the DB.

Scenario: I parse a pcap file (40 MB, 695 connections) in Python with dpkt. For every new connection found in the pcap file I create a new entry in my connection table and two new empty packet tables (one table per direction). The actual packet data is not inserted directly into the packet tables, but written into files so that I can push the data into the packet tables later with COPY BINARY INTO.

Autocommit is off. I commit after I have read the pcap completely and before I start the COPY BINARY INTO. After the COPY BINARY INTO there is another commit. That's it. The number of transactions is 3*695 (2*695 table creations for the packet tables and 695 entries into the connection table).

The script itself terminates in roughly 30 seconds, but the DB is blocked for about 10 minutes. Why? What is going on internally in the DB? Can I speed up the performance?

Cheers,
Alex
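To make the statement sequence concrete, here is a minimal sketch of the SQL my script issues per connection. The table names and columns (connections, pkts_<id>_fwd/rev, ts/payload) are made up for illustration; the real schema differs, and in the actual script the statements are executed over a DB-API cursor rather than just built as strings.

```python
def statements_for_connection(conn_id):
    """Per pcap connection: one row in the connection table plus
    two empty per-direction packet tables (names are illustrative)."""
    return [
        f"INSERT INTO connections (id) VALUES ({conn_id})",
        f"CREATE TABLE pkts_{conn_id}_fwd (ts BIGINT, payload BLOB)",
        f"CREATE TABLE pkts_{conn_id}_rev (ts BIGINT, payload BLOB)",
    ]

def bulk_load_statement(conn_id, direction, path):
    """Issued after the first commit: bulk-load the packet data
    that was written to files during parsing."""
    return f"COPY BINARY INTO pkts_{conn_id}_{direction} FROM '{path}'"

# For 695 connections this yields 3*695 = 2085 statements before
# the first COMMIT, followed by the COPY BINARY INTO statements.
stmts = [s for cid in range(695) for s in statements_for_connection(cid)]
print(len(stmts))  # 2085
```

All of these run in one transaction (autocommit off), with the first commit only after the full pcap has been parsed.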