On Mon, Feb 2, 2009 at 12:11 PM, John van Schie (DT) wrote:
Lefteris wrote:
It is only natural that if you overload the system with some other process, such as copying 40G with dd, other processes will become slower. It has nothing to do with MonetDB; your favorite web browser will also take 3 minutes to open (the USB interface eats up CPU, and dd eats up the IO channel). Especially if you run a DBMS on a machine, which is already resource consuming, the system will slow down a lot even with small unrelated processes. That is why a dedicated server is usually better.
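For illustration, a background copy of that kind is enough to monopolise the IO channel; the file name and USB mount point below are only placeholders, not the ones from the original report:

bash$ dd if=/data/big-40G.file of=/media/usb/big-40G.file bs=1M   # streams large sequential writes to the USB disk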
I still find it a bit hard to believe that, with the completely fair I/O scheduler of recent kernels, disk IO can slow a query down from 0.5s to > 6 minutes.
You just saw that it can :) Does the 6 minute delay ever appear when you are not copying a 40G file? And you said that ionice works, so there it is :) The I/O scheduler in the kernel does not know anything about the process requesting data, and its algorithm favors writes over reads. See the following link for an overview: http://www.linuxjournal.com/article/6931

I quote from that link:

<================
It gets worse for our friend the read request, however. Because writes are asynchronous, writes tend to stream. That is, it is common for a large writeback of a lot of data to occur. This implies that many individual write requests are submitted to a close area of the hard disk. As an example, consider saving a large file. The application dumps write requests on the system and hard drive as fast as it is scheduled.

Read requests, conversely, usually do not stream. Instead, applications submit read requests in small one-by-one chunks, with each chunk dependent on the last. Consider reading all of the files in a directory. The application opens the first file, issues a read request for a suitable chunk of the file, waits for the returned data, issues a read request for the next chunk, waits and continues likewise until the entire file is read. Then the file is closed, the next file is opened and the process repeats. Each subsequent request has to wait for the previous, which means substantial delays to this application if the requests are to far-off disk blocks. The phenomenon of streaming write requests starving dependent read requests is called writes-starving-reads.
===============>

So "writes-starving-reads" is probably what you are experiencing.

lefteris
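As a rough way of seeing the effect described above, one can check which scheduler the kernel is actually using and watch read latencies climb while the copy runs; sda is an assumed device name, and iostat comes from the sysstat package:

bash$ cat /sys/block/sda/queue/scheduler   # the entry in square brackets is the active scheduler
bash$ iostat -x 2                          # extended per-device stats every 2 seconds; read waits grow while writes stream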
I don't think you will be able to find anything about the IO scheduler of Fedora; it is actually part of the Linux kernel and not configurable. The only suggestion that I can make (though I don't know whether it works for IO), if you *must* copy 40 gigs while you are shredding documents, is to run the copy command with "nice", that is:

bash$ nice cp /path/a /path/b
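For example, with an explicit niceness level (19 is the lowest CPU priority; the paths are placeholders):

bash$ nice -n 19 cp /path/a /path/b
bash$ ps -o pid,ni,comm -C cp   # confirm the copy is running at niceness 19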
lefteris
ionice works in approximately the same way for IO, and it seems to solve the problem.
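Concretely, the copy can be put in the idle IO class (class 3) so it only gets disk time when no other process is asking for it; the paths are placeholders:

bash$ ionice -c3 cp /path/a /path/b
bash$ ionice -p $(pidof cp)   # show the IO class/priority of a copy that is already running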
-- John