On Mon, 21 Mar 2005 17:53:34 +0100, Jacek Trzmiel <[EMAIL PROTECTED]> wrote:
> >> > This is a small tool I've written that uses large memory buffers
> >> > with asynchronous I/O to copy files.
>
> Claudio Grondi wrote:
>> Thank you!
>> This (with a drawback of blocking the entire system) does it!
>> ( good day, and thank you for this constructive answer
>> to my question )
>
> :)
>
>> From my point of view this thread has reached
>> its end (I have a solution I can live with), except if
>> someone would like to contribute or point to a
>> better multicopy.exe which does not block the system.
>
> Symptoms (high CPU usage, unresponsive system) look similar to the
> situation when you try to read/write as fast as possible from/to an IDE
> drive running in PIO mode. I think it's either a USB driver problem or an
> inherent design flaw in USB (anyone?).
>
> Anyway, I've added buffersize and sleeptime options to multicopy, so you
> may try to throttle it down. Download it here:
> http://mastermind.com.pl/multicopy/

What if some disks could benefit from running ahead a few buffers while
others are hanging back, slowed by e.g. allocation and seeking activity?
ISTM there could be a benefit to keeping a multibuffer readahead window of
the source stream going. (I didn't look at your code, maybe you do this?
Also, maybe a particular OS might do this for you, so that several open
source streams coming from the same data file would automatically share
system readahead buffers if their reads stayed within a few buffers of
each other.) Do you use multiple open read-only files as source streams?
Or are OS file systems so dumb that they don't notice the shareability of
temporarily memory-resident read-only file data buffers? I guess it would
vary, and you could lose or gain by a single- or multi-file source
strategy, depending ;-)

Regards,
Bengt Richter
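
P.S. For anyone curious what a buffersize/sleeptime throttle boils down to,
here is a minimal Python sketch. I haven't seen multicopy's source; the
function and parameter names below are just illustrative, not its actual
options or API:

    import time

    def throttled_copy(src_path, dst_path, buffersize=4 * 1024 * 1024,
                       sleeptime=0.01):
        # Copy src_path to dst_path in buffersize-sized chunks, sleeping
        # between writes so the copy never hogs the bus/CPU for long stretches.
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                chunk = src.read(buffersize)
                if not chunk:
                    break
                dst.write(chunk)
                time.sleep(sleeptime)  # give the rest of the system a breather

Larger buffersize means fewer, bigger bursts; larger sleeptime trades copy
speed for responsiveness.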
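
And a rough sketch of the readahead-window idea above, assuming plain
threads and bounded queues (again, nothing to do with multicopy's actual
implementation): one reader thread reads the source once and fans each
buffer out to per-destination queues of limited depth, so a fast disk can
run a few buffers ahead while a slow one lags behind, and the reader blocks
only when some destination falls a full window behind.

    import threading
    import queue

    WINDOW = 4                 # how many buffers a destination may lag behind
    BUFSIZE = 4 * 1024 * 1024  # read/write chunk size

    def copy_one_to_many(src_path, dst_paths):
        # One bounded queue per destination acts as its readahead window.
        queues = [queue.Queue(maxsize=WINDOW) for _ in dst_paths]

        def writer(q, path):
            with open(path, "wb") as f:
                while True:
                    chunk = q.get()
                    if chunk is None:      # sentinel: source exhausted
                        break
                    f.write(chunk)

        threads = [threading.Thread(target=writer, args=(q, p))
                   for q, p in zip(queues, dst_paths)]
        for t in threads:
            t.start()

        with open(src_path, "rb") as src:
            while True:
                chunk = src.read(BUFSIZE)
                for q in queues:
                    # put() blocks only if this destination is WINDOW buffers behind
                    q.put(chunk if chunk else None)
                if not chunk:
                    break

        for t in threads:
            t.join()

With a single reader the OS only has to cache each source buffer once;
with several independently opened read-only source streams you'd be
betting on the file system noticing the sharing, which, as noted above,
may or may not pan out.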