[EMAIL PROTECTED] wrote:
[snip code involving copyfile:]
shutil.copyfile(os.path.join(root, f),

The problem with this is that it only copies about 35 GB/hour. I would
like to copy at least 100 GB/hour... more if possible. I have tried to
copy from the IDE CD drive to the SATA array with the same results. I
understand the throughput on SATA to be roughly 60 MB/sec, which comes
out to 3.6 GB/min, or 216 GB/hour. Can someone show me how
I might do this faster? Is shutil the problem?

Have you tried doing this from some kind of batch file, or manually,
and measured the results?  Have you got any way to achieve this
throughput, or is it only a theory?  I see no reason to try to
optimize something if there's no real evidence that it *can* be
optimized.
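
If you want a real number to compare against, something like the
following (an untested sketch; the paths and the 1 MB buffer are just
placeholders) times a single large copy and lets you experiment with
the buffer size.  copyfile reads in fairly small chunks by default,
16 KB if memory serves, so a bigger buffer is one obvious thing to try:

import os
import shutil
import time

def timed_copy(src, dst, length=1024 * 1024):
    # Copy src to dst through copyfileobj with an explicit buffer
    # size, then report the effective throughput.
    start = time.time()
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
        shutil.copyfileobj(fsrc, fdst, length)
    elapsed = time.time() - start
    mb = os.path.getsize(src) / (1024.0 * 1024.0)
    print('%.0f MB in %.1f s = %.1f MB/s' % (mb, elapsed, mb / elapsed))

# Placeholder paths; point these at one of your real files.
timed_copy(r'D:\somedir\bigfile.dat', r'E:\dest\bigfile.dat')

If that can't get anywhere near 60 MB/s either, the bottleneck
probably isn't shutil.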

Also, my first attempt at this did a recursive copy creating subdirs in
dirs as it copied. It would crash every time it went 85 subdirs deep.
This is an NTFS filesystem. Would this limitation be in the filesystem
or Python?

In general, when faced with the question "Is this a limitation of
Python or of this program X of Microsoft origin?", the answer should
be obvious... ;-)

More practically, perhaps: use your script to create one of those
massively nested folders.  Wait for it to crash.  Now go in
"manually" (with CD or your choice of fancy graphical browser)
to the lowest level folder and attempt to create a subfolder
with the same name the Python script was trying to use.  Report
back here on your success, if any. ;-)

(Alternatively, describe your failure in terms other than "crash".
Python code rarely crashes.  It does, sometimes, fail and print
out an exception traceback.  These are printed for a very good
reason: they are more descriptive than the word "crash".)
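
If it helps, here's an untested sketch along those lines.  It
deliberately nests directories until Windows objects, then prints the
real exception (the base path is just a placeholder).  My guess is
you're running into the roughly 260-character MAX_PATH limit rather
than anything in Python, but the traceback will say for sure:

import os
import traceback

# Keep creating nested directories until the OS complains, then show
# the actual exception instead of a "crash".
base = r'C:\temp\nesttest'     # placeholder scratch location
path = base
os.makedirs(base)
try:
    for depth in range(1, 201):
        path = os.path.join(path, 'subdir')
        os.mkdir(path)
except OSError:
    print('mkdir failed at depth %d (path length %d)' % (depth, len(path)))
    traceback.print_exc()
else:
    print('created %d levels with no complaints' % depth)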

-Peter