Hi, all.  I'm not sure if this is a bug report, a feature request or what,
so I'm posting it here first to see what people make of it.  I was copying
over a large number of files using shutil, and I noticed that the final
files were taking up a lot more space than the originals; a bit more
investigation showed that files with a positive nominal filesize which
originally took up 0 blocks were now taking up the full amount.  It seems
that Python does not preserve file holes (sparse regions) when writing; here
is a simple program to illustrate:
  # Write one million zero bytes; on a sparse-capable filesystem one
  # might hope these would become a hole, but they are written out in full.
  data = b'\0' * 1000000
  with open('filehole.test', 'wb') as f:
      f.write(data)
A quick `ls -sl filehole.test' will show that the created file actually
takes up about 980k of disk space, rather than the 0 blocks a sparse file
would occupy (the nominal size is still 1000000 bytes either way).
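For what it's worth, one workaround (a sketch, not anything shutil does for
you) is to seek past the run of zeros instead of writing them, so the
filesystem can leave a hole; the filename here is just the one from the
example above:

```python
import os

# Sketch of a workaround: seek past the zeros rather than writing them.
# On filesystems that support sparse files, the skipped region becomes a
# hole and consumes no blocks; elsewhere it may still be allocated.
with open('filehole.test', 'wb') as f:
    f.seek(1000000 - 1)  # jump to the last byte of the intended size
    f.write(b'\0')       # write a single byte so the file length is 1000000

# The nominal size is the full 1000000 bytes...
print(os.path.getsize('filehole.test'))
# ...but st_blocks (what `ls -s` reports) should be near zero on a
# sparse-capable filesystem.
print(os.stat('filehole.test').st_blocks)
```

Whether the blocks actually stay unallocated depends on the filesystem, so
this is only a way to create holes yourself, not a fix for copying them.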

If anyone can let me know if this is indeed a bug or feature request, how to
get around it, or where to take it next, I'd really appreciate it.

Thanks a lot,
Tom