On 20.4.2011 17:56, Ian Clelland wrote:
> Well, an InMemoryUploadedFile isn't a real file, so I'm not surprised
> that that doesn't work. You'll have to pull the data out of it, and
> compress that.
>
> Try something like this:
> def handle_uploaded_subtitles(self, files):
>     for uploaded_file in files:
>         sub_file = SubtitleFile(file_name=uploaded_file.name, etc)
>         data = bz2.compress(uploaded_file.read())
>         # Here I'm assuming that SubtitleFile.file is a real file object
>         sub_file.file.write(data)
>         sub_file.file.close()
No, sub_file.file is a FileField attribute. It expects a file-like
object, such as InMemoryUploadedFile, which has a chunks() method. So
your proposed solution doesn't work either; I need to call
sub_file.file.save() to get the uploaded file stored in the proper place.

I did spend quite some time getting this to work, but I finally have a
solution that, even though it may not be perfect, at least works. So for
whoever comes across this issue, here's the code that works:
def handle_uploaded_files(self, files):
    import bz2
    import StringIO
    from django.core.files.base import ContentFile

    for fobj in files:
        # compress the data (fresh compressor and buffer for each file,
        # otherwise later files get the earlier files' data prepended)
        bz2comp = bz2.BZ2Compressor()
        result = StringIO.StringIO()
        for chunk in fobj.chunks():
            result.write(bz2comp.compress(chunk))
        result.write(bz2comp.flush())
        result.seek(0)
        # create a new MyModel object which has a FileField attribute
        my_file = MyModel(file_name=fobj.name, etc)
        my_file.file_field.save(fobj.name, ContentFile(result.read()))
        my_file.save()
> If your files are large, then you can read them in lines, or in chunks,
> and use a BZ2Compressor object to compress them one-at-a-time.
Indeed, it seems to work. Thanks for the ideas :)
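For anyone who wants to see the incremental part on its own, here's a
minimal standalone sketch of feeding chunks through a BZ2Compressor,
stripped of the Django parts (the chunk data is made up for
illustration):

```python
import bz2

def compress_chunks(chunks):
    """Compress an iterable of byte chunks without holding the whole
    uncompressed file in memory at once."""
    comp = bz2.BZ2Compressor()
    parts = [comp.compress(chunk) for chunk in chunks]
    parts.append(comp.flush())  # emit whatever the compressor buffered
    return b"".join(parts)

# Round-trip check: compressing chunk-by-chunk yields a valid bz2 stream.
chunks = [b"subtitle line one\n", b"subtitle line two\n"]
assert bz2.decompress(compress_chunks(chunks)) == b"".join(chunks)
```

Note that compress() may return empty bytes for small chunks; the data
only comes out once the compressor's internal buffer fills, or at
flush(), which is why the flush() call must not be forgotten.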
--
You received this message because you are subscribed to the Google Groups "Django
users" group.
To post to this group, send email to django-users@googlegroups.com.
To unsubscribe from this group, send email to
django-users+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/django-users?hl=en.