I am new to Python, so please don't attack me if this question is easy or basic. How can I go about extracting the contents of a zip file to the same directory? And can I extend this question to a zip file that sits on the Internet?
While the ZipFile object can take a file-like object, it looks like it expects to at least be able to seek() on that object. The object you get back from a urlopen() call doesn't offer a seek(), so you can't use it directly.
As such, you'd have to (1) download the remote .zip locally or (2) create your own urlopener (à la urllib) that has a seek() method and uses the HTTP Content-Range header [1] if the server supports it. Method #1 is fairly easy (especially if the file fits entirely within memory) but assumes the file isn't gargantuan, because you're slurping the entire file over the network. Method #2 requires much crazier hacking skills but could produce serious gains if your file is mammoth.
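A minimal, untested sketch of method #1, pulling the remote .zip down to a throwaway tempfile so ZipFile has something it can seek() on (Python 3 spelling; the URL is just a placeholder):

import shutil
import tempfile
import zipfile
from urllib.request import urlopen   # urllib2.urlopen in older Pythons

url = "http://example.com/archive.zip"   # placeholder URL

# Slurp the remote .zip into a local temporary file; ZipFile can then
# seek() on it just as it would on any local file.
with tempfile.TemporaryFile() as tmp:
    with urlopen(url) as resp:
        shutil.copyfileobj(resp, tmp)
    with zipfile.ZipFile(tmp) as zf:
        print(zf.namelist())
        zf.extractall()          # extract into the current directory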
The point being: I need to work with the files inside the zipped file, and there are times when the file is already on my computer, and others where the data file sits on the Internet.
I'd create a wrapper that checks whether the passed "filename" is a URL or a local filename. If it's not a local file, download it to a local tempfile and then proceed with that file the same as with a local file, optionally deleting it when done.
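A rough sketch of such a wrapper (the helper name and the scheme check are mine, so adjust to taste):

import shutil
import tempfile
import zipfile
from urllib.request import urlopen

def open_zip(name):
    """Return a ZipFile whether 'name' is a local path or a URL."""
    if name.startswith(("http://", "https://", "ftp://")):
        # Remote: pull it down to a local tempfile, then treat it exactly
        # like a local file. The tempfile is deleted automatically when
        # it is closed or garbage-collected.
        tmp = tempfile.TemporaryFile()
        with urlopen(name) as resp:
            shutil.copyfileobj(resp, tmp)
        return zipfile.ZipFile(tmp)
    return zipfile.ZipFile(name)

# zf = open_zip("data.zip")                        # local file
# zf = open_zip("http://example.com/data.zip")     # remote file (placeholder URL)
# print(zf.namelist())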
Since ZipFile can take any file-like object that supports seek(), memory permitting, you can even read the web content into a cStringIO object without having to touch your filesystem.
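For the in-memory route, roughly this (io.BytesIO is the Python 3 counterpart of cStringIO.StringIO and supports seek(), which is all ZipFile asks for; the URL is again a placeholder):

import io
import zipfile
from urllib.request import urlopen

url = "http://example.com/archive.zip"   # placeholder URL

# Read the whole response into memory and wrap it in a seekable buffer.
with urlopen(url) as resp:
    buf = io.BytesIO(resp.read())

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())
    zf.extractall()    # or pull individual members with zf.read(name)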
-tkc

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16