[ python-Bugs-1586513 ] codecs.open problem with "with" statement
Bugs item #1586513, was opened at 2006-10-28 21:56
Message generated for change (Settings changed) made by shauncutts
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586513&group_id=5470

Category: Python Library   >Group: Python 2.5   Status: Open   Resolution: None   Priority: 5   Private: No
Submitted By: Shaun Cutts (shauncutts)   Assigned to: Nobody/Anonymous (nobody)
Summary: codecs.open problem with "with" statement

Initial Comment:
codecs.open does not seem to properly support the "with" protocol: using
codecs.open with "with" and without "with" will give different results in a
simple test.

    from __future__ import with_statement
    import codecs

    fn = 'test.txt'
    f = open(fn, 'w')
    f.write('\xc5')
    f.close()

    f = codecs.open(fn, 'r', 'L1')
    print repr(f.read())
    f.close()

    with codecs.open(fn, 'r', 'L1') as f:
        print repr(f.read())

Output:

    u'\xc5'
    '\xc5'

Note: the second string is not unicode.
[ python-Bugs-1586513 ] codecs.open problem with "with" statement
Bugs item #1586513, was opened at 2006-10-29 02:56
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586513&group_id=5470

Category: Python Library   Group: Python 2.5   >Status: Closed   >Resolution: Fixed   Priority: 5   Private: No
Submitted By: Shaun Cutts (shauncutts)   >Assigned to: Georg Brandl (gbrandl)
Summary: codecs.open problem with "with" statement

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 08:39
Logged In: YES, user_id=849994

Thanks for the report, this is now fixed in rev. 52517, 52518 (2.5).
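The revisions themselves are not quoted in the tracker message. The following
is only a minimal sketch, assuming the fix gives codecs.StreamReaderWriter
explicit context-manager methods so that __enter__ is no longer forwarded to
the wrapped byte stream (simplified stand-in class, not the actual rev. 52517
diff):

    class StreamReaderWriter(object):
        # Simplified stand-in for codecs.StreamReaderWriter.

        def __init__(self, stream, Reader, Writer, errors='strict'):
            self.stream = stream
            self.reader = Reader(stream, errors)
            self.writer = Writer(stream, errors)

        def __getattr__(self, name, getattr=getattr):
            # Inherit all other methods from the underlying stream.
            return getattr(self.stream, name)

        def __enter__(self):
            # Without this method, __getattr__ forwards __enter__ to
            # self.stream, so "with codecs.open(...)" binds the raw file
            # object and read() returns bytes instead of unicode.
            return self

        def __exit__(self, type, value, tb):
            self.stream.close()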
[ python-Bugs-1586448 ] compiler module doesn't emit LIST_APPEND w/ list comprehension
Bugs item #1586448, was opened at 2006-10-28 22:58
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586448&group_id=5470

Category: Python Library   Group: None   >Status: Closed   >Resolution: Fixed   Priority: 5   Private: No
Submitted By: sebastien Martini (seb_martini)   >Assigned to: Georg Brandl (gbrandl)
Summary: compiler module doesn't emit LIST_APPEND w/ list comprehension

Initial Comment:
In the compiler module, list comprehensions are implemented without emitting
the LIST_APPEND bytecode. For example:

    >>> src = "[a for a in range(3)]"
    >>> co = compiler.compile(src, 'lc1', 'exec')
    >>> co
    <code object ... at 0x404927b8, file "lc1", line 1>
    >>> dis.dis(co)
      1           0 BUILD_LIST               0
                  3 DUP_TOP
                  4 LOAD_ATTR                0 (append)
                  7 STORE_NAME               1 ($append0)
                 10 LOAD_NAME                2 (range)
                 13 LOAD_CONST               1 (3)
                 16 CALL_FUNCTION            1
                 19 GET_ITER
            >>   20 FOR_ITER                16 (to 39)
                 23 STORE_NAME               3 (a)
                 26 LOAD_NAME                1 ($append0)
                 29 LOAD_NAME                3 (a)
                 32 CALL_FUNCTION            1
                 35 POP_TOP
                 36 JUMP_ABSOLUTE           20
            >>   39 DELETE_NAME              1 ($append0)
                 42 POP_TOP
                 43 LOAD_CONST               0 (None)
                 46 RETURN_VALUE
    >>> co2 = compile(src, 'lc2', 'exec')
    >>> co2
    <code object ... at 0x40492770, file "lc2", line 1>
    >>> dis.dis(co2)
      1           0 BUILD_LIST               0
                  3 DUP_TOP
                  4 STORE_NAME               0 (_[1])
                  7 LOAD_NAME                1 (range)
                 10 LOAD_CONST               0 (3)
                 13 CALL_FUNCTION            1
                 16 GET_ITER
            >>   17 FOR_ITER                13 (to 33)
                 20 STORE_NAME               2 (a)
                 23 LOAD_NAME                0 (_[1])
                 26 LOAD_NAME                2 (a)
                 29 LIST_APPEND
                 30 JUMP_ABSOLUTE           17
            >>   33 DELETE_NAME              0 (_[1])
                 36 POP_TOP
                 37 LOAD_CONST               1 (None)
                 40 RETURN_VALUE

-- sébastien martini

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 08:54
Logged In: YES, user_id=849994

I "fixed" this in rev. 52520; now the compiler module generates the same
bytecode for listcomps as the builtin compiler. Not backporting this, as it
isn't really a bug.
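A quick way to check the behaviour described above (and Georg's fix) is to
disassemble the output of both compilers side by side. This is only an
illustrative sketch for Python 2.x, not part of the report:

    import compiler   # the pure-Python compiler package (Python 2.x only)
    import dis

    src = "[a for a in range(3)]"

    # After rev. 52520 both listings should use LIST_APPEND; before it, the
    # compiler-package version calls a hidden $append0 bound method instead.
    for label, code in (('compiler.compile:', compiler.compile(src, 'lc', 'exec')),
                        ('builtin compile: ', compile(src, 'lc', 'exec'))):
        print label
        dis.dis(code)
        print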
[ python-Bugs-1586414 ] tarfile.extract() may cause file fragmentation on Windows XP
Bugs item #1586414, was opened at 2006-10-28 21:22
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586414&group_id=5470

Category: Python Library   Group: Python 2.4   Status: Open   Resolution: None   Priority: 5   Private: No
Submitted By: Enoch Julias (enochjul)   Assigned to: Nobody/Anonymous (nobody)
Summary: tarfile.extract() may cause file fragmentation on Windows XP

Initial Comment:
Using tarfile.extract() to extract all the files from a large tar archive
with lots of files tends to cause file fragmentation on Windows. Apparently
NTFS cluster allocation interacts badly with such operations if Windows is
not aware of the size of each file.

The solution is to use a combination of the Win32 APIs SetFilePointer() and
SetEndOfFile() before writing to the target file. This helps Windows choose
contiguous free space for the file. I tried it on the 2.6 trunk by calling
file.truncate() (which seems to implement the appropriate calls on Windows)
to set the file size before writing to a file. It helps to avoid
fragmentation of the extracted files on my Windows XP x64 system.

Can this be added to tarfile to improve its performance on Windows?

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 08:55
Logged In: YES, user_id=849994

Can you try to come up with a patch?
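The report does not include code; the following is only a sketch of the
reporter's idea, pre-sizing the target file with truncate() before copying
the member's data. The helper name, the archive name and the simplified loop
are illustrative, not tarfile internals:

    import shutil
    import tarfile

    def extract_presized(tar, tarinfo, targetpath):
        # Illustrative helper, not a tarfile API: pre-allocate the file so
        # NTFS can pick contiguous clusters (file.truncate() ends up calling
        # SetFilePointer()/SetEndOfFile() on Windows), then copy the data.
        source = tar.extractfile(tarinfo)
        target = open(targetpath, "wb")
        try:
            if tarinfo.size:
                target.truncate(tarinfo.size)
            shutil.copyfileobj(source, target)
        finally:
            target.close()
            source.close()

    # Directory handling omitted for brevity; regular files only.
    tar = tarfile.open("example.tar")      # hypothetical archive name
    for member in tar.getmembers():
        if member.isfile():
            extract_presized(tar, member, member.name)
    tar.close()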
[ python-Bugs-1357915 ] subprocess cannot handle shell arguments
Bugs item #1357915, was opened at 2005-11-16 08:23
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1357915&group_id=5470

Category: Python Library   Group: Python 2.4   >Status: Closed   >Resolution: Fixed   Priority: 5   Private: No
Submitted By: Pierre Ossman (dr7eus)   >Assigned to: Georg Brandl (gbrandl)
Summary: subprocess cannot handle shell arguments

Initial Comment:
If you pass the arguments for the shell as a tuple in subprocess, you get a
traceback:

    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/lib/python2.4/subprocess.py", line 558, in __init__
        errread, errwrite)
      File "/usr/lib/python2.4/subprocess.py", line 907, in _execute_child
        args = ["/bin/sh", "-c"] + args
    TypeError: can only concatenate list (not "tuple") to list

A simple list() should solve the issue.

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 09:05
Logged In: YES, user_id=849994

Fixed in rev. 52522, 52523 (2.5).
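A minimal reproduction of the problem and the kind of one-line coercion the
report suggests (sketch only; the command is arbitrary):

    import subprocess

    try:
        # On POSIX with an unpatched Python 2.4/2.5 this raises:
        # TypeError: can only concatenate list (not "tuple") to list
        subprocess.call(("echo", "hello"), shell=True)
    except TypeError, exc:
        print "unpatched subprocess:", exc

    # The fix amounts to coercing the sequence inside _execute_child, e.g.
    #     args = ["/bin/sh", "-c"] + list(args)
    # so tuples of arguments work the same as lists.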
[ python-Bugs-1586613 ] zlib/bz2_codec doesn't support incremental decoding
Bugs item #1586613, was opened at 2006-10-29 20:14
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586613&group_id=5470

Category: Python Library   Group: None   Status: Open   Resolution: None   Priority: 5   Private: No
Submitted By: Topia (topia)   Assigned to: Nobody/Anonymous (nobody)
Summary: zlib/bz2_codec doesn't support incremental decoding

Initial Comment:
http://svn.python.org/view/python/trunk/Lib/encodings/zlib_codec.py?rev=43045&view=auto

Incremental encoding/decoding must be stateful; please use
compressobj/decompressobj objects. Incremental(Encoder|Decoder) and
Stream(Reader|Writer) don't work with the current code at all.
[ python-Bugs-1586613 ] zlib/bz2_codec doesn't support incremental decoding
Bugs item #1586613, was opened at 2006-10-29 11:14
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586613&group_id=5470

Category: Python Library   Group: None   >Status: Closed   >Resolution: Fixed   Priority: 5   Private: No
Submitted By: Topia (topia)   >Assigned to: Georg Brandl (gbrandl)
Summary: zlib/bz2_codec doesn't support incremental decoding

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 14:40
Logged In: YES, user_id=849994

Fixed the incremental coders/decoders in rev. 52529, 52530 (2.5). The
StreamReaders/Writers can't be fixed as easily, because their encode/decode
methods don't have a "final" flag, so they wouldn't know when to flush the
compress object.
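For reference, a stateful incremental decoder for the zlib codec built on
zlib.decompressobj() looks roughly like the sketch below. This illustrates
the approach the reporter asks for; it is not a quote of rev. 52529:

    import codecs
    import zlib

    class ZlibIncrementalDecoder(codecs.IncrementalDecoder):
        # Keeps a decompressobj() alive between decode() calls, so data fed
        # in arbitrary chunks is decoded correctly.

        def __init__(self, errors='strict'):
            assert errors == 'strict'
            codecs.IncrementalDecoder.__init__(self, errors)
            self.decompressobj = zlib.decompressobj()

        def decode(self, input, final=False):
            if final:
                return self.decompressobj.decompress(input) + self.decompressobj.flush()
            return self.decompressobj.decompress(input)

        def reset(self):
            self.decompressobj = zlib.decompressobj()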
[ python-Bugs-1581357 ] missing __enter__ + __getattr__ forwarding
Bugs item #1581357, was opened at 2006-10-20 15:17
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1581357&group_id=5470

Category: Python Library   Group: Python 2.5   >Status: Closed   >Resolution: Fixed   Priority: 7   Private: No
Submitted By: Hirokazu Yamamoto (ocean-city)   >Assigned to: Georg Brandl (gbrandl)
Summary: missing __enter__ + __getattr__ forwarding

Initial Comment:
Hello. I encountered some unexpected behavior with the "with" statement.
First, please run the attached "a.py" file.

    # traditional way
    shift_jis
    True
    # with statement
    None
    False
    Traceback (most recent call last):
      File "R:\a.py", line 15, in <module>
        test(io)
      File "R:\a.py", line 8, in test
        io.write(u"あいうえお")
    UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4:
    ordinal not in range(128)

The "traditional way" runs as expected, but the "with statement" part
crashes. This happens because:

1. codecs.open returns a codecs.StreamReaderWriter.

2. codecs.StreamReaderWriter defines __getattr__ like this:

    def __getattr__(self, name, getattr=getattr):
        """ Inherit all other methods from the underlying stream. """
        return getattr(self.stream, name)

3. But codecs.StreamReaderWriter doesn't have its own __enter__ definition, so

    srw = StreamReaderWriter(stream, ...)
    srw.__enter__()  # actually calls stream.__enter__ via __getattr__,
                     # which returns stream, not srw

Even worse, the with statement doesn't complain that StreamReaderWriter
(currently) doesn't support the context manager protocol.

Is this intended behavior? If not, this particular problem can be solved by
the attached "a.patch". I grepped the library files and found many classes
with __getattr__ but no __enter__...

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 14:47
Logged In: YES, user_id=849994

This is a duplicate of #1586513, which has been fixed today.
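The same pitfall can be shown without codecs, using a hypothetical Wrapper
class (the class and file name below are illustrative, not from the report).
Under Python 2.5, where the with statement looks up __enter__/__exit__ by
ordinary attribute access, __getattr__ silently hands back the wrapped
object's methods:

    from __future__ import with_statement

    class Wrapper(object):
        # Hypothetical wrapper, not from the report.
        def __init__(self, stream):
            self.stream = stream

        def __getattr__(self, name, getattr=getattr):
            # Inherit everything else from the underlying stream.
            return getattr(self.stream, name)

    f = open('demo.txt', 'w')          # illustrative file name
    w = Wrapper(f)
    with w as target:
        # target is the raw file object, not the Wrapper, because
        # w.__enter__ resolved to f.__enter__ through __getattr__.
        print target is w, target is f   # False True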
[ python-Bugs-1586773 ] hashlib documentation is insufficient
Bugs item #1586773, was opened at 2006-10-29 17:45
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586773&group_id=5470

Category: Documentation   Group: Python 2.5   Status: Open   Resolution: None   Priority: 5   Private: No
Submitted By: Marcos Daniel Marado Torres (mindbooster)   Assigned to: Nobody/Anonymous (nobody)
Summary: hashlib documentation is insufficient

Initial Comment:
Greetings,

While the hashlib documentation at
http://docs.python.org/lib/module-hashlib.html is pretty good, doing
help(hashlib) in Python gives really mediocre documentation. The suggestion
is to write new docstring documentation for hashlib based on what is already
there on the web.
[ python-Bugs-1586773 ] hashlib documentation is insufficient
Bugs item #1586773, was opened at 2006-10-29 17:45
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586773&group_id=5470

Category: Documentation   Group: Python 2.5   >Status: Closed   >Resolution: Fixed   Priority: 5   Private: No
Submitted By: Marcos Daniel Marado Torres (mindbooster)   >Assigned to: Georg Brandl (gbrandl)
Summary: hashlib documentation is insufficient

----------------------------------------------------------------------

>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-29 18:01
Logged In: YES, user_id=849994

I added some of the online doc to the docstring. It should now be clearer.
(rev. 52532, 52533 (2.5))
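The new docstring text is not quoted in the tracker message; as an
illustration of the kind of usage information it covers (this example is
ours, not a quote of rev. 52532):

    import hashlib

    # Named constructors for the guaranteed algorithms:
    m = hashlib.md5()
    m.update("Nobody inspects")
    m.update(" the spammish repetition")
    print m.hexdigest()

    # The generic constructor also accepts algorithm names provided by
    # OpenSSL, availability permitting:
    print hashlib.new('sha256', 'abc').hexdigest()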
[ python-Bugs-1570255 ] redirected cookies
Bugs item #1570255, was opened at 2006-10-04 09:37
Message generated for change (Comment added) made by hans_moleman
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1570255&group_id=5470

Category: None   Group: None   Status: Open   Resolution: None   Priority: 5   Private: No
Submitted By: hans_moleman (hans_moleman)   Assigned to: Nobody/Anonymous (nobody)
Summary: redirected cookies

Initial Comment:
Cookies are not resent when a redirect is requested.

Blurb: I've been trying to get a response off a server using Python. The
response so far differs from the response using Firefox. In Python, I have
set headers and cookies the way Firefox does it. I noticed that the server
accepts the POST request and redirects the client to another address with
the result on it. This happens correctly both with Python and Firefox.

Cookie handling differs though: the Python client, when redirected, using
the standard redirect handler, does not resend its cookies to the redirected
address. Firefox does resend the cookies from the original request. When I
redefine the redirect handler and code it so that it adds the cookies from
the original request, the response is the same as Firefox's response. This
confirms that resending cookies is required to get the server to respond
correctly.

Is the default Python redirection cookie policy different from Firefox's
policy? Could we improve the default redirection handler to work like
Firefox? Is it a bug? I noticed an old open bug report, 511786, that looks
very much like this problem. It suggests it is fixed.

Cheers, Hans Moleman.

----------------------------------------------------------------------

>Comment By: hans_moleman (hans_moleman)
Date: 2006-10-30 07:53
Logged In: YES, user_id=1610873

OK. I'll have a look at that. Thanks for the pointers.

----------------------------------------------------------------------

Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-28 01:16
Logged In: YES, user_id=11375

Given the sensitive data in your script, it's certainly best to not post it.
You'll have to dig into urllib2 yourself, I think. Start by looking at the
code in redirect_request(), around line 520 of urllib2.py, and adding some
debug prints. Print out the contents of req.headers; is the cookie line in
there? Change the __init__ of AbstractHTTPHandler to default debuglevel to
1, not 0; this will print out all the HTTP lines being sent and received.

----------------------------------------------------------------------

Comment By: hans_moleman (hans_moleman)
Date: 2006-10-27 17:20
Logged In: YES, user_id=1610873

I am using this script to obtain monthly internet usage statistics from my
ISP. My ISP provides a screen via HTTPS to enter a usercode and password,
after which the usage statistics are displayed at a different address. I
cannot send this script with my usercode and password; my ISP might not like
me doing this either. Therefore I'll try to find another server that behaves
similarly and send you that.

----------------------------------------------------------------------

Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 09:16
Logged In: YES, user_id=11375

More detail is needed to figure out if there's a problem; can you give a
sample URL that exhibits the problem? Can you provide your code? From the
description, it's unclear if this might be a bug in the handling of
redirects or in the CookieProcessor class. The bug in 511786 is still fixed;
that bug includes sample code, so I could check it.
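The thread does not include the reporter's script. A hedged sketch of the
usual way to get cookies carried across a redirect with urllib2 (the login
URL and form field names are placeholders), which also turns on the debug
output Andrew suggests:

    import urllib
    import urllib2
    import cookielib

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(
        urllib2.HTTPCookieProcessor(jar),      # store and resend cookies
        urllib2.HTTPSHandler(debuglevel=1),    # print the HTTP traffic
    )

    data = urllib.urlencode({'usercode': 'example', 'password': 'example'})
    response = opener.open('https://isp.example.com/login', data)

    # Cookies set by the POST are added back onto the redirected request by
    # the CookieProcessor, so the final page should match what Firefox shows.
    print response.geturl()
    print len(jar), "cookie(s) held"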
[ python-Bugs-1580472 ] glob.glob("c:\\[ ]\*") doesn't work
Bugs item #1580472, was opened at 2006-10-19 13:44
Message generated for change (Comment added) made by koblaid
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580472&group_id=5470

Category: Python Library   Group: Python 2.5   Status: Closed   Resolution: Invalid   Priority: 5   Private: No
Submitted By: Koblaid (koblaid)   Assigned to: Nobody/Anonymous (nobody)
Summary: glob.glob("c:\\[ ]\*") doesn't work

Initial Comment:
OS: Windows 2000 Service Pack 4
Python 2.5

glob.glob() doesn't work in directories named "[ ]" (with a blank in it).
Another example is a directory named "A - [Aa-Am]". Example:

    C:\>md []
    C:\>md "[ ]"
    C:\>copy anyfile.txt []
            1 Datei(en) kopiert.
    C:\>copy anyfile.txt "[ ]"
            1 Datei(en) kopiert.

    C:\>python
    Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import glob
    >>> glob.glob("c:\\[]\*")
    ['c:\\[]\\anyfile.txt']
    >>> glob.glob("c:\\[ ]\*")
    []

The second glob should have returned the same result as the first glob,
since I copied the same file to both directories. I may be wrong because I'm
new to Python, but I've tested it a couple of times, and I think it has to
be a bug in Python or a bug in Windows.

Greets, Koblaid

----------------------------------------------------------------------

>Comment By: Koblaid (koblaid)
Date: 2006-10-29 23:58
Logged In: YES, user_id=1624709

Thanks for your answers. I'm pretty new at Python, but I disagree with you.
I see it's not a bug, and folders like "[ ]" aren't very common. But if you
scan your filesystem recursively using glob, you will lose all folders named
like "[ ]":

    def scanDir(path):
        elements = glob.glob(path + "\\*")
        for currentElement in elements:
            if os.path.isfile(currentElement):
                print currentElement
            else:
                scanDir(currentElement)

Even if these folders are very rare, the damage could be great: you lose
files without noticing it. A programmer assumes that a language works
correctly in all cases, so I think this should be changed.

One easy solution would be to add a second, optional boolean parameter to
glob which has to be true if you want wildcard patterns interpreted. But
this and similar solutions fail if you try to use glob recursively, as the
example above shows. A solution that would work with my example could be for
glob to return paths with every "[" and "]" (and other affected characters)
put in brackets, as potten recommended. On the other hand, the result is
unwieldy if you don't want to feed it back into glob. So I don't know a nice
solution; maybe you have better ideas...

Thanks, Koblaid

----------------------------------------------------------------------

Comment By: Georg Brandl (gbrandl)
Date: 2006-10-27 16:01
Logged In: YES, user_id=849994

Not a bug, as Peter said.

----------------------------------------------------------------------

Comment By: Peter Otten (potten)
Date: 2006-10-27 14:32
Logged In: YES, user_id=703365

Not a bug. "[abc]" matches exactly one character which may be "a", "b" or
"c". Therefore "[ ]" matches one space character. If you want a literal "[",
put it in brackets, e.g. glob.glob("C:\\[[] ]\\*").

By the way, do you think this problem is common enough to warrant the
addition of an fnmatch.escape() function? I have something like this in
mind:

    >>> import re
    >>> r = re.compile("(%s)" % "|".join(re.escape(c) for c in "*?["))
    >>> def escape(s):
    ...     return r.sub(r"[\1]", s)
    ...
    >>> escape("c:\\[a-z]\\*")
    'c:\\[[]a-z]\\[*]'

----------------------------------------------------------------------

Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-10-27 08:14
Logged In: YES, user_id=341410

This is a known issue with the fnmatch module (what glob uses under the
covers). According to the documentation of the translate method that
converts patterns into regular expressions, "There is no way to quote
meta-characters." The fact that "[]" works but "[ ]" doesn't is a convenient
bug, for those who want to use "[]". If you can come up with some similar
but non-ambiguous syntax to update the fnmatch module, I'm sure it would be
considered, but as-is, I can't see this as a "bug" per se.
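Building on Peter Otten's escape() helper above, a workaround for the
recursive-scan case is to escape only the literal directory part of the
pattern before handing it to glob. This is a sketch, not part of the thread,
and the function names are ours:

    import glob
    import os
    import re

    _magic = re.compile(r"([*?[])")

    def escape(pattern):
        # Wrap each glob metacharacter in brackets so it matches literally,
        # e.g. "c:\\[ ]" becomes "c:\\[[] ]".
        return _magic.sub(r"[\1]", pattern)

    def scan_dir(path):
        # Only the directory part is escaped; the trailing "*" stays a
        # wildcard, so directories named "[ ]" or "A - [Aa-Am]" are no
        # longer skipped.
        for entry in glob.glob(os.path.join(escape(path), "*")):
            if os.path.isfile(entry):
                print entry
            else:
                scan_dir(entry)

    scan_dir("C:\\")   # illustrative starting point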