[ python-Bugs-1699759 ] pickle example contains errors
Bugs item #1699759, was opened at 2007-04-13 02:22
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1699759&group_id=5470
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Mark Edgington (edgimar)
Assigned to: Nobody/Anonymous (nobody)
Summary: pickle example contains errors

Initial Comment:
In the pickle documentation (see http://docs.python.org/lib/pickle-example.html ), there is an error. At the end of the page, under the "A sample usage might be..." section, the file for dumping is opened in text mode instead of binary mode.

This bit me because I was lazy and didn't re-read all of the pickle documentation, but based some code on this snippet. The problem occurs under certain circumstances when pickling a new-style 'object' instance under Windows and then attempting to unpickle it under Linux. You get the following error:

    ImportError: No module named copy_reg

This made no sense to me, and it took a long time to figure out that the problem was due to the mode in which the file was saved (what that has to do with the ImportError I still have no idea...). If interested, I could attach a test script which is supposed to load the data into a class instance, plus two pickle dumps, one which works and one which fails.

A related suggestion, which perhaps belongs in a separate report: when pickle is writing to a filehandle, it should check whether the filehandle's mode is binary or text, and issue a warning if it is text.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1699759&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
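The fix the report asks for can be illustrated with a short sketch (the file name and data are made up, and the `with` syntax is from later Python versions than the 2.x snippet under discussion):

```python
import os
import pickle
import tempfile

# Pickle files must be opened in binary mode ('wb'/'rb'); text mode can
# corrupt the stream via newline translation, especially on Windows.
data = {'spam': [1, 2, 3], 'eggs': ('a', 'b')}

path = os.path.join(tempfile.mkdtemp(), 'data.pkl')
with open(path, 'wb') as f:        # binary write: 'wb', not 'w'
    pickle.dump(data, f)

with open(path, 'rb') as f:        # binary read: 'rb', not 'r'
    restored = pickle.load(f)

print(restored == data)  # True
```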
[ python-Bugs-1699853 ] locale.getlocale() output fails as setlocale() input
Bugs item #1699853, was opened at 2007-04-13 12:26
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1699853&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Bernhard Reiter (ber)
Assigned to: Nobody/Anonymous (nobody)
Summary: locale.getlocale() output fails as setlocale() input

Initial Comment:
This problem report about the locale module consists of three closely related parts (which is why I have decided to put it in one report):

a) the example in the docs is wrong / misleading
b) under some locale settings python has a defect
c) a test case for the locale module, showing b) but useful as a general start for a test module.

Details:

a) Section "example": The line

    >>> loc = locale.getlocale(locale.LC_ALL) # get current locale

contradicts the description of getlocale(), which says it should not be called with LC_ALL. The suggestion is to change the example to be more useful, since getting the locale as a first action is not really informative: it should be "C" anyway, which leads to (None, None), so the value is already known. It would make more sense to first set the default locale from the user's preferences:

    import locale
    locale.setlocale(locale.LC_ALL, '')
    loc = locale.getlocale(locale.LC_NUMERIC)
    locale.setlocale(locale.LC_NUMERIC, "C")
    # convert a string here
    locale.setlocale(locale.LC_NUMERIC, loc)

_but_ this does not work; see problem b).
What does work is:

    import locale
    locale.setlocale(locale.LC_ALL, '')
    loc = locale.setlocale(locale.LC_NUMERIC)
    locale.setlocale(locale.LC_NUMERIC, "C")
    # convert a string here
    locale.setlocale(locale.LC_NUMERIC, loc)

Note that all_loc = locale.setlocale(locale.LC_ALL) might contain several categories (see the attached test_locale.py, where I needed to decode this):

    'LC_CTYPE=de_DE.UTF-8;LC_NUMERIC=en_GB.utf8;LC_TIME=de_DE.UTF-8;LC_COLLATE=de_DE.UTF-8;LC_MONETARY=de_DE.UTF-8;LC_MESSAGES=de_DE.UTF-8;LC_PAPER=de_DE.UTF-8;LC_NAME=de_DE.UTF-8;LC_ADDRESS=de_DE.UTF-8;LC_TELEPHONE=de_DE.UTF-8;LC_MEASUREMENT=de_DE.UTF-8;LC_IDENTIFICATION=de_DE.UTF-8'

b) The output of getlocale() sometimes cannot be used as input to setlocale(). This works with:

* python2.5 and python2.4 on Debian GNU/Linux Etch ppc, de_DE.utf8.

I had failures with:

* python2.3, python2.4, python2.5 on Debian GNU/Linux Sarge ppc, [EMAIL PROTECTED]
* Windows XP SP2, python-2.4.4.msi German; see:

    >>> import locale
    >>> result = locale.setlocale(locale.LC_NUMERIC, "")
    >>> print result
    German_Germany.1252
    >>> got = locale.getlocale(locale.LC_NUMERIC)
    >>> print got
    ('de_DE', '1252')
    >>> # works
    ... locale.setlocale(locale.LC_NUMERIC, result)
    'German_Germany.1252'
    >>> # fails
    ... locale.setlocale(locale.LC_NUMERIC, got)
    Traceback (most recent call last):
      File "<stdin>", line 2, in ?
      File "C:\Python24\lib\locale.py", line 381, in setlocale
        return _setlocale(category, locale)
    locale.Error: unsupported locale setting
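The working pattern from the report can be condensed into a sketch; note that it saves the string returned by setlocale() itself, which (unlike getlocale()'s normalized tuple) is guaranteed to be accepted back by setlocale():

```python
import locale

saved = locale.setlocale(locale.LC_NUMERIC)        # query current setting
try:
    locale.setlocale(locale.LC_NUMERIC, 'C')       # portable "C" locale
    # ... perform locale-independent conversions here ...
finally:
    locale.setlocale(locale.LC_NUMERIC, saved)     # restore from the token

print(locale.setlocale(locale.LC_NUMERIC) == saved)  # True
```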
[ python-Bugs-1698167 ] xml.etree document element.tag
Bugs item #1698167, was opened at 2007-04-11 08:25
Message generated for change (Comment added) made by effbot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1698167&group_id=5470

Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: paul rubin (phr)
Assigned to: Fredrik Lundh (effbot)
Summary: xml.etree document element.tag

Initial Comment:
The xml.etree docs vaguely mention an implementation-dependent Element interface without describing it in any detail. I could not figure out from the docs how to get the tag name of an element returned from (say) the getiterator interface. That is, for an element like <foo>, I wanted the string "foo". Examining the library source showed that e.tag does the job, at least some of the time, and that was enough to get my app working. Could the actual situation please be documented--thanks.

--
>Comment By: Fredrik Lundh (effbot)
Date: 2007-04-13 15:32
Message:
Logged In: YES user_id=38376 Originator: NO

Looks like the entire Element section is missing from the current documentation. Thanks for reporting this; I'll take a look when I find the time. In the meantime, you'll find additional documentation here: http://effbot.org/zone/element.htm
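For reference, the behaviour the submitter discovered can be shown in a few lines (using the later iter() spelling of the 2.5-era getiterator() method):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<foo><bar/><baz>text</baz></foo>')

# Every Element exposes its tag name via the .tag attribute, including
# elements produced by iteration.
tags = [elem.tag for elem in root.iter()]
print(tags)  # ['foo', 'bar', 'baz']
```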
[ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace
Bugs item #1599254, was opened at 2006-11-19 16:03
Message generated for change (Comment added) made by baikie
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: David Watson (baikie)
Assigned to: A.M. Kuchling (akuchling)
Summary: mailbox: other programs' messages can vanish without trace

Initial Comment:
The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so).

A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas?

Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set.
--
>Comment By: David Watson (baikie)
Date: 2007-04-13 14:45
Message:
Logged In: YES user_id=1504904 Originator: YES

Here's a possible solution to the rereading problem. It should allow existing applications to work whether they behave properly or not, as well as (probably) most third-party subclasses. The idea is to compare the new table of contents to the list of known offset pairs, and raise ExternalClashError if *any* of them have changed or disappeared. Any new pairs can then be added to _toc under new keys.

To maintain the list of known pairs, a special dict subclass is used on self._toc that records every offset pair ever set in it - even those that are subsequently removed from the mapping. However, if self._pending is not set when rereading, then the code doesn't rely on this special behaviour: it just uses self._toc.itervalues(), which will work even if a subclass has replaced the special _toc with a normal dictionary.

Ways the code can break:

- If a subclass replaces self._toc and then the application tries to lock the mailbox *after* making modifications (so that _update_toc() finds self._pending set, and looks for the special attribute on _toc).
- If a subclass tries to store something other than sequences in _toc.
- If a subclass' _generate_toc() can produce offsets for messages that don't match those they were written under.

File Added: mailbox-update-toc-again.diff

--
Comment By: A.M. Kuchling (akuchling)
Date: 2007-03-28 17:56
Message:
Logged In: YES user_id=11375 Originator: NO

Created a branch in SVN at svn+ssh://[EMAIL PROTECTED]/python/branches/amk-mailbox to work on this for 2.5.1. I've committed the unified2 module patch and the test_concurrent_ad() test.

--
Comment By: A.M. Kuchling (akuchling)
Date: 2007-01-24 20:48
Message:
Logged In: YES user_id=11375 Originator: NO

I've strengthened the warning again. The MH bug in unified2 is straightforward: MH.remove() opens a file object, locks it, closes the file object, and then tries to unlock it. Presumably the MH test case never bothered locking the mailbox before making changes before.

--
Comment By: David Watson (baikie)
Date: 2007-01-22 20:24
Message:
Logged In: YES user_id=1504904 Originator: YES

So what you propose to commit for 2.5 is basically mailbox-unified2 (your mailbox-unified-patch, minus the _toc clearing)?

--
Comment By: A.M. Kuchling (akuchling)
Date: 2007-01-22 15:46
Message:
Logged In: YES user_id=11375 Originator: NO

This would be an API change, and therefore out-of-bounds for 2.5. I suggest giving up on this for 2.5.1 and only fixing it in 2.6. I'll add another warning to the docs, and maybe to the module as well.
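The copy-back idea from the initial comment can be sketched as a hypothetical helper (names are illustrative, not the actual attached patch):

```python
import os
import shutil
import tempfile

def flush_in_place(path, new_path):
    # Copy the rewritten mailbox back over the original file and truncate,
    # instead of renaming, so a process blocked on an fcntl() lock of the
    # original descriptor sees the new contents, not a deleted inode.
    with open(new_path, 'rb') as new_f, open(path, 'rb+') as orig_f:
        shutil.copyfileobj(new_f, orig_f)
        orig_f.truncate()  # drop leftover bytes from the longer old copy
    os.remove(new_path)

# Demonstration on throwaway files:
d = tempfile.mkdtemp()
orig, new = os.path.join(d, 'mbox'), os.path.join(d, 'mbox.new')
with open(orig, 'wb') as f:
    f.write(b'old mailbox contents, longer than the rewrite')
with open(new, 'wb') as f:
    f.write(b'rewritten mailbox')
flush_in_place(orig, new)
with open(orig, 'rb') as f:
    print(f.read())  # b'rewritten mailbox'
```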
[ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace
--
>Comment By: David Watson (baikie)
Date: 2007-04-13 14:46
Message:
Logged In: YES user_id=1504904 Originator: YES

Some new test cases for this stuff.
File Added: test_mailbox-reread.diff
[ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace
--
>Comment By: David Watson (baikie)
Date: 2007-04-13 14:47
Message:
Logged In: YES user_id=1504904 Originator: YES

This fixes the Babyl breakage. Perhaps it should be in the superclass?
File Added: mailbox-babyl-fix.diff
[ python-Bugs-1700132 ] import and capital letters
Bugs item #1700132, was opened at 2007-04-13 15:06
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1700132&group_id=5470

Category: Python Interpreter Core
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: omsynge (omsynge)
Assigned to: Nobody/Anonymous (nobody)
Summary: import and capital letters

Initial Comment:
Interactive use and relative paths allow an unlimited (or at least I have not found a limit yet) number of characters to be uppercase. This is fine, but when the python interpreter reads directories from PYTHONPATH with 3 capital letters you get a failure to locate the files. I have replicated this issue with python 2.2 and python 2.4, on Red Hat EL3 and Ubuntu (some version, not sure which). As an example,

    import dcacheYaimInstallerTest.logger as logger

works fine interactively or with relative paths, but not when installed via an RPM, while

    import dcacheYaimInstallertest.logger as logger

is fine in both scenarios.

This bug cost me some hours to trace, and would have cost more had I not had a lot of experience of packaging, so I would be pleased if this could be fixed in all versions of Python.

Regards,
Owen
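The underlying behaviour can be demonstrated directly: the name in an import statement is matched case-sensitively against the package on disk (the package name below is invented for the demo; exact results on case-insensitive filesystems may vary by platform):

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, 'demoMixedCasePkg')   # hypothetical package name
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()

sys.path.insert(0, root)
importlib.invalidate_caches()

mod = importlib.import_module('demoMixedCasePkg')   # exact case: works

try:
    importlib.import_module('demomixedcasepkg')     # wrong case: fails
    outcome = 'imported'
except ImportError:
    outcome = 'ImportError'
print(outcome)
```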
[ python-Bugs-1685000 ] asyncore DoS vulnerability
Bugs item #1685000, was opened at 2007-03-21 10:15
Message generated for change (Settings changed) made by billiejoex
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1685000&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 9
Private: No
Submitted By: billiejoex (billiejoex)
>Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore DoS vulnerability

Initial Comment:
asyncore, independently of whether it is used with select() or poll(), suffers a DoS-type vulnerability when a high number of simultaneous connections must be handled. The maximum number of connections is system-dependent, as is the type of error raised. I attached two simple proof-of-concept scripts demonstrating the bug. If you want to try the behaviours listed below, run the attached "asyncore_server.py" and "asyncore_client.py" scripts on your local workstation.

On my Windows XP system (Python 2.5), whether asyncore is used in a server or a client, the error is raised by select() inside asyncore's "poll" function when 512 simultaneous connections (socket_map elements) are reached. Here's the traceback I get:

    [...]
    connections: 510
    connections: 511
    connections: 512
    Traceback (most recent call last):
      File "C:\scripts\asyncore_server.py", line 38, in <module>
        asyncore.loop()
      File "C:\Python25\lib\asyncore.py", line 191, in loop
        poll_fun(timeout, map)
      File "C:\Python25\lib\asyncore.py", line 121, in poll
        r, w, e = select.select(r, w, e, timeout)
    ValueError: too many file descriptors in select()

On my Linux Ubuntu 6.10 system (kernel 2.6.17-10, Python 2.5), different errors are raised depending on the application (client or server).
In an asyncore-based client the error is raised by the socket module (dispatcher's "self.socket" attribute) inside the 'connect' method of the 'dispatcher' class:

    [...]
    connections: 1018
    connections: 1019
    connections: 1020
    connections: 1021
    Traceback (most recent call last):
      File "asyncore_client.py", line 31, in <module>
      File "asyncore.py", line 191, in loop
      File "asyncore.py", line 138, in poll
      File "asyncore.py", line 80, in write
      File "asyncore.py", line 76, in write
      File "asyncore.py", line 395, in handle_write_event
      File "asyncore_client.py", line 24, in handle_connect
      File "asyncore_client.py", line 9, in __init__
      File "asyncore.py", line 257, in create_socket
      File "socket.py", line 156, in __init__
    socket.error: (24, 'Too many open files')

In an asyncore-based server the error is raised by the socket module (dispatcher's "self.socket" attribute) inside the 'accept' method of the 'dispatcher' class:

    [...]
    connections: 1019
    connections: 1020
    connections: 1021
    Traceback (most recent call last):
      File "asyncore_server.py", line 38, in <module>
      File "asyncore.py", line 191, in loop
      File "asyncore.py", line 132, in poll
      File "asyncore.py", line 72, in read
      File "asyncore.py", line 68, in read
      File "asyncore.py", line 384, in handle_read_event
      File "asyncore_server.py", line 16, in handle_accept
      File "asyncore.py", line 321, in accept
      File "socket.py", line 170, in accept
    socket.error: (24, 'Too many open files')

--
Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-04-09 18:13
Message:
Logged In: YES user_id=341410 Originator: NO

Assign the "bug" to me, I'm the maintainer for asyncore/asynchat. With that said, since a user needs to override asyncore.dispatcher.handle_accept() anyway, which necessarily needs to call asyncore.dispatcher.accept(), the subclass is free to check the number of sockets in its socket map before creating a new instance of whatever subclass of asyncore.dispatcher the user has written.
Also, the number of file handles that select can handle on Windows is a compile-time constant, and has nothing to do with the actual number of open file handles. Take and run the following source file on Windows and see how the total number of open sockets can be significantly larger than the number of sockets passed to select():

    import socket
    import asyncore
    import random

    class new_map(dict):
        def items(self):
            r = [(i, j) for i, j in dict.items(self)
                 if not random.randrange(4) and j != h]
            r.append((h._fileno, h))
            print len(r), len(asyncore.socket_map)
            return r

    asyncore.socket_map = new_map()

    class listener(asyncore.dispatcher):
        def handle_accept(self):
            x = self.accept()
            if x:
                conn, addr = x
                connection(conn)

    class connection(asyncore.dispatcher):
        def writable(self):
            return 0
        def handle_connect(self):
            pass

    if __name__ == '__main__':
        h = listener()
        h.create_socket(socket.AF_INET, sock
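The guard Josiah describes, checking the socket map before accepting, can be sketched outside asyncore as well; this version uses the modern selectors module, with made-up names, to show the load-shedding idea:

```python
import selectors
import socket

MAX_CONNECTIONS = 500  # stay below select()'s 512-descriptor ceiling on Windows

sel = selectors.DefaultSelector()
connections = {}

def handle_accept(server):
    conn, _addr = server.accept()
    if len(connections) >= MAX_CONNECTIONS:
        conn.close()  # shed load instead of dying inside select()
        return None
    conn.setblocking(False)
    connections[conn.fileno()] = conn
    sel.register(conn, selectors.EVENT_READ)
    return conn

# Loopback demonstration:
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(5)
client = socket.create_connection(server.getsockname())
accepted = handle_accept(server)
print(len(connections))  # 1

sel.unregister(accepted)
for s in (accepted, client, server):
    s.close()
sel.close()
```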
[ python-Bugs-1685000 ] asyncore DoS vulnerability
Bugs item #1685000, was opened at 2007-03-21 02:15
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1685000&group_id=5470

Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Wont Fix
>Priority: 5
Private: No
Submitted By: billiejoex (billiejoex)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore DoS vulnerability

--
>Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-04-13 12:36
Message:
Logged In: YES user_id=341410 Originator: NO

The OP and I discussed this via email and IM. There seems to be a few issues that the OP is concerned about. The first is that the number of allowable sockets/process is platform dependent (Windows has no limit, linux can be set manually). The second is that some platforms limit the number of sockets that can be passed to select at a time (Windows limits this to 512, I don't know about linux).
The third is that the OP wants a solution to handling both a standard denial of service attack (single client), as well as a distributed denial of service attack (many clients).

The first issue is annoying, but it is not within the realm of problems that should be dealt with by base asyncore. Just like platform floating point handling is platform-dependent, sockets/file handles per process is also platform-dependent, and trying to abstract it away is not reasonable.

The second issue is also annoying, but it also isn't within the realm of problems that should be dealt with by base asyncore. Whether or not an application should be able to handle more than a few hundred sockets at a time is dependent on the application, and modifying asyncore to make assumptions about whether an application should or should not handle that many sockets is not reasonable.

The third issue is also not reasonable for us to handle. How to respond to many incoming connections (from a sing
[ python-Bugs-1700304 ] pydoc.help samples sys.stdout and sys.stdin at import time
Bugs item #1700304, was opened at 2007-04-13 12:53. Message generated for change (Tracker Item Submitted) made by Item Submitter. You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1700304&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: James Lingard (jchl)
Assigned to: Nobody/Anonymous (nobody)
Summary: pydoc.help samples sys.stdout and sys.stdin at import time

Initial Comment:
pydoc.help (aliased to the builtin help) uses the values of sys.stdout and sys.stdin that were in use when the pydoc module was imported. This means that if sys.stdout and/or sys.stdin are later modified, subsequent calls to pydoc.help (or help) use the wrong stdout and stdin. Instead, help should use the current values of sys.stdout and sys.stdin each time it is called. Reported against Python 2.4.4.
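The import-time-binding problem this report describes can be reproduced with a small sketch (the helper names below are hypothetical, not pydoc's actual internals): a function that snapshots `sys.stdout` when its module is loaded keeps writing to the old stream after a redirect, while one that reads `sys.stdout` at call time follows the redirect, which is the behaviour the report asks for.

```python
import io
import sys

_stdout_at_import = sys.stdout  # snapshot, like pydoc's import-time binding

def help_snapshot(text):
    # Writes to whatever sys.stdout was when this module was imported.
    _stdout_at_import.write(text + "\n")

def help_dynamic(text):
    # Looks up sys.stdout at call time, so later redirections are honoured.
    sys.stdout.write(text + "\n")

buf = io.StringIO()
old, sys.stdout = sys.stdout, buf
try:
    help_dynamic("redirected")       # lands in buf
    help_snapshot("not redirected")  # still goes to the original stream
finally:
    sys.stdout = old

print(buf.getvalue())  # only the call-time lookup was captured
```

The fix the reporter asks for amounts to moving pydoc's stream lookups from module scope into the body of `help`, as in `help_dynamic` above.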
[ python-Bugs-1700455 ] ctypes Fundamental data types
Bugs item #1700455, was opened at 2007-04-14 11:16. Message generated for change (Tracker Item Submitted) made by Item Submitter. You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1700455&group_id=5470

Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: hg6980 (hg6980)
Assigned to: Nobody/Anonymous (nobody)
Summary: ctypes Fundamental data types

Initial Comment:
I think the sentence:

"The current memory block contents can be accessed (or changed) with the raw property, if you want to access it as NUL terminated string, use the string property"

should be:

"The current memory block contents can be accessed (or changed) with the raw property, if you want to access it as NUL terminated string, use the value property"
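The distinction the reporter is pointing at is easy to demonstrate with `ctypes.create_string_buffer`: `raw` exposes the whole memory block including trailing NULs, while `value` stops at the first NUL, behaving like a NUL-terminated C string (this sketch targets Python 3, where the buffer contents are bytes):

```python
import ctypes

# A 6-byte mutable memory block initialised with "Hi"; the remaining
# four bytes are NUL padding.
buf = ctypes.create_string_buffer(b"Hi", 6)

print(buf.raw)    # full block:            b'Hi\x00\x00\x00\x00'
print(buf.value)  # NUL-terminated string: b'Hi'
```

So the documentation sentence should indeed name `value`, not `string`: there is no `string` attribute on the buffer object.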
[ python-Bugs-1163401 ] uncaught AttributeError deep in urllib
Bugs item #1163401, was opened at 2005-03-15 00:39. Message generated for change (Comment added) made by nagle. You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1163401&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Closed
Resolution: Duplicate
Priority: 5
Private: No
Submitted By: K Lars Lohn (lohnk)
Assigned to: Nobody/Anonymous (nobody)
Summary: uncaught AttributeError deep in urllib

Initial Comment:
Python 2.4 and Python 2.3.4 running under Suse 9.2.

We're getting an AttributeError exception "AttributeError: 'NoneType' object has no attribute 'read'" within a very simple call to urllib.urlopen. This was discovered while working on Sentry 2, the new mirror integrity checker for the Mozilla project. We try to touch hundreds of URLs to make sure that the files are present on each of the mirrors. One particular URL kills the call to urllib.urlopen:

http://mozilla.mirrors.skynet.be/pub/ftp.mozilla.org/firefox/releases/1.0/win32/en-US/Firefox%20Setup%201.0.exe

This file probably does not exist on the mirror; however, in other cases of bad URLs, we get much more graceful failures when we try to read from the object returned by urllib.urlopen.

>>> import urllib
>>> urlReader = urllib.urlopen("http://mozilla.mirrors.skynet.be/pub/ftp.mozilla.org/firefox/releases/1.0/win32/en-US/Firefox%20Setup%201.0.exe")
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/local/lib/python2.4/urllib.py", line 77, in urlopen
    return opener.open(url)
  File "/usr/local/lib/python2.4/urllib.py", line 180, in open
    return getattr(self, name)(url)
  File "/usr/local/lib/python2.4/urllib.py", line 305, in open_http
    return self.http_error(url, fp, errcode, errmsg, headers)
  File "/usr/local/lib/python2.4/urllib.py", line 322, in http_error
    return self.http_error_default(url, fp, errcode, errmsg, headers)
  File "/usr/local/lib/python2.4/urllib.py", line 550, in http_error_default
    return addinfourl(fp, headers, "http:" + url)
  File "/usr/local/lib/python2.4/urllib.py", line 836, in __init__
    addbase.__init__(self, fp)
  File "/usr/local/lib/python2.4/urllib.py", line 786, in __init__
    self.read = self.fp.read
AttributeError: 'NoneType' object has no attribute 'read'

The attached file is a three-line script that demos the problem.

--

Comment By: John Nagle (nagle)
Date: 2007-04-14 04:28

Message:
Logged In: YES
user_id=5571
Originator: NO

The basic cause of the "NoneType" attribute error is a straightforward bug in "urllib2". If an error during opening is routed to "http_error_default", a dummy file object is created using "addinfourl", so as to return something that looks like an empty file rather than raising an exception. But that doesn't work if "getfile()" on the httplib.HTTP object returns "None", which is unusual but can happen. We're seeing this error in Python 2.4 on Windows. We're still trying to understand exactly what network situation forces this path, but it's quite real. It seems to be occurring in Python 2.5 on Linux, too. If you override http_error_default in a subclass, you get an HTTP error of "-1" reported when this situation occurs.

--

Comment By: Georg Brandl (birkenfeld)
Date: 2005-12-15 22:10

Message:
Logged In: YES
user_id=1188172

Duplicate of #767111.

--

Comment By: Roy Smith (roysmith)
Date: 2005-04-02 21:44

Message:
Logged In: YES
user_id=390499

Wow, this is bizarre.
I just spent some time tracking down exactly this same bug and was just about to enter it when I saw this entry. For what it's worth, I can reliably reproduce this exception when fetching a URL from a deliberately broken server (well, at least I think it's broken; I have to double-check the HTTP spec to be sure this isn't actually allowed) which produces headers but no body. (This is on Mac OSX-10.3.8, Python-2.3.4)

---
Roy-Smiths-Computer:bug$ cat server.py
#!/usr/bin/env python
from BaseHTTPServer import *

class NullHandler (BaseHTTPRequestHandler):
    def do_GET (self):
        self.send_response (100)
        self.end_headers ()

server = HTTPServer (('', 8000), NullHandler)
server.handle_request()
--
Roy-Smiths-Computer:bug$ cat client.py
#!/usr/bin/env python
import urllib
urllib.urlopen ('http://127.0.0.1:8000')
-
Roy-Smiths-Computer:bug$ ./client.py
Traceback (most recent call last):
  File "./client.py", line 5, in ?
    urllib.urlopen ('http://127.0.0.1:8000')
  File "/usr/local/lib/python2.3/urllib.py", line
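Stripped of the network machinery, the failure mode this thread converges on is simple: a wrapper object binds `self.read = fp.read` at construction time, so a `None` file object (as when `getfile()` returns `None`) produces the reported AttributeError instead of a clean HTTP error. A minimal sketch, with illustrative class names rather than urllib's actual addbase:

```python
class FileWrapper:
    """Binds the wrapped file's read method at construction time,
    mimicking urllib's old addbase.__init__ (illustrative sketch)."""
    def __init__(self, fp):
        self.fp = fp
        self.read = self.fp.read  # AttributeError when fp is None

# getfile() returning None, as in the report, triggers the failure:
try:
    FileWrapper(None)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'read'

# A defensive variant would check fp before binding, so callers see
# an empty file rather than an unrelated AttributeError:
class SafeFileWrapper:
    def __init__(self, fp):
        self.fp = fp
        self.read = fp.read if fp is not None else (lambda *a: b"")
```

The guard in `SafeFileWrapper` is one way to realize the "looks like an empty file" intent Nagle describes; raising a proper HTTP-level error would be another.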