[ python-Bugs-1176893 ] Readline segfault
Bugs item #1176893, was opened at 2005-04-05 10:50
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

Category: Python Library  Group: Python 2.5  Status: Open  Resolution: None  Priority: 5
Submitted By: Walter Dörwald (doerwalter)
Assigned to: Michael Hudson (mwh)
Summary: Readline segfault

Initial Comment:
The latest change to the readline module has broken tab completion:

./python
Python 2.5a0 (#1, Apr 5 2005, 01:14:33)
[GCC 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import readline, rlcompleter
[25913 refs]
>>> readline.parse_and_bind("tab: complete")
[25913 refs]
>>> Segmentation fault

[Press tab after the parse_and_bind() call]

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1176893 ] Readline segfault
Bugs item #1176893, was opened at 2005-04-05 09:50
Message generated for change (Comment added) made by mwh
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

Category: Python Library  Group: Python 2.5  Status: Open  Resolution: None  Priority: 5
Submitted By: Walter Dörwald (doerwalter)
Assigned to: Michael Hudson (mwh)
Summary: Readline segfault

Initial Comment:
The latest change to the readline module has broken tab completion:

./python
Python 2.5a0 (#1, Apr 5 2005, 01:14:33)
[GCC 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import readline, rlcompleter
[25913 refs]
>>> readline.parse_and_bind("tab: complete")
[25913 refs]
>>> Segmentation fault

[Press tab after the parse_and_bind() call]

--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-05 11:18
Message:
Logged In: YES user_id=6656

I'm going to go out on a limb and suggest this is a bug in the PyGilState_ functions. The problem boils down to calling PyThread_release_lock(interpreter_lock) when interpreter_lock is NULL, i.e. PyEval_InitThreads hasn't been called. (If you start a thread before running the crashing code, it doesn't crash, because the GIL has been allocated.) Not sure what the solution is, or who to bug (time to read the cvs log, I guess). A silly workaround is to put PyEval_InitThreads in initreadline.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
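A user-level workaround follows from mwh's observation that starting a thread before the crashing code prevents the crash, because it forces the GIL to be allocated. The sketch below is a mitigation for interactive use, not a fix for the underlying PyGilState_ issue, and has not been verified against the 2.5a0 build in the report:

    import threading

    def _force_gil_allocation():
        # Starting (and joining) a throwaway thread makes CPython call
        # PyEval_InitThreads, so the GIL exists before readline's
        # completion callback re-enters the interpreter.
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()

    _force_gil_allocation()

    import readline, rlcompleter
    readline.parse_and_bind("tab: complete")
    # Per mwh's comment, tab completion should no longer segfault after this.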
[ python-Bugs-1176893 ] Readline segfault
Bugs item #1176893, was opened at 2005-04-05 09:50
Message generated for change (Comment added) made by mwh
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

Category: Python Library  Group: Python 2.5  Status: Open  Resolution: None  Priority: 5
Submitted By: Walter Dörwald (doerwalter)
Assigned to: Michael Hudson (mwh)
Summary: Readline segfault

Initial Comment:
The latest change to the readline module has broken tab completion:

./python
Python 2.5a0 (#1, Apr 5 2005, 01:14:33)
[GCC 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import readline, rlcompleter
[25913 refs]
>>> readline.parse_and_bind("tab: complete")
[25913 refs]
>>> Segmentation fault

[Press tab after the parse_and_bind() call]

--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-05 11:27
Message:
Logged In: YES user_id=6656

Or maybe this one-line patch is the answer (it certainly fixes this case). Tim, can you spare a minute to think about this? The patch simply adds a check to PyEval_ReleaseThread so that it doesn't call PyThread_release_lock if the GIL hasn't been allocated.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-05 11:18
Message:
Logged In: YES user_id=6656

I'm going to go out on a limb and suggest this is a bug in the PyGilState_ functions. The problem boils down to calling PyThread_release_lock(interpreter_lock) when interpreter_lock is NULL, i.e. PyEval_InitThreads hasn't been called. (If you start a thread before running the crashing code, it doesn't crash, because the GIL has been allocated.) Not sure what the solution is, or who to bug (time to read the cvs log, I guess). A silly workaround is to put PyEval_InitThreads in initreadline.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1176893 ] Readline segfault
Bugs item #1176893, was opened at 2005-04-05 09:50
Message generated for change (Settings changed) made by mwh
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

Category: Python Library  Group: Python 2.5  Status: Open  Resolution: None  Priority: 5
Submitted By: Walter Dörwald (doerwalter)
>Assigned to: Tim Peters (tim_one)
Summary: Readline segfault

Initial Comment:
The latest change to the readline module has broken tab completion:

./python
Python 2.5a0 (#1, Apr 5 2005, 01:14:33)
[GCC 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import readline, rlcompleter
[25913 refs]
>>> readline.parse_and_bind("tab: complete")
[25913 refs]
>>> Segmentation fault

[Press tab after the parse_and_bind() call]

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-05 11:27
Message:
Logged In: YES user_id=6656

Or maybe this one-line patch is the answer (it certainly fixes this case). Tim, can you spare a minute to think about this? The patch simply adds a check to PyEval_ReleaseThread so that it doesn't call PyThread_release_lock if the GIL hasn't been allocated.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-05 11:18
Message:
Logged In: YES user_id=6656

I'm going to go out on a limb and suggest this is a bug in the PyGilState_ functions. The problem boils down to calling PyThread_release_lock(interpreter_lock) when interpreter_lock is NULL, i.e. PyEval_InitThreads hasn't been called. (If you start a thread before running the crashing code, it doesn't crash, because the GIL has been allocated.) Not sure what the solution is, or who to bug (time to read the cvs log, I guess). A silly workaround is to put PyEval_InitThreads in initreadline.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1176893&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1177077 ] [PyPI] Password reset problem.
Bugs item #1177077, was opened at 2005-04-05 13:56
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177077&group_id=5470

Category: None  Group: None  Status: Open  Resolution: None  Priority: 5
Submitted By: Darek Suchojad (dsuch)
Assigned to: Nobody/Anonymous (nobody)
Summary: [PyPI] Password reset problem.

Initial Comment:
Hello, the URL for resetting a password to PyPI is http://www.python.org/pypi?:action=password_reset&[EMAIL PROTECTED] However, this page yields the message

"""
Error...
There's been a problem with your request
psycopg.ProgrammingError: ERROR: syntax error at or near "where" at character 104
update users set password='6c2105e62b35507733ee49fdaed9815022b324e6', email='[EMAIL PROTECTED]', where name='myname'
"""

Clearly, there's a superfluous comma before 'where name='. I'm filing this bug report per request from webmaster'python'org.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177077&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
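For illustration only (this is not PyPI's actual store.py code, and the values below are placeholders): the traceback suggests the UPDATE statement is assembled with a trailing comma before WHERE. Building the SET clause by joining the assignments and appending the WHERE clause separately avoids that:

    # Illustrative reconstruction, not the real PyPI source.
    password_hash = '6c2105e62b35507733ee49fdaed9815022b324e6'
    email = 'user@example.com'          # placeholder value
    name = 'myname'

    assignments = ["password='%s'" % password_hash, "email='%s'" % email]
    sql = "update users set " + ", ".join(assignments) + " where name='%s'" % name
    print sql
    # update users set password='...', email='...' where name='myname'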
[ python-Bugs-1177468 ] random.py/os.urandom robustness
Bugs item #1177468, was opened at 2005-04-05 18:03
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library  Group: Python 2.4  Status: Open  Resolution: None  Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random number generator. This is mostly an improvement, but can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g. when random.Random.seed() is called, and keep it alive as os._urandomfd. It is standard programming practice for a daemon process to close file descriptors it has inherited from its parent process, and if it closes the file descriptor corresponding to os._urandomfd, the os module is blissfully unaware; the next time os.urandom() is called, it will try to read from a closed file descriptor (or worse, a new one opened since), with unpredictable results.

My recommendation would be to make os.urandom() open /dev/urandom each time and not keep a persistent file descriptor. This will be slightly slower, but more robust. I am not sure how I feel about a standard library function stealing a file descriptor slot forever, especially when os.urandom() is probably going to be called only once in the lifetime of a program, when the random module is seeded.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
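A sketch of the behaviour the submitter is asking for, opening /dev/urandom afresh on every call instead of caching a module-level descriptor. This illustrates the proposal only; it is not the actual os.urandom() implementation:

    import os

    def urandom_noncaching(n):
        """Return n random bytes, opening /dev/urandom fresh each call."""
        fd = os.open("/dev/urandom", os.O_RDONLY)
        try:
            chunks = []
            while n > 0:
                data = os.read(fd, n)
                if not data:            # should not happen for /dev/urandom
                    raise OSError("unexpected end of /dev/urandom")
                chunks.append(data)
                n -= len(data)
            return "".join(chunks)
        finally:
            os.close(fd)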
[ python-Bugs-1177468 ] random.py/os.urandom robustness
Bugs item #1177468, was opened at 2005-04-05 18:03
Message generated for change (Comment added) made by majid
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library  Group: Python 2.4  Status: Open  Resolution: None  Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random number generator. This is mostly an improvement, but can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g. when random.Random.seed() is called, and keep it alive as os._urandomfd. It is standard programming practice for a daemon process to close file descriptors it has inherited from its parent process, and if it closes the file descriptor corresponding to os._urandomfd, the os module is blissfully unaware; the next time os.urandom() is called, it will try to read from a closed file descriptor (or worse, a new one opened since), with unpredictable results.

My recommendation would be to make os.urandom() open /dev/urandom each time and not keep a persistent file descriptor. This will be slightly slower, but more robust. I am not sure how I feel about a standard library function stealing a file descriptor slot forever, especially when os.urandom() is probably going to be called only once in the lifetime of a program, when the random module is seeded.

--

>Comment By: Fazal Majid (majid)
Date: 2005-04-05 18:06
Message:
Logged In: YES user_id=110477

There are many modules that have a dependency on random, for instance os.tempnam(), and a program could well inadvertently use it before closing file descriptors.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1161595 ] Minor error in section 3.2
Bugs item #1161595, was opened at 2005-03-11 11:29
Message generated for change (Comment added) made by isandler
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161595&group_id=5470

Category: Documentation  Group: None  Status: Open  Resolution: Invalid  Priority: 1
Submitted By: Jeremy Barbay (jyby)
Assigned to: Nobody/Anonymous (nobody)
Summary: Minor error in section 3.2

Initial Comment:
In the section "3.2 First Steps Towards Programming" of the Python tutorial (http://docs.python.org/tut/node5.html), the output of both implementations of the Fibonacci sequence computation is incorrect. As written, only one 1 should be output. You should either remove one 1 from the output, or replace the lines "print b" and "print b," by "print a" and "print a,". This is minor but might unnecessarily confuse beginners.

--

Comment By: Ilya Sandler (isandler)
Date: 2005-04-05 19:52
Message:
Logged In: YES user_id=971153

It indeed seems that the output in the tutorial is correct. Could you close or delete the bug then? Thanks.

--

Comment By: Jeremy Barbay (jyby)
Date: 2005-03-11 11:40
Message:
Logged In: YES user_id=149696

All my apologies: I didn't check my code correctly. As the algorithm initializes a with 0 instead of 1, the output is correct.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161595&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
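For reference, the tutorial's loop initializes a to 0 and b to 1 and prints b, so the two leading 1s in the shown output are correct:

    >>> a, b = 0, 1
    >>> while b < 10:
    ...     print b
    ...     a, b = b, a+b
    ...
    1
    1
    2
    3
    5
    8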
[ python-Bugs-1177468 ] random.py/os.urandom robustness
Bugs item #1177468, was opened at 2005-04-06 01:03
Message generated for change (Comment added) made by jafo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library  Group: Python 2.4  Status: Open  Resolution: None  Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random number generator. This is mostly an improvement, but can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g. when random.Random.seed() is called, and keep it alive as os._urandomfd. It is standard programming practice for a daemon process to close file descriptors it has inherited from its parent process, and if it closes the file descriptor corresponding to os._urandomfd, the os module is blissfully unaware; the next time os.urandom() is called, it will try to read from a closed file descriptor (or worse, a new one opened since), with unpredictable results.

My recommendation would be to make os.urandom() open /dev/urandom each time and not keep a persistent file descriptor. This will be slightly slower, but more robust. I am not sure how I feel about a standard library function stealing a file descriptor slot forever, especially when os.urandom() is probably going to be called only once in the lifetime of a program, when the random module is seeded.

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:11
Message:
Logged In: YES user_id=81797

Just providing some feedback: I'm able to reproduce this. Importing random will cause this file descriptor to be opened. Opening urandom on every call could lead to unacceptable syscall overhead for some. Perhaps there should be a "urandomcleanup" method that closes the file descriptor, and then random could get the bytes from urandom() and clean up after itself? Personally, I only clean up the file descriptors I have allocated when I fork a new process. On the one hand I agree with you about sucking up an fd in the standard library, but on the other hand I'm thinking that you just shouldn't be closing file descriptors for stuff you'll be needing. That's my two cents on this bug.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-06 01:06
Message:
Logged In: YES user_id=110477

There are many modules that have a dependency on random, for instance os.tempnam(), and a program could well inadvertently use it before closing file descriptors.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
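A rough sketch of the cleanup hook jafo floats. The names here (urandomcleanup, _urandom, _urandom_fd) are hypothetical; nothing like this exists in the os module, and the caching reader only mirrors the idea of a module-level descriptor:

    import os

    _urandom_fd = None

    def _urandom(n):
        # Hypothetical caching reader: open /dev/urandom once, reuse the fd.
        global _urandom_fd
        if _urandom_fd is None:
            _urandom_fd = os.open("/dev/urandom", os.O_RDONLY)
        chunks = []
        while n > 0:
            data = os.read(_urandom_fd, n)
            if not data:
                break
            chunks.append(data)
            n -= len(data)
        return "".join(chunks)

    def urandomcleanup():
        """Close the cached descriptor, e.g. before a daemon closes inherited fds."""
        global _urandom_fd
        if _urandom_fd is not None:
            os.close(_urandom_fd)
            _urandom_fd = None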
[ python-Bugs-1177077 ] [PyPI] Password reset problem.
Bugs item #1177077, was opened at 2005-04-05 13:56
Message generated for change (Comment added) made by jafo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177077&group_id=5470

Category: None  Group: None  >Status: Closed  >Resolution: Invalid  Priority: 5
Submitted By: Darek Suchojad (dsuch)
Assigned to: Nobody/Anonymous (nobody)
Summary: [PyPI] Password reset problem.

Initial Comment:
Hello, the URL for resetting a password to PyPI is http://www.python.org/pypi?:action=password_reset&[EMAIL PROTECTED] However, this page yields the message

"""
Error...
There's been a problem with your request
psycopg.ProgrammingError: ERROR: syntax error at or near "where" at character 104
update users set password='6c2105e62b35507733ee49fdaed9815022b324e6', email='[EMAIL PROTECTED]', where name='myname'
"""

Clearly, there's a superfluous comma before 'where name='. I'm filing this bug report per request from webmaster'python'org.

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:24
Message:
Logged In: YES user_id=81797

I know Brett told you to submit it here, but I can't imagine why that would be more appropriate than using your first instinct and submitting it to the pypi tracker. It looks like the bug is on line 602 of "store.py": change ", where" to " where".

Sean

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177077&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1175022 ] property example code error
Bugs item #1175022, was opened at 2005-04-01 20:09
Message generated for change (Comment added) made by jafo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1175022&group_id=5470

Category: Documentation  Group: Python 2.4  Status: Open  Resolution: None  Priority: 5
Submitted By: John Ridley (ojokimu)
Assigned to: Nobody/Anonymous (nobody)
Summary: property example code error

Initial Comment:
The example code for 'property' in lib/built-in-funcs.html may produce an error if run "as is":

Python 2.4.1 (#1, Mar 31 2005, 21:33:58)
[GCC 3.4.1 (Mandrakelinux (Alpha 3.4.1-3mdk)] on linux2
>>> class C(object):
...     def getx(self): return self.__x
...     def setx(self, value): self.__x = value
...     def delx(self): del self.__x
...     x = property(getx, setx, delx, "I'm the 'x' property.")
...
>>> c=C()
>>> c.x
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 2, in getx
AttributeError: 'C' object has no attribute '_C__x'

The same goes for 'del c.x' (although not 'c.x = 0', of course). A more "typical" way of defining managed attributes would be to include an '__init__' as follows:

class C(object):
    def __init__(self):
        self.__x = None
    def getx(self): return self.__x
    def setx(self, value): self.__x = value
    def delx(self): del self.__x
    x = property(getx, setx, delx, "I'm the 'x' property.")

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:33
Message:
Logged In: YES user_id=81797

I agree, adding the __init__ to set a value would be useful. +1

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1175022&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
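With the __init__ the submitter proposes, the failing parts of the interactive session go through. A quick check (a sketch, not the documentation's eventual wording):

    class C(object):
        def __init__(self):
            self.__x = None
        def getx(self): return self.__x
        def setx(self, value): self.__x = value
        def delx(self): del self.__x
        x = property(getx, setx, delx, "I'm the 'x' property.")

    c = C()
    print c.x      # prints None instead of raising AttributeError
    c.x = 0
    print c.x      # prints 0
    del c.x        # no longer raises AttributeError either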
[ python-Bugs-1174606 ] Reading /dev/zero causes SystemError
Bugs item #1174606, was opened at 2005-04-01 04:48
Message generated for change (Comment added) made by jafo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

Category: Python Interpreter Core  Group: None  Status: Open  Resolution: None  Priority: 5
Submitted By: Adam Olsen (rhamphoryncus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reading /dev/zero causes SystemError

Initial Comment:
$ python -c 'open("/dev/zero").read()'
Traceback (most recent call last):
  File "", line 1, in ?
SystemError: ../Objects/stringobject.c:3316: bad argument to internal function

Compare with these two variants:

$ python -c 'open("/dev/zero").read(2**31-1)'
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError

$ python -c 'open("/dev/zero").read(2**31)'
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: long int too large to convert to int

The unsized read should produce either MemoryError or OverflowError instead of SystemError. Tested with Python 2.2, 2.3, and 2.4.

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:39
Message:
Logged In: YES user_id=81797

I am able to reproduce this on a Fedora Core 3 Linux system:

>>> fp = open('/dev/zero', 'rb')
>>> d = fp.read()
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError
>>> print os.stat('/dev/zero').st_size
0

What about only trusting st_size if the file is a regular file, not a directory or other type of special file?

Sean

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-02 12:31
Message:
Logged In: YES user_id=4771

os.stat() doesn't always give consistent results on dev files. On my machine, for some reason, os.stat('/dev/null') appears to be random (and extremely large). I suspect that on the OP's machine os.stat('/dev/zero') is not 0 either, but a random number that turns out to be negative, hence a "bad argument" SystemError.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-01 21:42
Message:
Logged In: YES user_id=21627

I think it should trust the stat result, and then find that it cannot allocate that much memory. Actually, os.stat("/dev/zero").st_size is 0, so something else must be going on.

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-01 09:58
Message:
Logged In: YES user_id=4771

I think that file.read() with no argument needs to be more conservative. Currently it asks and trusts stat() to get the file size, but this can lead to just plain wrong results on special devices. (I had the problem that open('/dev/null').read() would give a MemoryError!) We can argue whether a plain read() on special devices is a good idea or not, but I guess that not blindly trusting stat() if it returns huge values could be a good idea.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
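The heuristic jafo suggests would live in the buffer-sizing code of fileobject.c; the sketch below only illustrates the idea in Python (names and the default chunk size are illustrative, not the actual CPython change):

    import os, stat

    def initial_buffer_size(fileobj, default=8192):
        # Only trust st_size for regular files; for character devices such as
        # /dev/zero (where st_size is 0 or meaningless) fall back to a fixed
        # chunk and grow the buffer on demand.
        st = os.fstat(fileobj.fileno())
        if stat.S_ISREG(st.st_mode):
            return max(st.st_size - fileobj.tell() + 1, default)
        return default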
[ python-Bugs-1174606 ] Reading /dev/zero causes SystemError
Bugs item #1174606, was opened at 2005-04-01 06:48
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

Category: Python Interpreter Core  Group: None  Status: Open  Resolution: None  Priority: 5
Submitted By: Adam Olsen (rhamphoryncus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reading /dev/zero causes SystemError

Initial Comment:
$ python -c 'open("/dev/zero").read()'
Traceback (most recent call last):
  File "", line 1, in ?
SystemError: ../Objects/stringobject.c:3316: bad argument to internal function

Compare with these two variants:

$ python -c 'open("/dev/zero").read(2**31-1)'
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError

$ python -c 'open("/dev/zero").read(2**31)'
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: long int too large to convert to int

The unsized read should produce either MemoryError or OverflowError instead of SystemError. Tested with Python 2.2, 2.3, and 2.4.

--

>Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 08:40
Message:
Logged In: YES user_id=21627

The problem is different. Instead, _PyString_Resize complains that the new buffersize of the string is negative. This in turn happens because the string manages to grow larger than 2GB, which in turn happens because buffersize is size_t, yet _PyString_Resize expects int. I don't know how Linux manages to allocate such a large string without thrashing.

There is a minor confusion with stat() as well: new_buffersize tries to find out how many bytes are left to the end of the file. In the case of /dev/zero, both fstat and lseek are "lying" by returning 0. As lseek returns 0, ftell is invoked and returns non-zero. Then, new_buffersize does not trust the values and just adds BIGCHUNK.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 05:39
Message:
Logged In: YES user_id=81797

I am able to reproduce this on a Fedora Core 3 Linux system:

>>> fp = open('/dev/zero', 'rb')
>>> d = fp.read()
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError
>>> print os.stat('/dev/zero').st_size
0

What about only trusting st_size if the file is a regular file, not a directory or other type of special file?

Sean

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-02 14:31
Message:
Logged In: YES user_id=4771

os.stat() doesn't always give consistent results on dev files. On my machine, for some reason, os.stat('/dev/null') appears to be random (and extremely large). I suspect that on the OP's machine os.stat('/dev/zero') is not 0 either, but a random number that turns out to be negative, hence a "bad argument" SystemError.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-01 23:42
Message:
Logged In: YES user_id=21627

I think it should trust the stat result, and then find that it cannot allocate that much memory. Actually, os.stat("/dev/zero").st_size is 0, so something else must be going on.

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-01 11:58
Message:
Logged In: YES user_id=4771

I think that file.read() with no argument needs to be more conservative. Currently it asks and trusts stat() to get the file size, but this can lead to just plain wrong results on special devices. (I had the problem that open('/dev/null').read() would give a MemoryError!) We can argue whether a plain read() on special devices is a good idea or not, but I guess that not blindly trusting stat() if it returns huge values could be a good idea.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
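Until the buffer-sizing code is fixed, a caller-side workaround consistent with the discussion above is to avoid unsized read() on special devices and read bounded chunks instead. This is an illustrative sketch, not something proposed in the tracker item itself:

    def read_limited(path, limit, chunk=65536):
        # Read at most `limit` bytes from `path` in bounded chunks, so the
        # buffer never has to guess the "size" of a special device.
        f = open(path, 'rb')
        try:
            pieces = []
            remaining = limit
            while remaining > 0:
                data = f.read(min(chunk, remaining))
                if not data:
                    break
                pieces.append(data)
                remaining -= len(data)
            return ''.join(pieces)
        finally:
            f.close()

    print len(read_limited('/dev/zero', 1024 * 1024))   # 1048576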
[ python-Bugs-1174606 ] Reading /dev/zero causes SystemError
Bugs item #1174606, was opened at 2005-04-01 04:48
Message generated for change (Comment added) made by jafo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

Category: Python Interpreter Core  Group: None  Status: Open  Resolution: None  Priority: 5
Submitted By: Adam Olsen (rhamphoryncus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reading /dev/zero causes SystemError

Initial Comment:
$ python -c 'open("/dev/zero").read()'
Traceback (most recent call last):
  File "", line 1, in ?
SystemError: ../Objects/stringobject.c:3316: bad argument to internal function

Compare with these two variants:

$ python -c 'open("/dev/zero").read(2**31-1)'
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError

$ python -c 'open("/dev/zero").read(2**31)'
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: long int too large to convert to int

The unsized read should produce either MemoryError or OverflowError instead of SystemError. Tested with Python 2.2, 2.3, and 2.4.

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 06:52
Message:
Logged In: YES user_id=81797

Linux can do a very fast allocation if it has swap available. It reserves space, but does not actually assign the memory until you try to use it. In my case, I have 1GB of RAM, around 700MB free, and another 2GB in swap, so I have plenty unless I use it. In C I can malloc 1GB, and unless I write every page in that block, the system doesn't really give the pages to the process.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 06:40
Message:
Logged In: YES user_id=21627

The problem is different. Instead, _PyString_Resize complains that the new buffersize of the string is negative. This in turn happens because the string manages to grow larger than 2GB, which in turn happens because buffersize is size_t, yet _PyString_Resize expects int. I don't know how Linux manages to allocate such a large string without thrashing.

There is a minor confusion with stat() as well: new_buffersize tries to find out how many bytes are left to the end of the file. In the case of /dev/zero, both fstat and lseek are "lying" by returning 0. As lseek returns 0, ftell is invoked and returns non-zero. Then, new_buffersize does not trust the values and just adds BIGCHUNK.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:39
Message:
Logged In: YES user_id=81797

I am able to reproduce this on a Fedora Core 3 Linux system:

>>> fp = open('/dev/zero', 'rb')
>>> d = fp.read()
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError
>>> print os.stat('/dev/zero').st_size
0

What about only trusting st_size if the file is a regular file, not a directory or other type of special file?

Sean

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-02 12:31
Message:
Logged In: YES user_id=4771

os.stat() doesn't always give consistent results on dev files. On my machine, for some reason, os.stat('/dev/null') appears to be random (and extremely large). I suspect that on the OP's machine os.stat('/dev/zero') is not 0 either, but a random number that turns out to be negative, hence a "bad argument" SystemError.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-01 21:42
Message:
Logged In: YES user_id=21627

I think it should trust the stat result, and then find that it cannot allocate that much memory. Actually, os.stat("/dev/zero").st_size is 0, so something else must be going on.

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-01 09:58
Message:
Logged In: YES user_id=4771

I think that file.read() with no argument needs to be more conservative. Currently it asks and trusts stat() to get the file size, but this can lead to just plain wrong results on special devices. (I had the problem that open('/dev/null').read() would give a MemoryError!) We can argue whether a plain read() on special devices is a good idea or not, but I guess that not blindly trusting stat() if it returns huge values could be a good idea.

--

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com