[issue10438] list an example for calling static methods from WITHIN classes
New submission from Ian : Concerning this section of the docs: http://docs.python.org/library/functions.html#staticmethod There is no example for calling a static method from another static method within the same class. As I discovered later, it's simple: C.f() -- from inside the class or outside it. A total newbie will accept this and move on... but in other programming languages, it's frowned upon to use the class name from within the class. For example, in PHP you use the "self::" prefix, and in Java you don't need a prefix at all. So, even though I had it right the first time, it didn't SEEM right... so I went on a wild goose chase, for nothing. Googling "java call static method" will get you Java documentation that lists both cases, as does "c++ call static method" and "php call static method". I feel that by adding "Note: you must also use the C.f() syntax when calling from within the class", the documentation will be more complete. -- assignee: d...@python components: Documentation messages: 121314 nosy: d...@python, ifreecarve priority: normal severity: normal status: open title: list an example for calling static methods from WITHIN classes type: feature request ___ Python tracker <http://bugs.python.org/issue10438> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10438] list an example for calling static methods from WITHIN classes
Ian added the comment: Am I to understand that self.f() is a valid way to call a static method? Can you see how that would run counter to intuition for someone who is familiar with other languages? Given that, I would make the following (more precise) change:

< It can be called either on the class (such as C.f()) or on an instance (such as C().f()).
---
> It can be called either on the class (such as C.f()) or on an instance (such
> as C().f() or self.f()).

-- ___ Python tracker <http://bugs.python.org/issue10438> ___
[issue10438] list an example for calling static methods from WITHIN classes
Ian added the comment: Disregard my previous comment; calling self.f() does not work from a static method. I stand by my previous suggestion, but I'll clarify it like this: "Note: you must also use the C.f() syntax when calling from a static method within the C class" -- ___ Python tracker <http://bugs.python.org/issue10438> ___
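For reference, the behavior under discussion can be shown in a few lines (a sketch with a made-up class name, not the documentation's wording):

```python
class C:
    @staticmethod
    def f():
        return "f"

    @staticmethod
    def g():
        # Even inside the class body, the call must be qualified with the
        # class name; there is no implicit static scope as in Java or PHP,
        # and self.f() is unavailable because a staticmethod has no self.
        return C.f() + "g"
```

Calling C.g() works the same whether invoked from inside or outside the class.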
[issue38300] Documentation says destruction of TemporaryDirectory object will also delete it, but it does not.
New submission from Ian : The documentation found here https://docs.python.org/3.7/library/tempfile.html#tempfile.TemporaryDirectory states the following: "On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem." However, calling del on the object does not call the cleanup method:

t = tempfile.TemporaryDirectory()
del t

I'm not sure if that is incorrect documentation or my own misunderstanding of what you call destruction. I tested adding my own

def __del__(self):
    self.cleanup()

which worked as I expected. -- messages: 353393 nosy: iarp priority: normal severity: normal status: open title: Documentation says destruction of TemporaryDirectory object will also delete it, but it does not. type: behavior versions: Python 3.6, Python 3.7 ___ Python tracker <https://bugs.python.org/issue38300> ___
[issue38300] Documentation says destruction of TemporaryDirectory object will also delete it, but it does not.
Ian added the comment: I'm sorry, I should've thought to check my Python version. I was on 3.6.3, where it would not be deleted; after updating to 3.6.8 it works as intended. -- resolution: -> not a bug stage: -> resolved status: open -> closed versions: -Python 3.7 ___ Python tracker <https://bugs.python.org/issue38300> ___
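For anyone landing here later, the context-manager form sidesteps the question of destructor timing entirely, and cleanup() can be called explicitly (behaviour shown on current Python 3):

```python
import os
import tempfile

# The context manager removes the directory deterministically on exit,
# regardless of when (or whether) the object is garbage-collected.
with tempfile.TemporaryDirectory() as path:
    assert os.path.isdir(path)
assert not os.path.exists(path)

# Explicit cleanup() is the non-context-manager equivalent.
t = tempfile.TemporaryDirectory()
kept = t.name
t.cleanup()
```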
[issue34798] pprint ignores the compact parameter for dicts
Ian added the comment: I came across this and was confused by it too. I also don't understand the justification for dicts not being affected by the `compact` parameter. If the "compact form" is having separate entries or elements on one line, instead of having each element separated by a new line, then this seems like inconsistent behavior. **If a dict is short enough, it will appear in "compact form", just like a list.** If a dict is too long for the width, then each item will appear in "expanded form", also like a list. However, the actual compact parameter only affects sequence items. Why is this? No reason is given in #19132. It does mention a review, but that doesn't seem to be available (or I don't know how to get to it), so I can't find the reason for the decision. -- nosy: +iansedano ___ Python tracker <https://bugs.python.org/issue34798> ___
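The asymmetry described above is easy to reproduce (the width and data below are arbitrary):

```python
import pprint

data = {"a": list(range(20)), "b": list(range(20))}

# compact=True packs list elements onto as few lines as fit in width...
compact = pprint.pformat(data, width=40, compact=True)
# ...while compact=False prints one list element per line.
loose = pprint.pformat(data, width=40, compact=False)

# In both cases, once the dict itself is too wide, each key still starts
# its own line; `compact` only changes how the nested sequences wrap.
```

A dict that fits in the width is printed on one line either way, e.g. pprint.pformat({"k": 1}) gives {'k': 1}.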
[issue10438] list an example for calling static methods from WITHIN classes
Ian added the comment: I agree that the use case is probably rare. I agree that to someone intimately familiar with the "self-consistent rules" of Python, the correctness of the C.f() approach is probably obvious. However, your documentation says: Static methods in Python are similar to those found in Java or C++. I feel that it's a mistake to purposefully avoid saying where that similarity ends. In those languages (and in many others), fully qualified function calls from within the same class are redundant and border on "code smell". We agree that this aspect of Python is not mentioned in the documentation, and we disagree on whether it should be. For myself, even in the 7 years and thousands of lines of Python since I opened this issue, I still don't find it intuitive or obvious that a method would need to know the name of the class that contains it. That doesn't make the language "wrong" in any way; it makes the documentation incomplete for not addressing it. The __class__.f() usage in Python 3 seems excellent. If that's the preferred way to do it, then that might be a way to approach the documentation. "To call one static method from another within the same class, as of Python 3 you may use __class__.f() instead of C.f(). For Python 2.x, you must still use the name of the class itself, C.f(), as if you were calling from outside the class." (My wording is still less than ideal, but you get the idea.) -- ___ Python tracker <http://bugs.python.org/issue10438> ___
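The `__class__.f()` spelling mentioned above can be sketched like this (Python 3 only; the class and method names are made up):

```python
class C:
    @staticmethod
    def f():
        return 42

    @staticmethod
    def g():
        # In Python 3, __class__ inside a method body refers to the
        # enclosing class (the same closure cell that powers
        # zero-argument super()), so the method does not need to
        # repeat the class's own name.
        return __class__.f()
```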
[issue10438] list an example for calling static methods from WITHIN classes
Ian added the comment: I would hope that the docs would cater to people who aren't sure how the language works (and who want to confirm that they are using proper patterns). If people already know how the language works, they won't need the docs. Whether or not you refer to Java and C++, you should state the best practices for both internal and external calling of static methods in Python. -- ___ Python tracker <http://bugs.python.org/issue10438> ___
[issue10438] list an example for calling static methods from WITHIN classes
Ian added the comment: As indicated earlier, I would prefer to see clear instructions on how to call a class's static method from another static method within the same class. Currently, it's only clear how to call from outside the class. If that's not going to happen, then I agree that this issue should be closed. -- ___ Python tracker <http://bugs.python.org/issue10438> ___
[issue46550] __slots__ updates despite being read-only
New submission from Ian Lee : Hi there - I admit that I don't really understand the internals here, so maybe there is a good reason for this, but I thought it was weird when I just ran across it. If I create a new class `A` and set its `__slots__`:

```python
➜ ~ docker run -it python:3.10
Python 3.10.2 (main, Jan 26 2022, 20:07:09) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class A(object):
...     __slots__ = ["foo"]
...
>>> A.__slots__
['foo']
```

If I then go to add a new attribute to extend it on the class, that works:

```python
>>> A.__slots__ += ["bar"]
>>> A.__slots__
['foo', 'bar']
```

But then if I create an instance of that class and try to update `__slots__` on that instance, I get an AttributeError that `__slots__` is read-only, and yet it still updates the `__slots__` variable:

```python
>>> a = A()
>>> a.__slots__
['foo', 'bar']
>>> a.__slots__ += ["baz"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object attribute '__slots__' is read-only
>>> a.__slots__
['foo', 'bar', 'baz']
>>> A.__slots__
['foo', 'bar', 'baz']
```

Maybe there is a good reason for this, but I was definitely surprised that I would get a "this attribute is read-only" error and yet still see that attribute updated. I first found this in Python 3.8.5, but I also tested using Docker with the python:3.10 image, which gave me Python 3.10.2. Cheers! -- components: Library (Lib) messages: 411886 nosy: IanLee1521 priority: normal severity: normal status: open title: __slots__ updates despite being read-only type: behavior versions: Python 3.10, Python 3.8 ___ Python tracker <https://bugs.python.org/issue46550> ___
[issue46550] __slots__ updates despite being read-only
Ian Lee added the comment: @sobolevn - Hmm, interesting. I tested in Python 3.9, which I had available, and I can reproduce your result, but I think it's different because you are using a tuple. If I use a list then I see my same reported behavior in 3.9:

```python
Python 3.9.10 (main, Jan 26 2022, 20:56:53) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
...     __slots__ = ('x',)
...
>>> a = A()
>>> a.__slots__
('x',)
>>> a.__slots__ += ('y',)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object attribute '__slots__' is read-only
>>> a.__slots__
('x',)
>>>
>>> class B:
...     __slots__ = ['x']
...
>>> b = B()
>>> b.__slots__
['x']
>>> b.__slots__ += ['y']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'B' object attribute '__slots__' is read-only
>>> b.__slots__
['x', 'y']
```

-- ___ Python tracker <https://bugs.python.org/issue46550> ___
[issue46550] __slots__ updates despite being read-only
Ian Lee added the comment: @ronaldoussoren - right, I agree that raising the AttributeErrors is the right thing. The part that feels like a bug to me is that the exception says the attribute is read-only, and yet it is not being treated that way (even though, as you point out, the end result doesn't "work"). Maybe this is something about the augmented assignment that I'm just not grokking... I read the blurb @eryksun posted several times, but I'm still not seeing what is going on. -- ___ Python tracker <https://bugs.python.org/issue46550> ___
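What is happening is ordinary augmented-assignment semantics rather than anything specific to `__slots__`: `b.__slots__ += ["y"]` first evaluates `list.__iadd__`, which mutates the shared class-level list in place, and only afterwards attempts the attribute store on the instance, which is the step that raises. A minimal sketch of the list case:

```python
class B:
    __slots__ = ["x"]

b = B()
try:
    # Step 1: b.__slots__ resolves to the class's list object.
    # Step 2: list.__iadd__ extends that list in place.
    # Step 3: the setattr back onto the instance fails, because the
    # instance has no __dict__ and "__slots__" is not itself a slot.
    b.__slots__ += ["y"]
except AttributeError:
    pass

# The in-place extend from step 2 already happened, so the shared
# class attribute is mutated despite the exception.
mutated = B.__slots__
```

A tuple has no `__iadd__`, so `+=` builds a new tuple and the failing setattr leaves the original untouched, which is why the tuple variant above looks "correct".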
[issue1167] gdbm/ndbm 1.8.1+ needs libgdbm_compat.so
New submission from Ian Kelly: The ndbm functions in gdbm 1.8.1+ require the gdbm_compat library in addition to gdbm. -- components: Build, Extension Modules files: gdbm_ndbm.diff messages: 55939 nosy: ikelly severity: normal status: open title: gdbm/ndbm 1.8.1+ needs libgdbm_compat.so type: compile error versions: Python 2.5, Python 2.6, Python 3.0 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1167> __ gdbm_ndbm.diff Description: Binary data
[issue12059] hashlib does not handle missing hash functions correctly
New submission from Ian Wienand : If one of the hash functions isn't defined in _hashlib, the code suggests it should just be skipped:

===
# this one has no builtin implementation, don't define it
pass
===

This doesn't happen, however; because the ImportError is not caught, the module decides the whole _hashlib module isn't available and tries to fall back to the older individual libraries. You then get thrown an unrelated error about _md5 being unavailable. You can easily replicate this:

---
$ python
Python 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def foo():
...     raise ValueError
...
>>> import _hashlib
>>> _hashlib.openssl_sha224 = foo
>>> import hashlib
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/hashlib.py", line 136, in <module>
    md5 = __get_builtin_constructor('md5')
  File "/usr/lib/python2.6/hashlib.py", line 63, in __get_builtin_constructor
    import _md5
ImportError: No module named _md5
>>>
---

I think the solution is to catch the ImportError in __get_builtin_constructor and, if caught, consider the hash function unsupported. -- files: hashlib.py.diff keywords: patch messages: 135794 nosy: Ian.Wienand priority: normal severity: normal status: open title: hashlib does not handle missing hash functions correctly type: behavior versions: Python 2.6, Python 2.7 Added file: http://bugs.python.org/file21971/hashlib.py.diff ___ Python tracker <http://bugs.python.org/issue12059> ___
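The proposed fix can be sketched like this (a simplified stand-in, not the actual Lib/hashlib.py source): swallow the ImportError so a missing accelerator module means "unsupported hash" rather than a confusing secondary failure.

```python
# Simplified sketch of the fix: treat a missing C module as an
# unsupported hash instead of letting ImportError escape.
def get_builtin_constructor(name):
    try:
        if name == "md5":
            import _md5
            return _md5.md5
        elif name in ("sha", "sha1"):
            import _sha1
            return _sha1.sha1
    except ImportError:
        pass  # fall through: the builtin module isn't compiled in
    raise ValueError("unsupported hash type " + name)
```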
[issue10748] zipfile does not write empty ZIP structure if close() called after __init__() as doc suggests
New submission from Ian Stevens : The zipfile documentation (http://docs.python.org/library/zipfile.html) states: "If the file is created with mode 'a' or 'w' and then close()d without adding any files to the archive, the appropriate ZIP structures for an empty archive will be written to the file." This is not the case, e.g.:

>>> from StringIO import StringIO
>>> import zipfile
>>> s = StringIO()
>>> z = zipfile.ZipFile(s, 'w')
>>> z.close()
>>> s.len
0

The code for zipfile (http://svn.python.org/projects/python/trunk/Lib/zipfile.py) does not support the documentation either. The ending records are written only if ZipFile._didModify is True, and that attribute is only set to True if writestr() or write() are called. Either the code should be fixed to write the ending records on an empty zip, or the documentation should be changed to reflect the existing behaviour. Test case (for Lib/test/test_zipfile):

def test_close_empty_zip_creates_valid_zip(self):
    # Test that close() called on a ZipFile without write creates a valid ZIP.
    zf = zipfile.ZipFile(TESTFN, "w")
    zf.close()
    chk = zipfile.is_zipfile(TESTFN)
    self.assertTrue(chk)

-- assignee: d...@python components: Documentation, Library (Lib) messages: 124433 nosy: Ian.Stevens, d...@python priority: normal severity: normal status: open title: zipfile does not write empty ZIP structure if close() called after __init__() as doc suggests type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue10748> ___
[issue10748] zipfile does not write empty ZIP structure if close() called after __init__() as doc suggests
Ian Stevens added the comment: Yes, I'm using 2.6. If this is not the expected behaviour in 2.6, the doc should reflect that with a "New in version 2.7" note. -- ___ Python tracker <http://bugs.python.org/issue10748> ___
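On current Python 3 releases the documented behaviour holds; a quick check using an in-memory buffer rather than a real file:

```python
import io
import zipfile

buf = io.BytesIO()
zipfile.ZipFile(buf, "w").close()

# close() on an empty archive writes the 22-byte end-of-central-directory
# record, so the result is a valid (empty) ZIP.
data = buf.getvalue()
```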
[issue11220] https sslv3 error 14077417: illegal parameter
New submission from Ian Wetherbee : Certain https URLs do not open using urllib2 (py2.6) and urllib (py3.1), but they open using the latest version of curl and firefox. To reproduce:

>>> import urllib.request
>>> urllib.request.urlopen("https://ui2web1.apps.uillinois.edu/BANPROD1/bwskfcls.P_GetCrse")
Traceback (most recent call last):
  File "/usr/lib64/python3.1/urllib/request.py", line 1072, in do_open
    h.request(req.get_method(), req.selector, req.data, headers)
  File "/usr/lib64/python3.1/http/client.py", line 932, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python3.1/http/client.py", line 970, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python3.1/http/client.py", line 928, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python3.1/http/client.py", line 782, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.1/http/client.py", line 723, in send
    self.connect()
  File "/usr/lib64/python3.1/http/client.py", line 1055, in connect
    self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
  File "/usr/lib64/python3.1/ssl.py", line 381, in wrap_socket
    suppress_ragged_eofs=suppress_ragged_eofs)
  File "/usr/lib64/python3.1/ssl.py", line 135, in __init__
    raise x
  File "/usr/lib64/python3.1/ssl.py", line 131, in __init__
    self.do_handshake()
  File "/usr/lib64/python3.1/ssl.py", line 327, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [Errno 1] _ssl.c:488: error:14077417:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert illegal parameter

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.1/urllib/request.py", line 121, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib64/python3.1/urllib/request.py", line 349, in open
    response = self._open(req, data)
  File "/usr/lib64/python3.1/urllib/request.py", line 367, in _open
    '_open', req)
  File "/usr/lib64/python3.1/urllib/request.py", line 327, in _call_chain
    result = func(*args)
  File "/usr/lib64/python3.1/urllib/request.py", line 1098, in https_open
    return self.do_open(http.client.HTTPSConnection, req)
  File "/usr/lib64/python3.1/urllib/request.py", line 1075, in do_open
    raise URLError(err)
urllib.error.URLError:

Curl request:

$ curl https://ui2web1.apps.uillinois.edu/BANPROD1/bwskfcls.P_GetCrse
302 Found
Found
The document has moved here: https://apps.uillinois.edu/selfservice/error/
Oracle-Application-Server-10g/10.1.2.3.0 Oracle-HTTP-Server Server at ui2web1a.admin.uillinois.edu Port 443

-- components: None messages: 128626 nosy: Ian.Wetherbee priority: normal severity: normal status: open title: https sslv3 error 14077417: illegal parameter type: behavior versions: Python 2.6, Python 3.1 ___ Python tracker <http://bugs.python.org/issue11220> ___
[issue11220] https sslv3 error 14077417: illegal parameter
Ian Wetherbee added the comment: The server seems to be sending a bad TLS handshake, so curl falls back on SSLv3 with TLS disabled.

curl 7.20.1 (x86_64-redhat-linux-gnu) libcurl/7.20.1 NSS/3.12.8.0 zlib/1.2.3 libidn/1.16 libssh2/1.2.4
Protocols: dict file ftp ftps http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz

$ curl -v https://ui2web1.apps.uillinois.edu/BANPROD1/bwskfcls.P_GetCrse
* About to connect() to ui2web1.apps.uillinois.edu port 443 (#0)
*   Trying 64.22.183.24... connected
* Connected to ui2web1.apps.uillinois.edu (64.22.183.24) port 443 (#0)
* Initializing NSS with certpath: /etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt  CApath: none
* NSS error -12226
* Error in TLS handshake, trying SSLv3...
> GET /BANPROD1/bwskfcls.P_GetCrse HTTP/1.1
> User-Agent: curl/7.20.1 (x86_64-redhat-linux-gnu) libcurl/7.20.1 NSS/3.12.8.0 zlib/1.2.3 libidn/1.16 libssh2/1.2.4
> Host: ui2web1.apps.uillinois.edu
> Accept: */*
>
* Connection died, retrying a fresh connect
* Closing connection #0
* Issue another request to this URL: 'https://ui2web1.apps.uillinois.edu/BANPROD1/bwskfcls.P_GetCrse'
* About to connect() to ui2web1.apps.uillinois.edu port 443 (#0)
*   Trying 64.22.183.24... connected
* Connected to ui2web1.apps.uillinois.edu (64.22.183.24) port 443 (#0)
* TLS disabled due to previous handshake failure
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt  CApath: none
* SSL connection using SSL_RSA_WITH_RC4_128_MD5
* Server certificate:
*   subject: CN=ui2web1.apps.uillinois.edu,OU=AITS 20100517-25690,O=University of Illinois,L=Urbana,ST=Illinois,C=US
*   start date: May 17 00:00:00 2010 GMT
*   expire date: May 17 23:59:59 2011 GMT
*   common name: ui2web1.apps.uillinois.edu
*   issuer: E=premium-ser...@thawte.com,CN=Thawte Premium Server CA,OU=Certification Services Division,O=Thawte Consulting cc,L=Cape Town,ST=Western Cape,C=ZA
> GET /BANPROD1/bwskfcls.P_GetCrse HTTP/1.1
> User-Agent: curl/7.20.1 (x86_64-redhat-linux-gnu) libcurl/7.20.1 NSS/3.12.8.0 zlib/1.2.3 libidn/1.16 libssh2/1.2.4
> Host: ui2web1.apps.uillinois.edu
> Accept: */*
>
< HTTP/1.1 302 Found
< Date: Wed, 16 Feb 2011 07:49:43 GMT
< Server: Oracle-Application-Server-10g/10.1.2.3.0 Oracle-HTTP-Server
< Location: https://apps.uillinois.edu/selfservice/error/
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=iso-8859-1
<
302 Found
Found
The document has moved here: https://apps.uillinois.edu/selfservice/error/
Oracle-Application-Server-10g/10.1.2.3.0 Oracle-HTTP-Server Server at ui2web1b.admin.uillinois.edu Port 443
* Closing connection #0

-- ___ Python tracker <http://bugs.python.org/issue11220> ___
[issue11220] https sslv3 error 14077417: illegal parameter
Ian Wetherbee added the comment: Any solution for 2.x? I'm using this with twisted. -- resolution: rejected -> status: pending -> open ___ Python tracker <http://bugs.python.org/issue11220> ___
[issue11220] https sslv3 error 14077417: illegal parameter
Ian Wetherbee added the comment: This works for 2.x, I'm closing this issue:

# custom HTTPS opener, banner's oracle 10g server supports SSLv3 only
import httplib, ssl, urllib2, socket

class HTTPSConnectionV3(httplib.HTTPSConnection):
    def __init__(self, *args, **kwargs):
        httplib.HTTPSConnection.__init__(self, *args, **kwargs)

    def connect(self):
        sock = socket.create_connection((self.host, self.port), self.timeout)
        if self._tunnel_host:
            self.sock = sock
            self._tunnel()
        try:
            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version=ssl.PROTOCOL_SSLv3)
        except ssl.SSLError, e:
            print("Trying SSLv3.")
            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version=ssl.PROTOCOL_SSLv23)

class HTTPSHandlerV3(urllib2.HTTPSHandler):
    def https_open(self, req):
        return self.do_open(HTTPSConnectionV3, req)

# install opener
urllib2.install_opener(urllib2.build_opener(HTTPSHandlerV3()))

if __name__ == "__main__":
    r = urllib2.urlopen("https://ui2web1.apps.uillinois.edu/BANPROD1/bwskfcls.P_GetCrse")
    print(r.read())

-- resolution: -> works for me status: open -> closed ___ Python tracker <http://bugs.python.org/issue11220> ___
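On Python 3 the same idea is usually expressed by handing urllib a custom SSLContext instead of subclassing the connection class (a sketch; PROTOCOL_SSLv3 no longer exists in modern ssl builds, so a default client context stands in here just to show the mechanism):

```python
import ssl
import urllib.request

# Build a context with whatever protocol/cipher settings the broken
# server requires, then attach it to an HTTPSHandler.
ctx = ssl.create_default_context()
handler = urllib.request.HTTPSHandler(context=ctx)
opener = urllib.request.build_opener(handler)

# opener.open("https://example.com/") would now use ctx for TLS setup.
```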
[issue3037] in output
New submission from Ian Bicking <[EMAIL PROTECTED]>: I updated to sphinx trunk and made just a few small changes in my template, and I'm now seeing: Note specifically "", which comes right before extrahead. -- assignee: georg.brandl components: Documentation tools (Sphinx) messages: 67696 nosy: georg.brandl, ianb severity: normal status: open title: in output type: behavior ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3037> ___
[issue3037] in output
Ian Bicking <[EMAIL PROTECTED]> added the comment: You can see the source that produces this in http://svn.pythonpaste.org/Paste/trunk at revision 7387. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3037> ___
[issue3037] in output
Ian Bicking <[EMAIL PROTECTED]> added the comment: Armin says this is a bug that has now been resolved in Jinja. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3037> ___
[issue3512] Change fsync to use fullfsync on platforms (like OS X) that have/need it
New submission from Ian Charnas <[EMAIL PROTECTED]>: fsync on OSX does not actually flush the file to disk as is desired. This is a problem because application developers rely on fsync for file integrity. SQLite [1] and MySQL [2] and other major database systems all use 'fullfsync' on OS X instead of fsync, because 'fullfsync' provides the desired behavior. Because the documented behavior of python's fsync function is to "force write of file with filedescriptor to disk", I believe that on OS X the fullfsync call should be used instead of fsync. The supplied patch adds this functionality in a non-platform-specific way. It checks if there is a FULLFSYNC fcntl call available (using "#ifdef F_FULLFSYNC", where F_FULLFSYNC is defined in sys/fcntl.h), and if this symbol is defined then fcntl(fd, F_FULLFSYNC, 0) is called instead of fsync. [1] SQLite uses fullfsync on all platforms that define it: http://www.sqlite.org/cvstrac/fileview?f=sqlite/src/os_unix.c [2] MySQL uses fullfsync only on the darwin platform and only when F_FULLFSYNC is defined as 51, which seems to be short-sighted in that this symbol may change value in future versions of OS X. To see this code, download a mysql 5.x source snapshot and open up mysql-/innobase/os/os0file.c -- components: Library (Lib) files: fullfsync.patch keywords: patch messages: 70810 nosy: icharnas severity: normal status: open title: Change fsync to use fullfsync on platforms (like OS X) that have/need it type: behavior versions: Python 2.6 Added file: http://bugs.python.org/file11066/fullfsync.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3512> ___
[issue3512] Change fsync to use fullfsync on platforms (like OS X) that have/need it
Ian Charnas <[EMAIL PROTECTED]> added the comment: My patch is against trunk, but really this fix should be applied to all versions that will have future releases. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3512> ___
[issue3517] PATCH - Providing fullfsync on supported platforms
New submission from Ian Charnas <[EMAIL PROTECTED]>: Python currently provides os.fsync to call the POSIX 'fsync' on platforms that support it. While this function forces the operating system to flush a file buffer to the storage device, data may still be waiting in the hardware write buffers on the storage device. Certain platforms (so far, only OS X) provide "fullfsync" [1] to request that storage devices flush their write buffers to the actual physical media. This functionality is especially useful to VCS and DB developers, and already appears in SQLite [2] and MySQL [3], amongst others. This patch includes code changes to Modules/posixmodule.c that expose os.fullfsync on supported platforms, including the appropriate documentation added to Doc/library/os.rst -Ian Charnas [1] Discussion of fsync and fullfsync on the darwin platform: http://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html [2] SQLite uses fullfsync on all platforms that define it: http://www.sqlite.org/cvstrac/fileview?f=sqlite/src/os_unix.c [3] MySQL uses fullfsync only on the darwin platform and only when F_FULLFSYNC is defined as 51, which seems to be short-sighted in that this symbol may change value in future versions of OS X. To see this code, download a mysql 5.x source snapshot and open up mysql-/innobase/os/os0file.c -- components: Library (Lib) files: fullfsync.patch keywords: patch messages: 70842 nosy: icharnas severity: normal status: open title: PATCH - Providing fullfsync on supported platforms type: behavior versions: Python 2.6 Added file: http://bugs.python.org/file11072/fullfsync.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3517> ___
[issue3512] Change fsync to use fullfsync on platforms (like OS X) that have/need it
Ian Charnas <[EMAIL PROTECTED]> added the comment: Done. See 3517: http://bugs.python.org/issue3517

On Thu, Aug 7, 2008 at 12:53 PM, Guido van Rossum <[EMAIL PROTECTED]> wrote:
>
> Guido van Rossum <[EMAIL PROTECTED]> added the comment:
>
> Based on discussion in python-dev, I'm rejecting this patch.
>
> Open a new one if you want to make F_FULLSYNC available.
>
> --
> resolution: -> rejected
> status: open -> closed
>
> ___
> Python tracker <[EMAIL PROTECTED]>
> <http://bugs.python.org/issue3512>
> ___

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3512> ___
[issue3517] PATCH - Providing fullfsync on supported platforms
Ian Charnas <[EMAIL PROTECTED]> added the comment: Sounds fair enough. I was looking forward to the glitz and glamor of the os module, but I'll settle for a good seat in fcntl. Here's a patch implementing just that. -ian Added file: http://bugs.python.org/file11074/fullfsync_fcntl.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3517> ___
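With the fcntl-based approach, the call site in user code would look roughly like this (a sketch: fcntl.F_FULLFSYNC exists only on macOS builds, so the example falls back to plain os.fsync elsewhere):

```python
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"important data")

# F_FULLFSYNC asks the drive itself to flush its write cache to the
# physical media; plain fsync() only guarantees the kernel's buffers
# reached the device.
if hasattr(fcntl, "F_FULLFSYNC"):
    fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
else:
    os.fsync(fd)

os.close(fd)
os.remove(path)
```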
[issue4056] :Class: causes exception
New submission from Ian Bicking <[EMAIL PROTECTED]>: I used a reference like :Class:`something` (note the capitalization) and got this exception:

Traceback (most recent call last):
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/__init__.py", line 135, in main
    app.builder.build_update()
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/builder.py", line 201, in build_update
    summary='targets for %d source files that are '
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/builder.py", line 241, in build
    self.write(docnames, updated_docnames, method)
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/builder.py", line 276, in write
    doctree = self.env.get_and_resolve_doctree(docname, self)
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/environment.py", line 779, in get_and_resolve_doctree
    self.resolve_references(doctree, docname, builder)
  File "/home/ianb/src/env/lib/python2.4/site-packages/sphinx/environment.py", line 998, in resolve_references
    raise RuntimeError('unknown xfileref node encountered: %s' % node)
RuntimeError: unknown xfileref node encountered: deliverance.rules.Drop

-- assignee: georg.brandl components: Documentation tools (Sphinx) messages: 74385 nosy: georg.brandl, ianb severity: normal status: open title: :Class: causes exception ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4056> ___
[issue8426] multiprocessing.Queue fails to get() very large objects
New submission from Ian Davis : I'm trying to parallelize some scientific computing jobs using multiprocessing.Pool. I've also tried rolling my own Pool equivalent using Queues. In trying to return very large result objects from Pool.map()/imap() or via Queue.put(), I've noticed that multiprocessing seems to hang on the receiving end. On Cygwin 1.7.1/Python 2.5.2 it hangs with no CPU activity. On Centos 5.2/Python 2.6.2 it hangs with 100% CPU. cPickle is perfectly capable of pickling these objects, although they may be 100's of MB, so I think it's the communication. There's also some asymmetry in the error whether it's the parent or child putting the large object. The put does appear to succeed; it's the get() on the other end that hangs forever. Example code:

----------
from multiprocessing import *

def child(task_q, result_q):
    while True:
        print " Getting task..."
        task = task_q.get()
        print " Got task", task[:10]
        task = task * 1
        print " Putting result", task[:10]
        result_q.put(task)
        print " Done putting result", task[:10]
        task_q.task_done()

def parent():
    task_q = JoinableQueue()
    result_q = JoinableQueue()
    worker = Process(target=child, args=(task_q, result_q))
    worker.daemon = True
    worker.start()
    #tasks = ["foo", "bar", "ABC" * 1, "baz"]
    tasks = ["foo", "bar", "ABC", "baz"]
    for task in tasks:
        print "Putting task", task[:10], "..."
        task_q.put(task)
        print "Done putting task", task[:10]
    task_q.join()
    for task in tasks:
        print "Getting result..."
        print "Got result", result_q.get()[:10]

if __name__ == '__main__':
    parent()
----------

If run as is, I get:

Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/queues.py", line 242, in _feed
    send(obj)
MemoryError: out of memory

(*** hangs, I hit ^C ***)

Got result Traceback (most recent call last):
Process Process-1:
Traceback (most recent call last):
  File "cygwin_multiprocessing_queue.py", line 32, in <module>
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/process.py", line 237, in _bootstrap
    parent()
  File "cygwin_multiprocessing_queue.py", line 29, in parent
    print "Got result", result_q.get()[:10]
    self.run()
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/process.py", line 93, in run
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/queues.py", line 91, in get
    self._target(*self._args, **self._kwargs)
  File "cygwin_multiprocessing_queue.py", line 6, in child
    res = self._recv()
KeyboardInterrupt
    task = task_q.get()
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/queues.py", line 91, in get
    res = self._recv()
KeyboardInterrupt

If instead I comment out the multiplication in child() and uncomment the large task in parent(), then I get:

 Getting task...
Putting task foo ...
Done putting task foo
Putting task bar ...
 Got task foo
 Putting result foo
Done putting task bar
Putting task ABCABCABCA ...
Done putting task ABCABCABCA
Putting task baz ...
 Done putting result foo
 Getting task...
 Got task bar
 Putting result bar
 Done putting result bar
 Getting task...
Done putting task baz

(*** hangs, I hit ^C ***)

Traceback (most recent call last):
  File "cygwin_multiprocessing_queue.py", line 32, in <module>
    parent()
  File "cygwin_multiprocessing_queue.py", line 26, in parent
    task_q.join()
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/queues.py", line 303, in join
    self._cond.wait()
  File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.2.1-py2.5-cygwin-1.7.1-i686.egg/multiprocessing/synchronize.py", line 212, in wait
    self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt

-- components: Library (Lib) messages: 103349 nosy: Ian.Davis severity: normal status: open title: multiprocessing.Queue fails to get() very large objects type: crash versions: Python 2.5, Python 2.6 ___ Python tracker <http://bugs.python.org/issue8426> ___
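[Editor's note] The hang in the report fits a known pattern: the parent blocks in task_q.join() while the child's queue feeder thread is blocked writing a large result into a pipe that nobody is reading yet. One common way to avoid it is to drain results before waiting on the worker. A minimal Python 3 sketch, not the OP's script; the None sentinel and the helper names are invented for illustration:

```python
from multiprocessing import Process, Queue

def child(task_q, result_q):
    # Echo tasks back; None is an invented sentinel meaning "no more work".
    while True:
        task = task_q.get()
        if task is None:
            break
        result_q.put(task)

def parent(tasks):
    task_q, result_q = Queue(), Queue()
    worker = Process(target=child, args=(task_q, result_q))
    worker.start()
    for task in tasks:
        task_q.put(task)
    task_q.put(None)
    # Drain results *before* joining, so the child's feeder thread is
    # never stuck writing a large object into an unread pipe.
    results = [result_q.get() for _ in tasks]
    worker.join()
    return results

if __name__ == '__main__':
    print(parent(["foo", "bar", "baz"]))
```

With a single producer the result order matches the task order, so no extra bookkeeping is needed in this sketch.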
[issue1167] gdbm/ndbm 1.8.1+ needs libgdbm_compat.so
Ian Kelly added the comment: I'm not sure why you think this patch would be backwards incompatible. I've tested it with gdbm-1.8.0 and gdbm-1.7.3, and it works for those. __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1167> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4330] wsgiref.validate doesn't accept arguments to readline
New submission from Ian Bicking <[EMAIL PROTECTED]>: The method wsgiref.validate:InputWrapper.readline doesn't take any arguments. It should take an optional size argument. Though this isn't part of the WSGI specification, the cgi module uses this argument when parsing the body, and in practice no applications that use cgi.FieldStorage (which is most applications) are compatible with wsgiref.validate as a result. Simply adding a *args that is passed to the underlying file fixes this. -- components: Library (Lib) messages: 75920 nosy: ianb severity: normal status: open title: wsgiref.validate doesn't accept arguments to readline versions: Python 2.6 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4330> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
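[Editor's note] The one-line fix described above (a *args passthrough) can be sketched as follows. SizedInputWrapper is a hypothetical stand-in for wsgiref.validate's InputWrapper, shown wrapping an io.BytesIO for illustration:

```python
import io

class SizedInputWrapper:
    """Hypothetical wrapper: forward an optional size argument to the
    underlying file, the behaviour the report asks readline() to support."""

    def __init__(self, wsgi_input):
        self.input = wsgi_input

    def readline(self, *args):
        # Passing *args through is the whole change; cgi.FieldStorage
        # calls readline(size) while parsing the request body.
        return self.input.readline(*args)

stream = SizedInputWrapper(io.BytesIO(b"spam\neggs\n"))
print(stream.readline(2))  # b'sp'
print(stream.readline())   # b'am\n'
```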
[issue4330] wsgiref.validate doesn't accept arguments to readline
Ian Bicking <[EMAIL PROTECTED]> added the comment: This renders wsgiref.validate.validator completely useless, because it cannot be used with any existing code. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4330> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4330] wsgiref.validate doesn't accept arguments to readline
Ian Bicking <[EMAIL PROTECTED]> added the comment: Yes, and I've wanted to deprecate paste.lint, but I can't because people use it over wsgiref.validate because it had this change applied. Yes, cgi.FieldStorage changed, but now that it's changed wsgiref needs to be compatible with it to be viable. Mostly the WSGI spec has been wrong on this for some time, but we've never gone through the process of updating it (though it has been brought up several times on Web-SIG). ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4330> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4330] wsgiref.validate doesn't accept arguments to readline
Ian Bicking <[EMAIL PROTECTED]> added the comment: cgi started using this argument due to the potential of a DoS attack without the length limit. So undoing this in cgi (even as an option) would be a regression. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4330> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38242] Revert the new asyncio Streams API
Change by Ian Good : -- nosy: +icgood nosy_count: 9.0 -> 10.0 pull_requests: +30142 pull_request: https://github.com/python/cpython/pull/13143 ___ Python tracker <https://bugs.python.org/issue38242> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34975] start_tls() difficult when using asyncio.start_server()
Ian Good added the comment: #36889 was reverted, so this is not resolved. I'm guessing this needs to be moved to 3.9 now too. Is my original PR worth revisiting? https://github.com/python/cpython/pull/13143/files -- resolution: fixed -> status: closed -> open ___ Python tracker <https://bugs.python.org/issue34975> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43749] venv module does not copy the correct python exe
New submission from Ian Norton : On Windows, the venv module does not copy the correct python exe if the currently running exe (e.g. sys.executable) has been renamed (e.g. named python3.exe). venv will only make copies of python.exe, pythonw.exe, python_d.exe or pythonw_d.exe. If for example the python executable has been renamed from python.exe to python3.exe (e.g. to co-exist on a system where multiple pythons are on PATH) then this can fail with errors like:

Error: [WinError 2] The system cannot find the file specified

when venv tries to run pip in the new environment. If the running python executable is a differently named copy, then errors like the one described in https://bugs.python.org/issue40588 are seen. -- components: Library (Lib) messages: 390329 nosy: Ian Norton priority: normal severity: normal status: open title: venv module does not copy the correct python exe versions: Python 3.10, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue43749> ___
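[Editor's note] The failure mode reduces to a fixed-name lookup. A hedged illustration: EXPECTED and venv_would_find are invented names, and the real venv logic (in venv.EnvBuilder) is more involved than this:

```python
import ntpath  # Windows path semantics, usable on any platform

# The fixed set of launcher names venv recognises, per the report.
EXPECTED = {'python.exe', 'pythonw.exe', 'python_d.exe', 'pythonw_d.exe'}

def venv_would_find(executable):
    # A renamed interpreter falls outside the fixed set and is skipped.
    return ntpath.basename(executable).lower() in EXPECTED

print(venv_would_find(r'C:\Python39\python.exe'))   # True
print(venv_would_find(r'C:\Python39\python3.exe'))  # False: the reported failure mode
```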
[issue43749] venv module does not copy the correct python exe
Ian Norton added the comment: This may also cause https://bugs.python.org/issue35644 -- ___ Python tracker <https://bugs.python.org/issue43749> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43749] venv module does not copy the correct python exe
Change by Ian Norton : -- keywords: +patch pull_requests: +23954 stage: -> patch review pull_request: https://github.com/python/cpython/pull/25216 ___ Python tracker <https://bugs.python.org/issue43749> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43819] ExtensionFileLoader Does Not Implement invalidate_caches
New submission from Ian H : Currently there's no easy way to get at the internal cache of module spec objects for compiled extension modules. See https://github.com/python/cpython/blob/20ac34772aa9805ccbf082e700f2b033291ff5d2/Python/import.c#L401-L415. For example, these module spec objects continue to be cached even if we call importlib.invalidate_caches. ExtensionFileLoader doesn't implement the corresponding method for this. The comment in the C file referenced above implies this is done this way to avoid re-initializing extension modules. I'm not sure if this can be fixed, but I figured I'd ask for input. Our use-case is an academic project where we've been experimenting with building an interface for linker namespaces into Python to allow for (among other things) loading multiple copies of any module without explicit support from that module. We've been able to do this without having custom builds of Python. We've instead gone the route of overriding some of the import machinery at runtime. To make this work we need a way to prevent caching of previous import-related information about a specific extension module. We currently have to rely on an unfortunate hack to get access to the internal cache of module spec objects for extension modules and modify that dictionary manually. What we have works, but any sort of alternative would be welcome. -- messages: 390905 nosy: Ian.H priority: normal severity: normal status: open title: ExtensionFileLoader Does Not Implement invalidate_caches type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue43819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
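[Editor's note] For context, importlib.invalidate_caches() simply fans out to any finder on sys.meta_path that defines invalidate_caches(). A toy sketch of that hook (CachingFinder is an invented name), which is exactly what the report says the extension-module spec cache never exposes:

```python
import importlib
import sys

class CachingFinder:
    """Toy meta-path finder with the invalidate_caches() hook."""

    def __init__(self):
        self._spec_cache = {}

    def find_spec(self, name, path=None, target=None):
        # Returning None defers to the finders after us.
        return self._spec_cache.get(name)

    def invalidate_caches(self):
        self._spec_cache.clear()

finder = CachingFinder()
finder._spec_cache['demo_module'] = object()  # pretend a spec was cached
sys.meta_path.insert(0, finder)
try:
    importlib.invalidate_caches()  # reaches every finder defining the hook
finally:
    sys.meta_path.remove(finder)
print(finder._spec_cache)  # {}
```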
[issue43870] C API Functions Bypass __import__ Override
New submission from Ian H : Some of the import-related C API functions are documented as bypassing an override to builtins.__import__. This appears to be the case, but the documentation is incomplete in this regard. For example, PyImport_ImportModule is implemented by calling PyImport_Import which does respect an override to builtins.__import__, but PyImport_ImportModule doesn't mention respecting an override. On the other hand some routines (like PyImport_ImportModuleLevelObject) do not respect an override to the builtin import. Is this something that people are open to having fixed? I've been working on an academic project downstream that involved some overrides to the __import__ machinery (I haven't figured out a way to do this with just import hooks) and having some modules skip going through our override threw us for a bad debugging loop. The easiest long-term fix from our perspective is to patch the various PyImport routines to always respect an __import__ override. This technically is a backwards compatibility break, but I'm unsure if anyone is actually relying on the fact that specific C API functions bypass builtins.__import__ entirely. It seems more likely that the current behavior will cause bugs downstream like it did for us. -- messages: 391220 nosy: Ian.H priority: normal severity: normal status: open title: C API Functions Bypass __import__ Override type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue43870> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43895] Unnecessary Cache of Shared Object Handles
New submission from Ian H : While working on another project I noticed that there's a cache of shared object handles kept inside _PyImport_FindSharedFuncptr. See https://github.com/python/cpython/blob/b2b6cd00c6329426fc3b34700f2e22155b44168c/Python/dynload_shlib.c#L51-L55. It appears to be an optimization to work around poor caching of shared object handles in old libc implementations. After some testing, I have been unable to find any meaningful performance difference from this cache, so I propose we remove it to save the space. My initial tests were on Linux (Ubuntu 18.04). I saw no discernible difference in the time for running the Python test suite with a single thread. Running the test suite using a single thread shows a lot of variance, but after running with and without the cache 40 times the mean times with/without the cache was nearly the same. Interpreter startup time also appears to be unaffected. This was all with a debug build, so I'm in the process of collecting data with a release build to see if that changes anything. -- messages: 391453 nosy: Ian.H priority: normal severity: normal status: open title: Unnecessary Cache of Shared Object Handles versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue43895> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
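[Editor's note] A crude version of the startup measurement mentioned above, under the assumption that best-of-N wall-clock timing of `python -c pass` is representative; startup_seconds is an invented helper:

```python
import subprocess
import sys
import time

def startup_seconds(runs=3):
    # Best-of-N wall-clock time to start the interpreter and exit.
    best = float('inf')
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, '-c', 'pass'], check=True)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == '__main__':
    print('interpreter startup: %.3fs' % startup_seconds())
```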
[issue43895] Unnecessary Cache of Shared Object Handles
Ian H added the comment: Proposed patch is in https://github.com/python/cpython/pull/25487. -- ___ Python tracker <https://bugs.python.org/issue43895> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43895] Unnecessary Cache of Shared Object Handles
Change by Ian H : -- keywords: +patch pull_requests: +24282 stage: -> patch review pull_request: https://github.com/python/cpython/pull/25487 ___ Python tracker <https://bugs.python.org/issue43895> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45081] dataclasses that inherit from Protocol subclasses have wrong __init__
Ian Good added the comment: I believe this was a deeper issue that affected all classes inheriting Protocol, causing a TypeError on even the most basic case (see attached):

Traceback (most recent call last):
  File "/.../test.py", line 14, in <module>
    MyClass()
  File "/.../test.py", line 11, in __init__
    super().__init__()
  File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py", line 1083, in _no_init
    raise TypeError('Protocols cannot be instantiated')
TypeError: Protocols cannot be instantiated

This was a new regression in 3.9.7 and seems to be resolved by this fix. The desired behavior should be supported according to PEP 544: https://www.python.org/dev/peps/pep-0544/#explicitly-declaring-implementation -- nosy: +icgood Added file: https://bugs.python.org/file50277/test.py ___ Python tracker <https://bugs.python.org/issue45081> ___
[issue45081] dataclasses that inherit from Protocol subclasses have wrong __init__
Ian Good added the comment: Julian, That is certainly a workaround, however the behavior you are describing is inconsistent with PEP 544 in both word and intention. From the PEP:

> To explicitly declare that a certain class implements a given protocol, it can be used as a regular base class.

It further describes the semantics of inheriting as "unchanged" from a "regular base class". If the semantics are "unchanged", it should follow that super().__init__() would pass through the protocol to object.__init__, just like a "regular base class" would if it does not override __init__. Furthermore, the intention of inheriting a Protocol as described in the PEP:

> Static analysis tools are expected to automatically detect that a class implements a given protocol. So while it's possible to subclass a protocol explicitly, it's not necessary to do so for the sake of type-checking.

The purpose of adding a Protocol subclass as an explicit base class is thus only to improve static analysis; it should *not* modify the runtime semantics. Consider the case where a package maintainer wants to enhance the flexibility of their types by transitioning from using an ABC to using structural subtyping. That simple typing change would be a breaking change for the package consumers, who must now remove a super().__init__() call. Ian -- ___ Python tracker <https://bugs.python.org/issue45081> ___
[issue45241] python REPL leaks local variables when an exception is thrown
New submission from Ian Henderson : To reproduce, copy the following code:

import gc
gc.collect()
objs = gc.get_objects()
for obj in objs:
    try:
        if isinstance(obj, X):
            print(obj)
    except NameError:
        class X:
            pass

def f():
    x = X()
    raise Exception()

f()

then open a Python REPL and paste repeatedly at the prompt. Each time the code runs, another copy of the local variable x is leaked. This was originally discovered while using PyTorch -- tensors leaked this way tend to exhaust GPU memory pretty quickly. Version Info: Python 3.9.7 (default, Sep 3 2021, 04:31:11) [Clang 12.0.5 (clang-1205.0.22.9)] on darwin -- components: Interpreter Core messages: 402144 nosy: ianh2 priority: normal severity: normal status: open title: python REPL leaks local variables when an exception is thrown type: resource usage versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue45241> ___
[issue45241] python REPL leaks local variables when an exception is thrown
Ian Henderson added the comment: Ah, you're right -- it looks like the 'objs' global is what's keeping these objects alive. Sorry for the noise. -- stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue45241> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
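[Editor's note] The diagnosis above (a lingering objs global, not an interpreter leak) can be verified with a weak reference. A small sketch with invented names:

```python
import gc
import weakref

class X:
    pass

x = X()
probe = weakref.ref(x)
objs = [x]          # stand-in for the REPL's lingering `objs` global
del x
gc.collect()
print(probe() is not None)  # True: the list still keeps the object alive
objs = None
gc.collect()
print(probe() is None)      # True: dropping the reference frees it
```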
[issue45335] Default TIMESTAMP converter in sqlite3 ignores time zone
New submission from Ian Fisher : The SQLite converter that the sqlite3 library automatically registers for TIMESTAMP columns (https://github.com/python/cpython/blob/main/Lib/sqlite3/dbapi2.py#L66) ignores the time zone even if it is present and always returns a naive datetime object. I think that the converter should return an aware object if the time zone is present in the database. As it is, round trips of TIMESTAMP values from the database to Python and back might erase the original time zone info. Now that datetime.datetime.fromisoformat is in Python 3.7, this should be easy to implement. -- components: Library (Lib) messages: 402979 nosy: iafisher priority: normal severity: normal status: open title: Default TIMESTAMP converter in sqlite3 ignores time zone type: enhancement ___ Python tracker <https://bugs.python.org/issue45335> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
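[Editor's note] The proposed behaviour can be prototyped today with user-registered adapter/converter pairs built on datetime.fromisoformat. The names adapt_dt and convert_timestamp_aware are invented; this shadows the default TIMESTAMP converter rather than patching sqlite3:

```python
import datetime
import sqlite3

def adapt_dt(ts):
    # Store ISO-8601 text, including any UTC offset.
    return ts.isoformat(sep=" ")

def convert_timestamp_aware(val):
    # Unlike the default TIMESTAMP converter, keep the offset:
    # fromisoformat() returns an aware datetime when one is present.
    return datetime.datetime.fromisoformat(val.decode())

sqlite3.register_adapter(datetime.datetime, adapt_dt)
sqlite3.register_converter("timestamp", convert_timestamp_aware)

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE t (ts timestamp)")
ts = datetime.datetime(2021, 10, 1, 12, 30, tzinfo=datetime.timezone.utc)
conn.execute("INSERT INTO t VALUES (?)", (ts,))
got = conn.execute("SELECT ts FROM t").fetchone()[0]
print(got == ts)  # True: the UTC offset survives the round trip
conn.close()
```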
[issue45335] Default TIMESTAMP converter in sqlite3 ignores time zone
Ian Fisher added the comment: Substitute "UTC offset" for "time zone" in my comment above. I have attached a minimal Python program demonstrating data loss from this bug. -- Added file: https://bugs.python.org/file50324/timestamp.py ___ Python tracker <https://bugs.python.org/issue45335> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset
Ian Fisher added the comment: Unfortunately fixing this will have to be considered a backwards-incompatible change, since Python doesn't allow naive and aware datetime objects to be compared, so if sqlite3 starts returning aware datetimes, existing code might break. Alternatively, perhaps this could be fixed in conjunction with changing sqlite3's API to allow per-database converters and adapters. Then, the old global TIMESTAMP converter could be retained for compatibility with existing code, and new code could opt-in to a per-database TIMESTAMP converter with the correct behavior. -- title: Default TIMESTAMP converter in sqlite3 ignores time zone -> Default TIMESTAMP converter in sqlite3 ignores UTC offset ___ Python tracker <https://bugs.python.org/issue45335> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset
Ian Fisher added the comment:

> Another option could be to deprecate the current behaviour and then change it to being timezone aware in Python 3.13.

This sounds like the simplest option. I'd be interested in working on this myself, if you think it's something that a new CPython contributor could handle. -- ___ Python tracker <https://bugs.python.org/issue45335> ___
[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset
Ian Fisher added the comment: Okay, I started a discussion here: https://discuss.python.org/t/fixing-sqlite-timestamp-converter-to-handle-utc-offsets/10985 -- ___ Python tracker <https://bugs.python.org/issue45335> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26651] Deprecate register_adapter() and register_converter() in sqlite3
Change by Ian Fisher : -- nosy: +iafisher ___ Python tracker <https://bugs.python.org/issue26651> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19065] sqlite3 timestamp adapter chokes on timezones
Change by Ian Fisher : -- nosy: +iafisher ___ Python tracker <https://bugs.python.org/issue19065> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset
Change by Ian Fisher : -- keywords: +patch pull_requests: +27469 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29200 ___ Python tracker <https://bugs.python.org/issue45335> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45611] pprint - low width overrides depth folding
New submission from Ian Currie : Reproducible example:

>>> from pprint import pprint
>>> data = [["aa"], [2], [3], [4], [5]]
>>> pprint(data)
[['aa'], [2], [3], [4], [5]]
>>> pprint(data, depth=1)
[[...], [...], [...], [...], [...]]
>>> pprint(data, depth=1, width=7)
[[...],
 [...],
 [...],
 [...],
 [...]]
>>> pprint(data, depth=1, width=6)
[['aa'],
 [2],
 [3],
 [4],
 [5]]

The depth parameter "folds" everything deeper than one level. If you then lower the width of the output enough, what was once on one line is split across newlines. The bug: if you lower the width below seven characters, which is the length of `[[...],`, it seems to override the `depth` parameter and print everything with no folding. This is true of deeply nested structures too. Expected behavior: for the folding `...` to remain. Why put the width so low? I came across this because of the behavior of `compact`: by default, if a data structure can fit on one line, it will be displayed on one line. `compact` only affects sequences that are longer than the given width. **There is no way to force compact as False for short items**, so as to make sure all items, even short ones, appear on their own line. [1, 2, 3] will always appear on its own line; there is no way to make it appear like:

[1,
 2,
 3]

The only way is by setting a very low width. -- components: Library (Lib) messages: 405027 nosy: iansedano priority: normal severity: normal status: open title: pprint - low width overrides depth folding type: behavior versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue45611> ___
[issue45858] Deprecate default converters in sqlite3
New submission from Ian Fisher : Per discussion at https://discuss.python.org/t/fixing-sqlite-timestamp-converter-to-handle-utc-offsets/, the default converters in SQLite3 have several bugs and are probably not worth continuing to maintain, so I propose deprecating them and removing them in a later version of Python. Since the converters are opt-in, this should not affect most users of SQLite3. -- components: Library (Lib) messages: 406727 nosy: erlendaasland, iafisher priority: normal severity: normal status: open title: Deprecate default converters in sqlite3 type: behavior ___ Python tracker <https://bugs.python.org/issue45858> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45858] Deprecate default converters in sqlite3
Ian Fisher added the comment: See also bpo-26651 for a related proposal to deprecate the converter/adapter infrastructure entirely. The proposal in this bug is more limited: remove the default converters (though I think the default adapters should stay), but continue to allow users to define their own converters. -- ___ Python tracker <https://bugs.python.org/issue45858> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26651] Deprecate register_adapter() and register_converter() in sqlite3
Ian Fisher added the comment: See bpo-45858 for a more limited proposal to only deprecate the default converters. -- ___ Python tracker <https://bugs.python.org/issue26651> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45677] [doc] improve sqlite3 docs
Ian Fisher added the comment: I think it would also be helpful to make the examples at the top simpler/more idiomatic, e.g. using a context manager for the connection and calling conn.execute directly instead of spawning a cursor. I think the information about the isolation_level parameter should also be displayed more prominently as many people are surprised to find sqlite3 automatically opening transactions. -- nosy: +iafisher ___ Python tracker <https://bugs.python.org/issue45677> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39480] referendum reference is needlessly annoying
New submission from Ian Jackson : The section "Fancier Output Formatting" has the example below. This will remind many UK readers of the 2016 EU referendum. About half of those readers will be quite annoyed. This annoyance seems entirely avoidable; a different example which did not refer to politics would demonstrate the behaviour just as well. Changing this example would (in the words of the CoC) also show more empathy towards, and be more considerate of, python contributors unhappy with recent political developments in the UK, without having to make anyone else upset in turn.

>>> year = 2016
>>> event = 'Referendum'
>>> f'Results of the {year} {event}'
'Results of the 2016 Referendum'
>>> yes_votes = 42_572_654
>>> no_votes = 43_132_495
>>> percentage = yes_votes / (yes_votes + no_votes)
>>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage)
' 42572654 YES votes 49.67%'

-- assignee: docs@python components: Documentation messages: 360883 nosy: diziet, docs@python priority: normal severity: normal status: open title: referendum reference is needlessly annoying versions: Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue39480> ___
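[Editor's note] A politically neutral substitute that exercises the same features (f-strings, underscore digit grouping, and the same format specs) is easy to construct; the data below is invented:

```python
# Same formatting machinery as the tutorial example, neutral subject matter.
year = 2020
event = 'Survey'
print(f'Results of the {year} {event}')

cat_votes = 42_572_654
dog_votes = 43_132_495
percentage = cat_votes / (cat_votes + dog_votes)
# '{:-9}' right-aligns in a 9-character field; '{:2.2%}' renders a
# two-decimal percentage.
print('{:-9} CAT votes {:2.2%}'.format(cat_votes, percentage))
```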
[issue39619] os.chroot is not enabled on HP-UX builds
New submission from Ian Norton : When building on HP-UX using: The configure stage fails to detect chroot(). This is due to setting _XOPEN_SOURCE to a value higher than 500. The fix for this is to not set _XOPEN_SOURCE when configuring for HP-UX -- components: Interpreter Core messages: 361921 nosy: Ian Norton priority: normal severity: normal status: open title: os.chroot is not enabled on HP-UX builds type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue39619> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43047] logging.config formatters documentation is out of sync with code
New submission from Ian Wienand : The dict based configuration does not mention the "class" option, and neither the ini-file or dict sections mention the style tag. -- components: Library (Lib) messages: 385825 nosy: iwienand priority: normal severity: normal status: open title: logging.config formatters documentation is out of sync with code type: enhancement ___ Python tracker <https://bugs.python.org/issue43047> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
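[Editor's note] Both undocumented options are accepted by dictConfig's formatter handling. A minimal sketch (formatter/handler names invented) using the '{' style and an explicit 'class':

```python
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'braced': {
            'class': 'logging.Formatter',   # the 'class' option the docs omit
            'format': '{levelname}:{message}',
            'style': '{',                   # the style tag, also undocumented
        },
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'braced'},
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
})

handler = logging.getLogger().handlers[0]
record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'hello', None, None)
print(handler.formatter.format(record))  # INFO:hello
```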
[issue43047] logging.config formatters documentation is out of sync with code
Change by Ian Wienand : -- keywords: +patch pull_requests: +23182 stage: -> patch review pull_request: https://github.com/python/cpython/pull/24358 ___ Python tracker <https://bugs.python.org/issue43047> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type
New submission from Ian O'Shaughnessy : Using a script that has two classes A and B which contain a circular reference variable, it is possible to cause a memory leak that is not captured by default gc collection. Only by running gc.collect() manually do the circular references get collected. Attached is a sample script that replicates the issue. Output starts:

Ram used: 152.17 MB - A: Active(125) / Total(2485) - B: Active(124) / Total(2484)
Ram used: 148.17 MB - A: Active(121) / Total(12375) - B: Active(120) / Total(12374)
Ram used: 65.88 MB - A: Active(23) / Total(22190) - B: Active(22) / Total(22189)
Ram used: 77.92 MB - A: Active(35) / Total(31935) - B: Active(34) / Total(31934)

After 1,000,000 cycles 1GB of ram is being consumed:

Ram used: 1049.68 MB - A: Active(1019) / Total(975133) - B: Active(1018) / Total(975132)
Ram used: 1037.64 MB - A: Active(1007) / Total(984859) - B: Active(1006) / Total(984858)
Ram used: 952.34 MB - A: Active(922) / Total(994727) - B: Active(921) / Total(994726)
Ram used: 970.41 MB - A: Active(940) / Total(100) - B: Active(940) / Total(100)

-- files: gc.bug.py messages: 374210 nosy: ian_osh priority: normal severity: normal status: open title: Garbage Collector Ignoring Some (Not All) Circular References of Identical Type type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file49337/gc.bug.py ___ Python tracker <https://bugs.python.org/issue41389> ___
[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type
Ian O'Shaughnessy added the comment: For a long running process (greatly exceeding a million iterations) the uncollected garbage will become too large for the system (many gigabytes). A manual execution of the gc would be required. That seems flawed given that Python is a garbage collected language, no? -- ___ Python tracker <https://bugs.python.org/issue41389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type
Ian O'Shaughnessy added the comment: "Leak" was likely the wrong word. It does appear problematic though. The loop is using a fixed number of variables (yes, there are repeated dynamic allocations, but they fall out of scope with each iteration), only one of these variables occupies 1MB of ram (aside from the static variable). The problem: There's only really one variable occupying 1MB of in-scope memory, yet the app's memory usage can/will exceed 1GB after extended use. At the very least, this is confusing – especially given the lack of user control to prevent it from happening once it's discovered as a problem. -- ___ Python tracker <https://bugs.python.org/issue41389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type
Ian O'Shaughnessy added the comment: >I don't know of any language that guarantees all garbage will be collected >"right away". Do you? I'm not an expert in this domain, so, no. I am however attempting to find a way to mitigate this issue. Do you have any suggestions how I can avoid these memory spikes? Weak references? Calling gc.collect() on regular intervals doesn't seem to work consistently. -- ___ Python tracker <https://bugs.python.org/issue41389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
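For reference, a minimal sketch of the behavior discussed in this thread (class and attribute names are illustrative, not taken from the attached script): two objects that refer to each other form a cycle that plain reference counting can never free, and only the cyclic collector reclaims it.

```python
import gc
import weakref

class A:
    pass

class B:
    pass

def make_cycle():
    a, b = A(), B()
    a.partner = b          # A -> B
    b.partner = a          # B -> A: a reference cycle
    return weakref.ref(a)  # observe the object without keeping it alive

ref = make_cycle()
# The cycle is unreachable here, but refcounting alone cannot free it;
# it lingers until the cyclic collector runs (automatically at allocation
# thresholds, or explicitly):
gc.collect()
print(ref() is None)  # True: the cycle was reclaimed by the collector
```

This is why memory in the reported script grows between automatic collections: the garbage is collectable, just not collected immediately.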
[issue42225] Tkinter hangs or crashes when displaying astral chars
Ian Strawbridge added the comment: Further to the information I posted on Stack Overflow (referred to above) relating to reproducing emoticon characters from Idle under Ubuntu, I have done more testing. Based on some of the code/comments above, I tried modifications which I hoped might identify errors before Idle crashed. At a simple level I can generate some error information in a Ubuntu terminal from the following.

/usr/bin$ idle-python3.8

Entering chr(0x1f624) gives the following error message in the terminal:

X Error of failed request: BadLength (poly request too large or internal Xlib length error)
  Major opcode of failed request: 139 (RENDER)
  Minor opcode of failed request: 20 (RenderAddGlyphs)
  Serial number of failed request: 4484
  Current serial number in output stream: 4484

Another test used this code.

--
def FileSave(sav_file_name, outputstring):
    with open(sav_file_name, "a", encoding="utf8", newline='') as myfile:
        myfile.write(outputstring)

def FileSave1(sav_file_name, eoutputstring):
    with open(sav_file_name, "a", encoding="utf8", newline='') as myfile:
        myfile.write(eoutputstring)

tk = True
if tk:
    from tkinter import Tk
    from tkinter.scrolledtext import ScrolledText
    root = Tk()
    text = ScrolledText(root, width=80, height=40)
    text.pack()
    def print1(txt):
        text.insert('insert', txt+'\n')

errors = []
outputstring = "Characters:" + "\n" + "\n"
eoutputstring = "Errors:" + "\n" + "\n"
#for i in range(0x1f600, 0x1f660):  # crashes at 0x1f624
for i in range(0x1f623, 0x1f624):   # 1f624, 1f625 then try 1f652
    chars = chr(i)
    decimal = str(int(hex(i)[2:], 16))
    try:
        outputstring = str(hex(i)) + " " + decimal + " " + chars + "\n"
        FileSave("Charsfile.txt", outputstring)
        print1(f"{hex(i)} {decimal} {chars}")
        print(f"{hex(i)} {decimal} {chars}")
    except Exception as e:
        print(str(hex(i)))
        eoutputstring = str(hex(i)) + "\n"
        FileSave1("Errorfile.txt", eoutputstring)
        errors.append(f"{hex(i)} {e}")

print("ERRORS:")
for line in errors:
    print(line)
--

With the range starting at 0x1f623 and changing the end point, 
in Ubuntu, with end point 0x1f624, this prints OK, but if higher numbers are used the Idle windows all closed. However, on some occasions, if I began with the end point at 0x1f624 and ran, then without closing the editor window increased the end point to 0x1f625, saved and ran, the Text window would close but the console window would remain open. I could then increase the upper range further and repeat, and more characters would print to the console. I have attached screenshots of the console output with the fonts-noto-color-emoji fonts package installed (with font), then with this package uninstalled (no font), and finally the same when run under Windows 10. For the console output produced while the font package is installed, if I select in the character column where there is a blank space, "something" can be selected. If I save the console as a text file, or select all the rows, copy and paste to a text file, the missing characters are revealed. When the font package is uninstalled, the missing characters are truly missing. It is the apparently missing characters (such as 0x1f624, 0x1f62c, 0x1f641, 0x1f642, 0x1f644-0x1f64f) which appear to be causing the Idle crashes. Presumably codes such as 0x1f650 and 0x1f651 are unallocated, so they show up as rectangular outlines. In none of the tests with the more complex code above did I manage to generate any error output. My set-up is as follows.

Ubuntu 20.04.1 LTS x86_64
GNOME version: 3.36.3
Python 3.8.6 (default, Sep 25 2020, 21:22:01) [GCC 7.5.0] on linux
Tk version: 8.6.10

Hopefully, the above might give some pointers to handling these characters. -- nosy: +IanSt1 versions: +Python 3.8 -Python 3.10 Added file: https://bugs.python.org/file49581/Screenshots_128547-128593.pdf ___ Python tracker <https://bugs.python.org/issue42225> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42225] Tkinter hangs or crashes when displaying astral chars
Ian Strawbridge added the comment: On Ubuntu, Tk version is showing as 8.6.10 On Windows 10, Tk version is showing as 8.6.9 -- ___ Python tracker <https://bugs.python.org/issue42225> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42442] Tarfile to stdout documentation example
New submission from Ian Laughlin : Recommend adding an example to the tarfile documentation showing how to write a tarfile to stdout. Example:

files = [(file_1, filename_1), (file_2, filename_2)]

with tarfile.open(fileobj=sys.stdout.buffer, mode='w|gz') as tar:
    for file, filename in files:
        file_obj = io.BytesIO()               # start a BytesIO object
        file_obj.write(file.encode('utf-8'))  # write the file to the BytesIO object
        info = tarfile.TarInfo(filename)      # create the TarInfo
        file_obj.seek(0)   # go back to the beginning of the BytesIO object, else nothing is written
        info.size = len(file)                 # set the length of the file
        tar.addfile(info, fileobj=file_obj)   # write the tar to stdout

-- assignee: docs@python components: Documentation messages: 381665 nosy: docs@python, ilaughlin priority: normal severity: normal status: open title: Tarfile to stdout documentation example versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue42442> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
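A self-checking variant of the proposed example (file names and contents here are made up for illustration): it writes to an in-memory buffer instead of sys.stdout.buffer — the tarfile call is identical — so the archive can be read back and verified. Note that TarInfo.size must be the byte length of the encoded data, which differs from len(file) for non-ASCII text.

```python
import io
import tarfile

files = [("hello world\n", "hello.txt"), ("lorem ipsum\n", "lorem.txt")]

# sys.stdout.buffer would work the same way; a BytesIO lets us inspect the result.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w|gz") as tar:
    for content, filename in files:
        data = content.encode("utf-8")
        file_obj = io.BytesIO(data)
        info = tarfile.TarInfo(filename)
        info.size = len(data)  # byte length of the encoded content, not len(str)
        tar.addfile(info, fileobj=file_obj)

# Round-trip check: read the archive back.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()
print(names)  # ['hello.txt', 'lorem.txt']
```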
[issue13952] mimetypes doesn't recognize .csv
New submission from Ian Davis : The mimetypes module does not respond with "text/csv" for files that end in ".csv", and I think it should :) For goodness sake, "text/tab-delimited-values" is in there as ".tsv", and that seems much less used (to me). -- components: Library (Lib) messages: 152751 nosy: iwd32900 priority: normal severity: normal status: open title: mimetypes doesn't recognize .csv type: behavior versions: Python 2.6, Python 2.7 ___ Python tracker <http://bugs.python.org/issue13952> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
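Until the mapping is in the standard table, a workaround is to register the type at application startup (the file name below is just an example):

```python
import mimetypes

# Register text/csv for the .csv extension; on Pythons where the table
# already contains it, this is a harmless no-op.
mimetypes.add_type("text/csv", ".csv")

mime, encoding = mimetypes.guess_type("report.csv")
print(mime)  # text/csv
```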
[issue14561] python-2.7.2-r3 suffers test failure at test_mhlib
New submission from Ian Delaney : Testing the test suite of python-2.7. Re-running failed tests in verbose mode Re-running test 'test_mhlib' in verbose mode test_basic (test.test_mhlib.MhlibTests) ... ok test_listfolders (test.test_mhlib.MhlibTests) ... FAIL It seems to be pinned down to this one line in the test that failed. From test_mhlib.py:

def test_listfolders(self):
    mh = getMH()
    eq = self.assertEqual
    #tfolders.sort()        # Line 184
    #eq(folders, tfolders)  # Line 185

Commenting them out removes the source of error. The lines that trip it up include at least 185, 189, 193. The 'folders' are not equal. Bug filed in Gentoo bugzilla: Bug 387967, 21-10-2011. The build log from that bug is in Comment 2: https://bugs.gentoo.org/attachment.cgi?id=290409 -- components: Tests messages: 158119 nosy: idella5 priority: normal severity: normal status: open title: python-2.7.2-r3 suffers test failure at test_mhlib type: behavior versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue14561> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31426] Segfault during GC of generator object; invalid gi_frame?
New submission from Ian Wienand: Using 3.5.2-2ubuntu0~16.04.3 (Xenial) we see an occasional segfault during garbage collection of a generator object. A full backtrace is attached, but the crash appears to be triggered inside gen_traverse during gc:

---
(gdb) info args
gen = 0x7f22385f0150
visit = 0x50eaa0
arg = 0x0
(gdb) print *gen
$109 = {ob_base = {ob_refcnt = 1, ob_type = 0xa35760 }, gi_frame = 0x386aed8, gi_running = 1 '\001', gi_code = , gi_weakreflist = 0x0, gi_name = 'linesplit', gi_qualname = 'linesplit'}
---

I believe gen_traverse is doing the following:

---
static int
gen_traverse(PyGenObject *gen, visitproc visit, void *arg)
{
    Py_VISIT((PyObject *)gen->gi_frame);
    Py_VISIT(gen->gi_code);
    Py_VISIT(gen->gi_name);
    Py_VISIT(gen->gi_qualname);
    return 0;
}
---

The problem here being that this generator's gen->gi_frame has managed to acquire a NULL object type but still has references:

---
(gdb) print *gen->gi_frame
$112 = {ob_base = {ob_base = {ob_refcnt = 2, ob_type = 0x0}, ob_size = 0}, f_back = 0x0, f_code = 0xca3e4fd8950fef91, ...
---

Thus it gets visited, and it doesn't go well. I have attached the py-bt as well; it's very deep, with ansible, multiprocessing forking, imp.load_source() importing ... basically a nightmare. I have not managed to get it down to any sort of minimal test case, unfortunately. This happens fairly infrequently, which suggests a race. The generator in question has a socket involved:

---
def linesplit(socket):
    buff = socket.recv(4096).decode("utf-8")
    buffering = True
    while buffering:
        if "\n" in buff:
            (line, buff) = buff.split("\n", 1)
            yield line + "\n"
        else:
            more = socket.recv(4096).decode("utf-8")
            if not more:
                buffering = False
            else:
                buff += more
    if buff:
        yield buff
---

Wild speculation, but maybe something to do with finalizing generators with file descriptors across fork()? At this point we are trying a work-around of not having the above socket reading routine in a generator but just a "regular" loop. 
As it triggers as part of a production roll-out I'm not sure we can do too much more debugging. Unless this rings any immediate bells for people, we can probably just have this for tracking at this point. [1] is the original upstream issue [1] https://storyboard.openstack.org/#!/story/2001186#comment-17441 -- components: Interpreter Core files: crash-bt.txt messages: 301943 nosy: iwienand priority: normal severity: normal status: open title: Segfault during GC of generator object; invalid gi_frame? type: crash versions: Python 3.5 Added file: https://bugs.python.org/file47134/crash-bt.txt ___ Python tracker <https://bugs.python.org/issue31426> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31426] Segfault during GC of generator object; invalid gi_frame?
Changes by Ian Wienand : Added file: https://bugs.python.org/file47135/crash-py-bt.txt ___ Python tracker <https://bugs.python.org/issue31426> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34975] start_tls() difficult when using asyncio.start_server()
Change by Ian Good : -- keywords: +patch pull_requests: +13056 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34975> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34975] start_tls() difficult when using asyncio.start_server()
Ian Good added the comment: I added start_tls() to StreamWriter. My implementation returns a new StreamWriter that should be used from then on, but it could be adapted to modify the current writer in-place (let me know). I've added docs, an integration test, and done some additional "real-world" testing with an IMAP server I work on. Specifically, "openssl s_client -connect xxx -starttls imap" works like a charm, and no errors/warnings are logged on disconnect. -- type: -> enhancement ___ Python tracker <https://bugs.python.org/issue34975> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36889] Merge StreamWriter and StreamReader into just asyncio.Stream
Change by Ian Good : -- nosy: +icgood ___ Python tracker <https://bugs.python.org/issue36889> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32541] cgi.FieldStorage constructor assumes all lines terminate with \n
New submission from Ian Craggs : Using cgi.FieldStorage in an HTTP server in a subclass of BaseHTTPRequestHandler, parsing the request with:

form = cgi.FieldStorage(fp=self.rfile, headers=self.headers,
                        environ={"REQUEST_METHOD": op.upper(),
                                 "CONTENT_TYPE": self.headers['Content-Type'],})

This has been working fine with clients using the Python requests library. Now processing requests from a Java library (org.apache.cxf.jaxrs.client.WebClient), the final line in a multipart request does not include the (\r)\n, which causes the final read to hang until a socket timeout. The read in question is in cgi.py, read_lines_to_outerboundary:

    line = self.fp.readline(1<<16)  # bytes

(line 824 in Python 3.6.2). I changed this read to not assume the termination of the final line with \n:

def read_line(self, last_boundary):
    line = self.fp.readline(len(last_boundary))
    if line != last_boundary and not line.endswith(b"\n"):
        line += self.fp.readline((1<<16) - len(last_boundary))
    return line

and the request worked. The Java library is being used in tests against our production web server, so I assume it is working correctly. Perhaps I am misusing the FieldStorage class; I don't know, I'm no expert on this. ------ components: Library (Lib), macOS messages: 309868 nosy: Ian Craggs, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: cgi.FieldStorage constructor assumes all lines terminate with \n type: behavior versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue32541> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33281] ctypes.util.find_library not working on macOS
New submission from Ian Burgwin : On Python 3.7.0a4 and later (including 3.7.0b4), find_library currently always returns None on macOS. It works on 3.7.0a3 and earlier. Tested on macOS 10.11 and 10.13. Expected result: Tested on 3.6.5, 3.7.0a1 and 3.7.0a3: >>> from ctypes.util import find_library >>> find_library('iconv') '/usr/lib/libiconv.dylib' >>> find_library('c') '/usr/lib/libc.dylib' >>> Current output on 3.7.0a4 to 3.7.0b3: >>> from ctypes.util import find_library >>> find_library('iconv') >>> find_library('c') >>> -- components: ctypes messages: 315309 nosy: Ian Burgwin (ihaveahax) priority: normal severity: normal status: open title: ctypes.util.find_library not working on macOS type: behavior versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue33281> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
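While the regression stands, one hedged stopgap (the helper name and fallback paths below are illustrative, not part of ctypes) is to try find_library first and then probe the conventional /usr/lib locations directly:

```python
import os
from ctypes.util import find_library

def find_library_with_fallback(name, fallbacks=()):
    """Try ctypes.util.find_library, then guess common dylib paths.

    Hypothetical workaround for 3.7.0a4-b3, where find_library always
    returned None on macOS. The fallback paths are assumptions; newer
    macOS releases may not expose the files on disk at all.
    """
    path = find_library(name)
    if path is not None:
        return path
    candidates = list(fallbacks) + ["/usr/lib/lib%s.dylib" % name]
    for candidate in candidates:
        if os.path.exists(candidate):
            return candidate
    return None

result = find_library_with_fallback("c")
print(result)
```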
[issue34908] netrc parding is overly strict
New submission from Ian Remmel : This started as a bug report for httpie https://github.com/jakubroztocil/httpie/issues/717#issuecomment-426125261 And became a bug report for requests https://github.com/requests/requests/issues/4813 > But turned out to be an issue with Python's netrc parser: > > > it appears that auth via netrc is broken if ~/.netrc includes entries that > are not exactly login/password tuples. For example, I have the following > entries for circle ci and heroku: > > ``` >machine api.heroku.com > login > password > method interactive >machine circleci.com > login > ``` > > both of these entries prevent my entry for github.com from working with > httpie (but curl works just fine). I've used the following script to test python 2.7 and 3.7: ``` import netrc import os.path netrc.netrc(os.path.expanduser('~/.netrc')).authenticators('api.github.com') ``` Python 2: ``` Traceback (most recent call last): File "test.py", line 4, in netrc.netrc(os.path.expanduser('~/.netrc')).authenticators('api.github.com') File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 35, in __init__ self._parse(file, fp, default_netrc) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 117, in _parse file, lexer.lineno) netrc.NetrcParseError: bad follower token 'method' (/Users/ian/.netrc, line 7) Python 3: ``` Traceback (most recent call last): File "test.py", line 4, in netrc.netrc(os.path.expanduser('~/.netrc')).authenticators('api.github.com') File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/netrc.py", line 30, in __init__ self._parse(file, fp, default_netrc) File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/netrc.py", line 111, in _parse file, lexer.lineno) netrc.NetrcParseError: bad follower token 'method' (/Users/ian/.netrc, line 7) ``` -- messages: 327155 nosy: 
ianwremmel priority: normal severity: normal status: open title: netrc parding is overly strict versions: Python 2.7, Python 3.7 ___ Python tracker <https://bugs.python.org/issue34908> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34908] netrc parding is overly strict
Ian Remmel added the comment: Yea, somehow, I suspected it was because there's no formal spec :) I guess technically it's an enhancement, but given that configuration dictated by third-parties can break the environment, it does feel like a bug. For example, I can't use a python app to authenticate to github via netrc because of how heroku says I have to configure my netrc. Also, there are probably two subtly different issues: 1. a machine with a password but no login breaks parsing 2. a machine with an unrecognized key breaks parsing -- ___ Python tracker <https://bugs.python.org/issue34908> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34908] netrc parsing is overly strict
Change by Ian Remmel : -- title: netrc parding is overly strict -> netrc parsing is overly strict ___ Python tracker <https://bugs.python.org/issue34908> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34975] start_tls() difficult when using asyncio.start_server()
New submission from Ian Good : There does not seem to be a public API for replacing the transport of the StreamReader / StreamWriter provided to the callback of a call to asyncio.start_server(). The only way I have found to use the new SSL transport is to update protected members of the StreamReaderProtocol object, e.g.: async def callback(reader, writer): loop = asyncio.get_event_loop() transport = writer.transport protocol = transport.get_protocol() new_transport = await loop.start_tls( transport, protocol, ssl_context, server_side=True) protocol._stream_reader = StreamReader(loop=loop) protocol._client_connected_cb = do_stuff_after_start_tls protocol.connection_made(new_transport) async def do_stuff_after_start_tls(ssl_reader, ssl_writer): ... -- components: asyncio messages: 327665 nosy: asvetlov, icgood, yselivanov priority: normal severity: normal status: open title: start_tls() difficult when using asyncio.start_server() versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue34975> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15198] multiprocessing Pipe send of non-picklable objects doesn't raise error
New submission from Ian Bell : When a non-picklable object is sent through a multiprocessing.Pipe, no exception is raised, instead when trying to read the other end of the pipe, a TypeError is raised: TypeError: Required argument 'handle' (pos 1) not found -- components: Windows messages: 164118 nosy: Ian.Bell priority: normal severity: normal status: open title: multiprocessing Pipe send of non-picklable objects doesn't raise error versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue15198> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15198] multiprocessing Pipe send of non-picklable objects doesn't raise error
Changes by Ian Bell : -- type: -> crash ___ Python tracker <http://bugs.python.org/issue15198> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15198] multiprocessing Pipe send of non-picklable objects doesn't raise error
Ian Bell added the comment: I had issues with a class that I wrote myself. It is a rather involved data structure with all kinds of interesting things going on. Unfortunately I cannot put together a minimal working example that will cause a Python hang. -- ___ Python tracker <http://bugs.python.org/issue15198> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15198] multiprocessing Pipe send of non-picklable objects doesn't raise error
Ian Bell added the comment: I have repaired my class so that it pickles properly, but that does not resolve the issue that if you send a non-picklable object through a pipe, it should raise an error, rather than hang. -- ___ Python tracker <http://bugs.python.org/issue15198> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
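One hedged mitigation until the library raises eagerly (the helper name checked_send is hypothetical, not part of multiprocessing): pickle the object yourself before sending, so an unpicklable object fails loudly in the sender instead of producing a confusing error or hang on the receiving end.

```python
import pickle

def checked_send(conn, obj):
    """Pickle first so unpicklable objects raise in the sender.

    Connection.recv() unpickles whatever bytes arrive, so sending the
    pre-pickled payload with send_bytes() is equivalent to send().
    """
    payload = pickle.dumps(obj)  # raises immediately if obj is unpicklable
    conn.send_bytes(payload)

# A lambda is a classic unpicklable object:
try:
    pickle.dumps(lambda x: x)
except Exception as e:
    print("refused:", type(e).__name__)
```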
[issue15340] OSError with "import random" when /dev/urandom doesn't exist (regression from 2.6)
New submission from Ian Wienand : Hi, Lib/random.py has a fallback if os.urandom() raises NotImplementedError:

---
from os import urandom as _urandom
...
def seed(self, a=None):
    if a is None:
        try:
            a = long(_hexlify(_urandom(16)), 16)
        except NotImplementedError:
            import time
            a = long(time.time() * 256)  # use fractional seconds
---

In 2.6, this is indeed what happens in Lib/os.py, where "from os import urandom" gets [2]:

---
if not _exists("urandom"):
    def urandom(n):
        ...
        try:
            _urandomfd = open("/dev/urandom", O_RDONLY)
        except (OSError, IOError):
            raise NotImplementedError("/dev/urandom (or equivalent) not found")
---

however, in 2.7, things have shuffled around as a result of Issue #13703, and now _PyOS_URandom raises an OSError if it can't find /dev/urandom [3]. This means "import random" without /dev/urandom available crashes while trying to seed. I'm not sure if this is intentional? One easy solution would be to catch OSError in random.py and fall back then too.

[1] http://hg.python.org/cpython/file/70274d53c1dd/Python/random.c#l227
[2] http://hg.python.org/cpython/file/9f8771e09052/Lib/os.py#l746
[3] http://hg.python.org/cpython/file/70274d53c1dd/Lib/random.py#l111

-- components: Library (Lib) messages: 165340 nosy: iwienand priority: normal severity: normal status: open title: OSError with "import random" when /dev/urandom doesn't exist (regression from 2.6) versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue15340> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15340] OSError with "import random" when /dev/urandom doesn't exist (regression from 2.6)
Ian Wienand added the comment: I'm not sure what systems are defined as critical or not. Although python is not really installable/configurable by end-users on ESXi, I noticed during development because we use python very early in the boot, before /dev/urandom appears for us (it comes from a kernel module loaded later). -- ___ Python tracker <http://bugs.python.org/issue15340> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
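The suggested fix amounts to widening the except clause. A sketch of the idea in Python 3 syntax (the function name is illustrative; the real fix would live inside Random.seed, as the quoted 2.x code shows):

```python
import os
import time

def robust_seed_material(nbytes=16):
    """Return an int suitable for seeding random.Random.

    Mirrors the random.py fallback, but catches OSError as well as
    NotImplementedError, so early-boot systems without /dev/urandom
    fall back to the clock instead of failing at import time.
    """
    try:
        return int.from_bytes(os.urandom(nbytes), "big")
    except (NotImplementedError, OSError):
        return int(time.time() * 256)  # use fractional seconds

seed = robust_seed_material()
print(isinstance(seed, int))
```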
[issue10510] distutils upload/register should use CRLF in HTTP requests
Changes by Ian Cordasco : -- nosy: +icordasc ___ Python tracker <http://bugs.python.org/issue10510> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10510] distutils upload/register should use CRLF in HTTP requests
Ian Cordasco added the comment: I've attached a patch that should fix this issue. Please review and let me know if changes are necessary. -- keywords: +patch Added file: http://bugs.python.org/file35067/compliant_distutils.patch ___ Python tracker <http://bugs.python.org/issue10510> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17994] Change necessary in platform.py to support IronPython
Ian Cordasco added the comment: I missed the fact that the user gave me the information from sys.version: https://stackoverflow.com/questions/16545027/ironpython-error-in-url-request?noredirect=1#comment23847257_16545027 I'll throw together a failing test with this and run it against 2.7, and the 3.x branches. -- ___ Python tracker <http://bugs.python.org/issue17994> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21540] PEP 8 should recommend "is not" and "not in"
Changes by Ian Cordasco : -- nosy: +icordasc ___ Python tracker <http://bugs.python.org/issue21540> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10510] distutils upload/register should use CRLF in HTTP requests
Ian Cordasco added the comment: Per discussion on twitter (https://twitter.com/merwok_/status/468518605135835136) I'm bumping this to make sure it's merged. -- ___ Python tracker <http://bugs.python.org/issue10510> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16877] Odd behavior of ~ in os.path.abspath and os.path.realpath
New submission from Ian Shields: Filespecs that start with ~ are not properly handled by os.path.realpath or os.path.abspath (and maybe other functions). The following console output from Fedora 17 using Python 3.2 illustrates the issue. Similar issue in 2.7.

[ian@attic4 developerworks]$ cd ..
[ian@attic4 ~]$ mkdir testpath
[ian@attic4 ~]$ cd testpath
[ian@attic4 testpath]$ pwd
/home/ian/testpath
[ian@attic4 testpath]$ python3
Python 3.2.3 (default, Jun 8 2012, 05:36:09)
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.abspath("~")
'/home/ian/testpath/~'
>>> os.path.realpath("~/xxx/zzz")
'/home/ian/testpath/~/xxx/zzz'
>>> os.path.abspath("~/..")
'/home/ian/testpath'

The functions should probably use expanduser to determine whether the path is already absolute. Documentation at http://docs.python.org/3/library/os.path.html is also misleading, as this is not how these functions work if given an absolute path to start with. -- components: None messages: 179170 nosy: ibshields priority: normal severity: normal status: open title: Odd behavior of ~ in os.path.abspath and os.path.realpath type: behavior versions: Python 3.2 ___ Python tracker <http://bugs.python.org/issue16877> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16877] Odd behavior of ~ in os.path.abspath and os.path.realpath
Ian Shields added the comment: Regarding my last comment: I had missed the note in the documentation for os.path.join: "Join one or more path components intelligently. If any component is an absolute path, all previous components (on Windows, including the previous drive letter, if there was one) are thrown away, and joining continues". So the issue is really the behavior of os.path.join, where the intelligence in the joining does not recognize that "~" is usually expanded to an absolute path. Consider the following Bash commands:

[ian@attic4 testpath]$ pwd
/home/ian/testpath
[ian@attic4 testpath]$ echo $(cd ~/testpath/..;pwd)
/home/ian
[ian@attic4 testpath]$ cd /home/ian/~
bash: cd: /home/ian/~: No such file or directory

Now consider some Python:

>>> os.getcwd()
'/home/ian/testpath'
>>> os.path.join(os.getcwd(), "/home/ian")
'/home/ian'
>>> os.path.expanduser("~")
'/home/ian'
>>> os.path.join(os.getcwd(), "~")
'/home/ian/testpath/~'
>>> os.path.expanduser(os.path.abspath("~"))
'/home/ian/testpath/~'
>>> os.path.abspath(os.path.expanduser("~"))
'/home/ian'

I find the Python behavior rather odd. I can live with it now that I know about it, but if it is really intentional it would help to document this rather odd behavior somewhat better. -- status: pending -> open ___ Python tracker <http://bugs.python.org/issue16877> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
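The practical takeaway from the transcript above is that order matters: expanduser must run before abspath (or join), because neither treats "~" specially. A small sketch (the path is illustrative):

```python
import os.path

p = "~/testpath/.."

# abspath alone treats "~" as a literal directory name relative to the cwd:
literal = os.path.abspath(p)

# Expanding first, then normalizing, gives the intended home-relative result:
expanded = os.path.abspath(os.path.expanduser(p))

print(literal)   # ends with '/~/testpath/..' collapsed under the cwd
print(expanded)  # the expanded home directory path
```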
[issue16877] Odd behavior of ~ in os.path.abspath and os.path.realpath
Ian Shields added the comment: Oddity may be in the eye of the beholder. I've been programming and scripting for about 40 years, including several *IX shells and many other systems. I'm relatively new to Python. Mostly the results of doing things in Python are what I expect. Not doing expansion of a leading tilde when I ask for an absolute path is not what I expect. So to me it's odd. Or different. Or just not what I expect. Substitute "unexpected" for "odd" if you like. Sure, tilde expansion wasn't part of the Bourne shell, but it's been in POSIX shells for about the same amount of time that Python has been around, so it's odd to me that Python differs in this way. It's not hard to work around now that I know about it. -- ___ Python tracker <http://bugs.python.org/issue16877> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com