Re: Maintaining a backported module
Steven D'Aprano writes:

> I'm now at the point where I wish to backport this module to support
> versions of Python back to 3.1 at least and possibly 2.7, and put it
> up on PyPI.

Ned Batchelder has managed something at least as ambitious (supporting Python versions 2.4 through 3.3), which should be helpful in your case:
http://nedbatchelder.com/blog/200910/running_the_same_code_on_python_2x_and_3x.html

Eli Bendersky also has an article with specific advice:
http://eli.thegreenplace.net/2010/05/19/making-code-compatible-with-python-2-and-3/

--
 \   “If you are unable to leave your room, expose yourself in the
  `\  window.” —instructions in case of fire, hotel, Finland
_o__)
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 24-10-13 06:54, Steven D'Aprano wrote:
> As some of you are aware, I have a module accepted into the standard library:
>
> http://docs.python.org/3.4/library/statistics.html
>
> I'm now at the point where I wish to backport this module to support
> versions of Python back to 3.1 at least and possibly 2.7, and put it up
> on PyPI.
>
> I'm looking for advice on best practices for doing so. Any suggestions
> for managing bug fixes and enhancements to two separate code-bases
> without them diverging too much?
>
> Other than "Avoid it" :-)

I would use only one code-base and make a branch for the 2.7 version. You can then adapt the 2.7 version where really necessary, but in general just merge adaptations made in the main branch into the 2.7 branch. I am assuming you are using some kind of version control.

--
Antoon Pardon
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
Chris The Angel said: "I won't flame you, but I will disagree with you :)"

Good, that's why I'm here ;)

"but there are plenty of things you won't get - and the gap will widen with every Python release."

Yes, I skimmed that laundry list before deciding. I still think I made the right decision. I'll port it someday. I'll own the iPhone 5s (or whatever the latest one is) someday. I'm not an early adopter kind of person.

I'd like to think my project (which looks like it is getting funding, hooray!) will advance the glory of Pythonistan simply by doing cool stuff with 2.7. I'll port it someday (unless it flops, which won't happen, because I won't let it).

Good discussion though, thanks!
--
https://mail.python.org/mailman/listinfo/python-list
Re: pycrypto: what am I doing wrong?
On Thu, Oct 24, 2013 at 4:22 PM, Paul Pittlerson wrote:
> msg = cipher.encrypt(txt)
> '|s\x08\xf2\x12\xde\x8cD\xe7u*'
>
> msg = cipher.encrypt(txt)
> '\xa1\xed7\xb8h
>
> # etc

Is this strictly the code you're using? AES is a stream cipher; what you've effectively done is encrypt the text twice, once as a follow-on message from the other. To decrypt the second, you'll need to include the first - or treat it as a stream, and decrypt piece by piece.

Untested code:

import hashlib
from Crypto.Cipher import AES
from Crypto import Random

# Shorter version of your key hashing:
key = hashlib.sha256("my key").digest()
iv = Random.new().read(AES.block_size)
cipher = AES.new(key, AES.MODE_CFB, iv)

txt = 'hello world'
msg1 = cipher.encrypt(txt)
msg2 = cipher.encrypt(txt)

# You may need to reset cipher here, I'm not sure.
# cipher = AES.new(key, AES.MODE_CFB, iv)
cipher.decrypt(iv)  # Initialize the decrypter with the init vector
print(cipher.decrypt(msg1))
print(cipher.decrypt(msg2))

I don't have pycrypto to test with, but running the same code with Pike's Crypto module does what I expect here.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On Thu, Oct 24, 2013 at 6:01 PM, Peter Cacioppi wrote: > I'd like to think my project (which looks like it is getting funding, > hooray!) will advance the glory of Pythonistan simply by doing cool stuff > with 2.7. I'll port it someday (unless it flops, which won't happen, because > I won't let it). Which is why I mentioned those helpful __future__ directives, so you can code now and be better able to port in five years when you feel that it's important enough to do so. It's a good system. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
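A minimal illustration of the __future__ directives being referred to, as available on Python 2.6/2.7 (the comments describe the effect on 2.7; this is a sketch, not an exhaustive list):

from __future__ import print_function, division, unicode_literals, absolute_import

print(1 / 2)        # 0.5 - true division, as in Python 3
print('spam')       # print is a function, not a statement
print(type(''))     # string literals are unicode, as in Python 3

Code written this way on 2.7 needs far fewer changes when it is eventually run under 3.x.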
Re: pycrypto: what am I doing wrong?
On 24.10.2013 09:07, Chris Angelico wrote:
> On Thu, Oct 24, 2013 at 4:22 PM, Paul Pittlerson wrote:
>> msg = cipher.encrypt(txt)
>> '|s\x08\xf2\x12\xde\x8cD\xe7u*'
>>
>> msg = cipher.encrypt(txt)
>> '\xa1\xed7\xb8h
>>
>> # etc
>
> AES is a stream cipher;

No, it is definitely not! It's a block cipher! However, since he uses CFB mode of operation, it behaves like a stream cipher.

Best regards,
Joe

--
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa
--
https://mail.python.org/mailman/listinfo/python-list
Re: pycrypto: what am I doing wrong?
On 24.10.2013 07:22, Paul Pittlerson wrote: > What am I doing wrong? You're not reinitializing the internal state of the crypto engine. When you recreate "cipher" with the same IV every time, it will work. Best regards, Joe -- >> Wo hattest Du das Beben nochmal GENAU vorhergesagt? > Zumindest nicht öffentlich! Ah, der neueste und bis heute genialste Streich unsere großen Kosmologen: Die Geheim-Vorhersage. - Karl Kaos über Rüdiger Thomas in dsa -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 10/23/2013 09:57 PM, Peter Cacioppi wrote: Moreover, you get a lot of the good stuff with 2.7. And the "good stuff" in 2.7 makes it easier to take that last step to 3.x when the time comes to do so. -- ~Ethan~ -- https://mail.python.org/mailman/listinfo/python-list
Re: pycrypto: what am I doing wrong?
On 24.10.2013 09:33, Johannes Bauer wrote:
> On 24.10.2013 07:22, Paul Pittlerson wrote:
>
>> What am I doing wrong?
>
> You're not reinitializing the internal state of the crypto engine. When
> you recreate "cipher" with the same IV every time, it will work.

Code that works:

#!/usr/bin/python3
import hashlib
from Crypto.Cipher import AES
from Crypto import Random

h = hashlib.new('sha256')
h.update(b'my key')
key = h.digest()

iv = Random.new().read(AES.block_size)
cipher = AES.new(key, AES.MODE_CFB, iv)

txt = 'hello world'
msg = cipher.encrypt(txt)
print(msg)

cipher = AES.new(key, AES.MODE_CFB, iv)   # Use *same* IV!
origtxt = cipher.decrypt(msg)
print(origtxt)

Also note that manually deriving a symmetric secret using SHA256 is an INCREDIBLY bad idea. Have a look at PBKDF2.

Best regards,
Joe

--
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa
--
https://mail.python.org/mailman/listinfo/python-list
Re: pycrypto: what am I doing wrong?
On Thu, Oct 24, 2013 at 6:30 PM, Johannes Bauer wrote: > On 24.10.2013 09:07, Chris Angelico wrote: >> AES is a stream cipher; > > No, it is definitely not! It's a block cipher! However, since he uses > CFB mode of operation, it behaves like a stream cipher. Sorry! Quite right. What I meant was, it behaves differently based on its current state. The SHA256 of "Hello, world!" is 315f5b...edd3 no matter how many times you calculate it; but the AES-encrypted text is going to change based on the previously-encrypted text. Hence the need to either, as stated in your other email, reset the internal state, or, as stated in my previous one, treat it as a stream. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 24/10/2013 04:53, Ben Finney wrote:
> Tim Daneliuk writes:
>
>> 'Easy there Rainman
>
> I'll thank you not to use mental deficiency as some kind of insult.
> Calling someone “Rainman” is to use autistic people as the punchline of
> a joke. We're a community that doesn't welcome such ableist slurs.

I saw no such insult. I've been diagnosed with Asperger, and have had the fun and games that goes with raising a youngster with Asperger who is also partially deaf.

--
Python is the second best programming language in the world.
But the best has yet to be invented. Christian Tismer

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 24/10/2013 01:18, Steven D'Aprano wrote: - the majority of packages on PyPI now support Python 3, so the "Wall of Shame" is now renamed the "Wall of Superpowers": https://python3wos.appspot.com/ Thank you, thank you, thank you, it's been driving me nuts trying to remember what the flaming thing was called :) -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 24/10/2013 01:17, Steven D'Aprano wrote:
> On Wed, 23 Oct 2013 14:27:29 +0100, Mark Lawrence wrote:
>
>> I confess I don't understand how *nix people endure having to compile
>> code instead of having a binary install.
>
> Because it's trivially easy under Unix? Three commands:
>
>   ./configure
>   make
>   make install
>
> will generally do the job. Unless it doesn't work, in which case it's a
> world of pain. But that's no different from Windows, except that
> somebody else has already worked through the pain for you.

Precisely my point. I suspect being a Python core dev must do wonders for the moral fibre. Your pristine, fully reviewed patch improves performance by 10,000% and works wonderfully except on buildbot xyz and has to be reverted. How do they do it?

--
Python is the second best programming language in the world.
But the best has yet to be invented. Christian Tismer

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 24/10/2013 05:57, Peter Cacioppi wrote: Moreover, you get a lot of the good stuff with 2.7. Much of it backported from Python 3. -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
> gmail.com> writes:
>
> I am starting to have doubts as to whether Python 3.x will ever be
> actually adopted by the Python community at large as their standard.

We're planning to start the switch on 25th December 2013, 14h UTC. It should be finished at most 48 hours later. You should expect some intermittent problems during the first few hours, but at the end all uses of Twisted will be replaced with Tornado and asyncio (and camelCase methods will have ceased to be).

By the way, if you want to join us, one week later we'll also switch the Internet to IPv6 (except Germany).

Regards

Antoine.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On Thu, Oct 24, 2013 at 7:26 PM, Mark Lawrence wrote: > Precisely my point. I suspect being a Python core dev must do wonders for > the moral fibre. Your pristine, fully reviewed patch improves performance > by 10,000% and works wonderfully except on buildbot xyz and has to be > reverted. How do they do it? It's called Diplomacy, and it's a class skill for bards, clerics, druids, monks, paladins, rogues, and core developers. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 24/10/2013 07:30, Ethan Furman wrote:
> On 10/23/2013 09:54 PM, Steven D'Aprano wrote:
>> I'm looking for advice on best practices for doing so. Any suggestions
>> for managing bug fixes and enhancements to two separate code-bases
>> without them diverging too much?
>
> Confining your code to the intersection of 2.7 and 3.x is probably
> going to be the easiest thing to do as 2.7 has a bunch of 3.x features.
>
> Sadly, when I backported Enum I was targeting 2.5 - 3.x because I have
> systems still running 2.5. That was *not* a fun experience. :(
>
> --
> ~Ethan~

Have you or could you publish anything regarding your experiences? I suspect it would be an enlightening read for a lot of us.

--
Python is the second best programming language in the world.
But the best has yet to be invented. Christian Tismer

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On Thu, Oct 24, 2013 at 7:30 PM, Antoine Pitrou wrote: >> gmail.com> writes: >> >> I am starting to have doubts as to whether Python 3.x will ever be > actually adopted by the Python community at >> large as their standard. > > We're planning to start the switch on 25th December 2013, 14h UTC. > It should be finished at most 48 hours later. You should expect some > intermittent problems during the first few hours, but at the end > all uses of Twisted will be replaced with Tornado and asyncio (and > camelCase methods will have ceased to be). > > By the way, if you want to join us, one week later we'll also switch > the Internet to IPv6 (except Germany). Excellent! It's about time. IPv4 depletion happened some time ago. What's your schedule for the replacement of Windows XP (with either a later Windows or with Linux, open to either option)? ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
Mark Lawrence writes:

> On 24/10/2013 04:53, Ben Finney wrote:
>> Tim Daneliuk writes:
>>
>>> 'Easy there Rainman
>>
>> I'll thank you not to use mental deficiency as some kind of insult.
>> Calling someone “Rainman” is to use autistic people as the punchline
>> of a joke. We're a community that doesn't welcome such ableist
>> slurs.
>
> I saw no such insult.

Fortunately, this forum is not all about you.

--
 \   “We must become the change we want to see.” —Mohandas Gandhi
  `\
_o__)
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 24/10/2013 09:30, Antoine Pitrou wrote: gmail.com> writes: I am starting to have doubts as to whether Python 3.x will ever be actually adopted by the Python community at large as their standard. We're planning to start the switch on 25th December 2013, 14h UTC. It should be finished at most 48 hours later. You should expect some intermittent problems during the first few hours, but at the end all uses of Twisted will be replaced with Tornado and asyncio (and camelCase methods will have ceased to be). By the way, if you want to join us, one week later we'll also switch the Internet to IPv6 (except Germany). Regards Antoine. You forgot to mention that the whole world is switching to driving on the left hand side of the road at the same time. -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 24/10/2013 09:37, Chris Angelico wrote:
> On Thu, Oct 24, 2013 at 7:30 PM, Antoine Pitrou wrote:
>>> gmail.com> writes:
>>>
>>> I am starting to have doubts as to whether Python 3.x will ever be
>>> actually adopted by the Python community at large as their standard.
>>
>> We're planning to start the switch on 25th December 2013, 14h UTC.
>> It should be finished at most 48 hours later. You should expect some
>> intermittent problems during the first few hours, but at the end all
>> uses of Twisted will be replaced with Tornado and asyncio (and
>> camelCase methods will have ceased to be).
>>
>> By the way, if you want to join us, one week later we'll also switch
>> the Internet to IPv6 (except Germany).
>
> Excellent! It's about time. IPv4 depletion happened some time ago.
> What's your schedule for the replacement of Windows XP (with either a
> later Windows or with Linux, open to either option)?
>
> ChrisA

Sorry, there are problems with all versions of both Windows and Linux, so we're reverting with immediate effect to VMS.

--
Python is the second best programming language in the world.
But the best has yet to be invented. Christian Tismer

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
Angelico said: "Which is why I mentioned those helpful __future__ directives," OK, thanks, I'll study the __future__. I will port to 3.x in less than 60 months, or my name isn't Cacioppi. (So, in the worst case, I might have to backport a change to my name). -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On Thu, 24 Oct 2013 17:59:59 +1100, Ben Finney wrote:

> Steven D'Aprano writes:
>
>> I'm now at the point where I wish to backport this module to support
>> versions of Python back to 3.1 at least and possibly 2.7, and put it up
>> on PyPI.
>
> Ned Batchelder has managed something at least as ambitious (supporting
> Python versions 2.4 through 3.3), which should be helpful in your case:
> http://nedbatchelder.com/blog/200910/running_the_same_code_on_python_2x_and_3x.html

I too have written code that supports 2.4 - 3.3 (although only relatively small modules). That's not the problem here. My problem is that I have two chunks of code:

1) statistics.py in the standard library, which is written for Python 3.3/3.4 only, and should be as clean as possible;

2) a backport of it, on PyPI, which will support older Pythons, and may be slower/uglier if need be.

My problem is not supporting 2.7 and 3.4 in the one code base. My problem is, how do I prevent #1 and #2 from gradually diverging?

The easy answer is "unit tests", but the unit tests for 1) are in the std lib and target 3.4, while the unit tests for 2) will be on PyPI and won't. So how do I keep the unit tests from diverging?

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
Steven D'Aprano writes:

> 1) statistics.py in the standard library, which is written for
>    Python 3.3/3.4 only, and should be as clean as possible;
>
> 2) a backport of it, on PyPI, which will support older Pythons, and
>    may be slower/uglier if need be.
>
> My problem is not supporting 2.7 and 3.4 in the one code base. My
> problem is, how do I prevent #1 and #2 from gradually diverging?

Generate one of them automatically from the other. My recommendation would be to edit only the code written for Python 3.4, have a fully-automatic generator for the code targeting Python 2.7, and ignore all earlier versions.

How does the “3to2” tool <http://pypi.python.org/pypi/3to2/> fare with converting your code?

> The easy answer is "unit tests", but the unit tests for 1) are in the
> std lib and target 3.4, while the unit tests for 2) will be on PyPI
> and won't. So how do I keep the unit tests from diverging?

Again, I'd recommend you generate the Python 2 code automatically from the actively-maintained Python 3 code.

--
 \   “Beware of and eschew pompous prolixity.” —Charles A. Beardsley
  `\
_o__)
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
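For a quick experiment with that suggestion, 3to2 can be driven from the command line; the flags below assume it mirrors 2to3's interface (which it is built on), so check its own documentation before relying on them:

$ pip install 3to2
$ 3to2 statistics.py        # print the proposed conversion as a diff
$ 3to2 -w statistics.py     # rewrite the file in place

Wiring the second form into the packaging step is what makes the Python 2 code "fully automatic" rather than a second hand-maintained copy.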
Re: Will Python 3.x ever become the actual standard?
On Thu, 24 Oct 2013 09:43:18 +0100, Mark Lawrence wrote:

> On 24/10/2013 09:30, Antoine Pitrou wrote:
>>> gmail.com> writes:
>>>
>>> I am starting to have doubts as to whether Python 3.x will ever be
>>> actually adopted by the Python community at large as their standard.
>>
>> We're planning to start the switch on 25th December 2013, 14h UTC. It
>> should be finished at most 48 hours later. You should expect some
>> intermittent problems during the first few hours, but at the end all
>> uses of Twisted will be replaced with Tornado and asyncio (and
>> camelCase methods will have ceased to be).
>>
>> By the way, if you want to join us, one week later we'll also switch
>> the Internet to IPv6 (except Germany).
>>
>> Regards
>>
>> Antoine.
>
> You forgot to mention that the whole world is switching to driving on
> the left hand side of the road at the same time.

That is not true. Because of the scale of the problem, bicycles will be switching 1st, with cars & lorries switching a week later.

--
Children aren't happy without something to ignore,
And that's what parents were created for. -- Ogden Nash
--
https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 10/24/13 2:59 AM, Ben Finney wrote:
> Steven D'Aprano writes:
>
>> I'm now at the point where I wish to backport this module to support
>> versions of Python back to 3.1 at least and possibly 2.7, and put it up
>> on PyPI.
>
> Ned Batchelder has managed something at least as ambitious (supporting
> Python versions 2.4 through 3.3), which should be helpful in your case:
> http://nedbatchelder.com/blog/200910/running_the_same_code_on_python_2x_and_3x.html

FWIW, coverage.py currently runs on 2.3 through 3.4. It mostly comes down to:

1. avoiding newer features (decorators! generator expressions! rpartition!),
2. using a compatibility layer like "six" (though I started my own called backward.py before six existed),
3. using some awkward syntax workarounds (sys.exc_info()[1] to get the current exception),
4. somehow finding a way to test on all those versions (pythonz helps, and you have to limit your dependencies).

Also, I've just started the coverage.py 4.x branch, which will run on >=2.6 and >=3.2, and it's very nice to get rid of some of that compatibility stuff.

--Ned.

> Eli Bendersky also has an article with specific advice:
> http://eli.thegreenplace.net/2010/05/19/making-code-compatible-with-python-2-and-3/

--
https://mail.python.org/mailman/listinfo/python-list
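To illustrate the kind of workaround meant in point 3, here is a minimal sketch (the function name is illustrative, not taken from coverage.py): "except Exception as e" is a syntax error before 2.6 and "except Exception, e" is gone in 3.x, so sys.exc_info() is the one spelling that works everywhere.

import sys

def current_exception():
    # Portable way to get the exception currently being handled,
    # on any Python from 2.x through 3.x.
    return sys.exc_info()[1]

try:
    1 / 0
except ZeroDivisionError:
    print("caught: %r" % current_exception())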
Re: Reading From stdin After Command Line Redirection
On 2013-10-24 14:53, Ben Finney wrote:
> I think the request is incoherent: If you want to allow the user to
> primarily interact with the program, this is incompatible with also
> wanting to redirect standard input.

As a counter-example, might I suggest one I use regularly:

  gimme_stuff_on_stdout.sh | vim -

I want to use vim interactively, but I want it to read the file-to-edit from stdin.

-tkc
--
https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 24.10.2013 06:54, Steven D'Aprano wrote:
> As some of you are aware, I have a module accepted into the standard library:
>
> http://docs.python.org/3.4/library/statistics.html
>
> I'm now at the point where I wish to backport this module to support
> versions of Python back to 3.1 at least and possibly 2.7, and put it up
> on PyPI.
>
> I'm looking for advice on best practices for doing so. Any suggestions
> for managing bug fixes and enhancements to two separate code-bases
> without them diverging too much?

Hi Steven,

your module doesn't process text and doesn't use any fancy 3.x features except raise from None. It should be just a matter of an hour or two to port and package your code.

1) Add

from __future__ import division

import sys as _sys
if _sys.version_info[0] == 2:
    range = xrange

to the top of your files to use 3.x division behaviour and the generator range.

2) Remove "from None" in the exception handling code.

3) Perhaps for 2.7 use iteritems() in _sum()'s sorted(partials.items()).

For testing under 2.6 you can use the unittest2 package. It offers all the nice new features like assertIsInstance() and assertIn().

I highly recommend tox + py.test for testing. You can test your package with all versions of Python with just one command.

We already have a namespace for backports that you can use: backports. :)

Feel free to contact me via mail or in #python-dev if you have any questions.

Christian
--
https://mail.python.org/mailman/listinfo/python-list
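A minimal tox.ini along these lines is enough to drive the tox + py.test suggestion; the environment list here is only an assumption, so adjust it to the interpreters actually targeted:

[tox]
envlist = py26,py27,py32,py33

[testenv]
deps = pytest
commands = py.test

Running "tox" then builds an sdist of the package, installs it into one virtualenv per listed interpreter, and runs py.test in each of them, so the whole version matrix really is one command.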
Re: Reading From stdin After Command Line Redirection
On 24 October 2013 01:09, Tim Daneliuk wrote:
>
> Now that I think about it, as I recall from the prehistoric era of writing
> lots of assembler and C, if you use shell redirection, stdin shows
> up as a handle to the file

Yes, this is true. A demonstration using seek (on Windows, but it is the same in this sense):

$ cat test.py
import sys
sys.stdin.seek(0)
print('Seeked fine without errors')

$ python test.py
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    sys.stdin.seek(0)
IOError: [Errno 9] Bad file descriptor

$ python test.py < other.dat
Seeked fine without errors

> and there is no way to retrieve/reset it
> its default association with the tty/pty. Since python is layered on
> top of this, I expect the same would be the case here as well.

I think it is true that you cannot restore the association, but you can just open the tty explicitly as described here:
http://superuser.com/questions/569432/why-can-i-see-password-prompts-through-redirecting-output

(I can't test that right now as it obviously doesn't work on Windows.)

Oscar
--
https://mail.python.org/mailman/listinfo/python-list
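On POSIX systems, the approach in that superuser answer translates to Python roughly as follows (an untested sketch; /dev/tty is the controlling terminal, so it still points at the keyboard even when stdin has been redirected to a file):

import sys

def ask_user(prompt):
    # Read the answer from the controlling terminal rather than from
    # the (possibly redirected) standard input.
    with open('/dev/tty', 'r') as tty:
        sys.stdout.write(prompt)
        sys.stdout.flush()
        return tty.readline().rstrip('\n')

# Works even when invoked as:  python script.py < data.txt
answer = ask_user('Continue? [y/n] ')
print('you said: ' + answer)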
Re: Maintaining a backported module
On Thu, 24 Oct 2013 06:36:04 -0400, Ned Batchelder wrote: > coverage.py currently runs on 2.3 through 3.4 You support all the way back to 2.3??? I don't know whether to admire your dedication, or back away slowly since you're obviously a crazy person :-) -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 10/23/2013 11:54 PM, Ben Finney wrote: we don't welcome ableist (nor sexist) behaviour. Well now I just feel so very awful ... -- Tim Daneliuk tun...@tundraware.com PGP Key: http://www.tundraware.com/PGP/ -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 24 October 2013 12:58, Tim Daneliuk wrote: > On 10/23/2013 11:54 PM, Ben Finney wrote: >> >> we don't welcome ableist (nor sexist) behaviour. > > Well now I just feel so very awful ... Please end this line of discussion. Ben is right: your comment was entirely unnecessary and could easily offend. Oscar -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On Thursday, October 24, 2013 5:16:58 PM UTC+5:30, Steven D'Aprano wrote:
> On Thu, 24 Oct 2013 06:36:04 -0400, Ned Batchelder wrote:
>
>> coverage.py currently runs on 2.3 through 3.4
>
> You support all the way back to 2.3???
>
> I don't know whether to admire your dedication, or back away slowly since
> you're obviously a crazy person :-)

Yes, in the end you have to choose between a rock and a hard place: either you write idiomatic, beautiful 3-code that is not 2-compatible, or you write not-so-beautiful code putting compatibility at the highest priority.

Can't think of examples right now, but I've seen very old python code with True and False defined inside a try-block and all that. Looks ugly but is compatibility-wise correct and appropriate.

BTW saw this yesterday: an interesting read correlating OSS projects that succeed and fail with the difference between a commitment to computer science and to computer engineering. Consider it an early-warning system!

http://www.forbes.com/sites/danwoods/2013/10/22/why-hasnt-open-source-taken-over-storage/
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
> I am starting to have doubts as to whether Python 3.x will ever be > actually adopted by the Python community at large as their standard. > Years have passed, and a LARGE number of Python programmers has not > even bothered learning version 3.x. Why am I bothered by this? Because > of lot of good libraries are still only for version 2.x, and there is > no sign of their being updated for v3.x. I get the impression as if > 3.x, despite being better and more advanced than 2.x from the > technical point of view, is a bit of a letdown in terms of adoption. Some Linux distributions will certainly switch to Python 3 by default, sooner or later. Fedora has decided to do so for their 22 release: http://lwn.net/Articles/571528/ -- DW -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
Am 24.10.2013 14:29, schrieb Damien Wyart: I am starting to have doubts as to whether Python 3.x will ever be actually adopted by the Python community at large as their standard. Years have passed, and a LARGE number of Python programmers has not even bothered learning version 3.x. Why am I bothered by this? Because of lot of good libraries are still only for version 2.x, and there is no sign of their being updated for v3.x. I get the impression as if 3.x, despite being better and more advanced than 2.x from the technical point of view, is a bit of a letdown in terms of adoption. Some Linux distributions will certainly switch to Python 3 by default, sooner or later. Fedora has decided to do so for their 22 release: http://lwn.net/Articles/571528/ -- DW Saucy Salamander (Ubuntu 13.10, released oct 17th) comes with Python 3.3. -- https://mail.python.org/mailman/listinfo/python-list
Re: question
On 23/10/2013 16:24, Cesar Campana wrote:
> Hi!
>
> I'm installing the python library for version 2.7 but I'm getting the
> error unable to find vcvarsall.bat
>
> I was looking on line but it says it is related to Visual Studio...?
>
> Can you guys please help me to fix this...

The other responses were right on. But just in case you don't know some of the background, let me try to fill it in.

Python itself, much of the standard library, and many of the third-party extension libraries are written in C, completely or partly. If you get source code for any such code, you're expected to compile it, and on Windows, that usually means with Microsoft's C compiler, usually found within Visual Studio. There are free versions (usually with the name "express" as part of their description) on Microsoft's site. vcvarsall.bat is the first step towards finding a particular version of the compiler.

Now, if you don't have the right version of that compiler (or any version), you will probably be more comfortable using a precompiled binary version of the package. You still have to match the version against whatever CPython you're using: 32 or 64 bit, 2.7 or 3.3, or whatever.

Depending on just what you were trying to install, you could look for such a binary package on python.org, on Stackoverflow, or on http://www.lfd.uci.edu/~gohlke/pythonlibs/

--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 10/24/2013 07:10 AM, Oscar Benjamin wrote: On 24 October 2013 12:58, Tim Daneliuk wrote: On 10/23/2013 11:54 PM, Ben Finney wrote: we don't welcome ableist (nor sexist) behaviour. Well now I just feel so very awful ... Please end this line of discussion. Ben is right: your comment was entirely unnecessary and could easily offend. Oscar And his condescension was even more offensive. But now I feel bad about myself and it's all your fault. -- --- Tim Daneliuk -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
Am Donnerstag, 24. Oktober 2013 15:41:52 UTC+2 schrieb Tim Daneliuk: > On 10/24/2013 07:10 AM, Oscar Benjamin wrote: >> On 24 October 2013 12:58, Tim Daneliuk wrote: >>> On 10/23/2013 11:54 PM, Ben Finney wrote: we don't welcome ableist (nor sexist) behaviour. >>> Well now I just feel so very awful ... >> Please end this line of discussion. Ben is right: your comment was >> entirely unnecessary and could easily offend. >> Oscar > And his condescension was even more offensive. > But now I feel bad about myself and it's all your fault. No it is not. You are the only one responsible for your feelings. Or is it your fault, that I fell bad, because you feel bad? Btw it's your feelings, that grade an impartial statement as condescension. As my feelings grade your statements as extremely offending. And (to follow your arguments) that's your fault! > ... > Tim Daneliuk -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 10/24/2013 06:41 AM, Tim Daneliuk wrote: But now I feel bad about myself and it's all your fault. Really? *plonk* -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 10/24/2013 09:36 AM, feedthetr...@gmx.de wrote: Am Donnerstag, 24. Oktober 2013 15:41:52 UTC+2 schrieb Tim Daneliuk: On 10/24/2013 07:10 AM, Oscar Benjamin wrote: On 24 October 2013 12:58, Tim Daneliuk wrote: On 10/23/2013 11:54 PM, Ben Finney wrote: we don't welcome ableist (nor sexist) behaviour. Well now I just feel so very awful ... Please end this line of discussion. Ben is right: your comment was entirely unnecessary and could easily offend. Oscar And his condescension was even more offensive. But now I feel bad about myself and it's all your fault. No it is not. You are the only one responsible for your feelings. Or is it your fault, that I fell bad, because you feel bad? Btw it's your feelings, that grade an impartial statement as condescension. As my feelings grade your statements as extremely offending. And (to follow your arguments) that's your fault! Yeah ... definitely feel bad ... definitely... --- Tim Daneliuk -- https://mail.python.org/mailman/listinfo/python-list
Re: Reading From stdin After Command Line Redirection
On 10/24/2013 01:14 AM, Mark Lawrence wrote: On 24/10/2013 04:53, Ben Finney wrote: Tim Daneliuk writes: 'Easy there Rainman I'll thank you not to use mental deficiency as some kind of insult. Calling someone “Rainman” is to use autistic people as the punchline of a joke. We're a community that doesn't welcome such ableist slurs. I saw no such insult. I don't know how wide-spread the movie is, but in the US "Rainman" is well-known. Short synopsis: younger brother finds out about older brother; older brother has mental challenges (I don't recall which one); they spend about a week together while younger brother tries to straighten out his life. So unless you're talking about Native American dances it's an inappropriate moniker to apply to someone, particularly when you're being insulting. And having said all that, we can have disagreements without name calling. -- ~Ethan~ -- https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 10/24/13 9:29 AM, Damien Wyart wrote: I am starting to have doubts as to whether Python 3.x will ever be actually adopted by the Python community at large as their standard. Years have passed, and a LARGE number of Python programmers has not even bothered learning version 3.x. Why am I bothered by this? Because of lot of good libraries are still only for version 2.x, and there is no sign of their being updated for v3.x. I get the impression as if 3.x, despite being better and more advanced than 2.x from the technical point of view, is a bit of a letdown in terms of adoption. Some Linux distributions will certainly switch to Python 3 by default, sooner or later. Fedora has decided to do so for their 22 release: http://lwn.net/Articles/571528/ I'm not sure what "by default" means, I hope it isn't that "python" runs Python 3.x. That causes massive confusion on Arch, and will make it very difficult to support a mixed environment. --Ned. -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 10/24/13 7:46 AM, Steven D'Aprano wrote: On Thu, 24 Oct 2013 06:36:04 -0400, Ned Batchelder wrote: coverage.py currently runs on 2.3 through 3.4 You support all the way back to 2.3??? I don't know whether to admire your dedication, or back away slowly since you're obviously a crazy person :-) Yeah, it's kind of crazy. It was a boiling frog situation: the package supported 2.3 back when that was just normal, and there was no obvious time to drop it, so it's been carried along. It's been fun dropping the contortions for coverage.py 4.x, though! --Ned. -- https://mail.python.org/mailman/listinfo/python-list
Unlimited canvas painting program
How to create a program similar to paint, but the difference would be that the cursor would always be in the middle and the canvas moves, or the camera is always fixed on the cursor as it moves around the canvas. And the canvas should be infinite. What would be reasonable to use?

In addition, I want it to draw a line without me having to press a button, just move the mouse.

I'll try to think of a better way to describe what I want, but for now I hope this is sufficient and clear enough.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Unlimited canvas painting program
To start with, you'll want some sort of Graphic User Interface, a popular and common (but not the only) one is TkInter, which you can dive into here: https://wiki.python.org/moin/TkInter -- https://mail.python.org/mailman/listinfo/python-list
Re: Unlimited canvas painting program
So, I'll take the canvas and some kind of mouse tracker, and for each mouse location I'll draw a dot or a 2x2 square or something.

The main thing I have never understood is how I can get the background to move. Let's say I have a 200x200 window. In the middle of it is the cursor that draws. If I move the mouse, the cursor doesn't move, but the canvas moves. So if I move the mouse to the left, I get a line that goes to the left. So I probably must invert the canvas movement: if the mouse goes left, the canvas goes right.

And if possible I would like to save my piece of art as well :D
--
https://mail.python.org/mailman/listinfo/python-list
Re: Unlimited canvas painting program
On 24/10/2013 20:32, markot...@gmail.com wrote: So, i`ll take the canvas, somekind of mouse tracker, for each mouse location il draw a dot or 2X2 square or something. Main thing i have never understood, is how can i get the backround to move. Lets say ia hve 200X200 window. In the middle of it is the cursor that draws. If i move the mouse the cursor doesent move, but the canvas moves. So if i move mouse to the left, i get a line that goes to the left. So i probably must invert the canvas movement. If mouse goes left, canvas goes right. And if possible i would like to save my piece of art aswell :D I think it'll be confusing because it goes against how every other program does it! In a painting program you can point to other things, such as tools, but if the cursor never moves... It would be simpler, IMHO, if you just moved the canvas and stopped the cursor going off the canvas when the user is drawing near the edge, so that the user doesn't need to stop drawing in order to expose more of the canvas. -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 10/24/2013 1:46 PM, Ned Batchelder wrote: On Thu, 24 Oct 2013 06:36:04 -0400, Ned Batchelder wrote: coverage.py currently runs on 2.3 through 3.4 I want to thank you for this package. I have used it when writing test modules for idlelib modules and aiming for 100% coverage forces me to really understand the code tested to hit all the conditional lines. It's been fun dropping the contortions for coverage.py 4.x, though! One request: ignore "if __name__ == '__main__':" clauses at the end of files, which cannot be run under coverage.py, so 100% coverage is reported as 100% instead of 9x%. -- Terry Jan Reedy -- https://mail.python.org/mailman/listinfo/python-list
Re: Maintaining a backported module
On 10/24/2013 01:37, Mark Lawrence wrote:
> On 24/10/2013 07:30, Ethan Furman wrote:
>> On 10/23/2013 09:54 PM, Steven D'Aprano wrote:
>>> I'm looking for advice on best practices for doing so. Any suggestions
>>> for managing bug fixes and enhancements to two separate code-bases
>>> without them diverging too much?
>>
>> Confining your code to the intersection of 2.7 and 3.x is probably
>> going to be the easiest thing to do as 2.7 has a bunch of 3.x features.
>>
>> Sadly, when I backported Enum I was targeting 2.5 - 3.x because I have
>> systems still running 2.5. That was *not* a fun experience. :(
>
> Have you or could you publish anything regarding your experiences? I
> suspect it would be an enlightening read for a lot of us.

The only thing I can add to what's already been posted in this thread (and it was advice I got from Barry -- Thanks, Barry! :) is when your class structure cannot be written the same in both 2 and 3 (because, for example, you are using metaclasses) then you have to define your methods outside of a class, store them in a dictionary, and then use type to create the class. You can look at Enum for an example (his or mine ;).

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list
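A bare-bones sketch of that technique (names here are made up for illustration, not lifted from either Enum implementation): the "class Foo(metaclass=Meta)" syntax is 3.x-only and "__metaclass__ = Meta" is 2.x-only, but calling the metaclass directly works unchanged in both.

class Meta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super(Meta, mcls).__new__(mcls, name, bases, namespace)
        cls.built_by_meta = True      # just to show the metaclass ran
        return cls

# Define the methods as plain functions...
def greet(self):
    return 'hello from %s' % type(self).__name__

# ...collect them in a dict, and let the metaclass build the class.
namespace = {'greet': greet}
MyClass = Meta('MyClass', (object,), namespace)

print(MyClass().greet())        # hello from MyClass
print(MyClass.built_by_meta)    # True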
Python Coverage: testing a program (was: Maintaining a backported module)
Terry Reedy writes:

> On 10/24/2013 1:46 PM, Ned Batchelder wrote:
>> It's been fun dropping the contortions for coverage.py 4.x, though!
>
> One request: ignore "if __name__ == '__main__':" clauses at the end of
> files, which cannot be run under coverage.py, so 100% coverage is
> reported as 100% instead of 9x%.

You can do this already with current Coverage: tell Coverage to exclude specific statements <http://nedbatchelder.com/code/coverage/excluding.html>, and it won't count them for coverage calculations.

--
 \   “If we don't believe in freedom of expression for people we
  `\  despise, we don't believe in it at all.” —Noam Chomsky, 1992-11-25
_o__)
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
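Concretely, following the recipe in the linked docs, the exclusion can be marked in the source per clause:

if __name__ == '__main__':   # pragma: no cover
    main()

or applied to every file at once from .coveragerc (the default "pragma: no cover" pattern has to be repeated when overriding the option):

[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.: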
Re: Will Python 3.x ever become the actual standard?
On 10/24/2013 1:31 PM, Ned Batchelder wrote:
> On 10/24/13 9:29 AM, Damien Wyart wrote:
>>> I am starting to have doubts as to whether Python 3.x will ever be
>>> actually adopted by the Python community at large as their standard.
>>> Years have passed, and a LARGE number of Python programmers has not
>>> even bothered learning version 3.x. Why am I bothered by this? Because
>>> of lot of good libraries are still only for version 2.x, and there is
>>> no sign of their being updated for v3.x. I get the impression as if
>>> 3.x, despite being better and more advanced than 2.x from the
>>> technical point of view, is a bit of a letdown in terms of adoption.
>>
>> Some Linux distributions will certainly switch to Python 3 by default,
>> sooner or later. Fedora has decided to do so for their 22 release:
>> http://lwn.net/Articles/571528/
>
> I'm not sure what "by default" means, I hope it isn't that "python" runs
> Python 3.x. That causes massive confusion on Arch, and will make it very
> difficult to support a mixed environment.

It means that 3.x is always present (with 2.x an option) and Fedora's Python code works with the always-present version. The actual proposal (FEP? ;-):

https://fedoraproject.org/wiki/Changes/Python_3_as_Default

'''
The main goal is switching to Python 3 as a default, in which state:

* DNF is the default package manager instead of Yum, which only works with Python 2
* Python 3 is the only Python implementation in the minimal buildroot
* Python 3 is the only Python implementation on the LiveCD
* Anaconda and all of its dependencies run on Python 3
* cloud-init and all of its dependencies run on Python 3
'''

... "Upstream recommends that /usr/bin/python point to Python 2 runtime for the time being, so if we go with that, there shouldn't be any serious compatibility impact."

--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: Unlimited canvas painting program
On 2013-10-24 12:16, markot...@gmail.com wrote:
> How to create a program similar to paint, but the difference would
> be that the cursor would be always in the middle and the canvas
> moves or the camera is always fixed on the cursor as it moves
> around the canvas. And the canvas should be infinite. What would be
> reasonable to use?

To hold an (effectively) infinite *bitmap* canvas, you'd (effectively) need an (effectively) infinite amount of memory.

However, it could be done with an (effectively) infinite *vector* canvas. That way you could limit the on-screen rendering to just the clipped subset of the vector collection.

You'd still want to make it easy to toggle between "draw" and "stop drawing", but you could make that a mouse-click.

To implement, just pick a GUI library: tkinter, wx, or whatever.

-tkc
--
https://mail.python.org/mailman/listinfo/python-list
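As a rough illustration of the vector idea (a toy sketch with made-up names, no GUI library involved): store strokes in world coordinates and, when repainting, keep only the strokes that intersect the current viewport, shifted into screen coordinates.

class VectorCanvas:
    def __init__(self):
        self.strokes = []                    # lists of (x, y) world points

    def add_stroke(self, points):
        self.strokes.append(list(points))

    def visible_strokes(self, left, top, width, height):
        """Yield strokes shifted into viewport coordinates."""
        right, bottom = left + width, top + height
        for stroke in self.strokes:
            if any(left <= x <= right and top <= y <= bottom for x, y in stroke):
                yield [(x - left, y - top) for x, y in stroke]

canvas = VectorCanvas()
canvas.add_stroke([(0, 0), (50, 50), (100, 0)])
canvas.add_stroke([(5000, 5000), (5100, 5000)])     # far away, never rendered
print(list(canvas.visible_strokes(-10, -10, 200, 200)))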
status of regex modules
The new module is now five years old. PEP 429 Python 3.4 release schedule has it listed under "Other proposed large-scale changes" but I don't believe this is actually happening. Lots of issues on the bug tracker have been closed as fixed in the new module, see issue 2636 for more data. Some work is still being carried out on the old re module. So where do we stand? Is the new module getting into Python 3.x, Python 4.y or what? If no do all the old issues have to be reopened and applied to the re module? Who has to make the final decision on all of this? Note that I've no direct interest as I rarely if ever use the little perishers, I just find this situation bizarre. -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
On 24/10/2013 21:54, Ben Finney wrote: Terry Reedy writes: On 10/24/2013 1:46 PM, Ned Batchelder wrote: It's been fun dropping the contortions for coverage.py 4.x, though! One request: ignore "if __name__ == '__main__':" clauses at the end of files, which cannot be run under coverage.py, so 100% coverage is reported as 100% instead of 9x%. You can do this already with current Coverage: tell Coverage to exclude http://nedbatchelder.com/code/coverage/excluding.html> specific statements, and it won't count them for coverage calculations. An alternative to Ned's great bit of kit is figleaf. This, Ned's and many other useful tools are listed here https://wiki.python.org/moin/PythonTestingToolsTaxonomy -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Unlimited canvas painting program
On 2013-10-24, Tim Chase wrote:
> On 2013-10-24 12:16, markot...@gmail.com wrote:
>> How to create a program similar to paint, but the difference would
>> be that the cursor would be always in the middle and the canvas
>> moves or the camera is always fixed on the cursor as it moves
>> around the canvas. And the canvas should be infinite. What would be
>> reasonable to use?
>
> To hold an (effectively) infinite *bitmap* canvas, you'd (effectively)
> need an (effectively) infinite amount of memory.

Sparse arrays allow it to be sort-of implemented as long as most of the bitmap is "empty".

> However, it could be done with an (effectively) infinite *vector*
> canvas.

Sort of. Eventually you run out of bits to hold the coordinates.

> That way you could limit the on-screen rendering to just the
> clipped subset of the vector collection.

The same can be done for a sparse array of bitmap subsets.

--
Grant Edwards           grant.b.edwards        Yow! I'm totally DESPONDENT
                              at               over the LIBYAN situation
                          gmail.com            and the price of CHICKEN ...
--
https://mail.python.org/mailman/listinfo/python-list
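A sketch of the sparse-array idea (illustrative only): group pixels into fixed-size tiles keyed by tile coordinates, so memory is spent only on tiles that have actually been drawn on, while coordinates can range as far as Python integers do.

from collections import defaultdict

TILE = 256                                     # tile edge length in pixels

def _new_tile():
    return bytearray(TILE * TILE)              # one byte per pixel, all zero

class SparseBitmap:
    def __init__(self):
        self.tiles = defaultdict(_new_tile)    # (tile_x, tile_y) -> pixels

    def set_pixel(self, x, y, value=255):
        tx, px = divmod(x, TILE)
        ty, py = divmod(y, TILE)
        self.tiles[(tx, ty)][py * TILE + px] = value

    def get_pixel(self, x, y):
        tx, px = divmod(x, TILE)
        ty, py = divmod(y, TILE)
        tile = self.tiles.get((tx, ty))        # don't create a tile on read
        return tile[py * TILE + px] if tile else 0

bmp = SparseBitmap()
bmp.set_pixel(10**6, -42)
print(bmp.get_pixel(10**6, -42), len(bmp.tiles))   # 255 1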
Re: Unlimited canvas painting program
On 2013-10-24 21:51, Grant Edwards wrote: > > To hold an (effectively) infinite *bitmap* canvas, you'd > > (effectively) need an (effectively) infinite amount of memory. > > Sparse arrays allow it to be sort-of implemented as long as most of > the bitmap is "empty". Fair enough. Raw bitmap canvas eats memory like a ravenous dog. But a smarter sparse array would certainly ameliorate the problem well. -tkc -- https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
On 10/24/2013 4:54 PM, Ben Finney wrote:
> Terry Reedy writes:
>> On 10/24/2013 1:46 PM, Ned Batchelder wrote:
>>> It's been fun dropping the contortions for coverage.py 4.x, though!
>>
>> One request: ignore "if __name__ == '__main__':" clauses at the end of
>> files, which cannot be run under coverage.py, so 100% coverage is
>> reported as 100% instead of 9x%.
>
> You can do this already with current Coverage: tell Coverage to exclude
> specific statements
> <http://nedbatchelder.com/code/coverage/excluding.html>, and it won't
> count them for coverage calculations.

OK, I added .coveragerc and that works.

In the process of verifying this, I was reminded that there is an overt bug in the html report as displayed by Firefox. The fonts used for line numbers ("class='linenos'") and line text ("class='text'") are slightly different and, more importantly, have different sizes (line numbers are larger). So corresponding numbers and text do not line up with each other. This makes the J,K hotkeys and missing-branch notations much less useful than intended. If I use ctrl-scrollwheel to change text size, both change in the same proportion, so the mismatch is maintained.

--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
On 10/24/2013 01:54 PM, Ben Finney wrote: Terry Reedy writes: On 10/24/2013 1:46 PM, Ned Batchelder wrote: It's been fun dropping the contortions for coverage.py 4.x, though! One request: ignore "if __name__ == '__main__':" clauses at the end of files, which cannot be run under coverage.py, so 100% coverage is reported as 100% instead of 9x%. You can do this already with current Coverage: tell Coverage to exclude http://nedbatchelder.com/code/coverage/excluding.html> specific statements, and it won't count them for coverage calculations. While that's neat (being able to exclude items) is there any reason to ever count the `if __name__ == '__main__'` clause? Are there any circumstances where it could run under Coverage? (Apologies if this is a dumb question, I know nothing about Coverage myself -- but I'm going to go look it up now. ;) -- ~Ethan~ -- https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
Terry Reedy writes:

> OK, I added .coveragerc and that works. In the process of verifying
> this, I was reminded that there is an overt bug in the html report […]

At this point, it's probably best to direct this sequence of bug reports to <https://bitbucket.org/ned/coveragepy/issues/>.

--
 \   “I don't accept the currently fashionable assertion that any
  `\  view is automatically as worthy of respect as any equal and
_o__) opposite view.” —Douglas Adams
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
Ethan Furman writes:

> On 10/24/2013 01:54 PM, Ben Finney wrote:
>> You can do this already with current Coverage: tell Coverage to
>> exclude specific statements
>> <http://nedbatchelder.com/code/coverage/excluding.html>, and it won't
>> count them for coverage calculations.
>
> While that's neat (being able to exclude items) is there any reason to
> ever count the `if __name__ == '__main__'` clause?

I'd rather Coverage not go down the track of accumulating special cases based on how they're written. A simple pragma fixes the problem, no matter what the exact statement looks like.

--
 \   “Prediction is very difficult, especially of the future.”
  `\  —Niels Bohr
_o__)
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python Coverage: testing a program
On 10/24/13 6:28 PM, Ethan Furman wrote:
> On 10/24/2013 01:54 PM, Ben Finney wrote:
>> Terry Reedy writes:
>>> On 10/24/2013 1:46 PM, Ned Batchelder wrote:
>>>> It's been fun dropping the contortions for coverage.py 4.x, though!
>>>
>>> One request: ignore "if __name__ == '__main__':" clauses at the end of
>>> files, which cannot be run under coverage.py, so 100% coverage is
>>> reported as 100% instead of 9x%.
>>
>> You can do this already with current Coverage: tell Coverage to exclude
>> specific statements
>> <http://nedbatchelder.com/code/coverage/excluding.html>, and it won't
>> count them for coverage calculations.
>
> While that's neat (being able to exclude items) is there any reason to
> ever count the `if __name__ == '__main__'` clause? Are there any
> circumstances where it could run under Coverage?
>
> (Apologies if this is a dumb question, I know nothing about Coverage
> myself -- but I'm going to go look it up now. ;)

Sure, if that line appears in program.py, then it will be run if you execute program.py:

  $ coverage run program.py

You can run coverage a number of times, even with different main programs, then combine all the data to produce a combined report. This way you could cover all of the __main__ clauses in a number of files.

--Ned.

> --
> ~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list
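For the multi-run case described here, the usual incantation looks something like the following (the -p / --parallel-mode flag keeps each run's data in its own file so that "coverage combine" can merge them; check the coverage docs for your version):

$ coverage run -p program.py
$ coverage run -p other_program.py
$ coverage combine
$ coverage report -m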
Re: Python Coverage: testing a program
On 10/24/2013 6:36 PM, Terry Reedy wrote:
> OK, I added .coveragerc and that works. In the process of verifying
> this, I was reminded that there is an overt bug in the html report as
> displayed by Firefox. The fonts used for line numbers
> ("class='linenos'") and line text ("class='text'") are slightly
> different and, more importantly, have different sizes (line numbers are
> larger). So corresponding numbers and text do not line up with each
> other. This makes the J,K hotkeys and missing-branch notations much
> less useful than intended. If I use ctrl-scrollwheel to change text
> size, both change in the same proportion, so the mismatch is maintained.

I found the style.css file, and the linenos entry that works to make numbers and text line up is, *after correction*:

/* Source file styles */
.linenos p {
    text-align: right;
    margin: 0;
    padding: 0 .5em 0 .5em;
    color: #99;
    font-family: verdana, sans-serif;
    font-size: .625em; /* 10/16 */
    /* line-height: 1.6em; /* 16/10 */
}

Padding was '0 .5em', which I gather is the same as '0 .5em .5em .5em', which adds padding to both top and bottom, instead of just one of the two. The corresponding text padding is '0 0 0 .5em'. The extra .5 for numbers puts padding needed on the right to separate numbers from the vertical green bars. The commented-out line-height added to mis-alignment.

The verdana numerals *are* larger than those for the inherited page default (font: 'inherit' at the top of the file). But they do not cause misalignment.

--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will Python 3.x ever become the actual standard?
On 23/10/2013 9:13 AM, Tim Golden wrote:
> On 23/10/2013 14:05, Colin J. Williams wrote:
>> On 23/10/2013 8:35 AM, Mark Lawrence wrote:
>>> On 23/10/2013 12:57, duf...@gmail.com wrote:
>>>> Years have passed, and a LARGE number of Python programmers has not
>>>> even bothered learning version 3.x.
>>>
>>> The changes aren't large enough to worry a Python programmer so
>>> effectively there's nothing to learn, other than how to run 2to3.
>>>
>>>> ...there is no sign of their being updated for v3.x.
>>>
>>> Could have fooled me. The number is growing all the time. The biggest
>>> problem is likely (IMHO) to be the sheer size of the code base and
>>> limitations on manpower.
>>>
>>>> I get the impression as if 3.x, despite being better and more advanced
>>>> than 2.x from the technical point of view, is a bit of a letdown in
>>>> terms of adoption.
>>>
>>> I agree with this technical aspect, other than the disastrous flexible
>>> string representation, which has been repeatedly shot to pieces by, er,
>>> one idiot :) As for adoption we'll get there so please don't do a
>>> Captain Mainwaring[1] and panic. People should also be persuaded by
>>> watching this from Brett Cannon:
>>> http://www.youtube.com/watch?v=Ebyz66jPyJg
>>>
>>> Just my 2 pence worth.
>>>
>>> [1] From the extremely popular BBC TV series "Dad's Army" of the late
>>> 60s and 70s.
>>
>> It would be good if more of the packages were available, for Python 3.3,
>> in binary for the Windows user. I am currently wrestling with Pandas,
>> lxml etc.
>
> Can I assume you're aware of the industrious Christopher Gohlke?
>
> http://www.lfd.uci.edu/~gohlke/pythonlibs/
>
> TJG

Tim,

Many thanks. I have installed lxml. help(lxml) looks good. I'll keep this link for future use.

It would be good if, after some verification process for each package, it could be included in PyPI.

Colin W.

PS A problem in building lxml from source is that the build expects Cygwin and I have Mingw32 installed.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Re-raising a RuntimeError - good practice?
Hi,

Thanks to @Steven D'Aprano and @Andrew Berg for your advice.

The advice seems to be that I should move my exception handling higher up, and try to handle it all in one place:

for job in jobs:
    try:
        try:
            job.run_all()
        except Exception as err:  # catch *everything*
            logger.error(err)
            raise
    except (SpamError, EggsError, CheeseError):
        # We expect these exceptions, and ignore them.
        # Everything else is a bug.
        pass

That makes sense, but I'm sorry, I'm still a bit confused.

Essentially, my requirements are:

1. If any job raises an exception, end that particular job, and continue with the next job.

2. Be able to differentiate between different exceptions in different stages of the job. For example, if I get an IOError in self.export_to_csv() versus one in self.gzip_csv_file(), I want to be able to handle them differently. Often this may just result in logging a slightly different friendly error message to the logfile.

Am I still able to handle 2. if I handle all exceptions in the "for job in jobs" loop? How will I be able to distinguish between the same types of Exceptions being raised by different methods?

Also, @Andrew Berg - you mentioned I'm just swallowing the original exception and re-raising a new RuntimeError - I'm guessing this is a bad practice, right? If I use just "raise":

except Exception as err:  # catch *everything*
    logger.error(err)
    raise

that will just re-raise the original exception, right?

Cheers,
Victor

On Thursday, 24 October 2013 15:42:53 UTC+11, Andrew Berg wrote:
> On 2013.10.23 22:23, Victor Hooi wrote:
>> For example:
>>
>> def run_all(self):
>>     self.logger.debug('Running loading job for %s' % self.friendly_name)
>>     try:
>>         self.export_to_csv()
>>         self.gzip_csv_file()
>>         self.upload_to_foo()
>>         self.load_foo_to_bar()
>>     except RuntimeError as e:
>>         self.logger.error('Error running job %s' % self.friendly_name)
>>     ...
>>
>> def export_to_csv(self):
>>     ...
>>     try:
>>         with open(self.export_sql_file, 'r') as f:
>>             self.logger.debug('Attempting to read in SQL export statement from %s' % self.export_sql_file)
>>             self.export_sql_statement = f.read()
>>             self.logger.debug('Successfully read in SQL export statement')
>>     except Exception as e:
>>         self.logger.error('Error reading in %s - %s' % (self.export_sql_file, e), exc_info=True)
>>         raise RuntimeError
>
> You're not re-raising a RuntimeError. You're swallowing all exceptions and
> then raising a RuntimeError. Re-raise the original exception in
> export_to_csv() and then handle it higher up. As Steven suggested, it is a
> good idea to handle exceptions in as few places as possible (and as
> specifically as possible). Also, loggers have an exception method, which
> can be very helpful in debugging when unexpected things happen, especially
> when you need to catch a wide range of exceptions.
>
> --
> CPython 3.3.2 | Windows NT 6.2.9200 / FreeBSD 10.0
--
https://mail.python.org/mailman/listinfo/python-list
Re: Re-raising a RuntimeError - good practice?
On 2013.10.24 20:09, Victor Hooi wrote: > Also, @Andrew Berg - you mentioned I'm just swallowing the original exception > and re-raising a new RuntimeError - I'm guessing this is a bad practice, > right? If I use just "raise" > > except Exception as err: # catch *everything* > logger.error(err) > raise > > that will just re-raise the original exception right? Yes. However, if you are doing logging higher up where you actually handle the exception, then logging here is redundant, and you can simply eliminate the try/catch block completely. -- CPython 3.3.2 | Windows NT 6.2.9200 / FreeBSD 10.0 -- https://mail.python.org/mailman/listinfo/python-list
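A minimal sketch of that difference, with a made-up risky() function: a bare "raise" inside the except block lets the original exception continue to the caller, after the handler has logged it.

import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

def risky():
    return 1 / 0

try:
    try:
        risky()
    except Exception as err:
        logger.error(err)
        raise                        # bare raise: the original ZeroDivisionError continues
except ZeroDivisionError as err:
    print('caller sees the original exception:', err)

Raising RuntimeError there instead would hand the caller a different exception object; without "from err" the original would survive only as implicit context in the traceback.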
Processing large CSV files - how to maximise throughput?
Hi, We have a directory of large CSV files that we'd like to process in Python. We process each input CSV, then generate a corresponding output CSV file. input CSV -> munging text, lookups etc. -> output CSV My question is, what's the most Pythonic way of handling this? (Which I'm assuming For the reading, I'd do something like: with open('input.csv', 'r') as input, open('output.csv', 'w') as output: csv_writer = DictWriter(output) for line in DictReader(input): # Do some processing for that line... output = process_line(line) # Write output to file csv_writer.writerow(output) So for the reading, it'll iterate over the lines one by one, and won't read it into memory, which is good. For the writing - my understanding is that it writes a line to the file object each loop iteration, however, this will only get flushed to disk every now and then, based on my system default buffer size, right? So if the output file is going to get large, there isn't anything I need to take into account for conserving memory? Also, if I'm trying to maximise throughput of the above, is there anything I could try? The processing in process_line is quite light - just a bunch of string splits and regexes. If I have multiple large CSV files to deal with, and I'm on a multi-core machine, is there anything else I can do to boost throughput? Cheers, Victor -- https://mail.python.org/mailman/listinfo/python-list
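For what it's worth, csv.DictWriter also needs the fieldnames up front and a writeheader() call; a minimal runnable version of the sketch above (Python 3 spelling - on 2.7 open the files in binary mode and drop newline=''), with process_line() standing in for the real munging:

import csv

def process_line(row):
    # Stand-in for the real munging: just strip whitespace from each field.
    return {key: value.strip() for key, value in row.items()}

with open('input.csv', newline='') as infile, \
     open('output.csv', 'w', newline='') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:                 # one row at a time, never the whole file
        writer.writerow(process_line(row))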
Re: Processing large CSV files - how to maximise throughput?
On 24/10/2013 21:38, Victor Hooi wrote: > Hi, > > We have a directory of large CSV files that we'd like to process in Python. > > We process each input CSV, then generate a corresponding output CSV file. > > input CSV -> munging text, lookups etc. -> output CSV > > My question is, what's the most Pythonic way of handling this? (Which I'm > assuming > > For the reading, I'd > > with open('input.csv', 'r') as input, open('output.csv', 'w') as output: > csv_writer = DictWriter(output) > for line in DictReader(input): > # Do some processing for that line... > output = process_line(line) > # Write output to file > csv_writer.writerow(output) > > So for the reading, it'll iterates over the lines one by one, and won't read > it into memory which is good. > > For the writing - my understanding is that it writes a line to the file > object each loop iteration, however, this will only get flushed to disk every > now and then, based on my system default buffer size, right? > > So if the output file is going to get large, there isn't anything I need to > take into account for conserving memory? No, the system will flush so often that you'll never use much memory. > > Also, if I'm trying to maximise throughput of the above, is there anything I > could try? The processing in process_line is quite line - just a bunch of > string splits and regexes. If you want help optimizing process_line(), you'd have to show us the source. For the regex, you can precompile it and not have to build it each time. Or just write the equivalent Python code, which many times is faster than a regex. > > If I have multiple large CSV files to deal with, and I'm on a multi-core > machine, is there anything else I can do to boost throughput? Start multiple processes. For what you're doing, there's probably no point in multithreading. And as always, in performance tuning, you never know till you measure. -- DaveA -- https://mail.python.org/mailman/listinfo/python-list
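On the precompiling point Dave raises, it just means hoisting re.compile() out of the per-line function; a tiny sketch with a made-up pattern:

import re

WHITESPACE = re.compile(r'\s+')        # compiled once, at import time

def process_line(row):
    # Collapse runs of whitespace in every field; stand-in for the real munging.
    return {key: WHITESPACE.sub(' ', value) for key, value in row.items()}

(The re module does cache recently compiled patterns internally, so the speed gain is usually modest, but the explicit object keeps the pattern definitions in one place.)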
Re: Re-raising a RuntimeError - good practice?
On Thu, 24 Oct 2013 20:17:24 -0500, Andrew Berg wrote: > On 2013.10.24 20:09, Victor Hooi wrote: >> Also, @Andrew Berg - you mentioned I'm just swallowing the original >> exception and re-raising a new RuntimeError - I'm guessing this is a >> bad practice, right? Well, maybe, maybe not. It depends on how much of a black-box you want the method or function to be, and whether or not the exception you raise gives the caller enough information to fix the problem. For instance, consider this: # Pseudocode def read(uri): try: scheme, address = uri.split(":", 1) if scheme == "file": obj = open(address) elif scheme == "http": obj = httplib.open(address) elif scheme == "ftp": obj = ftplib.open(address) return obj.read() except Exception: raise URIReadError(some useful message here) This goes too far, since it swallows too much: anything from file not found, permissions errors, low-level network errors, high-level HTTP errors, memory errors, type errors, *everything* gets swallowed and turned into a single exception. But it isn't that the principle is wrong, just that the execution is too greedy. The designer of this function needs to think hard about which exceptions should be swallowed, and which let through unchanged. It may be that, after thinking it through, the designer decides to stick with the "black box" approach. Or perhaps she'll decide to make the function a white box, and not catch any exceptions at all. There's no right answer; it partly depends on how you intend to use this function, and partly on personal taste. My personal taste would be to let TypeError, OS-, IO- and networking errors through unchanged, and capture other errors (like malformed URIs) and raise a custom exception for those. >> If I use just "raise" >> >> except Exception as err: # catch *everything* >> logger.error(err) >> raise >> >> that will just re-raise the original exception right? > > Yes. However, if you are doing logging higher up where you actually > handle the exception, then logging here is redundant, and you can simply > eliminate the try/catch block completely. Yes, this! Try to avoid having too many methods responsible for logging. In an ideal world, each job should be the responsibility of one piece of code. Sometimes you have to compromise on that ideal, but you should always aim for it. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
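A runnable rendering of that last paragraph's taste - not Steven's pseudocode verbatim, and using urllib for the transport rather than per-scheme dispatch - where only the malformed-URI cases get wrapped and everything else propagates unchanged:

import urllib.request

class URIReadError(Exception):
    """The URI itself is malformed or unsupported."""

def read(uri):
    try:
        scheme, _address = uri.split(':', 1)
    except ValueError as err:
        # Only the "can't even parse the URI" case is wrapped.
        raise URIReadError('no scheme in %r' % uri) from err
    if scheme not in ('file', 'http', 'https', 'ftp'):
        raise URIReadError('unsupported scheme %r' % scheme)
    # OSError, urllib.error.URLError, MemoryError etc. pass through unchanged.
    return urllib.request.urlopen(uri).read()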
Re: Processing large CSV files - how to maximise throughput?
On Thu, 24 Oct 2013 18:38:21 -0700, Victor Hooi wrote: > Hi, > > We have a directory of large CSV files that we'd like to process in > Python. > > We process each input CSV, then generate a corresponding output CSV > file. > > input CSV -> munging text, lookups etc. -> output CSV > > My question is, what's the most Pythonic way of handling this? (Which > I'm assuming Start with the simplest thing that could work: for infile, outfile in zip(input_list, output_list): for line in infile: munge line write to outfile sort of thing. If and only if it isn't fast enough, then try to speed it up. > For the reading, I'd > > with open('input.csv', 'r') as input, open('output.csv', 'w') as > output: > csv_writer = DictWriter(output) > for line in DictReader(input): > # Do some processing for that line... output = > process_line(line) > # Write output to file > csv_writer.writerow(output) Looks good to me! > So for the reading, it'll iterates over the lines one by one, and won't > read it into memory which is good. > > For the writing - my understanding is that it writes a line to the file > object each loop iteration, however, this will only get flushed to disk > every now and then, based on my system default buffer size, right? Have you read the csv_writer documentation? http://docs.python.org/2/library/csv.html#writer-objects Unfortunately it is pretty light documentation, but it seems that writer.writerow *probably* just calls write on the underlying file object immediately. But there's really no way to tell when the data hits the disk platter: the Python file object could be doing caching, the OS could be doing caching, the file system could be doing caching, and even the disk itself could be doing caching. Really, the only way to be sure that the data has hit the disk platter is to call os.sync(), and even then some hard drives lie and report that they're synced when in fact the data is still in volatile cache. Bad hard drive, no biscuit. But, really, do you need to care? Better to buy better hard drives (e.g. server grade), or use software RAID with two different brands (so their failure characteristics will be different), or just be prepared to re-process a batch of files if the power goes out mid-run. (You do have a UPS, don't you?) Anyway, I wouldn't bother about calling os.sync directly, the OS will sync when it needs to. But if you need it, it's there. Or you can call flush() on the output file, which is a bit less invasive than calling os.sync. But really I wouldn't bother. Let the OS handle it. > So if the output file is going to get large, there isn't anything I need > to take into account for conserving memory? I shouldn't think so. > Also, if I'm trying to maximise throughput of the above, is there > anything I could try? The processing in process_line is quite line - > just a bunch of string splits and regexes. > > If I have multiple large CSV files to deal with, and I'm on a multi-core > machine, is there anything else I can do to boost throughput? Since this is likely to be I/O-bound, you could use threads. Each thread is responsible for reading the file, processing it, then writing it back again. Have you used threads before? If not, start here: http://www.ibm.com/developerworks/aix/library/au-threadingpython/ http://pymotw.com/2/threading/ If the amount of processing required becomes heavier, and the task becomes CPU-bound, you can either: - move to an implementation of Python without the GIL, like IronPython or Jython; - or use multiprocessing.
-- Steven -- https://mail.python.org/mailman/listinfo/python-list
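If the threaded route is taken, concurrent.futures (standard since 3.2, available for 2.x as the "futures" backport on PyPI) keeps it short; a sketch, with process_file() and the file names standing in for the real per-file read/munge/write loop discussed earlier in the thread:

from concurrent.futures import ThreadPoolExecutor

def process_file(paths):
    in_path, out_path = paths
    # Stand-in: the real version would run the DictReader/DictWriter loop here.
    return out_path

pairs = [('in1.csv', 'out1.csv'), ('in2.csv', 'out2.csv')]   # illustrative names

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_file, pair) for pair in pairs]
    for future in futures:
        print('finished', future.result())   # .result() re-raises worker errors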
Re: Processing large CSV files - how to maximise throughput?
On Fri, 25 Oct 2013 02:10:07 +, Dave Angel wrote: >> If I have multiple large CSV files to deal with, and I'm on a >> multi-core machine, is there anything else I can do to boost >> throughput? > > Start multiple processes. For what you're doing, there's probably no > point in multithreading. Since the bottleneck will probably be I/O, reading and writing data from files, I expect threading actually may help. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: Processing large CSV files - how to maximise throughput?
On 25/10/2013 02:38, Victor Hooi wrote: So for the reading, it'll iterates over the lines one by one, and won't read it into memory which is good. Wow this is fantastic, which OS are you using? Or do you actually mean that the whole file doesn't get read into memory, only one line at a time? :) -- Python is the second best programming language in the world. But the best has yet to be invented. Christian Tismer Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Processing large CSV files - how to maximise throughput?
On 24/10/2013 23:35, Steven D'Aprano wrote: > On Fri, 25 Oct 2013 02:10:07 +, Dave Angel wrote: > >>> If I have multiple large CSV files to deal with, and I'm on a >>> multi-core machine, is there anything else I can do to boost >>> throughput? >> >> Start multiple processes. For what you're doing, there's probably no >> point in multithreading. > > Since the bottleneck will probably be I/O, reading and writing data from > files, I expect threading actually may help. > > > We approach the tradeoff from opposite sides. I would use multiprocessing to utilize multiple cores unless the communication costs (between the processes) would get too high. They won't in this case. But I would concur -- probably they'll both give about the same speedup. I just detest the pain that multithreading can bring, and tend to avoid it if at all possible. -- DaveA -- https://mail.python.org/mailman/listinfo/python-list
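For the multiprocessing side of the same trade-off, a sketch along the same lines (process_file() again stands in for the real per-file work; the __main__ guard matters on Windows):

import multiprocessing

def process_file(paths):
    in_path, out_path = paths
    # Stand-in: the real version would do the per-file read/munge/write here.
    return out_path

if __name__ == '__main__':
    pairs = [('in1.csv', 'out1.csv'), ('in2.csv', 'out2.csv')]   # illustrative
    pool = multiprocessing.Pool()          # one worker process per core by default
    try:
        for done in pool.imap_unordered(process_file, pairs):
            print('finished', done)
    finally:
        pool.close()
        pool.join()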
Re: Processing large CSV files - how to maximise throughput?
On Fri, Oct 25, 2013 at 2:57 PM, Dave Angel wrote: > But I would concur -- probably they'll both give about the same speedup. > I just detest the pain that multithreading can bring, and tend to avoid > it if at all possible. I don't have a history of major pain from threading. Is this a Python thing, or have I just been really really fortunate (growing up on OS/2 rather than Windows has definitely been, for me, a major boon)? Generally, I find threads to be convenient, though of course not always useful (especially in serialized languages). ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Python was designed (was Re: Multi-threading in Python vs Java)
On Tuesday 15 October 2013 23:00:29 UTC+2, Mark Lawrence wrote: > On 15/10/2013 21:11, wxjmfa...@gmail.com wrote: > > > On Monday 14 October 2013 21:18:59 UTC+2, John Nagle wrote: > > > > > > > > > > > > [...] > > >> > > >> No, Python went through the usual design screwups. Look at how > > >> > > >> painful the slow transition to Unicode was, from just "str" to > > >> > > >> Unicode strings, ASCII strings, byte strings, byte arrays, > > >> > > >> 16 and 31 bit character builds, and finally automatic switching > > >> > > >> between rune widths. [...] > > > > > > > > > Yes, a real disaster. > > > > > > This "poor" Python is spending its time in reencoding > > > when necessary, without counting the fact it's necessary to > > > check if reencoding is needed. > > > > > > Where is Unicode? Away. > > > Use one of the coding schemes endorsed by Unicode. If a dev is not able to see that a non-ascii char may use 10 bytes more than an ascii char, or is not able to see there may be a regression of a factor of 1, 2, 3, 5 or more simply by using non-ascii chars, I really do not see how I can help. Nor can I force people to understand unicode. I received a ton of private emails, even from core devs, and as one wrote, this has not been seriously tested. Even today on the misc. lists some people are suggesting writing more tests. All the tools I'm aware of are using unicode very smoothly (even "utf-8 tools"); Python is not. That's the status. This FSR fails. Period. jmf -- https://mail.python.org/mailman/listinfo/python-list
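For anyone who wants to check the memory half of that claim on their own build, sys.getsizeof shows the PEP 393 storage widths directly (exact figures vary by platform and Python version); it says nothing either way about the speed claims:

import sys

for text in ('a' * 10, 'é' * 10, '€' * 10, '\U0001d400' * 10):
    # ASCII/latin-1, BMP and astral strings use 1, 2 and 4 bytes per character.
    print(repr(text[0]), sys.getsizeof(text), 'bytes')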
Re: Processing large CSV files - how to maximise throughput?
Chris Angelico, 25.10.2013 08:13: > On Fri, Oct 25, 2013 at 2:57 PM, Dave Angel wrote: >> But I would concur -- probably they'll both give about the same speedup. >> I just detest the pain that multithreading can bring, and tend to avoid >> it if at all possible. > > I don't have a history of major pain from threading. Is this a Python > thing, or have I just been really really fortunate Likely the latter. Threads are ok if what they do is essentially what you could easily use multiple processes for as well, i.e. process independent data, maybe from/to independent files etc., using dedicated channels for communication. As soon as you need them to share any state, however, it's really easy to get it wrong and to run into concurrency issues that are difficult to reproduce and debug. Basically, with multiple processes, you start with independent systems and add connections specifically where needed, whereas with threads, you start with completely shared state and then prune away interdependencies and concurrency until it seems to work safely. That approach makes it essentially impossible to prove that threading is safe in a given setup, except for the really trivial cases. Stefan -- https://mail.python.org/mailman/listinfo/python-list
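A concrete version of the "dedicated channels" idea Stefan describes: the worker thread below only ever talks to the rest of the program through two queue.Queue objects (Queue.Queue on 2.x), so there is no shared mutable state to reason about; the names and the processing are invented for the example:

import queue
import threading

def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:                 # sentinel: no more work
            break
        results.put(item.upper())        # stand-in for real processing

tasks, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for word in ('spam', 'eggs'):
    tasks.put(word)
tasks.put(None)                          # tell the worker to stop
t.join()

while not results.empty():
    print(results.get())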