Re: Curious function argument
ast wrote:

> Hello
>
> I saw in a code from a previous message in this forum a curious function argument.
>
>     def test(x=[0]):
>         print(x[0])    ## Poor man's object
>         x[0] += 1
>
>     test()
>     0
>     test()
>     1
>     test()
>     2
>
> I understand that the author wants to implement a global variable x. It would be better to write 'global x' inside the function.

No, it's not a global. A better description is that it is a way of faking static storage for a function: the function test has a "static variable" x which is independent of any other function's x, but it can remember its value from one function call to another.

> At first test() function call, it prints 0, that's OK. But at the second call, since we don't pass any argument to test(), x should be equal to its default value [0] (a single item list). But it seems that Python keeps the original object whose content has been changed to 1.
>
> Is it a usual way to implement a global variable ?

No, it's unusual. If you actually want a global, use the global keyword. Otherwise, there are often better ways to get a similar effect, such as using a generator:

    def gen():
        x = 0
        while True:
            yield x
            x += 1

    it = gen()
    next(it)  # returns 0
    next(it)  # returns 1
    next(it)  # returns 2

You can turn that into functional form like this:

    import functools
    func = functools.partial(next, gen())
    func()  # returns 0
    func()  # returns 1
    func()  # returns 2

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Call for information - What assumptions can I make about Unix users' access to Windows?
On 7 November 2014 15:46, Paul Moore wrote:
> To that end, I'd like to get an idea of what sort of access to Windows a typical Unix developer would have.

Thanks to all who contributed to this thread. Based on the feedback, I think it's going to be useful to provide two options.

First of all, an EC2 AMI that can be used by people without access to a local Windows system. While other cloud providers are a possibility, EC2 provides a free tier (for the first year) and is well-known, so it's probably the easiest to get started with (at least it was for me!)

Also, I will provide a script that can be used to automatically build the environment on a newly-installed machine. The idea is that you can use this on a Windows VM (something that a number of people have said they have access to). The script may be usable on an existing machine, but it's hard to make it robust, as there are too many failure modes to consider (software already installed, configuration and/or permission differences, etc). So while such use may be possible, I probably won't consider it as supported.

Thanks again,
Paul
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: How about some syntactic sugar for " __name__ == '__main__' "?
John Ladasky wrote:
> I have taught Python to several students over the past few years. As I have worked with my students, I find myself bothered by the programming idiom that we use to determine whether a module is being executed or merely imported:
>
> "if __name__ == '__main__':"
>
> The use of two dunder tokens -- one as a name in a namespace, and the other as a string, is intimidating. It exposes too much of Python's guts.

The dunders are a tad ugly, but it's actually quite simple and elegant:

* every module has a global variable `__name__` which normally holds the name of the module:

    py> import functools
    py> functools.__name__
    'functools'
    py> import math as foobarbaz
    py> foobarbaz.__name__
    'math'

* When Python imports a module, it sets the global __name__ to that module's actual name (as taken from the file name).

* But when Python runs a file, as in `python2.7 path/to/script.py`, it sets the global __name__ to the magic value '__main__' instead of "script".

The consequence is that every module can tell whether it is being run as a script or not by inspecting the __name__ global. That's all there is to it.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
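For instance, a minimal sketch of that consequence; the file name demo.py is invented for the illustration. Save the lines below as demo.py, then compare running `python demo.py` with `python -c "import demo"`:

    print("my __name__ is " + __name__)

    if __name__ == '__main__':
        print("...so this file is being run as a script")

Run directly, both lines print and __name__ is '__main__'; imported, only the first line prints and __name__ is 'demo'.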
Re: How about some syntactic sugar for " __name__ == '__main__' "?
Chris Kaynor wrote:
> I was thinking along the lines of replacing:
>
>     if __name__ == "__main__":
>         <<>>
>
> with
>
>     @main
>     def myFunction()
>         <<<>
>
> Both blocks of code will be called at the same time.

You can't guarantee that, because you cannot tell ahead of time when the "if __name__" statement will be run. It is *usually* at the end of the file, but that's just the normal useful convention, it is not a hard requirement.

The current idiom uses normal, unmagical execution of Python code. When the interpreter reaches the "if __name__ ..." statement, it executes that statement, just like every other statement. There's no magic involved here, and in fact I have written code with *multiple* such "if __name__" blocks. Here's a sketch of the sort of thing I mean:

    import a
    import b

    if __name__ == '__main__':
        import c as d
    else:
        import d

    def this():
        ...

    def that():
        ...

    flag = __name__ == '__main__'
    process(flag)

    if __name__ == '__main__' or condition():
        print "still executing"
        main()

    print "done loading"

(I haven't ever done *all* of these things in a *single* file, but I have done all these things at one time or another.)

There's no way that any automatic system can match that for flexibility or simplicity.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: fileno() not supported in Python 3.1
Ian Kelly wrote:
> On Fri, Nov 14, 2014 at 12:36 AM, Cameron Simpson wrote:
>> On 13Nov2014 15:48, satishmlm...@gmail.com wrote:
>>>
>>> import sys
>>> for stream in (sys.stdin, sys.stdout, sys.stderr):
>>>     print(stream.fileno())
>>>
>>> io.UnsupportedOperation: fileno
>>>
>>> Is there a workaround?
>>
>> The first workaround that suggests itself is to use a more modern Python. I've got 3.4.2 here, and it goes:
>>
>>    Python 3.4.2 (default, Nov 5 2014, 21:19:51)
>>    [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.54)] on darwin
>>    Type "help", "copyright", "credits" or "license" for more information.
>>    >>> import sys
>>    >>> for stream in (sys.stdin, sys.stdout, sys.stderr):
>>    ...     print(stream.fileno())
>>    ...
>>    0
>>    1
>>    2
>>    >>>
>>
>> In short, in 3.4.2 it just works.
>
> Why do you think the Python version has anything to do with it?

Because the OP states that he is using Python 3.1 (look at the subject line) and it doesn't work in 3.1. For what it is worth, I cannot confirm that alleged behaviour:

steve@orac:~$ python3.1 -c "import sys; print(sys.stdout.fileno())"
1

I suspect that the OP may be using an IDE which does something funny to sys.stdout etc., or perhaps he has accidentally shadowed them. The OP failed to copy and paste the actual traceback, so who knows what is actually happening?

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
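If the stream really has been replaced by something without a file descriptor (an IDE shim, say), one defensive sketch, purely illustrative and not from the thread, is to treat fileno() as optional:

    import io
    import sys

    def safe_fileno(stream):
        # Return the file descriptor, or None if the stream doesn't have one.
        # io.UnsupportedOperation is what io objects raise; AttributeError
        # covers arbitrary replacement objects.
        try:
            return stream.fileno()
        except (io.UnsupportedOperation, AttributeError):
            return None

    for stream in (sys.stdin, sys.stdout, sys.stderr):
        print(safe_fileno(stream))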
Re: How about some syntactic sugar for " __name__ == '__main__' "?
Steven D'Aprano :
> if __name__ == '__main__' or condition():
>     print "still executing"
>     main()
>
> print "done loading"
>
> (I haven't ever done *all* of these things in a *single* file, but I have done all these things at one time or another.)
>
> There's no way that any automatic system can match that for flexibility or simplicity.

Our test system has this boilerplate at the end of each test case:

    if __name__ == '__main__':
        run(test)

Nobody claims it's beautiful but nobody has been overly bothered by it, either.

Marko
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: A Freudian slip of *EPIC PROPORTIONS*!
Terry Reedy wrote:
> On 11/13/2014 6:11 PM, Rick Johnson wrote:
>
>> # The parse functions have no idea what to do with
>> # Unicode, so replace all Unicode characters with "x".
>> # This is "safe" so long as the only characters germane
>> # to parsing the structure of Python are 7-bit ASCII.
>> # It's *necessary* because Unicode strings don't have a
>> # .translate() method that supports deletechars.
>> uniphooey = str
>
> It is customary to attribute quotes to their source. This is from 2.x Lib/idlelib/PyParse.py. The file was committed (and probably written) by David Scherer 2000-08-15. Edits for unicode, including the above, were committed (and perhaps written) by Kurt B. Kaiser on 2001-07-13.

Correct. The line in question was written by Kurt. We can find this out by using the hg annotate command. Change into the Lib/idlelib directory of the source repository, then use hg annotate command as follows:

    [steve@ando idlelib]$ hg annotate PyParse.py | grep phoo
    42050:         uniphooey = s
    18555:         for raw in map(ord, uniphooey):

The numbers shown on the left are the revision IDs, so look at the older of the two:

    [steve@ando idlelib]$ hg annotate -r 18555 PyParse.py | grep phoo
    18555:         uniphooey = str
    18555:         for raw in map(ord, uniphooey):

We can confirm that prior to that revision, the uniphooey lines didn't exist:

    [steve@ando idlelib]$ hg annotate -r 18554 PyParse.py | grep phoo

And then find out who is responsible:

    [steve@ando idlelib]$ hg annotate -uvd -r 18555 PyParse.py | grep phoo
    Kurt B. Kaiser Fri Jul 13 20:33:46 2001 +:     uniphooey = str
    Kurt B. Kaiser Fri Jul 13 20:33:46 2001 +:     for raw in map(ord, uniphooey):

> I doubt GvR ever saw this code. I expect KBK has changed opinions with respect to unicode in 13 years, as has most everyone else.

We don't know Kurt's intention with regard to the name, the "phooey" could refer to:

- the parse functions failing to understand Unicode;
- it being a nasty hack that assumes that Python will never use Unicode characters for keywords or operators;
- it being necessary because u''.translate fails to support a deletechars parameter.

It's unlikely to refer to the Unicode character set itself.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Decorators
On 15/11/2014 07:42, Richard Riehle wrote:
> Mayank, Thanks. I have only been using Python for about four years, so there are features I have only recently discovered. Decorators are one of them. So far, I encounter other Python users who are also unfamiliar with them. When I discovered them, I instantly saw how they could be valuable.
>
> Richard Riehle, PhD
> Core Faculty, ITU

Would you please be kind enough to get a PhD in interspersing your replies or bottom posting rather than top posting, thank you.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language.

Mark Lawrence
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Decorators
Mark Lawrence : > On 15/11/2014 07:42, Richard Riehle wrote: >> When I discovered them, I instantly saw how they could be valuable. > > Would you please be kind enough to get a PhD in interspersing your > replies or bottom posting rather than top posting, thank you. I'd take top-posting if I were enlightened about how decorators could be valuable. Sadly, I didn't see it instantly, or even afterwards. Marko -- https://mail.python.org/mailman/listinfo/python-list
webhooks
What are the basics of implementing webhooks with Python? I need a detailed tutorial on this topic.

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
pythonista wrote:
> I am developing a python application as a contractor.
>
> I would like to know if someone can provide me with some insight into the problems that the infrastructure team has been having.
>
> The scope of the project was to install python 2.7.8 and 4 modules/site packages on a fresh linux build.

A "fresh linux build" of Red Hat Linux 6? RHL 6 was discontinued in 2000. That's *at least* 14 years old. Why on earth are you using something so old instead of a recent version of RHEL, Centos or Fedora?

The supported version of Python for RHL 6 was Python 1.5. (That's *one* point five.) I'm not surprised that they had trouble installing Python 2.7 on something that old, especially if you needed support for optional but important features like tkinter, ssl, readline, curses, etc.

I haven't tried it, so it is possible that installing 2.7 from source on such an old Linux system will actually be trivially easy and need only half an hour's work. But I wouldn't bet on it. I would expect all sorts of difficulties, due to the age difference between RHL 6 and Python 2.7.

As for the four modules, that depends. If they are extension modules written in C, who knows how hard it will be to get them working under RHL 6. And even if they are pure Python modules, depending on how old they are, they may be difficult to get working with something as new as Python 2.7.

Without more details of what the modules are and what errors the infrastructure team experienced, there is no way to tell whether four weeks to get this working was a heroic effort or a sign of utter incompetence.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Decorators
On 2014-11-15, Marko Rauhamaa wrote:
> Tim Chase :
>
>> And decorators are used pretty regularly in just about every code-base that I've touched (I've been programming in Python since early 2004, so I've maintained pre-2.4 code without decorators and then brought it forward to 2.4 where decorators were usable).
>
> Funny. My experience with Python is about as old, and I have yet to encounter them in code. I have seen (maybe even used) @staticmethod once or twice over a decade and then as a magic keyword rather than a "decorator pattern."

I've been using Python since 1999 and version 1.5.2 and have yet to use decorators. I do occasionally find myself in a spot where I'm pretty sure that there's a more elegant way to do something using a decorator, and promise myself that I'll read up on decorators one of these days when I have some spare time...

-- 
Grant
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Decorators (was: Re: I love assert)
Richard Riehle wrote:
> Decorators are new in Python, so there are not a lot of people using them.

The principle of decorators themselves is as old as Python itself. You could implement them as far back as Python 1.5, if not older:

    [steve@ando ~]$ python1.5
    Python 1.5.2 (#1, Aug 27 2012, 09:09:18)  [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    >>> def decorator(func):
    ...     def inner(arg, func=func):
    ...         return func(arg*2)
    ...     return inner
    ...
    >>> def f(x):
    ...     return x + 1
    ...
    >>> f = decorator(f)
    >>> f(1)
    3
    >>> f(5)
    11

The first built-in decorators (classmethod, staticmethod and property) were added in Python 2.2. Decorator syntax using @ was added in 2.4.

https://docs.python.org/2/whatsnew/2.4.html#pep-318-decorators-for-functions-and-methods

So decorators have been available for a very long time.

> From my experience with other languages, especially Ada and Eiffel, I enjoy the benefit of assertions (as pre-conditions and post-conditions and invariants) at the specification level (not embedded in the code), so decorators are closer to my other experience. They bring me closer to the Design by Contract model of Ada and Eiffel. That is why I was so pleased to see them added to Python.

Way back in Python 1.5, Guido van Rossum wrote an essay describing a way to get Eiffel-like checks for pre-conditions and post-conditions:

https://www.python.org/doc/essays/metaclasses/

(Alas, the link to Eiffel.py is currently broken. But you can read the rest of the essay.)

> It is true, however, that they are not immediately intuitive in Python, but once understood, they are handy IMHO for improving code reliability. Perhaps I was spoiled by having this capability in some other languages.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 2014-11-15, Steven D'Aprano wrote:
> pythonista wrote:
>
>> I am developing a python application as a contractor.
>>
>> I would like to know if someone can provide me with some insight into the problems that the infrastructure team has been having.
>>
>> The scope of the project was to install python 2.7.8 and 4 modules/site packages on a fresh linux build.
>
> A "fresh linux build" of Red Hat Linux 6? RHL 6 was discontinued in 2000. That's *at least* 14 years old. Why on earth are you using something so old instead of a recent version of RHEL, Centos or Fedora?

I'm sure the OP meant RHEL 6, and not RH 6 [yes, I realize you know that too and are just making a point about how it pays to include accurate info when asking for help.] The OP probably doesn't even _know_ that there was a prior product line called RedHat Linux with a version 6. [Kids these days!]

IIRC, I started with one of the "holiday" RedHat Linux releases before they were numbered. I think it was "Mothers Day" or "Halloween". The first numbered release I remember was RedHat Linux 3.something.

RHL was pretty good up through the 6.x series, but 7.00 was utterly and famously shoddy [at that point I switched to Mandrake]. RH 7.00 was so notoriously bad, I'm surprised that they didn't skip "7" entirely in the RHEL product line to avoid the bad memories...

-- 
Grant
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 11/14/2014 08:01 PM, pythonista wrote:
> Can anyone provide me with insight as to the scope of what the problem could have been?

Well the fact is that RHEL 6 uses Python 2.6 as a core system package. Many system utilities depend on it, so it cannot be replaced with a newer version. You must install the newer version alongside the old version. This can be easily accomplished with Redhat's Software Collections system. I have Python 2.7 and 3.4 installed on my RHEL6 box with this mechanism.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 11/15/2014 08:15 AM, Steven D'Aprano wrote: > A "fresh linux build" of Red Hat Linux 6? RHL 6 was discontinued in 2000. Yes I know you're making a point about not assuming anything, but the odds are very good that the OP meant RHEL6. And meaning RHEL6, there are some good reasons why the infrastructure team struggled, since Python 2.6 is a core system dependency and cannot easily be overridden without breaking the entire system. The answer is either use Redhat Software Collections[1], or compile from scratch to /opt. [1] https://www.softwarecollections.org/en/scls/rhscl/python27/ -- https://mail.python.org/mailman/listinfo/python-list
Strange result with timeit execution time measurment
Hi

I needed a function f(x) which looks like sinus(2pi.x) but faster. I wrote this one:

--
from math import floor

def sinusLite(x):
    x = x - floor(x)
    return -16*(x-0.25)**2 + 1 if x < 0.5 else 16*(x-0.75)**2 - 1
--

Then I used module timeit to compare its execution time with math.sin(). I put the sinusLite() function in a module named test. Then:

    import timeit
    t1 = timeit.Timer("y=test.sinusLite(0.7)", "import test")
    t2 = timeit.Timer("y=math.sin(4.39)", "import math")    ## 4.39 = 2*pi*0.7

    t1.repeat(3, 100)
    [1.999461539373, 1.9020670224846867, 1.9191573230675942]
    t2.repeat(3, 100)
    [0.2913627989031511, 0.2755561810230347, 0.2755186762562971]

So the genuine sinus is much faster than my so simple sinusLite()! Amazing, isn't it? Do you have an explanation?

Thx
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 11/14/2014 08:01 PM, pythonista wrote: > The scope of the project was to install python 2.7.8 and 4 modules/site > packages on a fresh linux build. I neglected to put the URL for software collections in my reply to you. Here it is. https://www.softwarecollections.org/en/scls/rhscl/python27/ Note that the infrastructure team will have to learn how to interface with it, since, as a software collection, it's not going to be in the path by default (because it would conflict). Any startup scripts or other invocations of your app will have to be modified to acquire the python 2.7 special environment, by sourcing the python 2.7 collection's enable file. I'm not entirely sure how to integrate it with Apache, but I know it can be done. If it's a package that won't conflict, such as Python 2.4, you can permanently integrate it into the environment this way: http://developerblog.redhat.com/2014/03/19/permanently-enable-a-software-collection/ -- but I can't recommend this for python 2.7 as it would break a lot of RHEL 6's commands. -- https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 11/15/2014 10:13 AM, Michael Torrie wrote: > If it's a package that won't conflict, such as Python 2.4, you can Ahem, that should have been 3.4 -- https://mail.python.org/mailman/listinfo/python-list
Re: Strange result with timeit execution time measurment
ast wrote:
> Hi
>
> I needed a function f(x) which looks like sinus(2pi.x) but faster. I wrote this one:
>
> from math import floor
>
> def sinusLite(x):
>     x = x - floor(x)
>     return -16*(x-0.25)**2 + 1 if x < 0.5 else 16*(x-0.75)**2 - 1
>
> then i used module timeit to compare its execution time with math.sin(). I put the sinusLite() function in a module named test.
>
> then:
> import timeit
> t1 = timeit.Timer("y=test.sinusLite(0.7)", "import test")
> t2 = timeit.Timer("y=math.sin(4.39)", "import math")    ## 4.39 = 2*pi*0.7
> t1.repeat(3, 100)
> [1.999461539373, 1.9020670224846867, 1.9191573230675942]
> t2.repeat(3, 100)
> [0.2913627989031511, 0.2755561810230347, 0.2755186762562971]
>
> so the genuine sinus is much faster than my so simple sinLite() ! Amazing isnt it ? Do you have an explanation ?

You are applying your optimisation in an implementation where the function call overhead of a Python-implemented function is greater than the time to invoke the C-coded function, calculate the sin, and create the python float.

$ python -m timeit -s 'from math import sin' 'sin(.7)'
100 loops, best of 3: 0.188 usec per loop
$ python -m timeit -s 'from test import sinusLite as sin' 'sin(.7)'
100 loops, best of 3: 0.972 usec per loop
$ python -m timeit -s 'sin = lambda x: None' 'sin(.7)'
100 loops, best of 3: 0.242 usec per loop

For CPython to write fast lowlevel code you have to switch to C (or Cython). In PyPy the results get interesting:

$ pypy -m timeit -s 'from test import sinusLite as sin' 'sin(.7)'
1 loops, best of 3: 0.00459 usec per loop
$ pypy -m timeit -s 'from math import sin' 'sin(.7)'
1000 loops, best of 3: 0.0476 usec per loop

So yes, your approximation may speed up code in some parts of the Python universe (I don't know if pypy takes advantage of the constant argument).

-- 
https://mail.python.org/mailman/listinfo/python-list
Re:Strange result with timeit execution time measurment
"ast" Wrote in message: > Hi > > I needed a function f(x) which looks like sinus(2pi.x) but faster. > I wrote this one: > > -- > from math import floor > > def sinusLite(x): > x = x - floor(x) > return -16*(x-0.25)**2 + 1 if x < 0.5 else 16*(x-0.75)**2 - 1 > -- > > then i used module timeit to compare its execution time with math.sin() > I put the sinusLite() function in a module named test. > > then: > import timeit t1 = timeit.Timer("y=test.sinusLite(0.7)", "import test") t2 = timeit.Timer("y=math.sin(4.39)", "import math")## 4.39 = 2*pi*0.7 > t1.repeat(3, 100) > [1.999461539373, 1.9020670224846867, 1.9191573230675942] > t2.repeat(3, 100) > [0.2913627989031511, 0.2755561810230347, 0.2755186762562971] > > so the genuine sinus is much faster than my so simple sinLite() ! > Amazing isnt it ? Do you have an explanation ? > > Thx > Sure, the library function probably used the trig logic in the processor. Perhaps if you timed things on a processor without a "math coprocessor" things could be different. But even there, you'd probably be comparing C to python. Library code is optimized where it's deemed helpful. -- DaveA -- https://mail.python.org/mailman/listinfo/python-list
Re: Strange result with timeit execution time measurment
On Sat, 15 Nov 2014 18:07:30 +0100, ast wrote:
>
> I needed a function f(x) which looks like sinus(2pi.x) but faster. I wrote this one:
>
> from math import floor
>
> def sinusLite(x):
>     x = x - floor(x)
>     return -16*(x-0.25)**2 + 1 if x < 0.5 else 16*(x-0.75)**2 - 1
>
> then i used module timeit to compare its execution time with math.sin(). I put the sinusLite() function in a module named test.
>
> then:
> import timeit
> t1 = timeit.Timer("y=test.sinusLite(0.7)", "import test")
> t2 = timeit.Timer("y=math.sin(4.39)", "import math")    ## 4.39 = 2*pi*0.7
> t1.repeat(3, 100)
> [1.999461539373, 1.9020670224846867, 1.9191573230675942]
> t2.repeat(3, 100)
> [0.2913627989031511, 0.2755561810230347, 0.2755186762562971]
>
> so the genuine sinus is much faster than my so simple sinLite() ! Amazing isnt it ? Do you have an explanation ?

I suppose math.sin is implemented in C. Compiled languages (like C) are much faster than interpreted languages like Python.

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Strange result with timeit execution time measurment
On Sat, Nov 15, 2014 at 10:07 AM, ast wrote:
> Hi
>
> I needed a function f(x) which looks like sinus(2pi.x) but faster. I wrote this one:
>
> from math import floor
>
> def sinusLite(x):
>     x = x - floor(x)
>     return -16*(x-0.25)**2 + 1 if x < 0.5 else 16*(x-0.75)**2 - 1
>
> then i used module timeit to compare its execution time with math.sin(). I put the sinusLite() function in a module named test.
>
> then:
> import timeit
> t1 = timeit.Timer("y=test.sinusLite(0.7)", "import test")
> t2 = timeit.Timer("y=math.sin(4.39)", "import math")    ## 4.39 = 2*pi*0.7
>
> t1.repeat(3, 100)
> [1.999461539373, 1.9020670224846867, 1.9191573230675942]
> t2.repeat(3, 100)
> [0.2913627989031511, 0.2755561810230347, 0.2755186762562971]
>
> so the genuine sinus is much faster than my so simple sinLite() ! Amazing isnt it ? Do you have an explanation ?

The built-in sin is written in C, and the C implementation on most modern systems boils down to a single assembly instruction implemented in microcode. That's generally going to be faster than a whole series of operations written in Python. Even just doing the 2*pi multiplication in Python will add a lot to the timing:

C:\>python -m timeit -s "import math" "math.sin(2*math.pi*0.7)"
100 loops, best of 3: 0.587 usec per loop

C:\>python -m timeit -s "import math" "math.sin(4.39)"
100 loops, best of 3: 0.222 usec per loop

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: A Freudian slip of *EPIC PROPORTIONS*!
On 11/15/2014 7:28 AM, Steven D'Aprano wrote:
> Terry Reedy wrote:
>> On 11/13/2014 6:11 PM, Rick Johnson wrote:
>>
>>> # The parse functions have no idea what to do with
>>> # Unicode, so replace all Unicode characters with "x".
>>> # This is "safe" so long as the only characters germane
>>> # to parsing the structure of Python are 7-bit ASCII.
>>> # It's *necessary* because Unicode strings don't have a
>>> # .translate() method that supports deletechars.
>>> uniphooey = str
>>
>> It is customary to attribute quotes to their source. This is from 2.x Lib/idlelib/PyParse.py. The file was committed (and probably written) by David Scherer 2000-08-15. Edits for unicode, including the above, were committed (and perhaps written) by Kurt B. Kaiser on 2001-07-13.
>
> Correct. The line in question was written by Kurt. We can find this out by using the hg annotate command. Change into the Lib/idlelib directory of the source repository, then use hg annotate command as follows:
>
> [steve@ando idlelib]$ hg annotate PyParse.py | grep phoo
> 42050:         uniphooey = s
> 18555:         for raw in map(ord, uniphooey):
>
> The numbers shown on the left are the revision IDs, so look at the older of the two:
>
> [steve@ando idlelib]$ hg annotate -r 18555 PyParse.py | grep phoo
> 18555:         uniphooey = str
> 18555:         for raw in map(ord, uniphooey):
>
> We can confirm that prior to that revision, the uniphooey lines didn't exist:
>
> [steve@ando idlelib]$ hg annotate -r 18554 PyParse.py | grep phoo
>
> And then find out who is responsible:
>
> [steve@ando idlelib]$ hg annotate -uvd -r 18555 PyParse.py | grep phoo
> Kurt B. Kaiser Fri Jul 13 20:33:46 2001 +:     uniphooey = str
> Kurt B. Kaiser Fri Jul 13 20:33:46 2001 +:     for raw in map(ord, uniphooey):

On Windows, with TortoiseHg installed, I right-clicked PyParse in Explorer and selected TortoiseHg on the context menu and Annotate on the submenu. This pops up a Window with two linked panels -- a list of revisions and an annotated file listing with lines added or changed by the current revision marked with a different background color. I found the comment block easily enough, looked at the annotation, and looked back at the revision list. Clicking on a revision changes the file listing. One can easily march through the history of the file.

>> I doubt GvR ever saw this code. I expect KBK has changed opinions with respect to unicode in 13 years, as has most everyone else.

Including mine.

> We don't know Kurt's intention with regard to the name, the "phooey" could refer to:
>
> - the parse functions failing to understand Unicode;
> - it being a nasty hack that assumes that Python will never use Unicode characters for keywords or operators;
> - it being necessary because u''.translate fails to support a deletechars parameter.

I expect I would have been annoyed when a new-fangled feature, elsewhere in Python, broke one of the files I was working on. Now, of course, I would know to not use a variable name that could be misinterpreted by someone years in the future.

> It's unlikely to refer to the Unicode character set itself.

-- 
Terry Jan Reedy
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Decorators
On 11/15/2014 8:24 AM, Marko Rauhamaa wrote:
> I'd take top-posting if I were enlightened about how decorators could be valuable.

Here is part of the original rationale.

    @deco(arg)
    def f: suite

is (for this discussion) equivalent to

    def f: suite
    f = deco(arg)(f)

The latter requires writing 'f' 3 times instead of 1. Now suppose to meet external requirements, such as interfacing to Objective C, you had to wrap dozens of functions, each with long names, up to 30 chars, with multiple components. You would then become enlightened.

Once the idea of making function wrapping easier emerged, many other applications emerged. Various stdlib modules define decorators. If you do not need to wrap functions, easy wrapping might seem useless to you. That's fine. You know they are there should you ever have need.

-- 
Terry Jan Reedy
-- 
https://mail.python.org/mailman/listinfo/python-list
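To make the equivalence concrete, here is a small runnable sketch of a parameterised decorator; the names deco, prefix and greet are invented for the illustration, not taken from the thread:

    import functools

    def deco(prefix):
        # deco(arg) returns the actual decorator, which wraps the function.
        def decorator(func):
            @functools.wraps(func)
            def inner(*args, **kwargs):
                print(prefix + " " + func.__name__)
                return func(*args, **kwargs)
            return inner
        return decorator

    @deco("calling")
    def greet(name):
        return "hello " + name

    # The same thing spelled out by hand -- note the function name appears
    # three times instead of once:
    def greet2(name):
        return "hello " + name
    greet2 = deco("calling")(greet2)

Calling greet("world") prints "calling greet" and then returns "hello world"; greet2 behaves the same way, but its name had to be written three times.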
caught in the import web again
Now, I'm getting these errors:

ImportError: cannot import name ...

and

AttributeError: 'module' object has no attribute ...

(what is 'module'?)

Is there a way to resolve this without having to restructure my code every couple of days? I thought using imports of the form:

    from module import symbol

was the "right way" to avoid these hassles...

TIA
cts
www.creative-telcom-solutions.de
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: caught in the import web again
"Charles T. Smith" writes: > Now, I'm getting these errors: Please reduce the problem to a minimal, complete example demonstrating the behaviour http://sscce.org/> so that you can show us exactly what's happening. > AttributeError: 'module' object has no attribute ... > > (what is 'module'?) The type of the object which doesn't have the attribute. It's saying you're tring to access an attribute on a particular object, and that object (which is of type ‘module’, hence it is a “'module' object”) doesn't have that attribute. > Is there a way to resolve this without having to restructure my code > every couple of days? Not knowing what the actual problem is, it's difficult to say. Please come up with a (contrived, if you like) minimal clear example of what's happening – i.e. make it from scratch, without irrelevant parts from the rest of your program, but ensure it still does what you're confused by – and present it here. -- \ “It's a terrible paradox that most charities are driven by | `\ religious belief.… if you think altruism without Jesus is not | _o__) altruism, then you're a dick.” —Tim Minchin, 2010-11-28 | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
Grant Edwards wrote:
> On 2014-11-15, Steven D'Aprano wrote:
>> pythonista wrote:
>>
>>> I am developing a python application as a contractor.
>>>
>>> I would like to know if someone can provide me with some insight into the problems that the infrastructure team has been having.
>>>
>>> The scope of the project was to install python 2.7.8 and 4 modules/site packages on a fresh linux build.
>>
>> A "fresh linux build" of Red Hat Linux 6? RHL 6 was discontinued in 2000. That's *at least* 14 years old. Why on earth are you using something so old instead of a recent version of RHEL, Centos or Fedora?
>
> I'm sure the OP meant RHEL 6, and not RH 6 [yes, I realize you know that too and are just making a point about how it pays to include accurate info when asking for help.]

Actually, no, the thought didn't even cross my mind. I just assumed that if somebody is going to cast aspersions on the professionalism of others, they'd get their facts right. If they said RHL 6, they meant RHL 6 and not Centos 5 or Fedora 20 or Debian Squeeze. But I suppose that you're probably right. In hindsight, given that the OP is a Windows guy and not a Linux user, writing Red Hat Linux for Red Hat Enterprise Linux is an easy mistake to make.

Assuming it was RHEL 6, then installing Python 2.7 from source as a separate application from the system Python should be trivially easy, half an hour's work. Download the source, untar, run ./configure, make, make altinstall and you should be done. There may be a few bumps in the road to get optional components supported, in which case a skilled Linux admin (which I am not) might need perhaps a couple of hours. Depending on just how bad the bumps were, an unskilled admin like me might take a day or so, not three weeks, before giving up.

The most obvious trap is to run `make install` instead of `make altinstall`, in which case congratulations, you've just broken the RHEL install, and why didn't you read the README first? You can recover from it, probably, by fixing a few sym links, or by re-installing the system Python using the package manager. Worst case you just reinstall the whole OS. Two or three days, tops, not four weeks.

If some foolish person insisted on upgrading the system python to 2.7 instead of installing a parallel installation, then the sky is the limit. I cannot imagine how much effort that would take, or how fragile it would be. Weeks? Probably. And then the first time the admin runs the package manager to install updates, things could start breaking.

One thing which the OP hasn't told us, how much of the four weeks was effort, as opposed to elapsed time. For all we know, it took them four weeks to install this because for three weeks, four days and seven hours they were doing something else. In any case, if the OP has been billed for this time, I would insist on justification for why it took so long before paying.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: caught in the import web again
On Sat, 15 Nov 2014 22:52:33 +, Charles T. Smith wrote: > Now, I'm getting these errors: > > ImportError: cannot import name ... > > and > > AttributeError: 'module' object has no attribute ... It would be useful to know what you're actually trying to import and what the complete error messages are. For example, are these imports of code that you've written yourself, or part of the standard library modules, or third party modules. -- Denis McMahon, denismfmcma...@gmail.com -- https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On Sun, Nov 16, 2014 at 12:08 PM, Steven D'Aprano wrote:
> Assuming it was RHEL 6, then installing Python 2.7 from source as a separate application from the system Python should be trivially easy, half an hour's work. Download the source, untar, run ./configure, make, make altinstall and you should be done. There may be a few bumps in the road to get optional components supported, in which case a skilled Linux admin (which I am not) might need perhaps a couple of hours. Depending on just how bad the bumps were, an unskilled admin like me might take a day or so, not three weeks, before giving up.

For a competent Linux system admin, this should be second nature. Installing from source should be as normal an operation as writing a shell script or scheduling a job with cron.

But if the people concerned aren't so much "POSIX system admins" as "RHEL admins", then it's possible they depend entirely on the upstream repositories, and aren't familiar with the dance of "can't find -lfoo so go install libfoo-dev" (with occasionally a more exotic step, when the package name isn't obvious, but Google helps there); in that case, it could well take a long time, but that's like a Python programmer who's having trouble debugging an asyncio program because s/he isn't used to walking through the control flow of "yield from". I know I'm not competent at that, and it doesn't stop me from being a programmer - but I'm not going to come to python-list saying "This language sucks, I can't find this bug", because the problem is a limitation in my own skills.

Other RHEL people: Is there a yum equivalent to "apt-get build-dep", which goes out and fetches all the compilers, libraries, build tools, etc, needed to build a package from source? If so - and I wouldn't be at all surprised if there is - that would be my recommended first step: grab the build deps for Python 2.6 and use those to build 2.7. Chances are that's all you need - that, plus the one little trick of "make altinstall" rather than "make install".

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On 11/15/2014 06:08 PM, Steven D'Aprano wrote:
> Assuming it was RHEL 6, then installing Python 2.7 from source as a separate application from the system Python should be trivially easy, half an hour's work. Download the source, untar, run ./configure, make, make altinstall and you should be done. There may be a few bumps in the road to get optional components supported, in which case a skilled Linux admin (which I am not) might need perhaps a couple of hours. Depending on just how bad the bumps were, an unskilled admin like me might take a day or so, not three weeks, before giving up.

In my last system administration job, we forbade installing from source, at least in the manner you are describing. It's a maintenance nightmare, especially when it comes time to upgrade the system and get things up and running on a new OS version. To make the system maintainable and re-creatable (it's easy to dump a list of installed packages to install on another machine), it had to be done using the package manager, preferably with trusted packages (trusted repositories) that were actively maintained. At that time Red Hat software collections didn't exist, so I did have to spend some considerable time building and testing RPM packages. Yes it's a headache for the developer in the short term, but in the long term it always turned out better than hacking things together from source.

The OP hasn't said that this is the case for his client of course.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Question about installing python and modules on Red Hat Linux 6
On Sun, Nov 16, 2014 at 12:57 PM, Michael Torrie wrote:
> In my last system administration job, we forbade installing from source, at least in the manner you are describing. It's a maintenance nightmare. Especially when it comes time to upgrade the system and get things up and running on a new OS version. To make the system maintainable and re-creatable (it's easy to dump a list of installed packages to install on another machine), it had to be done using the package manager. Preferably with trusted packages (trusted repositories) that were actively maintained. At that time Red Hat software collections didn't exist, so I did have to spend some considerable time building and testing RPM packages. Yes it's a headache for the developer in the short term, but in the long term it always turned out better than hacking things together from source.

Fundamentally, this comes down to a single question: Who do you most trust?

1) Upstream repositories? Then install everything from the provided package manager. All will be easy; as long as nothing you use got removed in the version upgrade, you should be able to just grab all the same packages and expect everything to run. This is what I'd recommend for most end users.

2) Yourself? Then install stuff from source. For a competent sysadmin on his/her own computer, this is what I'd recommend. Use the repos when you can, but install anything from source if you feel like it. I do this for a number of packages where Debian provides an oldish version, or where I want to tweak something, or anything like that. When you upgrade to a new OS version, it's time to reconsider all those decisions; maybe there's a newer version in repo than the one you built from source a while ago.

3) A local repository? Then do what you describe above - build and test the RPMs and don't install anything from source. If you need something that isn't in upstream, you compile it, package it, and deploy it locally. Great if you have hundreds or thousands of similar machines to manage, eliminates the maintenance nightmare, but is unnecessary work if you have only a handful of machines and they're all unique.

I'm responsible for maybe a dozen actively-used Linux boxes, so I'm a pretty small-time admin. Plus, they run a variety of different hardware, OS releases (they're mostly Debian-family distros, but I have some Ubuntu, some Debian, one AntiX, and various others around the place - and a variety of versions as well), and application software (different needs, different stuff installed). Some are headless servers that exist solely for the network. Others are people's clients that they actively use every day. They don't all need Wine, VirtualBox, DOSBox, or other compatibility/emulation layers; but some need versions newer than those provided in the upstream repos. One box needs audio drivers compiled from source, else there's no sound. Until a few months ago, several - but not all - needed a few local patches to one program. (Then the patches got accepted upstream, but that version isn't in the Debian repos yet.) And of course, they all need their various configs - a web server is not created by "sudo apt-get install apache2" so much as by editing /etc/apache2/*.

Sure, I *could* run everything through deb/rpm packages, but I'd be constantly tweaking and packaging and managing upgrades, and it's much better use of my time to just deploy from source. With so few computers to manage, the Big-Oh benefits of your recommendation just don't kick in.

But I doubt these considerations apply to the OP. Of course, we can't know until our crystal balls get some more to work on.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
Where is inspect() located?
Hi, ALL,

C:\Documents and Settings\Igor.FORDANWORK\Desktop\winpdb>python
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from a.b import c
>>> print c.classA
>>> inspect.getmembers(c.classA)
Traceback (most recent call last):
  File "", line 1, in
NameError: name 'inspect' is not defined
>>> import lib
Traceback (most recent call last):
  File "", line 1, in
ImportError: No module named lib
>>>

In the https://docs.python.org/2/library/inspect.html, it says it is located in Lib/inspect.py.

What am I missing? Or is it only for 3.x?

Thank you.
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Where is inspect() located?
On Sun, Nov 16, 2014 at 3:12 PM, Igor Korot wrote:
> In the https://docs.python.org/2/library/inspect.html, it says it is located in Lib/inspect.py.
>
> What am I missing? Or is it only for 3.x?

You need to import it. If you're curious, you can find out exactly where it's imported from like this:

Python 2.7.3 (default, Mar 13 2014, 11:03:55)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> inspect.__file__
'/usr/lib/python2.7/inspect.pyc'

The path will be different on your system, as this is a Debian box, but the technique is the same. However, all that matters is that you hadn't imported it. The Lib directory is one of the places where importable modules are found.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
Re:Where is inspect() located?
Igor Korot Wrote in message:
> Hi, ALL,
>
> C:\Documents and Settings\Igor.FORDANWORK\Desktop\winpdb>python
> Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from a.b import c
> >>> print c.classA

And what is your question here?

> >>> inspect.getmembers(c.classA)
> Traceback (most recent call last):
>   File "", line 1, in
> NameError: name 'inspect' is not defined

That's because you haven't imported it.

> >>> import lib
> Traceback (most recent call last):
>   File "", line 1, in
> ImportError: No module named lib

The module is called inspect.

    import inspect

-- 
DaveA
-- 
https://mail.python.org/mailman/listinfo/python-list
PyWart: "Python's import statement and the history of external dependencies"
Python's attempt to solve the "external dependencies problem" has yet to produce the results that many people, including myself, would like. Actually, Python is not alone in this deficiency, no, Python is just *ANOTHER* language in a *STRING* of languages over the years who has *YET AGAIN* implemented the same old crusty design patterns, packaged them in a shiny metallic wrapping paper with a big red bow on top, and hoped that no one would notice the stench... WELL I NOTICED M.F.!

Before we can formulate a solution to this mess, we must first obtain an "inside perspective" of the "world" in which a Python script lives during run-time.

Welcome to "Python Gardens": a master planned community

Well... urm, sort of. @_@

I like to analyze problems, when possible, in relation to real world "tangible" examples. So for me, i like to think of the main script (aka: __main__) as an apartment building, and each module that runs under the main script as a single apartment, and finally, the inhabitants of the apartments as objects. Furthermore, we can define the apartment building as being "global space", and each individual apartment as being "local space".

The first law we encounter is that "global space" is reserved for all tenants/guest, but "local space" is by *invitation* only! You can think of "import" as similar to sending out an invitation, and requesting that a friend join you inside your apartment (we'll get back to "import" later). And of course, with this being the 21st century and all, every apartment has *local* access to *global* resources like water and electrical.

In Python, these "global resources" are available "locally" in every module via the implicit namespace of "__builtins__". You can think of built-ins as a "house keeper" robot that lives in every apartment (I call mine "Rosie"). It was there when you moved in --a "house warming" gift of sorts-- and it helps you with chores and makes your life a little easier. Sometimes i abuse my robot but i always apologize afterwards.

Now that we know how the global and local spaces are defined (in the context of modules), and what implicit/explicit "gifts" (are supplied), and "rules" (are demanded) in our little Python world, we have a good basis to start understanding why Python's import mechanism is at best a "helpful failure" that only *naively* attempts to streamline the management of such an important language feature as external dependencies!

Up until this point, our apartment example has not mapped the actual *mechanics* of "import" to a real world example, but now that we have the correct "perspective", we can tread into the dark and damp underworld that is "import".

Remember when i said: "import is like sending out an invitation"? Well, in actuality, import is only *superficially* similar to "sending out an invitation". You see, when you send an invitation in real life, the most important two things you'll need are a *name* and an *address* --a name so you'll know *who* to send the invitation to, and an address so you'll know *where* to send the invitation-- but Python's import mechanism does not work that way. When you import an external dependency into a Python module all you need is the name of a module -- addresses are neither required *OR* allowed!
Like almost all modern languages, Python has adopted the ubiquitous practice of internalizing a mapping of known directories from which to search for modules (called a search path), so when we "import somemodule" the following happens (Note: I'm not going into too many details for the sake of topic):

1. Python checks if the external resource has already been loaded; if not 1, then
2. Python looks in a few "special places"; if not 2, then
3. Python searches one-by-one the directories in sys.path

RESULT: Python finds the resource or barfs an exception.

I have no problem with step 1, however, step 2 and step 3 can be redundant, excessive, and even unreliable. I can explain better why I levy such harsh words for "import" by going back to our apartment building example.

Let's imagine that Python is the "lobby boy" of our apartment building. And in our little example, one of the duties of the "lobby boy" is to manage invitations between tenants and guests. Each time a tenant wants to invite someone into their apartment, they must pick up the phone, press the import button (which connects them to the lobby boy via a voice connection) and they say the name of the person they want to invite. But *EVEN IF* they know the address of the recipient, they are not allowed to say the address to the lobby boy, no, because *THIS* building is owned by an evil tyrant, who has declared that any mention of an address when calling import is punishable by the fatal exception! So, being a
Re: PyWart: "Python's import statement and the history of external dependencies"
On Sun, Nov 16, 2014 at 4:01 PM, Rick Johnson wrote:
> Creating an "implicit name resolution system" (aka: import) to abstract away an "explicit name resolution system" (file-paths) has resulted in more problems than it can solve:
>
> 1. Name clashes!
> 2. Smaller name pool!
> 3. Machinery is too implicit!
> 4. Circular imports are inevitable!
> 5. Much too difficult to use and/or explain!
> 6. Too many "gotchas"!

And the ability to write cross-platform code!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Efficient Threading
On Fri, Nov 14, 2014 at 10:42 AM, Empty Account wrote:
> Hi,
>
> I am thinking about writing a load test tool in Python, so I am interested in how I can create the most concurrent threads/processes with the fewest OS resources. I would imagine that I/O would need to be non-blocking.
>
> There are a number of options including standard library threading, gevent, stackless python, cython parallelism etc. Because I am new to Python, I am unsure on which libraries to choose. I am not really bothered on the tool chain, just as long as it is Python related (so I'd use PyPy for example).

If you need a large amount of concurrency, you might look at Jython with threads. Jython threads well.

If you don't intend to do more than a few hundred concurrent things, you might just go with CPython and multiprocessing.

Fortunately, the interfaces are similar, so you can try one and switch to another later without huge issues. The main difference is that multiprocessing doesn't show variable changes to other processes, while multithreading (more often) does.

-- 
https://mail.python.org/mailman/listinfo/python-list
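As a rough illustration of how close the two interfaces are, here is a minimal sketch; the work() function and its argument are invented for the example:

    from threading import Thread
    from multiprocessing import Process

    def work(n):
        # A stand-in for real load-generating work.
        print(n * n)

    if __name__ == '__main__':
        t = Thread(target=work, args=(3,))
        t.start()
        t.join()

        # Switching to a separate process is essentially a one-line change,
        # but the child gets its own copy of all variables.
        p = Process(target=work, args=(3,))
        p.start()
        p.join()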
Re: fileno() not supported in Python 3.1
On Thu, Nov 13, 2014 at 3:48 PM, wrote:
> import sys
> for stream in (sys.stdin, sys.stdout, sys.stderr):
>     print(stream.fileno())
>
> io.UnsupportedOperation: fileno
>
> Is there a workaround?
> -- 
> https://mail.python.org/mailman/listinfo/python-list

Works for me, although it's a little different in Jython:

$ pythons --command 'import sys; print(sys.stdin.fileno())'
/usr/local/cpython-2.4/bin/python good 0
/usr/local/cpython-2.5/bin/python good 0
/usr/local/cpython-2.6/bin/python good 0
/usr/local/cpython-2.7/bin/python good 0
/usr/local/cpython-3.0/bin/python good 0
/usr/local/cpython-3.1/bin/python good 0
/usr/local/cpython-3.2/bin/python good 0
/usr/local/cpython-3.3/bin/python good 0
/usr/local/cpython-3.4/bin/python good 0
/usr/local/jython-2.7b3/bin/jython good org.python.core.io.StreamIO@170ed6ab
/usr/local/pypy-2.3.1/bin/pypy good 0
/usr/local/pypy-2.4.0/bin/pypy good 0
/usr/local/pypy3-2.3.1/bin/pypy good 0
/usr/local/pypy3-2.4.0/bin/pypy good 0

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: fileno() not supported in Python 3.1
On Sun, Nov 16, 2014 at 4:25 PM, Dan Stromberg wrote: > Works for me, although it's a little different in Jython: > $ pythons --command 'import sys; print(sys.stdin.fileno())' > /usr/local/jython-2.7b3/bin/jython good org.python.core.io.StreamIO@170ed6ab Huh, that is curious. According to its CPython docstring, it's supposed to return an integer (no idea if that's standardized)... but I guess if os.read() takes that StreamIO object, then it'll do as well. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
frame.f_locals['__class__'] -- When does it (not) exist and why?
Hello,

I am a CPython 3.4+ user on Linux. I am writing a little library for myself to improve the traceback module -- print_exc() and friends. I want to include the module name, class name (if possible), and function name.

Some background: traceback.print_exc() iterates through traceback objects returned by sys.exc_info()[2]. traceback.tb_frame holds each stack frame. (I call this 'frame' below.)

My improved library nearly works, but I noticed a strange corner case around frame.f_locals['__class__']. When super().__init__() is called, a 'magic' local appears in frame.f_locals called '__class__'. Helpfully, it has the correct class for the context, which will differ from type(self). (I discovered this magic local by poking around in the debugger. I am unable to find any official documentation on it.)

Here is the quirk: In the last class in a chain of super.__init__() calls, this magic local disappears. So I have no idea of the correct class for the context. I am stuck with frame.f_locals['self'].

How can I recover the correct class for the context in the last __init__() method? I noticed if I change the last class to inherit from object, the magic local '__class__' appears again.

A little code to demonstrate:

    # do not subclass object here
    def class X:
        def __init__(self):
            # frame.f_locals['__class__'] does not exist
            pass

    def class Y(X):
        def __init__(self):
            # frame.f_locals['__class__'] == Y
            super().__init__()

    def class Z(Y):
        def __init__(self):
            super().__init__()

    # subclass object here
    def class X2(object):
        def __init__(self):
            # frame.f_locals['__class__'] == X2
            pass

    def class Y2(X2):
        def __init__(self):
            # frame.f_locals['__class__'] == Y2
            super().__init__()

    def class Z2(Y2):
        def __init__(self):
            super().__init__()

Thanks,
Arpe
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: frame.f_locals['__class__'] -- When does it (not) exist and why?
On Sun, Nov 16, 2014 at 4:25 PM, wrote: > When super().__init__() is called, a 'magic' local appears in frame.f_locals > called '__class__'. Helpfully, it has the correct class for the context, > which will differ from type(self). (I discovered this magic local by poking > around in the debugger. I am unable to find any official documentation on > it.) > Interesting. Are you sure you can't get the class name via the function, rather than depending on this magic local? That seems fragile. Your example code isn't currently runnable ("def class"); can you provide a self-contained program that actually attempts to display something? It'd be helpful for those of us who aren't familiar with internal and esoteric details of tracebacks and __init__ :) ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: frame.f_locals['__class__'] -- When does it (not) exist and why?
Apologies for previous code example. Yes, the 'def class' should read: 'class'.

Writing a sample to better demonstrate the issue made me realize that super() is doing something special. It is injecting the magic '__class__' local.

I should rephrase my question: How do I get the declaring class from a traceback object?

Currently, I cannot see how to do it. Magic local '__class__' is not always available. And locals 'cls' and 'self' may come from subclasses.

Code sample:

    import inspect
    import sys

    class X:
        def __init__(self):
            # Magic local '__class__' is missing
            raise ValueError()

    class Y(X):
        def __init__(self):
            super().__init__()

    class X2:
        def __init__(self):
            # Calling super() here will 'inject' magic local '__class__'
            super().__init__()
            raise ValueError()

    class Y2(X2):
        def __init__(self):
            super().__init__()

    def main():
        _main(lambda: Y())
        _main(lambda: Y2())

    def _main(func):
        try:
            func()
        except:
            (exc_type, exc_value, traceback) = sys.exc_info()
            tb = traceback
            while tb:
                frame = tb.tb_frame
                # See: code.co_freevars. Sometimes magic '__class__' appears.
                code = frame.f_code
                lineno = frame.f_lineno
                func_name = code.co_name
                file_path = code.co_filename
                module = inspect.getmodule(frame, file_path)
                module_name = module.__name__
                print("File: {}, Line: {}, Func: {}, Module: {}".format(file_path, lineno, func_name, module_name))
                for name in ('__class__', 'self', 'cls'):
                    if name in frame.f_locals:
                        print("{}: '{}'".format(name, frame.f_locals[name]))
                tb = tb.tb_next
            print()

    if __name__ == '__main__':
        main()

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: caught in the import web again
"Charles T. Smith" writes:
> Now, I'm getting these errors:
>
> ImportError: cannot import name ...
>
> and
>
> AttributeError: 'module' object has no attribute ...
>
> (what is 'module'?)
>
> Is there a way to resolve this without having to restructure my code every couple of days?
>
> I thought using imports of the form:
>
> from module import symbol
>
> was the "right way" to avoid these hassles...

I have not noticed your previous posts regarding import problems -- thus, I may repeat things already said.

Usually, Python imports are straightforward and do not lead to surprises. There are two exceptions: recursive imports and imports in separate threads. Let's look at the problem areas in turn.

Recursive imports. In this case you have module "A" which imports module "B" (maybe indirectly via a sequence of intervening imports) which imports module "A". In this, "import" means either "import ..." or "from ... import ..." (it does not matter). When module "B" tries to import module "A", then "A" is not yet complete (as during the execution of "A"'s initialization code, it started to import "B" (maybe indirectly) and the following initialization code has not yet been executed). As a consequence, you may get "AttributeError" (or "ImportError") when you try to access things from "A".

To avoid problems like this, try to avoid recursive imports. One way to do this is local imports -- i.e. imports inside a function. This way, the import happens when the function is called, which (hopefully) happens after module initialization.

Imports and threads. The import machinery changes global data structures (e.g. "sys.modules"). To protect those changes, it uses a lock, held while an import is in progress. When, during an import, a separate thread tries to import something, it blocks - waiting for the import lock to be released. In case the importing thread waits on this thread, then the system deadlocks. To avoid this: do not start threads during imports.

-- 
https://mail.python.org/mailman/listinfo/python-list
encode and decode builtins
I made the switch to python 3 about two months ago, and I have to say I love everything about it, *especially* the change to using only bytes and str (no more unicode! or... everything is unicode!) As someone who works with embedded devices, it is great to know what data I am working with.

However, there are times that I do not care what data I am working with, and I find myself writing something like:

    if isinstance(data, bytes):
        data = data.decode()

This is tedious and breaks the pythonic method of not caring about what your input is. If I expect that my input can always be decoded into valid data, then why do I have to write this?

Instead, I would like to propose to add *encode* and *decode* as builtins. I have written simple code to demonstrate my desire:

https://gist.github.com/cloudformdesign/d8065a32cdd76d1b3230

There may be a few edge cases I am missing, which would all the more prove my point -- we need a function like this! Basically, if I expect my data to be a string I can just write:

    data = decode(data)

Which would accomplish two goals: explicitly stating what I expect of my data, and doing so concisely and cleanly.

-- 
https://mail.python.org/mailman/listinfo/python-list
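A minimal sketch of the kind of helper being proposed (an illustration of the idea only, not the code from the linked gist):

    def decode(data, encoding='utf-8'):
        # Return `data` as str, decoding only when it is actually bytes.
        if isinstance(data, bytes):
            return data.decode(encoding)
        return data

    def encode(data, encoding='utf-8'):
        # Return `data` as bytes, encoding only when it is actually str.
        if isinstance(data, str):
            return data.encode(encoding)
        return data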
Re: encode and decode builtins
Garrett Berg writes:
> I made the switch to python 3 about two months ago, and I have to say I love everything about it, *especially* the change to using only bytes and str (no more unicode! or... everything is unicode!) As someone who works with embedded devices, it is great to know what data I am working with.

Thanks! It is great to hear from people directly benefiting from this clear distinction.

> However, there are times that I do not care what data I am working with, and I find myself writing something like:
>
> if isinstance(data, bytes): data = data.decode()

Why are you in a position where ‘data’ is not known to be bytes? If you want ‘unicode’ objects, isn't the API guaranteeing to provide them?

> This is tedious and breaks the pythonic method of not caring about what your input is.

I wouldn't call that Pythonic. Rather, in the face of ambiguity (“is this text or bytes?”), Pythonic code refuses the temptation to guess: you need to clarify what you have as early as possible in the process.

> If I expect that my input can always be decoded into valid data, then why do I have to write this?

I don't know. Why do you have to?

-- 
 \     “God was invented to explain mystery. God is always invented to |
  `\   explain those things that you do not understand.” —Richard P.   |
_o__)  Feynman, 1988                                                   |
Ben Finney
-- 
https://mail.python.org/mailman/listinfo/python-list
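For instance, a sketch of that approach, decoding once at the boundary so everything downstream handles str only; the read_config name and file format are invented for the example:

    def read_config(path):
        # Decode at the edge of the program: open the file in text mode with
        # an explicit encoding, so everything downstream handles str only.
        with open(path, encoding='utf-8') as f:
            return f.read()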