Re: The Most Diabolical Python Antipattern
On 30/01/2015 06:16, Marko Rauhamaa wrote:
> Ian Kelly :
>> At least use "except Exception" instead of a bare except. Do you
>> really want things like SystemExit and KeyboardInterrupt to get turned
>> into 0?
>
> How about:
>
> ==
> try:
>     do_interesting_stuff()
> except ValueError:
>     try:
>         log_it()
>     except:
>         pass
>     raise
> ==
>
> Surprisingly this variant could raise an unexpected exception:
>
> ==
> try:
>     do_interesting_stuff()
> except ValueError:
>     try:
>         log_it()
>     finally:
>         raise
> ==
>
> A Python bug?
>
> Marko

It depends on the Python version that you're running - I think!!! See

https://www.python.org/dev/peps/pep-3134/
https://www.python.org/dev/peps/pep-0409/
https://www.python.org/dev/peps/pep-0415/

and finally try (groan :)

https://pypi.python.org/pypi/pep3134/

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

-- 
https://mail.python.org/mailman/listinfo/python-list
testfixtures 4.1.2 Released!
Hi All,

I'm pleased to announce the release of testfixtures 4.1.2.

This is a bugfix release that fixes the following:

- Clarify documentation for the name parameter to LogCapture.

- ShouldRaise now shows different output when two exceptions have the
  same representation but still differ.

- Fix bug that could result in a dict comparing equal to a list.

Thanks to Daniel Fortunov for the documentation clarification.

The package is on PyPI and a full list of all the links to docs, issue
trackers and the like can be found here:

http://www.simplistix.co.uk/software/python/testfixtures

Any questions, please do ask on the Testing in Python list or on the
Simplistix open source mailing list...

cheers,

Chris

-- 
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
Mark Lawrence :
> On 30/01/2015 06:16, Marko Rauhamaa wrote:
>> How about:
>>
>> ==
>> try:
>>     do_interesting_stuff()
>> except ValueError:
>>     try:
>>         log_it()
>>     except:
>>         pass
>>     raise
>> ==
>>
>> Surprisingly this variant could raise an unexpected exception:
>>
>> ==
>> try:
>>     do_interesting_stuff()
>> except ValueError:
>>     try:
>>         log_it()
>>     finally:
>>         raise
>> ==
>>
>> A Python bug?
>
> It depends on the Python version that you're running - I think!!! See
> https://www.python.org/dev/peps/pep-3134/

TL;DR

My Python did do exception chaining, but the problem is that the surface
exception changes, which could throw off the whole error recovery.

So I'm thinking I might have found a valid use case for the "diabolical
antipattern."

Marko

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: An object is an instance (or not)?
Steven D'Aprano wrote:
> Actually, if you look at my example, you will see that it is a method
> and it does get the self argument. Here is the critical code again:
>
> from types import MethodType
> polly.talk = MethodType(
>     lambda self: print("Polly wants a spam sandwich!"), polly)

Doing it by hand is cheating.

> That's certainly not correct, because Python had classes and instances
> before it had descriptors!

Before the descriptor protocol, a subset of its functionality was
hard-wired into the interpreter. There has always been some magic going
on across the instance-class boundary that doesn't occur across the
class-baseclass boundary.

> Ah wait, I think I've got it. If you want (say) your class object
> itself to support (say) the + operator, it isn't enough to write a
> __add__ method on the class, you have to write it on the metaclass.

That's right.

-- 
Greg

-- 
https://mail.python.org/mailman/listinfo/python-list
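Greg's point about metaclasses can be shown with a short, self-contained
sketch (the class names here are made up for illustration):

```python
# To make the *class object itself* support "+", define __add__ on the
# metaclass.  An __add__ defined on the class would only affect
# instances of the class.
class AddableMeta(type):
    def __add__(cls, other):
        return "%s+%s" % (cls.__name__, other.__name__)

class Spam(metaclass=AddableMeta):
    pass

class Eggs(metaclass=AddableMeta):
    pass

print(Spam + Eggs)  # the classes themselves are being added
```

The same asymmetry holds for every special method: Python looks them up
on the type of the operand, and the type of a class object is its
metaclass.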
Re: The Most Diabolical Python Antipattern
On Thu, Jan 29, 2015 at 11:16 PM, Marko Rauhamaa wrote:
> Ian Kelly :
>
>> At least use "except Exception" instead of a bare except. Do you
>> really want things like SystemExit and KeyboardInterrupt to get turned
>> into 0?
>
> How about:
>
> ==
> try:
>     do_interesting_stuff()
> except ValueError:
>     try:
>         log_it()
>     except:
>         pass
>     raise
> ==

Are you asking if I think this is better? It still swallows arbitrary
exceptions. Why would you want to re-raise the anticipated (and logged)
ValueError instead of the exception that could potentially be
unexpected?

> Surprisingly this variant could raise an unexpected exception:
>
> ==
> try:
>     do_interesting_stuff()
> except ValueError:
>     try:
>         log_it()
>     finally:
>         raise
> ==
>
> A Python bug?

This does what it is supposed to. "If no expressions are present, raise
re-raises the last exception that was active in the current scope." In
this case, what that exception is depends on whether the finally clause
was entered as a result of an exception or fall-through from the try
clause.

If you only want to re-raise the ValueError, then use the first form
above. If you only want to re-raise the other exception, then do so from
an except block (or don't catch it in the first place).

-- 
https://mail.python.org/mailman/listinfo/python-list
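Ian's point - that the bare raise in the finally clause re-raises
whichever exception is currently active - can be demonstrated with a
small sketch (log_it is made to fail here so the behaviour is visible):

```python
def do_interesting_stuff():
    raise ValueError("original")

def log_it():
    # hypothetical logging failure, to trigger the surprising case
    raise OSError("logging failed")

def run():
    try:
        do_interesting_stuff()
    except ValueError:
        try:
            log_it()
        finally:
            raise  # re-raises whatever exception is active *now*

try:
    run()
except Exception as e:
    surfaced = e

# The logging failure is what surfaces; the ValueError is chained to it.
print(type(surfaced).__name__)              # OSError
print(type(surfaced.__context__).__name__)  # ValueError
```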
Re: The Most Diabolical Python Antipattern
On Fri, Jan 30, 2015 at 2:02 AM, Marko Rauhamaa wrote:
> Mark Lawrence :
>
>> On 30/01/2015 06:16, Marko Rauhamaa wrote:
>>> How about:
>>>
>>> ==
>>> try:
>>>     do_interesting_stuff()
>>> except ValueError:
>>>     try:
>>>         log_it()
>>>     except:
>>>         pass
>>>     raise
>>> ==
>>>
>>> Surprisingly this variant could raise an unexpected exception:
>>>
>>> ==
>>> try:
>>>     do_interesting_stuff()
>>> except ValueError:
>>>     try:
>>>         log_it()
>>>     finally:
>>>         raise
>>> ==
>>>
>>> A Python bug?
>>
>> It depends on the Python version that you're running - I think!!! See
>> https://www.python.org/dev/peps/pep-3134/
>
> TL;DR
>
> My Python did do exception chaining, but the problem is the surface
> exception changes, which could throw off the whole error recovery.
>
> So I'm thinking I might have found a valid use case for the "diabolical
> antipattern."

I suppose, although it seems awfully contrived to me. In any case it
would still be better with "except Exception" rather than the bare
except. Unless re-raising that ValueError is more important to you than
letting the user hit Ctrl-C during the logging call.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: python client call Java server by xmlrpc
Thanks dieter,

The issue is solved. I used SmartSniff to get the xml messages sent by
the java client and the python client, and found the difference.

Define a new class:

class MyData(object):
    def __init__(self, myKey, myValue):
        self.Key = myKey
        self.Value = myValue

and use this object as the parameter; then the server returns the
correct reply.

para = MyData(0.5, 0.01)
server.Fun([para], [])

From: dieter
To: python-list@python.org
Date: 01/23/2015 03:26 PM
Subject: Re: python client call Java server by xmlrpc
Sent by: "Python-list"

fan.di...@kodak.com writes:
> I have an xmlrpc server written in Java, and it has a method like
>
> Fun(vector, vector), where the vector is an array of user-defined
> objects, each of which is a class extending HashMap.
>
> And I call it like:
>
> server = xmlrpclib.ServerProxy("http://myserver")
> server.Fun([{"0.5": 0.1}], [])
>
> It always fails with the error
>
> 'No method matching arguments: , [Ljava.lang.Object;, [Ljava.lang.Object;'
>
> Has anyone used this before? It has troubled me for some days.

The standard XML-RPC protocol knows only about a very small set of
types. Extensions are required to pass on more type information.

The (slightly confusing) error message you got indicates that the
XML-RPC framework on the server side has not correctly recognized the
types of the incoming parameters: it should recognize the "[]" (as this
is a standard type) (and maybe the open "[" indicates that it has
indeed), but apparently it got the content elements only as generalized
"Object"s, not something specific (extending "HashMap").

The "xmlrpclib" in the Python runtime library does not support
extensions to pass on additional type information (as far as I know).
This might indicate that you cannot use Python's "xmlrpclib" out of the
box to interact with your Java-implemented XML-RPC service.

I would approach the problem as follows. Implement a Java based XML-RPC
client for your service. Should this fail, then your service
implementation has too complex types for XML-RPC (and you must simplify
them). Should you succeed, you can use a tcp logger (or maybe the
debugging tools of the Java library implementing XML-RPC) to determine
the exact messages exchanged between client and server. This way, you
can learn how your Java libraries pass on non-standard type
information. You can then derive a new class from Python's "xmlrpclib"
and implement there this type-information-passing extension (apparently
used by your Java libraries).

-- 
https://mail.python.org/mailman/listinfo/python-list
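Why the plain-object workaround changes what the server sees can be
checked by inspecting what the marshaller puts on the wire. A sketch
using the Python 3 module name (xmlrpc.client; in Python 2 this is
xmlrpclib), with the poster's MyData class: an arbitrary instance is
marshalled as an XML-RPC <struct> built from its __dict__, so the
members arrive with names and standard types.

```python
import xmlrpc.client

class MyData(object):
    def __init__(self, myKey, myValue):
        self.Key = myKey
        self.Value = myValue

# dumps() shows the exact XML a call like server.Fun([para], []) sends.
payload = xmlrpc.client.dumps(([MyData(0.5, 0.01)], []), 'Fun')
print(payload)
```

The output contains named struct members (Key, Value) with standard
scalar types, i.e. an ordinary struct that the server-side framework can
type-match.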
Re: The Most Diabolical Python Antipattern
Marko Rauhamaa :
>>> Surprisingly this variant could raise an unexpected exception:
>>>
>>> ==
>>> try:
>>>     do_interesting_stuff()
>>> except ValueError:
>>>     try:
>>>         log_it()
>>>     finally:
>>>         raise
>>> ==
>>>
>>> A Python bug?
> [...]
> My Python did do exception chaining, but the problem is the surface
> exception changes, which could throw off the whole error recovery.

BTW, the code above can be fixed:

==
try:
    do_interesting_stuff()
except ValueError as e:
    try:
        log_it()
    finally:
        raise e
==

Now the surface exception is kept and the subsidiary exception is
chained to it.

I'm a bit baffled why the two pieces of code are not equivalent.

Marko

-- 
https://mail.python.org/mailman/listinfo/python-list
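The difference Marko describes can be made concrete; in this sketch
log_it fails, yet "raise e" keeps the ValueError on the surface, with
the logging failure attached as __context__:

```python
def log_it():
    # hypothetical failure inside the logger
    raise OSError("logging failed")

try:
    try:
        raise ValueError("original")
    except ValueError as e:
        try:
            log_it()
        finally:
            raise e  # re-raise the original by name, not the active OSError
except Exception as exc:
    surfaced = exc

print(type(surfaced).__name__)              # ValueError
print(type(surfaced.__context__).__name__)  # OSError
```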
about supervisor program doesn't show in web console
Hi there

I am trying to use supervisord. I wrote some programs in the
/etc/supervisord.conf file. I wrote four programs in it. However, one
program doesn't show in the web console. The program name is
"myprogram_three". A snippet of my /etc/supervisord.conf file is below.

I modified http_port:

[supervisord]
;http_port=/var/tmp/supervisor.sock ; (default is to run a UNIX domain socket server)
http_port=0.0.0.0:

And my program registrations are here:

; App appname Program
; myprogram 1
[program:myprogram_one]
command=python /home/username/work/my_program.py start 3 appname /var/www/django/demo/appname/appname
startsecs = 5
user = root
redirect_stderr = true
stderr_logfile = /var/log/supervisor/Program/hoge_appname-stderr.log
stdout_logfile = /var/log/supervisor/Program/hoge_appname-stdout.log

; myprogram 2
[program:myprogram_two]
command=python /home/username/work/my_program.py start 4 appname /var/www/django/demo/appname/appname
startsecs = 5
user = root
redirect_stderr = true
stderr_logfile = /var/log/supervisor/Program/bar_appname-stderr.log
stdout_logfile = /var/log/supervisor/Program/bar_appname-stdout.log

; appnametwo Program
; myprogram 3
[program:myprogram_three]
command=python /home/username/work/my_program.py start 3 appnametwo /var/www/django/demo/appnametwo/appname
startsecs = 5
user = root
redirect_stderr = true
stderr_logfile = /var/log/supervisor/Program/hoge_appnametwo-stderr.log
stdout_logfile = /var/log/supervisor/Program/hoge_appnametwo-stdout.log

; myprogram 4
[program:myprogram_four]
command=python /home/username/work/my_program.py start 4 appnametwo /var/www/django/demo/appnametwo/appname
startsecs = 5
user = root
redirect_stderr = true
stderr_logfile = /var/log/supervisor/Program/bar_appnametwo-stderr.log
stdout_logfile = /var/log/supervisor/Program/bar_appnametwo-stdout.log

Why doesn't the "myprogram_three" program show up?

I would appreciate it if you could help me.

-- 
https://mail.python.org/mailman/listinfo/python-list
EuroPython 2015: New Code of Conduct
For EuroPython 2015 we have chosen to use a new code of conduct (CoC) that is based on the PyCon UK Code of Conduct [1]. We think that it reads much nicer than the one we had before, while serving the same purpose. In summary: Be nice to each other - We trust that attendees will treat each other in a way that reflects the widely held view that diversity and friendliness are strengths of our community to be celebrated and fostered. Furthermore, we believe attendees have a right to: * be treated with courtesy, dignity and respect; * be free from any form of discrimination, victimization, harassment or bullying; * enjoy an environment free from unwelcome behavior, inappropriate language and unsuitable imagery. Here’s the permanent link to the CoC for 2015: EuroPython 2015 - Code of Conduct http://ep2015.europython.eu/coc/ We’d like to thank the PyCon UK organizers for their work on the CoC and for putting it under a CC license. [1] http://pyconuk.net/CodeOfConduct Thanks, -— EuroPython Society (EPS) http://www.europython-society.org/ -- https://mail.python.org/mailman/listinfo/python-list
Re: about supervisor program doesn't show in web console
I resolved it. I tried resetting the config with:

echo_supervisord_conf > /etc/supervisord.conf

On Friday, January 30, 2015 at 19:26:45 UTC+9, shin...@gmail.com wrote:
> Hi there
>
> I am trying to use supervisord.
> I wrote some programs in the /etc/supervisord.conf file.
>
> I wrote four programs in it.
>
> However, one program doesn't show in the web console.
>
> The program name is "myprogram_three".
>
> A snippet of my /etc/supervisord.conf file is below.
>
> I modified http_port.
>
> [supervisord]
> ;http_port=/var/tmp/supervisor.sock ; (default is to run a UNIX domain socket server)
> http_port=0.0.0.0:
>
> And my program registrations are here:
>
> ; App appname Program
> ; myprogram 1
> [program:myprogram_one]
> command=python /home/username/work/my_program.py start 3 appname /var/www/django/demo/appname/appname
> startsecs = 5
> user = root
> redirect_stderr = true
> stderr_logfile = /var/log/supervisor/Program/hoge_appname-stderr.log
> stdout_logfile = /var/log/supervisor/Program/hoge_appname-stdout.log
>
> ; myprogram 2
> [program:myprogram_two]
> command=python /home/username/work/my_program.py start 4 appname /var/www/django/demo/appname/appname
> startsecs = 5
> user = root
> redirect_stderr = true
> stderr_logfile = /var/log/supervisor/Program/bar_appname-stderr.log
> stdout_logfile = /var/log/supervisor/Program/bar_appname-stdout.log
>
> ; appnametwo Program
> ; myprogram 3
> [program:myprogram_three]
> command=python /home/username/work/my_program.py start 3 appnametwo /var/www/django/demo/appnametwo/appname
> startsecs = 5
> user = root
> redirect_stderr = true
> stderr_logfile = /var/log/supervisor/Program/hoge_appnametwo-stderr.log
> stdout_logfile = /var/log/supervisor/Program/hoge_appnametwo-stdout.log
>
> ; myprogram 4
> [program:myprogram_four]
> command=python /home/username/work/my_program.py start 4 appnametwo /var/www/django/demo/appnametwo/appname
> startsecs = 5
> user = root
> redirect_stderr = true
> stderr_logfile = /var/log/supervisor/Program/bar_appnametwo-stderr.log
> stdout_logfile = /var/log/supervisor/Program/bar_appnametwo-stdout.log
>
> Why doesn't the "myprogram_three" program show up?
>
> I would appreciate it if you could help me.

-- 
https://mail.python.org/mailman/listinfo/python-list
How to show Tail -f in supervisor's web console.
Hello

I use supervisor's web console. I clicked the "Tail -f" button, but it
doesn't show the log page.

I checked the log file with

tail -f /tmp/supervisord.log

but it doesn't show anything about the bug.

I would appreciate it if you could help me.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Sort of Augmented Reality
On 30. 01. 15 03:22, Dennis Lee Bieber wrote:
> Given the first, and the aircraft data, you would need to compute the
> rays (azimuth/elevation) from camera to each target and map those to
> pixels in the image... a lot of spherical trigonometry there...
>
> http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB4QFjAA&url=http%3A%2F%2Fccar.colorado.edu%2FASEN5070%2Fhandouts%2FComp_of_Azimuth_Ele.ppt&ei=gerKVN-nOsSxyATMwoKQDw&usg=AFQjCNEVx_Yf8QtGYwJ2wH7OU5LRaqQJEA&bvm=bv.84607526,d.aWw
>
> (satellite based, but may be applicable; do a search for "convert lat
> long to azimuth elevation")

Hmm, not as simple as I was thinking :-) I think I'll have some hours of
reading on the internet...

Thank you for the seed

franssoa

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Installling ADODB on an offline computer
On Thursday, January 29, 2015 at 8:35:50 PM UTC-5, Alan Meyer wrote: > I work on an application that uses the ActivePython compilation of > Python from ActiveState. It uses three Microsoft COM libraries that are > needed for talking to SQL Server. The libraries are: > > Microsoft Activex Data Objects > Microsoft Activex Data Objects Recordset > Microsoft ADO Ext > > In the past, we have installed those libraries on numerous machines by > running makepy.py. It can be done from the ActivePython IDLE GUI, or > from the command line. makepy.py downloads and installs the packages > for us, taking care of COM server or client registration, or whatever it > is that has to be done (I don't really know much about this stuff.) > > Now I've been asked to port the application to a computer that is not > connected to the Internet. I haven't found any way to get those > packages. I haven't found a way to, for example, download the packages > to files, place the files on the target computer, and run makepy.py or a > setup.py to install them. > > Can someone suggest a way to do it? > > Thank you very much. > > Alan I don't think makepy.py needs to download anything. It's just extracting information from the registry about COM services which are available and building a Python wrapper around those services. I've used it successfully myself on a machine completely cut off from the outside world. Cheers, Bob -- https://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing.Queue & vim python interpreter
On Friday, January 23, 2015 at 10:59:46 PM UTC-5, Cameron Simpson wrote:
> On 18Jan2015 16:20, unknown3...@gmail.com wrote:
> > I am experimenting on a fork of vim-plug for managing vim plugins. I
> > wanted to add parallel update support for python since ruby isn't
> > nearly as common. I've come across a weird bug that only seems to
> > happen when I'm inside vim; I'm wondering if someone could tell me
> > why.
> >
> > This problem can be reproduced by sourcing a vim file with the
> > following snippet. Then execute the command PyCrash.
> >
> > command! -nargs=0 PyCrash call s:py_crash()
> > function! s:py_crash()
> > python << EOF
> > import multiprocessing as multi
> > queue = multi.Queue()
> > queue.put('a')
> > queue.close()
> > EOF
> > endfunction
> >
> > This prints to messages the following:
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
> >     send(obj)
> > IOError: [Errno 32] Broken pipe
>
> Please include the entire traceback in errors.
>
> I would guess this is because you have made a Queue but have not got a
> subprocess to read from it. So when you .put onto it, you get a broken
> pipe.
>
> Cheers,
> Cameron Simpson
>
> Nothing is impossible for the man who doesn't have to do it.

Hi there. Thanks for getting back to me. I don't really need an answer
to this issue anymore, as I went with threads instead of
multiprocessing. The latter doesn't work well on Windows, and works even
worse inside an embedded context like GVim on Windows. Not having fork
is a real pain.

This bug was probably related to the embedded-python nature of the code
rather than to python itself, as I never experienced it outside of Vim
using the same multiprocessing/Queue code. Also, I don't remember which
commit it was, so since I'm working on other stuff I'm just gonna leave
this unfixed/uninvestigated.

Thanks anyway.

Regards,
Jeremy

-- 
https://mail.python.org/mailman/listinfo/python-list
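For reference, the threads-plus-queue approach the poster switched to
can be sketched roughly like this (the plugin names and the update step
are placeholders, not vim-plug's actual code):

```python
import queue
import threading

def update_plugin(name, results):
    # placeholder for the real per-plugin update work
    results.put((name, "ok"))

plugins = ["plugin-a", "plugin-b", "plugin-c"]  # hypothetical names
results = queue.Queue()  # thread-safe, no pipes or fork involved
workers = [threading.Thread(target=update_plugin, args=(p, results))
           for p in plugins]
for w in workers:
    w.start()
for w in workers:
    w.join()

done = dict(results.get() for _ in plugins)
print(done)
```

Unlike multiprocessing.Queue, queue.Queue needs no child process or
pipe, which is why it behaves the same inside an embedded interpreter.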
Re: parsing tree from excel sheet
Hi Peter,

I'll try to comment on the code below to verify whether I understood it
correctly or am missing some major parts. Comments are just below the
code, with the intent to let you read the code first and my
understanding afterwards.

Peter Otten <__pete...@web.de> wrote:
[]
> $ cat parse_column_tree.py
> import csv
>
> def column_index(row):
>     for result, cell in enumerate(row, 0):
>         if cell:
>             return result
>     raise ValueError

Here you get the depth of the first node in this row.

> class Node:
>     def __init__(self, name, level):
>         self.name = name
>         self.level = level
>         self.children = []
>
>     def append(self, child):
>         self.children.append(child)
>
>     def __str__(self):
>         return "\%s{%s}" % (self.level, self.name)

Up to here everything is fine, essentially defining the basic methods
for the node object. A node is represented uniquely by its name and its
level. Here I could say that two nodes with the same name cannot be on
the same level, but this is cosmetic. The important part would be that
'Name' could also be 'Attributes', with a dictionary instead. This
would allow storing more information on each node.

>     def show(self):
>         yield [self.name]

Here I'm lost in translation! Why use yield in the first place? What is
this snippet used for?

>         for i, child in enumerate(self.children):
>             lastchild = i == len(self.children)-1
>             first = True
>             for c in child.show():
>                 if first:
>                     yield ["\---> " if lastchild else "+---> "] + c
>                     first = False
>                 else:
>                     yield ["      " if lastchild else "|     "] + c

Here I understand more: essentially 'yield' returns a string that will
be used further down in the show(root) function. Yet I doubt that I
grasp the true meaning of the code. Those 'show' functions seem to have
lots of iterations that I'm not quite able to trace. Here you loop over
the children, as well as in main()...

>     def show2(self):
>         yield str(self)
>         for child in self.children:
>             yield from child.show2()

OK, this as well requires some explanation. Kinda lost again. From what
I can naively deduce, it is a generator that returns the str defined in
the node as __str__, and it shows it for the whole tree.

> def show(root):
>     for row in root.show():
>         print("".join(row))
>
> def show2(root):
>     for line in root.show2():
>         print(line)

Here we implement the functions to print a node, but I'm not sure I
understand why I have to iterate if main() iterates again over the
nodes.

> def read_tree(rows, levelnames):
>     root = Node("#ROOT", "#ROOT")
>     old_level = 0
>     stack = [root]
>     for i, row in enumerate(rows, 1):

I'm not quite sure I understand what the stack is for. As of now it is a
list whose only element is root.

>         new_level = column_index(row)
>         node = Node(row[new_level], levelnames[new_level])

Here you are getting the node based on the current row, with its level.

>         if new_level == old_level:
>             stack[-1].append(node)

I'm not sure I understand here. Why the end of the list and not the
beginning?

>         elif new_level > old_level:
>             if new_level - old_level != 1:
>                 raise ValueError

Here you avoid having a node which is distant more than one level from
its parent.

>             stack.append(stack[-1].children[-1])

Here I get a crash: IndexError: list index out of range!

>             stack[-1].append(node)
>             old_level = new_level
>         else:
>             while new_level < old_level:
>                 stack.pop(-1)
>                 old_level -= 1
>             stack[-1].append(node)

Why do I need to pop something from the stack??? Here you are saying
that if the current row has a depth (new_level) that is smaller than the
previous one (old_level), I decrement old_level by one (even if I may
have a bigger jump) and pop something from the stack...???

>     return root

Once filled, the tree is returned. I thought the tree would have been
the stack, but instead it is root... nice surprise.

> def main():
[strip arg parsing]
>     with open(args.infile) as f:
>         rows = csv.reader(f)
>         levelnames = next(rows)  # skip header
>         tree = read_tree(rows, levelnames)

Filling the tree with the data in the csv.

>     show_tree = show2 if args.latex else show
>     for node in tree.children:
>         show_tree(node)
>         print("")

It's nice to define show_tree as a function of the argument. The for
loop now is more than clear, traversing each node of the tree.

As I said earlier in the thread there's a lot of food for a newbie, but
better to go through these sorts of exercises than a dumb tutorial which
doesn't teach you much.

Al

-- 
https://mail.python.org/mailman/listinfo/python-list
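The stack questions above may be easier to see in a stripped-down sketch
of the same algorithm: the stack always holds the path from the root
down to the current parent, stack[-1] is the node that receives the next
child, and popping climbs back up when the indentation decreases. (This
is a simplified rewrite for illustration, not Peter's exact code.)

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def read_tree(rows):
    """rows are (level, name) pairs, already column-indexed."""
    root = Node("#ROOT")
    stack = [root]            # path from root to current parent
    old_level = 0
    for level, name in rows:
        if level > old_level + 1:
            raise ValueError("level jump > 1")
        while level < old_level:   # shallower row: climb back up
            stack.pop()
            old_level -= 1
        if level > old_level:      # deeper row: descend into last child
            stack.append(stack[-1].children[-1])
        stack[-1].children.append(Node(name))
        old_level = level
    return root

tree = read_tree([(0, "A"), (1, "B"), (1, "C"), (0, "D")])
print([c.name for c in tree.children])              # top level: A, D
print([c.name for c in tree.children[0].children])  # under A: B, C
```

Note that the IndexError mentioned above is what this sketch also
produces when the very first data row is not in the first column: the
root has no children yet, so there is nothing to descend into.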
Re: The Most Diabolical Python Antipattern
On Fri, Jan 30, 2015 at 3:00 AM, Marko Rauhamaa wrote:
> Marko Rauhamaa :
>>> Surprisingly this variant could raise an unexpected exception:
>>>
>>> ==
>>> try:
>>>     do_interesting_stuff()
>>> except ValueError:
>>>     try:
>>>         log_it()
>>>     finally:
>>>         raise
>>> ==
>>>
>>> A Python bug?
>> [...]
>> My Python did do exception chaining, but the problem is the surface
>> exception changes, which could throw off the whole error recovery.
>
> BTW, the code above can be fixed:
>
> ==
> try:
>     do_interesting_stuff()
> except ValueError as e:
>     try:
>         log_it()
>     finally:
>         raise e
> ==
>
> Now the surface exception is kept and the subsidiary exception is
> chained to it.
>
> I'm a bit baffled why the two pieces of code are not equivalent.

The bare raise re-raises the most recent exception that is being
handled. The "raise e" raises that exception specifically, which is not
the most recent in the case of a secondary exception.

Note that the exceptions are actually chained *in reverse*; the message
falsely indicates that the secondary exception was raised first, and the
primary exception was raised while handling it, e.g.:

Traceback (most recent call last):
  File "", line 5, in
TypeError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 7, in
  File "", line 2, in
ValueError

That's because "raise e" causes the exception e to be freshly raised,
whereas the bare "raise" merely makes the existing exception context
active again.

It's interesting to note here that although the exception retains its
original traceback information (note the two separate lines in the
traceback), it is not chained again from the TypeError. One might expect
to actually see the ValueError, followed by the TypeError, followed by
the ValueError in the chain. That doesn't happen because the two
ValueErrors raised are actually the same object, and Python is
apparently wise enough to break the chain to avoid an infinite cycle.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
Ian Kelly : > The bare raise re-raises the most recent exception that is being > handled. The "raise e" raises that exception specifically, which is > not the most recent in the case of a secondary exception. Scary. That affects all finally clauses. Must remember that. The pitfall is avoided by using the "except: pass" antipattern but then you lose exception chaining. Marko -- https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
On Fri, Jan 30, 2015 at 8:30 AM, Marko Rauhamaa wrote: > Ian Kelly : > >> The bare raise re-raises the most recent exception that is being >> handled. The "raise e" raises that exception specifically, which is >> not the most recent in the case of a secondary exception. > > Scary. That affects all finally clauses. Must remember that. > > The pitfall is avoided by using the "except: pass" antipattern but then > you lose exception chaining. Like I suggested earlier, just don't catch the inner exception at all. The result will be both exceptions propagated, chained in the proper order. -- https://mail.python.org/mailman/listinfo/python-list
Python tracker manual password reset
I just tried to use the password recovery tool for the Python tracker. I entered my personal email. It sent me the confirmation email with the password reset link, which I followed. It then reset my password and sent an email to a different address, an old work address that I no longer have, so I have no idea what the new password is. Is there someone I can contact to have my password manually reset? -- https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
On Sat, Jan 31, 2015 at 2:42 AM, Ian Kelly wrote: > Like I suggested earlier, just don't catch the inner exception at all. > The result will be both exceptions propagated, chained in the proper > order. So many MANY times, the best thing to do with unrecognized exceptions is simply to not catch them. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
Ian Kelly : > Like I suggested earlier, just don't catch the inner exception at all. > The result will be both exceptions propagated, chained in the proper > order. Depends on the situation. Marko -- https://mail.python.org/mailman/listinfo/python-list
how to parse sys.argv as dynamic parameters to another function?
how to parse sys.argv as dynamic parameters to another function? fun(sys.argv) in perl, this is very easy. please help. -- https://mail.python.org/mailman/listinfo/python-list
Re: how to parse sys.argv as dynamic parameters to another function?
On Fri, Jan 30, 2015 at 10:09 AM, Robert Chen wrote: > > how to parse sys.argv as dynamic parameters to another function? > > > fun(sys.argv) Not sure what you mean by "dynamic", but I think you already have it, assuming fun is a function which accepts a single list of strings as its argument. Skip -- https://mail.python.org/mailman/listinfo/python-list
Re: how to parse sys.argv as dynamic parameters to another function?
On Fri, Jan 30, 2015 at 9:09 AM, Robert Chen wrote: > how to parse sys.argv as dynamic parameters to another function? > > > fun(sys.argv) > > in perl, this is very easy. please help. Do you mean that you want each item of sys.argv to be passed as a separate parameter to the function? If so, then: fun(*sys.argv) -- https://mail.python.org/mailman/listinfo/python-list
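Ian's * unpacking works like this; a stand-in list replaces sys.argv so
the sketch is self-contained:

```python
def fun(a, b, c):
    # each positional parameter receives one list item
    return "-".join([a, b, c])

argv = ["prog", "alpha", "beta"]   # stand-in for sys.argv
print(fun(*argv))                  # same as fun("prog", "alpha", "beta")
```

By contrast, fun(argv) passes the whole list as the single first
argument, which is what the original fun(sys.argv) does.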
Re: [OT] fortran lib which provide python like data type
On Friday, January 30, 2015 at 1:03:03 PM UTC+5:30, Christian Gollwitzer wrote:
> On 30.01.15 02:40, Rustom Mody wrote:
>> FORTRAN
>>
>> use dictionary
>> type(dictionary), pointer :: d
>> d=>dict_new()
>> call set(d//'toto',1)
>> v = d//'toto'
>> call dict_free(d)
>>
>> The corresponding python
>>
>> d = dict()
>> d['toto'] = 1
>> v = d['toto']
>> del(d)
>>
>> In particular note the del in the python.
>>
>> Should highlight the point that languages with gc support data
>> structures in a way that gc-less languages - Fortran, C, C++ - do not
>> and cannot.
>
> For C++ this is not correct. Usually a garbage collector is not used -
> though possible - but the constructor/destructor/assignment op in C++
> (usually called RAII) provide semantics very similar to the CPython
> refcounting behaviour.

You may be right... I don't claim to be able to wrap my head round C++.
However...

> For example, I made a set of C++ interface methods to return nested
> dicts/lists to Python, which is far from complete, but allows one to
> write something like this:
>
> SWList do_something(SWDict attrs) {
>     SWList result;
>     for (int i=0; i<5; i++) {
>         SWDict entry;
>         entry.insert("count", i);
>         entry.insert("name", "something");
>         result.push_back(entry);
>     }
>     return result;
> }
>
> There is also Boost::Python which does the same, I think, and much
> more, but only supports Python, whereas I use SWIG to interface these
> dicts/lists to both CPython and Tcl.
>
> You cannot, however, resolve certain cyclic dependencies with pure
> reference counting.

... if I restate that in other words, it says that sufficiently complex
data structures will be beyond the reach of the standard RAII
infrastructure.

Of course this only brings up one side of memory-mgmt problems, viz.
unreclaimable memory.

What about dangling pointers? C++ apps are prone to segfault. This seems
to suggest (to me at least) that the memory-management infrastructure is
not right.

Stroustrup talks of the fact that C++ is suitable for lightweight
abstractions. In view of the segfault-proneness I'd say they are rather
leaky abstractions.

But as I said at the outset I don't understand C++.

-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Installling ADODB on an offline computer
On 01/30/2015 09:45 AM, bkl...@rksystems.com wrote: On Thursday, January 29, 2015 at 8:35:50 PM UTC-5, Alan Meyer wrote: I work on an application that uses the ActivePython compilation of Python from ActiveState. It uses three Microsoft COM libraries that are needed for talking to SQL Server. The libraries are: Microsoft Activex Data Objects Microsoft Activex Data Objects Recordset Microsoft ADO Ext In the past, we have installed those libraries on numerous machines by running makepy.py. It can be done from the ActivePython IDLE GUI, or from the command line. makepy.py downloads and installs the packages for us, taking care of COM server or client registration, or whatever it is that has to be done (I don't really know much about this stuff.) Now I've been asked to port the application to a computer that is not connected to the Internet. I haven't found any way to get those packages. I haven't found a way to, for example, download the packages to files, place the files on the target computer, and run makepy.py or a setup.py to install them. Can someone suggest a way to do it? Thank you very much. Alan I don't think makepy.py needs to download anything. It's just extracting information from the registry about COM services which are available and building a Python wrapper around those services. I've used it successfully myself on a machine completely cut off from the outside world. Cheers, Bob Well, son of a gun. Thanks Bob. Alan -- https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
On Fri, Jan 30, 2015 at 8:56 AM, Marko Rauhamaa wrote:
> Ian Kelly :
>
>> Like I suggested earlier, just don't catch the inner exception at all.
>> The result will be both exceptions propagated, chained in the proper
>> order.
>
> Depends on the situation.

Like what? If you want to specifically propagate the original exception
in order to be caught again elsewhere, then I think there's a code smell
to that. If this inner exception handler doesn't specifically know how
to handle a ValueError, then why should some outer exception handler be
able to handle an exception that could have come from virtually
anywhere?

A better approach to that would be to create a specific exception class
that narrowly identifies what went wrong, and raise *that* with the
other exceptions chained to it. E.g.:

try:
    do_interesting_stuff()
except ValueError as e:
    try:
        log_it()
    except Exception:
        # Chain both exceptions as __context__
        raise SpecificException
    else:
        # Chain the original exception as __cause__
        raise SpecificException from e

Or if you don't care about distinguishing __cause__ from __context__:

try:
    do_interesting_stuff()
except ValueError:
    try:
        log_it()
    finally:
        raise SpecificException

-- https://mail.python.org/mailman/listinfo/python-list
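[Editorial note: the chaining behaviour discussed above can be observed directly. The sketch below uses a hypothetical SpecificException and a log_it() that is assumed to fail; when the handler for the failed log raises, the log failure becomes __context__, and its own __context__ is the original ValueError.]

```python
class SpecificException(Exception):
    pass

def log_it():
    # Assumed to fail for the purpose of the demonstration.
    raise OSError("log file unavailable")

try:
    try:
        raise ValueError("bad value")
    except ValueError:
        try:
            log_it()
        except Exception:
            # Implicit chaining: the active OSError becomes __context__.
            raise SpecificException("could not handle or log")
except SpecificException as exc:
    caught = exc

# The failed log call is the implicit context of SpecificException,
# and the original ValueError is chained behind it in turn.
print(type(caught.__context__).__name__)
print(type(caught.__context__.__context__).__name__)
```

Because no `from` clause was used, `caught.__cause__` stays None; only the implicit __context__ chain records the history.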
Re: [OT] fortran lib which provide python like data type
On 01/30/2015 09:27 AM, Rustom Mody wrote:
> ... if I restate that in other words it says that sufficiently
> complex data structures will be beyond the reach of the standard
> RAII infrastructure.
>
> Of course this only brings up one side of memory-mgmt problems
> viz. unreclaimable memory.
>
> What about dangling pointers?
> C++ apps are prone to segfault. Seems to suggest
> (to me at least) that the memory-management infrastructure
> is not right.
>
> Stroustrup talks of the fact that C++ is suitable for lightweight
> abstractions. In view of the segfault-proneness I'd say they
> are rather leaky abstractions.
>
> But as I said at the outset I dont understand C++

Yes I can tell you haven't used C++. Compared to C, I've always found
memory management in C++ to be quite a lot easier. The main reason is
that C++ guarantees objects will be destroyed when going out of scope.
So when designing a class, you put any allocation routines in the
constructor, and put deallocation routines in the destructor. And it
just works. This is something I miss in other languages, even Python.

And for many things, though it's not quite as efficient, when dealing
with objects you can forgo pointers altogether and just use copy
constructors. Instead of the

blah *a = new blah_with_label("hello"); // allocate on heap
// forget to "delete a" and it leaks the heap *and* anything
// that class blah allocated on construction.

just simply declare objects directly and use them:

blah a("hello"); // assuming there's a constructor that takes a string
// deletes everything when it goes out of scope

So for the lightweight abstractions Stroustrup talks about, this works
very well. And you'll rarely have a memory leak (only in the class
itself) and no "dangling pointers."

For other things, though, you have to dynamically create objects. But
the C++ reference-counting smart pointers offer much of the same
destruction semantics as using static objects. It's really a slick
system.
Almost makes memory management a non-issue. Circular references will
still leak (just like they do in Python). But it certainly makes life a
lot more pleasant than in C from a memory management perspective.

-- https://mail.python.org/mailman/listinfo/python-list
Re: parsing tree from excel sheet
alb wrote:

> Hi Peter, I'll try to comment the code below to verify if I understood
> it correctly or missing some major parts. Comments are just below code
> with the intent to let you read the code first and my understanding
> afterwards.

Let's start with the simplest:

> Peter Otten <__pete...@web.de> wrote:
>> def show2(self):
>>     yield str(self)
>>     for child in self.children:
>>         yield from child.show2()
>
> ok, this as well requires some explanation. Kinda lost again. From what
> I can naively deduce is that it is a generator that returns the str
> defined in the node as __str__ and it shows it for the whole tree.

Given a tree

A --> A1
      A2 --> A21
             A22
      A3

assume a slightly modified show2():

def append_nodes(node, nodes):
    nodes.append(node)
    for child in node.children:
        append_nodes(child, nodes)

When you invoke this with the root node in the above sample tree and an
empty list

nodes = []
append_nodes(A, nodes)

the first thing it will do is append the root node to the nodes list

[A]

Then it iterates over A's children: append_nodes(A1, nodes) will append
A1 and return immediately because A1 itself has no children.

[A, A1]

append_nodes(A2, nodes) will append A2 and then iterate over A2's
children. As A21 and A22 don't have any children, append_nodes(A21,
nodes) and append_nodes(A22, nodes) will just append the respective node
with no further nested ("recursive") invocation, and thus the list is
now

[A, A1, A21, A22]

Finally the append_nodes(A3, nodes) will append A3 and then return
because it has no children, and we end up with

nodes = [A, A1, A21, A22, A3]

Now why the generator? For such a small problem it doesn't matter; for
large datasets it is convenient that you can process the first item
immediately, when the following ones may not yet be available.
It also becomes easier to implement different treatment of the items or
to stop in the process:

for deer in hunt():
    kill(deer)
    if have_enough_food():
        break

for animal in hunt():
    take_photograph(animal)
    if not_enough_light():
        break

Also, you never need more than one item in memory instead of the whole
list for many problems.

Ok, how to get from the recursive list building to yielding nodes as
they are encountered? The basic process is always the same:

def f(items):
    items.append(3)
    items.append(6)
    for i in range(10):
        items.append(i)

items = []
f(items)
for item in items:
    print(item)

becomes

def g():
    yield 3
    yield 6
    for i in range(10):
        yield i

for item in g():
    print(item)

Python 3.3 added some syntactic sugar so that you can write

def g():
    yield 3
    yield 6
    yield from range(10)

Thus

def append_nodes(node, nodes):
    nodes.append(node)
    for child in node.children:
        append_nodes(child, nodes)

becomes

def generate_nodes(node):
    yield node
    for child in node.children:
        yield from generate_nodes(child)

This looks a lot like show2() except that it's not a method and thus the
node is not called self, and that the node itself is yielded rather than
str(node). The latter makes the function a bit more flexible and is what
I should have done in the first place.

The show() method is basically the same, but there are varying prefixes
before the node name. Here's a simpler variant that just adds some
indentation. We start with generate_nodes() without the syntactic sugar.
This is because we need a name for the nodes yielded from the nested
generator call so that we can modify them:

def indented_nodes(node):
    yield node
    for child in node.children:
        for desc in indented_nodes(child):
            yield desc

Now let's modify the yielded nodes:

def indented_nodes(node):
    yield [node]
    for child in node.children:
        for desc in indented_nodes(child):
            yield ["***"] + desc

How does it fare on the example tree?
A --> A1
      A2 --> A21
             A22
      A3

The lists will have an "***" entry for every nesting level, so we get

[A]
["***", A1]
["***", A2]
["***", "***", A21]
["***", "***", A22]
["***", A3]

With "".join() we can print it nicely:

for item in indented_nodes(tree):
    print("".join(item))

But wait, "".join() only accepts strings, so let's change

yield [node]

to

yield [node.name]  # str(node) would also work

A
***A1
***A2
******A21
******A22
***A3

>> def show2(root):
>>     for line in root.show2():
>>         print(line)
>
> Here we implement the functions to print a node, but I'm not sure I
> understand why do I have to iterate if the main() iterates again over
> the nodes.

Your example had the structure

A
    A1
        A11
        A12
    A2

and I was unsure if there could be data files that have multiple root
nodes, e. g.

A
    A1
        A11
        A12
    A2
B
    B1
    B2

To simplify the handling of these I introduced an artificial root R

R
    A
        A1
            A11
            A12
        A2
    B
        B1
        B2
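[Editorial note: Peter's indented_nodes() walk can be made runnable with a small stand-in Node class. The Node class below is hypothetical; the thread's real tree class has more to it, but this is enough to reproduce the output above.]

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def indented_nodes(node):
    # Yield a list per node, with one "***" entry per nesting level.
    yield [node.name]
    for child in node.children:
        for desc in indented_nodes(child):
            yield ["***"] + desc

# The example tree from the post: A --> A1, A2 --> (A21, A22), A3
tree = Node("A", [
    Node("A1"),
    Node("A2", [Node("A21"), Node("A22")]),
    Node("A3"),
])

lines = ["".join(item) for item in indented_nodes(tree)]
for line in lines:
    print(line)
```

Running it prints A, ***A1, ***A2, ******A21, ******A22, ***A3 in that order, matching the walkthrough.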
Re: parsing tree from excel sheet
Peter Otten wrote: > [A, A1, A21, A22] > > Finally the append_nodes(A3, nodes) will append A3 and then return because > it has no children, and we end up with > > nodes = [A, A1, A21, A22, A3] Yay, proofreading! Both lists should contain A2: [A, A1, A2, A21, A22] nodes = [A, A1, A2, A21, A22, A3] -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
On Friday, January 30, 2015 at 10:39:12 PM UTC+5:30, Michael Torrie wrote:
> On 01/30/2015 09:27 AM, Rustom Mody wrote:
> > ... if I restate that in other words it says that sufficiently
> > complex data structures will be beyond the reach of the standard
> > RAII infrastructure.
> >
> > Of course this only brings up one side of memory-mgmt problems
> > viz. unreclaimable memory.
> >
> > What about dangling pointers?
> > C++ apps are prone to segfault. Seems to suggest
> > (to me at least) that the memory-management infrastructure
> > is not right.
> >
> > Stroustrup talks of the fact that C++ is suitable for lightweight
> > abstractions. In view of the segfault-proneness I'd say they
> > are rather leaky abstractions.
> >
> > But as I said at the outset I dont understand C++
>
> Yes I can tell you haven't used C++. Compared to C, I've always found
> memory management in C++ to be quite a lot easier. The main reason is
> that C++ guarantees objects will be destroyed when going out of scope.

I hear you and I trust you as a gentleman but I don't trust C++ :-)

The only time in some near 15 years of python use that I got it to
segfault was when Ranting Rick gave some wx code to try [at that time he
was in rant-against-tk mode]. Sure enough it was related to the fact
that wx is written in C++ and some expectations were not being followed.

> So when designing a class, you put any allocation routines in the
> constructor, and put deallocation routines in the destructor. And it
> just works. This is something I miss in other languages, even Python.
>
> And for many things, though it's not quite as efficient, when dealing
> with objects you can forgo pointers altogether and just use copy
> constructors, instead of the
>
> blah *a = new blah_with_label("hello") //allocate on heap
> //forget to "delete a" and it leaks the heap *and* anything
> //that class blah allocated on construction.
>
> just simply declare objects directly and use them:
>
> blah a("hello") //assuming there's a constructor that takes a string
> //deletes everything when it goes out of scope
>
> So for the lightweight abstractions Stroustrup talks about, this works
> very well. And you'll rarely have a memory leak (only in the class
> itself) and no "dangling pointers."

And what about the grey area between lightweight and heavyweight?

You say just use copy constructors and no pointers.
Can you (i.e. C++) guarantee that no pointer is ever copied out of
scope of these copy-constructed objects?

> For other things, though, you have to dynamically create objects. But
> the C++ reference-counting smart pointers offer much of the same
> destruction semantics as using static objects. It's really a slick
> system. Almost makes memory management a non-issue. Circular
> references will still leak (just like they do on Python). But it
> certainly makes life a lot more pleasant than in C from a memory
> management perspective.

-- https://mail.python.org/mailman/listinfo/python-list
RAII vs gc (was fortran lib which provide python like data type)
On Friday, January 30, 2015 at 11:01:50 PM UTC+5:30, Rustom Mody wrote: > On Friday, January 30, 2015 at 10:39:12 PM UTC+5:30, Michael Torrie wrote: > > On 01/30/2015 09:27 AM, Rustom Mody wrote: > > > ... if I restate that in other words it says that sufficiently > > > complex data structures will be beyond the reach of the standard > > > RAII infrastructure. > > > > > > Of course this only brings up one side of memory-mgmt problems > > > viz. unreclaimable memory. > > > > > > What about dangling pointers? > > > C++ apps are prone to segfault. Seems to suggest > > > (to me at least) that the memory-management infrastructure > > > is not right. > > > > > > Stroustrup talks of the fact that C++ is suitable for lightweight > > > abstractions. In view of the segfault-proneness I'd say they > > > are rather leaky abstractions. > > > > > > But as I said at the outset I dont understand C++ > > > > Yes I can tell you haven't used C++. Compared to C, I've always found > > memory management in C++ to be quite a lot easier. The main reason is > > that C++ guarantees objects will be destroyed when going out of scope. > > I hear you and I trust you as a gentleman but I dont trust C++ :-) > > The only time in some near 15 years of python use that > I got it to segfault was when Ranting Rick gave some wx code to try > [at that time he was in rant-against-tk mode] > Sure enough it was related to the fact that wx is written in C++ > and some expectations were not being followed. > > > So when designing a class, you put any allocation routines in the > > constructor, and put deallocation routines in the destructor. And it > > just works. This is something I miss in other languages, even Python. 
> > > > And for many things, though it's not quite as efficient, when dealing > > with objects you can forgo pointers altogether and just use copy > > constructors, instead of the > > > > blah *a = new blah_with_label("hello") //allocate on heap > > //forget to "delete a" and it leaks the heap *and* anything > > //that class blah allocated on construction. > > > > just simply declare objects directly and use them: > > > > blah a("hello") //assuming there's a constructor that takes a string > > //deletes everything when it goes out of scope > > > > So for the lightweight abstractions Stroustrup talks about, this works > > very well. And you'll rarely have a memory leak (only in the class > > itself) and no "dangling pointers." > > And what about the grey area between lightweight and heavyweight? > > You say just use copy constructors and no pointers. > Can you (ie C++) guarantee that no pointer is ever copied out of > scope of these copy-constructed objects? > > > > > For other things, though, you have to dynamically create objects. But > > the C++ reference-counting smart pointers offer much of the same > > destruction semantics as using static objects. It's really a slick > > system. Almost makes memory management a non-issue. Circular > > references will still leak (just like they do on Python). But it > > certainly makes life a lot more pleasant than in C from a memory > > management perspective. The case of RAII vs gc is hardly conclusive: http://stackoverflow.com/questions/228620/garbage-collection-in-c-why -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
On 01/30/2015 10:31 AM, Rustom Mody wrote:
> And what about the grey area between lightweight and heavyweight?

That's what the smart pointers are for.

> You say just use copy constructors and no pointers.
> Can you (ie C++) guarantee that no pointer is ever copied out of
> scope of these copy-constructed objects?

If that happened, then it's because you the programmer wanted it to
happen. It's not just going to happen all by itself. Yes, anytime
pointers are allowed, things are potentially unsafe in the hands of a
programmer. I'm just saying it's not nearly so bad as you make it out
to be. Follow basic rules and 99% of segfaults will never happen, and
the majority of leaks will not happen either.

Python can still leak badly if a programmer causes it to. As for
segfaulting, no, your Python code should not itself segfault. C++ code
certainly could. Exposing pointers to the programmer can be very
powerful (and necessary... cannot write a bare-metal OS in common
Python) but the programmer can screw it up too on occasion.

-- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
Michael Torrie writes: > Follow basic [C++] rules and 99% of segfaults will never happen and > the majority of leaks will not happen either. That is a safe and simple approach, but it works by copying data all over the place instead of passing pointers, resulting in performance loss. Alex Martelli used to post a good riff here about how the main reason to use C++ in the first place was for when you needed to explicitly control resources for performance. So the "data copying" style seems to somewhat miss the point. Smart pointers have similar issues to Python's reference-counted allocation, e.g. cache and thread unfriendliness, and no compaction mechanism AFAIK. Plus I always found them scary in terms of subtle bug potential due to abstraction leaks. But I haven't used them so far. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python is DOOMED! Again!
In article <54ca5bbf$0$12992$c3e8da3$54964...@news.astraweb.com>, steve+comp.lang.pyt...@pearwood.info says... > > Why should I feel guilty? You wrote: > > > "Static analysis cannot and should not clutter executable code." > > > But what are type declarations in statically typed languages like C, Pascal, > Haskell, etc.? They are used by the compiler for static analysis. The same > applies to type declarations in dynamically typed languages like Cobra and > Julia. And yet, there they are, in the executable code. > > So there are a whole lot of languages, going all the way back to 1950s > languages like Fortran, to some of the newest languages which are only a > few years old like Go, both dynamically typed and statically typed, which > do exactly what you say languages "cannot and should not" do: they put type > information used for static analysis there in the code. You are confusing static analysis with compile time checking which produces side-effects like implicit conversion for instance and that affects the resulting binary code. Something that Python won't do with type annotations. And something that Julia, Scala or C does. This is also the first time I hear compilation time mentioned as static analysis. To be clear, type declarations in Julia, Scala, C have the potential to produce side-effects, can result in optimized code and can result in compile time errors or warnings. Type annotations in Python are instead completely ignored by the interpreter. They do nothing of the above. They do not participate in code execution. > As I said, these languages disagree with you. You are not just arguing > against Guido, but against the majority of programming language designers > for 60+ years. You are right. I'm not arguing against Guido. I have yet to hear his opinion on your or mine arguments. I'm not arguing against the majority of programming languages either, because they agree with me. I'm arguing with you. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python is DOOMED! Again!
In article <54ca5bbf$0$12992$c3e8da3$54964...@news.astraweb.com>, steve+comp.lang.pyt...@pearwood.info says... > > > Why should I feel guilty? You wrote: > > > "Static analysis cannot and should not clutter executable code." > > > But what are type declarations in statically typed languages like C, Pascal, > Haskell, etc.? They are used by the compiler for static analysis. The same > applies to type declarations in dynamically typed languages like Cobra and > Julia. And yet, there they are, in the executable code. > > So there are a whole lot of languages, going all the way back to 1950s > languages like Fortran, to some of the newest languages which are only a > few years old like Go, both dynamically typed and statically typed, which > do exactly what you say languages "cannot and should not" do: they put type > information used for static analysis there in the code. (Sorry if I'm late...) You are comparing static analysis with compile time checking which can result in implicit conversions and that can affect the resulting binary code. Something that Python won't do with type annotations. And something that Julia, Scala or C does. This is also the first time I hear compilation mentioned as static analysis. But I suppose... After all it does perform a crude form of static analysis as a natural consequence of compile time checks, besides doing a whole bunch of other things that aren't static analysis. A dog has four legs and two eyes. So does an elephant. I suppose you are going to argue with me that a dog is an elephant after all. To be clear, type declarations in Julia, Scala, C have the potential to produce side-effects, can result in optimized code and can result in compile time errors or warnings. They also affect runtime evaluation as you could easily attest if you input a float into a function expecting an int, whereas in Python the float will be gladly accepted and will only fail at the point in code where its interface won't match the statement. 
Meanwhile, type annotations in Python are instead completely ignored by the interpreter. They do nothing of the above. They do not participate in code generation and execution. > As I said, these languages disagree with you. You are not just arguing > against Guido, but against the majority of programming language designers > for 60+ years. You are right. I'm not arguing against Guido. I have yet to hear his opinion on what you are saying, so I don't even know if I should want to argue with him. And I'm not arguing against the majority of programming languages either, because as much as you try I have yet to hear an argument from you that convinces me they don't agree with me. No. I'm arguing with you. -- https://mail.python.org/mailman/listinfo/python-list
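[Editorial note: the claim above that the interpreter ignores annotations is easy to check. A minimal sketch: the annotations are stored on the function object but nothing enforces them at call time.]

```python
def add(x: int, y: int) -> int:
    return x + y

# The annotations are recorded on the function object ...
print(add.__annotations__)

# ... but nothing enforces them: floats are gladly accepted,
# exactly as described above.
print(add(1.5, 2.5))
```

A static checker can read `__annotations__` and complain, but the running interpreter never looks at them.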
Multiplexing 2 streams with asyncio
I'm trying to get to grips with asyncio. I *think* it's a reasonable fit
for my problem, but I'm not really sure - so if the answer is "you
shouldn't be doing that", then that's fair enough :-)

What I am trying to do is, given 2 files (the stdout and stderr from a
subprocess.Popen object, as it happens), I want to wait on data coming
in from either file, and write it to the appropriate one of sys.stdout
or sys.stderr, process the data somehow, and then go back to waiting.

Sorry if that description's a little vague. Basically I'm trying to
write a version of Popen.communicate() that echoes the output to the
standard streams *as well* as capturing it.

This would normally be handled on Unix with something like a select
loop, and with threads on Windows. But is it something I could use an
asyncio event loop for? It seems like I could set up reader callbacks on
the 2 streams, and then run the event loop, but I can't work out from
the documentation how to do this. Can anyone help with some pointers?

The reasons I want to try this rather than use the traditional approach
are:

1. To learn about asyncio, which looks really cool but I've no idea how
   to start with even something simple using it :-(
2. Because on Windows I'd have to use threads, whereas asyncio uses IO
   completion ports behind the scenes (I think) which are probably a lot
   more lightweight.

I know there's a Process abstraction in asyncio, but I don't want to use
that directly, because I need my code to be interoperable with existing
code that uses Popen.

Thanks for any help.
Paul

-- https://mail.python.org/mailman/listinfo/python-list
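[Editorial note: a rough sketch of the shape this could take. It uses asyncio's own subprocess support rather than wrapping an existing Popen, so it sidesteps rather than answers the interoperability constraint; the helper names tee() and run_and_echo() are made up for the example.]

```python
import asyncio
import sys

async def tee(reader, out_file, chunks):
    # Echo each chunk to the given file as it arrives, and capture it.
    while True:
        data = await reader.read(1024)
        if not data:
            break
        out_file.write(data.decode())
        chunks.append(data)

async def run_and_echo(*cmd):
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = [], []
    # Multiplex: drain stdout and stderr concurrently, like a select loop.
    await asyncio.gather(
        tee(proc.stdout, sys.stdout, out),
        tee(proc.stderr, sys.stderr, err),
    )
    await proc.wait()
    return b"".join(out), b"".join(err)

out, err = asyncio.run(run_and_echo(
    sys.executable, "-c",
    "import sys; sys.stdout.write('out'); sys.stderr.write('err')"))
```

The event loop interleaves the two tee() coroutines, so output is echoed as it arrives on either stream while still being captured, which is the communicate()-with-echo behaviour described above.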
Re: Python is DOOMED! Again!
In article , breamore...@yahoo.co.uk says...
>
> No, they're not always weakly typed. The aim of the spreadsheet put up
> by Skip was to sort out (roughly) which languages belong in which camp.
> I do not regard myself as suitably qualified to fill the thing out.
> Perhaps by now others have?

It would help if, instead of a weakly typed or strongly typed box, they
could be classified comparatively to each other. The terms are better
suited to describe two languages as they stand to each other.

Weakly Typed --> Strongly Typed

From C to Lisp (arguably), with languages closer to each other
indicating a more similar type system model.

-- https://mail.python.org/mailman/listinfo/python-list
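[Editorial note: as a point of comparison for the spectrum above, Python sits toward the "strongly typed" end in that mixing representations raises rather than silently coercing. A quick illustration:]

```python
# C would quietly convert or reinterpret here; Python refuses.
try:
    result = "1" + 1
except TypeError as exc:
    result = f"refused: {type(exc).__name__}"

print(result)

# Explicit conversion is the only way across the type boundary.
print(int("1") + 1)
```

This is independent of static vs dynamic typing: Python checks at runtime, but it still checks.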
Re: Python is DOOMED! Again!
On Fri, Jan 30, 2015 at 12:50 PM, Mario Figueiredo wrote: > It would help that if instead of weakly typed or strongly typed box, > they could be classified comparatively to each other. The terms are > better suited to describe two languages as they stand to each other. > > Weakly Typed --> Strongly Typed > The spreadsheet should be world-writable. Knock yourself out. :-) http://preview.tinyurl.com/kcrcq4y Skip -- https://mail.python.org/mailman/listinfo/python-list
Re: how to parse sys.argv as dynamic parameters to another function?
On 1/30/15 11:28 AM, Ian Kelly wrote: On Fri, Jan 30, 2015 at 9:09 AM, Robert Chen wrote: how to parse sys.argv as dynamic parameters to another function? fun(sys.argv) in perl, this is very easy. please help. Do you mean that you want each item of sys.argv to be passed as a separate parameter to the function? If so, then: fun(*sys.argv) Robert, this will work, but keep in mind that sys.argv is a list of strings, always. If your function is expecting integers, you will have to do an explicit conversion somewhere. -- Ned Batchelder, http://nedbatchelder.com -- https://mail.python.org/mailman/listinfo/python-list
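[Editorial note: a minimal sketch of both points above. The fun() here is hypothetical, and a literal list stands in for sys.argv so the example is self-contained.]

```python
def fun(word, count):
    # Expects a str and an int; sys.argv only ever provides str.
    return word * count

argv = ["prog.py", "ab", "3"]  # stand-in for sys.argv

# Unpacking passes each item as a separate argument, but every
# item is still a string, so "ab" * "3" fails:
try:
    fun(*argv[1:])
except TypeError:
    pass  # can't multiply sequence by non-int

# The explicit conversion Ned mentions fixes it:
result = fun(argv[1], int(argv[2]))
print(result)
```

With a real script you would use `sys.argv` directly in place of the `argv` list.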
Re: Multiplexing 2 streams with asyncio
On Fri, Jan 30, 2015 at 11:45 AM, Paul Moore wrote: > 2. Because on Windows I'd have to use threads, whereas asyncio uses IO > completion ports behind the scenes (I think) which are probably a lot more > lightweight. I have no idea whether that's true, but note that add_reader() on Windows doesn't work with pipes: https://docs.python.org/3/library/asyncio-eventloops.html#windows -- https://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing module backport from 3 to 2.7 - spawn feature
Skip Montanaro wrote: > Can you explain what you see as the difference between "spawn" and "fork" > in this context? Are you using Windows perhaps? I don't know anything > obviously different between the two terms on Unix systems. spawn is fork + exec. Only a handful of POSIX functions are required to be "fork safe", i.e. callable on each side of a fork without an exec. An example of an API which is not safe to use on both sides of a fork is Apple's GCD. The default builds of NumPy and SciPy depend on it on OSX because it is used in Accelerate Framework. You can thus get problems if you use numpy.dot in a process started with multiprocessing. What will happen is that the call to numpy.dot never returns, given that you called any BLAS or LAPACK function at least once before the instance of multiprocessing.Process was started. This is not a bug in NumPy or in Accelerate Framework, it is a bug in multiprocessing because it assumes that BLAS is fork safe. The correct way of doing this is to start processes with spawn (fork + exec), which multiprocessing does on Python 3.4. Sturla -- https://mail.python.org/mailman/listinfo/python-list
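[Editorial note: the start method Sturla refers to is selectable through multiprocessing's context API from Python 3.4 on. A small sketch of the inspection side, without actually starting a worker:]

```python
import multiprocessing as mp

# "spawn" (fork + exec) is always available; "fork" and "forkserver"
# only on Unix. On Windows, "spawn" is the only choice.
methods = mp.get_all_start_methods()
print(methods)

# A context binds Pool/Process/etc. to one start method without
# changing the global default.
ctx = mp.get_context("spawn")
print(ctx.get_start_method())
```

Code that must avoid fork-unsafe libraries (like the BLAS case described above) can pass such a context around instead of relying on the platform default.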
Re: multiprocessing module backport from 3 to 2.7 - spawn feature
Andres Riancho wrote: > Spawn, and I took that from the multiprocessing 3 documentation, will > create a new process without using fork(). > This means that no memory > is shared between the MainProcess and the spawn'ed sub-process created > by multiprocessing. If you memory map a segment with MAP_SHARED it will be shared, even after a spawn. File descriptors are also shared. -- https://mail.python.org/mailman/listinfo/python-list
Re: RAII vs gc (was fortran lib which provide python like data type)
Rustom Mody wrote:

> The case of RAII vs gc is hardly conclusive:
>
> http://stackoverflow.com/questions/228620/garbage-collection-in-c-why

The purpose of RAII is not to be an alternative to garbage collection
(which those answers imply), but to ensure deterministic execution of
setup and tear-down code. The Python equivalent of RAII is not garbage
collection but context managers. Those answers are a testimony to how
little the majority of C++ users actually understand about the language.

A C++ statement with RAII like

{
    Foo bar;
    // suite
}

is not equivalent to

bar = Foo()

in Python. It actually corresponds to

with Foo() as bar:
    # suite

Sturla

-- https://mail.python.org/mailman/listinfo/python-list
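[Editorial note: the correspondence Sturla draws can be sketched with a minimal, hypothetical Foo. __enter__ plays the role of the C++ constructor and __exit__ the destructor, and the teardown runs deterministically when the suite ends.]

```python
events = []

class Foo:
    def __enter__(self):
        events.append("setup")     # like the C++ constructor
        return self

    def __exit__(self, exc_type, exc, tb):
        events.append("teardown")  # like the C++ destructor: runs
        return False               # deterministically, even on error

with Foo() as bar:
    events.append("suite")

print(events)
```

Unlike `bar = Foo()`, whose cleanup depends on when the object is collected, the `with` block guarantees the teardown fires at the closing of the suite, which is exactly what RAII guarantees at the closing brace.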
Re: Python is DOOMED! Again!
On Fri, Jan 30, 2015 at 11:42 AM, Mario Figueiredo wrote:
> To be clear, type declarations in Julia, Scala, C have the potential to
> produce side-effects, can result in optimized code and can result in
> compile time errors or warnings. They also affect runtime evaluation as
> you could easily attest if you input a float into a function expecting
> an int, whereas in Python the float will be gladly accepted and will
> only fail at the point in code where its interface won't match the
> statement.

At least for C, as I noted in a previous post, it is simply not true
that they are used for runtime evaluation. For example:

>>> import ctypes
>>> libc = ctypes.CDLL("libc.so.6")
>>> libc.abs(ctypes.c_double(123.456))
2093824448

The C compiler may complain about it, but that's a compile-time static
check, no different from the sort of checks that PEP 484 seeks to add
to Python.

> Meanwhile, type annotations in Python are instead completely ignored by
> the interpreter. They do nothing of the above. They do not participate
> in code generation and execution.

But unlike C, Python lets you easily implement this yourself if you
want to.

>>> import functools
>>> def runtime_type_check(f):
...     @functools.wraps(f)
...     def wrapper(**args):
...         for arg, value in args.items():
...             if arg in f.__annotations__:
...                 if not isinstance(value, f.__annotations__[arg]):
...                     raise TypeError("Arg %s expected %s, got %s"
...                         % (arg, f.__annotations__[arg].__name__,
...                            type(value).__name__))
...         return f(**args)
...     return wrapper
...
>>> @runtime_type_check
... def add(x:int, y:int) -> int:
...     return x + y
...
>>> add(x="hello", y="world")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in wrapper
TypeError: Arg y expected int, got str

(This could of course be extended for positional arguments and more
complex annotations, but I wanted to keep it simple for the purpose of
the example.)

-- https://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing module backport from 3 to 2.7 - spawn feature
Sturla Molden : > Only a handful of POSIX functions are required to be "fork safe", i.e. > callable on each side of a fork without an exec. That is a pretty surprising statement. Forking without an exec is a routine way to do multiprocessing. I understand there are things to consider, but all system calls are available and safe. Marko -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
Michael Torrie wrote: On 01/30/2015 10:31 AM, Rustom Mody wrote: And what about the grey area between lightweight and heavyweight? That's what the smart pointers are for. I'd say it's what higher-level languages are for. :-) I'm completely convinced nowadays that there is *no* use case for C++. If you need to program the bare metal, use C. For anything more complicated, use a language that has proper memory-management abstractions built in. -- Greg -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
Michael Torrie wrote: > Yes I can tell you haven't used C++. Compared to C, I've always found > memory management in C++ to be quite a lot easier. The main reason is > that C++ guarantees objects will be destroyed when going out of scope. > So when designing a class, you put any allocation routines in the > constructor, and put deallocation routines in the destructor. And it > just works. This is something I miss in other languages, even Python. Python has context managers for that. -- https://mail.python.org/mailman/listinfo/python-list
Re: An object is an instance (or not)?
Gregory Ewing wrote:

> Steven D'Aprano wrote:
>> Actually, if you look at my example, you will see that it is a method
>> and it does get the self argument. Here is the critical code again:
>>
>> from types import MethodType
>> polly.talk = MethodType(
>>     lambda self: print("Polly wants a spam sandwich!"), polly)
>
> Doing it by hand is cheating.

Smile when you say that, pardner :-)

I don't see why you think I'm cheating. What rule do you think is being
broken? I want to add a method to the instance itself, so I create a
method object and put it on the instance. What should I have done?

The default metaclass ("type") only applies the descriptor protocol to
attributes retrieved from the class itself, not those retrieved from
the instance. For instance, you can add a property object onto the
instance, but it won't behave as a property:

py> class K(object):
...     pass
...
py> x = K()
py> x.spam = property(lambda self: 23)
py> x.spam
<property object at 0x...>

But if I add it to the class, the descriptor magic happens:

py> K.eggs = x.spam
py> x.eggs
23

Functions are descriptors, just like property objects! So an
alternative to manually making a method object would be to use a
metaclass that extended the descriptor protocol to instance attributes.
I suppose you would call that "cheating" too?

But all of this is a side-show that distracts from my point, which is
that the lookup rules for instances and classes are such that you can
override behaviour defined in the class on a per-instance basis. The
mechanics of such aren't really relevant.

>> That's certainly not correct, because Python had classes and
>> instances before it had descriptors!
>
> Before the descriptor protocol, a subset of its functionality
> was hard-wired into the interpreter. There has always been
> some magic going on across the instance-class boundary that
> doesn't occur across the class-baseclass boundary.

Yes, but that magic is effectively "implementation, not interface".
I put that in scare quotes because in actual pedantic fact the
descriptor protocol is an interface: we can write our own custom
descriptors. I don't dispute that's important. But from a bird's-eye
view, looking at just method and regular attribute access, the
relationship between instance-class and class-baseclass is very
similar, as is that between class-metaclass, and descriptor magic is
merely part of the implementation that makes things work.

I daresay you are right that there are a few places where the
interpreter treats classes as a distinct and different kind of thing
from non-class instances. One obvious place is that dunder methods are
not looked up on the instance, unlike pretty much everything else. But
I did preface my comments about attribute lookup order as being
simplified, so you can probably find a lot more to criticise if you
wish :-)

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
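The per-instance override technique discussed above, as a
self-contained script (the `Parrot` class body is a reconstruction for
illustration, not Steven's original code):

```python
from types import MethodType

class Parrot:
    def talk(self):
        return "Polly wants a cracker!"

polly = Parrot()
manuel = Parrot()

# Override on one instance only, by binding a new method object to it.
# Instance attributes shadow (non-data-descriptor) class attributes,
# so polly's own dict wins the lookup.
polly.talk = MethodType(lambda self: "Polly wants a spam sandwich!", polly)

print(polly.talk())    # the per-instance method
print(manuel.talk())   # other instances still use the class's method
```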
Re: [OT] fortran lib which provide python like data type
Gregory Ewing wrote: > I'm completely convinced nowadays that there is > no use case for C++. I can think of one use-case for C++. You walk up to somebody in the street, say "I wrote my own C++ parser!", and while they are gibbering and shaking in shock, you rifle through their pockets and steal any valuables you find. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
Michael Torrie wrote:

> If that happened, then it's because you the programmer wanted it to
> happen. It's not just going to happen all by itself. Yes, anytime
> pointers are allowed, things are potentially unsafe in the hands of a
> programmer. I'm just saying it's not nearly so bad as you make it out
> to be. Follow basic rules and 99% of segfaults will never happen and
> the majority of leaks will not happen either.

Oh great. So if the average application creates a hundred thousand
pointers over the course of a session, you'll only have a thousand or
so seg faults and leaks.

Well, that certainly explains this:

https://access.redhat.com/articles/1332213

Manual low-level pointer manipulation is an anti-pattern. What you
glibly describe as programmers following "basic rules" has proven to be
beyond the ability of the programming community as a whole.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Python tracker manual password reset
Ian Kelly wrote:

> I just tried to use the password recovery tool for the Python tracker.
> I entered my personal email. It sent me the confirmation email with
> the password reset link, which I followed. It then reset my password
> and sent an email to a different address, an old work address that I
> no longer have, so I have no idea what the new password is.

o_O

If you are right in your diagnosis, that's an impressive bug. Why does
the bug tracker even remember old email addresses?

> Is there someone I can contact to have my password manually reset?

Have you reported this on the tracker's meta tracker?

http://psf.upfronthosting.co.za/roundup/meta

If you are subscribed to python-dev, you could ask for help there (with
an appropriate apology-in-advance if it's the wrong place). If you're
not subscribed, let me know and I can do so for you.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: The Most Diabolical Python Antipattern
On 30/01/2015 08:10, Mark Lawrence wrote:
> On 30/01/2015 06:16, Marko Rauhamaa wrote:
>> Ian Kelly :
>>> At least use "except Exception" instead of a bare except. Do you
>>> really want things like SystemExit and KeyboardInterrupt to get
>>> turned into 0?
>>
>> How about:
>>
>> ==
>> try:
>>     do_interesting_stuff()
>> except ValueError:
>>     try:
>>         log_it()
>>     except:
>>         pass
>>     raise
>> ==
>>
>> Surprisingly this variant could raise an unexpected exception:
>>
>> ==
>> try:
>>     do_interesting_stuff()
>> except ValueError:
>>     try:
>>         log_it()
>>     finally:
>>         raise
>> ==
>>
>> A Python bug?
>>
>> Marko
>
> It depends on the Python version that you're running - I think!!! See
> https://www.python.org/dev/peps/pep-3134/
> https://www.python.org/dev/peps/pep-0409/
> https://www.python.org/dev/peps/pep-0415/
> and finally try (groan :)
> https://pypi.python.org/pypi/pep3134/

http://bugs.python.org/issue23353 looks like fun and references PEP 3134
for anybody who's interested.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence
-- 
https://mail.python.org/mailman/listinfo/python-list
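Following Ian's advice, Marko's first variant with `except Exception`
instead of a bare `except:` behaves predictably even when the logging
call itself blows up. A runnable sketch (the function bodies here are
stand-ins invented to simulate the failure):

```python
def do_interesting_stuff():
    raise ValueError("boom")

def log_it():
    # Simulate a logging call that itself fails.
    raise NameError("logger is misconfigured")

def careful():
    try:
        do_interesting_stuff()
    except ValueError:
        try:
            log_it()
        except Exception:   # narrower than a bare "except:" --
            pass            # SystemExit/KeyboardInterrupt still propagate
        raise               # re-raise the original ValueError

try:
    careful()
except ValueError as e:
    print(type(e).__name__, e)   # the ValueError survives the broken logger
```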
Re: multiprocessing module backport from 3 to 2.7 - spawn feature
On 30/01/15 23:25, Marko Rauhamaa wrote:
> Sturla Molden :
>> Only a handful of POSIX functions are required to be "fork safe",
>> i.e. callable on each side of a fork without an exec.
>
> That is a pretty surprising statement. Forking without an exec is a
> routine way to do multiprocessing. I understand there are things to
> consider, but all system calls are available and safe.

POSIX says this:

- No asynchronous input or asynchronous output operations shall be
  inherited by the child process.

- A process shall be created with a single thread. If a multi-threaded
  process calls fork(), the new process shall contain a replica of the
  calling thread and its entire address space, possibly including the
  states of mutexes and other resources. Consequently, to avoid errors,
  the child process may only execute async-signal-safe operations until
  such time as one of the exec functions is called.

- Fork handlers may be established by means of the pthread_atfork()
  function in order to maintain application invariants across fork()
  calls.

- When the application calls fork() from a signal handler and any of
  the fork handlers registered by pthread_atfork() calls a function
  that is not async-signal-safe, the behavior is undefined.

Hence you must be very careful about which functions you use after
forking, before you have called exec. Generally, never use an API above
POSIX, e.g. BLAS or Apple's CoreFoundation. Apple said this when the
problem with multiprocessing and the Accelerate Framework was first
discovered:

-- Forwarded message --
From:
Date: 2012/8/2
Subject: Bug ID 11036478: Segfault when calling dgemm with Accelerate /
GCD after in a forked process
To: **@**

Hi Olivier,

Thank you for contacting us regarding Bug ID# 11036478. Thank you for
filing this bug report.

This usage of fork() is not supported on our platform. For API outside
of POSIX, including GCD and technologies like Accelerate, we do not
support usage on both sides of a fork().
For this reason among others, use of fork() without exec is discouraged in general in processes that use layers above POSIX. We recommend that you either restrict usage of blas to the parent or the child process but not both, or that you switch to using GCD or pthreads rather than forking to create parallelism. Also see this: http://bugs.python.org/issue8713 https://mail.python.org/pipermail/python-ideas/2012-November/017930.html Sturla -- https://mail.python.org/mailman/listinfo/python-list
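The thread's subject, the spawn start method, is the stdlib's answer to
this on Python 3.4+: instead of fork(), a fresh interpreter is started
and the worker module is re-imported, sidestepping fork-safety problems
with threads and non-POSIX libraries such as Accelerate. A minimal
sketch (the worker function is illustrative):

```python
import multiprocessing as mp

def work(x):
    return x * x

if __name__ == "__main__":
    # get_context avoids changing the global default start method.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))
```

The `if __name__ == "__main__"` guard is mandatory with spawn: the
child re-imports the main module, and without the guard it would try to
start workers of its own.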
Create dictionary based of x items per key from two lists
I have two lists:

l1 = ["a","b","c","d","e","f","g","h","i","j"]
l2 = ["aR","bR","cR"]

l2 will always be smaller than or equal to l1.

numL1PerL2 = len(l1)/len(l2)

I want to create a dictionary that has keys from l1 and values from l2,
based on numL1PerL2. So:

{
 a:aR,
 b:aR,
 c:aR,
 d:bR,
 e:bR,
 f:bR,
 g:cR,
 h:cR,
 i:cR,
 j:cR
}

So the last item from l2 is the value for the remaining items from l1.
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Create dictionary based of x items per key from two lists
On Sat, Jan 31, 2015 at 1:27 PM, wrote:
> l1 = ["a","b","c","d","e","f","g","h","i","j"]
> l2 = ["aR","bR","cR"]
>
> l2 will always be smaller or equal to l1
>
> numL1PerL2 = len(l1)/len(l2)
>
> I want to create a dictionary that has key from l1 and value from l2
> based on numL1PerL2
>
> So last item from l2 is key for remaining items from l1

So the Nth element of l1 will always be paired with the
(N/numL1PerL2)th element of l2 (with the check at the end)? Seems easy
enough.

dups = len(l1)/len(l2)
l2.append(l2[-1])
result = {x:l2[i/dups] for i,x in enumerate(l1)}

This mutates l2 for convenience, but you could also adjust the index to
take care of the excess. As a one-liner:

result = {x:l2[min(i/(len(l1)/len(l2)),len(l2)-1)] for i,x in enumerate(l1)}

But the one-liner is not better code :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
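Note that the snippets above rely on Python 2's integer division; on
Python 3, `/` returns a float and the index lookups fail with
`TypeError`. The non-mutating version with floor division, for Python 3
(same lists as in the question):

```python
l1 = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
l2 = ["aR", "bR", "cR"]

# // is floor division; min() clamps the leftover l1 items
# onto the last element of l2.
dups = len(l1) // len(l2)
result = {x: l2[min(i // dups, len(l2) - 1)] for i, x in enumerate(l1)}
print(result)
```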
Re: Sort of Augmented Reality
On 01/29/2015 06:55 PM, Rustom Mody wrote:
[snip...]
> Like smelly cheese and classical music, math is an acquired taste.
> Actually enjoyable once you get past the initiation

This comment is OT, irrelevant and only about myself... I found the
appreciation of classical music instinctive and immediate as soon as I
started listening to it (in my mid to late teens -- I'm now 77). OTOH,
I still don't care for opera... ;-)

-=- Larry -=-
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
On 01/30/2015 04:12 PM, Sturla Molden wrote: > Michael Torrie wrote: > >> Yes I can tell you haven't used C++. Compared to C, I've always found >> memory management in C++ to be quite a lot easier. The main reason is >> that C++ guarantees objects will be destroyed when going out of scope. >> So when designing a class, you put any allocation routines in the >> constructor, and put deallocation routines in the destructor. And it >> just works. This is something I miss in other languages, even Python. > > Python has context managers for that. Right I had forgotten about that. That's a good solution for dynamic, GC languages. -- https://mail.python.org/mailman/listinfo/python-list
Re: [OT] fortran lib which provide python like data type
On 01/30/2015 04:50 PM, Steven D'Aprano wrote:
> Oh great. So if the average application creates a hundred thousand
> pointers over the course of a session, you'll only have a thousand or
> so seg faults and leaks.
>
> Well, that certainly explains this:
>
> https://access.redhat.com/articles/1332213

I fail to see the connection. Glibc is a low-level library written in
C, not C++. By its nature it requires a lot of pointer use, and is
prone to having errors. But not that many, seeing as *all* Linux
software depends on it and uses at least part of it *all* the time.
Pretty remarkable if you ask me. Wonder how they do it. Perhaps they
try to follow "basic rules."

> Manual low-level pointer manipulation is an anti-pattern. What you
> glibly describe as programmers following "basic rules" has proven to
> be beyond the ability of the programming community as a whole.

I don't see how you would write system code without this "anti-pattern"
as you describe it. Python is a great language for everything else, but
I certainly wouldn't call it a system language. You couldn't write a
kernel in it without providing it with some sort of unsafe memory
access (pointers!). Or even write a Python interpreter (yes there's
PyPy, but with a JIT it's still working with pointers).

What I glibly call "basic rules" is in fact shown to more or less work
out, as glibc proves. Pointer use does lead to potential
vulnerabilities, and they must be corrected as they are found. Still
not sure what your point is. Is there a reason to use C or C++ for many
of us? Nope. I'm not arguing that we all should find them of use. It's
easy for us to sit on Python and look with contempt at C or C++, but
they really do have their place (C more than C++ IMO).

This is so far off the original topic that it probably is construed
that I am arguing for C++ vs Python or something. But I am not. I'm
quite content with Python. There are a host of languages I find
interesting including D, Google Go, Vala, FreeBASIC, Mozilla Rust, etc.
But Python fits my needs so well, I can't be bothered to invest much time in these other languages. -- https://mail.python.org/mailman/listinfo/python-list
inet_http_server is only one username and passowrd in supervisor
Hi there,

I use supervisor. In the [inet_http_server] section, there is only one
username and password:

username=user
password=pass

Do you know how to add more than one user?

Thank you.
-- 
https://mail.python.org/mailman/listinfo/python-list