pip does not work anymore
When I try to install something with pip2 I get:

Traceback (most recent call last):
  File "/usr/bin/pip2", line 9, in <module>
    load_entry_point('pip==7.1.2', 'console_scripts', 'pip2')()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 558, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2355, in load
    return self.resolve()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2361, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 15, in <module>
    from pip.vcs import git, mercurial, subversion, bazaar  # noqa
  File "/usr/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, in <module>
    from pip.index import Link
  File "/usr/lib/python2.7/site-packages/pip/index.py", line 30, in <module>
    from pip.wheel import Wheel, wheel_ext
  File "/usr/lib/python2.7/site-packages/pip/wheel.py", line 35, in <module>
    from pip._vendor.distlib.scripts import ScriptMaker
  File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/scripts.py", line 14, in <module>
    from .compat import sysconfig, detect_encoding, ZipFile
  File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/compat.py", line 31, in <module>
    from urllib2 import (Request, urlopen, URLError, HTTPError,
ImportError: cannot import name HTTPSHandler

With pip3 I get the same problem.

It looks like openSUSE has done an upgrade in which it disabled ssl2.
How can I get pip2/3 working again?

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list
Re: pip does not work anymore
In a message of Mon, 23 Nov 2015 08:48:54 +0100, Cecil Westerhof writes:
> When I try to install something with pip2 I get:
> Traceback (most recent call last):
>   File "/usr/bin/pip2", line 9, in <module>
>     load_entry_point('pip==7.1.2', 'console_scripts', 'pip2')()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 558, in load_entry_point
>     return get_distribution(dist).load_entry_point(group, name)
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
>     return ep.load()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2355, in load
>     return self.resolve()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2361, in resolve
>     module = __import__(self.module_name, fromlist=['__name__'], level=0)
>   File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 15, in <module>
>     from pip.vcs import git, mercurial, subversion, bazaar  # noqa
>   File "/usr/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, in <module>
>     from pip.index import Link
>   File "/usr/lib/python2.7/site-packages/pip/index.py", line 30, in <module>
>     from pip.wheel import Wheel, wheel_ext
>   File "/usr/lib/python2.7/site-packages/pip/wheel.py", line 35, in <module>
>     from pip._vendor.distlib.scripts import ScriptMaker
>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/scripts.py", line 14, in <module>
>     from .compat import sysconfig, detect_encoding, ZipFile
>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/compat.py", line 31, in <module>
>     from urllib2 import (Request, urlopen, URLError, HTTPError,
> ImportError: cannot import name HTTPSHandler
>
> With pip3 I get the same problem.
>
> It looks like openSUSE has done an upgrade in which it disabled ssl2.
> How can I get pip2/3 working again?
>
> --
> Cecil Westerhof
> Senior Software Engineer
> LinkedIn: http://www.linkedin.com/in/cecilwesterhof
> --
> https://mail.python.org/mailman/listinfo/python-list

Reading
https://forums.opensuse.org/showthread.php/488962-OpenSuse-python-amp-openssl
seems to indicate that you are supposed to use python-pip and not pip for
OpenSuse, but I don't have one, so that is just my interpretation.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
Re: Late-binding of function defaults (was Re: What is a function parameter =[] for?)
On Fri, Nov 20, 2015 at 6:46 AM, Chris Angelico wrote:
> The expressions would be evaluated as closures, using the same scope
> that the function's own definition used. (This won't keep things alive
> unnecessarily, as the function's body will be nested within that same
> scope anyway.) Bikeshed the syntax all you like, but this would be
> something to point people to: "here's how to get late-binding
> semantics". For the purposes of documentation, the exact text of the
> parameter definition could be retained, and like docstrings, they
> could be discarded in -OO mode.

Just out of interest, I had a shot at implementing this with a decorator.
Here's the code:

# -- cut --
import functools
import time

class lb:
    def __repr__(self): return ""

def latearg(f):
    tot_args = f.__code__.co_argcount
    min_args = tot_args - len(f.__defaults__)
    defs = f.__defaults__
    # With compiler help, we could get the original text as well as something
    # executable that works in the correct scope. Without compiler help, we
    # either use a lambda function, or an exec/eval monstrosity that can't use
    # the scope of its notional definition (since its *actual* definition will
    # be inside this decorator). Instead, just show a fixed bit of text.
    f.__defaults__ = tuple(lb() if callable(arg) else arg for arg in defs)
    @functools.wraps(f)
    def inner(*a, **kw):
        if len(a) < min_args:
            return f(*a, **kw)  # Will trigger TypeError
        if len(a) < tot_args:
            more_args = defs[len(a)-tot_args:]
            a += tuple(arg() if callable(arg) else arg for arg in more_args)
        return f(*a, **kw)
    return inner

def sleeper(tm):
    """An expensive function."""
    t = time.monotonic()
    time.sleep(tm)
    return time.monotonic() - t - tm

seen_args = []

@latearg
def foo(spam, ham=lambda: [], val=lambda: sleeper(0.5)):
    print("%s: Ham %X with sleeper %s" % (spam, id(ham), val))
    seen_args.append(ham)  # Keep all ham objects alive so IDs are unique

@latearg
def x(y=lambda: []):
    y.append(1)
    return y

print("Starting!")
foo("one-arg 1")
foo("one-arg 2")
foo("two-arg 1", [])
foo("two-arg 2", [])
foo("tri-arg 1", [], 0.0)
foo("tri-arg 2", [], 0.0)
print(x())
print(x())
print(x())
print(x([2]))
print(x([3]))
print(x([4]))
print("Done!")
# -- cut --

This does implement late binding, but:
1) The adornment is the rather verbose "lambda:", where I'd much rather
   have something shorter
2) Since there's no way to recognize "the ones that were adorned", the
   decorator checks for "anything callable"
3) Keyword args aren't handled - they're passed through as-is (and
   keyword-only arg defaults aren't rendered)
4) As commented, the help text can't pick up the text of the function

But it does manage to render args at execution time, and the help() for
the function identifies the individual arguments correctly (thanks to
functools.wraps and the modified defaults - though this implementation is
a little unfriendly, mangling the original function defaults instead of
properly wrapping).

Clock this one up as "useless code that was fun to write".

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Is there an meaning of '[[]]' in a list?
Quivis wrote:
> On Thu, 19 Nov 2015 12:40:17 +0100, Peter Otten wrote:
>
>> those questions that are a little harder
>
> And just how is he going to determine what is hard?

Note that I said "a little harder", not "hard". Write down your next ten or
so questions, then work through the tutorial or another introductory text,
then use a search engine, then post the one or two questions that are still
unanswered.

--
https://mail.python.org/mailman/listinfo/python-list
Re: pip does not work anymore
On Monday 23 Nov 2015 09:11 CET, Laura Creighton wrote:
> In a message of Mon, 23 Nov 2015 08:48:54 +0100, Cecil Westerhof writes:
>> When I try to install something with pip2 I get:
>> Traceback (most recent call last):
>>   File "/usr/bin/pip2", line 9, in <module>
>>     load_entry_point('pip==7.1.2', 'console_scripts', 'pip2')()
>>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 558, in load_entry_point
>>     return get_distribution(dist).load_entry_point(group, name)
>>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
>>     return ep.load()
>>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2355, in load
>>     return self.resolve()
>>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2361, in resolve
>>     module = __import__(self.module_name, fromlist=['__name__'], level=0)
>>   File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 15, in <module>
>>     from pip.vcs import git, mercurial, subversion, bazaar  # noqa
>>   File "/usr/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, in <module>
>>     from pip.index import Link
>>   File "/usr/lib/python2.7/site-packages/pip/index.py", line 30, in <module>
>>     from pip.wheel import Wheel, wheel_ext
>>   File "/usr/lib/python2.7/site-packages/pip/wheel.py", line 35, in <module>
>>     from pip._vendor.distlib.scripts import ScriptMaker
>>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/scripts.py", line 14, in <module>
>>     from .compat import sysconfig, detect_encoding, ZipFile
>>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/compat.py", line 31, in <module>
>>     from urllib2 import (Request, urlopen, URLError, HTTPError,
>> ImportError: cannot import name HTTPSHandler
>>
>> With pip3 I get the same problem.
>>
>> It looks like openSUSE has done an upgrade in which it disabled
>> ssl2. How can I get pip2/3 working again?
>
> Reading
> https://forums.opensuse.org/showthread.php/488962-OpenSuse-python-amp-openssl
> seems to indicate that you are supposed to use python-pip and not
> pip for OpenSuse, but I don't have one, so that is just my
> interpretation.

python-pip is the package that you need to install to use pip2. I removed
it and then pip2 does not exist anymore. I reinstalled it, but keep getting
the same error.

It looks like I need to install:
https://docs.python.org/2/library/urllib2.html?highlight=urllib2#urllib2.HTTPSHandler
but I need pip2 to do that. So I need a working pip2 to get it working. :'-(

By the way openSUSE did a big update and broke other things also. For
example virtualbox. Probably time to really leave openSUSE for Debian. But
that will not be painless.

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list
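A quick, hypothetical diagnostic (not from the thread) to confirm whether the
interpreter's own SSL support is what broke, rather than pip itself. Run it
with the same python2 that pip2 uses:

# If the ssl import fails, urllib2 defines no HTTPSHandler and pip cannot
# reach PyPI over HTTPS, which matches the traceback above.
try:
    import ssl
    print("ssl loads fine: " + ssl.OPENSSL_VERSION)
except ImportError as exc:
    print("ssl is broken: %s" % exc)

import urllib2
print("HTTPSHandler available: %s" % hasattr(urllib2, "HTTPSHandler"))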
Creating a Dynamic Website Using Python
Ok, so I have gone through the CodeAcademy Python modules and decided to
jump straight into a project.

I want to create a dynamic web-based site like this ---
https://www.wedpics.com --- using Python.

How / where do I start?

Thanks a lot!

Gengyang
--
https://mail.python.org/mailman/listinfo/python-list
Re: What is a function parameter =[] for?
On 23/11/2015 07:47, Steven D'Aprano wrote:
> I think it would be cleaner and better if Python had dedicated syntax for
> declaring static local variables:

Interesting. So why is it that when /I/ said:

> On Mon, 23 Nov 2015 12:21 am, BartC wrote:
>
>> But if it's used for static storage, then why not just use static
>> storage?

You replied with the insulting:

> /head-desk

?

Maybe it's my turn to bang my head on the desk.

--
Bartc
--
https://mail.python.org/mailman/listinfo/python-list
Re: Creating a Dynamic Website Using Python
On 23/11/2015 09:55, Cai Gengyang wrote:
> Ok, So I have gone through the CodeAcademy Python modules and decided to
> jump straight into a project.
>
> I want to create a dynamic web-based site like this ---
> https://www.wedpics.com using Python
>
> How / where do I start ?

With a search engine, or do you need instructions on how to use one?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: What is a function parameter =[] for?
On 23/11/2015 00:37, BartC wrote:
> On 23/11/2015 00:04, Mark Lawrence wrote:
>> On 22/11/2015 23:44, Steven D'Aprano wrote:
>>> On Mon, 23 Nov 2015 12:21 am, BartC wrote:
>>>> But if it's used for static storage, then why not just use static
>>>> storage? That's a simpler and more general concept than memoisation.
>>>
>>> /head-desk
>>>
>>> "But if it's used for cooking, why not just cook? That's a simpler and
>>> more general concept than roasting."
>
> With 'it' being a washing machine perhaps?
>
> But I'll let this other chap have the last word as he puts it across
> better:
>
> > Steven D'Aprano wrote:
> >> Memoisation isn't "esoteric", it is a simple, basic and widely-used
> >> technique used to improve performance of otherwise expensive functions.
>
> On 22/11/2015 23:43, Gregory Ewing wrote:
> > That may be true, but I don't think it's a good example
> > of a use for a shared, mutable default value, because
> > it's arguably an *abuse* of the default value mechanism.
>
>> What happened to "Please do not feed the trolls"?
>
> You mean, people with different opinions?
>
> I think I'm done here.

Wow, so there is a God.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: pip does not work anymore
On Monday 23 Nov 2015 08:48 CET, Cecil Westerhof wrote:

> When I try to install something with pip2 I get:
> Traceback (most recent call last):
>   File "/usr/bin/pip2", line 9, in <module>
>     load_entry_point('pip==7.1.2', 'console_scripts', 'pip2')()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 558, in load_entry_point
>     return get_distribution(dist).load_entry_point(group, name)
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
>     return ep.load()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2355, in load
>     return self.resolve()
>   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2361, in resolve
>     module = __import__(self.module_name, fromlist=['__name__'], level=0)
>   File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 15, in <module>
>     from pip.vcs import git, mercurial, subversion, bazaar  # noqa
>   File "/usr/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, in <module>
>     from pip.index import Link
>   File "/usr/lib/python2.7/site-packages/pip/index.py", line 30, in <module>
>     from pip.wheel import Wheel, wheel_ext
>   File "/usr/lib/python2.7/site-packages/pip/wheel.py", line 35, in <module>
>     from pip._vendor.distlib.scripts import ScriptMaker
>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/scripts.py", line 14, in <module>
>     from .compat import sysconfig, detect_encoding, ZipFile
>   File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/compat.py", line 31, in <module>
>     from urllib2 import (Request, urlopen, URLError, HTTPError,
> ImportError: cannot import name HTTPSHandler
>
> With pip3 I get the same problem.
>
> It looks like openSUSE has done an upgrade in which it disabled
> ssl2. How can I get pip2/3 working again?

Can I make pip work with ssl3?

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list
Let urllib3 use ssl3 instead of ssl2
For a program I need to import urllib3, but this gives:

ImportError: /usr/lib64/python2.7/lib-dynload/_ssl.so: undefined symbol: SSLv2_method

This probably has to do with openSUSE having removed ssl2 because it is not
secure. Is there a way to let urllib3 use ssl3 instead of ssl2?

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list
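An illustrative first check (an assumption about the failure mode, not part
of the original post): if the ssl module itself fails to import with the same
"undefined symbol: SSLv2_method" error, the problem is the compiled _ssl
extension being out of step with the system OpenSSL, and no urllib3 setting
can work around it.

# Lists which protocol constants this build of the ssl module exposes
# (SSLv2 is typically absent on patched systems); the import itself is
# the real test.
import ssl
print([name for name in dir(ssl) if name.startswith("PROTOCOL_")])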
Re: What is a function parameter =[] for?
On Monday, November 23, 2015 at 6:35:35 AM UTC-5, Mark Lawrence wrote:
> On 23/11/2015 00:37, BartC wrote:
> > On 23/11/2015 00:04, Mark Lawrence wrote:
> >> What happened to "Please do not feed the trolls"?
> >
> > You mean, people with different opinions?
> >
> > I think I'm done here.
> >
>
> Wow, so there is a God.

Mark, you aren't helping.

--Ned.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Creating a Dynamic Website Using Python
On Mon, Nov 23, 2015 at 8:55 PM, Cai Gengyang wrote:
> So I have gone through the CodeAcademy Python modules and decided to jump
> straight into a project.
>
> I want to create a dynamic web-based site like this ---
> https://www.wedpics.com using Python
>
> How / where do I start ?

While it's *possible* to create a web site using just a base Python
install, it's a lot more work than you need. What you should instead do is
pick up one of the popular web frameworks like Django, Flask, Bottle, etc,
and use that. Pick a framework (personally, I use Flask, but you can use
any of them), and work through its tutorial. Then, depending on how fancy
you want your site to be, you'll need anywhere from basic to advanced
knowledge of the web's client-side technologies - HTML, CSS, JavaScript,
and possibly some libraries like Bootstrap, jQuery, etc. Again, pick up any
that you want to use, and work through their tutorials.

This is a massively open-ended goal. You can make this as big or small as
you want.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
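To give a sense of how small the starting point can be, here is a minimal
Flask application (an illustrative sketch; the file name and route are
arbitrary, and a real site would add templates and a database):

# hello.py -- run with "python hello.py", then visit http://127.0.0.1:5000/
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A dynamic page builds its response per request; here it is a plain
    # string, but it could be rendered from a template and a database query.
    return "Hello from Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # development server only, not for production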
Re: What is a function parameter =[] for?
BartC:
> I think I'm done here.

Bart, this is the internet. Just skip the articles you don't find
uplifting.

Marko
--
https://mail.python.org/mailman/listinfo/python-list
Re: pip does not work anymore
On Mon, 23 Nov 2015 06:48 pm, Cecil Westerhof wrote:

> When I try to install something with pip2 I get:
> Traceback (most recent call last):
[...]
>     from urllib2 import (Request, urlopen, URLError, HTTPError,
> ImportError: cannot import name HTTPSHandler

Before blaming SUSE for breaking this, please run this at the interactive
interpreter:

import urllib2
print urllib2.__file__

Have you perhaps accidentally shadowed the std lib module?

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: Creating a Dynamic Website Using Python
Ok, I will look through the Flask tutorials then ...

On Monday, November 23, 2015 at 8:21:28 PM UTC+8, Chris Angelico wrote:
> On Mon, Nov 23, 2015 at 8:55 PM, Cai Gengyang wrote:
> > So I have gone through the CodeAcademy Python modules and decided to jump
> > straight into a project.
> >
> > I want to create a dynamic web-based site like this ---
> > https://www.wedpics.com using Python
> >
> > How / where do I start ?
>
> While it's *possible* to create a web site using just a base Python
> install, it's a lot more work than you need. What you should instead
> do is pick up one of the popular web frameworks like Django, Flask,
> Bottle, etc, and use that. Pick a framework (personally, I use Flask,
> but you can use any of them), and work through its tutorial. Then,
> depending on how fancy you want your site to be, you'll need anywhere
> from basic to advanced knowledge of the web's client-side technologies
> - HTML, CSS, JavaScript, and possibly some libraries like Bootstrap,
> jQuery, etc. Again, pick up any that you want to use, and work through
> their tutorials.
>
> This is a massively open-ended goal. You can make this as big or small
> as you want.
>
> ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
How to remember last position and size (geometry) of PyQt application?
Hello all fellow Python programmers!

I am using PyQt5 (5.5.1) with Python 3.4.0 (64-bit) on Windows 8.1 64-bit.
I don't think this much data was needed. :P

I am having trouble restoring the position and size (geometry) of my very
simple PyQt app. What I read online is that this is the default behavior
and we need to use QSettings to save and retrieve settings from the Windows
registry, which is stored in `\\HKEY_CURRENT_USER\Software\[CompanyName]\[AppName]\`.

Here are some of the links I read:

http://doc.qt.io/qt-5.5/restoring-geometry.html
http://doc.qt.io/qt-5.5/qwidget.html#saveGeometry
http://doc.qt.io/qt-5.5/qsettings.html#restoring-the-state-of-a-gui-application

and the last one:

https://ic3man5.wordpress.com/2013/01/26/save-qt-window-size-and-state-on-closeopen/

I could have followed those tutorials but those tutorials/docs were written
for C++ users. C++ is not my glass of beer. Should I expect help from you
guys? :)

Here is a minimal working application:

import sys
from PyQt5.QtWidgets import QApplication, QWidget

class sViewer(QWidget):
    """Main class of sViewer"""
    def __init__(self):
        super(sViewer, self).__init__()
        self.initUI()

    def initUI(self):
        self.show()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    view = sViewer()
    sys.exit(app.exec_())
--
https://mail.python.org/mailman/listinfo/python-list
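One common way to handle this in PyQt5 is to save the blob returned by
QWidget.saveGeometry() into QSettings when the window closes, and restore it
at start-up. The sketch below is an illustration, not the poster's code;
"MyCompany" / "sViewer" are placeholder organization and application names.

import sys
from PyQt5.QtCore import QSettings
from PyQt5.QtWidgets import QApplication, QWidget

class sViewer(QWidget):
    """Widget that remembers its last position and size between runs."""
    def __init__(self):
        super(sViewer, self).__init__()
        self.settings = QSettings("MyCompany", "sViewer")  # placeholder names
        geometry = self.settings.value("geometry")
        if geometry is not None:
            self.restoreGeometry(geometry)  # bytes previously saved below
        self.show()

    def closeEvent(self, event):
        # Persist the current geometry before the window closes.
        self.settings.setValue("geometry", self.saveGeometry())
        super(sViewer, self).closeEvent(event)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    view = sViewer()
    sys.exit(app.exec_())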
Re: What is a function parameter =[] for?
On Mon, 23 Nov 2015 09:40 pm, BartC wrote:

> On 23/11/2015 07:47, Steven D'Aprano wrote:
>
>> I think it would be cleaner and better if Python had dedicated syntax for
>> declaring static local variables:
>
> Interesting. So why is it that when /I/ said:
>
> > On Mon, 23 Nov 2015 12:21 am, BartC wrote:
> >
> >> But if it's used for static storage, then why not just use static
> >> storage?
>
> You replied with the insulting:
>
> > /head-desk
>
> ?
>
> Maybe it's my turn to bang my head on the desk.

Let me steal^W borrow an idea from Galileo, and present the explanation in
the form of a dialogue between two philosophers of computer science,
Salviati and Simplicio, and a third, intelligent layman, Sagredo.

https://en.wikipedia.org/wiki/Dialogue_Concerning_the_Two_Chief_World_Systems

Salviati: Function defaults can also be used for static storage.

Simplicio: If you want static storage, why not use static storage?

Salviati: Function defaults in Python *are* static storage. Although they
are not the only way to get static storage, as closures can also be used
for that purpose, they are surely the simplest way to get static storage
in Python.

Sagredo: Simplest though it might be, surely a reasonable person would
consider that using function parameters for static storage is abuse of the
feature and a global variable would be better?

Salviati: Global variables have serious disadvantages. I will agree that
using function parameters for static storage is something of a code smell,
but good enough for rough and ready code. Nevertheless, it would be good if
Python had dedicated syntax for static storage.

Simplicio: Ah-ha! Gotcha!

Salviati: No, perhaps you missed that I was referring to a hypothetical
future addition to Python, not a current feature. But even if it did exist
today, your statement misses the point that by using function defaults I
*am* using static storage. In effect, you are telling me that rather than
using static storage I should instead use static storage.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
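A small illustration of Salviati's point (a sketch, not from the post): a
mutable default argument persists between calls, so it behaves exactly like
static storage and can serve as a memoisation cache.

def fib(n, _cache={0: 0, 1: 1}):
    # _cache is created once, at definition time, and shared by every call.
    if n not in _cache:
        _cache[n] = fib(n - 1) + fib(n - 2)
    return _cache[n]

print(fib(30))  # 832040 -- subsequent calls reuse the cached values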
Re: Late-binding of function defaults (was Re: What is a function parameter =[] for?)
On Mon, Nov 23, 2015 at 1:23 AM, Chris Angelico wrote:
> def latearg(f):
>     tot_args = f.__code__.co_argcount
>     min_args = tot_args - len(f.__defaults__)
>     defs = f.__defaults__
>     # With compiler help, we could get the original text as well as something
>     # executable that works in the correct scope. Without compiler help, we
>     # either use a lambda function, or an exec/eval monstrosity that can't use
>     # the scope of its notional definition (since its *actual* definition will
>     # be inside this decorator). Instead, just show a fixed bit of text.

You should be able to get the correct globals from f.__globals__.

For locals, the decorator might capture the locals of the previous stack
frame at the moment the decorator was called, but that's potentially a
pretty heavy thing to be retaining for this purpose; the definition of a
@latearg function would indefinitely keep a reference to every single
object that was bound to a variable in that scope, not just the things it
needs. For better specificity you could parse the expression and then just
grab the names that it uses. Even so, this would still act somewhat like
early binding in that it would reference the local variables at the time of
definition rather than evaluation.

Nonlocals? Just forget about it.

> This does implement late binding, but:
> 1) The adornment is the rather verbose "lambda:", where I'd much
> rather have something shorter
> 2) Since there's no way to recognize "the ones that were adorned", the
> decorator checks for "anything callable"

A parameter annotation could be used in conjunction with the decorator.

@latearg
def x(y: latearg = lambda: []):
    ...

But that's even more verbose. In the simple case where all the defaults
should be late, one could have something like:

@latearg('*')
def x(y=lambda: []):
    ...

The argument could be generalized to pass a set of parameter names as an
alternative to the annotation.

> 3) Keyword args aren't handled - they're passed through as-is (and
> keyword-only arg defaults aren't rendered)

I would expect that Python 3 Signature objects would make this a lot
simpler to handle.
--
https://mail.python.org/mailman/listinfo/python-list
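A hedged sketch of what the Signature-based version could look like
(illustrative only, not Ian's or Chris's code; it simply treats every
callable default, positional or keyword-only, as "late"):

import functools
import inspect

def latearg(f):
    sig = inspect.signature(f)

    @functools.wraps(f)
    def inner(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, param in sig.parameters.items():
            # Any parameter not supplied whose default is callable gets its
            # default evaluated now, at call time.
            if (name not in bound.arguments
                    and param.default is not inspect.Parameter.empty
                    and callable(param.default)):
                bound.arguments[name] = param.default()
        return f(*bound.args, **bound.kwargs)

    return inner

@latearg
def x(y=lambda: []):
    y.append(1)
    return y

print(x())  # [1]
print(x())  # [1] again -- a fresh list on every call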
Re: non-blocking getkey?
eryksun wrote:
> On Thu, Nov 19, 2015 at 10:31 AM, Michael Torrie wrote:
> > One windows it might be possible to use the win32 api to enumerate the
> > windows, find your console window and switch to it.
>
> You can call GetConsoleWindow [1] and then SetForegroundWindow [2]. (...)

Sorry, for the late feedback: great, this works! Thanks!

--
Ullrich Horlacher              Server und Virtualisierung
Rechenzentrum IZUS/TIK         E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart         Tel:    ++49-711-68565868
Allmandring 30a                Fax:    ++49-711-682357
70550 Stuttgart (Germany)      WWW:    http://www.tik.uni-stuttgart.de/
--
https://mail.python.org/mailman/listinfo/python-list
Re: Late-binding of function defaults (was Re: What is a function parameter =[] for?)
On Tue, Nov 24, 2015 at 3:40 AM, Ian Kelly wrote:
> On Mon, Nov 23, 2015 at 1:23 AM, Chris Angelico wrote:
>> def latearg(f):
>>     tot_args = f.__code__.co_argcount
>>     min_args = tot_args - len(f.__defaults__)
>>     defs = f.__defaults__
>>     # With compiler help, we could get the original text as well as something
>>     # executable that works in the correct scope. Without compiler help, we
>>     # either use a lambda function, or an exec/eval monstrosity that can't use
>>     # the scope of its notional definition (since its *actual* definition will
>>     # be inside this decorator). Instead, just show a fixed bit of text.
>
> You should be able to get the correct globals from f.__globals__.
>
> For locals, the decorator might capture the locals of the previous
> stack frame at the moment the decorator was called, but that's
> potentially a pretty heavy thing to be retaining for this purpose; the
> definition of a @latearg function would indefinitely keep a reference
> to every single object that was bound to a variable in that scope, not
> just the things it needs. For better specificity you could parse the
> expression and then just grab the names that it uses. Even so, this
> would still act somewhat like early binding in that it would reference
> the local variables at the time of definition rather than evaluation.
>
> Nonlocals? Just forget about it.

And since nonlocals are fundamentally unsolvable, I took the simpler option
and just used an actual lambda function.

>> This does implement late binding, but:
>> 1) The adornment is the rather verbose "lambda:", where I'd much
>> rather have something shorter
>> 2) Since there's no way to recognize "the ones that were adorned", the
>> decorator checks for "anything callable"
>
> A parameter annotation could be used in conjunction with the decorator.
>
> @latearg
> def x(y: latearg = lambda: []):
>     ...
>
> But that's even more verbose. In the simple case where all the
> defaults should be late, one could have something like:
>
> @latearg('*')
> def x(y=lambda: []):
>     ...
>
> The argument could be generalized to pass a set of parameter names as
> an alternative to the annotation.

Yeah, that might help. With real compiler support, both of these could be
solved:

def x(y=>[]):
    ...

The displayed default could be "=>[]", exactly the way it's seen in the
source, and the run-time would know exactly which args were flagged this
way. Plus, it could potentially use a single closure to evaluate all the
arguments.

>> 3) Keyword args aren't handled - they're passed through as-is (and
>> keyword-only arg defaults aren't rendered)
>
> I would expect that Python 3 Signature objects would make this a lot
> simpler to handle.

Maybe. It's still going to be pretty complicated. I could easily handle
keyword-only arguments, but recognizing that something in **kw is replacing
something in *a is a bit harder. Expansion invited.

This is reminiscent of the manual "yield from" implementation in PEP 380.
It looks simple enough, until you start writing in all the corner cases.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Designing DBI compliant SQL parameters for module
My company uses a database (4th Dimension) for which there was no Python
DBI compliant driver available (I had to use ODBC, which I felt was
cludgy). However, I did discover that the company had a C driver available,
so I went ahead and used CFFI to wrap this driver into a DBI compliant
Python module (https://pypi.python.org/pypi/p4d). This works well (I still
need to make it Python 3.x compatible), but since the underlying C library
uses "qmark" style parameter markers, that's all I implemented in my
module.

I would like to expand the module to be able to use the more common (or at
least easier for me) "format" and "pyformat" parameter markers, as
indicated in the footnote to PEP 249
(https://www.python.org/dev/peps/pep-0249/#id2 at least for the pyformat
markers). Now I am fairly confident that I can write code to convert such
placeholders into the qmark style markers that the underlying library
provides, but before I go and re-invent the wheel, is there already code
that does this which I can simply use, or modify?

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---
--
https://mail.python.org/mailman/listinfo/python-list
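For what it's worth, here is a rough sketch of the kind of conversion
involved (an illustration only, with made-up table and parameter names; a
real implementation would also need to handle "format" style %s markers,
escaped %% signs, and placeholders inside string literals):

import re

_PYFORMAT = re.compile(r"%\((\w+)\)s")

def pyformat_to_qmark(operation, parameters):
    """Return (sql, args) with %(name)s markers replaced by ? placeholders."""
    names = []

    def repl(match):
        names.append(match.group(1))
        return "?"

    sql = _PYFORMAT.sub(repl, operation)
    return sql, tuple(parameters[name] for name in names)

sql, args = pyformat_to_qmark(
    "SELECT * FROM flights WHERE tail = %(tail)s AND dest = %(dest)s",
    {"tail": "N123RV", "dest": "FAI"},
)
print(sql)   # SELECT * FROM flights WHERE tail = ? AND dest = ?
print(args)  # ('N123RV', 'FAI')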
Re: pip does not work anymore
On Monday 23 Nov 2015 14:08 CET, Steven D'Aprano wrote:

> On Mon, 23 Nov 2015 06:48 pm, Cecil Westerhof wrote:
>
>> When I try to install something with pip2 I get:
>> Traceback (most recent call last):
> [...]
>> from urllib2 import (Request, urlopen, URLError, HTTPError,
>> ImportError: cannot import name HTTPSHandler
>
> Before blaming SUSE for breaking this, please run this at the
> interactive interpreter:
>
> import urllib2
> print urllib2.__file__
>
> Have you perhaps accidentally shadowed the std lib module?

It worked without fail before the update, but you never know:

Python 2.7.9 (default, Dec 13 2014, 18:02:08) [GCC] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> print urllib2.__file__
/usr/lib64/python2.7/urllib2.pyc

So I did not shadow I would think.

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list
Bi-directional sub-process communication
I have a multi-threaded Python app (a CherryPy web app, to be exact) that
launches a child process that it then needs to communicate with
bi-directionally. To implement this, I have used a pair of Queues: a
child_queue which I use for master->child communication, and a master_queue
which is used for child->master communication.

The way I have the system set up, the child process runs a loop in a thread
that waits for messages on child_queue, and when one is received responds
appropriately depending on the message, which sometimes involves posting a
message to master_queue. On the master side, when it needs to communicate
with the child process, it posts a message to child_queue, and if the
request requires a response it will then immediately start waiting for a
message on master_queue, typically with a timeout.

While this process works well in testing, I do have one concern (maybe
unfounded) and a real-world issue.

Concern: Since the master process is multi-threaded, it seems likely enough
that multiple threads on the master side would make requests at the same
time. I understand that the Queue class has locks that make this fine (one
thread will complete posting the message before the next is allowed to
start), and since the child process only has a single thread processing
messages from the queue, it should process them in order and post the
responses (if any) to the master_queue in order. But now I have multiple
master threads all trying to read master_queue at the same time. Again, the
locks will take care of this and prevent any overlapping reads, but am I
guaranteed that the threads will obtain the lock and therefore read the
responses in the right order? Or is there a possibility that, say, thread
three will get the response that should have been for thread one? Is this
something I need to take into consideration, and if so, how?

Real-world problem: While as I said this system worked well in testing, now
that I have gotten it out into production I've occasionally run into a
problem where the master thread waiting for a response on master_queue
times out while waiting. This causes a (potentially) two-fold problem, in
that first off the master process doesn't get the information it had
requested, and secondly that I *could* end up with an "orphaned" message on
the queue that could cause problems the next time I try to read something
from it.

I currently have the timeout set to 3 seconds. I can, of course, increase
that, but that could lead to a bad user experience - and might not even
help the situation if something else is going on.

The actual exchange is quite simple. On the master side, I have this code:

config.socket_queue.put('GET_PORT')
try:
    port = config.master_queue.get(timeout=3)  # wait up to three seconds for a response
except Empty:
    port = 5000  # default. Can't hurt to try.

Which, as you might have been able to guess, tries to ask the child process
(an instance of a Tornado server, btw) what port it is listening on. The
child process then, on getting this message from the queue, runs the
following code:

elif item == 'GET_PORT':
    port = utils.config.getint('global', 'tornado.port')
    master_queue.put(port)

So nothing that should take any significant time.

Of course, since this is a single thread handling any number of requests,
it is possible that the thread is tied up responding to a different request
(or that the GIL is preventing the thread from running at all, since
another thread might be commandeering the processor), but I find it hard to
believe that it could be tied up for more than three seconds.

So is there a better way to do sub-process bi-directional communication
that would avoid these issues? Or do I just need to increase the timeout
(or remove it altogether, at the risk of potentially causing the thread to
hang if no message is posted)? And is my concern justified, or just
paranoid? Thanks for any information that can be provided!

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---
--
https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On Mon, Nov 23, 2015 at 10:54 AM, Israel Brewster wrote:
> Concern: Since the master process is multi-threaded, it seems likely
> enough that multiple threads on the master side would make requests at
> the same time. I understand that the Queue class has locks that make this
> fine (one thread will complete posting the message before the next is
> allowed to start), and since the child process only has a single thread
> processing messages from the queue, it should process them in order and
> post the responses (if any) to the master_queue in order. But now I have
> multiple master processes all trying to read master_queue at the same
> time. Again, the locks will take care of this and prevent any overlapping
> reads, but am I guaranteed that the threads will obtain the lock and
> therefore read the responses in the right order? Or is there a
> possibility that, say, thread three will get the response that should
> have been for thread one? Is this something I need to take into
> consideration, and if so, how?

Yes, if multiple master threads are waiting on the queue, it's possible
that a master thread could get a response that was not intended for it. As
far as I know there's no guarantee that the waiting threads will be woken
up in the order that they called get(), but even if there are, consider
this case:

Thread A enqueues a request.
Thread B preempts A and enqueues a request.
Thread B calls get on the response queue.
Thread A calls get on the response queue.
The response from A's request arrives and is given to B.

Instead of having the master threads pull objects off the response queue
directly, you might create another thread whose sole purpose is to handle
the response queue. That could look like this:

request_condition = threading.Condition()
response_global = None

def master_thread():
    global response_global
    with request_condition:
        request_queue.put(request)
        request_condition.wait()
        # Note: the Condition should remain acquired until response_global is reset.
        response = response_global
        response_global = None
    if wrong_response(response):
        raise RuntimeError("got a response for the wrong request")
    handle_response(response)

def response_thread():
    global response_global
    while True:
        response = response_queue.get()
        with request_condition:
            response_global = response
            request_condition.notify()

As another option you could use a multiprocessing.Manager to coordinate
passing the response back more directly, but starting a third process seems
like overkill for this.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On Mon, Nov 23, 2015 at 12:55 PM, Ian Kelly wrote:
> On Mon, Nov 23, 2015 at 10:54 AM, Israel Brewster wrote:
>> Concern: Since the master process is multi-threaded, it seems likely
>> enough that multiple threads on the master side would make requests at
>> the same time. I understand that the Queue class has locks that make
>> this fine (one thread will complete posting the message before the next
>> is allowed to start), and since the child process only has a single
>> thread processing messages from the queue, it should process them in
>> order and post the responses (if any) to the master_queue in order. But
>> now I have multiple master processes all trying to read master_queue at
>> the same time. Again, the locks will take care of this and prevent any
>> overlapping reads, but am I guaranteed that the threads will obtain the
>> lock and therefore read the responses in the right order? Or is there a
>> possibility that, say, thread three will get the response that should
>> have been for thread one? Is this something I need to take into
>> consideration, and if so, how?
>
> Yes, if multiple master threads are waiting on the queue, it's
> possible that a master thread could get a response that was not
> intended for it. As far as I know there's no guarantee that the
> waiting threads will be woken up in the order that they called get(),
> but even if there are, consider this case:
>
> Thread A enqueues a request.
> Thread B preempts A and enqueues a request.
> Thread B calls get on the response queue.
> Thread A calls get on the response queue.
> The response from A's request arrives and is given to B.
>
> Instead of having the master threads pull objects off the response
> queue directly, you might create another thread whose sole purpose is
> to handle the response queue. That could look like this:
>
> request_condition = threading.Condition()
> response_global = None
>
> def master_thread():
>     global response_global
>     with request_condition:
>         request_queue.put(request)
>         request_condition.wait()
>         # Note: the Condition should remain acquired until response_global is reset.
>         response = response_global
>         response_global = None
>     if wrong_response(response):
>         raise RuntimeError("got a response for the wrong request")
>     handle_response(response)
>
> def response_thread():
>     global response_global
>     while True:
>         response = response_queue.get()
>         with request_condition:
>             response_global = response
>             request_condition.notify()

Actually I realized that this fails because if two threads get notified at
about the same time, they could reacquire the Condition in the wrong order
and so get the wrong responses.

Concurrency, ugh.

It's probably better just to have a Condition/Event per thread and have the
response thread identify the correct one to notify, rather than just notify
a single shared Condition and hope the threads wake up in the right order.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Problem to read from array
Thank you all.

Here is the last piece of code that caused me so much trouble but is now
working the way I wanted it:

fRawData = []
with open(fStagingFile2) as fStagingFile2FH:
    fRawData = [line.strip() for line in fStagingFile2FH.readlines()]  # This is to read each element from the file and chop off the end of line character

fNumberOfColumns = 7
fNumberOfRows = len(fRawData)/fNumberOfColumns

fRowID = 0
fParameters = []
for fRowID in range(0, len(fRawData), fNumberOfColumns):
    fParameters.append(fRawData[fRowID:fRowID+fNumberOfColumns])  # This is to convert 1D array to 2D

# ... and down below section is an example of how to read each element of the list
# and how to update it if I need so. That was also a problem before.
fRowID = 0
fColumnID = 0
for fRowID in range(fNumberOfRows):
    for fColumnID in range(fNumberOfColumns):
        if fColumnID == 0:
            fParameters[fRowID][fColumnID] = ""
        Message2Log("fParameters[" + str(fRowID) + "][" + str(fColumnID) + "] = " + str(fParameters[fRowID][fColumnID]))

CU

On Saturday, November 21, 2015 at 4:52:35 PM UTC+1, Nathan Hilterbrand wrote:
> On 11/21/2015 10:26 AM, BartC wrote:
> > On 21/11/2015 10:41, vostrus...@gmail.com wrote:
> >> Hi,
> >> I have a file with one parameter per line:
> >> a1
> >> b1
> >> c1
> >> a2
> >> b2
> >> c2
> >> a3
> >> b3
> >> c3
> >> ...
> >> The parameters are lines of characters (not numbers)
> >>
> >> I need to load it to 2D array for further manipulations.
> >> So far I managed to upload this file into 1D array:
> >>
> >> ParametersRaw = []
> >> with open(file1) as fh:
> >>     ParametersRaw = fh.readlines()
> >>     fh.close()
> >
> > I tried this code based on yours:
> >
> > with open("input") as fh:
> >     lines=fh.readlines()
> >
> > rows = len(lines)//3
> >
> > params=[]
> > index=0
> >
> > for row in range(rows):
> >     params.append([lines[index],lines[index+1],lines[index+2]])
> >     index += 3
> >
> > for row in range(rows):
> >     print (row,":",params[row])
> >
> > For the exact input you gave, it produced this output:
> >
> > 0 : ['a1\n', 'b1\n', 'c1\n']
> > 1 : ['a2\n', 'b2\n', 'c2\n']
> > 2 : ['a3\n', 'b3\n', 'c3\n']
> >
> > Probably you'd want to get rid of those \n characters. (I don't know
> > how off-hand as I'm not often write in Python.)
> >
> > The last bit could also be written:
> >
> > for param in params:
> >     print (params)
> >
> > but I needed the row index.
> >
> To get rid of the '\n' (lineend) characters:
>
> with open(file1) as fh:
>     ParametersRaw = [line.strip() for line in fh.readlines()]
>
> or, more succinctly..
>
> with open(file1) as fh:
>     ParametersRaw = [line.strip() for line in fh]
>
> Comprehensions are your friend.
>
> Nathan
--
https://mail.python.org/mailman/listinfo/python-list
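As an aside (an illustrative alternative, not from the thread), the 1D to 2D
conversion above can be written as a single list comprehension, and integer
division keeps fNumberOfRows an int on Python 3 as well:

fNumberOfRows = len(fRawData) // fNumberOfColumns
fParameters = [fRawData[i:i + fNumberOfColumns]
               for i in range(0, len(fRawData), fNumberOfColumns)]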
Re: Bi-directional sub-process communication
On Nov 23, 2015, at 11:51 AM, Ian Kelly wrote: > > On Mon, Nov 23, 2015 at 12:55 PM, Ian Kelly wrote: >> On Mon, Nov 23, 2015 at 10:54 AM, Israel Brewster >> wrote: >>> Concern: Since the master process is multi-threaded, it seems likely enough >>> that multiple threads on the master side would make requests at the same >>> time. I understand that the Queue class has locks that make this fine (one >>> thread will complete posting the message before the next is allowed to >>> start), and since the child process only has a single thread processing >>> messages from the queue, it should process them in order and post the >>> responses (if any) to the master_queue in order. But now I have multiple >>> master processes all trying to read master_queue at the same time. Again, >>> the locks will take care of this and prevent any overlapping reads, but am >>> I guaranteed that the threads will obtain the lock and therefore read the >>> responses in the right order? Or is there a possibility that, say, thread >>> three will get the response that should have been for thread one? Is this >>> something I need to take into consideration, and if so, how? >> >> Yes, if multiple master threads are waiting on the queue, it's >> possible that a master thread could get a response that was not >> intended for it. As far as I know there's no guarantee that the >> waiting threads will be woken up in the order that they called get(), >> but even if there are, consider this case: >> >> Thread A enqueues a request. >> Thread B preempts A and enqueues a request. >> Thread B calls get on the response queue. >> Thread A calls get on the response queue. >> The response from A's request arrives and is given to B. >> >> Instead of having the master threads pull objects off the response >> queue directly, you might create another thread whose sole purpose is >> to handle the response queue. That could look like this: >> >> >> request_condition = threading.Condition() >> response_global = None >> >> def master_thread(): >>global response_global >>with request_condition: >>request_queue.put(request) >>request_condition.wait() >># Note: the Condition should remain acquired until >> response_global is reset. >>response = response_global >>response_global = None >>if wrong_response(response): >>raise RuntimeError("got a response for the wrong request") >>handle_response(response) >> >> def response_thread(): >>global response_global >>while True: >>response = response_queue.get() >>with request_condition: >>response_global = response >>request_condition.notify() > > Actually I realized that this fails because if two threads get > notified at about the same time, they could reacquire the Condition in > the wrong order and so get the wrong responses. > > Concurrency, ugh. > > It's probably better just to have a Condition/Event per thread and > have the response thread identify the correct one to notify, rather > than just notify a single shared Condition and hope the threads wake > up in the right order. Tell me about it :-) I've actually never worked with conditions or notifications (actually even this bi-drectional type of communication is new to me), so I'll have to look into that and figure it out. Thanks for the information! > -- > https://mail.python.org/mailman/listinfo/python-list -- https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On 23Nov2015 12:22, Israel Brewster wrote:
> On Nov 23, 2015, at 11:51 AM, Ian Kelly wrote:
>> Concurrency, ugh.

I'm a big concurrency fan myself.

>> It's probably better just to have a Condition/Event per thread and
>> have the response thread identify the correct one to notify, rather
>> than just notify a single shared Condition and hope the threads wake
>> up in the right order.
>
> Tell me about it :-) I've actually never worked with conditions or
> notifications (actually even this bi-directional type of communication is
> new to me), so I'll have to look into that and figure it out. Thanks for
> the information!

I include a tag with every request, and have the responses include the tag;
the request submission function records the response handler in a mapping
by tag and the response handling thread looks up the mapping and passes the
response to the right handler.

Works just fine and avoids all the worrying about ordering etc.

Israel, do you have control over the protocol between you and your
subprocess? If so, adding tags is easy and effective.

Cheers,
Cameron Simpson
--
https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On Nov 23, 2015, at 12:45 PM, Cameron Simpson wrote:
> On 23Nov2015 12:22, Israel Brewster wrote:
>> On Nov 23, 2015, at 11:51 AM, Ian Kelly wrote:
>>> Concurrency, ugh.
>
> I'm a big concurrency fan myself.
>
>>> It's probably better just to have a Condition/Event per thread and
>>> have the response thread identify the correct one to notify, rather
>>> than just notify a single shared Condition and hope the threads wake
>>> up in the right order.
>>
>> Tell me about it :-) I've actually never worked with conditions or
>> notifications (actually even this bi-directional type of communication
>> is new to me), so I'll have to look into that and figure it out. Thanks
>> for the information!
>
> I include a tag with every request, and have the responses include the
> tag; the request submission function records the response handler in a
> mapping by tag and the response handling thread looks up the mapping and
> passes the response to the right handler.
>
> Works just fine and avoids all the worrying about ordering etc.
>
> Israel, do you have control over the protocol between you and your
> subprocess? If so, adding tags is easy and effective.

I do, and the basic concept makes sense. The one difficulty I am seeing is
getting back to the thread that requested the data. Let me know if this
makes sense or if I am thinking about it wrong:

- When a thread requests some data, it sends the request as a dictionary
  containing a tag (unique to the thread) as well as the request
- When the child processes the request, it encodes the response as a
  dictionary containing the tag and the response data
- A single, separate thread on the "master" side parses out responses as
  they come in and puts them into a dictionary keyed by tag
- The requesting threads, after putting the request into the Queue, would
  then block waiting for data to appear under their key in the dictionary

Of course, that last step could be interesting - implementing the block in
such a way as to not tie up the processor, while still getting the data "as
soon" as it is available. Unless there is some sort of built-in
notification system I could use for that? I.e. the thread would "subscribe"
to a notification based on its tag, and then wait for notification. When
the master processing thread receives data with said tag, it adds it to the
dictionary and "publishes" a notification to that tag. Or perhaps the
notification itself could contain the payload?

Thanks for the information!
--
https://mail.python.org/mailman/listinfo/python-list
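A rough sketch of how that subscribe/notify-by-tag idea can be built from a
per-request threading.Event (purely illustrative; child_queue and
master_queue are assumed to be the existing queues, and all other names are
made up):

import itertools
import threading

_tag_counter = itertools.count()
_pending = {}              # tag -> (Event, single-item list for the reply)
_pending_lock = threading.Lock()

def send_request(payload, timeout=3):
    """Called by any request thread; blocks until its own reply arrives."""
    tag = next(_tag_counter)
    event, slot = threading.Event(), []
    with _pending_lock:
        _pending[tag] = (event, slot)
    child_queue.put({'tag': tag, 'request': payload})
    if not event.wait(timeout):
        with _pending_lock:
            _pending.pop(tag, None)
        raise RuntimeError('no reply for tag %r within %s seconds' % (tag, timeout))
    return slot[0]

def reply_reader():
    """Runs in one dedicated master-side thread; routes replies by tag."""
    while True:
        msg = master_queue.get()
        with _pending_lock:
            entry = _pending.pop(msg['tag'], None)
        if entry is not None:
            event, slot = entry
            slot.append(msg['response'])
            event.set()  # wake exactly the thread that asked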
Re: Bi-directional sub-process communication
On Mon, Nov 23, 2015 at 2:18 PM, Israel Brewster wrote:
> Of course, that last step could be interesting - implementing the block
> in such a way as to not tie up the processor, while still getting the
> data "as soon" as it is available. Unless there is some sort of built-in
> notification system I could use for that? I.e. the thread would
> "subscribe" to a notification based on its tag, and then wait for
> notification. When the master processing thread receives data with said
> tag, it adds it to the dictionary and "publishes" a notification to that
> tag. Or perhaps the notification itself could contain the payload?

There are a few ways I could see handling this, without having the threads
spinning and consuming CPU:

1. Don't worry about having the follow-up code run in the same thread, and
   use a simple callback. This callback could be dispatched to a thread via
   a work queue, however you may not get the same thread as the one that
   made the request. This is probably the most efficient method to use, as
   the threads can continue doing other work while waiting for a reply,
   rather than blocking. It does make it harder to maintain state between
   the pre- and post-request functions, however.
2. Have a single, global, event variable that wakes all threads waiting on
   a reply, each of which then checks to see if the reply is for it, or
   goes back to sleep. This is good if most of the time, only a few threads
   will be waiting for a reply, and checking if the correct reply came in
   is cheap. This is probably good enough, unless you have a LOT of threads
   (hundreds).
3. Have an event per thread. This will use less CPU than the second option,
   however does require more memory and OS resources, and so will not be
   viable for huge numbers of threads, though if you hit the limit, you are
   probably using threads wrong.
4. Have an event per request. This is only better than #3 if a single
   thread may make multiple requests at once, and can do useful work when
   any of them get a reply back (if they need all, it will make no
   difference).

Generally, I would use option #1 or #2. Option 2 has the advantage of
making it easy to write the functions that use the functionality, while
option 1 will generally use fewer resources, and allows threads to continue
to be used while waiting for replies. How much of a benefit that is depends
on exactly what you are doing.

Option #4 would probably be better implemented using option #1 in all cases
to avoid problems with running out of OS memory - threading features
generally require more limited OS resources than memory. Option #3 will
also often run into the same issues as option #4 in the cases it will
provide any benefit over option #2.

Chris
--
https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On Nov 23, 2015, at 1:43 PM, Chris Kaynor wrote:
>
> On Mon, Nov 23, 2015 at 2:18 PM, Israel Brewster wrote:
>
>> Of course, that last step could be interesting - implementing the block in
>> such a way as to not tie up the processor, while still getting the data "as
>> soon" as it is available. Unless there is some sort of built-in
>> notification system I could use for that? I.e. the thread would "subscribe"
>> to a notification based on its tag, and then wait for notification. When
>> the master processing thread receives data with said tag, it adds it to the
>> dictionary and "publishes" a notification to that tag. Or perhaps the
>> notification itself could contain the payload?
>
> There are a few ways I could see handling this, without having the threads
> spinning and consuming CPU:
>
> 1. Don't worry about having the follow-up code run in the same thread, and
>    use a simple callback. This callback could be dispatched to a thread via
>    a work queue, however you may not get the same thread as the one that
>    made the request. This is probably the most efficient method to use, as
>    the threads can continue doing other work while waiting for a reply,
>    rather than blocking. It does make it harder to maintain state between
>    the pre- and post-request functions, however.
>
> 2. Have a single, global event variable that wakes all threads waiting on
>    a reply, each of which then checks to see if the reply is for it, or
>    goes back to sleep. This is good if, most of the time, only a few
>    threads will be waiting for a reply, and checking if the correct reply
>    came in is cheap. This is probably good enough, unless you have a LOT
>    of threads (hundreds).
>
> 3. Have an event per thread. This will use less CPU than the second
>    option, however it does require more memory and OS resources, and so
>    will not be viable for huge numbers of threads - though if you hit the
>    limit, you are probably using threads wrong.
>
> 4. Have an event per request. This is only better than #3 if a single
>    thread may make multiple requests at once, and can do useful work when
>    any of them gets a reply back (if it needs all of them, it will make
>    no difference).
>
> Generally, I would use option #1 or #2. Option 2 has the advantage of
> making it easy to write the functions that use the functionality, while
> option 1 will generally use fewer resources, and allows threads to
> continue to be used while waiting for replies. How much of a benefit that
> is depends on exactly what you are doing.

While I would agree with #1 in general, the threads in this case are
CherryPy threads, so I need to get the data and return it to the client in
the same function call, which of course means the thread needs to block
until the data is ready - it can't return and let the result be processed
"later".

Essentially, there are times when the web client needs some information that
only the child process has. So the web client requests the data from the
master process, and the master process then turns around and requests the
data from the child, but it needs to get the data back before it can return
it to the web client. So it has to block waiting for the data.

Thus we come to option #2 (or #3), which sounds good but which I have no
clue how to implement :-) Maybe something like http://pubsub.sourceforge.net ?
I'll dig into that.

> Option #4 would probably be better implemented using option #1 in all
> cases to avoid problems with running out of OS memory - threading features
> generally require more limited OS resources than memory. Option #3 will
> also often run into the same issues as option #4 in the cases where it
> provides any benefit over option #2.
>
> Chris
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
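For the record, option #2 can be implemented with a single threading.Condition
guarding a dict of replies keyed by request tag: every waiting request thread
re-checks for its own tag whenever the condition is notified. A rough,
untested sketch (the names reply_ready, pending_replies, wait_for_reply and
deliver_reply are made up for illustration):

import threading

reply_ready = threading.Condition()
pending_replies = {}   # tag -> response data from the child process

def wait_for_reply(tag):
    # Block the calling CherryPy request thread until the reply for `tag`
    # shows up, then remove it from the dict and return it.
    with reply_ready:
        while tag not in pending_replies:
            reply_ready.wait()
        return pending_replies.pop(tag)

def deliver_reply(tag, data):
    # Called by the single master thread that reads replies from the child:
    # store the payload and wake every waiter; each one re-checks its own tag.
    with reply_ready:
        pending_replies[tag] = data
        reply_ready.notify_all()

The CherryPy handler would call wait_for_reply(tag) right after sending its
request, and the one thread that reads from the child process would call
deliver_reply(tag, data) for each response it sees.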
Re: Bi-directional sub-process communication
On Nov 23, 2015, at 3:05 PM, Dennis Lee Bieber wrote:
>
> On Mon, 23 Nov 2015 08:54:38 -0900, Israel Brewster declaimed the following:
>
>> Concern: Since the master process is multi-threaded, it seems likely enough
>> that multiple threads on the master side would make requests at the same
>> time. I understand that the Queue class has locks that make
>
> Multiple "master" threads, to me, means you do NOT have a "master
> process".

But I do: the CherryPy "application", which has multiple threads - one per
request (and perhaps a few more), to be exact. It's these request threads
that generate the calls to the child process.

> Let there be a Queue for EVERY LISTENER.
>
> Send the Queue as part of the request packet.

No luck: "RuntimeError: Queue objects should only be shared between processes
through inheritance". This IS a master process, with multiple threads, trying
to communicate with a child process. That said, with some modifications this
sort of approach could still work.

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---

> Let the subthread reply to the queue that was provided via the packet.
>
> Voila! No intermixing of "master/slave" interaction; each slave only
> replies to the master that sent it a command; each master only receives
> replies from slaves it has commanded. Slaves can still be shared, as they
> are given the information of which master they need to speak with.
>
> --
> Wulfraed                 Dennis Lee Bieber         AF6VN
> wlfr...@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
>
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
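For what it's worth, that RuntimeError comes from trying to send a Queue
through an already-established channel (i.e. pickling it into a message);
multiprocessing.Queue objects created before the child is started and handed
to it at construction time are fine. A minimal, untested sketch (child_main,
requests and replies are made-up names):

import multiprocessing

def child_main(requests, replies):
    # Child process: answer each (tag, query) request until it sees None.
    for tag, query in iter(requests.get, None):
        replies.put((tag, "reply to %r" % (query,)))

if __name__ == "__main__":
    requests = multiprocessing.Queue()
    replies = multiprocessing.Queue()
    child = multiprocessing.Process(target=child_main,
                                    args=(requests, replies))
    child.start()

    requests.put((1, "status"))
    print(replies.get())        # -> (1, "reply to 'status'")

    requests.put(None)          # shutdown sentinel
    child.join()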
Re: Problem to read from array
On Monday, November 23, 2015 at 12:58:49 PM UTC-8, Crane Ugly wrote:
> Thank you all.
> Here is the last piece of code that caused me so much trouble but is now
> working the way I wanted:
>
> fRawData = []
> with open(fStagingFile2) as fStagingFile2FH:
>     # Read each element from the file and chop off the end-of-line character
>     fRawData = [line.strip() for line in fStagingFile2FH.readlines()]
>
> fNumberOfColumns = 7
> fNumberOfRows = len(fRawData)/fNumberOfColumns
>
> # Convert the 1D array to 2D
> fRowID = 0
> fParameters = []
> for fRowID in range(0, len(fRawData), fNumberOfColumns):
>     fParameters.append(fRawData[fRowID:fRowID+fNumberOfColumns])
>
> # ... and the section below is an example of how to read each element of
> # the list, and how to update it if I need to. That was also a problem
> # before.
> fRowID = 0
> fColumnID = 0
> for fRowID in range(fNumberOfRows):
>     for fColumnID in range(fNumberOfColumns):
>         if fColumnID == 0:
>             fParameters[fRowID][fColumnID] = ""
>         Message2Log("fParameters[" + str(fRowID) + "][" + str(fColumnID) +
>                     "] = " + str(fParameters[fRowID][fColumnID]))
>
> CU
>
> On Saturday, November 21, 2015 at 4:52:35 PM UTC+1, Nathan Hilterbrand wrote:
> > On 11/21/2015 10:26 AM, BartC wrote:
> > > On 21/11/2015 10:41, vostrus...@gmail.com wrote:
> > >> Hi,
> > >> I have a file with one parameter per line:
> > >> a1
> > >> b1
> > >> c1
> > >> a2
> > >> b2
> > >> c2
> > >> a3
> > >> b3
> > >> c3
> > >> ...
> > >> The parameters are lines of characters (not numbers).
> > >>
> > >> I need to load it into a 2D array for further manipulation.
> > >> So far I have managed to load this file into a 1D array:
> > >>
> > >> ParametersRaw = []
> > >> with open(file1) as fh:
> > >>     ParametersRaw = fh.readlines()
> > >> fh.close()
> > >
> > > I tried this code based on yours:
> > >
> > > with open("input") as fh:
> > >     lines = fh.readlines()
> > >
> > > rows = len(lines)//3
> > >
> > > params = []
> > > index = 0
> > >
> > > for row in range(rows):
> > >     params.append([lines[index], lines[index+1], lines[index+2]])
> > >     index += 3
> > >
> > > for row in range(rows):
> > >     print(row, ":", params[row])
> > >
> > > For the exact input you gave, it produced this output:
> > >
> > > 0 : ['a1\n', 'b1\n', 'c1\n']
> > > 1 : ['a2\n', 'b2\n', 'c2\n']
> > > 2 : ['a3\n', 'b3\n', 'c3\n']
> > >
> > > Probably you'd want to get rid of those \n characters. (I don't know
> > > how off-hand, as I don't often write in Python.)
> > >
> > > The last bit could also be written:
> > >
> > > for param in params:
> > >     print(param)
> > >
> > > but I needed the row index.
> >
> > To get rid of the '\n' (line-end) characters:
> >
> > with open(file1) as fh:
> >     ParametersRaw = [line.strip() for line in fh.readlines()]
> >
> > or, more succinctly:
> >
> > with open(file1) as fh:
> >     ParametersRaw = [line.strip() for line in fh]
> >
> > Comprehensions are your friend.
> >
> > Nathan

What is the significance of prefixing all your variables with "f"? I've
frequently seen people use it to signify that a variable is a float, but
you're using it for things that aren't floats.
--
https://mail.python.org/mailman/listinfo/python-list
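As an aside, the strip-and-chunk steps above can be collapsed into two list
comprehensions; a small untested sketch, with a made-up file name:

# Read the file, strip newlines, and group the flat list into rows of
# `columns` items each.
columns = 7

with open("staging_file.txt") as fh:
    flat = [line.strip() for line in fh]

rows = [flat[i:i + columns] for i in range(0, len(flat), columns)]

Note that on Python 3, len(fRawData)/fNumberOfColumns yields a float, so
integer division (//) - or stepping through range() as above - is needed
before the row count is passed to range().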
Re: pip does not work anymore
On Sunday, November 22, 2015 at 11:59:13 PM UTC-8, Cecil Westerhof wrote: > When I try to install something with pip2 I get: > Traceback (most recent call last): > File "/usr/bin/pip2", line 9, in > load_entry_point('pip==7.1.2', 'console_scripts', 'pip2')() > File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line > 558, in load_entry_point > return get_distribution(dist).load_entry_point(group, name) > File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line > 2682, in load_entry_point > return ep.load() > File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line > 2355, in load > return self.resolve() > File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line > 2361, in resolve > module = __import__(self.module_name, fromlist=['__name__'], level=0) > File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 15, in > > from pip.vcs import git, mercurial, subversion, bazaar # noqa > File "/usr/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, > in > from pip.index import Link > File "/usr/lib/python2.7/site-packages/pip/index.py", line 30, in > > from pip.wheel import Wheel, wheel_ext > File "/usr/lib/python2.7/site-packages/pip/wheel.py", line 35, in > > from pip._vendor.distlib.scripts import ScriptMaker > File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/scripts.py", > line 14, in > from .compat import sysconfig, detect_encoding, ZipFile > File "/usr/lib/python2.7/site-packages/pip/_vendor/distlib/compat.py", > line 31, in > from urllib2 import (Request, urlopen, URLError, HTTPError, > ImportError: cannot import name HTTPSHandler > > With pip3 I get the same problem. > > It looks like openSUSE has done an upgrade in which it disabled ssl2. > How can I get pip2/3 working again? > > -- > Cecil Westerhof > Senior Software Engineer > LinkedIn: http://www.linkedin.com/in/cecilwesterhof SSL2 *should* be disabled. It was deprecated nearly 20 years ago, replaced by SSL3, and even THAT has been deprecated. SSL2 is incredibly insecure. -- https://mail.python.org/mailman/listinfo/python-list
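As a quick diagnostic (not openSUSE-specific): urllib2 only defines
HTTPSHandler when the interpreter's own ssl module is importable, so checking
that directly usually shows whether Python was built or packaged without SSL
support. Run this with the same interpreters that pip2/pip3 use:

# If this raises ImportError, the interpreter has no SSL support, which is
# exactly what produces pip's "cannot import name HTTPSHandler" error.
import ssl
print(ssl.OPENSSL_VERSION)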
tuples in conditional assignment
The following code has bitten me recently:

>>> t = (0, 1)
>>> x, y = t if t else 8, 9
>>> print(x, y)
(0, 1) 9

I was assuming that a comma has the highest order of evaluation, that is,
that the expression 8, 9 should make a tuple. Why is this not the case?

George
--
https://mail.python.org/mailman/listinfo/python-list
Re: tuples in conditional assignment
George Trojan writes:

> The following code has bitten me recently:
>
> >>> t = (0, 1)
> >>> x, y = t if t else 8, 9
> >>> print(x, y)
> (0, 1) 9

You can simplify this by taking assignment out of the picture::

    >>> t = (0, 1)
    >>> t if t else 8, 9
    ((0, 1), 9)

So that's an “expression list” containing a comma. The reference for
expressions tells us::

    An expression list containing at least one comma yields a tuple. The
    length of the tuple is the number of expressions in the list.

    <https://docs.python.org/3/reference/expressions.html#expression-lists>

> I was assuming that a comma has the highest order of evaluation

You were? The operator precedence rules don't even mention comma as an
operator, so why would you assume that?

<https://docs.python.org/3/reference/expressions.html#operator-precedence>

> that is, that the expression 8, 9 should make a tuple. Why is this not
> the case?

I'm not sure why it's the case that you assumed that :-)

My practical advice: I don't bother trying to remember the complete operator
precedence rules. My simplified precedence rules are:

* ‘+’, ‘-’ have the same precedence.
* ‘*’, ‘/’, ‘//’ have the same precedence.
* For anything else: use parentheses to explicitly declare the precedence I
  want.

Related: When an expression has enough clauses that it's not *completely
obvious* what's going on, break it up by assigning some sub-parts to
temporary, well-chosen, descriptive names (not ‘t’).

--
 \        “It is far better to grasp the universe as it really is than to
  `\      persist in delusion, however satisfying and reassuring.” —Carl
_o__)     Sagan
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
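To make the intended grouping explicit, parenthesising the tuple in the else
branch gives the behaviour George expected:

>>> t = (0, 1)
>>> x, y = t if t else (8, 9)    # parentheses make "8, 9" one operand
>>> x, y
(0, 1)
>>> x, y = () if () else (8, 9)  # with a falsey value, the (8, 9) branch wins
>>> x, y
(8, 9)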
Re: Bi-directional sub-process communication
On 24Nov2015 16:25, Cameron Simpson wrote:
> Completely untested example code:
>
> class ReturnEvent:
>     def __init__(self):
>         self.event = Event()

With, of course:

    def wait(self):
        return self.event.wait()

Cheers,
Cameron Simpson

Maintainer's Motto: If we can't fix it, it ain't broke.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Bi-directional sub-process communication
On 23Nov2015 14:14, Israel Brewster wrote:
> On Nov 23, 2015, at 1:43 PM, Chris Kaynor wrote:
>> On Mon, Nov 23, 2015 at 2:18 PM, Israel Brewster wrote:
>> 3. Have an event per thread. This will use less CPU than the second
>>    option, however does require more memory and OS resources, and so will
>>    not be viable for huge numbers of threads, though if you hit the limit,
>>    you are probably using threads wrong.
> [...]
> While I would agree with #1 in general, the threads, in this case, are
> CherryPy threads, so I need to get the data and return it to the client in
> the same function call, which of course means the thread needs to block
> until the data is ready - it can't return and let the result be processed
> "later".

Then #3. I would have a common function/method for submitting a request to
go to the subprocess, and have that method return an Event on which to wait.
The caller then just waits for the Event and collects the data.

Obviously, the method does not just return the Event, but an Event and
something to receive the return data. I've got a class called Result for
this kind of thing; make a small class containing an Event and which will
have a .result attribute for the return information; the submitting method
allocates one of these and returns it. The response handler gets the
instance (by looking it up from the tag), sets the .result attribute and
fires the Event. Your caller wakes up from waiting on the Event and consults
the .result attribute.

Completely untested example code:

class ReturnEvent:
    def __init__(self):
        self.event = Event()

seq = 0
re_by_tag = {}

def submit_request(query):
    global seq, re_by_tag
    tag = seq
    seq += 1
    RE = ReturnEvent()
    re_by_tag[tag] = RE
    send_request(tag, query)
    return RE

def process_response(tag, response_data):
    RE = re_by_tag.pop(tag)
    RE.result = response_data
    RE.event.set()

... CherryPy request handler ...
RE = submit_request(your_query_info)
RE.wait()
response_data = RE.result

Cheers,
Cameron Simpson
--
https://mail.python.org/mailman/listinfo/python-list
3.5 64b windows
I just installed the most recent Python 3.5 (downloaded and installed
11/23/15), and when starting it via the Windows Start shortcut (Win 8.1 Pro)
I always get this error. I navigated to the program directory (it installed
in C:\Users\my-ID\AppData\Local\Programs\Python\Python35) and started
python.exe, and that fails as well (as does pythonw.exe - silently; nothing
even pops up!).
--
https://mail.python.org/mailman/listinfo/python-list
[argparse] optional parameter without --switch
I want to call (from bash) a Python script in these two ways without any
error:

  ./arg.py
  ./arg.py TEST

That means the parameter (here with the value `TEST`) should be optional.
With argparse I only know a way to create optional parameters when they have
a switch (like `--name`). Is there a way to fix that?

#!/usr/bin/env python3
import sys
import argparse

parser = argparse.ArgumentParser(description=__file__)

# must have
#parser.add_argument('name', metavar='NAME', type=str)

# optional, BUT with a switch I don't want
#parser.add_argument('--name', metavar='NAME', type=str)

# store all arguments in objects/variables of the local namespace
locals().update(vars(parser.parse_args()))

print(name)
sys.exit()

--
GnuPGP-Key ID 0751A8EC
--
https://mail.python.org/mailman/listinfo/python-list
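In case a concrete illustration helps: a positional argument becomes optional
when it is declared with nargs='?' (plus an optional default). A small sketch
along the lines of the script above, using args.name rather than the
locals().update() trick:

#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser(description=__file__)

# nargs='?' lets the positional be omitted; the default is used in that case.
parser.add_argument('name', metavar='NAME', type=str, nargs='?',
                    default=None)

args = parser.parse_args()
print(args.name)    # ./arg.py -> None, ./arg.py TEST -> TEST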
[no subject]
Hi. I have installed Python 3.5 on my PC, but I cannot see any application
icon or application shortcut on my PC.

Sent from Mail for Windows 10
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to remember last position and size (geometry) of PyQt application?
This question was reasked and answered on StackOverflow: http://stackoverflow.com/q/33869721/939986 -- https://mail.python.org/mailman/listinfo/python-list
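For readers who don't want to follow the link: the usual approach is to
persist the window geometry with QSettings, saving it in closeEvent() and
restoring it at startup. A rough, untested PyQt5 sketch (the
organisation/application names are placeholders; adjust imports for PyQt4 if
needed):

import sys
from PyQt5 import QtCore, QtWidgets

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # Placeholder organisation/application names - use your own.
        self.settings = QtCore.QSettings("MyOrg", "MyApp")
        geometry = self.settings.value("geometry")
        if geometry is not None:
            # Reapply the last saved size and position.
            self.restoreGeometry(geometry)

    def closeEvent(self, event):
        # Save the current size and position before the window goes away.
        self.settings.setValue("geometry", self.saveGeometry())
        super().closeEvent(event)

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    win = MainWindow()
    win.show()
    sys.exit(app.exec_())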
[no subject]
I installed Python 3.5.0 and ran the installer, and my antivirus detected a
trojan. Can you explain? I'm new to this language.
--
https://mail.python.org/mailman/listinfo/python-list
Re: 3.5 64b windows
On Tue, Nov 24, 2015 at 11:58 AM, Bernard Baillargeon wrote: > I'd just installed py3.5 most recent (downloaded, installed 11/23/15) and > when starting via the windows start (win 8.1pro) shortcut, I always get this > error. If you attached something, it didn't arrive. Can you type in the text of what's happening, please? ChrisA -- https://mail.python.org/mailman/listinfo/python-list