Re: authentication with python-ldap
Jorge Alberto Diaz Orozco wrote at 2013-5-25 14:00 -0400:
>I have been doing the same thing and I tried to use java for testing the
>credentials and they are correct. It works perfectly with java.
>I really don't know what we're doing wrong.

Neither do I. But the error message definitely originates from the LDAP server. This means that the server sees different things for the (successful) Java connection and the (unsuccessful) Python connection. Maybe you can convince your LDAP server administrator to configure a form of logging that allows you to compare the two requests (this may not be easy, because sensitive information is involved). Comparing the requests may provide valuable clues about the cause of the problem.

One may also try some guesswork: there is an important difference between Java and Python 2. Java uses unicode as the typical type for text variables, while in Python 2 you normally use the type "str" for text. "str" means not unicode but encoded text. When the Java LDAP bridge passes text to the LDAP server, it must encode the text -- and apparently it uses the correct encoding (the one the LDAP server expects). The Python LDAP bridge, on the other hand, does not get unicode but "str" and likely passes the "str" values on directly. Thus, if your "str" values do not use the correct encoding (the one expected by the LDAP server), things will not work out correctly. I expect the LDAP server to expect the "utf-8" encoding. In that case, problems would only show up when the data passed to the LDAP server contains non-ASCII characters; all-ASCII data should be unaffected. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
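Dieter's guess about the encoding mismatch can be illustrated without an LDAP server at all. This is only a sketch of the suspected problem; the password value is made up:

```python
# -*- coding: utf-8 -*-
# A server that expects UTF-8 sees different bytes when the client
# sends the same characters in another encoding such as Latin-1.

password = u"M\u00fcller"     # "Müller" -- non-ASCII text

utf8_bytes = password.encode("utf-8")      # what the server expects
latin1_bytes = password.encode("latin-1")  # what a mis-encoded client sends

# The byte sequences differ, so the server sees two different passwords:
print(utf8_bytes)    # b'M\xc3\xbcller'
print(latin1_bytes)  # b'M\xfcller'

# ASCII-only text is identical in both encodings, which is why the
# problem only shows up with non-ASCII characters:
assert u"secret".encode("utf-8") == u"secret".encode("latin-1")
```

If the theory holds, encoding the credentials to UTF-8 before handing them to python-ldap should make the Java and Python requests identical.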
Re: Strange threading behaviour
Rotwang writes:

> Hi all, I'm using Python 2.7.2 on Windows 7 and a module I've written
> is acting strangely. I can reproduce the behaviour in question with
> the following:
>
> --- begin bugtest.py ---
>
> import threading, Tkinter, os, pickle
>
> class savethread(threading.Thread):
>     def __init__(self, value):
>         threading.Thread.__init__(self)
>         self.value = value
>     def run(self):
>         print 'Saving:',
>         with open(os.path.join(os.getcwd(), 'bugfile'), 'wb') as f:
>             pickle.dump(self.value, f)
>         print 'saved'
>
> class myclass(object):
>     def gui(self):
>         root = Tkinter.Tk()
>         root.grid()
>         def save(event):
>             savethread(self).start()
>         root.bind('s', save)
>         root.wait_window()
>
> m = myclass()
> m.gui()
>
> --- end bugtest.py ---
>
> Here's the problem: suppose I fire up Python and type
>
>     import bugtest
>
> and then click on the Tk window that spawns and press 's'. Then
> 'Saving:' gets printed, and an empty file named 'bugfile' appears in
> my current working directory. But nothing else happens until I close
> the Tk window; as soon as I do so the file is written to and 'saved'
> gets printed. If I subsequently type
>
>     bugtest.m.gui()
>
> and then click on the resulting window and press 's', then 'Saving:
> saved' gets printed and the file is written to immediately, exactly as
> I would expect. Similarly if I remove the call to m.gui from the
> module and just call it myself after importing then it all works
> fine. But it seems as if calling the gui within the module itself
> somehow stops savethread(self).run from finishing its job while the
> gui is still alive.
>
> Can anyone help?

It looks as if some waiting operation in the "wait_window" call did not release the GIL (the "Global Interpreter Lock", which ensures that at most one Python thread can run at a given time and protects the Python data structures such as the reference counts and interpreter state). In this case, you could expect some non-deterministic behaviour. 
If your thread is fast enough to finish before the internal activity inside "wait_window" gets the GIL again, everything completes immediately; otherwise, things complete only after the internal waiting ends and Python code is executed again. It might well be that "Tkinter" has not been designed for a multithreaded environment; alternatively, there might be a bug. If "Tkinter" truly supports multithreaded applications, any call into "tk" would need to release the GIL and any callback into Python reacquire it. Strange things of the kind you observe can happen when this is forgotten in a single place. -- http://mail.python.org/mailman/listinfo/python-list
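Whatever the underlying cause, a common defensive pattern is to keep all Tkinter calls in the main thread and let worker threads hand their results back through a queue. The sketch below shows only the thread/queue half of that pattern -- Tk itself is omitted, and in a real GUI the blocking `get` would be a non-blocking poll inside `root.after(...)`:

```python
import threading
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

results = queue.Queue()

def save_worker(value):
    # ... the slow pickling work would happen here ...
    results.put(('saved', value))

t = threading.Thread(target=save_worker, args=(42,))
t.start()
t.join()

# In a real GUI this would be polled periodically via root.after(),
# never blocking the event loop; here we just drain the queue directly.
status, value = results.get(timeout=1)
print(status, value)
```

The worker never touches any Tk object; only the main thread does, which sidesteps the question of whether Tkinter is thread-safe at all.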
Re: A question on os.path.join in POSIX systems
Kushal Das writes:

> There is a comment on posixpath.join saying "Ignore the previous parts
> if a part is absolute."

It means: "join(something, abspath) == abspath" whenever "abspath" is an absolute path.

> Is this defined in the POSIX spec ? If yes, then can someone please
> point me to a link where I can read about it ?

It has nothing to do with POSIX. It just describes a sensible behavior for "join". -- http://mail.python.org/mailman/listinfo/python-list
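The rule is easy to see interactively (using `posixpath` directly so the behaviour is the same on any platform):

```python
import posixpath

# Relative components accumulate as usual:
assert posixpath.join('/usr', 'local', 'bin') == '/usr/local/bin'

# But once a component is absolute, everything before it is discarded:
assert posixpath.join('/usr/local', '/etc/passwd') == '/etc/passwd'
assert posixpath.join('relative/dir', '/absolute') == '/absolute'
```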
Re: Can't understand python C apis
gmspro writes:

> I'm trying to understand the source code of python and how it works
> internally.
> But i can't understand the python C apis.

Usually, you try to understand the Python C API in order to write extensions for Python in C (e.g. to interface with an existing C library or to optimize a tight loop). If this is the case for you, then there is an alternative: "Cython". "Cython" is actually a compiler which compiles an extended Python source language (Python + type/variable declarations + extension types) into C. With its help, you can create C extensions for Python without needing to know all the details of the Python C API. It might still become necessary at some point to understand more of the API, but it will likely take considerable time to reach that point -- and by then you will already be more familiar with the territory, which should make the understanding easier. -- http://mail.python.org/mailman/listinfo/python-list
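To give a flavour of what that extended source language looks like, here is a small, hypothetical `example.pyx` (the file and function names are made up; building it requires the Cython toolchain and a C compiler):

```cython
# example.pyx -- a tight numeric loop with C type declarations.
# The cdef declarations let Cython generate plain C arithmetic
# instead of Python object operations.
def integrate_squares(double a, double b, int n):
    cdef double dx = (b - a) / n
    cdef double total = 0.0
    cdef int i
    for i in range(n):
        total += (a + i * dx) ** 2 * dx
    return total
```

Remove the `cdef` lines and the body is ordinary Python -- that is the point: you can start from working Python code and add declarations only where profiling says it matters.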
Re: Is there any way to decode String using unknown codec?
howmuchisto...@gmail.com writes:

> I'm a Korean and when I use modules like sys, os, &c,
> sometimes the interpreter show me broken strings like
> '\x13\xb3\x12\xc8'.
> It mustbe the Korean "alphabet" but I can't decode it to the rightway.
> I tried to decode it using codecs like cp949,mbcs,utf-8
> but It failed.
> The only way I found is eval('\x13\xb3\x12\xc8').

This looks as if "sys.stdout/sys.stderr" knew the correct encoding. Check it like this:

    import sys
    sys.stdout.encoding

-- http://mail.python.org/mailman/listinfo/python-list
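The point of the check is that text only round-trips through the codec that actually produced the bytes. A small illustration with Korean text ("cp949" is the usual Windows codec for Korean; the sample string here is my own, not the poster's bytes):

```python
# -*- coding: utf-8 -*-
import sys

# What encoding does the interpreter believe the console uses?
print(sys.stdout.encoding)

# Korean text round-trips only through the matching codec, and the
# cp949 bytes differ from the utf-8 bytes for the same characters:
text = u'\ud55c\uae00'   # "Hangul" written in Hangul
assert text.encode('cp949').decode('cp949') == text
assert text.encode('cp949') != text.encode('utf-8')
```

So once `sys.stdout.encoding` reveals the console's codec, the mystery bytes can be decoded with exactly that codec.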
Re: Using a CMS for small site?
Gilles writes:

> The site is just...
> - a few web pages that include text (in four languages) and pictures
>   displayed in a Flash slide show
> - a calendar to show availability
> - a form to send e-mail with anti-SPAM support
> - (ASAP) online payment
>
> Out of curiosity, are there CMS/frameworks in Python that can do this?
> Django? Other?

There is also "Plone" (http://plone.org) -- easy to set up. You will likely need third-party extensions for the anti-SPAM support and the online payment. Unfortunately, "Plone" is quite resource hungry -- in particular, it wants quite a lot of memory. -- http://mail.python.org/mailman/listinfo/python-list
Re: adding a simulation mode
andrea crotti writes:

> I'm writing a program which has to interact with many external
> resources, at least:
> - mysql database
> - perforce
> - shared mounts
> - files on disk
>
> And the logic is quite complex, because there are many possible paths to
> follow depending on some other parameters.
> This program even needs to run on many virtual machines at the same time
> so the interaction is another thing I need to check...
>
> Now I successfully managed to mock the database with sqlalchemy and only
> the fields I actually need, but I now would like to simulate also
> everything else.

There is a paradigm called "inversion of control" which can be used to handle those requirements. With "inversion of control", the components interact on the basis of interfaces. The components themselves do not know each other; they know only the interfaces they want to interact with. For the interaction to really take place, a component asks a registry "give me a component satisfying this interface", gets it and uses the interface. If you follow this paradigm, it is easy to switch components: just register different alternatives for the interface at hand.

"zope.interface" and "zope.component" are Python packages that support this paradigm. Despite the "zope" in their name, they can be used outside of "Zope". "zope.interface" models interfaces, while "zope.component" provides so-called "utilities" (e.g. a "database utility", a "filesystem utility", ...) and "adapters", together with the corresponding registries. Of course, they contain only the infrastructure for the "inversion of control" paradigm; it is up to you to provide the implementation of the various mocks. -- http://mail.python.org/mailman/listinfo/python-list
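The registry idea can be sketched in a few lines of plain Python -- this is a deliberately minimal, hand-rolled stand-in for what zope.interface/zope.component provide in a much richer form (all names here are made up for illustration):

```python
# A minimal component registry: interfaces are just keys here;
# zope.interface would model them as real objects with declarations.
_registry = {}

def provide(interface, component):
    _registry[interface] = component

def get_utility(interface):
    return _registry[interface]

# Two interchangeable implementations of a "database" interface:
class RealDatabase(object):
    def query(self):
        return 'real rows'

class MockDatabase(object):
    def query(self):
        return 'mock rows'

# Production wires up the real component, the simulation wires up the
# mock -- consuming code only ever asks the registry:
provide('database', MockDatabase())
db = get_utility('database')
print(db.query())
```

The consuming code never imports `RealDatabase` or `MockDatabase` directly, which is exactly what makes swapping in simulations cheap.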
Re: Question about weakref
Frank Millman writes: > I have a situation where I thought using weakrefs would save me a bit > of effort. Instead of the low level "weakref", you might use a "WeakKeyDictionary". -- http://mail.python.org/mailman/listinfo/python-list
Re: Question about weakref
Frank Millman writes:

> On 05/07/2012 10:46, Dieter Maurer wrote:
>> Instead of the low level "weakref", you might use a "WeakKeyDictionary".
>
> Thanks, Dieter. I could do that.
>
> In fact, a WeakSet suits my purposes better. I tested it with my
> original example, and it works correctly. It also saves me the step of
> deleting the weak reference once the original object is deleted, as it
> seems to do that automatically.
>
> I just need to double-check that I would never have the same
> listener-object try to register itself with the publisher twice, as
> that would obviously fail with a Set, as it would with a Dict.

No need to verify. A secondary subscription would effectively be a no-operation -- with both a "set" and a "dict".

> I would still like to know why weakref.proxy raised an exception. I
> have re-read the manual several times, and googled for similar
> problems, but am none the wiser.

In fact, it is documented. Accessing a proxy will raise an exception when the proxied object no longer exists. What you can ask is why your proxy has been accessed after the object was deleted. The documentation is specific: during the callback, the object should still exist. Thus, apparently, one of your proxies outlived an event that should have deleted it (probably a hole in your logic). -- http://mail.python.org/mailman/listinfo/python-list
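Both properties discussed here -- the harmless double registration and the automatic cleanup -- are easy to demonstrate (the `Listener` class is a made-up stand-in for the real subscriber objects):

```python
import gc
import weakref

class Listener(object):
    pass

subscribers = weakref.WeakSet()

listener = Listener()
subscribers.add(listener)
subscribers.add(listener)      # second add is a no-op, not an error
assert len(subscribers) == 1

del listener
gc.collect()                   # make collection deterministic
assert len(subscribers) == 0   # the entry vanished with the object
```

Note that on implementations without reference counting the `gc.collect()` call is what makes the second assertion reliable.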
Re: adding a simulation mode
andrea crotti wrote at 2012-7-12 14:20 +0100:
>One thing that I don't quite understand is why some calls even if I
>catch the exception still makes the whole program quit.
>For example this
>
>    try:
>        copytree('sjkdf', 'dsflkj')
>        Popen(['notfouhd'], shell=True)
>    except Exception as e:
>        print("here")
>
>behaves differently from:
>
>    try:
>        Popen(['notfouhd'], shell=True)
>        copytree('sjkdf', 'dsflkj')
>    except Exception as e:
>        print("here")
>
>because if copytree fails it quits anyway.
>I also looked at the code but can't quite get why.. any idea?

There are ways to quit a program immediately without giving exception handlers a chance to intervene -- though Python does not make this easy. Your code above should not do this. If it does, there is likely a bug.

You told us that the two alternatives above behaved differently -- I expect 'behaved differently with respect to the printing of "here"'. If you told us which alternative printed "here" and which did not, we could deduce whether "Popen" or "copytree" caused the immediate exit.

"Popen" might contain a call to "os._exit" (one of the ways to quit immediately) -- though it should only call it in the forked child, not in the calling process. "copytree" might under exceptional circumstances (extremely deeply nested structures -- surely not for non-existent source and target) cause a stack overflow (which, too, can lead to immediate death). In addition, "Popen" and maybe even "copytree" may call platform-dependent functions. Thus, platform information could be relevant.

Under *nix, you should be able to get some information from the exit code of a suddenly quitting process. It tells you whether the process died from a fatal signal (a stack overflow would result in the fatal SIGSEGV) or whether it exited willingly with an exit code. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
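The "os._exit bypasses exception handlers" point is easy to demonstrate from the outside, by running a child interpreter and inspecting its exit code (shown here with the modern `subprocess.run`; the exit code 3 is arbitrary):

```python
import subprocess
import sys

# The except clause never runs: os._exit() ends the process immediately.
code = ("try:\n"
        "    import os\n"
        "    os._exit(3)\n"
        "except Exception:\n"
        "    print('here')\n")

proc = subprocess.run([sys.executable, '-c', code],
                      capture_output=True, text=True)
print(proc.returncode)    # 3 -- the willing exit code
print(repr(proc.stdout))  # '' -- "here" was never printed
```

On *nix, a process killed by a signal instead shows up with a negative `returncode` (e.g. -11 for SIGSEGV), which is exactly the diagnostic distinction Dieter describes.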
Re: howto do a robust simple cross platform beep
Steven D'Aprano writes: >> How do others handle simple beeps? >> >> I just want to use them as alert, when certain events occur within a >> very long running non GUI application. > > Why? Do you hate your users? I, too, would find it useful -- for me (although I do not hate myself). Surely, you know an alarm clock. Usually, it gives an audible signal when it is time to do something. A computer can in principle be used as a flexible alarm clock - but it is not so easy with the audible signal... An audible signal has the advantage (over a visual one) that you can recognize it even when you are not looking at the screen (because you are thinking). Unfortunately, I had to give up. My new computer lacks a working speaker... -- http://mail.python.org/mailman/listinfo/python-list
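For what it is worth, the most portable "simple beep" is still the ASCII BEL character -- with exactly the caveat from this thread that whether anything is audible depends on the terminal and on the machine having a speaker at all. A small sketch (the `beep` helper is my own):

```python
import io
import sys

def beep(stream=None):
    """Write the ASCII BEL character; most terminals render it as a beep.

    Audibility depends on terminal settings and on the machine actually
    having a working speaker, as the posters note.
    """
    if stream is None:
        stream = sys.stdout
    stream.write('\a')
    stream.flush()

# The bell is just byte 0x07:
assert '\a' == chr(7)

# Captured here instead of sounded, for demonstration:
buf = io.StringIO()
beep(buf)
print(repr(buf.getvalue()))
```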
Re: [OT] Simulation Results Managment
moo...@yahoo.co.uk writes: > ... > Does pickle have any advantages over json/yaml? It can store and retrieve almost any Python object with almost no effort. Up to you whether you see it as an advantage to be able to store objects rather than (almost) pure data with a rather limited type set. Of course, "pickle" is a proprietary Python format. Not so easy to decode it with something else than Python. In addition, when you store objects, the retrieving application must know the classes of those objects -- and its knowledge should not be too different from how those classes looked when the objects have been stored. I like very much to work with objects (rather than with pure data). Therefore, I use "pickle" when I know that the storing and retrieving applications all use Python. I use pure (and restricted) data formats when non Python applications come into play. -- http://mail.python.org/mailman/listinfo/python-list
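The trade-off Dieter describes in one screenful (the `Point` class is a made-up example):

```python
import json
import pickle

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# pickle stores the object with no effort and restores its class --
# provided the retrieving side can import that class:
q = pickle.loads(pickle.dumps(p))
assert isinstance(q, Point) and (q.x, q.y) == (1, 2)

# json only knows a small set of pure data types:
try:
    json.dumps(p)
except TypeError:
    print('json cannot serialize arbitrary objects')

# ... unless you flatten to plain data yourself, which is what makes
# the result readable from non-Python applications:
print(json.dumps({'x': p.x, 'y': p.y}))
```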
Re: A thread import problem
Bruce Sherwood writes: > I'm trying to do something rather tricky, in which a program imports a > module that starts a thread that exec's a (possibly altered) copy of > the source in the original program, and the module doesn't return. > This has to do with an attempt to run VPython in the Mac Cocoa > context, in which Cocoa is required to be the primary thread, making > it necessary to turn the environment inside out, as currently VPython > invokes the Carbon context as a secondary thread. > > I've created a simple test case, displayed below, that illustrates > something I don't understand. The module reads the source of the > program that imported it, comments out the import statement in that > source, and performs an exec of the modified source. The module then > enters an infinite loop, so that there is no return to the original > program; only the exec-ed program runs, and it runs in a secondary > thread. > > The puzzle is that if there is any later import statement in the exec > source, the exec program halts on that import statement, with no error > message. I saw a discussion that suggested a need for the statement > "global math" to make the math import work, but that doesn't fix the > problem. I've tried with no success various versions of the exec > statement, with respect to its global and local environment. In a recent discussion in this list someone mentioned that on module import, you should not start a thread. The reason: apparently, Python uses some kind of locking during import which can interfere with "import"s in the started thread. You can (in principle) easily avoid starting the thread on module import. Instead of starting the thread as a side effect of the import, put the start in a function, import the module and then call the thread starting function. -- http://mail.python.org/mailman/listinfo/python-list
Re: properly catch SIGTERM
Eric Frederich writes:

> ...
> This seems to work okay but just now I got this while hitting ctrl-c.
> It seems to have caught the signal at or in the middle of a call to
> sys.stdout.flush()
>
>     --- Caught SIGTERM; Attempting to quit gracefully ---
>     Traceback (most recent call last):
>       File "/home/user/test.py", line 125, in <module>
>         sys.stdout.flush()
>     IOError: [Errno 4] Interrupted system call
>
> How should I fix this?

This is normal *nix behavior. Any signal, even if caught by a signal handler, can interrupt system calls. Modern *nix versions may allow you to control whether a signal interrupts a system call or not. Check the signal documentation for your operating system for the control you have over signal handling. Likely, you cannot directly control the feature via Python, but the "ctypes" module allows you to call C functions directly. -- http://mail.python.org/mailman/listinfo/python-list
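The classic pure-Python fix is simply to retry the call when it fails with EINTR. (Python 3.5+ does this automatically for most system calls, per PEP 475, so the wrapper below is what older code had to do by hand; `flaky_flush` simulates the interrupted call.)

```python
import errno

def retry_on_eintr(func, *args, **kwargs):
    """Call func, retrying as long as it fails with EINTR."""
    while True:
        try:
            return func(*args, **kwargs)
        except (IOError, OSError) as exc:
            if exc.errno != errno.EINTR:
                raise  # a real error -- propagate it

# Simulated flush that is interrupted once and then succeeds:
calls = []
def flaky_flush():
    calls.append(1)
    if len(calls) == 1:
        raise IOError(errno.EINTR, 'Interrupted system call')
    return 'flushed'

result = retry_on_eintr(flaky_flush)
print(result)
```

In the original script one would write `retry_on_eintr(sys.stdout.flush)` at the spot where the traceback occurred.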
Re: A thread import problem
Bruce Sherwood writes:

> ...
> from visual import box, rate
> b = box()
> while True:
>     rate(100) # no more than 100 iterations per second
>     b.pos.x += .01
>
> This works because a GUI environment is invoked by the visual module
> in a secondary thread (written mainly in C++, connected to Python by
> Boost). The OpenGL rendering of the box in its current position is
> driven by a 30-millisecond timer. This works fine with any environment
> other than Mac Cocoa.
>
> However, the Mac Cocoa GUI environment and interact loop are required
> to be the primary thread, so the challenge is to have the visual
> module set up the Cocoa environment, with the user's program running
> in a secondary thread. Any ideas?

The usual approach to this situation is to invoke the user code via a callback from the UI main loop, or to invoke it explicitly after the UI system has been set up, immediately before its main loop is entered. It might look somehow like this:

main thread:

    from thread import start_new_thread
    from visual import setup_gui, start_main_loop

    setup_gui()  # sets up the GUI subsystem
    start_new_thread(lambda: __import__(<your module>), ())
    start_main_loop()

-- http://mail.python.org/mailman/listinfo/python-list
Re: A thread import problem
Bruce Sherwood writes:

> Thanks much for this suggestion. I'm not sure I've correctly
> understood the operation "start_new_thread(lambda: __import__(<your
> module>), ())". By "your module" do you mean the user program which
> imported the module that will execute start_new_thread?

By "your_module", I meant what you have called "user.py" elsewhere in this thread -- the thing that does the animation. Of course, my suggestion implies that "visual.py" is somewhat changed. It is supposed to no longer set up the GUI environment automatically but do so only when its "setup_gui" function is called; starting the GUI main loop, too, is no longer automatic but explicit.

> It hadn't
> occurred to me to have A import B and B import A, though now that you
> describe this (if that's indeed what you mean) it makes sense.

I do not propose to do that -- it can lead to problems. In my proposal, you have two modules: the "main" module, which sets up the GUI environment, starts the animation in a separate thread and then activates the GUI main loop; and the second module, which contains the code you have shown in a previous message. Of course, the second module can be eliminated by putting its content into a function and then calling this function in the "start_new_thread" (instead of "lambda: __import__(...)"). -- http://mail.python.org/mailman/listinfo/python-list
Re: A thread import problem
Bruce Sherwood writes: > ... > The failure of this test case suggests that one cannot do imports > inside secondary threads started in imported modules, something I keep > tripping over. But I hope you'll be able to tell me that I'm doing > something wrong! As you know multiple threads can be dangerous when they concurrently change common data structures. Locks are used to protect those data structures. Locking can introduce other problems - like deadlocks (something you seem to observe). I have not looked at the concrete implementation. However, the Python documentation suggests that the import machinery uses its own locks (beside the "Global Interpreter Lock"). It seems to be a "thread lock", which would mean that a thread is not blocked when it already holds the lock - however any other thread would block. This easily can lead to a deadlock -- when you wait for the other thread "in any way". There should be no problem when you complete the whole import chain without any waiting for the thread. However, should you start the GUI main loop inside the import chain, you will never complete this chain. -- http://mail.python.org/mailman/listinfo/python-list
Re: Sudden doubling of nearly all messages
Dave Angel writes: > Has anybody else noticed the sudden double-posting of nearly all > messages in the python mailing list? I am reading this list via "gmane" and do not see any double postings. -- http://mail.python.org/mailman/listinfo/python-list
Re: A thread import problem
Bruce Sherwood writes: > ... > There's nothing wrong with the current VPython architecture, which > does use good style, but there are two absolute, conflicting > requirements that I have to meet. > > (1) The simple program API I've shown must be preserved, because there > exist a large number of such programs in existence, used by lots of > people. I can't change the API. Among other uses, every semester there > are about 5000 students in introductory college science courses, > especially physics, who do computational modeling with 3D > visualizations based on instructional materials that teach the > existing API. There is also a large number of instructors who depend > on existing VPython demo programs to continue working even if the > college upgrades Python and VPython. This isn't some little project > where I'm able to teach my small group of collaborators how they > should structure programs. You might keep the "programs" (one of which you have shown) but change the way how they are "called" (and change the internal working of "visual"). In my "proposal", your "program" is not changed in any way -- but it is not called directly but activated ("imported") from something like a starting module. -- http://mail.python.org/mailman/listinfo/python-list
Re: Daemon loses __file__ reference after a while.
"ivdn...@gmail.com" writes: > I have a daemon process that runs for a considerable amount of time (weeks on > end) without any problems. At some point I start getting the exception: > > Exception info: Traceback (most recent call last): > File "scheduler.py", line 376, in applyrule > result = execrule(rule_code) > File "scheduler.py", line 521, in execrule > rulepath = > os.path.dirname(__file__)+"/"+'/'.join(rule['modules'])+"/"+rule['rulename'] > NameError: name '__file__' is not defined > > This section of the code is executed in this process *all the time*, but > suddenly stops working. I have been searching for similar issues online, but > only come accross people having problems because they run the script > interactively. This is not the case here. This is strange indeed. I have only one vague idea: should something try to terminate the process, modules would start to lose their variables during shutdown. -- http://mail.python.org/mailman/listinfo/python-list
Re: simplified Python parsing question
"Eric S. Johansson" writes: > When you are sitting on or in a name, you look to the left or look to > the right what would you see that would tell you that you have gone > past the end of that name. For example > > a = b + c > > if you are sitting on a, the boundaries are beginning of line and =, > if you are sitting on b, the boundaries are = and +, if you are > sitting on c, the boundaries are + and end of line. a call the region > between those boundaries the symbol region. Check the lexical definitions. They essentially define, what a "symbol region" is. In essence, you have names, operators, literals whitespace and comments -- each with quite a simple definition. -- http://mail.python.org/mailman/listinfo/python-list
Re: PyPI question, or, maybe I'm just stupid
Chris Gonnerman writes: > I've been making some minor updates to the PollyReports module I > announced a while back, and I've noticed that when I upload it to > PyPI, my changelog (CHANGES.txt) doesn't appear to be integrated into > the site at all. Do I have to put the changes into the README, or > have I missed something here? It seems that there should be some > automatic method whereby PyPI users could easily see what I've changed > without downloading it first. "CHANGES.txt" is not automatically presented. If necessary, you must integrate it into the "long description". However, personally, I am not interested in all the details (typically found in "CHANGES.txt") but some (often implicit) information is sufficient for me: something like "major API change", "minor bug fixes". Thus, think carefully what you put on the overview page. I find it very stupid to see several window scrolls of changes for a package but to learn how to install the package, I have to download its source... -- http://mail.python.org/mailman/listinfo/python-list
Re: [Python] Re: PyPI question, or, maybe I'm just stupid
Chris Gonnerman writes: > On 07/30/2012 04:20 AM, Dieter Maurer wrote: > ... >> I find it very stupid to see several window scrolls of changes for >> a package but to learn how to install the package, I have to download its >> source... > Not sure I get this. The installation procedure for PollyReports is > the same as for, what, 99% of Python source packages? > > sudo python setup.py install > > What else are you saying I should do? This remark was not targeted at "PollyReports" but (in general) at packages with non-trivial installation procedures which nevertheless state on the overview page "for installation read the separate installation instructions (in the source distribution)". As a side note: playing well with python package managers ("easy_install", "pip", "zc.buildout", ...) could make it even simpler than "sudo python setup.py install". -- http://mail.python.org/mailman/listinfo/python-list
Re: conditional running of code portion
Serhiy Storchaka writes:

> On 05.08.12 09:30, Steven D'Aprano wrote:
>> If you are working in a tight loop, you can do this:
>>
>>     if VERBOSE_FLAG:
>>         for item in loop:
>>             print(DEBUG_INFORMATION)
>>             do_actual_work(item)
>>     else:
>>         for item in loop:
>>             do_actual_work(item)
>
> Or this:
>
>     if VERBOSE_FLAG:
>         def do_work(item):
>             print(DEBUG_INFORMATION)
>             do_actual_work(item)
>     else:
>         do_work = do_actual_work
>
>     for item in loop:
>         do_work(item)

Be warned: a function call is *much* more expensive than an "if variable:". -- http://mail.python.org/mailman/listinfo/python-list
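The relative cost is easy to measure with `timeit`. A rough illustration (absolute numbers vary by machine and interpreter; the point is only the comparison, and note that Serhiy's version binds `do_work = do_actual_work` directly, avoiding the extra wrapper timed here):

```python
import timeit

VERBOSE_FLAG = False

def do_actual_work(item):
    return item * 2

def do_work(item):             # the extra level of indirection
    return do_actual_work(item)

n = 100000
t_flag = timeit.timeit('if VERBOSE_FLAG: pass',
                       globals=globals(), number=n)
t_call = timeit.timeit('do_work(1)',
                       globals=globals(), number=n)

# The flag test is typically several times cheaper than the call:
print('flag: %.4fs  call: %.4fs' % (t_flag, t_call))
```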
Re: Beautiful Soup Table Parsing
Tom Russell writes:

> I am parsing out a web page at
> http://online.wsj.com/mdc/public/page/2_3021-tradingdiary2.html?mod=mdc_pastcalendar
> using BeautifulSoup.
>
> My problem is that I can parse into the table where the data I want
> resides but I cannot seem to figure out how to go about grabbing the
> contents of the cell next to my row header I want.
>
> For instance this code below:
>
>     soup = BeautifulSoup(urlopen('http://online.wsj.com/mdc/public/page/2_3021-tradingdiary2.html?mod=mdc_pastcalendar'))
>
>     table = soup.find("table", {"class": "mdcTable"})
>     for row in table.findAll("tr"):
>         for cell in row.findAll("td"):
>             print cell.findAll(text=True)
>
> brings in a list that looks like this:
>
>     [u'NYSE']
>     [u'Latest close']
>     [u'Previous close']
>     ...
>
> What I want to do is only be getting the data for NYSE and nothing
> else so I do not know if that's possible or not.

I am quite confident that it is possible (though I do not know the details). First thing to note: you can use the "break" statement in order to leave a loop "before time". As you have a nested loop, you might need a "break" on both levels, the outer loop's "break" probably controlled by a variable which indicates "success". Second thing to note: the "BeautifulSoup" documentation might tell you something about the return values of its methods. I assume "BeautifulSoup" builds upon "lxml" and the return values are "lxml" related. Then the "lxml" documentation would tell you how to inspect further details of the HTML structure. -- http://mail.python.org/mailman/listinfo/python-list
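The nested-break pattern Dieter describes looks like this in the abstract -- the row data below is a plain-list stand-in for the parsed HTML rows, so the sketch runs without BeautifulSoup or network access (the quote value is made up):

```python
rows = [
    ['NYSE', '3,076.78'],
    ['Latest close', '...'],
    ['Previous close', '...'],
]

# Break out of both loop levels with a "success" flag:
found = None
done = False
for row in rows:
    for cell in row:
        if cell == 'NYSE':
            found = row
            done = True
            break
    if done:
        break
print(found)
```

An often cleaner alternative is to put the nested loops in a function and simply `return row` at the match -- `return` leaves every loop level at once.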
Re: Threads and sockets
loial writes: > I am writing an application to send data to a printer port(9100) and then > recieve PJL responses back on that port. Because of the way PJL works I have > to do both in the same process(script). > > At the moment I do not start to read responses until the data has been sent > to the printer. However it seems I am missing some responses from the printer > whilst sending the data, so I need to be able to do the 2 things at the same > time. > > Can I open a port once and then use 2 different threads, one to write to the > post and one to read the responses)? That should be possible. Alternatively, you could use "asyncore" -- a mini framework to facilitate asynchronous communication. -- http://mail.python.org/mailman/listinfo/python-list
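The two-thread idea -- one thread writing, one reading, on the same socket -- can be sketched with `socket.socketpair()` standing in for the printer connection on port 9100 (the PJL response text here is made up):

```python
import socket
import threading

# socketpair() gives two connected endpoints; "printer" plays the
# device, "client" plays our script.
printer, client = socket.socketpair()

received = []

def reader(sock):
    # Reads responses concurrently with the main thread's sending.
    received.append(sock.recv(1024))

def fake_printer(sock):
    sock.recv(1024)                  # consume the "print job"
    sock.sendall(b'@PJL READY\r\n')  # a PJL-style response

r = threading.Thread(target=reader, args=(client,))
p = threading.Thread(target=fake_printer, args=(printer,))
r.start(); p.start()

client.sendall(b'job data')          # send while the reader is listening
r.join(); p.join()
print(received)

client.close(); printer.close()
```

Sending and receiving on the same socket from different threads is safe because the two directions of a stream socket are independent; each direction just needs a single owner.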
Re: Running Python web apps on shared ASO servers?
Gilles writes:

> ...
> Support replied this in an old thread: "Just a CGI option. We don't
> have enough users to justify adding mod_python support."
> http://forums.asmallorange.com/topic/4672-python-support/page__hl__python
> http://forums.asmallorange.com/topic/4918-python-fcgi-verses-mod-python/
>
> Does it mean that ASO only supports writing Python web apps as
> long-running processes (CGI, FCGI, WSGI, SCGI) instead of embedded
> Python à la PHP?

It looks as if you could use CGI to activate Python scripts. There seems to be no "mod_python" support. You should probably read the mentioned forum resources to learn the details of the Python support provided by your web host. -- http://mail.python.org/mailman/listinfo/python-list
Re: python+libxml2+scrapy AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'
Dmitry Arsentiev writes: > Has anybody already meet the problem like this? - > AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER' > > When I run scrapy, I get > > File "/usr/local/lib/python2.7/site-packages/scrapy/selector/factories.py", > line 14, in > libxml2.HTML_PARSE_NOERROR + \ > AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER' Apparently, the versions of "scrapy" and "libxml2" do not fit. Check with which "libxml2" versions, your "scrapy" version can work and then install one of them. -- http://mail.python.org/mailman/listinfo/python-list
Re: SSLSocket.getpeercert() doesn't return issuer, serial number, etc
Gustavo Baratto writes: > SSL.Socket.getpeercert() doesn't return essential information present in the > client certificate (issuer, serial number, not before, etc), and it looks it > is by design: > > > > http://docs.python.org/library/ssl.html#ssl.SSLSocket.getpeercert > > http://hg.python.org/cpython/file/b878df1d23b1/Modules/_ssl.c#l866 > > > > By deliberately removing all that information, further > verification/manipulation of the cert becomes impossible. > > Revocation lists, OCSP, and any other extra layers of certificate checking > cannot be done properly without all the information in the cert being > available. I agree with you that the information should not be discarded. > Is there anyway around this? There should be at least a flag for folks that > need all the information in the certificate. You could use the parameter "binary_form=True". In this case, you get the DER-encoded certificate and can analyse it with (e.g.) "openssl". -- http://mail.python.org/mailman/listinfo/python-list
Re: Can I get logging.FileHandler to close the file on each emit?
rikardhul...@gmail.com writes: > I use logging.FileHandler (on windows) and I would like to be able to delete > the file while the process is running and have it create the file again on > next log event. > > On windows (not tried linux) this is not possible because the file is locked > by the process, can I get it to close the file after each log event? > > If not, would the correct thing to do be to write my own LogHandler with this > behavior? Zope is using Python's "logging" module and wants to play well with log rotating (start a new logfile, do something with the old log file (compress, rename, remove)). It does this by registering a signal handler which closes its logfiles when the corresponding signal is received. Maybe, you can do something like this. Signal handling under Windows is limited, but maybe you find a usable signal under Windows (Zope is using "SIGUSR1"). -- http://mail.python.org/mailman/listinfo/python-list
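The "write my own LogHandler" route is quite short, because `FileHandler` already supports lazy opening via `delay=True`: close after every record and the base class reopens the file on the next one. A sketch (the class name is made up):

```python
import logging
import os
import tempfile

class ClosingFileHandler(logging.FileHandler):
    """A FileHandler that closes its file after every record.

    delay=True makes the base class reopen the file on the next emit,
    so the file can be deleted/rotated between log events.
    """
    def __init__(self, filename):
        logging.FileHandler.__init__(self, filename, delay=True)

    def emit(self, record):
        logging.FileHandler.emit(self, record)
        self.close()

path = os.path.join(tempfile.mkdtemp(), 'app.log')
log = logging.getLogger('demo')
log.addHandler(ClosingFileHandler(path))

log.warning('first event')
os.remove(path)            # delete the file while the process runs

log.warning('second event')
content = open(path).read()
print(content)             # only the second event; file was recreated
```

The price is an open/close per record, which is fine for occasional events but would be slow for high-volume logging -- there the signal-triggered reopen that Zope uses is the better fit.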
Re: Are the property Function really useful?
writes:
> Are the property Function really useful?

Someone invested time to implement/document/test it. Thus, there are people who have use cases for it...

> Where can i use the property function?

You can use it when you have parameterless methods which you want to access as if they were simple attributes: i.e. "obj.m" instead of "obj.m()". To phrase it slightly differently: the "property" function allows you to implement "computed" (rather than "stored") attributes. You may find this feature uninteresting: fine, do not use it... However, there are cases where it is helpful, e.g.: you have a base class "B" with an attribute "a", and you want to derive a class "D" from "B" where "a" is not fixed but must be computed from other attributes. The "Eiffel" programming language even stipulates that attributes and parameterless methods are essentially the same; the application of the "property" function is implicit in "Eiffel" for parameterless methods, to hide implementation details. As you see, "property" can be highly valued ;-) -- http://mail.python.org/mailman/listinfo/python-list
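A minimal sketch of such a computed attribute (class and attribute names are illustrative):

```python
class Rectangle(object):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    @property
    def area(self):
        # a computed attribute: read as "r.area", not "r.area()"
        return self.width * self.height

r = Rectangle(3, 4)
print(r.area)  # 12
```

Callers cannot tell whether "area" is stored or computed, which is exactly the implementation hiding described above.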
Re: sockets,threads and interupts
loial writes:
> I have threaded python script that uses sockets to monitor network ports.
>
> I want to ensure that the socket is closed cleanly in all circumstances. This includes if the script is killed or interupted in some other way.

The operating system should close all sockets automatically when the process dies. Thus, if closing alone is sufficient, you need not do anything special. -- http://mail.python.org/mailman/listinfo/python-list
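If you still want an explicit close on a normal interpreter exit (e.g. to send a clean shutdown on the wire), "atexit" is one hedged option; the helper name below is made up:

```python
import atexit
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def cleanup():
    # Belt and braces: explicit close on normal interpreter exit; if the
    # process is killed outright, the OS reclaims the descriptor anyway.
    try:
        sock.close()
    except OSError:
        pass

atexit.register(cleanup)

cleanup()            # calling it early is harmless: close is idempotent
print(sock.fileno()) # -1 once the socket is closed (Python 3)
```

Note that "atexit" handlers do not run on SIGKILL; only the OS-level cleanup covers that case.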
Re: Why derivated exception can not be pickled ?
Mathieu Courtois writes:
> Here is my example :
>
> import cPickle
>
> ParentClass = object     # works
> ParentClass = Exception  # does not
>
> class MyError(ParentClass):
>     def __init__(self, arg):
>         self.arg = arg
>
>     def __getstate__(self):
>         print '#DBG pass in getstate'
>         odict = self.__dict__.copy()
>         return odict
>
>     def __setstate__(self, state):
>         print '#DBG pass in setstate'
>         self.__dict__.update(state)
>
> exc = MyError('IDMESS')
>
> fo = open('pick.1', 'w')
> cPickle.dump(exc, fo)
> fo.close()
>
> fo = open('pick.1', 'r')
> obj = cPickle.load(fo)
> fo.close()
>
> 1. With ParentClass=object, it works as expected.
>
> 2. With ParentClass=Exception, __getstate__/__setstate__ are not called.

The pickle interface is actually more complex and there are several ways an object can ensure picklability. For example, there is also a "__reduce__" method. I suppose that "Exception" defines methods which trigger the use of an alternative picklability approach (different from "__getstate__/__setstate__"). I would approach your case the following way: use "pickle" instead of "cPickle" and debug pickling/unpickling to find out what happens in detail. -- http://mail.python.org/mailman/listinfo/python-list
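The "__reduce__" mechanism mentioned above can be illustrated with a small, self-contained sketch (class and attribute names are made up). "BaseException" supplies its own "__reduce__", which is why the "__getstate__/__setstate__" pair is bypassed; overriding "__reduce__" puts the subclass back in control:

```python
import pickle

class MyError(Exception):
    def __init__(self, arg, extra=None):
        Exception.__init__(self, arg)
        self.arg = arg
        self.extra = extra

    def __reduce__(self):
        # Return (callable, args): unpickling calls MyError(arg, extra),
        # so the full state survives the round trip
        return (self.__class__, (self.arg, self.extra))

exc = MyError('IDMESS', extra=42)
obj = pickle.loads(pickle.dumps(exc))
print(obj.arg, obj.extra)  # IDMESS 42
```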
Re: [web] Long-running process: FCGI? SCGI? WSGI?
Gilles writes: > To write a long-running web application, I'd like to some feedback > about which option to choose. > > Apparently, the choice boilds down to this: > - FastCGI > - SCGI > - WSGI > > It seems like FCGI and SCGI are language-neutral, while WSGI is > Python-specific. > > Besides that, how to make an informed choice about which option to > choose? Obviously, this depends on your environment. Some hosters, web servers, applications may directly support one interface and not others. If you control your whole environment, I would look for a newer approach. I do not know "SCGI" but I know that "WSGI" is fairly recent. This means that during its design, "FastCGI" was already known and not deemed to be sufficient. Thus, you can expect more features (more modularisation, in this case) in "WSGI". -- http://mail.python.org/mailman/listinfo/python-list
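Part of WSGI's appeal is how little it demands: an application is just a callable taking "environ" and "start_response". A minimal sketch, exercised directly with a synthetic request via the stdlib's "wsgiref" (no real server needed):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A trivial WSGI application: one callable, no framework required
    body = b"hello"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Drive the callable with a fake environ instead of a network request
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

result = b"".join(app(environ, start_response))
print(captured["status"], result)  # 200 OK b'hello'
```

Any WSGI-capable server (Apache's mod_wsgi, gunicorn, the stdlib's wsgiref.simple_server) can host such a callable unchanged, which is the modularisation advantage referred to above.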
Re: How to implement a combo Web and Desktop app in python.
Shawn McElroy writes:
> ...
> So I need to find a way I can implement this in the best way...

It is in general very difficult to say reliable things about the "best" way, because that depends very much on details.

My former employer has created a combo desktop/online application based on "Zope". "Zope" is a web application framework: platform independent, easily installable, with an integrated HTTP server. It is one of the natural choices as a basis for a Python implemented web application. To get a desktop application, the application and Zope were installed on the client system and a standard browser was used for the ui.

The main drawback of this scheme came from the limitations of the browser implemented ui. It has been very difficult to implement "deep integration" with the desktop (e.g. "drag & drop" in and out of the application; integration with the various other applications (Outlook, Word, ...)) and to provide the "gimmicks" offered by the surrounding environment. Thus, after 8 years, the application started to look old style and the browser based ui was replaced by a stand alone desktop application that talked via webservices with an online system (if necessary).

Thus, *if* the ui requirements are fairly low (i.e. can fairly easily be implemented via a browser), you could go a similar route. If your ui requirements are high, you can replace the browser by a self-developed (thick) ui application that talks via an abstraction with its backend. Properly designed, the abstraction could either be implemented by direct calls (to a local library) or by webservice calls (to an online service). This way, you could use your client application both for the (local) desktop only case as well as for the online case.

Your description (stripped) suggests that you need special support for "offline" usage. This is separate functionality, independent of the desktop/online question.
For example, highly available distributed database systems must provide some synchronization mechanism for resynchronization after temporary network connectivity loss. Another example: transactional systems must not lose transactions and can, for example, use asynchronous message queues to ensure that messages are safely delivered even in the case of temporary communication problems or failures. Thus, look at these aspects independently from the desktop/online scenario -- these aspects affect any distributed system and solutions can be found there. Those solutions tend to be complex (and expensive). -- http://mail.python.org/mailman/listinfo/python-list
Re: Decorators not worth the effort
> On Sep 14, 3:54 am, Jean-Michel Pichavant > wrote: >> I don't like decorators, I think they're not worth the mental effort. Fine. I like them because they can vastly improve reusability and drastically reduce redundancies (which I hate). Improved reusability and reduced redundancies can make applications more readable, easier to maintain and faster to develop. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to implement a combo Web and Desktop app in python.
Shawn McElroy writes:
> ...
> Although you are correct in the aspect of having 'real' OS level integration. Being able to communicate with other apps as well as contextual menus. Although, could I not still implement those features from python, into the host system from python? There are also tools like 'kivi' which allow you to get system level access to do things. Though im not too sure on how far that extends, or how useful it would be.

In my scenario you have a standard browser as (thin) client and Python only on the server side. In my scenario, the server could run on the client's desktop -- however, it ran there as a "service", i.e. not in "user space". My knowledge about Windows is limited. I do not really know whether a Windows "service" can fully interact with applications running in "user space" and what limitations may apply. -- http://mail.python.org/mailman/listinfo/python-list
Re: Decorators not worth the effort
Dwight Hutto wrote at 2012-9-14 23:42 -0400:
> ...
>Reduce redundancy, is argumentative.
>
>To me, a decorator, is no more than a logging function. Correct me if
>I'm wrong.

Well, it depends on how you are using decorators and how complex your decorators are. If what you are using as decorating function is really trivial, as trivial as "@", then you do not gain much. But your decorator functions need not be trivial.

An example: in a recent project, I have implemented a SOAP webservice where most services depend on a valid session and must return specified fields even when (as in the case of an error) there is no sensible value. Instead of putting into each of those function implementations the check "do I have a valid session?" and at the end "add required fields not specified", I opted for the following decorator:

    def valid_session(*fields):
    !   fields = ("errorcode",) + fields
        @decorator
        def valid_session(f, self, sessionkey, *args, **kw):
    !       s = get_session(sessionkey)
    !       if not s.get("authenticated", False):
    !           rd = {"errorcode": u"1000"}
    !       else:
    !           rd = f(self, sessionkey, *args, **kw)
    !       return tuple(rd.get(field, DEFAULTS.get(field, '')) for field in fields)
        return valid_session

The lines starting with "!" represent the logic encapsulated by the decorator -- the logic I would have to copy into each function implementation without it. I then use it this way:

    @valid_session()
    def logout(self, sessionkey):
        s = get_session(sessionkey)
        s["authenticated"] = False
        return {}

    @valid_session("amountavail")
    def getStock(self, sessionkey, customer, item, amount):
        info = self._get_article(item)
        return {u"amountavail": info["deliverability"] and u"0" or u"1"}

    @valid_session("item", "shortdescription", "pe", "me", "min", "price",
                   "vpe", "stock", "linkpicture", "linkdetail", "linklist",
                   "description", "tax")
    def fetchDetail(self, sessionkey, customer, item):
        return self._get_article(item)

    ...
I hope you can see that, at least in this example, the use of the decorator reduces redundancy and highly improves readability -- because boilerplate code (check valid session, add default values for unspecified fields) is not copied over and over again but isolated in a single place.

The example uses a second decorator ("@decorator") -- in the decorator definition itself. This decorator comes from the "decorator" module, a module facilitating the definition of signature preserving decorators (important in my context): such a decorator ensures that the decoration result has the same parameters as the decorated function. To achieve this, complex Python implementation details and Python's introspection must be used. And I am very happy that I do not have to reproduce this logic in my decorator definitions but can just say "@decorator" :-)

Example 3: In another project, I had to implement a webservice where most of the functions should return "json" serialized data structures. As I like decorators, I chose a "@json" decorator. Its definition looks like this:

    @decorator
    def json(f, self, *args, **kw):
        r = f(self, *args, **kw)
        self.request.response.setHeader(
            'content-type',
            # "application/json" made problems with the firewall,
            # try "text/json" instead
            #'application/json; charset=utf-8'
            'text/json; charset=utf-8'
            )
        return udumps(r)

It calls the decorated function, then adds the correct "content-type" header and finally returns the "json" serialized return value. The webservice function definitions then look like:

    @json
    def f1(self, ...):

    @json
    def f2(self, ...):

The function implementations can concentrate on their primary task. The "@json" decorator tells that the result is (by magic specified elsewhere) turned into a "json" serialized value.

This example demonstrates the improved maintainability (caused by the redundancy reduction): the "json rpc" specification stipulates the use of the "application/json" content type. Correspondingly, I used this content-type header initially.
However, many enterprise firewalls try to protect against viruses by banning "application/*" responses -- and in those environments, my initial webservice implementation did not work. Thus, I changed the content type to "text/json". Thanks to the decorator encapsulation of the "json result logic", I could make my change at a single place -- not littered all over the webservice implementation.

And a final example: Sometimes you are interested to cache (expensive) function results. Caching involves non-trivial logic (determine the cache, determine the key, check whether the cache contains a value for the key; if not, call the function, cache the result). The package "plone.memoize" defines a set of decorators (for different caching policies) with which caching can be as easy as:

    @memoize
    def f():

The complete caching logic is encapsulated in the tiny "@memoize" prefix. It tells: calls t
Re: Decorators not worth the effort
Jean-Michel Pichavant writes: > - Original Message - >> Jean-Michel Pichavant wrote: > [snip] >> One minor note, the style of decorator you are using loses the >> docstring >> (at least) of the original function. I would add the >> @functools.wraps(func) >> decorator inside your decorator. > > Is there a way to not loose the function signature as well ? Look at the "decorator" module. -- http://mail.python.org/mailman/listinfo/python-list
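For those without the third-party "decorator" module, the stdlib gets close: "functools.wraps" copies the docstring and sets "__wrapped__", through which "inspect.signature" (Python 3.4+) can recover the original signature. A small sketch with made-up function names:

```python
import functools
import inspect

def logged(f):
    @functools.wraps(f)   # copies __name__/__doc__ and sets __wrapped__
    def wrapper(*args, **kw):
        return f(*args, **kw)
    return wrapper

@logged
def add(x, y):
    "Add two numbers."
    return x + y

print(add.__doc__)             # Add two numbers.
print(inspect.signature(add))  # (x, y) -- recovered via __wrapped__
```

Unlike the "decorator" module, the wrapper itself still has a generic `*args, **kw` signature; only introspection through "__wrapped__" sees the original one.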
Re: python application file format
Benjamin Jessup writes: > ... > What do people recommend for a file format for a python desktop > application? Data is complex with 100s/1000s of class instances, which > reference each other. > > ... > Use cPickle with a module/class whitelist? (Can't easily port, not > entirely safe, compact enough, expandable) This is the approach used by the ZODB (Zope Object DataBase). I like the ZODB. It is really quite easy to get data persisted. It uses an elaborate caching scheme to speed up database interaction and has transaction control to ensure persistent data consistency in case of errors. Maybe not so relevant in your context, it does not require locking to safely access persistent data in a multi thread environment. > ... -- http://mail.python.org/mailman/listinfo/python-list
Re: Private methods
alex23 writes: > On 10 Oct, 17:03, real-not-anti-spam-addr...@apple-juice.co.uk (D.M. > Procida) wrote: >> It certainly makes it quick to build a class with the attributes I need, >> but it does make tracing logic sometimes a pain in the neck. >> >> I don't know what the alternative is though. > > Components. > > The examples are in C++ and it's about game development, but I found > this article to be very good at explaining the approach: > http://gameprogrammingpatterns.com/component.html > > I've become a big fan of components & adaptation using zope.interface: > http://wiki.zope.org/zope3/ZopeGuideComponents If multiple inheritance is deemed complex, adaptation is even more so: With multiple inheritance, you can quite easily see from the source code how things are put together. Adaptation follows the "inversion of control" principle. With this principle, how a function is implemented, is decided outside and can very easily be changed (e.g. through configuration). This gives great flexibility but also nightmares when things do not work as expected... -- http://mail.python.org/mailman/listinfo/python-list
Re: serialization and versioning
Neal Becker writes:
> I wonder if there is a recommended approach to handle this issue.
>
> Suppose objects of a class C are serialized using python standard pickling. Later, suppose class C is changed, perhaps by adding a data member and a new constructor argument.
>
> It would see the pickling protocol does not directly provide for this - but is there a recommended method?
>
> I could imagine that a class could include a class __version__ property that might be useful - although I would further expect that it would not have been defined in the original version of class C (but only as an afterthought when it became necessary).

The ZODB (Zope Object DataBase) is based on Python's pickle. In the ZODB world, the following strategy is used:

* if the class adds a new data attribute, give it (in addition) a corresponding class level attribute acting as "default" value in case the pickled state of an instance lacks this instance level attribute

* for more difficult cases, define an appropriate "__setstate__" for the class that handles the necessary model upgrades

-- http://mail.python.org/mailman/listinfo/python-list
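Both techniques can be sketched with plain "pickle" (class and attribute names here are illustrative; note that upgrades at load time are applied by "__setstate__", the hook invoked during unpickling):

```python
import pickle

class C(object):
    # Version 2 added "color"; the class-level attribute serves as the
    # default for instances pickled by version 1
    color = "red"

    def __init__(self, size, color="red"):
        self.size = size
        self.color = color

old = C(3)
del old.color            # simulate a pickle made before "color" existed
restored = pickle.loads(pickle.dumps(old))
print(restored.color)    # red -- found on the class, not the instance

class D(C):
    def __setstate__(self, state):
        # the hook for more difficult upgrades: patch the pickled state
        state.setdefault("color", "blue")
        self.__dict__.update(state)

old_d = D(5)
del old_d.color
restored_d = pickle.loads(pickle.dumps(old_d))
print(restored_d.color)  # blue -- supplied by __setstate__
```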
Re: deque and thread-safety
Christophe Vandeplas writes:
> ...
> From the documentation I understand that deques are thread-safe:
>> Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction.
>
> It seems that appending to deques is indeed thread-safe, but not iterating over them.

You are right. And when you think about it, there is not much point in striving for thread safety for iteration (alone). Iteration is (by nature) a non atomic operation: you iterate because you want to do something with the intermediate results; this "doing" is not part of the iteration itself. Thus, you are looking for thread safety not only for the iteration but for the iteration combined with additional operations (which may well extend beyond the duration of the iteration). Almost surely, the "deque" implementation is using locks to ensure thread safety for its "append" and "pop". Check whether this lock is exposed to the application. If so, use it to protect your atomic sections involving iteration. -- http://mail.python.org/mailman/listinfo/python-list
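One caveat to the suggestion above: CPython's "deque" does not actually expose an internal lock (its individual appends and pops are atomic thanks to the GIL), so an external lock is needed to make iteration plus follow-up work atomic. A hedged sketch:

```python
import threading
from collections import deque

d = deque()
# CPython's deque exposes no internal lock, so use an external one
lock = threading.Lock()

def append_item(item):
    with lock:
        d.append(item)

def snapshot():
    # copy atomically under the lock, then iterate the copy at leisure
    with lock:
        return list(d)

threads = [threading.Thread(target=append_item, args=(i,))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(snapshot()))  # [0, 1, 2, 3, 4]
```

Taking a snapshot under the lock keeps the critical section short; the subsequent processing of the copy needs no synchronization at all.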
Re: problems with xml parsing (python 3.3)
janni...@gmail.com writes:
> I am new to Python and have a problem with the behaviour of the xml parser. Assume we have this xml document:
> > > > Title of the first book. > > > > Title of the second book. > > > >
> If I now check for the text of all 'entry' nodes, the text for the node with the empty element isn't shown
>
> import xml.etree.ElementTree as ET
> tree = ET.ElementTree(file='test.xml')
> root = tree.getroot()
> resultSet = root.findall(".//entry")
> for r in resultSet:
>     print (r.text)

I do not know about "xml.etree" but the (reportedly) quite compatible "lxml.etree" handles text nodes in a quite different way from that of "DOM": they are *not* considered children of the parent element but are attached as attributes "text" and "tail" to either the container element (if the first DOM node is a text node) or the preceding element, otherwise. Your code snippet suggests that "xml.etree" behaves identically in this respect. In this case, you would find "Title of the second book" as the "tail" attribute of the element "coauthored". -- http://mail.python.org/mailman/listinfo/python-list
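The "text"/"tail" split can be demonstrated directly with "xml.etree" (the element names follow the thread; the exact document from the question was lost, so this is a reconstruction of its relevant part):

```python
import xml.etree.ElementTree as ET

# An entry whose text is interrupted by an empty element
entry = ET.fromstring(
    "<entry>Title of the <coauthored/>second book.</entry>")

print(repr(entry.text))                     # 'Title of the '
print(repr(entry.find("coauthored").tail))  # 'second book.'
```

To recover the full visible text of an element regardless of embedded markup, `"".join(entry.itertext())` is the usual idiom.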
Re: Applying a paid third party ssl certificate
ehsmenggro...@gmail.com writes:
> I haven't quite figured out how to apply a paid ssl cert, say RapidSSL free SSL test from Python's recent sponsor sslmatrix.com and what to do with that to make Python happy.
>
> This good fellow suggests using the PEM format. I tried and failed. http://www.minnmyatsoe.com/category/python-2/
>
> The self signed cert recepies found all work swell, but some browsers (webkit) gets very upset indeed. I want to use ajax requests from clients (e.g autocompletion, stats collection etc) and put that in a python program without hogging down the main apache stack, but without a proper ssl cert this doesn't work.
>
> Does anyone have any ideas what to do?

From your description, I derive that you want your client (python program) to authenticate itself via an SSL certificate. If my assumption is correct, I would start with a look at the Python documentation for HTTPS connections. If I remember correctly, they have 2 optional parameters to specify a client certificate and to specify trusted certificates (when server presented certificates should be verified). Once you have determined how to present the client certificate for the base HTTPS connection, you may need to look at the documentation or source code of higher level apis (such as "urllib2") to learn how to pass your certificate down to the real connection. You may also have a look at "PyPI". You may find there packages facilitating Python's "SSL" support. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python garbage collector/memory manager behaving strangely
a...@pythoncraft.com (Aahz) writes:
> ...

    def readlines(f):
        lines = []
        while "f is not empty":
            line = f.readline()
            if not line:
                break
            if len(line) > 2 and line[-2:] == '|\n':
                lines.append(line)
                yield ''.join(lines)
                lines = []
            else:
                lines.append(line)

>>> There's a few changes I'd make:
>>> I'd change the name to something else, so as not to shadow the built-in,
> ...
> Actually, as an experienced programmer, I *do* think it is confusing as evidenced by the mistake Dave made! Segregated namespaces are wonderful (per Zen), but let's not pollute multiple namespaces with same name, either.
>
> It may not be literally shadowing the built-in, but it definitely mentally shadows the built-in.

I disagree with you. Namespaces are there so that, when working within one namespace, I do not need to worry much about other namespaces. Therefore, calling a function "readlines" is very much justified (if it reads lines from a file), even though there was a module around with the name "readlines". By the way, the module is named "readline" (not "readlines"). -- http://mail.python.org/mailman/listinfo/python-list
Re: Getting "empty" attachment with smtplib
Tobiah writes:
> I just found out that the attachment works fine when I read the mail from the gmail website. Thunderbird complains that the attachment is empty.

The MIME standard (a set of RFCs) specifies what valid messages with attachments should look like. Fetch the mail (unprocessed if possible) and look at its structure. If it is conformant to the MIME standard, then "Thunderbird" made a mistake; otherwise, something went wrong with the message construction. I can already say that "smtplib" is not to blame. It is (mostly) unconcerned with the internal structure of the message -- and by itself will not empty attachments. -- http://mail.python.org/mailman/listinfo/python-list
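The message construction the reply alludes to is done with the stdlib "email" package; "smtplib" merely transports the result. A small sketch (subject and file name are placeholders) that also shows how to inspect the raw structure before sending:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["Subject"] = "Report"
msg.attach(MIMEText("See the attachment."))

# Attach a binary payload; MIMEApplication base64-encodes it for us
part = MIMEApplication(b"\x00\x01payload", Name="report.bin")
part["Content-Disposition"] = 'attachment; filename="report.bin"'
msg.attach(part)

raw = msg.as_string()  # exactly what smtplib would put on the wire
print("Content-Disposition: attachment" in raw)  # True
print(len(msg.get_payload()))                    # 2
```

Inspecting `raw` (or the stored mail, as suggested above) shows whether the attachment body actually made it into the MIME part.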
Re: error importing smtplib
Eric Frederich writes:
> I created some bindings to a 3rd party library.
> I have found that when I run Python and import smtplib it works fine.
> If I first log into the 3rd party application using my bindings however I get a bunch of errors.
>
> What do you think this 3rd party login could be doing that would affect the ability to import smtp lib.
>
> Any suggestions for debugging this further. I am lost.
>
> This works...
>     import smtplib
>     FOO_login()
>
> This doesn't...
>     FOO_login()
>     import smtplib
>
> Errors.
>     import smtplib
>     ERROR:root:code for hash sha224 was not found.
>     Traceback (most recent call last):
>       File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
>         globals()[__func_name] = __get_hash(__func_name)
>       File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
>         return __get_builtin_constructor(name)
>       File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
>         raise ValueError('unsupported hash type %s' % name)
>     ValueError: unsupported hash type sha224

From the error, I suppose it does something bad to the hash registries. When I analysed problems with "hashlib" (some time ago, my memory may not be completely trustworthy), I got the impression that "hashlib" essentially delegates to the "openssl" libraries for the real work and especially the supported hash types. Thus, I suspect that your "FOO_login()" does something which confuses "openssl". One potential reason could be that it loads a bad version of an "openssl" shared library. I would use the "strace" (shell) command to find out what operating system calls are executed during "FOO_login()", hoping that one of them gives me a clue. -- http://mail.python.org/mailman/listinfo/python-list
Re: error importing smtplib
Eric Frederich writes:
> ...
> So I'm guessing the problem is that after I log in, the process has a conflicting libssl.so file loaded. Then when I try to import smtplib it tries getting things from there and that is where the errors are coming from.
>
> The question now is how do I fix this?

Likely, you must relink the shared object containing your "FOO_login". When its current version was linked, the (really) old "libssl" was current and the version was linked against it. As the binary objects for your shared object might depend on the old version, it is best not only to relink but to recompile as well. -- http://mail.python.org/mailman/listinfo/python-list
Re: Stack_overflow error
Aung Thet Naing writes: > I'm having Stack_overflow exception in _ctypes_callproc (callproc.c). The > error actually come from the: > > cleanup: > for (i = 0; i < argcount; ++i) > Py_XDECREF(args[i].keep); > > when args[i].keep->ob_refCnt == 1 Really a stack overflow or a general segmentation violation? Under *nix, both are not easy to distinguish -- but maybe, you are working with Windows? -- http://mail.python.org/mailman/listinfo/python-list
Re: Suitable software stacks for simple python web service
Kev Dwyer writes:
> I have to build a simple web service which will:
>
> - receive queries from our other servers
> - forward the requests to a third party SOAP service
> - process the response from the third party
> - send the result back to the original requester
>
> From the point of view of the requester, this will happen within the scope of a single request.
>
> The data exchanged with the original requester will likely be encoded as JSON; the SOAP service will be handled by SUDS.
>
> The load is likely to be quite light, say a few requests per hour, though this may increase in the future.
>
> Given these requirements, what do you think might be a suitable software stack, i.e. webserver and web framework (if a web framework is even necessary)?

From your description (so far), you would not need a web framework but could use any way to integrate Python scripts into a web server, e.g. "mod_python", "cgi", or "WSGI". Check which of these your web server supports. -- http://mail.python.org/mailman/listinfo/python-list
Re: deepcopy questions
lars van gemerden writes: > ... "deepcopy" dropping some items ... > Any ideas are still more then welcome, "deepcopy" is implemented in Python (rather than "C"). Thus, if necessary, you can debug what it is doing and thereby determine where the items have been dropped. -- http://mail.python.org/mailman/listinfo/python-list
Re: os.system and subprocess odd behavior
py_genetic writes:
> Example of the issue for arguments sake:
>
> Platform Ubuntu server 12.04LTS, python 2.7
>
> Say file1.txt has "hello world" in it.

^ Here, you speak of "file1.txt" (note the extension ".txt")

> subprocess.Popen("cat < file1 > file2", shell = True)
> subprocess.call("cat < file1 > file2", shell = True)
> os.system("cat < file1 > file2")

But in your code, you use "file1" (without extension). If your code really references a non-existing file, you may well get what you are observing. -- http://mail.python.org/mailman/listinfo/python-list
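With matching file names, the redirection works as expected; a self-contained sketch using a temporary directory (the shell resolves the `<`/`>` redirections, and a misspelled source name makes "cat" fail with "No such file or directory"):

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "file1.txt")   # note the ".txt" extension
dst = os.path.join(d, "file2.txt")
with open(src, "w") as f:
    f.write("hello world")

# shell=True: /bin/sh performs the redirections before running "cat"
rc = subprocess.call("cat < %s > %s" % (src, dst), shell=True)
with open(dst) as f:
    print(rc, f.read())  # 0 hello world
```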
Re: need some help with unexpected signal exception when using input from a thread (Pypy 1.9.0 on osx/linux)
Irmen de Jong writes:
> Using Pypy 1.9.0. Importing readline. Using a background thread to get input() from stdin. It then crashes with:
>
> File "/usr/local/Cellar/pypy/1.9/lib_pypy/pyrepl/unix_console.py", line 400, in restore
> signal.signal(signal.SIGWINCH, self.old_sigwinch)
> ValueError: signal() must be called from the main thread

Apparently, "input" is not prepared to be called from a "background thread". I have no idea why "signal" should only be callable from the main thread. I do not think this makes much sense. Speak with the "Pypy" developers about this. -- http://mail.python.org/mailman/listinfo/python-list
Re: problems importing from /usr/lib/pyshared/
Harold writes:
> I recently upgraded my system from ubuntu 11.4 to 12.4 and since run into an issue when trying to import several packages in python2.7, e.g.
>
> harold@ubuntu:~$ python -c 'import gtk'
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
>   File "/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py", line 30, in <module>
>     import gobject as _gobject
>   File "/usr/share/pyshared/gobject/__init__.py", line 26, in <module>
>     from glib import spawn_async, idle_add, timeout_add, timeout_add_seconds, \
>   File "/usr/share/pyshared/glib/__init__.py", line 22, in <module>
>     from glib._glib import *
> ImportError: No module named _glib

Ubuntu 12 has introduced important changes with respect to "glib" (and depending packages). In fact, there are now two quite incompatible implementations - the old "static" one and a new "dynamic" one. It looks as if in your case, old and new implementations were mixed. I had a similar problem when upgrading to "Ubuntu 12.4". In my case, it turned out that my (custom) "PYTHONPATH" setting was responsible for getting into the incompatibility. The new way to use "gtk" is via the "gi" ("GObject Introspection") module. It looks like:

    from gi.repository import Gtk,GdkPixbuf,GObject,Pango,Gdk,Gio

-- http://mail.python.org/mailman/listinfo/python-list
Re: Dependency management in Python?
Adelbert Chang writes: > In the Scala language there is the Simple Build Tool that lets me specify on > a project-by-project basis which libraries I want to use (provided they are > in a central repository somewhere) and it will download them for me. Better > yet, when a new version comes out I need only change the SBT configuration > file for that project and it will download it for me. You might also have a look at "zc.buildout" (--> on "PyPI"). -- http://mail.python.org/mailman/listinfo/python-list
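A minimal "zc.buildout" configuration sketch for comparison with SBT (the part name "app" and the egg "mypackage" are purely illustrative); the per-project `buildout.cfg` declares the desired eggs and buildout downloads them from PyPI:

```ini
[buildout]
# each part names a recipe that performs one installation task
parts = app

[app]
# zc.recipe.egg fetches the listed eggs (plus dependencies) from PyPI
# and can generate console scripts for them
recipe = zc.recipe.egg
eggs =
    mypackage
    requests
```

Running `bin/buildout` again after editing the file fetches whatever changed, similar to the SBT workflow described above.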
Re: strace of python shows nonsense
Joep van Delft writes: > ... > What puzzles me, is the amount of errors for open and stat64. The high number of errors comes from Python's import logic: when Python should import a module/package (not yet imported), it looks into each member on "sys.path" for about 6 different potential filename spellings corresponding to the module -- until it succeeds or has tried all members. Most such filesystem lookups will fail - giving a high number of "stat" errors. -- http://mail.python.org/mailman/listinfo/python-list
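The multiplication effect can be made concrete with a purely illustrative sketch (the suffix list approximates, not reproduces, what a Python 2 interpreter probed per "sys.path" entry):

```python
import os
import sys

def candidate_files(modname):
    # Illustrative only: roughly the per-directory names probed for
    # "import modname" (package dir, source, bytecode, extension module)
    suffixes = ["", ".py", ".pyc", ".so", "module.so"]
    for directory in sys.path:
        for suffix in suffixes:
            yield os.path.join(directory, modname + suffix)

probes = list(candidate_files("foo"))
# one failing stat()/open() per missing candidate explains the error count
print(len(probes) == len(sys.path) * 5)  # True
```

For an import that is only satisfied late in "sys.path" (or not at all), every earlier candidate shows up in strace as a failed "stat64"/"open".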
Re: Using pdb with greenlet?
Salman Malik writes:
> I am sort of a newbie to Python ( have just started to use pdb). My problem is that I am debugging an application that uses greenlets and when I encounter something in code that spawns the coroutines or wait for an event, I lose control over the application (I mean that after that point I can no longer do 'n' or 's' on the code). Can anyone of you tell me how to tame greenlet with pdb, so that I can see step-by-step as to what event does a coroutine sees and how does it respond to it.
> Any help would be highly appreciated.

Debugging works via the installation of a "tracehook" function. If such a function is installed in a thread, the Python interpreter calls back via the installed hook to report events relevant for debugging. Usually the hook function is defined by a debugger which examines whether the event is user relevant (e.g. if a breakpoint has been hit, or code for a new line has been entered) and in this case informs the user and may give him control.

It is important that the trace hook installation is thread specific. Otherwise, debugging in a multithreaded environment would be a nightmare - as events from multiple threads may arrive and seriously confuse the debugger as well as the user.

I do not know "greenlet". However, I expect that it uses threads under the hood to implement coroutines. In such a case, it would be natural that debugging one coroutine would not follow the execution into a different coroutine. To change this, "greenlet" would need to specially support the "tracehook" feature: when control is transferred to a different coroutine, the "tracehook" would need to be transferred as well. Personally, I am not sure that this would be a good idea (I sometimes experience debugging interaction from different threads -- and I can tell you that it is a really nasty experience). However, you can set the "tracehook" yourself in each of your coroutines: "import pdb; pdb.set_trace()".
This is called a "code breakpoint". It installs the debugger's "tracehook" function in the current thread and gives control to the debugger (i.e. it works like a breakpoint). I use this quite frequently to debug multithreaded web applications and it works quite well (sometimes with nasty experiences). "pdb" is not optimal for multithreaded debugging because it expects to interact with a single thread only. For a good experience, a thread-aware extension would be necessary. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
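The thread-specific nature of the trace hook described above can be illustrated with `sys.settrace` directly (a sketch; `pdb.set_trace` installs such a hook for you):

```python
import sys
import threading

events = []

def tracer(frame, event, arg):
    # record which thread generated a "call" event
    if event == "call":
        events.append(threading.current_thread().name)
    return None

def traced():
    sys.settrace(tracer)      # installs the hook for this thread only
    (lambda: None)()          # triggers a "call" event seen by tracer
    sys.settrace(None)

def untraced():
    (lambda: None)()          # no hook installed here: tracer stays silent

t1 = threading.Thread(target=traced, name="traced-thread")
t2 = threading.Thread(target=untraced, name="untraced-thread")
for t in (t1, t2):
    t.start()
for t in (t1, t2):
    t.join()

print(set(events))
```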
Re: validating XML
andrea crotti writes: > Hello Python friends, I have to validate some xml files against some xsd > schema files, but I can't use any cool library as libxml unfortunately. Why? It does not seem very rational to implement a complex task (such as XML-Schema validation) when there are ready-made solutions around. > A Python-only validator might be also fine, but all the projects I've > seen are partial or seem dead.. > So since we define the schema ourselves, I was allowed to only implement > the parts of the huge XML definition that we actually need. > Now I'm not quite sure how to do the validation myself, any suggestions? I would look for a command line tool available on your platform which performs the validation and call it from Python. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
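A sketch of such a wrapper, assuming libxml2's `xmllint` command line tool is available (the tool choice is an assumption; any validating tool works the same way):

```python
import subprocess

def validate_with_xmllint(xml_path, xsd_path):
    """Validate xml_path against xsd_path using the xmllint command line tool.

    Returns (ok, message). Requires libxml2's xmllint to be installed.
    """
    result = subprocess.run(
        ["xmllint", "--noout", "--schema", xsd_path, xml_path],
        capture_output=True, text=True,
    )
    # xmllint exits with 0 on success; diagnostics go to stderr
    return result.returncode == 0, result.stderr
```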
Re: validating XML
andrea crotti writes: > ... > The reason is that it has to work on many platforms and without any c module > installed, the reason of that Searching for a pure Python solution, you might have a look at "PyXB". It has not been designed to validate XML instances against XML-Schema (but to map between XML instances and Python objects based on an XML-Schema description); however, it detects many problems in the XML instances. It does not introduce its own C extensions (but relies on an XML parser shipped with Python). > Anyway in a sense it's also quite interesting, and I don't need to implement > the whole XML, so it should be fine. The XML is the lesser problem. The big problem is XML-Schema: it is *very* complex, with structure definitions (elements, attributes and "#PCData"), inheritance, redefinition, grouping, scoping rules, inclusion, and data types with restrictions and extensions. Thus, if you want to implement a reliable algorithm which, for a given XML-Schema and XML instance, checks whether the instance is valid with respect to the schema, then you have a really big task. Maybe you have a fixed (and quite simple) schema. Then you may be able to implement a validator (for that fixed schema). But I do not understand why you would want such a validation. If you generate the XML instances, then thoroughly test your generation process (using any available validator) and then trust it. If the XML instances come from somewhere else and must be interpreted by your application, then the important thing is that they are understood by your application, not that they are valid. If you get a complaint that your application cannot handle a specific XML instance, then you validate it in your development environment (again, with any validator available) and if the validation fails, you have good arguments. > What I haven't found yet is an explanation of a possible algorithm to use for > the validation, that I could then implement.. 
You parse the XML (and get a tree) and then recursively check that the elements, attributes and text nodes in the tree conform to the schema (in an abstract sense, the schema is a collection of content models for the various elements; each content model tells you what the element's content and attributes may look like). For a simple schema, this is straightforward. If the schema starts to include foreign schemas or uses extensions, restrictions or "redefine"s, then it gets considerably more difficult. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
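A minimal sketch of this recursive check, with a hypothetical dictionary standing in for the content models (real XML-Schema is far richer):

```python
import xml.etree.ElementTree as ET

# hypothetical "schema": maps each element tag to its allowed child tags
# and its required attributes
SCHEMA = {
    "catalog": {"children": {"book"}, "attrs": set()},
    "book":    {"children": set(),    "attrs": {"id"}},
}

def validate(elem):
    model = SCHEMA.get(elem.tag)
    if model is None:
        return False                          # unknown element
    if not model["attrs"] <= set(elem.attrib):
        return False                          # missing required attribute
    # recurse: every child must be allowed here and itself be valid
    return all(child.tag in model["children"] and validate(child)
               for child in elem)

doc = ET.fromstring('<catalog><book id="1"/></catalog>')
bad = ET.fromstring('<catalog><book/></catalog>')   # book lacks "id"
```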
Re: Hashable object with self references OR how to create a tuple that refers to itself
"Edward C. Jones" writes: > I am trying to create a collection of hashable objects, where each > object contains references to > other objects in the collection. The references may be circular. > > To simplify, one can define > x= list() > x.append(x) > which satisfies x == [x]. > Can I create a similar object for tuples which satisfies x == (x,)? You can create a tuple in "C" and then put a reference to itself into it, but I am quite convinced that you cannot do it in Python itself. (Of course, you could use "cython" to generate C code with a source language very similar to Python). But, you do not need tuples; You could use a standard class: >>> class C(object): pass ... >>> c=C() >>> c.c=c >>> d=dict(c=c) >>> d {'c': <__main__.C object at 0xb737f86c>} -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
Re: Komodo, Python
"Isaac@AU" writes: > I just started learning python. I have komodo2.5 in my computer. And I > installed python2.7. I tried to write python scripts in komodo. But every > time I run the code, there's always the error: > > Traceback (most recent call last): > File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 920, > in > > requestor, connection_port, cookie = ConnectToListener(localhost_addr, > port) > > File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 872, > in > ConnectToListener > cookie = makeCookie() > File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 146, > in > makeCookie > generator=whrandom.whrandom() > NameError: global name 'whrandom' is not defined This is a bug in "kdb.py". I forgets to import "whrandom". In addition it shows that the "kdb.py" code is very old. "whrandom" is been replaced by "random" a long time ago. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
Re: Problem with ImapLib and subject in French
Valentin Mercier writes: > I'm trying to search some mails with SUBJECT criteria, but the problem is the > encoding, I'm trying to search french terms (impalib and python V2.7) > > I've tried few things, but I think the encoding is the problem, in my mail > header I have something like this: > > =?iso-8859-1?Q?Job:_"Full_Backup_Les_Gr=E8ves_du_Lac")_?= I would expect that it is the task of the IMAP server to handle the header encoding on its side. Check the description of its search function to see how it expects its input: does it want it "header encoded" or in some other form. -- Dieter -- http://mail.python.org/mailman/listinfo/python-list
Re: Issue sending data from C++ to Python
Pablo Martinez Ulloa wrote at 2022-5-18 15:08 +0100: >I have been using your C++ Python API, in order to establish a bridge from >C++ to Python. Do you know `cython`? It can help very much in the implementation of bridges between Python and C/C++. -- https://mail.python.org/mailman/listinfo/python-list
Re: traceback Shows path to my python libraries
jsch...@sbcglobal.net wrote at 2022-6-20 13:49 -0500: >I coded an application with a 64-bit executable using cython with the embed >option and gcc and I received a traceback showing the path to my python >installation. Is that normal or does that mean the application is going >outside of my executable to my python libraries? I want it portable so it >if is, then it's not portable. The tracebacks are primarily for developers. Therefore, they identify source locations. When you use `cython`, the generated "C" code contains references to the `cython` source (because those references are meaningful for developers). These references are just strings recorded at compile time: seeing them in a traceback does not mean your executable loads anything from your Python installation at run time. When a traceback is generated, the `cython` source need not even be available (you will then not see the source line, only the line number and file information). -- https://mail.python.org/mailman/listinfo/python-list
Re: Logging into single file from multiple modules in python when TimedRotatingFileHandler is used
Chethan Kumar S wrote at 2022-6-21 02:04 -0700: > ... >I have a main process which makes use of different other modules. And these >modules also use other modules. I need to log all the logs into single log >file. Due to use of TimedRotatingFileHandler, my log behaves differently after >midnight. I got to know why it is so but couldn't get how I can solve it. >Issue was because of serialization in logging when multiple processes are >involved. > >Below is log_config.py which is used by all other modules to get the logger >and log. >import logging >import sys >from logging.handlers import TimedRotatingFileHandler > >FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s") The usual logging usage pattern is: the individual components decide what to log, but how the logging happens is decided centrally, common for all components. This implies that individual components usually do not set up handlers or formatters themselves but use the configuration set up centrally. -- https://mail.python.org/mailman/listinfo/python-list
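A minimal sketch of this pattern (module and logger names are illustrative):

```python
import logging
import sys

# the central place (e.g. the main script) decides HOW logging happens:
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(name)s %(message)s",
    force=True,
)

# each module only decides WHAT to log, via its own named logger
# (inside a real module: logger = logging.getLogger(__name__)):
logger = logging.getLogger("mymodule")
logger.info("handled by the central configuration")
```

In the poster's setting, the `TimedRotatingFileHandler` would be attached once, centrally, instead of in every module's `log_config` helper.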
Re: argparse modify
נתי שטרן wrote at 2022-6-23 15:31 +0300: >how to solve this (argparse) > > >traceback: >Traceback (most recent call last): > File "u:\oracle\RTR.py", line 10, in >class sre_constants(): > File "u:\oracle\RTR.py", line 77, in sre_constants >MAXREPEAT = _NamedIntConstant(32,name=str(32)) >TypeError: 'name' is an invalid keyword argument for int() This does not look like an `argparse` problem: the traceback comes from `oracle/RTR.py`. -- https://mail.python.org/mailman/listinfo/python-list
Re: argparse modify
נתי שטרן wrote at 2022-6-24 08:28 +0300: >I copied code from argparse library and modified it > >בתאריך יום חמישי, 23 ביוני 2022, מאת Dieter Maurer : > >> נתי שטרן wrote at 2022-6-23 15:31 +0300: >> >how to solve this (argparse) >> > >> > >> >traceback: >> >Traceback (most recent call last): >> > File "u:\oracle\RTR.py", line 10, in >> >class sre_constants(): >> > File "u:\oracle\RTR.py", line 77, in sre_constants >> >MAXREPEAT = _NamedIntConstant(32,name=str(32)) >> >TypeError: 'name' is an invalid keyword argument for int() The exception information tells you: ` _NamedIntConstant(32,name=str(32))` raises a `TypeError`: `_NamedIntConstant` does not know the keyword parameter `name`. Thus, something is wrong with the `_NamedIntConstant` definition. -- https://mail.python.org/mailman/listinfo/python-list
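The usual way to make such a constant type accept the extra argument is to consume it in `__new__`, since `int()` itself rejects unknown keyword arguments; a sketch modelled on what CPython's `sre_constants` does (the exact details of the copied code are an assumption):

```python
class _NamedIntConstant(int):
    # int() rejects a `name` keyword, so it must be handled in __new__
    def __new__(cls, value, name):
        self = super().__new__(cls, value)
        self.name = name
        return self

    def __repr__(self):
        return self.name

MAXREPEAT = _NamedIntConstant(32, name=str(32))
```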
Re: Fwd: timedelta object recursion bug
Ben Hirsig wrote at 2022-7-28 19:54 +1000: >Hi, I noticed this when using the requests library in the response.elapsed >object (type timedelta). Tested using the standard datetime library alone >with the example displayed on >https://docs.python.org/3/library/datetime.html#examples-of-usage-timedelta > > > >It appears as though the timedelta object recursively adds its own >attributes (min, max, resolution) as further timedelta objects. I’m not >sure how deep they go, but presumably hitting the recursion limit. If you look at the source, you will see that `min`, `max` and `resolution` are class-level attributes. Their values are `timedelta` instances. Therefore, you can access e.g. `timedelta(days=365).min.max.resolution`: there is no recursion, only attribute lookups resolved on the class. But this is nothing to worry about. -- https://mail.python.org/mailman/listinfo/python-list
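This can be verified directly:

```python
from datetime import timedelta

d = timedelta(days=365)
# min, max and resolution are class-level attributes whose values are
# themselves timedelta instances, so every instance "has" them via the
# class; hence the apparently infinite nesting:
assert d.min is timedelta.min
assert d.min.max is timedelta.max
assert d.min.max.resolution is timedelta.resolution
```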
Re: Fwd: timedelta object recursion bug
Please stay on the list (so that others can help, too). Ben Hirsig wrote at 2022-7-29 06:53 +1000: >Thanks for the replies, I'm just trying to understand why this would be >useful? > >E.g. why does max need a min/max/resolution, and why would these attributes >themselves need a min/max/resolution, etc, etc? `max` is a `timedelta` and as such inherits (e.g.) `resolution` from the class (as does any other `timedelta` instance). Note that `timedelta` instances do not have a `max` (`min`|`resolution`) slot of their own. When `max` is looked up, it is first searched in the instance (and not found), then in the class, where it is found: all `max` accesses result in the same object. -- https://mail.python.org/mailman/listinfo/python-list
Re: Register multiple excepthooks?
Albert-Jan Roskam wrote at 2022-7-31 11:39 +0200: > I have a function init_logging.log_uncaught_errors() that I use for > sys.excepthook. Now I also want to call another function (ffi.dlclose()) > upon abnormal termination. Is it possible to register multiple > excepthooks, like with atexit.register? Or should I rename/redefine > log_uncaught_errors() so it does both things? `sys.excepthook` is a single function (not a list of them). This means: at any moment a single `excepthook` is effective. If you need a modular design, use a dispatcher function as your `excepthook`, associated with a registry (e.g. a `list`). The dispatcher can then call all registered functions. -- https://mail.python.org/mailman/listinfo/python-list
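A sketch of such a dispatcher (names are illustrative):

```python
import sys

_hooks = []        # the registry

def register_excepthook(func):
    """Register func to be called for uncaught exceptions."""
    _hooks.append(func)

def _dispatch(exc_type, exc_value, exc_tb):
    # single effective excepthook that fans out to all registered hooks
    for hook in _hooks:
        hook(exc_type, exc_value, exc_tb)

sys.excepthook = _dispatch
```

The poster's `log_uncaught_errors` and the `ffi.dlclose` wrapper would each be passed to `register_excepthook`.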
Re: Trying to understand nested loops
ojomooluwatolami...@gmail.com wrote at 2022-8-5 08:34 +0100: >Hello, I’m new to learning python and I stumbled upon a question nested loops. For future, more complex questions of this kind, you might have a look at the module `pdb` in Python's runtime library. It implements a debugger which allows you (among other features) to interactively run a program line by line and explore the state of all involved variables. There are also IDEs (= Integrated Development Environments) which support this. Whenever a program does things you do not understand, debugging usually helps to bring light into the scene. -- https://mail.python.org/mailman/listinfo/python-list
RE: Parallel(?) programming with python
Schachner, Joseph (US) wrote at 2022-8-9 17:04 +: >Why would this application *require* parallel programming? This could be >done in one, single thread program. Call time to get time and save it as >start_time. Keep a count of the number of 6 hour intervals, initialize it to >0. You could also use the `sched` module from Python's library. -- https://mail.python.org/mailman/listinfo/python-list
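A sketch of periodic scheduling with `sched` (the interval is shortened from six hours to keep it runnable):

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
runs = []

def periodic(interval):
    runs.append(time.time())
    if len(runs) < 3:                                 # events are one-shot:
        s.enter(interval, 1, periodic, (interval,))   # re-schedule ourselves

s.enter(0.01, 1, periodic, (0.01,))   # 0.01 s here; six hours would be 6 * 3600
s.run()                               # blocks until no events remain
print(len(runs))
```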
Re: Parallel(?) programming with python
Dennis Lee Bieber wrote at 2022-8-10 14:19 -0400: >On Wed, 10 Aug 2022 19:33:04 +0200, "Dieter Maurer" > ... >>You could also use the `sched` module from Python's library. > >Time to really read the library reference manual again... > > Though if I read this correctly, a long running action /will/ delay >others -- which could mean the (FFT) process could block collecting new >1-second readings while it is active. It also is "one-shot" on the >scheduled actions, meaning those actions still have to reschedule >themselves for the next time period. Both true. With `multiprocessing`, you can delegate long running activity to a separate process. -- https://mail.python.org/mailman/listinfo/python-list
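A sketch of such a delegation (the `heavy` function merely stands in for the long-running FFT):

```python
import multiprocessing as mp

def heavy(data):
    # stands in for the long-running work (e.g. the FFT)
    return sum(x * x for x in data)

if __name__ == "__main__":
    with mp.Pool(processes=1) as pool:
        pending = pool.apply_async(heavy, ([1, 2, 3],))
        # the main process stays free here to keep collecting readings
        result = pending.get()
    print(result)   # 14
```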
Re: setup.py + cython == chicken and the egg problem
Dan Stromberg wrote at 2022-8-16 14:03 -0700: > ... >I'm attempting to package up a python package that uses Cython. > ... > Installing build dependencies ... error > error: subprocess-exited-with-error > > ×? pip subprocess to install build dependencies did not run successfully. > ?? exit code: 1 > > [3 lines of output] > Looking in indexes: https://test.pypi.org/simple/ > ERROR: Could not find a version that satisfies the requirement >setuptools (from versions: none) > ERROR: No matching distribution found for setuptools The message tells you that there is a `setuptools` problem. I would start to locate all `setuptools` requirement locations. I am using `cython` for the package `dm.xmlsec.binding`. I have not observed nor heard of a problem similar to yours (but I have never tried `test.pypi.org`). -- https://mail.python.org/mailman/listinfo/python-list
Re: Superclass static method name from subclass
Ian Pilcher wrote at 2022-11-11 10:21 -0600: >Is it possible to access the name of a superclass static method, when >defining a subclass attribute, without specifically naming the super- >class? > >Contrived example: > > class SuperClass(object): > @staticmethod > def foo(): > pass > > class SubClass(SuperClass): > bar = SuperClass.foo > ^^ > >Is there a way to do this without specifically naming 'SuperClass'? Unless you overrode it, you can use `self.foo` or `SubClass.foo`; if you overrode it (and you are using either Python 3, or Python 2 with a so-called "new style class"), you can use `super`. When you use `super` outside a method definition, you must call it with explicit arguments. -- https://mail.python.org/mailman/listinfo/python-list
Re: Dealing with non-callable classmethod objects
Ian Pilcher wrote at 2022-11-11 15:29 -0600: > ... >In searching, I've found a few articles that discuss the fact that >classmethod objects aren't callable, but the situation actually seems to >be more complicated. > > >>> type(DuidLLT._parse_l2addr) > > >>> callable(DuidLLT._parse_l2addr) >True > >The method itself is callable, which makes sense. The factory function >doesn't access it directly, however, it gets it out of the _attrs >dictionary. > > >>> type(DuidLLT._attrs['layer2_addr']) > > >>> callable(DuidLLT._attrs['layer2_addr']) >False Accessing an object via a `dict` does not change its type, nor does putting it into a `dict`. Thus, you did not put `DuidLLT._parse_l2addr` (of type `method`) into your `_attrs` `dict` but something else (of type `classmethod`). This narrows down the space for your investigation: find out why the object you put into `_attrs` was not what you expected. -- https://mail.python.org/mailman/listinfo/python-list
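The difference can be reproduced in a few lines (the class and method names mimic the post; the body is hypothetical). A lookup in the class `__dict__` bypasses the descriptor protocol, which is exactly what happens when a registry dict is built inside the class body:

```python
class DuidLLT:
    @classmethod
    def _parse_l2addr(cls, value):   # hypothetical parser body
        return cls, value

# attribute lookup on the class runs the descriptor protocol and
# yields a bound method:
bound = DuidLLT._parse_l2addr
# raw lookup in the class namespace yields the classmethod object itself:
raw = DuidLLT.__dict__["_parse_l2addr"]

print(callable(bound), callable(raw))
# binding the raw descriptor explicitly makes it callable again:
rebound = raw.__get__(None, DuidLLT)
```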
Re: Importlib behaves differently when importing pyd file
Jach Feng wrote at 2022-11-15 22:52 -0800: >My working directory d:\Works\Python\ has a package 'fitz' looks like this: > >fitz\ >__init__.py >fitz.py >utils.py >_fitz.pyd > >There is a statement in fitz.py: >return importlib.import_module('fitz._fitz') > >It works fine under Python 3.4 interpreter: import fitz > >But under Python 3.8 I get an exception: import fitz >Traceback(... >... >... >ImportError: DLL load failed while importing _fitz The Python C-API is Python version dependent. Your `_fitz.pyd` may need to be recreated for Python 3.8. -- https://mail.python.org/mailman/listinfo/python-list
Re: pip issue
Gisle Vanem wrote at 2022-11-30 10:51 +0100: >I have an issue with 'pip v. 22.3.1'. On any >'pip install' command I get warning like this: > c:\> pip3 install asciinema > WARNING: Ignoring invalid distribution -arkupsafe > (f:\gv\python310\lib\site-packages) > WARNING: Ignoring invalid distribution -arkupsafe > (f:\gv\python310\lib\site-packages) > Collecting asciinema > Downloading asciinema-2.2.0-py3-none-any.whl (92 kB) > ... > >Otherwise no issues. But where is this text "-arkupsafe" stored >and how to get rid it it? I've searched through all of my .pth >files and found no such string. Have you looked at the content of the folder mentioned in the warnings (e.g. `...\site-packages`)? `pip` temporarily renames a package folder by replacing its first letter with `~` while it upgrades or uninstalls the package; a leftover folder of this kind (here likely `~arkupsafe`, from an interrupted `markupsafe` operation) triggers exactly such warnings and can simply be deleted. -- https://mail.python.org/mailman/listinfo/python-list
Re: ContextVars in async context
Marce Coll wrote at 2022-12-20 22:09 +0100: >Hi python people, hope this is the correct place to ask this! > >For a transactional async decorator I'm building I am using contextvars in >order to know when a transaction is open in my current context. > >My understanding is that if given the following call stack > >A >|- B >| |- C >|- D > |- E > >If you set a context var at A with value 1, and then override it at B with >value 2, then A, D and E will see value 1 and B and C will se value 2. Very >similar (although a bit more manual) than dynamic scopes in common lisp. This is not the way I understand context variables. In my view (--> PEP 0567), the context is the coroutine, not the call stack. This means: all calls in the same coroutine share the same context variables. In your example, if `B` overrides the context variable, then all later calls in this coroutine will see the overridden value. -- https://mail.python.org/mailman/listinfo/python-list
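A small sketch of this behavior (names illustrative): a plain awaited call runs in the caller's context, so the override made in `b` remains visible after the call returns, unlike in the dynamic-scope model described by the poster.

```python
import asyncio
import contextvars

var = contextvars.ContextVar("var", default=1)
seen = {}

async def b():
    var.set(2)            # runs in the same context as its caller

async def a():
    await b()
    seen["after_b"] = var.get()   # the override from b() is visible here

asyncio.run(a())
print(seen)
```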
Re: How make your module substitute a python stdlib module.
Antoon Pardon wrote at 2022-12-27 14:25 +0100: > ... >> But a simple "sys.modules['threading'] = QYZlib.threaders" will work. >> Of course, how *well* this works depends also on how well that module >> manages to masquerade as the threading module, but I'm sure you've >> figured that part out :) > >Well it is what will work for the moment. Thanks for the confirmation >this will indeed work. If you need to change a module in minor ways (e.g. only provide a custom `thread_ident` function), you can use a technique called "monkey patching" (which is patching at runtime). You can usually assign new values to module variables. Thus, you could try `threading.thread_ident = <your replacement function>`. This would affect most uses of the function -- which may not be a good idea. Alternatively, you could monkey patch the `logging` module. Look at its code and find out whether it accesses the function directly or indirectly via `threading`. In the first case, you would monkey patch the function; in the second, the `threading` variable. You could also use `dm.reuse` (a package maintained on PyPI) to override the method using the function. This way, your change would be even more localized. -- https://mail.python.org/mailman/listinfo/python-list
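A sketch of the first variant, using `threading.get_ident` as a stand-in for the `thread_ident` function discussed above (`thread_ident` is the poster's name for it, not necessarily a real attribute):

```python
import threading

# keep a reference to the original so the replacement can delegate to it
_original_get_ident = threading.get_ident

def my_get_ident():
    # customization would go here
    return _original_get_ident()

# the monkey patch: rebind the module attribute at runtime, affecting
# all later lookups of threading.get_ident
threading.get_ident = my_get_ident
```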
Re: Python - working with xml/lxml/objectify/schemas, datatypes, and assignments
aapost wrote at 2023-1-3 22:57 -0500: > ... >Consider the following: > >from lxml import objectify, etree >schema = etree.XMLSchema(file="path_to_my_xsd_schema_file") >parser = objectify.makeparser(schema=schema, encoding="UTF-8") >xml_obj = objectify.parse("path_to_my_xml_file", parser=parser) >xml_root = xml_obj.getroot() > >let's say I have a Version element, that is defined simply as a string >in a 3rd party provided xsd schema > > Does your schema include the third party schema? You might have a look at `PyXB`, too. It tries hard to enforce schema restrictions in Python code. -- https://mail.python.org/mailman/listinfo/python-list
Re: hello can I be in your group?
Keith Thompson wrote at 2023-1-6 17:02 -0800: >September Skeen writes: >> I was wondering if I could be in your group > >This is an unmoderated Usenet newsgroup. In fact, there are several access channels; the Usenet newsgroup is one of them. Another channel is the python-list mailing list. You can subscribe to it on "python.org" -- look for community/support --> mailing lists. The mailing list tends to have less spam than the newsgroup. -- https://mail.python.org/mailman/listinfo/python-list
Re: Mailing-Lists (pointer)
Chris Green wrote at 2023-1-10 08:45 +: > ... >Yes, this is important I think. Plus, if possible, if it's decided to >move to a forum format make that accessible by E-Mail. I much prefer a mailing list over an http based service. With mailing lists, all interesting messages arrive in my email reader, i.e. at a central place; with http based services, I have to visit the various sites to learn whether there is relevant new information. -- https://mail.python.org/mailman/listinfo/python-list
Re: Mailing-Lists (pointer)
Cameron Simpson wrote at 2023-1-11 08:37 +1100: > ... >There's a Discourse forum over at discuss.python.org. I use it in >"mailing list mode" and do almost all my interactions via email, exactly >as I do for python-list. Posts come to me and land in the same local >mail folder I use for python-list. My replies land on the forum as >expected (and of course also go by email to those members who have >turned that mode on). I am also using the Plone `Discourse` forum in "mailing list mode". It now works quite well, but it took some years before reaching this state. For a very long time, my mail replies did not reach the forum reliably. My latest complaint (more than half a year ago): when I had visited the forum via `http` (I did this occasionally to verify my reply had reached the forum), it sometimes thought I had seen a new message and did not inform me about it via mail. Meanwhile, all replies seem to arrive reliably and I no longer use `http` for access. Therefore, I do not know whether the behavior described above still persists. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python - working with xml/lxml/objectify/schemas, datatypes, and assignments
aapost wrote at 2023-1-10 22:15 -0500: >On 1/4/23 12:13, aapost wrote: >> On 1/4/23 09:42, Dieter Maurer wrote: >> ... >>> You might have a look at `PyXB`, too. >>> It tries hard to enforce schema restrictions in Python code. >> ... >Unfortunately picking it apart for a while and diving deeper in to a >rabbit hole, PyXB looks to be a no-go. > >PyXB while interesting, and I respect it's complexity and depth, is >lacking in design consistency in how it operates if you are trying to >modify and work with the resulting structure intuitively. > ... problem with simple types ... I use `PyXB` in `dm.saml2` and `dm.zope.saml2`, i.e. with the SAML2 schema definitions (which include those of XML signature and XML encryption). I had no problems with simple types. I just assign them to attributes of the Python objects representing the XML elements. `PyXB` does the right thing when it serializes those objects into XML. -- https://mail.python.org/mailman/listinfo/python-list
Re: [Help Request] Embedding Python in a CPP Application Responsibly & Functionally
John McCardle wrote at 2023-1-25 22:31 -0500: > ... >1) To get the compiled Python to run independently, I have to hack >LD_LIBRARY_PATH to get it to execute. `LD_LIBRARY_PATH=./Python-3.11.1 >./Python-3.11.1/python` . The need to set `LD_LIBRARY_PATH` can usually be avoided via a link-time option that tells the linker to embed library path information into the created executable or shared object (with GNU `ld` this is `-rpath`, passed as `-Wl,-rpath,<dir>` when linking through `gcc`). >Even when trying to execute from the same >directory as the binary & executable, I get an error, `/python: error >while loading shared libraries: libpython3.11.so.1.0: cannot open shared >object file: No such file or directory`. It might be necessary to provide the option mentioned above for all shared libraries involved in your final application. Alternatively, you could try to put the shared objects into a standard place (searched by default). >2) When running the C++ program that embeds Python, I see these messages >after initializing: >`Could not find platform independent libraries >Could not find platform dependent libraries ` Again: either put your installation in a standard place or tell the Python generation process about your non-standard place. >This is seemingly connected to some issues regarding libraries: When I >run the Python interpreter directly, I can get some of the way through >the process of creating a virtual environment, but it doesn't seem to >leave me with a working pip: > >`$ LD_LIBRARY_PATH=./Python-3.11.1 ./Python-3.11.1/python > >>> import venv > >>> venv.create("./venv", with_pip=True) >subprocess.CalledProcessError: Command >'['/home/john/Development/7DRL/cpp_embedded_python/venv/bin/python', >'-m', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit >status 127.` Run the command manually and see what errors this gives. > ... >3) I'm not sure I even need to be statically linking the interpreter. There should be no need (if all you want is the embedding). 
-- https://mail.python.org/mailman/listinfo/python-list
Re: asyncio questions
Frank Millman wrote at 2023-1-26 12:12 +0200: >I have written a simple HTTP server using asyncio. It works, but I don't >always understand how it works, so I was pleased that Python 3.11 >introduced some new high-level concepts that hide the gory details. I >want to refactor my code to use these concepts, but I am not finding it >easy. > >In simple terms my main loop looked like this - > > loop = asyncio.get_event_loop() > server = loop.run_until_complete( > asyncio.start_server(handle_client, host, port)) > loop.run_until_complete(setup_companies()) > session_check = asyncio.ensure_future( > check_sessions()) # start background task > print('Press Ctrl+C to stop') > try: > loop.run_forever() > except KeyboardInterrupt: > print() > finally: > session_check.cancel() # tell session_check to stop running > loop.run_until_complete(asyncio.wait([session_check])) > server.close() > loop.stop() Why does your code use several `loop.run*` calls? In fact, I would define a single coroutine and run that with `asyncio.run`. This way, the coroutine can use all `asyncio` features, including `loop.create_task`. -- https://mail.python.org/mailman/listinfo/python-list
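A sketch of such a single-coroutine structure, with placeholders standing in for the poster's `handle_client` and `check_sessions` (their real bodies are unknown):

```python
import asyncio

started = []   # records that the server came up (used for verification below)

async def handle_client(reader, writer):   # placeholder for the real handler
    writer.close()

async def check_sessions():                # placeholder background task
    while True:
        await asyncio.sleep(3600)

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    started.append(True)
    session_check = asyncio.create_task(check_sessions())
    try:
        async with server:
            await server.serve_forever()
    finally:
        session_check.cancel()   # cleanup runs on Ctrl+C / cancellation too

# asyncio.run(main())   # Ctrl+C stops the loop and triggers the cleanup
```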
Re: File write, weird behaviour
Azizbek Khamdamov wrote at 2023-2-19 19:03 +0500: > ... >Example 2 (weird behaviour) > >file = open("D:\Programming\Python\working_with_files\cities.txt", >'r+') ## contains list cities ># the following code DOES NOT add new record TO THE BEGINNING of the >file IF FOLLOWED BY readline() and readlines()# Expected behaviour: >new content should be added to the beginning of the file (as in >Example 1) >file.write("new city\n") > >file.readlines() >file.close() > >I could not find anything in documentation to explain this strange >behaviour. Why is this happening? The effect of "r+" (and friends) is specified by the C standard. The Linux doc (of `fopen`) tells us that ANSI C requires that a file positioning command (e.g. `seek`) must intervene between input and output operations. Your example above violates this condition. Therefore, weird behavior is to be expected. -- https://mail.python.org/mailman/listinfo/python-list
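A sketch of the fix: insert a `seek` when switching from writing to reading (note also that with "r+" a write at position 0 overwrites bytes, it does not insert them, so "added to the beginning" is not what actually happens):

```python
import os
import tempfile

# a small stand-in for the poster's cities.txt
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("line1\nline2\n")

with open(path, "r+") as f:
    f.write("new city\n")   # OVERWRITES the first 9 bytes of the file
    f.seek(0)               # positioning call required between output and input
    lines = f.readlines()

print(lines)
```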
Re: semi colonic
Thomas Passin wrote at 2023-2-22 21:04 -0500: >On 2/22/2023 7:58 PM, avi.e.gr...@gmail.com wrote: > ... >> So can anyone point to places in Python where a semicolon is part of a best >> or even good way to do anything? > >Mostly I use it to run small commands on the command line with python >-c. e.g. > >python -c "import sys;print('\n'.join(sys.path))" > >This is handy enough that I wouldn't like to do without. > >Another place I use the semicolon (once in a while) is for quick >debugging. I might add as line like, perhaps, > >import os; print(os.path.exists(filename)) I also see, `;` occasionally in `*.pth` files. -- https://mail.python.org/mailman/listinfo/python-list
Lock-free ID generation (was: Is there a more efficient threading lock?)
Chris Angelico wrote at 2023-3-1 12:58 +1100: > ... > The >atomicity would be more useful in that context as it would give >lock-free ID generation, which doesn't work in Python. I have seen `itertools.count` for that. This works because its `__next__` is implemented in "C" and therefore will not be interrupted by a thread switch. -- https://mail.python.org/mailman/listinfo/python-list
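A small demonstration (CPython-specific: the atomicity relies on `count.__next__` being implemented in C, so it is not guaranteed by the language):

```python
import itertools
import threading

counter = itertools.count()           # shared, lock-free ID source
buckets = [[] for _ in range(4)]

def worker(bucket):
    for _ in range(1000):
        bucket.append(next(counter))  # no lock around the shared counter

threads = [threading.Thread(target=worker, args=(b,)) for b in buckets]
for t in threads:
    t.start()
for t in threads:
    t.join()

ids = [i for b in buckets for i in b]
print(len(ids), len(set(ids)))        # every ID handed out exactly once
```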
Re: Bug 3.11.x behavioral, open file buffers not flushed til file closed.
aapost wrote at 2023-3-5 09:35 -0500: > ... >If a file is still open, even if all the operations on the file have >ceased for a time, the tail of the written operation data does not get >flushed to the file until close is issued and the file closes cleanly. This is normal: the buffer is flushed only if one of the following conditions is met: 1. you call `flush`, 2. the buffer overflows, or 3. the file is closed. -- https://mail.python.org/mailman/listinfo/python-list
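A small demonstration of conditions 1 and 3 (condition 2 would need a write larger than the buffer, typically 8 KiB):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w")
f.write("tail data")                       # small write: stays in the buffer
still_empty = os.path.getsize(path) == 0   # nothing on disk yet
f.flush()                                  # condition 1: explicit flush
flushed = open(path).read()
f.close()                                  # condition 3 would also have flushed
print(still_empty, flushed)
```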
Re: Problem in using libraries
Pranav Bhardwaj wrote at 2023-4-3 22:13 +0530: >Why can't I able to use python libraries such as numpy, nudenet, playsound, >pandas, etc in my python 3.11.2. It always through the error "import >'numpy' or any other libraries could not be resolved". The "libraries" you speak of are extensions (i.e. not part of the Python download). Extensions are Python minor version specific. You must install them for each Python minor version. E.g. you can use an extension installation for Python 3.10 for any Python 3.10.x, but you must install it again for Python 3.11. -- https://mail.python.org/mailman/listinfo/python-list
Re: Embedded python is not 100% stable
Guenther Sohler wrote at 2023-4-13 09:40 +0200: > ... >I have been working on adding embedded python into OpenSCAD ( >www.openscad.org) >for some time already. For that i coded/added an additional Python Type >Object >which means to hold openscad geometric data. > >It works quite well but unfortunately its not 100% stable and i have been >heavily checking >all the functions which are referenced in the PyType Object and tried to >match >them with the documentation which i found in the python web site The Python C/C++ interface is complex: it is easy to make mistakes which may lead to crashes. Often, `cython` (--> PyPI) can help you to define extension types in a much safer way. Maybe check its documentation. -- https://mail.python.org/mailman/listinfo/python-list
Re: Using loguru in a library
Roy Hann wrote at 2023-4-30 15:40 -:
>Is there anyone using loguru (loguru 0.5.3 in my case) successfully in a
>library?
> ...
> import mylib
> logger.enable('mylib')
>
>expecting that it would report any log messages above level DEBUG, just
>as it does when I don't disable logging.

Have you configured the logging system?

Note that `logging.config.fileConfig` may do strange things regarding
disabling (due to its default parameter
`disable_existing_loggers=True`). I had several cases of missing log
entries because `fileConfig` had disabled already existing loggers.

--
https://mail.python.org/mailman/listinfo/python-list
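The disabling pitfall can be demonstrated with `dictConfig`, which takes the same `disable_existing_loggers` option as `fileConfig` (a sketch; the logger name "mylib" stands in for whatever your library uses):

```python
import logging
import logging.config

# A logger created by a library at import time, *before* the
# application loads its logging configuration:
lib_logger = logging.getLogger("mylib")

logging.config.dictConfig({
    "version": 1,
    # The default (True) silently disables every already-existing
    # logger, including `lib_logger` above; fileConfig behaves the same.
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"level": "DEBUG", "handlers": ["console"]},
})

print(lib_logger.disabled)   # False - the library logger still works
```

With `disable_existing_loggers` left at its default of `True`, `lib_logger.disabled` would be `True` afterwards and the library's messages would vanish.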
Re: What do these '=?utf-8?' sequences mean in python?
Chris Green wrote at 2023-5-6 15:58 +0100:
>Chris Green wrote:
>> I'm having a real hard time trying to do anything to a string (?)
>> returned by mailbox.MaildirMessage.get().
>>
>What a twit I am :-)
>
>Strings are immutable, I have to do:-
>
>newstring = oldstring.replace("_", " ")

The solution based on `email.Header` proposed by `jak` is better.

--
https://mail.python.org/mailman/listinfo/python-list
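For reference, the `'=?utf-8?...?='` sequences are RFC 2047 encoded-words, and the stdlib `email.header` module decodes them properly, including the `_`-for-space convention that the `replace` hack only papers over (the sample header value below is made up for illustration):

```python
from email.header import decode_header, make_header

# An RFC 2047 encoded header value, as seen in raw mail messages:
raw = "=?utf-8?q?Caf=C3=A9_menu?="

# decode_header splits the value into (bytes, charset) chunks;
# make_header reassembles them into a proper unicode Header object.
decoded = str(make_header(decode_header(raw)))
print(decoded)   # Café menu
```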
Re: Do subprocess.PIPE and subprocess.STDOUT sametime
Horst Koiner wrote at 2023-5-9 11:13 -0700:
> ...
>For production i run the program with stdout=subprocess.PIPE and i can fetch
>than the output later. For just testing if the program works, i run with
>stdout=subprocess.STDOUT and I see all program output on the console, but my
>program afterwards crashes since there is nothing captured in the python
>variable. So I think I need to have the functionality of subprocess.PIPE and
>subprcess.STDOUT sametime.

You might want to implement the functionality of the *nix program
`tee` in Python. `tee` reads from one file and writes the data to
several files, i.e. it multiplexes one input file to several output
files.

Python's `tee` would likely be implemented by a separate thread. For
your case, the input file could be the subprocess's pipe and the
output files `sys.stdout` and a buffer of your own, used by your
application in place of the subprocess's pipe.

--
https://mail.python.org/mailman/listinfo/python-list
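A minimal sketch of that thread-based `tee` (the helper name `run_with_tee` is made up; a real version would also want to handle stderr and error exits):

```python
import subprocess
import sys
import threading

def run_with_tee(cmd):
    """Run cmd, echoing its stdout live while also capturing it."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    captured = []

    def pump():
        # The "tee": copy each line to the console and to a buffer.
        for line in proc.stdout:
            sys.stdout.write(line)
            captured.append(line)

    t = threading.Thread(target=pump)
    t.start()
    proc.wait()
    t.join()
    return "".join(captured)

output = run_with_tee([sys.executable, "-c", "print('ok')"])
```

The drain thread also avoids the classic deadlock where a full pipe buffer blocks the child while the parent waits for it.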
Re: "Invalid literal for int() with base 10": is it really a literal?
Chris Angelico wrote at 2023-5-26 18:29 +1000:
> ...
>However, if you want to change the wording, I'd be more inclined to
>synchronize it with float():
> float("a")
>Traceback (most recent call last):
> File "", line 1, in
>ValueError: could not convert string to float: 'a'

+1

--
https://mail.python.org/mailman/listinfo/python-list
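The two wordings side by side, as current CPython produces them:

```python
# Capture both ValueError messages for comparison.
try:
    int("a")
except ValueError as exc:
    int_msg = str(exc)

try:
    float("a")
except ValueError as exc:
    float_msg = str(exc)

print(int_msg)    # invalid literal for int() with base 10: 'a'
print(float_msg)  # could not convert string to float: 'a'
```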
Re: SMS API
ismail nagi wrote at 2021-2-3 08:48 -0800:
>I would like to know how an sms api is created.

I assume that "sms api" means that your Python application should be
able to send SMS messages.

In this case, you need a service which interfaces between your device
(mobile phone, computer, tablet, ...) and the telephone network. Such
a service will provide some API - and depending on the type of API,
you might be able to use it from Python.

The first thing is to find out about this service.

--
https://mail.python.org/mailman/listinfo/python-list
Re: SMS API
ismail nagi wrote at 2021-2-3 21:06 +0300:
>Yes, its about sending messages. For example, something like
>twilio...it's an SMS API, can something like twilio be created using python
>and how (just a basic idea)? Thank You.

"twilio" provides a web service interface to send messages. You can
use Python libraries to access web services (e.g. "suds") - and
thereby control the "twilio" service from Python applications.

If your aim is to implement "twilio"-like functionality from scratch,
you need a gateway between your device and the telephone network. In
particular, this gateway must ensure proper payment for the use of the
telephone network; as a consequence, access will be restricted and
subject to quite strict policies (to avoid abuse). If your telephone
network (access point) does not provide easy access, it is likely
very difficult to implement access on your own.

If you are looking for an application on a mobile phone, the phone's
operating system likely provides a service to send SMS messages. There
are Python components that facilitate the use of Python on mobile
phones (though I have no experience in this domain). With Python for
mobile phones, it may be possible to access those SMS services
provided by the phone's operating system.

--
https://mail.python.org/mailman/listinfo/python-list
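The client side of such a web-service SMS API is usually just an authenticated HTTP POST. A heavily hedged sketch with the stdlib only - the endpoint URL and the payload shape below are entirely hypothetical; every real provider (twilio included) documents its own URL, authentication scheme and field names:

```python
import json
import urllib.request

# Hypothetical gateway endpoint - substitute your provider's real URL.
GATEWAY_URL = "https://sms.example.invalid/api/send"

def build_sms_payload(to, body):
    """Assemble the JSON body a typical HTTP SMS gateway expects."""
    return {"to": to, "body": body}

def send_sms(to, body, url=GATEWAY_URL):
    data = json.dumps(build_sms_payload(to, body)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # network round trip
        return resp.status
```

What this sketch cannot replace is the gateway itself: the hard part discussed above (network access, billing, abuse prevention) lives on the server side of that URL.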