Re: Cookbook 2nd ed Credits (was Re: The Industry choice)
Wow, I didn't realize that I made that significant a contribution :-) > 3: 9 u'John Nielsen' Well, I guess I did and I didn't. I worked hard to put postings up before I started taking classes again at a university last fall (with little kids and a full-time job, classes are a frustrating timesink). I am glad I could make a difference and proud to see what the community could put together. And of course, thanks to Alex and all the editors for their hard work. The cookbook site is, of course, among my default startup pages, so I can peruse whatever new things pop up there :-) I am slowly building up ideas on more things to post. This spring break, hopefully I'll get my thoughts together enough to put it all down coherently. john -- http://mail.python.org/mailman/listinfo/python-list
Re: windows/distutils question
If the APPDATA environment variable (os.environ['APPDATA']) is present on non-English Windows, you may be able to use that to get what you need. john -- http://mail.python.org/mailman/listinfo/python-list
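A minimal sketch of that idea; the fallback to the home directory is my own assumption, not part of the original suggestion:

import os

# Prefer the APPDATA environment variable; fall back to the user's home
# directory if it is missing (the fallback is only an assumption).
appdata = os.environ.get("APPDATA") or os.path.expanduser("~")
print(appdata)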
Re: How to find Windows "Application data" directory??
I had a post yesterday on just that. Anyway, I always love it when what could be a really annoying problem reduces to something simple and elegant like a python dict (in general, I find dictionaries rock). I remember a similar eureka moment when, some time ago, I found it really neat that split with no arguments splits on whitespace. Or that things like min and sort actually travel down levels of data structures for you. Or when one realizes that "in" works on all sorts of sequences, even file handles, or that you can treat gzipped files just like normal files, or the coolness of cStringIO, or the startswith and endswith methods on strings, or . . . Hmm, I wonder if there is a page of the little python coolnesses. I recall there being one of python annoyances. john -- http://mail.python.org/mailman/listinfo/python-list
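For what it's worth, a quick sketch of a few of those little coolnesses (the gzip file name is made up):

line = "  spam   eggs\tham  "
print(line.split())                      # no-arg split: any run of whitespace

print("warning" in ["info", "warning"])  # "in" works on all sorts of sequences

name = "report.csv"
print(name.startswith("rep"))            # handy string tests ...
print(name.endswith(".csv"))             # ... no slicing arithmetic needed

import gzip
# gzip.open("log.gz", "rt") gives back a file-like object you can loop over
# just like a normal text file (the path here is hypothetical).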
Re: low-end persistence strategies?
People sometimes run to complicated systems when the solution is right in front of them. In this case, it is the filesystem itself. It turns out mkdir is an atomic operation (at least on the filesystems I've encountered), and from that simple fact you can build something reasonable, as long as you do not need high performance and space isn't an issue. You need a 2-layer lock (make 2 directories) and you need to keep 2 data files around, plus a 3rd temporary file. The reader reads from the newer of the 2 data files. The writer makes the locks, deletes the older data file, and renames its temporary file to be the new data file. You could have the locks expire after 10 minutes, to take care of failures to clean up. Ultimately, the writer is responsible for keeping the locks alive. The writer knows a lock is its own because it carries its timestamp. If the writer dies, no big deal, since it only affected a temporary file and the locks will expire. Renaming the temporary file takes advantage of the fact that a rename is essentially immediate. Since the reader only reads from the newer of the 2 files (if both are available), once the writer's rename of the temporary file completes, any future reads will hit the newest data. And deleting the older file doesn't matter, since the reader never looks at it. If you want more specifics let me know. john -- http://mail.python.org/mailman/listinfo/python-list
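A sketch of the idea, simplified to a single lock directory rather than the 2-layer lock described above; all names are made up and the timeout handling is kept minimal:

import os
import time

LOCK = "writer.lock"            # lock directory (mkdir is the atomic step)
DATA = ["data.0", "data.1"]     # the two alternating data files
TMP = "data.tmp"                # writer's scratch file
LOCK_TIMEOUT = 600              # expire stale locks after 10 minutes

def acquire_lock():
    # Only one process can ever succeed in creating the directory.
    while True:
        try:
            os.mkdir(LOCK)
            return
        except OSError:
            # Break a lock whose owner appears to have died.
            if time.time() - os.path.getmtime(LOCK) > LOCK_TIMEOUT:
                try:
                    os.rmdir(LOCK)
                except OSError:
                    pass
            time.sleep(0.1)

def newest():
    # Readers always use the newer of the two data files.
    existing = [name for name in DATA if os.path.exists(name)]
    return max(existing, key=os.path.getmtime) if existing else None

def write(text):
    acquire_lock()
    try:
        with open(TMP, "w") as f:       # write everything to the temp file first
            f.write(text)
        current = newest()
        old = DATA[0] if current is None else [n for n in DATA if n != current][0]
        if os.path.exists(old):
            os.remove(old)              # drop the older copy ...
        os.rename(TMP, old)             # ... and promote the temp file in one step
    finally:
        os.rmdir(LOCK)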
Re: low-end persistence strategies?
You do not need a 24/7 process for low-end persistence if you rely on the fact that only one thing can ever succeed in making a given directory. I haven't seen a filesystem where this isn't the case. This type of locking works cross-thread, cross-process, whatever. An example of that type of locking can be found at: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252495 The only problem with this locking is: if a process dies without cleaning up its lock, how do you know when to remove it? If you can assume that writes to the database are quick (fine for the low end), just have the locks time out after a minute. If you need to keep a lock longer, refresh and reassert it as needed. With 2 lock directories, 2 data files and 1 temporary file, you end up with a hard-to-break system. The cost is disk space, which for the low end should be fine. Basically, the interesting question is how far one can actually go, cross-platform, in building a persistence system without any long-running process business. john -- http://mail.python.org/mailman/listinfo/python-list
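A tiny sketch of the "refresh and reassert" part, assuming the lock is a directory as in the recipe above and that the platform lets you touch a directory's timestamp (the name is hypothetical):

import os
import time

LOCK = "writer.lock"

def keep_lock_alive():
    # Bump the lock directory's mtime so other processes do not decide it
    # has gone stale and steal it; call this every few seconds while a
    # longer-than-usual write is still in progress.
    now = time.time()
    os.utime(LOCK, (now, now))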
Re: Python - what is the fastest database ?
It depends on what you mean by database. If you want really fast I/O, try PyTables. "PyTables is a hierarchical database package designed to efficiently manage very large amounts of data." http://pytables.sourceforge.net/html/WelcomePage.html
Some more comments from the webpage:
# High performance I/O: On modern systems, and for large amounts of data, tables and array objects can be read and written at a speed only limited by the performance of the underlying I/O subsystem. Moreover, if your data is compressible, even faster than your I/O maximum throughput (!).
# Support for files bigger than 2 GB: So you won't be limited if you want to deal with very large datasets. In fact, PyTables supports full 64-bit file addressing even on 32-bit platforms (provided that the underlying filesystem does so too, of course).
# Architecture-independent: PyTables has been carefully coded (as has HDF5 itself) with little-endian/big-endian byte-ordering issues in mind. So you can write a file on a big-endian machine (like a Sparc or MIPS) and read it on a little-endian one (like Intel or Alpha) without problems.
# Portability: PyTables has been ported to many architectures, namely Linux, Windows, MacOSX, FreeBSD, Solaris, IRIX, and probably works on many more. Moreover, it also runs just fine on 64-bit platforms (like AMD64, Intel64, UltraSparc or MIPS RXX000 processors).
-- http://mail.python.org/mailman/listinfo/python-list
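As an illustration only, here is roughly what writing a table with PyTables looks like; this sketch uses the current PyTables API names, which differ slightly from the release that page described, and the file and column names are made up:

import tables

class Reading(tables.IsDescription):
    name = tables.StringCol(16)       # fixed-width string column
    value = tables.Float64Col()       # double-precision column

h5file = tables.open_file("demo.h5", mode="w")
table = h5file.create_table("/", "readings", Reading, "example table")
row = table.row
for i in range(1000):
    row["name"] = "sensor-%d" % i
    row["value"] = float(i)
    row.append()                      # rows are buffered ...
table.flush()                         # ... and flushed in large chunks
h5file.close()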
Re: Queue.Queue-like class without the busy-wait
Cool code! One possible sticking point is that I think select only works on network sockets on Windows. That would make the code not cross-platform. john -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue.Queue-like class without the busy-wait
Thinking about cross-platform issues. I found this, from the venerable Tim Peters to be enlightening for python's choice of design: "It's possible to build a better Queue implementation that runs only on POSIX systems, or only on Windows systems, or only on one of a dozen other less-popular target platforms. The current implementation works fine on all of them, although is suboptimal compared to what could be done in platform-specific Queue implementations. " Here is a link: http://groups-beta.google.com/group/comp.lang.python/messages/011f680b2dac320c,a03b161980b81d89,1162a30e96ae330a,0db1e52548493843,6b8d593c84ad4fd4,b6293a53f98252ce,82cddc89805b4b56,81c7289cc4cb4441,0906b24cc1534844,3ff6629391074ed4?thread_id=55b80d05e9d54705&mode=thread&noheader=1&q=queue+timeout+python#doc_011f680b2dac320c The whole thread (oops a pun) is worth a read. john -- http://mail.python.org/mailman/listinfo/python-list
Is socket.shutdown(1) useless
Issues of socket programming can be weird, so I'm looking for some comments. In my python books I find exclusive use of socket.close(). From my other readings, I know about a "partial close" operation. So, I figured it would be useful to post some code about how socket.close() has an implicit send in it, and how you can actually gain some clarity by being more explicit with the partial close, which means splitting socket.close() up into socket.shutdown(1) and socket.close(). I got a response in essence saying: why bother, socket.shutdown isn't useful. Here is my thinking: with a standard socket.close(), the client closes the socket immediately after the implicit send. This means the client assumes it was ok to actually close the socket, independent of how the server reacts to that last bit of data. To me that is an assumption you may not always want to make. If, instead, the client does a socket.shutdown(1) to say it is done sending, it can still recv and wait for the server to respond with either: 1) yep, I agree you finished sending, or 2) I know you are done, and I got your data, but I do not think you are done. To me this seems like a very useful distinction, since now, if the client cares, it can find out whether its final communication actually mattered. It helps avoid what I call the Princess Bride phenomenon of #2: "You keep using that word. I do not think it means what you think it means." So, is this whole business with socket.shutdown mostly useless? So useless that I cannot find any mention of it in the 2nd edition of Programming Python? john -- http://mail.python.org/mailman/listinfo/python-list
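A minimal sketch of the distinction (the host and port are hypothetical, and the server is assumed to reply once it sees end-of-stream):

import socket

sock = socket.create_connection(("example.com", 7777))
sock.sendall(b"last chunk of data")
sock.shutdown(socket.SHUT_WR)   # shutdown(1): "I am done sending"
reply = sock.recv(4096)         # still legal: wait for the server's verdict
sock.close()                    # only now give up the socket entirely

With a bare close(), the last two steps collapse into one, and the client never gets to hear how the server felt about that final chunk.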
Re: HTTPSConnection script fails, but only on some servers (long)
I have a couple of recipes at the python cookbook site that allow python to do proxy auth and ssl. The easiest one is: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/301740 john

[EMAIL PROTECTED] wrote:
> Well, HTTPSConnection does not support proxies (HTTP/CONNECT + switch to HTTPS), and it hasn't ever. Although the code seems to make sense, there is no support for handling that switch. Probably a good thing to complain about (file a new bug report). In the meantime you should take a look at cURL and pycurl, which do support all kinds of more extreme HTTP (FTP, etc.) handling, like using https over a proxy.
> Andreas
>
> On Tue, Apr 12, 2005 at 03:37:33AM -0400, Steve Holden wrote:
> > Paul Winkler wrote:
> > > This is driving me up the wall... any help would be MUCH appreciated. I have a module that I've whittled down into a 65-line script in an attempt to isolate the cause of the problem. (Real domain names have been removed in everything below.)
> > >
> > > SYNOPSIS:
> > > I have 2 target servers, at https://A.com and https://B.com. I have 2 clients, wget and my python script. Both clients are sending GET requests with exactly the same urls, parameters, and auth info. wget works fine with both servers. The python script works with server A, but NOT with server B. On server B, it provoked a "Bad Gateway" error from Apache. In other words, the problem seems to depend on both the client and the server. Joy. Logs on server B show malformed URLs ONLY when the client is my python script, which suggests the script is broken... but logs on server A show no such problem, which suggests the problem is elsewhere.
> > >
> > > DETAILS
> > > Note, the module was originally written for the express purpose of working with B.com; A.com was added as a point of reference to convince myself that the script was not totally insane. Likewise, wget was tried when I wanted to see if it might be a client problem. Note the servers are running different software and return different headers. wget -S shows this when it (successfully) hits url A:
> > >   1 HTTP/1.1 200 OK
> > >   2 Date: Tue, 12 Apr 2005 05:23:54 GMT
> > >   3 Server: Zope/(unreleased version, python 2.3.3, linux2) ZServer/1.1
> > >   4 Content-Length: 37471
> > >   5 Etag:
> > >   6 Content-Type: text/html;charset=iso-8859-1
> > >   7 X-Cache: MISS from XXX.com
> > >   8 Keep-Alive: timeout=15, max=100
> > >   9 Connection: Keep-Alive
> > > ... and this when it (successfully) hits url B:
> > >   1 HTTP/1.1 200 OK
> > >   2 Date: Tue, 12 Apr 2005 04:51:30 GMT
> > >   3 Server: Jetty/4.2.9 (Linux/2.4.26-g2-r5-cti i386 java/1.4.2_03)
> > >   4 Via: 1.0 XXX.com
> > >   5 Content-Length: 0
> > >   6 Connection: close
> > >   7 Content-Type: text/plain
> > > Only things notable to me, apart from the servers, are the "Via:" and "Connection:" headers. Also, the "Content-Length: 0" from B is odd, but that doesn't seem to be a problem when the client is wget. Sadly I don't grok HTTP well enough to spot anything really suspicious.
> > > The apache ssl request log on server B is very interesting. When my script hits it, the request logged is like:
> > >   A.com - - [01/Apr/2005:17:04:46 -0500] "GET https://A.com/SkinServlet/zopeskin?action=updateSkinId&facilityId=1466&skinId=406 HTTP/1.1" 502 351
> > > ... which, apart from the 502, I thought reasonable until I realized there's not supposed to be a protocol or domain in there at all. So this is clearly wrong. When the client is wget, the log shows something more sensible like:
> > >   A.com - - [01/Apr/2005:17:11:04 -0500] "GET /SkinServlet/zopeskin?action=updateSkinId&facilityId=1466&skinId=406 HTTP/1.0" 200 -
> > > ... which looks identical except for not including the spurious protocol and domain, and the response looks as expected (200 with size 0). So, that log appears to be strong evidence that the problem is in my client script, right? The failing request is coming in with some bad crap in the path, which Jboss can't handle, so it barfs and Apache responds with Bad Gateway. Right? So why does the same exact client code work when hitting server B?? No extra gunk in the logs there. AFAICT there is nothing in the script that could lead to such an odd request only on server A.
> > >
> > > THE SCRIPT
> > > #!/usr/bin/python2.3
> > > from httplib import HTTPSConnection
> > > from urllib import urlencode
> > > import re
> > > import base64
> > > url_re = re.compile(r'^([a-z]+)://([A-Za-z0-9._-]+)(:[0-9]+)?')
> > > target_urls = {
> > >     'B': 'https://B/SkinServlet/zopeskin',
> > >     'A': 'https://A/zope/manage_main',
> > > }
> > > auth_info = {'B': ('userXXX', 'passXXX'),
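The recipe linked above predates today's stdlib, but the idea behind it is a plain CONNECT tunnel with proxy credentials; a rough sketch in modern Python (proxy, credentials and target are all placeholders, not the recipe itself) looks something like this:

import base64
import socket
import ssl

def https_via_proxy(proxy_host, proxy_port, target_host, user, password):
    # 1. Plain TCP connection to the proxy.
    sock = socket.create_connection((proxy_host, proxy_port))
    # 2. Ask the proxy to open a tunnel to the target, with basic auth.
    auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    request = ("CONNECT %s:443 HTTP/1.0\r\n"
               "Proxy-Authorization: Basic %s\r\n\r\n" % (target_host, auth))
    sock.sendall(request.encode())
    reply = sock.recv(4096)
    if b" 200 " not in reply.split(b"\r\n", 1)[0]:
        raise IOError("proxy refused CONNECT: %r" % reply[:80])
    # 3. Start TLS through the tunnel and talk HTTP to the real server.
    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=target_host)
    tls.sendall(b"GET / HTTP/1.0\r\nHost: " + target_host.encode() + b"\r\n\r\n")
    return tls.recv(4096)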
Re: How to run Python in Windows w/o popping a DOS box?
Python.exe starts up a windows console, which gives you things like stdin, stderr, and stdout from the C runtime. Be warned that you do not have those with the console-less pythonw.exe, which MS intends for gui applications. It reminds me of select() on windows only working halfway (just with sockets) because of the history of how all this got added to windows. A lot of half-way stuff. john -- http://mail.python.org/mailman/listinfo/python-list
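One practical consequence, sketched here: under pythonw.exe a script has no usable standard streams (in current Python 3 they are simply None), so anything that prints needs a guard or a redirect. The log file name is made up.

import sys

if sys.stdout is None:                  # no console: running under pythonw.exe
    sys.stdout = open("run.log", "w")   # hypothetical log file
if sys.stderr is None:
    sys.stderr = sys.stdout

print("safe to print now, console or not")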
Re: How to run Python in Windows w/o popping a DOS box?
Click on My Computer, then select Tools -> Folder Options -> File Types. Scroll down to where the py extension is defined, highlight it, and click on Advanced. Then highlight "open" and hit the Edit button. There you should see python.exe with some other stuff; change it to pythonw.exe. Then, in the future, if you click on a python program, it should use pythonw.exe. The steps should be roughly similar on different versions of windows. I always liked the #! syntax of unix, too bad MS doesn't have it. And too bad their command prompt sucks, too bad process creation is heavy, too bad . . . -- http://mail.python.org/mailman/listinfo/python-list
Re: How to run Python in Windows w/o popping a DOS box?
>>I think of it like the ''.join semantics. The object knows best how to >>handle join (even if it looks wierd to some people). In the #! case, >>the program knows best how to start itself. >This I don't understand ;-) With ','.join(['a','b','c']) you rely on the thing that wants to join the sequence to handle the issue of joining, rather than having the sequence itself understand joining. I think of it as the object knowing best. I think of #! as "the program knowing best" how to start up, rather than having to rely on something else to deal with it. I also like the text-based simplicity and explicitness. Just like the text-based "etc" files on unix versus the registry on windows. And, if you want, you can add more power, like using env variables in #!. It can be as simple or as powerful as you need; you can use whatever means you want to manage the #! line: text editors, other programs, etc. It is data-centric, just like http, sql, and file I/O, rather than verb-centric (learn another whole set of methods to figure out how to change startup). Hopefully I am making sense, john -- http://mail.python.org/mailman/listinfo/python-list
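A quick illustration of the ''.join point:

parts = ['a', 'b', 'c']
print(','.join(parts))     # 'a,b,c' -- the separator string does the joining
print('--'.join(parts))    # same list, different joiner
# Lists have no .join method; any string does, so the string "knows best",
# and the same method works on any iterable of strings, not just lists.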
Re: How to run Python in Windows w/o popping a DOS box?
>I am objecting to embedding metadata in data. >I think we were just looking at different aspects of the elephant ;-) I think you are right on both counts. Given current filesystems, I like the #! method. I tend to like approaches that have very low entrance fees and can scale up. Kinda like python's "hello world" versus Java's. It seems as though our managing of complexity fails exponentially the more cruft we add to things. The more you can keep things simple, the longer you can avoid hitting that inflection point. john -- http://mail.python.org/mailman/listinfo/python-list
Re: Translate this to python?
For some reason, occasionally when I see xrange, I think "But wasn't that deprecated since range is now a . . . oh wait, that's xreadlines". xrange is a cool thing the few times when you really need it. john
> Not sure what i is really for, but j seems to be independent,
> so perhaps (also untested, and caveat: it's past bedtime)
>
> i = nPoints - 1
> for j in xrange(nPoints):
>     # whatever
>     i = j
>
> Regards,
> Bengt Richter
-- http://mail.python.org/mailman/listinfo/python-list
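For what it's worth, a small sketch of what that quoted loop is doing: the previous/current index idiom, e.g. for walking the edges of a closed polygon. nPoints here is just an example value, and range stands in for the Python 2 xrange.

nPoints = 5
i = nPoints - 1                # start with the last point as the "previous" one
for j in range(nPoints):
    print("edge from point %d to point %d" % (i, j))
    i = j                      # the current point becomes the previous one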
Re: Occasional OSError: [Errno 13] Permission denied on Windows
File attributes may be an issue too. Take a look at the recipe at: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/303343 which ensures the file attributes are normal before you delete it. john -- http://mail.python.org/mailman/listinfo/python-list
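A sketch of the usual fix (not the recipe itself): clear the read-only attribute and retry, since that attribute is a common cause of Errno 13 when deleting on Windows.

import os
import stat

def force_remove(path):
    try:
        os.remove(path)
    except OSError:
        os.chmod(path, stat.S_IWRITE)   # drop the read-only attribute
        os.remove(path)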
Re: SSL (HTTPS) with 2.4
If you need some help, send me an email, and if we figure this out we can post a resolution. I have used both approaches (having authored them). Or at least let me know what site you are going to, and I will try them on a windows box and see if I can debug what the heck is going on. john -- http://mail.python.org/mailman/listinfo/python-list
Re: SSL (HTTPS) with 2.4
After failed attempts at trying to get my code to work with squid, I did some research into this and came up with some info. http://www.python.org/peps/pep-0320.txt "- It would be nice if the built-in SSL socket type could be used for non-blocking SSL I/O. Currently packages such as Twisted which implement async servers using SSL have to require third-party packages such as pyopenssl." My guess is that the squid proxy server uses non-blocking sockets, which python's ssl support does not handle. And, of course, after looking at the squid site, I found this: "Unlike traditional caching software, Squid handles all requests in a single, non-blocking, I/O-driven process." Now, I haven't had time to verify this, but it would explain why the non-ssl proxy authentication works while the ssl only partially works, and also why I get success with a different type of proxy server. For a clue as to why there is this problem, I would also recommend looking at http://www.openssl.org/support/faq.html, specifically the section on non-blocking i/o. It looks like pyopenssl would be an option: http://pyopenssl.sourceforge.net/ Its docs comment that it was written because m2crypto's error handling was not finished for non-blocking i/o: http://pyopenssl.sourceforge.net/pyOpenSSL.txt "The reason this module exists at all is that the SSL support in the socket module in the Python 2.1 distribution (which is what we used, of course I cannot speak for later versions) is severely limited. When asking about SSL on the comp.lang.python newsgroup (or on python-list@python.org) people usually pointed you to the M2Crypto package. The M2Crypto.SSL module does implement a lot of OpenSSL's functionality but unfortunately its error handling system does not seem to be finished, especially for non-blocking I/O. I think that much of the reason for this is that M2Crypto^1 is developed using SWIG^2. This makes it awkward to create functions that e.g. can return both an integer and NULL since (as far as I know) you basically write C functions and SWIG makes wrapper functions that parses the Python argument list and calls your C function, and finally transforms your return value to a Python object." john -- http://mail.python.org/mailman/listinfo/python-list
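For anyone hitting this today: the current ssl module does expose what the old wrapper hid, namely OpenSSL's WANT_READ/WANT_WRITE conditions, which is exactly the non-blocking dance that openssl FAQ section describes. A rough sketch only (the target host is just an example):

import select
import socket
import ssl

ctx = ssl.create_default_context()
raw = socket.create_connection(("www.python.org", 443))
raw.setblocking(False)
tls = ctx.wrap_socket(raw, server_hostname="www.python.org",
                      do_handshake_on_connect=False)
while True:
    try:
        tls.do_handshake()      # may need several tries on a non-blocking socket
        break
    except ssl.SSLWantReadError:
        select.select([tls], [], [])    # wait until readable, then retry
    except ssl.SSLWantWriteError:
        select.select([], [tls], [])    # wait until writable, then retry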