Re: Odd closure issue for generators
On Jun 5, 6:49 am, a...@pythoncraft.com (Aahz) wrote: > In article > <05937a34-5490-4b31-9f07-a319b44dd...@r33g2000yqn.googlegroups.com>, > Michele Simionato wrote: > > > > >Actually, in Scheme one would have to fight to define > >a list comprehension (more in general loops) working as > >in Python: the natural definition works as the OP wants. See > >http://www.artima.com/weblogs/viewpost.jsp?thread=3D251156and the > >comments below for the details. > > This URL isn't working for me, gives 500. This happens sometimes with Artima. Usually it is just a matter of waiting a few minutes. -- http://mail.python.org/mailman/listinfo/python-list
Re: Odd closure issue for generators
On Jun 5, 6:49 am, a...@pythoncraft.com (Aahz) wrote: > In article > <05937a34-5490-4b31-9f07-a319b44dd...@r33g2000yqn.googlegroups.com>, > Michele Simionato wrote: > > > > >Actually, in Scheme one would have to fight to define > >a list comprehension (more in general loops) working as > >in Python: the natural definition works as the OP wants. See > >http://www.artima.com/weblogs/viewpost.jsp?thread=3D251156and the > >comments below for the details. > > This URL isn't working for me, gives 500. Anyway, the point is that to explain Python behavior with closures in list/generator comprehension it is not enough to invoke late bindings (Scheme has late bindings too but list comprehension works differently). The crux is in the behavior of the for loop: in Python there is a single scope and the loop variable is *mutated* at each iteration, whereas in Scheme (or Haskell or any other functional language) a new scope is generated at each iteration and there is actually a new loop variable at each iteration: no mutation is involved. Common Lisp works like Python. It is a design decision which at the end comes down to personal preference and different languages make different choices with no clear cut winner (I personally prefer the more functional way). -- http://mail.python.org/mailman/listinfo/python-list
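A minimal sketch of that single-scope behaviour (plain Python, not tied to any code in this thread): every closure created in the loop ends up seeing the final value of the one loop variable, and the usual default-argument trick captures the current value instead.

    fs = [lambda: i for i in range(3)]
    print [f() for f in fs]          # [2, 2, 2] -- one variable, mutated each pass

    fs = [lambda i=i: i for i in range(3)]
    print [f() for f in fs]          # [0, 1, 2] -- value captured per iteration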
Re: Yet another unicode WTF
In article <8763fbmk5a@benfinney.id.au>, Ben Finney wrote: > Ned Deily writes: > > $ python2.6 -c 'import sys; print sys.stdout.encoding, \ > > sys.stdout.isatty()' > > UTF-8 True > > $ python2.6 -c 'import sys; print sys.stdout.encoding, \ > > sys.stdout.isatty()' > foo ; cat foo > > None False > > So shouldn't the second case also detect UTF-8? The filesystem knows > it's UTF-8, the shell knows it too. Why doesn't Python know it? The filesystem knows what is UTF-8? While the setting of the locale environment variables may influence how the file system interprets the *name* of a file, it has no direct influence on what the *contents* of a file is or is supposed to be. Remember, in python 2.x a file is just a sequence of bytes. If you want to write encoded Unicode to the file, you need to use something like codecs.open to wrap the file object with the proper streamwriter encoder. What confuses matters in 2.x is the print statement's under-the-covers implicit Unicode encoding for files connected to a terminal: http://bugs.python.org/issue612627 http://bugs.python.org/issue4947 http://wiki.python.org/moin/PrintFails >>> x = u'\u0430\u0431\u0432' >>> print x [nice looking characters here] >>> sys.stdout.write(x) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) >>> sys.stdout.encoding 'UTF-8' In python 3.x, of course, the encoding happens automatically but you still have to tell python, via the "encoding" argument to open, what the encoding of the file's content is (or accept python's default which may not be very useful): >>> open('foo1','w').encoding 'mac-roman' WTF, indeed. -- Ned Deily, n...@acm.org -- http://mail.python.org/mailman/listinfo/python-list
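A minimal sketch of the codecs.open approach mentioned above (the filename is just an example):

    import codecs

    f = codecs.open('foo2', 'w', encoding='utf-8')
    f.write(u'\u0430\u0431\u0432\n')   # the wrapper encodes to UTF-8 bytes
    f.close()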
unified way or cookbook to access cheetah from other frameworks django/webpy/pylons
Can you or tavis or one of the cheetah masters please show us how to use cheetah from webpy? The only useful thing webpy's cheetah.py does is replace the #include with the content of the files. Can you share a simple snippet/cookbook example of how to hook up cheetah from other frameworks such as django/webpy/pylons? If there is a unified way to access the cheetah template then the problems will be easier to fix. Thanks a lot... for trying to help... -- Bidegg worlds best auction site http://bidegg.com -- http://mail.python.org/mailman/listinfo/python-list
Re: Feedparser problem
Jonathan Nelson wrote: > I'm trying to add a feedreader element to my django project. I'm > using Mark Pilgrim's great feedparser library. I've used it before > without any problems. I'm getting a TypeError I can't figure out. > I've tried searching google, bing, google groups to no avail. > > Here's the dpaste of what I'm trying to do and the result I'm > getting: > >>>import feedparser > >>>url='http://feeds.nytimes.com/nyt/rss/Technology' > >>>d=feedparser.parse(url) > >>>d > {'bozo':1, > 'bozo_exception': TypeError("__init__() got an unexpected keyword argument 'timeout'",), > 'encoding': 'utf-8', > 'entries': [], > 'feed':{}, > 'version': None} > I've tried checking my firewall settings. I'm using Windows 7 and > Python 2.6. Win 7 is allowing other Python programs through. I've > tried several different RSS urls with the same result. > > Any thoughts would be greatly appreciated. Which version of feedparser are you using? In the 4.1 source 'timeout' occurs only in a comment. Peter -- http://mail.python.org/mailman/listinfo/python-list
Re: Project source code layout?
Lawrence D'Oliveiro wrote: In message , Dave Angel wrote: Rather than editing the source files at install time, consider just using an environment variable in your testing environment, which would be missing in production environment. I'd still need to define that environment variable in a wrapper script, which means editing that script at install time ... back to square one ... No, the whole point is it's an environment variable which is *missing* in the production environment. Make sure you make it an obscure name, like set MyProductName_TestingMode=1 So the way you know you're in a production environment is that you do not have such an environment variable. -- http://mail.python.org/mailman/listinfo/python-list
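A minimal sketch of the check itself, using the example variable name above (the config paths are made up):

    import os

    TESTING = os.environ.get('MyProductName_TestingMode') is not None

    if TESTING:
        config_dir = './test-config'      # testing layout
    else:
        config_dir = '/etc/myproduct'     # production layout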
Re: Odd closure issue for generators
In message <78180b4c-68b2-4a0c-8594-50fb1ea2f...@c19g2000yqc.googlegroups.com>, Michele Simionato wrote: > The crux is in the behavior of the for loop: > in Python there is a single scope and the loop variable is > *mutated* at each iteration, whereas in Scheme (or Haskell or any > other functional language) a new scope is generated at each > iteration and there is actually a new loop variable at each iteration: > no mutation is involved. I think it's a bad design decision to have the loop index be a variable that can be assigned to in the loop. -- http://mail.python.org/mailman/listinfo/python-list
Re: The Complexity And Tedium of Software Engineering
On Fri, 05 Jun 2009 08:07:39 +0200, Xah Lee wrote: On Jun 3, 11:50 pm, Xah Lee wrote: The point in these short examples is not about software bugs or problems. It illustrates, how seemingly trivial problems, such as networking, transferring files, running a app on Mac or Windwos, upgrading a app, often involves a lot subtle complexities. For mom and pop users, it simply stop them dead. For a senior industrial programer, it means some conceptually 10-minutes task often ends up in hours of tedium. What on earth gave you the idea that this is a trivial problem? Networks have been researched and improved for the last 40 years! It is a marvel of modern engineering that they work as well as they do. In some “theoretical” sense, all these problems are non-problems. But in practice, these are real, non-trivial problems. These are complexities that forms a major, multi-discipline, almost unexplored area of software research. Again, it is it not a trivial problem theoretically. Unexplored? What world are you on? I'm trying to think of a name that categorize this issue. I think it is a mix of software interface, version control, release control, formal software specification, automated upgrade system, etc. The ultimate scenario is that, if one needs to transfer files from one machine to another, one really should just press a button and expect everything to work. Software upgrade should be all automatic behind the scenes, to the degree that users really don't need fucking to know what so-called “version” of software he is using. Actually they mostly are. At least on my machine. (I use Windows XP and Ubuntu Linux.) Today, with so-called “exponential” scientific progress, and software has progress tremendously too. In our context, that means there are a huge proliferation of protocols and standards. For example, unicode, gazillion networking related protocols, version control systems, automatic update technologies, all comes into play here. However, in terms of the above visionary ideal, these are only the beginning. There needs to be more protocols, standards, specifications, and more strict ones, and unified ones, for the ideal scenario to take place. No, there are already too many protocols and the ideas of how a network infrastructure should be built are mostly in place. I think we would benefit from "cleaning up" the existing interface. That is, by removing redundancy. What does need further research is distributed processing. Again, this is a highly complex problem and a lot of work has been put into trying to make simpler and more manageable interfaces and protocols. See for example the languages Erlang and Oz to get an idea. - John Thingstad -- http://mail.python.org/mailman/listinfo/python-list
Re: is anyone using text to speech to read python documentation
On Jun 3, 12:28 pm, Stef Mientki wrote: > eric_dex...@msn.com wrote: > > I wrote a small pre-processor for python documentation and I am > > looking for advice on how to get the most natural sounding reading. I > > uploaded an example of a reading of lxml documentation as a podcast1 > > >http://dexrow.blogspot.com/2009/06/python-voice-preprocessor.html. > > Depends what OS you want to use, on Windows it's very easy: > > import win32com.client > s = win32com.client.Dispatch("SAPI.SpVoice") > s.Speak('Is this punthoofd ') > > cheers, > Stef That is interesting and might be useful, but isn't what I am doing. A lot of the time you will see stuff like >>> that needs to be changed into other wording, so you have one file that gets transformed into another text that makes more sense when read. I haven't changed html tags into something that makes more sense when spoken, so my example is a little defective. -- http://mail.python.org/mailman/listinfo/python-list
Re: import sqlite3
On Jun 4, 2009, at 7:45 AM, willgun wrote: By the way ,what does 'best regards' means at the end of a mail? "regards" is just a respectful (and slightly formal) goodbye. Have a look at the definition: http://dictionary.reference.com/search?q=regards It's used much more in written communication than in spoken. At the end of a letter (or email), you might also see simply "regards", or "warm regards" (which is especially friendly) or "kind regards". At the end of a meeting of two friends A and B, A might say to B, "Give my regards to X" where X is a person that both A and B know that B will soon see. A is asking B to "carry" good wishes to X. Instead, A could have said to B, "When you see X, please tell her I'm thinking of her fondly". One can also say, "I hold him in high regard", meaning, "I respect and admire him". I'm in the USA; other English speakers might see this differently. bye Philip -- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
Joseph Garvin schrieb: > On Thu, Jun 4, 2009 at 3:23 PM, Brian wrote: >> What is the goal of this conversation that goes above and beyond what >> Boost.Python + pygccxml achieve? > > I can't speak for others but the reason I was asking is because it's > nice to be able to define bindings from within python. At a minimum, > compiling bindings means a little extra complexity to address with > whatever build tools you're using. AFAIU, pybindgen takes this approach. And, AFAIK, pygccxml can generate pybindgen code. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to develop a python application?
On Fri, Jun 5, 2009 at 4:12 AM, Vincent Davis wrote: > This might be a off topic but this also seemed like a good place to ask. > > I have an application (several) I would like to develop. Parts of it I > can do but parts I would like to outsource. I am thinking mostly of > outsourcing most of my django (or similar) work and otherwise have > some custom classes written. > I would like to do this small bits (that is the out sourcing) at a > time for many reasons but I realize there are down sides to doing this > (I probably don't know all them) > > I have a this specific project in mind but don't mind this topic being > rather broad. I would like to read and learn more about developing > software (commercial or open source) > > My questions > How do I find programs interested in small projects. > How do they expect to be paid or how should I pay. > Are sites like elance.com good? > What do I not know to ask? That is what should I be considering? > > Any suggestions would be appreciated. > > > > Very brief description of the project. > The app would take GPS, Heartrate, Power(bicycle) data from a Garmin > GPS and other devises and upload it to a database. After that I have > several calculations and analysis of the data. Then display graphs and > other statistics. This is a very brief explanation. > > There are several examples of similar python projects, but not web based. > Granola > pygarmin > garmin-sync > The closest web based example would be Training Peaks > > http://home.trainingpeaks.com/personal-edition/training-log-and-food-diary.aspx > > Thanks > Vincent Davis > 720-301-3003 > -- > http://mail.python.org/mailman/listinfo/python-list > With regards to sites like elance, I can only offer advice here from a coder's perspective, so I may be missing some things, but here goes: You can probably find people on elance or rentacoder, or similar sites to work on your app. You will need to be very careful about who you hire though - the sites are filled with incompetent coders, and bots that represent _teams_ of incompetent programmers. I used to do a good bit of work on sites like that, and a lot of my work was fixing apps that got written by other people on those sites that had no idea what they were doing. We're talking about 10,000 lines of PHP that got changed into ~2500 with simple, mostly automated refactoring because the people who wrote it had apparently never heard of a for loop. Payment is normally done through an escrow service. The price you're willing to pay generally gets decided on before work begins, and the people who want to work on it can make bids saying how much they want for the work, and you can talk to them - make sure they know what they're talking about, haggle price, etc. There tends to be protection for both the person paying and the person working to avoid you not paying them if they did what they were supposed to, and to avoid you having to pay them if they didn't. All in all, using sites like elance can get your work done, and it can get it done well and on the cheap - but you'll have to spend a significant amount of time weeding through automated responses and making sure you're getting the right person to work on your stuff. -- http://mail.python.org/mailman/listinfo/python-list
Re: What text editor is everyone using for Python
Ben Finney wrote: > Emile van Sebille writes: > > > On 6/4/2009 3:19 PM Lawrence D'Oliveiro said... > > > In message , Nick Craig- > > > Wood wrote: > > > > > >> You quit emacs with Ctrl-X Ctrl-C. > > > > > > That's "save-buffers-kill-emacs". If you don't want to save buffers, > > > the exit sequence is alt-tilde, f, e. > > This is an invocation of the menu system, driven by the keyboard. (Also, > it's not Alt+tilde (which would be Alt+Shift+`), it's Alt+` i.e. no > Shift.) It's an alternate command, and IMO is just adding confusion to > the discussion. Also, according to my emacs, e ==> Exit Emacs (C-x C-c), so Alt-` f e is exactly the same as Ctrl-x Ctrl-c anyway! If the OP really wants to quit emacs without being prompted to save any buffers, then run the 'kill-emacs' command, which isn't bound to a key by default. You would do this with Alt-X kill-emacs. But the fact that it isn't bound to a key by default means that it isn't recommended (and I've never used it in 10 years of using emacs!) - just use Ctrl-X Ctrl-C as Richard Stallman intended ;-) -- Nick Craig-Wood -- http://www.craig-wood.com/nick -- http://mail.python.org/mailman/listinfo/python-list
Re: how to create a big list of list
command@alexbbs.twbbs.org writes: > if i want to create a list of list which size is 2**25 > > how should i do it? > > i have try [ [] for x in xrange(2**25) ] > > but it take too long to initial the list > > is there any suggestion? What is it you want to do with the result? If you want to lazy-evaluate the expression, what you're looking for is a generator. You can get one easily by writing a generator expression: >>> foo = ([] for x in xrange(2**25)) >>> foo >>> for item in foo: ... do_interesting_stuff_with(item) If what you want is to have a huge multi-dimensional array for numerical analysis, lists may not be the best option. Instead, install the third-party NumPy library and use its types. You don't show what kind of data you want in your array, but assuming you want integers initialised to zero: >>> import numpy >>> foo = numpy.zeros((2**25, 0), int) >>> foo array([], shape=(33554432, 0), dtype=int32) Other quick ways of constructing NumPy arrays exist, see <http://docs.scipy.org/doc/numpy/reference/routines.array-creation.html>. -- \ “Prediction is very difficult, especially of the future.” | `\ —Niels Bohr | _o__) | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: urlretrieve() failing on me
En Thu, 04 Jun 2009 23:42:29 -0300, Robert Dailey escribió: Hey guys, try using urlretrieve() in Python 3.0.1 on the following URL: http://softlayer.dl.sourceforge.net/sourceforge/wxwindows/wxMSW-2.8.10.zip Have it save the ZIP to any destination directory. For me, this only downloads about 40KB before it stops without any error at all. Any reason why this isn't working? I could not reproduce it. I downloaded about 300K without error (Python 3.0.1 on Windows) -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
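One way to see where the transfer stops is a reporthook; a small sketch for Python 3.x (the local filename is arbitrary):

    import urllib.request

    def report(block_count, block_size, total_size):
        print(block_count * block_size, 'of', total_size, 'bytes')

    url = ('http://softlayer.dl.sourceforge.net/sourceforge/'
           'wxwindows/wxMSW-2.8.10.zip')
    urllib.request.urlretrieve(url, 'wxMSW-2.8.10.zip', report)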
PYTHONPATH and multiple python versions
Hi, As I don't have admin privileges on my main dev machine, I install a good deal of python modules somewhere in my $HOME, using PYTHONPATH to point my python intepreter to the right location. I think PEP370 (per-user site-packages) does exactly what I need, but it works only for python 2.6 and above. Am I out of luck for versions below ? David -- http://mail.python.org/mailman/listinfo/python-list
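One possible workaround for older versions is a sitecustomize.py on PYTHONPATH that adds a per-version directory by hand; a rough sketch (the ~/.local layout here is an assumption, not PEP 370 itself):

    # sitecustomize.py, placed in a directory already on PYTHONPATH
    import os
    import site
    import sys

    user_site = os.path.expanduser(
        '~/.local/lib/python%d.%d/site-packages' % sys.version_info[:2])
    if os.path.isdir(user_site):
        site.addsitedir(user_site)   # also processes any .pth files there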
Re: Generating all combinations
On Thu, 04 Jun 2009 23:10:33 -0700, Mensanator wrote: >> "Everybody" knows? Be careful with those sweeping generalizations. >> Combinatorics is a fairly specialized area not just of programming but >> mathematics as well. > > I would expect that. That was supposed to be funny. I knew that! I was just testing to see if everyone else did... *wink* -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: __file__ access extremely slow
En Fri, 05 Jun 2009 00:12:25 -0300, John Machin escribió: > (2) This will stop processing on the first object in sys.modules that > doesn't have a __file__ attribute. Since these objects aren't > *guaranteed* to be modules, Definitely not guaranteed to be modules. Python itself drops non-modules in there! Python 2.3 introduced four keys mapped to None -- one of these was dropped in 2.4, but the other three are still there in 2.5 and 2.6: In case someone wonders what all those None are: they're a "flag" telling the import machinery that those modules don't exist (to avoid doing a directory scan over and over, because Python<2.7 attempts first to do a relative import, and only if unsuccessful attempts an absolute one) C:\junk>\python23\python -c "import sys; print [k for (k, v) in sys.modules.items() if v is None]" ['encodings.encodings', 'encodings.codecs', 'encodings.exceptions', 'encodings.types'] In this case, somewhere inside the encodings package, there are statements like "import types" or "from types import ...", and Python could not find types.py in the package directory. -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
QUESTION !!!!!!!!!!
Hello: These days I am getting started with Python, and right now I want to know how I can delete a file from a compressed archive, whether zip, rar or any other. I studied the zipfile library but it has no function that lets me do it. I am working with Python 2.5. Regards, Ariel This message was sent using IMP, the Internet Messaging Program. -- http://mail.python.org/mailman/listinfo/python-list
how to create a big list of list
if i want to create a list of list which size is 2**25 how should i do it? i have try [ [] for x in xrange(2**25) ] but it take too long to initial the list is there any suggestion? Thanks a lot! -- http://mail.python.org/mailman/listinfo/python-list
Re: multi-core software
On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee wrote, quoted or indirectly quoted someone who said : > Why Must Software Be Rewritten For Multi-Core Processors? Threads have been part of Java since Day 1. Using threads complicates your code, but even with a single core processor, they can improve performance, particularly if you are doing something like combing multiple websites. The nice thing about Java is whether you are on a single core processor or a 256 CPU machine (We got to run our Athena Integer Java spreadsheet engine on such a beast), does not concern your code. You just have to make sure your threads don't interfere with each other, and Java/the OS, handle exploiting all the CPUs available. -- Roedy Green Canadian Mind Products http://mindprod.com Never discourage anyone... who continually makes progress, no matter how slow. ~ Plato 428 BC died: 348 BC at age: 80 -- http://mail.python.org/mailman/listinfo/python-list
Re: how to iterate over several lists?
In Chris Rebert writes: >Just add the lists together. >for x in list_a + list_b: >foo(x) Cool! Thanks! kynn -- -- http://mail.python.org/mailman/listinfo/python-list
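If the lists are large, itertools.chain does the same thing without building a new concatenated list; a small sketch with throwaway data:

    from itertools import chain

    list_a, list_b = [1, 2, 3], [4, 5, 6]
    for x in chain(list_a, list_b):
        print x       # 1 2 3 4 5 6, one per line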
Re: Yet another unicode WTF
On 5 Jun, 03:18, Ron Garret wrote: > > According to what I thought I knew about unix (and I had fancied myself > a bit of an expert until just now) this is impossible. Python is > obviously picking up a different default encoding when its output is > being piped to a file, but I always thought one of the fundamental > invariants of unix processes was that there's no way for a process to > know what's on the other end of its stdout. The only way to think about this (in Python 2.x, at least) is to consider stream and file objects as things which only understand plain byte strings. Consequently, use of the codecs module is required if receiving/sending Unicode objects from/to streams and files. Paul -- http://mail.python.org/mailman/listinfo/python-list
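A minimal sketch of wrapping a byte stream with the codecs module so it accepts Unicode objects (stdout here, but any file object works):

    import codecs
    import sys

    out = codecs.getwriter('utf-8')(sys.stdout)
    out.write(u'\u03bb\n')    # encoded to UTF-8 bytes on the way out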
Re: Odd closure issue for generators
En Fri, 05 Jun 2009 01:49:15 -0300, Aahz escribió: In article <05937a34-5490-4b31-9f07-a319b44dd...@r33g2000yqn.googlegroups.com>, Michele Simionato wrote: Actually, in Scheme one would have to fight to define a list comprehension (more in general loops) working as in Python: the natural definition works as the OP wants. See http://www.artima.com/weblogs/viewpost.jsp?thread=3D251156 > This URL isn't working for me, gives 500. Mmm, the URL ends with: thread, an equals sign, and the number 251156 If you see =3D -- that's the "=" encoded as quoted-printable... -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
wrote: > Got some use cases? plural cases - no. I did it for the reason already described. to elucidate, the code looks something like this: rec = input_q.get() # <=== this has its origen in a socket, as a netstring. reclist = rec.split(',') if reclist[0] == 'A': do something with the outputs get hold of the latest inputs return the result by putting a message on the normal output q. continue # up to here this is the code that is done in 99.% of cases. # note that it has to run as fast as possible, in a very cripple processor. if reclist[0] == "B": # This means we have to change state, # it comes from another thread that did # not exist before an event. new_output_q = uncan(reclist[1]) # <== This is where it is used while True: do similar stuff until normality is re established, discarding the incoming "A" records, using new "C" records and new_output_q. Terminated by a "D" record. It is simply a different way of saying "use this one", in an in band way. In the above, it avoids a double unpacking step - once to get to the record type, and then to get to the actual data. It only makes sense here because I know that the stuff that comes in is basically an unending stream of more of the same, and it happens - I would say thousands of times a second, but it is more like a hundred or so, given the lack of speed of the processor. So I am quite prepared to trade off the slight inefficiency during the seldom occurring changeover for the railroad like chugging along in the vast majority of cases. "seldom" here is like once a day for a few minutes. And it sure beats the hell out of passing the queue name as a string and mucking around with exec or eval - That is what I did first, and I liked it even less, as the queue passed in such a way had to be a global for the exec to work. It all started because I was not prepared to waste precious cycles in an extra unpacking stage. So I wrote the Can extension module, and I thought it weird enough to make it public: - Hands up those who have ever passed a pointer as a string ! - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
"Nigel Rantor" wrote: > It just smells to me that you've created this elaborate and brittle hack > to work around the fact that you couldn't think of any other way of > getting the thread to change it's behaviour whilst waiting on input. I am beginning to think that you are a troll, as all your comments are haughty and disparaging, while you either take no trouble to follow, or are incapable of following, my explanations. In the event that this is not the case, please try to understand my reply to Skip, and then suggest a way that will perform better in my use case, out of your vast arsenal of better, quicker, more reliable, portable and comprehensible ways of doing it. - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Re: unladen swallow: python and llvm
Luis M González wrote: > I am very excited by this project (as well as by pypy) and I read all > their plan, which looks quite practical and impressive. > But I must confess that I can't understand why LLVM is so great for > python and why it will make a difference. CPython uses a C compiler to compile the python code (written in C) into native machine code. unladen-swallow uses an llvm-specific C compiler to compile the CPython code (written in C) into LLVM opcodes. The LLVM virtual machine executes those LLVM opcodes. The LLVM virtual machine also has a JIT (just in time compiler) which converts the LLVM op-codes into native machine code. So both CPython and unladen-swallow compile C code into native machine code in different ways. So why use LLVM? This enables unladen swallow to modify the python virtual machine to target LLVM instead of the python vm opcodes. These can then be run using the LLVM JIT as native machine code and hence run all python code much faster. The unladen swallow team have a lot more ideas for optimisations, but this seems to be the main one. It is an interesting idea for a number of reasons, the main one as far as I'm concerned is that it is more of a port of CPython to a new architecture than a complete re-invention of python (like PyPy / IronPython / jython) so stands a chance of being merged back into CPython. -- Nick Craig-Wood -- http://www.craig-wood.com/nick -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
"Jean-Paul Calderone" wrote: > So, do you mind sharing your current problem? Maybe then it'll make more > sense why one might want to do this. Please see my reply to Skip that came in and was answered by email. - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Uppercase/Lowercase on unicode
Is there any library that works OK with Unicode when converting to uppercase or lowercase? -- >>> foo = u'áèïöúñ' >>> print(foo.upper()) áèïöúñ -- -- http://mail.python.org/mailman/listinfo/python-list
Re: Yet another unicode WTF
Paul Boddie writes: > The only way to think about this (in Python 2.x, at least) is to > consider stream and file objects as things which only understand plain > byte strings. Consequently, use of the codecs module is required if > receiving/sending Unicode objects from/to streams and files. Actually strings in Python 2.4 or later have the ‘encode’ method, with no need for importing extra modules: = $ python -c 'import sys; sys.stdout.write(u"\u03bb\n".encode("utf-8"))' λ $ python -c 'import sys; sys.stdout.write(u"\u03bb\n".encode("utf-8"))' > foo ; cat foo λ = -- \ “Life does not cease to be funny when people die any more than | `\ it ceases to be serious when people laugh.” —George Bernard Shaw | _o__) | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: multi-core software
Single-thread programming is great! Clean, safe!!! I'm trying to schedule tasks to several worker processes (not threads). On Fri, Jun 5, 2009 at 4:49 AM, MRAB wrote: > Kaz Kylheku wrote: > >> ["Followup-To:" header set to comp.lang.lisp.] >> On 2009-06-04, Roedy Green wrote: >> >>> On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee >>> wrote, quoted or indirectly quoted someone who said : >>> >>> • Why Must Software Be Rewritten For Multi-Core Processors? >>> Threads have been part of Java since Day 1. >>> >> >> Unfortunately, not sane threads designed by people who actually understand >> multithreading. >> >> The nice thing about Java is whether you are on a single core >>> processor or a 256 CPU machine (We got to run our Athena Integer Java >>> spreadsheet engine on such a beast), does not concern your code. >>> >> >> You are dreaming if you think that there are any circumstances (other than >> circumstances in which performance doesn't matter) in which you don't have >> to >> concern yourself about the difference between a uniprocessor and a 256 CPU >> machine. >> > > If you're interested in parallel programming, have a look at Flow-Based > Programming: > > http://www.jpaulmorrison.com/fbp/ > > -- > http://mail.python.org/mailman/listinfo/python-list > -- http://mail.python.org/mailman/listinfo/python-list
Re: Uppercase/Lowercase on unicode
En Fri, 05 Jun 2009 06:39:31 -0300, Kless escribió: Is there any librery that works ok with unicode at converting to uppercase or lowercase? -- foo = u'áèïöúñ' print(foo.upper()) áèïöúñ -- Looks like Python thinks your terminal uses utf-8, but it actually uses another encoding (latin1?) Or, you saved the script as an utf-8 file but the encoding declaration says otherwise. This works fine for me: py> foo = u'áèïöúñ' py> print foo áèïöúñ py> print foo.upper() ÁÈÏÖÚÑ -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
Re: Uppercase/Lowercase on unicode
Kless writes: > Is there any librery that works ok with unicode at converting to > uppercase or lowercase? > > -- > >>> foo = u'áèïöúñ' > > >>> print(foo.upper()) > áèïöúñ > -- Works fine for me. What do you get when trying to replicate this: >>> import sys >>> sys.version '2.5.4 (r254:67916, Feb 18 2009, 04:30:07) \n[GCC 4.3.3]' >>> sys.stdout.encoding 'UTF-8' >>> foo = u'áèïöúñ' >>> print(foo.upper()) ÁÈÏÖÚÑ -- \ “I was sad because I had no shoes, until I met a man who had no | `\ feet. So I said, ‘Got any shoes you're not using?’” —Steven | _o__) Wright | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
"Terry Reedy" wrote: > If I understand correctly, your problem and solution was this: > > You have multiple threads within a long running process. One thread > repeatedly reads a socket. Yes and it puts what it finds on a queue. - it is a pre defined simple comma delimited record. > You wanted to be able to occasionally send > an object to that thread. Close - to another thread that reads the queue, actually. > Rather than rewrite the thread to also poll a > queue.Queue(), which for CPython sends objects by sending a pointer, It is in fact reading a queue, and what it gets out in the vast majority of cases is the record that came from the socket. >you > converted pointers to strings and sent (multiplex) them via the text > stream the thread was already reading -- and modified the thread to > decode and act on the new type of message. Basically yes - the newly created thread just puts a special text string onto the queue. As I pointed out in my reply to Skip, this makes the unpacking at the output of the queue standard, just using split(','), and it is this simplicity that I wanted to preserve, as it happens almost all of the time. > And you are willing to share the can code with someone who has a similar > rare need and understands the danger of interpreting ints as addresses. > Correct? Absolutely right on - I do not think that it is the kind of thing that should be in the std lib, except as a kind of Hara Kiri bomb - uncan(a random number string), and die! It would also help if someone who is more knowledgable about python would have a look at the C code to make it more robust. There are only 2 routines that matter - it is about a screenfull. - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Re: PYTHONPATH and multiple python versions
maybe a shell script to switch PYTHONPATH, like: start-python-2.5 start-python-2.4 ... On Fri, Jun 5, 2009 at 4:56 PM, David Cournapeau wrote: > Hi, > > As I don't have admin privileges on my main dev machine, I install a > good deal of python modules somewhere in my $HOME, using PYTHONPATH to > point my python intepreter to the right location. I think PEP370 > (per-user site-packages) does exactly what I need, but it works only > for python 2.6 and above. Am I out of luck for versions below ? > > David > -- > http://mail.python.org/mailman/listinfo/python-list > -- http://mail.python.org/mailman/listinfo/python-list
Re: Uppercase/Lowercase on unicode
On 5 jun, 09:59, "Gabriel Genellina" wrote: > En Fri, 05 Jun 2009 06:39:31 -0300, Kless > escribió: > > > Is there any librery that works ok with unicode at converting to > > uppercase or lowercase? > > > -- > foo = u'áèïöúñ' > > print(foo.upper()) > > áèïöúñ > > -- > > Looks like Python thinks your terminal uses utf-8, but it actually uses > another encoding (latin1?) > Or, you saved the script as an utf-8 file but the encoding declaration > says otherwise. > > This works fine for me: > > py> foo = u'áèïöúñ' > py> print foo > áèïöúñ > py> print foo.upper() > ÁÈÏÖÚÑ > > -- > Gabriel Genellina I just checked it in the Python shell and it's correct. So the problem is with IPython, which is where I was testing it. -- http://mail.python.org/mailman/listinfo/python-list
a problem with concurrency
At work we have a Web application acting as a front-end to a database (think of a table-oriented interface, similar to an Excel sheet). The application is accessed simultaneously by N people (N < 10). When a user posts a requests he changes the underlying database table. The issue is that if more users are editing the same set of rows the last user will override the editing of the first one. Since this is an in-house application with very few users, we did not worry to solve this issue, which happens very rarely. However, I had a request from the people using the application, saying that this issue indeed happens sometimes and that they really would like to be able to see if some other user is editing a row. In that case, the web interface should display the row as not editable, showing the name of the user which is editing it. Moreover, when posting a request involving non-editable rows, there should be a clear error message and the possibility to continue anyway (a message such as "do you really want to override the editing made by user XXX?"). Looks like a lot of work for an application which is very low priority for us. Also, I do not feel too confident with managing concurrency directly. However, just for the sake of it I have written a prototype with the basic functionality and I am asking here for some advice, since I am sure lots of you have already solved this problem. My constraint are: the solution must work with threads (the web app uses the Paste multithreaded server) but also with processes (while the server is running a batch script could run and set a few rows). It also must be portable across databases, since we use both PostgreSQL and MS SQLServer. The first idea that comes to my mind is to add a field 'lockedby' to the database table, containing the name of the user which is editing that row. If the content of 'lockedby' is NULL, then the row is editable. The field is set at the beginning (the user will click a check button to signal - via Ajax - that he is going to edit that row) to the username and reset to NULL after the editing has been performed. This morning I had a spare hour, so I wrote a 98 lines prototype which has no web interface and does not use an ORM, but has the advantage of being easy enough to follow; you can see the code here: http://pastebin.com/d1376ba05 The prototype uses SQLite and works in autocommit mode (the real application works in autocommit mode too, even if with different databases). I have modelled the real tables with a simple table like this: CREATE TABLE editable_data ( rowid INTEGER PRIMARY KEY, text VARCHAR(256), lockedby VARCHAR(16)) There is thread for each user. The test uses 5 threads; there is no issue of scalability, since I will never have more than 10 users. The basic idea is to use a RowLock object with signature RowLock(connection, username, tablename, primarykeydict) with __enter__ and __exit__ methods setting and resetting the lockedby field of the database table respectively. It took me more time to write this email than to write the prototype, so I do not feel confident with it. Will it really work for multiple threads and multiple processes? I have always managed to stay away from concurrency in my career ;-) Michele Simionato -- http://mail.python.org/mailman/listinfo/python-list
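A rough sketch of the RowLock idea described above, not the pastebin code; it assumes the example table, an autocommit connection, and qmark parameter style:

    class RowLock(object):
        def __init__(self, connection, username, tablename, primarykeydict):
            self.conn = connection
            self.username = username
            self.tablename = tablename
            self.pk = primarykeydict

        def _where(self):
            clause = ' AND '.join('%s=?' % k for k in self.pk)
            return clause, list(self.pk.values())

        def __enter__(self):
            # claim the row only if nobody else holds it
            where, args = self._where()
            self.conn.execute(
                'UPDATE %s SET lockedby=? WHERE %s AND lockedby IS NULL'
                % (self.tablename, where), [self.username] + args)
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # release only our own lock
            where, args = self._where()
            self.conn.execute(
                'UPDATE %s SET lockedby=NULL WHERE %s AND lockedby=?'
                % (self.tablename, where), args + [self.username])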
Re: Odd closure issue for generators
On Jun 5, 11:26 am, "Gabriel Genellina" wrote: > Mmm, the URL ends with: thread, an equals sign, and the number 251156 > If you see =3D -- that's the "=" encoded as quoted-printable... Actually this is the right URL: http://www.artima.com/weblogs/viewpost.jsp?thread=251156 -- http://mail.python.org/mailman/listinfo/python-list
Re: Feedparser problem
On 6/4/09, Jonathan Nelson wrote: > I'm trying to add a feedreader element to my django project. I'm > using Mark Pilgrim's great feedparser library. I've used it before > without any problems. I'm getting a TypeError I can't figure out. > I've tried searching google, bing, google groups to no avail. > > Here's the dpaste of what I'm trying to do and the result I'm > getting: > > http://dpaste.com/51406/ > > I've tried checking my firewall settings. I'm using Windows 7 and > Python 2.6. Win 7 is allowing other Python programs through. I've > tried several different RSS urls with the same result. > > Any thoughts would be greatly appreciated. > from feedparser 4.1 documentation: bozo_exception The exception raised when attempting to parse a non-well-formed feed. I suspect that unexpected keyword "timeout" appears in the feed's xml document. You could download the document yourself and have a look. > -- > http://mail.python.org/mailman/listinfo/python-list > -- http://mail.python.org/mailman/listinfo/python-list
Am I doing this the python way? (list of lists + file io)
Hello, I'm a fairly new python programmer (aren't I unique!) and a somewhat longer C/++ programmer (4 classes at a city college + lots and lots of tinkering on my own). I've started a pet project (I'm really a blacksheep!); the barebones of it is reading data from CSV files. Each CSV file is going to be between 1 and ~500 entries/lines, I can't see them being much bigger than that, though 600 entries/lines might be possible. As for the total number of CSV files I will be reading in, I can't see my self going over several hundred (200-300), though 500 isn't to much of a stretch and 1000 files *could* happen in the distant future. So, I read the data in from the CSV file and store each line as an entry in a list, like this (I have slightly more code, but this is basically what I'm doing) *FilepathVar = "my/file/path/csv.txt" import csv reader = csv.reader(open(FilepathVar,"rb"), delimiter=',') entryGrouping = [] # create a list for entry in reader: entryGrouping.append(entry)* This produces a list (entryGrouping) where I can do something like ( *print entryGrouping[0]* ) and get the first row/entry of the CSV file. I could also do ( *print entryGrouping[0][0]* ) and get the first item in the first row. All is well and good, codewise, I hope? Then, since I wanted to be able to write in multiple CSV files (they have the same structure, the data relates to different things) I did something like this to store multiple entryGroupings... *masterList = [] # create a list masterList.append(entryGrouping) # ... # load another CSV file into entryGrouping # ... masterList.append(entryGrouping)* Which lets me write code like this... * print masterList[0] # prints an entire entryGrouping print masterList[0][0] # prints the first entry in entryGrouping print masterList[0][0][0] # prints the first item in the first row of the first entryGrouping... * So, my question (because I did have one!) is thus: I'm I doing this in a pythonic way? Is a list of lists (of lists?) a good way to handle this? As I start adding more CSV files, will my program grind to a halt? To answer that, you might need some more information, so I'll try and provide a little right now as to what I expect to be doing... (It's still very much in the planning phases, and a lot of it is all in my head) So, Example: I'll read in a CSV file (just one, for now.) and store it into a list. Sometime later, I'll get another CSV file, almost identical/related to the first. However, a few values might have changed, and there might be a few new lines (entries) or maybe a few less. I would want to compare the CSV file I have in my list (in memory) to new CSV file (which I would probably read into a temporary list). I would then want to track and log the differences between the two files. After I've figured out what's changed, I would either update the original CSV file with the new CSV's information, or completely discard the original and replace it with the new one (whichever involves less work). Basically, lots of iterating through each entry of each CSV file and comparing to other information (either hard coded or variable). So, to reiterate, are lists what I want to use? Should I be using something else? (even if that 'something else' only really comes into play when storing and operating on LOTS of data, I would still love to hear about it!) Thank you for taking the time to read this far. I apologize if I've mangled any accepted terminology in relation to python or CSV files. - Ira (P.S. 
I've read this through twice now and tried to catch as many errors as I could. It's late (almost 4AM) so I'm sure to have missed some. If something wasn't clear, point it out please. See you in the morning! - er, more like afternoon!) -- http://mail.python.org/mailman/listinfo/python-list
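For the compare-two-loads part, one rough sketch, assuming (my assumption, not stated above) that the first column of each row is a unique key:

    import csv

    def load(path):
        return dict((row[0], row) for row in csv.reader(open(path, 'rb')))

    old, new = load('old.csv'), load('new.csv')
    added   = [k for k in new if k not in old]
    removed = [k for k in old if k not in new]
    changed = [k for k in new if k in old and new[k] != old[k]]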
Re: PYTHONPATH and multiple python versions
Hello, I think that virtualenv could also do the job. Best regards, Javier 2009/6/5 Red Forks : > maybe a shell script to switch PYTHONPATH, like: > start-python-2.5 > start-python-2.4 ... > On Fri, Jun 5, 2009 at 4:56 PM, David Cournapeau wrote: >> >> Hi, >> >> As I don't have admin privileges on my main dev machine, I install a >> good deal of python modules somewhere in my $HOME, using PYTHONPATH to >> point my python intepreter to the right location. I think PEP370 >> (per-user site-packages) does exactly what I need, but it works only >> for python 2.6 and above. Am I out of luck for versions below ? >> >> David >> -- >> http://mail.python.org/mailman/listinfo/python-list > > > -- > http://mail.python.org/mailman/listinfo/python-list > > -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
En Fri, 05 Jun 2009 07:00:24 -0300, Hendrik van Rooyen escribió: "Terry Reedy" wrote: You have multiple threads within a long running process. One thread repeatedly reads a socket. Yes and it puts what it finds on a queue. - it is a pre defined simple comma delimited record. You wanted to be able to occasionally send an object to that thread. Close - to another thread that reads the queue, actually. Rather than rewrite the thread to also poll a queue.Queue(), which for CPython sends objects by sending a pointer, It is in fact reading a queue, and what it gets out in the vast majority of cases is the record that came from the socket. Ah... I had the same impression as Mr. Reedy, that you were directly reading from a socket and processing right there, so you *had* to use strings for everything. But if you already have a queue, you may put other objects there (instead of "canning" them). Testing the object type with isinstance(msg, str) is pretty fast, and if you bind locally those names I'd say the overhead is negligible. -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
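A tiny self-contained sketch of that kind of dispatch (all names here are invented):

    import Queue

    q = Queue.Queue()
    q.put('A,12,34')            # the common case: a plain record string
    q.put(Queue.Queue())        # the rare case: a real object, e.g. a new queue

    while not q.empty():
        msg = q.get()
        if isinstance(msg, str):
            print 'record fields:', msg.split(',')
        else:
            print 'switching to new output queue:', msg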
Re: Making the case for repeat
Gabriel Genellina wrote: > Ok, you're proposing a "bidimensional" repeat. I prefer to keep things > simple, and I'd implement it in two steps. But what is simple? I am currently working on a universal feature creeper that could replace itertools.cycle, itertools.repeat, itertools.chain and reverse and also helps to severely cut down on itertools.islice usage. All within virtually the same parameter footprint as the last function I posted. The problem is posting *this* function would kill my earlier repeat for sure. And it already had a problem with parameters < 0 (Hint: that last bug has now become a feature in the unpostable repeat implementation) > Note that this doesn't require any additional storage. Second step would > be to build a bidimensional repeat: Thanks for reminding me, but the storage savings only work for a 'single cycle' function call. But I guess one could special case for that. > py> one = chain.from_iterable(repeat(elem, 3) for elem in thing) > py> two = chain.from_iterable(tee(one, 2)) > py> list(two) > ['1', '1', '1', '2', '2', '2', '3', '3', '3', '4', '4', '4', '1', '1', > '1', '2', >'2', '2', '3', '3', '3', '4', '4', '4'] > > Short and simple, but this one requires space for one complete run (3*4 > items in the example). Really? I count 4 nested functions and an iterator comprehension. I guess it's a tradeoff between feature creep and function nesting creep. P. -- http://mail.python.org/mailman/listinfo/python-list
Re: Yet another unicode WTF
On 5 Jun, 11:51, Ben Finney wrote: > > Actually strings in Python 2.4 or later have the ‘encode’ method, with > no need for importing extra modules: > > = > $ python -c 'import sys; sys.stdout.write(u"\u03bb\n".encode("utf-8"))' > λ > > $ python -c 'import sys; sys.stdout.write(u"\u03bb\n".encode("utf-8"))' > foo > ; cat foo > λ > = Those are Unicode objects, not traditional Python strings. Although strings do have decode and encode methods, even in Python 2.3, the former is shorthand for the construction of a Unicode object using the stated encoding whereas the latter seems to rely on the error-prone automatic encoding detection in order to create a Unicode object and then encode the result - in effect, recoding the string. As I noted, if one wants to remain sane and not think about encoding everything everywhere, creating a stream using a codecs module function or class will permit the construction of something which deals with Unicode objects satisfactorily. Paul -- http://mail.python.org/mailman/listinfo/python-list
Re: Project source code layout?
In message , Dave Angel wrote: > Lawrence D'Oliveiro wrote: > >> In message , Dave >> Angel wrote: >> >>> Rather than editing the source files at install time, consider just >>> using an environment variable in your testing environment, which would >>> be missing in production environment. >>> >> >> I'd still need to define that environment variable in a wrapper script, >> which means editing that script at install time ... back to square one >> ... >> > No, the whole point is it's an environment variable which is *missing" > in production environment. Make sure you make it an obscure name, like > set MyProductName_TestingMode=1 > > So the way you know you're in a production environment is that you do > not have such an environment variable. Sounds like a very roundabout solution to the wrong problem. -- http://mail.python.org/mailman/listinfo/python-list
Messing up with classes and their namespace
Hello world, I had recently a very nasty bug in my python application. The context is quite complex, but in the end the problem can be resume as follow: 2 files in the same directory : lib.py: >import foo >foo.Foo.BOOM='lib' foo.py: >class Foo: >BOOM = 'F' > >if __name__=='__main__': >import lib # I'm expecting BOOM to be set to 'lib' >print Foo.BOOM I was expecting 'lib' as output, but I got 'Fooo'. I don't really understand what python mechanism I'm messing with but I have the feeling I've misunderstood a very basic concept about class, namespace or whatever import notion. This is how I made it work: >if __name__=='__main__': >from lib import Foo # make sure Foo comes from lib >print Foo.BOOM I guess there is 2 different objects for the same class Foo. How I do I make both Foo objects the same object ? Jean-Michel -- http://mail.python.org/mailman/listinfo/python-list
Re: Adding a Par construct to Python?
In message <77as23f1fhj3...@mid.uni-berlin.de>, Diez B. Roggisch wrote: >> But reduce()? I can't see how you can parallelize reduce(). By its >> nature, it has to run sequentially: it can't operate on the nth item >> until it is operated on the (n-1)th item. > > That depends on the operation in question. Addition for example would > work. My math-skills are a bit too rusty to qualify the exact nature of > the operation, commutativity springs to my mind. Associativity: ((A op B) op C) = (A op (B op C)) So for example A op B op C op D could be grouped into (A op B) op (C op D) and the two parenthesized subexpressions evaluated concurrently. But this would only give you a speedup that was logarithmic in the number of op's. -- http://mail.python.org/mailman/listinfo/python-list
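A small sketch of that pairwise grouping; each level's op() calls are independent of one another, so in principle they could run concurrently:

    def tree_reduce(op, items):
        items = list(items)
        while len(items) > 1:
            reduced = [op(a, b) for a, b in zip(items[0::2], items[1::2])]
            if len(items) % 2:           # odd element carried up a level
                reduced.append(items[-1])
            items = reduced
        return items[0]

    print tree_reduce(lambda a, b: a + b, range(1, 9))   # 36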
Re: Adding a Par construct to Python?
In message , Steven D'Aprano wrote: > threads = [PMapThread(datapool) for i in xrange(numthreads)] Shouldn’t that “PMapThread” be “thread”? -- http://mail.python.org/mailman/listinfo/python-list
Re: python way to automate IE8's File Download dialog
Hi! Suppose that the (web) site gives the file only after several seconds, and after the user clicks a confirmation (example: RapidFile). Suppose that the (web) site gives the file only after the user inputs a code, controlled by a javascript script. @-salutations -- Michel Claveau -- http://mail.python.org/mailman/listinfo/python-list
fastest way to test file for string?
Hi. I need to implement, within a Python script, the same functionality as that of Unix's grep -rl some_string some_directory I.e. find all the files under some_directory that contain the string "some_string". I imagine that I can always resort to the shell for this, but is there an efficient way of doing it within Python? (BTW, portability is not high on my list here; this will run on a Unix system, and non-portable Unix-specific solutions are fine with me.) TIA! -- -- http://mail.python.org/mailman/listinfo/python-list
pylint naming conventions?
Hi, as someone who is still learning a lot about Python I am making use of all the tools that can help me, such as pyflakes, pychecker and pylint. I am confused by pylint's naming conventions; I don't think they are in tune with Python's style recommendations (PEP 8?) Anyone else think this? Is there an easy way to get this into compliance? Or, lacking that, just turn this off? (I'd rather not turn it off if it's easy to get in tune with the standards.) Or am I wrong in my assertion with regard to the naming conventions? Thanks, Esmail ps: if anyone else wants to toss in some other recommendations for useful tools feel free to do so! -- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
>> Requiring that the C++ compiler used to make the dll's/so's to be the >> same one Python is compiled with wouldn't be too burdensome would it? Scott> And what gave you then impression that Python is compiled with a Scott> C++ compiler? I don't think it's too much to expect that a C++ compiler be available for the configure step if Python is being built in a C++ shop. The C compiler used to build Python proper should be compatible with the C++ compiler available to build C++ extension modules or C++ libraries dynamically linked into Python. If there is no C++ compiler available then the proposed layout sniffing just wouldn't be done and either a configure error would be emitted or a run-time exception raised if a program attempted to use that feature. (Or the sniffing could be explicitly enabled/disabled by a configure flag.) -- Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/ America's vaunted "free press" notwithstanding, story ideas that expose the unseemly side of actual or potential advertisers tend to fall by the wayside. Not quite sure why. -- Jim Thornton -- http://mail.python.org/mailman/listinfo/python-list
Re: What text editor is everyone using for Python
On May 25, 10:35 am, LittleGrasshopper wrote: > With so many choices, I was wondering what editor is the one you > prefer when coding Python, and why. I normally use vi, and just got > into Python, so I am looking for suitable syntax files for it, and > extra utilities. I dabbled with emacs at some point, but couldn't get > through the key bindings for commands. I've never tried emacs with vi > keybindings (I forgot the name of it) but I've been tempted. > > So what do you guys use, and why? Hopefully we can keep this civil. I use emacs. If you never tried emacs, you might check out: • Xah's Emacs Tutorial http://xahlee.org/emacs/emacs.html • Xah's Emacs Lisp Tutorial http://xahlee.org/emacs/elisp.html you can use python to write emacs commands: • Elisp Wrapper For Perl Scripts http://xahlee.org/emacs/elisp_perl_wrapper.html Emacs keyboard shortcuts is problematic indeed. See: • Why Emacs's Keyboard Shortcuts Are Painful http://xahlee.org/emacs/emacs_kb_shortcuts_pain.html However, you can completely fix that. See: • Ergoemacs Keybindings http://xahlee.org/emacs/ergonomic_emacs_keybinding.html Xah ∑ http://xahlee.org/ ☄ -- http://mail.python.org/mailman/listinfo/python-list
Re: a problem with concurrency
On Fri, Jun 5, 2009 at 2:59 PM, Tim Chase wrote: > The common way to do this is to not bother with the "somebody else is > editing this record" because it's nearly impossible with the stateless web > to determine when somebody has stopped browsing a web page. Instead, each > record simply has a "last modified on $TIMESTAMP by $USERID" pair of field. > When you read the record to display to the user, you stash these values > into the page as $EXPECTED_TIMESTAMP and $EXPECTED_USERID. If, when the > user tries to save the record, your web-server app updates the record only > if the timestamp+username+rowid match This is much easier to implement than the locking mechanism since I already have the fields $EXPECTED_TIMESTAMP and $EXPECTED_USERID in the db! It looks quite sufficient for my use case. -- http://mail.python.org/mailman/listinfo/python-list
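A rough sketch of that check against the example table from earlier in the thread; the modified_ts/modified_by column names are assumptions, and the SQL uses qmark placeholders:

    def save_row(conn, rowid, new_text, expected_ts, expected_user, me, now):
        cur = conn.execute(
            'UPDATE editable_data SET text=?, modified_ts=?, modified_by=? '
            'WHERE rowid=? AND modified_ts=? AND modified_by=?',
            (new_text, now, me, rowid, expected_ts, expected_user))
        if cur.rowcount == 0:
            # no row matched: someone else saved it since we read it
            raise ValueError('row %s was edited by someone else' % rowid)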
Re: Odd closure issue for generators
In article , Lawrence D'Oliveiro wrote: >In message ><78180b4c-68b2-4a0c-8594-50fb1ea2f...@c19g2000yqc.googlegroups.com>, Michele >Simionato wrote: >> >> The crux is in the behavior of the for loop: in Python there is a >> single scope and the loop variable is *mutated* at each iteration, >> whereas in Scheme (or Haskell or any other functional language) a new >> scope is generated at each iteration and there is actually a new loop >> variable at each iteration: no mutation is involved. > >I think it's a bad design decision to have the loop index be a variable >that can be assigned to in the loop. Why? -- Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/ "Given that C++ has pointers and typecasts, it's really hard to have a serious conversation about type safety with a C++ programmer and keep a straight face. It's kind of like having a guy who juggles chainsaws wearing body armor arguing with a guy who juggles rubber chickens wearing a T-shirt about who's in more danger." --Roy Smith, c.l.py, 2004.05.23 -- http://mail.python.org/mailman/listinfo/python-list
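For anyone following along, a minimal illustration of the point under discussion (not from the original thread): because the loop variable is a single, mutated binding, every closure created in the loop sees its final value, and the usual default-argument idiom restores per-iteration capture.

    >>> funcs = [lambda: i for i in range(3)]
    >>> [f() for f in funcs]
    [2, 2, 2]
    >>> funcs = [lambda i=i: i for i in range(3)]
    >>> [f() for f in funcs]
    [0, 1, 2]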
Re: The Complexity And Tedium of Software Engineering
John Thingstad wrote: On Fri, 05 Jun 2009 08:07:39 +0200, Xah Lee wrote: On Jun 3, 11:50 pm, Xah Lee wrote: The point in these short examples is not about software bugs or problems. It illustrates how seemingly trivial problems, such as networking, transferring files, running an app on Mac or Windows, upgrading an app, often involve a lot of subtle complexities. For mom-and-pop users, it simply stops them dead. For a senior industrial programmer, it means some conceptually 10-minute task often ends up in hours of tedium. What on earth gave you the idea that this is a trivial problem? Networks have been researched and improved for the last 40 years! It is a marvel of modern engineering that they work as well as they do. [snip] The actual phrase was "/seemingly/ trivial problems", i.e. it /looks/ simple at first glance. -- http://mail.python.org/mailman/listinfo/python-list
Re: pylint naming conventions?
Esmail writes: > I am confused by pylint's naming conventions, I don't think they are in > tune with Python's style recommendations (PEP 8?) > > Anyone else think this? It's hard to know, without examples. Can you give some output of pylint that you think doesn't agree with PEP 8? -- \ “Are you thinking what I'm thinking, Pinky?” “Uh... yeah, | `\ Brain, but where are we going to find rubber pants our size?” | _o__) —_Pinky and The Brain_ | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: fastest way to test file for string?
On Jun 5, 7:50 am, kj wrote: > Hi. I need to implement, within a Python script, the same > functionality as that of Unix's > > grep -rl some_string some_directory > > I.e. find all the files under some_directory that contain the string > "some_string". > > I imagine that I can always resort to the shell for this, but is > there an efficient way of doing it within Python? > > (BTW, portability is not high on my list here; this will run on a > Unix system, and non-portable Unix-specific solutions are fine with > me.) > > TIA! > -- You can write your own version of grep in python using os.walk, open, read and find. I don't know why one would want to do that unless for portability reasons. It will be pretty hard to beat grep in efficiency and succinctness. The most sensible thing IMHO is a shell script or call grep using os.system (or using subprocess). -- http://mail.python.org/mailman/listinfo/python-list
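A rough sketch of the subprocess route suggested above (the helper name is made up; note that grep exits with status 1 when there are simply no matches, so that is not treated as an error here):

    import subprocess

    def files_containing(pattern, directory):
        # equivalent of: grep -rl pattern directory
        proc = subprocess.Popen(["grep", "-rl", pattern, directory],
                                stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        return out.splitlines()

    for name in files_containing("some_string", "some_directory"):
        print name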
Re: a problem with concurrency
> When a user posts a request he changes the underlying database table. The
> issue is that if more users are editing the same set of rows the last user
> will override the editing of the first one. Since this is an in-house
> application with very few users, we did not worry to solve this issue,
> which happens very rarely. However, I had a request from the people using
> the application, saying that this issue indeed happens sometimes and that
> they really would like to be able to see if some other user is editing a
> row. In that case, the web interface should display the row as not
> editable, showing the name of the user which is editing it. Moreover, when
> posting a request involving non-editable rows, there should be a clear
> error message and the possibility to continue anyway (a message such as
> "do you really want to override the editing made by user XXX?").

The common way to do this is to not bother with the "somebody else is editing this record" because it's nearly impossible with the stateless web to determine when somebody has stopped browsing a web page. Instead, each record simply has a "last modified on $TIMESTAMP by $USERID" pair of fields. When you read the record to display to the user, you stash these values into the page as $EXPECTED_TIMESTAMP and $EXPECTED_USERID. Then, when the user tries to save the record, your web-server app updates the record only if the timestamp+username+rowid match:

  cursor.execute("""
    UPDATE MyTable
    SET Field1=?, Field2=?, Field3=?
    WHERE id=?
      AND LastModified=?
      AND LastModifiedBy=?""",
    (field1, field2, field3, rowid,
     expected_lastmodified, expecteduserid)
    )
  if cursor.rowcount:
    cursor.commit()
    print "Yay!"
  else:
    cursor.execute("""
      SELECT u.name, t.lastmodified
      FROM MyTable t
        INNER JOIN MyUsers u
        ON u.id = t.LastModifiedBy
      WHERE t.id = ?""", (rowid,))
    # maybe a little try/except around this in case
    # the record was deleted instead of modified?
    name, when = cursor.fetchone()
    print "This information has been modified " \
      "(by %s at %s) since you last viewed it (at %s)" % (
      name, when, expected_lastmodified)

If you wanted to be really snazzy, you could pull up the record as it now exists alongside the data they tried to submit, and allow them to choose the correct value for each differing field. This also encourages awareness of conflicting edits and hopefully increases communication between your users ("Why is Pat currently editing this record...I'm working on it?!" [calls/IMs/emails Pat to get matters straight])

> The first idea that comes to my mind is to add a field 'lockedby' to the
> database table, containing the name of the user which is editing that row.
> If the content of 'lockedby' is NULL, then the row is editable. The field
> is set at the beginning (the user will click a check button to signal -
> via Ajax - that he is going to edit that row) to the username and reset to
> NULL after the editing has been performed.

Locking is the easy part -- it's knowing when to *unlock* that becomes a problem. What happens if a user locks a record at 4:59pm on Friday afternoon and then goes on vacation for a week, preventing folks from editing this record? If the locks are scoped to a single request, they do no good. The locks have to span multiple requests.

I'd just ignore locking.

-tkc

-- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
s...@pobox.com schrieb: > >> Requiring that the C++ compiler used to make the dll's/so's to be the > >> same one Python is compiled with wouldn't be too burdensome would it? > > Scott> And what gave you then impression that Python is compiled with a > Scott> C++ compiler? > > I don't think it's too much to expect that a C++ compiler be available for > the configure step if Python is being built in a C++ shop. The C compiler > used to build Python proper should be compatible with the C++ compiler > available to build C++ extension modules or C++ libraries dynamically linked > into Python. > > If there is no C++ compiler available then the proposed layout sniffing just > wouldn't be done and either a configure error would be emitted or a run-time > exception raised if a program attempted to use that feature. (Or the > sniffing could be explicitly enabled/disabled by a configure flag.) > Hm, on Linux, gccxml (if its version is compatible with that of the C++ compiler) can probably help a lot. At runtime, no configure step needed. Unfortunately not on Windows. Thomas -- http://mail.python.org/mailman/listinfo/python-list
Re: urlretrieve() failing on me
On Jun 5, 3:47 am, "Gabriel Genellina" wrote: > En Thu, 04 Jun 2009 23:42:29 -0300, Robert Dailey > escribió: > > > Hey guys, try using urlretrieve() in Python 3.0.1 on the following > > URL: > > >http://softlayer.dl.sourceforge.net/sourceforge/wxwindows/wxMSW-2.8.1... > > > Have it save the ZIP to any destination directory. For me, this only > > downloads about 40KB before it stops without any error at all. Any > > reason why this isn't working? > > I could not reproduce it. I downloaded about 300K without error (Python > 3.0.1 on Windows) > > -- > Gabriel Genellina Can you show me your test code please? -- http://mail.python.org/mailman/listinfo/python-list
Re: Odd closure issue for generators
Ned Deily wrote:
In article <4a28903b.4020...@sweetapp.com>, Brian Quinlan wrote:
Scott David Daniels wrote:
[snipped]

When you evaluate a lambda expression, the default args are evaluated, but the expression inside the lambda body is not. When you apply that evaluated lambda expression, the expression inside the lambda body is evaluated and returned.

But that's not really the issue. I knew that the lambda was not evaluated but thought each generator expression got its own context rather than sharing one.

Each? Maybe that's a source of confusion. There is only one generator expression in your example.

c = (lambda : i for i in range(11, 16))
c
<generator object <genexpr> at 0x114e90>
d = list(c)
d
[<function <lambda> at 0x119348>, <function <lambda> at 0x119390>, <function <lambda> at 0x1193d8>, <function <lambda> at 0x119420>, <function <lambda> at 0x119468>]

Sorry, I wasn't as precise as I should have been. If you consider this example:

(<expression> for x in y)

I thought that every time that was evaluated, it would be done in a new closure with x bound to the value of x at the time that the closure was created. Instead, a new closure is created for the entire generator expression and x is updated inside that closure.

Cheers,
Brian

-- http://mail.python.org/mailman/listinfo/python-list
Feedparser Problem
I'm working with Feedparser on a months-old install of Windows 7, and now programs that ran before are broken, and I'm getting weird messages that are rather opaque to me. Google, Bing, News groups have all left me empty handed. I was wondering if I could humbly impose upon the wizards of comp.lang.python to lend me their wisdom and insight to this problem that is too difficult for my little mind. Here's what I'm going through:

>>>from feedparser import parse
>>>url='http://feeds.nytimes.com/nyt/rss/Technology'
>>>url2='http://feeds.washingtonpost.com/wp-dyn/rss/technology/index_xml'
>>>d = parse(url)
>>>d2= parse(url2)
>>>d
{'bozo':1, 'bozo_exception': TypeError("__init__() got an unexpected keyword argument 'timeout'",), 'encoding': 'utf-8', 'entries': [], 'feed':{}, 'version': None}
>>>d2
{'bozo': 1, 'bozo_exception': TypeError("__init__() got an unexpected keyword argument 'timeout'",), 'encoding': 'utf-8', 'entries': [], 'feed': {}, 'version': None}

I've checked my firewall settings, and python is allowed. Other python programs can get data from the web. I know that the 'bozo' is for malformed xml. I've searched through both of those rss feeds, and I can't find the argument 'timeout' anywhere in them. Any ideas, thoughts or directions in which I might go?

Thanks to all in advance,
Jonathan

-- http://mail.python.org/mailman/listinfo/python-list
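Not a diagnosis, but one way to localize the failure: feedparser also accepts the document itself as a string, so fetching the bytes separately takes feedparser's own URL-opening code -- which is where a 'timeout' keyword would be getting rejected -- out of the picture. A sketch:

    import urllib2
    import feedparser

    data = urllib2.urlopen('http://feeds.nytimes.com/nyt/rss/Technology').read()
    d = feedparser.parse(data)   # parse the already-fetched bytes
    print d.bozo, len(d.entries)

If that works, the problem is in how feedparser drives the URL library on this particular setup, not in the feeds themselves.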
Re: Using C++ and ctypes together: a vast conspiracy? ;)
On Jun 5, 2009, at 10:13 AM, Thomas Heller wrote: s...@pobox.com schrieb: If there is no C++ compiler available then the proposed layout sniffing just wouldn't be done and either a configure error would be emitted or a run-time exception raised if a program attempted to use that feature. (Or the sniffing could be explicitly enabled/disabled by a configure flag.) Hm, on Linux, gccxml (if its version is compatible with that of the C ++ compiler) can probably help a lot. At runtime, no configure step needed. Unfortunately not on Windows. I'm not a gccxml user, but its install page has a section for Windows: http://www.gccxml.org/HTML/Install.html HTH P -- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
Philip Semanchuk schrieb: > On Jun 5, 2009, at 10:13 AM, Thomas Heller wrote: > >> s...@pobox.com schrieb: > If there is no C++ compiler available then the proposed layout > sniffing just >>> wouldn't be done and either a configure error would be emitted or a >>> run-time >>> exception raised if a program attempted to use that feature. (Or the >>> sniffing could be explicitly enabled/disabled by a configure flag.) >>> >> >> Hm, on Linux, gccxml (if its version is compatible with that of the C >> ++ compiler) >> can probably help a lot. At runtime, no configure step needed. >> Unfortunately not on Windows. > > I'm not a gccxml user, but its install page has a section for Windows: > http://www.gccxml.org/HTML/Install.html Yes, it runs on Windows (*). However, there are two problems: 1. gccxml refuses to parse quite some include files from the Window SDK. That's probably not the fault of gccxml but MS using non-standard C++ constructs. (There is a workaround inside gccxml: it installs patched Windows header files, but the patches must be created first by someone) 2. You cannot expect gccxml (which is mostly GCC inside) to use the MSVC algorithm for C++ object layout, so unfortuately it does not help in the most important use-case. Thomas (*) And there is even use for it on Windows to parse C header files and generate ctypes wrappers, in the ctypeslib project. -- http://mail.python.org/mailman/listinfo/python-list
Re: urlretrieve() failing on me
Robert Dailey wrote:
> On Jun 5, 3:47 am, "Gabriel Genellina" wrote:
>> En Thu, 04 Jun 2009 23:42:29 -0300, Robert Dailey
>> escribió:
>>
>> > Hey guys, try using urlretrieve() in Python 3.0.1 on the following
>> > URL:
>>
>> >http://softlayer.dl.sourceforge.net/sourceforge/wxwindows/wxMSW-2.8.1...
>>
>> > Have it save the ZIP to any destination directory. For me, this only
>> > downloads about 40KB before it stops without any error at all. Any
>> > reason why this isn't working?
>>
>> I could not reproduce it. I downloaded about 300K without error (Python
>> 3.0.1 on Windows)
>>
>> --
>> Gabriel Genellina
>
> Can you show me your test code please?

Here's mine:

$ cat retriever.py
import urllib.request
import os

def report(*args):
    print(args)

url = "http://softlayer.dl.sourceforge.net/sourceforge/wxwindows/wxMSW-2.8.10.zip"
filename = url.rsplit("/")[-1]
urllib.request.urlretrieve(url, filename=filename, reporthook=report)
print(os.path.getsize(filename))
$

If you had shown your code in the first place the problem might have been solved by now...

Peter

-- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
Hendrik van Rooyen wrote: > "Nigel Rantor" wrote: > >> It just smells to me that you've created this elaborate and brittle hack >> to work around the fact that you couldn't think of any other way of >> getting the thread to change it's behaviour whilst waiting on input. > > I am beginning to think that you are a troll, as all your comments are > haughty and disparaging, while you either take no trouble to follow, > or are incapable of following, my explanations. > > In the event that this is not the case, please try to understand my > reply to Skip, and then suggest a way that will perform better > in my use case, out of your vast arsenal of better, quicker, > more reliable, portable and comprehensible ways of doing it. Well, why not have a look at Gabriel's response. That seems like a much more portable way of doing it if nothing else. I'm not trolling, you just seem to be excited about something that sounds like a fundamentally bad idea. n -- http://mail.python.org/mailman/listinfo/python-list
Re: Winter Madness - Passing Python objects as Strings
"Gabriel Genellina" wrote: >Ah... I had the same impression as Mr. Reedy, that you were directly >reading from a socket and processing right there, so you *had* to use >strings for everything. not "had to" - "chose to" - to keep the most used path as short as I could. > >But if you already have a queue, you may put other objects there (instead >of "canning" them). Testing the object type with isinstance(msg, str) is >pretty fast, and if you bind locally those names I'd say the overhead is >negligible. Maybe you are right and I am pre optimising - but the heart of this box really is that silly loop and the processor really is not fast at all. I felt it was right to keep the processing of the stuff coming out of the queue standard with the split(','), as the brainless way seemed to be the best - anything else I could think of just added overhead. I thought of putting the string in a list, with the record type being the first item, and the string the second, with a queue replacing the string for the state change record. I basically rejected this as it would have added extra processing both at the entry and the exit of the critical queue, for every record. I admit that I did not think of testing the type with isinstance, but even if the overhead is minimal, it does add extra cycles to the innermost loop, for every one of the thousands of times that nothing of importance is detected. This is what I was trying to avoid, as it is important to get as much performance out of the box as I can (given that I am using Python to get the job done fast, because that is also important). So it is a kind of juggling with priorities - "make it fast" would imply do not use python, but "get it done quickly" implies using python, and it looks to me that if I am careful and think more like an assembler programmer, the real time performance will be adequate. And I do not want to do anything that could conceivably compromise that. Even if it means jumping through a funny hoop like I am doing now, and inventing a weird way to pass an object. "adequate" here is up to now quite good - if I set up a client on the LAN and I reflect the inputs back to the outputs, then to my human senses there is no difference in the way the output relays chatter and bounce when I play with a wire on the inputs, to what I would expect of a locally hard wired setup. So to a large extent I think that I am doing the job as fast as it is possible - the comma delimited input string is a fact, decided on between myself and the customer a long time ago, and it comes over a socket, so it has to be a string. The least I can do with it is nothing before I put it on the critical queue. Then when it comes out of the queue, I have to break it up into its constituent parts, and I would think that split is the canonical way of doing that. Then there follows some jiggery pokery to get the outputs out and the inputs in, that involves some ctypes stuff to address the real hardware, and then the input results have to be sent back to the client(s), over and over. That is basically what the box does, until another connection is made (from a control room) and the local "master" client is pre empted and the outputs from the new "master" must be obeyed, and the results reflected back to the new connection too. It is a fairly simple state machine, and it has to be as fast as possible, as twice its loop time plus the network round trip time defines its responsiveness. 
I would really appreciate it if someone can dream up a faster way of getting round this basic loop. (even if it involves other equally weird constructs as "canning") - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
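A rough sketch of the inner loop being described, with the isinstance test suggested earlier in place (the queue name and the handler functions are made up for illustration):

    while True:
        msg = q.get()                    # q is the critical Queue.Queue
        if isinstance(msg, str):         # the common case: a CSV record
            fields = msg.split(',')
            process_record(fields)       # drive outputs, read inputs, reply
        else:                            # the rare case: a state-change object
            switch_master(msg)

The single extra isinstance call per record is the overhead being weighed here against the string-only "canning" approach.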
Re: fastest way to test file for string?
"kj" wrote: > > Hi. I need to implement, within a Python script, the same > functionality as that of Unix's > >grep -rl some_string some_directory > > I.e. find all the files under some_directory that contain the string > "some_string". > > I imagine that I can always resort to the shell for this, but is > there an efficient way of doing it within Python? > > (BTW, portability is not high on my list here; this will run on a > Unix system, and non-portable Unix-specific solutions are fine with > me.) Use grep. You will not beat it's performance. - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Re: Uppercase/Lowercase on unicode
> I just checked it in the python shell and it's correct. > Then the problem is with iPython, which I was testing it from. yes, iPython has a bug like that https://bugs.launchpad.net/ipython/+bug/339642 -- дамјан ( http://softver.org.mk/damjan/ ) A: Because it reverses the logical flow of conversation. Q: Why is top posting frowned upon? -- http://mail.python.org/mailman/listinfo/python-list
numpy 00 character bug?
Hello, all! I've recently encountered a bug in NumPy's string arrays, where the 00 ASCII character ('\x00') is not stored properly when put at the end of a string. For example:

Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> print numpy.version.version
1.3.0
>>> arr = numpy.empty(1, 'S2')
>>> arr[0] = 'ab'
>>> arr
array(['ab'], dtype='|S2')
>>> arr[0] = 'c\x00'
>>> arr
array(['c'], dtype='|S2')

It seems that the string array is using the 00 character to pad strings smaller than the maximum size, and thus is treating any 00 characters at the end of a string as padding. Obviously, as long as I don't use smaller strings, there is no information lost here, but I don't want to have to re-add my 00s each time I ask the array what it is holding.

Is this a well-known bug already? I couldn't find it on the NumPy bug tracker, but I could have easily missed it, or it could be triaged, deemed acceptable because there's no better way to deal with arbitrary-length strings. Is there an easy way to avoid this problem? Pretty much any performance-intensive part of my program is going to be dealing with these arrays, so I don't want to just replace them with a slower dictionary instead.

I can't imagine this issue hasn't come up before; I encountered it by using NumPy arrays to store Python structs, something I can imagine is done fairly often. As such, I apologize for bringing it up again!

Nathaniel

-- http://mail.python.org/mailman/listinfo/python-list
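One possible workaround, at some cost in speed and memory (a sketch of the idea only, not a fix for the 'S' dtype behaviour): store Python string objects in an object array, so NumPy never pads or strips NUL bytes.

    import numpy

    arr = numpy.empty(1, dtype=object)   # holds references to Python strings
    arr[0] = 'c\x00'
    print repr(arr[0])                   # 'c\x00' -- trailing NUL preserved

Whether giving up the fixed-width layout is acceptable depends on how the performance-critical parts use the array.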
Re: Winter Madness - Passing Python objects as Strings
"Nigel Rantor" wrote: > Well, why not have a look at Gabriel's response. I have, and have responded at some length, further explaining what I am doing, and why. > That seems like a much more portable way of doing it if nothing else. There is nothing portable in what I am doing - it is aimed at the eBox, as the i/o stuff is specific to the Vortex processor. Even without the can and uncan, if you were to try to run it on any other machine, it would segfault because of the underlying C routines called via ctypes to access the non standard parallel port. > I'm not trolling, you just seem to be excited about something that > sounds like a fundamentally bad idea. Glad to hear it, and I am aware of the dangers, but I am aiming at a very specific speed objective, and I really cannot think of a way that achieves the result in fewer machine cycles than this weird way of passing an object, in a case such as mine. (barring of course writing the whole thing in C, which would never get the job done in time) - Hendrik -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy 00 character bug?
In article , Nathaniel Rook wrote: > >I've recently encountered a bug in NumPy's string arrays, where the 00 >ASCII character ('\x00') is not stored properly when put at the end of a >string. You should ask about this on the NumPy mailing lists and/or report it on the NumPy tracker: http://scipy.org/ -- Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/ "Given that C++ has pointers and typecasts, it's really hard to have a serious conversation about type safety with a C++ programmer and keep a straight face. It's kind of like having a guy who juggles chainsaws wearing body armor arguing with a guy who juggles rubber chickens wearing a T-shirt about who's in more danger." --Roy Smith, c.l.py, 2004.05.23 -- http://mail.python.org/mailman/listinfo/python-list
Re: Messing up with classes and their namespace
Jean-Michel Pichavant wrote:

Hello world, I had recently a very nasty bug in my python application. The context is quite complex, but in the end the problem can be summarized as follows: 2 files in the same directory:

lib.py:
>import foo
>foo.Foo.BOOM='lib'

foo.py:
>class Foo:
>    BOOM = 'F'
>
>if __name__=='__main__':
>    import lib # I'm expecting BOOM to be set to 'lib'
>    print Foo.BOOM

I was expecting 'lib' as output, but I got 'Fooo'. I don't really understand what python mechanism I'm messing with but I have the feeling I've misunderstood a very basic concept about class, namespace or whatever import notion. I guess there is 2 different objects for the same class Foo. How do I make both Foo objects the same object ?

OK, here is one solution (from which you may infer the problem):

lib.py:
    import __main__
    __main__.Foo.BOOM = 'lib'

foo.py:
    class Foo:
        BOOM = 'F'

    if __name__ == '__main__':
        import lib # I'm expecting BOOM to be set to 'lib'
        print(Foo.BOOM)

Here is another solution:

lib.py:
    import foo
    foo.Foo.BOOM = 'lib'

foo.py:
    class Foo:
        BOOM = 'F'

    if __name__ == '__main__':
        import sys
        sys.modules['foo'] = sys.modules['__main__']
        import lib # I'm expecting BOOM to be set to 'lib'
        print(Foo.BOOM)

Here is a demo of what is actually going wrong:

foo.py:
    class Foo:
        inside = __name__

    import foo

    if __name__ == '__main__':
        print(Foo is foo.Foo)
        print(Foo.inside, foo.Foo.inside)

And here is a fix:

foo.py:
    if __name__ == '__main__':
        import sys
        sys.modules['foo'] = sys.modules['__main__']

    class Foo:
        inside = __name__

    import foo

    if __name__ == '__main__':
        print(Foo is foo.Foo)
        print(Foo.inside, foo.Foo.inside)

--Scott David Daniels
scott.dani...@acm.org

-- http://mail.python.org/mailman/listinfo/python-list
Re: fastest way to test file for string?
Hi. I need to implement, within a Python script, the same functionality as that of Unix's

  grep -rl some_string some_directory

I.e. find all the files under some_directory that contain the string "some_string".

I'd do something like this untested function:

def find_files_containing(base_dir, string_to_find):
    for path, files, dirs in os.walk(base_dir):
        for fname in files:
            full_name = os.path.join(path, fname)
            f = file(full_name)
            for line in f:
                if string_to_find in line:
                    f.close()
                    yield full_name
                    break
            else:
                f.close()

for filename in find_files_containing( "/path/to/wherever/", "some_string" ):
    print filename

It's not very gracious regarding binary files, but caveat coder.

-tkc

-- http://mail.python.org/mailman/listinfo/python-list
Re: Way to use proxies & login to site?
On May 5, 12:51 pm, Kushal Kumaran wrote: > On Wed, Apr 29, 2009 at 8:21 AM, inVINCable wrote: > > On Apr 27, 7:40 pm, inVINCable wrote: > >> Hello, > > >> I have been using ClientForm to log in to sites & ClientCookie so I > >> can automatically log into my site to do some penetration testing, > >> although, I cannot figure out a solution to use proxies with this > >> logging in automatically. Does anyone have any solutions? > > >> Thanks :) > > >> Vince > > > Any ideas? > > If, like the example athttp://wwwsearch.sourceforge.net/ClientForm/, > you are using urllib2, you can read the documentation for that module. > It also has examples for working with proxies. > > -- > kushal Ok, I gotcha. Sounds neat, but the problem is, do you know if I can work with proxies and then connect to a site? -- http://mail.python.org/mailman/listinfo/python-list
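For the proxy question, urllib2 lets you combine a proxy with cookie handling in one opener; a sketch only (the proxy address, form field names, and URLs below are placeholders, not real values):

    import urllib
    import urllib2
    import cookielib

    cookies = cookielib.CookieJar()
    opener = urllib2.build_opener(
        urllib2.ProxyHandler({'http': 'http://127.0.0.1:3128'}),
        urllib2.HTTPCookieProcessor(cookies))

    data = urllib.urlencode({'username': 'me', 'password': 'secret'})
    response = opener.open('http://example.com/login', data)
    print response.read()[:200]

Every request made through opener then goes via the proxy and carries the session cookies picked up at login.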
Re: Winter Madness - Passing Python objects as Strings
Hendrik van Rooyen wrote:

"Gabriel Genellina" wrote:

Ah... I had the same impression as Mr. Reedy, that you were directly reading from a socket and processing right there, so you *had* to use strings for everything.

not "had to" - "chose to" - to keep the most used path as short as I could.

But if you already have a queue, you may put other objects there (instead of "canning" them). Testing the object type with isinstance(msg, str) is pretty fast, and if you bind locally those names I'd say the overhead is negligible.

I can think of use cases for can, and from that use an alternate construct. The use case is passing a reference out over a wire (TCP port?) that will be used later. Sub cases:
(1) Handing work over the wire along with a callback to invoke with the results.
(2) Handing work over the wire along with a progress callback.
(3) Handing work over the wire along with a pair of result functions, where the choice of functions is made on the far side of the wire.

The "can" can be used to send the function(s) out. Alternatively, for use case 1:

class Holder(object):
    def __init__(self):
        self.key = 0
        self.holds = {}

    def handle(self, something):
        key = str(self.key) # may need to lock w/ next for threads
        self.key += 1
        self.holds[key] = something
        return key

    def use(self, handle):
        return self.holds.pop(handle)

Otherwise a simple dictionary with separate removal may be needed. If you might abandon an element w/o using it, use a weakref dictionary, but then you need a scheme to keep the thing alive long enough for needed operations (as you also need with a can). In use case 1, the dictionary becomes that holding point. The counter-as-key idea allows you to keep separate references to the same thing, so the reference is held for precisely as long as needed. It (counter-as-key) beats the str(id(obj)) of can because it tracks the actual object, not simply the id that can be reused.

--Scott David Daniels
scott.dani...@acm.org

-- http://mail.python.org/mailman/listinfo/python-list
Re: Yet another unicode WTF
In article , Ned Deily wrote: > In python 3.x, of course, the encoding happens automatically but you > still have to tell python, via the "encoding" argument to open, what the > encoding of the file's content is (or accept python's default which may > not be very useful): > > >>> open('foo1','w').encoding > 'mac-roman' > > WTF, indeed. BTW, I've opened a 3.1 release blocker issue about 'mac-roman' as a default on OS X. Hard to believe none of us has noticed this up to now! http://bugs.python.org/issue6202 -- Ned Deily, n...@acm.org -- http://mail.python.org/mailman/listinfo/python-list
Re: how to iterate over several lists?
On Fri, 5 Jun 2009 04:07:19 + (UTC), kj wrote:
>
>
>Suppose I have two lists, list_a and list_b, and I want to iterate
>over both as if they were a single list. E.g. I could write:
>
>for x in list_a:
>    foo(x)
>for x in list_b:
>    foo(x)
>
>But is there a less cumbersome way to achieve this? I'm thinking
>of something in the same vein as Perl's:
>
>for $x in (@list_a, @list_b) {
>    foo($x);
>}
>
>TIA!
>
>kynn

def chain(*args):
    return (item for seq in args for item in seq)

for x in chain(list_a, list_b):
    foo(x)

-- http://mail.python.org/mailman/listinfo/python-list
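For what it's worth, the standard library already provides this: itertools.chain iterates over its arguments one after another, so the helper above can be replaced with

    from itertools import chain

    for x in chain(list_a, list_b):
        foo(x)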
Re: multi-core software
On Jun 4, 8:35 pm, Roedy Green wrote: > On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee > wrote, quoted or indirectly quoted someone who said : > > >• Why Must Software Be Rewritten For Multi-Core Processors? > > Threads have been part of Java since Day 1. Using threads complicates > your code, but even with a single core processor, they can improve > performance, particularly if you are doing something like combing > multiple websites. > > The nice thing about Java is whether you are on a single core > processor or a 256 CPU machine (We got to run our Athena Integer Java > spreadsheet engine on such a beast), does not concern your code. > > You just have to make sure your threads don't interfere with each > other, and Java/the OS, handle exploiting all the CPUs available. You need to decompose your problem in 256 independent tasks. It can be trivial for some problems and difficult or perhaps impossible for some others. -- http://mail.python.org/mailman/listinfo/python-list
[wxpython] change the language of a menubar
Hi Guys, I am new to wxPython and I have trouble with the menubar. I tried to write a dynamic menubar that can read the item names from an sqlite3 database, so I can change the language very easily. Like this:

def MakeMenuBar(self):
    self.dbCursor.execute("SELECT " + self.lang[self.langSelect] + " FROM menu")
    self.menuwords = self.dbCursor.fetchall()
    menu = wx.Menu()  ## Filemenu
    item = menu.Append(ID_CONNECT, "%s" % self.menuwords[3])
    .
    .
    .

This works fine when I draw the menu for the first time, but when I want to change the language at runtime, I cannot get the menu to redraw. Can someone help me out on this one?

thanks
heiner

-- http://mail.python.org/mailman/listinfo/python-list
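One approach that is often suggested is to build a complete new wx.MenuBar after the language changes and install it with SetMenuBar, destroying the old one; the frame redraws the bar when it is replaced. A sketch only, not tested -- the method name OnLanguageChange and the menu-title index are made up:

    def OnLanguageChange(self, event):
        old = self.GetMenuBar()               # currently installed bar, if any
        menubar = wx.MenuBar()
        menu = wx.Menu()                      ## Filemenu, rebuilt from the db
        menu.Append(ID_CONNECT, "%s" % self.menuwords[3])
        menubar.Append(menu, "%s" % self.menuwords[0])
        self.SetMenuBar(menubar)
        if old is not None:
            old.Destroy()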
Re: Messing up with classes and their namespace
Scott David Daniels wrote: Jean-Michel Pichavant wrote: Hello world, I had recently a very nasty bug in my python application. The context is quite complex, but in the end the problem can be resume as follow: 2 files in the same directory : lib.py: >import foo >foo.Foo.BOOM='lib' foo.py: >class Foo: >BOOM = 'F' > >if __name__=='__main__': >import lib # I'm expecting BOOM to be set to 'lib' >print Foo.BOOM I was expecting 'lib' as output, but I got 'Fooo'. I don't really understand what python mechanism I'm messing with but I have the feeling I've misunderstood a very basic concept about class, namespace or whatever import notion. I guess there is 2 different objects for the same class Foo. How I do I make both Foo objects the same object ? OK, here is one solution (from which you may infer the problem): lib.py: import __main__ __main__.Foo.BOOM = 'lib' foo.py: class Foo: BOOM = 'F' if __name__ == '__main__': import lib # I'm expecting BOOM to be set to 'lib' print(Foo.BOOM) Here is another solution: lib.py: import foo foo.Foo.BOOM = 'lib' foo.py: class Foo: BOOM = 'F' if __name__ == '__main__': import sys sys.modules['foo'] = sys.modules['__main__'] import lib # I'm expecting BOOM to be set to 'lib' print(Foo.BOOM) Here is a demo of what is actually going wrong: foo.py: class Foo: inside = __name__ import foo if __name__ == '__main__': print(Foo is foo.Foo) print(Foo.inside, foo.Foo.inside) And here is a fix foo.py: if __name__ == '__main__': import sys sys.modules['foo'] = sys.modules['__main__'] class Foo: inside = __name__ import foo if __name__ == '__main__': print(Foo is foo.Foo) print(Foo.inside, foo.Foo.inside) --Scott David Daniels scott.dani...@acm.org Thanks for the explanation. I'll have to give it a second thought, I'm still missing something but I'll figure it out. Jean-Michel -- http://mail.python.org/mailman/listinfo/python-list
Re: multi-core software
On 2009-06-05, Vend wrote: > On Jun 4, 8:35 pm, Roedy Green > wrote: >> On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee >> wrote, quoted or indirectly quoted someone who said : >> >> >• Why Must Software Be Rewritten For Multi-Core Processors? >> >> Threads have been part of Java since Day 1. Using threads complicates >> your code, but even with a single core processor, they can improve >> performance, particularly if you are doing something like combing >> multiple websites. >> >> The nice thing about Java is whether you are on a single core >> processor or a 256 CPU machine (We got to run our Athena Integer Java >> spreadsheet engine on such a beast), does not concern your code. >> >> You just have to make sure your threads don't interfere with each >> other, and Java/the OS, handle exploiting all the CPUs available. > > You need to decompose your problem in 256 independent tasks. > > It can be trivial for some problems and difficult or perhaps > impossible for some others. Even for problems where it appears trivial, there can be hidden issues, like false cache coherency communication where no actual sharing is taking place. Or locks that appear to have low contention and negligible performance impact on ``only'' 8 processors suddenly turn into bottlenecks. Then there is NUMA. A given address in memory may be RAM attached to the processor accessing it, or to another processor, with very different access costs. -- http://mail.python.org/mailman/listinfo/python-list
Re: how to iterate over several lists?
On Fri, Jun 5, 2009 at 10:20 AM, Tom wrote: > On Fri, 5 Jun 2009 04:07:19 + (UTC), kj > wrote: > >> >> >>Suppose I have two lists, list_a and list_b, and I want to iterate >>over both as if they were a single list. E.g. I could write: >> >>for x in list_a: >> foo(x) >>for x in list_b: >> foo(x) >> >>But is there a less cumbersome way to achieve this? I'm thinking >>of something in the same vein as Perl's: >> >>for $x in (@list_a, @list_b) { >> foo($x); >>} >> >>TIA! >> >>kynn > > def chain(*args): > return (item for seq in args for item in seq) > > for x in chain(list_a, list_b): > foo(x) > -- > http://mail.python.org/mailman/listinfo/python-list > If they are the same length, you can try the zip built-in function. -- Thanks, --Minesh -- http://mail.python.org/mailman/listinfo/python-list
Re: distutils extension configuration problem
On May 26, 11:10 pm, Ron Garret wrote: > I'm trying to build PyObjC on an Intel Mac running OS X 10.5.7. The > build is breaking because distutils seems to want to build extension > modules as universal binaries, but some of the libraries it depends on > are built for intel-only, i.e.: > > [...@mickey:~/Desktop/pyobjc-framework-ScreenSaver-2.2b2]$ python2.6 > setup.py build > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils > /dist.py:266: UserWarning: Unknown distribution option: 'options' > warnings.warn(msg) > running build > running build_py > running build_ext > building 'ScreenSaver._inlines' extension > gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g > -bundle -undefined dynamic_lookup > build/temp.macosx-10.3-i386-2.6/Modules/_ScreenSaver_inlines.o -o > build/lib.macosx-10.3-i386-2.6/ScreenSaver/_inlines.so -framework > ScreenSaver > ld: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libTIFF.dylib, file > is not of required architecture for architecture ppc > collect2: ld returned 1 exit status > lipo: can't open input file: > /var/folders/nT/nTiypn-v2RatkU+BYncrKU+++TI/-Tmp-//ccMFYRkt.out (No such > file or directory) > error: command 'gcc' failed with exit status 1 > > [...@mickey:~/Desktop/pyobjc-framework-ScreenSaver-2.2b2]$ file > build/temp.macosx-10.3-i386-2.6/Modules/_ScreenSaver_inlines.o > build/temp.macosx-10.3-i386-2.6/Modules/_ScreenSaver_inlines.o: Mach-O > universal binary with 2 architectures > build/temp.macosx-10.3-i386-2.6/Modules/_ScreenSaver_inlines.o (for > architecture ppc): Mach-O object ppc > build/temp.macosx-10.3-i386-2.6/Modules/_ScreenSaver_inlines.o (for > architecture i386): Mach-O object i386 > > [...@mickey:~/Desktop/pyobjc-framework-ScreenSaver-2.2b2]$ file > /usr/local/lib/libtiff.dylib > /usr/local/lib/libtiff.dylib: Mach-O dynamically linked shared library > i386 > > How do I get distutils to stop trying to build extensions as universal > binaries? > > Thanks, > rg I have the same questions but haven't found anything. I got this idea from the apple site: http://developer.apple.com/releasenotes/OpenSource/PerlExtensionsRelNotes/index.html so I tried: env CFLAGS='-arch i386' LDFLAGS='-arch i386' python setup.py build and this removes the -arch ppc flags at least for the compiles but not the links. Maybe something in this direction will work. This didn't work: env ARCHFLAGS='-arch i386' python setup.py install Art -- http://mail.python.org/mailman/listinfo/python-list
Re: fastest way to test file for string?
Tim Chase wrote:

Hi. I need to implement, within a Python script, [functionality like]:

  grep -rl some_string some_directory

I'd do something like this untested function:

def find_files_containing(base_dir, string_to_find):
    for path, files, dirs in os.walk(base_dir):

Note order wrong here ...

for filename in find_files_containing( "/path/to/wherever/", "some_string" ):
    print filename

I like results in a nice order, so I do something more like:

def find_files_containing(base_dir, string_to_find):
    for path, dirs, files in os.walk(base_dir):  # note order
        for fname in sorted(files):  # often endswith in here
            full_name = os.path.join(path, fname)
            try:
                with open(full_name) as f:
                    for line in f:
                        if string_to_find in line:
                            yield full_name
                            break
            except IOError, why:
                print ("On %s in %s: %s" % (fname, path, why))
        # usually several subdirs to avoid
        dirs[:] = sorted([d for d in dirs
                          if d[0] != '.'
                          and d not in ('RCS', 'CVS')])

--Scott David Daniels
scott.dani...@acm.org

-- http://mail.python.org/mailman/listinfo/python-list
Programming language comparison examples?
I thought there was a website which demonstrated how to program a bunch of small problems in a number of different languages. I know about the Programming Language Shootout: http://shootout.alioth.debian.org/ but that's not what I was thinking of. I thought there was a site with a bunch of smaller examples. I'm looking for a site with this sort of information to pass along to my son who's entering his sophomore year of college, has one Java course under his belt, and will take a second course in the fall. I'm hoping to reach him before his brain turns to mush. ;-) Thx, -- Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/ America's vaunted "free press" notwithstanding, story ideas that expose the unseemly side of actual or potential advertisers tend to fall by the wayside. Not quite sure why. -- Jim Thornton -- http://mail.python.org/mailman/listinfo/python-list
Re: PyQt4 + WebKit
On 1 Cze, 22:05, David Boddie wrote: > On Monday 01 June 2009 16:16, dudekks...@gmail.com wrote: > > > On 31 Maj, 02:32, David Boddie wrote: > >> So, you only want to handle certain links, and pass on to WebKit those > >> which you can't handle? Is that correct? > > > Yes, I want to handle external links (out of my host) and links > > starting with download://... (my specific protocol). > > If you want to handle them in a different way to normal Web pages, it seems > that calling setForwardUnsupportedContent(True) on the QWebPage is the way > to do it. > > However, if you want to handle those links and present the content for the > browser to render then you may need to override the browser's network access > manager, as discussed in this message: > > http://lists.trolltech.com/pipermail/qt-interest/2009-March/004279.html > > I experimented a little and added an example to the PyQt Wiki: > > http://www.diotavelli.net/PyQtWiki/Usinga Custom Protocol with QtWebKit > > I hope it helps to get you started with your own custom protocol. > > David Thank You David for Your help. You made a piece of good work :) -- http://mail.python.org/mailman/listinfo/python-list
Create multiple variables (with a loop?)
Hi, I need to create multiple variables (let's say 10x10x10 positions of atoms). Is it possible to create them through a loop with some kind of indexing like atom000, atom001, etc? Or is this a very bad idea? Thing is ... I want to create a grid of atoms in vpython and then calculate the forces for each of them (to their neighbours). Don't know how else I could do that. Maybe a huuuge matrix? Thx for your help! - Philip -- http://mail.python.org/mailman/listinfo/python-list
Re: Create multiple variables (with a loop?)
On Fri, Jun 5, 2009 at 12:35 PM, Philip Gröger wrote: > Hi, > I need to create multiple variables (lets say 10x10x10 positions of atoms). > Is it possible to create them through a loop with some kind of indexing like > atom000, atom001, etc? > Or is this a very bad idea? Yes, very bad idea. Use a collection of some sort instead (list, dict, set). Dynamic variable names are usually evil. > Thing is ... i want to create a grid of atoms in vpython and then calculate > the forces for each of them (to their neighbours). Don't know how else I > could do that. Maybe a huuuge matrix? Yup, that sounds like a promising possibility. You might want to look at NumPy - http://numpy.scipy.org/ Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
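A sketch of the array-of-positions idea for the 10x10x10 grid (using NumPy; the variable names here are made up):

    import numpy as np

    positions = np.zeros((10, 10, 10, 3))             # x, y, z for each atom
    positions[0, 0, 1] = (1.0, 0.0, 0.0)              # place one atom
    offset = positions[0, 0, 1] - positions[0, 0, 0]  # vector between two neighbours

Indexing by (i, j, k) then replaces names like atom001, and whole-grid operations (e.g. summing forces) can be done without explicit Python loops.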
MD6 in Python
As everyone related to security probably knows, Rivest (and his friends) have a new hashing algorithm which is supposed to have none of the weaknesses of MD5 (and as a side benefit - not too many rainbow tables yet). His code is publicly available under the MIT license. Is there a reason not to add it to the standard lib, following the hashlib "protocol"? -- http://mail.python.org/mailman/listinfo/python-list
Re: how to iterate over several lists?
On Fri, Jun 5, 2009 at 11:37 AM, Minesh Patel wrote: > On Fri, Jun 5, 2009 at 10:20 AM, Tom wrote: >> On Fri, 5 Jun 2009 04:07:19 + (UTC), kj >> wrote: >> >>> >>> >>>Suppose I have two lists, list_a and list_b, and I want to iterate >>>over both as if they were a single list. E.g. I could write: >>> >>>for x in list_a: >>> foo(x) >>>for x in list_b: >>> foo(x) >>> >>>But is there a less cumbersome way to achieve this? I'm thinking >>>of something in the same vein as Perl's: >>> >>>for $x in (@list_a, @list_b) { >>> foo($x); >>>} >>> >>>TIA! >>> >>>kynn >> >> def chain(*args): >> return (item for seq in args for item in seq) >> >> for x in chain(list_a, list_b): >> foo(x) >> -- >> http://mail.python.org/mailman/listinfo/python-list >> > > If they are the same length, you can try the zip built-in function. But he doesn't want to iterate over them in parallel... Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
Re: Programming language comparison examples?
On Fri, Jun 5, 2009 at 12:20 PM, wrote: > I thought there was a website which demonstrated how to program a bunch of > small problems in a number of different languages. I know about the > Programming Language Shootout: > > http://shootout.alioth.debian.org/ > > but that's not what I was thinking of. I thought there was a site with a > bunch of smaller examples. PLEAC (Programming Language Examples Alike Cookbook) is one option: http://pleac.sourceforge.net/ Cheers, Chris -- Hope my brain doesn't turn to mush. http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
Re: Programming language comparison examples?
someone: > I thought there was a website which demonstrated how to program a bunch of > small problems in a number of different languages. http://www.rosettacode.org/wiki/Main_Page http://en.literateprograms.org/LiteratePrograms:Welcome http://www.codecodex.com/wiki/index.php?title=Main_Page http://merd.sourceforge.net/pixel/language-study/scripting-language/ http://pleac.sourceforge.net/ http://www.angelfire.com/tx4/cus/shapes/index.html Bye, bearophile -- http://mail.python.org/mailman/listinfo/python-list
Re: The Complexity And Tedium of Software Engineering
Xah Lee wrote:

On Jun 3, 11:50 pm, Xah Lee wrote: Of interest: • The Complexity And Tedium of Software Engineering http://xahlee.org/UnixResource_dir/writ/programer_frustration.html

Addendum: The point in these short examples is not about software bugs or problems. It illustrates how seemingly trivial problems, such as networking, transferring files, running an app on Mac or Windows, upgrading an app, often involve a lot of subtle complexities. For mom-and-pop users, it simply stops them dead. For a senior industrial programmer, it means some conceptually 10-minute task often ends up in hours of tedium.

Quibble: those are not /tedious/. Those are as fascinating as an episode of House, trying not only to get new information but how to get it and how to figure out when some information already in hand is actually misinformation, a classic solution to hard problems. Also figuring out coincidences mistaken for cause and effect. But that is just a quibble, i.e. I think you need a different word, and it is OK if it still conveys some form of unpleasantness. Hair-pulling? Head-banging?

In some “theoretical” sense, all these problems are non-problems. But in practice, these are real, non-trivial problems. These are complexities that form a major, multi-discipline, almost unexplored area of software research. I'm trying to think of a name that categorizes this issue. I think it is a mix of software interface, version control, release control, formal software specification, automated upgrade system, etc. The ultimate scenario is that, if one needs to transfer files from one machine to another, one really should just press a button and expect everything to work. Software upgrade should be all automatic behind the scenes, to the degree that users really don't fucking need to know what so-called “version” of software he is using.

I think you are looking for an immaculate road system on a volcanic island still growing ten feet a day.

Today, with so-called “exponential” scientific progress, software has progressed tremendously too. In our context, that means there is a huge proliferation of protocols and standards. For example, unicode, gazillion networking related protocols, version control systems, automatic update technologies, all come into play here. However, in terms of the above visionary ideal, these are only the beginning. There need to be more protocols, standards, specifications, and more strict ones, and unified ones, for the ideal scenario to take place.

But when would we write the software? Even with all the head-banging, look what we have been able to do with computers, leaving aside for the moment the flight control algorithms of the Airbus? When progress stops we will have time to polish our systems, not before. But then you will be able to use the word "tedium".

kt

-- http://mail.python.org/mailman/listinfo/python-list
A simpler logging configuration file format?
Hi all, I wonder if there are others out there who like me have tried to use the logging module's configuration file and found it bloated and over-complex for simple usage (i.e. a collection of loggers writing to files).

At the moment, if I want to add a new logger "foo" writing to its own file "foo" to my config file, I have to repeat the name 7(!) times, editing 3 different sections in the process:

1) Add "foo" to [loggers] and [handlers]. (These seem completely pointless and were apparently added in anticipation of new functionality that never happened).

2) Create the section

   [logger_foo]
   handler:foo
   qualname:foo
   level:INFO

3) Create the section

   [handler_foo]
   class: FileHandler
   args:("foo", "a")

Wondering how it got like this, I found this blog from soon after it was released in 2004: http://www.mechanicalcat.net/richard/log/Python/Simple_usage_of_Python_s_logging_module

Some of the verdicts are "full of dead chickens and error prone", "horribly complicated", "over-engineered".

So along comes the "basicConfig" method in 2005, which is supposed to address these concerns. But this can only be applied globally, not even to individual loggers, so is only helpful for extremely basic usage (everything to one target) as far as I can see. The config file is still much as it was then.

I'd really like to see the concept extended to encompass multiple loggers and the config file. Allowing Logger.basicConfig should be trivial. Writing a configuration format which took a section and passed all the options in it to basicConfig would in my view lead to a much friendlier syntax for normal usage.

-- http://mail.python.org/mailman/listinfo/python-list
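For comparison, here is roughly what those three config-file sections expand to when done directly in code (a sketch using only the documented logging API; the format string is arbitrary):

    import logging

    foo = logging.getLogger("foo")
    foo.setLevel(logging.INFO)
    handler = logging.FileHandler("foo", "a")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    foo.addHandler(handler)

    foo.info("hello")   # written to the file "foo"

which makes it fairly easy to see how much boilerplate the file format adds on top of this.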
python 3.1 - io.BufferedReader.peek() incomplete or change of behaviour.
Hello, I have sent this message to the authors as well as to this list. If this is the wrong list please let me know where I should be sending it... dev perhaps? First the simple questions: The versions of io.BufferedReader.peek() have different behavior which one is going to stay long term? Is the C version of the reader incomplete or simply changing the behavior? lastly will you consider my input on the api (see below)? Now a full explanation. I am working on writing a multipart parser for html returns in python 3.1. The email parser being used by cgi does not work currently and cgi is broken at the moment especially when used with the wsgiref.simple_server as it is currently implemented. This is what has pushed me to write my own implementation to _part_ of cgi.py. My thinking being that if it works well in the end I might submit a patch as it needs one anyway. My questions revolve around io.BufferedReader.peek(). There are two implementations one writen in python and one in C. At least in python3.1 C is used by default. The version written in python behaves as follows: want = min(n, self.buffer_size) have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have current = self.raw.read(to_read) if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 return self._read_buf[self._read_pos:] This basically means it will always return the requested number of bytes up to buffersize and will preform a read on the underlying stream to get extra data if the buffer has less than requested (upto full buffersize). It also will not return a longer buffer than the number of bytes requested. I have verified this is the behaviour of this. The C version works a little different. The C version works as follows: Py_ssize_t have, r; have = Py_SAFE_DOWNCAST(READAHEAD(self), Py_off_t, Py_ssize_t); /* Constraints: 1. we don't want to advance the file position. 2. we don't want to lose block alignment, so we can't shift the buffer to make some place. Therefore, we either return `have` bytes (if > 0), or a full buffer. */ if (have > 0) { return PyBytes_FromStringAndSize(self->buffer + self->pos, have); } /* Fill the buffer from the raw stream, and copy it to the result. */ _BufferedReader_reset_buf(self); r = _BufferedReader_fill_buffer(self); if (r == -1) return NULL; if (r == -2) r = 0; self->pos = 0; return PyBytes_FromStringAndSize(self->buffer, r); Which basically means it returns what ever is in the buffer period. It will not fill the buffer any more from the raw stream to allow us to peek up to one buffersize like the python version and it always returns whats in the buffer regardless of how much you request. The only exception to this is if the buffer is empty. In that case it will read it full then return it. So it can be said this function is guaranteed to return 1 byte unless a raw read is not possible. The author says they cannot shift the buffer. This is true to retain file alignment. Double buffers maybe a solution if the python versions behavior is wanted. I have not yet checked how buffering is implemented fully. In writing the parser I found that being able to peek a number of bytes was helpful but I need to be able to peek more than 1 consistently (70 in my case) to meet the rfc I am implementing. This meant the C version of peek would not work. Fine I wrote a wrapper class that adds a buffer... This seemed dumb as I was already using a buffered reader so I detach the stream and use my wrapper. 
But now the logic and buffer handling is in the slower python where I would rather not have it. This defeats the purpose of the C buffer reader implementation almost. The C version still has a valid use for being able to read arbitrary size reads but that is really all the buffer reader is doing and I can do block oriented reads and buffering in my wrapper since I have to buffer anyway. Unless I only need a guaranteed peek of 1 byte (baring EOF, etc.) the c version doesn't seem very useful other than for random read cases. This is not a full explanation of course but may give you the picture as I see it. In light of the above and my questions I would like to give my input, hopefully to be constructive. This is what I think the api _should_ be the peek impementation. I may have missed things of course but none the less here it is: - read(n): Current be behavior read1(n): If n is greater than 0 return n or upto current buffer contents bytes advancing the stream position. If n is less than 0 or None return the the buffer contents and advance the position. If the buffer is empty and EOF has not been reached return None. If the buffer is empty and EOF has been reached return b''. peek(n): If n is less than 0 or None return buffer contents with out advancing stream position
Re: MD6 in Python
mik...@gmail.com schrieb: > As every one related to security probably knows, Rivest (and his > friends) have a new hashing algorithm which is supposed to have none > of the weaknesses of MD5 (and as a side benefit - not too many rainbow > tables yet). His code if publicly available under the MIT license. > > Is there a reason not to add it to the standard lib, following the > hashlib "protocol"? Somebody has to write and add a md6 wrapper to the standard library. It's going to take some time, at least 18 months until Python 2.8 and 3.2 are going to be released. Do you need a wrapper for md6? I could easily write one in about an hour. Christian -- http://mail.python.org/mailman/listinfo/python-list
Re: PyQt4 + WebKit
On Friday 05 June 2009 21:33, dudekks...@gmail.com wrote: > On 1 Cze, 22:05, David Boddie wrote: >> I experimented a little and added an example to the PyQt Wiki: >> >> http://www.diotavelli.net/PyQtWiki/Usinga Custom Protocol with QtWebKit >> >> I hope it helps to get you started with your own custom protocol. >> >> David > > Thank You David for Your help. You made a piece of good work :) No problem. I've since found some issues with another example I've been working on, so I may well update that page soon. :-) David -- http://mail.python.org/mailman/listinfo/python-list
Re: Messing up with classes and their namespace
Jean-Michel Pichavant wrote: Thanks for the explanation. I'll have to give it a second thought, I'm still missing something but I'll figure it out.

Perhaps it is this:

1. When you run foo.py as a script, the interpreter creates module '__main__' by executing the code in foo.py.

2. When that code does 'import lib', the interpreter looks for an existing module named 'lib', does not find it, and creates module 'lib' by executing the code in lib.py.

3. When that code does 'import foo', the interpreter looks for an existing module named 'foo', does not find it, and creates module 'foo' by executing (again) the code in foo.py.

Module 'foo' is slightly different from module '__main__', created from the same code, because of the section conditioned by "if __name__ == '__main__'", that being the purpose of that incantation. But each of the two modules has its own class Foo. You sort of guessed this ...

> I guess there is 2 different objects for the same class Foo.

They are the same in content, but not in identity, until you change one of them.

> How do I make both Foo objects the same object ?

As Scott hinted, by not making two of them, and you do that by not making two modules from the same file.

Terry Jan Reedy

-- http://mail.python.org/mailman/listinfo/python-list