Import aliases
For a while I maintained a Python package 'foo' with a number of modules (including a nested module structure). Now the package has moved into a namespace package 'a.b.foo'. What is the best way to make old code work with the new package, so that imports like import foo.bar.xxx or from foo.bar import xxx keep working for existing code? Andy -- http://mail.python.org/mailman/listinfo/python-list
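One possible approach (a sketch, not from the thread; it assumes the real code now lives in the namespace package a.b.foo) is to leave a small compatibility shim at the old location that points the old top-level name at the new package:

# foo.py -- compatibility shim kept at the old import location (sketch)
import sys
import a.b.foo

# Because a.b.foo is a package with a __path__, pointing the old name at it
# lets both "import foo.bar.xxx" and "from foo.bar import xxx" keep resolving.
sys.modules['foo'] = a.b.foo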
Re: Run process with timeout
Natan wrote: > Hi. > > I have a python script under linux where I poll many hundreds of > interfaces with mrtg every 5 minutes. Today I create some threads and > use os.system(command) to run the process, but some of them just hang. > I would like to terminate the process after 15 seconds if it doesn't > finish, but os.system() doesn't have any timeout parameter. > > Can anyone help me on what can I use to do this? The new subprocess module has that functionality. If your python version doesn't have that you could try my unix specific version: http://www.pixelbeat.org/libs/subProcess.py Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
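For Python versions whose subprocess module has no timeout support, one workable pattern (a sketch, not the code from the linked library) is to poll the child and kill it once a deadline passes:

import os
import signal
import subprocess
import time

def run_with_timeout(command, timeout=15):
    # Run a shell command, killing it if it runs longer than `timeout` seconds (sketch)
    proc = subprocess.Popen(command, shell=True)
    deadline = time.time() + timeout
    while proc.poll() is None:
        if time.time() > deadline:
            os.kill(proc.pid, signal.SIGTERM)   # unix-specific, as in the original
            return None
        time.sleep(0.1)
    return proc.returncode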
Re: get the IP address of a host
J Berends wrote: def getipaddr(hostname='default'): [snip] It returns the IP address with which it connects to the world (not lo), might be a pvt LAN address or an internet routed IP. Depends on where the host is. I hate the google trick actually, so any suggestions for something better are always welcome. Yes, your IP is really determined by what you connect to. So I did essentially the same as you. For reference see getMyIPAddress() at the following: http://www.pixelbeat.org/libs/PadSocket.c Pádraig -- http://mail.python.org/mailman/listinfo/python-list
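The "connect out and see which local address was chosen" trick looks roughly like this in Python (a sketch; the probe address is arbitrary and, being UDP, nothing is actually sent):

import socket

def get_my_ip(probe_host='192.0.2.1', probe_port=80):
    # connect() on a UDP socket only selects a route; no packets are sent
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_host, probe_port))
        return s.getsockname()[0]
    finally:
        s.close()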
Re: Python & unicode
Michel Claveau - abstraction méta-galactique non triviale en fuite perpétuelle wrote: Hi ! If Python is Ok with Unicode, why does the next script not run ? # -*- coding: utf-8 -*- def Ñ(toto): return(toto*3) Because the coding is only supported in string literals. But I'm not sure exactly why. It would be nice to do: import math π = math.pi -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: Python & unicode
Scott David Daniels wrote: [EMAIL PROTECTED] wrote: Because the coding is only supported in string literals. But I'm not sure exactly why. The why is the same as why we write in English on this newsgroup. Not because English is better, but because that leaves a single language for everyone to use to communicate in. Fair enough. Though people can communicate in other languages if they want, or have specific newsgroups for other languages. If you allow non-ASCII characters in symbol names, your source code will be unviewable (and uneditable) for people with ASCII-only terminals, never mind how comprehensible it might otherwise be. So how does one edit non ascii string literals at the moment? It is a least-common-denominator argument, not a "this is better" argument. If one edited the whole file in the specified coding then one wouldn't have to switch editing modes when editing strings which is a real pain. -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: regular expression match collection
[EMAIL PROTECTED] wrote: Hello, For example I have a string : "Halo by by by" Then I want to take and know the position of every "by" how can I do it in python?

import re
p = re.compile("by")
[match.start() for match in p.finditer("Halo by by by")]

see: http://mail.python.org/pipermail/python-list/2004-December/255013.html -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: ssh popen stalling on password redirect output?
For ssh automation I would try, in order:

paramiko
twisted
keys + popen
pexpect

-- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
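A minimal paramiko sketch of the first option (host, user and key path are placeholders, not from the thread):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example.com', username='me', key_filename='/home/me/.ssh/id_rsa')

stdin, stdout, stderr = client.exec_command('uptime')
print stdout.read()
client.close()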
Re: python connect to server using SSH protocol
[EMAIL PROTECTED] wrote: How can python connect to a server which uses the SSH protocol? Is it easy, since my python has to run a third party vendor, write data, read data inside the server (supercomputer). Any suggestion? You can use popen around the ssh binary. You may need the pty module if you want to deal with password prompts. If you want lower level control, use the twisted module. -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
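A minimal sketch of the popen-around-ssh approach (it assumes key-based authentication is already set up so no password prompt appears; the host and command are placeholders):

import subprocess

proc = subprocess.Popen(['ssh', 'user@supercomputer', 'ls /data'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print out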
Re: graph visualisation
Alexander Zatvornitskiy wrote: Hello, All! I need routines for visualization of graphs, like this for Matlab: You could output the dot language, which is parsed by graphviz. See: http://dkbza.org/pydot.html Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
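A minimal pydot sketch (the node names are made up for illustration):

import pydot

graph = pydot.Dot(graph_type='digraph')
graph.add_edge(pydot.Edge('a', 'b'))
graph.add_edge(pydot.Edge('b', 'c'))
graph.add_edge(pydot.Edge('c', 'a'))
graph.write_png('graph.png')    # renders via graphviz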
Re: Python glade tute
somesh wrote: hello, I wrote a small tute for my brother to teach him python + glade, plz see, and suggest to make it more professional , In tute I discussed on Glade + Python for developing Applications too rapidly as ppls used to work on win32 platform with VB. http://www40.brinkster.com/s4somesh/glade/index.html I would work through a concrete example, with screenshots. Have a look at: http://www.pixelbeat.org/talks/pygtk You may get ideas from other tutorials here: http://www.awaretek.com/tutorials.html Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
Re: [perl-python] a program to delete duplicate files
I've written a python GUI wrapper around some shell scripts: http://www.pixelbeat.org/fslint/ The shell script logic is essentially:

exclude hard linked files
only include files where there are more than 1 with the same size
print files with matching md5sum

Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
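The same logic in pure Python, as a rough sketch (not the fslint implementation itself):

import hashlib  # use the md5 module instead on very old Pythons
import os

def find_duplicates(top):
    # group candidate files by size, skipping extra hard links to the same inode
    by_size = {}
    seen_inodes = set()
    for dirpath, dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue
            st = os.stat(path)
            if (st.st_dev, st.st_ino) in seen_inodes:
                continue          # an extra hard link to a file already seen
            seen_inodes.add((st.st_dev, st.st_ino))
            by_size.setdefault(st.st_size, []).append(path)

    # within each size group, report files whose md5sums match
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue
        by_hash = {}
        for path in paths:
            digest = hashlib.md5(open(path, 'rb').read()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
        for dupes in by_hash.values():
            if len(dupes) > 1:
                yield dupes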
Re: RegEx: find all occurrences of a single character in a string
Franz Steinhaeusler wrote: given a string: st="abcdatraataza" ^ ^ ^ ^ (these should be found) I want to get the positions of all single 'a' characters. (Without another 'a' neighbour) So I tried: r=re.compile('[^a]a[^a]') but this applies only to the a's which have neighbours. So I need also '^a' and 'a$'. Am I doing something wrong? Is there an easier solution? How can I quickly get all these positions? Thank you in advance.

import re
s='abcdatraataza'
r=re.compile('(?<!a)a(?!a)')
[m.start() for m in r.finditer(s)]

-- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: Efficient grep using Python?
sf wrote: Just started thinking about learning python. Is there any place where I can get some free examples, especially for the following kind of problem (it must be trivial for those using python). I have files A and B, each containing say 100,000 lines (each line = one string without any space). I want to do "A - (A intersection B)". Essentially, I want to do an efficient grep, i.e. from A remove those lines which are also present in file B. You could implement this elegantly using the new sets feature. For reference here is the unix way to do it: sort a b b | uniq -u -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
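A minimal sketch of the sets approach (file names A and B as in the question); it also preserves A's original line order:

b_lines = set(open('B'))
for line in open('A'):
    if line not in b_lines:
        print line,     # trailing comma: the line already ends with a newline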
Re: Efficient grep using Python?
Christos TZOTZIOY Georgiou wrote: On Wed, 15 Dec 2004 16:10:08 +, rumours say that [EMAIL PROTECTED] might have written: Essentially, want to do efficient grep, i..e from A remove those lines which are also present in file B. You could implement elegantly using the new sets feature For reference here is the unix way to do it: sort a b b | uniq -u No, like I just wrote in another post, he wants $ grep -vf B A I think that $ sort A B B | uniq -u can be abbreviated to $ sort -u A B B which is the union rather than the intersection of the files wrong. Notice the -u option to uniq. http://marc.free.net.ph/message/20021101.043138.1bc24964.html wastes some time by considering B twice I challenge you to a benchmark :-) and finally destroys original line order (should it be important). true -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: Efficient grep using Python?
Christos TZOTZIOY Georgiou wrote: On Thu, 16 Dec 2004 14:28:21 +, rumours say that [EMAIL PROTECTED] I challenge you to a benchmark :-) Well, the numbers I provided above are almost meaningless with such a small set (and they easily could be reverse, I just kept the convenient-to-me first run :). Do you really believe that sorting three files and then scanning their merged output counting duplicates is faster than scanning two files (and doing lookups during the second scan)?

$ python
Python 2.3.3 (#1, Aug 31 2004, 13:51:39)
[GCC 3.3.3 (SuSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x=open('/usr/share/dict/words').readlines()
>>> len(x)
45378
>>> import random
>>> random.shuffle(x)
>>> open("/tmp/A", "w").writelines(x)
>>> random.shuffle(x)
>>> open("/tmp/B", "w").writelines(x[:1000])

$ time sort A B B | uniq -u >/dev/null

real    0m0.311s
user    0m0.315s
sys     0m0.008s

$ time grep -Fvf B A >/dev/null

real    0m0.067s
user    0m0.064s
sys     0m0.003s

(Yes, I cheated by adding the F (for no regular expressions) flag :) Also you only have 1000 entries in B! Try it again with all entries in B also ;-) Remember the original poster had 100K entries! and finally destroys original line order (should it be important). true That's our final agreement :) Note the order is trivial to restore with a "decorate-sort-undecorate" idiom. -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
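The decorate-sort-undecorate idiom mentioned at the end looks roughly like this in Python (a sketch, not from the thread): tag each line of A with its original position, do the filtering, then sort on the tag and strip it to restore the original order.

b_lines = set(open('B'))
decorated = [(pos, line) for pos, line in enumerate(open('A'))]
survivors = [(pos, line) for pos, line in decorated if line not in b_lines]
survivors.sort()                        # restore the original ordering by position
result = [line for pos, line in survivors]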
Re: Efficient grep using Python?
sf wrote: The point is that when you have 100,000s of records, this grep becomes really slow? There are performance bugs with current versions of grep and multibyte characters that are only getting addressed now. To work around these do `export LANG=C` first. In my experience grep is not scalable since it's O(n^2). See below (note A and B are randomized versions of /usr/share/dict/words (and therefore worst case for the sort method)).

$ wc -l A B
  45427 A
  45427 B
$ export LANG=C
$ time grep -Fvf B A

real    0m0.437s

$ time sort A B B | uniq -u

real    0m0.262s

$ rpm -q grep coreutils
grep-2.5.1-16.1
coreutils-4.5.3-19

-- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: subprocess.Popen
Michele Simionato wrote: I was looking at Python 2.4 subprocess.Popen. Quite nice and handy, but I wonder why a "kill" method is missing. I am just adding it via subclassing,

class Popen(subprocess.Popen):
    def kill(self, signal=SIGTERM):
        os.kill(self.pid, signal)

but I would prefer to have it in the standard Popen class. I am surprised it is not there. Any comments? Seems like an omission, but probably due to windows implementation problems? Note my subprocess.py that was referenced in pep 324 does have a kill method: http://www.pixelbeat.org/libs/subProcess.py Note also that it also kills any children of the subProcess using process groups. -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
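Killing the whole process group, as mentioned above, looks roughly like this (a unix-only sketch, not the code from subProcess.py; the command is a placeholder):

import os
import signal
import subprocess

# Start the child in its own process group so it and anything it spawns
# can be signalled together.
proc = subprocess.Popen('some_command with args', shell=True,
                        preexec_fn=os.setpgrp)

# ... later, to kill the child and its children:
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)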
Re: RegEx: find all occurrences of a single character in a string
[EMAIL PROTECTED] wrote:

import re
s='abcdatraataza'
r=re.compile('(?<!a)a(?!a)')

Oops, tested this time:

import re
def index_letters(s,l):
    regexp='(?<!%s)%s(?!%s)' % (l,l,l)
    return [m.start() for m in re.finditer(regexp,s)]

print index_letters('abcdatraataza','a')

-- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: RegEx: find all occurrences of a single character in a string
[EMAIL PROTECTED] wrote:

import re
def index_letters(s,l):
    regexp='(?<!%s)%s(?!%s)' % (l,l,l)
    return [m.start() for m in re.finditer(regexp,s)]

print index_letters('abcdatraataza','a')

Just comparing Fredrik Lundh's method:

r=re.compile("a+")
[m.start() for m in r.finditer(s) if len(m.group()) == 1]

Mine runs 100K iterations of 'abcdatraataza','a' in 1.4s whereas Fredrik's does the same in 1.9s -- Pádraig Brady - http://www.pixelbeat.org -- -- http://mail.python.org/mailman/listinfo/python-list
Re: [ann] fdups 0.15
Patrick Useldinger wrote: I am happy to announce version 0.15 of fdups. Cool. For reference have a look at: http://www.pixelbeat.org/fslint/ Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
Re: Getting directory size
francisl wrote: How can we get a full directory size (the sum of all its data)? Like when we type `du -sh mydir`. Because os.path.getsize('mydir') only gives the size of the directory's physical representation on the disk. Have a look at: http://www.pixelbeat.org/scripts/dutop Pádraig. -- http://mail.python.org/mailman/listinfo/python-list
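In pure Python the usual approach is to walk the tree and sum the file sizes (a sketch, not the dutop script itself):

import os

def dir_size(top):
    # total size in bytes of all regular files under `top`
    total = 0
    for dirpath, dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total

print dir_size('mydir')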
MP3 and ID3 library/module recommendations
Wondering what experiences people have had using various packages for extracting data from and manipulating mp3 files. Specifically, I need to get a song duration, which as I understand it you extract from the framesets, as well as the typical id3 stuff like artist, album, song, year, etc. Ideally, I'd also like something that converts audio files to mp3 format. Looking on the python package index, I see the following:

- eyeD3
- PyMedia
- hachoir-metadata
- libtagedit
- mutagen

Any others to consider? What are people's comments on these? thanks all, p. -- http://mail.python.org/mailman/listinfo/python-list
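For reference, reading the duration and a few tags with mutagen looks roughly like this (a sketch; the filename is a placeholder and the keys shown are the EasyID3 ones):

from mutagen.mp3 import MP3
from mutagen.easyid3 import EasyID3

audio = MP3("song.mp3", ID3=EasyID3)
print audio.info.length                  # duration in seconds
print audio.get("artist"), audio.get("album"), audio.get("title")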
mimicking a file in memory
I am using the mutagen module to extract id3 information from mp3 files. In order to do this, you give mutagen a filename, which it converts into a file object using the python built-in "file" function. Unfortunately, my mp3 files don't live locally. They are on a number of remote servers which I access using urllib2. Here is my dilemma: I don't want to copy the files into a local directory for mutagen's sake, only to have to remove them afterward. Instead, I'd like to load the files into memory and still be able to hand the built-in "file" function a filename to access the file in memory. Any ideas on how to do this? -- http://mail.python.org/mailman/listinfo/python-list
Re: mimicking a file in memory
On Nov 20, 1:20 pm, Larry Bates <[EMAIL PROTECTED]> wrote: > p. wrote: > > I am using the mutagen module to extract id3 information from mp3 > > files. In order to do this, you give mutagen a filename, which it > > converts into a file object using the python built-in "file" function. > > > Unfortunately, my mp3 files don't live locally. They are on a number > > of remote servers which I access using urllib2. > > > Here is my dilemma: > > I don't want to copy the files into a local directory for mutagen's > > sake, only to have to remove them afterward. Instead, I'd like to load > > the files into memory and still be able to hand the built-in "file" > > function a filename to access the file in memory. > > > Any ideas on how to do this? > > Looks like you would need to "hack" the source and replace lines like: > > def load(self, filename): > self.filename = filename > fileobj = file(filename, "rb") > > with something like: > > def load(self, filename): > if hasattr(filename, 'read'): > fileobj=filename > if hasattr(filename, 'name'): > self.filename = filename > else: > self.filename = 'unknown' > > else: > self.filename = filename > fileobj = file(filename, "rb") > > -Larry I thought about this approach originally, but here's the catch there: the read method isn't the only method i need. mutagen calls the seek method on the file object. urllib2 returns a "file-like object" that does not have a seek method associated with it, which means i'd have to extend urllib2 to add that method. Problem is, i don't know how you could implement a seek method with urllib2. -- http://mail.python.org/mailman/listinfo/python-list
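One way around the missing seek method (a sketch; it does mean pulling each file fully into memory, which matches the stated goal): read the urllib2 response and wrap the bytes in a StringIO, which is a seekable file-like object that a patched load() like the one above could accept.

import urllib2
from cStringIO import StringIO

url = "http://example.com/some.mp3"      # placeholder for one of the remote files
data = urllib2.urlopen(url).read()       # download into memory
fileobj = StringIO(data)                 # supports read() and seek()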
Re: mimicking a file in memory
On Nov 20, 2:06 pm, Grant Edwards <[EMAIL PROTECTED]> wrote: > On 2007-11-20, Jarek Zgoda <[EMAIL PROTECTED]> wrote: > > >> Here is my dilemma: I don't want to copy the files into a > >> local directory for mutagen's sake, only to have to remove > >> them afterward. Instead, I'd like to load the files into > >> memory and still be able to hand the built-in "file" function > >> a filename to access the file in memory. > > >> Any ideas on how to do this? > > By "memory" I presume you mean virtual memory? RAM with > disk-blocks as backing store? On any real OS, tempfiles are > just RAM with disk-blocks as backing store. > > Sound similar? The only difference is the API used to access > the bytes. You want a file-I/O API, so you can either use the > extensively tested and and highly optimized filesystem code in > the OS to make disk-backed-RAM look like a file, or you can try > to write Python code that does the same thing. > > Which do you think is going to work faster/better? > > [The kernel is generally better at knowing what needs to be in > RAM than you are -- let it do its job.] > > IOW: just use a temp file. Life will be simple. The bytes > probably won't ever hit the platters (if they do, then that > means they would have the other way too). > > -- > Grant Edwards grante Yow! It's a hole all the > at way to downtown Burbank! >visi.com Thanks all. Grant, are temp files automatically put into ram for all linux distros? at any rate, i could set up ram disk. much better solution than using python...except that i've never done a ram disk before. more reading to do... -- http://mail.python.org/mailman/listinfo/python-list
Re: mimicking a file in memory
On Nov 20, 3:14 pm, Grant Edwards <[EMAIL PROTECTED]> wrote: > On 2007-11-20, p. <[EMAIL PROTECTED]> wrote: > > > > >> By "memory" I presume you mean virtual memory? RAM with > >> disk-blocks as backing store? On any real OS, tempfiles are > >> just RAM with disk-blocks as backing store. > > >> Sound similar? The only difference is the API used to access > >> the bytes. You want a file-I/O API, so you can either use the > >> extensively tested and and highly optimized filesystem code in > >> the OS to make disk-backed-RAM look like a file, or you can try > >> to write Python code that does the same thing. > > >> Which do you think is going to work faster/better? > > >> [The kernel is generally better at knowing what needs to be in > >> RAM than you are -- let it do its job.] > > >> IOW: just use a temp file. Life will be simple. The bytes > >> probably won't ever hit the platters (if they do, then that > >> means they would have the other way too). > > > Grant, are temp files automatically put into ram for all linux > > distros? > > All files are put into ram for all linux distros that use > virtual memory. (You'll know if you're not using virtual.) > > > at any rate, i could set up ram disk. much better solution > > than using python...except that i've never done a ram disk > > before. more reading to do... > > You don't have set up a ram disk. You already have one. All > your disks are ram disks. It's just that some of them have > magnetic platters as backing store so they get preserved during > a reboot. On some Linux distros, the /tmp directory is a > filesystem without prmanent magnetic backing-store. On others > it does have a permanent backing store. If you do a "mount" > command, you'll probably see a "filesystem" who's type is > "tmpfs". That's a filesystem with no permanent magnetic > backing-store[1]. > > Seehttp://en.wikipedia.org/wiki/TMPFS > > /tmp might or might not be in a tmpfs filesystem (depends on > the distro). In any case, you probably don't need to worry > about it. > > Just call tempfile.NamedTemporaryFile() and tell it you want an > unbuffered file (that way you don't have to remember to flush > the file after writing to it). It will return a file object: > > f = tempfile.NamedTemporaryFile(bufsize=0) > > Write the data to that file object and flush it: > > f.write(mydata) > > Pass the file's name to whatever broken library it is that > insists on a file name instead of a file-like object: > > brokenLib.brokenModule(f.name). > > When you're done, delete the file object: > > del f > > NB: This particular approach won't work on Windows. On Windows > you'll have to use tempfile.mktemp(), which can have race > conditions. It returns a name, so you'll have to create > the file, write to it, and then pass the name to the broken > module. > > [1] Tmpfs pages use the swap partition for temporary backing > store the same as for all other memory pages. If you're > using tmpfs for big stuff, make sure your swap partition is > large enough to hold whatever you're doing in tmpfs plus > whatever normal swapping capacity you need. 
> --demo.py--
> def brokenModule(filename):
>     f = file(filename)
>     d = f.read()
>     print d
>     f.close()
>
> import tempfile,os
>
> f = tempfile.NamedTemporaryFile(bufsize=0)
> n = f.name
> print f,":",n
> os.system("ls -l %s\n" % n)
>
> f.write("hello world")
> brokenModule(n)
>
> del f
> os.system("ls -l %s\n" % n)
> --demo.py--
>
> If you run this you'll see something like this:
>
> $ python demo.py
> <open file '<fdopen>', mode 'w+b' at 0xb7c37728> : /tmp/tmpgqSj8p
> -rw------- 1 grante users 0 2007-11-20 17:11 /tmp/tmpgqSj8p
> hello world
> ls: cannot access /tmp/tmpgqSj8p: No such file or directory
>
> --
> Grant Edwards                   grante             Yow! I want to mail a
>                                   at               bronzed artichoke to
>                                visi.com            Nicaragua!

excellent. didn't know tempfile was a module. thanks so much. -- http://mail.python.org/mailman/listinfo/python-list
Transforming ascii file (pseudo database) into proper database
I need to take a series of ascii files and transform the data contained therein so that it can be inserted into an existing database. The ascii files are just a series of lines, each line containing fields separated by '|' character. Relations amongst the data in the various files are denoted through an integer identifier, a pseudo key if you will. Unfortunately, the relations in the ascii file do not match up with those in the database in which i need to insert the data, i.e., I need to transform the data from the files before inserting into the database. Now, this would all be relatively simple if not for the following fact: The ascii files are each around 800MB, so pulling everything into memory and matching up the relations before inserting the data into the database is impossible. My questions are: 1. Has anyone done anything like this before, and if so, do you have any advice? 2. In the abstract, can anyone think of a way of amassing all the related data for a specific identifier from all the individual files without pulling all of the files into memory and without having to repeatedly open, search, and close the files over and over again? -- http://mail.python.org/mailman/listinfo/python-list
Re: Transforming ascii file (pseudo database) into proper database
So in answer to some of the questions: - There are about 15 files, each roughly representing a table. - Within the files, each line represents a record. - The formatting for the lines is like so: File1: somval1|ID|someval2|someval3|etc. File2: ID|someval1|someval2|somewal3|etc. Where ID is the one and only value linking "records" from one file to "records" in another file - moreover, as far as I can tell, the relationships are all 1:1 (or 1:0) (I don't have the full dataset yet, just a sampling, so I'm flying a bit in the dark). - I believe that individual "records" within each of the files is unique with respect to the identifier (again, not certain because I'm only working with sample data). - As the example shows, the position of the ID is not the same for all files. - I don't know how big N is since I only have a sample to work with, and probably won't get the full dataset anytime soon. (Lets just take it as a given that I won't get that information until AFTER a first implementation...politics.) - I don't know how many identifiers either, although it has to be at least as large as the number of lines in the largest file (again, I don't have the actual data yet). So as an exercise, lets assume 800MB file, each line of data taking up roughly 150B (guesstimate - based on examination of sample data)...so roughly 5.3 million unique IDs. With that size, I'll have to load them into temp db. I just can't see holding that much data in memory... -- http://mail.python.org/mailman/listinfo/python-list
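A rough sketch of the temp-db route using sqlite3 from the standard library (Python 2.5+); the table and file names are made up, and the ID is assumed to be the second field, as in the File1 example above:

import csv
import sqlite3

conn = sqlite3.connect('staging.db')
conn.execute('CREATE TABLE IF NOT EXISTS file1 (id INTEGER, line TEXT)')

# Load one pipe-delimited file, keyed on its identifier column.
reader = csv.reader(open('file1.txt', 'rb'), delimiter='|')
for row in reader:
    conn.execute('INSERT INTO file1 (id, line) VALUES (?, ?)',
                 (int(row[1]), '|'.join(row)))
conn.execute('CREATE INDEX IF NOT EXISTS file1_id ON file1 (id)')
conn.commit()

# All records for one identifier can now be pulled without rescanning the files.
rows = conn.execute('SELECT line FROM file1 WHERE id = ?', (12345,)).fetchall()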
Re: Transforming ascii file (pseudo database) into proper database
Thanks to all for the ideas. I am familiar with external sorting. Hadn't considered it though. Will definitely be giving that a go, and then merging. Again, thanks all. -- http://mail.python.org/mailman/listinfo/python-list
download timeout vs. socket timeout
I'm using urllib2 in python 2.4, and I'm wondering how people typically deal with the case in which a download is too slow. Setting the socket timeout only covers the cases where there is no response on the socket for whatever the timeout period is. What if, however, I'm getting bits back but simply want to bail out if the total time to download takes too long? I'm trying to avoid creating a whole other thread if possible. -- http://mail.python.org/mailman/listinfo/python-list
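One way to do this without another thread (a sketch; the chunk size and limit are arbitrary): read the response in chunks and check an overall deadline between reads, so a slow-but-not-stalled download still gets cut off.

import time
import urllib2

def download_with_deadline(url, max_seconds=60, chunk_size=8192):
    # the global socket timeout, if set, still applies to each individual read
    response = urllib2.urlopen(url)
    deadline = time.time() + max_seconds
    chunks = []
    while True:
        chunk = response.read(chunk_size)
        if not chunk:
            return ''.join(chunks)
        chunks.append(chunk)
        if time.time() > deadline:
            raise IOError("download exceeded %d seconds" % max_seconds)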
Google App Engine dev_appserver and pdb?
Is there a way to use pdb to debug Google apps written in Python? When I start the development system to run the app "test" like this - './google_appengine/dev_appserver.py' './test' - I'd like to send the program into debug. I couldn't see anything in the documentation how to do this. If I do this - python -mpdb './google_appengine/dev_appserver.py' './test' - then dev_appserver.py goes into debug but doesn't know anything about "test". -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Tuesday, June 4, 2013 8:44:11 AM UTC-7, Rick Johnson wrote: > Yes, but the problem is not "my approach", rather the lack > > of proper language design (my apologizes to the "anointed > > one". ;-) If you don't like implicit conversion to Boolean, then maybe you should be using another language -- and I mean that in a constructive sense. I'm not particularly fond of it either, but I switched from Python to another language a while back. The issue is not a lack of "proper language design" but rather a language design philosophy that values conciseness and simplicity over explicitness and rigor. Implicit conversion to Boolean is only one of many language features that are questionable for critical production software. Another is the convention of interpreting negative indices as counting backward from the end of a list or sequence. Yeah, I thought that was elegant... until it bit me. Is it a bad idea? Not necessarily. It certainly enhances programmer productivity, and it can be done correctly "almost" all the time. But that one time in a hundred or a thousand when you accidentally use a negative index can be a bitch. But then, what would you expect of a language that allows you to write x = 1 x = "Hello" It's all loosey goosey -- which is fine for many applications but certainly not for critical ones. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Wednesday, June 5, 2013 12:15:57 AM UTC-7, Chris Angelico wrote: > On Wed, Jun 5, 2013 at 4:11 PM, Russ P. wrote: > > > On Tuesday, June 4, 2013 8:44:11 AM UTC-7, Rick Johnson wrote: > > > > > >> Yes, but the problem is not "my approach", rather the lack > > >> > > >> of proper language design (my apologizes to the "anointed > > >> > > >> one". ;-) > > > > > > If you don't like implicit conversion to Boolean, then maybe you should be > > using another language -- and I mean that in a constructive sense. I'm not > > particularly fond of it either, but I switched from Python to another > > language a while back. The issue is not a lack of "proper language design" > > but rather a language design philosophy that values conciseness and > > simplicity over explicitness and rigor. > > > > (Out of curiosity, which language? Feel free to not answer, or to > > answer off-list, as that's probably not constructive to the thread.) No problem. I'm using Scala. It has a sophisticated type system. The language is not perfect, but it seems to suit my needs fairly well. > > > I cannot name a single modern programming language that does NOT have > > some kind of implicit boolification. The only such language I know of > > is REXX, which has a single data type for everything, but insists on > > the exact strings "1" and "0" for True and False, anything else is an > > error. Every other language has some definition of "these things are > > true, these are false"; for instance: Scala (and Java) don't do that. Nor does Ada. That's because Ada is designed for no-nonsense critical systems. It is the standard higher-order language for flight control systems, for example. > > It's all loosey goosey -- which is fine for many applications but certainly > > not for critical ones. > > > > The looseness doesn't preclude critical applications. It's all a > > question of what you're testing. Does your code care that this be a > > list, and not something else? Then test! You have that option. What > > happens if it isn't a list, and something is done that bombs with an > > exception? Maybe that's not a problem. > > > > Critical applications can often be built in layers. For instance, a > > network server might listen for multiple socket connections, and for > > each connection, process multiple requests. You would want to catch > > exceptions at the two boundaries there; if a request handler crashes, > > the connection should not be dropped, and if a connection handler > > crashes, the server should keep running. With some basic defenses like > > that, your code need no longer concern itself with trivialities - if > > something goes wrong, there'll be an exception in the log. (BTW, this > > is one of the places where a bare or very wide except clause is > > appropriate. Log and move on.) Well, I don't really want to open the Pandora's box of static vs. dynamic typing. Yes, with enough testing, I'm sure you can get something good out of a dynamically typed language for small to medium-sized applications, but I have my doubts about larger applications. However, I don't claim to be an expert. Someone somewhere has probably developed a solid large application in Python. But I'll bet a dollar to a dime that it took more work than it would have taken in a good statically typed language. Yes, extensive testing can go a long way, but extensive testing combined with good static typing can go even further for the same level of effort. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Wednesday, June 5, 2013 1:59:01 AM UTC-7, Mark Lawrence wrote: > On 05/06/2013 07:11, Russ P. wrote: > > > > > But then, what would you expect of a language that allows you to write > > > > > > x = 1 > > > x = "Hello" > > > > > > It's all loosey goosey -- which is fine for many applications but certainly > > not for critical ones. > > > > > > > I want to launch this rocket with an expensive satellite on top. I know > > it's safe as the code is written in ADA. Whoops :( So Python would have been a better choice? Yeah, right. If you know anything about that rocket mishap, you should know that Ada was not the source of the problem. Ada won't keep airplane wings from breaking either, by the way. It's not magic. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Wednesday, June 5, 2013 9:59:07 AM UTC-7, Chris Angelico wrote: > On Thu, Jun 6, 2013 at 2:15 AM, Russ P. wrote: > > > On Wednesday, June 5, 2013 1:59:01 AM UTC-7, Mark Lawrence wrote: > > >> I want to launch this rocket with an expensive satellite on top. I know > > >> > > >> it's safe as the code is written in ADA. Whoops :( > > > > > > > > > So Python would have been a better choice? Yeah, right. If you know > > anything about that rocket mishap, you should know that Ada was not the > > source of the problem. Ada won't keep airplane wings from breaking either, > > by the way. It's not magic. > > > > Frankly, I don't think the language much matters. It's all down to the > > skill of the programmers and testers. Ada wasn't the source of the > > problem unless Ada has a bug in it... which is going to be true of > > pretty much any language. Maybe Python would be a better choice, maybe > > not; but let me tell you this, if the choice of language means the > > difference between testable in three months and testable code in three > > years, I'm going for the former. > > > > ChrisA I'm not an Ada guy, but Ada advocates claim that it reduces development time by half in the long run compared to C and C++ due to reduced debugging time and simpler maintenance. Then again, I think Java people make a similar claim. As for Python, my experience with it is that, as your application grows, you start getting confused about what the argument types are or are supposed to be. That requires the developer to keep much more of the design in his head, and that undesirable. Of course, you can always put the argument types in comments, but that won't be verified by the compiler. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Wednesday, June 5, 2013 4:18:13 PM UTC-7, Michael Torrie wrote: > On 06/05/2013 12:11 AM, Russ P. wrote: > > > But then, what would you expect of a language that allows you to > > > write > > > > > > x = 1 > > > x = "Hello" > > > > > > It's all loosey goosey -- which is fine for many applications but > > > certainly not for critical ones. > > > > This comment shows me that you don't understand the difference between > > names, objects, and variables. May sound like a minor quibble, but > > there're actually major differences between binding names to objects > > (which is what python does) and variables (which is what languages like > > C have). It's very clear Rick does not have an understanding of this > > either. My comment shows you nothing about what I understand about names, objects, and variables. You have chosen to question my understanding apparently because my point bothered you but you don't have a good reply. Then you link me with Rick for good measure. That's two ad hominems in three sentences. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Wednesday, June 5, 2013 7:29:44 PM UTC-7, Chris Angelico wrote: > On Thu, Jun 6, 2013 at 11:56 AM, Steven D'Aprano > > wrote: > > > On Wed, 05 Jun 2013 14:59:31 -0700, Russ P. wrote: > > >> As for Python, my experience with it is that, as > > >> your application grows, you start getting confused about what the > > >> argument types are or are supposed to be. > > > > > > Whereas people never get confused about the arguments in static typed > > > languages? > > > > > > The only difference is whether the compiler tells you that you've passed > > > the wrong type, or your unit test tells you that you've passed the wrong > > > type. What, you don't have unit tests? Then how do you know that the code > > > does the right thing when passed data of the right type? Adding an extra > > > couple of unit tests is not that big a burden. > > > > The valid type(s) for an argument can be divided into two categories: > > Those the compiler can check for, and those the compiler can't check > > for. Some languages have more in the first category than others, but > > what compiler can prove that a string is an > > HTML-special-characters-escaped string? In a very few languages, the > > compiler can insist that an integer be between 7 and 30, but there'll > > always be some things you can't demonstrate with a function signature. > > > > That said, though, I do like being able to make at least *some* > > declaration there. It helps catch certain types of error. I recall reading a few years ago that Guido was thinking about adding optional type annotations. I don't know if that went anywhere or not, but I thought it was a good idea. Eventually I got tired of waiting, and I realized that I just wanted a statically typed language, so I started using one. Steven's view on static vs. dynamic typing are interesting, but I think they are "out of the mainstream," for whatever that's worth. Does that mean he is wrong? I don't know. But I do know that statically typed code just seems to me to fit together tighter and more solidly. Maybe it's a liberal/conservative thing. Do liberals tend to favor dynamic typing? -- http://mail.python.org/mailman/listinfo/python-list
Re: Bools and explicitness [was Re: PyWart: The problem with "print"]
On Thursday, June 6, 2013 2:29:02 AM UTC-7, Steven D'Aprano wrote: > On Thu, 06 Jun 2013 12:29:44 +1000, Chris Angelico wrote: > > > > > On Thu, Jun 6, 2013 at 11:56 AM, Steven D'Aprano > > > wrote: > > >> On Wed, 05 Jun 2013 14:59:31 -0700, Russ P. wrote: > > >>> As for Python, my experience with it is that, as your application > > >>> grows, you start getting confused about what the argument types are or > > >>> are supposed to be. > > >> > > >> Whereas people never get confused about the arguments in static typed > > >> languages? > > >> > > >> The only difference is whether the compiler tells you that you've > > >> passed the wrong type, or your unit test tells you that you've passed > > >> the wrong type. What, you don't have unit tests? Then how do you know > > >> that the code does the right thing when passed data of the right type? > > >> Adding an extra couple of unit tests is not that big a burden. > > > > > > The valid type(s) for an argument can be divided into two categories: > > > Those the compiler can check for, and those the compiler can't check > > > for. Some languages have more in the first category than others, but > > > what compiler can prove that a string is an > > > HTML-special-characters-escaped string? In a very few languages, the > > > compiler can insist that an integer be between 7 and 30, but there'll > > > always be some things you can't demonstrate with a function signature. > > > > > > That said, though, I do like being able to make at least *some* > > > declaration there. It helps catch certain types of error. > > > > *shrug* > > > > I don't terribly miss type declarations. Function argument declarations > > are a bit simpler in Pascal, compared to Python: > > > > > > Function Add(A, B : Integer) : Integer; > > Begin > > Add := A + B; > > End; > > > > > > versus > > > > > > def add(a, b): > > if not (isinstance(a, int) and isinstance(b, int)): > > raise TypeError > > return a + b > Scala also has isInstanceOf[Type] which allows you to do this sort of thing, but of course it would be considered terrible style in Scala. > > > > but not that much simpler. And while Python can trivially support > > multiple types, Pascal cannot. (Some other static typed languages may.) > > > > Whatever benefit there is in declaring the type of a function is lost due > > to the inability to duck-type or program to an interface. There's no type > > that says "any object with a 'next' method", for example. And having to > > declare local variables is a PITA with little benefit. > > > > Give me a language with type inference, and a nice, easy way to keep duck- > > typing, and I'll reconsider. But until then, I don't believe the benefit > > of static types comes even close to paying for the extra effort. Scala has type inference. For example, you can write val x = 1 and the compiler figures out that x is an integer. Scala also has something called structural typing, which I think is more or less equivalent to "duck typing," although I don't think it is used very often. Are you ready to try Scala yet? 8-) -- http://mail.python.org/mailman/listinfo/python-list
Re: Newbie question on python programming
On 07/21/2012 02:30 AM, Ian Kelly wrote: On Fri, Jul 20, 2012 at 5:38 PM, Chris Williams wrote: Hello I hope this is the right newsgroup for this post. I am just starting to learn python programming and it seems very straightforward so far. It seems, however, geared toward doing the sort of programming for terminal output. Is it possible to write the sort of applications you can create in something like c-sharp or visual c++, or should I be looking at some other programming language? I am using ubuntu 12.04. There are plenty of options for GUI programming in Python. Among the most popular are Tkinter, wxPython, PyGTK, and PyQT, all of which are cross-platform and free. Also, since you specifically mention the .NET languages, IronPython runs on .NET and so is able to make full use of the .NET APIs including Windows Forms and WPF. A more comprehensive list can be found at: http://wiki.python.org/moin/GuiProgramming Another platform independent approach is to write the program as a web server, something like this -

def application(environ, start_response):
    start_response("200 OK", [("Content-type", "text/plain")])
    return ["Hello World!"]

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('localhost', 8080, application)
    server.serve_forever()

Run this and then use your browser to connect to localhost:8080 You can then use html features such as forms for input/output. -- http://mail.python.org/mailman/listinfo/python-list
looking for a neat solution to a nested loop problem
consider a nested loop algorithm -

for i in range(100):
    for j in range(100):
        do_something(i,j)

Now, suppose I don't want to use i = 0 and j = 0 as initial values, but some other values i = N and j = M, and I want to iterate through all 10,000 values in sequence - is there a neat python-like way to this? I realize I can do things like use a single variable,

for k in range(10000):

and then derive values for i and j from k, but I'm wondering if there's something less clunky. -- http://mail.python.org/mailman/listinfo/python-list
Re: looking for a neat solution to a nested loop problem
On 08/06/2012 06:18 PM, Nobody wrote: On Mon, 06 Aug 2012 17:52:31 +0200, Tom P wrote: consider a nested loop algorithm -

for i in range(100):
    for j in range(100):
        do_something(i,j)

Now, suppose I don't want to use i = 0 and j = 0 as initial values, but some other values i = N and j = M, and I want to iterate through all 10,000 values in sequence - is there a neat python-like way to this?

for i in range(N,N+100):
    for j in range(M,M+100):
        do_something(i,j)

Or did you mean something else? no, I meant something else .. j runs through range(M, 100) and then range(0,M), and i runs through range(N,100) and then range(0,N) .. apologies if I didn't make that clear enough. Alternatively:

import itertools
for i, j in itertools.product(range(N,N+100),range(M,M+100)):
    do_something(i,j)

This can be preferable to deeply-nested loops. Also: in 2.x, use xrange() in preference to range(). -- http://mail.python.org/mailman/listinfo/python-list
Re: looking for a neat solution to a nested loop problem
On 08/06/2012 06:03 PM, John Gordon wrote: Tom P writes: consider a nested loop algorithm -

for i in range(100):
    for j in range(100):
        do_something(i,j)

Now, suppose I don't want to use i = 0 and j = 0 as initial values, but some other values i = N and j = M, and I want to iterate through all 10,000 values in sequence - is there a neat python-like way to this? I realize I can do things like use a variable, for k in range(10000):, and then derive values for i and j from k, but I'm wondering if there's something less clunky. You could define your own generator function that yields values in whatever order you want:

def my_generator():
    yield 9
    yield 100
    for i in range(200, 250):
        yield i
    yield 5

Thanks, I'll look at that but I think it just moves the clunkiness from one place in the code to another. -- http://mail.python.org/mailman/listinfo/python-list
Re: looking for a neat solution to a nested loop problem
On 08/06/2012 08:29 PM, Grant Edwards wrote: On 2012-08-06, Grant Edwards wrote: On 2012-08-06, Tom P wrote: On 08/06/2012 06:18 PM, Nobody wrote: On Mon, 06 Aug 2012 17:52:31 +0200, Tom P wrote: consider a nested loop algorithm -

for i in range(100):
    for j in range(100):
        do_something(i,j)

Now, suppose I don't want to use i = 0 and j = 0 as initial values, but some other values i = N and j = M, and I want to iterate through all 10,000 values in sequence - is there a neat python-like way to this?

for i in range(N,N+100):
    for j in range(M,M+100):
        do_something(i,j)

Or did you mean something else? no, I meant something else .. j runs through range(M, 100) and then range(0,M), and i runs through range(N,100) and then range(0,N) In 2.x:

for i in range(M,100)+range(0,M):
    for j in range(N,100)+range(0,N):
        do_something(i,j)

Dunno if that still works in 3.x. I doubt it, since I think in 3.x range returns an iterator, not? Indeed it doesn't work in 3.x, but this does:

from itertools import chain
for i in chain(range(M,100),range(0,M)):
    for j in chain(range(N,100),range(0,N)):
        do_something(i,j)

ah, that looks good - I guess it works in 2.x as well? -- http://mail.python.org/mailman/listinfo/python-list
I'm looking for a Junior level Django job (telecommute)
I'm looking for a Junior level Django job (telecommute)

About me:
- less than a year of experience with Python/Django
- Intermediate knowledge of Python/Django
- Experience with Linux
- Experience with the Django ORM
- Passion for developing high-quality software and the Python language
- I am able to use many applications, for example (south, mptt, django-debug-toolbar etc.)
- English: communicative, still learning

I would like to develop my qualifications. I can be reached anytime via email at dreampr...@gmail.com Thank you. -- http://mail.python.org/mailman/listinfo/python-list
Re: Uniquely identifying each & every html template
On 01/21/2013 01:39 PM, Oscar Benjamin wrote: On 21 January 2013 12:06, Ferrous Cranus wrote: On Monday, 21 January 2013 11:31:24 a.m. UTC+2, Chris Angelico wrote: Seriously, you're asking for something that's beyond the power of humans or computers. You want to identify that something's the same file, without tracking the change or having any identifiable tag. That's a fundamentally impossible task. No, it is difficult but not impossible. It just cannot be done by tagging the file by: 1. filename 2. filepath 3. hash (math algorithm producing a string based on the file's contents) We need another way to identify the file WITHOUT using the above attributes. This is a very old problem (still unsolved I believe): http://en.wikipedia.org/wiki/Ship_of_Theseus Oscar That wiki article gives a hint to a possible solution - use a timestamp to determine which key is valid when. -- http://mail.python.org/mailman/listinfo/python-list
Re: are int, float, long, double, side-effects of computer engineering?
On Mar 5, 10:34 pm, Xah Lee wrote: > On Mar 5, 9:26 pm, Tim Roberts wrote: > > > Xah Lee wrote: > > > >some additional info i thought is relevant. > > > >are int, float, long, double, side-effects of computer engineering? > > > Of course they are. Such concepts violate the purity of a computer > > language's abstraction of the underlying hardware. We accept that > > violation because of performance reasons. There are, as you point out, > > languages that do maintain the purity of the abstraction, but that purity > > is ALWAYS at the expense of performance. > > > I would also point out pre-emptively that there is nothing inherently wrong > > with asking us to accept an impure abstraction in exchange for performance. > > It is a performance choice that we choose to make. > > while what you said is true, but the problem is that 99.99% of > programers do NOT know this. They do not know Mathematica. They've > never seen a language with such feature. The concept is alien. This is > what i'd like to point out and spread awareness. I seriously doubt that. I think most decent programmers are well aware of the limitations of floating point math. If properly used, double- precision arithmetic is more than adequate for the vast majority of practical scientific and engineering problems. > also, argument about raw speed and fine control vs automatic > management, rots with time. Happened with auto memory management, > managed code, compilers, auto type conversion, auto extension of > array, auto type system, dynamic/scripting languages, etc. First of all, "dynamic/scripting languages" are still a long, long way from being "fast enough" for computationally intensive applications. Secondly, nothing is stopping anyone from writing a library to implement rational numbers or infinite-precision arithmetic in python (or just about any other language). They are just not needed for most applications. -- http://mail.python.org/mailman/listinfo/python-list
Re: are int, float, long, double, side-effects of computer engineering?
On Mar 6, 7:25 pm, rusi wrote: > On Mar 6, 6:11 am, Xah Lee wrote: > > > some additional info i thought is relevant. > > > are int, float, long, double, side-effects of computer engineering? > > It is a bit naive for computer scientists to club integers and reals > as mathematicians do given that for real numbers, even equality is > undecidable! > Mostly when a system like mathematica talks of real numbers it means > computable real numbers which is a subset of mathematical real numbers > (and of course a superset of floats) > > Seehttp://en.wikipedia.org/wiki/Computable_number#Can_computable_numbers... I might add that Mathematica is designed mainly for symbolic computation, whereas IEEE floating point numbers are intended for numerical computation. Those are two very different endeavors. I played with Mathematica a bit several years ago, and I know it can do numerical computation too. I wonder if it resorts to IEEE floating point numbers when it does. -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy (matrix solver) - python vs. matlab
On Apr 29, 5:17 pm, someone wrote: > On 04/30/2012 12:39 AM, Kiuhnm wrote: > > >> So Matlab at least warns about "Matrix is close to singular or badly > >> scaled", which python (and I guess most other languages) does not... > > > A is not just close to singular: it's singular! > > Ok. When do you define it to be singular, btw? > > >> Which is the most accurate/best, even for such a bad matrix? Is it > >> possible to say something about that? Looks like python has a lot more > >> digits but maybe that's just a random result... I mean Element 1,1 = > >> 2.81e14 in Python, but something like 3e14 in Matlab and so forth - > >> there's a small difference in the results... > > > Both results are *wrong*: no inverse exists. > > What's the best solution of the two wrong ones? Best least-squares > solution or whatever? > > >> With python, I would also kindly ask about how to avoid this problem in > >> the future, I mean, this maybe means that I have to check the condition > >> number at all times before doing anything at all ? How to do that? > > > If cond(A) is high, you're trying to solve your problem the wrong way. > > So you're saying that in another language (python) I should check the > condition number, before solving anything? > > > You should try to avoid matrix inversion altogether if that's the case. > > For instance you shouldn't invert a matrix just to solve a linear system. > > What then? > > Cramer's rule? If you really want to know just about everything there is to know about a matrix, take a look at its Singular Value Decomposition (SVD). I've never used numpy, but I assume it can compute an SVD. -- http://mail.python.org/mailman/listinfo/python-list
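It can; numpy.linalg.svd returns the factors directly. A minimal sketch (the matrix is just an illustration, not data from the thread):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
print s                       # singular values, largest first
print s[0] / s[-1]            # ratio of largest to smallest = 2-norm condition number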
Re: numpy (matrix solver) - python vs. matlab
On May 1, 11:52 am, someone wrote: > On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote: > > > On 04/29/2012 07:59 PM, someone wrote: > > I do not use python much myself, but a quick google showed that pyhton > > scipy has API for linalg, so use, which is from the documentation, the > > following code example > > > X = scipy.linalg.solve(A, B) > > > But you still need to check the cond(). If it is too large, not good. > > How large and all that, depends on the problem itself. But the rule of > > thumb, the lower the better. Less than 100 can be good in general, but I > > really can't give you a fixed number to use, as I am not an expert in > > this subjects, others who know more about it might have better > > recommendations. > > Ok, that's a number... > > Anyone wants to participate and do I hear something better than "less > than 100 can be good in general" ? > > If I don't hear anything better, the limit is now 100... > > What's the limit in matlab (on the condition number of the matrices), by > the way, before it comes up with a warning ??? The threshold of acceptability really depends on the problem you are trying to solve. I haven't solved linear equations for a long time, but off hand, I would say that a condition number over 10 is questionable. A high condition number suggests that the selection of independent variables for the linear function you are trying to fit is not quite right. For a poorly conditioned matrix, your modeling function will be very sensitive to measurement noise and other sources of error, if applicable. If the condition number is 100, then any input on one particular axis gets magnified 100 times more than other inputs. Unless your inputs are very precise, that is probably not what you want. Or something like that. -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy (matrix solver) - python vs. matlab
On May 1, 4:05 pm, Paul Rubin wrote: > someone writes: > > Actually I know some... I just didn't think so much about, before > > writing the question this as I should, I know theres also something > > like singular value decomposition that I think can help solve > > otherwise illposed problems, > > You will probably get better advice if you are able to describe what > problem (ill-posed or otherwise) you are actually trying to solve. SVD > just separates out the orthogonal and scaling parts of the > transformation induced by a matrix. Whether that is of any use to you > is unclear since you don't say what you're trying to do. I agree with the first sentence, but I take slight issue with the word "just" in the second. The "orthogonal" part of the transformation is non-distorting, but the "scaling" part essentially distorts the space. At least that's how I think about it. The larger the ratio between the largest and smallest singular value, the more distortion there is. SVD may or may not be the best choice for the final algorithm, but it is useful for visualizing the transformation you are applying. It can provide clues about the quality of the selection of independent variables, state variables, or inputs. -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy (matrix solver) - python vs. matlab
On May 1, 11:03 pm, someone wrote: > On 05/02/2012 01:38 AM, Russ P. wrote: > > > > > > > > > > > On May 1, 4:05 pm, Paul Rubin wrote: > >> someone writes: > >>> Actually I know some... I just didn't think so much about, before > >>> writing the question this as I should, I know theres also something > >>> like singular value decomposition that I think can help solve > >>> otherwise illposed problems, > > >> You will probably get better advice if you are able to describe what > >> problem (ill-posed or otherwise) you are actually trying to solve. SVD > >> just separates out the orthogonal and scaling parts of the > >> transformation induced by a matrix. Whether that is of any use to you > >> is unclear since you don't say what you're trying to do. > > > I agree with the first sentence, but I take slight issue with the word > > "just" in the second. The "orthogonal" part of the transformation is > > non-distorting, but the "scaling" part essentially distorts the space. > > At least that's how I think about it. The larger the ratio between the > > largest and smallest singular value, the more distortion there is. SVD > > may or may not be the best choice for the final algorithm, but it is > > useful for visualizing the transformation you are applying. It can > > provide clues about the quality of the selection of independent > > variables, state variables, or inputs. > > Me would like to hear more! :-) > > It would really appreciate if anyone could maybe post a simple SVD > example and tell what the vectors from the SVD represents geometrically > / visually, because I don't understand it good enough and I'm sure it's > very important, when it comes to solving matrix systems... SVD is perhaps the ultimate matrix decomposition and the ultimate tool for linear analysis. Google it and take a look at the excellent Wikipedia page on it. I would be wasting my time if I tried to compete with that. To really appreciate the SVD, you need some background in linear algebra. In particular, you need to understand orthogonal transformations. Think about a standard 3D Cartesian coordinate frame. A rotation of the coordinate frame is an orthogonal transformation of coordinates. The original frame and the new frame are both orthogonal. A vector in one frame is converted to the other frame by multiplying by an orthogonal matrix. The main feature of an orthogonal matrix is that its transpose is its inverse (hence the inverse is trivial to compute). The SVD can be thought of as factoring any linear transformation into a rotation, then a scaling, followed by another rotation. The scaling is represented by the middle matrix of the transformation, which is a diagonal matrix of the same dimensions as the original matrix. The singular values can be read off of the diagonal. If any of them are zero, then the original matrix is singular. If the ratio of the largest to smallest singular value is large, then the original matrix is said to be poorly conditioned. Standard Cartesian coordinate frames are orthogonal. Imagine an x-y coordinate frame in which the axes are not orthogonal. Such a coordinate frame is possible, but they are rarely used. If the axes are parallel, the coordinate frame will be singular and will basically reduce to one-dimensional. If the x and y axes are nearly parallel, the coordinate frame could still be used in theory, but it will be poorly conditioned. 
You will need large numbers to represent points fairly close to the origin, and small deviations will translate into large changes in coordinate values. That can lead to problems due to numerical roundoff errors and other kinds of errors. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
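A minimal numpy sketch of the rotation-scaling-rotation picture described above (illustrative only, not code from the thread; assumes numpy is installed):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)
# U and Vt are orthogonal (their transposes are their inverses);
# s holds the singular values, i.e. the scalings along the principal axes.
print(s)              # singular values, largest first
print(s[0] / s[-1])   # condition number: ratio of largest to smallest
# Reassembling U * diag(s) * Vt recovers the original matrix.
print(np.dot(U, np.dot(np.diag(s), Vt)))

A large ratio s[0]/s[-1] is exactly the "distortion" discussed earlier; a zero singular value would make the matrix singular.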
Re: numpy (matrix solver) - python vs. matlab
On May 2, 1:29 pm, someone wrote: > > If your data starts off with only 1 or 2 digits of accuracy, as in your > > example, then the result is meaningless -- the accuracy will be 2-2 > > digits, or 0 -- *no* digits in the answer can be trusted to be accurate. > > I just solved a FEM eigenvalue problem where the condition number of the > mass and stiffness matrices was something like 1e6... Result looked good > to me... So I don't understand what you're saying about 10 = 1 or 2 > digits. I think my problem was accurate enough, though I don't know what > error with 1e6 in condition number, I should expect. How did you arrive > at 1 or 2 digits for cond(A)=10, if I may ask ? As Steven pointed out earlier, it all depends on the precision you are dealing with. If you are just doing pure mathematical or numerical work with no real-world measurement error, then a condition number of 1e6 may be fine. But you had better be using "double precision" (64-bit) floating point numbers (which are the default in Python, of course). Those have approximately 15 or 16 significant digits of precision, so you are in good shape. Single-precision floats only have 6 or 7 digits of precision, so you'd be in trouble there. For any practical engineering or scientific work, I'd say that a condition number of 1e6 is very likely to be completely unacceptable. -- http://mail.python.org/mailman/listinfo/python-list
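A small numeric illustration of the rough rule of thumb implicit in this exchange, namely that about log10 of the condition number decimal digits are lost (a minimal sketch, assuming numpy is available; the matrix is made up for the example):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.000001]])   # rows are nearly parallel, so A is ill-conditioned
cond = np.linalg.cond(A)
print(cond)             # on the order of 4e6
print(np.log10(cond))   # roughly 6.6 digits of accuracy lost
# With ~16 significant digits in float64 that still leaves ~9 good digits;
# with the ~7 digits of float32 the answer would be essentially noise.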
Re: numpy (matrix solver) - python vs. matlab
On May 3, 10:30 am, someone wrote: > On 05/02/2012 11:45 PM, Russ P. wrote: > > > > > On May 2, 1:29 pm, someone wrote: > > >>> If your data starts off with only 1 or 2 digits of accuracy, as in your > >>> example, then the result is meaningless -- the accuracy will be 2-2 > >>> digits, or 0 -- *no* digits in the answer can be trusted to be accurate. > > >> I just solved a FEM eigenvalue problem where the condition number of the > >> mass and stiffness matrices was something like 1e6... Result looked good > >> to me... So I don't understand what you're saying about 10 = 1 or 2 > >> digits. I think my problem was accurate enough, though I don't know what > >> error with 1e6 in condition number, I should expect. How did you arrive > >> at 1 or 2 digits for cond(A)=10, if I may ask ? > > > As Steven pointed out earlier, it all depends on the precision you are > > dealing with. If you are just doing pure mathematical or numerical > > work with no real-world measurement error, then a condition number of > > 1e6 may be fine. But you had better be using "double precision" (64- > > bit) floating point numbers (which are the default in Python, of > > course). Those have approximately 12 digits of precision, so you are > > in good shape. Single-precision floats only have 6 or 7 digits of > > precision, so you'd be in trouble there. > > > For any practical engineering or scientific work, I'd say that a > > condition number of 1e6 is very likely to be completely unacceptable. > > So how do you explain that the natural frequencies from FEM (with > condition number ~1e6) generally correlates really good with real > measurements (within approx. 5%), at least for the first 3-4 natural > frequencies? > > I would say that the problem lies with the highest natural frequencies, > they for sure cannot be verified - there's too little energy in them. > But the lowest frequencies (the most important ones) are good, I think - > even for high cond number. Did you mention earlier what "FEM" stands for? If so, I missed it. Is it finite-element modeling? Whatever the case, note that I said, "If you are just doing pure mathematical or numerical work with no real- world measurement error, then a condition number of 1e6 may be fine." I forgot much more than I know about finite-element modeling, but isn't it a purely numerical method of analysis? If that is the case, then my comment above is relevant. By the way, I didn't mean to patronize you with my earlier explanation of orthogonal transformations. They are fundamental to understanding the SVD, and I thought it might be interesting to anyone who is not familiar with the concept. -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy (matrix solver) - python vs. matlab
Yeah, I realized that I should rephrase my previous statement to something like this: For any *empirical* engineering or scientific work, I'd say that a condition number of 1e6 is likely to be unacceptable. I'd put finite elements into the category of theoretical and numerical rather than empirical. Still, a condition number of 1e6 would bother me, but maybe that's just me. --Russ P. On May 3, 3:21 pm, someone wrote: > On 05/03/2012 07:55 PM, Russ P. wrote: > > > > > On May 3, 10:30 am, someone wrote: > >> On 05/02/2012 11:45 PM, Russ P. wrote: > >>> For any practical engineering or scientific work, I'd say that a > >>> condition number of 1e6 is very likely to be completely unacceptable. > > >> So how do you explain that the natural frequencies from FEM (with > >> condition number ~1e6) generally correlates really good with real > >> measurements (within approx. 5%), at least for the first 3-4 natural > >> frequencies? > > >> I would say that the problem lies with the highest natural frequencies, > >> they for sure cannot be verified - there's too little energy in them. > >> But the lowest frequencies (the most important ones) are good, I think - > >> even for high cond number. > > > Did you mention earlier what "FEM" stands for? If so, I missed it. Is > > it finite-element modeling? Whatever the case, note that I said, "If > > Sorry, yes: Finite Element Model. > > > you are just doing pure mathematical or numerical work with no real- > > world measurement error, then a condition number of > > 1e6 may be fine." I forgot much more than I know about finite-element > > modeling, but isn't it a purely numerical method of analysis? If that > > I'm not sure exactly, what is the definition of a purely numerical > method of analysis? I would guess that the answer is yes, it's a purely > numerical method? But I also thing it's a practical engineering or > scientific work... > > > is the case, then my comment above is relevant. > > Uh, I just don't understand the difference: > > 1) "For any practical engineering or scientific work, I'd say that a > condition number of 1e6 is very likely to be completely unacceptable." > > vs. > > 2) "If you are just doing pure mathematical or numerical work with no > real-world measurement error, then a condition number of, 1e6 may be fine." > > I would think that FEM is a practical engineering work and also pure > numerical work... Or something... > > > By the way, I didn't mean to patronize you with my earlier explanation > > of orthogonal transformations. They are fundamental to understanding > > the SVD, and I thought it might be interesting to anyone who is not > > familiar with the concept. > > Don't worry, I think it was really good and I don't think anyone > patronized me, on the contrary, people was/is very helpful. SVD isn't my > strongest side and maybe I should've thought a bit more about this > singular matrix and perhaps realized what some people here already > explained, a bit earlier (maybe before I asked). Anyway, it's been good > to hear/read what you've (and others) have written. > > Yesterday and earlier today I was at work during the day so > answering/replying took a bit longer than I like, considering the huge > flow of posts in the matlab group. But now I'm home most of the time, > for the next 3 days and will check for followup posts quite frequent, I > think... -- http://mail.python.org/mailman/listinfo/python-list
Re: numpy (matrix solver) - python vs. matlab
On May 3, 4:59 pm, someone wrote: > On 05/04/2012 12:58 AM, Russ P. wrote: > > > Yeah, I realized that I should rephrase my previous statement to > > something like this: > > > For any *empirical* engineering or scientific work, I'd say that a > > condition number of 1e6 is likely to be unacceptable. > > Still, I don't understand it. Do you have an example of this kind of > work, if it's not FEM? > > > I'd put finite elements into the category of theoretical and numerical > > rather than empirical. Still, a condition number of 1e6 would bother > > me, but maybe that's just me. > > Ok, but I just don't understand what's in the "empirical" category, sorry... I didn't look it up, but as far as I know, empirical just means based on experiment, which means based on measured data. Unless I am mistaken , a finite element analysis is not based on measured data. Yes, the results can be *compared* with measured data and perhaps calibrated with measured data, but those are not the same thing. I agree with Steven D's comment above, and I will reiterate that a condition number of 1e6 would not inspire confidence in me. If I had a condition number like that, I would look for a better model. But that's just a gut reaction, not a hard scientific rule. -- http://mail.python.org/mailman/listinfo/python-list
killing a script
I have a Python (2.6.x) script on Linux that loops through many directories and does processing for each. That processing includes several "os.system" calls for each directory (some to other Python scripts, others to bash scripts). Occasionally something goes wrong, and the top-level script just keeps running with a stack dump for each case. When I see that, I want to just kill the whole thing and fix the bug. However, when I hit Control-C, it apparently just kills whichever script happens to be running at that instant, and the top level script just moves to the next line and keeps running. If I hit Control-C repeatedly, I eventually get "lucky" and kill the top-level script. Is there a simple way to ensure that the first Control-C will kill the whole darn thing, i.e., the top-level script? Thanks. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
Re: killing a script
On Aug 28, 6:52 pm, MRAB wrote: > On 29/08/2011 02:15, Russ P. wrote:> I have a Python (2.6.x) script on Linux > that loops through many > > directories and does processing for each. That processing includes > > several "os.system" calls for each directory (some to other Python > > scripts, others to bash scripts). > > > Occasionally something goes wrong, and the top-level script just keeps > > running with a stack dump for each case. When I see that, I want to > > just kill the whole thing and fix the bug. However, when I hit Control- > > C, it apparently just just kills whichever script happens to be > > running at that instant, and the top level script just moves to the > > next line and keeps running. If I hit Control-C repeatedly, I > > eventually get "lucky" and kill the top-level script. Is there a > > simple way to ensure that the first Control-C will kill the whole darn > > thing, i.e, the top-level script? Thanks. > > You could look at the return value of os.system, which may tell you the > exit status of the process. Thanks for the suggestion. Yeah, I guess I could do that, but it seems that there should be a simpler way to just kill the "whole enchilada." Hitting Control-C over and over is a bit like whacking moles. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
Re: killing a script
On Aug 28, 7:51 pm, Chris Angelico wrote: > On Mon, Aug 29, 2011 at 12:41 PM, Russ P. wrote: > > On Aug 28, 6:52 pm, MRAB wrote: > >> You could look at the return value of os.system, which may tell you the > >> exit status of the process. > > > Thanks for the suggestion. Yeah, I guess I could do that, but it seems > > that there should be a simpler way to just kill the "whole enchilada." > > Hitting Control-C over and over is a bit like whacking moles. > > I believe the idea of this suggestion is for the outer script to > notice that the inner script terminated via Ctrl-C, and would then > immediately choose to terminate itself - thus avoiding the > whack-a-mole effect. > > ChrisA Yes, but if I am not mistaken, that will require me to put a line or two after each os.system call. That's almost like whack-a-mole at the code level rather than the Control-C level. OK, not a huge deal for one script, but I was hoping for something simpler. I was hoping I could put one line at the top of the script and be done with it. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
Re: killing a script
On Aug 28, 8:16 pm, Chris Rebert wrote: > On Sun, Aug 28, 2011 at 8:08 PM, Russ P. wrote: > > On Aug 28, 7:51 pm, Chris Angelico wrote: > >> On Mon, Aug 29, 2011 at 12:41 PM, Russ P. wrote: > >> > On Aug 28, 6:52 pm, MRAB wrote: > >> >> You could look at the return value of os.system, which may tell you the > >> >> exit status of the process. > > >> > Thanks for the suggestion. Yeah, I guess I could do that, but it seems > >> > that there should be a simpler way to just kill the "whole enchilada." > >> > Hitting Control-C over and over is a bit like whacking moles. > > >> I believe the idea of this suggestion is for the outer script to > >> notice that the inner script terminated via Ctrl-C, and would then > >> immediately choose to terminate itself - thus avoiding the > >> whack-a-mole effect. > > >> ChrisA > > > Yes, but if I am not mistaken, that will require me to put a line or > > two after each os.system call. > > Er, just write a wrapper for os.system(), e.g.: > > def mysystem(cmd): > if os.system(cmd): > sys.exit() > > Also, you may want to switch to using the `subprocess` module instead. > > Cheers, > Chris Sounds like a good idea. I'll give it a try. Thanks. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
Re: killing a script
On Aug 28, 8:16 pm, Chris Rebert wrote: > On Sun, Aug 28, 2011 at 8:08 PM, Russ P. wrote: > > On Aug 28, 7:51 pm, Chris Angelico wrote: > >> On Mon, Aug 29, 2011 at 12:41 PM, Russ P. wrote: > >> > On Aug 28, 6:52 pm, MRAB wrote: > >> >> You could look at the return value of os.system, which may tell you the > >> >> exit status of the process. > > >> > Thanks for the suggestion. Yeah, I guess I could do that, but it seems > >> > that there should be a simpler way to just kill the "whole enchilada." > >> > Hitting Control-C over and over is a bit like whacking moles. > > >> I believe the idea of this suggestion is for the outer script to > >> notice that the inner script terminated via Ctrl-C, and would then > >> immediately choose to terminate itself - thus avoiding the > >> whack-a-mole effect. > > >> ChrisA > > > Yes, but if I am not mistaken, that will require me to put a line or > > two after each os.system call. > > Er, just write a wrapper for os.system(), e.g.: > > def mysystem(cmd): > if os.system(cmd): > sys.exit() > > Also, you may want to switch to using the `subprocess` module instead. > > Cheers, > Chris I ended up with this: def systemx(cmd): if system(cmd): exit("\nERROR: " + cmd + " failed\n") This is good enough for my purposes in this case. Thanks for all the suggestions. --Russ P. -- http://mail.python.org/mailman/listinfo/python-list
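A slightly extended sketch of the same wrapper idea, so the outer script also stops when the child was killed by Ctrl-C rather than by an ordinary failure (Unix-specific assumption: os.system returns the raw wait status there, which fits the Linux setup in this thread):

import os
import signal
import sys

def systemx(cmd):
    status = os.system(cmd)
    if os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGINT:
        sys.exit("\nInterrupted with Ctrl-C: " + cmd + "\n")
    if status != 0:
        sys.exit("\nERROR: " + cmd + " failed\n")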
Re: .Well, ok, I will try some of that. But I am running window 7, not Linux.
I did a little writeup for setting PyVISA up in Windows. It's not exactly polished, but it can get you through the difficult bits. If you need any additional help, leave comments/questions on my blog. http://psonghi.wordpress.com/2011/03/29/pyvisa-setup-in-windows/ > On Friday, April 01, 2011 11:29 AM Manatee wrote: > I have unpacked the PyVISA files into the Python/lib/site-packages dir > and from the IDLE GUI I get and error > > import visa > > Traceback (most recent call last): > File "", line 1, in > import visa > ImportError: No module named visa > > > > There must be more to just putting the files in the correct directory. > Need help configuring PyVISA to work. > My ultimate goal is to control electronic instruments with Python > through visa. >> On Friday, April 01, 2011 2:05 PM Günther Dietrich wrote: >> Yes, there is more: >> >> - DON'T unpack the files into any site-packages folder. If you already >> have done it, remove them. >> - Unpack the PyVISA archive to any other folder. >> - On the command line, change into the PyVISA folder. There you should >> find - among others - the two files setup.py and setup.cfg (at least >> if you use PyVISA-1.3.tar.gz). >> - Now, it depends on what variant of python you use and want to install >> PyVISA for and on the configuration of your PYTHONPATH rsp. sys.path >> and the folders they point to. >> You can simply try: 'sudo python ./setup install' >> If you are lucky, that is it. If not, you have to decide, where the >> installation script has to put the files to. For example, for my >> python 2.6, I chose >> '/Library/Frameworks/Python.framework/Versions/2.6/'. In this path, >> there is a folder 'lib/site-packages', which is pointed to by >> sys.path, and where .pth files are evaluated. >> - Edit the file setup.cfg. Near the end, in section '[install]', you will >> find the line 'prefix=/usr'. Replace the '/usr' by your chosen path. >> - Save the file and retry the install (see above). >> >> >> >> Best regards, >> >> Günther >>> On Friday, April 01, 2011 3:40 PM Manatee wrote: >>> . >>> >>> Well, ok, I will try some of that. But I am running window 7, not Linux. >>> The "sudo" command sounds like Linux. -- http://mail.python.org/mailman/listinfo/python-list
Single line if statement with a continue
I occasionally run across something like:

for idx, thing in enumerate(things):
    if idx == 103:
        continue
    do_something_with(thing)

It seems more succinct and cleaner to use: if idx == 103: continue. Of course this would be considered an anti-pattern, and Flake8 will complain. Any opinions or feedback on the matter? -- https://mail.python.org/mailman/listinfo/python-list
How to fix PyV8 linux setup error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Hello, I am currently installing Pyv8 and other requirements for me to run a honeypot. I downloaded pyv8 from source and using v8 (version 5.5) - built it with depot_tools. I already exported the V8_HOME path. But I still have this error whenever I run 'python setup.py build' of pyv8. Also, I am using Python 2.7. Here is the output I get: running build running build_py running build_ext building '_PyV8' extension x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict- prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DBOOST_PYTHON_STATIC_LIB -DV8_NATIVE_REGEXP -DENABLE_DISASSEMBLER -DENABLE_LOGGING_AND_PROFILING -DENABLE_DEBUGGER_SUPPORT -DV8_TARGET_ARCH_X64 -I/home/patricia/Thesis/v8/include -I/home/patricia/Thesis/v8 -I/home/patricia/Thesis/v8/src -I/usr/include/python2.7 -c src/Exception.cpp -o build/temp.linux-x86_64-2.7/src/Exception.o cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ In file included from /usr/include/c++/5/unordered_set:35:0, from /home/patricia/Thesis/v8/src/heap/spaces.h:10, from /home/patricia/Thesis/v8/src/heap/mark-compact.h:12, from /home/patricia/Thesis/v8/src/heap/incremental-marking.h:12, from /home/patricia/Thesis/v8/src/heap/incremental-marking-inl.h:8, from /home/patricia/Thesis/v8/src/heap/heap-inl.h:13, from /home/patricia/Thesis/v8/src/objects-inl.h:24, from /home/patricia/Thesis/v8/src/arguments.h:9, from /home/patricia/Thesis/v8/src/debug/debug.h:9, from /usr/include/c++/5/bits/stl_iterator_base_funcs.h:65, from /usr/include/c++/5/bits/stl_algobase.h:66, from /usr/include/c++/5/bits/char_traits.h:39, from /usr/include/c++/5/string:40, from /usr/include/c++/5/stdexcept:39, from src/Exception.h:4, from src/Exception.cpp:1: /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options. #error This file requires compiler and library support \ ^ src/Exception.cpp:18:25: fatal error: src/natives.h: No such file or directory #include "src/natives.h" ^ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Well, before that, it repeatedly asks for exporting V8 to V8_HOME and building it. I do it repeatedly just to move on from it, and now, I get this. Also, I already searched in Ubuntu Packages Content Search (http://tiny.cc/snyrey) about /usr/include/c++/5/bits/stl_iterator_base_funcs.h, .../stl_algobase.h, .../char_traits.h, etc. That all led me to installing libstdc++-5-dev via apt-get install libstdc++-5-dev But still. I am getting the error. Any help will be appreciated. :) -- https://mail.python.org/mailman/listinfo/python-list
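The second error (missing src/natives.h) looks like a V8 version mismatch and is not addressed here, but the first error is just the compiler defaulting to pre-C++11. As a hedged sketch of the general distutils mechanism (this is not PyV8's actual setup.py; the source list is a placeholder), -std=c++11 can be forced through extra_compile_args:

from distutils.core import setup
from distutils.extension import Extension

ext = Extension(
    '_PyV8',
    sources=['src/Exception.cpp'],        # placeholder; the real setup lists many sources
    extra_compile_args=['-std=c++11'],    # satisfies the c++0x_warning.h requirement
)

setup(name='PyV8', ext_modules=[ext])

Exporting CFLAGS='-std=c++11' before running 'python setup.py build' often has the same effect without editing any file.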
Python Access logging of another program ran in subprocess
Hello, I'm currently running another Python program (prog2.py) in my program via subprocess.

input_args = ['python', '/path/to/prog2.py'] + self.chosen_args
file = open("logfile.txt", 'w')
self.process = Popen((input_args), stdout=file)

However, the logs that prog2.py produces still show up at the terminal rather than being captured on stdout, and logfile.txt stays empty. Is there a way I can access those logs while in prog1.py? I can't modify prog2.py since it is not mine. I'm stuck here for days now, any tips or help will do. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python Access logging of another program ran in subprocess
On Monday, June 27, 2016 at 1:36:24 AM UTC+8, MRAB wrote: > > > The output you're seeing might be going to stderr, not stdout. Wow, huhuhu. Thank you. I did not know that. Thanks man! -- https://mail.python.org/mailman/listinfo/python-list
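For reference, a minimal sketch of capturing both streams (assumes the standard subprocess module; the path is the placeholder from the original post):

import subprocess

input_args = ['python', '/path/to/prog2.py']
with open("logfile.txt", "w") as logfile:
    process = subprocess.Popen(input_args,
                               stdout=logfile,
                               stderr=subprocess.STDOUT)  # fold stderr into the same log file
    process.wait()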
python 3.7.2
Hi, I’m having problems installing and using python as it defaults into [ ...users/ user/appdata/local/programs/] etc etc, it’s about 9 locations in all but there is no route to ‘app data’, the trail is lost at this point. It’s such an obscure location and I cannot find it anywhere on windows, even though it is working okay, I’m worried I will lose files. I cannot transfer files saved on python 3.4 to the new python 3.7.2 as I cannot find the new version on my computer. In the installer it defaults to this location and there is no option to install it anywhere else (greyed out box). It says I need write permissions to change the location but no more information about how to do this. Also, I thought it would be an add-on, not a whole new download. What happens if I delete python 3.4 after I have downloaded 3.7.2? Will I lose all my existing code samples? I’m a student programmer and I can’t understand the highly technical, expert level documentation that might rectify the problem and there is only a small amount of information anyway. I just want to download Python to the C drive on windows, it should be really easy... Hope you can help Sarah Padden -- https://mail.python.org/mailman/listinfo/python-list
Re: Is it possible to open a dbf
> Paul Rubin wrote: > > > John Fabiani <[EMAIL PROTECTED]> writes: > >> I'm wondering if there is a module available that will open a dbf > > > So far (more than a minute) I have discovered a reader only. So if you have > a URL or a search string it would be very helpful. > TIA > John Yes, "dBase Python" yields only some code for reading dBase ... and lots of enquiries about such a thing... Miklós --- Jegenye 2001 Bt. Egyedi szoftverkészítés, tanácsadás | Custom software development, consulting Magyarul: http://jegenye2001.parkhosting.com In English: http://jegenye2001.parkhosting.com/en -- http://mail.python.org/mailman/listinfo/python-list
Re: Python! Is! Truly! Amazing!
"Erik Bethke" <[EMAIL PROTECTED]> wrote: > Hello Everyone, > > I have to say: > > Python! Is! Truly! Amazing! > > So I started with python about a month ago and put in 24 hours across > three weekends. ... > > Truly thank you. > > -Erik > I enjoyed to read about your enthusiasm about Python you have recently discovered. :) After checking out your personal and business site, I'm sure you'll be even more pleased when Python will be an essential component in your business and, say, GoPets take on a bit of snake-like inner working. :-) Best, Miklós --- Jegenye 2001 Bt. Egyedi szoftverkészítés, tanácsadás | Custom software development, consulting Magyarul: http://jegenye2001.parkhosting.com In English: http://jegenye2001.parkhosting.com/en -- http://mail.python.org/mailman/listinfo/python-list
Guild of Python consultants?
Hello freelancers out there, Is there such a thing somewhere? Yes, I'm aware of the Python Business Forum. But I mean something specifically for (individual) consultants. By searching on Google, I couldn't find a "virtual guild" of consultants who try to make a living from Python and technologies built around it (like Zope, Plone, etc.) I'm thinking of providing mutual marketing help like a webring, passing info about projects and clients, discussing business related issues and technical issues in deploying Python in the enterprise, etc. Any thoughts? Best, Miklós --- Jegenye 2001 Bt. Egyedi szoftverkészítés, tanácsadás | Custom software development, consulting Magyarul: http://jegenye2001.parkhosting.com In English: http://jegenye2001.parkhosting.com/en -- http://mail.python.org/mailman/listinfo/python-list
Re: Guild of Python consultants?
"Peter Hansen" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I also know of many people (myself included) who restrict the term > to those who have a deep expertise in one or more areas and who > look for projects where they can be brought in to apply that > expertise, usually by telling the customer what to do (or what > not to do any more, perhaps). This sort of work can be hourly, > or quite often daily or even at a fixed price (say, for specialized > "emergency" troubleshooting, or for a design task). > > (The second type will take on jobs that the former would take, > often grudgingly if work is scarce, while the former are rarely > qualified to take on the sort of work that interests the latter.) > > Which type do you mean? Well, I consider myself a consultant of the second kind. Using Python for maths/stats (often meaning decision support systems) or science related custom software is what I prefer to do .. But I also tend to enjoy "firefighting" web projects because they pay well and they are never boring.. rather on the contrary.. :-) ...and I second what you said in parenthesis. So my interest follows. Though one could surely have two divisions of a huge guild. :-)) Best, Miklós -- http://mail.python.org/mailman/listinfo/python-list
PythonWin (build 203) for Python 2.3 causes Windows 2000 to grind to a halt?
I've been having a problem with PythonWin that seemed to start completely spontaneously and I don't even know where to START to find the answer. The only thing I can think of that marks the point between "PythonWin works fine" and "PythonWin hardly ever works fine" was that I changed the size of my Virtual Paging file, noticing that it was too small (I currently have a P4 with 1G of RAM). I tried returning it to its original (smaller) size, but it didn't fix the problems. The first time I noticed it, I was using PythonWin and then right-clicked on My Computer to use "Explore". Instead of the usual full listing (approx 10 items), I got a mini-listing of 4 items. Then, after clicking "Explore", I either don't get a new window at all OR I get a strange file explorer that won't let me look at files, won't let me copy files, etc. The "mini-listing" thing also happens if I click the "Start" button while PythonWin is open. Another problem is trying to open another program while PythonWin is running - generally, the program will not start, but I also don't get any kind of error popping up on the screen. My request is just ignored (although I sometimes get a "system beep".) If I already have other programs open and then open PythonWin, my menu bar might refuse to function. Is it significant that, when the menu bar IS working, the drop-down menu fades in quite slowly, instead of popping up immediately? At the end of this message, I've pasted a screen dump of a message I get when I try to open a file and I've got other apps open (note that I can have very few, non-memory intensive apps open and I still get it). Thanks for any help you can give, - Chris

[SCREEN DUMP AFTER I TRY TO OPEN A .PY FILE]
File "C:\Python23\Lib\site-packages\pythonwin\pywin\mfc\docview.py", line 91, in CreateNewFrame
wnd.LoadFrame(self.GetResourceID(), -1, None, context) # triggers OnCreateClient...
win32ui: LoadFrame failed
win32ui: CreateNewFrame() virtual handler (>) raised an exception
TypeError: PyCTemplate::CreateNewFrame must return a PyCFrameWnd object.
-- http://mail.python.org/mailman/listinfo/python-list
Re: PythonWin (build 203) for Python 2.3 causes Windows 2000 to grind to a halt?
AWESOME - my life just got THAT much better. The bug you suggested is exactly the problem that I was having... I had looked through the bugs being tracked, but the title of that one didn't jump out at me as something that would help. Thanks! - Chris P.S. For anyone reading this group who wants to know exactly what I did: 1) Uninstall Pywin32 2) Open the registry editor ('regedit' at the command prompt) 3) Go to HKEY_CURRENT_USER\Software\Python[version]\Python for Win32 You will likely find many many many keys that have the format "ToolbarDefault-Bar#". These keys filling up your registry cause Windows 2000 to become extremely slow/unstable when Python is running (especially if your debugger is running.) 4) Delete the keys... I just deleted it at the "Python[version]" root 5) Reinstall Pywin32 "Roger Upole" <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>... > These look like symptoms of sf bug #1017504 > http://sourceforge.net/tracker/index.php?func=detail&aid=1017504&group_id=78018&atid=551954 > > What version of Pywin32 are you running ? > There's a (semi) fix for this in the latest build. >hth > Roger -- http://mail.python.org/mailman/listinfo/python-list
Re: UML to Python/Java code generation
"Magnus Lycka" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > James wrote: > > The brain may be fine for generating Python from UML but it is MANY > > MANY orders of magnitude harder to generate UML from code with just > > your brain than using a tool (usually zero effort and error free) no > > matter how good you are at Python. > > I've really only used Rational Rose, but that tool is > really limited in both directions. Class structure to > class diagrams work both ways (not for Python, but for > Java and C++ at least) but that's just a tiny part of > UML. > > I don't expect any tool to do anything meaningful about e.g. > use case diagrams, but what about activity diagrams, sequence > diagrams, collaboration diagrams and state diagrams? > > I agree that the brain is poor at such things, but I doubt > that any current tool is much better except for trivial > subsets of UML. Have a look at Enterprise Architect from Sparx Systems (http://www.sparxsystems.com.au). It is a really nice UML tool that is also affordable. I have also used Rational Rose since about 96-97, and although it doesn't have Python support for code generation, it most likely isn't a huge job to modify one of the existing code generation templates (say the Java one) to generate Python from the class diagrams. Enterprise Architect has a separate free downloadble plug-in which can generate Python. See: http://sparxsystems.com.au/resources/mdg_tech/ Enterprise Architect is also much cheaper than Rose, and they can both do C++ and Java. Also, both of these tools have VBA/COM interfaces. What this means is that you can control them and read data directly from them using the win32 python extension models. I haven't tried it with EA, but a couple of years ago I wrote Python scripts that controlled Rose. I did stuff like creating UML diagrams on the fly including changing colours and shapes, and also reading data from UML models. The Rose Extensibility Interface (REI) is very well documented. One could potentially write Python scripts coupled with some Python introspection/reflection code to automatically generate and reverse engineer (and document) Python <-> UML. It was always something I wanted to do but never got around to it. Mike -- http://mail.python.org/mailman/listinfo/python-list
Re: map vs. list-comprehension
"Björn Lindström" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > "F. Petitjean" <[EMAIL PROTECTED]> writes: > > > res = [ bb+ii*dd for bb,ii,dd in zip(b,i,d) ] > > > > Hoping that zip will not be deprecated. > > Nobody has suggested that. The ones that are planned to be removed are > lambda, reduce, filter and map. Here's GvR's blog posting that explains > the reasons: > > http://www.artima.com/weblogs/viewpost.jsp?thread=98196 > That really sucks, I wasn't aware of these plans. Ok, I don't use reduce much, but I use lambda, map and filter all the time. These are some of the features of Python that I love the best. I can get some pretty compact and easy to read code with them. And no, I'm not a Lisp programmer (never programmed in Lisp). My background being largely C++, I discovered lambda, apply, map and filter in Python, although I had seen similar stuff in other functional languages like Miranda and Haskell. Also, I don't necessarily think list comprehensions are necessarily easier to read. I don't use them all that much to be honest. IMHO I'm not particularly happy with the way Python is going language wise. I mean, I don't think I'll ever use decorators, for example. Personally, in terms of language features and capabilities I think the language is fine. Not much (if anything needs to be added). I think at this stage the Python community and Python programmers would be better served by building a better, more standardised, cross platform, more robust, better documented, and more extensive standard library. I would be happy to contribute in this regard, rather than having debates about the addition and removal of language features which don't improve my productivity. I would love it if modules like PyOpenGL, PyOSG (Open Scene Graph), PyQt, a graph library etc, were all part of the standard python library, and that worked out of the box on all major platforms -Windows, Unix, Linux, Mac. All these modules which are C/C++ based are all at different versions and at different stages, requiring different versions of Python working on different operating systems. It's not as transparent as it should be. For example, why aren't PIL, Numeric and a host of other fairly mainstream Python modules not part of the standard library? Compare that with the huge SDK that comes with Java. Then there is always issues of performance, better standard development tools, better documentation. There are lots of things to do, to make the Python programmers life better without touching the actual features of the language. Sorry, I've probably gone way off topic, and probably stirred up political issues which I'm not aware of, but, man when I hear stuff like the proposed removal of reduce, lambda, filter and map, all I see ahead of me is a waste of time as a programmer. I don't program in Python for it's own sake. I program in Python because it lets me get my job done quicker and it saves me time. The proposed removals are going to waste my time. Why? Because my team and myself are going to have to go through all our code and change stuff like maps to ugly looking list comprehensions or whatever when Python 3000 comes out. Sure some of you will say you don't have to update, just stick with Python 2.3/2.4 or whatever. 
That is fine in theory, but in practice I'm going to have to use some third party module which will require Python 3000 (this happened to me recently with a module which had a serious bug with the Python 2.3 version, but worked with the Python 2.4 version - I had to upgrade every single third party module I was using - I was lucky the ones I was using had 2.4 versions, but there are still a lot of modules out there that don't). Sorry for the OT long rant. Mike -- http://mail.python.org/mailman/listinfo/python-list
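For anyone following the debate, the two styles being argued over look like this (a trivial illustration, not code from the thread):

numbers = range(10)

# map/filter/lambda style
squares_of_evens = map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers))

# list-comprehension style
squares_of_evens = [n * n for n in numbers if n % 2 == 0]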
XML Pickle with PyGraphLib - Problems
Hi all, I'm working on a simulation (can be considered a game) in Python where I want to be able to dump the simulation state to a file and be able to load it up later. I have used the standard Python pickle module and it works fine pickling/unpickling from files. However, I want to be able to use a third party tool like an XML editor (or other custom tool) to setup the initial state of the simulation, so I have been playing around with the gnosis set of tools written by David Mertz. In particular I have been using the gnosis.xml.pickle module. Anyway, this seems to work fine for most data structures, however I have problems getting it to work with pygraphlib version 0.6.0.1 (from sourceforge). My simulation needs to store data in a set of graphs, and pygraphlib seems like a nice simple python graph library, that I found easy to use. Anyway, the problem is that I can successfully pickle to XML a data structure containing a pygraphlib graph, but I have trouble reloading/unpickling the very same data structure that was produced by the XML pickle module. I get an XMLUnpicklingError. Anyone have any ideas? Here's some output from the interactive prompt of a small example which demonstrates the error: == C:\home\src>python ActivePython 2.4 Build 244 (ActiveState Corp.) based on Python 2.4 (#60, Feb 9 2005, 19:03:27) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from pygraphlib import pygraph >>> graph = pygraph.DGraph() >>> graph.add_edge('Melbourne', 'Sydney') >>> graph.add_edge('Melbourne', 'Brisbane') >>> graph.add_edge('Melbourne', 'Adelaide') >>> graph.add_edge('Adelaide', 'Perth') >>> print graph DGraph: 5 nodes, 4 edges >>> graph DGraph: 5 nodes, 4 edges Melbourne -> ['Sydney', 'Brisbane', 'Adelaide'] Brisbane -> [] Perth -> [] Sydney -> [] Adelaide -> ['Perth'] >>> import gnosis.xml.pickle >>> file = open('graph.xml', 'w') >>> gnosis.xml.pickle.dump(graph, file) >>> file.close() >>> f2 = open('graph.xml', 'r') >>> g2 = gnosis.xml.pickle.load(f2) Traceback (most recent call last): File "", line 1, in ? File "C:\Python24\Lib\site-packages\gnosis\xml\pickle\_pickle.py", line 152, in load return parser(fh, paranoia=paranoia) File "C:\Python24\lib\site-packages\gnosis\xml\pickle\parsers\_dom.py", line 42, in thing_from_dom return _thing_from_dom(minidom.parse(fh),None,paranoia) File "C:\Python24\lib\site-packages\gnosis\xml\pickle\parsers\_dom.py", line 175, in _thing_from_dom container = unpickle_instance(node, paranoia) File "C:\Python24\lib\site-packages\gnosis\xml\pickle\parsers\_dom.py", line 59, in unpickle_instance raw = _thing_from_dom(node, _EmptyClass(), paranoia) File "C:\Python24\lib\site-packages\gnosis\xml\pickle\parsers\_dom.py", line 234, in _thing_from_dom node_val = unpickle_instance(node, paranoia) File "C:\Python24\lib\site-packages\gnosis\xml\pickle\parsers\_dom.py", line 95, in unpickle_instance raise XMLUnpicklingError, \ gnosis.xml.pickle.XMLUnpicklingError: Non-DictType without setstate violates pickle protocol.(PARANOIA setting may be too high) >>> === I find it strange that: a) the standard pickle/unpickle works b) the xml pickle dumps the file but the xml unpickle can't load it. I'm guessing the problem lies somewhere with implementing __setstate__ and __getstate__ for pygraphlib (I don't know much about this - haven't used them before). 
However, I am a bit reluctant to go in and start playing around with the internals pygraphlib, as I was hoping to just use it, and ignore the internal implementation (as you would with any library). Funny how the standard pickle module doesn't need to do this. Another thing I tried was the generic xml marshaller (xml.marshal.generic) that comes with PyXML 0.8.4 (for Python 2.4 on windows). This also fails but for different reasons. The marshaller doesn't support the boolean and set types which are part of Python 2.4 and are used in pygraphlib. I get errors of the form: AttributeError: Marshaller instance has no attribute 'tag_bool' AttributeError: Marshaller instance has no attribute 'm_Set' Again strange given that bool and sets should be supported in Python 2.4. Anyway, back to my question - does anyone have any suggestions as to where I could proceed next? I was hoping to be able to XML Pickle/Unpickle Python data structures containing graphs from pygraphlib fairly easily without having to stuff around in the internals of third party libraries. It would be nice if I could just concentrate on my application logic :-) Any ideas? Cheers. Mike P. -- http://mail.python.org/mailman/listinfo/python-list
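For reference, the __getstate__/__setstate__ hooks being guessed at above look roughly like this (a generic sketch with a hypothetical class, not pygraphlib's actual internals):

class RouteMap(object):              # hypothetical stand-in for a graph class
    def __init__(self):
        self.edges = {}

    def __getstate__(self):
        # Hand the pickler a plain dict it knows how to serialize.
        return {'edges': self.edges}

    def __setstate__(self, state):
        # Rebuild the instance from that dict when unpickling.
        self.edges = state['edges']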
Re: How to upgrade to 2.4.1 on Mac OS X tiger
On Fri, 9 Sep 2005 13:55:03 -0700, Trent Mick wrote: > [Mike Meyer wrote] >> stri ker <[EMAIL PROTECTED]> writes: >>> Has anyone here upgraded from 2.3 to 2.4 on Tiger? >>> If so how'd ya do it? >> >> You don't. You install 2.4 in parallel with 2.3. You can do pretty >> much whatever you want with /usr/bin/python, /usr/local/bin/python, >> etc. - Tiger doesn't seem to use those. I don't remember if I replaced >> one or not, but don't touch anything else about the 2.3 installtion. >> >> I installed the darwinports version of 2.4, and have been using it >> ever since for all my stuff. > > There are also the following install options: > > - ActivePython: > http://www.activestate.com/Products/ActivePython/ > (disclaimer: I make this distro) > > - MacPython: > http://undefined.org/python/#python > by Bob Ippolito > > - fink (similar in spirit to the darwinports project) also has a Python > I believe > > > Trent I just got a Mac and was wondering the same thing as the original poster - how to move to 2.4, but I found out there was more than one version. So in addition to the Apple installation of 2.3, there are 4 versions of Python 2.4 (ActivePython, MacPython, fink, darwinports). Which one should I go for? What are other people using (i.e. which is the most popular version)? Any particular advantages/disadvantages for each version? Cheers. Mike -- http://mail.python.org/mailman/listinfo/python-list
xhtml-print parser
My question is: I have parsed the XHTML data stream using C. I need to display the content at the command prompt, presenting the data from the webpage as links. How can it be done? -- http://mail.python.org/mailman/listinfo/python-list
Text To Speech with pyTTS
Hi, I was wondering if anyone has had any luck with the python text to speech (pyTTS) module available on Sourceforge: http://sourceforge.net/projects/uncassist I have followed the tutorial for pyTTS at: http://www.cs.unc.edu/~parente/tech/tr02.shtml Using the first simple speech example: import pyTTS tts = pyTTS.Create() tts.Speak("Hello World!") I get the following error on the call to pyTTS.Create() C:\Program Files\Python23\Lib\site-packages\pyTTS>python ActivePython 2.3.2 Build 232 (ActiveState Corp.) based on Python 2.3.2 (#49, Nov 13 2003, 10:34:54) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import pyTTS >>> tts = pyTTS.Create() Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\Lib\site-packages\pyTTS\__init__.py", line 28, in Create raise ValueError('"%s" not supported' % api) ValueError: "SAPI" not supported >>> I followed the instructions in the tutorial in order and installed the required packages in the following order, given that I already had an ActiveState Python 2.3 installation under Windows XP. 1) wxPython2.5-win32-unicode-2.5.3.1-py23.exe (didn't already have this and some of the pyTTS demos need it) 2) Microsoft SAPI 5.1 (SAPI5SpeechInstaller.msi) 3) Extra Microsoft Voices (SAPI5VoiceInstaller.msi) 4) pyTTS-3.0.win32-py2.3.exe (pyTTS for Python 2.3 under windows) I ran the example and it didn't work. I didn't initially install Mark Hammond's Python win32all extensions, because they already come with ActiveState Python. So I tried installing the win32all (win32all-163.exe) package just in case, but I still get the SAPI not supported error. Anyone get this working - any suggestions? Or am I missing something obvious? Thanks In Advance. Mike P. -- http://mail.python.org/mailman/listinfo/python-list
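One way to check whether SAPI 5 itself is installed correctly, independently of pyTTS, is to drive it directly over COM with the win32 extensions (a diagnostic sketch, assuming the standard SAPI 5 automation interface):

import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Speak("Hello World!")

# List the installed voices
voices = voice.GetVoices()
for i in range(voices.Count):
    print(voices.Item(i).GetDescription())

If this works, SAPI and the voices are fine and the problem is in how pyTTS is locating them.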
Re: wxPython vs. pyQt
"RM" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Of course, the licensing terms may still be too restrictive for those > that want to create comercial closed source applications and can't > afford the comercial license of Qt. That is why, for many, wxPython > will remain the preferred choice. > > Being that you are inclined use Xemacs and xterm for your development, > I don't think you will have too much trouble with either one. > Currently, I think the choice between Qt and wx boils down to this: > > Type of app - Choice - Reason > > GPL or Company use only app - Qt - It is easier, cleaner, etc. > Commercial Closed Source - Qt - Don't mind the license cost. > Any type - wx - It is free. > > Other (lesser, I think) considerations, however, may bee the appearance > of the app. On linux/KDE you may prefer the Qt native look. On > Linux/GNOME you may prefer wx's GTK native look. On Windows, wx is > completely native, while I can't speak for Qt's look since I have never > seen it, but I know it is not completely native looking. > I use Qt under Windows and the look and feel is completely native. The best thing to do is to judge for yourself. Trolltech's website has example screenshots of Qt applications under X11, Windows and Mac OS X. For example 3rd party apps, look at: http://www.trolltech.com/products/hotnew/index.html and http://www.trolltech.com/success/index.html For Trolltech's Qt tools, look at: http://www.trolltech.com/screenshots/tools.html Personally for ease of use Qt is the way to go. I'm really looking forward to Qt 4 coming out, and then sometime later PyQt 4. Then I'll be able to develop with my favourite APIs (Qt, OpenGL, and OpenSceneGraph) under a completely Python environment - heaven from a development perspective. Mike. -- http://mail.python.org/mailman/listinfo/python-list
Rounding the elements of a Python array (numarray module)
Hi. I have a very simple task to perform and I'm having a hard time doing it. Given an array called 'x' (created using the numarray library), is there a single command that rounds each of its elements to the nearest integer? I've already tried something like

>>> x_rounded = x.astype(numarray.Int)

but that only truncates each element (i.e. '5.9' becomes '5'). I've read over all the relevant numarray documentation, and it mentions a bunch of Ufuncs that are available - there's 'floor' and 'ceil', but no 'round'. And when I try using the built-in function 'round' on an array like so

>>> x_rounded = round(x)

I get "TypeError: Only rank-0 numarray can be cast to floats." So I created a bad function that uses nested loops, but my arrays are 600x600x3 elements (I'm doing some image processing after converting a PIL Image to a numarray array) and it takes a REALLY long time. Has anyone else had to do this, or knows of any tricks? Any help is appreciated. - Chris P.S. Here's my "bad function"

# This code assumes that 'the_array' is a 3-dimensional array.
def ArrayRound(the_array):
    result = numarray.zeros(the_array.shape)
    if len(the_array.shape) == 3:
        (depths, rows, columns) = the_array.shape
        for depth in range(depths):
            for row in range(rows):
                for column in range(columns):
                    result[depth][row][column] = round(the_array[depth][row][column])
    return result

-- http://mail.python.org/mailman/listinfo/python-list
Re: Help With Hiring Python Developers
"fuego" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > My company (http://primedia.com/divisions/businessinformation/) has > two job openings that we're having a heckuva time filling. We've > posted at Monster, Dice, jobs.perl.org and python.jobmart.com. Can > anyone advise other job boards that might be helpful? Also, feel free .. > Thanks in advance! Hm, you're looking for *Manhattan locals* who are required to have Perl skills (i.e. magically working, hardly readable line noise :D ) but optionally they may have Python skills, too (i.e. magically working, easily readable pseudo code :D ) How about adding the additional requirement of COBOL and that the applicant must live in a particular street? :-) Miklós -- http://mail.python.org/mailman/listinfo/python-list
Re: Need help on program!!!
"Darth Haggis" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I need help writing a program > > You are to write a python program to accomplish the following: > ... > > > Haggis > Seems very much like homework to me... ;) And that's something you are supposed to do on your own.. M. -- http://mail.python.org/mailman/listinfo/python-list
Re: Rounding the elements of a Python array (numarray module)
So I wasn't sure if no one replied because a) this question was too dumb or b) this question was too hard... it was definitely the former. But I'll post the answer, anyway: I forgot to keep in mind - when reading the documentation, assume that a

>>> from numarray import *

occurred first. So here's the way to do it:

>>> import numarray
>>> a = numarray.array([1.6, 1.9, 2.1])
>>> rounded_a = numarray.around(a)

and rounded_a then equals ([2., 2., 2.]) - Chris [EMAIL PROTECTED] (Chris P.) wrote in message news:<[EMAIL PROTECTED]>... > Given an array called 'x' (created using the numarray library), is > there a single command that rounds each of its elements to the nearest > integer? -- http://mail.python.org/mailman/listinfo/python-list
Re: Need help on program!!!
"Dan Perl" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > > "Hopefully" for whom? For us, who may have to work with him someday or use > a product that he developed? Most of us here have been students (or still > are) and may sympathize with the OP, but personally I have been also a TA so > I have seen the other side and that's where my sympathy lies now. > > Dan > To me, it's my benignity towards the OP rather than my sympathy to the TAs... That's why I find the greedy guy's offer immoral. Best regards, Miklós -- http://mail.python.org/mailman/listinfo/python-list
Re: Need help on program!!!
"Brad Tilley" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > elif first in (2,3,7): > > I think you mean 'elif first in (2,3,12):' > Seems you've grown out of school.. So why make the little rascal even lazier? :-\ M. -- http://mail.python.org/mailman/listinfo/python-list
Re: Need help on program!!!
"Dan Perl" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > > > I'm not sure what you mean by "benignity" here, but I think I agree with > you. Sympathy for TAs is not really my reason for how I feel towards I meant that I think the real (or long term) interest of the OP is to *learn* things as opposed to (the short term interest in) submitting that homework without his devoting any effort. > reason to give him the solution). I guess that is not the best argument I > could have made. > > Dan > Yes, I agree and I had thought you actually meant it similarly like I do. Best regards, Miklós -- http://mail.python.org/mailman/listinfo/python-list
numpy masked array puzzle
I have two numpy arrays, xx and yy - (Pdb) xx array([0.7820524520874, masked, masked, 0.3700476837158, 0.7252384185791, 0.6002384185791, 0.6908474121094, 0.7878760223389, 0.6512288818359, 0.1110143051147, masked, 0.716205039978, 0.5460381469727, 0.4358950958252, 0.63868808746337891, 0.02700700354576, masked, masked], dtype=object) (Pdb) yy array([-0.015120843222826226, -0.0080196081193390761, 0.02241851002495138, -0.021720756657755306, -0.0095334465407607427, -0.0063953867288363917, -0.013363615476044387, 0.0080645889792231359, -0.0056745213729654086, -0.0071783823973457523, -0.0019400978318164389, -0.0038670581150256019, 0.0048961156278229494, -0.01315129469368378, -0.007727079344820257, -0.0042560259937610449, 0.0063857167196111056, 0.0024528141737232877], dtype=object) (Pdb) -- which gives a strange error - stats.mstats.linregress(x, y) *** AttributeError: 'int' object has no attribute 'view' (Pdb) What is stranger I can't get the mask - (Pdb) np.ma.getmaskarray(xx) array([False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], dtype=bool) (Pdb) -- https://mail.python.org/mailman/listinfo/python-list
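A guess, with a hedged sketch: those look like dtype=object arrays that merely contain the masked constant, rather than genuine MaskedArrays, which would explain both the all-False mask and the odd failure inside mstats. Rebuilding the data as float masked arrays (the values below are shortened stand-ins for the real ones) behaves as expected:

import numpy as np
from scipy import stats

data = np.array([0.782, np.nan, np.nan, 0.370, 0.725])
xx = np.ma.masked_invalid(data)            # NaNs become masked entries
yy = np.ma.array([-0.015, -0.008, 0.022, -0.021, -0.009], mask=xx.mask)

print(np.ma.getmaskarray(xx))              # now shows True where masked
print(stats.mstats.linregress(xx, yy))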
problem with dateutil
I am writing a program that has to deal with various date/time formats and convert these into timestamps. It looks as if dateutil.parser.parse should be able to handle about any format, but what I get is: datetimestr = '2012-10-22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2012, 10, 22, 11, 22, 33) However: datetimestr = '2012:10:22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2016, 2, 13, 11, 22, 33) In other words, it's getting the date wrong when colons are used to separate :MM:DD. Is there a way to include this as a valid format? -- https://mail.python.org/mailman/listinfo/python-list
Re: problem with dateutil
On 02/13/2016 07:13 PM, Gary Herron wrote: On 02/13/2016 09:58 AM, Tom P wrote: I am writing a program that has to deal with various date/time formats and convert these into timestamps. It looks as if dateutil.parser.parse should be able to handle about any format, but what I get is: datetimestr = '2012-10-22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2012, 10, 22, 11, 22, 33) However: datetimestr = '2012:10:22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2016, 2, 13, 11, 22, 33) In other words, it's getting the date wrong when colons are used to separate :MM:DD. Is there a way to include this as a valid format? Yes, there is a way to specify your own format. Search the datetime documentation for datetime.strptime(date_string, format) Gary Herron Thanks. I started out with datetime.strptime but AFAICS that means I have to go through try/except for every conceivable format. Are you saying that I can't use dateutil.parser? -- https://mail.python.org/mailman/listinfo/python-list
Re: problem with dateutil
On 02/13/2016 09:45 PM, Gary Herron wrote: On 02/13/2016 12:27 PM, Tom P wrote: On 02/13/2016 07:13 PM, Gary Herron wrote: On 02/13/2016 09:58 AM, Tom P wrote: I am writing a program that has to deal with various date/time formats and convert these into timestamps. It looks as if dateutil.parser.parse should be able to handle about any format, but what I get is: datetimestr = '2012-10-22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2012, 10, 22, 11, 22, 33) However: datetimestr = '2012:10:22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2016, 2, 13, 11, 22, 33) In other words, it's getting the date wrong when colons are used to separate :MM:DD. Is there a way to include this as a valid format? Yes, there is a way to specify your own format. Search the datetime documentation for datetime.strptime(date_string, format) Gary Herron Thanks. I started out with datetime.strptime but AFAICS that means I have to go through try/except for every conceivable format. Are you saying that I can't use dateutil.parser? Well now... If by "every conceivable format" you are including formats that the author of dateutil.parser did not conceive of, then of course you cannot use dateutil.parser. But you have the code for dateutil.parser -- perhaps you could modify it to accept whatever odd formats you care about. Gary Herron I had a look at the code for dateutil.parser. Have you looked at it? Meanwhile I'm living with try: dt = datetime.datetime.strptime(datetimestr, '%Y:%m:%d %H:%M:%S') except ValueError: dt = dateutil.parser.parse(datetimestr) unixtime = time.mktime(dt.timetuple()) -- https://mail.python.org/mailman/listinfo/python-list
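A sketch of generalizing that fallback to a list of known formats before handing the string to dateutil's guesser (the format list here is illustrative):

import datetime
import dateutil.parser

KNOWN_FORMATS = ('%Y:%m:%d %H:%M:%S', '%Y-%m-%d %H:%M:%S')

def parse_datetime(datetimestr):
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.datetime.strptime(datetimestr, fmt)
        except ValueError:
            pass
    # Last resort: let dateutil guess.
    return dateutil.parser.parse(datetimestr)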
Re: problem with dateutil
On 02/13/2016 10:01 PM, Mark Lawrence wrote: On 13/02/2016 17:58, Tom P wrote: I am writing a program that has to deal with various date/time formats and convert these into timestamps. It looks as if dateutil.parser.parse should be able to handle about any format, but what I get is: datetimestr = '2012-10-22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2012, 10, 22, 11, 22, 33) However: datetimestr = '2012:10:22 11:22:33' print(dateutil.parser.parse(datetimestr)) result: datetime.datetime(2016, 2, 13, 11, 22, 33) In other words, it's getting the date wrong when colons are used to separate :MM:DD. Is there a way to include this as a valid format? From http://labix.org/python-dateutil#head-a23e8ae0a661d77b89dfb3476f85b26f0b30349c parserinfo This parameter allows one to change how the string is parsed, by using a different parserinfo class instance. Using it you may, for example, intenationalize the parser strings, or make it ignore additional words. HTH. Thanks, let me look at that. -- https://mail.python.org/mailman/listinfo/python-list
Re: Help
On 02/29/2016 01:53 PM, tomwilliamson...@gmail.com wrote:
Thanks. If a word appears more than once, how would I bring back both locations?

for i, word in enumerate(words):
    . . . .
-- https://mail.python.org/mailman/listinfo/python-list
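To spell that hint out: collect every index where the word matches instead of stopping at the first hit. A minimal sketch with made-up data:

words = "the quick brown fox jumps over the lazy dog the end".split()
target = "the"

# enumerate() pairs each word with its position in the list.
positions = [i for i, word in enumerate(words) if word == target]
print(positions)   # prints [0, 6, 9]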
Getting a stable virtual env
Hi ppl, I'm trying to figure out the whole virtualenv story. Right now I'm using it to create an environment for our upcoming Debian upgrade to squeeze. I'm doing some tests in our current distribution (Python 2.5). I have come to realize that a lot of packages, in the versions I'm interested in, are not available anymore on PyPI, and the pip installer fails a lot. Squeeze features Python 2.7, so I'm pretty sure that everything will work fine; I've actually tested it. The question is, how do I protect myself from future package removal? Do I absolutely need to run a local PyPI server (I've seen some Python packages for doing this) and mirror all the packages I'm interested in? cheers, JM -- https://mail.python.org/mailman/listinfo/python-list
Re: Convert numpy array to single number
On 28.04.2014 15:04, mboyd02...@gmail.com wrote:
I have a numpy array consisting of 1s and 0s for representing binary numbers, e.g.
>>> binary
array([ 1., 0., 1., 0.])
I wish the array to be in the form 1010, so it can be manipulated. I do not want to use built-in binary converters as I am trying to build my own.

Do you mean that each element in the array represents a power of two? So array([ 1., 0., 1., 0.]) represents 2^3 + 2^1 = 10 decimal?
-- https://mail.python.org/mailman/listinfo/python-list
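If that is the intended reading, one way to hand-roll the conversion without any built-in converters is to accumulate the value bit by bit. A minimal sketch, assuming the first element is the most significant bit:

import numpy as np

binary = np.array([1., 0., 1., 0.])

value = 0
for bit in binary:
    value = value * 2 + int(bit)   # shift what we have so far, then add this bit

print(value)   # prints 10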
A Pragmatic Case for Static Typing
I just stumbled across this video and found it interesting: http://vimeo.com/72870631 My apologies if it has been posted here already. -- http://mail.python.org/mailman/listinfo/python-list
Re: A Pragmatic Case for Static Typing
On Monday, September 2, 2013 1:10:34 AM UTC-7, Paul Rubin wrote: > "Russ P." writes: > > > I just stumbled across this video and found it interesting: > > > http://vimeo.com/72870631 > > > My apologies if it has been posted here already. > > > > The slides for it are here, so I didn't bother watching the 1 hour video: > > > > http://gbaz.github.io/slides/hurt-statictyping-07-2013.pdf > > > > I guess for Python programmers looking to expand their horizons a bit, > > it's worth at least looking at the slides. But, it may overstate its > > case a little bit. Haskell's type system is way cool but the language > > introduces other headaches into programming. I thought the video was amusing, but I am probably easily amused. I noticed that he did not list my current main language, Scala, as statically typed. I am not sure why, but perhaps because it inherits null from Java. In any case, his main point was that static typing reduces time to working code. I have no doubt that is true for large-scale programming, but I doubt it is true for small-scale programming. The question is where the crossover point is. -- http://mail.python.org/mailman/listinfo/python-list
Re: Can I trust downloading Python?
On 10.09.2013 11:45, Oscar Benjamin wrote:
On 10 September 2013 01:06, Steven D'Aprano wrote:
On Mon, 09 Sep 2013 12:19:11 +, Fattburger wrote:
But really, we've learned *nothing* from the viruses of the 1990s. Remember when we used to talk about how crazy it was to download code from untrusted sites on the Internet and execute it? We're still doing it, a hundred times a day. Every time you go on the Internet, you download other people's code and execute it. Javascript, Flash, HTML5, PDF are all either executable, or they include executable components. Now they're *supposed* to be sandboxed, but we've gone from "don't execute untrusted code" to "let's hope my browser doesn't have any bugs that the untrusted code might exploit".

You could have also mentioned pip/PyPI in that. 'pip install X' downloads and runs arbitrary code from a largely unmonitored and uncontrolled code repository. The maintainers of PyPI can only try to ensure that the original author of X would remain in control of what happens and could remove a package X if it were discovered to be malware. However they don't have anything like the resources to monitor all the code coming in, so it's essentially a system based on trust in the authors, where the only requirement to be an author is that you have an email address. Occasionally I see the suggestion to do 'sudo pip install X', which literally gives root permissions to arbitrary code coming straight from the net.
Oscar

Interesting observation.
-- https://mail.python.org/mailman/listinfo/python-list
Python PDB conditional breakpoint
I can't get conditional breakpoints to work. I have a variable ID and I want to set a breakpoint which runs until ID==11005. Here's what happens -

-> import sys
...
(Pdb) b 53, ID==11005
Breakpoint 1 at /home/tom/Desktop/BEST Tmax/MYSTUFF/sqlanalyze3.py:53
(Pdb) b
Num Type         Disp Enb   Where
1   breakpoint   keep yes   at /home/tom/Desktop/BEST Tmax/MYSTUFF/sqlanalyze3.py:53
        stop only if ID==11005
(Pdb) l
 50     for ID in distinct_IDs:
 51         cursr.execute("select Date, Temperature from data where StationID = ? and Date > 1963", ID)
 52         datarecords = cursr.fetchall() # [(date, temp),..]
 53 B->     ll = len(datarecords)
 54         if ll > 150: # and len(results) < 100 :
(Pdb) c
...
std_error too large -132.433 61.967 10912
std_error too large -133.36274 62.2165 10925
std_error too large -137.37 62.82 10946
std_error too large -138.217 64.45 10990
std_error too large -138.32 65.35 11005
std_error too large -138.32 65.35 11005
std_error too large -138.32 65.35 11005
std_error too large -138.32 65.35 11005
std_error too large -134.86625 67.415 11036
std_error too large -135.0 68.22908 11053
...

- in other words it doesn't stop even though the value ID == 11005 shows up. Am I specifying the condition incorrectly? This is Python 2.7.4, Linux 64 bit.
-- https://mail.python.org/mailman/listinfo/python-list
[solved]Re: Python PDB conditional breakpoint
On 06.11.2013 16:14, Tom P wrote:
OK, I figured it out. ID is a tuple, not a simple variable. The correct test is ID[0]==11005

I can't get conditional breakpoints to work. I have a variable ID and I want to set a breakpoint which runs until ID==11005. Here's what happens -
[snip]
-- https://mail.python.org/mailman/listinfo/python-list
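For anyone hitting the same thing: the rows the cursor returns are tuples even when only one column is selected, so the original condition compared a tuple to an int and was never true. A minimal sketch with made-up station IDs:

distinct_IDs = [(10990,), (11005,)]   # fetchall()-style rows: one-element tuples

for ID in distinct_IDs:
    print(ID == 11005)      # False both times: tuple compared to int
    print(ID[0] == 11005)   # True for the second row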
I want to know about the Python language
h -- https://mail.python.org/mailman/listinfo/python-list