Re: Cross Compiler for Python?

2008-07-09 Thread Hendrik van Rooyen
 "norseman"  wrote:


8< -

> dreaded  Yep! I know the feeling. Got lots of those T-Shirts. ;)
> 
> 
> I re-read your original post. I got the feeling the eBox is running a
> stripped down Linux. Is that so?   If it is, then:

Correct - a kernel, and busybox masquerading as the whole of GNU...

> 
> You mention pcmcia. Is it external, a plug in?  Do any of your
> desktops/etc have pcmcia slots? Because if so.

It's part of the eBox - quite a neat implementation

> 
> 1) Card can be mounted on your machine
> 2) Compile and install can be direct to card (rem 32bit output)
> just change the install path(s). Look over the eBox /lib
> ( I'm assuming Linux again)  and put that lib path first in
> the compile command line.
> 
> If no desktop pcmcia adapter - pay the $40 or so for an USB attachable
> pcmcia adapter

This is sane - I would need it if ever we get to using the thing, for loading
software onto the devices anyway.

> 
> sample:
> plug in card
> 
> (if current drives on desktop are scsi or sata drives you may need to
> change sda to sdb or sdc or   use fdisk /dev/sda to check existence.
> the single letter 'q' to exit without changing anything. Careful with
> fdisk - damage comes quickly, easily and unrepairable. All you are after 
> is which letter to use. Last one before No such drive is the one.
> )

should be able to look in /dev - but I will be careful...

Also Suse and GUI should just pop up, like a USB stick does.

> 
> cd /mnt
> mkdir sda
> mount /dev/sda1 /mnt/sda
> ls /mnt/sda     should give base dir listing of flashdrive
> (linux will treat the pcmcia flash as a removable HardDrive)
> 
> install to /mnt/sda/usr/local/lib  (or /mnt/sda/--where ever--)
> then: back on eBox
>cd /mnt
>mkdir sda
>cd sda
>ln -s /usr  usr
> 
> I did the soft link process above on my machine, then:
> 
> ls /mnt/sda/usr/local/lib/python2.5   should list contents correctly
> (and does on my machine since it is located in /usr/local/lib/python2.5) 
> thus keeping compile and install paths working.  The use of the midnight 
> commander (mc) with card mounted makes transferring whole trees simple.
> Soft links were invented for a reason!
> 
> OF COURSE, if the eBox is not running Linux, all this is useless! ;)
> 
> Steve

Thanks a lot - this seems the way to go...

- Hendrik

--
http://mail.python.org/mailman/listinfo/python-list


Regular Expressions Quick Question

2008-07-09 Thread Lamonte Harris
Alright, basically I have a list of words in a file and I load each word
from each line into the array.  Then basically the question is how do I
check if the input word matches multiple words in the list.

Say someone input "test", how could I check if that word matches these list
of words:

test
testing
tested

Out of the list of

Hello
blah
example
test
ested
tested
testing

I want it to loop then check if the input word I used starts any of the
words in the list so if I typed 'tes'

Then:

test
testing
testing

would be appended to a new array.

I'm unsure how to do this in python.
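
In other words, something along the lines of this sketch (variable names made up):

words = ['Hello', 'blah', 'example', 'test', 'ested', 'tested', 'testing']
typed = 'tes'
matches = [word for word in words if word.startswith(typed)]
print matches    # ['test', 'tested', 'testing']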

Thanks in advance.
--
http://mail.python.org/mailman/listinfo/python-list

Re: Newbie question

2008-07-09 Thread |e0
So, i can't use wmi module on linux?

On Wed, Jul 9, 2008 at 9:14 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:
> I think the win32 module is only for windows.
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Regular Expressions Quick Question

2008-07-09 Thread Rajanikanth Jammalamadaka
hi!

Try this:

>>> import re
>>> lis=['t','tes','test','testing']
>>> [elem for elem in lis if re.compile("^te").search(elem)]
['tes', 'test', 'testing']

Cheers,

Raj

On Wed, Jul 9, 2008 at 12:13 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:
> Alright, basically I have a list of words in a file and I load each word
> from each line into the array.  Then basically the question is how do I
> check if the input word matches multiple words in the list.
>
> Say someone input "test", how could I check if that word matches these list
> of words:
>
> test
> testing
> tested
>
> Out of the list of
>
> Hello
> blah
> example
> test
> ested
> tested
> testing
>
> I want it to loop then check if the input word I used starts any of the
> words in the list so if I typed 'tes'
>
> Then:
>
> test
> testing
> testing
>
> would be appended to a new array.
>
> I'm unsure how to do this in python.
>
> Thanks in advance.
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
"For him who has conquered the mind, the mind is the best of friends;
but for one who has failed to do so, his very mind will be the
greatest enemy."

Rajanikanth
--
http://mail.python.org/mailman/listinfo/python-list


Re: ActiveState Code: the new Python Cookbook site

2008-07-09 Thread Stef Mientki

hi Mike,

nice job, I just took a quick look,

Trent Mick wrote:



The Python Cookbook is by far the most popular of the ASPN Cookbooks, 
so I wanted to get the Python community's feedback on the new site. 
What do you think? What works? What doesn't? I'll try to answer 
feedback on python-list or on the site's feedback form:




one small remark,
If I want to browse 200 recipes at 10 per page ...
please make something like 100 available per page,
our internet is fast enough nowadays.

cheers,
Stef

--
http://mail.python.org/mailman/listinfo/python-list


Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread John Machin
On Jul 9, 2:01 am, Kris Kennaway <[EMAIL PROTECTED]> wrote:
> samwyse wrote:
> > On Jul 4, 6:43 am, Henning_Thornblad <[EMAIL PROTECTED]>
> > wrote:
> >> What can be the cause of the large difference between re.search and
> >> grep?
>
> >> While doing a simple grep:
> >> grep '[^ "=]*/' input                  (input contains 156.000 a in
> >> one row)
> >> doesn't even take a second.
>
> >> Is this a bug in python?
>
> > You might want to look at Plex.
> >http://www.cosc.canterbury.ac.nz/greg.ewing/python/Plex/
>
> > "Another advantage of Plex is that it compiles all of the regular
> > expressions into a single DFA. Once that's done, the input can be
> > processed in a time proportional to the number of characters to be
> > scanned, and independent of the number or complexity of the regular
> > expressions. Python's existing regular expression matchers do not have
> > this property. "
>
> > I haven't tested this, but I think it would do what you want:
>
> > from Plex import *
> > lexicon = Lexicon([
> >     (Rep(AnyBut(' "='))+Str('/'),  TEXT),
> >     (AnyBut('\n'), IGNORE),
> > ])
> > filename = "my_file.txt"
> > f = open(filename, "r")
> > scanner = Scanner(lexicon, f, filename)
> > while 1:
> >     token = scanner.read()
> >     print token
> >     if token[0] is None:
> >         break
>
> Hmm, unfortunately it's still orders of magnitude slower than grep in my
> own application that involves matching lots of strings and regexps
> against large files (I killed it after 400 seconds, compared to 1.5 for
> grep), and that's leaving aside the much longer compilation time (over a
> minute).  If the matching was fast then I could possibly pickle the
> lexer though (but it's not).
>

Can you give us some examples of the kinds of patterns that you are
using in practice and are slow using Python re? How large is "large"?
What kind of text?

Instead of grep, you might like to try nrgrep ... google("nrgrep
Navarro Raffinot"): PDF paper about it on Citeseer (if it's up),
postscript paper and C source findable from Gonzalo Navarro's home-
page.

Cheers,
John
--
http://mail.python.org/mailman/listinfo/python-list


delete lines

2008-07-09 Thread antar2
I am new in python and I have the following problem:

Suppose I have a list of words from which I want to remove, each time, the
words on the lines below item1 and above item2:

item1
a
b
item2
c
d
item3
e
f
item4
g
h
item1
i
j
item2
k
l
item3
m
n
item4
o
p

I did not find out how to do this:

Part of my script was

f = re.compile("item\[1\]\:")
g = re.compile("item\[2\]\:")
for i, line in enumerate(list1):
    f_match = f.search(line)
    g_match = g.search(line)
    if f_match:
        if g_match:
            if list1[i] > f_match:
                if list1[i] < g_match:
                    del list1[i]


But this does not work

If someone can help me, thanks!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread A.T.Hofkamp
On 2008-07-09, |e0 <[EMAIL PROTECTED]> wrote:
> So, i can't use wmi module on linux?
>
> On Wed, Jul 9, 2008 at 9:14 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:
>> I think the win32 module is only for windows.
>>

Welcome to the world outside MS.

Many python modules don't actually do anything than passing on calls to an
existing underlying library. They are cheap to make, and make it possible to
use the functionality of the library from a Python program. The down-side is,
as you have discovered, that you need the underlying library to make it work.

So, the answer is no, you cannot use wmi under a non-MS OS. (But what did you
expect, given that wmi means WINDOWS Management Instrumentation?) No doubt
there are also open source variants of this package, however, I am not familiar
with them, so I cannot help you.


Albert

--
http://mail.python.org/mailman/listinfo/python-list


Re: Regular Expressions Quick Question

2008-07-09 Thread Bruno Desthuilliers

Rajanikanth Jammalamadaka wrote:
(top-post corrected - Please, Rajanikanth, learn to trim & quote properly, 
and by all means avoid top-posting)


On Wed, Jul 9, 2008 at 12:13 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:

Alright, basically I have a list of words in a file and I load each word
from each line into the array. 



I assume you meant 'list' ?



Then basically the question is how do I

check if the input word matches multiple words in the list.

Say someone input "test", how could I check if that word matches these list
of words:

test
testing
tested

Out of the list of

Hello
blah
example
test
ested
tested
testing

I want it to loop then check if the input word I used starts any of the
words in the list so if I typed 'tes'

Then:

test
testing
testing



I assume you meant:
 test
 tested
 testing



would be appended to a new array.


> hi!
>
> Try this:
>
 lis=['t','tes','test','testing']
 [elem for elem in lis if re.compile("^te").search(elem)]

Using a regexp for this is total overkill. But please at least use the 
proper regexp, and use re.compile correctly:


exp = re.compile(r'^tes')
found = [word for word in lis if exp.match(word)]

But you just don't need a regexp for this - str.startswith is your friend:

words = ['Hello', 'blah', 'example', 'test', 'ested', 'tested', 'testing']
found = [word for word in words if word.startswith('tes')]
assert found == ['test', 'tested', 'testing']

HTH
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread Tim Golden

A.T.Hofkamp wrote:

On 2008-07-09, |e0 <[EMAIL PROTECTED]> wrote:

So, i can't use wmi module on linux?

On Wed, Jul 9, 2008 at 9:14 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:

I think the win32 module is only for windows.



Welcome to the world outside MS.

Many python modules don't actually do anything than passing on calls to an
existing underlying library. They are cheap to make, and make it possible to
use the functionality of the library from a Python program. The down-side is,
as you have discovered, that you need the underlying library to make it work.


And this is of course true both ways. Python users under Windows miss out
on about half [1] of the os module since it's just handing off to the *nix
system calls. At the same time, *nix users won't be able to add Shell Namespace
Extensions or use the Windows API to monitor directory changes.


So, the answer is no, you cannot use wmi under a non-MS OS. (But what did you
expect, given that wmi means WINDOWS Management Instrumentation?) 


I don't know if anyone's tried to get something like WMI running under
Wine. In principle it might work but I suspect it would involve a lot of
time and effort. Strictly, WMI is an implementation of the WBEM [2]
standards. Googling around suggests that implementations exist for Linux
but I've no idea how mature or robust they are, and I'm quite sure they're
not going to be using a Windows API model for their interface.

TJG

[1] A pardonable exaggeration
[2] http://www.dmtf.org/standards/wbem/

--
http://mail.python.org/mailman/listinfo/python-list


Re: Returning the positions of a list that are non-zero

2008-07-09 Thread Luis Zarrabeitia

This could work:

l = [0,0,1,2,1,0,0]
indexes, values = zip(*((index, value) for index, value in enumerate(l)
                        if value != 0))

But I guess it would be a little less cryptic (and maybe a lot more efficient)
if there were an unzip function instead of using the zip(*sequence) trick.

I think a more readable way would be:

indexes = [index for index,value in enumerate(l) if value != 0]
values = [value for value in l if value != 0]
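
Since the data is headed for pylab anyway, a NumPy-based sketch (numpy is
already a pylab dependency) avoids the Python-level loops:

import numpy as np

data = np.array([0, 0, 1, 2, 1, 0, 0])
indexes = np.nonzero(data)[0]    # indices of the non-zero entries -> array([2, 3, 4])
values = data[indexes]           # the non-zero values             -> array([1, 2, 1])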

Cheers.

-- 
Luis Zarrabeitia
Facultad de Matemática y Computación, UH
http://profesores.matcom.uh.cu/~kyrie


Quoting Benjamin Goudey <[EMAIL PROTECTED]>:

> I have a very large list of integers representing data needed for a
> histogram that I'm going to plot using pylab. However, most of these
> values (85%-95%) are zero and I would like to remove them to reduce
> the amount of memory I'm using and save time when it comes to plotting
> the data. To do this, I'm trying to find the best way to remove all of
> the zero values and produce a list of indices of where the non-zero
> values used to be.
> 
> For example, if my original list is [0,0,1,2,1,0,0] I would like to
> produce the lists [1,2,1] (the non zero values) and [2,3,4] (indices
> of where the non-zero values used to be). Removing non-zero values is
> very easy but determining the indicies is where I'm having difficulty.
> 
> Thanks in advance for any help
> --
> http://mail.python.org/mailman/listinfo/python-list
> 

--
http://mail.python.org/mailman/listinfo/python-list


Re: delete lines

2008-07-09 Thread Peter Otten
antar2 wrote:

> I am new in python and I have the following problem:
> 
> Suppose I have a list with words of which I want to remove each time
> the words in the lines below item1 and above item2:

> f = re.compile("item\[1\]\:")
> g = re.compile("item\[2\]\:")
> for i, line in enumerate(list1):
>     f_match = f.search(line)
>     g_match = g.search(line)
>     if f_match:
>         if g_match:
>             if list1[i] > f_match:
>                 if list1[i] < g_match:
>                     del list1[i]
> 
> 
> But this does not work
> 
> If someone can help me, thanks!

I see two problems with your code: 

- You are altering the list while you iterate over it. Don't do that, it'll
cause Python to skip items, and the result is usually a mess. Make a new
list instead.

- You don't keep track of whether you are between "item1" and "item2". A
simple flag will help here.

A working example:

inlist = """item1
a
b
item2
c
d
item3
e
f
item4
g
h
item1
i
j
item2
k
l
item3
m
n
item4
o
p
""".splitlines()

print inlist

outlist = []
between = False

for item in inlist:
    if between:
        if item == "item2":
            between = False
            outlist.append(item)
    else:
        outlist.append(item)
        if item == "item1":
            between = True

print outlist

Peter
--
http://mail.python.org/mailman/listinfo/python-list

Opening Unicode files

2008-07-09 Thread Noorhan Abbas
Hello,
I wonder if you don't mind helping me out in this problem. I have been 
developing a tool in python that opens some unicode files, reading them and 
processing the data. It is working fine. When I started to write a cgi module 
that does the same thing for google appengine(actually it is using the same 
files that I used befoer), I get an error:
 
f = codecs.open (FileName , 'rU', 'utf-8')
 
It says that the above functions takes 3 arguments but 4 were given.  I wonder 
if I need to do something before using the codecs library from within the cgi 
module?!
 
Thank you very much for your help,
Nora


--
http://mail.python.org/mailman/listinfo/python-list

Manipulating sys.path

2008-07-09 Thread Thomas

Hi all,

I want to manipulate sys.path without resorting to PYTHONPATH (environment) 
or sys.path.append() (programmatically).


A bit of background:
We're maintaining a small application that includes a couple of Python 
scripts. Over time, a decent amount of code has been forked into 
modules, so the overall file system layout of our kit looks like this:


tool/
  bin/
prog1.py
prog2.py
...
  lib/
pack1/
  mod1.py
  mod2.py
  ...

The issue I have is that I want to add the 'lib' directory to the module 
search path so that our programs prog1.py, prog2.py,... can find the 
modules pack1.mod1, pack1.mod2, ... But I want to keep this out of the 
program's source code which rules out statements like 
'sys.path.insert(0, "../lib")'. We also want to be minimally invasive for 
the hosting environment, so no copying of 'lib' into the standard Python 
lib directories (like /usr/local/lib/python2.5/site-packages etc.), nor 
forcing the user to change his PYTHONPATH shell environment. It should 
be solved locally in our kit's directory tree.


I was thinking about putting code into a 'bin/__init__.py' file but 
that's only working for modules and not for executable scripts, right?! 
Then I came across the '.pth' files, but unfortunately they only seem to 
work in some standard paths (like the before mentioned 
/usr/local/lib/python2.5/site-packages), and not in the script directory 
(like 'bin' in my case) which is automatically added to sys.path.
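
The obvious fallback would be a single bootstrap line at the top of each
bin/ script that derives lib/ from the script's own location (sketched
below), but that is exactly the kind of per-script code I would prefer to
avoid:

import os, sys

# prepend <tool>/lib, computed relative to this script's location
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                os.pardir, 'lib'))

import pack1.mod1   # resolvable regardless of the caller's cwd or PYTHONPATH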


Can anybody think of something that could be of help here?

Thanks,
Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: a simple 'for' question

2008-07-09 Thread Tim Cook

On Wed, 2008-07-09 at 00:00 -0400, Ben Keshet wrote:
> oops, my mistake, actually it didn't work...
> when I tried:
> for x in folders:
> print x # print the current folder
> filename='Folder/%s/myfile.txt' %x
> f=open(filename,'r')
> 
> it says: IOError: [Errno 2] No such file or directory:
> 'Folder/1/myfile.txt'
> 

I believe it's because x is the position marker; what you want instead is
the contents of folders at x, therefore folders[x].

HTH,
Tim




-- 
Timothy Cook, MSc
Health Informatics Research & Development Services
LinkedIn Profile:http://www.linkedin.com/in/timothywaynecook 
Skype ID == timothy.cook 
**
*You may get my Public GPG key from  popular keyservers or   *
*from this link http://timothywayne.cook.googlepages.com/home*
**


--
http://mail.python.org/mailman/listinfo/python-list

start reading from certain line

2008-07-09 Thread antar2
I am a starter in python and would like to write a program that reads
lines starting with a line that contains a certain word.
For example the program starts reading the program when a line is
encountered that contains 'item 1'


The weather is nice
Item 1
We will go to the seaside
...

Only the lines coming after Item 1 should be read

Thanks!
--
http://mail.python.org/mailman/listinfo/python-list


Re: (silly?) speed comparisons

2008-07-09 Thread mk

Rajanikanth Jammalamadaka wrote:

Try using a list instead of a vector for the C++ version.


Well, it's even slower:

$ time slice4

real0m4.500s
user0m0.015s
sys 0m0.015s


Time of execution of vector version (using reference to a vector):

$ time slice2

real0m2.420s
user0m0.015s
sys 0m0.015s

Still slower than Python!


Source, using lists in C++:

 slice4.c++ 
#include <list>
#include <string>
#include <iostream>

using namespace std;

list<string> move_slice(list<string>& slist, int start, int stop, int dest)
{
    int idx;
    if( dest > stop)
        idx = dest - (stop - start);
    else
        idx = dest;

    list<string> frag;

    int i;
    list<string>::iterator startiter;
    list<string>::iterator enditer;

    startiter = slist.begin();

    for (i = 0; i < start; i++)
        startiter++;
    enditer = startiter;

    // copy fragment
    for (i = start; i < stop; i++)
    {
        frag.push_back(*enditer);
        enditer++;
    }

    // delete frag from the slist
    slist.erase( startiter, enditer );

    // insert frag into slist at idx
    startiter = slist.begin();
    for (i = 0; i < idx; i++)
        startiter++;
    slist.insert( startiter, frag.begin(), frag.end());

/*  cout << "frag " << endl;
    for (startiter = frag.begin(); startiter != frag.end(); startiter ++)
        cout << *startiter << " ";
    cout << endl;

    cout << "slist " << endl;
    for (startiter = slist.begin(); startiter != slist.end(); startiter++)
        cout << *startiter << " ";
    cout << endl;*/

    return slist;
}


int main(int argc, char* argv[])
{
    list<string> slice;
    string u = "abcdefghij";
    int pos;
    for (pos = 0; pos < u.length(); pos++)
        slice.push_back(u.substr(pos,1));
    int i;
    for (i = 0; i<100; i++)
        move_slice(slice, 6, 7, 7);

}




Source, using reference to a vector:

 slice2.c++ 
#include <vector>
#include <string>
#include <iostream>

using namespace std;

vector<string> move_slice(vector<string>& vec, int start, int stop, int dest)
{
    int idx = stop - start;
    vector<string> frag;
    for (idx = start; idx < stop; idx++)
        frag.push_back(vec.at(idx));
    if( dest > stop)
        idx = dest - (stop - start);
    else
        idx = dest;
    vec.erase( vec.begin() + start, vec.begin() + stop);
    vec.insert( vec.begin() + idx, frag.begin(), frag.end());
    return vec;
}


int main(int argc, char* argv[])
{
    vector<string> slice;
    string u = "abcdefghij";
    int pos;
    for (pos = 0; pos < u.length(); pos++)
        slice.push_back(u.substr(pos,1));
    int i;
    for (i = 0; i<100; i++)
        move_slice(slice, 6, 7, 7);

}

-

--
http://mail.python.org/mailman/listinfo/python-list


Re: start reading from certain line

2008-07-09 Thread Diez B. Roggisch
antar2 wrote:

> I am a starter in python and would like to write a program that reads
> lines starting with a line that contains a certain word.
> For example the program starts reading the program when a line is
> encountered that contains 'item 1'
> 
> 
> The weather is nice
> Item 1
> We will go to the seaside
> ...
> 
> Only the lines coming after Item 1 should be read

Start reading each line, and skip them until your criterion matches. Like
this:

def line_skipper(predicate, line_iterable):
    for line in line_iterable:
        if predicate(line):
            break
    for line in line_iterable:
        yield line
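
A usage sketch (the file name is made up); everything up to and including the
first line matching the predicate is skipped:

for line in line_skipper(lambda line: 'Item 1' in line, open('input.txt')):
    print line,     # only the lines after the first 'Item 1' line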

Diez
--
http://mail.python.org/mailman/listinfo/python-list


mock with inheritance

2008-07-09 Thread lidiriel
Hello,

I would like to use a mock object for testing one class and its methods.
Here is my class:

class Foo(Component):
    def __init__(self):
        self._db = self.env.get_db()

    def foomethod(self, arg):
        ...

But I don't know how to mock the class Component. Note that Component
provides the attribute env.
I use the
Python mock module (author Dave Kirby) or
mock (author Michael Foord (fuzzyman)), but if you have a generic
solution I will adapt it to my problem.
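
For illustration, a hand-rolled stand-in (names invented, sketch only) would
look like this; the question is how to let a mock library play the Component
role instead:

class FakeEnv(object):
    def get_db(self):
        return 'fake-db'            # whatever the test wants to see

class FakeComponent(object):        # plays the role of Component in the test
    env = FakeEnv()

class Foo(FakeComponent):           # the class under test, built against the fake
    def __init__(self):
        self._db = self.env.get_db()

assert Foo()._db == 'fake-db'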

Thanks in advance, and sorry for my poor English.

--
http://mail.python.org/mailman/listinfo/python-list


Re: start reading from certain line

2008-07-09 Thread Tim Cook
On Wed, 2008-07-09 at 03:30 -0700, antar2 wrote:
> I am a starter in python and would like to write a program that reads
> lines starting with a line that contains a certain word.
> For example the program starts reading the program when a line is
> encountered that contains 'item 1'
> 
> 
> The weather is nice
> Item 1
> We will go to the seaside
> ...
> 
> Only the lines coming after Item 1 should be read

file=open(filename)
while True:
    line=file.readline()
    if not line:
        break

    if 'Item 1' in line:
        print line


HTH,
Tim


-- 
**
Join the OSHIP project.  It is the standards based, open source
healthcare application platform in Python.
Home page: https://launchpad.net/oship/ 
Wiki: http://www.openehr.org/wiki/display/dev/Python+developer%27s+page 
**


--
http://mail.python.org/mailman/listinfo/python-list

Re: Impossible to change methods with special names of instances of new-style classes?

2008-07-09 Thread cokofreedom
>
> My question is: did something about the way the special method names are
> implemented change for new-style classes?
>

>>> class old:
        pass

>>> class new(object):
        pass

>>> testone = old()
>>> testone.__call__ = lambda : 33
>>> testone()
33
>>> testtwo = new()
>>> testtwo.__call__ = lambda : 33
>>> testtwo()

Traceback (most recent call last):
  File "", line 1, in 
testtwo()
TypeError: 'new' object is not callable
>>> old.__call__

Traceback (most recent call last):
  File "", line 1, in 
old.__call__
AttributeError: class old has no attribute '__call__'
>>> new.__call__

>>> testone.__call__
<function <lambda> at 0x00C35EB0>
>>> testtwo.__call__
<function <lambda> at 0x00C35B70>
>>> dir(testtwo)
['__call__', '__class__', '__delattr__', '__dict__', '__doc__',
'__getattribute__', '__hash__', '__init__', '__module__', '__new__',
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__',
'__weakref__']
>>> dir(testone)
['__call__', '__doc__', '__module__']
>>> dir(new)
['__class__', '__delattr__', '__dict__', '__doc__',
'__getattribute__', '__hash__', '__init__', '__module__', '__new__',
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__',
'__weakref__']
>>> dir(old)
['__doc__', '__module__']

I don't see __call__ in either class's own structure, but for new-style
classes it is a wrapper and for old-style it is nothing. Not sure if that
helps, but this is rather over my head.
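
For what it's worth, attaching the hook to the class rather than the instance
does make the new-style case work (a minimal sketch):

class New(object):
    pass

obj = New()
New.__call__ = lambda self: 33   # special methods are looked up on the type...
print obj()                      # ...so the instance is now callable -> prints 33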
--
http://mail.python.org/mailman/listinfo/python-list


Re: start reading from certain line

2008-07-09 Thread A.T.Hofkamp
On 2008-07-09, antar2 <[EMAIL PROTECTED]> wrote:
> I am a starter in python and would like to write a program that reads
> lines starting with a line that contains a certain word.
> For example the program starts reading the program when a line is
> encountered that contains 'item 1'
>
>
> The weather is nice
> Item 1
> We will go to the seaside
> ...
>
> Only the lines coming after Item 1 should be read

Not possible on most OSes; file reading always starts at the first character of
the first line. Also, most OSes don't understand the 'line' concept natively: a
file is just a long sequence of characters to them (end-of-line is also just a
character, namely '\n', or '\r\n' if you are using Windows).

So you have to read the entire file, then throw away the bits you don't want to
keep. Luckily, Python does understand what a 'line' is, which makes the problem
simpler.

Have a look at the readline() function (or the readlines() function if your
file is not too long). That should give you a start.



Albert
--
http://mail.python.org/mailman/listinfo/python-list


Re: a simple 'for' question

2008-07-09 Thread cokofreedom
On Jul 9, 2:08 am, Ben Keshet <[EMAIL PROTECTED]> wrote:
> Hi fans,
>
> I want to use a 'for' iteration to manipulate files in a set of folders,
> something like:
>
> folders= ['1A28','1A6W','56Y7']
> for x in folders:
> print x # print the current folder
> f = open('my/path/way/x/my_file.txt', 'r')
> ...
>
> where 'x' in the pathway should iterate over '1A28','1A6W','56Y7'.  How
> should I identify 'x' in the pathway line as the same x that is
> iterating over 'folders'?
>
> I am getting the following error:
>
> Traceback (most recent call last):
>   File
> "C:\Python25\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
> line 310, in RunScript
> exec codeObject in __main__.__dict__
>   File "C:\Linux\Dock_method_validation\myscripts\test_for.py", line 5,
> in 
> f = open('c:/Linux/Dock_method_validation/x/receptor.mol2', 'r')
> IOError: [Errno 2] No such file or directory:
> 'c:/Linux/Dock_method_validation/x/receptor.mol2'
>
> I tired several variations: %x, 'x', "x", etc. all gave me similar errors.
>
> Thanks for your help,
> BK

>>> folders = ["a", "b", "c"]
>>> for item in folders:
        print item
        file = open("my/path/way/" + item + "/my_file.txt", "r")
--
http://mail.python.org/mailman/listinfo/python-list


Re: mock with inheritance

2008-07-09 Thread Ben Finney
lidiriel <[EMAIL PROTECTED]> writes:

> But i don't know how to mock the class Component. Note that Component
> provide the attribut env.

I prefer to use the MiniMock framework
http://cheeseshop.python.org/pypi/MiniMock>.

You don't need to specify what interface the mock object has. Its Mock
objects will allow *any* attribute access, method or otherwise, and
simply report what was done. You can then check that report to see if
it matches what you expect; the author suggests that the existing
standard-library 'doctest' module is ideal for such checking.

-- 
 \   “It is the mark of an educated mind to be able to entertain a |
  `\ thought without accepting it.” —Aristotle |
_o__)  |
Ben Finney
--
http://mail.python.org/mailman/listinfo/python-list


Re: a simple 'for' question

2008-07-09 Thread Ben Keshet
It didn't help.  It reads the pathway "as is" (see errors for both 
tries).  It looks like it had the right pathway the first time, but 
could not find it because it searched in the path/way instead of in the 
path\way.  Thanks for trying.


folders= ['1','2','3']
for x in folders:
   print x # print the current folder
   filename='Folder/%s/myfile.txt' %[x]
   f=open(filename,'r')

gives: IOError: [Errno 2] No such file or directory: 
"Folder/['1']/myfile.txt"




Tim Cook wrote:

On Wed, 2008-07-09 at 00:00 -0400, Ben Keshet wrote:
  

oops, my mistake, actually it didn't work...
when I tried:
for x in folders:
print x # print the current folder
filename='Folder/%s/myfile.txt' %x
f=open(filename,'r')

it says: IOError: [Errno 2] No such file or directory:
'Folder/1/myfile.txt'




I believe it's because x is the position marker what you want instead is
the contents of folders at x; therefore folders[x] 


HTH,
Tim




  
--
http://mail.python.org/mailman/listinfo/python-list

Re: GUI Programming by hand not code with Python Code

2008-07-09 Thread Nicola Musatti
On Jul 8, 10:09 pm, sturlamolden <[EMAIL PROTECTED]> wrote:
[...]
> I use wxFormBuilder with wxPython. Works like a charm. Design the GUI
> graphically, export it like a wx XML resource (.xrc). All you nedd to
> code in Python is the event handlers and the code to bind/hook the
> events.
>
> http://sturlamolden.blogspot.com/2008/03/howto-using-wxformbuilder-wi...

I also use wxFormBuilder, but I use XRCed from wxPython 2.8.6.x to
generate an application Skeleton from my .xrc file. This version
creates explicit attributes for all the visual elements that have a
name in the xrc file.

Unfortunately the latest XRCed version requires you to annotate the
xrc in order to obtain the same effect which is not only tedious, but
as far as I can tell it also makes it impossible to round trip between
XRCed and wxFormBuilder.

Cheers,
Nicola Musatti


--
http://mail.python.org/mailman/listinfo/python-list


Re: How to make python scripts .py executable, not bring up editor

2008-07-09 Thread Gerry
And if you've gotten this far, why not take the next step:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/476204

and just type tryme (as opposed to tryme.py)

Gerry

--
http://mail.python.org/mailman/listinfo/python-list


Re: a simple 'for' question

2008-07-09 Thread Bruno Desthuilliers

Tim Cook wrote:

On Wed, 2008-07-09 at 00:00 -0400, Ben Keshet wrote:

oops, my mistake, actually it didn't work...
when I tried:
for x in folders:
print x # print the current folder
filename='Folder/%s/myfile.txt' %x
f=open(filename,'r')

it says: IOError: [Errno 2] No such file or directory:
'Folder/1/myfile.txt'



I believe it's because x is the position marker what you want instead is
the contents of folders at x; therefore folders[x] 


Nope. Python's for loops iterate over elements, not over indices. Here, 
the following code:


folders= ['1A28','1A6W','56Y7']
for x in folders:
   print x
   filename='Folder/%s/myfile.txt' %x
   print filename

yields:

1A28
Folder/1A28/myfile.txt
1A6W
Folder/1A6W/myfile.txt
56Y7
Folder/56Y7/myfile.txt

IOW: the problem is elsewhere.
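
If in doubt, building the path with os.path and checking whether it exists
shows exactly what is being looked up (a sketch; the base directory here is
made up):

import os

base = 'c:/Linux/Dock_method_validation'          # made-up base directory
folders = ['1A28', '1A6W', '56Y7']
for x in folders:
    path = os.path.join(base, x, 'receptor.mol2')
    print path, os.path.exists(path)              # shows the exact path being tried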

--
http://mail.python.org/mailman/listinfo/python-list


Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread Kris Kennaway

John Machin wrote:


Hmm, unfortunately it's still orders of magnitude slower than grep in my
own application that involves matching lots of strings and regexps
against large files (I killed it after 400 seconds, compared to 1.5 for
grep), and that's leaving aside the much longer compilation time (over a
minute).  If the matching was fast then I could possibly pickle the
lexer though (but it's not).



Can you give us some examples of the kinds of patterns that you are
using in practice and are slow using Python re?


Trivial stuff like:

  (Str('error in pkg_delete'), ('mtree', 'mtree')),
  (Str('filesystem was touched prior to .make install'), 
('mtree', 'mtree')),

  (Str('list of extra files and directories'), ('mtree', 'mtree')),
  (Str('list of files present before this port was installed'), 
('mtree', 'mtree')),
  (Str('list of filesystem changes from before and after'), 
('mtree', 'mtree')),


  (re('Configuration .* not supported'), ('arch', 'arch')),

  (re('(configure: error:|Script.*configure.*failed 
unexpectedly|script.*failed: here are the contents of)'),

   ('configure_error', 'configure')),
...

There are about 150 of them and I want to find which is the first match 
in a text file that ranges from a few KB up to 512MB in size.


> How large is "large"?

What kind of text?


It's compiler/build output.


Instead of grep, you might like to try nrgrep ... google("nrgrep
Navarro Raffinot"): PDF paper about it on Citeseer (if it's up),
postscript paper and C source findable from Gonzalo Navarro's home-
page.


Thanks, looks interesting but I don't think it is the best fit here.  I 
would like to avoid spawning hundreds of processes to process each file 
(since I have tens of thousands of them to process).


Kris

--
http://mail.python.org/mailman/listinfo/python-list


Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread Jeroen Ruigrok van der Werven
-On [20080709 14:08], Kris Kennaway ([EMAIL PROTECTED]) wrote:
>It's compiler/build output.

Sounds like the FreeBSD ports build cluster. :)

Kris, have you tried a PGO build of Python with your specific usage? I
cannot guarantee it will significantly speed things up though.

Also, a while ago I did tests with various GCC compilers and their effect on
Python running time as well as Intel's cc. Intel won on (nearly) all
accounts, meaning it was faster overall.

From the top of my mind: GCC 4.1.x was faster than GCC 4.2.x.

-- 
Jeroen Ruigrok van der Werven  / asmodai
イェルーン ラウフロック ヴァン デル ウェルヴェン
http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B
Beware of the fury of the patient man...
--
http://mail.python.org/mailman/listinfo/python-list

Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread Kris Kennaway

Jeroen Ruigrok van der Werven wrote:

-On [20080709 14:08], Kris Kennaway ([EMAIL PROTECTED]) wrote:

It's compiler/build output.


Sounds like the FreeBSD ports build cluster. :)


Yes indeed!


Kris, have you tried a PGO build of Python with your specific usage? I
cannot guarantee it will significantly speed things up though.


I am pretty sure the problem is algorithmic, not bad byte code :)  If it 
was a matter of a few % then that is in the scope of compiler tweaks, 
but we're talking orders of magnitude.


Kris


Also, a while ago I did tests with various GCC compilers and their effect on
Python running time as well as Intel's cc. Intel won on (nearly) all
accounts, meaning it was faster overall.

From the top of my mind: GCC 4.1.x was faster than GCC 4.2.x.



--
http://mail.python.org/mailman/listinfo/python-list


Re: (silly?) speed comparisons

2008-07-09 Thread Maric Michaud
On Wednesday 09 July 2008 12:35:10, mk wrote:
> vector<string> move_slice(vector<string>& vec, int start, int stop, int
> dest)

I guess the point is to make a vector of references to string if you don't want 
to copy string objects all around but just a word for an address each time.

The signature should be :
vector move_slice(vector& vec, int start, int stop, int 
dest)

or

vector move_slice(vector& vec, int start, int stop, int 
dest)


-- 
_

Maric Michaud
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread |e0
I did not mean to use WMI on linux, but query win machines *from* linux.
Thank you for your clarifications

- Leonardo

On Wed, Jul 9, 2008 at 11:04 AM, A.T.Hofkamp <[EMAIL PROTECTED]> wrote:
> Welcome to the world outside MS.
>
> Many python modules don't actually do anything than passing on calls to an
> existing underlying library. They are cheap to make, and make it possible to
> use the functionality of the library from a Python program. The down-side is,
> as you have discovered, that you need the underlying library to make it work.
>
> So, the answer is no, you cannot use wmi under a non-MS OS. (But what did you
> expect, given that wmi means WINDOWS Management Instrumentation?) No doubt
> there are also open source variants of this package, however, I am not 
> familiar
> with them, so I cannot help you.
>
>
> Albert
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread Tim Golden

|e0 wrote:

I did not mean to use WMI on linux, but query win machines *from* linux.
Thank you for your clarifications


In principle you ought to be able to use some kind of DCOM bridge
(since WMI access if via COM/DCOM). I've no idea if anyone's attempted
this or even if all the pieces are in place.

If this won't fly, it should be simple enough to use one of the many,
many RPC-ish mechanisms, whether all-Python or otherwise, to
call into a WMI proxy service you could run on the chosen Windows
boxes.

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread Diez B. Roggisch
|e0 wrote:

> I did not mean to use WMI on linux, but query win machines *from* linux.

What do you mean by "query"? Using the WMI module? No. It's Windows only.

Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Returning the positions of a list that are non-zero

2008-07-09 Thread Andrii V. Mishkovskyi
2008/7/9 Benjamin Goudey <[EMAIL PROTECTED]>:
> I have a very large list of integers representing data needed for a
> histogram that I'm going to plot using pylab. However, most of these
> values (85%-95%) are zero and I would like to remove them to reduce
> the amount of memory I'm using and save time when it comes to plotting
> the data. To do this, I'm trying to find the best way to remove all of
> the zero values and produce a list of indices of where the non-zero
> values used to be.
>
> For example, if my original list is [0,0,1,2,1,0,0] I would like to
> produce the lists [1,2,1] (the non zero values) and [2,3,4] (indices
> of where the non-zero values used to be). Removing non-zero values is
> very easy but determining the indicies is where I'm having difficulty.
>
> Thanks in advance for any help

>>> l = [0, 0, 1, 2, 1, 0, 0]
>>> zip(*[(item, index) for (index, item) in enumerate(l) if item != 0])
[(1, 2, 1), (2, 3, 4)]


> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Wbr, Andrii Mishkovskyi.

He's got a heart of a little child, and he keeps it in a jar on his desk.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Returning the positions of a list that are non-zero

2008-07-09 Thread Chris
On Jul 9, 7:48 am, "Rajanikanth Jammalamadaka" <[EMAIL PROTECTED]>
wrote:
> Try this:
>
> >>> li=[0,0,1,2,1,0,0]
> >>> li
>
> [0, 0, 1, 2, 1, 0, 0]>>> [i for i in range(len(li)) if li[i] != 0]
>
> [2, 3, 4]
>
> Cheers,
>
> Raj
>
>
>
> On Tue, Jul 8, 2008 at 10:26 PM, Benjamin Goudey <[EMAIL PROTECTED]> wrote:
> > I have a very large list of integers representing data needed for a
> > histogram that I'm going to plot using pylab. However, most of these
> > values (85%-95%) are zero and I would like to remove them to reduce
> > the amount of memory I'm using and save time when it comes to plotting
> > the data. To do this, I'm trying to find the best way to remove all of
> > the zero values and produce a list of indices of where the non-zero
> > values used to be.
>
> > For example, if my original list is [0,0,1,2,1,0,0] I would like to
> > produce the lists [1,2,1] (the non zero values) and [2,3,4] (indices
> > of where the non-zero values used to be). Removing non-zero values is
> > very easy but determining the indicies is where I'm having difficulty.
>
> > Thanks in advance for any help
> > --
> >http://mail.python.org/mailman/listinfo/python-list
>
> --
> "For him who has conquered the mind, the mind is the best of friends;
> but for one who has failed to do so, his very mind will be the
> greatest enemy."
>
> Rajanikanth

That's a waste

>>> li=[0,0,1,2,1,0,0]
>>> [i for i in li if i]

That's all you need. :)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Returning the positions of a list that are non-zero

2008-07-09 Thread Chris
On Jul 9, 7:48 am, "Rajanikanth Jammalamadaka" <[EMAIL PROTECTED]>
wrote:
> Try this:
>
> >>> li=[0,0,1,2,1,0,0]
> >>> li
>
> [0, 0, 1, 2, 1, 0, 0]>>> [i for i in range(len(li)) if li[i] != 0]
>
> [2, 3, 4]
>
> Cheers,
>
> Raj
>
>
>
> On Tue, Jul 8, 2008 at 10:26 PM, Benjamin Goudey <[EMAIL PROTECTED]> wrote:
> > I have a very large list of integers representing data needed for a
> > histogram that I'm going to plot using pylab. However, most of these
> > values (85%-95%) are zero and I would like to remove them to reduce
> > the amount of memory I'm using and save time when it comes to plotting
> > the data. To do this, I'm trying to find the best way to remove all of
> > the zero values and produce a list of indices of where the non-zero
> > values used to be.
>
> > For example, if my original list is [0,0,1,2,1,0,0] I would like to
> > produce the lists [1,2,1] (the non zero values) and [2,3,4] (indices
> > of where the non-zero values used to be). Removing non-zero values is
> > very easy but determining the indicies is where I'm having difficulty.
>
> > Thanks in advance for any help
> > --
> >http://mail.python.org/mailman/listinfo/python-list
>
> --
> "For him who has conquered the mind, the mind is the best of friends;
> but for one who has failed to do so, his very mind will be the
> greatest enemy."
>
> Rajanikanth

Whoops, misread the question

li =[0,0,1,2,1,0,0]
[(index,data) for index,data in enumerate(li) if data]
--
http://mail.python.org/mailman/listinfo/python-list


Python / Windows process control

2008-07-09 Thread Salim Fadhley
Does anybody know of a python module which can do process management
on Windows? The sort of thing that we might usually do with
taskmgr.exe or process explorer?

For example:

* Kill a process by ID
* Find out which process ID is locking an object in the filesystem
* Find out all the IDs of a particular .exe file
* Find all the details of a currently running process (e.g. given an
ID tell me which files it uses, niceness, runtime)
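
One possibility for part of this is Tim Golden's wmi package (a sketch only,
untested here; the process name and pid below are made up):

import wmi                     # third-party wmi package, Windows only

c = wmi.WMI()

# list the process ids of every running copy of a given executable
for process in c.Win32_Process(Name="notepad.exe"):
    print process.ProcessId, process.Name

# kill a process by id
for process in c.Win32_Process(ProcessId=1234):
    process.Terminate()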

Thanks!

Sal
--
http://mail.python.org/mailman/listinfo/python-list


Doubts about how implementing asynchronous timeouts through a heap

2008-07-09 Thread Giampaolo Rodola'
Hi,
I'm trying to implement an asynchronous scheduler for asyncore to call
functions at a later time without blocking the main loop.
The logic behind it consists in:

- adding the scheduled functions into a heapified list
- calling a "scheduler" function at every loop which checks the scheduled
functions due to expire soonest

Note that, by using a heap, the first element of the list is always supposed
to be the one with the lower timeout.
The support class used to reset() and cancel() the scheduled functions is
very similar to the DelayedCall class defined in /twisted/internet/base.py.
Here's the code I wrote:


<--- snippet --->
import heapq
import time
import sys

delayed_map = []

class delayed_call:
    """Calls a function at a later time.

    The instance returned is an object that can be used to cancel the
    scheduled call, by calling its cancel() method.
    It also may be rescheduled by calling delay() or reset() methods.
    """

    def __init__(self, delay, target, *args, **kwargs):
        """
        - delay: the number of seconds to wait
        - target: the callable object to call later
        - args: the arguments to call it with
        - kwargs: the keyword arguments to call it with
        """
        assert callable(target), "%s is not callable" % target
        assert sys.maxint >= delay >= 0, \
            "%s is not greater than or equal to 0 seconds" % (delay)
        self.__delay = delay
        self.__target = target
        self.__args = args
        self.__kwargs = kwargs
        # seconds from the epoch at which to call the function
        self.timeout = time.time() + self.__delay
        self.cancelled = False
        heapq.heappush(delayed_map, self)

    def __le__(self, other):
        return self.timeout <= other.timeout

    def active(self):
        """Return True if this scheduler has not been cancelled."""
        return not self.cancelled

    def call(self):
        """Call this scheduled function."""
        self.__target(*self.__args, **self.__kwargs)

    def reset(self):
        """Reschedule this call resetting the current countdown."""
        assert not self.cancelled, "Already cancelled"
        self.timeout = time.time() + self.__delay
        if delayed_map[0] is self:
            heapq.heapify(delayed_map)

    def delay(self, seconds):
        """Reschedule this call for a later time."""
        assert not self.cancelled, "Already cancelled."
        assert sys.maxint >= seconds >= 0, \
            "%s is not greater than or equal to 0 seconds" % (seconds)
        self.__delay = seconds
        self.reset()

    def cancel(self):
        """Unschedule this call."""
        assert not self.cancelled, "Already cancelled"
        del self.__target, self.__args, self.__kwargs
        if self in delayed_map:
            if delayed_map[0] is self:
                delayed_map.remove(self)
                heapq.heapify(delayed_map)
            else:
                delayed_map.remove(self)
        self.cancelled = True


def fun(arg):
    print arg

a = delayed_call(0.6, fun, '0.6')
b = delayed_call(0.5, fun, '0.5')
c = delayed_call(0.4, fun, '0.4')
d = delayed_call(0.3, fun, '0.3')
e = delayed_call(0.2, fun, '0.2')
f = delayed_call(0.1, fun, '0.1')


while delayed_map:
    now = time.time()
    while delayed_map and now >= delayed_map[0].timeout:
        delayed = heapq.heappop(delayed_map)
        try:
            delayed.call()
        finally:
            if not delayed.cancelled:
                delayed.cancel()
    time.sleep(0.01)



Here come the questions.
Since the timeouts of the scheduled functions contained in the list can
change when I reset() or cancel() them, I don't know WHEN the list needs to
be heapified().
By doing some tests I came to the conclusion that I need to heapify() the
list only when the function I reset() or cancel() is the *first of the
list*, but I'm not absolutely sure about it.
When do you think it would be necessary to call heapify()?
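
A tiny standalone illustration of the worry (independent of the class above):
changing the key of a non-root element already breaks the pop order until
heapify() is called:

import heapq

heap = [1, 5, 3, 9, 7]
heapq.heapify(heap)        # already a valid heap
heap[2] = 0                # mutate the key of a non-root element (like reset() does)
print heapq.heappop(heap)  # prints 1, not 0 -> the heap no longer pops in order
heapq.heapify(heap)        # after such a mutation, a full re-heapify is the safe fix
print heapq.heappop(heap)  # 0
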
I wrote a short test suite which tests the code above and I didn't notice
strange behaviour, but since I don't know much about the logic behind
heaps I'd need some help.
Thanks in advance.


--- Giampaolo
http://code.google.com/p/pyftpdlib/
--
http://mail.python.org/mailman/listinfo/python-list

Allow tab completion when inputing filepath?

2008-07-09 Thread Keith Hughitt
Hi all,

I've been looking around on the web for a way to do this, but so far
have not come across anything for this particular application. I have
found some ways to enable tab completion for program-related commands,
but not for system filepaths. This would be nice to have when
prompting the user to enter a file/directory location.
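
On a Unix-style console one possibility is the readline module with a
glob-based completer (a sketch; the prompt text is made up):

import glob
import readline

def complete_path(text, state):
    # expand what has been typed so far into matching filesystem paths
    matches = glob.glob(text + '*')
    if state < len(matches):
        return matches[state]
    return None

readline.set_completer_delims(' \t\n;')
readline.parse_and_bind('tab: complete')
readline.set_completer(complete_path)

path = raw_input('Path: ')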

Any suggestions?

Thanks,
Keith
--
http://mail.python.org/mailman/listinfo/python-list


Re: Allow tab completion when inputing filepath?

2008-07-09 Thread Tim Golden

Keith Hughitt wrote:

I've been looking around on the web for a way to do this, but so far
have not come across anything for this particular application. I have
found some ways to enable tab completion for program-related commands,
but not for system filepaths. This would be nice to have when
prompting the user to enter a file/directory location.


What platform are you on? And what kind of display?
(Console / GUI / wxPython / Qt / Web...)

TJG
--
http://mail.python.org/mailman/listinfo/python-list



Re: Relative Package Import

2008-07-09 Thread Thomas

Robert Hancock wrote:

mypackage/
  __init__.py
  push/
__init__.py
 dest.py
 feed/
   __init__py
subject.py

In subject.py I have
 from ..push import dest


There is no such thing as relative package imports. See 
http://www.python.org/doc/essays/packages.html


Thomas



But i receive the error:
  Caught exception importing module subject:
File "/usr/local/python/lib/python2.5/site-packages/pychecker/
checker.py", line 621, in setupMainCode()
  module = imp.load_module(self.moduleName, file, filename, smt)
File "subject.py", line 1, in ()
  from ..feed import dest
  ValueError: Attempted relative import in non-package

What am I missing?




--
http://mail.python.org/mailman/listinfo/python-list


Re: Returning the positions of a list that are non-zero

2008-07-09 Thread Paul McGuire
On Jul 9, 12:26 am, Benjamin Goudey <[EMAIL PROTECTED]> wrote:
> I have a very large list of integers representing data needed for a
> histogram that I'm going to plot using pylab. However, most of these
> values (85%-95%) are zero and I would like to remove them to reduce
> the amount of memory I'm using and save time when it comes to plotting
> the data. To do this, I'm trying to find the best way to remove all of
> the zero values and produce a list of indices of where the non-zero
> values used to be.
>
> For example, if my original list is [0,0,1,2,1,0,0] I would like to
> produce the lists [1,2,1] (the non zero values) and [2,3,4] (indices
> of where the non-zero values used to be). Removing non-zero values is
> very easy but determining the indicies is where I'm having difficulty.
>

>>> sparse_data = [0, 0, 1, 2, 1, 0, 0]
>>> values,locns = zip(*[ (x,i) for i,x in enumerate(sparse_data) if x ])
>>> print values
(1, 2, 1)
>>> print locns
(2, 3, 4)
>>>

-- Paul


--
http://mail.python.org/mailman/listinfo/python-list


FOSS projects exhibiting clean/good OOP?

2008-07-09 Thread Phillip B Oldham
I'm wondering whether anyone can offer suggestions on FOSS projects/
apps which exhibit solid OO principles, clean code, good inline
documentation, and sound design principles?

I'm devoting some time to reviewing other people's code to advance my
skills. Its good to review bad code (of which I have more than enough
examples) as well as good, but I'm lacking in finding good examples.

Projects of varying sizes would be great.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Relative Package Import

2008-07-09 Thread Peter Otten
Thomas wrote:

> Robert Hancock wrote:
>> mypackage/
>>   __init__.py
>>   push/
>> __init__.py
>>  dest.py
>>  feed/
>>__init__py
>> subject.py
>> 
>> In subject.py I have
>>  from ..push import dest
> 
> There is no such thing as relative package imports. See
> http://www.python.org/doc/essays/packages.html

Unless you are using Python 1.5 the following document is a bit more
relevant:

http://www.python.org/dev/peps/pep-0328/

Peter

--
http://mail.python.org/mailman/listinfo/python-list


Re: (silly?) speed comparisons

2008-07-09 Thread mk

Maric Michaud wrote:

On Wednesday 09 July 2008 12:35:10, mk wrote:

vector<string> move_slice(vector<string>& vec, int start, int stop, int
dest)


I guess the point is to make a vector of references to string if you don't want 
to copy string objects all around but just a word for an address each time.


The signature should be :
vector move_slice(vector& vec, int start, int stop, int 
dest)


or

vector move_slice(vector& vec, int start, int stop, int 
dest)


That matters too, but I just found that the main culprit was _returning the 
list instead of returning a reference to the list_.


The difference is staggering - some 25 sec vs 0.2 sec:

$ time slice6

real0m0.191s
user0m0.015s
sys 0m0.030s



#include 
#include 
#include 

using namespace std;

list& move_slice(list& slist, int start, int stop, int 
dest)

{
int idx;
if( dest > stop)
idx = dest - (stop - start);
else
idx = dest;


int i;
list::iterator startiter;
list::iterator enditer;
list::iterator destiter;

startiter = slist.begin();
destiter = slist.begin();

for (i = 0; i < start; i++)
startiter++;
enditer = startiter;

for (i = start; i < stop; i++)
enditer++;

for (i = 0; i < dest; i++)
destiter++;


slist.splice(destiter, slist, startiter, enditer);


/*  cout << "frag " << endl;
for (startiter = frag.begin(); startiter != frag.end(); startiter ++)
cout << *startiter << " ";
cout << endl;*/

/*  cout << " after: ";
for (startiter = slist.begin(); startiter != slist.end(); startiter++)
cout << *startiter << " ";
cout << endl;*/

return slist;
}


int main(int argc, char* argv[])
{
list<string*> slice;
	string u = 
"abcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghij";

int pos;
for (pos = 0; pos < u.length(); pos++)
slice.push_back(new string(u));
int i;
//for (i = 0; i<100; i++)

/*list<string*>::iterator startiter;
cout << "before: ";
for (startiter = slice.begin(); startiter != slice.end(); startiter++)
cout << *startiter << " ";
cout << endl;*/

for (int i = 0; i<100; i++)
move_slice(slice, 4, 6, 7);

}



--
http://mail.python.org/mailman/listinfo/python-list


You, spare time and SyntaxError

2008-07-09 Thread cokofreedom
def ine(you):
yourself = "what?"
go = list("something"), list("anything")
be = "something"
please = be, yourself
yourself = "great"
for good in yourself:
if you is good:
good in you
please.add(more, good)
else:
def inition(lacks, clarity):
if clarity not in you:
please.remove(everything and go)
for bad in yourself:
list(bad) if bad else (ignore, yourself)
try:
(to, escape, your, fate, but)
except (Exception), son:
if bad in (you, son):
(you is bad, son), so
finally:
if bad in you:
lie, cheat, steal, be, bad
else:
print you, "is", yourself
you is good and yourself is not bad
please, go

ine("Everyone")
--
http://mail.python.org/mailman/listinfo/python-list


Re: (silly?) speed comparisons

2008-07-09 Thread mk


P.S. Java 1.6 rocks - I wrote an equivalent version using ArrayList and it 
executed in 0.7s.


--
http://mail.python.org/mailman/listinfo/python-list


Re: FOSS projects exhibiting clean/good OOP?

2008-07-09 Thread Tim Cook

On Wed, 2008-07-09 at 07:38 -0700, Phillip B Oldham wrote:
> I'm wondering whether anyone can offer suggestions on FOSS projects/
> apps which exhibit solid OO principles, clean code, good inline
> documentation, and sound design principles?
> 
> I'm devoting some time to reviewing other people's code to advance my
> skills. Its good to review bad code (of which I have more than enough
> examples) as well as good, but I'm lacking in finding good examples.
> 
> Projects of varying sizes would be great.

Of course 'I think' mine matches that description. :-)

In addition to the two links in the signature below where you can get a
description and source code; there is an entry on Ohloh that says it is
well documented code. http://www.ohloh.net/projects/oship 

I would appreciate your feedback.

Cheers,
Tim

PS. The Launchpad and Ohloh repositories lag the openEHR SVN by several
hours.

-- 
**
Join the OSHIP project.  It is the standards based, open source
healthcare application platform in Python.
Home page: https://launchpad.net/oship/ 
Wiki: http://www.openehr.org/wiki/display/dev/Python+developer%27s+page 
**


--
http://mail.python.org/mailman/listinfo/python-list

Re: numeric emulation and __pos__

2008-07-09 Thread samwyse
On Jul 8, 12:34 pm, Ethan Furman <[EMAIL PROTECTED]> wrote:

> Anybody have an example of when the unary + actually does something?
> Besides the below Decimal example.  I'm curious under what circumstances
> it would be useful for more than just completeness (although
> completeness for it's own sake is important, IMO).

Well, as in Decimal, it would be a good operator to use for
canonization.  Let's say you implement complex numbers as an angle and
radius.  Then, unary plus could be used to normalize the angle to +/-
Pi and the radius to a positive number (by inverting the angle).
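
For instance, a toy polar-form complex class could use __pos__ exactly that
way (an illustrative sketch only, not taken from any real library):

import math

class Polar(object):
    """Complex number stored as (radius, angle)."""
    def __init__(self, r, theta):
        self.r, self.theta = r, theta
    def __pos__(self):
        # canonical form: non-negative radius, angle folded into (-pi, pi]
        r, theta = self.r, self.theta
        if r < 0:
            r, theta = -r, theta + math.pi
        theta = math.atan2(math.sin(theta), math.cos(theta))
        return Polar(r, theta)
    def __repr__(self):
        return "Polar(%g, %g)" % (self.r, self.theta)

print +Polar(3, 7)    # angle is folded back into (-pi, pi]
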
--
http://mail.python.org/mailman/listinfo/python-list


Re: Logging to zero or more destinations

2008-07-09 Thread samwyse
On Jul 8, 3:01 pm, Rob Wolfe <[EMAIL PROTECTED]> wrote:
> samwyse <[EMAIL PROTECTED]> writes:

> > P.S.  I tried researching this further by myself, but the logging
> > module doesn't come with source (apparently it's written in C?) and I
> > don't have the time to find and download the source to my laptop.
>
> Hmmm... that's strange. It is a pure Python package.
>
> $ ls /usr/lib/python2.5/logging/
> config.py  config.pyc  handlers.py  handlers.pyc  __init__.py  __init__.pyc
>
> HTH,
> Rob

Oops, my bad.  I was using IDLE and tried the "Open Module..." command
on logging, not logging.something.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Boost Python - C++ class' private static data blown away before accessing in Python?

2008-07-09 Thread Stodge
Thanks. Maybe it's a DLL boundary issue? I'll look into this too.

On Jul 5, 11:14 pm, Giuseppe Ottaviano <[EMAIL PROTECTED]> wrote:
> > In Python, I retrive an Entity from the EntityList:
>
> > elist = EntityList()
> > elist.append(Entity())
> > elist.append(Entity())
>
> > entity = elist.get_at(0)
>
> > entity.foo()
>
> > But it crashes inside foo() as the private static data is empty; or
> > rather the string array is empty. I know before that point that the
> > private static data is valid when accessed earlier by the C++ code as
> > the program works fine. It just won't work from Python, so somehow the
> > private static data has been blown away but I can't work out where or
> > why.
>
> Probably it is a problem of lifetime. What is the signature of append?  
> Who deletes the appended Entity in C++ code?
> If append takes a raw pointer, Boost.Python copies the pointer but  
> destroys the Entity object because it is a temporary and its reference  
> count went to zero. So the pointer in the list is referring to a  
> destroyed object, which results in undefined behaviour.
>
> Did you have a look at the lifetime policies of Boost.Python? The  
> simplest way to workaround the problem is using const reference  
> arguments, and always use value semantics. If it can result in a  
> performance penalty, another simple way is using shared_ptr's, which  
> have their own reference count (different from the one in CPython  
> lib), but Boost.Python does the magic to make them work together.
>
> HTH,
> Giuseppe

--
http://mail.python.org/mailman/listinfo/python-list


Re: Opening Unicode files

2008-07-09 Thread Gary Herron

Noorhan Abbas wrote:

Hello,
I wonder if you don't mind helping me out in this problem. I have been 
developing a tool in python that opens some unicode files, reading 
them and processing the data. It is working fine. When I started to 
write a cgi module that does the same thing for google 
appengine(actually it is using the same files that I used befoer), I 
get an error:
 
f = codecs.open (FileName , 'rU', 'utf-8')
 
It says that the above functions takes 3 arguments but 4 were given.


Truly?  I ran that line and found it works perfectly.

What version of python are you using (I used 2.5.1).
What OS?  (That should not matter, however you should tell us anyway.)
Are you sure your codecs is the Python module imported normally?  (You 
may have to show us all your code.)


With no code and a careful look at your error message, here's my guess:

That error message is what you get when you call an instance's method 
with the wrong number of arguments.  (And in that case the extra 
argument it's referring to is the instance itself being passed in as the 
first (self) parameter.)


I believe your codecs value is no longer the imported module, but rather 
some instance object you've assigned into it:

 codecs = ...something overwriting the module object ...
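
A contrived sketch of how that rebinding produces exactly this message (the
Fake class is made up purely for illustration):

class Fake(object):
    def open(self, name, mode):            # three parameters, counting self
        return None

codecs = Fake()                            # the module name is now shadowed
codecs.open('spam.txt', 'rU', 'utf-8')     # TypeError: open() takes exactly
                                           # 3 arguments (4 given)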


Gary Herron




I wonder if I need to do something before using the codecs library 
from within the cgi module?!
 
Thank you very much for your help,

Nora


Not happy with your email address?
Get the one you really want  
- millions of new email addresses available now at Yahoo! 




--
http://mail.python.org/mailman/listinfo/python-list


--
http://mail.python.org/mailman/listinfo/python-list


Re: Boost Python - C++ class' private static data blown away before accessing in Python?

2008-07-09 Thread Stodge
I wonder if it's a DLL boundary problem.

On Jul 5, 11:14 pm, Giuseppe Ottaviano <[EMAIL PROTECTED]> wrote:
> > In Python, I retrive an Entity from the EntityList:
>
> > elist = EntityList()
> > elist.append(Entity())
> > elist.append(Entity())
>
> > entity = elist.get_at(0)
>
> > entity.foo()
>
> > But it crashes inside foo() as the private static data is empty; or
> > rather the string array is empty. I know before that point that the
> > private static data is valid when accessed earlier by the C++ code as
> > the program works fine. It just won't work from Python, so somehow the
> > private static data has been blown away but I can't work out where or
> > why.
>
> Probably it is a problem of lifetime. What is the signature of append?  
> Who deletes the appended Entity in C++ code?
> If append takes a raw pointer, Boost.Python copies the pointer but  
> destroys the Entity object because it is a temporary and its reference  
> count went to zero. So the pointer in the list is referring to a  
> destroyed object, which results in undefined behaviour.
>
> Did you have a look at the lifetime policies of Boost.Python? The  
> simplest way to workaround the problem is using const reference  
> arguments, and always use value semantics. If it can result in a  
> performance penalty, another simple way is using shared_ptr's, which  
> have their own reference count (different from the one in CPython  
> lib), but Boost.Python does the magic to make them work together.
>
> HTH,
> Giuseppe

--
http://mail.python.org/mailman/listinfo/python-list


Anyone happen to have optimization hints for this loop?

2008-07-09 Thread dp_pearce
I have some code that takes data from an Access database and processes
it into text files for another application. At the moment, I am using
a number of loops that are pretty slow. I am not a hugely experienced
python user so I would like to know if I am doing anything
particularly wrong or that can be hugely improved through the use of
another method.

Currently, all of the values that are to be written to file are pulled
from the database and into a list called "domainVa". These values
represent 3D data and need to be written to text files using line
breaks to seperate 'layers'. I am currently looping through the list
and appending a string, which I then write to file. This list can
regularly contain upwards of half a million values...

count = 0
dmntString = ""
for z in range(0, Z):
    for y in range(0, Y):
        for x in range(0, X):
            fraction = domainVa[count]
            dmntString += "  "
            dmntString += fraction
            count = count + 1
        dmntString += "\n"
    dmntString += "\n"
dmntString += "\n***\n"

dmntFile = open(dmntFilename, 'wt')
dmntFile.write(dmntString)
dmntFile.close()

I have found that it is currently taking ~3 seconds to build the
string but ~1 second to write the string to file, which seems wrong (I
would normally guess the CPU/Memory would out perform disc writing
speeds).

Can anyone see a way of speeding this loop up? Perhaps by changing the
data format? Is it wrong to append a string and write once, or should
hold a file open and write at each instance?

Thank you in advance for your time,

Dan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question

2008-07-09 Thread Mike Driscoll
On Jul 9, 2:19 am, |e0 <[EMAIL PROTECTED]> wrote:
> So, i can't use wmi module on linux?
>
> On Wed, Jul 9, 2008 at 9:14 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:
> > I think the win32 module is only for windows.

WMI is a Windows thing. It stands for "Windows Management
Instrumentation". So it's not going to work on anything other than a
Windows box.

Mike
--
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread Diez B. Roggisch
dp_pearce wrote:

> I have some code that takes data from an Access database and processes
> it into text files for another application. At the moment, I am using
> a number of loops that are pretty slow. I am not a hugely experienced
> python user so I would like to know if I am doing anything
> particularly wrong or that can be hugely improved through the use of
> another method.
> 
> Currently, all of the values that are to be written to file are pulled
> from the database and into a list called "domainVa". These values
> represent 3D data and need to be written to text files using line
> breaks to seperate 'layers'. I am currently looping through the list
> and appending a string, which I then write to file. This list can
> regularly contain upwards of half a million values...
> 
> count = 0
> dmntString = ""
> for z in range(0, Z):
>     for y in range(0, Y):
>         for x in range(0, X):
>             fraction = domainVa[count]
>             dmntString += "  "
>             dmntString += fraction
>             count = count + 1
>         dmntString += "\n"
>     dmntString += "\n"
> dmntString += "\n***\n
> 
> dmntFile = open(dmntFilename, 'wt')
> dmntFile.write(dmntString)
> dmntFile.close()
> 
> I have found that it is currently taking ~3 seconds to build the
> string but ~1 second to write the string to file, which seems wrong (I
> would normally guess the CPU/Memory would out perform disc writing
> speeds).
> 
> Can anyone see a way of speeding this loop up? Perhaps by changing the
> data format? Is it wrong to append a string and write once, or should
> hold a file open and write at each instance?

Don't use in-place adding to concatenate strings. It might lead to
quadratic behavior.

Use the "".join()-idiom instead:

dmntStrings = []

dmntStrings.append("\n")

dmntFile.write("".join(dmntStrings))
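
Spelled out for the loop in question (an untested sketch; Z, Y, X, domainVa
and dmntFilename are the names from the original post):

parts = []
count = 0
for z in range(Z):
    for y in range(Y):
        for x in range(X):
            parts.append("  " + domainVa[count])
            count += 1
        parts.append("\n")
    parts.append("\n")
parts.append("\n***\n")

dmntFile = open(dmntFilename, 'wt')
dmntFile.write("".join(parts))
dmntFile.close()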

Diez
--
http://mail.python.org/mailman/listinfo/python-list


Determining when a file has finished copying

2008-07-09 Thread writeson
Hi all,

I'm writing some code that monitors a directory for the appearance of
files from a workflow. When those files appear I write a command file
to a device that tells the device how to process the file. The
appearance of the command file triggers the device to grab the
original file. My problem is I don't want to write the command file to
the device until the original file from the workflow has been copied
completely. Since these files are large, my program has a good chance
of scanning the directory while they are mid-copy, so I need to
determine which files are finished being copied and which are still
mid-copy.

I haven't seen anything on Google talking about this, and I don't see
an obvious way of doing this using the os.stat() method on the
filepath. Anyone have any ideas about how I might accomplish this?

Thanks in advance!
Doug
--
http://mail.python.org/mailman/listinfo/python-list


Re: numeric emulation and __pos__

2008-07-09 Thread Sion Arrowsmith
Ethan Furman  <[EMAIL PROTECTED]> wrote:
>Anybody have an example of when the unary + actually does something?

I've seen it (jokingly) used to implement a prefix increment
operator. I'm not going to repeat the details in case somebody
decides it's serious code.

-- 
\S -- [EMAIL PROTECTED] -- http://www.chaos.org.uk/~sion/
   "Frankly I have no feelings towards penguins one way or the other"
-- Arthur C. Clarke
   her nu becomeþ se bera eadward ofdun hlæddre heafdes bæce bump bump bump
--
http://mail.python.org/mailman/listinfo/python-list

Re: Regular Expressions Quick Question

2008-07-09 Thread Paul McGuire
On Jul 9, 2:24 am, "Rajanikanth Jammalamadaka" <[EMAIL PROTECTED]>
wrote:
> hi!
>
> Try this:
>
> >>> lis=['t','tes','test','testing']
> >>> [elem for elem in lis if re.compile("^te").search(elem)]
>
> ['tes', 'test', 'testing']
>
> Cheers,
>
> Raj
>
>
>
>
>
> On Wed, Jul 9, 2008 at 12:13 AM, Lamonte Harris <[EMAIL PROTECTED]> wrote:
> > Alright, basically I have a list of words in a file and I load each word
> > from each line into the array.  Then basically the question is how do I
> > check if the input word matches multiple words in the list.
>
> > Say someone input "test", how could I check if that word matches these list
> > of words:
>
> > test
> > testing
> > tested
>
> > Out of the list of
>
> > Hello
> > blah
> > example
> > test
> > ested
> > tested
> > testing
>
> > I want it to loop then check if the input word I used starts any of the
> > words in the list so if I typed 'tes'
>
> > Then:
>
> > test
> > testing
> > testing
>
> > would be appended to a new array.
>
> > I'm unsure how to do this in python.
>
> > Thanks in advanced.
>
> > --
> >http://mail.python.org/mailman/listinfo/python-list
>
> --
> "For him who has conquered the mind, the mind is the best of friends;
> but for one who has failed to do so, his very mind will be the
> greatest enemy."
>
> Rajanikanth

Give the built-in string functions a try before resorting to the re
howitzers:

>>> lis=['t','tes','test','testing']
>>> [elem for elem in lis if elem.startswith("te")]
['tes', 'test', 'testing']

-- Paul

--
http://mail.python.org/mailman/listinfo/python-list


Re: Boost Python - C++ class' private static data blown away before accessing in Python?

2008-07-09 Thread Stodge
Could it be a boundary problem? The static data is initialised by the
application. The problem arises when the python module tries to access
it.

On Jul 5, 11:14 pm, Giuseppe Ottaviano <[EMAIL PROTECTED]> wrote:
> > In Python, I retrive an Entity from the EntityList:
>
> > elist = EntityList()
> > elist.append(Entity())
> > elist.append(Entity())
>
> > entity = elist.get_at(0)
>
> > entity.foo()
>
> > But it crashes inside foo() as the private static data is empty; or
> > rather the string array is empty. I know before that point that the
> > private static data is valid when accessed earlier by the C++ code as
> > the program works fine. It just won't work from Python, so somehow the
> > private static data has been blown away but I can't work out where or
> > why.
>
> Probably it is a problem of lifetime. What is the signature of append?  
> Who deletes the appended Entity in C++ code?
> If append takes a raw pointer, Boost.Python copies the pointer but  
> destroys the Entity object because it is a temporary and its reference  
> count went to zero. So the pointer in the list is referring to a  
> destroyed object, which results in undefined behaviour.
>
> Did you have a look at the lifetime policies of Boost.Python? The  
> simplest way to workaround the problem is using const reference  
> arguments, and always use value semantics. If it can result in a  
> performance penalty, another simple way is using shared_ptr's, which  
> have their own reference count (different from the one in CPython  
> lib), but Boost.Python does the magic to make them work together.
>
> HTH,
> Giuseppe

--
http://mail.python.org/mailman/listinfo/python-list


trouble building Python 2.5.1 on solaris 10

2008-07-09 Thread mg
When make gets to the _ctypes section, I am getting the following in
my output:

building '_ctypes' extension
creating build/temp.solaris-2.10-i86pc-2.5/home/ecuser/Python-2.5.1/
Modules/_ctypes
creating build/temp.solaris-2.10-i86pc-2.5/home/ecuser/Python-2.5.1/
Modules/_ctypes/libffi
creating build/temp.solaris-2.10-i86pc-2.5/home/ecuser/Python-2.5.1/
Modules/_ctypes/libffi/src
creating build/temp.solaris-2.10-i86pc-2.5/home/ecuser/Python-2.5.1/
Modules/_ctypes/libffi/src/x86
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/_ctypes.c -o build/temp.solaris-2.10-i86pc-2.5/home/
ecuser/Python-2.5.1/Modules/_ctypes/_ctypes.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/callbacks.c -o build/temp.solaris-2.10-i86pc-2.5/home/
ecuser/Python-2.5.1/Modules/_ctypes/callbacks.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/callproc.c -o build/temp.solaris-2.10-i86pc-2.5/home/
ecuser/Python-2.5.1/Modules/_ctypes/callproc.o
/home/ecuser/Python-2.5.1/Modules/_ctypes/callproc.c: In function
`_CallProc':
/home/ecuser/Python-2.5.1/Modules/_ctypes/callproc.c:918: warning:
implicit declaration of function `alloca'
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/stgdict.c -o build/temp.solaris-2.10-i86pc-2.5/home/
ecuser/Python-2.5.1/Modules/_ctypes/stgdict.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/cfield.c -o build/temp.solaris-2.10-i86pc-2.5/home/
ecuser/Python-2.5.1/Modules/_ctypes/cfield.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/malloc_closure.c -o build/temp.solaris-2.10-i86pc-2.5/
home/ecuser/Python-2.5.1/Modules/_ctypes/malloc_closure.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/libffi/src/prep_cif.c -o build/temp.solaris-2.10-
i86pc-2.5/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/src/
prep_cif.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-2.10-i86pc-2.5/libffi/include -Ibuild/temp.solaris-2.10-
i86pc-2.5/libffi -I/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/
src -I./Include -I. -I/usr/local/include -I/home/ecuser/Python-2.5.1/
Include -I/home/ecuser/Python-2.5.1 -c /home/ecuser/Python-2.5.1/
Modules/_ctypes/libffi/src/x86/ffi64.c -o build/temp.solaris-2.10-
i86pc-2.5/home/ecuser/Python-2.5.1/Modules/_ctypes/libffi/src/x86/
ffi64.o
gcc -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-
prototypes -I. -I/home/ecuser/Python-2.5.1/./Include -Ibuild/
temp.solaris-


Re: FOSS projects exhibiting clean/good OOP?

2008-07-09 Thread Larry Bates

Phillip B Oldham wrote:

I'm wondering whether anyone can offer suggestions on FOSS projects/
apps which exhibit solid OO principles, clean code, good inline
documentation, and sound design principles?

I'm devoting some time to reviewing other people's code to advance my
skills. Its good to review bad code (of which I have more than enough
examples) as well as good, but I'm lacking in finding good examples.

Projects of varying sizes would be great.


I think the following are very good:

Python Imaging Library (PIL)
elementTree
(actually anything that Fredrick Lundh has written is an excellent example)

ReportLab

wxPython

Django

That should keep you busy for a while.

-Larry



--
http://mail.python.org/mailman/listinfo/python-list


Re: Determining when a file has finished copying

2008-07-09 Thread Manuel Vazquez Acosta

This seems like a synchronization problem. A scenario description could clear
things up so we can help:

Program W (the workflow) copies file F to directory B
Program D (the dog) polls directory B to find out if there's any new file F

In this scenario, program D does not know whether F has been fully
copied, but W does.

Solution:
Create a custom lock mechanism. Program W writes a file D/F.lock to
indicate file F is not complete, it's removed when F is fully copied.
If program W crashes in mid-copy, both F and F.lock are kept, so program D
does not bother to process F. Recovery from the crash in W would be another
issue to tackle.
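
A rough sketch of both sides of that protocol (file and function names are
made up):

import os, shutil

def copy_with_lock(src, dst_dir):
    # writer side: announce "not complete yet", copy, then remove the lock
    name = os.path.basename(src)
    lock = os.path.join(dst_dir, name + '.lock')
    open(lock, 'w').close()
    shutil.copyfile(src, os.path.join(dst_dir, name))
    os.remove(lock)

def ready(dst_dir, name):
    # watcher side: only process F when F exists and F.lock does not
    return (os.path.exists(os.path.join(dst_dir, name)) and
            not os.path.exists(os.path.join(dst_dir, name + '.lock')))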

Best regards,
Manuel.

writeson wrote:
> Hi all,
> 
> I'm writing some code that monitors a directory for the appearance of
> files from a workflow. When those files appear I write a command file
> to a device that tells the device how to process the file. The
> appearance of the command file triggers the device to grab the
> original file. My problem is I don't want to write the command file to
> the device until the original file from the workflow has been copied
> completely. Since these files are large, my program has a good chance
> of scanning the directory while they are mid-copy, so I need to
> determine which files are finished being copied and which are still
> mid-copy.
> 
> I haven't seen anything on Google talking about this, and I don't see
> an obvious way of doing this using the os.stat() method on the
> filepath. Anyone have any ideas about how I might accomplish this?
> 
> Thanks in advance!
> Doug
> --
> http://mail.python.org/mailman/listinfo/python-list
> 

--
http://mail.python.org/mailman/listinfo/python-list


Re: Determining when a file has finished copying

2008-07-09 Thread Larry Bates

writeson wrote:

Hi all,

I'm writing some code that monitors a directory for the appearance of
files from a workflow. When those files appear I write a command file
to a device that tells the device how to process the file. The
appearance of the command file triggers the device to grab the
original file. My problem is I don't want to write the command file to
the device until the original file from the workflow has been copied
completely. Since these files are large, my program has a good chance
of scanning the directory while they are mid-copy, so I need to
determine which files are finished being copied and which are still
mid-copy.

I haven't seen anything on Google talking about this, and I don't see
an obvious way of doing this using the os.stat() method on the
filepath. Anyone have any ideas about how I might accomplish this?

Thanks in advance!
Doug


The best way to do this is to have the program that copies the files copy them 
to a temporarily named file and rename it when it is completed.  That way you 
know when it is done by scanning for files with a specific mask.
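
A rough sketch of that approach (names are made up; os.rename is atomic on
the same filesystem on POSIX):

import glob, os, shutil

def copy_then_rename(src, dst_dir):
    # copy under a temporary suffix, rename only once the copy is complete
    final = os.path.join(dst_dir, os.path.basename(src))
    tmp = final + '.part'
    shutil.copyfile(src, tmp)
    os.rename(tmp, final)

def finished_files(dst_dir):
    # the watcher never sees half-copied files under their final names
    return [f for f in glob.glob(os.path.join(dst_dir, '*'))
            if not f.endswith('.part')]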


If that is not possible you might be able to use pyinotify 
(http://pyinotify.sourceforge.net/) to watch for WRITE_CLOSE events on the 
directory and then process the files.


-Larry

--
http://mail.python.org/mailman/listinfo/python-list


Openine unicode files

2008-07-09 Thread Noorhan Abbas
Hello,
I wonder if someone can advise me on how to open unicode utf-8 files without 
using the codecs library. I am trying to use codecs.open() from within 
Google App Engine but it is not working.
Thank you very much in advance,
Nora


  __
Not happy with your email address?.
Get the one you really want - millions of new email addresses available now at 
Yahoo! http://uk.docs.yahoo.com/ymail/new.html
--
http://mail.python.org/mailman/listinfo/python-list

Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread writeson
On Jul 9, 12:04 pm, dp_pearce <[EMAIL PROTECTED]> wrote:
> I have some code that takes data from an Access database and processes
> it into text files for another application. At the moment, I am using
> a number of loops that are pretty slow. I am not a hugely experienced
> python user so I would like to know if I am doing anything
> particularly wrong or that can be hugely improved through the use of
> another method.
>
> Currently, all of the values that are to be written to file are pulled
> from the database and into a list called "domainVa". These values
> represent 3D data and need to be written to text files using line
> breaks to seperate 'layers'. I am currently looping through the list
> and appending a string, which I then write to file. This list can
> regularly contain upwards of half a million values...
>
> count = 0
> dmntString = ""
> for z in range(0, Z):
>     for y in range(0, Y):
>         for x in range(0, X):
>             fraction = domainVa[count]
>             dmntString += "  "
>             dmntString += fraction
>             count = count + 1
>         dmntString += "\n"
>     dmntString += "\n"
> dmntString += "\n***\n
>
> dmntFile     = open(dmntFilename, 'wt')
> dmntFile.write(dmntString)
> dmntFile.close()
>
> I have found that it is currently taking ~3 seconds to build the
> string but ~1 second to write the string to file, which seems wrong (I
> would normally guess the CPU/Memory would out perform disc writing
> speeds).
>
> Can anyone see a way of speeding this loop up? Perhaps by changing the
> data format? Is it wrong to append a string and write once, or should
> hold a file open and write at each instance?
>
> Thank you in advance for your time,
>
> Dan

Hi Dan,

Looking at the code sample you sent, you could do some clever stuff
making dmntString a list rather than a string and appending everywhere
you're doing a +=. Then at the end you build the string you write to
the file one time with a dmntFile.write(''.join(dmntList)). But I think
the more straightforward thing would be to replace all the dmntString
+= ... lines in the loops with a dmntFile.write(whatever), so you're just
constantly adding onto the file in various ways.

I think the slowdown you're seeing with your code as written comes from
Python strings being immutable. Every time you perform a dmntString
+= ... in the loops you're creating a new dmntString, copying in the
contents of the old plus the appended content. And if your list can
reach half a million items, well that's a TON of string create and
string copy operations.

Hope you find this helpful,
Doug
--
http://mail.python.org/mailman/listinfo/python-list


Re: start reading from certain line

2008-07-09 Thread norseman

Tim Cook wrote:

On Wed, 2008-07-09 at 03:30 -0700, antar2 wrote:

I am a starter in python and would like to write a program that reads
lines starting with a line that contains a certain word.
For example the program starts reading the program when a line is
encountered that contains 'item 1'


The weather is nice
Item 1
We will go to the seaside
...

Only the lines coming after Item 1 should be read


file=open(filename)
while True:
   line=file.readline()
   if not line:
      break

   if 'Item 1' in line:
      print line


HTH,
Tim






--
http://mail.python.org/mailman/listinfo/python-list

===

I would use:

readthem= 0
file=open(filename,'r')
while readthem == 0:
  line=file.readline()
  if not line:
    break
  if 'Item 1' in line:
    readthem= 1
    # print line  # uncomment if 'Item 1' is to be printed
while line:
  line= file.readline()
  print line  # see note-1 below
#  end of segment

The first while has lots of needed tests, causing lots of bouncing about.
It may even need more tests to make sure it is the right tag point, as in
case 'Item 1' accidentally occurred in a line previous to the intended one.

The second while just buzzes on through.
If the objective is to process from the tag point on, then the processing
code will be cleaner and easier to read in the second while.


note-1:
in this form the line terminators in the file are also printed and then 
print attaches its own EOL (newline). This gives a line double spacing 
effect, at least in Unix.  Putting the comma at the end (print line,) 
will stop that. You will need to modify two lines if used.  Which to use 
depends on you.



Steve
[EMAIL PROTECTED]




--
http://mail.python.org/mailman/listinfo/python-list


Re: Determining when a file has finished copying

2008-07-09 Thread norseman


Also available:
  pgm-W copies/creates-fills whatever B/dummy
  when done, pgm-W renames B/dummy to B/F
  pgm-D only scouts for B/F and does it thing when found

Steve
[EMAIL PROTECTED]


Manuel Vazquez Acosta wrote:


This seems a synchronization problem. A scenario description could clear
things up so we can help:

Program W (The workflow) copies file F to directory B
Program D (the dog) polls directory B to find is there's any new file F

In this scenario, program D does not know whether F has been fully
copied, but W does.

Solution:
Create a custom lock mechanism. Program W writes a file D/F.lock to
indicate file F is not complete, it's removed when F is fully copied.
I program W crashes in mid-copy both F and F.lock are kept so program D
does not bother to process F. Recovery from the crash in W would another
issue to tackle down.

Best regards,
Manuel.

writeson wrote:

Hi all,

I'm writing some code that monitors a directory for the appearance of
files from a workflow. When those files appear I write a command file
to a device that tells the device how to process the file. The
appearance of the command file triggers the device to grab the
original file. My problem is I don't want to write the command file to
the device until the original file from the workflow has been copied
completely. Since these files are large, my program has a good chance
of scanning the directory while they are mid-copy, so I need to
determine which files are finished being copied and which are still
mid-copy.

I haven't seen anything on Google talking about this, and I don't see
an obvious way of doing this using the os.stat() method on the
filepath. Anyone have any ideas about how I might accomplish this?

Thanks in advance!
Doug
--
http://mail.python.org/mailman/listinfo/python-list



--
http://mail.python.org/mailman/listinfo/python-list



--
http://mail.python.org/mailman/listinfo/python-list


Re: how to remove oldest files up to a limit efficiently

2008-07-09 Thread Terry Reedy



Dan Stromberg wrote:

On Tue, 08 Jul 2008 15:18:23 -0700, [EMAIL PROTECTED] wrote:


I need to mantain a filesystem where I'll keep only the most recently
used (MRU) files; least recently used ones (LRU) have to be removed to
leave space for newer ones. The filesystem in question is a clustered fs
(glusterfs) which is very slow on "find" operations. To add complexity
there are more than 10^6 files in 2 levels: 16³ dirs with equally
distributed number of files inside.



Any suggestions of how to do it effectively?


os.walk once.

Build a list of all files in memory.

Sort them by whatever time you prefer - you can get times from os.stat.


Since you do not need all 10**6 files sorted, you might also try the 
heapq module.  The entries into the heap would be (time, fileid)
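
For example, something along these lines (untested sketch, using paths as
the file ids):

import heapq
import os

def oldest_files(root, n):
    # lazily generate (mtime, path) pairs; nsmallest keeps only the n oldest
    pairs = ((os.stat(os.path.join(d, f)).st_mtime, os.path.join(d, f))
             for d, dirs, files in os.walk(root) for f in files)
    return heapq.nsmallest(n, pairs)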


--
http://mail.python.org/mailman/listinfo/python-list

Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread Casey
On Jul 9, 12:04 pm, dp_pearce <[EMAIL PROTECTED]> wrote:
> I have some code that takes data from an Access database and processes
> it into text files for another application. At the moment, I am using
> a number of loops that are pretty slow. I am not a hugely experienced
> python user so I would like to know if I am doing anything
> particularly wrong or that can be hugely improved through the use of
> another method.
>
> Currently, all of the values that are to be written to file are pulled
> from the database and into a list called "domainVa". These values
> represent 3D data and need to be written to text files using line
> breaks to seperate 'layers'. I am currently looping through the list
> and appending a string, which I then write to file. This list can
> regularly contain upwards of half a million values...
>
> count = 0
> dmntString = ""
> for z in range(0, Z):
>     for y in range(0, Y):
>         for x in range(0, X):
>             fraction = domainVa[count]
>             dmntString += "  "
>             dmntString += fraction
>             count = count + 1
>         dmntString += "\n"
>     dmntString += "\n"
> dmntString += "\n***\n
>
> dmntFile     = open(dmntFilename, 'wt')
> dmntFile.write(dmntString)
> dmntFile.close()
>
> I have found that it is currently taking ~3 seconds to build the
> string but ~1 second to write the string to file, which seems wrong (I
> would normally guess the CPU/Memory would out perform disc writing
> speeds).
>
> Can anyone see a way of speeding this loop up? Perhaps by changing the
> data format? Is it wrong to append a string and write once, or should
> hold a file open and write at each instance?
>
> Thank you in advance for your time,
>
> Dan

Maybe try something like this ...

count = 0
dmntList = []
for z in xrange(Z):
    for y in xrange(Y):
        dmntList.extend(["  " + domainVa[count+x] for x in xrange(X)])
        dmntList.append("\n")
        count += X
    dmntList.append("\n")
dmntList.append("\n***\n")
dmntString = ''.join(dmntList)
--
http://mail.python.org/mailman/listinfo/python-list


Re: "in"consistency?

2008-07-09 Thread David C. Ullrich
In article <[EMAIL PROTECTED]>,
 Terry Reedy <[EMAIL PROTECTED]> wrote:

> David C. Ullrich wrote:
> > In article <[EMAIL PROTECTED]>,
> >  Terry Reedy <[EMAIL PROTECTED]> wrote:
> 
> >>> Is there a reason for the inconsistency? I would
> >>> have thought "in" would check for elements of a
> >>> sequence, regardless of what sort of sequence it was...
> >> It is not an inconsistency but an extension corresponding to the 
> >> limitation of what an string element can be.
> > 
> > It's an inconsistency. That doesn't mean it's a bad thing or that
> > I want my money back. It may well be a reasonable inconsistency -
> > strings _can_ work that way while it's clear lists had better not.
> > But it's an inconsistency.
> 
> To decisively argue 'inconsistency' as factual or not, versus us having 
> divergent opinions, you would have to supply a technical definition ;-) 
>   The math definition of 'leading to a contradiction' in the sense of 
> being able to prove False is True, does not seem to apply here.
> 
> However,
> a) In common English, 'in' and 'contains', applied to strings of 
> characters (text), is understood as applying to substrings that appear 
> in the text.  This is also true of many other programming languages. 
> 'Dictionary' contains 'diction'.  This is even the basis of various word 
> games.
> b) Python otherwise allows operators to vary in meaning for different 
> classes.
> 
> In any case, back to your original question: the extension of meaning, 
> 'inconsistent' or not, was deliberated and adopted on the basis that the 
> usefulness of the extension would outweigh the confusion wrought by the 
> class-specific nature of the extension.  (In other words, threads such 
> as this *were* anticipated ;-)

I wasn't saying that the fact that the behavior of "in" for
strings is inconsistent with the behavior for lists was a bad
thing - I was just asking about the reason for it.

(I also wasn't claiming that it was inconsistent with the
common English usage of "in"...)

People have pointed out that "in" for strings _can_ work that
way, while (of course) "in" for lists had better not. That's
fine.
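
A quick interpreter session makes the difference concrete:

>>> 'ran' in 'operand'        # for strings, 'in' means substring
True
>>> ['a'] in ['a', 'b']       # for lists, it means element, never sub-list
False
>>> 'a' in ['a', 'b']
True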

> Terry Jan Reedy

-- 
David C. Ullrich
--
http://mail.python.org/mailman/listinfo/python-list


User-defined exception: "global name 'TestRunError' is not defined"

2008-07-09 Thread jmike
I'm using some legacy code that has a user-defined exception in it.

The top level program includes this line

from TestRunError import *

It also imports several other modules.  These other modules do not
explicitly import TestRunError.  TestRunError is raised in various
places throughout the modules.

There are a few cases where something goes wrong with the program and
I get this error:

FATAL ERROR: global name 'TestRunError' is not defined

I realize this is kind of a silly question to ask in the general sense
without showing more of the code, but does anyone have any suggestions
as to the most likely causes of this error coming up?  Could it be
something like an error happening where it is not explicitly in a try
block, or an error happening while I'm already in an except block, or
something like that?
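
For what it's worth, the setup described above is easy to reproduce with a
tiny module (names made up); the NameError only surfaces when that branch
actually runs:

# helper.py -- never imports TestRunError
def check(value):
    if value < 0:
        raise TestRunError("value went negative")
        # NameError: global name 'TestRunError' is not defined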

Thanks,
  --JMike
--
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread Terry Reedy



Diez B. Roggisch wrote:

dp_pearce wrote:






count = 0
dmntString = ""
for z in range(0, Z):
for y in range(0, Y):
for x in range(0, X):
fraction = domainVa[count]
dmntString += "  "
dmntString += fraction


Minor point, just construct "  "+domain[count] all at once


count = count + 1
dmntString += "\n"
dmntString += "\n"
dmntString += "\n***\n

dmntFile = open(dmntFilename, 'wt')
dmntFile.write(dmntString)
dmntFile.close()

I have found that it is currently taking ~3 seconds to build the
string but ~1 second to write the string to file, which seems wrong (I
would normally guess the CPU/Memory would out perform disc writing
speeds).

Can anyone see a way of speeding this loop up? Perhaps by changing the
data format? Is it wrong to append a string and write once, or should
hold a file open and write at each instance?


Don't use in-place adding to concatenate strings. It might lead to
quadaratic behavior.


Semantically, repeated extension of an immutable is inherently 
quadratic.  And it is for strings in Python until 2.6 or possibly 2.5 
(not sure), when more sophisticated code was added because people kept 
falling into this trap.  But since the more sophisticated code 'cheats' 
by mutating the immutable (with an algorithm similar to list.extend),I 
am pretty sure it will only be used when there is only one reference to 
the string and the extension is with +=, so that the extension can 
reliably be done in-place without changing semantics.  Thus, Python 
programmers should still learn the following.



Use the "".join()-idiom instead:


(Or upgrade)


dmntStrings = []

dmntStrings.append("\n")

dmntFile.write("".join(dmntStrings))


Note that the minor point above will cut the number of things to be 
joined nearly in half.


tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: sage vs enthought for sci computing

2008-07-09 Thread Ken Starks

sturlamolden wrote:

On 7 Jul, 22:35, [EMAIL PROTECTED] wrote:

Hello,
I have recently become interested in using python for scientific
computing, and came across both sage and enthought. I am curious if
anyone can tell me what the differences are between the two, since
there seems to be a lot of overlap (from what I have seen). If my goal
is to replace matlab (we do signal processing and stats on
physiological data, with a lot of visualization), would sage or
enthought get me going quicker? I realize that this is a pretty vague
question, and I can probably accomplish the same with either, but what
would lead me to choose one over the other?
Thanks!


I work in neuroscience, and use Python of signal processing. I've used
Matlab before. Python is just better.

I do not use either Sage or Enthought. Instead I have istalled a
vanilla Python and the libraries I need. The most important parts are:

- Python 2.5.2
- NumPy
- SciPy
- Matplotlib
- wxPython
- pywin32
- PIL
- Cython
- PyOpenGL
- mpi4py
- processing module
- gfortran and gcc (not a Python library, but I need a C and Fortran
compiler)

Less important stuff I also have installed:

- Twisted
- PyGame
- MySQL and mysqldb
- Python for .NET (http://pythonnet.sourceforge.net)
- VideoCapture




I would add RPy for luck!
The Rproject stats package seems to have attracted a lot of medical 
users, and this is a python interface.


I'm not entirely sure what the advantage of a python wrapper over R is
(compared with the stand-alone Rproject language), but presumably it would
be to combine its functionality with that of some of the python libraries
above.

Anyway, you get lots of graphics for exploratory data analysis, high
quality stats, the ability to write scripts.

The RPy is on sourceforge:
  http://rpy.sourceforge.net/

the Rproject itself is at:
  http://www.r-project.org/
and there is a whole CRAN (Comprehensive R archive network)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Openine unicode files

2008-07-09 Thread Terry Reedy



Noorhan Abbas wrote:

Hello,
I wonder if someone can advise me on how to open unicode utf-8 files 
without using the codecs library . I am trying to use the codecs.open() 
from within Google Appengine but it is not working.  


When posting about something 'not working', post at least a line of code 
and the resulting exception traceback, if there is one, or a printing of 
the erroneous value, if that instead is the problem.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Impossible to change methods with special names of instances of new-style classes?

2008-07-09 Thread samwyse
On Jul 8, 4:56 pm, Joseph Barillari <[EMAIL PROTECTED]> wrote:

> My question is: did something about the way the special method names are
> implemented change for new-style classes?

Just off the top of my head, I'd guess that it's due to classes
already having a default __call__ method, used when you instatiate.
Remember, the Python compiler doesn't know the difference between
this:
  a = MyClass
  instance = a()
and this:
  a = myFunc
  result = a()
--
http://mail.python.org/mailman/listinfo/python-list


socket-module: different behaviour on windows / unix when a timeout is set

2008-07-09 Thread Mirko Vogt
Hey,

it seems that the socket-module behaves differently on unix / windows when a 
timeout is set.
Here an example:

# test.py

import socket
sock=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'trying to connect...'
sock.connect(('127.0.0.1',))
print 'connected!'


# executed on windows

>C:\Python25\python.exe test.py
trying to connect...
Traceback (most recent call last):
  File "test.py", line 4, in 
sock.connect(('127.0.0.1',))
  File "", line 1, in connect
socket.error: (10061, 'Connection refused')
>


# executed on linux

$ python test.py 
trying to connect...
Traceback (most recent call last):
  File "test.py", line 4, in 
sock.connect(('127.0.0.1',))
  File "", line 1, in connect
socket.error: (111, 'Connection refused')
$


Even if the error codes are different, both raise a socket.error with the 
message 'Connection refused' - good so far.
Now I will change the code slightly - to be precise I set a timeout on the 
socket:


# test.py

import socket
sock=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.settimeout(3.0)  # <--
print 'trying to connect...'
sock.connect(('127.0.0.1',))
print 'connected!'


# executed on linux

$ python test.py 
trying to connect...
Traceback (most recent call last):
  File "test.py", line 5, in 
sock.connect(('127.0.0.1',))
  File "", line 1, in connect
socket.error: (111, 'Connection refused')
$


# executed on windows

>C:\Python25\python.exe test.py
trying to connect...
connected!
>


The code executed by python running on windows does *not* raise the exception 
anymore.
The Linux does as expected.

Is that behaviour common or even documented? Found nothing.

It took me lots of time to figure this out, because there was no exception 
raised when testing for open / ports.

When I try to read from the socket (e.g. on a port on which nothing runs) I 
get a timeout after the value specified via settimeout().

Thanks in advance,

Mirko
--
http://mail.python.org/mailman/listinfo/python-list


Re: Determining when a file has finished copying

2008-07-09 Thread writeson
Guys,

Thanks for your replies, they are helpful. I should have included in
my initial question that I don't have as much control over the program
that writes (pgm-W) as I'd like. Otherwise, the write to a different
filename and then rename solution would work great. There's no way to
tell from the os.stat() methods to tell when the file is finished
being copied? I ran some test programs, one of which continously
copies big files from one directory to another, and another that
continously does a glob.glob("*.pdf") on those files and looks at the
st_atime and st_mtime parts of the return value of os.stat(filename).
>From that experiment it looks like st_atime and st_mtime equal each
other until the file has finished being copied. Nothing in the
documentation about st_atime or st_mtime leads me to think this is
true, it's just my observations about the two test programs I've
described.

Any thoughts? Thanks!
Doug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Boost Python - C++ class' private static data blown away before accessing in Python?

2008-07-09 Thread Stodge
Oops - I didn't see my post so I thought something had gone wrong and
reposted. Apologies for the multiple posts.

On Jul 9, 11:57 am, Stodge <[EMAIL PROTECTED]> wrote:
> Could it be a boundary problem? The static data is initialised by the
> application. The problem arises when the python module tries to access
> it.
>
> On Jul 5, 11:14 pm, Giuseppe Ottaviano <[EMAIL PROTECTED]> wrote:
>
> > > In Python, I retrive an Entity from the EntityList:
>
> > > elist = EntityList()
> > > elist.append(Entity())
> > > elist.append(Entity())
>
> > > entity = elist.get_at(0)
>
> > > entity.foo()
>
> > > But it crashes inside foo() as the private static data is empty; or
> > > rather the string array is empty. I know before that point that the
> > > private static data is valid when accessed earlier by the C++ code as
> > > the program works fine. It just won't work from Python, so somehow the
> > > private static data has been blown away but I can't work out where or
> > > why.
>
> > Probably it is a problem of lifetime. What is the signature of append?  
> > Who deletes the appended Entity in C++ code?
> > If append takes a raw pointer, Boost.Python copies the pointer but  
> > destroys the Entity object because it is a temporary and its reference  
> > count went to zero. So the pointer in the list is referring to a  
> > destroyed object, which results in undefined behaviour.
>
> > Did you have a look at the lifetime policies of Boost.Python? The  
> > simplest way to workaround the problem is using const reference  
> > arguments, and always use value semantics. If it can result in a  
> > performance penalty, another simple way is using shared_ptr's, which  
> > have their own reference count (different from the one in CPython  
> > lib), but Boost.Python does the magic to make them work together.
>
> > HTH,
> > Giuseppe

--
http://mail.python.org/mailman/listinfo/python-list


About Google App Engine

2008-07-09 Thread Francisco Perez
Hello every body:
Recently the Google boys announced their latest toy: Google App Engine,
a really great idea. Although the GAE site has documentation and guides,
I think it does not cover some of the best practices for when we really
build a web site. I mean layers, design patterns, etc.
In the link below [1] we can find a completely functional Ads site.
And the best thing is that, for all the people like me who are not too
good in English, the site has a Help (Ayuda) section that explains the
Making Of (Como se Hizo) of the site. There is a great explanation in
Spanish of MVC patterns and how to apply them when building a web site.

Check it.

[1] http://bazar.appspot.com
--
http://mail.python.org/mailman/listinfo/python-list


formatting list -> comma separated

2008-07-09 Thread Robert
given d:

d = ["soep", "reeds", "ook"]

I want it to print like

soep, reeds, ook

I've come up with :

print ("%s"+", %s"*(len(d)-1)) % tuple(d)

but this fails for d = []

any (pythonic) options for this?

Robert



--
http://mail.python.org/mailman/listinfo/python-list


Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread samwyse
On Jul 8, 11:01 am, Kris Kennaway <[EMAIL PROTECTED]> wrote:
> samwyse wrote:

> > You might want to look at Plex.
> >http://www.cosc.canterbury.ac.nz/greg.ewing/python/Plex/
>
> > "Another advantage of Plex is that it compiles all of the regular
> > expressions into a single DFA. Once that's done, the input can be
> > processed in a time proportional to the number of characters to be
> > scanned, and independent of the number or complexity of the regular
> > expressions. Python's existing regular expression matchers do not have
> > this property. "

> Hmm, unfortunately it's still orders of magnitude slower than grep in my
> own application that involves matching lots of strings and regexps
> against large files (I killed it after 400 seconds, compared to 1.5 for
> grep), and that's leaving aside the much longer compilation time (over a
> minute).  If the matching was fast then I could possibly pickle the
> lexer though (but it's not).

That's funny, the compilation is almost instantaneous for me.
However, I just tested it against several files, the first containing
4875*'a', the rest each twice the size of the previous.  And you're
right, for each doubling of the file size, the match takes four times
as long, meaning O(n^2).  156000*'a' would probably take 8 hours.
Here are my results:

compile_lexicon() took 0.0236021580595 secs
test('file-0.txt') took 24.8322969831 secs
test('file-1.txt') took 99.3956799681 secs
test('file-2.txt') took 398.349623132 secs

And here's my (probably over-engineered) testbed:

from __future__ import with_statement
from os.path import exists
from timeit import Timer

from Plex import *

filename = "file-%d.txt"

def create_files(n):
    for x in range(0,n):
        fname = filename % x
        if not exists(fname):
            print 'creating', fname
            with open(fname, 'w') as f:
                print >>f, (4875*2**x)*'a',

def compile_lexicon():
    global lexicon
    lexicon = Lexicon([
        (Rep(AnyBut(' "='))+Str('/'),  TEXT),
        (AnyBut('\n'), IGNORE),
        ])

def test(fname):
    with open(fname, 'r') as f:
        scanner = Scanner(lexicon, f, fname)
        while 1:
            token = scanner.read()
            #print token
            if token[0] is None:
                break

def my_timed_test(func_name, *args):
    stmt = func_name + '(' + ','.join(map(repr, args)) + ')'
    t = Timer(stmt, "from __main__ import "+func_name)
    print stmt, 'took', t.timeit(1), 'secs'

if __name__ == '__main__':
    create_files(6)
    my_timed_test('compile_lexicon')
    for x in range(0,4):
        my_timed_test('test', filename%x)
--
http://mail.python.org/mailman/listinfo/python-list


Re: formatting list -> comma separated

2008-07-09 Thread Jerry Hill
On Wed, Jul 9, 2008 at 3:23 PM, Robert <[EMAIL PROTECTED]> wrote:
> given d:
> d = ["soep", "reeds", "ook"]
>
> I want it to print like
> soep, reeds, ook

use the join() method of strings, like this:
>>> d = ["soep", "reeds", "ook"]
>>> ', '.join(d)
'soep, reeds, ook'
>>> d = []
>>> ', '.join(d)
''
>>>

-- 
Jerry
--
http://mail.python.org/mailman/listinfo/python-list


Re: formatting list -> comma separated

2008-07-09 Thread Paul Hankin
On Jul 9, 8:23 pm, "Robert" <[EMAIL PROTECTED]> wrote:
> given d:
>
> d = ["soep", "reeds", "ook"]
>
> I want it to print like
>
> soep, reeds, ook
>
> I've come up with :
>
> print ("%s"+", %s"*(len(d)-1)) % tuple(d)
>
> but this fails for d = []
>
> any (pythonic) options for this?

print ', '.join(d)

--
Paul Hankin
--
http://mail.python.org/mailman/listinfo/python-list


Re: Impossible to change methods with special names of instances of new-style classes?

2008-07-09 Thread Terry Reedy



samwyse wrote:

On Jul 8, 4:56 pm, Joseph Barillari <[EMAIL PROTECTED]> wrote:


My question is: did something about the way the special method names are
implemented change for new-style classes?


I believe the difference is that for new-style classes, when special 
methods are called 'behind the scenes' to implement built-in syntax and 
methods, they are looked up directly on the class instead of first on 
the instance.  Note that functions attached to instances are *not* 
methods and do not get combined with the instance as a bound method.
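
To illustrate (a minimal Python 2 sketch, not from Terry's post): patching 
__str__ on an instance only takes effect for a classic class; for a 
new-style class, str() goes straight to the type.

class Classic:
    pass

class NewStyle(object):
    pass

c = Classic()
n = NewStyle()

# Plain functions attached to the instances (not bound methods).
c.__str__ = lambda: 'patched classic'
n.__str__ = lambda: 'patched new-style'

print str(c)   # 'patched classic' -- found on the instance
print str(n)   # default '<__main__.NewStyle object at 0x...>' -- found on the type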


--
http://mail.python.org/mailman/listinfo/python-list


variable question

2008-07-09 Thread Support Desk
I am trying to assign a variable using an if / else statement like so:

 

If condition1:

Variable = something

If condition2:

Variable = something else

Do stuff with variable.

 

But the variable assignment doesn't survive outside the if statement. Is
there any better way to assign variables using an if statement or exception
so I don't have to write two almost identical if statements. This is
probably a dumb question.

--
http://mail.python.org/mailman/listinfo/python-list

Re: a simple 'for' question

2008-07-09 Thread Ethan Furman

Ben Keshet wrote:
it didn't help.  it reads the pathway "as is" (see errors for both 
tries).  It looks like it had the right pathway the first time, but 
could not find it because it searched in the path/way instead of in the 
path\way.  thanks for trying.


The form of slash ('\' vs '/') is irrelevant to Python.  At least on 
Windows.



folders= ['1','2','3']
for x in folders:
    print x # print the current folder
    filename='Folder/%s/myfile.txt' %[x]

   ^- brackets not needed

    f=open(filename,'r')

gives: IOError: [Errno 2] No such file or directory: 
"Folder/['1']/myfile.txt"




Pay attention to the error message.  Do you actually have the file 
"Folder\['1']\myfile.txt" on your machine?  And did you really want 
brackets and quotes?  Is the path located relative to wherever your 
python script is running from?  If it's an absolute path, precede it 
with a leading slash.


As far as the Python question of string substitution, "%s" % var is an 
appropriate way.


Your above code should read:
folders = ['1', '2', '3']
for x in folders:
   print x
   filename = 'Folder/%s/myfile.txt' % x
   f = open(filename, 'r')

Again, in order for that to work, you *must* have a path/file of 
'Folder\1\myfile.txt' existing from the same folder that this code is 
running from.  This is O/S related, not Python related.

--
Ethan
--
http://mail.python.org/mailman/listinfo/python-list


Re: re.search much slower then grep on some regular expressions

2008-07-09 Thread Kris Kennaway

samwyse wrote:

On Jul 8, 11:01 am, Kris Kennaway <[EMAIL PROTECTED]> wrote:

samwyse wrote:



You might want to look at Plex.
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Plex/
"Another advantage of Plex is that it compiles all of the regular
expressions into a single DFA. Once that's done, the input can be
processed in a time proportional to the number of characters to be
scanned, and independent of the number or complexity of the regular
expressions. Python's existing regular expression matchers do not have
this property. "



Hmm, unfortunately it's still orders of magnitude slower than grep in my
own application that involves matching lots of strings and regexps
against large files (I killed it after 400 seconds, compared to 1.5 for
grep), and that's leaving aside the much longer compilation time (over a
minute).  If the matching was fast then I could possibly pickle the
lexer though (but it's not).


That's funny, the compilation is almost instantaneous for me.


My lexicon was quite a bit bigger, containing about 150 strings and regexps.


However, I just tested it against several files, the first containing
4875*'a', the rest each twice the size of the previous.  And you're
right, for each doubling of the file size, the match takes four times
as long, meaning O(n^2).  156000*'a' would probably take 8 hours.
Here are my results:


The docs say it is supposed to be linear in the file size ;-) ;-(

Kris

--
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread Paul Hankin
On Jul 9, 5:04 pm, dp_pearce <[EMAIL PROTECTED]> wrote:
> count = 0
> dmntString = ""
> for z in range(0, Z):
>     for y in range(0, Y):
>         for x in range(0, X):
>             fraction = domainVa[count]
>             dmntString += "  "
>             dmntString += fraction
>             count = count + 1
>         dmntString += "\n"
>     dmntString += "\n"
> dmntString += "\n***\n
>
> dmntFile     = open(dmntFilename, 'wt')
> dmntFile.write(dmntString)
> dmntFile.close()
> Can anyone see a way of speeding this loop up?

I'd consider writing it like this:

def dmntGenerator():
    count = 0
    for z in xrange(Z):
        for y in xrange(Y):
            for x in xrange(X):
                yield '  '
                yield domainVa[count]
                count += 1
            yield '\n'
        yield '\n'
    yield '\n***\n'

You can make the string using ''.join:

dmntString = ''.join(dmntGenerator())

But if you don't need the string, just write straight to the file:

for part in dmntGenerator():
dmntFile.write(part)

This is likely to be a lot faster as no large string is produced.

--
Paul Hankin

--
http://mail.python.org/mailman/listinfo/python-list


Re: Allow tab completion when inputing filepath?

2008-07-09 Thread Keith Hughitt
On Jul 9, 10:18 am, Tim Golden <[EMAIL PROTECTED]> wrote:
> Keith Hughitt wrote:
> > I've been looking around on the web for a way to do this, but so far
> > have not come across anything for this particular application. I have
> > found some ways to enable tab completion for program-related commands,
> > but not for system filepaths. This would be nice to have when
> > prompting the user to enter a file/directory location.
>
> What platform are you on? And what kind of display?
> (Console / GUI / wxPython / Qt / Web...)
>
> TJG

Hi TJG,

Currently Unix/Console. Although I don't have any plans at the moment
to add a GUI, it would be great if a cross-platform solution existed.

Keith
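
For a Unix console, one common approach is to hook the readline module up 
to a small glob-based completer before prompting (just a sketch, not 
something given in this thread):

import glob
import readline

def complete_path(text, state):
    # Return the state'th filesystem entry matching the text typed so far.
    matches = glob.glob(text + '*')
    if state < len(matches):
        return matches[state]
    return None

readline.set_completer_delims(' \t\n')     # keep '/' inside the word being completed
readline.set_completer(complete_path)
readline.parse_and_bind('tab: complete')

path = raw_input('Enter a file or directory: ')
print 'You entered:', path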
--
http://mail.python.org/mailman/listinfo/python-list


Re: FOSS projects exhibiting clean/good OOP?

2008-07-09 Thread Rob Wolfe
Phillip B Oldham <[EMAIL PROTECTED]> writes:

> I'm wondering whether anyone can offer suggestions on FOSS projects/
> apps which exhibit solid OO principles, clean code, good inline
> documentation, and sound design principles?
>
> I'm devoting some time to reviewing other people's code to advance my
> skills. Its good to review bad code (of which I have more than enough
> examples) as well as good, but I'm lacking in finding good examples.
>
> Projects of varying sizes would be great.

IMHO these projects are examples of very interesting and clean design:

docutils: http://docutils.sourceforge.net/docs/dev/hacking.html

trac: http://trac.edgewall.org/wiki/TracDev/ComponentArchitecture

In both of them code is not just clean, it is a work of art. ;)

HTH,
Rob
--
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone happen to have optimization hints for this loop?

2008-07-09 Thread [EMAIL PROTECTED]
On 9 juil, 18:04, dp_pearce <[EMAIL PROTECTED]> wrote:
> I have some code that takes data from an Access database and processes
> it into text files for another application. At the moment, I am using
> a number of loops that are pretty slow. I am not a hugely experienced
> python user so I would like to know if I am doing anything
> particularly wrong or that can be hugely improved through the use of
> another method.
>
> Currently, all of the values that are to be written to file are pulled
> from the database and into a list called "domainVa". These values
> represent 3D data and need to be written to text files using line
> breaks to seperate 'layers'. I am currently looping through the list
> and appending a string, which I then write to file. This list can
> regularly contain upwards of half a million values...
>
> count = 0
> dmntString = ""
> for z in range(0, Z):
>     for y in range(0, Y):
>         for x in range(0, X):
>             fraction = domainVa[count]
>             dmntString += "  "
>             dmntString += fraction
>             count = count + 1
>         dmntString += "\n"
>     dmntString += "\n"
> dmntString += "\n***\n
>
> dmntFile = open(dmntFilename, 'wt')
> dmntFile.write(dmntString)
> dmntFile.close()
>
> I have found that it is currently taking ~3 seconds to build the
> string but ~1 second to write the string to file, which seems wrong (I
> would normally guess the CPU/Memory would out perform disc writing
> speeds).

Not necessarily - when the dataset becomes too big and your process
has eaten all available RAM, your OS starts swapping, and then things
get worse than plain disk IO.  IOW, for large datasets (for a definition
of 'large' that depends on the available resources), you're sometimes
better off doing direct disk access - which BTW is usually buffered by
the OS anyway.

> Can anyone see a way of speeding this loop up? Perhaps by changing the
> data format?

Almost everyone told you to use a list and str.join()... which used to
be sound advice wrt both performance and readability, but nowadays
"only" makes your code more readable (and more pythonic...):

[EMAIL PROTECTED] ~ $ python
Python 2.5.1 (r251:54863, Apr  6 2008, 17:20:35)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from timeit import Timer
>>> def dostr():
...     s = ''
...     for i in xrange(1):
...         s += ' ' + str(i)
...
>>> def dolist():
...     s = []
...     for i in xrange(1):
...         s.append(str(i))
...     s = ' '.join(s)
...
>>> tstr = Timer("dostr", "from __main__ import dostr")
>>> tlist = Timer("dolist", "from __main__ import dolist")
>>> tlist.timeit(1000)
1.4280490875244141
>>> tstr.timeit(1000)
1.4347598552703857
>>>

The list + str.join version is only marginally faster... But you should
consider this solution even if it doesn't change performance much -
readability counts, too.

> Is it wrong to append a string and write once, or should
> hold a file open and write at each instance?

Is it really a matter of one XOR the other? Perhaps you should try a
midway solution, ie building not-too-big chunks as lists, and writing
them to the (already opened) file. This would avoid possible swapping
and reduce disk IO. I suggest you try this approach with different
list-size / write ratios, using the timeit module (and possibly the
"top" program on unix or its equivalent if you're on another platform,
to check memory/CPU usage) to find out which ratio works best for a
representative sample of your input data. That's at least what I'd
do...
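
To make that concrete, here is a rough sketch of the midway approach
(CHUNK is just a guess to tune with timeit; domainVa, X, Y, Z and
dmntFilename are the names from the original post):

CHUNK = 10000   # assumed buffer size - tune it for your data

dmntFile = open(dmntFilename, 'wt')
buf = []
count = 0
for z in xrange(Z):
    for y in xrange(Y):
        for x in xrange(X):
            buf.append('  ' + domainVa[count])
            count += 1
            if len(buf) >= CHUNK:
                # Flush the buffered pieces to disk and start a new chunk.
                dmntFile.write(''.join(buf))
                buf = []
        buf.append('\n')
    buf.append('\n')
buf.append('\n***\n')
dmntFile.write(''.join(buf))
dmntFile.close()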

HTH
--
http://mail.python.org/mailman/listinfo/python-list


Re: formatting list -> comma separated (slightly different)

2008-07-09 Thread Michiel Overtoom
Paul & Robert wrote...

> d = ["soep", "reeds", "ook"]
>print ', '.join(d)
> soep, reeds, ook

I occasionally have a need for printing lists of items too, but in the form:
"Butter, Cheese, Nuts and Bolts".  The last separator is the word 'and'
instead of the comma. The clearest I could come up with in Python is below.
I wonder if there is a more pythonic solution for this problem.  Maybe
something recursive?

Greetings,


'''
Formatting a sequence of items such that they are separated by
commas, except the last item, which is separated by the word 'and'.

Used for making lists of dates and items more human-readable
in generated emails and webpages.

For example:

Four friends have a dinner: Anne, Bob, Chris and Debbie
Three friends have a dinner: Anne, Bob and Chris
Two friends have a dinner: Anne and Bob
One friend has a dinner: Anne
No friend has a dinner:

 ['Anne','Bob','Chris','Debbie'] -> "Anne, Bob, Chris and Debbie"
 ['Bob','Chris','Debbie'] -> "Bob, Chris and Debbie"
 ['Chris','Debbie'] -> "Chris and Debbie"
 ['Debbie'] -> "Debbie"
 [] -> ""

'''

def pretty(f):
    if len(f)==0: return ''
    if len(f)==1: return f[0]
    sepwithcommas=f[:-1]
    sepwithand=f[-1]
    s=', '.join(sepwithcommas)
    if sepwithand:
        s+=' and '+sepwithand
    return s

friends=['Anne','Bob','Chris','Debbie','Eve','Fred']
while True:
    print friends,'->',pretty(friends)
    if friends:
        friends.pop(0)
    else:
        break
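
A somewhat more compact variant (just a sketch, not necessarily more
pythonic) doing the same thing with slicing and ', '.join:

def pretty2(items):
    # All but the last item joined with commas, then the last attached with 'and'.
    if not items:
        return ''
    if len(items) == 1:
        return items[0]
    return ', '.join(items[:-1]) + ' and ' + items[-1]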





-- 
"The ability of the OSS process to collect and harness
the collective IQ of thousands of individuals across
the Internet is simply amazing." - Vinod Vallopillil
http://www.catb.org/~esr/halloween/halloween4.html

--
http://mail.python.org/mailman/listinfo/python-list


Re: FOSS projects exhibiting clean/good OOP?

2008-07-09 Thread [EMAIL PROTECTED]
On 9 juil, 16:38, Phillip B Oldham <[EMAIL PROTECTED]> wrote:
> I'm wondering whether anyone can offer suggestions on FOSS projects/
> apps which exhibit solid OO principles, clean code, good inline
> documentation, and sound design principles?

This is somewhat subjective... Some would say that Python's object
model is fundamentally broken and crappy (not MHO, needless to say),
so that Python + "solid OO principles" is antinomic !-)

More seriously:

> I'm devoting some time to reviewing other people's code to advance my
> skills. Its good to review bad code (of which I have more than enough
> examples) as well as good, but I'm lacking in finding good examples.
>
> Projects of varying sizes would be great.

I'd recommend at least FormEncode and SQLAlchemy.
--
http://mail.python.org/mailman/listinfo/python-list


Re: variable question

2008-07-09 Thread Russell Blau
"Support Desk" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

> I am trying to assign a variable using an if / else statement like so:

> If condition1:
> Variable = something
> If condition2:
> Variable = something else
> Do stuff with variable.
>
> But the variable assignment doesn't survive outside the if statement. Is 
> there
> any better way to assign variables using an if statement or exception so I 
> don't
> have to write two almost identical if statements. This is probably a dumb
> question.

The variable assignment should survive outside the if statements:

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] 
on win32
Type "copyright", "credits" or "license()" for more information.
>>> if "yes" == "no":
	value = "impossible"

>>> if "no" == "no":
	value = "possible"

>>> print value
possible
>>>

You probably want to watch out for the possibility that both Condition1 and 
Condition2 are false; otherwise, you will get a NameError when you try to 
access Variable without initializing it.
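
One common way to guard against that is to give the name a default in an
else branch so it is always bound (a sketch reusing the names from the
original post; do_stuff_with and something_else are just placeholders):

if condition1:
    variable = something
elif condition2:
    variable = something_else
else:
    variable = None              # fallback, so the name always exists

do_stuff_with(variable)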

(By the way, as a matter of style, Python variable names are usually written 
in all lower-case letters.)

Russ



--
http://mail.python.org/mailman/listinfo/python-list

