threads - was:Is there a way to protect a piece of critical code?

2007-01-14 Thread Ray Schumacher
"Hendrik van Rooyen" wrote:
 > Similarly discrete background thread jobs can be used
 > in a functional style this way:
 >  http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/491280
 > ( an alternative for the laborious OO-centric threading.

With the various options available, many of which I haven't used 
(CallQueue, for example), I am wondering which methodology would be 
"best" for a two-task design on a dual-core processor in my next project:

-One task is an ADC data collection/device driver whose sole job in 
life is to fill a numpy "circular buffer" with data from an external 
ADC device at high speed and expose a pointer into it - programmatically 
complex but functionally distinct from the analysis task. It is I/O 
bound and spends its time waiting on the ADC.
-The main task is analysis of that data in near-real time with FFTs, 
correlations etc., and is computationally bound. It needs read 
access to the array and the pointer, and must be able to kill the ADC 
task when desired (a rough sketch follows).
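
Roughly the shape I have in mind, as a minimal sketch only (read_adc(), 
BUFSIZE and the chunk size are placeholders, not real driver code):

import threading
import numpy

BUFSIZE = 2 ** 16
buf = numpy.zeros(BUFSIZE)      # the shared "circular buffer"
write_pos = [0]                 # index of the next sample to write
stop = threading.Event()        # lets the analysis task kill the ADC task

def read_adc(n):
    # placeholder for the real, I/O-bound driver call
    return numpy.random.rand(n)

def adc_task(chunk=1024):
    while not stop.isSet():
        data = read_adc(chunk)                     # blocks waiting on the ADC
        i = write_pos[0]
        idx = numpy.arange(i, i + chunk) % BUFSIZE
        buf[idx] = data                            # wrap around the ring
        write_pos[0] = (i + chunk) % BUFSIZE

collector = threading.Thread(target=adc_task)
collector.start()

# the analysis task reads the buffer and the write pointer
spectrum = numpy.fft.fft(buf)
stop.set()                      # kill the ADC task when desired
collector.join()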

Thoughts/opinions are humbly requested,
Ray

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.5 install on Gentoo Linux: failed dmb and _tkinter

2007-01-14 Thread Sorin Schwimmer
# ldconfig -p | grep "/usr/local/lib"
libtk8.4.so (libc6) => /usr/local/lib/libtk8.4.so
libtcl8.4.so (libc6) => /usr/local/lib/libtcl8.4.so

Sorin


 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators inside of class and decorator parameters

2007-01-14 Thread Michele Simionato
Gabriel Genellina wrote:
> see this article by M. Simoniato
> http://www.phyast.pitt.edu/~micheles/python/documentation.html for a better
> way using its decorator factory.

Actually the name is Simionato ;)
I have just released version 2.0, the new thing is an update_wrapper
function similar to the one
in the standard library, but with the ability to preserve the signature
on demand. For instance

def traced(func):
    def wrapper(*args, **kw):
        print 'calling %s with args %s, %s' % (func, args, kw)
        return func(*args, **kw)
    return update_wrapper(wrapper, func, create=False)

works exactly as functools.update_wrapper (i.e. it copies __doc__,
__module__, etc. from func to wrapper without preserving the signature),
whereas update_wrapper(wrapper, func, create=True) creates a new wrapper
with the right signature before copying the attributes.
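
For example, a quick sketch of the difference (this assumes update_wrapper
can be imported from the decorator module; the add() function is just an
arbitrary example):

import inspect
from decorator import update_wrapper    # assumed import location

def traced(func):
    def wrapper(*args, **kw):
        print 'calling %s with args %s, %s' % (func, args, kw)
        return func(*args, **kw)
    return update_wrapper(wrapper, func, create=True)

@traced
def add(x, y=0):
    "Add two numbers."
    return x + y

print add.__doc__                # the docstring is copied over
print inspect.getargspec(add)    # with create=True the signature is preserved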

 Michele Simionato

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threaded for loop

2007-01-14 Thread Paul Rubin
"John" <[EMAIL PROTECTED]> writes:
> Damn! That is bad news. So even if calculate is independent for
> (i,j) and is computable on separate CPUs (parts of it are CPU bound,
> parts are IO bound) Python can't take advantage of this?

Not at the moment, unless you write C extensions that release the
global interpreter lock (GIL).  One of these days.  Meanwhile there
are various extension modules that let you use multiple processes,
look up POSH and Pyro.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threaded for loop

2007-01-14 Thread parallelpython
John wrote:
> Thanks. Does it matter if I call shell commands os.system...etc in
> calculate?
>
> Thanks,
> --j

The os.system command ignores important changes in the environment
(redirected streams) and would not work with the current version of ppsmp.
There is, however, a very simple workaround:
print os.popen("yourcommand").read()
instead of os.system("yourcommand")


Here is a complete working example of that code:
http://www.parallelpython.com/component/option,com_smf/Itemid,29/topic,13.0

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Maths error

2007-01-14 Thread Hendrik van Rooyen
 "Tim Peters" <[EMAIL PROTECTED]> wrote:


> [Nick Maclaren]
> >> ...
> >> Yes, but that wasn't their point.  It was that in (say) iterative
> >> algorithms, the error builds up by a factor of the base at every
> >> step. If it wasn't for the fact that errors build up, almost all
> >> programs could ignore numerical analysis and still get reliable
> >> answers! 
> >>
> >> Actually, my (limited) investigations indicated that such an error
> >> build-up was extremely rare - I could achieve it only in VERY
> >> artificial programs.  But I did find that the errors built up faster
> >> for higher bases, so that a reasonable rule of thumb is that 28
> >> digits with a decimal base was comparable to (say) 80 bits with a
> >> binary base. 
> 
> [Hendrik van Rooyen]
> > I would have thought that this sort of thing was a natural consequence 
> > of rounding errors - if I round (or worse truncate) a binary, I can be 
> > off by at most one, with an expectation of a half of a least 
> > significant digit, while if I use hex digits, my expectation is around
> > eight, and for decimal around five... 
> 
> Which, in all cases, is a half ULP at worst (when rounding -- as 
> everyone does now).
> 
> > So it would seem natural that errors would propagate 
> > faster on big base systems, AOTBE, but this may be 
> > a naive view.. 
> 
> I don't know of any current support for this view.  In the bad old days, 
> such things were often confused by architectures that mixed non-binary 
> bases with "creative" rounding rules (like truncation indeed), and it 
> could be hard to know where to "pin the blame".
> 
> What you will still see stated is variations on Kahan's telegraphic 
> "binary is better than any other radix for error analysis (but not very 
> much)", listed as one of two techincal advantages for binary fp in:
> 
> http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf
> 
> It's important to note that he says "error analysis", not "error 
> propagation" -- regardless of base in use, rounding is good to <= 1/2 
> ULP.  A fuller elementary explanation of this can be found in David 
> Goldberg's widely available "What Every Computer Scientist Should Know 
> About Floating-Point", in its "Relative Error and Ulps" section.  The 
> short course is that rigorous forward error analysis of fp algorithms is 
> usually framed in terms of relative error:  given a computed 
> approximation x' to the mathematically exact result x, what's the 
> largest possible absolute value of the mathematical
> 
>r = (x'-x)/x
> 
> (the relative error of x')?  This framework gets used because it's more-
> or-less tractable, starting by assuming inputs are exact (or not, in 
> which case you start by bounding the inputs' relative errors), then 
> successively computing relative errors for each step of the algorithm.  
> Goldberg's paper, and Knuth volume 2, contain many introductory examples 
> of rigorous analysis using this approach.
> 
> Analysis of relative error generally goes along independent of FP base.  
> It's at the end, when you want to transform a statement about relative 
> error into a statement about error as measured by ULPs (units in the 
> last place), where the base comes in strongly.  As Goldberg explains, 
> the larger the fp base the sloppier the relative-error-converted-to-ULPs 
> bound is -- but this is by a constant factor independent of the 
> algorithm being analyzed, hence Kahan's "... better ... but not very 
> much".  In more words from Goldberg:
> 
> Since epsilon [a measure of relative error] can overestimate the
> effect of rounding to the nearest floating-point number by the
> wobble factor of B [the FP base, like 2 for binary or 10 for
> decimal], error estimates of formulas will be tighter on machines
> with a small B.
> 
> When only the order of magnitude of rounding error is of interest,
> ulps and epsilon may be used interchangeably, since they differ by
> at most a factor of B.
> 
> So that factor of B is irrelevant to most apps most of the time.  For a 
> combination of an fp algorithm + set of inputs near the edge of giving 
> gibberish results, of course it can be important.  Someone using 
> Python's decimal implementation has an often very effective workaround 
> then, short of writing a more robust fp algorithm:  just boost the 
> precision.
> 
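
A tiny illustration of that last point, using the decimal module (the
calculation below is an arbitrary example, not from Tim's post):

from decimal import Decimal, getcontext

getcontext().prec = 8
print Decimal(2).sqrt() ** 2    # a few ULP away from 2 at 8 digits

getcontext().prec = 60          # boost the precision
print Decimal(2).sqrt() ** 2    # the error is pushed far past the digits of interest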

Thanks Tim, for taking the trouble. - really nice explanation.

My basic error of thinking ( ? - more like gut feel ) was that the
bigger bases somehow lose "more bits" at every round, 
forgetting that half a microvolt is still half a microvolt, whether
it is rounded in binary, decimal, or hex...

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Maths error

2007-01-14 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|>  "Tim Peters" <[EMAIL PROTECTED]> wrote:
|> 
|> > What you will still see stated is variations on Kahan's telegraphic 
|> > "binary is better than any other radix for error analysis (but not very 
|> > much)", listed as one of two techincal advantages for binary fp in:
|> > 
|> > http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf

Which I believe to be the final statement of the matter.  It was a minority
view 30 years ago, but I now know of little dissent.

He has omitted that mid-point invariant as a third advantage of binary,
but I agree that it could be phrased as "one or two extra mathematical
invariants hold for binary (but not very important ones)".

|> My basic error of thinking ( ? - more like gut feel ) was that the
|> bigger bases somehow lose "more bits" at every round, 
|> forgetting that half a microvolt is still half a microvolt, whether
|> it is rounded in binary, decimal, or hex...

That is not an error, but only a mistake :-)

Yes, you have hit the nail on the head.  Some people claimed that some
important algorithms did that, and that binary was consequently much
better.  If it were true, then the precision you would need would be
pro rata to the case - so the decimal equivalent of 64-bit binary would
need 160 bits.

Experience failed to confirm their viewpoint, and the effect was seen
in only artificial algorithms (sorry - I can no longer remember the
examples and am reluctant to waste time trying to reinvent them).  But
it was ALSO found that the converse was not QUITE true, either, and the 
effective numerical precision is not FULLY independent of the base.

So, at a wild guesstimate, 64-bit decimal will deliver a precision
comparable to about 56-bit binary, and will cause significant numerical
problems to a FEW applications.  Hence people will have to convert to
the much more expensive 128-bit decimal format for such work.

Bloatware rules.  All your bits are belong to us.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Maths error

2007-01-14 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|> > 
|> I would suspect that this is one of those questions which are simple
|> to ask, but horribly difficult to answer - I mean - if the hardware has 
|> thrown it away, how do you study it - you need somehow two
|> different parallel engines doing the same stuff, and comparing the 
|> results, or you have to write a big simulation, and then you bring 
|> your simulation errors into the picture - There be Dragons...

No.  You just emulate floating-point in software and throw a switch
selecting between the two rounding rules.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Rational Numbers

2007-01-14 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|> 
|> > Financial calculations need decimal FIXED-point, with a precisely
|> > specified precision.  It is claimed that decimal FLOATING-point
|> > helps with providing that, but that claim is extremely dubious.
|> > I can explain the problem in as much detail as you want, but would
|> > very much rather not.
|> 
|> Ok I will throw in a skewed ball at this point - use integer arithmetic,
|> and work in tenths of cents or pennies or whatever, and don't be too 
|> lazy to do your own print formatting...

That's not a skewed ball - that's the traditional way of doing it on
systems that don't have fixed-point hardware (and sometimes even when
they do).  Yes, it's dead easy in a language (like Python) that allows
decent encapsulation.
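
For example, a trivial sketch of the idea (amounts held as integer tenths
of a cent; purely illustrative):

def format_money(tenths):
    "Format an amount held as an integer number of tenths of a cent."
    sign = ''
    if tenths < 0:
        sign, tenths = '-', -tenths
    dollars, rest = divmod(tenths, 1000)   # 1000 tenths of a cent per dollar
    cents, tenth = divmod(rest, 10)
    return '%s$%d.%02d%d' % (sign, dollars, cents, tenth)

price = 19995                 # $19.995, held as tenths of a cent
tax = price * 14 // 100       # integer arithmetic throughout
print format_money(price), format_money(tax)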

The decimal floating-point brigade grossly exaggerate the difficulty of
doing that, in order to persuade people that their solution is better.
If they admitted the difficulties of using decimal floating-point, and
merely said "but, overall, we think it is a better solution", I would
disagree but say nothing.


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Fixed-point [was Re: Rational Numbers]

2007-01-14 Thread Nick Maclaren

In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|> 
|> Ok I will throw in a skewed ball at this point - use integer arithmetic,
|> and work in tenths of cents or pennies or whatever, and don't be too 
|> lazy to do your own print formatting...

If anyone is interested in doing this, here are a few notes on some of
the non-trivial aspects.  Please feel free if you want to contact me
for more information.

All numbers are held as (precision, value).  Precision is the number of
digits after the decimal point (a small, often constant, integer) and
the value is held as an integer.

It would be possible to have a separate subclass or context for every
precision (like Decimal), but that would be an unnecessary complication.

+, - and % return a precision that is the maximum of their input
precisions, and // returns a precision of 0.

* returns a precision that is the sum of its input precisions.

/ returns a floating-point number!

There is a divide function that takes a divisor, dividend and output
precision.  It also takes a rounding rule as an optional argument
(up, down, in, out, conventional nearest or IEEE 754R nearest).

There is a precision conversion function that takes a value and
output precision, and a rounding rule as an optional argument.

There is a conversion function that takes a string or floating-point
number and output precision, and a rounding rule as an optional
argument.  It raises an exception if a 1 ULP change in the floating-
point number would give a different answer; this is needed to make
certain operations reliable.

The default formatting does the obvious thing :-)

Er, that's about it 
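
For concreteness, here is a bare-bones sketch of just the arithmetic rules
above (no rounding rules, no conversions, no error checking - those are the
non-trivial parts):

class Fixed(object):
    "Bare-bones fixed point: 'value' is an integer, 'prec' is digits after the point."

    def __init__(self, value, prec):
        self.value = value
        self.prec = prec

    def _scaled(self, prec):
        # integer value of self rescaled to 'prec' digits (prec >= self.prec)
        return self.value * 10 ** (prec - self.prec)

    def __add__(self, other):
        p = max(self.prec, other.prec)
        return Fixed(self._scaled(p) + other._scaled(p), p)

    def __sub__(self, other):
        p = max(self.prec, other.prec)
        return Fixed(self._scaled(p) - other._scaled(p), p)

    def __mod__(self, other):
        p = max(self.prec, other.prec)
        return Fixed(self._scaled(p) % other._scaled(p), p)

    def __floordiv__(self, other):
        p = max(self.prec, other.prec)
        return Fixed(self._scaled(p) // other._scaled(p), 0)

    def __mul__(self, other):
        # precision is the sum of the input precisions
        return Fixed(self.value * other.value, self.prec + other.prec)

    def __div__(self, other):
        # true division returns a floating-point number
        return (float(self.value) / 10 ** self.prec) / \
               (float(other.value) / 10 ** other.prec)

    def __str__(self):
        if self.prec == 0:
            return str(self.value)
        sign, v = '', self.value
        if v < 0:
            sign, v = '-', -v
        s = '%0*d' % (self.prec + 1, v)
        return sign + s[:-self.prec] + '.' + s[-self.prec:]

a = Fixed(12345, 2)   # 123.45
b = Fixed(67, 1)      # 6.7
print a + b, a * b, a % b, a // b, a / b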


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators inside of class and decorator parameters

2007-01-14 Thread Colin J. Williams
Gabriel Genellina wrote:
[snip]
> As said in the beginning, there is no use for decorators as methods (perhaps 
> someone can find a use case?)
Except, perhaps to indicate to the script reader that the decorator only 
applies within the class?

Colin W.
> If you move the example above inside the class, you get exactly the same 
> results.
> 
> HTH,
> 
> 

-- 
http://mail.python.org/mailman/listinfo/python-list


Announcing: Spiff Guard (Generic Access Lists for Python)

2007-01-14 Thread Samuel
Introduction

Spiff Guard is a library for implementing access lists in Python. It
provides a clean and simple API and was implemented with performance
and security in mind. It was originally inspired by phpGACL
(http://phpgacl.sourceforge.net/), but features an API that is
significantly cleaner and easier to use.

Spiff Guard is the first library published as part of the Spiff
platform. The Spiff platform aims to produce a number of generic
libraries generally needed in enterprise (web) applications.

Spiff Guard is free software and distributed under the GNU GPLv2.


Dependencies
-
sqlalchemy (http://www.sqlalchemy.org/)


Download
-
Please check out the code from SVN:

svn checkout http://spiff.googlecode.com/svn/trunk/libs/Guard/


Links:
---
Spiff project page: http://code.google.com/p/spiff/
Bug tracker: http://code.google.com/p/spiff/issues/list
Documentation: http://spiff.googlecode.com/svn/trunk/libs/Guard/README
Browse the source: http://spiff.googlecode.com/svn/trunk/libs/Guard/

Any questions, please ask (or file a bug).
-Samuel

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tkinter code (with pmw) executing to soon please help

2007-01-14 Thread Peter Otten
[EMAIL PROTECTED] wrote:

> Scott David Daniels wrote:
>> Gabriel Genellina wrote:
>> >... So `callback` should return a function, like this:
>> >
>> > def callback(text):
>> >     def handler(event):
>> >         print text
>> >
>>
>> Even better than that:
>>  def callback(text):
>>      def handler(event):
>>          print text
>>      return handler
>>
>> Otherwise callback returns the spectacularly un-useful value None.

> C:\dex_tracker\csdlist.py bay-at-night.csd
> Traceback (most recent call last):
>  File "C:\dex_tracker\csdlist.py", line 58, in
> root.mainloop()
>  File "C:\Python25\lib\lib-tk\Tkinter.py", line 1023, in mainloop
> self.tk.mainloop(n)
>  File "../../..\Pmw\Pmw_1_2\lib\PmwBase.py", line 1751, in __call__
>  File "../../..\Pmw\Pmw_1_2\lib\PmwBase.py", line 1777, in _reporterror
> TypeError: unsupported operand type(s) for +: 'type' and 'str'
> Script terminated.
> 
> It doesn't like the return handler part of it.

Probably because a Tkinter.Button command callback doesn't accept any
arguments. Try

def callback(text):
    def handler():
        print text
    return handler

Note that 'make_callback' would be a better name than 'callback' because the
function 'callback' actually creates the callback (called 'handler').
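
For example, a minimal self-contained sketch of the pattern (the button
labels are made up):

import Tkinter

def make_callback(text):
    def handler():
        print text
    return handler

root = Tkinter.Tk()
for name in ('spam', 'eggs'):
    Tkinter.Button(root, text=name, command=make_callback(name)).pack()
root.mainloop()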

Peter


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Comparing a matrix (list[][]) ?

2007-01-14 Thread Colin J. Williams
jairodsl wrote:
> Hi,
> 
> How can I find the minus element greater than zero in a matrix, my
> matrix is
> 
> matrix=
> [9,8,12,15],
> [0,11,15,18],
> [0,0,10,13],
> [0,0,0,5]
> 
> I dont want to use "min" function because each element in the matrix is
> associated to (X,Y) position.
> 
> Thanks a lot.
> 
> jDSL
> 

You might consider numarray or numpy for this sort of thing.

Colin W.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: template engine

2007-01-14 Thread [EMAIL PROTECTED]
> 
>  
>   $title
>  
>  
>  #if user
>   hello $user/name
>  #else
>   hello guest
>  #endif
>  
> 

This example code would work in cheetah with only 2 changes...

www.cheetahtemplate.org

Pete

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threaded for loop

2007-01-14 Thread skip

John> Damn! That is bad news. So even if calculate is independent for
John> (i,j) and is computable on separate CPUs (parts of it are CPU
John> bound, parts are IO bound) Python can't take advantage of this?

It will help if parts are I/O bound, presuming the threads which block
release the global interpreter lock (GIL).

There is a module in development (processing.py) that provides an API like
the threading module but that uses processes under the covers:

http://mail.python.org/pipermail/python-dev/2006-October/069297.html

You might find that an interesting alternative.

Skip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threaded for loop

2007-01-14 Thread sturlamolden

John wrote:
> I want to do something like this:
>
> for i in range(0, N):
>     for j in range(0, N):
>         D[i][j] = calculate(i, j)
>
> I would like to now do this using a fixed number of threads, say 10
> threads.

Why do you want to run this in 10 threads? Do you have 10 CPUs?

If you are concerned about CPU time, you should not be using threads
(regardless of language), as they are often implemented with the
assumption that they stay idle most of the time (e.g. win32 threads and
pthreads). In addition, CPython has a global interpreter lock (GIL)
that prevents the interpreter from running on several processors in
parallel. It means that Python threads are a tool for things like
writing non-blocking I/O and maintaining responsiveness in a GUI. But
that is what threads are implemented to do anyway, so it doesn't
matter. IronPython and Jython do not have a GIL.

In order to speed up computation you should run multiple processes and
do some sort of IPC. Take a look at MPI (e.g. mpi4py.scipy.org) or
'parallel python'. MPI is the de facto industry standard for dealing
with CPU-bound problems on systems with multiple processors, whether
the memory is shared or distributed. Contrary to common belief, this
approach is more efficient than running multiple threads, sharing
memory and synchronizing with mutexes and event objects - even if you
are using a system unimpeded by a GIL.

The number of parallel tasks should be equal to the number of available
CPU units, not more, as you will get excessive context switches if the
number of busy threads or processes exceeds the number of computational
units. If you only have two logical CPUs (e.g. one dual-core processor)
you should only run two parallel tasks - not ten. If you try to
parallelize using additional tasks (e.g. 8 more), you will just waste
time doing more context switches, more cache misses, etc. But if you are
a lucky bastard with access to a 10-way server, sure, run 10 tasks in
parallel.
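
For illustration, a rough sketch of how the (i,j) loop might be split
across processes with mpi4py (untested; calculate() stands in for the real
function, and only mpi4py's basic API - COMM_WORLD, Get_rank, Get_size,
gather - is used):

from mpi4py import MPI

N = 100

def calculate(i, j):
    return i * j              # stand-in for the real computation

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# each process takes every size-th value of i
local = {}
for i in range(rank, N, size):
    for j in range(N):
        local[(i, j)] = calculate(i, j)

# collect the partial results on the root process
pieces = comm.gather(local, root=0)
if rank == 0:
    D = {}
    for piece in pieces:
        D.update(piece)
    print len(D), 'entries computed on', size, 'processes'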

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators inside of class and decorator parameters

2007-01-14 Thread Gabriel Genellina
"Colin J. Williams" <[EMAIL PROTECTED]> escribió en el mensaje 
news:[EMAIL PROTECTED]

> Gabriel Genellina wrote:
>> As said in the beginning, there is no use for decorators as methods 
>> (perhaps
>> someone can find a use case?)
> Except, perhaps to indicate to the script reader that the decorator only
> applies within the class?

But it looks rather strange - a method without self, that is not a 
classmethod nor static method, and can't be called on an instance...

-- 
Gabriel Genellina 



-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Decorators inside of class and decorator parameters

2007-01-14 Thread Gabriel Genellina
"Michele Simionato" <[EMAIL PROTECTED]> escribió en el mensaje 
news:[EMAIL PROTECTED]
> Gabriel Genellina wrote:
>> see this article by M. Simoniato
>> http://www.phyast.pitt.edu/~micheles/python/documentation.html for a 
>> better
>> way using its decorator factory.
>
> Actually the name is Simionato ;)

Oh, sorry! I think I've written it wrong in another place too :(

-- 
Gabriel Genellina 



-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Decorators inside of class and decorator parameters

2007-01-14 Thread MR
Thanks so much for your reply.  You've definitely helped me a great
deal on this. Your comment about the difference between define time and
instantiation time cleared things up more than anything, and that also
helped clear up the confusion I was having about "self".

I think the places I've seen decorators called like fooDecorator() must
be using some default arguments in the function signature...so that
part makes a lot more sense now too.

Thanks again!


Gabriel Genellina wrote:
> "MR" <[EMAIL PROTECTED]> escribió en el mensaje
> news:[EMAIL PROTECTED]
>
> > I have a question about decorators, and I think an illustration would
> > be helpful. Consider the following simple class:
> >
> > #begin code
> > class Foo:
> >     def fooDecorator(f):
> >         print "fooDecorator"
> >
> >         def _f(self, *args, **kw):
> >             return f(self, *args, **kw)
> >
> >         return _f
> >
> >     @fooDecorator
> >     def fooMethod(self):
> >         print "fooMethod"
> >
> > f = Foo()
> > f.fooMethod()
> > #end of code
> >
> > This code runs, and actually serves my purpose. However, I'm a little
> > confused about three things and wanted to try and work through them
> > while I had the time to do so. I believe all of my confusion is related
> > to the parameters related to the fooDecorator:
>
> [I reordered your questions to make the answer a bit more clear]
>
> > -why does this code even work, because the first argument to
> > fooDecorator isn't self
>
> fooDecorator is called when the class is *defined*, not when it's
> instantiated. `self` has no meaning inside it, neither the class to which it
> belongs (Foo does not even exist yet).
> At this time, fooDecorator is just a simple function, being collected inside
> a namespace in order to construct the Foo class at the end. So, you get
> *exactly* the same effect if you move fooDecorator outside the class.
>
> > -how I would pass arguments into the fooDecorator if I wanted to (my
> > various attempts have failed)
>
> Once you move fooDecorator outside the class, and forget about `self` and
> such irrelevant stuff, it's just a decorator with arguments.
> If you want to use something like this:
>     @fooDecorator(3)
>     def fooMethod(self):
> that is translated to:
>     fooMethod = fooDecorator(3)(fooMethod)
> That is, fooDecorator will be called with one argument, and the result must
> be a normal decorator - a function accepting a function an an argument and
> returning another function.
>
> def outerDecorator(param):
>     def fooDecorator(f):
>         print "fooDecorator"
>
>         def _f(self, *args, **kw):
>             print "decorated self=%s args=%s kw=%s param=%s" % (self, args, kw, param)
>             kw['newparam'] = param
>             return f(self, *args, **kw)
>
>         return _f
>     return fooDecorator
>
> This is the most direct way of doing this without any help from other
> modules - see this article by M. Simoniato
> http://www.phyast.pitt.edu/~micheles/python/documentation.html for a better
> way using its decorator factory.
>
> > -what the difference is between decorating with @fooDecorator versus
> > @fooDecorator()
> Easy: the second way doesn't work :)
> (I hope reading the previous item you can answer this yourself)
>
> > I'm searched the net and read the PEPs that seemed relevant, but I
> > didn't see much about decorators inside of a class like this. Can
> > anyone comment on any of these three things?
> As said in the beginning, there is no use for decorators as methods (perhaps
> someone can find a use case?)
> If you move the example above inside the class, you get exactly the same
> results.
> 
> HTH,
> 
> -- 
> Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators inside of class and decorator parameters

2007-01-14 Thread MR
Wow. Really neat stuff there. memoize seems especially cool since you
can get lots of nice dynamic programming benefits "for free" (sorry if
I just stated the obvious, but I thought it was especially cool).


Michele Simionato wrote:
> Gabriel Genellina wrote:
> > see this article by M. Simoniato
> > http://www.phyast.pitt.edu/~micheles/python/documentation.html for a better
> > way using its decorator factory.
>
> Actually the name is Simionato ;)
> I have just released version 2.0, the new thing is an update_wrapper
> function similar to the one
> in the standard library, but with the ability to preserve the signature
> on demand. For instance
>
> def traced(func):
>     def wrapper(*args, **kw):
>         print 'calling %s with args %s, %s' % (func, args, kw)
>         return func(*args, **kw)
>     return update_wrapper(wrapper, func, create=False)
>
> works exactly as functools.update_wrapper (i.e. it copies __doc__,
> __module__, etc. from func to wrapper without preserving the signature),
> whereas update_wrapper(wrapper, func, create=True) creates a new wrapper
> with the right signature before copying the attributes.
> 
>  Michele Simionato

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tkinter code (with pmw) executing to soon please help

2007-01-14 Thread Gabriel Genellina
<[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

> button[num] = Tkinter.Button(frame,text =  returnstring,
> command=callback(returnstring))#
>
> I understand this part of it
>
> def callback(text):
>   def handler(event):
> print text

> It stopped calling it automaticaly but will not do anything when I
> click on the button.  Does something have to change on this line as
> well.

Sorry, I overlooked your example. For a button, command should be a function 
with no arguments (and I forgot to return the handler, as someone already 
pointed out):

def callback(text):
    def handler():
        print text
    return handler

-- 
Gabriel Genellina 



-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Newbie - converting csv files to arrays in NumPy - Matlab vs. Numpy comparison

2007-01-14 Thread oyekomova
Thank you so much. Your solution works!  I greatly appreciate your
help.




sturlamolden wrote:
> oyekomova wrote:
>
> > Thanks for your note. I have 1Gig of RAM. Also, Matlab has no problem
> > in reading the file into memory. I am just running Istvan's code that
> > was posted earlier.
>
> You have a CSV file of about 520 MiB, which is read into memory. Then
> you have a list of list of floats, created by list comprehension, which
> is larger than 274 MiB. Additionally you try to allocate a NumPy array
> slightly larger than 274 MiB. Now your process is already exceeding 1
> GiB, and you are probably running other processes too. That is why you
> run out of memory.
>
> So you have three options:
>
> 1. Buy more RAM.
>
> 2. Low-level code a csv-reader in C.
>
> 3. Read the data in chunks. That would mean something like this:
>
>
> import time, csv, random
> import numpy
>
> def make_data(rows=6E6, cols=6):
>     fp = open('data.txt', 'wt')
>     counter = range(cols)
>     for row in xrange( int(rows) ):
>         vals = map(str, [ random.random() for x in counter ] )
>         fp.write( '%s\n' % ','.join( vals ) )
>     fp.close()
>
> def read_test():
>     start = time.clock()
>     arrlist = None
>     r = 0
>     CHUNK_SIZE_HINT = 4096 * 4 # seems to be good
>     fid = file('data.txt')
>     while 1:
>         chunk = fid.readlines(CHUNK_SIZE_HINT)
>         if not chunk: break
>         reader = csv.reader(chunk)
>         data = [ map(float, row) for row in reader ]
>         arrlist = [ numpy.array(data,dtype=float), arrlist ]
>         r += arrlist[0].shape[0]
>         del data
>         del reader
>         del chunk
>     print 'Created list of chunks, elapsed time so far: ', time.clock() - start
>     print 'Joining list...'
>     data = numpy.empty((r,arrlist[0].shape[1]),dtype=float)
>     r1 = r
>     while arrlist:
>         r0 = r1 - arrlist[0].shape[0]
>         data[r0:r1,:] = arrlist[0]
>         r1 = r0
>         del arrlist[0]
>         arrlist = arrlist[0]
>     print 'Elapsed time:', time.clock() - start
>
> make_data()
> read_test()
>
> This can process a CSV file of 6 million rows in about 150 seconds on
> my laptop. A CSV file of 1 million rows takes about 25 seconds.
>
> Just reading the 6 million row CSV file ( using fid.readlines() ) takes
> about 40 seconds on my laptop. Python lists are not particularly
> efficient. You can probably reduce the time to ~60 seconds by writing a
> new CSV reader for NumPy arrays in a C extension.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threaded for loop

2007-01-14 Thread Paul Boddie
[EMAIL PROTECTED] wrote:
>
> There is a module in development (processing.py) that provides an API like
> the threading module but that uses processes under the covers:
>
> http://mail.python.org/pipermail/python-dev/2006-October/069297.html
>
> You might find that an interesting alternative.

See the promised parallel processing overview on the python.org Wiki
for a selection of different solutions:

http://wiki.python.org/moin/ParallelProcessing

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


help with recursion on GP project

2007-01-14 Thread none
I'm trying to create a recursive function to evaluate the expressions 
within a list.  The function uses eval() to evaluate the list.  Like a 
lisp interpreter but very limited.
What I'm looking for is a function to recursively traverse the list and 
provide answers in place of lists, so that ...
Example = ['add', ['sub', 5, 4], ['mul', 3, 2]]
Becomes:Example = ['add', 1, 6]
Becomes:Example = 7
*Functions are defined in the script

The code I currently have, which isn't pretty (bottom), doesn't work 
because it doesn't return the value of the evaluated list.  But I can't 
figure out how to do that.  Any help would be greatly appreciated.

Jack Trades


def recursive(tree):
    if type(tree[1]) != type([]) and type(tree[2]) != type([]):
        eval(a[0]+'('+str(tree[1])+','+str(tree[2])+')')
    if type(tree[2]) == type([]):
        recursive(tree[2])
    if type(tree[1]) == type([]):
        recursive(tree[1])
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to get whole commandline include redirection.., etc

2007-01-14 Thread Carl Banks
[EMAIL PROTECTED] wrote:
> Can I get whole commandline not only argument list.
>
> 1. When I command like this
> $ a.py > filename
> 2. sys.argv is returns only argument list
> ['a.py']
>
> Is there a way to find out 'redirection' information.

It's not possible to find the exact command line redirections.

However, you can tell whether a standard I/O stream has been redirected
or not (kind of) using the isatty() method.  For instance,
sys.stdin.isatty() returns True when it's not being redirected.  It's not
exact, though.  It's possible to redirect to a device that is a tty,
and sometimes standard I/O streams will not be ttys even without
redirection, such as when run by a script with redirection.  (It
shouldn't be a problem, since the main use case is to check whether the
program should run in interactive mode or not.)
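
For example, a quick check:

import sys

if sys.stdin.isatty():
    print "stdin looks like a terminal; run in interactive mode"
else:
    print "stdin appears to be redirected"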


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help with recursion on GP project

2007-01-14 Thread bearophileHUGS
First possible raw solution:

from operator import add, sub, mul, div, neg

def evaluate(expr):
    if isinstance(expr, list):
        fun, ops = expr[0], expr[1:]
        return fun(*map(evaluate, ops))
    else:
        return expr

example = [add, [add, [sub, 5, 4], [mul, 3, 2]], [neg, 5]]
print evaluate(example)

But it's rather slow...

Bye,
bearophile

-- 
http://mail.python.org/mailman/listinfo/python-list


Conflicting needs for __init__ method

2007-01-14 Thread dickinsm
Here's an example of a problem that I've recently come up against for
the umpteenth time.  It's not difficult to solve, but my previous
solutions have never seemed quite right, so I'm writing to ask whether
others have encountered this problem, and if so what solutions they've
come up with.

Suppose you're writing a class "Rational" for rational numbers.  The
__init__ function of such a class has two quite different roles to
play.  First, it's supposed to allow users of the class to create
Rational instances; in this role, __init__ is quite a complex beast.
It needs to allow arguments of various types---a pair of integers, a
single integer, another Rational instance, and perhaps floats, Decimal
instances, and suitably formatted strings.  It has to validate the
input and/or make sure that suitable exceptions are raised on invalid
input.  And when initializing from a pair of integers---a numerator
and denominator---it makes sense to normalize: divide both the
numerator and denominator by their greatest common divisor and make
sure that the denominator is positive.

But __init__ also plays another role: it's going to be used by the
other Rational arithmetic methods, like __add__ and __mul__, to return
new Rational instances.  For this use, there's essentially no need for
any of the above complications: it's easy and natural to arrange that
the input to __init__ is always a valid, normalized pair of integers.
(You could include the normalization in __init__, but that's wasteful
when gcd computations are relatively expensive and some operations,
like negation or raising to a positive integer power, aren't going to
require it.)  So for this use __init__ can be as simple as:

def __init__(self, numerator, denominator):
    self.numerator = numerator
    self.denominator = denominator

So the question is: (how) do people reconcile these two quite
different needs in one function?  I have two possible solutions, but
neither seems particularly satisfactory, and I wonder whether I'm
missing an obvious third way.  The first solution is to add an
optional keyword argument "internal = False" to the __init__ routine,
and have all internal uses specify "internal = True"; then the
__init__ function can do the all the complicated stuff when internal
is False, and just the quick initialization otherwise.  But this seems
rather messy.

The other solution is to ask the users of the class not to use
Rational() to instantiate, but to use some other function
(createRational(), say) instead.  Then __init__ is just the simple
method above, and createRational does all the complicated stuff to
figure out what the numerator and denominator should be and eventually
calls Rational(numerator, denomiator) to create the instance.  But
asking users not to call Rational() seems unnatural.  Perhaps with
some metaclass magic one can ensure that "external" calls to
Rational() actually go through createRational() instead?

Of course, none of this really has anything to do with rational
numbers.  There must be many examples of classes for which internal
calls to __init__, from other methods of the same class, require
minimal argument processing, while external calls require heavier and
possibly computationally expensive processing.  What's the usual way
to solve this sort of problem?

Mark

-- 
http://mail.python.org/mailman/listinfo/python-list


python - process id

2007-01-14 Thread bruce
hi...

is there a way to have a test python app, get its own process ID. i'm
creating a test python script under linux, and was wondering if this is
possible..

also, i've tried using an irc client to join the irc #python channel, and
for some reason i keep getting the err msg saying that the 'address is
banned' i've never been to the python channel, but i'm using dsl, so i'm
getting a dynamic address.. and yeah, i've tried changing the address a
number of times..

thanks


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python - process id

2007-01-14 Thread Jean-Paul Calderone
On Sun, 14 Jan 2007 16:36:58 -0800, bruce <[EMAIL PROTECTED]> wrote:
>hi...
>
>is there a way to have a test python app, get its own process ID. i'm
>creating a test python script under linux, and was wondering if this is
>possible..

See the os module, the getpid function.
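
For example:

import os

print "my process id is", os.getpid()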

>
>also, i've tried using an irc client to join the irc #python channel, and
>for some reason i keep getting the err msg saying that the 'address is
>banned' i've never been to the python channel, but i'm using dsl, so i'm
>getting a dynamic address.. and yeah, i've tried changing the address a
>number of times..

IRC bans can cover a wide range of addresses.  Someone else on your ISP may
have abused the channel badly enough to get a large range of addresses, maybe
every address you could possibly get, banned from the channel.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Conflicting needs for __init__ method

2007-01-14 Thread Ziga Seilnacht
Mark wrote:

[a lot of valid, but long concerns about types that return
 an object of their own type from some of their methods]

I think that the best solution is to use an alternative constructor
in your arithmetic methods. That way users don't have to learn about
two different factories for the same type of objects. It also helps
with subclassing, because users have to override only a single method
if they want the results of arithmetic operations to be of their own
type.

For example, if your current implementation looks something like
this:

class Rational(object):

    # a long __init__ or __new__ method

    def __add__(self, other):
        # compute new numerator and denominator
        return Rational(numerator, denominator)

    # other similar arithmetic methods


then you could use something like this instead:

class Rational(object):

    # a long __init__ or __new__ method

    def __add__(self, other):
        # compute new numerator and denominator
        return self.result(numerator, denominator)

    # other similar arithmetic methods

    @staticmethod
    def result(numerator, denominator):
        """
        we don't use a classmethod, because users should
        explicitly override this method if they want to
        change the return type of arithmetic operations.
        """
        result = object.__new__(Rational)
        result.numerator = numerator
        result.denominator = denominator
        return result


Hope this helps,
Ziga

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Maths error

2007-01-14 Thread Tim Roberts
"Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote:

>"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
>
>> What I don't know is how much precision this approximation loses when
>> used in real applications, and I have never found anyone else who has
>> much of a clue, either.
>> 
>I would suspect that this is one of those questions which are simple
>to ask, but horribly difficult to answer - I mean - if the hardware has 
>thrown it away, how do you study it - you need somehow two
>different parallel engines doing the same stuff, and comparing the 
>results, or you have to write a big simulation, and then you bring 
>your simulation errors into the picture - There be Dragons...

Actually, this is a very well studied part of computer science called
"interval arithmetic".  As you say, you do every computation twice, once to
compute the minimum, once to compute the maximum.  When you're done, you
can be confident that the true answer lies within the interval.

For people just getting into it, it can be shocking to realize just how
wide the interval can become after some computations.
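
For example, a toy sketch of the idea (a real implementation must also
round the lower end down and the upper end up, which is ignored here):

class Interval(object):
    "Toy interval: track a lower and an upper bound through each operation."

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return '[%g, %g]' % (self.lo, self.hi)

x = Interval(0.999, 1.001)
y = x
for i in range(10):
    y = y * x
print y    # roughly [0.989, 1.011] -- the interval keeps widening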
-- 
Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
-- 
http://mail.python.org/mailman/listinfo/python-list


Python web app. (advice sought)

2007-01-14 Thread Duncan Smith
Hello,
 I find myself in the, for me, unusual (and at the moment unique)
position of having to write a web application.  I have quite a lot of
existing Python code that will form part of the business logic.  This
relies on 3rd party libraries (such as numpy) which would make porting
to e.g. IronPython difficult (I would imagine).  I was thinking LAMP
(the P standing for Python, of course), particularly as I was originally
encouraged to go for open source solutions.

The application will provide some basic statistical analyses of data
contained in database tables and associated plots (R / matplotlib /
graphviz).  There will also be some heavier duty Monte Carlo simulation
and graphical modelling / MCMC.  The user(s) will need to be able to set
model parameters; maybe even tinker with model structure, so it will be
very interactive (AJAX?).

I've had a look at Django, Turbogears and Plone, and at the moment I am
torn between Turbogears and Plone.  I get the impression that Turbogears
will require me to write more non-Python code, but maybe Plone is more
than I need (steeper learning curve?).  Maybe Turbogears will lead to a
more loosely coupled app. than Plone?

The disconcerting thing is that others on the project (who won't be
developing) have started to talk about a LAMP back end with an IIS front
end, .NET, and the benefits of sharepoint.  The emphasis is supposed to
be on rapid development, and these technologies are supposed to help.
But I have no real familiarity with them at all; just Python, C and SQL
to any realistic level of competence.

Any advice would be greatly appreciated.  I have to do much of the
statistical work too, so I need to make good choices (and hopefully be
able to justify them so nobody else on the project makes inappropriate
choices for me).  e.g. I don't mind learning Javascript if it doesn't
take too long.  The physical server will initially be a multiprocessor
machine with several GB of RAM.  But we also have a cluster (I have no
details, I only started the project a week ago).  So any advice
regarding parallelisation would also be appreciated (or, in fact, any
useful advice / pointers at all).  Thanks.

Duncan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Learning Python book, new edition?

2007-01-14 Thread wesley chun
Robert Hicks wrote:
> I would get "Core Python Programming" by Wesley Chun. It covers just
> about everything under the sun and includes version 2.5.


Robert, thanks for the plug.  if the OP wants to learn more about my
book and its philosophy, feel free to check out my comments on the
Amazon product page and/or the book's website at http://corepython.com
to see if it's right for you.

more on topic, here's a summary of Python books which are rev'd up to
2.5, categorized but not in any particular order:

Python learning:
- Python for Dummies, Maruch, Sep 2006
- Core Python Programming, Chun, Sep 2006

Python pure reference:
- Python Essential Reference, Beazley, Feb 2006
- Python in a Nutshell, Martelli, Jul 2006

Python case study reference:
- Programming Python, Lutz, Aug 2006

Enjoy!
-wesley

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"Core Python Programming", Prentice Hall, (c)2007,2001
http://corepython.com

wesley.j.chun :: wescpy-at-gmail.com
python training and technical consulting
cyberweb.consulting : silicon valley, ca
http://cyberwebconsulting.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Conflicting needs for __init__ method

2007-01-14 Thread Gabriel Genellina

At Sunday 14/1/2007 20:32, [EMAIL PROTECTED] wrote:


Of course, none of this really has anything to do with rational
numbers.  There must be many examples of classes for which internal
calls to __init__, from other methods of the same class, require
minimal argument processing, while external calls require heavier and
possibly computationally expensive processing.  What's the usual way
to solve this sort of problem?


In some cases you can differentiate by the type or number of 
arguments, so __init__ is the only constructor used.
In other cases this can't be done; then you can provide different 
constructors (usually class methods or static methods) with different 
names, of course. See the datetime class, for example. It has many 
constructors (today(), fromtimestamp(), fromordinal()...), all of them 
class methods; it is a C module.


For a slightly different approach, see the TarFile class (this is a 
Python module). It has many constructors (classmethods) like taropen, 
gzopen, etc. but there is a single public constructor, the open() 
classmethod. open() is a factory, dispatching to other constructors 
depending on the combination of arguments used.



--
Gabriel Genellina
Softlab SRL 








-- 
http://mail.python.org/mailman/listinfo/python-list

How naive is Python?

2007-01-14 Thread John Nagle
How naive (in the sense that compiler people use the term)
is the current Python system?  For example:

def foo() :
    s = "This is a test"
    return(s)

s2 = foo()

How many times does the string get copied?

Or, for example:

s1 = "Test1"
s2 = "Test2"
s3 = "Test3"
s = s1 + s2 + s3

Any redundant copies performed, or is that case optimized?

How about this?

kcount = 1000
s = ''
for i in range(kcount) :
    s += str(i) + ' '

Is this O(N) or O(N^2) because of recopying of "s"?

I just want a sense of what's unusually inefficient in the
current implementation.  Thanks.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Conflicting needs for __init__ method

2007-01-14 Thread Steven D'Aprano
On Sun, 14 Jan 2007 15:32:35 -0800, dickinsm wrote:

> Suppose you're writing a class "Rational" for rational numbers.  The
> __init__ function of such a class has two quite different roles to
> play.  First, it's supposed to allow users of the class to create
> Rational instances; in this role, __init__ is quite a complex beast.
> It needs to allow arguments of various types---a pair of integers, a
> single integer, another Rational instance, and perhaps floats, Decimal
> instances, and suitably formatted strings.  It has to validate the
> input and/or make sure that suitable exceptions are raised on invalid
> input.  And when initializing from a pair of integers---a numerator
> and denominator---it makes sense to normalize: divide both the
> numerator and denominator by their greatest common divisor and make
> sure that the denominator is positive.
> 
> But __init__ also plays another role: it's going to be used by the
> other Rational arithmetic methods, like __add__ and __mul__, to return
> new Rational instances.  For this use, there's essentially no need for
> any of the above complications: it's easy and natural to arrange that
> the input to __init__ is always a valid, normalized pair of integers.
> (You could include the normalization in __init__, but that's wasteful

Is it really? Have you measured it or are you guessing? Is it more or less
wasteful than any other solution?

> when gcd computations are relatively expensive and some operations,
> like negation or raising to a positive integer power, aren't going to
> require it.)  So for this use __init__ can be as simple as:
> 
> def __init__(self, numerator, denominator):
>     self.numerator = numerator
>     self.denominator = denominator
> 
> So the question is: (how) do people reconcile these two quite
> different needs in one function?  I have two possible solutions, but
> neither seems particularly satisfactory, and I wonder whether I'm
> missing an obvious third way.  The first solution is to add an
> optional keyword argument "internal = False" to the __init__ routine,
> and have all internal uses specify "internal = True"; then the
> __init__ function can do the all the complicated stuff when internal
> is False, and just the quick initialization otherwise.  But this seems
> rather messy.

Worse than messy. I guarantee you that your class' users will,
deliberately or accidentally, end up calling Rational(10,30,internal=True)
and you'll spent time debugging mysterious cases of instances not being
normalised when they should be.


> The other solution is to ask the users of the class not to use
> Rational() to instantiate, but to use some other function
> (createRational(), say) instead.

That's ugly! And they won't listen.

> Of course, none of this really has anything to do with rational
> numbers.  There must be many examples of classes for which internal
> calls to __init__, from other methods of the same class, require
> minimal argument processing, while external calls require heavier and
> possibly computationally expensive processing.  What's the usual way
> to solve this sort of problem?

class Rational(object):
    def __init__(self, numerator, denominator):
        print "lots of heavy processing here..."
        # processing ints, floats, strings, special case arguments, 
        # blah blah blah...
        self.numerator = numerator
        self.denominator = denominator
    def __copy__(self):
        cls = self.__class__
        obj = cls.__new__(cls)
        obj.numerator = self.numerator
        obj.denominator = self.denominator
        return obj
    def __neg__(self):
        obj = self.__copy__()
        obj.numerator *= -1
        return obj

I use __copy__ rather than copy for the method name, so that the copy
module will do the right thing.
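
For instance, with the class above:

import copy

x = Rational(2, 3)     # "lots of heavy processing here..." runs once, here
y = copy.copy(x)       # copy.copy() finds __copy__, so no heavy processing
z = -x                 # __neg__ builds on __copy__ and also skips __init__
print y.numerator, y.denominator, z.numerator, z.denominator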



-- 
Steven D'Aprano 

-- 
http://mail.python.org/mailman/listinfo/python-list


on which site or Usenet group should this question be posted?

2007-01-14 Thread mirandacascade
Operating system: Win XP
Version of Python: 2.4

I recognize that this question is not about Python...it has only a
tangential Python connection.  I'm using a Python package to run a
process; that process is issuing an error, and I'm hoping that someone
on this site can point me to the site that has the appropriate
expertise.

Situation is this:
1) using win32com.client.Dispatch to work with the MSXML2.XMLHTTP COM
object
2) I use the open() method of the COM object and specify a "POST", then
I use the send() method of the COM object
3) when the url is an http url, able to send and then check the
responseText property without problems
4) when the url is an https url, the open() method works, but when the
send() method is invoked, it raises an exception with the error
message: "the download of the specified resource has failed"
5) when I use Google with searches such as "XMLHTTP", "https",
"download of the specified resource", I see that other people are
experiencing the issue, but I didn't see any solutions, nor did I see
whether there was a site (perhaps a Usenet group) on which it would
make sense to post this issue

Eventually, I want to learn whether the XMLHTTP COM object can work
with https url's, but what I'm hoping to learn from this post is advice
as to which site I should post this question so that it might be read
by folks with the appropriate subject-matter expertise.

Thank you.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Conflicting needs for __init__ method

2007-01-14 Thread Steven D'Aprano
On Mon, 15 Jan 2007 14:43:55 +1100, Steven D'Aprano wrote:

>> Of course, none of this really has anything to do with rational
>> numbers.  There must be many examples of classes for which internal
>> calls to __init__, from other methods of the same class, require
>> minimal argument processing, while external calls require heavier and
>> possibly computationally expensive processing.  What's the usual way
>> to solve this sort of problem?
> 
> class Rational(object):
>     def __init__(self, numerator, denominator):
>         print "lots of heavy processing here..."
>         # processing ints, floats, strings, special case arguments, 
>         # blah blah blah...
>         self.numerator = numerator
>         self.denominator = denominator
>     def __copy__(self):
>         cls = self.__class__
>         obj = cls.__new__(cls)
>         obj.numerator = self.numerator
>         obj.denominator = self.denominator
>         return obj
>     def __neg__(self):
>         obj = self.__copy__()
>         obj.numerator *= -1
>         return obj


Here's a variation on that which is perhaps better suited for objects with
lots of attributes:

def __copy__(self):
    cls = self.__class__
    obj = cls.__new__(cls)
    obj.__dict__.update(self.__dict__)   # copy everything quickly
    return obj




-- 
Steven D'Aprano 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How naive is Python?

2007-01-14 Thread skip

John> How naive (in the sense that compiler people use the term) is the
John> current Python system?  For example:

John>   def foo() :
John>       s = "This is a test"
John>       return(s)

John>   s2 = foo()

John> How many times does the string get copied?

Never.  s and s2 just refer to the same object (strings are immutable).
Executing this:

def foo() :
    print id("This is a test")
    s = "This is a test"
    print id(s)
    return(s)

s2 = foo()
print id(s2)

prints the same value three times.

John> Or, for example:

John>   s1 = "Test1"
John>   s2 = "Test2"
John>   s3 = "Test3"
John>   s = s1 + s2 + s3

John> Any redundant copies performed, or is that case optimized?

Not optimized.  You can see that using the dis module:

  4   0 LOAD_CONST   1 ('Test1')
  3 STORE_FAST   0 (s1)

  5   6 LOAD_CONST   2 ('Test2')
  9 STORE_FAST   1 (s2)

  6  12 LOAD_CONST   3 ('Test3')
 15 STORE_FAST   2 (s3)

  7  18 LOAD_FAST0 (s1)
 21 LOAD_FAST1 (s2)
 24 BINARY_ADD  
 25 LOAD_FAST2 (s3)
 28 BINARY_ADD  
 29 STORE_FAST   3 (s)
 32 LOAD_CONST   0 (None)
 35 RETURN_VALUE

The BINARY_ADD opcode creates a new string.

John> How about this?

John>   kcount = 1000
John>   s = ''
John>   for i in range(kcount) :
John>       s += str(i) + ' '

John> Is this O(N) or O(N^2) because of recopying of "s"?

O(N).  Here's a demonstration of that:

#!/usr/bin/env python

from __future__ import division

def foo(kcount):
    s = ''
    for i in xrange(kcount) :
        s += str(i) + ' '

import time

for i in xrange(5,25):
    t = time.time()
    foo(2**i)
    t = time.time() - t
    print 2**i, t, t/2**i

On my laptop t roughly doubles for each iteration and prints around 5e-06
for t/2**i in all cases.

Skip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How naive is Python?

2007-01-14 Thread Roy Smith
In article <[EMAIL PROTECTED]>,
 John Nagle <[EMAIL PROTECTED]> wrote:

> How naive (in the sense that compiler people use the term)
> is the current Python system?  For example:
> 
>   def foo() :
>       s = "This is a test"
>       return(s)
> 
>   s2 = foo()
> 
> How many times does the string get copied?

All of those just move around pointers to the same (interned) string.

> How about this?
> 
>   kcount = 1000
>   s = ''
>   for i in range(kcount) :
>       s += str(i) + ' '
> 
> Is this O(N) or O(N^2) because of recopying of "s"?

This is a well-known (indeed, the canonical) example of quadratic behavior 
in Python.  The standard solution is to store all the strings (again, 
really just pointers to the strings) in a list, then join all the elements:

  temp = []
  for i in range (1000):
      temp.append (str(i))
  s = "".join (temp)

That ends up copying each string once (during the join operation), and is 
O(N) overall.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How naive is Python?

2007-01-14 Thread Steven D'Aprano
On Mon, 15 Jan 2007 03:25:01 +, John Nagle wrote:

> How naive (in the sense that compiler people use the term)
> is the current Python system?  For example:
> 
>   def foo() :
>       s = "This is a test"
>       return(s)
> 
>   s2 = foo()
> 
> How many times does the string get copied?


Let's find out. Results below for Python 2.3 -- other versions may vary.

>>> def foo():
...     s = "This is a test"
...     return s
...
>>> def foo2():
...     return "This is a test"
...
>>> import dis
>>> dis.dis(foo)
  2   0 LOAD_CONST   1 ('This is a test')
  3 STORE_FAST   0 (s)

  3   6 LOAD_FAST0 (s)
  9 RETURN_VALUE
 10 LOAD_CONST   0 (None)
 13 RETURN_VALUE
>>> dis.dis(foo2)
  2   0 LOAD_CONST   1 ('This is a test')
  3 RETURN_VALUE
  4 LOAD_CONST   0 (None)
  7 RETURN_VALUE

foo and foo2 functions compile to different byte-code. foo does a little
more work than foo2, but it is likely to be a trivial difference.

>>> s1 = foo()
>>> s2 = foo()
>>> s1 == s2, s1 is s2
(True, True)

So the string "This is a test" within foo is not copied each time the
function is called. However, the string "This is a test" is duplicated
between foo and foo2 (the two functions don't share the same string
instance):

>>> s3 = foo2()
>>> s3 == s1, s3 is s1
(True, False)


 
> Or, for example:
> 
>  s1 = "Test1"
>   s2 = "Test2"
>   s3 = "Test3"
>   s = s1 + s2 + s3
> 
> Any redundant copies performed, or is that case optimized?

I don't believe it is optimized. I believe that in Python 2.5 simple
numeric optimizations are performed, so that "x = 1 + 3" would compile to
"x = 4", but I don't think that holds for strings. If you are running 2.5,
you could find out with dis.dis.



> How about this?
> 
>   kcount = 1000
>   s = ''
>   for i in range(kcount) :
>       s += str(i) + ' '
> 
> Is this O(N) or O(N^2) because of recopying of "s"?

That will be O(N**2), except that CPython version 2.4 (or is it 2.5?) can,
sometimes, optimize it. Note that this is an implementation detail, and
doesn't hold for other Pythons like Jython, IronPython or PyPy.



-- 
Steven D'Aprano 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write code to get focuse the application which is open from server

2007-01-14 Thread vithi
Hi Paul,
Since your reply I have tried pywinauto. I was able to get control of a
window, which is good news, but it is not working for the sub-windows
(popup windows) this main window creates.
e.g. File -> Print will open the print window, but the code does not
click the Print button. I have tried several combinations;
app,Print.OK.CloseClick() or app,Print.OK.Click() is not working.

my code goes like this:
app=application.Application()
qi = app.window_(title_re = ".*arcMap.*")
qi.TypeKeys("%FP")
app,Print.OK.Click()

The last line of code is not working. The same thing happens with File ->
Open: qi.TypeKeys("%FO") creates the popup window "Open", but
app,Open.Filename.SetEditText("test1,txt") is not working. Any help to
overcome this problem? (Here "Print" and "Open" are window titles I am
using without really understanding why. Any help?)





Paul McGuire wrote:
> "vinthan" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > hi,
> > I am new to python. I have to write test cases in python. An
> > application is open in the desk top ( application writen in .Net) I
> > have to write code to get focuse the application and click on the link
> > which in the one side  and it will load the map on the other and I have
> > to check map is loaded. Any one tell me how do I use Dispatch or any
> > other method to write a code.
> >
> If you are running on Windows, look into pywinauto
> (http://www.openqa.org/pywinauto/).
>
> I have successfully used it to interact with a Flash animation running
> within an IE browser.
>
> I also had to inspect the graphics displayed by the Flash animation, for
> this I used PIL (http://www.pythonware.com/products/pil/).
> 
> Good luck,
> -- Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Maths error

2007-01-14 Thread Hendrik van Rooyen
"Dennis Lee Bieber" <[EMAIL PROTECTED]>wrote:


> On Sun, 14 Jan 2007 07:18:11 +0200, "Hendrik van Rooyen"
> <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
> 
> > 
> > I recall an SF character known as "Slipstick Libby",
> > who was supposed to be a Genius - but I forget
> > the setting and the author.
> >
> Robert Heinlein. Appears in a few of the Lazarus Long books.
>  
> > It is something that has become quietly extinct, and
> > we did not even notice.
> >
> And get collector prices --
> http://www.sphere.bc.ca/test/sruniverse.html

Thanks Dennis - Fascinating site !

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Class list of a module

2007-01-14 Thread Laurent . LAFFONT-ST
Hi,

I want to get all the classes of a module in a list. I wrote this code, but
I wonder whether there's a simpler solution.


import inspect

def getClassList(aModule):
    return [getattr(aModule, attName) \
            for attName in aModule.__dict__ \
            if inspect.isclass(getattr(aModule, attName))]

Regards,

Laurent Laffont 
-- 
http://mail.python.org/mailman/listinfo/python-list