Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Martin v. Löwis
> If Py_None corresponds to None in Python syntax (sorry I'm not familiar
> with Python internals yet; glad you are commenting, since you are), then
> it is a fixed constant and could be left global, probably.

If None remains global, then type(None) also remains global, and so does
type(None).__bases__[0]. Then type(None).__bases__[0].__subclasses__()
will yield "interesting" results. This is essentially the status quo.

> But if we
> want a separate None for each interpreter, or if we just use Py_None as
> an example global variable to use to answer the question then here goes

There are a number of problems with that approach. The biggest one is
that it is theoretical. Of course I'm aware of thread-local variables,
and the abstract possibility of collecting all global variables in
a single data structure (in fact, there is already an interpreter
structure and per-interpreter state in Python). I wasn't claiming that
it was impossible to solve that problem - just that it is not simple.
If you want to find out what all the problems are, please try
implementing it for real.

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: PIL: Getting a two color difference between images

2008-10-25 Thread Lie Ryan
On Fri, 24 Oct 2008 14:51:07 -0500, Kevin D. Smith wrote:

> I'm trying to get the difference of two images using PIL.  The
> ImageChops.difference function does almost what I want, but it takes the
> absolute value of the pixel difference.  What I want is a two color
> output image: black where the image wasn't different, and white where it
> was different.  Right now I get black where it wasn't different, and
> abs(image1-image2) where it was different.
> 
> It would be nice if I could specify the colors for difference and no
> difference.  This sounds like it should be easy, but I just don't see
> how to do it.
> 
> --
> Kevin D. Smith

Use the Image.point() method.

Also, see the PIL Handbook: http://www.pythonware.com/library/pil/handbook/index.htm
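
For instance, a minimal sketch combining ImageChops.difference() with
point() (the convert("L") step and the zero threshold are my assumptions;
adjust for your image modes):

import Image, ImageChops

im1 = Image.open("first.png")
im2 = Image.open("second.png")

diff = ImageChops.difference(im1, im2).convert("L")   # per-pixel absolute difference
mask = diff.point(lambda p: 255 if p else 0)          # white where different, black elsewhere
mask.save("mask.png")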

--
http://mail.python.org/mailman/listinfo/python-list


Re: Urllib vs. FireFox

2008-10-25 Thread Lie Ryan
On Fri, 24 Oct 2008 20:38:37 +0200, Gilles Ganault wrote:

> Hello
> 
> After scratching my head as to why I failed finding data from a web
> using the "re" module, I discovered that a web page as downloaded by
> urllib doesn't match what is displayed when viewing the source page in
> FireFox.
> 

Cookies?
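
If cookies are the cause, a sketch of fetching with cookie handling and a
browser-like User-Agent (the module choices below are the usual suspects,
not anything from Gilles' post):

import urllib2, cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
html = opener.open('http://www.example.com/').read()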

--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-25 Thread Hendrik van Rooyen

Steven D'Aprano  wrote:

>On Fri, 24 Oct 2008 14:53:19 +, Peter Pearson wrote:
>
>> On 24 Oct 2008 13:17:45 GMT, Steven D'Aprano wrote:
>>>
>>> What are programmers coming to these days? When I was their age, we
>>> were expected to *read* the error messages our compilers gave us, not
>>> turn to the Interwebs for help as soon there was the tiniest problem.
>> 
>> Yes, and what's more, the text of the error message was "IEH208".  After
>> reading it several times, one looked it up in a big fat set of books,
>> where one found the explanation:
>> 
>>   IEH208: Your program contains an error. Correct the error and resubmit
>>   your job.
>> 
>> An excellent system for purging the world of the weak and timid.
>
>You had reference books? You were lucky! When I was lad, we couldn't 
>afford reference books. If we wanted to know what an error code meant, we 
>had to rummage through the bins outside of compiler vendors' offices 
>looking for discarded documentation.

eee!  You were Lucky!

You had Compilers!
You had Compiler Vendors!

When I was lad, we had nowt but raw hardware.
We had to sit in cold room, ears deafened by
whine of fan, clicking switches to load our
octal in computer. We just had error light...

- Hendrik


--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-25 Thread Lie Ryan
On Fri, 24 Oct 2008 11:06:54 -0700, Reckoner wrote:

> I have multiple packages that have many of the same function names. Is
> it possible to do
> 
> from package1 import *
> from package2 import *
> 
> without overwriting similarly named objects from package1 with material
> in package2? How about a way to do this that at least gives a warning?

That (overwritten names) is exactly the reason why wildcard import should 
be avoided.

Use:
from package1 import blah
import package2

But avoid:
from package3 import *
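
If you really do want the star imports plus a warning, a rough sketch (the
warn_on_clashes helper is hypothetical, not a stdlib feature; package1 and
package2 are the packages from the question):

import warnings

def warn_on_clashes(mod1, mod2):
    def public(m):
        return set(getattr(m, '__all__',
                           [n for n in dir(m) if not n.startswith('_')]))
    clashes = public(mod1) & public(mod2)
    if clashes:
        warnings.warn("names defined in both: %s" % ", ".join(sorted(clashes)))

import package1, package2
warn_on_clashes(package1, package2)
from package1 import *
from package2 import *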


--
http://mail.python.org/mailman/listinfo/python-list


set/dict comp in Py2.6

2008-10-25 Thread bearophileHUGS
I'd like to know why Python 2.6 doesn't have the syntax to create sets/
dicts of Python 3.0, like:

{x*x for x in xrange(10)}
{x:x*x for x in xrange(10)}

Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Lie Ryan
>>> I want to write something that handles every char immediately after
>>> its input. Then the user doesn't need to type [RETURN] each time. How can I
>>> do this?
>>>
>>> Thanks in advance.

Don't you think that getting a one-character from console is something 
that many people do very often? Do you think that all these platform 
independent code should be moved to the interpreter level instead (and 
raises the appropriate error when the platform somehow cannot do 
unbuffered input)? So python developer could do something like this:

raw_input(buffering = 0)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Lie Ryan
On Wed, 22 Oct 2008 10:43:35 -0700, bearophileHUGS wrote:

> Mr.SpOOn:
>> Is there another convenient structure or shall I use lists and define
>> the operations I need?
> 
> 
> As Python becomes accepted for more and more "serious" projects some
> more data structures can eventually be added to the collections module:
> - SortedSet, SortedDict: can be based on red-black trees. Require items
> to be sortable (they don't need to be hashable, but it's probably safer
> if they are immutable).
> - Odict: a dict ordered according to insertion order.
> - Bidict: an unordered dict that allows O(1) retrieval on both keys and
> values (which are both unique).
> - Graph: a directed unsorted data structure like mine may be acceptable
> too.
> - Bitset: dynamically re-sizable and efficient in space and time, easy
> to implement in C.
> - Chain: simulates a double linked list, but its implementation can be
> similar to the current Deque but allowing not completely filled blocks
> in the middle too. (I haven't named it just List because there's a name
> clash with the list()).
> - I use all those data structures in Python programs, plus some more,
> like interval map, trie (and a dawg), persistent dict and persistent
> list, kd-tree, BK-tree, Fibonacci Heap, a rank & select, a disjoint-
> set, and maybe more. But those are uncommon enough to be left out of a
> standard library.
> - A problem with the Chain data structure is how to represent iterators
> in Python. I think this is a big problem, that I don't know how to solve
> yet. A possible solution is to make them owned by the Chain itself, but
> this makes the code slow down linearly in accord to the number of the
> iterators. If someone has a solution I'm all ears. 
> 
> Bye,
> bearophile


Since Python is a dynamic language, I think it should be possible to do 
something like this:

a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')
b = dict({'a': 'A'}, implementation = 'binarytree')
c = dict({'a': 'A'}, implementation = 'binarytree')

i.e. basically since a data structure can have different implementations, 
and different implementations have different performance characteristics, 
it should be possible to dynamically change the implementation used.

In the far future, the data structure and its implementation could be 
abstracted even further:

a = list() # ordered list
b = set() # unordered list
c = dict() # unordered dictionary
d = sorteddict() # ordered dictionary

Each of the implementations would share a common subset of methods and 
possibly a few implementation-dependent methods that could only work on 
certain implementations (or are extremely slow except in the correct 
implementation).
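
A rough sketch of what such an implementation-selecting factory could look
like today (the 'implementation' keyword and the registry are purely
hypothetical):

_dict_implementations = {'hashtable': dict}

def make_dict(data=(), implementation='hashtable'):
    try:
        factory = _dict_implementations[implementation]
    except KeyError:
        raise ValueError("unknown implementation: %r" % implementation)
    return factory(data)

d = make_dict({'a': 'A'})     # a plain hash-table dict
# _dict_implementations['binarytree'] = BinaryTreeDict   # if such a class existed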


--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 08:36:32 +, Lie Ryan wrote:

 I want to write something that handles every char immediately after
 its input. Then the user doesn't need to type [RETURN] each time. How
 can I do this?

 Thanks in advance.
> 
> Don't you think that getting a one-character from console is something
> that many people do very often? 

No.

I can't think of any modern apps that use one character commands like 
that. One character plus a modifier (ctrl or alt generally) perhaps, but 
even there, it's mostly used in GUI applications.


> Do you think that all these platform
> independent code should be moved to the interpreter level instead 

Absolutely not! There's no need for it to be given a keyword or special 
syntax.

But maybe there should be a standard library function for it. 


> (and
> raises the appropriate error when the platform somehow cannot do
> unbuffered input)? So python developer could do something like this:
> 
> raw_input(buffering = 0)

No. Leave raw_input as it is. A better interface would be:

import input_services
c = input_services.get_char()

Eventually the module could grow other services as well.
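
A sketch of what such a get_char() could look like (the module name is
Steven's hypothetical; the msvcrt/termios dance is the usual recipe, not
tested here):

import sys

try:
    import msvcrt                          # Windows
    def get_char():
        return msvcrt.getch()
except ImportError:
    import termios, tty                    # POSIX terminals
    def get_char():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)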





-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: lxml removing tag, keeping text order

2008-10-25 Thread Stefan Behnel
Tim Arnold schrieb:
> Hi,
> Using lxml to clean up auto-generated xml to validate against a dtd; I need 
> to remove an element tag but keep the text in order. For example
> s0 = '''
> 
>first text
> ladida
> emphasized text
> middle text
> 
> last text
>   
> '''
> 
> I want to get rid of the  tag but keep everything else as it is; 
> that is, I need this result:
> 
> 
>first text
> ladida
> emphasized text
> middle text
> 
> last text
>   
> 

There's a drop_tag() method in lxml.html (lxml/html/__init__.py) that does
what you want. Just copy the code over to your code base and adapt it as needed.
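
For plain lxml.etree trees, a minimal sketch along the same lines (modelled
from memory on lxml.html's drop_tag; test it against your own documents):

def drop_tag(el):
    # Remove el, merging its text/tail into the surrounding nodes and
    # promoting its children into its place, so document order is kept.
    # Usage: for el in tree.iter('tagname'): drop_tag(el)   # 'tagname' is a placeholder
    parent = el.getparent()
    previous = el.getprevious()
    if el.text:
        if previous is None:
            parent.text = (parent.text or '') + el.text
        else:
            previous.tail = (previous.tail or '') + el.text
    if el.tail:
        if len(el):
            el[-1].tail = (el[-1].tail or '') + el.tail
        elif previous is None:
            parent.text = (parent.text or '') + el.tail
        else:
            previous.tail = (previous.tail or '') + el.tail
    index = parent.index(el)
    parent[index:index + 1] = el[:]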

Stefan
--
http://mail.python.org/mailman/listinfo/python-list


Re: set/dict comp in Py2.6

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 01:13:08 -0700, bearophileHUGS wrote:

> I'd like to know why Python 2.6 doesn't have the syntax to create sets/
> dicts of Python 3.0, like:
> 
> {x*x for x in xrange(10)}
> {x:x*x for x in xrange(10)}

Maybe nobody asked for it?

Personally, I don't see the advantage of set and dict comprehensions. I 
think the value of them is very marginal, not worth the additional syntax.

set([x*x for x in xrange(10)])
dict((x, x*x) for x in xrange(10))

work perfectly well using the existing syntax.


-- 
Steven

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 08:58:18 +, Lie Ryan wrote:

> 
> Since python is dynamic language, I think it should be possible to do
> something like this:
> 
> a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')
> b = dict({'a': 'A'}, implementation = 'binarytree') 
> c = dict({'a': 'A'}, implementation = 'binarytree')

Oh I hope not. I think you have mistaken "dynamic" for "chaotic".

When I see a dict, I want to know that any two dicts work the same way. I 
don't want to have to search the entire project's source code to find out 
if it is a dict implemented as a hash table with O(1) lookups, or a dict 
implemented as a binary tree with O(log N) lookups, or a dict implemented 
as a linear array with O(N) lookups.

If I wanted that sort of nightmare, I can already do it by shadowing the 
builtin:

dict = binarytree
D = dict({'a': 'A'})  # make a binary tree

There is no possible good that can come from this suggestion. The beauty of 
Python is that the built-in data structures (list, dict, set) are 
powerful enough for 99% of uses[1], and for the other 1%, you can easily 
and explicitly use something else.

But *explicitly* is the point. There's never any time where you can do 
this:

type(mydict) is dict

and not know exactly what performance characteristics mydict will have. 
(Unless you shadow dict or type, or otherwise do something that breaks 
the rules.) You never need to ask, "Okay, it's a dict. What sort of dict?"

If you want a binary tree, ask for a binary tree.






[1] Your mileage may vary.


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Michael Sparks
Hi Andy,


Andy wrote:

> However, we require true thread/interpreter
> independence so python 2 has been frustrating at time, to say the
> least.  Please don't start with "but really, python supports multiple
> interpreters" because I've been there many many times with people.
> And, yes, I'm aware of the multiprocessing module added in 2.6, but
> that stuff isn't lightweight and isn't suitable at all for many
> environments (including ours).

This is a conflicting set of statements: whilst you appear to be
extremely clear on what you want here, and on why multiprocessing and
associated techniques are not appropriate, the requirements themselves
still sound contradictory. I'm guessing I'm not the only person who finds
this a little odd.

Based on the size of the thread, having read it all, I'm guessing also
that you're not going to have an immediate solution but a work around.
However, also based on reading it, I think it's a usecase that would be
generally useful in embedding python.

So, I'll give it a stab as to what I think you're after.

The scenario as I understand it is this:
* You have an application written in C,C++ or similar.
* You've been providing users the ability to script it or customise it
  in some fashion using scripts.

Based on the conversation:
* This worked well, and you really liked the results, but...
* You only had one interpreter embedded in the system
* You were allowing users to use multiple scripts

Suddenly you go from a single script with a single memory space
to multiple scripts with an unconstrained, shared memory space.

That then causes pain for you and your users. So as a result, you decided to
look for this scenario:
* A mechanism that allows each script to think it's the only script
  running on the python interpreter.
* But to still have only one embedded instance of the interpreter.
* With the primary motivation to eliminate the unconstrained shared
  memory causing breakage to your software.

So, whilst the multiprocessing module gives you this:
* With the primary motivation to eliminate the unconstrained shared
  memory causing breakage to your software.

It's (for whatever reason) too heavyweight for you, due to the multiprocess
usage. At a guess the reason for this is because you allow the user to run
lots of these little scripts.

Essentially what this means is that you want "green processes".

One way of achieving that may be to find a way to force threads in
python to ONLY be allowed access to (and only update) thread-local values,
rather than defaulting to shared values.

The reason I say that, is because the closest you get to green processes in
python at the moment is /inside/ a python generator. It's nowhere near the
level you want, but it's what made me think of the idea of green processes.

Specifically if you have the canonical example of a python generator:

def fib():
    a, b = 1, 1
    while 1:
        a, b = b, a + b
        yield a

Then no matter how many times I run that, the values are local, and can't
impact each other. Now clearly this isn't what you want, but on some level
it's *similar*.

You want to be able to do:
run(this_script)

and then when (this_script) is running only use a local environment.

Now, if you could change the threading API, such that there was a means of
forcing all value lookups to look in thread local store before looking
outside the thread local store [1], then this would give you a much greater
level of safety.

[1] I don't know if there is or isn't I've not been sufficiently interested
to look...

I suspect that this would also be a very nice easy win for many
multi-threaded applications as well, reducing accidental data sharing.
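
For anyone who hasn't played with it, a small illustration of the per-thread
behaviour that already exists (plain Python 2.x threading, nothing new
proposed here):

import threading

local = threading.local()              # one object, per-thread contents

def worker(tag):
    local.value = tag                  # only this thread sees this attribute
    print "%s sees %r" % (threading.currentThread().getName(), local.value)

threads = [threading.Thread(target=worker, args=("value-%d" % i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()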

Indeed, reversing things such that rather than doing this:
    myLocal = threading.local()
    myLocal.X = 5

Allowing a thread to force the default to be the other way round:
    systemGlobals = threading.globals()
    systemGlobals.X = 5

Would make a big difference. Furthermore, it would also mean that the
following:
   import MyModule
   from MyOtherModule import whizzy thing

I don't know if such a change would be sufficient to stop the python
interpreter going bang for extension modules though :-)

I suspect also that this change, whilst potentially fraught with
difficulties, would be incredibly useful in python implementations
that are GIL-free (such as Jython or IronPython)

Now, this for me is entirely theoretical because I don't know much about
python's threading implementation (because I've never needed to), but it
does seem to me to be the easier win than looking for truly independent
interpreters...

It would also be more generally useful, since it would make accidental
sharing of data (which is where threads really hurt people most) much
harder.

Since it was raised in the thread, I'd like to say "use Kamaelia", but your
usecase is slightly different as I understand it. You want to take existing
stuff that won't be written in any particular w

Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Michael Sparks
Andy O'Meara wrote:

> Yeah, that's the idea--let the highest levels run and coordinate the
> show.

Yes, this works really well in python and it's lots of fun. We've found so
far you need at minimum the following parts to a co-ordination little
language:

Pipeline
Graphline
Carousel
Seq
OneShot
PureTransformer
TPipe
Filter
Backplane
PublishTo
SubscribeTo

The interesting thing to me about this is in most systems these would be
patterns of behaviour in activities, whereas in python/kamaelia these are
concrete things you can drop things into. As you'd expect this all becomes
highly declarative.

In practice the world is slightly messier than a theoretical document would
like to suggest, primarily because if you consider things like pygame,
sometimes you have only have a resource instantiated once in a single
process. So you do need a mechanism for advertising services inside a
process and looking those up. (The Backplane idea though helps with
wrapping those up a lot I admit, for certain sorts of service :)

And sometimes you do need to just share data, and when you do that's when
STM is useful.

But concurrent python systems are fun to build :-)


Michael.
-- 
http://www.kamaelia.org/GetKamaelia

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Michael Sparks
Glenn Linderman wrote:

> In the module multiprocessing environment could you not use shared
> memory, then, for the large shared data items?

If the poshmodule had a bit of TLC, it would be extremely useful for this,
since it does (surprisingly) still work with python 2.5, but does need a
bit of TLC to make it usable.

http://poshmodule.sourceforge.net/


Michael
--
http://www.kamaelia.org/GetKamaelia
--
http://mail.python.org/mailman/listinfo/python-list


Re: Global dictionary or class variables

2008-10-25 Thread Fuzzyman
On Oct 24, 9:44 pm, Mr.SpOOn <[EMAIL PROTECTED]> wrote:
> Hi,
> in an application I have to use some variables with fixed valuse.
>
> For example, I'm working with musical notes, so I have a global
> dictionary like this:
>
> natural_notes = {'C': 0, 'D': 2, 'E': 4 }
>
> This actually works fine. I was just thinking if it wasn't better to
> use class variables.
>
> Since I have a class Note, I could write:
>
> class Note:
>     C = 0
>     D = 2
>     ...
>
> Which style maybe better? Are both bad practices?

I would *probably* find 'Note.C' more natural to use than
"natural_notes['C']".

Michael Foord

--
http://www.ironpythoninaction.com/
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to examine the inheritance of a class?

2008-10-25 Thread Fuzzyman
On Oct 24, 7:27 pm, Derek Martin <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 11:59:46AM +1000, James Mills wrote:
> > On Fri, Oct 24, 2008 at 11:36 AM, John Ladasky <[EMAIL PROTECTED]> wrote:
> > > etc.  The list of subclasses is not fully defined.  It is supposed to
> > > be extensible by the user.
>
> > Developer. NOT User.
>
> It's a semantic argument, but John's semantics are fine.  A library is
> code intended to be consumed by developers.  The developers *are* the
> users of the library.  *End users* use applications, not libraries.


Except in the case of user scripting where end users of your
applications may well be using your APIs. :-)

Michael Foord

--
http://www.ironpythoninaction.com/
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Michael Sparks
Andy O'Meara wrote:

> basically, it seems that we're talking about the
> "embarrassingly parallel" scenario raised in that paper

We build applications in Kamaelia and then discover afterwards that they're
embarrassingly parallel and just work. (we have an introspector that can
look inside running systems and show us the structure that's going on -
very useful for debugging)

My current favourite example of this is a tool created to teach small
children to read and write:
   http://www.kamaelia.org/SpeakAndWrite

Uses gesture recognition and speech synthesis, has a top level view of
around 15 concurrent components, with significant numbers of nested ones.

(OK, that's not embarrassingly parallel since it's only around 50 things,
but the whiteboard with around 200 concurrent things is.)

The trick is to stop viewing concurrency as the problem, but to find a way
to use it as a tool for making it easier to write code. That program was a
10 hour or so hack. You end up focussing on the problem you want to solve,
and naturally gain a concurrent friendly system.

Everything else (GIL's, shared memory etc) then "just" becomes an
optimisation problem - something only to be done if you need it.

My previous favourite examples were based around digital TV, or user
generated content transcode pipelines.

My reason for preferring the speak-and-write example at the moment is that
it's a problem you wouldn't normally think of as benefiting from
concurrency, when in this case it benefited by being made easier to write
in the first place.

Regards,



Michael
--
http://www.kamaelia.org/GetKamaelia

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Michael Sparks
Jesse Noller wrote:

> http://www.kamaelia.org/Home

Thanks for the mention :)

I don't think it's a good fit for the original poster's question, but a
solution to the original poster's question would be generally useful IMO,
_especially_ on python implementations without a GIL (where threads are the
more natural approach to using multiple processes & multiple processors).

The approach I think would be useful would perhaps be to allow python to
have some concept of "green processes" - that is, threads that can only see
thread-local values, or that search/update thread-local space before
checking globals, i.e. flipping

   X = threading.local()
   X.foo = "bar"

To something like:
   X = greenprocesses.shared()
   X.foo = "bar"

Or even just changing the search for values from:
   * Search local context
   * Search global context

To:
   * Search thread local context
   * Search local context
   * Search global context

Would probably be quite handy, and eliminate whole classes of bugs for
people using threads. (It would probably introduce all sorts of new ones of
course, but perhaps ones that are easier to isolate.)

However, I suspect this is also *a lot* easier to say than to implement :-)

(that said, I did hack on the python internals once (cf pep 318) so it might
be quite pleasant to try)

It's also independent of any discussions regarding the GIL of course since
it would just make life generally safer for people.

BTW, regarding Kamaelia - and something you said on your blog - whilst the
components list on /Components looks like a large amount of extra stuff you
have to comprehend in order to use it, you don't. (The interdependency
between components is actually very low.)

The core that someone needs to understand is the contents of this:
http://www.kamaelia.org/MiniAxon/

Which is sufficient to get someone started. (based on testing with a couple
of dozen novice developers now :)

If someone doesn't want to rewrite their app to be kamaelia based, they can
cherry pick stuff, by running kamaelia's scheduler in the background and
using components in a file-handle like fashion:
* http://www.kamaelia.org/AxonHandle

The reason /Components contains all those things isn't because we're trying
to make it into a swiss army knife, it's because it's been useful in
domains that have generated those components which are generally
reusable :-)



Michael.
--
http://www.kamaelia.org/GetKamaelia

--
http://mail.python.org/mailman/listinfo/python-list


why asynchat's initiate_send() gets called twice after reconnect?

2008-10-25 Thread davy zhang
Python 3.0rc1, Windows XP

In Lib\asynchat.py:

    def handle_write(self):
        self.initiate_send()

    def push(self, data):
        sabs = self.ac_out_buffer_size
        if len(data) > sabs:
            for i in range(0, len(data), sabs):
                self.producer_fifo.append(data[i:i+sabs])
        else:
            self.producer_fifo.append(data)
        self.initiate_send()

When there's only a one-time connection, the object works just fine, but
problems come up when the client disconnects and reconnects to the server.
It seems there are two paths that call initiate_send(): one from push(),
which I call in my program, and one from handle_write(), which is called
automatically from asyncore.loop(). I just can't see why a single
connection works fine but repeated connections go bad.

I printed the traceback. I found that when only one connection is made,
handle_write() always stays silent, but on the second connection it gets
called and starts to call initiate_send() at the same time as push() gets
called. So confusing.



So I tried to remove the initiate_send() call from push() and the code
magically works fine for me.

The main program is listed below. Since it needs a Flash client, I attached
a web page to reproduce the problem: click on the connect button multiple
times, then clicking on the send button will trigger the error.

import asyncore, asynchat
import os, socket, string
from multiprocessing import Process,Manager
import pickle
import _thread
import threading

PORT = 80

policyRequest = b""
policyReturn = b"""

 \x00"""

def handler(taskList, msgList):
    while 1:
        print('getting task')
        item = pickle.loads(taskList.get())
        print('item before handle ', item)
        # do something
        item['msg'] += b' handled done'
        msgList.put(pickle.dumps(item))

def findClient(id):
    for item in clients:
        if item.idx == id:
            return item

def pushData(ch, data):
    global pushLock
    pushLock.acquire()
    try:
        ch.push(data)
    finally:
        pushLock.release()


def sender():
    global msgList
    print('thread started')
    while 1:
        item = pickle.loads(msgList.get())
        #print time()
        c = findClient(item['cid'])
        #print time()
        # wrong here: it's not thread safe, needs some wrapper
        #c.push(item['msg'])
        pushData(c, item['msg'])
        print('msg sent ', item['msg'])
        #print time()

class HTTPChannel(asynchat.async_chat):

    def __init__(self, server, sock, addr):
        global cid
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator(b"\x00")
        self.data = b""
        cid += 1
        self.idx = cid
        if not self in clients:
            print('add to clients:', self)
            clients.append(self)

    def collect_incoming_data(self, data):
        self.data = self.data + data
        print(data)

    def found_terminator(self):
        global taskList
        print("found", self.data)
        if self.data == policyRequest:
            pushData(self, policyReturn)
            self.close_when_done()
        else:
            d = {'cid': self.idx, 'msg': self.data}
            taskList.put(pickle.dumps(d))
            self.data = b""

    def handle_close(self):
        if self in clients:
            print('remove from clients:', self)
            clients.remove(self)

class HTTPServer(asyncore.dispatcher):

    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(("", port))
        self.listen(5)

    def handle_accept(self):
        conn, addr = self.accept()
        print('a new customer!')
        HTTPChannel(self, conn, addr)


#
# try it out
if __name__ == "__main__":
   s = HTTPServer(PORT)
   print ("serving at port", PORT, "...")

   #push data lock
   pushLock = threading.Lock()


   clients=[]

   cid = 0

   manager = Manager()

   taskList = manager.Queue()

   msgList = manager.Queue()


   h = Process(target=handler,args=(taskList,msgList))
   h.start()


   _thread.start_new_thread(sender,())
   print('entering loop')
   asyncore.loop()
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread M.-A. Lemburg
These discussion pop up every year or so and I think that most of them
are not really all that necessary, since the GIL isn't all that bad.

Some pointers into the past:

 * http://effbot.org/pyfaq/can-t-we-get-rid-of-the-global-interpreter-lock.htm
   Fredrik on the GIL

 * http://mail.python.org/pipermail/python-dev/2000-April/003605.html
   Greg Stein's proposal to move forward on free threading

 * 
http://www.sauria.com/~twl/conferences/pycon2005/20050325/Python%20at%20Google.notes
   (scroll down to the Q&A section)
   Greg Stein on whether the GIL really does matter that much

Furthermore, there are lots of ways to tune the CPython VM to make
it more or less responsive to thread switches via the various sys.set*()
functions in the sys module.
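
For example (CPython 2.x API), sys.setcheckinterval() controls how many
bytecode instructions run between the points where a thread switch can
occur:

import sys
sys.setcheckinterval(1000)    # default is 100; larger means fewer switch checks
print sys.getcheckinterval()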

Most computing or I/O intense C extensions, built-in modules and object
implementations already release the GIL for you, so it usually doesn't
get in the way all that often.

So you have the option of using a single process with multiple
threads, allowing efficient sharing of data. Or you use multiple
processes and OS mechanisms to share data (shared memory, memory
mapped files, message passing, pipes, shared file descriptors, etc.).

Both have their pros and cons.

There's no general answer to the
problem of how to make best use of multi-core processors, multiple
linked processors or any of the more advanced parallel processing
mechanisms (http://en.wikipedia.org/wiki/Parallel_computing).
The answers will always have to be application specific.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Oct 25 2008)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


 Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot build _multiprocessing, math, mmap and readline of Python 2.6 on FreeBSD 4.11 w/ gcc 2.95.4

2008-10-25 Thread M.-A. Lemburg
On 2008-10-25 08:39, Akira Kitada wrote:
> Hi list,
> 
> I was trying to build Python 2.6 on FreeBSD 4.11 and found it failed
> to build some of the modules.
> 
> """
> Failed to find the necessary bits to build these modules:
> _bsddb _sqlite3   _tkinter
> gdbm   linuxaudiodev  spwd
> sunaudiodev
> To find the necessary bits, look in setup.py in detect_modules() for
> the module's name.
> 
> 
> Failed to build these modules:
> _multiprocessing   math   mmap
> readline
> """
> 
> Because I don't have Berkeley DB, SQLite3 tk, GDBM installed on the
> system and running FreeBSD,
> there is no wonder it failed to build  _bsddb, _sqlite3, _tkinter,
> gdbm, linuxaudiodev, spwd and sunaudiodev.
> 
> The problem is it failed to build _multiprocessing, math, mmap and readline.

Please post a bug report on python.org about these failures.

The multiprocessing module is still fairly new and obviously needs
more fine tuning for the large set of platforms on which Python
can run. However, please also note that FreeBSD4 is a rather old
version of that OS. FWIW: Python 2.6 compiles just fine on FreeBSD6.

Thanks.

> Here are the outputs of each build failure.
> 
> """
> building '_multiprocessing' extension
> creating 
> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/_multiprocessing
> gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
> -Wstrict-prototypes -DHAVE_SEM_OPEN=1 -DHAVE_FD_TRANSFER=1
> -DHAVE_SEM_TIMEDWAIT=1 -IModules/_multiprocessing -I.
> -I/usr/home/build/dev/Python-2.6/./
> Include -I. -IInclude -I./Include -I/usr/local/include
> -I/usr/home/build/dev/Python-2.6/Include
> -I/usr/home/build/dev/Python-2.6 -c
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c
> -o b
> uild/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.o
> In file included from
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.h:24,
>  from
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:9:
> /usr/include/arpa/inet.h:89: warning: parameter has incomplete type
> /usr/include/arpa/inet.h:92: warning: parameter has incomplete type
> /usr/include/arpa/inet.h:96: warning: parameter has incomplete type
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:
> In function `multiprocessing_sendfd':
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:102:
> storage size of `dummy_iov' isn't known
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:102:
> warning: unused variable `dummy_iov'
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:
> In function `multiprocessing_recvfd':
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:137:
> storage size of `dummy_iov' isn't known
> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:137:
> warning: unused variable `dummy_iov'
> """
> 
> """
> building 'cmath' extension
> gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
> -Wstrict-prototypes -I. -I/usr/home/build/dev/Python-2.6/./Include -I.
> -IInclude -I./Include -I/usr/local/include
> -I/usr/home/build/dev/Python-2.6/I
> nclude -I/usr/home/build/dev/Python-2.6 -c
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c -o
> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/cmathmodule.o
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function
> `special_type':
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:79: warning:
> implicit declaration of function `copysign'
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function `c_acos':
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:152: warning:
> implicit declaration of function `asinh'
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function `c_atanh':
> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:345: warning:
> implicit declaration of function `log1p'
> gcc -shared 
> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/cmathmodule.o
> -L/usr/local/lib -lm -o
> build/lib.freebsd-4.11-RELEASE-i386-2.6/cmath.so
> building 'math' extension
> gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
> -Wstrict-prototypes -I. -I/usr/home/build/dev/Python-2.6/./Include -I.
> -IInclude -I./Include -I/usr/local/include
> -I/usr/home/build/dev/Python-2.6/I
> nclude -I/usr/home/build/dev/Python-2.6 -c
> /usr/home/build/dev/Python-2.6/Modules/mathmodule.c -o
> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/mathmodule.o
> /usr/home/build/dev/Python-2.6/Modules/mathmodule.c: In function `m_atan2':
> /usr/home/build/dev/Python-2.6/Modules/mathmodule.c:118: warning:
> implicit declaration of function `copysign'
> /usr/home/build/dev/Python-2.6/Modules/mathmodule.c: In function `math_

arrange randomly words in a list

2008-10-25 Thread william paul
Hi:

I have a list that looks like:

name = name1 name2 name3 name4 

and I would like to be able to arrange randomly this list, like:

name = name 2 name 1 name3 name4
name = name4 name2 name1 name3


I have tried with random.shuffle, but still no good result

May I get an example?

Thank you,

William

--
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Enthought Python Distribution - New Release

2008-10-25 Thread Laura Creighton
Thank you Travis.

Very pleased to get this from you.

Congratulations on the new release,
Laura
--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Roel Schroeven

Steven D'Aprano schreef:
I can't think of any modern apps that use one character commands like 
that. One character plus a modifier (ctrl or alt generally) perhaps, but 
even there, it's mostly used in GUI applications.


less, vi, info, top, cfdisk, lynx, links, ... come to mind. I suppose 
there are many more that I can't think of at the moment.


--
The saddest aspect of life right now is that science gathers knowledge
faster than society gathers wisdom.
  -- Isaac Asimov

Roel Schroeven
--
http://mail.python.org/mailman/listinfo/python-list


project in python

2008-10-25 Thread asit
I want to do a project in python.

It should be something based on socket programming, HTML/XML parsing,
etc

please suggest me 
--
http://mail.python.org/mailman/listinfo/python-list


Re: arrange randomly words in a list

2008-10-25 Thread Tim Chase

I have a list that looks like:

name = name1 name2 name3 name4 


and I would like to be able to arrange randomly this list, like:

name = name 2 name 1 name3 name4
name = name4 name2 name1 name3


I have tried with random.shuffle, but still no good result

May I get an example?


I'm not sure what you mean by "still no good result" as using 
random.shuffle works quite nicely:


>>> name = "name1 name2 name3 name4".split()
>>> name
['name1', 'name2', 'name3', 'name4']
>>> import random
>>> random.shuffle(name)
>>> name
['name1', 'name3', 'name4', 'name2']
>>> print ' '.join(name)
name1 name3 name4 name2

which is exactly what you describe...

-tkc




--
http://mail.python.org/mailman/listinfo/python-list


Re: set/dict comp in Py2.6

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 09:07:35 +, Steven D'Aprano wrote:

> On Sat, 25 Oct 2008 01:13:08 -0700, bearophileHUGS wrote:
> 
>> I'd like to know why Python 2.6 doesn't have the syntax to create sets/
>> dicts of Python 3.0, like:
>> 
>> {x*x for x in xrange(10)}
>> {x:x*x for x in xrange(10)}
> 
> Maybe nobody asked for it?
> 
> Personally, I don't see the advantage of set and dict comprehensions. 

In fact, it is good syntactic sugar for set(...)/dict(...) around a 
generator expression.

> I
> think the value of them is very marginal, not worth the additional
> syntax.
> 
> set([x*x for x in xrange(10)])


You should omit the []s, as they force Python to build an intermediate 
list. I'm sure you know this would be a problem for large comprehensions.


> dict((x, x*x) for x in xrange(10))
> 
> work perfectly well using the existing syntax.
> 
> 
> --
> Steven


--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 16:30:55 +0200, Roel Schroeven wrote:

> Steven D'Aprano schreef:
>> I can't think of any modern apps that use one character commands like
>> that. One character plus a modifier (ctrl or alt generally) perhaps,
>> but even there, it's mostly used in GUI applications.
> 
> less, vi, info, top, cfdisk, lynx, links, ... come to mind. I suppose
> there are many more that I can't think of at the moment.

I said modern *wink*

But seriously... point taken.


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


how to pass a dictionary (including chinese characters) through Queue as is?

2008-10-25 Thread ouyang
Hi everyone,
As indicated in the following python script, the dictionary b has
Chinese characters: "中文". But a.get() returns the dictionary with a
slightly different format for the "中文": '\xd6\xd0\xce\xc4'. How
can I get the dictionary through the Queue as is?

>>> import Queue
>>> a = Queue.Queue(0)
>>> b = {'a':'中文','b':1232,'c':'abc'}
>>> a.put(b)
>>> c = a.get()
>>> c
{'a': '\xd6\xd0\xce\xc4', 'c': 'abc', 'b': 1232}

Cheers.

Ouyang
--
http://mail.python.org/mailman/listinfo/python-list


Re: using modules in destructors

2008-10-25 Thread [EMAIL PROTECTED]
It seems to me that deleting local instances before imported modules
would solve the problem. Is it not possible for the interpreter to get
this right? Or are there cases where this would break stuff?

It seems rather unpythonic for the __del__() method to become
unpredictable at exit.
--
http://mail.python.org/mailman/listinfo/python-list


Re: set/dict comp in Py2.6

2008-10-25 Thread Paul Rubin
[EMAIL PROTECTED] writes:
> {x*x for x in xrange(10)}
> {x:x*x for x in xrange(10)}

I've always just used:

set(x*x for x in xrange(10))
dict((x,x*x) for x in xrange(10))

I didn't even realize that you could write sets with {...}.
--
http://mail.python.org/mailman/listinfo/python-list


project in python

2008-10-25 Thread asit
I want to do a project in python.
It should be something based on socket programming, HTML/XML parsing,
etc

plz suggest me 
--
http://mail.python.org/mailman/listinfo/python-list


project in python

2008-10-25 Thread asit
I am a newbie and learned python to some extent.

I want to do some project in python based on network programming or
HTML/XML parsing.

Can anyone suggest me about this ???
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to pass a dictionary (including chinese characters) through Queue as is?

2008-10-25 Thread Jean-Paul Calderone

On Sat, 25 Oct 2008 08:36:22 -0700 (PDT), ouyang <[EMAIL PROTECTED]> wrote:

Hi everyone,
   As indicated in the following python script, the dictionary b has
Chinese characters: "中文". But a.get() returns the dictionary with a
little bit different format for the "中文“:   '\xd6\xd0\xce\xc4' . How
can I get the dictionary through the Queue as is?


import Queue
a = Queue.Queue(0)
b = {'a':'中文','b':1232,'c':'abc'}
a.put(b)
c = a.get()
c

{'a': '\xd6\xd0\xce\xc4', 'c': 'abc', 'b': 1232}



Try printing b before you put it into the Queue.

The Queue isn't doing anything to the objects you pass through it,
you're just surprised at how repr() is presenting the un-altered
data.
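
For example (assuming the source was GBK/GB2312-encoded, which those byte
values suggest):

s = '\xd6\xd0\xce\xc4'          # the byte string stored in the dictionary
print repr(s)                    # '\xd6\xd0\xce\xc4' -- same bytes, shown as escapes
print s                          # a GBK-configured terminal renders it as 中文
print repr(s.decode('gbk'))      # u'\u4e2d\u6587', an explicit unicode object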

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Terry Reedy

Glenn Linderman wrote:
On approximately 10/24/2008 8:39 PM, came the following characters from 
the keyboard of Terry Reedy:

Glenn Linderman wrote:

For example, Python presently has a rather stupid algorithm for 
string concatenation.


Yes, CPython2.x, x<=5 did.

Python the language has syntax and semantics.  Python implementations 
have algorithms that fulfill the defined semantics.


I can buy that, but when Python is not qualified, CPython should be 
assumed, as it predominates.


People do that, and it sometimes leads to unnecessary confusion.  As to 
the present discussion, is it about

* changing Python, the language
* changing all Python implementations
* changing CPython, the leading implementation
* branching CPython with a compiler switch, much as there was one for 
including Unicode or not.

* forking CPython
* modifying an existing module
* adding a new module
* making better use of the existing facilities
* some combination of the above

> Of course, the latest official release

should probably also be assumed, but that is so recent,


People do that, and it sometimes leads to unnecessary confusion.  People 
routinely post version-specific problems and questions without 
specifying the version (or platform when relevant).  In a month or so, 
there will be *2* latest official releases.  There will be more 
confusion without qualification.


few have likely 
upgraded as yet... I should have qualified the statement.


* Is the target of this discussion 2.7 or 3.1 (some changes would be 3.1 
only).


[diversion to the side topic]

If there is more than one reference to a guaranteed immutable object, 
such as a string, the 'stupid' algorithm seems necessary to me.  
In-place modification of a shared immutable would violate semantics.


Absolutely.  But after the first iteration, there is only one reference 
to string.


Which is to say, 'string' is the only reference to the object it refers 
to.  You are right, so I presume that the optimization described would 
then kick in.  But I have not read the code, and CPython optimizations 
are not part of the *language* reference.
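
For context, the two idioms being compared (nothing version-specific about
the second one):

chunks = ["spam"] * 1000

s = ""
for chunk in chunks:     # each += may copy s: O(n**2) overall unless the
    s += chunk           # refcount==1 in-place optimization kicks in

s2 = "".join(chunks)     # linear on any implementation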


[back to the main topic]

There is some discussion/debate/confusion about how much of the stdlib 
is 'standard Python library' versus 'standard CPython library'.  [And 
there is some feeling that standard Python modules should have a default 
Python implementation that any implementation can use until it 
optionally replaces it with a faster compiled version.]  Hence my 
question about the target of this discussion and the first three options 
listed above.


Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: project in python

2008-10-25 Thread Stefan Behnel
asit wrote:
> I am a newbie and learned python to some extent.
> 
> I want to do some project in python based on network programming or
> HTML/XML parsing.
> 
> Can anyone suggest me about this ???

The more you spam people with your repetitive postings, the less likely it
becomes that they are willing to answer you.

There are a lot of projects out there that might make an interesting starting
point for you. Check PyPI.

http://pypi.python.org/pypi?%3Aaction=browse

Stefan
--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-25 Thread Fernando H. Sanches
Also, remember that since the latter functions will always overwrite
the first, you can just reverse the order of the imports:

from package2 import *
from package1 import *

This should preserve the functions of package1 over the other ones.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Building truth tables

2008-10-25 Thread Paul McGuire
On Oct 24, 5:53 am, andrea <[EMAIL PROTECTED]> wrote:
> On 26 Set, 20:01, "Aaron \"Castironpi\" Brady" <[EMAIL PROTECTED]>
> wrote:
>
>
>
> > Good idea.  If you want prefixed operators: 'and( a, b )' instead of
> > 'a and b', you'll have to write your own.  ('operator.and_' is bitwise
> > only.)  It may be confusing to mix prefix with infix: 'impl( a and b,
> > c )', so you may want to keep everything prefix, but you can still use
> > table( f, n ) like Tim said.
>
> After a while I'm back, thanks a lot, the truth table creator works,
> now I just want to parse some strings to make it easier to use.
>
> Like
>
> (P \/ Q) -> S == S
>
> Must return a truth table 2^3 lines...
>
> I'm using pyparsing and this should be really simple, but it doesn't
> allow me to recurse and that makes me stuck.
> The grammar BNF is:
>
> Var :: = [A..Z]
> Exp ::= Var | !Exp  | Exp \/ Exp | Exp -> Exp | Exp /\ Exp | Exp ==
> Exp
>
> I tried different ways but I don't find a smart way to get from the
> recursive bnf grammar to the implementation in pyparsing...
> Any hint?

Use Forward to create a recursive grammar.  Look at the examples page
on the pyparsing wiki, and there should be several samples of
recursive grammars.

Here is a very simple recursive grammar, with no precedence to your
operators:

from pyparsing import oneOf, alphas, Forward, ZeroOrMore, Group, Optional

var = oneOf(list(alphas))
op = oneOf(r"\/ /\ -> ==")
expr = Forward()
expr << Optional('!') + (var | Group('(' + expr + ')')) + ZeroOrMore(op + expr)

test = "(P \/ Q) -> S == S"

print expr.parseString(test).asList()

prints:

[['(', 'P', '\\/', 'Q', ')'], '->', 'S', '==', 'S']


Since these kinds of expressions are common, pyparsing includes a
helper method for defining precedence of operations infix notation:

from pyparsing import operatorPrecedence, opAssoc

expr = operatorPrecedence(var,
[
(r'!', 1, opAssoc.RIGHT),
(r'\/', 2, opAssoc.LEFT),
(r'/\\', 2, opAssoc.LEFT),
(r'->', 2, opAssoc.LEFT),
(r'==', 2, opAssoc.LEFT),
])

print expr.parseString(test).asList()

prints:

[[[['P', '\\/', 'Q'], '->', 'S'], '==', 'S']]

HTH,
-- Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: PIL: Getting a two color difference between images

2008-10-25 Thread bearophileHUGS
Kevin D. Smith:
> What I want is a two color output image: black where the image wasn't 
> different, and white where it was different.<

There are several ways to do that. If speed isn't essential, then you
can create a third blank image of the right size, and then use the
method that iterates on the pixels of an image, and assign p1 != p2 at
every pixel of the third image.

If speed is important you can copy the images into numpy arrays and
then your operation becomes easy.
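
A sketch of that numpy route (file names and modes are mine; assumes two
same-sized RGB images):

import numpy
import Image

a1 = numpy.asarray(Image.open("a.png").convert("RGB"))
a2 = numpy.asarray(Image.open("b.png").convert("RGB"))
mask = (a1 != a2).any(axis=-1)                           # True where any channel differs
out = Image.fromarray((mask * 255).astype(numpy.uint8))  # mode "L": 0 or 255
out.save("diff.png")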

Maybe there are built-in ways in PIL too, I don't know. You can also
find an intermediate solution, like computing the difference image
with PIL and then binarize it manually.

Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-25 Thread Philip Semanchuk


On Oct 25, 2008, at 7:53 AM, Michael Sparks wrote:


Glenn Linderman wrote:


In the module multiprocessing environment could you not use shared
memory, then, for the large shared data items?


If the poshmodule had a bit of TLC, it would be extremely useful for  
this,
since it does (surprisingly) still work with python 2.5, but does  
need a

bit of TLC to make it usable.

http://poshmodule.sourceforge.net/


Last time I checked that was Windows-only. Has that changed?

The only IPC modules for Unix that I'm aware of are one which I  
adopted (for System V semaphores & shared memory) and one which I  
wrote (for POSIX semaphores & shared memory).


http://NikitaTheSpider.com/python/shm/
http://semanchuk.com/philip/posix_ipc/


If anyone wants to wrap POSH cleverness around them, go for it! If  
not, maybe I'll make the time someday.


Cheers
Philip
--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 09:04:01 +, Steven D'Aprano wrote:

> On Sat, 25 Oct 2008 08:36:32 +, Lie Ryan wrote:
> 
> I want to write something that handles every char immediately after
> its input. Then the user doesn't need to type [RETURN] each time. How
> can I do this?
>
> Thanks in advance.
>> 
>> Don't you think that getting a one-character from console is something
>> that many people do very often?
> 
> No.
> 
> I can't think of any modern apps that use one character commands like
> that. One character plus a modifier (ctrl or alt generally) perhaps, but
> even there, it's mostly used in GUI applications.
> 
> 
>> Do you think that all these platform
>> independent code should be moved to the interpreter level instead
> 
> Absolutely not! There's no need for it to be given a keyword or special
> syntax.

By "interpreter level", I meant python's VM including its standard 
libraries (i.e. anywhere but at end-programmer's level), I don't mean it 
should have a keyword or special syntax or anything of that sort.

> But maybe there should be a standard library function for it.
> 
> 
>> (and
>> raises the appropriate error when the platform somehow cannot do
>> unbuffered input)? So python developer could do something like this:
>> 
>> raw_input(buffering = 0)
> 
> No. Leave raw_input as it is. A better interface would be:
> 
> import input_services
> c = input_services.get_char()

That would be fine as well.

> 
> Eventually the module could grow other services as well.
> 



--
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot build _multiprocessing, math, mmap and readline of Python 2.6 on FreeBSD 4.11 w/ gcc 2.95.4

2008-10-25 Thread Akira Kitada
Hi Marc-Andre,

Thanks for the suggestion.
I opened a ticket for this issue: http://bugs.python.org/issue4204

Now I understand the state of the multiprocessing module,
but it's too bad to see math, mmap and readline modules, that worked
fine before,
cannot be built anymore.

As for FreeBSD4, yeah, it's really dated and I understand a newer FreeBSD
would make my life easier, but I would rather see Python continue to
support old systems like this as long as it doesn't get very hard to
maintain a clean code base.

Thanks,

On Sat, Oct 25, 2008 at 10:53 PM, M.-A. Lemburg <[EMAIL PROTECTED]> wrote:
> On 2008-10-25 08:39, Akira Kitada wrote:
>> Hi list,
>>
>> I was trying to build Python 2.6 on FreeBSD 4.11 and found it failed
>> to build some of the modules.
>>
>> """
>> Failed to find the necessary bits to build these modules:
>> _bsddb _sqlite3   _tkinter
>> gdbm   linuxaudiodev  spwd
>> sunaudiodev
>> To find the necessary bits, look in setup.py in detect_modules() for
>> the module's name.
>>
>>
>> Failed to build these modules:
>> _multiprocessing   math   mmap
>> readline
>> """
>>
>> Because I don't have Berkeley DB, SQLite3 tk, GDBM installed on the
>> system and running FreeBSD,
>> there is no wonder it failed to build  _bsddb, _sqlite3, _tkinter,
>> gdbm, linuxaudiodev, spwd and sunaudiodev.
>>
>> The problem is it failed to build _multiprocessing, math, mmap and readline.
>
> Please post a bug report on python.org about these failures.
>
> The multiprocessing module is still fairly new and obviously needs
> more fine tuning for the large set of platforms on which Python
> can run. However, please also note that FreeBSD4 is a rather old
> version of that OS. FWIW: Python 2.6 compiles just fine on FreeBSD6.
>
> Thanks.
>
>> Here are the outputs of each build failure.
>>
>> """
>> building '_multiprocessing' extension
>> creating 
>> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/_multiprocessing
>> gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
>> -Wstrict-prototypes -DHAVE_SEM_OPEN=1 -DHAVE_FD_TRANSFER=1
>> -DHAVE_SEM_TIMEDWAIT=1 -IModules/_multiprocessing -I.
>> -I/usr/home/build/dev/Python-2.6/./
>> Include -I. -IInclude -I./Include -I/usr/local/include
>> -I/usr/home/build/dev/Python-2.6/Include
>> -I/usr/home/build/dev/Python-2.6 -c
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c
>> -o b
>> uild/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.o
>> In file included from
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.h:24,
>>  from
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:9:
>> /usr/include/arpa/inet.h:89: warning: parameter has incomplete type
>> /usr/include/arpa/inet.h:92: warning: parameter has incomplete type
>> /usr/include/arpa/inet.h:96: warning: parameter has incomplete type
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:
>> In function `multiprocessing_sendfd':
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:102:
>> storage size of `dummy_iov' isn't known
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:102:
>> warning: unused variable `dummy_iov'
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:
>> In function `multiprocessing_recvfd':
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:137:
>> storage size of `dummy_iov' isn't known
>> /usr/home/build/dev/Python-2.6/Modules/_multiprocessing/multiprocessing.c:137:
>> warning: unused variable `dummy_iov'
>> """
>>
>> """
>> building 'cmath' extension
>> gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
>> -Wstrict-prototypes -I. -I/usr/home/build/dev/Python-2.6/./Include -I.
>> -IInclude -I./Include -I/usr/local/include
>> -I/usr/home/build/dev/Python-2.6/I
>> nclude -I/usr/home/build/dev/Python-2.6 -c
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c -o
>> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/cmathmodule.o
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function
>> `special_type':
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:79: warning:
>> implicit declaration of function `copysign'
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function `c_acos':
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:152: warning:
>> implicit declaration of function `asinh'
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c: In function `c_atanh':
>> /usr/home/build/dev/Python-2.6/Modules/cmathmodule.c:345: warning:
>> implicit declaration of function `log1p'
>> gcc -shared 
>> build/temp.freebsd-4.11-RELEASE-i386-2.6/usr/home/build/dev/Python-2.6/Modules/cmathmodule.o
>> -L/usr/local/lib -lm -o
>> build/lib.freebsd-4.11-RELEASE-i386-2.6/cmath.so
>> buildi

sqlite version for python 2.6

2008-10-25 Thread James Thiele
I'd like to know which version of sqlite the python 2.6 sqlite3 module
supports.

Any help would be appreciated.

Thanks,
James
--
http://mail.python.org/mailman/listinfo/python-list


collections.chain

2008-10-25 Thread bearophileHUGS
Several languages like Java, C#, etc. have a List type in the std lib.
Python has a built-in list(); it's implemented as a dynamic array that
grows on the right.

Not long ago Hettinger added a collections.deque (its C code is really
nice) that, compared to list(), allows a fast append on the right and a
much faster prepend on the left. It's implemented as a doubly linked
list of fixed-size blocks. All blocks but the first and last are fully
filled, so such blocks don't need to store their length, and the data
structure just needs to store the lengths of the first and last blocks.

In the C++ STL I think the deque can be implemented as a dynamic array
of pointers that point to the start of each fixed-size block. This
allows faster access to items, because you need two lookups and you
don't need to follow the linked list (plus maybe a modulo operation).
I don't know why collections.deque uses a doubly linked list, maybe
because it allows a simpler design (with the dynamic array of pointers
you have to manage it as a circular array, so you need the modulo or
an if).

A double-ended queue covers a lot of the uses of a linked list, but not
all of them. So if enough Python programmers feel the need for a
(light) data structure that allows O(1) removal and insertion of items
at any point, then such a data structure can be created. The name can
be "chain", because it's easy, short, it means the right thing, and
"list" is already taken.

Its implementation can be a normal doubly linked list, but on modern
CPUs those can be not very efficient, so there are a few strategies to
improve that:
http://en.wikipedia.org/wiki/CDR_coding
http://en.wikipedia.org/wiki/Unrolled_linked_list
I think an unrolled doubly linked list is a fitting implementation for
this purpose. This data structure is quite similar to
collections.deque, but each block has to keep the number of items it
contains (note: if experiments show that such overhead in memory and
speed is small enough, then most of the C code of the deque may even
be thrown away, using the chain to implement it).
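
To make those O(1) insert/remove operations concrete, here is a rough
pure-Python sketch of the naive (non-unrolled) doubly linked form; the
names Node and Chain are purely illustrative, not a proposed API:

class Node(object):
    __slots__ = ('value', 'prev', 'next')
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class Chain(object):
    def __init__(self, iterable=()):
        self.head = None
        self.tail = None
        for item in iterable:
            self.append(item)

    def append(self, value):
        node = Node(value)
        node.prev = self.tail
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node
        return node                       # keep this reference for O(1) operations

    def insert_after(self, node, value):  # O(1): no shifting of other items
        new = Node(value)
        new.prev = node
        new.next = node.next
        if node.next is None:
            self.tail = new
        else:
            node.next.prev = new
        node.next = new
        return new

    def remove(self, node):               # O(1) given the node itself
        if node.prev is None:
            self.head = node.next
        else:
            node.prev.next = node.next
        if node.next is None:
            self.tail = node.prev
        else:
            node.next.prev = node.prev

c = Chain("ABCD")
b = c.head.next      # a raw "chainptr" to the 'B' node
c.remove(b)          # constant time, no scanning

An unrolled version would store a small array of items per Node instead
of a single value, as described above.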

Are enough Python programmers interested in such a chain data
structure? Would typical Python programs benefit from using it? I
presume it's not very important, but sometimes I have found a use for
it.

If enough people are interested in this data structure, then I think
there's a problem to be solved: how to manage references into the
chain itself. You need references, otherwise many operations become
O(n), and that makes the chain much less useful.

A simple solution is to create another independent object like
chainptr, that's essentially a pointer to an item of the chain; it can
also become nil/Nil/Null/null, I presume... If you have two chainptrs
that point at the n-th item and you remove the n-th item, the second
chainptr doesn't now point at the (n+1)-th item (that is what happens
with a Python list, where "pointers" are integer indexes); it's a
broken pointer. Etc. Such low-level problems are probably out of place
in Python.

A way to avoid those problems is to make the pointers part of the
chain object itself, so they are kept consistent. I presume there must
be some way to add/remove such indexes dynamically, and I presume in
Python this is not too much of a problem, but that kind of management
slows down the data structure a little. Every operation has to check
the state of all defined indexes, but I presume their number is
usually low (one or two) so it may not be a problem.

I don't have ideas for a good API yet, but if the pointers are part of
the chain, the syntax may become something like:

from collections import chain
d = chain("ABCD")
d.addindex() # creates p0 that points A
d.p0.next()
d.p0.next() # now p0 point C
d.addindex(d.p0) # now p1 point C
d.p0.delete() # now p0 and p1 point D (or nothing?)

Well, that's ugly. But first of all it's important to see if a chain is
useful; if the answer is positive, then I can look for a decent API.

Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list


Re: Perl/Python regular expressions vs. Boost.regex?

2008-10-25 Thread skip

    Rob> Quoting from:
    <http://www.boost.org/doc/libs/1_36_0/libs/regex/doc/html/boost_regex/ref/regex_match.html>

Rob> 
Rob> Important 

Rob> Note that the result is true only if the expression matches the
Rob> whole of the input sequence. If you want to search for an
Rob> expression somewhere within the sequence then use regex_search. If
Rob> you want to match a prefix of the character string then use
Rob> regex_search with the flag match_continuous set.
 
Rob> 

Rob> So yes it does.

Thanks.  I'll try and convince my colleague to use regex_search instead of
regex_match.
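
For anyone mapping this onto Python's re module (a rough analogy only,
not part of the Boost docs): re.search scans anywhere in the string,
re.match anchors only at the start, and a whole-string match needs an
explicit end anchor.

import re

s = "abc123def"
print re.search(r"\d+", s).group()      # '123' -- found anywhere, like regex_search
print re.match(r"\d+", s)               # None  -- must match starting at position 0
print re.match(r"\d+$", "123").group()  # '123' -- whole string, like regex_match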

Skip

--
http://mail.python.org/mailman/listinfo/python-list


Re: sqlite version for python 2.6

2008-10-25 Thread Martin v. Löwis
> I'd like to know which version of sqlite the python 2.6 sqlite3 module
> supports.

When you compile Python, you can choose any version of sqlite that you
want to.
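
To check at runtime which SQLite library a given build ended up with (a
small sketch):

import sqlite3
print sqlite3.sqlite_version   # version of the SQLite library in use
print sqlite3.version          # version of the sqlite3 (pysqlite) module itself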

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Improving interpreter startup speed

2008-10-25 Thread Pedro Borges
Hi guys,


Is there a way to improve the interpreter startup speed?

On my machine (cold startup) python takes 0.330 ms and ruby takes
0.047 ms; after the cold boot python takes 0.019 ms and ruby 0.005 ms to
start.


TIA
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Martin v. Löwis
>> There are a number of problems with that approach. The biggest one is
>> that it is theoretical. 
> 
> Not theoretical.  Used successfully in Perl. 

Perhaps it is indeed what Perl does, I know nothing about that.
However, it *is* theoretical for Python. Please trust me that
there are many many many many pitfalls in it, each needing a
separate solution, most likely with no equivalent in Perl.

If you had a working patch, *then* it would be practical.

> Granted Perl is quite a
> different language than Python, but then there are some basic
> similarities in the concepts.

Yes - just as much as both are implemented in C :-(

> Perhaps you should list the problems, instead of vaguely claiming that
> there are a number of them.  Hard to respond to such a vague claim.

As I said: go implement it, and you will find out. Unless you are
really going at an implementation, I don't want to spend my time
explaining it to you.

> But the approach is sound; nearly any monolithic
> program can be turned into a multithreaded program containing one
> monolith per thread using such a technique.

I'm not debating that. I just claim that it is far from simple.

Regards,
Martin


--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-25 Thread Rafe
On Oct 24, 9:58 am, Peter Otten <[EMAIL PROTECTED]> wrote:
> Rafe wrote:
> > On Oct 24, 2:21 am, Christian Heimes <[EMAIL PROTECTED]> wrote:
> >> Rafewrote:
> >> > Hi,
>
> >> > I've encountered a problem which is making debugging less obvious than
> >> > it should be. The @property decorator doesn't always raise exceptions.
> >> > It seems like it is bound to the class but ignored when called. I can
> >> > see the attribute using dir(self.__class__) on an instance, but when
> >> > called, python enters __getattr__. If I correct the bug, the attribute
> >> > calls work as expected and do not call __getattr__.
>
> >> > I can't seem to make a simple repro. Can anyone offer any clues as to
> >> > what might cause this so I can try to prove it?
>
> >> You must subclass from "object" to get a new style class. properties
> >> don't work correctly on old style classes.
>
> >> Christian
>
> > All classes are a sub-class of object. Any other ideas?
>
> Hard to tell when you don't give any code.
>
> >>> class A(object):
>
> ...     @property
> ...     def attribute(self):
> ...             raise AttributeError
> ...     def __getattr__(self, name):
> ...             return "nobody expects the spanish inquisition"
> ...>>> A().attribute
>
> 'nobody expects the spanish inquisition'
>
> Do you mean something like this? I don't think the __getattr__() call can be
> avoided here.
>
> Peter

You nailed it Peter! I thought __getattr__ was a symptom, not the
cause of the misleading errors. Here is the repro (pretty much
regurgitated):

The expected behavior...

>>> class A(object):
... @property
... def attribute(self):
... raise AttributeError("Correct Error.")
>>> A().attribute
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in attribute
AttributeError: Correct Error.


The unexpected and misleading exception...

>>> class A(object):
... @property
... def attribute(self):
... raise AttributeError("Correct Error.")
... def __getattr__(self, name):
... cls_name = self.__class__.__name__
... msg = "%s has no attribute '%s'." % (cls_name, name)
... raise AttributeError(msg)
>>> A().attribute
Traceback (most recent call last):
  File "<stdin>", line 0, in <module>
  File "<stdin>", line 0, in __getattr__
AttributeError: A has no attribute 'attribute'.


The docs state:
"Called when an attribute lookup has not found the attribute in the
usual places (i.e. it is not an instance attribute nor is it found in
the class tree for self). name is the attribute name. This method
should return the (computed) attribute value or raise an
AttributeError exception."

Can anyone explain why this is happening? I can hack a work-around,
but even then I could use some tips on how to raise the 'real'
exception so debugging isn't guesswork.


Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Andy O'Meara
On Oct 24, 9:52 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> >> A c-level module, on the other hand, can sidestep/release
> >> the GIL at will, and go on it's merry way and process away.
>
> > ...Unless part of the C module execution involves the need do CPU-
> > bound work on another thread through a different python interpreter,
> > right?
>
> Wrong.
>
> > (even if the interpreter is 100% independent, yikes).
>
> Again, wrong.
>
> > For
> > example, have a python C module designed to programmatically generate
> > images (and video frames) in RAM for immediate and subsequent use in
> > animation.  Meanwhile, we'd like to have a pthread with its own
> > interpreter with an instance of this module and have it dequeue jobs
> > as they come in (in fact, there'd be one of these threads for each
> > excess core present on the machine).
>
> I don't understand how this example involves multiple threads. You
> mention a single thread (running the module), and you mention designing
> a  module. Where is the second thread?

Glenn seems to be following me here...  The point is to have as many
threads as the app wants, each in its own world, running without
restriction (performance-wise).
for each extra core on the machine.

Perhaps the disconnect here is that when I've been saying "start a
thread", I mean the app starts an OS thread (e.g. pthread) with the
given that any contact with other threads is managed at the app level
(as opposed to starting threads through python).  So, as far as python
knows, there's zero mention or use of threading in any way,
*anywhere*.


> > As far as I can tell, it seems
> > CPython's current state can't CPU bound parallelization in the same
> > address space.
>
> That's not true.
>

Um...  So let's say you have an opaque object ref from the OS that
represents hundreds of megs of data (e.g. memory-resident video).  How
do you get that back to the parent process without serialization and
IPC?  What should really happen is to just use the same address space,
so that only a pointer changes hands.  THAT's why I'm saying that a
separate address space is generally a deal breaker when you have large
or intricate data sets (i.e. when performance matters).

Andy


--
http://mail.python.org/mailman/listinfo/python-list


ANN: Python programs for epidemic modelling

2008-10-25 Thread I. Soumpasis
Dear lists,

DeductiveThinking.com now provides the Python programs for the book by M.
Keeling & P. Rohani, "Modeling Infectious Diseases in Humans and Animals",
Princeton University Press, 2008. The book has on-line material which
includes programs for different models in various programming languages and
mathematical tools such as "C++, FORTRAN and Matlab, while some are also
coded in the web-based Java programming language to allow readers to quickly
experiment with these types of models", as stated on the website. The
Python versions of the programs were written a while ago and submitted to
the book's on-line material website (available soon). The Python programs,
with the basic equations modelled and the results in figures, have now been
uploaded to a special wiki page of DeductiveThinking.com.

Since the programs heavily use the numpy, scipy and matplotlib libraries,
I am sending this announcement to all three lists and the main python-list;
sorry for double-posting. The announcement with the related links is
posted here http://blog.deductivethinking.com/?p=29. The programs are at
http://wiki.deductivethinking.com/wiki/Python_Programs_for_Modelling_Infectious_Diseases_book.
Those who are interested in modelling and epidemiology can take a
look at the main site (http://deductivethinking.com) or the main page of the
wiki (http://wiki.deductivethinking.com) and follow the epidemiology links.
The website is just getting started, so only limited information has been
uploaded so far.

Thanks for your time and I hope it will be useful for some people,
Best Regards,
Ilias Soumpasis
--
http://mail.python.org/mailman/listinfo/python-list


Re: big objects and avoiding deepcopy?

2008-10-25 Thread Robert Kern

Reckoner wrote:

I am writing an algorithm that takes objects (i.e. graphs with
thousands of nodes) into a "hypothetical" state. I need to keep a
history of these  hypothetical objects depending on what happens to
them later. Note that these hypothetical objects are intimately
operated on, changed, and made otherwise significantly different from
the objects they were copied from.

I've been using deepcopy to push the objects into the hypothetical
state where I operate on them heavily. This is pretty slow since the
objects are very large.

Is there another way to do this without resorting to deepcopy?

by the way, the algorithm works fine. It's just this part of it that I
am trying to change.


This is similar to implementing "Undo" functionality in applications. One 
solution is to define every operation you can do on the data structure as a pair 
of functions, one which does the "forward" operation on the data structure and 
one which does the "backward" operation which will return the modified data 
structure back to its original state. Each time you do a forward operation, 
append the pair of functions to a list (along with any auxiliary data that you 
need). Once you have finished with the hypothetical operations, you can work 
your way backwards through the list, applying the "backward" operations.


This works fairly well if you have a single data structure that you are managing 
this way and a limited set of operations to track. If you have multiple 
interacting objects and a large set of operations, things can become cumbersome.
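
A minimal sketch of the single-structure case, with purely illustrative
names (History, add_node, remove_node are not from any library):

class History(object):
    """Record (backward, args) pairs while applying forward operations."""
    def __init__(self):
        self._undo_stack = []

    def do(self, forward, backward, *args):
        forward(*args)                        # apply the change
        self._undo_stack.append((backward, args))

    def rollback(self):
        while self._undo_stack:               # unwind in reverse order
            backward, args = self._undo_stack.pop()
            backward(*args)

def add_node(graph, name, value):
    graph[name] = value

def remove_node(graph, name, value):
    del graph[name]

graph = {'a': 1}                # stand-in for a big graph object
hist = History()
hist.do(add_node, remove_node, graph, 'b', 2)
print graph                     # 'b' is now present
hist.rollback()
print graph                     # back to {'a': 1}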


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-25 Thread Rafe
On Oct 24, 9:58 am, Peter Otten <[EMAIL PROTECTED]> wrote:
> Rafe wrote:
> > On Oct 24, 2:21 am, Christian Heimes <[EMAIL PROTECTED]> wrote:
> >> Rafewrote:
> >> > Hi,
>
> >> > I've encountered a problem which is making debugging less obvious than
> >> > it should be. The @property decorator doesn't always raise exceptions.
> >> > It seems like it is bound to the class but ignored when called. I can
> >> > see the attribute using dir(self.__class__) on an instance, but when
> >> > called, python enters __getattr__. If I correct the bug, the attribute
> >> > calls work as expected and do not call __getattr__.
>
> >> > I can't seem to make a simple repro. Can anyone offer any clues as to
> >> > what might cause this so I can try to prove it?
>
> >> You must subclass from "object" to get a new style class. properties
> >> don't work correctly on old style classes.
>
> >> Christian
>
> > All classes are a sub-class of object. Any other ideas?
>
> Hard to tell when you don't give any code.
>
> >>> class A(object):
>
> ...     @property
> ...     def attribute(self):
> ...             raise AttributeError
> ...     def __getattr__(self, name):
> ...             return "nobody expects the spanish inquisition"
> ...>>> A().attribute
>
> 'nobody expects the spanish inquisition'
>
> Do you mean something like this? I don't think the __getattr__() call can be
> avoided here.
>
> Peter


Peter nailed it, thanks! I thought __getattr__ was a symptom, not a
cause of the misleading exceptions. Here is a complete repro:


The expected behavior...

>>> class A(object):
... @property
... def attribute(self):
... raise AttributeError("Correct Error.")
>>> A().attribute
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in attribute
AttributeError: Correct Error.


The misleading/unexpected behavior...

>>> class A(object):
... @property
... def attribute(self):
... raise AttributeError("Correct Error.")
... def __getattr__(self, name):
... cls_name = self.__class__.__name__
... msg = "%s has no attribute '%s'." % (cls_name, name)
... raise AttributeError(msg)
>>> A().attribute
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in __getattr__
AttributeError: A has no attribute 'attribute'.


Removing @property works as expected...

>>> class A(object):
... def attribute(self):
... raise AttributeError("Correct Error.")
... def __getattr__(self, name):
... cls_name = self.__class__.__name__
... msg = "%s has no attribute '%s'." % (cls_name, name)
... raise AttributeError(msg)
>>> A().attribute()   # Note the '()'
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in attribute
AttributeError: Correct Error.


The docs seem to suggest this is impossible:
"Called when an attribute lookup has not found the attribute in the
usual places (i.e. it is not an instance attribute nor is it found in
the class tree for self). name is the attribute name. This method
should return the (computed) attribute value or raise an
AttributeError exception."

Can anyone explain why this is happening? Is it a bug? I can write a
workaround to detect this by comparing the attribute name passed to
__getattr__ with dir(self.__class__) + self.__dict__.keys(), but how
can I raise the expected exception?


Thanks,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Andy O'Meara
On Oct 24, 9:40 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > It seems to me that the very simplest move would be to remove global
> > static data so the app could provide all thread-related data, which
> > Andy suggests through references to the QuickTime API. This would
> > suggest compiling python without thread support so as to leave it up
> > to the application.
>
> I'm not sure whether you realize that this is not simple at all.
> Consider this fragment
>
>     if (string == Py_None || index >= state->lastmark ||
> !state->mark[index] || !state->mark[index+1]) {
>         if (empty)
>             /* want empty string */
>             i = j = 0;
>         else {
>             Py_INCREF(Py_None);
>             return Py_None;
>


The way to think about it is that, ideally in PyC, there are never any
global variables.  Instead, all "globals" are now part of a context
(i.e. an interpreter) and it would presumably be illegal to ever use
them in a different context. I'd say this is already the expectation
and convention for any modern, industry-grade software package
marketed as extension for apps.  Industry app developers just want to
drop in a 3rd party package, make as many contexts as they want (in as
many threads as they want), and expect to use each context without
restriction (since they're ensuring contexts never interact with each
other).  For example, if I use zlib, libpng, or libjpg, I can make as
many contexts as I want and put them in whatever threads I want.  In
the app, the only thing I'm on the hook for is to: (a) never use
objects from one context in another context, and (b) ensure that I'm
never make any calls into a module from more than one thread at the
same time.  Both of these requirements are trivial to follow in the
"embarrassingly easy" parallelization scenarios, and that's why I
started this thread in the first place.  :^)

Andy



--
http://mail.python.org/mailman/listinfo/python-list


Limit between 0 and 100

2008-10-25 Thread chemicalclothing
Hi. I'm very new to Python, and so this is probably a pretty basic
question, but I'm lost. I am looking to limit a float value to a
number between 0 and 100 (the input is a percentage).

I currently have:

integer = int()
running = True

while running:
  try:
per_period_interest_rate = float(raw_input("Enter per-period
interest rate, in percent: "))
break
  except ValueError:
print "Please re-enter the per-period interest rate as a number
between 0 and 100."


I also have to make sure it is a number and not letters or anything.

Thanks for the help.

James

P.S. I don't understand a lot of what I have there, I got most of it
from the beginning tutorials and help sections. I have never
programmed before, but this is for a school assignment.
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Andy O'Meara
On Oct 24, 10:24 pm, Glenn Linderman <[EMAIL PROTECTED]> wrote:
>
> > And in the case of hundreds of megs of data
>
> ... and I would be surprised at someone that would embed hundreds of
> megs of data into an object such that it had to be serialized... seems
> like the proper design is to point at the data, or a subset of it, in a
> big buffer.  Then data transfers would just transfer the offset/length
> and the reference to the buffer.
>
> > and/or thousands of data structure instances,
>
> ... and this is another surprise!  You have thousands of objects (data
> structure instances) to move from one thread to another?

Heh, no, we're actually in agreement here.  I'm saying that in the
case where the data sets are large and/or intricate, a single top-
level pointer changing hands is *always* the way to go rather than
serialization.  For example, suppose you had some nifty python code
and C procs that were doing lots of image analysis, outputting tons of
intricate and rich data structures.  Once the thread is done with that
job, all that output is trivially transferred back to the appropriate
thread by a pointer changing hands.

>
> Of course, I know that data get large, but typical multimedia streams
> are large, binary blobs.  I was under the impression that processing
> them usually proceeds along the lines of keeping offsets into the blobs,
> and interpreting, etc.  Editing is usually done by making a copy of a
> blob, transforming it or a subset in some manner during the copy
> process, resulting in a new, possibly different-sized blob.

No, you're definitely right-on, with the additional point that the
representation of multimedia usually employs intricate and diverse
data structures (imagine the data structure representation of a movie
encoded in modern codec, such as H.264, complete with paths, regions,
pixel flow, geometry, transformations, and textures).  As we both
agree, that's something that you *definitely* want to move around via
a single pointer (and not in a serialized form).  Hence, my position
that apps that use python can't be forced to go through IPC or else:
(a) there's a performance/resource waste to serialize and unserialize
large or intricate data sets, and (b) they're required to write and
maintain serialization code that otherwise doesn't serve any other
purpose.

Andy



--
http://mail.python.org/mailman/listinfo/python-list


Re: Improving interpreter startup speed

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 12:32:07 -0700, Pedro Borges wrote:

> Hi guys,
> 
> 
> Is there a way to improve the interpreter startup speed?
> 
> In my machine (cold startup) python takes 0.330 ms and ruby takes 0.047 
> ms, after cold boot python takes 0.019 ms and ruby 0.005 ms to start.
> 
> 
> TIA

um... does it really matter? It's less than a second, and only once at 
program startup...

if you find yourself creating and destroying small python processes 
thousands of times, try writing the controller program in python too, so 
that the controller imports the "small modules" itself and doesn't restart 
the python interpreter that many times.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Limit between 0 and 100

2008-10-25 Thread Benjamin Kaplan
On Sat, Oct 25, 2008 at 4:42 PM, <[EMAIL PROTECTED]> wrote:

> Hi. I'm very new to Python, and so this is probably a pretty basic
> question, but I'm lost. I am looking to limit a float value to a
> number between 0 and 100 (the input is a percentage).
>
> I currently have:
>
> integer = int()
> running = True
>
> while running:
>  try:
>per_period_interest_rate = float(raw_input("Enter per-period
> interest rate, in percent: "))
>break
>  except ValueError:
>print "Please re-enter the per-period interest rate as a number
> between 0 and 100."
>
>
> I also have to make sure it is a number and not letters or anything.
>
> Thanks for the help.
>
> James
>
> P.S. I don't understand a lot of what I have there, I got most of it
> from the beginning tutorials and help sections. I have never
> programmed before, but this is for a school assignment.
> --


look up conditionals (specifically if statements) in the Python tutorial. If
you still can't figure it out, ask on the Python Tutor list. This list is
for questions about python, not for questions about how to program.

--
http://mail.python.org/mailman/listinfo/python-list


Re: arange randomly words in a list

2008-10-25 Thread BJörn Lindqvist
2008/10/20 william paul <[EMAIL PROTECTED]>:
> I have a list that looks like:
>
> name = name1 name2 name3 name4
>
> and I would like to be able to arrange randomly this list, like:
>
> name = name 2 name 1 name3 name4
> name = name4 name2 name1 name3
> 
>
> I have tried with random.shuffle, but still no good result

That is exactly what random.shuffle() does. Why doesn't the function
work for you?
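
One common pitfall worth checking (a guess, since no code was posted):
random.shuffle() shuffles the list in place and returns None, so assigning
its result throws the shuffled list away.

import random

names = ['name1', 'name2', 'name3', 'name4']
random.shuffle(names)            # correct: shuffles in place
print names                      # e.g. ['name3', 'name1', 'name4', 'name2']

names = random.shuffle(names)    # wrong: shuffle() returns None
print names                      # None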


-- 
mvh Björn
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Andy O'Meara

> Andy O'Meara wrote:
> > I would definitely agree if there was a context (i.e. environment)
> > object passed around then perhaps we'd have the best of all worlds.
>
> Moreover, I think this is probably the *only* way that
> totally independent interpreters could be realized.
>
> Converting the whole C API to use this strategy would be
> a very big project. Also, on the face of it, it seems like
> it would render all existing C extension code obsolete,
> although it might be possible to do something clever with
> macros to create a compatibility layer.
>
> Another thing to consider is that passing all these extra
> pointers around everywhere is bound to have some effect
> on performance.


Good points--I would agree with you on all counts there.  On the
"passing a context everywhere" performance hit, perhaps one idea is
that all objects could have an additional field that would point back
to their parent context (ie. their interpreter).  So the only
prototypes that would have to be modified to contain the context ptr
would be the ones that inherently don't take any objects. This would
conveniently and generally correspond to procs associated with
interpreter control (e.g. importing modules, shutting down modules,
etc).


> Andy O'Meara wrote:
> > - each worker thread makes its own interpreter, pops scripts off a
> > work queue, and manages exporting (and then importing) result data to
> > other parts of the app.
>
> I hope you realize that starting up one of these interpreters
> is going to be fairly expensive.

Absolutely.  I had just left that issue out in an effort to keep the
discussion pointed, but it's a great point to raise.  My response is
that, like any 3rd party industry package, I'd say this is the
expectation (that context startup and shutdown is non-trivial and
should be minimized for performance reasons).  For simplicity, my
examples didn't talk about this issue but in practice, it'd be typical
for apps to have their "worker" interpreters persist as they chew
through jobs.


Andy


--
http://mail.python.org/mailman/listinfo/python-list


SendKeys-0.3.win32-py2.1.exe

2008-10-25 Thread Jesse
I can't seem to install this using Python 2.6. Are there any known errors
that won't let me select the Python installation to use? It just opens a
blank dialog and won't let me continue... Do I need to downgrade Python?

thanks in advance
--
http://mail.python.org/mailman/listinfo/python-list


Re: set/dict comp in Py2.6

2008-10-25 Thread bearophileHUGS
Sorry for the answering delay, Google Groups is slow today.

Steven D'Aprano:

>Personally, I don't see the advantage of set and dict comprehensions. I think 
>the value of them is very marginal, not worth the additional syntax.<

If it's worth having in 3.0 then it's worth having in 2.6 too. If it
isn't worth having in 2.6 then maybe it's not worth having in 3.0 either.

It's just a little bit of sugar, and in this specific case I don't see a
risk of "diabetes".

I think the dict generator syntax has a small advantage (the set
generator is probably there just for symmetry): it reduces the number
of parentheses, and replaces a comma with a different symbol (a colon,
which helps you distinguish it from the other commas); this increases
readability (Lisp docet).

If the example is very simple like this you don't see much readability
difference:
sqrts = dict((x, x*x) for x in range(1000))
sqrts = {x: x*x for x in range(1000)}

But if those x and x*x need parentheses then you may see a difference:
sqrts = dict( ((sin(x) + 5) * 3, (x, (x*x, x*x*x))) for x in
range(1000) )
sqrts = {(sin(x) + 5) * 3: (x, (x*x, x*x*x)) for x in range(1000)}

Which is more readable? I think with the second one it's much easier to
tell whether it's a correct expression, even after I have added extra
spaces in the first line.


And have you ever received this error?

>>> dict(x,x*x for x in xrange(10))
  File "", line 1
SyntaxError: Generator expression must be parenthesized if not sole
argument

This syntax avoids that class of errors:
{x:x*x for x in xrange(10)}

So, summing up, I like the new syntax (I think the Fortress language has
something similar).

Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 09:21:05 +, Steven D'Aprano wrote:

> On Sat, 25 Oct 2008 08:58:18 +, Lie Ryan wrote:
> 
>> 
>> Since python is dynamic language, I think it should be possible to do
>> something like this:
>> 
>> a = list([1, 2, 3, 4, 5], implementation = 'linkedlist') b = dict({'a':
>> 'A'}, implementation = 'binarytree') c = dict({'a': 'A'},
>> implementation = 'binarytree')
> 
> Oh I hope not. I think you have mistaken "dynamic" for "chaotic".
> 
> When I see a dict, I want to know that any two dicts work the same way.

Oh no, the two dict implementations would work _exactly_ the same from the 
outside; they are transparently interchangeable. Only the performance 
characteristics differ because of the different implementations. Actually 
I got this idea from a book about algorithms and data structures; that book 
said that an abstract data type (e.g. dict, set, list) has several 
competing implementations or data structures (e.g. binary tree dict, 
hashed dict, array dict). A data type's implementation and interface can be 
separated so that we can switch the data structure's implementation 
without changing the rest of the code. The book is The Algorithm Design 
Manual by Skiena.

hint: read the PS below

> I don't want to have to search the entire project's source code to find
> out if it is a dict implemented as a hash table with O(1) lookups, or a
> dict implemented as a binary tree with O(log N) lookups, or a dict
> implemented as a linear array with O(N) lookups.

No, you'd only need to look at the dict's creation point (or, much better, 
at the project's docs, but not everyone creates good docs). The 
alternative you mentioned below, shadowing the built-in, is both a hack 
and much more likely to be missed.

> If I wanted that sort of nightmare, I can already do it by shadowing the
> builtin:
> 
> dict = binarytree
> D = dict({'a': 'A'})  # make a binary tree

I DON'T want THAT sort of nightmare you mentioned...
And it'd be impossible to have two dictionaries with two different 
implementations.

> There is no possible good that come from this suggestion. The beauty of
> Python is that the built-in data structures (list, dict, set) are
> powerful enough for 99% of uses[1], and for the other 1%, you can easily
> and explicitly use something else.

Oh really? As far as I know, python's list is extremely bad if you're 
inserting data at the beginning of the list (e.g. lst.insert(0, x) requires 
the whole array to be "copied" one position over). This is because python's 
list uses an array data structure, making indexing (e.g. a[2]) fast but 
insertion slow. If, on the other hand, it were implemented using a binary 
tree, insertion would be O(log n) but indexing would be a bit tricky.

The keyword is "tradeoffs".

> But *explicitly* is the point. There's never any time where you can do
> this:

Yes, true, explicitly IS the point. How much more explicit can you be than: 
dict({foo: bar}, implementation = 'binarytree')

> type(mydict) is dict

If my memory serves right, a binary tree dict and a hash table dict are 
both dicts, right? (Duck Typing)
Only their implementations differ. Implementation is... well, an 
"implementation detail".

> and not know exactly what performance characteristics mydict will have.

Oh... why do I need to know what the performance characteristic of mydict 
is? Unless I know what I'm doing.

> (Unless you shadow dict or type, or otherwise do something that breaks
> the rules.) You never need to ask, "Okay, it's a dict. What sort of
> dict?"

Okay, it's a dict. What sort of dict? Who the hell cares? I don't need to 
know, they all look and behave the same (Duck Typing)... at least until 
I profile them (since profiling is a deep black magic by itself, it 
cannot be used to discredit switching implementations). 

Sometimes we need a data type to use a specific data structure that has 
some specific performance characteristic, because we know we'll be doing 
a specific operation a lot more than other operations. 

If you actually need to know which implementation is currently being 
used, you could implement a dict.implementation property.

> If you want a binary tree, ask for a binary tree.

Yeah, you ask for binary tree EXPLICITLY:
bintreedict = dict({a:b}, implementation = 'binarytree')

this:
regularhasheddict = dict({a:b})

would have a reasonable default.


PS: I do admit I have used the wrong terms in the last post. I used the 
term "data structure" when it should have been "abstract data type"; "data 
structure" is a synonym for "implementation". In this post, I hope I've 
corrected all of the usage.

--
http://mail.python.org/mailman/listinfo/python-list


Re: [SciPy-user] ANN: Python programs for epidemic modelling

2008-10-25 Thread Alan G Isaac

On 10/25/2008 4:14 PM I. Soumpasis apparently wrote:

http://blog.deductivethinking.com/?p=29


This is cool.
But I do not see a license.
May I hope this is released under the new BSD license,
like the packages it depends on?

Thanks,
Alan Isaac

--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I handle the char immediately after its input, without waiting an endline?

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 15:27:32 +, Steven D'Aprano wrote:

> On Sat, 25 Oct 2008 16:30:55 +0200, Roel Schroeven wrote:
> 
>> Steven D'Aprano schreef:
>>> I can't think of any modern apps that use one character commands like
>>> that. One character plus a modifier (ctrl or alt generally) perhaps,
>>> but even there, it's mostly used in GUI applications.
>> 
>> less, vi, info, top, cfdisk, lynx, links, ... come to mind. I suppose
>> there are many more that I can't think of at the moment.
> 
> I said modern *wink*
> 
> But seriously... point taken.
> 

I use some of them a lot... less and top are at the top of my list (pun 
intended). I sometimes used vi(m), although I never really liked it, but 
it's sometimes unavoidable. info is replaced by man. lynx and links... 
well, I remember a time when I tried to install Gentoo in VMware; lynx/
links (I forgot which one) was a life-saver because I wouldn't need to 
go out to the Windows host every two seconds to see the installation 
instructions (I was new to Linux at that time). And that was in VMware; 
what if I had installed it directly, not on a virtual machine?

And as far as I know, it is impossible to implement a "press any key" 
feature with python in a simple way (as it should be). If turning off std 
input's character buffering were easy, it'd contribute a lot to command-
line real-time action games (and of course many other programs, but that 
is the first genre of programs that crosses my mind).
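
For the record, here is roughly what it takes on a POSIX terminal (a
minimal sketch; on Windows, msvcrt.getch() does the same job):

import sys
import termios
import tty

def getch():
    """Read one character from stdin without waiting for Enter (POSIX only)."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)           # turn off line buffering (and echo)
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch

Hardly the one-liner it arguably should be.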

PS: 
>>> modern != GUI
True
>>> commandline == old
False

--
http://mail.python.org/mailman/listinfo/python-list


Re: [Numpy-discussion] [SciPy-user] ANN: Python programs for epidemic modelling

2008-10-25 Thread I. Soumpasis
2008/10/25 Alan G Isaac <[EMAIL PROTECTED]>

> On 10/25/2008 4:14 PM I. Soumpasis apparently wrote:
> > http://blog.deductivethinking.com/?p=29
>
> This is cool.
> But I do not see a license.
> May I hope this is released under the new BSD license,
> like the packages it depends on?
>
The programs are GPL licensed. More info is in the copyrights section:
http://wiki.deductivethinking.com/wiki/Deductive_Thinking:Copyrights.

I hope it is ok,
Ilias
--
http://mail.python.org/mailman/listinfo/python-list


Re: PIL: Getting a two color difference between images

2008-10-25 Thread Lie Ryan
> Kevin D. Smith:
>> What I want is a two color output image: black where the image wasn't
>> different, and white where it was different.<

Use ImageChops.difference, which gives a difference image. Then 
map every color except black to white using Image.point().
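
A small sketch of that recipe, assuming im1 and im2 are two already-loaded
images of the same size and mode:

import Image, ImageChops   # classic PIL-style imports

diff = ImageChops.difference(im1, im2)
# force every non-zero channel value to 255, flatten to one band, then
# threshold again so the result is strictly black and white
mask = diff.point(lambda p: 255 if p else 0)
mask = mask.convert("L").point(lambda p: 255 if p else 0)
mask.save("difference.png")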

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Terry Reedy

Lie Ryan wrote:




Since python is dynamic language, I think it should be possible to do 
something like this:


a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')


For this to work, the abstract list would have to know about all 
implementations of the abstraction.



b = dict({'a': 'A'}, implementation = 'binarytree')
c = dict({'a': 'A'}, implementation = 'binarytree')

i.e. basically since a data structure can have different implementations, 
and different implementations have different performance characteristics, 
it should be possible to dynamically change the implementation used.


In the far future, the data structure and its implementation could be 
abstracted even further:


a = list() # ordered list
b = set() # unordered list
c = dict() # unordered dictionary
d = sorteddict() # ordered dictionary

Each of the implementations would share a common subset of methods and 
possibly a few implementation dependent method that could only work on 
certain implementations (or is extremely slow except in the correct 
implementation).




The future is 3.0, at least in part, with Abstract Base Classes.
There are 16 in the collections module.
 "In addition to containers, the collections module provides some ABCs 
(abstract base classes) that can be used to test whether a class 
provides a particular interface, for example, is it hashable or a 
mapping, and some of them can also be used as mixin classes."


The ABCs for numbers are in the numbers module.
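
For instance, a small 2.6 illustration:

import collections

print isinstance({}, collections.Mapping)           # True
print isinstance([], collections.MutableSequence)   # True
print issubclass(set, collections.Hashable)         # False: sets are unhashable
print issubclass(frozenset, collections.Hashable)   # True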

tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: Consequences of importing the same module multiple times in C++?

2008-10-25 Thread Lie Ryan
On Fri, 24 Oct 2008 12:23:18 -0700, Robert Dailey wrote:

> Hi,
> 
> I'm currently using boost::python::import() to import Python modules, so
> I'm not sure exactly which Python API function it is calling to import
> these files. I posted to the Boost.Python mailing list with this
> question and they said I'd probably get a better answer here, so here it
> goes...
> 
> If I do the following:
> 
> using namespace boost::python;
> import( "__main__" ).attr( "new_global" ) = 40.0f; import( "__main__"
> ).attr( "another_global" ) = 100.0f:
> 
> Notice that I'm importing twice. What would be the performance
> consequences of this? Do both import operations query the disk for the
> module and load it into memory? Will the second call simply reference a
> cached version of the module loaded at the first import() call?
> 
> Thanks.

I think it does not reload the module. Running python with verbose mode:

[EMAIL PROTECTED]:~$ python -v
(snip)
>>> import xml
import xml # directory /usr/local/lib/python2.6/xml
# /usr/local/lib/python2.6/xml/__init__.pyc matches /usr/local/lib/
python2.6/xml/__init__.py
import xml # precompiled from /usr/local/lib/python2.6/xml/__init__.pyc
>>> import xml
>>> 

It's also mentioned in the docs: (paraphrased to clarify the points)
'''
The system maintains a table of modules that have been ... 
initialized When a module name is found..., step (1) is finished. If 
not, a search for a module ... . When ... found, it is loaded.
'''
http://www.python.org/doc/2.5.2/ref/import.html
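
The cache is also easy to observe directly (a small illustration; any
already-imported module behaves the same way):

import sys
import xml
first = sys.modules['xml']
import xml               # no disk access this time, just a cache lookup
print xml is first       # True: the same module object comes back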

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread Rhamphoryncus
On Oct 25, 12:29 am, greg <[EMAIL PROTECTED]> wrote:
> Rhamphoryncus wrote:
> > A list
> > is not shareable, so it can only be used within the monitor it's
> > created within, but the list type object is shareable.
>
> Type objects contain dicts, which allow arbitrary values
> to be stored in them. What happens if one thread puts
> a private object in there? It becomes visible to other
> threads using the same type object. If it's not safe
> for sharing, bad things happen.
>
> Python's data model is not conducive to making a clear
> distinction between "private" and "shared" objects,
> except at the level of an entire interpreter.

shareable type objects (enabled by a __future__ import) use a
shareddict, which requires all keys and values to themselves be
shareable objects.

Although it's a significant semantic change, in many cases it's easy
to deal with: replace mutable (unshareable) global constants with
immutable ones (ie list -> tuple, set -> frozenset).  If you've got
some global state you move it into a monitor (which doesn't scale, but
that's your design).  The only time this really fails is when you're
deliberately storing arbitrary mutable objects from any thread, and
later inspecting them from any other thread (such as our new ABC
system's cache).  If you want to store an object, but only to give it
back to the original thread, I've got a way to do that.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 18:20:46 -0400, Terry Reedy wrote:

> Lie Ryan wrote:
> 
> 
>> 
>> Since python is dynamic language, I think it should be possible to do
>> something like this:
>> 
>> a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')
> 
> For this to work, the abstract list would have to know about all
> implementations of the abstraction.

/the exact syntax isn't really important/
/abstract type and implementation separation is the important point/

Actually, if I want to force it, that syntax could work using the same 
magic used by event-based syst

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Lie Ryan
On Sat, 25 Oct 2008 18:20:46 -0400, Terry Reedy wrote:

> Lie Ryan wrote:
> 
> 
>> 
>> Since python is dynamic language, I think it should be possible to do
>> something like this:
>> 
>> a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')
> 
> For this to work, the abstract list would have to know about all
> implementations of the abstraction.

# Sorry the last message is truncated because of an "accident"

/the exact syntax isn't really important/
/abstract type and implementation separation is the important point/

Actually, if I want to force it, that syntax could work using the same 
magic used by event-based systems: registration. I agree it might be a 
bit cumbersome to do registration for something like this, but as I've 
said before, the exact syntax is not really important.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Building truth tables

2008-10-25 Thread Aaron Brady
On Oct 24, 5:53 am, andrea <[EMAIL PROTECTED]> wrote:
> On 26 Set, 20:01, "Aaron \"Castironpi\" Brady" <[EMAIL PROTECTED]>
> wrote:
>
>
>
> > Good idea.  If you want prefixed operators: 'and( a, b )' instead of
> > 'a and b', you'll have to write your own.  ('operator.and_' is bitwise
> > only.)  It may be confusing to mix prefix with infix: 'impl( a and b,
> > c )', so you may want to keep everything prefix, but you can still use
> > table( f, n ) like Tim said.
>
> After a while I'm back, thanks a lot, the truth table creator works,
> now I just want to parse some strings to make it easier to use.
>
> Like
>
> (P \/ Q) -> S == S
>
> Must return a truth table 2^3 lines...
>
> I'm using pyparsing and this should be really simple, but it doesn't
> allow me to recurse and that makes mu stuck.
> The grammar BNF is:
>
> Var :: = [A..Z]
> Exp ::= Var | !Exp  | Exp \/ Exp | Exp -> Exp | Exp /\ Exp | Exp ==
> Exp
>
> I tried different ways but I don't find a smart way to get from the
> recursive bnf grammar to the implementation in pyparsing...
> Any hint?

Tell you what.  At the risk of "carrot-and-stick, jump-how-high"
tyranny, I'll show you some output of a walk-through.  It should give
you an idea of the process.  You can always ask for more hints.

( ( ( !( R ) /\ ( !( P \/ Q ) ) ) -> S ) == S )
(((!(R)/\(!(P\/Q)))->S)==S)
(((!R/\(!(P\/Q)))->S)==S)
n1 := !R
(((n1/\(!(P\/Q)))->S)==S)
n2 := P\/Q
(((n1/\(!(n2)))->S)==S)
(((n1/\(!n2))->S)==S)
n3 := !n2
(((n1/\(n3))->S)==S)
(((n1/\n3)->S)==S)
n4 := n1/\n3
(((n4)->S)==S)
((n4->S)==S)
n5 := n4->S
((n5)==S)
(n5==S)
n6 := n5==S
(n6)
n6
{'n1': (, '!R', ('R',)),
 'n2': (, 'P\\/Q', ('P', 'Q')),
 'n3': (, '!n2', ('n2',)),
 'n4': (, 'n1/\\n3', ('n1', 'n3')),
 'n5': (, 'n4->S', ('n4', 'S')),
 'n6': (, 'n5==S', ('n5', 'S'))}
{'Q': True, 'P': True, 'S': True, 'R': True} True
{'Q': True, 'P': True, 'S': False, 'R': True} False
{'Q': True, 'P': True, 'S': True, 'R': False} True
{'Q': True, 'P': True, 'S': False, 'R': False} False
{'Q': False, 'P': True, 'S': True, 'R': True} True
{'Q': False, 'P': True, 'S': False, 'R': True} False
{'Q': False, 'P': True, 'S': True, 'R': False} True
{'Q': False, 'P': True, 'S': False, 'R': False} False
{'Q': True, 'P': False, 'S': True, 'R': True} True
{'Q': True, 'P': False, 'S': False, 'R': True} False
{'Q': True, 'P': False, 'S': True, 'R': False} True
{'Q': True, 'P': False, 'S': False, 'R': False} False
{'Q': False, 'P': False, 'S': True, 'R': True} True
{'Q': False, 'P': False, 'S': False, 'R': True} False
{'Q': False, 'P': False, 'S': True, 'R': False} True
{'Q': False, 'P': False, 'S': False, 'R': False} True

Before you trust me too much, you might want to check at least some of
these, to see if the starting (complicated) expression is evaluated
correctly.  I didn't.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Consequences of importing the same module multiple times in C++?

2008-10-25 Thread Aaron Brady
On Oct 24, 2:23 pm, Robert Dailey <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm currently using boost::python::import() to import Python modules,
> so I'm not sure exactly which Python API function it is calling to
> import these files. I posted to the Boost.Python mailing list with
> this question and they said I'd probably get a better answer here, so
> here it goes...
>
> If I do the following:
>
> using namespace boost::python;
> import( "__main__" ).attr( "new_global" ) = 40.0f;
> import( "__main__" ).attr( "another_global" ) = 100.0f:
>
> Notice that I'm importing twice. What would be the performance
> consequences of this? Do both import operations query the disk for the
> module and load it into memory? Will the second call simply reference
> a cached version of the module loaded at the first import() call?
>
> Thanks.

Docs:

Note
For efficiency reasons, each module is only imported once per
interpreter session. Therefore, if you change your modules, you must
restart the interpreter – or, if it’s just one module you want to test
interactively, use reload(), e.g. reload(modulename).

--
http://mail.python.org/mailman/listinfo/python-list


Re: big objects and avoiding deepcopy?

2008-10-25 Thread Aaron Brady
On Oct 24, 1:11 pm, Reckoner <[EMAIL PROTECTED]> wrote:
> I am writing an algorithm that takes objects (i.e. graphs with
> thousands of nodes) into a "hypothetical" state. I need to keep a
> history of these  hypothetical objects depending on what happens to
> them later. Note that these hypothetical objects are intimately
> operated on, changed, and made otherwise significantly different from
> the objects they were copied from.
>
> I've been using deepcopy to push the objects into the hypothetical
> state where I operate on them heavily. This is pretty slow since the
> objects are very large.
>
> Is there another way to do this without resorting to deepcopy?
>
> by the way, the algorithm works fine. It's just this part of it that I
> am trying to change.
>
> Thanks in advance.

This solution adds a level of indirection.

Each graph has a stack of namespaces mapping names to nodes.

G:
{ 0: nodeA, 1: nodeB, 2: nodeC }
G-copy:
G, { 0: nodeD }
G-copy2:
G, { 1: nodeE }
G-copy-copy:
G-copy, { 3: nodeF }

If a key isn't found in the dictionary of a graph, its parent graph is
searched, and so on.  Then G-copy[ 0 ] is nodeD, G-copy[ 1 ] is nodeB,
G-copy2[ 2 ] is nodeC, G-copy-copy[ 0 ] is nodeD.  It might take a
significant change to your implementation, however.
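
A rough pure-Python sketch of that lookup scheme (GraphView and the node
names are illustrative only, not from the original code):

class GraphView(object):
    """A 'hypothetical' view: changed nodes live in overrides,
    everything else is found by searching the parent views."""
    def __init__(self, parent=None, overrides=None):
        self.parent = parent
        self.overrides = dict(overrides or {})

    def __getitem__(self, key):
        view = self
        while view is not None:
            if key in view.overrides:
                return view.overrides[key]
            view = view.parent
        raise KeyError(key)

    def __setitem__(self, key, node):
        self.overrides[key] = node       # never touches the parent

G = GraphView(overrides={0: 'nodeA', 1: 'nodeB', 2: 'nodeC'})
G_copy = GraphView(parent=G, overrides={0: 'nodeD'})
G_copy_copy = GraphView(parent=G_copy, overrides={3: 'nodeF'})
print G_copy[0], G_copy[1]            # nodeD nodeB
print G_copy_copy[0], G_copy_copy[2]  # nodeD nodeC
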
--
http://mail.python.org/mailman/listinfo/python-list


Re: Limit between 0 and 100

2008-10-25 Thread Marc 'BlackJack' Rintsch
On Sat, 25 Oct 2008 13:42:08 -0700, chemicalclothing wrote:

> Hi. I'm very new to Python, and so this is probably a pretty basic
> question, but I'm lost. I am looking to limit a float value to a number
> between 0 and 100 (the input is a percentage).
> 
> I currently have:
> 
> integer = int()

What's this supposed to do?  I think writing it as ``integer = 0`` is a 
bit simpler and more clear.

> running = True
> 
> while running:
>   try:
> per_period_interest_rate = float(raw_input("Enter per-period
> interest rate, in percent: "))
> break
>   except ValueError:
> print "Please re-enter the per-period interest rate as a number
> between 0 and 100."

You have to check for the range before you leave the loop.  The 
`ValueError` handling just makes sure that the input is a valid float.

The ``try``/``except`` structure can have an ``else`` branch.  Maybe that 
can be of use here.
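
Putting those hints together, a minimal sketch (keeping the original
variable name; only the structure matters):

while True:
    try:
        per_period_interest_rate = float(
            raw_input("Enter per-period interest rate, in percent: "))
    except ValueError:
        print "Please enter a number."
    else:
        if 0 <= per_period_interest_rate <= 100:
            break
        print "Please enter a number between 0 and 100."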

Ciao,
Marc 'BlackJack' Rintsch
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-25 Thread greg

Glenn Linderman wrote:
On approximately 10/25/2008 12:01 AM, came the following characters from 
the keyboard of Martin v. Löwis:



If None remains global, then type(None) also remains global, and
type(None),__bases__[0]. Then type(None).__bases__[0].__subclasses__()
will yield "interesting" results. This is essentially the status quo.


I certainly don't grok the implications of what you say above, 
as I barely grok the semantics of it.


Not only is there a link from a class to its base classes, there
is a link to all its subclasses as well.

Since every class is ultimately a subclass of 'object', this means
that starting from *any* object, you can work your way up the
__bases__ chain until you get to 'object', then walk the sublass
hierarchy and find every class in the system.

This means that if any object at all is shared, then all class
objects, and any object reachable from them, are shared as well.
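
For instance, the whole class tree can be enumerated starting from
nothing but object (a small illustration; it only sees new-style classes):

def all_classes():
    seen = set()
    stack = [object]
    while stack:
        cls = stack.pop()
        if cls not in seen:
            seen.add(cls)
            stack.extend(cls.__subclasses__())   # follow the subclass links
    return seen

print len(all_classes())    # every new-style class alive in the interpreter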

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


is it "legal" to pace the module's doc string after some imports ?

2008-10-25 Thread Stef Mientki

hello,

I wonder if it's  "legal" to pace the module's doc string after some 
imports ?


I mean something like this:

from language_support import _
__doc__ = _(0, """
some documentation
""")

thanks,
Stef Mientki

--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-25 Thread greg

Rafe wrote:


The docs seem to suggest this is impossible:
"Called when an attribute lookup has not found the attribute in the
usual places (i.e. it is not an instance attribute nor is it found in
the class tree for self).


Getting an AttributeError is the way that the interpreter
machinery tells that the attribute wasn't found. So when
your property raises an AttributeError, this is
indistinguishable from the case where the property wasn't
there at all.

To avoid this you would have to raise some exception
that doesn't derive from AttributeError.
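
For example, one workaround is to catch the AttributeError inside the
property and re-raise it as something else, so __getattr__ is never
consulted (a minimal sketch, not from the original post):

class A(object):
    @property
    def attribute(self):
        try:
            raise AttributeError("Correct Error.")    # the real failure
        except AttributeError:
            # a non-AttributeError propagates instead of being swallowed
            raise RuntimeError("error inside 'attribute': Correct Error.")

    def __getattr__(self, name):
        raise AttributeError("%s has no attribute %r."
                             % (self.__class__.__name__, name))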

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 21:53:10 +, Lie Ryan wrote:

> On Sat, 25 Oct 2008 09:21:05 +, Steven D'Aprano wrote:
> 
>> On Sat, 25 Oct 2008 08:58:18 +, Lie Ryan wrote:
>> 
>>> 
>>> Since python is dynamic language, I think it should be possible to do
>>> something like this:
>>> 
>>> a = list([1, 2, 3, 4, 5], implementation = 'linkedlist') b =
>>> dict({'a': 'A'}, implementation = 'binarytree') c = dict({'a': 'A'},
>>> implementation = 'binarytree')
>> 
>> Oh I hope not. I think you have mistaken "dynamic" for "chaotic".
>> 
>> When I see a dict, I want to know that any two dicts work the same way.
> 
> Oh no, the two dict implementation would work _exactly_ the same from
> the outside, they are transparently interchangeable. Only the
> performance characteristic differs because of the different
> implementation.

Exactly. That was my point.



[...]
>> I don't want to have to search the entire project's source code to find
>> out if it is a dict implemented as a hash table with O(1) lookups, or a
>> dict implemented as a binary tree with O(log N) lookups, or a dict
>> implemented as a linear array with O(N) lookups.
> 
> No, you'd only need to look at the dict's creation point (or actually
> much better at projects docs, but not everyone create good docs).

And how do you find an arbitrary object's creation point without 
searching the project's source code?



>> If I wanted that sort of nightmare, I can already do it by shadowing
>> the builtin:
>> 
>> dict = binarytree
>> D = dict({'a': 'A'})  # make a binary tree
> 
> I DON'T want THAT sort of nightmare you mentioned... And it'd be
> impossible to have two dictionary that have two different
> implementations.

Nonsense.

dict = binarytree
D1 = dict({'a': 'A'})  # make a binary tree "dict"
dict = __builtin__.dict
D2 = dict({'a': 'A'})  # make a standard dict
dict = someothertype
D3 = dict({'a': 'A'})

I'm not suggesting this is a good idea. This is a terrible idea. But it 
is not much worse than your idea:

D1 = dict({'a': 'A'}, implementation='binarytree')
D2 = dict({'a': 'A'}, implementation='dict')
D3 = dict({'a': 'A'}, implementation='someothertype')


>> There is no possible good that come from this suggestion. The beauty of
>> Python is that the built-in data structures (list, dict, set) are
>> powerful enough for 99% of uses[1], and for the other 1%, you can
>> easily and explicitly use something else.
> 
> Oh really? As far as I know, python's list is extremely bad if you're
> inserting data at the beginning of the list

And how often do you do that?

And when you do, use a deque. Just call it a deque.
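
For example (illustrative only):

from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)   # O(1), unlike list.insert(0, ...) which is O(N)
print d           # deque([0, 1, 2, 3])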


[...]
>> But *explicitly* is the point. There's never any time where you can do
>> this:
> 
> Yes, true, explicitly IS the point. How more explicit can you be than:
> dict({foo: bar}, implementation = 'binarytree')
> 
>> type(mydict) is dict


You miss the point. With your plan, you can do this:

D1 = dict({foo: bar}, implementation = 'binarytree')
D2 = dict({foo: bar}, implementation = 'dict')
type(D1) is type(D2)

and yet D1 and D2 have UTTERLY different performance characteristics. So 
now you need to add ANOTHER test to distinguish dicts-which-are-dicts 
from dicts-which-are-binary-trees:

D1.implementation != D2.implementation

And why? So you can avoid calling a goose a goose, and call it a duck 
instead.


> If my memory serves right, binary tree dict and hashed table dict is
> both a dict right? (Duck Typing)
> Only their implementation differs. Implementation is... well,
> "implementation detail".

Duck typing refers to *interface*, not implementation. I have no problem 
with you using a type with the same interface as a dict. That's what duck 
typing is all about. Just don't call it a dict!
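
For instance, here is a minimal sketch of a duck-typed mapping built on
2.6's collections.MutableMapping: the same interface as a dict, but a
different name and implementation (the class name and details are made
up):

import collections

class AssocListDict(collections.MutableMapping):
    """Mapping interface, association-list implementation (O(N) lookups)."""
    def __init__(self, *args, **kwargs):
        self._items = []
        self.update(dict(*args, **kwargs))
    def __getitem__(self, key):
        for k, v in self._items:
            if k == key:
                return v
        raise KeyError(key)
    def __setitem__(self, key, value):
        for i, (k, _) in enumerate(self._items):
            if k == key:
                self._items[i] = (key, value)
                return
        self._items.append((key, value))
    def __delitem__(self, key):
        for i, (k, _) in enumerate(self._items):
            if k == key:
                del self._items[i]
                return
        raise KeyError(key)
    def __iter__(self):
        return (k for k, _ in self._items)
    def __len__(self):
        return len(self._items)

Code that only needs the mapping *interface* can use this
interchangeably with a dict, but nobody is misled about its performance.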


>> and not know exactly what performance characteristics mydict will have.
> 
> Oh... why do I need to know what the performance characteristic of
> mydict is? Unless I know what I'm doing.

http://www.joelonsoftware.com/articles/fog000319.html


Because when you do this:

mydict[key] = 1

it's important whether each dict lookup is O(1), O(log N) or O(N). For a 
dict with one million items, an implementation based on a binary tree 
does roughly 20 times more work per lookup than a hash table, and an 
implementation based on linear searching does on the order of a million 
times more.

If you think implementation details don't matter, try this:

s1 = 'c'*(10**6)

versus

s2 = ''
for i in xrange(10**6):
    s2 = 'c' + s2  # defeat optimizer


>> (Unless you shadow dict or type, or otherwise do something that breaks
>> the rules.) You never need to ask, "Okay, it's a dict. What sort of
>> dict?"
> 
> Okay, it's a dict. What sort of dict? Who the hell cares? 

If you don't care, then why are you specifying the implementation type?

mydict = dict({'foo': 'bar'}, implementation="surprise me!")

You can't have it both ways. If you care, then you know enough to want a 
hash table based dict (the standard) or a binary tree or something else. 
So go ahead and use that type explicitly.

Re: Improving interpreter startup speed

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 12:32:07 -0700, Pedro Borges wrote:

> Hi guys,
> 
> 
> Is there a way to improve the interpreter startup speed?

Get a faster computer?
 
> In my machine (cold startup) python takes 0.330 ms and ruby takes 0.047
> ms, after cold boot python takes 0.019 ms and ruby 0.005 ms to start.

How are you measuring this?


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: is it "legal" to pace the module's doc string after some imports ?

2008-10-25 Thread Steven D'Aprano
On Sun, 26 Oct 2008 02:31:01 +0200, Stef Mientki wrote:

> hello,
> 
> I wonder if it's "legal" to place the module's doc string after some
> imports?
> 
> I mean something like this:
> 
> from language_support import _
> __doc__ = _(0, """
> some documentation
> """


Doc strings are normal objects like anything else, so the above should 
work fine.

The only "magic" that happens with doc strings is that if you have a bare 
string immediately after a class, method or function definition, or at 
the top of the module, it gets picked up by the compiler and assigned to 
__doc__. You can do anything you like to it.


You might even do this:

# top of module
"""This is some 
documentation
blah blah blah
"""

try:
    from language_support import _
    __doc__ = _(0, __doc__)
except ImportError:
    pass


and it should just work. 


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


You Want To Earn 10000$ see My Blog.

2008-10-25 Thread chinu
Hi,
I am Srinu from India. I am sending a blog URL for your use.

On the right side of the blog an Awsurvey banner will appear.

Click on the banner and get a free signup with a 6$ bonus, and you will
get more surveys.
Once you have completed one survey you will get a minimum of 4$ and more.

On the left side of the blog, home-based jobs will appear.
Click on the ads and you will get more details about choosing your job.

If you are not satisfied with those jobs, type jobs, sports, jewellery
etc. in the search box field and click on search.
Then you will find the results you want.

Click on the blog to get more information on choosing your job.

The blog URL is:

   http://wealthinonline.blogspot.com/

Good luck

--
http://mail.python.org/mailman/listinfo/python-list


Re: Improving interpreter startup speed

2008-10-25 Thread BJörn Lindqvist
2008/10/25 Pedro Borges <[EMAIL PROTECTED]>:
> Is there a way to improve the interpreter startup speed?
>
> In my machine (cold startup) python takes 0.330 ms and ruby takes
> 0.047 ms, after cold boot python takes 0.019 ms and ruby 0.005 ms to
> start.

How are you getting those numbers? 330 μs is still pretty fast, isn't
it? :) Most disks have a seek time of 10-20 ms so it seems implausible
to me that Ruby would be able to cold start in 47 ms.
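
For what it's worth, one rough way to measure it (just a sketch; I don't
know how the original numbers were obtained, and this includes
process-creation overhead):

import subprocess, time

start = time.time()
subprocess.call(["python", "-c", "pass"])  # run the equivalent Ruby one-liner to compare
print "interpreter started and exited in %.3f s" % (time.time() - start)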


-- 
mvh Björn
--
http://mail.python.org/mailman/listinfo/python-list


Re: set/dict comp in Py2.6

2008-10-25 Thread Benjamin
On Oct 25, 3:13 am, [EMAIL PROTECTED] wrote:
> I'd like to know why Python 2.6 doesn't have the syntax to create sets/
> dicts of Python 3.0, like:

Because nobody bothered to backport them.
>
> {x*x for x in xrange(10)}
> {x:x*x for x in xrange(10)}
>
> Bye,
> bearophile
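
(For the record, the usual 2.6 spellings are generator expressions fed
to the constructors:

squares_set = set(x*x for x in xrange(10))
squares_dict = dict((x, x*x) for x in xrange(10))

The 3.0 literals produce the same results as these.)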

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering python sets

2008-10-25 Thread Terry Reedy

Lie Ryan wrote:

On Sat, 25 Oct 2008 18:20:46 -0400, Terry Reedy wrote:



a = list([1, 2, 3, 4, 5], implementation = 'linkedlist')

For this to work, the abstract list would have to know about all
implementations of the abstraction.


/the exact syntax isn't really important/
/abstract type and implementation separation is the important point/

Actually, if I want to force it, that syntax could work using the same 
magic used by event-based systems: registration. 


ABCs have a registration method.  The builtin ABCs have appropriate 
builtin classes preregistered.

>>> import collections as co
>>> mu = co.MutableSequence
>>> issubclass(list, mu)
True

I believe user classes that inherit from an ABC are also registered, and 
others can be registered explicitly.
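
For example, explicit registration looks like this (the LinkedList
class here is hypothetical):

import collections

class LinkedList(object):
    """Stand-in for some user-written linked-list sequence type."""

collections.MutableSequence.register(LinkedList)

print issubclass(LinkedList, collections.MutableSequence)  # True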


Although I agree it 
might be a bit cumbersome to do registration for something like this, but 
as I've said before, exact syntax is not really important.


Then why do you object to current
mylist = linkedlist(data)
and request the harder to write and implement
mylist = list(data, implementation = 'linkedlist')
?

tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: SendKeys-0.3.win32-py2.1.exe

2008-10-25 Thread Benjamin Kaplan
On Sat, Oct 25, 2008 at 5:33 PM, Jesse <[EMAIL PROTECTED]> wrote:

> cant seem to install this, using python 2.6, any known errors that
> wont let me select the python installation to use, just opens a blank
> dialog and wont let me continue..do i need to downgrade python??
>
> thanks in advance
> --


Compiled extensions have to be recompiled for each new version of Python.
This particular exe was compiled for Python 2.1. If you want to use a newer
version of Python, get a different exe off of the web site.

>
> http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Improving interpreter startup speed

2008-10-25 Thread Terry Reedy

Pedro Borges wrote:

Hi guys,


Is there a way to improve the interpreter startup speed?

In my machine (cold startup) python takes 0.330 ms and ruby takes
0.047 ms, after cold boot python takes 0.019 ms and ruby 0.005 ms to
start.


You of course mean CPython, but Version, version, what Version?
3.0 starts much quicker than 2.5.  Don't have 2.6.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Limit between 0 and 100

2008-10-25 Thread Steven D'Aprano
On Sat, 25 Oct 2008 13:42:08 -0700, chemicalclothing wrote:

> Hi. I'm very new to Python, and so this is probably a pretty basic
> question, but I'm lost. I am looking to limit a float value to a number
> between 0 and 100 (the input is a percentage).


Before I answer that, I'm going to skip to something you said at the end 
of your post:

> P.S. I don't understand a lot of what I have there, I got most of it
> from the beginning tutorials and help sections. I have never programmed
> before, but this is for a school assignment.

Thank you for admitting this. You had made a good start, you were quite 
close to having working code.

Because this is a school assignment, you need to be careful not to pass 
off other people's work as your own. That might mean that you have to re-
write what you learn here in your own way (changing the program logic a 
little bit), or it might simply mean that you acknowledge that you 
received assistance from people on the Internet. You should check with 
your teacher about your school's policy.

 
> I currently have:
> 
> integer = int()
> running = True
> 
> while running:
>   try:
>     per_period_interest_rate = float(raw_input("Enter per-period interest rate, in percent: "))
>     break
>   except ValueError:
>     print "Please re-enter the per-period interest rate as a number between 0 and 100."
> 
> 
> I also have to make sure it is a number and not letters or anything.

Separate the parts of your logic. You need three things:

(1) You need to get input from the user repeatedly until it is valid.

(2) Valid input is an float, and not a string or anything else.

(3) Valid input is between 0 and 100.

Let's do the last one first, because it is the easiest. Since we're 
checking a value is valid, we should fail if it isn't valid, and do 
nothing if it is.

def check_range(x, min=0.0, max=100.0):
    """Fail if x is not in the range min to max inclusive."""
    if not min <= x <= max:
        raise ValueError('value out of range')


(Note: I'm "shadowing two built-ins" in the above function. If you don't 
know what that is, don't worry about it for now. I'm just mentioning it 
so I can say it isn't a problem so long as it is limited to a small 
function like the above.)

So now you can test this and see if it works:

>>> check_range(0)  # always check the end points
>>> check_range(100)
>>> check_range(12.0)
>>> check_range(101.0)  # always check data that is out of range
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 4, in in_range
ValueError: percentage out of range


Now the second part: make sure the input is a float. Floats are 
complicated, there are lots of ways to write floats:

0.45
.45
45e-2
4.5E-1

are all valid ways of writing the same number. So instead of trying to 
work out all the ways people might write a float, we let Python do it and 
catch the error that occurs if they do something else.

Putting those two together:

def make_percentage(s):
    """Return a float between 0 and 100 from string s."""
    # Some people might include a percentage sign. Get rid of it.
    s = s.rstrip('%')
    x = float(s)
    check_range(x)
    return x

Function make_percentage() takes the user input as a string, and it does 
one of two things: it either returns a valid percentage, or it raises a 
ValueError exception to indicate an error. It can't do both at the same 
time. (By the way, there are many different exceptions, not just 
ValueError. But for now you don't care about them.)


Now let's grab the user input:

def get_input():
    prompt = "Enter per-period interest rate as a percentage: "
    per_period_interest_rate = None
    # loop until we have a value for the percentage
    while per_period_interest_rate is None:
        user_input = raw_input(prompt)
        try:
            per_period_interest_rate = make_percentage(user_input)
        except ValueError:
            print "Please enter a number between 0 and 100."
    return per_period_interest_rate


Inside the loop, if the make_percentage function raises a ValueError 
exception Python jumps to the "except" clause, and prints a message, then 
goes back to the start of the loop. This keeps going until 
per_period_interest_rate gets a valid percentage value, and then the loop 
exits (can you see why?) and the percentage is returned.
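
A minimal way to tie the three functions together (a hypothetical
driver, not part of the assignment code above):

if __name__ == '__main__':
    rate = get_input()
    print "Per-period interest rate:", rate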


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Numpy-discussion] [SciPy-user] ANN: Python programs for epidemic modelling

2008-10-25 Thread Alan G Isaac

On 10/25/2008 6:07 PM I. Soumpasis wrote:

The programs are GPL licensed. More info on the section of copyrights

> http://wiki.deductivethinking.com/wiki/Deductive_Thinking:Copyrights.

I hope it is ok,


Well, that depends what you mean by "ok".

Obviously, the author picks the license s/he prefers.
But a GPL license means that some people will avoid
your code, so you may wish to make sure you have thought
through the licensing issue for this code carefully.

As a point of comparison,
note that all your package dependencies have
a new BSD license.

Alan Isaac

--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-25 Thread Rafe
On Oct 24, 1:47 am, Rafe <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've encountered a problem which is making debugging less obvious than
> it should be. The @property decorator doesn't always raise exceptions.
> It seems like it is bound to the class but ignored when called. I can
> see the attribute using dir(self.__class__) on an instance, but when
> called, python enters __getattr__. If I correct the bug, the attribute
> calls work as expected and do not call __getattr__.
>
> I can't seem to make a simple repro. Can anyone offer any clues as to
> what might cause this so I can try to prove it?
>
> Cheers,
>
> - Rafe


Peter Otten pointed me in the right direction. I tried to reply to his
post 2 times and in spite of GoogleGroups reporting the post was
successful, it never showed up. Here is the repro:

The expected behavior...

>>> class A(object):
...     @property
...     def attribute(self):
...         raise AttributeError("Correct Error.")
>>> A().attribute
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in attribute
AttributeError: Correct Error.


The misleading/unexpected behavior...

>>> class A(object):
...     @property
...     def attribute(self):
...         raise AttributeError("Correct Error.")
...     def __getattr__(self, name):
...         cls_name = self.__class__.__name__
...         msg = "%s has no attribute '%s'." % (cls_name, name)
...         raise AttributeError(msg)
>>> A().attribute
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in __getattr__
AttributeError: A has no attribute 'attribute'.


Removing @property works as expected...

>>> class A(object):
...     def attribute(self):
...         raise AttributeError("Correct Error.")
...     def __getattr__(self, name):
...         cls_name = self.__class__.__name__
...         msg = "%s has no attribute '%s'." % (cls_name, name)
...         raise AttributeError(msg)
>>> A().attribute()   # Note the '()'
Traceback (most recent call last):
  File "", line 0, in 
  File "", line 0, in attribute
AttributeError: Correct Error.


I never suspected __getattr__ was the cause and not just a symptom.
The docs seem to indicate __getattr__ should never be called when the
attribute exists in the class:
"Called when an attribute lookup has not found the attribute in the
usual places (i.e. it is not an instance attribute nor is it found in
the class tree for self). name is the attribute name. This method
should return the (computed) attribute value or raise an
AttributeError exception."

Is this a bug? Any idea why this happens? I can write a hack into
__getattr__ in my class which will detect this, but I'm not sure how
to raise the expected exception.
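
One possible shape for that hack, applied to the repro class (a sketch
only; it can at least tell you that the property itself blew up, though
it cannot recover the original error message):

class A(object):
    @property
    def attribute(self):
        raise AttributeError("Correct Error.")
    def __getattr__(self, name):
        cls = type(self)
        # If the name is really a property on the class, the original
        # AttributeError came from *inside* the property, not from a
        # missing attribute, so report that instead.
        if isinstance(getattr(cls, name, None), property):
            raise AttributeError("error raised inside property '%s' of %s"
                                 % (name, cls.__name__))
        raise AttributeError("%s has no attribute '%s'." % (cls.__name__, name))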


Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list