Read-Write Lock vs primitive Lock()

2008-12-29 Thread k3xji
Hi,

I am trying to see in which situations the Read-Write Lock
performs better than the primitive Lock() itself. Below is the code I am
using to test the performance:
import threading
import locks
import time

class mylock(object):
    def __init__(self):
        self.__notreading = threading.Event()
        self.__notwriting = threading.Event()
        self.__notreading.set()
        self.__notwriting.set()

    def acquire_read(self):
        self.__notreading.clear()
        self.__notwriting.wait()

    def acquire_write(self):
        self.__notreading.wait()
        self.__notwriting.clear()

    def release_read(self):
        self.__notreading.set()

    def release_write(self):
        self.__notwriting.set()

GLOBAL_VAR = 1
#GLOBAL_LOCK = locks.ReadWriteLock()
GLOBAL_LOCK = threading.Lock()
#GLOBAL_LOCK = mylock()
GLOBAL_LOOP_COUNT = 10
GLOBAL_READER_COUNT = 1000
GLOBAL_WRITER_COUNT = 1


class wthread(threading.Thread):
    def run(self):
        try:
            #GLOBAL_LOCK.acquireWrite()
            #GLOBAL_LOCK.acquire_write()
            GLOBAL_LOCK.acquire()
            for i in range(GLOBAL_LOOP_COUNT):
                GLOBAL_VAR = 4
        finally:
            #GLOBAL_LOCK.release_write()
            GLOBAL_LOCK.release()


class rthread(threading.Thread):
    def run(self):
        try:
            #GLOBAL_LOCK.acquireRead()
            #GLOBAL_LOCK.acquire_read()
            GLOBAL_LOCK.acquire()
            for i in range(GLOBAL_LOOP_COUNT):
                GLOBAL_VAR = 3
        finally:
            #GLOBAL_LOCK.release_read()
            GLOBAL_LOCK.release()


# module executed?
if __name__ == "__main__":
    starttime = time.clock()
    threads = []
    for i in range(GLOBAL_READER_COUNT):
        rt = rthread()
        threads.append(rt)
    for i in range(GLOBAL_WRITER_COUNT):
        wt = wthread()
        threads.append(wt)

    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    print "All operations took " + str(time.clock() - starttime) + " msecs"


What I am doing is: I am creating multiple readers and trying to do
something. I had assumed that using the primitive Lock() in the above
situation would create a bottleneck on the rthreads. But the
numbers indicate that there is no difference at all. I have
implemented my own READ-WRITE lock, as can be seen above in mylock, and
also used the one here: code.activestate.com/recipes/502283/.

Both have the same numbers:
above test with primitive Lock:
C:\Python25\mytest>python test_rw.py
All operations took 14.4584082614 msecs
above test with mylock:
C:\Python25\mytest>python test_rw.py
All operations took 14.5185156214 msecs
above test with the one in the recipe:
C:\Python25\mytest>python test_rw.py
All operations took 14.4641975447 msecs

So, I am confused: in which situations does a Read-Write lock scale better?

Thanks,
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to display Chinese in a list retrieved from database via python

2008-12-29 Thread Mark Tolonen


"zxo102"  wrote in message 
news:2560a6e0-c103-46d2-aa5a-8604de4d1...@b38g2000prf.googlegroups.com...



I have a list in a dictionary and want to insert it into an html
file. I tested it with the following scripts, CASE 1, CASE 2 and CASE 3. I
can see "中文" in CASE 1, but that is not what I want. CASE 2 does not
show me the correct thing.
So, in CASE 3, I hacked the script of CASE 2 with a function,
conv_list2str(), to 'convert' the list into a string. CASE 3 can show
me "中文". I don't know what is wrong with CASE 2 and what is right with
CASE 3.

Without knowing why, I have just hard coded my python application
following CASE 3 for displaying Chinese characters from a list in a
dictionary in my web application.

Any ideas?



See below each case...新年快乐!


Happy a New Year: 2009

ouyang



CASE 1:

f=open('test.html','wt')
f.write('''

test

var test = ['\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4', '\xd6\xd0\xce
\xc4']
alert(test[0])
alert(test[1])
alert(test[2])


''')
f.close()


In CASE 1, the *4 bytes* D6 D0 CE C4 are written to the file, which is the 
correct gb2312 encoding for 中文.



CASE 2:
###
mydict = {}
mydict['JUNK'] = ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce
\xc4']
f_str = '''

test

var test = %(JUNK)s
alert(test[0])
alert(test[1])
alert(test[2])


'''

f_str = f_str%mydict
f=open('test02.html','wt')
f.write(f_str)
f.close()


In CASE 2, the *16 characters* "\xd6\xd0\xce\xc4" are written to the file, 
which is NOT the correct gb2312 encoding for 中文, and will be interpreted 
however javascript pleases.  This is because the str() representation of 
mydict['JUNK'] in Python 2.x is the characters "['\xd6\xd0\xce\xc4', 
'\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4']".
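
In interactive terms, the difference looks roughly like this (a minimal
illustration, not taken from the original post):

mydict = {'JUNK': ['\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4']}
# str() of the list gives its repr: the *text* \xd6\xd0\xce\xc4 inside quotes
print 'var test = %(JUNK)s' % mydict
# joining the byte strings yourself writes the actual gb2312 bytes
print 'var test = ["' + '","'.join(mydict['JUNK']) + '"]'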



CASE 3:
###
mydict = {}
mydict['JUNK'] = ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce
\xc4']

f_str = '''

test

var test = %(JUNK)s
alert(test[0])
alert(test[1])
alert(test[2])


'''

import string

def conv_list2str(value):
  list_len = len(value)
  list_str = "["
  for ii in range(list_len):
  list_str += '"'+string.strip(str(value[ii])) + '"'
  if ii != list_len-1:
   list_str += ","
  list_str += "]"
  return list_str

mydict['JUNK'] = conv_list2str(mydict['JUNK'])

f_str = f_str%mydict
f=open('test03.html','wt')
f.write(f_str)
f.close()


CASE 3 works because you build your own, correct, gb2312 representation of 
mydict['JUNK'] (value[ii] above is the correct 4-byte sequence for 中文).


That said, learn to use Unicode strings by trying the following program, but 
set the first line to the encoding *your editor* saves files in.  You can 
use the actual Chinese characters instead of escape codes this way.  The 
encoding used for the source code and the encoding used for the html file 
don't have to match, but the charset declared in the file and the encoding 
used to write the file *do* have to match.


# coding: utf8

import codecs

mydict = {}
mydict['JUNK'] = [u'中文',u'中文',u'中文']

def conv_list2str(value):
   return u'["' + u'","'.join(s for s in value) + u'"]'

f_str = u'''

test

var test = %s
alert(test[0])
alert(test[1])
alert(test[2])


'''

s = conv_list2str(mydict['JUNK'])
f=codecs.open('test04.html','wt',encoding='gb2312')
f.write(f_str % s)
f.close()


-Mark

P.S.  Python 3.0 makes this easier for what you want to do, because the 
representation of a dictionary changes.  You'll be able to skip the 
conv_list2str() function and all strings are Unicode by default.
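
For instance, under Python 3 a sketch like the following (file name arbitrary)
is enough, because str() of a list of strings already renders the characters
themselves rather than escape codes:

# Python 3 sketch: no conv_list2str() needed; the file is opened with an
# explicit gb2312 encoding so it matches the charset declared in the page.
mydict = {'JUNK': ['中文', '中文', '中文']}
with open('test05.html', 'w', encoding='gb2312') as f:
    f.write('var test = %(JUNK)s\n' % mydict)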



--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Bruno Desthuilliers

scsoce wrote:
I have a function that returns a reference, and I want to assign to the
reference,


You have a function that returns an object. You can't "assign" to an 
object - this makes no sense.


I'm afraid you are confusing Python's name/object bindings with C 
pointers or C++ references.
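
A small sketch of what is actually going on with names (illustrative only):

def f(a):
    return a        # returns the object 'a' is bound to, not a C++-style reference

b = 0
c = f(b)            # 'c' is now just another name bound to the same object, 0
c = 1               # rebinds the name 'c'; 'b' and the object 0 are untouched
print b             # -> 0

The usual Python idioms are to return the new value and rebind (b = f(...)),
or to mutate a shared container such as a list or an object attribute.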


--
http://mail.python.org/mailman/listinfo/python-list


Re: "return" in def

2008-12-29 Thread Bruno Desthuilliers

John Machin wrote:

On Dec 29, 7:06 am, Roger  wrote:

Curious. When I see a bare return, the first thing I think is that the
author forgot to include the return value and that it's a bug.
The second thing I think is that maybe the function is a generator, and
so I look for a yield. If I don't see a yield, I go back to thinking
they've left out the return value, and have to spend time trying to
understand the function in order to determine whether that is the case or
not.
In other words, even though it is perfectly valid Python, bare returns
always make the intent of the function less clear for me. I'm with Bruno
-- if you have a function with early exits, and you need to make the
intent of the function clear, explicitly return None. Otherwise, leave it
out altogether.
--
Steven

To me this is the soundest argument.  Thanks for the advice.  I think
I'll follow this as a rule of thumb hereafter.


Please don't. Follow MRAB's advice, with the corollary that a
generator is forced by the compiler to be a "procedure" in MRAB's
terminology.


I fail to see any *practical* difference between MRAB's and Steven's 
POVs. In both cases, it boils down to

- don't use a bare return at the end of a def statement's body,
- either use only bare returns ('procedure') or explicitly return None ('function')
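
To illustrate the two styles (the function names are made up for the example):

def log_event(msg):            # 'procedure': never returns a meaningful value
    if not msg:
        return                 # bare return used only as an early exit
    print msg

def find_user(users, name):    # 'function': every exit returns something
    for user in users:
        if user == name:
            return user
    return None                # the "not found" path is made explicit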

--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread John Machin
On Dec 29, 5:01 pm, scsoce  wrote:
> I have a function return a reference,

Stop right there. You don't have (and can't have, in Python) a
function which returns a reference that acts like a pointer in C or C+
+. Please tell us what manual, tutorial, book, blog or Usenet posting
gave you that idea, and we'll get the SWAT team sent out straight
away.

> and want to assign to the
> reference, simply like this:
>  >>def f(a)
>           return a

That's not a very useful function, even after you fix the syntax error
in the def statement. Would you care to give us a more realistic
example of what you are trying to achieve?

>      b = 0
>     * f( b ) = 1*

Is the * at the start of the line meant to indicate pointer
dereferencing like in C? If not, what is it? Why is there a * at the
end of the line?

> but the last line will be refused as "can't assign to function call".
> In my thought , the assignment is very nature,

Natural?? Please tell us why you would want to do that instead of:

b = 1

> but  why the interpreter
> refused to do that ?

Because (the BDFL be praised!) it (was not, is not, will not be) in
the language grammar.
--
http://mail.python.org/mailman/listinfo/python-list


Windows SSH (remote execution of commands) - Python Automation

2008-12-29 Thread Narasimhan Raghu-RBQG84
Hi experts,
 
I am looking for some information on how to automate remote login to a
UNIX machine using ssh from a windows XP box.
 
Possible way:
 
1. Use putty (or any other ssh client from windows XP). -- Can be
automated with command line parameters. The problem is that I am able to
login - Putty window opens up as well. But obviously I am unable to run
any commands in that. I need to find something like a handle to that
Putty window so that I can execute commands there.
 
Can anyone provide me some help in achieving this ?
 
 
Thanks,
 
--
Raghu
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get a list of functions in a file

2008-12-29 Thread Chris Rebert
On Sun, Dec 28, 2008 at 11:26 PM, member Basu  wrote:
> I'm putting some utility functions in a file and then building a simple
> shell interface to them. Is there some way I can automatically get a list of
> all the functions in the file? I could wrap them in a class and then use
> attributes, but I'd rather leave them as simple functions.

Assuming you've already imported the module as 'mod':

func_names = [name for name in dir(mod) if callable(getattr(mod, name))]
funcs = [getattr(mod, name) for name in dir(mod) if
callable(getattr(mod, name))]

Note that such lists will also include classes (as they too are
callable). There are ways of excluding classes (and other objects that
implement __call__), but it makes the code a bit more complicated.

Cheers,
Chris

-- 
Follow the path of the Iguana...
http://rebertia.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get a list of functions in a file

2008-12-29 Thread Gabriel Genellina

On Mon, 29 Dec 2008 05:26:52 -0200, member Basu wrote:


I'm putting some utility functions in a file and then building a simple
shell interface to them. Is there some way I can automatically get a
list of

all the functions in the file? I could wrap them in a class and then use
attributes, but I'd rather leave them as simple functions.


Such a file is called a "module" in Python. Just import it (I'll use glob as
an example, it's a module in the standard library, look for glob.py). To
get all names defined inside the module, use the dir() function:

py> import glob
py> dir(glob)
['__all__', '__builtins__', '__doc__', '__file__', '__name__',
'__package__', 'fnmatch', 'glob', 'glob0', 'glob1', 'has_magic',
'iglob', 'magic_check', 'os', 're', 'sys']

Note that you get the names of all functions defined inside the module
(fnmatch, glob, glob0, has_magic...) but also many other names (like os,
re, sys that are just imported modules, and __all__, __doc__, etc that are
special attributes)
If you are only interested in functions, the best way is to use inspect:

py> import inspect
py> inspect.getmembers(glob, inspect.isfunction)
[('glob', ), ('glob0', ), ('glob1', ), ('has_magic', ), ('iglob', )]
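
If the names merely imported into the module still get in the way, one further
refinement (a sketch, not part of the original reply) is to keep only the
functions the module itself defines:

import inspect, glob
own_funcs = [obj for name, obj in inspect.getmembers(glob, inspect.isfunction)
             if obj.__module__ == glob.__name__]   # drops anything imported from elsewhere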

Modules are covered in the Python tutorial here
 and the inspect module is
documented here 


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Read-Write Lock vs primitive Lock()

2008-12-29 Thread Gabriel Genellina

On Mon, 29 Dec 2008 05:56:10 -0200, k3xji wrote:


I am trying to see in which situations the Read-Write Lock
performs better than the primitive Lock() itself. Below is the code I am
using to test the performance:
import threading
import locks
import time

class mylock(object):


(I'm not convinced your lock is correct)


GLOBAL_VAR = 1
#GLOBAL_LOCK = locks.ReadWriteLock()
GLOBAL_LOCK = threading.Lock()
#GLOBAL_LOCK = mylock()
GLOBAL_LOOP_COUNT = 10
GLOBAL_READER_COUNT = 1000
GLOBAL_WRITER_COUNT = 1


Only one writer? If this is always the case, you don't need a lock at all.


class wthread(threading.Thread):
def run(self):
try:
#GLOBAL_LOCK.acquireWrite()
#GLOBAL_LOCK.acquire_write()
GLOBAL_LOCK.acquire()
for i in range(GLOBAL_LOOP_COUNT):
GLOBAL_VAR = 4
finally:
#GLOBAL_LOCK.release_write()
GLOBAL_LOCK.release()


Note that the thread acquires the lock ONCE, repeats several thousand
times an assignment to a *local* variable called GLOBAL_VAR (!), finally
releases the lock and exits. As every thread does the same, they just run
one after another, they never have a significant overlap.

Also, you should acquire the lock *before* the try block (you have to
ensure that, *after* acquiring the lock, it is always released; such a
requirement does not apply *before* acquiring the lock)

I'd test again with something like this:

class wthread(threading.Thread):
    def run(self):
        global GLOBAL_VAR
        for i in xrange(GLOBAL_LOOP_COUNT):
            GLOBAL_LOCK.acquire()
            try:
                GLOBAL_VAR += 1
            finally:
                GLOBAL_LOCK.release()


class rthread(threading.Thread):
def run(self):
try:
#GLOBAL_LOCK.acquireRead()
#GLOBAL_LOCK.acquire_read()
GLOBAL_LOCK.acquire()
for i in range(GLOBAL_LOOP_COUNT):
GLOBAL_VAR = 3
finally:
#GLOBAL_LOCK.release_read()
GLOBAL_LOCK.release()


Hmmm, it's a reader but attempts to modify the value?
You don't have to protect a read operation on a global variable - so a
lock isn't required here.


What I am doing is: I am creating multiple readers and trying to do
something. I had assumed that using the primitive Lock() in the above
situation would create a bottleneck on the rthreads. But the
numbers indicate that there is no difference at all. I have
implemented my own READ-WRITE lock, as can be seen above in mylock, and
also used the one here: code.activestate.com/recipes/502283/.


I hope you now understand why you got the same numbers always.
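
For the record, a read/write lock only pays off when readers genuinely overlap
while holding it. A rough sketch of such a reader, using the
acquire_read/release_read interface of the mylock class from the original post
(whether that particular implementation is correct is a separate question, and
the sleep just stands in for real read-only work):

import time
import threading

GLOBAL_LOCK = mylock()              # the read/write lock class quoted above

class rthread(threading.Thread):
    def run(self):
        GLOBAL_LOCK.acquire_read()
        try:
            time.sleep(0.05)        # simulated read-only work, held under the lock
        finally:
            GLOBAL_LOCK.release_read()

With a plain Lock() a thousand such readers serialize on that sleep; with a
working read/write lock they can all sleep concurrently, and the difference
shows up immediately.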

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: "return" in def

2008-12-29 Thread John Machin
On Dec 29, 8:26 pm, Bruno Desthuilliers  wrote:
> John Machin wrote:
>
>
>
> > On Dec 29, 7:06 am, Roger  wrote:
> >>> Curious. When I see a bare return, the first thing I think is that the
> >>> author forgot to include the return value and that it's a bug.
> >>> The second thing I think is that maybe the function is a generator, and
> >>> so I look for a yield. If I don't see a yield, I go back to thinking
> >>> they've left out the return value, and have to spend time trying to
> >>> understand the function in order to determine whether that is the case or
> >>> not.
> >>> In other words, even though it is perfectly valid Python, bare returns
> >>> always make the intent of the function less clear for me. I'm with Bruno
> >>> -- if you have a function with early exits, and you need to make the
> >>> intent of the function clear, explicitly return None. Otherwise, leave it
> >>> out altogether.
> >>> --
> >>> Steven
> >> To me this is the soundest argument.  Thanks for the advice.  I think
> >> I'll follow this as a rule of thumb hereafter.
>
> > Please don't. Follow MRAB's advice, with the corollary that a
> > generator is forced by the compiler to be a "procedure" in MRAB's
> > terminology.
>
> I fail to see any *practical* difference between MRAB's and Steven's
> POVs. In both cases, it boils down to
> - don't use a bare return at the end of a def statement's body,
> - either use only bare returns ('procedure') or explicitly return None
> ('function')

Steven's treatment was somewhat discursive, and didn't explicitly
mention the 'procedure' possibility. In fact, this sentence "if you
have a function with early exits, and you need to make the intent of
the function clear, explicitly return None." would if applied to a
'procedure' cause a stylistic horror as bad as a bare return at the
end of the def.

--
http://mail.python.org/mailman/listinfo/python-list


Re: game engine (as in rules not graphics)

2008-12-29 Thread Martin
Hi,


2008/12/29 Phil Runciman :
> See: Chris Moss, Prolog++: The Power of Object-Oriented and Logic Programming 
> (ISBN 0201565072)
>
> This book is a pretty handy intro to an OO version Prolog produced by Logic 
> Programming Associates.

> From: Aaron Brady [mailto:castiro...@gmail.com]
> Sent: Sunday, 28 December 2008 1:22 p.m.
> Not my expertise but here are my $0.02.  You are looking for ways to 
> represent rules: buying a house is legal in such and such situation, and the 
> formula for calculating its price is something.  You want "predicates" such 
> as InJail, OwnedBy, Costs.
>
> Costs( New York Ave, 200 )
> InJail( player2 )
> OwnedBy( St. Charles Ave, player4 )
> LegalMove( rolldie )
> LegalMove( sellhouse )

I'm not sure I'm looking for Prolog; I had an introductory course back
at university but I didn't exactly like it. I'm after some info on how
such rules would be defined in Python (specifically Python, although
logic programming is probably the more appropriate way).

I guess I'm missing quite some basics in the design of such concepts,
so I'll head back to Google to find some introductory stuff now :).

regards,
Martin


-- 
http://soup.alt.delete.co.at
http://www.xing.com/profile/Martin_Marcher
http://www.linkedin.com/in/martinmarcher

You are not free to read this message,
by doing so, you have violated my licence
and are required to urinate publicly. Thank you.

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: Any equivalent to Ruby's 'hpricot' html/xpath/css selector package?

2008-12-29 Thread Bruno Desthuilliers

Kenneth McDonald wrote:
Ruby has a package called 'hpricot' which can perform limited xpath 
queries,


ElementTree ? (it's in the stdlib now)


and CSS selector queries.


PyQuery ?
http://pypi.python.org/pypi/pyquery

However, what makes it really useful 
is that it does a good job of handling the "broken" html that is so 
commonly found on the web.


BeautifulSoup ?
http://pypi.python.org/pypi/BeautifulSoup/3.0.7a

possibly with ElementSoup ?
http://pypi.python.org/pypi/ElementSoup/rev452
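
For the lenient-parsing part, a minimal BeautifulSoup sketch (3.x API of that
era; the markup is just an example of sloppy HTML):

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup('<p>some <b>broken markup')   # unclosed tags are tolerated
print soup.find('b').string                        # -> u'broken markup'
print [unicode(p) for p in soup.findAll('p')]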

--
http://mail.python.org/mailman/listinfo/python-list


Problem regarding opening an html file

2008-12-29 Thread Sibtey Mehdi
Hi 

I have a GUI application (wxPython) that calls another GUI application. I am
using os.system(cmd) to launch the second GUI. In the second GUI I am trying
to open an html file using the os.startfile(filename) function, but it takes
a lot of time to open the html file.

If I am running only the second application then 'os.startfile' quickly opens
the html file.

Can anyone help me to solve this problem?

 

Thanks.

Sibtey

 

--
http://mail.python.org/mailman/listinfo/python-list


Re: Read-Write Lock vs primitive Lock()

2008-12-29 Thread k3xji
On 29 December, 11:52, "Gabriel Genellina" 
wrote:
> On Mon, 29 Dec 2008 05:56:10 -0200, k3xji  wrote:
>
> > I am trying to see in which situations the Read-Write Lock
> > performs better than the primitive Lock() itself. Below is the code I am
> > using to test the performance:
> > import threading
> > import locks
> > import time
>
> > class mylock(object):
>
> (I'm not convinced your lock is correct)

No problem.:)

> > GLOBAL_VAR = 1
> > #GLOBAL_LOCK = locks.ReadWriteLock()
> > GLOBAL_LOCK = threading.Lock()
> > #GLOBAL_LOCK = mylock()
> > GLOBAL_LOOP_COUNT = 10
> > GLOBAL_READER_COUNT = 1000
> > GLOBAL_WRITER_COUNT = 1
>
> Only one writer? If this is always the case, you don't need a lock at all.

No, it's just used for testing. It does not matter; what I want is to
create a bottleneck on the readers.

> > class wthread(threading.Thread):
> >     def run(self):
> >             try:
> >                 #GLOBAL_LOCK.acquireWrite()
> >                 #GLOBAL_LOCK.acquire_write()
> >                 GLOBAL_LOCK.acquire()
> >                 for i in range(GLOBAL_LOOP_COUNT):
> >                     GLOBAL_VAR = 4
> >             finally:
> >                 #GLOBAL_LOCK.release_write()
> >                 GLOBAL_LOCK.release()
>
> Note that the thread acquires the lock ONCE, repeats several thousand
> times an assignment to a *local* variable called GLOBAL_VAR (!), finally
> releases the lock and exits. As every thread does the same, they just run
> one after another, they never have a significant overlap.

If I put the for loop outside, then the readers will not overlap at
all, and you would be amazed by the numbers for that test. They
indicate the primitive lock is faster than the read-write lock, as it
acquires the lock, executes only one bytecode operation and releases
the lock. So, in order to create a bottleneck on the readers, we need
to somehow avoid releasing the lock immediately.

> Also, you should acquire the lock *before* the try block (you have to
> ensure that, *after* acquiring the lock, it is always released; such
> requisite does not apply *before* acquiring the lock)

Yeah, you are right but it is irrelevant.

> I'd test again with something like this:
>
> class wthread(threading.Thread):
>       def run(self):
>           global GLOBAL_VAR
>           for i in xrange(GLOBAL_LOOP_COUNT):
>               GLOBAL_LOCK.acquire()
>               try:
>                   GLOBAL_VAR += 1
>               finally:
>                   GLOBAL_LOCK.release()

With that, primitive locks perform 10 times better than Read-Write
lock. See above.


> > class rthread(threading.Thread):
> >     def run(self):
> >             try:
> >                 #GLOBAL_LOCK.acquireRead()
> >                 #GLOBAL_LOCK.acquire_read()
> >                 GLOBAL_LOCK.acquire()
> >                 for i in range(GLOBAL_LOOP_COUNT):
> >                     GLOBAL_VAR = 3
> >             finally:
> >                 #GLOBAL_LOCK.release_read()
> >                 GLOBAL_LOCK.release()
>
> Hmmm, it's a reader but attempts to modify the value?
> You don't have to protect a read operation on a global variable - so a
> lock isn't required here.

This is just for testing. Suppose that I am actually reading the
value. I don't understand why a lock is not required. Are you saying a
lock is not necessary because GLOBAL_VAR is an immutable object? If
so, suppose it is not. This is just a test. Suppose GLOBAL_VAR is a
list and we are calling append() on it, which is not an atomic
operation.

> > What I am doing is: I am creating multiple readers and trying to do
> > something. I had assumed that using the primitive Lock() in the above
> > situation would create a bottleneck on the rthreads. But the
> > numbers indicate that there is no difference at all. I have
> > implemented my own READ-WRITE lock, as can be seen above in mylock, and
> > also used the one here: code.activestate.com/recipes/502283/.
>
> I hope you now understand why you got the same numbers always.

Unfortunately, I do not understand anything.

Thanks.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows SSH (remote execution of commands) - Python Automation

2008-12-29 Thread Tino Wildenhain

Hi,

Narasimhan Raghu-RBQG84 wrote:

Hi experts,
 
I am looking for some information on how to automate remote login to a 
UNIX machine using ssh from a windows XP box.
 
Possible way:
 
1. Use putty (or any other ssh client from windows XP). -- Can be 
automated with command line parameters. The problem is that I am able to 
login - Putty window opens up as well. But obviously I am unable to run 
any commands in that. I need to find something like a handle to that 
Putty window so that I can execute commands there.


Obviously PuTTY is one (of several) terminal emulators (in short, GUI
clients) for the ssh protocol. This means they are made for interactive
work with mouse and keyboard rather than for command automation.

It's easy if you just use one of the many command-line ssh clients. You
can use os.popen() and friends or the commands module to work with them.
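
For instance, a sketch driving PuTTY's command-line client plink from Python
(host, user and password are placeholders; -batch suppresses interactive
prompts):

import subprocess

cmd = ['plink', '-ssh', '-batch', '-pw', 'secret', 'user@unixhost', 'uname -a']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()        # blocks until the remote command finishes
print out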

There is also another solution:

http://www.lag.net/paramiko/

which implements the ssh protocol in Python, so you can do more and
have finer control over the processes and channels (for example
file transfer and command control without resorting to multiple connections).

This is a little bit harder of course.
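
A rough paramiko sketch of the same remote command (again with placeholder
credentials):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # don't prompt on unknown hosts
client.connect('unixhost', username='user', password='secret')
stdin, stdout, stderr = client.exec_command('uname -a')
print stdout.read()
client.close()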

Also, sometimes it's easier and more reliable to just use cron on the unix
side. This works much better than the Task Scheduler on Windows, btw.


Regards
Tino



Can anyone provide me some help in achieving this ?
 
 
Thanks,
 
--

*Raghu*




--
http://mail.python.org/mailman/listinfo/python-list






Unicode encoding - ignoring errors

2008-12-29 Thread Michal Ludvig
Hi,

in my script I have sys.stdout and sys.stderr redefined to output
unicode strings in the current system encoding:

encoding = locale.getpreferredencoding()
sys.stdout = codecs.getwriter(encoding)(sys.stdout)

However on some systems the locale doesn't let all the unicode chars be
displayed and I eventually end up with UnicodeEncodeError exception.

I know I could explicitly "sanitize" all output with:

whatever.encode(encoding, "replace")

but it's quite inconvenient. I'd much prefer to embed this "replace"
operation into the sys.stdout writer.

Is there any way to set a conversion error handler in codecs.getwriter()
or perhaps chain it with some other filter somehow? I prefer to have
question marks in the output instead of experiencing crashes with
UnicodeEncodeErrors ;-)

Thanks!

Michal
--
http://mail.python.org/mailman/listinfo/python-list


AttributeError: 'module' object has no attribute 'DatagramHandler' (ubuntu-8.10, python 2.5.2)

2008-12-29 Thread Tzury Bar Yochay
$ ~/devel/ice/snoip/freespeech$ python
Python 2.5.2 (r252:60911, Oct  5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import logging
>>> logging.DatagramHandler
Traceback (most recent call last):
  File "", line 1, in 
AttributeError: 'module' object has no attribute 'DatagramHandler'
>>>


That is odd since the documentation says there is DatagramHandler for
module logging
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode encoding - ignoring errors

2008-12-29 Thread Chris Rebert
On Mon, Dec 29, 2008 at 4:06 AM, Michal Ludvig  wrote:
> Hi,
>
> in my script I have sys.stdout and sys.stderr redefined to output
> unicode strings in the current system encoding:
>
>encoding = locale.getpreferredencoding()
>sys.stdout = codecs.getwriter(encoding)(sys.stdout)
>
> However on some systems the locale doesn't let all the unicode chars be
> displayed and I eventually end up with UnicodeEncodeError exception.
>
> I know I could explicitly "sanitize" all output with:
>
>whatever.encode(encoding, "replace")
>
> but it's quite inconvenient. I'd much prefer to embed this "replace"
> operation into the sys.stdout writer.
>
> Is there any way to set a conversion error handler in codecs.getwriter()
> or perhaps chain it with some other filter somehow? I prefer to have
> questionmarks in the output instead of experiencing crashes with
> UnicodeEncodeErrors ;-)

You really should read the fine module docs (namely,
http://docs.python.org/library/codecs.html ).

codecs.getwriter() returns a StreamWriter subclass (basically).
The constructor of said subclass has the signature:
StreamWriter(stream[, errors])
You want the 'errors' argument.

So all you have to do is add one argument to your stdout reassignment:
sys.stdout = codecs.getwriter(encoding)(sys.stdout, 'replace')

Yay Python, for making such things easy!
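
Putting it together, a minimal sketch (the non-ASCII characters are just an
arbitrary example of something the locale may not be able to encode):

import codecs, locale, sys

encoding = locale.getpreferredencoding()
sys.stdout = codecs.getwriter(encoding)(sys.stdout, 'replace')
print u'caf\xe9 \u2603'   # unencodable characters come out as '?' instead of raising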

Cheers,
Chris

-- 
Follow the path of the Iguana...
http://rebertia.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError: 'module' object has no attribute 'DatagramHandler' (ubuntu-8.10, python 2.5.2)

2008-12-29 Thread Chris Rebert
On Mon, Dec 29, 2008 at 4:08 AM, Tzury Bar Yochay
 wrote:
> $ ~/devel/ice/snoip/freespeech$ python
> Python 2.5.2 (r252:60911, Oct  5 2008, 19:24:49)
> [GCC 4.3.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import logging
> >>> logging.DatagramHandler
> Traceback (most recent call last):
>  File "", line 1, in 
> AttributeError: 'module' object has no attribute 'DatagramHandler'

>
>
> That is odd since the documentation says there is DatagramHandler for
> module logging

From http://docs.python.org/library/logging.html#logging.DatagramHandler :
"The DatagramHandler class, located in the logging.handlers module [...]"

From http://docs.python.org/library/logging.html#logging-levels :
"The StreamHandler and FileHandler classes are defined in the core
logging package. The other handlers are defined in a sub- module,
logging.handlers."

There's your answer. I do agree though that the "class
logging.DatagramHandler" line in the docs is misleading to say the
least. Perhaps a docs bug should be filed...
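
In short, something like this minimal sketch (the host and port are arbitrary):

import logging
import logging.handlers   # the sub-module is where DatagramHandler lives

handler = logging.handlers.DatagramHandler('localhost', 9999)
logging.getLogger().addHandler(handler)
logging.warning('sent as a UDP datagram')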

Cheers,
Chris

-- 
Follow the path of the Iguana...
http://rebertia.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem regarding opening an html file

2008-12-29 Thread Hendrik van Rooyen
Sibtey Mehdi  wrote:


>Hi
>I have a GUI application (wxPython) that calls another GUI application. I am
>using os.system(cmd) to launch the second GUI. In the second GUI I am trying
>to open an html file using the os.startfile(filename) function, but it takes
>a lot of time to open the html file.
>If I am running only the second application then 'os.startfile' quickly opens
>the html file.
>Can anyone help me to solve this problem?
>

Buy more memory?

- Hendrik

--
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError: 'module' object has no attribute 'DatagramHandler' (ubuntu-8.10, python 2.5.2)

2008-12-29 Thread Bruno Desthuilliers

Tzury Bar Yochay wrote:

$ ~/devel/ice/snoip/freespeech$ python
Python 2.5.2 (r252:60911, Oct  5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import logging
>>> logging.DatagramHandler

Traceback (most recent call last):
  File "", line 1, in 
AttributeError: 'module' object has no attribute 'DatagramHandler'


That is odd since the documentation says there is DatagramHandler for
module logging


It also says that DatagramHandler is located in the logging.handlers 
modules:


http://www.python.org/doc/2.5.2/lib/node415.html


HTH
--
http://mail.python.org/mailman/listinfo/python-list


Re: "return" in def

2008-12-29 Thread Bruno Desthuilliers

John Machin wrote:

On Dec 29, 8:26 pm, Bruno Desthuilliers  wrote:

John Machin wrote:

(snip)

Please don't. Follow MRAB's advice, with the corollary that a
generator is forced by the compiler to be a "procedure" in MRAB's
terminology.

I fail to see any *practical* difference between MRAB's and Steven's
POVs. In both cases, it boils down to
- don't use a bare return at the end of a def statement's body,
- either use only bare returns ('procedure') or explicitly return None
('function')


Steven's treatment was somewhat discursive, and didn't explicitly
mention the 'procedure' possibility. In fact, this sentence "if you
have a function with early exits, and you need to make the intent of
the function clear, explicitly return None." would if applied to a
'procedure' cause a stylistic horror as bad as a bare return at the
end of the def.


Ok. You're right.
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiply each element of a list by a number

2008-12-29 Thread skip
> "Colin" == Colin J Williams  writes:

Colin> s...@pobox.com wrote:

>> For extremely short lists, but not for much else:
>> 
>> % for n in 1 10 100 1000 10000 100000 ; do
>> >   echo "len:" $n
>> >   echo -n "numpy: "
>> >   python -m timeit -s 'import numpy ; a = numpy.array(range('$n'))' 
'a*3'
>> >   echo -n "list: "
>> >   python -m timeit -s 'a = range('$n')' '[3*x for x in a]'
>> > done
>> len: 1
>> numpy: 10 loops, best of 3: 11.7 usec per loop
>> list: 100 loops, best of 3: 0.698 usec per loop
>> len: 10
>> numpy: 10 loops, best of 3: 11.7 usec per loop
>> list: 10 loops, best of 3: 2.94 usec per loop
>> len: 100
>> numpy: 10 loops, best of 3: 12.1 usec per loop
>> list: 1 loops, best of 3: 24.4 usec per loop
>> len: 1000
>> numpy: 10 loops, best of 3: 15 usec per loop
>> list: 1000 loops, best of 3: 224 usec per loop
>> len: 10000
>> numpy: 1 loops, best of 3: 41 usec per loop
>> list: 100 loops, best of 3: 2.17 msec per loop
>> len: 100000
>> numpy: 1000 loops, best of 3: 301 usec per loop
>> list: 10 loops, best of 3: 22.2 msec per loop
>> 
>> This is with Python 2.4.5 on Solaris 10.  YMMV.

Colin> Your comment is justified for len= 100 
Colin> or 1,000 but not for len= 10,000 or 100,000.

Look again at the time units per loop.

Colin> I wonder about the variability of the number of loops in your
Colin> data.

That's how timeit works.  It runs a few iterations to see how many to run to
get a reasonable runtime.

Colin> I have tried to repeat your test with the program below, but it
Colin> fails to cope with numpy.

I stand by my assertion that numpy will be much faster than pure Python for
all but very short lists.

-- 
Skip Montanaro - s...@pobox.com - http://smontanaro.dyndns.org/
--
http://mail.python.org/mailman/listinfo/python-list


Re: setup.py installs modules to a wrong place

2008-12-29 Thread Michal Ludvig
Hi Omer,

> I'm seeing this on fc8 with a custom built python2.6.  Not happening
> with any other packages (e.g. boto).  Workaround of course was just to
> copy the S3 dir to /usr/local/lib/python2.6/site-packages.

I've found it. The culprit was a pre-set install prefix in setup.cfg - I
can't remember why I put it there ages ago. Anyway, now it's removed ;-)

> I poked around a bit but nothing obvious jumped out.  Happy to do any
> debugging if you have tests you'd like to me to run.

Thanks for the offer. I believe the current SVN trunk [1] of s3cmd
should install just fine everywhere.

Michal

[1] .. http://s3tools.logix.cz/download




--
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode encoding - ignoring errors

2008-12-29 Thread Michal Ludvig
Chris Rebert wrote:
> On Mon, Dec 29, 2008 at 4:06 AM, Michal Ludvig  wrote:
>> Hi,
>>
>> in my script I have sys.stdout and sys.stderr redefined to output
>> unicode strings in the current system encoding:
>>
>>encoding = locale.getpreferredencoding()
>>sys.stdout = codecs.getwriter(encoding)(sys.stdout)
>>
>> However on some systems the locale doesn't let all the unicode chars be
>> displayed and I eventually end up with UnicodeEncodeError exception.
>>
>> I know I could explicitly "sanitize" all output with:
>>
>>whatever.encode(encoding, "replace")
>>
>> but it's quite inconvenient. I'd much prefer to embed this "replace"
>> operation into the sys.stdout writer.
>> [...]
> codecs.getwriter() returns a StreamWriter subclass (basically).
> The constructor of said subclass has the signature:
> StreamWriter(stream[, errors])
> You want the 'errors' argument.

Thanks!

(and I'm going to read the module docs, really ;-)


Michal


--
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError: 'module' object has no attribute 'DatagramHandler' (ubuntu-8.10, python 2.5.2)

2008-12-29 Thread John Machin
On Dec 29, 11:08 pm, Tzury Bar Yochay  wrote:
> $ ~/devel/ice/snoip/freespeech$ python
> Python 2.5.2 (r252:60911, Oct  5 2008, 19:24:49)
> [GCC 4.3.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.>>> 
> import logging
> >>> logging.DatagramHandler
>
> Traceback (most recent call last):
>   File "", line 1, in 
> AttributeError: 'module' object has no attribute 'DatagramHandler'
>
>
>
> That is odd since the documentation says there is DatagramHandler for
> module logging

According to http://www.python.org/doc/2.5.2/lib/module-logging.html
"""
The StreamHandler and FileHandler classes are defined in the core
logging package. The other handlers are defined in a sub- module,
logging.handlers.
"""
and later in http://www.python.org/doc/2.5.2/lib/node415.html
"""
The DatagramHandler class, located in the logging.handlers module, ...
"""

HTH,
John
--
http://mail.python.org/mailman/listinfo/python-list


Re: Any equivalent to Ruby's 'hpricot' html/xpath/css selector package?

2008-12-29 Thread Mark Thomas
On Dec 28, 6:22 pm, Kenneth McDonald
 wrote:
> Ruby has a package called 'hpricot' which can perform limited xpath  
> queries, and CSS selector queries. However, what makes it really  
> useful is that it does a good job of handling the "broken" html that  
> is so commonly found on the web. Does Python have anything similar,  
> i.e. something that will not only do XPath queries, but will do so on  
> imperfect HTML?

Hpricot is a fine package but I prefer Nokogiri (see
http://www.rubyinside.com/nokogiri-ruby-html-parser-and-xml-parser-1288.html)
because it is based on libxml2 and therefore is faster, conforms to
the full XPath 1.0 spec, works on imperfect HTML, and exposes the
Hpricot API.

In python, the equivalent is lxml (http://codespeak.net/lxml/), which
is similarly based on libxml2, very fast, XPath-1.0 conformant, and
exposes the now-standard ElementTree API.

The main difference is that lxml doesn't have CSS selector syntax, but
IMHO that's a gimmick when you have a full XPath 1.0 engine at your
disposal.
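
A tiny lxml sketch of that (untested here; the markup is just an example of
sloppy HTML):

from lxml import html

doc = html.fromstring('<p>some <b>broken markup')   # lenient HTML parser
print doc.xpath('//b/text()')                       # -> ['broken markup']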

-- Mark.
--
http://mail.python.org/mailman/listinfo/python-list


Re: "return" in def

2008-12-29 Thread Aaron Brady
On Dec 28, 11:56 am, Gerard Flanagan  wrote:
> On Dec 28, 5:19 pm, Roger  wrote:
>
> > Hi Everyone,
> [...]
> > When I define a method I always include a return statement out of
> > habit even if I don't return anything explicitly:
>
> > def something():
> >         # do something
> >         return
>
> > Is this pythonic or excessive?  Is this an unnecessary affectation
> > that only adds clock ticks to my app and would I be better off
> > removing "returns" where nothing is returned or is it common practice
> > to have returns.
>
> It's not particularly excessive but it is uncommon. A nekkid return
> can sometimes be essential within a function body, so a non-essential
> nekkid return could be considered just noise.

One style of coding I heard about once only permits returns at the end
of a function.  It claims it makes it easier to see the function as a
mathematical object.

It's a slick idea, but multiple exit points are really practical.

Incidentally, generators have multiple entry points.  They "yield
multiple times, they have more than one entry point and their
execution can be suspended" -- 
http://docs.python.org/reference/expressions.html#yield-expressions

The discussion makes me think that 'clear' is subjective, just like
'natural' has 39 definitions.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get a list of functions in a file

2008-12-29 Thread Aaron Brady
On Dec 29, 3:50 am, "Chris Rebert"  wrote:
> On Sun, Dec 28, 2008 at 11:26 PM, member Basu  wrote:
> > I'm putting some utility functions in a file and then building a simple
> > shell interface to them. Is there some way I can automatically get a list of
> > all the functions in the file? I could wrap them in a class and then use
> > attributes, but I'd rather leave them as simple functions.
>
> Assuming you've already imported the module as 'mod':
>
> func_names = [name for name in dir(mod) if callable(getattr(mod, name))]
> funcs = [getattr(mod, name) for name in dir(mod) if
> callable(getattr(mod, name))]
>
> Note that such lists will also include classes (as they too are
> callable). There are ways of excluding classes (and other objects that
> implement __call__), but it makes the code a bit more complicated.

No, not in general.  It's a weakness of one of the strengths of
Python.  For instance, if you define a function in a string, or return
one from another function, there's no way to get at it.

If you do want to import it, you can put any executable code inside an
'if __name__== "__main__"' block, so it won't get executed while
you're trying to index/catalog it.

If you're interested in something more hard-core, you might like the
'tokenize' module.  And I think the pattern you're looking for is
'every "def" outside a string, and even some in one.'

P.S.  Did not receive the original message on Google Groups.
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to display Chinese in a list retrieved from database via python

2008-12-29 Thread zxo102
On Dec 29, 5:06 pm, "Mark Tolonen"  wrote:
> "zxo102"  wrote in message
>
> news:2560a6e0-c103-46d2-aa5a-8604de4d1...@b38g2000prf.googlegroups.com...
>
> > I have a list in a dictionary and want to insert it into the html
> > file. I test it with following scripts of CASE 1, CASE 2 and CASE 3. I
> > can see "中文" in CASE 1 but that is not what I want. CASE 2 does not
> > show me correct things.
> > So, in CASE 3, I hacked the script of CASE 2 with a function:
> > conv_list2str() to 'convert' the list into a string. CASE 3 can show
> > me "中文". I don't know what is wrong with CASE 2 and what is right with
> > CASE 3.
>
> > Without knowing why, I have just hard coded my python application
> > following CASE 3 for displaying Chinese characters from a list in a
> > dictionary in my web application.
>
> > Any ideas?
>
> See below each case...新年快乐!
>
>
>
> > Happy a New Year: 2009
>
> > ouyang
>
> > CASE 1:
> > 
> > f=open('test.html','wt')
> > f.write('''
> > 
> > test
> > 
> > var test = ['\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4', '\xd6\xd0\xce
> > \xc4']
> > alert(test[0])
> > alert(test[1])
> > alert(test[2])
> > 
> > 
> > ''')
> > f.close()
>
> In CASE 1, the *4 bytes* D6 D0 CE C4 are written to the file, which is the
> correct gb2312 encoding for 中文.
>
>
>
> > CASE 2:
> > ###
> > mydict = {}
> > mydict['JUNK'] = ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce
> > \xc4']
> > f_str = '''
> > 
> > test
> > 
> > var test = %(JUNK)s
> > alert(test[0])
> > alert(test[1])
> > alert(test[2])
> > 
> > 
> > '''
>
> > f_str = f_str%mydict
> > f=open('test02.html','wt')
> > f.write(f_str)
> > f.close()
>
> In CASE 2, the *16 characters* "\xd6\xd0\xce\xc4" are written to the file,
> which is NOT the correct gb2312 encoding for 中文, and will be interpreted
> however javascript pleases.  This is because the str() representation of
> mydict['JUNK'] in Python 2.x is the characters "['\xd6\xd0\xce\xc4',
> '\xd6\xd0\xce\xc4', '\xd6\xd0\xce\xc4']".
>
>
>
> > CASE 3:
> > ###
> > mydict = {}
> > mydict['JUNK'] = ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce
> > \xc4']
>
> > f_str = '''
> > 
> > test
> > 
> > var test = %(JUNK)s
> > alert(test[0])
> > alert(test[1])
> > alert(test[2])
> > 
> > 
> > '''
>
> > import string
>
> > def conv_list2str(value):
> >   list_len = len(value)
> >   list_str = "["
> >   for ii in range(list_len):
> >   list_str += '"'+string.strip(str(value[ii])) + '"'
> >   if ii != list_len-1:
> >list_str += ","
> >   list_str += "]"
> >   return list_str
>
> > mydict['JUNK'] = conv_list2str(mydict['JUNK'])
>
> > f_str = f_str%mydict
> > f=open('test03.html','wt')
> > f.write(f_str)
> > f.close()
>
> CASE 3 works because you build your own, correct, gb2312 representation of
> mydict['JUNK'] (value[ii] above is the correct 4-byte sequence for 中文).
>
> That said, learn to use Unicode strings by trying the following program, but
> set the first line to the encoding *your editor* saves files in.  You can
> use the actual Chinese characters instead of escape codes this way.  The
> encoding used for the source code and the encoding used for the html file
> don't have to match, but the charset declared in the file and the encoding
> used to write the file *do* have to match.
>
> # coding: utf8
>
> import codecs
>
> mydict = {}
> mydict['JUNK'] = [u'中文',u'中文',u'中文']
>
> def conv_list2str(value):
> return u'["' + u'","'.join(s for s in value) + u'"]'
>
> f_str = u'''
> 
> test
> 
> var test = %s
> alert(test[0])
> alert(test[1])
> alert(test[2])
> 
> 
> '''
>
> s = conv_list2str(mydict['JUNK'])
> f=codecs.open('test04.html','wt',encoding='gb2312')
> f.write(f_str % s)
> f.close()
>
> -Mark
>
> P.S.  Python 3.0 makes this easier for what you want to do, because the
> representation of a dictionary changes.  You'll be able to skip the
> conv_list2str() function and all strings are Unicode by default.

Thanks for your comments, Mark. I understand it now. The list of escape
codes, ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4'], comes
from a postgresql database via a "select" statement. I will look at the
postgresql database configuration and see if it is possible to return
['中文','中文','中文'] directly from the "select" statement.
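
(If the driver in use happens to be psycopg2, which is only an assumption
here, it can be told to hand back unicode objects directly; a sketch:)

import psycopg2, psycopg2.extensions

# Assumption: psycopg2 is the PostgreSQL driver; after this, SELECTed text
# columns arrive as unicode objects rather than encoded byte strings.
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)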

ouyang

--
http://mail.python.org/mailman/listinfo/python-list


Re: Read-Write Lock vs primitive Lock()

2008-12-29 Thread Aaron Brady
On Dec 29, 4:17 am, k3xji  wrote:
On 29 December, 11:52, "Gabriel Genellina" 
> wrote:
>
> > On Mon, 29 Dec 2008 05:56:10 -0200, k3xji  wrote:
>
snip
> > > class wthread(threading.Thread):
> > >     def run(self):
> > >             try:
> > >                 #GLOBAL_LOCK.acquireWrite()
> > >                 #GLOBAL_LOCK.acquire_write()
> > >                 GLOBAL_LOCK.acquire()
> > >                 for i in range(GLOBAL_LOOP_COUNT):
> > >                     GLOBAL_VAR = 4
> > >             finally:
> > >                 #GLOBAL_LOCK.release_write()
> > >                 GLOBAL_LOCK.release()
>
> > Note that the thread acquires the lock ONCE, repeats several thousand
> > times an assignment to a *local* variable called GLOBAL_VAR (!)
snip
> > class wthread(threading.Thread):
> >       def run(self):
> >           global GLOBAL_VAR
> >           for i in xrange(GLOBAL_LOOP_COUNT):
> >               GLOBAL_LOCK.acquire()
> >               try:
> >                   GLOBAL_VAR += 1
> >               finally:
> >                   GLOBAL_LOCK.release()
>
> With that, primitive locks perform 10 times better than Read-Write
> lock. See above.
snip

Gabriel's point (one of them) was that 'GLOBAL_VAR' isn't global in
your example.  Your 'wthread' objects aren't sharing anything.  He
added the 'global GLOBAL_VAR' statement, which is important
--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem regarding opening an html file

2008-12-29 Thread Aaron Brady
On Dec 29, 6:10 am, "Hendrik van Rooyen"  wrote:
> Sibtey Mehdi  wrote:
> >Hi
> >I have a GUI application (wxPython) that calls another GUI application. I am
> >using os.system(cmd) to launch the second GUI. In the second GUI I am trying
> >to open an html file using the os.startfile(filename) function, but it takes
> >a lot of time to open the html file.
> >If I am running only the second application then 'os.startfile' quickly opens
> >the html file.
> >Can anyone help me to solve this problem?
>
> Buy more memory?
>
> - Hendrik

You might be running into problems with duelling message pumps in the
GUI loops.  The 'os.system' call runs in a subprocess, not in an
independent one.

If you use 'subprocess.Popen', you can explicitly wait on the second
process, which will hang your first GUI, but may cause the second one
to run better.  Or, you have the option to not wait for it.

I see that 'os.system' waits for the second process to complete, while
'startfile' does not.

Just a thought, are you looking for the 'webbrowser' module?


Another option is to run it from a separate thread.
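
A sketch of the Popen variant (the script name is a placeholder):

import subprocess
import webbrowser

# launch the second GUI without blocking the first one's event loop
proc = subprocess.Popen(['python', 'second_gui.py'])
# proc.wait()              # only if you really do want to block until it exits

# and if the goal is just to display an html file, this may be all you need:
webbrowser.open('help.html')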
--
http://mail.python.org/mailman/listinfo/python-list


Python-URL! - weekly Python news and links (Dec 29)

2008-12-29 Thread Gabriel Genellina
QOTW:  "The fundamental economics of software development leads you to
open-source software."  David Rivas
http://www.ddj.com/linux-open-source/212201757


Python 2.5.4 final released (replaces 2.5.3 due to a critical bug)
http://groups.google.com/group/comp.lang.python/t/4042c08783c2/

Doing set operations with non-hashable objects:
http://groups.google.com/group/comp.lang.python/t/83972f948754ee36/

Reading a file one line at a time *and* getting accurate line offsets:
http://groups.google.com/group/comp.lang.python/t/7cd79e287ebf51cf/

Moving from C to Python: how to represent a sequence of bytes?
http://groups.google.com/group/comp.lang.python/t/1ee718148cecc39f/

Keeping track of all instances of a class:
http://groups.google.com/group/comp.lang.python/t/3ceee5e64d909bce/

return, return None, or nothing? Which is the right one to choose?
http://groups.google.com/group/comp.lang.python/t/99d0ba4075f6684e/

Best way to find the differences between two large dictionaries:
http://groups.google.com/group/comp.lang.python/t/4f3220dce8a3cf23/

xkcd Christmas special contains another Python reference:
http://xkcd.com/521/



Everything Python-related you want is probably one or two clicks away in
these pages:

Python.org's Python Language Website is the traditional
center of Pythonia
http://www.python.org
Notice especially the master FAQ
http://www.python.org/doc/FAQ.html

PythonWare complements the digest you're reading with the
marvelous daily python url
 http://www.pythonware.com/daily

Just beginning with Python?  This page is a great place to start:
http://wiki.python.org/moin/BeginnersGuide/Programmers

The Python Papers aims to publish "the efforts of Python enthusiats":
http://pythonpapers.org/
The Python Magazine is a technical monthly devoted to Python:
http://pythonmagazine.com

Readers have recommended the "Planet" sites:
http://planetpython.org
http://planet.python.org

comp.lang.python.announce announces new Python software.  Be
sure to scan this newsgroup weekly.
http://groups.google.com/group/comp.lang.python.announce/topics

Python411 indexes "podcasts ... to help people learn Python ..."
Updates appear more-than-weekly:
http://www.awaretek.com/python/index.html

The Python Package Index catalogues packages.
http://www.python.org/pypi/

The somewhat older Vaults of Parnassus ambitiously collects references
to all sorts of Python resources.
http://www.vex.net/~x/parnassus/

Much of Python's real work takes place on Special-Interest Group
mailing lists
http://www.python.org/sigs/

Python Success Stories--from air-traffic control to on-line
match-making--can inspire you or decision-makers to whom you're
subject with a vision of what the language makes practical.
http://www.pythonology.com/success

The Python Software Foundation (PSF) has replaced the Python
Consortium as an independent nexus of activity.  It has official
responsibility for Python's development and maintenance.
http://www.python.org/psf/
Among the ways you can support PSF is with a donation.
http://www.python.org/psf/donations/

The Summary of Python Tracker Issues is an automatically generated
report summarizing new bugs, closed ones, and patch submissions. 

http://search.gmane.org/?author=status%40bugs.python.org&group=gmane.comp.python.devel&sort=date

Although unmaintained since 2002, the Cetus collection of Python
hyperlinks retains a few gems.
http://www.cetus-links.org/oo_python.html

Python FAQTS
http://python.faqts.com/

The Cookbook is a collaborative effort to capture useful and
interesting recipes.
http://code.activestate.com/recipes/langs/python/

Many Python conferences around the world are in preparation.
Watch this space for links to them.

Among several Python-oriented RSS/RDF feeds available, see:
http://www.python.org/channews.rdf
For more, see:
http://www.syndic8.com/feedlist.php?ShowMatch=python&ShowStatus=all
The old Python "To-Do List" now lives principally in a
SourceForge reincarnation.
http://sourceforge.net/tracker/?atid=355470&group_id=5470&func=browse
http://www.python.org/dev/peps/pep-0042/

del.icio.us presents an intriguing approach to reference commentary.
It already aggregates quite a bit of Python intelligence.
http://del.icio.us/tag/python

*Py: the Journal of the Python Language*
http://www.pyzine.com

Dr.Dobb's Portal is another source of Python news and articles:
http://www.ddj.com/TechSearch/searchRe

multiprocessing vs thread performance

2008-12-29 Thread mk

Hello everyone,

After reading http://www.python.org/dev/peps/pep-0371/ I was under the
impression that the performance of the multiprocessing package is similar to
that of thread / threading. However, to familiarize myself with both
packages I wrote my own test of spawning and returning 100,000 empty
threads or processes (while maintaining at most 100 processes / threads
active at any one time), respectively.


The results I got are very different from the benchmark quoted in PEP
371. On a twin Xeon machine the threaded version executed in 5.54 secs,
while the multiprocessing version took over 222 secs to complete!


Am I doing something wrong in the code below? Or do I have to use
multiprocessing.Pool to get any decent results?


# multithreaded version


#!/usr/local/python2.6/bin/python

import thread
import time

class TCalc(object):

    def __init__(self):
        self.tactivnum = 0
        self.reslist = []
        self.tid = 0
        self.tlock = thread.allocate_lock()

    def testth(self, tid):
        if tid % 1000 == 0:
            print "== Thread %d working ==" % tid
        self.tlock.acquire()
        self.reslist.append(tid)
        self.tactivnum -= 1
        self.tlock.release()

    def calc_100thousand(self):
        tid = 1
        while tid <= 100000:
            while self.tactivnum > 99:
                time.sleep(0.01)
            self.tlock.acquire()
            self.tactivnum += 1
            self.tlock.release()
            t = thread.start_new_thread(self.testth, (tid,))
            tid += 1
        while self.tactivnum > 0:
            time.sleep(0.01)


if __name__ == "__main__":
    tc = TCalc()
    tstart = time.time()
    tc.calc_100thousand()
    tend = time.time()
    print "Total time: ", tend-tstart



# multiprocessing version

#!/usr/local/python2.6/bin/python

import multiprocessing
import time


def testp(pid):
    if pid % 1000 == 0:
        print "== Process %d working ==" % pid

def palivelistlen(plist):
    pll = 0
    for p in plist:
        if p.is_alive():
            pll += 1
        else:
            plist.remove(p)
            p.join()
    return pll

def testp_100thousand():
    pid = 1
    proclist = []
    while pid <= 100000:
        while palivelistlen(proclist) > 99:
            time.sleep(0.01)
        p = multiprocessing.Process(target=testp, args=(pid,))
        p.start()
        proclist.append(p)
        pid += 1
    print "=== Main thread waiting for all processes to finish ==="
    for p in proclist:
        p.join()

if __name__ == "__main__":
    tstart = time.time()
    testp_100thousand()
    tend = time.time()
    print "Total time:", tend - tstart


--
http://mail.python.org/mailman/listinfo/python-list


Re: game engine (as in rules not graphics)

2008-12-29 Thread Aaron Brady
On Dec 29, 4:14 am, Martin  wrote:
> Hi,
>
> 2008/12/29 Phil Runciman :
>
> > See: Chris Moss, Prolog++: The Power of Object-Oriented and Logic 
> > Programming (ISBN 0201565072)
>
> > This book is a pretty handy intro to an OO version Prolog produced by Logic 
> > Programming Associates.
> > From: Aaron Brady [mailto:castiro...@gmail.com]
> > Sent: Sunday, 28 December 2008 1:22 p.m.
> > Not my expertise but here are my $0.02.  You are looking for ways to 
> > represent rules: buying a house is legal in such and such situation, and 
> > the formula for calculating its price is something.  You want "predicates" 
> > such as InJail, OwnedBy, Costs.
>
> > Costs( New York Ave, 200 )
> > InJail( player2 )
> > OwnedBy( St. Charles Ave, player4 )
> > LegalMove( rolldie )
> > LegalMove( sellhouse )
>
> I'm not sure I'm looking for prolog, i had an introductory course back
> at the university but it didn't exactly like it. I'm after some info
> how such rules would defined in python (specifically python althou
> logic programming is probably the more appropriate way).
>
> I guess I'm missing quite some basics in the design of such concepts,
> I'll head back to google to find some introductory stuff now :).
snip

It depends on what you want to do with it.  Do you want to answer a
question about whether something is legal?  Do you want a catalog of
legal moves?  Do you want to forward-chain moves to a state?  Do you
want just a representation for its own sake?

For instance, the game just started.  Player 1 landed on Oriental,
bought it, and Player 2 landed in the same place.  Here are the legal
possibilities.

Player 1 offers to sell Oriental to Player X.
Player X offers to buy Oriental from Player 1.
Player 1 mortgages Oriental.
Player 1 collects rent from Player 2.
Player 3 rolls dice.

Thinking aloud, I think the closest thing to predicates you'll have in
Python is to build a Relation class or use a relational database.
Some tables you might use are: Property( id, name, price, rent0houses,
rent1house, ..., numhouses, mortgaged, owner ).  Player( id, location,
money ).  LastMove( player.id ).
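
A rough sketch of that table idea using the stdlib sqlite3 module with an
in-memory database (the column subset, prices, rents and player positions
below are made up for illustration):

import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("""CREATE TABLE property
               (id INTEGER PRIMARY KEY, name TEXT, price INTEGER,
                rent0houses INTEGER, owner INTEGER)""")
cur.execute("""CREATE TABLE player
               (id INTEGER PRIMARY KEY, location TEXT, money INTEGER)""")
cur.execute("INSERT INTO player VALUES (1, 'Oriental', 1440)")
cur.execute("INSERT INTO player VALUES (2, 'Oriental', 1500)")
cur.execute("INSERT INTO property VALUES (1, 'Oriental', 100, 6, 1)")

# "Player 1 collects rent from Player 2" is legal whenever another player
# sits on a property that player 1 owns:
cur.execute("""SELECT pr.name, visitor.id, pr.rent0houses
               FROM property pr
               JOIN player owner ON pr.owner = owner.id
               JOIN player visitor ON visitor.location = pr.name
               WHERE visitor.id != owner.id""")
for name, payer, rent in cur.fetchall():
    print "Player %d owes %d rent on %s" % (payer, rent, name)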

P.S.  There is 'pyprolog' on sourceforge; I did not check it out.
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread janislaw
On 29 Gru, 15:52, mk  wrote:
> Hello everyone,
>
> After reading http://www.python.org/dev/peps/pep-0371/ I was under
> impression that performance of multiprocessing package is similar to
> that of thread / threading. However, to familiarize myself with both
> packages I wrote my own test of spawning and returning 100,000 empty
> threads or processes (while maintaining at most 100 processes / threads
> active at any one time), respectively.
>
> The results I got are very different from the benchmark quoted in PEP
> 371. On twin Xeon machine the threaded version executed in 5.54 secs,
> while multiprocessing version took over 222 secs to complete!
>
> Am I doing smth wrong in code below? Or do I have to use
> multiprocessing.Pool to get any decent results?

Oooh, 100000 processes! You're fortunate that your OS handled them in
finite time.

[quick browsing through the code]

Ah, so there are 100 processes at a time. 200 secs still doesn't sound
strange.

JW
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiply each element of a list by a number

2008-12-29 Thread Colin J. Williams

s...@pobox.com wrote:

"Colin" == Colin J Williams  writes:


Colin> s...@pobox.com wrote:

>> For extremely short lists, but not for much else:
>> 
>> % for n in 1 10 100 1000 10000 100000 ; do

>> >   echo "len:" $n
>> >   echo -n "numpy: "
>> >   python -m timeit -s 'import numpy ; a = numpy.array(range('$n'))' 
'a*3'
>> >   echo -n "list: "
>> >   python -m timeit -s 'a = range('$n')' '[3*x for x in a]'
>> > done
>> len: 1
>> numpy: 100000 loops, best of 3: 11.7 usec per loop
>> list: 1000000 loops, best of 3: 0.698 usec per loop
>> len: 10
>> numpy: 100000 loops, best of 3: 11.7 usec per loop
>> list: 100000 loops, best of 3: 2.94 usec per loop
>> len: 100
>> numpy: 100000 loops, best of 3: 12.1 usec per loop
>> list: 10000 loops, best of 3: 24.4 usec per loop
>> len: 1000
>> numpy: 100000 loops, best of 3: 15 usec per loop
>> list: 1000 loops, best of 3: 224 usec per loop
>> len: 10000
>> numpy: 10000 loops, best of 3: 41 usec per loop
>> list: 100 loops, best of 3: 2.17 msec per loop
>> len: 100000
>> numpy: 1000 loops, best of 3: 301 usec per loop
>> list: 10 loops, best of 3: 22.2 msec per loop
>> 
>> This is with Python 2.4.5 on Solaris 10.  YMMV.


Colin> Your comment is justified for len= 100 
Colin> or 1,000 but not for len= 10,000 or 100,000.


Look again at the time units per loop.

Colin> I wonder about the variability of the number of loops in your
Colin> data.

That's how timeit works.  It runs a few iterations to see how many to run to
get a reasonable runtime.


That's interesting but that's not the way timeit is documented for Python 2.5:


timeit([number=1000000])

Time number executions of the main statement. This executes the setup
statement once, and then returns the time it takes to execute the main
statement a number of times, measured in seconds as a float. The argument
is the number of times through the loop, defaulting to one million. The
main statement, the setup statement and the timer function to be used are
passed to the constructor.




Colin> I have tried to repeat your test with the program below, but it
Colin> fails to cope with numpy.

I stand by my assertion that numpy will be much faster than pure Python for
all but very short lists.



In spite of the fact that your own data doesn't support the assertion?


I would have expected numpy to be the clear winner for len > 1,500.


Perhaps your data questions the value of timeit as a timing tool.


Colin W.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows SSH (remote execution of commands) - Python Automation

2008-12-29 Thread Cameron Laird
In article ,
Tino Wildenhain   wrote:
.
.
.
>> I am looking for some information on how to automate remote login to a 
>> UNIX machine using ssh from a windows XP box.
>>  
>> Possible way:
>>  
>> 1. Use putty (or any other ssh client from windows XP). -- Can be 
>> automated with command line parameters. The problem is that I am able to 
>> login - Putty window opens up as well. But obviously I am unable to run 
>> any commands in that. I need to find something like a handle to that 
>> Putty window so that I can execute commands there.
>
>Obviously putty is one (of several) terminal emulators (or in short gui 
>clients) for ssh protocol. This means they are made for interactive work
>with mouse and keyboard rather then for command automation.
>
>Its easy if you just use one of the many command line ssh clients. You
>can use os.popen() and friends or the command module to work with them.
>
>There is also another solution:
>
>http://www.lag.net/paramiko/
>
>which implements the ssh protocol in python so you can do more and
>have finer control over the processes and channels (for example
>file transfer and command control w/o resort to multiple connections)
>
>This is a little bit harder of course.
>
>Also, sometimes its more easy and relieable to just use cron on unix 
>side. This works much much better then Task scheduler on windows btw.
.
.
.
Good advice, all around.  I'll reinforce a few of your 
points:
A.  I entirely agree that Mr. Raghu would likely
do well to learn about cron(8); automation of
the sort that seems to be involved here is 
generally more convenient with standard Linux
tools than from the Windows side.
B.  One of the Windows command-line automaters 
to which you alluded is a sibling of putty:
plink <URL: http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter7.html >.
It shares configuration and infrastructure 
elements with putty, and might require the
least adjustment.
C.  'You think paramiko is harder?  I find it a
nice solution in many situations.
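
For reference, a minimal paramiko sketch looks roughly like this (the
host, credentials and command are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('unixbox.example.com', username='raghu', password='secret')
stdin, stdout, stderr = client.exec_command('ls -l /tmp')
print stdout.read()
client.close()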
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread mk

janislaw wrote:


Ah, so there are 100 processes at time. 200secs still don't sound
strange.


I ran the PEP 371 code on my system (Linux) on Python 2.6.1:

Linux SLES (9.156.44.174) [15:18] root ~/tmp/src # ./run_benchmarks.py 
empty_func.py


Importing empty_func
Starting tests ...
non_threaded (1 iters)  0.000005 seconds
threaded (1 threads)    0.000235 seconds
processes (1 procs)     0.002607 seconds

non_threaded (2 iters)  0.000006 seconds
threaded (2 threads)    0.000461 seconds
processes (2 procs)     0.004514 seconds

non_threaded (4 iters)  0.000008 seconds
threaded (4 threads)    0.000897 seconds
processes (4 procs)     0.008557 seconds

non_threaded (8 iters)  0.000010 seconds
threaded (8 threads)    0.001821 seconds
processes (8 procs)     0.016950 seconds

This is very different from PEP 371. It appears that the PEP 371 code 
was written on Mac OS X. The conclusion I get from comparing the above costs 
is that OS X must have a very low cost of creating the process, at least 
when compared to Linux, not that multiprocessing is a viable alternative 
to the thread / threading module. :-(


--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Christian Heimes
mk wrote:
> Am I doing smth wrong in code below? Or do I have to use
> multiprocessing.Pool to get any decent results?

You have missed an important point. A well designed application does
neither create so many threads nor processes. The creation of a thread
or forking of a process is an expensive operation. You should use a pool
of threads or processes.

The limiting factor is not the creation time but the communication and
synchronization overhead between multiple threads or processes.
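
As a rough illustration of the pool approach described above (the worker
count of 100 and the 100000 no-op tasks only mirror the original test; a
real pool would normally be sized closer to the CPU count):

import multiprocessing
import time

def noop(i):
    return i

if __name__ == "__main__":
    start = time.time()
    # one fixed pool of workers instead of 100000 short-lived processes
    pool = multiprocessing.Pool(processes=100)
    pool.map(noop, xrange(100000))
    pool.close()
    pool.join()
    print "Pool version took", time.time() - start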

Christian

--
http://mail.python.org/mailman/listinfo/python-list


Re: multiply each element of a list by a number

2008-12-29 Thread skip

Colin> That's interesting but that's not the 
Colin> way timeit is documented for Python 2.5:

Colin> timeit([number=1000000])

That's how it works when invoked as a main program using -m.
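
A rough in-code equivalent of one of the command lines quoted earlier,
using the same grow-the-loop-count idea that the -m interface applies
(the 0.2 second threshold and the example statement are illustrative):

import timeit

t = timeit.Timer('[3*x for x in a]', 'a = range(1000)')
number = 10
while t.timeit(number) < 0.2:    # keep growing the loop count
    number *= 10
best = min(t.repeat(repeat=3, number=number))
print "%d loops, best of 3: %.3g usec per loop" % (number, best / number * 1e6)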

Colin> In spite of the fact that your own data doesn't support the
Colin> assertion?

Colin> I would have expected numpy to be the clear winner for len >
Colin> 1,500.

It is.  In fact, it's the clear winner well below that.  Below I have
reorganized the timeit output so the units are the same for all runs
(*microseconds* per loop):

 length numpy   pure python
  1  11.7   0.698
 10  11.7   2.94
100  12.1  24.4
   1000  15   224
  1  41  2170
 10 301 22200

-- 
Skip Montanaro - s...@pobox.com - http://smontanaro.dyndns.org/
--
http://mail.python.org/mailman/listinfo/python-list


Re: I always wonder ...

2008-12-29 Thread Hyuga
On Dec 22, 1:51 pm, Grant Edwards  wrote:
> On 2008-12-22, s...@pobox.com  wrote:
>
> > ... shouldn't people who spend all their time trolling be
> > doing something else: studying, working, writing patches which
> > solve the problems they perceive to exist in the troll
> > subject?
>
> I think you misunderstand the point of trolling.  The author of
> a troll post doesn't actually care about the "problems" (and
> may not even genuinely perceive them as problems).
>
> > Is there some online troll game running where the players earn
> > points for generating responses to their posts?
>
> Yup. It's called Usenet.

a.k.a. multi-player Emacs in deathmatch mode on nightmare
difficulty. ;)
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread mk

Christian Heimes wrote:

mk wrote:

Am I doing smth wrong in code below? Or do I have to use
multiprocessing.Pool to get any decent results?


You have missed an important point. A well designed application does
neither create so many threads nor processes. 


Except I was not developing a "well designed application" but writing a 
test whose goal was measuring the thread / process creation cost.



The creation of a thread
or forking of a process is an expensive operation. 


Sure. The point is, how expensive? While still being relatively 
expensive, it turns out that in Python creating a thread is much, much 
cheaper than creating a process via multiprocessing on Linux, while this 
seems to be not necessarily true on Mac OS X.



You should use a pool
of threads or processes.


Probably true, except, again, that was not quite the point of this 
exercise..



The limiting factor is not the creation time but the communication and
synchronization overhead between multiple threads or processes.


Which I am probably going to test as well.


--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Roy Smith
In article ,
 Christian Heimes  wrote:

> You have missed an important point. A well designed application does
> neither create so many threads nor processes. The creation of a thread
> or forking of a process is an expensive operation. You should use a pool
> of threads or processes.

It's worth noting that forking a new process is usually a much more 
expensive operation than creating a thread.  Not that I would want to 
create 100,000 of either!

Not everybody realizes it, but threads eat up a fair chunk of memory (you 
get one stack per thread, which means you need to allocate a hunk of memory 
for each stack).  I did a quick look around; 256k seems like a common 
default stack size.  1 meg wouldn't be unheard of.
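
For what it's worth, CPython 2.5+ lets the stack size requested for new
threads be tuned before they are started, where the platform supports it;
a quick sketch with an arbitrary 256 KB request:

import threading

threading.stack_size(256 * 1024)   # applies to threads started after this call

t = threading.Thread(target=lambda: None)
t.start()
t.join()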
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Aaron Brady
On Dec 29, 8:52 am, mk  wrote:
> Hello everyone,
>
> After reading http://www.python.org/dev/peps/pep-0371/ I was under
> impression that performance of multiprocessing package is similar to
> that of thread / threading. However, to familiarize myself with both
> packages I wrote my own test of spawning and returning 100,000 empty
> threads or processes (while maintaining at most 100 processes / threads
> active at any one time), respectively.
>
> The results I got are very different from the benchmark quoted in PEP
> 371. On twin Xeon machine the threaded version executed in 5.54 secs,
> while multiprocessing version took over 222 secs to complete!
>
> Am I doing smth wrong in code below? Or do I have to use
> multiprocessing.Pool to get any decent results?

I'm running a 1.6 GHz.  I only ran 10000 empty threads and 10000 empty
processes.  The threads were the ones you wrote.  The processes were
empty executables written in a lower language, also run 100 at a time,
started with 'subprocess', not 'multiprocessing'.  The threads took
1.2 seconds.  The processes took 24 seconds.

The processes you wrote had only finished 3000 after several minutes.
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread anthony . tolle
On Dec 29, 1:01 am, scsoce  wrote:
> I have a function return a reference, and want to assign to the
> reference, simply like this:
>  >>def f(a)
>           return a
>      b = 0
>     * f( b ) = 1*
> but the last line will be refused as "can't assign to function call".
> In my thought , the assignment is very nature,  but  why the interpreter
> refused to do that ?
>
> thks

Probably the closest thing you are going to get in Python would be the
following:

>>> class C:
... pass
...
>>> def f(a):
... return a
...
>>> b = C()
>>> b.value = 0
>>> b.value
0
>>> f(b).value = 1
>>> b.value
1

But as others have pointed out, Python is not C/C++, and shouldn't be
treated as such.
--
http://mail.python.org/mailman/listinfo/python-list


Re: math module for Decimals

2008-12-29 Thread Raymond L. Buvel
Since the interest is more in extended precision than in decimal 
representation, there is another module that may be of interest.


http://calcrpnpy.sourceforge.net/clnum.html

It interfaces to the Class Library for Numbers (CLN) library to provide 
both arbitrary precision floating point and complex floating point 
numbers and the usual math functions.


While I am the author of this module, I agree with Mark that a module 
based on MPFR would be better since you have better control over 
precision and rounding.


I have looked at Sage (which uses MPFR) but it is a huge integrated 
package so you can't just import what you need into one of your usual 
Python scripts.


I wrote the clnum module mainly to support arbitrary precision in an RPN 
calculator available from the same SourceForge project.  However, it 
also works nicely as a stand-alone module.


At this time, there is no Windows installer available for Python 2.6 
because I don't use Windows at home and the person who normally builds 
the installer for me is no longer interested.  If someone wants to 
follow the published instructions and send me the resulting installer, I 
will put it up on SourceForge.


Ray

Jerry Carl wrote:

>> 1. mpmath?
> 2. sympy?
> 3. Sage?

Haven't tried those, i guess i have some studying to do.

> > x=Decimal.__mod__(x,Decimal('2')*pi())

> > Works fine for what i need, but i am sure it's not the right way to do
> > it.

> I don't know of any better way to deal with large arguments.
> The main problem is that the reduction step can introduce fairly
> large errors:  for example, if you're using a value of pi
> that's accurate to 10**-20, say, then reducing something of
> magnitude 10**5*pi will give a result with error of around
> 10**-15.  As far as I know, this problem is essentially
> unavoidable, and it's the reason why implementing sin for inputs
> like 10**9 isn't feasible.

Good point. No tool will work in all parts of the universe (which is
especially true for the universal ski wax).

Let me check the 3 modules you listed above!
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Aaron Brady
On Dec 29, 12:01 am, scsoce  wrote:
> I have a function return a reference, and want to assign to the
> reference, simply like this:
>  >>def f(a)
>           return a
>      b = 0
>     * f( b ) = 1*
> but the last line will be refused as "can't assign to function call".
> In my thought , the assignment is very nature,  but  why the interpreter
> refused to do that ?

'Why' is a long question.  The syntax has advantages and disadvantages
(pros and cons), which weigh different amounts in different
languages.  In Python, the cons weigh more.  In C, the pros weigh
more.  The short answer is, there is no such thing as assigning to
objects, only to variables.
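
The usual Python substitutes for "assign through a returned reference"
are to rebind the name to the returned value or to mutate a shared
container; a tiny sketch:

def f(a):
    return a

b = 0
b = f(b) + 1     # rebind the name to a new value

box = [0]        # or mutate a container that both sides share
f(box)[0] = 1
print b, box     # -> 1 [1]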

You are talking like it could save you ten lines of code or something.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows SSH (remote execution of commands) - Python Automation

2008-12-29 Thread Robin Becker

Narasimhan Raghu-RBQG84 wrote:

Hi experts,
 
I am looking for some information on how to automate remote login to a

UNIX machine using ssh from a windows XP box.
 
Possible way:
 
1. Use putty (or any other ssh client from windows XP). -- Can be

automated with command line parameters. The problem is that I am able to
login - Putty window opens up as well. But obviously I am unable to run
any commands in that. I need to find something like a handle to that
Putty window so that I can execute commands there.
 
Can anyone provide me some help in achieving this ?
 
 
Thanks,
 
--

Raghu



I have been using plink (companion to putty) without any problem eg


plink app1 ls -l

where app1 is defined by putty (as a connection) and ls -l etc etc are command 
args. I have modified the py package's SshGateway to use plink under windows and 
to allow very reasonable remote python behaviour.

--
Robin Becker

--
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Correctness and Validators

2008-12-29 Thread Aaron Gray
"Xah Lee"  wrote in message 
news:2fb289be-00b3-440a-b153-ca88f0ba1...@d42g2000prb.googlegroups.com...
>recently i wrote a blog essay about html correctness and html
>validators, with relations to the programing lang communities. I hope
>programing lang fans will take more consideration on the correctness
>of the doc they produces.
>
>HTML Correctness and Validators
>. http://xahlee.org/js/html_correctness.html

Do you enjoy spamming comp.lang.functional with OT cross-posts ?

Regards,

Aaron



--
http://mail.python.org/mailman/listinfo/python-list


Re: math module for Decimals

2008-12-29 Thread Steve Holden
Raymond L. Buvel wrote:
> Since the interest is more in extended precision than in decimal
> representation, there is another module that may be of interest.
> 
> http://calcrpnpy.sourceforge.net/clnum.html
> 
> It interfaces to the Class Library for Numbers (CLN) library to provide
> both arbitrary precision floating point and complex floating point
> numbers and the usual math functions.
> 
> While I am the author of this module, I agree with Mark that a module
> based on MPFR would be better since you have better control over
> precision and rounding.
> 
> I have looked at Sage (which uses MPFR) but it is a huge integrated
> package so you can't just import what you need into one of your usual
> Python scripts.
> 
> I wrote the clnum module mainly to support arbitrary precision in an RPN
> calculator available from the same SourceForge project.  However, it
> also works nicely as a stand-alone module.
> 
> At this time, there is no Windows installer available for Python 2.6
> because I don't use Windows at home and the person who normally builds
> the installer for me is no longer interested.  If someone wants to
> follow the published instructions and send me the resulting installer, I
> will put it up on SourceForge.
> 
I'm not sure why nobody has mentioned gmpy, except possibly because it
advertises its alpha status and doesn't have many active developers.

regards
 Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC  http://www.holdenweb.com/

--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Jarkko Torppa
On 2008-12-29, mk  wrote:
> janislaw wrote:
>
>> Ah, so there are 100 processes at time. 200secs still don't sound
>> strange.
>
> I ran the PEP 371 code on my system (Linux) on Python 2.6.1:
>
> Linux SLES (9.156.44.174) [15:18] root ~/tmp/src # ./run_benchmarks.py 
> empty_func.py
>
> Importing empty_func
> Starting tests ...
> non_threaded (1 iters)  0.000005 seconds
> threaded (1 threads)    0.000235 seconds
> processes (1 procs)     0.002607 seconds
>
> non_threaded (2 iters)  0.000006 seconds
> threaded (2 threads)    0.000461 seconds
> processes (2 procs)     0.004514 seconds
>
> non_threaded (4 iters)  0.000008 seconds
> threaded (4 threads)    0.000897 seconds
> processes (4 procs)     0.008557 seconds
>
> non_threaded (8 iters)  0.000010 seconds
> threaded (8 threads)    0.001821 seconds
> processes (8 procs)     0.016950 seconds
>
> This is very different from PEP 371. It appears that the PEP 371 code 
> was written on Mac OS X.

On the PEP371 it says "All benchmarks were run using the following:
Python 2.5.2 compiled on Gentoo Linux (kernel 2.6.18.6)"

On my iMac 2.3Ghz dualcore. python 2.6

iTaulu:src torppa$ python run_benchmarks.py empty_func.py 
Importing empty_func
Starting tests ...
non_threaded (1 iters)  0.000002 seconds
threaded (1 threads)    0.000227 seconds
processes (1 procs)     0.002367 seconds

non_threaded (2 iters)  0.000003 seconds
threaded (2 threads)    0.000406 seconds
processes (2 procs)     0.003465 seconds

non_threaded (4 iters)  0.000004 seconds
threaded (4 threads)    0.000786 seconds
processes (4 procs)     0.006430 seconds

non_threaded (8 iters)  0.000006 seconds
threaded (8 threads)    0.001618 seconds
processes (8 procs)     0.012841 seconds

With python2.5 and pyProcessing-0.52

iTaulu:src torppa$ python2.5 run_benchmarks.py empty_func.py
Importing empty_func
Starting tests ...
non_threaded (1 iters)  0.000003 seconds
threaded (1 threads)    0.000143 seconds
processes (1 procs)     0.002794 seconds

non_threaded (2 iters)  0.000004 seconds
threaded (2 threads)    0.000277 seconds
processes (2 procs)     0.004046 seconds

non_threaded (4 iters)  0.000005 seconds
threaded (4 threads)    0.000598 seconds
processes (4 procs)     0.007816 seconds

non_threaded (8 iters)  0.000008 seconds
threaded (8 threads)    0.001173 seconds
processes (8 procs)     0.015504 seconds

-- 
Jarkko Torppa, Elisa
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2to3 used in the Shootout

2008-12-29 Thread pruebauno
On Dec 23, 5:21 pm, Isaac Gouy  wrote:
> On Dec 23, 11:51 am, bearophileh...@lycos.com wrote:
>
> > They have translated the Python benchmarks of the Shootout site from
> > Py2 to Py3 using 2to3:
>
> > >http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=pyt...
>
> So please re-write those programs to remove problems created by
> automatic translation and better take advantage of Python 3
> functionality...
>
> http://shootout.alioth.debian.org/u32/faq.php#play
>
> > It shows some "performance bugs" of Python3 itself (especially
> > regarding the binary-trees benchmark, that was unexpected by me), and
> > two points where 2to3 may be improved, for example after the
> > translation this gives error:
> >          table=string.maketrans('ACBDGHK\nMNSRUTWVYacbdghkmnsrutwvy',
> >                                 'TGVHCDM\nKNSYAAWBRTGVHCDMKNSYAAWBR')):
>
> > Gives:
> > TypeError: maketrans arguments must be bytes objects
>
> > Bye,
> > bearophile
>
>
BTW I am not sure how to submit this or if this is actually valid to
do, but I have a faster version for the pidigits program that uses
basically the same algorithm but removes function calls and unused
terms of the formula.


import time

def pi_digits(n, width):
    out = []
    wrt = out.append
    aq = 1
    ar = 0
    at = 1
    k = 0
    f = 1
    g = 2
    i = 0
    while i < n:
        y = (aq*3+ar)//at
        while y != ((aq*4+ar)//at):
            k += 1
            f += 2
            g += 4
            ar = aq*g+ar*f
            aq = aq*k
            at = at*f
            y = (aq*3+ar)//at
        aq = 10*aq
        ar = 10*ar-10*y*at
        i += 1
        wrt(str(y))
        if not i%width:
            wrt('\t:%d\n'%i)
    wrt(' '*(width-i%width))
    wrt('\t:%d\n'%i)
    return ''.join(out)


def main():
    begin = time.time()
    n = 1000
    width = 70
    print pi_digits(n,width)
    print 'Total Time:', time.time()-begin

main()


--
http://mail.python.org/mailman/listinfo/python-list


Re: math module for Decimals

2008-12-29 Thread rlbuvel
On Dec 29, 10:22 am, Steve Holden  wrote:
> Raymond L. Buvel wrote:
> > Since the interest is more in extended precision than in decimal
> > representation, there is another module that may be of interest.
>
> >http://calcrpnpy.sourceforge.net/clnum.html
>
> > It interfaces to the Class Library for Numbers (CLN) library to provide
> > both arbitrary precision floating point and complex floating point
> > numbers and the usual math functions.
>
> > While I am the author of this module, I agree with Mark that a module
> > based on MPFR would be better since you have better control over
> > precision and rounding.
>
> > I have looked at Sage (which uses MPFR) but it is a huge integrated
> > package so you can't just import what you need into one of your usual
> > Python scripts.
>
> > I wrote the clnum module mainly to support arbitrary precision in an RPN
> > calculator available from the same SourceForge project.  However, it
> > also works nicely as a stand-alone module.
>
> > At this time, there is no Windows installer available for Python 2.6
> > because I don't use Windows at home and the person who normally builds
> > the installer for me is no longer interested.  If someone wants to
> > follow the published instructions and send me the resulting installer, I
> > will put it up on SourceForge.
>
> I'm not sure why nobody has mentioned gmpy, except possibly because it
> advertises its alpha status and doesn't have many active developers.
>
> regards
>  Steve
> --
> Steve Holden+1 571 484 6266   +1 800 494 3119
> Holden Web LLC  http://www.holdenweb.com/

The main reason is that it doesn't support the functions the OP wanted
(sin, cos, log, etc.).  This was one of the reasons I developed clnum
(in addition to needing complex numbers).

Ray
--
http://mail.python.org/mailman/listinfo/python-list


Python module import loop issue

2008-12-29 Thread Kottiyath
This might not be a pure Python question. Sorry about that. I couldn't
think of any other place to post it.
I am creating a _medium_complex_ application, and I am facing issues
with creating the proper module structure.
This is my first application and since this is a run-of-the-mill
application, I hope someone would be able to help me.

Base Module:
Contains definitions for Class A1, Class A2

Module 1.1:
Class B1 (refines A1)
Module 1.2:
Class C1 (refines A1)
Module 1.3:
Class D1 (refines A1)

Module 2.1:
Class B2 (refines A2):
Uses objects of B1, C1, D1
Module 2.2:
Class C2 (refines A2)
Module 2.3:
Class D2 (refines A2)

-->Python Entry Module : Module EN<--
Calls objects of B1, C1 and D1

Module EN and also Module 2 create and call the objects during run
time - and so calls cannot be hardcoded.
So, I want to use Factory methods to create everything.

Module Factory:
import 1.1,1.2,1.3,  2.1,2.2,2.3
A1Factory: {'B1Tag':1.1.B1, 'C1Tag':1.2.C1, 'D1Tag':1.3.D1'}
A2Factory: {'B2Tag':2.1.B2, 'C2Tag':2.2.C2, 'D2Tag':2.3.D2'}

But, since Module requires objects of B1, C1 etc, it has to import
Factory.
Module 2.1:
import Factory.

Now, there is a import loop. How can we avoid this loop?

The following ways I could think of
1. Automatic update of the factory inside the superclass whenever a subclass
is created. But, since there is no object created, I cannot think of
a way of doing this.
2. Update A1Factory in each module which implements refinements.
_Very_important_: how do I make sure each module is hit - so that the
factory is updated? The module EN will be looking only at the base module,
so the other modules are not hit. I will have to import every module in
EN - just to make sure that the A1Factory update code is hit. This
looks inelegant.

If somebody could help me out, I would be very thankful.
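
A minimal sketch of the first option above, assuming a metaclass on the
base classes so each subclass registers itself at class-creation time
(the 'tag' attribute and the class names are only illustrative, and each
refinement module still has to be imported once somewhere for its
registrations to run):

class Registering(type):
    def __init__(cls, name, bases, namespace):
        super(Registering, cls).__init__(name, bases, namespace)
        if bases and bases[0] is not object:   # don't register the abstract bases
            cls.registry[namespace.get('tag', name)] = cls

class A1(object):
    __metaclass__ = Registering
    registry = {}          # plays the role of A1Factory

class A2(object):
    __metaclass__ = Registering
    registry = {}          # plays the role of A2Factory

# in a refinement module
class B1(A1):
    tag = 'B1Tag'

print A1.registry          # now holds {'B1Tag': B1}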
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread mk

Jarkko Torppa wrote:


On the PEP371 it says "All benchmarks were run using the following:
Python 2.5.2 compiled on Gentoo Linux (kernel 2.6.18.6)"


Right... I overlooked that. My tests I quoted above were done on SLES 
10, kernel 2.6.5.



With python2.5 and pyProcessing-0.52

iTaulu:src torppa$ python2.5 run_benchmarks.py empty_func.py
Importing empty_func
Starting tests ...
non_threaded (1 iters)  0.000003 seconds
threaded (1 threads)    0.000143 seconds
processes (1 procs)     0.002794 seconds

non_threaded (2 iters)  0.000004 seconds
threaded (2 threads)    0.000277 seconds
processes (2 procs)     0.004046 seconds

non_threaded (4 iters)  0.000005 seconds
threaded (4 threads)    0.000598 seconds
processes (4 procs)     0.007816 seconds

non_threaded (8 iters)  0.000008 seconds
threaded (8 threads)    0.001173 seconds
processes (8 procs)     0.015504 seconds


There's smth wrong with numbers posted in PEP. This is what I got on 
4-socket Xeon (+ HT) with Python 2.6.1 on Debian (Etch), with kernel 
upgraded to 2.6.22.14:



non_threaded (1 iters)  0.000004 seconds
threaded (1 threads)    0.000159 seconds
processes (1 procs)     0.001067 seconds

non_threaded (2 iters)  0.000005 seconds
threaded (2 threads)    0.000301 seconds
processes (2 procs)     0.001754 seconds

non_threaded (4 iters)  0.000006 seconds
threaded (4 threads)    0.000581 seconds
processes (4 procs)     0.003906 seconds

non_threaded (8 iters)  0.000009 seconds
threaded (8 threads)    0.001148 seconds
processes (8 procs)     0.008178 seconds


--
http://mail.python.org/mailman/listinfo/python-list


Re: New Python 3.0 string formatting - really necessary?

2008-12-29 Thread walterbyrd
On Dec 21, 12:28 pm, Bruno Desthuilliers
 wrote:

> > I can see where the new formatting might be helpful in some cases.
> > But, I am not sure it's worth the cost.
>
> Err... _Which_ cost exactly ?

Loss of backward compatibility, mainly.
--
http://mail.python.org/mailman/listinfo/python-list


SQL, lite lite lite

2008-12-29 Thread Aaron Brady
Hi all,

About a year ago, I posted an idea I was having about thread
synchronization to the newsgroup.  However, I did not explain it well,
and I really erred on the side of brevity.  (After some finagling, Mr.
Bieber and I decided it wasn't exactly anything groundbreaking.)  But
I think the brevity cost me some readers, who might have had more
interest.  The affair was on the whole discouraging.  So, I'm going to
try another idea, and assume that readers have some time, and will
spend it on it.

I don't think relational data can be read and written very easily in
Python.  There are some options, such as 'sqllite3', but they are not
easy.  'sqllite3' statements are valid SQL expressions, which afford
the entire power of SQL, but contrary to its name, it is not that
'lite'.  To me, 'lite' is something you could learn (even make!) in an
afternoon, not a semester; something the size of an ActiveState
recipe, or a little bigger, maybe a file or two.  If you think SQL is
a breeze, you probably won't find my idea exciting.  I assume that the
basics of SQL are creating tables, selecting records, and updating
records.

My idea is to create a 'Relation' class.  The details are basically
open, such as whether to back it with 'sqllite3', 'shelve', 'mmap', or
just mapping and sequence objects; what the simplest syntax is that
can capture and permit all the basics, and how much and what else can
fit in at that level; how and whether it can include arbitrary Python
objects, and what constraints there are on them if not; how and
whether to permit transactions; and what the simplest and coolest
thing you can do with a little Python syntax is.

This is basically an invitation for everyone to brainstorm.  (No
hijackings, good humor & digression ok.)  Lastly, ...

**warning, spoiler!  here's what I thought of already.**

**repeat!  spoiler!  here's what I thought of already.**

#Just the select and update syntax:

>>> a= people._select( "firstname== 'Joe'" )
#select 'key' from 'people' where 'firstname'== 'joe'
>>> a
[Entry2864, Entry3076, Entry3172]
>>> entry1= a[ 0 ]
>>> entry1.phone
#select 'phone' from 'people' where 'key'==self.key
"555-2413"
>>> entry1.phone= "555-1234"
#update 'people' set 'phone'= '555-1234' where 'key'==self.key
>>> entry1.phone
"555-1234"

#Create table syntax (a-whole-nother beast in itself):

>>> classes= db.Relation( 'class_', 'person', Unique( 'class_', 'person' ) )
#create table 'classes' ( 'key', 'class_', 'person' ) unique
( 'class_', 'person' )
>>> classes._unique( 'class_', 'person' )
>>> classes.class_.noneok= False #'class_' cannot be null
>>> classes.person.noneok= False
>>> classes._insert( 'Physics', 'Dan' )
>>> classes._insert( 'Chem', 'Tim' )
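
For comparison, a very small pure-Python sketch of such a 'Relation'
class - no SQL backend, and keyword-equality selects instead of the
string predicates above - just to show how little machinery the basics
need:

class Entry(object):
    def __init__(self, relation, key):
        object.__setattr__(self, '_relation', relation)
        object.__setattr__(self, '_key', key)
    def __getattr__(self, name):
        return self._relation._rows[self._key][name]
    def __setattr__(self, name, value):
        self._relation._rows[self._key][name] = value

class Relation(object):
    def __init__(self, *fields):
        self._fields = fields
        self._rows = {}
        self._nextkey = 0
    def _insert(self, *values):
        key = self._nextkey
        self._nextkey += 1
        self._rows[key] = dict(zip(self._fields, values))
        return Entry(self, key)
    def _select(self, **criteria):
        return [Entry(self, k) for k, row in self._rows.items()
                if all(row[f] == v for f, v in criteria.items())]

people = Relation('firstname', 'phone')
people._insert('Joe', '555-2413')
entry1 = people._select(firstname='Joe')[0]
entry1.phone = '555-1234'
print people._select(firstname='Joe')[0].phone    # -> 555-1234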

Hoping-"good critic"-is-self-consistent-ly, hoping-to-hear-it's-too-
complicated-already-ly,
A. Brady
--
http://mail.python.org/mailman/listinfo/python-list


Re: SQL, lite lite lite

2008-12-29 Thread Philip Semanchuk


On Dec 29, 2008, at 1:06 PM, Aaron Brady wrote:


I don't think relational data can be read and written very easily in
Python.  There are some options, such as 'sqllite3', but they are not
easy.  'sqllite3' statements are valid SQL expressions, which afford
the entire power of SQL, but contrary to its name, it is not that
'lite'.  To me, 'lite' is something you could learn (even make!) in an
afternoon, not a semester; something the size of an ActiveState
recipe, or a little bigger, maybe a file or two.


Hi Aaron,
The "lite" part of SQLite refers to its implementation more than its  
feature set. In other words, SQLite doesn't promise to make SQL  
easier, it promises many of the features of a big, heavy relational  
database  (e.g. Postgres, MySQL, Oracle, etc.) but in a small, light  
package. I can see why you'd be disappointed if you were expecting the  
former. IMHO it does quite well at the latter.


After a look at the syntax you're proposing, I wonder how you feel it  
differs from ORMs like SQLAlchemy (for instance).



Cheers
Philip

--
http://mail.python.org/mailman/listinfo/python-list


Re: SQL, lite lite lite

2008-12-29 Thread Ned Deily
In article ,
 Philip Semanchuk  wrote:
> On Dec 29, 2008, at 1:06 PM, Aaron Brady wrote:
> > I don't think relational data can be read and written very easily in
> > Python.  There are some options, such as 'sqllite3', but they are not
> > easy.  'sqllite3' statements are valid SQL expressions, which afford
> > the entire power of SQL, but contrary to its name, it is not that
> > 'lite'.  To me, 'lite' is something you could learn (even make!) in an
> > afternoon, not a semester; something the size of an ActiveState
> > recipe, or a little bigger, maybe a file or two.
> [...]
> After a look at the syntax you're proposing, I wonder how you feel it  
> differs from ORMs like SQLAlchemy (for instance).

... and Elixir, a declarative layer on top of SQLAlchemy:

 

-- 
 Ned Deily,
 n...@acm.org

--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Scott David Daniels

John Machin wrote:

On Dec 29, 5:01 pm, scsoce  wrote:

I have a function return a reference,


Stop right there. You don't have (and can't have, in Python) a
function which returns a reference that acts like a pointer in C or C+
+. Please tell us what manual, tutorial, book, blog or Usenet posting
gave you that idea, and we'll get the SWAT team sent out straight
away.


Perhaps we can send the Pennsylvania State University out after
them.  I don't know why, but some fairly substantial people here are
scared of the PSU.

...

Oh, I have just been informed by my captors that they are the Python
Secre
--
http://mail.python.org/mailman/listinfo/python-list


Re: SQL, lite lite lite

2008-12-29 Thread Bruno Desthuilliers

Aaron Brady wrote:

Hi all,


(snip)
>

I don't think relational data can be read and written very easily in
Python. 


Did you try SQLAlchemy or Django's ORM ?


There are some options, such as 'sqllite3', but they are not
easy.  'sqllite3' statements are valid SQL expressions, which afford
the entire power of SQL, but contrary to its name, it is not that
'lite'.


sqlite is a Python-independent library providing a lightweight embedded 
SQL (i.e. no server) database system. It is "light" wrt Oracle, 
Postgres etc.



 To me, 'lite' is something you could learn (even make!) in an
afternoon, not a semester;


No one in their right mind would hope to learn relational theory and 
algebra in an afternoon, whatever the implementation.



something the size of an ActiveState
recipe, or a little bigger, maybe a file or two.  If you think SQL is
a breeze, you probably won't find my idea exciting.  I assume that the
basics of SQL are creating tables, selecting records, and updating
records.


There's much more than this.


My idea is to create a 'Relation' class.  The details are basically
open, such as whether to back it with 'sqllite3', 'shelve', 'mmap', or
just mapping and sequence objects; what the simplest syntax is that
can capture and permit all the basics, and how much and what else can
fit in at that level; how and whether it can include arbitrary Python
objects, and what constraints there are on them if not; how and
whether to permit transactions; and what the simplest and coolest
thing you can do with a little Python syntax is.

This is basically an invitation for everyone to brainstorm.  (No
hijackings, good humor & digression ok.)  Lastly, ...


#Just the select and update syntax:


a= people._select( "firstname== 'Joe'" )

#select 'key' from 'people' where 'firstname'== 'joe'

a

[Entry2864, Entry3076, Entry3172]

entry1= a[ 0 ]
entry1.phone

#select 'phone' from 'people' where 'key'==self.key
"555-2413"

entry1.phone= "555-1234"

#update 'people' set 'phone'= '555-1234' where 'key'==self.key

entry1.phone

"555-1234"

#Create table syntax (a-whole-nother beast in itself):


classes= db.Relation( 'class_', 'person', Unique( 'class_', 'person' ) )

#create table 'classes' ( 'key', 'class_', 'person' ) unique
( 'class_', 'person' )

classes._unique( 'class_', 'person' )
classes.class_.noneok= False #'class_' cannot be null
classes.person.noneok= False
classes._insert( 'Physics', 'Dan' )
classes._insert( 'Chem', 'Tim' )


From django's tutorial, part 1:

# polls/models.py
import datetime
from django.db import models

class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

    def __unicode__(self):
        return self.question

    def was_published_today(self):
        return self.pub_date.date() == datetime.date.today()

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()

    def __unicode__(self):
        return self.choice

# in the interactive shell
>>> from mysite.polls.models import Poll, Choice
>>> Poll.objects.all()
[]

# Create a new Poll.
>>> import datetime
>>> p = Poll(question="What's up?", pub_date=datetime.datetime.now())

# Save the object into the database. You have to call save() explicitly.
>>> p.save()

# Now it has an ID. Note that this might say "1L" instead of "1", depending
# on which database you're using. That's no biggie; it just means your
# database backend prefers to return integers as Python long integer
# objects.
>>> p.id
1

# Access database columns via Python attributes.
>>> p.question
"What's up?"
>>> p.pub_date
datetime.datetime(2007, 7, 15, 12, 00, 53)

# Change values by changing the attributes, then calling save().
>>> p.pub_date = datetime.datetime(2007, 4, 1, 0, 0)
>>> p.save()

# objects.all() displays all the polls in the database.
>>> Poll.objects.all()
[<Poll: What's up?>]
# Django provides a rich database lookup API that's entirely driven by
# keyword arguments.
>>> Poll.objects.filter(id=1)
[<Poll: What's up?>]
>>> Poll.objects.filter(question__startswith='What')
[<Poll: What's up?>]

# Get the poll whose year is 2007. Of course, if you're going through this
# tutorial in another year, change as appropriate.
>>> Poll.objects.get(pub_date__year=2007)
<Poll: What's up?>

>>> Poll.objects.get(id=2)
Traceback (most recent call last):
...
DoesNotExist: Poll matching query does not exist.

# Lookup by a primary key is the most common case, so Django provides a
# shortcut for primary-key exact lookups.
# The following is identical to Poll.objects.get(id=1).
>>> Poll.objects.get(pk=1)
<Poll: What's up?>

# Make sure our custom method worked.
>>> p = Poll.objects.get(pk=1)
>>> p.was_published_today()
False

# Give the Poll a couple of Choices. The create call constructs a new
# choice object, does the INSERT statement, adds the choice to the set
# of available choices and returns the new Choice object.
>>> p = Poll.objects.get(pk=1)
>>> p.choice_set.create(choice='Not much', votes=

flushing of print statements ending with comma

2008-12-29 Thread Grebekel
I have recently noticed that print statements ending with a comma are
not immediately flushed. This is evident when such statement is
executed before a very long operation (a big loop for instance).

Example:


print 'Take a walk, because this will take a while...',
i = 0
while i < 10**10:
    i += 1
print "we're done!"


Here the first string is not printed until the second print statement.
If the second print statement is removed then the string is not
printed until the end of the program. Apparently, python does not
flush the stdout buffer until it is ordered to do so either explicitly
or by a print statement not ending with a comma.

Using sys.stdout.flush() after the print fixes this issue, but doing so
each time seems cumbersome and somewhat counterintuitive.
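
Two common workarounds, sketched for illustration: run the interpreter
unbuffered ("python -u script.py"), or wrap the write-and-flush in a
small helper:

import sys

def say(msg):
    sys.stdout.write(msg)
    sys.stdout.flush()

say('Take a walk, because this will take a while... ')
for i in xrange(10**7):     # shorter stand-in for the long loop above
    pass
print "we're done!"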

Is there some reasoning behind this behavior or is it a bug?

Python version 2.5.1
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to display Chinese in a list retrieved from database via python

2008-12-29 Thread Mark Tolonen


"zxo102"  wrote in message 
news:7e38e76a-d5ee-41d9-9ed5-73a2e2993...@w1g2000prm.googlegroups.com...

On Dec 29, 5:06 pm, "Mark Tolonen"  wrote:

"zxo102"  wrote in message

news:2560a6e0-c103-46d2-aa5a-8604de4d1...@b38g2000prf.googlegroups.com...



[snip]

That said, learn to use Unicode strings by trying the following program, 
but

set the first line to the encoding *your editor* saves files in.  You can
use the actual Chinese characters instead of escape codes this way.  The
encoding used for the source code and the encoding used for the html file
don't have to match, but the charset declared in the file and the 
encoding

used to write the file *do* have to match.

# coding: utf8

import codecs

mydict = {}
mydict['JUNK'] = [u'中文',u'中文',u'中文']

def conv_list2str(value):
    return u'["' + u'","'.join(s for s in value) + u'"]'

f_str = u'''<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=gb2312">
<title>test</title>
</head>
<body>
<script type="text/javascript">
var test = %s
alert(test[0])
alert(test[1])
alert(test[2])
</script>
</body>
</html>
'''

s = conv_list2str(mydict['JUNK'])
f=codecs.open('test04.html','wt',encoding='gb2312')
f.write(f_str % s)
f.close()

-Mark

P.S.  Python 3.0 makes this easier for what you want to do, because the
representation of a dictionary changes.  You'll be able to skip the
conv_list2str() function and all strings are Unicode by default.


Thanks for your comments, Mark. I understand it now. The list (escape
codes): ['\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4','\xd6\xd0\xce\xc4'] is
from a postgresql database with a "select" statement. I will check the
postgresql database configuration and see if it is possible to return
['中文','中文','中文'] directly with a "select" statement.

ouyang


The trick with working with Unicode is to convert anything read into the 
program (from a file, database, etc.) to Unicode characters, manipulate it, 
then convert it back to a specific encoding when writing it back.  So if 
postgresql is returning gb2312 data, use:


data.decode('gb2312') to get the Unicode equivalent:


>>> '\xd6\xd0\xce\xc4'.decode('gb2312')
u'\u4e2d\u6587'
>>> print '\xd6\xd0\xce\xc4'.decode('gb2312')
中文

Google for some Python Unicode tutorials.

-Mark




--
http://mail.python.org/mailman/listinfo/python-list


Re: Get a list of functions in a file

2008-12-29 Thread Terry Reedy

member Basu wrote:
I'm putting some utility functions in a file and then building a simple 
shell interface to them. Is their some way I can automatically get a 
list of all the functions in the file? I could wrap them in a class and 
then use attributes, but I'd rather leave them as simple functions.


Let's assume that either
1) You only define functions (bind function names) in the module, or
2) You start any other top-level names with '_' so that they do not get 
imported.


import utilfuncs
funcs = vars(utilfuncs) # dict of name:func pairs
names = funcs.keys()

# display names and ask user to select 'inputname'
# then, assuming no args

output = funcs[inputname]()
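
An alternative sketch that drops the leading-underscore convention by
keeping only the functions actually defined in the module itself
(utilfuncs being the hypothetical module from above):

import inspect
import utilfuncs

funcs = dict((name, obj)
             for name, obj in inspect.getmembers(utilfuncs, inspect.isfunction)
             if obj.__module__ == utilfuncs.__name__)
print sorted(funcs.keys())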

tjr


--
http://mail.python.org/mailman/listinfo/python-list


Re: SQL, lite lite lite

2008-12-29 Thread Pierre Quentel
On 29 Dec, 19:06, Aaron Brady  wrote:
> Hi all,
>
> About a year ago, I posted an idea I was having about thread
> synchronization to the newsgroup.  However, I did not explain it well,
> and I really erred on the side of brevity.  (After some finagling, Mr.
> Bieber and I decided it wasn't exactly anything groundbreaking.)  But
> I think the brevity cost me some readers, who might have had more
> interest.  The affair was on the whole discouraging.  So, I'm going to
> try another idea, and assume that readers have some time, and will
> spend it on it.
>
> I don't think relational data can be read and written very easily in
> Python.  There are some options, such as 'sqllite3', but they are not
> easy.  'sqllite3' statements are valid SQL expressions, which afford
> the entire power of SQL, but contrary to its name, it is not that
> 'lite'.  To me, 'lite' is something you could learn (even make!) in an
> afternoon, not a semester; something the size of an ActiveState
> recipe, or a little bigger, maybe a file or two.  If you think SQL is
> a breeze, you probably won't find my idea exciting.  I assume that the
> basics of SQL are creating tables, selecting records, and updating
> records.
>
> My idea is to create a 'Relation' class.  The details are basically
> open, such as whether to back it with 'sqllite3', 'shelve', 'mmap', or
> just mapping and sequence objects; what the simplest syntax is that
> can capture and permit all the basics, and how much and what else can
> fit in at that level; how and whether it can include arbitrary Python
> objects, and what constraints there are on them if not; how and
> whether to permit transactions; and what the simplest and coolest
> thing you can do with a little Python syntax is.
>
> This is basically an invitation for everyone to brainstorm.  (No
> hijackings, good humor & digression ok.)  Lastly, ...
>
> **warning, spoiler!  here's what I thought of already.**
>
> **repeat!  spoiler!  here's what I thought of already.**
>
> #Just the select and update syntax:
>
> >>> a= people._select( "firstname== 'Joe'" )
>
> #select 'key' from 'people' where 'firstname'== 'joe'>>> a
>
> [Entry2864, Entry3076, Entry3172]>>> entry1= a[ 0 ]
> >>> entry1.phone
>
> #select 'phone' from 'people' where 'key'==self.key
> "555-2413">>> entry1.phone= "555-1234"
>
> #update 'people' set 'phone'= '555-1234' where 'key'==self.key>>> entry1.phone
>
> "555-1234"
>
> #Create table syntax (a-whole-nother beast in itself):
>
> >>> classes= db.Relation( 'class_', 'person', Unique( 'class_', 'person' ) )
>
> #create table 'classes' ( 'key', 'class_', 'person' ) unique
> ( 'class_', 'person' )
>
> >>> classes._unique( 'class_', 'person' )
> >>> classes.class_.noneok= False #'class_' cannot be null
> >>> classes.person.noneok= False
> >>> classes._insert( 'Physics', 'Dan' )
> >>> classes._insert( 'Chem', 'Tim' )
>
> Hoping-"good critic"-is-self-consistent-ly, hoping-to-hear-it's-too-
> complicated-already-ly,
> A. Brady

Hi,

PyDbLite (http://pydblite.sourceforge.net/) is not far from what you
describe. The basic version stores data in cPickle format, and there
are interfaces to use the same Pythonic syntax with SQLite and MySQL
backends

Regards,
Pierre
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Aaron Brady
On Dec 29, 1:05 pm, Scott David Daniels  wrote:
> John Machin wrote:
> > On Dec 29, 5:01 pm, scsoce  wrote:
> >> I have a function return a reference,
>
> > Stop right there. You don't have (and can't have, in Python) a
> > function which returns a reference that acts like a pointer in C or C+
> > +. Please tell us what manual, tutorial, book, blog or Usenet posting
> > gave you that idea, and we'll get the SWAT team sent out straight
> > away.
>
> Perhaps we can send the Pennsylvania State University out after
> them.  I don't know why, but some fairly substantial people here are
> scared of the PSU.
>
> ...
>
> Oh, I have just been informed by my captors that they are the Python
> Secre

--Why would he take the time to carve "ARGHHH!"?
--Maybe he was dictating.
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Terry Reedy

John Machin wrote:

On Dec 29, 5:01 pm, scsoce  wrote:

I have a function return a reference,


Stop right there. You don't have (and can't have, in Python) a
function which returns a reference that acts like a pointer in C or C+
+. Please tell us what manual, tutorial, book, blog or Usenet posting
gave you that idea,


Perhaps the ones claiming that Python is 'call by reference' and hence, 
by implication, at least, 'return by reference'.



and we'll get the SWAT team sent out straight away.


I and others have posted many times that such a viewpoint leads to 
confusion, such as in this post.


tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: math module for Decimals

2008-12-29 Thread Tino Wildenhain

jerry.carl...@gmail.com wrote:
...

It's really just the goniometric functions that I am missing most at
the moment, so maybe I can figure it out with help of what you said
plus the already existing imperfect modules. Meantime maybe this
discussion will caught Guido's eye... ;-) And btw I do expect that
Python becomes better than Mathematica one day because it's free and
open :-) Maybe when Wolfram retires ;-) Thanks again!


I agree having full support for all math.* functions for all builtin
types would be nice.

However if you are more looking for replacement of mathematica and
friends with python you might check scipy/numpy.
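
In the meantime, the missing trig functions can be built on top of the
context precision along the lines of the recipes in the decimal
documentation; a sin() in that style:

from decimal import Decimal, getcontext

def dec_sin(x):
    # Taylor series, summed at two extra digits of precision until it
    # stops changing, then rounded back via unary plus.
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

print dec_sin(Decimal('0.5'))    # roughly 0.4794255386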

Regards
Tino


--
http://mail.python.org/mailman/listinfo/python-list


Symposium “Image Processing and Analysis” within the ICCES'09 Thailand – Last Announce & Call for Papers

2008-12-29 Thread tava...@fe.up.pt
(Our apologies for cross-posting.
We appreciate if you kindly distribute this information by your co-
workers and colleagues.)

***

Symposium “Image Processing and Analysis”
Int. Conf. on Computational & Experimental Engineering and Sciences
2009 (ICCES'09)
Phuket, Thailand, 8-13 April 2009
http://icces.org/cgi-bin/ices09/pages/index

***

Dear Colleague,

Within the International Conference on Computational & Experimental
Engineering and Sciences 2009 (ICCES'09), to be held in Phuket,
Thailand, in 8-13 April 2009, we are organizing the Symposium “Image
Processing and Analysis”.

Examples of some topics that will be considered in that symposium are:
Image restoring, Description, Compression, Segmentation and
Description; Objects tracking, Matching, Reconstruction and
Registration; Visualization Enhance; Simulation and Animation;
Software Development for Image Processing and Analysis; Grid Computing
in Image Processing and Analysis; Applications of Image Processing and
Analysis.

Due to your research activities in those fields, we would like to
invite you to submit your work and participate in the Symposium “Image
Processing and Analysis”.


Important dates and Instructions:

- 1 Jan 2009: Deadline for abstract submission;
- 10 Jan 2009: End of abstract selection.

For instructions and submission, please access to the conference
website at: http://icces.org/cgi-bin/ices09/pages/index. Instructions
for authors are available at: http://icces.org/cgi-bin/ices09/pages/guide.
Please note, when submitting your work you should choose the Symposium
“Image Processing and Analysis”.
If you intend to submit your work please notify as soon as possible
the main organizer of your intention (tava...@fe.up.pt);

The organizers are preparing a special issue of the International
Journal Computer Modeling in Engineering & Sciences (CMES), ISSN:
1526-1492, dedicated to the Symposium “Image Processing and Analysis”
with extended papers of the works presented in the ICCES'09.


With kind regards,
The Organizers,

João Manuel R. S. Tavares (tava...@fe.up.pt)
Faculty of Engineering of University of Porto, Porto, Portugal
Yongjie (Jessica) Zhan (jessi...@andrew.cmu.edu)
Carnegie Mellon University, Pittsburgh, USA
Maria João M. Vasconcelos (maria.vasconce...@fe.up.pt)
Faculty of Engineering of University of Porto, Porto, Portugal
--
http://mail.python.org/mailman/listinfo/python-list


tkInter constraining the width only

2008-12-29 Thread akineko
Hello everyone,

I'm writing a Tkinter program and trying to constrain the window
size.
I want to set the minimum of the width and the height and the maximum
of the width, but not the height.
I can use minsize(width=min_width, height=min_height) from Wm method
to limit the minimum sizes.
Similarly I thought I could use maxsize(width=max_width, height=None)
to limit the maximum sizes.

Unfortunately, it didn't work.
maxsize() complained if only width is given.

Is there any easy way to constrain a Tkinter window so that the sizes
are set except that it can grow vertically?

Any suggestions are greatly appreciated.

Best regards,
Aki Niimura
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Hrvoje Niksic
Roy Smith  writes:

> In article ,
>  Christian Heimes  wrote:
>
>> You have missed an important point. A well designed application does
>> neither create so many threads nor processes. The creation of a thread
>> or forking of a process is an expensive operation. You should use a pool
>> of threads or processes.
>
> It's worth noting that forking a new process is usually a much more 
> expensive operation than creating a thread.

If by "forking" you mean an actual fork() call, as opposed to invoking
a different executable, the difference is not necessarily that great.
Modern Unix systems tend to implement a 1:1 mapping between threads
and kernel processes, so creating a thread and forking a process
require similar amount of work.

On my system, as measured by timeit, spawning and joining a thread
takes 111 usecs, while forking and waiting for a process takes 260.
Slower, but not catastrophically so.
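
A rough sketch of that kind of measurement (Unix only; the 1000 runs and
the empty workloads are arbitrary, and the numbers will vary by system):

import timeit

thread_stmt = """
t = threading.Thread(target=lambda: None)
t.start()
t.join()
"""

fork_stmt = """
pid = os.fork()
if pid == 0:
    os._exit(0)
os.waitpid(pid, 0)
"""

for label, setup, stmt in [("thread", "import threading", thread_stmt),
                           ("fork", "import os", fork_stmt)]:
    usec = timeit.Timer(stmt, setup).timeit(number=1000) / 1000 * 1e6
    print "%s: %.0f usec per spawn" % (label, usec)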

> Not that I would want to create 100,000 of either!

Agreed.

> Not everybody realizes it, but threads eat up a fair chunk of memory
> (you get one stack per thread, which means you need to allocate a
> hunk of memory for each stack).  I did a quick look around; 256k
> seems like a common default stack size.  1 meg wouldn't be unheard
> of.

Note that this memory is virtual memory, so it doesn't use up the
physical RAM until actually used.  I've seen systems running legacy
Java applications that create thousands of threads where *virtual*
memory was the bottleneck.
--
http://mail.python.org/mailman/listinfo/python-list


Re: tkInter constraining the width only

2008-12-29 Thread Roger
On Dec 29, 3:23 pm, akineko  wrote:
> Hello everyone,
>
> I'm writing a Tkinter program and trying to constraint the window
> size.
> I want to set the minimum of the width and the height and the maximum
> of the width, but not the height.

You want to set the max height to 0.  I know this is counter-
intuitive. Both values must be a number or None, not mixed.  So to do
what you want to do it would be this:

some_window.wm_minsize(width=min_width, height=min_height)
some_window.wm_maxsize(width=max_width, height=0)

Note that these numbers are in pixels if you're using .pack() and
according to the docs it's in grid coordinates if you're using .grid()
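
A quick sketch of those calls (the window sizes are arbitrary, and the
height=0 behaviour is as described in the post above):

import Tkinter

root = Tkinter.Tk()
root.wm_minsize(width=300, height=200)
root.wm_maxsize(width=300, height=0)   # cap the width, leave the height free
root.mainloop()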

Good luck!
Roger.
--
http://mail.python.org/mailman/listinfo/python-list


Re: New Python 3.0 string formatting - really necessary?

2008-12-29 Thread Luis M . González
On 19 dic, 13:01, walterbyrd  wrote:
> I have not worked with Python enough to really know. But, it seems to
> me that more I look at python 3.0, the more I wonder if it isn't a
> step backwards.
>
> To me, it seems that this:
>
> print "%s=%d" % ('this',99)
>
> Is much easier, and faster, to type, and is also easier to read and
> understand. It also allows people to leverage their knowledge of C.
>
> This (if it's right) is much longer, and requires more special
> characters.
>
> print( "{0}={1}".format('this',99))
>
> Maybe it's worth all the extra trouble, and breaking backward
> compatibilty, and all. But, I never had the idea that the old way was
> all that big a problem. Of course, I could be wrong. Was the old way
> all that big of a problem?

Well, I was playing with python 3k a little bit and, as usual, after a
few minutes everything felt natural.
The new string formatting is perhaps a little more typing, but it is
much more clear and readable.
I know where it came from. Long ago, Guido took a look at Boo, which
is a Python-like .NET language, and he posted a comment saying how
much he liked the string formatting, which is identical to the new one
in Python.

I still can't get used to adding the parentheses to "print", and this is
the only thing I don't like, but I'm sure there's a good reason for
this change...
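
For comparison, a minimal example of the two styles side by side (the
variable names are just for illustration):

name, score = 'this', 99

print('%s=%d' % (name, score))          # old %-style       -> this=99
print('{0}={1}'.format(name, score))    # str.format (2.6+/3.0) -> this=99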

Luis
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python module import loop issue

2008-12-29 Thread Carl Banks
On Dec 29, 10:51 am, Kottiyath  wrote:
> This might not be  pure python question. Sorry about that. I couldnt
> think of any other place to post the same.
> I am creating a _medium_complex_ application, and I am facing issues
> with creating the proper module structure.
> This is my first application and since this is a run-of-the-mill
> application, I hope someone would be able to help me.
>
> Base Module:
> Contains definitions for Class A1, Class A2
>
> Module 1.1:
> Class B1 (refines A1)
> Module 1.2:
> Class C1 (refines A1)
> Module 1.3:
> Class D1 (refines A1)
>
> Module 2.1:
> Class B2 (refines A2):
>         Uses objects of B1, C1, D1
> Module 2.2:
> Class C2 (refines A2)
> Module 2.3:
> Class D2 (refines A2)
>
> -->Python Entry Module : Module EN<--
> Calls objects of B1, C1 and D1
>
> Module EN and also Module 2 creates and calls the objects during run
> time - and so calls cannot be hardcoded.
> So, I want to use Factory methods to create everything.
>
> Module Factory:
> import 1.1,1.2,1.3,  2.1,2.2,2.3
> A1Factory: {'B1Tag':1.1.B1, 'C1Tag':1.2.C1, 'D1Tag':1.3.D1'}
> A2Factory: {'B2Tag':2.1.B2, 'C2Tag':2.2.C2, 'D2Tag':2.3.D2'}
>
> But, since Module requires objects of B1, C1 etc, it has to import
> Factory.
> Module 2.1:
> import Factory.
>
> Now, there is a import loop. How can we avoid this loop?
>
> The following ways I could think of
> 1. Automatic updation of factory inside superclass whenever a subclass
> is created. But, since there is no object created,  I cannot think of
> a way of doing this.

I'm going to suggest three ways: a straightforward, good-enough way; a
powerful, intelligent, badass way; and a sneaky way.


1. The straightforward, good-enough way

Define functions in Factory.py called register_A1_subclass and
register_A2_subclass, then call them whenever you create a new
subclass.

Factory.py
-
A1Factory = {}
A2Factory = {}

def register_A1_subclass(tag,cls):
    A1Factory[tag] = cls

def register_A2_subclass(tag,cls):
    A2Factory[tag] = cls
-

package1/module1.py:
-
import Factory
from Base import A1      # the base module that defines A1

class B1(A1):
    pass                 # define the rest of class B1 here

Factory.register_A1_subclass("B1Tag",B1)
-

So after you define B1, call Factory.register_A1_subclass to add it to
the A1Factory.  Factory.py no longer has to import package1.module2,
so the circular import is broken, at the paltry price of having to add
a boilerplate function call after every class definition.
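
The entry module can then look classes up by tag and instantiate them
without importing the concrete classes directly.  A sketch, using the
module and tag names from the examples above:

# the entry module (the OP's "Module EN")
import Factory
import package1.module1      # importing it registers B1 as a side effect

cls = Factory.A1Factory["B1Tag"]     # look the class up by its tag
obj = cls()                          # and instantiate it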


2. The powerful, intelligent, badass way

Metaclasses.  I would guess you do not want to do this, and I wouldn't
recommend it if you haven't studied up on how metaclasses work, but
it's a textbook example of their usefulness.  If you expect to use
factory functions like this a lot, it might be worth your while to
learn them.

Anyway, here's a simple example to illustrate.  It doesn't meet your
requirements since all classes use the same factory; updating it to
your needs is left as an exercise.

Factory.py:
-
Factory = {}

class FactoryMetaclass(type):
    def __new__(metaclass,name,bases,dct):
        cls = type.__new__(metaclass,name,bases,dct)
        tag = dct.get("tag")
        if tag is not None:
            Factory[tag] = cls
        return cls
--

Base.py:
--
import Factory

class A2(object):
    __metaclass__ = Factory.FactoryMetaclass
    # define rest of A2
--

package1/module2.py:
--
from Base import A2      # the base module that defines A2

class B2(A2):
    tag = "B2Tag"
    # define rest of B2
--

When the class B2 statement is executed, Python notes that the
metaclass for A2 was set to FactoryMetaclass (subclasses inherit the
metaclass), so it calls FactoryMetaclass's __new__ method to create
the class object.  The __new__ method checks to see if the class
defines a "tag" attribute, and if so, adds the class to the Factory
with that tag.  Voila.

(As a footnote, I will mention that I've created a library, Dice3DS,
that uses metaclass programming in exactly this way.)


3. The sneaky way

New-style classes maintain a list of all their subclasses, which you
can retrieve by calling the __subclasses__ class method.  You could
use this to define a factory function that searches through this list
for the appropriate subclass.

Factory.py:
-
from Base import A1      # the base module that defines A1

def _create_subclass(basecls,name):
    # return the subclass of basecls whose name matches, or None
    for cls in basecls.__subclasses__():
        if cls.__name__ == name:
            return cls
        cls2 = _create_subclass(cls,name)
        if cls2 is not None:
            return cls2
    return None

def create_A1_subclass(name):
    cls = _create_subclass(A1,name)
    if cls is None:
        raise ValueError("no subclass of A1 by that name")
    return cls()
-

So here you search through A1's subclasses for a class matching the
class's name.  Note that we do it recursively, in case B1 (for
example) has subclasses of its own that should also be searched.
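
Usage would then look something like this (assuming the modules that
define the concrete subclasses have already been imported, so that the
subclasses actually exist):

import package1.module1      # defines B1, a subclass of A1
import Factory

obj = Factory.create_A1_subclass("B1")   # returns a new B1 instance

--
http://mail.python.org/mailman/listinfo/python-list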

Re: SQL, lite lite lite

2008-12-29 Thread Roger Binns
Aaron Brady wrote:
> Python.  There are some options, such as 'sqllite3', but they are not
> easy.  'sqllite3' statements are valid SQL expressions, which afford
> the entire power of SQL, but contrary to its name, it is not that
> 'lite'. 

Have you compared the compiled size of SQLite against other things?  For
example on my machine the size of the MySQL client library, whose sole
purpose is to transport queries and results across the network, is the
same size as the entirety of SQLite!  You can prune SQLite back even
further as documented in http://www.sqlite.org/compile.html

It is even possible to omit the SQL front end.  Queries are stored
already processed in the database.  This functionality is used by mp3
player manufacturers and in similar constrained embedded environments.

> To me, 'lite' is something you could learn (even make!) in an
> afternoon, 

If you just want to treat the database as a glorified spreadsheet then
SQL is "lite", although perhaps a little more verbose than a dbm-style interface.

> If you think SQL is
> a breeze, you probably won't find my idea exciting.  I assume that the
> basics of SQL are creating tables, selecting records, and updating
> records.

The basics of SQL are about expressing the relational model
http://en.wikipedia.org/wiki/Relational_model which has stood the test
of time.  (That doesn't mean it is superior just that it is good enough
like the "qwerty" keyboard layout.)  There have been attempts at
alternatives like http://en.wikipedia.org/wiki/The_Third_Manifesto but
that doesn't seem to have caught on.

It seems your basic complaint is the complexity of doing database stuff.
 Ultimately this will be the case if some data is related to other bits
of data.  As other posters have pointed out, there are various ORM type
wrappers for Python that try to wrap this up in syntactic sugar :-)

For something completely different have a look at CouchDB
http://couchdb.apache.org/ which operates on "documents" (basically
something with an id and an arbitrary updateable list of properties).
It does take a bit to get your head wrapped around it - try this posting
for an example http://www.cmlenz.net/archives/2007/10/couchdb-joins

Roger

--
http://mail.python.org/mailman/listinfo/python-list


Re: New Python 3.0 string formatting - really necessary?

2008-12-29 Thread ajaksu
On Dec 29, 7:37 pm, Luis M. González  wrote:
> I still can't get used to add the parenthesis to "print", and this is
> the only thing I don't like, but I'm sure there's a good reason for
> this change...

I should know better than to post such an awful hack:

__past__.py:

from sys import excepthook as sys_excepthook
from sys import modules
---
def printhook(exctype, value, traceback):
    skip = True
    if isinstance(value, SyntaxError):
        if 'print ' in value.text:
            printable = value.text.replace('print ', '')[:-1]
            skip = False
            toprint = 'print(' + printable + ')'
            print('Trying to convert your mess into', toprint)
            try:
                exec(toprint)
            except NameError as ne:
                name = str(ne).replace("name '", '').replace("' is not defined", '')
                try:
                    var = str(getattr(modules['__main__'], name))
                    exec('print(' + printable.replace(name, var) + ')')
                except AttributeError as ae:
                    sys_excepthook(NameError, ne, traceback)
                except SyntaxError as se:
                    print('NameError workaround replaced something bad')
                    skip = True
                except NameError as ne2:
                    print('Too many names to map to objects :P')
                    skip = True
                except:
                    print('Sorry, something went wrong and I am too lazy to find out what')
                    skip = True
            except:
                raise
                skip = True
    if skip:
        sys_excepthook(exctype, value, traceback)
---

Then, as I'd check some stuff in parallel on 2.5 and 3.0, I do this on
the 3.0 prompt:
---
import sys
exchook = sys.excepthook
from __past__ import printhook
sys.excepthook = printhook
---

As soon as I wrote that mess^H^H^H^H helper, remembering to use
print() became easier (I think the trauma helped) and I haven't
imported much from __past__ since.

Should I hit 'send'?

Daniel
--
http://mail.python.org/mailman/listinfo/python-list


Re: flushing of print statements ending with comma

2008-12-29 Thread Cameron Simpson
On 29Dec2008 11:11, Grebekel  wrote:
| I have recently noticed that print statements ending with a comma are
| not immediately flushed.

I will warn you that neither are the more common uncommaed print
statements, except on a terminal.

| [...] Example:
| 
| print 'Take a walk, because this will take a while...',
| i = 0
| while i < 10**10:
| i += 1
| print "we're done!"
| 
| 
| Here the first string is not printed until the second print statement.
[...]
| Using sys.std.flush after the print fixes this issue, but doing so
| each time seems cumbersome and somewhat counterintuitive.
| Is there some reasoning behind this behavior or is it a bug?

It's correct behaviour. The python print etc is layered on the C library
stdio. A stdio stream can be buffered in three standard ways: unbuffered,
line buffered and block buffered.

On UNIX, on a terminal, stdout is normally line buffered: output is
flushed when a newline is encoutered in the data, and this is pleasing
to humans. Conversely, if stdout is _not_ attached to a terminal it
will be block buffered by default; output is only flushed when the
buffer is filled. This is much more _efficient_ in terms of I/O and
program activity.
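
You can see which case applies from Python itself; a quick check (line
buffering on terminals is the usual default, though not guaranteed
everywhere):

import sys

if sys.stdout.isatty():
    print "stdout is a terminal: output is (roughly) line buffered"
else:
    print "stdout is redirected: output is block buffered by default"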

By contrast, again by default, stderr is normally unbuffered. Being
reserved for error messages, immediate output (before your program
explodes:-) is considered more important than system efficiency.

So you should sys.stdout.flush() if you want data output right now.
For many purposes it is better to let the default behaviour obtain.

Also, I suggest that progress reporting such as yours be written to
stderr anyway. It will appear in a timely fashion, and will also thus
not pollute the output stream. Consider:

  your-program >datafile
or
  your-program | process the output data...

Sending your progress reports to stdout puts junk in the data
stream.
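
A small sketch of that arrangement, based on the original example
(Python 2 syntax; the loop is just a stand-in for the real work):

import sys

print >>sys.stderr, 'Take a walk, because this will take a while...',
sys.stderr.flush()               # usually unbuffered anyway, but be explicit
for i in xrange(10 ** 7):
    pass                         # the real work would go here
print >>sys.stderr, "we're done!"

print 'the real results'         # only actual data goes to stdout
sys.stdout.flush()               # flush explicitly if it must appear right now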

Cheers,
-- 
Cameron Simpson  DoD#743
http://www.cskk.ezoshosting.com/cs/

Gentle suggestions being those which are written on rocks of less than 5lbs.
- Tracy Nelson in comp.lang.c
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 12:52 AM, mk  wrote:
> Hello everyone,
>
> After reading http://www.python.org/dev/peps/pep-0371/ I was under
> impression that performance of multiprocessing package is similar to that of
> thread / threading. However, to familiarize myself with both packages I
> wrote my own test of spawning and returning 100,000 empty threads or
> processes (while maintaining at most 100 processes / threads active at any
> one time), respectively.
>
> The results I got are very different from the benchmark quoted in PEP 371.
> On twin Xeon machine the threaded version executed in 5.54 secs, while
> multiprocessing version took over 222 secs to complete!
>
> Am I doing smth wrong in code below? Or do I have to use
> multiprocessing.Pool to get any decent results?

The overhead in starting OS level processes
is quite high. This is why event-driven, single
process servers can perform far better than
ones that fork (spawn multiple processes)
per request.

As others have mentioned, it's not surprising
that spawning even 100 processes took some
time.

Bottom line: multiprocessing should not be used this way.
(nor should threading).

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Miles
On Mon, Dec 29, 2008 at 1:01 AM, scsoce  wrote:
> I have a function return a reference, and want to assign to the reference,
> simply like this:
>>>def f(a)
> return a
>b = 0
>   * f( b ) = 1*
> but the last line will be refused as "can't assign to function call".
> In my thought , the assignment is very nature,  but  why the interpreter
> refused to do that ?

Here's some links to help you better understand Python objects:

http://effbot.org/zone/python-objects.htm
http://effbot.org/zone/call-by-object.htm

The second one is a bit denser reading, but it's important to learn
that Python's approach to objects and "variables" is fundamentally
different from that of C/C++.  In the example below, there's no way in
the Python language* that bar() can change the value of b, since
strings and numbers are immutable.

def foo():
    b = 0
    bar(b)
    print b  # will always be 0
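
By contrast, a small added example: with a mutable object such as a
list, the called function can change what the caller sees, because
both names refer to the same object.

def append_one(lst):
    lst.append(1)        # mutates the object itself, no rebinding involved

items = []
append_one(items)
print items              # [1] -- the caller sees the change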

* There are stupid [ctypes/getframe/etc.] tricks, though I think all
are implementation-specific

-Miles
--
http://mail.python.org/mailman/listinfo/python-list


Re: New Python 3.0 string formatting - really necessary?

2008-12-29 Thread Steven D'Aprano
On Mon, 29 Dec 2008 09:50:14 -0800, walterbyrd wrote:

> On Dec 21, 12:28 pm, Bruno Desthuilliers
>  wrote:
> 
>> > I can see where the new formatting might be helpful in some cases.
>> > But, I am not sure it's worth the cost.
>>
>> Err... _Which_ cost exactly ?
> 
> Loss of backward compatibility, mainly.

How do you lose backward compatibility by *adding* new functionality? The 
old functionality will continue to work as normal.



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Of console I/O, characters, strings & dogs

2008-12-29 Thread David
I am trying getch() from msvcrt.  The following module
has been run with 3 different concatenation statements
and none yield a satisfactory result.  (Python 3.0)

# script12
import msvcrt
shortstr1 = 'd' + 'o' + 'g'
print(shortstr1)
char1 = msvcrt.getch()
char2 = msvcrt.getch()
char3 = msvcrt.getch()
 <  alternatives for line 8 below  >
print(shortstr2)

   print(shortstr1) gives dog, of course.
   If the same chars are entered individually at the
   console, as char1, 2 & 3, using msvcrt.getch(),
   I have not been able to get out a plain dog.

   If line 8 is   shortstr2 = char1[0] + char2[0] + char3[0]
print(shortstr2)  yields 314

   If line 8 is   shortstr2 = 'char1[0]' + 'char2[0]' + 'char3[0]'
print(shortstr2)  yields char1[0]char2[0]char3[0]

   If line 8 is   shortstr2 = char1 + char2 + char3
print(shortstr2)  yields  b 'dog'
 
Is the latter out of "How to Speak Redneck"  ?

  Possibly b means bit string.  But how do I get a plain 
  dog out of these char console entries ?   
  The 3.0 tutorial doesn't discuss console I/O.
 Found msvcrt in Python in a Nutshell.

   An old c programmer learns that this b 'Python'
--
http://mail.python.org/mailman/listinfo/python-list


get method

2008-12-29 Thread Ross
I am teaching myself Python by going through Allen Downey's "Think
Python." I have come across what should be a simple exercise, but I am
not getting the correct answer. Here's the exercise:

Given:

def histogram(s):
    d = dict()
    for c in s:
        if c not in d:
            d[c] = 1
        else:
            d[c] += 1
    return d


Dictionaries have a method called get that takes a key and a default
value. If the key appears in the dictionary, get returns the
corresponding value; otherwise it returns the default value. For
example:

>>> h = histogram('a')
>>> print h
{'a': 1}
>>> h.get('a', 0)
1
>>> h.get('b', 0)
0

Use get to write histogram more concisely. You should be able to
eliminate the if statement.

Here's my code:

def histogram(s):
    d = dict()
    for c in s:
        d[c]= d.get(c,0)
    return d

This code returns a dictionary of all the letters to any string s I
give it but each corresponding value is incorrectly the default of 0.
What am I doing wrong?

--
http://mail.python.org/mailman/listinfo/python-list


Re: "return" in def

2008-12-29 Thread Steven D'Aprano
On Mon, 29 Dec 2008 05:31:17 -0800, Aaron Brady wrote:

> One style of coding I heard about once only permits returns at the end
> of a function.  It claims it makes it easier to see the function as a
> mathematical object.

That's silly. You treat the function as a black box: input comes in, and 
output comes out. You have no idea of what happens inside the black box: 
it could loop a thousand times, take 150 different branches, or take one 
of 37 different exit points. From the outside, it's still exactly like a 
mathematical object. Internal complexity is irrelevant. This is why 
mathematicians can perform algebra on complicated functions like Bessel's 
function (of the first or second kind), without needing to care that 
actually calculating Bessel's function is quite tricky.

What I think the one-return-per-function style is aiming at is that it is 
(sometimes) easier to analyse the internals of the function if there are 
few branches. The more complicated branches you have, the harder it is to 
analyse the function. Early exits on their own are not the cause of the 
complexity: it's the number of branches leading to the early exit that 
causes the problem.

Avoiding early exits is an over-reaction to the Bad Old Days of spaghetti 
code. But used wisely, early exits can simplify, not complicate, code.

Consider the following:

def find_ham(alist):
    for item in alist:
        if isinstance(item, Ham):
            return item
    raise ValueError('no ham found')


def find_spam(alist):
    found_item = None
    for item in alist:
        if found_item is None:
            if isinstance(item, Spam):
                found_item = item
    if found_item is None:
        raise ValueError('no spam found')
    else:
        return found_item


The second version has double the number of lines of code of the first. 
It introduces an extra variable "found_item" and two extra if blocks. I 
don't think the claim that the version with an early exit is more 
complicated than the version without can be justified.


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread Steven D'Aprano
On Mon, 29 Dec 2008 17:00:31 -0800, Ross wrote:

> Here's my code:
> 
> def histogram(s):
>   d = dict()
>   for c in s:
>   d[c]= d.get(c,0)
>   return d
> 
> This code returns a dictionary of all the letters to any string s I give
> it but each corresponding value is incorrectly the default of 0. What am
> I doing wrong?

You're forgetting to increase the count each time you see a letter:

* Look up the letter c in the dict, and call it count;
* If c isn't found in the dict, use 0 as the count.
* Set the value to count.

But at no point do you increase count.


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread Scott David Daniels

Ross wrote:

... Use get to write histogram more concisely. You should be able to
eliminate the if statement.

def histogram(s):
    d = dict()
    for c in s:
        d[c]= d.get(c,0)
    return d

This code returns a dictionary of all the letters to any string s I
give it but each corresponding value is incorrectly the default of 0.
What am I doing wrong?


How is this code supposed to count?

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Python in C

2008-12-29 Thread thmpsn . m . k
I've just downloaded Python's mainstream implementation (CPython),
which is written in C. Not to my surprise, I feel like I'm looking at
unstructured spaghetti, and I'm having trouble figuring out how it all
works together. (Please bear with me; I'm just going through the usual
frustration that anyone goes through when trying to see the
organization of a C program :)

So, I have two queries:

1. Can anyone explain to me what kind of program structuring technique
(which paradigm, etc) CPython uses? How do modules interact together?
What conventions does it use?

2. Have there been any suggestions in the past to rewrite Python's
mainstream implementation in C++ (or why wasn't it done this way from
the beginning)?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python in C

2008-12-29 Thread Chris Rebert
On Mon, Dec 29, 2008 at 5:22 PM,   wrote:

> 2. Have there been any suggestions in the past to rewrite Python's
> mainstream implementation in C++ (or why wasn't it done this way from
> the beginning)?

I'm not a CPython dev (I bet one will pipe in), but I would speculate
it's because C++ is so much more complicated and a bit less portable
than C.

Cheers,
Chris

-- 
Follow the path of the Iguana...
http://rebertia.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:00 AM, Ross  wrote:
> I am teaching myself Python by going through Allen Downing's "Think
> Python." I have come across what should be a simple exercise, but I am
> not getting the correct answer. Here's the exercise:
>
> Given:
>
> def histogram(s):
>     d = dict()
>     for c in s:
>         if c not in d:
>             d[c] = 1
>         else:
>             d[c] += 1
>     return d
>
>
> Dictionaries have a method called get that takes a key and a default
> value. If the key appears in the dictionary, get returns the
> corresponding value; otherwise it returns the default value. For
> example:
>
> >>> h = histogram('a')
> >>> print h
> {'a': 1}
> >>> h.get('a', 0)
> 1
> >>> h.get('b', 0)
> 0
>
> Use get to write histogram more concisely. You should be able to
> eliminate the if statement.
>
> Here's my code:
>
> def histogram(s):
>     d = dict()
>     for c in s:
>         d[c]= d.get(c,0)
>     return d
>
> This code returns a dictionary of all the letters to any string s I
> give it but each corresponding value is incorrectly the default of 0.
> What am I doing wrong?

Ross, the others have informed you that you are not
actually incrementing the count. I'll assume you've
fixed your function now :) ... I want to show you a far
simpler way to do this which takes advantage of
Python's list comprehensions and mappings (which are
really what dictionaries are):

>>> s = "James Mills and Danielle Van Sprang"
>>> dict([(k, len([x for x in s if x == k])) for k in s])
{'a': 5, ' ': 5, 'e': 3, 'd': 1, 'g': 1, 'i': 2, 'M': 1, 'J': 1, 'm':
1, 'l': 4, 'n': 4, 'p': 1, 's': 2, 'r': 1, 'V': 1, 'S': 1, 'D': 1}
>>>

Let us know when you get to the "List Comprehension"
section - They are very powerful - As are Generators
and Generator Expressions.

Have fun learning Python,

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread Aaron Brady
On Dec 29, 6:05 pm, "James Mills" 
wrote:
> On Tue, Dec 30, 2008 at 12:52 AM, mk  wrote:
> > Hello everyone,
>
> > After readinghttp://www.python.org/dev/peps/pep-0371/I was under
> > impression that performance of multiprocessing package is similar to that of
> > thread / threading. However, to familiarize myself with both packages I
> > wrote my own test of spawning and returning 100,000 empty threads or
> > processes (while maintaining at most 100 processes / threads active at any
> > one time), respectively.
snip
> As others have mentioned, it's not suprising
> that spawning even 100 processes took some
> time.
>
> Bottom line: multiprocessing should not be used this way.
> (nor should threading).

The OP may be interested in Erlang, which Wikipedia (end-all, be-all)
claims is a 'distribution oriented language'.

You might also find it interesting to examine a theoretical OS that is
optimized for process overhead.  In other words, what is the minimum
overhead possible?  Can processes be as small as threads?  Can entire
threads be only a few bytes (words) big?

Also, could generators provide any of the things you need with your
multiple threads?  You could, say, call 'next()' on many items in a
list, and just remove them on StopIteration.
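
A rough sketch of that last idea -- cooperative, round-robin stepping
of generators (the names are invented for the example):

def worker(name, steps):
    for i in range(steps):
        print name, 'step', i    # do a small piece of work...
        yield                    # ...then hand control back

tasks = [worker('a', 2), worker('b', 3)]
while tasks:
    for t in list(tasks):        # iterate over a copy so we can remove safely
        try:
            t.next()             # Python 2.x; use next(t) on 3.0
        except StopIteration:
            tasks.remove(t)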
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:32 AM, James Mills
 wrote:
> Ross, the others have informed you that you are not
> actually incrementing the count. I'll assume you've
> fixed your function now :) ... I want to show you a far
> simpler way to do this which takes advantage of
> Python's list comprehensions and mappings (which are
> really what dictionaries are):
>
 s = "James Mills and Danielle Van Sprang"
 dict([(k, len([x for x in s if x == k])) for k in s])
> {'a': 5, ' ': 5, 'e': 3, 'd': 1, 'g': 1, 'i': 2, 'M': 1, 'J': 1, 'm':
> 1, 'l': 4, 'n': 4, 'p': 1, 's': 2, 'r': 1, 'V': 1, 'S': 1, 'D': 1}

>
> Let us know when you get to the "List Comprehension"
> section - They are very powerful - As as Generators
> and Generator Expressions.
>
> Have fun learning Python,

Also, here's a nice function:

>>> def histogram(s):
... d = dict([(k, len([x for x in s if x == k])) for k in s])
... for k, v in d.iteritems():
... print "%s: %s" % (k, "*" * v)
...
>>> histogram("Hello World!")
!: *
 : *
e: *
d: *
H: *
l: ***
o: **
r: *
W: *

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread Ross
On Dec 29, 8:07 pm, Scott David Daniels  wrote:
> Ross wrote:
> > ... Use get to write histogram more concisely. You should be able to
> > eliminate the if statement.
>
> > def histogram(s):
> >    d = dict()
> >    for c in s:
> >            d[c]= d.get(c,0)
> >    return d
>
> > This code returns a dictionary of all the letters to any string s I
> > give it but each corresponding value is incorrectly the default of 0.
> > What am I doing wrong?
>
> How is this code supposed to count?
>
> --Scott David Daniels
> scott.dani...@acm.org

I realize the code isn't counting, but how am I to do this without
using an if statement as the problem instructs?
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs thread performance

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:34 AM, Aaron Brady  wrote:
> The OP may be interested in Erlang, which Wikipedia (end-all, be-all)
> claims is a 'distribution oriented language'.

I would suggest to the OP that he take a look
at circuits (1) an event framework with a focus
on component architectures and distributed
processing.

I'm presently looking at Virtual Synchrony and
other distributed processing architectures - but
circuits is meant to be general purpose enough
to fit event-driven applications/systems.

cheers
James

1. http://trac.softcircuit.com.au/circuits/
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:38 AM, Ross  wrote:
> I realize the code isn't counting, but how am I to do this without
> using an if statement as the problem instructs?

I just gave you a hint :)

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: Of console I/O, characters, strings & dogs

2008-12-29 Thread Steven D'Aprano
On Mon, 29 Dec 2008 18:53:45 -0600, David Lemper wrote:

> I am trying getch() from msvcrt.  The following module has been run with
> 3 different concatination statements and none yield a satisfactory
> result.Python 3.0


Your first problem is that you've run into the future of computing: 
strings are not bytes. For many decades, programmers have been pretending 
that they are, and you can get away with it so long as you ignore the 95% 
of the world that doesn't use English as their only language.

Unfortunately, the time for this has passed. Fortunately, it's (mostly) 
not difficult to use Unicode, and Python makes it easy for you.

> # script12
> import msvcrt
> shortstr1 = 'd' + 'o' + 'g'
> print(shortstr1)

In Python 3, shortstr1 is a Unicode string. Python 3 uses Unicode as its 
string type. Because Python is doing all the hard work behind the scenes, 
you don't have to worry about it, and you can just print shortstr1 and 
everything will Just Work.


> char1 = msvcrt.getch()
> char2 = msvcrt.getch()
> char3 = msvcrt.getch()

I don't have msvcrt here but my guess is that getch is returning a *byte* 
rather than a *character*. In the Bad Old Days, all characters were 
bytes, and programmers pretended that they were identical. (This means 
you could only have 256 of them, and not everyone agreed what those 256 
of them were.)

But in Unicode, characters are characters, and there are thousands of 
them. MANY thousands. *Way* too many to store in a single byte.


>  <  alternatives for line 8 below  >
> print(shortstr2)
> 
>print(shortstr1) givesdogof course. If the same
>char are entered individually at the console,  as char1,
>2 & 3, using msvcrt.getch(), I have not been able to get
>out a plain dog.
> 
>If line 8 is   shortstr2 = char1[0] + char2[0] + char3[0]
> print(shortstr2)  yields 314


>>> ord('d') + ord('o') + ord('g')
314

The ordinal value of a byte is its numeric value, as a byte.



>If line 8 is   shortstr2 = 'char1[0]' + 'char2[0]' + 'char3[0]'
> print(shortstr2)  yields char1[0]char2[0]char3[0]


Of course it does. You're setting shortstr2 equal to the literal strings 
'char1[0]' etc. But nice try: you want to convert each not-really-a-char 
to a (Unicode) string. You don't do that with the '' delimiters, as that 
makes a literal string, but with the str() function.

Either of these should work:

shortstr2 = str(char1) + str(char2) + str(char3)
shortstr2 = str(char1 + char2 + char3)

While they will work for (probably) any character you can type with your 
keyboard, they will probably fail to give sensible results as soon as you 
try using characters like £ © ë β 伎 

The right way to convert bytes to characters is to decode them, and to 
decode them, you have to know what encoding system is used. If the 
characters are plain-old English letters typed on an American keyboard, 
you can do this:

bytes = char1 + char2 + char3
shortstr2 = bytes.decode('ascii')

but this can give unexpected results if bytes contains non-ASCII values.

Better is to go the whole-hog and use the UTF-8 encoding, unless you 
specifically know to use something else:

shortstr2 = bytes.decode('utf-8')
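
For the three bytes read above, that amounts to (interactive sketch):

>>> (b'd' + b'o' + b'g').decode('utf-8')
'dog'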


>If line 8 is   shortstr2 = char1 + char2 + char3
> print(shortstr2)  yieldsb 'dog'
>  
> Is the latter out of "How to Speak Redneck"  ?
>
>   Possibly b means bit string.

Nice guess, close but not quite. It actually means byte string.

You probably should read this:

The Absolute Minimum Every Software Developer Absolutely, Positively Must 
Know About Unicode and Character Sets (No Excuses!)
by Joel Spolsky

http://www.joelonsoftware.com/articles/Unicode.html


Good luck!



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python in C

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:32 AM, Chris Rebert  wrote:
> On Mon, Dec 29, 2008 at 5:22 PM,   wrote:
> 
>> 2. Have there been any suggestions in the past to rewrite Python's
>> mainstream implementation in C++ (or why wasn't it done this way from
>> the beginning)?
>
> I'm not a CPython dev (I bet one will pipe in), but I would speculate
> it's because C++ is so much more complicated and a bit less portable
> than C.

I'm not a CPython dev either, but
I concur with this statement.

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: why cannot assign to function call

2008-12-29 Thread Aaron Brady
On Dec 29, 6:06 pm, Miles  wrote:
> On Mon, Dec 29, 2008 at 1:01 AM, scsoce  wrote:
> > I have a function return a reference, and want to assign to the reference,
> > simply like this:
> >>>def f(a)
> >         return a
> >    b = 0
> >   * f( b ) = 1*
> > but the last line will be refused as "can't assign to function call".
> > In my thought , the assignment is very nature,  but  why the interpreter
> > refused to do that ?
>
> Here's some links to help you better understand Python objects:
>
> http://effbot.org/zone/python-objects.htmhttp://effbot.org/zone/call-by-object.htm
>
> The second one is a bit denser reading, but it's important to learn
> that Python's approach to objects and "variables" is fundamentally
> different from that of C/C++.  In the example below, there's no way in
> the Python language* that bar() can change the value of b, since
> strings and numbers are immutable.

On a technicality, to avert a flaming, "change the value of 'b'" is an
ambiguous phrase.  There are two interpretations of "change what 'b'
refers to" and "change what 'b' refers to".  Even in spoken language,
I don't think that emphasis can resolve them either.

One means, 'make a change in the world, in the actual configuration of
such and such actual matter.'  The other means, 'update the axioms the
speaker is using to communicate to the listeners.  (Such and such will
no longer refer to such and such; it will refer to such and such;
accept this and reject that.)'  To make an observation, reference is a
purely linguistic phenomenon.

I, for one, am at a loss for how to disambiguate it.  I'm open to
suggestions.
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread James Mills
On Tue, Dec 30, 2008 at 11:43 AM, James Mills
 wrote:
> On Tue, Dec 30, 2008 at 11:38 AM, Ross  wrote:
>> I realize the code isn't counting, but how am I to do this without
>> using an if statement as the problem instructs?
>
> I just gave you a hint :)

Ross:

This exercise is a simple exercise dealing with:
 * assignments
 * functions
 * dictionaries
 * looping
 * attributes and methods

>>> def histogram(s):
... d = dict()
... for c in s:
... d[c] = d.get(c, 0) + 1
... return d
...
>>> histogram("Hello World!")
{'!': 1, ' ': 1, 'e': 1, 'd': 1, 'H': 1, 'l': 3, 'o': 2, 'r': 1, 'W': 1}

Note the 3rd line of the function ?
1. Get the value (with a default of 0) of the key c from the dictionary d
2. Add 1 to this value
3. Store in d with key c

Hope this helps.

cheers
James
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python in C

2008-12-29 Thread Paul Rubin
thmpsn@gmail.com writes:
> 1. Can anyone explain to me what kind of program structuring technique
> (which paradigm, etc) CPython uses? How do modules interact together?
> What conventions does it use?

There are a bunch of docs about this, you could read them.  The program
is written about the way you would expect if you have worked on 
interpreters written in C before.

> 2. Have there been any suggestions in the past to rewrite Python's
> mainstream implementation in C++ (or why wasn't it done this way from
> the beginning)?

I don't think there has ever been any interest in this.  There is an
effort under way to rewrite Python in Python.  This is called PyPy
(you should be able to websearch for it easily) and should be much
more advanced than the C implementation.  It works now, under some
preliminary definition of "working", but it will be a while before it
is ready for wide deployment.
--
http://mail.python.org/mailman/listinfo/python-list


Re: get method

2008-12-29 Thread Steven D'Aprano
On Mon, 29 Dec 2008 17:38:36 -0800, Ross wrote:

> On Dec 29, 8:07 pm, Scott David Daniels  wrote:
>> Ross wrote:
>> > ... Use get to write histogram more concisely. You should be able to
>> > eliminate the if statement.
>>
>> > def histogram(s):
>> >    d = dict()
>> >    for c in s:
>> >            d[c]= d.get(c,0)
>> >    return d
>>
>> > This code returns a dictionary of all the letters to any string s I
>> > give it but each corresponding value is incorrectly the default of 0.
>> > What am I doing wrong?
>>
>> How is this code supposed to count?
>>
>> --Scott David Daniels
>> scott.dani...@acm.org
> 
> I realize the code isn't counting, but how am I to do this without using
> an if statement as the problem instructs?


You don't increment a value using if. This would be silly:

# increment x
if x == 0:
    x = 1
elif x == 1:
    x = 2
elif x == 2:
    x = 3  # can I stop yet?
else:
    x = "I can't count that high!"


You increment a value using + 1:

x = x + 1

or 

x += 1

In the original code, the program did this:

def histogram(s):
    d = dict()
    for c in s:
        if c not in d:
            d[c] = 1
        else:
            d[c] += 1


* look for c in the dict
* if it isn't there, set d[c] to 1
* but if it is there, increment d[c] by 1

Your attempt was quite close:

def histogram(s):
    d = dict()
    for c in s:
        d[c]= d.get(c,0)
    return d

which is pretty much the same as:

* set d[c] to whatever d[c] already is, or 0 if it isn't already there.

So what you need is:

* set d[c] to whatever d[c] already is plus one, or 0 plus one if it 
isn't already there.

It's a two character change to one line. Let us know if you still can't 
see it.



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list

