Re: Config files with different types

2009-07-03 Thread Javier Collado
Hello,

Have you considered using something that is already developed?

You could take a look at this presentation for an overview of what's available:
http://us.pycon.org/2009/conference/schedule/event/5/

Anyway, let me explain that, since I "discovered" it, my favourite
format for configuration files is yaml (http://yaml.org/,
http://pyyaml.org/). It's easy to read, easy to write, available in
different programming languages, etc. In addition to this, type
conversion is already in place so I think it covers your requirement.
For example:

In [1]: import yaml

In [2]: yaml.load("""name: person name
   ...: age: 25
   ...: is_programmer: true""")
Out[2]: {'age': 25, 'is_programmer': True, 'name': 'person name'}
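
If the configuration lives in a file, loading it is just as short. A minimal
sketch (the file name is illustrative; the keys mirror the config.txt quoted
below, and safe_load avoids constructing arbitrary Python objects):

import yaml

# config.yaml would contain, in YAML syntax:
#   destination: C:/Destination
#   overwrite: true
f = open('config.yaml')
configs = yaml.safe_load(f)
f.close()

print configs['overwrite'] is True   # booleans, ints and floats come back typed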

Best regards,
Javier

2009/7/2 Zach Hobesh :
> Hi all,
>
> I've written a function that reads a specifically formatted text file
> and spits out a dictionary.  Here's an example:
>
> config.txt:
>
> Destination = C:/Destination
> Overwrite = True
>
>
> Here's my function that takes 1 argument (text file)
>
> the_file = open(textfile, 'r')
> linelist = the_file.read().split('\n')
> the_file.close()
> configs = {}
> for line in linelist:
>     try:
>         key, value = line.split('=')
>         key = key.strip().lower()
>         value = value.strip().lower()
>         configs[key] = value
>     except ValueError:
>         break
>
> so I call this on my config file, and then I can refer back to any
> config in my script like this:
>
> shutil.move(your_file,configs['destination'])
>
> which I like because it's very clear and readable.
>
> So this works great for simple text config files.  Here's how I want
> to improve it:
>
> I want to be able to look at the value and determine what type it
> SHOULD be.  Right now, configs['overwrite'] = 'true' (a string) when
> it might be more useful as a boolean.  Is there a quick way to do
> this?  I'd also like to be able to read '1' as an int, '1.0' as a float,
> etc...
>
> I remember once I saw a script that took a string and tried int(),
> float() wrapped in a try except, but I was wondering if there was a
> more direct way.
>
> Thanks in advance,
>
> Zach
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: logging of strings with broken encoding

2009-07-03 Thread Thomas Guettler
Stefan Behnel wrote:
> Thomas Guettler wrote:
>> My quick fix is this:
>>
>> class MyFormatter(logging.Formatter):
>>     def format(self, record):
>>         msg = logging.Formatter.format(self, record)
>>         if isinstance(msg, str):
>>             msg = msg.decode('utf8', 'replace')
>>         return msg
>>
>> But I still think handling of non-ascii byte strings should be better.
>> A broken logging message is better than none.
> 
> Erm, may I note that this is not a problem in the logging library but in
> the code that uses it?

I know that my code passes the broken string to the logging module. But maybe
I get the non-ascii byte string from a third party (psycopg2 sometimes passes
latin1 byte strings from postgres in error messages).

I like Python very much because "it refuses to guess". But in this case, "best
effort" is a better approach.

It worked in 2.5 and will in py3k. I think it is a bug that it does not in 2.6.
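
For reference, a minimal sketch of wiring such a formatter in (handler, logger
name and format string are illustrative):

import logging
import sys

class MyFormatter(logging.Formatter):
    # same quick fix as above
    def format(self, record):
        msg = logging.Formatter.format(self, record)
        if isinstance(msg, str):
            msg = msg.decode('utf8', 'replace')
        return msg

handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(MyFormatter("%(levelname)s %(message)s"))

log = logging.getLogger("example")
log.addHandler(handler)
log.error("error text from a third party: %s", "caf\xe9")  # non-ascii bytes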

 Thomas



-- 
Thomas Guettler, http://www.thomas-guettler.de/
E-Mail: guettli (*) thomas-guettler + de
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Lie Ryan
Brad wrote:
> On Jul 2, 9:40 pm, "Pablo Torres N."  wrote:
>> If it is speed that we are after, it's my understanding that map and
>> filter are faster than iterating with the for statement (and also
>> faster than list comprehensions).  So here is a rewrite:
>>
>> def split(seq, func=bool):
>> t = filter(func, seq)
>> f = filter(lambda x: not func(x), seq)
>> return list(t), list(f)
>>
> 
> In my simple tests, that takes 1.8x as long as the original solution.
> Better than the itertools solution, when "func" is short and fast. I
> think the solution here would be worse if func was more complex.
> 
> Either way, what I am still wondering is if people would find a built-
> in implementation useful?
> 
> -Brad

A built-in/itertools should always try to provide the general solution
to be as useful as possible, something like this:

def group(seq, func=bool):
    ret = {}
    for item in seq:
        fitem = func(item)
        try:
            ret[fitem].append(item)
        except KeyError:
            ret[fitem] = [item]
    return ret

definitely won't be faster, but it is a much more general solution.
Basically, the function allows you to group sequences based on the
result of func(item). It is similar to itertools.groupby() except that
this also groups non-contiguous items.
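
For example, a quick illustrative run of the group() function above (expected
output shown as comments):

print group(range(10), lambda n: n % 3)
# -> {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}

by_truth = group(['spam', '', 'eggs', None, 0, 42])   # default func=bool
print by_truth[True], by_truth[False]
# -> ['spam', 'eggs', 42] ['', None, 0]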
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: logging of strings with broken encoding

2009-07-03 Thread Lie Ryan
Thomas Guettler wrote:
> Stefan Behnel wrote:
>> Thomas Guettler wrote:
>>> My quick fix is this:
>>>
>>> class MyFormatter(logging.Formatter):
>>>     def format(self, record):
>>>         msg = logging.Formatter.format(self, record)
>>>         if isinstance(msg, str):
>>>             msg = msg.decode('utf8', 'replace')
>>>         return msg
>>>
>>> But I still think handling of non-ascii byte strings should be better.
>>> A broken logging message is better than none.
>> Erm, may I note that this is not a problem in the logging library but in
>> the code that uses it?
> 
> I know that my code passes the broken string to the logging module. But maybe
> I get the non-ascii byte string from a third party (psycopg2 sometimes passes
> latin1 byte strings from postgres in error messages).

If the database contains non-ascii byte string, then you could repr()
them before logging (repr also adds some niceties such as quotes). I
think that's the best solution, unless you want to decode the byte
string (which might be an overkill, depending on the situation).

> I like Python very much because "it refuses to guess". But in this case,
> "best effort" is a better approach.

One time it refuses to guess, the next time it tries best effort. I
don't think Guido would like such inconsistency.

> It worked in 2.5 and will in py3k. I think it is a bug that it does not in
> 2.6.

In Python 3.x, the default string is a unicode string. If it works in
Python 2.5, then it is a bug in 2.5.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Paul Rubin
Lie Ryan  writes:
> I guess in python, None as the missing datum idiom is still quite prevalent:

Well, sometimes there is no way around it, but:

> def cat_list(a=None, b=None):
>     # poor man's list concatenation
>     if a is None and b is None: return []
>     if a is None: return b
>     if b is None: return a
>     return a + b

def cat_list(a=[], b=[]):
    return a + b
-- 
http://mail.python.org/mailman/listinfo/python-list


use python to edit pdf document

2009-07-03 Thread nillgump
hi all.
I am looking for some packages which use Python to edit PDF format
documents.
When I searched the internet, I found the paper "Using Python as PDF
Editing and Processing Framework", which is at:
http://www.python.org/workshops/2002-02/papers/17/index.htm
This paper was published in 2002, a long time ago, but I can't find the
implementation. Does anyone know what happened to it?
I have also seen ReportLab, which can generate PDFs, but what I want is
not only to generate PDFs but also to edit them.
So, does anyone know of existing Python packages for this? Using
existing things would save me a lot of work.
Any advice is appreciated!

--nillgump
--2009/7/3
--China
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Lie Ryan
Rickard Lindberg wrote:
>> I tried posting on python-ideas and received a "You are not allowed to
>> post to this mailing list" reply. Perhaps because I am posting through
>> Google groups? Or maybe one must be an approved member to post?
> 
> If you got an "awaiting moderator approval" message your post might appear on
> the list soon. The reason for getting those can be that it is a member-only
> list and you posted from another address. I am not sure if that was the
> message you got.
> 

AFAIK, python-ideas is not moderated (I follow python-ideas). I've
never used Google Groups to access it though. Try subscribing to the
mailing list directly (instead of using Google Group's web-gateway)
here: http://mail.python.org/mailman/listinfo/python-ideas
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: use python to edit pdf document

2009-07-03 Thread Chris Rebert
On Fri, Jul 3, 2009 at 12:31 AM, nillgump wrote:
> hi all.
> I am looking for some packages which use Python to edit PDF format
> documents.
> When I searched the internet, I found the paper "Using Python as PDF
> Editing and Processing Framework", which is at:
> http://www.python.org/workshops/2002-02/papers/17/index.htm
> This paper was published in 2002, a long time ago, but I can't find the
> implementation. Does anyone know what happened to it?
> I have also seen ReportLab, which can generate PDFs, but what I want is
> not only to generate PDFs but also to edit them.

See ReportLab's proprietary+commercial PageCatcher library
(http://developer.reportlab.com/).

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Steven D'Aprano
On Thu, 02 Jul 2009 21:56:40 -0700, Paul Rubin wrote:

>> Well I wouldn't know, I've been fortunate enough to program mostly in
>> python for over half a decade now and None and 0 are as close as I've
>> gotten to NULL in a long time.
> 
> Right, and how many times have you had to debug
> 
>AttributeError: 'NoneType' object has no attribute 'foo'

Hardly ever, and mostly when I do something silly like:

another_list = alist.sort()

Of course, YMMV.



> or the equivalent null pointer exceptions in Java, C, or whatever? They
> are very common.  And the basic idea is that if you avoid using null
> pointers in the first place, you'll get fewer accidental null pointer
> exceptions.

And some other error instead, due to the bugs in your more complicated 
code *wink*



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Steven D'Aprano
On Thu, 02 Jul 2009 22:10:14 -0500, Pablo Torres N. wrote:

> This sounds like it belongs to the python-ideas list.  I suggest posting
> there for better feedback, since the core developers check that list
> more often than this one.

If you post to python-ideas, you'll probably be told to gather feedback 
here first. The core-developers aren't hugely interested in arbitrary new 
features unless they have significant community support.

I've never needed such a split function, and I don't like the name, and 
the functionality isn't general enough. I'd prefer something which splits 
the input sequence into as many sublists as necessary, according to the 
output of the key function. Something like itertools.groupby(), except it 
runs through the entire sequence and collates all the elements with 
identical keys.

E.g.:

splitby(range(10), lambda n: n%3)
=> [ (0, [0, 3, 6, 9]),
 (1, [1, 4, 7]), 
 (2, [2, 5, 8]) ]

Your split() would be nearly equivalent to this with a key function that 
returns a Boolean.
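
A minimal dict-based sketch of such a splitby() (the name and the
list-of-pairs result just follow the example above; not an existing or
proposed stdlib API):

def splitby(seq, key):
    """Collate *all* elements of seq by key(element), unlike groupby()."""
    groups = {}
    for item in seq:
        groups.setdefault(key(item), []).append(item)
    return sorted(groups.items())

print splitby(range(10), lambda n: n % 3)
# -> [(0, [0, 3, 6, 9]), (1, [1, 4, 7]), (2, [2, 5, 8])]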



-- 
Steven


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Paul Rubin
Steven D'Aprano  writes:
> I've never needed such a split function, and I don't like the name, and 
> the functionality isn't general enough. I'd prefer something which splits 
> the input sequence into as many sublists as necessary, according to the 
> output of the key function. Something like itertools.groupby(), except it 
> runs through the entire sequence and collates all the elements with 
> identical keys.

No really, groupby makes iterators, not lists, and you have to
develop quite a delicate sense of when you can use it without having
bugs caused by the different iterators it makes being advanced at
the wrong times.  The concept of a split function that actually works
on lists is useful.  I'm neutral about whether it's worth having a C
version in the stdlib.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding an object to the global namespace through " f_globals" is that allowed ?

2009-07-03 Thread Stef Mientki




ryles wrote:
> On Jul 2, 1:25 am, Terry Reedy  wrote:
>>> The next statement works,
>>> but I'm not sure if it will have any dramatical side effects,
>>> other than overruling a possible object with the name A
>>>
>>> def some_function ( ...) :
>>>    A = object ( ...)
>>>    sys._getframe(1).f_globals [ Name ] = A
>>
>> global name
>> name = A
>>
>> or if name is a string var
>> globals()[name] = A
>
> It wasn't explicit, but I think Stef meant that the global should be
> added to the caller's environment, which was the reason for
> sys._getframe().
>
> Is this environment only intended for interactive use? I wonder if you
> might just set things up in a PYTHONSTARTUP script instead.

the idea is to get a simple environment where you can do interactive 3D
geometry,
without knowing anything about Python.
So to create a triangle e.g., the whole program will look like this:

Point ( 'A', (1,1,0) )
Point ( 'B', (5,5,0) )
Point ( 'C', (1,5,0) )
Line_Segment ( 'AB' ) 
Line_Segment ( 'BC' )  
Line_Segment ( 'AC' ) 

    

And now the points A, B, C and the line segments AB, BC, AC also exist as
variables in the namespace of the above environment.
So now you can add a point "D" in the middle of the line-segment AB, by
giving the formula of that point:

Point ( 'D',  ' ( A + B ) / 2 ' )

Or display the properties of point A, with :

Print A

which (for the moment) will result in:

Point Instance: A
  Childs : AB(Line-Segment), AC(Line-Segment), 
  Parents: 

The graphics above is done in VPython, which gives you an easy way to
make all objects draggable, and if objects are defined by formulas, they
will stick together according to that formula.

And now it's obvious I can't use the solution of Terry,
because I don't know what names will become global.

thanks,
Stef




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Lie Ryan
Paul Rubin wrote:
> Lie Ryan  writes:
>> I guess in python, None as the missing datum idiom is still quite prevalent:
> 
> Well, sometimes there is no way around it, but:
> 
>> def cat_list(a=None, b=None):
>>     # poor man's list concatenation
>>     if a is None and b is None: return []
>>     if a is None: return b
>>     if b is None: return a
>>     return a + b
> 
> def cat_list(a=[], b=[]):
>     return a + b

Being super contrived is why I tagged it as a poor man's concat.
Generally, there will be some processing:

...
if b is None: return a
# processing here
return retlist
...

and it will not be about concatenating two lists, but something more
complex. But I thought that was unnecessary since I just wanted to
mention the None-argument idiom.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: No trees in the stdlib?

2009-07-03 Thread Lawrence D'Oliveiro
In message , João 
Valverde wrote:

> Lawrence D'Oliveiro wrote:
>
>> In message , João
>> Valverde wrote:
>>   
>>> Simple example usage case: Insert string into data structure in sorted
>>> order if it doesn't exist, else retrieve it.
>>
>> the_set = set( ... )
>>
>> if str in the_set :
>> ... "retrieval" case ...
>> else :
>> the_set.add(str)
>> #end if
>>
>> Want sorted order?
>>
>> sorted(tuple(the_set))
>>
>> What could be simpler?
> 
> Try putting that inside a loop with thousands of iterations and you'll
> see what the problem is.

You could apply the same argument to anything. E.g. why create a tree 
structure with a million elements? Try putting that inside a loop with 
thousands of iterations and you'll see what the problem is.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Chris Rebert
On Thu, Jul 2, 2009 at 11:31 PM, Brad wrote:
> On Jul 2, 9:40 pm, "Pablo Torres N."  wrote:
>>
>> If it is speed that we are after, it's my understanding that map and
>> filter are faster than iterating with the for statement (and also
>> faster than list comprehensions).  So here is a rewrite:
>>
>> def split(seq, func=bool):
>>         t = filter(func, seq)
>>         f = filter(lambda x: not func(x), seq)
>>         return list(t), list(f)
>>
>
> In my simple tests, that takes 1.8x as long as the original solution.
> Better than the itertools solution, when "func" is short and fast. I
> think the solution here would worse if func was more complex.
>
> Either way, what I am still wondering is if people would find a built-
> in implementation useful?

FWIW, Ruby has Enumerable#partition, which does the same thing as
split() and has a better name IMHO.
http://www.ruby-doc.org/core/classes/Enumerable.html#M003130

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: XML(JSON?)-over-HTTP: How to define API?

2009-07-03 Thread Diez B. Roggisch

Allen Fowler wrote:
> I have an (in-development) python system that needs to shuttle events /
> requests around over the network to other parts of itself.  It will also
> need to cooperate with a .net application running on yet a different machine.
>
> So, naturally I figured some sort of HTTP event / RPC type of thing would
> be a good idea?
>
> Are there any modules I should know about, or guidelines I could read, that
> could aid me in the design of the API?
>
> To clarify:
>
> Each message would be <1KB of data total, and consist of some structured
> object containing strings, numbers, dates, etc.
>
> For instance there would be an "add user" request that would contain one or
> more User objects each having a number of properties like:
>
> - Full Name
> - Username
> - Password
> - Email addresses (a variable length array)
> - Street Address line 1
> - Street Address line 2
> - Street Address line 3
> - City
> - State
> - Zip
> - Sign Up Date
>
> ... and so on.
>
> Since I need to work with other platforms, pickle is out...  what are the
> alternatives?  XML? JSON?
>
> How should I formally define each of the valid messages and objects?
>
> Thank you,


Use XMLRPC. Implementations for both languages are available. There is
no need for a formal spec - which is a good thing. You just call the
server, and it works.
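
For example, a minimal sketch of the Python side (method name, port and field
names are illustrative, not an agreed API):

from SimpleXMLRPCServer import SimpleXMLRPCServer

def add_user(user):
    # 'user' arrives as a plain dict of strings/numbers/ISO-formatted dates
    print "adding user", user['username']
    return True

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add_user)
server.serve_forever()

and the calling side, which the .net application would mirror with its own
XML-RPC library:

import xmlrpclib

proxy = xmlrpclib.ServerProxy("http://localhost:8000/")
proxy.add_user({'full_name': 'A. User',
                'username': 'auser',
                'email_addresses': ['auser@example.com'],
                'sign_up_date': '2009-07-02'})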


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding an object to the global namespace through " f_globals" is that allowed ?

2009-07-03 Thread Chris Rebert
On Fri, Jul 3, 2009 at 1:07 AM, Stef Mientki wrote:
> ryles wrote:
>
> On Jul 2, 1:25 am, Terry Reedy  wrote:
>
>
> The next statement works,
> but I'm not sure if it will have any dramatical side effects,
> other than overruling a possible object with the name A
>
>
> def some_function ( ...) :
>  A = object ( ...)
>  sys._getframe(1).f_globals [ Name ] = A
>
>
> global name
> name = A
>
> or if name is a string var
> globals()[name] = A
>
>
> It wasn't explicit, but I think Stef meant that the global should be
> added to the caller's environment, which was the reason for
> sys._getframe().
>
> Is this environment only intended for interactive use? I wonder if you
> might just set things up
> in a PYTHONSTARTUP script instead.
>
>
> the idea is to get a simple environment where you can do interactive 3D
> geometry,
> without knowing anything about Python.
> So to create a triangle e.g., the whole program will look like this:
>
> Point ( 'A', (1,1,0) )
> Point ( 'B', (5,5,0) )
> Point ( 'C', (1,5,0) )
> Line_Segment ( 'AB' )
> Line_Segment ( 'BC' )
> Line_Segment ( 'AC' )
>
>
>
> And now the points A, B, C and the line segments AB, BC, AC also exist as
> variables in the namespace of the above environment.
> So now you can add a point "D" in the middle of the line-segment AB, by
> giving the formula of that point:
>
> Point ( 'D',  ' ( A + B ) / 2 ' )
>
> Or display the properties of point A, with :
>
> Print A
>
> which (for the moment) will result in:
>
> Point Instance: A
>   Childs : AB(Line-Segment), AC(Line-Segment),
>   Parents:
>
> The graphics above is done in VPython, which gives you an easy way to make
> all objects draggable, and if objects are defined by formulas, they will
> stick together according to that formula.
>
> And now it's obvious I can't use the solution of Terry,
> because I don't know what names will become global.

Have you considered just using imports instead? Requiring a single opaque:

from graphicslibthingy import *

at the start of a file using the environment isn't so bad, IMHO.

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Steven D'Aprano
On Thu, 02 Jul 2009 22:14:18 +, Tim Harig wrote:

> On 2009-07-02, Paul Rubin  wrote:
>> Tim Harig  writes:
>>> If lower is 5 and higher is 3, then it returns 3 because 3 != None in
>>> the first if.
>> Sorry, the presumption was that lower <= higher, i.e. the comparison
>> had already been made and the invariant was enforced by the class
>> constructor.  The comment should have been more explicit, I guess.
> 
> That being the case, it might be a good idea either to handle the
> situation and raise an exception or add:
> 
> assert self.lower <= self.higher

Only if you want strange and mysterious bugs whenever somebody runs your 
code with the -O flag.

 
> That way an exception will be raised if there is an error somewhere else
> in the code rather then silently passing a possibly incorrect value.

asserts are disabled when the -O flag is given.

You should never use asserts for testing data. That's not what they're 
for. They're for testing program logic (and unit-testing).

This is wrong, because it will sometimes fail when running under -O:

def parrot(colour):
    assert colour.lower() in ('red', 'green', 'blue')
    return "Norwegian %s" % colour.title()

That should be written as test followed by an explicit raise:

def parrot(colour):
    if colour.lower() not in ('red', 'green', 'blue'):
        raise ValueError
    return "Norwegian %s" % colour.title()


An acceptable use for assert is to verify your program logic by testing 
something which should never fail. If it does fail, then you know 
something bizarre and unexpected has happened, and you have a program bug 
rather than bad data. Here's a silly example:


def cheese_available(kind):
    """Return True if the kind of cheese specified is in stock
    in the cheeseshop run by Michael Palin.

    This function always returns False.
    """
    return False


def get_cheese():
    exchange_amusing_banter()
    for kind in ('Cheddar', 'Gouda', 'Swiss', 'Camembert'):
        ask_proprietor("Do you have any %s?" % kind)
        flag = cheese_available(kind)
        assert not flag
        if flag:
            buy_cheese(kind)
            return None
        else:
            express_disappointment()
    # We only get here if no cheese is bought.
    print "Cleese: Does this cheeseshop actually have ANY cheese?"
    print "Palin: No sir, I'm afraid I've been wasting your time."
    print "Cleese: Well then, I'm going to have to shoot you."



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread Steven D'Aprano
On Thu, 02 Jul 2009 11:19:40 +, kj wrote:

> I'm sure that it is possible to find cases in which the *current*
> implementation of re.search() would be inefficient, but that's because
> this implementation is perverse, which, I guess, is ultimately the point
> of my original post.  Why privilege the special case of a
> start-of-string anchor?  

Because wanting to see if a string matches from the beginning is a very 
important and common special case.
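
A quick illustration of that special case (plain use of the documented re API):

import re

s = "spam and eggs"
print re.match("spam", s)        # matches: match() is anchored at the start
print re.match("eggs", s)        # None: it only looks at the beginning
print re.search("eggs", s)       # matches: search() scans the whole string
print re.search(r"\Aeggs", s)    # the search() spelling of match(): None here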


> What if you wanted to apply an end-anchored
> pattern to some prefix of your 4GB string?  Why not have a special re
> method for that?  And another for every possible special case?

Because they're not common special cases. They're rare and not special 
enough to justify special code.


> If the concern is efficiency for such cases, then simply implement
> optional offset and length parameters for re.search(), to specify any
> arbitrary substring to apply the search to.  To have a special-case
> re.match() method in addition to a general re.search() method is
> antithetical to language minimalism, and plain-old bizarre.  Maybe
> there's a really good reason for it, but it has not been mentioned yet.

There is, and it has. You're welcome to keep your own opinion, but I 
don't think you'll find many experienced Python coders will agree with it.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to spawn a process under different user

2009-07-03 Thread Nick Craig-Wood
Gabriel Genellina  wrote:
>  En Thu, 02 Jul 2009 19:27:05 -0300, Tim Harig 
>  escribió:
> > On 2009-07-02, sanket  wrote:
> >>> sanket wrote:
> >>> > I am trying to use python's subprocess module to launch a process.
> >>> > but in order to do that I have to change the user.
> >> I am using python 2.4 on centos.
> >
> > I have never done this in python; but, using the normal system calls in C
> > the process is basically:
> > 
> > 1. fork() a  new process
> > 2. the child process changes its user id with setreuid() and
> > possibly its group id with setregid()
> > 3. then the child exec()s new process which replaces itself
> >
> > All of the necessary functions should be under the os module on POSIX
> > operating systems.
> 
>  How to do that using the subprocess module: write a function for item (2)
>  above and pass it as the preexec_fn argument to Popen. preexec_fn is
>  executed after fork() and before exec()

If you are forking 100s of processes a second you want to use the
method above, however I think it is easier to use su (assuming you
start off as root),

so instead of passing

  ['mycommand', 'my_arg1', 'my_arg2']

to Popen, pass

  ['su', '-', 'username', '-c', 'mycommand my_arg1 my_arg2']

There is some opportunity for quoting problems there, but it is easy!
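
For reference, a minimal sketch of the preexec_fn route (it assumes the parent
runs as root; 'someuser' and 'mycommand' are illustrative):

import os
import pwd
import subprocess

def demote(username):
    """Return a callable that drops group and user privileges to username."""
    pw = pwd.getpwnam(username)
    def set_ids():
        os.setgid(pw.pw_gid)   # drop the group id first, while still root
        os.setuid(pw.pw_uid)
    return set_ids

proc = subprocess.Popen(['mycommand', 'my_arg1', 'my_arg2'],
                        preexec_fn=demote('someuser'))
proc.wait()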

-- 
Nick Craig-Wood  -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: No trees in the stdlib?

2009-07-03 Thread Paul Rubin
Lawrence D'Oliveiro  writes:
> >> Want sorted order?
> >> sorted(tuple(the_set))
> >> What could be simpler?
> > 
> > Try putting that inside a loop with thousands of iterations and you'll
> > see what the problem is.
> 
> You could apply the same argument to anything. E.g. why create a tree 
> structure with a million elements? Try putting that inside a loop with 
> thousands of iterations and you'll see what the problem is.

The idea is you're going to insert or delete something in the tree
structure at each iteration, an O(log n) operation, making the whole
loop O(n log n).  If you sort a set (which is O(n log n)) inside the
loop then you end up with O(n**2 log n) which is impractical.  A
reason you might sort inside the loop might be to find the nearby
neighbors of the new element or traverse some elements in the middle.
This is trivial with a tree structure but messy with something like
sets.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: No trees in the stdlib?

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 20:08:20 +1200, Lawrence D'Oliveiro wrote:

> In message , João
> Valverde wrote:
> 
>> Lawrence D'Oliveiro wrote:
[...]
>>> Want sorted order?
>>>
>>> sorted(tuple(the_set))
>>>
>>> What could be simpler?
>> 
>> Try putting that inside a loop with thousands of iterations and you'll
>> see what the problem is.
> 
> You could apply the same argument to anything. E.g. why create a tree
> structure with a million elements? Try putting that inside a loop with
> thousands of iterations and you'll see what the problem is.

The difference is, it's vanishingly rare to want to build a tree with 
millions of elements thousands of times over and over again, but it is 
not unreasonable to want to access sorted data thousands of times. 
Needing to re-sort it over and over and over again is wasteful, slow and 
stupid. Binary trees were invented, in part, specifically to solve this 
use-case.
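
In the meantime, the insert-or-retrieve use case quoted above can be
approximated with the stdlib bisect module; a minimal sketch (list insertion is
O(n), so this is a stopgap, not a substitute for a real tree):

import bisect

def insert_or_get(sorted_list, item):
    """Insert item in sorted order if absent, else return the stored one."""
    i = bisect.bisect_left(sorted_list, item)
    if i < len(sorted_list) and sorted_list[i] == item:
        return sorted_list[i]      # already present: "retrieve" it
    sorted_list.insert(i, item)    # keeps the list sorted, no re-sort needed
    return item

words = []
for w in ['pear', 'apple', 'cherry', 'apple']:
    insert_or_get(words, w)
print words                        # -> ['apple', 'cherry', 'pear']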



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 01:02:56 -0700, Paul Rubin wrote:

> Steven D'Aprano  writes:
>> I've never needed such a split function, and I don't like the name, and
>> the functionality isn't general enough. I'd prefer something which
>> splits the input sequence into as many sublists as necessary, according
>> to the output of the key function. Something like itertools.groupby(),
>> except it runs through the entire sequence and collates all the
>> elements with identical keys.
> 
> No really, groupby makes iterators, not lists, and you have to
> develop quite a delicate sense of when you can use it without having
> bugs caused by the different iterators it makes being advanced at the
> wrong times.  The concept of a split function that actually works on
> lists is useful.  I'm neutral about whether it's worth having a C
> version in the stdlib.

groupby() works on lists.

The difference between what I'm suggesting and what groupby() does is 
that my suggestion would collate *all* the elements with the same key, 
not just runs of them. This (as far as I can tell) requires returning 
lists rather than iterators.

The most important difference between my suggestion and that of the OP is 
that he limited the key function to something which returns a truth 
value, while I'm looking for something more general which can split the 
input into an arbitrary number of collated sublists.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP368 and pixeliterators

2009-07-03 Thread Steven D'Aprano
On Thu, 02 Jul 2009 10:32:04 +0200, Joachim Strömbergson wrote:

for pixel in rgb_image:
    # swap red and blue, and set green to 0
    pixel.value = pixel.b, 0, pixel.r
> 
> 
> The idea I'm having is that fundamentally the image is made up of a 2D
> array of pixels, not rows of pixels.

A 2D array implies rows (and columns) of pixels.




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Paul Rubin
Steven D'Aprano  writes:
> groupby() works on lists.

>>> a = [1,3,4,6,7]
>>> from itertools import groupby
>>> b = groupby(a, lambda x: x%2==1)  # split into even and odd
>>> c = list(b)
>>> print len(c)
3
>>> d = list(c[1][1])# should be [4,6]
>>> print d  # oops.
[]

> The difference between what I'm suggesting and what groupby() does is 
> that my suggestion would collate *all* the elements with the same key, 
> not just runs of them. This (as far as I can tell) requires returning 
> lists rather than iterators.

I guess that is reasonable.

> The most important difference between my suggestion and that of the OP is 
> that he limited the key function to something which returns a truth 
> value, while I'm looking for something more general which can split the 
> input into an arbitrary number of collated sublists.

Also ok.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: No trees in the stdlib?

2009-07-03 Thread zooko
Try PyJudy:

http://www.dalkescientific.com/Python/PyJudy.html
-- 
http://mail.python.org/mailman/listinfo/python-list


is it possible to write USSD / SMS /SS7 apps in python

2009-07-03 Thread Goksie Aruna

Can someone give me an insight into these?

   developing ss7 or USSD or SMS apps in python.

is there any existing ones in this manner?

Thanks



--
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread tsangpo
Just a shorter implementation:

from itertools import groupby

def split(lst, func):
    # groupby() only merges *adjacent* equal keys and its result is not
    # subscriptable, so sort by the key first and collate into a dict:
    gs = dict((k, list(g)) for k, g in groupby(sorted(lst, key=func), func))
    return gs.get(True, []), gs.get(False, [])


"Lie Ryan" wrote in message
news:nfi3m.2341$ze1.1...@news-server.bigpond.net.au...
> Brad wrote:
>> On Jul 2, 9:40 pm, "Pablo Torres N."  wrote:
>>> If it is speed that we are after, it's my understanding that map and
>>> filter are faster than iterating with the for statement (and also
>>> faster than list comprehensions).  So here is a rewrite:
>>>
>>> def split(seq, func=bool):
>>> t = filter(func, seq)
>>> f = filter(lambda x: not func(x), seq)
>>> return list(t), list(f)
>>>
>>
>> In my simple tests, that takes 1.8x as long as the original solution.
>> Better than the itertools solution, when "func" is short and fast. I
>> think the solution here would be worse if func was more complex.
>>
>> Either way, what I am still wondering is if people would find a built-
>> in implementation useful?
>>
>> -Brad
>
> A built-in/itertools should always try to provide the general solution
> to be as useful as possible, something like this:
>
> def group(seq, func=bool):
>     ret = {}
>     for item in seq:
>         fitem = func(item)
>         try:
>             ret[fitem].append(item)
>         except KeyError:
>             ret[fitem] = [item]
>     return ret
>
> definitely won't be faster, but it is a much more general solution.
> Basically, the function allows you to group sequences based on the
> result of func(item). It is similar to itertools.groupby() except that
> this also groups non-contiguous items.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Tim Harig
On 2009-07-03, Steven D'Aprano  wrote:
> On Thu, 02 Jul 2009 22:14:18 +, Tim Harig wrote:
>> On 2009-07-02, Paul Rubin  wrote:
>>> Tim Harig  writes:
 If lower is 5 and higher is 3, then it returns 3 because 3 != None in
 the first if.
>>> Sorry, the presumption was that lower <= higher, i.e. the comparison
>>> had already been made and the invariant was enforced by the class
>>> constructor.  The comment should have been more explicit, I guess.
>> That being the case, it might be a good idea either to handle the
>> situation and raise an exception or add:
>> assert self.lower <= self.higher
> Only if you want strange and mysterious bugs whenever somebody runs your 
> code with the -O flag.

The point here is that Rubin presumed that the condition where lower >
higher should never exist.  Therefore, because the given segment of the
program doesn't test it elsewhere, it is possible that a bug in the code
that sets lower > higher could go unnoticed while silently passing the wrong
data.

Therefore, it is better to test that assumption in a way that *will* crash
if something is wrong, for instance, if another method accidentally changes
one of the values.  That would tend to make errors in another method of the
class (or even higher up in this method) more likely to be found.

> asserts are disabled when the -O flag is given.

Right, like defining NDEBUG in C.  In _Writing Solid Code_ by Steve
Maguire, he likens it to having two pieces of code: one for testing where
any errors should cause crashes as early as possible and one for shipping
where it may be better to handle errors if possible from within the code.

> You should never use asserts for testing data. That's not what they're 
> for. They're for testing program logic (and unit-testing).

What I am suggesting here is exactly that.  If lower is always defined such
that it should always be equal to or lower than higher, then there is an
error in the code somewhere if lower is higher by the time this code is
called.  Either the constructor didn't reject input properly or
another method within the class has inadvertently changed higher or lower
in an incorrect way.

In any event, whether I am using it 100% correctly, I want to find any
errors in my code.  If there is an error, I want the program to crash
during testing rather than silently passing bad data.  assert will help
here.

Unfortunately, Rubin is right.  I did make the mistake that the assert will
fail with higher or lower values of None, which is allowed here.  The same
basic premise is correct but will require more complex assertions to allow
that through.
-- 
http://mail.python.org/mailman/listinfo/python-list


PSP Caching

2009-07-03 Thread Johnson Mpeirwe

Hello All,

How do I stop caching of Python Server Pages (or whatever causes changes 
in a page not to be noticed in a web browser)? I am new to developing 
web applications in Python and after looking at implementations of PSP 
like Spyce (which I believe introduces new, unnecessary non-PSP syntax), 
I decided to write my own PSP applications from scratch. When I modify a 
file, I keep getting the old results until I intentionally introduce an 
error (e.g parse error) and correct it after to have the changes 
noticed. There's no proxy (I am working on a windows machine unplugged 
from the network). I have Googled and no documents seem to talk about 
this. Is there any particular mod_python directive I must set in my 
Apache configuration to fix this?


Any help will be highly appreciated.

Johnson
--
http://mail.python.org/mailman/listinfo/python-list


Re: is it possible to write USSD / SMS /SS7 apps in python

2009-07-03 Thread Chris Rebert
On Fri, Jul 3, 2009 at 1:55 AM, Goksie Aruna wrote:
> Can someone give me an insight into these?
>
>   developing ss7 or USSD or SMS apps in python.
>
> is there any existing ones in this manner?

Advice for the future: STFW.

http://pypi.python.org/pypi?%3Aaction=search&term=sms&submit=search

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: is it possible to write USSD / SMS /SS7 apps in python

2009-07-03 Thread Goke Aruna
On Fri, Jul 3, 2009 at 10:21 AM, Chris Rebert  wrote:

> On Fri, Jul 3, 2009 at 1:55 AM, Goksie Aruna wrote:
> > Can someone give me an insight into these?
> >
> >   developing ss7 or USSD or SMS apps in python.
> >
> > is there any existing ones in this manner?
>
> Advice for the future: STFW.
>
> http://pypi.python.org/pypi?%3Aaction=search&term=sms&submit=search
>
> Cheers,
> Chris
> --
> http://blog.rebertia.com
>


thanks Chris,

however, I have checked all the packages there; they are specific to a
particular company or device.

What I am saying is: reading the ITU info on USSD, is it possible to use Python
to write an SS7 application with support for TCAP/MAP, talking to an E1 card to
do the SS7 signalling?

Thanks.

goksie
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Direct interaction with subprocess - the curse of blocking I/O

2009-07-03 Thread yam850
On 1 Jul., 21:30, spillz  wrote:
> On Jun 29, 3:15 pm, Pascal Chambon  wrote:
> > I've had real issues with subprocesses recently : from a python script,
> > on windows, I wanted to "give control" to a command line utility, i.e
> > forward user in put to it and display its output on console. It seems
> > simple, but I ran into walls :
> If you are willing to have a wxPython dependency, wx.Execute handles
> non-blockingi/o with processes on all supported platforms

I made a python method/function for non blocking read from a file
object.
I use it in one of my python programs.
When looking at the code bear in mind that I am not an expert and I am
happy to see comments.

#--
import sys            # needed for the sys.stdin default below
DEBUG = False          # module-level debug flag assumed by the original program

def non_blocking_readline(f_read=sys.stdin, timeout_select=0.0):
    """to readline non blocking from the file object 'f_read'
       for 'timeout_select' see module 'select'"""
    import select
    text_lines = ''                    # empty string
    while True:                        # as long as there are bytes to read
        try:                           # try select
            rlist, wlist, xlist = select.select([f_read], [], [],
                                                timeout_select)
        except:                        # select ERROR
            print >>sys.stderr, ("non_blocking_read select ERROR")
            break
        if DEBUG: print("rlist=%s, wlist=%s, xlist=%s" % (repr(rlist), repr(wlist), repr(xlist)))
        if len(rlist) > 0:
            text_read = f_read.readline()          # get a line
            if DEBUG: print("after read/readline text_read:'%s', len=%s" % (text_read, repr(len(text_read))))
            if len(text_read) > 0:                 # there were some bytes
                text_lines = "".join([text_lines, text_read])
                if DEBUG: print("text_lines:'%s'" % (text_lines))
            else:
                break                  # there was no byte in a line
        else:
            break                      # there was no byte in the f_read
    if text_lines == '':
        return None
    else:
        return text_lines


--
Kurt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 01:39:27 -0700, Paul Rubin wrote:

> Steven D'Aprano  writes:
>> groupby() works on lists.
> 
> >>> a = [1,3,4,6,7]
> >>> from itertools import groupby
> >>> b = groupby(a, lambda x: x%2==1)  # split into even and odd
> >>> c = list(b)
> >>> print len(c)
> 3
> >>> d = list(c[1][1])    # should be [4,6]
> >>> print d  # oops.
> []

I didn't say it worked properly *wink*

Seriously, this behaviour caught me out too. The problem isn't that the 
input data is a list, the same problem occurs for arbitrary iterators. 
From the docs:

[quote]
The operation of groupby() is similar to the uniq filter in Unix. It 
generates a break or new group every time the value of the key function 
changes (which is why it is usually necessary to have sorted the data 
using the same key function). That behavior differs from SQL’s GROUP BY 
which aggregates common elements regardless of their input order.

The returned group is itself an iterator that shares the underlying 
iterable with groupby(). Because the source is shared, when the groupby() 
object is advanced, the previous group is no longer visible. So, if that 
data is needed later, it should be stored as a list
[end quote]

http://www.python.org/doc/2.6/library/itertools.html#itertools.groupby
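
The workaround the docs point to is to sort by the same key first and
materialise each group immediately; a minimal sketch with the data from above:

from itertools import groupby

a = [1, 3, 4, 6, 7]
key = lambda x: x % 2 == 1
collated = dict((k, list(g)) for k, g in groupby(sorted(a, key=key), key))
print collated[False], collated[True]    # -> [4, 6] [1, 3, 7]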




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting rid of —

2009-07-03 Thread Tep
On 3 Jul., 06:40, Simon Forman  wrote:
> On Jul 2, 4:31 am, Tep  wrote:
>
>
>
>
>
> > On 2 Jul., 10:25, Tep  wrote:
>
> > > On 2 Jul., 01:56, MRAB  wrote:
>
> > > > someone wrote:
> > > > > Hello,
>
> > > > > how can I replace '—' sign from string? Or do split at that character?
> > > > > Getting unicode error if I try to do it:
>
> > > > > UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in position
> > > > > 1: ordinal not in range(128)
>
> > > > > Thanks, Pet
>
> > > > > script is # -*- coding: UTF-8 -*-
>
> > > > It sounds like you're mixing bytestrings with Unicode strings. I can't
> > > > be any more helpful because you haven't shown the code.
>
> > > Oh, I'm sorry. Here it is
>
> > > def cleanInput(input):
> > >     return input.replace('—', '')
>
> > I also need:
>
> > #input is html source code, I have problem with only this character
> > #input = 'foo — bar'
> > #return should be foo
> > def splitInput(input):
> >     parts = input.split(' — ')
> >     return parts[0]
>
> > Thanks!
>
> Okay people want to help you but you must make it easy for us.
>
> Post again with a small piece of code that is runnable as-is and that
> causes the traceback you're talking about, AND post the complete
> traceback too, as-is.
>
> I just tried a bit of your code above in my interpreter here and it
> worked fine:
>
> |>>> data = 'foo — bar'
> |>>> data.split('—')
> |['foo ', ' bar']
> |>>> data = u'foo — bar'
> |>>> data.split(u'—')
> |[u'foo ', u' bar']
>
> Figure out the smallest piece of "html source code" that causes the
> problem and include that with your next post.

The problem was, I had converted the "html source code" to a unicode object
and didn't encode it back to utf-8 before using split...
Thanks for the help and sorry for the not-so-smart question
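
For the record, a minimal hypothetical snippet of that mismatch (the byte
string is assumed to be UTF-8 here; the real page may well use another
encoding):

# -*- coding: UTF-8 -*-
html = 'foo — bar'               # byte string, e.g. raw page source
text = html.decode('utf-8')      # now a unicode object

# Splitting the unicode object with a *byte-string* separator makes Python
# decode the separator as ASCII, which fails on '—' (UnicodeDecodeError).
# Keeping both sides the same type avoids it:
print text.split(u' — ')[0]      # unicode separator: OK
print html.split(' — ')[0]       # both plain byte strings: also OK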
Pet

>
> HTH,
> ~Simon
>
> You might also read this:http://catb.org/esr/faqs/smart-questions.html

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question of style

2009-07-03 Thread Jean-Michel Pichavant

Simon Forman wrote:

Hey I was hoping to get your opinions on a sort of minor stylistic
point.
These two snippets of code are functionally identical. Which would you
use and why?
The first one is easier [for me anyway] to read and understand, but
  
Easier for you but not for those who are familiar with the standard way 
of python coding. (by standard I don't mean the "only Godly way of 
coding", there's room for custom rules)

But I think most of the people will write:
if self.higher is None:
    return self.lower
if self.lower is None:
    return self.higher
# here neither are None


slightly less efficient, while the second is [marginally] harder to
follow but more efficient.

## First snippet

if self.higher is self.lower is None: return
if self.lower is None: return self.higher
if self.higher is None: return self.lower

## Second snippet

if self.higher is None:
    if self.lower is None:
        return
    return self.lower
if self.lower is None:
    return self.higher

What do you think?

(One minor point: in the first snippet, the "is None" in the first
line is superfluous in the context in which it will be used, the only
time "self.lower is self.higher" will be true is when they are both
None.)
  


--
http://mail.python.org/mailman/listinfo/python-list


Is code duplication allowed in this instance?

2009-07-03 Thread Klone
Hi all. I believe in programming there is a common consensus to avoid
code duplication; I suppose terms like 'DRY' are meant to back
this idea. Anyway, I'm working on a little project and I'm using TDD
(still trying to get the hang of the process) and am trying to test the
functionality within a method. However, as it happens, to verify the
output from the method I have to employ the same algorithm as in the
method to do the verification, since there is no way I can determine
the output beforehand.

So in this scenario, is it OK to duplicate the algorithm to be tested
within the test code, or to refactor the method such that it can be used
within the test code to verify itself(??).
-- 
http://mail.python.org/mailman/listinfo/python-list


GOZERBOT 0.9.1 BETA2 released

2009-07-03 Thread Bart Thate
GOZERBOT has a new website !! check it out at http://gozerbot.org.
This is all in preparation for the 0.9.1 release, and the latest
GOZERBOT beta has been released as well. Please try this version and
let me know how it goes.

Install is as simple as .. easy_install gozerbot gozerplugs, see
README.

This release will be used to move GOZERBOT 0.9 into the debian
repositories and freebsd ports.

we can be contacted on #dunkbots EFNET/IRCNET or use http://dev.gozerbot.org
for any bugs you might find.

NEW IN THIS RELEASE:

* GOZERBOT is now truly free as it no longer depends on the GPL licensed
xmpppy; its own xmpp package has been implemented for this.

* GOZERBOT now depends on setuptools to install the proper packages

* the gozerbot-nest script can be used to install all dependencies and bot
code in 1 directory that can be run by the user (no root required)

* morphs are added that allow for encryption of input and output
streams (not used yet)

ABOUT GOZERBOT:

GOZERBOT is a channel bot that aids with conversation in irc channels
and jabber conference rooms. It's mainly used to serve rss feeds and to
have custom commands made for the channel. More than just a channel
bot, GOZERBOT aims to provide a platform for the user to program his
own bot and make it into something that's useful. This is done with a
plugin structure that makes it easy to program your own. But GOZERBOT
comes with some batteries included; there are now over 100 plugins
already written and ready for use.

groet,

Bart
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Paul Moore
2009/7/3 Brad :
> Perhaps true, but it would be a nice convenience (for me) as a built-
> in written in either Python or C. Although the default case of a bool
> function would surely be faster.

The chance of getting this accepted as a builtin is essentially zero.
To be a builtin, as opposed to being in the standard library,
something has to have a very strong justification.

This suggestion may find a home in the standard library, although it's
not entirely clear where (maybe itertools, although it's not entirely
obvious that it's a fit there).

You'd have to justify this against the argument "not every 2-3 line
function needs to be built in". Personally, I'm not sure it passes
that test - sure, it's a useful function, but it's not that hard to
write when you need it. It may be better as a recipe in the cookbook.
Or if it's close enough to the spirit of the itertools, it may be
suitable as a sample in the itertools documentation (the "recipes"
section).

Paul.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stringio+tarfile

2009-07-03 Thread superpollo
> First problem I see is all those numbers before the lines.  That's not
> valid python.
>
> Assuming that was a transcription error

not so. i intentionally add line numbers to facilitate reference to the
code, but if it is a nuisance i will not include them anymore.


thanks
--
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-07-03 Thread Horace Blegg
I've been kinda following this. I have a cousin who is permanently wheel
chair bound and doesn't have perfect control of her hands, but still manages
to use a computer and interact with society. However, the idea/thought of
disabled programmers was new to me/hadn't ever occurred to me.

You say that using your hands is painful, but what about your feet? Wouldn't
it be possible to rig up some kind of foot pedal for shift/caps lock? Kinda
like the power pedal used with sewing machines, so the hands are free to
hold fabric.

I don't mean this in a condescending manner, and I apologize if you take it
as such. I'm genuinely curious if you think something like this could work.

The way I was envisioning it working last night (and I haven't the faintest
clue how SR works, nor have I ever used SR) was that you would hit the foot
pedal, which would tell the SR program to capitalize the first letter of
the next word (a smart shift, basically, so you don't end up doing something
like ... WONderland -or- "stocks are up 1,0))% TOday".)

Possible? Stupid?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is code duplication allowed in this instance?

2009-07-03 Thread Rickard Lindberg
> However, as it happens, to verify the
> output from the method I have to employ the same algorithm as in the
> method to do the verification, since there is no way I can determine
> the output beforehand.

Can you clarify this scenario a bit?

If you are performing black-box testing I don't see why you need to
use the same algorithm in the test code. But maybe your case is
special. On the other hand you can not perform black-box testing if
the output is not known for a given input.

-- 
Rickard Lindberg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stringio+tarfile

2009-07-03 Thread superpollo
thanks to the guys who bothered to answer me even using their crystal
ball ;-)


i'll try to be more specific.

yes: i want to create a tar file in memory, and add some content from a
memory buffer...

my platform:

$ uname -a
Linux fisso 2.4.24 #1 Thu Feb 12 19:49:02 CET 2004 i686 GNU/Linux
$ python -V
Python 2.3.4

following some suggestions i modified the code:

$ cat tar001.py
import tarfile
import StringIO
sfo1 = StringIO.StringIO("one\n")
sfo2 = StringIO.StringIO("two\n")
tfo = StringIO.StringIO()
tar = tarfile.open(fileobj=tfo , mode="w")
ti = tar.gettarinfo(fileobj=tfo)
for sfo in [sfo1 , sfo2]:
    tar.addfile(fileobj=sfo , tarinfo=ti)
print tfo

and that's what i get:

$ python tar001.py > tar001.out
Traceback (most recent call last):
  File "tar001.py", line 7, in ?
ti = tar.gettarinfo(fileobj=tfo)
  File "/usr/lib/python2.3/tarfile.py", line 1060, in gettarinfo
name = fileobj.name
AttributeError: StringIO instance has no attribute 'name'

can you help?

TIA

ps: i'd also like the tar file to have names for the buffers, so that
once output the file can be untarred with /bin/tar into regular files...

--
http://mail.python.org/mailman/listinfo/python-list


Re: Searching equivalent to C++ RAII or deterministic destructors

2009-07-03 Thread Ulrich Eckhardt
Thanks to all that answered, in particular I wasn't aware of the existence
of the __del__ function.

For completeness' sake, I think I have found another way to not really solve
but at least circumvent the problem: weak references. If I understand
correctly, those would allow me to pass out handles to the resources and,
if some code decides it is time, release the resources and render all the
weak references invalid. At least I don't suffer resource leaks but rather
get meaningful errors that way, which is enough for my case.
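
For what it's worth, a minimal sketch of that handle idea (the Resource class
and its methods are purely illustrative; weakref.proxy raises ReferenceError
once the real object is gone):

import weakref

class Resource(object):
    """Stands in for an object owning some external resource."""
    def read(self):
        return "some data"
    def release(self):
        print "resource released"

owned = Resource()                # kept by whatever manages lifetimes
handle = weakref.proxy(owned)     # weak handle given out to client code

print handle.read()               # works while the real object is alive
owned.release()
del owned                         # manager drops the last strong reference
try:
    handle.read()
except ReferenceError:
    print "stale handle: resource already released"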

cheers!

Uli

-- 
Sator Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stringio+tarfile

2009-07-03 Thread Peter Otten
superpollo wrote:

> thanks to the guys who bothered to answer me even using their crystal
> ball ;-)
> 
> i'll try to be more specific.
> 
> yes: i want to create a tar file in memory, and add some content from a
> memory buffer...
> 
> my platform:
> 
> $ uname -a
> Linux fisso 2.4.24 #1 Thu Feb 12 19:49:02 CET 2004 i686 GNU/Linux
> $ python -V
> Python 2.3.4
> 
> following some suggestions i modified the code:
> 
> $ cat tar001.py
> import tarfile
> import StringIO
> sfo1 = StringIO.StringIO("one\n")
> sfo2 = StringIO.StringIO("two\n")
> tfo = StringIO.StringIO()
> tar = tarfile.open(fileobj=tfo , mode="w")
> ti = tar.gettarinfo(fileobj=tfo)

gettarinfo() expects a real file, not a file-like object.
You have to create your TarInfo manually.

> for sfo in [sfo1 , sfo2]:
>     tar.addfile(fileobj=sfo , tarinfo=ti)
> print tfo
> 
> and that's what i get:
> 
> $ python tar001.py > tar001.out
> Traceback (most recent call last):
>File "tar001.py", line 7, in ?
>  ti = tar.gettarinfo(fileobj=tfo)
>File "/usr/lib/python2.3/tarfile.py", line 1060, in gettarinfo
>  name = fileobj.name
> AttributeError: StringIO instance has no attribute 'name'
> 
> can you help?

I recommend that you have a look into the tarfile module's source code.

The following seems to work:

import sys
import time
import tarfile
import StringIO

sf1 = "first.txt", StringIO.StringIO("one one\n")
sf2 = "second.txt", StringIO.StringIO("two\n")
tf = StringIO.StringIO()

tar = tarfile.open(fileobj=tf , mode="w")

mtime = time.time()
for name, f in [sf1 , sf2]:
    ti = tarfile.TarInfo(name)
    ti.size = f.len
    ti.mtime = mtime
    # add more attributes as needed
    tar.addfile(ti, f)

sys.stdout.write(tf.getvalue())

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Python debugger

2009-07-03 Thread srinivasan srinivas

Hi,
Could you suggest some python debuggers?

Thanks,
Srini


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Searching equivalent to C++ RAII or deterministic destructors

2009-07-03 Thread Jack Diederich
On Thu, Jul 2, 2009 at 2:36 PM, Carl Banks wrote:
> On Jul 2, 3:12 am, Ulrich Eckhardt  wrote:
>> Bearophile wrote:
>> > Ulrich Eckhardt:
>> >> a way to automatically release the resource, something
>> >> which I would do in the destructor in C++.
>>
>> > Is this helpful?
>> >http://effbot.org/pyref/with.htm
>>
>> Yes, it aims in the same direction. However, I'm not sure this applies to my
>> case. The point is that the resource handle is not just used locally in a
>> restricted scope but it is allocated and stored. The 'with' is something
>> that makes sense in the context of mutex locking, where you have a
>> well-defined critical section. What I need is something similar to open(),
>> which returs a file. When the last reference to that object goes out of
>> scope, the underlying file object is closed.
>
> On CPython you can do it with a __del__ attribute.
>
> Warning: objects with a __del__ attribute prevent reference cycle
> detection, which can potentially lead to memory (and resource) leaks.
> So you must be careful to avoid creating reference loops with that
> object.

WARNING-er: As Carl points out, adding a __del__ method actually makes
it /less/ likely that your object will be cleaned up.  __del__ is only
useful if you have a complicated tear-down procedure.  Since you come
from C++ RAII land (my old haunt) your objects probably only allocate
one resource each, or worse: a bunch of objects that allocate one
resource each.  In that case __del__ will always hurt you.

The C++ temptation is to match every __init__ with a __del__.  A
better rule of thumb is to only add a __del__ method after confirming
with someone else that it would be useful.  Better still is to ask
them by postal courier.  For best results that someone should be your
grandmother.

> Note that file objects have a close method; you can explicitly close
> it at any time.  Your object should follow that example, and define a
> close (or release, or whatever) method.  I'd recommend making an
> effort to call it and to rely on __del__ as little as possible.

This.  If you care enough about a resource to write a __del__ method
you care enough to clean it up explicitly.  'with' blocks are very
nice for that.
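
To make that concrete, here is a minimal sketch (added for illustration,
not Jack's code) of a resource wrapper with an explicit close() plus
context-manager support, so callers can use a 'with' block instead of
relying on __del__:

class ManagedFile(object):
    """Explicit close() plus 'with' support; no __del__ needed."""
    def __init__(self, path, mode="r"):
        self.handle = open(path, mode)      # stands in for any acquired resource

    def close(self):
        if self.handle is not None:
            self.handle.close()
            self.handle = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        self.close()
        return False                        # don't swallow exceptions

# deterministic cleanup without __del__:
with ManagedFile("example.txt", "w") as f:
    f.handle.write("hello\n")
# f.close() has run here, whether or not the block raised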


-jack
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Pablo Torres N.
On Fri, Jul 3, 2009 at 06:01, Paul Moore wrote:
> 2009/7/3 Brad :
>> Perhaps true, but it would be a nice convenience (for me) as a built-
>> in written in either Python or C. Although the default case of a bool
>> function would surely be faster.
>
> The chance of getting this accepted as a builtin is essentially zero.
> To be a builtin, as opposed to being in the standard library,
> something has to have a very strong justification.

That's right.  Mr. schickb, I think what you need is a few concrete
examples as to where this function would be beneficial, so it can be
judged objectively.

-- 
Pablo Torres N.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is code duplication allowed in this instance?

2009-07-03 Thread Francesco Bochicchio
On Jul 3, 12:46 pm, Klone  wrote:
> Hi all. I believe in programming there is a common consensus to avoid
> code duplication, I suppose such terms like 'DRY' are meant to back
> this idea. Anyways, I'm working on a little project and I'm using TDD
> (still trying to get a hang of the process) and am trying to test the
> functionality within a method. Whoever it so happens to verify the
> output from the method I have to employ the same algorithm within the
> method to do the verification since there is no way I can determine
> the output before hand.
>
> So in this scenario is it OK to duplicate the algorithm to be tested
> within the test codes or refactor the method such that it can be used
> within test codes to verify itself(??).

If the purpose of the test is to verify the algorithm, you obviously
should not use the algorithm to verify itself ... you should use a set
of pairs (input data, expected output data) that you know is well
representative of the data your algorithm will process. Possibly, to
prepare the test data set you might need a different - and already
proven - implementation of the algorithm.

Another thing I sometimes do when testing mathematical functions is to
use a counter-proof: for instance, if my function computes the roots of
a quadratic equation, the test verifies that plugging the roots back
into the equation actually gives (almost) zero. This kind of test might
not be as rigorous as preparing the data set with the known answers,
but it is easier to set up and can give you a first idea of whether
your code is "correct enough" to stand more formal proof.
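
A minimal sketch of that counter-proof idea (added for illustration;
quadratic_roots is a made-up function standing in for the code under test):

import math
import unittest

def quadratic_roots(a, b, c):
    # made-up function under test: real roots of a*x**2 + b*x + c = 0
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

class CounterProofTest(unittest.TestCase):
    def test_roots_satisfy_equation(self):
        a, b, c = 2.0, -3.0, 1.0
        for x in quadratic_roots(a, b, c):
            # plug each root back in: the residual should be (almost) zero
            self.assertAlmostEqual(a * x * x + b * x + c, 0.0, places=9)

if __name__ == '__main__':
    unittest.main()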

Ciao

FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python debugger

2009-07-03 Thread Bruno Desthuilliers

srinivasan srinivas a écrit :

Hi,
Could you suggest some python debuggers?


http://docs.python.org/library/pdb.html#module-pdb

HTH
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding an object to the global namespace through " f_globals" is that allowed ?

2009-07-03 Thread Bruno Desthuilliers

Stef Mientki a écrit :

hello,

I need to add an object's name to the global namespace.
The reason for this is to create an environment,
where you can add some kind of math environment,
where no Python knowledge is needed.

The next statement works,
but I'm not sure if it will have any dramatic side effects,
other than overruling a possible object with the name A

def some_function ( ...) :
 A = object ( ...)
 sys._getframe(1).f_globals [ Name ] = A




Anything wrong with the 'global' statement ?

Python 2.5.2 (r252:60911, Oct  5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
pythonrc start
pythonrc done
>>> def foo():
... global a
... a = 42
...
>>> a
Traceback (most recent call last):
  File "", line 1, in 
NameError: name 'a' is not defined
>>> foo()
>>> a
42
>>>


--
http://mail.python.org/mailman/listinfo/python-list


Re: Is code duplication allowed in this instance?

2009-07-03 Thread Lie Ryan
Klone wrote:
> Hi all. I believe in programming there is a common consensus to avoid
> code duplication, I suppose such terms like 'DRY' are meant to back
> this idea. Anyways, I'm working on a little project and I'm using TDD
> (still trying to get a hang of the process) and am trying to test the
> functionality within a method. Whoever it so happens to verify the
> output from the method I have to employ the same algorithm within the
> method to do the verification since there is no way I can determine
> the output before hand.
> 
> So in this scenario is it OK to duplicate the algorithm to be tested
> within the test codes or refactor the method such that it can be used
> within test codes to verify itself(??).

When unittesting a complex output, usually the first generation code
would generate the output and then you (manually) verify this output and
copy the output to the test code. The testing code should be as dumb as
possible to avoid bugs in the testing code itself. The most important
thing is not to use a possibly buggy algorithm implementation to check
the algorithm itself. If your test code looks like this:

import unittest

# this is the function to be tested
def func(a, b):
    return a + b

class TC(unittest.TestCase):
    def test_foo(self):
        a, b = 10, 20 # worse: use random module
        result = a + b
        self.assertEqual(func(a, b), result)

then something is definitely wrong. Instead the testing code should be
simple and stupid like:

class TC(unittest.TestCase):
    def test_foo(self):
        self.assertEqual(func(10, 20), 30)

There are exceptions though, such as if you're 100% sure your
first-generation program is correct and you simply want to replicate the
same function with different algorithm, or probably optimizing the
function. For that case, you could do something like this:

def func(a, b):
    # some fancy new algo
    pass

class TC(unittest.TestCase):
    def original_func(self, a, b):
        return a + b
    def test_foo(self):
        a, b = 10, 20   # pick representative inputs
        self.assertEquals(func(a, b), self.original_func(a, b))
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python debugger

2009-07-03 Thread Clovis Fabricio
2009/7/3 srinivasan srinivas :
> Could you suggest some python debuggers?

Two graphical debugger frontends:

http://www.gnu.org/software/emacs/
http://winpdb.org/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 376

2009-07-03 Thread Tarek Ziadé
Ok here's my proposal for the checksum :

- I'll add the "hash_type:" suffix in the record file

- install will get a new option to define what hash should be used
when writing the RECORD file
  it will default to SHA1 for 2.7/3.2

- pkgutil, that reads the RECORD files, will pick the right hash
function by looking at the suffix

Now, for the security part, it's another story that goes beyond the scope
of this PEP. Notice, though, that the PEP provides a place-holder for
distribution metadata, so it could host a key later on.
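
As a rough sketch of what such a prefixed entry and its verification could
look like (illustration only, not text from the PEP; it assumes a plain
"hashname:hexdigest" form):

import hashlib

def record_checksum(path, hash_name="sha1"):
    # produce the "hashname:hexdigest" string for one RECORD entry
    h = hashlib.new(hash_name)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), ""):
            h.update(chunk)
    return "%s:%s" % (hash_name, h.hexdigest())

def verify_checksum(path, recorded):
    # pick the hash function by looking at the prefix, then recompute and compare
    hash_name = recorded.split(":", 1)[0]
    return record_checksum(path, hash_name) == recorded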


On Thu, Jul 2, 2009 at 8:00 PM, Charles Yeomans wrote:
>
> On Jul 2, 2009, at 1:37 PM, Lie Ryan wrote:
>
>> Joachim Strömbergson wrote:
>>>
>>> Aloha!
>>>
>>> Tarek Ziadé wrote:

 The prefix is a good idea but since it's just a checksum to control
 that the file hasn't changed
 what's wrong with using a weak hash algorithm like md5 or now sha1 ?
>>>
>>> Because it creates a dependency to an old algorithm that should be
>>> deprecated. Also using MD5, even for a thing like this might make people
>>> belive that it is an ok algorithm to use - "Hey, it is used by the
>>> default install in Python, so it must be ok, right?"
>>>
>>> If we flip the argument around: Why would you want to use MD5 instead of
>>> SHA-256? For the specific use case the performance will not (should not)
>>> be an issue.
>>>
>>> As I wrote a few mails ago, it is time to move forward from MD5 and
>>> designing something in 2009 that will be around for many years that uses
>>> MD5 is (IMHO) a bad design decision.
>>>
 If someone wants to modify a file of a distribution he can recreate
 the checksum as well,
 the only secured way to prevent that would be to use gpg keys but
 isn't that overkill for what we need ?
>>>
>>> Actually, adding this type of security would IMHO be a good idea.
>>>
>>
>> Now, are we actually talking about security or checksum?
>>
>> It has been known for years that MD5 is weak, weak, weak. Not just in
>> the recent years. But it doesn't matter since MD5 was never designed for
>> security, MD5 was designed to protect against random bits corruption. If
>> you want security, look at least to GPG. For data protection against
>> intentional, malicious forging, definitely MD5 is the wrong choice. But
>> when you just want a simple checksum to ensure that a faulty router
>> somewhere in the internet backbone doesn't destroy your data, MD5 is
>> a fine algorithm.
>> --
>
> On the contrary, MD5 was intended to be a cryptographic hash function, not a
> checksum.
>
> Charles Yeomans
>
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Tarek Ziadé | http://ziade.org
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is code duplication allowed in this instance?

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 03:46:32 -0700, Klone wrote:

> Hi all. I believe in programming there is a common consensus to avoid
> code duplication, I suppose such terms like 'DRY' are meant to back this
> idea. Anyways, I'm working on a little project and I'm using TDD (still
> trying to get a hang of the process) and am trying to test the
> functionality within a method. Whoever it so happens to verify the
> output from the method I have to employ the same algorithm within the
> method to do the verification since there is no way I can determine the
> output before hand.
> 
> So in this scenario is it OK to duplicate the algorithm to be tested
> within the test codes or refactor the method such that it can be used
> within test codes to verify itself(??).

Neither -- that's essentially a pointless test. The only way to 
*correctly* test a function is to compare the result of that function to 
an *independent* test. If you test a function against itself, of course 
it will always pass:

def plus_one(x):
"""Return x plus 1."""
return x-1  # Oops, a bug.

# Test it is correct:
assert plus_one(5) == plus_one(5)


The only general advice I can give is:

(1) Think very hard about finding an alternative algorithm to calculate 
the same result. There usually will be one.

(2) If there's not, at least come up with an alternative implementation. 
It doesn't need to be particularly efficient, because it will only be 
called for testing. A rather silly example:

def plus_one_testing(x):
"""Return x plus 1 using a different algorithm for testing."""
if type(x) in (int, long):
temp = 1
        for i in range(x):
temp += 1
return temp
else:
floating_part = x - int(x)
return floating_part + plus_one_testing(int(x))

(The only problem is, if a test fails, you may not be sure whether it's 
because your test function is wrong or your production function.)

(3) Often you can check a few results by hand. Even if it takes you 
fifteen minutes, at least that gives you one good test. If need be, get a 
colleague to check your results.

(4) Test failure modes. It might be really hard to calculate func(arg) 
independently for all possible arguments, but if you know that func(obj) 
should fail, at least you can test that. E.g. it's hard to test whether 
or not you've downloaded the contents of a URL correctly without actually 
downloading it, but you know that http://example.com/ should fail because 
that domain doesn't exist.

(5) Test the consequences of your function rather than the exact results. 
E.g. if it's too difficult to calculate plus_one(x) independently:

assert plus_one(x) > x  # except for x = inf or -inf
assert plus_one( -plus_one(x) ) == -x  # -(x+1)+1 = -x

(6) While complete test coverage is the ideal you aspire to, any tests 
are better than no tests. But they have to be good tests to be useful. 
Even *one* test is better than no tests.



Hope this helps.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Lie Ryan
tsangpo wrote:
> Just a shorter implementation:
> 
> from itertools import groupby
> def split(lst, func):
> gs = groupby(lst, func)
> return list(gs[True]), list(gs[False])
> 

As you're replying to my post, I assume you meant a shorter
implementation my function. But it doesn't do the same thing. The idea
with my group() is similar to what Steven D'Aprano is describing in
another branch of this thread (i.e. splitting not only based on True and
False, but arbitrary groupings, e.g. 'tru', 'flash' or perhaps -1, 0, 1).

For example:
>>> def group(seq, func=bool):
... ret = {}
... for item in seq:
... fitem = func(item)
... try:
... ret[fitem].append(item)
... except KeyError:
... ret[fitem] = [item]
... return ret
...
>>> group(range(10), lambda x: x%3)
{0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}
>>> # the next one is the OP's split
>>> group(['true', '', [], [''], 'false'], bool)
{False: ['', []], True: ['true', [''], 'false']}
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stringio+tarfile

2009-07-03 Thread Lie Ryan
superpollo wrote:
>> First problem I see is all those numbers before the lines.  That's not
>> valid python.
>>
>> Assuming that was a transcription error
> 
> not so. i intentionally add linenumbers to facilitare reference to the
> code, but if it is a nuisance i will not include them anymore.

The line numbering makes it impossible for others to copy and paste to
the interactive interpreter to see the problem code running, and makes
it hard to copy and paste into a script file. For code reference,
quoting the problematic code is usually much easier.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread kj
In <025db0a6$0$20657$c3e8...@news.astraweb.com> Steven D'Aprano 
 writes:

>On Thu, 02 Jul 2009 11:19:40 +, kj wrote:

>> If the concern is efficiency for such cases, then simply implement
>> optional offset and length parameters for re.search(), to specify any
>> arbitrary substring to apply the search to.  To have a special-case
>> re.match() method in addition to a general re.search() method is
>> antithetical to language minimalism, and plain-old bizarre.  Maybe
>> there's a really good reason for it, but it has not been mentioned yet.

>There is, and it has.

I "misspoke" earlier.  I should have written "I'm *sure* there's
a really good reason for it."  And I think no one in this thread
(myself included, of course) has a clue of what it is.  I miss the
days when Guido still posted to comp.lang.python.  He'd know. 

Regarding the "practicality beats purity" line, it's hard to think
of a better motto for *Perl*, with all its practicality-oriented
special doodads.  (And yes, I know where the "practicality beats
purity" line comes from.)  Even *Perl* does not have a special
syntax for the task that re.match is supposedly tailor-made for,
according to the replies I've received.  Given that it is so trivial
to implement all of re.match's functionality with only one additional
optional parameter for re.search (i.e. pos), it is absurd to claim
that re.match is necessary for the sake of this special functionality.
The justification for re.match must be elsewhere.

But thanks for letting me know that I'm entitled to my opinion.
That's a huge relief.

kj

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread MRAB

kj wrote:

In <025db0a6$0$20657$c3e8...@news.astraweb.com> Steven D'Aprano 
 writes:


On Thu, 02 Jul 2009 11:19:40 +, kj wrote:



If the concern is efficiency for such cases, then simply implement
optional offset and length parameters for re.search(), to specify any
arbitrary substring to apply the search to.  To have a special-case
re.match() method in addition to a general re.search() method is
antithetical to language minimalism, and plain-old bizarre.  Maybe
there's a really good reason for it, but it has not been mentioned yet.



There is, and it has.


I "misspoke" earlier.  I should have written "I'm *sure* there's
a really good reason for it."  And I think no one in this thread
(myself included, of course) has a clue of what it is.  I miss the
days when Guido still posted to comp.lang.python.  He'd know. 


Regarding the "practicality beats purity" line, it's hard to think
of a better motto for *Perl*, with all its practicality-oriented
special doodads.  (And yes, I know where the "practicality beats
purity" line comes from.)  Even *Perl* does not have a special
syntax for the task that re.match is supposedly tailor-made for,
according to the replies I've received.  Given that it is so trivial
to implement all of re.match's functionality with only one additional
optional parameter for re.search (i.e. pos), it is absurd to claim
that re.match is necessary for the sake of this special functionality.
The justification for re.match must be elsewhere.

But thanks for letting me know that I'm entitled to my opinion.
That's a huge relief.


As I wrote, re.match anchors the match whereas re.search doesn't. An
alternative would have been to implement Perl's \G anchor, but I believe
that that was invented after the re module was written.
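
A quick illustration of the difference (example added here, not MRAB's):

import re

print re.search(r'\d+', 'abc 123')   # finds '123' anywhere in the string
print re.match(r'\d+', 'abc 123')    # None: match is anchored at the start
print re.match(r'abc', 'abc 123')    # matches, but only checks the beginning
print re.match(r'abc$', 'abc 123')   # None: the end must be anchored explicitly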
--
http://mail.python.org/mailman/listinfo/python-list


Clarity vs. code reuse/generality

2009-07-03 Thread kj


I will be teaching a programming class to novices, and I've run
into a clear conflict between two of the principles I'd like to
teach: code clarity vs. code reuse.  I'd love your opinion about
it.

The context is the concept of a binary search.  In one of their
homeworks, my students will have two occasions to use a binary
search.  This seemed like a perfect opportunity to illustrate the
idea of abstracting commonalities of code into a re-usable function.
So I thought that I'd code a helper function, called _binary_search,
that took five parameters: a lower limit, an upper limit, a
one-parameter function, a target value, and a tolerance (epsilon).
It returns the value of the parameter for which the value of the
passed function is within the tolerance of the target value.

This seemed straightforward enough, until I realized that, to be
useful to my students in their homework, this _binary_search function
had to handle the case in which the passed function was monotonically
decreasing in the specified interval...

The implementation is still very simple, but maybe not very clear,
particularly to programming novices (docstring omitted):

def _binary_search(lo, hi, func, target, epsilon):
    assert lo < hi
    assert epsilon > 0
    sense = cmp(func(hi), func(lo))
    if sense == 0:
        return None
    target_plus = sense * target + epsilon
    target_minus = sense * target - epsilon
    while True:
        param = (lo + hi) * 0.5
        value = sense * func(param)
        if value > target_plus:
            hi = param
        elif value < target_minus:
            lo = param
        else:
            return param

        if lo == hi:
            return None

My question is: is the business with sense and cmp too "clever"?

Here's the rub: the code above is more general (hence more reusable)
by virtue of this trick with the sense parameter, but it is also
a bit harder to understand.

This is not an unusual situation.  I find that the process of
abstracting out common logic often results in code that is harder
to read, at least for the uninitiated...

I'd love to know your opinions on this.

TIA!

kj

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-07-03 Thread Eric S. Johansson
Horace Blegg wrote:
> I've been kinda following this. I have a cousin who is permanently wheel
> chair bound and doesn't have perfect control of her hands, but still
> manages to use a computer and interact with society. However, the
> idea/thought of disabled programmers was new to me/hadn't ever occurred
> to me.
> 
> You say that using your hands is painful, but what about your feet?
> Wouldn't it be possible to rig up some kind of foot peddle for
> shift/caps lock? Kinda like the power peddle used with sowing machines,
> so the hands are free to hold fabric.
> 
> I don't mean this in a condescending manor, and I apologize if you take
> it as such. I'm genuinely curious if you think something like this could
> work.
> 
> The way I was envisioning it working last night (and I haven't the
> faintest clue how SR works, nor have I ever used SR) was that you would
> hit the foot peddle, which would tell the SR program to capitalize the
> first letter of the next word (a smart shift, basically, so you don't
> end up doing something like ... WONderland -or- "stocks are up 1,0))%
> TOday".)
> 
> Possible? Stupid?
> 
it's not stupid.

People have used foot pedals for decades for a variety of controls. I don't
think foot pedals would work for me because when I am dictating, I pace.
Standing, sitting, I pace. With a cord headset, I'm forced to stay within about
4 feet of the computer. But when I'm using a Bluetooth headset, I will sometimes
ramble as far as 10 or 15 feet from the computer. It helps if I make the font
larger so I can glance over and see what kind of errors I've gotten.

I really love a Bluetooth headset with speech recognition. It's so liberating.

Your question about foot pedals makes me think of an alternative: would it
make sense to have a handheld keyboard which would be used for
command-and-control functionality or as an adjunct to speech recognition
use? It would have to be designed in such a way that it doesn't aggravate a
hand injury, which may not be possible. Anyway, just thinking out loud.

-- 
http://mail.python.org/mailman/listinfo/python-list


VirtualEnv

2009-07-03 Thread Ronn Ross
I'm attempting to write a bootstrap script for virtualenv. I just want to do
a couple of easy_install's after the environment is created. It was fairly
easy to create the script, but I can't figure out how to implement it. The
documentation was not of much help. Can someone please point me in the right
direction?
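
In case it helps, here is a rough sketch (untested, added for illustration)
based on virtualenv's create_bootstrap_script() hook; the package names and
paths below are only placeholders:

# make_bootstrap.py -- writes a bootstrap script with an after_install hook
import textwrap
import virtualenv

EXTRA = textwrap.dedent("""
import os, subprocess

def after_install(options, home_dir):
    # called by the generated bootstrap script once the virtualenv exists
    easy_install = os.path.join(home_dir, 'bin', 'easy_install')  # 'Scripts' on Windows
    for pkg in ('simplejson', 'nose'):   # placeholder package names
        subprocess.call([easy_install, pkg])
""")

if __name__ == '__main__':
    open('bootstrap.py', 'w').write(virtualenv.create_bootstrap_script(EXTRA))
    # then run:  python bootstrap.py /path/to/new/env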
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread Aahz
In article , kj   wrote:
>In <025db0a6$0$20657$c3e8...@news.astraweb.com> Steven D'Aprano 
> writes:
>>On Thu, 02 Jul 2009 11:19:40 +, kj wrote:
>>>
>>> If the concern is efficiency for such cases, then simply implement
>>> optional offset and length parameters for re.search(), to specify any
>>> arbitrary substring to apply the search to.  To have a special-case
>>> re.match() method in addition to a general re.search() method is
>>> antithetical to language minimalism, and plain-old bizarre.  Maybe
>>> there's a really good reason for it, but it has not been mentioned yet.
>>
>>There is, and it has.
>
>I "misspoke" earlier.  I should have written "I'm *sure* there's
>a really good reason for it."  And I think no one in this thread
>(myself included, of course) has a clue of what it is.  I miss the
>days when Guido still posted to comp.lang.python.  He'd know. 

You may find this enlightening:

http://www.python.org/doc/1.4/lib/node52.html
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

"as long as we like the same operating system, things are cool." --piranha
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 14:05:08 +, kj wrote:

> ... I find that the processing of
> abstracting out common logic often results in code that is harder to
> read ...

Yes. There is often a conflict between increasing abstraction and ease of 
understanding.



[...]
> The implementation is still very simple, but maybe not very clear,
> particularly to programming novices (docstring omitted):
> 
> def _binary_search(lo, hi, func, target, epsilon):
> assert lo < hi
> assert epsilon > 0

You're using assert for data sanitation. That is a very bad habit for you 
to be teaching novices. Your code will fail to behave as expected 
whenever the caller runs Python with the -O switch.
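
For illustration (a sketch added here, not Steven's code), the checks can
be made explicit so they survive -O:

def _binary_search(lo, hi, func, target, epsilon):
    # explicit checks survive "python -O"; assert statements do not
    if not lo < hi:
        raise ValueError("lo must be less than hi")
    if epsilon <= 0:
        raise ValueError("epsilon must be positive")
    # ... rest of the search unchanged ...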


[...]
> My question is: is the business with sense and cmp too "clever"?

For production code? Maybe -- you would need to profile the code, and 
compare it to a less general, more simple-minded function, and see which 
performs better. If there is an excessive performance penalty, that's a 
good sign that perhaps you're being too clever, too general, or too 
abstract -- or all three.

Too clever for novices? That depends on how inexperienced they are -- are 
they new to Python or new to programming in general? Are they children or 
adults? Do they have a PhD degree or are they still at primary school? It 
depends on their experience and their intelligence -- dare I say it, some 
people will *never* get programming, let alone code-reuse. It also 
depends on how you lead up to it -- are you dropping them straight into 
the abstract code, or giving them two pieces of nearly the same code and 
showing them how to pull out the common functionality?


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is code duplication allowed in this instance?

2009-07-03 Thread Bearophile
Francesco Bochicchio:
> Possibly to prepare the test data set you might need a
> different - and already proven - implementation of
> the algorithm.

Usually a brute force or slow but short algorithm is OK (beside some
hard-coded input-output pairs).

Sometimes you may use the first implementation of your code that's
usually simpler, before becoming complex because of successive
optimizations. But you have to be careful, because the first
implementation too may be buggy.

Some other times there are simpler algorithms to test whether the output
of another algorithm is correct (think of exponential algorithms with a
polynomial test of the result); for example, to test a fancy Introsort
implementation you can use a very small and simple O(n^2) sorter, or
better a simple linear loop that tests that the result is sorted.
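
A tiny sketch of that kind of oracle (added for illustration; fancy_sort is
just a stand-in for the implementation under test):

import random

def is_sorted(seq):
    # simple linear oracle: the output must be in non-decreasing order
    return all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))

fancy_sort = sorted   # stand-in for the fancy implementation under test

def test_fancy_sort():
    data = [random.randint(-100, 100) for _ in range(1000)]
    result = fancy_sort(data)
    assert is_sorted(result)        # cheap linear check
    assert result == sorted(data)   # brute-force reference answer

test_fancy_sort()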

Here, besides unit tests, languages (or tools) that allow you to add
pre- and post-conditions, class invariants, etc. are also useful.

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Alan G Isaac
On 7/3/2009 10:05 AM kj apparently wrote:
> The context is the concept of a binary search.  In one of their
> homeworks, my students will have two occasions to use a binary
> search.  This seemed like a perfect opportunity to illustrate the
> idea of abstracting commonalities of code into a re-usable function.
> So I thought that I'd code a helper function, called _binary_search,
> that took five parameters: a lower limit, an upper limit, a
> one-parameter function, a target value, and a tolerance (epsilon).
> It returns the value of the parameter for which the value of the
> passed function is within the tolerance of the target value.
> 
> This seemed straightforward enough, until I realized that, to be
> useful to my students in their homework, this _binary_search function
> had to handle the case in which the passed function was monotonically
> decreasing in the specified interval...
> 
> The implementation is still very simple, but maybe not very clear,
> particularly to programming novices (docstring omitted):
> 
> def _binary_search(lo, hi, func, target, epsilon):
> assert lo < hi
> assert epsilon > 0
> sense = cmp(func(hi), func(lo))
> if sense == 0:
> return None
> target_plus = sense * target + epsilon
> target_minus = sense * target - epsilon
> while True:
> param = (lo + hi) * 0.5
> value = sense * func(param)
> if value > target_plus:
> hi = param
> elif value < target_minus:
> lo = param
> else:
> return param
> 
>   if lo == hi:
>   return None



1. Don't use assertions to test argument values!

2.
from scipy.optimize import bisect
def _binary_search(lo, hi, func, target, epsilon):
    def f(x): return func(x) - target
    return bisect(f, lo, hi, xtol=epsilon)

3. If you don't want to use SciPy (why?), have them
implement http://en.wikipedia.org/wiki/Bisection_method#Pseudo-code
to produce their own `bisect` function.
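
For reference, a plain-Python rendering of that pseudo-code (a sketch added
here, not Alan's code):

def bisect(f, lo, hi, xtol=1e-9, maxiter=200):
    # assumes f is continuous and f(lo), f(hi) have opposite signs
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must bracket a root")
    for _ in range(maxiter):
        mid = (lo + hi) / 2.0
        if f(mid) == 0 or (hi - lo) / 2.0 < xtol:
            return mid
        if (f(mid) > 0) == (f(lo) > 0):
            lo = mid
        else:
            hi = mid
    return mid

print bisect(lambda x: x * x - 2, 0.0, 2.0)   # ~1.4142135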

hth,
Alan Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Aahz
In article , kj   wrote:
>
>This seemed straightforward enough, until I realized that, to be
>useful to my students in their homework, this _binary_search function
>had to handle the case in which the passed function was monotonically
>decreasing in the specified interval...
>
>def _binary_search(lo, hi, func, target, epsilon):
>assert lo < hi
>assert epsilon > 0
>sense = cmp(func(hi), func(lo))
>if sense == 0:
>return None
>target_plus = sense * target + epsilon
>target_minus = sense * target - epsilon
>while True:
>param = (lo + hi) * 0.5
>value = sense * func(param)
>if value > target_plus:
>hi = param
>elif value < target_minus:
>lo = param
>else:
>return param
>
>   if lo == hi:
>   return None
>
>My question is: is the business with sense and cmp too "clever"?

First of all, cmp() is gone in Python 3, unfortunately, so I'd avoid
using it.  Second, assuming I understand your code correctly, I'd change
"sense" to "direction" or "order".
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

"as long as we like the same operating system, things are cool." --piranha
-- 
http://mail.python.org/mailman/listinfo/python-list


The TimingAnalyzer -- project to convert from Java to Python

2009-07-03 Thread chewie
Hello,

This a project related to the development of an EDA CAD tool program
called the TimingAnalyzer.  Digital engineers could use this kind of
program to analyze and document inteface timing diagrams  for IC,
ASIC, FPGA, and board level hardware projects.

The TimingAnalyzer is licensed as freeware.   I don't have the time
needed to make a high quality commercial product but I do want to keep
the development moving forward and continue to fix problems and add
new features as time permits.

www.timing-diagrams.com

Recently, I have become very interested in Python and using it to
develop similar type cad programs.  My plan is to convert the
TimingAnalyzer Java to Python with mostly a scripting interface for
building complex timing diagrams, doing timing analysis,  creating
testbenches and testvectors from waveform diagrams,
and creating timing diagrams from simulation VCD files.  Almost all of
this is text-based work anyway.

Developing professional GUIs is very time consuming for me.  This has
been my bottleneck with the program all along.  With a command line
interface,  you will execute a script and in one window,  and view and
edit and print the timing diagram shown in another window.   Like the
Matlab interface.

For example:

micro = m68000()
micro.write(add, data, wait_states)
micro.read(add, wait_states).

or

add_clock(..)
add_signal(.)
add_delay(..)
add_constraint(.)
add_or_gate()
add_and_gate()
add_counter()
add_clock_jitter(.)

analyze_clock_domains(.)
analyze_worst_case_timings()
analyze_best_case_timings.

read_vcd()
vcd_2_timing_diagram(.)
create_testvectors(.)
create_testbench()

A lot of these functions are built into the program now, so it's a
matter of converting them from Java to Python.  I won't have to spend
most of the time getting the user interface to look good and be
friendly.  If this is made an open source project, I would hope that
others would help with the development, and with new features and bug
fixes progress will be made quickly.

If anyone is interested in helping with the development, I will make
this an open source project.  Just let me know if you're interested.

Thank you,
Dan Fabrizio
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Bearophile
On 3 Lug, 16:05, kj  wrote:
> I'm will be teaching a programming class to novices, and I've run
> into a clear conflict between two of the principles I'd like to
> teach: code clarity vs. code reuse.

They are both important principles, but clarity is usually more
important: short code that can't be read can't be fixed or modified,
while long code that can be read is still alive.


> So I thought that I'd code a helper function, called _binary_search,
> that took five parameters: a lower limit, an upper limit, a
> one-parameter function, a target value, and a tolerance (epsilon).

Five arguments are a bit too much, even for non-novices. 2-3 arguments
are more than enough for novices.

Where are the doctests? Novices have to learn to think of tests as part
of the normal code :-)

I think the main problem in all this discussion is that generally, to
learn to program you have to write code yourself, so your students
have to invent and write their own binary search. Reading your code
they are going to learn much less. Creating a binary search from
almost scratch can turn out to be too hard for them; in that case you
have to ask them to invent something simpler :-)

Programming is (among other things) problem solving, so they have to
learn to solve non-trivial problems from the beginning. Showing them
famous algorithms (and very good code to copy from) can be useful, but
it's less important than learning basic problem-solving skills, and
much less important than learning why solving problems has to become
their purpose and pleasure :-)

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-07-03 Thread MRAB

Eric S. Johansson wrote:

Horace Blegg wrote:

I've been kinda following this. I have a cousin who is permanently wheel
chair bound and doesn't have perfect control of her hands, but still
manages to use a computer and interact with society. However, the
idea/thought of disabled programmers was new to me/hadn't ever occurred
to me.

You say that using your hands is painful, but what about your feet?
Wouldn't it be possible to rig up some kind of foot peddle for
shift/caps lock? Kinda like the power peddle used with sowing machines,
so the hands are free to hold fabric.

I don't mean this in a condescending manor, and I apologize if you take
it as such. I'm genuinely curious if you think something like this could
work.

The way I was envisioning it working last night (and I haven't the
faintest clue how SR works, nor have I ever used SR) was that you would
hit the foot peddle, which would tell the SR program to capitalize the
first letter of the next word (a smart shift, basically, so you don't
end up doing something like ... WONderland -or- "stocks are up 1,0))%
TOday".)

Possible? Stupid?


it's not stupid.

People have used foot pedals for decades for a variety of controls. I don't
think foot pedals would work for me because when I am dictating, I pace.
Standing, sitting, I pace. With a cord headset, I'm forced to stay within about
4 feet of the computer. But what I'm using a Bluetooth headset, I will sometimes
ramble as far as 10 or 15 feet from the computer. It helps if I make the font
larger so I can glance over and see what kind of errors I've gotten.

I really love a Bluetooth headset with speech recognition. It's so liberating.

Your question about foot pedals makes me think of alternative. would it make
sense to have a handheld keyboard which would be used for command-and-control
functionality or as an adjunct to speech recognition use? It would have to be
designed in such a way that it doesn't aggravate a hand injury which may not be
possible. Anyway, just thinking out loud.


You can get giant piano keyboards that you step on, so how about a giant
computer keyboard? "I wrote 5 miles of code before lunch!" :-)
--
http://mail.python.org/mailman/listinfo/python-list


Custom CFLAGS with distutils

2009-07-03 Thread Rocco Rutte
Hi,

I've been trying to make distutils build mercurial with custom cflags.
The only way this works is to change Makefile because I don't want to
put my changed CFLAGS into the environment and I tend to forget to run
"make" with a CFLAGS=... option.

Google brings up a special "Setup" file which should solve my problem,
but it somehow doesn't. I've tried:

mercurial mercurial/base85.c -Xcompiler -arch x86_64
mercurial.base85 mercurial/base85.c -Xcompiler -arch x86_64

for base85.c in the mercurial/ subdirectory. Hacking the Makefile does the
trick, but a working Setup file would never produce merge conflicts.

What am I doing wrong?

Rocco
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting rid of —

2009-07-03 Thread Mark Tolonen


"Tep"  wrote in message 
news:46d36544-1ea2-4391-8922-11b8127a2...@o6g2000yqj.googlegroups.com...

On 3 Jul., 06:40, Simon Forman  wrote:
> On Jul 2, 4:31 am, Tep  wrote:

[snip]
> > > > > how can I replace '—' sign from string? Or do split at that 
> > > > > character?

> > > > > Getting unicode error if I try to do it:
>
> > > > > UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in 
> > > > > position

> > > > > 1: ordinal not in range(128)
>
> > > > > Thanks, Pet
>
> > > > > script is # -*- coding: UTF-8 -*-

[snip]

> I just tried a bit of your code above in my interpreter here and it
> worked fine:
>
> |>>> data = 'foo — bar'
> |>>> data.split('—')
> |['foo ', ' bar']
> |>>> data = u'foo — bar'
|>>> data.split(u'—')
> |[u'foo ', u' bar']
>
> Figure out the smallest piece of "html source code" that causes the
> problem and include that with your next post.

The problem was, I've converted "html source code" to unicode object
and didn't encoded to utf-8 back, before using split...
Thanks for help and sorry for not so smart question
Pet


You'd still benefit from posting some code.  You shouldn't be converting 
back to utf-8 to do a split; you should be using a Unicode string with split 
on the Unicode version of the "html source code".  Also make sure your file 
is actually saved in the encoding you declare.  I print your symbol encoded 
in two encodings below to illustrate why I suspect this.


Below, assume "data" is your "html source code" as a Unicode string:

# -*- coding: UTF-8 -*-
data = u'foo — bar'
print repr(u'—'.encode('utf-8'))
print repr(u'—'.encode('windows-1252'))
print data.split(u'—')
print data.split('—')


OUTPUT:

'\xe2\x80\x94'
'\x97'
[u'foo ', u' bar']
Traceback (most recent call last):
 File 
"C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", 
line 427, in ImportFile

   exec codeObj in __main__.__dict__
 File "", line 1, in 
 File "x.py", line 6, in 
   print data.split('—')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: 
ordinal not in range(128)


Note that using the Unicode string in split() works.  Also note the decode 
byte in the error message when using a non-Unicode string to split the 
Unicode data.  In your original error message the decode byte that caused an 
error was 0x97, which is 'EM DASH' in Windows-1252 encoding.  Make sure to 
save your source code in the encoding you declare.  If I save the above 
script in windows-1252 encoding and change the coding line to windows-1252 I 
get the same results, but the decode byte is 0x97.


# coding: windows-1252
data = u'foo — bar'
print repr(u'—'.encode('utf-8'))
print repr(u'—'.encode('windows-1252'))
print data.split(u'—')
print data.split('—')

'\xe2\x80\x94'
'\x97'
[u'foo ', u' bar']
Traceback (most recent call last):
 File 
"C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", 
line 427, in ImportFile

   exec codeObj in __main__.__dict__
 File "", line 1, in 
 File "x.py", line 6, in 
    print data.split('—')
UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in position 0: 
ordinal not in range(128)


-Mark


--
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Bruno Desthuilliers

kj a écrit :

I'm will be teaching a programming class to novices, and I've run
into a clear conflict between two of the principles I'd like to
teach: code clarity vs. code reuse.  I'd love your opinion about
it.


(snip - others already commented on this code)


Here's the rub: the code above is more general (hence more reusable)
by virtue of this trick with the sense parameter, but it is also
a bit harder to understand.


Perhaps better naming (s/sense/direction/g ?) and a small comment could 
help ?



This not an unusual situation.  I find that the processing of
abstracting out common logic often results in code that is harder
to read, at least for the uninitiated...


IOW : the notion of "clarity" depends on the context (including the 
reader). Anyway, there are algorithms (or implementations of...) that 
are definitly and inherently on the 'hard to read' side - well, 
complexity is something you have to learn to live with, period. The key 
is to try and make it as simple and readable *as possible*.


Also, factoring out common code - or even slightly complex code - often 
makes _client_ code (much) more readable.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread Bruno Desthuilliers

kj a écrit :
(snipo

To have a special-case
re.match() method in addition to a general re.search() method is
antithetical to language minimalism,


FWIW, Python has no pretention to minimalism.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Direct interaction with subprocess - the curse of blocking I/O

2009-07-03 Thread Scott David Daniels

yam850 wrote:

I made a python method/function for non blocking read from a file
object   I am happy to see comments.


OK, here's a fairly careful set of comments with a few caveats:
Does this work on Windows?  The top comment should say where you
know it works.  Does this code correctly read a single line?
Perhaps you need to check for a final '\n' whenever you actually
get characters, and break out there.  The rest below is simply
style-changing suggestions; take what you like and leave the rest.



def non_blocking_readline(f_read=sys.stdin, timeout_select=0.0):
"""to readline non blocking from the file object 'f_read'
   for 'timeout_select' see module 'select'"""
import select

Typically put imports at the top of the source
XXX   text_lines = ''   # empty string
Either no comment here or say _why_ it is empty.
(A)   text_lines = []  # for result accumulation
if collecting pieces (see below)
(B)   text_lines = ''  # for result accumulation
if collecting a single string


while True:   # as long as there are bytes to read
try:  # try select
rlist, wlist, xlist = select.select([f_read],   
  [], [],
   timeout_select)

XXX   except:   # select ERROR
XXX   print >>sys.stderr, ("non_blocking_read select ERROR")
I'd not hide the details of the exception like that.  Don't do empty
excepts. Either don't catch it at all here, (I prefer that) or do
something like the following to capture the error:
  except Exception, why:
  print >>sys.stderr, "non_blocking_read select: %s" % why

break

XXX   if DEBUG: print("rlist=%s, wlist=%s, xlist=%s" % (repr(rlist),
XXX  repr(wlist), repr(xlist)))
Don't be scared of vertical space; if you must, define a function DPRINT
to ignore args; use %r to get repr
So, either:
  if DEBUG:
  print("rlist=%r, wlist=%r, xlist=%r" % (
  rlist, wlist, xlist))
or:
elsewhere:
def DPRINT(format_str, *args, **kwargs):
'''Conditionally print formatted output based on DEBUG'''
if DEBUG:
print(format_str % (args or kwargs))
and then here:
  DPRINT("rlist=%r, wlist=%r, xlist=%r", rlist, wlist, xlist)

XXX   if  len(rlist) > 0:
Idiomatic Python: use the fact that empty containers evaluate false:
  if  rlist:

text_read = f_read.readline() # get a line

XXX   if DEBUG: print("after read/readline text_read:'%s', len=%s" % (text_read, repr(len(text_read))))

Similar comment to above:
   if DEBUG:
   print("after read/readline text_read:%r, len=%s" % (
  text_read, len(text_read))
or:
   DPRINT("after read/readline text_read:%r, len=%s",
  text_read, len(text_read))

XXX   if  len(text_read) > 0:   # there were some bytes
  if text_read:   # there were some bytes
XXX   text_lines = "".join([text_lines, text_read])
Note the join is a good idea only if you have several elements.
For a single concatenation, "=" is just fine.  The (A) case combines
at the end, the (B) case if you expect multiple concatenates are rare.
(A)   text_lines.append(text_read)
(B)   text_lines += text_read

XXX   if DEBUG: print("text_lines:'%s'" % (text_lines))
Similar to above
   if DEBUG:
   print("text_lines:%r" % text_lines)
or:
   DPRINT("text_lines:%r", text_lines)

XXX   else:
XXX   break   # there was no byte in a line
XXX   else:
XXX   break   # there was no byte in the f_read
To make the flow of control above more clear (one case continues, others
get out), I'd also replace the above with:
  continue # In one case we keep going
  break  # No new chars found, let's get out.

XXX   if  text_lines  ==  '':
XXX   return None
XXX   else:
XXX   return text_lines
Simplify using 'or's semantics:
(A)   return ''.join(text_lines) or None
(B)   return text_lines or None


So, with everything applied:

import select

def DPRINT(format_str, *args, **kwargs):
'''Conditionally print formatted output based on DEBUG'''
if DEBUG:
print(format_str % (args or kwargs))

def non_blocking_readline(f_read=sys.stdin, timeout_select=0.0):
"""to readline non blocking from the file object 'f_read'

for 'timeout_select' see module 'select'
"""
text_lines = ''  # for result accumulation
while True: # as long as there are bytes to read
        rlist, wlist, xlist = select.select([f_read], [], [],
                                            timeout_select)
DPRINT("rlist=%r, wlist=%r, xlist=%r",
  rlist, wlist, xli

Re: Clarity vs. code reuse/generality

2009-07-03 Thread Jean-Michel Pichavant

kj wrote:

I'm will be teaching a programming class to novices, and I've run
into a clear conflict between two of the principles I'd like to
teach: code clarity vs. code reuse.  I'd love your opinion about
it.
  

[...]

sense = cmp(func(hi), func(lo))
if sense == 0:
return None

My suggestion on how to improve this part for python novices:
# assuming func is monotonic
if func(high) > func(low):
   direction = 1 # aka sign of the func derivative
elif func(low) > func(high):
   direction = -1
else:
   return None

Avoid using cmp; this will prevent the "what the hell is cmp doing?"
reaction, unless you want to teach your students how to search the inline
Python documentation.
Some other list members have suggested improving the variable naming. I
couldn't agree more; in your case, I think clarity can be achieved along
with the abstraction (these notions do not necessarily collide).


Here's a link on my how-to-name bible :
http://tottinge.blogsome.com/meaningfulnames/

Jean-Michel

--
http://mail.python.org/mailman/listinfo/python-list


Re: Direct interaction with subprocess - the curse of blocking I/O

2009-07-03 Thread yam850
On 3 Jul., 17:43, Scott David Daniels  wrote:
> yam850 wrote:
> > I made a python method/function for non blocking read from a file
> > object   I am happy to see comments.
> OK, here's a fairly careful set of comments with a few caveats:
[snip] valuable comments
> --Scott David Daniels
> scott.dani...@acm.org


Wow, thats a GREAT answer!!!
Thanks!!!
I will learn a lot while "digesting" this mail.


G
--
yam850
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Lie Ryan
kj wrote:
> I'm will be teaching a programming class to novices, and I've run
> into a clear conflict between two of the principles I'd like to
> teach: code clarity vs. code reuse.  I'd love your opinion about
> it.

Sometimes when the decision between clarity and generality becomes too
hard; you might fare better to save the code, go out for a walk to
forget the code, and start a blank text editor. Being in a fresh mind,
you may found an alternative approach, e.g.:

from __future__ import division
def binary_search(lo, hi, func, target, epsilon):
# reverses hi and lo if monotonically decreasing
lo, hi = (lo, hi) if func(hi) > func(lo) else (hi, lo)

param = (lo + hi) / 2

# loop while not precise enough
while abs(func(param) - target) > epsilon:
param = (lo + hi) / 2

if target < func(param):
hi = param
else:
lo = param
return param
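
For instance (example added here, not part of the original post), the
version above can be exercised like this:

import math
print binary_search(0, 2, lambda x: x * x, 2, 1e-6)   # ~1.4142135
print math.sqrt(2)                                    # for comparison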

> The context is the concept of a binary search.  In one of their
> homeworks, my students will have two occasions to use a binary
> search.  This seemed like a perfect opportunity to illustrate the
> idea of abstracting commonalities of code into a re-usable function.
> So I thought that I'd code a helper function, called _binary_search,
> that took five parameters: a lower limit, an upper limit, a
> one-parameter function, a target value, and a tolerance (epsilon).
> It returns the value of the parameter for which the value of the
> passed function is within the tolerance of the target value.
> 
> This seemed straightforward enough, until I realized that, to be
> useful to my students in their homework, this _binary_search function
> had to handle the case in which the passed function was monotonically
> decreasing in the specified interval...
> 
> The implementation is still very simple, but maybe not very clear,
> particularly to programming novices (docstring omitted):
> 
> def _binary_search(lo, hi, func, target, epsilon):
> assert lo < hi
> assert epsilon > 0
> sense = cmp(func(hi), func(lo))
> if sense == 0:
> return None
> target_plus = sense * target + epsilon
> target_minus = sense * target - epsilon
> while True:
> param = (lo + hi) * 0.5
> value = sense * func(param)
> if value > target_plus:
> hi = param
> elif value < target_minus:
> lo = param
> else:
> return param
> 
>   if lo == hi:
>   return None
> 
> My question is: is the business with sense and cmp too "clever"?
> 
> Here's the rub: the code above is more general (hence more reusable)
> by virtue of this trick with the sense parameter, but it is also
> a bit harder to understand.
> 
> This not an unusual situation.  I find that the processing of
> abstracting out common logic often results in code that is harder
> to read, at least for the uninitiated...
> 
> I'd love to know your opinions on this.
> 
> TIA!
> 
> kj
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread kj
In  Alan G Isaac  
writes:

>1. Don't use assertions to test argument values!

Out of curiosity, where does this come from? 

Thanks,

kj
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread kj
In  a...@pythoncraft.com (Aahz) writes:

>First of all, cmp() is gone in Python 3, unfortunately, so I'd avoid
>using it.

Good to know.

>Second, assuming I understand your code correctly, I'd change
>"sense" to "direction" or "order".

Thanks,

kj

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting rid of —

2009-07-03 Thread Tep
On 3 Jul., 16:58, "Mark Tolonen"  wrote:
> "Tep"  wrote in message
>
> news:46d36544-1ea2-4391-8922-11b8127a2...@o6g2000yqj.googlegroups.com...
>
>
>
>
>
> > On 3 Jul., 06:40, Simon Forman  wrote:
> > > On Jul 2, 4:31 am, Tep  wrote:
> [snip]
> > > > > > > how can I replace '—' sign from string? Or do split at that
> > > > > > > character?
> > > > > > > Getting unicode error if I try to do it:
>
> > > > > > > UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in
> > > > > > > position
> > > > > > > 1: ordinal not in range(128)
>
> > > > > > > Thanks, Pet
>
> > > > > > > script is # -*- coding: UTF-8 -*-
> [snip]
> > > I just tried a bit of your code above in my interpreter here and it
> > > worked fine:
>
> > > |>>> data = 'foo — bar'
> > > |>>> data.split('—')
> > > |['foo ', ' bar']
> > > |>>> data = u'foo — bar'
> > |>>> data.split(u'—')
> > > |[u'foo ', u' bar']
>
> > > Figure out the smallest piece of "html source code" that causes the
> > > problem and include that with your next post.
>
> > The problem was, I've converted "html source code" to unicode object
> > and didn't encoded to utf-8 back, before using split...
> > Thanks for help and sorry for not so smart question
> > Pet
>
> You'd still benefit from posting some code.  You shouldn't be converting

I've posted code below

> back to utf-8 to do a split, you should be using a Unicode string with split
> on the Unicode version of the "html source code".  Also make sure your file
> is actually saved in the encoding you declare.  I print the encoding of your
> symbol in two encodings to illustrate why I suspect this.

File was indeed in windows-1252, I've changed this. For errors see
below

>
> Below, assume "data" is your "html source code" as a Unicode string:
>
> # -*- coding: UTF-8 -*-
> data = u'foo — bar'
> print repr(u'—'.encode('utf-8'))
> print repr(u'—'.encode('windows-1252'))
> print data.split(u'—')
> print data.split('—')
>
> OUTPUT:
>
> '\xe2\x80\x94'
> '\x97'
> [u'foo ', u' bar']
> Traceback (most recent call last):
>   File
> "C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
> line 427, in ImportFile
>     exec codeObj in __main__.__dict__
>   File "", line 1, in 
>   File "x.py", line 6, in 
>     print data.split('—')
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0:
> ordinal not in range(128)
>
> Note that using the Unicode string in split() works.  Also note the decode
> byte in the error message when using a non-Unicode string to split the
> Unicode data.  In your original error message the decode byte that caused an
> error was 0x97, which is 'EM DASH' in Windows-1252 encoding.  Make sure to
> save your source code in the encoding you declare.  If I save the above
> script in windows-1252 encoding and change the coding line to windows-1252 I
> get the same results, but the decode byte is 0x97.
>
> # coding: windows-1252
> data = u'foo — bar'
> print repr(u'—'.encode('utf-8'))
> print repr(u'—'.encode('windows-1252'))
> print data.split(u'—')
> print data.split('—')
>
> '\xe2\x80\x94'
> '\x97'
> [u'foo ', u' bar']
> Traceback (most recent call last):
>   File
> "C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
> line 427, in ImportFile
>     exec codeObj in __main__.__dict__
>   File "", line 1, in 
>   File "x.py", line 6, in 
>     print data.split('—')
> UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in position 0:
> ordinal not in range(128)
>
> -Mark

#! /usr/bin/python
# -*- coding: UTF-8 -*-
import urllib2
import re

def getTitle(input):
    title = re.search('<title>(.*?)</title>', input)
    title = title.group(1)
    print "FULL TITLE", title.encode('UTF-8')
    parts = title.split(' — ')
    return parts[0]


def getWebPage(url):
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = { 'User-Agent' : user_agent }
    req = urllib2.Request(url, '', headers)
    response = urllib2.urlopen(req)
    the_page = unicode(response.read(), 'UTF-8')
    return the_page


def main():
    url = "http://bg.wikipedia.org/wiki/%D0%91%D0%B0%D1%85%D1%80%D0%B5%D0%B9%D0%BD"
    title = getTitle(getWebPage(url))
    print title[0]


if __name__ == "__main__":
    main()


Traceback (most recent call last):
  File "C:\user\Projects\test\src\new_main.py", line 29, in 
main()
  File "C:\user\Projects\test\src\new_main.py", line 24, in main
title = getTitle(getWebPage(url))
FULL TITLE Бахрейн — Уикипеди�
  File "C:\user\Projects\test\src\new_main.py", line 9, in getTitle
parts = title.split(' — ')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position
1: ordinal not in range(128)
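
For what it's worth, a minimal sketch of the fix Mark is describing (added
for illustration): title is already a unicode object, so split on a unicode
separator and nothing needs to be implicitly decoded:

# -*- coding: UTF-8 -*-
title = u'Бахрейн — Уикипедия'   # a unicode object, like getWebPage() returns
parts = title.split(u' — ')      # unicode separator: no implicit ascii decode
print parts[0].encode('UTF-8')   # prints: Бахрейн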
-- 
http://mail.python.org/mailman/listinfo/python-list


Compiling 64 bit python on a mac - cannot compute sizeof (int)

2009-07-03 Thread Keflavich
I'm trying to compile a 64 bit version of python 2.6.2 on my mac (OS X
10.5.7), and am running into a problem during the configure stage.

I configure with:
./configure --enable-framework=/Library/Frameworks --enable-
universalsdk MACOSX_DEPLOYMENT_TARGET=10.5 --with-universal-archs=all -
with-readline-dir=/usr/local

because I want 64 and 32 bit, and I needed to install a 64 bit
readline as a prerequisite.

configure fails at:
checking size of int... configure: error: cannot compute sizeof (int)

I'm not reporting this as a bug because I know it's a problem with my
path somewhere (a friend with an identical computer but slightly
different setup was able to compile without a problem), but I don't
know what paths to change.  Any tips?

Thanks,
Adam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Paul Rubin
kj  writes:
> sense = cmp(func(hi), func(lo))
> if sense == 0:
>     return None
> target_plus = sense * target + epsilon
> target_minus = sense * target - epsilon
> ...

The code looks confusing to me and in some sense incorrect.  Suppose
func(hi)==func(lo)==target.  In this case the solver gives up
and returns None even though it already has found a root.

Also, the stuff with the sense parameter, and target_minus and
target_plus looks messy.  I do like to use cmp.  I'd just write
something like (untested):

def _binary_search(lo, hi, func, target, epsilon):
    y_hi, y_lo = func(hi), func(lo)

    while True:
        x_new = (lo + hi) * 0.5
        y_new = func(x_new)
        if abs(y_new - target) < epsilon:
            return x_new
        elif cmp(y_new, target) == cmp(y_hi, target):
            hi = x_new
        else:
            lo = x_new
        if lo == hi:
            return None

This uses an extra couple function calls in that trivial case, of
course.
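
For concreteness, here's how I'd expect it to be called (my own toy
example, not from kj's code): find the x in [0, 10] where x*x is within
epsilon of 2.

def f(x):
    return x * x

print _binary_search(0.0, 10.0, f, 2.0, 1e-9)   # ~1.41421356237, i.e. sqrt(2)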
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Psyco good for Python 2.6.1?

2009-07-03 Thread duncan smith
Russ P. wrote:
> I need to speed up some Python code, and I discovered Psyco. However,
> the Psyco web page has not been updated since December 2007. Before I
> go to the trouble of installing it, does anyone know if it is still
> good for Python 2.6.1? Thanks.

If you look at http://psyco.sourceforge.net/ it would seem so (a Windows
binary for 2.6 is available).  You might want to wait a little while
because Psyco 2 is likely to be released very soon (tomorrow, the last I
heard).

Duncan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why re.match()?

2009-07-03 Thread Lie Ryan
Steven D'Aprano wrote:
> On Thu, 02 Jul 2009 11:19:40 +, kj wrote:
> 
>> I'm sure that it is possible to find cases in which the *current*
>> implementation of re.search() would be inefficient, but that's because
>> this implementation is perverse, which, I guess, is ultimately the point
>> of my original post.  Why privilege the special case of a
>> start-of-string anchor?  
> 
> Because wanting to see if a string matches from the beginning is a very 
> important and common special case.
> 

The oddest thing I find about re.match is that it has an implicit
beginning anchor but no implicit end anchor. I would have thought it
much more common to check that a whole string matches a certain pattern
than to match just the beginning. But everyone's mileage varies.
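
A tiny example of that asymmetry (my own, with the stock re module):

import re

print re.match(r'\d+', '123abc').group()   # '123': the start is anchored,
                                           # trailing junk is still accepted
print re.match(r'\d+$', '123abc')          # None: the end is only anchored
                                           # if you ask for it ($ or \Z)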
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Config files with different types

2009-07-03 Thread Zach Hobesh
yaml looks pretty interesting.  Also, I wouldn't have to change much:
I would still use the same function and still output a dict.
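
Roughly what I have in mind (untested sketch; assumes PyYAML is
installed and the config file is renamed to something like config.yaml):

import yaml

# config.yaml would hold, e.g.:
#   destination: C:/Destination
#   overwrite: true
f = open('config.yaml')
try:
    configs = yaml.load(f)
finally:
    f.close()

print configs['destination']     # 'C:/Destination', still a string
print configs['overwrite']       # True, already a real bool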

Thanks!

-Zach

On Thu, Jul 2, 2009 at 11:55 PM, Javier Collado wrote:
> Hello,
>
> Have you considered using something that is already developed?
>
> You could take a look at this presentation for an overview of what's 
> available:
> http://us.pycon.org/2009/conference/schedule/event/5/
>
> Anyway, let me explain that, since I "discovered" it, my favourite
> format for configuration files is yaml (http://yaml.org/,
> http://pyyaml.org/). It's easy to read, easy to write, available in
> different programming languagues, etc. In addition to this, type
> conversion is already in place so I think it covers your requirement.
> For example:
>
> IIn [1]: import yaml
>
> In [2]: yaml.load("""name: person name
>   ...: age: 25
>   ...: is_programmer: true""")
> Out[2]: {'age': 25, 'is_programmer': True, 'name': 'person name'}
>
> Best regards,
>    Javier
>
> 2009/7/2 Zach Hobesh :
>> Hi all,
>>
>> I've written a function that reads a specifically formatted text file
>> and spits out a dictionary.  Here's an example:
>>
>> config.txt:
>>
>> Destination = C:/Destination
>> Overwrite = True
>>
>>
>> Here's my function that takes 1 argument (text file)
>>
>> the_file = open(textfile,'r')
>> linelist = the_file.read().split('\n')
>> the_file.close()
>> configs = {}
>> for line in linelist:
>>       try:
>>              key,value = line.split('=')
>>              key.strip()
>>              value.strip()
>>              key.lower()
>>              value.lower()
>>              configs[key] = value
>>
>>       except ValueError:
>>              break
>>
>> so I call this on my config file, and then I can refer back to any
>> config in my script like this:
>>
>> shutil.move(your_file,configs['destination'])
>>
>> which I like because it's very clear and readable.
>>
>> So this works great for simple text config files.  Here's how I want
>> to improve it:
>>
>> I want to be able to look at the value and determine what type it
>> SHOULD be.  Right now, configs['overwrite'] = 'true' (a string) when
>> it might be more useful as a boolean.  Is there a quick way to do
>> this?  I'd also like to able to read '1' as an in, '1.0' as a float,
>> etc...
>>
>> I remember once I saw a script that took a string and tried int(),
>> float() wrapped in a try except, but I was wondering if there was a
>> more direct way.
>>
>> Thanks in advance,
>>
>> Zach
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting rid of —

2009-07-03 Thread MRAB

Tep wrote:

On 3 Jul., 16:58, "Mark Tolonen"  wrote:

"Tep"  wrote in message

news:46d36544-1ea2-4391-8922-11b8127a2...@o6g2000yqj.googlegroups.com...






On 3 Jul., 06:40, Simon Forman  wrote:

On Jul 2, 4:31 am, Tep  wrote:

[snip]

how can I replace '—' sign from string? Or do split at that
character?
Getting unicode error if I try to do it:
UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in
position
1: ordinal not in range(128)
Thanks, Pet
script is # -*- coding: UTF-8 -*-

[snip]

I just tried a bit of your code above in my interpreter here and it
worked fine:
|>>> data = 'foo — bar'
|>>> data.split('—')
|['foo ', ' bar']
|>>> data = u'foo — bar'

|>>> data.split(u'—')

|[u'foo ', u' bar']
Figure out the smallest piece of "html source code" that causes the
problem and include that with your next post.

The problem was, I've converted "html source code" to unicode object
and didn't encoded to utf-8 back, before using split...
Thanks for help and sorry for not so smart question
Pet

You'd still benefit from posting some code.  You shouldn't be converting


I've posted code below


back to utf-8 to do a split, you should be using a Unicode string with split
on the Unicode version of the "html source code".  Also make sure your file
is actually saved in the encoding you declare.  I print the encoding of your
symbol in two encodings to illustrate why I suspect this.


File was indeed in windows-1252, I've changed this. For errors see
below


Below, assume "data" is your "html source code" as a Unicode string:

# -*- coding: UTF-8 -*-
data = u'foo — bar'
print repr(u'—'.encode('utf-8'))
print repr(u'—'.encode('windows-1252'))
print data.split(u'—')
print data.split('—')

OUTPUT:

'\xe2\x80\x94'
'\x97'
[u'foo ', u' bar']
Traceback (most recent call last):
  File
"C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
line 427, in ImportFile
    exec codeObj in __main__.__dict__
  File "<string>", line 1, in <module>
  File "x.py", line 6, in <module>
    print data.split('—')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0:
ordinal not in range(128)

Note that using the Unicode string in split() works.  Also note the decode
byte in the error message when using a non-Unicode string to split the
Unicode data.  In your original error message the decode byte that caused an
error was 0x97, which is 'EM DASH' in Windows-1252 encoding.  Make sure to
save your source code in the encoding you declare.  If I save the above
script in windows-1252 encoding and change the coding line to windows-1252 I
get the same results, but the decode byte is 0x97.

# coding: windows-1252
data = u'foo — bar'
print repr(u'—'.encode('utf-8'))
print repr(u'—'.encode('windows-1252'))
print data.split(u'—')
print data.split('—')

'\xe2\x80\x94'
'\x97'
[u'foo ', u' bar']
Traceback (most recent call last):
  File
"C:\dev\python\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
line 427, in ImportFile
    exec codeObj in __main__.__dict__
  File "<string>", line 1, in <module>
  File "x.py", line 6, in <module>
    print data.split('—')
UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in position 0:
ordinal not in range(128)

-Mark


#! /usr/bin/python
# -*- coding: UTF-8 -*-
import urllib2
import re
def getTitle(input):
    title = re.search('<title>(.*?)</title>', input)


The input is Unicode, so it's probably better for the regular expression
to also be Unicode:

    title = re.search(u'<title>(.*?)</title>', input)

(In the current implementation it actually doesn't matter.)


    title = title.group(1)
    print "FULL TITLE", title.encode('UTF-8')
    parts = title.split(' — ')


The title is Unicode, so the string with which you're splitting should
also be Unicode:

    parts = title.split(u' — ')


    return parts[0]


def getWebPage(url):
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = { 'User-Agent' : user_agent }
    req = urllib2.Request(url, '', headers)
    response = urllib2.urlopen(req)
    the_page = unicode(response.read(), 'UTF-8')
    return the_page


def main():
    url = "http://bg.wikipedia.org/wiki/%D0%91%D0%B0%D1%85%D1%80%D0%B5%D0%B9%D0%BD"
    title = getTitle(getWebPage(url))
    print title[0]


if __name__ == "__main__":
    main()


Traceback (most recent call last):
  File "C:\user\Projects\test\src\new_main.py", line 29, in 
main()
  File "C:\user\Projects\test\src\new_main.py", line 24, in main
title = getTitle(getWebPage(url))
FULL TITLE Бахрейн — Уикипеди�
  File "C:\user\Projects\test\src\new_main.py", line 9, in getTitle
parts = title.split(' — ')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position
1: ordinal not in range(128)

--
http://mail.python.org/mailman/listinfo/python-list


Re: XML(JSON?)-over-HTTP: How to define API?

2009-07-03 Thread Ken Dyck
On Jul 2, 6:17 pm, Allen Fowler  wrote:
> Since I need to work with other platforms, pickle is out...  what are the 
> alternatives?  XML? JSON?

Don't forget YAML (http://yaml.org). Libraries available for Python
and .NET, among others.
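
A round trip is about as small as it gets (made-up payload, assuming
PyYAML on the Python side):

import yaml

payload = {'command': 'resize', 'width': 800, 'ok': True}
text = yaml.dump(payload)           # plain text, fine to send over HTTP
print yaml.load(text) == payload    # True: basic types survive the trip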

-Ken
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Brad
On Jul 3, 12:57 am, Steven D'Aprano  wrote:
> I've never needed such a split function, and I don't like the name, and
> the functionality isn't general enough. I'd prefer something which splits
> the input sequence into as many sublists as necessary, according to the
> output of the key function.

That's not a bad idea, I'll have to experiment with the alternatives.
My thought process for this, however, was that filter itself already
splits the sequence and it would have been more useful had it not
thrown away "half" of what it discovers. It could have been written to
return two sequences with very little perf hit for all but very
large input sequences, and been useful in more situations. What I
*really* wanted was a way to make filter itself more useful, since it
seems a bit silly to have two very similar functions.

Maybe this would be difficult to get into the core, but how about this
idea: Rename the current filter function to something like "split" or
"partition" (which I agree may be a better name) and modify it to
return the desired true and false sequences. Then recreate the
existing "filter" function with a wrapper that throws away the false
sequence.

Here are two simplified re-creations of situations where I could have
used partition (aka split):

words = ['this', 'is', 'a', 'bunch', 'of', 'words']
short, long = partition(words, lambda w: len(w) < 3)

d = {1 : 'w', 2 : 'x' ,3 : 'y' ,4 : 'z'}
keys = [1, 3, 4, 9]
found, missing = partition(keys, d.has_key)


There are probably a dozen other approaches, but the existing "filter"
is fast, clear, and *almost* good enough. So when is this useful in
general? Whenever filter itself is useful but you want to use both
sides of the partitioning work it already does.
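
In pure Python, the behaviour I'm after is roughly this (simplified
sketch, not a proposal for the actual C implementation):

def partition(seq, pred=bool):
    """Split seq into (true_items, false_items) in a single pass."""
    true_items, false_items = [], []
    for item in seq:
        if pred(item):
            true_items.append(item)
        else:
            false_items.append(item)
    return true_items, false_items

words = ['this', 'is', 'a', 'bunch', 'of', 'words']
short, rest = partition(words, lambda w: len(w) < 3)
print short, rest    # ['is', 'a', 'of'] ['this', 'bunch', 'words']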


-Brad
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Paul Rubin
Brad  writes:
> Maybe this would be difficult to get into the core, but how about this
> idea: Rename the current filter function to something like "split" or
> "partition" (which I agree may be a better name) and modify it to
> return the desired true and false sequences. Then recreate the
> existing "filter" function with a wrapper that throws away the false
> sequence.

This isn't so attractive, since filter takes a sequence input but
returns a list.  So if I have an iterator that produces a billion
elements of which I expect three to satisfy some predicate, then
   xs = filter(func, seq)
as currently implemented will build a 3-element list and return it.  
Under your suggestion, it would also build and throw away an (almost)
billion element list.
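
To make that concrete with toy numbers (my own example; itertools.ifilter
is the fully lazy spelling):

import itertools

def func(n):
    return n % 250000 == 0                  # true for only a few elements

xs = filter(func, xrange(10 ** 6))          # builds just the short match list
print xs                                    # [0, 250000, 500000, 750000]

lazy = itertools.ifilter(func, xrange(10 ** 6))   # builds no list at all
print list(lazy)                            # same elements, on demand
# A partition-style filter would also have to build (and then discard) a
# list of the ~10**6 elements for which func() is false.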
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Lie Ryan
Brad wrote:
> On Jul 3, 12:57 am, Steven D'Aprano  cybersource.com.au> wrote:
>> I've never needed such a split function, and I don't like the name, and
>> the functionality isn't general enough. I'd prefer something which splits
>> the input sequence into as many sublists as necessary, according to the
>> output of the key function.
> 
> That's not a bad idea, I'll have to experiment with the alternatives.
> My thought process for this, however, was that filter itself already
> splits the sequence and it would have been more useful had it not
> thrown away "half" of what it discovers. It could have been written to
> return two sequences with very little perf hit for all but very
> large input sequences, and been useful in more situations. What I
> *really* wanted was a way to make filter itself more useful, since it
> seems a bit silly to have two very similar functions.

It's not that easy. filter is nearly by definition a lazy function. The
various split/group functions are impossible to make into efficient
iterators, since they must traverse the whole list before they can be
sure that a group/part of the split is finished (or even that they have
seen all the group keys).

> Maybe this would be difficult to get into the core, but how about this
> idea: Rename the current filter function to something like "split" or
> "partition" (which I agree may be a better name) and modify it to
> return the desired true and false sequences. Then recreate the
> existing "filter" function with a wrapper that throws away the false
> sequence.

filter has very deep roots in functional programming, so I doubt it will
change any time soon. Also, consider infinite iterators: with the
"filter as wrapper to partition" scheme, it is impossible for
split/group/partition to consume an infinite iterator.
-- 
http://mail.python.org/mailman/listinfo/python-list


PyGtkSourceView / PyScintilla

2009-07-03 Thread fiafulli
Hello to all.
I need a recent version (ideally the most recent) of one of the two
controls in the subject line.
I think the Scintilla wrapper would be the better of the two, but I
believe its PyGTK development has stopped.

I'm therefore leaning towards GtkSourceView (currently at version 2.6.0,
http://projects.gnome.org/gtksourceview/).

Specifically, I need the Windows version (I haven't tried the Linux one
yet, but I don't expect problems there). A Windows installer exists for
Python 2.5, but it is stuck at version 2.2
(http://ftp.gnome.org/pub/gnome/binaries/win32/pygtksourceview/2.2/).

Could someone post a compiled build (the latest version would be
fantastic!), or explain step by step how to compile it myself on
Windows? I have tried installing mingw, msys, autoconf, automake,
libtool and all the rest, one by one, and started compiling, but I got
stuck on a missing pygobject 2.15 dependency or something of the sort.
I lost a whole morning!

The version of Python I use is 2.5.4, on Windows XP SP3 (and Ubuntu
9.04).

Thanks in advance,
Francesco

p.s.: sorry for my bad english


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sequence splitting

2009-07-03 Thread Scott David Daniels

Steven D'Aprano wrote:
I've never needed such a split function, and I don't like the name, and 
the functionality isn't general enough. I'd prefer something which splits 
the input sequence into as many sublists as necessary, according to the 
output of the key function. Something like itertools.groupby(), except it 
runs through the entire sequence and collates all the elements with 
identical keys.


splitby(range(10), lambda n: n%3)
=> [ (0, [0, 3, 6, 9]),
 (1, [1, 4, 7]), 
 (2, [2, 5, 8]) ]


Your split() would be nearly equivalent to this with a key function that 
returns a Boolean.


Well, here is my go at doing the original with iterators:

import itertools

def splitter(source, test=bool):
    a, b = itertools.tee((x, test(x)) for x in source)
    return (data for data, decision in a if decision), (
        data for data, decision in b if not decision)

This has the advantage that it can operate on infinite lists.  For
something like splitby for grouping, I seem to need to know the cases
up front:

def _make_gen(particular, src):
    return (x for x, c in src if c == particular)

def splitby(source, cases, case):
    '''Produce a dict of generators for case(el) for el in source'''
    decided = itertools.tee(((x, case(x)) for x in source), len(cases))
    return dict((c, _make_gen(c, src))
                for c, src in zip(cases, decided))

example:

def classify(n):
    '''Least prime factor of a few'''
    for prime in [2, 3, 5, 7]:
        if n % prime == 0:
            return prime
    return 0

for k, g in splitby(range(50), (2, 3, 5, 7, 0), classify).items():
    print('%s: %s' % (k, list(g)))

0: [1, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
2: [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48]
3: [3, 9, 15, 21, 27, 33, 39, 45]
5: [5, 25, 35]
7: [7, 49]

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: stringio+tarfile

2009-07-03 Thread superpollo

Peter Otten wrote:


gettarinfo() expects a real file, not a file-like object.
You have to create your TarInfo manually.


ok... which attributes are mandatory, and which optional?



I recommend that you have a look into the tarfile module's source code.



i will try... but:


$ cat /usr/lib/python2.3/tarfile.py | wc -l
1938

wow! it'll take some time ;-)



The following seems to work:

import sys
import time
import tarfile
import StringIO

sf1 = "first.txt", StringIO.StringIO("one one\n")
sf2 = "second.txt", StringIO.StringIO("two\n")
tf = StringIO.StringIO()

tar = tarfile.open(fileobj=tf , mode="w")

mtime = time.time()
for name, f in [sf1 , sf2]:
    ti = tarfile.TarInfo(name)
    ti.size = f.len
    ti.mtime = mtime
    # add more attributes as needed
    tar.addfile(ti, f)


sys.stdout.write(tf.getvalue())

Peter



much obliged mr otten

bye
--
http://mail.python.org/mailman/listinfo/python-list


Re: PSP Caching

2009-07-03 Thread Simon Forman
On Jul 3, 5:18 am, Johnson Mpeirwe  wrote:
> Hello All,
>
> How do I stop caching of Python Server Pages (or whatever causes changes
> in a page not to be noticed in a web browser)? I am new to developing
> web applications in Python and after looking at implementations of PSP
> like Spyce (which I believed introduces new unnecessary non-PSP syntax),
> I decided to write my own PSP applications from scratch. When I modify a
> file, I keep getting the old results until I intentionally introduce an
> error (e.g parse error) and correct it after to have the changes
> noticed. There's no proxy (I am working on a windows machine unplugged
> from the network). I have Googled and no documents seem to talk about
> this. Is there any particular mod_python directive I must set in my
> Apache configuration to fix this?
>
> Any help will be highly appreciated.
>
> Johnson

I don't know much about caching with apache, but the answer mght be on
this page: http://httpd.apache.org/docs/2.2/caching.html

Meanwhile, couldn't you just send apache a restart signal when you
modify your code?

HTH,
~Simon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Alan G Isaac
> In  Alan G Isaac  
> writes:
>> 1. Don't use assertions to test argument values!


On 7/3/2009 12:19 PM kj apparently wrote:
> Out of curiosity, where does this come from? 


http://docs.python.org/reference/simple_stmts.html#grammar-token-assert_stmt
"The current code generator emits no code for an assert statement when 
optimization is requested at compile time."

Alan Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-03 Thread Steven D'Aprano
On Fri, 03 Jul 2009 16:19:22 +, kj wrote:

> In  Alan G Isaac
>  writes:
> 
>>1. Don't use assertions to test argument values!
> 
> Out of curiosity, where does this come from?

Assertions are disabled when you run Python with the -O (optimise) flag. 
If you rely on assertions to verify data, then any time the user runs 
python with -O your code will be running without error checking.

assert should be used to verify program logic, not to sanitize data.
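
A small illustration of the difference (my own example):

def mean(values):
    # Validating caller-supplied data: always runs, even under python -O.
    if not values:
        raise ValueError("values must not be empty")
    result = sum(values) / float(len(values))
    # Checking an internal invariant: may be stripped by -O, which is fine,
    # because it guards program logic rather than user input.
    assert min(values) <= result <= max(values)
    return result

print mean([1, 2, 3, 4])    # 2.5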


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python debugger

2009-07-03 Thread Kee Nethery

It's not free, but I like the debugger in Komodo IDE.
It lets me simulate a web connection, step through the code and
examine the variables as it executes, and it can be run remotely (I
have not played with that aspect yet).
It inspects variables so you can dive into the parts of arrays and
dictionaries to see what, for example, the 5th item of the 4th item
named "blah" is set to and what type of data element it is (int,
unicode, etc.). I find it tremendously useful as a newbie to Python.

Kee Nethery
--
http://mail.python.org/mailman/listinfo/python-list


Reversible Debugging

2009-07-03 Thread Patrick Sabin

Hello,

I am interested in whether there are any Python modules that support
reversible debugging, aka stepping backwards. Any links or ideas would be
helpful, because I am thinking of implementing something like that.


Thanks in advance,
Patrick
--
http://mail.python.org/mailman/listinfo/python-list

