Re: Single type for __builtins__ in Py3.0

2005-09-23 Thread Tom Anderson
On Fri, 23 Sep 2005, Collin Winter wrote:

> If possible, I'd like to see this go in before 3.0. The reference manual 
> currently states [2] that __builtins__ can be either a dict or a module, 
> so changing it to always be a module would still be in keeping with 
> this. However, I realise that there's probably code out there that 
> hasn't been written to deal with both types, so this would result in 
> some minor breakage (though it would be easily fixable).

Perhaps __builtins__ should be a magic module which could be accessed 
using subscripting as well as proper names (__builtins__["int"] as well as 
__builtins__.int), to avoid breakage. AFAICT, there's no easy way to do 
this at the moment: defining __getitem__ in a module doesn't give you a 
working [] operator on it. Have i screwed that up? If not, might it be 
possible to change this, so modules are treated the same as any other 
object in terms of handling []? I admit that this is the only use case for 
it, and it's a pretty weak one, but it would be good to be consistent, i 
think.
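
For what it's worth, here's a little sketch of the distinction i'm on about
(the names are made up, and this is an illustration rather than a proposal):

import types

mod = types.ModuleType("plainmod")
mod.int = int
mod.__getitem__ = lambda name: getattr(mod, name)
# mod["int"] still raises TypeError - special methods are looked up on the
# type, not the instance, so the module never grows a [] operator

class SubscriptableModule(types.ModuleType):
    def __getitem__(self, name):
        return getattr(self, name)

magic = SubscriptableModule("magicmod")
magic.int = int
print magic["int"]   # works: <type 'int'>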

> If this gets a good response, I'll kick it up to python-dev.

+1

tom

-- 
Gatsos are a stealth tax on motorists in the same way that city centre video 
cameras are a stealth tax on muggers and DNA testing is a stealth tax on 
rapists. -- Guy Chapman
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Productivity and economics at software development

2005-09-23 Thread Tom Anderson
On Fri, 23 Sep 2005, Scott David Daniels wrote:

> Adriano Monteiro wrote:
>
>> I'm making a little research project about programming languages and
>> their respective IDEs. The goal is to trace each language silhouettes,
>> where it fits better, their goods/bads and the costs of developing in
>> this language.
>
> What do you consider the IDE for Assembly code or Microcode?

emacs, of course - just as it is for every other language.

tom

-- 
If you think it's expensive to hire a professional to do the job, wait until 
you hire an amateur. -- Red Adair
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [RFC] Parametric Polymorphism

2005-09-25 Thread Tom Anderson
On Sun, 25 Sep 2005, Catalin Marinas wrote:

> Sorry if this was previously discussed but it's something I miss in 
> Python. I get around this using isinstance() but it would be cleaner to 
> have separate functions with the same name but different argument types. 
> I think the idea gets quite close to the Lisp/CLOS implementation of 
> methods.
>
> Below is just simple implementation example (and class functions are
> not supported) but it can be further extended/optimised/modified for
> better type detection like issubclass() etc. The idea is similar to
> the @accepts decorator:
>
> methods = dict()
>
> def method(*types):
>     def build_method(f):
>         assert len(types) == f.func_code.co_argcount
>
>         if not f.func_name in methods:
>             methods[f.func_name] = dict()
>         methods[f.func_name][str(types)] = f
>
>         def new_f(*args, **kwds):
>             type_str = str(tuple([type(arg) for arg in args]))
>             assert type_str in methods[f.func_name]
>             return methods[f.func_name][type_str](*args, **kwds)
>         new_f.func_name = f.func_name
>
>         return new_f
>
>     return build_method

Neat. I'd come up with the same general idea myself, but since i am a 
worthless slob, i never actually implemented it.

Is there any reason you have to stringify the type signature? Types are 
hashable, so a tuple of types is hashable, so you can just use that as a 
key. Replace "methods[f.func_name][str(types)] = f" with 
"methods[f.func_name][types] = f" and "type_str = str(tuple([type(arg) for 
arg in args]))" with "type_str = tuple(type(arg) for arg in args)". And 
then rename type_str to types throughout.

Also, we can exploit the closureness of new_f to avoid a dict lookup:

f_implementations = methods[f.func_name]
def new_f(*args, **kwds):
    types = tuple(type(arg) for arg in args)
    return f_implementations[types](*args, **kwds)
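
Putting those tweaks together, the whole thing would look something like
this (the add functions at the end are just toy examples of my own):

methods = {}

def method(*types):
    def build_method(f):
        assert len(types) == f.func_code.co_argcount
        # register this implementation under a tuple of types - tuples of
        # types are hashable, so no stringification is needed
        implementations = methods.setdefault(f.func_name, {})
        implementations[types] = f
        def new_f(*args, **kwds):
            key = tuple(type(arg) for arg in args)
            return implementations[key](*args, **kwds)
        new_f.func_name = f.func_name
        return new_f
    return build_method

@method(int, int)
def add(a, b):
    return a + b

@method(str, str)
def add(a, b):
    return a + " " + b

print add(1, 2)       # 3
print add("x", "y")   # x y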


tom

-- 
double mashed, future mashed, millennium mashed; man it was mashed
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Struggling with basics

2005-09-25 Thread Tom Anderson
On Sun, 25 Sep 2005, Jason wrote:

> A week ago I posted a simple little hi-score routine that I was using to 
> learn Python.
>
> I've only just managed to examine the code, and the responses that people 
> gave, and I'm now seriously struggling to understand why things aren't 
> working correctly.

Others have dealt with the string printing problem, so i'll leave that.

The problem with the sorting is that you're not consistent about how 
scores are represented - are they strings or integers? At present, you 
sometimes use one and sometimes the other, with the result that the sort 
basically pukes all over you. To fix this, pick one type (hint: integers), 
and use that consistently. I'll show you how to do that below (although 
it's not exactly hard).

Oh, and i'm a picky git, so i'm going to point out some other flaws in the 
code!

> At present my code is as follows...
>
> import random
> import bisect
>
> class HiScores:
>def __init__(self,hiScores):
>        self.hiScores=[entry for entry in hiScores]

One bug and one wart here.

The wart is the way you initialise self.hiScores - you use a list 
comprehension when you can just call the list builtin:

self.hiScores = list(hiScores)

The bug is that you don't sort the list. If you're certain that the 
initial set of high scores will always come sorted, that's okay, but i'd 
say it was good practice to sort them, just in case.

In fact, i'd punt the addition to addScore:

def __init__(self, hiScores):
    self.hiScores = []
    for score, name in hiScores:
        self.addScore(score, name)

This is the 'Once And Only Once' principle in action; the knowledge about 
how to keep the list sorted is expressed once and only once, in addScore; 
if any other parts of the code need to add items, they call that. This 
means there's only one piece of code you have to check to make sure it's 
going to get this right.

>    def showScores(self):
>        for score,name in self.hiScores:
>            score=str(score).zfill(5)
>            print "%s - %s" % name,score

As has been pointed out, you need to wrap parens round "name, score" to 
make it into a tuple.

Apart from that, i'd skip the string interpolation and just write:

for score, name in self.hiScores:
    print name, "-", str(score).zfill(5)

If you insist on the string interpolation, i'd still elide the 
intermediate variable and write:

for score, name in self.hiScores:
print "%s - %05i" % (name, score)

The %05i in the format string means 'an integer, zero-filled to five 
digits'. Good, eh?

>    def addScore(self,score,name):
>        score.zfill(5)
>        bisect.insort(self.hiScores,(score,name))
>        if len(self.hiScores)==6:
>            self.hiScores.pop()

Two problems there. Well, two and a half.

Firstly, the type confusion - are scores strings or integers? the zfill 
indicates that you're thinking in terms of strings here. You should be 
using integers, so you can just drop that line.

And if you were working with strings, the zfill would still be wrong (this 
is the half problem!) - zfill doesn't affect the string it's called on 
(strings are immutable), it makes a new zero-filled string and returns it. 
You're not doing anything with the return value of that call, so the 
zero-filled string would just evaporate into thin air.

Secondly, bisect.insort sorts the list so that the highest scores are at 
the tail end of the list; list.pop takes things off that same end, so 
you're popping the highest scores, not the lowest! You need to say pop(0) 
to specify that the item should be popped off the head (ie the low end) of 
the list.

Also, i'd be tempted to program defensively and change the test guarding 
the pop to "while len(self.hiScores) > 6:".

All in all, that makes my version:

def addScore(self, score, name):
    bisect.insort(self.hiScores, (int(score), name))
    while len(self.hiScores) > 6:
        self.hiScores.pop(0)

>    def lastScore(self):
>        return self.hiScores[-1][0]

This will return the top score; you want self.hiScores[0][0].
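
Putting all of those changes together, the class would look something like
this (just a sketch, keeping the limit of six entries from my addScore
above):

import bisect

class HiScores:
    def __init__(self, hiScores):
        self.hiScores = []
        for score, name in hiScores:
            self.addScore(score, name)
    def showScores(self):
        for score, name in self.hiScores:
            print "%s - %05i" % (name, score)
    def addScore(self, score, name):
        bisect.insort(self.hiScores, (int(score), name))
        while len(self.hiScores) > 6:
            self.hiScores.pop(0)   # drop the lowest score
    def lastScore(self):
        return self.hiScores[0][0]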

> def main():
>
>  
> hiScores=[('1','Alpha'),('07500','Beta'),('05000','Gamma'),('02500','Delta'),('0','Epsilon')]

Here you've got scores as strings, and this is the root of the problem. 
Change this to:

hiScores=[(1,'Alpha'),(7500,'Beta'),(5000,'Gamma'),(2500,'Delta'),(0,'Epsilon')]

Note that i've taken the leading zeroes off - leading zeroes on integers 
in python are a magic signal that the number is octal (yes, base eight!), 
which is not what you want at all.

>    a=HiScores(hiScores)
>    print "Original Scores\n---"
>    a.showScores()
>
>    while 1:

"while True:" is preferred here.

>newScore=str(random.randint(0,1))

Take out the str().

>if newScore  > a.lastScore():
>print "Congratulations, you scored %s " % newScore

Make that a %i (or a %05i).

>name=raw_input("Please enter your name :")
>  

Re: Proposal: add sys to __builtins__

2005-09-26 Thread Tom Anderson
On Sun, 25 Sep 2005, James Stroud wrote:

> I'm into *real* purity. I would rather begin every script:
>
>  from python import list, str, dict, sys, os
>
> Oh wait. I only use dict in less than 50% of my scripts:
>
>  from python import list, str, sys, os
>
> That's better.

What? How exactly is that pure? "from x import y" is bletcherosity 
incarnate! You should do:

import python

s = python.str(5)
# etc

And don't let me see you importing like that again!

tom

-- 
90% mental, 25% effort, 8% mathematics
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Metaclasses, decorators, and synchronization

2005-09-26 Thread Tom Anderson
On Mon, 26 Sep 2005, Jp Calderone wrote:

> On Sun, 25 Sep 2005 23:30:21 -0400, Victor Ng <[EMAIL PROTECTED]> wrote:
>> You could do it with a metaclass, but I think that's probably overkill.
>> 
>> It's not really efficient as it's doing test/set of an RLock all the
>> time, but hey - you didn't ask for efficient.  :)
>
> There's a race condition in this version of synchronized which can allow two 
> or more threads to execute the synchronized function simultaneously.

You could define a meta-lock, and use that to protect the 
lock-installation action.

To avoid bottlenecking on the single meta-lock, you could put a meta-lock 
in each class, and use that to protect installation of locks in instances 
of that class. You would, of course, then need a meta-meta-lock to protect 
those.

Also, and more helpfully, you can get a modest speedup by taking out the 
explicit test for the presence of the lock, and just rely on getting an 
AttributeError from self._sync_lock.acquire() if it's not there; you could 
then install the lock in the except suite.
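
Something like this, perhaps - only a sketch, with made-up names
(synchronized, _sync_lock), and a per-class meta-lock would just mean
hanging the lock off the class rather than the module:

import threading

_meta_lock = threading.Lock()

def synchronized(f):
    def wrapper(self, *args, **kwds):
        try:
            self._sync_lock.acquire()
        except AttributeError:
            # no per-instance lock yet - install one under the meta-lock,
            # so two threads can't both install different locks
            _meta_lock.acquire()
            try:
                self.__dict__.setdefault('_sync_lock', threading.RLock())
            finally:
                _meta_lock.release()
            self._sync_lock.acquire()
        try:
            return f(self, *args, **kwds)
        finally:
            self._sync_lock.release()
    return wrapper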

tom

-- 
90% mental, 25% effort, 8% mathematics
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 350: Codetags

2005-09-26 Thread Tom Anderson
On Mon, 26 Sep 2005, Micah Elliott wrote:

> Please read/comment/vote.  This circulated as a pre-PEP proposal 
> submitted to c.l.py on August 10, but has changed quite a bit since 
> then.  I'm reposting this since it is now "Open (under consideration)" 
> at .

Seems generally fine to me; i'm not the best person to comment, though, 
since it's highly unlikely i'll use them.

I did notice one thing that is sort of wrong, though:

:Objection: *WorkWeek* is an obscure and uncommon time unit.

:Defense: That's true but it is a highly suitable unit of granularity
 for estimation/targeting purposes, and it is very compact.  The
 `ISO 8601`_ is widely understood but allows you to only specify
 either a specific day (restrictive) or month (broad).

Actually, ISO 8601 includes a week notation. Have a read of this:

http://www.cl.cam.ac.uk/~mgk25/iso-time.html

Which explains that you can write things like 2005-W20 to mean the 20th 
week of 2005, and ISO won't send you to hell for it.

tom

-- 
Brace yourself for an engulfing, cowardly autotroph! I want your photosynthetic 
apparatii!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 350: Codetags

2005-09-27 Thread Tom Anderson
On Tue, 27 Sep 2005, Bengt Richter wrote:

> 5) Sometimes time of day can be handy, so maybe <2005-09-26 12:34:56> 
> could be recognized?

ISO 8601 suggests writing date-and-times like 2005-09-26T12:34:56 - using 
a T as the separator between date and time. I don't really like the look 
of it, but it is a standard, so i'd suggest using it.
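
For what it's worth, the datetime module in the standard library already
emits exactly that form:

>>> import datetime
>>> datetime.datetime(2005, 9, 26, 12, 34, 56).isoformat()
'2005-09-26T12:34:56'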

Bear in mind that if you don't, a black-helicopter-load of blue-helmeted 
goons will come and apply the rubber hose argument to you.

tom

-- 
On two occasions I have been asked [by members of Parliament], 'Pray, Mr. 
Babbage, if you put into the machine wrong figures, will the right answers come 
out?' I am not able rightly to apprehend the kind of confusion of ideas that 
could provoke such a question. -- Charles Babbage
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Silly function call lookup stuff?

2005-09-28 Thread Tom Anderson
On Tue, 27 Sep 2005, Dan Sommers wrote:

> On Wed, 28 Sep 2005 00:38:23 +0200,
> Lucas Lemmens <[EMAIL PROTECTED]> wrote:
>
>> On Tue, 27 Sep 2005 13:56:53 -0700, Michael Spencer wrote:
>
>>> Lucas Lemmens wrote:
>
 Why isn't the result of the first function-lookup cached so that 
 following function calls don't need to do the function-lookup at all?
>>>
>>> I guess because the function name may be re-bound between loop 
>>> iterations.  Are there good applications of this?  I don't know.
>>
>> Yuk I'd hate that. I think it would be extremely rare.
>
> With duck typing, I think it would be fairly common:
>
>    def process_list_of_things_to_process( list_of_things_to_process ):
>        for x in list_of_things_to_process:
>            x.process( )

That's not what's being talked about here. In this code, x.process would 
be a different method each time through, and wouldn't be cached (this 
isn't C++, you know!).

We're talking about this case:

class Foo:
    def process(self):
        return "foo's version of process"

def bar(foo):
    foo.process = lambda: "bar's version of process"

x = Foo()
print x.process()
bar(x)
print x.process()

Naive local method cacheing would get this wrong. Worldly-wise method 
cacheing that would get this right would be a nightmare to implement.
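
(The manual version of the optimisation, for when the programmer knows the
name won't be rebound, is of course just to hoist the lookup out of the
loop yourself:

process_x = x.process   # one lookup, binding the method once
while foo():
    process_x()         # plain local variable access from here on

but the interpreter can't know that that's safe to do for you.)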

A better bet might be to speed up method lookup. I should say that i know 
bugger all about how python's method lookup is implemented now, but i 
believe that it's basically a dict lookup in a per-object feature 
dictionary (ie self.__dict__). It might be possible to instead use a sort 
of vtable, with a translucent layer of indirection wrapped round it to 
allow for python's dynamism.

Okay, thinking out loud follows. The following is pseudocode showing how 
the interpreter is implemented; any resemblance to an actual programming 
language, living or dead, is purely coincidental.

The current implementation looks something like this:

def classmembers(cl):
    # ... elided: yields (name, member) pairs for the members of class cl ...

def new(cl):
    "Make a new instance of a class cl. An object is a pair (cl, members),
    where cl is its class and members is a dict of its members."
    members = {}
    for name, member in classmembers(cl):
        members[name] = member
    obj = (cl, members)
    return obj

def lookup(obj, name):
    members = obj[1]
    member = members[name]
    return member

def bind(obj, name, member):
    members = obj[1]
    members[name] = member

Since the members dict is mutable, there's nothing that can be cached 
here. This is what i'd suggest ...

type mtuple:
    # ... elided: like a tuple, but existing slots can be overwritten in place ...

def new(cl):
    index = {}
    members = [cl, index]
    for name, member in classmembers(cl):
        index[name] = len(members)
        members.append(member)
    obj = mtuple(members)
    return obj

# the index dict is *never* modified by any code anywhere

def lookup(obj, name):
    index = obj[1]
    offset = index[name]
    value = obj[offset]
    return value

def bind(obj, name, value):
    index = obj[1]
    offset = index[name]
    obj[offset] = value

So far, this wouldn't be any faster; in fact, it would be slightly slower, 
due to the extra layer of indirection.

However, now we can expose a little of the lookup mechanism to the 
execution machinery:

def offset(obj, name):
    index = obj[1]
    offset = index[name]
    return offset

def load(obj, offset):
    value = obj[offset]
    return value

And refactor:

def lookup(obj, name):
    offset = offset(obj, name)
    value = load(obj, offset)
    return value

We now have something cachable. Given code like:

while (foo()):
    x.bar()

The compiler can produce code like:

_bar = offset(x, "bar")
while (foo()):
    load(x, _bar)()

It has to do the lookup in the dict once, and after that, just has to do a 
simple load. The crucial thing is, if the method gets rebound, it'll be 
rebound into exactly the same slot, and everything keeps working fine.

Note that the code above doesn't handle binding of members to names that 
weren't defined in the class; it thus doesn't support dynamic extension of 
an object's interface, or, er, member variables. However, these are fairly 
simple to add, via an extra members dict (which i create lazily):

def new(cl):
    index = {}
    extras = None
    members = [cl, index, extras]
    for name, member in classmembers(cl):
        index[name] = len(members)
        members.append(member)
    obj = mtuple(members)
    return obj

def lookup(obj, name):
    index = obj[1]
    try:
        offset = index[name]
        value = obj[offset]
    except KeyError:
        extras = obj[2]
        if (extras == None): raise KeyError, name
        value = extras[name]
    return value

def bind(obj, name, value):
  

Re: A rather unpythonic way of doing things

2005-10-01 Thread Tom Anderson
On Thu, 29 Sep 2005, Peter Corbett wrote:

> One of my friends has recently taken up Python, and was griping a bit 
> about the language (it's too "prescriptive" for his tastes). In 
> particular, he didn't like the way that Python expressions were a bit 
> crippled. So I delved a bit into the language, and found some sources of 
> syntactic sugar that I could use, and this is the result:
>
> http://www.pick.ucam.org/~ptc24/yvfc.html

It's this sort of thing that makes it clear beyond all shadow of a doubt 
that Cambridge should be razed to the ground.

Keep up the good work.

tom

-- 
I'm not quite sure how that works but I like it ...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: User-defined augmented assignment

2005-10-01 Thread Tom Anderson
On Thu, 29 Sep 2005, Pierre Barbier de Reuille wrote:

> a discussion began on python-dev about this. It began by a bug report, 
> but is shifted and it now belongs to this discussion group.
>
> The problem I find with augmented assignment is it's too complex, it's
> badly explained, it's error-prone. And most of all, I don't see any
> use-case for it !
>
> The most common error is to consider that :
>
> a += b <==> a.__iadd__(b)
>
> when the truth is :
>
> a += b <==> a = a.__iadd__(b)
>
> which can be very confusing, as the two "a" are not necessarily the
> same.

Indeed. I certainly didn't realise that was how it worked.

> So, what I would suggest is to drop the user-defined augmented 
> assignment and to ensure this equivalence :
>
> a X= b <=> a = a X b
>
> with 'X' begin one of the operators.

That seems quite an odd move. Your proposal would lead to even more 
surprising behaviour; consider this:

a = [1, 2, 3]
b = a
a += [4, 5, 6]
print b

At present, this prints [1, 2, 3, 4, 5, 6]; if we were to follow your 
suggestion, it would be [1, 2, 3].
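
You can see the difference in the interpreter - for a list, __iadd__
mutates in place and the object's identity is preserved, whereas for an
int, += has no choice but to rebind:

>>> a = [1, 2, 3]
>>> before = id(a)
>>> a += [4, 5, 6]
>>> id(a) == before
True
>>> n = 1
>>> before = id(n)
>>> n += 1
>>> id(n) == before
False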

So, -1, i'm afraid.

I think the right solution here is staring us in the face: if everyone 
seems to think that:

a += b <==> a.__iadd__(b)

Then why not make it so that:

a += b <==> a.__iadd__(b)

Principle of Least Surprise and all that.

Since not everything that should support += is mutable (integers, for 
example), how about saying that if the recipient of a += doesn't have an 
__iadd__ method, execution falls back to:

a = a + b

I say 'a + b', because that means we invoke __add__ and __radd__ 
appropriately; indeed, the __add__ vs __radd__ thing is a precedent for 
this sort of fallback.

Doesn't that leave everyone happy?

tom

-- 
I'm not quite sure how that works but I like it ...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3! Finally!

2005-10-01 Thread Tom Anderson
On Fri, 30 Sep 2005, Stefan Behnel wrote:

> I just firefoxed to Python.org and clicked on the bz2 link at
> http://python.org/2.4.2/ and what did I get?
>
> Python-3.4.2.tar.bz2 !!
>
> Python 3 - what we've all been waiting for, finally, it's there!

Not only that, but they've skipped the tiresome 3.0.x early release 
teething phase and gone straight to the mature, solid-as-a-rock middle 
releases! God, i love python!

Hey, and it's still got lambdas! WE WON!!!

tom

-- 
I'm not quite sure how that works but I like it ...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Exception raising, and performance implications.

2005-10-04 Thread Tom Anderson
On Mon, 3 Oct 2005, it was written:

> "leo" <[EMAIL PROTECTED]> writes:
>
>> I come from a java background, where Exceptions are a big Avoid Me, but 
>> are the performance implications the same in Python?
>
> Well, you could measure it experimentally pretty easily, but anyway, 
> Python exceptions are much less expensive than Java exceptions.

Really? How come? What is it that stops java using the same technique as 
python? There's been quite a lot of work put into making java fast, so 
it'd be interesting if we had something they didn't.
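
Measuring the python side is easy enough, mind - a rough sketch with
timeit (the numbers will obviously vary from machine to machine):

import timeit

plain = timeit.Timer("x = d['a']", "d = {'a': 1}")
raising = timeit.Timer("try:\n    x = d['b']\nexcept KeyError:\n    x = None",
                       "d = {'a': 1}")
print "no exception:", plain.timeit()
print "exception:   ", raising.timeit()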

tom

-- 
What we learn about is not nature itself, but nature exposed to our methods of 
questioning. -- Werner Heisenberg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary interface

2005-10-04 Thread Tom Anderson
On Tue, 4 Oct 2005, Robert Kern wrote:

> Antoon Pardon wrote:
>
>>   class Tree:
>>
>> def __lt__(self, term):
>>   return set(self.iteritems()) < set(term.iteritems())
>>
>> def __eq__(self, term):
>>   return set(self.iteritems()) == set(term.iteritems())
>>
>> Would this be a correct definition of the desired behaviour?
>
> No.
>
> In [1]: {1:2} < {3:4}
> Out[1]: True
>
> In [2]: set({1:2}.iteritems()) < set({3:4}.iteritems())
> Out[2]: False
>
>> Anyone a reference?
>
> The function dict_compare in dictobject.c .

Well there's a really helpful answer. I'm intrigued, Robert - since you 
know the real answer to this question, why did you choose to tell the 
Antoon that he was wrong, not tell him in what way he was wrong, certainly 
not tell him how to be right, but just tell him to read the source, rather 
than simply telling him what you knew? Still, at least you told him which 
file to look in. And if he knows python but not C, or gets lost in the 
byzantine workings of the interpreter, well, that's his own fault, i 
guess.

So, Antoon, firstly, your implementation of __eq__ is, i believe, correct.

Your implementation of __lt__ is, sadly, not. While sets take "<" to mean 
"is a proper subset of", for dicts, it's a more conventional comparison 
operation, which constitutes a total ordering over all dicts (so you can 
sort with it, for example). However, since dicts don't really have a 
natural total ordering, it is ever so slightly arbitrary.

The rules for ordering on dicts are, AFAICT:

- If one dict has fewer elements than the other, it's the lesser
- If not, find the smallest key for which the two dicts have different 
values (counting 'not present' as a value)
-- If there is no such key, the dicts are equal
-- If the key is present in one dict but not the other, the dict in which 
it is present is the lesser
-- Otherwise, the dict in which the value is lesser is itself the lesser

In code:

def dict_cmp(a, b):
    diff = cmp(len(a), len(b))
    if (diff != 0):
        return diff
    for key in sorted(set(a.keys() + b.keys())):
        if (key not in a):
            return 1
        if (key not in b):
            return -1
        diff = cmp(a[key], b[key])
        if (diff != 0):
            return diff
    return 0

I assume your tree has its items sorted by key value; that means there's 
an efficient implementation of this using lockstep iteration over the two 
trees being compared.

Another way of looking at it is in terms of list comparisons: comparing 
two dicts is the same as comparing the sorted list of keys in each dict, 
breaking ties by looking at the list of values, in order of their keys. 
There's a quirk, in that a shorter dict is always less than a longer dict, 
regardless of the elements.

In code:

def dict_cmp_alternative(a, b):
    diff = cmp(len(a), len(b))
    if (diff != 0):
        return diff
    ka = sorted(a.keys())
    kb = sorted(b.keys())
    diff = cmp(ka, kb)
    if (diff != 0):
        return diff
    va = [a[k] for k in ka]
    vb = [b[k] for k in kb]
    return cmp(va, vb)

Hope this helps.

tom

PS In case it's of any use to you, here's the code i used to test these:

import random

def rnd(n):
    return random.randint(0, n)

def randomdict(maxlen=20, range=100):
    return dict((rnd(range), rnd(range)) for x in xrange(rnd(maxlen)))

def test(cmp2, n=1000):
    for i in xrange(n):
        a = randomdict()
        b = randomdict()
        if ((cmp(a, b)) != (cmp2(a, b))):
            raise AssertionError, (a, b)

-- 
What we learn about is not nature itself, but nature exposed to our methods of 
questioning. -- Werner Heisenberg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary interface

2005-10-05 Thread Tom Anderson
On Tue, 4 Oct 2005, Robert Kern wrote:

> Tom Anderson wrote:
>> On Tue, 4 Oct 2005, Robert Kern wrote:
>>
>>> Antoon Pardon wrote:
>>>
>>>> Anyone a reference?
>>>
>>> The function dict_compare in dictobject.c .
>>
>> Well there's a really helpful answer.
>
> Well, *I* thought it was.

And indeed it was. I'm sorry i was so rude - i must have been in a bad 
mood. My apologies.

> What do you want? Personalized Python tutorials delivered by candygram?

YES DAMMIT! WITH BIG KISS FROM GUIDO!

tom

-- 
The revolution is here. Get against the wall, sunshine. -- Mike Froggatt
-- 
http://mail.python.org/mailman/listinfo/python-list


Idle bytecode query on apparently unreachable returns

2005-10-09 Thread Tom Anderson
Evening all,

Here's a brief chat with the interpretator:

Python 2.4.1 (#2, Mar 31 2005, 00:05:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def fib(x):
...     if (x == 1):
...         return 1
...     else:
...         return x * fib((x - 1))
...
>>> import dis
>>> dis.dis(fib)
  2           0 LOAD_FAST                0 (x)
              3 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
              9 JUMP_IF_FALSE            8 (to 20)
             12 POP_TOP

  3          13 LOAD_CONST               1 (1)
             16 RETURN_VALUE
             17 JUMP_FORWARD            19 (to 39)
        >>   20 POP_TOP

  5          21 LOAD_FAST                0 (x)
             24 LOAD_GLOBAL              1 (fib)
             27 LOAD_FAST                0 (x)
             30 LOAD_CONST               1 (1)
             33 BINARY_SUBTRACT
             34 CALL_FUNCTION            1
             37 BINARY_MULTIPLY
             38 RETURN_VALUE
        >>   39 LOAD_CONST               0 (None)
             42 RETURN_VALUE

I'm no bytecode connoisseur, but having read 
, i more or less get this.

What puzzles me, though, are bytecodes 17, 39 and 42 - surely these aren't 
reachable? Does the compiler just throw in a default 'return None' 
epilogue, with routes there from every code path, even when it's not 
needed? If so, why?

tom

-- 
News flash: there's no deep meaning or hidden message BECAUSE DAVID LYNCH IS 
INSANE
-- 
http://mail.python.org/mailman/listinfo/python-list


Python's garbage collection was Re: Python reliability

2005-10-10 Thread Tom Anderson
On Mon, 10 Oct 2005, it was written:

> Ville Voipio <[EMAIL PROTECTED]> writes:
>
>> Just one thing: how reliable is the garbage collecting system? Should I 
>> try to either not produce any garbage or try to clean up manually?
>
> The GC is a simple, manually-updated reference counting system augmented 
> with some extra contraption to resolve cyclic dependencies. It's 
> extremely easy to make errors with the reference counts in C extensions, 
> and either leak references (causing memory leaks) or forget to add them 
> (causing double-free crashes).

Has anyone looked into using a real GC for python? I realise it would be a 
lot more complexity in the interpreter itself, but it would be faster, 
more reliable, and would reduce the complexity of extensions.

Hmm. Maybe it wouldn't make extensions easier or more reliable. You'd 
still need some way of figuring out which variables in C-land held 
pointers to objects; if anything, that might be harder, unless you want to 
impose a horrendous JAI-like bondage-and-discipline interface.

> There is no way you can avoid making garbage.  Python conses everything, 
> even integers (small positive ones are cached).

So python doesn't use the old SmallTalk 80 SmallInteger hack, or similar? 
Fair enough - the performance gain is nice, but the extra complexity would 
be a huge pain, i imagine.

tom

-- 
Fitter, Happier, More Productive.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function decorator that caches function results

2005-10-10 Thread Tom Anderson
On Mon, 10 Oct 2005, Steven D'Aprano wrote:

> On Sun, 09 Oct 2005 17:39:23 +0200, Fredrik Lundh wrote:
>
>> only if you're obsessed with CPython implementation details.
>
> No. I'm obsessed with finding out what closures are, since nobody seems 
> to have a good definition of them!

On the contrary - the problem is that several people have good but 
incompatible definitions of them!

I think you pretty much understand the mechanics of what's going on; i've 
spent god knows how long trying to write a clear but accurate definition 
of closures, but i can't, so i'm just going to say that (a) closures are 
functions, and (b) the things in func_closure are not closures - they're 
the variables over which a closure (the function you're inspecting) is 
closed; this is just sloppy terminology on the part of python's 
implementors.

Okay, a crack at a definition: a closure is a function in which some of 
the variable names refer to variables outside the function. And i don't 
mean global variables - i mean somebody else's locals; call them 'remote 
variables'. The 'somebody else' who owns those locals is a function whose 
scope encloses the definition of the closure function. A crucial point is 
that these names keep working even after the scope where they started out 
dies - when a closure escapes the 'mother scope' that houses its remote 
variables, those variables effectively transcend, becoming, well, i don't 
know. Nonlocal variables or something.
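
A concrete example of what i mean (make_adder is just an illustrative
name):

def make_adder(n):
    def add(x):
        return x + n   # n is a 'remote variable' - it lives in make_adder's scope
    return add

add_five = make_adder(5)
print add_five(2)   # 7 - n lives on even though make_adder has returned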

For some reason, this makes me think of Stephen Baxter novels. And a 
little of H. P. Lovecraft. I guess the point is that you shouldn't delve 
too deeply into the mysteries of functional programming if you wish to 
retain your humanity.

tom

-- 
Orange paint menace
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Idle bytecode query on apparently unreachable returns

2005-10-12 Thread Tom Anderson
On Tue, 11 Oct 2005, Raymond Hettinger wrote:

> [Tom Anderson]:
>
>> What puzzles me, though, are bytecodes 17, 39 and 42 - surely these 
>> aren't reachable? Does the compiler just throw in a default 'return 
>> None' epilogue, with routes there from every code path, even when it's 
>> not needed? If so, why?
>
> Since unreachable code is never executed, there is no performance payoff 
> for optimizing it away.  It is not hard to write a dead-code elimination 
> routine, but why bother?

Fair enough - it wasn't a criticism, i was just wondering if those 
bytecodes were serving some crucial function i hadn't appreciated!

> It would save a few bytes, slow down compilation time, save nothing at 
> runtime, and make the compiler more complex/fragile.

I have this vague idea that a compiler could be written in such a way 
that, rather than dead code being weeded out by some 
extra-complexity-inducing component, it would simply never be generated in 
the first place; that could perhaps even be simpler than the situation at 
present. I have tree reduction and SSA graphs frolicking in soft focus in 
my imagination. But, that said, i have no experience of actually writing 
compilers, so maybe this SSA stuff is harder than it sounds!

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's garbage collection was Re: Python reliability

2005-10-12 Thread Tom Anderson
On Wed, 12 Oct 2005, Jorgen Grahn wrote:

> On Mon, 10 Oct 2005 20:37:03 +0100, Tom Anderson <[EMAIL PROTECTED]> wrote:
>> On Mon, 10 Oct 2005, it was written:
> ...
>>> There is no way you can avoid making garbage.  Python conses everything,
>>> even integers (small positive ones are cached).
>>
>> So python doesn't use the old SmallTalk 80 SmallInteger hack, or similar?
>
> If the SmallInteger hack is something like this, it does:
>
>>>> a = 42
>>>> b = 42
>>>> a is b
> True
>>>> a = 42000
>>>> b = 42000
>>>> a is b
> False
>>>>
>
> ... which I guess is what if referred to above as "small positive
> ones are cached".

That's not what i meant.

In both smalltalk and python, every single variable contains a reference 
to an object - there isn't the object/primitive distinction you find in 
less advanced languages like java.

Except that in smalltalk, this isn't true: in ST, every variable *appears* 
to contain a reference to an object, but implementations may not actually 
work like that. In particular, SmallTalk 80 (and some earlier smalltalks, 
and all subsequent smalltalks, i think) handles small integers (those that 
fit in wordsize-1 bits) differently: all variables contain a word, whose 
bottom bit is a tag bit; if it's one, the word is a genuine reference, and 
if it's zero, the top bits of the word contain a signed integer. The 
innards of the VM know about this (where it matters), and do the right 
thing. All this means that small (well, smallish - up to a billion!) 
integers can be handled with zero heap space and much reduced instruction 
counts. Of course, it means that references are more expensive, since they 
have to be checked for integerness before dereferencing, but since this is 
a few instructions at most, and since small integers account for a huge 
fraction of the variables in most programs (as loop counters, array 
indices, truth values, etc), this is a net win.

See the section 'Representation of Small Integers' in:

http://users.ipa.net/~dwighth/smalltalk/bluebook/bluebook_chapter26.html#TheObjectMemory26

The precise implementation is sneaky - the tag bit for an integer is zero, 
so in many cases you can do arithmetic directly on the word, with a few 
judicious shifts here and there; the tag bit for a pointer is one, and the 
pointer is stored in two's-complement form *with the bottom bit in the 
same place as the tag bit*, so you can recover a full-length pointer from 
the word by complementing the whole thing, rather than having to shift. 
Since pointers are word-aligned, the bottom bit is always a zero, so in 
the complement it's always a one, so it can also be the status bit!
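
As a toy illustration of the tagging idea (in python, with unbounded ints
standing in for machine words, and glossing over the pointer-complementing
trick; the function names are made up):

def tag_small_int(n):
    # tag bit 0: the integer itself lives in the rest of the word
    return n << 1

def is_small_int(word):
    return (word & 1) == 0

def untag_small_int(word):
    return word >> 1    # arithmetic shift, so negative values survive

print untag_small_int(tag_small_int(42))   # 42, with no heap allocation in a real VM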

I think this came from LISP initially (most things do) and was probably 
invented by Guy Steele (most things were).

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's garbage collection was Re: Python reliability

2005-10-12 Thread Tom Anderson
On Mon, 10 Oct 2005, it was written:

> Tom Anderson <[EMAIL PROTECTED]> writes:
>
>> Has anyone looked into using a real GC for python? I realise it would 
>> be a lot more complexity in the interpreter itself, but it would be 
>> faster, more reliable, and would reduce the complexity of extensions.
>
> The next PyPy sprint (this week I think) is going to focus partly on GC.

Good stuff!

>> Hmm. Maybe it wouldn't make extensions easier or more reliable. You'd 
>> still need some way of figuring out which variables in C-land held 
>> pointers to objects; if anything, that might be harder, unless you want 
>> to impose a horrendous JAI-like bondage-and-discipline interface.
>
> I'm not sure what JAI is (do you mean JNI?)

Yes. Excuse the braino - JAI is Java Advanced Imaging, a component whose 
horribleness exceeds even that of JNI, hence the confusion.

> but you might look at how Emacs Lisp does it.  You have to call a macro 
> to protect intermediate heap results in C functions from GC'd, so it's 
> possible to make errors, but it cleans up after itself and is generally 
> less fraught with hazards than Python's method is.

That makes a lot of sense.

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's garbage collection was Re: Python reliability

2005-10-12 Thread Tom Anderson
On Tue, 11 Oct 2005, Alex Martelli wrote:

> Tom Anderson <[EMAIL PROTECTED]> wrote:
>   ...
>> Has anyone looked into using a real GC for python? I realise it would be a
>
> If you mean mark-and-sweep, with generational twists,

Yes, more or less.

> that's what gc uses for cyclic garbage.

Do you mean what python uses for cyclic garbage? If so, i hadn't realised 
that. There are algorithms for extending refcounting to cyclic structures 
(i forget the details, but you sort of go round and experimentally 
decrement an object's count and see it ends up with a negative count or 
something), so i assumed python used one of those. Mind you, those are 
probably more complex than mark-and-sweep!

>> lot more complexity in the interpreter itself, but it would be faster, 
>> more reliable, and would reduce the complexity of extensions.
>
> ???  It adds no complexity (it's already there), it's slower,

Ah. That would be why all those java, .net, LISP, smalltalk and assorted 
other VMs out there, with decades of development, hojillions of dollars 
and the serried ranks of some of the greatest figures in computer science 
behind them all use reference counting rather than garbage collection, 
then.

No, wait ...

> it is, if anything, LESS reliable than reference counting (which is way 
> simpler!),

Reliability is a red herring - in the absence of ill-behaved native 
extensions, and with correct implementations, both refcounting and GC are 
perfectly reliable. And you can rely on the implementation being correct, 
since any incorrectness will be detected very quickly!

> and (if generalized to deal with ALL garbage) it might make it almost 
> impossible to write some kinds of extensions (ones which need to 
> interface existing C libraries that don't cooperate with whatever GC 
> collection you choose).

Lucky those existing C libraries were written to use python's refcounting!

Oh, you have to write a wrapper round the library to interface with the 
automatic memory management? Well, as it happens, the stuff you need to do 
is more or less identical for refcounting and GC - the extension has to 
tell the VM which of the VM's objects it holds references to, so that the 
VM knows that they aren't garbage.

> Are we talking about the same thing?!

Doesn't look like it, does it?

>> So python doesn't use the old SmallTalk 80 SmallInteger hack, or similar?
>> Fair enough - the performance gain is nice, but the extra complexity would
>> be a huge pain, i imagine.
>
> CPython currently is implemented on a strict "minimize all tricks" 
> strategy.

A very, very sound principle. If you have the aforementioned decades, 
hojillions and serried ranks, an all-tricks-turned-up-to-eleven strategy 
can be made to work. If you're a relatively small non-profit outfit like 
the python dev team, minimising tricks buys you reliability and agility, 
which is, really, what we all want.

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function decorator that caches function results

2005-10-12 Thread Tom Anderson
On Tue, 10 Oct 2005, it was written:

> Tom Anderson <[EMAIL PROTECTED]> writes:
>
>> Okay, a crack at a definition: a closure is a function in which some of 
>> the variable names refer to variables outside the function.
>
> That's misleading,

You're right. I don't think it's wrong, but it depends on the reader 
knowing what i mean by 'variable'. I didn't make it clear that variables 
live in an *invocation* of a function, not a definition. IOW, in this 
code:

def foo():
    x = 1
foo()
foo()

There are two occurrances of assignment, and they are to two *different* 
variables.

Hmm. Maybe i *am* wrong - looking at your example:

>  For example:
>  def f(n):
>     def g(): return n
>     return g
>  h1 = f(1)
>  h2 = f(2)

h1 and h2 are closures, while g is "a function in which some of the 
variable names refer to variables outside the function", but is not a 
closure.

So a closure is something that is created when a function is taken out of 
its lexical environment (through being returned, or passed downwards, or 
stored on the heap); when you take it out, it retains a connection to that 
environment. What we need now is a good metaphor for this ...

> I'd say a closure is a combination of a function (executable code) and a 
> lexical environment (the values of the function's free variables, as 
> taken from surrounding scopes at the time the function was created).

I was trying to avoid the word 'lexical', since a lot of people aren't too 
clear on what that means.

> h1 and h2 are two different closures.  They have the same executable
> code but their environments are different.  In h1, n=1, but in h2, n=2.
> So, h1() will return 1 but h2() will return 2.  Is there really anything
> confusing about this?  All that's happened is that when you call f, f
> allocates a memory slot for n.  g makes a reference to the slot and
> then f returns.  Since the reference to the slot still exists, the slot
> doesn't get GC'd.  When you call f again, it allocates a new slot.

That's quite a good way of explaining it. The thing about closures is that 
they're really obvious when you actually write them, but rather tricky to 
define in words. So, perhaps the best explanation is to walk the reader 
through an example.

> This is all described in SICP (mitpress.mit.edu/sicp).

A lot of things are described in SICP. ISTM that someone should not have 
to read the whole of SICP to understand what closures are.

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sqlstring -- a library to build a SELECT statement

2005-10-20 Thread Tom Anderson
On Thu, 20 Oct 2005, [EMAIL PROTECTED] wrote:

> On this line of thought, what about the += operator?  That might be more 
> intuative than //.  I could even use -= for not in.

You're going to have to explain to me how using an assignment operator for 
something other than assignment is intuitive!

-1 on this one from me, i'm afraid.

Using 'in' would be good. It does require some truly puke-inducing 
contortions, though; since 'in' calls __contains__ on the right-hand 
operand, and that's likely to be a list, or some other type that's not 
under your control, you have to cross your fingers and hope that whatever 
it is implements __contains__ with equality tests with the probe object on 
the left-hand side and the candidates on the right (as lists do, at least 
in 2.4.1). then, you just have to make your table names do the right thing 
when compared to strings.
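
Just to show what i mean about the order of the comparisons, a toy with
made-up names:

>>> class Probe(object):
...     def __eq__(self, other):
...         print "compared against", repr(other)
...         return False
...
>>> Probe() in ["name", "speed"]
compared against 'name'
compared against 'speed'
False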

It's a shame (sort of) that you can't define entirely new operators in 
python. What we need is a __operate__(self, op, arg) special method, so 
you could do:

>>> class Operable:
...     def __operate__(self, op, arg):
...         print "operating with", op, "on", arg
...
>>> o = Operable()
>>> o <~> "foo"
operating with <~> on foo

I'm sure that would do *wonders* for program readability :).

tom

-- 
NOW ALL ASS-KICKING UNTIL THE END
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sqlstring -- a library to build a SELECT statement

2005-10-20 Thread Tom Anderson

On Thu, 20 Oct 2005, Pierre Quentel wrote:


> [EMAIL PROTECTED] wrote:
>
>> My solution is sqlstring. A single-purpose library: to create SQL
>> statement objects.
>
> With the same starting point - I don't like writing SQL strings inside
> Python code either - I have tested a different approach : use the Python
> list comprehension / generator expression syntax for the select requests
>
> For instance :
>
> s = query(r.name for r in planes if r.speed > 500)
> for item in s:
>     print s
>
> query is a class whose instances are created with the generator
> expression as argument. The matching SQL request is built in the
> __init__ method, here :
>
> SELECT r.name FROM planes AS r WHERE r.speed > 500

That, sir, is absolute genius.

Evil as fuck, but still absolute genius.

tom

--
NOW ALL ASS-KICKING UNTIL THE END
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: A macro editor

2005-10-20 Thread Tom Anderson
On Thu, 20 Oct 2005, Diez B. Roggisch wrote:

> So - _I_ think the better user-experience comes froma well-working easy 
> to use REPL to quickly give the scripts a try.

I'd agree with that. Which is better, a difficult language with lots of 
fancy tools to help you write it, or an easy language?

I don't know Groovy, but having looked at some examples, it looks like 
java with a makeover, which, compared to python, sounds like a difficult
language.

As for python vs ruby, i can't really offer any insights on the languages 
themselves. Personally, i'd go for python, but that's because i know 
python and not ruby.

tom

-- 
NOW ALL ASS-KICKING UNTIL THE END
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python vs Ruby

2005-10-20 Thread Tom Anderson
On Thu, 20 Oct 2005, Amol Vaidya wrote:

> "Casey Hawthorne" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
>> What languages do you know already? What computer science concepts do 
>> you know? What computer programming concepts do you know? Have you 
>> heard of Scheme?

Good questions!

>> Ruby is a bit Perl like -- so if you like Perl, chances are you might 
>> like Ruby.

I don't think rubyists would appreciate that description. Ruby may be 
heavier on the funky symbols than python, but it's a very clean, elegant, 
usable, well-thought-out and deeply object-oriented language - in other 
words, nothing at all like perl.

>> Python is more like Java.

Python is *nothing* like java.

>> I have heard, but have not been able to verify that if a program is
>> about
>> 10,000 lines in C++
>> it is about
>> 5,000 lines in Java
>> and it is about
>> 3,000 lines in Python (Ruby to?)

ITYM 300. Yes, ruby too.

> I've done a lot of studying on my own, and taken the classes that my 
> high-school offers. I feel that I have a fairly good understanding of 
> Java, and basic OO concepts due to that. I've created some semi-complex 
> programs in java, in my opinion, such as networked checkers, 8-player 
> blackjack, a space-shooter type game, a copy of mario (one level, 
> anyway), and some other stuff. I've also done a bit of studying on C. 
> I've done a few projects in C, including another space-shooter type of 
> game using SDL, an IRC client and some simple database-type programs. I 
> also gave a shot at assembly using NASM for x86 before, but didn't get 
> too far. I wrote some trivial code -- wrote to the video buffer, played 
> with some bios interrupts, stuff like that. The only thing I did in 
> assembly was create a program that loads at boot-up, and loads another 
> program that just reiterates whatever you type in. I only did that 
> because I was curious. That's about as far as my programming 
> knowledge/experience goes.

An excellent start!

> Well, I'm not sure what you mean by programming concepts. I'm familiar 
> with OO through Java, and procedural programming through C. I'd be more 
> detailed, but I'm not exactly sure what you are asking. Sorry.

I think i know what Casey means, but i don't know if i can explain it any 
better. Do you understand the concept orthogonality? The Once And Only 
Once principle? Have you ever heard of design patterns?

> I have no idea what Scheme is, but I'll cettainly look it up as soon as 
> I'm done writing this.

You won't like it. Give yourself another 5-10 years, and you might start 
to find it strangely intriguing.

> I've never given Perl a shot. It was another language I considered 
> learning, but my father's friend told me to go with Python or Ruby.

Your father has good friends.

> Thanks for your help. Hopefully I wasn't too lengthy in this post.

Lengthy is fine!

Anyway, the upshot of all this is that, yes, you should learn python. 
Python is dope!

tom

-- 
NOW ALL ASS-KICKING UNTIL THE END
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python vs Ruby

2005-10-21 Thread Tom Anderson
On Thu, 20 Oct 2005, Mike Meyer wrote:

> "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
>
>> other than haskell and SQL, the others are more or less the same to me 
>> so getting familiar with them is not too difficult.
>
> There are actually lots of good "train your brain" type languages. 
> Members of the LISP family, for instance, to learn what you can do with 
> lists, and also for how cool a real macro facility can be. I happen to 
> like Scheme, but that's just me.

I haven't actually done anything much in any LISP, but Scheme definitely 
looks like a winner to me - single namespace, generally cleaned-up 
language and library, etc.

tom

-- 
For one thing at least is almost certain about the future, namely, that very 
much of it will be such as we should call incredible. -- Olaf Stapledon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python vs Ruby

2005-10-21 Thread Tom Anderson
On Fri, 21 Oct 2005, vdrab wrote:

> You can tell everything is well in the world of dynamic languages when 
> someone posts a question with nuclear flame war potential like "python 
> vs. ruby" and after a while people go off singing hymns about the beauty 
> of Scheme...

+1 QOTW

> I love this place.

Someone should really try posting a similar question on c.l.perl and 
seeing how they react ...

tom

-- 
Transform your language.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Binding a variable?

2005-10-21 Thread Tom Anderson
On Fri, 21 Oct 2005, Paul Dale wrote:

> Is it possible to bind a list member or variable to a variable such that
>
> temp = 5
> list = [ temp ]
> temp == 6
> list
>
> would show
>
> list = [ 6 ]

As you know by now, no. Like any problem in programming, this can be 
solved with a layer of abstraction: you need an object which behaves a bit 
like a variable, so that you can have multiple references to it. The 
simplest solution is to use a single-element list:

>>> temp = [None] # set up the list
>>> temp[0] = 5
>>> list = [temp]
>>> temp[0] = 6
>>> list
[[6]]

I think this is a bit ugly - the point of a list is to hold a sequence of 
things, so doing this strikes me as a bit of an abuse.

An alternative would be a class:

class var:
    def __init__(self, value=None):
        self.value = value
    def __repr__(self): # not necessary, but handy
        return "<<" + str(self.value) + ">>"

>>> temp = var()
>>> temp.value = 5
>>> list = [temp]
>>> temp.value = 6
>>> list
[<<6>>]

This is more heavyweight, in terms of both code and execution resources, 
but it makes your intent clearer, which might make it worthwhile.

tom

-- 
Transform your language.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python variables are bound to types when used?

2005-10-23 Thread Tom Anderson
On Sat, 22 Oct 2005, Fredrik Lundh wrote:

> [EMAIL PROTECTED] wrote:
>
>>> reset your brain:
>>>
>>> http://effbot.org/zone/python-objects.htm

Is it really a good idea to say that objects have names? Isn't it cleaner 
to describe objects without any reference to names or variables or 
whatnot, then introduce names, namespaces and references as tools for 
working with objects? You go on to explain this clearly, but it's a bit of 
a confusing way to start!

tom

PS Sorry to be following up to a message other than the one i'm actually 
replying to - the original, i'm afraid, is an ex-message.

-- 
It is a laborious madness, and an impoverishing one, the madness of
composing vast books. -- Jorge Luis Borges
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Syntax across languages

2005-10-23 Thread Tom Anderson
On Sun, 23 Oct 2005, Fredrik Lundh wrote:

> [EMAIL PROTECTED] wrote:
>
>> - ~== for approximate FP equality
>
> str(a) == str(b)

This is taken from the AI 754 standard, i take it? :)

Seriously, that's horrible. Fredrik, you are a bad man, and run a bad 
railway.

However, looking at the page the OP cites, the only mention of that 
operator i can find is in Dylan, and in Dylan, it's nothing to do with 
approximate FP equality - it means 'not identical', which we can spell "is 
not".

What would approximate FP equality even mean? How approximate?

>> - Exception retrying: after catching an exception, tell the snippet to
>> be re-run "retry" as in Ruby
>
> >>> x = 0
> >>> while 1:
> ...     try:
> ...         x += 1
> ...         if x <= 5:
> ...             raise ValueError
> ...     except ValueError:
> ...         print "retry"
> ...         continue
> ...     else:
> ...         break
> ...
> retry
> retry
> retry
> retry
> retry


That works well for trivial cases, and not at all for anything complex. If 
you have this sort of structure:

def reverse_the_polarity_of_the_neutron_flow():
    five_hundred_lines_of_code()
    and_dozens_of_layers_of_nesting_and_indirection()
    interrossitor.activate() # can raise InterrossitorError
    do_what_we_came_here_to_do()

try:
    reverse_the_polarity_of_the_neutron_flow()
except InterrossitorError:
    degausser.degauss(interrossitor)
    interrossitor.activate()
    RETRY # how do you implement this?

You're in trouble. I realise that this snippet is not a hugely compelling 
example, but the point is that there could be some corrective action that 
you can take in an exception handler which, for some reason, you can't 
write close enough to the source of the exception that control can carry 
on flowing in the right direction.

What you can do - and this is fairly high-grade evil of a different sort - 
is package the exception-handling specifics in a function, and pass that 
in, to be applied at the appropriate point:

def reverse_the_polarity_of_the_neutron_flow(ie_hdlr):
    five_hundred_lines_of_code()
    and_dozens_of_layers_of_nesting_and_indirection()
    try:
        interrossitor.activate() # can raise InterrossitorError
    except InterrossitorError, e:
        ie_hdlr(e)
    do_what_we_came_here_to_do()

def handle_interrossitor_error(e):
    degausser.degauss(interrossitor)
    interrossitor.activate()

reverse_the_polarity_of_the_neutron_flow(handle_interrossitor_error)

You can even do extra bonus higher-order-functioning:

def reverse_the_polarity_of_the_neutron_flow(ie_hdlr):
    five_hundred_lines_of_code()
    and_dozens_of_layers_of_nesting_and_indirection()
    ie_hdlr(interrossitor.activate)
    do_what_we_came_here_to_do()

def handle_interrossitor_error(fn):
    try:
        fn()
    except InterrossitorError, e:
        degausser.degauss(interrossitor)
        interrossitor.activate()

reverse_the_polarity_of_the_neutron_flow(handle_interrossitor_error)

Although i can't see any reason why you'd want to.

>> - recursive "flatten" as in Ruby (useful)
>
> if you can define the semantics, it's a few lines of code.  if you're 
> not sure about the semantics, a built-in won't help you...

While we're on the subject, we had a big recursive flatten bake-off round 
here a few months back: look for a thread called "flatten(), [was Re: 
map/filter/reduce/lambda opinions andbackground unscientific 
mini-survey]", and filter out the posts with code from the posts with 
rants. There are all sorts of solutions, coming at the problem from 
different angles, but the end of it is more or less here:

http://groups.google.co.uk/group/comp.lang.python/msg/0832db53bd2700db

Looking at the code now, i can see a couple of points where i could tweak 
my flatten even more, but i think the few microseconds it might save 
aren't really worth it!

tom

-- 
It is a laborious madness, and an impoverishing one, the madness of
composing vast books. -- Jorge Luis Borges
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Oh what a twisted thread we weave....

2005-10-29 Thread Tom Anderson
On Sat, 28 Oct 2005, GregM wrote:

> ST_zeroMatch = 'You found 0 products'
> ST_zeroMatch2 = 'There are no products matching your selection'
>
> # why does this always drop through even though the If should be true.
>   if (ST_zeroMatch or ST_zeroMatch2) in self.webpg:

This code - i do not think it means what you think it means. Specifically, 
it doesn't mean "is either of ST_zeroMatch or ST_zeroMatch2 in 
self.webpg"; what it means is "apply the 'or' opereator to ST_zeroMatch 
and ST_zeroMatch2, then check if the result is in self.webpg". The result 
of applying the or operator to two nonempty strings is the left-hand 
string; your code is thus equivalent to

if ST_zeroMatch in self.webpg:

Which will work in cases where your page says 'You found 0 products', but 
not in cases where it says 'There are no products matching your 
selection'.

What you want is:

if (ST_zeroMatch in self.webpg) or (ST_zeroMatch2 in self.webpg):

Or something like that.

You say that you have a single-threaded version of this that works; 
presumably, you have a working version of this logic in there. Did you 
write the threaded version from scratch? Often a bad move!

tom

-- 
It's the 21st century, man - we rue _minutes_. -- Benjamin Rosenbaum
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Most efficient way of storing 1024*1024 bits

2005-11-03 Thread Tom Anderson

On Wed, 2 Nov 2005, Dan Bishop wrote:

> Tor Erik Sønvisen wrote:
>
>> I need a time and space efficient way of storing up to 6 million bits.
>
> The most space-efficient way of storing bits is to use the bitwise
> operators on an array of bytes:

Actually, no, it's to xor all the bits together and store them in a single 
boolean.

Getting them back out is kinda tricky though.

>> Time efficency is more important then space efficency
>
> In that case, you're better off simply using a list of bools.

Unlikely - the indexing is a bit simpler, but the cache hit rate is going 
to go through the floor.
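
For reference, the array-of-bytes approach mentioned above comes out 
something like this - a rough, untested sketch:

from array import array

class BitArray(object):
    # one bit per flag, packed eight to a byte
    def __init__(self, nbits):
        self.data = array('B', [0] * ((nbits + 7) // 8))
    def __getitem__(self, i):
        return (self.data[i >> 3] >> (i & 7)) & 1
    def __setitem__(self, i, value):
        if value:
            self.data[i >> 3] |= 1 << (i & 7)
        else:
            self.data[i >> 3] &= ~(1 << (i & 7))

bits = BitArray(1024 * 1024)
bits[123456] = 1
print bits[123456], bits[123457] # 1 0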


tom

--
power to the people and the beats
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Running autogenerated code in another python instance

2005-11-03 Thread Tom Anderson
On Thu, 3 Nov 2005, Paul Cochrane wrote:

> On Wed, 02 Nov 2005 06:33:28 +, Bengt Richter wrote:
>
>> On Wed, 2 Nov 2005 06:08:22 + (UTC), Paul Cochrane <[EMAIL PROTECTED]> 
>> wrote:
>>
>>> I've got an application that I'm writing that autogenerates python 
>>> code which I then execute with exec().  I know that this is not the 
>>> best way to run things, and I'm not 100% sure as to what I really 
>>> should do.  I've had a look through Programming Python and the Python 
>>> Cookbook, which have given me ideas, but nothing has gelled yet, so I 
>>> thought I'd put the question to the community.  But first, let me be a 
>>> little more detailed in what I want to do:

Paul, this is a rather interesting problem. There are two aspects to it, 
which i believe are probably separable: getting instructions from the 
client to the server, and getting data back from the server to the client. 
The former is more complex, i think, and what's attracted the attention so 
far.

The first thing i'd say is that, while eval/exec is definitely a code 
smell, that doesn't mean it's never the right solution. If you need to be 
able to express complex things, python code might well be the best way to 
do it, and the best way to evaluate python code is eval/exec.

>> It's a little hard to tell without knowing more about your user input 
>> (command language?) syntax that is translated to or feeds the process 
>> that "autogenerates python code".
>
> It's basically just a command language I guess.

Hang on - the stuff that the user writes is what you're calling "pyvisi 
code", is that right? That doesn't look like 'just a command language', 
that looks like python, using a library you've written. Or is there 
another language, the "just a command language", on top of that?

And what you call "vtk-python code" - this is python again, but using the 
renderer's native library, right?

And you generate the vtk-python from the pyvisi-python by executing the 
pyvisi-python, there being (pluggable renderer-specific) logic in the guts 
of your pyvisi classes to emit the vtk-python code, right? You're not 
parsing anything?

>> There are lots of easy things you could do without generating and exec-ing
>> python code per se.
>
> I'd love to know of other options.  I like the idea of generating the 
> code one would have to write for a particular renderer so that if the 
> user wanted to, they could use the autogenerated code to form the basis 
> of a specific visualisation script they could then hack themselves.

If you want vtk-python code as an intermediate, i think you're stuck with 
eval/exec [1].

> One of the main ideas of the module is to distill the common visualisation
> tasks down to a simple set of commands, and then let the interface work out
> how to actually implement that.

Okay. There's a classic design pattern called Interpreter that applies 
here. This is one of the more complex patterns, and one that's rather 
poorly explained in the Gang of Four book, so it's not well-known.

Basically, the idea is that you provide classes which make it possible for 
a program to build structures encoding a series of instructions - 
essentially, you define a language whose concrete syntax is objects, not 
text - then you write code which takes such structures and carries out the 
instructions encoded in them - an interpreter, in other words.

For example, here's a very simple example for doing basic arithmetic:

# the language

class expression(object):
    pass

class constant(expression):
    def __init__(self, value):
        self.value = value

class unary(expression):
    def __init__(self, op, arg):
        self.op = op
        self.arg = arg

class binary(expression):
    def __init__(self, op, arg_l, arg_r):
        self.op = op
        self.arg_l = arg_l
        self.arg_r = arg_r

# the interpreter

UNARY_OPS = {
    "-": lambda x: -x,
    "|": lambda x: abs(x) # apologies for abnormal syntax
}

BINARY_OPS = {
    "+": lambda l, r: l + r,
    "-": lambda l, r: l - r,
    "*": lambda l, r: l * r,
    "/": lambda l, r: l / r,
}

def evaluate(expr):
    if isinstance(expr, constant):
        return expr.value
    elif isinstance(expr, unary):
        op = UNARY_OPS[expr.op]
        arg = evaluate(expr.arg)
        return op(arg)
    elif isinstance(expr, binary):
        op = BINARY_OPS[expr.op]
        arg_l = evaluate(expr.arg_l)
        arg_r = evaluate(expr.arg_r)
        return op(arg_l, arg_r)
    else:
        raise Exception, "unknown expression type: " + str(type(expr))

# a quick demo

expr = binary("-",
binary("*",
constant(2.0),
constant(3.0)),
unary("-",
binary("/",
constant(4.0),
constant(5.0

print evaluate(expr)

This

Re: PyFLTK - an underrated gem for GUI projects

2005-11-09 Thread Tom Anderson
On Mon, 6 Nov 2005, it was written:

> aum <[EMAIL PROTECTED]> writes:
>
>> To me, wxPython is like a 12-cylinder Hummer, ... Whereas PyFLTK feels 
>> more like an average suburban 4-door sedan
>
> Interesting.  What would Tkinter be at that car dealership?

A '70s VW Beetle - it's been going for ever, but it's still rock solid, 
even if it does look a bit naff. Also, hippies love it.

tom

-- 
[of Muholland Drive] Cancer is pretty ingenious too, but its best to
avoid. -- Tex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help extracting data from a text file

2005-11-09 Thread Tom Anderson
On Mon, 7 Nov 2005, Kent Johnson wrote:

> [EMAIL PROTECTED] wrote:
>
>> i have a text file with a bunch of values scattered throughout it. i am 
>> needing to pull out a value that is in parenthesis right after a 
>> certain word, like the first time the word 'foo' is found, retrieve the 
>> values in the next set of parenthesis (bar) and it would return 'bar'
>
> It's pretty easy with an re:
>
> >>> import re
> >>> fooRe = re.compile(r'foo.*?\((.*?)\)')

Just out of interest, i've never really got into using non-greedy 
quantifiers (i use them from time to time, but hardly ever feel the need 
for them), so my instinct would have been to write this as:

>>> fooRe = re.compile(r"foo[^(]*\(([^)]*)\)")

Is there any reason to use one over the other?

> >>> fooRe.search('foo(bar)').group(1)
> 'bar'
> >>> fooRe.search('This is a foo bar baz blah blah (bar)').group(1)
> 'bar'

Ditto.
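
That is, as far as i can tell (from memory rather than a real session), the 
character-class version gives the same answers:

>>> fooRe2 = re.compile(r"foo[^(]*\(([^)]*)\)")
>>> fooRe2.search('foo(bar)').group(1)
'bar'
>>> fooRe2.search('This is a foo bar baz blah blah (bar)').group(1)
'bar'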

tom

-- 
[of Muholland Drive] Cancer is pretty ingenious too, but its best to
avoid. -- Tex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Map of email origins to Python list

2005-11-09 Thread Tom Anderson
On Mon, 7 Nov 2005, Claire McLister wrote:

> We've been working with Google Maps, and have created a web service to map 
> origins of emails to a group.

Top stuff! The misses are, if anything, more interesting than the hits!

I, apparently, am in Norwich. I have been to Norwich a few times, and, in 
fact, i think i've walked along the very street where i'm supposedly 
located, but i don't think i've ever posted news from there. I read this 
group via an SSH connection from my office (in north central London) or 
home (in north-east inner London), or elsewhere, to a shell account on 
urchin.earth.li, a machine colocated at an ISP (in Docklands, London), 
which peers at three POPs (probably also in Docklands, London).

The domain earth.li, in which the machine lives, however, was registered 
by someone who gives their address as being in Norwich, which i guess is 
where that comes from.

What it doesn't explain is why Sion Arrowsmith is also down as being in 
Norwich - i don't know Sion from Eve, but based on the fact that she's a 
chiark.greenend.org.uk user, i'd guess she's in Cambridge. Now, chiark has 
no links to Norwich that i can see, but it is also colocated at the same 
ISP as urchin (chiark and urchin are sort of mirror images of each other 
in many ways) - is this a case of 'Norwich by association'?

tom

-- 
Exceptions say, there was a problem. Someone must deal with it. If you
won't deal with it, I'll find someone who will.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to use generators?

2005-11-09 Thread Tom Anderson
On Wed, 9 Nov 2005, Sybren Stuvel wrote:

> Ian Vincent enlightened us with:
>
>> I have never used generators before but I might have now found a use 
>> for them. I have written a recursive function to solve a 640x640 maze 
>> but it crashes, due to exceeding the stack.  The only way around this I 
>> can think of is to use Generator but I have no idea how to.
>
> A better way is to use a queue. I had the same problem with a similar
> piece of code. The only thing why you're using a stack is to move to
> the "next" point, without going back to a visited point.

Exactly - using a queue means you'll do a breadth-first rather than a 
depth-first search, which will involve much less depth of recursion. See:

http://cs.colgate.edu/faculty/nevison.pub/web102/web102S00/Notes12.htm

For details.
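
The skeleton is something like this - untested, and it assumes the maze is a 
dict mapping (x, y) tuples to True for open cells, so adjust the neighbour 
test to whatever representation you're actually using:

from collections import deque

def solve(maze, start, goal):
    # breadth-first search: the queue holds cells still to visit, and
    # came_from remembers how we reached each cell, so we can rebuild
    # the path once we hit the goal
    queue = deque([start])
    came_from = {start: None}
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            path = []
            cell = goal
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            path.reverse()
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if maze.get(nxt) and nxt not in came_from:
                came_from[nxt] = (x, y)
                queue.append(nxt)
    return None # no way through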

An extended version of this exercise would be to implement an A* search:

http://en.wikipedia.org/wiki/A-star_search_algorithm

Which might be quicker than a blind breadth-first.

tom

-- 
Exceptions say, there was a problem. Someone must deal with it. If you
won't deal with it, I'll find someone who will.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: web interface

2005-11-09 Thread Tom Anderson
On Mon, 7 Nov 2005, Ajar wrote:

> I have a stand alone application which does some scientific 
> computations. I want to provide a web interface for this app. The app is 
> computationally intensive and may take long time for running. Can 
> someone suggest me a starting point for me? (like pointers to the issues 
> involved in this,

You're probably best off starting a new process or thread for the 
long-running task, and having the web interface return to the user right 
after starting it; you can then provide a second page on the web interface 
where the user can poll for completion of the task, and get the results if 
it's finished. You can simulate the feel of a desktop application to some 
extent by having the output of the starter page redirect the user to the 
poller page, and having the poller page refresh itself periodically.
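
In bare-bones terms, the server side might look something like this - 
untested, no particular framework assumed, and run_the_big_computation and 
the render_* functions are stand-ins for your own code:

import threading

jobs = {} # job id -> {'done': ..., 'result': ...}

def start_job(job_id, args):
    # kick the long-running computation off in the background and return
    # straight away, so the web request can finish
    jobs[job_id] = {'done': False, 'result': None}
    def work():
        result = run_the_big_computation(args) # stand-in
        jobs[job_id]['result'] = result
        jobs[job_id]['done'] = True
    threading.Thread(target=work).start()

def poll_job(job_id):
    # the page the browser polls or refreshes every few seconds
    job = jobs[job_id]
    if job['done']:
        return render_results_page(job['result']) # stand-in
    else:
        return render_please_wait_page(job_id) # stand-in, with a meta refresh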

What you really want is a 'push' mechanism, by which the web app can 
notify the browser when the task is done, but, despite what everyone was 
saying back in '97, we don't really have anything like that.

> or even better any of the existing tools for doing this...)

Pise does what you want, and much more:

http://www.pasteur.fr/recherche/unites/sis/Pise/

But i have no idea if it's of any use to you - it was designed for 
bioinformatics programs, and might not be easily adaptable to other tasks.

Old-enough-to-remember-the-push-hype-ly y'rs,
tom

-- 
Exceptions say, there was a problem. Someone must deal with it. If you
won't deal with it, I'll find someone who will.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterator addition

2005-11-12 Thread Tom Anderson
On Thu, 9 Nov 2005, it was written:

> [EMAIL PROTECTED] (Alex Martelli) writes:
>
>>> Is there a good reason to not define iter1+iter2 to be the same as
>>
>> If you mean for *ALL* built-in types, such as generators, lists, files,
>> dicts, etc, etc -- I'm not so sure.
>
> Hmm, there might also be __add__ operations on the objects, that would 
> have to take precedence over iterator addition.  Iterator addition 
> itself would have to be a special kludge like figuring out "<" from 
> __cmp__, etc.
>
> Yeah, I guess the idea doesn't work out that well.  Oh well.

How about if we had some sort of special sort of iterator which did the 
right thing when things were added to it? like an iterable version of The 
Blob:

class blob(object):
    def __init__(self, it=None):
        self.its = []
        if (it != None):
            self.its.append(iter(it))
    def __iter__(self):
        return self
    def next(self):
        try:
            return self.its[0].next()
        except StopIteration:
            # current iterator has run out!
            self.its.pop(0)
            return self.next()
        except IndexError:
            # no more iterators
            raise StopIteration
    def __add__(self, it):
        self.its.append(iter(it))
        return self
    def __radd__(self, it):
        self.its.insert(0, iter(it))
        return self # so that something + blob gives you the blob back

Then we could do:

all_lines = blob(file1) + file2 + file3
candidate_primes = blob((2,)) + (1+2*i for i in itertools.count(1))

Which, although not quite as neat, isn't entirely awful.

Another option would be a new operator for chaining - let's use $, since 
that looks like the chain on the fouled anchor symbol used by navies etc:

http://www.diggerhistory.info/images/badges-asstd/female-rels-navy.jpg

Saying "a $ b" would be equivalent to "chain(a, b)", where chain (which 
could even be a builtin if you like) is defined:

def chain(a, b):
    if (hasattr(a, "__chain__")):
        return a.__chain__(b)
    elif (hasattr(b, "__rchain__")): # optional
        return b.__rchain__(a)
    else:
        return itertools.chain(a, b) # or equivalent

Whatever it is that itertools.chain or whatever returns would be modified 
to have a __chain__ method which behaved like blob.__add__ above. This 
then gets you:

all_lines = file1 $ file2 $ file3
candidate_primes = (2,) $ (1+2*i for i in itertools.count(1))

And we're halfway to looking like perl already! Perhaps a more pythonic 
thing would be to define a "then" operator:

all_lines = file1 then file2 then file3
candidate_primes = (2,) then (1+2*i for i in itertools.count(1))

That looks quite nice. The special method would be __then__, of course.

tom

-- 
if you can't beat them, build them
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Hash map with multiple keys per value ?

2005-11-12 Thread Tom Anderson
On Fri, 11 Nov 2005, Chris Stiles wrote:

> Is there an easier and cleaner way of doing this ?  Is there example 
> code floating around that I might have a look at ?

I'm not aware of a way which can honestly be called better.

However, i do feel your pain about representing the alias relationships 
twice - it feels wrong. Therefore, i offer you an alternative 
implementation - represent each set as a linked list, threaded through a 
dict by making the value the dict holds under each key point to the next 
key in the alias set. Confused? No? You will be ...

class Aliases(object):
    def __init__(self, aliases=None):
        self.nexts = {}
        if (aliases != None):
            for key, value in aliases:
                self[key] = value
    def __setitem__(self, key, value):
        if ((value != None) and (value != key)):
            self.nexts[key] = self.nexts[value]
            self.nexts[value] = key
        else:
            self.nexts[key] = key
    def __getitem__(self, key):
        return list(follow(key, self.nexts))
    def __delitem__(self, key):
        cur = key
        while (self.nexts[cur] != key):
            cur = self.nexts[cur]
        if (cur != key):
            self.nexts[cur] = self.nexts[key]
        del self.nexts[key]
    def canonical(self, key):
        canon = key
        for cur in follow(key, self.nexts):
            if (cur < canon):
                canon = cur
        return canon
    def iscanonical(self, key):
        for cur in follow(key, self.nexts):
            if (cur < key):
                return False
        return True
    def iteraliases(self, key):
        cur = self.nexts[key]
        while (cur != key):
            yield cur
            cur = self.nexts[cur]
    def __iter__(self):
        return iter(self.nexts)
    def itersets(self):
        for key in self.nexts:
            if (not isprimary(key, self.nexts)):
                continue
            yield [key] + self[key]
    def __len__(self):
        return len(self.nexts)
    def __contains__(self, key):
        return key in self.nexts
    def __str__(self):
        return ""
    def __repr__(self):
        return "Aliases([" + ", ".join(str((key, self.canonical(key)))
            for key in sorted(self.nexts.keys())) + "])"
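
(follow and isprimary are used above but not defined; something like this is 
the intent, no better tested than the rest of it:)

def follow(key, nexts):
    # walk the circular chain of aliases starting after key, stopping
    # when we come back round to key itself
    cur = nexts[key]
    while (cur != key):
        yield cur
        cur = nexts[cur]

def isprimary(key, nexts):
    # a key is primary if it's the smallest in its chain - used to pick
    # one representative per alias set
    for cur in follow(key, nexts):
        if (cur < key):
            return False
    return True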

As i'm sure you'll agree, code that combines a complete absence of clarity 
with abject lack of runtime efficiency. Oh, and i haven't tested it 
properly.

tom

-- 
if you can't beat them, build them
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterator addition

2005-11-13 Thread Tom Anderson
On Sun, 13 Nov 2005, Reinhold Birkenfeld wrote:

> [EMAIL PROTECTED] wrote:
>
>> Tom Anderson:
>
>>> And we're halfway to looking like perl already! Perhaps a more 
>>> pythonic thing would be to define a "then" operator:
>>>
>>> all_lines = file1 then file2 then file3
>>
>> Or a "chain" one:
>>
>> all_lines = file1 chain file2 chain file3

This may just be NIH syndrome, but i like that much less - 'then' makes 
for something that reads much more naturally to me. 'and' would be even 
better, but it's taken; 'andthen' is a bit unwieldy.

Besides, "chain file2" is going to confuse people coming from a BASIC 
background :).

> That's certainly not better than the chain() function. Introducing new 
> operators for just one application is not pythonic.

True, but would this be for just one application? With python moving 
towards embracing a lazy functional style, with generators and genexps, 
maybe chaining iterators is a generally useful operation that should be 
supported at the language level. I'm not seriously suggesting doing this, 
but i don't think it's completely out of the question.

tom

-- 
limited to concepts that are meta, generic, abstract and philosophical --
IEEE SUO WG
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: running functions

2005-11-17 Thread Tom Anderson
On Wed, 16 Nov 2005, [EMAIL PROTECTED] wrote:

> Gorlon the Impossible wrote:
>
>> Is it possible to run this function and still be able to do other 
>> things with Python while it is running? Is that what threading is 
>> about?
>
> Threading's a good answer if you really need to share all your memory. A 
> multiprocess solution is probably preferrable, though it depends on the 
> architecture.

I'm really curious about this assertion, which both you and Ben Finney 
make. Why do you think multiprocessing is preferable to multithreading?

I've done a fair amount of threads programming, although in java rather 
than python (and i doubt very much that it's less friendly in python than 
java!), and i found it really fairly straightforward. Sure, if you want to 
do complicated stuff, it can get complicated, but for this sort of thing, 
it should be a doddle. Certainly, it seems to me, *far* easier than doing 
anything involving multiple processes, which always seems like pulling 
teeth to me.

For example, his Impossibleness presumably has code which looks like this:

do_a_bunch_of_midi(do, re, mi)
do_something_else(fa, so, la)

All he has to do to get thready with his own bad self is:

import threading
threading.Thread(target=do_a_bunch_of_midi, args=(do, re, mi)).start()
do_something_else(fa, so, la)

How hard is that? Going multiprocess involves at least twice as much code, 
if not ten times more, will have lower performance, and will make future 
changes - like interaction between the two parallel execution streams - 
colossally harder.

tom

-- 
Remember when we said there was no future? Well, this is it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: running functions

2005-11-18 Thread Tom Anderson
On Thu, 17 Nov 2005, Scott David Daniels wrote:

> Gorlon the Impossible wrote:
>
>> I have to agree with you there. Threading is working out great for me
>> so far. The multiprocess thing has just baffled me, but then again I'm
>> learning. Any tips or suggestions offered are appreciated...
>
> The reason multiprocess is easier is that you have enforced separation. 
> Multiple processes / threads / whatever that share reads and writes into 
> shared memory are rife with irreproducible bugs and untestable code. 
> Processes must be explicit about their sharing (which is where the bugs 
> occur), so those parts of the code cane be examined carefully.

That's a good point.

> If you program threads with shared nothing and communication over Queues 
> you are, in effect, using processes.  If all you share is read-only 
> memory, similarly, you are doing "easy" stuff and can get away with it. 
> In all other cases you need to know things like "which operations are 
> indivisible" and "what happens if I read part of this from before an 
> update and the other after the update completes, .

Right, but you have exactly the same problem with separate processes - 
except that with processes, having that richness of interaction is so 
hard, that you'll probably never do it in the first place!

tom

-- 
science fiction, old TV shows, sports, food, New York City topography,
and golden age hiphop
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Any royal road to Bezier curves...?

2005-11-21 Thread Tom Anderson
On Sun, 20 Nov 2005, Warren Francis wrote:

> Basically, I'd like to specify a curved path of an object through space. 
> 3D space would be wonderful, but I could jimmy-rig something if I could 
> just get 2D...  Are bezier curves really what I want after all?

No. You want a natural cubic spline:

http://mathworld.wolfram.com/CubicSpline.html

This is a fairly simple curve, which can be fitted through a series of 
points (called knots) in space of any dimensionality, without the need to 
specify extra control points (unlike a Bezier curve), and which has the 
nice property of minimising the curvature of the curve - it's the shape 
you'd get if you ran a springy wire through your knots. It usually looks 
pretty good too.

Google will help you find python implementations.

There are other kinds of splines - Catmull-Rom, B-spline (a generalisation 
of a Bezier curve), Hermite - but they mostly don't guarantee to pass 
through the knots, which might make them less useful to you.

In the opposite direction on the mathematical rigour scale, there's what i 
call the blended quadratic spline, which i invented as a simpler and more 
malleable alternative to the cubic spline. It's a piecewise parametric 
spline, like the cubic, but rather than calculating a series of pieces 
which blend together naturally, using cubics and linear algebra, it uses 
simple quadratic curves fitted to overlapping triples of adjacent knots, 
then interpolates ('blends') between them to draw the curve. It looks very 
like a cubic spline, but the code is simpler, and the pieces are local - 
each piece depends only on nearby knots, rather than on all the knots, as 
in a cubic spline - which is a useful property for some jobs. Also, it's 
straightforward to add the ability to constrain the angle at which the 
curve passes through a subset of the knots (you can do it for some knots, 
while leaving others 'natural') by promoting the pieces to cubics at the 
constrained knots and constraining the appropriate derivatives. Let me 
know if you want more details on this. To be honest, i'd suggest using a 
proper cubic spline, unless you have specific problems with it.
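
If you want the gist of the blending, though, here it is in 2D - a rough, 
untested sketch, with none of the angle-constraint stuff, and it assumes at 
least three knots, each an (x, y) tuple:

def quad(p0, p1, p2, t):
    # the quadratic through p0, p1, p2 at t = -1, 0, 1 (Lagrange form)
    a = 0.5 * t * (t - 1)
    b = 1.0 - t * t
    c = 0.5 * t * (t + 1)
    return (a * p0[0] + b * p1[0] + c * p2[0],
            a * p0[1] + b * p1[1] + c * p2[1])

def blended_quadratic_trace(knots, steps=10):
    # each span between adjacent knots blends the quadratics centred on
    # its two end knots; the first and last spans only have one of them
    trace = []
    n = len(knots)
    for i in range(n - 1):
        for j in range(steps):
            s = j / float(steps)
            if (i >= 1):
                ql = quad(knots[i - 1], knots[i], knots[i + 1], s)
            if (i + 1 <= n - 2):
                qr = quad(knots[i], knots[i + 1], knots[i + 2], s - 1)
            if (i >= 1) and (i + 1 <= n - 2):
                p = ((1 - s) * ql[0] + s * qr[0], (1 - s) * ql[1] + s * qr[1])
            elif (i >= 1):
                p = ql
            else:
                p = qr
            trace.append(p)
    trace.append(knots[-1])
    return trace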

tom

-- 
... a tale for which the world is not yet prepared
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-21 Thread Tom Anderson
On Sun, 20 Nov 2005, Alex Martelli wrote:

> Christoph Zwerschke <[EMAIL PROTECTED]> wrote:
>
>> The 'sorted' function does not help in the case I have indicated, where 
>> "I do not want the keys to be sorted alphabetically, but according to 
>> some criteria which cannot be derived from the keys themselves."
>
> Ah, but WHAT 'some criteria'?  There's the rub!  First insertion, last 
> insertion, last insertion that wasn't subsequently deleted, last 
> insertion that didn't change the corresponding value, or...???

All the requests for an ordered dictionary that i've seen on this group, 
and all the cases where i've needed on myself, want one which behaves like 
a list - order of first insertion, with no memory after deletion. Like the 
Larosa-Foord ordered dict.

Incidentally, can we call that the "Larosa-Foord ordered mapping"? Then it 
sounds like some kind of rocket science discrete mathematics stuff, which 
(a) is cool and (b) will make Perl programmers feel even more inadequate 
when faced with the towering intellectual might of Python. Them and their 
Schwartzian transform. Bah!

tom

-- 
Baby got a masterplan. A foolproof masterplan.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Any royal road to Bezier curves...?

2005-11-21 Thread Tom Anderson
On Mon, 21 Nov 2005, Tom Anderson wrote:

> On Sun, 20 Nov 2005, Warren Francis wrote:
>
>> Basically, I'd like to specify a curved path of an object through space. 3D 
>> space would be wonderful, but I could jimmy-rig something if I could just 
>> get 2D...  Are bezier curves really what I want after all?
>
> No. You want a natural cubic spline:

In a fit of code fury (a short fit - this is python, so it didn't take 
long), i ported my old java code to python, and tidied it up a bit in the 
process:

http://urchin.earth.li/~twic/splines.py

That gives you a natural cubic spline, plus my blended quadratic spline, 
and a framework for implementing other kinds of splines.

tom

-- 
Gin makes a man mean; let's booze up and riot!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Backwards compatibility [was Re: is parameter an iterable?]

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Steven D'Aprano wrote:

> Are there practical idioms for solving the metaproblem "solve problem X 
> using the latest features where available, otherwise fall back on older, 
> less powerful features"?
>
> For instance, perhaps I might do this:
>
> try:
>     built_in_feature
> except NameError:
>     # fall back on a work-around
>     from backwards_compatibility import \
>         feature as built_in_feature
>
> Do people do this or is it a bad idea?

From some code i wrote yesterday, which has to run under 2.2:

try:
    True
except NameError:
    True = 1 == 1
    False = 1 == 0

Great minds think alike!

As for whether it's a bad idea, well, bad or not, it certainly seems like 
the least worst.

> Are there other techniques to use? Obviously refusing to run is a 
> solution (for some meaning of "solution"), it may even be a practical 
> solution for some cases, but is it the only one?

How about detecting which environment you're in, then running one of two 
entirely different sets of code? Rather than trying to construct modern 
features in the antique environment, write code for each, using the local 
idioms. The trouble with this is that you end up with massive duplication; 
you can try to factor out the common parts, but i suspect that the 
differing parts will be a very large fraction of the codebase.

> If I have to write code that can't rely on iter() existing in the 
> language, what should I do?

Can you implement your own iter()? I have no idea what python 2.0 was 
like, but would something like this work:

import sys

class _iterator:
    def __init__(self, x):
        self.x = x
        self.j = 0
    def next(self):
        self.j = self.j + 1
        return self.x.next()
    def __getitem__(self, i):
        if (i != self.j):
            raise ValueError, "out of order iteration"
        try:
            return self.next()
        except StopIteration:
            raise IndexError
    def __iter__(self):
        return self
    # hopefully, we don't need this, but if we do ...
    def __len__(self):
        return sys.maxint # and rely on StopIteration to stop the loop

class _listiterator(_iterator):
    def next(self):
        try:
            item = self.x[self.j]
            self.j = self.j + 1
            return item
        except IndexError:
            raise StopIteration
    def __getitem__(self, i):
        if (i != self.j):
            raise ValueError, "out of order iteration"
        self.j = self.j + 1
        return self.x[i]

import types

def iter(x):
    # if there's no hasattr, use explicit access and try-except blocks
    # handle iterators and iterables from the future
    if hasattr(x, "__iter__"):
        return _iterator(x.__iter__())
    # if there's no __getitem__ on lists, try x[0] and catch the exception
    # but leave the __getitem__ test to catch objects from the future
    if hasattr(x, "__getitem__"):
        return _listiterator(x)
    if type(x) == types.FileType:
        return _fileiterator(x) # you can imagine the implementation of this
    # insert more tests for specific types here as you like
    raise TypeError, "iteration over non-sequence"

?

NB haven't actually tried to run that code.

tom

-- 
I'm angry, but not Milk and Cheese angry. -- Mike Froggatt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Any royal road to Bezier curves...?

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Warren Francis wrote:

> For my purposes, I think you're right about the natural cubic splines. 
> Guaranteeing that an object passes through an exact point in space will 
> be more immediately useful than trying to create rules governing where 
> control points ought to be placed so that the object passes close enough 
> to where I intended it to go.

Right so. I wrote that code the first time when i was in a similar spot 
myself - trying to draw maps with nice smooth roads etc based on a fairly 
sparse set of points - so i felt your pain.

> Thanks for the insight, I never would have found that on my own.  At 
> least not until Google labs comes out with a search engine that gives 
> names for what you're thinking of. ;-)

You're in for a wait - i think that feature's scheduled for summer 2006.

> I know this is a fairly pitiful request, since it just involves parsing 
> your code, but I'm new enough to this that I'd benefit greatly from an 
> couple of lines of example code, implementing your classes... how do I 
> go from a set of coordinates to a Natural Cubic Spline, using your 
> python code?

Pitiful but legit - i haven't documented that code at all well. If you go 
right to the foot of my code, you'll find a simple test routine, which 
shows you the skeleton of how to drive the code. It looks a bit like this 
(this is slightly simplified):

def test_spline():
    knots = [(0, 0), (0, 1), (1, 0), (0, -2), (-3, 0)] # a spiral
    trace = []
    c = NaturalCubicSpline(tuples2points(knots))
    u = 0.0
    du = 0.1
    lim = len(c) + du
    while (u < lim):
        p = c(u)
        trace.append(tuple(p))
        u = u + du
    return trace

tuples2points is a helper function which turns your coordinates from a 
list of tuples (really, an iterable of length-2 iterables) to a list of 
Points. The alternative way of doing it is something like:

curve = NaturalCubicSpline()
for x, y in knot_coords:
    curve.knots.append(Point(x, y))
do_something_with(curve)

tom

-- 
I DO IT WRONG!!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: user-defined operators: a very modest proposal

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Steve R. Hastings wrote:

> User-defined operators could be defined like the following: ]+[

Eeek. That really doesn't look right.

Could you remind me of the reason we can't say [+]? It seems to me that an 
operator can never be a legal filling for an array literal or a subscript, 
so there wouldn't be ambiguity.

We could even just say that [?] is an array version of whatever operator ? 
is, and let python do the heavy lifting (excuse the pun) of looping it 
over the operands. [[?]] would obviously be a doubly-lifted version. 
Although that would mean [*] is a componentwise product, rather than an 
outer product, which wouldn't really help you very much! Maybe we could 
define {?} as the generalised outer/tensor version of the ? operator ...

> For improved readability, Python could even enforce a requirement that 
> there should be white space on either side of a user-defined operator. I 
> don't really think that's necessary.

Indeed, it would be extremely wrong - normal operators don't require that, 
and special cases aren't special enough to break the rules.

Reminds me of my idea for using spaces instead of parentheses for grouping 
in expressions, so a+b * c+d evaluates as (a+b)*(c+d) - one of my worst 
ideas ever, i'd say, up there with gin milkshakes.

> Also, there should be a way to declare what kind of precedence the 
> user-defined operators use.

Can't be done - different uses of the same operator symbol on different 
classes could have different precedence, right? So python would need to 
know what the class of the receiver is before it can work out the 
evaluation order of the expression; python does evaluation order at 
compile time, but only knows classes at execute time, so no dice.

Also, i'm pretty sure you could cook up a situation where you could 
exploit differing precedences of different definitions of one symbol to 
generate ambiguous cases, but i'm not in a twisted enough mood to actually 
work out a concrete example!

And now for something completely different.

For Py4k, i think we should allow any sequence of characters that doesn't 
mean something else to be an operator, supported with one special method 
to rule them all, __oper__(self, ator, and), so:

a + b

Becomes:

a.__oper__("+", b)

And:

a --{--@ b

Becomes:

a.__oper__("--{--@", b) # Euler's 'single rose' operator

Etc. We need to be able to distinguish a + -b from a +- b, but this is 
where i can bring my grouping-by-whitespace idea into play, requiring 
whitespace separating operands and operators - after all, if it's good 
enough for grouping statements (as it evidently is at present), it's good 
enough for expressions. The character ']' would be treated as whitespace, 
so a[b] would be handled as a.__oper__("[", b). Naturally, the . operator 
would also be handled through __oper__.

Jeff Epler's proposal to use unicode operators would synergise most 
excellently with this, allowing python to finally reach, and even surpass, 
the level of expressiveness found in languages such as perl, APL and 
INTERCAL.

tom

-- 
I DO IT WRONG!!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: user-defined operators: a very modest proposal

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005 [EMAIL PROTECTED] wrote:

> Each unicode character in the class 'Sm' (Symbol,
> Math) whose value is greater than 127 may be used as a user-defined operator.

EXCELLENT idea, Jeff!

> Also, to accomodate operators such as u'\N{DOUBLE INTEGRAL}', which are not
> simple unary or binary operators, the character u'\N{NO BREAK SPACE}' will be
> used to separate arguments.  When necessary, parentheses will be added to
> remove ambiguity.  This leads naturally to expressions like
>   \N{DOUBLE INTEGRAL} (y * x**2) \N{NO BREAK SPACE} dx \N{NO BREAK SPACE} 
> dy
> (corresponding to the call (y*x**2).__u222c__(dx, dy)) which are clearly easy
> to love, except for the small issue that many inferior editors will not 
> clearly
> display the \N{NO BREAK SPACE} characters.

Could we use '\u2202' instead of 'd'? Or, to be more correct, is there a 
d-which-is-not-a-d somewhere in the mathematical character sets? It would 
be very useful to be able to distinguish d'x', as it were, from 'dx'.

>* Do we immediately implement the combination of operators with nonspacing
>  marks, or defer it?

As long as you don't use normalisation form D, i'm happy.

>* Should some of the unicode mathematical symbols be reserved for literals?
>  It would be greatly preferable to write \u2205 instead of the other 
> proposed
>  empty-set literal notation, {-}.  Perhaps nullary operators could be 
> defined,
>  so that writing \u2205 alone is the same as __u2205__() i.e., calling the
>  nullary function, whether it is defined at the local, lexical, module, or
>  built-in scope.

Sounds like a good idea. \u211D and relatives would also be a candidate 
for this treatment.

And for those of you out there who are laughing at this, i'd point out 
that Perl IS ACTUALLY DOING THIS.

tom

-- 
I DO IT WRONG!!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Carsten Haese wrote:

> On Tue, 2005-11-22 at 14:37, Christoph Zwerschke wrote:
>
>> In Foord/Larosa's odict, the keys are exposed as a public member which 
>> also seems to be a bad idea ("If you alter the sequence list so that it 
>> no longer reflects the contents of the dictionary, you have broken your 
>> OrderedDict").
>
> That could easily be fixed by making the sequence a "managed property" 
> whose setter raises a ValueError if you try to set it to something 
> that's not a permutation of what it was.

I'm not a managed property expert (although there's a lovely studio in 
Bayswater you might be interested in), but how does this stop you doing:

my_odict.sequence[0] = Shrubbery()

Which would break the odict good and hard.
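
That is - spelling it out with a toy sketch (invented names, untested):

class Example(object):
    # toy stand-in for the odict, just to show where the check does and
    # doesn't run
    def __init__(self):
        self._sequence = ['a', 'b', 'c']
    def _get_sequence(self):
        return self._sequence
    def _set_sequence(self, value):
        if (sorted(value) != sorted(self._sequence)):
            raise ValueError, "not a permutation"
        self._sequence = list(value)
    sequence = property(_get_sequence, _set_sequence)

ex = Example()
ex.sequence = ['c', 'a', 'b'] # goes through the setter, so it gets checked
ex.sequence[0] = 'shrubbery'  # plain item assignment on the underlying list

The second assignment never goes near the setter, as far as i can see - it 
just mutates the list that the getter hands back.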

tom

-- 
When I see a man on a bicycle I have hope for the human race. --
H. G. Wells
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Christoph Zwerschke wrote:

> One implementation detail that I think needs further consideration is in 
> which way to expose the keys and to mix in list methods for ordered 
> dictionaries.
>
> In Foord/Larosa's odict, the keys are exposed as a public member which 
> also seems to be a bad idea ("If you alter the sequence list so that it 
> no longer reflects the contents of the dictionary, you have broken your 
> OrderedDict").
>
> I think it would be probably the best to hide the keys list from the public, 
> but to provide list methods for reordering them (sorting, slicing etc.).

I'm not too keen on this - there is conceptually a list here, even if it's 
one with unusual constraints, so there should be a list i can manipulate 
in code, and which should of course be bound by those constraints.

I think the way to do it is to have a sequence property (which could be a 
managed attribute to prevent outright clobberation) which walks like a 
list, quacks like a list, but is in fact a mission-specific list subtype 
whose mutator methods zealously enforce the invariants guaranteeing the 
odict's integrity.

I haven't actually tried to write such a beast, so i don't know if this is 
either possible or straightforward.
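
Something along these lines, maybe - completely untested, and only covering 
a couple of the mutators:

class OrderList(list):
    # a list of keys which only lets you rearrange what's already in it
    def __setslice__(self, i, j, keys):
        if (sorted(keys) != sorted(self[i:j])):
            raise ValueError, "can only rearrange existing keys"
        list.__setslice__(self, i, j, keys)
    def append(self, key):
        raise TypeError, "add new keys via the dictionary, not the order"
    def remove(self, key):
        raise TypeError, "remove keys via the dictionary, not the order"
    # ... and so on for __setitem__, __delitem__, insert, pop, extend etc.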

tom

-- 
When I see a man on a bicycle I have hope for the human race. --
H. G. Wells
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-22 Thread Tom Anderson
On Tue, 22 Nov 2005, Christoph Zwerschke wrote:

> Fuzzyman schrieb:
>
>> Of course ours is ordered *and* orderable ! You can explicitly alter 
>> the sequence attribute to change the ordering.
>
> What I actually wanted to say is that there may be a confusion between a 
> "sorted dictionary" (one where the keys are automatically sorted) and an 
> "ordered dictionary" (where the keys are not automatically ordered, but 
> have a certain order that is preserved). Those who suggested that the 
> "sorted" function would be helpful probably thought of a "sorted 
> dictionary" rather than an "ordered dictionary."

Exactly.

Python could also do with a sorted dict, like binary tree or something, 
but that's another story.

tom

-- 
When I see a man on a bicycle I have hope for the human race. --
H. G. Wells
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-25 Thread Tom Anderson
On Wed, 23 Nov 2005, Christoph Zwerschke wrote:

> Alex Martelli wrote:
>
>> However, since Christoph himself just misclassified C++'s std::map as 
>> "ordered" (it would be "sorted" in this new terminology he's now 
>> introducing), it seems obvious that the terminological confusion is 
>> rife.
>
> Speaking about "ordered" and "sorted" in the context of collections is 
> not a new terminology I am introducing, but seems to be pretty common in 
> computer science

This is quite true. I haven't seen any evidence for 'rife' 
misunderstanding of these terms.

That said ...

> Perhaps Pythonists are not used to that terminology, since they use the 
> term "list" for an "ordered collection". An ordered dictionary is a 
> dictionary whose keys are a (unique) list. Sometimes it is also called a 
> "sequence"

Maybe we should call it a 'sequenced dictionary' to fit better with 
pythonic terminology?

tom

-- 
YOU HAVE NO CHANCE TO ARRIVE MAKE ALTERNATIVE TRAVEL ARRANGEMENTS. --
Robin May
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-25 Thread Tom Anderson
On Wed, 23 Nov 2005, Carsten Haese wrote:

> On Wed, 2005-11-23 at 15:17, Christoph Zwerschke wrote:
>> Bengt Richter wrote:
>>
>> > E.g., it might be nice to have a mode that assumes d[key] is
>> d.items()[k][1] when
>> > key is an integer, and otherwise uses dict lookup, for cases where
>> the use
>> > case is just string dict keys.
>>
>> I also thought about that and I think PHP has that feature, but it's 
>> probably better to withstand the temptation to do that. It could lead 
>> to an awful confusion if the keys are integers.
>
> Thus quoth the Zen of Python:
> "Explicit is better than implicit."
> "In the face of ambiguity, refuse the temptation to guess."
>
> With those in mind, since an odict behaves mostly like a dictionary, [] 
> should always refer to keys. An odict implementation that wants to allow 
> access by numeric index should provide explicitly named methods for that 
> purpose.

+1

Overloading [] to sometimes refer to keys and sometimes to indices is a 
really, really, REALLY bad idea. Let's have it refer to keys, and do 
indices either via a sequence attribute or the return value of items().

More generally, if we're going to say odict is a subtype of dict, then we 
have absolutely no choice but to make the methods that it inherits behave 
the same way as in dict - that's what subtyping means. That means not 
doing funky things with [], returning a copy from items() rather than a 
live view, etc.

So, how do we provide mutatory access to the order of items? Of the 
solutions discussed so far, i think having a separate attribute for it - 
like items, a live view, not a copy (and probably being a variable rather 
than a method) - is the cleanest, but i am starting to think that 
overloading items to be a mutable sequence as well as a method is quite 
neat. I like it in that it combines two things - a live view of the 
order and a copy of the order - that are really two aspects of one thing, 
which seems elegant. However, it does strike me as rather unpythonic; it's 
trying to cram a lot of functionality in an unexpected combination into 
one place. Sparse is better than dense and all that. I guess the thing to 
do is to try both out and see which users prefer.

tom

-- 
YOU HAVE NO CHANCE TO ARRIVE MAKE ALTERNATIVE TRAVEL ARRANGEMENTS. --
Robin May
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-25 Thread Tom Anderson
On Wed, 23 Nov 2005, Christoph Zwerschke wrote:

> Tom Anderson wrote:
>
>>> I think it would be probably the best to hide the keys list from the 
>>> public, but to provide list methods for reordering them (sorting, slicing 
>>> etc.).
>>
>> I'm not too keen on this - there is conceptually a list here, even if it's 
>> one with unusual constraints, so there should be a list i can manipulate in 
>> code, and which should of course be bound by those constraints.
>
> Think of it similar as the case of an ordinary dictionary: There is 
> conceptually a set here (the set of keys), but you cannot manipulate it 
> directly, but only through the according dictionary methods.

Which is a shame!

> For an ordedred dictionary, there is conceptually a list (or more 
> specifically a unique list). Again you should not manipulate it 
> directly, but only through methods of the ordered dictionary.
>
> This sounds at first more complicated, but is in reality more easy.
>
> For instance, if I want to put the last two keys of an ordered dict d at 
> the beginning, I would do it as d = d[:-2] + d[-2:].

As i mentioned elsewhere, i think using [] like this is a terrible idea - 
and definitely not easier.

> With the list attribute (called "sequence" in odict, you would have to 
> write: d.sequence = d.sequence[:-2] + d.sequence[-2:]. This is not only 
> longer to write down, but you also have to know that the name of the 
> attribute is "sequence".

True, but that's not exactly rocket science. I think the rules governing 
when your [] acts like a dict [] and when it acts like a list [] are 
vastly more complex than the name of one attribute.

> Python's strength is that you don't have to keep many details in mind 
> because it has a small "basic vocabulary" and orthogonal use.

No it isn't - it's in having a wide set of basic building blocks which do 
one simple thing well, and thus which are easy to use, but which can be 
composed to do more complex things. What are other examples of this kind 
of 'orthogonal use'?

> I prefer the ordered dictionary does not introduce new concepts or 
> attributes if everything can be done intuitively with the existing 
> Python methods and operators.

I strongly agree. However, i don't think your overloading of [] is at all 
intuitive.

tom

-- 
YOU HAVE NO CHANCE TO ARRIVE MAKE ALTERNATIVE TRAVEL ARRANGEMENTS. --
Robin May
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Which License Should I Use?

2005-11-25 Thread Tom Anderson
On Fri, 25 Nov 2005, Robert Kern wrote:

> You may also want to read this Licensing HOWTO:
>
>  http://www.catb.org/~esr/faqs/Licensing-HOWTO.html
>
> It's a draft, but it contains useful information.

It's worth mentioning that ESR, who wrote that, is zealously 
pro-BSD-style-license. That's not to say that the article isn't useful 
and/or balanced, but it's something to bear in mind while reading it.

tom

-- 
Science runs with us, making us Gods.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Which License Should I Use?

2005-11-25 Thread Tom Anderson
On Fri, 25 Nov 2005, mojosam wrote:

> How do I decide on a license?

You decide on what obligations you wish to impose on licensees, then pick 
a license which embodies those. There are basically three levels of 
obligation:

1. None.

2. Derivatives of the code must be open source.

3. Derivatives of the code and any other code which uses it must be open 
source.

By 'derivatives', i mean modified versions. By 'open source', i really 
mean 'under the same license as the original code'.

So, the licenses corresponding to these obligations are:

1. A BSD-style license. I say 'BSD-style' because there are about a 
hojillion licenses which say more or less the same thing - and it's quite 
amazing just how many words can be split spelling out the absence of 
obligations - but the grand-daddy of them all is the BSD license:

http://www.opensource.org/licenses/bsd-license.php

2. The GNU Lesser General Public License:

http://www.gnu.org/copyleft/lesser.html

3. The GNU General Public License:

http://www.gnu.org/copyleft/gpl.html

The GPL licenses place quite severe restrictions on the freedom of 
programmers using the code, but you often hear GNU people banging on about 
freedom - 'free software', 'free as in speech', etc. What you have to 
realise is that they're not talking about the freedom of the programmers, 
but about the freedom of the software. The logic, i think, is that the 
freedom of the code is the key to the freedom of the end-users: applying 
the GPL to your code means that other programmers will be forced to apply 
to to their code, which means that users of that code will get the 
benefits of open source.

Having said all that, you can only license software if you own the 
copyright on it, and as has been pointed out, in this case, you might not.

> Are there any web sites that summarize the pros and cons?

The GNU project has a quite useful list of licenses, with their takes on 
them:

http://www.gnu.org/licenses/license-list.html

Bear in mind that the GNU project is strongly in favour of the GPL, so 
they're perhaps not as positive about non-GPL licenses as would be fair.

This dude's written about this a bit:

http://zooko.com/license_quick_ref.html

> I guess I don't care too much about how other people use it.  These 
> things won't be comprehensive enough or have broad enough appeal that 
> somebody will slap a new coat of paint on them and try to sell them. I 
> guess I don't care if somebody incorporates them into something bigger. 
> If somebody were to add features to them, it would be nice to get the 
> code and keep the derivative work as open source, but I don't think that 
> matters all that much to me.  If somebody can add value and find a way 
> of making money at it, I don't think I'd be too upset.

To me, it sounds like you want a BSD-style license. But then i'm a BSD 
afficionado myself, so perhaps i would say that!

In fact, while were on the subject, let me plug my own license page:

http://urchin.earth.li/~twic/The_Amazing_Disappearing_BSD_License.html

I apply 0-clause BSD to all the code i release these days.

> I will be doing the bulk of the coding on my own time, because I need to 
> be able to take these tools with me when I change employers. However, 
> I'm sure that in the course of using these tools, I will need to spend 
> time on the job debugging or tweaking them.  I do not want my current 
> employer to have any claim on my code in any way.  Usually if you 
> program on company time, that makes what you do a "work for hire". I 
> can't contaminate my code like that.  Does that mean the GPL is the 
> strongest defense in this situation?

The license you choose has absolutely no bearing on this. Either the 
copyright belongs to you, in which case you're fine, or to your employer, 
in which case you don't have the right to license it, so it's moot.

> Let's keep the broader issue of which license will bring about the fall 
> of Western Civilization

You mean the GPL?

> on the other thread.

Oops!

tom

-- 
Science runs with us, making us Gods.
-- 
http://mail.python.org/mailman/listinfo/python-list


icmp - should this go in itertools?

2005-11-25 Thread Tom Anderson
Hi all,

This is a little function to compare two iterators:



def icmp(a, b):
    for xa in a:
        try:
            xb = b.next()
            d = cmp(xa, xb)
            if (d != 0):
                return d
        except StopIteration:
            return 1
    try:
        b.next()
        return -1
    except StopIteration:
        return 0



It's modelled after the way cmp treats lists - if a and b are lists, 
icmp(iter(a), iter(b)) should always be the same as cmp(a, b).
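
For example (from a quick mental run-through, not a real session):

>>> icmp(iter([1, 2, 3]), iter([1, 2, 4]))
-1
>>> icmp(iter([1, 2, 3]), iter([1, 2]))
1
>>> icmp(iter([1, 2]), iter([1, 2]))
0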

Is this any good? Would it be any use? Should this be added to itertools?

tom

-- 
I content myself with the Speculative part [...], I care not for the
Practick. I seldom bring any thing to use, 'tis not my way. Knowledge
is my ultimate end. -- Sir Nicholas Gimcrack
-- 
http://mail.python.org/mailman/listinfo/python-list


Yet another ordered dictionary implementation

2005-11-25 Thread Tom Anderson
What up yalls,

Since i've been giving it all that all over the ordered dictionary thread 
lately, i thought i should put my fingers where my mouth is and write one 
myself:

http://urchin.earth.li/~twic/odict.py

It's nothing fancy, but it does what i think is right.

The big thing that i'm not happy with is the order list (what Larosa and 
Foord call 'sequence', i call 'order', just to be a pain); this is a list 
of keys, which for many purposes is ideal, but does mean that there are 
things you might want to do with the order that you can't do with normal 
python idioms. For example, say we wanted to move the last item in the 
order to be first; if this was a normal list, we'd say:

od.order.insert(0, od.order.pop())

But we can't do that here - the argument to the insert is just a key, so 
there isn't enough information to make an entry in the dict. To make up 
for this, i've added move and swap methods on the list, but this still 
isn't idiomatic.

In order to have idiomatic order manipulation, i think we need to make the 
order list a list of items - that is, (key, value) pairs. Then, there's 
enough information in the results of a pop to support an insert. This also 
allows us to implement the various other mutator methods on the order 
lists that i've had to rub out in my code.

However, this does seem somehow icky to me. I can't quite put my finger on 
it, but it seems to violate Once And Only Once. Also, even though the 
above idiom becomes possible, it leads to futile remove-reinsert cycles in 
the dict bit, which it would be nice to avoid.

Thoughts?

tom

-- 
I content myself with the Speculative part [...], I care not for the
Practick. I seldom bring any thing to use, 'tis not my way. Knowledge
is my ultimate end. -- Sir Nicholas Gimcrack
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why are there no ordered dictionaries?

2005-11-25 Thread Tom Anderson
On Fri, 25 Nov 2005, Christoph Zwerschke wrote:

> Tom Anderson wrote:
>
>> True, but that's not exactly rocket science. I think the rules governing 
>> when your [] acts like a dict [] and when it acts like a list [] are vastly 
>> more complex than the name of one attribute.
>
> I think it's not really rocket science either to assume that an ordered 
> dictionary behaves like a dictionary if you access items by subscription 
> and like a list if you use slices (since slice indexes must evaluate to 
> integers anyway, they can only be used as indexes, not as keys).

When you put it that way, it makes a certain amount of sense - [:] is 
always about index, and [] is always about key. It's still icky, but it is 
completely unambiguous.

tom

-- 
I content myself with the Speculative part [...], I care not for the
Practick. I seldom bring any thing to use, 'tis not my way. Knowledge
is my ultimate end. -- Sir Nicholas Gimcrack
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Comparison problem

2005-11-26 Thread Tom Anderson
Chris, as well as addressing what i think is causing your problem, i'm 
going to point out some bits of your code that i think could be polished a 
little. It's intended in a spirit of constructive criticism, so i hope you 
don't mind!

On Sat, 26 Nov 2005, Chris wrote:

>if item[0:1]=="-":

item[0:1] seems a rather baroque way of writing item[0]! I'd actually 
suggest writing this line like this:

if item.startswith("-"):

As i feel it's more readable.

> item=item[ :-7]
> item=item[1:]

You could just write:

item = item[1:-7]

For those two lines.

> infile=open("inventory","r")

The "r" isn't necessary - reading is the default mode for files. You could 
argue that this documents your intentions towards the file, i suppose, but 
the traditional python idiom would leave it out.

> while infile:
>  dummy=infile.readline()

The pythonic idiom for this is:

for dummy in infile:

Although i'd strongly suggest you change 'dummy' to a more descriptive 
variable name; i use 'line' myself.

Now, this is also the line that i think is at the root of your trouble: 
readline returns lines with the line-terminator ('\n' or whatever it is on 
your system) still on them. That gets you into trouble later - see below.

When i'm iterating over lines in a file, the first thing i do with the 
line is chomp off any trailing newline; the line after the for loop is 
typically:

line = line.rstrip("\n")

>  if dummy=='':break

You don't by any chance mean 'continue' here, do you?

>  print item
>  print ", "+dummy
>  if (dummy == item): 

This is where it all falls down - i suspect that what's happening here is 
that dummy has a trailing newline, and item doesn't, so although they look 
very similar, they're not the same string, so the comparison comes out 
false. Try throwing in that rstrip at the head of the loop and see if it 
fixes it.

HTH.

tom

-- 
Gotta treat 'em mean to make 'em scream.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: icmp - should this go in itertools?

2005-11-26 Thread Tom Anderson
On Fri, 25 Nov 2005, Roy Smith wrote:

> Tom Anderson <[EMAIL PROTECTED]> wrote:
>
>> It's modelled after the way cmp treats lists - if a and b are lists,
>> icmp(iter(a), iter(b)) should always be the same as cmp(a, b).
>>
>> Is this any good? Would it be any use? Should this be added to itertools?
>
> Whatever happens, please name it something other than icmp.  When I read 
> "icmp", I think "Internet Control Message Protocol".

Heh! That's a good point. The trouble is, icmp is clearly the Right Thing 
to call it from the point of view of itertools, continuing the pattern of 
imap, ifilter, izip etc. Wouldn't it be clear from context that this was 
nothing to do with ICMP?

tom

-- 
Gotta treat 'em mean to make 'em scream.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: icmp - should this go in itertools?

2005-11-26 Thread Tom Anderson
On Sat, 26 Nov 2005, Diez B. Roggisch wrote:

> Tom Anderson wrote:
>
>> Is this any good? Would it be any use? Should this be added to itertools?
>
> Whilst not a total itertools-expert myself, I have one little objection 
> with this: the comparison won't let me know how many items have been 
> consumed. And I end up with two streams that lack some common prefix 
> plus one field.

Good point. It would probably only be useful if you didn't need to do 
anything with the iterators afterwards.

One option - which is somewhat icky - would be to encode that in the 
return value; if n is the number of items read from both iterators, then 
if the first argument is smaller, the return value is -n, and if the 
second is smaller, it's n. The trouble is that you couldn't be sure 
exactly how many items had been read from the larger iterator - it could 
be n, if the values in the iterators differ, or n+1, if the values were 
the same but the larger one was longer.
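
A sketch of the alternative i have in mind - returning the count 
alongside the comparison, rather than folding it into the sign (this 
isn't the version i posted, just an illustration):

def icmp_counting(xi, yi):
    # returns (c, n): c is -1, 0 or 1 as for cmp, and n is the number of
    # pairs successfully read from both iterators; the longer iterator
    # may have had one extra item consumed, as discussed above
    n = 0
    while True:
        try:
            x = xi.next()
        except StopIteration:
            try:
                yi.next()
            except StopIteration:
                return 0, n
            return -1, n
        try:
            y = yi.next()
        except StopIteration:
            return 1, n
        n = n + 1
        c = cmp(x, y)
        if c != 0:
            return c, n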

> I'm just not sure if there is any usecase for that.

I used it in my ordered dictionary implementation; it was a way of 
comparing two 'virtual' lists that are lazily generated on demand.

I'll go away and think about this more.

tom

-- 
Gotta treat 'em mean to make 'em scream.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Comparison problem

2005-11-26 Thread Tom Anderson
On Sat, 26 Nov 2005, Peter Hansen wrote:

> Tom Anderson wrote:
>> On Sat, 26 Nov 2005, Chris wrote:
>> 
>>>   if item[0:1]=="-":
>> 
>> item[0:1] seems a rather baroque way of writing item[0]! I'd actually 
>> suggest writing this line like this:
>
> Actually, it's not so much baroque as it is safe... item[0] will fail if 
> the string is empty, while item[0:1] will return '' in that case.

Ah i didn't realise that. Whether that's safe rather depends on what the 
subsequent code does with an empty string - an empty string might be some 
sort of error (in this particular case, it would mean that the loop test 
had gone wrong, since bool("") == False), and the slicing behaviour would 
constitute silent passing of an error.
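
Just to spell out the difference, for anyone following along:

>>> item = ""
>>> item[0:1]
''
>>> item[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
IndexError: string index out of range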

But, more importantly, egad! What's the thinking behind having slicing 
behave like that? Anyone got any ideas? What's the use case, as seems to 
be the fashionable way of putting it these days? :)

tom

-- 
This should be on ox.boring, shouldn't it?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-03 Thread Tom Anderson
On Fri, 2 Dec 2005, [EMAIL PROTECTED] wrote:

> Dave Hansen wrote:
>
>> TAB characters are evil.  They should be banned from Python source 
>> code. The interpreter should stop translation of code and throw an 
>> exception when one is encountered.  Seriously.  At least, I'm serious 
>> when I say that.  I've never seen TAB characters solve more problems 
>> than they cause in any application.
>>
>> But I suspect I'm a lone voice crying in the wilderness.  Regards,
>
> You're not alone.
>
> I still don't get why there is still people using real tabs as
> indentation.

I use real tabs. To me, it seems perfectly simple - i want the line to be 
indented a level, so i use a tab. That's what tabs are for. And i've 
never, ever come across any problem with using tabs.

Spaces, on the otherhand, can be annoying: using spaces means that the 
author's personal preference about how wide a tab should be gets embedded 
in the code, so if that's different to mine, i end up having to look at 
weird code. Navigating and editing the code with arrow-keys under a 
primitive editor, which one is sometimes forced to do, is also slower and 
more error-prone.

So, could someone explain what's so evil about tabs?

tom

-- 
Space Travel is Another Word for Love!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tabs bad (Was: ANN: Dao Language v.0.9.6-beta is release!)

2005-12-04 Thread Tom Anderson

On Sun, 4 Dec 2005, Björn Lindström wrote:

> This article should explain it:
>
> http://www.jwz.org/doc/tabs-vs-spaces.html

Ah, Jamie Zawinski, that well-known fount of sane and reasonable ideas.

It seems to me that the tabs-vs-spaces thing is really about who controls 
the indentation: with spaces, it's the writer, and with tabs, it's the 
reader. Does that match up with people's attitudes? Is it the case that 
the space cadets want to control how their code looks to others, and the 
tabulators want to control how others' code looks to them?


I wonder if there's a further correlation between preferring spaces to 
tabs and the GPL to the BSDL ...


tom

Lexicographical PS: 'tabophobia' is, apparently, fear of the 
neurodegenerative disorder tabes dorsalis.


--
3118110161  Pies
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-04 Thread Tom Anderson
On Sun, 4 Dec 2005 [EMAIL PROTECTED] wrote:

>>  you're about 10 years late
>
> The same could be said for hoping that the GIL will be eliminated.
> Utterly hopeless.
>
> Until... there was PyPy.  Maybe now it's not so hopeless.

No - structuring by indentation and the global lock are entirely different 
kettles of fish. The lock is an implementation detail, not part of the 
language, and barely even perceptible to users; indeed, Jython and 
IronPython, i assume, don't even have one. Structuring by indentation, on 
the other hand, is a part of the language, and a very fundamental one, at 
that. Python without structuring by indentation *is not* python.

Which is not to say that it's a bad idea - if it really is scaring off 
potential converts, then a dumbed-down dialect of python which uses curly 
brackets and semicolons might be a useful evangelical tool.

tom

-- 
3118110161  Pies
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get the extension of a filename from the path

2005-12-08 Thread Tom Anderson
On Thu, 8 Dec 2005, Lad wrote:

> what is a way to get the the extension of  a filename from the path?
> E.g., on my XP windows the path can be
> C:\Pictures\MyDocs\test.txt
> and I would like to get
> the the extension of  the filename, that is here
> txt

You want os.path.splitext:

>>> import os
>>> os.path.splitext("C:\Pictures\MyDocs\test.txt")
('C:\\Pictures\\MyDocs\test', '.txt')
>>> os.path.splitext("C:\Pictures\MyDocs\test.txt")[1]
'.txt'
>>>

> I would like that to work on Linux also

It'll be fine.

tom

-- 
[Philosophy] is kind of like being driven behind the sofa by Dr Who -
scary, but still entertaining. -- Itchyfidget
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get the extension of a filename from the path

2005-12-09 Thread Tom Anderson
On Thu, 8 Dec 2005, gene tani wrote:

> Lad wrote:
>
>> what is a way to get the the extension of  a filename from the path?
>
> minor footnote: windows paths can be raw strings for os.path.split(),
> or you can escape "/"
> tho Tom's examp indicates unescaped, non-raw string works with
> splitext()

DOH. Yes, my path's got a tab in it, hasn't it!
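
For the record, with a raw string (or doubled backslashes) it comes out 
as intended:

>>> import os
>>> os.path.splitext(r"C:\Pictures\MyDocs\test.txt")
('C:\\Pictures\\MyDocs\\test', '.txt')
>>> os.path.splitext(r"C:\Pictures\MyDocs\test.txt")[1]
'.txt'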

tom

-- 
Women are monsters, men are clueless, everyone fights and no-one ever
wins. -- cleanskies
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Encoding of file names

2005-12-09 Thread Tom Anderson

On Thu, 8 Dec 2005, "Martin v. Löwis" wrote:


> utabintarbo wrote:
>
>> Fredrik, you are a God! Thank You^3. I am unworthy 
>
> For all those who followed this thread, here is some more explanation:
>
> Apparently, utabintarbo managed to get U+2592 (MEDIUM SHADE, a filled 
> 50% grayish square) and U+2524 (BOX DRAWINGS LIGHT VERTICAL AND LEFT, a 
> vertical line in the middle, plus a line from that going left) into a 
> file name. How he managed to do that, I can only guess: most likely, the 
> Samba installation assumes that the file system encoding on the Solaris 
> box is some IBM code page (say, CP 437 or CP 850). If so, the byte on 
> disk would be \xb4. Where this came from, I have to guess further: 
> perhaps it is ACUTE ACCENT from ISO-8859-*.
>
> Anyway, when he used listdir() to get the contents of the directory, 
> Windows applies the CP_ACP encoding (known as "mbcs" in Python). For 
> reasons unknown to me, the US and several European versions of XP map 
> this to \xa6, VERTICAL BAR (I can somewhat see that as meaningful for 
> U+2524, but not for U+2592).
>
> So when he then applies isfile to that file name, \xa6 is mapped to 
> U+00A6, which then isn't found on the Samba side.
>
> So while Unicode here is the solution, the problem is elsewhere; most 
> likely in a misconfiguration of the Samba server (which assumes some 
> encoding for the files on disk, yet the AIX application uses a different 
> encoding).

Isn't the key thing that Windows is applying a non-roundtrippable 
character encoding? If i've understood this right, Samba and Windows are 
talking in unicode, with these (probably quite spurious, but never mind) 
U+25xx characters, and Samba is presenting a quite consistent view of the 
world: there's a file called "double bucky backslash grey box" in the 
directory listing, and if you ask for a file called "double bucky backslash 
grey box", you get it. Windows, however, maps that name to the 8-bit 
string "double bucky backslash vertical bar", but when you pass *that* 
back to it, it gets encoded as the unicode string "double bucky backslash 
vertical bar", which Samba then doesn't recognise.

I don't know what Windows *should* do here. I know it shouldn't do this 
- this leads to breaking of some very basic invariants about files and 
directories, and so the kind of confusion utabintarbo suffered. The 
solution is either to apply an information-preserving encoding (UTF-8, 
say), or to refuse to do it at all (ie, raise an error if there are 
unencodable characters), neither of which are particularly beautiful 
solutions. I think Windows is in a bit of a rock/hard place situation 
here, poor thing.

Incidentally, for those who haven't come across CP_ACP before, it's not 
yet another character encoding, it's a pseudovalue which means 'the 
system's current default character set'.

tom

--
Women are monsters, men are clueless, everyone fights and no-one ever
wins. -- cleanskies
-- 
http://mail.python.org/mailman/listinfo/python-list

Validating an email address

2005-12-09 Thread Tom Anderson
Hi all,

A hoary old chestnut this - any advice on how to syntactically validate an 
email address? I'd like to support both the display-name-and-angle-bracket 
and bare-address forms, and to allow everything that RFC 2822 allows (and 
nothing more!).

Currently, i've got some regexps which recognise a common subset of 
possible addresses, but it would be nice to do this properly - i don't 
currently support quoted pairs, quoted strings, or whitespace in various 
places where it's allowed. Adding support for those things using regexps 
is really hard. See:

http://www.ex-parrot.com/~pdw/Mail-RFC822-Address.html

For a level to which i am not prepared to stoop.
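
To give a flavour of it, the sort of common-subset pattern i mean is 
roughly this - emphatically not full RFC 2822, just an illustration:

import re

# dot-atom-ish local part, then a dotted domain; no quoted strings,
# no comments, no whitespace - i.e. the easy cases only
address_re = re.compile(
    r"^[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]+(\.[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]+)*"
    r"@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def looks_like_an_address(addr):
    return address_re.match(addr) is not None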

I hear the email-sig are open to adding a validation function to the email 
package, if a satisfactory one can be written; i would definitely support 
their doing that.

tom

-- 
Women are monsters, men are clueless, everyone fights and no-one ever
wins. -- cleanskies
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: heartbeats

2005-12-09 Thread Tom Anderson
On Fri, 9 Dec 2005, Sybren Stuvel wrote:

> Yves Glodt enlightened us with:
>
>> In detail I need a daemon on my central server which e.g. which in a 
>> loop pings (not really ping but you know what I mean) each 20 seconds 
>> one of the clients.

Do you mean pings one client every 20 sec, or each client every 20 sec?

> You probably mean "really a ping, just not an ICMP echo request".

What's a real ping, if not an ICMP echo request? That's pretty much the 
definitive packet for internetwork groping as far as i know. I think that 
the more generic sense of ping is a later meaning (BICouldVeryWellBW).

>> My central server, and this is important, should have a short timeout. 
>> If one client does not respond because it's offline, after max. 10 
>> seconds the central server should continue with the next client.
>
> I'd write a single function that pings a client and waits for a 
> response/timeout. It then should return True if the client is online, 
> and False if it is offline. You can then use a list of clients and the 
> filter() function, to retrieve a list of online clients.

That sounds like a good plan.

To do the timeouts, you want the settimeout method on socket:



import socket

def default_validate(sock):
    return True

def ping(host, port, timeout=10.0, validate=default_validate):
    """Ping a specified host on the specified port. The timeout (in
    seconds) and a validation function can be set; the validation
    function should accept a freshly opened socket and return True if
    it's okay, and False if not. This function returns True if the
    specified target can be connected to and yields a valid socket, and
    False otherwise.

    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
    except socket.error:
        return False
    ok = validate(sock)
    sock.close()
    return ok


A potential problem with this is that in the worst case, you'll be 
spending a little over ten seconds on each socket; if you have a lot of 
sockets, that might mean you're not getting through them fast enough. 
There are two ways round this: handle several pings in parallel using 
threads, or use non-blocking sockets to handle several at once with a 
single thread.

tom

-- 
everything from live chats and the Web, to the COOLEST DISGUSTING
PORNOGRAPHY AND RADICAL MADNESS!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: heartbeats

2005-12-09 Thread Tom Anderson
On Fri, 9 Dec 2005, Peter Hansen wrote:

> Tom Anderson wrote:
>> On Fri, 9 Dec 2005, Sybren Stuvel wrote:
>>> You probably mean "really a ping, just not an ICMP echo request".
>> 
>> What's a real ping, if not an ICMP echo request? That's pretty much the 
>> definitive packet for internetwork groping as far as i know. I think that 
>> the more generic sense of ping is a later meaning (BICouldVeryWellBW).
>
> Submarines came before the 'net.  ;-)

Ah, of course!

if self.ping(host):
    self.depth = PERISCOPE_DEPTH
    periscope.up()
    self.tubes["bow"].load()

:)

tom

-- 
Rip and tear your guts! You are huge! That means you have huge guts! Rip
and tear! -- The Doomguy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Encoding of file names

2005-12-09 Thread Tom Anderson

On Fri, 9 Dec 2005, "Martin v. Löwis" wrote:


> Tom Anderson wrote:
>
>> Isn't the key thing that Windows is applying a non-roundtrippable 
>> character encoding?
>
> This is a fact, but it is not a key thing. Of course Windows is applying 
> a non-roundtrippable character encoding. What else could it do?

Well, i'm no great thinker, but i'd say that errors should never pass 
silently, and that in the face of ambiguity, one should refuse the 
temptation to guess. So, as i said in my post, if the name couldn't be 
translated losslessly, an error should be raised.

>> I don't know what Windows *should* do here. I know it shouldn't do this 
>> - this leads to breaking of some very basic invariants about files and 
>> directories, and so the kind of confusion utabintarbo suffered.
>
> It always did this, and always will. Applications should stop using the 
> *A versions of the API.

Absolutely true.

> If they continue to do so, they will continue to get bogus results in 
> border cases.

No. The availability of a better alternative is not an excuse for 
gratuitous breakage of the worse alternative.

tom

--
Whose house? Run's house!
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Validating an email address

2005-12-09 Thread Tom Anderson
On Sat, 10 Dec 2005, Ben Finney wrote:

> Tom Anderson <[EMAIL PROTECTED]> wrote:
>
>> A hoary old chestnut this - any advice on how to syntactically
>> validate an email address?
>
> Yes: Don't.
>
> <http://www.apps.ietf.org/rfc/rfc3696.html#sec-3>

The IETF must have updated that RFC between you posting the link and me 
reading it, because that's not what it says. What it says is that the syntax 
for local parts is complicated, and many of the variations are actually 
used for reasons i can't even imagine, so they should be permitted. It 
doesn't say anything about not validating the local part against that 
syntax.

> Please, don't attempt to "validate" the local-part. It's not up to you 
> to decide what the receiving MTA will accept as a local-part,

Absolutely not - it's up to the IETF, and their decision is recorded in 
RFC 2822.

tom

-- 
Whose house? Run's house!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Tom Anderson
On Sat, 10 Dec 2005, Sybren Stuvel wrote:

> Zeljko Vrba enlightened us with:
>
>> Find me an editor which has folds like in VIM, regexp search/replace 
>> within two keystrokes (ESC,:), marks to easily navigate text in 2 
>> keystrokes (mx, 'x), can handle indentation-level matching as well as 
>> VIM can handle {}()[], etc.  And, unlike emacs, respects all (not just 
>> some) settings that are put in its config file. Something that works 
>> satisfactorily out-of-the box without having to learn a new programming 
>> language/platform (like emacs).
>
> Found it! VIM!

ED IS THE STANDARD TEXT EDITOR.

tom

-- 
Argumentative and pedantic, oh, yes. Although it's properly called
"correct" -- Huge
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-06-24 Thread Tom Anderson
On Fri, 24 Jun 2005, Joseph Garvin wrote:

> Claudio Grondi wrote:
>
> So far we've got lisp macros and a thousand response's to the lua trick. 
> Anyone else have any actual non-python language tricks they like?

Higher-order functions like map, filter and reduce. As of Python 3000, 
they're non-python tricks. Sigh - i guess it's time for me to get to know 
list comprehensions a bit better.

The one thing i really do miss is method overloading by parameter type. I 
used this all the time in java, and i really notice the lack of it 
sometimes in python. No, it's not really possible in a typeless language, 
and yes, there are implementations based on decorators, but frankly, 
they're awful.

Yeah, and i'm with "if False:" for commenting out chunks of code.

tom

-- 
... but when you spin it it looks like a dancing foetus!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-06-25 Thread Tom Anderson
On Fri, 24 Jun 2005, Roy Smith wrote:

> Tom Anderson  <[EMAIL PROTECTED]> wrote:
>
>> The one thing i really do miss is method overloading by parameter type. 
>> I used this all the time in java
>
> You do things like that in type-bondage languages

I love that expression. I think it started out as 'bondage and discipline 
languages', which is even better. I'm going to start referring to python 
as a 'sluttily typed' language.

> like Java and C++ because you have to.  Can you give an example of where 
> you miss it in Python?

No. I don't generally go around keeping a list of places where i miss 
particular features or find particular warts irritating. Still, my 
medium-term memory is not completely shot, so i assume i haven't missed it 
much in the last couple of days!

> If you want to do something different based on the type of an argument, 
> it's easy enough to do:
>
> def foo (bar):
>if type(bar) == whatever:
>   do stuff
>else:
>   do other stuff
>
> replace type() with isistance() if you prefer.

Yeah, i'm well aware that this is possible - what it's not is a clean 
solution. If i was into writing boilerplate, i'd be using C. Also, this 
gets really nasty if you want to overload on multiple variables.

Also, it actually falls down really badly in combination with duck typing 
- you can't use isinstance to ask if an object looks like a file, for 
example, only if it really is a file. Sure, you can do a bunch of hasattrs 
to see if it's got the methods it should have, but that doesn't tell you 
for certain it's a file, and it's a pain in the arse to write. In a typed 
language, you'd just ask if it implemented the Channel (for example) 
interface.
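
To illustrate the sort of dance i mean - a sketch, not anything from 
real code:

def looks_like_a_readable_file(obj):
    # poke at the object to see if it quacks like a file
    return (hasattr(obj, "read") and callable(obj.read)
            and hasattr(obj, "close") and callable(obj.close))

def slurp(source):
    if isinstance(source, str):
        return open(source).read()
    elif looks_like_a_readable_file(source):
        return source.read()
    else:
        raise TypeError("expected a filename or a file-like object")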

>> No, it's not really possible in a typeless language,
>
> Python is not typeless.  It's just that the types are bound to the 
> objects, not to the containers that hold the objects.

No. Types are properties of variables; the property that objects have is 
called class. Python has classes but not types. I realise that many, even 
most, people, especially those using typeless languages like python or 
smalltalk, use the two terms interchangeably, but there's a real and 
meaningful distinction between them. I may be the last person alive who 
thinks it's an important distinction, but by god i will die thinking it. 
So let's recognise that we have slightly different terminologies and not 
argue about it!

tom

-- 
Why do we do it? - Exactly!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-06-25 Thread Tom Anderson
On Sat, 25 Jun 2005, Konstantin Veretennicov wrote:

> On 6/25/05, Mandus <[EMAIL PROTECTED]> wrote:
>
>> It is really a consensus on this; that removing map, filter, reduce is 
>> a good thing? It will render a whole lot of my software unusable :(
>
> I think you'll be able to use "from __past__ import map, filter,
> reduce" or something like that :)

from __grumpy_old_bastard_who_cant_keep_up__ import map

:)

tom

-- 
Why do we do it? - Exactly!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Better console for Windows?

2005-06-28 Thread Tom Anderson
On Tue, 27 Jun 2005, Brett Hoerner wrote:

> Rune Strand wrote:
>
> Christ, thanks.  When you install Windows it should pop up first thing 
> and ask if you want to be annoyed, Y/N.

What, and not install if you say no?

Perhaps your best way to get a proper shell on windows is just to install 
a proper shell; Cygwin () has bash, but it also 
installs a bunch of other unixish stuff you might or might not want. This:

http://www.steve.org.uk/Software/bash/

looks like a standalone bash, plus ls, mv, cp rm, chmod and less. Here:

http://gnuwin32.sourceforge.net/packages.html

you can get various further bits of free software compiled for windows, 
including:

http://gnuwin32.sourceforge.net/packages/coreutils.htm

the GNU coreutils, which is the 1% of commands you use 99% of the time. 
bash + coreutils should do nicely. For a mostly complete GNU development 
toolchain, check out MinGW:

http://www.mingw.org/

Which, IMHO, is a better solution than Cygwin for general programming.

tom

-- 
i'm prepared to do anything as long as someone else works out how to do it and 
gives me simple instructions... -- Sean
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Modules for inclusion in standard library?

2005-06-30 Thread Tom Anderson
On Wed, 29 Jun 2005, it was written:

> Rocco Moretti <[EMAIL PROTECTED]> writes:
>
>> Except that (please correct me if I'm wrong) there is somewhat of a 
>> policy for not including interface code for third party programs which 
>> are not part of the operating system.
>
> I've never heard of Python having such a policy and I don't understand
> how such a stupid policy could be considered compatible with a
> proclaimed "batteries included" philosophy.

Agreed. If this is the policy, it should be reconsidered. It's silly.

tom

-- 
How did i get here?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boss wants me to program

2005-06-30 Thread Tom Anderson
On Wed, 29 Jun 2005, phil wrote:

>> Wow! How about a sextant? Simple device really. And a great practical 
>> demonstration of trigonometry.
>
> Excellent idea, even found a few how to sites. We'll do it.
> Any others?

A ballista? For many years when i was a kid, my dad wanted to build a 
ballista; he collected loads of literature on it. There's a surprising 
amount of maths involved - the Greeks actually devised instruments for 
computing cube roots in order to do it!

Perhaps not an ideal project for schoolkids, though.

tom

-- 
How did i get here?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map vs. list-comprehension

2005-06-30 Thread Tom Anderson

On Fri, 1 Jul 2005, Mike P. wrote:


"Björn Lindström" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]

"F. Petitjean" <[EMAIL PROTECTED]> writes:


res = [ bb+ii*dd for bb,ii,dd in zip(b,i,d) ]

Hoping that zip will not be deprecated.


Nobody has suggested that. The ones that are planned to be removed are
lambda, reduce, filter and map. Here's GvR's blog posting that explains
the reasons:

http://www.artima.com/weblogs/viewpost.jsp?thread=98196


That really sucks, I wasn't aware of these plans. Ok, I don't use reduce 
much, but I use lambda, map and filter all the time. These are some of 
the features of Python that I love the best. I can get some pretty 
compact and easy to read code with them.


Same here.

> And no, I'm not a Lisp programmer (never programmed in Lisp). My 
> background being largely C++, I discovered lambda, apply, map and filter 
> in Python, although I had seen similar stuff in other functional 
> languages like Miranda and Haskell.

Same here too!

> Also, I don't necessarily think list comprehensions are necessarily 
> easier to read. I don't use them all that much to be honest.


And here!

However, i also felt that way about generator functions - until the other 
day, when i realised one was the best solution to a problem i had. That 
made me realise that the same was probably true of list comprehensions.


That said, i do still think that map etc are better than list comps, 
because they involve less language. Once you have the idea of a function 
and a list, you can understand map as a function that operates on lists; 
list comprehensions provide a whole new splodge of arbitrary syntax to 
learn. I guess you could say the same about lambda, which is really an 
essential part of the whole map way of life, but i don't think that's fair 
- list comprehensions are a structure for doing just one thing, whereas 
lambda is a construct of enormous general power.


I'd be happy for the lambda syntax to be tidied up, though - perhaps it 
could be merged with def? Like:


def name(args): # traditional form
    some_statements
    return some_expression

def name(args): return some_expression # one-line form

def name(args): some_statements; return some_expression

def name(args) = some_expression # shorthand one-line form

Then an anonymous form, which is an expression rather than a statement:

def (args):
    some_statements
    return some_expression

def (args): return some_expression

def (args) = some_expression

The latter form is like a lambda; i'm not sure how the former forms would 
work inside enclosing expressions; i think it would look pretty sick:

surfaceAreaToVolumeRatios = map(def (radius):
        area = 4.0 * math.pi * (radius ** 2)
        volume = 4.0 / 3.0 * math.pi * (radius ** 3)
        return area / volume
    , radii)

It works, but i admit it's not hugely pretty. But then, i wouldn't advise 
anyone to actually do this; it's just there for completeness.


You might also want to allow:

def name(args) = some_statements; some_expression

And the anonymous counterpart. But i'm not sure about that one. Multiple 
expressions inside lambdas would sometimes be useful, but you can get 
those with the shorthand form.


> I think at this stage the Python community and Python programmers would 
> be better served by building a better, more standardised, cross 
> platform, more robust, better documented, and more extensive standard 
> library. I would be happy to contribute in this regard, rather than 
> having debates about the addition and removal of language features which 
> don't improve my productivity.

Same here.

> Sorry, I've probably gone way off topic, and probably stirred up 
> political issues which I'm not aware of, but, man when I hear stuff like 
> the proposed removal of reduce, lambda, filter and map, all I see ahead 
> of me is a waste of time as a programmer.

Same here.

> Sorry for the OT long rant.


Yeah, that was really off-topic for a python newsgroup. You didn't even 
mention regional accents once!


tom

--
How did i get here?
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: When someone from Britain speaks, Americans hear a "British accent"...

2005-06-30 Thread Tom Anderson
On Thu, 30 Jun 2005, Benji York wrote:

> python-needs-more-duct-tape'ly yours,

You're in luck: Python 3000 will replace duck typing with duct taping.

tom

-- 
I know you wanna try and get away, but it's the hardest thing you'll ever know
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When someone from Britain speaks, Americans hear a "British accent"...

2005-06-30 Thread Tom Anderson
On Thu, 30 Jun 2005, Simon Brunning wrote:

> On 29 Jun 2005 15:34:11 -0700, Luis M. Gonzalez <[EMAIL PROTECTED]> wrote:
>
>> What's exactly the "cockney" accent? Is it related to some place or 
>> it's just a kind of slang?
>
> The cockney accent used to be pretty distinct, but these days it's 
> pretty much merged into the "Estuary English" accent common throughout 
> the South East of England.

I grew up in Colchester, in the heart of Essex, the homeland of Estuary 
English; i was recently told by a couple of Spanish colleagues that i 
sounded just like another colleague who has a Cockney accent.

Although, in fact, my parents aren't Essexen, and i left the county seven 
years ago, so my accent is weird hybrid of Estuary and RP, and the 
colleague isn't a real Cockney - i think he's from east-north-eastern 
London - but he does overcompensate pronounciation-wise, so i don't know 
what it all means.

It's also complicated by the fact that Essex actually has two completely 
different accents - the town accent, which is Estuary and is pretty much 
derived from emigrants from East London, and the country accent, which is 
indigenous, and very similar to the Suffolk and Norfolk accents. I grew up 
in a village and went to school (and went drinking etc) in the nearby 
town, so i was exposed to different accents at different times of day!

>> I'm not sure, but I think that I read somewhere that it is common in 
>> some parts of London, and that it is a sign of a particular social 
>> class, more than a regionalism. Is that true?
>
> Cockney was London's working class accent, pretty much, thought it was
> frequently affected by members of the middle classes. Estuary English
> has taken over its position as the working class accent these days,
> but with a much wider regional distribution.

blimey guvnor you is well dahn on ar muvver tung, innit?

> How off topic is this? Marvellous!

Spike Milligan did an excellent sketch in the style of a TV 
pop-anthropology documentary visiting the strange and primitive Cockanee 
people of East London. It was part of one of his Q series; i'm not sure 
which, but if it was Q5, then it would have had a direct impact on the 
Monty Python team, since that series basically beat them to the punch with 
the format they'd planned to use, forcing them to switch to the 
stream-of-consciousness style that became their trademark and which is the 
basis for python's indentation-based block structure. Therefore, if it 
hadn't been for the quirks of the Cockney accent, we'd all be using curly 
brackets and semicolons. FACT.

tom

-- 
I know you wanna try and get away, but it's the hardest thing you'll ever know
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When someone from Britain speaks, Americans hear a "British accent"...

2005-06-30 Thread Tom Anderson
On Wed, 29 Jun 2005, Michael Hoffman wrote:

> Steven D'Aprano wrote:
>
>> Herb starts with H, not E. It isn't "ouse" or "ospital" or "istory". It 
>> isn't "erb" either. You just sound like tossers when you try to 
>> pronounce herb in the original French.

Yes, i find this insanely irritating.

>> And the same with homage.
>
> Strangely enough there are Brits who pronounce "hotel" without an H at 
> the beginning. And even those who pronounce it with an H sometimes say 
> "an hotel" rather than "a hotel" because it used to be pronounced 
> starting with the vowel!

That's an interesting one. In most English accents, and i think in RP, 
it's "a hotel"; dropping of the aitch and the accompanying shift to 'an', 
as in "an 'otel" is a symptom of Estuary english. However, as you say, 
there is some weird historical precedent for pronouncing the 'h' but also 
using 'an', as in "an hotel", which is practiced only by the 
self-consciously posh (including, often, newsreaders), and sounds 
completely absurd.

> Similarly, the Brits should note that "idea" does not end in an "r" and that 
> "Eleanor" does.

How about carrier?

tom

-- 
I know you wanna try and get away, but it's the hardest thing you'll ever know
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python for everything?

2005-07-01 Thread Tom Anderson
On Thu, 30 Jun 2005 [EMAIL PROTECTED] wrote:

> can Python "do it all"?

More or less. There are two places where python falls down, IMHO. One is 
performance: python isn't generally as fast as C or Java, even with Psyco. 
However, the number of cases where performance - and absolute 
straight-line performance of the code - actually matters is much smaller 
than you might think. Also, you can incorporate C into python pretty 
easily. The other is in bit-twiddling - anything that involves mucking 
about with data at the level of bits and bytes. Maybe this is just blind 
prejudice, but i'm never as comfortable hacking on that sort of stuff 
(writing a Huffman coder, say) in python as in java.

Other than that, python is pure victory.

> I am wondering what to learn as my scripting language.

Python.

> I have read that perl is good up to about 250 lines, and after that it 
> gets kind of hairy.

That's putting it mildly.

> I would like opinions as to the suitability of Python as a general 
> purpose language for programming unix, everything from short scripts to 
> muds.

Python is, all things considered, definitely the best such language.

There are strong arguments that can be made in favour of younger cousins 
of Python such as Ruby and Lua, but none of those have anything like the 
userbase or third-party code that Python does, and that counts for a lot. 
LISP (or rather Scheme) would be a more unusual option; it's a language 
that most people hate, but that people who really take the time to learn 
it love with a fervour bordering on scary.

tom

-- 
In-jokes for out-casts
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map vs. list-comprehension

2005-07-01 Thread Tom Anderson
On Thu, 30 Jun 2005, Roy Smith wrote:

> Terry Hancock <[EMAIL PROTECTED]> wrote:
>
>> One of the strengths of Python has been that the language itself is 
>> small (which it shares with C and (if I understand correctly, not being 
>> a lisp programmer?) Lisp), but with all the syntax enhancements going 
>> on, Python is getting pretty complicated. I have to wonder if new users 
>> won't begin to find it just as intimidating as Perl or other big 
>> languages.
>
> +1
>
> Even some of the relatively recent library enhancements have been kind 
> of complicated.  The logging module, for example, seems way over the 
> top.

Exactly the same thing happened with Java. if you look at the libraries 
that were in 1.1, they're very clean and simple (perhaps with the 
exception of AWT). 1.2 added a load of stuff that was much less 
well-designed (with the notable exception of the collections stuff, which 
is beautiful), and a lot of the extension packages that have been written 
since then are seriously crappy. My particular bugbear is JAI, the imaging 
library, the most gratuitously badly-designed library it has ever been my 
misfortune to work with. EJB is another great example.

I imagine the reason for this degradation has been the expansion of the 
java design team: it started off with James Gosling, who is an incredibly 
smart guy and an awesome engineer, and a relatively small team of crack 
troops; they were capable of writing good code, and really cared about 
doing that. Over the years, as it's grown, it's had to absorb a lot of 
people who don't have that combination of intelligence and good taste, and 
they've written a lot of crap. I suspect a trend away from gifted lone 
hackers and towards design by committee hasn't helped, either.

How this applies to python, where the BDFL is still very much at the helm, 
is not clear. I wonder if analogies to Linux, also a despotism, are more 
useful?

tom

-- 
In-jokes for out-casts
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map vs. list-comprehension

2005-07-01 Thread Tom Anderson
On Fri, 1 Jul 2005, George Sakkis wrote:

> "Terry Hancock" wrote:
>
> Keeping the language small is a worthwhile goal, but it should be traded 
> off with conciseness and readability; otherwise we could well be content 
> with s-expressions.

There's quite a number of satisfied LISP programmers out there who *are* 
content with S-expressions ...

tom

-- 
In-jokes for out-casts
-- 
http://mail.python.org/mailman/listinfo/python-list


Re:

2005-07-01 Thread Tom Anderson
On Fri, 1 Jul 2005, Adriaan Renting wrote:

> I'm not a very experienced Python programmer yet, so I might be 
> mistaken, but there are a few things that would make me prefer C++ over 
> Python for large (over 500.000 LOC) projects.

Hmm. I don't know C++, but here goes ...

> - namespaces

Aren't namespaces basically the same as packages/modules in python?

> - templates

These would be meaningless in python - they're part of typefulness, which ...

> - strong type checking

... python eschews.

Not that this is necessarily a good thing. I have to say that my Java 
roots do lead me to think that strong typing is a plus for big projects, 
since it's a way of defining and enforcing interfaces between bits of code 
written by different people (or by one person at different times!). 
Optional static typing in python would be nice for this.

> - data hiding

Surely you can hide data in python?
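
You can get most of the way there with the usual underscore conventions 
and name mangling, at least - a sketch:

class Account(object):
    def __init__(self, balance):
        self._balance = balance    # single underscore: "please don't touch"
        self.__log = []            # double underscore: mangled to _Account__log

    def deposit(self, amount):
        self.__log.append(amount)
        self._balance = self._balance + amount

It's hiding by politeness rather than by decree, but in practice it does 
the job.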

> - more available libraries and more advanced developement tools.

True. The more advanced development tools are offset to a large degree by 
the advanced crappiness of C++ as a language, though; i'd be surprised if 
a C++ programmer borged up with all the latest tools was actually more 
productive than a python programmer with a syntax-colouring, 
auto-indenting text editor. It'd be very interesting to get some real 
numbers on that.

>> Ultimately, manageability of a project is far and away more about the
>> people involved and the techniques used than it is about any single
>> technology involved.
>
> Agreed.

+1 getting to the crux of it.

tom

-- 
In-jokes for out-casts
-- 
http://mail.python.org/mailman/listinfo/python-list


map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-01 Thread Tom Anderson
Comrades,

During our current discussion of the fate of functional constructs in 
python, someone brought up Guido's bull on the matter:

http://www.artima.com/weblogs/viewpost.jsp?thread=98196

He says he's going to dispose of map, filter, reduce and lambda. He's 
going to give us product, any and all, though, which is nice of him.

What really struck me, though, is the last line of the abstract:

"I expect tons of disagreement in the feedback, all from ex-Lisp-or-Scheme 
folks. :-)"

I disagree strongly with Guido's proposals, and i am not an ex-Lisp, 
-Scheme or -any-other-functional-language programmer; my only other real 
language is Java. I wonder if i'm an outlier.

So, if you're a pythonista who loves map and lambda, and disagrees with 
Guido, what's your background? Functional or not?

tom

-- 
Batman always wins
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Re:

2005-07-02 Thread Tom Anderson
On Fri, 1 Jul 2005, Andreas Kostyrka wrote:

> On Friday, 01.07.2005 at 08:25 -0700, George Sakkis wrote:
>
>>> Again, how? Is there a way to force that an external user of my lib can
>>> not use my internal data/methods/classes, unless he uses odd compiler
>>> hacks?
>>
>> I never understood how mainstream OO languages expect the designer of a 
>> class to know in advance that an attribute should be hidden or 
>> unnecessary to its subclasses by being declared "private" instead of 
>> "protected".
>
> The problem is, that the classic private/protected/public visibility
> tags try to solve multiple problems.
>
> Private: Ok, that's all that's really only for the implementation.
> public: Well, that's all for my "customers". Hmm. What if I've got two
> kinds of customers? Say a customer like in bank customer, and second
> customer that plays the role of the bank employee? oops.

C++ has 'friend' for that:

http://www.cplusplus.com/doc/tutorial/tut4-3.html

tom

-- 
REMOVE AND DESTROY
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientificmini-survey

2005-07-02 Thread Tom Anderson
On Fri, 1 Jul 2005, Ivan Van Laningham wrote:

> Personally, I find that Lisp & its derivatives put your head in a very 
> weird place.  Even weirder than PostScript/Forth/RPN, when you come 
> right down to it.

+1 QOTW!

tom

-- 
REMOVE AND DESTROY
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map vs. list-comprehension

2005-07-02 Thread Tom Anderson
On Fri, 1 Jul 2005, Sion Arrowsmith wrote:

> Tom Anderson  <[EMAIL PROTECTED]> wrote:
>> On Thu, 30 Jun 2005, Roy Smith wrote:
>>
>>> Even some of the relatively recent library enhancements have been kind 
>>> of complicated.  The logging module, for example, seems way over the 
>>> top.
>>
>> Exactly the same thing happened with Java.
>
> I was under the impression that Python's logging module (like unittest) 
> was based on a common Java one, and it's complexity could be blamed on 
> that.

That would explain it. Who was responsible for this crime? I say we shoot 
them and burn the bodies.

>> if you look at the libraries that were in 1.1, they're very clean and 
>> simple (perhaps with the exception of AWT). 1.2 added a load of stuff 
>> that was much less well-designed (with the notable exception of the 
>> collections stuff, which is beautiful)
>
> There are very many adjectives I could (and have) used to describe the 
> Collection framework. "Beautiful" is not among them. I think the closest 
> I could manage is "baroque".

Oh, i don't think it's really that bad. For java.

tom

-- 
REMOVE AND DESTROY
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Modules for inclusion in standard library?

2005-07-02 Thread Tom Anderson
On Fri, 1 Jul 2005, Scott David Daniels wrote:

> Daniel Dittmar wrote:
>> Rocco Moretti wrote:
>>
>>> Except that (please correct me if I'm wrong) there is somewhat of a 
>>> policy for not including interface code for third party programs 
>>> which are not part of the operating system. (I.e. the modules in the 
>>> standard libary should all be usable for anyone with a default OS + 
>>> Python install.)
>> 
>> There seems to be a great reluctance by the Python developers to add 
>> modules of the expat kind, as this means responsibilities for 
>> additional source modules. There's also the problem with incompatible 
>> licenses, integrating a second configure, deciding when to update to 
>> the latest version of the library etc.
>
> If you haven't noticed, the Python code has a substantial body of unit
> tests.  Arranging the tests to be easily runnable for all developers
> is going to be tough for "third party programs."

The tests for interface modules would have to use mock objects on the back 
end. This is pretty standard practice, isn't it?
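
i.e. something along these lines (a sketch, with made-up names):

class MockConnection:
    "Stands in for the third-party program's connection in the unit tests."
    def __init__(self, canned_reply):
        self.sent = []
        self.canned_reply = canned_reply
    def send(self, data):
        self.sent.append(data)
    def receive(self):
        return self.canned_reply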

> Making the interfaces work for differing versions of the 3PPs as the 
> third parties themselves change their interfaces (see fun with Tcl/Tk 
> versions for example), and building testbeds to test to all of those 
> differing versions, would cause a nightmare that would make a knight of 
> Ni scream.

But given that a number of such modules have in fact been written, 
along with tests, why not add them to the standard distribution?

tom

-- 
REMOVE AND DESTROY
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A brief question.

2005-07-02 Thread Tom Anderson
On Sat, 2 Jul 2005, Tom Brown wrote:

> On Saturday 02 July 2005 10:55, Nathan Pinno wrote:
>
>> Brief question for anyone who knows the answer, because I don't. Is 
>> there anyway to make Python calculate square roots?
>
> from math import sqrt

That's one way. I'd do:

root = value ** 0.5

Does that mean we can expect Guido to drop math.sqrt in py3k? :)

tom

-- 
That's no moon!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: math.nroot [was Re: A brief question.]

2005-07-03 Thread Tom Anderson
On Sun, 3 Jul 2005, Steven D'Aprano wrote:

> On Sun, 03 Jul 2005 02:22:23 +0200, Fredrik Johansson wrote:
>
>> On 7/3/05, Tom Anderson <[EMAIL PROTECTED]> wrote:
>>> That's one way. I'd do:
>>>
>>> root = value ** 0.5
>>>
>>> Does that mean we can expect Guido to drop math.sqrt in py3k? :)
>>
>> I'd rather like to see a well implemented math.nthroot. 64**(1/3.0)
>> gives 3.9996, and this error could be avoided.
>
> py> math.exp(math.log(64)/3.0)
> 4.0
>
> Success!!!

Eeenteresting. I have no idea why this works. Given that math.log is 
always going to be approximate for numbers which aren't rational powers of 
e (which, since e is transcendental, is all rational numbers, and 
therefore all python floats, isn't it?), i'd expect to get the same 
roundoff errors here as with exponentiation. Is it just that the errors 
are sufficiently smaller that it looks exact?

> Note how much simpler this would be if we could guarantee proper 
> infinities and NaNs in the code. We could turn a 23-line block to a 
> one-liner.

YES! This is something that winds me up no end; as far as i can tell, 
there is no clean programmatic way to make an inf or a NaN; in code i 
write which cares about such things, i have to start:

inf = 1e300 ** 1e300
nan = inf - inf

Every bloody time. I'm going to be buggered if python ever rolls out some 
sort of bigfloat support.

And then god forbid i should actually want to test if a number is NaN, 
since, bizarrely, (x == nan) is true for every x; instead, i have to 
write:

def isnan(x):
    return (x == 0.0) and (x == 1.0)

The IEEE spec actually says that (x == nan) should be *false* for every x, 
including nan. I'm not sure if this is more or less stupid than what 
python does!
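
For what it's worth, on builds where the C library plays along, you can 
get at these a bit more directly - though given the comparison weirdness 
above, mileage clearly varies by platform:

inf = float("inf")    # works where the underlying C library understands it
nan = float("nan")

def isnan(x):
    # relies on proper IEEE comparison: NaN is the only value unequal to itself
    return x != x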

And while i'm ranting, how come these expressions aren't the same:

1e300 * 1e300
1e300 ** 2

And finally, does Guido know something about arithmetic that i don't, or 
is this expression:

-1.0 ** 0.5

Evaluated wrongly?
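
(Having squinted at it a bit more, i suspect the answer is just 
precedence: ** binds tighter than unary minus, so this parses as 
-(1.0 ** 0.5), which is a perfectly reasonable -1.0:)

>>> -1.0 ** 0.5
-1.0
>>> -(1.0 ** 0.5)
-1.0

whereas (-1.0) ** 0.5, with the parentheses, really does raise a 
ValueError.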

tom

-- 
Please! Undo clips before opening handle.
-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   3   >