Re: float("nan") in set or as key

2011-06-03 Thread Carl Banks
On Wednesday, June 1, 2011 5:53:26 PM UTC-7, Steven D'Aprano wrote:
> On Tue, 31 May 2011 19:45:01 -0700, Carl Banks wrote:
> 
> > On Sunday, May 29, 2011 8:59:49 PM UTC-7, Steven D'Aprano wrote:
> >> On Sun, 29 May 2011 17:55:22 -0700, Carl Banks wrote:
> >> 
> >> > Floating point arithmetic evolved more or less on languages like
> >> > Fortran where things like exceptions were unheard of,
> >> 
> >> I'm afraid that you are completely mistaken.
> >> 
> >> Fortran IV had support for floating point traps, which are "things like
> >> exceptions". That's as far back as 1966. I'd be shocked if earlier
> >> Fortrans didn't also have support for traps.
> >> 
> >> http://www.bitsavers.org/pdf/ibm/7040/C28-6806-1_7040ftnMathSubrs.pdf
> > 
> > Fine, it wasn't "unheard of".  I'm pretty sure the existence of a few
> > high end compiler/hardware combinations that supported traps doesn't
> > invalidate my basic point.
> 
> On the contrary, it blows it out of the water and stomps its corpse into 
> a stain on the ground.

Really?  I am claiming that, even if everyone and their mother thought 
exceptions were the best thing ever, NaN would have been added to IEEE anyway 
because most hardware didn't support exceptions.  Therefore the fact that NaN 
is in IEEE is not any evidence that NaN is a good idea.

You are saying that the existence of one early system that supported exceptions 
is not merely an argument against that claim, but blows it out of the water?  Your 
logic sucks, then.

If you want to go off arguing that there were good reasons aside from backwards 
compatibility for adding NaN, be my guest.  Just don't go around saying, "Its 
in IEEE there 4 its a good idear LOL".  Lots of standards have all kinds of bad 
ideas in them for the sake of backwards compatibility, and when someone goes 
around claiming that something is a good idea simply because some standard 
includes it, it is the first sign that they're clueless about what 
standardization actually is.


> NANs weren't invented as an alternative for 
> exceptions, but because exceptions are usually the WRONG THING in serious 
> numeric work.
> 
> Note the "usually". For those times where you do want to interrupt a 
> calculation just because of an invalid operation, the standard allows you 
> to set a trap and raise an exception.

I don't want to get into an argument over best practices in serious numerical 
programming, so let's just agree with this point for argument's sake.

Here's the problem: Python is not for serious numerical programming.  Yeah, 
it's a really good language for calling other languages to do numerical 
programming, but it's not good for doing serious numerical programming itself.  
Anyone with some theoretical problem where NaN is a good idea should already be 
using modules or separate programs written in C or Fortran.

Casual and lightweight numerical work (which Python is good at) is not a wholly 
separate problem domain where the typical rules ("Errors should never pass 
silently") should be swept aside.


[snip]
> You'll note that, out of the box, numpy generates NANs:
> 
> >>> import numpy
> >>> x = numpy.array([float(x) for x in range(5)])
> >>> x/x
> Warning: invalid value encountered in divide
> array([ nan,   1.,   1.,   1.,   1.])

Steven, seriously I don't know what's going through your head.  I'm saying 
strict adherence to IEEE is not the best idea, and you cite the fact that a 
library tries to strictly adhere to IEEE as evidence that strictly adhering to 
IEEE is a good idea.  Beg the question much?


> The IEEE standard supports both use-cases: those who want exceptions to 
> bail out early, and those who want NANs so the calculation can continue. 
> This is a good thing. Failing to support the standard is a bad thing. 
> Despite your opinion, it is anything but obsolete.

There are all kinds of good reasons to go against standards.  "Failing to 
support the standard is a bad thing" are the words of a fool.  A wise person 
considers the cost of breaking the standard versus the benefit gained.

It's clear that IEEE's NaN handling is woefully out of place in the philosophy 
of Python, which tries to be newbie-friendly and robust to errors; and Python 
has no real business trying to perform serious numerical work where 
(ostensibly) NaNs might find a use.  Therefore, the cost of breaking the standard 
is small, but the benefit significant, so Python would be very wise to break 
with IEEE in the handling of NaNs.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: GIL in alternative implementations

2011-06-07 Thread Carl Banks
On Monday, June 6, 2011 9:03:55 PM UTC-7, Gabriel Genellina wrote:
> En Sat, 28 May 2011 14:05:16 -0300, Steven D'Aprano  
>  escribió:
> 
> > On Sat, 28 May 2011 09:39:08 -0700, John Nagle wrote:
> >
> >> Python allows patching code while the code is executing.
> >
> > Can you give an example of what you mean by this?
> >
> > If I have a function:
> >
> >
> > def f(a, b):
> > c = a + b
> > d = c*3
> > return "hello world"*d
> >
> >
> > how would I patch this function while it is executing?
> 
> I think John Nagle was thinking about rebinding names:
> 
> 
> def f(self, a, b):
>while b>0:
>  b = g(b)
>  c = a + b
>  d = self.h(c*3)
>return "hello world"*d
> 
> both g and self.h may change its meaning from one iteration to the next,  
> so a complete name lookup is required at each iteration. This is very  
> useful sometimes, but affects performance a lot.

Its main effect on performance is that it prevents an optimizer from inlining a 
function call (which is a good chunk of the payoff you get in languages that can 
do that).

I'm not sure where he gets the idea that this has any impact on concurrency, 
though.
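
For concreteness, here's a made-up sketch of the kind of rebinding being 
described; every pass through the loop has to look g up again, because it may 
have been rebound in the meantime:

def g(b):
    return b - 1

def f(a, b):
    global g
    total = 0
    while b > 0:
        b = g(b)                  # this lookup can resolve differently each time
        total += a + b
        g = lambda x: x - 2       # rebind g mid-loop; the next iteration sees this
    return total

print f(1, 5)    # 9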


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-09 Thread Carl Banks
On Thursday, June 9, 2011 12:13:06 AM UTC-7, Eric Snow wrote:
> On Thu, Jun 9, 2011 at 12:37 AM, Ben Finney  wrote:
> > So, it's even possible to do what you ask without decorators at all:
> >
> >    class Foo(object):
> >        def frob(self):
> >            """ Frobnicate thyself. """
> >
> >    class Bar(Foo):
> >        def frob(self):
> >            pass
> >        frob.__doc__ = Foo.frob.__doc__
> >
> > Not very elegant, and involving rather too much repetition; but not
> > difficult.
> >
> 
> Yeah, definitely you can do it directly for each case.  However, the
> inelegance, repetition, and immodularity are exactly why I am pursuing
> a solution.  :)  (I included a link in the original message to
> examples of how you can already do it with metaclasses and class
> decorators too.)
> 
> I'm just looking for a way to do it with decorators in the class body
> without using metaclasses or class decorators.

The tricky part is that, inside the class body (where decorators are being 
evaluated) the class object doesn't exist yet, so the method decorator has no 
way to infer what the base classes are at that point.  A class decorator or 
metaclass can operate after the class object is made, but a method decorator 
can't.

The best you could probably do with a method decorator is something like this:

def inherit_docstring(base):
    def set_docstring(f):
        f.__doc__ = getattr(base, f.func_name).__doc__
        return f
    return set_docstring

where you have to repeat the base class every time:

class Bar(Foo):
    @inherit_docstring(Foo)
    def somefunction(self):
        pass


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-09 Thread Carl Banks
On Thursday, June 9, 2011 3:27:36 PM UTC-7, Gregory Ewing wrote:
> IMO, it shouldn't be necessary to explicitly copy docstrings
> around like this in the first place. Either it should happen
> automatically, or help() should be smart enough to look up
> the inheritance hierarchy when given a method that doesn't
> have a docstring of its own.

Presumably, the reason you are overriding a method in a subclass is to change 
its behavior; I'd expect an inherited docstring to be inaccurate more often 
than not.  So I'd be -1 on automatically inheriting them.

However, I'd be +1 easily on a little help from the language to explicitly 
request to inherit the docstring.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-09 Thread Carl Banks
On Thursday, June 9, 2011 6:42:44 PM UTC-7, Ben Finney wrote:
> Carl Banks 
>  writes:
> 
> > Presumably, the reason you are overriding a method in a subclass is to
> > change its behavior; I'd expect an inherited docstring to be
> > inaccurate more often than not.
> 
> In which case the onus is on the programmer implementing different
> behaviour to also override the docstring.

Totally disagree.  The programmer should never be under the onus to correct 
mistakes made by the language.  "In the face of ambiguity, refuse the 
temptation to guess."

When the language tries to guess what the programmer wants, you get 
monstrosities like Perl.  Don't want to go there.  


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-09 Thread Carl Banks
On Thursday, June 9, 2011 7:37:19 PM UTC-7, Eric Snow wrote:
> When I write ABCs to capture an interface, I usually put the
> documentation in the docstrings there.  Then when I implement I want
> to inherit the docstrings.  Implicit docstring inheritance for
> abstract base classes would meet my needs. 

Do all the subclasses do exactly the same thing?  What's the use of a docstring 
if it doesn't document what the function does?


import random

class Shape(object):
    def draw(self):
        "Draw a shape"
        raise NotImplementedError

class Triangle(Shape):
    def draw(self):
        print "Triangle"

class Square(Shape):
    def draw(self):
        print "Square"

x = random.choice([Triangle(), Square()])
print x.draw.__doc__  # prints "Draw a shape"


Quick, what shape is x.draw() going to draw?  Shouldn't your docstring say what 
the method is going to do?

So, I'm sorry, but I don't see this being sufficient for your use case for ABCs.


> I'm just not clear on the
> impact this would have for the other use cases of docstrings.

Whenever somebody overrides a method to do something different, the inherited 
docstring will be insufficient (as in your ABC example) or wrong.  This, I 
would say, is the case most of the time when overriding a base class method.  
When this happens, the language is committing an error.

Put it this way: if Python doesn't automatically inherit docstrings, the worst 
that can happen is missing information.  If Python does inherit docstrings, it 
can lead to incorrect information.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-10 Thread Carl Banks
On Thursday, June 9, 2011 10:18:34 PM UTC-7, Ben Finney wrote:

[snip example where programmer is expected to consult class docstring to infer 
what a method does]

> There's nothing wrong with the docstring for a method referring to the
> context within which the method is defined.
> 
> > Whenever somebody overrides a method to do something different, the
> > inherited docstring will be insufficient (as in your ABC example) or
> > wrong.
> 
> I hope the above demonstrates that your assertion is untrue. Every
> single method on a class doesn't need to specify the full context; a
> docstring that requires the reader to know what class the method belongs
> to is fine.

It does not.  A docstring that requires the reader to figure that out is a poor 
docstring.

There is nothing wrong, as you say, with incomplete documentation that doesn't say 
what the function actually does.  There's nothing wrong with omitting the 
docstring entirely for that matter.  However, the question here is not whether 
a programmer is within their rights to use poor docstrings, but whether the language 
would go out of its way to support them.  It should not.

There is one thing that is very wrong to do with a docstring: provide incorrect 
or misleading information.  So, despite having brought the point up myself, I 
am going to say the point is moot.  Even if it is absolutely desirable for a 
language to go out of its way to support incomplete docstrings, part of that 
bargain is that the language will go out of its way to support flat-out wrong 
docstrings, and that trumps any ostensible benefit.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-10 Thread Carl Banks
On Friday, June 10, 2011 2:51:20 AM UTC-7, Steven D'Aprano wrote:
> On Thu, 09 Jun 2011 20:36:53 -0700, Carl Banks wrote:
> > Put it this way: if Python doesn't automatically inherit docstrings, the
> > worst that can happen is missing information.  If Python does inherit
> > docstrings, it can lead to incorrect information.
> 
> This is no different from inheriting any other attribute. If your class 
> inherits "attribute", you might get an invalid value unless you take 
> steps to ensure it is a valid value. This failure mode doesn't cause us 
> to prohibit inheritance of attributes.

Ridiculous.  The docstring is an attribute of the function, not the class, 
which makes it very different from any other attribute.  Consider this:


class A(object):
    foo = SomeClass()


class B(A):
    foo = SomeOtherUnrelatedClass()


Would you have B.foo "inherit" all the attributes of A.foo that it doesn't 
define itself?  That's the analogous case to inheriting docstrings.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to inherit docstrings?

2011-06-13 Thread Carl Banks
On Friday, June 10, 2011 7:30:06 PM UTC-7, Steven D'Aprano wrote:
> Carl, I'm not exactly sure what your opposition is about here. Others 
> have already given real-world use cases for where inheriting docstrings 
> would be useful and valuable. Do you think that they are wrong? If so, 
> you should explain why their use-case is invalid and what solution they 
> should use.

I don't have any issue with inheriting docstrings explicitly.  Elsewhere in 
this thread I said I was +1 on the language helping to simplify this.  What I 
am opposed to is automatically inheriting the docstrings.

I do think people are overstating the uses where inherited methods would share 
the same docstring, but that's beside the point.  Overstated or not, one 
cannot deny that the base method's docstring is frequently unacceptable for the 
derived method, and my opposition to automatic inheritance is because in those 
cases it will lead to incorrect docstrings, and no other reason.

> If you fear that such docstring inheritance will become the default, 
> leading to a flood of inappropriate documentation, then I think we all 
> agree that this would be a bad thing.

That is exactly what I fear, and you are wrong that "we all agree that this 
would be a bad thing".  Several people in this thread are arguing that 
inheriting docstrings by default is the right thing, and that would lead to 
heaps of inappropriate documentation.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: writable iterators?

2011-06-22 Thread Carl Banks
On Wednesday, June 22, 2011 4:10:39 PM UTC-7, Neal Becker wrote:
> AFAIK, the above is the only python idiom that allows iteration over a 
> sequence 
> such that you can write to the sequence.  And THAT is the problem.  In many 
> cases, indexing is much less efficient than iteration.

Well, if your program is such that you can notice a difference between indexing 
and iteration, you probably have better things to worry about.  But whatever.  
You can get the effect you're asking for like this:


class IteratorByProxy(object):
    def __init__(self, iterable):
        self.set(iterable)
    def __iter__(self):
        return self
    def next(self):
        return self.current_iter.next()
    def set(self, iterable):
        self.current_iter = iter(iterable)

s = IteratorByProxy(xrange(10))
for i in s:
    print i
    if i == 6:
        s.set(xrange(15, 20))


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested/Sub Extensions in Python

2011-07-01 Thread Carl Banks
On Friday, July 1, 2011 1:02:15 PM UTC-7, H Linux wrote:
> Once I try to nest this, I cannot get the module to load anymore:
> >import smt.bar
> Traceback (most recent call last):
>   File "", line 1, in 
> ImportError: No module named bar

[snip]

> PyMODINIT_FUNC
> initbar(void)
> {
>   Py_InitModule("smt.bar", bar_methods);
> }

This should be: Py_InitModule("bar", bar_methods);

That's probably it; other than that, it looks like you did everything right.  
What does the installed file layout look like after running distutils setup?


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested/Sub Extensions in Python

2011-07-02 Thread Carl Banks
On Saturday, July 2, 2011 6:35:19 AM UTC-7, H Linux wrote:
> On Jul 2, 2:28 am, Carl Banks 
>  wrote:
> > On Friday, July 1, 2011 1:02:15 PM UTC-7, H Linux wrote:
> > > Once I try to nest this, I cannot get the module to load anymore:
> > > >import smt.bar
> > > Traceback (most recent call last):
> > >   File "", line 1, in 
> > > ImportError: No module named bar
> >
> > [snip]
> >
> > > PyMODINIT_FUNC
> > > initbar(void)
> > > {
> > >    Py_InitModule("smt.bar", bar_methods);
> > > }
> >
> > This should be: Py_InitModule("bar", bar_methods);
> > That's probably it; other than that, it looks like you did everything right.
> Thanks for your help, but I actually tried both ways. This does not
> seem to be the problem, as it fails both ways with identical error
> message.

Correct, I misspoke.  The problem would be if the initbar function name was 
misspelled.


> > What does the installed file layout look like after running distutils setup?
> Tree output is:
> /usr/local/lib/python2.6/dist-packages/
> ├── foo.so
> ├── smt
> │   ├── bar.so
> │   ├── __init__.py
> │   └── __init__.pyc
> └── smt-0.1.egg-info
> 
> Just in case anyone is willing to have a look, here is a link to the
> complete module as built with:
> python setup.py sdist:
> https://docs.google.com/leaf?id=0Byt62fSE5VC5NTgxOTFkYzQtNzI3NC00OTUzLWI1NzMtNmJjN2E0ZTViZTJi&hl=en_US
> 
> If anyone has any other ideas how to get it to work, thanks in
> advance...

I got and built the package, and it imported smt.bar just fine for me.

So my advice would be to rename all the modules.  My guess is that there is a 
conflict for smt and Python is importing some other module or package.  Is 
there a file called smt.py in your working directory?  Try doing this:

import smt
print smt.__file__

And see if it prints at the location where your smt module is installed.  If 
not, you have a conflict.

And if that is the problem, in the future be more careful to keep your module 
namespace clean.  Choose good, distinct names for modules and packages to 
lessen the risk of conflict.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does hashlib support a file mode?

2011-07-06 Thread Carl Banks
On Wednesday, July 6, 2011 12:07:56 PM UTC-7, Phlip wrote:
> If I call m = md5() twice, I expect two objects.
> 
> I am now aware that Python bends the definition of "call" based on
> where the line occurs. Principle of least surprise.

Phlip:

We already know about this violation of the least surprise principle; most of 
us acknowledge it as a small blip in an otherwise straightforward and clean 
language.  (Incidentally, fixing it would create different surprises, but 
probably much less common ones.)

We've helped you with your problem, but you risk alienating those who helped 
you when you badmouth the whole language on account of this one thing, and you 
might not get such prompt help next time.  So try to be nice.

You are wrong about Python bending the definition of "call", though.  
Surprising though it may be, the Python language is very explicit that default 
argument expressions are evaluated only once, when the function is created, *not* 
each time it is called.
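
For anyone following along, here's a minimal sketch of that rule in action 
(using hashlib, since that's where this came up; the function is made up):

import hashlib

def digest(data, m=hashlib.md5()):    # the md5 object is created once, at def time
    m.update(data)
    return m.hexdigest()

print digest("spam")    # digest of "spam"
print digest("spam")    # different! the *same* md5 object now holds "spamspam"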


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What makes functions special?

2011-07-09 Thread Carl Banks
On Saturday, July 9, 2011 2:28:58 PM UTC-7, Eric Snow wrote:
> A tracker issue [1] recently got me thinking about what makes
> functions special.  The discussion there was regarding the distinction
> between compile time (generation of .pyc files for modules and
> execution of code blocks), [function] definition time, and [function]
> execution time.  Definition time actually happens during compile time,

Nope.  Compile time and definition time are always distinct.


> but it has its own label to mark the contrast with execution time.  So
> why do functions get this special treatment?

They don't really.


[snip]
> Am I wrong about the optimization expectation?

As best as I can tell, you are asking (in a very opaque way) why the Python 
compiler even bothers to create code objects, rather than just to create a 
function object outright, because it doesn't (you think) do that for any other 
kind of object.

Two answers (one general, one specific):

1. You're looking for a pattern where it doesn't make any sense for there to be 
one.  The simple truth of the matter is different syntaxes do different things, 
and there isn't anything more to it.  A lambda expression or def statement does 
one thing; a different syntax, such as an integer constant, does another thing. 
 Neither one is treated "specially"; they're just different.

Consider another example: tuple syntax versus list syntax.  Python will often 
build the tuple at compile time, but it never builds a list at compile time.  
Neither one is "special"; it's just that tuple syntax does one thing, list 
syntax does a different thing.
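
You can see the difference with the dis module (the exact bytecode varies by 
CPython version, but the shape is the same):

import dis

dis.dis(lambda: (1, 2, 3))    # a single LOAD_CONST of the whole tuple
dis.dis(lambda: [1, 2, 3])    # three LOAD_CONSTs followed by BUILD_LIST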

2. Now that we've dispensed with the idea that Python is treating functions 
specially, let's answer your specific question.  It's not special, but still, 
why the code object?

The reason, simply, is that code objects are used for more than just functions. 
 Code objects are also used in modules, and in eval and exec statements, and 
there's one for each statement at the command line.  Code objects are also used 
directly by the interpreter when executing byte code.  A function object is 
only one of several "interfaces" to a code object.
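
Here's a small sketch of one code object viewed through two of those interfaces 
(the wrapping via types.FunctionType is just for illustration):

import types

code = compile("print 'hello from a code object'", "<string>", "exec")
exec code                                  # run the code object directly

f = types.FunctionType(code, globals())    # wrap the same code object in a function
f()                                        # prints the same thing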

A minor reason is that code objects are constant (in fact, any object that is 
built at compile time must be a constant).  However, function objects are 
mutable.

I hope that helps clear things up.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function docstring as a local variable

2011-07-10 Thread Carl Banks
On Sunday, July 10, 2011 3:50:18 PM UTC-7, Tim Johnson wrote:
>   Here's a related question:
>   I can get the docstring for an imported module:
>   >>> import tmpl as foo
>   >>> print(foo.__doc__)
>   Python templating features
> 
>Author - tim at akwebsoft dot com
> 
>  ## Is it possible to get the module docstring
>  ## from the module itself?


print __doc__


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function docstring as a local variable

2011-07-11 Thread Carl Banks
On Sunday, July 10, 2011 4:06:27 PM UTC-7, Corey Richardson wrote:
> Excerpts from Carl Banks's message of Sun Jul 10 18:59:02 -0400 2011:
> > print __doc__
> > 
> 
> Python 2.7.1 (r271:86832, Jul  8 2011, 22:48:46) 
> [GCC 4.4.5] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> def foo():
> ... "Docstring"
> ... print __doc__
> ... 
> >>> foo()
> None
> >>> 
> 
> What does yours do?

It prints the module docstring, same as your example does.  You did realize 
that was the question I was answering, right?


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "Python Wizard," with apologies to The Who

2011-07-12 Thread Carl Banks
On Tuesday, July 12, 2011 9:40:23 AM UTC-7, John Keisling wrote:
> After too much time coding Python scripts and reading Mark Lutz's
> Python books, I was inspired to write the following lyrics. For those
> too young to remember, the tune is that of "Pinball Wizard," by The
> Who. May it bring you as much joy as it brought me!
> 
> 
> I cut my teeth on BASIC
> At scripting I'm no pawn
> From C++ to Java
> My code goes on and on
> But I ain't seen nothing like this
> In any place I've gone
> That modeling and sim guy
> Sure codes some mean Python!


That's pretty funny.  I knew what it would be even when I saw the cut-off 
subject line, and I am too young to remember it.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Functional style programming in python: what will you talk about if you have an hour on this topic?

2011-07-14 Thread Carl Banks
On Wednesday, July 13, 2011 5:39:16 AM UTC-7, Anthony Kong wrote:
[snip]
> I think I will go through the following items:
> 
> itertools module
> functools module
> concept of currying ('partial')
> 
> 
> I would therefore want to ask your input e.g.
> 
> Is there any good example to illustrate the concept? 
> What is the most important features you think I should cover?
> What will happen if you overdo it?

Java is easily the worst language I know of for support of functional programming 
(unless they've added delegates or some other tacked-on type like that), so my 
advice would be to keep it light, for two reasons:

1. It won't take a lot to impress them
2. Too much will make them roll their eyes

Thinking about it, one of the problems with demonstrating functional features 
is that it's not obvious how those features can simplify things.  To get the 
benefit, you have to take a step back and redo the approach somewhat.

Therefore, I'd recommend introducing these features as part of a demo of how a 
task in Python can be solved much more concisely than in Java.  It's kind of an 
art to find good examples, though.  Off the top of my head, I can think of 
using the functools module to help with logging or to apply patches, whereas in 
Java they'd have to resort to a code weaver or lots of boilerplate.
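
For example, something along these lines (the logging scheme here is invented 
purely for illustration):

import functools
import sys

def log(stream, level, message):
    stream.write("%s: %s\n" % (level, message))

# partial() "bakes in" the first arguments, giving preconfigured helpers
# without any wrapper classes or boilerplate
warn = functools.partial(log, sys.stderr, "WARNING")
info = functools.partial(log, sys.stdout, "INFO")

info("starting up")
warn("disk space low")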


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list(), tuple() should not place at "Built-in functions" in documentation

2011-07-15 Thread Carl Banks
On Thursday, July 14, 2011 8:00:16 PM UTC-7, Terry Reedy wrote:
> I once proposed, I believe on the tracker, that 'built-in functions' be 
> expanded to 'built-in function and classes'. That was rejected on the 
> basis that people would then expect the full class documentation that is 
> in the 'built-in types' section (which could now be called the 
> built-in classes section).

Built-in functions and constructors?


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Aw: python.org back up ?(was Re: python.org is down?)

2011-07-25 Thread Carl Banks
On Sunday, July 24, 2011 11:42:45 AM UTC-7, David Zerrenner wrote:
> *pew* I can't live without the docs, that really made my day now.

If you can't live without the docs, you should consider downloading them and 
accessing them locally.  That'll let you work whenever python.org goes down, 
and will help keep the load off the server when it's up.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list comprehension to do os.path.split_all ?

2011-07-29 Thread Carl Banks
On Thursday, July 28, 2011 2:31:43 PM UTC-7, Ian wrote:
> On Thu, Jul 28, 2011 at 3:15 PM, Emile van Sebille  wrote:
> > On 7/28/2011 1:18 PM gry said...
> >>
> >> [python 2.7] I have a (linux) pathname that I'd like to split
> >> completely into a list of components, e.g.:
> >>    '/home/gyoung/hacks/pathhack/foo.py'  -->   ['home', 'gyoung',
> >> 'hacks', 'pathhack', 'foo.py']
> >>
> >> os.path.split gives me a tuple of dirname,basename, but there's no
> >> os.path.split_all function.
> >>
> >
> > Why not just split?
> >
> > '/home/gyoung/hacks/pathhack/foo.py'.split(os.sep)
> 
> Using os.sep doesn't make it cross-platform. On Windows:
> 
> >>> os.path.split(r'C:\windows')
> ('C:\\', 'windows')
> >>> os.path.split(r'C:/windows')
> ('C:/', 'windows')
> >>> r'C:\windows'.split(os.sep)
> ['C:', 'windows']
> >>> r'C:/windows'.split(os.sep)
> ['C:/windows']

It's not even foolproof on Unix.

'/home//h1122/bin///ghi/'.split('/')

['', 'home', '', 'h1122', 'bin', '', '', 'ghi', '']

The whole point of the os.path functions is to take care of whatever oddities 
there are in the path system.  When you use string manipulation to manipulate 
paths, you bypass all of that and leave yourself open to those oddities, and 
then you find your applications break when a user enters a doubled slash.

So stick to os.path.
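
If you really want a split_all, here's one hedged sketch that stays inside 
os.path (how it reports the root and relative paths is a judgment call, not a 
spec):

import os.path

def split_all(path):
    parts = []
    while True:
        head, tail = os.path.split(path)
        if tail:
            parts.append(tail)
        if head == path:      # reached the root ('/', a drive, or '')
            if head:
                parts.append(head)
            break
        path = head
    parts.reverse()
    return parts

print split_all('/home//h1122/bin///ghi/')
# ['/', 'home', 'h1122', 'bin', 'ghi'] -- the doubled slashes are absorbed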


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: thread and process

2011-08-13 Thread Carl Banks
On Saturday, August 13, 2011 2:09:55 AM UTC-7, 守株待兔 wrote:
> please see my code:
> import os
> import  threading
> print  threading.currentThread()  
> print "i am parent ",os.getpid()
> ret  =  os.fork()
> print  "i am here",os.getpid()
> print  threading.currentThread()
> if  ret  ==  0:
>  print  threading.currentThread()
> else:
>     os.wait()
>     print  threading.currentThread()
>     
>     
> print "i am runing,who am i? 
> ",os.getpid(),threading.currentThread()
> 
> the output is:
> <_MainThread(MainThread, started -1216477504)>
> i am parent  13495
> i am here 13495
> <_MainThread(MainThread, started -1216477504)>
> i am here 13496
> <_MainThread(MainThread, started -1216477504)>
> <_MainThread(MainThread, started -1216477504)>
> i am runing,who am i?  13496 <_MainThread(MainThread, started 
> -1216477504)>
> <_MainThread(MainThread, started -1216477504)>
> i am runing,who am i?  13495 <_MainThread(MainThread, started 
> -1216477504)>
> it is so strange that  two  different  processes  use one  mainthread!!


They don't use one main thread; it's just that each process's main thread has 
the same name.  Which makes sense: when you fork a process all the data in the 
process has to remain valid in both parent and child, so any pointers would 
have to have the same value (and the -1216477504 happens to be the value of 
that pointer cast to an int).


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with regular expression in python

2011-08-19 Thread Carl Banks
On Friday, August 19, 2011 10:33:49 AM UTC-7, Matt Funk wrote:
> number = r"\d\.\d+e\+\d+"
> numbersequence = r"%s( %s){31}(.+)" % (number,number)
> instance_linetype_pattern = re.compile(numbersequence)
> 
> The results obtained are:
> results: 
> [(' 2.199000e+01', ' : (instance: 0)\t:\tsome description')]
> so this matches the last number plus the string at the end of the line, but 
> no 
> retaining the previous numbers.
> 
> Anyway, i think at this point i will go another route. Not sure where the 
> issues lies at this point.


I think the problem is that repeat counts don't actually repeat the groupings; 
they just repeat the matchings.  Take this expression:

r"(\w+\s*){2}"

This will match exactly two words separated by whitespace.  But the match 
result won't contain two groups; it'll only contain one group, and the value of 
that group will match only the very last thing repeated:

Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> m = re.match(r"(\w+\s*){2}","abc def")
>>> m.group(1)
'def'

So you see, the regular expression is doing what you think it is, but the way 
it forms groups is not.


Just a little advice (I know you've found a different method, and that's good, 
this is for the general reader).

The functions re.findall and re.finditer could have helped here; they find all 
the matches in a string and let you iterate through them.  (findall returns the 
strings matched, and finditer returns the sequence of match objects.)  You 
could have done something like this:

row = [ float(x) for x in re.findall(r'\d\.\d+e\+\d+', line) ]

And regexp matching is often overkill for a particular problem; this may be one of 
them.  line.split() could have been sufficient:

row = [ float(x) for x in line.split() ]

Of course, these solutions don't account for the case where you have lines, 
some of which aren't 32 floating-point numbers.  You need extra error handling 
for that, but you get the idea.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help on PyQt4 QProcess

2011-08-19 Thread Carl Banks
On Friday, August 19, 2011 12:55:40 PM UTC-7, Edgar Fuentes wrote:
> On Aug 19, 1:56 pm, Phil Thompson 
>  wrote:
> > On Fri, 19 Aug 2011 10:15:20 -0700 (PDT), Edgar Fuentes
> >  wrote:
> > > Dear friends,
> >
> > > I need execute an external program from a gui using PyQt4, to avoid
> > > that hang the main thread, i must connect the signal "finished(int)"
> > > of a QProcess to work properly.
> >
> > > for example, why this program don't work?
> >
> > >    from PyQt4.QtCore import QProcess
> > >    pro = QProcess() # create QProcess object
> > >    pro.connect(pro, SIGNAL('started()'), lambda
> > > x="started":print(x))        # connect
> > >    pro.connect(pro, SIGNAL("finished(int)"), lambda
> > > x="finished":print(x))
> > >    pro.start('python',['hello.py'])        # star hello.py program
> > > (contain print("hello world!"))
> > >    timeout = -1
> > >    pro.waitForFinished(timeout)
> > >    print(pro.readAllStandardOutput().data())
> >
> > > output:
> >
> > >    started
> > >    0
> > >    b'hello world!\n'
> >
> > > see that not emit the signal finished(int)
> >
> > Yes it is, and your lambda slot is printing "0" which is the return code
> > of the process.
> >
> > Phil
> 
> Ok, but the output should be:
> 
> started
> b'hello world!\n'
> finished
> 
> no?.
> 
> thanks Phil

Two issues.  First of all, your slot for the finished function does not have 
the correct prototype, and it's accidentally not throwing an exception because 
of your unnecessary use of default arguments.  Anyway, to fix that, try this:

pro.connect(pro, SIGNAL("finished(int)"), lambda v, x="finished":print(x))

Notice that it adds an argument to the lambda (v) that accepts the int argument 
of the signal.  If you don't have that argument there, the int argument goes 
into x, which is why Python prints 0 instead of "finished".

Second, processes run asynchronously, and because of line buffering, IO can 
arrive asynchronously too, so there's no guarantee in what order the output occurs.  
You might try calling the python subprocess with the '-u' switch to force 
unbuffered IO, which might be enough to force synchronous output (depending on 
how signal/slot and subprocess semantics are implemented).


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Run time default arguments

2011-08-27 Thread Carl Banks
On Thursday, August 25, 2011 1:54:35 PM UTC-7, ti...@thsu.org wrote:
> On Aug 25, 10:35 am, Arnaud Delobelle  wrote:
> > You're close to the usual idiom:
> >
> > def doSomething(debug=None):
> >     if debug is None:
> >         debug = defaults['debug']
> >     ...
> >
> > Note the use of 'is' rather than '=='
> > HTH
> 
> Hmm, from what you are saying, it seems like there's no elegant way to
> handle run time defaults for function arguments, meaning that I should
> probably write a sql-esc coalesce function to keep my code cleaner. I
> take it that most people who run into this situation do this?

I don't; it seems kind of superfluous when "if arg is None: arg = whatever" 
is just as easy to type and more straightforward to read.

I could see a function like coalesce being helpful if you have a list of 
several options to check, though.  Also, SQL doesn't give you a lot of 
flexibility, so coalesce is a lot more needed there.
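
If you do go that route, a sketch might look like this (the 'defaults' dict is 
just a stand-in for whatever runtime configuration you're using):

def coalesce(*args):
    """Return the first argument that is not None (or None if all are)."""
    for arg in args:
        if arg is not None:
            return arg
    return None

defaults = {'debug': False}

def do_something(debug=None):
    debug = coalesce(debug, defaults.get('debug'))
    print debug

do_something()        # False, taken from defaults
do_something(True)    # True, the explicit argument wins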

But for simple arguments in Python, I'd recommend sticking with "if arg is 
None: arg = whatever".


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why do closures do this?

2011-08-28 Thread Carl Banks
On Saturday, August 27, 2011 8:45:05 PM UTC-7, John O'Hagan wrote:
> Somewhat apropos of the recent "function principle" thread, I was recently 
> surprised by this:
> 
> funcs=[]
> for n in range(3):
> def f():
> return n
> funcs.append(f)
> 
> [i() for i in funcs]
> 
> The last expression, IMO surprisingly, is [2,2,2], not [0,1,2]. Google tells 
> me I'm not the only one surprised, but explains that it's because "n" in the 
> function "f" refers to whatever "n" is currently bound to, not what it was 
> bound to at definition time (if I've got that right), and that there are at 
> least two ways around it: 
> My question is, is this an inescapable consequence of using closures, or is 
> it by design, and if so, what are some examples of where this would be the 
> preferred behaviour?


It is the preferred behavior for the following case.

def foo():
    def printlocals():
        print a, b, c, d
    a = 1; b = 4; c = 5; d = 0.1
    printlocals()
    a = 2
    printlocals()

When seeing a nested function, there are strong expectations by most people 
that it will behave this way (not to mention it's a lot more useful).  It's 
only for the less common and much more advanced case of creating a closure in a 
loop that the other behavior would be preferred.
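
For the loop case, the usual explicit workaround is to freeze the current value 
with a default argument, e.g.:

funcs = []
for n in range(3):
    def f(n=n):        # the default is evaluated now, capturing the current n
        return n
    funcs.append(f)

print [i() for i in funcs]    # [0, 1, 2]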


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: fun with nested loops

2011-09-01 Thread Carl Banks
On Wednesday, August 31, 2011 8:51:45 AM UTC-7, Daniel wrote:
> Dear All,
> 
> I have some complicated loops of the following form
> 
> for c in configurations: # loop 1
> while nothing_bad_happened: # loop 2
> while step1_did_not_work: # loop 3
> for substeps in step1 # loop 4a
> # at this point, we may have to
> -leave loop 1
> -restart loop 4
> -skip a step in loop 4
> -continue on to loop 4b
> 
> while step2_did_not_work: # loop 4b
> for substeps in step2:
> # at this point, we may have to
> -leave loop 1
> -restart loop 2
> -restart loop 4b
> ...
> ...many more loops...
> 
> 
> I don't see any way to reduce these nested loops logically, they
> describe pretty well what the software has to do.
> This is a data acquisition application, so on ever line there is
> a lot of IO that might fail or make subsequent steps useless or
> require a
> retry.
> 
> Now every step could need to break out of any of the enclosing loops.


I feel your pain.  Every language, even Python, has cases where the trade-offs 
made in the language design make some legitimate task very difficult.  In such 
cases I typically throw out the guidebook and make use of whatever shameless 
Perlesque thing it takes to keep things manageable.

In your example you seem like you're trying to maintain some semblance of 
structure and good habit; I'd say it's probably no longer worth it.  Just store the 
level to break to in a variable, and after every loop check the variable and 
break if you need to break further.  Something like this, for example:

break_level = 99
while loop1:
    while loop2:
        while loop3:
            if some_condition:
                break_level = 1   # or 2, or 3, depending on how far out to go
                break
        if break_level < 3: break
        break_level = 99
    if break_level < 2: break
    break_level = 99


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Optparse buggy?

2011-09-01 Thread Carl Banks
On Thursday, September 1, 2011 7:16:13 PM UTC-7, Roy Smith wrote:
> In article ,
>  Terry Reedy  wrote:
> 
> > Do note "The optparse module is deprecated and will not be developed 
> > further; development will continue with the argparse module."
> 
> One of the unfortunate things about optparse and argparse is the names.  
> I can never remember which is the new one and which is the old one.  It 
> would have been a lot simpler if the new one had been named optparse2 
> (in the style of unittest2 and urllib2).

It's easy: "opt"parse parses only "opt"ions (-d and the like), whereas 
"arg"parse parses all "arg"uments.  argparse is the more recent version since 
it does more.  optparse2 would have been a bad name for something that parses 
more than options.

(In fact, although I have some minor philosophical disagreements with 
optparse's design decisions, the main reason I always recommended using 
argparse instead was that optparse didn't handle positional arguments.  
optparse has all these spiffy features with type checking and defaults, but it 
never occurred to the optparse developers that this stuff would be useful for 
positional arguments, too.  They just dropped the ball there.)
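
A quick sketch of the difference (invented arguments, purely for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")          # an option
parser.add_argument("count", type=int, nargs="?", default=1)   # a positional, with type and default

args = parser.parse_args(["--verbose", "3"])
print args.verbose, args.count    # True 3

args = parser.parse_args([])
print args.verbose, args.count    # False 1 -- the positional's default kicked in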


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sqlite3 with context manager

2011-09-03 Thread Carl Banks
On Friday, September 2, 2011 11:43:53 AM UTC-7, Tim Arnold wrote:
> Hi,
> I'm using the 'with' context manager for a sqlite3 connection:
> 
> with sqlite3.connect(my.database,timeout=10) as conn:
>  conn.execute('update config_build set datetime=?,result=?
> where id=?',
>(datetime.datetime.now(), success,
> self.b['id']))
> 
> my question is what happens if the update fails? Shouldn't it throw an
> exception?

If you look at the sqlite3 syntax documentation, you'll see it has a SQL 
extension that allows you to specify error semantics.  It looks something like 
this:

UPDATE OR IGNORE
UPDATE OR FAIL
UPDATE OR ROLLBACK

I'm not sure exactly how this interacts with pysqlite3, but using one of these 
might help it throw exceptions when you want it to.
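
An untested sketch, reusing the names from your snippet (the parameter values 
here are placeholders):

import sqlite3
import datetime

with sqlite3.connect("my.database", timeout=10) as conn:
    # "or fail" asks SQLite itself to abort the statement on a conflict
    conn.execute(
        "update or fail config_build set datetime=?, result=? where id=?",
        (datetime.datetime.now(), "ok", 1))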


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why doesn't threading.join() return a value?

2011-09-03 Thread Carl Banks
On Friday, September 2, 2011 11:01:17 AM UTC-7, Adam Skutt wrote:
> On Sep 2, 10:53 am, Roy Smith  wrote:
> > I have a function I want to run in a thread and return a value.  It
> > seems like the most obvious way to do this is to have my target
> > function return the value, the Thread object stash that someplace, and
> > return it as the return value for join().
> > > Yes, I know there's other ways for a thread to return values (pass the
> > target a queue, for example), but making the return value of the
> > target function available would have been the most convenient.  I'm
> > curious why threading wasn't implemented this way.
> 
> I assume it is because the underlying operating system APIs do not
> support it.

Nope.  This could easily be implemented by storing the return value in the 
Thread object.

It's not done that way probably because no one thought of doing it.
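
For instance, here's a sketch of a user-level version (not how the stdlib does 
it, just to show it's straightforward):

import threading

class ResultThread(threading.Thread):
    """A Thread whose join() returns the target's return value."""
    def __init__(self, target, args=(), kwargs=None):
        threading.Thread.__init__(self)
        self._target_func = target
        self._target_args = args
        self._target_kwargs = kwargs or {}
        self._result = None
    def run(self):
        self._result = self._target_func(*self._target_args, **self._target_kwargs)
    def join(self, timeout=None):
        threading.Thread.join(self, timeout)
        return self._result

t = ResultThread(target=lambda x: x * x, args=(7,))
t.start()
print t.join()    # 49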


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why doesn't threading.join() return a value?

2011-09-03 Thread Carl Banks
On Friday, September 2, 2011 11:53:43 AM UTC-7, Adam Skutt wrote:
> On Sep 2, 2:23 pm, Alain Ketterlin 
> wrote:
> > Sorry, you're wrong, at least for POSIX threads:
> >
> > void pthread_exit(void *value_ptr);
> > int pthread_join(pthread_t thread, void **value_ptr);
> >
> > pthread_exit can pass anything, and that value will be retrieved with
> > pthread_join.
> 
> No, it can only pass a void*, which isn't much better than passing an
> int.  Passing a void* is not equivalent to passing anything, not even
> in C.  Moreover, specific values are still reserved, like
> PTHREAD_CANCELLED. Yes, it was strictly inappropriate for me to say
> both return solely integers, but my error doesn't meaningful alter my
> description of the situation.  The interface provided by the
> underlying APIs is not especially usable for arbitrary data transfer.

I'm sorry, but your claim is flat out wrong.  It's very common in C programming 
to use a void* to give a programmer the ability to pass arbitrary data through some 
third-party code.

The Python API itself uses void* in this way in several different places.  For 
instance, take a look at the Capsule API 
(http://docs.python.org/c-api/capsule.html).  You'll notice it uses a void* to 
let a user pass in opaque data.  Another case is when declaring properties in 
C: it's common to define a single get or set function, and only vary some piece 
of data for the different properties.  The API provides a void* so that the 
extension writer can pass arbitrary data to the get and set functions.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes inheritance issue

2011-02-24 Thread Carl Banks
On Feb 23, 9:38 am, Steve  wrote:
> After looking at some metaclass examples it appears this information
> is readily available.  A metaclass gets a dictionary containing
> information about the parent class (or should, at least).

What examples did you look at?


> It seems
> like it must have this information in order to dynamically make
> decisions about how to create classes...  So, "bug" or not, shouldn't
> this just work?

No.  Information about parent class members is available if you dig
for it but it doesn't "just work".

A metaclass gets three pieces of information it uses when constructing
a class: the name of the class, a list of bases, and a dictionary
containing everything defined in the class's scope (and only the
class's scope, not the scope of any base classes).  Some, if not most,
metaclasses inspect and modify this dictionary before passing it to
the type constructor (type.__new__); inheritance hasn't even come into
play at that point.

A metaclass can look at the list of bases and try to extract
attributes from them, but that's not just working; that's digging.
(Needless to say, a lot of implementors don't go through the effort to
dig.)

> Is there something that prevents it from being
> implemented?  Would this break something?

As I said, it's inherently a chicken-and-egg problem.  You have a
situation where you want to inherit the information needed to create a
class, but inheritance doesn't come into play until the class is
created.

I guess you could eliminate the paradox by breaking down type
construction into steps (set up the inheritance relationships first,
then construct the type object, giving the metaclass time to get data
from the bases).

Some other language will have to try that, though.  Yes, it would break
things.  Not a lot of things, but there are cases where you don't want to
inherit.  I use the following pattern fairly often:


class KeepTrackOfSubtypesMetaclass(type):

    subtypes = {}

    def __new__(metatype, name, bases, class_dct):
        key = class_dct.get('key')
        self = type.__new__(metatype, name, bases, class_dct)
        if key is not None:
            metatype.subtypes[key] = self
        return self


Any instance of this metaclass that defines key in its scope will be
added to the dict of subtypes.  But I don't want a derived class to
overwrite its parent's entry in the subtype dict--it should define its
own key.
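
A quick usage sketch (hypothetical subclasses, just to show the behavior):

class Base(object):
    __metaclass__ = KeepTrackOfSubtypesMetaclass

class Foo(Base):
    key = 'foo'

class Derived(Foo):
    pass    # defines no key of its own, so Foo's entry is left alone

print KeepTrackOfSubtypesMetaclass.subtypes    # {'foo': <class '__main__.Foo'>}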


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python C Extensions

2011-02-24 Thread Carl Banks
On Feb 24, 8:46 am, "aken8...@yahoo.com"  wrote:
> Thank you very much, it worked.
> I thought the PyDict_SetItem should assume ownership
> of the passed object and decrease it's reference count (I do not know
> why).
>
> Does this also go for the Lists ? Should anything inserted into list
> also
> be DECRED-ed ?


The Python C API documentation has this information--if a function is
documented as borrowing a reference, then it behaves as you were
expecting (it doesn't increase the reference count).  If it's
documented as creating a new reference, it does increase the reference
count.

I don't know if there's a simple rule to know whether a function borrows or
creates a new reference; I've never noticed one.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking against NULL will be eliminated?

2011-03-02 Thread Carl Banks
On Mar 2, 5:51 am, Claudiu Popa  wrote:
> Hello Python-list,
>
> I  don't  know how to call it, but the following Python 3.2 code seems to 
> raise a
> FutureWarning.
>
> def func(root=None):
>     nonlocal arg
>     if root:
>        arg += 1
> The  warning is "FutureWarning: The behavior of this method will change
> in future versions.  Use specific 'len(elem)' or 'elem is not None' test 
> instead."
> Why is the reason for this idiom to be changed?

I'm guessing root is an ElementTree Element?

The reason for this is that some element tree functions will return
None if an element is not found, but an empty element will have a
boolean value of false because it acts like a container.  Some people
who use ElementTree don't always have this behavior in mind when they
are testing to see if an element was found, and will use "if element:"
when they need to be using "if element is not None:".
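
Here's a small made-up example of the pitfall:

import xml.etree.ElementTree as ET

root = ET.fromstring("<root><empty/></root>")
elem = root.find("empty")

if elem:                  # False: the element exists but has no children
    print "truth test says found"
if elem is not None:      # True: the element really was found
    print "identity test says found"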

The larger reason is that boolean evaluation in Python tries to be too
many things: for some types is means "not zero", for some types it
means "empty", and for some types it means "this is a value of this
type as opposed to None".  That causes conflicts when more than one of
those tests makes sense for a given type, as it does with Elements.

This change is only for ElementTree as far as I know.  (Incidentally,
Numpy arrays are another notable type that's disabled implicit
booleans, but it did it a long time ago.)


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking against NULL will be eliminated?

2011-03-03 Thread Carl Banks
On Mar 2, 3:46 pm, Steven D'Aprano  wrote:

> > Fortunately for me, I never trusted python's
> > complex, or should I say 'overloaded' Boolean usage.
>
> That's your loss. Just because you choose to not trust something which
> works deterministically and reliably, doesn't mean the rest of us
> shouldn't.

Perl works deterministically and reliably.  In fact, pretty much every
language works deterministically and reliably.  Total non-argument.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking against NULL will be eliminated?

2011-03-03 Thread Carl Banks
On Mar 3, 5:16 am, Neil Cerutti  wrote:
> On 2011-03-03, Tom Zych  wrote:
>
> > Carl Banks wrote:
> >> Perl works deterministically and reliably.  In fact, pretty much every
> >> language works deterministically and reliably.  Total non-argument.
>
> > Well, yes. I think the real issue is, how many surprises are
> > waiting to pounce on the unwary developer. C is deterministic
> > and reliable, but full of surprises.
>
> Point of order, for expediency, C and C++ both include lots and
> lots of indeterminate stuff.

It's beside the point, but I'll bite.  Apart from interactions with
the environment (system timer and whatnot), when does C or C++ code
ever produce indeterminate behavior?

> A piece of specific C code can be
> totally deterministic, but the language is full of undefined
> corners.

C and C++ have plenty of behaviors that are undefined, implementation
defined, etc.  But that is not the same thing as indeterminate.
Determinate means when you compile/run the code it does the same thing
every time (more or less).  When you run a program and it does one thing,
then you run it again and it does something else, it's indeterminate.

I actually can think of one indeterminate behavior in C (although it's
not certain whether this qualifies as interaction with the
environment):

int main(void) {
    int a;
    printf("%d\n", a);
}

The C standard allows the memory a refers to to be uninitialized,

OTOH this

int main(void) {
a = 1;
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking against NULL will be eliminated?

2011-03-03 Thread Carl Banks
On Mar 3, 7:12 pm, Carl Banks  wrote:
[snip]

Accidental post before I was done.  To complete the thought:

I actually can think of one indeterminate behavior in C (although it's
not certain whether this qualifies as interaction with the
environment):

#include <stdio.h>

int main(void) {
    int a;
    printf("%d\n", a);
    return 0;
}

The C standard allows the memory a refers to to be uninitialized,
meaning that a's value is whatever previously existed in that memory
slot, which could be anything.

OTOH this program:

#include <stdio.h>

int main(void) {
    int a = 1;
    a = a++;
    printf("%d\n", a);
    return 0;
}

is undefined, which I guess technically could mean that the compiler could
output an indeterminate result, but I doubt there are any compilers
that won't output the same value every time it's run.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: having both dynamic and static variables

2011-03-05 Thread Carl Banks
On Mar 5, 7:46 pm, Corey Richardson  wrote:
> On 03/05/2011 10:23 PM, MRAB wrote:
>
> > Having a fixed binding could be useful elsewhere, for example, with
> > function definitions:
> > [..]
> >      fixed PI = 3.1415926535897932384626433832795028841971693993751
>
> >      fixed def squared(x):
> >          return x * x
>
> This question spawns from my ignorance: When would a functions
> definition change? What is the difference between a dynamic function and
> a fixed function?

There's a bit of ambiguity here.  We have to differentiate between
"fixed binding" (which is what John Nagle and MRAB were talking about)
and "immutable object" (which, apparently, is how you took it).  I
don't like speaking of "constants" in Python because it's not always
clear which is meant, and IMO it's not a constant unless it's both.

An immutable object like a number or tuple can't be modified, but the
name referring to it can be rebound to a different object.

a = (1,2,3)
a.append(4) # illegal, can't modify a tuple
a = (1,2,3,4) # but this is legal, can set a to a new tuple

If a hypothetical fixed binding were added to Python, you wouldn't be
able to rebind a after it was set:

fixed a = (1,2,3)
a = (1,2,3,4) # now illegal

If you could define functions with fixed bindings like this, then a
compiler that's a lot smarter than CPython's would be able to inline
functions for potentially big speed increases.  It can't do that now
because the name of the function can always be rebound to something
else.

BTW, a function object is definitely mutable.

def squared(x):
    return x*x

squared.foo = 'bar'


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Abend with cls.__repr__ = cls.__str__ on Windows.

2011-03-18 Thread Carl Banks
On Mar 18, 2:18 am, Duncan Booth  wrote:
> Terry Reedy  wrote:
> > On 3/17/2011 10:00 PM, Terry Reedy wrote:
> >> On 3/17/2011 8:24 PM, J Peyret wrote:
> >>> This gives a particularly nasty abend in Windows - "Python.exe has
> >>> stopped working", rather than a regular exception stack error. I've
> >>> fixed it, after I figured out the cause, which took a while, but
> maybe
> >>> someone will benefit from this.
>
> >>> Python 2.6.5 on Windows 7.
>
> >>> class Foo(object):
> >>> pass
>
> >>> Foo.__repr__ = Foo.__str__ # this will cause an abend.
>
> >> 2.7.1 and 3.2.0 on winxp, no problem, interactive intepreter or IDLE
> >> shell. Upgrade?
>
> > To be clear, the above, with added indent, but with extra fluff
> (fixes)
> > removed, is exactly what I ran. If you got error with anything else,
> > please say so. Described behavior for legal code is a bug. However,
> > unless a security issue, it would not be fixed for 2.6.
>
> On Windows, I can replicate this with Python 2.7, Python 3.1.2, and
> Python 3.2. Here's the exact script (I had to change the print to be
> compatible with Python 3.2):
>
>  bug.py --
> class Foo(object):
>     pass
>     #def __str__(self):  #if you have this defined, no abend
>     #    return "a Foo"
>
> Foo.__repr__ = Foo.__str__   # this will cause an abend.
> #Foo.__str__ = Foo.__repr__  #do this instead, no abend
>
> foo = Foo()
> print(str(foo))
>
> --
>
> for Python 3.2 the command:
>     C:\Temp>c:\python32\python bug.py
>
> generates a popup:
>
>     python.exe - Application Error
>     The exception unknown software exception (0xcfd) occurred in the
>     application at location 0x1e08a325.
>
>     Click on OK to terminate the program
>     Click on CANCEL to debug the program
>
> So it looks to me to be a current bug.

Multiple people reproduced a Python hang/crash, yet it looks like no one
bothered to submit a bug report.

I observed the same behavior (2.6 and 3.2 on Linux, hangs) and went
ahead and submitted a bug report.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Abend with cls.__repr__ = cls.__str__ on Windows.

2011-03-18 Thread Carl Banks
On Mar 18, 5:31 pm, J Peyret  wrote:
> If I ever specifically work on an OSS project's codeline, I'll post
> bug reports, but frankly that FF example is a complete turn-off to
> contributing by reporting bugs.

You probably shouldn't take it so personally if they don't agree with
you.  But it's ok, it's not unreasonable to call attention to (actual)
bugs here.

I was surprised, though, when several people confirmed but no one
reported it, especially since it was a crash, which is quite a rare
thing to find.  (You should feel proud.)


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-23 Thread Carl Banks
On Mar 23, 6:59 am, Stefan Behnel  wrote:
> Antoon Pardon, 23.03.2011 14:53:
>
> > On Sun, Mar 13, 2011 at 12:59:55PM +, Steven D'Aprano wrote:
> >> The removal of cmp from the sort method of lists is probably the most
> >> disliked change in Python 3. On the python-dev mailing list at the
> >> moment, Guido is considering whether or not it was a mistake.
>
> >> If anyone has any use-cases for sorting with a comparison function that
> >> either can't be written using a key function, or that perform really
> >> badly when done so, this would be a good time to speak up.
>
> > How about a list of tuples where you want them sorted first item in 
> > ascending
> > order en second item in descending order.
>
> You can use a stable sort in two steps for that.

How about this one: you are given an obscure string collating
function implemented in a C library you don't have the source to.

Or how about this: I'm sitting at an interactive session and I have a
convenient cmp function but no convenient key, and I care more about
the four minutes it'd take to whip up a clever key function or an
adapter class than the 0.2 seconds I'd save on sorting time.

Removing cmp from sort was a mistake; it's the most straightforward
and natural way to sort in many cases.  Reason enough for me to keep
it.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-23 Thread Carl Banks
On Mar 23, 10:51 am, Stefan Behnel  wrote:
> Carl Banks, 23.03.2011 18:23:
>
>
>
>
>
> > On Mar 23, 6:59 am, Stefan Behnel wrote:
> >> Antoon Pardon, 23.03.2011 14:53:
>
> >>> On Sun, Mar 13, 2011 at 12:59:55PM +, Steven D'Aprano wrote:
> >>>> The removal of cmp from the sort method of lists is probably the most
> >>>> disliked change in Python 3. On the python-dev mailing list at the
> >>>> moment, Guido is considering whether or not it was a mistake.
>
> >>>> If anyone has any use-cases for sorting with a comparison function that
> >>>> either can't be written using a key function, or that perform really
> >>>> badly when done so, this would be a good time to speak up.
>
> >>> How about a list of tuples where you want them sorted first item in 
> >>> ascending
> >>> order en second item in descending order.
>
> >> You can use a stable sort in two steps for that.
>
> > How about this one: you have are given an obscure string collating
> > function implented in a C library you don't have the source to.
>
> > Or how about this: I'm sitting at an interactive session and I have a
> > convenient cmp function but no convenient key, and I care more about
> > the four minutes it'd take to whip up a clever key function or an
> > adapter class than the 0.2 seconds I'd save to on sorting time.
>
> As usual with Python, it's just an import away:
>
> http://docs.python.org/library/functools.html#functools.cmp_to_key
>
> I think this is a rare enough use case to merit an import rather than being
> a language feature.

The original question posted here was, "Is there a use case for cmp?"
There is, and your excuse-making doesn't change the fact.  It's the
most natural way to sort sometimes; that's a use case.  We already
knew it could be worked around.
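
For the record, the workaround under discussion looks like this, with
locale.strcoll standing in for the obscure C collating function I
mentioned:

import functools
import locale

names = ['banana', 'Apple', 'cherry']
# In 2.x you could write names.sort(cmp=locale.strcoll) directly.
names.sort(key=functools.cmp_to_key(locale.strcoll))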

It's kind of ridiculous to claim that cmp adds much complexity (it's
maybe ten lines of extra C code), so the only reason not to include it
is that it's much slower than using key.  Not including it for that
reason would be akin to the special-casing of sum to prevent strings
from being concatenated, although omitting cmp would not be as drastic
since it's not a special case.

Do we omit something that's useful but potentially slow?  I say no.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-23 Thread Carl Banks
On Mar 23, 1:38 pm, Paul Rubin  wrote:
> Carl Banks  writes:
> > It's kind of ridiculous to claim that cmp adds much complexity (it's
> > maybe ten lines of extra C code), so the only reason not to include it
> > is that it's much slower than using key.
>
> Well, I thought it was also to get rid of 3-way cmp in general, in favor
> of rich comparison.

Supporting both __cmp__ and rich comparison methods of a class does
add a lot of complexity.  The cmp argument of sort doesn't.

The cmp argument doesn't depend in any way on an object's __cmp__
method, so getting rid of __cmp__ wasn't any good reason to also get
rid of the cmp argument; their only relationship is that they're
spelled the same.  Nor is there any reason why cmp being a useful
argument of sort should indicate that __cmp__ should be retained in
classes.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Carl Banks
On Mar 24, 5:37 pm, "Martin v. Loewis"  wrote:
> > The cmp argument doesn't depend in any way on an object's __cmp__
> > method, so getting rid of __cmp__ wasn't any good readon to also get
> > rid of the cmp argument
>
> So what do you think about the cmp() builtin? Should have stayed,
> or was it ok to remove it?

Since it's trivial to implement by hand, there's no point for it to be
a builtin.  There wasn't any point before rich comparisons, either.
I'd vote not merely ok to remove, but probably a slight improvement.
It's probably the least justified builtin other than pow.


> If it should have stayed: how should it's implementation have looked like?

Here is how cmp is documented: "The return value is negative if x < y,
zero if x == y and strictly positive if x > y."

So if it were retained as a built-in, the above documentation suggests
the following implementation:

def cmp(x, y):
    if x < y: return -1
    if x == y: return 0
    if x > y: return 1
    raise ValueError('arguments to cmp are not well-ordered')

(Another, maybe better, option would be to implement it so as to have
the same expectations as list.sort, which I believe only requires
__lt__.)


> If it was ok to remove it: how are people supposed to fill out the cmp=
> argument in cases where they use the cmp() builtin in 2.x?

Since it's trivial to implement, they can just write their own cmp
function, and as an added bonus they can work around any peculiarities
with an incomplete comparison set.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-25 Thread Carl Banks
On Mar 25, 5:29 am, Seldon  wrote:
> I thought to refactor the code in a more declarative way, like
>
> assignment_list = (
> ('var1', value1),
> ('var2', value2),
> .. ,
> )
>
> for (variable, value) in assignment_list:
>         locals()[variable] = func(arg=value, *args)

Someday we'll get through a thread like this without anyone mistakenly
suggesting the use of locals() for this.


> My question is: what's possibly wrong with respect to this approach ?

I'll answer this question assuming you meant, "hypothetically, if it
actually worked".

The thing that's wrong with your "declarative way" is that it adds
nothing except obscurity.  Just do this:

var1 = value1
var2 = value2

What you're trying to do is akin to writing poetry, or a sociological
research paper.  The emphasis in that kind of writing is not on clear
communication of ideas, but on evoking some emotion with the form of
the words (almost always at the expense of clear communication).

Same thing with your "declarative way".  It adds nothing to the code
apart from a feeling of formalism.  It doesn't save you any work: you
still have to type out all the variables and values.  It doesn't save
you from repeating yourself.  It doesn't minimize the possibility of
typos or errors; quite the opposite.  It DOES make your code a lot
harder to read.

So stick with regular assignments.


"But wait," you say, "what if I don't know the variable names?"


Well, if you don't know the variable names, how can you write a
function that uses those names as local variables?


"Er, well I can access them with locals() still."


You should be using a dictionary, then.

I have found that whenever I thought I wanted to dynamically assign
local variables, it turned out I also wanted to access them
dynamically, too.  Therefore, I would say that any urge to do this
should always be treated as a red flag that you should be using a
dictionary.
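
A minimal sketch of what I mean, with made-up names and a stand-in func:

def func(arg):
    return arg * 2    # stand-in for whatever the real computation is

assignment_list = (
    ('var1', 1.5),
    ('var2', 2.5),
)

values = dict((name, func(arg=value)) for name, value in assignment_list)
print(values['var1'])    # access the results dynamically, through the dict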


"Ok, but say I do know what the variables are, but for some reason I'm
being passed a huge list of these key,value pairs, and my code
consists of lots and lots of formulas and with lots of these
variables, so it'd be unwieldy to access them through a dictionary or
as object attributes, not to mention a lot slower."


Ah, now we're getting somewhere.  This is the main use case for
dynamically binding local variables in Python, IMO.  You're getting a
big list of variables via some dynamic mechanism, you know what the
variables are, and you want to operate on them as locals, but you also
want to avoid boilerplate of binding all of them explicitly.

Not a common use case, but it happens.  (I've faced it several times,
but the things I work on make it more common for me.  I bit the bullet
and wrote out the boilerplate.)


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-25 Thread Carl Banks
On Mar 25, 3:06 pm, Steven D'Aprano  wrote:
> The reason Guido is considering re-introducing cmp is that somebody at
> Google approached him with a use-case where a key-based sort did not
> work. The use-case was that the user had masses of data, too much data
> for the added overhead of Decorate-Sort-Undecorate (which is what key
> does), but didn't care if it took a day or two to sort.
>
> So there is at least one use-case for preferring slowly sorting with a
> comparison function over key-based sorting. I asked if there any others.
> It seems not.

1. You asked for a specific kind of use case.  Antoon gave you a use
case, you told him that wasn't the kind of use case you were asking
for, then you turn around and say "I guess there are no use
cases" (without mentioning the qualification).


2. I posted two use cases in this thread that fit your criteria, and
you followed up to that subthread so you most likely read them.  Here
they are again so you won't overlook them this time:

"You are given an obscure string collating function implemented in
a C library you don't have the source to."  (Fits your criterion
"can't be done with key=".)

"I'm sitting at an interactive session and I have a
convenient cmp function but no convenient key, and I care more about
the four minutes it'd take to whip up a clever key function or an
adapter class than the 0.2 seconds I'd save on sorting
time."  (Fits your criterion "performs really badly when done so".)


3. You evidently also overlooked the use-case example posted on Python-
dev that you followed up to.


Call me crazy, but you seem to be overlooking a lot of things in your
zeal to prove your point.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why aren't copy and deepcopy in __builtins__?

2011-03-27 Thread Carl Banks
On Mar 27, 8:29 pm, John Ladasky  wrote:
> Simple question.  I use these functions much more frequently than many
> others which are included in __builtins__.  I don't know if my
> programming needs are atypical, but my experience has led me to wonder
> why I have to import these functions.

I rarely use them (for things like lists I use the list() constructor to
copy, and for most class instances I usually don't want a straight
copy of all members), but I wouldn't have a problem if they were
builtin.  They make more sense than a lot of builtins.

I'd guess the main reason they're not builtin is that they aren't
really that simple.  The functions make use of a lot of knowledge
about Python types.  Builtins tend to be for straightforward, simple,
building-block type functions.
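
For anyone following along, the difference in a nutshell:

import copy

matrix = [[1, 2], [3, 4]]
shallow = list(matrix)          # same idea as copy.copy: new outer list only
deep = copy.deepcopy(matrix)    # inner lists are copied recursively too

matrix[0][0] = 99
print(shallow[0][0])            # 99 -- the inner lists are shared
print(deep[0][0])               # 1  -- fully independent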


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python CPU

2011-04-03 Thread Carl Banks
It'd be kind of hard.  Python bytecode operates on objects, not memory slots, 
registers, or other low-level entities like that.  Therefore, in order to 
implement a "Python machine" one would have to implement the whole object 
system in the hardware, more or less.

So it'd be possible but not too practical or likely.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A question about Python Classes

2011-04-22 Thread Carl Banks
On Thursday, April 21, 2011 11:00:08 AM UTC-7, MRAB wrote:
> On 21/04/2011 18:12, Pascal J. Bourguignon wrote:
> > chad  writes:
> >
> >> Let's say I have the following
> >>
> >> class BaseHandler:
> >>  def foo(self):
> >>  print "Hello"
> >>
> >> class HomeHandler(BaseHandler):
> >>  pass
> >>
> >>
> >> Then I do the following...
> >>
> >> test = HomeHandler()
> >> test.foo()
> >>
> >> How can HomeHandler call foo() when I never created an instance of
> >> BaseHandler?
> >
> > But you created one!
> >
> No, he didn't, he created an instance of HomeHandler.
> 
> > test is an instance of HomeHandler, which is a subclass of BaseHandler,
> > so test is also an instance of BaseHandler.
> >
> test isn't really an instance of BaseHandler, it's an instance of
> HomeHandler, which is a subclass of BaseHandler.

I'm going to vote that this is incorrect usage.  An instance of HomeHandler is 
also an instance of BaseHandler, and it is incorrect to say it is not.  The 
call to HomeHandler does create an instance of BaseHandler.

The Python language itself validates this usage.  isinstance(test,BaseHandler) 
returns True.


If you are looking for a term to indicate an object for which type(test) == 
BaseHandler, then I would suggest "proper instance".  test is an instance of 
BaseHandler, but it is not a proper instance.
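
To put it in code (using new-style classes so that type() reports the
class):

class BaseHandler(object):
    pass

class HomeHandler(BaseHandler):
    pass

test = HomeHandler()
isinstance(test, BaseHandler)    # True:  test is an instance of BaseHandler
type(test) is BaseHandler        # False: it is not a "proper" instance of it
type(test) is HomeHandler        # True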


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Composition instead of inheritance

2011-04-28 Thread Carl Banks
On Thursday, April 28, 2011 10:15:02 AM UTC-7, Ethan Furman wrote:
> For anybody interested in composition instead of multiple inheritance, I 
> have posted this recipe on ActiveState (for python 2.6/7, not 3.x):
> 
> http://code.activestate.com/recipes/577658-composition-of-classes-instead-of-multiple-inherit/
> 
> Comments welcome!

That's not what we mean by composition.  Composition is when one object calls 
upon another object that it owns to implement some of its behavior.  Often used 
to model a part/whole relationship, hence the name.

The sorts of class that this decorator will work for are probably not the ones 
that are going to have problems cooperating in the first place.  So you might 
as well just use inheritance; that way people trying to read the code will have 
a common, well-known Python construct rather than a custom decorator to 
understand.

If you want to enforce no duplication of attributes you can do that, such as 
with this untested metaclass:

import collections

class MakeSureNoBasesHaveTheSameClassAttributesMetaclass(type):
    def __new__(metatype, name, bases, dct):
        # Count each attribute name across the bases and the class body,
        # skipping double-underscore names (every class carries __module__,
        # __doc__, etc., which would otherwise always collide).
        u = collections.Counter()
        for base in bases:
            for key in base.__dict__:
                if not (key.startswith('__') and key.endswith('__')):
                    u[key] += 1
        for key in dct:
            if not (key.startswith('__') and key.endswith('__')):
                u[key] += 1
        if any(count > 1 for count in u.values()):
            raise TypeError("base classes and this class share some "
                            "class attributes")
        return type.__new__(metatype, name, bases, dct)
 

Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Composition instead of inheritance

2011-04-29 Thread Carl Banks
On Thursday, April 28, 2011 6:43:35 PM UTC-7, Ethan Furman wrote:
> Carl Banks wrote:
> > The sorts of class that this decorator will work for are probably not
>  > the ones that are going to have problems cooperating in the first place.
>  > So you might as well just use inheritance; that way people trying to read
>  > the code will have a common, well-known Python construct rather than a
>  > custom decorator to understand.
> 
>  From thread 'python and super' on Python-Dev:
> Ricardo Kirkner wrote:
>  > I'll give you the example I came upon:
>  >
>  > I have a TestCase class, which inherits from both Django's TestCase
>  > and from some custom TestCases that act as mixin classes. So I have
>  > something like
>  >
>  > class MyTestCase(TestCase, Mixin1, Mixin2):
>  >...
>  >
>  > now django's TestCase class inherits from unittest2.TestCase, which we
>  > found was not calling super.
> 
> This is the type of situation the decorator was written for (although 
> it's too simplistic to handle that exact case, as Ricardo goes on to say 
> he has a setUp in each mixin that needs to be called -- it works fine 
> though if you are not adding duplicate names).

The problem is that he was doing mixins wrong.  Way wrong.

Here is my advice on mixins:

Mixins should almost always be listed first in the bases.  (The only exception 
is to work around a technicality.  Otherwise mixins go first.)

If a mixin defines __init__, it should always accept self, *args and **kwargs 
(and no other arguments), and pass those on to super().__init__.  Same deal 
with any other function that different sister classes might define in varied 
ways (such as __call__).

A mixin should not accept arguments in __init__.  Instead, it should burden the 
derived class to accept arguments on its behalf, and set attributes before 
calling super().__init__, which the mixin can access.

If you insist on a mixin that accepts arguments in __init__, then it should
pop them off kwargs.  Avoid using positional arguments, and never use
named arguments.  Always go through args and kwargs.

If mixins follow these rules, they'll be reasonably safe to use on a variety of 
classes.  (Maybe even safe enough to use in Django classes.)
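
A minimal sketch of a mixin written to those rules (all names made up):

class LoggingMixin(object):
    # no arguments of its own; everything is passed along
    def __init__(self, *args, **kwargs):
        super(LoggingMixin, self).__init__(*args, **kwargs)
        # relies on an attribute the derived class set on its behalf
        print("created %s" % self.label)

class Widget(object):
    def __init__(self, *args, **kwargs):
        super(Widget, self).__init__(*args, **kwargs)

class LabeledWidget(LoggingMixin, Widget):      # mixin listed first
    def __init__(self, label, *args, **kwargs):
        self.label = label                      # set before calling super
        super(LabeledWidget, self).__init__(*args, **kwargs)

w = LabeledWidget("frobnicator")                # prints "created frobnicator"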


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Composition instead of inheritance

2011-04-29 Thread Carl Banks
On Friday, April 29, 2011 2:44:56 PM UTC-7, Ian wrote:
> On Fri, Apr 29, 2011 at 3:09 PM, Carl Banks 
>  wrote:
> > Here is my advice on mixins:
> >
> > Mixins should almost always be listed first in the bases.  (The only 
> > exception is to work around a technicality.  Otherwise mixins go first.)
> >
> > If a mixin defines __init__, it should always accept self, *args and 
> > **kwargs (and no other arguments), and pass those on to super().__init__.  
> > Same deal with any other function that different sister classes might 
> > define in varied ways (such as __call__).
> 
> Really, *any* class that uses super().__init__ should take its
> arguments and pass them along in this manner.

If you are programming defensively for any possible scenario, you might try 
this (and you'd still fail).

In the real world, certain classes might have more or less probability to be 
used in a multiple inheritance situations, and programmer needs to weigh the 
probability of that versus the loss of readability.  For me, except when I'm 
designing a class specifically to participate in MI (such as a mixin), 
readability wins.

[snip]
> > A mixin should not accept arguments in __init__.  Instead, it should burden 
> > the derived class to accept arguments on its behalf, and set attributes 
> > before calling super().__init__, which the mixin can access.
> 
> Ugh.  This breaks encapsulation, since if I ever need to add an
> optional argument, I have to add handling for that argument to every
> derived class that uses that mixin.  The mixin should be able to
> accept new optional arguments without the derived classes needing to
> know about them.

Well, encapsulation means nothing to me; if it did I'd be using Java.

If you merely mean DRY, then I'd say this doesn't necessarily add to it.  The 
derived class has a responsibility one way or another to get the mixin whatever 
initializers it needs.  Whether it does that with __init__ args or through 
attributes it still has to do it.  Since attributes are more versatile than 
arguments, and since it's messy to use arguments in MI situations, using 
attributes is the superior method. 


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: in search of graceful co-routines

2011-05-17 Thread Carl Banks
On Tuesday, May 17, 2011 10:04:25 AM UTC-7, Chris Withers wrote:
> Now, since the sequence is long, and comes from a file, I wanted the 
> provider to be an iterator, so it occurred to me I could try and use the 
> new 2-way generator communication to solve the "communicate back with 
> the provider", with something like:
> 
> for item in provider:
>try:
>  consumer.handleItem(self)
>except:
>   provider.send('fail')
>else:
>   provider.send('succeed')
> 
> ..but of course, this won't work, as 'send' causes the provider 
> iteration to continue and then returns a value itself. That feels weird 
> and wrong to me, but I guess my use case might not be what was intended 
> for the send method.

You just have to call send() in a loop yourself.  Note that you should usually 
catch StopIteration whenever calling send() or next() by hand.  Untested:

result = None
while True:
    try:
        item = provider.send(result)
    except StopIteration:
        break
    try:
        consumer.handleItem(item)
    except:
        result = 'failure'
    else:
        result = 'success'
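
For what it's worth, a made-up provider that reacts to the value sent back
might look something like this:

def provider():
    for item in ['a', 'b', 'c']:
        result = yield item      # receives 'success'/'failure' from send()
        if result == 'failure':
            print("logging a failure for %r" % (item,))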


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why did Quora choose Python for its development?

2011-05-22 Thread Carl Banks
On Sunday, May 22, 2011 12:44:18 AM UTC-7, Octavian Rasnita wrote:
> I've noticed that on many Perl mailing lists the list members talk very
> rarely about Python, but only on this Python mailing list I read many
> discussions about Perl, in which most of the participants use to agree that
> yes, Python is better, as it shouldn't be obvious that most of the list
> members prefer Python.

Evidently Perl users choose to bash other languages in those languages' own 
mailing lists.


> If Python would be so great, you wouldn't talk so much about how bad are
> other languages,

Sure we would.  Sometimes it's fun to sit on your lofty throne and scoff at the 
peasantry.


> or if these discussions are not initiated by envy, you would
> be also talking about how bad is Visual Basic, or Pascal, or Delphi, or who
> knows other languages.

I would suggest that envy isn't the reason; the reason is that Perl is just
that much worse than Visual Basic, Pascal, and Delphi.  We only make fun of the
really, really bad languages.

(Or, less cynically, it's because Perl and Python historically filled the same 
niche, whereas VB, Pascal, and Delphi were often used for different sorts of 
programming.)


What I'm trying to say here is your logic is invalid.  People have all kinds of 
reasons to badmouth other languages; that some mailing list has a culture that 
is a bit more or a bit less approving of it than some other list tells us 
nothing.  In any case it's ridiculous to claim envy as factor nowadays, as 
Python is clearly on the rise while Perl is on the decline.  Few people are 
choosing Perl for new projects.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: super() in class defs?

2011-05-25 Thread Carl Banks
On Wednesday, May 25, 2011 10:54:11 AM UTC-7, Jess Austin wrote:
> I may be attempting something improper here, but maybe I'm just going
> about it the wrong way. I'm subclassing
> http.server.CGIHTTPRequestHandler, and I'm using a decorator to add
> functionality to several overridden methods.
> 
> def do_decorate(func):
> .   def wrapper(self):
> .   if appropriate():
> .   return func()
> .   complain_about_error()
> .   return wrapper
> 
> class myHandler(CGIHTTPRequestHandler):
> .   @do_decorate
> .   def do_GET(self):
> .   return super().do_GET()
> .   # also override do_HEAD and do_POST
> 
> My first thought was that I could just replace that whole method
> definition with one line:
> 
> class myHandler(CGIHTTPRequestHandler):
> .   do_GET = do_decorate(super().do_GET)
> 
> That generates the following error:
> 
> SystemError: super(): __class__ cell not found
> 
> So I guess that when super() is called in the context of a class def
> rather than that of a method def, it doesn't have the information it
> needs.

Right.  Actually the class object itself doesn't even exist yet when super() is 
invoked.  (It won't be created until after the end of the class statement 
block.)

> Now I'll probably just say:
> 
> do_GET = do_decorate(CGIHTTPRequestHandler.do_GET)
> 
> but I wonder if there is a "correct" way to do this instead? Thanks!

Well, since the class object isn't created until after the end of the class 
statement block, it's impossible to invoke super() on the class from inside the 
block.  So there's only two ways to invoke super(): 1. like you did above, by 
calling it inside a method, and 2. call it beyond the end of the class 
statement, like this:

class myHandler(CGIHTTPRequestHandler):
    pass

myHandler.do_GET = do_decorate(super(myHandler).do_GET)

I wouldn't call that correct, though.  (I'm not even sure it'll work, since I 
don't have Python 3 handy to test it, but as far as I can tell it will.)

It's just one of the quirks of Python's type system.

I don't agree with Ian's recommendation not to use super() in general, but I'd 
probably agree that one should stick to using it only in its intended way (to 
invoke base-class methods directly).


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: bug in str.startswith() and str.endswith()

2011-05-26 Thread Carl Banks
On Thursday, May 26, 2011 4:27:22 PM UTC-7, MRAB wrote:
> On 27/05/2011 00:27, Ethan Furman wrote:
> > I've tried this in 2.5 - 3.2:
> >
> > --> 'this is a test'.startswith('this')
> > True
> > --> 'this is a test'.startswith('this', None, None)
> > Traceback (most recent call last):
> > File "", line 1, in 
> > TypeError: slice indices must be integers or None or have an __index__
> > method
> >
> > The 3.2 docs say this:
> >
> > str.startswith(prefix[, start[, end]])
> > Return True if string starts with the prefix, otherwise return False.
> > prefix can also be a tuple of prefixes to look for. With optional start,
> > test string beginning at that position. With optional end, stop
> > comparing string at that position
> >
> > str.endswith(suffix[, start[, end]])
> > Return True if the string ends with the specified suffix, otherwise
> > return False. suffix can also be a tuple of suffixes to look for. With
> > optional start, test beginning at that position. With optional end, stop
> > comparing at that position.
> >
> > Any reason this is not a bug?
> >
> Let's see: 'start' and 'end' are optional, but aren't keyword
> arguments, and can't be None...
> 
> I'd say bug.

I also say bug.  The end parameter looks pretty useless for .startswith() and 
is probably only present for consistency with other string search methods like 
.index().  Yet on .index() using None as an argument works as intended:

>>> "cbcd".index("c",None,None)
0

So it's there for consistency, yet is not consistent.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why did Quora choose Python for its development?

2011-05-27 Thread Carl Banks
On Friday, May 27, 2011 6:47:21 AM UTC-7, Roy Smith wrote:
> In article <948l8n...@mid.individual.net>,
>  Gregory Ewing  wrote:
> 
> > John Bokma wrote:
> > 
> > > A Perl programmer will call this line noise:
> > > 
> > > double_word_re = re.compile(r"\b(?P<word>\w+)\s+(?P=word)(?!\w)",
> > > re.IGNORECASE)
> 
> One of the truly awesome things about the Python re library is that it 
> lets you write complex regexes like this:
> 
> pattern = r"""\b # beginning of line
>   (?P<word>\w+)    # a word
>   \s+              # some whitespace
>   (?P=word)(?!\w)  # the same word again
>"""
> double_word_re = re.compile(pattern,  re.I | re.X)

Perl has the X flag as well; in fact, I'm pretty sure Perl originated it.  Just
saying.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-29 Thread Carl Banks
On Sunday, May 29, 2011 4:31:19 PM UTC-7, Steven D'Aprano wrote:
> On Sun, 29 May 2011 22:19:49 +0100, Nobody wrote:
> 
> > On Sun, 29 May 2011 10:29:28 +, Steven D'Aprano wrote:
> > 
> >>> The correct answer to "nan == nan" is to raise an exception,
> >>> because
> >>> you have asked a question for which the answer is nether True nor
> >>> False.
> >> 
> >> Wrong.
> > 
> > That's overstating it. There's a good argument to be made for raising an
> > exception. 
> 
> If so, I've never heard it, and I cannot imagine what such a good 
> argument would be. Please give it.

Floating point arithmetic evolved more or less on languages like Fortran where 
things like exceptions were unheard of, and defining NaN != NaN was a bad trick 
they chose for testing against NaN for lack of a better way.

If exceptions had commonly existed in that environment there's no chance they 
would have chosen that behavior; comparison against NaN (or any operation with 
NaN) would have signaled a floating point exception.  That is the correct way 
to handle exceptional conditions.

The only reason to keep NaN's current behavior is to adhere to IEEE, but given 
that Python has trailblazed a path of correcting arcane mathematical behavior, 
I definitely see an argument that Python should do the same for NaN, and if it 
were done Python would be a better language.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-29 Thread Carl Banks
On Sunday, May 29, 2011 7:41:13 AM UTC-7, Grant Edwards wrote:
> It treats them as identical (not sure if that's the right word).  The
> implementation is checking for ( A is B or A == B ).  Presumably, the
> assumpting being that all objects are equal to themselves.  That
> assumption is not true for NaN objects, so the buggy behavior is
> observed.

Python makes this assumption in lots of common situations (apparently in an 
implementation-defined manner):

>>> nan = float("nan")
>>> nan == nan
False
>>> [nan] == [nan]
True

Therefore, I'd recommend never to rely on NaN != NaN except in casual throwaway 
code.  It's too easy to forget that it will stop working when you throw an item 
into a list or tuple.  There's a function, math.isnan(), that should be the One 
Obvious Way to test for NaN.  NaN should also never be used as a dictionary key 
or in a set (of course).
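
To spell that out:

import math

nan = float("nan")
math.isnan(nan)     # True, no matter where the value came from
nan != nan          # True here, but...
[nan] == [nan]      # ...True as well: the identity check short-circuits it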

If it weren't for compatibility with IEEE, there would be no sane argument that 
defining an object that is not equal to itself isn't a bug.  But because 
there's a lot of code out there that depends on NaN != NaN, Python has to 
tolerate it.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-29 Thread Carl Banks
On Sunday, May 29, 2011 6:14:58 PM UTC-7, Chris Angelico wrote:
> On Mon, May 30, 2011 at 10:55 AM, Carl Banks 
>  wrote:
> > If exceptions had commonly existed in that environment there's no chance 
> > they would have chosen that behavior; comparison against NaN (or any 
> > operation with NaN) would have signaled a floating point exception.  That 
> > is the correct way to handle exceptional conditions.
> >
> > The only reason to keep NaN's current behavior is to adhere to IEEE,
> > but given that Python has trailblazed a path of correcting arcane
> > mathematical behavior, I definitely see an argument that Python
> > should do the same for NaN, and if it were done Python would be a
> > better language.
> 
> If you're going to change behaviour, why have a floating point value
> called "nan" at all?

If I were designing a new floating-point standard for hardware, I would 
consider getting rid of NaN.  However, with the floating point standard that 
exists, that almost all floating point hardware mostly conforms to, there are 
certain bit patterns that mean NaN.

Python could refuse to construct float() objects out of NaN (I doubt it would 
even be a major performance penalty), but there's reasons why you wouldn't, the 
main one being to interface with other code that does use NaN.  It's better, 
then, to recognize the NaN bit patterns and do something reasonable when trying 
to operate on it.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-31 Thread Carl Banks
On Sunday, May 29, 2011 8:59:49 PM UTC-7, Steven D'Aprano wrote:
> On Sun, 29 May 2011 17:55:22 -0700, Carl Banks wrote:
> 
> > Floating point arithmetic evolved more or less on languages like Fortran
> > where things like exceptions were unheard of, 
> 
> I'm afraid that you are completely mistaken.
> 
> Fortran IV had support for floating point traps, which are "things like 
> exceptions". That's as far back as 1966. I'd be shocked if earlier 
> Fortrans didn't also have support for traps.
> 
> http://www.bitsavers.org/pdf/ibm/7040/C28-6806-1_7040ftnMathSubrs.pdf

Fine, it wasn't "unheard of".  I'm pretty sure the existence of a few high end 
compiler/hardware combinations that supported traps doesn't invalidate my basic 
point.  NaN was needed because few systems had a separate path to deal with 
exceptional situations like producing or operating on something that isn't a 
number.  When they did exist few programmers used them.  If floating-point were 
standardized today it might not even have NaN (and definitely wouldn't support 
the ridiculous NaN != NaN), because all modern systems can be expected to 
support exceptions, and modern programmers can be expected to use them.


> The IEEE standard specifies that you should be able to control whether a 
> calculation traps or returns a NAN. That's how Decimal does it, that's 
> how Apple's (sadly long abandoned) SANE did it, and floats should do the 
> same thing.

If your aim is to support every last clause of IEEE for better or worse, then 
yes that's what Python should do.  If your aim is to make Python the best 
language it can be, then Python should reject IEEE's obsolete notions, and 
throw exceptions when operating on NaN.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-31 Thread Carl Banks
On Sunday, May 29, 2011 7:53:59 PM UTC-7, Chris Angelico wrote:
> Okay, here's a question. The Python 'float' value - is it meant to be
> "a Python representation of an IEEE double-precision floating point
> value", or "a Python representation of a real number"?

The former.  Unlike the case with integers, there is no way that I know of to 
represent an abstract real number on a digital computer.

Python also includes several IEEE-defined operations in its library 
(math.isnan, math.frexp).


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-31 Thread Carl Banks
On Tuesday, May 31, 2011 8:05:43 PM UTC-7, Chris Angelico wrote:
> On Wed, Jun 1, 2011 at 12:59 PM, Carl Banks 
>  wrote:
> > On Sunday, May 29, 2011 7:53:59 PM UTC-7, Chris Angelico wrote:
> >> Okay, here's a question. The Python 'float' value - is it meant to be
> >> "a Python representation of an IEEE double-precision floating point
> >> value", or "a Python representation of a real number"?
> >
> > The former.  Unlike the case with integers, there is no way that I know of 
> > to represent an abstract real number on a digital computer.
> 
> This seems peculiar. Normally Python seeks to define its data types in
> the abstract and then leave the concrete up to the various
> implementations - note, for instance, how Python 3 has dispensed with
> 'int' vs 'long' and just made a single 'int' type that can hold any
> integer. Does this mean that an implementation of Python on hardware
> that has some other type of floating point must simulate IEEE
> double-precision in all its nuances?

I think you misunderstood what I was saying.

It's not *possible* to represent a real number abstractly in any digital 
computer.  Python couldn't have an "abstract real number" type even if it wanted
to.

(Math aside: Real numbers are not countable, meaning they cannot be put into 
one-to-one correspondence with integers.  A digital computer can only represent 
countable things exactly, for obvious reasons; therefore, to model 
non-countable things like real numbers, one must use a countable approximation 
like floating-point.)

You might be able to get away with saying float() merely represents an 
"abstract floating-point number with provisions for nan and inf", but pretty 
much everyone uses IEEE format, so what's the point?  And no it doesn't mean 
Python has to support every nuance (and it doesn't).


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-05-31 Thread Carl Banks
On Tuesday, May 31, 2011 8:57:57 PM UTC-7, Chris Angelico wrote:
> On Wed, Jun 1, 2011 at 1:30 PM, Carl Banks 
>  wrote:
> > I think you misunderstood what I was saying.
> >
> > It's not *possible* to represent a real number abstractly in any digital 
> > computer.  Python couldn't have an "abstract real number" type even it 
> > wanted to.
> 
> True, but why should the "non-integer number" type be floating point
> rather than (say) rational?

Python has several non-integer number types in the standard library.  The one 
we are talking about is called float.  If the type we were talking about had 
instead been called real, then your question might make some sense.  But the 
fact that it's called float really does imply that that underlying 
representation is floating point.


> Actually, IEEE floating point could mostly
> be implemented in a two-int rationals system (where the 'int' is
> arbitrary precision, so it'd be Python 2's 'long' rather than its
> 'int'); in a sense, the mantissa is the numerator, and the scale
> defines the denominator (which will always be a power of 2). Yes,
> there are very good reasons for going with the current system. But are
> those reasons part of the details of implementation, or are they part
> of the definition of the data type?

Once again, Python float is an IEEE double-precision floating point number.  
This is part of the language; it is not an implementation detail.  As I 
mentioned elsewhere, the Python library establishes this as part of the 
language because it includes several functions that operate on IEEE numbers.

And, by the way, the types you're comparing it to aren't as abstract as you say 
they are.  Python's int type is required to have a two's-complement binary
representation and support bitwise operations.


> > (Math aside: Real numbers are not countable, meaning they 
> > cannot be put into one-to-one correspondence with integers.
> >  A digital computer can only represent countable things
> > exactly, for obvious reasons; therefore, to model
> > non-countable things like real numbers, one must use a
> > countable approximation like floating-point.)
> 
> Right. Obviously a true 'real number' representation can't be done.
> But there are multiple plausible approximations thereof (the best
> being rationals).

That's a different question.  I don't care to discuss it, except to say that 
your default real-number type would have to be called something other than 
float, if it were not a floating point.


> Not asking for Python to be changed, just wondering why it's defined
> by what looks like an implementation detail. It's like defining that a
> 'character' is an 8-bit number using the ASCII system, which then
> becomes problematic with Unicode.

It really isn't.  Unlike with characters (which are trivially extensible to 
larger character sets, just add more bytes), different real number 
approximations differ in details too important to be left to the implementation.

For instance, say you are using an implementation that uses floating point, and 
you define a function that uses Newton's method to find a square root:

def square_root(N, x=None):
    if x is None:
        x = N/2
    for i in range(100):
        x = (x + N/x)/2
    return x

It works pretty well on your floating-point implementation.  Now try running it 
on an implementation that uses fractions by default.

(Seriously, try running this function with N as a Fraction.)
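
Concretely, assuming the square_root above:

from fractions import Fraction

print(square_root(2.0))          # finishes almost instantly
print(square_root(Fraction(2)))  # grinds away as the denominators explode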

So I'm going to opine that the representation does not seem like an 
implementation detail.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-06-01 Thread Carl Banks
On Wednesday, June 1, 2011 10:17:54 AM UTC-7, OKB (not okblacke) wrote:
> Carl Banks wrote:
> 
> > On Tuesday, May 31, 2011 8:57:57 PM UTC-7, Chris Angelico wrote:
> >> On Wed, Jun 1, 2011 at 1:30 PM, Carl Banks  wrote:
> > Python has several non-integer number types in the standard
> > library.  The one we are talking about is called float.  If the
> > type we were talking about had instead been called real, then your
> > question might make some sense.  But the fact that it's called
> > float really does imply that that underlying representation is
> > floating point. 
> 
>   That's true, but that's sort of putting the cart before the horse.

Not really.  The (original) question Chris Angelico was asking was, "Is it an 
implementation detail that Python's non-integer type is represented as an IEEE 
floating-point?"  Which the above is the appropriate answer to.

> In response to that, one can just ask: why is this type called "float"? 

Which is a different question; not the question I was answering, and not one I 
care to discuss.
 

Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float("nan") in set or as key

2011-06-01 Thread Carl Banks
On Wednesday, June 1, 2011 11:10:33 AM UTC-7, Ethan Furman wrote:
> Carl Banks wrote:
> > For instance, say you are using an implementation that uses
>  > floating point, and you define a function that uses Newton's
>  > method to find a square root:
> > 
> > def square_root(N,x=None):
> > if x is None:
> > x = N/2
> > for i in range(100):
> > x = (x + N/x)/2
> > return x
> > 
> > It works pretty well on your floating-point implementation.
>  > Now try running it on an implementation that uses fractions
>  > by default
> > 
> > (Seriously, try running this function with N as a Fraction.)
> 
> Okay, will this thing ever stop?  It's been running for 90 minutes now. 
>   Is it just incredibly slow?
> 
> Any enlightenment appreciated!

Fraction needs to find the LCD of the denominators when adding; but LCD 
calculation becomes very expensive as the denominators get large (which they 
will since you're dividing by an intermediate result in a loop).  I suspect the 
time needed grows exponentially (at least) with the value of the denominators.

The LCD calculation should slow the calculation down to an astronomical crawl 
well before you encounter memory issues.
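
A quick way to watch the growth, if you're curious (each Newton step
roughly doubles the number of digits in the denominator):

from fractions import Fraction

N = Fraction(2)
x = N / 2
for i in range(8):
    x = (x + N/x) / 2
    print(i, len(str(x.denominator)))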

This is why representation simply cannot be left as an implementation detail; 
rationals and floating-points behave too differently.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 308 accepted - new conditional expressions

2005-09-30 Thread Carl Banks
Reinhold Birkenfeld wrote:
> X if C else Y

Oh well.  Just about any conditional is better than no conditional.

Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Intersection of lists/sets -- with a catch

2005-10-18 Thread Carl Banks

James Stroud wrote:
> Hello All,
>
> I find myself in this situation from time to time: I want to compare two lists
> of arbitrary objects and (1) find those unique to the first list, (2) find
> those unique to the second list, (3) find those that overlap. But here is the
> catch: comparison is not straight-forward. For example, I will want to
> compare 2 objects based on a set of common attributes. These two objects need
> not be members of the same class, etc. A function might help to illustrate:
>
> def test_elements(element1, element2):
>   """
>   Returns bool.
>   """
>   # any evaluation can follow
>   return (element1.att_a == element2.att_a) and \
>  (element1.att_b == element2.att_b)


[snip]

> Its probably obvious to everyone that this type of task seems perfect for
> sets. However, it does not seem that sets can be used in the following way,
> using a hypothetical "comparator" function. The "comparator" would be
> analagous to a function passed to the list.sort() method. Such a device would
> crush the previous code to the following very straight-forward statements:
>
> some_set = Set(some_list, comparator=test_elements)
> another_set = Set(another_list, comparator=test_elements)
> overlaps = some_set.intersection(another_set)
> unique_some = some_set.difference(another_set)
> unique_another = another_set.difference(some_set)
>
> I am under the personal opinion that such a modification to the set type would
> make it vastly more flexible, if it does not already have this ability.
>
> Any thoughts on how I might accomplish either technique or any thoughts on how
> to make my code more straightforward would be greatly appreciated.


How about something like this (untested):

class CmpProxy(object):
    def __init__(self, obj):
        self.obj = obj
    def __eq__(self, other):
        return (self.obj.att_a == other.obj.att_a
                and self.obj.att_b == other.obj.att_b)
    def __hash__(self):
        return hash((self.obj.att_a, self.obj.att_b))

set_a = set(CmpProxy(x) for x in list_a)
set_b = set(CmpProxy(y) for y in list_b)
overlaps = [z.obj for z in set_a.intersection(set_b)]
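
To make it concrete, with a made-up record class (att_a and att_b as in
your test_elements):

class Rec(object):
    def __init__(self, att_a, att_b, source):
        self.att_a, self.att_b, self.source = att_a, att_b, source

list_a = [Rec(1, 'x', 'first'), Rec(2, 'y', 'first')]
list_b = [Rec(1, 'x', 'second'), Rec(3, 'z', 'second')]

set_a = set(CmpProxy(x) for x in list_a)
set_b = set(CmpProxy(y) for y in list_b)
overlaps = [z.obj for z in set_a.intersection(set_b)]   # the Rec(1, 'x') objects
unique_a = [z.obj for z in set_a.difference(set_b)]     # Rec(2, 'y')
unique_b = [z.obj for z in set_b.difference(set_a)]     # Rec(3, 'z')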


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: import statement / ElementTree

2005-11-04 Thread Carl Banks
[EMAIL PROTECTED] wrote:
> O/S: Windows 2K
> Vsn of Python: 2.4
>
> Currently:
>
> 1) Folder structure:
>
> \workarea\ <- ElementTree files reside here
>   \xml\
>     \dom\
>     \parsers\
>     \sax\

First point, XML DOM comes packaged with Python 2.4.  (IIRC, Python XML
is, or was, a separate project that made it into the Python
distribution--maybe it still makes its own releases.  I presume you're
using one of those releases in lieu of the version supplied with Python.
 I only point this out in case you didn't realize the xml package is in
Python 2.4.  My apologies if this comes across as presumptuous of me.)


> 2) The folder \workarea\ is in the path.
>
> 3) A script (which is working) makes calls to the Element(),
> SubElement(), tostring() and XML() methods within ElementTree.py; the
> script is organized as follows:
>
> # top of file; not within any function/mehtod
> import ElementTree
>
> 
>
> root = ElementTree.Element('request')
> pscifinq = ElementTree.SubElement(root, 'pscifinq')
> bank = ElementTree.SubElement(pscifinq, 'bank')
> bank.text = '1'
> inquiryString = ElementTree.tostring(root)

In the most recent version, ElementTree modules are part of the
elementtree package.  Are you using an older version?  If so, perhaps
you should get the latest version.

> 4) the term 'ElementTree files' referenced above refers to the
> following files:
>   __init__.py (this file contains only comments)
>   ElementInclude.py
>   ElementPath.py
>   ElementTree.py
>   HTMLTreeBuilder.py
>   SgmlopXMLTreeBuilder.py
>   SimpleXMLTreeBuilder.py
>   SimpleXMLWriter.py
>   TidyHTMLTreeBuilder.py
>   TidyTools.py
>   XMLTreeBuilder.py

It looks like your version of ElementTree is a packaged version.  The
file __init__.py normally appears only in packages; it's a mistake for
these files to have been in the workarea directory in the first place.
How did you install ElementTree?

> Want to change things as follows:
>
> Folder structure:
>
> \workarea\ <- ElementTree files no longer here
>   \xml\
>     \dom\
>     \elementtree\ <- ElementTree files reside here
>     \parsers\
>     \sax\

Bad idea, I'd say.  Generally, you shouldn't inject your own modules
into someone else's package system (unless you're working on someone
else's packages, to modify or enhance them).  The xml package might
have been a slight exception to this, seeing how it's just a container
for several related packages, but it's in the Python distribution.

ElementTree modules are part of the elementtree package.  You should
arrange your directories like this (and they should have been arranged
like this in the first place):

\workarea
  \elementtree
  \xml
    \dom
    \sax
    ...etc...


> I tried changing the
>
> import ElementTree
>
> statement to:
>
> import xml.elementtree.ElementTree
>
> The result of changing the folder structure and the import statement
> was the following error:
>
> import xml.elementtree.ElementTree
>
> ImportError: No module named elementtree.ElementTree

I'm guessing this exception is not happening with your import
statement, but with some other import in one of the ElementTree modules
(though in that case I'm not sure why it would have been working
before).  I'd have to see the traceback to be sure.  Generally, when
reporting an error, you should include a traceback.


> I verified that the file ElementTree.py really does reside in the
> \workarea\xml\elementtree\ folder.  Assuming that I really want the
> ElementTree files to reside in the \workarea\xml\elementtree\ folder,
> what changes must I make such that the script can locate the
> ElementTree.py file? I have a hunch that there is something obvious
> that I am missing in the import statement; is it possible to accomplish
> this by changing only the import statement rather than changing each of
> the calls to the Element(), SubElement(), XML() and tostring() methods.

Well, if you arrange it as I advise, you shouldn't have a problem.
However, if you want to change only the import statements, you don't
want to do this:

import elementtree.ElementTree

That will import ElementTree, but then you'd have to access it as
elementtree.ElementTree.  Instead you should do this:

from elementtree import ElementTree


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: super() and multiple inheritance

2005-12-01 Thread Carl Banks

hermy wrote:
> Hi,
> I'm trying to figure out how to pass constructor arguments to my
> superclasses in a multiple inheritance situation.
>
> As I understand it, using super() is the preferred way to call
> the next method in method-resolution-order. When I have parameterless
> __init__ methods, this works as expected.
> However, how do you solve the following simple multiple inheritance
> situation in python ?
>
> class A(object):
>  def __init__(self,x):
>  super(A,self).__init__(x)
>  print "A init (x=%s)" % x
>
> class B(object):
>  def __init__(self,y):
>  super(B,self).__init__(y)
>  print "B init (y=%s)" % y
>
> class C(A,B):
>  def __init__(self,x,y):
>  super(C,self).__init__(x,y)  < how to do this ???
>  print "C init (x=%s,y=%s)" % (x,y)
>
> What I want is that when I create a class C object
> x = C(10,20)
> that the x argument of C's __init__ is used to initialize the
> A superclass, and the y argument is used to initialize the B
> superclass.
> In C++, I would do this using initilaization lists, like:
> C::C(int x, int y) : A(x), B(y) { ... }

Well, technically, you can't do this in C++ at all.

A little explanation.  In multiple inheritance situations, Python has a
significant difference from regular C++ inheritance: in C++, if you
multiply-inherit from two different classes that both inherit from the
same base class, the resulting structure has two copies of the data
associated with the base class.  In Python, only there is only one
copy.  If you want only one copy of the base class's data in C++, you
must use virtual inheritance.

But here's the thing: in C++, you can't initialize a virtual base class
in the constructor.  A virtual base class is always initialized with
the default constructor.  The reason for this is obvious: otherwise,
you could end up initializing the virtual base twice with different
arguments.

This also explains why, in Python, super is preferred for multiple
inheritance: it guarantees that each base class's __init__ is called
only once.  This comes at the price of less flexibility with the
function arguments, but in Python, at least you can use function
arguments.

So now, let's talk about solutions.

Now that we know why super is preferred, we can make a somewhat
intelligent decision whether to go against the advice.  If you know
your inheritance hierarchy is not going to have any two classes
inheriting from the same base class (except for object), then you could
just call each class's __init__ directly, same as you would have done
with old-style classes.  There is no danger of initializing any base
class twice and no reason for super to be preferred here.

  A.__init__(self, x)
  B.__init__(self, y)

But if you can't or don't want to do this, you'll have to make some
concessions with the argument lists.  One thing to do would have A and
B both accept x and y, using only the one it needs.  A more general
approach might be to use keyword arguments.  For example (you can
improve upon this):

  class A(object):
      def __init__(self, **kwargs):
          self.x = kwargs.pop('x')   # "use" the argument meant for A
          super(A, self).__init__(**kwargs)

  class B(object):
      def __init__(self, **kwargs):
          self.y = kwargs.pop('y')   # "use" the argument meant for B
          super(B, self).__init__(**kwargs)

  class C(A, B):
      def __init__(self, **kwargs):
          super(C, self).__init__(**kwargs)

  C(x=1, y=2)


> I'm probably overlooking some basic stuff here,

Unfortunately, it doesn't appear that you are.  You'll have to choose
between calling base class __init__s old-style, or fiddling with their
argument lists.


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: super() and multiple inheritance

2005-12-01 Thread Carl Banks
hermy wrote:
> Thanx, I think I got it (please correct me if I'm wrong):
> o super(C,self) determines the next class in the inheritance hierarchy
> according to
>   method resolution order, and simply calls the specified method on it
> (in this case
>  __init__ with the specified argument list.
> o since I cannot now beforehand where my class will appear in the
> inheritance
>   hierarchy when __init__ is called, I need to pass on all the
> arguments and let
>   my method decide which ones to use.
>
> On the other hand, when I use old-style, s.a. B.__init__(args), I
> specify statically
> in which class the method lookup should occur.
>
> Unfortunately, new and old style don't mix well (as I found out by
> experimenting a little),
> so for new code I should probably stick to new style, and use super.

Well I wasn't suggesting you mix styles; I was suggesting all old, all
the way here.  With due care, the effect of using this style over the
entire hierarchy is to call the __init__ functions possibly out of MRO
sequence.  You have to pay some attention to what methods are called in
__init__ anyways, so this isn't exactly a whole new dimension of
carefulness.

> Which leads me to my original problem. Your suggestion with **kwargs
> works fine,
> but only if it's used consistently, and I'll probably need to do some
> name-mangling
> to encode class names in parameter names in order to avoid name
> clashes.

Meh.  Do name clashes happen so often that it's better to implement a
handmade name mangling scheme rather than just change a variable name
when one pops up?

> Unfortunately, this solution is (as far as I know) not universally
> accepted, so if I want
> to multiply inherit from other people's classes (who didn't follow this
> solution), it won't
> work. Short: I can make it work if I write all the classes myself, I
> can't make it work if
> I try to re-use other people's code.

I would suggest that, if you're trying to make a mixin of completely
different classes, super is only one of many worries.  There's all
sorts of things that could go wrong when you do that in Python.  When
you have a class that's not expecting to be in an inheritance
hierarchy, it might not even be calling super.  Or if it's a derived
class, it might be using old-style calls to its base class.  Sometimes
classes have incompatible C-footprints and can't be mixed in.  Python
objects use a single namespace for their instance variables, so if
these two classes happen to use the same name for a variable, you're pretty
much out of luck.  If any of the classes use a different metaclass
there could be all sorts of problems.

Bottom line is: if you're going to multiply inherit from two classes,
they have to cooperate in some fashion.  You have to be careful, and
sometimes you can't avoid modifying the base classes.  (Yes, you can do
that.)

> Which is a pity, since this means that I can't even use multiple
> inheritance for mixin
> classes (where I know that no diamond will appear in the hierarchy, and
> I have a simple
> tree - except for the shared object superclass).

Um, sure you can.  You can't always, but that doesn't mean you never can.

> So, for the moment my conclusion is that although Python has some
> syntax for
> multiple inheritance, it doesn't support it very well, and I should
> probably stick to
> single inheritance.

"well supported" is a relative term.  I don't disagree that MI can be
unwieldy in Python, but it seems to me that throwing it out of the
toolshed is a bit extreme here.  There's just times when MI is the best
and most elegant solution to a problem, and not using it because you
have to fiddle with __init__ args a little is probably not a good
thing.

> Also, if there would be some language support for the argument passing
> problem
> (especially for __init__ methods), multiple inheritance would work just
> fine. Hopefully
> some future version of Python will solve this.

Not likely at all.  I think the Python philosophy is that MI is a thing
that's occasionally very useful, but not vital.  I think the designers
feel that a little unwieldiness in MI is a small price to pay compared
to the language changes needed to take away that unwieldiness.  For
instance, a foolproof way to avoid name clashes in mixin classes would
require a complete overhaul of namespaces in Python, and that is not
going to happen.

At best, what you might get is some sort of protocol (maybe a
decorator) that standardizes calling the base class methods, coupled
with a very strong suggestion to use it in all future code for all
classes.


Carl Banks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Optional Static Typing: Part II

2005-01-04 Thread Carl Banks
John Roth wrote:
> http://www.artima.com/weblogs/viewpost.jsp?thread=86641

Nitpicking: I don't think he's necessarily in good company w.r.t. types
vs classes.  Take Ada, for example.   In Ada, a class is a set of types
(in particular, the type and all its subtypes), which is kind of the
opposite way Guido claims to see it.  Not that Ada is relevant, and not
that there is ever any agreement on terminology in computer science,
but still.

Based on their English language meanings, I would tend to agree with
Ada's terminology.  But, based on how the terminology developed for
computer languages (especially under the influence of C++), it seems
that most people would regard class as more of an implementation.

Another question: can anyone think of something an interface statement
could do syntactically that an interface metaclass couldn't?  I couldn't
think of anything, based on the description, and it's not like the BDFL
to throw out keywords for things that current syntax can handle.  It
leads me to suspect that maybe he has something up his sleeve.  Hmm.
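
Just so it's clear what I mean by an interface metaclass, here's a toy
sketch of my own -- nothing like whatever Guido actually has in mind:

class InterfaceMeta(type):
    def __init__(cls, name, bases, namespace):
        super(InterfaceMeta, cls).__init__(name, bases, namespace)
        # toy behavior: just record which methods the interface declares
        cls.__methods__ = [k for k, v in namespace.items()
                           if callable(v) and not k.startswith('__')]

class IReadable(object):
    __metaclass__ = InterfaceMeta
    def read(self, size): pass
    def close(self): pass

print IReadable.__methods__   # e.g. ['read', 'close'], order may vary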
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dr. Dobb's Python-URL! - weekly Python news and links (Dec 30)

2005-01-04 Thread Carl Banks
Skip Montanaro wrote:
> I started to answer, then got confused when I read the docstrings for
> unicode.encode and unicode.decode:
[snip]


It certainly is confusing.  When I first started Unicoding, I pretty
much stuck to Aahz's rule of thumb, without understanding the details,
and I still do that.  But now I do understand it.

Although encodings are bijective (i.e., equivalent one-to-one
mappings), they are not apolar.  One side of the encoding is
arbitrarily labeled the encoded form; the other is arbitrarily labeled
the decoded form.  (This is not a relativistic system, here.)  The
encode method maps from the decoded to the encoded set.  The decode
method does the inverse.

That's it.  The only real technical difference between encode and
decode is the direction they map in.

By convention, the decoded form is a Python unicode string, and the
encoded form is the byte string.
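
A quick interactive illustration of the two directions (utf-8 is just
an example; the convention is the same for any Unicode codec):

. >>> u = u'caf\xe9'           # decoded form: a unicode string
. >>> s = u.encode('utf-8')    # encode maps decoded -> encoded
. >>> s
. 'caf\xc3\xa9'
. >>> s.decode('utf-8') == u   # decode maps encoded -> decoded
. True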

I believe it's technically possible (but very rude) to write an
"inverse encoding", where the "encoded" form is a unicode string, and
the decoded form is a UTF-8 byte string.

Also, note that there are some encodings unrelated to Unicode.  For
example, try this:

. >>> "abcd".encode("base64")
This is an encoding between two byte strings.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Operating System???

2005-01-06 Thread Carl Banks
Arich Chanachai wrote:
> But then again, if you don't like C++, you probably won't like Java.
> They can be very different languages, but in my experience, the
> reasons why one does not like C++ is usually due to a quality/flaw
> that can also be found in Java.

Oh, brother.

The Zen of Python says that "simple is better than complex" and
"complex is better than complicated".  Java does pretty well here.  C++
didn't even get "complicated is better than convoluted" right.  There
are a ton of flaws in C++ not found in Java.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to extract columns like awk $1 $5

2005-01-07 Thread Carl Banks
Roy Smith wrote:
> Hmmm.  There's something going on here I don't understand.  The ref
> manual (3.3.5 Emulating container types) says for __getitem__(),
> "Note: for loops expect that an IndexError will be raised for illegal
> indexes to allow proper detection of the end of the sequence."  I
> expected my little demo class to therefore break for loops, but they
> seem to work fine:
>
> >>> import awk
> >>> l = awk.awkList ("foo bar baz".split())
> >>> l
> ['foo', 'bar', 'baz']
> >>> for i in l:
> ... print i
> ...
> foo
> bar
> baz
> >>> l[5]
> ''
>
> Given that I've caught the IndexError, I'm not sure how that's
> working.


The title of that particular section is "Emulating container types",
which is not what you're doing, so it doesn't apply here.  For built-in
types, iterators are at work.  The list iterator probably doesn't even
call getitem, but accesses the items directly from the C structure.
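
A quick way to see this (Python 2.2 or later; the subclass is just a
throwaway for illustration):

class L(list):
    def __getitem__(self, i):
        print "__getitem__ called"
        return list.__getitem__(self, i)

for item in L(["foo", "bar", "baz"]):
    print item

This prints foo, bar, and baz with no "__getitem__ called" lines: the
for loop asks the list for an iterator and never goes through
__getitem__ at all.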
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sorting on keys in a list of dicts

2005-01-07 Thread Carl Banks
Jeff Shannon wrote:
> Jp Calderone wrote:
>
> > L2 = [(d[key], i, d) for (i, d) in enumerate(L)]
> > L2.sort()
> > L = [d for (v, i, d) in L2]
>
> Out of curiosity, any reason that you're including the index?  I'd
> have expected to just do
>
>  L2 = [(d[key], d) for d in L]
>  L2.sort()
>  L = [d for (v, d) in L2]


Suppose L is a list of objects that can't be compared (for example,
they are dicts that have complex number items) and the keys are not all
distinct.  If sort tries to compare two equal keys, it'll proceed to
compare the objects themselves, which could throw an exception.

Stick the index in there, and that possibility is gone.
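
Something like this is what I have in mind (made-up data):

L = [{"name": "a", "z": 1j}, {"name": "a", "z": 2j}]

# Without the index the keys tie, so sort() goes on to compare the
# dicts themselves, and ordering complex numbers raises TypeError:
#     [(d["name"], d) for d in L].sort()
# With the index, ties are broken by i and the dicts never get compared:
L2 = [(d["name"], i, d) for i, d in enumerate(L)]
L2.sort()
L = [d for (v, i, d) in L2]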


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-08 Thread Carl Banks
Nick Coghlan wrote:
> Andrey Tatarinov wrote:
> > Hi.
> >
> > It would be great to be able to reverse usage/definition parts in
> > haskell-way with "where" keyword. Since Python 3 would miss lambda,
> > that would be extremly useful for creating readable sources.
> >
> > Usage could be something like:
> >
> >  >>> res = [ f(i) for i in objects ] where:
> >  >>> def f(x):
> >  >>> #do something
>
[snip]
> For compound statements, a where clause probably isn't appropriate,
> as it would be rather unclear what the where clause applied to.

Right.  But you know that as soon as you add this to simple
expressions, a bunch of people are going to come here whining about how
they don't get to use where with if-expressions.

Frankly, they might have a point here.  Although we have replacing
lambda expressions on our minds, I have in mind a different problem
that a where-statement would solve perfectly.  But it would have to be
used with an if-expression.

However, I think it might not be so hard.  Let's take Paul Rubin's
advice and precede the if statement with where.  Let's also allow
"elif" clauses to be replaced with "else where ... if" clauses.  That
which is bound in the where-block would be visible in both the
if-expression and if-block.

Then we could do this:

. where:
. m = someregexp.match(somestring)
. if m:
. blah blah blah
. else where:
. m = someotherregexp.match(somestring)
. if m:
. blah blah blah

We might want to spell "else where" instead as "elwhere", to match
"elif", but that's not important now.  This would take away one of the
major minor annoyances of Python.  (In fact, I've suggested something
like this as a solution to the set-and-test idiom, which Python makes
difficult, only I used the keyword "suppose" instead of "where".)

Ok, but if you do that, now you have people whining that "where" comes
after some expressions, and before others.  (This would not bother me
one bit, BTW, but I'm pretty sure I'd lose the popular vote on this
one.)

So, let's go all out and say that where could precede any statement.
We now have consistency.  Well, that really wouldn't work for the
if-statement, though, because then how could we apply a different
where-block to an else clause?  We'd have to treat if-statements
specially anyways.  So we don't have consistency.

My solution would be to propose two different where statements: a
where...do statement, and a separate where...if statement.  The
where...do statement would look like this:

. where:
. def whatever(): pass
. do:
. blah blah use whatever blah

It has the advantage of being able to apply the where bindings to
several statements, and is, IMO, much cleaner looking than simply
applying where's bindings to the single following unindented statement.

I would recommend against where...while and where...for statements.
They can't accomplish anything you couldn't do with a break statement
inside the block, and it's not obvious whether the where clause gets
executed once or for each loop (since it's physically outside the loop
part).

One question: what do you do with a variable bound inside a where-block
that has the same name as a local variable?  (Or, horrors, a
surrounding where-block?)  I'm inclined to think it should be illegal,
but maybe it would be too restrictive.
Anyways, I like this idea a lot.

+1


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-08 Thread Carl Banks
Peter Hansen wrote:
> Andrey Tatarinov wrote:
> >  >>> print words[3], words[5] where:
> >  >>> words = input.split()
> >
> > - defining variables in "where" block would restrict their
> > visibility to one expression
>
> Then your example above doesn't work...  print takes a
> sequence of expressions, not a tuple as you seem to think.

You misunderstand.  There "where" is not part of the expression but the
statement.  The above example would be a modified print statement, a
print...where statement, if you will.  Under this suggestion, there
would be modified versions of various simple statements.

This wouldn't be a problem parsing, of course, because "where" would be
a keyword.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-08 Thread Carl Banks

Bengt Richter wrote:
> And, is the whole thing after the '=' an expression? E.g.,
>
>   x = ( foo(x) where:
>  x = math.pi/4.0
>   ) where:
>  def foo(x): print 'just for illustration', x

How would that be any improvement over this?

. x = foo(x) where:
. x = math.pi/4.0
. def foo(x): print 'just for illustration', x

Can anyone think of a use case for embedding "where" inside an
expression as opposed to making it part of a simple statement?  And, if
so, is the benefit of it worth the massive hit in readability?


> or is this legal?
>
>   for y in ([foo(x) for x in bar] where:
>  bar = xrange(5)
> ): baz(y) where:
> def baz(arg): return arg*2

Here, I can only hope not.  One reason I proposed a where...do syntax
is so that, if you wanted to localize a variable to a for loop or some
other compound statement, you could do it with a minimum of fuss.

. where:
. bar = xrange(5)
. def baz(arg): return arg*2
. do:
. for y in [foo(x) for x in bar]:
. baz(y)


> Not trying to sabotage the idea, really, just looking for
clarification ;-)

That's ok.  For it to fly, it's got to be able to withstand the flak.
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-08 Thread Carl Banks

Paul Rubin wrote:
> "Carl Banks" <[EMAIL PROTECTED]> writes:
> > You misunderstand.

BTW, Peter, I guess I should have said "I misunderstand, but it can be
legal if you consider it part of the statements", since it appears the
author did intend it to be part of an expression.


> > There "where" is not part of the expression but the
> > statement.  The above example would be a modified print statement,
a
> > print...where statement, if you will.  Under this suggestion, there
> > would be modified versions of various simple statements.
>
> You mean I can't say
>
># compute sqrt(2) + sqrt(3)
>x = (sqrt(a) where:
>  a = 2.) \
>+ sqrt (a) where:
>a = 3.
>
> Hmmm.

What would be the advantage of that over this?

. x = sqrt(a) + sqrt(b) where:
. a = 2.0
. b = 3.0

Where would making "where" part of an expression rather than part of
the statement help?  Can you think of a place?  ("That it makes Python
more like LISP" is not a good enough answer for me, BTW.  But feel free
to try. :)


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: accessing the result of 'if'

2005-01-08 Thread Carl Banks
Nick Coghlan wrote:
> I have a different suggestion for this.
>
> 'as' is used for renaming in import statements. 'as' will be used for
exception
> naming in Python 3k.
>
> So let's use it for expression naming in 'if' statements, too.
>
> if someregexp.match(s) as m:
># blah using m
> elif someotherregexp.match(s) as m:
># blah using m


What if the condition you wanted to test wasn't the same as the thing
you want to save?  In other words, how would you convert this?

. where:
. m = something()
. if m > 20:
. do_something_with(m)

What you propose works for typical regexps idiom but not for the
slightly more general case.  However, I could see why some people might
not like the where...if syntax I proposed; it's kind of choppy and not
exactly easy to follow at a first glance.

As a compromise, howabout:

. if m > 20 where m=something():
. do_something_with(m)

In this case, the m=something() is NOT an assignment statement, but
merely a syntax resembling it.  The "where m=something()" is part of
the if-statement, not the if-expression.  It causes m to be visible in
the if-expression and the if-block.

It (or your suggestion) could work with a while-loop too.

. while line where line=f.readline():
. do_something_with(line)


The main problem here (as some would see it) is that you can't do
something this:

. if m > 20 where (def m(): a(); b()):


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: accessing the result of 'if'

2005-01-08 Thread Carl Banks
Nick Coghlan wrote:
> Carl Banks wrote:
> > What if the condition you wanted to test wasn't the same as the
> > thing you want to save?  In other words, how would you convert this?
> >
> > . where:
> > . m = something()
> > . if m > 20:
> > . do_something_with(m)
>
> Yeah, this problem eventually occurred to me as well. However, I think
> a little utility function can help solve it:
>
>    def test(val, condition):
>        if condition(val):
>            return val
>        else:
>            return None
>
>    if test(something(), lambda x: x < 10) as m:
>        print "Case 1:", m
>    elif test(something(), lambda x: x > 20) as m:
>        print "Case 2:", m
>    else:
>        print "No case at all!"

I'm sorry, I really can't agree that this helper function "solves" it.
IMO, it's a workaround, not a solution.  And, if I may be frank, it's a
pretty ugly one.

Not only that, but it still doesn't work.  What if the object itself is
false?


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: accessing the result of 'if'

2005-01-08 Thread Carl Banks

Donn Cave wrote:
> If Python 3 is going to get assignment-as-expression, it will be
> because GvR accepts that as a reasonable idea.  You won't bootleg it
> in by trying to hide it behind this "where" notion, and you're not
> doing "where" any good in trying to twist it this way either.

I suspect you misunderstood me.  When I proposed this:

. if m > 20 where m=something():
. use(m)

the "where m=something()" part is NOT part of the if-expression.  It is
an optional part of the if-statement.  A very poor excuse for a BNF
grammar of the if-statment would look like this:

."if" expr [ "where" symbol "=" expr ] suite ...

In no way, shape, or form did I ever intend for something like this to
be possible:

. x = (m > 20 where m=something())

Besides, if I had intended it to be an alternately-spelled assignment
expression, then it wouldn't have worked as I stated.  I wanted the
thing being bound to be visible inside the if-expression and the
if-block, and to do that it must have the cooperation of the if-block,
and therefore must be part of the if-statement.  If this were an
assignment expression, then m would have to either be visible within
the whole surrounding scope, or just within that expression.

What I proposed was really nothing more than a convenient way to sneak
an extra binding inside an elif clause.  (The real point here is not to
use this on if-clauses, but on elif-clauses.)


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-09 Thread Carl Banks

Paul Rubin wrote:
> Nick Coghlan <[EMAIL PROTECTED]> writes:
> > Trying to push it a level further (down to expressions) would, IMO,
> > be a lot of effort for something which would hurt readability a lot.
>
> I think we should just try to do things in a simple and general way
> and not try to enforce readability.  For example, the
> slightly-overcomplex code that I proposed might have been generated
> by a macro, or even by a compiler from some other language.  No human
> would ever have to look at it, so it doesn't matter whether it's
> easily readable.  There's no reason to add needless constraints on
> the language just to make writing ugly code difficult.  The main goal
> should be to make writing clear code easy, not to worry about whether
> someone might also write ugly code.


I couldn't disagree more.  I believe in the Zen of Python.  I believe
the Zen of Python is the major factor responsible for Python's
success.  I believe that moving away from the Zen of Python will only
diminish the language.

And I think allowing a where statement inside an expression goes
against the Zen in so many ways it isn't even funny.

Beautiful is better than ugly.
Simple is better than complex.  (Note that simple means different
things to different people: for me, and I believe, for the Zen of
Python, it means simple for a human to understand.)
Flat is better than nested. (Seems to be the official Zen in effect
here.)
Readability counts. (Yes, if something's unreadable enough, I hope it's
not in the language, not merely that no one uses it.)
Special cases aren't special enough to break the rules.  (Heretofore,
Python has never had a nested block inside an expression; doing that
would make it a special case.)


I don't want Python to be LISP.  I don't think it's an admirable goal
for Python to strive to be like LISP for the sake of being like LISP,
or for the sake of being general or pure.  If Python borrows something
from LISP, it should be because that aspect of LISP supports the Zen of
Python.

If I wanted to use LISP, I'd be using LISP.  But I like my statements
and expressions distinct.  I like things that belong in statements to
stay in statements, and things that belong in expressions to stay in
expressions.

And a suite, be it a def statement, a where block, or whatever, belongs
in a statement, not an expression.



-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-10 Thread Carl Banks

Paul Rubin wrote:
> "Carl Banks" <[EMAIL PROTECTED]> writes:
> > And a suite, be it a def statement, a where block, or whatever,
> > belongs in a statement, not an expression.
>
> So do you approve of the movement to get rid of the print statement?

Any little incremental change in Python you could make by having or not
having a print statement would be minor compared to the H-Bomb of
ugliness we'd get if suites of statements were to be allowed inside
Python expressions.  Having or not having a print statement might
violate some small aspect of the Zen, but it won't rape the whole list.

So I don't know what point you're trying to make.

But to answer your question, I would prefer a Python without a print
statement, since a print method could do anything the print statement
could.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-10 Thread Carl Banks

Paul Rubin wrote:
> "Carl Banks" <[EMAIL PROTECTED]> writes:
> > > So do you approve of the movement to get rid of the print
> > > statement?
> >
> > Any little incremental change in Python you could make by having or
> > not having a print statement would be minor compared to the H-Bomb of
> > ugliness we'd get if suites of statements were to be allowed inside
> > Python expressions.  Having or not having a print statement might
> > violate some small aspect of the Zen, but it won't rape the whole
> > list.
>
> How about macros?  Some pretty horrible things have been done in C
> programs with the C preprocessor.  But there's a movement afloat to
> add hygienic macros to Python.  Got any thoughts about that?

How about this: Why don't you go to a Python prompt, type "import
this", and read the Zen of Python.  Consider each line, and whether
adding macros to the language would be going against that line or for
it.  After you've done that, make an educated guess of what you think
I'd think about macros, citing various Zens to support your guess.

Then I'll tell you what my thoughts about it are.


> > So I don't know what point you're trying to make.
>
> Why should you care whether the output of a macro is ugly or not,
> if no human is ever going to look at it?

I don't.


> > But to answer your question, I would prefer a Python without a
> > print statement, since a print method could do anything the print
> > statement could.
>
> A print -method-?!!
[snip example]
>
> I've heard of people wanting to replace print with a function, but
> hadn't heard of replacing it with a method.  Are you trying to turn
> Python into Ruby?

I'll give you the benefit of the doubt that you just didn't think it
over thoroughly.  I was thinking it would be a method of file-like
objects.

stdout.print("hello")


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-10 Thread Carl Banks
Paul Rubin wrote:
>
> The Zen of Python, by Tim Peters
>
> Beautiful is better than ugly.  => +1 macros
> Explicit is better than implicit.  => +1 macros
> Simple is better than complex.  => +1 macros
> Complex is better than complicated.  => I don't understand this, +0
> Flat is better than nested.  => not sure, +0
> Sparse is better than dense.  => +1 macros
> Readability counts.  => +1 macros
> Special cases aren't special enough to break the rules.  => +1 macros
> Although practicality beats purity.  => +1 macros
> Errors should never pass silently.  => +1 macros
> Unless explicitly silenced.  => +1 macros
> In the face of ambiguity, refuse the temptation to guess.  => +1 macros
> There should be one-- and preferably only one --obvious way to do it.  => -1
> Although that way may not be obvious at first unless you're Dutch.  => ???
> Now is better than never.  => +1 macros, let's do it
> Although never is often better than *right* now.  => +1
> If the implementation is hard to explain, it's a bad idea.  => unknown, +0
> If the implementation is easy to explain, it may be a good idea.  => +0
> Namespaces are one honking great idea -- let's do more of those!  => +1
>
> I'm -1 on doing stuff by received dogma, but in this particular case
> it looks to me like the dogma is +12 for macros.  What are your
> thoughts?

Paul,

When I asked you to do this, it was just a rhetorical way to tell you
that I didn't intend to play this game.  It's plain as day you're
trying to get me to admit something.  I'm not falling for it.

If you have a point to make, why don't you just make it?
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-10 Thread Carl Banks
Paul Rubin wrote:
> "Carl Banks" <[EMAIL PROTECTED]> writes:
> > When I asked you to do this, it was just a rhetorical way to tell
> > you that I didn't intend to play this game.  It's plain as day you're
> > trying to get me to admit something.  I'm not falling for it.
> >
> > If you have a point to make, why don't you just make it?
>
> You asked me to compare the notion of macros with the Zen list.  I
> did so.  I didn't see any serious conflict, and reported that finding.
> Now you've changed your mind and you say you didn't really want me to
> make that comparison after all.

I asked you to make an educated guess about what I would think of them,
which you didn't do.  I wanted you to apply the Zen to macros so that
you could justify the guess.  I wasn't interested in your thoughts.


> An amazing amount of the headaches that both newbies and experienced
> users have with Python, could be solved by macros.  That's why
> there's been an active interest in macros for quite a while.  It's
> not clear what the best way to do design them is, but their existence
> can have a profound effect on how best to do these ad-hoc syntax
> extensions like "where".  Arbitrary limitations that are fairly
> harmless without macros become a more serious pain in the neck if we
> have macros.

What good are macros going to do when they entail (according to you)
saddling the language with all this unreadable crap?  You may say
macros are not against the Zen of Python, but for their sake, you will
add a million things that are.  Net effect is, you've taken away
everything that makes Python great.

But here's the best part: all of this is to avoid a "serious pain in
the neck."

Get real, Paul.

Here's a thought: if macros are so great, it should be pretty easy for
you to create a halfway syntax with none of these pesky so-called
"arbitrary limitations" and have macros automatically turn it into
legal Python.  Don't you think that's maybe better than turning the
language into an unreadable blob?

No, of course you don't, because an unreadable blob is the LISP way.


> So, we shouldn't consider these topics separately from each other.
> They are likely to end up being deeply related.

No, Paul, they're likely never to be related because Python is never
going to have macros.  Or, at least not the general sort that you want.
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-11 Thread Carl Banks
Tim Peters wrote:
> ...
>
> [Anna]
> >> BTW - I am *quite* happy with the proposal for "where:" syntax - I
> >> think it handles the problems I have with lambda quite handily.
>
> [Steve Holden]
> > Whereas I find it to be an excrescence, proving (I suppose) that
> > one man's meat is another person's poison, or something.
>
> I've been waiting for someone to mention this, but looks like nobody
> will, so I'm elected. Modern functional languages generally have two
> forms of local-name definition, following common mathematical
> conventions.  "where" was discussed here.  The other is "let/in", and
> seems a more natural fit to Python's spelling of block structure:
>
> let:
> suite
> in:
> suite

Ah.  During that discussion, I did kind of suggest this (spelling it
where...do) as an alternative to where (thinking I was clever).  Only
no one seemed to take notice, probably because I suggested something
more poignant at the same time.

Now I see why I liked the idea so much; it was exactly like let forms.


> There's no restriction to expressions here.  I suppose that, like the
> body of a class, the `let` suite is executed starting with a
> conceptually empty local namespace, and whatever the suite binds to a
> local name becomes a temporary binding in the `in` suite (like
> whatever a class body binds to local names becomes the initial value
> of the class __dict__).  So, e.g.,
>
> i = i1 = 3
> let:
> i1 = i+1
> from math import sqrt
> in:
> print i1, sqrt(i1)
> print i1,
> print sqrt(i1)
>
> would print
>
> 4 2
> 3
>
> and then blow up with a NameError.
>
> LIke it or not, it doesn't seem as strained as trying to pile more
> gimmicks on Python expressions.

Indeed.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: complex numbers

2005-01-12 Thread Carl Banks

It's me wrote:
> The world would come to a halt if all of a sudden nobody understands
> complex numbers anymore.  :-)
Actually, it would oscillate out of control.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python and macros (again) [Was: python3: 'where' keyword]

2005-01-14 Thread Carl Banks

Skip Montanaro wrote:
> Fredrik> no, expressions CAN BE USED as statements.  that doesn't
> Fredrik> mean that they ARE statements, unless you're applying
> Fredrik> belgian logic.
>
> Hmmm...  I'd never heard the term "belgian logic" before.  Googling
> provided a few uses, but no formal definition (maybe it's a European
> phrase so searching for it in English is futile).  The closest thing
> I found was
>
> Or is it another case of Belgian logic, where you believe it
> because theres no evidence or motive whatsoever?
Maybe it's Belgian logic, as opposed to Dutch logic.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python and macros (again) [Was: python3: 'where' keyword]

2005-01-14 Thread Carl Banks

Tim Jarman wrote:
> IANA French person, but I believe that Belgians are traditionally
> regarded as stupid in French culture, so "Belgian logic" would be
> similar to "Irish logic" for an English person. (Feel free to insert
> your own cultural stereotypes as required. :)

Ok.

http://www.urbandictionary.com/define.php?term=belgian&r=f
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-19 Thread Carl Banks

Skip Montanaro wrote:
> Bill> The example that occurs to me is that "import smtplib" is
> Bill> better than "import stdlib.inet.services.smtp".
>
> Sure.  There is a balance to be achieved however.  "import
> std.smtplib" might be better than "import smtplib", simply because
> making the standard library a package reduces the possibility of
> namespace collisions.

Yep.  "Reorganize the standard library to be not as shallow" is listed
right there in PEP 3000.  Notice, however, that it doesn't say,
"Reorganize the standard library into an intricate feudal hierarchy of
packages, modules, and cross-references." :)

The gist of "Flat is better than nested" is "be as nested as you have
to be, no more," because being too nested is just a mess.
-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-19 Thread Carl Banks

Timothy Fitz wrote:
> While I agree that the Zen of Python is an amazingly concise list of
> truisms, I do not see any meaning in:
>
> Flat is better than nested.
>
> I strive for balance between flat and nested. Does anyone have a good
> example of where this is applied? (specifically to python, or in
> general)


I think the essence of why this Zen is true is the recognition that the
world isn't really organized into a nice, proper, perfect hierarchy,
where every node is a perfect little subset of its parent.

The fact is, some things are flat.  A list is flat.  Is it not better
to deal with flat things flatly?

And some things aren't flat, but they're not nested either.  Rather,
they are various overlapping sets, but not subsets.  Is it better to
deal with such a mess by shoehorning it into a hierarchy that isn't
applicable, or to make it flat and deal with the subsets individually?

I shall give two examples of where Python exhibits this Zen, and doing
one my favorite things in the process: slamming other languages.


ITERATION

There are two ways to iterate: the flat way, with a for loop or list
comprehension or something like that; and the nested way, recursively.
We all know that recursion is often absolutely necessary.  But usually
flat iteration suffices.

In other languages (notoriously LISP and functional languages), there
is a tendency to do it recursively anyways.  For example, the following
Python code illustrates a recursive way to copy a list that would be
considered an example of "good code" in a language such as LISP:

. def copylist(a):
.     if a: return a[:1] + copylist(a[1:])
.     else: return a

LISP, of course, was designed to make recursive processing like this
easy and efficient.  But it doesn't, IMHO, keep recursion from being
much harder to figure out.  Although this has a sort of coolness in
that you can see the list copy operation reduced to its simplest
possible form (in the same way that Game of Life is cool because you
get all kinds of complexity from very simple rules), it completely
misses the big picture.  When I want to copy a list, I want to copy a
list; I don't want to figure out the minimal rule set I could use to do
it.  Iteration fits the mind better.

Python, IMO, wisely put the focus on iteration.
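
For contrast, the flat version of the same thing (or just use a[:],
of course):

. def copylist(a):
.     result = []
.     for item in a:        # one flat pass, no call stack games
.         result.append(item)
.     return result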


POLYMORPHISM

In many languages, the only way to get dynamic polymorphism is to
subclass.  If two classes are to share an interface, they have to both
be subclasses of a common base class.  If you have lots of classes that
you want to share parts of their interfaces, you have to put them all
into that hierarchy.

The problem is, the world isn't a neatly organized tree, where every
type of object must have functionality that is an exact proper subset
of some other type of object.  So what you end up with is this big,
messy, inflexible hierarchy of classes that is difficult to make
changes or add new options to.  Many classes have stuff in them that
ought not to be there, just to satisfy the requirements of subclassing.
Oftentimes, the root class has lots of methods that many subclasses
don't implement.  Oftentimes, there will be subclasses with
functionality that isn't polymorphic because the root class doesn't
define virtual methods for them.

(For an example of all this madness: in I/O hierarchies, the base class
often has seek() and tell() methods, but since not all streams are
seekable, it also has to have a seekable() method.  Thus, polymorphic
behavior and its benefits are abandoned in favor of the procedural way.
Sure, for this case, you could just define a SeekableStream
intermediate class, only for seekable streams.  But here's the thing:
there are any number of functionalities a stream may or may not have.
Are you going to design a hierarchy with branches to account for all
possible combinations of them?)

Not so in Python.  In Python, if you want two classes to have the same
interface, then write two classes that have the same methods.  Bam, you
got polymorphism.  No shoehorning into a hierarchy necessary.  It's
what we call duck typing.  (If it looks like a duck, and floats like a
duck, it's made of wood.)
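
A trivial illustration (toy classes, obviously):

. class Duck:
.     def speak(self):
.         return "quack"
.
. class Tape:                   # no relation to Duck in any hierarchy
.     def speak(self):
.         return "quack (recorded)"
.
. def annoy(thing):
.     print thing.speak()       # anything with a speak() method works
.
. annoy(Duck())
. annoy(Tape())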

The flat method of polymorphism is so much better it isn't even funny.
Again, Python chose wisely.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-19 Thread Carl Banks

Timothy Fitz wrote:
> On 19 Jan 2005 15:24:10 -0800, Carl Banks <[EMAIL PROTECTED]> wrote:
> > The gist of "Flat is better than nested" is "be as nested as you
> > have to be, no more," because being too nested is just a mess.
>
> Which I agree with, and which makes sense. However your "gist" is a
> different meaning. It's not that "Flat is better than nested" it's
> that "Too flat is bad and too flat is nested so be as nested (or as
> flat) as you have to be and no more." Perhaps Tim Peters is far too
> concise for my feeble mind 

Couldn't you say the same about Simple and Complex?  or Explicit and
Implicit?


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Funny Python error messages

2005-01-21 Thread Carl Banks

Peter Hansen wrote:
> Will Stuyvesant wrote:
> > Perhaps this will even be a useful thread, to brighten the
> > life of the brave people doing the hard work of providing us
> > with error messages.
> >
> > My first one (i'm learning, i'm learning) is
> >
> > TypeError: 'callable-iterator' object is not callable
> >
> > # >>> it = iter(lambda:0, 0)
> > # >>> it()
> > # TypeError: 'callable-iterator' object is not callable
>
> Given that the supposed humour depends on the *name* of
> the object, which is "callable-iterator", I'd say it's
> probably not hard to come up with lots of "funny" error
> messages this way.

The mildly amusing nature of this error message is due to Will's
finding a name, "callable-iterator" (where callable is a name, not a
description), appearing in a different context from where it was
coined, which causes us to parse it differently (where callable is a
description, not a name) and so accidentally state an absurdity.
I'd say it's actually a nice bit of subtlety.



-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another scripting language implemented into Python itself?

2005-01-24 Thread Carl Banks
Roy Smith wrote:
> Rocco Moretti <[EMAIL PROTECTED]> wrote:
> > The OP doesn't mention his application, but there is something to
> > be said about domain specific scripting languages. A well designed
> > domain-specific scripting language(*) with the appropriate high
> > level constructs can make script writing simpler.
>
> This is a bit of a sore point with me.
>
> I've been involved with several projects where people felt the need
> to invent their own scripting languages.  It usually starts with "we
> don't need the power of a full programming language, we only need to
> be able to do X, Y, and Z".  So they write a little language which
> lets them do X, Y, and Z.
>
> Then they discover they need more complex data structures than they
> originally thought.  And nested loops.  And functions.  And more
> sophisticated variable scoping rules.  And a regex library.  And 47
> other things.  So they duct-tape all those into the system.

Not only is it short-sighted, I would say it is quite arrogant to
believe you can anticipate every possible use of this script you're
going to implement, and can safely hold back essential language
features.


> Anyway, that's my little rant on inventing your own scripting
> language.  Imbed

EMBED.

This spelling error wouldn't bother me if I didn't work with people
whose own job title is embedded control engineer, yet who still
misspell it "imbedded."


> Python, or Perl, or TCL, or Ruby, or PHP,

Not PHP.  PHP is one of the better (meaning less terrible) examples of
what happens when you do this sort of thing, which is not saying a lot.
PHP was originally not much more than a template engine with some
crude operations and decision-making ability.  Only its restricted
problem domain has saved it from the junkheap where it belongs.

TCL isn't that great in this regard, either, as it makes a lot of
common operations that ought to be very simple terribly unwieldy.


> or Java, or whatever
> floats your boat.  Almost any choice has got to be better than
> rolling your own.  Invest your intellectual capital doing what you
> can do best, and don't get bogged down developing a new language.

You're right.  I use a custom, domain-specific language in my work.
Whenever I use it, all I can think of is how much better this POS would
be if they had just extended Python (probably would even be faster).
At least they were smart enough to (try to) make it into a complete
programming language.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another scripting language implemented into Python itself?

2005-01-25 Thread Carl Banks

Roy Smith wrote:
> "Carl Banks" <[EMAIL PROTECTED]> wrote:
>
> > > Imbed
> > EMBED.
>
> My apologies for being sloppy.  And with an initial capital, so it
> just jumps off the page at you :-)

Ok.  Prescriptive language isn't normally my cup of tea, but there's
always something.  And usually it's very silly.

> > > Python, or Perl, or TCL, or Ruby, or PHP,
> >
> > Not PHP.  PHP is one of the better (meaning less terrible) examples
> > of what happens when you do this sort of thing, which is not saying
> > a lot.
>
> But, that's exactly my point.  To be honest, I've never used PHP.
> But however bad it may be, at least it's got a few years of people
> fixing bugs, writing books, writing add-on tools, etc, behind it.
> Better to use somebody else's well-known and well-supported mess of a
> scripting language than to invest several person-years inventing your
> own mess that's no better.

Well, if you look at it that way, I guess so.

My mindset was closer to "hacked-up quasi-languages are evil" than
"hacked-up quasi-languages are not worth the time to implement when
there are plenty of hacked-up quasi-languages already out there, not to
mention some real languages."


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: what's OOP's jargons and complexities?

2005-01-28 Thread Carl Banks

Dan Perl wrote:
> I will not get into your "history" of the "OOP hype".


The best thing to is just ignore him.

But, if he bothers you too much to let it slide, then don't take issue
with anything he writes.  Just post a follow-up warning the newbies
that he's a pest, that his claims are untrue and his advice is not
good, and that his posts appear to be just trolling in disguise.

(Or, you could do what I do when I feel a need to reply: follow-up with
a Flame Warriors link.  For Xah, it would probably be this:
http://tinyurl.com/4vor3 )


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: variable declaration

2005-01-31 Thread Carl Banks

Thomas Bartkus wrote:
> Python *does* require that your variables be declared and initialized
> before you use them. You did that with epsilon=0 and S=0 at the top.
> It is unfortunate, however, that the statement epselon=epsilon+1 also
> declares a new variable in the wrong place at the wrong time. Such
> mispellings are a *common* error caught instantly in languages that
> require a more formal declaration procedure.


I have no interest in arguing this right now, but it does raise a
question for me:  How common is it for a local variable to be bound in
more than one place within a function?  It seems that it isn't (or
shouldn't be) too common.

Certainly the most common case where this occurs is for temporary
variables and counters and stuff.  These typically have short names and
thus are not as likely to be misspelled.

Another common place is for variables that get bound before and inside
a loop.  I would guess that's not as common in Python as it is in other
languages, seeing that Python has features like iterators that obviate
the need to do this.  (The OP's original example should have been "for
epsilon in range(10)"; epsilon only needed bound in one place.)

I guess this might be why, in practice, I don't seem to encounter the
misspelling-a-rebinding error too often, even though I'm prone to
spelling errors.  Perhaps, if someone runs into this error a lot, the
problem is not with Python, but with their tendency to rebind variables
too much?  Just a thought.


-- 
CARL BANKS

-- 
http://mail.python.org/mailman/listinfo/python-list

