Easy questions from a python beginner

2010-07-11 Thread wheres pythonmonks
I'm an old Perl-hacker, and am trying to Dive in Python.  I have some
easy issues (Python 2.6)
which probably can be answered in two seconds:

1.  Why is it that I cannot use print in booleans??  e.g.:
>>> True and print "It is true!"

I found a nice work-around using eval(compile(..., "<string>", "exec"))...
Seems ugly to this Perl Programmer -- certainly Python has something better?

2.  How can I write a function, "def swap(x,y):..." so that "x = 3; y
= 7; swap(x,y);" gives x=7, y=3?
(I want something like Perl's reference "\" operator, or C's &.)
(And if I cannot do this [other than creating an Int class], is this
behavior limited to strings, tuples, and numbers?)

3.  Why might one want to store "strings" as "objects" in numpy
arrays?  (Maybe they wouldn't)?

4.  Is there a way for me to make some function-definitions explicitly
module-local?
(Actually related to Q3 below: Is there a way to create an anonymous scope?)

5. Is there a way for me to introduce indentation-scoped variables in Python?
See for example: http://evanjones.ca/python-pitfall-scope.html

6.  Is there a Python checker that enforces Strunk and White, and is
bad English grammar considered anti-Pythonic?  (Only half joking)
http://www.python.org/dev/peps/pep-0008/

Thanks,
W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy questions from a python beginner

2010-07-11 Thread wheres pythonmonks
Thanks for your answers -- it is much appreciated.

On #1:  I had very often used chained logic with both logging and
functional purposes in Perl, and wanted to duplicate this in Python.
"It reads like english"  Using the print_ print wrapper works for me.

Follow-up:
Is there a way to define compile-time constants in Python and have the
bytecode compiler optimize away expressions like:

if is_my_extra_debugging_on: print ...

when "is_my_extra_debugging" is set to false?  I'd like to pay no
run-time penalty for such code when extra_debugging is disabled.
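
[The closest thing I have found so far is the built-in __debug__ flag.
CPython seems to drop "if __debug__:" blocks entirely when the interpreter is
started with -O, so a guard like

if __debug__:
    print "extra debugging"    # no bytecode emitted for this under "python -O"

costs nothing in optimized runs, whereas an ordinary module-level boolean is
still tested at run time.]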

On #2:  My point regarding the impossibility of writing the swap
function for ints is to explicitly understand that this isn't
possible, so as not to look for solutions along those lines when
trying to write Python code.

On #3:  Sorry this is confusing, but I was browsing some struct array
code from numpy, in which one of the columns contained strings, but
the type information, supplied in numpy.array's dtype argument,
specified the type as an "object", not a string.  Just wondering why
one would do that.

On #4:  So there are some hacks, but not something as easy as "import
unimportable" or an @noexport decorator.  The underscore works, so
does "del".

On #5: Nesting the function was actually what I was thinking of doing,
but alas, I cannot rebind outer-scope variables from within a nested
function, and of course I don't want to use globals.
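
[A sketch of the workaround I have in mind -- rebinding an outer-scope name
isn't possible in 2.6, but mutating a container that lives in the enclosing
scope is:

def outer():
    state = {'count': 0}          # shared, mutable "namespace"
    def bump():
        state['count'] += 1       # mutation, not rebinding, so this is legal
    bump()
    return state['count']         # -> 1

Python 3's "nonlocal" would make the helper dict unnecessary, as I understand it.]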

On #6: Always trying to improve my writing -- and I thought it was
cute that Guido tries to encourage this as well.


I am a programmer who likes to scope off variables as much as possible
(I believe in minimal state).

The following is an example of what I am trying to protect against:
http://stackoverflow.com/questions/938429/scope-of-python-lambda-functions-and-their-parameters

Will try to avoid namespace mangling until next week.

Thanks again,

W



On Sun, Jul 11, 2010 at 2:17 PM, Duncan Booth
 wrote:
> wheres pythonmonks  wrote:
>
>> I'm an old Perl-hacker, and am trying to Dive in Python.  I have some
>> easy issues (Python 2.6)
>> which probably can be answered in two seconds:
>>
>> 1.  Why is it that I cannot use print in booleans??  e.g.:
>>>>> True and print "It is true!"
>>
>> I found a nice work-around using
>> eval(compile(..., "<string>", "exec"))... Seems ugly to this Perl
>> Programmer -- certainly Python has something better?
>
> In Python 2.x print is a statement. If you really wanted you could do:
>
>   True and sys.stdout.write("It is true!\n")
>
> In Python 3 you can do this:
>
>   True and print("It is true!")
>
> though I can't think of any situations where this would be better than just
> writing:
>
>   if somecondition: print "whatever"
>
>>
>> 2.  How can I write a function, "def swap(x,y):..." so that "x = 3; y
>>= 7; swap(x,y);" gives x=7, y=3?
>
> Why use a function?
>
>   x, y = y, x
>
>> (I want to use Perl's Ref "\" operator, or C's &).
>> (And if I cannot do this [other than creating an Int class], is this
>> behavior limited to strings,
>>  tuples, and numbers)
>
> If you want to use perl's operators I suggest you use perl.
>
>>
>> 3.  Why might one want to store "strings" as "objects" in numpy
>> arrays?  (Maybe they wouldn't)?
>
> Why would one want to write incomprehensible questions?
>
>>
>> 4.  Is there a way for me to make some function-definitions explicitly
>> module-local?
>> (Actually related to Q3 below: Is there a way to create an anonymous
>> scope?)
>
> Not really.
>
>>
>> 5. Is there a way for me to introduce indentation-scoped variables in
>> python? See for example: http://evanjones.ca/python-pitfall-scope.html
>
> No. The page you reference effectively says 'my brain is used to the way
> Java works'. *My* brain is used to the way Python works. Who is to say
> which is better?
>
>>
>> 6.  Is there a Python checker that enforces Strunk and White, and is
>> bad English grammar considered anti-Pythonic?  (Only half joking)
>> http://www.python.org/dev/peps/pep-0008/
>>
> pylint will do quite a good job of picking over your code. Most people
> don't bother.
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy questions from a python beginner

2010-07-22 Thread wheres pythonmonks
Okay -- so I promised that I would try the namespace mangling
approach, and here's what I have come up with:

Approach #1:  Pass in the variables to be swapped as strings.  (boring)

>>> import sys
>>> def swap(n1,n2):
...  try:
...   raise RuntimeError()
...  except:
...   e,b,t = sys.exc_info()
...  ldict = t.tb_frame.f_back.f_locals
...  t = ldict[n1];
...  ldict[n1] = ldict[n2]
...  ldict[n2] = t
...
>>> x = 'A'
>>> y = 'B'
>>> id(x)
47946650437696
>>> id(y)
47946649710192
>>> swap('x','y')
>>> print id(x)
47946649710192
>>> print id(y)
47946650437696
>>> print x,y
B A

Approach #2:  Allow the user to pass in arbitrary objects, take their
ids, infer which variables they are by hashing every object in the
caller's frame, and then apply the swap operation above.

>>> def swap2(o1,o2):
...  try:
...   raise RuntimeError()
...  except:
...   e,b,t = sys.exc_info()
...  ldict = t.tb_frame.f_back.f_locals
...  iddict = dict( (id(v), k ) for k,v in ldict.items() )
...  # print id(o1), id(o2)
...  n1 = iddict[id(o1)]
...  n2 = iddict[id(o2)]
...  t = ldict[n1];
...  ldict[n1] = ldict[n2]
...  ldict[n2] = t
...
>>> print x,y
B A
>>> swap2(x,y)
>>> print x,y
A B
>>>

Now, I want to make the above codes more "Pythonic" -- is there a way to:

1.  Get the function's arguments from the perspective of the caller?

def f(x):
  print "caller's view of x = %s" % callersview(x)

Then, f(1+2+3) would yield:
caller's view of x = 1 + 2 + 3

2.  Is there a better way to look up by id?  I'm not very familiar with
sys.exc_info, but creating the id->name hash each time seems like
overkill.

3.  Is there a reference on all the special variables, like __foo__?

4.  Is there any work on deparsing (like Perl's deparse) lambda
functions to inline algebra and get a performance gain?

Thanks again for your input,

W

( from Perl-hacker to Python Programmer )

On Sun, Jul 11, 2010 at 2:37 PM, Stephen Hansen
 wrote:
> On 7/11/10 10:48 AM, wheres pythonmonks wrote:
>> I'm an old Perl-hacker, and am trying to Dive in Python.  I have some
>> easy issues (Python 2.6)
>> which probably can be answered in two seconds:
>>
>> 1.  Why is it that I cannot use print in booleans??  e.g.:
>>>>> True and print "It is true!"
>
> Because print is a statement. Statements have to start lines. If you
> want to do this, use a function-- in Python 2.6 either via "from
> __future__ import print_function" or writing your own, even if it's just
> a very thin wrapper around the print statement.
>
>> 2.  How can I write a function, "def swap(x,y):..." so that "x = 3; y
>> = 7; swap(x,y);" given x=7,y=3??
>> (I want to use Perl's Ref "\" operator, or C's &).
>> (And if I cannot do this [other than creating an Int class], is this
>> behavior limited to strings,
>>  tuples, and numbers)
>
> You can't do that*. It's not limited to any certain type of objects. You
> can't manipulate calling scopes: if you really want to do that sort of
> explicit namespace mangling, use dictionaries (or objects, really) as
> the namespace to mangle and pass them around.
>
>> 3.  Why might one want to store "strings" as "objects" in numpy
>> arrays?  (Maybe they wouldn't)?
>
> I don't use numpy. No idea.
>
>> 4.  Is there a way for me to make some function-definitions explicitly
>> module-local?
>
> In what sense? If you prepend them with an underscore, the function
> won't be imported with "from x import *". You can also explicitly
> control what is imported in that way with a module-level __all__ attribute.
>
> Now that won't stop someone from doing "import x" and
> "x._your_private_function" but Python doesn't believe in enforicng
> restrictions.
>
>> (Actually related to Q3 below: Is there a way to create an anonymous scope?)
>
> No. You can create a limited anonymous function with lambda, but note it
> takes only an expression-- no statements in it.
>
>> 5. Is there a way for me to introduce indentation-scoped variables in Python?
>> See for example: http://evanjones.ca/python-pitfall-scope.html
>
> No. Python only has three scopes historically: local, global, and
> builtin. Then post-2.2(ish, I forget) limited nested scoping -- but only
> with nested functions, and you can't (until Python 3) re-bind variables
> in outer scopes (though you can modify them if they are mutable objects).
>
> Python's scoping is very basic (we generally think this is a good thing;
> others are never

Re: Easy questions from a python beginner

2010-07-22 Thread wheres pythonmonks
Thanks for pointing out that swap (and my swap2) don't work everywhere
-- is there a way to get it to work inside functions?

"No offense, but you seem like you're still tying to be a hacker.  If
that's what you want, fine, but generally speaking (and particularly
for Python), you are going to have a better experience if you do it
the language's way."

None taken, but I always think that it is the language's job to
express my thoughts...  I don't like to think that my thoughts are
somehow constrained by the language.

The truth is that I don't intend to use these approaches in anything
serious.  However, I've been known to do some metaprogramming from
time to time.

In a recent application, I pass in a list of callables (lambdas) to be
evaluated repeatedly.
Clearly, a superior solution is to pass a single lambda that returns a
list.  [Fewer function-call dispatches.]
However, it might be more efficient to avoid the function-call
overhead completely and pass in a string which is substituted into a
string code block, compiled, and executed.
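
[A rough sketch of the three variants I mean, with made-up names:

callbacks = [lambda r: r * 2, lambda r: r ** 2]       # list of lambdas: N calls per row
combined  = lambda r: [r * 2, r ** 2]                 # one lambda returning a list: 1 call per row
template  = "[r * 2, r ** 2]"                         # code fragment substituted into a string
code      = compile(template, "<generated>", "eval")  # compiled once up front
row_value = eval(code, {}, {"r": 3})                  # evaluated per row with no Python-level call

The compiled-string route avoids the call overhead but gives up a lot of
readability, which is why I don't intend to use it for anything serious.]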

W





On Thu, Jul 22, 2010 at 8:12 PM, Carl Banks  wrote:
> On Jul 22, 3:34 pm, wheres pythonmonks 
> wrote:
>> Okay -- so I promised that I would try the namespace mangling
>> approach, and here's what I have come up with:
>>
>> Approach #1:  Pass in the variables to be swapped as strings.  (boring)
>>
>> >>> import sys
>> >>> def swap(n1,n2):
>>
>> ...  try:
>> ...   raise RuntimeError()
>> ...  except:
>> ...   e,b,t = sys.exc_info()
>> ...  ldict = t.tb_frame.f_back.f_locals
>> ...  t = ldict[n1];
>> ...  ldict[n1] = ldict[n2]
>> ...  ldict[n2] = t
>> ...>>> x = 'A'
>> >>> y = 'B'
>> >>> id(x)
>> 47946650437696
>> >>> id(y)
>> 47946649710192
>> >>> swap('x','y')
>> >>> print id(x)
>> 47946649710192
>> >>> print id(y)
>> 47946650437696
>> >>> print x,y
>>
>> B A
>
> Have you tried calling this swap inside a function?  I bet you
> haven't.
>
> def test():
>    x = "A"
>    y = "B"
>    swap("x","y")
>    print x,y
>
>
>> Approach #2:  Allow the user to pass in arbitrary objects, take their
>> ids, infer which variables they are by hashing every object in the
>> caller's frame, and then apply the swap operation above.
>>
>> >>> def swap2(o1,o2):
>>
>> ...  try:
>> ...   raise RuntimeError()
>> ...  except:
>> ...   e,b,t = sys.exc_info()
>> ...  ldict = t.tb_frame.f_back.f_locals
>> ...  iddict = dict( (id(v), k ) for k,v in ldict.items() )
>> ...  # print id(o1), id(o2)
>> ...  n1 = iddict[id(o1)]
>> ...  n2 = iddict[id(o2)]
>> ...  t = ldict[n1];
>> ...  ldict[n1] = ldict[n2]
>> ...  ldict[n2] = t
>> ...
>>
>> >>> print x,y
>> B A
>> >>> swap2(x,y)
>> >>> print x,y
>> A B
>
> Same question.
>
>
>> Now, I want to make the above codes more "Pythonic"
>
> It's simply not possible (let alone Pythonic), in general, to rebind
> variables in the namespace of the caller.
>
> You were able to do it for a very limited circumstance, when the
> calling namespace was module-level.  It doesn't work when the calling
> namespace is a function.  This is true in Python 2 and 3.
>
> IMO, even if it could work, the very act of rebinding variables in
> another namespace is unPythonic.  About the only time I've resorted to
> it is some metaprogramming tasks, and even then I give the functions
> that do it very explicit names, and I still feel dirty.
>
>
>> -- is there a way to:
>>
>> 1.  Get the function's arguments from the perspective of the caller?
>>
>> def f(x):
>>   print "caller's view of x = %s" % callersview(x)
>>
>> Then, f(1+2+3) would yield:
>> caller's view of x = 1 + 2 + 3
>
> Nope, other than inspecting the caller's frame.
>
>> 2.  Is there a better way to look up by id?  I'm not very familiar with
>> sys.exc_info, but creating the id->name hash each time seems like
>> overkill.
>
> Looking up by id is a bad idea in general.  Objects associated with an
> id can be destroyed, and the id reused.  So if you're storing an id, by
> the time you get to it it could be a different object, or an object
> that no longer exists.
>
>
>> 3.  Is there a reference on all the special variables, like __foo__?
>
> Python Language Reference
>
>
>> 4.  Is there any work on deparsing (like Perl's deparse) lambda
>> functions to inline algebra and get a performance gain?
>
> psyco (q.g.) might help, not sure if it'll help much for lambdas,
> though.
>
>> Thanks again for your input,
>>
>> W
>>
>> ( from Perl-hacker to Python Programmer )
>
> No offense, but you seem like you're still trying to be a hacker.  If
> that's what you want, fine, but generally speaking (and particularly
> for Python), you are going to have a better experience if you do it
> the language's way.
>
> And just to throw this out there, based on your questions I think it's
> possible that Ruby would fit your style better.  (It lets you play
> fast and loose with namespaces and code blocks and such.)
>
>
> Carl Banks
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy questions from a python beginner

2010-07-23 Thread wheres pythonmonks
Funny... just spent some time with timeit:

I wonder why I am passing in strings if the callback overhead is so light...

Funnier still: it looks like inline (not passed-in) lambdas can make
Python more efficient!
>>> import random
>>> d = [ (['A','B'][random.randint(0,1)],x,random.gauss(0,1)) for x in xrange(0,100) ]
>>> def A1(): j = [ lambda t: (t[2]*t[1],t[2]**2+5) for t in d ]

>>> def A2(): j = [ (t[2]*t[1],t[2]**2+5) for t in d ]

>>> def A3(l): j = [ l(t) for t in d]

>>> import timeit
>>> timeit.timeit('A1()','from __main__ import A1,d',number=10);
2.2185971572472454
>>> timeit.timeit('A2()','from __main__ import A2,d',number=10);
7.2615454749912942
>>> timeit.timeit('A3(lambda t: (t[2]*t[1],t[2]**2+5))','from __main__ import A3,d',number=10);
9.4334241349350947

So: the in-line lambda is a possible speed improvement, the in-line tuple
is slow, and the passed-in callback is slowest of all?

Is this possibly right?

Hopefully someone can spot the bug?

W





On Fri, Jul 23, 2010 at 4:10 AM, Steven D'Aprano
 wrote:
> On Thu, 22 Jul 2010 22:47:11 -0400, wheres pythonmonks wrote:
>
>> Thanks for pointing out that swap (and my swap2) don't work everywhere
>> -- is there a way to get it to work inside functions?
>
> Not in CPython. In IronPython or Jython, maybe, I don't know enough about
> them. But even if you got it to work, it would be an implementation-
> dependent trick.
>
> [...]
>> I always think that it is the language's job to express
>> my thoughts...
>
> Ha, no, it's the language's job to execute algorithms. If it did so in a
> way that is similar to the way people think, that would be scary. Have
> you *seen* the way most people think???
>
> *wink*
>
>
>> I don't like to think that my thoughts are somehow
>> constrained by the language.
>
>
> Whether you "like" to think that way, or not, thoughts are influenced and
> constrained by language. While I don't accept the strong form of the
> Sapir-Whorf hypothesis (that some thoughts are *impossible* due to lack
> of language to express them, a position which has been discredited), a
> weaker form is almost certainly correct. Language influences thought.
>
> Turing Award winner and APL creator Kenneth E. Iverson gave a lecture
> about this theme, "Notation as a tool of thought", and argued that more
> powerful notations aided thinking about computer algorithms.
>
> Paul Graham also discusses similar ideas, such as the "blub paradox".
> Graham argues that the typical programmer is "satisfied with whatever
> language they happen to use, because it dictates the way they think about
> programs". We see this all the time, with people trying to write Java in
> Python, Perl in Python, and Ruby in Python.
>
> And Yukihiro Matsumoto has said that one of his inspirations for creating
> Ruby was the science fiction novel Babel-17, which in turn is based on
> the Sapir-Whorf Hypothesis.
>
>
>
>> The truth is that I don't intend to use these approaches in anything
>> serious.  However, I've been known to do some metaprogramming from time
>> to time.
>>
>> In a recent application, I pass in a list of callables (lambdas) to be
>> evaluated repeatedly.
>
> Are you aware that lambdas are just functions? The only differences
> between a "lambda" and a function created with def is that lambda is
> syntactically limited to a single expression, and that functions created
> with lambda are anonymous (they don't have a name, or at least, not a
> meaningful name).
>
>
>> Clearly, a superior solution is to pass a single lambda that returns a
>> list.
>
> I don't see why you say this is a superior solution, mostly because you
> haven't explained what the problem is.
>
>
>> [Less function call dispatches]
>
> How? You have to generate the list at some point. Whether you do it like
> this:
>
> functions = (sin, cos, tan)
> data = (2.3, 4.5, 1.2)
> result = [f(x) for f, x in zip(functions, data)]
>
> or like this:
>
> result = (lambda x, y, z: (sin(x), cos(y), tan(z))
>    )(2.3, 4.5, 1.2)
>
> you still end up with the same number of function calls (four). Any
> execution time will almost certainly be dominated by the work done inside
> the lambda (sin, cos and tan) rather than the infrastructure. And unless
> you have profiled your code, you would be surprised as to where the
> bottlenecks are. Your intuitions from Perl will not guide you well in
> Python -- it's a diffe

Nice way to cast a homogeneous tuple

2010-07-28 Thread wheres pythonmonks
A new Python convert is now looking for a replacement for another Perl idiom.

In particular, since Perl is weakly typed, I used to be able to use
unpack to pull sequences out of a string and then use them
immediately as integers.

In Python, I find that when I use struct.unpack I tend to get strings.
(Maybe I am using it wrong?)

import struct

def f(x,y,z): print x+y+z;

f( *struct.unpack('2s2s2s','123456'))
123456

(the plus concatenates the strings returned by unpack)

But what I want is:

f( *map(lambda x: int(x), struct.unpack('2s2s2s','123456')))
102

But this seems too complicated.

I see two resolutions:

1.  Is there a way, using unpack, to get the fields out as ints rather than strings?

2.  Is there something like map(lambda x: int(x), ...) without all the
lambda function-call overhead (e.g., a tuple cast)?
  [And yes: I know I can write my own "cast_tuple" function -- that's
not my point.  My point is that I want a super-native Python inline
solution like (hopefully shorter than) my "map" version above.  I
don't like defining trivial functions.]

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nice way to cast a homogeneous tuple

2010-07-28 Thread wheres pythonmonks
Thanks ... I thought int was a type-cast (like in C++) so I assumed I
couldn't reference it.



On Wed, Jul 28, 2010 at 9:31 AM, Nick Raptis  wrote:
> Ep, that missing line should be:
>
> On 07/28/2010 04:27 PM, Nick Raptis wrote:
>>
>> On 07/28/2010 04:15 PM, wheres pythonmonks wrote:
>>>
>>> f( *map(lambda x: int(x), struct.unpack('2s2s2s','123456')))
>>> 102
>>>
>>> But this seems too complicated.
>>>
>>>
>> Well, you don't need the lambda at all
>> int   ===    lambda x: int(x)
>>
>> So just write
>>
> f( *map(int, struct.unpack('2s2s2s', '123456')))
>
> Pretty compact now, isn't it?
>
>> It's like writing:
>> def myint(x):
>>    return int(x)
>>
>>
>> Nick,
>>
>> Warm thanks to Steven D' Aprano who taught me that just yesterday in the
>> Tutor list ;)
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


default behavior

2010-07-29 Thread wheres pythonmonks
Why is the default value of an int zero?

>>> x = int
>>> print x
<type 'int'>
>>> x()
0
>>>

How do I build an "int1" type that has a default value of 1?
[Hopefully no speed penalty.]
I am thinking about applications with collections.defaultdict.
What if I want to make a defaultdict of defaultdicts of lists?  [I
guess my Perl background is showing -- I miss auto-vivification.]

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default behavior

2010-07-29 Thread wheres pythonmonks
Thanks.  I presume this will work for my nested example as well.  Thanks again.
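
[For the archives, the nested version I tried, which seems to work:

>>> from collections import defaultdict
>>> tree = defaultdict(lambda: defaultdict(list))
>>> tree['a']['b'].append(1)
>>> tree['a']['b']
[1]

-- about as close to auto-vivification as I could ask for.]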

On Thu, Jul 29, 2010 at 2:18 PM, Paul Rubin  wrote:
> wheres pythonmonks  writes:
>> How do I build an "int1" type that has a default value of 1?
>> [Hopefully no speed penalty.]
>> I am thinking about applications with collections.defaultdict.
>
> You can supply an arbitrary function to collections.defaultdict.
> It doesn't have to be a class.  E.g.
>
>    d = collections.defaultdict(lambda: 1)
>
> will do what you are asking.
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default behavior

2010-07-30 Thread wheres pythonmonks
Instead of defaultdict for hash of lists, I have seen something like:


m={}; m.setdefault('key', []).append(1)

Would this be preferred in some circumstances?
Also, is there a way to upcast a defaultdict into a dict?  I have also
heard some people use exceptions on dictionaries to catch key
existence, so passing in a defaultdict (I guess) could be hazardous to
health.  Is this true?

W




On Fri, Jul 30, 2010 at 6:56 AM, Duncan Booth
 wrote:
> Peter Otten <__pete...@web.de> wrote:
>> real is a property, not a method. conjugate() was the first one that
>> worked that was not __special__. I think it has the added benefit that
>> it's likely to confuse the reader...
>>
> Ah, silly me, I should have realised that.
>
> Yes, micro-optimisations that are also micro-obfuscations are always the
> best. :^)
>
> --
> Duncan Booth http://kupuguy.blogspot.com
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default behavior

2010-07-30 Thread wheres pythonmonks
Sorry, doesn't the following make a copy?

>>>> from collections import defaultdict as dd
>>>> x = dd(int)
>>>> x[1] = 'a'
>>>> x
> defaultdict(<type 'int'>, {1: 'a'})
>>>> dict(x)
> {1: 'a'}
>
>


I was hoping not to do that -- e.g., actually reuse the same
underlying data.  Maybe dict(x), where x is a defaultdict is smart?  I
agree that a defaultdict is safe to pass to most routines, but I guess
I could imagine that a try/except block is used in a bit of code where
on the key exception (when the value is absent)  populates the value
with a random number.  In that application, a defaultdict would have
no random values.


Besides a slightly different flavor, does the following have
applications not covered by defaultdict?

m.setdefault('key', []).append(1)

I think I am unclear on the difference between that and:

m['key'] = m.get('key',[]).append(1)

Except that the latter works for immutable values as well as containers.

On Fri, Jul 30, 2010 at 8:19 AM, Steven D'Aprano
 wrote:
> On Fri, 30 Jul 2010 07:59:52 -0400, wheres pythonmonks wrote:
>
>> Instead of defaultdict for hash of lists, I have seen something like:
>>
>>
>> m={}; m.setdefault('key', []).append(1)
>>
>> Would this be preferred in some circumstances?
>
> Sure, why not? Whichever you prefer.
>
> setdefault() is a venerable old technique, dating back to Python 2.0, and
> not a newcomer like defaultdict.
>
>
>> Also, is there a way to upcast a defaultdict into a dict?
>
> "Upcast"? Surely it is downcasting. Or side-casting. Or type-casting.
> Whatever. *wink*
>
> Whatever it is, the answer is Yes:
>
>>>> from collections import defaultdict as dd
>>>> x = dd(int)
>>>> x[1] = 'a'
>>>> x
> defaultdict(<type 'int'>, {1: 'a'})
>>>> dict(x)
> {1: 'a'}
>
>
>
>> I have also heard some people use
>> exceptions on dictionaries to catch key existence, so passing in a
>> defaultdict (I guess) could be hazardous to health.  Is this true?
>
> Yes, it is true that some people use exceptions on dicts to catch key
> existence. The most common reason to do so is to catch the non-existence
> of a key so you can add it:
>
> try:
>    mydict[x] = mydict[x] + 1
> except KeyError:
>    mydict[x] = 1
>
>
> If mydict is a defaultdict with the appropriate factory, then the change
> is perfectly safe because mydict[x] will not raise an exception when x is
> missing, but merely return 0, so it will continue to work as expected and
> all is good.
>
> Of course, if you pass it an defaultdict with an *inappropriate* factory,
> you'll get an error. So don't do that :) Seriously, you can't expect to
> just randomly replace a variable with some arbitrarily different variable
> and expect it to work. You need to know what the code is expecting, and
> not break those expectations too badly.
>
> And now you have at least three ways of setting missing values in a dict.
> And those wacky Perl people say that Python's motto is "only one way to
> do it" :)
>
>
>
> --
> Steven
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


pylint scores

2010-07-30 Thread wheres pythonmonks
I am starting to use pylint to look at my code and I see that it gives a rating.
What values do experienced Python programmers get on code not
targeting the benchmark?

I wrote some code, tried to keep it under 80 characters per line,
reasonable variable names, and I got:

0.12 / 10.

Is this a good score for one not targeting the benchmark?  (pylint
running in default mode)

Somewhat related:  Is the backslash the only way to extend arguments
to statements over multiple lines?  (e.g.)

>>> def f(x,y,z): return(x+y+z);
...
>>> f(1,2,
... 3)
6
>>> assert f(1,2,3)>0,
  File "", line 1
assert f(1,2,3)>0,
 ^
SyntaxError: invalid syntax
>>>

In the above, I could split the arguments to f (I guess b/c of the
parens) but not for assert.  I could use a backslash, but I find this
ugly -- is that my only (best?) option?

[I really like to assert my code's correctness, and I like using the
second argument to assert, but this resulted in a lot of long lines
that I could not break except with an ugly backslash.  The workaround I
eventually stumbled on is sketched below.]
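
[For the record: anything inside parentheses may span lines, so wrapping the
condition and/or the message avoids the backslash:

assert (f(1, 2, 3) > 0), (
    "f should return a positive value for these inputs; "
    "got %r instead" % f(1, 2, 3))

which seems to keep both the interpreter and the 80-column habit happy.]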

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default behavior

2010-07-30 Thread wheres pythonmonks
>
> Hint -- what does [].append(1) return?
>

Again, apologies from a Python beginner.  It sure seems like one has
to do gymnastics to get good behavior out of core Python:

Here's my proposed fix:

 m['key'] = (lambda x: x.append(1) or x)(m.get('key',[]))

Yuck!  So I guess I'll use defaultdict with upcasts to dict as needed.

On a side note:  does up-casting always work that way with shared
(common) data from derived to base?  (I mean if the data is part of
base's interface, will  b = base(child) yield a new base object that
shares data with the child?)

Thanks again from a Perl-to-Python convert!

W


On Fri, Jul 30, 2010 at 11:47 PM, Steven D'Aprano
 wrote:
> On Fri, 30 Jul 2010 08:34:52 -0400, wheres pythonmonks wrote:
>
>> Sorry, doesn't the following make a copy?
>>
>>>>>> from collections import defaultdict as dd
>>>>>> x = dd(int)
>>>>>> x[1] = 'a'
>>>>>> x
>>> defaultdict(<type 'int'>, {1: 'a'})
>>>>>> dict(x)
>>> {1: 'a'}
>>>
>>>
>>>
>>
>> I was hoping not to do that -- e.g., actually reuse the same underlying
>> data.
>
>
> It does re-use the same underlying data.
>
>>>> from collections import defaultdict as dd
>>>> x = dd(list)
>>>> x[1].append(1)
>>>> x
> defaultdict(<type 'list'>, {1: [1]})
>>>> y = dict(x)
>>>> x[1].append(42)
>>>> y
> {1: [1, 42]}
>
> Both the defaultdict and the dict are referring to the same underlying
> key:value pairs. The data itself isn't duplicated. If they are mutable
> items, a change to one will affect the other (because they are the same
> item). An analogy for C programmers would be that creating dict y from
> dict x merely copies the pointers to the keys and values, it doesn't copy
> the data being pointed to.
>
> (That's pretty much what the CPython implementation does. Other
> implementations may do differently, so long as the visible behaviour
> remains the same.)
>
>
>
>> Maybe dict(x), where x is a defaultdict is smart?  I agree that a
>> defaultdict is safe to pass to most routines, but I guess I could
>> imagine that a try/except block is used in a bit of code where on the
>> key exception (when the value is absent)  populates the value with a
>> random number.  In that application, a defaultdict would have no random
>> values.
>
> If you want a defaultdict with a random default value, it is easy to
> provide:
>
>>>> import random
>>>> z = dd(random.random)
>>>> z[2] += 0
>>>> z
> defaultdict(<built-in method random of Random object at 0x...>, {2:
> 0.30707092626033605})
>
>
> The point which I tried to make, but obviously failed, is that any piece
> of code has certain expectations about the data it accepts. If take a
> function that expects an int between -2 and 99, and instead decide to
> pass a Decimal between 100 and 150, then you'll have problems: if you're
> lucky, you'll get an exception, if you're unlucky, it will silently give
> the wrong results. Changing a dict to a defaultdict is no different.
>
> If you have code that *relies* on getting a KeyError for missing keys:
>
> def who_is_missing(adict):
>    for person in ("Fred", "Barney", "Wilma", "Betty"):
>        try:
>            adict[person]
>        except KeyError:
>            print person, "is missing"
>
> then changing adict to a defaultdict will cause the function to
> misbehave. That's not unique to dicts and defaultdicts.
>
>
>
>> Besides a slightly different favor, does the following have applications
>> not covered by defaultdict?
>>
>> m.setdefault('key', []).append(1)
>
> defaultdict calls a function of no arguments to provide a default value.
> That means, in practice, it almost always uses the same default value for
> any specific dict.
>
> setdefault takes an argument when you call the function. So you can
> provide anything you like at runtime.
>
>
>> I think I am unclear on the difference between that and:
>>
>> m['key'] = m.get('key',[]).append(1)
>
> Have you tried it? I guess you haven't, or you wouldn't have thought they
> did the same thing.
>
> Hint -- what does [].append(1) return?
>
>
> --
> Steven
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default behavior

2010-07-31 Thread wheres pythonmonks
I think of an upcast as casting to the base-class (casting up the
inheritance tree).
http://en.wiktionary.org/wiki/upcast
But really, what I am thinking of doing is overriding the virtual
methods of a derived class with the base class behavior in an object
that I can then pass into methods that are base/derived agnostic.

defaultdict is the way to go.

W




Sadly, there are guidelines that I program by that are perhaps anti-Pythonic:

1.  Don't use "extra" variables in code.  Don't use global variables.
Keep the scopes of local variables at a minimum to reduce state (the
exception being for inner loops) or variables explicitly identified as
part of the algorithm before implementation.  [In Python, just about
everything is a variable, which is terrifying to me.  I never want the
Alabama version of math.pi  i.e.,
http://www.snopes.com/religion/pi.asp, or math.sin being "666".]

2.  Use built-in functions/features as much as possible, as these are
the most tested.  Don't roll your own -- you're not that good; instead
master the language.  (How often do I invent a noun in English?  Not
even "upcast"!)  [Plus, guys with phds probably already did what you
need.]  Use only very well known libraries -- numpy is okay (I hope!)
for example.  An exception can be made while interfacing external
data, because others who create data may not have abided by rule #2.
In most cases (except gui programming, which again tackles the
external interfacing program) the more heavy-weight your API, the more
wrong you are.

3.  In interpreted languages, avoid function calls unless the
function does something significant.  [e.g., Function-call overhead
tends to be worse than a dictionary lookup -- and yes, I used timeit;
the overhead can be 100%.]  Small functions and methods (and
callbacks) hamper good interpreted code.  When writing functions, make
them operate on lists/dicts.

It is because of the above that I stopped writing object-oriented Perl.

So I want "big" functions that do a lot of work with few variable
names.  Ideally, I'd create only variables that are relevant based on
the description of the algorithm.  [Oh yeah, real programming is done
before the implementation in python or C++.]

My problems are compounded by the lack of indentation-based scope, but I
see this as simply enforcing the full use of functional-programming
approaches.




On Sat, Jul 31, 2010 at 5:55 AM, Steven D'Aprano
 wrote:
> On Sat, 31 Jul 2010 01:02:47 -0400, wheres pythonmonks wrote:
>
>
>>> Hint -- what does [].append(1) return?
>>>
>>>
>> Again, apologies from a Python beginner.  It sure seems like one has to
>> do gymnastics to get good behavior out of core Python:
>>
>> Here's my proposed fix:
>>
>>  m['key'] = (lambda x: x.append(1) or x)(m.get('key',[]))
>>
>> Yuck!
>
> Yuk is right. What's wrong with the simple, straightforward solution?
>
> L = m.get('key', [])
> L.append(1)
> m['key'] = L
>
>
> Not everything needs to be a one-liner. But if you insist on making it a
> one-liner, that's what setdefault and defaultdict are for.
>
>
>
>> So I guess I'll use defaultdict with upcasts to dict as needed.
>
> You keep using that term "upcast". I have no idea what you think it
> means, so I have no idea whether or not Python does it. Perhaps you
> should explain what you think "upcasting" is.
>
>
>> On a side note:  does up-casting always work that way with shared
>> (common) data from derived to base?  (I mean if the data is part of
>> base's interface, will  b = base(child) yield a new base object that
>> shares data with the child?)
>
> Of course not. It depends on the implementation of the class.
>
>
> --
> Steven
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


subclassing versus object redefinition

2010-08-03 Thread wheres pythonmonks
Hi!

I have a class (supposed to be an abstract base class):
In Python (as opposed to static languages like C++) I don't need to
subclass the base class; instead I can simply override the
behavior of stub methods and values on an instance.
Is there a preference between subclassing (the C++ approach) and
overriding methods/data in an instance, from a design perspective?
I think I prefer the override/redefine approach because it results in a
thinner class hierarchy.
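
[A toy sketch of the two options, as I understand them (names made up):

import types

class Base(object):
    def report(self):                  # stub meant to be specialized
        raise NotImplementedError

class Loud(Base):                      # C++-style: subclass and override
    def report(self):
        return "LOUD"

quiet = Base()                         # per-instance: rebind the stub on one object
quiet.report = types.MethodType(lambda self: "quiet", quiet, Base)

print Loud().report(), quiet.report()  # LOUD quiet

The second form is what I mean by "override/redefine".]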

It seems like inheriting an ABC is needed only when I must share
instances (between multiple parts of the code, or if the subclass is
instantiated in different places...)

Thoughts?

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing versus object redefinition

2010-08-03 Thread wheres pythonmonks
Roald:

First, I must admit, I didn't know I could create an ABC in Python.
Now I see (http://docs.python.org/library/abc.html). Thank you.

I think that the crux of the matter is in points #3,  #4, and #5 that
you raised:

3) adding stuff to instances is less reusable than adding stuff to (sub)classes
4) if I'm reading your code and want to know what an object is like, I look
 at the class, not through your whole program to collect all bits and
pieces of information spread out over it

On #3:  Not clear that all possible specializations warrant
factorization into a class.  Indeed, this may result in "premature
abstraction" -- and make the code less clear.  Also, it will freeze in
the base classes, making future refactoring a headache.

On #4:  Unless I misunderstood something, there is nothing in Python
that ensures that a class definition is localized.  So, putting
definitions in classes, does not guarantee that the definition is at a
single location in the code.

5) why would you want a thinner class hierarchy?

The yo-yo anti-patten:
http://en.wikipedia.org/wiki/Yo-yo_problem

I have a pretty strong preference for using a small number of useful
objects, instead of having code littered with objects strewn across
the namespace.

Maybe there is a Python ABC tutorial out there that can enlighten me?

W

On Tue, Aug 3, 2010 at 10:06 AM, Roald de Vries  wrote:
> On Aug 3, 2010, at 2:46 PM, wheres pythonmonks wrote:
>>
>> Hi!
>>
>> I have a class (supposed to be an abstract base class):
>> In Python (as opposed to static languages like C++) I don't need to
>> subclass the base class; instead I can simply override the
>> behavior of stub methods and values on an instance.
>> Is there a preference between subclassing (the C++ approach) and
>> overriding methods/data in an instance, from a design perspective?
>> I think I prefer the override/redefine approach because it results in a
>> thinner class hierarchy.
>>
>> It seems like inheriting an ABC is needed only when I must share
>> instances (between multiple parts of the code, or if the subclass is
>> instantiated in different places...)
>>
>> Thoughts?
>
> 1) some things are just not possible in instances, like overriding operators
> 2) abstract base classes are not supposed to be instantiable, so if you are
> able to do it anyway, that is a hack
> 3) adding stuff to instances is less reusable than adding stuff to
> (sub)classes
> 4) if I'm reading your code and want to know what an object is like, I look
> at the class, not through your whole program to collect all bits and pieces
> of information spread out over it
> 5) why would you want a thinner class hierarchy?
>
> So I would go for inheritance.
>
> Cheers, Roald
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


None is negative?

2010-08-03 Thread wheres pythonmonks
I did the google search... I must be blind as I don't see any hits...

None is negative in Python?  (v2.6)

http://www.google.com/search?ie=UTF-8&q=%22none+is+negative%22+python

>>> if None < -999.99: print "hi"

hi
>>>

>>> if -999 > None: print "hi"

hi
>>>

Is there a way to have the comparison raise an exception?

W
-- 
http://mail.python.org/mailman/listinfo/python-list


easy question on parsing python: "is not None"

2010-08-05 Thread wheres pythonmonks
How does "x is not None" make any sense?  "not x is None" does make sense.

I can only surmise that in this context (preceding is) "not" is not a
unary right-associative operator, therefore:

x is not None === IS_NOTEQ(X, None)

Beside "not in" which seems to work similarly, is there other
syntactical sugar like this that I should be aware of?

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: easy question on parsing python: "is not None"

2010-08-05 Thread wheres pythonmonks
Well, I am not convinced of the equivalence of not None and true:

>>> not None
True
>>> 3 is True;
False
>>> 3 is not None
True
>>>

P.S. Sorry for the top-post -- is there a way to not do top posts from
gmail?  I haven't used usenet since tin.

On Thu, Aug 5, 2010 at 11:56 AM, Roald de Vries  wrote:
> On Aug 5, 2010, at 5:42 PM, wheres pythonmonks wrote:
>>
>> How does "x is not None" make any sense?  "not x is None" does make sense.
>>
>> I can only surmise that in this context (preceding is) "not" is not a
>> unary right-associative operator, therefore:
>>
>> x is not None === IS_NOTEQ(X, None)
>>
>> Beside "not in" which seems to work similarly, is there other
>> syntactical sugar like this that I should be aware of?
>
> 'not None' first casts None to a bool, and then applies 'not', so 'x is not
> None' means 'x is True'.
> 'not x is None' is the same as 'not (x is None)'
>
> Cheers, Roald
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


inline exception handling in python

2010-08-12 Thread wheres pythonmonks
Hi!

I have on a few occasions now wanted to have inline-exception
handling, like the inline if/else operator.

For example,

The following might raise ZeroDivisionError:

f = n / d

So, I can look before I leap (which is okay):

f = float("nan") if d == 0 else n/d;

But, what I'd like to be able to write is:

f = n / d except float("nan");

Which I find much more appealing than:

try:
   f = n / d
except:
   f = float("nan")

(Obviously, I am thinking about more complicated functions than "n/d"
-- but this works as an example.)
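
(The least-bad alternative I have come up with so far is a tiny wrapper --
a sketch, with a made-up name:

def nan_on_zero(n, d, default=float("nan")):
    try:
        return n / d
    except ZeroDivisionError:
        return default

f = nan_on_zero(n, d)

-- which works, but pays a function call per use, hence the wish for syntax.)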

Thoughts?

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inline exception handling in python

2010-08-12 Thread wheres pythonmonks
On Thu, Aug 12, 2010 at 2:08 PM, Thomas Jollans  wrote:
> On Thursday 12 August 2010, it occurred to wheres pythonmonks to exclaim:
>> try:
>>    f = n / d
>> except:
>>    f = float("nan")
>
> A catch-all except clause. Never a good idea. It's not as bad in this case, as
> there is only one expression, but there are still a couple of other exceptions
> that have a chance of occurring here: KeyboardInterrupt and SystemExit.
> So:
>
> try:
>    f = n / d
> except ZeroDivisionError:
>    f = float('nan')
>
>
>> f = n / d except float("nan");
>
> So this syntax really isn't adequate for real use: catch-all except clauses
> are frowned upon, and rightfully so.
>
> Besides, more often than not, you want to have a finally clause around when
> you're dealing with exceptions.
>
>
>> (Obviously, I am thinking about more complicated functions than "n/d"
>> -- but this works as an example.)
>
> The more complex the function is, the more likely it is to raise an exception
> you can't handle that easily.
> --
> http://mail.python.org/mailman/listinfo/python-list
>

With a bit of imagination the syntax could handle specific exceptions:

f = n /d except except(ZeroDivisionError) float("nan")

f = n /d except except(ZeroDivisionError) float("nan")
except(ValueError) float("nan")

But then we cannot bind the exception to a useful variable, you say...

I think the problem in my case is best solved by look before you leap,
or a wrapper function.  [I just hate function call overhead for this.
]

Thanks,

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inline exception handling in python

2010-08-12 Thread wheres pythonmonks
On Thu, Aug 12, 2010 at 2:19 PM, wheres pythonmonks
 wrote:
> On Thu, Aug 12, 2010 at 2:08 PM, Thomas Jollans  wrote:
>> On Thursday 12 August 2010, it occurred to wheres pythonmonks to exclaim:
>>> try:
>>>    f = n / d
>>> except:
>>>    f = float("nan")
>>
>> A catch-all except clause. Never a good idea. It's not as bad in this case, 
>> as
>> there is only one expression, but there are still a couple of other 
>> exceptions
>> that have a chance of occurring here: KeyboardInterrupt and SystemExit.
>> So:
>>
>> try:
>>    f = n / d
>> except ZeroDivisionError:
>>    f = float('nan')
>>
>>
>>> f = n / d except float("nan");
>>
>> So this syntax really isn't adequate for real use: catch-all except clauses
>> are frowned upon, and rightfully so.
>>
>> Besides, more often than not, you want to have a finally clause around when
>> you're dealing with exceptions.
>>
>>
>>> (Obviously, I am thinking about more complicated functions than "n/d"
>>> -- but this works as an example.)
>>
>> The more complex the function is, the more likely it is to raise an exception
>> you can't handle that easily.
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
>
> With a bit of imagination the syntax could handle specific exceptions:
>
> f = n /d except except(ZeroDivisionError) float("nan")
>
> f = n /d except except(ZeroDivisionError) float("nan")
> except(ValueError) float("nan")
>
> But then we cannot bind the exception to a useful variable, you say...
>
> I think the problem in my case is best solved by look before you leap,
> or a wrapper function.  [I just hate function call overhead for this.
> ]
>
> Thanks,
>
> W
>

I mean something along these lines:

f = n /d  except(ZeroDivisionError) float("nan") except(ValueError) float("nan")
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inline exception handling in python

2010-08-12 Thread wheres pythonmonks
On Thu, Aug 12, 2010 at 2:42 PM, MRAB  wrote:
> wheres pythonmonks wrote:
>>
>> Hi!
>>
>> I have on a few occasions now wanted to have inline-exception
>> handling, like the inline if/else operator.
>>
>> For example,
>>
>> The following might raise ZeroDivisionError:
>>
>> f = n / d
>>
>> So, I can look before I leap (which is okay):
>>
>> f = float("nan") if d == 0 else n/d;
>>
>> But, what I'd like to be able to write is:
>>
>> f = n / d except float("nan");
>>
>> Which I find much more appealing than:
>>
>> try:
>>   f = n / d
>> except:
>>   f = float("nan")
>>
>> (Obviously, I am thinking about more complicated functions than "n/d"
>> -- but this works as an example.)
>>
>> Thoughts?
>>
> Discussed a year ago:
>
> [Python-Dev] (try-except) conditional expression similar to (if-else)
> conditional (PEP 308)
>
> http://code.activestate.com/lists/python-dev/90256/
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>


http://code.activestate.com/lists/python-dev/90256/

Nice -- excellent discussion and what I was looking for.  I am
guessing that no implementation materialized.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inline exception handling in python

2010-08-12 Thread wheres pythonmonks
On Thu, Aug 12, 2010 at 2:57 PM, Thomas Jollans  wrote:
> On Thursday 12 August 2010, it occurred to wheres pythonmonks to exclaim:
>> [I just hate function call overhead for this.]
>
> I think you've got your priorities wrong. If you want to avoid unnecessary
> overhead, avoid exceptions more than functions.
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Well I suppose it matters depending on the nature of the data you are
looking at...  But small function calls tend to be the death of
interpreted languages...

>>> timeit.timeit("""
def f(y,i):
 try:
  return(y/(i%10))
 except:
  return(float("nan"))

for i in range(100):
 x = f(7,i)

""")
56.362180419240985

>>> timeit.timeit("""
for i in range(100):
 try:
  x = 7 / (i % 10)
 except:
  x = float("nan")
""")
34.588313601484742
>>>
-- 
http://mail.python.org/mailman/listinfo/python-list


first non-null element in a list, otherwise None

2010-09-02 Thread wheres pythonmonks
This should be trivial:


I am looking to extract the first non-None element in a list, and
"None" otherwise.  Here's one implementation:

>>> x = reduce(lambda x,y: x or y, [None,None,1,None,2,None], None)
>>> print x
1

I thought maybe a generator expression would be better, to prevent
iterating over the whole list:

>>> x = ( x for x in [None,1,2] if x is not None ).next()
>>> print x
1

However, the generator expression throws if the list is entirely None.

With list comprehensions, a solution is:

>>> x = ([ x for x in [None,1,2] if x is not None ] + [ None ] )[0]

But this can be expensive memory-wise.  Is there a way to concatenate
generator expressions?

More importantly,

Is there a better way?  (In one line?)
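
(One candidate I ran across while writing this: the next() builtin grew a
default argument in 2.6, which seems to cover both problems, if I am reading
the docs right:

>>> next((x for x in [None,None,1,None,2] if x is not None), None)
1
>>> next((x for x in [None,None] if x is not None), None)
>>>

-- but maybe there is something more idiomatic still.)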

Thanks,

W
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: first non-null element in a list, otherwise None

2010-09-02 Thread wheres pythonmonks
Peter wrote:

>> But this can be expensive memory wise.  Is there a way to concatenate
>> generator expressions?
>
> itertools.chain()
>

Aha!

>>> import itertools
>>> x = itertools.chain( (x for x in [None,None] if x is not None), [ None ] ).next()
>>> print x
None
>>> x = itertools.chain( (x for x in [None,7] if x is not None), [ None ] ).next()
>>> print x
7
>>>


W
-- 
http://mail.python.org/mailman/listinfo/python-list