Newbie Alert! Upgrading Python?

2005-07-10 Thread El
Hi,

Sorry to bother you folks with a real newbie question, but I am sure that 
this is the place for me to ask.

Python 1.5.1 (final) and Python Win32 Extensions are installed on my 4 year 
old computer.  My computer has always been upgraded to include the latest 
programs and Windows updates.

However, I have NEVER upgraded Python!  Therefore, my questions:

1.  Should I upgrade to the new Python 2.4.1 (Final)?

2.  If so, should I uninstall the old version first and reboot, or can I 
just install the new version over the old version?

3.  Are the Win32 extensions included in the final version, or are the 
extensions a separate download?

I do not use Python, but I am assuming that something on my computer must. 
LOL.

Thank you for any enlightenment you can render.

Elliot


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-02 Thread El Pitonero
Bengt Richter wrote:
>
> I decided to read this thread today, and I still don't know exactly
> what your requirements are for "private" whatevers.

No name collision in subclassing. Notice that even if you use

self._x = 3

in a parent class, it can be overridden in a sub-sub-class accidentally.
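
A minimal sketch of the collision, plus Python's double-underscore name mangling, which sidesteps this particular case (the class names here are illustrative):

```python
# Name collision with a single underscore: _x is private by
# convention only, so a subclass can clobber it silently.
class Base(object):
    def __init__(self):
        self._x = 3

class Child(Base):
    def __init__(self):
        Base.__init__(self)
        self._x = "oops"        # silently overwrites Base's self._x

# Double leading underscores trigger name mangling, which avoids
# this particular collision:
class Base2(object):
    def __init__(self):
        self.__x = 3            # actually stored as _Base2__x

class Child2(Base2):
    def __init__(self):
        Base2.__init__(self)
        self.__x = "ok"         # actually stored as _Child2__x

c = Child2()
print(c._Base2__x)   # 3
print(c._Child2__x)  # ok
```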

> Or maybe, what are your real requirements?
> ;-)

No need for this type of joke.

For people coming from Java/C++ camp, (especially Java camp,) they
think they've got something good with the private class variable, and
they think the lack of an exact equivalent in Python means Python is
lesser in this regard.

The thing is, there are two sides to every coin. Features surely can be
viewed as "goodies", or they can be viewed as "handcuffs".

Python's lack of Java-style "private" surely has its drawback: name
collisions can happen. But, that's just one side. Name collisions are
allowed in many dynamic languages, where you can override the default
system behavior (in some languages, you can override/intercept the
assignment operator "=", or even the "if" statement and the "while"
loop.) Sure, in these very dynamic languages you can ACCIDENTALLY
override the default system behavior. How many Python programmers have
once upon a time done stupid things like:

list = 3

, without realizing that list() is a built-in Python function? This type of
accidental overriding DOES happen. Similarly, in large and complicated
Python projects, name collision (even for "self._x" type of private
members) CAN happen, and I am sure it has happened for some people.

Now the question is: for average people and average projects, how often
does this type of error happen? If people really cool their heads,
avoid talking emotionally, and go by their real experience
over the years, the truth is that this type of error just doesn't
happen that often. And even when it does happen, the error is usually
fatal enough to be detected.

OK, I have said that not having Java-style "private" has one downside
in Python. But what's the upside?

Not having "private" in Python closes one door, but opens another door.

The upside is exactly the same as the fact that you can override the
"list()" function in Python. Python is dynamic language. Like others
have pointed out, you cannot even be sure about the variable content
of a class/object at runtime. In Java, you cannot override your
private members during runtime. In Python, if you have a private
variable:

self._x = 3

and you, for curiosity reasons and DURING runtime (after the program is
already up and running) want to know the exact moment the self._x
variable is accessed (say, print out the local time), or print out the
calling stack frames, you can do it. And I mean the program is running.
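
A sketch of that kind of live instrumentation (the class and names here are made up for illustration): rebind __getattribute__ on the class while instances already exist, and every subsequent read of self._x is observed.

```python
import time

class Model(object):
    def __init__(self):
        self._x = 3

m = Model()

# Later, while the process is still live, rebind __getattribute__ on
# the class to observe every read of _x -- no restart, no recompile.
def traced_getattribute(self, name):
    if name == '_x':
        print('_x read at %s' % time.ctime())
        # traceback.print_stack() could dump the calling frames here
    return object.__getattribute__(self, name)

Model.__getattribute__ = traced_getattribute
print(m._x)  # prints the trace line, then 3
```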

In Java, you would have to stop the program, re-write/re-implement
changes, re-compile, re-start the program, etc. etc.

The thing is, Python allows you to do runtime dynamic programming much
more easily than Java/C++. This type of dynamic programming and
metaprogramming is kind of unthinkable in the C++/Java world. Sure, these are advanced
programming features that not everybody uses, but Python leaves the
door open for those who want to take advantage of these powerful
features.

The fact that you can override Python's "list()" function can be either
viewed as pro or con. The fact that you can override member variables
can also be viewed as pro or con.

Would any Python programmer trade the benefits of a highly dynamic
language for an unessential feature like Java-style "private" data
members? My guess is not.

---

Why do I say Java-style "private" is unessential?

If your Python class/object needs a real Java-style private working
namespace, you have to ask yourself: do the private variables REALLY
belong to the class?

In my opinion, the answer is: NO. Whenever you have Java-style private
variables (i.e., non-temporary variables that need to persist from one
method call to the next time the class node is accessed), those
variables/features may be better described as another object, separate
from your main class hierarchy. Why not move them into a foreign worker
class/object instead, and let that foreign worker object hold those
private names, and separate them from the main class hierarchy? (In
Microsoft's jargon: why not use "containment" instead of
"aggregation"?)

That is, the moment you need Java-style private variables, I think you
might as well create another class to hold those names and
functionalities, since they do not belong to the core functionality of
the main class hierarchy. Whatever is inside the core functionality of the
main class should perhaps be inheritable, sharable and modifiable.

If you use containment instead of aggregation, the chance for name
collision reduces dramatically. And in my opinion, it's the Pythonic
way of dealing with the "private" problem: move things that don't
belong to this object to some other object, and be happy again.


Re: Will python never intend to support private, protected and public?

2005-10-04 Thread El Pitonero
Paul Rubin wrote:
>
> Let's see, say I'm a bank manager, and I want to close my cash vault
> at 5pm today and set its time lock so it can't be opened until 9am
> tomorrow, including by me.  Is that "handcuffs"?  It's normal
> procedure at any bank, for good reason. It's not necessarily some
> distrustful policy that the bank CEO set to keep me from robbing the
> bank.  I might have set the policy myself.  Java lets me do something
> similar with instance variables.  Why is it somehow an advantage for
> Python to withhold such a capability?

If so, you are probably the type of person who also likes static
typing, type safety and variable declarations, right? You like your
language to carry extra baggage for safety concerns that most Python
programmers don't seem to share.

>   def countdown():
>       n = 3
>       while n > 0:
>           yield n
>           n -= 1
>   g = countdown()
>   print g.next()  # 3
>   print g.next()  # 2
>
> where's the Python feature that lets me modify g's internal value of n
> at this point?

You missed the point. I have, for fun, built a GUI application in
Python, while the program is running. I just kept on adding more and
more code to it. This, while the Python program is running. I was able
to change the GUI's look-and-feel, add more buttons and menus, all while
the program was running. I was able to change the class definitions and
preserve the previous objects' state variables. For that, you already
have an event-based program and can use module reload and some metaclass
tricks to automatically relink your objects to new classes. Sure,
Python is not as friendly as Lisp/Scheme for interactive programming,
but you can still do a lot.

> [Other stuff incomprehensible and snipped].

Sigh, I gave you the keywords: containment and aggregation.

There are two ways of enhancing functionality from an existing
object/class. One way is by inheritance, that's aggregation. Another
way is by containment, that means that instead of inheriting, you add
the additional features as an object contained in the new object.

Vault-style encapsulation is one way to do OOP. But it is by no means the
only way. The whole access-level thing (the "private" keyword) is not
an essential part of OOP. The concept of a "private" namespace is not
necessary for OOP. Just because you learned OOP from one particular
school of thought, does not mean that that's the only way. Let us call
your OOP "school A".

Another way of OOP is to accept by default that everything in the class
hierarchy is inheritable. And Python is by no means the only language
that does that. Now, obviously in this type of OOP you can run into
name collision. But if you actually follow this other school of
thought, you would organize your variables differently. Let us call
this type of OOP "school B".

Let me be more clear: when you have variables that are so, so, so
private that no one else should touch, from school B's point of view,
those variables do not belong to the object. They belong to another
object.

Let us say there is a financial instrument that pays some fixed coupon
interest:

class Bond:
    volatility = None
    interest = None
    def __init__(self):
        self.volatility = 0.3  # volatility from credit risk
        self.interest = 0.4

Now, you have another instrument that pays variable interest:

class VariableInterestBond(Bond):
    volatility = None  # this one collides with the base class
    def __init__(self):
        Bond.__init__(self)
        self.volatility = 0.2  # volatility for variable interest
    def calculate(self):
        interest = self.get_variable_interest()
        ...
    def get_variable_interest(self):
        return self.interest * (1 + random.random()*self.volatility)
...

In this example, the subclass's "volatility" means something different
from the base class's "volatility", yet collides with it. It should have
been a private variable, but instead it accidentally overwrote an
existing variable.

We are trying to combine two "features" in the hierarchy tree:


Bond
|
|
+- Variable Interest
|
|
Variable-Interest Bond

There are two ways to add the "variable interest" feature to the "bond"
object. One way is by aggregation (inheritance), which is shown above.
Another way is by containment.

If the subclass's "volatility" should be private and encapsulated from
the main class hierarchy (i.e. Bond-like objects), then from school B's
point of view, it does not belong to the bond object. It would be
better to encapsulate it into another object.

class VariableInterest:
    volatility = None
    def __init__(self, interest):
        self.interest = interest
        self.volatility = 0.2
    def get_variable_interest(self):
        return self.interest * (1 + random.random()*self.volatility)

class VariableInterestBond(Bond):
    variable_interest_calculator = None
    def __init__(self):
        Bond.__init__(self)
        self.variable_interest_calculator = VariableInterest(self.interest)
    def calculate(self):
        # delegates to the contained calculator
        interest = self.variable_interest_calculator.get_variable_interest()
        ...

Re: updating local()

2005-10-06 Thread El Pitonero
Flavio wrote:
> I wish all my problems involved just a couple of variables, but
> unfortunately the real interesting problems tend to be complex...
>
> def fun(**kw):
>     a = 100
>     for k,v in kw.items():
>         exec('%s = %s'%(k,v))
>     print locals()
>
>
> >>> fun(**{'a':1,'b':2})
> {'a': 1, 'k': 'b', 'b': 2, 'kw': {'a': 1, 'b': 2}, 'v': 2}
>
> any better Ideas?

Actually, your solution is not bad. Some potential problems are: (1)
unintentional name collisions with other variables, including
globals/builtins; (2) it's easy to unpack variables into locals(), but
not easy to pack them back, since locals() is often contaminated with
extra auxiliary variables.

Your problem happens often in the field of math formulas/equations.

I remember a similar problem in C++, too. When one has a function
with a long list of parameters, in C++ one may find oneself updating
the function header/prototype all the time, which is very tiresome and
error-prone.

When you have a complicated list of arguments to pass, it's better to put
them into a structure/object. This way, the function header/prototype
will remain the same, and you only need to change the definition of
the object.

The parameter object(s) could be called:

- request and response, if input and output are separated
- param
- workspace, session, etc.

so, your function call would look like

class Param: pass
...
def f(p):
    result = p.x + p.y
    return result
...
p = Param()
p.x = 3
p.y = 4
result = f(p)

Now, you may not like the extra dots in the line:

result = p.x + p.y

My experience is: it's not that bad to have names with extra dots. I
know it's annoying, but not a total disaster. Plus, once you start to
use OOP, it makes your code more organized. It has its benefits. For
instance, very often your models/formulas have several versions. Using
OOP's class hierarchy inheritance mechanism allows you to try out
different versions or different ideas much more easily, and you can
roll back the changes more easily, too (instead of commenting out code
lines all over the place.) If you do decide to go the route of OOP, the
lines:

p.x = 3
p.y = 4
p.z = 5

can be replaced by something like:

calculation_engine.set(x=3, y=4)
calculation_engine.set(z=5)
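
Such a set() method can be sketched in a few lines with setattr (the class name here is illustrative):

```python
class CalculationEngine(object):
    def set(self, **kw):
        # bind each keyword argument as an attribute on the engine
        for name, value in kw.items():
            setattr(self, name, value)

calculation_engine = CalculationEngine()
calculation_engine.set(x=3, y=4)
calculation_engine.set(z=5)
print(calculation_engine.x + calculation_engine.y + calculation_engine.z)  # 12
```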

--

The longer answer is: if you need complex formula evaluations, Python
is probably not the language to use. For speed and memory usage issues,
C++ is probably what you need. You can hookup your C++ program to
Python in various ways, but packing/unpacking variables seems
unavoidable. And even in C++ (where object attributes don't have the
dots inside the object's own methods), I still often end up using a lot
of dotted names for attributes from other objects.

If the problem is complex, then it's complex. I know you have your
equations, but you have to decide for yourself: is it better to do all
the packing/unpacking, or is it better to change your equations to use
the dotted names? There is no right or wrong answer, it all depends on
your particular situation.



python and MySQL - 3 questions

2005-10-09 Thread el chupacabra
I'm using mysqldb module and python 2.4. I'm a newbie. Thanks in advance.

1. Output desired: 

"hello"
"world"

I know that MySQL takes \n and \t and what not. 

But my Python script takes that \n literally.  Meaning, when I retrieve 
the records, they show up like "hello \n world".

How can I keep the formatting when inserting data into the table?

This is what I have:

cursor.execute('insert into table values (%s, %s, %s, %s)', (newId, 
insertEntryName, insertLastName, insertSSN))


2.  How can I make my Python script show *** (stars) when the user enters a password?

3.  Is it possible to make Python/MySQL transactions secure and encrypted?  Can you 
point me to readings or something?



--=  Posted using GrabIt  =
--=  Binary Usenet downloading made easy =-
-=  Get GrabIt for free from http://www.shemes.com/  =-



breaking a loop

2005-08-11 Thread el chupacabra
Hi, I'm just learning Python... thanks in advance...

How do you get out of this loop?

Problem: When I type 'exit' (no quotes) the program doesn't quit the loop... it 
actually attempts to find entries that contain the 'exit' string.

Desired behavior: when I type 'exit' the program should quit.

def search():
    searchWhat = ""
    while searchWhat != 'exit':
        searchWhat = "%%%s%%" % raw_input('Search for : ')
        cursor.execute("select * from TABLE where FIELD like %s", (searchWhat))
        result = cursor.fetchall()
        print '+------+----------------+-----------------+'
        print '|  #   |      Name      |    LastName     |'
        print '+------+----------------+-----------------+'
        for record in result:
            print '  ', record[0], ' :  ', record[1], ' ==>  ', record[2]
        print '+------+----------------+-----------------+'
    #end for statement, end of search





Re: breaking a loop

2005-08-13 Thread Mosti El
I think you need a break before exit();
if you want to break out of any loop, just add a break statement.
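
A sketch of the fix, with raw_input and the database stubbed out so it stands alone:

```python
def search(inputs, run_query):
    # inputs stands in for successive raw_input() strings;
    # run_query stands in for cursor.execute + fetchall
    results = []
    for typed in inputs:
        if typed == 'exit':
            break                      # leave the loop before querying
        results.append(run_query('%%%s%%' % typed))
    return results

hits = search(['foo', 'exit', 'bar'], lambda pattern: 'query ' + pattern)
print(hits)  # ['query %foo%']
```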

"el chupacabra" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi, I'm just learning Pythonthanks in advance...
>
> Do you get out of this loop?
>
> Problem: When I type 'exit' (no quotes) the program doesn't quit the
loop...it actually attemps to find entries that containt the 'exit' string.
>
> Desired behavior: when I type 'exit' the program should quit.
>
> def search():
>     searchWhat = ""
>     while searchWhat != 'exit':
>         searchWhat = "%%%s%%" % raw_input('Search for : ')
>         cursor.execute("select * from TABLE where FIELD like %s", (searchWhat))
>         result = cursor.fetchall()
>         print '+------+----------------+-----------------+'
>         print '|  #   |      Name      |    LastName     |'
>         print '+------+----------------+-----------------+'
>         for record in result:
>             print '  ', record[0], ' :  ', record[1], ' ==>  ', record[2]
>         print '+------+----------------+-----------------+'
>     #end for statement, end of search
>
>
>




Is there any module to play mp3 or wav format files?

2005-08-26 Thread el chupacabra
Is there any module to play mp3 or wav format files?

any sample code available somewhere?

thanks,
el chupacabra





Re: Re: error processing variables

2005-09-09 Thread el chupacabra

>Your problem is that the def statement reassignes the name "toHPU" to a
>function instead of a string.  So when the code runs, you're passing a
>function object to s.copy2.

  So... how do I fix it?


>> import shutil
>>
>> #variables
>> s = shutil
>>
>> toHPU = "/etc/sysconfig/network/toHPU.wifi"
>> wlan = "/etc/sysconfig/network/ifcfg-wlan-id-00:0e:38:88:ba:6d"
>> toAnyWifi = "/etc/sysconfig/network/toAny.wifi"
>> wired = "/etc/sysconfig/network/ifcfg-eth-id-00:0b:db:1b:e3:88"
>>
>>
>> def toHPU():
>>     s.copy2(toHPU, wlan)
>>     s.copy2(toAnyWifi, wired)
>>
>> #end
>>
>> #execute function
>> toHPU()
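
One possible fix, sketched below: rename the function so the def statement no longer rebinds the name toHPU (which held the path string). The new function name is my own, and shutil.copy2 is stubbed out here so the sketch runs standalone:

```python
copied = []

def copy2(src, dst):          # stand-in for shutil.copy2
    copied.append((src, dst))

toHPU = "/etc/sysconfig/network/toHPU.wifi"
wlan = "/etc/sysconfig/network/ifcfg-wlan-id-00:0e:38:88:ba:6d"
toAnyWifi = "/etc/sysconfig/network/toAny.wifi"
wired = "/etc/sysconfig/network/ifcfg-eth-id-00:0b:db:1b:e3:88"

def switch_to_hpu():          # renamed: no longer clobbers the string
    copy2(toHPU, wlan)
    copy2(toAnyWifi, wired)

switch_to_hpu()
print(copied[0][0])  # /etc/sysconfig/network/toHPU.wifi
```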






Re: Python becoming less Lisp-like

2005-03-15 Thread El Pitonero
Fernando wrote:
> The real problem with Python is ... Python is
> going the C++ way: piling feature upon feature, adding bells
> and whistles while ignoring or damaging its core design.

I totally agree.

Look at a recent thread "Compile time evaluation (aka eliminating
default argument hacks)"

http://groups-beta.google.com/group/comp.lang.python/browse_frm/thread/d0cd861daf3cff6d/6a8abafed95a9053#6a8abafed95a9053

where people coming from C++ or other typical programming languages
would do:

x = 1
def _build_used():
  y = x + 1
  return x, y
def f(_used = _build_used()):
  x, y = _used
  print x, y

instead of:

x = 1
def f():
    y = x + 1
    global f
    def f(x=x, y=y):
        print x, y
    f()

It is easy to see that people have been molded into thinking one way
(declaration of functions, a legacy from statically typed languages),
instead of viewing code also as an object that you can tweak.



Re: Pre-PEP: Dictionary accumulator methods

2005-03-19 Thread El Pitonero
On Sat, 19 Mar 2005 01:24:57 GMT, "Raymond Hettinger"
<[EMAIL PROTECTED]> wrote:
>I would like to get everyone's thoughts on two new dictionary methods:
>
>def count(self, key, qty=1):
>    try:
>        self[key] += qty
>    except KeyError:
>        self[key] = qty
>
>def appendlist(self, key, *values):
>    try:
>        self[key].extend(values)
>    except KeyError:
>        self[key] = list(values)

Bengt Richter wrote:
>  >>> class xdict(dict):
>  ... def valadd(self, key, incr=1):
>  ... try: self[key] = self[key] + type(self[key])(incr)
>  ... except KeyError: self[key] = incr

What about:

import copy
class safedict(dict):
def __init__(self, default=None):
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return copy.copy(self.default)

x = safedict(0)
x[3] += 1
y = safedict([])
y[5] += range(3)
print x, y
print x[123], y[234]



Re: Pre-PEP: Dictionary accumulator methods

2005-03-19 Thread El Pitonero
Dan Sommers wrote:
> On Sat, 19 Mar 2005 01:24:57 GMT,
> "Raymond Hettinger" <[EMAIL PROTECTED]> wrote:
>
> > The proposed names could possibly be improved (perhaps tally() is more
> > active and clear than count()).
>
> Curious that in this lengthy discussion, a method name of "accumulate"
> never came up.  I'm not sure how to separate the two cases (accumulating
> scalars vs. accumulating a list), though.

Is it even necessary to use a method name?

import copy
class safedict(dict):
def __init__(self, default=None):
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return copy.copy(self.default)


x = safedict(0)
x[3] += 1
y = safedict([]) 
y[5] += range(3) 
print x, y 
print x[123], y[234]



Re: Pre-PEP: Dictionary accumulator methods

2005-03-19 Thread El Pitonero
Raymond Hettinger wrote:
> Separating the two cases is essential.  Also, the wording should
> contain strong cues that remind you of addition and of building a list.
>
> For the first, how about addup():
>
> d = {}
> for word in text.split():
>  d.addup(word)

import copy
class safedict(dict):
def __init__(self, default=None):
self.default = default
def __getitem__(self, key):
if not self.has_key(key):
self[key] = copy.copy(self.default)
return dict.__getitem__(self, key)

text = 'a b c b a'
words = text.split()
counts = safedict(0)
positions = safedict([])
for i, word  in enumerate(words):
counts[word] += 1
positions[word].append(i)

print counts, positions



Re: Pre-PEP: Dictionary accumulator methods

2005-03-19 Thread El Pitonero
Raymond Hettinger wrote:
>
> As written out above, the += syntax works fine but does not work with
> append().
> ...
> BTW, there is no need to make the same post three times.

The append() syntax works, if you use the other definition of safedict
(*). There is more than one way to define safedict; see the subtle
differences between the two versions, and you'll be glad
more than one version has been posted. At any rate, what has been
presented is a general idea; nitpicking details is kind of out of
place. Programmers know how to modify a general recipe to suit their
actual needs, right?

(*) In some cases, people do not want to create a dictionary entry when
an inquiry is done on a missing item. In some cases, they do. A general
recipe cannot cater to the needs of everybody.
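
The subtle difference in question can be seen side by side; the two classes below restate the two versions of safedict posted in this thread (the class names are mine):

```python
import copy

class SafeDictNoStore(dict):
    # returns a default for missing keys without creating an entry
    def __init__(self, default=None):
        self.default = default
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            return copy.copy(self.default)

class SafeDictStore(dict):
    # creates and stores the default entry on first access
    def __init__(self, default=None):
        self.default = default
    def __getitem__(self, key):
        if key not in self:
            self[key] = copy.copy(self.default)
        return dict.__getitem__(self, key)

ns = SafeDictNoStore([])
ns[5].append('lost')      # mutates a throwaway copy; nothing is stored
st = SafeDictStore([])
st[5].append('kept')      # the entry exists first, so this sticks

print(len(ns))  # 0
print(st[5])    # ['kept']
```

Note that augmented assignment (`x[3] += 1`) works with both versions, because the subsequent `__setitem__` stores the result either way; only `append()` exposes the difference.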



Re: Pre-PEP: Dictionary accumulator methods

2005-03-19 Thread El Pitonero
George Sakkis wrote:
> "Aahz" <[EMAIL PROTECTED]> wrote:
> > In article <[EMAIL PROTECTED]>,
> > Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> > >
> > >The proposed names could possibly be improved (perhaps tally() is
> > >more active and clear than count()).
> >
> > +1 tally()
>
> -1 for count(): Implies an accessor, not a mutator.
> -1 for tally(): Unfriendly to non-native english speakers.
> +0.5 for add, increment. If incrementing a negative is unacceptable,
> how about update/updateby/updateBy ?
> +1 for accumulate. I don't think that separating the two cases --
> adding to a scalar or appending to a list -- is that essential; a
> self-respecting program should make this obvious by the name of the
> parameter anyway ("dictionary.accumulate('hello', words)" vs
> "a.accumulate('hello', b)").

What about no name at all for the scalar case:

a['hello'] += 1
a['bye'] -= 2

and append() (or augmented assignment) for the list case:

a['hello'].append(word)
a['bye'] += [word]

?



Re: importing two modules with the same name

2005-03-19 Thread El Pitonero
Francisco Borges wrote:
> There are 2 "foo" named modules, 'std foo' and 'my foo'. I want to be
> able to import 'my foo' and then from within my foo, import 'std
> foo'. Anyone can help??

In other words, you would like to make a "patch" on third-party code.
There are many ways to do it. Here is just one possible approach.

#- A.py: third-party module
x = 3
def f():
return 'A.f(): x=%d' % x
#- B.py: your modifications to the module
def f():
return 'B.f(): x=%d' % x
#- Main.py: your program
import imp
# load the third party module into sys.modules
imp.load_source('A', '', open('C:\\A.py'))
# load and execute your changes
imp.load_source('A', '', open('C:\\B.py'))
# import now from memory (sys.modules)
import A
print A.f()



Re: importing two modules with the same name

2005-03-19 Thread El Pitonero
Tim Jarman wrote:
> But if your foo is under your control, why not do everyone a favour
> and call it something else?

His case is a canonical example of a patch. Often you'd like to choose
the "patch" approach because:

(1) the third-party may eventually incorporate the changes themselves,
hence you may want to minimize changes to the name of the module in
your code, so one day in the future you may simply remove the patch, or

(2) you are testing out several ideas, at any point you may want to
change to a different patch, or simply roll back to the original
module, or

(3) your product is shipped to different clients, each one of them
requires a different twist of the shared module, or

(4) your program and your data files are versioned, and your program
structure needs cumulative patches in order to be able to work with all
previous data file versions.

These types of needs are rather common.



Re: Python for a 10-14 years old?

2005-03-24 Thread El Pitonero
Lucas Raab wrote:
> [EMAIL PROTECTED] wrote:
> > I am blessed with a *very* gifted nine-years old daughter...
> > Now, I would like to teach her programming basics using Python
>
> Let her mess around with it on her own. I'm 15 and have been using
> Python for 2-3 years and had nothing to really go on. Give her Dive
Into
> Python or How to Think Like a Computer Scientist and let her ask
> questions if she needs help.

In the chess world, people have long learnt to take young prodigies
seriously. Most grandmasters start to play chess at age 4 or
earlier. Bobby Fischer became the US chess champion at age 14, and a
grandmaster at 15. And that's considered old by modern standards: Sergei
Karjakin became a grandmaster at age 12.

http://www.chessbase.com/newsdetail.asp?newsid=310
http://members.lycos.co.uk/csarchive/gilbert.htm

Sure, programming's skill set is a bit broader than chess playing or
ice-skating, but young hackers have plenty of contacts and resources
through the internet, and many of them live (or will be living) in Brazil,
Russia, India and China (the so-called BRIC countries.) So, a thorny
question for mature programmers is: what's your value in the face of this
competition? :)



Re: "static" variables in functions (was: Version Number Comparison Function)

2005-03-29 Thread El Pitonero
Christos TZOTZIOY Georgiou wrote:
>
> One of the previous related threads is this (long URL):
>
> http://groups-beta.google.com/group/comp.lang.python/messages/f7dea61a92f5e792,5ce65b041ee6e45a,dbf695317a6faa26,19284769722775d2,7599103bb19c7332,abc53bd83cf8f636,4e87b44745a69832,330c5eb638963459,e4c8d45fe5147867,5a184dac6131a61e?thread_id=84da7d3109e1ee14&mode=thread&noheader=1#doc_7599103bb19c7332

Another previous message on this issue:

http://groups-beta.google.com/group/comp.lang.lisp/msg/1615d8b83cca5b20

Python's syntax surely is not clean enough for concise metaprogramming.
At any rate, I'd agree with Fernando's assessment:

Fernando wrote:
> The real problem with Python is ... Python is
> going the C++ way: piling feature upon feature, adding bells
> and whistles while ignoring or damaging its core design.

If the core design were better, many "new features" in Python could
have been rendered unnecessary.



Re: Docorator Disected

2005-04-02 Thread El Pitonero
Ron_Adam wrote:
>
> # (0) Read defined functions into memory
>
> def decorator(d_arg): # (7) Get 'Goodbye' off stack
>
> def get_function(function): # (8) Get func object off stack
>
> def wrapper(f_arg):# (9) Get 'Hello' off stack
>
> new_arg = f_arg+'-'+d_arg
> result = function(new_arg)  # (10) Put new_arg on stack
> # (11) Call func object
>
> return result  # (14) Return result to wrapper
>
> return wrapper# (15) Return result to get_function
>
> return get_function# (16) Return result to caller of func
>
>
>
> @decorator('Goodbye')   # (5) Put 'Goodbye' on stack
> # (6) Do decorator
>
> def func(s):# (12) Get new_arg off stack
>
> return s# (13) Return s to result
>
> # (1) Done Reading definitions
>
>
> print func('Hello') # (2) Put 'Hello' on stack
> # (3) Put func object on stack
> # (4) Do @decorator
> # (17) print 'Hello-Goodbye'
>
> # Hello-Goodbye

Is it possible that you mistakenly believe your @decorator() is being
executed at the line "func('Hello')"?

Please add a print statement to your code:

def decorator(d_arg):
    def get_function(function):
        print 'decorator invoked'
        def wrapper(f_arg):
            new_arg = f_arg+'-'+d_arg
            result = function(new_arg)
            return result
        return wrapper
    return get_function

When you run the program, you will see that the message "decorator
invoked" is printed out at the moment when you finish defining:

@decorator('Goodbye')
def func(s):
return s

That is, decorator is invoked before you run the line "func('Hello')".

The decorator feature is a metaprogramming feature, not a programming
feature. By metaprogramming I mean you are taking a function/code
object and trying to do something with it (e.g., wrap it around.) By the
time you finish defining the function "func(s)", the decorator
"get_function()" was already invoked and will never be invoked again.

It's better to view functions as individual objects. And try to think
who holds reference to these objects. If no one holds reference to an
object, it will be garbage collected and will be gone. After you define
the function "func()" and before you execute "func('Hello')", this is
the situation:

decorator() <--- held by the module
get_function() <--- temporary object, garbage collected
wrapper() <--- held by the module, under the name "func"
func() <--- held by wrapper(), under the name "function"

'Goodbye' <--- string object, held by the wrapper function object,
under the name d_arg

Objects can be rebound to different names. In your code you have
rebound the original wrapper() and func() function objects to different
names.

I think the confusing part is that, for function name binding, Python
does not use the = operator, but instead relies on the "def" keyword.
Maybe this is something to be considered for Python 3K. Anonymous
function or codeblock objects are good to have, when you are doing
metaprogramming.



Re: Decorator Dissection

2005-04-02 Thread El Pitonero
Ron_Adam wrote:
> On 2 Apr 2005 08:39:35 -0800, "Kay Schluehr" <[EMAIL PROTECTED]>
> wrote:
>
> >There is actually nothing mysterious about decorators.
>
> I've heard this quite a few times now, but it *is* quite mysterious if
> you are not already familiar with how they work.  Or instead of
> mysterious, you could say complex, as they can be used in quite
> complex ways.

If the syntax were like:

decorator = function(d_arg) {
return function(f) {
return function(f_arg) {
new_arg = f_arg+'-'+d_arg;
return f(new_arg);
}
}
}

func = decorator('Goodbye') function(s) {
return s;
}

Would you think it would be more easily understandable? Here,
"function()" is a metafunction (or function factory) whose role is to
manufacture a function given a parameter spec and a code body. And in
the expression

func = decorator('Goodbye')(function(s){return s;})

one pair of outer parentheses has been omitted. Sure, it's not as
readable as Python's "def", but with today's syntax highlighters, the
special word "function" can be highlighted easily.
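For comparison, the hypothetical syntax above can already be
approximated in today's Python with lambda playing the role of the
anonymous "function(){...}" literal (a sketch, not a proposal):

```python
# "function(){...}" rendered as lambda; the omitted pair of
# parentheses is restored on the last line:
decorator = lambda d_arg: (
    lambda f: (
        lambda f_arg: f(f_arg + '-' + d_arg)))

func = decorator('Goodbye')(lambda s: s)
print(func('Hello'))  # Hello-Goodbye
```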

If the decorator does not have parameters, one has:

func = decorator function(s) {

}

or in the general case:

func = deco1 deco2 deco3 function(s) {

}

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Docorator Disected

2005-04-02 Thread El Pitonero
Ron_Adam wrote:
>
> So I didn't know I could do this:
>
> def foo(a1):
> def fee(a2):
> return a1+a2
> return fee
>
> fum = foo(2)(6)   <-- !!!

Ah, so you did not know functions are objects just like numbers,
strings or dictionaries. I think you may have been influenced by other
languages where there is a concept of static declaration of functions.

The last line can be better visualized as:

fum = (foo(2)) (6)

where foo(2) is a callable.

---

Since a function is an object, they can be assigned (rebound) to other
names, pass as parameters to other functions, returned as a value
inside another function, etc. E.g.:

def g(x):
return x+3

h = g # <-- have you done this before? assignment of function

print h(1) # prints 4

def f(p):
return p # <-- function as return value

p = f(h) # <-- passing a function object

print p(5) # prints 8

Python's use of "def" keyword instead of the "=" assignment operator
makes it less clear that functions are indeed objects. As I said
before, this is something to think about for Python 3K (the future
version of Python.)



Function modifiers exist in other languages. Java particularly is
loaded with them.

public static synchronized double random() {
...
}

So your new syntax:

@decorator(a1)(foo)
def foo(): 
   pass 

is a bit out of the line with other languages.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Docorator Disected

2005-04-03 Thread El Pitonero
Martin v. Löwis wrote:
> Ron_Adam wrote:
> >
> > No, I did not know that you could pass multiple sets of arguments
> > to nested defined functions in that manner.
>
> Please read the statements carefully, and try to understand the
> mental model behind them. He did not say that you can pass around
> multiple sets of arguments. He said that functions (not function
> calls, but the functions themselves) are objects just like numbers.
> There is a way of "truly" understanding this notion, and I would
> encourage you to try doing so.

I have the same feeling as Martin and Bengt. That is, Ron you are still
not getting the correct picture. The fact that you have three-level
nested definition of functions is almost incidental: that's not the
important part (despite the nested scope variables.) The important part
is that you have to understand functions are objects.

Perhaps this will make you think a bit more:

x=1

if x==1:
def f(): return 'Hello'
else:
def f(): return 'Bye'

for x in range(3):
def f(x=x):
return x

Do you realize that I have introduced 5 function objects in the above
code? Do you realize that function objects could be created *anywhere*
you can write a Python statement? Whether it's inside another function,
or inside a if...else... statement, or inside a loop, doesn't matter.
Whereever you can write a Python statement, you can create a function
there. I don't know what your previous programming language is, but you
have to stop treating functions as "declarations". The "def" is an
executable statement.

Another example:

def f():
   return f

g = f()()()()()()()()()()()

is perfectly valid.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorator Base Class: Needs improvement.

2005-04-05 Thread El Pitonero
Scott David Daniels wrote:
> Ron_Adam wrote:
> > ...
>
>  def tweakdoc(name):
>  def decorator(function):
>   function.__doc__ = 'Tweak(%s) %r' % (name, function.__doc__)
>   return function
>  return decorator
>
> What is confusing us about what you write is that you are referring to
> tweakdoc as a decorator, when it is a function returning a decorator.

"Decorator factory" would be a shorter name for "a function returning a
decorator".

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decorator Base Class: Needs improvement.

2005-04-06 Thread El Pitonero
Bengt Richter wrote:
> On 5 Apr 2005 19:28:55 -0700, "El Pitonero" <[EMAIL PROTECTED]> wrote:
>
> >Scott David Daniels wrote:
> >> Ron_Adam wrote:
> >> > ...
> >>
> >>  def tweakdoc(name):
> >>  def decorator(function):
> >>function.__doc__ = 'Tweak(%s) %r' % (name, function.__doc__)
> >>return function
> >>  return decorator
> >>
> >> What is confusing us about what you write is that you are referring
> >> to tweakdoc as a decorator, when it is a function returning a
> >> decorator.
> >
> >"Decorator factory" would be a shorter name for "a function returning
> >a decorator".
> >
> True, but tweakdoc doesn't have to be a function, so IMO we need a
> better name for the @-line, unless you want to use many various
> specific names like factory. E.g.,

There are two things:

(1) The "tweakdoc" object in the example, which no doubt can be called a
decorator factory.

(2) The @-line, which you called a "decorator expression" and that's
fine with me. My preference would be something like the "decorator
header". A more clear statement would be something like: a "decorator
header expression" or the "expression in the decorator header", though
your proposed "decorator expression" would be clear enough, too.

I was addressing (1). You jumped in with (2), which I was aware of and
was not dissenting.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Puzzling OO design problem

2005-04-09 Thread El Pitonero
It may be useful to separate the code into version-independent part and
version-dependent part. Also, one can try to implement the higher-level
logic directly in the class definition of A, B, etc., and then use the
version objects only as patches for the details. That is, one can use
place-holder calls. The place-holder calls do nothing if a feature is
not really implemented (either in a parent class, or in an older
version).

class World(object):

    def __init__(w, version):

        class A(object):
            def ff(self): pass  # place holder for version-dependent code
            def f(self):  # version-independent code
                return self.ff()

        class B(A):
            def gg(self): pass
            def g(self):
                return self.gg()

        for cls in (A, B):
            setattr(w, cls.__name__, w.versionize(cls, version))

    def versionize(w, cls, version):
        import inspect
        import new
        methods = inspect.getmembers(version, inspect.ismethod)
        methods = [m[1] for m in methods
                   if m[0].split('_')[0] == cls.__name__]
        for m in methods:
            m_name = '_'.join(m.__name__.split('_')[1:])
            im = new.instancemethod(m.im_func, None, cls)
            setattr(cls, m_name, im)
        return cls

class Version1(object):
    def A_ff(self):
        return 'A.ff: version 1'
    def B_gg(self):
        return 'B.gg: version 1'

class Version2(Version1):
    def A_ff(self):
        return 'A.ff: version 2'
    def B_ff(self):
        return 'B.ff: version 2'

w1, w2 = World(Version1), World(Version2)
a1, b1 = w1.A(), w1.B()
a2, b2 = w2.A(), w2.B()

print a1.f()  # prints 'A.ff: version 1'
print b1.f()  # prints 'A.ff: version 1'
print b1.g()  # prints 'B.gg: version 1'
print ''
print a2.f()  # prints 'A.ff: version 2'
print b2.f()  # prints 'B.ff: version 2'
print b2.g()  # prints 'B.gg: version 1'

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: a=[ lambda t: t**n for n in range(4) ]

2005-04-22 Thread El Pitonero
Bengt Richter wrote:
> I still don't know what you are asking for, but here is a toy,
> ...
> But why not spend some time with the tutorials, so have a few more
> cards in your deck before you try to play for real? ;-)

Communication problem.

All he wanted is automatic evaluation a la spreadsheet application.
Just like in Microsoft Excel. That's all.

There are many ways for implementing the requested feature. Here are
two:

(1) Push model: use event listeners. Register dependent quantities as
event listeners of independent quantities. When an independent quantity
is modified, fire off the event and update the dependent quantities.
Excel uses the push model.

(2) Pull model: lazy evaluation. Have some flag on whether an
independent quantity has been changed. When evaluating a dependent
quantity, survey its independent quantities recursively, and update the
cached copies whereever necessary.

Of course, combination of the two approaches is possible.

For Python, metaclasses and/or decorators and/or properties may help.

But functional languages are a more natural territory.
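As a rough illustration of the pull model described above, here is a
minimal sketch (my own; the Cell/Derived names are made up) that uses
version stamps as the "has been changed" flag and recomputes a cached
copy only when an input is stale:

```python
class Cell:
    """An independent quantity; set() bumps a version stamp."""
    def __init__(self, value):
        self.value, self.version = value, 0
    def set(self, value):
        self.value = value
        self.version += 1

class Derived:
    """A dependent quantity, recomputed lazily on get()."""
    def __init__(self, func, *cells):
        self.func, self.cells = func, cells
        self.seen = None    # input versions at last computation
        self.cache = None
    def get(self):
        versions = tuple(c.version for c in self.cells)
        if versions != self.seen:   # pull: survey inputs, refresh if stale
            self.cache = self.func(*(c.value for c in self.cells))
            self.seen = versions
        return self.cache

a, b = Cell(2), Cell(3)
s = Derived(lambda x, y: x + y, a, b)
print(s.get())  # 5
a.set(10)
print(s.get())  # 13 -- recomputed because a changed
```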

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how can I sort a bunch of lists over multiple fields?

2005-04-30 Thread El Pitonero
googleboy wrote:
>
> I am reading in a csv file that documents a bunch of different info on
> about 200 books, such as title, author, publisher, isbn, date and
> several other bits of info too.
> ...
> I really want to be able to sort the list of books based on other
> criterium, and even multiple criteria (such as by author, and then by
> date.)

input = open(r'c:\books.csv', 'r')
records = input.readlines()
input.close()

# assuming first line contains headers
headers = records.pop(0)
records = [x.strip().split(',') for x in records]

# header order
p_headers = '(title, author, publisher, isbn, date, other)'
p_sorts = '(author, title, date, publisher, other, isbn)'

temp_records = []
for r in records:
    exec '%(p_headers)s = r' % vars()
    exec 't = %(p_sorts)s' % vars()
    temp_records.append(t)

temp_records.sort()

sorted_records = []
for t in temp_records:
    exec '%(p_sorts)s = t' % vars()
    exec 'r = %(p_headers)s' % vars()
    sorted_records.append(r)

lines = [headers] + [','.join(x) + '\n' for x in sorted_records]
output = open(r'c:\output.csv', 'w')
output.writelines(lines)
output.close()
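For what it's worth, the same multi-key sort can be written without
exec by using a key function; this is an alternative sketch with
made-up sample rows (column order follows the headers above):

```python
import operator

# Columns: title, author, publisher, isbn, date, other (indices 0..5).
rows = [
    ['B-Title', 'Smith', 'Acme', '111', '2001', 'x'],
    ['A-Title', 'Jones', 'Acme', '222', '1999', 'y'],
    ['C-Title', 'Jones', 'Beta', '333', '1998', 'z'],
]
# Sort by author, then title, then date -- same idea as p_sorts above:
rows.sort(key=operator.itemgetter(1, 0, 4))
print([r[1] for r in rows])  # ['Jones', 'Jones', 'Smith']
```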

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: time.clock()

2006-07-14 Thread El Duderino
Tobiah wrote:
> Am I barking up the wrong tree?

I don't think so, time.clock() has always worked fine for me. You can 
also try time.time(). It is not as precise, but it might be sufficient 
for your needs.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Subprocess with a Python Session?

2006-12-07 Thread El Pitonero
Paul Boddie wrote:
> Shane Hathaway wrote:
> >
> > Make sure the pipes are unbuffered.  Launch the process with "python -u"
> > and flush() the streams after writing.  (That's the issue I've
> > encountered when doing this before.)
>
> The -u option is critical, yes. I wrote some code recently which
> communicated with a subprocess where the input/output exchanges aren't
> known in advance, and my technique involved using socket.poll and one
> character reads from the subprocess. I note that Pexpect is also
> conservative about communicating with subprocesses (see the "$ regex
> pattern is useless" section on the Pexpect site [1]).
>
> Generally, any work with asynchronous communications, or even
> socket/pipe programming in a wider sense, seems to involve ending up
> with working code that looks a lot like the code you started out with,
> but only after a period of intense frustration and with a few minor
> adjustments being made to separate the two.

Is there something equivalent to the "-u" option for a shell like
"bash"? In general (whether the subprocess is bash or python), how can
one make sure that once something is written into the subprocess'
stdin, the output from its stdout is fully completed and the subprocess
is ready to receive another command? Is there some kind of signal or
return status code one can capture?
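For the Python-to-Python case at least, the "-u" plus flush() technique
quoted above can be sketched like this (a minimal echo example; the
child's one-liner is made up for illustration):

```python
import subprocess
import sys

# Child: read one line from stdin and echo it back. "-u" keeps the
# child's stdout unbuffered, so the echo is not stuck in a buffer.
child = subprocess.Popen(
    [sys.executable, '-u', '-c',
     'import sys; sys.stdout.write(sys.stdin.readline())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

child.stdin.write(b'hello\n')
child.stdin.flush()              # flush our side of the pipe too
line = child.stdout.readline()
print(line)                      # b'hello\n'
child.stdin.close()
child.wait()
```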

-- P.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python vs. Lisp -- please explain

2006-02-20 Thread El Loco
Kay Schluehr wrote:
> Yes, it's Guidos master-plan to lock programmers into a slow language
> in order to dominate them for decades. Do you also believe that Al
> Quaida is a phantom organization of the CIA founded by neocons in the
> early '90s who planned to invade Iraq?

Actually, it was created by Bruce Lee, who is not dead but working
undercover for the Hong Kong police to fight the Chinese triads. At
this point you might wonder what Bruce Lee has to do with Al Qaida.
Well my friend, if you don't understand this, you don't get it at all!
(Hint: he's a C++ hacker who wanted to build a tracking system for
terrorists in the Middle East, but the project got halted when Guido
van Rossum, whose real name is Abdul Al Wazari, convinced him to use
Python). Now the system is so slow that Bin Laden never gets caught!

El Loco

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: So what exactly is a complex number?

2007-09-05 Thread El Pitonero
On Sep 1, 3:54 am, Grzegorz Słodkowicz <[EMAIL PROTECTED]> wrote:
>
> You're mixing definition with application. You didn't say a word about
> what complex numbers are, not a word about the imaginary unit, where
> does it come from, why is it 'imaginary' etc.  
> ...
> I'd also like to see a three-dimensional vector
> represented by a complex number.

Well, maybe you'd like to learn something about Geometric Algebra. :)

I am a bit surprised that today, September 2007, in a thread about
complex numbers, no one has mentioned about geometric algebra. There
is an older way of looking at complex numbers: the imaginary unit as
square root of -1. And then there is a new way of looking at complex
numbers: as the multi-vector space associated to the two-dimensional
vector space. So, yes, complex numbers are a bit like vectors, but
more precisely, they are "multi-vectors", where the first component
(the real part) is a "scalar", and the second part (the imaginary
part) is an "area".

This may all be just paraphrasing. But it gets more interesting when
you go to higher dimensions. You'd like to know whether there are
extensions of complex numbers when you go to three-dimensional space,
and the answer is definitely YES! But the new multivectors live in an
8-dimensional space. The geometric product not only makes this
possible, but this product is invertible. Moreover, complicated
equations in electromagnetism in physics (Maxwell's equations) can be
written in a single line when you use geometric algebra. When you see
some of the features of geometric algebra, you will realize that
complex numbers are but a small part of it. (There is a paper with the
title "Imaginary numbers are not real...", I guess the title says it
all.)

Anyway, there are always many ways of looking at the same thing.
Geometric algebra is one. Who knows what tomorrow brings? But as of
today, I'd say that it's better to teach school children about
geometric algebra, instead of the present way of introducing imaginary
unit. Just my opinion.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: So what exactly is a complex number?

2007-09-07 Thread El Pitonero
On Sep 5, 7:27 am, El Pitonero <[EMAIL PROTECTED]> wrote:
>
> I am a bit surprised that today, September 2007, in a thread about
> complex numbers, no one has mentioned about geometric algebra.

Here is a good reference for whoever is interested. It's quite
accessible to general audience.

http://www.xtec.es/~rgonzal1/treatise.pdf

If a person spends some time to look at the geometric algebra, it will
become clear that complex numbers are not that special, after all.
Hopefully the relationship between the 2-d vector plane and the
complex plane will also become more clear, as complex numbers can be
understood as rotation-dilation operators over vectors. One also
learns that complex numbers are based on a metric assumption of square
of vectors (norm) being positive (a.k.a Euclidean space). There is
nothing sacred about positively-defined metric, and in fact if one
uses mixed signature metric (pseudo-Euclidean space), one comes up
with hyperbolic numbers instead of complex numbers.
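The rotation-dilation view mentioned above is easy to check
numerically; a small sketch using the standard cmath module (my own
illustration, not from the referenced treatise):

```python
import cmath
import math

v = complex(1, 0)                # the vector (1, 0) as a complex number
op = cmath.rect(2, math.pi / 2)  # operator: dilation by 2, rotation by 90 deg
w = op * v                       # multiplication applies the operator
print(w)                         # approximately 2j, i.e. the vector (0, 2)
```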

-- 
http://mail.python.org/mailman/listinfo/python-list


[Ann] New super python vm

2009-04-01 Thread El Loco
Hi all,

This is to announce that just a few weeks after our first coding
sprint, our project, "Unswallowed-snot", has already achieved
substantial results.
In our tests, runtime performance shows a 150x slowdown.
This is due mainly to our lead developer (myself) still not knowing
enough Python, but we expect the situation to improve, or not, in the
next couple of months.

Interested hackers, please drop me an email.
Thanks!
l&f
--
http://mail.python.org/mailman/listinfo/python-list


Re: New Python Logo Revealed

2006-04-03 Thread El Loco
Alec Jang wrote:
> I guess it is absolutely a joke. If not, there will be a disaster, and
> that means ruby will rule the world.

Yes, we'll become slaves, our leaders crucified, and our culture will
vanish forever...

-- 
http://mail.python.org/mailman/listinfo/python-list


Fourth example from PEP 342

2018-01-19 Thread Léo El Amri
Hello list,

I am currently trying to learn co-routine/asynchronous mechanisms in
Python. I read the PEP 342, but I stumble on the fourth example.
I don't understand what the lines "data = yield nonblocking_read(sock)"
in echo_handler() and "connected_socket = yield
nonblocking_accept(sock)" in listen_on() are trying to do.

For example, can someone explain me how the returned value in the line
"connected_socket = yield nonblocking_accept(sock)" can be used on the
next line ("trampoline.add(handler(connected_socket))") ? To me, it
looks like the returned value is lost in the Trampoline, when resume()
gets the returned value of the yield expression.
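For reference, the underlying generator mechanism (PEP 342's send())
can be sketched in isolation; this toy example is mine, not the PEP's
trampoline: the value passed to send() becomes the result of the
paused yield expression.

```python
def toy():
    received = yield 'request'   # pauses here; send() resumes with a value
    yield received * 2

g = toy()
first = next(g)       # runs up to the first yield; first == 'request'
second = g.send(21)   # 21 becomes the value of the paused yield expression
print(first, second)  # request 42
```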

Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fourth example from PEP 342

2018-01-20 Thread Léo El Amri
On 20/01/2018 11:55, Thomas Jollans wrote:
> control is returned to t.resume. nonblocking_accept is supposed to be a
> coroutine
> Ergo, it schedules the nonblocking_accept coroutine (‘value’) to be
> called on the next iteration, and keeps the running listen_on coroutine
> (‘coroutine’) on the stack for safekeeping.

Hello Thomas,

I missed THE point of the examples. nonblocking_accept is a coroutine. I
was stuck in the C way of doing non blocking, and I was wondering what
was the magic behind the examples.

Perfect explanation, now I understand. Still, I also wonder how these
"nonblocking_" functions are meant to be implemented. But that's another
topic.

Thanks.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Treatment of NANs in the statistics module

2018-03-17 Thread Léo El Amri
On 17/03/2018 00:16, Steven D'Aprano wrote:
> The bug tracker currently has a discussion of a bug in the median(), 
> median_low() and median_high() functions that they wrongly compute the 
> medians in the face of NANs in the data:
> 
> https://bugs.python.org/issue33084
> 
> I would like to ask people how they would prefer to handle this issue:

TL;DR: I choose (5)

I agree with Terry Reedy's proposal for (5); however, I want
to define precisely what we mean by "ignore".
In my opinion "ignoring" should be more like "stripping". In the case
the number of data points is odd, we can return a NAN without any
concerns. But in the case the number of data points is even, and at
least one of the two middle values is a NAN, we're probably going to
have an exception raised. In this case, to not over-complicate things, I
think we should go with this meaning for "ignore": "Removing" NAN before
actual data points processing. In this case, we should have two possible
options for the keyword argument "nan": 'strip' (Which does what I just
described) and 'raise' (Which raises an exception if there is a NAN in
the data points).
We should still consider adding an "ignore" option in a later time. This
option would blindly ignore NAN values. If an exception is encountered
during the actual processing (Let's say we have an even number of data
points, and a NAN in one of the two values), it is raised up to the caller.

From my point of view, I prefer (5), with a default of 'strip'. The
advantage of (1) being the fastest (in terms of running time, I
believe; tell me if I'm wrong) can still be achieved with the 'ignore'
option.

Going with (1) would force Python developers to write implementation
specific code (Oh rather "implementation-defined-prone" code). In this
case (5) goes easy with Python-side code.

Options (2) to (4) force Python developers to adopt a behavior.
It's not necessarily a bad thing, but since (5) allows flexibility at
no cost, I don't see why we shouldn't go with it.
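The 'strip' behavior argued for above can be sketched as a plain
helper (the nan= keyword being discussed is hypothetical; this is not
the statistics module API):

```python
import math
import statistics

def median_strip(data):
    """median with nan='strip' semantics: drop NANs, then compute."""
    cleaned = [x for x in data if not math.isnan(x)]
    return statistics.median(cleaned)

print(median_strip([1.0, float('nan'), 3.0, 2.0]))  # 2.0
```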


--
Léo El Amri
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python-daemon for Python v3

2014-08-22 Thread Y@i$el
It did give me problems, though. I have no way to tell it to run as the
www-data user, and when I try, it stops working abruptly. Regards.

-- 
https://mail.python.org/mailman/listinfo/python-list


Spanish Translation of any python Book?

2006-01-13 Thread Olaf \&quot;El Blanco\"
Maybe there is someone here who speaks Spanish. I need the best Spanish
book for learning Python.

Thanks!



-- 
*
"...Winds and storms, embrace us now,
Lay waste the light of day.
Open gates to darker lands...
We spread our wings and fly away..."
* 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Spanish Translation of any python Book?

2006-01-13 Thread Olaf \&quot;El Blanco\"
:)

So Sorry!


"gene tani" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>
> Olaf "El Blanco" wrote:
>> Maybe there is someone that speak spanish. I need the best spanish book 
>> for
>> learning python.
>>
>
> you should learn Chinese, Korean and Russian so you can read this many
> times
> http://diveintopython.org/#languages
> 


-- 
http://mail.python.org/mailman/listinfo/python-list

RE: Event objects Threading on Serial Port on Win32

2006-07-03 Thread el cintura partida
Thank you very much, Gabriel, for letting me know; you really
are a professional programmer.

Regards,

David
 --- Gabriel <[EMAIL PROTECTED]> wrote:

> David:
> I had the same problem as you with the thread from the
> pyserial example. On Linux it worked fine, obviously,
> but it had a small issue on Windows, also obviously.
> 
> I solved it as follows.
> This is the original code of the ComPortThread function:
> 
> def ComPortThread(self):
>     """Thread that handles the incoming traffic. Does
>     the basic input transformation (newlines) and
>     generates a SerialRxEvent"""
>     while self.alive.isSet():  # loop while alive event is true
>         if self.ser.inWaiting() != 0:
>             text = self.ser.read()
>             event = SerialRxEvent(self.GetId(), text)
>             self.GetEventHandler().AddPendingEvent(event)
> 
> You just have to add the following loop to it before
> anything else:
> 
>     while not self.alive.isSet():
>         pass
> 
> leaving the function like this...
> 
> def ComPortThread(self):
>     """Thread that handles the incoming traffic. Does
>     the basic input transformation (newlines) and
>     generates a SerialRxEvent"""
>     while not self.alive.isSet():
>         pass
> 
>     while self.alive.isSet():  # loop while alive event is true
>         if self.ser.inWaiting() != 0:
>             text = self.ser.read()
>             event = SerialRxEvent(self.GetId(), text)
>             self.GetEventHandler().AddPendingEvent(event)
> 
> And that's it... With that it should work.
> I hope this has been useful.
> 
> -- 
> Gabriel
> 






-- 
http://mail.python.org/mailman/listinfo/python-list


Why is multiprocessing.Queue creating a thread ?

2018-07-29 Thread Léo El Amri via Python-list
Hello list,

This is a simple question: I wonder what is the reason behind
multiprocessing.Queue creating a thread to send objects through a
multiprocessing.connection.Connection.
I plan to implement an asyncio "aware" Connection class. And while
reading the source code of the multiprocessing module, I found that (As
outlined in the documentation) Queue is indeed creating a thread. I want
to know if I'm missing something.
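For anyone reproducing this, the feeder thread is easy to observe (a
small sketch of my own; the thread is started lazily on the first
put(), which serializes objects in the background):

```python
import multiprocessing
import threading

q = multiprocessing.Queue()
before = threading.active_count()
q.put('x')   # the first put() lazily starts the feeder thread
q.get()
after = threading.active_count()
print(after - before)  # 1 -- the feeder thread the post asks about
```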

-- 
Leo
-- 
https://mail.python.org/mailman/listinfo/python-list


asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-02 Thread Léo El Amri via Python-list
Hello list,

During my attempt to bring asyncio support to the multiprocessing Queue,
I found warning messages when executing my code with asyncio debug
logging enabled.

It seems that awaiting an Event will make these messages appear
after the second attempt to wait for the Event.

Here is a code sample to test this:

import asyncio
import threading
import os
import io
import time
import logging

logging.basicConfig()
logging.getLogger('asyncio').setLevel(logging.DEBUG)

fd1, fd2 = os.pipe()  # (r, w)
loop = asyncio.get_event_loop()
event = asyncio.Event()

def readfd(fd, size):
    buf = io.BytesIO()
    handle = fd
    remaining = size
    while remaining > 0:
        chunk = os.read(handle, remaining)
        n = len(chunk)
        if n == 0:
            raise Exception()
        buf.write(chunk)
        remaining -= n
    return buf.getvalue()

def f():
    loop.add_reader(fd1, event.set)
    loop.run_until_complete(g())
    loop.close()

async def g():
    while True:
        await event.wait()
        msg = readfd(fd1, 4)
        event.clear()
        if msg == b'STOP':
            break
        print(msg)

if __name__ == '__main__':
    t = threading.Thread(target=f)
    t.start()
    time.sleep(1)
    os.write(fd2, b'TST1')
    time.sleep(1)
    os.write(fd2, b'TST2')
    time.sleep(1)
    os.write(fd2, b'TST3')
    time.sleep(1)
    os.write(fd2, b'STOP')
    t.join()

I get this output:

DEBUG:asyncio:Using selector: EpollSelector
DEBUG:asyncio:poll took 999.868 ms: 1 events
b'TST1'
b'TST2'
WARNING:asyncio:Executing  took 1.001 seconds
INFO:asyncio:poll took 1000.411 ms: 1 events
b'TST3'
WARNING:asyncio:Executing  took 1.001 seconds
DEBUG:asyncio:Close <_UnixSelectorEventLoop [CUT]>

I don't understand this message, because it seems that the event is
awaited properly. I'm especially puzzled because I get this output from
the actual code I'm writing:

WARNING:asyncio:Executing  took 1.000 seconds
DEBUG:asyncio:poll took 0.012 ms: 1 events

Here, it seems that "polling" ran for 0.012 ms, when it should have gone
for 1000.0 ms. In any case, it still works. However, the fact that
polling seems to "return" instantly is strange.

The test code is not producing this difference. In fact, it seems that
polling is working as intended (Polling takes 1000.0 ms and Event is
awaited for 1000.0 ms). But there is still this warning...

Maybe one of you have an insight on this ?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-03 Thread Léo El Amri via Python-list
Thanks for the answer, but the problem is that this is happening with
the built-in Event of the asyncio package (whose wait() is actually a
coroutine). I don't expect the built-in to have this kind of behavior. I
guess I'll have to dig into the source code of the asyncio default loop
to actually understand how the whole thing behaves in the shadows. Maybe
I'm simply doing something wrong. But it would mean that the
documentation is lacking some details, or maybe that I'm just really
stupid on this one.

On 03/08/2018 07:05, dieter wrote:
> Léo El Amri via Python-list  writes:
>> ...
>> WARNING:asyncio:Executing  took 1.000 seconds
>> ...
>>  But there is still this warning...
> 
> At your place, I would look at the code responsible for the warning.
> I assume that it is produced because the waiting time is rather
> high -- but this is just a guess.
> 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedded Python and multiprocessing on Windows?

2018-08-09 Thread Léo El Amri via Python-list
On 09/08/2018 19:33, Apple wrote:
> So my program runs one script file, and multiprocessing commands from
> that script file seem to fail to spawn new processes.
> 
> However, if that script file calls a function in a separate script file that 
> it has imported, and that function calls multiprocessing functions, it all 
> works exactly the way it should.
> On Thursday, August 9, 2018 at 12:09:36 PM UTC-4, Apple wrote:
>> I've been working on a project involving embedding Python into a Windows 
>> application. I've got all of that working fine on the C++ side, but the 
>> script side seems to be hitting a dead end with multiprocessing. When my 
>> script tries to run the same multiprocessing code that works in a 
>> non-embedded environment, the code doesn't appear to be executed at all.
>>
>> Still no joy. However, a Python.exe window does pop up for a tenth of a 
>> second, so *something* is happening.

That may be something simple: did you actually protect the entry point
of your Python script with if __name__ == '__main__': ?
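For reference, a minimal sketch of that guard (the names are
illustrative): on Windows, multiprocessing starts each child by
re-importing the main module, so the spawning code must be protected or
the child would try to spawn processes recursively:

```python
import multiprocessing

def worker(q):
    q.put('hello from the child')

if __name__ == '__main__':
    # Only the parent process reaches this block; a re-imported child
    # skips it and just gets the worker() definition.
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    print(q.get())   # 'hello from the child'
    p.join()
```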
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-12 Thread Léo El Amri via Python-list
I found out what was the problem.

The behavior of my "reader" (The callback passed to
AbstractEventLoop.add_reader()) is to set an event. This event is
awaited for in a coroutine which actually reads what is written on a
pipe. The execution flow is the following:

* NEW LOOP TURN
* The selector awaits a read-ready state on a file descriptor AND the
coroutine awaits the event
* At some point, the file descriptor becomes read-ready, and the
selector queues the callback to set the event on the loop
* The callback put on the queue is called and the event is set
* NEW LOOP TURN
* The selector awaits a read-ready state on the file descriptor AND the
coroutine awaits the event
* The file descriptor is read-ready, so the selector queues the callback
to set the event on the loop
* The coroutine is resumed, because the event is now set; the coroutine
then reads what was written on the file descriptor, clears the
event and awaits it again
* The callback put on the queue is called and the event is set
* NEW LOOP TURN
* The selector awaits a read-ready state on the file descriptor AND the
coroutine awaits the event
* The coroutine is resumed, because the event was set at the end of the
last turn, so the coroutine attempts a read, but blocks on it, because
the file descriptor is actually not read-ready

Rinse and repeat.

This behavior actually depends on how the loop is implemented. So the
best practice here is to actually do the read from the callback you
give to AbstractEventLoop.add_reader(). I think we should put a mention
of this in the documentation.
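A minimal sketch of that practice (reusing the TST1/STOP messages from
the earlier test code): the read happens inside the callback given to
add_reader(), so the file descriptor is only read when it is actually
ready:

```python
import asyncio
import os

r, w = os.pipe()
loop = asyncio.new_event_loop()
received = []

def on_ready():
    # The read happens here, in the callback, while the fd is ready.
    received.append(os.read(r, 4))
    if received[-1] == b'STOP':
        loop.stop()

loop.add_reader(r, on_ready)
os.write(w, b'TST1')
os.write(w, b'STOP')
loop.run_forever()
loop.remove_reader(r)
loop.close()
os.close(r)
os.close(w)
print(received)  # [b'TST1', b'STOP']
```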
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 3.6 Logging time is not listed

2018-08-13 Thread Léo El Amri via Python-list
On 13/08/2018 19:23, MRAB wrote:
> Here you're configuring the logger, setting the name of the logfile and
> the logging level, but not specifying the format, so it uses the default
> format:
> 
>> logging.basicConfig(filename='example.log',level=logging.DEBUG)
> 
> Here you're configuring the logger again, this time specifying the
> format and the logging level, but not a path of a logging file, so it'll
> write to the console:
> 
>> logging.basicConfig(format='%(asctime)s;%(levelname)s:%(message)s',
>> level=logging.DEBUG)
> 
> The second configuration is overriding the first.

No, the second is not overriding the first one. The second call simply
does nothing at all. See
https://docs.python.org/3/library/logging.html#logging.basicConfig :
"This function does nothing if the root logger already has handlers
configured for it."
-- 
https://mail.python.org/mailman/listinfo/python-list


>< swap operator

2018-08-13 Thread Léo El Amri via Python-list
On 13/08/2018 21:54, skybuck2...@hotmail.com wrote:
> I just had a funny idea how to implement a swap operator for types:
> 
> A >< B
> 
> would mean swap A and B.

I think that:

a, b = b, a

is pretty enough
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [OT] master/slave debate in Python

2018-09-24 Thread Léo El Amri via Python-list
On 24/09/2018 14:52, Robin Becker wrote:
> On 23/09/2018 15:45, Albert-Jan Roskam wrote:
>> *sigh*. I'm with Hettinger on this.
>>
>> https://www.theregister.co.uk/2018/09/11/python_purges_master_and_slave_in_political_pogrom/
>>
>>
> I am as well. Don't fix what is not broken. The semantics (in
> programming) might not be an exact match, but people have been using
> these sorts of terms for a long time without anyone objecting. This sort
> of language control is just thought control of the wrong sort.

All the admissible arguments have already been made, thanks to Terry,
Steven, Raymond and others, who all managed to keep their capability to
think rationally despite the motive of this bug report.

Hopefully we have competent core devs.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [OT] master/slave debate in Python

2018-09-24 Thread Léo El Amri via Python-list
On 24/09/2018 18:30, Dan Purgert wrote:
> Robin Becker wrote:
>> [...] just thought control of the wrong sort..
> 
> Is there "thought control of the right sort"?

We may have to ask Huxley
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [OT] master/slave debate in Python

2018-09-26 Thread Léo El Amri via Python-list
On 26/09/2018 06:34, Ian Kelly wrote:
> Chris Angelico  wrote:
>> What I know about them is that they (and I am assuming there are
>> multiple people, because there are reports of multiple reports, if
>> that makes sense) are agitating for changes to documentation without
>> any real backing.
> 
> The terminology should be changed because it's offensive, full stop.
> It may be normalized to many who are accustomed to it, but that
> doesn't make it any less offensive.

Come on! I have nothing to add to what Terry said:
https://bugs.python.org/msg324773

Now, the bug report does still point out some places in the code where
the master/slave terminology is misused, for _technical_ reasons. And
none of us should be blinded by the non-technical motive of the
bug report. We should do what we have to do, and let the rest of it
sink.

> Imagine if the terminology were instead "dominant / submissive".
> Without meaning to assume too much, might the cultural context
> surrounding those terms make you feel uncomfortable when using them?

I couldn't care less either. The meaning of words is given by the context.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio await different coroutines on the same socket?

2018-10-03 Thread Léo El Amri via Python-list
Hello Russell,

On 03/10/2018 15:44, Russell Owen wrote:
> Using asyncio I am looking for a simple way to await multiple events where 
> notification comes over the same socket (or other serial stream) in arbitrary 
> order. For example, suppose I am communicating with a remote device that can 
> run different commands simultaneously and I don't know which command will 
> finish first. I want to do this:
> 
> coro1 = start(command1)
> coro2 = start(command2)
> asyncio.gather(coro1, coro2)
> 
> where either command may finish first. I’m hoping for a simple and 
> idiomatic way to read the socket and tell each coroutine it is done. So far 
> everything I have come up with is ugly, using multiple layers of "async 
> def”, keeping a record of Tasks that are waiting and calling "set_result" 
> on those Tasks when finished. Also Task isn’t even documented to have the 
> set_result method (though "future" is)
I don't really get what you want to achieve. Do you want to signal the
other coroutines that one of them finished?

From what I understand, you want to have several coroutines reading on
the same socket "simultaneously", and you want to stop all of them once
one of them is finished. Am I getting it right ?

-- 
Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Package creation documentation?

2018-10-16 Thread Léo El Amri via Python-list
Given your coding experience, you may also want to look at
https://docs.python.org/3/reference/import.html#packages,
which covers the technical details of what a package is (and "how" it
is implemented).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Package creation documentation?

2018-10-16 Thread Léo El Amri via Python-list
Hello Spencer,

On 16/10/2018 17:15, Spencer Graves wrote:
>   Where can I find a reasonable tutorial on how to create a Python
> package?

IMO, the best documentation about this is the tutorial:
https://docs.python.org/3/tutorial/modules.html#packages

>   According to the Python 3 Glossary, "a package is a Python module
> with an __path__ attribute."[1]

What you are looking at are the technical details of what a package is.
Incidentally, if you follow the tutorial, everything will fall into place.

>   I found "packaging.python.org", which recommends "Packaging Python
> Projects"[2] and "An Overview of Packaging for Python".[3]

packaging.python.org is centered on "How to install and distribute
Python packages (Or modules)"

- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio tasks getting cancelled

2018-11-04 Thread Léo El Amri via Python-list
On 04/11/2018 20:25, i...@koeln.ccc.de wrote:
> I'm having trouble with asyncio. Apparently tasks (asyncio.create_task)
> are not kept referenced by asyncio itself, causing the task to be
> cancelled when the creating function finishes (and noone is awaiting the
> corresponding futue). Am I doing something wrong or is this expected
> behavior?
> 
> The code sample I tried:
> 
>> import asyncio
>>
>> async def foobar():
>> print(1)
>> await asyncio.sleep(1)
>> print(2)
>>
>> async def main():
>> asyncio.create_task(foobar())
>> #await asyncio.sleep(2)
>>
>> loop = asyncio.get_event_loop()
>> asyncio.run(main())
>> loop.run_forever()
> 

I don't know anything about asyncio in Python 3.7, but given the
documentation, asyncio.run() will start a loop and run the coroutine
in it until there is nothing left to do, then free the loop it
created. I assume it's a kind of run_forever() with some code before it
to schedule the coroutine.

Thus, I don't think it's appropriate to allocate the loop yourself
with get_event_loop() and then run it with run_forever().
Usually, when you're already running inside a coroutine, you use
"await other_coroutine()" instead of
"asyncio.create_task(other_coroutine())".

But it should not be the root cause of your issue. With this
information alone, it looks like a Python bug to me (or an
implementation-defined behavior; by the way, what is your platform?).

You could try to replace either asyncio.create_task() or
asyncio.run() with the "older" Python API (Python 3.6, preferably).
In the case of run() it would be a simple loop.create_task(main())
instead of asyncio.run(main()).
In the case of asyncio.create_task() it would be
asyncio.get_event_loop().create_task(foobar()).
But anyway, I highly recommend using the "await other_coroutine()"
syntax I talked about earlier. It may even fix the issue (90% chance).

I don't have Python 3.7 right now, so I can't help further. I hope
someone with knowledge in asyncio under Python 3.7 will be able to help.

- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio tasks getting cancelled

2018-11-05 Thread Léo El Amri via Python-list
On 05/11/2018 07:55, Ian Kelly wrote:
>> I assume it's kind of a run_forever() with some code before it
>> to schedule the coroutine.
> 
> My understanding of asyncio.run() from
> https://github.com/python/asyncio/pull/465 is that asyncio.run() is
> more or less the equivalent of loop.run_until_complete() with
> finalization, not loop.run_forever(). Thus this result makes sense to
> me: asyncio.run() runs until main() finishes, then stops. Without the
> sleep(2), main() starts creates the foobar() task and then returns
> without awaiting it, so the sleep(1) never finishes. asyncio.run()
> also finalizes its event loop, so I assume that the loop being
> run_forever must be a different event loop since running it doesn't
> just raise an exception. Therefore it doesn't resume the foobar()
> coroutine since it doesn't know about it.

That totally makes sense. I agree with this supposition.


> If the goal here is for the task created by main() to complete before
> the loop exits, then main() should await it, and not just create it
> without awaiting it.

I said the following:

> Usually, when you're already running into a coroutine, you're using
> "await other_coroutine()" instead of
> "asyncio.create_task(other_coroutine())".

Which is not accurate. What Ian said is accurate. One may indeed need
to schedule a coroutine in some situations. I just assumed that wasn't
what the OP intended to do, given the name of the "main" function.

I also said:

> But anyway, I highly recommend you to use the "await other_coroutine()"
> syntax I talked about earlier. It may even fix the issue (90% chance).

This should indeed fix the issue, but it is definitely not what one is
looking for if one really wants to _schedule_ a coroutine.

- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio tasks getting cancelled

2018-11-05 Thread Léo El Amri via Python-list
On 05/11/2018 16:38, i...@koeln.ccc.de wrote:
> I just saw, actually
> using the same loop gets rid of the behavior in this case and now I'm
> not sure about my assertions any more.

It fixes the issue because you're running the loop with
run_forever(). As Ian and I pointed out, using both asyncio.run()
and loop.run_forever() is not what you are looking for.

> Yet it still looks like asyncio
> doen'st keep strong references.

You may want to look at PEP 3156 [1], the PEP defining asyncio.
Ian made a good explanation of why your loop wasn't running the
coroutine scheduled from main().

>> This should indeed fix the issue, but this is definitely not what one is
>> looking for if one really want to _schedule_ a coroutine.
> 
> Which is what I want in this case. Scheduling a new (long-running) task
> as a side effect, but returning early oneself. The new task can't be
> awaited right there, because the creating one should return already.
> 
>> If the goal here is for the task created by main() to complete before
>> the loop exits, then main() should await it, and not just create it
>> without awaiting it.
> 
> So if this happens somewhere deep in the hirarchy of your application
> you would need some mechanism to pass the created tasks back up the
> chain to the main function?

Sorry, I don't get what it is you're trying to express.

Either you await a coroutine, which means the calling one is
"paused" until the awaited coroutine finishes its job, or you schedule
a coroutine, which means it will be run once the already queued
events/coroutines on the loop have all run.

In your case, you probably want to change

> loop = asyncio.get_event_loop()
> asyncio.run(main())
> loop.run_forever()

to

> loop = asyncio.get_event_loop()
> loop.create_task(main())
> loop.run_forever()

- Léo

[1] https://www.python.org/dev/peps/pep-3156/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: There is LTS?

2020-08-24 Thread Léo El Amri via Python-list
On 24/08/2020 04:54, 황병희 wrote:
> Hi, just i am curious. There is LTS for *Python*? If so, i am very thank
> you for Python Project.

Hi Byung-Hee,

Does the "LTS" acronym you are using here stands for "Long Term Support" ?

If so, then the short answer is: yes, kind of. There is a 5-year
maintenance period for each 3.x release of Python.

You can find information on how Python is released and maintained at PEP
101 [1] and on the developers documentation [2].

Each Python 3.x release typically has a PEP associated with it, where
past minor release dates and future plans are recorded.
For example: PEP 494 [3] for Python 3.6, PEP 537 [4] for Python 3.7 and
PEP 569 [5] for Python 3.8.

[1] https://www.python.org/dev/peps/pep-0101/
[2] https://devguide.python.org/devcycle/
[3] https://www.python.org/dev/peps/pep-0494/
[4] https://www.python.org/dev/peps/pep-0537/
[5] https://www.python.org/dev/peps/pep-0569/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio Queue implementation suggestion

2020-09-17 Thread Léo El Amri via Python-list
Hello Alberto,

I scrambled your original message a bit here.

> Apparently asyncio Queues use a Linux pipe and each queue require 2 file
> descriptors. Am I correct?

As far as I know (and I know a bit about asyncio in CPython 3.5+),
asyncio.queues.Queue doesn't use any file descriptor. It is implemented
using futures, which are themselves tightly coupled to the inner
workings of the event loop.

> I wrote a daemon in Python 3 (running in Linux) which test many devices
> at the same time, to be used in a factory environment. This daemon
> include multiple communication events to a back-end running in another
> country. I am using a class for each device I test, and embedded into
> the class I use asyncio. Due to the application itself and the number of
> devices tested simultaneously, I soon run out of file descriptor. Well,
> I increased the number of file descriptor in the application and then I
> started running into problems like “ValueError: filedescriptor out of
> range in select()”. I guess this problem is related to a package called
> serial_asyncio, and of course, that could be corrected. However I became
> curious about the number of open file descriptors opened: why so many?

I don't know about serial_asyncio; it is not a CPython internal (it
looks like a third-party package).

I wonder whether the issue is coming from too many file descriptors...
1. If so, then I assume the issue you encounter with the selector is
coming from the number of sockets you open (too many sockets).
2. Otherwise it could come from the fact that some sockets get
destroyed before the selector gets a chance to await on the file
descriptors.

Without a complete traceback I can't be 100% certain, but the error
message you get looks like an EBADF error. So the second hypothesis is
the one I prefer.

Have you tried enabling asyncio's debug mode ? I hope it can give you
more information on why this is occurring. I believe it's a bug, not a
limitation of CPython.

- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio Queue implementation suggestion

2020-09-17 Thread Léo El Amri via Python-list
On 17/09/2020 16:51, Dennis Lee Bieber wrote:
> On Wed, 16 Sep 2020 13:39:51 -0400, Alberto Sentieri <2...@tripolho.com>
> declaimed the following:
> 
>> devices tested simultaneously, I soon run out of file descriptor. Well, 
>> I increased the number of file descriptor in the application and then I 
>> started running into problems like “ValueError: filedescriptor out of 
>> range in select()”. I guess this problem is related to a package called 
> 
> https://man7.org/linux/man-pages/man2/select.2.html
> """
> An fd_set is a fixed size buffer. 
> """
> and
> """
> On success, select() and pselect() return the number of file
>descriptors contained in the three returned descriptor sets (that
> is,
>the total number of bits that are set in readfds, writefds,
>exceptfds).
> """
> Emphasis "number of bits that are set" -- the two together implies that
> these are words/longwords/quadwords used a bitmaps, one fd per bit, and
> hence inherently limited to only as many bits as the word-length supports.

By the way, Alberto, you can change the selector used by your event loop
by instantiating the loop class yourself [1]. You may want to use
selectors.PollSelector [2].

[1]
https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.SelectorEventLoop
[2] https://docs.python.org/3/library/selectors.html#selectors.PollSelector

- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pythonic style

2020-09-21 Thread Léo El Amri via Python-list
On 21/09/2020 00:34, Stavros Macrakis wrote:
> I'm trying to improve my Python style.
> 
> Consider a simple function which returns the first element of an iterable
> if it has exactly one element, and throws an exception otherwise. It should
> work even if the iterable doesn't terminate. I've written this function in
> multiple ways, all of which feel a bit clumsy.
> 
> I'd be interested to hear thoughts on which of these solutions is most
> Pythonic in style. And of course if there is a more elegant way to solve
> this, I'm all ears! I'm probably missing something obvious!

Hello Stavros,

As there is no formal definition nor consensus on what is Pythonic and
what isn't, this reply will be very subjective.



My opinion on your code suggestions is the following:

> 1 def firstf(iterable):
> 2 n = -1
> 3 for n,i in enumerate(iterable):
> 4 if n>0:
> 5 raise ValueError("first1: arg not exactly 1 long")
> 6 if n==0:
> 7 return i
> 8 else:
> 9 raise ValueError("first1: arg not exactly 1 long")

firstf isn't Pythonic:

1. We are checking twice for the same thing, at line 4 (n>0)
   and line 6 (n==0).

2. We are using enumerate() merely as a counter, which obscures
   the intent.

> 1 def firstd(iterable):
> 2 it = iter(iterable)
> 3 try:
> 4 val = next(it)
> 5 except StopIteration:
> 6 raise ValueError("first1: arg not exactly 1 long")
> 7 for i in it:
> 8 raise ValueError("first1: arg not exactly 1 long")
> 9 return val

firstd isn't Pythonic. While using a for statement in place of a
try..except saves two lines, it comes at the expense of a clear
intent: when I see a for statement, I expect a "complex" operation on
the iterable's items (which we are ignoring here).

>  1 def firsta(iterable):
>  2 it = iter(iterable)
>  3 try:
>  4 val = next(it)
>  5 except StopIteration:
>  6 raise ValueError("first1: arg not exactly 1 long")
>  7 try:
>  8 next(it)
>  9 except StopIteration:
> 10 return val
> 11 raise ValueError("first1: arg not exactly 1 long")

>  1 def firstb(iterable):
>  2 it = iter(iterable)
>  3 try:
>  4 val = next(it)
>  5 except StopIteration:
>  6 raise ValueError("first1: arg not exactly 1 long")
>  7 try:
>  8 next(it)
>  9 except StopIteration:
> 10 return val
> 11 else:
> 12 raise ValueError("first1: arg not exactly 1 long")

>  1 def firstc(iterable):
>  2 it = iter(iterable)
>  3 try:
>  4 val = next(it)
>  5 except StopIteration:
>  6 raise ValueError("first1: arg not exactly 1 long")
>  7 try:
>  8 next(it)
>  9 raise ValueError("first1: arg not exactly 1 long")
> 10 except StopIteration:
> 11 return val

firsta, firstb and firstc are equally Pythonic. I have a preference for
firsta, which is more concise and has a better "reading flow".

>  1 def firste(iterable):
>  2 it = iter(iterable)
>  3 try:
>  4 good = False
>  5 val = next(it)
>  6 good = True
>  7 val = next(it)
>  8 good = False
>  9 raise StopIteration   # or raise ValueError
> 10 except StopIteration:
> 11 if good:
> 12 return val
> 13 else:
> 14 raise ValueError("first1: arg not exactly 1 long")

firste might be Pythonic, although it's very "C-ish". I can grasp the
intent and there is no repetition. I wouldn't write the assignment at
line 7, though.



Mixing firsta and firste would make something more Pythonic:

def firstg(iterable):
it = iter(iterable)
try:
val = next(it)
try:
next(it)
except StopIteration:
return val
except StopIteration:
pass
raise ValueError("first: arg not exactly 1 long")

1. The code isn't repetitive (The "raise ValueError" is written
   only once)

2. The intent is a bit harder to grasp than for firsta or firste, but
   the code is shorter than firste

3. The try..except nesting is considered bad practice, but the code
   here is simple enough that it shouldn't trigger a strong aversion
   when reading it



- Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pythonic style

2020-09-21 Thread Léo El Amri via Python-list
On 21/09/2020 15:15, Tim Chase wrote:
> You can use tuple unpacking assignment and Python will take care of
> the rest for you:
> 
> so you can do
> 
>   def fn(iterable):
> x, = iterable
> return x
> 
> I'm not sure it qualifies as Pythonic, but it uses Pythonic features
> like tuple unpacking and the code is a lot more concise.

I guess you just settled the topic. I think it is Pythonic, and I'd be
surprised if someone came up with something more Pythonic.

FYI: The behavior of this assignment is detailed here:
https://docs.python.org/3/reference/simple_stmts.html#assignment-statements
-- 
https://mail.python.org/mailman/listinfo/python-list