Re: else condition in list comprehension

2005-01-09 Thread Matteo Dell'Amico
Luis M. Gonzalez wrote:
Hi there,
I'd like to know if there is a way to add an else condition into a
list comprehension. I'm sure that I read somewhere an easy way to do
it, but I forgot it and now I can't find it...
for example:
z=[i+2 for i in range(10) if i%2==0]
what if I want the element to be "i-2" when i%2 is not equal to 0?
You could use
[(i-2, i+2)[bool(i%2 == 0)] for i in range(10)]
or, in a less general but shorter way
[(i+2, i-2)[i%2] for i in range(10)]
or even
[i%2 and i-2 or i+2 for i in range(10)]
The "if" clause in comprehensions is used as a filter condition.
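For completeness: Python 2.5 and later also offer a conditional expression (PEP 308) that states the two branches directly; a minimal sketch:

```python
# Conditional expression (Python 2.5+): pick a branch per parity.
z = [i + 2 if i % 2 == 0 else i - 2 for i in range(10)]
```

Unlike the and/or trick, this stays correct even when the "true" branch happens to be a falsy value such as 0.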
--
Ciao,
Matteo
--
http://mail.python.org/mailman/listinfo/python-list


Re: set of sets

2005-08-11 Thread Matteo Dell'Amico
Paolino wrote:
> I thought rewriting __hash__ should be enough to avoid the mutables 
> problem, but:
> 
> class H(set):
>     def __hash__(self):
>         return id(self)
> 
> s=H()
> 
> f=set()
> 
> f.add(s)
> f.remove(s)
> 
> the add succeeds,
> but the remove fails, possibly without even calling hash(s).

Why don't you just use "frozenset"?
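A minimal sketch of the frozenset approach:

```python
# frozenset is immutable and hashable by content, so it can be
# stored in, and removed from, an outer set without surprises.
inner = frozenset([1, 2, 3])
outer = set()
outer.add(inner)
outer.remove(inner)  # succeeds: the hash is stable
```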

-- 
Ciao,
Matteo


Re: set of sets

2005-08-11 Thread Matteo Dell'Amico
Paolo Veronelli wrote:

> And above all, with sets the cost of a remove operation should be
> sublinear, or am I wrong?
> Is it as fast as with lists?

It's faster than with lists... in sets, as with dicts, remove is on 
average O(1).

> Obviously if I use the ids as hash values, nothing guarantees that the
> objects' contents are unique, but I don't care.
> My work is a self-organizing net, in which the nodes keep a structure
> to link other nodes. Given the nature of the net, the links are moved
> frequently, so remove and add operations and contains queries should
> be optimized.
> Why do objects need to be hashable for this? Isn't __hash__ there to
> solve the problem?

The idea of a set of mutable sets looks a bit odd to me...
I don't get why the outer container should be a set, since you don't 
care about uniqueness... if you are representing a graph (which seems 
the case to me), I'd use an identifier for each node, and a dictionary 
mapping each node id to its adjacency set. For instance,

0 <-- 1 --> 2 --> 3
            |     |
            v     v
            4 --> 5

would be represented as

{0: set([]), 1: set([0, 2]), 2: set([3, 4]), 3: set([5]), 4: set([5]),
 5: set([])}

If node ids are consecutive integers, you could also of course use a 
list as the outer structure.
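A minimal sketch of that representation (the node ids and links below are hypothetical):

```python
# Adjacency sets: node id -> set of successor ids.
# add/discard/membership on a set are O(1) on average.
graph = {0: set(), 1: {0, 2}, 2: {3, 4}, 3: {5}, 4: {5}, 5: set()}

graph[1].add(4)      # create the link 1 -> 4
graph[1].discard(0)  # drop the link 1 -> 0 (no error if absent)
```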

PS: we could also discuss this in Italian in it.comp.lang.python :)

-- 
Ciao,
Matteo


Re: set of sets

2005-08-11 Thread Matteo Dell'Amico
Paolo Veronelli wrote:
> Yes this is really strange.
> 
> from sets import Set
> class H(Set):
>     def __hash__(self):
>         return id(self)
> 
> s=H()
> f=set() #or f=Set()
> 
> f.add(s)
> f.remove(s)
> 
> No errors.
> 
> So we had a working implementation of sets in the library and put a 
> broken one in __builtins__ :(
> 
> Should I consider it a bug ?

Looks like the builtin "set" implicitly converts set arguments passed to 
remove into frozensets. That way, remove looks for "frozenset()" instead 
of "H()", so it won't work. It doesn't look like documented behaviour to 
me, though.
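In later Python versions this conversion became documented behaviour: __contains__(), remove() and discard() accept a plain set and compare it as if it were a frozenset. A sketch:

```python
outer = set()
outer.add(frozenset([1, 2]))
# The mutable set below is unhashable, so remove() transparently
# compares it as frozenset([1, 2]).
outer.remove(set([1, 2]))
```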

-- 
Ciao,
Matteo


Re: returning True, False or None

2005-02-07 Thread Matteo Dell'Amico
nghoffma wrote:
sorry, that should have been:
py>>import sets
py>>def doit(thelist):
...     s = sets.Set(thelist)
...     if s == sets.Set([None]):
...         return None
...     else:
...         return max(s - sets.Set([None]))
Since a function that doesn't return is equivalent to one that returns 
None, you can write it as:

>>> def doit(lst):
...     s = set(lst) - set([None])
...     if s: return max(s)
That looks to me like the most elegant so far, but that's just because 
it's mine :-)

You can also filter out Nones with a list/generator comprehension, but 
sets are just more elegant...
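For comparison, the comprehension-based version might look like this sketch:

```python
def doit(lst):
    # Keep everything except None; return the max, or None if nothing remains.
    rest = [x for x in lst if x is not None]
    if rest:
        return max(rest)
```

doit([3, None, 7]) gives 7, while doit([None]) falls through and returns None.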

--
Ciao,
Matteo


Re: Pre-PEP: Dictionary accumulator methods

2005-03-20 Thread Matteo Dell'Amico
Raymond Hettinger wrote:
I would like to get everyone's thoughts on two new dictionary methods:
def count(self, key, qty=1):
    try:
        self[key] += qty
    except KeyError:
        self[key] = qty

def appendlist(self, key, *values):
    try:
        self[key].extend(values)
    except KeyError:
        self[key] = list(values)
They look like a special case to me. They don't solve the problem for 
dicts of sets or dicts of deques, for instance, not to mention other 
possible user-defined containers.

defaultdicts look to me like a solution that is more elegant and solves 
more problems. What is the problem with them?
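For illustration, a sketch using the collections.defaultdict that later shipped in Python 2.5: its factory argument covers both proposed methods, and any other container as well:

```python
from collections import defaultdict

counts = defaultdict(int)      # replaces count()
for word in ["spam", "egg", "spam"]:
    counts[word] += 1

groups = defaultdict(list)     # replaces appendlist()
groups["key"].extend(["a", "b"])

unique = defaultdict(set)      # sets, deques, ... work just as well
unique["key"].add("a")
```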

--
Ciao,
Matteo


Re: Pre-PEP: Dictionary accumulator methods - typing & initialising

2005-03-20 Thread Matteo Dell'Amico
Kay Schluehr wrote:
Why do you set
d.defaultValue(0)
d.defaultValue(function=list)
but not
d.defaultValue(0)
d.defaultValue([])
?
I think that's because you have to instantiate a different object for 
each different key. Otherwise, you would instantiate just one list as 
the shared default value for *all* missing keys. In other words, given:

class DefDict(dict):
    def __init__(self, default):
        self.default = default
    def __getitem__(self, item):
        try:
            return dict.__getitem__(self, item)
        except KeyError:
            return self.default
you'll get
In [12]: d = DefDict([])
In [13]: d[42].extend(['foo'])
In [14]: d.default
Out[14]: ['foo']
In [15]: d[10].extend(['bar'])
In [16]: d.default
Out[16]: ['foo', 'bar']
In [17]: d[10]
Out[17]: ['foo', 'bar']
In [18]: d[10] is d.default
Out[18]: True
and this isn't what you really wanted.
By the way, to really work, I think that Duncan's proposal should create 
new objects whenever you access a missing key, and to me that seems a 
bit counterintuitive. Nevertheless, I'm +0 on it.
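A sketch of such a factory-based variant (hypothetical code, not Duncan's actual proposal): a fresh object is created and stored on each access to a missing key, which avoids the shared default shown above:

```python
class FactoryDict(dict):
    def __init__(self, factory):
        self.factory = factory
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            # Create and *store* a fresh default for this key.
            value = self[key] = self.factory()
            return value

d = FactoryDict(list)
d[42].extend(['foo'])
d[10].extend(['bar'])  # each key gets its own list
```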

And why not dict(type=int), dict(type=list) instead, where default
values are instantiated during object creation? A consistent pythonic
handling of all types should be envisioned, not some ad hoc solutions
that get deprecated two Python releases later.
I don't really understand you. What should 'type' return? A callable 
that returns a new default value? That's exactly what Duncan proposed 
with the "function" keyword argument.

--
Ciao,
Matteo


Re: Pre-PEP: Dictionary accumulator methods - typing & initialising

2005-03-20 Thread Matteo Dell'Amico
Kay Schluehr wrote:
I think that's because you have to instantiate a different object for
each different key. Otherwise, you would instantiate just one list as
a default value for *all* default values.
Or the default value will be copied, which is not very hard either, or
type(self._default)() will be called. This is all equivalent and it
does not matter (except for performance reasons) which way to go, as
long as only one is selected.
I don't like it very much... it seems too implicit to be pythonic. Also, 
it won't work with non-copyable objects, and type(42)() == 0, so getting 
0 when the default is 42 looks very strange. I prefer the explicit "give 
me a callable" approach.
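A quick illustration of why rebuilding the default from its type loses information:

```python
default = 42
assert type(default)() == 0    # int() gives 0, not 42

default = [1, 2]
assert type(default)() == []   # list() gives [], contents lost
```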

If the dict has fixed semantics once defaultValue() is applied, and it
returns defaults instead of raising exceptions whenever a key is missing
(i.e. behavioural invariance), the client of the dict has nothing to
worry about, has he?
For idioms like d[foo].append('blah') to work properly, you'd have to 
store the default value every time you access a missing key. It would 
be really strange to fill up memory just by apparently reading values.

I suspect the proposal really makes sense only if the dict-values are
of the same type. Filling it with strings, custom objects and other
stuff and receiving 0 or [] or '' if a key is missing would be a
surprise - at least for me. Instantiating dict the way I proposed
indicates type-guards! This is the reason why I want to delay this
issue and discuss it in a broader context. But I'm also undecided.
Guido's Python-3000 musings are in danger of becoming vaporware. "Now is
better than never"... Therefore +0.
With duck typing, we can have things that share a common interface but 
no common type. For instance, iterables: I can imagine a collection of 
iterables of different types, and a default value of maybe [] or set([]).

--
Ciao,
Matteo


Re: For loop extended syntax

2005-03-20 Thread Matteo Dell'Amico
George Sakkis wrote:
I'm sure there must have been a past thread about this topic but I don't
know how to find it: how about extending the "for X in" syntax so that X
can include default arguments? This would be very useful for
list/generator comprehensions, for example being able to write something
like:
[x*y-z for (x,y,z=0) in (1,2,3), (4,5), (6,7,8)]
instead of the less elegant explicit loop version that has to check for
the length of each sequence.
What do you think?
How did you get the data in that format in the first place? It looks a 
bit strange to me. Wouldn't it be easier to fill in default values when 
you gather data as opposed to when you use it?
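For instance, the padding could happen once, while gathering the data; a sketch assuming a default of 0 for the missing third element:

```python
rows = [(1, 2, 3), (4, 5), (6, 7, 8)]
# Pad each tuple with zeros up to length 3 as the data is gathered.
padded = [t + (0,) * (3 - len(t)) for t in rows]
result = [x * y - z for x, y, z in padded]
```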

--
Ciao,
Matteo


Re: Python Cookbook, 2'nd. Edition is published

2005-04-06 Thread Matteo Dell'Amico
Larry Bates wrote:
I received my copy on Friday (because I was a contributor).
I wanted to thank Alex, Anna, and David for taking the time to put
this together.  I think it is a GREAT resource, especially for
beginners.  This should be required reading for anyone who
is serious about learning Python.
+1.
The Python Cookbook is really great, and being included among the 
contributors, even if for a little tiny idea that got heavily 
refactored, feels wonderful. I'm really grateful to the Python community.
--
Ciao,
Matteo