Re: A faster shutil.rmtree or maybe a command.

2005-10-11 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Sometimes I must delete 2 very big directories.
> The directories have a very large tree with many small files in them.
>
> So I use shutil.rmtree()
> But it's too slow.
>
> Is there a faster method ?


Is os.system("rm -rf %s" % directory_name) much faster?
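
If the directory name may contain spaces or shell metacharacters, a list-based
call avoids the quoting issue; a small sketch, assuming Python 2.4's subprocess
module and a Unix-like system:

import subprocess

# Same spirit as os.system("rm -rf ..."), but the argument list is passed
# verbatim to rm, so no shell quoting problems; returns the exit status.
subprocess.call(["rm", "-rf", directory_name])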
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some set operators

2005-10-15 Thread Giovanni Bajo
Alex Martelli <[EMAIL PROTECTED]> wrote:

> I still vaguely hope that in 3.0, where backwards incompatibilities
> can be introduced, Python may shed some rarely used operators such as
> these (for all types, of course).


I hope there is no serious plan to drop them. There is nothing wrong with having
such operators, and I wouldn't flag bit operations as "rarely used". They are
very common when calling C-based APIs and other stuff. I know I use them very
often. They have a clear and well-understood meaning, as they appear identically
in other languages, including the widespread C and C++.

Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some set operators

2005-10-16 Thread Giovanni Bajo
Alex Martelli wrote:

>>> I still vaguely hope that in 3.0, where backwards incompatibilities
>>> can be introduced, Python may shed some rarely used operators such
>>> as
>>> these (for all types, of course).
>>
>> I hope there is no serious plan to drop them. There is nothing wrong
>> with having such operators, and I wouldn't flag bit operations as
>> "rarely used". They are very common when calling C-based APIs and
>> other stuff. I know I use them very often. They have a clear and
>> well-understood meaning, as they appear identically in other
>> languages, including the widespread C and C++.
>
> Well, C and C++ don't have unbounded-length integers, nor built-in
> sets, so the equivalence is slightly iffy; and the precedence table of
> operators in Python is not identical to that in C/C++.

The equivalence was trying to make a point about the fact that bitwise
operators are not uncommon, obscure operators like GCC's nonstandard extensions.

> As for frequency of use, that's easily measured: take a few big chunks
> of open-source Python code, starting with the standard library (which
> does a lot of "calling C-based API and other stuff") and widespread
> applications such as mailman and spambayes, and see what gives.

I grepped in a Python application of mine (around 20k lines), and I found about
350 occurrences of ^, | and &, for either integers or builtin sets.

> But the crux of our disagreement lies with your assertion that there's
> nothing wrong in having mind-boggling varieties and numbers of
> operators, presumably based on the fact that C/C++ has almost as many.

When exactly did I assert this? I am just saying that an infix operator form
for bitwise or, and, xor is very useful. And once we have them for integers,
using them for sets is elegant and clear. Notice also that a keyword-based
alternative like "bitand", "bitor", "bitxor" would serve well as a replacement
for the operators for integers, but it would make them almost useless for sets.
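
Just as a tiny illustration of how the same infix operators read for both
integers and sets (plain Python 2.4 semantics):

READ, WRITE, EXEC = 1, 2, 4
mode = READ | EXEC              # C-style flag manipulation on integers
common = set("rw") & set("wx")  # the same operator doing set intersection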

> I contend that having huge number of operators (and other built-ins)
> goes against the grain of Python's simplicity,

We agree on this.

> makes Python
> substantially harder to teach, and presents no substantial advantages
> when compared to the alternative of placing that functionality in a
> built-in module (possibly together with other useful bit-oriented
> functionality, such as counts of ones/zeros, location of first/last
> one/zero bit, formatting into binary, octal and hexadecimal, etc).

Such a module would be very useful, but I believe it's orthogonal to having an
infix notation for common operations. We have a string module (and string
methods), but we still have a couple of operators for strings like "+".
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to organize Python files in a (relatively) big project

2005-10-19 Thread Giovanni Bajo
TokiDoki wrote:

> At first, I had all of my files in one single directory, but now, with
> the increasing number of files, it is becoming hard to browse my
> directory. So, I would want to be able to divide the files between 8
> directories, according to their purpose. The problem is that it breaks
> the 'import's between my files. And besides, AFAIK, there is no easy
> way to import a file that is not in a subdirectory of the current file.

Remember that the directory containing the toplevel script is always included
in sys.path. This means that you can have a structure like this:

main.py
   |
   +-- pkg1
   +-- pkg2
   +-- pkg3

Files in any package can import other packages. The usual way is to do "import
pkgN" and then dereference. Within each package, you will have an __init__.py
which defines the package API (that is, the symbols that you can access from
outside the package).

Typically, you only want to import *packages*, not submodules. In other words,
try not to do stuff like "from pkg1.submodule3 import Foo", because this breaks
encapsulation (if you reorganize the structure of pkg1, code will break). So
you'd do "import pkg1" and later "pkg1.Foo", assuming that pkg1/__init__.py
does something like "from submodule3 import Foo". My preferred way is to have
__init__.py just do "from submoduleN import *" for each submodule, and have
each submodule define __all__ to specify its public symbols. This allows for
more encapsulation (each submodule is able to change what it exports in a
self-contained way, without having to modify __init__ as well).
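
A minimal sketch of that arrangement (module and class names are just the
hypothetical ones used above):

# pkg1/submodule3.py
__all__ = ["Foo"]              # only Foo is part of the public API

class Foo(object):
    pass

# pkg1/__init__.py
from submodule3 import *       # re-export whatever each submodule declares public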

Moreover, the dependence graph between packages shouldn't have loops. For
instance, if pkg3 uses pkg1, pkg1 shouldn't use pkg3. It makes sense to think
of pkg1 as the moral equivalent of a library, which pkg3 uses.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to organize Python files in a (relatively) big project

2005-10-19 Thread Giovanni Bajo
Jarek Zgoda wrote:

> How to install this structure eg. on Linux? What layout do you
> recommend? It's tempting to use /opt hierarchy for installation target
> (as it gives relatively much freedom within application directory),
> but
> many administrators are reluctant to use this hierarchy and prefer
> more standarized targets, such as /usr or /usr/local.


I am not very experienced with Linux, but if you don't use something like
PyInstaller, you could still install the main tree somewhere like
/usr/local/myapp/ and then generate a simple script in /usr/local/bin which
inserts /usr/local/myapp at the front of sys.path and does "import main" to
boot the application.
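
A rough sketch of such a launcher script; the install prefix and the entry
module name are just assumptions:

#!/usr/bin/env python
# /usr/local/bin/myapp
import sys
sys.path.insert(0, "/usr/local/myapp")
import main                    # importing main boots the application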

I'm not sure I have answered your question though :)
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Understanding the arguments for SubElement factory in ElementTree

2005-10-30 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> <root>
>   <subroot>xyz</subroot>
> </root>
>
> rather than:
> root = Element('root')
> subroot = SubElement(root, 'subroot')
> subroot.text = 'xyz'
>
> Was wondering whether this code accomplishes that
> root = Element('root')
> subroot = SubElement(root, 'subroot', text='xyz')


No, this creates:

<root>
  <subroot text="xyz" />
</root>

I believe the text ought to be set in a separate statement.
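
A small sketch of the separate-statement version, assuming the standalone
elementtree package of that time:

from elementtree.ElementTree import Element, SubElement, tostring

root = Element('root')
subroot = SubElement(root, 'subroot')
subroot.text = 'xyz'           # the text content is assigned separately
print tostring(root)           # <root><subroot>xyz</subroot></root>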

Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Reuse base-class implementation of classmethod?

2005-10-31 Thread Giovanni Bajo
Hello,

what's the magic needed to reuse the base-class implementation of a
classmethod?

class A(object):
    @classmethod
    def foo(cls, a, b):
        # do something
        pass

class B(A):
    @classmethod
    def foo(cls, a, b):
        A.foo(cls, a, b)   # WRONG!

I need to call the base-class classmethod to reuse its implementation, but I'd
like to pass the derived class as the first argument.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 'super' to only be used for diamond inheritance problems?

2005-10-31 Thread Giovanni Bajo
Alex Hunsley wrote:

> I've seen a few discussion about the use of 'super' in Python,
> including the opinion that 'super' should only be used to solve
> inheritance diamond problem. (And that a constructor that wants to
> call the superclass methods should just call them by name and forget
> about super.) What is people's opinion on this? Does it make any
> sense?


I personally consider super a half-failure in Python. Some of my reasons are
cited here:
http://fuhm.org/super-harmful/

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reuse base-class implementation of classmethod?

2005-11-01 Thread Giovanni Bajo
David Wahler wrote:

>> what's the magic needed to reuse the base-class implementation of a
>> classmethod?
>>
>> class A(object):
>>     @classmethod
>>     def foo(cls, a, b):
>>         # do something
>>         pass
>>
>> class B(A):
>>     @classmethod
>>     def foo(cls, a, b):
>>         A.foo(cls, a, b)   # WRONG!
>>
>> I need to call the base-class classmethod to reuse its
>> implementation, but I'd like to pass the derived class as first
>> argument. --
>> Giovanni Bajo
>
> See the super object. In your case, it can be used like this:
>
> class B(A):
>     @classmethod
>     def foo(cls, a, b):
>         super(B, cls).foo(a, b)


Ah thanks, I did not realize that super() was the only way of doing the right
thing here!
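
For the record, a complete little sketch of the working pattern (the print is
only illustrative):

class A(object):
    @classmethod
    def foo(cls, a, b):
        print "A.foo called for", cls.__name__, a, b

class B(A):
    @classmethod
    def foo(cls, a, b):
        super(B, cls).foo(a, b)    # cls stays bound to the derived class

B.foo(1, 2)                        # prints: A.foo called for B 1 2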
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Addressing the last element of a list

2005-11-08 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> I just want an alias.

What you want is impossible. But there are many workarounds.

> I just want to be able to make a reference to any old thing in Python.
> A list, an integer variable, a function etc. so that I, in complicated
> cases, can make a shorthand. If "myRef" could somehow "point at"
> something long an complicated like a[42]["pos"][-4], that might be
> useful.


One simple workaround:

def createAlias(L, idx1, idx2, idx3):
    def setval(n):
        L[idx1][idx2][idx3] = n
    return setval

k = createAlias(a, 42, "pos", -4)

[...]

n = computeValue(...)
k(n)   # store it!


You will also find out that your case is actually very unlikely in
well-designed code. You don't usually work with stuff with three levels of
nesting like a[][][], since it gets totally confusing and unmaintainable. You
will always end up with objects with better semantics, and methods to modify
them. So in the end you will always have a reference to the moral equivalent of
a[42]["pos"], and that would be a well-defined object with a method setThis()
which will modify the moral equivalent of [-4].
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: LARGE numbers

2005-11-11 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> An early alpha-quality release is available at
> http://home.comcast.net/~casevh/


Given the "decimal" module in Python 2.4, I'd suggest you rename your
library.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sort the list

2005-11-21 Thread Giovanni Bajo
Fredrik Lundh wrote:

>> I have a list like [[1,4],[3,9],[2,5],[3,2]]. How can I sort the list
>> based on the second value in the item?
>> That is,
>> I want the list to be:
>> [[3,2],[1,4],[2,5],[3,9]]
>
> since you seem to be using 2.3, the solution is to use a custom
> compare function:
>
> >>> L = [[1,4],[3,9],[2,5],[3,2]]
> >>> def mycmp(a, b):
> ...     return cmp(a[1], b[1])
> ...
> >>> L.sort(mycmp)
> >>> L
> [[3, 2], [1, 4], [2, 5], [3, 9]]
>
> under 2.4, you can use the key argument together with the item-
> getter function to sort on a given column; look for "itemgetter" on
> this page for some examples:
>
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305304

To clarify, itemgetter is just an optimization. Even without it, you can
still use "key":

>>> L = [[1,4],[3,9],[2,5],[3,2]]
>>> def mykey(e):
...     return e[1]
...
>>> L.sort(key=mykey)
>>> L
[[3, 2], [1, 4], [2, 5], [3, 9]]

Using the "key" keyword argument can be easier to understand ("sorting
against the second element" == "second element is the key").
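
For completeness, the 2.4 itemgetter spelling mentioned above, which gives the
same ordering with the key function implemented in C:

from operator import itemgetter

L = [[1, 4], [3, 9], [2, 5], [3, 2]]
L.sort(key=itemgetter(1))
print L     # [[3, 2], [1, 4], [2, 5], [3, 9]]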
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-23 Thread Giovanni Bajo
Ben Finney wrote:

> How can a (user-defined) class ensure that its instances are
> immutable, like an int or a tuple, without inheriting from those
> types?
>
> What caveats should be observed in making immutable instances?


In short, you can't. I usually derive from tuple to achieve this (defining a
few read-only properties to access the items through attributes). Using
__slots__ is then required to stop people from adding attributes to the
instance.
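
A sketch of that tuple-derivation approach (class and attribute names are only
illustrative):

class Pair(tuple):
    __slots__ = ()                  # no per-instance __dict__, so no new attributes

    def __new__(cls, first, second):
        return tuple.__new__(cls, (first, second))

    first = property(lambda self: self[0])
    second = property(lambda self: self[1])

p = Pair(1, 2)
# p.first = 10   -> AttributeError (the property has no setter)
# p.extra = 3    -> AttributeError (no __dict__, thanks to __slots__)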

In fact, I don't know the rationale. I would think it would be a great addition
to be able to say __immutable__ = True or something like that. Or at least, I'd
be grateful if someone explained to me why this can't or shouldn't be done.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-23 Thread Giovanni Bajo
Mike Meyer wrote:

> Note that this property of __slots__ is an implementation detail. You
> can't rely on it working in the future.

I don't "rely" on it. I just want to catch bugs in my code.

> I'm curious as to why you care if people add attributes to your
> "immutable" class. Personally, I consider that instances of types
> don't let me add attributes to be a wart.

To catch stupid bugs, typos and whatnot. If an instance is immutable, you can't
modify it, period. If you do, it's a bug. So why not have the bug raise an
exception, rather than go unnoticed?

I don't see your point, either. Why would you want to add attributes to an
object documented to be immutable?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike Meyer wrote:

>> If it's not a wart, why would it be a wart for user-defined types to
>> have the same behaviour?
>
> It's a wart because user-defined classes *don't* have the same
> behavior.

Then *my* solution for this would be to give user-defined classes a way to
behave like builtins, i.e. explicitly and fully implement immutability.
Immutability is an important concept in Python programs, and I'm surprised it
does not have explicit support.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike wrote:

>> How can a (user-defined) class ensure that its instances are
>> immutable, like an int or a tuple, without inheriting from those
>> types?
>>
>> What caveats should be observed in making immutable instances?
>
> IMHO, this is usually (but not always) a mistake. (If you're
> programming a missile guidance system, or it makes your program go
> faster, it's not a mistake :))
> So are PRIVATE, CONST (all types), SEALED, FINAL, etc -- even the best
> programmer doesn't foresee what a user will want to do to make best
> use of his components, and many a time I've been annoyed (in Java and
> MS frameworks) by not being able to access/modify/subclass a
> member/class that I know is there because it has to be there (or
> because I can see it in the debugger), but it's not accessible
> because the programmer was overly clever and had been to OOP school.

There's a big difference. An immutable object has totally different semantics
compared to a mutable object. If you document it to be immutable, and maybe
even provide __eq__/__hash__, adding attributes to it is surely a user bug.
And surely a bug for which I'd expect an exception to be raised.

Sometimes, I play with some of my objects and I have to go back and check the
documentation to see whether they are immutable or not, to make sure I use the
correct usage pattern. That's fine, this is what docs are for, but couldn't
Python give me some way to enforce this so that, if I or some other dev make
the mistake, it doesn't go unnoticed?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike Meyer wrote:

>>> Note that this property of __slots__ is an implementation detail.
>>> You
>>> can't rely on it working in the future.
>> I don't "rely" on it. I just want to catch bugs in my code.
>
> I certainly hope you're not relying on it to catch bugs. You should do
> proper testing instead. Not only will that catch pretty much all the
> bugs you mention later - thus resolving you of the need to handcuff
> clients of your class - it will catch lots of other bugs as well.

This sounds a little academic. Writing a real Python program without testing is
impossible or close to it, so you are really not telling me anything new. Even
*with* testing, there are bugs. I'm sure you well know this.

My feeling is that you're trying to get too much out of my words. I'm not
trying to handcuff anyone. You seem to concentrate on me trying to prevent
people from adding attributes to my precious objects. It's not that. If I write
a class and want it to be immutable, it is because it has to be so. If I write
a queue class and I say that people shouldn't call pop() if it's empty, I mean
it. If I enforce it with a RuntimeError, I don't think I'm handcuffing anyone.
I don't see an ImmutableError as being so different from that.

I often design objects that I want to be immutable. I might keep them as keys
in dictionaries, or I might have external, non-intrusive caches of some kind
relying on the fact that the instance does not change. Of course, it *might* be
that testing uncovers the problem. Unittests tend to be pretty specific, so in
my experience they happen to miss *new* bugs created or uncovered by the
integration of components. Or if they do hit one, you still have to go through
debug sessions. An ImmutableError would spot the error early.

In my view, enforcing immutability is no different from other forms of
self-checking. Do you reckon all kinds of asserts are useless then? Surely they
don't help with anything that a good unittest couldn't uncover. But they help
catch bugs, such as breakage of invariants, immediately as they happen.
Immutability can be such an invariant.


>>>> If it's not a wart, why would it be a wart for user-defined types
>>>> to have the same behaviour?
>>>
>>> It's a wart because user-defined classes *don't* have the same
>>> behavior.
>> Then *my* solution for this would be to give user-defined classes a
>> way to behave like builtins, i.e. explicitly and fully implement
>> immutability.
>
> Well, if you want to propose a change to the language, you need a good
> use case to demonstrate the benefits of such a change. Do you have
> such a use case? Catching bugs doesn't qualify, otherwise Python would
> be radically different from what it is.


One good reason, in my opinion, is that there *are* immutable objects in
Python, among the builtins. And people can easily build extension objects which
are immutable. So the fact that it is impossible to write a regular Python
class which is immutable looks inconsistent to me.

Now let me ask you a question. What is a good use case for "assert" that
justifies its introduction in the language? What is a good use case for the
'unittest' module which justifies its introduction in the standard library? Why
do you think tuples are immutable and *enforced* to be so?


>> Immutability is an important concept in Python programs, and I'm
>> surprised it does not have explicit support.
>
> I'm not convinced that immutability is that important a concept. Yeah,
> you have to know about it, but it seems more like an implementation
> detail than a crucial concept.

Probably it's just a matter of design styles.

> I'm not sure it's more important than
> things like interned strings and the sharing of small integers.  Most
> of the discussion of immutables here seems to be caused by newcomers
> wanting to copy an idiom from another language which doesn't have
> immutable variables. Their real problem is usually with binding, not
> immutability.

I'm not such a newcomer, but (how funny) Python is *the* language that
introduced me to the concept of immutable objects and their importance in
design :)
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Björn Lindström wrote:

>> My feeling is that you're trying to get too much out of my words. I'm
>> not trying to handcuff anyone. You seem to concentrate on me trying
>> to avoid people adding attributes to my precious objects. It's not
>> that. If I write a class and want it to be immutable, it is because
>> it has to be so. If I write a queue class and I say that people
>> shouldn't call pop() if it's empty, I mean it. If I enforce it with a
>> RuntimeError, I'm not thinking I'm handcuffing someone. I don't see a
>> ImmutableError to be so different from it.
>
> But why would you call it that, when the object isn't actually
> implemented as immutable?

I think you misunderstood my message. I'm saying that an ImmutableError
wouldn't be a much different form of self-checking compared to the usual
TypeError/ValueError/RuntimeError you get when you pass wrong values/types to
methods/functions, or you violate invariants of objects. This has nothing to do
with dynamism, duck typing, dynamic binding and whatnot. Some objects have deep
invariants which can't be broken.

Why do you think we have a frozenset, for instance? By Mike's argument, we
shouldn't have it. And we should be able to use a regular mutable set as a
dictionary key. Unittests will catch errors. Instead, we have two classes
rather than one, immutability is *enforced*, and mutable sets can't be used as
dictionary keys. This is all good in my opinion, and follows the good rule of
"catch errors early".

> Throw an exception that describes why it doesn't make sense to change
> that particular object instead.

*How* could I do that? There is no language construct which lets me specify an
exception to throw whenever someone modifies my object, even though there is
partial support for immutable types (like __new__, which can be used to
initialize immutable objects).

> As I said before, I think you're confusing the (in Python pretty
> non-existent) concept of encapsulation with Python's immutable types,
> which are immutable because the implementation demands it. (A fact I
> hope will disappear at some point.)

You seriously believe that strings, integers and tuples are immutable because
of implementation details? I believe they are part of the language design --
and a good part of it.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike wrote:

>> There's a big difference. An immutable object has totally different
>> semantics compared to a mutable object. If you document it to be
>> immutable, and maybe even provide __eq__/__hash__, adding attributes
>> to it is surely a user bug. And surely a bug for which I'd expect an
>> exception to be raised.
>
> Why is it "surely" a bug?  It is arguable whether adding new
> attributes (vs. changing those that are already there) is, in fact,
> mutation.

If we agree that *changing* attributes is a bug for those classes, we're
already a step further in this discussion :)

About adding attributes, I agree that it's kind of a grey area. Per se, there
is nothing wrong with it. My experience is that it gives the user a false
expectation that the object can be modified. You end up with an object with
some attributes which can't be modified because that'd break invariants, and
others which are freely modifiable.

In fact, the very fact that you are adding an attribute to *that* instance,
and not to all the *equivalent* instances (by definition of __hash__/__eq__),
is questionable and bug-prone. You might end up *thinking* you have that very
instance around, while you don't. With immutable objects, you are not supposed
to think in terms of instances, but rather of values. When I see a string like
"foobar" I don't care if it's the same instance as a "foobar" I saw before. If
I could add an attribute to "foobar", I might end up believing that whenever I
see a "foobar" around, it will have that attribute.

As you said, Python has rich data structures which still let you find good
ways to carry around additional information. So why try so hard to get into
trouble?

> It sounds like what you may want are opaque objects where you can
> control access better

No, that's a different issue. One example of my immutable classes is Point2D
(two attributes: x/y). Having it immutable gives it many useful properties,
such as real value semantics.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike Meyer wrote:

> And I have no problems with that. If you believe your class should
> throw an error if someone calls an instances pop() method when it's
> empty, do so.
>
> Likewise, if you want to make it so a client can't change your
> attributes, feel free to do so.
>
> However, when you prevent a client from adding an attribute, you're
> not merely making your objects immutable, you're making them
> static. Python isn't a static language, it's a dynamic language. I
> consider parts of it that are static to be warts.

I always thought that adding an attribute was just as bad as changing the
attributes, for an immutable object. But I now see your point (eventually!):
they are pretty different issues.

But whatever attribute you add to the instance should *also* be immutable (so
that, in other words, it wouldn't be so different from carrying around a tuple
with the original instance and the added attribute). This said, I suppose that
language support for enforcing immutability could allow adding attributes to an
instance -- as long as those attributes then become immutable as well.

>>> I'm not sure it's more important than
>>> things like interned strings and the sharing of small integers.
>>> Most
>>> of the discussion of immutables here seems to be caused by newcomers
>>> wanting to copy an idiom from another language which doesn't have
>>> immutable variables. Their real problem is usually with binding, not
>>> immutability.
>> I'm not such a newcomer, but (how funny) Python is *the* language
>> that introduced me to the concept of immutable objects and their
>> importance in design :)
>
> Well, that would explain why you think it's so important - it's where
> you first encountered it.

Yes. But I'm familiar with different object semantics, and I have found
immutable objects to be a pretty good substitute for value semantics, and a way
to implement a kind of pass-by-value convention in a language which only has
references.

> I'd argue that it's no more important than
> identity - which is what I was searching for when I talked about
> interned strings sharing small integers. There are builtin types that
> preserve identity for equal instances, at least under some
> conditions. There are no constructs for helping you do that with
> user-defined objects. Should we add them for the sake of
> orthogonality? I don't think so - not without a good use case.

I don't think identity is important for immutable objects (as I wrote
elsewhere), so I don't think adding language constructs for this would prove
useful. Instead, immutable objects *are* common, and we still lack a way to
mark them as such.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Paul Rubin wrote:

> "Giovanni Bajo" <[EMAIL PROTECTED]> writes:

[pay attention to the quoting, I didn't write that :) ]

>>> Mike Meyer wrote:
>>>
>>> However, when you prevent a client from adding an attribute, you're
>>> not merely making your objects immutable, you're making them
>>> static.
>
> No I don't believe that.  If an object is immutable, then
> obj.serialize() should return the same string every time.  If you can
> add attributes then the serialization output will become different.


I guess it might be argued that the method serialize() could return whatever
value it returned before, and that the added attribute could be "optional" (eg.
think of it as a cache that can be recomputed from the immutable attributes at
any time).

It's kind of a grey area. Surely, I would *not* mind if language support for
immutability prohibited this :)
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making immutable instances

2005-11-24 Thread Giovanni Bajo
Mike Meyer wrote:

>> Björn Lindström wrote:
>> Why do you think we have a frozenset, for instance? By Mike's
>> argument, we shouldn't have it.
>
> Not *my* arguments, certainly. Not unless you're seriously
> misinterpreting them.


Sorry then, I probably am. There must be a misunderstanding somewhere.

What is your position about frozenset? By my understanding of your arguments,
it is a hand-cuffed version of set, which just prevents bugs that could still
be caught by testing. The same applies to the arbitrary restriction of not
allowing sets to be dictionary keys (with their hash value being their id).
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: unittest.assertRaise and keyword arguments?

2005-12-02 Thread Giovanni Bajo
Bo Peng wrote:

> The syntax for using assertRaise is
>
>assertRaise(exception, function, para1, para2,...)
>
> However, I have a long list of arguments (>20) so I would like to test
> some of them using keyword arguments (use default for others). Is there
> a way to do this except for manually try...except?


You can pass keyword arguments to assertRaises without problems:

self.assertRaises(ValueError, myfunc, arg1, arg2, arg3, arg4, abc=0, foo=1,
                  bar="hello")

Or you can always do something like:

self.assertRaises(ValueError, lambda: myfunc(arg1, arg2, arg3, arg4, abc=0,
                                             foo=1, bar="hello"))
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Instances behaviour

2005-12-04 Thread Giovanni Bajo
Mr.Rech wrote:

> and so on. The problem I'm worried about is that an unaware user may
> create an instance of "A" supposing that it has any real use, while it
> is only a sort of prototype. However, I can't see (from my limited
> point of view) any other way to rearrange things and still get a
> similar behaviour.


1) Document that your class is not intended for public use.
2) Make your A class "private" to the module that defines it. A simple way is
putting an underscore in front of its name.
3) Make your A class non-functional. I assume B and C have methods that A
doesn't. Then, add those methods to A too, but don't implement them:

    def foo(self):
        """Foo this and that. Must be implemented in subclasses."""
        raise NotImplementedError
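
Putting the three points together in a small sketch (class names are only
illustrative):

class _Base(object):                # the leading underscore marks it module-private
    def foo(self):
        """Foo this and that. Must be implemented in subclasses."""
        raise NotImplementedError

class Concrete(_Base):
    def foo(self):
        return "real work here"

Concrete().foo()                    # fine
_Base().foo()                       # raises NotImplementedError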

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CDDB.py binaries for Python 2.4

2005-12-04 Thread Giovanni Bajo
Kent Tenney wrote:

> I would love to use the tools at
> http://cddb-py.sourceforge.net/
> the newest Win binaries are for Python 2.0

I packaged these for you, but they're untested:

http://www.develer.com/~rasky/CDDB-1.3.win32-py2.3.exe
http://www.develer.com/~rasky/CDDB-1.3.win32-py2.4.exe
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: newbie - needing direction

2005-12-04 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> I'm a newbie, just got through van Rossum's tutorial and I would like
> to try a small project of my own. Here's the description of my project.
>
> When the program starts a light blue semi-transparent area, size 128 by
> 102,  is placed in the middle of the screen. The user can move this
> area with arrow the keys. When the user hits the Enter key, a magnified
> picture of the chosen area is shown on the screen (10 times
> magnification on a 1280 by 1024 monitor). When the user hits the Enter
> key the program exits leaving the screen as it was before the program
> started.
>
> Could someone direct me what libraries and modules I should study in
> order to accomplish this.


I'd try something easier (with less interaction with Windows). Pygame should
provide everything you need for this kind of application:
http://www.pygame.org/

Then you can use pywin32 (http://sourceforge.net/projects/pywin32) to bind
to the Windows API and accomplish what you need.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ElementTree - Why not part of the core?

2005-12-08 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

>> I think some people were hoping that instead of adding these things to
>> the standard library, we would come up with a better package manager
>> that would make adding these things to your local library much
>> simpler.
>>
>> STeVe
>>
>> http://www.python.org/dev/summary/2005-06-01_2005-06-15.html#reorganising-the-standard-library-again
>
> A better package manager would be great but does not
> replace having things in the core.  Distributed code that
> relies on external packages puts a significantly greater
> burden on the user of the code.

Seconded.

One thing I really fear about the otherwise great EasyInstall (and Python Eggs)
is that we could forget about...


Let's not turn the Python standard library into the CPAN mess, where there are
5 different libraries for adding two numbers, so that it's then impossible to
grab a random Perl program and read it without going through 150 different man
pages you have never seen before. I don't need 450 libraries to compute MD5, or
to zip a file, or 140 different implementations of random numbers. There will
always be external libraries for specific purposes, but I'd rather the standard
library stayed focused on providing a possibly restricted set of common
features with a decent interface/implementation.

This said, I'd also like to see ElementTree in the standard library. We already
have a SAX and a DOM, but we need a Pythonic XML library, and ElementTree is
just perfect.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ElementTree - Why not part of the core?

2005-12-08 Thread Giovanni Bajo
Giovanni Bajo wrote:

> One thing I really fear about the otherwise great EasyInstall (and
> Python Eggs) is that we could forget about...

... how important it is to have a standard library. The fact that it's easy to
install external modules shouldn't make us drop the standard library. A
standard library means great uniformity across programs. Whenever I open a
Python program which uses ZipFile, or socket, or re, I can read it
*immediately*. If it uses an external library / framework, I have to go study
its manual and documentation. Proliferation of external libraries is good, but
people should normally use the standard library modules for uniformity.

In other words, I disagree with this message:
http://mail.python.org/pipermail/python-dev/2005-June/054102.html

My personal hope is that Python 3.0 will have a nice cleaned-up standard
library with even more features than Python 2.x. As I said in the other
message, let's not end up in the CPAN mess just because it's now technically
easier!
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyqt

2005-12-10 Thread Giovanni Bajo
Kuljo wrote:

> - Can I execute bash commands in a python script (e.g. ls -l or grep)?

Yes, for instance with os.system or os.popen. It won't be portable though.
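
For instance:

import os

status = os.system("ls -l")                    # runs the command, returns its exit status
output = os.popen("ls -l | grep py").read()    # captures the command's output as a string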

> - In the QT Designer there are also KDE-widgets, but I haven't succeeded
> in getting them to work in pyqt (it says:  "self.kProgress1 =
> KProgress(self,"kProgress1") NameError: global name 'KProgress' is not
> defined").
> Are they not implemented there, or should I install some additional
> software?

PyKDE.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set, dict and other structures

2005-01-31 Thread Giovanni Bajo
Raymond Hettinger wrote:

> If set-ifying a dictionary turns out to be an essential and
> time-critical operation, it is possible to add some special case code
> for this.  The question is whether it is worth it.   If you find the
> need is pressing, file a feature request explaining why it is
> essential.  Then, I'll implement it for you.  On the other hand, if
> this is a theoretical nice-to-have, then why clutter up the code
> (with pre-sizing and forcing all values to True).


Just today I was writing some code where I wanted to use sets for the
abstraction (intersection, etc.), but also carry some values with them to
process. So, yes, I believe that having set-like abstraction for dictionaries
would be great. In fact, for a moment I wondered if dict.keys() was already of
type set in 2.4, because it could have made sense.

My concrete situation was a routine that was updating a current snapshot of
data (stored in a dictionary) with a new snapshot of data (another dictionary).
With 'updating' I mean some kind of algorithm where you have to do different
things for:

- items that were present in the current snapshot but not in the new snapshot
anymore (obsolete items)
- items that were not present in the current snapshot but are present in the
new snapshot (new items)
- items that were present in both the current and the new snapshot (possibly
modified items)

So, this was a perfect fit for sets. Given my dictionaries d1 and d2, I would
have liked to be able to do:

for k,v in (d1 - d2).iteritems():
...

for k,v in (d2 - d1).iteritems():
...

for k,v in (d1 & d2).iteritems():   # uhm, not fully correct
...

Of course, there are some nitpicks. For instance, in the last case the
dictionary at the intersection should probably hold a tuple of two values, one
coming from each dictionary (since the intersection finds the items whose key
is present in both dictionaries, but the value can obviously be different). So
you'd probably need something like:

for k,(v1,v2) in (d1 & d2).iteritems():
...

Another solution would be to say that set-like operations, like (d1 & d2),
return an iterator to a sequence of keys *only*, in this case the sequence of
keys available in both dictionaries.

I also tried a different approach, that is, putting my data in some structure
that was suitable for a set (with __hash__ and __cmp__ methods that only care
about the key part):

class FakeForSet:
    def __init__(self, k, v):
        self.key = k
        self.value = v
    def __hash__(self):
        return hash(self.key)
    def __cmp__(self, other):
        return cmp(self.key, other.key)

but this looked immediately weird because I was lying to Python somehow. It
then became clear that it's not going to work, because when you go through
(s1 & s2) you do not know if the items you get are coming from s1 or s2, and
that is important because the value member is going to be different (even if
you're lying to Python saying that those items are equal anyway).

I know of course there are other ways of doing the same thing, like:

    # intersection
    for k in d1:
        if k not in d2:
            continue
        v1, v2 = d1[k], d2[k]
        ...

But I don't think there is anything wrong with providing an abstraction for
this, especially since we already decided that set-like abstractions are
useful.

So, FWIW, I would find set-like operations on dictionaries a useful addition
to Python. Besides, I guess they can easily be experimented with and studied by
subclassing dict.
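
As a rough sketch of that subclassing experiment (the operator semantics are
the ones discussed above, not a definitive API):

class SetDict(dict):
    def __sub__(self, other):
        # keys present in self but not in other, keeping self's values
        return SetDict((k, v) for k, v in self.iteritems() if k not in other)
    def __and__(self, other):
        # keys present in both; values become (self_value, other_value) pairs
        return SetDict((k, (self[k], other[k])) for k in self if k in other)

d1 = SetDict(a=1, b=2, c=3)
d2 = SetDict(b=20, c=30, d=40)
for k, v in (d1 - d2).iteritems():          # obsolete items: only 'a'
    print k, v
for k, (v1, v2) in (d1 & d2).iteritems():   # possibly modified items: 'b' and 'c'
    print k, v1, v2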
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


pythonXX.dll size: please split CJK codecs out

2005-08-20 Thread Giovanni Bajo
Hello,

python24.dll is much bigger than python23.dll. This was discussed already on
the newsgroup, see the thread starting here:
http://mail.python.org/pipermail/python-list/2004-July/229096.html

I don't think I fully understand the reason why additional .pyd modules were
built into the .dll. OTOH, this does not help anyone, since:

- Normal users don't care about the size of pythonXX.dll, or the number of
dependencies, nor whether a given module is shipped as .py or .pyd. They just
import modules of the standard library, ignoring where each module resides. So,
putting more modules (or fewer modules) within pythonXX.dll makes absolutely no
difference to them.
- Users who freeze applications instead are *worse* served by this, because
they end up with larger programs. For them, it is better to have the highest
granularity wrt external modules, so that the resulting frozen application is
as small as possible.

A post in the previous thread (specifically
http://mail.python.org/pipermail/python-list/2004-July/229157.html) suggests
that py2exe users might get a small benefit from the fact that in some cases
they would be able to ship the program with only 3 files (app.exe,
python24.dll, and library.zip). But:

1) I reckon this is a *very* rare case. You need to write an application that
does not use Tk, socket, zlib, expat, nor any external library like numarray or
PIL.
2) Even if you fit the above case, you still end up with 3 files, which means
you still have to package your app somehow, etc. Also, the resulting package
will be *bigger* for no reason, as python24.dll might include modules which the
user doesn't need.

I don't think that merging things into python24.dll is a good way to serve
users who freeze programs, not even py2exe users. Personally, I use McMillan's
PyInstaller[1] which always builds a single executable, no matter what. So I do
not like the idea that things are getting worse because of py2exe: py2exe
should be fixed instead, if its users ask to have fewer files to ship (in
my case, for instance, this missing feature is a showstopper for adopting
py2exe).

Can we at least undo this unfortunate move in time for 2.5? I would be grateful
if *at least* the CJK codecs (which are about 1 MB) are split out of
python25.dll. IMHO, I would prefer having *more* granularity, rather than
*less*.

+1 on splitting out the CJK codecs.

Thanks,
Giovanni Bajo


[1] See also my page on PyInstaller: http://www.develer.com/oss/PyInstaller


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pythonXX.dll size: please split CJK codecs out

2005-08-21 Thread Giovanni Bajo
Martin v. Löwis wrote:

>> I don't think I fully understand the reason why additional .pyd
>> modules were built into the .dll. OTOH, this does not help anyone,
>> since:
>
> The reason is simple: a single DLL is easier to maintain. You only
> need
> to add the new files to the VC project, edit config.c, and be done. No
> new project to create for N different configurations, no messing with
> the MSI builder.

FWIW, this just highlights how inefficient your build system is. Everything you
currently do by hand could be automated, including MSI generation. Also, you
describe the Windows procedure, which I suppose does not take into account
what needs to be done for other OSes. But I'm sure that revamping the Python
build system is not a piece of cake.

I'll take the point though: it's easier to maintain for developers, and most
Python users don't care.

> In addition, having everything in a single DLL speeds up Python
> startup a little, since less file searching is necessary.

I highly doubt this can be noticed in an actual benchmark, but I could be
wrong. I can produce numbers though, if this can help people decide.

>> Can we at least undo this unfortunate move in time for 2.5? I would
>> be grateful if *at least* the CJK codecs (which are like 1Mb big)
>> are splitted out of python25.dll. IMHO, I would prefer having *more*
>> granularity, rather than *less*.
>
> If somebody would formulate a policy (i.e. conditions under which
> modules go into python2x.dll, vs. going into separate files), I'm
> willing to implement it. This policy should best be formulated in
> a PEP.
>
> The policy should be flexible wrt. to future changes. I.e. it should
> *not* say "do everything as in Python 2.3", because this means I
> would have to rip off the modules added after 2.3 entirely (i.e.
> not ship them at all). Instead, the policy should give clear guidance
> even for modules that are not yet developed.
>
> It should be a PEP, so that people can comment. For example,
> I think I would be -1 on a policy "make python2x.dll as minimal
> as possible, containing only modules that are absolutely
> needed for startup".

I'm willing to write up such a PEP, but it's hard to devise a universal
policy. Basically, the only element we can play with is the size of the
resulting binary for the module. Would you like a policy like "split out every
module whose binary on Windows is > X kbytes"?

My personal preference would go to something like "make python2x.dll include
only the modules which are really core, like sys and os". This would also
provide guidance for future modules, as they would simply go into external
modules (I don't think really core stuff is being added right now).

At this point, my main goal is getting CJK out of the DLL, so anything that
lets me achieve this goal is good for me.

Thanks,
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Revamping Python build system (Was: pythonXX.dll size: please split CJK codecs out)

2005-08-21 Thread Giovanni Bajo
Michael Hoffman wrote:

>> FWIW, this just highlights how ineffecient your build system is.
>> Everything you currently do by hand could be automated, including
>> MSI generation.
>
> I'm sure Martin would be happy to consider a patch to make the build
> system more efficient. :)


Out of curiosity, was this ever discussed among Python developers? Would
something like scons qualify for this? OTOH, scons opens nasty
self-bootstrapping issues (being written itself in Python).

Before considering a patch (or even a PEP) for this, the basic requirements
should be made clear. I know portability among several UNIX flavours is one,
for instance. What are the others?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Revamping Python build system (Was: pythonXX.dll size: please split CJK codecs out)

2005-08-21 Thread Giovanni Bajo
Martin v. Löwis wrote:

>> Out of curiosity, was this ever discussed among Python developers?
>> Would something like scons qualify for this? OTOH, scons opens nasty
>> self-bootstrapping issues (being written itself in Python).
>
> No. The Windows build system must be integrated with Visual Studio.
> (Perhaps this is rather, "dunno: is it integrated with VS.NET 2003?")
> When developing on Windows, you really want all the support you can
> get from VS, e.g. when debugging, performing name completion, etc.
> To me, this makes it likely that only VS project files will work.

You seem to ignore the fact that scons can easily generate VS.NET projects. And
it does that by parsing the same file it could use to build the project
directly (by invoking your Visual Studio); and that very same file would be the
same under both Windows and UNIX.

And even if we disabled this feature and built the project directly from the
command line, you could still edit your files in the Visual Studio
environment and debug them there (since you are still compiling them with
Visual C, it's just scons invoking the compiler). You could even set up the
environment so that when you press CTRL+SHIFT+B (or F7, if you have the old
keybinding), it invokes scons and builds the project.

So, if the requirement is "integration with Visual Studio", that is not an
issue to switching to a different build process.

>> Before considering a patch (or even a PEP) for this, the basic
>> requirements should be made clear. I know portability among several
>> UNIX flavours is one, for instance. What are the others?
>
> Clearly, the starting requirement would be that you look at the build
> process *at all*.

I compiled Python several times under Windows (both 2.2.x and 2.3.x) using
Visual Studio 6, and once under Linux. But I never investigated it in
detail.

> The Windows build process and the Unix build process
> are completely different.

But there is no technical reason why it has to be so. I work on several
portable projects, and they use the same build process under both Windows and
Unix, while retaining full Visual Studio integration (I myself am a Visual
Studio user).

> Portability is desirable only for the Unix
> build process; however, you might find that it already meets your
> needs quite well.

Well, you came up with a maintenance problem: you told me that building more
external modules needs more effort. In a well-configured and fully automated
build system, when you add a file you have to write its name only once in a
project description file; if you want to build a dynamic library, you have to
add a single line. This would take care of both Windows and UNIX, covering
compilation, packaging and installation.
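
To give an idea, a hypothetical SCons description where each extension module
is one line (file names are purely illustrative, not Python's real build
files):

# SConstruct
env = Environment()
env.SharedLibrary('zlib', ['zlibmodule.c'])        # one line per dynamic module
env.SharedLibrary('_socket', ['socketmodule.c'])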
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

ANN: PyInstaller 1.0 in the works: package your Python app into a single-file executable

2005-09-03 Thread Giovanni Bajo
Hello,

http://pyinstaller.hpcf.upr.edu/

PyInstaller is a program that packages Python programs into stand-alone
executables, under both Windows and Linux. This is similar to the famous
py2exe, but PyInstaller is also able to build fully-contained (single file)
executables, while py2exe can only build directories containing an executable
with multiple dynamic libraries.

PyInstaller is an effort to rescue, maintain and further develop Gordon
McMillan's Python Installer (now PyInstaller). Its official website is no
longer available and the original package is no longer maintained. Believing
that it is still far superior to py2exe, we have set up this page to continue
its development.

We have just begun development on PyInstaller. Feel free to join us in the
effort! Please consult our Roadmap
(http://pyinstaller.hpcf.upr.edu/pyinstaller/roadmap) to check our plans.

-- 
Giovanni Bajo



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: py2exe 0.6.1 released

2005-09-06 Thread Giovanni Bajo
Thomas Heller wrote:

> * py2exe can now bundle binary extensions and dlls into the
>   library-archive or the executable itself.  This allows to
>   finally build real single-file executables.
>
>   The bundled dlls and pyds are loaded at runtime by some special
>   code that emulates the Windows LoadLibrary function - they are
>   never unpacked to the file system.


Cute!

I tried it using the wx singlefile example, but unfortunately the resulting
executable segfaults at startup (using Python 2.3.3 on Windows 2000, with
latest wxWindows). How can I debug it?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: PyInstaller 1.0 in the works: package your Python app into asingle-file executable

2005-09-06 Thread Giovanni Bajo
Trent Mick wrote:

> I notice that the release notes for py2exe 0.6.1 mention that it
> finally *can* make a single file executable. I'm not involved in
> developing it
> nor am I that experienced with it. Just an FYI.

Yes, I noticed it. You'll have noticed also that the announcement happened
after my mail was already posted. I'll update PyInstaller's website ASAP to fix
the now incorrect information in there.

Thanks,
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: py2exe 0.6.1 released

2005-09-06 Thread Giovanni Bajo
Thomas Heller wrote:

>> I tried it using the wx singlefile example, but unfortunately the
>> resulting executable segfaults at startup (using Python 2.3.3 on
>> Windows 2000, with latest wxWindows).
>
> Yes, I can reproduce that.  I'm still using wxPython 2.4.2.4 for
> Python
> 2.3.5, and that combo works.  I have done a few tests, and wxPython
> 2.5.1.5 also works, while 2.5.5.1 crashes.

Ah, that's fine then. I thought it was one of those "only on my computer" kinds
of issues :)

>> How can I debug it?
>
> I'll assume that's a serious question.

Of course it was, I'm not sure why you should doubt it. I was just trying to
be helpful, thinking that it could have been hard to reproduce.
Luckily, you can look into it yourself.

> I've done all this, and it seems it is crashing when trying to import
> _gdi.pyd.  Next would be to debug through _memimported.pyd, but I
> don't have a debug build of wxPython.

OK. Do you believe that _memimported.pyd can eventually converge to something
stable? Emulating LoadLibrary for all versions of Windows is not an easy task
after all. Wine might provide some insights.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python xml.dom, help reading attribute data

2005-09-06 Thread Giovanni Bajo
Thierry Lam wrote:

> Let's say I have the following xml tag:
>
> <status role="success">1</status>
>
> I can't figure out what kind of python xml.dom codes I should invoke
> to read the data 1? Any help please?
>
> Thanks
> Thierry


If you use elementtree:

>>> from elementtree import ElementTree
>>> node = ElementTree.fromstring("""<status role="success">1</status>""")
>>> node.text
'1'
>>> node.attrib["role"]
'success'
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What XML lib to use?

2005-09-19 Thread Giovanni Bajo
Fredrik Lundh wrote:

> Edvard Majakari wrote:
>
>> Using a SAX / full-compliant DOM parser could be good for learning
>> things, though. As I said, depends a lot.
>
> since there are no *sane* reasons to use SAX or DOM in Python, that's
> mainly a job security issue...


One sane reason is that ElementTree is not part of the standard library. There
are cases where you write a simple Python script of 400 lines and you want it
to stay single-file. While ElementTree is very easy to distribute (for basic
features it's just a single file), it still won't fit some scenarios.

So, why hasn't it made it into the standard library yet, given that it's so
much better than the alternatives?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: PyInstaller 1.0 - build single-file distributions for your Python programs

2005-09-19 Thread Giovanni Bajo
Hello,

PyInstaller 1.0 is out:
http://pyinstaller.hpcf.upr.edu/pyinstaller

Despite its version number, this is a very stable release, as we are working
off McMillan's well-known Installer, which was discontinued some years
ago.

Feature highlights:
 * Packaging of Python programs into stand-alone executables that work on
   computers without Python installed.
 * Multiplatform: works under Windows, Linux and Irix.
 * Multiversion: works under any version of Python since 1.5.
 * Dual packaging mode:
   * Single directory: build a directory containing an executable plus all
     the external binary modules (.dll, .pyd, .so) used by the program.
   * Single file: build a single executable file, totally self-contained,
     which runs without any external dependency.
 * Support for automatic binary packing through the well-known UPX compressor.
 * Optional console mode (see standard output and standard error at runtime).
 * Selectable executable icon (Windows only).
 * Fully configurable version resource section in the executable (Windows only).
 * Support for building COM servers (Windows only).


ChangeLog (with respect to the latest official release of McMillan's
Installer):
(+ user visible changes, * internal stuff)
 + Add support for Python 2.3 (fix packaging of codecs).
 + Add support for Python 2.4 (under Windows, this required recompiling the
bootloader with a different compiler version).
 + Fix support for Python 1.5.2, should be fully functional now (required to
rewrite some parts of the string module for the bootloader).
 + Fix a rare bug in extracting the dependencies of a DLL (bug in PE header
parser).
 + Fix packaging of PyQt programs (needed an import hook for a hidden
import).
 + Fix imports calculation for modules using the "from __init__ import"
syntax.
 + Fix a packaging bug when a module was being imported both through binary
dependency and direct import.
 * Restyle documentation (now using docutils and reStructuredText).
 * New Windows build system for automatic compilation of the bootloader in all
the required flavours (using Scons).


Documentation:
http://pyinstaller.hpcf.upr.edu/trac_common/docs/Manual_v1.0.html

Mailing list:
http://lists.hpcf.upr.edu/mailman/listinfo/pyinstaller

Future plans:
* Make executables built with Python 2.4 not depend on MSVCR71.DLL (under
Windows)
* Add a very simple frontend to simplify the usage
* Make PyInstaller a normal distutils package.

Happy packaging!
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Getting tired with py2exe

2005-09-20 Thread Giovanni Bajo
James Stroud wrote:

>> What about PyInstaller that was announced the other day? The feature
>> list looks great, and it appears the developers intend to maintain
>> and enhance the program indefinitely.
> ...
>>
>> http://pyinstaller.hpcf.upr.edu/pyinstaller
>
> That's one short "indefinitely":
>
> Not Found
> The requested URL /pyinstaller was not found on this server.
> Apache/2.0.53 (Fedora) Server at pyinstaller.hpcf.upr.edu Port 80

Yes, we had a short offline period today due to maintenance on the server,
sorry for that.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Getting tired with py2exe

2005-09-20 Thread Giovanni Bajo
Simon John wrote:

> And if they want to use UPX, well that's up to them, but I've had some
> problems with it and don't particularly like the thought of runtime
> decompression and the two process thing.

UPX compression is totally optional, and it is even disabled by default. For
the record, I have been using UPX for many, many years and have never had a
problem whatsoever, though surely there have been bugs.

Nonetheless, if you are so worried about it, I wonder how you can feel
comfortable with py2exe loading up DLLs with its own version of LoadLibrary. That
looks potentially much more dangerous to me. Or maybe you never build
single-file distributions and only use single-dir distributions... in which
case there is no "two process" thing.

> And when you can compress the
> distributable using 7zip or whatever, why bother keeping it compressed
> once downloaded?

Some people like the idea of "absolutely no installation process needed".
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Getting tired with py2exe

2005-09-20 Thread Giovanni Bajo
Bugs wrote:

> Whereas py2exe can create an executable that NEVER writes any files
> out to the filesystem, they are loaded instead directly from the
> executable?
> If so then that's a major difference and IMHO the py2exe method is
> superior.

To do this, py2exe uses a manually rewritten version of LoadLibrary (the win32
API call in charge of loading DLLs). Since nobody has access to the original
source code of LoadLibrary (across all Windows versions), the rewritten version
is by definition incomplete and less stable; it might also not be forward
compatible (that is, your shipped application may break on Windows Vista, or if
you package applications compiled with newer Visual Studio versions).

So it's a trade-off problem. I'm not a fan of writing temporary files to disk,
but I surely prefer this to a potential compatibility headache; an executable
built by PyInstaller will dump some stuff into your temp directory, but it
won't run into compatibility problems on that computer or with that
DLL. Anyway, I'm not against adding things to PyInstaller in principle: if
there is enough demand for such a feature, we might as well add it.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyInstaller 1.0 - build single-file distributions for your Python programs

2005-09-20 Thread Giovanni Bajo
Giovanni Bajo wrote:

> PyInstaller 1.0 is out:
> http://pyinstaller.hpcf.upr.edu/pyinstaller


For the logs, the correct URL is:
http://pyinstaller.hpcf.upr.edu

The other was a redirector which is no longer valid after a site maintenance
session. I apologize for the inconvenience.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyINI : Cross-Platform INI parser

2005-02-11 Thread Giovanni Bajo
SeSe wrote:

> hi, every one,
>
> I started a opensource project PyINI for corss-platform *.ini parsing
> at http://sourceforge.net/projects/pyini/
>
> I have released a simple alpha version, which can read *.ini, with
> some extended features such as "key=value1,value2,value3". I also
> made a c++ binding to PyINI with elmer toolkit.


The most useful feature would be the ability to write INI files back without
affecting their formatting, without removing user comments, etc. This is what
the Windows APIs do, and it is what I miss in most INI parsing libraries
out there.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyINI : Cross-Platform INI parser

2005-02-12 Thread Giovanni Bajo
Thomas Heller wrote:

>>> I have released a simple alpha version, which can read *.ini, with
>>> some extended features such as "key=value1,value2,value3". I also
>>> made a c++ binding to PyINI with elmer toolkit.
>>
>>
>> The most useful feature would be to be able to write INI files back
>> without affecting their formatting, without removing user commands,
>> etc. This is what Windows APIs do, and it is what I am missing from
>> most INI parsing libraries out there.
>
> You can easily access the windows apis either with pywin32, or with
> ctypes for those functions that aren't wrapped in pywin32.

Sure, but we were speaking of doing that in a portable library.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


multiple inheritance with builtins

2005-03-05 Thread Giovanni Bajo
Hello,

I noticed that builtin types like list, set, dict, tuple don't seem to adhere to
the convention of using super() in the constructor to correctly allow
diamond-shaped inheritance (through the MRO). For instance:


>>> class A(object):
...     def __init__(self):
...         print "A.__init__"
...         super(A, self).__init__()
...
>>> class B(A, list):
...     def __init__(self):
...         print "B.__init__"
...         super(B, self).__init__()
...
>>> B.__mro__
(<class '__main__.B'>, <class '__main__.A'>, <type 'list'>, <type 'object'>)
>>> B()
B.__init__
A.__init__
[]
>>> class C(list, A):
...     def __init__(self):
...         print "C.__init__"
...         super(C, self).__init__()
...
>>> C.__mro__
(<class '__main__.C'>, <type 'list'>, <class '__main__.A'>, <type 'object'>)
>>> C()
C.__init__
[]



It seems weird to me that I have to swap the order of bases to get the expected
behaviour. Is there a reason for this, or is it simply a bug that should be
fixed?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turning String into Numerical Equation

2005-03-14 Thread Giovanni Bajo
Michael Spencer wrote:

> * this means that, eval("sys.exit()") will likely stop your
> interpreter, and
> there are various other inputs with possibly harmful consequences.
>
> Concerns like these may send you back to your original idea of doing
> your own expression parsing.

I use something along these lines:

def safe_eval(expr, symbols={}):
    return eval(expr, dict(__builtins__=None, True=True, False=False), symbols)

import math
def calc(expr):
    return safe_eval(expr, vars(math))

>>> calc("2+3*(4+5)*(7-3)**2")
434
>>> calc("sin(pi/2)")
1.0
>>> calc("sys.exit()")
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 2, in calc
  File "", line 2, in safe_eval
  File "", line 0, in ?
NameError: name 'sys' is not defined
>>> calc("0x1000 | 0x0100")
4352

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turning String into Numerical Equation

2005-03-15 Thread Giovanni Bajo
Steven Bethard wrote:

>>> I use something along these lines:
>>>
>>> def safe_eval(expr, symbols={}):
>>> return eval(expr, dict(__builtins__=None, True=True,
>>> False=False), symbols)
>>>
>>> import math
>>> def calc(expr):
>>> return safe_eval(expr, vars(math))
>>>
>> That offers only notional security:
>>
>>  >>> calc("acos.__class__.__bases__[0]")
>>  <type 'object'>
>
> Yeah, I was concerned about the same thing, but I realized that I
> can't actually access any of the func_globals attributes:


When __builtins__ is not the standard one, Python runs in restricted
execution mode. In fact, I believe my solution to be totally safe, and I
would otherwise love to be proved wrong.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turning String into Numerical Equation

2005-03-16 Thread Giovanni Bajo
Steven Bethard wrote:

>> When __builtin__ is not the standard __builtin__, Python is in
>> restricted execution mode.
>
> Do you know where this is documented?  I looked around, but couldn't
> find anything.


I found some documentation in the reference of the (now disabled) modules for
Restricted Execution (chapter 17 in the Library Reference). Quoting:

"""
The Python run-time determines whether a particular code block is executing in
restricted execution mode based on the identity of the __builtins__ object in
its global variables: if this is (the dictionary of) the standard __builtin__
module, the code is deemed to be unrestricted, else it is deemed to be
restricted.
"""

There are also some hints in the documentation for eval() itself:

"""
If the globals dictionary is present and lacks '__builtins__', the current
globals are copied into globals before expression is parsed. This means that
expression normally has full access to the standard __builtin__ module and
restricted environments are propagated
"""

In fact, the documentation for eval() could be improved to explain the benefits
of setting __builtins__ in the globals.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turning String into Numerical Equation

2005-03-16 Thread Giovanni Bajo
Michael Spencer wrote:

>> In fact, I believe my solution to be totally safe,
>
> That's a bold claim!  I'll readily concede that I can't access
> func_globals from restricted mode eval (others may know better).  But
> your interpreter is still be vulnerable to DOS-style attack from
> rogue calculations or quasi-infinite loops.


Yes, but I don't see your hand-rolled expression calculator being
DoS-safe either. I believe DoS attacks are a problem whenever you want to
calculate the result of an expression taken from the outside. What I was trying
to show is that my simple one-liner is no worse than a multi-page full-blown
expression parser and interpreter.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turning String into Numerical Equation

2005-03-16 Thread Giovanni Bajo
Steven Bethard wrote:

>> In fact, the documentation for eval() could be improved to explain
>> the benefits of setting __builtins__ in the globals.
>
> Well, if you think you're pretty clear on what's happening, a patch is
> always appreciated. =)  I have a feeling that the docs are at least
> partially vague because no one actually wants to advertise the
> restricted execution features[1] since no one can guarantee that
> they're really secure...

>[1] Guido say as much
>http://mail.python.org/pipermail/python-dev/2002-December/031234.html

I am by no means clear on it. I found out about this "feature" of eval by
accident and wondered why it is not explained in the documentation. The link
you provided is a good answer to my question. I understand Guido's concerns, in fact.

Then, I should start my usual rant about how really sad it is to send patches to
Python and have them ignored for years (not even an acknowledgement). Really sad.
This is why I'm not going to do that again.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wxPython vs. pyQt

2005-03-17 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> I've narrowed down my toolkit selection for my project to wxPython and
> pyQt, and now i'd like to hear any opinions, war stories, peeves, etc,
> about them, particularly from anyone who's used _both_toolkits_. I'm
> only mildly interested in the IDEs and UI designers for each, as i
> want to do as much as i can in just Xemacs and xterm. Feel free to
> rant, rave, pontificate, whatever.


I have used both. I find PyQt vastly superior to wxPython for a number of
reasons, including the overall better design of the library, the total flexibility
and orthogonality of the provided features, and the incredible portability. I would
suggest wxPython only if you cannot meet PyQt's license requirements (and this is
going to change soon, since Qt4 will have a GPL version for Windows too).
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Upgrade woes: Numeric, gnuplot, and Python 2.4

2004-12-11 Thread Giovanni Bajo
Jive wrote:

> 4) Buy a copy of the VC++ 7.1 and port Numeric myself. (I still don't
> know what to buy from Mr. Gates -- VC++ .net Standard?)


VC++ 7.1, the compiler/linker, is a free download. Google for "VC toolkit". You
won't get the fancy IDE and stuff, but you have everything you need to build
your extensions. I saw patches floating around to build Python itself with the
free version (a couple of small nits).
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Grouping code by indentation - feature or ******?

2005-03-27 Thread Giovanni Bajo
Terry Reedy wrote:

>> 3) Sometimes the structure of the algorithm is not the structure
>>   of the code as written, people who prefer that the indentation
>>   reflects the structure of the algorithm instead of the structure
>>   of the code, are forced to indent wrongly.
>
> Do you have any simple examples in mind?


Yes. When I use PyQt (or a similar toolkit), I would like to indent my widget
creation code so that one indentation level means one level down into the
widget tree hierarchy:

v = VBox(self)
# sons of v indented here
    w = HBox(self)
    # sons of w here
        QLabel("hello", w)
        QLabel("world", w)
    QButton("ok", v)

In fact, I am used to doing this very thing in C++, and it helps readability a
lot.

I know I can add "if 1:" to do such a thing, but that's beyond the point. I'm
just showing that there are simple and reasonable examples of cases where you
would like to indent your code in different ways and you can't.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dr Dobbs "with" keyword

2005-04-16 Thread Giovanni Bajo
Neil Hodgson wrote:

>In the March 2005 issue of Dr Dobbs Journal there is an article
> "Resource Management in Python" by Oliver Schoenborn. One paragraph
> (first new paragraph, page 56) starts "Starting with Python 2.4, a new
> type of expression lets you use the keyword /with/". It continues,
> mentioning PEP 310 (Reliable Acquisition/Release Pairs) which is at
> "Draft" status and unlikely to be accepted with the keyword "with" as
> Guido wants to use that for another purpose.

Whatever keyword is chosen, I hope PEP 310 eventually hits Python; I have been
awaiting it for a long time. I would also like to have a builtin resource()
like this:

def resource(enter_call, exit_call):
    class Res(object):
        __enter__ = lambda self: enter_call()
        __exit__ = lambda self: exit_call()
    return Res()

with resource(self.mutex.lock, self.mutex.unlock):
    pass

Either that, or "with" could call adapt() implicitly so I can register my
conversion functions.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-11 Thread Giovanni Bajo
Shane Hathaway wrote:

> Here's the real problem: maintaining import statements when moving
> sizable blocks of code between modules is hairy and error prone.


You can also evaluate a solution like this:
http://codespeak.net/py/current/doc/misc.html#the-py-std-hook
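
From what I recall of the py lib, the idea is that a single "import py" gives
you lazy access to the whole standard library, roughly like this (treat the
exact spelling as an assumption and check the page above):

import py

# py.std.<module> imports the stdlib module lazily on first attribute access
files = py.std.os.listdir(".")
py.std.sys.stdout.write("%d entries\n" % len(files))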

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: split string saving screened spaces

2005-12-16 Thread Giovanni Bajo
Sergey wrote:

> Which module to use to do such thing:

> "-a -b -c '1 2 3'" -> ["-a", "-b", "-c", "'1 2 3'"]


>>> import shlex
>>> shlex.split("-a -b -c '1 2 3'")
['-a', '-b', '-c', '1 2 3']

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


distutils: build scripts as native executables?

2005-12-16 Thread Giovanni Bajo
Hello,

am I wrong or isn't there a way in distutils to build (compile/link) a
native executable and install it among the "scripts"? It looks like
distutils.CCompiler features a link_executable() method, and build_ext.py
has most of the logic needed to track dependencies, include paths and
whatnot, but nothing is exposed at a higher level.

I was going to subclass build_ext and find a way to piggy-back
link_executable() into it instead of link_shared_object(), but I thought I'd
ask before doing useless / duplicate work.
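
For reference, this is roughly the low-level sequence I have in mind, using
only documented CCompiler methods (paths and names below are made up, and this
is not yet wired into a distutils command class):

from distutils.ccompiler import new_compiler

cc = new_compiler()                        # platform-appropriate compiler
objects = cc.compile(["tools/mytool.c"])   # compile() returns the object file list
cc.link_executable(objects, "mytool", output_dir="build/scripts")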
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: reading files into dicts

2005-12-29 Thread Giovanni Bajo
rbt wrote:

> What's a good way to write a dictionary out to a file so that it can
> be easily read back into a dict later? I've used realines() to read
> text
> files into lists... how can I do the same thing with dicts? Here's
> some sample output that I'd like to write to file and then read back
> into a dict:
>
> {'.\\sync_pics.py': 1135900993, '.\\file_history.txt': 1135900994,
> '.\\New Text Document.txt': 1135900552}



>>> d = {'.\\sync_pics.py': 1135900993, '.\\file_history.txt': 1135900994,
... '.\\New Text Document.txt': 1135900552}
>>> file("foo", "w").write(repr(d))
>>> data = file("foo").read()
>>> data
"{'.sync_pics.py': 1135900993, '.file_history.txt': 1135900994,
'.New Text Document.txt': 1135900552}"
>>> d = eval(data)
>>> d
{'.\\sync_pics.py': 1135900993, '.\\file_history.txt': 1135900994, '.\\New Text
Document.txt': 1135900552}

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyQt calling an external app?

2006-01-10 Thread Giovanni Bajo
gregarican wrote:

> What's the easiest and cleanest way of having PyQt bring up an
> external application?

You can also go the Qt way and use QProcess. This also gives you cross-platform
communication and process-killing capabilities, which are otherwise pretty hard to
obtain (see the mess in Python with popen[1234]/subprocess). You also get nice
signals from the process, which interact well in a Qt environment.
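
To give a concrete flavour, here is a rough sketch against the Qt4-style
QProcess API (if you are on PyQt3 the QProcess API differs; the program name
and arguments below are just placeholders):

from PyQt4.QtCore import QProcess

proc = QProcess()
proc.start("ping", ["-c", "3", "python.org"])   # program and arguments are illustrative

if proc.waitForFinished(10000):                 # block up to 10 seconds
    print "exit code:", proc.exitCode()
    print str(proc.readAllStandardOutput())
else:
    proc.kill()                                 # cross-platform process killing
    print "process did not finish in time"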
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why keep identity-based equality comparison?

2006-01-10 Thread Giovanni Bajo
Mike Meyer wrote:

>> My question is, what reasons are left for leaving the current default
>> equality operator for Py3K, not counting backwards-compatibility?
>> (assume that you have idset and iddict, so explicitness' cost is only
>> two characters, in Guido's example)
>
> Yes. Searching for items in heterogenous containers. With your change
> in place, the "in" operator becomes pretty much worthless on
> containers of heterogenous objects. Ditto for container methods that
> do searches for "equal" members. Whenever you compare two objects that
> don't have the same type, you'll get an exception and terminate the
> search. If the object your searching for would have been found
> "later", you lose - you'll get the wrong answer.
>
> You could fix this by patching all the appropriate methods. But then
> how do you describe their behavior, without making some people expect
> that it will raise an exception if they pass it incomparable types?
>
> Also, every container type now has this split between identity and
> equality has to be dealt with for *every* container class. If you want
> identity comparisons on objects, you have to store them in an idlist
> for the in operator and index methods to work properly.
>
> I also think your basic idea is wrong. The expression "x == y" is
> intuitively False if x and y aren't comparable. I'd say breaking that
> is a bad thing. But if you don't break that, then having "x == y"
> raise an exception for user classes seems wrong. The comparison should
> be False unless they are the same object - which is exactly what
> equality based on id gives us.

Seconded. All hell would break loose if Python didn't allow == for heterogeneous
types; $DEITY only knows how many times I have relied on it. Please don't let this
change go into Py3k.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: import module and execute function at runtime

2006-01-13 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> I'm trying to import a module at runtime using variables to specify
> which module, and which functions to execute. for example:
>
> mStr = "sys"
> fStr = "exit"
>
> # load mod
> mod = __import__(mStr)
> # call function
> mod.fStr()


getattr(mod, fStr)()
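
In full, applied to your example:

mStr = "sys"
fStr = "exit"

mod = __import__(mStr)       # load the module named by mStr
func = getattr(mod, fStr)    # look up the attribute named by fStr
# func() would now call sys.exit()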
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why can't I "from module import *" except at module level?

2006-01-13 Thread Giovanni Bajo
Mudcat wrote:

> Is there any way to do this or am must I load all modules by function
> name only if it's after initialization?

Not sure. globals().update(mod.__dict__) might do the trick. Or just design a
better system and be done with it.
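
For what it's worth, a minimal sketch of that idea at module level, filtering
out private names so you don't clobber things like __name__:

mod = __import__("math")
# copy the module's public names into the current module's globals,
# roughly what "from math import *" would have done at import time
globals().update(dict((k, v) for k, v in vars(mod).items() if not k.startswith("_")))
print sin(pi / 2)   # 1.0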
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Marshal Obj is String or Binary?

2006-01-13 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Try...
>
>>>> for i in bytes: print ord(i)
>
> or
>
>>>> len(bytes)
>
> What you see isn't always what you have. Your database is capable of
> storing \ x 0 0 characters, but your string contains a single byte of
> value zero. When Python displays the string representation to you, it
> escapes the values so they can be displayed.

He can still store the repr of the string into the database, and then
reconstruct it with eval:

>>> bytes = "\x00\x01\x02"
>>> bytes
'\x00\x01\x02'
>>> len(bytes)
3
>>> ord(bytes[0])
0
>>> rb = repr(bytes)
>>> rb
"'\\x00\\x01\\x02'"
>>> len(rb)
14
>>> rb[0]
"'"
>>> rb[1]
'\\'
>>> rb[2]
'x'
>>> rb[3]
'0'
>>> rb[4]
'0'
>>> bytes2 = eval(rb)
>>> bytes == bytes2
True

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Marshal Obj is String or Binary?

2006-01-14 Thread Giovanni Bajo
Max wrote:

>>> What you see isn't always what you have. Your database is capable of
>>> storing \ x 0 0 characters, but your string contains a single byte
>>> of value zero. When Python displays the string representation to
>>> you, it escapes the values so they can be displayed.
>>
>>
>> He can still store the repr of the string into the database, and then
>> reconstruct it with eval:
>>
>
> Yes, but len(repr('\x00')) is 4, while len('\x00') is 1. So if he uses
> BLOB his data will take almost a quarter of the space, compared to
> your method (stored as TEXT).

Sure, but he didn't ask for the best strategy to store the data into the
database; he specified very clearly that he *can't* use BLOB, and asked how to
use TEXT.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 309 (Partial Function Application) Idea

2006-01-15 Thread Giovanni Bajo
Ronald Mai wrote:

> Here is a reference implementation:
>
> _ = lambda x: x.pop(0)
>
> def partial(func, *args, **keywords):
>     def newfunc(*fargs, **fkeywords):
>         newkeywords = keywords.copy()
>         newkeywords.update(fkeywords)
>         newargs = (lambda seq: tuple([(a == _ and a(seq)) or a for
>                                       a in args] + seq))(list(fargs))
>         return func(*newargs, **newkeywords)
>     newfunc.func = func
>     newfunc.args = args
>     newfunc.keywords = keywords
>     return newfunc
>
> Here is example of use:
>
>>>> def capture(*args):
>     return args
>
>>>> partial(capture)()
> ()
>>>> partial(capture, _)(1)
> (1,)
>>>> partial(capture, _, 2)(1)
> (1, 2)
>>>> partial(capture, 1)(2)
> (1, 2)
>>>> partial(capture, 1, _)(2)
> (1, 2)
>>>> partial(capture, 1, _)()
> IndexError: pop from empty list
>>>> partial(capture, 1, _, _)(2, 3)
> (1, 2, 3)

Other implementations I have seen (boost::bind comes to mind) use ordered
placeholders such as _1, _2, _3, etc to provide more flexibility in adaptation:

>>> partial(capture, "a", _1, _2)("b", "c")
("a", "b", "c")
>>> partial(capture, "a", _2, _1)("b", "c")
("a", "c", "b")

I don't see mention of this in the PEP, but it's a nice feature to have IMO.
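
To make the idea concrete, here is a rough sketch of numbered placeholders on
top of the reference implementation above (the _Placeholder class and the
_1/_2/_3 names are mine, not part of the PEP):

class _Placeholder(object):
    def __init__(self, idx):
        self.idx = idx

_1, _2, _3 = _Placeholder(0), _Placeholder(1), _Placeholder(2)

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        newargs = []
        for a in args:
            if isinstance(a, _Placeholder):
                newargs.append(fargs[a.idx])   # substitute the n-th call argument
            else:
                newargs.append(a)
        return func(*newargs, **newkeywords)
    return newfunc

def capture(*args):
    return args

print partial(capture, "a", _1, _2)("b", "c")   # ('a', 'b', 'c')
print partial(capture, "a", _2, _1)("b", "c")   # ('a', 'c', 'b')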
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 309 (Partial Function Application) Idea

2006-01-15 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Since python has named parameter(and I assume this PEP would support
> it as well), is it really that useful to have these place holder things
> ?

Probably not so much, you're right.

> As when the parameter list gets long, named param should be easier to
> read.
>
> The only case I find it useful is for binary ops where I would like to
> either bind the left hand side or the right hand side but that can be
> handled easily with a "flip" function as in haskell.

I'm not sure I'd like a specialized flip function, compared to the numbered
placeholders solution.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: zipfile decompress problems

2006-01-16 Thread Giovanni Bajo
Waguy wrote:

> import zipfile
> file = zipfile.ZipFile("c:\\chessy.zip", "r")


Use "rb".
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shrinky-dink Python (also, non-Unicode Python build is broken)

2006-01-16 Thread Giovanni Bajo
Larry Hastings wrote:

> First and foremost: turning off Py_USING_UNICODE *breaks the build*
> on Windows.

Probably nobody does that nowadays. My own feeling (but I don't have numbers
to back it up) is that most of the size of the .DLL comes from
things like the CJK codecs (which are about 800k). I don't think you're
gaining that much by trying to remove unicode support at all, especially
since (as you noticed) it's going to be a maintenance headache.

> Second of all, the dumb-as-a-bag-of-rocks Windows linker (at least
> the one used by VC++ under MSVS .Net 2003) *links in unused static
> symbols*.  If I want to excise the code for a module, it is not
> sufficient to comment-out the relevant _inittab line in config.c.
> Nor does it help if I comment out the "extern" prototype for the
> init function.  As far as I can tell, the only way to *really* get
> rid of a module, including all its static functions and static data,
> is to actually *remove all the code* (with comments, or #if, or
> whatnot).  What a nosebleed, huh?

This is off-topic here, but the MSVC linker *can* strip unused symbols, of
course. Look into /OPT:REF.

> So in order to build my *really* minimal python24.dll, I have to hack
> up the source something fierce.  It would be pleasant if the Python
> source code provided an easy facility for turning off modules at
> compile-time.  I would be happy to propose something / write a PEP
> / submit patches to do such a thing, if there is a chance that such
> a thing could make it into the official Python source.  However, I
> realize that this has terribly limited appeal; that, and the fact
> that Python releases are infrequent, makes me think it's not a
> terrible hardship if I had to re-hack up each new Python release
> by hand.

You're not the only one complaining about the size of the Python .DLL: people
developing self-contained programs with tools like PyInstaller or py2exe (that
is, programs which are supposed to run without Python installed) are also
affected by the lack of a clear policy.

I complained about this myself before, especially after Python 2.4 got those
ginormous CJK codecs within its standard DLL; you can look for the thread on Google.
The bottom line of that discussion was:

- The policy about what must be linked within the Python DLL and what must be
kept outside should be proposed as a PEP, and it should provide guidelines
to be applied to future modules as well.
- There will be some opposition to the obvious policy of "keeping the bare
minimum inside the DLL" because of inefficiencies in the Python build
system. Specifically, I was told that maintaining modules outside the DLL
instead of inside the DLL is more burdensome for some reason (which I have
not investigated), but surely, with a good build system, switching either
configuration setting should be a matter of changing a single word in a
single place, with no code changes required.

Personally, I could find some time to write up a PEP, but surely not to carry
on a lengthy discussion nor to improve the build system myself. Hence, I
mostly decided to give up for now and stick with recompiling Python myself.
The policy I'd propose is that the DLL should contain the minimum set of
modules needed to run the following Python program:

---
print "hello world"
---

There's probably some specific exception I'm not aware of, but you get the
big picture.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: zipfile decompress problems

2006-01-16 Thread Giovanni Bajo
Waguy wrote:

> I tried that to and it didn't work, got the same message
> Thanks though,


Can you send / provide a link to a minimal zip file which reproduces the
problem?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parsing files -- pyparsing to the rescue?

2006-01-16 Thread Giovanni Bajo
rh0dium wrote:

> I have a file which I need to parse and I need to be able to break it
> down by sections.  I know it's possible but I can't seem to figure this
> out.
>
> The sections are broken by <> with one or more keywords in the <>.
> What I want to do is to be able to pars a particular section of the
> file.  So for example I need to be able to look at the SYSLIB section.
> Presumably the sections are
>
>
> 
> Sys Data
> Sys-Data
> asdkData
> Data
> 
> Data
> Data
> Data
> Data
> 
> Data
> Data
> Data
> Data
> 
> Data
> Data
> Data
> Data
> 

Given your description, pyparsing doesn't feel like the correct tool:

import re

secs = {}
for L in file("foo.txt", "rU"):
    L = L.rstrip("\n")
    if re.match(r"<.*>", L):
        name = L[1:-1]
        secs[name] = []
    else:
        secs[name].append(L)

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shrinky-dink Python (also, non-Unicode Python build is broken)

2006-01-17 Thread Giovanni Bajo
Neil Hodgson wrote:

>> - There will be some opposition to the obvious policy of "keeping
>> the bare minimum inside the DLL" because of inefficiencies in the
>> Python build system.
>
> It is also non-optimal for those that do want the full set of
> modules as separate files can add overhead for block sizing (both on
> disk and in memory, executables pad out each section to some block
> size), by requiring more load-time inter-module fixups

I would be surprised if this showed up in any profile. Importing modules is
already slow regardless of such details (see programs like "mercurial" that do
lazy imports to win benchmarks against their C-compiled counterparts). As for the
overhead at the border of blocks, you should be more worried about 800K of CJK
codecs being loaded into your virtual memory (and not fully swapped out because
of block sizing) which are totally useless for most applications.

Anyway, we're picking nits here, but you have a point in being worried. If I
ever write a PEP, I will produce numbers to show beyond any doubt that there is
no performance difference.

> , and by not
> allowing the linker to perform some optimizations. It'd be worthwhile
> seeing if the DLL would speed up or shrink if whole program
> optimization was turned on.

There is no way whole program optimization can produce any advantage as the
modules are totally separated and they don't have direct calls that the
compiler can exploit.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: magical expanding hash

2006-01-17 Thread Giovanni Bajo
James Stroud wrote:

>> I need a magical expanding hash with the following properties:
>>
>> * it creates all intermediate keys
>>
>> meh['foo']['bar] = 1
>>
>> -- works even if meh['foo'] didn't exist before
>>
>> * allows pushing new elements to leaves which are arrays
>>
>> meh['foo']['list] << elem1
>> meh['foo']['list] << elem2
>>
>> * allows incrementing numeric leaves
>>
>> meh['foo']['count'] += 7
>>
>> * serializable
>>
>> I have such a class in ruby.  Can python do that?
>>
>
>
> Is this too magical?
>
>
> class meh(dict):
>    def __getitem__(self, item):
>      if self.has_key(item):
>        return dict.__getitem__(self, item)
>      else:
>        anitem = meh()
>        dict.__setitem__(self, item, anitem)
>      return anitem

Actually, what the OP wants is already a method of dict: it's called
setdefault(). It's not overloaded by "[]" because it's believed to be better to
be able to say "I want auto-generation" explicitly rather than implicitly: it
gives the user more control, and lets stricter rules be enforced.

>>> class meh(dict):
...  def __getitem__(self, item):
...   return dict.setdefault(self, item, meh())
...
>>> a = meh()
>>> a["foo"]["bar"] = 2
>>> a["foo"]["dup"] = 3
>>> print a["foo"]["bar"]
2
>>> print a
{'foo': {'dup': 3, 'bar': 2}}


So I advise using this class, and suggest the OP try using setdefault()
explicitly to better understand Python's philosophy.

BTW: remember that setdefault() is written "setdefault()" but it's read
"getorset()".
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: magical expanding hash

2006-01-18 Thread Giovanni Bajo
braver wrote:

> Also, what's the shortest python idiom for get_or_set in expression?

dict.setdefault, as I already explained to you.

Again, I'd like to point out that what you're doing is *not* the correct
Pythonic way of doing things. In Python, there is simply no implicit
sub-dict creation, nor implicit type inference from operators. And there
are very good reasons for that. Python is a strongly typed language: objects
have a type and keep it, they don't change it when used with different
operators. setdefault() is your get'n'set; everything else has to be made
explicit for a good reason. Strong typing has its virtues, let me give you a
link about this:

http://wingware.com/python/success/astra
See specifically the paragraph "Python's Error Handling Improves Robustness"

I believe you're attacking the problem from a very bad point of view.
Instead of trying to write a Python data structure which behaves like
Perl's, convert a Perl code snippet into Python, using the *Pythonic* way of
doing it, and then compare things. Don't try to write Perl in Python, just
write Python and then compare the differences.
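
Just to make the comparison concrete, this is how the operations from the
original post look when written explicitly with plain dicts and setdefault():

meh = {}

# create intermediate keys explicitly
meh.setdefault('foo', {})['bar'] = 1

# push new elements onto a list leaf
meh.setdefault('foo', {}).setdefault('list', []).append('elem1')
meh.setdefault('foo', {}).setdefault('list', []).append('elem2')

# increment a numeric leaf
foo = meh.setdefault('foo', {})
foo['count'] = foo.get('count', 0) + 7

print meh   # {'foo': {'bar': 1, 'list': ['elem1', 'elem2'], 'count': 7}} (key order may vary)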
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list(...) and list comprehensions (WAS: Arithmetic sequences in Python)

2006-01-18 Thread Giovanni Bajo
Diez B. Roggisch wrote:

>> due to the nested parentheses.  Note that replacing list comprehensions
>> with list(...) doesn't introduce any nested parentheses; it basically
>> just replaces brackets with parentheses.
>
> But you don't need the nested parentheses - use *args instead for the
> list-constructor.
>
> list(a,b,c)

No, you can't. That's ambiguous if you pass only one argument, and that
argument is iterable. This is also the reason why set() doesn't work this
way.
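
A tiny example of the ambiguity:

# Today the single argument is treated as an iterable to copy from:
print list("ab")        # ['a', 'b']
print list([1, 2, 3])   # [1, 2, 3]

# If list(*args) meant "a list containing exactly these arguments", the very
# same calls would have to return ['ab'] and [[1, 2, 3]], and there would be
# no way to distinguish the two intents.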
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


getopt.gnu_getopt: incorrect documentation

2006-01-18 Thread Giovanni Bajo
Hello,

The official documentation for "getopt.gnu_getopt" does not mention the version
number in which it was introduced (so I assumed it was introduced back when
getopt was added). This is wrong, though: I was informed that Python 2.2 does
not have this function, and a quick Google search turned up that its addition
was mentioned in the Python 2.3 release notes.

This should be fixed in the documentation.

Thanks!
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getopt.gnu_getopt: incorrect documentation

2006-01-19 Thread Giovanni Bajo
Steve Holden wrote:

>> The official documentation for "getopt.gnu_getopt" does not mention
>> the version number in which it was introduced (so I assumed it was
>> introduced back when getopt was added). This is wrong, though: I was
>> informed that Python 2.2 does not have this function, and a quick
>> google search turned up that its addition was mentioned in Python
>> 2.3 release notes.
>>
>> This should be fixed in the documentation.
>>
>> Thanks!
>
> Giovanni:
>
> Sadly this post is unlikely to change anything. Can you please email
> the
> address shown in the documentation ([EMAIL PROTECTED]), or by using the
> Python bug tracker?

Sure, I'll use e-mail. My previous attempt at using the Python bug tracker was
a failure (totally ignored after years) so I'm keen on trying some other way.

Thanks!
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Returning a tuple-struct

2006-01-19 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

>>>> time.localtime()
> (2006, 1, 18, 21, 15, 11, 2, 18, 0)
>>>> time.localtime()[3]
> 21
>>>> time.localtime().tm_hour
> 21
>
> Anyway, I guess there's a few of ways to do this.  In the case above,
> it would seem reasonable to override __getitem__() and other things to
> get that result.


I have a generic solution for this (never submitted to the cookbook... should
I?)


import operator

def NamedTuple(*args, **kwargs):
    class named_tuple_class(tuple):
        pass

    values = []
    idx = 0
    for arg in args:
        for name in arg[:-1]:
            setattr(named_tuple_class, name,
                    property(operator.itemgetter(idx)))
        values.append(arg[-1])
        idx += 1
    for name, val in kwargs.iteritems():
        setattr(named_tuple_class, name, property(operator.itemgetter(idx)))
        values.append(val)
        idx += 1

    return named_tuple_class(values)




>>> t = NamedTuple(("x", 12), ("y", 18))
>>> t
(12, 18)
>>> t[0]
12
>>> t.x
12
>>> t[1]
18
>>> t.y
18
>>> t.z
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'named_tuple_class' object has no attribute 'z'


>>> t = NamedTuple(("p", "pos", "position", 12.4))
>>> t
(12.4,)
>>> t.p
12.4
>>> t.pos
12.4
>>> t.position
12.4



>>> t = NamedTuple(("p", "pos", 12.4), length="foo")
>>> t
(12.4, 'foo')
>>> t.p
12.4
>>> t.pos
12.4
>>> t.length
'foo'
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some thougts on cartesian products

2006-01-22 Thread Giovanni Bajo
Christoph Zwerschke wrote:

> Sometimes I was missing such a feature.
> What I expect as the result is the "cartesian product" of the strings.

I've been thinking of it as well. I'd like it for lists too:

>> range(3)**2
[(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)]
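
In the meantime, a small helper gives the same result (the name
cartesian_power is just illustrative):

def cartesian_power(seq, n):
    # all n-long tuples over seq, in lexicographic order
    result = [()]
    for i in range(n):
        result = [t + (x,) for t in result for x in seq]
    return result

print cartesian_power(range(3), 2)
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]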

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Python 2.5b2 Windows binaries

2006-07-17 Thread Giovanni Bajo
Hello,

while testing Python 2.5b2 with my applications, I had to rebuild some extension
modules I needed. It wasn't a very simple or fast task, so I thought I'd share
the result of the effort:

http://www.develer.com/oss/Py25Bins

this page contains the Windows binaries (installers) for the following
packages:

- NumPy 0.98
- Numeric 24.2
- PyOpenGL 2.0.2.01 (with Numeric 24.2)
- Pyrex 0.9.4.1 (with a Python 2.5 compatibility patch posted in its mailing
list)

I plan to update this page later as I build more installers (but don't hold
your breath). Hope this helps everybody!
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Giovanni> In fact, are you absolutely positive that you need so much
> Giovanni> effort to maintain an existing bugtracker installation?
>
> The development group's experience with SF and I think to a lesser
> extent, Roundup in its early days, and more generally with other
> components of the development toolchain (source code control) and
> python.org website maintenance suggests that some human needs to be
> responsible for each key piece of technology.  Maybe when it's mature
> it needs very little manpower to maintain, but a substantial
> investment is required when the technology is first installed.

It is one thing to ask for special help during the transition phase and the
"landing" phase (the first few months). It is another thing to ask for "roughly
6-10 people" to install and maintain a Roundup installation. This is simply
not going to happen realistically, and I find it incredible that the PSF
committee would make such a steep demand. Damn, we don't have "roughly 6-10
people" in charge of reviewing patches or fixing bugs.

I followed the GNATS -> Bugzilla transition closely myself, and a single
person (Daniel Berlin) was able to set up the Bugzilla server on the
gcc.gnu.org machine, convince everybody that a transition was needed (and
believe me, this was hard work), patch it as much as needed to meet the
needs of the incredibly picky GCC developers (who asked for every little
almost-unused-and-obsolete GNATS feature to be replicated in Bugzilla),
and later maintain the installation. It took him approximately one year to
do this, and surely it wasn't full time. Since then, he maintains and
administers the Bugzilla installation on his own, providing upgrades when
needed and a few modifications.

I wonder why the PSF infrastructure committee believes that a group of 6-10
people is needed to "install and maintain" Roundup. Let us also consider
that Roundup's lead developer *was* part of the PSF infrastructure
committee, and he might be willing to help in the transition (just my very
wild guess), and he obviously knows his stuff. Also, given the requirements
for the selection, there is already a running Roundup installation somewhere
(so the whole export -> import pipeline has already been established and
confirmed to work).

My own opinion is that a couple of people can manage the
transition/migration phase to *any* other bug tracking system, and provide
support on the python-dev mailing list. After the whole thing has fully
landed, I'd be really surprised if a single appointed maintainer were not
enough.

If the PSF committee lowers its request to a more realistic amount of
effort, I'm sure we will see many more people willing to help. I think many
people (including myself) would be willing to help here and there with loose
ends, but when faced with an enormous "6-10 people" request they just shut
up and sit in a corner.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
Steve Holden wrote:

> No, I'm not on the infrastructure list, but I know that capable people
> *are*: and you know I am quite capable of donating my time to the
> cause, when I have it to spare (and sometimes even when I don't).
>
> Perhaps what I *should* have written was "Sadly *many* people spend
> too much time bitching and moaning about those that roll their
> sleeves up, and not enough rolling their own sleeves up and pitching
> in".
>
> Sniping from the sidelines is far easier than hard work towards a
> goal.
>
> Kindly note that none of the above remarks apply to you.

The current request is: "please, readers of python-dev, set up a team of 6-10
people to handle Roundup or we'll go with non-free software for bug
tracking".  This is something I cannot cope with, and which I'm *speaking
up* against. Were the request lowered to something more reasonable, I'd be
willing to *act*. I have to speak before acting, so that my acting can
produce a result.

And besides, the only thing I'm really sniping at the PSF about is having *ever*
considered non-FLOSS software. This is something I *really* do
not accept. You have not seen a mail from me with random moaning such as "Trac is
better", "Bugzilla is better", "why was this chosen". I do respect the fact
that the PSF committee did a thorough and correct evaluation: I just
disagree with their initial requirements (and I had not raised this point
before because, believe me if you can, I really thought it was obvious and
implicit).

So, if your remarks apply to me, I think you are misrepresenting my mails
and my goals.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
Martin v. Löwis wrote:

>> Frankly, I don't give a damn about the language the application is
>> coded in
>
> That's probably one of the reasons why you aren't a member of the
> Python Software Foundation. Its mission includes to publicize,
> promote the
> adoption of, and facilitate the ongoing development of Python-related
> technology and educational resources. So the tracker being written in
> Python is quite of importance.

So we have a problem between the PSF and the "PSF infrastructure committee",
since the latter did not list "being written in Python" as a requirement for
the tracker.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
Paul Rubin wrote:

>> You fail to recognize that Python is *already* using a non-free
>> software for bug tracking, as do thousands of other projects.
>
> I don't think that reflects an explicit decision.  SF started out as
> free software and the software became nonfree after people were
> already using it.

Moreover, this looked like a very good chance to have this nuisance sorted out.
Too bad some people don't value free software enough.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
A.M. Kuchling wrote:

>> The surprise people are expressing is because they thought one of the
>> goals of a big open source project would be to avoid reliance on
>> closed tools.
>
> I don't think Python has ever had this as a goal.  Python's license
> lets it be embedded in closed-source products; Windows binaries are
> built using closed-source tools (MS Visual C), and on some platforms
> we use a closed-source system compiler; python.org used to be a
> Solaris box, and now uses SourceForge which runs on top of DB/2...

Notice that there is a difference between "allowing/helping/supporting non-free
software" and "relying on non-free software". The fact that the Python
license allows it to be used in non-free products falls under the former, while
the usage of Jira falls under the latter. Distributing binaries compiled with
closed-source tools is not a problem since people can still compile Python with
free compilers.

> IMHO, using Jira presents risks that are manageable:
> [...]
>
> * A data export is available if we decide to switch. [...]

Out of curiosity, how is this obtained? Is there any plan to take a daily export
or so?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-04 Thread Giovanni Bajo
David Goodger wrote:

> Go back to the original announcement:
>
> """
> After evaluating the trackers on several points (issue creation,
> querying, etc.), we reached a tie between JIRA and Roundup in terms of
> pure tracker features.
> """
>
> JIRA gets a leg up because of the hosting and administration also
> being offered. But...
>
> """
> If enough people step forward we will notify python-dev that Roundup
> should be considered the recommendation of the committee and
> graciously
> turn down Atlassian's offer.
> """
>
> That is a perfectly reasonable offer. Put up or shut up.

You're cherry picking your quotes:

"""
In order for Roundup to be considered equivalent in terms of an overall
tracker package there needs to be a sufficient number of volunteer admins
(roughly 6 - 10 people) who can help set up and maintain the Roundup
installation.
"""

This is *NOT* a perfectly reasonable offer, because you do not see 6-10 people
stepping up at the same time for almost *anything* in the open source world.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-05 Thread Giovanni Bajo
Martin v. Löwis wrote:

>> In fact, are you absolutely positive that you need so much effort to
>> maintain an existing bugtracker installation? I know for sure that
>> GCC's Bugzilla installation is pretty much on its own; Daniel Berlin
>> does some maintainance every once in a while (upgrading when new
>> versions are out, applying or writing some patches for most
>> requested features in the community, or sutff like that), but it's
>> surely not his job, not even part-time.
>
> Daniel Berlin has put a tremendous amount of work into it. I know,
> because I set up the first bug tracker for gcc (using GNATS), and
> have been followed the several years of pondering fairly closely.
> It was quite some work to set up GNATS, and it was even more work
> to setup bugzilla.
>
> For Python, we don't have any person similar to Daniel Berlin
> (actually, we have several who *could* have done similar work,
>  but none that ever volunteered to do it). Don't underestimate
> the work of somebody else.

Martin, I am by no means underestimating Daniel's work. I am just noting that
the spare-time work he did is, by definition, much less than the "6-10
people" that the PSF infrastructure committee is calling for. I would like this
requirement to be officially reduced to "2-3 people", since really not much
more than that is required to set up a bug tracker installation, and no more
than one person is needed to maintain it afterwards. *IF* there are more
volunteers, that's good, they can offload the maintenance work from a single
maintainer; but I think it's unfair to impose such a high *requirement*.

We do not have 6-10 people maintaining SVN after all, even if you wish we had :)
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python to use a non open source bug tracker?

2006-10-06 Thread Giovanni Bajo
Martin v. Löwis wrote:

> That, in principle, could happen to any other free software as well.
> What is critical here is that SF *hosted* the installation. If we
> would use a tracker that is free software, yet hosted it elsewhere,
> the same thing could happen: the hoster could make modifications to
> it which
> are non-free. Not even the GPL could protect from this case: the
> hoster would be required to publish source only if he publishes
> binaries, but he wouldn't publish any binaries, so he wouldn't need
> to release the source changes, either.
>
> Also, even if it the software is open source and unmodified, there
> still wouldn't be a guarantee that you can get the data out of it
> if you want to. You *only* get the advantages of free software if
> you also run it yourself. Unfortunately, there is a significant
> cost associated with running the software yourself.

You have many good points here, Martin. Let me note, though, that people
providing hosting do not necessarily want to maintain the software by themselves
alone: some Python developers could still have admin access to the boxes so as to
double-check whether weird things are being done behind the curtain. I think the
uncertainty arises only if you totally trust someone else to do the
job for you.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python to use a non open source bug tracker?

2006-10-06 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Martin> The regular admin tasks likely include stuff like this:
> Martin> - the system is unavailable, bring it back to work
> Martin>   This is really the worst case, and a short response time
> Martin>   is the major factor in how users perceive the service
> Martin> - the system is responding very slowly
>
> To all those people who have been moaning about needing 6-10 people to
> administer the system, in my opinion these are the most important
> reasons to have more than one person available to help.  Python isn't
> only used in the USofA.  It has been very helpful to have
> administrators scattered around the globe who were awake and alert to
> handle problems with python.org when folks in the US were asleep.  Of
> course, spreading the load among several people helps with the other
> tasks as well.
>
> As Martin pointed out in an earlier post, with only one person
> actively administering Subversion (Martin), new requests for access
> had to wait if he was away for an extended period of time.

This is true of many open source projects. I don't dispute that having 6-10
people to administer Roundup would be good. I dispute that it is the
minimum requirement to make a Roundup installation acceptable for Python
development.

Are bug-tracker configuration issues so critical that having to wait 48-72hrs
to have them fixed is absolutely unacceptable for Python development? It looks
like an exaggeration. People easily cope with 2-3 days of SVN freezing
when they are politically (rather than technically) stopped from committing to
SVN. I guess they can wait 48 hrs to be able to close that bug, or open that
other one, or run that query.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-07 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

> Giovanni> Are bug-tracker configuration issues so critical that having
> Giovanni> to wait 48-72hrs to have them fixed is absolutely unacceptable
> Giovanni> for Python development?
>
> Yes, I think that would put a crimp in things.  The downtimes we see
> for the SourceForge tracker tend to be of much shorter duration than
> that (typically a few hours) and cause usually minor problems when
> they occur.  For the tracker to be down for 2-3 days would make the

I was actually thinking of 48-72hrs to do regular admin work like installing
the latest security patch or activating a new account.

> developers temporarily blind to all outstanding bug reports and
> patches during that time and prevent non-developers from submitting
> new bugs, patches and comments.  Those people might well forget about
> their desired submission altogether and not return to submit them
> once the tracker was back up.

I understand your concerns, but I have to remind you that most bug reports
submitted by users go totally ignored for several years, or rather forever. I
do not have exact statistics for this, but I'm confident that at least 80%
of the RFEs or patches filed every week are totally ignored, and probably at
least 50% of the bugs too. I think there is a much bigger problem here wrt QOS.

So, you might prefer 6-10 people who can activate a new tracker account faster
than light. I'd rather have a 3-day delay on administrative issues because our
single administrator is sleeping or whatever, and have 2-3 people doing regular
bug processing instead.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: n-body problem at shootout.alioth.debian.org

2006-10-07 Thread Giovanni Bajo
Peter Maas wrote:

> I have noticed that in the language shootout at
> shootout.alioth.debian.org the Python program for the n-body problem
> is about 50% slower than the Perl program. This is an unusual big
> difference. I tried to make the Python program faster but without
> success. Has anybody an explanation for the difference? It's pure
> math so I expected Perl and Python to have about the same speed.

Did you try using an old-style class instead of a new-style class?
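
To be explicit about what I mean, the only difference is in the class
statement (class names below are just illustrative):

class BodyClassic:          # old-style (classic) class
    pass

class BodyNew(object):      # new-style class: the only change is deriving from object
    pass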
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-07 Thread Giovanni Bajo
Steve Holden wrote:

>> I understand your concerns, but I have to remember you that most bug
>> reports submitted by users go totally ignored for several years, or,
>> better, forever. I do not have a correct statistic for this, but I'm
>> confident that at least 80% of the RFE or patches filed every week
>> is totally ignored, and probably at least 50% of the bugs too. I
>> think there is a much bigger problem here wrt QOS.
>>
>> So, you might prefer 6-10 people to activate a new tracker account
>> faster than light. I'd rather have 3-days delay in administrative
>> issues because our single administrator is sleeping or whatever, and
>> then have 2-3 people doing regular bug processing.
>
> ... and if wishes were horses then beggars would ride.

Are you ever going to try and make a point which is not "you are not entitled
to have opinions because you do not act"? Your sarcasm is getting annoying. And
since I'm not trolling nor flaming, I think I deserve a little bit more of
respect.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


New-style classes slower than old-style classes? (Was: n-body problem at shootout.alioth.debian.org)

2006-10-07 Thread Giovanni Bajo
Peter Maas wrote:

>> Did you try using an old-style class instead of a new-style class?
>
> The original program has an old style class, changing it to a new
> style class increases run time by 25% (version is 2.4.3 btw).

Ah yes. Years ago when I first saw this test it was still using new-style
classes.

Anyway, this is a bug on its own I believe. I don't think new-style classes are
meant to be 25% slower than old-style classes. Can any guru clarify this?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-07 Thread Giovanni Bajo
Aahz wrote:

>> Are you ever going to try and make a point which is not "you are not
>> entitled to have opinions because you do not act"? Your sarcasm is
>> getting annoying. And since I'm not trolling nor flaming, I think I
>> deserve a little bit more of respect.
>
> IMO, regardless of whether you are trolling or flaming, you are
> certainly being disrespectful.  Why should we treat you with any more
> respect than you give others?

Disrespectful? Because I say that I don't agree with a specific requirement,
and try to discuss and understand the rationale behind it?
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python to use a non open source bug tracker?

2006-10-07 Thread Giovanni Bajo
Tim Peters wrote:

> None are /totally ignored/ -- indeed, I at least see every one as it
> comes in.  You might want to change your claim to say that no work
> obviously visible to you is being done on them.  That would be better.

Please notice that my mail was in the context of "user satisfaction with the
bug tracker". I was claiming that it is useless to provide a blazingly fast
support turnaround for technical issues when there is "no visible work" being
done on most of the bugs that are submitted. And, in turn, this was in the
context of hiring 6-10 people as the only acceptable minimum to maintain and
administer a bug tracker. I was claiming that, if such a group were ever
formed, its time would be better spent on bug triage than on staying on call
all day long to quick-fix any server breakage within minutes.

> These are the actual stats as of a few minutes ago:
>
> Bugs:  938 open of 7169 total ~= 87% closed
> Patches:  429 open of 3846 total ~= 89% closed
> Feature Requests:  240 open of 479 total ~= 50% closed

I claimed different numbers based on personal perception; I stand corrected,
and I apologize for this. I could just say that my perception was wrong, but I
guess there's something in it that could be analyzed further. For instance, I
would be really curious to see what these figures look like if you consider
only bugs/patches/RFEs *NOT* submitted by Python committers. I don't think
SourceForge can ever compute this number for us, so I'll wait to ask Roundup
about it (or, uh, Jira...).
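
To be concrete about the kind of figure I mean: assuming a purely hypothetical
CSV export of the tracker (the "type"/"status"/"submitter" columns and the
committer list below are invented placeholders -- no such export exists today),
the breakdown would be trivial to compute once the data can be queried:

import csv

COMMITTERS = set(["committer1", "committer2"])   # placeholder account names

def figures_excluding_committers(export_file):
    # type -> [open items, total items], counting only non-committer reports
    counts = {}
    for row in csv.DictReader(open(export_file)):
        if row["submitter"] in COMMITTERS:
            continue
        c = counts.setdefault(row["type"], [0, 0])
        c[1] += 1
        if row["status"].lower() == "open":
            c[0] += 1
    for kind in sorted(counts):
        nopen, ntotal = counts[kind]
        print "%s: %d open of %d total ~= %d%% closed" % (
            kind, nopen, ntotal, 100 * (ntotal - nopen) / max(ntotal, 1))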

> There's an easy way to improve these percentages dramatically,
> although they're not bad as-is:  run thru them and close every one
> that isn't entirely clear.  For example, reject every feature request,
> close every patch that changes visible behavior not /clearly/ fixing a
> bona fide bug, and close every "bug report" that's really a feature
> request or random "but Perl/Ruby/PHP doesn't do it this way" complaint
> in disguise.

Either directly close anything that is nonsense, or ask the poster for more
feedback until the bug/patch/RFE is clear enough to be handled, or until 3
months have passed and the bug is closed for "no further feedback from the
poster". If that would dramatically reduce the number of open bugs, then yes,
Python really needs someone to do bug triaging.

> For example, the oldest patch open today is a speculative
> implementation of rational numbers for Python.  This is really a
> feature request in disguise, and has very little chance-- but not /no/
> chance --of ever being accepted.  The oldest bug open today is from 6
> years ago, and looks like an easy-to-answer /question/ about the
> semantics of regular expressions in Python 1.6.  I could take time to
> close that one now, but is that a /good/  use of time?  Yes, but, at
> the moment, even finishing this reply seems to be a /better/ use of my
> time -- and after that, I'm going to get something to eat ;-)

It might not be a good use of your time at all, since you are a developer. But
a database with 938 open bugs, most of which are
incomplete/nonsense/unconfirmed, is much less useful than it could be. It also
raises the bar for new developers: it's much harder to just "pick one" and fix
it. I know because I have tried a few times, and after half an hour I couldn't
find any bug that was interesting to me and "complete" enough to work on. I
also noticed that most bugs are totally uncommented, as if nobody cared at all.
This is where my thought that Python is missing bug triaging started.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find interface associated with default route?

2006-11-12 Thread Giovanni Bajo
Neal Becker wrote:

> A quick strace reveals that 'route' just reads /proc/net/route, so:
>
> import string
>
> def get_default_if():
>     f = open ('/proc/net/route', 'r')
>     for line in f:
>         words = string.split (line)
>         dest = words[1]
>         try:
>             if (int (dest) == 0):
>                 interf = words[0]
>                 break
>         except ValueError:
>             pass
>     return interf

And if you acknowledge that /proc/net/route is a CSV file, you can be more
terse and clear:

>>> import csv
>>> def get_default_if():
...     f = open('/proc/net/route')
...     for i in csv.DictReader(f, delimiter="\t"):
...         if long(i['Destination'], 16) == 0:
...             return i['Iface']
...     return None
...
>>>
>>> get_default_if()
'ppp0'
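
And if you also want the gateway of the default route, the same DictReader
approach works. The only wrinkle -- an assumption worth double-checking on your
platform -- is that /proc/net/route stores addresses as 8 hex digits in host
byte order, so on a little-endian machine they have to be repacked before
inet_ntoa. A little sketch, little-endian hosts only:

import csv, socket, struct

def get_default_route():
    # Returns (interface, gateway) for the default route, or None if there
    # is no default route.  The "<L" repacking assumes a little-endian host
    # (x86 and friends), where the hex fields are the 32-bit addresses in
    # host byte order.
    for row in csv.DictReader(open('/proc/net/route'), delimiter="\t"):
        if long(row['Destination'], 16) == 0:
            gw = socket.inet_ntoa(struct.pack("<L", long(row['Gateway'], 16)))
            return row['Iface'], gw
    return None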

-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


genexp performance problem?

2006-05-30 Thread Giovanni Bajo
Hello,

I found this strange:

python -mtimeit "sum(int(L) for L in xrange(3000))"
100 loops, best of 3: 5.04 msec per loop

python -mtimeit "import itertools; sum(itertools.imap(int, xrange(3000)))"
100 loops, best of 3: 3.6 msec per loop

I thought the two constructs would achieve about the same speed.
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: genexp performance problem?

2006-05-31 Thread Giovanni Bajo
Fredrik Lundh wrote:

>> I found this strange:
>>
>> python -mtimeit "sum(int(L) for L in xrange(3000))"
>> 100 loops, best of 3: 5.04 msec per loop
>>
>> python -mtimeit "import itertools; sum(itertools.imap(int,
>> xrange(3000)))" 100 loops, best of 3: 3.6 msec per loop
>>
>> I thought the two constructs would achieve about the same speed.
>
> hint: how many times does the interpreter have to look up the names
> "int" and "L" in the two examples?

Ah right, thanks!
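
To see how much of the gap really is the per-iteration name lookup, here is a
small follow-up sketch of mine (Python 2.x; timings will obviously vary by
machine). It times the two original forms plus a variant where "int" is
pre-bound to a function-local name, so the generator expression no longer has
to go through globals and builtins on every iteration:

import timeit

TESTS = [
    ("genexp, global int",   "sum(int(L) for L in xrange(3000))", "pass"),
    ("imap, int bound once", "sum(imap(int, xrange(3000)))",
     "from itertools import imap"),
    ("genexp, local int",    "f()",
     "def f(int=int): return sum(int(L) for L in xrange(3000))"),
]

for name, stmt, setup in TESTS:
    # timeit() returns total seconds for 100 runs; *10 converts to msec/loop.
    msec = timeit.Timer(stmt, setup).timeit(number=100) * 10
    print "%-22s %.2f msec per loop" % (name, msec)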
-- 
Giovanni Bajo


-- 
http://mail.python.org/mailman/listinfo/python-list

