Re: Open Source: you're doing it wrong - the Pyjamas hijack

2012-05-15 Thread Pascal Chambon

Hi,

Cool down, people. If anything gave FOSS a bad reputation, it was the 
old pyjamas website (all broken, because "the wheel must be reinvented 
here"), and most of all the "terror management" that occurred on its 
mailing list.
I had always considered open source a benevolent state of mind, until 
I got, there, the evidence that it could also be, for some people, an 
irrational and harmful cult (did you know GitHub are freaking 
evildoers?).


Blatantly, the pyjs ownership change turned out to be an awkward 
operation (as the reactions on this ML show), but a fork could also 
have very harmfully "split" pyjs-interested people, so all in all I 
don't think there was a perfect solution - dictatorships never fall 
harmlessly.


The egos of some might have been hurt, the legal sense of others might 
have been questioned, but believe me, all this fuss is pitiful compared 
to the real harm done numerous times to willing newcomers on pyjs' old 
ML, when they weren't aware of the heavy dogmas lying around.


A demo sample (I quote it each time the subject arises, sorry for 
duplicates):


| Please get this absolutely clear in your head: that you do not
| "understand" my reasoning is completely and utterly irrelevant.
| i understand *your* reasoning; i'm the one making the decisions,
| that's my role to understand the pros and cons.  i make a decision:
| that's the end of it.
| You present reasoning to me: i weight it up, against the other
| reasoning, and i make a decision.  you don't have to understand that
| decision, you do not have to like that decision, you do not have to
| accept that decision.


Long live pyjs,
++
PKL



On 08/05/2012 07:37, alex23 wrote:

On May 8, 1:54 pm, Steven D'Aprano  wrote:

Seriously, this was a remarkably ham-fisted and foolish way to "resolve"
a dispute over the direction of an open source project. That's the sort
of thing that gives open source a bad reputation.

The arrogance and sense of entitlement was so thick you could choke on
it. Here's a sampling from the circle jerk of self-justification that
flooded my inbox over the weekend:

"i did not need to consult Luke, nor would that have be productive"

No, it's generally _not_ productive to ask someone if you can steal
their project from them.

"i have retired Luke of the management duties, particularly, *above*
the source"

Who is this C Anthony Risinger asshole and in what way did he _hire_
the lead developer?

"What I have wondered is, what are effects of having the project
hostage to the whims of an individuals often illogically radical
software libre beliefs which are absolutely not up for discussion at
all with anyone."

What I'm wondering is: how is the new set up any different? Why were
Luke Leighton's philosophies/"whims" any more right or wrong than
those held by the new Gang of Dicks?

"Further more, the reason I think it's a bad idea to have this drawn
out discussion is that pretty much the main reason for this fork is
because of Luke leadership and project management decisions and
actions. To have discussions of why the fork was done would invariably
lead to quite a bit of personal attacks and petty arguments."

Apparently it's nicer to steal someone's work than be mean to them.

"I agree, Lex - this is all about moving on.  This is a software
project, not a cult of personality."

Because recognising the effort of the lead developer is cult-like.

"My only quibble is with the term "fork."  A fork is created when you
disagree with the technical direction of a project.  That's not the
issue here.  This is a reassignment of the project administration only
- a shuffling of responsibility among *current leaders* of the
community.  There is no "divine right of kings" here."

My quibble is over the term "fork" too, as this is outright theft. I
don't remember the community acknowledging _any other leadership_ over
Luke Leighton's.

"I suspect Luke will be busy with other projects and not do much more
for Pyjamas/pyjs, Luke correct me if you see this and I am wrong."

How about letting the man make his own fucking decisions?

"All of you spamming the list with your unsubscribe attempts: Anthony
mentioned in a previous email that he's using mailman now"

Apparently it's the responsibility of the person who was subscribed
without their permission to find out the correct mechanism for
unsubscribing from that list.

"apparantly a bunch of people were marked as "POSTING" in the DB, but
not receiving mail (?)"

Oh I see, the sudden rush of email I received was due to an error in
the data they stole...

"Nobody wins if we spend any amount of time debating the details of
this transition, what's done is done."

Truly the jus

Re: GIL in alternative implementations

2011-05-30 Thread Pascal Chambon

Thanks for the details on IronPython's implementation B-)

Hopefully PyPy will eventually get rid of its own GIL, since it 
doesn't do refcounting either.
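As a rough Python sketch of the scheme Dino describes below (lock-free readers, serialized writers) — purely illustrative, not IronPython's actual code, and unnecessary in CPython where the GIL already makes single dict operations atomic; the class name is my own:

```python
import threading

class WriterLockedDict(object):
    """Illustrative sketch of fine-grained locking: reads go straight
    to the underlying dict, writes serialize on a lock."""

    def __init__(self):
        self._data = {}
        self._write_lock = threading.Lock()

    def __getitem__(self, key):
        # Lock-free read path: common lookups take no lock.
        return self._data[key]

    def __setitem__(self, key, value):
        # Writers take the lock so concurrent mutations don't interleave.
        with self._write_lock:
            self._data[key] = value

d = WriterLockedDict()
d["x"] = 1
print(d["x"])  # -> 1
```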


Regards,
Pascal

On 28/05/2011 00:52, Dino Viehland wrote:


In IronPython we have fine grained locking on our mutable data 
structures.  In particular we have a custom dictionary type which is 
designed to allow lock-free readers on common operations while writers 
take a lock.  Our list implementation is similar but in some ways 
that's trickier to pull off due to features like slicing so if I 
recall correctly we only have lock-free reads when accessing a single 
element.


For .NET data structures they follow the .NET convention which is up 
to the data structure.  So if you wanted to get every last bit of 
performance out of your app you could handle thread safety yourself 
and switch to using the .NET dictionary or list types (although 
they're a lot less friendly to Python developers).


Because of these locks, on micro-benchmarks that involve simple 
list/dict manipulations you do see noticeably worse performance in 
IronPython vs. CPython. 
http://ironpython.codeplex.com/wikipage?title=IP27A1VsCPy27Perf&referringTitle=IronPython%20Performance 
- See SimpleListManipulation and SimpleDictManipulation as the core 
examples here.  Also, CPython's dictionary is so heavily tuned it's 
hard to beat anyway, but this is a big factor.


Finally one of the big differences with both Jython and IronPython is 
that we have good garbage collectors which don't rely upon reference 
counting.  So one area where CPython gains from having a GIL is a 
non-issue for us as we don't need to protect ref counts or use 
interlocked operations for ref counting.


*From:* python-list-bounces+dinov=exchange.microsoft@python.org 
[mailto:python-list-bounces+dinov=exchange.microsoft@python.org] 
*On Behalf Of *Pascal Chambon

*Sent:* Friday, May 27, 2011 2:22 PM
*To:* python-list@python.org >> Python List
*Subject:* GIL in alternative implementations

Hello everyone,

I've already read quite a bit about the reasons for the GIL in 
CPython; to summarize: finer-grained locking, allowing real 
concurrency in multithreaded applications, would bring too much 
overhead for single-threaded Python applications.

However, I've also heard that other Python implementations 
(IronPython, Jython...) have no GIL, and yet nobody blames them for 
performance penalties caused by that lack (I'm especially thinking of 
IronPython, whose performance compares quite well to CPython's).


So I'd like to know: how do these other implementations handle 
concurrency for their primitive types, and prevent them from getting 
corrupted in multithreaded programs (if they do)? I'm not only 
thinking about Python types, but also about the primitive containers 
and types used in the .NET and Java VMs, which aren't atomic elements 
at the assembly level either.

Do these VMs have GIL-like limitations that aren't spoken about? Do 
they work so differently from the CPython VM that the question is not 
relevant? Or do people consider that such programs are always 
multithreaded applications, and so accept performance penalties they 
wouldn't allow in their CPython scripts?


I thank you in advance for your insights on these questions.

Regards,

Pkl

[[ Important Note: this is a serious question, trolls and emotionally 
disturbed persons had better go on their way. ]]




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python interpreter speed

2009-04-19 Thread Pascal Chambon

Hello

I'm no expert in low-level languages, but I'd say that Python and Java 
are "compiled" to bytecodes of a similar level. The difference lies in 
the information contained in those bytecodes: Java is statically 
typed, so attribute access and other basic operations are rather 
quick, requiring few levels of indirection. A Python attribute lookup, 
on the other hand, can involve a large number of operations (browsing 
the inheritance tree, querying __getattribute__, __get__ and all those 
magic methods...). It's in this "magic" of Python that we lose 
performance, I think - that's both the power and the drawback of this 
(awesome) language.
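A tiny illustration of that dynamism — every instance attribute read can run Python-level hook code, which is exactly the kind of indirection statically typed bytecode avoids:

```python
class Traced(object):
    """Every attribute read on instances goes through this hook."""
    def __getattribute__(self, name):
        print("looking up %s" % name)
        return object.__getattribute__(self, name)

t = Traced()
t.x = 5
print(t.x)  # prints "looking up x", then 5
```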


Regards,
Pascal

PS: I guess core Python developers will have much more accurate things 
to say about it ^^


Ryniek90 wrote:


Hi.

The standard Python interpreter is written in the C language. C code 
is compiled into machine code (the fastest code). Python code is 
compiled into byte-code, which is also some sort of fast machine code. 
So why is the Python interpreter slower than the Java VM? Being 
written in C and compiled into machine code, it should be as fast as 
C/asm code.

What's wrong with that?

Greets and thank you.







Re: The Python standard library and PEP8

2009-04-19 Thread Pascal Chambon
I agree that there are still some styling inconsistencies in the 
Python stdlib, but I'm not advocating a cleanup, because I've always 
found camelCase much prettier than those multi_underscore_methods :p

Concerning a length property on strings, isn't the __len__() method 
sufficient?
I know they're unusual in OOP languages, but builtins like len() and 
iter() might be better anyway, since they deal with some magical 
problems (cf. "special method lookup" in the Python documentation).
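For instance, len() simply delegates to the type's __len__(), so any class can opt in:

```python
class Buffer(object):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        # len() calls this special method, looked up on the type.
        return len(self.data)

print(len(Buffer("abc")))  # -> 3
```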



Regards,
Pascal

Emmanuel Surleau wrote:

Hi there,

Exploring the Python standard library, I was surprised to see that several 
packages (ConfigParser, logging...) use mixed case for methods all over the 
place. I assume that they were written back when the Python styling 
guidelines were not well-defined.


Given that it's rather irritating (not to mention violating the principle of 
least surprise) to have this inconsistency, wouldn't it make sense to clean 
up the API by marking old-style, mixed-case methods as deprecated (but 
keep them around anyway) and add equivalent methods following the 
lowercase_with_underscores convention?


On an unrelated note, it would be *really* nice to have a length property on 
strings. Even Java has that!


Cheers,

Emm


  









Re: print as a function in 2.5 ?

2009-04-19 Thread Pascal Chambon

Hello,

I had found an article on this some months ago, but I can't remember 
where exactly...

If you don't want hassle, I'd just advise you to create a function 
with a properly specific name (like "exprint()"), and to make it 
closely mimic the signature and behaviour of Py3k's print() function.
That way, if one day you move to a later version, a simple mass text 
replacement over all your files will do the migration in an instant B-)
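A minimal sketch of such an exprint() (the name and the show_traceback keyword are my own inventions), mimicking print()'s sep/end/file keyword arguments:

```python
import sys
import traceback

def exprint(*args, **kwargs):
    """print()-alike with an optional traceback dump (sketch)."""
    sep = kwargs.pop("sep", " ")
    end = kwargs.pop("end", "\n")
    out = kwargs.pop("file", sys.stdout)
    show_traceback = kwargs.pop("show_traceback", False)  # the extension
    if kwargs:
        raise TypeError("unexpected keyword arguments: %r" % kwargs)
    out.write(sep.join(str(a) for a in args) + end)
    if show_traceback:
        traceback.print_stack(file=out)

exprint("hello", "world", sep="-")  # -> hello-world
```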


Regards,
Pascal

Stef Mientki wrote:


hello,

For several reasons I still use Python version 2.5.
I understand that the print-statement will be replaced in Python 
version 3.0.


At the moment I want to extend the print statement with an optional 
traceback.

So I've 2 options:
1- make a new function, like "eprint ()", where "e" stands for 
extended print

2- make a function "print()" that has the extended features

Now I guess that one of the reasons to change print from a statement 
to a function,

is the option to override and extend it.
If that's so, choice 2 would be the best choice.
Is that assumption correct ?

Suppose the second choice is the best,
I can now create a function "print",
and have the best of 2 worlds, get my extension and being prepared for 
the future.


def print(*args):
    for arg in args:
        print arg,
    print(' Print Traceback ')
    # do extended printer actions

Now this doesn't seem to be allowed,
nor is there an import from __future__ :-(

What's the best solution (other than moving to 2.6 or up)?

thanks,
Stef Mientki







Re: How can I get path/name of the softlink to my python script when executing it

2009-04-19 Thread Pascal Chambon

Hello

I fear that in this case the whole symlink indirection happens only in 
the shell, and that the final command is executed as if it were called 
directly on the real file...

Have you tried typing "python ./waf", to see how the resolution occurs 
in that case?
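For reference, a sketch of the check I have in mind (whether sys.argv[0] keeps the link name or the target depends on how the script is launched, so this is just illustrative):

```python
import os
import sys

# The name the script was invoked under (may still be the symlink)...
invoked_as = os.path.basename(sys.argv[0])
# ...versus the real file after resolving any symlinks.
real_file = os.path.realpath(sys.argv[0])
print(invoked_as)
print(real_file)
```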


Regards,
Pascal



Saravanan Shanmugham (sarvi) wrote:
 
Hi,

I am writing a script, say "wabexec", in Python.
I will then have softlinks like ls, waf, hello, etc. in the same 
directory, pointing to wabexec.
 
When someone executes ./waf or ./hello and wabexec gets invoked 
because of the softlink, how do I find out from within wabexec how it 
was invoked? Was it through waf or hello, etc.?
 
Both __file__ and sys.argv[0] seem to contain wabexec, not the name of 
the softlink.
 
Any ideas?
 
Sarvi







Re: iterate to make multiple variables?

2009-04-20 Thread Pascal Chambon

Mark Tolonen wrote:



"Tairic"  wrote in message 
news:95ea7bdf-2ae8-4e5e-a613-37169bb36...@w35g2000prg.googlegroups.com...

Hi, I'm somewhat new to programming and especially to python. Today I
was attempting to make a sudoku-solver, and I wanted to put numbers
into sets call box1, box2, ... box9, so that I could check new values
against the boxes

I ended up declaring the boxes like this
box1 = set([])
box2 = set([])
..
..
box9 = set([])


Is there a way for me instead to generate these variables (box1 to
box9) as empty sets instead of just writing them all out? Some way to
iterate and generate them?


This will generate a list of nine unique, empty sets, box[0] through 
box[8]:


   box = [set() for i in range(9)]

-Mark


Yep, such automation of local variables is not really intended (even 
though you can play with eval(), and end up with unreadable code); 
iterables are made for that B-)



Re: Interest in generational GC for Python

2009-04-20 Thread Pascal Chambon

Martin v. Löwis wrote:

Is there any interest in generational garbage collection in Python these days ?

Anyone working on it ?



This is the time machine at work: the garbage collector in CPython *is*
generational (with three generations).

Regards,
Martin


  
I'm lost there. Isn't CPython using reference counting (i.e. updating 
the object's refcount at each reference creation/deletion, and 
deleting objects as soon as no references to them remain)? It seemed 
to me that generational GC only applied to periodic GCs, like tracing 
garbage collectors. Or is CPython using a mix of both technologies (to 
handle reference cycles, for example)?
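It is indeed a mix: immediate refcounting, plus the generational cycle collector that the gc module exposes:

```python
import gc

# Three generations: the tuple below holds one collection threshold
# per generation; objects surviving a collection age into the next.
print(gc.get_threshold())
print(gc.get_count())  # objects currently tracked per generation
```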



Regards,
pascal


Re: and [True,True] --> [True, True]?????

2009-04-20 Thread Pascal Chambon

AggieDan04 wrote:

On Apr 20, 2:03 am, bdb112  wrote:
  

Is there any obvious reason why
[False,True] and [True,True]
gives [True, True]

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit
(Intel)]



X and Y == (Y if X else X)
X or Y == (X if X else Y)

[False, True] is true, so the and operator returns the second argument.

  
My $0.02: people used to simulate the ternary operator (the 
"bool ? res1 : res2" of C++) with combinations like "bool and res1 or 
res2", but actually it doesn't work in all cases (if res1 evaluates to 
false, res2 will ALWAYS be returned), so we'd better use "res1 if bool 
else res2", Python's dedicated operator B-)
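A quick demonstration of the pitfall:

```python
cond = True
res1, res2 = 0, 99  # res1 is falsy, which breaks the old idiom

broken = cond and res1 or res2
print(broken)   # -> 99, wrong: we wanted res1

correct = res1 if cond else res2
print(correct)  # -> 0, as intended
```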



Regards,
Pascal



Daemonic processes in multiprocessing

2009-04-25 Thread Pascal Chambon


Hello everyone


I've just read the doc of the (awesome) "multiprocessing" module, and 
there are some little things I don't understand, concerning daemon 
processes (see quotes below).


When a Python process exits, the page says it attempts to join all its 
children. Is this just a design choice, or are there constraints behind 
it? Because normally, when a parent process exits, its child gets 
adopted by init, and that can be useful for creating daemons, can't it?

Concerning daemon processes, precisely, the manual states that they are 
all terminated when their parent process exits. But isn't that contrary 
to the concept of daemons, which are supposed to have become 
independent from their parent?

And I don't understand how "the initial value (of the "daemon" 
attribute) is inherited from the creating process", since "daemonic 
processes are not allowed to create child processes". Isn't that the 
same as saying that "daemon" is always false by default?
And finally, why can't daemonic processes have children? If these 
children get "orphaned" when the daemonic process gets terminated (by 
its parent), they'll simply get adopted by init, won't they?


Thanks a lot for helping me get rid of my confusion,
regards,
Pascal


=QUOTES==
daemon

   The process's daemon flag, a Boolean value. This must be set before
   start() is called.

   The initial value is inherited from the creating process.

   When a process exits, it attempts to terminate all of its daemonic
   child processes.

   Note that a daemonic process is not allowed to create child
   processes. Otherwise a daemonic process would leave its children
   orphaned if it gets terminated when its parent process exits.

   --

Similarly, if the child process is non-daemonic then the parent
process may hang on exit when it tries to join all its non-daemonic children.
--
Remember also that non-daemonic
processes will automatically be joined.




Re: Marshal vs pickle...

2009-04-25 Thread Pascal Chambon

Hello

I've never run into a discussion on pickle vs marshal, but clearly, if 
the point is to exchange data between different clients, or to store 
it, pickle is the preferred solution, as marshal is really too 
low-level and its format too unstable across Python versions.
The problem with pickle, on the contrary, is that it transmits too 
much information, including references to executable code, so 
unpickling untrusted data is a security risk.

If you only need to transmit data - objects (without their methods), 
arrays, dicts etc. - over networks or time, I'd advise a dedicated 
format like JSON or XML, for which Python has easy serializers.
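For instance, with the stdlib json module (available since Python 2.6), plain data round-trips without executing anything (the sample dict is my own):

```python
import json

data = {"name": "pyjs", "tags": ["python", "js"], "count": 3}
text = json.dumps(data)          # serialize to a plain string
print(json.loads(text) == data)  # -> True
```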


Regards,
Pascal



Lawson English wrote:


Marshalling is only briefly mentioned in most python books I have, and 
"pickling" is declared teh preferred method for serialization.


I read somewhere that Marshalling is version-dependent while pickling 
is not, but can't find that reference. OTOH, pickling can lead to 
loading of malicious code (I understand) while marshalling only 
handles basic Python types?



Could anyone point me to a reasonable discussion of the pros and cons 
of each method for serialization?



Thanks.


Lawson







Re: import and package confusion

2009-04-29 Thread Pascal Chambon

Actually, the parentheses mean "calling" the object.

"Callable" objects can be of different types:
- functions - in which case they get executed
- classes (or metaclasses) - in which case they get "instantiated" 
(with all the protocol: __new__(), __init__()...)
- other objects - in which case they must contain a __call__ method, 
which will be executed when we use the parenthesis operator on the object.

But a module is none of these: when you write "mymodule()", Python 
doesn't have a clue what it should execute/instantiate.
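The error message makes this concrete — a sketch using a throwaway module object rather than a real package:

```python
import types

mod = types.ModuleType("mymodule")  # stand-in for an imported module
try:
    mod()  # modules have no __call__, so this cannot work
except TypeError as exc:
    print(exc)  # typically: 'module' object is not callable
```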


Modules aren't callable, but they can be "imported", simply by doing 
"import mymodule", or "import mypackage.mymodule".
Note that with packages, the __all__ attribute (the list defined in 
the package's __init__.py) only controls what "from mypackage import *" 
pulls in; a plain "import mypackage.mymodule" works without it.

I hope I haven't made you more confused with these quick explanations :p


Regards,
pascal

Dale Amon wrote:

I am trying to get to the heart of what it is I am
missing. Is it the case that if you have a module C in a 
package A:


A.C

that there is no way to load it such that you can use:

x = A.C()

in your code? This is just a simpler case of what I'm
trying to do now, which has a module C in a sub-package
to be imported:

A.B.C

ie with files:
mydir/A/B/C.py
mydir/mymain.py

and executed in mymain.py as:

x = A.B.C()

I may still chose to do it the way you suggested, but I
would still like to understand why this does not work.

  







Re: Generator oddity

2009-05-01 Thread Pascal Chambon

ops...@batnet.com wrote:

I'm a little baffled by the inconsistency here. Anyone have any
explanations?

  

def gen():


...   yield 'a'
...   yield 'b'
...   yield 'c'
...
  

[c1 + c2 for c1 in gen() for c2 in gen()]


['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']

  

list(c1 + c2 for c1 in gen() for c2 in gen())


['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']

  

it1 = gen()
it2 = gen()
list(c1 + c2 for c1 in it1 for c2 in it2)


['aa', 'ab', 'ac']

Why does this last list only have three elements instead of nine?



  
When you use "for c2 in gen()", at each iteration over c1 a new 
generator is instantiated and looped over to continue building the list.
Whereas when you use "for c2 in it2", it's always the same generator 
instance (it2) which is used; after 3 loops over c2 this instance is 
exhausted, and the following iterations over c1 (and thus the attempts 
to loop over c2) yield nothing, because looping over an exhausted 
generator immediately raises StopIteration.
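The exhaustion is easy to see in isolation:

```python
def gen():
    yield 'a'
    yield 'b'
    yield 'c'

it = gen()
print([c for c in it])  # -> ['a', 'b', 'c']
print([c for c in it])  # -> [] : the same instance is already exhausted
```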


Regards,
pascal




Re: Tools for web applications

2009-05-01 Thread Pascal Chambon
Concerning desktop applications, I would really suggest PyQt. I don't 
know if it's "easyToLearn", but with its GUI designer you can very 
quickly get a cool-looking application (and if you need to extend your 
app later, it offers, imo, many more possibilities than the other 
toolkits I've tried).


Regards,
pascal
Mario wrote:

What easyToLearn tools you suggest for creating:
1. powerfull web applications
2. desktop applications


  





Re: File handling problem.

2009-05-02 Thread Pascal Chambon

subhakolkata1...@gmail.com wrote:

Dear Group,

I am using Python2.6 and has created a file where I like to write some
statistical values I am generating. The statistical values are
generating in a nice way, but as I am going to write it, it is not
taking it, the file is opening or closing properly but the values are
not getting stored. It is picking up arbitrary values from the
generated set of values and storing it. Is there any solution for it?

Best Regards,
SBanerjee.


  

Hello

Could you post an excerpt of your file-handling code?
It might be a buffering problem (although when the file closes, 
buffers should get flushed); otherwise it's really weird...


Regards,
pascal



Re: Warning of missing side effects

2009-05-02 Thread Pascal Chambon

Tobias Weber wrote:

Hi,
being new to Python, I find it remarkable that I see so few side 
effects. That's especially true for binding. First, it is a statement, 
so this won't work:


   if x = q.pop():
  print x # output only true values

Second, methods in the standard library either return a value OR modify 
the reciever, so even if assignment was an expression the above wouldn't 
work.


Only it still wouldn't, because IF is a statement as well. So no ternary:

   x = if True: 5 else: 7;

However there is one bit of magic, functions implicitly return None. So 
while the following will both run without error, only one actually works:


   x = 'foo'.upper()
   y = ['f', 'b'].reverse()

Now I mentioned that the mutable types don't have functions that mutate 
and return something, so I only have to remember that...


But I'm used to exploiting side effect, and sometimes forget this rule 
in my own classes. IS THERE A WAY to have the following produce a 
runtime error?


   def f():
  x = 5
  # no return

   y = f()

Maybe use strict ;)

  

Hello

Just to note that while "['f', 'b'].reverse()" doesn't return the new 
value, there is a corresponding function, "reversed(mylist)", which 
does (likewise, sorted(mylist) <-> mylist.sort()) B-)
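Side by side, a minimal illustration:

```python
data = ['f', 'b']
print(data.reverse())  # -> None: mutates in place, returns nothing
print(data)            # -> ['b', 'f']

print(list(reversed(['f', 'b'])))  # -> ['b', 'f'], original untouched
print(sorted([3, 1, 2]))           # -> [1, 2, 3]
```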


Concerning your question on warnings, well, I guess that after a 
little time in Python you won't make mistakes with "side effects" 
anymore.

But if you want to add checks to your methods, you should look towards 
decorators or metaclasses: they allow you to wrap your methods inside 
other methods, whose only point could be, for example, to check what 
your methods return and raise a warning if the result is "None". But 
the problem is, sometimes you WANT them to return None.
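A minimal sketch of such a decorator (the names are my own):

```python
import functools
import warnings

def warn_if_none(func):
    """Wrap a function and emit a warning whenever it returns None.
    Caveat from above: sometimes returning None is exactly what you want."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if result is None:
            warnings.warn("%s returned None" % func.__name__)
        return result
    return wrapper

@warn_if_none
def f():
    x = 5  # no return -> implicit None

f()  # triggers a UserWarning
```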


If you want to detect methods that don't have an explicit "return" 
statement, then you'll have to play with abstract syntax trees, it 
seems... much trouble for not much gain.

I guess you'll quickly get the pythonic habits without needing all that ^^

Regards,
pascal



Re: for with decimal values?

2009-05-02 Thread Pascal Chambon

Aahz wrote:

In article ,
Esmail   wrote:
  

Is there a Python construct to allow me to do something like this:

   for i in range(-10.5, 10.5, 0.1):
 ...

If there is such a thing already available, I'd like
to use it, otherwise I can write a function to mimic this,
but I thought I'd check (my search yielded nothing).



Write a function
  
Or you can work on integers - for i in range(-105, 105) - and divide 
by ten inside the loop... though performance-wise I don't know whether 
it's a good idea ^^
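i.e., something like:

```python
# Iterate over scaled integers, then divide back to floats; this also
# avoids accumulating floating-point drift step after step.
values = [i / 10.0 for i in range(-105, 105)]
print(values[0], values[-1])  # -> -10.5 10.4
```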


Regards,
Pascal


Re: Daemonic processes in multiprocessing - solved

2009-05-02 Thread Pascal Chambon

Pascal Chambon wrote:


Hello everyone


I've just read the doc of the (awesome) "multiprocessing" module, and 
there are some little things I don't understand, concerning daemon 
processes (see quotes below).


When a Python process exits, the page says it attempts to join all its 
children. Is this just a design choice, or are there constraints 
behind it? Because normally, when a parent process exits, its child 
gets adopted by init, and that can be useful for creating daemons, 
can't it?

Concerning daemon processes, precisely, the manual states that they 
are all terminated when their parent process exits. But isn't that 
contrary to the concept of daemons, which are supposed to have become 
independent from their parent?

And I don't understand how "the initial value (of the "daemon" 
attribute) is inherited from the creating process", since "daemonic 
processes are not allowed to create child processes". Isn't that the 
same as saying that "daemon" is always false by default?
And finally, why can't daemonic processes have children? If these 
children get "orphaned" when the daemonic process gets terminated (by 
its parent), they'll simply get adopted by init, won't they?


Thanks a lot for helping me get rid of my confusion,
regards,
Pascal


=QUOTES==
daemon 
<http://docs.python.org/library/multiprocessing.html#multiprocessing.Process.daemon>


The process's daemon flag, a Boolean value. This must be set
before start()

<http://docs.python.org/library/multiprocessing.html#multiprocessing.Process.start>
is called.

The initial value is inherited from the creating process.

When a process exits, it attempts to terminate all of its daemonic
child processes.

Note that a daemonic process is not allowed to create child
processes. Otherwise a daemonic process would leave its children
orphaned if it gets terminated when its parent process exits.

--

Similarly, if the child process is non-daemonic then the parent
process may hang on exit when it tries to join all its non-daemonic children.
--
Remember also that non-daemonic
processes will automatically be joined.

  



Alright, I guess I hadn't understood much about "daemonic processes" 
in Python's multiprocessing module.

So, for those interested, here is a clarification of the concepts as 
far as I've understood them - please poke me if I'm wrong somewhere.

Usually, in Unix, daemon processes are processes which have been 
disconnected from their parent process and from any terminal, and work 
in the background, often under a different user identity.
The Python multiprocessing module has a concept of "daemon" too, but 
this time by analogy with the "threading" module, in which daemons are 
simply threads that won't prevent application termination even if they 
are still running. Thus, daemonic processes launched through the 
multiprocessing API are normal processes that will be terminated (and 
not joined) once all non-daemonic processes have exited.

So there's not much in common between "traditional" *nix daemon 
processes and Python multiprocessing daemon processes.
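A small demonstration sketch of the flag (the worker body is my own):

```python
import multiprocessing
import time

def worker():
    while True:  # pretend to be a long-running background task
        time.sleep(0.05)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.daemon = True  # must be set before start()
    p.start()
    print(p.is_alive())  # -> True
    # No join() needed: when this (non-daemonic) main process ends,
    # the daemonic child is terminated rather than waited for.
```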

Regards, 
pascal





Re: Which one is best Python or Java for developing GUI applications?

2009-05-05 Thread Pascal Chambon

Chris Rebert wrote:

On Tue, May 5, 2009 at 12:26 AM, Paul Rudin  wrote:
  

Paul Rubin  writes:



srinivasan srinivas  writes:
  

Could you tell me does Python have any advantages over Java for the
development of GUI applications?


Yes.
  

Clearly c.l.p needs to adopt the SNB 
convention :)



Surely you have forgotten the comedy style of the language's namesake,
which makes that rule completely inadmissible! ;P

Cheers,
Chris
  


The fact that Python is a dynamic language offers, in my opinion, a 
huge advantage for quickly setting up a GUI, without caring about the 
infinite details of variable types and function signatures.
Its good handling of functions as first-class objects is also precious 
when the time comes to set up callbacks (I can't bear the way Swing 
does it anymore, with interfaces etc.).

But much depends on the framework used, too. I used wxPython for a 
multimedia project, and it actually lacked a lot of necessary features 
(transparency, event-loop tuning, multithreading support...), but that 
was a year ago; maybe things have changed.
Anyway, I'd advocate the use of PyQt, which really offers tremendous 
possibilities - if your application isn't a simple office application, 
it's really worth turning towards PyQt.


Regards,
Pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: Which one is best Python or Java for developing GUI applications?

2009-05-08 Thread Pascal Chambon




On Thu, May 7, 2009 at 1:42 PM, Pascal Chambon 
<chambon.pas...@wanadoo.fr> wrote:


   Hello

   When a lot of code using wxWidgets is in place, moving to Qt is
   certainly a big task; even so, thanks to Qt's GUI designer, it's
   possible to quickly reproduce the structure of the wxWidgets
   application with Qt widgets.

   If you want to see the power of Qt, I highly advise you to browse
   the very nice "demos" included in both the Qt and PyQt packages -
   those are GUI applications that let you see all kinds of capabilities
   very quickly.
   Also, something that wxwidgets will surely never be able to do :
   
http://labs.trolltech.com/blogs/2008/12/02/widgets-enter-the-third-dimension-wolfenqt/

   The two products (and the GUI designer, docs and demos) can be
   downloaded from these pages:
   http://www.qtsoftware.com/products/
   http://www.riverbankcomputing.co.uk/software/pyqt/download

   And if you have some time for reading :
   http://www.qtsoftware.com/files/pdf/qt-4.4-whitepaper
   http://doc.trolltech.com/4.5/index.html

   Good time with all that,
   regards,
   Pascal



   Qijing Li a écrit :

Thank you for sharing this information.

I started to use wxPython two years ago, and it fit my needs very
well because the jobs I worked on didn't focus on GUI. But now, the
project I am working on involves a lot of shape drawing and
appearance work; most of wxPython works for me,
but one thing, as you mentioned, transparency, drove me nuts.

wxPython supports transparent windows, but not transparent
backgrounds, which is exactly what I need. I researched a lot, trying
to find a proper way to get it, and it turns out that I found
two tricky ways: one is to use the wx.EVT_ERASE_BACKGROUND trick, the
other is to copy the image of the background under the window and use
it as the background image. Although the problem is solved, I feel
uncomfortable about this.
I hope wxPython supports real transparency some day.

For now, I have no plan to migrate to other frameworks, such as
PyQt.
I'm really interested in the differences between them; I'll
check it out.
Is there a demo of PyQt? Or could you give me some links, if you have
them at hand?

Have a good day!
Miles







On Tue, May 5, 2009 at 11:42 AM, Pascal Chambon
mailto:chambon.pas...@wanadoo.fr>> wrote:


The fact that Python is a dynamic language offers, in my
opinion, a huge advantage for quickly setting up a GUI, without
caring about the infinite details of variable types and
function signatures.
Its good handling of functions as first-class objects is also
precious when the time comes to set up callbacks (I can no
longer bear the way Swing does it, with listener interfaces etc.).

But much depends on the framework used, too. I've used
wxPython for a multimedia project, and it actually lacked a
lot of necessary features (transparency, event loop tuning,
multithreading support...), but that was a year ago; maybe
things have changed.
Anyway, I'd advocate the use of PyQt, which really offers
tremendous possibilities - if your application isn't a simple
office application, it's really worth turning towards PyQt.

Regards,
Pascal



Leon a écrit :

I think there are two advantages over Java for GUI applications.

First, Python is more productive and has very rich third-party module
support;
you can check the demo of wxPython.

Second, you can develop native-looking GUI

BTW: I'm developing GUI application using python and wxPython.



On May 4, 11:41 pm, srinivasan srinivas <sri_anna...@yahoo.co.in>
wrote:
  

Could you tell me does Python have any advantages over Java for the 
development of GUI applications?

Thanks,
Srini



--
http://mail.python.org/mailman/listinfo/python-list


  






--
http://mail.python.org/mailman/listinfo/python-list


Re: php to python code converter

2009-05-08 Thread Pascal Chambon

Hello

That's funny, I was thinking about precisely such a php-to-python converter 
some weeks ago.
Such a tool, allowing one for example to convert a CMS like Drupal to 
Python, would be a killer app, considering the amount of PHP code 
available.


But of course, there are lots of issues that would have to be fixed:
- translating the PHP syntax to Python syntax
- enforcing scope limitations where PHP doesn't have any
- handling differences in semantics (for example, the booleanness of "0" 
or "")
- handling the automatic variable creation and coercion that PHP features
- handling PHP types like arrays (which are neither Python lists nor 
Python dicts)
- providing a whole mirror of the PHP stdlib (string and file functions, 
access to environment vars...)


Some things, like PECL modules, would imo be almost impossible to handle 
(there is already so much trouble making CPython extensions available 
to other Python implementations...), but I guess that 95% of existing 
PHP websites could be wholly translated "just" with a language 
translator and an incomplete stdlib replacement.


That's a hell of a lot of work anyway, so it'd be worth weighing the effort ^^

Actually, your module "phppython" might already be rather useful, 
because I've come across people here and there desperately asking for 
help translating some PHP function of their own to Python.


But if the project were to become bigger, I guess some choices would 
have to be rechecked. For example, it seems you parse the PHP code your 
own way, instead of using existing PHP parsers; I think the most 
flexible approach would be to walk some kind of PHP abstract syntax tree, 
translating it to a Python AST on the way. Also, writing the comments in 
English would be mandatory :p


I'd like to have the opinion of people around here: do you think that 
complete language translators like php<->python or ruby<->python are 
possible? Impossible? Not worth the effort? Or better reached by 
another route (e.g. Parrot and the like)?


Regards,
Pascal

PS: Am I the only one having most of his answers rejected by the antispam 
system of python-list? That's humiliating :p


bvidinli a écrit :

if anybody needs:
http://code.google.com/p/phppython/
--
http://mail.python.org/mailman/listinfo/python-list


  


--
http://mail.python.org/mailman/listinfo/python-list


Re: php to python code converter

2009-05-08 Thread Pascal Chambon

D'Arcy J.M. Cain a écrit :

On Fri, 08 May 2009 17:19:13 +0200
Pascal Chambon  wrote:
  
That's funny, I was precisely thinking about a php to python converter, 
some weeks ago.
Such a tool, allowing for example to convert some CMS like Drupal to 
python, would be a killer app, when we consider the amount of php code 
available.



I'm not a big fan of PHP but I don't understand the desirability of
such a tool.  If you have a good PHP app just run it.  The point of
Python in my mind is that it is a cleaner syntax and promotes better
code.  Anything that converts PHP to Python is going to leave you with
some butt-ugly Python.  It also risks adding new bugs.

If you want a Python version of Drupal then perhaps that is the project
that you want to start.  Start fresh using the existing project as your
requirements specification.  Don't automatically import all their
coding choices that may be partially based on the language used.

  
I agree that, in any case, the code resulting from such a translation would 
be anything but pythonic :p


The point would rather be to consider code translated from PHP, Ruby, 
Perl or anything else as extension modules, which have to be wrapped with 
pythonic interfaces (much like C/C++ libraries, actually), or which are 
meant to be deeply refactored.
It's always a pity when libraries available in one language have to be 
reproduced in another - even though building the translation tools might 
require too much effort.


Concerning applications like Drupal, it'd be different of course - I 
have no precise plan on how Drupal could be translated and then made 
"maintainable" in Python, but Python surely lacks a similar CMS, 
easier to approach than Plone and much more feature-rich than the others 
(Skeletonz, PyLucid...). And I don't feel quite ready to implement 
such a thing myself ^^


Maybe the translation approach isn't the right one; surely the best 
would be interoperability at a lower level, like we have with JPype, 
the DLR of .NET, Parrot, Ironclad etc. But even with those projects, 
we're far from fluent interoperability between languages, which would 
let you pick the best modules from whichever language fits each of 
your tasks best, on the platform you want (I guess that's just a 
geeky dream).


Regards,
pascal


--
http://mail.python.org/mailman/listinfo/python-list




Re: Thread locking question.

2009-05-09 Thread Pascal Chambon

grocery_stocker a écrit :

On May 9, 8:36 am, Piet van Oostrum  wrote:
  

grocery_stocker  (gs) wrote:
  

gs> The following code gets data from 5 different websites at the "same
gs> time".
gs> #!/usr/bin/python
gs> import Queue
gs> import threading
gs> import urllib2
gs> import time
gs> hosts = ["http://yahoo.com";, "http://google.com";, "http://amazon.com";,
gs>  "http://ibm.com";, "http://apple.com";]
gs> queue = Queue.Queue()
gs> class MyUrl(threading.Thread):
gs> def __init__(self, queue):
gs> threading.Thread.__init__(self)
gs> self.queue = queue
gs> def run(self):
gs> while True:
gs> host = self.queue.get()
gs> if host is None:
gs> break
gs> url = urllib2.urlopen(host)
gs> print url.read(1024)
gs> #self.queue.task_done()
gs> start = time.time()
gs> def main():
gs> for i in range(5):
gs> t = MyUrl(queue)
gs> t.setDaemon(True)
gs> t.start()
gs> for host in hosts:
gs> print "pushing", host
gs> queue.put(host)
gs> for i in range(5):
gs> queue.put(None)
gs> t.join()
gs> if __name__ == "__main__":
gs> main()
gs> print "Elapsed Time: %s" % (time.time() - start)
gs> How does the parallel download work if each thread has a lock? When
gs> the program opens www.yahoo.com, it places a lock on the thread,
gs> right? If so, then doesn't that mean the other 4 sites have to wait
gs> for the thread to release the lock?
  

No. Where does it set a lock? There is only a short locking period in the
queue when an item is put into or got from the queue. And of course we
have the GIL, but this is released as soon as a long-lasting operation is
started - in this case when the Internet communication is done.
--



Maybe I'm being a bit daft, but what prevents the data from www.yahoo.com
from being mixed up with the data from www.google.com? Doesn't using
queue() prevent the data from being mixed up?

--
http://mail.python.org/mailman/listinfo/python-list


  

Hello

Each thread has its own separate access to the internet (its own TCP/IP 
connection, port number etc.), so the incoming data will never get mixed 
up between threads on input.


The only problem is when you explicitly use shared data structures 
between the threads - like the queue here, which they all access.
But the queue is protected against multithreaded access, so there is no 
problem there (another data structure might give bugs if not explicitly 
locked before use).
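
For instance, an explicitly locked shared dict (a sketch illustrating the point, not code from this thread):

```python
import threading

counter = {"hits": 0}        # shared structure, *not* internally locked
lock = threading.Lock()

def bump(n):
    for _ in range(n):
        with lock:           # guard the read-modify-write sequence
            counter["hits"] += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["hits"])  # 50000, reliably, thanks to the lock
```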


On the contrary, there will be mixing on the console (stdout), since each 
thread can write to it at any moment. It's likely that the sources of 
all the pages will get interleaved on your screen, yep. ^^


Regards,
pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: stand alone exec

2009-05-11 Thread Pascal Chambon

Hello

It sounds indeed like a runtime library problem...

You should run a dependency finder (like Dependency Walker - 
http://www.dependencywalker.com/) on your executable, and thus see what 
might be lacking on other systems.
I know that on *nix systems there are tools to see more precisely what's 
missing, but on Windows, apart from that tool, I don't know much.


Regards,
Pascal



prakash jp a écrit :

Hi all,

I want to run DOS commands through Python standalone executables. The 
created Python standalone executable (py2exe) works fine on my machine,
but on transferring the "dist" folder to other systems 
the executable fails to run.

I tried to copy the MSVCP90.dll into the "dist" folder. I also tried to 
exclude that same dll in the options of the setup.py file.

The error reads as follows:

"The application has failed to start because the application 
configuration is incorrect. Reinstalling the application may fix this 
problem".

Details of the installed setup files may be useful:

1- python-2.6.1.msi
2- py2exe-0.6.9.win32-py2.6.exe
3- pywin32-212.win32-py2.6.exe

Thanks in advance

Regards

Prakash


--
http://mail.python.org/mailman/listinfo/python-list
  


-- 
http://mail.python.org/mailman/listinfo/python-list


Intra-package C extensions with freeze.py

2010-01-13 Thread Pascal Chambon

Hello everyone
Some time ago, I had unexpected problems while trying to freeze some 
scripts into a standalone executable with the "freeze.py" script.
I had already done it before: normally, you simply freeze pure Python 
modules into a standalone executable, package it along with Python 
extensions (_ssl.so, time.so etc.), and you're done.


But the problem is that I had a dependency on the python-fuse bindings, 
and these bindings contain a Python extension (_fusemodule.so) inside 
the "fuseparts" package.
So the pure Python modules of this fuse wrapper got frozen into the 
standalone executable, and that "fuseparts/_fusemodule.so" was left 
outside, where it was not found by the module loader, since it was expected 
to appear inside a "fuseparts" package now embedded in the executable.


I've managed to solve that by manually "monkey patching" sys.modules, 
before fusemodule's actual import. But this looks like an unsatisfying 
solution, to me.
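
For the record, the "monkey patching" alluded to can be sketched like this, using a synthetic stand-in module (the names and layout are assumptions for illustration; the real python-fuse case differs):

```python
import importlib
import sys
import types

# Build stand-ins for the frozen package and the external C extension.
pkg = types.ModuleType("fuseparts")
ext = types.ModuleType("fuseparts._fusemodule")
ext.answer = 42                     # pretend this is the extension's content
pkg._fusemodule = ext

# Pre-seed the import cache *before* the frozen code runs its imports:
sys.modules["fuseparts"] = pkg
sys.modules["fuseparts._fusemodule"] = ext

# A later import now resolves from the cache instead of searching
# inside the frozen executable:
mod = importlib.import_module("fuseparts._fusemodule")
print(mod.answer)  # 42
```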
Does anyone have a clue about how to freeze a python program cleanly, in 
case such inner C extensions are involved ? Does any of the freezers 
(freeze.py, py2exe, pyrex, cx_freeze...) do that ? I haven't seen such 
things so far in their docs.


Thanks for the attention,
Regards,
Pascal

--
http://mail.python.org/mailman/listinfo/python-list


Re: Castrated traceback in sys.exc_info()

2010-03-17 Thread Pascal Chambon
Hello,

traceback functions indeed allow the manipulation of exception tracebacks,
but the root problem is that, since that traceback is incomplete anyway,
your "traceback.format_exc().splitlines()" will only provide frames for
callee (downward) functions, not caller (upward) ones, starting from the
exception-catching frame.

Regards,
Pascal



2010/3/17 Michael Ricordeau 

> Hi,
>
> to log tracebacks, you can probably try traceback module.
>
> I use it like this :
>
> import traceback
>  # your code
>
> for line in traceback.format_exc().splitlines():
>  log.trace(line)
>
>
>
> Le Wed, 17 Mar 2010 03:42:44 -0700 (PDT),
> Pakal  a écrit :
>
> > Hello
> >
> > I've just realized recently that sys.exc_info() didn't return a full
> > traceback for exception concerned : it actually only contains the
> > frame below the point of exception catching.
> >
> > That's very annoying to me, because I planned to log such tracebacks
> > with logging.critical(*, exc_info=True), and these partial
> > tracebacks, like the one below, are clearly unsufficient to determine
> > where the problem comes from. A whole traceback, from program entry
> > point to exception raising point, would be much better.
> >
> > 2010-03-17 09:28:59,184 - pims - CRITICAL - This is just a test to
> > ensure critical emails are properly sent
> > Traceback (most recent call last):
> > << HERE, lots of frames missing >>
> >   File "test_common.py", line 34, in test_email_sending
> > os.open("qsdsdqsdsdqsd", "r")
> > TypeError: an integer is required
> >
> > Is there any workaround for this ? I've thought about a custom logging
> > formatter, which would take both the exc_info traceback AND its own
> > full backtrace, and to connect them together on their relevant part,
> > but it's awkward and error prone... why can't we just have all the
> > precious traceback info under the hand there (an additional attribute
> > might have pointed the precise frame in which the exception was
> > caught).
> >
> > Tanks for the attention,
> > Regards,
> > Pascal
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Precedence of py, pyd, so, egg, and folder modules/packages when importing

2010-03-20 Thread Pascal Chambon

Hello,

I've run into a slight issue when turning my package hierarchy into a 
parallel hierarchy of compiled Cython extensions. Due to the compilation 
process, pure Python and C modules must have the same basename, and they're 
located in the same folders.


Is there any way for me to ensure that, if both module sets are 
installed by a user, the compiled, faster module will ALWAYS be 
imported instead of the pure Python one? Or am I forced to remove 
pure-Python sources when Cython ones must be used?


More generally, is there any certainty about the precedence of the misc. 
module types, when several of them are in the same path ? I have found 
nothing about it online.


Thanks for the attention,
regards,
Pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: Castrated traceback in sys.exc_info()

2010-03-22 Thread Pascal Chambon

Gabriel Genellina a écrit :


En Wed, 17 Mar 2010 09:42:06 -0300, Pascal Chambon 
 escribió:


traceback functions indeed allow the manipulation of exception 
tracebacks,

but the root problem is that anyway, since that traceback is incomplete,
your "traceback.format_exc().splitlines()" will only provide frames for
callee (downward) functions, not caller (upward) ones, starting from the
exception catching frame.


Either I don't understand what you mean, or I can't reproduce it:




Allright, here is more concretely the problem :


import logging

def a(): return b()
def b(): return c()
def c():
    try:
        return d()
    except:
        logging.exception("An error")

def d(): raise ValueError


def main():
    logging.basicConfig(level=logging.DEBUG)
    a()

main()


OUTPUT:
>>>
ERROR:root:An error
Traceback (most recent call last):
  File "C:/Users/Pakal/Desktop/aaa.py", line 7, in c
    return d()
  File "C:/Users/Pakal/Desktop/aaa.py", line 11, in d
    def d(): raise ValueError
ValueError
>>>


As you see, the traceback only starts from function c, which handles the 
exception.
It doesn't show main(), a() and b(), which might however be (and are, in 
my case) critical to diagnose the severity of the problem (since many 
different paths would lead to calling c()).


So the question is: is it possible to enforce, one way or another, 
retrieval of the FULL traceback at the exception-raising point, instead 
of this incomplete one?


Thank you for your help,
regards,

Pascal


--
http://mail.python.org/mailman/listinfo/python-list


Re: Castrated traceback in sys.exc_info()

2010-03-23 Thread Pascal Chambon

Gabriel Genellina a écrit :


En Mon, 22 Mar 2010 15:20:39 -0300, Pascal Chambon 
 escribió:


Allright, here is more concretely the problem :

ERROR:root:An error
Traceback (most recent call last):
  File "C:/Users/Pakal/Desktop/aaa.py", line 7, in c
return d()
  File "C:/Users/Pakal/Desktop/aaa.py", line 11, in d
def d(): raise ValueError
ValueError
 >>>

As you see, the traceback only starts from function c, which handles 
the exception.
It doesn't show main(), a() and b(), which might however be (and are, 
in my case) critical to diagnose the severity of the problem (since 
many different paths would lead to calling c()).


So the question is : is that possible to enforce, by a way or 
another, the retrieval of the FULL traceback at exception raising 
point, instead of that incomplete one ?


Thanks for bringing this topic! I learned a lot trying to understand 
what happens.


The exception traceback (what sys.exc_info()[2] returns) is *not* a 
complete stack trace. The sys module documentation is wrong [1] when 
it says "...encapsulates the call stack at the point where the 
exception originally occurred."


The Language Reference is more clear [2]: "Traceback objects represent 
a stack trace of an exception. A traceback object is created when an 
exception occurs. When the search for an exception handler unwinds the 
execution stack, at each unwound level a traceback object is inserted 
in front of the current traceback. When an exception handler is 
entered, the stack trace is made available to the program."


That is, a traceback holds only the *forward* part of the stack: the 
frames already exited when looking for an exception handler. Frames 
going from the program starting point up to the current execution 
point are *not* included.


Conceptually, it's like having two lists: stack and traceback. The 
complete stack trace is always stack+traceback. At each step (when 
unwinding the stack, looking for a frame able to handle the current 
exception) an item is popped from the top of the stack (last item) and 
inserted at the head of the traceback.


The traceback holds the "forward" path (from the current execution 
point, to the frame where the exception was actually raised). It's a 
linked list, its tb_next attribute holds a reference to the next item; 
None marks the last one.


The "back" path (going from the current execution point to its caller 
and all the way to the program entry point) is a linked list of 
frames; the f_back attribute points to the previous one, or None.


In order to show a complete stack trace, one should combine both. The 
traceback module contains several useful functions: extract_stack() + 
extract_tb() are a starting point. The simplest way I could find to 
make the logging module report a complete stack is to monkey patch 
logging.Formatter.formatException so it uses format_exception() and 
format_stack() combined (in fact it is simpler than the current 
implementation using a StringIO object):
Good point, there is clearly a distinction between "stack trace" and 
"exception traceback" that I didn't know about (actually, it seems no one 
makes this distinction in the computer literature).



Good catch, Gabriel.

There should be no need to monkey-patch the logging module - it's
better if I include the change in the module itself. The only
remaining question is that of backward compatibility, but I can do
this for Python 2.7/3.2 only so that won't be an issue. It's probably
a good idea to log an issue on the bug tracker, though, so we have
some history for the change - do you want to do that, or shall I?

Regards,

Vinay Sajip
  
Well, having it fixed in logging would be great, but that kind of 
information is good to have in other circumstances too, so shouldn't we 
rather advocate making this "stack trace part" available in exc_info 
as well?
This way, people like me who consider frames to be black magic wouldn't 
need to meet complex stuff like 
"traceback.format_stack(ei[2].tb_frame.f_back)"  :p


Should I open an issue for this evolution of exception handling, or 
should we content ourselves with this "hacking" of the frame stack?


Regards,
Pascal





--
http://mail.python.org/mailman/listinfo/python-list


Directly calling python's function arguments dispatcher

2010-12-12 Thread Pascal Chambon

Hello

I've encountered several times, when dealing with adaptation of function 
signatures, the need for explicitly resolving complex argument sets into 
a simple variable mapping. Explanations.



Consider that function:

def foo(a1, a2, *args, **kwargs):
    pass

calling foo(1, a2=2, a3=3)

will map these arguments to local variables like these:
{
    'a1': 1,
    'a2': 2,
    'args': tuple(),
    'kwargs': {'a3': 3}
}

That's a quite complex resolution mechanism, which must handle 
positional and keyword arguments, and deal with both collision and 
missing argument cases.


Normally, the simplest way to invoke this mechanism is to define a 
function with the proper signature, and then call it (like, here, foo()).


But there are cases where a more "meta" approach would suit me well.

For example when adapting xmlrpc methods: due to the limitations of 
xmlrpc (no keyword arguments), we use a trick, i.e. our xmlrpc functions 
only accept a single argument, a "struct" (Python dict), which gets 
unpacked on arrival, when calling the real functions exposed by the 
xmlrpc server.


But on client side, I'd like to offer a more native interface (allowing 
both positional and keyword arguments), without having to manually 
define an adapter function for each xmlrpc method.


To summarize, I'd like to implement a magic method like this one (please 
don't care about performance isues for now):


class XmlrpcAdapter:
    def __getattr__(self, funcname):
        # we create an on-the-fly adapter
        def adapter(*args, **kwargs):
            xmlrpc_kwargs = _resolve_func_signature(funcname, *args,
                                                    **kwargs)
            # we call the remote function with a single dict argument
            self.xmlrpc_server.call(funcname, xmlrpc_kwargs)
        return adapter

As you see, all I need is _resolve_func_signature(), which is actually 
the routine (internal to the Python runtime) which transforms complex 
function calls into a simple mapping of variables to be added to the 
function's local namespace. Of course this routine would need information 
about the target function's signature, but I have that info available 
(for example, via a set of functions that are a mockup of the real 
xmlrpc API).


Is that routine exposed to python, somewhere ? Does anybody know a 
working implementation here or there ?


Thanks for the help,
regards,
Pakal



--
http://mail.python.org/mailman/listinfo/python-list


Re: Directly calling python's function arguments dispatcher

2010-12-13 Thread Pascal Chambon

Le 12/12/2010 23:41, Peter Otten a écrit :

Pascal Chambon wrote:


   

I've encountered several times, when dealing with adaptation of function
signatures, the need for explicitly resolving complex argument sets into
a simple variable mapping. Explanations.


Consider that function:

def foo(a1, a2, *args, **kwargs):
  pass

calling foo(1, a2=2, a3=3)

will map these arguments to local variables like these:
{
'a1': 1,
'a2': 2,
'args': tuple(),
'kwarg's: {'a3': 3}
}

That's a quite complex resolution mechanism, which must handle
positional and keyword arguments, and deal with both collision and
missing argument cases.
 
   

Is that routine exposed to python, somewhere ? Does anybody know a
working implementation here or there ?
 

http://docs.python.org/library/inspect.html#inspect.getcallargs

   

Too sweet  \o/
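
For the archives, getcallargs applied to the foo() example from the original post:

```python
import inspect

def foo(a1, a2, *args, **kwargs):
    pass

# Resolve the call foo(1, a2=2, a3=3) without actually calling foo():
mapping = inspect.getcallargs(foo, 1, a2=2, a3=3)
print(mapping)
# equal to {'a1': 1, 'a2': 2, 'args': (), 'kwargs': {'a3': 3}}
```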

Thanks a lot,
regards,
Pakal
--
http://mail.python.org/mailman/listinfo/python-list


Troubles with python internationalization

2010-05-25 Thread Pascal Chambon

Hello

I'm studying the migration of my website (mixed english and french
languages...) to a properly localized architecture.

From what I've read so far, using translation "tags" (or short phrases)
in the code, and translating them into every target language (including
English), sounds like a better approach than using, for example, final
English wordings as translation tags. The setup takes longer the first way,
but at least if you change the English wordings later, you don't break all
the other translations at the same time.

However, I still have problems with some aspects of internationalization:

* code safety: it seems Python's default string formatting techniques (%
operator, .format() method) are normally used when one needs to
substitute placeholders in translated strings. But the thing is: I DON'T
want my view to raise an exception simply because one of the
translations has forgotten a damn "%(myvar)s" placeholder. The only
quick fix I can think of is to always perform substitution through
defaultdict instances (and still, exceptions could occur if abnormal
"%s" placeholders are found in the translated string).
Are there some utilities in Python, or frameworks like Django,
that allow safe string substitution (which might,
for example, simply log an error if a buggy string is found)? Python's
template strings' "safe_substitute()" won't fit, because it swallows
errors without any notice...

* unknown translatable strings: I have, in various data files (e.g.
yaml), strings which will need translation but can't be detected
by gettext and co., since they only appear in the code as variables, i.e.
"_(yamlvar)". The easiest fix, I guess, would be to replace them with
specific tags (like "TR_HOMEPAGE_TITLE"), and to have a tool browse the
code to extract them and add them to the standard gettext translation
chain. Such a tool shouldn't be too hard to write, but I'd rather know:
doesn't such a tool already exist somewhere? I've seen no mention of it in
the gettext or babel tools, only recognition via function calls (_(),
tr()...).

* I have seen no mention anywhere of how to remove deprecated/unused
strings from gettext files - only merging translations seems to interest
people. However, a translation file which slowly fills up with
outdated data doesn't sound cool to me. Does anyone know of tools/program
flags which would list/extract translations that no longer seem used?
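
The "safe substitution" wished for in the first point could be sketched like this (an illustration, not a stock Python or Django API; names are invented):

```python
import logging

log = logging.getLogger("i18n")

class _Reporting(dict):
    """Mapping that logs missing placeholders instead of raising KeyError."""
    def __missing__(self, key):
        log.error("missing placeholder %r in a translated string", key)
        return "???"

def safe_format(template, **variables):
    # Note: a stray positional "%s" in a translation would still raise,
    # as mentioned above.
    return template % _Reporting(variables)

print(safe_format("Hello %(user)s, you have %(count)s messages", user="Bob"))
# Hello Bob, you have ??? messages   (and an error is logged)
```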


Thanks for your help,

regards,
Pascal


--
http://mail.python.org/mailman/listinfo/python-list


Direct interaction with subprocess - the curse of blocking I/O

2009-06-29 Thread Pascal Chambon

Hello everyone

I've had real issues with subprocesses recently: from a Python script, 
on Windows, I wanted to "give control" to a command-line utility, i.e. 
forward user input to it and display its output on the console. It seems 
simple, but I ran into walls:
- subprocess.communicate() only deals with a predetermined input, not 
step-by-step user interaction

- the pexpect module is Unix-only, and aimed at automation, not interactive input
- when trying to do the whole job manually (transferring data between the 
standard streams of the Python program and the binary subprocess), I met 
the issue: select() works only on windows, and Python's I/O is 
blocking, so I can't just, for example, ask for data from the subprocess' 
stdout and expect the call to return if no data is present - the 
requesting thread might instead block forever.


Browsing the web, I found some hints :
- use the advanced Win32 API to create non-blocking I/O: rather 
complicated, non-portable, and far from Python's normal file objects
- use threads that block on the different streams and eat/feed them 
without ever stopping: rather portable, but problematic on shutdown 
(how do you terminate these threads safely? On some OSes, a process 
never dies as long as any thread - even a "daemonic" one - lives; I've seen 
people complaining about it).
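For the record, the usual shape of the thread-based workaround is a daemon reader thread feeding a queue, so the main thread can poll with a timeout instead of blocking. A minimal sketch (Python 3 syntax, although this thread dates from the Python 2 era):

```python
import subprocess
import sys
import threading
from queue import Queue, Empty

def reader(stream, queue):
    """Daemon-thread body: shovel lines from the child's pipe into a queue."""
    for line in iter(stream.readline, b""):
        queue.put(line)
    stream.close()

# A trivial child process that prints one line and exits.
proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE)
q = Queue()
t = threading.Thread(target=reader, args=(proc.stdout, q), daemon=True)
t.start()

try:
    # Returns promptly even if the child stays silent, instead of blocking.
    line = q.get(timeout=5.0)
except Empty:
    line = None
proc.wait()
```

Marking the reader as a daemon means (at least on CPython) a read stuck forever cannot by itself keep the interpreter alive at exit, which sidesteps part of the shutdown problem described above.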


So well, I'd like to know: do you people know any solution to this 
simple problem - making a user interact directly with a subprocess? Or 
would this really require a library handling each case separately (Win32 
API, select()...)?


Thanks a lot for your interest and advice,
regards,
Pascal


Re: Direct interaction with subprocess - the curse of blocking I/O

2009-07-02 Thread Pascal Chambon

Thank you all for the comments


> you might want something like Expect.


Yes, "Expect" deals with such things; unfortunately it's POSIX-only (due to the 
pty module requirement...), whereas I'd like to find generic ways (i.e. at least 
Windows/Linux/Mac recipes).


> The latter is inherently tricky (which is why C's popen() lets you connect
> to stdin or stdout but not both). You have to use either multiple threads,
> select/poll, or non-blocking I/O.
>
> If the child's output is to the console, it should presumably be the
> former, i.e. piping stdin but allowing the child to inherit stdout, in
> which case, where's the problem? Or are you piping its stdout via Python
> for the hell of it?
It's indeed two-way piping that I'd like to get; but actually even 
one-way piping is error-prone: if I try to send data to the child's 
stdin, and this child never wants to receive that data (for whatever 
reason), the parent thread will be stuck forever.
I can use multiple threads, but that doesn't fully solve the problem, 
because having even a single stuck thread might prevent the process from 
terminating, on some operating systems... I'd need a way to unblock 
threads stuck on blocking I/O (and even closing the pipe isn't guaranteed 
to do it properly - the close might be delayed by the pending I/O operations).
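One pragmatic escape hatch for the stuck-write case: do the blocking write in a daemon thread, give it a deadline, and if it misses the deadline kill the child - breaking the pipe is what actually unblocks the writer. A rough sketch (Python 3 syntax, Unix semantics assumed; the sleeping child here just simulates an uncooperative consumer):

```python
import subprocess
import sys
import threading

# A child that never reads its stdin.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"],
                        stdin=subprocess.PIPE)

def feed():
    try:
        # Far more data than the OS pipe buffer, so the write must block
        # until the child reads (which it never does).
        proc.stdin.write(b"x" * (1 << 22))
        proc.stdin.close()
    except OSError:
        pass  # pipe broke because the child was killed: expected here

w = threading.Thread(target=feed, daemon=True)
w.start()
w.join(timeout=2.0)
was_stuck = w.is_alive()
if was_stuck:      # the write never completed: the child is not reading
    proc.kill()    # killing the child breaks the pipe, unblocking the writer
    w.join()
proc.wait()
```

This is a blunt instrument (the child is killed, not asked to finish), but it guarantees no thread stays blocked on the pipe at shutdown.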



> I would guess the architectural differences are so great that an attempt
> to "do something simple" is going to involve architecture-specific code,
> and I personally wouldn't have it any other way.  Simulating a shell
> with hooks on its I/O should be so complicated that a "script kiddie"
> has trouble writing a Trojan.

I'm not sure I understand the security hole there - if a script kiddie 
manages to make you execute his program, he doesn't need complex I/O 
redirections to harm you; simply destroying files randomly on your hard 
drive should fit his needs, shouldn't it? :?




> > I met the issue : select() works only on windows ...
>
> No it doesn't. It works only on sockets on Windows; on Unix/Linux it works
> with all file descriptors.

Indeed, I mixed up my words there... >_<


> If you are willing to have a wxPython dependency, wx.Execute handles
> non-blocking I/O with processes on all supported platforms. More
> discussion of Python's blocking I/O issues here:
> http://groups.google.com/group/comp.lang.python/browse_thread/thread/a037349e18f99630/60886b8beb55cfbc?q=#60886b8beb55cfbc


Very interesting link, that's exactly the problem... and it seems no 
obvious solution comes out of it, except, of course, going down to the 
OS' internals...
File support is really weak in Python, I feel (stdio is a little 
outdated...); it'd need a complete blocking/non-blocking, 
locking/non-locking stream API right in the stdlib...
I'm already working on locking classes, and I'll explore the opportunity 
for a higher-level "file" object too whenever possible (a new filesystem 
API is in progress here at EuroPython 2009 - it might be the occasion).



regards,
Pascal


