Re: Environment variables not visible from Python

2011-09-22 Thread Thomas Rachel

Am 22.09.2011 08:12 schrieb Steven D'Aprano:

I don't understand why some environment variables are not visible from
Python.

[steve@wow-wow ~]$ echo $LINES $COLUMNS $TERM
30 140 xterm
[steve@wow-wow ~]$ python2.6
Python 2.6.6 (r266:84292, Dec 21 2010, 18:12:50)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-27)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import os
>>> (os.getenv('LINES'), os.getenv('COLUMNS'), os.getenv('TERM'))

(None, None, 'xterm')



They are not environment variables, but merely shell variables.

You can turn them into environment variables with the shell command
"export". After exporting them, they are visible to Python.


The environment can be obtained with env.

So try:

$ python -c 'import os; print "\n".join(sorted("%s=%s" % (k,v) for k,v 
in os.environ.iteritems()))' | diff -u - <(env|LANG=C sort)

@@ -61,4 +61,4 @@
 XDG_DATA_DIRS=/usr/share
 XKEYSYMDB=/usr/share/X11/XKeysymDB
 XNLSPATH=/usr/share/X11/nls
-_=/usr/bin/python
+_=/usr/bin/env

and you see that they (nearly) match.


Try as well

$ python -c 'import os; print "\n".join(os.getenv(k) or "" for k in 
("LINES","COLUMNS","TERM"))'



linux
$ export LINES
$ python -c 'import os; print "\n".join(os.getenv(k) or "" for k in 
("LINES","COLUMNS","TERM"))'

24

linux
$ export COLUMNS
$ python -c 'import os; print "\n".join(os.getenv(k) or "" for k in 
("LINES","COLUMNS","TERM"))'

24
80
linux
$

HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python deadlock using subprocess.popen and communicate

2011-09-22 Thread Thomas Rachel

Am 22.09.2011 05:42 schrieb Atherun:


I'm pretty sure that's the problem; this is a generic catch-all
function for running subprocesses.  It can be anything from a simple
command to a complex command with a ton of output.  I'm looking for a
better solution to handle the case of running subprocesses that have
an undetermined amount of output.


Just handle process.stdout/stderr by yourself - read it out until EOF 
and then wait() for the process.
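A minimal sketch of that approach. Merging stderr into stdout leaves only one pipe to drain, which sidesteps the classic two-pipe deadlock; with two separate pipes you would need threads or select():

```python
import subprocess
import sys

# With two separate pipes, reading one to EOF can deadlock once the
# child fills the other pipe's OS buffer; merging stderr into stdout
# leaves a single pipe to drain.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('out'); import sys; sys.stderr.write('err\\n')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
data = proc.stdout.read()  # read until EOF...
status = proc.wait()       # ...then wait() for the exit status
print(status, sorted(data.decode().split()))
```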



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


static statements and thread safety

2011-09-22 Thread Eric Snow
A recent thread on the python-ideas list got me thinking about the
possibility of a static statement (akin to global and nonlocal).  I am
wondering if an implementation would have to address thread safety
concerns.

I would expect that static variables would work pretty much the same
way as default arguments, with a list of names on the code object and
a list of values on the function object.  And I would guess that the
values from the static variables would get updated on the function
object at the end of the call.  If multiple threads are executing the
function at the same time won't there be a problem with that
end-of-call update?

-eric


p.s. It probably shows that I haven't done a lot of thread-related
programming, so perhaps this is not a hard question.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Odd behavior with imp.reload and logging

2011-09-22 Thread Andrew Berg
On 2011.09.22 01:46 AM, Chris Angelico wrote:
> I think Pike may be a good choice for you.
That's quite unappealing for a few reasons. First, that would likely
require writing an entirely new bot (I'm not even that familiar with the
current one; I've only been writing a module for it). Also, I don't
really enjoy programming (I'm aware I'm likely in the minority on this
list); I tolerate it enough to get certain things done, so learning
another language, especially when I'm still learning Python, is not
something I want to do. Python is probably not the best tool for this
particular job, but I am not nearly dedicated to this project enough to
learn another programming language.

So, is there any way to at least monitor what happens after a reload? I
haven't noticed anything odd until I came across this logging issue.

-- 
CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 6.0.2
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static statements and thread safety

2011-09-22 Thread Chris Angelico
On Thu, Sep 22, 2011 at 5:45 PM, Eric Snow  wrote:
> I would expect that static variables would work pretty much the same
> way as default arguments

Could you just abuse default arguments to accomplish this?

def accumulate(n,statics={'sum':0}):
statics['sum']+=n
return statics['sum']

>>> accumulate(1)
1
>>> accumulate(10)
11
>>> accumulate(20)
31
>>> accumulate(14)
45

This eliminates any sort of "end of function write-back" by writing to
static storage immediately. Of course, syntactic assistance would make
this look cleaner, for instance:

def accumulate(n):
static sum=0
sum+=n
return sum

Both of these would, of course, have thread-safety issues. But these
can be resolved by figuring out exactly what you're trying to
accomplish with your static data, and what it really means when two
threads are affecting it at once.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Odd behavior with imp.reload and logging

2011-09-22 Thread Chris Angelico
On Thu, Sep 22, 2011 at 5:59 PM, Andrew Berg  wrote:
> That's quite unappealing for a few reasons. First, that would likely
> require writing an entirely new bot (I'm not even that familiar with the
> current one; I've only been writing a module for it).

Ah, then yeah, it's probably not a good idea to change languages. But
you may end up finding other issues with reload() as well.

I wonder whether this would work...

modules=[]

# instead of 'import mymodule' use:
modules.append(__import__('mymodule')); mymodule=modules[-1]

In theory, this should mean that you load it fresh every time - I
think. If not, manually deleting entries from sys.modules might help,
either with or without the list of modules.
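A sketch of the sys.modules route (using the stdlib json module as a stand-in; fresh_import is a hypothetical helper name):

```python
import importlib
import sys

def fresh_import(name):
    # Dropping the cached entry forces the next import to re-execute
    # the module, returning a brand-new module object.
    sys.modules.pop(name, None)
    return importlib.import_module(name)

first = fresh_import("json")
second = fresh_import("json")
print(first is second)  # -> False: two distinct module objects
```

Note this only replaces the top-level entry; submodules already cached in sys.modules are reused, and old references to the previous module object keep it alive.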

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static statements and thread safety

2011-09-22 Thread Eric Snow
On Thu, Sep 22, 2011 at 2:06 AM, Chris Angelico  wrote:
> On Thu, Sep 22, 2011 at 5:45 PM, Eric Snow  
> wrote:
>> I would expect that static variables would work pretty much the same
>> way as default arguments
>
> Could you just abuse default arguments to accomplish this?
>
> def accumulate(n,statics={'sum':0}):
>    statics['sum']+=n
>    return statics['sum']
>
> >>> accumulate(1)
> 1
> >>> accumulate(10)
> 11
> >>> accumulate(20)
> 31
> >>> accumulate(14)
> 45
>
> This eliminates any sort of "end of function write-back" by writing to
> static storage immediately. Of course, syntactic assistance would make
> this look cleaner, for instance:
>
> def accumulate(n):
>    static sum=0
>    sum+=n
>    return sum
>
> Both of these would, of course, have thread-safety issues. But these
> can be resolved by figuring out exactly what you're trying to
> accomplish with your static data, and what it really means when two
> threads are affecting it at once.

That's a good point.  So, isn't the default arguments hack in the same
boat with regards to threads?

Maybe I'm just misunderstanding the thread concept in Python.  Threads
have separate execution stacks but share interpreter global state,
right?

-eric

>
> ChrisA
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Graphing

2011-09-22 Thread John Ladasky
I'm using matplotlib and I'm happy with it.  Quick plotting is easy
using the pyplot interface, which resembles the popular software
package MATLAB.  As your ambitions grow, matplotlib has many
sophisticated tools waiting for you.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static statements and thread safety

2011-09-22 Thread Chris Angelico
On Thu, Sep 22, 2011 at 6:16 PM, Eric Snow  wrote:
> That's a good point.  So, isn't the default arguments hack in the same
> boat with regards to threads?
>
> Maybe I'm just misunderstanding the thread concept in Python.  Threads
> have separate execution stacks but share interpreter global state,
> right?

I would say it probably is, but others on this list will know more of
threading in Python. I tend not to write multithreaded programs in
Python - the main reason for me to use Python is rapid scriptwriting,
which usually doesn't demand threads.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Odd behavior with imp.reload and logging

2011-09-22 Thread Steven D'Aprano
On Wed, 21 Sep 2011 23:47:55 -0500, Andrew Berg wrote:

> On 2011.09.21 11:22 PM, Steven D'Aprano wrote:
>> You could
>> try something like this (untested):
> That works. Thanks!
> This makes me wonder what else stays around after a reload 

Practically everything. A reload doesn't delete anything, except as a 
side-effect of running the module again.

Don't think of reloading as:

  * Revert anything the module is responsible for.
  * Delete the module object from the import cache.
  * Import the module in a fresh environment.

Instead, think of it as:

  * Re-import the module in the current environment.


In practice, you won't often see such side-effects, because most modules 
don't store state outside of themselves. If they store state *inside* 
themselves, then they will (almost always) overwrite that state. E.g. 
this will work as expected:


state = [something]


But this leaves state hanging around in other modules and will be 
surprising:

import another_module
another_module.state.append(something)


My guess is that the logging module uses a cache to save the logger, 
hence there is state inadvertently stored outside your module.


Another place where reload() doesn't work as expected:

>>> import module
>>> a = module.MyClass()
>>> reload(module)

>>> b = module.MyClass()
>>> type(a) is type(b)
False


Objects left lying around from before the reload will keep references 
open to the way things were before the reload. This often leads to 
confusion when modules are edited, then reloaded. (Been there, done that.)
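The class-identity surprise is easy to reproduce with a throwaway module on disk (a sketch; in Python 3, reload() lives in importlib):

```python
import importlib
import os
import sys
import tempfile

# Write a throwaway module to disk so it can be imported and reloaded.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "reload_demo.py"), "w") as f:
    f.write("class MyClass(object):\n    pass\n")
sys.path.insert(0, tmpdir)

import reload_demo
a = reload_demo.MyClass()
importlib.reload(reload_demo)  # re-executes the module body
b = reload_demo.MyClass()

# The reload created a brand-new class object; the old instance still
# references the pre-reload class.
print(type(a) is type(b))  # -> False
```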



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Odd behavior with imp.reload and logging

2011-09-22 Thread Andrew Berg
On 2011.09.22 03:25 AM, Steven D'Aprano wrote:
> Objects left lying around from before the reload will keep references 
> open to the way things were before the reload. This often leads to 
> confusion when modules are edited, then reloaded. (Been there, done that.)
I'll keep that in mind. My module does have a class, but instances are
kept inside dictionaries, which are explicitly set to {} at the
beginning (can't use the update() method for dictionaries that don't
exist). Also, class instances get pickled after creation and unpickled
when the module is imported.

-- 
CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 6.0.2
-- 
http://mail.python.org/mailman/listinfo/python-list


python install on locked down windows box?

2011-09-22 Thread Chris Withers

Hi All,

Is there a way to install python on a locked down Windows desktop?
(ie: no compilers, no admin rights, etc)

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: python install on locked down windows box?

2011-09-22 Thread Glenn Hutchings
You could try Portable Python (http://www.portablepython.com).  No need to 
install anything!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Environment variables not visible from Python

2011-09-22 Thread Ben Finney
Steven D'Aprano  writes:

> I don't understand why some environment variables are not visible from 
> Python.

Not all variables are environment variables. Variables only become
environment variables if exported to the environment; the ‘export’
command is one way to do that.

-- 
 \   “As far as the laws of mathematics refer to reality, they are |
  `\not certain, and as far as they are certain, they do not refer |
_o__)  to reality.” —Albert Einstein, 1983 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Context manager with class methods

2011-09-22 Thread Gavin Panella
Hi,

On Python 2.6 and 3.1 the following code works fine:

class Foo(object):

@classmethod
def __enter__(cls):
print("__enter__")

@classmethod
def __exit__(cls, exc_type, exc_value, traceback):
print("__exit__")

with Foo: pass

However, in 2.7 and 3.2 I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: __exit__

Is this a regression or a deliberate change? Off the top of my head I
can't think that this pattern is particularly useful, but it seems
like something that ought to work.

Gavin.
-- 
http://mail.python.org/mailman/listinfo/python-list


httplib's HEAD request, and https protocol

2011-09-22 Thread Yaşar Arabacı
Hi,

I wrote a function to get through redirections and find the final page for a
given web page. But the following function hits the maximum recursion depth for
any https page I tried. Do you know what the problem might be?

import httplib
import urlparse

def getHeadResponse(url,response_cache = {}):
try:
return response_cache[url]
except KeyError:
url = urlparse.urlparse(url)
conn = httplib.HTTPConnection(url.netloc)
try:
conn.request("HEAD",url.path)
except:
# Anything can happen, this is SPARTA!
return None
response = conn.getresponse()
response_cache[url.geturl()] = response
return response

def getFinalUrl(url):
"Navigates through redirections to get final url."

response = getHeadResponse(url)
try:
if str(response.status).startswith("3"):
return getFinalUrl(response.getheader("location"))
except AttributeError:
pass
return url
-- 
http://yasar.serveblog.net/
-- 
http://mail.python.org/mailman/listinfo/python-list


Execute code after Shut Down command given --- How?

2011-09-22 Thread Virgil Stokes
I would like to execute some Python code (a popup message to be displayed)
when Windows Vista/7 is shut down. That is, this code should execute after
"Shut Down" is chosen from the "Shut Down Windows" popup, but before the
actual shutdown sequence starts.


How can I write Python code to accomplish this task?

--
http://mail.python.org/mailman/listinfo/python-list


Re: Environment variables not visible from Python

2011-09-22 Thread Steven D'Aprano
Ben Finney wrote:

> Steven D'Aprano  writes:
> 
>> I don't understand why some environment variables are not visible from
>> Python.
> 
> Not all variables are environment variables. Variables only become
> environment variables if exported to the environment; the ‘export’
> command is one way to do that.

I see. Thank you to everyone who answered.


-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Execute code after Shut Down command given --- How?

2011-09-22 Thread Steven D'Aprano
Virgil Stokes wrote:

> I would like to execute some Python code (popup message to be displayed)
> when
> Windows Vista/7 is shut down. That is, this code should execute after 
> "Shut Down" is given from the "Shut Down Windows" popup, but before the
> actual shut down sequence starts.
> 
> How to write Python code to accomplish this task?


Exactly the same way you would write it in any other language.

This is not a Python question. It is a Windows question: "How do I execute
code after the user calls Shut Down Windows, but before the shut down
sequence starts?" Find out how to do that, and then do it using Python
instead of another language.



-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Negative nearest integer?

2011-09-22 Thread joni
Have a simple question about the integer calculator in Python 2.6.5 and
also 2.7..

The console shows:

Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> 7.0/3  #first, to show floating-point calculation
2.3333333333333335
>>> -7.0/3
-2.3333333333333335
>>> -7/-3 #Rounding to nearest integer.
2
>>> 7/3
2
>>> 7/-3  #Now to the problem with integer rounding and a negative answer.
-3
>>> -7/3
-3
>>>

-3 is more wrong than -2. Negative numbers seem not to round to the
nearest integer, but to the integer UNDER the answer!! Or?

Why?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: httplib's HEAD request, and https protocol

2011-09-22 Thread Yaşar Arabacı
OK, never mind. Apparently there is such a thing as HTTPSConnection. I
thought httplib auto-handled https connections..
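The fix is then to pick the connection class from the URL's scheme. A sketch in Python 3 terms (http.client and urllib.parse are the renamed httplib and urlparse; connection_for is a hypothetical helper name):

```python
import http.client
import urllib.parse

def connection_for(url):
    """Return an unopened connection object matching the URL's scheme."""
    parts = urllib.parse.urlparse(url)
    if parts.scheme == "https":
        return http.client.HTTPSConnection(parts.netloc)
    return http.client.HTTPConnection(parts.netloc)

# Nothing touches the network until .request() is called, so this is
# safe to try offline.
print(type(connection_for("https://example.com/x")).__name__)  # -> HTTPSConnection
print(type(connection_for("http://example.com/x")).__name__)   # -> HTTPConnection
```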

On 22 September 2011 13:43, Yaşar Arabacı wrote:

> Hi,
>
> I wrote a function to get thorugh redirections and find a final page for a
> given web-page. But following function gives maximum recursion error for any
> https pages I tried. Do you know what might be the problem here?
>
> def getHeadResponse(url,response_cache = {}):
> try:
> return response_cache[url]
> except KeyError:
> url = urlparse.urlparse(url)
> conn = httplib.HTTPConnection(url.netloc)
> try:
> conn.request("HEAD",url.path)
> except:
> # Anything can happen, this is SPARTA!
> return None
> response = conn.getresponse()
> response_cache[url.geturl()] = response
> return response
>
> def getFinalUrl(url):
> "Navigates through redirections to get final url."
>
> response = getHeadResponse(url)
> try:
> if str(response.status).startswith("3"):
> return getFinalUrl(response.getheader("location"))
> except AttributeError:
> pass
> return url
> --
> http://yasar.serveblog.net/
>
>


-- 
http://yasar.serveblog.net/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Negative nearest integer?

2011-09-22 Thread Jussi Piitulainen
joni writes:

> Have a simple question about the integer calculator in Python 2.6.5 and
> also 2.7..
> 
> The consol showing:
> 
> Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
...
> >>> -7/3
> -3
> >>>
> 
> -3 is more wrong than -2. Negative numbers seem not to round to the
> nearest integer, but to the integer UNDER the answer!! Or?
> 
> Why?

It simply does not round to the nearest integer: it floors. This has
nicer mathematical properties. In particular, it allows the remainder
operation (n % m, written with the per-cent sign) to return a number
that differs from n by a multiple of m ("is congruent to n modulo m").
These two operations go together.
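The floor/remainder pairing can be checked directly (// is floor division in both Python 2 and 3; plain / on ints only floors in Python 2):

```python
# Floor division rounds toward negative infinity, and the sign of the
# remainder follows the divisor, so n == m*(n//m) + n%m always holds.
for n, m in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    q, r = divmod(n, m)
    assert n == m * q + r

print(-7 // 3)  # -> -3 (floored, not rounded to nearest)
print(-7 % 3)   # -> 2  (differs from -7 by a multiple of 3)
```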
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Environment variables not visible from Python

2011-09-22 Thread Thomas Rachel

Am 22.09.2011 12:16 schrieb Ben Finney:

--
  \   “As far as the laws of mathematics refer to reality, they are |
   `\not certain, and as far as they are certain, they do not refer |
_o__)  to reality.” —Albert Einstein, 1983 |
Ben Finney


So, he said what in 1983? Wow.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Context manager with class methods

2011-09-22 Thread Thomas Rachel

Am 22.09.2011 12:21 schrieb Gavin Panella:

Hi,

On Python 2.6 and 3.1 the following code works fine:

 class Foo(object):

 @classmethod
 def __enter__(cls):
 print("__enter__")

 @classmethod
 def __exit__(cls, exc_type, exc_value, traceback):
 print("__exit__")

 with Foo: pass

However, in 2.7 and 3.2 I get:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 AttributeError: __exit__


Same here.

But

with Foo(): pass

works, and that is more important and more logical.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Environment variables not visible from Python

2011-09-22 Thread Ben Finney
Thomas Rachel writes:

> Am 22.09.2011 12:16 schrieb Ben Finney:
> > --
> >   \   “As far as the laws of mathematics refer to reality, they are |
> >`\not certain, and as far as they are certain, they do not refer |
> > _o__)  to reality.” —Albert Einstein, 1983 |
> > Ben Finney
>
> So, he said what in 1983? Wow.

Or at least, in a work of his published in 1983: “Sidelights on
Relativity”. According to Wikiquote, anyway
<https://secure.wikimedia.org/wikiquote/en/wiki/Albert_Einstein>.

-- 
 \  “[Entrenched media corporations will] maintain the status quo, |
  `\   or die trying. Either is better than actually WORKING for a |
_o__)  living.” —ringsnake.livejournal.com, 2007-11-12 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Context manager with class methods

2011-09-22 Thread Mel
Gavin Panella wrote:

> Hi,
> 
> On Python 2.6 and 3.1 the following code works fine:
> 
> class Foo(object):
> 
> @classmethod
> def __enter__(cls):
> print("__enter__")
> 
> @classmethod
> def __exit__(cls, exc_type, exc_value, traceback):
> print("__exit__")
> 
> with Foo: pass
> 
> However, in 2.7 and 3.2 I get:
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: __exit__
> 
> Is this a regression or a deliberate change? Off the top of my head I
> can't think that this pattern is particularly useful, but it seems
> like something that ought to work.

This seems to work:



class MetaWith (type):
@classmethod
def __enter__(cls):
print("__enter__")

@classmethod
def __exit__(cls, exc_type, exc_value, traceback):
print("__exit__")

class With (object):
__metaclass__ = MetaWith

with With:
pass



Mel.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Context manager with class methods

2011-09-22 Thread Mel
Mel wrote:
> This seems to work:
> 
> 
> 
> class MetaWith (type):
> @classmethod
> def __enter__(cls):
> print("__enter__")
> 
> @classmethod
> def __exit__(cls, exc_type, exc_value, traceback):
> print("__exit__")
> 
> class With (object):
> __metaclass__ = MetaWith
> 
> with With:
> pass

It seems to work equally well without the `@classmethod`s
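For Python 3 the same idea needs the `metaclass` keyword instead of the `__metaclass__` class attribute. A sketch, with an events list added only to make the calls visible:

```python
events = []  # records the calls so the effect is observable

class MetaWith(type):
    # Special methods are looked up on the type of the object; for a
    # class, that type is its metaclass, so __enter__/__exit__ defined
    # here make the class itself usable in a with statement.
    def __enter__(cls):
        events.append("enter")
        return cls

    def __exit__(cls, exc_type, exc_value, traceback):
        events.append("exit")
        return False  # don't swallow exceptions

class With(metaclass=MetaWith):
    pass

with With:
    pass

print(events)  # -> ['enter', 'exit']
```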

Mel.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python install on locked down windows box?

2011-09-22 Thread Steven D'Aprano
Chris Withers wrote:

> Hi All,
> 
> Is there a way to install python on a locked down Windows desktop?
> (ie: no compilers, no admin rights, etc)

(1) Bribe or blackmail the fascist system administrator.

(2) Hack into the system with any of dozens of unpatched vulnerabilities
that will give you admin rights.

(3) Sneak into the office at 3 in the morning and replace the desktop with
an identical machine which you have admin rights to.

(4) Guess the admin password -- it's not hard, most fascist system
administrators can't remember words with more than four letters, so the
password is probably something like "passw" or, if he's being especially
cunning, "drows".

(5) "Accidentally" install Linux on the machine and use that instead.

(6) Take hostages.

(7) If all else fails, as an absolute last resort, simply run the Windows
installer as a regular, unprivileged user, after selecting the option for a
Non-Admin Install under Advanced Options first. You could also try the
ActivePython installer.

http://www.richarddooling.com/index.php/2006/03/14/python-on-xp-7-minutes-to-hello-world/
http://diveintopython.org/installing_python/windows.html



-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Negative nearest integer?

2011-09-22 Thread joni
On Sep 22, 1:44 pm, Jussi Piitulainen 
wrote:
> joni writes:
> > Have a simple question about the integer calculator in Python 2.6.5 and
> > also 2.7..
>
> > The consol showing:
>
> > Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
> ...
> > >>> -7/3
> > -3
>
> > -3 is more wrong than -2. Negative numbers seem not to round to the
> > nearest integer, but to the integer UNDER the answer!! Or?
>
> > Why?
>
> It simply does not round to the nearest integer. It floors. This has
> nicer mathematical properties. In particular, it allows the remainder
> (notated as "per cent") operation (n % m) to return a number that
> differs from n by a multiple of m ("is congruent to n modulo m").
> These two operations go together.

Thanks. I'll see if I can understand it. /Cheers
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static statements and thread safety

2011-09-22 Thread MRAB

On 22/09/2011 08:45, Eric Snow wrote:

A recent thread on the python-ideas list got me thinking about the
possibility of a static statement (akin to global and nonlocal).  I am
wondering if an implementation would have to address thread safety
concerns.

I would expect that static variables would work pretty much the same
way as default arguments, with a list of names on the code object and
a list of values on the function object.  And I would guess that the
values from the static variables would get updated on the function
object at the end of the call.  If multiple threads are executing the
function at the same time won't there be a problem with that
end-of-call update?


It's no different from using a global, except that it's not in the
global (module) namespace, but attached to a function object.
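That view can be pictured today without new syntax by storing the "static" as an attribute on the function object (a sketch; the attribute name is arbitrary):

```python
def accumulate(n):
    # The "static" lives on the function object: shared across calls
    # (and across threads), like a global scoped to this function.
    accumulate.total += n
    return accumulate.total

accumulate.total = 0  # initialise the shared state once

print(accumulate(1))   # -> 1
print(accumulate(10))  # -> 11
```

As with a global, concurrent callers would still need a lock around the read-modify-write.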


-eric


p.s. It probably shows that I haven't done a lot of thread-related
programming, so perhaps this is not a hard question.

--
http://mail.python.org/mailman/listinfo/python-list


ANN: Urwid 1.0.0 - Console UI Library

2011-09-22 Thread Ian Ward
Announcing Urwid 1.0.0
--

Urwid home page:
  http://excess.org/urwid/

Manual:
  http://excess.org/urwid/wiki/UrwidManual

Tarball:
  http://excess.org/urwid/urwid-1.0.0.tar.gz


About this release:
===

This is a major feature release for Urwid:

It's the first official release that has support for Python 3.

There's a new experimental Terminal widget so you can terminal while you
terminal or write a screen-clone.

There's a new example showing how to serve Urwid interfaces to many
users simultaneously over ssh with Twisted.

There are new classes to help with creating dynamic tree views of
anything you have that's tree-like.

There are new widgets for working with pop-ups so you can now have all
the menu bars, drop-downs and combo-boxes you can write.

The old requirement to sprinkle draw_screen() calls around your
callbacks is gone.  Urwid now updates the screen automatically after
everything else is done.

There's a new simple MainLoop method for catching updates from other
threads and processes.  No need to manually fumble with os.pipe() and
event loops.

And lots more...

Happy 1.0 Urwid!  It's been a great nearly-seven years since our first
release.  Huge thanks to everyone that's contributed code, docs, bug
reports and help on the mailing list and IRC.


New in this release:


  * New support for Python 3.2 from the same 2.x code base,
requires distribute instead of setuptools (by Kirk McDonald,
Wendell, Marien Zwart) everything except TwistedEventLoop and
GLibEventLoop is supported

  * New experimental Terminal widget with xterm emulation and
terminal.py example program (by aszlig)

  * Edit widget now supports a mask (for passwords), has a
insert_text_result() method for full-field validation and
normalizes input text to Unicode or bytes based on the caption
type used

  * New TreeWidget, TreeNode, ParentNode, TreeWalker
and TreeListBox classes for lazy expanding/collapsing tree
views factored out of browse.py example program, with new
treesample.py example program (by Rob Lanphier)

  * MainLoop now calls draw_screen() just before going idle, so extra
calls to draw_screen() in user code may now be removed

  * New MainLoop.watch_pipe() method for subprocess or threaded
communication with the process/thread updating the UI, and new
subproc.py example demonstrating its use

  * New PopUpLauncher and PopUpTarget widgets and MainLoop option
for creating pop-ups and drop-downs, and new pop_up.py example
program

  * New twisted_serve_ssh.py example (by Ali Afshar) that serves
multiple displays over ssh from the same application using
Twisted and the TwistedEventLoop

  * ListBox now includes a get_cursor_coords() method, allowing
nested ListBox widgets

  * Columns widget contents may now be marked to always be treated
as flow widgets for mixing flow and box widgets more easily

  * New lcd_display module with support for CF635 USB LCD panel and
lcd_cf635.py example program with menus, slider controls and a
custom font

  * Shared command_map instance is now stored as Widget._command_map
class attribute and may be overridden in subclasses or individual
widgets for more control over special keystrokes

  * Overlay widget parameters may now be adjusted after creation with
set_overlay_parameters() method

  * New WidgetPlaceholder widget useful for swapping widgets without
having to manipulate a container widget's contents

  * LineBox widgets may now include title text

  * ProgressBar text content and alignment may now be overridden

  * Use reactor.stop() in TwistedEventLoop and document that Twisted's
reactor is not designed to be stopped then restarted

  * curses_display now supports AttrSpec and external event loops
(Twisted or GLib) just like raw_display

  * raw_display and curses_display now support the IBMPC character
set (currently only used by Terminal widget)

  * Fix for a gpm_mev bug preventing user input when on the console

  * Fix for leaks of None objects in str_util extension

  * Fix for WidgetWrap and AttrMap not working with fixed widgets

  * Fix for a lock up when attempting to wrap text containing wide
characters into a single character column


About Urwid
===

Urwid is a console UI library for Python. It features fluid interface
resizing, Unicode support, multiple text layouts, simple attribute
markup, powerful scrolling list boxes and flexible interface design.

Urwid is released under the GNU LGPL.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python deadlock using subprocess.popen and communicate

2011-09-22 Thread Atherun
On Sep 22, 12:24 am, Thomas Rachel  wrote:
> Am 22.09.2011 05:42 schrieb Atherun:
>
> > I'm pretty sure thats the problem, this is a generic catch all
> > function for running subprocesses.  It can be anything to a simple
> > command to a complex command with a ton of output.  I'm looking for a
> > better solution to handle the case of running subprocesses that have
> > an undetermined amount of output.
>
> Just handle process.stdout/stderr by yourself - read it out until EOF
> and then wait() for the process.
>
> Thomas

That's what confuses me, though: the documentation says
process.stdout.read()/stderr.read() can deadlock, and apparently so can
communicate(). How do you read stdout/stderr yourself if the
documentation says using them can cause a deadlock?
-- 
http://mail.python.org/mailman/listinfo/python-list


Decision on python technologies

2011-09-22 Thread Navkirat Singh
Hi Guys,

I have been a Python developer for a bit now, and for the life of me I cannot
decide something. I am trying to develop a web-based application in Python,
and I am torn between using Python 2 or 3. All the good frameworks are still
on 2.x. Now, CherryPy, SQLAlchemy and Jinja2 support Python 3, but do I
really want to do all the boilerplate work again? I have a strong urge to use
Python 3 - call it my indecisiveness, but I somehow do not want to use 2.x,
though it has everything I need to build my app. Hence, I finally decided to
turn to the community to help me make this decision.

Please help.

Regards,
Nav
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANNC] pynguin-0.12 (fixes problems running on Windows)

2011-09-22 Thread Miki Tebeka
Thank you! My kids *love* it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Environment variables not visible from Python

2011-09-22 Thread Peter Pearson
On Thu, 22 Sep 2011 09:21:59 +0200, Thomas Rachel wrote:
[snip]
> $ python -c 'import os; print "\n".join(sorted("%s=%s" % (k,v) for k,v 
> in os.environ.iteritems()))' | diff -u - <(env|LANG=C sort)

[standing ovation]

-- 
To email me, substitute nowhere->spamcop, invalid->net.
-- 
http://mail.python.org/mailman/listinfo/python-list


random.randint() slow, esp in Python 3

2011-09-22 Thread Chris Angelico
The standard library function random.randint() seems to be quite slow
compared to random.random(), and worse in Python 3 than Python 2
(specifically that's 3.3a0 latest from Mercurial, and 2.6.6 that came
default on my Ubuntu install).

My test involves building a list of one million random integers
between 0 and ten million (for tinkering with sorting algorithms),
using a list comprehension:

import random
import time
sz=1000000
start=time.time()
a=[random.randint(0,sz*10-1) for i in range(sz)]
print("Time taken: ",time.time()-start)

The code works fine in either version of Python (although the display
looks a bit odd in Py2). But on my test system, it takes about 5
seconds to run in Py2, and about 10 seconds for Py3. (The obvious
optimization of breaking sz*10-1 out and storing it in a variable
improves both times, but leaves the dramatic difference.)

Replacing randint with random() (with top = sz*10 hoisted out):
a=[int(random.random()*top) for i in range(sz)]
cuts the times down to about 1.5 secs for Py2, and 1.8 secs for Py3.

I suspect that the version difference is (at least in part) due to the
merging of the 'int' and 'long' types in Py3. This is supported
experimentally by rerunning the second list comp but using long() in
place of int() - the time increases to about 1.7-1.8 secs, matching
Py3.

But this still doesn't explain why randint() is so much slower. In
theory, randint() should be doing basically the same thing that I've
done here (multiply by the top value, truncate the decimal), only it's
in C instead of Python - if anything, it should be faster than doing
it manually, not slower.

A minor point of curiosity, nothing more... but, I think, a fascinating one.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python deadlock using subprocess.popen and communicate

2011-09-22 Thread Nobody
On Thu, 22 Sep 2011 08:55:40 -0700, Atherun wrote:

>> Just handle process.stdout/stderr by yourself - read it out until EOF
>> and then wait() for the process.
> 
> Thats what confuses me though, the documentation says
> process.stdout.read()/stderr.read() can deadlock and apparently so can
> communicate, how do you read the stdout/stderr on yourself if its
> documented using them can cause a deadlock?

If you try to read/write two or more of stdin/stdout/stderr via the
"naive" approach, you run the risk of the child process writing more than
a pipe's worth of data to one stream (and thus blocking) while the
parent is performing a blocking read/write on another stream, resulting in
deadlock.

The .communicate() method avoids the deadlock by either:

1. On Unix, using non-blocking I/O and select(), or
2. On Windows, creating a separate thread for each stream.

Either way, the result is that it can always read/write whichever
streams are ready, so the child will never block indefinitely while
waiting for the parent.

If .communicate() is blocking indefinitely, it suggests that the child
process never terminates. There are many reasons why this might happen,
and most of them depend upon exactly what the child process is doing.

I suggest obtaining a copy of Process Explorer, and using it to
investigate the state of both processes (but especially the child) at the
point that the "deadlock" seems to occur.
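For reference, here is a sketch of the thread-per-stream approach that
.communicate() uses on Windows, applied to reading both pipes yourself
(the child command below is purely illustrative):

```python
import subprocess
import sys
import threading

def drain(pipe, chunks):
    # Read on a dedicated thread until EOF, so the child can never
    # block on a full pipe while we are busy with the other stream.
    for line in iter(pipe.readline, b""):
        chunks.append(line)
    pipe.close()

proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stderr.write('err')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out, err = [], []
readers = [threading.Thread(target=drain, args=(proc.stdout, out)),
           threading.Thread(target=drain, args=(proc.stderr, err))]
for t in readers:
    t.start()
for t in readers:
    t.join()
proc.wait()   # safe now: both pipes have been drained to EOF
```

Because each pipe has its own reader, neither side can wedge the other;
wait() only runs after both streams have hit EOF.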

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Decision on python technologies

2011-09-22 Thread Emile van Sebille

On 9/22/2011 9:00 AM Navkirat Singh said...

Hi Guys,

I have been a python developer for a bit now and for the life of me I am
not being able to decide something. I am trying to develop a web based
application in python. I am torn between using python 2 or 3. All the
good frameworks are still in 2.x. Now, cherrypy, sqlalchemy and jinja2
support python 3. But do I really want to do all the boilerplate work
again? I have this strong urge to use python 3 and call it my
indecisiveness, I am somehow not wanting to use 2.x, though it has
everything I need to build my app. Hence, I finally decided to turn to
the community for helping me make this decision.



I'd consider the development timeframe -- if it'll still be under 
development within the timeframe of migration and availability of 
desired tools then I'd start with 3 and focus on those parts that can be 
worked on.  If your development timeframe is measured in weeks instead 
of quarters or years, I'd just get it done with 2.


Emile



--
http://mail.python.org/mailman/listinfo/python-list


Re: random.randint() slow, esp in Python 3

2011-09-22 Thread Steven D'Aprano
Chris Angelico wrote:

> The standard library function random.randint() seems to be quite slow
> compared to random.random(), and worse in Python 3 than Python 2
[...]
> But this still doesn't explain why randint() is so much slower. In
> theory, randint() should be doing basically the same thing that I've
> done here (multiply by the top value, truncate the decimal), only it's
> in C instead of Python - if anything, it should be faster than doing
> it manually, not slower.

What makes you think it's in C? I don't have Python 3.3a, but in 3.2 the
random module is mostly Python. There is an import of _random, which
presumably is in C, but it doesn't have a randint method:

>>> import _random
>>> _random.Random.randint
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object '_random.Random' has no attribute 'randint'
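The pure-Python implementation is easy to confirm from a running
interpreter (a quick sketch):

```python
import inspect
import random

# randint is defined in pure Python (Lib/random.py) as a thin wrapper:
src = inspect.getsource(random.Random.randint)
print(src)
# The body delegates to randrange, which does all the argument checking
# in Python -- that overhead is where the extra time goes.
```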


I'm not seeing any significant difference in speed between 2.6 and 3.2:

[steve@sylar ~]$ python2.6 -m timeit -s "from random import
randint" "randint(0, 100)"
100000 loops, best of 3: 4.29 usec per loop

[steve@sylar ~]$ python3.2 -m timeit -s "from random import
randint" "randint(0, 100)"
100000 loops, best of 3: 4.98 usec per loop


(The times are quite variable: the above are the best of three attempts.)



-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python deadlock using subprocess.popen and communicate

2011-09-22 Thread Atherun
On Sep 22, 10:44 am, Nobody  wrote:
> On Thu, 22 Sep 2011 08:55:40 -0700, Atherun wrote:
> >> Just handle process.stdout/stderr by yourself - read it out until EOF
> >> and then wait() for the process.
>
> > Thats what confuses me though, the documentation says
> > process.stdout.read()/stderr.read() can deadlock and apparently so can
> > communicate, how do you read the stdout/stderr on yourself if its
> > documented using them can cause a deadlock?
>
> If you try to read/write two or more of stdin/stdout/stderr via the
> "naive" approach, you run the risk of the child process writing more than
> a pipe's worth of data to one stream (and thus blocking) while the
> parent is performing a blocking read/write on another stream, resulting in
> deadlock.
>
> The .communicate() method avoids the deadlock by either:
>
> 1. On Unix, using non-blocking I/O and select(), or
> 2. On Windows, creating a separate thread for each stream.
>
> Either way, the result is that it can always read/write whichever
> streams are ready, so the child will never block indefinitely while
> waiting for the parent.
>
> If .communicate() is blocking indefinitely, it suggests that the child
> process never terminates. There are many reasons why this might happen,
> and most of them depend upon exactly what the child process is doing.
>
> I suggest obtaining a copy of Process Explorer, and using it to
> investigate the state of both processes (but especially the child) at the
> point that the "deadlock" seems to occur.

In the one case I can easily reproduce, it's in a p4.exe call that I'm
making; both python and p4.exe have nearly the same stack for their
threads:
python:

ntoskrnl.exe!memset+0x64a
ntoskrnl.exe!KeWaitForMultipleObjects+0xd52
ntoskrnl.exe!KeWaitForMutexObject+0x19f
ntoskrnl.exe!__misaligned_access+0xba4
ntoskrnl.exe!__misaligned_access+0x1821
ntoskrnl.exe!KeWaitForMultipleObjects+0xf5d
ntoskrnl.exe!KeWaitForMutexObject+0x19f
ntoskrnl.exe!NtWaitForSingleObject+0xde
ntoskrnl.exe!KeSynchronizeExecution+0x3a43
wow64cpu.dll!TurboDispatchJumpAddressEnd+0x6c0
wow64cpu.dll!TurboDispatchJumpAddressEnd+0x4a8
wow64.dll!Wow64SystemServiceEx+0x1ce
wow64.dll!Wow64LdrpInitialize+0x429
ntdll.dll!RtlUniform+0x6e6
ntdll.dll!RtlCreateTagHeap+0xa7
ntdll.dll!LdrInitializeThunk+0xe
ntdll.dll!ZwWaitForSingleObject+0x15
kernel32.dll!WaitForSingleObjectEx+0x43
kernel32.dll!WaitForSingleObject+0x12
python26.dll!_Py_svnversion+0xcf8


p4:

ntoskrnl.exe!memset+0x64a
ntoskrnl.exe!KeWaitForMultipleObjects+0xd52
ntoskrnl.exe!KeWaitForSingleObject+0x19f
ntoskrnl.exe!_misaligned_access+0xba4
ntoskrnl.exe!_misaligned_access+0x1821
ntoskrnl.exe!KeWaitForMultipleObjects+0xf5d
ntoskrnl.exe!KeWaitForSingleObject+0x19f
ntoskrnl.exe!NtCreateFile+0x4c9
ntoskrnl.exe!NtWriteFile+0x7e3
ntoskrnl.exe!KeSynchronizeExecution+0x3a43
ntdll.dll!ZwWriteFile+0xa
KERNELBASE.dll!WriteFile+0x7b
kernel32.dll!WriteFile+0x36
p4.exe+0x42d4b
p4.exe+0x42ed8


To me it looks like they're both waiting on each other.

-- 
http://mail.python.org/mailman/listinfo/python-list


Python Mixins

2011-09-22 Thread Matt
I'm curious about what people's opinions are about using mixins in
Python. I really like, for example, the way that class based views
were implemented in Django 1.3 using mixins. It makes everything
extremely customizable and reusable. I think this is a very good
practice to follow, however, in Python mixins are achieved by using
(or perhaps misusing) inheritance and often multiple inheritance.

Inheritance is a very powerful tool, and multiple inheritance is an
even more powerful tool. These tools have their uses, but I feel like
"mixing in" functionality is not one of them. There are much different
reasons and uses for inheriting functionality from a parent and mixing
in functionality from elsewhere.

As a person, you learn certain things from your parents and other
things from your peers; all of those things put together become you.
Your peers are not your parents, that would not be possible. You
have completely different DNA and come from a completely different
place.

In terms of code, lets say we have the following classes:

class Animal
class Yamlafiable
class Cat(Animal, Yamlafiable)
class Dog(Animal, Yamlafiable)

I've got an Animal that does animal things, a Cat that does cat things
and a Dog that does dog things. I've also got a Yamlafiable class that
does something clever to generically convert an object into Yaml in
some way. Looking at these classes I can see that a Cat is an Animal,
a Dog is an Animal, a Dog is not a Cat, a Cat is not a Dog, a Dog is a
Yamlafiable? and a Cat is a Yamlafiable? Is that really true? If my
objects are categorized correctly, in the correct inheritance
hierarchy shouldn't that make more sense? Cats and Dogs aren't
Yamlafiable, that doesn't define what they are, rather it defines
something that they can do because of things that they picked up from
their friend the Yamlafile.

This is just a ridiculous example, but I think it is valid to say that
these things shouldn't be limited to inherit functionality only from
their parents, that they can pick other things up along the way as
well. Which is easy to do, right?

Dog.something_new = something_new

(I wish my stupid dog would learn that easily)

Ideally, what I would like to see is something like Ruby's mixins. It
seems to me like Ruby developed this out of necessity due to the fact
that it does not support multiple inheritance, however I think the
implementation is much more pure than inheriting from things that
aren't your parents. (although having only a single parent doesn't
make much sense either, I believe there are very few actual documented
cases of that happening). Here is a Ruby snippet:

module ReusableStuff
  def one_thing
    "something cool"
  end
end

class MyClass < MyParent
  include ReusableStuff
end

x = MyClass.new
x.one_thing == "something cool"
MyClass.superclass == Object

So I'm inheriting from MyParent and mixing in additional functionality
from ReusableStuff without affecting who my Parents are. This, to me,
makes much more sense than using inheritance to just grab a piece of
functionality that I want to reuse. I wrote a class decorator for
Python that does something similar (https://gist.github.com/1233738)
here is a snippet from that:

class MyMixin(object):
    def one_thing(self):
        return "something cool"

@mixin(MyMixin)
class MyClass(object):
    pass

x = MyClass()
x.one_thing() == 'something cool'
x.__class__.__bases__ ==  (object,)

To me, this is much more concise. By looking at this I can tell what
MyClass IS, who its parents are and what else it can do. I'm very
interested to know if there are others who feel as dirty as I do when
using inheritance for mixins or if there are other things that Python
developers are doing to mix in functionality without using inheritance
or if the general populace of the Python community disagrees with me
and thinks that this is a perfectly valid use of inheritance.
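For reference, a minimal decorator along these lines might look like
the following (a sketch of the idea only; the actual gist linked above
may differ in details):

```python
def mixin(*mixins):
    """Class decorator that copies a mixin's attributes onto the
    decorated class without touching its bases."""
    def decorator(cls):
        for m in mixins:
            for name, value in vars(m).items():
                if not name.startswith("__"):   # leave identity dunders alone
                    setattr(cls, name, value)
        return cls
    return decorator

class MyMixin(object):
    def one_thing(self):
        return "something cool"

@mixin(MyMixin)
class MyClass(object):
    pass

x = MyClass()
print(x.one_thing())        # "something cool"
print(MyClass.__bases__)    # (object,) -- parents are unchanged
```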

I look forward to hearing back.

Thanks,
Matthew J Morrison
www.mattjmorrison.com


P.S. - This is a good article about not using inheritance as a code
reuse tool: 
http://littletutorials.com/2008/06/23/inheritance-not-for-code-reuse/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Mixins

2011-09-22 Thread rantingrick
On Sep 22, 4:14 pm, Matt  wrote:

> (although having only a single parent doesn't
> make much sense either, I believe there are very few actual documented
> cases of that happening).

There is nothing wrong with an object having only one parent. Most
times the reasons are for maintainability. I might have a TextEditor
that exposes all the generic functionality that are ubiquitous to text
editors and then a FancyTextEditor(TextEditor) that exposes
functionality that is unique to a confined set of text editing uses. A
silly example, but proves the point. Do not judge an object by the
number of progeny.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Mixins

2011-09-22 Thread Thomas Jollans
On 2011-09-22 23:14, Matt wrote:
> I'm curious about what people's opinions are about using mixins in
> Python. I really like, for example, the way that class based views
> were implemented in Django 1.3 using mixins. It makes everything
> extremely customizable and reusable. I think this is a very good
> practice to follow, however, in Python mixins are achieved by using
> (or perhaps misusing) inheritance and often multiple inheritance.
> 
> Inheritance is a very powerful tool, and multiple inheritance is an
> even more powerful tool. These tools have their uses, but I feel like
> "mixing in" functionality is not one of them. There are much different
> reasons and uses for inheriting functionality from a parent and mixing
> in functionality from elsewhere.
> 
> As a person, you learn certain things from your parents and other
> things from your peers; all of those things put together become you.
> Your peers are not your parents, that would not be possible. You
> have completely different DNA and come from a completely different
> place.
> 
> In terms of code, lets say we have the following classes:
> 
> class Animal
> class Yamlafiable
> class Cat(Animal, Yamlafiable)
> class Dog(Animal, Yamlafiable)
> 

I think this is an excellent use of multiple inheritance. One could also
have a construction like this:

class Dog (Animal)
class YamlafialbleDog (Dog, Yamlafiable)

... which you may be more comfortable with. In your example above, yes, a
Dog object is a Yamlafiable object. If you need a Yamlafiable object,
you can use a Cat or Dog. That's what inheritance is about.

In Python or Ruby, this way of doing things is not all that different
from the one you present below. Here, it doesn't really matter. In
strictly typed languages, it makes a world of difference. What if you
don't care what kind of object you're dealing with, as long as it
supports the interface a certain mixin provides? In Python, true, duck
typing will do the trick. In C++, for example, where you could use the C
preprocessor to do something like Ruby mixins, multiple inheritance is a
lot more useful for mixing in something that has a public interface.

The Vala language, and, I suppose the GObject type system, actually
allows interfaces to act as mixins. This is really a more formalised way
of doing just this: using multiple inheritance (which, beyond
interfaces, Vala does not support) to mix in functionality.

Oh and your thing looks kind of neat.

Thomas

> I've got an Animal that does animal things, a Cat that does cat things
> and a Dog that does dog things. I've also got a Yamlafiable class that
> does something clever to generically convert an object into Yaml in
> some way. Looking at these classes I can see that a Cat is an Animal,
> a Dog is an Animal, a Dog is not a Cat, a Cat is not a Dog, a Dog is a
> Yamlafiable? and a Cat is a Yamlafiable? Is that really true? If my
> objects are categorized correctly, in the correct inheritance
> hierarchy shouldn't that make more sense? Cats and Dogs aren't
> Yamlafiable, that doesn't define what they are, rather it defines
> something that they can do because of things that they picked up from
> their friend the Yamlafile.
> 
> This is just a ridiculous example, but I think it is valid to say that
> these things shouldn't be limited to inherit functionality only from
> their parents, that they can pick other things up along the way as
> well. Which is easy to do, right?
> 
> Dog.something_new = something_new
> 
> (I wish my stupid dog would learn that easily)
> 
> Ideally, what I would like to see is something like Ruby's mixins. It
> seems to me like Ruby developed this out of necessity due to the fact
> that it does not support multiple inheritance, however I think the
> implementation is much more pure than inheriting from things that
> aren't your parents. (although having only a single parent doesn't
> make much sense either, I believe there are very few actual documented
> cases of that happening). Here is a Ruby snippet:
> 
> module ReusableStuff
>   def one_thing
>     "something cool"
>   end
> end
>
> class MyClass < MyParent
>   include ReusableStuff
> end
> 
> x = MyClass.new
> x.one_thing == "something cool"
> MyClass.superclass == Object
> 
> So I'm inheriting from MyParent and mixing in additional functionality
> from ReusableStuff without affecting who my Parents are. This, to me,
> makes much more sense than using inheritance to just grab a piece of
> functionality that I want to reuse. I wrote a class decorator for
> Python that does something similar (https://gist.github.com/1233738)
> here is a snippet from that:
> 
> class MyMixin(object):
>     def one_thing(self):
>         return "something cool"
> 
> @mixin(MyMixin)
> class MyClass(object):
>     pass
> 
> x = MyClass()
> x.one_thing() == 'something cool'
> x.__class__.__bases__ ==  (object,)
> 
> To me, this is much more concise. By looking at this I can tell what
> MyClass IS, who it's parents are 

Re: Context manager with class methods

2011-09-22 Thread Terry Reedy

On 9/22/2011 6:21 AM, Gavin Panella wrote:


On Python 2.6 and 3.1 the following code works fine:
 class Foo(object):
 @classmethod
 def __enter__(cls):
 print("__enter__")
 @classmethod
 def __exit__(cls, exc_type, exc_value, traceback):
 print("__exit__")

 with Foo: pass


This could be regarded as a bug, see below.


However, in 2.7 and 3.2 I get:

 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
 AttributeError: __exit__


type(Foo) == type and type has no such attribute.

Unless otherwise specified, 'method' typically means 'instance method'. 
In particular, the '__xxx__' special methods are (all?) (intended to be) 
instance methods, which is to say, functions that are attributes of an 
object's class. So it is normal to look for special methods on the class 
(and superclasses) of an object rather than starting with the object 
itself. For instance, when executing 'a+b', the interpreter never looks 
for __add__ as an attribute of a itself (in a.__dict__), but starts
the search with type(a).__add__.


Is this a regression or a deliberate change? Off the top of my head I
can't think that this pattern is particularly useful, but it seems
like something that ought to work.


I suspect there was a deliberate change to correct an anomaly, though 
this might have been done as part of some other change. As Thomas noted, 
*instances* of Foo work and as Mei noted, making Foo an instance of a 
(meta)class with the needed methods also works.
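A minimal sketch of that metaclass approach (Python 3 syntax):

```python
# Special methods are looked up on the type, not the instance, so to
# use a *class* itself as a context manager the methods must live on
# its metaclass.
class MetaFoo(type):
    def __enter__(cls):
        print("__enter__")
    def __exit__(cls, exc_type, exc_value, traceback):
        print("__exit__")
        return False

class Foo(metaclass=MetaFoo):
    pass

with Foo:      # works: type(Foo) is MetaFoo, which has __enter__/__exit__
    pass
```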


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Negativ nearest interger?

2011-09-22 Thread Terry Reedy

On 9/22/2011 7:44 AM, Jussi Piitulainen wrote:

joni writes:



Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)

...

-7/3

-3


In late Python 2 you *can* and in Python 3 you *must* use // rather than 
/ to get an int result.



-3 is more wrong than -2. Negative numbers seem not to round to the
nearest integer, but to the integer UNDER the answer!! Or?
Why?


It simply does not round to the nearest integer. It floors. This has
nicer mathematical properties. In particular, it allows the remainder
(notated as "per cent") operation (n % m) to return a number that
differs from n by a multiple of m ("is congruent to n modulo m").
These two operations go together.


The Python doc set has a FAQ collection. I recommend you read the 
questions for those you might be interested in. In the Programming FAQ:


"Why does -22 // 10 return -3?

It’s primarily driven by the desire that i % j have the same sign as j. 
If you want that, and also want:


i == (i // j) * j + (i % j)
then integer division has to return the floor. C also requires that 
identity to hold, and then compilers that truncate i // j need to make i 
% j have the same sign as i.


There are few real use cases for i % j when j is negative. When j is 
positive, there are many, and in virtually all of them it’s more useful 
for i % j to be >= 0. If the clock says 10 now, what did it say 200 
hours ago? -190 % 12 == 2 is useful; -190 % 12 == -10 is a bug waiting 
to bite."
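The identity and the sign rule quoted above are easy to check directly:

```python
# Floor division and modulo are defined together so that the identity
# i == (i // j) * j + (i % j) always holds, and i % j takes the sign
# of the divisor j.
for i, j in [(7, 3), (-7, 3), (-22, 10), (-190, 12)]:
    q, r = i // j, i % j
    assert i == q * j + r        # the identity from the FAQ
    assert 0 <= r < j            # remainder has the divisor's sign (j > 0 here)

print(-7 // 3)     # -3: floored, not truncated toward zero
print(-22 // 10)   # -3: the FAQ's example
print(-190 % 12)   # 2: "what did the clock say 200 hours ago?"
```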






--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: random.randint() slow, esp in Python 3

2011-09-22 Thread Chris Angelico
On Fri, Sep 23, 2011 at 4:14 AM, Steven D'Aprano
 wrote:
> What makes you think it's in C? I don't have Python 3.3a, but in 3.2 the
> random module is mostly Python. There is an import of _random, which
> presumably is in C, but it doesn't have a randint method:

True. It seems to be defined in cpython/lib/random.py as a reference
to randrange, which does a pile of error checking and calls
_randbelow... which does a whole lot of work as well as calling
random(). Guess I should have checked the code before asking!

There's probably good reasons for using randint(), but if you just
want a pile of more-or-less random integers, int(random.random()*top)
is the best option.
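A quick sanity check that the faster form covers the same half-open
range as randrange(top) (a sketch; the seed is fixed only to make the
demo repeatable):

```python
import random

top = 10
random.seed(12345)
samples = [int(random.random() * top) for _ in range(100000)]
# Yields integers in [0, top), exactly like randrange(top), just
# without randrange's per-call argument checking.
print(min(samples), max(samples))
```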

> I'm not seeing any significant difference in speed between 2.6 and 3.2:
>
> [steve@sylar ~]$ python2.6 -m timeit -s "from random import
> randint" "randint(0, 100)"
> 10 loops, best of 3: 4.29 usec per loop
>
> [steve@sylar ~]$ python3.2 -m timeit -s "from random import
> randint" "randint(0, 100)"
> 10 loops, best of 3: 4.98 usec per loop

That might be getting lost in the noise. Try the list comp that I had
above and see if you can see a difference - or anything else that
calls randint that many times.

Performance-testing with a heapsort (and by the way, it's
_ridiculously_ slower implementing it in Python instead of just
calling a.sort(), but we all knew that already!) shows a similar
difference in performance. As far as I know, everything's identical
between the two (I use // division so there's no floating point
getting in the way, for instance), but what takes 90 seconds on Py2
takes 150 seconds on Py3. As with the randint test, I switched int()
to long() to test Py2, and that slowed it down a little, but still far
far below the Py3 time.

I've pasted the code I'm using here: http://pastebin.com/eQPHQhD0

Where's the dramatic performance difference? Or doesn't it matter,
since anything involving this sort of operation needs to be done in C
anyway?

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyEval_EvalCodeEx return value

2011-09-22 Thread Mark Hammond

On 20/09/2011 8:34 PM, Mateusz Loskot wrote:

Hi,

I'm trying to dig out details about what exactly is the return
value the of PyEval_EvalCodeEx function in Python 3.x
The documentation is sparse, unfortunately.

Perhaps I'm looking at wrong function.
My aim is simple, I need to execute Python code using Python interpreter
embedded in my C++ application.
The Python code is a simple script that always returns single value.
For example:

#! /usr/bin/env python
def foo(a, b):
return a + b
f = foo(2, 3)

But, f can be of different type for different script: one returns
numeric value, another returns a sequence, so the type is not
possible to be determined in advance.

I know how to capture Python stdout/stderr.

I also know how to access the "f" attribute using
PyObject_GetAttrString and then I can convert "f" value to C++ type
depending on PyObject type.

However, I guess there shall be a way to access "f" value
directly from PyEval_EvalCode return object:

PyObject* evalRet = ::PyEval_EvalCode(...);

But, I can't find any details what the "evalRet" actually is.


Eval is to eval an expression.  If you simply eval the expression "f" in 
the context of the module you should get the result returned.  Obviously 
though it is designed to eval more complex expressions and in your 
specific example, doing the getattr thing will also work fine.
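In Python terms, what PyEval_EvalCode does for module code mirrors
exec(), and the expression route Mark describes mirrors eval() (a
sketch of the same flow, at the Python level):

```python
# At the C level, PyEval_EvalCode on module code behaves like exec():
# it runs the code, returns None, and leaves the bindings in the
# namespace dict -- which is why "evalRet" itself is not the value of f.
source = """
def foo(a, b):
    return a + b

f = foo(2, 3)
"""
ns = {}
result = exec(compile(source, "<script>", "exec"), ns)
print(result)          # None: module code evaluates to None
print(ns["f"])         # 5: the binding is available via the namespace
print(eval("f", ns))   # 5: or eval the expression "f" afterwards
```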


Mark
--
http://mail.python.org/mailman/listinfo/python-list


Python 2.5 zlib trouble

2011-09-22 Thread Jesramz
Hello,

I am trying to deploy an app on google app engine using bottle, a
micro-framework, similar to flask. I am running on ubuntu which comes
with python 2.7 installed but GAE needs version 2.5, so I installed
2.5. I then realized I didn't use make altinstall so I may have a
default version problem now. But my real problem is that when I try to
use the gae server to test locally I get the following error:

Traceback (most recent call last):
  File "/opt/google/appengine/dev_appserver.py", line 77, in 
run_file(__file__, globals())
  File "/opt/google/appengine/dev_appserver.py", line 73, in run_file
execfile(script_path, globals_)
  File "/opt/google/appengine/google/appengine/tools/
dev_appserver_main.py", line 156, in 
from google.appengine.tools import dev_appserver
  File "/opt/google/appengine/google/appengine/tools/
dev_appserver.py", line 94, in 
import zlib
ImportError: No module named zlib



Can you help me with this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python install on locked down windows box?

2011-09-22 Thread Matt Joiner
5 is the best solution, followed by 2 and 3.
On Sep 22, 2011 11:02 PM, "Steven D'Aprano" <
steve+comp.lang.pyt...@pearwood.info> wrote:
> Chris Withers wrote:
>
>> Hi All,
>>
>> Is there a way to install python on a locked down Windows desktop?
>> (ie: no compilers, no admin rights, etc)
>
> (1) Bribe or blackmail the fascist system administrator.
>
> (2) Hack into the system with any of dozens of unpatched vulnerabilities
> that will give you admin rights.
>
> (3) Sneak into the office at 3 in the morning and replace the desktop with
> an identical machine which you have admin rights to.
>
> (4) Guess the admin password -- it's not hard, most fascist system
> administrators can't remember words with more than four letters, so the
> password is probably something like "passw" or, if he's being especially
> cunning, "drows".
>
> (5) "Accidentally" install Linux on the machine and use that instead.
>
> (6) Take hostages.
>
> (7) If all else fails, as an absolute last resort, simply run the Windows
> installer as a regular, unprivileged user, after selecting the option for
a
> Non-Admin Install under Advanced Options first. You could also try the
> ActivePython installer.
>
>
http://www.richarddooling.com/index.php/2006/03/14/python-on-xp-7-minutes-to-hello-world/
> http://diveintopython.org/installing_python/windows.html
>
>
>
> --
> Steven
>
> --
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python deadlock using subprocess.popen and communicate

2011-09-22 Thread Nobody
On Thu, 22 Sep 2011 11:19:28 -0700, Atherun wrote:

>> I suggest obtaining a copy of Process Explorer, and using it to
>> investigate the state of both processes (but especially the child) at the
>> point that the "deadlock" seems to occur.
> 
> In the one case I can easily reproduce, its in a p4.exe call that I'm
> making both python and p4.exe have nearly the same stack for their
> threads:
> python:

> kernel32.dll!WaitForSingleObject+0x12
> python26.dll!_Py_svnversion+0xcf8

I haven't a clue how this happens. _Py_svnversion just returns a string:

const char *
_Py_svnversion(void)
{
    /* the following string can be modified by subwcrev.exe */
    static const char svnversion[] = SVNVERSION;
    if (svnversion[0] != '$')
        return svnversion; /* it was interpolated, or passed on command line */
    return "Unversioned directory";
}

It doesn't even 
-- 
http://mail.python.org/mailman/listinfo/python-list