Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Sun, Feb 22, 2015 at 6:52 PM, Marko Rauhamaa  wrote:
> What I mean, though, is that you shouldn't think you need to create
> object destructors where you routinely set all members to None.

Sure, not *routinely*. It'd be a special case where it's not
specifically a destructor, and its job is to break a reference cycle.
For instance, you might have a close() method that clears out a bunch
of references, which will then allow everything to get cleaned up
promptly. Or (a very common case for me) a callback saying "remote end
is gone" (eg on a socket) might wipe out the callbacks, thus removing
their refloops.
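A minimal sketch of that pattern (class and attribute names are illustrative, not from any real library): a close() that drops the callback references, so CPython's refcounter can reclaim everything promptly, no cycle collector needed:

```python
import weakref

class Connection:
    """Holds callbacks that close over self, forming a reference cycle."""
    def __init__(self):
        # The closure references self, so: self -> callbacks -> closure -> self
        self.callbacks = [lambda: print("data on", self)]

    def close(self):
        # Break the cycle explicitly; plain refcounting can now free us.
        self.callbacks.clear()

conn = Connection()
probe = weakref.ref(conn)
conn.close()
del conn
assert probe() is None  # reclaimed promptly by refcounting (CPython)
```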

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Chris Angelico :

> Or (a very common case for me) a callback saying "remote end is gone"
> (eg on a socket) might wipe out the callbacks, thus removing their
> refloops.

Refloops are not to be worried about, let alone removed.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Sun, Feb 22, 2015 at 7:34 PM, Marko Rauhamaa  wrote:
> Chris Angelico :
>
>> Or (a very common case for me) a callback saying "remote end is gone"
>> (eg on a socket) might wipe out the callbacks, thus removing their
>> refloops.
>
> Refloops are not to be worried about, let alone removed.

Why? They force the use of the much slower cycle-detecting GC, rather
than the quick and efficient CPython refcounter. I don't know how
other Pythons work, but mark-and-sweep has its own costs, and I don't
know of any system that's both prompt and able to detect refloops.
Helping it along means your program doesn't waste memory. Why such a
blanket statement?
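The difference is easy to demonstrate in CPython (Node is an illustrative class): with the cycle collector ruled out, a refloop survives del and is only reclaimed when the cycle detector is run explicitly:

```python
import gc

class Node:
    pass

gc.disable()                # rule out the automatic cycle collector
a, b = Node(), Node()
a.peer, b.peer = b, a       # reference cycle
del a, b                    # refcounts never reach zero...
found = gc.collect()        # ...until the cycle detector runs
assert found >= 2           # the two Nodes (plus their __dict__s)
gc.enable()
```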

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to design a search engine in Python?

2015-02-22 Thread Laura Creighton
In a message of Sat, 21 Feb 2015 22:07:30 -0800, subhabangal...@gmail.com writes:
>Dear Sir,
>
>Thank you for your kind suggestion. Let me traverse one by one. 
>My special feature is generally Semantic Search, but I am trying to build
>a search engine first and then go for semantic I feel that would give me a 
>solid background to work around the problem. 
>
>Regards,
>Subhabrata. 

You may find the API docs surrounding rdelbru.github.io/SIREn/
of interest then.

Laura Creighton
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Chris Angelico :

> On Sun, Feb 22, 2015 at 7:34 PM, Marko Rauhamaa  wrote:
>> Refloops are not to be worried about, let alone removed.
>
> Why?

Because the whole point of GC-languages is that you should stop worrying
about memory. Trying to mastermind and micromanage GC in the application
is, pardon my French, an antipattern.

> They force the use of the much slower cycle-detecting GC, rather than
> the quick and efficient CPython refcounter.

Java's Hotspot doesn't bother with refcounters but is much faster than
Python. CPython's refcounters are a historical accident that a Python
application developer shouldn't even be aware of.

> I don't know how other Pythons work, but mark-and-sweep has its own
> costs, and I don't know of any system that's both prompt and able to
> detect refloops.

It's exceedingly difficult (and pointless) to detect cycles in your
object structures. Python is going to have to do a GC occasionally
anyway. Yes, your worst-case response times are going to suffer, but
that's the cost of doing business.

> Helping it along means your program doesn't waste memory. Why such a
> blanket statement?

Because worrying Python programmers with evil spirits (reference loops)
leads to awkward coding practices and takes away one of the main
advantages of Python as a high-level programming language.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Gregory Ewing

Frank Millman wrote:
"In order to inform users that certain bits of state have changed, I require 
them to register a callback with my code."


This sounds to me like a pub/sub scenario. When a 'listener' object comes 
into existence it is passed a reference to a 'controller' object that holds 
state. It wants to be informed when the state changes, so it registers a 
callback function with the controller.


Perhaps instead of registering a callback function, you
should be registering the listener object together with
a method name.

You can then keep a weak reference to the listener object,
since if it is no longer referenced elsewhere, it presumably
no longer needs to be notified of anything.
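A sketch of Greg's suggestion (all names illustrative). Registering a weak reference to the listener plus a method name also sidesteps the trap where a weak reference to the bound method itself would die immediately:

```python
import weakref

class Controller:
    def __init__(self):
        self._listeners = []            # (weakref-to-listener, method name)

    def register(self, listener, method_name):
        self._listeners.append((weakref.ref(listener), method_name))

    def notify(self, state):
        live = []
        for ref, name in self._listeners:
            listener = ref()
            if listener is not None:    # still referenced elsewhere?
                getattr(listener, name)(state)
                live.append((ref, name))
        self._listeners = live          # drop dead entries as we go

class Listener:
    def __init__(self):
        self.seen = []
    def on_change(self, state):
        self.seen.append(state)

ctrl = Controller()
lst = Listener()
ctrl.register(lst, "on_change")
ctrl.notify("hello")
assert lst.seen == ["hello"]
del lst
ctrl.notify("ignored")                  # dead listener silently skipped
assert ctrl._listeners == []
```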

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Sun, Feb 22, 2015 at 8:14 PM, Marko Rauhamaa  wrote:
>> Helping it along means your program doesn't waste memory. Why such a
>> blanket statement?
>
> Because worrying Python programmers with evil spirits (reference loops)
> leads to awkward coding practices and takes away one of the main
> advantages of Python as a high-level programming language.

Right, and I suppose that, by extension, we should assume that the
Python interpreter can optimize this?

def fib(x):
    if x < 2: return x
    return fib(x-2) + fib(x-1)

Just because a computer can, in theory, recognize that this is a pure
function, doesn't mean that we can and should depend on that. If you
want this to be optimized, you either fix your algorithm or explicitly
memoize the function - you don't assume that Python can do it for you.
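The explicit-memoization route is one stdlib decorator away; the point being that the programmer, not the interpreter, opts in:

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # explicit memoization, not interpreter magic
def fib(x):
    if x < 2:
        return x
    return fib(x - 2) + fib(x - 1)

assert fib(10) == 55
assert fib(100) == 354224848179261915075   # instant with the cache
```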

Even when you write in a high level language, you need to understand
how computers work.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Steven D'Aprano
Chris Angelico wrote:

> On Sun, Feb 22, 2015 at 3:38 PM, Steven D'Aprano
>  wrote:
>> But you are using it. You might not be using it by name, but you are
>> using it via the callback function. What did you expect, that Python
>> should read your mind and somehow intuit that you still care about this
>> socket listener, but not some other socket listener that you are done
>> with?
>>
>> You don't have to bind the listener to a name. Any reference will do. You
>> can dump it in a bucket:
>>
>> bucket_of_stuff = []
>> bucket_of_stuff.append(some_function(a, b, c))
>> bucket_of_stuff.append(make_web_server())
>> bucket_of_stuff.append(socket(23, on_accept=client_connected))
> 
> Sure, and whether it's a name or a list-element reference doesn't
> matter: it seems wrong to have to stash a thing in a bucket in order
> to keep its callbacks alive. I expect the callbacks _themselves_ to
> keep it alive. 

Why? Do you expect that the Python garbage collector special cases callbacks
to keep them alive even when there are no references to them? How would it
distinguish a callback from some other function?

If I stuff a function in a list:

   [len]

would you expect the presence of the function to keep the list alive when
there are no references to the list?

Apart from "But I really, really, REALLY want a magical pony that feeds
itself and never poops and just disappears when I don't want it around!"
wishful-thinking, which I *totally* get, I don't see how you think this is
even possible. Maybe I'm missing something, but it seems to me that what
you're wishing for is impossible.

Perhaps if we had a special "callback" type which was treated as a special
case by the garbage collector. But that opens up a big can of worms: how do
you close/delete objects which are kept alive by the presence of callbacks
if you don't have a reference to either the object or the callback?



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Standard

2015-02-22 Thread Phillip Fleming
In my opinion, Python will not take off like C/C++ if there is no ANSI
standard.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: What behavior would you expect?

2015-02-22 Thread Jason Friedman
> If you're going to call listdir, you probably want to use fnmatch directly.

fnmatch seems to be silent on non-existent directories:

python -c 'import fnmatch; fnmatch.fnmatch("/no/such/path", "*")'
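For the record, fnmatch never consults the filesystem; it is pure string matching, which is why it is "silent" about paths that don't exist:

```python
import fnmatch
import os

# fnmatch only compares the name string against the pattern,
# so a path that does not exist still matches.
assert fnmatch.fnmatch("/no/such/path", "*") is True

# To actually check the filesystem, you need os.path/os.listdir as well.
assert not os.path.exists("/no/such/path")
```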
-- 
https://mail.python.org/mailman/listinfo/python-list


About GSOC 2015

2015-02-22 Thread Nadeesh Dilanga
Hi,

I'm a Computer Science undergraduate student who would like to participate in
GSoC this year.
Do you have any projects you are willing to publish for GSoC 2015?
I am more familiar with Python.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Sun, Feb 22, 2015 at 9:32 PM, Steven D'Aprano
 wrote:
> Why? Do you expect that the Python garbage collector special cases callbacks
> to keep them alive even when there are no references to them? How would it
> distinguish a callback from some other function?

No no no. It's the other way around. _Something_ has to be doing those
callbacks, and it's that _something_ that should be keeping them
alive. The fact that it's a registered callback should itself *be* a
reference (and not a weak reference), and should keep it alive.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Laura Creighton
somebody, I got confused with the indent level wrote:

>> They force the use of the much slower cycle-detecting GC, rather than
>> the quick and efficient CPython refcounter.

Somebody has misunderstood something here.  When it comes to efficient
garbage collectors, refcounting is a turtle.  The CPython one is no
exception.  Ref counting, however, is fairly easy to write.  But when
the PyPy project first replaced its refcounting gc with its very first
and therefore not very efficient at all nursery gc ... that was the very
first time when a bunch of python programs ran faster on pypy than on
CPython.  This was before pypy had a JIT.

And today the pypy channel is full of people who want to link their
C extension into some Python code running on PyPy, and who find that
their C extension slows things down.  There are lots of reasons for
this, but one of the most common problems is 'this C extension is
faking refcounting.  All of this is wasted effort for PyPy and
usually makes the thing unJITable as well.'  Many of these people
rewrite their C extension as pure Python and find that then, with
PyPy, they get the speed improvements they were looking for.

So: two points.

One reason you might not want to rely on ref counting, because you expect
your code to run under PyPy one day.

and

If you are interested in manipulating garbage collection -- especially if
this is for your own pleasure and enjoyment, a worthy goal in my books --
you could do a lot worse than write your own gc in RPython for PyPy.
The gc code is not mixed in with all of the other VM stuff, so a gc is
small, and you don't have to worry about clobbering anything else while
you are working.  So it is great for experimenting, which was the whole
point.  Hacking gcs is fun! :)

Laura

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pypandoc and restructured text

2015-02-22 Thread Fabien

On 22.02.2015 00:20, alb wrote:

I finally upgraded! And I'm currently trying out xfce!
Thanks again for the suggestions.

Al

p.s.: now pandoc works as expected.


I don't want to sound insistent, but as a Linux user I personally 
recommend not using "apt" to install and use Python packages. Installed 
packages will sooner or later become outdated, or they will become a burden 
because you have less control over what you have installed and why.


I really like virtualenv for its help in keeping things (system / 
Python version / fooling around with new packages) separated. It's also 
the recommendation of the "Python Packaging Authority":

https://packaging.python.org/en/latest/current.html#installation-tool-recommendations

Fabien
--
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 21, 2015, at 10:55 AM, Chris Angelico  wrote:

> On Sun, Feb 22, 2015 at 2:45 AM, Cem Karan  wrote:
>> OK, so if I'm reading your code correctly, you're breaking the cycle in your 
>> object graph by making the GUI the owner of the callback, correct?  No other 
>> chunk of code has a reference to the callback, correct?
> 
> Correct. The GUI engine ultimately owns everything. Of course, this is
> a very simple case (imagine a little notification popup; you don't
> care about it, you don't need to know when it's been closed, the only
> event on it is "hit Close to destroy the window"), and most usage
> would have other complications, but it's not uncommon for me to build
> a GUI program that leaves everything owned by the GUI engine.
> Everything is done through callbacks. Destroy a window, clean up its
> callbacks. The main window will have an "on-deletion" callback that
> terminates the program, perhaps. It's pretty straight-forward.

How do you handle returning information?  E.g., the user types in a number and 
expects that to update the internal state of your code somewhere.

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 21, 2015, at 11:03 AM, Marko Rauhamaa  wrote:

> Chris Angelico :
> 
>> On Sat, Feb 21, 2015 at 1:44 PM, Cem Karan  wrote:
> 
>>> In order to inform users that certain bits of state have changed, I
>>> require them to register a callback with my code. The problem is that
>>> when I store these callbacks, it naturally creates a strong reference
>>> to the objects, which means that if they are deleted without
>>> unregistering themselves first, my code will keep the callbacks
>>> alive. Since this could lead to really weird and nasty situations,
>>> [...]
>> 
>> No, it's not. I would advise using strong references - if the callback
>> is a closure, for instance, you need to hang onto it, because there
>> are unlikely to be any other references to it. If I register a
>> callback with you, I expect it to be called; I expect, in fact, that
>> that *will* keep my object alive.
> 
> I use callbacks all the time but haven't had any problems with strong
> references.
> 
> I am careful to move my objects to a zombie state after they're done so
> they can absorb any potential loose callbacks that are lingering in the
> system.

So, if I were designing a library for you, you would be willing to have a 
'zombie' attribute on your callback, correct?  This would allow the library to 
query its callbacks to ensure that only 'live' callbacks are called.  How would 
you handle closures?  

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Cem Karan :

> On Feb 21, 2015, at 11:03 AM, Marko Rauhamaa  wrote:
>> I use callbacks all the time but haven't had any problems with strong
>> references.
>> 
>> I am careful to move my objects to a zombie state after they're done so
>> they can absorb any potential loose callbacks that are lingering in the
>> system.
>
> So, if I were designing a library for you, you would be willing to have
> a 'zombie' attribute on your callback, correct? This would allow the
> library to query its callbacks to ensure that only 'live' callbacks are
> called. How would you handle closures?

Sorry, don't understand the question.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 21, 2015, at 12:08 PM, Marko Rauhamaa  wrote:

> Steven D'Aprano :
> 
>> Other than that, I cannot see how calling a function which has *not*
>> yet been garbage collected can fail, just because the only reference
>> still existing is a weak reference.
> 
> Maybe the logic of the receiving object isn't prepared for the callback
> anymore after an intervening event.
> 
> The problem then, of course, is in the logic and not in the callbacks.

This was PRECISELY the situation I was thinking about.  My hope was to make the 
callback mechanism slightly less surprising by allowing the user to track them, 
releasing them when they aren't needed without having to figure out where the 
callbacks were registered.  However, it appears I'm making things more 
surprising rather than less.

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Sun, Feb 22, 2015 at 11:07 PM, Cem Karan  wrote:
>> Correct. The GUI engine ultimately owns everything. Of course, this is
>> a very simple case (imagine a little notification popup; you don't
>> care about it, you don't need to know when it's been closed, the only
>> event on it is "hit Close to destroy the window"), and most usage
>> would have other complications, but it's not uncommon for me to build
>> a GUI program that leaves everything owned by the GUI engine.
>> Everything is done through callbacks. Destroy a window, clean up its
>> callbacks. The main window will have an "on-deletion" callback that
>> terminates the program, perhaps. It's pretty straight-forward.
>
> How do you handle returning information?  E.g., the user types in a number 
> and expects that to update the internal state of your code somewhere.

Not sure what you mean by "returning". If the user types in a number
in a GUI widget, that would trigger some kind of on-change event, and
either the new text would be a parameter to the callback function, or
the callback could query the widget. In the latter case, I'd probably
have the callback as a closure, and thus able to reference the object.
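A sketch of that flow without any real GUI toolkit (Entry here is a stand-in for whatever widget the engine provides): the handler is a closure over both the model and the widget, so it can query the widget and update internal state:

```python
class Model:
    def __init__(self):
        self.value = None

class Entry:
    """Stand-in for a GUI text widget with an on-change callback."""
    def __init__(self):
        self.text = ""
        self.on_change = None

    def type(self, text):          # simulate the user typing
        self.text = text
        if self.on_change:
            self.on_change()

model = Model()
entry = Entry()

def handle_change():               # closure: captures model and entry
    model.value = int(entry.text)  # query the widget, update the model

entry.on_change = handle_change
entry.type("42")
assert model.value == 42
```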

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Future of Pypy?

2015-02-22 Thread Dave Farrance
As an engineer, I can quickly knock together behavioural models of
electronic circuits,  complete units, and control systems in Python, then
annoyingly in a few recent cases, have to re-write in C for speed.

I've tried PyPy, the just-in-time compiler for Python, and that is
impressively, hugely fast in comparison, but it's no good making these
models if I can't display the results in a useful way, and at the moment
PyPy just doesn't have the huge range of useful time-saving libraries that
CPython has.  It's still quicker to do a re-write in the more cumbersome C
than try to work with PyPy because C, like CPython, also has many useful
libraries.

A few years back, I recall people saying that PyPy was going to be the
future of Python, but it seems to me that CPython still has the lion's
share of the momentum, is developing faster and has ever more libraries,
while PyPy is struggling to get enough workers to even get Numpy
completed.

Maybe there's not enough people like me that have really felt the need for
the speed.  Or maybe it's simply the accident of the historical
development path that's set-in-stone an interpreter rather than a JIT.
Anybody got a useful perspective on this?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Cem Karan :

> On Feb 21, 2015, at 12:08 PM, Marko Rauhamaa  wrote:
>> Maybe the logic of the receiving object isn't prepared for the callback
>> anymore after an intervening event.
>> 
>> The problem then, of course, is in the logic and not in the callbacks.
>
> This was PRECISELY the situation I was thinking about. My hope was to
> make the callback mechanism slightly less surprising by allowing the
> user to track them, releasing them when they aren't needed without
> having to figure out where the callbacks were registered. However, it
> appears I'm making things more surprising rather than less.

When dealing with callbacks, my advice is to create your objects as
explicit finite state machines. Don't try to encode the object state
implicitly or indirectly. Rather, give each and every state a symbolic
name and log the state transitions for troubleshooting.

Your callbacks should then consider what to do in each state. There are
different ways to express this in Python, but it always boils down to a
state/transition matrix.

Callbacks sometimes cannot be canceled after they have been committed to
and have been shipped to the event pipeline. Then, the receiving object
must brace itself for the impending spurious callback.
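A minimal sketch of that advice (the state names and transitions are illustrative): each callback consults the explicit state, and a zombie state absorbs callbacks that were already in flight:

```python
import enum

class State(enum.Enum):
    IDLE = "IDLE"
    CONNECTED = "CONNECTED"
    ZOMBIE = "ZOMBIE"

class Peer:
    def __init__(self):
        self.state = State.IDLE
        self.received = []

    def _move(self, new):
        # In real code, log each transition for troubleshooting.
        self.state = new

    def on_connect(self):
        if self.state is State.IDLE:
            self._move(State.CONNECTED)

    def on_data(self, data):
        if self.state is not State.CONNECTED:
            return                      # brace for spurious callbacks
        self.received.append(data)

    def close(self):
        self._move(State.ZOMBIE)        # zombie state absorbs stragglers

peer = Peer()
peer.on_data(b"early")                  # before connect: ignored
peer.on_connect()
peer.on_data(b"payload")
peer.close()
peer.on_data(b"late")                   # after close: absorbed
assert peer.received == [b"payload"]
assert peer.state is State.ZOMBIE
```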


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 07:16:14 -0500, Cem Karan writes:

>This was PRECISELY the situation I was thinking about.  My hope was
>to make the callback mechanism slightly less surprising by allowing
>the user to track them, releasing them when they aren't needed
>without having to figure out where the callbacks were registered.
>However, it appears I'm making things more surprising rather than
>less.

You may be able to accomplish your goal by using a Queue with a
producer/consumer model.
see: 
http://stackoverflow.com/questions/9968592/turn-functions-with-a-callback-into-python-generators

especially the bottom of that.

I haven't run the code, but it looks mostly reasonable, except that
you do not want to rely on the Queue maxsize being 1 here, and
indeed, I almost always want a bigger Queue in any case.  Use
Queue.task_done if blocking the producer is part of your design.
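A hedged sketch of that producer/consumer adaptation using only the stdlib (the event source is a stand-in for a callback-based library): the callback just enqueues, and a generator drains the queue:

```python
import queue
import threading

def noisy_source(callback):
    """Stand-in for a library that fires a callback per event."""
    for item in ("a", "b", "c"):
        callback(item)

def events_as_generator(source):
    """Adapt a callback-based source into a plain iterator via a Queue."""
    q = queue.Queue(maxsize=16)          # don't rely on maxsize=1
    done = object()                      # sentinel marking end of stream

    def producer():
        source(q.put)                    # the callback just enqueues
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        q.task_done()                    # unblocks a joining producer
        if item is done:
            return
        yield item

events = list(events_as_generator(noisy_source))
assert events == ["a", "b", "c"]
```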

The problem that you are up against is that callbacks are inherently
confusing, even to programmers who are learning about them for the
first time.  They don't fit people's internal model of 'how code works'.
There isn't a whole lot one can do about that except to
try to make the magic do as little as possible, so that more of the
code works 'the way people expect'.

Laura
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread jkn
On Sunday, 22 February 2015 12:45:15 UTC, Dave Farrance  wrote:
> As an engineer, I can quickly knock together behavioural models of
> electronic circuits,  complete units, and control systems in Python, then
> annoyingly in a few recent cases, have to re-write in C for speed.
> 
> I've tried PyPy, the just-in-time compiler for Python, and that is
> impressively, hugely fast in comparison, but it's no good making these
> models if I can't display the results in a useful way, and at the moment
> PyPy just doesn't have the huge range of useful time-saving libraries that
> CPython has.  It's still quicker to do a re-write in the more cumbersome C
> than try to work with PyPy because C, like CPython, also has many useful
> libraries.
> 
> A few years back, I recall people saying that PyPy was going to be the
> future of Python, but it seems to me that CPython still has the lion's
> share of the momentum, is developing faster and has ever more libraries,
> while PyPy is struggling to get enough workers to even get Numpy
> completed.
> 
> Maybe there's not enough people like me that have really felt the need for
> the speed.  Or maybe it's simply the accident of the historical
> development path that's set-in-stone an interpreter rather than a JIT.
> Anybody got a useful perspective on this?

I'm curious what ...behavioural... models you are creating quickly in Python 
that then need rewriting in C for speed. SPICE? some other CAD? Might be 
interesting to learn more about what and how you are actually doing.

How about running your front end (simulation) work in PyPy, and the backend 
display work on CPython, if there are some missing features in PyPy that you 
need. This may be more or less easy depending on your requirements and any 
intermediate format you have.

Or you could offer to assist in the PyPy porting? Or express an interest in 
specific libraries being ported?

Cheers
Jon N

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Standard

2015-02-22 Thread Skip Montanaro
On Thu, Feb 19, 2015 at 10:27 AM, Phillip Fleming  wrote:
> In my opinion, Python will not take off like C/C++ if there is no ANSI
> standard.

On one side of your statement, what makes you think Python ever wanted
to "take off like C/C++"? On the other side, there are other languages
(Java, PHP, Perl, Tcl) which have done pretty well without ANSI
standardization. Python as well, as done fine in my opinion without an
ANSI standard.

I can't help but think I've just given a troll a carrot though...

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 21, 2015, at 12:27 PM, Steven D'Aprano 
 wrote:

> Cem Karan wrote:
> 
>> 
>> On Feb 21, 2015, at 8:15 AM, Chris Angelico  wrote:
>> 
>>> On Sun, Feb 22, 2015 at 12:13 AM, Cem Karan  wrote:
 OK, so it would violate the principle of least surprise for you. 
 Interesting.  Is this a general pattern in python?  That is, callbacks
 are owned by what they are registered with?
 
 In the end, I want to make a library that offers as few surprises to the
 user as possible, and no matter how I think about callbacks, they are
 surprising to me.  If callbacks are strongly-held, then calling 'del
 foo' on a callable object may not make it go away, which can lead to
 weird and nasty situations.
> 
> How?
> 
> The whole point of callbacks is that you hand over responsibility to another
> piece of code, and then forget about your callback. The library will call
> it, when and if necessary, and when the library no longer needs your
> callback, it is free to throw it away. (If I wish the callback to survive
> beyond the lifetime of your library's use of it, I have to keep a reference
> to the function.)

Marko mentioned it earlier; if you think you've gotten rid of all references to 
some chunk of code, and it is still alive afterwards, that can be surprising.

 Weakly-held callbacks mean that I (as the 
 programmer), know that objects will go away after the next garbage
 collection (see Frank's earlier message), so I don't get 'dead'
 callbacks coming back from the grave to haunt me.
> 
> I'm afraid this makes no sense to me. Can you explain, or better still
> demonstrate, a scenario where "dead callbacks rise from the grave", so to
> speak?

"""
#! /usr/bin/env python

class Callback_object(object):
def __init__(self, msg):
self._msg = msg
def callback(self, stuff):
print("From {0!s}: {1!s}".format(self._msg, stuff))

class Fake_library(object):
def __init__(self):
self._callbacks = list()
def register_callback(self, callback):
self._callbacks.append(callback)
def execute_callbacks(self):
for thing in self._callbacks:
thing('Surprise!')

if __name__ == "__main__":
foo = Callback_object("Evil Zombie")
lib = Fake_library()
lib.register_callback(foo.callback)

# Way later, after the user forgot all about the callback above
foo = Callback_object("Your Significant Other")
lib.register_callback(foo.callback)

# And finally getting around to running all those callbacks.
lib.execute_callbacks()
"""

Output:
From Evil Zombie: Surprise!
From Your Significant Other: Surprise!

In this case, the user made an error (just as Marko said in his earlier 
message), and forgot about the callback he registered with the library.  The 
callback isn't really rising from the dead; as you say, either its been garbage 
collected, or it hasn't been.  However, you may not be ready for a callback to 
be called at that moment in time, which means you're surprised by unexpected 
behavior.

 So, what's the consensus on the list, strongly-held callbacks, or
 weakly-held ones?
>>> 
>>> I don't know about Python specifically, but it's certainly a general
>>> pattern in other languages. They most definitely are owned, and it's
>>> the only model that makes sense when you use closures (which won't
>>> have any other references anywhere).
>> 
>> I agree about closures; its the only way they could work.
> 
> *scratches head* There's nothing special about closures. You can assign them
> to a name like any other object.
> 
> def make_closure():
>     x = 23
>     def closure():
>         return x + 1
>     return closure
> 
> func = make_closure()
> 
> Now you can register func as a callback, and de-register it when your done:
> 
> register(func)
> unregister(func)
> 
> 
> Of course, if you thrown away your reference to func, you have no (easy) way
> of de-registering it. That's no different to any other object which is
> registered by identity. (Registering functions by name is a bad idea, since
> multiple functions can have the same name.)
> 
> As an alternative, your callback registration function might return a ticket
> for the function:
> 
> ticket = register(func)
> del func
> unregister(ticket)
> 
> but that strikes me as over-kill. And of course, the simplest ticket is to
> return the function itself :-)

Agreed on all points; closures are just ordinary objects.  The only difference 
(in my opinion) is that they are 'fire and forget'; if you are registering or 
tracking them then you've kind of defeated the purpose.  THAT is what I meant 
about how you handle closures.

> 
>> When I was 
>> originally thinking about the library, I was trying to include all types
>> of callbacks, including closures and callable objects.  The callable
>> objects may pass themselves, or one of their methods to the library, or
>> may do something really weird.
> 
I don't think they can do anything too weird.

Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 21, 2015, at 3:57 PM, Grant Edwards  wrote:

> On 2015-02-21, Cem Karan  wrote:
>> 
>> On Feb 21, 2015, at 12:42 AM, Chris Angelico  wrote:
>> 
>>> On Sat, Feb 21, 2015 at 1:44 PM, Cem Karan  wrote:
 In order to inform users that certain bits of state have changed, I 
 require them to register a callback with my code.  The problem is that 
 when I store these callbacks, it naturally creates a strong reference to 
 the objects, which means that if they are deleted without unregistering 
 themselves first, my code will keep the callbacks alive.  Since this could 
 lead to really weird and nasty situations, I would like to store all the 
 callbacks in a WeakSet 
 (https://docs.python.org/3/library/weakref.html#weakref.WeakSet).  That 
 way, my code isn't the reason why the objects are kept alive, and if they 
 are no longer alive, they are automatically removed from the WeakSet, 
 preventing me from accidentally calling them when they are dead.  My 
 question is simple; is this a good design?  If not, why not?  Are there 
 any potential 'gotchas' I should be worried about?
 
>>> 
>>> No, it's not. I would advise using strong references - if the callback
>>> is a closure, for instance, you need to hang onto it, because there
>>> are unlikely to be any other references to it. If I register a
>>> callback with you, I expect it to be called; I expect, in fact, that
>>> that *will* keep my object alive.
>> 
>> OK, so it would violate the principle of least surprise for you.
> 
> And me as well.  I would expect to be able to pass a closure as a
> callback and not have to keep a reference to it.  Perhaps that just a
> leftover from working with other languages (javascript, scheme, etc.).
> It doesn't matter if it's a string, a float, a callback, a graphic or
> whatever: if I pass your function/library an object, I expect _you_ to
> keep track of it until you're done with it.
> 
>> Interesting.  Is this a general pattern in python?  That is,
>> callbacks are owned by what they are registered with?
> 
> I'm not sure what you mean by "owned" or why it matters that it's a
> callback: it's an object that was passed to you: you need to hold onto
> a reference to it until you're done with it, and the polite thing to
> do is to delete references to it when you're done with it.

I tend to structure my code as a tree or DAG of objects.  The owner refers to 
the owned object, but the owned object has no reference to its owner.  With 
callbacks, you get cycles, where the owned owns the owner.  As a result, if you 
forget where your object has been registered, it may be kept alive when you 
aren't expecting it.  My hope was that with WeakSets I could continue to 
preserve the DAG or tree while still having the benefits of callbacks.  
However, it looks like that is too surprising to most people.
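For what it's worth, one concrete gotcha that makes a WeakSet of callbacks surprising can be sketched in a few lines (names invented for illustration): a bound method object is created fresh on every attribute access, so a WeakSet ends up holding the only reference to it and it dies immediately, even while its owner is alive.

```python
import gc
import weakref

class Listener:
    def on_event(self):
        print("event!")

callbacks = weakref.WeakSet()
listener = Listener()

# The bound method `listener.on_event` is a brand-new object each time
# it is accessed; the WeakSet holds only a weak reference to it, so it
# is collected at once even though `listener` itself is still alive:
callbacks.add(listener.on_event)
gc.collect()
print(len(callbacks))  # 0 -- the callback silently vanished
```

This is exactly the kind of silent disappearance that makes the design read as surprising to callers.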

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 12:45:03 +, Dave Farrance writes:

>Maybe there's not enough people like me that have really felt the need for
>the speed.  Or maybe it's simply the accident of the historical
>development path that's set-in-stone an interpreter rather than a JIT.
>Anybody got a useful perspective on this?

I don't understand 'an interpreter rather than a JIT'.  PyPy has a
JIT, that sort of is the whole point.

One problem is that hacking on PyPy itself is hard.  Lots of people
find it too hard, and give up.  (Of course, lots of people give up
on hacking CPython too.  I think that hacking on PyPy is harder than
hacking on CPython, but I am quite biased.)  So this is a barrier to
getting more people to work on it.

Provided your code is in pure python, PyPy already works great for you.
as you found out.  Pure Python libraries aren't a problem, either.

The problem arises when you want to add your fancy graphic library to
what you do, because chances are that fancy library is a C extension.
And there is no magic 'sprinkle this dust here' on a C extension that
makes it acceptable to PyPy.  It's a hard job.  The PyPy team has
gone after, and rewritten how to do this I think 5 times now.  Maybe
more.  Every time the goal has been to make it easier for the average
programmer to make an interface, and, of course to not slow things down
too much.  C extensions, in general, make PyPy code run a lot slower because
you cannot get the JIT in there to speed things up, so you may be
stuck with unjitted PyPy performance, even on your python code,
which isn't speedy.  You also find lots of bugs in the C extensions,
which don't get noticed until you, for instance, no longer have a ref
counting GC.

Some of the things aren't really bugs, exactly, just that the person
who wrote the thing knew far, far, too much about how CPython works
and has produced something that had no need or desire to be portable
anywhere else.  The closer the person who wrote the extension
was 'to the metal' .. knowing _exactly_ how CPython does things and how
to squeeze that for the tiniest last drop of performance improvement ..
the harder things are for whoever wants to get it to work with PyPy, which
has a completely different architecture and a whole lot of other assumptions.

And the slower that it will run.  So, if it is a small thing, then the
usual suggestion is to rewrite it in pure python and let the JIT handle
it.  Very, very often the result is faster than the pure C code, but
clearly this isn't something you want to do with a huge graphics
library ...

There is hope that we can take another crack at this problem using
things learned from the Transactional Memory stuff, but nobody is promising
anything yet.  Also Armin has had a new neat thought.

https://mail.python.org/pipermail/pypy-dev/2015-February/013085.html

If we can get rid of the warmup time, then PyPy should be more popular
than it is now.  Lots of people run PyPy once, un-warmed up, see no
big improvement, and decide it's not for them.

But the problem of

'here is a huge chunk of C code, designed to work with Language X.
Now make it work with Language Y (PyPy) which isn't written in C'
can only be simplified to a certain extent.  There comes a point
where _this is just bloody hard_.

Laura
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 7:12 AM, Marko Rauhamaa  wrote:

> Cem Karan :
> 
>> On Feb 21, 2015, at 11:03 AM, Marko Rauhamaa  wrote:
>>> I use callbacks all the time but haven't had any problems with strong
>>> references.
>>> 
>>> I am careful to move my objects to a zombie state after they're done so
>>> they can absorb any potential loose callbacks that are lingering in the
>>> system.
>> 
>> So, if I were designing a library for you, you would be willing to have
>> a 'zombie' attribute on your callback, correct? This would allow the
>> library to query its callbacks to ensure that only 'live' callbacks are
>> called. How would you handle closures?
> 
> Sorry, don't understand the question.

You were saying that you move your objects into a zombie state.  I assumed that 
you meant you marked them in some manner (e.g., setting 'is_zombie' to True), 
so that anything that has a strong reference to the object knows the object is 
not supposed to be used anymore.  That way, regardless of where or how many 
times you've registered your object for callbacks, the library can do something 
like the following (banged out in my mail application, may have typos):

"""
_CALLBACKS = []

def execute_callbacks():
global _CALLBACKS
_CALLBACKS = [x for x in _CALLBACKS if not x.is_zombie]
for x in _CALLBACKS:
x()
"""

That will lazily unregister callbacks that are in the zombie state, which will 
eventually lead to their collection by the garbage collector.  It won't work 
for anything that you don't have a reference for (lambdas, etc.), but it should 
work in a lot of cases.

Is this what you meant?

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Steven D'Aprano
Chris Angelico wrote:

> On Sun, Feb 22, 2015 at 9:32 PM, Steven D'Aprano
>  wrote:
>> Why? Do you expect that the Python garbage collector special cases
>> callbacks to keep them alive even when there are no references to them?
>> How would it distinguish a callback from some other function?
> 
> No no no. It's the other way around. _Something_ has to be doing those
> callbacks, and it's that _something_ that should be keeping them
> alive. The fact that it's a registered callback should itself *be* a
> reference (and not a weak reference), and should keep it alive.

That's much more reasonable than what you said earlier:

it seems wrong to have to stash a thing in a bucket in order
to keep its callbacks alive. I expect the callbacks themselves to
keep it alive.


So yes. If I bind a callback to a button, say, or a listener, then the
button (or listener) keeps the callback alive, *not* the callback keeping
the button or listener alive.

But if there are no references to the button, or the listener, then it will
be garbage-collected, which will free the references to the callback and
allow it to be garbage-collected as well (if there are no further
references to it).
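Sketched as code (this Button is illustrative, not any real toolkit's API):

```python
class Button:
    """A widget that owns its callbacks via ordinary strong references."""

    def __init__(self):
        self._callbacks = []

    def on_click(self, fn):
        self._callbacks.append(fn)  # registration itself keeps fn alive

    def click(self):
        for fn in self._callbacks:
            fn()

clicks = []
button = Button()
button.on_click(lambda: clicks.append("clicked"))
button.click()
print(clicks)   # ['clicked']
# del button: the callback list, and the lambda with it, become collectable
```

The lambda here has no other reference anywhere; it stays alive purely because the button holds it, which is the expectation being described.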



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 7:24 AM, Chris Angelico  wrote:

> On Sun, Feb 22, 2015 at 11:07 PM, Cem Karan  wrote:
>>> Correct. The GUI engine ultimately owns everything. Of course, this is
>>> a very simple case (imagine a little notification popup; you don't
>>> care about it, you don't need to know when it's been closed, the only
>>> event on it is "hit Close to destroy the window"), and most usage
>>> would have other complications, but it's not uncommon for me to build
>>> a GUI program that leaves everything owned by the GUI engine.
>>> Everything is done through callbacks. Destroy a window, clean up its
>>> callbacks. The main window will have an "on-deletion" callback that
>>> terminates the program, perhaps. It's pretty straight-forward.
>> 
>> How do you handle returning information?  E.g., the user types in a number 
>> and expects that to update the internal state of your code somewhere.
> 
> Not sure what you mean by "returning". If the user types in a number
> in a GUI widget, that would trigger some kind of on-change event, and
> either the new text would be a parameter to the callback function, or
> the callback could query the widget. In the latter case, I'd probably
> have the callback as a closure, and thus able to reference the object.

We're thinking of the same thing.  I try to structure what little GUI code I 
write using the MVP pattern 
(http://en.wikipedia.org/wiki/Model-view-presenter), so I have these hub and 
spoke patterns.  But you're right, if you have a partially evaluated callback 
that has the presenter as one of the parameters, that would do it for a GUI.  I 
was thinking more of a DAG of objects, but now that I think about it, callbacks 
wouldn't make sense in that case.
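For concreteness, the partially-evaluated-callback idea might look like this with functools.partial (the Presenter here is a made-up stand-in, not real MVP framework code):

```python
from functools import partial

class Presenter:
    def __init__(self):
        self.model_value = None

    def on_change(self, new_text):
        # "returning" information is just the widget calling back into us
        self.model_value = int(new_text)

presenter = Presenter()

# A widget only needs a one-argument callable; partial bakes the
# presenter in as the first argument:
callback = partial(Presenter.on_change, presenter)
callback("42")                 # what a widget's on-change event would do
print(presenter.model_value)   # 42
```

A plain bound method (`presenter.on_change`) would do the same job; partial just makes the pre-filled parameter explicit.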

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Cem Karan :

> You were saying that you move your objects into a zombie state.  I
> assumed that you meant you marked them in some manner (e.g., setting
> 'is_zombie' to True),

Yes, but even better:

self.set_state(ZOMBIE)

>  so that anything that has a strong reference to the object knows the
>  object is not supposed to be used anymore.

The other way round: the zombie object knows to ignore callbacks sent
its way. It's not the responsibility of the sender to mind the
receiver's internal state.

I nowadays tend to implement states as inner classes. Here's how I've
implemented the zombie state of one class:

class Delivery...:
    def __init__(...):
        ...

    class ZOMBIE(STATE):
        def handle_connected(self):
            pass
        def handle_eof(self):
            pass
        def handle_response(self, code, response):
            pass
        def handle_io_error(self, errcode):
            pass
        def zombifie(self):
            assert False
        def transaction_timeout(self):
            assert False


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Steven D'Aprano
Marko Rauhamaa wrote:

> Chris Angelico :
> 
>> On Sun, Feb 22, 2015 at 7:34 PM, Marko Rauhamaa  wrote:
>>> Refloops are not to be worried about, let alone removed.
>>
>> Why?
> 
> Because the whole point of GC-languages is that you should stop worrying
> about memory. Trying to mastermind and micromanage GC in the application
> is, pardon my French, an antipattern.

While it would be nice to be able to stop worrying about memory, try to
calculate 1000**1000**1000 and see how that works for you.

Garbage collection enables us to *mostly* automate the allocation and
deallocation of memory. If doesn't mean we can forget about it. GC is an
abstraction that frees us from most of the grunt work of allocating memory,
but it doesn't mean that there is never any need to think about memory. GC
is a leaky abstraction. Depending on the implementation, it may cause
distracting and annoying pauses in your application and/or resource leaks.
Even if there are no pauses, GC still carries a performance penalty. Good
programmers need to be aware of the limitations of their tools, and be
prepared to code accordingly.

When writing programs for educational purposes, we should try to code in the
simplest and most elegant way with no thought given to annoying practical
matters. At least at first. But when writing programs for actual use, we
should write for the implementation we have, not the one we wish we had.


>> They force the use of the much slower cycle-detecting GC, rather than
>> the quick and efficient CPython refcounter.
> 
> Java's Hotspot doesn't bother with refcounters but is much faster than
> Python. CPython's refcounters are a historical accident that a Python
> application developer shouldn't even be aware of.

I don't know about Java's Hotspot, but I do know that CPython's ref counting
garbage collector has at least one advantage over the GC used by Jython and
IronPython: unlike them, open files are closed as soon as they are no
longer in use. Code like this may run out of operating system file handles
in Jython:

i = 0
while True:
    f = open('/tmp/x%d' % i)
    i += 1

while CPython will just keep going. I suppose it will *eventually* run out
of some resource, but probably not file handles.

Oh, a bit of trivia: Apple is abandoning their garbage collector and going
back to a reference counter:

https://developer.apple.com/news/?id=02202015a

Word on Reddit is that Apple is concerned about performance and battery
life.

P.S. A reminder that reference counting *is* a form of garbage collection.


>> I don't know how other Pythons work, but mark-and-sweep has its own
>> costs, and I don't know of any system that's both prompt and able to
>> detect refloops.
> 
> It's exceedingly difficult (and pointless) to detect cycles in your
> object structures. Python is going to have to do a GC occasionally
> anyway. Yes, your worst-case response times are going to suffer, but
> that's the cost of doing business.

In *general*, you're right. Who wants to spend all their time worrying about
cycles when the GC can do it for you? But if cycles are rare, and in known
parts of your code where it is simple to break them when you're done,
there's no disadvantage to doing so. Leave the GC for the hard cases.

It's like explicitly closing a file, either with file.close() or a context
manager. When using CPython, it doesn't really matter whether you close the
file or not, since the ref counter will normally close it automatically as
soon as the file goes out of scope. But it is cheap and easy to do so, so
why not do it? Then, when it otherwise would matter, say you are running
under Jython, it doesn't because you've closed the file.
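The cheap-and-easy version in code (path chosen arbitrarily for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "x0")
with open(path, "w") as f:
    f.write("data")
# The file is guaranteed closed here on CPython, Jython and IronPython
# alike, with no reliance on any particular garbage collection strategy.
print(f.closed)   # True
os.remove(path)   # tidy up the temporary file
```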


>> Helping it along means your program doesn't waste memory. Why such a
>> blanket statement?
> 
> Because worrying Python programmers with evil spirits (reference loops)
> leads to awkward coding practices and takes away one of the main
> advantages of Python as a high-level programming language.

I think you exaggerate a tad. We're not trying to scare beginners, we're a
group of moderately experienced coders discussing "best practice" (or at
least "reasonable practice") when using callbacks.


-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 12:45 AM, Steven D'Aprano
 wrote:
>> No no no. It's the other way around. _Something_ has to be doing those
>> callbacks, and it's that _something_ that should be keeping them
>> alive. The fact that it's a registered callback should itself *be* a
>> reference (and not a weak reference), and should keep it alive.
>
> That's much more reasonable than what you said earlier:
>
> it seems wrong to have to stash a thing in a bucket in order
> to keep its callbacks alive. I expect the callbacks themselves to
> keep it alive.
>
>
> So yes. If I bind a callback to a button, say, or a listener, then the
> button (or listener) keeps the callback alive, *not* the callback keeping
> the button or listener alive.

I meant the same thing, but my terminology was poor. Yes, that's
correct; it's not any sort of magic about it being a callback, but
more that the one you register it with becomes the owner of something.
Hence, no weak references.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Steven D'Aprano :

> I don't know about Java's Hotspot, but I do know that CPython's ref counting
> garbage collector has at least one advantage over the GC used by Jython and
> IronPython: unlike them, open files are closed as soon as they are no
> longer in use.

You can't depend on that kind of behavior. Dangling resources may or may
not be cleaned up, ever.

> Oh, a bit of trivia: Apple is abandoning their garbage collector and going
> back to a reference counter:
>
> https://developer.apple.com/news/?id=02202015a
>
> Word on Reddit is that Apple is concerned about performance and battery
> life.

That truly is a bit OT here.

> It's like explicitly closing a file, either with file.close() or a context
> manager.

Both methods are explicit. Closing files and other resources are not
directly related to GC.

Here's the thing: GC relieves your from dynamic memory management. You
are still on your own when it comes to other resources.

> We're not trying to scare beginners, we're a group of moderately
> experienced coders discussing "best practice" (or at least "reasonable
> practice") when using callbacks.

Who mentioned beginners? I'm abiding by the same best practices I'm
advocating.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Steven D'Aprano
Dave Farrance wrote:

> As an engineer, I can quickly knock together behavioural models of
> electronic circuits,  complete units, and control systems in Python, then
> annoyingly in a few recent cases, have to re-write in C for speed.
> 
> I've tried PyPy, the just-in-time compiler for Python, and that is
> impressively, hugely fast in comparison, but it's no good making these
> models if I can't display the results in a useful way, and at the moment
> PyPy just doesn't have the huge range of useful time-saving libraries that
> CPython has.

I assume you're talking about drawing graphics rather than writing text. Can
you tell us which specific library or libraries won't run under PyPy?



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 7:52 AM, Laura Creighton  wrote:

> In a message of Sun, 22 Feb 2015 07:16:14 -0500, Cem Karan writes:
> 
>> This was PRECISELY the situation I was thinking about.  My hope was
>> to make the callback mechanism slightly less surprising by allowing
>> the user to track them, releasing them when they aren't needed
>> without having to figure out where the callbacks were registered.
>> However, it appears I'm making things more surprising rather than
>> less.
> 
> You may be able to accomplish your goal by using a Queue with a
> producer/consumer model.
> see: 
> http://stackoverflow.com/questions/9968592/turn-functions-with-a-callback-into-python-generators
> 
> especially the bottom of that.
> 
> I haven't run the code, but it looks mostly reasonable, except that
> you do not want to rely on the Queue maxsize being 1 here, and
> indeed, I almost always want a bigger Queue  in any case.  Use
> Queue.task_done if blocking the producer features in your design.
> 
> The problem that you are up against is that callbacks are inherently
> confusing, even to programmers who are learning about them for the
> first time.  They don't fit people's internal model of 'how code works'.
> There isn't a whole lot one can do about that except to
> try to make the magic do as little as possible, so that more of the
> code works 'the way people expect'.

I think what you're suggesting is that library users register a Queue instead 
of a callback, correct?  The problem is that I'll then have a strong reference 
to the Queue, which means I'll be pumping events into it after the user code 
has gone away.  I was hoping to solve the problem of forgotten registrations in 
the library.

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Standard

2015-02-22 Thread Mark Lawrence

On 19/02/2015 16:27, Phillip Fleming wrote:

In my opinion, Python will not take off like C/C++ if there is no ANSI
standard.



Python has already taken off because it doesn't have a standard as such.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 7:46 AM, Marko Rauhamaa  wrote:

> Cem Karan :
> 
>> On Feb 21, 2015, at 12:08 PM, Marko Rauhamaa  wrote:
>>> Maybe the logic of the receiving object isn't prepared for the callback
>>> anymore after an intervening event.
>>> 
>>> The problem then, of course, is in the logic and not in the callbacks.
>> 
>> This was PRECISELY the situation I was thinking about. My hope was to
>> make the callback mechanism slightly less surprising by allowing the
>> user to track them, releasing them when they aren't needed without
>> having to figure out where the callbacks were registered. However, it
>> appears I'm making things more surprising rather than less.
> 
> When dealing with callbacks, my advice is to create your objects as
> explicit finite state machines. Don't try to encode the object state
> implicitly or indirectly. Rather, give each and every state a symbolic
> name and log the state transitions for troubleshooting.
> 
> Your callbacks should then consider what to do in each state. There are
> different ways to express this in Python, but it always boils down to a
> state/transition matrix.
> 
> Callbacks sometimes cannot be canceled after they have been committed to
> and have been shipped to the event pipeline. Then, the receiving object
> must brace itself for the impending spurious callback.
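A minimal sketch of the explicit-FSM advice above (state names and handler invented for illustration):

```python
import enum

class State(enum.Enum):
    IDLE = enum.auto()
    CONNECTED = enum.auto()
    ZOMBIE = enum.auto()

class Connection:
    def __init__(self):
        self.state = State.IDLE

    def _transition(self, new):
        # give each state a symbolic name and log every transition
        print(f"{self.state.name} -> {new.name}")
        self.state = new

    def handle_connected(self):
        # each callback considers what to do in each state
        if self.state is State.IDLE:
            self._transition(State.CONNECTED)
        # in CONNECTED or ZOMBIE this is a spurious callback: ignore it

conn = Connection()
conn.handle_connected()   # IDLE -> CONNECTED
conn.handle_connected()   # spurious: ignored
print(conn.state.name)    # CONNECTED
```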

Nononono, I'm NOT encoding anything implicitly!  As Frank mentioned earlier, 
this is more of a pub/sub problem.  E.g., 'USB dongle has gotten plugged in', 
or 'key has been pressed'.  The user code needs to decide what to do next, the 
library code provides a nice, clean interface to some potentially weird 
hardware.

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 5:15 AM, Gregory Ewing  wrote:

> Frank Millman wrote:
>> "In order to inform users that certain bits of state have changed, I require 
>> them to register a callback with my code."
>> This sounds to me like a pub/sub scenario. When a 'listener' object comes 
>> into existence it is passed a reference to a 'controller' object that holds 
>> state. It wants to be informed when the state changes, so it registers a 
>> callback function with the controller.
> 
> Perhaps instead of registering a callback function, you
> should be registering the listener object together with
> a method name.
> 
> You can then keep a weak reference to the listener object,
> since if it is no longer referenced elsewhere, it presumably
> no longer needs to be notified of anything.

I see what you're saying, but I don't think it gains us too much.  If I store 
an object and an unbound method of the object, or if I store the bound method 
directly, I suspect it will yield approximately the same results.

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: What behavior would you expect?

2015-02-22 Thread Tim Chase
On 2015-02-19 22:55, Jason Friedman wrote:
> > If you're going to call listdir, you probably want to use fnmatch
> > directly.
> >
> > fnmatch seems to be silent on non-existent directories:  
> python -c 'import fnmatch; fnmatch.fnmatch("/no/such/path", "*")'

a better test would be glob.glob as fnmatch simply asks "does this
string match this pattern?" so it cares nothing for filenames.

However, it still holds that glob.glob("/does/not/exist/*.txt")
doesn't raise an error but rather just returns an empty list.

However, for the OP's question, it's max() that raises an error:

  import glob
  import os
  def most_recent_file(loc, pattern):
globstr = os.path.join(loc, pattern)
return max(glob.glob(globstr), key=lambda f: os.stat(f).st_mtime)

gives me this when the glob returns an empty iterable:

  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: max() arg is an empty sequence
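If an empty glob should yield None rather than a traceback, max() accepts a default keyword (Python 3.4 and later); one possible tweak:

```python
import glob
import os

def most_recent_file(loc, pattern):
    globstr = os.path.join(loc, pattern)
    # default=None makes the empty-iterable case return None instead of
    # raising ValueError; the key function is never called in that case
    return max(glob.glob(globstr), key=lambda f: os.stat(f).st_mtime,
               default=None)

print(most_recent_file("/does/not/exist", "*.txt"))  # None
```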

-tkc




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Standard

2015-02-22 Thread Steven D'Aprano
Skip Montanaro wrote:

> On Thu, Feb 19, 2015 at 10:27 AM, Phillip Fleming 
> wrote:
>> In my opinion, Python will not take off like C/C++ if there is no ANSI
>> standard.
> 
> On one side of your statement, what makes you think Python ever wanted
> to "take off like C/C++"? On the other side, there are other languages
> (Java, PHP, Perl, Tcl) which have done pretty well without ANSI
> standardization. Python as well, as done fine in my opinion without an
> ANSI standard.

I'm pretty sure that Python is doing pretty well, popularity-wise. Depending
on how you measure it, it is even more popular than C!

http://import-that.dreamwidth.org/1388.html

I don't actually believe that Python is more popular than C. It's just that
there is no one single definition of popularity for programming languages,
so depending on how you do your measurement, you get different results. In
any case, Python is consistently in the top dozen or so languages. Any
suggestion that Python "will not take off" is laughably wrong and a decade
too late.


> I can't help but think I've just given a troll a carrot though...

Very likely :-)



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Dave Farrance
jkn  wrote:

> I'm curious what ...behavioural... models you are creating quickly in
> Python that then need rewriting in C for speed. SPICE? some other CAD?
> Might be interesting to learn more about what and how you are actually
> doing.

The convert-to-C cases were complex filtering functions.  I do make good
use of spice-based tools, but I often find it useful to make a more
abstract model, usually before completing the design.  This helps with
component selection, finalizing the design, and making sure that I
understood what the circuit would do.

I started work 1980ish, had an early 6502-based home computer, and my then
place of work had some 6502-based Pet computers, so I gained the ability
to quickly write BASIC programs as an engineering aid.  Later, when BASIC
dropped into obscurity, I switched to C and C++, although I always found
that cumbersome compared to the old BASIC.  Later still, when I found that
my Google queries for code examples started returning more Python than C,
I tried that -- and discovered that Python was like BASIC, only better.

But that's just me.  Other hardware engineers use a variety of modeling
applications.  Or don't need to because they're just that clever?  Or they
give the modeling work to system engineers who will use whatever apps that
system engineers use, and will return a result a few weeks later.
Personally, I've tended to get used to writing code in just one
general-purpose language, and it seems to me that I get a useful result
relatively quickly.

> How about running your front end (simulation) work in PyPy, and the
> backend display work on CPython, if there are some missing features in
> PyPy that you need. This may be more or less easy depending on your
> requirements and any intermediate format you have.

Maybe I should look at that again.  In the case of the filter models,
their usefulness had grown to the point that requiring support by other
people was a possibility, so converting them to C seemed better than
writing something that bridged between two language implementations.

> Or you could offer to assist in the PyPy porting? Or express an interest
> in specific libraries being ported?

I'm a hardware engineer not a software engineer, so I have to plead lack
of ability there.  I do appreciate the work that's done on Python, and
have to be grateful for what is available, since I'm not paying for it.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Dave Farrance
Laura Creighton  wrote:

>I don't understand 'an interpreter rather than a JIT'.  PyPy has a
>JIT, that sort of is the whole point.

Yes.  I meant that from my end-user, non-software-engineer perspective, it
looked as though CPython was proceeding with leaps and bounds and that
PyPy remained mostly a proof-of-concept after several years.

But thanks for your description of the issues.  So once the core issues
are sorted out, if enough people can be found to work on it, then
hopefully the library conversions will follow apace.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Dave Farrance
Steven D'Aprano  wrote:

>I assume you're talking about drawing graphics rather than writing text. Can
>you tell us which specific library or libraries won't run under PyPy?

Yes, mainly the graphics.  I'm a hardware engineer, not a software
engineer, so I might well be misunderstanding PyPy's current capability.

For easy-to-use vector graphics output, like 1980s BASIC computers, I've
settled on Pygame.  CPython libraries that I've used for other reasons
include Scipy, Matplotlib, PIL, CV2, and Kivy.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Algorithm for Creating Supersets of Smaller Sets Based on Common Elements

2015-02-22 Thread Peter Pearson
On Sat, 21 Feb 2015 14:46:26 -0500, TommyVee wrote:
> Start off with sets of elements as follows:
>
> 1. A,B,E,F
> 2. G,H,L,P,Q
> 3. C,D,E,F
> 4. E,X,Z
> 5. L,M,R
> 6. O,M,Y
>
> Note that sets 1, 3 and 4 all have the element 'E' in common, therefore they 
> are "related" and form the following superset:
>
> A,B,C,D,E,F,X,Z
>
> Likewise, sets 2 and 5 have the element 'L' in common, then set 5 and 6 have 
> element 'M' in common, therefore they form the following superset:
>
> G,H,L,M,O,P,Q,R,Y
>
> I think you get the point.
[snip]

I recommend continuing to work on your statement of the problem until it
is detailed, precise, and complete -- something along the lines of,
"Given a set of sets, return a set of sets having the following
properties: (1)... (2)..."  This approach often brings to light logical
problems in the loosely sketched requirements.  It also produces the
outline of a testing regimen to determine whether an implemented
solution is correct.

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Algorithm for Creating Supersets of Smaller Sets Based on Common Elements

2015-02-22 Thread duncan smith
On 21/02/15 19:46, TommyVee wrote:
> Start off with sets of elements as follows:
> 
> 1. A,B,E,F
> 2. G,H,L,P,Q
> 3. C,D,E,F
> 4. E,X,Z
> 5. L,M,R
> 6. O,M,Y
> 
> Note that sets 1, 3 and 4 all have the element 'E' in common, therefore
> they are "related" and form the following superset:
> 
> A,B,C,D,E,F,X,Z
> 
> Likewise, sets 2 and 5 have the element 'L' in common, then set 5 and 6
> have element 'M' in common, therefore they form the following superset:
> 
> G,H,L,M,O,P,Q,R,Y
> 
> I think you get the point.  As long as sets have at least 1 common
> element, they combine to form a superset.  Also "links" (common
> elements) between sets may go down multiple levels, as described in the
> second case above (2->5->6).  Cycles thankfully, are not possible.
> 
> BTW, the number of individual sets (and resultant supersets) will be
> very large.
> 
> I don't know where to start with this.  I thought about some type of
> recursive algorithm, but I'm not sure.  I could figure out the Python
> implementation easy enough, I'm just stumped on the algorithm itself.
> 
> Anybody have an idea?
> 
> Thanks, Tom

Tom,
You could construct a graph G such that there exists an edge {v,w}
for each pair of items that appear in a common set. Each connected
component will contain the nodes of one of the supersets you're looking
for. That's unnecessarily expensive, so you can adopt a similar approach
using trees.

http://en.wikipedia.org/wiki/Disjoint-set_data_structure

Construct a forest of trees (initially one tree for each item) and join
pairs of trees until you have one tree for each of your supersets. For
each set of size n you only need to consider n-1 joins. That will ensure
that any pair of items that are in one of the sets are contained in a
single tree. The find and union operations are amortized constant, so
this should be efficient for your large numbers of sets.

The union_find module can be found at
https://github.com/DuncanSmith147/KVMS. Cheers.

Duncan


>>> import union_find
>>> sets = (['A','B','E','F'],
...         ['G','H','L','P','Q'],
...         ['C','D','E','F'],
...         ['E','X','Z'],
...         ['L','M','R'],
...         ['O','M','Y'])
>>> uf = union_find.UnionFindTree()
>>> for a_set in sets:
...     for item in a_set:
...         try:
...             uf.add(item)
...         except ValueError:
...             pass
...     n = a_set[0]
...     for item in a_set[1:]:
...         is_merged = uf.union(n, item)
...
>>> from collections import defaultdict
>>> res = defaultdict(list)
>>> for item in uf:
...     res[uf.find(item)].append(item)
...
>>> res.values()
[['G', 'H', 'M', 'L', 'O', 'Q', 'P', 'R', 'Y'], ['A', 'C', 'B', 'E',
'D', 'F', 'X', 'Z']]
>>>
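For readers who don't have the union_find module to hand, the same grouping can be sketched with a minimal dict-based disjoint-set (path halving, no union by rank) using only the standard library:

```python
from collections import defaultdict

def find(parent, x):
    # Follow parent links to the root, halving the path as we go.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, x, y):
    # Graft one root onto the other.
    parent[find(parent, x)] = find(parent, y)

sets = (['A', 'B', 'E', 'F'], ['G', 'H', 'L', 'P', 'Q'],
        ['C', 'D', 'E', 'F'], ['E', 'X', 'Z'],
        ['L', 'M', 'R'], ['O', 'M', 'Y'])

parent = {}
for a_set in sets:
    for item in a_set:
        parent.setdefault(item, item)   # new items start as their own root
    for item in a_set[1:]:
        union(parent, a_set[0], item)   # n-1 joins per set of size n

supersets = defaultdict(set)
for item in parent:
    supersets[find(parent, item)].add(item)
# supersets.values() holds {A,B,C,D,E,F,X,Z} and {G,H,L,M,O,P,Q,R,Y}
```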


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 15:36:42 +, Dave Farrance writes:
>Laura Creighton  wrote:
>
>>I don't understand 'an interpreter rather than a JIT'.  PyPy has a
>>JIT, that sort of is the whole point.
>
>Yes.  I meant that from my end-user, non-software-engineer perspective, it
>looked as though CPython was proceeding with leaps and bounds and that
>PyPy remained mostly a proof-of-concept after several years.

Oh no, that was the state of the world about 8 years ago.  PyPy works
great, and more and more people are using it in production all the
time.

>But thanks for your description of the issues.  So once the core issues
>are sorted out, if enough people can be found to work on it, then
>hopefully the library conversions will follow apace.

Well, most libraries just run, of course. It's the ones that are written
in C that cause most of the problems.

One of the problems, from a pure Have-PyPy-Take-Over-the-World
perspective is that as things were moving along quite merrily,
the CPython core developers decided to invent Python 3.0.  This
meant that everybody on the planet, who wanted their library to
work with Python 3.0 had to convert it to work there.

There was, and still is, an enormous amount of resentment about this.
For a lot of people, the perception was, and still is, that the benefits
of Python 3.x over Python 2.x were not worth breaking backwards
compatibility.  And there still are plenty of places whose plan is
to use Python 2.7 indefinitely into the far future.  I've got 15
years' worth of commercial Python code out there in the world, and
nobody wants it converted enough to pay me to do so.  Their position
is that it runs quite well enough as it is.  I'm sure not
going to convert the stuff for fun.  Practically every Python consultant on
the planet is in the same boat.

Things will get converted when there is a compelling business argument
to do so, but not before, and for a lot of code the answer is never.

Given the nature of this political problem, it is not surprising that
the proponents of Python 3.0 spent a lot of effort talking about the
benefits, and praising the people who converted their stuff, and
making a huge effort in the public relations lines.  The whole thing
could have blown up in their faces, as is quite common when you decide
to make the 'second version' of a language.  It happened to Perl.  So
this creates buzz and warm feelings about Python 3.0.

In contrast, on the PyPy team, there is nobody who doesn't consider
public relations and marketing and 'creating the warm fuzzy feelings in
the users' somewhere between 'unpleasant duty' and 'sheer torture'.
The set of human skills you need to have to be good at this sort of
thing is not a set that we have, either collectively or in individuals.
We're much more into 'letting the code speak for itself', which, of
course, it does not do.

A lot of us, indeed were raised on a philosophy that it is morally wrong
to try to influence people.  You can give them options, and you can
even explain the options that you are giving them, and you can argue
with others in favour of certain options, but when it comes right down to
it, everybody has to make their own decision.

This is all well and virtuous, but the fact of the matter is that a large
number of people aren't competent to make their own decisions, and even
among those that are, there exist a large number who very
much don't want to do such a thing.  If you are trying to get such people
to adopt your software, you need to provide a completely different
experience.  They need to feel comfortable, and safe, and among a
large community, and, well, I don't know what else they want.  That is
part of the problem.  I am pretty sure that what they want is something
that I never pay a lot of attention to. I mean, I'm a charter member of
the 'always-sacrifice-comfort-in-order-to-have-fun-and-interesting-times'
club.  And my marketing skills, such as they are, are much above average
for the PyPy gang - though some members are learning a bit, slowly, through
necessity.  But you notice that we have only 1 blog, and things are added
to it very slowly.  There are people all over planet python who blog about
things every week, for fun.  There is no way we can compete with them.

So, until some people with such skills decide to take an interest in
PyPy, our marketing effort is going to limp. And I personally feel
pretty bad about asking some poor soul who has just made his C extension
work with 3.0 to go back and do it _again_ in a more PyPy friendly way.

But if Armin gets the Transactional Memory to be usable in a robust
way, (as opposed to now where it is only a bit more than a proof of
concept) then things could rocket off again.  Because one thing we
do know is that people who are completely and utterly ignorant about
whether having multiple cores will improve their code still want to
use a language that lets them use the multiple processors.  If the
TM dream of having that just happen, seamlessly (again, no promises) is
proven to be true, well ... we think that the hordes will suddenly be
interested in PyPy.

Laura
--
https://mail.python.org/mailman/listinfo/python-list

Question on asyncio

2015-02-22 Thread pfranken85
Hello!

I am just trying to get familiar with asyncio. It seems to be a good thing;
however, I am still having trouble and feel pretty puzzled, although I think I
got the point of what async IO means. This is the task I am trying to
accomplish:

I have some functions which are reading values from hardware. If one of the 
values changes, I want a corresponding notification to the connected clients. 
The network part shouldn't be the problem. Here is what I got so far:

@asyncio.coroutine
def check():
  old_val = read_value_from_device()
  yield from asyncio.sleep(2)
  new_val = read_value_from_device()
  # we may have fluctuations, so we introduce a threshold
  if abs(new_val-old_val) > 0.05:
  return new_val
  else:
  return None
  
@asyncio.coroutine
def runner():
  while 1:
new = yield from check()
print(new)
  
loop = asyncio.get_event_loop()
loop.run_until_complete(runner())


Is this the way one would accomplish this task? Or are there better ways? 
Should read_value_from_device() be a @coroutine as well? It may contain parts 
that take a while ... Of course, instead of print(new) I would add the 
corresponding calls for notifying the client about the update.

Thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list


id() and is operator

2015-02-22 Thread LJ
Hi everyone. Quick question here. Lets suppose if have the following numpy 
array:

b=np.array([[0]*2]*3)

and then:

>>> id(b[0])
4582
>>> id(b[1])
45857512
>>> id(b[2])
4582

Please correct me if I am wrong, but according to this b[2] and b[0] are the 
same object. Now,

>>> b[0] is b[2]
False


Any clarification is much appreciated.

Cheers,
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 09:53:33 -0800, LJ writes:
>Hi everyone. Quick question here. Lets suppose if have the following numpy 
>array:
>
>b=np.array([[0]*2]*3)
>
>and then:
>
>>>> id(b[0])
>4582
>>>> id(b[1])
>45857512
>>>> id(b[2])
>4582
>
>Please correct me if I am wrong, but according to this b[2] and b[0] are the 
>same object. Now,
>
>>>> b[0] is b[2]
>False


You are running into one of the peculiarities of Python's representation
of numbers.  For efficiency, the interpreter may keep a single shared
object for each common small number.

So.

Python 2.7.9 (default, Dec 11 2014, 08:58:12)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1
>>> b = 1
>>> a is b
True
>>> a = 1001
>>> b = 1001
>>> a is b
False


Don't rely on this.  Other implementations are free to implement this
however they like.


[PyPy 2.4.0 with GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>> a = 1001
>>>> b = 1001
>>>> a is b
True

Laura


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to design a search engine in Python?

2015-02-22 Thread subhabangalore
On Sunday, February 22, 2015 at 2:42:48 PM UTC+5:30, Laura Creighton wrote:
> In a message of Sat, 21 Feb 2015 22:07:30 -0800,  write
> >Dear Sir,
> >
> >Thank you for your kind suggestion. Let me traverse one by one. 
> >My special feature is generally Semantic Search, but I am trying to build
> >a search engine first and then go for semantic I feel that would give me a 
> >solid background to work around the problem. 
> >
> >Regards,
> >Subhabrata. 
> 
> You may find the API docs surrounding rdelbru.github.io/SIREn/
> of interest then.
> 
> Laura Creighton

Dear Madam,

Thank you for your kind help. I would surely check then. 

Regards,
Subhabrata. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Accessible tools

2015-02-22 Thread Jacob Kruger
- Original Message - 
From: "Tim Chase" 

Subject: Re: Accessible tools




While my experience has shown most of your items to be true, I'd
contend that


• Do not have access to debugging tools.


is mistaken or at least misinformed.  For Python, I use the "pdb"
module all the time, and it's command-line driven.  Combined with a
multi-terminal (whether multiple windows, virtual consoles, or a
tmux/screen session), I can easily bounce back and forth between a
"pdb" debugging session and the source code to make edits.  Just to
check, I fired up the "yasr" terminal screen-reader, launched tmux
(using my quiet config, since it updates information on the screen
like the time on a regular basis, making it chatty), and stepped
through some Python code, checked variables, and walked up/down the
call-stack.  I know most other languages have similar functionality
such as gdb for C code.
Will check out pdb a bit more, but, honestly, the Windows screen reader that I 
use most of the time, JAWS, doesn't always cooperate perfectly with the command 
line/console interface - it can be worked around, but not all that easily at 
times - but this page seems to offer enough detail relating to pdb to 
start off with anyway:

https://docs.python.org/3/library/pdb.html

Jacob Kruger
Blind Biker
Skype: BlindZA
"Roger Wilco wants to welcome you...to the space janitor's closet..."


--
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Mario Figueiredo
On Sat, 21 Feb 2015 12:22:58 +, Mark Lawrence
 wrote:

>
>Use your context manager at the outer level.
>
>import sqlite3 as lite
>
>try:
> with lite.connect('data.db') as db:
> try:
> db.execute(sql, parms)
> except lite.IntegrityError:
> raise ValueError('invalid data')
>except lite.DatabaseError:
> raise OSError('database file corrupt or not found.')

The sqlite context manager doesn't close a database connection on
exit.  It only ensures that commits and rollbacks are performed.
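A quick sketch illustrating the difference, using an in-memory database: the connection's own context manager only wraps a transaction, while wrapping the connection in contextlib.closing gives close-on-exit semantics.

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(':memory:')
with conn:  # commits on success, rolls back on error -- but never closes
    conn.execute('CREATE TABLE t (x INTEGER)')
    conn.execute('INSERT INTO t VALUES (1)')

# The connection is still open and usable after the with-block:
rows = conn.execute('SELECT x FROM t').fetchall()

# For close-on-exit behaviour, use contextlib.closing():
with closing(sqlite3.connect(':memory:')) as db:
    with db:  # the inner 'with' still handles the transaction
        db.execute('CREATE TABLE t (x INTEGER)')
# db is closed here; further use raises sqlite3.ProgrammingError
```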
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Paul Rubin
Laura Creighton  writes:
> Because one thing we do know is that people who are completely and
> utterly ignorant about whether having multiple cores will improve
> their code still want to use a language that lets them use the
> multiple processors.  If the TM dream of having that just happen,
> seemlessly (again, no promises) is proven to be true, well   we
> think that the hordes will suddenly be interested in PyPy.

TM is a useful feature but it's unlikely to be the thing that attracts
"the hordes".  More important is to eliminate the GIL and hopefully have
lightweight (green) threads that can still run on multiple cores, like
in GHC and Erlang.  Having them communicate by mailboxes/queues is
sufficient most of the time, and in Erlang it's the only method allowed
in theory (there are some optimizations taking place behind the scenes).
TM hasn't gotten that much uptake in GHC (one of the earliest HLL
implementations of TM) in part because its performance cost is
significant when there's contention.  I wonder if Clojure programmers
use it more.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Mark Lawrence

On 22/02/2015 18:41, Mario Figueiredo wrote:

On Sat, 21 Feb 2015 12:22:58 +, Mark Lawrence
 wrote:



Use your context manager at the outer level.

import sqlite3 as lite

try:
 with lite.connect('data.db') as db:
 try:
 db.execute(sql, parms)
 except lite.IntegrityError:
 raise ValueError('invalid data')
except lite.DatabaseError:
 raise OSError('database file corrupt or not found.')


The sqlite context manager doesn't close a database connection on
exit. It only ensures, commits and rollbacks are performed.



Where in the documentation does it state that?  If it does, it certainly 
breaks my expectations, as I understood the whole point of Python 
context managers is to do the tidying up for you.  Or have you misread 
what it says here 
https://docs.python.org/3/library/sqlite3.html#using-the-connection-as-a-context-manager 
?


>>> import sqlite3
>>> with sqlite3.connect(r'C:\Users\Mark\Documents\Cash\Data\cash.sqlite') as db:
...     db.execute('select count(*) from accounts')
...
>>> db.close()
>>>

Looks like you're correct.  Knock me down with a feather, Clevor Trevor.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Accessible tools

2015-02-22 Thread Tim Chase
On 2015-02-22 20:29, Jacob Kruger wrote:
> jaws, doesn't always cooperate perfectly with command line/console
> interface

I've heard that on multiple occasions.  Since I mostly work with
Linux, the only terminal-with-screen-reader hints I've heard involve
using TeraTerm as the SSH client with NVDA on Windows.  Sorry I can't
be of much more assistance there, since I haven't really used Windows
for years.

> this page seems to offer enough detail relating to PDB, to start
> off with anyway: https://docs.python.org/3/library/pdb.html

My general process is as follows:

1) add the following line some place before where I want to debug:

   import pdb; pdb.set_trace()

2) run my program

3) when it hits that line of code, it drops you to the debugging
prompt

4) poke around, using "print" followed by the thing(s) I want to
inspect such as

  print dir(something)
  print my_var.some_field

5) step to the next line of code with "n" or step into a
function/method call with "s" or exit/return from the current
function/method with "r".  I'll also use "l" (ell) to list some
context around the current line so I can tell where I am in the
source code.  This is particularly helpful as I can use the
corresponding line-number(s) to jump to the same line in my editor if
I spot the offending line of code, then edit it.

Very rarely, I'll actually set additional breakpoints (optionally
making them conditional), but I *always* have to look up the syntax
for that.

Hopefully that gets you at least to the point where debugging isn't
some laborious task.  Best wishes,

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 5:13 AM, Laura Creighton  wrote:
> In a message of Sun, 22 Feb 2015 09:53:33 -0800, LJ writes:
>>Hi everyone. Quick question here. Lets suppose if have the following numpy 
>>array:
>>
>>b=np.array([[0]*2]*3)
>>
>>and then:
>>
>>>>> id(b[0])
>>4582
>>>>> id(b[1])
>>45857512
>>>>> id(b[2])
>>4582
>>
>>Please correct me if I am wrong, but according to this b[2] and b[0] are the 
>>same object. Now,
>>
>>>>> b[0] is b[2]
>>False
>
>
> You are running into one of the peculiarities of the python representation
> of numbers.  It can make things more efficient to represent all common
> numbers as 'there is only one' of them.

That shouldn't break the correspondence between id() and the is
operator. The id function is documented as returning an integer which
is "guaranteed to be unique among simultaneously existing objects",
and if all three elements of b exist through the entire duration of
this experiment, it should be perfectly safe to compare their id()s to
check object identity.

So the only explanation I can think of is: When you subscript a numpy
array, you aren't getting back a reference to a pre-existing object,
but you are instead getting a brand new object which is being created
for you. (This theory is supported by a vague recollection that
subscripting a numpy array returns a view of some sort, but you'd have
to check the docs.) If that theory is correct, then you'd expect to
find that the id() of such a thing is not stable; and that is, in
fact, what I see:

>>> import numpy as np
>>> b=np.array([[0]*2]*3)
>>> id(b[0])
26806960
>>> id(b[0])
26655344
>>> id(b[0])
26820432
>>> id(b[0])
26806960
>>> id(b[0])
26655344

After a few iterations, they're getting reused, but it's not like
playing with a Python list, where you would be getting back the exact
same object every time.

You'd have to check the docs to be sure, but this is how I would go
about exploring the situation.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Gary Herron

On 02/22/2015 09:53 AM, LJ wrote:

Hi everyone. Quick question here. Lets suppose if have the following numpy 
array:

b=np.array([[0]*2]*3)

and then:


id(b[0])

4582

id(b[1])

45857512

id(b[2])

4582

Please correct me if I am wrong, but according to this b[2] and b[0] are the 
same object. Now,


b[0] is b[2]

False


Any clarification is much appreciated.

Cheers,



In fact, b[0] and b[2] are different objects as can be seen here:
>>> import numpy as np
>>> b=np.array([[0]*2]*3)
>>> b[0]=1  # broadcast into both ints in row 0
>>> b[1]=2  # ... row 1
>>> b[2]=3  # ... row 2
>>> b
array([[1, 1],
   [2, 2],
   [3, 3]])

When you extracted b[0], you got a newly created python/numpy object 
(1x2 array of ints) briefly stored at location  4582 but then 
deleted immediately after that use.  A little later, the extraction of 
b[2] used the same bit of memory.  The id of a temporarily created value 
is meaningless, and apparently misleading.


As a separate issue, each of b, b[0], b[1], and b[2] do *all* refer to 
the same underlying array of ints as can be seen here:

>>> r = b[0]
>>> r[0] = 123
>>> b
array([[123,   1],
   [  2,   2],
   [  3,   3]])


but the Python/numpy objects that wrap portions of that underlying array 
of ints are all distinct.
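The temporary-wrapper behaviour described above can be reproduced in pure
Python, without numpy, using hypothetical Row/Matrix classes whose
subscripting builds a fresh view object each time:

```python
class Row:
    """A stand-in for a numpy row view: a new wrapper over shared data."""
    def __init__(self, data):
        self.data = data

class Matrix:
    """Each subscript builds a brand-new Row over the same storage."""
    def __init__(self, rows):
        self.rows = rows
    def __getitem__(self, i):
        return Row(self.rows[i])

m = Matrix([[0, 0], [0, 0], [0, 0]])

# A bare id(m[i]) call builds a temporary Row that dies immediately, so
# the interpreter may (or may not) hand the next one the same address.
# Two simultaneously live Rows, however, are never the same object...
r0, r2 = m[0], m[2]
identical = r0 is r2        # always False

# ...yet each wrapper still aliases the shared underlying storage:
r0.data[0] = 123            # visible through m as well
```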



Gary Herron



--
Dr. Gary Herron
Department of Computer Science
DigiPen Institute of Technology
(425) 895-4418

--
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Dave Farrance
Dave Farrance  wrote:

>Steven D'Aprano  wrote:
>
>>I assume you're talking about drawing graphics rather than writing text. Can
>>you tell us which specific library or libraries won't run under PyPy?
>
>Yes, mainly the graphics.  I'm a hardware engineer, not a software
>engineer, so I might well be misunderstanding PyPy's current capability.
>
>For easy-to-use vector graphics output, like 1980s BASIC computers, I've
>settled on Pygame.  CPython libraries that I've used for other reasons
>include Scipy, Matplotlib, PIL, CV2, and Kivy.

I see that PyPy's website says that PIL (Pillow) works.  I have so far
only used Python libraries that were readily available as binaries for
Windows, or were already available in Linux distro repositories.  In
Ubuntu, for example, Pillow is available for CPython but not PyPy.  Is
there a guide to tell me (in non-developer language, hopefully) how to
install Pillow for PyPy on Ubuntu?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Skip Montanaro
On Sun, Feb 22, 2015 at 12:41 PM, Mario Figueiredo  wrote:
> The sqlite context manager doesn't close a database connection on
> exit. It only ensures, commits and rollbacks are performed.

Sorry, I haven't paid careful attention to this thread, so perhaps
this has already been suggested, however... Can't you write your own
class which delegates to the necessary sqlite3 bits and has a context
manager with the desired behavior? Thinking out loud, you could define
a ConnectionMgr class which accepts a sqlite3 connection as a
parameter:

class ConnectionMgr(object):
  def __init__(self, conn):
self.conn = conn

  def __enter__(self):
...

  def __exit__(self, type, value, exception):
if self.conn is not None:
  ... close self.conn connection here ...
self.conn = None

  def __getattr__(self, attr):
return getattr(self.conn, attr)

then...

  try:
with ConnectionMgr(lite.connect('data.db')) as db:
  ...
  except lite.DatabaseError:
...

Might also have to __enter__ and __exit__ self.conn as appropriate.
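Filling in the elided bodies, a complete (hypothetical) version of the
sketch above might look like this; the __exit__ here closes the connection
unconditionally and does not suppress exceptions:

```python
import sqlite3 as lite

class ConnectionMgr(object):
    def __init__(self, conn):
        self.conn = conn

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if self.conn is not None:
            self.conn.close()       # close on exit, success or failure
            self.conn = None
        return False                # don't swallow exceptions

    def __getattr__(self, attr):
        return getattr(self.conn, attr)   # delegate everything else

with ConnectionMgr(lite.connect(':memory:')) as db:
    db.execute('CREATE TABLE t (x INTEGER)')
    db.execute('INSERT INTO t VALUES (42)')
    result = db.execute('SELECT x FROM t').fetchall()
# the underlying connection is closed at this point
```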

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 11:02:29 -0800, Paul Rubin writes:
>Laura Creighton  writes:
>> Because one thing we do know is that people who are completely and
>> utterly ignorant about whether having multiple cores will improve
>> their code still want to use a language that lets them use the
>> multiple processors.  If the TM dream of having that just happen,
>> seemlessly (again, no promises) is proven to be true, well   we
>> think that the hordes will suddenly be interested in PyPy.
>
>TM is a useful feature but it's unlikely to be the thing that attracts
>"the hordes".  More important is to eliminate the GIL and hopefully have
>lightweight (green) threads that can still run on multiple cores, like
>in GHC and Erlang.  Having them communicate by mailboxes/queues is
>sufficient most of the time, and in Erlang it's the only method allowed
>in theory (there are some optimizations taking place behind the scenes).
>TM hasn't gotten that much uptake in GHC (one of the earliest HLL
>implementations of TM) in part because its performance cost is
>significant when there's contention.  I wonder if Clojure programmers
>use it more.

The GIL isn't going away from PyPy any time real soon, alas.  Armin has
some pretty cool ideas about what to do about contention, but if
you want to hear them, its better if you go post that to pypy-...@python.org
so you can get it from the man directly rather that hearing my
paraphrase.  Or ask away on the #pypy channel on freenode ...

But this reminds me that I have to get Lennart Augustsson and Armin
Rigo in the same room some time.  Should be fun.

Laura

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Laura Creighton
Ooops, I missed the numpy, so I thought that it was the contents
of the array that was causing the problem.  My very bad.  Apologies.

Laura

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Paul Rubin
Laura Creighton  writes:
> The GIL isn't going away from PyPy any time real soon, alas.

I thought the GIL's main purpose was to avoid having to lock all the
CPython refcount updates, so if PyPy has tracing GC, why is there still
a GIL?  And how is TM going to help with parallelism if the GIL is still
there?

> Armin has some pretty cool ideas about what to do about contention,
> but if you want to hear them, its better if you go post that to
> pypy-...@python.org...  Or ask away on the #pypy channel on freenode

It would be nice if he blogged something about them.

> But this reminds me that I have to get Lennart Augustsson and Armin
> Rigo in the same room some time.  Should be fun.

I thought the STM stuff in GHC was done by the Simons.  Armin should
certainly have Simon Marlow's book about concurrency and Haskell:

http://chimera.labs.oreilly.com/books/123000929/index.html
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Ethan Furman
On 02/22/2015 05:13 AM, Cem Karan wrote:

> Output:
> From Evil Zombie: Surprise!
> From Your Significant Other: Surprise!
> 
> In this case, the user made an error (just as Marko said in his earlier 
> message),
> and forgot about the callback he registered with the library.  The callback 
> isn't
> really rising from the dead; as you say, either its been garbage collected, 
> or it
> hasn't been.  However, you may not be ready for a callback to be called at 
> that
> moment in time, which means you're surprised by unexpected behavior.

But the unexpected behavior is not a problem with Python, nor with your library 
-- it's a bug in the fellow-programmer's
code, and you can't (or at least shouldn't) try to prevent those kinds of bugs 
from manifesting -- they'll just get
bitten somewhere else by the same bug.

--
~Ethan~



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 4:02 PM, Ethan Furman  wrote:

> On 02/22/2015 05:13 AM, Cem Karan wrote:
> 
>> Output:
>> From Evil Zombie: Surprise!
>> From Your Significant Other: Surprise!
>> 
>> In this case, the user made an error (just as Marko said in his earlier 
>> message),
>> and forgot about the callback he registered with the library.  The callback 
>> isn't
>> really rising from the dead; as you say, either its been garbage collected, 
>> or it
>> hasn't been.  However, you may not be ready for a callback to be called at 
>> that
>> moment in time, which means you're surprised by unexpected behavior.
> 
> But the unexpected behavior is not a problem with Python, nor with your 
> library -- it's a bug in the fellow-programmer's
> code, and you can't (or at least shouldn't) try to prevent those kinds of 
> bugs from manifesting -- they'll just get
> bitten somewhere else by the same bug.

I agree with you, but until a relatively new programmer has gotten used to what 
callbacks are and what they imply, I want to make things easy.  For example, if 
the API subclasses collections.abc.MutableSet, and the documentation states 
that you can only add callbacks to this particular type of set, then a new 
programmer will naturally decide that either a) they need to dispose of the 
set, and if that isn't possible, then b) they need to delete their callback 
from the set.  It won't occur to them that their live object will just 
magically 'go away'; it's a member of a set!

My goal is to make things as pythonic (whatever that means in this case) and 
obvious as possible.  Ideally, a novice can more or less guess what will happen 
with my API without really having to read the documentation on it.  

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question on asyncio

2015-02-22 Thread Marko Rauhamaa
pfranke...@gmail.com:

> I have some functions which are reading values from hardware. If one
> of the values changes, I want a corresponding notification to the
> connected clients. The network part shouldn't be the problem. Here is
> what I got so far:
>
> @asyncio.coroutine
> def check():
>   old_val = read_value_from_device()
>   yield from asyncio.sleep(2)
>   new_val = read_value_from_device()
>   # we may have fluctuations, so we introduce a threshold
>   if abs(new_val-old_val) > 0.05:
>   return new_val
>   else:
>   return None
>   
> @asyncio.coroutine
> def runner():
>   while 1:
> new = yield from check()
> print(new)

In asyncio, you typically ignore the value returned by yield. While
generators use yield to communicate results to the calling program,
coroutines use yield only as a "trick" to implement cooperative
multitasking and an illusion of multithreading.

Thus, "yield from" in asyncio should be read, "this is a blocking
state."

> Is this the way one would accomplish this task? Or are there better
> ways? Should read_value_from_device() be a @coroutine as well? It may
> contain parts that take a while ... Of course, instead of print(new) I
> would add the corresponding calls for notifying the client about the
> update.

How do you read a value from the hardware? Do you use a C extension? Do
you want read_value_from_device() to block until the hardware has the
value available or is the value always available for instantaneous
reading?

If the value is available instantaneously, you don't need to turn it
into a coroutine. However, if blocking is involved, you definitely
should do that. Depending on your hardware API it can be easy or
difficult. If you are running CPython over linux, hardware access
probably is abstracted over a file descriptor and a coroutine interface
would be simple.
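For the blocking case, one sketch (in the modern async/await spelling; the
2015-era @asyncio.coroutine / yield-from style is structurally the same)
pushes the blocking read onto a thread-pool executor so the event loop
stays responsive.  read_value_from_device here is a hypothetical stand-in
for the real hardware call:

```python
import asyncio
import time

def read_value_from_device():
    """Hypothetical stand-in for a blocking hardware read."""
    time.sleep(0.01)
    return 42.0

async def check(threshold=0.05):
    loop = asyncio.get_running_loop()
    # run_in_executor keeps the blocking call off the event loop thread
    old_val = await loop.run_in_executor(None, read_value_from_device)
    await asyncio.sleep(0.05)
    new_val = await loop.run_in_executor(None, read_value_from_device)
    return new_val if abs(new_val - old_val) > threshold else None

result = asyncio.run(check())   # None here: the fake device never changes
```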


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Marko Rauhamaa
LJ :

>>>> id(b[0])
> 4582
[...]
>>>> id(b[2])
> 4582
>
> Please correct me if I am wrong, but according to this b[2] and b[0]
> are the same object. Now,
>
>>>> b[0] is b[2]
> False

This is a true statement:

   If X is Y, then id(X) == id(Y).

However, this is generally not a true statement:

   If X is Y, then id(X) is id(Y).


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Marko Rauhamaa
Cem Karan :

> My goal is to make things as pythonic (whatever that means in this
> case) and obvious as possible. Ideally, a novice can more or less
> guess what will happen with my API without really having to read the
> documentation on it.

If you try to shield your user from the complexities of asynchronous
programming, you will only cause confusion. You will definitely need to
document all nooks and crannies of the semantics of the callback API and
your user will have to pay attention to every detail of your spec.

Your user, whether novice or an expert, will thank you for your
unambiguous specification even if it is complicated.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 8:25 AM, Marko Rauhamaa  wrote:
> This is a true statement:
>
>If X is Y, then id(X) == id(Y).
>
> However, this is generally not a true statement:
>
>If X is Y, then id(X) is id(Y).

Irrelevant, because the identities of equal integers didn't come into this.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Laura Creighton
Good news  -- it seems to be working fine with PyPy.
https://travis-ci.org/hugovk/Pillow/builds

for me, not extensively tested, it just seems to be working.

I have several pypy's floating around here, each within its own
virtualenv.  If you aren't familiar with virtualenv, read all
about it here:
http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/

Note the first question to the blog writer is 'how to get it to work
with pypy'.  Do what he says.  virtualenv -p /path/to/pypy env
but, if you want to use more bleeding edge pypy you will want:

# from a tarball
$ virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env

# from the mercurial checkout
$ virtualenv -p /path/to/pypy/pypy/translator/goal/pypy-c my-pypy-env

I've only got bleeding edge PyPys around here, in virtualenvs, but
in all of them

import sys
from PIL import Image

for infile in sys.argv[1:]:
    try:
        with Image.open(infile) as im:
            print(infile, im.format, "%dx%d" % im.size, im.mode)
    except IOError:
        pass

which I pasted right in from
http://pillow.readthedocs.org/en/latest/handbook/tutorial.html

seems to be working just fine for me.  Hardly an exhaustive test,
but ... well, try it and see how it goes for you.

I don't know what time it is where you are, but it is 22:44 here now, and
alas I promised a kivy demo to a client tomorrow morning, and, double
alas, I haven't written it yet.  It shouldn't take more than an hour or
three to write, but I am going to have to stop having pleasant chats
about pypy for a while and get this thing done ... :)

Laura




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Cem Karan

On Feb 22, 2015, at 4:34 PM, Marko Rauhamaa  wrote:

> Cem Karan :
> 
>> My goal is to make things as pythonic (whatever that means in this
>> case) and obvious as possible. Ideally, a novice can more or less
>> guess what will happen with my API without really having to read the
>> documentation on it.
> 
> If you try to shield your user from the complexities of asynchronous
> programming, you will only cause confusion. You will definitely need to
> document all nooks and crannies of the semantics of the callback API and
> your user will have to pay attention to every detail of your spec.
> 
> Your user, whether novice or an expert, will thank you for your
> unambiguous specification even if it is complicated.

Documentation is a given; it MUST be there.  That said, documenting something, 
but still making it surprising, is a bad idea.  For example, several people 
have been strongly against using a WeakSet to hold callbacks because they 
expect a library to hold onto callbacks.  If I chose not to do that, and used a 
WeakSet, then even if I documented it, it would still end up surprising people 
(and from the sound of it, more people would be surprised than not).
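A minimal sketch of the surprise being described (the names here are hypothetical, and the deterministic second result relies on CPython's prompt refcounting plus an explicit gc.collect()): if a WeakSet is the only thing holding a callback, dropping the caller's reference silently unregisters it.

```python
import gc
import weakref

class Emitter:
    """Toy event source that holds its callbacks weakly."""
    def __init__(self):
        self._callbacks = weakref.WeakSet()

    def register(self, cb):
        self._callbacks.add(cb)

    def fire(self):
        return [cb() for cb in self._callbacks]

emitter = Emitter()

def handler():
    return "called"

emitter.register(handler)
print(emitter.fire())      # ['called'] — the handler is still alive

del handler                # drop the only strong reference
gc.collect()               # make collection deterministic for the demo
print(emitter.fire())      # [] — the callback silently disappeared
```

This is exactly the behavior that is obvious once documented but startling the first time it happens.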

Thanks,
Cem Karan
-- 
https://mail.python.org/mailman/listinfo/python-list


calling subprocess

2015-02-22 Thread jkuplinsky
Hi,

I thought this would be easy:


for subprocess import call
call (['cd', r'C:\apps'], shell = True)


It doesn't work -- tried with/without prefix r, escaped backslashes, triple 
quotes, str(), .. nothing seems to work (it doesn't complain, but it doesn't 
change directories either) -- what am I doing wrong?

Thanks.

JK
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 12:14:45 -0800, Paul Rubin writes:
>Laura Creighton  writes:
>> The GIL isn't going away from PyPy any time real soon, alas.
>
>I thought the GIL's main purpose was to avoid having to lock all the
>CPython refcount updates, so if PyPy has tracing GC, why is there still
>a GIL?  And how is TM going to help with parallelism if the GIL is still
>there?

This requires a long answer.  A very long answer.
More later, I must work this evening.

>> Armin has some pretty cool ideas about what to do about contention,
>> but if you want to hear them, its better if you go post that to
>> pypy-...@python.org...  Or ask away on the #pypy channel on freenode
>
>It would be nice if he blogged something about them.

You are asking for water to roll up-hill.  If you want the joy of
hearing the cool ideas as Armin has them, you need to hang out on
the irc channel.  Of course, if you are interested in such things
this makes hanging out there worthwhile.

>> But this reminds me that I have to get Lennart Augustsson and Armin
>> Rigo in the same room some time.  Should be fun.
>
>I thought the STM stuff in GHC was done by the Simon's.  Armin should
>certainly have Simon Marlow's book about concurrency and Haskell:

Of course, but if you think that Lennart Augustsson is not familiar
with every aspect of every Haskell compiler on the planet  well
then I know Lennart better than you do.  And given that Lennart is
a friend, well really a good friend of my lover and a something-better-
than-an-acquaintance with me  I should make the effort to get these
two under the same roof (mine, by preference) for the fun of the
experience.

So thank you for giving me this idea ...

Laura


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread Tim Golden

On 22/02/2015 22:06, jkuplin...@gmail.com wrote:

Hi,

I thought this would be easy:


for subprocess import call
call (['cd', r'C:\apps'], shell = True)


It doesn't work -- tried with/without prefix r, escaped backslashes,
triple quotes, str(), .. nothing seems to work (it doesn't complain,
but it doesn't change directories either) -- what am I doing wrong?


Two things:

1) All you're doing is running up a subprocess, changing directory 
*within it* and then closing it. (Altho' good job spotting that you'd 
need shell=True if you did indeed want to do what you're doing here).


2) Use os.chdir: that's what it's there for
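A sketch of both workable options, run inside a temporary directory so it executes anywhere: os.chdir changes this process's working directory, while subprocess's cwd= parameter runs the child somewhere else without touching your own.

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as target:
    # Option 1: os.chdir changes *this* process's working directory.
    old = os.getcwd()
    os.chdir(target)
    assert os.path.samefile(os.getcwd(), target)
    os.chdir(old)                      # restore

    # Option 2: leave our directory alone and run the child
    # elsewhere via subprocess's cwd= parameter.
    child_cwd = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        cwd=target, capture_output=True, text=True,
    ).stdout.strip()
    assert os.path.samefile(child_cwd, target)
    assert os.path.samefile(os.getcwd(), old)   # parent unaffected
```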

TJG
--
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 9:06 AM,   wrote:
> I thought this would be easy:
>
>
> for subprocess import call
> call (['cd', r'C:\apps'], shell = True)
>
>
> It doesn't work -- tried with/without prefix r, escaped backslashes, triple 
> quotes, str(), .. nothing seems to work (it doesn't complain, but it doesn't 
> change directories either) -- what am I doing wrong?

It does work. But what it does is spawn a shell, change the working
directory _of that shell_, and then terminate it. The working
directory change won't apply to your process.

It sounds to me like you're adopting a "shotgun debugging" [1]
approach. If you don't know what your changes are going to do, why are
you making them? I recommend, instead, a policy of examination and
introspection - what I tend to refer to as IIDPIO debugging: If In
Doubt, Print It Out. The first argument to subprocess.call() is a list
of parameters; you can assign that to a name and print it out before
you do the call:

from subprocess import call
args = ['cd', r'C:\apps']
print(args)
call(args, shell=True)

Now do all your permutations of args, and see what effect they have.
If the printed-out form is identical, there's no way it can affect the
subprocess execution.

Side point: *Copy and paste* your code rather than retyping it. The
keyword here is "from", not "for", and if we can't depend on the code
you're showing us, how can we recognize whether or not a bug exists in
your real code?

ChrisA

[1] http://www.catb.org/jargon/html/S/shotgun-debugging.html
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Laura Creighton
In a message of Sun, 22 Feb 2015 17:09:01 -0500, Cem Karan writes:

>Documentation is a given; it MUST be there.  That said, documenting
>something, but still making it surprising, is a bad idea.  For
>example, several people have been strongly against using a WeakSet to
>hold callbacks because they expect a library to hold onto callbacks.
>If I chose not to do that, and used a WeakSet, then even if I
>documented it, it would still end up surprising people (and from the
>sound of it, more people would be surprised than not).

>Thanks, Cem Karan

No matter what you do, alas, will surprise the hell out of people
because callbacks do not behave as people expect.  Among people who
have used callbacks, what you are polling is 'what are people
familiar with', and it seems for the people around here, now,
WeakSets are not what they are familiar with.

But that is not so surprising.  How many people use WeakSets for
_anything_?  I've never used them, aside from 'ooh! cool shiny
new language feature!  Let's kick it around the park!'  That people
aren't familiar with WeakSets doesn't mean all that much.

The question I have is does this architecture make things harder,
easier or about the same to debug?  To write tests for? to do Test
Driven Design with?

Laura
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Design thought for callbacks

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 9:29 AM, Laura Creighton  wrote:
> But that is not so surprising.  How many people use WeakSets for
> _anything_?  I've never used them, aside from 'ooh! cool shiny
> new language feature!  Let's kick it around the park!'  That people
> aren't familiar with WeakSets doesn't mean all that much.

I haven't used weak *sets*, but I've used weak *mappings* on occasion.
It's certainly not a common thing, but they have their uses. I have a
MUD which must guarantee that there be no more than one instance of
any given room (identified by a string that looks like a Unix path),
but which will, if it can, flush rooms out of memory when nothing
refers to them. So it has a mapping from the path strings to the
instances, but with weak refs for the instances; if anything else is
referring to that instance (eg a player character in the room), it'll
hang around, and any time anyone else needs that room, they'll get the
same instance back from the mapping; but any time the garbage
collector notices that a room can be disposed of, it will be.

Definitely not common though.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Mario Figueiredo
On Sun, 22 Feb 2015 19:07:03 +, Mark Lawrence
 wrote:

>
>Looks like you're correct.  Knock me down with a feather, Clevor Trevor.

It took me by surprise when I first encountered it too. The rationale
apparently is that the context manager is strictly a transactional
feature, allowing for multiple context managers within the same
connection to properly perform commits and rollbacks on multiple
transactions.
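Concretely (standard-library sqlite3, with an in-memory database so the snippet is self-contained): the connection's context manager commits on success and rolls back on error, but leaves the connection open.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# The context manager wraps a *transaction*: commit on success...
with conn:
    conn.execute("INSERT INTO t VALUES (1)")

# ...and rollback on failure:
try:
    with conn:
        conn.execute("INSERT INTO t VALUES (2)")
        raise RuntimeError("boom")
except RuntimeError:
    pass

# ...but it does NOT close the connection.
print(conn.execute("SELECT x FROM t").fetchall())   # [(1,)] — still usable
conn.close()
```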
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Mario Figueiredo
On Sun, 22 Feb 2015 13:15:09 -0600, Skip Montanaro
 wrote:

>
>Sorry, I haven't paid careful attention to this thread, so perhaps
>this has already been suggested, however... Can't you write your own
>class which delegates to the necessary sqlite3 bits and has a context
>manager with the desired behavior? Thinking out loud, you could define
>a ConnectionMgr class which accepts a sqlite3 connection as a
>parameter

Indeed I could. Thank you. 
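A minimal sketch of what such a delegating wrapper might look like. The name ConnectionMgr is Skip's; the commit/rollback-then-close behavior on exit is an assumption about the desired semantics, not sqlite3's own.

```python
import sqlite3

class ConnectionMgr:
    """Context manager that closes the wrapped connection on exit."""
    def __init__(self, conn):
        self.conn = conn

    def __enter__(self):
        return self.conn

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.conn.commit()
        else:
            self.conn.rollback()
        self.conn.close()
        return False               # don't suppress exceptions

with ConnectionMgr(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    print(conn.execute("SELECT x FROM t").fetchall())   # [(1,)]
# conn is closed here; further use raises sqlite3.ProgrammingError
```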
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try pattern for database connection with the close method

2015-02-22 Thread Mario Figueiredo
On Sat, 21 Feb 2015 16:22:36 +0100, Peter Otten <__pete...@web.de>
wrote:

>
>Why would you care about a few lines? You don't repeat them, do you? Put the 
>code into a function or a context manager and invoke it with

Thanks for the suggestions that followed.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread jkuplinsky
On Sunday, February 22, 2015 at 5:22:24 PM UTC-5, Chris Angelico wrote:
> On Mon, Feb 23, 2015 at 9:06 AM,   wrote:
> > I thought this would be easy:
> >
> >
> > for subprocess import call
> > call (['cd', r'C:\apps'], shell = True)
> >
> >
> > It doesn't work -- tried with/without prefix r, escaped backslashes, triple 
> > quotes, str(), .. nothing seems to work (it doesn't complain, but it 
> > doesn't change directories either) -- what am I doing wrong?
> 
> It does work. But what it does is spawn a shell, change the working
> directory _of that shell_, and then terminate it. The working
> directory change won't apply to your process.
> 
> It sounds to me like you're adopting a "shotgun debugging" [1]
> approach. If you don't know what your changes are going to do, why are
> you making them? I recommend, instead, a policy of examination and
> introspection - what I tend to refer to as IIDPIO debugging: If In
> Doubt, Print It Out. The first argument to subprocess.call() is a list
> of parameters; you can assign that to a name and print it out before
> you do the call:
> 
> from subprocess import call
> args = ['cd', r'C:\apps']
> print(args)
> call(args, shell=True)
> 
> Now do all your permutations of args, and see what effect they have.
> If the printed-out form is identical, there's no way it can affect the
> subprocess execution.
> 
> Side point: *Copy and paste* your code rather than retyping it. The
> keyword here is "from", not "for", and if we can't depend on the code
> you're showing us, how can we recognize whether or not a bug exists in
> your real code?
> 
> ChrisA
> 
> [1] http://www.catb.org/jargon/html/S/shotgun-debugging.html

OK (1) sorry about for/from
(2) print() sounds nice, but fact is , no matter what I try, i always get 
C:\\apps instead of c:\apps. So in this sense print() doesn't help much. 
Obviously i'm doing something wrong -- which is what you perhaps call shotgun 
debugging; but that's why i'm asking. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Steven D'Aprano
Paul Rubin wrote:

> Laura Creighton  writes:
>> Because one thing we do know is that people who are completely and
>> utterly ignorant about whether having multiple cores will improve
>> their code still want to use a language that lets them use the
>> multiple processors.  If the TM dream of having that just happen,
>> seamlessly (again, no promises) is proven to be true, well   we
>> think that the hordes will suddenly be interested in PyPy.
> 
> TM is a useful feature but it's unlikely to be the thing that attracts
> "the hordes".  More important is to eliminate the GIL 

*rolls eyes*

I'm sorry, but the instant somebody says "eliminate the GIL", they lose
credibility with me. Yes yes, I know that in *your* specific case you've
done your research and (1) multi-threaded code is the best solution for
your application and (2) alternatives aren't suitable.

Writing multithreaded code is *hard*. It is not a programming model which
comes naturally to most human beings. Very few programs are inherently
parallelizable, although many programs have *parts* which can be
successfully parallelized. 

I think that for many people, "the GIL" is just a bogeyman, or is being
blamed for their own shortcomings. To take an extreme case, if you're
running single-thread code on a single-core machine and still complaining
about the GIL, you have no clue.

(That's not *you personally* Paul, it's a generic "you".)

There are numerous alternatives for those who are genuinely running into
GIL-related issues. Jeff Knupp has a good summary:

http://www.jeffknupp.com/blog/2013/06/30/pythons-hardest-problem-revisited/

One alternative that he misses is that for some programs, the simplest way
to speed it up is to vectorize the core parts of your code by using numpy.
No threads needed.
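A toy illustration of that point (assuming numpy is installed; the usual speedup comes from pushing the loop down into numpy's C code, not measured here):

```python
import numpy as np

def sum_squares_loop(xs):
    # Pure-Python loop: one interpreter round-trip per element.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def sum_squares_vec(xs):
    # The same computation as a single vectorized expression.
    a = np.asarray(xs, dtype=float)
    return float(np.dot(a, a))

data = list(range(1000))
print(sum_squares_loop(data) == sum_squares_vec(data))   # True
```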

For those who think that the GIL and the GIL alone is the problem, consider
that Jython is nearly as old as CPython, it goes back at least 15 years.
IronPython has been around for a long time too, and is possibly faster than
CPython even in single threaded code. Neither has a GIL. Both are mature
implementations, built on well-known, powerful platforms with oodles of
business credibility (the JVM and .Net). IronPython even has the backing of
Microsoft, it is one of the few non-Microsoft languages with a privileged
position in the .Net ecosystem.

Where are the people flocking to use Jython and IronPython?

In fairness, there are good reasons why some people cannot use Jython or
IronPython, or one of the other alternatives. But that demonstrates that
the problem is more complex than just "the GIL".

For removal of the GIL to really make a difference:

- you must have at least two cores (that, at least, applies to most people
these days);

- you must be performing a task which is parallelizable and not inherently
sequential (no point using multiple threads if each thread spends all its
time waiting for the previous thread);

- the task must be one that moving to some other multi-processing model
(such as greenlets, multiprocess, etc.) is infeasible;

- you must actually use multiple threads, and use them properly (no busy
wait loops);

- your threading bottleneck must be primarily CPU-bound, not I/O bound
(CPython's threads are already very effective at parallelising I/O tasks);

- and you must be using libraries and tools which prevent you moving to
Jython or IronPython or some other alternative.

I can't help but feel that the set of people for whom removal of the GIL
would actually help is much smaller than, and different to, the set of
people who complain about the GIL.



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 12:13 PM,   wrote:
> (2) print() sounds nice, but fact is , no matter what I try, i always get 
> C:\\apps instead of c:\apps. So in this sense print() doesn't help much. 
> Obviously i'm doing something wrong -- which is what you perhaps call shotgun 
> debugging; but that's why i'm asking.
>

Actually, that means it's helping a lot: it's showing you that, no
matter what you fiddle with in terms of string literals, the resulting
string is exactly the same. That's the point of printing stuff out :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: id() and is operator

2015-02-22 Thread Steven D'Aprano
LJ wrote:

> Hi everyone. Quick question here. Lets suppose if have the following numpy
> array:
> 
> b=np.array([[0]*2]*3)
> 
> and then:
> 
> >>> id(b[0])
> 4582
> >>> id(b[1])
> 45857512
> >>> id(b[2])
> 4582
> 
> Please correct me if I am wrong, but according to this b[2] and b[0] are
> the same object. 

Not necessarily. CPython (the version of Python you are using) can reuse
object IDs. This is not the case for all Pythons, e.g. Jython and
IronPython never reuse IDs. That means that if you compare the ID of two
objects in CPython which are not alive at the same time, they might have
received the same ID.

py> id("hello world")
3083591616
py> id("now what???")
3083591616

IDs are only unique if the objects are alive at the same time.

Numpy arrays are effectively C arrays of low-level machine values, what Java
calls "unboxed" values. So when you index a specific value, Python has to
create a new object to hold it. (In this case, that object is also an
array.) If that object is then garbage collected, the next time you ask for
the value at an index, the freshly created object may end up with the same
ID just by chance.

py> import numpy as np
py> b = np.array([[0]*2]*3)
py> x = b[0]
py> y = b[1]
py> print id(x), id(y)
155749968 156001664
py> print id(b[0]), id(b[1])  # temporary objects that are thrown away
156055016 156055016


If you try it yourself, you may or may not get exactly the same results. You
may need to print the IDs repeatedly until, just by chance, you end up with
identical IDs.



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Paul Rubin
Steven D'Aprano  writes:
> I'm sorry, but the instant somebody says "eliminate the GIL", they lose
> credibility with me. Yes yes, I know that in *your* specific case you've
> done your research and (1) multi-threaded code is the best solution for
> your application and (2) alternatives aren't suitable.

I don't see what the big deal is.  I hear tons of horror stories about
threads and I believe them, but the thing is, they almost always revolve
around acquiring and releasing locks in the wrong order, forgetting to
lock things, stuff like that.  So I've generally respected the terror
and avoided programming in that style, staying with a message passing
style that may take an efficiency hit but seems to avoid almost all
those problems.  TM also helps with lock hazards and it's a beautiful
idea--I just haven't had to use it yet.  The Python IRC channel seems to
rage against threads and promote Twisted for concurrency, but Twisted
has always reminded me of Java.  I use threads in Python all the time
and haven't gotten bitten yet.

> Writing multithreaded code is *hard*. It is not a programming model
> which comes naturally to most human beings. Very few programs are
> inherently parallelizable, although many programs have *parts* which
> can be successfully parallelized.

Parallel algorithms are complicated and specialized but tons of problems
amount to "do the same thing with N different pieces of data", so-called
embarassingly parallel.  The model is you have a bunch of worker threads
reading off a queue and processing the items concurrently.  Sometimes
separate processes works equally well, other times it's good to have
some shared data in memory instead of communicating through sockets.  If
the data is mutable then have one thread own it and access it only with
message passing, Erlang style.  If it's immutable after initialization
(protect it with a condition variable til initialization finishes) then
you can have read-only access from anywhere.
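A minimal sketch of that worker-pool pattern using only the standard library (queue sentinels are one common shutdown convention, not the only one):

```python
import queue
import threading

def worker(tasks, results):
    """Pull items off the queue and process them until told to stop."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: shut down
            tasks.task_done()
            break
        results.put(item * item)      # "do the same thing with N pieces of data"
        tasks.task_done()

tasks = queue.Queue()
results = queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)                   # one sentinel per worker

tasks.join()
for t in threads:
    t.join()

out = sorted(results.get() for _ in range(10))
print(out)    # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```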

> if you're running single-thread code on a single-core machine and
> still complaining about the GIL, you have no clue.

Even the Raspberry Pi has 4 cores now, and fancy smartphones have had
them for years.  Single core cpu's are specialized and/or historical.

> for some programs, the simplest way to speed it up is to vectorize the
> core parts of your code by using numpy.  No threads needed.

Nice for numerical codes, not so hot for anything else.

> Where are the people flocking to use Jython and IronPython?

Shrug, who knows, those implementations were pretty deficient from what
I heard.

> For removal of the GIL to really make a difference: ...
> - you must be performing a task which is parallelizable and not inherently
> sequential (no point using multiple threads if each thread spends all its
> time waiting for the previous thread);

That's most things involving concurrency these days.

> - the task must be one that moving to some other multi-processing model
> (such as greenlets, multiprocess, etc.) is infeasible;

I don't understand this--there can be multiple ways to solve a problem.

> - your threading bottleneck must be primarily CPU-bound, not I/O bound
> (CPython's threads are already very effective at parallelising I/O tasks);

If your concurrent program's workload makes it cpu-bound even 1% of the
time, then you gain something by having it use your extra cores at those
moments, instead of having those cores always do nothing.

> - and you must be using libraries and tools which prevent you moving to
> Jython or IronPython or some other alternative.

I don't get this at all.  Why should I not want Python to have the same
capabilities?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 1:04 PM, Paul Rubin  wrote:
>> if you're running single-thread code on a single-core machine and
>> still complaining about the GIL, you have no clue.
>
> Even the Raspberry Pi has 4 cores now, and fancy smartphones have had
> them for years.  Single core cpu's are specialized and/or historical.

Or virtual. I have a quad-core + hyperthreading CPU, but most of my
VMs (in fact, all except the one that runs a Python buildbot, I
think!) are restricted to a single core. If you rent a basic cheap
machine from a cloud provider, you'll quite possibly be getting a
single core, too.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Unrecognized backslash escapes in string literals

2015-02-22 Thread Chris Angelico
In Python, unrecognized escape sequences are treated literally,
without (as far as I can tell) any sort of warning or anything. This
can mask bugs, especially when Windows path names are used:

>>> 'C:\sqlite\Beginner.db'
'C:\\sqlite\\Beginner.db'
>>> 'c:\sqlite\beginner.db'
'c:\\sqlite\x08eginner.db'

To a typical Windows user, the two strings should be equivalent - case
insensitive file names, who cares whether you say "Beginner" or
"beginner"? But to Python, one of them will happen to work, the other
will fail badly.

Why is it that Python interprets them this way, and doesn't even give
a warning? What happened to errors not passing silently? Or, looking
at this the other way: Is there a way to enable such warnings/errors?
I can't see one in 'python[3] -h', but if there's some way elsewhere,
that would be a useful thing to recommend to people (I already
recommend running Python 2 with -tt).
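For reference, the usual ways to keep Windows paths out of this trap — raw strings, forward slashes (which the Win32 API also accepts), or building the path from parts:

```python
from pathlib import PureWindowsPath

a = r'c:\sqlite\beginner.db'            # raw string: no escape processing
b = 'c:/sqlite/beginner.db'             # forward slashes work in Win32 APIs too
c = str(PureWindowsPath('c:/', 'sqlite', 'beginner.db'))  # built from parts

print(a == b.replace('/', '\\') == c)   # True: all three name the same path
print(len(a))                           # 21 — in the raw string, \b stayed two characters
print('\x08' in 'c:\sqlite\beginner.db')  # True: without r'', \b became a backspace
```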

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread Dave Angel

On 02/22/2015 08:13 PM, jkuplin...@gmail.com wrote:


OK (1) sorry about for/from


That's not what you should be sorry about.  You should be sorry you 
didn't use cut&paste.



(2) print() sounds nice, but fact is , no matter what I try, i always get 
C:\\apps instead of c:\apps. So in this sense print() doesn't help much. 
Obviously i'm doing something wrong -- which is what you perhaps call shotgun 
debugging; but that's why i'm asking.



You probably are getting confused about the difference between str() and 
repr().  If you print the repr() of a string, it'll add quotes around 
it, and escape the unprintable codes.  So it'll double the backslash. 
It also turns a newline into  \n, and tabs into \t, and so on.  Very useful.


That's also what happens when you print a list that contains strings. 
The individual elements of the list are converted using repr().  Watch 
for the quotes to get a strong hint about what you're seeing.


If you don't get a positive handle on how string literals relate to 
string variables, and on str() and repr(), and print(), you'll be 
jumping around the problem instead of solving it.




Back to your original problem, which had you trying to use 
subprocess.call to change the current directory.  Current directory is 
effectively (or actually) depending on the OS involved) an environment 
variable, and changes made in a child process are not magically returned 
to the parent.


But even though there is an os.chdir() in Python, you really shouldn't 
use it.  Long experience of many people show that you're better off 
manipulating the directories you need explicitly, converting any 
directory that's relative to something other than the current one, to an 
absolute.
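That advice in code form, a minimal sketch (the file names are hypothetical): resolve relative paths to absolute once, early, so later chdir calls — yours or a library's — can't change what they refer to.

```python
import os

rel = os.path.join('data', 'input.txt')   # hypothetical relative path
absolute = os.path.abspath(rel)           # pin it down once, up front

# `absolute` now names the same file no matter what the current
# directory becomes later; `rel` would silently drift with every chdir.
print(os.path.isabs(absolute))            # True
```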


--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Ben Finney
Chris Angelico  writes:

> In Python, unrecognized escape sequences are treated literally,
> without (as far as I can tell) any sort of warning or anything.

Right. Text strings literals are documented to work that way
<https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str>,
which refers the reader to the language reference
<https://docs.python.org/3/reference/lexical_analysis.html#strings>.

> Why is it that Python interprets them this way, and doesn't even give
> a warning?

Because the interpretation of those literals is unambiguous and correct.

It's unfortunate that MS Windows inherited the incompatible “backslash
is a path separator”, long after backslash was already established in
many programming languages as the escape character.

> Is there a way to enable such warnings/errors?

A warning or error for a correctly formatted literal with an unambiguous
meaning would be an un-Pythonic thing to have.

I can see the motivation, but really the best solution is to learn that
the backslash is an escape character in Python text string literals.

This has the advantage that it's the same escape character used for text
string literals in virtually every other programming language, so you're
not needing to learn anything unusual.

-- 
 \“The deepest sin against the human mind is to believe things |
  `\   without evidence.” —Thomas Henry Huxley, _Evolution and |
_o__)Ethics_, 1893 |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Dave Angel

On 02/22/2015 09:29 PM, Chris Angelico wrote:

In Python, unrecognized escape sequences are treated literally,
without (as far as I can tell) any sort of warning or anything. This
can mask bugs, especially when Windows path names are used:


>>> 'C:\sqlite\Beginner.db'
'C:\\sqlite\\Beginner.db'
>>> 'c:\sqlite\beginner.db'
'c:\\sqlite\x08eginner.db'

To a typical Windows user, the two strings should be equivalent - case
insensitive file names, who cares whether you say "Beginner" or
"beginner"? But to Python, one of them will happen to work, the other
will fail badly.

Why is it that Python interprets them this way, and doesn't even give
a warning? What happened to errors not passing silently? Or, looking
at this the other way: Is there a way to enable such warnings/errors?
I can't see one in 'python[3] -h', but if there's some way elsewhere,
that would be a useful thing to recommend to people (I already
recommend running Python 2 with -tt).

ChrisA



I've long thought they should be errors, but in Python they're not even 
warnings.  It's one thing to let a user be sloppy on a shell's 
commandline, but in a program, if you have an invalid escape sequence, 
it should be an invalid string literal, full stop.


And Python doesn't even treat these invalid sequences the same (broken) 
way C does.  The documentation explicitly says it's different than C. 
If you're going to be different, at least be strict.


--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Paul Rubin
Laura Creighton  writes:
> And given that Lennart is a friend, well really a good friend of my
> lover and a something-better- than-an-acquaintance with me  I
> should make the effort to get these two under the same roof (mine, by
> preference) for the fun of the experience.

Oh cool, right, I forgot that you are in Sweden where he is.  I've never
met him but he's pretty visible online and yes, he sure knows a lot
about Haskell.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 1:41 PM, Ben Finney  wrote:
> Chris Angelico  writes:
>
>> Why is it that Python interprets them this way, and doesn't even give
>> a warning?
>
> Because the interpretation of those literals is unambiguous and correct.

And it also implies that never, in the entire infinite future of
Python development, will any additional escapes be invented - because
then it'd be ambiguous (in versions up to X, "\s" means "\\s", and
after that, "\s" means something else).

> It's unfortunate that MS Windows inherited the incompatible “backslash
> is a path separator”, long after backslash was already established in
> many programming languages as the escape character.

I agree, the fault is primarily with Windows. But I've seen similar
issues when people use /-\| for box drawing and framing and such;
Windows paths are by far the most common case of this, but not the
sole.

>> Is there a way to enable such warnings/errors?
>
> A warning or error for a correctly formatted literal with an unambiguous
> meaning would be an up-Pythonic thing to have.
> ...
> This has the advantage that it's the same escape character used for text
> string literals in virtually every other programming language, so you're
> not needing to learn anything unusual.

And yet the treatment of the edge case differs. In C, for instance,
you get a compiler warning, and then the backslash is removed and
you're left with just the other character.

The trouble isn't that people need to learn that backslashes are
special in Python string literals. The trouble is that, especially
when file names are frequently being written with uppercase first
letters, it's very easy to have code that just so happens to work,
without being reliable. Having spent some time working with paths like
these:

fn = "C:\Foo\Bar\Asdf.ext"

and then to find that each of these fails, but in a different way:

path = "C:\Foo\Bar\"; fn = path + "Asdf.ext"
fn = "c:\foo\bar\asdf.ext"
fn = "c:\users\myname\blah"

would surely count as surprising. Particularly since the last one will
work fine in Python 2 sans unicode_literals, and will then blow up in
Python 3 - because, contrary to the "no additional escapes"
assumption, Unicode strings introduced new escapes, which means that
"\u0123" has different meaning in byte strings and Unicode strings. In
fact, that's an exception to the usual rule of "upper case is safe",
and it's one that *will* trip people up, thanks to the "C:\Users"
directory on a modern Windows system. What's the betting people will
blame the failure on Python 3 and/or Unicode, rather than on the
sloppy use of escapes and the poor choice of path separator on a
popular platform?
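A quick sketch (mine, not from the message above) of the edge cases in
question, as they behave in Python 3:

```python
# Unrecognized escapes keep their backslash; recognized ones don't.
assert "asdf\qwer" == "asdf" + chr(92) + "qwer"   # \q is not an escape

# \u IS an escape in str literals, but not in bytes literals:
assert "\u0041" == "A"
assert b"\u0041" == b"\\u0041"    # bytes keep the backslash and the 'u'

# Which is why "c:\users\myname" blows up as a str literal in Python 3:
# SyntaxError, because \u must be followed by four hex digits.

# Raw strings sidestep the whole problem for Windows paths:
assert r"c:\users" == "c:" + chr(92) + "users"
```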

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Dave Angel

On 02/22/2015 09:41 PM, Ben Finney wrote:

Chris Angelico  writes:


In Python, unrecognized escape sequences are treated literally,
without (as far as I can tell) any sort of warning or anything.


Right. Text string literals are documented to work that way
<https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str>,
which refers the reader to the language reference
<https://docs.python.org/3/reference/lexical_analysis.html#strings>.


Why is it that Python interprets them this way, and doesn't even give
a warning?


Because the interpretation of those literals is unambiguous and correct.


Correct according to a misguided language definition.



It's unfortunate that MS Windows inherited the incompatible “backslash
is a path separator”, long after backslash was already established in
many programming languages as the escape character.


Windows "inherited" it from DOS.  But since Windows was nothing but a 
DOS shell for several years, that's not surprising.  The historical 
problem came from CP/M's use of the forward slash for a 
switch-character.  Since MSDOS/PCDOS/QDOS was trying to permit 
transliterated CP/M programs, and because subdirectories were an 
afterthought (version 2.0), they felt they needed to pick a different 
character.  At one time, the switch-character could be set by the user, 
but most programs ignored that, so it died.





Is there a way to enable such warnings/errors?


A warning or error for a correctly formatted literal with an unambiguous
meaning would be an un-Pythonic thing to have.

I can see the motivation, but really the best solution is to learn that
the backslash is an escape character in Python text string literals.

This has the advantage that it's the same escape character used for text
string literals in virtually every other programming language, so you're
not needing to learn anything unusual.



I might be able to buy that argument if it was done the same way, but as 
it says in:

  https://docs.python.org/3/reference/lexical_analysis.html#strings

"""Unlike Standard C, all unrecognized escape sequences are left in the 
string unchanged, i.e., the backslash is left in the result. (This 
behavior is useful when debugging: if an escape sequence is mistyped, 
the resulting output is more easily recognized as broken.)

"""

The word "broken" is an admission that this was a flawed approach.  If 
it's broken, it should be an error.


I'm not suggesting that the implementation should falsely trigger an 
error.  But that the language definition should be changed to define it 
as an error.


--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: calling subprocess

2015-02-22 Thread Dave Angel

On 02/22/2015 09:38 PM, Dave Angel wrote:

On 02/22/2015 08:13 PM, jkuplin...@gmail.com wrote:


OK (1) sorry about for/from


That's not what you should be sorry about.  You should be sorry you
didn't use cut&paste.


(2) print() sounds nice, but the fact is, no matter what I try, I always
get C:\\apps instead of c:\apps. So in this sense print() doesn't help
much. Obviously I'm doing something wrong -- which is perhaps what you
call shotgun debugging; but that's why I'm asking.



You probably are getting confused about the difference between str() and
repr().  If you print the repr() of a string, it'll add quotes around
it, and escape the unprintable codes.  So it'll double the backslash. It
also turns a newline into  \n, and tabs into \t, and so on.  Very useful.

That's also what happens when you print a list that contains strings.
The individual elements of the list are converted using repr().  Watch
for the quotes to get a strong hint about what you're seeing.

If you don't get a positive handle on how string literals relate to
string variables, and on str() and repr(), and print(), you'll be
jumping around the problem instead of solving it.
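For instance (my illustration, not from the thread):

```python
path = "c:\\apps"        # two characters in the source, one backslash in the string

print(str(path))         # c:\apps        <- the actual contents
print(repr(path))        # 'c:\\apps'     <- quotes added, backslash doubled
print([path])            # ['c:\\apps']   <- containers display elements via repr()

assert len(path) == 7                    # c : \ a p p s
assert repr(path).count(chr(92)) == 2    # repr() doubles the backslash
```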

Two other things I should have pointed out here.  The debugger uses 
repr() to display things, when you have an unassigned expression.


And you can solve a lot of problems by just using a forward slash for a 
directory separator.  The forward slash is just as correct in most 
circumstances within a program.  It's mainly on the command line that 
forward slash takes on a different meaning.





Back to your original problem, which had you trying to use
subprocess.call to change the current directory.  Current directory is
effectively (or actually, depending on the OS involved) an environment
variable, and changes made in a child process are not magically returned
to the parent.

But even though there is an os.chdir() in Python, you really shouldn't
use it.  Long experience of many people shows that you're better off
manipulating the directories you need explicitly, converting any
directory that's relative to something other than the current one to an
absolute path.
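A hedged sketch of what that looks like in practice: resolve the path
once, then hand it to the child with cwd= rather than calling chdir
anywhere (the directory here is a made-up stand-in):

```python
import os
import subprocess
import sys
import tempfile

# Resolve the directory once, up front, as an absolute path...
target = tempfile.mkdtemp()          # stand-in for your real directory

# ...and hand it to the child process explicitly.  The child runs in
# `target`, while the parent's current directory never changes.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd=target)
child_dir = out.decode().strip()

print(child_dir)                     # the temp directory
print(os.getcwd() == child_dir)      # False: the parent is unaffected
```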




--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Ryan Stuart
On Mon Feb 23 2015 at 12:05:46 PM Paul Rubin wrote:

> I don't see what the big deal is.  I hear tons of horror stories about
> threads and I believe them, but the thing is, they almost always revolve
> around acquiring and releasing locks in the wrong order, forgetting to
> lock things, stuff like that.  So I've generally respected the terror
> and avoided programming in that style, staying with a message passing
> style that may take an efficiency hit but seems to avoid almost all
> those problems.  TM also helps with lock hazards and it's a beautiful
> idea--I just haven't had to use it yet.  The Python IRC channel seems to
> rage against threads and promote Twisted for concurrency, but Twisted
> has always reminded me of Java.  I use threads in Python all the time
> and haven't gotten bitten yet.
>
>
Many people have written at length about why it's bad. The most recent
example I have come across is here ->
https://glyph.twistedmatrix.com/2014/02/unyielding.html

It's not a specific Python problem. I must be in the limited crowd that
believes that the GIL is a *feature* of Python. Then again, maybe it isn't
so limited since the GIL-less python implementations haven't really taken
off.

I have yet to come across a scenario I couldn't solve with either
Processes, NumPy or event loops. Yes, when using processes, the passing of
data can be annoying sometimes. But that is far less annoying than trying
to debug something that shares state across threads.

It's great that you haven't been bitten yet. But the evidence seems to
suggest that either you *will* be bitten at some point or you already
have been, and you just don't know it.

Cheers


> > Writing multithreaded code is *hard*. It is not a programming model
> > which comes naturally to most human beings. Very few programs are
> > inherently parallelizable, although many programs have *parts* which
> > can be successfully parallelized.
>
> Parallel algorithms are complicated and specialized but tons of problems
> amount to "do the same thing with N different pieces of data", so-called
> embarassingly parallel.  The model is you have a bunch of worker threads
> reading off a queue and processing the items concurrently.  Sometimes
> separate processes works equally well, other times it's good to have
> some shared data in memory instead of communicating through sockets.  If
> the data is mutable then have one thread own it and access it only with
> message passing, Erlang style.  If it's immutable after initialization
> (protect it with a condition variable til initialization finishes) then
> you can have read-only access from anywhere.
>
> > if you're running single-thread code on a single-core machine and
> > still complaining about the GIL, you have no clue.
>
> Even the Raspberry Pi has 4 cores now, and fancy smartphones have had
> them for years.  Single core cpu's are specialized and/or historical.
>
> > for some programs, the simplest way to speed it up is to vectorize the
> > core parts of your code by using numpy.  No threads needed.
>
> Nice for numerical codes, not so hot for anything else.
>
> > Where are the people flocking to use Jython and IronPython?
>
> Shrug, who knows, those implementations were pretty deficient from what
> heard.
>
> > For removal of the GIL to really make a difference: ...
> > - you must be performing a task which is parallelizable and not
> inherently
> > sequential (no point using multiple threads if each thread spends all its
> > time waiting for the previous thread);
>
> That's most things involving concurrency these days.
>
> > - the task must be one that moving to some other multi-processing model
> > (such as greenlets, multiprocess, etc.) is infeasible;
>
> I don't understand this--there can be multiple ways to solve a problem.
>
> > - your threading bottleneck must be primarily CPU-bound, not I/O bound
> > (CPython's threads are already very effective at parallelising I/O
> tasks);
>
> If your concurrent program's workload makes it cpu-bound even 1% of the
> time, then you gain something by having it use your extra cores at those
> moments, instead of having those cores always do nothing.
>
> > - and you must be using libraries and tools which prevent you moving to
> > Jython or IronPython or some other alternative.
>
> I don't get this at all.  Why should I not want Python to have the same
> capabilities?
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 1:41 PM, Ben Finney  wrote:
> Right. Text string literals are documented to work that way
> <https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str>,
> which refers the reader to the language reference
> <https://docs.python.org/3/reference/lexical_analysis.html#strings>.

BTW, quoting from that:

"""
Unlike Standard C, all unrecognized escape sequences are left in the
string unchanged, i.e., the backslash is left in the result. (This
behavior is useful when debugging: if an escape sequence is mistyped,
the resulting output is more easily recognized as broken.)
"""

I'm not sure it's more obviously broken. Comparing Python and Pike:

>>> "asdf\qwer"
'asdf\\qwer'

> "asdf\qwer";
(1) Result: "asdfqwer"

Which is the "more easily recognized as broken" depends on what the
actual intention was. If you wanted to have a backslash (eg a path
name), then the second one is, because you've just run two path
components together. If you wanted to have some sort of special
character ("\n"), then they're both going to be about the same - you'd
expect to see "\n" in the output, one has added a backslash (assuming
you're looking at the repr), the other has removed it. Likewise if you
wanted some other symbol (eg forward slash), they're about the same (a
doubled backslash, or a complete omission, same diff). But if you just
fat-fingered a backslash into a string where it completely doesn't
belong, then seeing a doubled backslash is definitely better than
seeing just the following character (which would mask the error
entirely). Since the interpreter can't know what the intention was, it
obviously has to do just one thing and stick with it.

I'm not convinced this is really an advantage. Python has been aiming
more and more towards showing problems immediately, rather than having
them depend on your data - for instance, instead of letting you treat
bytes and characters as identical until you hit something that isn't
ASCII, Py3 forces you to distinguish from the start. That said,
though, there's probably a lot of code out there that depends on
backslashes being non-special, so it's quite probably something that
can't be changed. But it'd be nice to be able to turn on a warning for
it.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Chris Angelico
On Mon, Feb 23, 2015 at 2:16 PM, Ryan Stuart  wrote:
> Many people have written at length about why it's bad. The most recent
> example I have come across is here ->
> https://glyph.twistedmatrix.com/2014/02/unyielding.html
>
> It's not a specific Python problem. I must be in the limited crowd that
> believes that the GIL is a *feature* of Python. Then again, maybe it isn't
> so limited since the GIL-less python implementations haven't really taken
> off.

The GIL isn't a problem, per se. It's a solution to an underlying
problem (concurrent access to internal data structures) which comes
with its own tradeoffs. Every method of eliminating the GIL is really
an alternate solution to the same underlying problem, with its own
tradeoffs. The GIL has simplicity on its side.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unrecognized backslash escapes in string literals

2015-02-22 Thread Ben Finney
Chris Angelico  writes:

> That said, though, there's probably a lot of code out there that
> depends on backslashes being non-special, so it's quite probably
> something that can't be changed. But it'd be nice to be able to turn
> on a warning for it.

If you're motivated to see such warnings, an appropriate place to
implement them would be in PyLint or another established static code
analysis tool.

-- 
 \“The whole area of [treating source code as intellectual |
  `\property] is almost assuring a customer that you are not going |
_o__)   to do any innovation in the future.” —Gary Barnett |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Paul Rubin
Ryan Stuart  writes:
> Many people have written at length about why it's bad. The most recent
> example I have come across is here ->
> https://glyph.twistedmatrix.com/2014/02/unyielding.html

That article is about the hazards of mutable state shared between
threads.  The key to using threads safely is to not do that.  So the
"transfer" example in the article would instead be a message handler in
the thread holding the account data, and it would do the transfer in the
usual sequential way.  You'd start a transfer by sending a message
through a Queue, and get back a reply through another queue.
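Roughly like this (my sketch of the style described, not code from the
article): one thread owns the balances dict, and a transfer is a message
carrying its own reply queue:

```python
import queue
import threading

requests = queue.Queue()

def account_owner(balances):
    # Only this thread ever touches `balances`.
    while True:
        msg = requests.get()
        if msg is None:                      # shutdown sentinel
            break
        src, dst, amount, reply = msg
        if balances.get(src, 0) >= amount:   # the usual sequential logic
            balances[src] -= amount
            balances[dst] = balances.get(dst, 0) + amount
            reply.put(("ok", balances[src]))
        else:
            reply.put(("insufficient funds", balances.get(src, 0)))

balances = {"alice": 100, "bob": 0}
owner = threading.Thread(target=account_owner, args=(balances,))
owner.start()

# Any other thread starts a transfer by message, never by mutation:
reply = queue.Queue()
requests.put(("alice", "bob", 30, reply))
print(reply.get())        # ('ok', 70)

requests.put(None)
owner.join()
```

(Python 3 spelling; in Python 2 the module was called Queue.)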

In Erlang that style is enforced: it has basically no such thing as data
mutation or sharing.  In Python it's straightforward to write in that
style; it's just that the language doesn't stop you from departing from
it, sort of like a language with weak type-checking.  You can survive it
if the code complexity isn't too bad.  In most programs I've dealt with,
the number of distinct handlers is not all that large (there might be
1000 threads in a high-concurrency network server, but they're all doing
about the same thing).  So it hasn't been that hard to "color inside the
lines" with a bit of thoughtfulness and code inspection.

You might like this:

http://jlouisramblings.blogspot.com/2012/08/getting-25-megalines-of-code-to-behave.html
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Ryan Stuart
On Mon Feb 23 2015 at 1:50:40 PM Paul Rubin  wrote:

> That article is about the hazards of mutable state shared between
> threads.  The key to using threads safely is to not do that.  So the
> "transfer" example in the article would instead be a message handler in
> the thread holding the account data, and it would do the transfer in the
> usual sequential way.  You'd start a transfer by sending a message
> through a Queue, and get back a reply through another queue.
>

I think that is a pretty accurate summary. In fact, the article even says
that. So, just to reiterate its point, if you are using non-blocking
Queues to communicate with these threads, then you just have a
communicating event loop. Given that Queues work perfectly well with
processes too, what is the point of using a thread? Using a process/fork
is far safer in that someone can't "accidentally" decide to alter
mutable state in the future.


> You might like this:
>
> http://jlouisramblings.blogspot.com/2012/08/getting-25-megalines-of-code-to-behave.html


Thanks for this, I'll take a look.

Cheers


>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list

