Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Ram Rachum
On Sun, Nov 6, 2016 at 8:53 AM, Nick Coghlan  wrote:

> On 6 November 2016 at 16:07, Ram Rachum  wrote:
> > Heh, I just played with this, and found a workaround. If I do something
> like
> > this after creating the generator:
> >
> > sys.g = g
> >
> > Then it wouldn't get closed when Python finishes, and the cleanup won't
> > happen, which is what I want.
>
> The interpreter goes to significant lengths to make sure that finally
> clauses get executed prior to or during interpreter shutdown, and any
> means you find by which they don't get executed is considered a bug
> (not always a fixable bug, but a bug nonetheless). If you rely on
> those bugs and limitations to get your program to perform the way you
> want it to you're going to run into problems later when upgrading to
> new CPython versions, or trying out different interpreter
> implementations.
>

I understand, and I agree with the reasoning. Still, I think I'll take my
chances.

There's still something seriously odd going in relation to your
> overall resource management architecture if "cleanup, maybe, unless I
> decide to tell you not to" is a behaviour you regularly need. Cleanup
> functions in a garbage collected environment should be idempotent, so
> it doesn't matter if you redundantly call them again later.
>

Well, you think it's weird that I want a `finally` clause to not be called
in some circumstances. Do you think it's equally weird to want an
`__exit__` method that is not called in some circumstances?


>
> However, if you *do* need that pattern regularly, then the pattern
> itself can be encapsulated in a context manager:
>
> class callback_unless_exit:
>     def __init__(self, callback):
>         self.callback = callback
>     def __enter__(self):
>         return self
>     def __exit__(self, exc_type, exc_value, exc_tb):
>         # Skip the callback only when the frame is being torn down
>         if exc_type is not None and issubclass(exc_type, GeneratorExit):
>             return
>         self.callback()
>
> and then do:
>
> with callback_unless_exit(cleanup):
>     yield
>
> in the context managers where you want that behaviour.
>
>
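
(For concreteness, a minimal sketch of how this pattern composes with
contextlib.contextmanager - print() stands in for real setup/cleanup, and
the last two lines rely on CPython finalizing the abandoned generator
promptly:)

    import contextlib

    class callback_unless_exit:
        def __init__(self, callback):
            self.callback = callback
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, exc_tb):
            # Skip the callback only when the generator is being closed.
            if exc_type is not None and issubclass(exc_type, GeneratorExit):
                return
            self.callback()

    @contextlib.contextmanager
    def managed():
        print('setup')
        with callback_unless_exit(lambda: print('cleanup')):
            yield

    with managed():
        pass         # prints 'setup' then 'cleanup'

    g = managed()
    g.__enter__()    # prints 'setup'
    del g            # generator closed with GeneratorExit; 'cleanup' is skipped
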
Thanks for the workaround but I feel it's even less elegant than my
original workaround.
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Method signature syntactic sugar (especially for dunder methods)

2016-11-06 Thread Nick Coghlan
On 6 November 2016 at 16:28, Nathan Dunn  wrote:
> There are some immediate problems with this, such as `bool(self)` being
> indistinguishable from a regular method signature and `class(x, y)` not
> declaring the `self` identifier. These and other problems can be solved to
> some extent, but I thought I would see if there is any interest around this
> before going too in depth.

The syntax is the least confusing part of special method overrides, so
if folks are still struggling with that aspect of defining them, there
are plenty of other things that are going to trip them up.

From your examples:

* __add__ is only part of the addition protocol, there is also
__radd__ and __iadd__
* likewise, there is not a one-to-one correspondence between the
bool() builtin and the __bool__() special method (there are other ways
to support bool(), like defining __len__() on a container)
* the mapping protocol covers more than just __getitem__ (and you also
need to decide if you're implementing a mapping, sequence, or
multi-dimensional array)

If the current syntax makes people think "This looks tricky and
complicated and harder than defining normal methods", that's a good
thing, as magic methods *are* a step up in complexity from normal
method definitions, since you need to learn more about how and when
they get called and the arguments they receive, while normal methods
are accessed via plain function calls.

My concern with the last suggestion (permitting the first parameter to
be specified on the left of the method name) is different: it would
break the current symmetry between name binding in def statements and
target binding in assignment statements - currently, all permitted
binding targets in def and class statements behave the same way as they
do in normal assignment statements, and throw SyntaxError otherwise.
With the proposed change, we'd face the problem that the following
would both be legal, but mean very different things:

cls.mymethod = lambda self: print(self)
def cls.mymethod(self): print(self)

The former is already legal and assigns the given lambda function as a
method on the existing class, `cls`.

The latter currently throws SyntaxError.

With the proposed change, rather than throwing SyntaxError as it does
now, the latter would instead be equivalent to:

def mymethod(cls, self): print(self)

which would be a very surprising difference in behaviour.
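
For concreteness, a minimal runnable sketch of that first (already-legal)
spelling, with `Cls` standing in for an existing class:

    class Cls:
        pass

    Cls.mymethod = lambda self: print(self)   # bind a function as a method
    Cls().mymethod()                          # prints the instance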

Regards,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Brendan Barnwell

On 2016-11-06 00:18, Ram Rachum wrote:

Well, you think it's weird that I want a `finally` clause to not be
called in some circumstances. Do you think it's equally weird to want an
`__exit__` method that is not called in some circumstances?


	It's weird to not want __exit__ to be called when it's defined as a
finally block, which is what you're doing with the way you're using
contextlib.contextmanager.  The __exit__ step there is effectively
whatever comes after the yield in the generator function you write.  If
there's code you don't always want to run, don't put it inside a
finally clause whose try contains the yield.
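
(A minimal sketch of that distinction, reusing the example from the original
post - with no try/finally around the yield, the '2' simply never runs when
GeneratorExit is thrown into the generator:)

    import contextlib

    @contextlib.contextmanager
    def f():
        print('1')
        yield          # no try/finally wrapping the yield
        print('2')     # runs only when the with block exits normally

    g = f()
    g.__enter__()      # prints '1'; '2' is never printed if __exit__ is skipped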


--
Brendan Barnwell
"Do not follow where the path may lead.  Go, instead, where there is no 
path, and leave a trail."

   --author unknown


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Nick Coghlan
On 6 November 2016 at 17:18, Ram Rachum  wrote:
> On Sun, Nov 6, 2016 at 8:53 AM, Nick Coghlan  wrote:
>> There's still something seriously odd going in relation to your
>> overall resource management architecture if "cleanup, maybe, unless I
>> decide to tell you not to" is a behaviour you regularly need. Cleanup
>> functions in a garbage collected environment should be idempotent, so
>> it doesn't matter if you redundantly call them again later.
>
>
> Well, you think it's weird that I want a `finally` clause to not be called
> in some circumstances. Do you think it's equally weird to want an `__exit__`
> method that is not called in some circumstances?

Yes, as the whole point of __exit__ is that the interpreter goes to
great lengths to make sure it always gets called, no matter what else
happens with the currently executing frame (whether it finishes
normally, returns early, breaks out of a loop, continues with the next
iteration, raises an exception, or gets suspended without ever
resuming normal execution).
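
(A small illustration of that guarantee - __exit__ runs on break, early
return, and exceptions alike:)

    class Probe:
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, exc_tb):
            print('__exit__ ran:', exc_type)

    for i in range(3):
        with Probe():
            break              # __exit__ ran: None

    def f():
        with Probe():
            return 1           # __exit__ ran: None
    f()

    try:
        with Probe():
            raise ValueError   # __exit__ ran: <class 'ValueError'>
    except ValueError:
        pass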

If you don't want that behaviour, then __exit__ likely isn't the right
tool (although it may provide the technical basis for a selective
cleanup framework).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Ram Rachum
On Sun, Nov 6, 2016 at 9:38 AM, Nick Coghlan  wrote:

> On 6 November 2016 at 17:18, Ram Rachum  wrote:
> > On Sun, Nov 6, 2016 at 8:53 AM, Nick Coghlan  wrote:
> >> There's still something seriously odd going in relation to your
> >> overall resource management architecture if "cleanup, maybe, unless I
> >> decide to tell you not to" is a behaviour you regularly need. Cleanup
> >> functions in a garbage collected environment should be idempotent, so
> >> it doesn't matter if you redundantly call them again later.
> >
> >
> > Well, you think it's weird that I want a `finally` clause to not be
> called
> > in some circumstances. Do you think it's equally weird to want an
> `__exit__`
> > method that is not called in some circumstances?
>
> Yes, as the whole point of __exit__ is that the interpreter goes to
> great lengths to make sure it always gets called, no matter what else
> happens with the currently executing frame (whether it finishes
> normally, returns early, breaks out of a loop, continues with the next
> iteration, raises an exception, or gets suspended without ever
> resuming normal execution).
>
> If you don't want that behaviour, then __exit__ likely isn't the right
> tool (although it may provide the technical basis for a selective
> cleanup framework).
>
> Cheers,
> Nick.
>
>
I understand your point of view. I see that Python does allow you to not
call `__exit__` if you don't want to, so I wish it would take the same
approach and not call `generator.close()` if you don't want it to. (This is
what it's really about, not `finally`.)

Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Steven D'Aprano
On Sun, Nov 06, 2016 at 06:46:40AM +0200, Ram Rachum wrote:
> Hi everyone,
> 
> Here is a simplification of a problem that's been happening in my code:
> 
> import contextlib
> 
> @contextlib.contextmanager
> def f():
> print('1')
> try:
> yield
> finally:
> print('2')
> 
> 
> g = f()
> g.__enter__()
> 
> 
> This code prints 1 and then 2, not just 1 like you might expect.

I expect it to print 2. After all, it's in a finally clause.

And why are you calling g.__enter__() directly? Calling dunder methods 
by hand is nearly always the wrong thing to do.

> This is
> because when the generator is garbage-collected, it gets `GeneratorExit`
> sent to it.

Right. That's what they're designed to do.

Later, in another thread you say:

"Well, you think it's weird that I want a `finally` clause to not be 
called in some circumstances. Do you think it's equally weird to want an
`__exit__` method that is not called in some circumstances?"

Yes to both. It is seriously weird.

You might as well be complaining that Python calls your __iter__ 
method when you call iter(my_instance). That's the whole point of 
__iter__, and the whole point of finally clauses and the __exit__ method 
is that they are unconditionally called, always, when you leave the 
with block.

But judging from your code above, it looks like you're not even using a 
with block. In that case, instead of abusing the __enter__ and __exit__ 
methods, why not just create a class with non-dunder enter() and exit() 
methods and call them by hand?

g = f()  # implementation of f is left as an exercise
g.enter()
if condition:
    g.exit()


I'm having a lot of difficulty in understanding your use-case here, and 
so maybe I've completely misunderstood something.


> This has been a problem in my code since in some instances, I tell a
> context manager not to do its `__exit__` function. (I do this by using
> `ExitStack.pop_all()`.) However the `__exit__` is still called here.
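
(For readers unfamiliar with that idiom, a minimal sketch of the
ExitStack.pop_all() pattern along the lines of the contextlib docs - the
file-opening use case is purely illustrative:)

    from contextlib import ExitStack

    def open_all(filenames):
        with ExitStack() as stack:
            files = [stack.enter_context(open(name)) for name in filenames]
            # All opens succeeded: move the registered close() callbacks to
            # a new stack, so leaving this 'with' block closes nothing.
            close_all = stack.pop_all().close
            return files, close_all
        # If any open() fails, the files opened so far are still closed.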

Have you considered something like:

def f():
    print('1')
    try:
        yield
    finally:
        if f.closing:
            print('2')


You can then write a decorator to set f.closing to True or False as 
needed. But again, I don't understand why you would want this feature, 
or how you are using it, so I might have this completely wrong.



-- 
Steve


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Nick Coghlan
On 6 November 2016 at 17:44, Ram Rachum  wrote:
> I understand your point of view. I see that Python does allow you to not
> call `__exit__` if you don't want to, so I wish it'll have the same approach
> to not calling `generator.close()` if you don't want to. (This is what it's
> really about, not `finally`.)

No, as that's like asking that Python not call close() on files
automatically, or not wait for non-daemon threads to terminate when
it's shutting down.

When Python is discarding a frame that was previously suspended and
never finished normally, it throws an exception into it in order to
give it a chance to release any resources it might be holding. If you
want to deliberately make it leak resources in such cases instead of
cleaning them up, you're going to have to leak them deliberately and
explicitly, just as you would in normal synchronous code.

Regards,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


[Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Eric V. Smith

Creating a new thread, instead of hijacking the PEP 532 discussion.

From PEP 532:

> Abstract
> 
>
> Inspired by PEP 335, PEP 505, PEP 531, and the related discussions, this PEP
> proposes the addition of a new protocol-driven circuit breaking operator to
> Python that allows the left operand to decide whether or not the expression
> should short circuit and return a result immediately, or else continue
> on with evaluation of the right operand::
>
> exists(foo) else bar
> missing(foo) else foo.bar()

Instead of new syntax that only works in this one specific case, I'd 
prefer a more general solution. I accept being "more general" probably 
seals the deal in killing any proposal!


I realize the following proposal has at least been hinted at before, but 
I couldn't find a specific discussion about it. Since it applies to the 
short-circuiting issues addressed by PEP 532 and its predecessors, I 
thought I'd bring it up here. It could also be used to solve some of the 
problems addressed by the rejected PEP 463 (Exception-catching 
expressions). See also PEP 312 (Simple Implicit Lambda). It might also 
be usable for some of the use cases presented in PEP 501 (General 
purpose string interpolation, aka i-strings).


I'd rather see the ability to have unevaluated expressions, that can 
later be evaluated. I'll use backticks here to mean: "parse, but do not 
execute the enclosed code". This produces an object that can later be 
evaluated with a new builtin I'll call "evaluate_now". Obviously these 
are strawmen, and partly chosen to be ugly and unacceptable names and 
symbols in the form I'll discuss here.


Then you could write a function:

eval_else(`foo.bar`, `some_func()`)

whose value is foo.bar, unless foo.bar cannot be evaluated, in which 
case the value is some_func().


def eval_else(expr, fallback, exlist=(AttributeError,)):
    try:
        return evaluate_now(expr)
    except exlist:
        return evaluate_now(fallback)

Exactly which exceptions you catch is up to you. Of course there's the 
chance that someone would pass in something for which the caught 
exception is too broad, and it's raised deep inside evaluating the first 
expression, but that's no different than catching exceptions now. Except 
I grant that hiding the try/except inside a called function increases 
the risk.
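
(For comparison, a rough sketch of the closest spelling available today, with
zero-argument lambdas standing in for the backtick-quoted expressions and a
plain call standing in for evaluate_now:)

    def eval_else(expr, fallback, exlist=(AttributeError,)):
        try:
            return expr()        # "evaluate_now"
        except exlist:
            return fallback()

    class Obj:
        pass

    foo = Obj()
    print(eval_else(lambda: foo.bar, lambda: 'no bar'))   # -> no bar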


Like f-strings, the expressions are entirely created at the site they're 
specified inside ``. So they'd have access to locals and globals, etc., 
at the definition site.


def x(foo, i):
    return eval_else(`foo.bar`, `some_func(i, __name__)`)

And like the expressions in f-strings, they have to be valid 
expressions. But unlike f-strings, they aren't evaluated right when 
they're encountered. The fact that they may never be evaluated is one of 
their features.


For example the if/else expression:

if_else(`y`, x is None, `x.a`)

could be defined as being exactly like:

y if x is None else x.a

including only evaluating x.a if x is not None.

def if_else(a, test, b):
    if test:
        return evaluate_now(a)
    return evaluate_now(b)

You could do fancier things that require more than 2 expressions.

Whether `` returns an AST that could later be manipulated, or it's 
something else that's opaque is another discussion. Let's assume it's 
opaque for now.


You could go further and say that any argument to a function that's 
specially marked would get an unevaluated expression. Suppose that you 
can mark arguments as & to mean "takes an unevaluated expression". Then 
you could write:


def if_else(&a, test, &b):
    if test:
        return evaluate_now(a)
    return evaluate_now(b)

And call it as:
if_else(y, x is None, x.a)

But now you've made it non-obvious at the caller site exactly what's 
happening. There are other downsides, such as only being able to create 
an unevaluated expression when calling a function. Or maybe that's a 
good thing!


In any event, having unevaluated expressions would open up more 
possibilities than just the short-circuit evaluation model. And it 
doesn't involve a new protocol.


Eric.


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Eric V. Smith
[top posting from my phone]

Chris Angelico points out that the & part of the idea interacts poorly with
*args and **kwargs, so I'll drop that idea.

Re-reading PEP 312, this idea is basically identical, with different spellings.

The point remains: do we want to be able to create unevaluated expressions
that can be evaluated at a different point?

--
Eric.


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Terry Reedy

On 11/6/2016 2:18 AM, Ram Rachum wrote:


Well, you think it's weird that I want a `finally` clause to not be
called in some circumstances. Do you think it's equally weird to want an
`__exit__` method that is not called in some circumstances?


Without a deeper understanding of why you want to do so, I would.  The
automatic exit cleanup is typically the main purpose of a context manager
and 'with' block.  If, for instance, I want a file only maybe closed, I
would just call 'open' instead of 'with open' and then conditionally
(with 'if' or 'try') call 'close'.


--
Terry Jan Reedy



Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Danilo J. S. Bellini
2016-11-03 8:10 GMT-02:00 Stephen J. Turnbull :

> As for recursion, the syntax you proposed doesn't say "recursion" to
> me, it says "let's add more complexity to a syntax that already is
> hard to fit on a line so we can turn a sequence into a series."
>

1. The "fit on a line" is just another weird taboo:
1.1. An expression isn't a physical line.
1.2. Sub-expressions in an expression might be on other statements (e.g.
assignments, other functions).
1.4. Several of my examples in PyScanPrev aren't oneliners, including one
you pasted in an e-mail.
1.3. When you call itertools.accumulate, it's still an expression.
1.5. List comprehensions are expressions, including the triple-for-section
comprehensions with repeated target variable names.
1.6. Being anti-expression because of an anti-oneliner bias sounds like
moving towards an assembly language (using only atomic imperative commands)
or something merely bureaucratic.

2. The itertools.accumulate function signature is broken:
2.1. Its arguments are reversed when compared to other higher order
functions like map, filter and reduce.
2.2. It lacks a "start" parameter, requiring more complexity to include it
(nesting a itertools.chain or a "prepend" function call).

3. It's not about "adding complexity", it's the other way around:
3.1. The proposal is about explicit recursion in a list comprehension (a
way to access the previous output value/result, a.k.a.
accumulator/state/memory).
3.2. Today you can only do (3.1) using either:
3.2.1. A triple-for-section list comprehension with repeated target names
(as described in the PyScanPrev rationale section).
3.2.2. Functions for stack frame manipulation (like the example I gave in
this maillist using stackfull).
3.2.3. Bytecode manipulation (the PyScanPrev approach), or something alike
(e.g. AST manipulation).
3.2.4. Raw Python code pre-processing (e.g. by customizing some import
hooks).

Only 3.2.4 allows a new syntax.

I'm not sure what you meant with "turn a sequence into a series", that
sounds like the itertools.accumulate from Python 3.2, which didn't have a
function as a parameter.


2016-11-03 8:10 GMT-02:00 Stephen J. Turnbull :

>  > Anyway, that makes this talking about computational accuracy sound
>  > like an argument for my proposal here, not against it.
>
> Not as I see it.  My point is that functions like accumulate already
> get me as much of your proposal as I can see being useful in my own
> applications, and so I'd rather spend effort on the inherent
> complexity of accurate computation than on learning new syntax which
> as far as I can see buys me no extra simplicity.
>

What do you mean by the word "accuracy"? IMHO, now you're not talking about
accuracy anymore. It's an argument like "map/filter and generator
expressions are the same" again. That's just an argument against generator
expressions and list/set/dict comprehensions in general, as they "already
get us as much of" map/filter.

About the effort, do you really find the examples below with the new
proposed syntax difficult to understand?

>>> # Running product
>>> [prev * k for k in [5, 2, 4, 3] from prev = 1]
[1, 5, 10, 40, 120]

>>> # Accumulate (prefix sum / cumulative sum)
>>> [prev + k for k in [5, 2, 4, 3] from prev = 0]
[0, 5, 7, 11, 14]

>>> # Pairs to be used in the next example (nothing new here)
>>> pairs = [(a, b) for a in [0, 1, 2, 3] for b in [-2, 2] if b != a]
>>> pairs
[(0, -2), (0,2), (1, -2), (1, 2), (2, -2), (3, -2), (3, 2)]

>>> # The recursive list comprehension with the new syntax
>>> [b + prev * a for a, b in pairs from prev = 0]
[0, -2, 2, 0, 2, 2, 4, 14]

>>> # The same with itertools.accumulate (for comparison)
>>> from itertools import accumulate, chain
>>> list(accumulate(chain([0], pairs),
... lambda prev, pair: pair[1] + prev * pair[0]))
[0, -2, 2, 0, 2, 2, 4, 14]

>>> # The same in a single expression using the new syntax
>>> [b + prev * a for a in [0, 1, 2, 3]
...   for b in [-2, 2]
...   if b != a
...   from prev = 0]
[0, -2, 2, 0, 2, 2, 4, 14]

Everything needs some effort to be understood, some effort to be memorized,
some effort to be maintained, etc. But Rob Cliffe could understand the
running product example when he read this thread, despite saying he had
never heard of accumulate. For someone who doesn't know what
accumulate/scan/fold/reduce is, I think the proposed syntax is more
descriptive/explicit/instructional/didactic/maintainable.

Please read the "Rationale" section and the "Comparison" example in
PyScanPrev. Also, please keep in mind that even with bytecode manipulation,
PyScanPrev is still limited to valid Python syntax, and the syntax proposed
here is different and more general (e.g. the last example in this e-mail
can't be done in PyScanPrev as a single list comprehension).


2016-11-03 8:10 GMT-02:00 Stephen J. Turnbull :

> Consider the existing comprehension syntax.  I use it all the time
> because it's very expressive, it "looks like

Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Stephen J. Turnbull
Danilo J. S. Bellini writes:

 > About the effort, do you really find the examples below with the new
 > proposed syntax difficult to understand?

No.  I just don't see how they would become tools I would use.  My main
interest here was in your claim to have economic applications, but the
examples you give don't seem to offer big wins for the kind of
calculations I, my students, or my colleagues do.  Perhaps you will
have better luck interesting/persuading others.



Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Danilo J. S. Bellini
2016-11-06 18:00 GMT-02:00 Stephen J. Turnbull <
[email protected]>:

> Danilo J. S. Bellini writes:
>
>  > About the effort, do you really find the examples below with the new
>  > proposed syntax difficult to understand?
>
> No.  I just don't see how they would become tools I would use.  My main
> interest here was in your claim to have economic applications, but the
> examples you give don't seem to offer big wins for the kind of
> calculations I, my students, or my colleagues do.  Perhaps you will
> have better luck interesting/persuading others.
>

If you want something simple, the itertools.accumulate examples from
docs.python.org include a simple "loan amortization" example:

>>> # Amortize a 5% loan of 1000 with 4 annual payments of 90
>>> from itertools import accumulate
>>> cashflows = [1000, -90, -90, -90, -90]
>>> list(accumulate(cashflows, lambda bal, pmt: bal*1.05 + pmt))
[1000, 960.0, 918.0, 873.9000000000001, 827.5950000000001]

>>> # With the proposed syntax
>>> payments = [90, 90, 90, 90]
>>> [bal * 1.05 - pmt for pmt in payments from bal = 1000]
[1000, 960.0, 918.0, 873.9000000000001, 827.5950000000001]


From the Wilson J. Rugh "Linear Systems Theory" book, chapter 20 "Discrete
Time State Equations", p. 384-385 (the very first example on the topic):

"""
A simple, classical model in economics for national income y(k) in year k
describes y(k) in terms of consumer expenditure c(k), private investment
i(k), and government expenditure g(k) according to:

y(k) = c(k) + i(k) + g(k)

These quantities are interrelated by the following assumptions. First,
consumer expenditure in year k+1 is proportional to the national income in
year k,

c(k+1) = α·y(k)

where the constant α is called, impressively enough, the marginal
propensity to consume. Second, the private investment in year k+1 is
proportional to the increase in consumer expenditure from year k to year
k+1,

i(k+1) = β·[c(k+1) - c(k)]

where the constant β is a growth coefficient. Typically 0 < α < 1 and β > 0.

From these assumptions we can write the two scalar difference equations

c(k+1) = α·c(k) + α·i(k) + α·g(k)
i(k+1) = (β·α-β)·c(k) + β·α·i(k) + β·α·g(k)

Defining state variables as x₁(k) = c(k) and x₂(k) = i(k), the output as
y(k), and the input as g(k), we obtain the linear state equation

#            ⎡   α       α ⎤         ⎡ α ⎤
#   x(k+1) = ⎢             ⎥·x(k) +  ⎢   ⎥·g(k)
#            ⎣β·(α-1)   β·α⎦         ⎣β·α⎦
#
#     y(k) = [1  1]·x(k) + g(k)

Numbering the years by k = 0, 1, ..., the initial state is provided by c(0)
and i(0).
"""

You can use my "ltiss" or "ltvss" (if alpha/beta are time varying)
functions from the PyScanPrev state-space example to simulate that, or some
dedicated function. The linear time varying version with the proposed
syntax would be (assuming alpha, beta and g are sequences like
lists/tuples):

>>> from numpy import mat
>>> def first_digital_linear_system_example_in_book(alpha, beta, c0, i0, g):
...     A = (mat([[a,       a  ],
...               [b*(a-1), b*a]]) for a, b in zip(alpha, beta))
...     B = (mat([[a  ],
...               [b*a]]) for a, b in zip(alpha, beta))
...     x0 = mat([[c0],
...               [i0]])
...     x = (Ak*xk + Bk*gk for Ak, Bk, gk in zip(A, B, g) from xk = x0)
...     return [xk.sum() + gk for xk, gk in zip(x, g)]

If A and B were constants, it's simpler, as the scan line would be:

x = (A*xk + B*gk for gk in g from xk = x0)

-- 
Danilo J. S. Bellini
---
"*It is not our business to set up prohibitions, but to arrive at
conventions.*" (R. Carnap)

[Python-ideas] PythonOS

2016-11-06 Thread victor rajewski
Firstly, apologies if this is the wrong forum for this idea; it's not so
much about the language itself, but what surrounds it. If you have any
ideas for better forums, please let me know. Also, if there is any work
started on anything like this, let me know

Now to the idea:
PythonOS - a (likely linux-based) OS built specifically for python.

I've been working in the education space as well as with embedded systems
for quite a few years, and am really excited about tiny affordable
standalone computing devices as a computing and science educational tool.
At the moment however, there are two options for python in this space:

   - micropython, which, despite its awesomeness, lacks many library
   possibilities, doesn't have a debugger, and is restricted to a handful of
   devices. The learning curve for getting this up and running can be quite
   steep
   - A full linux setup on an SBC - let's take the raspberry pi. This has
   access to (just about) everything, but beyond just programming the python
   code, you need to have some (basic) system administration training to get
   it running. For someone new to programming, this can be quite intimidating
   (recent developments with drag-and-drop Pi configuration notwithstanding).

In the latter category, there seems to be a lot of newer, cheaper hardware
appearing - for example the Pi Zero, C.H.I.P and VoCore2, and these are
already at a very affordable price point; I think that before long the
rationale for micropython will be lost, and $2 hardware will be running a
fully capable linux setup. However, micropython has the advantage of
simplicity. Upload main.py, and press reset. That's it.

My proposal is a Linux distribution that has the simplicity of micropython,
but the power of full python. Drop a file (let's call it main.py) on an SD
card from your laptop, plug it into the device, and it boots and runs that
file straight away. Or drop an entire project. If the device has USB device
capabilities, then just drag the file across, or even edit it live. Easy
SSH setup could allow more advanced users remote development and debugging
options. Maybe jupyter could be leveraged to provide other network-based
dev options (pynq  already has a working linux distro
like this). Connecting to a console would give you a python prompt.

Ideally, key OS functions should be able to be controlled from within
python, so that the user never has to leave the python prompt. For a start,
things like network config, and possibly cron.

What do you think? I'm not in a state to do all of this myself, but could
give it a start if there is interest in the concept.
-- 

Victor Rajewski

Sent from my mobile device. please. Please excuse brevity and any errors.

Re: [Python-ideas] PythonOS

2016-11-06 Thread Bernardo Sulzbach

On 11/06/2016 08:25 PM, victor rajewski wrote:

What do you think? I'm not in a state to do all of this myself, but
could give it a start if there is interest in the concept.


Even though there is nothing wrong with the idea, I think there is not 
enough motivation for it. Most of the people learning Python for 
tinkering around with robots or doing some statistics will both be able 
to and eventually need to learn some basic system administration.


Also, in comparison with some other languages, at least under Linux and 
BSD systems, setting up a Python environment is very straightforward. It 
is packaged for most distributions out there and executing a script does 
not require any cryptic initiation rituals.


All in all, I don't know if this solves a problem or not.



Sent from my mobile device. please. Please excuse brevity and any errors.



Did you type all that in a touchscreen?

--
Bernardo Sulzbach
http://www.mafagafogigante.org/
[email protected]


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Greg Ewing

Eric V. Smith wrote:

I'd rather see the ability to have unevaluated expressions, that can 
later be evaluated. I'll use backticks here to mean: "parse, but do not 
execute the enclosed code". This produces an object that can later be 
evaluated with a new builtin I'll call "evaluate_now".


So far you've essentially got a compact notation for a
lambda with no arguments. Suggestions along these lines
have been made before, but didn't go anywhere.

You could go further and say that any argument to a function that's 
specially marked would get an unevaluated expression. Suppose that you 
can mark arguments as & to mean "takes an unevaluated expression".


Now *that* would be truly interesting, but it would
complicate some fundamental parts of the implementation
tremendously. Before calling any function, it would be
necessary to introspect it to find out which parameters
should be evaluated. Alternatively, every parameter
would have to be passed as a lambda, with the function
deciding which ones to evaluate. I fear this would be
far too big a change to swallow.

--
Greg


Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Wes Turner
- So, IIUC, for recursive list comprehensions
  - "prev" = x_(n-1)
  - there is a need to define an initial value
- chain([1000], [...])
  - sometimes, we actually need window function
- __[0] = x_(n-1)
- __[1] = x_(n-2)  # this
- __[-1] = x_(n-2)  # or this
- this can be accomplished with dequeue
  - __= dequeue([1000], maxlen)
- for recursive list comprehensions, we'd want to bind e.g. __ to a
dequeue

[f(__[0], x) for x in y with __ = dequeue((1000,), 1)]

But the proposed syntax differs from this interpretation:

- "from bal = 1000" # ~= with prev = dequeue((1000,), 1)[-1]

(Recursive) fibonacci would then require a dequeue (..., 2)
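
(A minimal sketch of that window idea with today's syntax, using
collections.deque - presumably what "dequeue" refers to above - as a
maxlen-2 window of previous values driving a plain loop:)

    from collections import deque

    window = deque([0, 1], maxlen=2)      # the two most recent values
    fib = list(window)
    for _ in range(8):
        nxt = window[0] + window[1]
        fib.append(nxt)
        window.append(nxt)                # the oldest value falls out
    print(fib)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]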

Other than brevity, is there any advantage to list comprehensions over a
for loop?
- IIRC,  reduce() and fold() can avoid unnecessary variable binding, but
require lower-level debugging.

A recursive list comprehension syntax would be cool. Is there a better
variable name than '__'?


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Nathaniel Smith
On Sun, Nov 6, 2016 at 5:06 AM, Eric V. Smith  wrote:
> Creating a new thread, instead of hijacking the PEP 532 discussion.
>
> From PEP 532:
>
>> Abstract
>> 
>>
>> Inspired by PEP 335, PEP 505, PEP 531, and the related discussions, this
>> PEP
>> proposes the addition of a new protocol-driven circuit breaking operator
>> to
>> Python that allows the left operand to decide whether or not the
>> expression
>> should short circuit and return a result immediately, or else continue
>> on with evaluation of the right operand::
>>
>> exists(foo) else bar
>> missing(foo) else foo.bar()
>
> Instead of new syntax that only works in this one specific case, I'd prefer
> a more general solution. I accept being "more general" probably seals the
> deal in killing any proposal!
>
> I realize the following proposal has at least been hinted at before, but I
> couldn't find a specific discussion about it. Since it applies to the
> short-circuiting issues addressed by PEP 532 and its predecessors, I thought
> I'd bring it up here. It could also be used to solve some of the problems
> addressed by the rejected PEP 463 (Exception-catching expressions). See also
> PEP 312 (Simple Implicit Lambda). It might also be usable for some of the
> use cases presented in PEP 501 (General purpose string interpolation, aka
> i-strings).
>
> I'd rather see the ability to have unevaluated expressions, that can later
> be evaluated. I'll use backticks here to mean: "parse, but do not execute
> the enclosed code". This produces an object that can later be evaluated with
> a new builtin I'll call "evaluate_now". Obviously these are strawmen, and
> partly chosen to be ugly and unacceptable names and symbols in the form I'll
> discuss here.

If we're considering options along these lines, then I think the local
optimum is actually a "quoted-call" operator, rather than a quote
operator. So something like (borrowing Rust's "!"):

eval_else!(foo.bar, some_func())

being sugar for

eval_else.__macrocall__(<unevaluated AST for foo.bar>,
                        <unevaluated AST for some_func()>)

You can trivially use this to recover a classic quote operator if you
really want one:

def quote!(arg):
    return arg

but IMO this way is more ergonomic for most use cases (similar to your
'&' suggestion), while retaining the call-site marking that "something
magical is happening here" (which is also important, both for
readability + implementation simplicity -- it lets the compiler know
statically when it needs to retain the AST, solving the issue that
Greg pointed out).
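
(Purely illustrative: one way an eval_else object might look under such a
protocol, assuming the hypothetical __macrocall__ receives its arguments as
unevaluated expression ASTs plus a namespace to evaluate them in - none of
this exists today:)

    import ast

    class EvalElse:
        def __macrocall__(self, expr_ast, fallback_ast, namespace):
            # Each argument arrives as an unevaluated expression AST.
            try:
                return eval(compile(expr_ast, '<macro>', 'eval'), namespace)
            except AttributeError:
                return eval(compile(fallback_ast, '<macro>', 'eval'), namespace)

    eval_else = EvalElse()

    # Roughly what  eval_else!(foo.bar, "missing")  might desugar to:
    quote = lambda src: ast.parse(src, mode='eval')
    class Foo:
        pass
    foo = Foo()
    print(eval_else.__macrocall__(quote('foo.bar'), quote('"missing"'),
                                  {'foo': foo}))   # -> missing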

Some other use cases:

Log some complicated object, but only pay the cost of stringifying the
object if debugging is enabled:

log.debug!(f"Message: {message_object!r}")

Generate a plot where the axes are automatically labeled "x" and
"np.sin(x)" (this is one place where R's plotting APIs are more
convenient than Python's):

import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10)
plt.plot!(x, np.sin(x))

What PonyORM does, but without the thing where currently their
implementation involves decompiling bytecode...:

db.select!(c for c in Customer if sum(c.orders.price) > 1000)

Filtering out a subset of rows from a data frame in pandas; 'height'
and 'age' refer to columns in the data frame (equivalent to
data_frame[data_frame["height"] > 100 and data_frame["age"] < 5], but
more ergonomic and faster (!)):

data_frame.subset!(height > 100 and age < 5)

(IIRC pandas has at least experimented with various weird lambda hacks
for this kind of thing; not sure what the current status is.)

Every six months or so I run into someone who's really excited about
the idea of adding macros to python, and I suggest this approach. So
far none of them have been excited enough to actually write a PEP, but
if I were going to write a PEP then this is the direction that I'd
take :-).

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-ideas] Method signature syntactic sugar (especially for dunder methods)

2016-11-06 Thread Steven D'Aprano
On Sun, Nov 06, 2016 at 01:28:34AM -0500, Nathan Dunn wrote:

> Python has very intuitive and clear syntax, except when it comes to method
> definitions, particularly dunder methods.

I disagree with your premise here. Python's method definitions are just 
as intuitive and clear as the rest of Python's syntax: methods are just 
functions, indented in the body of the class where they belong, with an 
explicit "self" parameter.

And dunder methods are just a naming convention. They're not the most 
visually attractive methods, due to the underscores, but its just a 
naming convention. Otherwise they are declared in exactly the same way 
as any other method: using normal function syntax, indented inside the 
body of the class, with an explicit "self" the same as other methods.

So there's no magic to learn. Once you know how to declare a function, 
it is a tiny step to learn to declare a method: put it inside a class, 
indent it, and add "self", and now you have a method. And once you know 
how to declare a method, there's nothing more to learn to handle dunder 
methods. All you need know is the name of the method or methods you 
need, including the underscores.


[...]
> Having to declare a self parameter is confusing since you don't pass
> anything in when you call the method on an instance (I am aware of bound
> vs. unbound methods, etc. but a beginner would not be).

You are mistaking "mysterious" for "confusing".

"Why do I have to explicitly declare a self parameter?" is a mystery, 
and the answer can be given as:

- you just do
- because internally methods are just functions
- because it is actually useful (e.g. for unbound methods)

depending on the experience of the person asking. But its not 
*confusing*. "Sometimes I have to implicitly declare self, and sometimes 
I don't, and there doesn't seem to be any pattern to which it is" would 
be confusing. "Always explicitly declare self" is not.


> The double underscores are also confusing.

I've certainly seen a few cases of people who misread __init__ as _init_ and
were surprised by their code not working, in over a decade of dealing
with beginners' questions on comp.lang.python and the tutor mailing
list. So it is an easy mistake to make, but apparently a *rare* mistake
to make, and very easy to correct.

So I disagree that double underscores are "confusing". What is confusing 
about the instructions "press underscore twice at the beginning and end 
of the method name"?


> I propose syntactic sugar to make these method signatures more intuitive
> and clean.
> 
> class Vec(object):
>     def class(x, y):
>         self.x, self.y = x, y

I don't think that there is anything intuitive about changing the name 
of the method from __init__ to "class". What makes you think that people 
will intuit the word "class" to create instance? That seems like a 
dubious idea to me.

And it certainly isn't *clean*. At the moment, Python's rules are nicely 
clean: keywords can never be used as identifiers. You would either break 
that rule, or have some sort of magic where *some* keywords can 
*sometimes* be used as identifiers, but not always. That's the very 
opposite of clean -- it is a nasty, yucky design, and it doesn't scale 
to other protocols:

def with:  # is this __enter__ or __exit__?

It doesn't even work for instance construction! Is class(...) the 
__new__ or __init__ method?

Not all beginners to Python are beginners to programming at all. Other 
languages typically use one of three naming conventions for the 
constructor:

- a method with the same name as the class itself

  e.g. Java, C#, PHP 4, C++, ActionScript.

- special predefined method names

  e.g. "New" in VisualBasic, "alloc" and "init" in Objective C, 
  "initialize" in Ruby, "__construct" in PHP 5.

- a keyword used before an otherwise normal method definition

  e.g. "constructor" in Object Pascal, "initializer" in Ocaml, 
  "create" in Eiffel, "new" in F#.


So there's lots of variation in how constructors are written, and what 
seems "intuitive" will probably depend on the reader's background. Total 
beginners to OOP don't have any pre-conceived expectations, because the 
very concept of initialising an instance is new to them. Whether it is 
spelled "New" or "__init__" or "mzygplwts" is just a matter of how hard 
it is to spell correctly and memorise.


>     def self + other:
>         return Vec(self.x + other.x, self.y + other.y)

My guess is that this is impossible in a LL(1) parser, but even if 
possible, how do you write the reversed __radd__ method? My guess is 
that you would need:

def other + self:

but for that to work, "self" now needs to be a keyword rather than just 
a regular identifier which is special only because it is the first in 
the parameter list. And that's a problem because there are cases 
(rare, but they do happen) where we don't want to use "self" for the 
instance parameter.

A very common pattern in writing classes is:

   def __add__(self, other):
 

Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Steven D'Aprano
On Sun, Nov 06, 2016 at 04:46:42PM -0200, Danilo J. S. Bellini wrote:

> 1.2. Sub-expressions in an expression might be on other statements (e.g.
> assignments, other functions).

Not in Python it can't be. Assignment is not an expression, you cannot 
say (x = 2) + 1.


> 2. The itertools.accumulate function signature is broken:
> 2.1. Its arguments are reversed when compared to other higher order
> functions like map, filter and reduce.

That's only "broken" if you think that we should be able to swap 
accumulate for map, filter and reduce. But that's not the case: 
accumulate is NOT broken because the API is not intended to be the same 
as map, filter and reduce. The API for accumulate is that the iterable 
is mandatory, but the function argument is *not*.


> 2.2. It lacks a "start" parameter, requiring more complexity to include it
> (nesting a itertools.chain or a "prepend" function call).

Then just create your own wrapper:

import itertools

def accum(func, iterable, start=None):
    if start is not None:
        iterable = itertools.chain([start], iterable)
    return itertools.accumulate(iterable, func)
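
For instance, re-using the loan-amortization figures from elsewhere in the
thread:

>>> list(accum(lambda bal, pmt: bal * 1.05 - pmt, [90, 90, 90, 90], start=1000))
[1000, 960.0, 918.0, 873.9000000000001, 827.5950000000001]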



> 3. It's not about "adding complexity", it's the other way around:

No, I'm sorry, it does add complexity.

> 3.1. The proposal is about explicit recursion in a list comprehension (a
> way to access the previous output value/result, a.k.a.
> accumulator/state/memory).

List comprehensions are not intended for these sorts of complex 
calculations. Regular for-loops are easy to use and can be as general 
as you like.


> I'm not sure what you meant with "turn a sequence into a series", 

I think Stephen is referring to the mathematical concept of sequences 
and series.

A sequence is an ordered set of numbers obeying some rule, e.g.:

[1, 2, 3, 4, 5]

while a series is the partial sums found by adding each term to the 
previous sum:

[1, 3, 6, 10, 15]
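
(That is exactly the partial-sum behaviour itertools.accumulate gives with
its default operator:)

>>> from itertools import accumulate
>>> list(accumulate([1, 2, 3, 4, 5]))
[1, 3, 6, 10, 15]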



-- 
Steve


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Ethan Furman

On 11/06/2016 12:18 AM, Ram Rachum wrote:


Well, you think it's weird that I want a `finally` clause to not be called
 in some circumstances.


Yes I (we) do.


 Do you think it's equally weird to want an `__exit__` method that is not
 called in some circumstances?


Yes I (we) do.

--
~Ethan~


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Ethan Furman

On 11/06/2016 12:44 AM, Ram Rachum wrote:


I see that Python does allow you to not call `__exit__` if you don't want
 to [...]


Um, how?  I was unaware of this (mis-)feature.

--
~Ethan~


Re: [Python-ideas] PythonOS

2016-11-06 Thread Nick Coghlan
On 7 November 2016 at 08:25, victor rajewski  wrote:
> My proposal is a Linux distribution that has the simplicity of micropython,
> but the power of full python. Drop a file (let's call it main.py) on an SD
> card from your laptop, plug it into the device, and it boots and runs that
> file straight away. Or drop an entire project. If the device has USB device
> capabilities, then just drag the file across, or even edit it live. Easy SSH
> setup could allow more advanced users remote development and debugging
> options. Maybe jupyter could be leveraged to provide other network-based dev
> options (pynq already has a working linux distro like this). Connecting to a
> console would give you a python prompt.
>
> Ideally, key OS functions should be able to be controlled from within
> python, so that the user never has to leave the python prompt. For a start,
> things like network config, and possibly cron.
>
> What do you think? I'm not in a state to do all of this myself, but could
> give it a start if there is interest in the concept.

A potentially simpler option to explore would be a derivative of an
existing beginner-friendly Linux distro like Raspbian that configures
xon.sh ( http://xon.sh/ ) as the default system shell and IPython as
the default Python REPL.

That still keeps a distinction between the system shell, the
interactive Python shell, and normal Python application programming,
but I think those are actually good distinctions to preserve, as
"query and control the currently running machine", "perform ad hoc
interactive IO and data manipulation" and "perform IO manipulation and
user interaction repeatably for the benefit of other users" are
genuinely different tasks, even though they have common needs when it
comes to basic control flow constructs.

Regards,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Mikhail V
On 7 November 2016 at 02:32, Nathaniel Smith  wrote:
> On Sun, Nov 6, 2016 at 5:06 AM, Eric V. Smith  wrote:
>> Creating a new thread, instead of hijacking the PEP 532 discussion.
>>
>> From PEP 532:
>>
>>> Abstract
>>> 
>>>
>>> Inspired by PEP 335, PEP 505, PEP 531, and the related discussions, this
>>> PEP
>>> proposes the addition of a new protocol-driven circuit breaking operator
>>> to
>>> Python that allows the left operand to decide whether or not the
>>> expression
>>> should short circuit and return a result immediately, or else continue
>>> on with evaluation of the right operand::
>>>
>>> exists(foo) else bar
>>> missing(foo) else foo.bar()
>>
>> Instead of new syntax that only works in this one specific case, I'd prefer
>> a more general solution. I accept being "more general" probably seals the
>> deal in killing any proposal!
>>
>> I realize the following proposal has at least been hinted at before, but I
>> couldn't find a specific discussion about it. Since it applies to the
>> short-circuiting issues addressed by PEP 532 and its predecessors, I thought
>> I'd bring it up here. It could also be used to solve some of the problems
>> addressed by the rejected PEP 463 (Exception-catching expressions). See also
>> PEP 312 (Simple Implicit Lambda). It might also be usable for some of the
>> use cases presented in PEP 501 (General purpose string interpolation, aka
>> i-strings).
>>
>> I'd rather see the ability to have unevaluated expressions, that can later
>> be evaluated. I'll use backticks here to mean: "parse, but do not execute
>> the enclosed code". This produces an object that can later be evaluated with
>> a new builtin I'll call "evaluate_now". Obviously these are strawmen, and
>> partly chosen to be ugly and unacceptable names and symbols in the form I'll
>> discuss here.
>
> If we're considering options along these lines, then I think the local
> optimum is actually a "quoted-call" operator, rather than a quote
> operator. So something like (borrowing Rust's "!"):
>
> eval_else!(foo.bar, some_func())
>
> being sugar for
>
> eval_else.__macrocall__(<quoted foo.bar>,
> <quoted some_func()>)
>
> You can trivially use this to recover a classic quote operator if you
> really want one:
>
> def quote!(arg):
>     return arg
>
> but IMO this way is more ergonomic for most use cases (similar to your
> '&' suggestion), while retaining the call-site marking that "something
> magical is happening here" (which is also important, both for
> readability + implementation simplicity -- it lets the compiler know
> statically when it needs to retain the AST, solving the issue that
> Greg pointed out).
>
> Some other use cases:
>
> Log some complicated object, but only pay the cost of stringifying the
> object if debugging is enabled:
>
> log.debug!(f"Message: {message_object!r}")
>
> Generate a plot where the axes are automatically labeled "x" and
> "np.sin(x)" (this is one place where R's plotting APIs are more
> convenient than Python's):
>
> import numpy as np
> import matplotlib.pyplot as plt
> x = np.linspace(0, 10)
> plt.plot!(x, np.sin(x))
>
> What PonyORM does, but without the thing where currently their
> implementation involves decompiling bytecode...:
>
> db.select!(c for c in Customer if sum(c.orders.price) > 1000)
>
> Filtering out a subset of rows from a data frame in pandas; 'height'
> and 'age' refer to columns in the data frame (equivalent to
> data_frame[data_frame["height"] > 100 and data_frame["age"] < 5], but
> more ergonomic and faster (!)):
>
> data_frame.subset!(height > 100 and age < 5)
>
> (IIRC pandas has at least experimented with various weird lambda hacks
> for this kind of thing; not sure what the current status is.)
>
> Every six months or so I run into someone who's really excited about
> the idea of adding macros to python, and I suggest this approach. So
> far none of them have been excited enough to actually write a PEP, but
> if I were going to write a PEP then this is the direction that I'd
> take :-).
>

Oh great! Good to know I am not alone in thinking in this
direction.
I have, however, one minor problem here: the "!" sign
is almost invisible in the code, unless there is syntax highlighting
that paints it in some very bright color.
On the other hand, I am not sure whether it *must* be very visible...
So, to make it more distinctive in code, I would propose something
like:

macros<>( x, y )
macros>( x, y )
macros::( x, y )

But those are already used operators, sadly :(
It would look so neat... Perhaps it is still possible to do, though?


Mikhail
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Nick Coghlan
On 7 November 2016 at 12:25, Ethan Furman  wrote:
> On 11/06/2016 12:44 AM, Ram Rachum wrote:
>
>> I see that Python does allow you to not call `__exit__` if you don't want
>>  to [...]
>
> Um, how?  I was unaware of this (mis-)feature.

It involves wrapping the context manager in another context manager
that deliberately doesn't delegate the call to __exit__ in some cases
(cf contextlib.ExitStack.pop_all()).
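
For example, a rough sketch of that pattern with ExitStack (the file handling
here is only a placeholder):

    from contextlib import ExitStack

    def acquire(path, keep_open=False):
        with ExitStack() as stack:
            f = stack.enter_context(open(path))
            if keep_open:
                # pop_all() transfers the registered cleanups to a new
                # ExitStack, so this with-block exits without closing f.
                stack.pop_all()
            return f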

By contrast, if __del__ is defined (as it is on generators), if you
don't keep the context manager itself alive, you can only prevent the
cleanup happening if you can define a subclass to use instead, and
that's not always possible (deliberately so, in the case of generator
cleanup).

So the odd part of Ram's request isn't wanting to have conditional
resource cleanup - the recipes in the contextlib docs gives some
examples of where conditional local resource management is useful and
how to achieve it using ExitStack. The odd part is wanting to make the
resource cleanup implicitly unreliable, rather than having it be
reliable by default and folks having to explicitly opt in to disabling
it, since the easiest way to obtain non-deterministic resource
management is to just avoid using the context management features in
the first place.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Generator-based context managers can't skip __exit__

2016-11-06 Thread Ethan Furman

On 11/06/2016 08:11 PM, Nick Coghlan wrote:

On 7 November 2016 at 12:25, Ethan Furman  wrote:

On 11/06/2016 12:44 AM, Ram Rachum wrote:


I see that Python does allow you to not call `__exit__` if you don't want
  to [...]


Um, how?  I was unaware of this (mis-)feature.


It involves wrapping the context manager in another context manager
that deliberately doesn't delegate the call to __exit__ in some cases
(cf contextlib.ExitStack.pop_all()).


Ah, okay.

Perhaps a custom iterator class would suit Ram's needs better than using a 
generator shortcut.

--
~Ethan~
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Danilo J. S. Bellini
2016-11-06 23:55 GMT-02:00 Steven D'Aprano :

> On Sun, Nov 06, 2016 at 04:46:42PM -0200, Danilo J. S. Bellini wrote:
>
> > 1.2. Sub-expressions in an expression might be on other statements (e.g.
> > assignments, other functions).
>
> Not in Python it can't be. Assignment is not an expression, you cannot
> say (x = 2) + 1.
>

O.o ... how is (x = 2) + 1 related to what I wrote?
Say you have "d = a + b + c"; you can write it as "h = a + b" and then "d = h +
c"...
The expression "a + b + c" and the new "h + c" are equivalent because the
sub-expression "a + b" was assigned to "h" elsewhere (in another statement).


2016-11-06 23:55 GMT-02:00 Steven D'Aprano :

> > 2. The itertools.accumulate function signature is broken:
> > 2.1. Its arguments are reversed when compared to other higher order
> > functions like map, filter and reduce.
>
> That's only "broken" if you think that we should be able to swap
> accumulate for map, filter and reduce. But that's not the case:
> accumulate is NOT broken because the API is not intended to be the same
> as map, filter and reduce. The API for accumulate is that the iterable
> is mandatory, but the function argument is *not*.
>

I'd say map and filter don't even require a function, as "None" means
"lambda x: x" for them, strangely enough.

All of them are higher-order functions in standard Python, and you can
add other functions like itertools.takewhile and itertools.dropwhile to
this comparison. The accumulate signature is simply
broken/different/surprising in that context.
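
The asymmetry in a nutshell (plain standard-library calls, for illustration):

    from functools import reduce
    from itertools import accumulate

    data = [1, 2, 3, 4]
    map(lambda x: 2 * x, data)            # function first, iterable second
    filter(lambda x: x % 2, data)         # function first, iterable second
    reduce(lambda a, b: a + b, data)      # function first, iterable second
    accumulate(data, lambda a, b: a + b)  # iterable first, function second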


2016-11-06 23:55 GMT-02:00 Steven D'Aprano :

> > 3. It's not about "adding complexity", it's the other way around:
>
> No, I'm sorry, it does add complexity.
>

Compared to the alternatives, it's the other way around: my proposal
removes complexity from the resulting code that requires a scan.

Unless you're saying so because it's a proposal to change CPython, and since
CPython itself would gain a new feature, it would "be more complex"... I
agree: any change would either add complexity somewhere or be backwards
incompatible. Which one is better? Isn't that why this mailing list exists?


2016-11-06 23:55 GMT-02:00 Steven D'Aprano :

> List comprehensions are not intended for these sorts of complex
> calculations.
>

I know list comprehensions aren't yet intended for a scan; otherwise, why
would I propose this? Triple-for-section list comprehensions are
complicated, and it's actually surprising that Python allows using the same
target variable name twice in them.

But the calculation itself and its declarative description aren't complex.
Even you wrote such a declarative description in your e-mail, when
explaining what a "series" is.

-- 
Danilo J. S. Bellini
---
"*It is not our business to set up prohibitions, but to arrive at
conventions.*" (R. Carnap)
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread C Anthony Risinger
On Nov 6, 2016 7:32 PM, "Nathaniel Smith"  wrote:
>
> [...]
>
> Some other use cases:
>
> Log some complicated object, but only pay the cost of stringifying the
> object if debugging is enabled:
>
> log.debug!(f"Message: {message_object!r}")

Would the log.debug implementation need to fetch the context to evaluate
the delayed expression (say by using sys._getframe) or would that be bound?
Is a frame necessary or just a (bound?) symbol table? Could a substitute be
provided on evaluation?

Curious how this looks to the callee and what is possible.

Also what is the meaning (if desirable) of something like:

def debug!(...): pass

Persistent delayed calls? Delayed default arguments? Something else? Not
valid?
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Matthias Bussonnier
On Sun, Nov 6, 2016 at 5:32 PM, Nathaniel Smith  wrote:

>
> If we're considering options along these lines, then I think the local
> optimum is actually a "quoted-call" operator, rather than a quote
> operator. So something like (borrowing Rust's "!"):
>
> eval_else!(foo.bar, some_func())
>
> being sugar for
>
> eval_else.__macrocall__(<quoted foo.bar>,
> <quoted some_func()>)
>
> You can trivially use this to recover a classic quote operator if you
> really want one:
>
> def quote!(arg):
>     return arg
>

Xonsh does it:

http://xon.sh/tutorial_macros.html

At least for "function" calls; it makes use of Python type annotations
to decide whether to expand the expression or not, and whether to pass a
string, an AST, etc. to the defined macro.

I haven't tried it in a while, but there were some ideas floating
around for context managers as well, to get the block they wrap.

-- 
M
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Greg Ewing

C Anthony Risinger wrote:
On Nov 6, 2016 7:32 PM, "Nathaniel Smith" wrote:

 >
 > log.debug!(f"Message: {message_object!r}")

Would the log.debug implementation need to fetch the context to evaluate 
the delayed expression


Not if it expands to

   log.debug(lambda: f"Message: {message_object!r}")


Also what is the meaning (if desirable) of something like:

def debug!(...): pass


Nothing like that would be needed. The implementation of
debug() would just be an ordinary function receiving callable
objects as parameters.
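
A minimal sketch of such a debug() function, assuming a simple module-level
flag:

    DEBUG = False

    def debug(make_message):
        # Only build the (possibly expensive) message when debugging is on.
        if DEBUG:
            print(make_message())

    message_object = {"id": 42}
    debug(lambda: f"Message: {message_object!r}")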

--
Greg
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Reduce/fold and scan with generator expressions and comprehensions

2016-11-06 Thread Steven D'Aprano
On Mon, Nov 07, 2016 at 02:21:04AM -0200, Danilo J. S. Bellini wrote:
> 2016-11-06 23:55 GMT-02:00 Steven D'Aprano :
> 
> > On Sun, Nov 06, 2016 at 04:46:42PM -0200, Danilo J. S. Bellini wrote:
> >
> > > 1.2. Sub-expressions in an expression might be on other statements (e.g.
> > > assignments, other functions).
> >
> > Not in Python it can't be. Assignment is not an expression, you cannot
> > say (x = 2) + 1.
> >
> 
> O.o ... how is that (x = 2) + 1 related to what I wrote?

I misread your comment as "Sub-expressions in an expression might be 
other statements".


Sorry for the misunderstanding.


-- 
Steve
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Steven D'Aprano
On Sun, Nov 06, 2016 at 09:31:06AM -0500, Eric V. Smith wrote:

> The point remains: do we want to be able to create unevaluated 
> expressions that can be evaluated at a different point?

I sometimes think that such unevaluated expressions (which I usually 
call "thunks") would be cool to have. But in more realistic moments I 
think that they're a solution looking for a problem to solve.

If your PEP suggests a problem that they will solve, I'm interested.

But note that we already have two ways of generating thunk-like objects: 
functions and compiled byte-code.

thunk = lambda: a + b - c
thunk()

thunk = compile('a + b - c', '', 'eval')
eval(thunk)

Both are very heavyweight: the overhead of function call syntax is 
significant, and the keyword "lambda" is a lot of typing just to delay 
evaluation of an expression. compile(...) is even worse.


-- 
Steve
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Brendan Barnwell

On 2016-11-06 21:46, Steven D'Aprano wrote:

I sometimes think that such unevaluated expressions (which I usually
call "thunks") would be cool to have. But in more realistic moments I
think that they're a solution looking for a problem to solve.

If your PEP suggests a problem that they will solve, I'm interested.

But note that we already have two ways of generating thunk-like objects:
functions and compiled byte-code.

thunk = lambda: a + b - c
thunk()

thunk = compile('a + b - c', '', 'eval')
eval(thunk)

Both are very heavyweight: the overhead of function call syntax is
significant, and the keyword "lambda" is a lot of typing just to delay
evaluation of an expression. compile(...) is even worse.


	I sometimes want these too.  But note that both the solutions you 
propose are quite a ways from a true "unevaluated expression".


	The big problem with an ordinary lambda (or def) is that you cannot 
explicitly control where it will decide to look for its free variables. 
 If it uses a local variable from an enclosing namespace, it will 
always look for it in that namespace, so you can't "patch in" a value by 
setting a global variable.  If it uses a variable that isn't local to 
any enclosing namespace, it will always look for it in the global 
namespace, so you can't patch in a value by setting a local variable in 
the context where you're calling the function.
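
A small illustration of that lookup behaviour (the names are arbitrary):

    x = 1
    thunk = lambda: x + 10    # the free variable x is resolved globally

    def caller():
        x = 100               # has no effect on the thunk
        return thunk()

    caller()                  # 11, not 110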


	You can get around this in a def by, for instance, using global to mark 
all variables global, and then using eval to pass in a custom global 
namespace.  But that is a lot of boilerplate.  What I want (when I want 
this) is a way to create a function that will allow the injection of 
values for *any* variables, regardless of whether the function 
originally thought they were local, nonlocal (i.e., local to some 
enclosing scope) or global.  The way it is now, the status of a 
function's variables is inextricably linked to the syntactic context 
where it was defined.  This is a good thing most of the time, but it's 
not what you want if you want to define an expression that should later 
be evaluated in some other context.


	I consider the compile-based solution a nonstarter, because it puts the 
code in a string.  With the code in a string, you are blocked from using 
syntax highlighting or any other handy editor features.


--
Brendan Barnwell
"Do not follow where the path may lead.  Go, instead, where there is no 
path, and leave a trail."

   --author unknown
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Alternative to PEP 532: delayed evaluation of expressions

2016-11-06 Thread Nathaniel Smith
On Sun, Nov 6, 2016 at 9:08 PM, C Anthony Risinger  wrote:
> On Nov 6, 2016 7:32 PM, "Nathaniel Smith"  wrote:
>>
>> [...]
>>
>> Some other use cases:
>>
>> Log some complicated object, but only pay the cost of stringifying the
>> object if debugging is enabled:
>>
>> log.debug!(f"Message: {message_object!r}")
>
> Would the log.debug implementation need to fetch the context to evaluate the
> delayed expression (say by using sys._getframe) or would that be bound? Is a
> frame necessary or just a (bound?) symbol table? Could a substitute be
> provided on evaluation?

There are a lot of ways one could go about it -- I'll leave the
details to whoever decides to actually write the PEP :-) -- but one
sufficient solution would be to just pass AST objects. Those are
convenient (the compiler has just parsed the code anyway), they allow
the code to be read or modified before use (in case you want to inject
variables, or convert to SQL as in the PonyORM case, etc.), and if you
want to evaluate the thunks then you can look up the appropriate
environment using sys._getframe and friends. Maybe one can do even
better, but simple ASTs are a reasonable starting point.
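
As a rough sketch under today's machinery (the quoted-call syntax itself
remains hypothetical, so the AST below is built with ast.parse):

    import ast
    import sys

    def eval_in_caller(expr_ast):
        # Evaluate an 'eval'-mode AST using the caller's globals/locals.
        frame = sys._getframe(1)
        code = compile(expr_ast, "<thunk>", "eval")
        return eval(code, frame.f_globals, frame.f_locals)

    a, b = 2, 3
    eval_in_caller(ast.parse("a * b", mode="eval"))  # 6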

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/