[Python-Dev] Feature Request: Py_NewInterpreter to create separate GIL (branch)

2006-11-03 Thread Robert
repeated from c.l.p : "Feature Request: Py_NewInterpreter to create 
separate GIL (branch)"

Daniel Dittmar wrote:
 > robert wrote:
 >> I'd like to use multiple CPU cores for selected time consuming Python
 >> computations (incl. numpy/scipy) in a frictionless manner.
 >>
 >> Interprocess communication is tedious and out of question, so I
 >> thought about simply using a more Python interpreter instances
 >> (Py_NewInterpreter) with extra GIL in the same process.
 >
 > If I understand Python/ceval.c, the GIL is really global, not specific
 > to an interpreter instance:
 > static PyThread_type_lock interpreter_lock = 0; /* This is the GIL */
 >

That's the show-stopper as of now.
There are only a handful of functions in ceval.c that use that one global 
lock; the rest use those functions around thread states.

Would it be a possibility in a future Python to have the lock separate for 
each interpreter instance?
That is: keep *interpreter_lock separately in each PyThreadState instance, so 
that only threads of the same interpreter share the same GIL?
Separation between interpreters seems to be enough. The interpreter runs 
mainly on the stack; probably only very few global C-level resources would 
require individual extra locks.

Sooner or later Python will have to answer the multi-processor question.
A per-interpreter GIL plus a nice module for tunneling Python objects 
directly between interpreters inside one process might be the answer at the 
right border-line. The existing extension code base would remain compatible, 
as far as there is already decent locking on module globals, which is the 
usual case.
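To make that last idea concrete, here is a purely hypothetical sketch of what 
such a tunneling module's API might look like - none of these names exist, 
they are invented only to illustrate the proposal:

    # Hypothetical sketch only: `interp`, `create`, `channel` and `run` are
    # invented names illustrating per-interpreter GILs plus an object tunnel.
    import interp

    worker = interp.create()          # sub-interpreter with its own GIL
    inbox = interp.channel()          # tunnel for passing objects across

    worker.run(
        "result = sum(x * x for x in range(10**7))\n"
        "channel.send(result)",
        channel=inbox,                # object made visible inside the worker
    )

    print(inbox.recv())               # computed on another core, same process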

Robert


[Python-Dev] Feature Request: Py_NewInterpreter to create separate GIL (branch)

2006-11-09 Thread Robert
Talin wrote:

>> I don't know how you define simple. In order to be able to have
>> separate GILs you have to remove *all* sharing of objects between
>> interpreters. And all other data structures, too. It would probably
>> kill performance too, because currently obmalloc relies on the GIL.
> Nitpick: You have to remove all sharing of *mutable* objects. One day, 
> when we get "pure" GC with no refcounting, that will be a meaningful 
> distinction. :)

Is this a mad idea?

It could be a meaningful distinction now: the refcount of immutables/singletons 
could easily be pinned near MAXINT (either by a loose periodic GC scheme, or by 
making Py_INCREF/Py_DECREF behave like { if (ob->refcount != MAXINT) ... }).

Dict-like mutations such as Exception.x=5 could either be disabled, or handled 
by pinning Exception's refcount at MAXINT and giving it a locking __dict__ ... 
or exceptions could simply be duplicated per interpreter, as they don't have to 
cross the bridge (weren't they in an ordinary Python module once?).

obmalloc.c/LOCK() could be something fast like:

_retry:
  __asm LOCK INC malloc_lock            /* atomically try to take the lock */
  if (malloc_lock != 1) {               /* contended: back off and retry */
  __asm LOCK DEC malloc_lock; /*yield();*/ goto _retry;
  }

Knowing the final speed costs (see 
http://groups.google.de/group/comp.lang.python/msg/01cef42159fd1712 ) would 
require an experiment.
Cheap signal processors (<1% of targets) wouldn't need to be supported by 
free-threading interpreters.

Built-in/extension modules' global __dict__ would become a lockingdict.

A speedy LOCK INC locking method might even enable generally free-threading 
interpreters (on most CPUs). Almost all Python objects have static/uncritical 
attributes and would require only a few locks: a full-blown LOCK INC lock on 
dict & list accesses (avoidable for fast locals?) and on the default 
Py_INCREF/Py_DECREF (as far as there is still refcounting in Py3K).
A Py_FASTINCREF could stay cheap for known immutables (mainly Py_None) via the 
MAXINT method, and for fresh creations etc.

PyThreadState_GET(): a lookup via PyThread_get_thread_ident()/TlsGetValue() 
would become necessary. Is there a fast thread-ID register in today's CPUs?


Robert




[Python-Dev] Fwd: Ruby/Python Continuations: Turning a block callback into a read()-method ?

2006-02-12 Thread Robert
Fwd: news:<[EMAIL PROTECTED]>

After failing on a yield/iterator-continuation problem in Python (see
below) I tried the Ruby (1.8.2) language for the first time on that construct:
the example tries to convert a block-callback interface
(Net::FTP.retrbinary) into a read()-like iterator function, in order to
virtualize the existing FTP class as a kind of file system.  4 bytes max
per read in this first simple test below. But it fails with a ThreadError
on the second continuation, even though that second continuation really
does execute!? Any ideas how to make this work/correct?

(The question is not about the specific FTP example as such - e.g. not about a
rewrite of FTP/retrbinary or the use of OS tricks, real threads with polling
etc. - but about the continuation language trick needed to get the execution
flow right, in order to turn any callback interface into an "enslaved
callable iterator". Python can do such things in simple situations with
yield-generator functions/iter.next()... but Python fails by a hair
when there is a function-context barrier for "yield". Ruby's
block-yield mechanism seems not to have the power of real
generator continuations as in Python at all, but in principle only to be
what a normal callback would be in Python. Yet "callcc" seemed to be
promising - or so I thought :-(   )

=== Ruby callcc Pattern : execution fails with ThreadError!? ===
require 'net/ftp'
module Net

class FTPFile
   def initialize(ftp,path)
  @ftp = ftp
  @path=path
  @flag=true
  @iter=nil
   end
   def read
  if @iter
 puts "@iter.call"
 @iter.call
  else
 puts "RETR "[EMAIL PROTECTED]
 @ftp.retrbinary("RETR "[EMAIL PROTECTED],4) do |block|
print "CALLBACK ",block,"\n"
callcc{|@iter| @flag=true}
if @flag
   @flag=false
   return block
end
 end
  end
   end
end

end

ftp = Net::FTP.new("localhost",'user','pass')
ff  = Net::FTPFile.new(ftp,'data.txt')
puts ff.read()
puts ff.read()

=== Output/Error 

vs:~/test$ ruby ftpfile.rb
RETR data.txt
CALLBACK robe
robe
@iter.call
CALLBACK rt

/usr/lib/ruby/1.8/monitor.rb:259:in `mon_check_owner': current thread
not owner (ThreadError)
 from /usr/lib/ruby/1.8/monitor.rb:211:in `mon_exit'
 from /usr/lib/ruby/1.8/monitor.rb:231:in `synchronize'
 from /usr/lib/ruby/1.8/net/ftp.rb:399:in `retrbinary'
 from ftpfile.rb:17:in `read'
 from ftpfile.rb:33
vs:~/test$

===  Python Pattern : I cannot write down the idea because of a barrier ===

 I tried a pattern like:
 
 def open(self, ftppath, mode='rb'):
     class FTPFile:
         ...
         def iter_retr():
             ...
             def callback(blk):
                 how-to-yield-from-here-as-iter_retr(blk) ???
             self.ftp.retrbinary("RETR %s" % self.relpath, callback)
         def read(self, bytes=-1):
             ...
             self.buf += self.iter.next()
             ...
     ...
 

=
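A minimal stand-alone illustration of that "function-context barrier" (the
stand-in callback_api below plays the role of retrbinary; names are
illustrative only):

    def callback_api(callback):
        # Stand-in for a block-callback interface like retrbinary: it pushes
        # data at the callback; the caller has no way to pull.
        for block in (b"robe", b"rt"):
            callback(block)

    def read_all():
        def cb(block):
            # 'yield' here makes cb itself a generator function, not
            # read_all(); calling cb(block) just creates an unused generator
            # object, so the data can never be yielded out of read_all().
            yield block
        callback_api(cb)
        # Short of threads or real continuations, there is no way to hand
        # each block back to read_all()'s caller as it arrives.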


Robert











Re: [Python-Dev] os.walk() is going to be *fast* with scandir

2014-08-09 Thread Robert Collins
A small tip from my bzr days - cd into the directory before scanning
it - especially if you'll end up statting more than a fraction of the
files, or are recursing - otherwise the VFS does a traversal for each
path you directly stat / recurse into. This can become a dominating
factor in some workloads (I shaved several hundred milliseconds off of
bzr stat on kernel trees doing this).
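A rough sketch of the trick for a single directory (illustrative only - it
mutates the process-wide working directory, which is exactly why it's a
caller-level tactic rather than something for a library):

    import os

    def stat_names(dirpath, names):
        # Enter the directory once, then stat bare filenames, so the kernel
        # doesn't re-traverse the full path for every entry.
        old_cwd = os.getcwd()
        os.chdir(dirpath)
        try:
            return {name: os.stat(name) for name in names}
        finally:
            os.chdir(old_cwd)

    # e.g. stat_names("/usr/lib", os.listdir("/usr/lib"))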

-Rob

On 10 August 2014 15:57, Nick Coghlan  wrote:
> On 10 August 2014 13:20, Antoine Pitrou  wrote:
>> On 09/08/2014 12:43, Ben Hoyt wrote:
>>
>>> Just thought I'd share some of my excitement about how fast the all-C
>>> version [1] of os.scandir() is turning out to be.
>>>
>>> Below are the results of my scandir / walk benchmark run with three
>>> different versions. I'm using an SSD, which seems to make it
>>> especially faster than listdir / walk. Note that benchmark results can
>>> vary a lot, depending on operating system, file system, hard drive
>>> type, and the OS's caching state.
>>>
>>> Anyway, os.walk() can be FIFTY times as fast using os.scandir().
>>
>>
>> Very nice results, thank you :-)
>
> Indeed!
>
> This may actually motivate me to start working on a redesign of
> walkdir at some point, with scandir and DirEntry objects as the basis.
> My original approach was just too slow to be useful in practice (at
> least when working with trees on the scale of a full Fedora or RHEL
> build hosted on an NFS share).
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   [email protected]   |   Brisbane, Australia



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] os.walk() is going to be *fast* with scandir

2014-08-18 Thread Robert Collins
Indeed - my suggestion is applicable to people using the library.

-Rob
On 10 Aug 2014 18:21, "Larry Hastings"  wrote:

>  On 08/09/2014 10:40 PM, Robert Collins wrote:
>
> A small tip from my bzr days - cd into the directory before scanning it
>
>
> I doubt that's permissible for a library function like os.scandir().
>
>
> */arry*
>


Re: [Python-Dev] web-sig mailing list moderating every post?

2014-09-20 Thread Robert Collins
Ugh - this was in my mailbox shortly after the moderator action email
from mailman:

"No, this looks like the spam filter.  Don't know what triggered it.  Or
why it went to you.  But the list moderation is turned off (except for
non-members posting to the list), and you yourself are not moderated,
so...

Bill"

- nothing to see here, move right along, and sorry for the noise.

-Rob

On 21 September 2014 10:19, Robert Collins  wrote:
> I'm not sure of the right place to bring this up - I tried to on the
> web-sig list itself, but the moderator rejected the post.
>
> What I tried to post there was
>
> """Looks like *every* post to web-sig gets manually moderated. That seems
> like it will make discussion rather hard: can we get that changed (or
> is there some historical need for it - if so, perhaps we should use
> python-dev or some other list) ?"""
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


[Python-Dev] web-sig mailing list moderating every post?

2014-09-20 Thread Robert Collins
I'm not sure of the right place to bring this up - I tried to on the
web-sig list itself, but the moderator rejected the post.

What I tried to post there was

"""Looks like *every* post to web-sig gets manually moderated. That seems
like it will make discussion rather hard: can we get that changed (or
is there some historical need for it - if so, perhaps we should use
python-dev or some other list) ?"""

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Encoding of PyFrameObject members

2015-02-08 Thread Robert Collins
On 9 February 2015 at 09:11, Maciej Fijalkowski  wrote:
> Hi Francis
>
> Feel free to steal most of vmprof code, it should generally work
> without requiring to patch cpython (python 3 patches appreciated :-).
> As far as timer goes - it seems not to be going anywhere, I would
> rather use a background thread or something

What about setting a flag when the signal arrives and checking it at
the next bytecode evaluation or something?
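In pure Python the analogue is nearly free, since Python-level signal handlers
are already deferred to bytecode boundaries; a minimal sketch (Unix-only,
interval chosen arbitrarily):

    import signal

    samples = []

    def on_timer(signum, frame):
        # The C-level handler only sets a flag; this Python handler then runs
        # at the next bytecode boundary - effectively "set a flag when the
        # signal arrives, act at the next bytecode evaluation".
        samples.append((frame.f_code.co_filename, frame.f_lineno))

    signal.signal(signal.SIGPROF, on_timer)
    signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)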

-Rob


Re: [Python-Dev] [python-committers] Do we need to sign Windows files with GnuPG?

2015-04-04 Thread Robert Collins
On 4 April 2015 at 11:14, Steve Dower  wrote:
> The thing is, that's exactly the same goodness as Authenticode gives, except
> everyone gets that for free and meanwhile you're the only one who has
> admitted to using GPG on Windows :)
>
> Basically, what I want to hear is that GPG sigs provide significantly better
> protection than hashes (and I can provide better than MD5 for all files if
> it's useful), taking into consideration that (I assume) I'd have to obtain a
> signing key for GPG and unless there's a CA involved like there is for
> Authenticode, there's no existing trust in that key.

GPG sigs will provide protection against replay attacks [unless we're
proposing to revoke signatures on old point releases with known
security vulnerabilities - something that Window software vendors tend
not to do because of the dramatic and immediate effect on the deployed
base...]

This is not relevant for things we're hosting on SSL, but it is if anyone
is mirroring our installers around. They don't seem to be, so perhaps
it's a bit 'meh'.
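(For what it's worth, the hash half of that comparison is easy for anyone, on
any platform, to check - a sketch with placeholder arguments below - but it
only gives integrity, not authorship or protection against replaying an old,
vulnerable release:)

    import hashlib

    def sha256_matches(path, expected_hexdigest):
        # Integrity only: confirms the bytes match what was published, but
        # says nothing about who published them.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_hexdigest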

OTOH I also think there is value in consistency: signing all our
artifacts makes checking back on them later easier, should we need to.

One question, if you will - I don't think this has been asked so far: is
Authenticode verifiable from Linux, without Windows? And does it work
for users of Wine?

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] unittest test discovery and namespace packages

2015-04-17 Thread Robert Collins
On 17 April 2015 at 19:40, Alex Shkop  wrote:
> Hello!
>
> There's an issue considering test discovery in unittest module. Basically it
> is about unittest module that doesn't find tests in namespace packages. For
> more info see issue http://bugs.python.org/issue23882.
>
> I'm willing to make a patch for this bug. But I need help to formulate how
> test discovery should work.
>
> Documentation states that all importable modules that match pattern will be
> loaded. This means that test modules inside namespace packages should be
> loaded too. But enabling this would change things drastically. For example
> now, running
>
> python -m unittest
>
> inside cpython source root does nothing. If we will enable test discovery
> inside namespace packages then this command will start running the whole
> python test suite in Lib/test/.

I don't think that 'scan the global namespace' makes a sensible
default definition.

The behaviour of discovery with namespace packages today requires some
key to select the namespace - either a locally discovered directory,
which happens to be a namespace package, or the name of the package to
process.
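For reference, the "name of the package to process" route looks like this
(the package name is illustrative):

    import unittest

    # Point discovery at an explicit (namespace) package instead of scanning
    # whatever happens to be importable.
    loader = unittest.TestLoader()
    suite = loader.discover("mynamespace.tests")
    unittest.TextTestRunner().run(suite)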

Since discovery is recursive, sub namespace packages should work, but
I note there are no explicit tests to this effect.

I'm sorry I didn't respond earlier on the tracker - I didn't see the
issue in my inbox for some reason. Let's discuss there.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread Robert Collins
On 21 April 2015 at 08:07, Guido van Rossum  wrote:

> The situation is possibly even bleaker (or happier, depending on your
> position :-) for inline type hints in 3rd party packages -- few package
> authors will be satisfied with supporting only Python 3.5 and later. True,
> you can support Python 3.2 and up by declaring the 3rd party typing package
> as a dependency (unless Python 3.5+ is detected), but I don't expect this to
> become a popular approach overnight.

mypy has a codec for 2.x which strips type annotations -
https://github.com/JukkaL/mypy/tree/master/mypy/codec - while you
can't run mypy under 2.x, you can run it under 3.x to perform the
analysis, and one's code still runs under 2.x.

Another route - the one I've been experimenting with as I get familiar
with mypy - is to just use type comments exclusively. Function type
comments currently break, but that seems like a fairly shallow bug to
me, rather than something that shouldn't work. The advantage of that
route is that in editors which render comments in subtle colours, the
type hints stay unobtrusive without needing specific syntax-colouring
support.
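For example, a function annotated in the comment form stays valid 2.x syntax
(a sketch; typing comes from the backport package on older Pythons):

    from typing import Optional

    def first_word(line, default=None):
        # type: (str, Optional[str]) -> Optional[str]
        # The annotation lives in a comment, so a checker sees it while the
        # same source still runs unchanged on Python 2.x.
        words = line.split()
        return words[0] if words else default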

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread Robert Collins
On 21 April 2015 at 08:10, Eric Snow  wrote:
>
>
>
> While it helps, this sort of best-practice is still unsettled (and apparently 
> not obvious).  In the short term it would make more sense to recommend using 
> stub files for all the reason Harry enumerated.  Once the best practices are 
> nailed down through experience with stub files, then we can make 
> recommendations regarding inline type hints.
>
> -eric

Forgive my ignorance, but stub files can't annotate variables within
functions, can they? E.g. AIUI if there is a stub file, it is used in the
static analysis instead of the actual source. Likely I've got it
modelled wrong in my head :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread Robert Collins
On 21 April 2015 at 08:50, Harry Percival  wrote:
>> stub files are only used to type-check *users* of a module. If you want a
>> module itself to be type-checked you have to use inline type hints
>
> is this a fundamental limitation, or just the current state of tooling?

AIUI it's the fundamental design. Stubs don't annotate Python code,
they *are* annotated code themselves. They aren't merged with the
observed code at all.
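For concreteness, a stub is a parallel .pyi file containing only annotated
signatures with "..." bodies (module and function names below are
illustrative):

    # textutils.pyi -- the checker reads this *instead of* textutils.py, so
    # it can describe the public signatures but cannot annotate anything
    # inside the real function bodies.
    from typing import List

    def split_words(text: str) -> List[str]: ...
    def join_words(words: List[str], sep: str = ...) -> str: ...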

Could they be? Possibly. I don't know how much work that would be.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread Robert Collins
On 21 April 2015 at 10:02, Ian Cordasco  wrote:
>
>

>
> So I've generally stayed out of this but I feel there is some context that
> people are missing in general.
>
> First, allow me to provide some context: I maintain a /lot/ of Python
> code[1] and nearly all of it is designed to be compatible with Pythons 2.6,
> 2.7, 3.2, 3.3, 3.4 (and eventually 3.5) and sometimes 2.5 (depending on the
> project). If I want to improve a developer's experience with some of that
> code using Type Hints I will essentially have no way to do that unless I
> write the code with the annotations and ship versions with annotations
> stripped and other versions with annotations? That's a lot more overhead. If
> I could provide the annotations in stubs that means that only the people who
> care about using them will have to use them.

2.5? I'm so sorry :).

Being in approximately the same boat, I definitely want to be able to
improve the developer experience.

That said, with one key exception (str/bytes/unicode) Python code
generally has the same type on all versions. Sure it might be imported
from somewhere else, and you're restricted to the common subset of
APIs, but the types in use don't vary per-python.

So - as long as your *developers* can run mypy on 3.2+, they can
benefit from type checking. mypy itself requires 3.2+ to run, but
programs with type annotations should be able to run on all those
python versions you mention.

Now, what is the minimum barrier for entry?

Nothing :) - at the moment every file can be analysed, and mypy ships
with a bunch of stubs that describe the stdlib. So - you'll get some
benefit immediately, where bad use of stdlib routines is happening.

Constraining the types of functions gets you better errors, because
you are expressing intent rather than the what-might-happen that the
inference otherwise has to work from. In particular, constraining the
type of *inputs* can let bad callers be detected, rather than the
engine assuming they are valid-until-a-contradiction-occurs. You can
do that with stub files: put them in repo A, and add them to the
MYPYPATH when working on repo B which calls the code in repo A. You
can also add those stubs to repo B, but I wouldn't do that because
then they will skew vs repo A.
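A tiny example of that input-constraining effect (illustrative, nothing
OpenStack-specific):

    from typing import Sequence

    def mean(values: Sequence[float]) -> float:
        # Annotating the input states intent, so a checker can flag bad call
        # sites directly instead of inferring "whatever doesn't contradict".
        return sum(values) / len(values)

    mean([1.0, 2.0, 3.0])   # fine
    # mean("123")           # flagged by a checker: str is not Sequence[float]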

A further step up would be to annotate A in its code, rather than
using stubs. That way, developers of repo A will be warned about bad
uses of other repo A code.

But - if you have stubs *or* annotations in-line in repo A, *everyone*
changing repo A needs to care. Because if the code mismatches the
stub, the folk that do care will now be unable to use repo A correctly
- their type checker will complain about valid uses, and fail to
complain about more invalid uses.

I'm particularly interested in mypy for OpenStack because for some
repos > 10% of reported bugs are type mismatch errors which mypy may
well have avoided.

> Is it more overhead to manage twice the number of files? Yes. Do I feel it
> would be worth it to not overly complicate how these packages are released?
> Yes.

> Further, there are far more reasons to make stubs the baseline (in my
> opinion) the biggest reason of all is that people want to provide stubs for
> popular yet unmaintained libraries as third party packages. Should everyone
> using PIL be using Pillow? Of course. Does that mean they'll migrate or be
> allowed to migrate? No. Should they be able to benefit from this? Yes the
> should. The only way for PIL users to be able to do that is if stub files
> can be packaged separately for PIL and distributed by someone else.

stubs can certainly be packaged and distributed separately. That
doesn't make the case that we should use stubs for projects that are
opting in.

> I think while the authors are currently seeing stubs as a necessary *evil*
> they're missing points where they're a better backwards compatible solution
> for people who want to give users with capable IDEs the ability to use stub
> (or hint) files.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread Robert Collins
On 22 April 2015 at 04:28, Guido van Rossum  wrote:
> On Tue, Apr 21, 2015 at 12:49 AM, Antoine Pitrou 
> wrote:
>>
>> On Mon, 20 Apr 2015 20:43:38 -0400
>> "R. David Murray"  wrote:
>> > +1 to this from me too. I'm afraid that means I'm -1 on the PEP.
>> >
>> > I didn't write this in my earlier email because I wasn't sure about it,
>> > but my gut reaction after reading Harry's email was "if type annotations
>> > are used in the stdlib, I'll probably stop contributing".  That doesn't
>> > mean that's *true*, but that's the first time I've ever had that
>> > thought, so it is probably worth sharing.
>>
>> I think it would be nice to know what the PEP means for daily stdlib
>> development. If patches have to carry typing information each time they
>> add/enhance an API that's an addition burden. If typing is done
>> separately by interested people then it sounds like it wouldn't have
>> much of an impact on everyone else's workflow.
>
>
> This point will be moot until new code appears in the stdlib whose author
> likes type hints. As I said, we won't be converting existing code to add
> type hints (I'm actively against that for the stdlib, for reasons I've
> explained already).
>
> *If* type hints prove useful, I expect that adding type hints **to code that
> deserves them** is treated no different in the workflow than adding tests or
> docs. I.e. something that is the right thing to do because it has obvious
> benefits for users and/or future maintainers. If at some point running a
> type checker over the stdlib as part of continuous integration become
> routine, type hints can also replace certain silly tests.
>
> Until some point in a possible but distant future when we're all thinking
> back fondly about the argument we're currently having, it will be the choice
> of the author of new (and *only* new) stdlib modules whether and how to use
> type hints. Such a hypothetical author would also be reviewing updates to
> "their" module and point out lack of type hints just like you might point
> out an incomplete docstring, an outdated comment, or a missing test. (The
> type checker would be responsible for pointing out bugs. :-P )

What about major changes to existing modules? I have a backlog of
intended feature uplifts from testtools into unittest - if the type
hints thing works out I am likely to put them into testtools. What's
your view on type hints in such *new code* in existing modules?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread Robert Collins
On 22 April 2015 at 08:26, Guido van Rossum  wrote:

> In the end this should be up to you and the reviewers, but for such a
> venerable module like unittest I'd be hesitant to be an early adopter. I'd
> also expect that much of unittest is too dynamic in nature to benefit from
> type hints. But maybe you should just try to use them for testtools and see
> for yourself how beneficial or cumbersome they are in that particular case?

Exactly yes. I've been experimenting recently with mypy to see. So far
I've regressed back through 4 repos (unittest2, testtools, traceback2,
linecache2) to get something small enough to work and experiment with.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] PYTHONHTTPSVERIFY env var

2015-05-09 Thread Robert Collins
On 10 May 2015 at 11:44, Chris Angelico  wrote:
> On Sun, May 10, 2015 at 4:13 AM, M.-A. Lemburg  wrote:
>> By providing a way to intentionally switch off the new default,
>> we do make people aware of the risks and that's good enough,
>> while still maintaining the contract people rightly expect of
>> patch level releases of Python.
>
> Just as long as it's the sysadmin, and NOT some random attacker over
> the internet, who has the power to downgrade security. Environment
> variables can be attacked in various ways.

They can, and the bash fun was very good evidence of that.

OTOH if someone's environment is at risk, PATH and PYTHONPATH are
already very effective attack vectors.
-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] PYTHONHTTPSVERIFY env var

2015-05-12 Thread Robert Kuska


- Original Message -
> From: "Donald Stufft" 
> To: "Nick Coghlan" 
> Cc: "python-dev" , "M.-A. Lemburg" 
> Sent: Monday, May 11, 2015 1:16:58 PM
> Subject: Re: [Python-Dev] PYTHONHTTPSVERIFY env var
> 
> 
> > On May 11, 2015, at 6:47 AM, Nick Coghlan  wrote:
> > 
> > On 11 May 2015 at 20:23, Donald Stufft  wrote:
> >> On May 11, 2015, at 6:15 AM, Nick Coghlan  wrote:
> >>> We made the decision when PEP 476 was accepted that this change turned
> >>> a silent security failure into a noisy one, rather than being a
> >>> regression in its own right. PEP 493 isn't about disagreeing with that
> >>> decision, it's about providing a smoother upgrade path in contexts
> >>> where letting the security failure remain silent is deemed to be
> >>> preferred in the near term.
> >> 
> >> I don't really agree that the decision to disable TLS is an environment
> >> one,
> >> it's really a per application decision. This is why I was against having
> >> some
> >> sort of global off switch for all of Python because just because one
> >> application needs it turned off doesn't mean you want it turned off for
> >> another
> >> Python application.
> > 
> > The scenario I'm interested in is the one where it *was* off globally
> > (i.e. you were already running Python 2.7.8 or earlier) and you want
> > to manage a global rollout of a new Python version that supports being
> > configured to verify HTTPS certificates by default, while making the
> > decision on whether or not to enable HTTPS certificate verification on
> > a server-by-server basis, rather than having that decision be coupled
> > directly to the rollout of the updated version of Python.
> > 
> > I agree that the desired end state is where Python 3 is, and where
> > upstream Python 2.7.9+ is, this is solely about how to facilitate
> > folks getting from point A to point B without an intervening window of
> > "I broke the world and now my boss is yelling at me about it" :)
> > 
> 
> Oh, another issue that I forgot to mention--
> 
> A fair number of people had no idea that Python wasn't validating TLS before
> 2.7.9/3.4.3; however, as part of the process of changing that in 2.7.9, a lot
> of people became aware that Pythons before 2.7.9 didn't validate but that
> Python 2.7.9+ does. I worry that if Red Hat (or anyone) ships a Python 2.7.9
> that doesn't verify by default then they are going to be shipping something
> which defies the expectations of those users who were relying on the fact
> that Python 2.7.9+ was supposed to be secure by default now. You're
> (understandably) focusing on "I already have my thing running on Python
> 2.7.8 and I want to
> yum update and get 2.7.9 and have things not visibly break", however there is
> the other use case of "I'm setting up a new environment, and I installed RHEL
> and got 2.7.9, I remembered reading in LWN that 2.7.9 verifies now so I must
> be safe". If you *do* provide such a switch, defaulting it to verify and
> having

We (Red Hat) will not update Python to 2.7.9; we ship 2.7.5 and backport
bugfixes/features based on user demand.

> people where that breaks go in and turn it off is probably a safer mechanism
> since the cases where 2.7.9 verification breaks things for people is a
> visible
> change where the case that someone expects 2.7.9 to verify and it doesn't
> isn't
> a visible change and is easily missed unless they go out of their way to try
> and test it against a server with an invalid certificate.
> 
> Either way, if there is some sort of global off switch, having that off
> switch
> set to off should raise some kind of warning (like urllib3 does if you use
> the unverified HTTPS methods). To be clear, I don't mean that using the built
> in ssl module APIs to disable verification should raise a warning, I mean the
> hypothetical "make my Python insecurely access HTTPS" configuration file (or
> environment variable) that is being proposed.
> 
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
> 
> 


Regards
Robert Kuska
{rkuska}


Re: [Python-Dev] Can Python guarantee the order of keyword-only parameters?

2017-11-27 Thread Robert Collins
Plus 1 from me. I'm not 100% sure the signature / inspect backport does
this, but as you say, it should be trivial to do, to whatever extent the
python version we're hosted on does it.

Rob

On 28 Nov. 2017 07:14, "Larry Hastings"  wrote:

>
>
> First, a thirty-second refresher, so we're all using the same terminology:
>
> A *parameter* is a declared input variable to a function.
> An *argument* is a value passed into a function.  (*Arguments* are stored
> in *parameters.*)
>
> So in the example "def foo(clonk): pass; foo(3)", clonk is a parameter,
> and 3 is an argument. ++
>
>
> Keyword-only arguments were conceived of as being unordered.  They're
> stored in a dictionary--by convention called **kwargs--and dictionaries
> didn't preserve order.  But knowing the order of arguments is occasionally
> very useful.  PEP 468 proposed that Python preserve the order of
> keyword-only arguments in kwargs.  This became easy with the
> order-preserving dictionaries added to Python 3.6.  I don't recall the
> order of events, but in the end PEP 468 was accepted, and as of 3.6 Python
> guarantees order in **kwargs.
>
> But that's arguments.  What about parameters?
>
> Although this isn't as directly impactful, the order of keyword-only
> parameters *is* visible to the programmer.  The best way to see a
> function's parameters is with inspect.signature, although there's also the
> deprecated inspect.getfullargspec; in CPython you can also directly examine
> fn.__code__.co_varnames.  Two of these methods present their data in a way
> that preserves order for all parameters, including keyword-only
> parameters--and the third one is deprecated.
>
> Python must (and does) guarantee the order of positional and
> positional-or-keyword parameters, because it uses position to map arguments
> to parameters when the function is called.  But conceptually this isn't
> necessary for keyword-only parameters because their position is
> irrelevant.  I only see one place in the language & library that addresses
> the ordering of keyword-only parameters, by way of omission.  The PEP for
> inspect.signature (PEP 362) says that when comparing two signatures for
> equality, their positional and positional-or-keyword parameters must be in
> the same order.  It makes a point of *not* requiring that the two
> functions' keyword-only parameters be in the same order.
>
> For every currently supported version of Python 3, inspect.signature and
> fn.__code__.co_varnames preserve the order of keyword-only parameters.
> This isn't surprising; it's basically the same code path implementing those
> as the two types of positional-relevant parameters, so the most
> straightforward implementation would naturally preserve their order.  It's
> just not guaranteed.
>
> I'd like inspect.signature to guarantee that the order of keyword-only
> parameters always matches the order they were declared in.  Technically
> this isn't a language feature, it's a library feature.  But making this
> guarantee would require that CPython internally cooperate, so it's kind of
> a language feature too.
>
> Does this sound reasonable?  Would it need a PEP?  I'm hoping for "yes"
> and "no", respectively.
>
>
> Three final notes:
>
>- Yes, I do have a use case.  I'm using inspect.signature metadata to
>mechanically map arguments from an external domain (command-line arguments)
>to a Python function.  Relying on the declaration order of keyword-only
>parameters would elegantly solve one small problem.
>- I asked Armin Rigo about PyPy's support for Python 3.  He said it
>should already maintain the order of keyword-only parameters, and if I ever
>catch it not maintaining them in order I should file a bug.  I assert that
>making this guarantee would be nearly zero effort for any Python
>implementation--I bet they all already behave this way, all they need is a
>test case and some documentation.
>- One can extend this concept to functools.partial and
>inspect.Signature.bind: should its transformations of keyword-only
>parameters also maintain order in a consistent way?  I suspect the answer
>there is much the same--there's an obvious way it should behave, it almost
>certainly already behaves that way, but it doesn't guarantee it.  I don't
>think I need this for my use case.
>
>
>
> */arry*
>
> ++ Yes, that means "Argument Clinic" should really have been called
> "Parameter Clinic".  But the "Parameter Clinic" sketch is nowhere near as
> funny.
>

Re: [Python-Dev] Symmetry arguments for API expansion

2018-03-21 Thread Robert Smallshire
As requested on the bug tracker, I've submitted a pull request for
is_integer() support on the other numeric types.
https://github.com/python/cpython/pull/6121

These are the tactics I used to implement it:

- float: is_integer() already exists, so no changes

- int:  return True

- Real: return x == int(x). Although Real doesn't explicitly support
conversion to int with __int__, it does support conversion to int with
__trunc__. The int constructor falls back to using __trunc__.

- Rational (also inherited by Fraction): return x.denominator == 1 as
Rational requires that all numbers must be represented in lowest form.

- Integral: return True

- Decimal: expose the existing dec_mpd_isinteger C function to Python as
is_integer()
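Collapsed into one free function, that dispatch looks roughly like this (a
sketch for illustration only; the PR itself adds methods on the types rather
than a helper like this):

    import numbers

    def is_integer(x):
        if isinstance(x, numbers.Integral):
            return True                  # int and friends: always integral
        if isinstance(x, numbers.Rational):
            return x.denominator == 1    # Rationals are kept in lowest form
        if isinstance(x, numbers.Real):
            return x == int(x)           # int() falls back to __trunc__
        raise TypeError("integrality not defined for %r" % type(x).__name__)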


Re: [Python-Dev] Deprecating float.is_integer()

2018-03-21 Thread Robert Smallshire
Here's an excerpted (and slightly simplified for consumption here) usage of
float.is_integer() from the top of a function which does some
convolution/filtering in a geophysics application. I've mostly seen it used
in guard clauses in this way to reject either illegal numeric arguments
directly, or particular combinations of arguments as in this case:

def filter_convolve(x, y, xf, yf, stride=1, padding=1):
    x_out = (x - xf + 2*padding) / stride + 1
    y_out = (y - yf + 2*padding) / stride + 1

    if not (x_out.is_integer() and y_out.is_integer()):
        raise ValueError(
            "Invalid convolution filter_convolve({x}, {y}, {xf}, {yf}, "
            "{stride}, {padding})".format(x=x, y=y, xf=xf, yf=yf,
                                          stride=stride, padding=padding))
    x_out = int(x_out)
    y_out = int(y_out)

    # ...

Of course, there are other ways to do this check, but the approach here is
obvious and easy to comprehend.


Re: [Python-Dev] Deprecating float.is_integer()

2018-03-22 Thread Robert Smallshire
In the PR which implements is_integer() for int, the numeric tower, and
Decimal I elected not to implement it for Complex or complex. This was
principally because complex instances, even if they have an integral real
value, are not convertible to int and it seems reasonable to me that any
number for which is_integer() returns True should be convertible to int
successfully, and without loss of information.

  >>> int(complex(2, 0))
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: can't convert complex to int


There could be an argument that a putative complex.is_integral() should
therefore return False, but I expect that would get even less support than
the other suggestions in these threads.

*Robert Smallshire | *Managing Director
*Sixty North* | Applications | Consulting | Training
[email protected] | T +47 63 01 04 44 | M +47 924 30 350
http://sixty-north.com

On 22 March 2018 at 10:51, Kirill Balunov  wrote:

> I apologize for jumping into the discussion. Obviously in some situations
> it will be useful to check that a floating-point number is integral, but
> from the examples given it is clear that they are very rare. Why was the
> variant of including this functionality in the math module not considered
> at all? If the answer is consistency across the numeric tower - will it go
> for the complex type, and what would it mean there (there can be two points
> of view)?
>
> Is this functionality so often used and practical that it should be a
> method of float, int, ..., and not just an auxiliary function?
>
> p.s.: The same thoughts about `as_integer_ratio` discussion.
>
> With kind regards,
> -gdg
>
>


Re: [Python-Dev] Symmetry arguments for API expansion

2018-03-27 Thread Robert Smallshire
In the PR I've submitted, that's essentially what I'm doing for the default
Real.is_integer() implementation. The details differ slightly, in that I
rely on the int() constructor to call __trunc__(), rather than introduce a
new dependency on the math module.

On Tue, 27 Mar 2018 at 21:29, Chris Barker  wrote:

> I know this is all done, but for completeness’ sake:
>
> I just noticed math.trunc() and __trunc__().
>
> So wouldn’t the “correct” way to check for an integral value be something
> like:
>
> obj.__trunc__() == obj
>
> I don’t think this has any bearing on adding is_integer() methods to
> numeric objects, but might if we wanted to add a generic is_integer()
> function somewhere.
>
> In any case, I don’t recall it being mentioned in the conversation, so
> thought I’d complete the record.
>
> -CHB
>
>
>
>
>
> On Wed, Mar 21, 2018 at 8:31 PM Guido van Rossum  wrote:
>
>> On Wed, Mar 21, 2018 at 6:48 PM, Chris Barker 
>> wrote:
>>
>>> On Wed, Mar 21, 2018 at 4:12 PM, Guido van Rossum 
>>> wrote:
>>>
>>>> Thank you! As you may or may not have noticed in a different thread,
>>>> we're going through a small existential crisis regarding the usefulness of
>>>> is_integer() -- Serhiy believes it is not useful (and even an attractive
>>>> nuisance) and should be deprecated. OTOH the existence of
>>>> dec_mpd_isinteger() seems to validate to me that it actually exposes useful
>>>> functionality (and every Python feature can be abused, so that alone should
>>>> not
>>>>
>>> )
>>
-- 
*Robert Smallshire | *Managing Director
*Sixty North* | Applications | Consulting | Training
[email protected] | T +47 63 01 04 44 | M +47 924 30 350
http://sixty-north.com


Re: [Python-Dev] PEP 574 -- Pickle protocol 5 with out-of-band data

2018-03-28 Thread Robert Collins
One question..

On Thu., 29 Mar. 2018, 07:42 Antoine Pitrou,  wrote:

> ...
>

===
>
> Mutability
> --
>
> PEP 3118 buffers [#pep-3118]_ can be readonly or writable.  Some objects,
> such as Numpy arrays, need to be backed by a mutable buffer for full
> operation.  Pickle consumers that use the ``buffer_callback`` and
> ``buffers``
> arguments will have to be careful to recreate mutable buffers.  When doing
> I/O, this implies using buffer-passing API variants such as ``readinto``
> (which are also often preferrable for performance).
>
> Data sharing
> 
>
> If you pickle and then unpickle an object in the same process, passing
> out-of-band buffer views, then the unpickled object may be backed by the
> same buffer as the original pickled object.
>
> For example, it might be reasonable to implement reduction of a Numpy array
> as follows (crucial metadata such as shapes is omitted for simplicity)::
>
>class ndarray:
>
>   def __reduce_ex__(self, protocol):
>  if protocol == 5:
> return numpy.frombuffer, (PickleBuffer(self), self.dtype)
>  # Legacy code for earlier protocols omitted
>
> Then simply passing the PickleBuffer around from ``dumps`` to ``loads``
> will produce a new Numpy array sharing the same underlying memory as the
> original Numpy object (and, incidentally, keeping it alive)::


This seems incompatible with v4 semantics: there, a dumps-plus-loads round
trip is approximately a deep copy. Here it isn't - sometimes. Sometimes
it is.
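For reference, the mechanics in question stripped down to the stdlib pieces
(a minimal sketch against the API as later shipped in Python 3.8; exactly
what loads hands back, and whether it shares memory, is the crux of the
question above):

    import pickle

    # With protocol 5 plus a buffer_callback, the bytes behind a PickleBuffer
    # are handed to the callback instead of being copied into the stream, and
    # must be supplied again via buffers= when loading.
    blob = bytearray(b"x" * 100000)
    out_of_band = []
    payload = pickle.dumps(pickle.PickleBuffer(blob), protocol=5,
                           buffer_callback=out_of_band.append)
    assert len(payload) < 100          # the 100 kB stayed out of the stream
    restored = pickle.loads(payload, buffers=out_of_band)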

Other than that, I like it.

Rob



Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-19 Thread Robert Smallshire
If you restrict the idea to 'if' and 'while', why not render this using the
existing 'as' form for binding names, already used with 'except' and
'with':
while learner.get(static_hint) as points:
learner.feed(f(points))

The equivalent for 'if' helps with the regex matching case:

if re.match(r"...") as m:
print(m.group(1))

I considered proposing these two forms in a PEP a few years ago, but never
got around to it. To my eye, they fit syntactically into the language
as-is, without introducing new symbols, operators or keywords, are
consistent with existing usage, and address two relatively common causes of
displeasing Python code.

Robert


Re: [Python-Dev] PEP 575: Unifying function/method classes

2018-05-02 Thread Robert Bradshaw
This would be really useful for Cython, as well as a nice cleanup in
general (e.g. replacing 4 special cases with one check).

It seems the main concern is the user-visible change in types. If this is
determined to be too backwards-incompatible (I would be surprised if many
projects are impacted, but also surprised if none are - more data is
warranted), I think the main points of this proposal could be addressed by
introducing the common superclass(es) while keeping the "leaf" types of
builtin_function_or_method, etc. exactly the same, similar to the two-phase
proposal (though of course it'd be nice to split this up, as well as to
unify normal-method and c-defined-method if that's palatable).

- Robert




On Mon, Apr 30, 2018 at 8:55 AM, Jeroen Demeyer  wrote:

> On 2018-04-30 15:38, Mark Shannon wrote:
>
>> While a unified *interface* makes sense, a unified class hierarchy and
>> implementation, IMO, do not.
>>
>
> The main reason for the common base class is performance: in the bytecode
> interpreter, when we call an object, CPython currently has a special case
> for calling Python functions, a special case for calling methods, a special
> case for calling method descriptors, a special case for calling built-in
> functions.
>
> By introducing a common base class, we reduce the number of special cases.
> Second, we allow using this fast path for custom classes. With PEP 575, it
> is possible to create new classes with the same __call__ performance as the
> current built-in function class.
>
> Bound-methods may be callables, but they are not functions, they are a
>> pair of a function and a "self" object.
>>
>
> From the Python language point of view, that may be true but that's not
> how you want to implement methods. When I write a method in C, I want that
> it can be called either as unbound method or as bound method: the C code
> shouldn't see the difference between the calls X.foo(obj) or obj.foo(). And
> you want both calls to be equally fast, so you don't want that the bound
> method just wraps the unbound method. For this reason, it makes sense to
> unify functions and methods.
>
> IMO, there are so many versions of "function" and "bound-method", that a
>> unified class hierarchy and the resulting restriction to the
>> implementation will make implementing a unified interface harder, not
>> easier.
>>
>
> PEP 575 does not add any restrictions: I never claimed that all callables
> should inherit from base_function. Regardless, why would the common base
> class add restrictions? You can still add attributes and customize whatever
> you want in subclasses.
>
>
>
> Jeroen.


Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-06-16 Thread Robert Bradshaw
Having had some time to let this settle for a bit, I hope it doesn't
get abandoned just because it was too complicated to come to a
conclusion.

I'd like to attempt to summarize the main ideas as follows.

1) Currently the "fast call" optimization is implemented by checking
explicitly for a set of types (built-in functions, methods, method
descriptors, and functions). This is both ugly, as it requires listing
several cases, and it also locks any other types out of participating in
this protocol. This PEP proposes elevating this to a contract that other
types can participate in.

2) Inspect and friends hard-code checks on these non-extendable types,
again making it difficult for other types to be truly first-class
citizens, and breaking attempts at duck typing.

3) The current hierarchy of builtin_function_or_method vs. function
vs. instancemethod could use some cleanup for consistency and
extensibility.
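Concretely, the zoo in question looks like this today:

    def f(): pass

    class C:
        def m(self): pass

    print(type(f))             # <class 'function'>
    print(type(C.m))           # <class 'function'>
    print(type(C().m))         # <class 'method'>
    print(type(len))           # <class 'builtin_function_or_method'>
    print(type([].append))     # <class 'builtin_function_or_method'>
    print(type(list.append))   # <class 'method_descriptor'>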

PEP 575 solves all of these by introducing a common base class, but
they are somewhat separable. As for complexity, there are two metrics:
the complexity of the delta (e.g. more lines of code in trickier
places = worse, paid once) and that of the final result (less code, less
special-casing = better, paid as long as the code is in use). I tend
to think it's a good tradeoff to pay the former to improve the latter.

Jeroen, is this a fair summary? Are they fully separable?

Others, are these three valuable goals? At what cost (e.g. (3) may
have backwards compatibility concerns if taken as far as possible.)

- Robert


On Sun, May 20, 2018 at 1:15 PM, Jeroen Demeyer  wrote:
> On 2018-05-19 15:29, Nick Coghlan wrote:
>>
>> That's not how code reviews work, as their complexity is governed by the
>> number of lines changed (added/removed/modified), not just the number of
>> lines that are left at the end.
>
>
> Of course, you are right. I didn't mean literally that only the end result
> matters. But it should certainly be considered.
>
> If you only do small incremental changes, complexity tends to build up
> because choices which are locally optimal are not always globally optimal.
> Sometimes you need to do some refactoring to revisit some of that
> complexity. This is part of what PEP 575 does.
>
>> That said, "deletes more lines than it
>> adds" is typically a point strongly in favour of a particular change.
>
>
> This certainly won't be true for my patch, because there is a lot of code
> that I need to support for backwards compatibility (all the old code for
> method_descriptor in particular).
>
>
> Going back to the review of PEP 575, I see the following possible outcomes:
>
> (A) Accept it as is (possibly with minor changes).
>
> (B) Accept the general idea but split the details up in several PEPs which
> can still be discussed individually.
>
> (C) Accept a minimal variant of PEP 575, only changing existing classes but
> not changing the class hierarchy.
>
> (D) Accept some yet-to-be-written variant of PEP 575.
>
> (E) Don't fix the use case that PEP 575 wants to address.
>
>
> Petr Viktorin suggests (C). I am personally quite hesitant because that only
> adds complexity and it wouldn't be the best choice for the future
> maintainability of CPython. I also fear that this hypothetical PEP variant
> would be rejected because of that reason. Of course, if there is some
> general agreement that (C) is the way to go, then that is fine for me.
>
> If people feel that PEP 575 is currently too complex, I think that (B) is a
> very good compromise. The end result would be the same as what PEP 575
> proposes. Instead of changing many things at once, we could handle each
> class in a separate PEP. But the motivation of those mini-PEPs will still be
> PEP 575. So, in order for this to make sense, the general idea of PEP 575
> needs to be accepted: adding a base_function base class and making various
> existing classes subclasses of that.
>
>
>
> Jeroen.


Re: [Python-Dev] How do people like the early 3.5 branch?

2015-06-16 Thread Robert Collins
On 17 June 2015 at 11:26, Larry Hastings  wrote:
>
>
> A quick look through the checkin logs suggests that there's literally
> nothing happening in 3.6 right now.  All the checkins are merges.
>
> Is anyone expecting to do work in 3.6 soon?  Or did the early branch just
> create a bunch of make-work for everybody?
>
> Monitoring the progress of our experiment,

When I next get some round tuits, it will be on 3.6; I like the early
branch even though I haven't used it yet.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


[Python-Dev] Not getting mail from the issue tracker

2015-06-28 Thread Robert Collins
Firstly, a big sorry for all those unittest issues I haven't commented on.

Turns out I simply don't get mail from the issue tracker. :(.

Who should I speak to to try and debug this?

In the interim, if you want me to look at an issue please ping me on
IRC (lifeless) or mail me directly.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud


[Python-Dev] Freeze exception for http://bugs.python.org/issue23661 ?

2015-07-13 Thread Robert Collins
So unittest.mock regressed during 3.5, and I found out when I released
the mock backport.

The regression is pretty shallow - I've applied the fix to 3.6; it's a
one-liner and comes with a patch.

What's the process for getting this into 3.5? It's likely to affect a
lot of folk using mock (pretty much every OpenStack project got hit
with it when I released mock 1.1).

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Freeze exception for http://bugs.python.org/issue23661 ?

2015-07-13 Thread Robert Collins
On 14 July 2015 at 14:25, R. David Murray  wrote:
> On Tue, 14 Jul 2015 14:01:25 +1200, Robert Collins 
>  wrote:
>> So unittest.mock regressed during 3.5, and I found out when I released
>> the mock backport.
>>
>> The regression is pretty shallow - I've applied the fix to 3.6, its a
>> one-liner and comes with a patch.
>>
>> Whats the process for getting this into 3.5? Its likely to affect a
>> lot of folk using mock (pretty much every OpenStack project got git
>> with it when I released mock 1.1).
>
> 3.5 hasn't been released yet.  The patch ideally would have gone into
> 3.5 first, then been merged to 3.6.  As it is, you'll apply it to
> 3.5, and then do a null merge to 3.6.  It will get released in the
> next 3.5 beta.

What I'm unclear on is the approval process for doing ^.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Robert Collins
On 15 July 2015 at 02:06, Paul Moore  wrote:
> On 14 July 2015 at 14:51, Florian Bruhin  wrote:
>> * Steven D'Aprano  [2015-07-14 23:41:56 +1000]:
...
>> With the patch, an AttributeError is raised if you call something
>> starting with assert or assret instead.
>
> In retrospect, this seems like a mistake in the design. Much like
> namedtuple, mocks should carefully separate "actual" methods from
> mocked ones (in the case of namedtuple, from tuple element names). If
> Guido would let us use the time machine, I'd argue that maybe the
> special methods should be _assert_called_with (or something similar).

Well.

I'd go further and just separate the APIs.

mock.assert_called_with(a_mock, *args, **kwargs)

mock can know how to poke under the covers (e.g. using
__Mock_assert_called_with) without leaking it into the users objects.

> Given that it's way too late to take that path, I can see the value of
> trying to detect common errors. Particularly as the result of failing
> to do so is an apparently-passing test.

We can add a new API and gradually deprecate the old one. With the
presence of 'mock' as a rolling backport, this can be used by folk on
Python 3.3 and 3.4 so they don't get locked into one release of Python
or another.

> In effect, this patch is "reserving" all attributes starting with
> "assert" or "assret" as actual methods of the mock object, and not
> mocked attributes.

Yes, and thats ugly. OTOH it caught hundreds of useless tests in
OpenStack when this got ported into mock 1.1.0.

> Reserving "assert" seems fair.
> Reserving "assret" seems odd, as people say why just this
> mis-spelling? Is there any specific evidence that this typo happens
> more often "in the wild" than any other? Given that the original issue
> was raised by Michael Foord (the author of mock), I'd be inclined to
> assume that he'd encountered evidence to that effect.
>
> So ultimately I'm +1 on reserving "assert" (given that a more radical
> fix isn't possible) and +0 on adding "assret" (simply on the basis
> that someone more knowledgeable than me says it makes sense).

Since assret is solely a 'you may not use this' case, I think we can
remove the check for that quite trivially, at any point we want to.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Robert Collins
On 15 July 2015 at 07:39, Paul Moore  wrote:
> On 14 July 2015 at 20:27, Robert Collins  wrote:

>>> In effect, this patch is "reserving" all attributes starting with
>>> "assert" or "assret" as actual methods of the mock object, and not
>>> mocked attributes.
>>
>> Yes, and thats ugly. OTOH it caught hundreds of useless tests in
>> OpenStack when this got ported into mock 1.1.0.
>
> ... which I guess counts as strong evidence that this *is* a common
> typo, at least in certain contexts.

For clarity: None of the caught failures were assret as far as I know.
They were things like assert_called_onec_with, or assert_called.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Robert Collins
On 15 July 2015 at 09:41, A.M. Kuchling  wrote:
> On Tue, Jul 14, 2015 at 09:53:33AM -0700, Ethan Furman wrote:
>> Part of writing tests is making sure they fail (and for the right reason) -- 
>> proper testing of the tests would reveal such a typo.
>
> And there are other failure modes for writing tests that succeed but
> are not testing what you think.  For example, you might re-use the
> same method name:
>
>def test_connection(self):
># Never executed
>...
>
>... 200 lines and 10 other test methods later ...
>
>def test_connection(self):
>...
>
> Or misuse assertRaises:
>
>with self.assertRaises(TypeError):
>1 + "a"
># Second statement never reached
>[] + 'b'
>
> I don't think unittest can protect its users from such things.

It can't, but there is a sliding scale of API usability, and we should
try to be up the good end of that :).

http://sweng.the-davies.net/Home/rustys-api-design-manifesto

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Robert Collins
On 15 July 2015 at 10:05, Ethan Furman  wrote:
> On 07/14/2015 02:53 PM, Robert Collins wrote:
...
>>> I don't think unittest can protect its users from such things.
>>
>>
>> It can't, but there is a sliding scale of API usability, and we should
>> try to be up the good end of that :).
>
>
> I hope you're not suggesting that supporting misspellings, and thereby
> ruling out the proper use of an otherwise fine variable name, is at the good
> end of that scale?

I'm not supporting the misspelling thing - see my suggestion earlier
in this thread to move the mock assertions to standalone functions,
removing the bug in that area *entirely* and eventually removing the
check for method names starting with assert from mock entirely.

What I am doing is rejecting the argument that because we can't fix
every mis-use users might make, we therefore should not fix the cases
where we can fix it.

For clarity, I think we should:
 - remove the assret check, it is I think spurious.
 - add a set of functions to the mock module that should be used in
preference to Mock.assert*
 - mark the Mock.assert* functions as PendingDeprecation
 - in 3.6 move the PendingDeprecation to Deprecated
 - in 3.7 remove the Mock.assert* functions and the check for method
names beginning with assert entirely.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Consenting adults considered beneficial [was: How far to go with user-friendliness]

2015-07-14 Thread Robert Collins
On 15 July 2015 at 15:00, Stephen J. Turnbull  wrote:
> Robert Collins writes:
>
>  > What I am doing is rejecting the argument that because we can't fix
>  > every mis-use users might make, we therefore should not fix the cases
>  > where we can fix it.
>
> This involves a value judgment, every time a new fix is proposed, as
> to whether it's a mis-use that deserves fixing or a permitted-to-
> consenting-adults behavior.  IMO, it's a bad idea to institutionalize
> that kind of bikeshedding, especially when such "fixes" involve
> overriding user choices that are permitted everywhere else.

I'm thoroughly confused by this.

> Arbitrary choices that *some* users want to be protected from ("stop
> me before I 'assret' again!") belong in linters, not in Python or the
> stdlib.

I agree with this.

> To be frank, I think you have the Pythonic approach exactly backwards
> here (though I am no authority on Pythonicity).  ISTM that in general
> Python takes the attitude that if a particular "mis-use" seems to be
> common, then we should figure out what it is about Python that
> encourages that "mistake", or makes an otherwise arbitrary user choice
> into a "mistake", and fix Python -- not restrict users.
>
> Of course that's not always possible, but that's the first choice
> AIUI.

And these two paragraphs confuse me again.

Lets be really specific here.

Mock today has the following behaviour:
x = Mock()
x.foo()
x.bar()
...

all will just work and are mock methods that magically appear on demand.

And it includes some methods:
x.assert_called_with()

which are not mock methods, can't be mocked, and if you make a typo
you got *no* signal back to say that you'd messed up, until the patch
that added assert_ and assret_ prefix checking was added.

Which part of that API is Pythonic?

I rejected an argument that just because some APIs are are
intrinsically able to be misused, that we should not try to write
better APIs.

I then gave an plan, for this case, which appears to have been
enthusiastically recieved by a bunch of long-time Python devs.

In what way is that unPythonic behaviour or design?

Confusedly-yrs.
Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Robert Collins
On 15 July 2015 at 12:59, Nick Coghlan  wrote:
>
> There is zero urgency here, so nothing needs to change for 3.5.
> Robert's plan is a fine one to propose for 3.6 (and the PyPI mock
> backport).

Right - the bad API goes back to the very beginning. I'm not planning
on writing the new thing I sketched, though it should be straight
forward if someone wishes to do so. I'll probably file a ticket in the
tracker asking for it once this thread quiesces.

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-15 Thread Robert Collins
On 15 July 2015 at 19:17, Antoine Pitrou  wrote:
> On Wed, 15 Jul 2015 10:22:14 +1200
> Robert Collins  wrote:
>>
>> For clarity, I think we should:
>>  - remove the assret check, it is I think spurious.
>>  - add a set of functions to the mock module that should be used in
>> preference to Mock.assert*
>>  - mark the Mock.assert* functions as PendingDeprecation
>>  - in 3.6 move the PendingDeprecation to Deprecated
>>  - in 3.7 remove the Mock.assert* functions and the check for method
>> names beginning with assert entirely.
>
> I think removing them is a bit too strong. There's software out there
> that would like to have cross-version-compatible test suites.

Which they can do using 'mock'.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-16 Thread Robert Collins
On 17 Jul 2015 08:34, "Michael Foord"  wrote:
>
>
>
> On Wednesday, 15 July 2015, Robert Collins 
wrote:
> > On 15 July 2015 at 12:59, Nick Coghlan  wrote:
> >>
> >> There is zero urgency here, so nothing needs to change for 3.5.
> >> Robert's plan is a fine one to propose for 3.6 (and the PyPI mock
> >> backport).
> >
> > Right - the bad API goes back to the very beginning. I'm not planning
>
>
> I disagree it's a bad api. It's part of why mock was so easy to use and
part of why it was so successful. With the new check for non-existent
assert methods it's no longer dangerous and so doesn't need fixing.

Could you help me understand how the presence of assert... on the mock vs
in the mock module affected ease of use?

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-17 Thread Robert Collins
On 18 July 2015 at 15:19, Nick Coghlan  wrote:
>

> This change *doesn't really matter* in the grand scheme things, but would
> require a non-zero amount of time and effort to reverse, so unless you're
> offering one of the unittest maintainers a contract gig to change it back,
> let it go.

s/unittest/mock :). But yes.

Currently only Michael is listed under 'experts' in the devguide for
unittest.mock. I've taken up the baton to keep the backport up to
date, to keep the ecosystem healthy, but I've no specific plans to
hack on mock per se.  I think we'd probably benefit from more
names there :) I know Kushal and Berker have been doing things in the
stdlib.

But there's a tonne of important work to do before worrying about
tweaking a patch which was peer reviewed and went through the entirely
normal development process to address a critical usability issue with
mock. Which, judging from this thread, a bunch of folk don't actually
understand in the first place. [I mean no disrespect here, but there
have been multiple explanations trying to cover the distinction about
what is actually going on, and I'm well over them].

So - folk interested in unittest.mock. If you want to help, going
through the 200 open issues in https://github.com/testing-cabal/mock,
starting with the oldest, assessing whether they are:
 - still relevant
 - present only in the backport (leave them where they are)
 - or in 3.6 as well (in which case open a new ticket at
https://bugs.python.org/ linked to the github issue, and either close
the github issue or label it upstream (or both)).

THAT would be valuable, and improve users experience of unittest.mock
[and mock] much more than making a_mock.assret_called_once *not
error*.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do we tell if we're helping or hindering the core development process? (was Re: How far to go with user-friendliness)

2015-07-21 Thread Robert Collins
On 21 July 2015 at 00:34, Ben Finney  wrote:
> Paul Moore  writes:
>
>> Again, I'm sorry to pick on one sentence out of context, but it cut
>> straight to my biggest fear when doing a commit (on any project) -
>> what if, after all the worrying and consideration I put into doing
>> this commit, people disagree with me (or worse still, I made a
>> mistake)? Will I be able to justify what I decided?
>
> That seems quite healthy to me. On a collaborative project with effects
> far beyond oneself, yes, any change *should* be able to be justified
> when challenged.

Depending on what you mean by justification , this leaves no leeway
for hunches, feelings, intuition, or grey area changes.

It's also a terrible way to treat people that are contributing their
own time and effort: assume good faith is a much better starting
point.

I think its reasonable to say that any change should be open to post
hoc discussion. Thats how we learn, but justification and challenging
is an adversarial framing, and one I'm flatly uninterested in. Life is
too short to volunteer on adversarial activies.

> That isn't a mandate to challenge every change, of course. It does mean
> that every change should be considered in light of “Can I justify
> this, if challenged?”
>
> So what you describe sounds like a healthy barrier: one which works to
> keep out unjustifiable changes.

I don't understand why thats healthy or useful.

> What is needed is to have both that *and* the support of the community
> so it's not a barrier to the *contributors*. The contributors should not
> feel excluded merely because some of their changes might need to be.

I don't think that contributors are a problem here. But, I'm going to
dig into that more in reply to Nick.

>> Hmm, maybe I'd better hold off and let someone else make the
>> decision...
>
> What of the (obvious, to me) option to retain the authority to make the
> decision, but take the doubt as a sign that one should consult with
> others before making the decision?
>
> That is, there's no need to feel that one shouldn't make the decision.
> But perhaps one shouldn't make it solely on one's own experience or
> insight. Get others involved, even non-committers, and discuss it, and
> understand the issue better. With that improved basis, then make the
> decision.

As a case study, this discussion where something like 90% of the
kibbitzing demonstrated a clear lack of understanding of the very
behaviour of Mock, *and* the change in question was discussed, *prior*
to it being done, *and* users have welcomed itI must conclude that
either:
 - this discussion was the exception to prove your general rule (but
why would we derive that general rule from this discussion)
 - the general rule isn't general.

> Am I naive to think that's desirable for PYthon core committers?

What's the goal here: what actual problem are we trying to solve for?

More contributors? A better Python more people can use? Keeping up
with the contributions we've already received but not actioned? [...]

Like: pick one thing. What we /really/ want to achieve, then lets look
at what will let us get there.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do we tell if we're helping or hindering the core development process? (was Re: How far to go with user-friendliness)

2015-07-21 Thread Robert Collins
On 21 July 2015 at 19:40, Nick Coghlan  wrote:
> On 20 July 2015 at 22:34, Ben Finney  wrote:
>> Paul Moore  writes:
>>
>>> Again, I'm sorry to pick on one sentence out of context, but it cut
>>> straight to my biggest fear when doing a commit (on any project) -
>>> what if, after all the worrying and consideration I put into doing
>>> this commit, people disagree with me (or worse still, I made a
>>> mistake)? Will I be able to justify what I decided?
>>
>> That seems quite healthy to me. On a collaborative project with effects
>> far beyond oneself, yes, any change *should* be able to be justified
>> when challenged.
>
> No, that's not how this works: if folks are thinking that being a
> Python user, or even a CPython core developer, means that we're
> entitled to micromanage core developers by demanding extensive
> explanations for any arbitrary commit we choose, they're thoroughly
> mistaken. Only Guido has that privilege, and one of the reasons he's
> as respected as he is is his willingness to trust the experience and
> expertise of others and only rarely exercise his absolute authority.

I wouldn't even agree that Guido is entitled to micromanage us. He is
certainly entitled to demand explanations etc if he feels appropriate
- but if that turned into micromanaging, I think we'd be in a very bad
place.

> All of this is why the chart that I believe should be worrying people
> is the topmost one on this page:
> http://bugs.python.org/issue?@template=stats
>
> Both the number of open issues and the number of open issues with
> patches are steadily trending upwards. That means the bottleneck in
> the current process *isn't* getting patches written in the first
> place, it's getting them up to the appropriate standards and applied.

Perhaps.

> Yet the answer to the problem isn't a simple "recruit more core
> developers", as the existing core developers are *also* the bottleneck
> in the review and mentoring process for *new* core developers.

sidebar: So, anyone here familiar with theory of constraints - e.g.
'The Goal', 'Critical Chain' etc? Might be interesting to do some
brainstorming in that shape if folk are interest.

Having open idle patches is I think generally acknowledged as a poor
thing. I think of them as inventory in a manufacturing plant: they
take up space, they take effort to create, they offer no value [until
they're actually shipped to users], and they have direct negatives
(tracker is harder to work with due to the signal-to-noise ratio,
perception of the project suffers, contributors make the rational
discussion not to contribute further...).

Lets assume that our actual goal is to ship new Python versions,
offering more and better things to users.

AFAIK we're not lacking any infrastructure resources - we have enough
buildbots, we have freely available compilers for all platforms.

> In my view, those stats are a useful tool we can use to ask ourselves
> "Am I actually helping with this contribution, or at the very least,
> not causing harm?":

I like this approach  :). But - can we go further, can we actively
protect core committer time such that they waste less of it? Adding
core committers won't help if the problem isn't the number of
committers, but rather the amount of the time that they can devote to
Python that actually gets spent on committer-activities.

> * helping core developers that have time to work on "CPython in
> general" rather than specific projects of interest to them to focus
> their attention more effectively may help make those stats better (and
> it would be even better if we could credit such triaging efforts
> appropriately)

Iterating on a patch someone else put up can help. Making sure its
passing tests, trying it with ecosystem projects and giving feedback.

> * exploring ways to extract contribution metrics from Roundup so we
> can have a more reliable detection mechanism for active and consistent
> contributors than the "Mark 1 core developer memory, aka the
> notoriously unreliable human brain" may help make those stats better

OTOH, make sure that what we measure provokes things that help our
goal :). Double edged sword this.

> Make no mistake, sustainable open source development is a *genuinely
> hard problem*. We're in a much better position than most projects
> (being one of the top 5 programming languages in the world has its
> benefits), but we're still effectively running a giant multinational
> collaborative software development project with close to zero formal
> management support. While their are good examples we can (and are)
> learning from, improving the support structures for an already wildly
> successful open source project without alienating existing
> contributors isn't a task that comes with an instruction manual :)

+1

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-d

Re: [Python-Dev] How far to go with user-friendliness

2015-07-21 Thread Robert Collins
On 22 July 2015 at 03:47, Tim Golden  wrote:
> On 20/07/2015 19:48, Christie Wilson wrote:
>> I am terrified of replying to this thread since so many folks on it seem
>> unhappy that it is continuing, but I want to +1 what Erik said.
>
> Don't be terrified :)
>
> But do understand that, in general, and especially for this
> already-noisy thread, the right place for arguments supporting a change
> or a reversion is usually on the issue tracker:
>
>  https://bugs.python.org/
>
> I don't know whether Robert's opened an issue to propose his solution,
> but if not, you could open one and add him as nosy.

I did: http://bugs.python.org/issue24651

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Reminder: Python 3.5 beta 4 is tagged in one week

2015-07-21 Thread Robert Collins
Cool. http://bugs.python.org/issue21750 is in a bad state right now.

I landed a patch to fix it, which when exposed to users had some
defects. I'm working on a better patch now, but need to either roll
the prior patch completely back, or get the new one landed before the
next beta. I hope to have that up for review later today {fingers
crossed} - will that be soon enough, or should I look up how to easily
revert stuff out with hg?

-Rob

On 18 July 2015 at 22:24, Larry Hastings  wrote:
>
>
> Approximately a week from when I post this, I'll be tagging Python 3.5 beta
> 4, which is the last beta before we go to release candidates.  Please wind
> up all your bug fixes soon, I'd really like checkins to 3.5 to stop soon.
>
> And a minor reminder: when we hit Release Candidate 1, I'll be switching the
> canonical repo for 3.5 to a public Bitbucket repo.  Any bug fixes that go in
> between RC 1 and final will only be merged using Bitbucket "pull requests".
>
> The new workflow experiment continues,
>
>
> /arry
>
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robertc%40robertcollins.net
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Reminder: Python 3.5 beta 4 is tagged in one week

2015-07-21 Thread Robert Collins
On 22 July 2015 at 05:08, Larry Hastings  wrote:
>
>
> On 07/21/2015 06:35 PM, Robert Collins wrote:
>
> Cool. http://bugs.python.org/issue21750 is in a bad state right now.
>
> I landed a patch to fix it, which when exposed to users had some
> defects. I'm working on a better patch now, but need to either roll
> the prior patch completely back, or get the new one landed before the
> next beta. I hope to have that up for review later today {fingers
> crossed} - will that be soon enough, or should I look up how to easily
> revert stuff out with hg?
>
>
> If you want to undo it, "hg backout" is the command you want.  In general
> it's best to not check in broken stuff.

Thanks. And yes, naturally - we didn't realise it was broken. Passing
tests != fit for purpose.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Reminder: Python 3.5 beta 4 is tagged in one week

2015-07-23 Thread Robert Collins
On 22 July 2015 at 08:07, Robert Collins  wrote:
> On 22 July 2015 at 05:08, Larry Hastings  wrote:
>>
>>
>> On 07/21/2015 06:35 PM, Robert Collins wrote:
>>
>> Cool. http://bugs.python.org/issue21750 is in a bad state right now.
>>
>> I landed a patch to fix it, which when exposed to users had some
>> defects. I'm working on a better patch now, but need to either roll
>> the prior patch completely back, or get the new one landed before the
>> next beta. I hope to have that up for review later today {fingers
>> crossed} - will that be soon enough, or should I look up how to easily
>> revert stuff out with hg?
>>
>>
>> If you want to undo it, "hg backout" is the command you want.  In general
>> it's best to not check in broken stuff.
>
> Thanks. And yes, naturally - we didn't realise it was broken. Passing
> tests != fit for purpose.

21750 is now sorted out in the cpython repo.

I have a separate question for you - issue2091 has a good patch on it,
but would you like it added to 3.5?

It makes a broken combination of file modes - rU+ - a clean error, and
tweaks the existing exception text for U + writing modes.

-Rob




-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Burning down the backlog.

2015-07-25 Thread Robert Collins
On 21 July 2015 at 19:40, Nick Coghlan  wrote:

> All of this is why the chart that I believe should be worrying people
> is the topmost one on this page:
> http://bugs.python.org/issue?@template=stats
>
> Both the number of open issues and the number of open issues with
> patches are steadily trending upwards. That means the bottleneck in
> the current process *isn't* getting patches written in the first
> place, it's getting them up to the appropriate standards and applied.
> Yet the answer to the problem isn't a simple "recruit more core
> developers", as the existing core developers are *also* the bottleneck
> in the review and mentoring process for *new* core developers.

Those charts doesn't show patches in 'commit-review' -
http://bugs.python.org/issue?%40columns=title&%40columns=id&stage=5&%40columns=activity&%40sort=activity&status=1&%40columns=status&%40pagesize=50&%40startwith=0&%40sortdir=on&%40action=search

There are only 45 of those patches.

AIUI - and I'm very new to core here - anyone in triagers can get
patches up to commit-review status.

I think we should set a goal to keep inventory low here - e.g. review
and either bounce back to patch review, or commit, in less than a
month. Now - a month isn't super low, but we have lots of stuff
greater than a month.

For my part, I'm going to pick up more or less one thing a day and
review it, but I think it would be great if other committers were to
also to do this: if we had 5 of us doing 1 a day, I think we'd burn
down this 45 patch backlog rapidly without significant individual
cost. At which point, we can fairly say to folk doing triage that
we're ready for patches :)

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do we tell if we're helping or hindering the core development process?

2015-07-29 Thread Robert Collins
On 29 July 2015 at 02:17, Ben Finney  wrote:
> Paul Moore  writes:
>
>> On 28 July 2015 at 13:35, Ben Finney  wrote:
>> > People can, do, and probably must make many decisions through
>> > non-rational processes. I don't propose to change that.
>>
>> Good.
>>
>> > Choices can be made that, when challenged, lack compelling rational
>> > justification. I do propose that such a challenge should be taken as a
>> > healthy desire to improve Python, not a personal attack.
>>
>> While that is fine, you appear unwilling to accept the possibility
>> that people may not have the time/energy to develop a detailed
>> rational justification for a change that they have made, and demanding
>> that they do so when they are offering the time they do give on a
>> volunteer basis, is what I claim is unacceptable.
>
> I've said many times now that's not what I'm advocating.
>
> I've made a clear distinction between the need to *be able to* justify a
> change, versus arbitrary demands to do so by arbitrary members.
>
> The latter is what you're arguing against, and of course I agree. I've
> never advocated that.

I'm arguing against the former. Being able to survive a crowd sourced
grilling on any arbitrary change would be quite the chilling effect,
and its a level of backpressure that the committers who engaged in
this discussion have rejected. Some have rejected contributing *at
all* as a result of the discussion. Others, like me, are telling you
that you're wrong, that we don't accept that we can be called up for
any odd commit and asked to justify it to anyone.

There is a social contract around our commits - and it does permit
enquiry and discussion, but not with the degree of heat or antagonism
that was present in this thread.

AND

Not by uninformed folk: If you're going to second guess the onus is on
you to educate yourself about the issue first. This particular one,
for instance, requires going back through the history of mock right to
its founding in 2007, and walking forward through the merge into the
stdlib in Python 3,3 (because its popular) and finally the realisation
that large chunks of peoples code were silently not testing what was
desired and the fixing of that. Discussing the thing we discussed *in
that context* is a very different discussion to what we had, where
every second message was someone misunderstanding what the issue is
and chiming in to say that this is surprising and unPythonic and
against the Zen and oh my.

>> The issue is not one of your motives in asking for explanations - it's
>> the implication that you are entitled to require others to *provide*
>> those explanations, to whatever level of detail *you* require.
>
> Hopefully this repetition is enough: I do not claim any such entitlement.

If you don't claim such entitlement, who does? Whose entitlement are
you arguing for? If its Guido's, I think we can stop arguing - sure,
he is entitled to ask for a lot, but I don't want to argue about what
entitlements someone else has: they can argue on their own.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] updating ensurepip to include wheel

2015-08-02 Thread Robert Collins
So, pip 7.0 depends on the wheel module for its automatic wheel
building, and installing pip from get-pip.py, or the bundled copy in
virtualenvs will automatically install wheel.

But ensurepip doesn't bundle wheel, so we're actually installing a
slightly crippled pip 7.1, which will lead to folk having a poorer
experience.

Is this a simple bug, or do we need to update the PEP?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] updating ensurepip to include wheel

2015-08-06 Thread Robert Collins
On 6 August 2015 at 15:04, Nick Coghlan  wrote:
> On 6 August 2015 at 09:29, Victor Stinner  wrote:
>> Le 5 août 2015 17:12, "Nick Coghlan"  a écrit :
>>> A hard dependency on wheel wouldn't fit into the same category - when
>>> folks are using a build pipeline to minimise the installation
>>> footprint on production systems, the wheel package itself has no
>>> business being installed anywhere other than developer systems and
>>> build servers.
>>
>> I'm quite sure that virtualenv is used to deploy python on production.
>>
>> Pip 7 automatically creates wheel packages when no build wheel package is
>> available on PyPI. Examples numpy and any pure python package only providing
>> a tarball.
>>
>> For me it makes sense to embed wheel in ensurepip and to install wheel on
>> production systems (to install pacakes, not to build them).
>
> pip can install from wheels just fine without the wheel package being
> present - that's how ensurepip already works.

pip can also do this without setuptools being installed; yet we bundle
setuptools with pip in ensurepip.

I am thus confused :).

When I consider the harm to a production pipeline that using
setuptools can cause (in that it triggers easy_install, and
easy_install has AFAIK none of the security improvements pip has added
over the last couple years), I find the acceptance of setuptools,
but non-acceptance of wheel flummoxing.

> The wheel package itself is only needed in order to support the
> setuptools "bdist_wheel" command, which then allows pip to implicitly
> cache wheel files when installing from an sdist.
>
> Installing from sdist in production is a *fundamentally bad idea*,
> because it means you have to have a build toolchain on your production
> servers. One of the benefits of the wheel format and projects like
> devpi is that it makes it easier to discourage people from doing that.
> Even without getting into Linux containers and tools like pyp2rpm,
> it's also possible to create an entire virtualenv on a build server,
> bundle that up as an RPM or DEB file, and use the system package
> manager to do the production deployment.

Yes: but the logic chain from 'its a bad idea' to 'we don't include
wheel but we do include setuptools' is the bit I'm having a hard time
with.

> However, production Linux servers aren't the only case we need to care
> about, and there's a strong user experience argument to be made for
> providing wheel by default upstream, and telling downstream
> redistributors that care about the distinction to do the necessary
> disentangling to make it easy to have "build dependency free"
> production images.
>
> We've learned from experience that things go far more smoothly if we
> thrash out those kinds of platform dependent behavioural differences
> *before* we inflict them on end users, rather than having downstream
> redistributors tackle foreseeable problems independently of both each
> other and upstream :)
>
> Hence my request for a PEP - I can see why adding wheel to the
> ensurepip bundle would be a good idea for upstream, but I can also see
> why it's a near certainty downstream Linux distros (including Fedora)
> would take it out again in at least some situations to better meet the

Does Fedora also take out setuptools? If not, why not?

> needs of *our* user base. (Since RPM has weak dependency support now,
> we'd likely make python-wheel a "Recommends:" dependency, rather than
> a "Requires:" dependency - still installed by default, but easy to
> omit if not wanted or needed)

So, a new PEP?

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] updating ensurepip to include wheel

2015-08-06 Thread Robert Collins
On 7 August 2015 at 03:28, Donald Stufft  wrote:
>
>> On Aug 6, 2015, at 5:04 AM, Robert Collins  wrote:
>>
>> Yes: but the logic chain from 'its a bad idea' to 'we don't include
>> wheel but we do include setuptools' is the bit I'm having a hard time
>> with.
>
>
> In my opinion, it’s the severity of how crippled their experience is without 
> that particular thing installed.
>
> In the case of wheel not being installed they lose the ability to have an 
> implicit wheel cache and to run ``pip wheel``. This makes pip less good but, 
> unless they are running ``pip wheel`` everything is still fully functioning.
>
> In the case of setuptools they lose the ability to ``pip install`` when there 
> isn’t a wheel available and the ability to run ``pip wheel``. This is making 
> pip completely unusable for a lot of people, and if we did not pre-install 
> setup tools the number one thing people would do is to ``pip install 
> setuptools``, most likely while bitching under their breath about the command 
> that just failed because they tried to install from sdist.
>
> So it’s really just “how bad are we going to break people’s expectations”.

So - I was in a talk at PyCon AU about conda[*], and the author
believed they were using the latest pip with all the latest caching
features, but their experience (16 minute installs) wasn't that.

I dug into that with them after the talk, and it was due to Conda not
installing wheel by default.

Certainly the framing of ensurepip as 'this installs pip' is going to
be confusing and misleading if it doesn't install pip the way
get-pip.py (or virtualenv) install pip, leading to confusion such as
that.

Given the inconsequential impact of installing wheel, I see only harm
in holding it back, and only benefits in adding it. All the harm from
having source builds comes in with setuptools ;).

-Rob

*) https://www.youtube.com/watch?v=Fqknoni5aX0

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP needed for http://bugs.python.org/issue9232 ?

2015-08-10 Thread Robert Collins
So, there's  a patch on issue 9232 - allow trailing commas in function
definitions - but there's been enough debate that I suspect we need a
PEP.

Would love it if someone could correct me, but I'd like to be able to
either categorically say 'no' and close the ticket, or 'yes and this
is what needs to happen next'.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Burning down the backlog.

2015-08-17 Thread Robert Collins
On 26 July 2015 at 07:28, Robert Collins  wrote:
> On 21 July 2015 at 19:40, Nick Coghlan  wrote:
>
>> All of this is why the chart that I believe should be worrying people
>> is the topmost one on this page:
>> http://bugs.python.org/issue?@template=stats
>>
>> Both the number of open issues and the number of open issues with
>> patches are steadily trending upwards. That means the bottleneck in
>> the current process *isn't* getting patches written in the first
>> place, it's getting them up to the appropriate standards and applied.
>> Yet the answer to the problem isn't a simple "recruit more core
>> developers", as the existing core developers are *also* the bottleneck
>> in the review and mentoring process for *new* core developers.
>
> Those charts doesn't show patches in 'commit-review' -
> http://bugs.python.org/issue?%40columns=title&%40columns=id&stage=5&%40columns=activity&%40sort=activity&status=1&%40columns=status&%40pagesize=50&%40startwith=0&%40sortdir=on&%40action=search
>
> There are only 45 of those patches.
>
> AIUI - and I'm very new to core here - anyone in triagers can get
> patches up to commit-review status.
>
> I think we should set a goal to keep inventory low here - e.g. review
> and either bounce back to patch review, or commit, in less than a
> month. Now - a month isn't super low, but we have lots of stuff
> greater than a month.
>
> For my part, I'm going to pick up more or less one thing a day and
> review it, but I think it would be great if other committers were to
> also to do this: if we had 5 of us doing 1 a day, I think we'd burn
> down this 45 patch backlog rapidly without significant individual
> cost. At which point, we can fairly say to folk doing triage that
> we're ready for patches :)

We're down to 9 such patches, and reading through them today there are
none that I felt comfortable moving forward: either their state is
unclear, or they are waiting for action from a *specific* core.

However - 9 isn't a bad number for 'patches that the triagers think
are ready to commit' inventory.

So yay!. Also - triagers, thank you for feeding patches through the
process. Please keep it up :)

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Request for pronouncement on PEP 493 (HTTPS verification backport guidance)

2015-11-25 Thread Robert Collins
On 26 November 2015 at 08:57, Barry Warsaw  wrote:
> There's a lot to process in this thread, but as I see it, the issue breaks
> down to these questions:
>
> * How should PEP 493 be implemented?
>
> * What should the default be?
>
> * How should PEP 493 be worded to express the right tone to redistributors?
>
> Let me take on the implementation details here.
>
> On Nov 24, 2015, at 04:04 PM, M.-A. Lemburg wrote:
>
>>I would still find having built-in support for the recommendations
>>in the Python stdlib a better approach
>
> As would I.

For what its worth: a PEP telling distributors to patch the standard
library is really distasteful to me.

We've spent a long time trying to build close relations such that when
something doesn't work distributors can share their needs with us and
we can make Python out of the box be a good fit. This seems to fly in
the exact opposite direction: we're explicitly making it so that
Python builds on these vendor's platforms will not be the same as you
get by checking out the Python source code.

Ugh.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] pypi simple index

2015-12-17 Thread Robert Collins
On 18 December 2015 at 06:13, Carlos Barera  wrote:

> Hi,
>
> I'm using install_requires in setup.py to specify a specific package my
> project is dependant on.
> When running python setup.py install, apparently the simple index is used
> as an older package is taken from pypi. While
>

What's happening here is that easy-install is triggering - which does not
support wheels. Use 'pip install .' instead.


> in https://pypi.python.org/pypi, there's a newer package.
> When installing directly using pip, the latest package is installed
> successfully.
> I noticed that the new package is only available as a wheel and older
> versions of setup tools won't install wheels for install_requires.
> However, upgrading setuptools didn't help.
>
> Several questions:
> 1. What's the difference between the pypi simple index and the general
> pypi index?
>

The '/simple' API is for machine consumption, /pypi is for humans, other
than that there should be not be any difference.


> 2. Why is setup.py defaulting to the simple index?
>

Because it is the only index :).


> 3. How can I make the setup.py triggered install use the main pypi index
> instead of simple
>

You can't - the issue is not the index being consulted, but your use of
'python setup.py install' which does not support  wheels.

Cheers,
Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Update PEP 7 to require curly braces in C

2016-01-17 Thread Robert Collins
+1 from me on requiring them.

On 18 January 2016 at 11:19, Brett Cannon  wrote:
>
>
> On Sun, 17 Jan 2016, 13:59 Ethan Furman  wrote:
>>
>> On 01/17/2016 11:10 AM, Brett Cannon wrote:
>>
>> > https://www.imperialviolet.org/2014/02/22/applebug.html. Skipping the
>> > curly braces is purely an aesthetic thing while leaving them out can
>> > lead to actual bugs.
>>
>> Not sure what that sentence actually says, but +1 on making them
>> mandatory.
>
>
>
> Yeah, bad phrasing on my part. What I meant to say is leaving them off is an
> aesthetic thing while requiring them is a bug prevention thing. When it
> comes to writing C code I always vote for practicality over aesthetics.
>
>>
>> --
>> ~Ethan~
>> ___
>> Python-Dev mailing list
>> [email protected]
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>
>
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robertc%40robertcollins.net
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Time for a change of random number generator?

2016-02-11 Thread Robert Kern

On 2016-02-11 00:08, Greg Ewing wrote:

The Mersenne Twister is no longer regarded as quite state-of-the art
because it can get into states that produce long sequences that are
not very random.

There is a variation on MT called WELL that has better properties
in this regard. Does anyone think it would be a good idea to replace
MT with WELL as Python's default rng?

https://en.wikipedia.org/wiki/Well_equidistributed_long-period_linear


There was a side-discussion about this during the secrets module proposal 
discussion.


WELL would not be my first choice. It escapes the excess-0 islands faster than 
MT, but still suffers from them. More troubling to me is that it is a linear 
feedback shift register, like MT, and all LFSRs quickly fail the linear 
complexity test in BigCrush.


xorshift* shares some of these flaws, but is significantly stronger and 
dominates WELL in most (all?) relevant dimensions.


  http://xorshift.di.unimi.it/

I'm favorable to the PCG family these days, though xorshift* and Random123 are 
reasonable alternatives.


  http://www.pcg-random.org/
  https://www.deshawresearch.com/resources_random123.html

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Time for a change of random number generator?

2016-02-12 Thread Robert Kern

On 2016-02-12 04:15, Tim Peters wrote:

[Greg Ewing ]

The Mersenne Twister is no longer regarded as quite state-of-the art
because it can get into states that produce long sequences that are
not very random.

There is a variation on MT called WELL that has better properties
in this regard. Does anyone think it would be a good idea to replace
MT with WELL as Python's default rng?


I don't think so, because I've seen no groundswell of discontent about
the Twister among Python users.  Perhaps I'm missing some?


Well me, but I'm mostly focused on numpy's PRNG, which is proceeding apace.

  https://github.com/bashtage/ng-numpy-randomstate

While I am concerned about MT's BigCrush failures, what makes me most 
discontented is not having multiple guaranteed-independent streams.



It's prudent to wait for someone else to find the early surprises in
PCG and Random123 too ;-)


Quite so!

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] thoughts on backporting __wrapped__ to 2.7?

2016-04-04 Thread Robert Collins
I'm working on teaching funcsigs - the backport of inspect.signature -
better handling for wrapped functions, and the key enabler to do that
is capturing the wrapped function in __wrapped__. I'm wondering what
folks thoughts are on backporting that to 2.7 - seems cleaner than
monkeypatching functools.wraps, which would tend to be subject to
import ordering races and general ick. I'll likely prep such a
monkeypatch for folk that are stuck on older versions of 2.7 anyhow...
so its not a huge win...

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] thoughts on backporting __wrapped__ to 2.7?

2016-04-05 Thread Robert Collins
Sadly that has the ordering bug of assigning __wrapped__ first and appears
a little unmaintained based on the bug tracker :(
On 5 Apr 2016 8:10 PM, "Victor Stinner"  wrote:

> See https://pypi.python.org/pypi/functools32 for the functools backport
> for Python 2.7.
>
> Victor
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Challenge: Please break this! (a.k.a restricted mode revisited)

2016-04-11 Thread Robert Collins
On 11 April 2016 at 13:49, Tres Seaver  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 04/10/2016 06:31 PM, Jon Ribbens wrote:
>> Unless someone knows a way to get to an object's __dict__ or its type
>> without using vars() or type() or underscore attributes...
>
> Hmm, 'classmethod'-wrapped functions get passed the type.

yeah, but to access that you need to assign the descriptor to the type
- circular loop. If you can arrange that assignment its easy:


thetype = []
class gettype:
def __get__(self, obj, type=None):
thetype.append((obj, type))
return None

classIwant.query = gettype()
classIwant().query
thetype[0][1]...

but you've already gotten to classIwant there.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] thoughts on backporting __wrapped__ to 2.7?

2016-04-11 Thread Robert Collins
On 6 April 2016 at 15:03, Stephen J. Turnbull  wrote:
> Robert Collins writes:
>
>  > Sadly that has the ordering bug of assigning __wrapped__ first and appears
>  > a little unmaintained based on the bug tracker :(
>
> You can fix two problems with one patch, then!
>

Not really - taking over a project is somewhat long winded; it would
be centralising yet another backport which
may-or-may-not-be-a-good-thing, and I'm not exactly overflowing with
spare tuits. If someone wants to do it - great, more power to them,
but the last thing we need is to move it from one unmaintained spot to
another unmaintained spot.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] potential argparse problem: bad mix of parse_known_args and prefix matching

2013-11-26 Thread Robert Kern

On 2013-11-26 18:16, Eli Bendersky wrote:


FWIW I'm not advocating a breaking behavior change here - I fully realize the
ship has sailed. I'm interested in mitigation actions, though. Making the
documentation explain this explicitly + adding an option to disable prefix
matching (in 3.5 since we're past the 3.4 features point) will go a long way for
alleviating this gotcha.


There is already the One Obvious Way to handle situations like yours: the user 
uses "--" to mark that the remaining arguments are pure arguments and not --options.


  parent-script --verbose -- ./child_script.py --no-verbose --etc

This is a common idiom across many argument processors. parse_known_args() is 
not a good solution for this situation even if you mitigate the prefix issue. 
Exact matches of --options are still possible.


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] python 3 niggle: None < 1 raises TypeError

2014-02-18 Thread Robert Kern

On 2014-02-18 14:11, MRAB wrote:

On 2014-02-18 13:48, Serhiy Storchaka wrote:

18.02.14 10:10, Paul Moore написав(ла):

Or alternatively, a "default on None" function - Oracle SQL calls this
nvl, so I will too:

def nvl(x, dflt):
    return dflt if x is None else x

results = sorted(invoices, key=lambda x: nvl(x.duedate, datetime(MINYEAR, 1, 1)))


Or, as was proposed above:

results = sorted(invoices,
   key=lambda x: (x.duedate is not None, x.duedate))


That makes me wonder.

Why is:

 None < None

unorderable and not False but:

  (None, ) < (None, )

orderable?


tuple's rich comparison uses PyObject_RichCompareBool(x, y, Py_EQ) to find the 
first pair of items that is unequal, and then compares that pair with the 
requested operator (falling back to comparing lengths if every pair is equal).


  http://hg.python.org/cpython/file/79e5bb0d9b8e/Objects/tupleobject.c#l591

PyObject_RichCompareBool(x, y, Py_EQ) treats identical objects as equal.

  http://hg.python.org/cpython/file/79e5bb0d9b8e/Objects/object.c#l716
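
A quick way to see the identity shortcut from the Python side (3.4 shown;
only the error message wording changes between versions):

    >>> x = None
    >>> (x,) < (x,)    # x is x, so the pair counts as equal; lengths tie-break
    False
    >>> (x,) < (x, 1)
    True
    >>> x < x          # no identity shortcut here
    Traceback (most recent call last):
      ...
    TypeError: unorderable types: NoneType() < NoneType()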

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 465: A dedicated infix operator for matrix multiplication

2014-04-07 Thread Robert Kern

On 2014-04-07 19:54, francis wrote:




So, I guess as far as I'm concerned, this is ready to go. Feedback

welcome:

   http://legacy.python.org/dev/peps/pep-0465/



Hi,
just curiosity: why is the second parameter 'o2' in:

PyObject* PyObject_MatrixMultiply(PyObject *o1, PyObject o2)

not a pointer to PyObject?


Typo, I'm fairly certain.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 465: A dedicated infix operator for matrix multiplication

2014-04-09 Thread Robert Kern

On 2014-04-09 12:12, Nick Coghlan wrote:

On 8 April 2014 18:32, cjw  wrote:

Guido,

I am sorry to read this.

I shall be responding more completely in a day or two.

In my view, @ and @@ are completely redundant.  Both operations are  already
provided, * and **, in numpy.matrix.

PEP 465 provides no clear indication as to how the standard operators fail.


Note that numpy.matrix is specifically discussed in
http://www.python.org/dev/peps/pep-0465/#rejected-alternatives-to-adding-a-new-operator
(it's the first rejected alternative listed).


To be fair to Colin, the PEP asserts that the community at large would prefer an 
operator to the status quo but only alludes to the reason why it does so rather 
than explaining it fully. Personally, I think that's a reasonable allocation of 
Nathaniel's time, but then I happen to have agreed with the PEP's position 
before it was written, and I personally witnessed all of the history myself so I 
don't need it repeated back to me.


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] pep8 reasoning

2014-04-24 Thread Robert Kern

On 2014-04-24 14:59, Barry Warsaw wrote:


I will say this: the original preference for underscore_names in PEP 8 was
spurred by user studies some of our early non-native English speaking users
conducted many years ago.  We learned that it was more difficult for many of
them to parse mixedCase names than underscore_names.  I'm afraid I probably no
longer have references to those studies, but the difference was pronounced,
IIRC, and I think it's easy to see why.  Underscores can be scanned by the eye
as spaces, while I'd hypothesize that the brain has to do more work to read
mixedCase names.


A more recent set of studies show some mixedResults (ha ha). On a low-level 
reading task, the studies agree with yours in that mixedCase takes more time and 
effort; however, it appears to improve accuracy as well. On a higher-level 
comprehension task, mixedCase took less or the same time and still improved 
accuracy. Experienced programmers don't see too much of a difference either way, 
but inexperienced programmers see a more marked benefit to mixedCase.


  http://www.cs.loyola.edu/~binkley/papers/tr-loy110720.pdf

That said, I can't vouch for the experiments or the analysis, and it isn't 
really germane to Chris' historical question. I mention it only because I had 
just run across this paper last night, so it was fresh in my mind when you 
mentioned studies on the subject.


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecating "instance method" class

2019-04-08 Thread Robert White
So we're making pretty heavy use of PyInstanceMethod_New in our python
binding library that we've written for a bunch of in house tools.
If this isn't the best / correct way to go about adding methods to objects,
what should we be using instead?


On Sun, Apr 7, 2019 at 2:17 AM Jeroen Demeyer  wrote:

> On 2019-04-07 09:48, Serhiy Storchaka wrote:
> > total_ordering monkeypatches the decorated class. I'm planning to
> > implement in C methods that implement __gt__ in terms of __lt__ etc.
>
> Yes, I understood that. I'm just saying: if you want to make it fast,
> that's not the best solution. The fastest would be to implement
> tp_richcompare from scratch (instead of relying on slot_tp_richcompare
> dispatching to methods).
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robert.wd.white%40gmail.com
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecating "instance method" class

2019-04-08 Thread Robert White
Just PyInstanceMethod_New, and by "adding methods to objects" this is
adding C functions to types defined in C.

Only appears to be called at module import / creation time.

On Mon, Apr 8, 2019 at 10:24 AM Jeroen Demeyer  wrote:

> On 2019-04-08 17:08, Robert White wrote:
> > So we're making pretty heavy use of PyInstanceMethod_New in our python
> > binding library that we've written for a bunch of in house tools.
> > If this isn't the best / correct way to go about adding methods to
> > objects, what should we be using instead?
>
> First of all, the consensus in this thread is not to deprecate
> instancemethod.
>
> Well, it depends what you mean with "adding methods to objects", that's
> vaguely formulated. Do you mean adding methods at run-time (a.k.a.
> monkey-patching) to a pre-existing class? And is the process of adding
> methods done in C or in Python?
>
> Do you only need PyInstanceMethod_New() or also other
> PyInstanceMethod_XXX functions/macros?
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] (no subject)

2019-04-10 Thread Robert Okadar
Hi community,

I have developed a tkinter GUI component with Python v3.7. It runs very well
on Linux, but I'm seeing a huge performance impact on Windows 10. While on
Linux almost real-time performance is achieved, on Windows it is slow to an
unusable level.

The code is somewhat stripped down from the original, but the performance
difference is the same anyway. The columns can be resized by clicking on
the column border and dragging it. Resizing works only for the top row (but
it resizes the entire column).
In this demo, all extra bindings are left out so that they cannot influence
the component's performance. If you resize the window (i.e., if you maximize
it), you must call the function table.fit() from the IDLE shell.

Does anyone know where this huge difference in performance comes from?
Can anything be done about it?

All the best,
--
Robert Okadar
IT Consultant

Schedule an *online meeting <https://calendly.com/aranea-network/60min>* with
me!

Visit *aranea-mreze.hr* <http://aranea-mreze.hr> or call
* +385 91 300 8887*

import tkinter

class Resizer(tkinter.Frame):
    def __init__(self, info_grid, master, **cnf):
        self.table_grid = info_grid
        tkinter.Frame.__init__(self, master, **cnf)
        self.bind('', self.resize_column)
        self.bind('', self.resize_start)
        self.bind('', self.resize_end)
        self._resizing = False

        self.bind('', self.onDestroyEvent)

    def onDestroyEvent(self, event):
        self.table_grid = []

    def resize_column(self, event, width = None):
        #if self._resizing:
        top = self.table_grid.Top
        grid = self.table_grid._grid
        col = self.master.grid_info()["column"]
        if not width:
            width = self._width + event.x_root - self._x_root
        top.columnconfigure(col, minsize = width)
        grid.columnconfigure(col, minsize = width)

    def resize_start(self, event):
        top = self.table_grid.Top
        self._resizing = True
        self._x_root = event.x_root
        col = self.master.grid_info()["column"]
        self._width = top.grid_bbox(row = 0, column = col)[2]
        #print event.__dict__

        col = self.master.grid_info()["column"]
        #print top.grid_bbox(row = 0, column = col)

    def resize_end(self, event):
        pass
        #self.table_grid.xscrollcommand()
        #self.table_grid.column_resize_callback(col, self)


class Table(tkinter.Frame):
    def __init__(self, master, columns = 10, rows = 20, width = 100, **kw):
        tkinter.Frame.__init__(self, master, **kw)
        self.columns = []
        self._width = width
        self._grid = grid = tkinter.Frame(self, bg = "#CC")
        self.Top = top = tkinter.Frame(self, bg = "#DD")
        self.create_top(columns)
        self.create_grid(rows)

        #self.bind('', self.on_table_configure)
        #self.bind('', self.on_table_map)

        top.pack(anchor = 'nw')   #, expand = 1, fill = "both")
        grid.pack(anchor = 'nw')  #fill = "both", expand = 1

    def on_table_map(self, event):
        theight = self.winfo_height()

    def fit(self):  #on_table_configure(self, event):
        i = 0
        for frame in self.Top.grid_slaves(row = 0):
            frame.resizer.resize_column(None, width = frame.winfo_width())
            i += 1
        theight = self.winfo_height()
        fheight = self._grid.winfo_height() + self.Top.winfo_height()
        #print('', theight, fheight)
        if theight > fheight:
            rheight = self.grid_array[0][0].winfo_height()
            ammount = int((-fheight + theight) / rheight)
            #print(rheight, ammount)
            for i in range(ammount):
                self.add_row()
            self.update()

    def add_row(self, ammount = 1):
        columnsw = self.columns
        row = []
        i = len(self.grid_array)
        for j in range(len(columnsw)):
            bg = self.bgcolor0
            if i % 2 == 1:
                bg = self.bgcolor1
            entry = tkinter.Label(self._grid, bg = bg, text = '%i %i' % (i, j))
            entry.grid(row = i, column = j, sticky = "we", padx = 2)
            row.append(entry)
        self.grid_array.append(row)

    bgcolor0 = "#FF"
    bgcolor1 = "#EE"

    def create_grid(self, height):
        #grid.grid(row = 0, column = 0, sticky = "nsew")
        columnsw = self.columns  # = self.Top.grid_slaves(row = 1)
        self.grid_array = []
        for i in range(height):
            row = []
            for j in range(len(columnsw)):
                bg = self.bgcolor0
                if i % 2 == 1:
                    bg = self.bgcolor1
                #entry = self.EntryClass(False, self,

Re: [Python-Dev] (no subject)

2019-04-10 Thread Robert Okadar
Hi Steven,

Thank you for pointing me in the right direction. I will search for help in
the places you mentioned.

Not sure how we can help you with developing the Python interpreter, as I
doubt we have any knowledge that this project might use. When I say
'we', I mean my colleague and me.

All the best,
--
Robert Okadar
IT Consultant

Schedule an *online meeting <https://calendly.com/aranea-network/60min>* with
me!

Visit *aranea-mreze.hr* <http://aranea-mreze.hr> or call
* +385 91 300 8887*


On Wed, 10 Apr 2019 at 17:36, Steven D'Aprano  wrote:

> Hi Robert,
>
> This mailing list is for the development of the Python interpreter, not
> a general help desk. There are many other forums where you can ask for
> help, such as the comp.lang.python newsgroup, Stackoverflow, /r/python
> on Reddit, the IRC channel, and more.
>
> Perhaps you can help us though, I presume you signed up to this mailing
> list via the web interface at
>
> https://mail.python.org/mailman/listinfo/python-dev
>
> Is there something we could do to make it more clear that this is not
> the right place to ask for help?
>
>
> --
> Steven
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] git history conundrum

2019-04-28 Thread Robert Collins
Thank you!

If I understand correctly, this is just a consequence of the hg-style branch
backports: multiple copies of a change. It should be safe to skip those.

Rob

On Sun, 28 Apr 2019, 07:11 Chris Withers,  wrote:

> Hi All,
>
> I'm in the process of bringing the mock backport up to date, but this
> has got me stumped:
>
> $ git log --oneline  --no-merges
> 5943ea76d529f9ea18c73a61e10c6f53bdcc864f.. -- Lib/unittest/mock.py
> Lib/unittest/test/testmock/ | tail
> 362f058a89 Issue #28735: Fixed the comparison of mock.MagickMock with
> mock.ANY.
> d9c956fb23 Issue #20804: The unittest.mock.sentinel attributes now
> preserve their identity when they are copied or pickled.
> 84b6fb0eea Fix unittest.mock._Call: don't ignore name
> 161a4dd495 Issue #28919: Simplify _copy_func_details() in unittest.mock
> ac5084b6c7 Fixes issue28380: unittest.mock Mock autospec functions now
> properly support assert_called, assert_not_called, and assert_called_once.
> 0be894b2f6 Issue #27895:  Spelling fixes (Contributed by Ville Skyttä).
> 15f44ab043 Issue #27895:  Spelling fixes (Contributed by Ville Skyttä).
> d4583d7fea Issue #26750: use inspect.isdatadescriptor instead of our own
> _is_data_descriptor().
> 9854789efe Issue #26750: unittest.mock.create_autospec() now works
> properly for subclasses of property() and other data descriptors.
> 204bf0b9ae English spelling and grammar fixes
>
> Right, so I've merged up to 15f44ab043, what comes next?
>
> $ git log --oneline  --no-merges 15f44ab043.. -- Lib/unittest/mock.py
> Lib/unittest/test/testmock/ | tail -n 3
> 161a4dd495 Issue #28919: Simplify _copy_func_details() in unittest.mock
> ac5084b6c7 Fixes issue28380: unittest.mock Mock autospec functions now
> properly support assert_called, assert_not_called, and assert_called_once.
> 0be894b2f6 Issue #27895:  Spelling fixes (Contributed by Ville Skyttä).
>
> Okay, no idea why 0be894b2f6 is there, appears to be a totally identical
> commit to 15f44ab043, so let's skip it:
>
> $ git log --oneline  --no-merges 0be894b2f6.. -- Lib/unittest/mock.py
> Lib/unittest/test/testmock/ | tail -n 3
> 161a4dd495 Issue #28919: Simplify _copy_func_details() in unittest.mock
> ac5084b6c7 Fixes issue28380: unittest.mock Mock autospec functions now
> properly support assert_called, assert_not_called, and assert_called_once.
> 15f44ab043 Issue #27895:  Spelling fixes (Contributed by Ville Skyttä).
>
> Wat?! Why is 15f44ab043 showing up again?!
>
> What's the git subtlety I'm missing here?
>
> Chris
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robertc%40robertcollins.net
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] git history conundrum

2019-04-28 Thread Robert Collins
Share your own username with Michael or me and we'll add you there.

Rob

On Mon, 29 Apr 2019, 09:55 Chris Withers,  wrote:

> On 28/04/2019 22:21, Robert Collins wrote:
> > Thank you!
>
> Thank me when we get there ;-) Currently in Dec 2018 with a wonderful
> Py2 failure:
>
> ==
> ERROR: test_autospec_getattr_partial_function
> (mock.tests.testhelpers.SpecSignatureTest)
> --
> Traceback (most recent call last):
>File "mock/tests/testhelpers.py", line 973, in
> test_autospec_getattr_partial_function
>  autospec = create_autospec(proxy)
>File "mock/mock.py", line 2392, in create_autospec
>  for entry in dir(spec):
> TypeError: __dir__() must return a list, not str
>
> Once we're done, I'll need a username/password that can write to
> https://pypi.org/project/mock/ ...
>
> > If I understand correctly this is just the hg style branch backport
> > consequence, multiple copies of a change. Should be safe to skip those.
>
> Yep, current script I've been using is here, high level highlighted:
>
> https://github.com/cjw296/mock/blob/backporting/backport.py#L102-L125
>
> cheers,
>
> Chris
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] "if __name__ == '__main__'" at the bottom of python unittest files

2019-04-30 Thread Robert Collins
They were never needed 😁

Removal is fine with me.

On Wed, 1 May 2019, 09:27 Chris Withers,  wrote:

> Hi All,
>
> I have a crazy idea of getting unittest.mock up to 100% code coverage.
>
> I noticed at the bottom of all of the test files in testmock/, there's a:
>
> if __name__ == '__main__':
>  unittest.main()
>
> ...block.
>
> How would people feel about these going away? I don't *think* they're
> needed now that we have unittest discover, but thought I'd ask.
>
> Chris
>
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robertc%40robertcollins.net
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 594: Removing dead batteries from the standard library

2019-05-21 Thread Robert Collins
This vector exists today for all new stdlib modules: once one is added, any
existing dependency could ship a module under that name so that it gets
imported on prior Python versions.

Rob

On Wed, 22 May 2019, 17:03 Stephen J. Turnbull, <
[email protected]> wrote:

> Christian Heimes writes:
>
>  > It's all open source. It's up to the Python community to adopt
>  > packages and provide them on PyPI.
>  >
>  > Python core will not maintain and distribute the packages. I'll
>  > merely provide a repository with packages to help kick-starting the
>  > process.
>
> This looks to me like an opening to a special class of supply chain
> attacks.  I realize that PyPI is not yet particularly robust to such
> attacks, and we have seen "similar name" attacks (malware uploaded
> under a name similar to a popular package).  ISTM that this approach
> to implementing the PEP will enable "identical name" attacks.  (By
> download count, stdlib packages are as popular as Python. :-)
>
> It now appears that there's been substantial pushback against removing
> packages that could be characterized as "obsolete and superseded but
> still in use", so this may not be a sufficient great risk to be worth
> addressing.  I guess this post is already a warning to those who are
> taking care of the "similar name" malware that this class of attacks
> will be opened up.
>
> One thing we *could* do that would require moderate effort would be to
> put them up on PyPI ourselves, and require that would-be maintainers
> be given a (light) vetting before handing over the keys.  (Maybe just
> require that they be subscribers to the Dead Parrot SIG? :-)
>
> Steve
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/robertc%40robertcollins.net
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-03 Thread Robert Collins
On 28 February 2013 05:51, Michael Foord  wrote:
> Hello all,
>
> PyCon, and the Python Language Summit, is nearly upon us. We have a good 
> number of people confirmed to attend. If you are intending to come to the 
> language summit but haven't let me know please do so.
>
> The agenda of topics for discussion so far includes the following:
>
> * A report on pypy status - Maciej and Armin
> * Jython and IronPython status reports - Dino / Frank
> * Packaging (Doug Hellmann and Monty Taylor at least)
> * Cleaning up interpreter initialisation (both in hopes of finding areas
>   to rationalise and hence speed things up, as well as making things
>   more embedding friendly). Nick Coghlan
> * Adding new async capabilities to the standard library (Guido)
> * cffi and the standard library - Maciej
> * flufl.enum and the standard library - Barry Warsaw
> * The argument clinic - Larry Hastings
>
> If you have other items you'd like to discuss please let me know and I can 
> add them to the agenda.

I'd like to talk about overhauling - not tweaking, overhauling - the
standard library testing facilities.

-Rob


--
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-03 Thread Robert Collins
On 4 March 2013 18:54, Guido van Rossum  wrote:
> On Sun, Mar 3, 2013 at 9:24 PM, Robert Collins
>  wrote:
>> I'd like to talk about overhauling - not tweaking, overhauling - the
>> standard library testing facilities.
>
> That seems like too big a topic and too vague a description to discuss
> usefully. Perhaps you have a specific proposal? Or at least just a use
> case that's poorly covered?

I have both - I have a draft implementation for a new test result API
(and forwards and backwards compat code etc), and use cases that drive
it. I started a thread here -
http://lists.idyll.org/pipermail/testing-in-python/2013-February/005434.html
, with blog posts
https://rbtcollins.wordpress.com/2013/02/14/time-to-revise-the-subunit-protocol/
https://rbtcollins.wordpress.com/2013/02/15/more-subunit-needs/
https://rbtcollins.wordpress.com/2013/02/19/first-experience-implementing-streamresult/
https://rbtcollins.wordpress.com/2013/02/23/simpler-is-better/

They are focused on subunit, but much of subunit's friction has been
due to issues encountered with the standard library TestResult API - in
particular three things:
 - the single-active-test model that the current API (or at least
implementation) has.
 - the expectation that all test outcomes will originate from the same
interpreter (or something with a live traceback object)
 - the inability to supply details about errors other than the exception

All of which start to bite rather deep when working on massively
parallel test environments.

It is of course possible for subunit and related tools to run their
own implementation, but it seems ideal to me to have a common API
which regular unittest, nose, py.test and others can all agree on and
use : better reuse for pretty printers, GUI displays and the like
depend on some common API.

> TBH, your choice of words is ambiguous -- are you interested in
> overhauling the facilities for testing *of* the standard library (i.e.
> the 'test' package), or the testing facilities *provided by* the
> standard library (i.e. the unittest module)?

Sorry! Testing facilities provided by the standard library. They
should naturally facilitate testing of the standard library too.

-Rob

> --
> --Guido van Rossum (python.org/~guido)



-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-03 Thread Robert Collins
On 4 March 2013 19:40, Nick Coghlan  wrote:

> Your feedback on http://bugs.python.org/issue16997 would be greatly 
> appreciated.

Done directly to Antoine on IRC the other day in a conversation with
him and Michael about the compatibility impact of subtests. Happy to
do a full code review if that would be useful.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-04 Thread Robert Collins
On 5 March 2013 05:34, Brett Cannon  wrote:
>
>
>
> On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw  wrote:
>>
>> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote:
>>
>> >It is of course possible for subunit and related tools to run their
>> >own implementation, but it seems ideal to me to have a common API
>> >which regular unittest, nose, py.test and others can all agree on and
>> >use : better reuse for pretty printers, GUI displays and the like
>> >depend on some common API.
>>
>> And One True Way of invoking and/or discovering how to invoke, a package's
>> test suite.
>
>
> How does unittest's test discovery not solve that?

Three reasons
 a) There are some bugs (all filed I think) - I intend to hack on
these in the near future - that prevent discovery working at all for
some use cases.
 b) discovery requires magic parameters that are project specific
(e.g. is it 'discover .' or 'discover . lib' to run it). This is
arguably a setup.py/packaging entrypoint issue.
 c) Test suites written for e.g. Twisted, or nose, or other
non-stdunit-runner-compatible test runners will fail to execute even
when discovered correctly.

There are ways to solve this without addressing a/b/c - just defining
a standard command to run that signals success/failure with its exit
code. Packages can export a particular flavour of that in their
setup.py if they have exceptional needs, and do nothing in the common
case. That doesn't solve 'how to introspect a package test suite' but
for distro packagers - and large scale CI integration - that doesn't
matter.
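
As a strawman only (the body of run() is a placeholder, not a proposed
standard), the "particular flavour" could be as small as:

    import subprocess
    import sys
    from distutils.core import Command, setup

    class TestCommand(Command):
        """Run the project's tests; the exit code signals success/failure."""
        description = "run the test suite"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # Whatever this particular project actually needs: unittest
            # discovery, trial, nose, testr, make check, ...
            errno = subprocess.call(
                [sys.executable, '-m', 'unittest', 'discover'])
            raise SystemExit(errno)

    setup(name='example', version='0.0', cmdclass={'test': TestCommand})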

For instance testrepository offers a setuptools extension to let it be
used trivially, I believe nose does something similar.

Having something that would let *any* test suite spit out folk's
favourite test protocol de jour would be brilliant of course :).
[junit-xml, subunit, TAP, ...]

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-04 Thread Robert Collins
On 5 March 2013 05:51, Barry Warsaw  wrote:
> I should have added "from the command line".  E.g. is it:
>
> $ python -m unittest discover
> $ python setup.py test
> $ python setup.py nosetests
> $ python -m nose test
> $ nosetests-X.Y

$ testr run

:)

> Besides having a multitude of choices, there's almost no way to automatically
> discover (e.g. by metadata inspection or some such) how to invoke the tests.
> You're often lucky if there's a README.test and it's still accurate.


If there is a .testr.conf you can run 'testr init; testr run'. That's
the defined entry point for testr, and .testr.conf can specify running
make, or setup.py build, or whatever else is needed to run tests.
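
For example (illustrative and from memory - check the testrepository docs for
the canonical keys), a .testr.conf can be as small as:

    [DEFAULT]
    test_command=python -m subunit.run discover . $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list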


I would love to see a declarative interface so that you can tell that
that is what you should run.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda)

2013-03-04 Thread Robert Collins
On 5 March 2013 10:26, Eli Bendersky  wrote:
> [Splitting into a separate thread]
>
> Do we really need to overthink something that requires a trivial alias to
> set up for one's own convenience?

The big thing is automated tools, not developers.

When distributors want to redistribute packages they want to be sure
they work. Running the tests is a pretty good signal for that, but
having every package slightly different adds to the work they need to
do. Being able to do 'setup.py test' consistently, everywhere - that
would be great.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda)

2013-03-04 Thread Robert Collins
On 5 March 2013 12:49, Eli Bendersky  wrote:
>
> On Mon, Mar 4, 2013 at 2:14 PM, Barry Warsaw  wrote:
>>
>> On Mar 05, 2013, at 11:01 AM, Robert Collins wrote:
>>
>> >The big thing is automated tools, not developers.
>>
>> Exactly.
>
> I don't understand. Is "python -m unittest discover" too much typing for
> automatic tools? If anything, it's much more portable across Python versions
> since any new command/script won't be added before 3.4, while the longer
> version works in 2.7 and 3.2+

It isn't about length. It is about knowing that *that* is what to type
(and btw that exact command cannot run twisted's tests, among many
other projects' tests).

Perhaps we are talking about different things. A top level script to
run tests is interesting, but orthogonal to the thing Barry was asking
for.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-04 Thread Robert Collins
On 5 March 2013 13:21, Michael Foord  wrote:
>

> We can certainly talk about it - although as Guido says, something specific 
> may be easier to have a useful discussion about.
>
> Reading through your blog articles it seemed like a whole lot of subunit 
> context was required to understand the specific
> proposal you're making for the TestResult. It also *seems* like you're 
> redesigning the TestResult for a single use case
> (distributed testing) with an api that looks quite "odd" for anything that 
> isn't that use case. I'd rather see how we can
> make the TestResult play *better* with those requirements. That discussion 
> probably belongs in another thread - or at
> the summit.

Right - all I wanted was to flag that you and I and any other
interested parties should discuss this at the summit :).

-Rob






-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda)

2013-03-04 Thread Robert Collins
On 5 March 2013 13:35, Eli Bendersky  wrote:

> Perhaps :-)
> I'm specifically referring to a new top-level script that will run all
> unittests in discovery mode from the current directory, as a shortcut to
> "python -m unittest discover". ISTM this is at leas in part what was
> discussed, and my email was in this context.

So that is interesting, but its not sufficient to meet the automation
need Barry is calling out, unless all test suites can be run by
'python -m unittest discover' with no additional parameters [and a
pretty large subset cannot].

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-04 Thread Robert Collins
On 5 March 2013 13:50, Michael Foord  wrote:

>> Right - all I wanted was to flag that you and I and any other
>> interested parties should discuss this at the summit :).
>
> I've added a testing topic to the agenda. At the very least you could outline 
> your streaming test result proposal, or kick off a meta discussion. We'll 
> probably time limit the discussion so some specific focus will make it more 
> productive - or maybe you can get a feel for how open to major changes in 
> this area other python devs are.


Cool. I can step through the core use cases and differences to what
TestResult is in pretty short order. We can spider out from there as
folk desire.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda)

2013-03-04 Thread Robert Collins
On 5 March 2013 20:02, Lennart Regebro  wrote:
> On Tue, Mar 5, 2013 at 1:41 AM, Robert Collins
>  wrote:
>> So that is interesting, but its not sufficient to meet the automation
>> need Barry is calling out, unless all test suites can be run by
>> 'python -m unittest discover' with no additional parameters [and a
>> pretty large subset cannot].
>
> But can they be changed so they are? That's gotta be the important bit.

In principle maybe. Need to talk with the trial developers, nose
developers, py.test developers etc - to get consensus on a number of
internal API friction points.

> What's needed here is not a tool that can run all unittests in
> existence, but an official way for automated tools to run tests, with
> the ability for any test and test framework to hook into that, so that
> you can run any test suite automatically from an automated tool. Then,
> once that mechanism has been identified/implemented, we need to tell
> everybody to do this.

I think the command line is the right place to do that - declare as
metadata the command line to run a package's tests.

> I don't care much what that mechanism is, but I think the easiest way
> to get there is to tell people to extend distutils with a test command
> (or use Distribute) and perhaps add such a command in 3.4 that will do
> the unittest discover thingy. I remember looking into zope.testrunner
> hooking into that mechanism as well, but I don't remember what the
> outcome was.

Agreed.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] VC++ 2008 Express Edition now locked away?

2013-03-06 Thread Robert Kern

On 2013-03-06 16:55, Chris Angelico wrote:

On Thu, Mar 7, 2013 at 3:46 AM, Stefan Behnel  wrote:

Chris Angelico, 06.03.2013 17:30:

On Thu, Mar 7, 2013 at 1:40 AM, Ezio Melotti wrote:

I did try a few weeks ago, when I had to download a copy of Windows
for a project.  Long story short, after 30+ minutes and a number of
confirmation emails I reached a point where I had a couple of new
accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my
e-cart, and some .exe I had to download in order to download and
verify the purchased copy.  That's where I gave up.


That's the point where I'd start looking at peer-to-peer downloads.
These sorts of things are often available on torrent sites; once the
original publisher starts making life harder, third-party sources
become more attractive.


May I express my doubts that the license allows a redistribution of the
software in this form?


Someone would have to check, but in most cases, software licenses
govern the use, more than the distribution. If you're allowed to
download it free of charge from microsoft.com, you should be able to
get hold of it in some other way and it be exactly the same.


Sorry, but that's not how copyright works. The owner of the copyright on a work 
has to give you permission to allow you to distribute their work (modulo certain 
statutorily-defined exceptions that don't apply here). Just because you got the 
work from them free of charge doesn't mean that they have given you permission 
to redistribute it. If the agreements that you have with the copyright owner do 
not mention redistribution, you do not have permission to redistribute it.


IANAL, TINLA.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Semantics of __int__(), __index__()

2013-04-03 Thread Robert Kern

On 2013-04-03 13:47, Hrvoje Niksic wrote:

On 04/03/2013 01:17 PM, Nick Coghlan wrote:

 > > >
 > > I like Nick's answer to that: int *should* always return something of
 > > exact type int.  Otherwise you're always left wondering whether you
 > > have to do "int(int(x))", or perhaps even "int(int(int(x)))", to be
 > > absolutely sure of getting an int.
 >
 > Agreed.

Perhaps we should start emitting a DeprecationWarning for int subclasses
returned from __int__ and __index__ in 3.4?


Why would one want to be absolutely sure of getting an int?

It seems like a good feature that an __int__ implementation can choose to return
an int subclass with additional (and optional) information. After all, int
subclass instances should be usable everywhere where ints are, including in C
code.  I can imagine numpy and similar projects would be making use of this
ability already -- just think of uses for numpy's subclasses of "float".


We don't.

[~]
|1> type(float(np.float64(1.0)))
float

[~]
|2> type(int(np.int32(1)))
int

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] casefolding in pathlib (PEP 428)

2013-04-11 Thread Robert Collins
On 12 April 2013 09:18, Oleg Broytman  wrote:
> On Thu, Apr 11, 2013 at 02:11:21PM -0700, Guido van Rossum  
> wrote:
>> - the case-folding algorithm on some filesystems is burned into the
>> disk when the disk is formatted
>
>Into the partition, I guess, not the physical disc?

CDROMs - Joliet IIRC - so yes, physical disc.

-Rob
-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] __subclasses__() return order

2013-05-25 Thread Robert Kern

On 2013-05-25 09:18, Antoine Pitrou wrote:


Hello,

In http://bugs.python.org/issue17936, I proposed making tp_subclasses
(the internal container implementing object.__subclasses__) a dict.
This would make the return order of __subclasses__ completely
undefined, while it is right now slightly predictable. I have never seen
__subclasses__ actually used in production code, so I'm wondering
whether someone might be affected by such a change.


I do use a package that does use __subclasses__ in production code, but the 
order is unimportant.
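
(For illustration only - this is not that package's actual code - the usual
shape is a registry where only membership matters, not order:)

    class Codec(object):
        pass

    class JSONCodec(Codec):
        pass

    class XMLCodec(Codec):
        pass

    # Keyed by name, so the iteration order of __subclasses__() never shows.
    codecs = {cls.__name__: cls for cls in Codec.__subclasses__()}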


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Clean way in python to test for None, empty, scalar, and list/ndarray? A prayer to the gods of Python

2013-06-14 Thread Robert Kern

On 2013-06-14 21:03, Brett Cannon wrote:


On Fri, Jun 14, 2013 at 3:12 PM, Martin Schultz mailto:[email protected]>> wrote:



  - add a `size` attribute to all objects (I wouldn't mind if this is None
in case you don't really know how to define the size of something, but it
would be good to have it, so that `anything.size` would never throw an error

This is what len() is for. I don't know why numpy doesn't define the __len__
method on their array types for that.


It does. It gives the size of the first axis, i.e. the one accessed by simple 
indexing with an integer: some_array[i]. The `size` attribute gives the total 
number of items in the possibly-multidimensional array. However, one of the 
other axes can be 0-length, so the array will have no elements but the length 
will be nonzero.


[~]
|4> np.empty([3,4,0])
array([], shape=(3, 4, 0), dtype=float64)

[~]
|5> np.empty([3,4,0])[1]
array([], shape=(4, 0), dtype=float64)

[~]
|6> len(np.empty([3,4,0]))
3

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Clean way in python to test for None, empty, scalar, and list/ndarray? A prayer to the gods of Python

2013-06-14 Thread Robert Kern

On 2013-06-14 21:55, R. David Murray wrote:

On Fri, 14 Jun 2013 21:12:00 +0200, Martin Schultz  wrote:

2. Testing for empty lists or empty ndarrays:

  In principle, `len(x) == 0` will do the trick. **BUT** there are several
caveats here:
- `len(scalar)` raises a TypeError, so you will have to use try and
except or find some other way of testing for a scalar value
- `len(numpy.array(0))` (i.e. a scalar coded as numpy array) also raises
a TypeError ("unsized object")
- `len([[]])` returns a length of 1, which is somehow understandable,
but - I would argue - perhaps not what one might expect initially

  Alternatively, numpy arrays have a size attribute, and
`numpy.array([]).size`, `numpy.array(8.).size`, and
`numpy.array([8.]).size` all return what you would expect. And even
`numpy.array([[]]).size` gives you 0. Now, if I could convert everything to
a numpy array, this might work. But have you ever tried to assign a list of
mixed data types to a numpy array? `numpy.array(["a",1,[2,3],(888,9)])`
will fail, even though the list inside is perfectly fine as a list.


In general you test whether nor not something is empty in Python by
testing its truth value.  Empty things are False.  Numpy seems to
follow this using size, from the limited examples you have given

>>> bool(numpy.array([[]]))
False
>>> bool(numpy.array([[1]]))
True


numpy does not do so. Empty arrays are extremely rare and testing for them rarer 
(rarer still is testing for emptiness not knowing if it is an array or some 
other sequence). What people usually want from bool(some_array) is either 
some_array.all() or some_array.any(). In the face of this ambiguity, numpy 
refuses the temptation to guess and raises an exception explaining matters.


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Clean way in python to test for None, empty, scalar, and list/ndarray? A prayer to the gods of Python

2013-06-14 Thread Robert Kern

On 2013-06-14 23:31, Robert Kern wrote:

On 2013-06-14 21:55, R. David Murray wrote:

On Fri, 14 Jun 2013 21:12:00 +0200, Martin Schultz  wrote:

2. Testing for empty lists or empty ndarrays:

  In principle, `len(x) == 0` will do the trick. **BUT** there are several
caveats here:
- `len(scalar)` raises a TypeError, so you will have to use try and
except or find some other way of testing for a scalar value
- `len(numpy.array(0))` (i.e. a scalar coded as numpy array) also raises
a TypeError ("unsized object")
- `len([[]])` returns a length of 1, which is somehow understandable,
but - I would argue - perhaps not what one might expect initially

  Alternatively, numpy arrays have a size attribute, and
`numpy.array([]).size`, `numpy.array(8.).size`, and
`numpy.array([8.]).size` all return what you would expect. And even
`numpy.array([[]]).size` gives you 0. Now, if I could convert everything to
a numpy array, this might work. But have you ever tried to assign a list of
mixed data types to a numpy array? `numpy.array(["a",1,[2,3],(888,9)])`
will fail, even though the list inside is perfectly fine as a list.


In general you test whether nor not something is empty in Python by
testing its truth value.  Empty things are False.  Numpy seems to
follow this using size, from the limited examples you have given

>>> bool(numpy.array([[]]))
False
>>> bool(numpy.array([[1]]))
True


numpy does not do so. Empty arrays are extremely rare and testing for them rarer
(rarer still is testing for emptiness not knowing if it is an array or some
other sequence). What people usually want from bool(some_array) is either
some_array.all() or some_array.any(). In the face of this ambiguity, numpy
refuses the temptation to guess and raises an exception explaining matters.


Actually, that's a bit of a lie. In the empty case and the one-element case, we 
do return a bool, False for empty and bool(element) for whatever that one 
element is. Anything else raises the exception since we don't know whether it is 
all() or any() that was desired.
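
Concretely (with a current numpy at the time of writing; the exact error
wording may differ between versions):

    >>> import numpy as np
    >>> bool(np.array([]))       # empty: plain False
    False
    >>> bool(np.array([[0]]))    # exactly one element: bool() of that element
    False
    >>> bool(np.array([1, 2]))   # anything else: refuse to guess
    Traceback (most recent call last):
      ...
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()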


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Stop using timeit, use perf.timeit!

2016-06-10 Thread Robert Collins
On 11 June 2016 at 04:09, Victor Stinner  wrote:
..> We should design a CLI command to do timeit+compare at once.

http://judge.readthedocs.io/en/latest/ might offer some inspiration

There's also ministat -
https://www.freebsd.org/cgi/man.cgi?query=ministat&apropos=0&sektion=0&manpath=FreeBSD+8-current&format=html
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-16 Thread Robert Collins
On 16 Jun 2016 6:55 PM, "Larry Hastings"  wrote:
>
>

> Why do you call it only "semi-fixed"?  As far as I understand it, the
semantics of os.urandom() in 3.5.2rc1 are indistinguishable from reading
from /dev/urandom directly, except it may not need to use a file handle.

Which is a contract change. Someone testing in, e.g., a chroot could have a
different device at /dev/urandom, and now they will need to intercept
syscalls for the same effect. Personally I think this is fine, but assuming
I see Barry's point correctly, it is indeed not the same as it was.

-rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing memoryview object patch from Python 2.7

2016-12-13 Thread Robert Collins
On 14 December 2016 at 01:26, Sesha Narayanan Subbiah
 wrote:
> Hello
>
>
> I have some implementation that currently uses Python 2.6.4, which I'm
> trying to upgrade to Python 2.7.6. After the upgrade, I get the following error:
>
>
> "expected string or Unicode object, memoryview found"
>
>
> On checking further, I could find that the memoryview object has been
> backported to Python 2.7 using this patch:
>
>
> https://bugs.python.org/issue2396
>
>
> I would like to know if it is safe to revert this patch alone from Python
> 2.7.6, or do we know if there are any other dependencies?

I'm not sure - if you're going to run with old, custom builds of
Python, you're probably best served by testing comprehensively for
this yourself.

That said, I have to presume that the error you're getting is from
some code that should be changed anyway, and will need to be changed
when you move to Python 3. Please remember that Python 2.7.6 was
released in 2013 - there have been many security issues since then,
including some of the most egregious SSL issues ever, which should
prompt you to run the latest 2.7 branch (if you're unable to migrate
straight to 3.x).

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing memoryview object patch from Python 2.7

2016-12-14 Thread Robert Collins
On 14 December 2016 at 18:10, Sesha Narayanan Subbiah
 wrote:
> Hi Rob
>
> Thanks for your reply.
>
> From http://legacy.python.org/download/, I could see that the current
> production releases are Python 3.4 and Python 2.7.6.

Nope - https://www.python.org/downloads/ - 2.7.12 and 3.5.2 are
current. The 'legacy' domain there was from a site revamp; I think it's
causing confusion at this point and we should look at retiring it
completely.

> Since we use python for some our legacy applications, we don't want to
> switch to Python 3.0 right now. Moreover, since Python 2.6 is not supported
> anymore, we want to upgrade to Python 2.7.

> Do you suggest I should use Python 2.7.12 which is the latest version in 2.7
> series? I picked up 2.7.6, since it was listed as production release and
> assumed it is the most stable version.

If you can, 3.5.2 is where to switch to. If that won't work, 2.7.12 yes.

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecate `from __future__ import unicode_literals`?

2016-12-16 Thread Robert Collins
On 17 December 2016 at 08:24, Guido van Rossum  wrote:
> I am beginning to think that `from __future__ import unicode_literals` does
> more harm than good. I don't recall exactly why we introduced it, but with
> the restoration of u"" literals in Python 3.3 we have a much better story
> for writing straddling code that is unicode-correct.
>
> The problem is that the future import does both too much and not enough --
> it does too much because it changes literals to unicode even in contexts
> where there is no benefit (e.g. the argument to getattr() -- I still hear of
> code that breaks due to this occasionally) and at the same time it doesn't
> do anything for strings that you read from files, receive from the network,
> or even from other files that don't use the future import.
>
> I wonder if we can add an official note to the 2.7 docs recommending against
> it? (And maybe even to the 3.x docs if it's mentioned there at all.)

I think that's a good idea. I've found u"" to be entirely sufficient
and very robust.
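
The pattern I end up with is simply being explicit at each literal rather than
flipping every literal in the module - a minimal sketch:

    # Runs unchanged on 2.7 and 3.3+.
    text = u"caf\xe9"    # explicitly text on both versions
    data = b"\x00\x01"   # explicitly bytes on both versions
    name = "encode"      # a plain literal stays the native str type, so APIs
                         # that want native str (the getattr() case above)
                         # keep working
    encoded = getattr(text, name)('utf-8')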

Perhaps also have python2 -3 report on it?

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 3.5 unittest does not support namespace packages for discovering

2017-03-23 Thread Robert Collins
On 24 March 2017 at 04:59, INADA Naoki  wrote:
> And this issue is relating to it too: http://bugs.python.org/issue29716
>
> In short, "namespace package" is for make it possible to `pip install
> foo_bar foo_baz`,
> when foo_bar provides `foo.bar` and foo_baz provides `foo.baz`
> package.  (foo is namespace package).
>
> If unittest searches normal directories, it may walk deep into a very
> large tree containing
> millions of directories.  I don't like it.

That is a risk, OTOH I think the failure to do what folk expect is a
bigger risk.

-Rob
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] [bpo-30421]: Pull request review

2017-08-28 Thread Robert Schindler
Hello,

In May, I submitted a pull request that extends the functionality of
argparse.ArgumentParser.

To do so, I followed the steps described in the developers guide.

According to [1], I already pinged on GitHub but got no response. The
next step seems to be writing to this list.

I know that nobody is paid for reviewing submissions, but maybe it just
got overlooked?

You can find the pull request at [2].

Thanks in advance for any feedback.

Best regards
Robert

[1] https://docs.python.org/devguide/pullrequest.html#reviewing
[2] https://github.com/python/cpython/pull/1698


___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


  1   2   3   4   >