Re: [Python-Dev] Help requested with Python 2.7 performance regression

2017-03-02 Thread INADA Naoki
On Thu, Mar 2, 2017 at 4:07 AM, Antoine Pitrou  wrote:
> On Wed, 1 Mar 2017 19:58:14 +0100
> Matthias Klose  wrote:
>> On 01.03.2017 18:51, Antoine Pitrou wrote:
>> > As for the high level: what if the training set used for PGO in Xenial
>> > has become skewed or inadequate?
>>
>> running the testsuite
>
> I did some tests a year or two ago, and running the whole test suite is
> not a good idea, as coverage varies wildly from one functionality to the
> other, so PGO will not infer the right information from it.  You don't
> get very good benchmark results from it.
>
> (for example, decimal has an extensive test suite which might lead PGO
> to believe that code paths exercised by the decimal module are the
> hottest ones)
>
> Regards
>
> Antoine.
>

FYI, there is a "profile-opt" make target.  It uses a subset of regrtest.
https://github.com/python/cpython/blob/2.7/Makefile.pre.in#L211-L214

Does Ubuntu (and Debian) use it?
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Help requested with Python 2.7 performance regression

2017-03-02 Thread Gregory P. Smith
We updated profile-opt to use the testsuite subset based on what distros
had already been using for their training runs. As for the comment about
the test suite not being good for training: that's mostly a myth. The test
suite exercises the ceval loop well, as well as things like re and json,
sufficiently to be a lot better than stupid workloads such as pybench (the
previous default training run).

Room for improvement in training? Likely in some corners. But I have yet to
see anyone propose an evidence-based patch with a workload that reliably
improves on what we train with today for PGO.
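To make that point concrete, here is a minimal sketch (mine, not from the thread) of the kind of code paths such a training run exercises: the bytecode interpreter loop plus the re and json modules mentioned above. The function name and workload shape are illustrative, not anything the Makefile actually runs.

```python
import json
import re

def training_style_workload():
    # Serialize and reparse a structure: exercises json, plus the ceval
    # loop on the range/sum below.
    data = json.dumps({"nums": list(range(100))})
    parsed = json.loads(data)
    total = sum(n * n for n in parsed["nums"])
    # Regex scan over the serialized text: exercises re.
    digits = re.findall(r"\d+", data)
    return total, len(digits)
```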

-gpshead

On Thu, Mar 2, 2017, 1:12 AM INADA Naoki  wrote:

> On Thu, Mar 2, 2017 at 4:07 AM, Antoine Pitrou 
> wrote:
> > On Wed, 1 Mar 2017 19:58:14 +0100
> > Matthias Klose  wrote:
> >> On 01.03.2017 18:51, Antoine Pitrou wrote:
> >> > As for the high level: what if the training set used for PGO in Xenial
> >> > has become skewed or inadequate?
> >>
> >> running the testsuite
> >
> > I did some tests a year or two ago, and running the whole test suite is
> > not a good idea, as coverage varies wildly from one functionality to the
> > other, so PGO will not infer the right information from it.  You don't
> > get very good benchmark results from it.
> >
> > (for example, decimal has an extensive test suite which might lead PGO
> > to believe that code paths exercised by the decimal module are the
> > hottest ones)
> >
> > Regards
> >
> > Antoine.
> >
>
> FYI, there is a "profile-opt" make target.  It uses a subset of regrtest.
> https://github.com/python/cpython/blob/2.7/Makefile.pre.in#L211-L214
>
> Does Ubuntu (and Debian) use it?


[Python-Dev] Enum conversions in the stdlib

2017-03-02 Thread Ethan Furman

There are a few modules that have had their constants redefined as Enums, such 
as signal, which has revealed a minor nit:

>>> pp(list(signal.Signals))
[... Signals member reprs; the angle-bracket output was stripped by the
 list archive ...]

The resulting enumeration is neither in alphabetical nor value order.  While this has no bearing on programmatic usage, I would 
like these Enums to be ordered, preferably by value.


Would anyone prefer lexicographical ordering, and if so, why?
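As an illustration (my sketch, not a proposed patch), value ordering is easy to express today; the exact set of members varies by platform, so only the ordering itself is shown:

```python
import signal

# Sort the existing members by numeric value rather than definition order.
by_value = sorted(signal.Signals, key=lambda s: s.value)
```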

--
~Ethan~


[Python-Dev] Fwd: Re: Enum conversions in the stdlib

2017-03-02 Thread Ethan Furman

I strongly prefer numeric order for signals.

--Guido (mobile)


Re: [Python-Dev] Fwd: Re: Enum conversions in the stdlib

2017-03-02 Thread Alexander Belopolsky

> I strongly prefer numeric order for signals.
> 
> --Guido (mobile)

+1

Numerical values of UNIX signals are often more widely known than their names.  
For example, every UNIX user knows what signal 9 does. 
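Since Signals is a value-based enum, that familiarity carries over directly; a quick sketch (on a POSIX system, where signal 9 is SIGKILL):

```python
import signal

# Look a signal up by its well-known number; the enum maps it back to
# the symbolic name.
sig = signal.Signals(9)
```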


[Python-Dev] why multiprocessing use os._exit

2017-03-02 Thread Tao Qingyun
In multiprocessing/forking.py#129, `os._exit` causes the child process to exit
without closing open files. For example:

```
from multiprocessing import Process

def f():
    global log  # prevent gc from closing the file
    log = open("info.log", "w")
    log.write("***hello world***\n")

p = Process(target=f)
p.start()
p.join()
```
and `info.log` will be empty. Why not use sys.exit?


Thanks




Re: [Python-Dev] Help requested with Python 2.7 performance regression

2017-03-02 Thread Nick Coghlan
On 2 March 2017 at 07:00, Victor Stinner  wrote:

> Hi,
>
> Your document doesn't explain how you configured the host to run
> benchmarks. Maybe you didn't tune Linux or anything else? Be careful
> with modern hardware, which can spring funny (or not so funny) surprises.


Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x
binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25
binaries (Python 2.7.13 + patches, built with GCC 6.3.x)?

I know you've been using perf to look for differences between *Python*
major versions, but this would be more about using Python's benchmark suite
to investigate the performance of *gcc*, since it appears that may be the
culprit here.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia