Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
Jesse Noller, 07.04.2011 22:28:
> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>> Hi Daniel,
>> Thanks for putting this together. I am a huge supporter of benchmarking
>> efforts. My brief comment is below.
>>
>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>> 1. Definition of the benchmark suite. This will entail contacting
>>> developers of Python implementations (CPython, PyPy, IronPython and
>>> Jython), via discussion on the appropriate mailing lists. This might
>>> be achievable as part of this proposal.
>>
>> If you are reaching out to other projects at this stage, I think you should
>> also be in touch with the Cython people (even if its 'implementation'
>> sits on top of CPython).
>> As a scientist/engineer what I care about is how Cython benchmarks to
>> CPython. I believe that they have some ideas on benchmarking and have
>> also explored this space. Their inclusion would be helpful to me thinking
>> this GSoC successful at the end of the day (summer).
>> Thanks for your consideration.
>> Be Well
>> Anthony
>
> Right now, we are talking about building "speed.python.org" to test
> the speed of python interpreters, over time, and alongside one another
> - cython *is not* an interpreter.

Would you also want to exclude Psyco then? It clearly does not qualify as a
Python interpreter.

> Cython is out of scope for this.

Why? It should be easy to integrate Cython using pyximport. Basically, all
you have to do is register the pyximport module as an import hook. Cython
will then try to compile the imported Python modules and fall back to the
normal .py file import if the compilation fails for some reason.

So, once CPython is up and running in the benchmark test, adding Cython
should be as easy as copying the configuration, installing Cython and
adding two lines to site.py.

Obviously, we'd have to integrate a build of the latest Cython development
sources as well, but it's not like installing a distutils enabled Python
package from sources is so hard that it pushes Cython out of scope for this
GSoC.

Stefan
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel wrote:
> Jesse Noller, 07.04.2011 22:28:
>>
>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>
>>> Hi Daniel,
>>> Thanks for putting this together. I am a huge supporter of benchmarking
>>> efforts. My brief comment is below.
>>>
>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>> be achievable as part of this proposal.
>>>
>>> If you are reaching out to other projects at this stage, I think you
>>> should
>>> also be in touch with the Cython people (even if its 'implementation'
>>> sits on top of CPython).
>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>> CPython. I believe that they have some ideas on benchmarking and have
>>> also explored this space. Their inclusion would be helpful to me
>>> thinking
>>> this GSoC successful at the end of the day (summer).
>>> Thanks for your consideration.
>>> Be Well
>>> Anthony
>>
>> Right now, we are talking about building "speed.python.org" to test
>> the speed of python interpreters, over time, and alongside one another
>> - cython *is not* an interpreter.
>
> Would you also want to exclude Psyco then? It clearly does not qualify as a
> Python interpreter.

Why not? it does run those benchmarks just fine.

>> Cython is out of scope for this.
>
> Why? It should be easy to integrate Cython using pyximport. Basically, all
> you have to do is register the pyximport module as an import hook. Cython
> will then try to compile the imported Python modules and fall back to the
> normal .py file import if the compilation fails for some reason.

then it's fine to include it. we can even include it now in
speed.pypy.org that way. would it compile django?

> So, once CPython is up and running in the benchmark test, adding Cython
> should be as easy as copying the configuration, installing Cython and
> adding two lines to site.py.

can you provide a simple command line tool for that? I want
essentially to run ./cython-importing-stuff some-file.py

> Obviously, we'd have to integrate a build of the latest Cython development
> sources as well, but it's not like installing a distutils enabled Python
> package from sources is so hard that it pushes Cython out of scope for this
> GSoC.

no, that's fine. My main concern is - will cython run those
benchmarks? and will you complain if we don't provide a custom cython
hacks? (like providing extra type information)

> Stefan
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
Maciej Fijalkowski, 08.04.2011 11:41:
On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel wrote:
Jesse Noller, 07.04.2011 22:28:
On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
Hi Daniel,
Thanks for putting this together. I am a huge supporter of benchmarking
efforts. My brief comment is below.
On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
1. Definition of the benchmark suite. This will entail contacting
developers of Python implementations (CPython, PyPy, IronPython and
Jython), via discussion on the appropriate mailing lists. This might
be achievable as part of this proposal.
If you are reaching out to other projects at this stage, I think you
should
also be in touch with the Cython people (even if its 'implementation'
sits on top of CPython).
As a scientist/engineer what I care about is how Cython benchmarks to
CPython. I believe that they have some ideas on benchmarking and have
also explored this space. Their inclusion would be helpful to me
thinking
this GSoC successful at the end of the day (summer).
Thanks for your consideration.
Be Well
Anthony
Right now, we are talking about building "speed.python.org" to test
the speed of python interpreters, over time, and alongside one another
- cython *is not* an interpreter.
Would you also want to exclude Psyco then? It clearly does not qualify as a
Python interpreter.
Why not? it does run those benchmarks just fine.
Sure.
Cython is out of scope for this.
Why? It should be easy to integrate Cython using pyximport. Basically, all
you have to do is register the pyximport module as an import hook. Cython
will then try to compile the imported Python modules and fall back to the
normal .py file import if the compilation fails for some reason.
then it's fine to include it. we can even include it now in
speed.pypy.org that way. would it compile django?
Never tried. Likely not completely, but surely some major parts of it.
That's the beauty of it - it just falls back to CPython. :) If we're lucky,
it will manage to compile some performance critical parts without
modifications. In any case, it'll be trying to compile each module.
So, once CPython is up and running in the benchmark test, adding Cython
should be as easy as copying the configuration, installing Cython and adding
two lines to site.py.
can you provide a simple command line tool for that? I want
essentially to run ./cython-importing-stuff some-file.py
You can try
python -c 'import pyximport; \
pyximport.install(pyimport=True); \
exec("somefile.py")'
You may want to configure the output directory for the binary modules,
though, see
https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343
Please also take care to provide suitable gcc CFLAGS, e.g. "-O3
-march=native" etc.
Obviously, we'd have to integrate a build of the latest Cython development
sources as well, but it's not like installing a distutils enabled Python
package from sources is so hard that it pushes Cython out of scope for this
GSoC.
no, that's fine. My main concern is - will cython run those
benchmarks?
In the worst case, they will run at CPython speed with uncompiled modules.
and will you complain if we don't provide a custom cython
hacks? (like providing extra type information)
I don't consider providing extra type information a hack. Remember that
they are only used for additional speed-ups in cases where the author is
smarter than the compiler. It will work just fine without them.
Stefan
Re: [Python-Dev] Test cases not garbage collected after run
On 08/04/2011 02:10, Robert Collins wrote:
> On Fri, Apr 8, 2011 at 8:12 AM, Michael Foord wrote:
>> On 07/04/2011 20:18, Robert Collins wrote:
>>> On Fri, Apr 8, 2011 at 4:49 AM, Michael Foord wrote:
>>>> You mean that the test run keeps the test instances alive for the whole
>>>> test run so instance attributes are also kept alive. How would you solve
>>>> this - by having calling a TestSuite (which is how a test run is
>>>> executed) remove members from themselves after each test execution?
>>>> (Any failure tracebacks etc stored by the TestResult would also have to
>>>> not keep the test alive.) My only concern would be backwards
>>>> compatibility due to the change in behaviour.
>>>
>>> An alternative is in TestCase.run() / TestCase.__call__(), make a copy
>>> and immediately delegate to it; that leaves the original untouched,
>>> permitting run-in-a-loop style helpers to still work. Testtools did
>>> something to address this problem, but I forget what it was offhand.
>>
>> That doesn't sound like a general solution as not everything is copyable
>> and I don't think we should make that a requirement of tests. The
>> proposed "fix" is to make test suite runs destructive, either replacing
>> TestCase instances with None or pop'ing tests after they are run (the
>> latter being what twisted Trial does). run-in-a-loop helpers could still
>> repeatedly iterate over suites, just not call the suite.
>
> Thats quite expensive - repeating discovery etc from scratch.

Nope, just executing the tests by iterating over the suite and calling them
individually - no need to repeat discovery. With the fix in place executing
tests by calling the suite would be destructive, but iterating over the
suite wouldn't be destructive - so the contained tests can still be
executed repeatedly by copying and executing.

> If you don't repeat discovery then you're assuming copyability.

Well, individual test frameworks are free to assume what they want. My
point is that frameworks that wish to do that would still be able to do
this, but they'd have to iterate over the suite themselves rather than
calling it directly.

> What I suggested didn't /require/ copying - it delegates it to the test,
> an uncopyable test would simply not do this.

Ok, so you're not suggesting tests copy themselves by default? In which
case I don't see that you're offering a fix for the problem. (Or at least
not a built-in one.)

All the best,

Michael Foord

> -Rob

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
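
For concreteness, a minimal sketch of the destructive-run idea being
discussed (pop'ing tests as they execute, roughly what twisted Trial does).
It relies on TestSuite keeping its members in the _tests list, which is an
implementation detail, so treat it as illustration rather than a finished
patch:

    import unittest

    class DestructiveTestSuite(unittest.TestSuite):
        """Sketch: drop the reference to each test once it has run, so its
        instance attributes can be garbage collected during the run."""

        def run(self, result):
            while self._tests:              # _tests is an implementation detail
                if result.shouldStop:
                    break
                test = self._tests.pop(0)   # release our reference immediately
                test(result)
            return result

Iterating over such a suite without calling it would remain non-destructive,
which is the distinction drawn above.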
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On 08/04/2011 11:18, Stefan Behnel wrote:
Maciej Fijalkowski, 08.04.2011 11:41:
On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel
wrote:
[snip...]
So, once CPython is up and running in the benchmark test, adding Cython
should be as easy as copying the configuration, installing Cython
and adding
two lines to site.py.
can you provide a simple command line tool for that? I want
essentially to run ./cython-importing-stuff some-file.py
You can try
python -c 'import pyximport; \
pyximport.install(pyimport=True); \
exec("somefile.py")'
You may want to configure the output directory for the binary modules,
though, see
https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343
Please also take care to provide suitable gcc CFLAGS, e.g. "-O3
-march=native" etc.
If this works it is great. I don't think doing this work should be part
of the gsoc proposal. Considering it as a use case could be included in
the infrastructure work though.
All the best,
Michael Foord
Obviously, we'd have to integrate a build of the latest Cython
development
sources as well, but it's not like installing a distutils enabled
Python
package from sources is so hard that it pushes Cython out of scope
for this
GSoC.
no, that's fine. My main concern is - will cython run those
benchmarks?
In the worst case, they will run at CPython speed with uncompiled
modules.
and will you complain if we don't provide a custom cython
hacks? (like providing extra type information)
I don't consider providing extra type information a hack. Remember
that they are only used for additional speed-ups in cases where the
author is smarter than the compiler. It will work just fine without them.
Stefan
--
http://www.voidspace.org.uk/
May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
Re: [Python-Dev] Code highlighting in tracker
On 08/04/2011 02:02, Eugene Toder wrote:
> Because tracker is ugly.

Is this an unbiased opinion? :)

Having Python code syntax highlighted would definitely be *nicer*, and
wouldn't *necessarily* mean switching to a custom markup format for all
submissions (we could probably get 90% of the way there with heuristics).
Of course as always someone would have to do the work...

On the other hand switching to *permitting* restructured-text submissions
for tracker comments, with syntax highlighting for literal blocks (::),
would be nice. :-)

All the best,

Michael Foord

> Eugene

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
[Python-Dev] AST Transformation Hooks for Domain Specific Languages
A few odds and ends from recent discussions finally clicked into
something potentially interesting earlier this evening. Or possibly
just something insane. I'm not quite decided on that point as yet (but
leaning towards the latter).
Anyway, without further ado, I present:
AST Transformation Hooks for Domain Specific Languages
==
Consider:
# In some other module
ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
# In a module using that DSL
import dsl.sql
def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
select address
from people
where name = {name} and dob = {dob}
Suppose that the standard AST for the latter looked something like:
DSL(syntax="dsl.sql",
name='lookup_address',
args=arguments(
args=[arg(arg='name',
annotation=),
arg(arg='dob',
annotation=)],
vararg=None, varargannotation=None,
kwonlyargs=[], kwarg=None, kwargannotation=None,
defaults=[], kw_defaults=[]),
body=[Expr(value=Str(s='select address\nfrom people\nwhere
name = {name} and dob = {dob}'))],
decorator_list=[],
returns=None)
(For those not familiar with the AST, the above is actually just the
existing Function node with a "syntax" attribute added)
At *compile* time (note, *not* function definition time), the
registered AST transformation hook would be invoked and would replace
that DSL node with "standard" AST nodes.
For example, depending on the design of the DSL and its support code,
the above example might be equivalent to:
@dsl.sql.escape_and_validate_args
def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
args = dict(name=name, dob=dob)
query = "select address\nfrom people\nwhere name = {name} and
dob = {dob}"
return dsl.sql.cursor(query, args)
As a simpler example, consider something like:
def f() from all_nonlocal:
x += 1
y -= 2
That was translated at compile time into:
def f():
nonlocal x, y
x += 1
y -= 2
My first pass at a rough protocol for the AST transformers suggests
they would only need two methods:
get_cookie() - Magic cookie to add to PYC files containing instances
of the DSL (allows recompilation to be forced if the DSL is updated)
transform_AST(node) - a DSL() node is passed in, expected to return
an AST containing no DSL nodes (SyntaxError if one is found)
Attempts to use an unregistered DSL would trigger SyntaxError
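
To make the proposed two-method protocol concrete, here is a rough sketch of
what a transformer for the "all_nonlocal" example might look like. Both the
DSL node type and ast.register_dsl() are hypothetical parts of this
proposal, not existing APIs, so this is illustration only:

    import ast

    class AllNonlocalTransform:
        """Hypothetical transformer implementing the proposed protocol."""

        def get_cookie(self):
            # Any value that changes when the DSL implementation changes,
            # so stale .pyc files can be detected and recompiled.
            return "all_nonlocal-1"

        def transform_AST(self, node):
            # 'node' would be the proposed DSL() node; the hook must return
            # standard AST nodes only.
            names = sorted({n.id for stmt in node.body
                            for n in ast.walk(stmt)
                            if isinstance(n, ast.Name)})
            new_body = [ast.Nonlocal(names=names)] + list(node.body)
            return ast.FunctionDef(name=node.name, args=node.args,
                                   body=new_body, decorator_list=[],
                                   returns=None)

    # Registration as proposed above (not an existing API):
    # ast.register_dsl("all_nonlocal", AllNonlocalTransform())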
So there you are, that's the crazy idea. The stoning of the heretic
may now commence :)
Where this idea came from was the various discussions about "make
statement" style constructs and a conversation I had with Eric Snow at
Pycon about function definition time really being *too late* to do
anything particularly interesting that couldn't already be handled
better in other ways. Some tricks Dave Malcolm had done to support
Python level manipulation of the AST during compilation also played a
big part, as did Eugene Toder's efforts to add an AST optimisation
step to the compilation process.
Cheers,
Nick.
--
Nick Coghlan | [email protected] | Brisbane, Australia
[Python-Dev] Ack, wrong list
Sorry, my last mail was meant to go to python-ideas, not python-dev (and
the gmail/mailman disagreement means I can't easily reply to it). Reply to
the version on python-ideas please, not the version on here.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On Fri, Apr 8, 2011 at 12:18 PM, Stefan Behnel wrote:
> Maciej Fijalkowski, 08.04.2011 11:41:
>>
>> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel
>> wrote:
>>>
>>> Jesse Noller, 07.04.2011 22:28:
On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>
> Hi Daniel,
> Thanks for putting this together. I am a huge supporter of
> benchmarking
> efforts. My brief comment is below.
>
> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>
>> 1. Definition of the benchmark suite. This will entail contacting
>> developers of Python implementations (CPython, PyPy, IronPython and
>> Jython), via discussion on the appropriate mailing lists. This might
>> be achievable as part of this proposal.
>>
>
> If you are reaching out to other projects at this stage, I think you
> should
> also be in touch with the Cython people (even if its 'implementation'
> sits on top of CPython).
> As a scientist/engineer what I care about is how Cython benchmarks to
> CPython. I believe that they have some ideas on benchmarking and have
> also explored this space. Their inclusion would be helpful to me
> thinking
> this GSoC successful at the end of the day (summer).
> Thanks for your consideration.
> Be Well
> Anthony
Right now, we are talking about building "speed.python.org" to test
the speed of python interpreters, over time, and alongside one another
- cython *is not* an interpreter.
>>>
>>> Would you also want to exclude Psyco then? It clearly does not qualify as
>>> a
>>> Python interpreter.
>>
>> Why not? it does run those benchmarks just fine.
>
> Sure.
>
>
Cython is out of scope for this.
>>>
>>> Why? It should be easy to integrate Cython using pyximport. Basically,
>>> all
>>> you have to do is register the pyximport module as an import hook. Cython
>>> will then try to compile the imported Python modules and fall back to the
>>> normal .py file import if the compilation fails for some reason.
>>
>> then it's fine to include it. we can even include it now in
>> speed.pypy.org that way. would it compile django?
>
> Never tried. Likely not completely, but surely some major parts of it.
> That's the beauty of it - it just falls back to CPython. :) If we're lucky,
> it will manage to compile some performance critical parts without
> modifications. In any case, it'll be trying to compile each module.
>
Ok, sure let's try.
>
>>> So, once CPython is up and running in the benchmark test, adding Cython
>>> should be as easy as copying the configuration, installing Cython and
>>> adding
>>> two lines to site.py.
>>
>> can you provide a simple command line tool for that? I want
>> essentially to run ./cython-importing-stuff some-file.py
>
> You can try
>
> python -c 'import pyximport; \
> pyximport.install(pyimport=True); \
> exec("somefile.py")'
I think you meant execfile. Also, how do I make sure that somefile.py
is also compiled?
>
> You may want to configure the output directory for the binary modules,
> though, see
>
> https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343
>
> Please also take care to provide suitable gcc CFLAGS, e.g. "-O3
> -march=native" etc.
>
>
>>> Obviously, we'd have to integrate a build of the latest Cython
>>> development
>>> sources as well, but it's not like installing a distutils enabled Python
>>> package from sources is so hard that it pushes Cython out of scope for
>>> this
>>> GSoC.
>>
>> no, that's fine. My main concern is - will cython run those
>> benchmarks?
>
> In the worst case, they will run at CPython speed with uncompiled modules.
ok, fine.
>
>
>> and will you complain if we don't provide a custom cython
>> hacks? (like providing extra type information)
>
> I don't consider providing extra type information a hack. Remember that they
> are only used for additional speed-ups in cases where the author is smarter
> than the compiler. It will work just fine without them.
We can agree to disagree on this one.
>
> Stefan
>
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
Maciej Fijalkowski, 08.04.2011 13:37:
On Fri, Apr 8, 2011 at 12:18 PM, Stefan Behnel wrote:
So, once CPython is up and running in the benchmark test, adding Cython
should be as easy as copying the configuration, installing Cython and
adding
two lines to site.py.
can you provide a simple command line tool for that? I want
essentially to run ./cython-importing-stuff some-file.py
You can try
python -c 'import pyximport; \
pyximport.install(pyimport=True); \
exec("somefile.py")'
I think you meant execfile.
Ah, yes. Untested. ;)
Also, how do I make sure that somefile.py
is also compiled?
It's not getting compiled because it's not getting imported. Maybe we
should discuss the exact setup for speed.pypy.org in private e-mail.
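
One possible (untested) way around that would be a tiny driver that imports
the benchmark as a module, so pyximport gets a chance to compile it too.
The module name and the main() entry point below are hypothetical:

    # run_compiled.py - hypothetical driver script
    import pyximport
    pyximport.install(pyimport=True)   # compile imported .py modules where possible

    import somefile      # now imported, so pyximport can try to compile it
    somefile.main()      # assumes the benchmark module exposes a main() function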
Stefan
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On 04/07/2011 07:52 PM, Michael Foord wrote:
> Personally I think the Gsoc project should just take the pypy suite and
> run with that - bikeshedding about what benchmarks to include is going
> to make it hard to make progress. We can have fun with that discussion
> once we have the infrastructure and *some* good benchmarks in place (and
> the pypy ones are good ones).
>
> So I'm still with Jesse on this one. If there is any "discussion phase"
> as part of the Gsoc project it should be very strictly bounded by time.

Somehow I missed seeing '[GSoC]' in the subject line (the blizzard of
notification messages to the various GSoC specific lists must've
snow-blinded me :). I'm fine with leaving Cython out-of-scope for the GSoC
effort, just not for perf.python.org as a whole.

Tres.

--
===
Tres Seaver          +1 540-429-0999          [email protected]
Palladion Software   "Excellence by Design"   http://palladion.com
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On Fri, Apr 8, 2011 at 8:51 AM, Tres Seaver wrote:
> On 04/07/2011 07:52 PM, Michael Foord wrote:
>
>> Personally I think the Gsoc project should just take the pypy suite and
>> run with that - bikeshedding about what benchmarks to include is going
>> to make it hard to make progress. We can have fun with that discussion
>> once we have the infrastructure and *some* good benchmarks in place (and
>> the pypy ones are good ones).
>>
>> So I'm still with Jesse on this one. If there is any "discussion phase"
>> as part of the Gsoc project it should be very strictly bounded by time.
>
> Somehow I missed seeing '[GSoC]' in the subject line (the blizzard of
> notification messages to the various GSoC specific lists must've
> snow-blinded me :). I'm fine with leaving Cython out-of-scope for the
> GSoC effort, just not for perf.python.org as a whole.

We don't need a massive outstanding todo list for perf.python.org - we need
to get the current speed.pypy.org stuff made more generic for the purposes
we're aiming for and to get the hardware (on my plate) first. Then we can
talk about expanding it. I'm just begging that we not add a bunch of stuff
to a todo list for something that doesn't exist right now.

jesse
Re: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
On 8 April 2011 12:29, Nick Coghlan wrote:
> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).
>
>
The essence of the proposal is to allow arbitrary syntax within "standard
python files". I don't think it stands much of a chance in core.
It would be an awesome tool for experimenting with new syntax and dsls
though. :-)
Michael
> Anyway, without further ado, I present:
>
> AST Transformation Hooks for Domain Specific Languages
> ==
>
> Consider:
>
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> # In a module using that DSL
> import dsl.sql
> def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
>select address
>from people
>where name = {name} and dob = {dob}
>
>
> Suppose that the standard AST for the latter looked something like:
>
>DSL(syntax="dsl.sql",
>name='lookup_address',
>args=arguments(
>args=[arg(arg='name',
> annotation=),
> arg(arg='dob',
> annotation=)],
>vararg=None, varargannotation=None,
>kwonlyargs=[], kwarg=None, kwargannotation=None,
>defaults=[], kw_defaults=[]),
>body=[Expr(value=Str(s='select address\nfrom people\nwhere
> name = {name} and dob = {dob}'))],
>decorator_list=[],
>returns=None)
>
> (For those not familiar with the AST, the above is actually just the
> existing Function node with a "syntax" attribute added)
>
> At *compile* time (note, *not* function definition time), the
> registered AST transformation hook would be invoked and would replace
> that DSL node with "standard" AST nodes.
>
> For example, depending on the design of the DSL and its support code,
> the above example might be equivalent to:
>
>@dsl.sql.escape_and_validate_args
>def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
> args = dict(name=name, dob=dob)
> query = "select address\nfrom people\nwhere name = {name} and
> dob = {dob}"
> return dsl.sql.cursor(query, args)
>
>
> As a simpler example, consider something like:
>
>def f() from all_nonlocal:
>x += 1
>y -= 2
>
> That was translated at compile time into:
>
>def f():
>nonlocal x, y
>x += 1
>y -= 2
>
> My first pass at a rough protocol for the AST transformers suggests
> they would only need two methods:
>
> get_cookie() - Magic cookie to add to PYC files containing instances
> of the DSL (allows recompilation to be forced if the DSL is updated)
> transform_AST(node) - a DSL() node is passed in, expected to return
> an AST containing no DSL nodes (SyntaxError if one is found)
>
> Attempts to use an unregistered DSL would trigger SyntaxError
>
> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)
>
> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | [email protected] | Brisbane, Australia
--
http://www.voidspace.org.uk/
May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
>>> and will you complain if we don't provide a custom cython
>>> hacks? (like providing extra type information)
>>
>> I don't consider providing extra type information a hack. Remember that
>> they are only used for additional speed-ups in cases where the author is
>> smarter than the compiler. It will work just fine without them.
>
> We can agree to disagree on this one.

The way to think about this is really that Cython is its own (creole)
language which has major intersections with the Python language. The goal
of the Cython project is to have Python be a strict subset of Cython.
Therefore constructions such as type declarations are really
self-consistent Cython.

Because it aims to be a superset of Python, Cython is more like a two-stage
Python compiler (compile to C, then compile to assembly) than an
interpreter. For the purposes of benchmarking, the distinction between
compiler and interpreter is, as someone said above, 'dubious'.

You wouldn't want to add all of the type info or do anything in Cython that
is *not* in Python here. That would defeat the purpose of benchmarking
where you absolutely have to compare apples to apples.

That said, despite the abstract, it seems that points 1 and 2 won't
actually be present in this GSoC. We are not defining benchmarks. This
project is more about porting to Python 3. Thus, I agree with Jesse, and we
shouldn't heap on more TODOs than already exist. As people have mentioned
here, it will be easy to add Cython support once the system is up and
running.

Be Well
Anthony
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
On 4/8/2011 11:32 AM, Anthony Scopatz wrote:
> an interpreter. For the purposes of benchmarking, the distinction between
> compiler and interpreter is, as someone said above, 'dubious'.

I agree. We should be comparing 'Python execution systems'. My impression
is that some of what Cython does in terms of code analysis is similar to
what PyPy does, perhaps in the jit phase. So comparing PyPy and
CPython+Cython on standard Python code is a fair and interesting
comparison.

> You wouldn't want to add all of the type info or do anything in Cython
> that is *not* in Python here. That would defeat the purpose of
> benchmarking where you absolutely have to compare apples to apples.

If Cython people want to modify benchmarks to show what speedup one can get
with what effort, that is a separate issue.

--
Terry Jan Reedy
[Python-Dev] Summary of Python tracker Issues
ACTIVITY SUMMARY (2011-04-01 - 2011-04-08)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    2741 ( +8)
  closed 20845 (+58)
  total  23586 (+66)

Open issues with patches: 1183

Issues opened (41)
==================

#5673: Add timeout option to subprocess.Popen
  http://bugs.python.org/issue5673  reopened by haypo
#11277: test_zlib.test_big_buffer crashes under BSD (Mac OS X and Free
  http://bugs.python.org/issue11277  reopened by haypo
#11492: email.header.Header doesn't fold headers correctly
  http://bugs.python.org/issue11492  reopened by kitterma
#11740: difflib html diff takes extremely long
  http://bugs.python.org/issue11740  opened by [email protected]
#11743: Rewrite PipeConnection and Connection in pure Python
  http://bugs.python.org/issue11743  opened by pitrou
#11747: unified_diff function product incorrect range information
  http://bugs.python.org/issue11747  opened by jan.koprowski
#11748: test_ftplib failure in test for source_address
  http://bugs.python.org/issue11748  opened by pitrou
#11750: Mutualize win32 functions
  http://bugs.python.org/issue11750  opened by pitrou
#11751: Increase distutils.filelist test coverage
  http://bugs.python.org/issue11751  opened by jlove
#11754: Changed test to check calculated constants in test_string.py
  http://bugs.python.org/issue11754  opened by Lynne.Qu
#11757: test_subprocess.test_communicate_timeout_large_ouput failure o
  http://bugs.python.org/issue11757  opened by pitrou
#11758: increase xml.dom.minidom test coverage
  http://bugs.python.org/issue11758  opened by mdorn
#11762: Ast doc: warning and version number
  http://bugs.python.org/issue11762  opened by terry.reedy
#11763: assertEqual memory issues with large text inputs
  http://bugs.python.org/issue11763  opened by michael.foord
#11764: inspect.getattr_static code execution w/ class body as non dic
  http://bugs.python.org/issue11764  opened by michael.foord
#11767: Maildir iterator leaks file descriptors by default
  http://bugs.python.org/issue11767  opened by moyix
#11768: test_signals() of test_threadsignals failure on Mac OS X
  http://bugs.python.org/issue11768  opened by haypo
#11769: test_notify() of test_threading hang on "x86 XP-4 3.x":
  http://bugs.python.org/issue11769  opened by haypo
#11770: inspect.dir_static
  http://bugs.python.org/issue11770  opened by michael.foord
#11772: email header wrapping edge case failure
  http://bugs.python.org/issue11772  opened by r.david.murray
#11776: types.MethodType() params and usage is not documented
  http://bugs.python.org/issue11776  opened by techtonik
#11779: test_mmap timeout (30 min) on "AMD64 Snow Leopard 3.x" buildbo
  http://bugs.python.org/issue11779  opened by haypo
#11780: email.encoders are broken
  http://bugs.python.org/issue11780  opened by sdaoden
#11781: test/test_email directory does not get installed by 'make inst
  http://bugs.python.org/issue11781  opened by sdaoden
#11782: email.generator.Generator.flatten() fails
  http://bugs.python.org/issue11782  opened by sdaoden
#11783: email parseaddr and formataddr should be IDNA aware
  http://bugs.python.org/issue11783  opened by r.david.murray
#11784: multiprocessing.Process.join: timeout argument doesn't specify
  http://bugs.python.org/issue11784  opened by pyfex
#11785: email subpackages documentation problems
  http://bugs.python.org/issue11785  opened by ysj.ray
#11786: ConfigParser.[Raw]ConfigParser optionxform()
  http://bugs.python.org/issue11786  opened by Adam.Groszer
#11787: File handle leak in TarFile lib
  http://bugs.python.org/issue11787  opened by shahpr
#11789: Extend upon metaclass/type class documentation, here: zope.int
  http://bugs.python.org/issue11789  opened by carsten.klein
#11790: transient failure in test_multiprocessing.WithProcessesTestCon
  http://bugs.python.org/issue11790  opened by pitrou
#11792: asyncore module print to stdout
  http://bugs.python.org/issue11792  opened by kaplun
#11795: Better core dev guidelines for committing submitted patches
  http://bugs.python.org/issue11795  opened by ncoghlan
#11796: list and generator expressions in a class definition fail if e
  http://bugs.python.org/issue11796  opened by mjs0
#11797: 2to3 does not correct "reload"
  http://bugs.python.org/issue11797  opened by tebeka
#11798: Test cases not garbage collected after run
  http://bugs.python.org/issue11798  opened by fabioz
#11799: urllib HTTP authentication behavior with unrecognized auth met
  http://bugs.python.org/issue11799  opened by ubershmekel
#11800: regrtest --timeout: apply the timeout on a function, not on th
  http://bugs.python.org/issue11800  opened by haypo
#11802: filecmp.cmp needs a documented way to clear cache
  http://bugs.python.org/issue11802  opened by lopgok
#11804: expat parser not xml 1.1 (breaks xmlrpclib)
  http://bugs.python.org/issue11804  opened by xrg

Most recent 15 issues with no replies (15)
==========================================
Re: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715: Build extension modules on multiarch Debian and Ubuntu by
Hi,

> http://hg.python.org/cpython/rev/7582a78f573b
> branch: 3.1
> user: Barry Warsaw
> summary:
> Issue 11715: Build extension modules on multiarch Debian and Ubuntu by
> extending search paths to include multiarch directories.
>
> diff --git a/setup.py b/setup.py
>
> +if not os.path.exists(self.build_temp):
> +    os.makedirs(self.build_temp)

Isn’t there a possible raise condition here? I think it’s recommended
to follow EAFP for mkdir and makedirs.

> +ret = os.system(
> +    'dpkg-architecture -qDEB_HOST_MULTIARCH > %s 2> /dev/null' %
> +    tmpfile)
> +try:
> +    if ret >> 8 == 0:
> +        with open(tmpfile) as fp:
> +            multiarch_path_component = fp.readline().strip()
> +        add_dir_to_list(self.compiler.library_dirs,
> +                        '/usr/lib/' + multiarch_path_component)
> +        add_dir_to_list(self.compiler.include_dirs,
> +                        '/usr/include/' + multiarch_path_component)
> +finally:
> +    os.unlink(tmpfile)

Is there a benefit in creating and reading a file rather than catching
stdout?

Regards
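
For reference, the stdout-capturing alternative being suggested could look
roughly like the following. This is an illustrative sketch, not the
committed code, and it still assumes dpkg-architecture is available:

    import subprocess

    def get_multiarch_component():
        # Capture dpkg-architecture's stdout directly instead of going
        # through a temporary file; any failure means "no multiarch".
        try:
            proc = subprocess.Popen(
                ['dpkg-architecture', '-qDEB_HOST_MULTIARCH'],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, _ = proc.communicate()
        except OSError:
            return ''
        if proc.returncode != 0:
            return ''
        return out.decode('ascii', 'ignore').strip()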
Re: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
Hi Nick, all,
Just for the record, I would point to Mython (mython.org) as an
existing provider of this capability. I've already added an AST node
called "Quote" that functions like your DSL node, along with well
defined lexical, concrete syntax, and compile-time properties.
I have a mostly functioning front end for 2.X that does these
expansions (MyFront), and I'm waiting for a stable Mercurial migration
(I've been only lightly lurking on python-dev, so if this already
exists, someone should ping me) so I can publish a 3.X branch that
will get rid of a lot of the code I have to maintain by just building
on top of CPython (CMython? *smirk*).
It looks like you have some ideas about import semantics and managing
compile-time dependencies. I would invite further elaboration on the
mython-dev Google group. I currently have two different mechanisms
implemented, one via import hooks and the other by forced global
recompilation, but neither of these satisfies because you are imposing
a compile-time concept into a thoroughly dynamic language.
...and yes, compile-time metaprogramming is insane.
Regards,
-Jon
http://mython.org/ - Make Python yours.
On Fri, Apr 8, 2011 at 6:29 AM, Nick Coghlan wrote:
> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).
>
> Anyway, without further ado, I present:
>
> AST Transformation Hooks for Domain Specific Languages
> ==
>
> Consider:
>
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> # In a module using that DSL
> import dsl.sql
> def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
> select address
> from people
> where name = {name} and dob = {dob}
>
>
> Suppose that the standard AST for the latter looked something like:
>
> DSL(syntax="dsl.sql",
> name='lookup_address',
> args=arguments(
> args=[arg(arg='name',
> annotation=),
> arg(arg='dob',
> annotation=)],
> vararg=None, varargannotation=None,
> kwonlyargs=[], kwarg=None, kwargannotation=None,
> defaults=[], kw_defaults=[]),
> body=[Expr(value=Str(s='select address\nfrom people\nwhere
> name = {name} and dob = {dob}'))],
> decorator_list=[],
> returns=None)
>
> (For those not familiar with the AST, the above is actually just the
> existing Function node with a "syntax" attribute added)
>
> At *compile* time (note, *not* function definition time), the
> registered AST transformation hook would be invoked and would replace
> that DSL node with "standard" AST nodes.
>
> For example, depending on the design of the DSL and its support code,
> the above example might be equivalent to:
>
> @dsl.sql.escape_and_validate_args
> def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
> args = dict(name=name, dob=dob)
> query = "select address\nfrom people\nwhere name = {name} and
> dob = {dob}"
> return dsl.sql.cursor(query, args)
>
>
> As a simpler example, consider something like:
>
> def f() from all_nonlocal:
> x += 1
> y -= 2
>
> That was translated at compile time into:
>
> def f():
> nonlocal x, y
> x += 1
> y -= 2
>
> My first pass at a rough protocol for the AST transformers suggests
> they would only need two methods:
>
> get_cookie() - Magic cookie to add to PYC files containing instances
> of the DSL (allows recompilation to be forced if the DSL is updated)
> transform_AST(node) - a DSL() node is passed in, expected to return
> an AST containing no DSL nodes (SyntaxError if one is found)
>
> Attempts to use an unregistered DSL would trigger SyntaxError
>
> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)
>
> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
On Fri, 2011-04-08 at 21:29 +1000, Nick Coghlan wrote:
> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).
I too am leaning towards the latter (I'm afraid my first thought was to
check the date on the email); as Michael said, I too don't think it
stands much of a chance in core.
> Anyway, without further ado, I present:
>
> AST Transformation Hooks for Domain Specific Languages
> ==
This reminds me a lot of Mython:
http://mython.org/
If you haven't seen it, it's well worth a look.
My favourite use case for this kind of thing is having the ability to
embed shell pipelines into Python code, by transforming bash-style
syntax into subprocess calls (it's almost possible to do all this in
regular Python by overloading the | and > operators, but not quite).
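
For context, the subprocess translation such a DSL would target is roughly
the following sketch, for a pipeline like "ps aux | grep python > out.txt"
(command names and the output file are just illustrative):

    import subprocess

    # Rough subprocess equivalent of:  ps aux | grep python > out.txt
    with open("out.txt", "w") as out:
        ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
        grep = subprocess.Popen(["grep", "python"], stdin=ps.stdout, stdout=out)
        ps.stdout.close()   # let ps receive SIGPIPE if grep exits first
        grep.wait()
        ps.wait()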
> Consider:
>
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
Where is this registered? Do you have to import this "other module"
before importing the module using "dsl.sql" ? It sounds like this is
global state for the interpreter.
> # In a module using that DSL
How is this usage expressed? via the following line?
> import dsl.sql
I see the "import dsl.sql" here, but surely you have to somehow process
the "import" in order to handle the rest of the parsing.
This is reminiscent of the "from __future__ " specialcasing in the
parser. But from my understanding of CPython's Python/future.c, you
already have an AST at that point (mod_ty, from Python/compile.c).
There seems to be a chicken-and-egg problem with this proposal.
Though another syntax might read:
from __dsl__ import sql
to perhaps emphasize that something magical is about to happen.
[...snip example of usage of a DSL, and the AST it gets parsed to...]
Where and how would the bytes of the file using the DSL get converted to
an in-memory tree representation?
IIRC, manipulating AST nodes in CPython requires some care: the parser
has its own allocator (PyArena), and the entities it allocates have a
shared lifetime that ends when PyArena_Free occurs.
> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)
Or, less violently, take it to python-ideas? (though I'm not subscribed
there, fwiw, make of that what you will)
One "exciting" aspect of this is that if someone changes the DSL file,
the meaning of all of your code changes from under you. This may or may
not be a sane approach to software development :)
(I also worry what this means e.g. for people writing text editors,
syntax highlighters, etc; insert usual Alan Perlis quote about syntactic
sugar causing cancer of the semicolon)
Also, insert usual comments about the need to think about how
non-CPython implementations of Python would go about implementing such
ideas.
> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.
Like I said earlier, have a look at Mython
Hope this is helpful
Dave
Re: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
On Fri, Apr 8, 2011 at 10:50 AM, David Malcolm wrote:
> On Fri, 2011-04-08 at 21:29 +1000, Nick Coghlan wrote:
> > A few odds and ends from recent discussions finally clicked into
> > something potentially interesting earlier this evening. Or possibly
> > just something insane. I'm not quite decided on that point as yet (but
> > leaning towards the latter).
>
> I too am leaning towards the latter (I'm afraid my first thought was to
> check the date on the email); as Michael said, I too don't think it
> stands much of a chance in core.
>
> > Anyway, without further ado, I present:
> >
> > AST Transformation Hooks for Domain Specific Languages
> > ==
>
> This reminds me a lot of Mython:
> http://mython.org/
> If you haven't seen it, it's well worth a look.
>
> My favourite use case for this kind of thing is having the ability to
> embed shell pipelines into Python code, by transforming bash-style
> syntax into subprocess calls (it's almost possible to do all this in
> regular Python by overloading the | and > operators, but not quite).
>
> > Consider:
> >
> > # In some other module
> > ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> Where is this registered? Do you have to import this "other module"
> before importing the module using "dsl.sql" ? It sounds like this is
> global state for the interpreter.
>
> > # In a module using that DSL
>
> How is this usage expressed? via the following line?
>
> > import dsl.sql
>
> I see the "import dsl.sql" here, but surely you have to somehow process
> the "import" in order to handle the rest of the parsing.
>
> This is reminiscent of the "from __future__ " specialcasing in the
> parser. But from my understanding of CPython's Python/future.c, you
> already have an AST at that point (mod_ty, from Python/compile.c).
> There seems to be a chicken-and-egg problem with this proposal.
>
> Though another syntax might read:
>
> from __dsl__ import sql
>
> to perhaps emphasize that something magical is about to happen.
>
> [...snip example of usage of a DSL, and the AST it gets parsed to...]
>
> Where and how would the bytes of the file using the DSL get converted to
> an in-memory tree representation?
>
> IIRC, manipulating AST nodes in CPython requires some care: the parser
> has its own allocator (PyArena), and the entities it allocates have a
> shared lifetime that ends when PyArena_Free occurs.
>
> > So there you are, that's the crazy idea. The stoning of the heretic
> > may now commence :)
>
> Or, less violently, take it to python-ideas? (though I'm not subscribed
> there, fwiw, make of that what you will)
>
> One "exciting" aspect of this is that if someone changes the DSL file,
> the meaning of all of your code changes from under you. This may or may
> not be a sane approach to software development :)
>
> (I also worry what this means e.g. for people writing text editors,
> syntax highlighters, etc; insert usual Alan Perlis quote about syntactic
> sugar causing cancer of the semicolon)
>
> Also, insert usual comments about the need to think about how
> non-CPython implementations of Python would go about implementing such
> ideas.
>
> > Where this idea came from was the various discussions about "make
> > statement" style constructs and a conversation I had with Eric Snow at
> > Pycon about function definition time really being *too late* to do
> > anything particularly interesting that couldn't already be handled
> > better in other ways. Some tricks Dave Malcolm had done to support
> > Python level manipulation of the AST during compilation also played a
> > big part, as did Eugene Toder's efforts to add an AST optimisation
> > step to the compilation process.
>
> Like I said earlier, have a look at Mython
>
> Hope this is helpful
> Dave
>
Someone brought up some of the same stuff in the python-ideas thread and
Nick responded there, particularly about the import question.
Re: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715: Build extension modules on multiarch Debian and Ubuntu by
On Fri, 08 Apr 2011 18:10:35 +0200 Éric Araujo wrote:
> Hi,
>
>> http://hg.python.org/cpython/rev/7582a78f573b
>> branch: 3.1
>> user: Barry Warsaw
>> summary:
>> Issue 11715: Build extension modules on multiarch Debian and Ubuntu by
>> extending search paths to include multiarch directories.
>>
>> diff --git a/setup.py b/setup.py
>>
>> +if not os.path.exists(self.build_temp):
>> +    os.makedirs(self.build_temp)
>
> Isn’t there a possible raise condition here? I think it’s recommended
> to follow EAFP for mkdir and makedirs.

Since this is setup.py, I don't think we care.
(I assume you meant "race condition", not "raise condition")

Regards

Antoine.
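
For readers unfamiliar with the EAFP point being made, the race-free
variant would look something like this sketch (not part of the committed
patch; the helper name is made up):

    import errno
    import os

    def ensure_dir(path):
        # EAFP: attempt the creation and tolerate "already exists", instead
        # of checking os.path.exists() first (which another process can race).
        try:
            os.makedirs(path)
        except OSError as exc:
            if exc.errno != errno.EEXIST:
                raise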
Re: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
I talked to Fijal about my project last night. The result is that basically
the project as-is is not that interesting, because the means to execute the
benchmarks on multiple interpreters are currently missing. Another point we
talked about was that porting the benchmarks would not be very useful, as
the interesting ones all have dependencies which have not (yet) been ported
to Python 3.x.

The first point, execution on multiple interpreters, has to be solved or
this project is pretty much pointless, therefore I've changed my proposal
to include just that. However, the proposal still includes porting the
benchmarks, although this is planned to happen after the development of an
application able to run the benchmarks on multiple interpreters. The reason
for this is that even though the ported benchmarks might not prove to be
that interesting, the basic work for porting using 2to3 would be there,
making it easier to port benchmarks in the future as the dependencies
become available under Python 3.x. I plan to do that after implementing the
previously mentioned application, putting the application at higher
priority. This way, should I not be able to complete all my goals, it is
unlikely that anything but the porting will suffer, and the project would
still produce useful results during the GSoC.

Anyway, here is the current, updated proposal:

Abstract
========

As of now there are several benchmark suites used by Python
implementations: PyPy uses the benchmarks[1] developed for the Unladen
Swallow[2] project as well as several other benchmarks they implemented on
their own, while CPython[3] uses the Unladen Swallow benchmarks and several
"crap benchmarks used for historical reasons"[4]. This makes comparisons
unnecessarily hard and causes confusion.

As a solution to this problem I propose merging the existing benchmarks -
at least those considered worth having - into a single benchmark suite
which can be shared by all implementations and ported to Python 3.x.

Another problem reported by Maciej Fijalkowski is that currently the way
benchmarks are executed by PyPy is more or less a hack. Work will have to
be done to allow execution of the benchmarks on different interpreters and
their most recent versions (from their respective repositories). The
application for this should also be able to upload the results to a
codespeed instance such as http://speed.pypy.org.

Milestones
==========

The project can be divided into several milestones:

1. Definition of the benchmark suite. This will entail contacting
   developers of Python implementations (CPython, PyPy, IronPython and
   Jython), via discussion on the appropriate mailing lists. This might be
   achievable as part of this proposal.

2. Merging the benchmarks. Based on the prior agreed-upon definition, the
   benchmarks will be merged into a single suite.

3. Implementing a system to run the benchmarks. In order to execute the
   benchmarks it will be necessary to have a configurable application
   which downloads the interpreters from their repositories, builds them
   and executes the benchmarks with them (see the sketch after this list).

4. Porting the suite to Python 3.x. The suite will be ported to 3.x using
   2to3[5], as far as possible. The usage of 2to3 will make it easier to
   make changes to the repository, especially for those still focusing on
   2.x. It is to be expected that some benchmarks cannot be ported due to
   dependencies which are not available on Python 3.x. Those will be
   ignored by this project, to be ported at a later time when the
   necessary requirements are met.
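
To make milestone 3 concrete, here is a minimal runnable sketch of the core
idea. The interpreter paths and the benchmark script name are placeholders;
the real runner would also fetch and build the interpreters and upload the
results to a codespeed instance:

    import subprocess
    import time

    # Placeholder interpreter locations; the real tool would build these
    # from their repositories first.
    INTERPRETERS = ['/usr/bin/python2.7', '/usr/local/bin/pypy', '/usr/bin/jython']

    def time_benchmark(python, script='bm_example.py'):
        """Run one benchmark script under one interpreter and time it."""
        start = time.time()
        subprocess.call([python, script])
        return time.time() - start

    if __name__ == '__main__':
        for python in INTERPRETERS:
            print('%s: %.2f s' % (python, time_benchmark(python)))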
Start of Program (May 24)
=========================

Before the coding (milestones 2 and 3) can begin, it is necessary to agree
upon a set of benchmarks everyone is happy with, as described.

Midterm Evaluation (July 12)
============================

During the midterm I want to merge the benchmarks and implement a way to
execute them.

Final Evaluation (Aug 16)
=========================

In this period the benchmark suite will be ported. If everything works out
perfectly I will even have some time left; if there are problems I have a
buffer here.

Implementation of the Benchmark Runner
======================================

In order to run the benchmarks I propose a simple application which can be
configured to download multiple interpreters, build them and execute the
benchmarks. The configuration could be similar to tox[6]; downloads of the
interpreters could be handled using anyvc[7]. For a site such as
http://speed.pypy.org a cronjob, buildbot or whatever else is preferred
could be set up to execute the application regularly.

Repository Handling
===================

The code for the project will be developed in a Mercurial[8] repository
hosted on Bitbucket[9]; both PyPy and CPython use Mercurial, and most
people in the Python community should be able to use it.

Probably Asked Questions
========================

Why not use one of the existing benchmark suites for porting?

The effort will be wasted if there is no good base to build upon, creating
a new be
Re: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
On 4/8/2011 1:14 PM, Jon Riehl wrote:
> I have a mostly functioning front end for 2.X that does these
> expansions (MyFront), and I'm waiting for a stable Mercurial migration

Done and in use over a month. http://hg.python.org/

Further discussion of this idea is on the python-ideas list. (The posting
to pydev was an accident.)

--
Terry Jan Reedy
Re: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715: Build extension modules on multiarch Debian and Ubuntu by
On Sat, Apr 9, 2011 at 3:40 AM, Antoine Pitrou wrote:
>> Isn’t there a possible raise condition here? I think it’s recommended
>> to follow EAFP for mkdir and makedirs.
>
> Since this is setup.py, I don't think we care.
> (I assume you meant "race condition", not "raise condition")

Indeed, the pre-check is OK here due to the fact that we control
"build_temp", so other processes shouldn't be creating a directory with the
same name.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
