[Python-Dev] Expose dictproxy through collections rather than the types module?

2012-04-21 Thread Nick Coghlan
The internal dictproxy class was recently exposed as types.MappingProxyType.

Since it's not very discoverable that way, would anyone object if I
moved things around so it was exposed as collections.MappingProxy
instead? The main benefit to doing so is to get it into the table of
specialised container types at the top of the collections module docs
[1].

[1] http://docs.python.org/dev/library/collections

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Expose dictproxy through collections rather than the types module?

2012-04-21 Thread Eric Snow
On Apr 21, 2012 7:11 AM, "Nick Coghlan"  wrote:
>
> The internal dictproxy class was recently exposed as
> types.MappingProxyType.
>
> Since it's not very discoverable that way, would anyone object if I
> moved things around so it was exposed as collections.MappingProxy
> instead? The main benefit to doing so is to get it into the table of
> specialised container types at the top of the collections module docs
> [1].

A discussion on this played out in http://bugs.python.org/issue14386.

-eric

>
> [1] http://docs.python.org/dev/library/collections
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Expose dictproxy through collections rather than the types module?

2012-04-21 Thread R. David Murray
On Sat, 21 Apr 2012 23:09:08 +1000, Nick Coghlan  wrote:
> Since it's not very discoverable that way, would anyone object if I
> moved things around so it was exposed as collections.MappingProxy
> instead? The main benefit to doing so is to get it into the table of
> specialised container types at the top of the collections module docs

The short answer is yes, someone would mind, which is why it is where it
is.  Read the ticket for more: http://bugs.python.org/issue14386.

--David


Re: [Python-Dev] Providing a mechanism for PEP 3115 compliant dynamic class creation

2012-04-21 Thread PJ Eby
(Sorry I'm so late to this discussion.)

I think that it's important to take into account the fact that PEP 3115
doesn't require namespaces to implement anything more than __setitem__ and
__getitem__ (with the latter not even needing to do anything but raise
KeyError).

Among other things, this means that .update() is right out as a
general-purpose solution to initializing a 3115-compatible class: you have
to loop and set items explicitly.  So, if we're providing helper functions,
there should be a helper that handles this common case by taking the
keywords (or perhaps an ordered sequence of pairs) and doing the looping
for you.
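
To make that concrete, here is a minimal sketch of the looping helper
(the name populate_namespace is purely illustrative, not anything
proposed on the tracker):

def populate_namespace(ns, entries):
    # 'entries' is an ordered sequence of (name, value) pairs; only
    # __setitem__ is assumed to exist on the namespace, per PEP 3115.
    for key, value in entries:
        ns[key] = value
    return ns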

Of course, once you're doing that, you might as well implement it by
passing a closure into __build_class__...

More below:

On Sun, Apr 15, 2012 at 7:48 AM, Nick Coghlan  wrote:

>
> Yup, I believe that was my main objection to exposing __build_class__
> directly. There's no obligation for implementations to build a
> throwaway function to evaluate a class body.
>

Thing is, though, if an implementation is dynamic enough to be capable of
supporting PEP 3115 *at all*  (not to mention standard exec/eval
semantics), it's going to have no problem mimicking __build_class__.

I mean, to implement PEP 3115 namespaces, you *have* to support exec/eval
with arbitrary namespaces.  From that, it's only the tiniest of steps to
wrapping that exec/eval in a function object to pass to __build_class__.
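
For anyone who hasn't tried it, a minimal sketch of that step: in 3.x,
exec() accepts any mapping object as the locals argument, so a namespace
implementing nothing beyond __setitem__/__getitem__ is enough
(LoggingNamespace is an illustrative name, not a real API):

class LoggingNamespace:
    # The bare minimum PEP 3115 requires of a namespace.
    def __init__(self):
        self.names = {}
    def __setitem__(self, key, value):
        print('defining', key)
        self.names[key] = value
    def __getitem__(self, key):
        raise KeyError(key)

ns = LoggingNamespace()
body = "def method(self):\n    return 42\n"
exec(body, {}, ns)      # the class body writes into ns via __setitem__
cls = type('Example', (), ns.names)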

Really, making that function is probably the *least* of the troubles an
alternate implementation is going to have with supporting PEP 3115 (by
far).  Hell, supporting *metaclasses* is the first big hurdle an alternate
implementation has to get over, followed by the exec/eval with arbitrary
namespaces.

Personally, I think __build_class__ should be explicitly exposed and
supported, if for no other reason than that it allows one to re-implement
old-style __metaclass__ support in 2.x modules that rely on it...  and I
have a lot of those to port.  (Which is why I also think the convenience
API for PEP 3115-compatible class creation should actually call
__build_class__ itself.  That way, if it's been replaced, then the replaced
semantics would *also* apply to dynamically-created classes.)

IOW, there'd be two functions: one that's basically "call __build_class__",
and the other that's "call __build_class__ with a convenience function to
inject these values into the prepared dictionary".

Having other convenience functions that reimplement lower-level features
than __build_class__ (like the prepare thing) sounds like a good idea, but
I think we should encourage common cases to just call something that keeps
the __setitem__ issue out of the way.

Thoughts?


Re: [Python-Dev] Providing a mechanism for PEP 3115 compliant dynamic class creation

2012-04-21 Thread Nick Coghlan
On Sun, Apr 22, 2012 at 12:55 AM, PJ Eby  wrote:
> (Sorry I'm so late to this discussion.)
>
> I think that it's important to take into account the fact that PEP 3115
> doesn't require namespaces to implement anything more than __setitem__ and
> __getitem__ (with the latter not even needing to do anything but raise
> KeyError).
>
> Among other things, this means that .update() is right out as a
> general-purpose solution to initializing a 3115-compatible class: you have
> to loop and set items explicitly.  So, if we're providing helper functions,
> there should be a helper that handles this common case by taking the
> keywords (or perhaps an ordered sequence of pairs) and doing the looping for
> you.
>
> Of course, once you're doing that, you might as well implement it by passing
> a closure into __build_class__...

Yeah, the "operator.build_class" in the tracker issue ended up with a
signature a whole lot like that of CPython's __build_class__. The main
difference is that the class body evaluation argument moves to the end
and becomes optional in order to bring the first two arguments in line
with those of type(). The signature ends up being effectively:

def build_class(name, bases=(), kwds={}, exec_body=None):
    ...

Accepting an optional callback that is given the prepared namespace as
an argument just makes a lot more sense than either exposing a
separate prepare function or using the existing __build_class__
signature directly (which was designed with the compiler in mind, not
humans).
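
To make the shape of the API concrete, a simplified sketch follows. It
is not the actual patch on the tracker; in particular, real code would
derive the metaclass from the bases rather than defaulting to type:

def build_class(name, bases=(), kwds=None, exec_body=None):
    kwds = dict(kwds or {})
    meta = kwds.pop('metaclass', type)  # simplified metaclass handling
    if hasattr(meta, '__prepare__'):
        ns = meta.__prepare__(name, bases, **kwds)
    else:
        ns = {}
    if exec_body is not None:
        exec_body(ns)   # the callback plays the role of the class body
    return meta(name, bases, ns, **kwds)

# e.g.:
Record = build_class('Record', (), exec_body=lambda ns: ns.update(x=1, y=2))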

> Personally, I think __build_class__ should be explicitly exposed and
> supported, if for no other reason than that it allows one to re-implement
> old-style __metaclass__ support in 2.x modules that rely on it...  and I
> have a lot of those to port.  (Which is why I also think the convenience API
> for PEP 3115-compatible class creation should actually call __build_class__
> itself.  That way, if it's been replaced, then the replaced semantics would
> *also* apply to dynamically-created classes.)

No, we already have one replaceable-per-module PITA like that (i.e.
__import__). I don't want to see us add another one.

> Having other convenience functions that reimplement lower-level features
> than __build_class__ (like the prepare thing) sounds like a good idea, but I
> think we should encourage common cases to just call something that keeps the
> __setitem__ issue out of the way.
>
> Thoughts?

Agreed on the use of a callback to avoid making too many assumptions
about the API provided by the prepared namespace.

Definitely *not* agreed on making __build_class__ part of the language
spec (or even officially supporting people that decide to replace it
with their own alternative in CPython).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Expose dictproxy through collections rather than the types module?

2012-04-21 Thread Nick Coghlan
On Sun, Apr 22, 2012 at 12:43 AM, R. David Murray  wrote:
> On Sat, 21 Apr 2012 23:09:08 +1000, Nick Coghlan  wrote:
>> Since it's not very discoverable that way, would anyone object if I
>> moved things around so it was exposed as collections.MappingProxy
>> instead? The main benefit to doing so is to get it into the table of
>> specialised container types at the top of the collections module docs
>
> The short answer is yes, someone would mind, which is why it is where it
> is.  Read the ticket for more: http://bugs.python.org/issue14386.

No worries. Someone was asking on python-ideas about creating an
immutable ChainMap instance, and I was going to suggest
collections.MappingProxy as the answer (for future versions, of
course). I was surprised to find it squirrelled away in the types
module instead of being somewhere anyone other than a core dev is
likely to find it.
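
For context, the kind of usage in question would look roughly like this
(a sketch using the name the class actually ended up with,
types.MappingProxyType, and assuming the proxy accepts arbitrary
mappings rather than only dicts):

import types
from collections import ChainMap

defaults = {'debug': False, 'verbose': False}
overrides = {'debug': True}
settings = types.MappingProxyType(ChainMap(overrides, defaults))

settings['debug']        # True; lookups fall through to the chained maps
# settings['debug'] = 0  # would raise TypeError: the proxy rejects mutation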

I personally suspect the lack of demand Raymond describes comes from
people just using mutable dicts and treating them as immutable by
convention - the same way Python programs may have "immutable by
convention" objects which don't actually go to the effort needed to
fully prevent mutation of internal state after creation. Some objects
would be more correct if they did that, but in practice, it's not
worth the hassle to make sure you've implemented it correctly.

Still, it doesn't bother me enough to try to persuade Raymond it's
sufficiently valuable to make it public through the collections module.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Barry Warsaw
On Apr 20, 2012, at 09:59 PM, Brett Cannon wrote:

>As I clean up Python/import.c and move much of its functionality into
>Lib/imp.py, I am about to run into some stuff that was not kept private to
>the file. Specifically, I have PyImport_GetMagicTag() and NullImporter_Type
>which I would like to chop out and move to Lib/imp.py.
>
>From my reading of PEP 384 that means I would need to at least deprecate
>PyImport_GetMagicTag(), correct (assuming I follow through with this; I
>might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
>am not sure if this is considered public or not)?

I'd have to go back into my archives for the discussions about the PEP, but my
recollection is that we intentionally made PyImport_GetMagicTag() a public API
method.  Thus no leading underscore.  It's a bug that it's not documented, but
OTOH, it's unlikely there are, or would be, many consumers for it.

Strictly speaking, I do think you need to deprecate the APIs.  I like Nick's
suggestion to make them C wrappers which just call back into Python.

-Barry


Re: [Python-Dev] Providing a mechanism for PEP 3115 compliant dynamic class creation

2012-04-21 Thread PJ Eby
On Sat, Apr 21, 2012 at 11:30 AM, Nick Coghlan  wrote:

> On Sun, Apr 22, 2012 at 12:55 AM, PJ Eby  wrote:
> > Personally, I think __build_class__ should be explicitly exposed and
> > supported, if for no other reason than that it allows one to re-implement
> > old-style __metaclass__ support in 2.x modules that rely on it...  and I
> > have a lot of those to port.  (Which is why I also think the convenience
> API
> > for PEP 3115-compatible class creation should actually call
> __build_class__
> > itself.  That way, if it's been replaced, then the replaced semantics
> would
> > *also* apply to dynamically-created classes.)
>
> No, we already have one replaceable-per-module PITA like that (i.e.
> __import__). I don't want to see us add another one.
>

Well, it's more like replacing than adding; __metaclass__ has this job in
2.x.  PEP 3115 removed what is (IMO) an important feature: the ability for
method-level decorators to affect the class, without needing user-specified
metaclasses or class decorators.

This is important for e.g. registering methods that are generic functions,
without requiring the addition of redundant metaclass or class-decorator
statements, and it's something that's possible in 2.x using __metaclass__,
but *not* possible under PEP 3115 without hooking __build_class__.
 Replacing builtins.__build_class__ allows the restoration of __metaclass__
support at the class level, which in turn allows porting 2.x code that uses
this facility.

To try to be more concrete, here's an example of sorts:

class Foo:
    @decorate(blah, fah)
    def widget(self, spam):
        ...

If @decorate needs access to the 'Foo' class object, this is not possible
under PEP 3115 without adding an explicit metaclass or class decorator to
support it.  And if you are using such method-level decorators from more
than one source, you will have to combine their class decorators or
metaclasses in some way to get this to work.  Further, if somebody forgets
to add the extra metaclass(es) and/or class decorator(s), things will
quietly break.

However, under 2.x, a straightforward solution is possible (well, to me
it's straightforward): method decorators can replace the class'
__metaclass__ and chain to the previous one, if it existed.  It's like
giving method decorators a chance to *also* act as class decorators.
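
A hedged sketch of that 2.x technique (CPython-specific, and simplified:
it ignores old-style classes and the module-level __metaclass__;
needs_class is an illustrative name, not a real decorator):

import sys

def needs_class(func):
    # Runs while the class body is still executing; in CPython 2 the
    # enclosing frame's f_locals is the class namespace being built.
    class_body = sys._getframe(1).f_locals
    previous = class_body.get('__metaclass__', type)

    def hook(name, bases, ns):
        cls = previous(name, bases, ns)
        print('%s saw class %s' % (func.__name__, cls.__name__))
        return cls

    class_body['__metaclass__'] = hook   # chains to any earlier hook
    return func

class Foo(object):
    @needs_class
    def widget(self, spam):
        pass
# prints: widget saw class Foo

Under PEP 3115 the __metaclass__ key is simply ignored, which is exactly
the gap being described here.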

Without some *other* way to do this in 3.x, I don't have much of a choice
besides replacing __build_class__ to accomplish this use case.


Re: [Python-Dev] OS X buildbots missing

2012-04-21 Thread David Bolen
Antoine Pitrou  writes:

> For the record, we don't have any stable OS X buildbots anymore.
> If you want to contribute a build slave (I hear we may have Apple
> employees reading this list), please take a look at
> http://wiki.python.org/moin/BuildBot

I realize it may not qualify for the official stable list as it's a
Tiger-based buildbot, but osx-tiger is an OS X buildbot that's still
chugging along quite nicely (including doing the daily DMG builds).

-- David



Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Martin v. Löwis
> From my reading of PEP 384 that means I would need to at least deprecate
> PyImport_getMagicTag(), correct (assuming I follow through with this; I
> might not bother)? 

All that PEP 384 gives you is that you MAY deprecate certain API
(namely, all API not guaranteed as stable). If an API is not in the
restricted set, this doesn't mean that it SHOULD be deprecated at
some point. So there is no need to deprecate anything.

OTOH, if the new implementation cannot readily support the
API anymore, it can certainly go away. If it was truly private
(i.e. _Py_*), it can go away immediately. Otherwise, it should be
deprecated-then-removed.

Regards,
Martin



Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Brett Cannon
On Sat, Apr 21, 2012 at 16:55, "Martin v. Löwis"  wrote:

> > From my reading of PEP 384 that means I would need to at least deprecate
> > PyImport_getMagicTag(), correct (assuming I follow through with this; I
> > might not bother)?
>
> All that PEP 384 gives you is that you MAY deprecate certain API
> (namely, all API not guaranteed as stable). If an API is not in the
> restricted set, this doesn't mean that it SHOULD be deprecated at
> some point. So there is no need to deprecate anything.
>

I meant "at least deprecate" as in "I can't just remove it from Python 3.3".

-Brett


>
> OTOH, if the new implementation cannot readily support the
> API anymore, it can certainly go away. If it was truly private
> (i.e. _Py_*), it can go away immediately. Otherwise, it should be
> deprecated-then-removed.
>
> Regards,
> Martin
>
>


Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Brett Cannon
On Sat, Apr 21, 2012 at 12:10, Barry Warsaw  wrote:

> On Apr 20, 2012, at 09:59 PM, Brett Cannon wrote:
>
> >As I clean up Python/import.c and move much of its functionality into
> >Lib/imp.py, I am about to run into some stuff that was not kept private to
> >the file. Specifically, I have PyImport_GetMagicTag() and
> NullImporter_Type
> >which I would like to chop out and move to Lib/imp.py.
> >
> >From my reading of PEP 384 that means I would need to at least deprecate
> >PyImport_getMagicTag(), correct (assuming I follow through with this; I
> >might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
> >am not sure if this is considered public or not)?
>
> I'd have to go back into my archives for the discussions about the PEP,
> but my
> recollection is that we intentionally made PyImport_GetMagicTag() a public
> API
> method.  Thus no leading underscore.  It's a bug that it's not documented,
> but
> OTOH, it's unlikely there are, or would be, many consumers for it.
>
> Strictly speaking, I do think you need to deprecate the APIs.  I like
> Nick's
> suggestion to make them C wrappers which just call back into Python.
>

That was my plan, but the amount of code it will take to wrap them is
making me not care. =) For PyImport_GetMagicTag() I would need to expose a
new attribute on sys or somewhere which specifies the VM name. For
PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to convert
a bytes object into a long which I am just flat-out not in the mood to
figure out how to do. And all of this will lead to the same amount of C
code as there currently is for what is already implemented, so I just don't
care anymore. =)

But I'm glad the clarifications are there about the stable ABI and how we
are handling it.


Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Eric Snow
On Sat, Apr 21, 2012 at 4:17 PM, Brett Cannon  wrote:
> On Sat, Apr 21, 2012 at 12:10, Barry Warsaw  wrote:
>> Strictly speaking, I do think you need to deprecate the APIs.  I like
>> Nick's
>> suggestion to make them C wrappers which just call back into Python.
>
>
> That was my plan, but the amount of code it will take to wrap them is making
> me not care. =) For PyImport_GetMagicTag() I would need to expose a new
> attribute on sys or somewhere which specifies the VM name. For
> PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to convert a
> bytes object into a long which I am just flat-out not in the mood to figure
> out how to do. And all of this will lead to the same amount of C code as
> there currently is for what is already implemented, so I just don't care
> anymore. =)

I thought I already (mostly) worked it all out in that patch on
issue13959.  I felt really good about the approach for the magic tag
and magic bytes.

Once find_module() and reload() are done in imp.py, I'm hoping to
follow up on a few things.  That includes the unresolved mailing list
thread about sys.implementation (or whatever it was), which will help
with the magic tag.  Anyway, I don't want to curtail the gutting of
import.c quite yet (as he hears cries of "bring out your dead!").

-eric


p.s.  I understand your sentiment here, considering that mothers are
often exhausted by childbirth and the importlib bootstrap was a big
baby.  You were in labor for, what, 6 years?  [There's an
analogy that could keep on giving. :) ]


[Python-Dev] isolating import state during tests

2012-04-21 Thread Eric Snow
It looks like the test suite accommodates a stable import state to
some extent, but would it be worth having a PEP-405-esque context
manager to help with this?  For example, something along these lines:


import site
import sys


class ImportState:
# sys.modules is part of the interpreter state, so
# repopulate (don't replace)
def __enter__(self):
self.path = sys.path[:]
self.modules = sys.modules.copy()
self.meta_path = sys.meta_path[:]
self.path_hooks = sys.path_hooks[:]
self.path_importer_cache = sys.path_importer_cache.copy()

sys.path = site.getsitepackages()
sys.modules.clear()
sys.meta_path = []
sys.path_hooks = []
sys.path_importer_cache = {}

def __exit__(self, *args, **kwargs):
sys.path = self.path
sys.modules.clear()
sys.modules.update(self.modules)
sys.meta_path = self.meta_path
sys.path_hooks = self.path_hooks
sys.path_importer_cache = self.path_importer_cache



# in some unit test:
with ImportState():
...  # tests


-eric


Re: [Python-Dev] Handling deprecations in the face of PEP 384

2012-04-21 Thread Brett Cannon
On Sat, Apr 21, 2012 at 20:54, Eric Snow wrote:

> On Sat, Apr 21, 2012 at 4:17 PM, Brett Cannon  wrote:
> > On Sat, Apr 21, 2012 at 12:10, Barry Warsaw  wrote:
> >> Strictly speaking, I do think you need to deprecate the APIs.  I like
> >> Nick's
> >> suggestion to make them C wrappers which just call back into Python.
> >
> >
> > That was my plan, but the amount of code it will take to wrap them is
> making
> > me not care. =) For PyImport_GetMagicTag() I would need to expose a new
> > attribute on sys or somewhere which specifies the VM name. For
> > PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to
> convert a
> > bytes object into a long which I am just flat-out not in the mood to
> figure
> > out how to do. And all of this will lead to the same amount of C code as
> > there currently is for what is already implemented, so I just don't care
> > anymore. =)
>
> I thought I already (mostly) worked it all out in that patch on
> issue13959.  I felt really good about the approach for the magic tag
> and magic bytes.
>

You didn't update Python/import.c in your patches so that the public C API
continued to function. That's what is going to take a bunch of C code to
continue to maintain, not the Python side of it.


>
> Once find_module() and reload() are done in imp.py, I'm hoping to
> follow up on a few things.  That includes the unresolved mailing list
> thread about sys.implementation (or whatever it was), which will help
> with the magic tag.  Anyway, I don't want to curtail the gutting of
> import.c quite yet (as he hears cries of "bring out your dead!").
>

Even w/ all of that gutted, a decent chunk of code is holding on for dear
life thanks to PyImport_ExecCodeModuleObject() (and those that call it).
IOW the C API as it is currently exposed is going to end up being the
limiting factor on how many lines get deleted in the very end.


>
> -eric
>
>
> p.s.  I understand your sentiment here, considering that mothers are
> often exhausted by childbirth and the importlib bootstrap was a big
> baby.  You were in labor for, what, 6 years.[There's an
> analogy that could keep on giving. :) ]
>

It's also about maintainability. It isn't worth upping complexity just to
shift some stuff into Python code, especially when it is such simple stuff
as the magic number and tag which places practically zero burden on other
VMs to implement.


Re: [Python-Dev] isolating import state during tests

2012-04-21 Thread Brett Cannon
On Sat, Apr 21, 2012 at 21:02, Eric Snow wrote:

> It looks like the test suite accommodates a stable import state to
> some extent, but would it be worth having a PEP-405-esque context
> manager to help with this?  For example, something along these lines:
>
>
> class ImportState:
>     # sys.modules is part of the interpreter state, so
>     # repopulate (don't replace)
>     def __enter__(self):
>         self.path = sys.path[:]
>         self.modules = sys.modules.copy()
>         self.meta_path = sys.meta_path[:]
>         self.path_hooks = sys.path_hooks[:]
>         self.path_importer_cache = sys.path_importer_cache.copy()
>
>         sys.path = site.getsitepackages()
>         sys.modules.clear()
>         sys.meta_path = []
>         sys.path_hooks = []
>         sys.path_importer_cache = {}
>
>     def __exit__(self, *args, **kwargs):
>         sys.path = self.path
>         sys.modules.clear()
>         sys.modules.update(self.modules)
>         sys.meta_path = self.meta_path
>         sys.path_hooks = self.path_hooks
>         sys.path_importer_cache = self.path_importer_cache
>
>
> # in some unit test:
> with ImportState():
>     ...  # tests
>

That's practically all done for you with a combination of
importlib.test.util.uncache and importlib.test.util.import_state.


[Python-Dev] path joining on Windows and imp.cache_from_source()

2012-04-21 Thread Brett Cannon
imp.cache_from_source() (and thus also imp.source_from_cache()) has special
semantics compared to how os.path.join() works. For instance, if you look
at test_imp you will notice it tries to use the same path separator as is
the farthest right in the path it is given::

  self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py',
True), '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))

But if you do the same basic operation using ntpath, you will notice it
simply doesn't care::

  >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__',
'd.cpython-32.pyc')
  'a\\b/c\\__pycache__\\d.cpython-32.pyc'

Basically imp.cache_from_source() goes to a bunch of effort to reuse the
farthest-right separator when an alternative separator appears earlier in
the path, before any splitting is done. But if you look at ntpath.join(),
it doesn't even attempt that much effort.

Now that we can reuse os.path.join() (directly for source_from_cache(),
indirectly through easy algorithmic copying in cache_from_source()) do we
want to keep the "special" semantics, or can I change it to match what
ntpath would do when there can be more than one path separator on an OS
(i.e. not do anything special)?
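
For concreteness, a rough sketch of the simplified behaviour being asked
about (cache_from_source_simple and the explicit tag argument are
illustrative only, not a proposed signature):

import ntpath

def cache_from_source_simple(path, tag):
    # Split on whichever separator comes last (ntpath.split already
    # handles both / and \), then rejoin with plain ntpath.join: no
    # attempt to preserve the rightmost separator from the input.
    head, tail = ntpath.split(path)
    base, _ = ntpath.splitext(tail)
    return ntpath.join(head, '__pycache__', '{}.{}.pyc'.format(base, tag))

# cache_from_source_simple('\\foo\\bar/baz/qux.py', 'cpython-32')
# -> '\\foo\\bar/baz\\__pycache__\\qux.cpython-32.pyc'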


Re: [Python-Dev] path joining on Windows and imp.cache_from_source()

2012-04-21 Thread Martin v. Löwis
> Now that we can reuse os.path.join() (directly for source_from_cache(),
> indirectly through easy algorithmic copying in cache_from_source()) do
> we want to keep the "special" semantics, or can I change it to match
> what ntpath would do when there can be more than one path separator on
> an OS (i.e. not do anything special)?

This goes back to

http://codereview.appspot.com/842043/diff/1/3#newcode787

where Antoine points out that the code needs to look for altsep.

He then suggests "keep the right-most of both". I don't think he
literally meant that the right-most separator should then also be
used to separate __pycache__, but only that the right-most of
either SEP or ALTSEP is what separates the module name.

In any case, Barry apparently took this comment to mean that the
rightmost separator should be preserved.

So I don't think this is an important feature.

Regards,
Martin


Re: [Python-Dev] path joining on Windows and imp.cache_from_source()

2012-04-21 Thread Glenn Linderman

On 4/21/2012 8:53 PM, Brett Cannon wrote:
> imp.cache_from_source() (and thus also imp.source_from_cache()) has
> special semantics compared to how os.path.join() works. For instance,
> if you look at test_imp you will notice it tries to use the same path
> separator as is the farthest right in the path it is given::
>
>   self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py',
> True), '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))
>
> But if you do the same basic operation using ntpath, you will notice
> it simply doesn't care::
>
>   >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__',
> 'd.cpython-32.pyc')
>   'a\\b/c\\__pycache__\\d.cpython-32.pyc'
>
> Basically imp.cache_from_source() goes to a bunch of effort to reuse
> the farthest right separator when there is an alternative separator
> *before* and path splitting is done. But if you look at ntpath.join(),
> it doesn't even attempt that much effort.
>
> Now that we can reuse os.path.join() (directly for
> source_from_cache(), indirectly through easy algorithmic copying in
> cache_from_source()) do we want to keep the "special" semantics, or
> can I change it to match what ntpath would do when there can be more
> than one path separator on an OS (i.e. not do anything special)?


Is there an issue here with importing from zip files, which use / 
separator, versus importing from the file system, which on Windows can 
use either / or \ ?  I don't know if imp.cache_from_source cares or is 
aware, but it is the only thing I can think of that might have an impact 
on such semantics.  (Well, the other is command line usage, but I don't 
think you are dealing with command lines at that point.)


Re: [Python-Dev] path joining on Windows and imp.cache_from_source()

2012-04-21 Thread Brett Cannon
On Sun, Apr 22, 2012 at 01:44, Glenn Linderman wrote:

>  On 4/21/2012 8:53 PM, Brett Cannon wrote:
>
> imp.cache_from_source() (and thus also imp.source_from_cache()) has
> special semantics compared to how os.path.join() works. For instance, if
> you look at test_imp you will notice it tries to use the same path
> separator as is the farthest right in the path it is given::
>
>self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py',
> True), '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))
>
>  But if you do the same basic operation using ntpath, you will notice it
> simply doesn't care::
>
> >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__',
> 'd.cpython-32.pyc')
>   'a\\b/c\\__pycache__\\d.cpython-32.pyc'
>
>  Basically imp.cache_from_source() goes to a bunch of effort to reuse the
> farthest right separator when there is an alternative separator *before*
> and path splitting is done. But if you look at ntpath.join(), it doesn't
> even attempt that much effort.
>
>  Now that we can reuse os.path.join() (directly for source_from_cache(),
> indirectly through easy algorithmic copying in cache_from_source()) do we
> want to keep the "special" semantics, or can I change it to match what
> ntpath would do when there can be more than one path separator on an OS
> (i.e. not do anything special)?
>
>
> Is there an issue here with importing from zip files, which use /
> separator, versus importing from the file system, which on Windows can use
> either / or \ ?  I don't know if imp.cache_from_source cares or is aware,
> but it is the only thing I can think of that might have an impact on such
> semantics.  (Well, the other is command line usage, but I don't think you
> are dealing with command lines at that point.)
>

Right now zipimport doesn't even support __pycache__ (I think). Besides,
zipimport already does a string substitution of os.altsep with os.sep (see
Modules/zipimport.c:90 amongst other places) so it also doesn't care in the
end.


Re: [Python-Dev] path joining on Windows and imp.cache_from_source()

2012-04-21 Thread Brett Cannon
On Sun, Apr 22, 2012 at 01:45, "Martin v. Löwis"  wrote:

> > Now that we can reuse os.path.join() (directly for source_from_cache(),
> > indirectly through easy algorithmic copying in cache_from_source()) do
> > we want to keep the "special" semantics, or can I change it to match
> > what ntpath would do when there can be more than one path separator on
> > an OS (i.e. not do anything special)?
>
> This goes back to
>
> http://codereview.appspot.com/842043/diff/1/3#newcode787
>
> where Antoine points out that the code needs to look for altsep.
>
> He then suggests "keep the right-most of both". I don't think he
> literally meant that the right-most separator should then also be
> used to separate __pycache__, but only that the right-most of
> either SEP or ALTSEP is what separates the module name.
>
> In any case, Barry apparently took this comment to mean that the
> rightmost separator should be preserved.
>
> So I don't think this is an important feature.
>

OK, then I'll go back to ntpath.join()/split() semantics: caring about
altsep on split but not on join, to keep it consistent w/ os.path and what
people are used to.