Re: [Python-Dev] Proof of the pudding: str.partition()

2005-08-30 Thread tony
I once wrote a similar method called cleave(). My use case involved a
string-like class (Substr) whose instances could report their position in
the original string. The re module wasn't preserving my class, so I had to
provide a different API.

  def cleave(self, pattern, start=0):
      """return Substr until match, match, Substr after match

      If there is no match, return Substr, None, ''
      """

Here are some observations/questions on Raymond's partition() idea. First
of all, partition() is a much better name than cleave()!

Since Substr didn't copy the underlying string as partition() will have to,
won't many uses of partition() end up being O(N^2)?

One way to give the programmer a way to avoid the copying would be to
provide a string method findspan(). findspan() would return the start and
end of the found position in the string. start > end could signal no match;
and since 0-character separator strings are disallowed in partition(),
end == 0 could also signal no match. partition() could be defined in terms
of findspan():

start, end = s.findspan(sep)
before, sep, after = s[:start], s[start:end], s[end:]
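
To make that concrete, here is a rough pure-Python sketch (findspan() is the
hypothetical method proposed above, not an existing str API; as the follow-up
below points out, the no-match case really needs start == end == len(s) for
partition() to come out as s, '', ''):

    def findspan(s, sep, start=0):
        # Return (start, end) of the first occurrence of sep in s.
        # On no match, start == end == len(s), so that partition() below
        # returns (s, '', '').
        i = s.find(sep, start)
        if i < 0:
            return len(s), len(s)
        return i, i + len(sep)

    def partition(s, sep):
        # partition() expressed in terms of findspan(); copying happens
        # only in the final slicing step.
        start, end = findspan(s, sep)
        return s[:start], s[start:end], s[end:]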

Just a quick thought,

-Tony



Re: [Python-Dev] Proof of the pudding: str.partition()

2005-08-30 Thread tony
> Actually no.  When str.partition() doesn't find the separator, you get s,
> '', ''.
> Yours would produce '', '', s.  On not found, you would need to use
> start==end==len(s).
>

You're right. Never mind, then.


> I will say the same
> thing that I've said at least three times already (with a bit of an
> expansion):
>

Thanks for the re-re-emphasis.

-Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-25 Thread Tony Kelman

I'm several weeks late to this discussion, but I'm glad to see that it
happened. I'm not a Python developer, and barely a user, but I have several
years of daily experience compiling complicated scientific software cross-
platform, particularly with MinGW-w64 for Windows. The Python community,
both core language and scientific package developers and users, needs to
act here. The problem is bad and getting worse. Luckily, much of the work
to start solving it has already been done in bits and pieces; it needs
coordination and participation to come to a conclusion.


Cross compilation is a valid issue, but I hope that build services like
Appveyor also help out here. There is regular talk about the PSF/PyPI
providing something similar


AppVeyor is better than nothing (I've been using it since beta), but it's
a far cry from build services and package management as the Linux world
knows them. Obtaining and setting up build dependencies quickly and
repeatably, and finishing the build of a complicated piece of software
such as CPython, or NumPy, SciPy, Julia (where most of my recent expertise
lies), etc. on a small single-core VM with limited memory and a restrictive
time limit is often not possible. These problems are solved within Linux
infrastructure like Koji, Open Build Service, buildd, etc.

MinGW-w64 is a mature, well-tested toolchain that is very capable of cross-
compiling a wide variety of libraries from Linux to Windows, in addition to
building conventionally on Windows for Windows. The MSYS2 collection of
MinGW-w64-compiled packages (https://github.com/Alexpux/MINGW-packages) has
been mentioned. Linux distributions including
- Fedora https://admin.fedoraproject.org/pkgdb/packages/mingw%2A/
- openSUSE https://build.opensuse.org/project/show/windows:mingw:win32
- Arch https://aur.archlinux.org/packages/?K=mingw
and others have projects for providing many hundreds of open-source
packages compiled for Windows. Debian has the cross-compilers available but
not many packages yet (https://packages.debian.org/search?keywords=mingw).

As a developer of a (compiled) open-source library or application, wouldn't
you love to be able to build binaries on Linux for Windows? With some work
and build system patches, you can. For many projects it's a simple matter of
./configure --host=x86_64-w64-mingw32. Not with CPython though. CPython is
only included in 2 of the above MinGW-w64 distribution projects, MSYS2 and
Arch. This is possible with a very, very long set of patches, many of which
have been submitted by Roumen Petrov to the Python bug tracker - see
http://bugs.python.org/issue17605 and other issues linked therein. Roumen
has done a huge amount of work, and he and others who consider the MinGW-w64
compiler important will continue to do so. (Thanks a million Roumen!)


I could step in as maintainer for Cygwin and builds based on GCC using
mingw* APIs.

Regards,
Roumen Petrov


A maintainer has volunteered. Others will help. Can any core developers
please begin reviewing some of his patches? Even if just to say they need
to be rebased. The lack of responses on the bug tracker is disheartening
from an outside perspective. The pile of patches accumulating in external
MinGW packaging projects is tantamount to a fork of CPython. It won't go
away since there are dedicated packagers working to keep their MinGW-w64
builds functional, even in the ad-hoc current state. The patches continue
piling up, making it more difficult for everyone - except for the core
Python developers if they continue to ignore the problem. Bring the people
working on these patches into the fold as contributors. Review the patches.
It would make Python as a language and a community even more diverse and
welcoming.


Deprecate/remove support for compiling CPython itself with compilers
other than MSVC on Windows


I'm not alone in thinking that this would be a bad idea. MSVC can continue
to be the default compiler used for Python on Windows, none of Roumen's
patches change that. They would merely open up the choice for packagers and
users to build CPython (and extension modules, thanks to separate patches)
with alternate compilers, in cross-compilation or otherwise.

Sincerely,
Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-26 Thread Tony Kelman
[...] a different version of MSVC to compile CPython than the version
used to build the official binaries. It requires care, but you can't deny
that there are use cases where people will want and need to do such things.
Is possible fragmentation a good enough reason to resist making it possible
in the build system?

> though I suspect most would like to see some traction achieved on a fork
> first

Those of us who consider this important should probably just do this. Ray,
Roumen, the maintainer of the Arch MinGW packages, myself and others could
look into making an actual fork on Github or Bitbucket where we merge the
various patches and come up with an out-of-the-box MinGW-[cross-]compilable
version of CPython. I'll happily write the spec files to get this building
from Fedora or openSUSE. That would help us test the feasibility from a
centralized repository. Ray, what do you think? Do you know xantares' email
address to ask if he'd be interested in helping or using the result?

Zachary Ware:
> I'm failing to see where that's simpler :)

If it were hypothetically merged instead of out in an external fork, it
could be ./configure --host=x86_64-w64-mingw32 && make to cross-compile
from Linux or Cygwin, or just ./configure && make after installing MSYS2
(which is just about as easy as installing MSVC) on Windows.

Paul Moore:
> If it were possible to cross-compile compatible extensions on Linux,
> projects developed on Linux could supply Windows binaries much more
> easily, which would be a huge benefit to the whole Windows Python
> community.

I want to do exactly this in an automated repeatable way, preferably on
a build service. This seems harder to do when CPython cannot itself be
built and handled as a dependency by that same automated, repeatable
build service. Unless it becomes possible to cross-compile extensions
using the build machine's own version of Python, which might be the right
approach.

> acutely aware of the common pain points for Python users on Windows.
> And they are all about binary extensions, and none at all about
> building Python itself.

I've done a lot of recent work keeping Julia working well on Windows, and
the interoperability we have with Python packages has propagated most of
these pain points to us as well. We have to rely on Conda to have a reliable
way of installing, for example, IPython with the notebook interface so that
IJulia will work. This is not an ideal solution as it
requires a great deal of user intervention and manual steps to get up and
running (and it would be far worse without Conda). We are, so far, built
around MinGW-w64 on Windows, for the reasons I listed above. Having cross-
compiled builds of CPython and binary extensions available from the same
build services we already use to install other binary packages (Gtk, Cairo,
Tk, Nettle, HDF5, etc) on Windows would be enormously helpful for us.

There's a real use case. Its size and importance can be debated. For now
I'll take David Murray's post to heart and see where I have time or ability
to help things along.

Sincerely,
Tony


Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-26 Thread Tony Kelman

If this includes (or would likely include) a significant portion of the
Scientific Computing community, I would think that would be a compelling
use case.


I can't speak for any of the scientific computing community besides myself,
but my thoughts: much of the development, as you know, happens on Linux
with GCC (or OSX with clang). But it's important for users across all
platforms to be able to install binaries with a minimum of fuss.
Limitations of MSVC have already led the numpy/scipy community to
investigate building with MinGW-w64. (See several related threads from
April on the numpy-discussion list.) Ensuring compatibility with CPython's
chosen msvcrt has made that work even more difficult for them.

And Julia is not yet a significant portion of anything, but our community
is growing rapidly. See https://github.com/JuliaLang/IJulia.jl/pull/211 -
with respect to IJulia, "Python is just an implementation detail." Even
such a silly thing as automating the execution of the Python installer, to
set up a private only-used-by-IJulia copy, is needlessly difficult to do.
The work on Jupyter will hopefully help this specific situation sooner or
later, but there are other cases where CPython needs to serve as part of
the infrastructure, and the status quo makes that harder to automate.


We'd need to be educated more about the reasons why this approach works
better than remaining compatible with MSVC CPython so we could evaluate
the risks and reward intelligently.


Ideally, we can pursue an approach that would be able to remain compatible
with MSVC CPython. Even if this needs involvement from MinGW-w64 to make
happen, I don't think it's intractable. But I know less about the inner
details of CPython than you do so I could be wrong.


But as has been discussed, it seems better to focus first on fixing the
issues on which we are all in agreement (building extensions with MinGW).


Yes. We'll look into how much of the work has already been done on this.


there *are* people on the core-mentorship list who have expressed
interest in helping out with our automated testing infrastructure,
including (if I understand correctly) adding some level of integration
to other CI systems (which might just be messages to the IRC
channel)[*].  So that could be a fruitful avenue to explore.


If we pursue a fork (which not everyone will like but might happen anyway)
then we likely would do this type of CI integration along the way as Ray
suggested. So even if it turns out to fail as an endeavor, some good may
come of it.

Sincerely,
Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-26 Thread Tony Kelman

Not really, to be honest. I still don't understand why anyone not
directly involved in CPython development would need to build their own
Python executable on Windows. Can you explain a single specific
situation where installing and using the python.org executable is not
possible


I want, and in many places *need*, an all-MinGW stack. For a great deal
of software that is not Python, I can do this today. I can use build
services, package management, and dependency resolution tools that work
very well together with this all-MinGW software stack. These are problems
that Python has notoriously struggled with on Windows for a long time.
It's not "my views on free software," it's the reality of MSVC being a
near-useless compiler for scientific software. (And I don't see that
changing much.) Do my requirements conflict with many non-scientific
Python users on Windows? Probably. So you're welcome to ignore my
requirements and I'll do my own thing, but I don't think I'm alone.
There's likely no desire from the scientific Python community to branch
off and separate in quite the way I'm willing to do from non-scientific
Python, but it would solve some of their problems (while introducing many
others). I suspect a MinGW-w64-oriented equivalent to Conda would be
attractive to many. That's approximately what I'm aiming for.

There are some ways in which I can use the python.org MSVC executable and
installer. But it is nearly impossible for me to integrate it into the rest
of the tools and stack that I am using; it sticks out like a sore thumb.
Likewise MinGW-compiled CPython may stick out like a sore thumb relative
to the existing way things work with Python on Windows. I'm okay with that,
you probably aren't.


changes to extension building and you should contribute them
independently so that everyone can benefit


Noted.


I cannot see why you would need to build Python in order to build
extensions.


No, of course they are separate. CPython is one of my dependencies.
Compiled extensions are other dependencies. Software completely unrelated
to Python is yet another set of dependencies. It's not a very coherent
stack if I can't handle all of these dependencies in a uniform way.


On a tangential note, any work on supporting mingw builds and
cross-compilation should probably be done using setuptools, so that it
is external to the CPython code.


Noted.

Sincerely,
Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-28 Thread Tony Kelman

Stephen J. Turnbull:

Python is open source.  Nobody is objecting to "somebody else" doing
this.[1]  The problem here is simply that some "somebody elses" are
trying to throw future work over the wall into python-dev space.


If that's how it's seen at this point, then it sounds like the logical
course of action for CPython-with-MinGW is to demonstrate feasibility
in a fork. Get what currently works as a set of 80-something patches
and PKGBUILD instructions into a single repository that is usable by a
wider variety of people, in more distributions, etc. Set up as much CI as
possible so every patch gets tested in as many configurations as we can.
Experiment with extension compatibility and find out what is actually
achievable, with or without needing to modify MinGW-w64 in the process.

There are people, though evidently not much of python-dev, who have a
need and desire to make this happen. It seems python-dev would rather
have us spend zero time proposing changes that allow CPython itself to
be built differently than today, and all of our time on MinGW extensions.
I personally find 3 of the 4 combinations of how one could build CPython
and how one could build extensions interesting and worth looking into for
different reasons (the one that's uninteresting to me is the currently
used all-MSVC method, due to its many limitations I listed earlier).
Ray for example may only care about using MinGW for everything. Python's
a big community with lots of room for different people to work on
different aspects of the same set of problems.

For the combination of MSVC Python and MinGW extensions that most of you
have recommended we focus on, it would be more productive to engage with
setuptools, distutils-sig, and likely numpy as well, instead of python-dev.
My experience lies more in getting troublesome C codebases to build with
MinGW than it does with the messy intricacies and backwards-compatibility
requirements of Python extensions and package management, however, so my
ability to contribute on that side of things is more limited. I'll just
note that every project I've ever had a patch for which improved
functionality with a new compiler (whether GCC, MSVC, clang, icc or ifort,
etc.) reacted with review and thanks for the patches, not "why do you want
to do this?" pushback. If potential contributors have a desire to get it
working in the first place, then they will also be invested in helping
keep it working on an ongoing basis.

Sincerely,
Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-28 Thread Tony Kelman

Stephen J. Turnbull:

Sure -- as long as it works for them, though, such potential
contributors don't necessarily care if it works for anybody else.  My
experience (in other projects) is that allowing that level of
commitment to be sufficient for inclusion in the maintained code base
frequently results in bug reports from third parties who try to use
the new feature and discover it *doesn't* work for them.


Good point. I definitely care whether patches work for everyone else.
Patches should be done well and accompanied with proper documentation
so new functionality is fully reproducible. If that's what's holding
up review, comments in the bug threads indicating as much would be
helpful. Any fork will also have to be thoroughly documented if it's
to have any chance of working.


Sounds good to me.  You seem to think that's excessive, though:


No, just hearing the words come out of my mouth they sound a little nuts.
Maybe not, there are after all half a dozen or more incompatible alternate
Python implementations floating around. I think most of them started as
from-scratch rewrites rather than source forks, but maybe that's irrelevant.


Well, for starters, most of python-dev would rather avoid any contact
whatsoever with Windows.  I think part of the problem is that Windows
developers *of* Python are *very* rare (fingers of one hand rare).


In my opinion the MSVC toolchain makes that problem worse, as it's far
harder for Unix developers to have any familiarity with how things work.
But you do have involvement and core developers from Microsoft, which is
obviously incredibly important. Maybe even mandatory for Python on Windows
to be viable in your eyes.


Even those who do work on C extensions have so far
been happy to work with the VC build (except for the recurrent issue
of getting one's hands on the toolchain).

It should be evident by now that our belief is that the large majority
of Windows users is well-served by the current model


This is not the case at all in the scientific community. NumPy and SciPy
put in a lot of extra work to come up with something that is compatible
with the MSVC build of CPython because they have to, not because they're
"happy to" jump through the necessary hoops. The situation today with
NumPy AIUI is they already have to build with a custom hacked MinGW
toolchain that only one person knows how to use. Evidently until very
recently they were using a many-years-old 32-bit-only build of GCC 3.x.
Do python-dev and numpy-discussion not talk to one another? I get that
not everyone uses numpy, or Windows, but it sounds like there's very
little understanding, or even awareness, of the issues they face.


They quite naturally don't want to do that work, and don't see much
point in discussing it if the (apparently) few people who need it aren't
going to supply the resources.


I'm going to move the "extensions with MinGW-w64" part of this conversation
over to numpy-discussion, since they have a need for this today and are
already using workarounds and processes that I don't think anyone is fully
satisfied with. I do find this combination interesting, worth working on,
and possible to make work well, but not completely in isolation as it does
not address my embedding use case.


but so is python-dev's
reluctance to admit new "aspects" until their impact on core
responsibilities is made clear.


Okay. I'll table the discussion with python-dev for now then.

-Tony



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-29 Thread Tony Kelman
[...] own, all I have is the general
idea.


I do something similar today for C, C++, and Fortran libraries using
the MinGW-w64 cross-compiler. It looks like this
https://build.opensuse.org/project/show/windows:mingw:win64
I write a spec file and upload source, and get back DLLs and EXEs,
compressed in RPMs in this case, along with the RPM dependency metadata.
For OSX I would look at what Homebrew does with their "bottle"
infrastructure. A build farm with Vagrant (or similar) VM's could even
be made to do the same basic thing on Windows with MSVC, at least for
binaries that MSVC is capable of compiling.

-Tony



[Python-Dev] Embedding multiple Python runtimes in the same process on Windows

2022-02-24 Thread Tony Roberts
Hi all,

This is specifically about embedding Python on Windows, and I'm hoping some of 
the Windows Python devs might have some ideas or be interested in this. I have 
implemented a partial solution (linked below) and I'm interested to hear what 
other people think of this.

Currently when embedding Python both python3x.dll and python3.dll get loaded, 
with python3.dll redirecting stable API calls to python3x.dll. If a process 
embeds two different versions of Python 3, e.g., python39.dll and python310.dll,
they both load their respective python3.dlls. At this point I imagine you are 
thinking "don't do that", but bear with me... The problem comes when the Python 
loaded second imports an extension module linked with python3.dll. The 
python3.dll that it gets linked to will be the first one that was loaded, not 
the one that relates to the second Python that is actually doing the import. 
This results in a call into the wrong Python dll and 'bad things' happen.

None of this is unexpected and I'm sure that the sensible thing to do is to 
simply not do this... but I've been working on a way to make this work anyway. 
In my case I have a plugin to another application that embeds Python into that 
application. It's perfectly possible (and reasonable) for other plugins to also 
want to embed Python. In most cases this can be dealt with easily enough by 
having both plugins use the same Python environment. With Python 2 it used to 
be possible to have two different Python interpreters embedded at the same time 
and not have them interfere with each other (although it is possible there 
would still be issues with DLL versions used by extension modules). With Python 
3 if we want to have two different versions of Python 3 embedded at the same 
time then it will fail because of the reason outlined above.

My idea is to redirect all loaded python3.dll dlls to the one we want to be 
used before loading any extension modules (i.e., just before any call to 
LoadLibraryEx) and then restore them afterwards. This can be done by 
manipulating the loader modules list in Windows. I have implemented this as a 
proof of concept in my own plugin and confirmed that this works and does allow 
two different versions of Python 3 to be embedded at the same time. This works 
with an unmodified version of Python by applying the redirect in an import 
hook. It uses several undocumented Windows structures and APIs in order to 
safely manipulate the loader table.

Here is my proof of concept code that performs the redirect 
https://gist.github.com/tonyroberts/74888762f0063238d4f7fd7c7d36f0f0

While this works for different versions of Python 3, there is still a problem 
when trying to embed two different instances of the same version of Python 3. 
The problem is basically the same but with the added complication that the pyd 
files are named the same, and the ones loaded first get found by the second 
Python runtime and you end up again calling across versions. I managed to solve 
this using a similar method to the code above, but rather than redirecting just 
python3.dll I look for any other loaded python3x.dll and then remove *all* 
modules loaded under that Python distribution from the loader table. This 
ensures that both Python runtimes are effectively isolated from each other as 
neither see any of the same modules. This gets more complicated once you start 
thinking about user site-packages folders and venvs, but for simple 
distributions where everything is under the same root folder this technique 
works.
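
To sketch the shape of the import-hook part in plain Python (an illustration
only, not the gist's code; the two _*_dll_redirect functions are placeholders
for the loader-table manipulation, which lives in the linked proof of concept):

    import importlib.machinery
    import sys

    def _apply_dll_redirect():
        # Placeholder: point every loaded python3.dll at this runtime by
        # manipulating the Windows loader's module list (see the gist).
        pass

    def _restore_dll_redirect():
        # Placeholder: undo the manipulation above.
        pass

    class _WrappedExtensionLoader:
        # Bracket extension-module loading (the LoadLibraryEx call) with the
        # redirect, restoring the loader table afterwards.
        def __init__(self, loader):
            self._loader = loader

        def create_module(self, spec):
            _apply_dll_redirect()
            try:
                return self._loader.create_module(spec)
            finally:
                _restore_dll_redirect()

        def exec_module(self, module):
            self._loader.exec_module(module)

    class _RedirectingFinder:
        def find_spec(self, name, path=None, target=None):
            spec = importlib.machinery.PathFinder.find_spec(name, path, target)
            if spec is not None and isinstance(
                    spec.loader, importlib.machinery.ExtensionFileLoader):
                spec.loader = _WrappedExtensionLoader(spec.loader)
            return spec

    sys.meta_path.insert(0, _RedirectingFinder())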

Anyway, just keen to hear what people think or whether this has been tackled 
before in another way.

Best regards,
Tony


Re: [Python-Dev] Integrate BeautifulSoup into stdlib?

2009-03-04 Thread Tony Nelson
At 2:56 PM + 3/4/09, Chris Withers wrote:
>Vaibhav Mallya wrote:
>> We do have HTMLParser, but that doesn't handle malformed pages well, and
>> just isn't as nice as BeautifulSoup.
>
>Interesting, given that BeautifulSoup is built on HTMLParser ;-)

In BeautifulSoup >= 3.1, yes.  Before that (<= 3.07a), it was based on the
more robust sgmllib.SGMLParser.  The current BeautifulSoup can't handle
'', while the earlier SGMLParser versions can.  I don't
know quite how common that missing space is in the wild, but I've
personally made HTML with that problem.  Maybe this is the only problem
with using HTMLParser instead of SGMLParser; I don't know.  In the meantime,
if I have a need for BeautifulSoup in Python 3.x, I'll port sgmllib
and use the older BeautifulSoup.
-- 

TonyN.:'   
  '  


[Python-Dev] email package Bytes vs Unicode (was Re: Dropping bytes "support" in json)

2009-04-09 Thread Tony Nelson
(email-sig added)

At 08:07 -0400 04/09/2009, Steve Holden wrote:
>Barry Warsaw wrote:
 ...
>> This is an interesting question, and something I'm struggling with for
>> the email package for 3.x.  It turns out to be pretty convenient to have
>> both a bytes and a string API, both for input and output, but I think
>> email really wants to be represented internally as bytes.  Maybe.  Or
>> maybe just for content bodies and not headers, or maybe both.  Anyway,
>> aside from that decision, I haven't come up with an elegant way to allow
>> /output/ in both bytes and strings (input is I think theoretically
>> easier by sniffing the arguments).
>>
>The real problem I came across in storing email in a relational database
>was the inability to store messages as Unicode. Some messages have a
>body in one encoding and an attachment in another, so the only ways to
>store the messages are either as a monolithic bytes string that gets
>parsed when the individual components are required or as a sequence of
>components in the database's preferred encoding (if you want to keep the
>original encoding most relational databases won't be able to help unless
>you store the components as bytes).
 ...

I found it confusing myself, and did it wrong for a while.  Now, I
understand that messages come over the wire as bytes, either 7-bit US-ASCII
or 8-bit whatever, and are parsed at the receiver.  I think of the database
as a wire to the future, and store the data as bytes (a BLOB), letting the
future receiver parse them as it did the first time, when I cleaned the
message.  Data I care to query is extracted into fields (in UTF-8, what I
usually use for char fields).  I have no need to store messages as Unicode,
and they aren't Unicode anyway.  I have no need ever to flatten a message
to Unicode, only to US-ASCII or, for messages (spam) that are corrupt, raw
8-bit data.

If you need the data from the message, by all means extract it and store it
in whatever form is useful to the purpose of the database.  If you need the
entire message, store it intact in the database, as the bytes it is.  Email
isn't Unicode any more than a JPEG or other image types (often payloads in
a message) are Unicode.
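
A minimal sketch of what I mean, using sqlite3 and the Python 3 email API
purely for illustration (the table layout here is made up):

    import sqlite3
    from email import message_from_bytes

    db = sqlite3.connect("mail.db")
    db.execute("CREATE TABLE IF NOT EXISTS messages"
               " (id INTEGER PRIMARY KEY, subject TEXT, raw BLOB)")

    def store(raw_bytes):
        # Parse once to pull out the fields worth querying on...
        msg = message_from_bytes(raw_bytes)
        subject = msg.get("Subject", "")
        # ...but keep the whole message as the bytes it arrived as.
        db.execute("INSERT INTO messages (subject, raw) VALUES (?, ?)",
                   (subject, raw_bytes))
        db.commit()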
-- 

TonyN.:'   
  '  


Re: [Python-Dev] email package Bytes vs Unicode (was Re: Dropping bytes "support" in json)

2009-04-09 Thread Tony Nelson
(email-sig dropped, as I didn't see Steve Holden's message there)

At 12:20 -0400 04/09/2009, Steve Holden wrote:
>Tony Nelson wrote:
 ...
>> If you need the data from the message, by all means extract it and store it
>> in whatever form is useful to the purpose of the database.  If you need the
>> entire message, store it intact in the database, as the bytes it is.  Email
>> isn't Unicode any more than a JPEG or other image types (often payloads in
>> a message) are Unicode.
>
>This is all great, and I did quite quickly realize that the best
>approach was to store the mails in their network byte-stream format as
>bytes. The approach was negated in my own case because of PostgreSQL's
>execrable BLOB-handling capabilities. I took a look at the escaping they
>required, snorted with derision and gave it up as a bad job.
 ...

I use MySQL, but sort of intend to learn PostgreSQL.  I didn't know that
PostgreSQL has no real support for BLOBs.  I agree that having to import
them from a file is awful.  Also, there appears to be a severe limit on the
size of character data fields, so storing in Base64 is out.  About the only
thing to do then is to use external storage for the BLOBs.

Still, email seems to demand such binary storage, whether all databases
provide it or not.
-- 

TonyN.:'   <mailto:[email protected]>
  '  <http://www.georgeanelson.com/>


Re: [Python-Dev] BLOBs in Pg (was: email package Bytes vs Unicode)

2009-04-09 Thread Tony Nelson
At 21:24 +0400 04/09/2009, Oleg Broytmann wrote:
>On Thu, Apr 09, 2009 at 01:14:21PM -0400, Tony Nelson wrote:
>> I use MySQL, but sort of intend to learn PostgreSQL.  I didn't know that
>> PostgreSQL has no real support for BLOBs.
>
>   I think it has - BYTEA data type.

So it does; I see that now that I've opened up the PostgreSQL docs.  I
don't find escaping data to be a problem -- I do it for all untrusted data.

So, after all, there isn't an example of a database that makes storing
email and other such byte-oriented data onerous, and Python's email
package has no need for workarounds in that area.
-- 

TonyN.:'   <mailto:[email protected]>
  '  <http://www.georgeanelson.com/>


Re: [Python-Dev] [Email-SIG] Dropping bytes "support" in json

2009-04-09 Thread Tony Nelson
At 22:38 -0400 04/09/2009, Barry Warsaw wrote:
 ...
>So, what I'm really asking is this.  Let's say you agree that there
>are use cases for accessing a header value as either the raw encoded
>bytes or the decoded unicode.  What should this return:
>
> >>> message['Subject']
>
>The raw bytes or the decoded unicode?

That's an easy one:  Subject: is an unstructured header, so it must be
text, thus Unicode.  We're looking at a high-level representation of an
email message, with parsed header fields and a MIME message tree.


>Okay, so you've picked one.  Now how do you spell the other way?

message.get_header_bytes('Subject')

Oh, I see that's what you picked.

>The Message class probably has these explicit methods:
>
> >>> Message.get_header_bytes('Subject')
> >>> Message.get_header_string('Subject')
>
>(or better names... it's late and I'm tired ;).  One of those maps to
>message['Subject'] but which is the more obvious choice?

Structured header fields are more of a problem.  Any header with addresses
should return a list of addresses.  I think the default return type should
depend on the data type.  To get an explicit bytes or string or list of
addresses, be explicit; otherwise, for convenience, return the appropriate
type for the particular header field name.
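
A toy sketch of that convention (the class and method names are just the
ones from this thread, not the real email package; RFC 2047 decoding is
delegated to the existing stdlib helpers):

    from email.header import decode_header, make_header
    from email.utils import getaddresses

    class Message:
        def __init__(self, raw_headers):
            self._raw = raw_headers        # name -> raw encoded value (ASCII str)

        def get_header_bytes(self, name):
            return self._raw[name].encode("ascii")

        def get_header_string(self, name):
            return str(make_header(decode_header(self._raw[name])))

        def __getitem__(self, name):
            # Convenience form: return the "natural" type for the field --
            # decoded text for Subject:, a list of addresses for To:.
            if name.lower() in ("to", "cc", "bcc", "from"):
                return getaddresses([self.get_header_string(name)])
            return self.get_header_string(name)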


>Now, setting headers.  Sometimes you have some unicode thing and
>sometimes you have some bytes.  You need to end up with bytes in the
>ASCII range and you'd like to leave the header value unencoded if so.
>But in both cases, you might have bytes or characters outside that
>range, so you need an explicit encoding, defaulting to utf-8 probably.

Never for header fields.  The default is always RFC 2047, unless it isn't,
say for params.

The Message class should create an object of the appropriate subclass of
Header based on the name (or use the existing object, see other
discussion), and that should inspect its argument and DTRT or complain.

>
> >>> Message.set_header('Subject', 'Some text', encoding='utf-8')
> >>> Message.set_header('Subject', b'Some bytes')
>
>One of those maps to
>
> >>> message['Subject'] = ???

The expected data type should depend on the header field.  For Subject:, it
should be bytes to be parsed or verbatim text.  For To:, it should be a
list of addresses or bytes or text to be parsed.

The email package should be pythonic, and not require deep understanding of
dozens of RFCs to use properly.  Users don't need to know about the raw
bytes; that's the whole point of MIME and any email package.  It should be
easy to set header fields with their natural data types, and doing it with
bad data should produce an error.  This may require a bit more care in the
message parser, to always produce a parsed message with defects.
-- 

TonyN.:'   
  '  


Re: [Python-Dev] [Email-SIG] Dropping bytes "support" in json

2009-04-09 Thread Tony Nelson
At 22:26 -0400 04/09/2009, Barry Warsaw wrote:

>There are really two ways to look at an email message.  It's either an
>unstructured blob of bytes, or it's a structured tree of objects.
>Those objects have headers and payload.  The payload can be of any
>type, though I think it generally breaks down into "strings" for text/
>* types and bytes for anything else (not counting multiparts).
>
>The email package isn't a perfect mapping to this, which is something
>I want to improve.  That aside, I think storing a message in a
>database means storing some or all of the headers separately from the
>byte stream (or text?) of its payload.  That's for non-multipart
>types.  It would be more complicated to represent a message tree of
>course.

Storing an email message in a database does mean storing some of the header
fields as database fields, but the set of email header fields is open, so
any "unused" fields in a message must be stored elsewhere.  It isn't useful
to just have a bag of name/value pairs in a table.  General message MIME
payload trees don't map well to a database either, unless one wants to get
very relational.  Sometimes the database needs to represent the entire
email message, header fields and MIME tree, but only if it is an email
program and usually not even then.  Usually, the database has a specific
purpose, and can be designed for the data it cares about; it may choose to
keep the original message as bytes.


>It does seem to make sense to think about headers as text header names
>and text header values.  Of course, header values can contain almost
>anything and there's an encoding to bring it back to 7-bit ASCII, but
>again, you really have two views of a header value.  Which you want
>really depends on your application.

I think of header fields as having text-like names (the set of allowed
characters is more than just text, though defined headers don't make use of
that), but the data is either bytes or it should be parsed into something
appropriate:  text for unstructured fields like Subject:, a list of
addresses for address fields like To:.  Many of the structured header
fields have a reasonable mapping to text; certainly this is true for address
header fields.  Content-Type header fields are barely text, they can be so
convolutedly structured, but I suppose one could flatten one of them to
text instead of bytes if the user wanted.  It's not very useful, though,
except for debugging (either by the programmer or the recipient who wants
to know what was cleaned from the message).


>Maybe you just care about the text of both the header name and value.
>In that case, I think you want the values as unicodes, and probably
>the headers as unicodes containing only ASCII.  So your table would be
>strings in both cases.  OTOH, maybe your application cares about the
>raw underlying encoded data, in which case the header names are
>probably still strings of ASCII-ish unicodes and the values are
>bytes.  It's this distinction (and I think the competing use cases)
>that make a true Python 3.x API for email more complicated.

If a database stores the Subject: header field, it would be as text.  The
various recipient address fields are a one-to-many mapping from message to
names and addresses, and need a related table of name/address fields, with
each field being text.  The original message (or whatever part of it one
preserves) should be bytes.  I don't think this complicates the email
package API; rather, it just shows where generality is needed.
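
Sketching that one-to-many mapping (the schema is invented; get_all() and
getaddresses() are the real stdlib helpers):

    import sqlite3
    from email.utils import getaddresses

    db = sqlite3.connect("mail.db")
    db.execute("CREATE TABLE IF NOT EXISTS recipients"
               " (message_id INTEGER, name TEXT, address TEXT)")

    def store_recipients(message_id, msg):
        # One message -> many (name, address) rows, each field stored as text.
        fields = msg.get_all("To", []) + msg.get_all("Cc", [])
        for name, address in getaddresses(fields):
            db.execute("INSERT INTO recipients VALUES (?, ?, ?)",
                       (message_id, name, address))
        db.commit()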


>Thinking about this stuff makes me nostalgic for the sloppy happy days
>of Python 2.x

You now have the opportunity to finally unsnarl that mess.  It is not an
insurmountable opportunity.
-- 

TonyN.:'   
  '  


Re: [Python-Dev] Needing help to change the grammar

2009-04-12 Thread Tony Nelson
At 16:30 -0400 04/12/2009, Terry Reedy wrote:
 ...
>  Source in .pyb (python-brazil) is parsed with with your new parser,
 ...

In case anyone ever does this again, I suggest that the extension be the
language and optionally country code:

.py_pt  or  .py_pt_BR
-- 

TonyN.:'   
  '  


Re: [Python-Dev] #!/usr/bin/env python --> python3 where applicable

2009-04-18 Thread Tony Nelson
At 20:51 -0700 04/18/2009, Steven Bethard wrote:
>On Sat, Apr 18, 2009 at 8:14 PM, Benjamin Peterson 
>wrote:
>> 2009/4/18 Nick Coghlan :
>>> I see a few options:
>>> 1. Abandon the "python" name for the 3.x series and commit to calling it
>>> "python3" now and forever (i.e. actually make the decision that Mitchell
>>> refers to).
>>
>> I believe this was decided on sometime (the sprints?).
>
>That's an unfortunate decision. When the 2.X line stops being
>maintained (after 2.7 maybe?) we're going to be stuck with the "3"
>suffix forever for the "real" Python.
>
>Why doesn't it make more sense to just use "python3" only for
>"altinstall" and "python" for "fullinstall"?

Just use python3 in the shebang lines all the time (where applicable ;), as
it is made by both altinstall and fullinstall.  fullinstall also makes plain
"python", but that is not important.
-- 

TonyN.:'   
  '  


Re: [Python-Dev] PEP 383: Non-decodable Bytes in System Character Interfaces

2009-04-27 Thread Tony Nelson
At 23:39 -0700 04/26/2009, Glenn Linderman wrote:
>On approximately 4/25/2009 5:35 AM, came the following characters from
>the keyboard of Martin v. Löwis:
>>> Because the encoding is not reliably reversible.
>>
>> Why do you say that? The encoding is completely reversible
>> (unless we disagree on what "reversible" means).
>>
>>> I'm +1 on the concept, -1 on the PEP, due solely to the lack of a
>>> reversible encoding.
>>
>> Then please provide an example for a setup where it is not reversible.
>>
>> Regards,
>> Martin
>
>It is reversible if you know that it is decoded, and apply the encoding.
>  But if you don't know that has been encoded, then applying the reverse
>transform can convert an undecoded str that matches the decoded str to
>the form that it could have, but never did take.
>
>The problem is that there is no guarantee that the str interface
>provides only strictly conforming Unicode, so decoding bytes to
>non-strictly conforming Unicode, can result in a data pun between
>non-strictly conforming Unicode coming from the str interface vs bytes
>being decoded to non-strictly conforming Unicode coming from the bytes
>interface.
 ...

Maybe this is a dumb idea, but some people might be reassured if the
half-surrogates had some particular pattern that is unlikely to occur even
in unreasonable text (as half-surrogates are an error in Unicode).  The
pattern could be some sequence of half-surrogate encoded bytes, framing the
intended data, as is done for RFC 2047 internationalized header fields in
email.  It would take up a few more bytes in the string, but no matter.  It
would also make it easier to diagnose when decoding was not properly done.

FWIW, I like the idea in the PEP, now that I think I understand it.
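
For concreteness, the PEP's scheme -- the error handler now spelled
'surrogateescape' -- round-trips like this:

    >>> raw = b"caf\xe9"                      # not valid UTF-8
    >>> s = raw.decode("utf-8", "surrogateescape")
    >>> s
    'caf\udce9'
    >>> s.encode("utf-8", "surrogateescape") == raw
    True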

(BTW, gotta love what the email package is doing to the Subject: header
field. ;-')
-- 

TonyN.:'   
  '  


Re: [Python-Dev] PEP 383: Non-decodable Bytes in System C haracter Interfaces

2009-04-27 Thread Tony Nelson
At 16:09 + 04/27/2009, Antoine Pitrou wrote:
>Stephen J. Turnbull  xemacs.org> writes:
>>
>> I hate to break it to you, but most stages of mail processing have
>> very little to do with SMTP.  In particular, processing MIME
>> attachments often requires dealing with file names.
>
>AFAIK, the file name is only there as an indication for the user when he wants
>to save the file. If it's garbled a bit, no big deal.
 ...

Yep.  In fact, it should be cleaned carefully.  RFC 2183, 2.3:

"It is important that the receiving MUA not blindly use the suggested
filename.  The suggested filename SHOULD be checked (and possibly
changed) to see that it conforms to local filesystem conventions,
does not overwrite an existing file, and does not present a security
problem (see Security Considerations below).

The receiving MUA SHOULD NOT respect any directory path information
that may seem to be present in the filename parameter.  The filename
should be treated as a terminal component only.  Portable
specification of directory paths might possibly be done in the future
via a separate Content-Disposition parameter, but no provision is
made for it in this draft."
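
A minimal sketch of that kind of cleaning (the particular policy -- which
characters to allow and how to avoid clobbering existing files -- is only an
example):

    import os
    import re

    def safe_filename(suggested, directory):
        # Treat the suggested name as a terminal path component only.
        name = os.path.basename(suggested.replace("\\", "/"))
        # Conform to local conventions: keep only a conservative character set.
        name = re.sub(r"[^A-Za-z0-9._-]", "_", name) or "attachment"
        # Do not overwrite an existing file.
        candidate, n = name, 1
        while os.path.exists(os.path.join(directory, candidate)):
            root, ext = os.path.splitext(name)
            candidate = "%s.%d%s" % (root, n, ext)
            n += 1
        return os.path.join(directory, candidate)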

-- 

TonyN.:'   
  '  


Re: [Python-Dev] PEP 376 : Changing the .egg-info structure

2009-05-15 Thread Tony Nelson
At 13:52 -0400 05/15/2009, P.J. Eby wrote:
>At 08:32 AM 5/15/2009 +0200, Jeroen Ruigrok van der Werven wrote:
>>Agreed. Within FreeBSD's ports the installed package registration
>>gets a MD5 hash per file recorded. Size is less interesting though,
>>since essentially this information is encapsulated within the hash.
>>Remove one byte from the file and your hash is already different.
>
>Which also means that in that case you can skip computing the
>MD5.  The size allows you to easily notice an overwrite/corruption
>without further processing.

In most cases the files will actually match, so the sizes and dates will be
the same and the checksum must be computed to verify the match.

RPM does this when asked to Verify a package.  It is faster than Removing a
package, and Verifying all installed packages takes a reasonable amount of
time.  I don't think Python would be any worse at verifying its own
packages, and it would normally have less data to verify, so it should be
fast enough.
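
The verification step itself is cheap to sketch (illustrative only; the
recorded size and MD5 are assumed to come from the installed-files metadata):

    import hashlib
    import os

    def verify_file(path, recorded_size, recorded_md5):
        # A size mismatch already proves the file was changed.
        if os.path.getsize(path) != recorded_size:
            return False
        # In the common case the sizes match, so the checksum still has to be
        # computed to confirm the file really is unmodified.
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                md5.update(chunk)
        return md5.hexdigest() == recorded_md5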
-- 

TonyN.:'   
  '  


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-22 Thread Tony Nelson
On 10-01-22 02:53:21, Collin Winter wrote:
> On Thu, Jan 21, 2010 at 11:37 PM, Glyph Lefkowitz
>  wrote:
> >
> > On Jan 21, 2010, at 6:48 PM, Collin Winter wrote:
 ...
> > There's been a recent thread on our mailing list about a patch that
> > dramatically reduces the memory footprint of multiprocess
> > concurrency by separating reference counts from objects. ...

Currently, CPython gets a performance advantage from having reference 
counts hot in the cache when the referenced object is used.  There is 
still the write pressure from updating the counts.  With separate 
reference counts, an extra cache line must be loaded from memory (it is 
unlikely to be in the cache unless the program is trivial).  I see from 
the referenced posting that this is a 10% speed hit (the poster 
attributes the hit to extra instructions).

Perhaps the speed and memory hits could be minimized by only doing this 
for some objects?  Only objects that are fully shared (such as read-
only data) benefit from this change.  I don't know, but shared objects 
may already be treated separately.

 ...
> The data I've seen comes from
> http://groups.google.com/group/comp.lang.python/msg/c18b671f2c4fef9e:
 ...

-- 

TonyN.:'   
  '  



[Python-Dev] Socket Timeouts patch 1519025

2006-07-23 Thread Tony Nelson
I request a review of my patch (1519025) to get socket timeouts to work
properly with errors and signals.  I don't expect this patch would make it
into 2.5, but perhaps it could be in 2.5.1, as it fixes a long-standing
bug.  I know that people are busy with getting 2.5 out the door, but it
would be helpful for me to know if my current patch is OK before I start on
another patch to make socket timeouts more useful.  There is also a version
of the patch for 2.4, which would make yum nicer in Fedora 4 and 5, and I
think that passing a review would make the patch more acceptable to
Fedora's maintainers.

My next patch will, if it works, make socket timeouts easier to use
per-thread, allow for the timing of entire operations rather than just
timing transaction phases, allow for setting an acceptable rate for file
transfers, and should be completely backward compatible, in that old code
would be unaffected and new code would work as well as possible now on
older unpatched versions.  That's my plan, anyway.  It would build on my
current patch, at least in its principles.

TonyN.:'   
  '  


[Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-29 Thread Tony Nelson
I'm trying to write a test for my Socket Timeouts patch [1], which fixes
signal handling (notably Ctrl-C == SIGINT == KeyboardInterrupt) on socket
operations using a timeout.  I don't see a portable way to send a signal,
and asking the test runner to press Ctrl-C is a non-starter.  A "real"
signal is needed to interrupt the select() (or equivalent) call, because
that's what wasn't being handled correctly.  The bug should happen on the
other platforms I don't know how to test on.

Is there a portable way to send a signal?  SIGINT would be best, but
another signal (such as SIGALRM) would do, I think.

If not, should I write the test to only work on systems implementing
SIGALRM, the signal I'm using now, or implementing kill(), or what?
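
For context, the SIGALRM arrangement I'm using now looks roughly like this
(a sketch in current Python, not the patch's actual test; the host is just a
placeholder):

    import signal
    import socket

    def check_interrupt_during_timeout(host="example.com", port=80):
        # Deliver a real signal while recv() is blocked inside the timeout's
        # select() call; the bug is that the handler never gets to run.
        def handler(signum, frame):
            raise KeyboardInterrupt
        old = signal.signal(signal.SIGALRM, handler)
        sock = socket.create_connection((host, port), timeout=30)
        try:
            signal.alarm(1)      # SIGALRM arrives in about a second
            sock.recv(1024)      # should raise KeyboardInterrupt, not hang
        except KeyboardInterrupt:
            print("signal handled during a timed socket operation")
        finally:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, old)
            sock.close()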

[1] 

TonyN.:'   
  '  


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-29 Thread Tony Nelson
At 2:38 PM -0700 7/29/06, Josiah Carlson wrote:
>Tony Nelson <[EMAIL PROTECTED]> wrote:
>>
>> I'm trying to write a test for my Socket Timeouts patch [1], which fixes
>> signal handling (notably Ctrl-C == SIGINT == KeyboardInterrupt) on socket
>> operations using a timeout.  I don't see a portable way to send a signal,
>> and asking the test runner to press Ctl-C is a non-starter.  A "real"
>> signal is needed to interrupt the select() (or equivalent) call, because
>> that's what wasn't being handled correctly.  The bug should happen on the
>> other platforms I don't know how to test on.
>>
>> Is there a portable way to send a signal?  SIGINT would be best, but
>> another signal (such as SIGALRM) would do, I think.
>
>According to my (limited) research on signals, Windows signal support is
>horrible.  I have not been able to have Python send signals of any kind
>other than SIGABRT, and then only to the currently running process,
>which kills it (regardless of whether you have a signal handler or not).

Hmm, OK, darn, thanks.  MSWindows does allow users to press Ctrl-C to send a
KeyboardInterrupt, so it's just too bad if I can't find a way to test it
from a script.


>> If not, should I write the test to only work on systems implementing
>> SIGALRM, the signal I'm using now, or implementing kill(), or what?
>
>I think that most non-Windows platforms should have non-braindead signal
>support, though the signal module seems to be severely lacking in
>sending any signal except for SIGALRM, and the os module has its fingers
>on SIGABRT.

The test now checks "hasattr(signal, 'alarm')" before proceeding, so at
least it won't die horribly.


>If someone is looking for a project for 2.6 that digs into all sorts of
>platform-specific nastiness, they could add actual signal sending to the
>signal module (at least for unix systems).

Isn't signal sending the province of kill(2) (or os.kill() in Python)?
Not that I know much about it.

BTW, I picked SIGALRM because I could do it all with one thread.  Reading
POSIX, ISTM that if I sent the signal from another thread, it would bounce
off that thread to the main thread during the call to kill(), at which
point I got the willies.  OTOH, if kill() is more widely available than
alarm(), I'll give it a try, but going by the docs, I'd say it isn't.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 9:42 AM +0200 7/30/06, Martin v. Löwis wrote:
>Tony Nelson schrieb:
>> Hmm, OK, darn, thanks.  MSWindows does allow users to press Ctrl-C to send a
>> KeyboardInterrupt, so it's just too bad if I can't find a way to test it
>> from a script.
>
>You can use GenerateConsoleCtrlEvent to send Ctrl-C to all processes
>that share the console of the calling process.

That looks like it would work, but it seems prone to overkill.  To avoid
killing all the processes running from a console, the test would need to be
run in a subprocess in a new process group.  If the test simply sends the
event to its own process, all the other processes in its process group
would receive the event as well, and probably die.  I would expect that all
the processes sharing the console would die, but even if they didn't when I
tried it, I couldn't be sure that it wouldn't happen elsewhere, say when
run from a .bat file.

Martin, your advice is usually spot-on, but I don't always understand it.
Maybe using it here is just complicated.  I expect that
GenerateConsoleCtrlEvent() can be called through the ctypes module, though
that would make backporting the test to 2.4 a bit more difficult.  It looks
like the subprocess module can be passed the needed creation flag to make a
new process group.  The subprocess can send the event to itself, and could
return the test result in its result code, so that part isn't so bad.  To
avoid adding a new file to the distribution, test_socket.test_main() could
be modified to look for a command line argument requesting the particular
test action.
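
Roughly, the pieces involved could look like this (a sketch only; the
--send-ctrl-c switch is the hypothetical command-line argument mentioned
above, and whether the event really stays confined is exactly the worry
raised two paragraphs up):

    import ctypes
    import subprocess
    import sys

    CTRL_C_EVENT = 0  # value from the Windows SDK

    def run_test_in_new_group():
        # Parent side: run the test body in a child created in its own
        # process group, reading the result back from the exit code.
        return subprocess.call(
            [sys.executable, "test_socket.py", "--send-ctrl-c"],
            creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

    def send_ctrl_c_to_self():
        # Child side: ask the console to generate a Ctrl-C event; a group id
        # of 0 means the processes sharing the calling process's console.
        ctypes.windll.kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)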


>> BTW, I picked SIGALRM because I could do it all with one thread.  Reading
>> POSIX, ISTM that if I sent the signal from another thread, it would bounce
>> off that thread to the main thread during the call to kill(), at which
>> point I got the willies.  OTOH, if kill() is more widely available than
>> alarm(), I'll give it a try, but going by the docs, I'd say it isn't.
>
>Indeed, alarm should be available on any POSIX system.

Well, if alarm() is available, then the test will work.  If not, it will be
silently skipped, as are some other tests already in test_socket.py.  I
can't offhand tell if MSWindows supports alarm(), but RiscOS and OS2 do not.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 11:42 PM +0200 7/30/06, Martin v. Löwis wrote:
>Tony Nelson schrieb:
>>> You can use GenerateConsoleCtrlEvent to send Ctrl-C to all processes
>>> that share the console of the calling process.
>[...]
>> Martin, your advice is usually spot-on, but I don't always understand it.
>> Maybe using it here is just complicated.
>
>This was really just in response to your remark that you couldn't
>find a way to send Ctrl-C programmatically. I researched (in
>the C library sources) how SIGINT was *generated* (through
>SetConsoleCtrlHandler), and that led me to a way to generate [one.]

Well, fine work there!

>I didn't mean to suggest that you *should* use GenerateConsoleCtrlEvent,
>only that you could if you wanted to.

Hmm.  Well, it would make the test possible on MSWindows as well as on OS's
implementing alarm(2).  If I figure out how to build Python on MSWindows, I
might give it a try.  I tried to get MSVC 7.1 via the .Net SDK, but it
installed VS 8 instead, so I'm not quite sure how to proceed.


>> I expect that
>> GenerateConsoleCtrlEvent() can be called through the ctypes module, though
>> that would make backporting the test to 2.4 a bit more difficult.
>
>Well, if there was general utility to that API, I would prefer exposing
>it in the nt module. It doesn't quite fit into kill(2), as it doesn't
>allow to specify a pid of the target process, so perhaps it doesn't
>have general utility. In any case, that would have to wait for 2.6.

A Process Group ID is the PID of the first process put in it, so it's sort
of a PID.  It just means a collection of processes, probably more than one.
It seems to be mostly applicable to MSWindows, and isn't a suitable way to
implement a form of kill(2).

I hope that the Socket Timeouts patch 1519025 can make it into 2.5, or
2.5.1, as it is a bug fix.  As such, it would probably be better to punt
the test on MSWindows than to do a tricky fancy test that might have its
own issues.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 7:23 PM -0400 7/30/06, Tony Nelson wrote:
 ...
>...I tried to get MSVC 7.1 via the .Net SDK, but it
>installed VS 8 instead, so I'm not quite sure how to proceed.
 ...

David Murmann suggested off-list that I'd probably installed the 2.0 .Net
SDK, and that I should install the 1.1 .Net SDK, which is the correct one.
Now I can try to build Python on MSWindows.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 4:34 AM +0200 7/31/06, Martin v. Löwis wrote:
>Tony Nelson schrieb:
>>Hmm. Well, it would make the test possible on MSWindows as well as on
>>OS's implementing alarm(2).  If I figure out how to build Python on
>>MSWindows, I might give it a try.  I tried to get MSVC 7.1 via the .Net
>>SDK, but it installed VS 8 instead, so I'm not quite sure how to proceed.
>
>The .NET SDK (any version) is not suitable to build Python.

I do see the warning in the instructions about it not being an optimizing
compiler.  I've managed to build python.exe, and the rt.bat tests mostly
work -- 2 tests fail, test_popen and test_cmd_line, because of popen()
failing.

Hmm, actually, this might be a real problem with the MSWindows version of
posix_popen() in Modules/posixmodule.c.  The path to my built python.exe is:

"E:\Documents and Settings\Tony Nelson\My 
Documents\Python\pydev\trunk\PCBuild\python.exe"

(lots of spaces in it).  It seems properly quoted in the test and when I do
it by hand, but in a call to popen() it doesn't work:

popen('"E:\Documents and Settings\Tony Nelson\My 
Documents\Python\pydev\trunk\PCBuild\python.exe" -c "import 
sys;sys.version_info"')

The returned file object repr resembles one that does work.  If I just use
"python.exe" from within the PCBuild directory:

popen('python.exe -c "import sys;sys.version_info"')

I get the right version, and that's the only 2.5b2 python I've got, so the
built python must be working, but the path, even quoted, isn't accepted by
MSWindows XP SP2.  Should I report a bug?  It may well just be MSWindows
weirdness, and not something that posixmodule.c can do anything about. 
OTOH, it does work from the command line.  I'll bet I wouldn't have seen a
thing if I'd checked out to "E:\pydev" instead.

>You really need VS 2003; if you don't have it anymore, you might be able
>to find a copy of the free version of the VC Toolkit 2003
>(VCToolkitSetup.exe) somewhere.

I really never had VS 2003.  It doesn't appear to be on microsoft.com
anymore.  I'm reluctant to try to steal a copy.


>Of course, just for testing, you can also install VS Express 2005, and
>use the PCbuild8 projects directory; these changes should work the
>same under both versions.

I'll try that if I have any real trouble with the non-optimized python or
if you insist that it's necessary.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 12:39 AM -0400 7/31/06, Tony Nelson wrote:

>popen('"E:\Documents and Settings\Tony Nelson\My
>Documents\Python\pydev\trunk\PCBuild\python.exe" -c "import
>sys;sys.version_info"')

Ehh, I must admit that I retyped that.  Obviously what I typed would not
work, but what I used was:

python = '"' + sys.executable + '"'
popen(python + ' -c "import sys;sys.version_info"')

So there wasn't a problem with backslashes.  I've also been using raw
strings.  And, as I said, the file objects looked OK, with backslashes
where they should be.  Sorry for the mistyping.

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Testing Socket Timeouts patch 1519025

2006-07-30 Thread Tony Nelson
At 12:58 AM -0400 7/31/06, Tony Nelson wrote:
>At 12:39 AM -0400 7/31/06, Tony Nelson wrote:
>
>>popen('"E:\Documents and Settings\Tony Nelson\My
>>Documents\Python\pydev\trunk\PCBuild\python.exe" -c "import
>>sys;sys.version_info"')
>
>Ehh, I must admit that I retyped that.  Obviously what I typed would not
>work, but what I used was:
>
>python = '"' + sys.executable + '"'
>popen(python + ' -c "import sys;sys.version_info"')
>
>So there wasn't a problem with backslashes.  I've also been using raw
>strings.  And, as I said, the file objects looked OK, with backslashes
>where they should be.  Sorry for the mistyping.

OK, I recognize the bug now.  It's that quote parsing bug in MSWindows
(which I can find again if you want) which can be worked around by using an
extra quote at the front (and maybe also the back):

popen('""E:\Documents ...

Not really a bug in Python at all.
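
For concreteness, a minimal sketch of that workaround, assuming os.popen on
Windows: wrap the entire command line in one extra pair of quotes so that
cmd.exe's quote stripping leaves the inner quoted path intact.

import os, sys

cmd = '"%s" -c "import sys; print sys.version_info"' % sys.executable
output = os.popen('"' + cmd + '"').read()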

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] 2.4.4 fix: Socketmodule Ctl-C patch

2006-10-03 Thread Tony Nelson
I've put a patch for 2.4.4 of the Socketmodule Ctl-C patch for 2.5, at the
old closed bug  .  It passes "make
EXTRAOPS-=unetwork test".

Should I try to put this into the wiki at Python24Fixes?  I haven't used
the wiki before.
-- 

TonyN.:'The Great Writ 
  '  is no more. 
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Polling with Pending Calls?

2006-12-04 Thread Tony Nelson
I think I have a need to handle *nix signals through polling in a library.
It looks like chaining Pending Calls is almost the way to do it, but I see
that doing so would make the interpreter edgy.

The RPM library takes (steals) the signal handling away from its client
application.  It has good reason to be so paranoid, but it breaks the
handling of keyboard interrupts, especially if rpmlib is used in the normal
way:  opened at the beginning, various things are done by the app, closed
at the end.  If there is an extended period in the middle where no calls
are made to rpmlib (say, in yum during the downloading of packages or
package headers), then response to a keyboard interrupt can be delayed for
/minutes/!  Yum is presently doing something awful to work around that
issue.

It is possible to poll rpmlib to find if there is a pending keyboard
interrupt.  Client applications could have such polls sprinkled throughout
them.  I think getting yum, for example, to do that might be politically
difficult.  I'm hoping to propose a patch to rpmlib's Python bindings to do
the polling automagically.

Looking at Python's normal signal handling, I see that Py_AddPendingCall()
and Py_MakePendingCalls(), and PyEval_EvalFrameEx()'s ticker check are how
signals and other async events are done.  I could imagine making rpmlib's
Python bindings add a Pending Call when the library is loaded (or some
such), and that Pending Call would make a quick check of rpmlib's caught
signals flags and then call Py_AddPendingCall() on itself.  It appears that
this would work, and is almost the expected thing to do, but unfortunately
it would cause the ticker check to think that Py_MakePendingCalls() had
failed and needed to be called again ASAP, which would drastically slow
down the interpreter.

Is there a right way to get the Python interpreter to poll something, or
should I look for another approach?

[I hope this message doesn't spend too many days in the grey list limbo.]
-- 

TonyN.:'The Great Writ 
  '  is no more. 
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Polling with Pending Calls?

2006-12-04 Thread Tony Nelson
At 6:07 PM + 12/4/06, Gustavo Carneiro wrote:
>This patch may interest you:
>http://www.python.org/sf/1564547
>
>Not sure it completely solves your case, but it's at least close to
>your problem.

I don't think that patch is useful in this case.  This case is not stuck in
some extension module's poll() call.  The signal handler is not Python's
nor is it under my control (so no chance that it would look at some new
pipe), though the rpmlib Python bindings can look at the state bits it
sets.  The Python interpreter is running full-bore when the secret rpmlib
SIGINT state is needed.  I think the patch is for the exact /opposite/ of
my problem.
-- 

TonyN.:'The Great Writ 
  '  is no more. 
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Polling with Pending Calls?

2006-12-04 Thread Tony Nelson
At 12:48 PM -0500 12/4/06, Tony Nelson wrote:
>I think I have a need to handle *nix signals through polling in a library.
>It looks like chaining Pending Calls is almost the way to do it, but I see
>that doing so would make the interpreter edgy.
 ...

Bah.  Sorry to have put noise on the list.  I'm obviously too close to the
problem to see the simple solution of threading.Timer.  Checking once or
twice a second should be good enough.  Sorry to bother you all.
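
A minimal sketch of that approach (Python 2 spelling; rpmlib_caught_sigint()
is a made-up placeholder for whatever flag check the rpm bindings would
expose, not a real API):

import threading, thread

def rpmlib_caught_sigint():
    # Placeholder: in reality this would ask the rpmlib bindings whether
    # their signal handler has recorded a SIGINT.
    return False

def poll_rpm_signals():
    if rpmlib_caught_sigint():
        thread.interrupt_main()     # raise KeyboardInterrupt in the main thread
    else:
        t = threading.Timer(0.5, poll_rpm_signals)   # check twice a second
        t.setDaemon(True)
        t.start()

poll_rpm_signals()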
-- 

TonyN.:'The Great Writ <mailto:[EMAIL PROTECTED]>
  '  is no more. <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-12-23 Thread Tony Nelson
At 8:42 PM +0100 12/2/06, Martin v. Löwis wrote:
>Jan Claeys schrieb:
>> Like I said, it's possible to split Python without making things
>> complicated for newbies.
>
>You may have that said, but I don't believe its truth. For example,
>most distributions won't include Tkinter in the "standard" Python
>installation: Tkinter depends on _tkinter depends on Tk depends on
>X11 client libraries. Since distributors want to make X11 client
>libraries optional, they exclude Tkinter. So people wonder why
>they can't run Tkinter applications (search comp.lang.python for
>proof that people wonder about precisely that).
>
>I don't think the current packaging tools can solve this newbie
>problem. It might be solvable if installation of X11 libraries
>would imply installation of Tcl, Tk, and Tkinter: people running
>X (i.e. most desktop users) would see Tkinter installed, yet
>it would be possible to omit Tkinter.

Given the current packaging tools, could Python have stub modules for such
things that would just throw a useful exception giving the name of the
required package?  Perhaps if Python just had an example of such a stub
(and Tkinter comes to mind), packagers would customize it and make any
others they needed?
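
A minimal sketch of such a stub, assuming the packager ships it in place of
the real module (all names here are illustrative):

# Tkinter.py -- stub installed when the real Tkinter is packaged separately
raise ImportError(
    "Tkinter is not installed.  Install your distribution's Tkinter "
    "package (for example: yum install tkinter) to use it.")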
-- 

TonyN.:'The Great Writ 
  '  is no more. 
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] splitext('.cshrc')

2007-03-08 Thread Tony Nelson
At 2:16 PM -0500 3/8/07, Phillip J. Eby wrote:
>At 11:53 AM 3/8/2007 +0100, Martin v. Löwis wrote:
>>That assumes there is a need for the old functionality. I really don't
>>see it (pje claimed he needed it once, but I remain unconvinced, not
>>having seen an actual fragment where the old behavior is helpful).
>
>The code in question was a type association handler that looked up loader
>functions based on file extension.  This was specifically convenient for
>recognizing the difference between .htaccess files and other dotfiles that
>might appear in a web directory tree -- e.g. .htpasswd.  The proposed
>change of splitext() would break that determination, because .htpasswd and
>.htaccess would both be considered files with empty extensions, and would
>be handled by the "empty extension" handler.

So, ".htaccess" and "foo.htaccess" should be treated the same way?  Is that
what Apache does?
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] datetime module enhancements

2007-03-11 Thread Tony Nelson
At 5:45 PM +1300 3/11/07, Greg Ewing wrote:
>Jon Ribbens wrote:
>
>> What do you feel "next Tuesday plus 12 hours" means? ;-)
>
>I would say it's meaningless. My feeling is that subtracting
>two dates should give an integer number of days, and that is
>all you should be allowed to add to a date.

Apple's old MacOS had a very flexible LongDateRecord and date utilities.
Nearly anything one could do to a date had a useful meaning.  Perhaps
Python should be different, but I've found Apple's date calculations and
date parsing to be very useful, in a Pythonic sort of way.

From old New Inside Macintosh, _Operating System Utilities_, Ch. 4 "Date,
Time, and Measurement Utilities":

Calculating Dates
-
In the date-time record and long date-time record, any value in the month,
day, hour, minute, or second field that exceeds the maximum value allowed
for that field, will cause a wraparound to a future date and time when you
modify the date-time format.

*   In the month field, values greater than 12 cause a wraparound
to a future year and month.
*   In the day field, values greater than the number of days in a
given month cause a wraparound to a future month and day.
*   In the hour field, values greater than 23 cause a wraparound to
a future day and hour.
*   In the minute field, values greater than 59 cause a wraparound
to a future hour and minute.
*   In the seconds field, values greater than 59 cause a wraparound
to a future minute and seconds.

You can use these wraparound facts to calculate and retrieve information
about a specific date. For example, you can use a date-time record and the
DateToSeconds and SecondsToDate procedures to calculate the 300th day of
1994. Set the month field of the date-time record to 1 and the year field
to 1994. To find the 300th day of 1994, set the day field of the date-time
record to 300. Initialize the rest of the fields in the record to values
that do not exceed the maximum value allowed for that field. (Refer to the
description of the date-time record on page 4-23 for a complete list of
possible values). To force a wrap-around, first convert the date and time
(in this example, January 1, 1994) to the number of seconds elapsed since
midnight, January 1, 1904 (by calling the DateToSeconds procedure). Once
you have converted the date and time to a number of seconds, you convert
the number of seconds back to a date and time (by calling the SecondsToDate
procedure). The fields in the date-time record now contain the values that
represent the 300th day of 1994. Listing 4-6 shows an application-defined
procedure that calculates the 300th day of the Gregorian calendar year
using a date-time record.

Listing 4-6 Calculating the 300th day of the year

PROCEDURE MyCalculate300Day;
VAR
myDateTimeRec:  DateTimeRec;
mySeconds:  LongInt;
BEGIN
WITH myDateTimeRec DO
BEGIN
year := 1994;
month := 1;
day := 300;
hour := 0;
minute := 0;
second := 0;
dayOfWeek := 1;
END;
DateToSeconds (myDateTimeRec, mySeconds);
SecondsToDate (mySeconds, myDateTimeRec);
END;

The DateToSeconds procedure converts the date and time to the number of
seconds elapsed since midnight, January 1, 1904, and the SecondsToDate
procedure converts the number of seconds back to a date and time. After the
conversions, the values in the year, month, day, and dayOfWeek fields of
the myDateTimeRec record represent the year, month, day of the month, and
day of the week for the 300th day of 1994. If the values in the hour,
minute, and second fields do not exceed the maximum value allowed for each
field, the values remain the same after the conversions (in this example,
the time is exactly 12:00 A.M.).

Similarly, you can use a long date-time record and the LongDateToSeconds
and LongSecondsToDate procedures to compute the day of the week
corresponding to a given date. Listing 4-7 shows an application-defined
procedure that computes and retrieves the name of the day for July 4, 1776.
Note that because the year is prior to 1904, it is necessary to use a long
date-time record.
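
For comparison, the same two calculations fall out of ordinary date
arithmetic with Python's datetime module, no wraparound semantics needed
(a quick sketch):

from datetime import date, timedelta
import calendar

day300 = date(1994, 1, 1) + timedelta(days=299)        # 300th day of 1994
print day300                                           # 1994-10-27
print calendar.day_name[date(1776, 7, 4).weekday()]    # Thursday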
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Encouraging developers

2007-03-17 Thread Tony Meyer
On 8/03/2007, at 2:42 AM, Paul Moore wrote:
> On 06/03/07, Scott Dial <[EMAIL PROTECTED]> wrote:
>> Sadly the sf tracker doesn't let you search for "With comments  
>> by". The
>> patch I was making reference to was 1410680. Someone else actually  
>> had
>> wrote a patch that contained bugs and I corrected them. And with  
>> that, I
>> was the last person to comment or review the patch in question.
[...]
> On the other hand, what I've done is similar to what you did - comment
> on someone else's patch. It seems relevant to me that the original
> poster (Tony Meyer) hasn't felt strongly enough to respond on his own
> behalf to comments on his patch. No disrespect to Tony, but I'd argue
> that the implication is that the patch should be rejected because even
> the submitter doesn't care enough to respond to comments!

There is a considerable difference between "doesn't care enough", and  
"has not had time to be able to" (although in this specific case  
"doesn't care enough" is correct).

I have submitted a very small (3?) number of patches, however, I  
suspect that my position is similar to others, so I offer an  
explanation in the hope that it adds value to this thread.

I don't submit patches because I need the problem fixed in the Python  
distribution.  I make the change locally, and either I am  
distributing a frozen application (almost always the case), which  
includes my local fix, or a workaround is made in the application  
source which means that the main Python distribution fix is unneeded  
(e.g. this is what I did with SpamBayes).

The particular patch mentioned is one that uses code (more-or-less)  
from SpamBayes.  SpamBayes has the code - it doesn't matter whether  
it's in the Python distribution or not.  At the time I wrote the  
patch, there were (again) discussions on python-dev about what should  
be done to ConfigParser.  I had some time free in those days, and,  
since I had some code that did more-or-less what Guido indicated was  
the best option, I contributed it (writing unittests, documentation,  
and commenting in the related tickets).

To a certain extent, I considered that my work done.  This was  
something I contributed because many people continually requested it,  
not something I felt a personal need to be added to the distribution  
(as above, that's not a need that I generally feel).

I (much) later got email with patches, and then later email from Mark  
Hammond about the patch (IIRC Mark was looking at it and was thinking  
of fixing it up; I think I forwarded the email I got to him.  OTOH,  
maybe he also sent me fixes - I'm too busy to trawl through email  
archives to figure it out).  At the time, I hoped to fix up the  
errors and submit a revised patch, but my son was born a few weeks  
later and I never found the time.  If the patch had been reviewed  
more quickly, then I probably would have found time to correct it -  
however, everyone else is busy too (if I felt strongly about it, then  
I would have reviewed 5 other patches, as I have in the past, and  
'forced' more quick review, but I did not).

For me, submitting a patch is mostly altruistic - if I do that then  
other people don't also have do the work I did, and hopefully other  
people do that as well, saving me work.  It's not something I  
require, at all.  This isn't something that is easy to make time for.

ISTM that there is value in submitting a patch (including tests and  
documentation, and making appropriate comment in related patches),  
even if that is all that is done (i.e. no follow-up).  If the value  
isn't there without that follow-up 'caring', then that is something  
that should be addressed to 'encourage developers'.  Contributions  
don't only come from people hoping to be 'core' developers some day.

Uncaringly-(with-apologies-to-uncle-timmy),
Tony
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Official version support statement

2007-05-11 Thread Tony Nelson
At 12:58 AM +0200 5/12/07, Martin v. Löwis wrote:
>> "The Python Software Foundation officially supports the current
>> stable major release of Python.  By "supports" we mean that the PSF
>> will produce bug fix releases of this version, currently Python 2.5.
>> We may release patches for earlier versions if necessary, such as to
>> fix security problems, but we generally do not make releases of such
>> unsupported versions.  Patch releases of earlier Python versions may
>> be made available through third parties, including OS vendors."
>
>If such an official statement still can be superseded by an even more
>official PEP, it's fine with me.
>
>However, I would prefer to not use the verb "support" at all. We (the
>PSF) don't provide any technical support for *any* version ever
>released: '''PSF is making Python available to Licensee on an "AS IS"
>basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
>IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
>DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
>FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
>INFRINGE ANY THIRD PARTY RIGHTS.'''
>
>The more I think about it: no, there is no official support for the
>current stable release. We will like produce more bug fix releases,
>but then, we may not if the volunteers doing so lose time or
>interest, and 2.6 comes out earlier than planned.
>
>Why do you need such a statement?

I think Fedora might want it, per recent discussions on fedora-devel-list.

My impertinent attempt:

"The Python Software Foundation maintains the current stable major
release of Python.  By "maintains" we mean that the PSF will produce
bug fix releases of that version, currently Python 2.5.  We have
released patches for earlier versions as necessary, such as to fix
security problems, but we generally do not make releases of such
prior versions.  Patched releases of earlier Python versions may be
made available through third parties, including OS vendors."
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows

2007-05-26 Thread Tony Nelson
At 12:20 PM + 5/26/07, Kristján Valur Jónsson wrote:
>> -Original Message-
>> From: Alexey Borzenkov [mailto:[EMAIL PROTECTED]
>> Sent: Wednesday, May 23, 2007 20:36
>> To: Kristján Valur Jónsson
>> Cc: Martin v. Löwis; Mark Hammond; [EMAIL PROTECTED]; python-
>> [EMAIL PROTECTED]
>> Subject: Re: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows
>>
>> On 5/23/07, Kristján Valur Jónsson <[EMAIL PROTECTED]> wrote:
>> > > > Install in the ProgramFiles folder.
>> > > Only over my dead body. *This* is silly.
>> > Bill doesn't think so.  And he gets to decide.  I mean we do want
>> > to play nice, don't we?  Nothing installs itself in the root anymore,
>> > not since windows 3.1
>>
>> Maybe installing in the root is not good, but installing to "Program
>> Files" is just asking for trouble. All sorts of development tools
>> might suddenly break because of that space in the middle of the path
>> and requirement to use quotes around it. I thus usually install things
>> to :\Programs. I'm not sure if any packages/programs will break
>> because of that space, but what if some will?
>
>Development tools used on windows already have to cope with this.
>Spaces are not going away, so why not bite the bullet and deal
>with them?  Moving forward sometimes means crossing rivers.
 ...

Microsoft's command line cannot cope with two pathnames that must be
quoted, so if the command path itself must be quoted, then no argument to
the command can be quoted.  There are tricky hacks that can work around
this mind-boggling stupidity, but life is simpler if Python itself doesn't
use up the one quoted pathname.  I don't know if Microsoft has had the good
sense to fix this in Vista (which I probably will never use, since an
alternative exists), but they didn't in XP.
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows

2007-05-29 Thread Tony Nelson
At 1:14 PM + 5/29/07, Kristján Valur Jónsson wrote:
>> -Original Message-
>>
>> Microsoft's command line cannot cope with two pathnames that must be
>> quoted, so if the command path itself must be quoted, then no argument
>> to
>> the command can be quoted.  There are tricky hacks that can work around
>> this mind-boggling stupidity, but life is simpler if Python itself
>> doesn't
>> use up the one quoted pathname.  I don't know if Microsoft has had the
>> good
>> sense to fix this in Vista (which I probably will never use, since an
>> alternative exists), but they didn't in XP.
>
>Do you have any references for this claim?
>In my command line on XP sp2, this works just fine:
>
>C:\Program Files\Microsoft Visual Studio 8\VC>"c:\Program Files\TextPad 
>4\TextPad.exe" "c:\tmp\f a.txt" "c:\tmp\f b.txt"
>
>Both the program, and the two file names are quoted and textpad.exe opens
>them both.

I pounded my head against this issue when working on a .bat file a few
years back, until I read the help for cmd and saw the quote logic (and
switched to VBScript).  It's still there, in "help cmd".  I had once found
references to the same issue for the run command in Microsoft's online help.

Perhaps it is fixed in SP2. If so, just change it and don't worry about
users with earlier versions of Windows.
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Universal newlines support in Python 3.0

2007-08-11 Thread Tony Lownds

On Aug 10, 2007, at 11:23 AM, Guido van Rossum wrote:

> Python 3.0 currently has limited universal newlines support: by
> default, \r\n is translated into \n for text files, but this can be
> controlled by the newline= keyword parameter. For details on how, see
> PEP 3116. The PEP prescribes that a lone \r must also be translated,
> though this hasn't been implemented yet (any volunteers?).
>

I'm working on this, but now I'm not sure how the file is supposed to  
be read when
the newline parameter is \r or \r\n. Here's the PEP language:

   buffer is a reference to the BufferedIOBase object to be wrapped  
with the TextIOWrapper.
   encoding refers to an encoding to be used for translating between  
the byte-representation
   and character-representation. If it is None, then the system's  
locale setting will be used
   as the default. newline can be None, '\n', '\r', or '\r\n' (all  
other values are illegal);
   it indicates the translation for '\n' characters written. If None,  
a system-specific default
   is chosen, i.e., '\r\n' on Windows and '\n' on Unix/Linux. Setting  
newline='\n' on input
   means that no CRLF translation is done; lines ending in '\r\n'  
will be returned as '\r\n'.
   ('\r' support is still needed for some OSX applications that  
produce files using '\r' line
   endings; Excel (when exporting to text) and Adobe Illustrator EPS  
files are the most common examples.

Is this ok: when newline='\r\n' or newline='\r' is passed, only that  
string is used to determine
the end of lines. No translation to '\n' is done.
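
Concretely, under that reading an explicit newline means input lines are
split only on that string and returned untranslated; a quick sketch with the
io module:

import io

buf = io.BytesIO(b'one\rtwo\rthree\r')
f = io.TextIOWrapper(buf, encoding='ascii', newline='\r')
print(f.readlines())    # ['one\r', 'two\r', 'three\r']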

> However, the old universal newlines feature also set an attibute named
> 'newlines' on the file object to a tuple of up to three elements
> giving the actual line endings that were observed on the file so far
> (\r, \n, or \r\n). This feature is not in PEP 3116, and it is not
> implemented. I'm tempted to kill it. Does anyone have a use case for
> this? Has anyone even ever used this?
>

This strikes me as a pragmatic feature, making it easy to read a file
and write back the same line ending. I can include it in the patch.

http://www.google.com/codesearch?hl=en&q=+lang:python+%22.newlines%22+show:cz2Fhijwr3s:yutdXigOmYY:YDns9IyEkLQ&sa=N&cd=12&ct=rc&cs_p=http://ftp.gnome.org/pub/gnome/sources/meld/1.0/meld-1.0.0.tar.bz2&cs_f=meld-1.0.0/filediff.py#a0

http://www.google.com/codesearch?hl=en&q=+lang:python+%22.newlines%22+show:SLyZnjuFadw:kOTmKU8aU2I:VX_dFr3mrWw&sa=N&cd=37&ct=rc&cs_p=http://svn.python.org/projects/ctypes/trunk&cs_f=ctypeslib/ctypeslib/dynamic_module.py#a0

Thanks
-Tony

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Universal newlines support in Python 3.0

2007-08-11 Thread Tony Lownds

On Aug 11, 2007, at 10:29 AM, Guido van Rossum wrote:
>> Is this ok: when newline='\r\n' or newline='\r' is passed, only that
>> string is used to determine
>> the end of lines. No translation to '\n' is done.
>
> I *think* it would be more useful if it always returned lines ending
> in \n (not \r\n or \r). Wouldn't it? Although this is not how it
> currently behaves; when you set newline='\r\n', it returns the \r\n
> unchanged, so it would make sense to do this too when newline='\r'.
> Caveat user I guess.

Because there's an easy way to translate, having the option to not  
translate
apply to all valid newline values is probably more useful. I do think  
it's easier
to define the behavior this way.

> OK, if you think you can, that's good. It's not always sufficient (not
> if there was a mix of line endings) but it's a start.

Right

-Tony
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing the GIL (Me, not you!)

2007-09-14 Thread Tony Nelson
At 1:51 AM -0500 9/14/07, Justin Tulloss wrote:
>On 9/14/07, Adam Olsen <[EMAIL PROTECTED]> wrote:
>
>> Could be worth a try. A first step might be to just implement
>> the atomic refcounting, and run that single-threaded to see
>> if it has terribly bad effects on performance.
>
>I've done this experiment.  It was about 12% on my box.  Later, once I
>had everything else setup so I could run two threads simultaneously, I
>found much worse costs.  All those literals become shared objects that
>create contention.
>
>
>It's hard to argue with cold hard facts when all we have is raw
>speculation. What do you think of a model where there is a global "thread
>count" that keeps track of how many threads reference an object? Then
>there are thread-specific reference counters for each object. When a
>thread's refcount goes to 0, it decrefs the object's thread count. If you
>did this right, hopefully there would only be cache updates when you
>update the thread count, which will only be when a thread first references
>an object and when it last references an object.

It's likely that cache line contention is the issue, so don't glom all the
different threads' refcount for an object into one vector.  Keep each
thread's refcounts in a per-thread vector of objects, so only that thread
will cache that vector, or make refcounts so large that each will be in its
own cache line (usu. 64 bytes, not too horrible for testing purposes).  I
don't know all what would be required for separate vectors of refcounts,
but each object could contain its index into the vectors, which would all
be the same size (Go Virtual Memory!).


>I mentioned this idea earlier and it's growing on me. Since you've
>actually messed around with the code, do you think this would alleviate
>some of the contention issues?
>
>Justin

Your idea can be combined with the maxint/2 initial refcount for
non-disposable objects, which should about eliminate thread-count updates
for them.
-- 

TonyN.:'   
  '  ___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing the GIL (Me, not you!)

2007-09-14 Thread Tony Nelson
At 3:30 PM -0400 9/14/07, Jean-Paul Calderone wrote:
>On Fri, 14 Sep 2007 14:13:47 -0500, Justin Tulloss <[EMAIL PROTECTED]> wrote:
>>Your idea can be combined with the maxint/2 initial refcount for
>>> non-disposable objects, which should about eliminate thread-count updates
>>> for them.
>>> --
>>>
>>
>> I don't really like the maxint/2 idea because it requires us to
>>differentiate between globals and everything else. Plus, it's a hack. I'd
>>like a more elegant solution if possible.
>
>It's not really a solution either.  If your program runs for a couple
>minutes and then exits, maybe it won't trigger some catastrophic behavior
>from this hack, but if you have a long running process then you're almost
>certain to be screwed over by this (it wouldn't even have to be *very*
>long running - a month or two could do it on a 32bit platform).

I don't think either of you understand what setting the initial refcount to
maxint/2 for global objects in a thread's refcount vector would do.  It has
/no/ effect on refcounting.  It only prevents the refcount from becoming
zero for objects that can never be released, but which would always have a
zero thread refcount on thread exit, which would cause a useless and
frequent thread count decrement for the object.  As the object can never be
released, its thread count would be initially non-zero, so the thread count
won't be made zero when the thread refcount becomes zero.  The thread count
is shared in the object.  The thread refcount is per thread, and should not
be shared, even at the physical cache line level, if good performance is
desired.

When a new thread is created, part of the thread state would be the
refcount vector.  Hopefully it would mostly be just VM magic, but the
initial part of the vector would contain the immortal objects' refcount,
and those would be set to maxint/2.  Or 1, for that matter.
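
A toy model of that bookkeeping, just to make the flow concrete (ordinary
Python, nothing like a real CPython implementation; "deallocation" is just a
print):

import sys, threading

class Obj(object):
    def __init__(self, immortal=False):
        self.immortal = immortal
        # Shared "thread count": how many threads currently reference this
        # object.  Immortal objects start non-zero so it can never reach 0.
        self.thread_count = 1 if immortal else 0

_local = threading.local()

def _refs():
    # This thread's private refcount vector (a dict, for simplicity).
    if not hasattr(_local, 'refs'):
        _local.refs = {}
    return _local.refs

def incref(obj):
    refs = _refs()
    if obj not in refs:
        # First reference from this thread: seed the per-thread count
        # (huge for immortal objects, per the maxint/2 trick) and bump the
        # shared thread count once.
        refs[obj] = sys.maxint // 2 if obj.immortal else 0
        obj.thread_count += 1
    refs[obj] += 1

def decref(obj):
    refs = _refs()
    refs[obj] -= 1
    if refs[obj] == 0:
        # This thread no longer references obj; update the shared count.
        del refs[obj]
        obj.thread_count -= 1
        if obj.thread_count == 0:
            print 'deallocate', obj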
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Declaring setters with getters

2007-11-01 Thread Tony Lownds

On Nov 1, 2007, at 10:26 AM, [EMAIL PROTECTED] wrote:
> This is a minor nit, as with all decorators that take an argument,  
> it seems like it sets up a hard-to-debug error condition if you were  
> to accidentally forget it:
>
>   @property
>   def foo(): ...
>   @property.set
>   def foo(): ...
>
> would leave you with 'foo' pointing at something that wasn't a  
> descriptor at all.  Is there a way to make that more debuggable?

How about this: give the property instance a method that changes a  
property from read-only to read-write.
No parens, no frame magic. As a small bonus, the setter function would  
not have to be named the same as the
property.

class A(object):
    @property
    def foo(self):
        return 1

    @foo.setter
    def set_foo(self, value):
        print 'set:', value

-Tony

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Signals+Threads (PyGTK waking up 10x/sec).

2007-12-08 Thread Tony Nelson
At 2:01 AM -0800 12/8/07, Guido van Rossum wrote:
 ...
>I'm curious -- is there anyone here who understands why [Py]GTK is
>using signals anyway? It's not like writing robust signal handling
>code in C is at all easy or obvious. If instead of a signal a file
>descriptor could be used, all problems would likely be gone.

I don't think PyGTK does for GTK2 signal emission -- though Johan Dahlin is
authoritative here.  See
 .
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Signals+Threads (PyGTK waking up 10x/sec).

2007-12-08 Thread Tony Nelson
At 11:17 AM +0100 12/8/07, Johan Dahlin wrote:
>Guido van Rossum wrote:
>> Adam, perhaps at some point (Monday?) we could get together on
>> #python-dev and interact in real time on this issue. Probably even
>> better on the phone. This offer is open to anyone who is serious about
>> getting this resolved. Someone please take it -- I'm offering free
>> consulting here!
>>
>> I'm curious -- is there anyone here who understands why [Py]GTK is
>> using signals anyway? It's not like writing robust signal handling
>> code in C is at all easy or obvious. If instead of a signal a file
>> descriptor could be used, all problems would likely be gone.
>
>The timeout handler was added for KeyboardInterrupt to be able to work when
>you want to Ctrl-C yourself out of the gtk.main() loop.

Is that always required (with threads), or are things better now that
Ctrl-C handling is improved (at least in the Socket module, which doesn't
lose signals anymore)?
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Syntax suggestion for imports

2008-01-03 Thread Tony Nelson
At 3:20 PM +0100 1/3/08, Christian Heimes wrote:
>Raymond Hettinger wrote:
>> How about a new, simpler syntax:
 ...
>> * import readline or emptymodule
>
>The syntax idea has a nice ring to it, except for the last idea. As
>others have already said, the name emptymodule is too magic.
>
>The readline example becomes more readable when you change the import to
>
>import readline or None as readline
>
>
>In my opinion the import or as syntax definition is easy to understand
>if you force the user to always have an "as" statement. The None name is
>optional but must be the last name:
>
>import name[, or name2[, or name3 ...] [, or None] as target
 ...

At 11:48 AM -0600 1/3/08, Ron Adam wrote:
 ...
>An alternative possibility might be, rather than "or", reuse "else" before
>import.
 ...

I prefer "else" to "or" but with the original single-statement syntax.

If the last clause could be an expression as well as a module name, what
I've done (used with and copied from BeautifulSoup):

try:
    from htmlentitydefs import name2codepoint
except ImportError:
    name2codepoint = {}

could become:

from htmlentitydefs else ({}) import name2codepoint as name2codepoint

Also:

import foo or (None) as foo
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] fixing tests on windows

2008-04-03 Thread Tony Nelson
At 3:52 PM -0600 4/3/08, Steven Bethard wrote:
>On Thu, Apr 3, 2008 at 3:09 PM, Terry Reedy <[EMAIL PROTECTED]> wrote:
 ...
>Or were you suggesting that there is some programmatic way for the
>test suite to create directories that disallow the Search Service,
>etc.?

I'd think that files and directories created in the TEMP directory would
normally not be indexed on any OS, including MSWindows.  But this is just a
guess.
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Encoding detection in the standard library?

2008-04-21 Thread Tony Nelson
At 1:14 PM -0400 4/21/08, David Wolever wrote:
>On 21-Apr-08, at 12:44 PM, [EMAIL PROTECTED] wrote:
>>
>> David> Is there some sort of text encoding detection module is the
>> David> standard library?  And, if not, is there any reason not
>> to add
>> David> one?
>> No, there's not.  I suspect the fact that you can't correctly
>> determine the
>> encoding of a chunk of text 100% of the time mitigates against it.
>Sorry, I wasn't very clear what I was asking.
>
>I was thinking about making an educated guess -- just like chardet
>(http://chardet.feedparser.org/).
>
>This is useful when you get a hunk of data which _should_ be some
>sort of intelligible text from the Big Scary Internet (say, a posted
>web form or email message), and you want to do something useful with
>it (say, search the content).

Feedparser.org's chardet can't guess 'latin1', so it should be used as a
last resort, just as the docs say.
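
One way to follow that advice (chardet is the third-party module from
feedparser.org; falling back to latin1 works because latin1 can decode any
byte sequence):

import chardet

def guess_decode(raw):
    guess = chardet.detect(raw)                    # {'encoding': ..., 'confidence': ...}
    encoding = guess.get('encoding') or 'latin1'   # last-resort fallback
    return raw.decode(encoding, 'replace')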
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Copying cgi.parse_qs() to the urllib.parse module

2008-05-12 Thread Tony Nelson
At 11:56 PM -0400 5/10/08, Fred Drake wrote:
>On May 10, 2008, at 11:49 PM, Guido van Rossum wrote:
>> Works for me. The other thing I always use from cgi is escape() --
>> will that be available somewhere else too?
>
>
>xml.sax.saxutils.escape() would be an appropriate replacement, though
>the location is a little funky.

At least it's right next to the valuable quoteattr().
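
For reference, a quick sketch of the pair in question:

from xml.sax.saxutils import escape, quoteattr

escape('a < b & c')      # 'a &lt; b &amp; c'
quoteattr('say "hi"')    # the value already wrapped in quotes, ready for an attribute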
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Assignment to None

2008-06-09 Thread Tony Nelson
At 4:46 PM +0100 6/9/08, Michael Foord wrote:
>Alex Martelli wrote:
>> The problem is more general: what if a member  (of some external
>> object we're proxying one way or another) is named print (in Python <
>> 3), or class, or...?  To allow foo.print or bar.class would require
>> pretty big changes to Python's parser -- I have vague memories that
>> the issue was discussed ages ago (possibly in conjunction with some
>> early release of Jython) but never went anywhere much (including
>> proposals to automatically append an underscore to such IDs in the
>> proxying layer, etc etc).  Maybe None in particular is enough of a
>> special case (if it just happens to be hugely often used in dotNET
>> libraries)?
>>
>
>'None' as a member does occur particularly frequently in the .NET world.
>
>A halfway house might be to state (something like):
>
>Python as a language disallows you from having names the same as
>keywords or 'None'. An implementation restriction specific to CPython is
>that the same restriction also applies to member names. Alternative
>implementations are free to not implement this restriction, with the
>caveat that code using reserved member names directly will be invalid
>syntax for CPython.
 ...

Or perhaps CPython should just stop trying to detect this at compile time.
Note that while assignment to ".None" is not allowed, setattr(foo, "None",
1) then referencing ".None" is allowed.

>>> f.None = 1
SyntaxError: assignment to None
>>> f.None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'Foo' object has no attribute 'None'
>>> setattr(f, 'None', 1)
>>> f.None
1
>>>
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status of Issue 2331 - Backport parameter annotations

2008-06-19 Thread Tony Lownds

> At any rate, I am still interested if anyone has a working patch for
> this against the trunk, or any pointers for adapting 53170, words of
> experience when changing the grammar, additions to PEP 306, etc... any
> help would be greatly appreciated!


David,

I can help. I don't have a patch against the trunk but my first  
revisions of the patch
for annotations did handle things like tuple parameters which are  
relevant to 2.6.


-Tony
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Another Proposal: Run GC less often

2008-06-21 Thread Tony Nelson
At 11:28 PM +0200 6/21/08, none wrote:
>Instead of collecting objects after a fixed number of allocations (700)
 ...

I've seen this asserted several times in this thread:  that GC is done
every fixed number of allocations.  This is not correct.  GC is done when
the surplus of allocations less deallocations exceeds a threshold.  See
Modules/gcmodule.c and look for ".count++" and ".count--".  In normal
operation, allocations and deallocations stay somewhat balanced, but when
creating a large data structure, it's allocations all the way and GC runs
often.
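
For reference, the counters and thresholds being described are visible from
Python (values shown are the CPython defaults of that era):

import gc

print gc.get_threshold()   # (700, 10, 10): collection runs when the first count exceeds 700
print gc.get_count()       # first entry is the current allocation surplus driving generation 0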
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Further PEP 8 compliance issues in threading and multiprocessing

2008-09-01 Thread Tony Nelson
At 1:04 PM +1200 9/2/08, Greg Ewing wrote:
>Antoine Pitrou wrote:
>
>> I don't see a problem for trivial functional wrappers to classes to be
>> capitalized like classes.
>
>The problem is that the capitalization makes you
>think it's a class, suggesting you can do things
>with it that you actually can't, e.g. subclassing.

Or that it returns a new object of that kind.


>I can't think of any reason to do this. If you
>don't want to promise that something is a class,
>what possible reason is there for naming it like
>one?
 ...

Lower-case names return something about an object.  Capitalized names
return a new object of the named type (more or less), either via a Class
constructor or a Factory object.  That's /a/ reason, anyway.

I suppose the question is what a capitalized name promises.  If it means
only "Class", then how should "Returns a new object", either from a Class
or a Factory, be shown?  Perhaps a new convention is needed for Factories?
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bsddb alternative (was Re: [issue3769] Deprecate bsddb for removal in 3.0)

2008-09-04 Thread Tony Nelson
At 6:10 AM -0500 9/4/08, [EMAIL PROTECTED] wrote:
>>> Related but tangential question that we were discussing on the
>>> pygr[0] mailing list -- what is the "official" word on a scalable
>>> object store in Python?  We've been using bsddb, but is there an
>>> alternative?  And what if bsddb is removed?
>
>Brett> Beyond shelve there are no official plans to add a specific
>Brett> object store.
>
>Unless something has changed while I wasn't looking, shelve requires a
>concrete module under the covers: bsddb, gdbm, ndbm, dumbdbm.  It's just a
>thin layer over one of them that makes it appear as if you can have keys
>which aren't strings.

I thought that all that was happening was that BSDDB was becoming a
separate project.  If one needs BSDDB with Python2.6, one installs it.
Aren't there other parts of Python that require external modules, such as
Tk?  Using Tk requires installing it.  Such things are normally packaged by
each distro the same way as Python is packaged ("yum install tk bsddb").

Shipping an application to end users is a different problem.  Such packages
should include a private copy of Python as well as of any dependent
libraries, as tested.
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bsddb alternative (was Re: [issue3769] Deprecate bsddb for removal in 3.0)

2008-09-04 Thread Tony Nelson
At 7:37 AM -0700 9/4/08, C. Titus Brown wrote:
>On Thu, Sep 04, 2008 at 10:29:10AM -0400, Tony Nelson wrote:
 ...
>-> Shipping an application to end users is a different problem.  Such packages
>-> should include a private copy of Python as well as of any dependent
>-> libraries, as tested.
>
>Why?  On Mac OS X, for example, Python comes pre-installed -- not sure
>if it comes with Tk yet, but the next version probably will.  On Windows
>there's a handy few-click installer that installs Tk.  Is there some
>reason why I shouldn't be relying on those distributions??

Yes.  An application is tested with one version of Python and one version
of its libraries.  When MOSX updates Python or some other library, you are
relying on their testing of your application.  Unless you are Adobe or
similarly large they didn't do that testing.  Perhaps you have noticed the
threads about installing a new Python release over the Python that came
with an OS, and how bad an idea that is?  This is the same issue, from the
other side.

>Requiring users to install anything at all imposes a barrier to use.
>That barrier rises steeply in height the more packages (with versioning
>issues, etc.) are needed.  This also increases the tech support burden
>dramatically.
 ...

Precisely why one needs to ship a single installer that installs the
complete application, including Python and any other libraries it needs.
-- 

TonyN.:'   <mailto:[EMAIL PROTECTED]>
  '  <http://www.georgeanelson.com/>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Bug in SimpleHTTPRequestHandler.send_head?

2008-09-05 Thread Tony Nelson
At 1:19 PM +0100 9/5/08, Michael Foord wrote:
>Hello Kim,
>
>Thanks for your post. The source code control used for Python is Subversion.
>
>Patches submitted to this list will unfortunately get lost. Please post
>the bug report along with your comments and patch to the Python bug tracker:
>
>http://bugs.python.org/

Patches are usually done with patch, using the output of diff -u.
bugs.python.org links to the Python wiki with Help : Tracker Documentation,
and searching the wiki can turn up some info on bug submission, but I don't
see any step-by-step instructions for newbies.

If you're not yet confident that this is really a bug or don't want to
wrestle with the bug tracker just now, you might get more disscussion on
the newsgroup comp.lang.python.  Probably the subject should not say "bug",
or you might only get suggestions to submit a bug, but rather something
like "Should SimpleHTTPRequestHandler.send_head() change text line
endings?", or whatever you think might provoke discussion.

FWIW, Python 2.6 and 3.0 are near release, so any accepted patch would at
the earliest go into the following versions of Python: 2.7 or 3.1.  Patches
often languish and need a champion to push them through.  Helping review
other patches or bugs is one way to contribute.
-- 

TonyN.:'   
  '  
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Fwd: [ALERT] cbank: OldHashChecker cannot check password, uid is None

2009-01-23 Thread Tony Lownds

Rob and/or Tim,

Can you track this down?

Thanks
-Tony

Begin forwarded message:


From: [email protected]
Date: January 23, 2009 11:16:26 AM PST
To: [email protected]
Subject: [ALERT] cbank: OldHashChecker cannot check password, uid is  
None


OldHashChecker cannot check password, uid is None
Script: /inet/www/clients/cbank/index.cgi
Machine: siteserver3
Directory: /mnt/sitenfs2_clients/cbank



___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Fwd: [ALERT] cityoftoronto: problem saving to products table

2009-01-23 Thread Tony Lownds

Hi Paulus,

Have you fixed these alerts before? We need a script to fix these alerts.

Thanks
-Tony

Begin forwarded message:


From: [email protected]
Date: January 23, 2009 11:00:01 AM PST
To: [email protected]
Subject: [ALERT] cityoftoronto: problem saving to products table

problem saving to products table

Traceback (most recent call last):
 File "/opt/printra/lib/python/printra/sossite/KindManager.py", line  
325, in save_to_products_table

   self.save_to_products_table2(kinds)
 File "/opt/printra/lib/python/printra/sossite/KindManager.py", line  
344, in save_to_products_table2

   if dbkind.update(site, kind):
 File "/opt/printra/lib/python/printra/sossite/KindManager.py", line  
490, in update

   v = fn(kind)
 File "/opt/printra/lib/python/printra/sossite/KindManager.py", line  
474, in _buy_price

   return coerce_qtyspec(kind.buy_qtyspec).price_for_qty(1)
 File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line  
778, in price_for_qty

   q, p, l = _qtypricesplit(q)
 File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line  
671, in _qtypricesplit

   raise ValueError, "bad tuple to qtyonly: %s" % t
ValueError: bad tuple to qtyonly: [(1000, '1000 - $67.42'), (6000,  
'6000 - $356.52'), (1, '1 - $486.2')]


Menu user: rolando
Script: /inet/www/clients/cityoftoronto/index.cgi
Machine: siteserver3
Directory: /mnt/sitenfs2_clients/cityoftoronto



___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Proposing an alternative to PEP 410

2012-02-26 Thread Tony Koker
my 2 cents...

Having been in electronics for over 30 years, I've watched it expand forever
in both directions: bigger (mega, giga, tera, peta, etc.) AND smaller (nano,
pico, femto, atto).

But I agree that it is moot: it is not about the range, which is usually
expressed in an exponential component of the number system being used
(decimal, hex, etc.); it is more a matter of the number of significant digits
being operated on at that point in time. Basically, the zeroes are removed
and tracked separately.

Tony


On Sun, Feb 26, 2012 at 11:12 AM, Larry Hastings  wrote:

>
> On 02/26/2012 06:51 AM, Simon Cross wrote:
>
>> There are good scientific use cases for nanosecond time resolution
>> (e.g. radio astronomy) where one is actually measuring time down to
>> that level and taking into account propagation delays. I have first
>> hand experience [...]
>>
>> I'm not sure whether any of this is intended to be for or against any
>> side in the current discussion. :D
>>
>
> It's probably neutral.  But I do have one question: can you foresee the
> scientific community moving to a finer resolution than nanoseconds in our
> lifetimes?
>
>
> //arry/
>
> __**_
> Python-Dev mailing list
> [email protected]
> http://mail.python.org/**mailman/listinfo/python-dev<http://mail.python.org/mailman/listinfo/python-dev>
> Unsubscribe: http://mail.python.org/**mailman/options/python-dev/**
> tkoker%40gmail.com<http://mail.python.org/mailman/options/python-dev/tkoker%40gmail.com>
>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Proposing an alternative to PEP 410

2012-02-26 Thread Tony Koker
Also, data collection will almost always be done by specialized hardware
and the data stored off for deferred processing and analysis.

Tony

On Sun, Feb 26, 2012 at 11:34 AM, Tony Koker  wrote:

> my 2 cents...
>
> being in electronics for over 30 years, it is forever expanding in both
> directions, bigger mega, giga, tera, peta, etc. AND smaller nano, pico,
> femto, atto.
>
> but, I agree that it is moot, as it is not the range, which is usually
> expressed in an exponential component of the system being used (decimal,
> hex., etc), and it is more a matter of significant number of digits being
> operated on, at that point in time. Basically the zeroes are removed and
> tracked separately.
>
> Tony
>
>
>
> On Sun, Feb 26, 2012 at 11:12 AM, Larry Hastings wrote:
>
>>
>> On 02/26/2012 06:51 AM, Simon Cross wrote:
>>
>>> There are good scientific use cases for nanosecond time resolution
>>> (e.g. radio astronomy) where one is actually measuring time down to
>>> that level and taking into account propagation delays. I have first
>>> hand experience [...]
>>>
>>> I'm not sure whether any of this is intended to be for or against any
>>> side in the current discussion. :D
>>>
>>
>> It's probably neutral.  But I do have one question: can you foresee the
>> scientific community moving to a finer resolution than nanoseconds in our
>> lifetimes?
>>
>>
>> //arry/
>>
>> __**_
>> Python-Dev mailing list
>> [email protected]
>> http://mail.python.org/**mailman/listinfo/python-dev<http://mail.python.org/mailman/listinfo/python-dev>
>> Unsubscribe: http://mail.python.org/**mailman/options/python-dev/**
>> tkoker%40gmail.com<http://mail.python.org/mailman/options/python-dev/tkoker%40gmail.com>
>>
>
>
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Deprecated xmllib module

2004-12-06 Thread Tony Meyer
>> * The average quality of the library improves as we take 
>> out junk (the tzparse module for example) and put in high
>> quality modules like logging, csv, decimal, etc.
> 
> Yes and no.  The added modules have to be relevant to what 
> users want to do.  While (relatively) minor stuff like csv 
> and decimal are good ideas, of course, logging is kind of an 
> "insider's" module.

What do you mean by "insiders"?  The logging module is great (ok, it could
be simpler to use in some cases) for any Python programmer.

> What many more users want, however, are things like an XML 
> parser, or a CSS parser, or a usable HTTP server, or...

Statements like this are pretty common, but there's no evidence (that I've
ever seen pointed to) that someone has *measured* how many people want
modules for X.  People who work with HTML a lot are probably keen on those
things you list, yes.  OTOH, other people (e.g. me) have no use for any of
those, but use CSV and logging daily.  Others want something completely
different.

There's quite a difference between quality and relevance.  It's certainly
worthwhile to ensure that all the standard library modules are of as high a
quality as possible (e.g. email > rfc822).  You'll never be able to get
everyone to agree on the same set of modules that are relevant.

If there are that many people that want (e.g.) a CSS parser, wouldn't there
be a 3rd party one that everyone is using that could be proposed for
addition into the standard library?

=Tony.Meyer

___
Python-Dev mailing list
[EMAIL PROTECTED]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Deprecated xmllib module

2004-12-06 Thread Tony Meyer
> As far as I can tell, there are no CSS or XML 1.1 parsers for 
> Python, period.

This belongs on c.l.p, I suppose, but the first page of google results
includes:




=Tony.Meyer

___
Python-Dev mailing list
[EMAIL PROTECTED]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Please help complete the AST branch

2005-01-03 Thread Tony Meyer
> Perhaps interested parties should take up the discussion on 
> the compiler-sig.

This isn't listed in the 'currently active' SIGs list on
 - is it still active, or will it now be?  If so,
perhaps it should be added to the list?

By 'discussion on', do you mean via the wiki at
?

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Should Python's library modules be written to help the freeze tools?

2005-01-30 Thread Tony Meyer
The Python 2.4 Lib/bsddb/__init__.py contains this:

"""
# for backwards compatibility with python versions older than 2.3, the
# iterator interface is dynamically defined and added using a mixin
# class.  old python can't tokenize it due to the yield keyword.
if sys.version >= '2.3':
exec """
import UserDict
from weakref import ref
class _iter_mixin(UserDict.DictMixin):
...
"""

Because the imports are inside an exec, modulefinder (e.g. when using bsddb
with a py2exe built application) does not realise that the imports are
required.  (The requirement can be manually specified, of course, if you
know that you need to do so).

I believe that changing the above code to:

"""
if sys.version >= '2.3':
    import UserDict
    from weakref import ref
    exec """
class _iter_mixin(UserDict.DictMixin):
"""

Would still have the intended effect and would let modulefinder do its work.
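
For anyone who wants to see the difference concretely, here is a small
illustrative sketch (the script name "uses_bsddb.py" is hypothetical; this is
not part of the proposed patch).  modulefinder only sees real import
statements, so imports buried inside an exec'd string never show up in its
module table:

# Run modulefinder over a script that does nothing but "import bsddb" and
# list which modules it discovered.  UserDict and weakref only appear here
# if they are imported outside the exec string.
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("uses_bsddb.py")
for name in sorted(finder.modules):
    print name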

The main question (to steal Thomas's words) is whether the library modules
should be written to help the freeze tools - if the answer is 'yes', then
I'll submit the above as a patch for 2.5.

Thanks!

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Should Python's library modules be written to help the freeze tools?

2005-01-30 Thread Tony Meyer
[Tony Meyer]
> The main question (to steal Thomas's words) is whether the 
> library modules should be written to help the freeze tools
> - if the answer is 'yes', then I'll submit the above as a
> patch for 2.5.

[Martin v. Löwis]
> The answer to this question certainly is "yes, if possible". In this
> specific case, I wonder whether the backwards compatibility is still
> required in the first place. According to PEP 291, Greg Smith and
> Barry Warsaw decide on this, so I think they would need to comment
> first before any patch can be integrated.
[...]

Thanks!  I've gone ahead and submitted a patch, in that case:

[ 1112812 ] Patch for Lib/bsddb/__init__.py to work with modulefinder
<http://sourceforge.net/tracker/index.php?func=detail&aid=1112812&group_id=5470&atid=305470>

I realise that neither of the people that need to look at this are part of
the '5 for 1' deal, so I need to wait for one of them to have time to look
at it (plenty of time left before 2.5 anyway) but I'll do 5 reviews for the
karma anyway, today or tomorrow.

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Bug tracker reviews

2005-01-30 Thread Tony Meyer
As promised, here are five bug reviews with recommendations.  If they help
[ 1112812 ] Patch for Lib/bsddb/__init__.py to work with modulefinder
get reviewed, then that'd be great.  Otherwise I'll just take the good karma
and run :)

-

[ 531205 ] Bugs in rfc822.parseaddr()


What to do when an email address contains spaces, when RFC2822 says it
can't.  At the moment the spaces are stripped.  Recommend closing "Won't
Fix", for reasons outlined in the tracker by Tim Roberts.

[ 768419 ] Subtle bug in os.path.realpath on Cygwin


Agree with Sjoerd that this is a Cygwin bug rather than a Python one (and no
response from OP for a very long time).  Recommend closing "Won't Fix".

[ 803413 ] uu.decode prints to stderr


The question is whether it is ok for library modules to print to stderr if a
recoverable error occurs.  Looking at other modules, it seems uncommon, but
ok, so recommend closing "Won't fix", but making the suggested documentation
change.
(Alternatively, change from printing to stderr to using warnings.warn, which
would be a simple change and possibly more correct, although giving the same
result).

[ 989333 ] Empty curses module is loaded in win32


Importing curses loads an empty module instead of raising ImportError on
win32.  I cannot duplicate this: recommend closing as "Invalid".

[ 1090076 ] Defaults in ConfigParser.get overrides section values


Behaviour of ConfigParser doesn't match the documentation.  The included
patch for ConfigParser does fix the problem, but might break existing code.
A decision needs to be made which is the desired behaviour, and the tracker
can be closed either "Won't Fix" or "Fixed" (and the fix applied for 2.5 and
2.4.1).

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Is msvcr71.dll re-redistributable?

2005-02-02 Thread Tony Meyer
[Thanks for bringing this up, BTW, Thomas].

[Thomas Heller]
>> The 2.4 python.org installer installs msvcr71.dll on the 
>> target system. 
>>
>> If someone uses py2exe or a similar tool to create a frozen 
>> application, is he allowed to redistribute this msvcr71.dll
>> to other users together with his application or not, even if
>> he doesn't own MSVC?

[Vincent Wehren]
> According to the EULA,

Is that the EULA of MS VC++?

> you may distribute anything listed in redist.txt:

And, just to be clear, msvcr71.dll is in redist.txt?

> """2.2Redistributable Code-General.   Microsoft grants you a 
> nonexclusive, royalty-free right to reproduce and distribute 
> the object code form of any portion of the Software listed in
> REDIST.TXT ("Redistributable Code").  For general redistribution 
> requirements for Redistributable Code, see Section 3.1, below."""

Is it legit to redistribute an EULA?  If so, would you mind sending me a
copy of this (off-list)?

> So the right to distribute is coupled to the a) the EULA and b) 
> redist.txt. (As a side note, the Microsoft Visual C++ Toolkit 
> 2003 for example contains NO redistributables per redist.txt).

I'm not that familiar with the names of all these things.  Is the "Microsoft
Visual C++ Toolkit 2003" the free thing that you can get?

> In the case of not owning a compiler at all, chances seem pretty slim 
> you have any rights to distribute anything.

Well, I 'own' a copy of gcc, which is a compiler .

Can anyone here suggest a way to get around this?  As a specific example:
the SpamBayes distribution includes a py2exe binary, and it would be nice
(although not essential) to build this with 2.4.  However, at the moment my
name goes down as the release manager, and I don't have (AFAICT) a licence
to redistribute msvcr71.dll.

Should people in this situation just stick with 2.3 or buy a copy of a MS
compiler?

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Is msvcr71.dll re-redistributable?

2005-02-02 Thread Tony Meyer
[Thomas Heller]
>> For the spambayes binary, maybe there should be another 
>> person adding the msvcr71.dll to the distribution that Tony
>> builds?  Someone who has a MSVC license, and also is developer
>> on the spambayes project?

[Tim Peters]
> To the best of my knowledge, Tony is distributing my duly 
> licensed copy of msvcr71.dll with spambayes.  And so long as 
> I remain totally ignorant of what Tony actually does, that 
> will remain my best knowledge.  Win-win . 

That solves the specific SpamBayes problem.  It still seems like this is
somewhat of a PITA for people wanting to build frozen Windows apps with
Python 2.4, though.  OTOH, I can't personally think of anything (apart from
the it'll-never-fly go back to VC6 solution or the bound-to-be-terrible
static linking solution) that the Python developers can do about it.

(Well, there's that chap from Microsoft at PyCon, right?  How about one of
you convince him to convince Microsoft to give all Python developers a
licence to redistribute msvcr71.dll?  ).

BTW, this bit of the EULA isn't great:

""(iii) to distribute the Licensee Software containing the Redistributables
pursuant to an end user license agreement (which may be "break-the-seal",
"click-wrap" or signed), with terms no less protective than those contained
in this EULA;"""

The PSF licence is probably somewhat less protective than that one.  I
suppose the PSF licence really applies to the source, though, and not the
built binary.  Or something like that :)

(Users giving the software directly to someone else, rather than downloading
from the official site, is probably covered by:

"""You also agree not to permit further distribution of the Redistributables
by your end users except you may permit further redistribution of the
Redistributables by your distributors to your end-user customers if your
distributors only distribute the Redistributables in conjunction with, and
as part of, the Licensee Software and you and your distributors comply with
all other terms of this EULA."""

Where the users become our redistributors.)

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Is msvcr71.dll re-redistributable?

2005-02-02 Thread Tony Meyer
(I should point out the thread that starts here, too:



in case anyone isn't aware of it).

> Sounds like this puts all Python users in the clear, since 
> Python is the Licensee Software in that case.  So, anybody can
> distribute msvcr71 as "part of" Python.

I guess it would really take a lawyer (well, probably several) to say
whether distributing a frozen application is distributing Python or not.

> OTOH, the other wording sounds like Python itself has to have 
> a click-wrap, tear-open, or signature EULA!  IOW, the EULA
> appears to prohibit free distribution of the runtime with a
> program that has no EULA.
> 
> So, in an amusing turn of events, the EULA actually appears 
> to forbid the current offering of Python for Windows, since
> it does not have such a EULA.

I presume that adding a "click-wrap" EULA to the Python .msi would not be
difficult.  Lots of other .msi's have "click-wrap" licenses, so there must
be some sample code that can be used.  The license is already in the
distribution, it would just be displayed at an additional time.

The EULA has to be no less restrictive than the MSVC one (presumably only in
relation to the bits of MSVC that are being redistributed), so I guess a
section at the end of the PSF license that duplicates the relevant bits of
the MSVC one would work.  (Of course, IANAL).

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] python-dev Summary for 2005-02-01 through 2005-02-14[draft]

2005-03-06 Thread Tony Meyer
Somewhat slower, but here are two more threads from me (email is mostly a
weekday thing for me, and the last few days were full of sun, wine, food and
jazz.  Well, and work.  But working with sun, wine, food and jazz, so it's
hard to complain too much).  Feedback will not be ignored :)

--
More licensing issues - redistribution
--

As most people know, one of the major changes between the Windows builds of
Python 2.3 and 2.4 is that 2.4 is built with VC7, rather than VC6.  One of
the consequences of this change is that 2.4 links with the Microsoft DLL
msvcr71.dll, which only some people have, rather than msvcrt.dll, which
pretty much all Windows users have.

The Windows Python 2.4 distribution installs msvcr71.dll, so it's there when
needed.  However, those building frozen applications (e.g. with py2exe) need
to ensure that their users have msvcr71.dll.

After going through the EULA's for both the commercial and free-to-use
Microsoft compilers, it seems that redistributing msvcr71.dll is acceptable,
if the re-distributor owns a copy of the commercial (not free) compiler,
includes an EULA agreement in one of various forms (e.g. 'click-wrap'), and
follows various other minor conditions (note that just about every message
in this thread contains "IANAL, but").

This leaves those without VC7 unable to redistribute msvcr71, unless, as
some suggested, distributing a frozen Python application can be considered
as redistributing Python (and the various other minor conditions are
followed).

In an interesting twist, it appears that the official Windows Python 2.4
distribution is in breach of the EULA, as a 'click-wrap' license is
required, and is not present.  This element of the thread died without
reaching a conclusion, however.

If you *are* a lawyer (with expertise in this area), and would like to
comment, then please do!

Contributing threads:
   - `Is msvcr71.dll re-redistributable?`__

--
Avoiding signs in memory addresses
--

Troels Walsted Hansen pointed out that builtin_id() can return a negative
number in Python 2.4 (and can generate a warning in 2.3).  Some 2.3 modules
(but no 2.4 ones) have code to work around this, but Troels suggested that a
better solution would be to simply have builtin_id() return an unsigned long
integer.  The consensus was that this would be a good idea, although nothing
has been checked in yet, and so this will probably stagnate without someone
submitting a patch (or at least a bug report).
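
A sketch of the sort of workaround alluded to above (this particular helper
is illustrative only, and is not quoted from the thread or from the standard
library)::

    import sys

    def positive_id(obj):
        """Return id(obj) wrapped into the non-negative range."""
        result = id(obj)
        if result < 0:
            # id() currently reports the address as a signed C long; wrap it
            # around into the equivalent unsigned value.
            result += 2 * (sys.maxint + 1)
        return result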

Contributing threads: 
   - `builtin_id() returns negative numbers`__

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] python-dev Summary for 2005-02-15 through 2005-02-28[draft]

2005-03-06 Thread Tony Meyer
> I am not expecting the candidates for taking of the Summaries 
> to write stuff for this one (although I wouldn't mind it  =).

In penance for being late with the other ones, here are a summaries for a
couple of skipped threads for this period:

---
Slow unit tests should be distinguished
---

Guido clarified that unit tests should distinguish between "regular" tests
and slow ones by use of the unit test 'resource' keys, as a result of Peter
Åstrand asking for comments about bug #1124637, which complained that
test_subprocess is too slow.  The suggested solution was to add another
resource for subprocess, so that generally a quick version would run, but a
longer, more thorough test would run with -uall or -usubprocess.  Along the
way, it was discovered that the reason that Windows already ran
test_subprocess quickly was because there was code special-casing it to be
fast.  The resource solution was checked in, although Windows was left
special-cased.
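
A small illustrative sketch of the mechanism being described (not code taken
from the tracker item)::

    from test import test_support

    def test_slow_subprocess_cases():
        # Raises test_support.ResourceDenied unless regrtest was invoked with
        # -uall or -usubprocess, so the quick tests still run by default.
        test_support.requires("subprocess")
        # ...the long-running checks would go here...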

Contributing threads:
   - `[ python-Bugs-1124637 ] test_subprocess is far too slow (fwd)
 
`__

---
Clarification of the '5 for 1' deal
---

It seems that the offer that some python-dev'ers have made to review a patch
in exchange for reviews of five (originally ten) other patches is finally
being taken up by various people.  However, python-dev traffic has increased
with patch and bug reviews, and the question was posed whether reviews
should be posted in general, or only for this specific deal.

The answer is that the comments should also be entered via the SourceForge
tracking system, but that a brief message covering a batch (rather than
individual) of reviews is acceptable for python-dev, at least for now.  New
reports should almost never be posted to python-dev, however, and should be
entered via the tracking system.

This offer isn't official policy, but a reference to it will be added to
Brett's summary of the development process.  However, people should also
remember that it may take developers some time to find time to deal with
reviews, and so have patience after posting their review.

Contributing threads:
   - `discourage patch reviews to the list?
 
`__
   - `Some old patches
 
`__
   - `Five review rule on the /dev/ page?
 
`__

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] RELEASED Python 2.4.1, release candidate 1

2005-03-10 Thread Tony Meyer
[Martin v. Löwis]
>> I'd like to encourage feedback on whether the Windows 
>> installer works for people. It replaces the VBScript part in the
>> MSI package with native code, which ought to drop the dependency on 
>> VBScript, but might introduce new incompatibilities.

[Tim Peters]
> Worked fine here.  Did an all-default "all users" install, 
> WinXP Pro SP2, from local disk, and under an account with 
> Admin rights.  I uninstalled 2.4 first.  I suppose that's the 
> least stressful set of choices I could possibly have made, 
> but at least it confirms a happy baseline. 

Also works fine for me with:

 * WinXP Pro SP2, from local disk, with admin rights, all defaults, over the
top of 2.4.0
 
 * Win2k SP4, from network disk, without admin rights, all defaults, with no
previous 2.4

 * Win2k SP4 (different machine), from local disk, with admin rights,
defaults apart from skipped test suite, over the top of 2.4.0

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Python2.4.1c1 and win32com

2005-03-11 Thread Tony Meyer
> Win32com generates Python-files for use with com interfaces, 
> using the make-py.py utility.
> 
> The generated files are OK with Python2.3.5
> 
> The generated files crash the Python interpreter with Python 2.4
> 
> Under Python 2.4.1c1, They give a syntax error!?
> 
> The files unfortunately are very big, nearly 2Mb each, 
> although they compress very well (270Kb).
[...]
> Anyone who can help or offer any suggestions?

I believe this is a pywin32 bug, which has been fixed in (pywin32) CVS and
will be fixed for the next build.  It's certainly a problem with 2.4.0 as
well as 2.4.1.

The pywin32 mailing list archives have more details, as does the tracker for
the project.

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] rationale for the no-new-features approach

2005-03-16 Thread Tony Meyer
[Bob Ippolito]
 try:
  set
 except NameError:
  from sets import Set as set

 You don't need the rest.

[Skip Montanaro]
>>> Sure, but then pychecker bitches about a statement that appears to
>>> have no effect. ;-)

[Bob Ippolito]
>> Well then fix PyChecker to look for this pattern :)

+1.

[Gregory P. Smith]
> or make it even uglier to hide from pychecker by writing that as:
> 
> exec("""
> try:
>     set
> except NameError:
>     from sets import Set as set
> """)

I presume that was somewhat tongue-in-cheek, but if it wasn't, please
reconsider.  Modulefinder isn't able to realise that set (or sets.Set) is
needed with the latter (a problem of this very nature was just fixed with
bsddb), which causes trouble for people later on.

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Tony Meyer
> To avoid the exception in the discard method, it could be 
> implemented as:
> 
> def discard(self, element):
>     """Remove an element from a set if it is a member.
>
>     If the element is not a member, do nothing.
>     """
>     try:
>         self._data.pop(element, None)
>     except TypeError:
>         transform = getattr(element,
>                             "__as_temporarily_immutable__", None)
>         if transform is None:
>             raise  # re-raise the TypeError exception we caught
>         del self._data[transform()]
[...]
> But the dict.pop method is about 12 times faster. Is this worth doing?

The 2.4 builtin set's discard function looks like it does roughly the same
as the 2.3 sets.Set.  Have you tried comparing a C version of your version
with the 2.4 set to see if there are speedups there, too?

IMO keeping the sets.Set version as clean and readable as possible is nice,
since the reason this exists is for other implementations (Jython, PyPy,
...) and documentation, right?  OTOH, speeding up the CPython implementation
is nice and it's read by many fewer people.
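
(For anyone wanting to check the relative cost of the two pure-Python
approaches themselves, a rough, illustrative timing harness - the numbers are
not measured here - might look like this:)

import timeit

setup = "d = dict.fromkeys(range(1000))"
exc_stmt = """\
try:
    del d['missing']
except KeyError:
    pass
"""
pop_stmt = "d.pop('missing', None)"

# Time the exception-based delete against dict.pop() on a missing key.
print timeit.Timer(exc_stmt, setup).timeit(number=100000)
print timeit.Timer(pop_stmt, setup).timeit(number=100000)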

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Tony Meyer
>>> But the dict.pop method is about 12 times faster. Is this 
>>> worth doing?
>>
>> The 2.4 builtin set's discard function looks like it does 
>> roughly the same as the 2.3 sets.Set.  Have you tried comparing
>> a C version of your version with the 2.4 set to see if there are
>> speedups there, too?
> 
> Ah. I had forgotten it was builtin - I'd found the python 
> implementation and concluded the C implementation didn't make
> it into 2.4 for some reason... 8-)
> 
> Yes, the builtin set.discard() method is already faster than 
> dict.pop().

The C implementation has this code:

"""
if (PyDict_DelItem(so->data, item) == -1) {
    if (!PyErr_ExceptionMatches(PyExc_KeyError))
        return NULL;
    PyErr_Clear();
}
"""

Which is more-or-less the same as the sets.Set version, right?  What I was
wondering was whether changing that C to a C version of your dict.pop()
version would also result in speedups.  Are Exceptions really that slow,
even at the C level?

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] python-dev Summary for 2005-04-16 through 2005-04-30 [draft]

2005-05-05 Thread Tony Meyer
Here's April Part Two.  If anyone can take their eyes off the anonymous block
threads for a moment and give this a once-over, that would be great!  Please
send any corrections or suggestions to Tim (tlesher at gmail.com), Steve
(steven.bethard at gmail.com) and/or me, rather than cluttering the list.
Ta!

==
Summary Announcements
==

---
Exploding heads
---

After a gentle introduction for our first summary, python-dev really let
loose last fortnight; not only with the massive PEP 340 discussion, but
also more spin-offs than a `popular`_ `TV`_ `series`_, and a few
stand-alone threads.

Nearly a week into May, and the PEP 340 talk shows no sign of abating;
this is unfortunate, since Steve's head may explode if he has to write
anything more about anonymous blocks.  Just as well there are three of us!

.. _popular: http://imdb.com/title/tt0060028/
.. _TV: http://imdb.com/title/tt0098844/
.. _series: http://imdb.com/title/tt0247082/

[TAM]

---
PEP 340
---

A request for anonymous blocks by Shannon -jj Behrens launched a
massive discussion about a variety of related ideas. This discussion
is split into different sections for the sake of readability, but
as the sections are extracted from basically the same discussion,
it may be easiest to read them in the following order:

1. `Localized Namespaces`_

2. `The Control Flow Management Problem`_

3. `Block Decorators`_

4. `PEP 310 Updates Requested`_

5. `Sharing Namespaces`_

6. `PEP 340 Proposed`_

[SJB]


=
Summaries
=


Localized Namespaces


Initially, the "anonymous blocks" discussion focused on introducing
statement-local namespaces as a replacement for lambda expressions.
This would have allowed localizing function definitions to a single
namespace, e.g.::

    foo = property(get_foo) where:
        def get_foo(self):
            ...

where get_foo is only accessible within the ``foo = ...`` assignment
statement. However, this proposal seemed mainly to be motivated by a
desire to avoid "namespace pollution", an issue which Guido felt was not
really that much of a problem.


Contributing threads:

- `anonymous blocks
`__

[SJB]


---
The Control Flow Management Problem
---

Guido suggested that if new syntax were to be introduced for "anonymous
blocks", it should address the more important problem of being able to
extract common patterns of control flow. A very typical example of such
a problem, and thus one of the recurring examples in the thread, is
that of a typical acquire/release pattern, e.g.::

    lock.acquire()
    try:
        CODE
    finally:
        lock.release()

Guido was hoping that syntactic sugar and an appropriate definition of
locking() could allow such code to be written as::

    locking(lock):
        CODE

where locking() would factor out the acquire(), try/finally and
release().  For such code to work properly, ``CODE`` would have to
execute in the enclosing namespace, so it could not easily be converted
into a def-statement.

Some of the suggested solutions to this problem:

- `Block Decorators`_

- `PEP 310 Updates Requested`_

- `Sharing Namespaces`_

- `PEP 340 Proposed`_


Contributing threads:

- `anonymous blocks
`__

[SJB]



Block Decorators


One of the first solutions to `The Control Flow Management Problem`_ was
"block decorators".  Block decorators were functions that accepted a
"block object" (also referred to in the thread as a "thunk"), defined a
particular control flow, and inserted calls to the block object at the
appropriate points in the control flow. Block objects would have been
much like function objects, in that they encapsulated a sequence of
statements, except that they would have had no local namespace; names
would have been looked up in their enclosing function.

Block decorators would have wrapped sequences of statements in much the
same way as function decorators wrap functions today. "Block decorators"
would have allowed locking() to be written as::

    def locking(lock):
        def block_deco(block):
            lock.acquire()
            try:
                block()
            finally:
                lock.release()
        return block_deco

and invoked as::

    @locking(lock):
        CODE

The implementation of block objects would have been somewhat
complicated if a block object was a first class object and could be
passed to other functions.  This would have required all variables used
in a block object to be "cells" (which provide slower access than
normal name lookup). Additionally, first class block objects, as a type
of callable, would have confused the meaning of the return statement -
should the return exit the block or the enclosing function?

Re: [Python-Dev] Decimal FAQ

2005-05-22 Thread Tony Meyer
> Q.  I'm writing a fixed-point application to two decimal places.
> Some inputs have many places and needed to be rounded.  Others
> are not supposed to have excess digits and need to be validated.
> What methods should I use?
> 
> A.  The quantize() method rounds to a fixed number of decimal
> places.  If the Inexact trap is set, it is also useful for
> validation:
> 
> >>> TWOPLACES = Decimal(10) ** -2
> >>> # Round to two places
> >>> Decimal("3.214").quantize(TWOPLACES)
> Decimal("3.21")
> >>> # Validate that a number does not exceed two places
> >>> Decimal("3.21").quantize(TWOPLACES,
> context=Context(traps=[Inexact]))
> Decimal("3.21")

I think an example of what happens when it does exceed two places would make
this example clearer.  For example, adding this to the end of that:

>>> Decimal("3.214").quantize(TWOPLACES, context=Context(traps=[Inexact]))
Traceback (most recent call last):
[...]
Inexact: Changed in rounding

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 release schedule

2006-02-15 Thread Tony Meyer
> We still need a release manager.  No one has heard from Anthony.

It is the peak of the summer down here.  Perhaps he is lucky enough  
to be enjoying it away from computers for a while?

=Tony.Meyer
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] DRAFT: python-dev Summary for 2006-01-16 through 2006-01-31

2006-03-01 Thread Tony Meyer
Here's the draft for the second half of January.  First half of
February on its way soon.  Any
suggestions/corrections/additions/comments welcome.  Thanks!  -TAM

=
Announcements
=

-
Google summer internships
-

Google is looking to fill an unprecedented number of `student intern
positions`_ this (US) summer, at several US locations (Mountain View,
Santa Monica, Kirkland (Wash.), and New York).  The perks are
incredible, and Google is not just looking for software development
interns - there are also product management positions, and UI design
and usability analyst positions.

Contributing thread:

 - `Know anyone interested in a Google internship?
`__

 .. _student intern positions: http://www.google.com/jobs/intern.html

[TAM]

---
Possible Summer of PyPy
---

Armin Rigo announced the possibility of a "Summer of PyPy", which
would follow the style of Google's "Summer of Code" in funding
students to work on various aspects of PyPy.  The possibility has not
been confirmed yet, but we'll let you know when there's more info.

Contributing thread:

 - `Summer of PyPy
`__

[SJB]

=
Summaries
=

---
Integers and strings in different bases
---

Alex Martelli requested the inverse of ``int(<string>, <base>)`` that
would convert an int into a string with digits in the appropriate
base. There was a lot of discussion of exactly where such
functionality should go. Among the suggested locations were:

* The str constructor, e.g. ``str(<int>, <base>)``
* A str classmethod, e.g. ``str.from_int(<int>, <base>)``
* An encoding method, e.g. ``str(<int>).encode("base<base>")``
* A method on ints, e.g. ``<int>.to_base(<base>)``
* A format code, e.g. ``"%b" % <int>``
* A builtin function, e.g. ``base(<int>, <base>)``
* A function in the math module, e.g. ``math.base(<int>, <base>)``

People seemed generally to like the builtin function or math module
function options, though there was some debate as to the best name for
the function.  Guido suggested letting the proposal sit for a week or
two to see if anyone could come up with a better name or suggest a
better location for the function.  (However, he seemed generally in
favor of the proposal, suggesting that hex() and oct() should be
deprecated and removed in a future version of Python.)  No decisions
had been made at the time this summary was written.
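
To make the discussion concrete, here is one minimal sketch of such a
function (the name ``to_base`` and its signature are purely illustrative, not
anything agreed on in the thread)::

    def to_base(n, base, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
        """Return n as a string of digits in the given base (2-36)."""
        if not 2 <= base <= len(digits):
            raise ValueError("base must be in 2..%d" % len(digits))
        if n < 0:
            return "-" + to_base(-n, base, digits)
        out = []
        while True:
            n, r = divmod(n, base)
            out.append(digits[r])
            if n == 0:
                break
        return "".join(reversed(out))

    assert int(to_base(255, 16), 16) == 255  # round-trips with int(s, base)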

Contributing threads:

 - `str with base
`__

[SJB]


PEP 355: Path - Object oriented filesystem paths


Björn Lindqvist resuscitated the idea of incorporating a Path class
based on Jason Orendorff's path module to the standard library by
creating `PEP 355`_.  There was some general discussion (and
corresponding PEP changes), with much discussion centred on the use of
"/" as a join-with-separator operator, which was eventually dropped
from the PEP.

More discussion considered whether Path should subclass string or not.
 Subclassing string provides the advantage that Paths can be used in
the majority of places where strings are currently used, without
modification.  However, there are many methods of strings that do not
seem appropriate for Path objects.  Jason Orendorff would prefer for
Paths to not subclass strings, and a new format specifier (e.g. for
PyArg_ParseTuple()) be created for use with Paths.

There was general agreement that the utility of the module would be
highest when Path objects could be seamlessly used where string paths
were previously used.  The debate centred on whether subclassing string
was the best way to do this or not.  Path objects clearly are not
string objects (e.g. __iter__ and join() are nonsensical with paths). 
Changing the C API so that Paths are accepted where necessary was the
suggested solution, although the PEP (at the time of writing the
summary) still subclasses Path from string.
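
A bare-bones illustration of the string-subclass approach being debated (this
is neither Jason's module nor the PEP's reference implementation)::

    import os

    class Path(str):
        def joinpath(self, *parts):
            return Path(os.path.join(self, *parts))
        def basename(self):
            return Path(os.path.basename(self))
        def exists(self):
            return os.path.exists(self)

    p = Path("/tmp").joinpath("example.txt")  # usable wherever a str path is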

Changing the methods from the names used by the os module and Jason's
module to ones that conform to PEP 8 was recommended.  Jason explained
that the reason that there is so much cruft in his path module is that
the design is heavily skewed toward people already familiar with the
existing standard library equivalents.  He feels that a standard
library Path class should have different design goals: less
redundancy, fewer methods, and PEP 8 compliant names.

 .. _PEP 355: http://www.python.org/peps/pep-0355.html

Contributing threads:

 - `The path module PEP
`__
 - `/ as path join operator (was: Re: The path module PEP)
`__
 - `/ as path join operator


[Python-Dev] python-dev Summary for 2005-05-16 through 2005-05-31 [draft]

2005-06-22 Thread Tony Meyer
You may have noticed that the summaries have been absent for the last month
- apologies for that; Steve has been dutifully doing his part, but I've been
caught up with other things.

Anyway, Steve will post the May 01-15 draft shortly, and here's May 16-31.
We should be able to get the first June one done fairly shortly, too.

If anyone has time to flick over this and let me/Steve/Tim know if you have
corrections, that would be great; thanks!

=Tony.Meyer

=
Announcements
=


QOTF


We have our first ever Quote of the Fortnight (QOTF), thanks to
the wave of discussion over `PEP 343`_ and Jack Diederich:

    I still haven't gotten used to Guido's heart-attack inducing early
    enthusiasm for strange things followed later by a simple
    proclamation I like.  Some day I'll learn that the sound of
    fingernails on the chalkboard is frequently followed by candy for
    the whole class.

See, even threads about anonymous block statements can end happily! ;)

.. _PEP 343: http://www.python.org/peps/pep-0343.html

Contributing thread:

- `PEP 343 - Abstract Block Redux
`__

[SJB]

--
First PyPy Release
--

The first release of `PyPy`_, the Python implementation of Python, is
finally available. The PyPy team has made impressive progress, and
the current release of PyPy now passes around 90% of the Python
language regression tests that do not depend deeply on C-extensions.
The PyPy interpreter still runs on top of a CPython interpreter
though, so it is still quite slow due to the double-interpretation
penalty.

.. _PyPy: http://codespeak.net/pypy

Contributing thread:

- `First PyPy (preview) release
`__


[SJB]


Thesis: Type Inference in Python


Brett C. successfully defended his master's thesis `Localized Type
Inference of Atomic Types in Python`_, which investigates some of the
issues of applying type inference to the current Python language, as
well as to the Python language augmented with type annotations.
Congrats Brett!

.. _Localized Type Inference of Atomic Types in Python:
http://www.drifty.org/thesis.pdf

Contributing thread:

- `Localized Type Inference of Atomic Types in Python
`__


[SJB]

=
Summaries
=

---
PEP 343 and With Statements
---

The discussion on "anonymous block statements" brought itself closer
to a real conclusion this fortnight, with the discussion around
`PEP 343`_ and `PEP 3XX`_ converging not only on the semantics for
"with statements", but also on semantics for using generators as
with-statement templates.

To aid in the adaptation of generators to with-statements, Guido
proposed adding close() and throw() methods to generator objects,
similar to the ones suggested by `PEP 325`_ and `PEP 288`_. The
throw() method would cause an exception to be raised at the point
where the generator is currently suspended, and the close() method
would use throw() to signal the generator to clean itself up by
raising a GeneratorExit exception.

People seemed generally happy with this proposal and -- believe it or
not -- we actually went an entire eight days without an email about
anonymous block statements!! It looked as if an updated `PEP 343`_,
including the new generator functionality, would be coming early the
next month. So stay tuned. ;)

.. _PEP 288: http://www.python.org/peps/pep-0288.html

.. _PEP 325: http://www.python.org/peps/pep-0325.html

.. _PEP 343: http://www.python.org/peps/pep-0343.html

.. _PEP 3XX: http://members.iinet.net.au/~ncoghlan/public/pep-3XX.html

Contributing threads:

- `PEP 343 - Abstract Block Redux
`__
- `Simpler finalization semantics (was Re: PEP 343 - Abstract Block Redux)
`__
- `Example for PEP 343
`__
- `Combining the best of PEP 288 and PEP 325: generator exceptions and
cleanup
`__
- `PEP 343 - New kind of yield statement?
`__
- `PEP 342/343 status?
`__
- `PEP 346: User defined statements (formerly known as PEP 3XX)
`__

[SJB]

---
Decimal FAQ
---

Raymond Hettinger suggested that a decimal FAQ would shorten the module's
learning curve, and drafted one.  There  were no objections, but few
adjustments (to the list, at least).  Raymond will probably make the FAQ
available at  some point.

Contributing 

[Python-Dev] python-dev Summary for 2005-06-01 through 2005-06-15 [draft]

2005-06-24 Thread Tony Meyer
You've just read two summaries, but here is another one, as we come back up
to speed.  If at all possible, it would be great if we could send this out
in time to catch people for the bug day (very tight, we know), so if anyone
has a chance to check this straight away, that would be great.

Please send any amendments to Steve ([EMAIL PROTECTED]) as that
probably gives us the best chance of getting it out on time.

As always, many thanks for the proofreading!

=
Summary Announcements
=

-
Bug Day: Saturday, June 25th 2005
-

AMK is organizing another Python Bug Day this Saturday, June 25th. If you're
looking to help out with Python, this  is a great way to get started!

Contributing Threads:

- `Bug day on the 25th?
`__

[SJB]


--
FishEye for Python CVS
--

Peter Moore has kindly set up `Fish Eye for the Python CVS repository`_.
FishEye is a repository browsing,  searching, analysis and monitoring tool,
with great features like RSS feeds, Synthetic changesets, Pretty ediffs and
SQL like searches. Check it out!

.. _Fish Eye for the Python CVS repository:
http://fisheye.cenqua.com/viewrep/python/

Contributing Threads:

- `FishEye on Python CVS Repository
`__

[SJB]



PyPy Sprint: July 1st - 7th 2005


The next `PyPy`_ sprint is scheduled right after EuroPython 2005 in
Gothenborg, Sweden. It will focus mainly on  translating PyPy to lower level
backends, so as to move away from running PyPy on top of the CPython
interpreter.  There will be newcomer-friendly introductions, and other
topics are possible, so if you have any interest in PyPy,  now is the time
to help out!

.. _PyPy: http://codespeak.net/pypy

Contributing Threads:

- `Post-EuroPython 2005 PyPy Sprint 1st - 7th July 2005
`__

[SJB]


-
Reminder: Google's Summer of Code
-

Just a reminder that the friendly folks at Python have set up a `wiki`_ and
a `mailing list`_ for questions about  `Google's Summer of Code`_. For
specific details on particular projects (e.g. what needs done to complete
Python SSL  support) participants may also ask questions to the Python-Dev
list.

.. _wiki: http://wiki.python.org/moin/CodingProjectIdeas
.. _mailing list: http://mail.python.org/mailman/listinfo/summerofcode
.. _Google's Summer of Code: http://code.google.com/summerofcode.html

[SJB]


--
hashlib Review Request
--


Gregory P. Smith noted that he has finished up the hashlib work he started
on a few months ago for patches `935454`_  and `1121611`_ (where the final
patch is).  He feels that the patch is ready, and would like anyone
interested to  review it; the patch incorporates both OpenSSL hash support
and SHA256+SHA512 support in a single module.  `The  documentation`_ can be
accessed separately, for convenience.
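
A short usage sketch of the proposed module, based on the API described in
the linked documentation (the details of course belong to the patch under
review)::

    import hashlib

    h = hashlib.sha256()
    h.update("python-dev")
    print h.hexdigest()

    # The generic constructor also reaches OpenSSL-provided algorithms by name.
    print hashlib.new("md5", "python-dev").hexdigest()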


.. _935454: http://python.org/sf/935454
.. _1121611: http://python.org/sf/1121611
.. _The documentation:
http://electricrain.com/greg/hashlib-py25-doc/module-hashlib.html 

Contributing Threads:

- `hashlib - faster md5/sha, adds sha256/512 support
`__

[TAM]

=
Summaries
=


PEP 343 Progress


The PEP 343 discussions were mostly concluded. Guido posted the newest
version of the PEP to both Python-Dev and  Python-List and the discussions
that followed were brief and mostly in agreement with the proposal.

The PEP 343 syntax was modified slightly to require parentheses if VAR is a
comma-separated list of variables.  This  made the proposal
forward-compatible to extending the with-block for multiple resources. In
the favored extension,  the with-block would take multiple expressions in a
manner analogous to import statements::

    with EXPR1 [as VAR1], EXPR2 [as VAR2], ...:
        BLOCK

There were also some brief discussions about how with-blocks should behave
in the presence of async exceptions like  the KeyboardInterrupt generated
from a ^C. While it seemed like it would be a nice property for with-blocks
to  guarantee that the __exit__ methods would still be called in the
presence of async exceptions, making such a  guarantee proved to be too
complicated.  Thus the final conclusion, as summarized by Nick Coghlan, was
that "with  statements won't make any more guarantees about this than
try/finally statements do".
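
For reference, a minimal sketch of the protocol being discussed, written
against the draft PEP 343 design rather than any shipped implementation::

    class locking(object):
        def __init__(self, lock):
            self.lock = lock
        def __enter__(self):
            self.lock.acquire()
            return self.lock
        def __exit__(self, exc_type, exc_value, traceback):
            self.lock.release()
            return False  # do not suppress exceptions, async or otherwise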

Contributing Threads:

- `PEP 343 rewrite complete
`__

- `For review: PEP 343: Anonymous Block Redux and Generator Enhancements


Re: [Python-Dev] Adding the 'path' module (was Re: Some RFE forreview)

2005-06-26 Thread Tony Meyer
[Reinhold Birkenfeld]
>> One more issue is open: the one of naming. As "path" is already the 
>> name of a module, what would the new object be called to avoid 
>> confusion? pathobj?  objpath? Path?

[Michael Hoffman]
> I would argue for Path.

Granted "path" is actually os.path, but I don't think it's wise to have
stdlib modules whose names are differentiated only by case, especially on
Windows (and other case-insensitive filesystems).

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adding the 'path' module (was Re: Some RFE forreview)

2005-06-26 Thread Tony Meyer
[Reinhold Birkenfeld]
>>>> One more issue is open: the one of naming. As "path" is already
>>>> the name of a module, what would the new object be called to
>>>> avoid confusion? pathobj?  objpath? Path?

[Michael Hoffman]
>>> I would argue for Path.

[Tony Meyer]
>> Granted "path" is actually os.path, but I don't think it's 
>> wise to have stdlib modules whose names are differentiated only
>> by case, especially on Windows (and other case-insensitive
>> filesystems).

[Phillip J. Eby]
> This is the name of a *class*, not a module.

Sorry - it sounded like the idea was to put this class in a module by itself
(i.e. the class would be os.Path.Path).

> I.e., we are discussing 
> adding a Path class to the 'os' module, that implements the 
> interface of the "path" module.

In that case, I would argue against Path as the name of the class because
it's confusing to have "os.path" be the path module, and "os.Path" be an
class that provides an interface to that module.

I think differentiating things solely on the case of the name is a bad idea.

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adding the 'path' module (was Re: Some RFEforreview)

2005-06-28 Thread Tony Meyer
Maybe this has already been answered somewhere (although I don't recall
seeing it, and it's not in the sourceforge tracker) but has anyone asked
Jason Orendorff what his opinion about this (including the module in the
stdlib) is?

If someone has, or if he posted it somewhere other than here, could someone
provide a link to it?

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adding the 'path' module (was Re: Some RFEfor review)

2005-07-09 Thread Meyer, Tony
>> Well, most people when confronted with this will rename the
>> directory to something simple like "ulib" and continue.
>
> I don't really buy this "trick": what if you happen to have
> a home directory with Unicode characters in it ?

I think this is one of the most common places, too.  Whenever I've come across 
unicode filenames it has been because the user has a unicode Windows username, 
so their 'home directory' (and temp directory, desktop folder, etc) does too.  
Convincing people to either change their username or to go through the fairly 
complicated process of moving these directories elsewhere generally isn't 
feasible.

=Tony.Meyer
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-28 Thread Tony Meyer
[...]
> Publish the Repositories
> 
[...]
> As an option, websvn (available
> e.g. from the Debian websvn package) could be provided.

Is there any reason that this should be an option, and not just done?  For
occasional source (particularly C source) lookups, I've found webcvs really
useful (especially when on a machine without cvs or ssh).  I presume that
I'm not alone here.

If there are issues with it (stability, security, whatever), then I could
understand making it optional, but otherwise I think it would be great if
the PEP just included it.

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-28 Thread Tony Meyer
> Do we also want to split off nondist and encodings?  IWBNI 
> the Python source code proper weren't buried too deep in the 
> directory structure. 

+1

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] python-dev Summary for 2005-07-16 through 2005-07-31 [draft]

2005-08-14 Thread Tony Meyer
Here's July Part Two.  As usual, if anyone can spare the time to proofread
this (it's fairly short this fortnight!), that would be great!  Please send
any corrections or suggestions to Tim (tlesher at gmail.com), Steve
(steven.bethard at gmail.com) and/or me, rather than cluttering the list.
Ta!

=
Announcements
=

-
PyPy Sprint in Heidelberg 22nd - 29th August 2005
-

Heidelberg University in Germany will host a PyPy_ sprint from 22nd August
to 29th August. The sprint  will push towards the 0.7 release of PyPy_ which
hopes to reach Python 2.4.1 compliancy and to have  full, direct translation
into a low level language, instead of reinterpretation through CPython.  If
you'd like to help out, this is a great place to start!

For more information, see PyPy's `Heidelberg sprint`_ page.

.. _PyPy: http://codespeak.net/pypy
.. _Heidelberg sprint:
http://codespeak.net/pypy/index.cgi?extradoc/sprintinfo/Heidelberg-sprint.html

Contributing thread:

- `Next PyPy sprint: Heidelberg (Germany), 22nd-29th of August
`__



zlib 1.2.3 in Python 2.4 and 2.5


Trent Mick supplied a patch for updating Python from zlib 1.2.1 to zlib
1.2.3, which eliminates some  potential security vulnerabilities. Python
will move to this new version of zlib in both the  maintenance 2.4 branch
and the main (2.5) branch.

Contributing thread:

- `zlib 1.2.3 is just out
`__

=
Summaries
=

---
Moving Python CVS to Subversion
---

Martin v. Löwis submitted `PEP 347`_, covering changing from CVS to SVN for
source code revision  control of the Python repository, and moving from
hosting the repository on sourceforge.net to  python.org.

Moving to SVN from CVS met with general favour from most people, although
most were undecided about  moving from sourceforge.net to python.org.  The
additional administration requirements of the move  were the primary
concern, and moving to an alternative host was suggested.  Martin is open to
including suggestions for alternative hosts in the PEP, but is not
interested in carrying out such  research himself; as such, if alternative
hosts are to be included, someone needs to volunteer to  collect all the
required information and submit it to Martin.

Discussion about the conversion and the move is continuing in August.

.. _PEP 347: http://www.python.org/peps/pep-0347.html

Contributing thread:

- `PEP: Migrating the Python CVS to Subversion
`__

-
Exception Hierarchy in Python 3.0
-

Brett Cannon posted the first draft of `PEP 348`_, covering reorganisation
of exceptions in Python  3.0.  The initial draft included major changes to
the hierarchy, requiring any object raised to  inherit from a certain
superclass, and changing bare 'except' clauses to catch a specific
superclass.   The latter two proposals didn't generate much comment
(although Guido vacillated between removing bare  'except' clauses and not),
but the proposed hierarchy organisation and renaming was hotly discussed.

Nick Coghlan countered each revision of Brett's maximum-changes PEP with a
minimum-changes PEP, each  evolving through python-dev discussion, and
gradually moving to an acceptable middle ground.  At  present, it seems that
the changes will be much more minor than the original proposal.

The thread branched off into comments about `Python 3.0`_ changes in
general.  The consensus was  generally that although backwards compatibility
isn't required in Python 3.0, it should only be broken  when there is a
clear reason for it, and that, as much as possible, Python 3.0 should be
Python 2.9  without a lot of backwards compatibility code.  A number of
people indicated that they were reasonably  content with the existing
exception hierarchy, and didn't feel that major changes were required.

Guido suggested that a good principle for determining the ideal exception
hierarchy is whether there's  a use case for catching the common base class.
Marc-Andre Lemburg pointed out that when migrating code, changes in
Exception names are reasonably easy to automate, but changes in the
inheritance tree  are much more difficult.

Many exceptions were discussed at length (e.g. WindowsError, RuntimeError),
with debate about whether  they should continue to exist in Python 3.0, be
renamed, or be removed.  The PEP contains the current  status for each of
these exceptions.

The PEP evolution and discussion are still continuing in August, and since
this is for Python 3.0, are  likely to be considered open for some time yet.

.. _Python 3.0: http://www.p

[Python-Dev] python-dev Summary for 2005-08-01 through 2005-08-15 [draft]

2005-08-25 Thread Tony Meyer
Here's August Part One.  As usual, if anyone can spare the time to proofread
this, that would be great!  Please send any corrections or suggestions to
Steve (steven.bethard at gmail.com) and/or me, rather than cluttering the
list.  Ta!

=============
Announcements
=============


----------------------------
QOTF: Quote of the Fortnight
----------------------------


Some wise words from Donovan Baarda in the PEP 347 discussions:

It is true that some well designed/developed software becomes reliable
very quickly. However, it  still takes heavy use over time to prove that.

Contributing thread:

- `PEP: Migrating the Python CVS to Subversion`__

[SJB]


------------
Process PEPs
------------


The PEP editors have introduced a new PEP category: "Process", for PEPs that
don't fit into the "Standards Track" and "Informational" categories.  More
detail can be found in `PEP 1`_, which is itself a Process PEP.

.. _PEP 1: http://www.python.org/peps/pep-0001.html

Contributing thread:

- `new PEP type: Process`__

[TAM]

-----------------------------------------------
Tentative Schedule for 2.4.2 and 2.5a1 Releases
-----------------------------------------------

Python 2.4.2 is tentatively scheduled for a mid-to-late September release,
and a first alpha of Python  2.5 for March 2006 (with a final release around
May/June).  This means that a PEP for the 2.5 release,  detailing what will
be included, will likely be created soon; at present there are various
accepted  PEPs that have not yet been implemented.

Contributing thread:

- `plans for 2.4.2 and 2.5a1`__

[TAM]

=========
Summaries
=========

-------------------------------
Moving Python CVS to Subversion
-------------------------------

The `PEP 347`_ discussion from last fortnight continued this week, with a
revision of the PEP, and a lot more discussion about possible revision
control systems (RCS) for the Python repository, and about where the
repository should be hosted.  Note that this is not a discussion about bug
trackers, which will remain with SourceForge (unless a separate PEP is
developed for moving that).

Many revision control systems were extensively discussed, including
`Subversion`_ (SVN), `Perforce`_,  and `Monotone`_.  Whichever system is
moved to, it should be able to be hosted somewhere (if  *.python.org, then
it needs to be easily installable), needs to have software available to
convert a  repository from CVS, and ideally would be open-source; similarity
to CVS is also an advantage in that  it requires a smaller learning curve
for existing developers.  While Martin isn't willing to discuss  every
system there is, he will investigate those that make him curious, and will
add other people's  submissions to the PEP, where appropriate.

The thread included a short discussion about the authentication mechanism
that svn.python.org will use; svn+ssh seems to be a clear winner, and a
test repository will be set up by Martin next fortnight.

The possibility of moving to a distributed revision control system
(particularly `Bazaar-NG`_) was also brought up.  Many people liked the
idea of using a distributed revision control system, but it seems unlikely
that Bazaar-NG is mature enough to be used for the main Python repository
at the current time (a move to it at a later time is possible, but outside
the scope of the PEP).  Distributed RCSs are meant to reduce the barrier to
participation (anyone can create their own branches, for example);
Bazaar-NG is also implemented in Python, which is of some benefit.  James Y
Knight pointed out `svk`_, which lets developers create their own branches
within SVN.

In general, the python-dev crowd is in favour of moving to SVN.  Initial
concerns about the demands on the volunteer admins, should the repository
be hosted at svn.python.org, were addressed by Barry Warsaw, who believes
that the load will be easily managed with the existing volunteers.  Various
alternative hosts were discussed, and if detailed reports about any of them
are created, these can be added to the PEP.

While the fate of all PEPs lies with the BDFL (Guido), it is likely that
the preferences of those who frequently check in changes, the pydotorg
admins, and the release managers (who have all given favourable reports so
far) will have a significant effect on the pronouncement of this PEP.

.. _PEP 347: http://www.python.org/peps/pep-0347.html
.. _svk: http://svk.elixus.org/
.. _Perforce: http://www.perforce.com/
.. _Subversion: http://subversion.tigris.org/
.. _Monotone: http://venge.net/monotone/
.. _Bazaar-NG: http://www.bazaar-ng.org/

Contributing threads:

- `PEP: Migrating the Python CVS to Subversion`__
- `PEP 347: Migration to Subversion`__


Re: [Python-Dev] Remove str.find in 3.0?

2005-08-29 Thread Tony Meyer
[Kay Schluehr]
>> The discourse about Python3000 has shrunken from the expectation
>> of the "next big thing" into a depressive rhetorics of feature 
>> elimination. The language doesn't seem to become deeper, smaller
>> and more powerful but just smaller.
 
[Guido]
> There is much focus on removing things, because we want to be able 
> to add new stuff but we don't want the language to grow.

ISTM that a major reason that the Python 3.0 discussion seems 
focused more on removal than addition is that a lot of 
addition can be (and is being) done in Python 2.x.  This is a 
huge benefit, of course, since people can start doing things 
the "new and improved" way in 2.x, even though it's not until 
3.0 that the "old and evil" ;) way is actually removed.

Removal of map/filter/reduce is an example - there isn't 
discussion about addition of new features, because list 
comps/gen expressions are already here...
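
For instance (my own illustration of the kind of substitution being talked
about), the old builtins and their already-available 2.x replacements sit
side by side today:

    nums = range(10)

    # the builtins slated for removal
    doubled = map(lambda n: n * 2, nums)
    evens = filter(lambda n: n % 2 == 0, nums)

    # the replacements that already exist in 2.x
    doubled = [n * 2 for n in nums]
    evens = [n for n in nums if n % 2 == 0]
    total = sum(n * 2 for n in nums)    # generator expression (2.4+)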

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] setdefault's second argument

2005-08-31 Thread Tony Meyer
> To save you from following that link, to this day I still mentally
> translate "setdefault" to "getorset" whenever I see it.

I read these out of order (so didn't see the giveaway getorsetandget) and
spent some time wondering what an "orset" was.  I figured it must be some
obscure CS/text processing/numeric/literary term that suited this usage.  So
obscure that google's define couldn't find me a definition.

set[with]default is maybe a terrible name, but it does have some things
going for it ;)
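
For anyone who hasn't bumped into it: the method both inserts a default and
returns the stored value, which is exactly what makes "getorset" such a
tempting mental translation.  A quick illustration:

    d = {}
    d.setdefault('spam', []).append(1)   # inserts [] *and* returns it
    d.setdefault('spam', []).append(2)   # key already exists, default ignored
    print d                              # {'spam': [1, 2]}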

=Tony.Meyer

...perhaps it was the similarity to corset...but surely I'm too young to
have "corset" spring to mind before "or set"...

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-02 Thread Meyer, Tony
[Guido]
> The print statement harks back to ABC and even
> (unvisual) Basic. Out with it!

[Barry]
> I have to strongly disagree. 

As would I.  Judging from recent discussions here, it would be helpful if 
everyone else who agrees could come up with a list (a wiki page on python.org, 
perhaps?) of simple, to-the-point reasons why losing print is a bad idea.  
Once Guido sees the huge list of reasons in favour of keeping it, versus the 
one or two reasons against it (and ruminates on it while 2.5 through 2.9 are 
released), I'm sure he'll see reason.

FWIW, I wouldn't really care if >> or the trailing comma was lost.

[Barry]
> The print statement is simple, easy to understand, and
> easy to use.  For use cases like debugging or the interactive
> interpreter [...] I think it's hard to beat the useability
> of print with a write() function, even if builtin.

ISTM that Barry nails the key reasons here.  One of the real strengths of 
Python is that it can be used in a wide range of applications, many of which 
don't need to be burdened with a complex logging strategy, don't have a GUI, 
aren't inside a web browser, and so on.

"print" is the best example I can think of for "practicality beats purity".  
Writing to stdout is as common in the code I write as loops - it's worth 
keeping such basic functionality as elegant, simple, easy to understand, and 
easy to use as possible.  (This is certainly my motivation, not any concern 
about backwards compatibility).

With standard English keyboards, at least, the '(' and ')' keys are also 
inconvenient to type, compared to lower-case English characters.  Fundamental 
actions like writing to stdout deserve simplicity.

=Tony.Meyer
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-03 Thread Tony Meyer
[Nick Coghlan]
> "Print as statement" => printing sequences nicely is a pain

What's wrong with this?

>>> print range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> print tuple("string")
('s', 't', 'r', 'i', 'n', 'g')

This is a serious question - that's how I would expect a print function to
work anyway.

> "Print as statement" => can't easily change the separator
[etc]

To me, the point of the builtin print is that it's simple.  If you want to
control which separator is used, or whether there is a newline at the end,
or to print to something that isn't sys.stdout, or some other magic, then
use sys.stdout.write().  If you want to get the __unicode__/__str__
contents of an object to stdout - which, as there has been overwhelming
evidence, is a very common task - then print is a fantastically simple and
straightforward way to do that.
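
Roughly what I mean (a quick sketch of the status quo, nothing new):

    import sys

    words = ['practicality', 'beats', 'purity']

    # the statement: space-separated, trailing newline, no fuss
    print words[0], words[1], words[2]

    # explicit control over the separator and line ending when you need it
    sys.stdout.write(', '.join(words) + '\n')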

[Terry Reedy]
> For quickly adding debug prints, two extra ()s are a small burden,
> but if the function were called 'out', then there would still be just five
> keystrokes.

But seven keypresses (assuming one is using a keyboard where you use shift
to get '(' and ')').  It sounds trivial, but a print statement (i.e. no ())
looks clean and concise.  I like this:

   while True: pass

More than:

   while (true) {}

For the same reason.  This is a big plus of Python vs. C.

[Guido]
> Consider this: if Python *didn't* have a print statement, but
> it had a built-in function with the same functionality 
> (including, say, keyword parameters to suppress the trailing 
> newline or the space between items); would anyone support a 
> proposal to make it a statement instead?

Yes.  If it didn't have the redirect stuff, I would like it more; and even
more if it also didn't have the trailing comma magic.  "print" is a
fundamental; it deserves to be a statement :)
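
For what it's worth, something close to the built-in Guido describes could
be sketched today - the name and the keyword arguments below are purely my
guesses, not anything that has actually been specified:

    import sys

    def write(*args, **kwds):
        # hypothetical replacement function; 'sep'/'end' are invented names
        sep = kwds.get('sep', ' ')
        end = kwds.get('end', '\n')
        sys.stdout.write(sep.join(str(a) for a in args) + end)

    write('spam', 'eggs')           # like: print 'spam', 'eggs'
    write('spam', 'eggs', end='')   # like: print 'spam', 'eggs',

To me it still reads worse than the plain statement.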

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-03 Thread Tony Meyer
[...]
> maybe a few folks can go off and write up a PEP for a 
> print-replacement. 
[...]
> I'm pulling out of the 
> discussion until I see a draft PEP.

If there are two competing proposals, then the two groups write a PEP and
counter-PEP and the PEPs duke it out.  Is this still the case if proposal B
is very nearly the status quo?

IOW, would writing a "Future of the print statement in Python 3.0" counter
PEP that kept print as a statement be appropriate?  If not, other than
python-dev posting (tiring out the poor summary guys <0.5 wink>), what is
the thing to do?

=Tony.Meyer

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-06 Thread Meyer, Tony
> In the end the process is not democratic.

Which may make it easier: rather than having to convince 50%+ of the people, 
one only has to convince a single person...

> I don't think there's anything that can change my mind
> about dropping the statement.

As long as "I don't think there's anything" isn't "There isn't anything", there 
is still hope, and the potential that the one person's opinion that matters can 
be changed.

However, when I wrote the email, I assumed you wouldn't read it (because you 
said you were leaving the discussion until there was a PEP).  What I wanted to 
know was the best way of putting together succinct, clear reasons why you 
should change your mind, so that could be done.  Even if you didn't change 
your mind, it would at least (judging from previous decision reversals) be 
the best shot.

=Tony.Meyer
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

