[Python-Dev] Include/structmember.h, Py_ssize_t
In Include/structmember.h, there is no T_... constant for Py_ssize_t member fields. Should there be one?

Thomas

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
On Mon, Jun 05, 2006 at 08:49:47PM -0400, Jim Jewett wrote:
> If no explicit changes are made locally,
>   py.asyncore.dispatcher.hits
>   py.asyncore.dispatcher.messages

These handler names seem really specific, though. Why have 'dispatcher' in there?

Part of Jackilyn's task should be to refine and improve the PEP. Logging is probably irrelevant for many modules, but which ones are those? What conventions should be followed for handler names? Etc...

--amk
Re: [Python-Dev] Include/structmember.h, Py_ssize_t
Thomas Heller wrote:
> In Include/structmember.h, there is no T_... constant for Py_ssize_t
> member fields. Should there be one?

do you need one? if so, I see no reason why you cannot add it...
Re: [Python-Dev] feature request: inspect.isgenerator
Phillip J. Eby telecommunity.com> writes:
> I think the whole concept of inspecting for this is broken. *Any*
> function can return a generator-iterator. A generator function is just a
> function that happens to always return one.
>
> In other words, the confusion is in the idea of introspecting for this in
> the first place, not that generator functions are of FunctionType. The
> best way to avoid the confusion is to avoid thinking that you can
> distinguish one type of function from another without explicit guidance
> from the function's author.

Nolo contendere. I am convinced and I am taking back my feature request.

Michele Simionato
Re: [Python-Dev] feature request: inspect.isgenerator
Terry Reedy udel.edu> writes:
> tout court?? is not English or commonly used at least in America

It is French: http://encarta.msn.com/dictionary_561508877/tout_court.html
I thought it was common in English too, but clearly I was mistaken.

> Ok, you mean generator function, which produces generators, not generators
> themselves. So what you want is a new isgenfunc function. That makes more
> sense, in a sense, since I can see that you would want to wrap genfuncs
> differently from regular funcs. But then I wonder why you don't use a
> different decorator since you know when you are writing a generator
> function.

Because in a later refactoring I may want to replace a function with a generator function or vice versa, and I don't want to use a different decorator. The application I had in mind was a Web framework where you can write something like

    @expose
    def page(self):
        return 'Hello World!'

or

    @expose
    def page(self):
        yield 'Hello '
        yield 'World!'

indifferently. I seem to remember CherryPy has something like that.

Michele Simionato
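For what it's worth, the distinction being discussed here can be drawn mechanically: a generator function is marked by the CO_GENERATOR flag on its code object. A sketch of the `isgenfunc` that was being asked for (the flag value is taken from CPython's headers, and the example is written against a modern Python where the code object is `func.__code__`):

```python
CO_GENERATOR = 0x20  # set on the code object of any function containing a yield

def isgenfunc(func):
    """Return True if func is a generator function."""
    return bool(func.__code__.co_flags & CO_GENERATOR)

def page_return():
    return 'Hello World!'

def page_yield():
    yield 'Hello '
    yield 'World!'

# The flag identifies how the function was *written*, which is exactly
# what a decorator like @expose would need to know:
print(isgenfunc(page_return))  # False
print(isgenfunc(page_yield))   # True
```

This is essentially how `inspect.isgeneratorfunction` was later implemented.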
[Python-Dev] ssize_t: ints in header files
(Neal Norwitz asked about changing some additional ints and longs to ssize_t)

Martin v. Löwis replied:
> ... column numbers shouldn't exceed 16-bits, and line #s
> shouldn't exceed 31 bits.

Why these particular numbers? As nearly as I can tell, 8 bits is already too many columns for human readability. If python is being used as an intermediate language (code is automatically generated, and not read by humans), then I don't see any justification for any particular limits, except as an implementation detail driven by convenience. Similar arguments apply to row count, #args, etc.

With the exception of row count and possibly instruction count, the only readability reason to allow even 256 is that we don't want to accidentally encourage people to aim for the limit. (I really don't want people to answer the challenge and start inventing cases where a huge function might be justified, just so that they can blog about their workarounds; I would prefer that obfuscated python contests be clearly labeled so that beginners aren't turned off.)

-jJ
[Python-Dev] DC Python sprint on July 29th
The Arlington sprint this past Saturday went well, though the number of Python developers was small and people mostly worked on other projects. The CanDo group, the largest at the sprint with about 10 people, will be having a three-day sprint July 28-30 (Fri-Sun) at the same location. We should take advantage of the opportunity to have another Python sprint. Let's schedule it for Saturday July 29th (the day after OSCON ends in Oregon).

--amk
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
On 6/5/06, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
> I notice you've completely avoided the question of whether this should be
> being done at all.
>
> As far as I can tell, this PEP hasn't actually been discussed. Please
> don't waste time changing modules for which there is no consensus that this
> *should* be done.

Under a specific PEP number, no. The concept of adding logging to the stdlib, yes, periodically. The typical outcome is that some people say "why bother, besides it would slow things down" and others say "yes, please."

I certainly agree that the PEP as written should not be treated as fully pronounced. I do think the discussion was stalled until we have a specific implementation to discuss. Google is generously funding one, and Jackilyn is providing it. I'm checking in here because when changes are needed, I would prefer that she know as soon as possible. Jackilyn has made it quite clear that she is willing to change her direction if we ask her to, she just needs to know what the goals are.

> The original discussion that occurred prior to PEP 337's creation discussed
> only modules that *already* do some kind of logging. There was no
> discussion of changing *all* debugging output to use the logging module,
> nor of adding logging to modules that do not even have any debugging output
> (e.g. pkgutil).

You may be reading too much ambition into the proposal. For pkgutil in particular, the change is that instead of writing to stderr (which can scroll off and get lost), it will write to the errorlog. In a truly default setup, that still ends up writing to stderr. The difference is that if a sysadmin does want to track problems, the change can now be made in one single place. Today, turning on that instrumentation would require separate changes to every relevant module, and requires you to already know what/where they are.

I did ask whether extra debugging/instrumentation information should be added where it isn't already present. I personally think the answer is yes, but it sounds like the consensus answer is "not now" -- so she generally won't.

-jJ
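The "one single place" point can be illustrated with the logging module as it stands; the 'py.pkgutil' logger name below is the hypothetical PEP 337-style name, not something the stdlib actually defines:

```python
import io
import logging

# What a stdlib module would do internally under the proposal:
log = logging.getLogger('py.pkgutil')

# What a sysadmin does *once*: configure the parent 'py' logger, and
# output from every stdlib module logging beneath it is redirected
# together (a StringIO stands in for a log file here).
captured = io.StringIO()
logging.getLogger('py').addHandler(logging.StreamHandler(captured))

log.error('could not open pkg file')

print(captured.getvalue(), end='')  # could not open pkg file
```

In a truly default setup with no such configuration, the record would simply fall through to stderr-like default handling, as the message above describes.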
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Jim Jewett wrote:
> For pkgutil in particular, the change is that instead of writing to
> stderr (which can scroll off and get lost), it will write to the
> errorlog. In a truly default setup, that still ends up writing to
> stderr.

umm. if pkgutil fails to open a pkg file, isn't it rather likely that the program will terminate with an ImportError a few milliseconds later?
[Python-Dev] Stdlib Logging questions (PEP 337 SoC)
>> py.asyncore.dispatcher.hits
>> py.asyncore.dispatcher.messages

> These handler names seem really specific, though. Why have
> 'dispatcher' in there?

The existing logging that she is replacing is done through methods of the dispatcher class. The dispatcher class is only a portion of the whole module.

> Part of Jackilyn's task should be to refine and improve the PEP.

Agreed.

> Logging is probably irrelevant for many modules, but which ones are
> those? What conventions should be followed for handler names? Etc...

Are you suggesting that the logging module should ship with a standard configuration that does something specific for py.* loggers? Or even one that has different handlers for different stdlib modules? I had assumed this would be considered too intrusive a change. If no one chimes in, then I'll ask her to put at least an investigation of this into the second half of the summer.

-jJ
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
On 6/6/06, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> Jim Jewett wrote:
>
> > For pkgutil in particular, the change is that instead of writing to
> > stderr (which can scroll off and get lost), it will write to the
> > errorlog. In a truly default setup, that still ends up writing to
> > stderr.
>
> umm. if pkgutil fails to open a pkg file, isn't it rather likely that
> the program will terminate with an ImportError a few milliseconds later?

Maybe a mean time of a few milliseconds later. It really depends on the operating system's scheduler. If the failure occurs just at the end of a scheduler quantum, the process may not run again for some time. This would happen regardless of whether the operating system was modern.

Jeremy
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
>> As far as I can tell, this PEP hasn't actually been discussed.
>> Please don't waste time changing modules for which there is no
>> consensus that this *should* be done.
Jim> Under a specific PEP number, no. The concept of adding logging to
Jim> the stdlib, yes, periodically. The typical outcome is that some
Jim> people say "why bother, besides it would slow things down" and
Jim> others say "yes, please."
I'll chime in and suggest that any checkins be done on a branch for now. I
have a distinct love/hate relationship with the logging module, so I'm
ambivalent about whether or not
print >> sys.stderr, ...
should be replaced with
stderr_logger.debug("...")
I'd have to see it in action before deciding.
I notice in the PEP that BaseHTTPServer is on the list of candidate modules.
Please don't mess with anything that logs in the common Apache log format.
There are lots of tools out there that munch on that sort of output.
Changing it would just break them.
Skip
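For concreteness, the replacement Skip is weighing would look something like this (the logger name and setup here are hypothetical illustrations, not anything the PEP mandates):

```python
import logging
import sys

# Before (the Python 2 idiom quoted above):
#     print >> sys.stderr, "...message..."

# After: route the same message through a module-level logger.  The
# one-time setup reproduces the old default behaviour, i.e. everything
# still lands on stderr unless someone reconfigures it elsewhere.
stderr_logger = logging.getLogger('py.example')   # hypothetical name
stderr_logger.addHandler(logging.StreamHandler(sys.stderr))
stderr_logger.setLevel(logging.DEBUG)

stderr_logger.debug("lost connection")   # replaces the bare print
```

The visible output is unchanged by default; the difference is that the debug call can be silenced, redirected, or reformatted without touching the module again.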
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Jim Jewett wrote:
> The existing logging that she is replacing is done through methods of
> the dispatcher class. The dispatcher class is only a portion of the
> whole module.

the dispatcher class is never used on its own; it's a base class for user-defined communication classes. asyncore users don't think in terms of instances of a single dispatcher class; they think in terms of their own communication classes, which inherit from asyncore.dispatcher or some subclass thereof. using a single handler name for all subclasses doesn't strike me as especially useful.
Re: [Python-Dev] Python Benchmarks
Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
>
>> I just had an idea: if we could get each test to run
>> inside a single time slice assigned by the OS scheduler,
>> then we could benefit from the better resolution of the
>> hardware timers while still keeping the noise to a
>> minimum.
>>
>> I suppose this could be achieved by:
>>
>> * making sure that each test needs less than 10ms to run
>
> iirc, very recent linux kernels have a 1 millisecond tick. so do
> alphas, and probably some other platforms.

Indeed, that's also what the microbench.py example that I posted demonstrates. And, of course, you have to call time.sleep() *before* running the test (which microbench.py does).

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Jun 06 2006)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/
::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free !
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
On 6/6/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I notice in the PEP that BaseHTTPServer is on the list of candidate modules.
> Please don't mess with anything that logs in the common Apache log format.
> There are lots of tools out there that munch on that sort of output.
> Changing it would just break them.

In general, the format of the messages shouldn't change; it is just that there should be a common choke point for controlling them. So by default, BaseHTTPServer would still put out Apache log format, and it would still be occasionally interrupted by output from other modules.

This does argue in favor of allowing the more intrusive additions to handlers and default configuration. It would be useful to have a handler that emitted only Apache log format records, and saved them (by default) to a rotating file rather than stderr. (And it *might* make sense to switch asyncore's hitlog default output to this format.)

-jJ
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Jim Jewett wrote:
> This does argue in favor of allowing the more intrusive additions to
> handlers and default configuration. It would be useful to have a
> handler that emitted only Apache log format records, and saved them
> (by default) to a rotating file rather than stderr. (And it *might*
> make sense to switch asyncore's hitlog default output to this format.)

argh! can you please stop suggesting changes to APIs that you have never used?
Re: [Python-Dev] Python Benchmarks
Fredrik Lundh wrote:
> Martin v. Löwis wrote:
>
>>> since process time is *sampled*, not measured, process time isn't exactly
>>> in-vulnerable either.
>>
>> I can't share that view. The scheduler knows *exactly* what thread is
>> running on the processor at any time, and that thread won't change
>> until the scheduler makes it change. So if you discount time spent
>> in interrupt handlers (which might be falsely accounted for the
>> thread that happens to run at the point of the interrupt), then
>> process time *is* measured, not sampled, on any modern operating system:
>> it is updated whenever the scheduler schedules a different thread.
>
> updated with what? afaik, the scheduler doesn't have to wait for a
> timer interrupt to reschedule things (think blocking, or interrupts that
> request rescheduling, or new processes, or...) -- but it's always the
> thread that runs when the timer interrupt arrives that gets the entire
> jiffy time. for example, this script runs for ten seconds, usually
> without using any process time at all:
>
>     import time
>     for i in range(1000):
>         for i in range(1000):
>             i+i+i+i
>         time.sleep(0.005)
>
> while the same program, without the sleep, will run for a second or two,
> most of which is assigned to the process.
>
> if the scheduler used the TSC to keep track of times, it would be
> *measuring* process time. but unless something changed very recently,
> it doesn't. it's all done by sampling, typically 100 or 1000 times per
> second.

This example is a bit misleading, since chances are high that the benchmark will get a good priority bump by the scheduler.

>> On Linux, process time is accounted in jiffies. Unfortunately, for
>> compatibility, times(2) converts that to clock_t, losing precision.
>
> times(2) reports time in 1/CLOCKS_PER_SEC second units, while jiffies
> are counted in 1/HZ second units. on my machine, CLOCKS_PER_SEC is a
> thousand times larger than HZ. what does this code print on your machine?

You should use getrusage() for user and system time or even better clock_gettime() (the POSIX real-time APIs).

From the man-page of times:

    RETURN VALUE
    The function times returns the number of clock ticks that have
    elapsed since an arbitrary point in the past. ...
    The number of clock ticks per second can be obtained using
    sysconf(_SC_CLK_TCK);

On my Linux system this returns 100.

--
Marc-Andre Lemburg
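The getrusage() suggestion is available from Python through the resource module (Unix-only); a minimal sketch of measuring CPU rather than wall-clock time:

```python
import resource

def cpu_seconds():
    """User + system CPU time consumed so far by this process,
    read via getrusage(2)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_utime + usage.ru_stime

before = cpu_seconds()
for i in range(10**6):   # some CPU-bound work to account for
    i + i + i + i
elapsed = cpu_seconds() - before
print('CPU time: %.4f seconds' % elapsed)
```

Unlike a wall-clock timer, this only advances while the process is actually running, so sleeps and time stolen by other processes don't inflate the measurement.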
Re: [Python-Dev] Python Benchmarks
M.-A. Lemburg wrote:
> This example is a bit misleading, since chances are high that
> the benchmark will get a good priority bump by the scheduler.

which makes it run infinitely fast? what planet are you buying your hardware on? ;-)
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
At 10:13 AM 6/6/2006 -0400, Jim Jewett wrote:
> On 6/5/06, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
>
>> I notice you've completely avoided the question of whether this should be
>> being done at all.
>>
>> As far as I can tell, this PEP hasn't actually been discussed. Please
>> don't waste time changing modules for which there is no consensus that this
>> *should* be done.
>
> Under a specific PEP number, no. The concept of adding logging to the
> stdlib, yes, periodically. The typical outcome is that some people
> say "why bother, besides it would slow things down" and others say
> "yes, please."

All the conversations I was able to find were limited to the topic of changing modules that *do logging*, not modules that have optional debugging output, nor adding debugging output to modules that do not have it now. I'm +0 at best on changing modules that do logging now (not debug output or warnings, *logging*). -1 on everything else.

> You may be reading too much ambition into the proposal.

Huh? The packages are all listed right there in the PEP.

> For pkgutil in particular, the change is that instead of writing to
> stderr (which can scroll off and get lost), it will write to the
> errorlog. In a truly default setup, that still ends up writing to
> stderr.

If anything, that pkgutil code should be replaced with a call to warnings.warn() instead.

> The difference is that if a sysadmin does want to track problems, the
> change can now be made in one single place.

Um, what? You mean, one place per Python application instance, I presume. Assuming that the application allows you to configure the logging system, and doesn't come preconfigured to do something else.

> Today, turning on that
> instrumentation would require separate changes to every relevant
> module, and requires you to already know what/where they are.

And thus ensures that it won't be turned on by accident.
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
On Tue, Jun 06, 2006 at 10:36:06AM -0400, Jim Jewett wrote:
> Are you suggesting that the logging module should ship with a standard
> configuration that does something specific for py.* loggers? Or even
> one that has different handlers for different stdlib modules?

No, I meant some modules don't need logging. e.g. adding logging to the string module would be silly. It makes more sense for larger systems and frameworks (the HTTP servers, asyncore, maybe some things in Tools/ like webchecker).

--amk
Re: [Python-Dev] Python Benchmarks
FWIW, these are my findings on the various timing strategies:

* Windows:

  time.time()
  - not usable; I get timings with an error interval of roughly 30%

  GetProcessTimes()
  - not usable; I get timings with an error interval of up to 100%
    with differences in steps of 15.626ms

  time.clock()
  - error interval of less than 10%; overall < 0.5%

* Linux:

  time.clock()
  - not usable; I get timings with an error interval of about 30%
    with differences in steps of 100ms

  time.time()
  - error interval of less than 10%; overall < 0.5%

  resource.getrusage()
  - error interval of less than 10%; overall < 0.5%
    with differences in steps of 10ms

  clock_gettime()
  - these don't appear to work on my box, even though
    clock_getres() returns a promising 1ns.

All measurements were done on AMD64 boxes, using Linux 2.6 and WinXP Pro with Python 2.4. pybench 2.0 was used (which is not yet checked in) and the warp factor was set to a value that gave benchmark round times of between 2.5 and 3.5 seconds, i.e. short test run-times.

Overall, time.clock() on Windows and time.time() on Linux appear to give the best repeatability of tests, so I'll make those the defaults in pybench 2.0.

In short: Tim wins, I lose. Was a nice experiment, though ;-)

One interesting difference I found while testing on Windows vs. Linux is that the StringMappings test has quite a different run-time on the two systems: around 2500ms on Windows vs. 590ms on Linux (on Python 2.4). UnicodeMappings doesn't show such a significant difference. Perhaps the sprint changed this?!

--
Marc-Andre Lemburg
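The step sizes reported above can be observed directly by spinning until a timer's reading changes; a small sketch (time.perf_counter is used as the modern high-resolution timer, since time.clock no longer exists in current Pythons):

```python
import time

def smallest_step(clock, samples=100):
    """Estimate a timer's effective resolution: the smallest nonzero
    difference seen between two successive readings."""
    best = None
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:          # spin until the timer actually ticks
            t1 = clock()
        step = t1 - t0
        if best is None or step < best:
            best = step
    return best

print('time.time()         step: %g s' % smallest_step(time.time))
print('time.perf_counter() step: %g s' % smallest_step(time.perf_counter))
```

On a 2006-era system this kind of probe makes the 10ms/15.6ms tick granularities discussed above directly visible.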
Re: [Python-Dev] Include/structmember.h, Py_ssize_t
Thomas Heller wrote:
> In Include/structmember.h, there is no T_... constant for Py_ssize_t
> member fields. Should there be one?

As Fredrik says: if you need it, feel free to add it.

Regards,
Martin
Re: [Python-Dev] ssize_t: ints in header files
Jim Jewett wrote:
> Martin v. Löwis replied:
>
>> ... column numbers shouldn't exceed 16-bits, and line #s
>> shouldn't exceed 31 bits.
>
> Why these particular numbers?
>
> As nearly as I can tell, 8 bits is already too many columns for human
> readability.

There isn't a practical 8-bit integer type in C, so the smallest integer you can get is "short", i.e. 15 resp. 16 bits. For line numbers, 65536 seems a little too restrictive, so 31 bits is the next-larger type.

> If python is being used as an intermediate language (code is
> automatically generated, and not read by humans), then I don't see any
> justification for any particular limits, except as an implementation
> detail driven by convenience.

Precisely so. The main point is that we should set a limit, and then code according to that limit. There is no point in using a 64-bit integer for code size constraints.

Regards,
Martin
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
> Are you suggesting that the logging module should ship with a standard
> configuration that does something specific for py.* loggers? Or even
> one that has different handlers for different stdlib modules?

Sorry I'm a little late in to the discussion :-(

I can see people objecting to a "standard" configuration, as there will be many differing interpretations of what the "standard" should be. Perhaps the PEP should detail any proposed configuration. The configuration for py.* loggers, if approved in the PEP, will need to be set up with some care and probably need to be disabled by default.

Once logging is introduced into the stdlib, the logger hierarchy used by the stdlib modules (e.g. "py.asyncore.dispatcher.hits", "py.asyncore.dispatcher.messages") will become something of a backward-compatibility concern. For example, client code might add handlers to specific portions of the hierarchy, and while adding "child" loggers to existing levels will be OK, removing or renaming parts of the hierarchy will cause client code to not produce the expected logging behaviour. Having logger names follow package/subpackage/public class should be OK since those couldn't change without breaking existing code anyway.

One way of ring-fencing stdlib logging is to have the "py" logger created with a level of (say) DEBUG and propagate = 0. This way, logging events raised in stdlib code are not sent to the root logger's handlers, unless client code explicitly sets the propagate flag to 1. The "py" logger could be initialised with a bit-bucket handler which does nothing (and avoids the "No handlers could be found for logger xxx" message). In my view it'd be best to not add any other handlers in the stdlib itself, leaving that to user code.

With this approach, by default stdlib code will behave as it does now. Even the verbose setting of DEBUG on the "py" logger will not produce any output unless user code sets its propagate attribute to 1 or explicitly adds a handler to it or any of its descendants.

My 2 cents...

Regards,
Vinay Sajip
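The ring-fencing scheme described above can be sketched in a few lines with the logging API; the bit-bucket handler is defined inline here (a logging.NullHandler with exactly this behaviour was added to the module in a later release):

```python
import logging

class BitBucketHandler(logging.Handler):
    """Swallow records, avoiding the 'No handlers could be found
    for logger xxx' message."""
    def emit(self, record):
        pass

# Ring-fence the stdlib hierarchy: verbose internally, silent externally.
py = logging.getLogger('py')
py.setLevel(logging.DEBUG)     # stdlib events are all accepted...
py.propagate = False           # ...but never escape to the root handlers
py.addHandler(BitBucketHandler())

# Client code opting in later would simply do:
#     logging.getLogger('py').propagate = True
```

With this in place, a record logged anywhere under "py" stops at the "py" logger and goes nowhere, unless user code flips the propagate flag or attaches its own handler.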
Re: [Python-Dev] Python Benchmarks
M.-A. Lemburg wrote:
> * Linux:
>
>   time.clock()
>   - not usable; I get timings with error interval of about 30%
>     with differences in steps of 100ms
>
>   resource.getrusage()
>   - error interval of less than 10%; overall < 0.5%
>     with differences in steps of 10ms

hmm. I could have sworn clock() returned the sum of the utime and stime fields you get from getrusage() (which is the sum of the utime and stime tick counters for all tasks associated with the process, converted from jiffy count to the appropriate time unit), but glibc is one big maze of twisty little passages, so I'm probably looking at the wrong place. oh, well.
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Jim Jewett wrote:
> Jackilyn is adding logging to several stdlib modules for the Google
> Summer of Code (PEP 337), and asked me to review her first few
> changes.

A related question: Will your student try to resolve the issues on SF referring to logging, or is that not part of the project? There aren't that many of them, and she'll certainly be quite acquainted with the code base at some point.

Cheers,
Georg
Re: [Python-Dev] Python Benchmarks
M.-A. Lemburg wrote:
> FWIW, these are my findings on the various timing strategies:

Correction (due to a bug in my pybench dev version):

> * Windows:
>
>   time.time()
>   - not usable; I get timings with an error interval of roughly 30%
>
>   GetProcessTimes()
>   - not usable; I get timings with an error interval of up to 100%
>     with differences in steps of 15.626ms
>
>   time.clock()
>   - error interval of less than 10%; overall < 0.5%
>
> * Linux:
>
>   time.clock()
>   - not usable; I get timings with error interval of about 30%
>     with differences in steps of 100ms

This should read: steps of 10ms. time.clock() uses POSIX clock ticks which are hard-wired to 100Hz.

>   time.time()
>   - error interval of less than 10%; overall < 0.5%
>
>   resource.getrusage()
>   - error interval of less than 10%; overall < 0.5%
>     with differences in steps of 10ms

This should read: steps of 1ms. The true clock tick frequency on the test machine is 1kHz.

>   clock_gettime()
>   - these don't appear to work on my box; even though
>     clock_getres() returns a promising 1ns.
>
> All measurements were done on AMD64 boxes, using Linux 2.6
> and WinXP Pro with Python 2.4. pybench 2.0 was used (which is
> not yet checked in) and the warp factor was set to a value that
> gave benchmark round times of between 2.5 and 3.5 seconds,
> ie. short test run-times.
>
> Overall, time.clock() on Windows and time.time() on Linux appear
> to give the best repeatability of tests, so I'll make those the
> defaults in pybench 2.0.
>
> In short: Tim wins, I lose.
>
> Was a nice experiment, though ;-)
>
> One interesting difference I found while testing on Windows
> vs. Linux is that the StringMappings test have quite a different
> run-time on both systems: around 2500ms on Windows vs. 590ms
> on Linux (on Python 2.4). UnicodeMappings doesn't show such
> a signficant difference.
>
> Perhaps the sprint changed this ?!
--
Marc-Andre Lemburg
[Python-Dev] How to fix the buffer object's broken char buffer support
If you run ``import array; int(buffer(array.array('c')))`` the
interpreter will segfault. While investigating this I discovered that
buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
result by calling the wrapped object's bf_getreadbuffer or
bf_getwritebuffer. This is wrong, since it is essentially redirecting
the expected call to the wrong tp_as_buffer slot for the wrapped
object. Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
I see two options here. One is to remove the bf_getcharbuffer slot
from the buffer object. The other option is to fix it so that it only
returns bf_getcharbuffer and doesn't redirect improperly (this also
raises the question of whether Py_TPFLAGS_HAVE_GETCHARBUFFER should then
be defined for buffer objects).
Since I don't use buffer objects I don't know if it is better to fix
this or just rip it out.
-Brett
[Python-Dev] wsgiref doc draft; reviews/patches wanted
I've finished my draft for the wsgiref documentation (including stuff I
swiped from AMK's draft; thanks AMK!), and am looking for comments before
I add it to the stdlib documentation.

Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs
PDF:    http://peak.telecommunity.com/wsgiref.pdf
HTML:   http://peak.telecommunity.com/wsgiref_docs/

My current plan is to make a hopefully-final release of the standalone
version of wsgiref on PyPI, then clone that version for inclusion in the
stdlib.

The latest version of wsgiref in the eby-sarna SVN includes a new
``make_server()`` convenience function (addressing Titus' concerns about
the constructor signatures while retaining backward compatibility) and it
adds a ``wsgiref.validate`` module based on paste.lint. In addition to
those two new features, tests were added for the new validate module and
for WSGIServer. The test suite and directory layout of the package were
also simplified and consolidated to make merging to the stdlib easier.

Feedback welcomed.
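The two new pieces can be exercised together. A quick sketch, using the names as they later landed in the stdlib (``wsgiref.simple_server.make_server`` and ``wsgiref.validate.validator``) and modern bytes syntax; port 0 asks the OS for any free port:

```python
from wsgiref.simple_server import make_server
from wsgiref.validate import validator

def app(environ, start_response):
    # A deliberately minimal WSGI app; validator() will raise an
    # assertion if it violates the spec (wrong header types, etc.).
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, wsgiref!']

# Construct and tear down without serve_forever(), just to show the
# convenience-function signature: make_server(host, port, app).
httpd = make_server('127.0.0.1', 0, validator(app))
host, port = httpd.server_address
print("would serve on port", port)
httpd.server_close()
```

Wrapping the app in ``validator()`` before serving is the paste.lint-style usage: spec violations fail loudly during development instead of producing subtly broken responses.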
[Python-Dev] 'fast locals' in Python 2.5
I just submitted http://python.org/sf/1501934 and assigned it to Neal so
it doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5
compiles the following code incorrectly:

>>> g = 1
>>> def f1():
...     g += 1
...
>>> f1()
>>> g
2

It looks like the compiler is not seeing augmented assignment as creating
a local name, as this fails properly:

>>> def f2():
...     g += 1
...     g = 5
...
>>> f2()
Traceback (most recent call last):
  File "", line 1, in
  File "", line 2, in f2
UnboundLocalError: local variable 'g' referenced before assignment

The dis.dis output confirms this:

>>> dis.dis(f1)
  1           0 LOAD_GLOBAL              0 (g)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_GLOBAL             0 (g)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE

If anyone feels like fixing it and happens to remember where the new
compiler does the fast-locals optimization (I recall a few people were
working on extra optimizations and all), please do :-) (I can probably
look at it before 2.5 if no one else does, though.) It may be a good idea
to check for more such corner cases while we're at it (but I couldn't
find any in the fast-locals bit.)

--
Thomas Wouters <[EMAIL PROTECTED]>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
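Once the compiler is fixed, the f1 case should behave exactly like f2. A sketch of the intended behaviour (which any repaired compiler, including modern Python, exhibits):

```python
g = 1

def f1():
    # Augmented assignment makes `g` local to f1, so this must compile
    # to LOAD_FAST/STORE_FAST and fail at runtime, not silently rebind
    # the global via the *_GLOBAL opcodes.
    g += 1

try:
    f1()
    outcome = "silently updated global"
except UnboundLocalError:
    outcome = "UnboundLocalError"

print(outcome, "; g is still", g)
```

The global stays untouched because the name is classified as local at compile time for the whole function body, regardless of whether the binding statement has executed yet.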
Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Jim Jewett wrote:
> For pkgutil in particular, the change is that instead of writing to
> stderr (which can scroll off and get lost), it will write to the
> errorlog. In a truly default setup, that still ends up writing to
> stderr.

This might be better addressed by providing a centralised way of
redirecting stdout and/or stderr through the logging module. That would
fix the problem for all modules, even if they know nothing about logging.

--
Greg
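Such a centralised redirection can be sketched with a small file-like adapter. Note the assumption: LoggerWriter is a hypothetical helper written for this example, not an existing logging API; assigning one to sys.stderr would route every module's stderr writes through logging:

```python
import logging

class LoggerWriter:
    """Minimal file-like object that forwards writes to a logger, so
    sys.stdout/sys.stderr can be pointed at the logging system even for
    modules that know nothing about logging."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
    def write(self, message):
        message = message.rstrip()
        if message:  # skip the bare newline print() emits separately
            self.logger.log(self.level, message)
    def flush(self):
        pass  # file-like protocol; nothing buffered here

# Collect records in memory so the example is self-checking.
records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger = logging.getLogger("redirected.stderr")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)
logger.propagate = False

# A module that knows nothing about logging just writes to "stderr":
fake_stderr = LoggerWriter(logger, logging.ERROR)
print("something that would have scrolled off", file=fake_stderr)
```

Because the message lands in a handler rather than a terminal, it can be persisted, filtered, or reformatted centrally rather than getting lost.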
Re: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1)
[Tim, gets different results across whole runs of
python_d ../Lib/test/regrtest.py -R 2:40: test_filecmp test_exceptions
]
I think I found the cause for test_filecmp giving different results
across runs, at least on Windows. It appears to be due to this test
line:
self.failUnless(d.same_files == ['file'])
and you _still_ think I'm nuts ;-) The skinny is that
d = filecmp.dircmp(self.dir, self.dir_same)
and filecmp contains a module-level _cache with a funky scheme for
avoiding file comparisons if various os.stat() values haven't changed.
But st_mtime on Windows doesn't necessarily change when a file is
modified -- it has limited resolution (2 seconds on FAT32, and I'm
having a hard time finding a believable answer for NTFS (which I'm
using)).
In any case, filecmp._cache _usually_ doesn't change during a run, but
sometimes it sprouts a new entry, like
{('c:\\docume~1\\owner\\locals~1\\temp\\dir\\file',
'c:\\docume~1\\owner\\locals~1\\temp\\dir-same\\file'):
((32768, 27L, 1149640843.78125),
(32768, 27L, 1149640843.796875),
True)
}
and then that shows up as a small "leak".
That's easily repaired, and after doing so I haven't seen test_filecmp
report a leak again.
test_exceptions is a different story. My first 12 post-fix runs of:
python_d ..\Lib\test\regrtest.py -R2:40: test_filecmp test_exceptions
gave leak-free:
test_filecmp
beginning 42 repetitions
123456789012345678901234567890123456789012
..
test_exceptions
beginning 42 repetitions
123456789012345678901234567890123456789012
..
All 2 tests OK.
[25878 refs]
output, but the 13th was unlucky:
test_filecmp
beginning 42 repetitions
123456789012345678901234567890123456789012
..
test_exceptions
beginning 42 repetitions
123456789012345678901234567890123456789012
..
test_exceptions leaked [0, 203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0] references
All 2 tests OK.
[25883 refs]
Running test_filecmp too isn't necessary for me to see this --
test_exceptions can be run by itself, although it typically takes me
about 15 runs before "a leak" is reported. When a leak is reported,
it's always 203, and there's only one of those in the leak vector, but
I've seen it at index positions 0, 1, 2, and 3 (i.e., it moves around;
it was at index 1 in the output above).
Anyone bored enough to report what happens on Linux? Anyone remember
adding a goofy cache to exception internals?
a-suitable-msg-for-6/6/6-ly y'rs - tim
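The module-level cache Tim describes is easy to poke at. A sketch against the modern filecmp module (assumptions: ``filecmp.clear_cache()`` was only added in Python 3.4, well after this thread, and ``_cache`` remains a private implementation detail):

```python
import filecmp
import os
import tempfile

# Two distinct files with identical contents.
d = tempfile.mkdtemp()
paths = [os.path.join(d, name) for name in ("a", "b")]
for p in paths:
    with open(p, "w") as f:
        f.write("same contents")

filecmp.clear_cache()
assert filecmp.cmp(*paths, shallow=False)  # full byte comparison...
assert len(filecmp._cache) == 1            # ...memoised by os.stat() signature

# A second call with unchanged stat() values never re-reads the files,
# which is exactly where coarse st_mtime resolution can bite:
assert filecmp.cmp(*paths, shallow=False)
filecmp.clear_cache()
assert len(filecmp._cache) == 0
```

Clearing the cache between comparisons (as regrtest's leak-hunting effectively requires) trades speed for correctness when files may change within the timestamp granularity.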
Re: [Python-Dev] How to fix the buffer object's broken char buffer support
On 6/6/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
> If you run ``import array; int(buffer(array.array('c')))`` the
> interpreter will segfault. While investigating this I discovered that
> buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
> result by calling the wrapped object's bf_getreadbuffer or
> bf_getwritebuffer. This is wrong, since it is essentially redirecting
> the expected call to the wrong tp_as_buffer slot for the wrapped
> object. Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
>
> I see two options here. One is to remove the bf_getcharbuffer slot
> from the buffer object. The other option is to fix it so that it only
> returns bf_getcharbuffer and doesn't redirect improperly (this also
> raises the question of whether Py_TPFLAGS_HAVE_GETCHARBUFFER should then
> be defined for buffer objects).
>
> Since I don't use buffer objects I don't know if it is better to fix
> this or just rip it out.
How ironic: the charbuffer slot was added late in the game -- now we'd
be ripping it out...
I suspect that there's a reason for it; but in Py3k it will
*definitely* be ripped out. Buffers will deal purely in bytes then,
never in characters; you won't be able to get a buffer from a
(unicode) string at all.
Unhelpfully,
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
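For the record, that is exactly how it played out: in Python 3 the old buffer object is gone, and memoryview, its closest replacement, deals purely in bytes. A sketch (memoryview here stands in for the bf_getcharbuffer-free design, not for the 2.x buffer type itself):

```python
import array

# memoryview happily wraps byte-oriented buffers...
mv = memoryview(array.array('b', [1, 2, 3]))
print(mv.tobytes())

# ...but, as predicted, refuses a (unicode) string outright.
try:
    memoryview("no char buffers here")
    refused = False
except TypeError:
    refused = True
print("str refused:", refused)
```

The 2.x segfault class simply cannot arise: there is no charbuffer slot left for a wrapper to misroute.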
Re: [Python-Dev] How to fix the buffer object's broken char buffer support
On 6/6/06, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> On 6/6/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
> > If you run ``import array; int(buffer(array.array('c')))`` the
> > interpreter will segfault. While investigating this I discovered that
> > buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
> > result by calling the wrapped object's bf_getreadbuffer or
> > bf_getwritebuffer. This is wrong, since it is essentially redirecting
> > the expected call to the wrong tp_as_buffer slot for the wrapped
> > object. Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
> >
> > I see two options here. One is to remove the bf_getcharbuffer slot
> > from the buffer object. The other option is to fix it so that it only
> > returns bf_getcharbuffer and doesn't redirect improperly (this also
> > raises the question of whether Py_TPFLAGS_HAVE_GETCHARBUFFER should then
> > be defined for buffer objects).
> >
> > Since I don't use buffer objects I don't know if it is better to fix
> > this or just rip it out.
>
> How ironic: the charbuffer slot was added late in the game -- now we'd
> be ripping it out...
>
> I suspect that there's a reason for it; but in Py3k it will
> *definitely* be ripped out. Buffers will deal purely in bytes then,
> never in characters; you won't be able to get a buffer from a
> (unicode) string at all.
>
> Unhelpfully,
I actually figured out a reasonable way to integrate it into the
buffer object so that it won't be a huge issue. It just took a while to
avoid a ton of copy-and-paste and to decipher the docs (I have a
separate patch going to clarify those).
So buffer objects will properly support charbuffer in 2.5 (won't
backport since it is a semantic change). Hopefully it won't break too
much stuff. =)
-Brett
Re: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1)
Tim Peters wrote:
> and filecmp contains a module-level _cache with a funky scheme for
> avoiding file comparisons if various os.stat() values haven't changed.
> But st_mtime on Windows doesn't necessarily change when a file is
> modified -- it has limited resolution (2 seconds on FAT32, and I'm
> having a hard time finding a believable answer for NTFS (which I'm
> using)).

The time stamp itself has a precision of 100ns (it really is a
FILETIME). I don't know whether there is any documentation that explains
how often it is updated; I doubt it has a higher resolution than the
system clock :-)

> Anyone bored enough to report what happens on Linux?

I had to run it 18 times to get

test_exceptions
beginning 42 repetitions
123456789012345678901234567890123456789012
..
test_exceptions leaked [203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0] references
1 test OK.

Regards,
Martin
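The timestamp precision Martin mentions can be inspected directly on a modern Python. A sketch (assumption: ``st_mtime_ns`` was only added in Python 3.3, long after this thread; it exposes the raw nanosecond value instead of rounding through a float):

```python
import os
import tempfile

# Create a throwaway file and look at both views of its mtime.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"timestamp probe")
    path = f.name

st = os.stat(path)
print("st_mtime    :", st.st_mtime)     # float seconds, lossy
print("st_mtime_ns :", st.st_mtime_ns)  # integer nanoseconds
os.unlink(path)
```

On Windows the nanosecond field carries the FILETIME's 100ns units, but how often the filesystem actually updates it is still a separate question from the field's precision.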
Re: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1)
[Tim and Martin talking about leak tests when running regrtest with -R]

I've disabled the LEAKY_TESTS exclusion in build.sh. This means that if
any test reports leaks when regrtest.py -R :: is run, mail will be sent
to python-checkins. The next run should kick off in a few hours (4 and
16 ET). We'll see what it reports.

n
