Re: [Python-Dev] Developing/patching ctypes

2006-03-14 Thread Thomas Heller
Neal Norwitz wrote:
> On 3/9/06, Thomas Heller <[EMAIL PROTECTED]> wrote:
>> Would it be a solution to move the 'official' ctypes development into
>> Python SVN external/ctypes, or would this be considered abuse?  Another
> location in SVN could be used as well, if external is thought to contain
>> only vendor drops...
> 
> Thomas,
> 
> I'd be fine with the official ctypes repo being Python SVN.
> 
> The attached patch fixes all the ctypes tests so they pass on amd64. 
> It also fixes several warnings.  I'm not sure what else to do with the
> patch.  Let me know how you want to handle these in the future.
> 
> I'm not sure the patch is 100% correct.  You will need to decide what
> can be 64 bits and what can't.  I believe
> sq_{item,slice,ass_item,ass_slice} all need to use Py_ssize_t.  The
> types in ctypes.h may not require all the changes I made.  I don't
> know how you want to support older versions, so I unconditionally
> changed the types to Py_ssize_t.
> 
> n

Thanks, Neal, I'll look into that tonight.
In the future I hope to have access to an amd64-linux system, and I'll try
to keep this stuff up-to-date myself.

Thomas

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Raymond Hettinger
[Samuele Pedroni]
> there's no sys.checkinterval in Jython. Implementing this would need the
> introduction of some kind of GIL implementation in Jython, the JVM has no 
> primitive for global critical sections.

Wouldn't Java implement this directly by suspending and resuming the other 
threads (being careful to avoid access to monitored resources and to pair the 
suspend/resume operations in a try/finally or with-statement to prevent 
deadlocks)?


Raymond 



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Samuele Pedroni
Raymond Hettinger wrote:

> [Samuele Pedroni]
>
>> there's no sys.checkinterval in Jython. Implementing this would need the
>> introduction of some kind of GIL implementation in Jython, the JVM 
>> has no primitive for global critical sections.
>
>
> Wouldn't Java implement this directly by suspending and resuming the 
> other threads (being careful to avoid access to monitored resources 
> and to pair the suspend/resume operations in a try/finally or 
> with-statement to prevent deadlocks)?

suspending a thread is a deprecated operation because it can cause 
deadlocks.


Re: [Python-Dev] Developing/patching ctypes (was: Re: integrating ctypes into python)

2006-03-14 Thread Barry Warsaw
On Mon, 2006-03-13 at 21:38 -0800, Neal Norwitz wrote:
> On 3/9/06, Thomas Heller <[EMAIL PROTECTED]> wrote:
> > Would it be a solution to move the 'official' ctypes development into
> > Python SVN external/ctypes, or would this be considered abuse?  Another
> > location in SVN could be used as well, if external is thought to contain
> > only vendor drops...
> 
> Thomas,
> 
> I'd be fine with the official ctypes repo being Python SVN.

The sandbox seems a fine place for this.  It's what I'm currently doing
with the email package.  Two of the three versions are actually
external'd from Python branches and contain extra stuff to enable
standalone releases.  The third is being developed first in the sandbox,
but will soon be merged back into the trunk and then managed in the same
way as the other two.  Except for the usual headaches of managing three
versions of a package, it's working out quite well.

-Barry





Re: [Python-Dev] Strange behavior in Python 2.5a0 (trunk) --- possible error in AST?

2006-03-14 Thread Nick Coghlan
Nick Coghlan wrote:
> Unfortunately my new test case breaks test_compiler. I didn't notice because 
> I 
> didn't use -uall before checking it in :(
> 
> If no-one else gets to it, I'll try to sort it out tonight.

OK, as of rev 43025 the compiler module also understands augmented assignment 
to tuple subscripts, so test_compiler can cope with the new test case.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://www.boredomandlaziness.org


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Donovan Baarda
On Mon, 2006-03-13 at 21:06 -0800, Guido van Rossum wrote:
> Oh, no! Please!
> 
> I just had to dissuade someone inside Google from the same idea.

Heh... that was me... I LOL'ed when I saw this... and no, I didn't put
Raymond up to it :-)

> IMO it's fatally flawed for several reasons: it doesn't translate
> reasonably to Jython or IronPython, it's really tricky to implement,
> and it's an invitation for deadlocks. The danger of this thing in the
> wrong hands is too big to warrant the (rare) use case that can only be
> solved elegantly using direct GIL access.

I didn't bother pursuing it because I'm not that attached to it... I'm
not sure that a language like Python really needs it, and I don't do
that kind of programming much any more.

When I did, I was programming in Ada. The Ada language has a global
thread-lock used as a primitive to implement all other atomic operations
and thread-synchronisation stuff... (it's been a while... this may have
been a particular Ada compiler extension, though I think the Ada
concurrency model pretty much required it). And before that it was in
assembler; an atomic section was done by disabling all interrupts. At
that low-level, atomic sections were the building-block for all the
other higher level synchronisation tools. I believe the original
semaphore relied on an atomic test-and-set operation.

The main place where something like this would be useful in Python is in
writing thread-safe code that uses non-thread-safe resources. Examples
are: a chunk of code that redirects and then restores sys.stdout, or
something that changes and then restores TZ using time.tzset(), etc.
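An ordinary module-level lock already covers this pattern without any new interpreter machinery. A minimal sketch (the names `stdout_lock` and `redirected_stdout` are invented for illustration, not a stdlib API):

```python
import sys
import threading
from io import StringIO

# One lock serializes every redirect/restore cycle across all threads.
stdout_lock = threading.Lock()

class redirected_stdout:
    """Temporarily point sys.stdout at `stream`, one thread at a time."""
    def __init__(self, stream):
        self.stream = stream

    def __enter__(self):
        stdout_lock.acquire()       # block any other would-be redirector
        self.saved = sys.stdout
        sys.stdout = self.stream
        return self.stream

    def __exit__(self, *exc):
        sys.stdout = self.saved     # always restore, even on error
        stdout_lock.release()

buf = StringIO()
with redirected_stdout(buf):
    print("captured")               # lands in buf, not the real stdout
```

Only the threads that touch the shared resource need to go through the lock, which is precisely the objection to a global critical section: it stops everyone.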

I think the deadlock risk argument is bogus... any locking has deadlock
risks. The "danger in the wrong hands" I'm also unconvinced about;
non-threadsafe resource use worries me far more than a strong lock. I'd
rather debug a deadlock than a race condition any day. But being hard to
implement for other VMs is a breaker, and suggests there are damn good
reasons those VMs disallow it that I haven't thought of :-)

So I'm +0, probably -0.5...

> --Guido
> 
> On 3/13/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> > A user on comp.lang.python has twisted himself into knots writing 
> > multi-threaded
> > code that avoids locks and queues but fails when running code with 
> > non-atomic
> > access to a shared resource. While his specific design is somewhat flawed, 
> > it
> > does suggest that we could offer an easy way to make a block of code atomic
> > without the complexity of other synchronization tools:
> >
> >gil.acquire()
> >try:
> >   #do some transaction that needs to be atomic
> >finally:
> >   gil.release()
> >
> > The idea is to temporarily suspend thread switches (either using the GIL or 
> > a
> > global variable in the eval-loop).  Think of it as "non-cooperative"
> > multi-threading. While this is a somewhat rough approach, it is dramatically
> > simpler than the alternatives (i.e. wrapping locks around every access to a
> > resource or feeding all resource requests to a separate thread via a Queue).
> >
> > While I haven't tried it yet, I think the implementation is likely to be
> > trivial.
> >
> > FWIW, the new with-statement makes the above fragment even more readable:
> >
> > with atomic_transaction():
> > # do a series of steps without interruption
> >
> >
> > Raymond
> >
> 
> 
> --
> --Guido van Rossum (home page: http://www.python.org/~guido/)
-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/



Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Tim Peters
[Trent Mick]
> I have a patch in the works that defaults to "yes, this machine does
> have a soundcard" if cscript.exe cannot be found on the PATH.
>
> However, one wrinkle: test_winsound.py is made up of three test cases:
> BeepTest
> MessageBeepTest
> PlaySoundTest
> only the last need be skipped if there is not soundcard.

I'd say instead that they should never be skipped:  the real
difference on your box is the expected _outcome_ in the third
category.

After umpteen years we've got a universe of one machine where
PlaySoundTest is known to fail, and now a little mound of VB code that
presumably returns something different on that machine than on other
machines.  In reality, that's more code to test.

We seem to be assuming here that "the VB code says no sound device"
means "PlaySoundTest will fail in a particular way", and have one box
on which that's known to be true.  So sure, skip the tests on that
box, and the immediate buildbot failure on that box will go away. 
Other possibilities include that the test will also be skipped on boxes
where it would actually work, because the VB code isn't actually a
definitive test for some reason.

Since we can't be sure from a universe of one exception, better to
test that assumption too, by reworking the tests to say "oh, but if
the VB code thinks we don't have a sound card, then this test should
raise RuntimeError instead".  There's still a testable outcome here.


Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Tim Peters
[Mark Hammond]
> Maybe the following VBScript "port" of the above will work:
>
> -- check_soundcard.vbs
> rem Check for a working sound-card - exit with 0 if OK, 1 otherwise.
> set wmi = GetObject("winmgmts:")
> set scs = wmi.InstancesOf("win32_sounddevice")
> for each sc in scs
> set status = sc.Properties_("Status")
> wscript.Echo(sc.Properties_("Name") + "/" + status)
> if status = "OK" then
> wscript.Quit 0 ' normal exit
> end if
> next
> rem No sound card found - exit with status code of 1
> wscript.Quit 1
>
> -- eof
>
> Running "cscript.exe check_soundcard.vbs" and checking the return
> code should work.

FYI, "it works" on my main box:

C:\Code>cscript.exe csc.vbs
Microsoft (R) Windows Script Host Version 5.6
Copyright (C) Microsoft Corporation 1996-2001. All rights reserved.

Creative Audigy Audio Processor (WDM)/OK

C:\Code>echo %errorlevel%
0


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Donovan Baarda
On Tue, 2006-03-14 at 00:36 -0500, Raymond Hettinger wrote:
> [Guido]
> > Oh, no!
> 
> Before shooting this one down, consider a simpler incarnation not involving 
> the 
> GIL.  The idea is to allow an active thread to temporarily suspend switching 
> for 
> a few steps:
[...]
> I disagree that the need is rare.  My own use case is that I sometimes add 
> some 
> debugging print statements that need to execute atomically -- it is a PITA 
> because PRINT_ITEM and PRINT_NEWLINE are two different opcodes and are not 
> guaranteed to pair atomically.  The current RightWay(tm) is for me to create 
> a 
> separate daemon thread for printing and to send lines to it via the queue 
> module 
> (even that is tricky because you don't want the main thread to exit before a 
> print queued item is completed).  I suggest that that is too complex for a 
> simple debugging print statement.  It would be great to simply write:

You don't need to use a Queue... that has the potentially nasty side
effect of allowing threads to run ahead before their debugging has been
output. A better way is to have all your debugging go through a
print_debug() method that acquires and releases a debug_lock
threading.Lock. This is simpler as it avoids the separate thread, and
ensures that threads "pause" until their debugging output is done.
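A minimal sketch of that suggestion (the `stream` parameter is an addition so the example is self-contained; the version described above would write straight to stderr):

```python
import sys
import threading

debug_lock = threading.Lock()        # one lock shared by every debug call

def print_debug(*args, stream=None):
    """Write one debug line atomically; the caller blocks until it is out."""
    out = stream if stream is not None else sys.stderr
    line = " ".join(str(a) for a in args) + "\n"
    with debug_lock:                 # acquire/release paired even on error
        out.write(line)
        out.flush()
```

Because the write and flush happen under the lock, two threads can never interleave halves of their lines, and each thread pauses until its own output is actually emitted.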

-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Josiah Carlson

Samuele Pedroni <[EMAIL PROTECTED]> wrote:
> 
> Raymond Hettinger wrote:
> 
> > [Samuele Pedroni]
> >
> >> there's no sys.checkinterval in Jython. Implementing this would need the
> >> introduction of some kind of GIL implementation in Jython, the JVM 
> >> has no primitive for global critical sections.
> >
> >
> > Wouldn't Java implement this directly by suspending and resuming the 
> > other threads (being careful to avoid access to monitored resources 
> > and to pair the suspend/resume operations in a try/finally or 
> > with-statement to prevent deadlocks)?
> 
> suspending a thread is a deprecated operation because it can cause 
> deadlocks.

There are two assumptions that one can make about code using the "gil",
or the equivalent of suspending all threads but the current one, or in
Python, just disabling thread switching; I'll call it a (global)
'critical section'.

Either the user is going to rely on just the critical section for
locking, or the user is going to mix locks too.  If the user doesn't mix
(even implicitly with Queue, etc.), then there can be no deadlocks
caused by the critical section.  If the user _is_ mixing standard locks
with critical sections, the only _new_ potential cause of deadlocks is
if the user attempts to acquire locks within the critical section which
have already been acquired by another thread.  Deadlocks of this
particular type, however, can be generally prevented by making locks
aware of critical sections and raising an exception whenever a lock
acquisition is taking place within a critical section.  You wouldn't
want the exception to only be raised if the acquisition would block, as
this would result in intermittent errors; just make it an error.
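The lock-awareness scheme described above can be sketched in pure Python. Everything here (the `CriticalSection` and `AwareLock` names, the thread-local flag) is invented for illustration, and a real implementation would also have to actually suppress thread switching:

```python
import threading

# Per-thread flag recording whether this thread is inside a critical section.
_in_critical = threading.local()

class CriticalSection:
    """Marks a critical section; a real version would also stop switching."""
    def __enter__(self):
        _in_critical.active = True
    def __exit__(self, *exc):
        _in_critical.active = False

class AwareLock:
    """A lock that refuses to be acquired inside a critical section."""
    def __init__(self):
        self._lock = threading.Lock()
    def acquire(self):
        if getattr(_in_critical, "active", False):
            # Raise unconditionally, not just when the acquire would block,
            # so the failure is deterministic rather than intermittent.
            raise RuntimeError("lock acquisition inside a critical section")
        self._lock.acquire()
    def release(self):
        self._lock.release()
```

With this, mixing ordinary lock use and critical sections turns the new deadlock opportunity into an immediate, reproducible exception.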


It would be nice if Jython or IronPython could (and would) implement
these 'critical sections'.  Whether they can or not, I think that it
would be a useful feature in the CPython runtime.  It could be
considered a platform-specific feature, similar to how you can use
select on any file handle on *nix, but you need to jump through hoops to
get a similar thing on Windows.

I'm +1, but only because I've spent more than my share of time digging
around with threads, locks, Rlocks, conditions, events, etc.

 - Josiah



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Michael Chermside
Josiah Carlson writes:
> It would be nice if Jython or IronPython could (and would) implement
> these 'critical sections'.  Whether they can or not, I think that it
> would be a useful feature in the CPython runtime.

The issue is not whether Jython and IronPython "will", it's whether
they "can". Essentially, Jython and IronPython both use locking within
their fundamental data structures. This then allows them to freely
allow threads to run on multiple processors. Meanwhile, CPython lacks
locks in its fundamental data structures, so it uses the GIL to
ensure that code which might touch Python data structures executes on
only one CPU at a time.

The concept of a "critical section" makes great sense when there is
effectively only one CPU: just stop switching threads. But if code
is using multiple CPUs, what does it mean? Shut down the other CPUs?
To do such a thing cooperatively would require checking some master
lock at every step... (a price which CPython pays, but which the
implementations built on good thread-supporting VMs don't have to).
To do such a thing non-cooperatively is not supported in either VM.

Since I doubt we're going to convince Sun or Microsoft to change
their approach to threading, I think it is unwise to build such a
feature into the Python language. Supporting it in CPython only
requires (I think) no more than a very simple C extension. I think
it should stay as an extension and not become part of the language.

-- Michael Chermside


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Raymond Hettinger
[Nice analysis from Michael Chermside]
> The concept of a "critical section" makes great sense when there is
> effectively only one CPU: just stop switching threads. But if code
> is using multiple CPUs, what does it mean? Shut down the other CPUs?
 . . .
> I think it is unwise to build such a
> feature into the Python language. Supporting it in CPython only
> requires (I think) no more than a very simple C extension. I think
> it should stay as an extension and not become part of the language.

That makes sense.

One place where we already have CPython-specific support is in 
sys.setcheckinterval().  That suggests adapting that function, or adding a new 
one, to temporarily stop switching: almost the same as 
sys.setcheckinterval(sys.maxint), but continuing to perform the other periodic 
checks for control-break and such.
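A context manager along those lines might look like the following sketch. Note the anachronism: sys.setcheckinterval() was later deprecated and removed, and modern CPython's analogue, sys.setswitchinterval(), is used here instead; either form is only a hint to the interpreter, not a guarantee.

```python
import sys
from contextlib import contextmanager

@contextmanager
def switching_suppressed():
    """Hint the interpreter to stay on the current thread for a while.

    Sketch only: on Python 2 this would be sys.setcheckinterval(sys.maxint).
    C code can still release the GIL and let another thread run regardless.
    """
    old = sys.getswitchinterval()
    sys.setswitchinterval(1000.0)      # a very long time slice
    try:
        yield
    finally:
        sys.setswitchinterval(old)     # always restore the previous value
```

Usage is just `with switching_suppressed(): ...` around the steps that should not be interleaved with other Python bytecode.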


Raymond



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Thomas Wouters
On 3/14/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> One place where we already have CPython-specific support is in
> sys.setcheckinterval().  That suggests adapting that function, or adding a new
> one, to temporarily stop switching: almost the same as
> sys.setcheckinterval(sys.maxint), but continuing to perform the other
> periodic checks for control-break and such.

Don't forget that sys.setcheckinterval() is more of a hint than a requirement. 
It's easy to wrap sys.setcheckinterval() in a try/except or an if-hasattr, and 
just ignore the case where it doesn't exist ('it won't be necessary'). Doing 
the same thing with a 'critical section' would be a lot harder. I would also 
assume a 'critical section' should not allow threads in extension modules, 
even if they explicitly allow threads. That's quite a bit different from 
making the check-interval infinitely high for the duration of the block.

--
Thomas Wouters <[EMAIL PROTECTED]>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Tim Peters
[Raymond Hettinger]
>> FWIW, the new with-statement makes the above fragment even more
>> readable:
>>
>> with atomic_transaction():
>> # do a series of steps without interruption

[Phillip J. Eby]
> +1 on the idea, -1000 on the name.  It's neither atomic nor a
> transaction.  I believe that "critical section" is a more common term for
> what you're proposing.

No, there is no common term for this idea, no "standard" threading
model supports it directly (which is bad news for portability, of
course), and it's a Bad Idea to start calling it "critical section"
here.

There _is_ some variation in what "critical section" means, exactly,
to different thread programming cultures, but in none does it mean:

a section of code such that, once a thread enters it, all other
threads are blocked from doing anything for the duration

The common meaning is:

a section of code such that, once a thread enters it, all other
threads are blocked from entering the section for the duration

which is a very far cry from getting blocked from doing anything.

In some thread cultures, "critical section" also implies that a thread
won't migrate across processors (on a multi-CPU box) while that thread
is in a critical section, and that's in addition to the "other threads
are blocked from entering the section for the duration" meaning.

In some thread cultures, "critical section" isn't distinguished from
the obvious implementation in terms of acquiring and releasing a mutex
around the code section, but that gets muddy.  For example, on Win32
using a native mutex actually implments a cross-*process* "critical
section", while the term "critical section" is reserved for
cross-thread-within-a-process but not-cross-process mutual exclusion.


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Phillip J. Eby
At 02:21 PM 3/14/2006 -0500, Tim Peters wrote:
>There _is_ some variation in what "critical section" means, exactly,
>to different thread programming cultures, but in none does it mean:
>
> a section of code such that, once a thread enters it, all other
> threads are blocked from doing anything for the duration

Well, I'm showing my age here, but in the good ol' days of the 8086 
processor, I recall it frequently being used to describe a block of 
assembly code which ran with interrupts disabled - ensuring that no task 
switching would occur.

Obviously I haven't been doing a lot of threaded programming *since* those 
days, except in Python.  :)


>The common meaning is:
>
> a section of code such that, once a thread enters it, all other
> threads are blocked from entering the section for the duration

That doesn't seem like a very useful definition, since it describes any 
piece of code that's protected by a statically-determined mutex.  But you 
clearly have more experience in this than I.



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Tim Peters
[Raymond Hettinger]
> ...
> I disagree that the need is rare.  My own use case is that I sometimes
> add some debugging print statements that need to execute
> atomically -- it is a PITA because PRINT_ITEM and PRINT_NEWLINE
> are two different opcodes and are not guaranteed to pair atomically.

Well, it's much worse than that, right?  If you have a print list with
N items, there are N PRINT_ITEM opcodes.

> The current RightWay(tm) is for me to create a separate daemon
> thread for printing and to send lines to it via the queue module
> (even that is tricky because you don't want the main thread to exit
> before a print queued item is completed).  I suggest that that is too
> complex for a simple debugging print statement.

It sure is.  You're welcome to use my thread-safe debug-print function :-):

def msg(fmt, *args):
s = fmt % args + '\n'
for stream in sys.stdout, logfile:
stream.write(s)
stream.flush()

I use that for long-running (days) multi-threaded apps, where I want
to see progress messages on stdout but save them to a log file too. 
It assumes that the underlying C library writes a single string
atomically.  If I couldn't assume that, it would be easy to
acquire/release a lock inside the function.  For example, as-is the
order of lines displayed on stdout isn't always exactly the same as
the order in the log file, and when I care about that (I rarely do)
adding a lock can make it deterministic.

I also have minor variants of that function, some that prepend a
timestamp to each message, and/or prepend the id or name of the
current thread.  Because all such decisions are hiding inside the
msg() function, it's very easy to change the debug output as needed. 
Or to do

def msg(*args):
pass

when I don't want to see output at all.
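The locked variant alluded to above, which makes the line order identical on every stream, might look like this sketch (the `streams` parameter replaces the hard-coded `(sys.stdout, logfile)` pair so the example is self-contained):

```python
import threading

_msg_lock = threading.Lock()

def msg(fmt, *args, streams=()):
    """Like the msg() above, but a lock serializes whole write+flush cycles,
    so every stream sees the lines in exactly the same order."""
    s = fmt % args + "\n"
    with _msg_lock:
        for stream in streams:
            stream.write(s)
            stream.flush()
```

The lock is held across all streams for one message, so interleaving between stdout and the log file can no longer differ.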


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Guido van Rossum
On 3/14/06, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
> At 02:21 PM 3/14/2006 -0500, Tim Peters wrote:
> >The common meaning is:
> >
> > a section of code such that, once a thread enters it, all other
> > threads are blocked from entering the section for the duration
>
> That doesn't seem like a very useful definition, since it describes any
> piece of code that's protected by a statically-determined mutex.  But you
> clearly have more experience in this than I.

Trust Tim. That's what "critical section" means in most places. And
yes, indeed, a static mutex is the obvious way to implement it.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Tim Peters
[Phillip J. Eby]
> Well, I'm showing my age here, but in the good ol' days of the 8086
> processor, I recall it frequently being used to describe a block of
> assembly code which ran with interrupts disabled - ensuring that no task
> switching would occur.

According to Wikipedia's current article on "critical section" (which
is pretty good!), that still may be common usage for kernel-level
programmers.  "The rest of us" don't get to run privileged
instructions anymore, and in Wikipedia terms I'm talking about what
they call "application level" critical sections:

http://en.wikipedia.org/wiki/Critical_section

A little Googling confirms that almost everyone has the
application-level sense in mind these days.

> Obviously I haven't been doing a lot of threaded programming *since*
> those days, except in Python.  :)

And for all the whining about it, threaded programming in Python is
both much easier than elsewhere, _and_ still freaking hard ;-)

>> The common meaning is:
>>
>> a section of code such that, once a thread enters it, all other
>> threads are blocked from entering the section for the duration

> That doesn't seem like a very useful definition, since it describes any
> piece of code that's protected by a statically-determined mutex.  But you
> clearly have more experience in this than I.

As I tried to explain the first time, a mutex is a common
implementation technique, but even saying "mutex" doesn't define the
semantics (see the original msg for one distinction between
cross-thread and cross-process exclusion).  There are ways to
implement critical sections other than via a mutex.  The "common
meaning" I gave above tries to describe the semantics (visible
behavior), not an implementation.  A Python-level lock is an obvious
and straightforward way to implement those semantics.


Re: [Python-Dev] Topic suggestions from the PyCon feedback

2006-03-14 Thread Jan Claeys
Op ma, 13-03-2006 te 19:52 -0800, schreef Alex Martelli:
> The *ONE* thing I dislike about working in the US is vacations -- I  
> get about half of what I would expect in Europe, and that's with my  
> employer being reasonably generous... in practice, given I NEED some  
> time to go visit family and friends back in Italy, this means I can't  
> really take vacations to do conferences, but rather I must convince  
> my boss that conference X is worth my time [...]. 

Well, I'm sure (from a previous c.l.py encounter) that you know enough
about (European) civilisation to explain history to an American savage
(~= employer)?   ;-)

-- 
Jan Claeys



Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Martin v. Löwis
Tim Peters wrote:
> I'd say instead that they should never be skipped:  the real
> difference on your box is the expected _outcome_ in the third
> category.

That is indeed more reasonable than what I proposed.

Regards,
Martin


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Martin v. Löwis
Raymond Hettinger wrote:
> One place where we already have CPython-specific support is in 
> sys.setcheckinterval().  That suggests adapting that function or adding a new 
> one to  temporarily stop switching, almost the same as 
> sys.setcheckinterval(sys.maxint) but continuing to perform other periodic 
> checks 
> for control-break and such.

I object to the notion that Python does "thread switching". It doesn't.
Instead, it releases the GIL under certain circumstances, which might
make the operating system switch threads. Whether the operating system
does that is its own choice.

I don't see the value in disabling the periodic "release the GIL from
time to time": doing so does *not* mean that thread switching can no
longer happen. E.g. inside a PRINT_* opcode, a thread switch may still
occur, because file_write releases the GIL around the fwrite() call. So
even if you set the checkinterval to "no check", you cannot trust that
there won't be any thread switching.

Regards,
Martin


[Python-Dev] Another threading idea

2006-03-14 Thread Raymond Hettinger
FWIW, I've been working on a way to simplify the use of queues with daemon 
consumer threads.

Sometimes, I launch one or more consumer threads that wait for a task to enter 
a 
queue and then work on the task. A recurring problem is that I sometimes need 
to 
know if all of the tasks have been completed so I can exit or do something with 
the result.

If each thread only does a single task, I can use t.join() to wait until the 
task is done.  However, if the thread stays alive and waits for more Queue 
entries, then there doesn't seem to be a good way to tell when all the 
processing is done.

So, the idea is to create a subclass of Queue that increments a counter when 
objects are enqueued, that provides a method for worker threads to decrement 
the counter when the work is done, and offers a blocking join() method that 
waits until the counter is zero.

   # main thread
   q = TaskQueue()
   for t in worker_threads():
       t.start()
   for task in tasklist:
       q.put(task)  # increments the counter and enqueues a task
   q.join()         # all of the tasks are done (counter is zero)
   do_work_on_results()



   # worker thread
   while 1:
       task = q.get()   # task is popped but the counter is unchanged
       do_work(task)
       q.decrement()    # now the counter gets reduced

The idea is still in its infancy (no implementation and it hasn't been tried 
in real-world code) but I would like to get some feedback.  If it works out, 
I'll post a recipe to ASPN and see how it goes.
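[For what it's worth, this counting behaviour is essentially what later landed
in the standard library as Queue.task_done()/Queue.join().  A minimal sketch of
the proposed TaskQueue -- a Condition guarding an unfinished-task counter, with
the method names from the message above -- might look like this in modern
Python:]

```python
import threading
from queue import Queue

class TaskQueue(Queue):
    """Queue whose join() blocks until every put() has been matched
    by a decrement() call (names follow the proposal above)."""

    def __init__(self):
        Queue.__init__(self)
        self._unfinished = 0
        self._all_done = threading.Condition()

    def put(self, item, block=True, timeout=None):
        with self._all_done:
            self._unfinished += 1      # count the task before it is enqueued
        Queue.put(self, item, block, timeout)

    def decrement(self):
        with self._all_done:
            self._unfinished -= 1
            if self._unfinished == 0:
                self._all_done.notify_all()

    def join(self):
        with self._all_done:
            while self._unfinished:
                self._all_done.wait()

# demo: two daemon workers drain ten tasks; join() returns once all are done
results = []
q = TaskQueue()

def worker():
    while True:
        task = q.get()          # task is popped but the counter is unchanged
        results.append(task * 2)
        q.decrement()           # now the counter gets reduced

for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()
for n in range(10):
    q.put(n)
q.join()                        # blocks until the counter is back to zero
print(sorted(results))          # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```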


Raymond 



Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Alexander Schremmer
On Mon, 13 Mar 2006 21:57:59 -0500, Raymond Hettinger wrote:

> Think of it as "non-cooperative" 
> multi-threading. While this is a somewhat rough approach, it is dramatically 
> simpler than the alternatives (i.e. wrapping locks around every access to a 
> resource or feeding all resource requests to a separate thread via a Queue).

Why is that actually more difficult to write? Consider

res_lock = Lock()
res = ...
with locked(res_lock):
do_something(res)

It is only about supplying the correct lock at the right time. Or even this
could work:

res = ... # implements lock()/unlock()
with locked(res):
do_something(res)

Directly exposing the GIL (or some related mechanism) for such purposes does
not seem like a good idea: it simply gives a novice a way to stop all threads.
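[A locked() helper like the one sketched above is not a stdlib name, but with
the then-new contextlib (Python 2.5, PEP 343) it is only a few lines; the name
locked comes from the message, not from any real API:]

```python
import threading
from contextlib import contextmanager

@contextmanager
def locked(lock):
    # acquire on entry, release on exit -- even if the body raises
    lock.acquire()
    try:
        yield lock
    finally:
        lock.release()

res_lock = threading.Lock()
shared = []

with locked(res_lock):          # the pattern from the message above
    shared.append(1)

print(shared)                   # [1]
```

[Lock objects themselves also grew __enter__/__exit__, so a bare
"with res_lock:" does the same job.]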

Kind regards,
Alexander



Re: [Python-Dev] About "Coverity Study Ranks LAMP Code Quality"

2006-03-14 Thread Alexander Schremmer
On Tue, 14 Mar 2006 00:55:52 +0100, "Martin v. Löwis" wrote:

> I can understand that position. The bugs they find include potential
> security flaws, for which exploits could be created if the results are
> freely available. 

On the other hand, the exploit could be crafted based on reading the SVN
check-ins ...

Kind regards,
Alexander



Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Trent Mick
[Martin v. Loewis wrote]
> Tim Peters wrote:
> > I'd say instead that they should never be skipped:  the real
> > difference on your box is the expected _outcome_ in the third
> > category.
> 
> That is indeed more reasonable than what I proposed.

I'll do this tonight or tomorrow.

Trent

-- 
Trent Mick
[EMAIL PROTECTED]


Re: [Python-Dev] Another threading idea

2006-03-14 Thread Guido van Rossum
Isn't this a job for threading.BoundedSemaphore()?

On 3/14/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> FWIW, I've been working on a way to simplify the use of queues with daemon
> consumer threads
>
> Sometimes, I launch one or more consumer threads that wait for a task to 
> enter a
> queue and then work on the task. A recurring problem is that I sometimes need 
> to
> know if all of the tasks have been completed so I can exit or do something 
> with
> the result.
>
> If each thread only does a single task, I can use t.join() to wait until the
> task is done.  However, if the thread stays alive and waits for more Queue
> entries, then there doesn't seem to be a good way to tell when all the
> processing is done.
>
> So, the idea is to create a subclass of Queue that increments a counter when
> objects are enqueued, that provides a method for worker threads to decrement 
> the
> counter when the work is done, and offers a blocking join() method that waits
> until the counter is zero
>
>    # main thread
>    q = TaskQueue()
>    for t in worker_threads():
>        t.start()
>    for task in tasklist:
>        q.put(task)  # increments the counter and enqueues a task
>    q.join()         # all of the tasks are done (counter is zero)
>    do_work_on_results()
>
>
>
>    # worker thread
>    while 1:
>        task = q.get()   # task is popped but the counter is unchanged
>        do_work(task)
>        q.decrement()    # now the counter gets reduced
>
>
> The idea is still in its infancy (no implementation and it hasn't been tried 
> in
> real-world code) but I would like to get some feedback.  If it works out, I'll
> post a recipe to ASPN and see how it goes.
>
>
> Raymond
>
>


--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Threading idea -- exposing a global thread lock

2006-03-14 Thread Raymond Hettinger
[Raymond]
>> While this is a somewhat rough approach, it is dramatically
>> simpler than the alternatives (i.e. wrapping locks around every access to a
>> resource or feeding all resource requests to a separate thread via a Queue).

[Alexander]
> Why is that actually more difficult to write? Consider
>
> res_lock = Lock()
> res = ...
> with locked(res_lock):
>do_something(res)
>
> It is only about supplying the correct lock at the right time.

In the case on the newsgroup, the resource was a mapping or somesuch.  Getitem 
and setitem accesses were spread throughout the program and it would have been 
a mess to put locks everywhere.  All he wanted was to simply freeze 
task-switching for a moment so he could loop over the mapping and have it be 
in a consistent state from the start of the loop to the end.  His situation 
was further complicated because the looping construct was buried in library 
code which used iterkeys() instead of keys() -- IOW, the code bombed if the 
mapping changed during iteration.

While the guy had an odd design, the generalization was clear.  Sometimes you 
want to make one little section uninterruptible and don't want to put locks 
around everything that touches a resource.  If the resource access occurs in 
fifty places throughout your code and you only need one little section to have 
uninterrupted access, then the lock approach requires way too much effort. 
Further, if you miss putting locks around any one of the accesses, you lose 
reliability.  Also, with locks scattered all over the place, it is not easy to 
tell at a glance that you haven't introduced the possibility of deadlock.  In 
contrast, with the temporary suspension of thread-switching, it is pretty easy 
to look inside the uninterruptible block to make sure that none of the 
statements block.  Why make life unnecessarily hard?


Raymond 



Re: [Python-Dev] Another threading idea

2006-03-14 Thread Paul Moore
On 3/14/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> FWIW, I've been working on a way to simplify the use of queues with daemon
> consumer threads
>
> Sometimes, I launch one or more consumer threads that wait for a task to 
> enter a
> queue and then work on the task. A recurring problem is that I sometimes need 
> to
> know if all of the tasks have been completed so I can exit or do something 
> with
> the result.
[...]
> So, the idea is to create a subclass of Queue that increments a counter when
> objects are enqueued, that provides a method for worker threads to decrement 
> the
> counter when the work is done, and offers a blocking join() method that waits
> until the counter is zero

I've also hit this problem, and would find this pattern useful.

FWIW, in my code, I bypassed the problem by spawning one worker per
task, and waiting on them all. This works, but is sub-optimal (there's
no need for 100+ subthreads, when 20 or so generic workers would have
done...)

Paul.


Re: [Python-Dev] Another threading idea

2006-03-14 Thread Paul Moore
On 3/14/06, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> Isn't this a job for threading.BoundedSemaphore()?

Not sure I see how. What I think Raymond's after (and certainly what I
want) is to queue N tasks, set a counter to N, then wait until the
counter goes to zero.

I suppose

counter = Semaphore(-N)
# Queue N tasks
counter.acquire()

with each task (or the queue) saying

counter.release()

when it finishes. But the logic seems backwards, and highly prone to
off-by-one errors, and I'm not entirely convinced that a negative
semaphore value is officially supported...

(BoundedSemaphore seems a red herring here - the blocking semantics of
Semaphore and BoundedSemaphore are identical).

Paul.


Re: [Python-Dev] Another threading idea

2006-03-14 Thread Raymond Hettinger
> Isn't this a job for threading.BoundedSemaphore()?

I don't see how that would work.  ISTM that we need an inverse of a 
BoundedSemaphore.  If I understand it correctly, a BS blocks after some 
pre-set maximum number of acquires and is used for resources with limited 
capacity (i.e. a number of connections that can be served).  With the 
TaskQueue, there is no pre-set number, the queue can grow to any size, and 
the join() method will block until the counter falls back to zero.  IOW, a 
BS is about potentially blocking new requests and a TaskQueue is about 
blocking other work until outstanding requests are complete.


Raymond






Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Tim Peters
[Trent Mick, on test_winsound]
> I'll do this tonight or tomorrow.

Cool!

I see that your Win2K buildbot slave always dies in the compile step now, with

"""
-- Build started: Project: pythoncore, Configuration: Debug Win32 --

Compiling resources...
generate buildinfo
cl.exe -c -D_WIN32 -DUSE_DL_EXPORT -D_WINDOWS -DWIN32 -D_WINDLL
-D_DEBUG -MDd ..\Modules\getbuildinfo.c -Fogetbuildinfo.o -I..\Include
-I..\PC
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.
getbuildinfo.c
Linking...
LINK : fatal error LNK1104: cannot open file './python25_d.dll'
"""

That happened to me once, but I still don't understand it.  It turned
out that  the corresponding python_d.exe was still running (for hours,
and hours, and hours, ...), and I had to manually kill the process. 
I'm not sure that was enough, because I coincidentally rebooted the
box before the buildbot tests ran again.  I am pretty sure that the
symptom above won't fix itself.

Possibly related:  since we upgraded to a new bsddb (and this may be
coincidence), I've seen two failure modes in test_shelve:  test_bool
(which is the first test) never completes, and test_bool does complete
but fails.  Turns out both are rare failure modes, and they haven't
happened again since I prepared myself to dig into them <0.5 wink>.


Re: [Python-Dev] Another threading idea

2006-03-14 Thread Guido van Rossum
I think I was thinking of the following: create a semaphore set to
zero; the main thread does N acquire operations; each of N workers
releases it once after it's done. When the main thread proceeds it
knows all workers are done. Doesn't that work? Also, I believe Tim
once implemented a barrier lock but I can't find it right now.
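[Guido's recipe can be sketched directly: a Semaphore created at zero acts as a
countdown that the main thread drains with N acquire() calls, one per worker:]

```python
import threading

N = 5
done = threading.Semaphore(0)   # starts at zero, so acquire() blocks
results = []

def worker(n):
    results.append(n * n)       # do the task...
    done.release()              # ...then signal "one worker finished"

for i in range(N):
    threading.Thread(target=worker, args=(i,)).start()

for _ in range(N):              # main thread: N acquires wait for N releases
    done.acquire()

# all workers are guaranteed done here
print(sorted(results))          # [0, 1, 4, 9, 16]
```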

--Guido

On 3/14/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> > Isn't this a job for threading.BoundedSemaphore()?
>
> I don't see how that would work.  ISTM that we need an inverse of a
> BoundedSemaphore.  If I understand it correctly, a BS blocks after some 
> pre-set
> maximum number of acquires and is used for resources with limited capacity 
> (i.e.
> a number of connections that can be served).  With the TaskQueue, there is no
> pre-set number, the queue can grow to any size, and the join() method will 
> block
> until the counter falls back to zero.  IOW, a BS is about potentially blocking
> new requests and a TaskQueue is about blocking other work until outstanding
> requests are complete.
>
>
> Raymond
>
>
>
>
>


--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Another threading idea

2006-03-14 Thread Tim Peters
[Raymond Hettinger]
> FWIW, I've been working on a way to simplify the use of queues with
> daemon consumer threads
>
> Sometimes, I launch one or more consumer threads that wait for a task
> to enter a queue and then work on the task. A recurring problem is that
> I sometimes need to know if all of the tasks have been completed so I
> can exit or do something with the result.

FWIW, instead of:

   # main thread
   q = TaskQueue()
   for t in worker_threads():
       t.start()
   for task in tasklist:
       q.put(task)  # increments the counter and enqueues a task
   q.join()         # all of the tasks are done (counter is zero)
   do_work_on_results()

I've sometimes used a separate "work finished" queue, like so:

   # main thread
   q = TaskQueue()
   finished = Queue.Queue()
   for t in worker_threads():
       t.start()
   for task in tasklist:
       q.put(task)
   for task in tasklist:
       finished.get()
   do_work_on_results()

When a worker thread is done with a task, it simply does:

   finished.put(None)  # `None` can just as well be 42 or "done"

No explicit count is needed, although it's easy to add one if desired. 
The only trick is for the main thread to insist on doing finished.get() 
exactly as many times as it does q.put().

This is easy and natural -- once you've seen it :-)  BTW, it's more
common for me to put something meaningful on the `finished` queue, as
my main thread often wants to accumulate some info about the outcomes
of tasks.
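[Tim's pattern, fleshed out into a small runnable form; worker_threads() and
do_work() from the earlier messages are replaced by inline stand-ins:]

```python
import queue
import threading

tasklist = list(range(8))
tasks = queue.Queue()
finished = queue.Queue()        # one entry per completed task

def worker():
    while True:
        task = tasks.get()
        finished.put(task * 10) # put something meaningful: the task's result

for _ in range(3):              # a small pool of generic workers
    threading.Thread(target=worker, daemon=True).start()

for task in tasklist:
    tasks.put(task)

# the only trick: get() exactly as many times as we put()
results = [finished.get() for _ in tasklist]
print(sorted(results))          # [0, 10, 20, 30, 40, 50, 60, 70]
```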


Re: [Python-Dev] About "Coverity Study Ranks LAMP Code Quality"

2006-03-14 Thread Greg Ewing
Fredrik Lundh wrote:

> return=NULL; output=junk => out of memory
> return=junk; output=-1 => cannot do this
> return=pointer; output=value => did this, returned value bytes

> I agree that the design is a bit questionable;

It sure is. If you get both NULL and -1 returned, how are
you supposed to know which one is the junk?

Greg


[Python-Dev] Py3k branch - please stay out :-)

2006-03-14 Thread Guido van Rossum
So I created a Py3K branch in subversion.  (Due to my slippery fingers
it's actually called p3yk -- that's fine, it may keep bystanders out,
and it means we can rename it to the proper name when it's more ready
for public consumption. :-)

My current plans for this branch are simple: I'm going to rip out some
obvious stuff (string exceptions, classic classes) and get a "feel"
for what Python 3000 might look like. I'm not particularly looking for
help -- if all goes well this is going to be my personal hobby project
for the next few months, and then we'll see where we stand. I promised
OSCON a keynote on Python 3000 so that's a convenient deadline.

In other news, I'd like to nominate Neal Norwitz as the Python 2.5
"release coordinator". He's already doing a great job doing exactly
what I think a coordinator should be doing. Anthony will remain
release manager, Tim, Martin, Fred and others will do their stuff; but
Neal can be the public face. He and Anthony should probably get
together on IM and decide on the actual release schedule. For all
Python 2.5 issues, please look to Neal. Also, if your proposal doesn't
already have a PEP number, it shouldn't be going into Python 2.5. It's
time to start releasing and stop evolving for a few months...

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Py3k branch - please stay out :-)

2006-03-14 Thread Brett Cannon
On 3/14/06, Guido van Rossum <[EMAIL PROTECTED]> wrote:
[SNIP]
> In other news, I'd like to nominate Neal Norwitz as the Python 2.5
> "release coordinator". He's already doing a great job doing exactly
> what I think a coordinator should be doing. Anthony will remain
> release manager, Tim, Martin, Fred and others will do their stuff; but
> Neal can be the public face. He and Anthony should probably get
> together on IM and decide on the actual release schedule. For all
> Python 2.5 issues, please look to Neal. Also, if your proposal doesn't
> already have a PEP number, it shouldn't be going into Python 2.5. It's
> time to start releasing and stop evolving for a few months...

+1 for Neal being the release coordinator.

-Brett


Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Trent Mick
[Tim Peters wrote]
>...
> I see that your Win2K buildbot slave always dies in the compile step now, with
> 
> """
> -- Build started: Project: pythoncore, Configuration: Debug Win32 --
> 
> Compiling resources...
> generate buildinfo
> cl.exe -c -D_WIN32 -DUSE_DL_EXPORT -D_WINDOWS -DWIN32 -D_WINDLL
> -D_DEBUG -MDd ..\Modules\getbuildinfo.c -Fogetbuildinfo.o -I..\Include
> -I..\PC
> Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
> Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.
> getbuildinfo.c
> Linking...
> LINK : fatal error LNK1104: cannot open file './python25_d.dll'
> """
> 
> That happened to me once, but I still don't understand it.  It turned
> out that  the corresponding python_d.exe was still running (for hours,
> and hours, and hours, ...), and I had to manually kill the process. 
> I'm not sure that was enough, because I coincidentally rebooted the
> box before the buildbot tests ran again.  I am pretty sure that the
> symptom above won't fix itself.

Yes I've noticed it too. I've had to kill python_d.exe a few times. I
haven't yet had the chance to look into it. I am NOT getting this error
on another Windows Python build slave that I am running in-house for
play.

Trent

-- 
Trent Mick
[EMAIL PROTECTED]


Re: [Python-Dev] Topic suggestions from the PyCon feedback

2006-03-14 Thread Anthony Baxter
On Tuesday 14 March 2006 14:22, A.M. Kuchling wrote:
> The conclusion I draw from these results: intermediate- or
> advanced-level topics of program design are not covered enough,
> whether in the Python documentation, in published books and
> articles, or in PyCon talks.  Please feel free to mine the above
> list, or the rest of the PyCon feedback, for topic ideas.
>
> In particular: if you're going to attend PyCon 2007, EuroPython, or
> some other conference (even a non-Python one), please consider
> submitting a talk proposal covering one of the above topics.  Such
> presentations would find a receptive audience, I think.

Just as another data point - at OSDC (the Australian open source 
conference) I've presented a "What's New In Python" talk the two 
years the conference has run, and it's gotten good responses from
the audience. It's a fairly brief race through it all - only 1/2
an hour - but I try to hit all the good points.


-- 
Anthony Baxter <[EMAIL PROTECTED]>
It's never too late to have a happy childhood.


Re: [Python-Dev] Octal literals

2006-03-14 Thread Guido van Rossum
(I'm shedding load; cleaning up my inbox in preparation for moving on
to Py3K. I'll try to respond to some old mail in the process.)

On 2/6/06, Alex Martelli <[EMAIL PROTECTED]> wrote:
> Essentially, you need to decide: does type(x) mostly refer to the
> protocol that x respects ("interface" plus semantics and pragmatics),
> or to the underlying implementation?  If the latter,  as your
> observation about "the philosophy" suggests, then it would NOT be nice
> if int was an exception wrt other types.
>
> If int is to be a concrete type, then I'd MUCH rather it didn't get
> subclassed, for all sorts of both practical and principled reasons.
> So, to me, the best solution would be the abstract base class with
> concrete implementation subclasses.  Besides being usable for
> isinstance checks, like basestring, it should also work as a factory
> when called, returning an instance of the appropriate concrete
> subclass.

I like this approach, and I'd like to make it happen. (Not tomorrow. :-)

> AND it would let me have (part of) what I was pining for a
> while ago -- an abstract base class that type gmpy.mpz can subclass to
> assert "I _am_ an integer type!", so lists will accept mpz instances
> as indices, etc etc.

I'm still dead set against this. Using type checks instead of
interface checks is too big a deviation from the language's
philosophy. It would be the end of duck typing as we know it! Using
__index__ makes much more sense to me.

> Now consider how nice it would be, on occasion, to be able to operate
> on an integer that's guaranteed to be 8, 16, 32, or 64 bits, to
> ensured the desired shifting/masking behavior for certain kinds of
> low-level programming; and also on one that's unsigned, in each of
> these sizes.  Python could have a module offering signed8, unsigned16,
> and so forth (all combinations of size and signedness supported by the
> underlying C compiler), all subclassing the abstract int, and
> guarantee much happiness to people who are, for example, writing a
> Python prototype of code that's going to become C or assembly...

Why should these have to subclass int? They behave quite differently!
I still don't see the incredible value of such types compared to
simply doing standard arithmetic and adding "& 0xFF" or "& 0xFFFF" at
the end, etc. (Slightly more complicated for signed arithmetic, but
who really wants signed clipped arithmetic except if you're simulating
a microprocessor?)

You can write these things in Python 2.5, and as long as they
implement __index__ and do their own mixed-mode arithmetic when
combined with regular int or long, all should be well. (BTW a difficult
design choice may be: if an int8 and an int meet, should the result be
an int8 or an int?)
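[A toy sketch of the __index__ (PEP 357) approach Guido describes: a masked
8-bit type that lists happily accept as an index, with no subclassing of int.
The name uint8 here is illustrative, not a real stdlib type:]

```python
class uint8:
    """Toy unsigned 8-bit integer: arithmetic is masked with & 0xFF, and
    __index__ lets it be used anywhere Python wants an integer index."""

    def __init__(self, value):
        self.value = value & 0xFF

    def __add__(self, other):
        return uint8(self.value + int(other))   # mixed-mode with plain ints

    def __index__(self):        # used by indexing, slicing, hex(), etc.
        return self.value

    __int__ = __index__

    def __repr__(self):
        return f"uint8({self.value})"

x = uint8(250) + 10             # 260 & 0xFF == 4: wraps around like C
letters = ["a", "b", "c", "d", "e", "f"]
print(x, letters[x])            # list indexing accepts x via __index__
```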

> Similarly, it would help a slightly different kind of prototyping a
> lot if another Python module could offer 32-bit, 64-bit, 80-bit and
> 128-bit floating point types (if supported by the underlying C
> compiler) -- all subclassing an ABSTRACT 'float'; the concrete
> implementation that one gets by calling float or using a float literal
> would also subclass it... and so would the decimal type (why not? it's
> floating point -- 'float' doesn't mean 'BINARY fp';-).  And I'd be
> happy, because gmpy.mpf could also subclass the abstract float!

I'd like concrete indications that the implementation of such a module
runs into serious obstacles with the current approach. I'm not aware
of any, apart from the occasional isinstance(x, float) check in the
standard library. If that's all you're fighting, perhaps those
occurrences should be fixed? They violate duck typing.

> And then finally we could have an abstract superclass 'number', whose
> subclasses are the abstract int and the abstract float (dunno 'bout
> complex, I'd be happy either way), and Python's typesystem would
> finally start being nice and cleanly organized instead of
> grand-prarie-level flat ...!-)

I think you can have families of numbers separate from subclassing
relationships. I'm not at all sure that subclassing doesn't create
more problems than it solves here.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Tim Peters
[Trent Mick]
> Yes I've noticed it too. I've had to kill python_d.exe a few times. I
> haven't yet had the chance to look into it. I am NOT getting this error
> on another Windows Python build slave that I am running in-house for
> play.

The last run on your Win2K slave that got beyond the compile step:

http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk/builds/16/step-test/0

Looks like it was running test_bsddb at the time, and the test
framework gave up after waiting 20 minutes for more output.  I had one
of those "recently" that waited 20 minutes for output after starting
test_shelve, but it's scrolled off the page.  Berkeley DB is fishy. 
Looks like the buildbot doesn't know how to kill a process on Windows
either (SIGKILL sure ain't gonna do it ;-)).

The good news is that at least we're not seeing the random segfaults
plaguing the Mac slave :-)


Re: [Python-Dev] Making builtins more efficient

2006-03-14 Thread Steven Elliott
On Thu, 2006-03-09 at 08:51 -0800, Raymond Hettinger wrote:
> [Steven Elliott]
> > As you probably know each access of a builtin requires two hash table
> > lookups.  First, the builtin is not found in the list of globals.  It is
> > then found in the list of builtins.
> 
> If someone really cared about the double lookup, they could flatten a level 
> by 
> starting their modules with:
> 
>from __builtin__ import *
> 
> However, we don't see people writing this kind of code.  That could mean that 
> the double lookup hasn't been a big concern.

It could mean that.  I think what you are suggesting is sufficiently
clever that the average Python coder may not have thought of it.

In any case, many people are willing to do "while 1" instead of "while
True" to avoid the double lookup.  And the "from __builtin__ import *"
additionally imposes a startup cost and memory cost (at least a word per
builtin, I would guess).

> > Why not have a means of referencing the default builtins with some sort
> > of index the way the LOAD_FAST op code currently works?
> 
> FWIW, there was a PEP proposing a roughly similar idea, but the PEP 
> encountered 
> a great deal of resistance:
> 
>   http://www.python.org/doc/peps/pep-0329/
> 
> When it comes time to write your PEP, it would helpful to highlight how it 
> differs from PEP 329 (i.e. implemented through the compiler rather than as a 
> bytecode hack, etc.).

I'm flattered that you think it might be worthy of a PEP.  I'll look
into doing that.

> > Perhaps what I'm suggesting isn't feasible for reasons that have already
> > been discussed.  But it seems like it should be possible to make "while
> > True" as efficient as "while 1".
> 
> That is going to be difficult as long as it is legal to write:
> 
> True = 0

"LOAD_BUILTIN" (or whatever we want to call it) should be as fast as
"LOAD_FAST" (locals) or "LOAD_CONST" in that they each index into an
array where the index is the argument to the opcode.  
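[The double lookup being described is visible in the bytecode: in CPython a
builtin like len is fetched by name at run time via LOAD_GLOBAL (which, in the
2.x interpreter, fell through the module's globals dict to __builtin__), while
a local is fetched by position via LOAD_FAST.  A quick look with the dis
module, written for Python 3:]

```python
import dis
import io

def f(seq):
    n = len(seq)    # len: LOAD_GLOBAL -- a name lookup performed at run time
    return n        # n:   LOAD_FAST   -- a direct array index, no hashing

buf = io.StringIO()
dis.dis(f, file=buf)
listing = buf.getvalue()
print("LOAD_GLOBAL" in listing, "LOAD_FAST" in listing)   # True True
```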

I'll look into writing a PEP.

-- 
---
|  Steven Elliott  |  [EMAIL PROTECTED] |
---




Re: [Python-Dev] Still looking for volunteer to run Windows buildbot

2006-03-14 Thread Tim Peters
[Uncle Timmy]
...
> Looks like it was running test_bsddb at the time, and the test
> framework gave up after waiting 20 minutes for more output.  I had one
> of those "recently" that waited 20 minutes for output after starting
> test_shelve, but it's scrolled off the page.  Berkeley DB is fishy.

Well speak of the devil, and the Canadians appear ;-)  Your _current_
Win2K test run crapped out after waiting 20 minutes for test_shelve to
finish:

http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk/builds/23/step-test/0

I don't recall this ever happening before we moved to the newer bsddb.
Now it's happened on at least two machines.


Re: [Python-Dev] Keep default comparisons - or add a second set?

2006-03-14 Thread Guido van Rossum
On 12/28/05, Robert Brewer <[EMAIL PROTECTED]> wrote:
> Noam Raphael wrote:
>  > I don't think that every type that supports equality
>  > comparison should support order comparison. I think
>  > that if there's no meaningful comparison (whether
>  > equality or order), an exception should be raised.
>
>  Just to keep myself sane...
>
>  def date_range(start=None, end=None):
>  if start == None:
>  start = datetime.date.today()
>  if end == None:
>  end = datetime.date.today()
>  return end - start
>
>  Are you saying the "if" statements will raise TypeError if start or end are
> dates? That would be a sad day for Python. Perhaps you're saying that there
> is a "meaningful comparison" between None and anything else, but please
> clarify if so.

Not to worry. My plans for Py3K are to ditch />= unless
explicitly defined, but to define == and != on all objects -- if not
explicitly defined, == will be false and != will be true. Types can
still override == and != to raise exceptions if they really want to
guard against certain comparisons; but equality is too important an
operation to drop. It should still be possible to use dicts with
mixed-type keys!
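
For concreteness, here is a sketch of the semantics Guido describes, as they later shipped in Python 3 (an illustration, not code from this thread):

```python
# Order comparisons across unrelated types raise TypeError, while
# == and != are always defined, defaulting to "not equal".
x, s = 1, "a"
assert (x == s) is False      # default equality across types: False
assert (x != s) is True

raised = False
try:
    x < s                     # no default ordering across types
except TypeError:
    raised = True
assert raised

# Dicts with mixed-type keys keep working; they only need hash() and ==.
d = {1: "int", "1": "str", None: "none"}
assert d["1"] == "str"
```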

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


[Python-Dev] Deprecated modules going away in 2.5

2006-03-14 Thread Neal Norwitz
Unless I hear shouts *soon*, the following modules will be removed in 2.5:

reconvert.py
regex # regexmodule.c
regex_syntax.py
regsub.py

lib-old/* # these are the modules under lib-old
Para.py      codehack.py  fmt.py       ni.py        statcache.py  whatsound.py
addpack.py   dircmp.py    grep.py      packmail.py  tb.py         whrandom.py
cmp.py       dump.py      lockfile.py  poly.py      tzparse.py    zmod.py
cmpcache.py  find.py      newdir.py    rand.py      util.py

In addition, I will swap sre and re.  This will make help(re) work properly.

Let me know if you disagree with any of these changes.
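
For anyone still depending on the removed string-substitution module, the replacement for regsub is re; a minimal migration sketch (the old regsub call is recalled from memory of its API and is illustrative only):

```python
# Old, removed in 2.5 (shown from memory of the regsub API):
#   import regsub
#   regsub.sub("ham", "spam", "ham and eggs")
# Replacement using re:
import re

result = re.sub("ham", "spam", "ham and eggs")
assert result == "spam and eggs"
```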

n


Re: [Python-Dev] About "Coverity Study Ranks LAMP Code Quality"

2006-03-14 Thread Fredrik Lundh
Greg Ewing wrote:

> Fredrik Lundh wrote:
>
> > return=NULL; output=junk => out of memory
> > return=junk; output=-1 => cannot do this
> > return=pointer; output=value => did this, returned value bytes
>
> > I agree that the design is a bit questionable;
>
> It sure is. If you get both NULL and -1 returned, how are
> you supposed to know which one is the junk?

I was about to say "by doing the tests in the prescribed order",
but you're right that it's not obvious that the function checks
that it returns the right kind of junk...  (it's possible that the
junk in the second line is actually a pointer to some other object).

I'll have a look.







Re: [Python-Dev] [Python-checkins] r43022 - in python/trunk: Modules/xxmodule.c Objects/object.c

2006-03-14 Thread Tim Peters
[M.-A. Lemburg]
>> Why do you add these things to the xx module and not the
>> _testcapi module where these things should live ?

[Neal Norwitz]
> Because I'm an idiot?

Ah, so _that's_ why you were made the release coordinator ;-)

> Thanks for pointing it out, I moved the code.

Or maybe that was why.  People who clean up after themselves are
certainly welcome to coordinate me!


Re: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1)

2006-03-14 Thread Tim Peters
[Thomas Wouters]
> I did the same narrowing-down last week, and submitted a patch to add
> cycle-GC support to itertools.tee . It really needs it.

I agree.

> Come to think of it, now that I remember how to properly do GC, I think
> the patch cuts some corners, but it solved the problem.

You mean because it didn't supply tp_clear?  That's a funny one.  Some
people take pride in not supplying tp_clear unless it's absolutely
necessary.  For example, despite that tuples can be in cycles, the
tuple type doesn't supply a tp_clear.  This is "because" it's possible
to prove that any cycle involving tuples must involve a non-tuple
gc'ed type too, and that clearing the latter is always sufficient to
break the cycles (which is all tp_clear _needs_ to do:  we just need
that the aggregate of all tp_clear slots that are implemented suffice
to break all possible cycles).

I never saw a point to that cleverness, though.  It makes gc more
obscure, and I'm not sure what it buys.  Maybe the (typically teensy)
bit of code needed to implement a tp_clear slot?  That's all the
(dubious) benefit I can think of.
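
Tim's tuple observation can be seen from pure Python: a tuple is immutable, so any cycle through it must be closed via some mutable (non-tuple) container, and clearing that container's references is enough to break the cycle. A small illustration (not code from the patch in question):

```python
import gc

# A cycle through a tuple must be closed via a mutable container,
# because a tuple cannot be made to reference itself after creation.
lst = []
tup = (lst,)           # tuple -> list
lst.append(tup)        # list -> tuple: the cycle is closed through the list
del lst, tup           # only the cycle keeps both objects alive now
collected = gc.collect()
assert collected >= 2  # the collector reclaims both the list and the tuple
```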

> Raymond is on it, anyway:
>
>  http://python.org/sf/1444398

You found it, you fix it :-)


Re: [Python-Dev] [Python-checkins] Python Regression Test Failuresrefleak (1)

2006-03-14 Thread Raymond Hettinger
>> Raymond is on it, anyway:
>>
>>  http://python.org/sf/1444398
> 
> You found it, you fix it :-)

I've got this one.


Raymond


Re: [Python-Dev] About "Coverity Study Ranks LAMP Code Quality"

2006-03-14 Thread Tim Peters
[Neal Norwitz]
> ...
> The public report says 15, but the current developer report shows 12.
> I'm not sure why there is a discrepancy.  All 12 are in ctypes which
> was recently imported.

I'm having a really hard time making sense of the UI on this.  When I
looked at the Python project just now (I can log in, so guess that's
what you called the "developer report" above), I see 13 "error" rows,
and none of them referencing ctypes.  OTOH, maybe you'd count this as
zero rows, since there are none left with BUG or UNINSPECTED status.

I dug into one of them, a claim by the tool that after marshal.c's:

int one = 1;
int is_little_endian = (int)*(char*)&one;

we have:

Event const: After this line, the value of "is_little_endian" is equal to 1

but of course that's not so on a big-endian box, and it goes on to
claim that there's dead code because of this.
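
The same byte-level check can be reproduced from Python, confirming that the "always equal to 1" claim only holds on little-endian hardware (an illustration, not code from marshal.c):

```python
import struct
import sys

# Pack 1 as a native-order int and inspect its lowest-addressed byte,
# mirroring the (int)*(char*)&one idiom quoted above. On big-endian
# machines this byte is 0, so the code Coverity flagged as dead is live.
lowest_byte = struct.pack("i", 1)[0]
is_little_endian = (lowest_byte == 1)
assert is_little_endian == (sys.byteorder == "little")
```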

I'm not much inclined to look at more of these -- I probably waited so
long that all we have left are false positives?  If not, and somebody
wants me to look at one, point it out specifically ;-)


Re: [Python-Dev] About "Coverity Study Ranks LAMP Code Quality"

2006-03-14 Thread Neal Norwitz
On 3/14/06, Tim Peters <[EMAIL PROTECTED]> wrote:
> [Neal Norwitz]
> > ...
> > The public report says 15, but the current developer report shows 12.
> > I'm not sure why there is a discrepancy.  All 12 are in ctypes which
> > was recently imported.
>
> I'm having a really hard time making sense of the UI on this.  When I

The UI is, um, a little less than intuitive.

> looked at the Python project just now (I can log in, so guess that's
> what you called the "developer report" above), I see 13 "error" rows,

Yes, the reports developers can see when they log in.

> and none of them referencing ctypes.  OTOH, maybe you'd count this as
> zero rows, since there are none left with BUG or UNINSPECTED status.

After you log in, click View Runs, then click the link in the Results
column (currently 50 for Run 19, the top row).  Now you should be
looking at all the results.  For me the top 10 rows or so are
UNCONFIRMED, all for ctypes.  But to make the categories clearer, use
the Group By option menu at the top and select Status; the page then
shows a table where each category is grouped more clearly.

Click on the View links to see the actual code with the warnings
annotated inline.

> I'm not much inclined to look at more of these -- I probably waited so
> long that all we have left are false positives?  If not, and somebody
> wants me to look at one, point it out specifically ;-)

Yes, most of the problems have been resolved.  The one you pointed out
is bogus.  There's another dead code one, but it's in generated code
(an extra if (! value) return NULL;) so who cares.

Since there's no problem in any of your code AFAIK, I'll let you off
the hook. :-)

There really weren't that many reports and I believe most have been
reviewed by more than one person.

n