[issue10517] test_concurrent_futures crashes with "--with-pydebug" on RHEL5 with "Fatal Python error: Invalid thread state for this thread"

2011-10-07 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Did anyone test this fix for the case of fork() being called from a Python sub 
interpreter?

I am getting a report of fork() failing in sub interpreters under mod_wsgi that 
may be caused by this change. Still investigating.

Specifically, it throws up the error:

  Couldn't create autoTLSkey mapping

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue10517>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.

2011-10-11 Thread Graham Dumpleton

New submission from Graham Dumpleton :

This is a followup bug report to fix the wrong implementation of 
_PyGILState_Reinit() introduced by http://bugs.python.org/issue10517.

I don't have a standalone test case yet. The problem occurs under mod_wsgi with 
Python 2.7.2, and thus similarly with 3.2, where _PyGILState_Reinit() was also 
added.

The Python code part which triggers the problem is:

pid = os.fork()
if pid:
    sys.stderr.write('Fork succeeded (PID=%s)\n' % pid)
else:
    sys.stderr.write('Fork succeeded (child PID=%s)\n' % os.getpid())
    time.sleep(60.0)
    os._exit(0)

To trigger the problem, this code must be executed from a thread originally 
created outside of Python which then calls into a sub interpreter.

Such a thread would have created its own thread state object for the sub 
interpreter call, since auto thread states don't work for sub interpreters. 
Further, it would never have called into the main interpreter, so an auto 
thread state simply doesn't exist for the main interpreter.

When a fork() occurs in this thread and _PyGILState_Reinit() is called, the 
call to PyGILState_GetThisThreadState() returns NULL because the auto thread 
state for the main interpreter was never initialised for this thread. When it 
then calls PyThread_set_key_value() with a value of NULL, find_key() internally 
treats the call as a get rather than a set, so PyThread_set_key_value() returns 
-1 and the fatal error results.
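The failure mode described above can be sketched as a toy model (a hypothetical Python re-implementation for illustration only, not the actual CPython source):

```python
def set_key_value(keymap, key, value):
    """Toy model of the 2.x PyThread_set_key_value() behaviour described above.

    A NULL value (modelled here as None) makes the internal lookup behave
    like a get, so nothing is stored and -1 is returned for a missing key.
    """
    if value is None:
        # Treated as a lookup: fails with -1 when the key has no entry,
        # which is the path that triggers the fatal error after fork().
        return 0 if key in keymap else -1
    keymap[key] = value
    return 0

keymap = {}
assert set_key_value(keymap, "autoTLSkey", None) == -1   # the fatal-error path
assert set_key_value(keymap, "autoTLSkey", "tstate") == 0  # normal re-association
```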

So _PyGILState_Reinit() is broken because it assumes that an auto thread state 
will always exist for the thread for it to reinit, which will not always be the 
case.

The simple fix may be that if PyGILState_GetThisThreadState() returns NULL, 
then don't do any reinit. Making that change does seem to fix the problem. The 
code that works then is:

void
_PyGILState_Reinit(void)
{
    PyThreadState *tstate = PyGILState_GetThisThreadState();

    if (tstate) {
        PyThread_delete_key(autoTLSkey);
        if ((autoTLSkey = PyThread_create_key()) == -1)
            Py_FatalError("Could not allocate TLS entry");

        /* re-associate the current thread state with the new key */
        if (PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
            Py_FatalError("Couldn't create autoTLSkey mapping");
    }
}

Diff file also attached.

--
components: Extension Modules
files: pystate.c.diff
keywords: patch
messages: 145383
nosy: grahamd, neologix
priority: normal
severity: normal
status: open
title: _PyGILState_Reinit assumes auto thread state will always exist which is 
not true.
type: crash
versions: Python 2.7, Python 3.2
Added file: http://bugs.python.org/file23385/pystate.c.diff

___
Python tracker 
<http://bugs.python.org/issue13156>



[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.

2011-10-11 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Whoops. Missed the error. The fatal error that occurs is:

Fatal Python error: Couldn't create autoTLSkey mapping

--

___
Python tracker 
<http://bugs.python.org/issue13156>



[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.

2011-10-12 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

The PyGILState_Ensure() function is only for when working with the main 
interpreter. These external threads are not calling into the main interpreter.

Because they are external threads, calling PyGILState_Ensure() and then 
PyGILState_Release() will cause a thread state to be created for the main 
interpreter, but it will also be destroyed on the PyGILState_Release().

The only way to avoid that situation, and so ensure that the thread state for 
the main interpreter is maintained, would be to call PyGILState_Ensure() and 
then call PyThreadState_Swap() to change to the thread state for the sub 
interpreter. The problem is that you aren't supposed to use 
PyThreadState_Swap() any more, and I recollect that newer Python 3.X even 
prohibits it in some way through some checks.

So, the documentation you quote only applies to the main interpreter and is 
not how things work for sub interpreters.

--

___
Python tracker 
<http://bugs.python.org/issue13156>



[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.

2011-10-12 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

True. It doesn't appear to be an issue with Python 3.2.2, only Python 2.7.2.

I was not aware that the TLS mechanism was changed in Python 3.X, so I assumed 
it would also be affected.

So, it looks like the change shouldn't have been applied to Python 2.7.

How many moons before Python 2.7.3 though?

--
versions:  -Python 3.2

___
Python tracker 
<http://bugs.python.org/issue13156>



[issue13703] Hash collision security issue

2012-01-12 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Right back at the start it was said:

"""
We haven't agreed whether the randomization should be enabled by default or 
disabled by default. IMHO it should be disabled for all releases except for the 
upcoming 3.3 release. The env var PYTHONRANDOMHASH=1 would enable the 
randomization. It's simple to set the env var in e.g. Apache for mod_python and 
mod_wsgi.
"""

with an environment variable PYTHONHASHSEED still being mentioned towards the 
end.
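For background, the effect of a fixed PYTHONHASHSEED can be demonstrated from Python itself (a minimal sketch; it assumes a Python 3 interpreter where hash randomisation exists):

```python
import os
import subprocess
import sys

def child_hash(seed):
    # Run a child interpreter with a fixed PYTHONHASHSEED and report hash('x').
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.run([sys.executable, "-c", "print(hash('x'))"],
                         capture_output=True, text=True, env=env)
    return out.stdout.strip()

# With a fixed seed, string hashes are reproducible across processes,
# which is exactly what randomisation is meant to prevent by default.
assert child_hash("1") == child_hash("1")
```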

Be aware that having a user set an environment variable which is consulted at 
Python interpreter initialisation is, when using mod_python or mod_wsgi, not as 
trivial as the leading comment makes out.

Setting the environment variable would have to be done in the Apache init.d 
scripts, or, if the Apache distro still follows Apache Software Foundation 
conventions, in the 'envvars' file.

Having to do this requires root access and is inconvenient, especially since 
where it needs to be done differs between distros.

Where there are other environment variables that are useful to set for 
interpreter initialisation, mod_wsgi has been changed in the past to add 
specific directives to the Apache configuration file for setting them prior to 
interpreter initialisation. This at least makes it somewhat easier, but it is 
still only of help where you are the admin of the server.

If that approach is necessary here, then although mod_wsgi could eventually add 
such a directive, mod_python is dead, so it will never happen for it.

As to the other question posed, about whether mod_wsgi itself is doing anything 
to combat this, the answer is no, as I don't believe there is anything it can 
do. Values like the query string or POST data are simply passed through as is 
and are always pulled apart by the application.
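To illustrate that last point: the server hands over the raw query string untouched, and the application itself splits it into the dictionary whose hashing behaviour is at issue (a minimal sketch using the standard library):

```python
from urllib.parse import parse_qs

# The WSGI server passes QUERY_STRING through verbatim; the application
# builds the dict, so any hash-collision cost is paid application-side.
raw = "a=1&b=2&b=3"
form = parse_qs(raw)
assert form == {"a": ["1"], "b": ["2", "3"]}
```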

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue13703>



[issue6531] atexit_callfuncs() crashing within Py_Finalize() when using multiple interpreters.

2011-06-26 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Segmentation fault. The original description explains that the problem is the 
dereferencing of a NULL pointer, which has a tendency to invoke such behaviour.

--

___
Python tracker 
<http://bugs.python.org/issue6531>



[issue11803] Memory leak in sub-interpreters

2011-04-11 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

I wouldn't use mod_python as any guide for how to use sub interpreters, as its 
usage of sub interpreters, and of threading in conjunction with them, is 
technically broken, not properly following the Python C API requirements. It 
doesn't even shut down the Python interpreters properly, resulting in memory 
leaks into the Apache parent process on Apache restarts, which are then 
inherited by all forked Apache child processes.

Also, mod_python does not destroy sub interpreters within the life of the 
process and then create replacements. It is a bit of a misconception some have 
that mod_python creates a new sub interpreter for each request; it doesn't. 
Instead, once a sub interpreter is created it persists for the life of the 
process. Thus it doesn't even trigger the scenario you talk about.

In early versions of mod_wsgi the recycling of sub interpreters within the 
lifetime of the process was tried, but it was found not to be practical and the 
feature was removed. The big stumbling block was third party C extensions. 
Various C extensions do not cope well with being initialised within the context 
of one sub interpreter, having that sub interpreter destroyed, and then being 
used in the context of another sub interpreter. This usage pattern caused 
memory leaks in some cases, and in the worst case the process would crash.

In short, using sub interpreters for short periods of time and then destroying 
them is all but unusable, except within very constrained situations where no 
use is made of complex C extensions.

For related information see:

http://blog.dscpl.com.au/2009/03/python-interpreter-is-not-created-for.html
http://blog.dscpl.com.au/2009/11/save-on-memory-with-modwsgi-30.html

--

___
Python tracker 
<http://bugs.python.org/issue11803>



[issue10914] Python sub-interpreter test

2011-04-25 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Hmmm, I wonder if that is related to the workaround I have added in mod_wsgi 
recently of:

/*
 * Force loading of codecs into interpreter. This has to be
 * done as not otherwise done in sub interpreters and if not
 * done, code running in sub interpreters can fail on some
 * platforms if a unicode string is added in sys.path and an
 * import then done.
 */

item = PyCodec_Encoder("ascii");
Py_XDECREF(item);

This fixes a problem some have seen where one gets:

  LookupError: no codec search functions registered: can't find encoding

I haven't been able to reproduce the problem myself, so no bug report was ever 
lodged.
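A Python-level equivalent of the workaround would be the following (a sketch, on the assumption that the relevant effect of the C-level PyCodec_Encoder("ascii") call is simply to touch the codec registry):

```python
import codecs

# Touching the codec registry forces the codec search functions to be
# registered, which is the effect the C-level workaround relies on.
info = codecs.lookup("ascii")
assert info.name == "ascii"
```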

I have been told it affects some other embedded systems which use sub 
interpreters as well, but those systems don't employ the workaround that I am 
now using.

--

___
Python tracker 
<http://bugs.python.org/issue10914>



[issue4953] cgi module cannot handle POST with multipart/form-data in 3.x

2011-01-13 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

FWIW, keep in mind that cgi.FieldStorage is also quite often used in WSGI 
scripts in arbitrary WSGI servers which have got nothing to do with CGI. Having 
cgi.FieldStorage muck around with stdout/stderr under WSGI, even where a 
CGI/WSGI bridge is being used, would potentially be a bad thing to do, 
especially in embedded systems like mod_wsgi where sys.stdout and sys.stderr 
are replaced with file-like objects that map onto Apache error logging. Even in 
non-embedded systems, you could very well screw up any application logging done 
via stdout/stderr and break the application.

So, the default or common code paths should never play with sys.stdout or 
sys.stderr. It is already a PITA that the implementation falls back to using 
sys.argv when QUERY_STRING isn't defined, which can also produce strange 
results under a WSGI server. In other words, please don't go adding any more 
code which makes the wrong assumption that this is only used in CGI scripts.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue4953>



[issue10915] Make the PyGILState API compatible with multiple interpreters

2011-01-15 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Can you please provide an example of what a user would do and what changes 
existing extension modules would need to make?

When I looked at this exact problem some time back, I worked out that you 
probably only need a single new public API function. This would be something 
like PyInterpreterState_Swap().

By default, everything would work against the main interpreter, but if a 
specific thread wanted to operate in the context of a different sub 
interpreter, it would call PyInterpreterState_Swap() to indicate that. That 
would store the interpreter in TLS, outside of any existing data structures. 
Functions like the existing PyGILState_Ensure()/PyGILState_Release() would then 
look up that TLS variable to know which interpreter they are working with.
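The proposed mechanism could be modelled in Python with thread-local storage (a hypothetical sketch; PyInterpreterState_Swap is the name being suggested here, not an existing CPython API, and the interpreter names are stand-ins):

```python
import threading

_tls = threading.local()

def interpreter_state_swap(interp):
    """Model of the proposed PyInterpreterState_Swap(): record in TLS which
    interpreter the current thread operates against, returning the previous
    binding."""
    previous = getattr(_tls, "interp", "main")
    _tls.interp = interp
    return previous

def gilstate_ensure():
    # Unmodified callers need no argument: the interpreter comes from TLS,
    # defaulting to the main interpreter as today.
    return getattr(_tls, "interp", "main")

assert gilstate_ensure() == "main"   # default: main interpreter
interpreter_state_swap("sub1")
assert gilstate_ensure() == "sub1"   # callbacks now target the sub interpreter
```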

Doing it this way meant that no C extension modules using PyGILState_??? 
functions would need to change at all, as the interpreter being operated on is 
dictated by whoever created the thread and initiated the call into the Python 
interpreter.

You probably want validation checks to say that PyInterpreterState_Swap() can 
only be called when the GIL is not held.

It worries me that you are talking about new PyGILState_??? functions, as that 
suggests to me that extension modules would need to change to be aware of this 
stuff. That you are saying that sqlite needs changes is what makes me think the 
way you are going is a problem. It isn't practical to make SWIG change to use 
something other than PyGILState_Ensure()/PyGILState_Release(); it should be 
transparent and require no changes to existing C extensions.

--

___
Python tracker 
<http://bugs.python.org/issue10915>



[issue10915] Make the PyGILState API compatible with multiple interpreters

2011-01-15 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

The bulk of use cases are going to be simple callbacks via the same thread that 
called out of Python in the first place. Thus ultimately all it is doing is:

Py_BEGIN_ALLOW_THREADS

    /* Call into some foreign C library. */
    /* The C library wants to do a callback into Python. */

    PyGILState_STATE gstate;
    gstate = PyGILState_Ensure();

    /* Perform Python actions here. */
    result = CallSomeFunction();
    /* Evaluate result or handle exception. */

    /* Release the thread. No Python API allowed beyond this point. */
    PyGILState_Release(gstate);

    /* More stuff in the C library. */
    /* Return back into the C extension wrapper. */

Py_END_ALLOW_THREADS

This is what SWIG effectively does in its generated wrappers for callbacks.

Using a TLS solution, all the modules which simply do this will now start 
working, whereas they currently usually deadlock or have other problems.

In your solution, all these modules would need to be modified to somehow 
transfer information about the current interpreter into the callback which is 
called by the foreign C library, and to use the new PyGILState_??? functions 
rather than the old.

I do accept that more complicated extension modules which create their own 
foreign threads and perform the callback into the interpreter from that thread, 
or systems like mod_wsgi which have a persistent thread pool from which calls 
originate, will have to be modified, but this is the lesser use case from what 
I have seen.

Overall, it is an easy win if TLS is used because a lot of code wouldn't need 
to change. Some will, but I expect that a lot of the common stuff, like lxml 
for example, wouldn't.

--

___
Python tracker 
<http://bugs.python.org/issue10915>



[issue10915] Make the PyGILState API compatible with multiple interpreters

2011-01-15 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

As to the comment:

"""IMO we should really promote clean APIs which allow solving the whole
problem, rather than devise an internal hack to try to "improve" things
slightly."""

The reality is that if you force a change on every single extension module 
doing callbacks into the interpreter without first holding the GIL, you will 
never see people update their code, as they will likely not care about this 
special use case. And so the whole point of adding the additional APIs will be 
wasted effort and will have achieved nothing.

The TLS solution means many modules will work without their authors having to 
do anything.

You therefore have to balance what you perceive as a cleaner API against what 
is actually going to see a benefit without having to wait half a dozen years 
before people realise they should change their ways.

BTW, TLS is currently used for the current thread state in the simplified GIL 
API, so why isn't that use of TLS a hack whereas doing the same for the 
interpreter is?

--

___
Python tracker 
<http://bugs.python.org/issue10915>



[issue10915] Make the PyGILState API compatible with multiple interpreters

2011-01-16 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Nick, I think you are making the wrong assumption that an external thread will 
only ever call into the same interpreter. This is not the case. In mod_wsgi and 
mod_python there is a pool of external threads which, for distinct HTTP 
requests delegated to a specific thread, can make calls into different 
interpreters. This is all fine so long as you ensure that each thread uses a 
distinct thread state for each interpreter. In other words, you can't use the 
same thread state instance across multiple interpreters, as it is bound to a 
specific interpreter.
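The invariant described here can be sketched as a toy mapping (purely illustrative; the interpreter names and the helper function are hypothetical):

```python
import threading

# One thread state per (thread, interpreter) pair; reusing a state across
# interpreters is exactly the bug described above.
_states = {}

def thread_state_for(interp):
    key = (threading.get_ident(), interp)
    if key not in _states:
        _states[key] = object()  # stands in for a PyThreadState
    return _states[key]

s_main = thread_state_for("main")
s_sub = thread_state_for("sub1")
assert s_main is not s_sub                 # distinct state per interpreter
assert thread_state_for("main") is s_main  # but stable for each pair
```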

This is because autoInterpreterState is always going to be set to the main 
interpreter. This means that when a thread calls into a new sub interpreter it 
will either inherit, via the current GIL state API, an existing thread state 
bound to the main interpreter, or, if one is created, it will still get bound 
to the main interpreter. As soon as you start using a thread state bound to one 
interpreter against another, problems start occurring.

After thinking about this all some more, I believe now that what is needed is a 
mix of the TLS idea for the current interpreter state that I am suggesting and, 
in part, the extended GIL state functions that Antoine describes.

So, the TLS records which interpreter a thread is currently running against, so 
that the GIL state APIs work for existing unmodified extension modules. At the 
same time, though, you still need a way of switching which interpreter a thread 
is running against. For the latter, various of the thread state related 
functions that already exist could do this automatically. In some cases you 
will still need the extended acquisition function that Antoine suggested.

Consider a few scenarios of usage.

First off, when an external thread calls Py_NewInterpreter(), it creates a new 
thread state object against that new sub interpreter automatically and returns 
it. With this new system, it would also automatically update the TLS for the 
current thread to be that new interpreter as well. That way, when it calls into 
Python, which then calls back out to code which releases the GIL and then calls 
back in through PyGILState_Ensure() with no arguments, it will work. This 
obviously implies that PyGILState_Ensure() makes use of the TLS for the 
interpreter being used and isn't hard wired to the main interpreter as it is 
now.

Second, consider some of the other API functions, such as PyThreadState_Swap(). 
When passing it a non-NULL pointer, you are giving it a thread state object 
which is already bound to an interpreter. It can thus also update the TLS for 
the interpreter automatically. If you pass it NULL, then it clears the TLS, 
with all functions that later rely on that TLS asserting that it is not NULL 
when used. Another similar case where the TLS can be auto-updated is the 
functions which clear/delete an interpreter state and leave the GIL unlocked at 
the end. These would also clear the TLS.

So, it is possible that no new API functions are needed to manage the TLS for 
which interpreter is associated with the current thread, as I thought, since 
existing API functions can do that management themselves transparently.

The third and final scenario, and the one where the extended GIL state Ensure 
function is still required, is where code doesn't have the GIL yet and wants to 
make a call into a sub interpreter rather than the main interpreter, where it 
already has a pointer to the sub interpreter and nothing more. In this case the 
new PyGILState_EnsureEx() function is used, with the sub interpreter being 
passed as the argument.

The beauty of having existing API functions such as PyThreadState_Swap() manage 
the TLS for the interpreter is that the only code that needs to change is the 
embedding systems which are creating and using multiple interpreters in the 
first place. In other words, mod_wsgi would need to change, simply replacing 
the equivalent machinery it already has for doing against sub interpreters what 
the PyGILState_??? functions do now. If I am right, all extension modules that 
don't really care about whether sub interpreters are being used should work 
without modification.

Oh, and I also think you probably don't need PyGILState_ReleaseEx() if 
everything is made TLS aware; just the single PyGILState_EnsureEx() is needed.

--

___
Python tracker 
<http://bugs.python.org/issue10915>



[issue10906] wsgiref should mention that CGI scripts usually expect HTTPS variable to be set to 'on'

2011-01-20 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

As has been pointed out to you already in other forums, the correct way of 
detecting in a compliant WSGI application that an SSL connection was used is to 
check the value of the wsgi.url_scheme variable. If your code does not do this 
then it is not a compliant WSGI application, and you have no guarantee that it 
will work portably across different WSGI hosting mechanisms. This is because a 
WSGI server/adapter is not obligated to set the HTTPS variable in the WSGI 
environment dictionary.
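The compliant check is a one-liner (a minimal sketch; the helper name is made up for illustration):

```python
def request_is_secure(environ):
    # Per the WSGI spec (PEP 3333), wsgi.url_scheme is the portable signal;
    # the CGI-style HTTPS variable is optional and cannot be relied upon.
    return environ.get("wsgi.url_scheme") == "https"

assert request_is_secure({"wsgi.url_scheme": "https"})
assert not request_is_secure({"wsgi.url_scheme": "http", "HTTPS": "on"})
```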

So, the correct thing to do, which for some reason you don't want to, is to fix 
your code when it is being ported so it adheres to the WSGI specification and 
what it dictates as the way of detecting an SSL connection.

FWIW, the HTTPS variable will no longer be set from mod_wsgi version 4.0, to 
enforce the point that it is not the correct way of detecting an SSL connection 
and that wsgi.url_scheme should be used. The HTTPS variable was only being set 
at all, and with that value, because older versions of Django weren't doing 
what you also refuse to do, which is to check wsgi.url_scheme instead of the 
HTTPS variable. Django did the right thing and fixed their code to be 
compliant. Why you can't, and want to keep arguing this point in three 
different forums, is beyond me. You have spent far more time arguing the point 
than it would take to fix your code to be compliant.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue10906>



[issue1758146] Crash in PyObject_Malloc

2008-07-21 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

I know the discussion more or less says this, but I want to add some 
additional information.

For the record, the reason that mod_python crashes with 'Invalid thread state 
for this thread' when Py_DEBUG is defined in part relates to:

  http://issues.apache.org/jira/browse/MODPYTHON-217

Also, that Py_DEBUG check effectively says that if you use the simplified GIL 
API for a particular thread against the first interpreter, you are prohibited 
from creating additional thread states for that thread. I haven't checked the 
documentation lately, but I am not sure it is really clear on that specific 
point, and so in some respects the documentation may be at fault here. Someone 
might like to point to the exact part of the documentation which states this 
requirement.

The problem thus is that code which worked prior to Python 2.3 would still work 
with Python 2.3 and later, up to the point that some code decided to use the 
simplified GIL API. At that point Python would create its own internal thread 
state for that thread even if user code had already created one. Conversely, 
the same applies if the simplified GIL API was used against the thread first 
and user code then tried to create an additional thread state for that thread 
against the first interpreter.

With Py_DEBUG defined, this scenario causes the assertion failure and the above 
error. Without Py_DEBUG defined, the code can quite happily run fine, at least 
until the point where code which left Python using a user thread state object 
attempts to reenter Python using the simplified GIL API. At that point it would 
deadlock.

Now, as I said, that one was effectively forced to use the simplified GIL API 
for the first interpreter with Python 2.3 probably wasn't at all clear, and so 
mod_python was never updated to meet that requirement. As per the JIRA issue 
referenced above, it is a known problem that the code isn't meeting this 
requirement, but not much development has been done on mod_python for quite a 
while.

I have, though, recently made changes to a personal copy of the mod_python code 
such that it uses the simplified GIL API for all access against the first 
interpreter, and it no longer suffers that assertion failure when Py_DEBUG is 
defined. The code should also work for any modules which use the simplified GIL 
API, such as the SWIG generated bindings for Xapian. You do have to force the 
application using such modules to run under the first interpreter.

The code for mod_wsgi uses the simplified GIL API for the first interpreter as 
well, and works with SWIG generated bindings, but it is possible that it may 
still fail that assertion when Py_DEBUG is defined. This is because, in order 
to allow mod_python and mod_wsgi to be used in Apache at the same time, 
mod_wsgi had to incorporate some hacks to work around the fact that mod_python 
was not using the simplified GIL API for the first interpreter, and also that 
mod_python wasn't releasing the GIL for a critical section between when it was 
initialised and when the Apache child processes were created. It was in this 
section that mod_wsgi had to initialise itself, and so it had to fiddle the 
thread states to be able to do its thing. This workaround may have been enough 
to create an additional thread state of a thread for the first interpreter, 
thus later triggering the assertion.

It would have been nice to have mod_wsgi do the correct thing from the start, 
but that would have barred it being used at the same time as mod_python, and so 
people may have baulked at trying mod_wsgi as a result. Now that mod_wsgi has 
got some traction, in mod_wsgi version 3.0 it will be changed to remove the 
mod_python fiddle. This will mean that mod_wsgi 3.0 will not be usable at the 
same time as current mod_python versions, and would only be usable with the 
mod_python version (maybe 3.4) which I have made modifications to so that it 
also uses the simplified GIL API properly.

So that is the state of play as I see and understand it.

As to Adam's comments about use cases for multiple interpreters, we have had 
that discussion before, and despite the fact that many people rely on that 
feature in both mod_python and mod_wsgi, he still continues to dismiss it 
outright and instead calls for complete removal of the feature.

Also, Adam's comment that multiple interpreters were used in mod_wsgi only to 
support buggy third party software is untrue. Multiple interpreter support 
exists in mod_wsgi because mod_python provided a similar feature, and 
mod_python existed before many of the Python web applications which are claimed 
to be the reason that sub interpreters are used in the first place. So, 
mod_python and the use of sub interpreters came first, and not really the other 
way around. Where Python web applications do rely on os.environ it is 
historically because that is how things were done in CGI. Many, such as Trac, 
may still support that means of configuration as a fallback, but Trac now also 
supports other ways which

[issue1758146] Crash in PyObject_Malloc

2008-07-23 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

Franco, you said 'I found that you cannot create additional thread 
states against the first interpreter and swap between them w/o this 
assertion occurring. ...'

Since the Py_DEBUG check is checking against the simplified GIL state 
API's thread state object, then technically you could have a thread with 
multiple thread states; that thread just can't ever use, or have used, 
the simplified GIL state API.

Take for example a system where the threads are actually foreign threads 
and not created within Python. In this case the simplified GIL state API 
thread state object would never have been created for such a thread. For 
those threads you could have multiple thread states and not trip the test.

In other words, multiple thread states are only blocked if one of them is 
the internal one created by the simplified GIL state API. This is getting 
hard to avoid though.

In summary, the simplified GIL state API is basically viral in nature.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1758146>



[issue1758146] Crash in PyObject_Malloc

2008-07-24 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

I do understand.

The initial thread, which is effectively a foreign thread to Python to 
begin with, when used to initialise Python, ie., call Py_Initialize(), 
is treated in a special way in as much as as a side effect it does that 
initialisation of GIL internal thread state. This is as you say. But, 
this is the only foreign thread this implicitly occurs for and why the 
main thread is a bit special.

If you were to create additional foreign threads outside of Python, ie., 
in addition to main thread which initialised it, those later threads 
should not fail the Py_DEBUG test unless the code they execute 
explicitly calls the simplified API and by doing so implicitly causes 
internal threadstate for that thread to be created.

Hope this makes sense. Sorry, in a bit of a hurry.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1758146>
___



[issue3919] PySys_SetObject crashes after Py_NewInterpreter().

2008-09-20 Thread Graham Dumpleton

New submission from Graham Dumpleton <[EMAIL PROTECTED]>:

Somewhere between Python 3.0a3 and Python 3.0b3, a call to PySys_SetObject() 
after having used Py_NewInterpreter() to create a sub interpreter causes a 
crash. This appears to be due to interp->sysdict being NULL after 
Py_NewInterpreter() called.

As illustration of problem, consider program for Python 2.X.

#include <Python.h>

int
main(int argc, char *argv[])
{
Py_Initialize();
PySys_SetObject("xxx", PyLong_FromLongLong(1));
fprintf(stderr, "sysdict=%d\n", !!PyThreadState_Get()->interp->sysdict);
fflush(stderr);
PyRun_SimpleString("import sys\n"
   "print >> sys.stderr, 'xxx =', sys.xxx\n");

Py_NewInterpreter();
fprintf(stderr, "sysdict=%d\n", !!PyThreadState_Get()->interp->sysdict);
fflush(stderr);
PySys_SetObject("yyy", PyLong_FromLongLong(2));
PyRun_SimpleString("import sys\n"
   "print >> sys.stderr, 'yyy =', sys.yyy\n");

Py_Finalize();
return 0;
}

This when run yields:

sysdict=1
xxx = 1
sysdict=1
yyy = 2

Now, for Python 3.0 variant of same program:

#include <Python.h>

int
main(int argc, char *argv[])
{
Py_Initialize();
fprintf(stderr, "sysdict=%d\n", !!PyThreadState_Get()->interp->sysdict);
fflush(stderr);
PySys_SetObject("xxx", PyLong_FromLongLong(1));
PyRun_SimpleString("import sys\n"
   "print('xxx =',sys.xxx, file=sys.stderr)\n");

Py_NewInterpreter();
fprintf(stderr, "sysdict=%d\n", !!PyThreadState_Get()->interp->sysdict);
fflush(stderr);
PySys_SetObject("yyy", PyLong_FromLongLong(2));
PyRun_SimpleString("import sys\n"
   "print('yyy =',sys.yyy, file=sys.stderr)\n");

Py_Finalize();
return 0;
}

I get for Python 3.0a3:

sysdict=1
xxx = 1
sysdict=1
object  : AttributeError("'module' object has no attribute 'stderr'",)
type: AttributeError
refcount: 4
address : 0xf1180
lost sys.stderr

I am not concerned here about loss of sys.stderr, although that could be a 
separate issue for all I know.

The important bit here is that sysdict is set after Py_NewInterpreter().

In Python 3.0b3/3.0rc1 I instead get:

sysdict=1
xxx = 1
sysdict=0
Bus error

This is because PySys_SetObject() is presumably crashing because sysdict is 
not set in interp object.

I tried to ask about this on python-3000 Google group, but that message ended 
up in some moderation queue and has vanished. Thus quote part of that message 
below.

"""
From what I can tell so far the problem is that 'interp->sysdict' is
NULL after calling Py_NewInterpreter() to create a secondary sub
interpreter.

Reading through code and using a debugger, at this point this seems to
be due to condition if code:

   sysmod = _PyImport_FindExtension("sys", "sys");
   if (bimod != NULL && sysmod != NULL) {
   interp->sysdict = PyModule_GetDict(sysmod);
   if (interp->sysdict == NULL)
   goto handle_error;
   Py_INCREF(interp->sysdict);
   PySys_SetPath(Py_GetPath());
   PyDict_SetItemString(interp->sysdict, "modules",
interp->modules);
   _PyImportHooks_Init();
   initmain();
   if (!Py_NoSiteFlag)
   initsite();
   }

in Py_NewInterpreter() not executing due to
_PyImport_FindExtension("sys", "sys") returning NULL.

Down in _PyImport_FindExtension(), it appears that the reason it fails
is because of following returning with NULL.

   def = (PyModuleDef*)PyDict_GetItemString(extensions, filename);

   .

   if (def->m_base.m_init == NULL)
   return NULL;

In other words, whatever m_base.m_init is meant to be is NULL when
perhaps it isn't meant to be.

(gdb) call ((PyModuleDef*)PyDict_GetItemString(extensions,"builtins"))->m_base.m_init
$9 = (PyObject *(*)()) 0
(gdb) call ((PyModuleDef*)PyDict_GetItemString(extensions,"sys"))->m_base.m_init
$10 = (PyObject *(*)()) 0

I am going to keep tracking through to try and work out why, but
posting this initial information in case this rings a bell with
anyone.
"""

Is this expected behaviour? Or, is it necessary now to perform some special 
initialisation after having called Py_NewInterpreter() to get builtins and 
sys modules setup?

This problem originally came up with mod_wsgi, which worked fine with Python 
3.0a3, but fails on more recent releases because of

[issue3919] PySys_SetObject crashes after Py_NewInterpreter().

2008-09-20 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

Sorry, should also mention that this was on MacOS X 10.4 (Tiger).

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3919>
___



[issue3723] Py_NewInterpreter does not work

2008-09-30 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

Adding the functions as initfunc in module init table is of no use as 
they aren't invoked when creating a sub interpreter.

One thing that does appear to work, although I have no idea whether it is
the correct way to solve the problem, is to duplicate the builtin/sys
initialisation that occurs in the Py_InitializeEx() function.

The attached diff shows the nature of the changes. The diff is a bit messy
as I have left the existing code in there but #ifdef'd out.

Maybe this will give someone who knows how overall interpreter
initialisation is supposed to work a head start on coming up with a proper
fix. But then it could be totally wrong as well.

At least with the change as is, mod_wsgi works for sub interpreters now.
I'll do more work later on whether it is the correct way to solve it.

--
nosy: +grahamd
Added file: http://bugs.python.org/file11660/pythonrun.c.diff

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3723>
___



[issue3723] Py_NewInterpreter does not work

2008-09-30 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

Argh. Personally I like to provide context diffs but more often than not
get abused for providing them over a unified diff. Was in a hurry this
time as I had only a couple of minutes of battery life left on the laptop,
so quickly did it without thinking and then ran off to find a power point.
:-)

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3723>
___



[issue3723] Py_NewInterpreter does not work

2008-09-30 Thread Graham Dumpleton

Changes by Graham Dumpleton <[EMAIL PROTECTED]>:


Removed file: http://bugs.python.org/file11660/pythonrun.c.diff

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3723>
___



[issue3723] Py_NewInterpreter does not work

2008-09-30 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

Unified diff now attached.

Added file: http://bugs.python.org/file11661/pythonrun.c.diff

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3723>
___



[issue8098] PyImport_ImportModuleNoBlock() may solve problems but causes others.

2010-03-09 Thread Graham Dumpleton

New submission from Graham Dumpleton :

Back in time, the function PyImport_ImportModuleNoBlock() was introduced and 
used in modules such as the time module to supposedly avoid deadlocks when 
using threads. It may well have solved that problem, but only served to cause 
other problems.

To illustrate the problem consider the test code:


import imp
import thread
import time

def run1():
   print 'acquire'
   imp.acquire_lock()
   time.sleep(5)
   imp.release_lock()
   print 'release'

thread.start_new_thread(run1, ())

time.sleep(2)

print 'strptime'
time.strptime("", "")
print 'exit'


The output of running this is


grumpy:~ grahamd$ python noblock.py 
acquire
strptime
Traceback (most recent call last):
  File "noblock.py", line 17, in <module>
    time.strptime("", "")
ImportError: Failed to import _strptime because the import lock is held by
another thread.


It is a bit silly that code executing in one thread could fail because, at
the time that it tries to call time.strptime(), a different thread holds the
global import lock.

This problem may not arise in applications which preload all modules, but it 
will where importing of modules is deferred until later within execution of a 
thread and where there may be concurrent threads running doing work that 
requires modules imported by that new C function.

Based on old discussion at:

http://groups.google.com/group/comp.lang.python/browse_frm/thread/dad73ac47b81a744

my expectation is that this issue will be rejected as not being a problem with 
any remedy being pushed to the application developer. Personally I don't agree 
with that and believe that the real solution is to come up with an alternate 
fix for the original deadlock that doesn't introduce this new detrimental 
behaviour. This may entail structural changes to modules such as the time 
module to avoid issue.
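As an illustration of the only real mitigation available to applications (my suggestion here, not something the time module does for you): eagerly import the module that time.strptime() would otherwise import lazily, before any threads are started, so the import lock is never needed at call time.

```python
# Hypothetical workaround sketch: preload _strptime in the main thread at
# startup, before any other threads exist, so a later time.strptime() call
# made from a thread never has to take the import lock.
import _strptime  # noqa: F401  (the module time.strptime() imports lazily)
import time

parsed = time.strptime("2010-03-09", "%Y-%m-%d")
print(parsed.tm_year)  # 2010
```

This pushes the burden onto the application developer, which is exactly the objection raised above, but it does avoid the ImportError in practice.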

Unfortunately since the PyImport_ImportModuleNoBlock() function has been 
introduced, it is starting to be sprinkled like fairy dust across modules in 
the standard library and in third party modules. This is only going to set up 
future problems in multithreaded applications, especially where third party 
module developers don't appreciate what problems they are potentially 
introducing by using this function.

Anyway, not optimistic from what I have seen that this will be changed, so view 
this as a protest against this behaviour. :-)

FWIW, issue in mod_wsgi issue tracker about this is:

http://code.google.com/p/modwsgi/issues/detail?id=177

I have known about this issue since early last year though.

--
components: Interpreter Core
messages: 100713
nosy: grahamd
severity: normal
status: open
title: PyImport_ImportModuleNoBlock() may solve problems but causes others.
type: behavior
versions: Python 2.6, Python 2.7, Python 3.1

___
Python tracker 
<http://bugs.python.org/issue8098>
___



[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2008-10-24 Thread Graham Dumpleton

New submission from Graham Dumpleton <[EMAIL PROTECTED]>:

In Python 3.0 the atexit module was translated from Python code to C code.

Prior to Python 3.0, because it was implemented at Python code level, the list of 
callbacks was specific to each sub interpreter. In Python 3.0 that appears to 
no longer be the case, with the list of callbacks being maintained globally as 
static C variable across all interpreters.

The end result is that if a sub interpreter registers an atexit callback, that 
callback will be called within the context of the main interpreter on call of 
Py_Finalize(), and not in context of the sub interpreter against which it was 
registered.

Various problems could ensue from this depending on whether or not the sub 
interpreter had also since been destroyed.

Still need to validate the above, but from visual inspection looks to be a 
problem. Issue found because mod_wsgi will trigger atexit callbacks for a sub 
interpreter when the sub interpreter is being destroyed. This is because Python 
prior to 3.0 only did it from the main interpreter. In Python 3.0, this all 
seems to get messed up now as no isolation between callbacks for each 
interpreter.

For mod_wsgi case, since it is explicitly triggering atexit callbacks for sub 
interpreter, in doing so it is actually calling all registered atexit callbacks 
across all interpreters for each sub interpreter being destroyed. They then 
again get called for Py_Finalize().

Even if mod_wsgi weren't doing this, still a problem that Py_Finalize() calling 
atexit callbacks in context of main interpreter which were actually associated 
with sub interpreter.

For mod_wsgi, will probably end up installing own 'atexit' module variant in 
sub interpreters to ensure separation.
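A minimal sketch of that replacement-module idea (my own illustration of the approach described, not mod_wsgi's actual code; the embedding host would install this into each sub interpreter's sys.modules and call _run_exitfuncs() itself during teardown):

```python
import types

def make_atexit_stub():
    """Build a stand-in 'atexit' module whose callback list is local to it,
    so callbacks registered in one interpreter never leak into another."""
    mod = types.ModuleType("atexit")
    callbacks = []

    def register(func, *args, **kwargs):
        callbacks.append((func, args, kwargs))
        return func

    def _run_exitfuncs():
        # Run in reverse order of registration, like the real atexit.
        while callbacks:
            func, args, kwargs = callbacks.pop()
            func(*args, **kwargs)

    mod.register = register
    mod._run_exitfuncs = _run_exitfuncs
    return mod

stub = make_atexit_stub()
stub.register(print, "sub interpreter torn down")
stub._run_exitfuncs()  # the host calls this when destroying the interpreter
```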

--
components: Interpreter Core
messages: 75195
nosy: grahamd
severity: normal
status: open
title: atexit module not safe in Python 3.0 with multiple interpreters
type: behavior
versions: Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4200>
___



[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2008-10-24 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

I wouldn't be concerned about mod_python as the likelihood that it will be
ported is very low, and even if it was it would be a long long way in the
future. There is too much in mod_python itself unrelated to Python 3.0 that
needs to be fixed before a port to Python 3.0 should be considered.

As to mod_wsgi, I can work around it in the short term by installing an
'atexit' module replacement in sub interpreters and avoid the problem.

As to a solution, yes, using PyInterpreterState would seem the most logical
place, however there is a lot more to it than that.

Prior to Python 3.0, any callbacks registered with the atexit module in sub
interpreters weren't called anyway. This is because Py_EndInterpreter() didn't
trigger them, nor did Py_Finalize(). The latter is in Python 3.0 at the moment,
but as pointed out that is a problem in itself.

So, although one may register sub interpreter atexit callbacks against
PyInterpreterState, what would be done with them? A decision would need to be
made as to whether Py_EndInterpreter() should trigger them, or whether the
status quo be maintained and nothing done with them.

In the short term, ie., for Python 3.0.0, the simplest thing to do may be to
have the functions of the atexit module silently not actually do anything for
sub interpreters.

The only place this would probably cause a problem would be for mod_wsgi,
where it was itself calling sys.exitfunc() on sub interpreters to ensure they
were run. Since mod_wsgi has to change for Python 3.0 anyway, to call
atexit._run_exitfuncs, with a bit more work mod_wsgi can just replace the
atexit module altogether in sub interpreter context and have mod_wsgi track
the callback functions and trigger them.

By having the atexit module ignore stuff for sub interpreters, we at least
for now avoid the problem of callbacks against sub interpreters being
executed by Py_Finalize() in main interpreter context.

And no, I haven't looked at how PEP 3121 has changed things in Python 3.0.
Up till now I hadn't seen any problems to suggest I may need to look at it.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4200>
___



[issue4202] Multiple interpreters and readline module hook functions.

2008-10-24 Thread Graham Dumpleton

New submission from Graham Dumpleton <[EMAIL PROTECTED]>:

Because the readline module uses PyGILState_Ensure() to facilitate triggering 
callbacks into Python code, this would make the ability to use the hook 
functions incompatible with use in sub interpreters.

If this is the case, then that readline module cannot be used in sub 
interpreters should be documented if not already.

Better still, attempts to register hooks from sub interpreters should result in
an exception. Further, when used in a sub interpreter, callback hooks should
also not be called if defined, because if defined they would be the hooks from
the main interpreter, since the variables holding the hooks are static C
variables and shared across all interpreters.

This issue derived from reading of code only and not tested in real program.

--
components: Interpreter Core
messages: 75201
nosy: grahamd
severity: normal
status: open
title: Multiple interpreters and readline module hook functions.
versions: Python 2.5, Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4202>
___



[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2008-10-28 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

By visual inspection the intent looks correct, but can't actually test it 
until I can checkout Python code from source repository and apply patch as 
patch doesn't apply cleanly to 3.0rc1.

With #3723 and #4213 now also having patches, will need to sit down and 
look at all of them and see if find any new issues. May take me a couple 
of days to get time.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4200>
___



[issue3723] Py_NewInterpreter does not work

2008-10-29 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

In conjunction with #4213, the attached subinterpreter.patch appears to 
fix issue for mod_wsgi.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3723>
___



[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2008-10-29 Thread Graham Dumpleton

Graham Dumpleton <[EMAIL PROTECTED]> added the comment:

In conjunction with #3723 and #4213, the attached atexit_modulestate.patch 
appears to fix issue for mod_wsgi.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4200>
___



[issue4718] wsgiref package totally broken

2008-12-23 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Note that the page:

http://www.wsgi.org/wsgi/Amendments_1.0

contains clarifications for WSGI PEP in respect of Python 3.0. This list 
was previously come up with on WEB-SIG list.

As another reference implementation for Python 3.0, you might look at 
mod_wsgi (source code from subversion trunk), as that has been updated to 
support Python 3.0 in line with those list of proposed clarifications for 
WSGI PEP.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue4718>
___



[issue4718] wsgiref package totally broken

2008-12-26 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

If making changes in wsgiref.validate, it may be worthwhile also fixing up one
area where it isn't strictly correct according to the WSGI PEP.

As per discussion:

http://groups.google.com/group/python-web-sig/browse_frm/thread/b14b862ec4c620c0

the check for the number of arguments supplied to wsgi.input.read() is wrong,
as it allows for an optional argument when the argument is supposed to be
mandatory.

___
Python tracker 
<http://bugs.python.org/issue4718>
___



[issue4718] wsgiref package totally broken

2009-01-01 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

One interesting thing of note that has occurred to me looking at the patch 
is that although with Python <3.0 you technically could return a str as 
iterable from application, ie., because iteration over str returns str for 
each character, the same doesn't really apply to bytes in Python 3.0. This 
is because iterating over bytes yields an int for each item.
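The iteration difference is easy to demonstrate:

```python
# Iterating a str yields one-character strings; iterating bytes yields ints.
print(list("ab"))   # ['a', 'b']
print(list(b"ab"))  # [97, 98]
```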

Thus we have the odd situation where with Python 3.0 one could technically
return a str as iterable, with the rule that would apply being that each str
returned would then be converted to bytes by way of latin-1 conversion, but
bytes returned as an iterable should fail.

Not sure how this plays out in wsgiref server yet as haven't looked. 
Anyway, make the validator code:

@@ -426,6 +436,6 @@
 # Technically a string is legal, which is why it's a really bad
 # idea, because it may cause the response to be returned
 # character-by-character
-assert_(not isinstance(iterator, str),
+assert_(not isinstance(iterator, (str, bytes)),
 "You should not return a string as your application iterator, "
 "instead return a single-item list containing that string.")

quite a good thing to have.

___
Python tracker 
<http://bugs.python.org/issue4718>
___



[issue45319] Possible regression in __annotations__ descr for heap type subclasses

2022-03-09 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

I don't know about the comment "he has far more experience than I do with this 
sort of object proxying wizardry". I read what you said and my brain melted. I 
am getting too old for this and trying to understand how anything works anymore 
takes me ages. :-)

Anyway, will try and digest what you said a half dozen more times when my brain 
is working again and see if I can make sense of it.

--

___
Python tracker 
<https://bugs.python.org/issue45319>
___



[issue45319] Possible regression in __annotations__ descr for heap type subclasses

2022-03-10 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

Let me try and summarise what I do understand about this.

The existing wrapt code as written and its tests, work fine for Python 2.7 and 
3.6-3.11. No special case handling is done in tests related to checking 
annotations that I can remember.

The only reason this issue came up is because Christian tried to convert wrapt 
C code to only use the limited API and stable ABI, and as a result the existing 
test suite then started to fail for newer versions of Python (3.10+) on tests 
related to annotations, because the limited API and stable ABI for
Python 3.10+ didn't behave the same as they did in older versions.

So if wrapt doesn't change to use the limited API and stable ABI then wrapt 
doesn't need to make any changes as it all seems to work fine when using the 
older APIs.

For the time being therefore at least it seems wrapt doesn't need to make any 
changes, since switching to the limited API and stable ABI is not a confirmed 
direction yet, and can't be done anyway until at least Python 2.7 support is 
dropped, and perhaps some Python 3.X version support as well.

--

___
Python tracker 
<https://bugs.python.org/issue45319>
___



[issue46761] functools.update_wrapper breaks the signature of functools.partial objects

2022-03-11 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

My vague recollection was that I identified some time back that partial() 
didn't behave correctly as regards introspection for some use case I was trying 
apply it to in the wrapt implementation. As a result I ended up creating my own 
PartialCallableObjectProxy implementation based around wrapt's own transparent 
object proxy object so that introspection worked properly and went with that 
where I needed it. I don't remember the exact details at the moment and don't 
think commit comments in code are likely to help. Even so, will try and spend 
some time this weekend looking more at the issue and see what I can remember 
about it and see if there is anything more I can comment on that may help.

--

___
Python tracker 
<https://bugs.python.org/issue46761>
___



[issue46761] functools.update_wrapper breaks the signature of functools.partial objects

2022-03-12 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

I am still working through this and thinking about implications, but my first 
impression is that the functools.partial object should provide an attribute 
(property) __signature__ which yields the correct result.

When you think about it, any user who wants to implement a function wrapper 
using a class to do so rather than using functools.update_wrapper(), has to 
implement __signature__ if the wrapper is a signature changing decorator. So 
why shouldn't Python itself follow the same mechanism that is forced on users 
in their own wrappers.

If functools.partial were to implement __signature__, then the part of PEP 362 
where it says:

> If the object is an instance of functools.partial, construct a new Signature 
> from its partial.func attribute, and account for already bound partial.args 
> and partial.kwargs

becomes redundant as the code to deal with it is localised within the 
functools.partial implementation by virtue of __signature__ on that type rather 
than having a special case in inspect.signature().

If this was seen as making more sense, one might even argue that FunctionType 
and the bound variant could implement __signature__ and so localise things to 
those implementations as well, which would further simplify inspect.signature().

This would set a good precedent going forward that if any special callable 
wrapper objects are added to the Python core in the future, that they implement 
__signature__, rather than someone thinking that further special cases could be 
added to inspect.signature() to deal with them.

I have yet to do some actual code experiments so might have more thoughts on 
the matter later.
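A rough sketch of the idea (the class name and implementation here are my own illustration, not anything in functools, and it ignores corner cases such as keyword-binding a middle positional parameter): a partial subclass whose __signature__ property folds the already-bound arguments out of the wrapped function's signature, so inspect.signature() needs no special case.

```python
import functools
import inspect

class PartialWithSignature(functools.partial):
    """Hypothetical partial subclass exposing __signature__ itself."""
    @property
    def __signature__(self):
        sig = inspect.signature(self.func)
        # Drop the parameters already satisfied by self.args/self.keywords.
        bound = sig.bind_partial(*self.args, **self.keywords)
        remaining = [p for name, p in sig.parameters.items()
                     if name not in bound.arguments]
        return sig.replace(parameters=remaining)

def f(a, b, c=3):
    return a + b + c

p = PartialWithSignature(f, 1)
print(inspect.signature(p))  # (b, c=3)
```

Because inspect.signature() consults __signature__ before its functools.partial special case, the property takes effect without any change to the inspect module.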

--

___
Python tracker 
<https://bugs.python.org/issue46761>
___



[issue46846] functools.partial objects should set __signature__ and _annotations__

2022-03-12 Thread Graham Dumpleton


Change by Graham Dumpleton :


--
nosy: +grahamd

___
Python tracker 
<https://bugs.python.org/issue46846>
___



[issue46761] functools.update_wrapper breaks the signature of functools.partial objects

2022-03-25 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

It is Django I would worry about and look at closely as they do stuff with 
decorators on instance methods that uses partials.

https://github.com/django/django/blob/7119f40c9881666b6f9b5cf7df09ee1d21cc8344/django/utils/decorators.py#L43

```
def _wrapper(self, *args, **kwargs):
    # bound_method has the signature that 'decorator' expects i.e. no
    # 'self' argument, but it's a closure over self so it can call
    # 'func'. Also, wrap method.__get__() in a function because new
    # attributes can't be set on bound method objects, only on functions.
    bound_method = wraps(method)(partial(method.__get__(self, type(self))))
    for dec in decorators:
        bound_method = dec(bound_method)
    return bound_method(*args, **kwargs)
```
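The failure mode can be reduced to a few lines (my own minimal reconstruction of what wraps(method)(partial(...)) does to introspection): functools.update_wrapper() copies __wrapped__ onto the partial, so inspect.signature() follows the wrapper chain back to the unbound function and reports the already-bound parameter again.

```python
import functools
import inspect

def method(self, x):
    return x

bound = functools.partial(method, object())
print(inspect.signature(bound))    # (x) -- 'self' correctly folded away

wrapped = functools.wraps(method)(bound)  # sets wrapped.__wrapped__ = method
print(inspect.signature(wrapped))  # (self, x) -- 'self' wrongly reappears
```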

--

___
Python tracker 
<https://bugs.python.org/issue46761>
___



[issue46761] functools.update_wrapper breaks the signature of functools.partial objects

2022-03-25 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

Another example in Django, albeit in a test harness.

* 
https://github.com/django/django/blob/7119f40c9881666b6f9b5cf7df09ee1d21cc8344/tests/urlpatterns_reverse/views.py#L65

--

___
Python tracker 
<https://bugs.python.org/issue46761>
___



[issue46761] functools.update_wrapper breaks the signature of functools.partial objects

2022-03-25 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

These days I have no idea who is active on Django.

--

___
Python tracker 
<https://bugs.python.org/issue46761>
___



[issue44847] ABCMeta.__subclasscheck__() doesn't support duck typing.

2021-08-05 Thread Graham Dumpleton


New submission from Graham Dumpleton :

The Python standard library has two effective implementations of helpers for 
the ABCMeta class. A C implementation, and a pure Python version which is only 
used if the C implementation isn't available (perhaps for PyPy).

* https://github.com/python/cpython/blob/3.9/Lib/abc.py#L89
* https://github.com/python/cpython/blob/3.9/Lib/_py_abc.py
* https://github.com/python/cpython/blob/3.9/Modules/_abc.c

These two implementations behave differently.

Specifically, the ABCMeta.__subclasscheck__() implementation for the C version 
doesn't support duck typing for the subclass argument to issubclass() when this 
delegates to ABCMeta.__subclasscheck__(). The Python implementation for this 
has no problems though.

In the pure Python version it uses isinstance().

* https://github.com/python/cpython/blob/3.9/Lib/_py_abc.py#L110

In the C implementation it uses PyType_Check() which doesn't give the same 
result.

* https://github.com/python/cpython/blob/3.9/Modules/_abc.c#L610

The consequence of this is that transparent object proxies used as decorators 
on classes (eg., as wrapt uses) will break when the C implementation is used 
with an error of:

#   def __subclasscheck__(cls, subclass):
#   """Override for issubclass(subclass, cls)."""
#   >   return _abc_subclasscheck(cls, subclass)
#   E   TypeError: issubclass() arg 1 must be a class

Example of tests from wrapt and how tests using C implementation must be 
disabled can be found at:

* 
https://github.com/GrahamDumpleton/wrapt/blob/develop/tests/test_inheritance_py37.py

If instead of using PyType_Check() the C implementation used
PyObject_IsInstance() at that point, it is possible that wrapt may then work,
if the remainder of the C implementation is true to how the pure Python
version works (I have not been able to test whether that is the case as yet).
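The behavioural difference can be shown with a minimal hand-rolled proxy (a stand-in for wrapt's ObjectProxy; this assumes CPython with the C _abc implementation in use, which is the default):

```python
import abc

class MyABC(metaclass=abc.ABCMeta):
    pass

class Concrete(MyABC):
    pass

class Proxy:
    """Minimal transparent proxy: reports the wrapped object's __class__."""
    def __init__(self, wrapped):
        self._wrapped = wrapped
    @property
    def __class__(self):
        return self._wrapped.__class__

proxy = Proxy(Concrete)
# isinstance() duck types via __class__, so the proxy looks like a class...
print(isinstance(proxy, type))  # True
# ...but the C _abc_subclasscheck() uses PyType_Check() and rejects it.
try:
    issubclass(proxy, MyABC)
except TypeError as exc:
    print(exc)  # e.g. "issubclass() arg 1 must be a class"
```

With the pure Python _py_abc implementation the issubclass() call would succeed, which is exactly the divergence described above.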

--
components: Library (Lib)
messages: 399060
nosy: grahamd
priority: normal
severity: normal
status: open
title: ABCMeta.__subclasscheck__() doesn't support duck typing.
type: behavior
versions: Python 3.10, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44847>
___



[issue45319] Possible regression in __annotations__ descr for heap type subclasses

2021-09-29 Thread Graham Dumpleton


Change by Graham Dumpleton :


--
nosy: +grahamd

___
Python tracker 
<https://bugs.python.org/issue45319>
___



[issue45356] Calling `help` executes @classmethod @property decorated methods

2021-10-27 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

Too much to grok right now.

There is already a convention for what a decorator wraps. It is __wrapped__.

https://github.com/python/cpython/blob/3405792b024e9c6b70c0d2355c55a23ac84e1e67/Lib/functools.py#L70

Don't use __func__ as that has other defined meaning in Python related to bound 
methods and possibly other things as well and overloading on that will break 
other stuff.

In part I suspect a lot of the problems here arise because things like 
classmethod and functools-style decorators are not proper transparent object 
proxies. That is the point of what the wrapt package was trying to solve: 
accessing attributes on the wrapper behaves as much as possible as if it were 
done on what was wrapped, including things like isinstance checks.
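
For reference, the __wrapped__ convention mentioned above is exactly what functools.wraps() already applies:

```python
import functools

def deco(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deco
def greet():
    """Say hello."""
    return "hello"

# functools.wraps() records the original callable under __wrapped__,
# which is how inspect.unwrap() and similar tooling find it.
print(greet.__wrapped__.__name__)   # greet
print(greet())                      # hello
```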

--

___
Python tracker 
<https://bugs.python.org/issue45356>
___



[issue40234] Disallow daemon threads in subinterpreters optionally.

2020-04-09 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

Just to make few things clear. It isn't mod_wsgi itself that relies on daemon 
threads, it is going to be users WSGI applications (or the things they need) 
that do.

As concrete examples of things that would stop working, there are monitoring 
systems such as New Relic, DataDog, Elastic APM etc. These all fire off a 
background thread to handle aggregation of data collected from the 
application, with that data then being sent off once a minute to the backend 
servers.

It isn't just these though. Over the years I have seen many instances of 
people using background threads to offload small tasks to be done in process 
rather than using a full blown queuing system such as Celery etc. So I don't 
believe it is a rare use case. Monitoring systems are a big use case though.

These would all usually use a daemon thread so they can be started and 
effectively forgotten, with no need to do anything to shut them down when the 
process is exiting.

Some (such as New Relic, which I wrote, so I know how it works) will register 
an atexit callback in order to flush data out before a process stops, but it 
may not actually exit the thread. Even if it does exit the thread, you can't 
just switch it to use a non daemon thread, as that will not work.

The problem here is that atexit callbacks are only called after the 
(sub)interpreter shutdown code has waited on non daemon threads. Thus there is 
no current standard way I know of to notify a non daemon thread to shut down. 
The result would be that if these were switched to non daemon threads, the 
process would hang on shutdown at the point of waiting for non daemon threads.

So if you are going to eliminate daemon threads (even if only in sub 
interpreters at this point), you are going to have to introduce a way to 
register something similar to an atexit callback which would be invoked before 
waiting on non daemon threads, so that an attempt can be made to notify them 
that they need to shut down. Use of this mechanism is going to have to be 
added to any code out there currently using daemon threads if it is going to 
be forced to use non daemon threads. This includes stuff in the stdlib such as 
the multiprocessing thread pools. They can't just switch to non daemon 
threads; they have to add the capability to register and be notified of 
(sub)interpreter shutdown so they can exit the thread, or else process hangs 
will occur.

Now a few other things about history and usage of mod_wsgi to give context.

Once upon a time mod_wsgi did try to delete sub interpreters and replace them 
during the life of a process. This, as you can probably imagine now, was very 
buggy because of issues in CPython sub interpreter support. As a result 
mod_wsgi discarded that ability, so a sub interpreter always persisted and was 
used for the life of the process. That way problems with cleaning up sub 
interpreters weren't a big issue.

During cleanup of (sub)interpreters on process shutdown, although crashes could 
sometimes occur (usually quite rare), what usually happened was that a Python 
exception would occur. The reason for this would be in cleaning up a 
(sub)interpreter, sys.modules was cleared up with everything appearing to be 
set to None. You would therefore get a Python exception because some code 
trying to access a class instance found the instance replaced by None and so it 
failed. Even this was rare and not a big deal.

Now although a crash or Python exception could in rare cases occur, for 
mod_wsgi it didn't really matter since we were talking about sub process of the 
Apache master process, and the master process didn't care. If Apache was 
stopping anyway, it just stopped normally. If Apache was doing a restart and 
child processes were told to stop because of that, or if a maximum request 
threshold was reached and so the process was being recycled, then Apache was going to 
replace the process anyway, so everything just carried on normally and a new 
process started in its place.

In the case where a process lockup managed to occur on process shutdown, for 
example if non daemon threads were used explicitly, then the process shutdown 
timeouts applied by mod_wsgi to daemon processes would kick in and the process 
would be force killed anyway. So all up it was quite resilient and kept 
working. If embedded mode of mod_wsgi was used, though, it would lock up the 
Apache process indefinitely if something used non daemon threads explicitly.

On the issue of non daemon threads, usually these would never arise. This is 
because usually people don't explicitly say a thread is non daemon. Where 
nothing is done to say that, a thread actually inherits the mode of the thread 
it was created in. Since all request handler threads in mod_wsgi are actually 
externally created threads which call into Python, they get assigned the 
DummyThread object to track them. These are treated as non daemon threads. As a 
result any new thread

[issue22213] Make pyvenv style virtual environments easier to configure when embedding Python

2020-04-15 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

For the record. Since virtualenv 20.0.0 (or thereabouts) switched to the 
python -m venv style virtual environment structure, the C API for embedding 
when using a virtual environment is now completely broken on Windows. The same 
workaround used on UNIX doesn't work on Windows.

The only known workaround is in the initial Python code you load, to add:

import site
site.addsitedir('C:/some/path/to/pythonX.Y/Lib/site-packages')

to at least force it to use the site-packages directory from the virtual 
environment.

As for mod_wsgi, this means that on Windows the WSGIPythonHome directive no 
longer works, and I have to suggest that workaround instead.

--

___
Python tracker 
<https://bugs.python.org/issue22213>
___



[issue6531] atexit_callfuncs() crashing within Py_Finalize() when using multiple interpreters.

2012-01-18 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

What are the intentions with respect to atexit and sub interpreters?

The original report was only about ensuring that the main interpreter doesn't 
crash if an atexit function was registered in a sub interpreter. So, I was not 
expecting a change to sub interpreters in submitting this report, inasmuch as 
atexit callbacks for sub interpreters are never invoked in Python 2.X.

That said, for mod_wsgi I have extended sub interpreter destruction so that 
atexit callbacks registered in sub interpreters are called. For mod_wsgi 
though, sub interpreters are only destroyed on process shutdown. For the 
general case, a sub interpreter could be destroyed at any time during the life 
of the process. If one called atexit callbacks on such sub interpreter 
destruction, it notionally changes the meaning of atexit, which is about 
process exit and not really sub interpreter exit.

--

___
Python tracker 
<http://bugs.python.org/issue6531>
___



[issue14073] allow per-thread atexit()

2012-02-21 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

My take on this is that if you want to interact with a thread from an atexit 
callback, you are supposed to call setDaemon(True) on the thread. This is to 
ensure that on interpreter shutdown the interpreter doesn't try to wait on the 
thread completing before getting to the atexit callbacks.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue14073>
___



[issue14073] allow per-thread atexit()

2012-02-22 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Reality is that the way Python behaviour is defined/implemented means that it 
will wait for non daemonised threads to complete before exiting.

Sounds like the original code is wrong in not setting the thread to be 
daemonised in the first place, and that should be reported as a bug in that 
code rather than fiddling with the interpreter.

--

___
Python tracker 
<http://bugs.python.org/issue14073>
___



[issue14073] allow per-thread atexit()

2012-02-22 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

At the moment you have shown some code which is causing you problems and a 
vague idea. Until you show how that idea may work in practice, it is a bit 
hard to judge whether what it does and how it does it is reasonable.

--

___
Python tracker 
<http://bugs.python.org/issue14073>
___



[issue14073] allow per-thread atexit()

2012-02-22 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

I haven't said I am against it. All I have done so far is explain on the 
WEB-SIG how mod_wsgi works and how Python currently works and how one would 
normally handle this situation by having the thread be daemonised.

As for the proposed solution, where is the code example showing how what you 
are suggesting is meant to work? Right now you are making people assume how it 
would work. At least add an actual example here of how, with the proposed 
feature, your code would then look.

For the benefit of those who might even implement what you want, which will not 
be me anyway as I am not involved in Python core development, you might also 
explain where you expect these special per thread atexit callbacks to be 
triggered within the current steps for shutting down the interpreter. That way 
it will be more obvious to those who come later as to what you are actually 
proposing.

--

___
Python tracker 
<http://bugs.python.org/issue14073>
___



[issue14073] allow per-thread atexit()

2012-02-22 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Except that calling it at the time of current atexit callbacks wouldn't change 
the current behaviour. As quoted in WEB-SIG emails the sequence is:

wait_for_thread_shutdown();

/* The interpreter is still entirely intact at this point, and the
 * exit funcs may be relying on that.  In particular, if some thread
 * or exit func is still waiting to do an import, the import machinery
 * expects Py_IsInitialized() to return true.  So don't say the
 * interpreter is uninitialized until after the exit funcs have run.
 * Note that Threading.py uses an exit func to do a join on all the
 * threads created thru it, so this also protects pending imports in
 * the threads created via Threading.
 */
call_py_exitfuncs();

So would need to be done prior to wait_for_thread_shutdown() or by that 
function before waiting on thread.

The code in that function has:

PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
  "threading");

...
result = PyObject_CallMethod(threading, "_shutdown", "");

So calls _shutdown() on the threading module.

That function is aliased to _exitfunc() method of _MainThread.

def _exitfunc(self):
self._stop()
t = _pickSomeNonDaemonThread()
if t:
if __debug__:
self._note("%s: waiting for other threads", self)
while t:
t.join()
t = _pickSomeNonDaemonThread()
if __debug__:
self._note("%s: exiting", self)
self._delete()

So can be done in here.

The decision which would need to be made is whether you call atexit() on all 
threads before then trying to join() on any, or call atexit() only prior to the 
join() of the thread.

Calling atexit() on all possibly sounds the better option, but I am not sure; 
plus the code would need to deal with doing two passes like that, which may or 
may not have implications.

--

___
Python tracker 
<http://bugs.python.org/issue14073>
___



[issue14590] ConfigParser doesn't strip inline comment when delimiter occurs earlier without preceding space.

2012-04-15 Thread Graham Dumpleton

New submission from Graham Dumpleton :

When parsing for inline comments, ConfigParser will only check the first 
occurrence of the delimiter in the line. If that instance of the delimiter 
isn't preceded with a space, it then assumes no comment. This ignores the fact 
that there could be a second instance of the delimiter which does have a 
preceding space. The result is that inline comments can be left as part of the 
value.

So, a config file of:

[section]
value1 = a;b
value2 = a ; comment
value3 = a; b ; comment

after parsing actually results in:

[section]
value1 = a;b
value2 = a
value3 = a; b ; comment

That is, 'value3' is incorrect as still embeds the inline comment.

Test script attached for Python 2.X.

Not tested on Python 3.X but code appears to do the same thing, except that on 
Python 3.X inline comments are disabled by default.
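
For what it's worth, the rewritten Python 3 parser scans successive occurrences of the inline prefix rather than only the first, so once inline comments are enabled explicitly it does strip the comment from 'value3'; a quick check (assuming ';' as the inline prefix):

```python
from configparser import ConfigParser

cfg = ConfigParser(inline_comment_prefixes=(";",))
cfg.read_string("""\
[section]
value1 = a;b
value2 = a ; comment
value3 = a; b ; comment
""")

# A ';' must be preceded by whitespace to start a comment, and the
# Python 3 parser keeps scanning past occurrences that don't qualify.
print(cfg["section"]["value1"])   # a;b
print(cfg["section"]["value2"])   # a
print(cfg["section"]["value3"])   # a; b
```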

--
components: Library (Lib)
files: test_config.py
messages: 158397
nosy: grahamd
priority: normal
severity: normal
status: open
title: ConfigParser doesn't strip inline comment when delimiter occurs earlier 
without preceding space.
type: behavior
versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3
Added file: http://bugs.python.org/file25233/test_config.py

___
Python tracker 
<http://bugs.python.org/issue14590>
___



[issue37072] PyNode_Compile() crashes in Python 3.8.

2019-05-27 Thread Graham Dumpleton


New submission from Graham Dumpleton :

The code:

#include <Python.h>

int
main(int argc, char *argv[])
{
FILE *fp = NULL;
PyObject *co = NULL;
struct _node *n = NULL;
const char * filename = "/dev/null";

Py_Initialize();

fprintf(stderr, "START\n");

fp = fopen(filename, "r");

fprintf(stderr, "CALL PyParser_SimpleParseFile()\n");

n = PyParser_SimpleParseFile(fp, filename, Py_file_input);

fprintf(stderr, "CALL PyNode_Compile()\n");

co = (PyObject *)PyNode_Compile(n, filename);

fprintf(stderr, "DONE\n");

Py_Finalize();

return 0;
}

has worked fine since Python 2.3 (and maybe earlier) through Python 3.7, but 
now crashes in Python 3.8.

It crashes in PyNode_Compile().

START
CALL PyParser_SimpleParseFile()
CALL PyNode_Compile()
Segmentation fault: 11

Although it is part of the public interface of compile.h, PyNode_Compile() 
seems never to actually be called anywhere in Python itself, and perhaps isn't 
even covered by tests. So if Python 3.8 internal changes mean this function's 
implementation needs to be changed, that fact may have been missed.

--
messages: 343727
nosy: grahamd
priority: normal
severity: normal
status: open
title: PyNode_Compile() crashes in Python 3.8.
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue37072>
___



[issue37072] PyNode_Compile() crashes in Python 3.8.

2019-05-27 Thread Graham Dumpleton


Graham Dumpleton  added the comment:

FWIW, this was occurring on macOS. I have not been able to test on other platforms.

--
type:  -> crash

___
Python tracker 
<https://bugs.python.org/issue37072>
___



[issue31901] atexit callbacks only called for current subinterpreter

2017-11-08 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

FWIW, that atexit callbacks were not called for sub interpreters ever was a 
pain point for mod_wsgi.

What mod_wsgi does is override the destruction sequence so that it will first 
go through each sub interpreter and shut down threading explicitly, then call 
atexit handlers. Only when that is done will it destroy the sub interpreter 
and the main interpreter.

I have noted this previously in discussion associated with:

https://bugs.python.org/issue6531

--

___
Python tracker 
<https://bugs.python.org/issue31901>
___



[issue15445] Ability to do code injection via logging module configuration listener port.

2012-07-24 Thread Graham Dumpleton

New submission from Graham Dumpleton :

This issue was raised first on secur...@python.org. Guido responded that it 
was not sensitive enough to be kept to the list and that it was okay to log a 
bug report.

This issue may not warrant any action except perhaps an update to
documentation for the logging module to warn about it, but thought
that should raise it just in case someone felt it needed actual code
changes to be made to avoid the issue if possible.

The problem arises in the Python logging modules ability to create a
listener socket which can accept new configuration in the ini file
format.

http://docs.python.org/library/logging.config.html#logging.config.listen

"""To send a configuration to the socket, read in the configuration
file and send it to the socket as a string of bytes preceded by a
four-byte length string packed in binary using struct.pack('>L',
n)."""

This sounds innocuous and the documentation at that point doesn't warn
that you are opening yourself up to security problems in using it.

You get a hint of potential issues if you read the later
documentation about the file format:

"""The class entry indicates the handler’s class (as determined by
eval() in the logging package’s namespace). The level is interpreted
as for loggers, and NOTSET is taken to mean ‘log everything’."""

There are other mentions of eval() in the context of the log level and args
for the handler class as well, but I am not sure eval() is used for the log
level as it says.

The combination of the open listener port for configuration and the fact
that processing of the configuration file uses eval() means that one could
send a configuration file to the process containing:

[handler_consoleHandler]
class=os.system('echo security issue') or StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

and one could execute an arbitrary command as the user the process runs as.

The problem is tempered by the fact that someone has to enable the
feature, which is likely rare, but also because socket connections to
send new configuration will only be accepted from the same host
('localhost') and the host can not be overridden. So it can only be taken
advantage of by someone (potentially a different user) on the same
host, and not remotely at least.

The specific code in Python 3.2 is:

section = cp["handler_%s" % hand]
klass = section["class"]
fmt = section.get("formatter", "")
try:
klass = eval(klass, vars(logging))
except (AttributeError, NameError):
klass = _resolve(klass)
args = section["args"]
args = eval(args, vars(logging))
h = klass(*args)

and older Python 2.X versions have similar code.

Although you could perhaps avoid the need for eval() for the class lookup, I 
can't see that you could do that for args unless you restrict it to literal 
values and use a more limited eval()-like parser.
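
One such limited parser already in the standard library is ast.literal_eval(), which accepts only Python literals and raises ValueError for names and calls:

```python
import ast

# Plain literals parse fine ...
args = ast.literal_eval("('app.log', 'a')")
print(args)   # ('app.log', 'a')

# ... but names and calls raise ValueError instead of executing, so
# the injection shown above would be rejected rather than run.
try:
    ast.literal_eval("os.system('echo security issue')")
except ValueError:
    print("rejected")
```

Note though that an args value like (sys.stdout,) is not a literal, so a literal-only parser could not handle the existing configuration format unchanged.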

At the minimum there probably should be a warning in the documentation about 
using the logging module configuration port on untrusted systems with shared 
users.

--
components: Library (Lib)
messages: 166343
nosy: grahamd
priority: normal
severity: normal
status: open
title: Ability to do code injection via logging module configuration listener 
port.
type: security
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue15445>
___



[issue15751] Add PyGILState_SwitchInterpreter

2012-08-21 Thread Graham Dumpleton

Graham Dumpleton added the comment:

Just to clarify. One can still tell WSGI applications under mod_wsgi to run in 
the main interpreter and in that case modules using PyGILState* do work. By 
default though, sub interpreters are used as noted.

The mechanism for forcing use of main interpreter is the directive:

WSGIApplicationGroup %{GLOBAL}

Some details about this issue can be found in:

http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Python_Simplified_GIL_State_API

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Add PyGILState_SwitchInterpreter

2012-08-21 Thread Graham Dumpleton

Graham Dumpleton added the comment:

In both embedded mode and daemon mode of mod_wsgi, albeit how thread pool is 
managed is different, there is a fixed number of threads with those being 
dedicated to handling web requests.

When a request arrives, the next available thread from the thread pool handles 
accepting of the request at the C code level; that thread may then map to any 
WSGI application and so any sub interpreter, or even the main interpreter.

Thus there is no one to one mapping between thread and (sub)interpreter.

The way the mod_wsgi code works now is that when it knows it will be calling 
into the main interpreter, it uses PyGILState_Ensure(). If not, it will use a 
thread state for that thread specific to the sub interpreter it is calling in 
to. At the end of the request, the thread state is remembered and not thrown 
away so that thread locals still work for that thread across requests for that 
sub interpreter.

Thus, there can be more than one thread state per thread, but this is fine so 
long as it is only used against the sub interpreter it was created for.

This is actually an enforced requirement of Python, because if you create more 
than one thread state for a thread for the same sub interpreter, or even an 
additional one for the main interpreter when there is also the auto TLS, then 
Python will die if you compile and run it in debug mode.

Now, since mod_wsgi always knows which interpreter it is calling into, the 
intent was that there was this single API call so that mod_wsgi could say that 
at this time, this thread is going to be calling into that interpreter. It 
could then just call PyGILState_Ensure().

Any third party module then which uses the simplistic calling sequence of 
calling PyGILState_Release() on exiting Python code and thence within the same 
thread calling PyGILState_Ensure() when coming back into Python with a callback 
will work, as mod_wsgi has specified the interpreter context for that thread at 
that time.

As pointed out, if a third party module was creating its own background threads 
at C level and calling PyGILState_Ensure() when calling back into Python code, 
this could pose a problem. This could also be an issue for Python created 
background threads.

In the case of the latter, if a Python thread is created in a specific sub 
interpreter, it should automatically designate for that thread that that is its 
interpreter context, so if it calls out and does the Release/Ensure dance, that 
it goes back into the same sub interpreter.

The C initiated thread is a bit more complicated though and may not be 
solvable, but a lot of the main third party modules which don't work in sub 
interpreters, such as lxml, don't use background threads, so the simplistic 
approach means that will work at least.

So, in summary, I saw a single API call which allowed designation of which 
interpreter a thread is operating against, overriding the implicit default of 
the main interpreter. The PyGILState API will need to manage a set of 
interpreter states for each interpreter, with the potential for more than one 
thread state per thread, due to a thread being able to call into multiple 
interpreters at different times.

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Add PyGILState_SwitchInterpreter

2012-08-21 Thread Graham Dumpleton

Graham Dumpleton added the comment:

Those macros only work for releasing the GIL and restoring it straight away, 
not for the case where the GIL is released, code calls into some non-Python C 
library, and that library then calls back into Python.

My recollection is, unless they have changed it, that SWIG generated calls 
use the GILState calls. See:

https://issues.apache.org/jira/browse/MODPYTHON-217

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Add PyGILState_SwitchInterpreter

2012-08-21 Thread Graham Dumpleton

Graham Dumpleton added the comment:

If you have an Ex version of Ensure, then if the interpreter argument is NULL, 
it should assume the main interpreter. That way the normal version of Ensure 
can just call PyGILState_EnsureEx(NULL).

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Support subinterpreters in the GIL state API

2012-08-24 Thread Graham Dumpleton

Graham Dumpleton added the comment:

It is past my bed time and I am not thinking straight, but I believe Antoine 
is aligned with what I had in mind, as we need multiple thread states per OS 
thread, where each is associated with a separate interpreter.

My main reason for allowing NULL to EnsureEx, rather than requiring 
main_interpreter to be explicitly passed, is that way back in time my 
recollection is that getting access to the main interpreter pointer was a 
pain, as you had to iterate over the list of interpreters and assume it was 
the last one due to it being created first. I don't remember there being a 
special global variable or function for getting a pointer to the main 
interpreter. This may well have changed since; if there is an easier way, do 
let me know. So I saw it as a convenience.
--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Support subinterpreters in the GIL state API

2012-08-28 Thread Graham Dumpleton

Graham Dumpleton added the comment:

Sorry, Mark. It is not for associating a thread state specified by the 
embedding application. In simple terms it is exactly like the existing 
PyGILState_Ensure(), in that the caller doesn't have to care whether a thread 
state has already been created. The only difference is to allow that 
simplified API to work against a sub interpreter.

Nick, I previously said:

"""In the case of the latter, if a Python thread is created in a specific sub 
interpreter, it should automatically designate for that thread that that is its 
interpreter context, so if it calls out and does the Release/Ensure dance, that 
it goes back into the same sub interpreter."""

So yes to your:

"""Thinking about it, I believe there still needs to be a concept of an "active 
thread state" TLS key in order to deal with Graham's original problem. 
Specifically, if PyGILState_EnsureEx is used to associate the thread with a 
particular interpreter, then subsequent calls to PyGILState_Ensure from *that 
thread* should get the explicitly associated interpreter, rather than the main 
interpreter."""

My example was more to do with a thread created in Python then calling out and 
back in, but same deal as foreign thread calling in, out and back in.

Antoine, yes, it can possibly be simplified to that. The original idea of a 
switch interpreter function was suggested on the basis that PyGILState_Ensure 
would not be modified nor an extended version of the function created. Rolling 
an implicit interpreter switch into PyGILState_EnsureEx when the argument is 
different to the current one may serve the same purpose.
--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Support subinterpreters in the GIL state API

2012-08-28 Thread Graham Dumpleton

Graham Dumpleton added the comment:

Nick. Valid point.

I guess I hadn't been thinking about case of one thread calling out of one 
interpreter and then into another, as I don't do it in mod_wsgi and even I 
regard doing that as partly evil.

Does that mean this switch interpreter call somehow gets used in 
Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS?

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Support subinterpreters in the GIL state API

2012-08-28 Thread Graham Dumpleton

Graham Dumpleton added the comment:

So you are saying that as user of this I am meant to call it as:

PyGILState_INFO info;

PyGILState_EnsureEx(interp, &info);
...
PyGILState_ReleaseEx(&info);

What is the potential error code from PyGILState_EnsureEx(), considering that 
right now the result of PyGILState_Ensure() is a value passed back into 
PyGILState_Release()?

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue15751] Support subinterpreters in the GIL state API

2012-08-29 Thread Graham Dumpleton

Graham Dumpleton added the comment:

If PyGILState_STATE is a struct, what happens if someone naively does:

PyGILState_Release(PyGILState_UNLOCKED)

I know they shouldn't, but I actually do this in mod_wsgi in one spot as it is 
otherwise a pain to carry around the state when I know for sure it was unlocked 
before the PyGILState_Ensure().

Or can PyGILState_UNLOCKED map to a global struct instance with certain 
state in it that represents that without problem?

--

___
Python tracker 
<http://bugs.python.org/issue15751>
___



[issue16220] wsgiref does not call close() on iterable response

2012-10-14 Thread Graham Dumpleton

Graham Dumpleton added the comment:

Hmmm. I wonder if finally finding this was prompted in part by a recent post 
about this very issue. :-)

http://blog.dscpl.com.au/2012/10/obligations-for-calling-close-on.html

Also related is this issue from Django I highlighted a long time ago.

https://code.djangoproject.com/ticket/16241

I would have to look through the Django code again, but I wonder whether the 
issue there was in fact caused by an underlying issue in the standard library, 
or whether it would still have needed to be fixed separately in Django.
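For context, the obligation being discussed can be reduced to a few lines. This is a hedged sketch of what any WSGI gateway must do per the spec, not the actual wsgiref code; the serve() helper and Result class are illustrative names:

```python
# Minimal sketch of the WSGI close() obligation: once iteration over the
# application's result begins, close() must be called on the result if it
# has one, even if iteration fails part way through.
def serve(result, write):
    try:
        for data in result:
            write(data)
    finally:
        # The gateway, not the application, owns this call.
        if hasattr(result, "close"):
            result.close()

class Result(object):
    """Toy iterable response that records whether close() was called."""
    def __init__(self, chunks):
        self.chunks = chunks
        self.closed = False
    def __iter__(self):
        return iter(self.chunks)
    def close(self):
        self.closed = True

chunks = []
result = Result([b"Hello ", b"World"])
serve(result, chunks.append)
```

The bug being reported is precisely that wsgiref skipped the finally-style close() call.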

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue16220>



[issue16220] wsgiref does not call close() on iterable response

2012-10-15 Thread Graham Dumpleton

Graham Dumpleton added the comment:

That's right, the Django bug report I filed was actually for Django 1.3, which 
didn't use wsgiref. I wasn't using Django 1.4 at the time so didn't bother to 
check its new implementation based on wsgiref. Instead I just assumed wsgiref 
would be right. Whoops.

--

___
Python tracker 
<http://bugs.python.org/issue16220>



[issue16362] _LegalCharsPatt in cookies.py includes illegal characters

2012-10-30 Thread Graham Dumpleton

Graham Dumpleton added the comment:

For that cookie string to be valid in the first place, shouldn't it have been 
sent as:

'HTTP_COOKIE': 'yaean_djsession=23ab7bf8b260cbb2f2bc80b1c1fd98fa; 
yaean_yasession=ff2a3030ee3f428f91c6f554a63b459c'

IOW, semicolon as separator.

What client generated that HTTP Cookie header with commas in it?

The only way I could see you ending up with that, if the client isn't broken, 
is if the application originally sent it as only one Set-Cookie response 
header, trying to set both values at the same time with a comma as the 
separator. Then, when it came back from the client like that to the 
application, the cookie parser did the wrong thing with it.

If this is a browser client, check the browser cookie cache to see what it is 
stored as in there.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue16362>



[issue16500] Add an 'afterfork' module

2012-11-26 Thread Graham Dumpleton

Changes by Graham Dumpleton :


--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue16500>



[issue16679] Wrong URL path decoding

2012-12-14 Thread Graham Dumpleton

Graham Dumpleton added the comment:

The requirement per PEP  is that the original byte string needs to be 
converted to a native string (Unicode) with the ISO-8859-1 encoding. This is to 
ensure that the original bytes are preserved so that the WSGI application, with 
its own knowledge of what encoding the byte string was in, can then properly 
convert it to the correct encoding.

In other words, the WSGI server is not allowed to assume that the original byte 
string was UTF-8, because in practice it may not be and it cannot know what it 
is. The WSGI server must use ISO-8859-1. The WSGI application if it needs it in 
UTF-8, must then convert it back to a byte string using ISO-8859-1 and then 
from there convert it back to a native string as UTF-8.
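The round trip described above can be sketched in a few lines (the path value is illustrative; the quoted requirement is from PEP 3333, the Python 3 WSGI spec):

```python
# The WSGI round trip: the server decodes the raw bytes as ISO-8859-1,
# which is byte-preserving; the application, knowing the real encoding,
# re-encodes as ISO-8859-1 and decodes with the correct codec.
raw = "/caf\u00e9".encode("utf-8")       # bytes as received on the wire
wsgi_value = raw.decode("iso-8859-1")    # what the WSGI server must pass through
decoded = wsgi_value.encode("iso-8859-1").decode("utf-8")  # application side
```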

So if I understand what you are saying, you are suggesting a change which is 
incompatible with PEP .

Please provide a code snippet or patch to show what you are proposing to be 
changed so it can be determined precisely what you are talking about.

--
nosy: +grahamd

___
Python tracker 
<http://bugs.python.org/issue16679>



[issue16679] Wrong URL path decoding

2012-12-14 Thread Graham Dumpleton

Graham Dumpleton added the comment:

You can't try UTF-8 and then fall back to ISO-8859-1. PEP  requires it 
always be ISO-8859-1. If an application needs it as something else, it is the 
web applications job to do it.

The relevant part of the PEP is:

"""On Python platforms where the str or StringType type is in fact 
Unicode-based (e.g. Jython, IronPython, Python 3, etc.), all "strings" referred 
to in this specification must contain only code points representable in 
ISO-8859-1 encoding (\u0000 through \u00FF, inclusive). It is a fatal error for 
an application to supply strings containing any other Unicode character or code 
point. Similarly, servers and gateways must not supply strings to an 
application containing any other Unicode characters."""

By converting as UTF-8 you would be breaking the requirement that only code 
points representable in ISO-8859-1 encoding (\u0000 through \u00FF, inclusive) 
are passed through.

So it is inconvenient if your expectation is that it will always be UTF-8, but 
that is how it has to work. This is because it could be something other than UTF-8, yet 
still be able to be successfully converted as UTF-8. In that case the 
application would get something totally different to the original which is 
wrong.

So, the WSGI server cannot ever make any assumptions and the WSGI application 
always has to be the one which converts it to the correct Unicode string. The 
only way that can be done and still pass through a native string, is that it is 
done as ISO-8859-1 (which is byte preserving), allowing the application to go 
back to bytes and then back to Unicode in correct encoding.
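To see why "try UTF-8 first" is unsafe, consider bytes that decode successfully under both encodings (an illustrative example, not from the original report):

```python
# These two bytes are valid in both encodings but mean different things,
# so a server guessing UTF-8 would silently corrupt Latin-1 input.
raw = b"\xc3\xa9"
as_utf8 = raw.decode("utf-8")         # correct only if the client sent UTF-8
as_latin1 = raw.decode("iso-8859-1")  # correct if the client sent Latin-1
# ISO-8859-1 decoding is byte-preserving, so the application can always
# recover the original bytes and decide for itself:
recovered = as_latin1.encode("iso-8859-1")
```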

--

___
Python tracker 
<http://bugs.python.org/issue16679>



[issue19070] In place operators of weakref.proxy() not returning self.

2013-09-22 Thread Graham Dumpleton

New submission from Graham Dumpleton:

When a weakref.proxy() is used to wrap a class instance which implements in 
place operators, when one applies the in place operator to the proxy, one could 
argue the variable holding the proxy should still be a reference to the proxy 
after the in place operator has been done. Instead the variable is replaced 
with the class instance the proxy was wrapping.

So for the code:

from __future__ import print_function

import weakref

class Class(object):
    def __init__(self, value):
        self.value = value
    def __iadd__(self, value):
        self.value += value
        return self

c = Class(1)

p = weakref.proxy(c)

print('p.value', p.value)
print('type(p)', type(p))

p += 1

print('p.value', p.value)
print('type(p)', type(p))

one gets:

$ python3.3 weakproxytest.py
p.value 1
type(p) <class 'weakproxy'>
p.value 2
type(p) <class '__main__.Class'>

One might expect type(p) at the end to still be <class 'weakproxy'>.

In the weakref.proxy() C code, all the operators are set up with preprocessor 
macros.

#define WRAP_BINARY(method, generic) \
static PyObject * \
method(PyObject *x, PyObject *y) { \
UNWRAP(x); \
UNWRAP(y); \
return generic(x, y); \
}

#define WRAP_TERNARY(method, generic) \
static PyObject * \
method(PyObject *proxy, PyObject *v, PyObject *w) { \
UNWRAP(proxy); \
UNWRAP(v); \
if (w != NULL) \
UNWRAP(w); \
return generic(proxy, v, w); \
}

These are fine for:

WRAP_BINARY(proxy_add, PyNumber_Add)
WRAP_BINARY(proxy_sub, PyNumber_Subtract)
WRAP_BINARY(proxy_mul, PyNumber_Multiply)
WRAP_BINARY(proxy_div, PyNumber_Divide)
WRAP_BINARY(proxy_floor_div, PyNumber_FloorDivide)
WRAP_BINARY(proxy_true_div, PyNumber_TrueDivide)
WRAP_BINARY(proxy_mod, PyNumber_Remainder)
WRAP_BINARY(proxy_divmod, PyNumber_Divmod)
WRAP_TERNARY(proxy_pow, PyNumber_Power)
WRAP_BINARY(proxy_lshift, PyNumber_Lshift)
WRAP_BINARY(proxy_rshift, PyNumber_Rshift)
WRAP_BINARY(proxy_and, PyNumber_And)
WRAP_BINARY(proxy_xor, PyNumber_Xor)
WRAP_BINARY(proxy_or, PyNumber_Or)

Because a result is being returned and the original is not modified.

Use of those macros gives the unexpected result for:

WRAP_BINARY(proxy_iadd, PyNumber_InPlaceAdd)
WRAP_BINARY(proxy_isub, PyNumber_InPlaceSubtract)
WRAP_BINARY(proxy_imul, PyNumber_InPlaceMultiply)
WRAP_BINARY(proxy_idiv, PyNumber_InPlaceDivide)
WRAP_BINARY(proxy_ifloor_div, PyNumber_InPlaceFloorDivide)
WRAP_BINARY(proxy_itrue_div, PyNumber_InPlaceTrueDivide)
WRAP_BINARY(proxy_imod, PyNumber_InPlaceRemainder)
WRAP_TERNARY(proxy_ipow, PyNumber_InPlacePower)
WRAP_BINARY(proxy_ilshift, PyNumber_InPlaceLshift)
WRAP_BINARY(proxy_irshift, PyNumber_InPlaceRshift)
WRAP_BINARY(proxy_iand, PyNumber_InPlaceAnd)
WRAP_BINARY(proxy_ixor, PyNumber_InPlaceXor)
WRAP_BINARY(proxy_ior, PyNumber_InPlaceOr)

This is because the macro returns the result from the API call, such as 
PyNumber_InPlaceAdd(), whereas it should notionally be returning 'proxy' so 
that the variable holding the weakref proxy instance is set to the proxy object 
again and not the result of the inner API call.

In changing this though there is a complication which one would have to deal 
with.

If the result of the inner API call such as PyNumber_InPlaceAdd() is the same 
as the original object wrapped by the weakref proxy, then all is fine.

What though should be done if it is different as notionally this means that the 
reference to the wrapped object would need to be changed to the new value.

The issue is that if one had to replace the reference to the wrapped object 
with a different one due to the in place operator, then notionally the whole 
existence of that weakref is invalidated as you would no longer be tracking the 
same object the weakref proxy was created for.

This odd situation is perhaps why the code was originally written the way it 
was, although that then sees the weakref proxy being replaced which could cause 
different problems with the callback not later being called since the weakref 
proxy can be destroyed before the object it wrapped. As there is nothing in the 
documentation of the code which calls out such a decision, not sure if it was 
deliberate or simply an oversight.

Overall I am not completely sure what the answer should be, so I am logging it 
as interesting behaviour. Maybe this odd case needs to be called out in the 
documentation in some way at least. That or in place operators simply shouldn't 
be allowed on a weakref proxy because of the issues it can cause either way.
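For comparison, a pure-Python object proxy can behave the way the report expects by having its in place hooks return the proxy itself. This is only a sketch under the assumption that keeping the variable bound to the proxy is the desired semantics; Proxy and Counter are hypothetical classes, not weakref.proxy, and Proxy assumes the wrapped object implements __iadd__:

```python
# An object proxy whose __iadd__ mutates the wrapped object but returns
# the proxy, so "p += 1" leaves the variable bound to the proxy.
class Proxy(object):
    def __init__(self, wrapped):
        object.__setattr__(self, "_wrapped", wrapped)
    def __getattr__(self, name):
        return getattr(object.__getattribute__(self, "_wrapped"), name)
    def __iadd__(self, other):
        wrapped = object.__getattribute__(self, "_wrapped")
        wrapped.__iadd__(other)   # perform the in place operation on the target
        return self               # keep the variable bound to the proxy

class Counter(object):
    def __init__(self, value):
        self.value = value
    def __iadd__(self, other):
        self.value += other
        return self

p = Proxy(Counter(1))
p += 1
```

Note this sidesteps, rather than answers, the harder question raised above of what to do when the in place operator returns a different object.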

--
components: Library (Lib)
messages: 198257
nosy: grahamd
priority: normal
severity: normal
status: open
title: In place operators of weakref.proxy() not returning self.
type: behavior
versions: Python 2.6, Python 2.7, Python 3.3

___
Python tracker 
<http://bugs.python.org/issue19070>
___

[issue19071] Documentation on what self is for module-level functions is misleading/wrong.

2013-09-22 Thread Graham Dumpleton

New submission from Graham Dumpleton:

In the documentation for Python 2.X at:

http://docs.python.org/2/extending/extending.html#a-simple-example

it says:

"""
The self argument points to the module object for module-level functions; for a 
method it would point to the object instance.
"""

In respect of module-level functions this is misleading or arguably wrong.

If one uses Py_InitModule() or Py_InitModule3(), then self is actually passed 
through as NULL for module-level functions in Python 2.

There is a caveat on use of Py_InitModule4() which used to be mentioned in 
documentation for Python 2.6 at:

http://docs.python.org/release/2.6.7/c-api/structures.html#METH_VARARGS

where it says:

"""
This is the typical calling convention, where the methods have the type 
PyCFunction. The function expects two PyObject* values. The first one is the 
self object for methods; for module functions, it has the value given to 
Py_InitModule4() (or NULL if Py_InitModule() was used).
"""

Although one can supply a special argument to Py_InitModule4() which will be 
supplied as self, this still isn't the module object and in fact the module 
object for the module will not even exist at the point Py_InitModule4() is 
called so it is not possible to pass it in. Plus within the init function of an 
extension, the module object is not that which would end up being used in a 
specific interpreter due to how the init function is only called once and a 
copy then made of the module for each interpreter.

This actual page in the documentation was changed in Python 2.7 and now in:

http://docs.python.org/2/c-api/structures.html#METH_VARARGS

says:

"""
The function expects two PyObject* values. The first one is the self object for 
methods; for module functions, it is the module object.
"""

So the reference to Py_InitModule4() was removed and simply says that module 
object is supplied, which isn't actually the case.

Now, that NULL is always passed for Py_InitModule() and Py_InitModule3() is the 
case with Python 2. In Python 3 at some point, the code in Python internals was 
changed so the module object is actually passed as documented.

So, maybe the intent was that when in Python 3 the code was changed to pass the 
module object to module-level functions that it be back ported to Python 2.7 
and the documentation so changed, but looks like that back porting was never 
done, or if it was, it has been broken somewhere along the way.

Code used to verify this is all attached.

If compiled and installed for Python 3 one gets:

>>> import mymodule._extension
>>> id(mymodule._extension)
4480540328
>>> mymodule._extension.function()
<module 'mymodule._extension' from '...'>
>>> id(mymodule._extension.function())
4480540328

If compiled and installed for Python 2.7 one gets:

>>> import mymodule._extension
>>> id(mymodule._extension)
4554745960
>>> mymodule._extension.function()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: no module supplied for self

The function in the extension module was doing:

static PyObject *extension_function(PyObject *self, PyObject *args)
{
if (!self) {
PyErr_SetString(PyExc_TypeError, "no module supplied for self");
return NULL;
}

Py_INCREF(self);
return self;
}

--
assignee: docs@python
components: Documentation
files: example.tar
messages: 198265
nosy: docs@python, grahamd
priority: normal
severity: normal
status: open
title: Documentation on what self is for module-level functions is 
misleading/wrong.
type: behavior
versions: Python 2.7
Added file: http://bugs.python.org/file31841/example.tar

___
Python tracker 
<http://bugs.python.org/issue19071>



[issue19072] classmethod doesn't honour descriptor protocol of wrapped callable

2013-09-22 Thread Graham Dumpleton

New submission from Graham Dumpleton:

The classmethod decorator when applied to a function of a class, does not 
honour the descriptor binding protocol for whatever it wraps. This means it 
will fail when applied around a function which has a decorator already applied 
to it and where that decorator expects that the descriptor binding protocol is 
executed in order to properly bind the function to the class.

A decorator may want to do this where it is implemented so as to be able to 
determine automatically the context it is used in. That is, one magic decorator 
that can work around functions, instance methods, class methods and classes, 
thereby avoiding the need to have multiple distinct decorator implementations 
for the different use case.

So in the following example code:

class BoundWrapper(object):
def __init__(self, wrapped):
self.__wrapped__ = wrapped
def __call__(self, *args, **kwargs):
print('BoundWrapper.__call__()', args, kwargs)
print('__wrapped__.__self__', self.__wrapped__.__self__)
return self.__wrapped__(*args, **kwargs)

class Wrapper(object):
def __init__(self, wrapped):
self.__wrapped__ = wrapped
def __get__(self, instance, owner):
bound_function = self.__wrapped__.__get__(instance, owner)
return BoundWrapper(bound_function)

def decorator(wrapped):
return Wrapper(wrapped)

class Class(object):
@decorator
def function_im(self):
print('Class.function_im()', self)

@decorator
@classmethod
def function_cm_inner(cls):
print('Class.function_cm_inner()', cls)

@classmethod
@decorator
def function_cm_outer(cls):
print('Class.function_cm_outer()', cls)

c = Class()

c.function_im()
print()
Class.function_cm_inner()
print()
Class.function_cm_outer()

A failure is encountered of:

$ python3.3 cmgettest.py
BoundWrapper.__call__() () {}
__wrapped__.__self__ <__main__.Class object at 0x1029fc150>
Class.function_im() <__main__.Class object at 0x1029fc150>

BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>

Traceback (most recent call last):
  File "cmgettest.py", line 40, in <module>
Class.function_cm_outer()
TypeError: 'Wrapper' object is not callable

IOW, everything is fine when the decorator is applied around the classmethod, 
but when it is placed inside of the classmethod, a failure occurs because the 
decorator object is not callable.

One could argue that the error is easily avoided by adding a __call__() method 
to the Wrapper class, but that defeats the purpose of what is trying to be 
achieved in using this pattern. That is that one can within the bound wrapper 
after binding occurs, determine from the __self__ of the bound function, the 
fact that it was a class method. This can be inferred from the fact that 
__self__ is a class type.
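The inference mentioned above can be seen directly with plain bound methods (a minimal illustration, not the wrapt code):

```python
# After binding, __self__ reveals the kind of method: for an instance
# method it is the instance, for a classmethod it is the class itself.
class C(object):
    def m(self):
        pass
    @classmethod
    def cm(cls):
        pass

instance_self = C().m.__self__   # the instance the method is bound to
class_self = C.cm.__self__       # the class itself for a classmethod
```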

If the classmethod decorator tp_descr_get implementation is changed so as to 
properly apply the descriptor binding protocol to the wrapped object, then what 
is being described is possible.

Having it honour the descriptor binding protocol also seems to make application 
of the Python object model more consistent.

A patch is attached which does exactly this.

The result for the above test after the patch is applied is:

BoundWrapper.__call__() () {}
__wrapped__.__self__ <__main__.Class object at 0x10ad237d0>
Class.function_im() <__main__.Class object at 0x10ad237d0>

BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>

BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_outer() <class '__main__.Class'>

That is, the decorator whether it is inside or outside now sees things in the 
same way.

If one also tests for calling of the classmethod via the instance:

print()
c.function_cm_inner()
print()
c.function_cm_outer()

Everything again also works out how want it:

BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>

BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_outer() <class '__main__.Class'>

FWIW, the shortcoming of classmethod not applying the descriptor binding 
protocol to the wrapped object, was found in writing a new object proxy and 
decorator library called 'wrapt'. This issue in the classmethod implementation 
is the one thing that has prevented wrapt having a system of writing decorators 
that can magically work out the context it is used in all the time. Would be 
nice to see it fixed. :-)

The wrapt library can be found at:

https://github.com/GrahamDumpleton/wrapt
http://wrapt.readthedocs.org

The limitation in the classmethod implementation is noted in the wrapt 
documentation at:

http://wrapt.readthedocs.org/en/v1.1.2/issues.html#classmethod-get

--
components: Interpreter Core
files: funcobject.c.diff
keywords: patch
messages: 198274
nosy: grahamd
priority: normal
severity: normal
status: open
title: classmethod doesn't honour descriptor protocol of wrapped callable

[issue19073] Inability to specific __qualname__ as a property on a class instance.

2013-09-22 Thread Graham Dumpleton

New submission from Graham Dumpleton:

Python 3 introduced __qualname__. This attribute exists on class types and also 
instances of certain class types, such as functions. For example:

def f(): pass

print(f.__name__)
print(f.__qualname__)

class Class: pass

print(Class.__name__)
print(Class.__qualname__)

yields:

f
f
Class
Class

An instance of a class however does not have __name__ or __qualname__ 
attributes. With:

c = Class()

print(c.__name__)
print(c.__qualname__)

yielding:

Traceback (most recent call last):
  File "qualnametest.py", line 13, in <module>
print(c.__name__)
AttributeError: 'Class' object has no attribute '__name__'

Traceback (most recent call last):
  File "qualnametest.py", line 14, in <module>
print(c.__qualname__)
AttributeError: 'Class' object has no attribute '__qualname__'

For a class, it is possible to override the __name__ attribute using a property.

class Class:
@property
def __name__(self):
return 'override'

c = Class()

print(c.__name__)

With the result being:

override

This is useful in writing object proxies or function wrappers for decorators as 
rather than having to copy the __name__ attribute into the wrapper, the lookup 
can be deferred until when it is required.

The same though cannot be done for __qualname__. With:

class Class:
@property
def __qualname__(self):
return 'override'

yielding an error when the class definition is being processed:

Traceback (most recent call last):
  File "qualnametest.py", line 16, in <module>
class Class:
TypeError: type __qualname__ must be a str, not property

This means the same trick cannot be used in object proxies and function 
wrappers and instead __qualname__ must be copied and assigned explicitly as a 
string attribute in the __init__() function of the object proxy or function 
wrapper.

I can sort of understand a prohibition on __qualname__ being a string attribute 
in certain cases, especially if overriding it on a type or instance where 
__qualname__ attribute already exists, but I don't understand why a limitation 
would be imposed to prevent using a property as a means of generating the value 
for a class instance which doesn't otherwise have a __qualname__ attribute. 
There is no similar restriction for __name__.

Unless there is a good specific reason for this behaviour, the ability to 
override it with a property in cases where the __qualname__ attribute didn't 
already exist, would be handy for proxies and wrappers.

--
components: Interpreter Core
messages: 198275
nosy: grahamd
priority: normal
severity: normal
status: open
title: Inability to specific __qualname__ as a property on a class instance.
type: behavior
versions: Python 3.3

___
Python tracker 
<http://bugs.python.org/issue19073>



[issue19072] classmethod doesn't honour descriptor protocol of wrapped callable

2013-09-29 Thread Graham Dumpleton

Graham Dumpleton added the comment:

The classmethod __get__() method does:

static PyObject *
cm_descr_get(PyObject *self, PyObject *obj, PyObject *type)
{
classmethod *cm = (classmethod *)self;

if (cm->cm_callable == NULL) {
PyErr_SetString(PyExc_RuntimeError,
"uninitialized classmethod object");
return NULL;
}
if (type == NULL)
type = (PyObject *)(Py_TYPE(obj));
return PyMethod_New(cm->cm_callable,
type, (PyObject *)(Py_TYPE(type)));
}

So it isn't intentionally calling __call__(). It is still doing binding, but 
doing it by calling PyMethod_New() rather than using __get__() on the wrapped 
function. Where it wraps a regular function the result is the same as if 
__get__() was called, as __get__() for a regular function internally calls 
PyMethod_New() in the same way.

static PyObject *
func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
{
if (obj == Py_None)
obj = NULL;
return PyMethod_New(func, obj, type);
}

By not using __get__(), you deny the ability to have chained decorators that 
want/need the knowledge of the fact that binding was being done. The result for 
stacking multiple decorators which use regular functions (closures) is exactly 
the same, but you open up other possibilities of smarter decorators.
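A rough pure-Python sketch of the behaviour being argued for, where a classmethod-like descriptor binds by delegating to the wrapped object's own __get__() rather than constructing the bound method directly; the ClassMethod name is illustrative, not the patched C code:

```python
# A classmethod variant that honours the wrapped object's descriptor
# protocol, so chained decorator descriptors would see the binding occur.
class ClassMethod(object):
    def __init__(self, wrapped):
        self.__wrapped__ = wrapped
    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)
        # Bind the wrapped callable to the class via its own __get__().
        return self.__wrapped__.__get__(cls, type(cls))

class Example(object):
    @ClassMethod
    def which(cls):
        return cls.__name__
```

For a plain function this produces exactly what PyMethod_New() would, but a decorator wrapped underneath would now have its __get__() invoked.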

--

___
Python tracker 
<http://bugs.python.org/issue19072>



[issue19072] classmethod doesn't honour descriptor protocol of wrapped callable

2013-09-30 Thread Graham Dumpleton

Graham Dumpleton added the comment:

If you have the time, would be great if you can have a quick look at my wrapt 
package. That will give you an idea of where I am coming from in suggesting 
this change.

http://wrapt.readthedocs.org/en/latest/
http://wrapt.readthedocs.org/en/latest/issues.html
http://wrapt.readthedocs.org/en/latest/decorators.html
http://wrapt.readthedocs.org/en/latest/examples.html

In short, aiming to be able to write decorators which are properly transparent 
and aware of the context they are used in, so we don't have this silly 
situation at the moment where it is necessary to write distinct decorators for 
regular functions and instance methods. A classmethod around another decorator 
was the one place things will not work as would like to see them work.

I even did a talk about writing better decorators at PyCon NZ. Slides with 
notes at:

http://lanyrd.com/2013/kiwipycon/scpkbk/

Thanks.

--

___
Python tracker 
<http://bugs.python.org/issue19072>



[issue19070] In place operators of weakref.proxy() not returning self.

2013-10-07 Thread Graham Dumpleton

Graham Dumpleton added the comment:

@shishkander I can't see how what you are talking about has got anything to do 
with the issue with in place operators. The results from your test script are 
expected and normal.

What result are you expecting?

The one thing you cannot override in Python is what type() returns for an 
object. Thus is it completely normal for the weakref.proxy object to have a 
different type that what it wraps.

This is one of the reasons why in Python you should rarely ever do a direct 
comparison of type objects. Instead you should use isinstance().

>>> import weakref
>>> class Test(object):
... pass
...
>>> test = Test()
>>> proxy = weakref.proxy(test)
>>> type(test)
<class '__main__.Test'>
>>> type(proxy)
<class 'weakproxy'>
>>> isinstance(test, Test)
True
>>> isinstance(proxy, Test)
True
>>> proxy.__class__
<class '__main__.Test'>

The isinstance() check will work because weakref.proxy proxies the __class__ 
attribute such that it returns the type of the wrapped object rather than that 
of the proxy.

Now if your problem is with methods of wrapped objects which return self not 
having that self object somehow automatically wrapped in another proxy, there 
isn't anything the proxy can do about that. That is a situation where you as 
a user need to be careful about what you are doing. A way one can handle that 
is through derivation off a proxy object and override specific methods where 
you then in turn need to wrap the result, but I can see that easily becoming 
fragile when weakrefs are involved. Also, the weakref proxy in Python doesn't 
expose a class for doing that anyway.

One important thing to note is that where self is returned by a normal method, 
it is still you who assigned the result to a variable and so triggered any 
possible problems. In the case of in place operators that assignment is done 
under the covers by Python and you have no control over it. This is why the 
current behaviour as originally described is arguably broken, as it breaks the 
expectations of what would logically happen for an in place operator when used 
via a proxy, something you have no control over.
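The "under the covers" rebinding is easy to see in isolation: p += 1 is roughly sugar for p = type(p).__iadd__(p, 1), so whatever the hook returns silently replaces the variable (a toy example, not from the report):

```python
# Whatever __iadd__ returns is what the augmented assignment rebinds the
# variable to -- the caller has no say in it.
class Recorder(object):
    def __iadd__(self, other):
        return ("replaced", other)

r = Recorder()
r += 1   # r is now the tuple returned by __iadd__, not the Recorder
```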

So can you go back and explain what your specific problem is that you believe 
is the same issue as this bug report is, because so far I can't see any 
similarity based on your example code.

--

___
Python tracker 
<http://bugs.python.org/issue19070>



[issue19070] In place operators of weakref.proxy() not returning self.

2013-10-07 Thread Graham Dumpleton

Graham Dumpleton added the comment:

The proxy is intended as a wrapper around an object, it is not intended to 
merge in some way with the wrapped object. The wrapped object shouldn't really 
ever be aware that it was being accessed via a proxy. Thus the expectation that 
the 'self' attribute of the methods of the wrapper object would actually be the 
proxy object is a strange one.

Can you explain why you need it to behave the way you are expecting? Also 
specifically indicate what requirement you have for needing the reference to 
the wrapped object to be a weakref?

Almost sounds a bit like you may be trying to use weakref.proxy in a way not 
intended.

It is technically possible to write an object proxy which would work how you 
are expecting, but the Python standard library doesn't provide an object proxy 
implementation to base such a thing on.

--

___
Python tracker 
<http://bugs.python.org/issue19070>



[issue19070] In place operators of weakref.proxy() not returning self.

2013-10-07 Thread Graham Dumpleton

Graham Dumpleton added the comment:

The __del__() method is generally something to be avoided. As this is a design 
issue with how you are doing things, I would suggest you move the discussion to:

https://groups.google.com/forum/#!forum/comp.lang.python

You will no doubt get many suggestions there.

--

___
Python tracker 
<http://bugs.python.org/issue19070>



[issue19072] classmethod doesn't honour descriptor protocol of wrapped callable

2013-10-29 Thread Graham Dumpleton

Graham Dumpleton added the comment:

I don't believe so.

--

___
Python tracker 
<http://bugs.python.org/issue19072>



[issue22213] pyvenv style virtual environments unusable in an embedded system

2014-08-17 Thread Graham Dumpleton

New submission from Graham Dumpleton:

In an embedded system, the 'python' executable is itself not run and the 
Python interpreter is initialised in process explicitly using Py_Initialize(). 
In order to find the location of the Python installation, an elaborate sequence 
of checks is run, as implemented in calculate_path() of Modules/getpath.c.

The primary mechanism is usually to search for a 'python' executable on PATH 
and use that as a starting point. From that it then back tracks up the file 
system from the bin directory to arrive at what would be the perceived 
equivalent of PYTHONHOME. The lib/pythonX.Y directory under that for the 
matching version X.Y of Python being initialised would then be used.

Problems can often occur with the way this search is done though.

For example, if someone is not using the system Python installation but has 
installed a different version of Python under /usr/local. At run time, the 
correct Python shared library would be getting loaded from /usr/local/lib, but 
because the 'python' executable is found from /usr/bin, it uses /usr as 
sys.prefix instead of /usr/local.

This can cause two distinct problems.

The first is that there is no Python installation at all under /usr 
corresponding to the Python version which was embedded, with the result of it 
not being able to import 'site' module and therefore failing.

The second is that there is a Python installation of the same major/minor but 
potentially a different patch revision, or compiled with different binary API 
flags or different Unicode character width. The Python interpreter in this case 
may well be able to start up, but the mismatch in the Python modules or 
extension modules and the core Python library that was actually linked can 
cause odd errors or crashes to occur.

Anyway, that is the background.

For an embedded system the way this problem was overcome was for it to use 
Py_SetPythonHome() to forcibly override what should be used for PYTHONHOME so 
that the correct installation was found and used at runtime.

Now this would work quite happily even for Python virtual environments 
constructed using 'virtualenv' allowing the embedded system to be run in that 
separate virtual environment distinct from the main Python installation it was 
created from.

Although this works for Python virtual environments created using 'virtualenv', 
it doesn't work if the virtual environment was created using pyvenv.

One can easily illustrate the problem without even using an embedded system.

$ which python3.4
/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4

$ pyvenv-3.4 py34-pyvenv

$ py34-pyvenv/bin/python
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.prefix
'/private/tmp/py34-pyvenv'
>>> sys.path
['', '/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip', 
'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4', 
'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin', 
'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload', 
'/private/tmp/py34-pyvenv/lib/python3.4/site-packages']

$ PYTHONHOME=/tmp/py34-pyvenv python3.4
Fatal Python error: Py_Initialize: unable to load the file system codec
ImportError: No module named 'encodings'
Abort trap: 6

The basic problem is that in a pyvenv virtual environment, there is no 
duplication of stuff in lib/pythonX.Y, with the only thing in there being the 
site-packages directory.

When you start up the 'python' executable direct from the pyvenv virtual 
environment, the startup sequence checks know this and consult the pyvenv.cfg 
to extract the:

home = /Library/Frameworks/Python.framework/Versions/3.4/bin

setting and from that derive where the actual run time files are.

When PYTHONHOME or Py_SetPythonHome() is used, then the getpath.c checks 
blindly believe that is the authoritative value:

 * Step 2. See if the $PYTHONHOME environment variable points to the
 * installed location of the Python libraries.  If $PYTHONHOME is set, then
 * it points to prefix and exec_prefix.  $PYTHONHOME can be a single
 * directory, which is used for both, or the prefix and exec_prefix
 * directories separated by a colon.

/* If PYTHONHOME is set, we believe it unconditionally */
if (home) {
    wchar_t *delim;
    wcsncpy(prefix, home, MAXPATHLEN);
    prefix[MAXPATHLEN] = L'\0';
    delim = wcschr(prefix, DELIM);
    if (delim)
        *delim = L'\0';
    joinpath(prefix, lib_python);
    joinpath(prefix, LANDMARK);
    return 1;
}
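To make the consequence concrete, here is what that unconditional branch effectively computes when PYTHONHOME points at a pyvenv environment (paths taken from the session above; this is just the path arithmetic, not the real C code):

```python
import os

home = "/tmp/py34-pyvenv"               # value of PYTHONHOME
lib_python = os.path.join("lib", "python3.4")
landmark = "os.py"                      # the file getpath.c looks for

probe = os.path.join(home, lib_python, landmark)
print(probe)  # /tmp/py34-pyvenv/lib/python3.4/os.py

# In a pyvenv environment lib/python3.4 holds only site-packages, so
# this landmark does not exist, the standard library is never located,
# and startup dies with "No module named 'encodings'".
```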
Because of this, when PYTHONHOME or Py_SetPythonHome() is used with a pyvenv 
virtual environment, the pyvenv.cfg file is never consulted, the actual run 
time files are never found, and initialisation fails just as in the 
PYTHONHOME example above.

[issue22213] pyvenv style virtual environments unusable in an embedded system

2014-08-23 Thread Graham Dumpleton

Graham Dumpleton added the comment:

It is actually very easy for me to work around and I released a new mod_wsgi 
version today which works.

When I get a Python home option, instead of calling Py_SetPythonHome() with it, 
I append '/bin/python' to it and call Py_SetProgramName() instead.
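In other words, the workaround amounts to the following path manipulation (a schematic only; the variable names are mine and the C API calls are shown in comments):

```python
import os

python_home = "/tmp/py34-pyvenv"   # value given for the Python home option

# Previously: Py_SetPythonHome(python_home) -- believed unconditionally,
# so the pyvenv.cfg indirection was never applied.
#
# Now: derive a program name inside the environment instead and hand
# that to Py_SetProgramName(), letting the normal startup checks find
# pyvenv.cfg next to it and follow its "home =" entry.
program_name = os.path.join(python_home, "bin", "python")
print(program_name)  # /tmp/py34-pyvenv/bin/python
```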

--

___
Python tracker 
<http://bugs.python.org/issue22213>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22213] Make pyvenv style virtual environments easier to configure when embedding Python

2014-08-23 Thread Graham Dumpleton

Graham Dumpleton added the comment:

I only make the change to Py_SetProgramName() on UNIX and not Windows. This 
is because back in mod_wsgi 1.0 I actually used to use Py_SetProgramName(), 
but it didn't seem to work in a sane way on Windows, so I changed to 
Py_SetPythonHome(), which worked on both Windows and UNIX. The latest 
versions of mod_wsgi haven't been updated yet to even build on Windows, so I 
am not worrying about Windows right now.

--




[issue22264] Add wsgiref.util.fix_decoding

2014-08-24 Thread Graham Dumpleton

Graham Dumpleton added the comment:

It is actually WSGI 1.0.1 and not 1.1. :-)

--
nosy: +grahamd




[issue22264] Add wsgiref.util.fix_decoding

2014-08-24 Thread Graham Dumpleton

Graham Dumpleton added the comment:

From memory, the term sometimes used on the WEB-SIG when this was discussed 
was 'transcode'.

I find the idea that it needs 'fixing' or is 'incorrect', as in 'fix the 
original incorrect decoding to latin-1', a bit misleading as well. Decoding 
to latin-1 was the only practical way of doing things that didn't cause a lot 
of other problems, and it was a deliberate decision. It wasn't a mistake.

--




[issue20138] wsgiref on Python 3.x incorrectly implements URL handling causing mangled Unicode

2014-01-06 Thread Graham Dumpleton

Changes by Graham Dumpleton :


--
nosy: +grahamd




[issue6393] OS X: python3 from python-3.1.dmg crashes at startup

2009-07-17 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

I see this problem on both MacOS X 10.5 and on Windows. This is when using 
Python embedded inside of Apache/mod_wsgi.

On MacOS X the error is:

Fatal Python error: Py_Initialize: can't initialize sys standard streams
ImportError: No module named encodings.utf_8

On Windows the error is:

Fatal Python error: Py_Initialize: can't initialize sys standard streams
LookupError: unknown encoding: cp0

The talk about the fix mentioned it only addressing MacOS X. What about the 
Windows case I am seeing? Will it help with that at all?

--
nosy: +grahamd




[issue6393] OS X: python3 from python-3.1.dmg crashes at startup

2009-07-17 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Hmmm, actually my MacOS X error is different, although the Windows one is the 
same, except that the encoding is listed and isn't empty.

--




[issue6393] OS X: python3 from python-3.1.dmg crashes at startup

2009-07-17 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

You can ignore my MacOS X example as that was caused by something else.

My question still stands as to whether the fix will address the similar 
problem I saw on Windows.

--




[issue6501] Fatal LookupError: unknown encoding: cp0 on Windows embedded startup.

2009-07-17 Thread Graham Dumpleton

New submission from Graham Dumpleton :

When using Python 3.1 for Apache/mod_wsgi (3.0c4) on Windows, Apache will 
crash on startup because Python is forcing the process to exit with:

Fatal Python error: Py_Initialize: can't initialize sys standard streams
LookupError: unknown encoding: cp0

I first mentioned this on issue6393, but have now created it as a separate 
issue as it appears to be distinct from the issue on MacOS X, although 
possibly related.

In the Windows case there is actually an encoding, 'cp0', whereas on MacOS X 
the encoding name was empty.

The same mod_wsgi code works fine under Python 3.1 on MacOS X.

--
components: Interpreter Core, Windows
messages: 90616
nosy: grahamd
severity: normal
status: open
title: Fatal LookupError: unknown encoding: cp0 on Windows embedded startup.
type: crash
versions: Python 3.1




[issue6393] OS X: python3 from python-3.1.dmg crashes at startup

2009-07-17 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

I have created issue6501 for my Windows variant of this problem given that 
it appears to be subtly different, due to there being an encoding whereas 
the MacOS X variant doesn't have one.

Seeing that the fix for the MacOS X issue is in Python code, I will, when I 
have a chance, look at whether I can work out any fix for the Windows 
variant. I am not sure I have the right tools to compile Python from C code 
on Windows, so if it is a C code problem, I'm not sure I can really 
investigate.

--




[issue6501] Fatal LookupError: unknown encoding: cp0 on Windows embedded startup.

2009-07-17 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Yes, Apache remaps stdout and stderr to the Apache error log to still 
capture anything that errant modules don't log via the Apache error log 
functions. In mod_wsgi it replaces sys.stdout and sys.stderr with special 
file-like objects that redirect via the Apache error logging functions. This 
though obviously happens after Python first initialises sys.stdout and 
sys.stderr.
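The kind of file-like replacement object described above can be sketched as follows. This is a generic stand-in, not mod_wsgi's code: the class name is made up, and a plain Python callable stands in for Apache's error-log function:

```python
class ErrorLogStream:
    """Minimal file-like object that forwards complete lines to a
    host logging callable, the way mod_wsgi redirects sys.stdout
    and sys.stderr into the Apache error log."""

    def __init__(self, log):
        self.log = log
        self._buffer = ""

    def write(self, text):
        self._buffer += text
        # Emit one log entry per completed line.
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            self.log(line)

    def flush(self):
        if self._buffer:
            self.log(self._buffer)
            self._buffer = ""
```

An embedding application would then assign sys.stdout and sys.stderr to such objects, which, as noted, can only happen after Python has already initialised its own standard streams.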

What would be an appropriate value to set PYTHONIOENCODING to on Windows 
as a workaround?

--




[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2009-07-19 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

I know this issue is closed, but for this patch, the code:

+modstate = get_atexitmodule_state(module);
+
+if (modstate->ncallbacks == 0)
+return;

was added.

Is there any condition under which modstate could be NULL?

Haven't touched Python 3.0 support in mod_wsgi for a long time and when 
revisiting code with final Python 3.0, I find that I get Py_Finalize() 
crashing on process shutdown. It is crashing because modstate above is 
NULL.

--




[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2009-07-20 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

This new problem I am seeing looks like it may be linked to where the 
'atexit' module is initialised/imported in a sub interpreter but never 
in the main interpreter. I can avoid the crash by having:

PyImport_ImportModule("atexit");

Py_Finalize();

At a guess, this is because:

module = PyState_FindModule(&atexitmodule);
if (module == NULL)
    return;

still returns a module for the case where it was imported in a sub 
interpreter but not in the main interpreter, but then:

modstate = GET_ATEXIT_STATE(module);

if (modstate->ncallbacks == 0)
    return;

crashes because GET_ATEXIT_STATE() returns NULL for modstate in the main 
interpreter, as PyInit_atexit() had never been called for the main 
interpreter.

The fix would appear to be to check modstate for being NULL and return. 
Ie.,

module = PyState_FindModule(&atexitmodule);
if (module == NULL)
    return;
modstate = GET_ATEXIT_STATE(module);

if (modstate == NULL)
    return;

if (modstate->ncallbacks == 0)
    return;

Does that make sense to anyone? If it does and I am correct, I'll create 
a new issue for it, as the original fix seems deficient.

--




[issue6531] atexit_callfuncs() crashing within Py_Finalize() when using multiple interpreters.

2009-07-21 Thread Graham Dumpleton

New submission from Graham Dumpleton :

I am seeing a crash within Py_Finalize() with Python 3.0 in mod_wsgi. It 
looks like the patches for issue4200 were not adequate and that this wasn't 
picked up at the time.

This new problem looks like it may be linked to the 'atexit' module being 
initialised/imported in a sub interpreter but never imported in the main 
interpreter. I can avoid the crash by having:

PyImport_ImportModule("atexit");

Py_Finalize();

At a guess, the problem is because in atexit_callfuncs():

module = PyState_FindModule(&atexitmodule);
if (module == NULL)
    return;

still returns a module for the case where it was imported in a sub 
interpreter but not in the main interpreter, so it doesn't return, but then 
the code which follows:

modstate = GET_ATEXIT_STATE(module);

if (modstate->ncallbacks == 0)
    return;

crashes because GET_ATEXIT_STATE() returns NULL for modstate in the main 
interpreter, as PyInit_atexit() had never been called there because the 
'atexit' module was never imported within that interpreter.

The fix would appear to be to check modstate for being NULL and return. Ie.,

module = PyState_FindModule(&atexitmodule);
if (module == NULL)
    return;
modstate = GET_ATEXIT_STATE(module);

if (modstate == NULL)
    return;

if (modstate->ncallbacks == 0)
    return;

The only thing I am uncertain about is why PyState_FindModule() would return 
an object at all. I can't find any documentation for that function, so I am 
not entirely sure what it is meant to do. I would have thought it would 
return data specific to the interpreter, but if the module was never 
imported in that interpreter, why would there still be an object recorded?

BTW, I have marked this as for Python 3.1 as well, but haven't tested it 
there. The code in the 'atexit' module doesn't appear to have changed 
though, so I am assuming it will die there as well.

For now am using the workaround in mod_wsgi.

--
components: Interpreter Core
messages: 90753
nosy: grahamd
severity: normal
status: open
title: atexit_callfuncs() crashing within Py_Finalize() when using multiple 
interpreters.
type: crash
versions: Python 3.0, Python 3.1




[issue4200] atexit module not safe in Python 3.0 with multiple interpreters

2009-07-21 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

Have created issue6531 for my new issue related to this patch.

--




[issue6531] atexit_callfuncs() crashing within Py_Finalize() when using multiple interpreters.

2009-07-21 Thread Graham Dumpleton

Graham Dumpleton  added the comment:

As a provider of software that others use I am just making mod_wsgi usable 
with everything so users can use whatever they want. You telling me to use 
Python 3.1 isn't going to stop people from using Python 3.0 if that is 
what they happen to have installed. Just look at how many people still use 
really old Python 2.X versions. Ultimately I don't care which Python 
version it is fixed in, as I have the workaround anyway.

--



