[ python-Bugs-1646838 ] os.path, %HOME% set: realpath contradicts expanduser on '~'

2007-01-29 Thread SourceForge.net
Bugs item #1646838, was opened at 2007-01-29 09:07
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646838&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: wrstl prmpft (wrstlprmpft)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.path, %HOME% set: realpath contradicts expanduser on '~'

Initial Comment:
This might be intentional, but it is still confusing.

On Windows XP (German)::

  Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)]
  ...
  In [1]: import os.path as path
  
  In [2]: import os; os.environ['HOME']
  
  Out[2]: 'D:\\HOME'
  
  In [3]: path.realpath('~')
  
  Out[3]: 'C:\\Dokumente und Einstellungen\\wrstl\\~'
  
  In [4]: path.expanduser('~')
  
  Out[4]: 'D:\\HOME'


The cause:
realpath uses path._getfullpathname, which seems to do the '~' expansion itself, while 
path.expanduser has special code that looks for the HOME* environment variables.

I would expect the HOME setting to always be honored whenever expansion is done.

cheers,
stefan
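
A caller-side sketch (not a library fix; it assumes the environment from the session above, and real_user_path is just an illustrative name) that sidesteps the inconsistency by letting expanduser, which honours HOME, run before realpath::

    import os.path

    def real_user_path(p):
        # Expand '~' via expanduser (which respects HOME) before resolving,
        # so _getfullpathname never sees a literal '~'.
        return os.path.realpath(os.path.expanduser(p))

    print real_user_path('~')   # with HOME=D:\HOME this gives 'D:\\HOME',
                                # not '...\\wrstl\\~'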


--




[ python-Bugs-1643738 ] Problem with signals in a single-threaded application

2007-01-29 Thread SourceForge.net
Bugs item #1643738, was opened at 2007-01-24 19:14
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Ulisses Furquim (ulissesf)
Assigned to: Nobody/Anonymous (nobody)
Summary: Problem with signals in a single-threaded application

Initial Comment:
I'm aware of the problems with signals in a multithreaded application, but I 
was using signals in a single-threaded application and noticed something that 
seemed wrong. Some signals were apparently being lost, but when another signal came in, the Python handler for that "lost" signal was called.

The problem seems to be inside the signal module. The global variable 
is_tripped is incremented every time a signal arrives. Then, inside 
PyErr_CheckSignals() (the pending call that calls all Python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is non-zero, we loop through all signals, calling the registered Python handlers, and after that we zero is_tripped. This seems to be OK, but what happens if a signal arrives after we've returned from its handler 
(or even after we've checked if that signal arrived) and before we zero 
is_tripped? I guess we can have a situation where is_tripped is zero but some 
Handlers[i].tripped are not. In fact, I've inserted some debugging output and 
could see that this actually happens and then I've written the attached test 
program to reproduce the problem.

When we run this program, the handler for SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' approximately every 3 seconds, and the SIGALRM handler is called only when another 
signal arrives (like when we hit Ctrl-C).
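
A minimal sketch of a program along these lines (not the attached test program; it assumes a POSIX platform such as Linux where signal.SIGIO exists and has a higher signal number than SIGALRM, so the SIGALRM slot has already been scanned when it trips):

    import os
    import signal
    import time

    def on_alarm(signum, frame):
        print 'SIGALRM handler called'

    def on_io(signum, frame):
        print 'SIGIO handler called; scheduling SIGALRM...'
        signal.alarm(1)
        end = time.time() + 2
        while time.time() < end:   # busy-wait so SIGALRM arrives while this
            pass                   # Python handler is still running

    signal.signal(signal.SIGALRM, on_alarm)
    signal.signal(signal.SIGIO, on_io)

    os.kill(os.getpid(), signal.SIGIO)   # trip the SIGIO handler

    while True:
        print 'Loop!'     # on an affected interpreter the SIGALRM handler
        time.sleep(3)     # only runs once another signal (e.g. Ctrl-C) arrives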


--

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-29 09:13

Message:
Logged In: YES 
user_id=21627
Originator: NO

What I dislike about #1564547 is the introduction of the pipe. I don't think this is an appropriate change, and it is unnecessary for fixing the problems discussed here. So if one of the patches is dropped, I'd rather drop
#1564547.

Also, I don't think it is necessary to set .tripped after
Py_AddPendingCall. If there is a CheckSignals invocation already going on,
it will invoke the handler just fine. What *is* necessary (IMO) is to set
is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals
call might clear is_tripped before .tripped gets set, and thus not invoke
the signal handler. The subsequent CheckSignals would quit early because
is_tripped is not set.

So I think "a" right sequence is

  Handlers[SIGINT].tripped = 1;
  is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets
cleared before .tripped. */
  Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);



--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-28 13:02

Message:
Logged In: YES 
user_id=12364
Originator: NO

Augh, bloody firefox messed up my focus.

Your PyErr_SetInterrupt needs to set the flags after, like so:

Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);
Handlers[SIGINT].tripped = 1;
is_tripped = 1;

The reason is that if the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals, the main thread may notice the flags, clear them, find nothing, then exit.  You need the signal handler to supply all the data before setting the flags.

Really though, if you fix enough signal problems you'll converge with the
patch at
http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470
No need for two patches that do the same thing.

--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-28 12:57

Message:
Logged In: YES 
user_id=12364
Originator: NO

Your PyErr_SetInterrupt needs to set is_tripped twice, like so:

is_tripped = 1;
Handlers[SIGINT].tripped = 1;
Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);

is_tripped = 1;

The reason is that the signal handler runs in a thread while the main thread goes through check

--

Comment By: Ulisses Furquim (ulissesf)
Date: 2007-01-24 22:09

Message:
Logged In: YES 
user_id=1578960
Originator: YES

Yep, you're right, Tony Nelson. We overlooked this case but we can zero
is_tripped after the test for threading as you've already said. The patch
was updated and it also includes the code comment Tim Peters su

[ python-Bugs-1647037 ] cookielib.CookieJar does not handle cookies when port in url

2007-01-29 Thread SourceForge.net
Bugs item #1647037, was opened at 2007-01-29 12:31
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647037&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: STS (tools-sts)
Assigned to: Nobody/Anonymous (nobody)
Summary: cookielib.CookieJar does not handle cookies when port in url

Initial Comment:
In Python 2.5 the cookielib.CookieJar does not handle cookies (i.e., recognise 
the Set-Cookie: header) when the port is specified in the URL.

e.g., 
import urllib2, cookielib
cookiejar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
# add proxy to view results
proxy_handler = urllib2.ProxyHandler({'http':'127.0.0.1:8080'})
opener.add_handler(proxy_handler)
# Install opener globally so it can be used with urllib2.
urllib2.install_opener(opener)
# The ':80' will cause the CookieJar to never handle the 
# cookie set by Google
request = urllib2.Request('http://www.google.com.au:80/')
response = opener.open(request)
response = opener.open(request) # No Cookie:
# But this works
request = urllib2.Request('http://www.google.com.au/')
response = opener.open(request)
response = opener.open(request)# Cookie: PREF=ID=d2de0..
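
A possible caller-side workaround (an assumption, not an official fix), continuing with the opener built above: drop the redundant default port before building the Request, so the cookie domain matching sees the bare host name.

    import urlparse

    def strip_default_port(url):
        # Remove an explicit ':80' so the CookieJar sees the same host
        # name as the Set-Cookie domain.
        scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
        if netloc.endswith(':80'):
            netloc = netloc[:-3]
        return urlparse.urlunsplit((scheme, netloc, path, query, fragment))

    request = urllib2.Request(strip_default_port('http://www.google.com.au:80/'))
    response = opener.open(request)
    response = opener.open(request)  # Cookie: header is sent as expected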

--




[ python-Bugs-1227748 ] subprocess: inheritance of std descriptors inconsistent

2007-01-29 Thread SourceForge.net
Bugs item #1227748, was opened at 2005-06-26 15:37
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227748&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: André Malo (ndparker)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess: inheritance of std descriptors inconsistent

Initial Comment:
The inheritance of std descriptors is inconsistent
between Unix and Windows implementations.

If one calls Popen with stdin = stdout = stderr = None,
the caller's std descriptors are inherited on *x, but
not on Windows, because of the following optimization
(from subprocess.py r1.20):

   655      def _get_handles(self, stdin, stdout, stderr):
   656          """Construct and return tupel with IO objects:
   657          p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
   658          """
   659          if stdin is None and stdout is None and stderr is None:
   660              return (None, None, None, None, None, None)
   661

I suggest simply removing lines 659 and 660. The current workaround is for the application to duplicate the handles and supply its own STARTUPINFO structure.
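
On Windows that workaround might look roughly like this (a sketch only, untested; attribute and constant names follow subprocess.py's Windows implementation, and the handles may additionally need to be duplicated as inheritable):

    import sys
    import msvcrt
    import subprocess

    # Hand the parent's std handles to the child explicitly instead of
    # relying on the stdin=stdout=stderr=None shortcut quoted above.
    startupinfo = subprocess.STARTUPINFO()
    startupinfo.dwFlags |= subprocess.STARTF_USESTDHANDLES
    startupinfo.hStdInput = msvcrt.get_osfhandle(sys.stdin.fileno())
    startupinfo.hStdOutput = msvcrt.get_osfhandle(sys.stdout.fileno())
    startupinfo.hStdError = msvcrt.get_osfhandle(sys.stderr.fileno())

    p = subprocess.Popen('cmd /c echo inherited', startupinfo=startupinfo)
    p.wait()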

--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-29 21:54

Message:
Logged In: YES 
user_id=344921
Originator: NO

>If one calls Popen with stdin = stdout = stderr = None,
>the caller's std descriptors are inherited on *x, but
>not on Windows, 

This is a correct observation. However, the current implementation is not
necessarily wrong. This could instead be seen as a consequence of the
different environments. The subprocess documentation states that "With
None, no redirection will occur". So, it becomes an interpretation of what
this really mean. Since the "default" behaviour on UNIX is to inherit and
the default behaviour on Windows is to attach the standard handles to (an
often newly created) console window, one could argue that this fits fairly
good with the description "no redirection will occur". 

If we were to change this so that the parent's handles are always inherited, then how would you specify that you want to attach the standard handles to the new console window? 

For best flexibility, the API should allow both cases: inheriting all handles from the parent as well as attaching all standard handles to the new console window. As you point out, the current API allows this. So why change it?

One thing that's clearly a bug is the second part of the documentation:

"With None, no redirection will occur; the child's file handles will be
inherited from the
parent"

This is currently only true on UNIX. If we keep the current behaviour, at least the documentation needs to be fixed. 


--




[ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI

2007-01-29 Thread SourceForge.net
Bugs item #1124861, was opened at 2005-02-17 17:23
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470

Category: Windows
Group: Python 2.4
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: davids (davidschein)
Assigned to: Nobody/Anonymous (nobody)
Summary: subprocess fails on GetStdHandle in interactive GUI

Initial Comment:
Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error.  (All appears fine on Linux, the standard Python command line, and IPython.)

For example:
>>> import subprocess
>>> p = subprocess.Popen("dir", stdout=subprocess.PIPE)

Traceback (most recent call last):
  File "", line 1, in -toplevel-
p = subprocess.Popen("dir", stdout=subprocess.PIPE)
  File "C:\Python24\lib\subprocess.py", line 545, in
__init__
(p2cread, p2cwrite,
  File "C:\Python24\lib\subprocess.py", line 605, in
_get_handles
p2cread = self._make_inheritable(p2cread)
  File "C:\Python24\lib\subprocess.py", line 646, in
_make_inheritable
DUPLICATE_SAME_ACCESS)
TypeError: an integer is required

The error originates in the mswindows implementation of _get_handles.  You need to set one of stdin, stdout, or stderr because the first line in the method is:

if stdin == None and stdout == None and stderr == None:
    return (None, None, None, None, None, None)

I added "if not handle: return GetCurrentProcess()" to
_make_inheritable() as below and it worked.  Of course,
I really do not know what is going on, so I am letting
go now...

def _make_inheritable(self, handle):
    """Return a duplicate of handle, which is inheritable"""
    if not handle: return GetCurrentProcess()
    return DuplicateHandle(GetCurrentProcess(), handle, GetCurrentProcess(),
                           0, 1, DUPLICATE_SAME_ACCESS)


--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-29 22:42

Message:
Logged In: YES 
user_id=344921
Originator: NO

Some ideas of possible solutions for this bug:

1) As Roger Upole suggests, throw a readable error when GetStdHandle fails. This would not really change much, besides making subprocess a little less confusing. 

2) Automatically create PIPEs for those handles that fail. The PIPE could either be left open or closed. A WriteFile in the child would get ERROR_BROKEN_PIPE if the parent has closed it. Not as good as
ERROR_INVALID_HANDLE, but pretty close. (Or should I say pretty closed?
:-)

3) Try to attach the handles to a NUL device, as 1238747 suggests. 

4) Hope for the best and actually pass invalid handles in
startupinfo.hStdInput, startupinfo.hStdOutput, or
startupinfo.hStdError. It would be nice if this was possible: If
GetStdHandle fails in the current process, it makes sense that GetStdHandle
will fail in the child as well. But, as far as I understand, it's not
possible or safe to pass invalid handles in the startupinfo structure. 

Currently, I'm leaning towards solution 2), closing the parent's PIPE ends. 
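
Option 2) might look roughly like the following inside subprocess.py's Windows _get_handles() (a sketch only, not an actual patch; the helper name is made up, and only the stdin case is shown):

    from _subprocess import GetStdHandle, CreatePipe, STD_INPUT_HANDLE

    def _stdin_handle_or_pipe():
        # If there is no usable standard input handle (GUI apps), give the
        # child the read end of a fresh pipe and close the parent's write
        # end, so the child sees EOF/ERROR_BROKEN_PIPE instead of subprocess
        # raising a TypeError.
        handle = GetStdHandle(STD_INPUT_HANDLE)
        if handle is None:
            read_end, write_end = CreatePipe(None, 0)
            write_end.Close()
            handle = read_end
        return handle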

--

Comment By: Peter Åstrand (astrand)
Date: 2007-01-22 20:36

Message:
Logged In: YES 
user_id=344921
Originator: NO

The following bugs have been marked as duplicate of this bug:

1358527
1603907
1126208
1238747



--

Comment By: craig (codecraig)
Date: 2006-10-13 17:54

Message:
Logged In: YES 
user_id=1258995

On Windows, this seems to work:

from subprocess import *
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)

In some cases (depending on what command you are executing), a command prompt window may appear.  To avoid showing a window, use this:

import win32con
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE,
creationflags=win32con.CREATE_NO_WINDOW)

...google for Microsoft Process Creation Flags for more info

--

Comment By: Steven Bethard (bediviere)
Date: 2005-09-26 16:53

Message:
Logged In: YES 
user_id=945502

This issue was discussed on comp.lang.python[1] and Roger
Upole suggested:

"""
Basically, gui apps like VS don't have a console, so
GetStdHandle returns 0.   _subprocess.GetStdHandle
returns None if the handle is 0, which gives the original
error.  Pywin32 just returns the 0, so the process gets
one step further but still hits the above error.

Subprocess.py should probably check the
result of GetStdHandle for None (or 0)
and throw a readable error that says something like
"No standard handle available, you must specify one"
"""

[1]http://mail.python.

[ python-Bugs-1643738 ] Problem with signals in a single-threaded application

2007-01-29 Thread SourceForge.net
Bugs item #1643738, was opened at 2007-01-24 11:14
Message generated for change (Comment added) made by rhamphoryncus
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Ulisses Furquim (ulissesf)
Assigned to: Nobody/Anonymous (nobody)
Summary: Problem with signals in a single-threaded application

Initial Comment:
I'm aware of the problems with signals in a multithreaded application, but I 
was using signals in a single-threaded application and noticed something that 
seemed wrong. Some signals were apparently being lost, but when another signal came in, the Python handler for that "lost" signal was called.

The problem seems to be inside the signal module. The global variable 
is_tripped is incremented every time a signal arrives. Then, inside 
PyErr_CheckSignals() (the pending call that calls all Python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is non-zero, we loop through all signals, calling the registered Python handlers, and after that we zero is_tripped. This seems to be OK, but what happens if a signal arrives after we've returned from its handler 
(or even after we've checked if that signal arrived) and before we zero 
is_tripped? I guess we can have a situation where is_tripped is zero but some 
Handlers[i].tripped are not. In fact, I've inserted some debugging output and 
could see that this actually happens and then I've written the attached test 
program to reproduce the problem.

When we run this program, the handler for SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' approximately every 3 seconds, and the SIGALRM handler is called only when another 
signal arrives (like when we hit Ctrl-C).


--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-29 14:45

Message:
Logged In: YES 
user_id=12364
Originator: NO

To my knowledge, a pipe is the *only* way to reliably wake up the main
thread from a signal handler in another thread.  It's not necessary here
simply because this bug only names a subset of the signal problems,
whereas #1564547 attempts to fix all of them.  Dropping it would be silly
unless it were officially declared that the signal module and the
threading module were incompatible.

You're right about the .tripped/Py_AddPendingCall order.  I got myself
confused as to what Py_AddPendingCall did.

--

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-29 01:13

Message:
Logged In: YES 
user_id=21627
Originator: NO

What I dislike about #1564547 is the introduction of the pipe. I don't think this is an appropriate change, and it is unnecessary for fixing the problems discussed here. So if one of the patches is dropped, I'd rather drop
#1564547.

Also, I don't think it is necessary to set .tripped after
Py_AddPendingCall. If there is a CheckSignals invocation already going on,
it will invoke the handler just fine. What *is* necessary (IMO) is to set
is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals
call might clear is_tripped before .tripped gets set, and thus not invoke
the signal handler. The subsequent CheckSignals would quit early because
is_tripped is not set.

So I think "a" right sequence is

  Handlers[SIGINT].tripped = 1;
  is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets
cleared before .tripped. */
  Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);



--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-28 05:02

Message:
Logged In: YES 
user_id=12364
Originator: NO

Augh, bloody firefox messed up my focus.

Your PyErr_SetInterrupt needs to set the flags after, like so:

Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);
Handlers[SIGINT].tripped = 1;
is_tripped = 1;

The reason is that if the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals, the main thread may notice the flags, clear them, find nothing, then exit.  You need the signal handler to supply all the data before setting the flags.

Really though, if you fix enough signal problems you'll converge with the
patch at
http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470
No need for two patches that do the same thing.

--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-28 04:57

Message:
Logged In: YES 
user_id=12364
Originator: NO

Your P

[ python-Bugs-1643738 ] Problem with signals in a single-threaded application

2007-01-29 Thread SourceForge.net
Bugs item #1643738, was opened at 2007-01-24 19:14
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Ulisses Furquim (ulissesf)
Assigned to: Nobody/Anonymous (nobody)
Summary: Problem with signals in a single-threaded application

Initial Comment:
I'm aware of the problems with signals in a multithreaded application, but I 
was using signals in a single-threaded application and noticed something that 
seemed wrong. Some signals were apparently being lost, but when another signal came in, the Python handler for that "lost" signal was called.

The problem seems to be inside the signal module. The global variable 
is_tripped is incremented every time a signal arrives. Then, inside 
PyErr_CheckSignals() (the pending call that calls all Python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is non-zero, we loop through all signals, calling the registered Python handlers, and after that we zero is_tripped. This seems to be OK, but what happens if a signal arrives after we've returned from its handler 
(or even after we've checked if that signal arrived) and before we zero 
is_tripped? I guess we can have a situation where is_tripped is zero but some 
Handlers[i].tripped are not. In fact, I've inserted some debugging output and 
could see that this actually happens and then I've written the attached test 
program to reproduce the problem.

When we run this program, the handler for SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' approximately every 3 seconds, and the SIGALRM handler is called only when another 
signal arrives (like when we hit Ctrl-C).


--

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-29 23:04

Message:
Logged In: YES 
user_id=21627
Originator: NO

rhamphoryncus, see the discussion on #1564547 about that patch. I believe
there are better ways to address the issues it raises, in particular by
means of pthread_kill. It's certainly more reliable than a pipe (which
wakes up the main thread only if it was polling the pipe).

--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-29 22:45

Message:
Logged In: YES 
user_id=12364
Originator: NO

To my knowledge, a pipe is the *only* way to reliably wake up the main
thread from a signal handler in another thread.  It's not necessary here
simply because this bug only names a subset of the signal problems, whereas
#1564547 attempts to fix all of them.  Dropping it would be silly unless it
were officially declared that the signal module and the threading module
were incompatible.

You're right about the .tripped/Py_AddPendingCall order.  I got myself
confused as to what Py_AddPendingCall did.

--

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-29 09:13

Message:
Logged In: YES 
user_id=21627
Originator: NO

What I dislike about #1564547 is the introduction of the pipe. I don't think this is an appropriate change, and it is unnecessary for fixing the problems discussed here. So if one of the patches is dropped, I'd rather drop
#1564547.

Also, I don't think it is necessary to set .tripped after
Py_AddPendingCall. If there is a CheckSignals invocation already going on,
it will invoke the handler just fine. What *is* necessary (IMO) is to set
is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals
call might clear is_tripped before .tripped gets set, and thus not invoke
the signal handler. The subsequent CheckSignals would quit early because
is_tripped is not set.

So I think "a" right sequence is

  Handlers[SIGINT].tripped = 1;
  is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets
cleared before .tripped. */
  Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);



--

Comment By: Adam Olsen (rhamphoryncus)
Date: 2007-01-28 13:02

Message:
Logged In: YES 
user_id=12364
Originator: NO

Augh, bloody firefox messed up my focus.

Your PyErr_SetInterrupt needs to set the flags after, like so:

Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL);
Handlers[SIGINT].tripped = 1;
is_tripped = 1;

The reason is that if the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals, the main thread may notice the flags, clear them, find nothing, then exit.  You need the signal handler to supply all the data before setting the flags.

[ python-Bugs-1647489 ] zero-length match confuses re.finditer()

2007-01-29 Thread SourceForge.net
Bugs item #1647489, was opened at 2007-01-29 14:35
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647489&group_id=5470

Category: Regular Expressions
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Jacques Frechet (jfrechet)
Assigned to: Gustavo Niemeyer (niemeyer)
Summary: zero-length match confuses re.finditer()

Initial Comment:
Hi!

re.finditer() seems to incorrectly increment the current position immediately 
after matching a zero-length substring.  For example:

>>> [m.groups() for m in re.finditer(r'(^z*)|(\w+)', 'abc')]
[('', None), (None, 'bc')]

What happened to the 'a'?  I expected this result:

[('', None), (None, 'abc')]

Perl agrees with me:

% perl -le 'print defined($1)?"\"$1\"":"undef",",",defined($2)?"\"$2\"":"undef" 
while "abc" =~ /(z*)|(\w+)/g' 
"",undef
undef,"abc"
"",undef

Similarly, if I remove the ^:

>>> [m.groups() for m in re.finditer(r'(z*)|(\w+)', 'abc')]
[('', None), ('', None), ('', None), ('', None)]

Now all of the letters have fallen through the cracks!  I expected this result:

[('', None), (None, 'abc'), ('', None)]

Again, perl agrees:

% perl -le 'print defined($1)?"\"$1\"":"undef",",",defined($2)?"\"$2\"":"undef" 
while "abc" =~ /(z*)|(\w+)/g' 
"",undef
undef,"abc"
"",undef

If this bug has already been reported, I apologize -- I wasn't able to find it 
here.  I haven't looked at the code for the re module, but this seems like the 
sort of bug that might have been accidentally introduced in order to try to 
prevent the same zero-length match from being returned forever.

Thanks,
Jacques
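
One workaround that may help in simple cases (a suggestion, not something from the report): order the alternation so the branch that can consume characters is tried first, which keeps a zero-length match from swallowing a position:

    import re

    print [m.groups() for m in re.finditer(r'(\w+)|(z*)', 'abc')]
    # here this gives [('abc', None), (None, '')]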

--




[ python-Bugs-1647541 ] SystemError with re.match(array)

2007-01-29 Thread SourceForge.net
Bugs item #1647541, was opened at 2007-01-30 00:04
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470

Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 4
Private: No
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: SystemError with re.match(array)

Initial Comment:
A small issue which I guess is to be found in
the implementation of the buffer interface
for zero-length arrays:

>>> a = array.array("c")
>>> r = re.compile("bla")
>>> r.match(a)
SystemError: error return without exception set
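
Until that is fixed, a trivial caller-side workaround (a suggestion, not part of the report) is to convert the array to a plain string before matching:

    import array
    import re

    a = array.array("c")
    r = re.compile("bla")
    print r.match(a.tostring())   # prints None instead of raising SystemError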

--




[ python-Bugs-1647541 ] SystemError with re.match(array)

2007-01-29 Thread SourceForge.net
Bugs item #1647541, was opened at 2007-01-29 16:04
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470

Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 4
Private: No
Submitted By: Armin Rigo (arigo)
>Assigned to: Armin Rigo (arigo)
Summary: SystemError with re.match(array)

Initial Comment:
A small issue which I guess is to be found in
the implementation of the buffer interface
for zero-length arrays:

>>> a = array.array("c")
>>> r = re.compile("bla")
>>> r.match(a)
SystemError: error return without exception set

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2007-01-29 21:21

Message:
Logged In: YES 
user_id=33168
Originator: NO

Armin, what do you think of the attached patch?
File Added: empty-array.diff

--




[ python-Bugs-1647654 ] No obvious and correct way to get the time zone offset

2007-01-29 Thread SourceForge.net
Bugs item #1647654, was opened at 2007-01-30 13:48
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647654&group_id=5470

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: James Henstridge (jhenstridge)
Assigned to: Nobody/Anonymous (nobody)
Summary: No obvious and correct way to get the time zone offset

Initial Comment:
It would be nice if the Python time module provided an obvious way to get the 
local time UTC offset for an arbitrary time stamp.  The existing constants 
included in the module are not sufficient to correctly determine this value.

As context: in the Bazaar version control system (written in Python), the local time's UTC offset is recorded in each commit.

The method used in releases prior to 0.14 made use of the "daylight", 
"timezone" and "altzone" constants from the time module like this:

if time.localtime(t).tm_isdst and time.daylight:
    return -time.altzone
else:
    return -time.timezone

This worked most of the time, but would occasionally give incorrect results.

On Linux, the local time system can handle different daylight saving rules for 
different spans of years.  For years where the rules change, these constants 
can provide incorrect data.  Furthermore, they may be incorrect for time stamps 
in the past.

I personally ran into this problem last December when Western Australia adopted 
daylight saving -- time.altzone gave an incorrect value until the start of 2007.

Having a function in the standard library to calculate this offset would solve 
the problem.  The implementation we ended up with for Bazaar was:

offset = datetime.fromtimestamp(t) - datetime.utcfromtimestamp(t)
return offset.days * 86400 + offset.seconds
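
Wrapped up as a self-contained helper (the name local_utc_offset is only illustrative), that approach looks like this:

    import time
    from datetime import datetime

    def local_utc_offset(t=None):
        # Offset of local time from UTC, in seconds, for timestamp t.
        if t is None:
            t = time.time()
        offset = datetime.fromtimestamp(t) - datetime.utcfromtimestamp(t)
        return offset.days * 86400 + offset.seconds

    print local_utc_offset()   # e.g. 28800 for UTC+8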

Another alternative would be to expose tm_gmtoff on time tuples (perhaps using 
the above code to synthesise it on platforms that don't have the field).

--
