[ python-Bugs-1664966 ] crash in exec statement if unicode filename cannot be decoded

2007-02-21 Thread SourceForge.net
Bugs item #1664966, was opened at 2007-02-21 09:31
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1664966&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Stefan Schukat (sschukat)
Assigned to: Nobody/Anonymous (nobody)
Summary: crash in exec statement if unicode filename cannot be decoded

Initial Comment:
If the exec statement gets an open file whose name (f->f_name) is a unicode 
object that cannot be decoded, the return value of PyString_AsString is not 
checked for an error, and therefore a NULL pointer is passed to PyRun_File, 
which then leads to a crash.

in ceval.c, line 4171 ff.:

FILE *fp = PyFile_AsFile(prog);
char *name = PyString_AsString(PyFile_Name(prog));
PyCompilerFlags cf;
cf.cf_flags = 0;
if (PyEval_MergeCompilerFlags(&cf))
    v = PyRun_FileFlags(fp, name, Py_file_input,
                        globals, locals, &cf);
else
    v = PyRun_File(fp, name, Py_file_input, globals,
                   locals);

Name is NULL after conversion.

The patch would be:

FILE *fp = PyFile_AsFile(prog);
char *name = PyString_AsString(PyFile_Name(prog));
if (name == NULL)
    return -1;
PyCompilerFlags cf;

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1664966&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1665206 ] Hangup when using cgitb in a thread while still in import

2007-02-21 Thread SourceForge.net
Bugs item #1665206, was opened at 2007-02-21 14:24
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1665206&group_id=5470

Category: Threads
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: hoffie (hoffie)
Assigned to: Nobody/Anonymous (nobody)
Summary: Hangup when using cgitb in a thread while still in import

Initial Comment:
The problem is best described using example code (also see 
http://trac.saddi.com/flup/ticket/12#comment:3):

  * file foo.py: see attachment
  * file bar.py:
  import foo

Running foo.py directly produces the expected result (HTML output from cgitb); 
running bar.py outputs nothing, as the thread which should print the HTML 
seems to hang internally.
Moving all non-import code in foo.py into a function and calling that function 
after the import in bar.py leads to the expected result, so the problem seems 
related to code which is executed while still inside an import and while using 
threads.
I don't think it is a serious problem, as it's only triggered in that unusual 
case.
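The workaround described above can be sketched as follows (a minimal illustration, not the attached foo.py; the function name is made up). Deferring the thread start/join into a function avoids joining a thread while the import lock is still held:

```python
import threading

def run_worker():
    # Safe pattern: this only runs when called explicitly after import,
    # so no thread is joined while the module is still being imported.
    result = []
    t = threading.Thread(target=lambda: result.append("html"))
    t.start()
    t.join()
    return result[0]
```

Importing a module that merely defines run_worker() is harmless; bar.py can then call foo.run_worker() once the import has completed.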

I'm running python-2.4.4 on Gentoo Linux.
Linux tux 2.6.20-gentoo #2 PREEMPT Mon Feb 5 19:20:30 CET 2007 i686 AMD 
Athlon(tm) XP 2600+ AuthenticAMD GNU/Linux

--




[ python-Feature Requests-1665292 ] Datetime enhancements

2007-02-21 Thread SourceForge.net
Feature Requests item #1665292, was opened at 2007-02-21 15:55
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1665292&group_id=5470

Category: Python Library
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Christian Heimes (tiran)
Assigned to: Nobody/Anonymous (nobody)
Summary: Datetime enhancements

Initial Comment:
I'm proposing some small enhancements to the datetime module:

Add a totimestamp() method to datetime.datetime that returns the seconds since 
1/1/1970 00:00:00. The datetime class already has a fromtimestamp() factory but 
is missing a totimestamp() method.

Add __int__() and __float__() methods to datetime.timedelta which return the 
seconds (seconds + 86400 * days) as int and seconds + milliseconds as float. It 
would save some typing if somebody needs an integer representation of a 
timedelta object :]
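The proposed methods can be emulated in pure Python today; a sketch assuming naive (timezone-less) datetimes, with function names taken from the proposal rather than any existing API:

```python
from datetime import datetime, timedelta

def totimestamp(dt):
    """Seconds since 1970-01-01 00:00:00 for a naive datetime."""
    delta = dt - datetime(1970, 1, 1)
    return delta.days * 86400 + delta.seconds + delta.microseconds / 1e6

def timedelta_to_int(td):
    """The proposed __int__: whole seconds, i.e. seconds + 86400 * days."""
    return td.days * 86400 + td.seconds
```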

The datetime module is implemented in C. I've never written a Python C 
extension so I can't help with a patch.

Thx

--




[ python-Bugs-1665333 ] Documentation missing for OptionGroup class in optparse

2007-02-21 Thread SourceForge.net
Bugs item #1665333, was opened at 2007-02-21 16:40
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1665333&group_id=5470

Category: Documentation
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: LunarYorn (lunar_yorn)
Assigned to: Nobody/Anonymous (nobody)
Summary: Documentation missing for OptionGroup class in optparse 

Initial Comment:
Python seems to lack documentation for the OptionGroup class and related 
methods in the optparse module.

In detail, documentation for the following classes and methods in optparse is 
missing:

- OptionGroup
- OptionParser.add_option_group
- OptionParser.get_option_group

These classes and methods also lack docstrings.
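For the record, a short usage sketch of the undocumented API (option and group names here are invented for illustration):

```python
from optparse import OptionParser, OptionGroup

parser = OptionParser()
group = OptionGroup(parser, "Dangerous Options",
                    "Caution: use these options at your own risk.")
group.add_option("--risky", action="store_true", dest="risky")
parser.add_option_group(group)

opts, args = parser.parse_args(["--risky"])
# get_option_group(opt_str) returns the OptionGroup containing that option
found = parser.get_option_group("--risky")
```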

I found this in Python 2.4.4c1, which comes with Ubuntu 6.10 Edgy. It seems 
that Python 2.5 on Ubuntu Edgy also suffers from this bug.

--




[ python-Bugs-1663329 ] subprocess/popen close_fds perform poor if SC_OPEN_MAX is hi

2007-02-21 Thread SourceForge.net
Bugs item #1663329, was opened at 2007-02-19 11:17
Message generated for change (Comment added) made by hvbargen
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1663329&group_id=5470

Category: Performance
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: H. von Bargen (hvbargen)
Assigned to: Nobody/Anonymous (nobody)
Summary: subprocess/popen close_fds perform poor if SC_OPEN_MAX is hi

Initial Comment:
If the value of sysconf("SC_OPEN_MAX") is high
and you try to start a subprocess with subprocess.py or os.popen2 with 
close_fds=True, then starting the other process is very slow.
This boils down to the following code in subprocess.py:

    def _close_fds(self, but):
        for i in xrange(3, MAXFD):
            if i == but:
                continue
            try:
                os.close(i)
            except:
                pass

and the similar code in popen2.py:

    def _run_child(self, cmd):
        if isinstance(cmd, basestring):
            cmd = ['/bin/sh', '-c', cmd]
        for i in xrange(3, MAXFD):
            try:
                os.close(i)
            except OSError:
                pass

There has been an optimization already (range has been replaced by xrange to 
reduce memory impact), but I think the problem is that for high values of 
MAXFD, usually a high percentage of the os.close statements will fail, raising 
an exception (which is an "expensive" operation).
It has been suggested already to add a C implementation called "rclose" or 
"close_range" that tries to close all FDs in a given range (min, max) without 
the overhead of Python exception handling.
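For what it's worth, such a helper did later land in the stdlib: os.closerange (added in Python 2.6, after this report) closes every fd in [low, high) in C, ignoring errors. A small sketch:

```python
import os

# os.closerange(low, high) closes all fds in [low, high), swallowing
# per-fd errors in C rather than raising Python exceptions.
r, w = os.pipe()
os.closerange(min(r, w), max(r, w) + 1)

def is_open(fd):
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False
```

(It does not support the "but" argument either; subprocess.py would still need to skip one fd, e.g. by closing two ranges around it.)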

I'd like to emphasize that this is not a theoretical but a real-world problem:
We have a Python application in a production environment on Sun Solaris. Some 
other software running on the same server needed a high value of 26 for 
SC_OPEN_MAX (set with ulimit -n XXX or in some /etc/ file; I don't know which 
one).
Suddenly, calling any other process with subprocess.Popen(..., close_fds=True) 
took 14 seconds (!) instead of some microseconds.
This caused a huge performance degradation, since the subprocess itself only 
needs a few seconds.

See also:
Patches item #1607087 "popen() slow on AIX due to large FOPEN_MAX value".
This contains a fix, but only for AIX - and I think the patch does not support 
the "but" argument used in subprocess.py.
The correct solution should be coded in C and should do the same as the 
_close_fds routine in subprocess.py.
It could be optimized to make use of operating-system-specific calls to close 
all handles from (but+1) to MAXFD with "closefrom" or "fcntl", as proposed in 
the patch.


--

>Comment By: H. von Bargen (hvbargen)
Date: 2007-02-21 16:42

Message:
Logged In: YES 
user_id=1008979
Originator: YES

No, I have to use close_fds=True, because I don't want the subprocess to
inherit each and every file descriptor.
This is for two reasons:
i) Security - why should the subprocess be able to work with all the parent
process's files?
ii) Sometimes, for whatever reason, the subprocess (Oracle Reports in this
case) seems to hang. And because it inherited all of the parent's log file
handles, the parent cannot close and remove its log files correctly. This
is the reason why I stumbled upon close_fds at all. BTW, on MS Windows a
similar (but not equivalent) solution was to create the log files as
non-inheritable.

--

Comment By: Martin v. Löwis (loewis)
Date: 2007-02-21 00:45

Message:
Logged In: YES 
user_id=21627
Originator: NO

Wouldn't it be simpler for you to just not pass close_fds=True to popen?

--




[ python-Feature Requests-1665292 ] Datetime enhancements

2007-02-21 Thread SourceForge.net
Feature Requests item #1665292, was opened at 2007-02-21 15:55
Message generated for change (Comment added) made by tiran
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1665292&group_id=5470

Category: Python Library
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Christian Heimes (tiran)
Assigned to: Nobody/Anonymous (nobody)
Summary: Datetime enhancements


--

>Comment By: Christian Heimes (tiran)
Date: 2007-02-21 17:16

Message:
Logged In: YES 
user_id=560817
Originator: YES

File Added: timedelta.patch

--




[ python-Bugs-1663329 ] subprocess/popen close_fds perform poor if SC_OPEN_MAX is hi

2007-02-21 Thread SourceForge.net
Bugs item #1663329, was opened at 2007-02-19 11:17
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1663329&group_id=5470

Category: Performance
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: H. von Bargen (hvbargen)
Assigned to: Nobody/Anonymous (nobody)
Summary: subprocess/popen close_fds perform poor if SC_OPEN_MAX is hi



--

>Comment By: Martin v. Löwis (loewis)
Date: 2007-02-21 19:18

Message:
Logged In: YES 
user_id=21627
Originator: NO

I understand you don't want the subprocess to inherit "incorrect" file
descriptors. However, there are other ways to prevent that from happening:
- you should close file descriptors as soon as you are done with the
files
- you should set the FD_CLOEXEC flag on all file descriptors you don't
want to be inherited, using fcntl(fd, F_SETFD, 1)

I understand that there are cases where neither of these strategies is
practical, but if you can follow them, performance will be much better, as
the closing of unused file descriptors is done in the exec(2) implementation
of the operating system.
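The FD_CLOEXEC suggestion above looks like this from Python (a minimal sketch using the stdlib fcntl module, Unix only):

```python
import fcntl
import os

def set_cloexec(fd):
    # Mark fd close-on-exec: the kernel closes it in the child's exec(2),
    # with no per-fd work needed at fork time.
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

r, w = os.pipe()
set_cloexec(w)
```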


--


[ python-Bugs-1656559 ] I think, I have found this bug on time.mktime()

2007-02-21 Thread SourceForge.net
Bugs item #1656559, was opened at 2007-02-10 03:41
Message generated for change (Comment added) made by sergiomb
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1656559&group_id=5470

Category: None
Group: 3rd Party
Status: Closed
Resolution: Invalid
Priority: 5
Private: No
Submitted By: Sérgio Monteiro Basto (sergiomb)
Assigned to: Nobody/Anonymous (nobody)
Summary: I think, I have found this bug on time.mktime()

Initial Comment:
well, I think I have found a bug in time.mktime() for dates earlier
than 1976-09-26

when I do stringtotime of 1976-09-25 

print "timeint %d" % time.mktime(__extract_date(m) + __extract_time(m) + (0, 0, 
0)) 

extract date = 1976 9 25
extract time = 0 0 0
timeint 212454000
and 
timetostring(212454000) = 1976-09-24T23:00:00Z !? 

To be honest, the date that first caught my attention was 1970-01-01, which
appears as 1969-12-31 after timetostring(stringtotime(date)).

I made the test, and time.mktime has a bug when the date is less than
1976-09-26;
see:
for 1976-09-27T00:00:00Z time.mktime gives 212630400
for 1976-09-26T00:00:00Z time.mktime gives 212544000
for 1976-09-25T00:00:00Z time.mktime gives 212454000

212630400 - 212544000 = 86400 (seconds), one day - correct!
but
212544000 - 212454000 = 90000 (seconds), one day plus 3600 seconds -
one hour more?!?
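The arithmetic above can be checked directly; the surplus is exactly one hour, which points at a DST shift rather than a rounding error:

```python
DAY = 86400  # seconds per day

# 1976-09-27 minus 1976-09-26: a normal day
assert 212630400 - 212544000 == DAY
# 1976-09-26 minus 1976-09-25: one day plus one hour (DST transition)
assert 212544000 - 212454000 == DAY + 3600
```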

--
Sérgio M. B. 



--

>Comment By: Sérgio Monteiro Basto (sergiomb)
Date: 2007-02-21 22:34

Message:
Logged In: YES 
user_id=4882
Originator: YES

well, I found that the bug is in ./site-packages/_xmlplus/utils/iso8601.py

 gmt = __extract_date(m) + __extract_time(m) + (0, 0, 0)   <- this is wrong
My suggestion is:
 gmt = __extract_date(m) + __extract_time(m)
 gmt = datetime(*gmt).timetuple()

The (0, 0, 0) - zero for day of the week, zero for day of the year and zero
for isdst - is the error here.

timetuple() calculates these last 3 numbers correctly,
and my problem is gone!

references http://docs.python.org/lib/module-time.html:
0  tm_year   (for example, 1993)
1  tm_mon    range [1,12]
2  tm_mday   range [1,31]
3  tm_hour   range [0,23]
4  tm_min    range [0,59]
5  tm_sec    range [0,61]; see (1) in strftime() description
6  tm_wday   range [0,6], Monday is 0
7  tm_yday   range [1,366]
8  tm_isdst  0, 1 or -1; see below
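The suggestion can be sketched as follows (the tuple mimics what __extract_date/__extract_time would return for this date; 1976-09-25 was a Saturday, day 269 of that leap year):

```python
from datetime import datetime

# Let datetime compute tm_wday and tm_yday, and leave tm_isdst as -1
# ("unknown", so mktime decides), instead of padding with literal zeros
# (which wrongly forces tm_isdst = 0, "no DST").
parsed = (1976, 9, 25, 0, 0, 0)
tt = datetime(*parsed).timetuple()
```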


--

Comment By: Martin v. Löwis (loewis)
Date: 2007-02-13 15:54

Message:
Logged In: YES 
user_id=21627
Originator: NO

cvalente, thanks for the research. Making a second attempt at closing this
as third-party bug.

--

Comment By: Sérgio Monteiro Basto (sergiomb)
Date: 2007-02-13 14:25

Message:
Logged In: YES 
user_id=4882
Originator: YES

ok, bug opened at
http://sources.redhat.com/bugzilla/show_bug.cgi?id=4033

--

Comment By: Claudio Valente (cvalente)
Date: 2007-02-13 12:47

Message:
Logged In: YES 
user_id=627298
Originator: NO

OK. This is almost surely NOT a Python bug but most likely a libc bug.

In c:
--
#include <stdio.h>
#include <time.h>

int main(int argc, char* argv[]){
struct tm t1;
struct tm t2;

/* midnight 26/SEP/1976 */
t1.tm_sec  = 0;
t1.tm_min  = 0;
t1.tm_hour = 0;
t1.tm_mday = 26;
t1.tm_mon  = 8;
t1.tm_year = 76;

/* midnight 25/SEP/1976 */
t2.tm_sec  = 0;
t2.tm_min  = 0;
t2.tm_hour = 0;
t2.tm_mday = 25;
t2.tm_mon  = 8;
t2.tm_year = 76;

printf("%li\n", mktime(&t1)-mktime(&t2));
printf("%li\n", mktime(&t1)-mktime(&t2));

return 0;
}
--
Outputs:

90000
86400


In perl:
-
perl -le 'use POSIX; $t1=POSIX::mktime(0,0,0,26,8,76)
-POSIX::mktime(0,0,0,25,8,76); $t2 = POSIX::mktime(0,0,0,26,8,76)
-POSIX::mktime(0,0,0,25,8,76) ; print $t1."\n". $t2'
-

Outputs

90000
86400

-

My system is gentoo with glibc 2.4-r4
and my timezone is:
/usr/share/zoneinfo/Europe/Lisbon

When I changed this to another timezone (say, London) the problem didn't
exist.

Thank you all for your time.

--

Comment By: Sérgio Monteiro Basto (sergiomb)
Date: 2007-02-13 12:22

Message:
Logged In: YES 
user_id=4882
Originator: YES

timezone: WET in winter, WEST in summer.
I tried the same with the timezone of New York and
>>>
time.mktime((1976,9,26,0,0,0,0,0,0))-time.mktime((1976,9,25,0,0,0,0,0,0))
86400.0


--

Comment By: Claudio Valente (cvalente)
Date: 2007-02-13 12:07

Message:
Logged In: YES 
user_id=6272