[issue10930] dict.setdefault: Bug: default argument is ALWAYS evaluated, i.e. no short-circuit eval

2011-01-18 Thread Albert

New submission from Albert :

Hello!

Is it intentional that the default argument is ALWAYS evaluated, even if it is 
not needed?

Is it not a bug that this method has no short-circuit evaluation 
(http://en.wikipedia.org/wiki/Short-circuit_evaluation)?

Example 1:
=
infinite = 1e100

one_div_by = {0.0: infinite}

def func(n):
    return one_div_by.setdefault(float(n), 1/float(n))

for i in [1, 2, 3, 4]:
    print i, func(i)
print one_div_by
# works!!

for i in [0, 1, 2, 3, 4]:  # added 0 -> FAIL!
    print i, func(i)
print one_div_by
# fail!!



Example 2:
=
fib_d = {0: 0, 1: 1}

def fibonacci(n):
    return fib_d.setdefault(n, fibonacci(n-1) + fibonacci(n-2))

for i in range(10):
    print i, fibonacci(i)
print fib_d
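[Note: the behavior is expected in the sense that the default is an ordinary function argument, which Python evaluates before `setdefault()` is even called. A minimal sketch of a short-circuiting alternative for the Fibonacci example, using an explicit membership test (runs on Python 2 and 3):]

```python
fib_d = {0: 0, 1: 1}

def fibonacci(n):
    # short-circuit: the value is only computed on a cache miss,
    # unlike setdefault(), whose default is evaluated unconditionally
    if n not in fib_d:
        fib_d[n] = fibonacci(n - 1) + fibonacci(n - 2)
    return fib_d[n]
```

The same pattern (test membership first, or use `dict.get` with a sentinel) avoids the division-by-zero in Example 1 as well.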

--
messages: 126456
nosy: albert.neu
priority: normal
severity: normal
status: open
title: dict.setdefault: Bug: default argument is ALWAYS evaluated, i.e. no 
short-circuit eval
type: behavior
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue10930>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue13423] Ranges cannot be meaningfully compared for equality or hashed

2011-11-18 Thread Chase Albert

New submission from Chase Albert :

My expectation was that range(2,5) == range(2,5), and that they should hash the 
same. This is not the case.

--
messages: 147838
nosy: rob.anyone
priority: normal
severity: normal
status: open
title: Ranges cannot be meaningfully compared for equality or hashed
type: behavior
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue13423>
___



[issue12311] memory leak with self-referencing dict

2011-06-10 Thread Albert Zeyer

New submission from Albert Zeyer :

The attached Python script leaks memory. It is clear that there is a reference 
circle (`__dict__` references `self`) but `gc.collect()` should find this.
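[Note: the attached script is not reproduced in the archive; the following is a hypothetical reconstruction of the pattern described (an instance whose `__dict__` references the instance itself), which a working cycle collector should reclaim:]

```python
class Node:
    def __init__(self):
        # self.__dict__ ends up holding a reference back to the instance,
        # forming the reference cycle described above
        self.me = self

def churn(n):
    # create and drop n cyclic objects; only the cycle collector
    # (gc.collect) can reclaim them, refcounting alone cannot
    for _ in range(n):
        Node()
```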

--
components: Interpreter Core
files: py_dict_refcount_test.py
messages: 138062
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: memory leak with self-referencing dict
type: resource usage
versions: Python 3.2
Added file: http://bugs.python.org/file22311/py_dict_refcount_test.py

___
Python tracker 
<http://bugs.python.org/issue12311>
___



[issue12311] memory leak with self-referencing dict

2011-06-10 Thread Albert Zeyer

Albert Zeyer  added the comment:

Whoops, looks like a duplicate of #1469629.

--
resolution:  -> duplicate
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue12311>
___



[issue12608] crash in PyAST_Compile when running Python code

2011-07-22 Thread Albert Zeyer

New submission from Albert Zeyer :

Code:

```
import ast

globalsDict = {}

fAst = ast.FunctionDef(
    name="foo",
    args=ast.arguments(
        args=[], vararg=None, kwarg=None, defaults=[],
        kwonlyargs=[], kw_defaults=[]),
    body=[], decorator_list=[])

exprAst = ast.Interactive(body=[fAst])
ast.fix_missing_locations(exprAst)
compiled = compile(exprAst, "", "single")
eval(compiled, globalsDict, globalsDict)

print(globalsDict["foo"])
```

CPython 2.6, 2.7, 3.0 and PyPy 1.5 also crash on this.
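[Note: a likely trigger is the empty `body=[]`, since the compiler expects a non-empty statement list for a function. As a hedged sketch of a valid equivalent, built via `ast.parse` so the `ast.arguments` fields match the running Python version:]

```python
import ast

# build the same function through ast.parse so the node carries a valid,
# non-empty body ([ast.Pass()]) and version-appropriate argument fields
module = ast.parse("def foo():\n    pass")
assert module.body[0].body  # non-empty body, unlike the crashing example

globalsDict = {}
compiled = compile(module, "<ast>", "exec")
eval(compiled, globalsDict, globalsDict)
print(globalsDict["foo"])
```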

--
components: Interpreter Core
messages: 140873
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: crash in PyAST_Compile when running Python code
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue12608>
___



[issue12608] crash in PyAST_Compile when running Python code

2011-07-22 Thread Albert Zeyer

Albert Zeyer  added the comment:

PyPy bug report: https://bugs.pypy.org/issue806

--

___
Python tracker 
<http://bugs.python.org/issue12608>
___



[issue12609] SystemError: Objects/codeobject.c:64: bad argument to internal function

2011-07-22 Thread Albert Zeyer

New submission from Albert Zeyer :

Code:

```
from ast import *

globalsDict = {}

exprAst = Interactive(body=[FunctionDef(name=u'Py_Main', 
args=arguments(args=[Name(id=u'argc', ctx=Param()), Name(id=u'argv', 
ctx=Param())], vararg=None, kwarg=None, defaults=[]), 
body=[Assign(targets=[Name(id=u'argc', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_int', 
ctx=Load()), args=[Attribute(value=Name(id=u'argc', ctx=Load()), attr='value', 
ctx=Load())], keywords=[], starargs=None, kwargs=None)), 
Assign(targets=[Name(id=u'argv', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='cast', 
ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_void_p', ctx=Load()), 
args=[Attribute(value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='cast', ctx=Load()), args=[Name(id=u'argv', ctx=Load()), 
Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_void_p', ctx=Load())], 
keywords=[], starargs=None, kwargs=None), attr='value', ctx=Load())], 
keywords=[], starargs=None, kwargs=None), Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='POINTER', ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', 
ctx=Load()), attr='POINTER', ctx=Load()), 
args=[Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_char', 
ctx=Load())], keywords=[], starargs=None, kwargs=None)], keywords=[], 
starargs=None, kwargs=None)], keywords=[], starargs=None, kwargs=None)), 
Assign(targets=[Name(id=u'c', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_int', 
ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None)), 
Assign(targets=[Name(id=u'sts', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_int', 
ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None)), 
Assign(targets=[Name(id=u'command', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='cast', 
ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='c_void_p', ctx=Load()), 
args=[Attribute(value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_uint', ctx=Load()), args=[Num(n=0L)], keywords=[], starargs=None, 
kwargs=None), attr='value', ctx=Load())], keywords=[], starargs=None, 
kwargs=None), Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='POINTER', ctx=Load()), args=[Attribute(value=Name(id='ctypes', 
ctx=Load()), attr='c_char', ctx=Load())], keywords=[], starargs=None, 
kwargs=None)], keywords=[], starargs=None, kwargs=None)), 
Assign(targets=[Name(id=u'filename', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='cast', 
ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_void_p', ctx=Load()), 
args=[Attribute(value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_uint', ctx=Load()), args=[Num(n=0L)], keywords=[], starargs=None, 
kwargs=None), attr='value', ctx=Load())], keywords=[], starargs=None, 
kwargs=None), Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='POINTER', 
ctx=Load()), args=[Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_char', ctx=Load())], keywords=[], starargs=None, kwargs=None)], 
keywords=[], starargs=None, kwargs=None)), Assign(targets=[Name(id=u'module', 
ctx=Store())], value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='cast', ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', 
ctx=Load()), attr='c_void_p', ctx=Load()), 
args=[Attribute(value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='c_uint', ctx=Load()), args=[Num(n=0L)], keywords=[], starargs=None, 
kwargs=None), attr='value', ctx=Load())], keywords=[], starargs=None, 
kwargs=None), Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), 
attr='POINTER', ctx=Load()), args=[Attribute(value=Name(id='ctypes', 
ctx=Load()), attr='c_char', ctx=Load())], keywords=[], starargs=None, 
kwargs=None)], keywords=[], starargs=None, kwargs=None)), Assign(targets=[Name(id=u'fp', ctx=Store())], 
value=Call(func=Attribute(value=Name(id='ctypes', ctx=Load()), attr='cast', 
ctx=Load()), args=[Call(func=Attribute(value=Name(id='ctypes', ct

[issue12610] Fatal Python error: non-string found in code slot

2011-07-22 Thread Albert Zeyer

New submission from Albert Zeyer :

Code:

```
from ast import *

globalsDict = {}

body = [
    Assign(targets=[Name(id=u'argc', ctx=Store())],
           value=Name(id=u'None', ctx=Load())),
]

exprAst = Interactive(body=[
    FunctionDef(
        name='foo',
        args=arguments(args=[Name(id=u'argc', ctx=Param()),
                             Name(id=u'argv', ctx=Param())],
                       vararg=None, kwarg=None,
                       defaults=[]),
        body=body,
        decorator_list=[])])

fix_missing_locations(exprAst)
compiled = compile(exprAst, "", "single")
eval(compiled, {}, globalsDict)

f = globalsDict["foo"]
print(f)
```

CPython 2.7.1: Fatal Python error: non-string found in code slot
PyPy 1.5: 

--
messages: 140877
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: Fatal Python error: non-string found in code slot
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12610>
___



[issue12609] SystemError: Objects/codeobject.c:64: bad argument to internal function

2011-07-22 Thread Albert Zeyer

Albert Zeyer  added the comment:

Simplified code:

```
from ast import *

globalsDict = {}

exprAst = Interactive(body=[
    FunctionDef(
        name=u'foo',
        args=arguments(args=[], vararg=None, kwarg=None, defaults=[]),
        body=[Pass()],
        decorator_list=[])])

fix_missing_locations(exprAst)
compiled = compile(exprAst, "", "single")
eval(compiled, {}, globalsDict)

f = globalsDict["foo"]
print(f)
```

If I change `name=u'foo'` to `name='foo'`, it works.

--

___
Python tracker 
<http://bugs.python.org/issue12609>
___



[issue12854] PyOS_Readline usage in tokenizer ignores sys.stdin/sys.stdout

2011-08-29 Thread Albert Zeyer

New submission from Albert Zeyer :

In Parser/tokenizer.c, there is `PyOS_Readline(stdin, stdout, tok->prompt)`. 
This ignores any `sys.stdin` / `sys.stdout` overwrites.

The usage should be like in Python/bltinmodule.c in builtin_raw_input.

--
messages: 143168
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: PyOS_Readline usage in tokenizer ignores sys.stdin/sys.stdout
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12854>
___



[issue12859] readline implementation doesn't release the GIL

2011-08-30 Thread Albert Zeyer

New submission from Albert Zeyer :

Modules/readline.c 's `call_readline` doesn't release the GIL while reading.

--
messages: 143226
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: readline implementation doesn't release the GIL
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12859>
___



[issue12859] readline implementation doesn't release the GIL

2011-08-30 Thread Albert Zeyer

Albert Zeyer  added the comment:

Whoops, sorry, invalid. It doesn't need to; this is handled in PyOS_Readline.

--
resolution:  -> invalid
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue12859>
___



[issue12861] PyOS_Readline uses single lock

2011-08-30 Thread Albert Zeyer

New submission from Albert Zeyer :

In Parser/myreadline.c PyOS_Readline uses a single lock (`_PyOS_ReadlineLock`). 
I guess it is so that we don't have messed up stdin reads. Or are there other 
technical reasons?

However, it should work to call this function from multiple threads with 
different/independent `sys_stdin` / `sys_stdout`.

--
messages: 143229
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: PyOS_Readline uses single lock
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12861>
___



[issue12861] PyOS_Readline uses single lock

2011-08-30 Thread Albert Zeyer

Albert Zeyer  added the comment:

Ok, it seems that the Modules/readline.c implementation is also not really 
thread-safe. (Though I think it should be.)

--

___
Python tracker 
<http://bugs.python.org/issue12861>
___



[issue12869] PyOS_StdioReadline is printing the prompt on stderr

2011-08-31 Thread Albert Zeyer

New submission from Albert Zeyer :

PyOS_StdioReadline from Parser/myreadline.c is printing the prompt on stderr.

I think it should print it on the given parameter sys_stdout. Other readline 
implementations (like from the readline module) also behave this way.

Even if it really is supposed to write on stderr, it should use the 
`sys.stderr` and not the system stderr.

--
messages: 143256
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: PyOS_StdioReadline is printing the prompt on stderr
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12869>
___



[issue12861] PyOS_Readline uses single lock

2011-08-31 Thread Albert Zeyer

Albert Zeyer  added the comment:

Even more problematic: the readline library itself is absolutely not designed 
to be used from multiple threads at once.

--

___
Python tracker 
<http://bugs.python.org/issue12861>
___



[issue12861] PyOS_Readline uses single lock

2011-09-02 Thread Albert Zeyer

Albert Zeyer  added the comment:

You might have opened several via `openpty`.

I am doing that here: https://github.com/albertz/PyTerminal

--

___
Python tracker 
<http://bugs.python.org/issue12861>
___



[issue9655] urllib2 fails to retrieve a url which is handled correctly by urllib

2010-08-21 Thread Albert Weichselbraun

New submission from Albert Weichselbraun :

urllib2 fails to retrieve the content of 
http://www.mfsa.com.mt/insguide/english/glossarysearch.jsp?letter=all

>>> urllib2.urlopen("http://www.mfsa.com.mt/insguide/english/glossarysearch.jsp?letter=all").read()
''

urllib handles the same link correctly:

>>> len(urllib.urlopen("http://www.mfsa.com.mt/insguide/english/glossarysearch.jsp?letter=all").read())
56105

--
components: Library (Lib)
messages: 114482
nosy: Albert.Weichselbraun
priority: normal
severity: normal
status: open
title: urllib2 fails to retrieve a url which is handled correctly by urllib
type: behavior
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue9655>
___



[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt

2010-08-24 Thread Albert Strasheim

Changes by Albert Strasheim :


--
nosy: +Albert.Strasheim

___
Python tracker 
<http://bugs.python.org/issue8296>
___



[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt

2010-08-24 Thread Albert Strasheim

Albert Strasheim  added the comment:

Any chance of getting this patch applied? Thanks.

--

___
Python tracker 
<http://bugs.python.org/issue8296>
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-26 Thread Albert Strasheim

Changes by Albert Strasheim :


--
nosy: +Albert.Strasheim

___
Python tracker 
<http://bugs.python.org/issue9205>
___



[issue9207] multiprocessing occasionally spits out exception during shutdown (_handle_workers)

2010-08-26 Thread Albert Strasheim

Changes by Albert Strasheim :


--
nosy: +Albert.Strasheim

___
Python tracker 
<http://bugs.python.org/issue9207>
___



[issue4608] urllib.request.urlopen does not return an iterable object

2011-04-20 Thread Albert Hopkins

Albert Hopkins  added the comment:

This issue appears to persist when the protocol used is FTP:


root@tp-db $ cat test.py
from urllib.request import urlopen
for line in urlopen('ftp://gentoo.osuosl.org/pub/gentoo/releases/'):
    print(line)
    break

root@tp-db $ python3.2 test.py
Traceback (most recent call last):
  File "test.py", line 2, in 
for line in urlopen('ftp://gentoo.osuosl.org/pub/gentoo/releases/'):
TypeError: 'addinfourl' object is not iterable
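[Note: until the response objects grow `__iter__`, a generic workaround is to drive `readline()` directly. This is a hypothetical helper, not part of urllib; it works for any object exposing a bytes `readline()`:]

```python
def iter_lines(resp):
    # iterate by calling readline() until EOF (empty bytes);
    # no __iter__ support on the response object is required
    return iter(resp.readline, b"")
```

Usage would then be `for line in iter_lines(urlopen(url)): ...`.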

--
nosy: +marduk

___
Python tracker 
<http://bugs.python.org/issue4608>
___



[issue4608] urllib.request.urlopen does not return an iterable object

2011-04-20 Thread Albert Hopkins

Albert Hopkins  added the comment:

Oops, the previous example was a directory, but it's the same if the URL points 
to an FTP file.

--

___
Python tracker 
<http://bugs.python.org/issue4608>
___



[issue3002] shutil.copyfile blocks indefinitely on named pipes

2008-05-29 Thread albert hofkamp

New submission from albert hofkamp <[EMAIL PROTECTED]>:

shutil.copytree() uses shutil.copyfile() to copy files recursively.
shutil.copyfile() opens the source file for reading, and the destination
file for writing, followed by a call to shutil.copyfileobj().

If the file happens to be a named pipe rather than a normal file,
opening for read blocks the copying function, since the Unix OS needs a
writer process to attach to the same named pipe before the open-for-read
succeeds.

Rather than opening the file for reading, the correct action would
probably be to simply create a new named pipe with the same name at the
destination.
Looking at the Python 2.3 code, the same type of problem seems to exist
for non-normal file types other than symlinks (e.g. device files,
sockets, and possibly a few others).
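[Note: a sketch of the suggested behavior, as a hypothetical helper rather than the actual shutil implementation: detect the FIFO via `stat` and recreate it at the destination instead of opening it for reading, which would block until a writer attaches.]

```python
import os
import shutil
import stat

def copy_entry(src, dst):
    # recreate special files instead of opening them; a FIFO opened for
    # reading blocks until a writer attaches to the same pipe
    st = os.lstat(src)
    if stat.S_ISFIFO(st.st_mode):
        os.mkfifo(dst, stat.S_IMODE(st.st_mode))
    else:
        shutil.copyfile(src, dst)
```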

--
components: Library (Lib)
messages: 67498
nosy: aioryi
severity: normal
status: open
title: shutil.copyfile blocks indefinitely on named pipes
type: behavior
versions: Python 2.3

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3002>
___



[issue5380] pty.read raises IOError when slave pty device is closed

2010-03-02 Thread Albert Hopkins

Changes by Albert Hopkins :


--
nosy: +marduk

___
Python tracker 
<http://bugs.python.org/issue5380>
___



[issue1774] Reference to New style classes documentation is incorrect

2008-01-09 Thread albert hofkamp

New submission from albert hofkamp:

In the Python reference manual (the online current documentation), in
Section 3.3 "New-style and classic classes", there is a reference to
external documentation about new style classes.
The reference is incorrect, however: it should be
http://www.python.org/doc/newstyle/ rather than the mentioned
http://www.python.org/doc/newstyle.html

--
components: Documentation
messages: 59588
nosy: aioryi
severity: normal
status: open
title: Reference to New style classes documentation is incorrect
versions: Python 2.5

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1774>
__



[issue1773] Reference to Python issue tracker incorrect

2008-01-09 Thread albert hofkamp

New submission from albert hofkamp:

In the Python reference manual (the online current documentation), in
the "About this document" section, there is a reference to the
Sourceforge bug tracker for reporting errors in the document.
This tracker, however, has been closed and replaced by the one at
http://bugs.python.org/

--
components: Documentation
messages: 59587
nosy: aioryi
severity: normal
status: open
title: Reference to Python issue tracker incorrect
versions: Python 2.5

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1773>
__



[issue4565] io write() performance very slow

2008-12-06 Thread Istvan Albert

New submission from Istvan Albert <[EMAIL PROTECTED]>:

The write performance into text files is substantially slower (5x-8x)
than that of python 2.5. This makes python 3.0 unsuited to any
application that needs to write larger amounts of data.

test code follows ---

import time

lo, hi, step = 10**5, 10**6, 10**5

# writes increasingly more lines to a file
for N in range(lo, hi, step):
    fp = open('foodata.txt', 'wt')
    start = time.time()
    for i in range(N):
        fp.write('%s\n' % i)
    fp.close()
    stop = time.time()
    print("%s\t%s" % (N, stop - start))
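[Note: independent of the underlying io regression, batching writes is a common mitigation. A sketch, accumulating the lines and issuing a single write() call:]

```python
def write_batched(path, n):
    # join all lines in memory and write once, trading memory for far
    # fewer trips through the (per-call slower) io stack
    data = ''.join('%s\n' % i for i in range(n))
    with open(path, 'wt') as fp:
        fp.write(data)
```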

--
components: Interpreter Core
messages: 77132
nosy: ialbert
severity: normal
status: open
title: io write() performance very slow
type: performance
versions: Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4565>
___



[issue4565] io write() performance very slow

2008-12-06 Thread Istvan Albert

Istvan Albert <[EMAIL PROTECTED]> added the comment:

Well I would strongly dispute that anyone other than the developers
expected this. The release documentation states:

"The net result of the 3.0 generalizations is that Python 3.0 runs the
pystone benchmark around 10% slower than Python 2.5."

There is no indication of an order-of-magnitude read/write slowdown.
I believe that this issue is extremely serious! IO is an essential part
of a program, and today we live in a world of gigabytes of data. I am
reading reports of even more severe IO slowdowns than what I saw:

http://bugs.python.org/issue4561

Java has had a hard time getting rid of the "it is very slow" stigma
even after getting a JIT compiler, so there is a danger of a
lasting negative impression here.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4565>
___



[issue4613] Can't figure out where SyntaxError: can not delete variable 'x' referenced in nested scope is coming from; python shows no traceback

2008-12-09 Thread Albert Hopkins

New submission from Albert Hopkins <[EMAIL PROTECTED]>:

Say I have module foo.py:

def a(x):
    def b():
        x
    del x

If I run foo.py under Python 2.4.4 I get:

  File "foo.py", line 4
    del x
SyntaxError: can not delete variable 'x' referenced in nested scope

Under Python 2.6 and Python 3.0 I get:

SyntaxError: can not delete variable 'x' referenced in nested scope


The difference is that under Python 2.4 I get a traceback with the line number 
and offending line, but I do not get a traceback in Pythons 2.6 and 3.0.

This also kinda relates to the 2to3 tool.  See:

http://groups.google.com/group/comp.lang.python/browse_frm/thread/a6600c80f8c3c60c/4d804532ea09aae7

--
components: Interpreter Core
messages: 77443
nosy: marduk
severity: normal
status: open
title: Can't figure out where SyntaxError: can not delete variable 'x' 
referenced in nested scope is coming from; python shows no traceback
type: behavior
versions: Python 2.6, Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4613>
___



[issue4613] Can't figure out where SyntaxError: can not delete variable 'x' referenced in nested scope is coming from; python shows no traceback

2008-12-09 Thread Albert Hopkins

Albert Hopkins <[EMAIL PROTECTED]> added the comment:

Thanks for looking into this.

Ok... I applied your patch (actually it does not apply against Python
3.0 so I had to change it manually).

Now I'm not sure if this is still an error in the compiler or if it's
truly a problem on my end, but the line given in the error doesn't
contain a del statement at all.

The code basically looks like this:

def method(self):
    ...
    success = False
    e = None
    try:
        success, mydepgraph, dropped_tasks = resume_depgraph(
            self.settings, self.trees,
            self._mtimedb, self.myopts,
            myparams, self._spinner,
            skip_unsatisfied=True)
    except depgraph.UnsatisfiedResumeDep as e:
        mydepgraph = e.depgraph
        dropped_tasks = set()

With the patch, the error occurs at "dropped_tasks = set()" in the
except clause.  The first time that "dropped_tasks" is used in the
entire module is in the try clause.

This is a horrible piece of code (I did not write it).  Do you think the
SyntaxError message could be masking what the real error is?

Again, the python2 version of this module imports fine, but when I run
2to3 on it I get the SyntaxError.  The line of code in question,
however, is unmodified by 2to3.

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4613>
___



[issue37157] shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range)

2021-10-07 Thread Albert Zeyer


Albert Zeyer  added the comment:

> How is CoW copy supposed to be done by using copy_file_range() exactly?

I think copy_file_range() will just always use copy-on-write and/or 
server-side-copy when available. You cannot even turn that off.

--

___
Python tracker 
<https://bugs.python.org/issue37157>
___



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-13 Thread Albert Zeyer


Albert Zeyer  added the comment:

Instead of `except:` and `except BaseException:`, I think it is better to use 
`except Exception:`.

For further discussion and reference, also see the discussion here: 
https://news.ycombinator.com/item?id=22028581
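[Note: a sketch of the single-ownership pattern under discussion, as a hypothetical helper rather than the actual tempfile internals. The raw fd is closed exactly once: either by the file object on success, or by the except branch on failure.]

```python
import os
import tempfile

def open_owned_tempfile():
    fd, path = tempfile.mkstemp()
    try:
        f = os.fdopen(fd, "w+b")  # on success, f takes over the fd
    except Exception:
        os.close(fd)    # ownership was never transferred: close exactly once
        os.unlink(path)
        raise
    return f, path
```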

--
nosy: +Albert.Zeyer

___
Python tracker 
<https://bugs.python.org/issue39318>
___



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-14 Thread Albert Zeyer


Albert Zeyer  added the comment:

Why is `except BaseException` better than `except Exception` here? With `except 
Exception`, you will never run into the problem of possibly closing the fd 
twice, which is the main thing we want to fix here. That is more important than 
possibly failing to close it at all, or to unlink it.

--

___
Python tracker 
<https://bugs.python.org/issue39318>
___



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-15 Thread Albert Zeyer


Albert Zeyer  added the comment:

If you accept anyway that a KeyboardInterrupt can potentially leak, then just 
using `except Exception` would also solve it here.

--

___
Python tracker 
<https://bugs.python.org/issue39318>
___



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-15 Thread Albert Zeyer


Albert Zeyer  added the comment:

> I think it is worth pointing out that the semantics of 
>
> f = ``open(fd, closefd=True)`` 
>
> are broken (IMHO) because an exception can result in an unreferenced file
> object that has taken over reponsibility for closing the fd, but it can
> also fail without creating the file object.

I thought that if this raises a (normal) exception, it always means that it did 
not take over ownership of the `fd`, i.e. it never results in an unreferenced 
file object that has taken ownership of `fd`.

If this is not true right now, you are right that this is problematic. But it 
should be simple to fix on the CPython side, right? I.e. to make sure that 
whenever some exception is raised here, even if some temporary file object was 
already constructed, it will not close `fd`. It should be consistent in this 
behavior; otherwise indeed, the semantics are broken.

--

___
Python tracker 
<https://bugs.python.org/issue39318>
___



[issue37159] Use copy_file_range() in shutil.copyfile() (server-side copy)

2020-12-29 Thread Albert Zeyer


Albert Zeyer  added the comment:

According to the man page of copy_file_range 
(https://man7.org/linux/man-pages/man2/copy_file_range.2.html), copy_file_range 
also should support copy-on-write:

>   copy_file_range() gives filesystems an opportunity to implement
>   "copy acceleration" techniques, such as the use of reflinks
>   (i.e., two or more inodes that share pointers to the same copy-
>   on-write disk blocks) or server-side-copy (in the case of NFS).

Is this wrong?

However, while researching more about FICLONE vs. copy_file_range, I found e.g. 
this: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=24399, which suggests that 
there are other problems with copy_file_range.

--
nosy: +Albert.Zeyer

___
Python tracker 
<https://bugs.python.org/issue37159>
___



[issue37157] shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range)

2020-12-29 Thread Albert Zeyer


Albert Zeyer  added the comment:

Is FICLONE really needed? Doesn't copy_file_range already support the same?

I posted the same question here: 
https://stackoverflow.com/questions/65492932/ficlone-vs-ficlonerange-vs-copy-file-range-for-copy-on-write-support

--
nosy: +Albert.Zeyer

___
Python tracker 
<https://bugs.python.org/issue37157>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37157] shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range)

2020-12-31 Thread Albert Zeyer


Albert Zeyer  added the comment:

I did some further research (with all details here: 
https://stackoverflow.com/a/65518879/133374).

See vfs_copy_file_range in the Linux kernel. This first tries to call 
remap_file_range if possible.

FICLONE calls ioctl_file_clone. ioctl_file_clone calls vfs_clone_file_range. 
vfs_clone_file_range calls remap_file_range. I.e. FICLONE == remap_file_range.

So using copy_file_range (if available) should be the most generic solution, 
which includes copy-on-write support, and server-side copy support.
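As an illustration, here is a hedged sketch of using os.copy_file_range (Linux, Python 3.8+) with a plain userspace copy as fallback; the helper name fast_copy is made up for this example:

```python
import os
import shutil
import tempfile

def fast_copy(src, dst):
    """Copy src to dst via copy_file_range() when available, else plain copy."""
    if hasattr(os, "copy_file_range"):
        try:
            with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
                remaining = os.fstat(fsrc.fileno()).st_size
                while remaining > 0:
                    # The kernel may use reflinks or server-side copy here.
                    n = os.copy_file_range(fsrc.fileno(), fdst.fileno(), remaining)
                    if n == 0:
                        break
                    remaining -= n
            return
        except OSError:
            pass  # e.g. unsupported filesystem: fall back to a plain copy
    shutil.copyfile(src, dst)

# tiny demo on a throwaway file
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "src.bin")
dst = os.path.join(tmpdir, "dst.bin")
with open(src, "wb") as f:
    f.write(b"abc" * 4096)
fast_copy(src, dst)
```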

--

___
Python tracker 
<https://bugs.python.org/issue37157>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31963] AMD64 Debian PGO 3.x buildbot: compilation failed with an internal compiler error in create_edge

2020-05-19 Thread Albert Christianto

Albert Christianto  added the comment:

Hi admins,

I need help for compiling and building Python-3.7.7. 
My system is Ubuntu 16.04 LTS, gcc 5.4.0 20160609

this is the configuration cmd for compling python
./configure --enable-optimizations --enable-shared

When I compiled it, I got errors similar to these:

```
Parser/tokenizer.c: In function 'PyTokenizer_FindEncodingFilename':
Parser/tokenizer.c:1909:1: error: the control flow of function 
'PyTokenizer_FindEncodingFilename' does not match its profile data (counter 
'arcs') [-Werror=coverage-mismatch]
 }
 ^
Parser/tokenizer.c:1909:1: error: the control flow of function 
'PyTokenizer_FindEncodingFilename' does not match its profile data (counter 
'time_profiler') [-Werror=coverage-mismatch]
Parser/tokenizer.c: In function 'tok_get':
Parser/tokenizer.c:1909:1: error: the control flow of function 'tok_get' does 
not match its profile data (counter 'arcs') [-Werror=coverage-mismatch]
Parser/tokenizer.c:1909:1: error: the control flow of function 'tok_get' does 
not match its profile data (counter 'single') [-Werror=coverage-mismatch]
Parser/tokenizer.c:1909:1: error: the control flow of function 'tok_get' does 
not match its profile data (counter 'time_profiler') [-Werror=coverage-mismatch]
gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall 
   -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter 
-Wno-missing-field-initializers -Werror=implicit-function-declaration 
-fprofile-use -fprofile-correction  -I. -I./Include   -fPIC -DPy_BUILD_CORE -o 
Objects/bytes_methods.o Objects/bytes_methods.c
Objects/boolobject.c: In function 'bool_xor':
Objects/boolobject.c:185:1: error: the control flow of function 'bool_xor' does 
not match its profile data (counter 'arcs') [-Werror=coverage-mismatch]
 };
 ^
Objects/boolobject.c:185:1: error: the control flow of function 'bool_xor' does 
not match its profile data (counter 'indirect_call') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_xor' does 
not match its profile data (counter 'time_profiler') [-Werror=coverage-mismatch]
Objects/boolobject.c: In function 'bool_or':
Objects/boolobject.c:185:1: error: the control flow of function 'bool_or' does 
not match its profile data (counter 'arcs') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_or' does 
not match its profile data (counter 'indirect_call') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_or' does 
not match its profile data (counter 'time_profiler') [-Werror=coverage-mismatch]
Objects/boolobject.c: In function 'bool_and':
Objects/boolobject.c:185:1: error: the control flow of function 'bool_and' does 
not match its profile data (counter 'arcs') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_and' does 
not match its profile data (counter 'indirect_call') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_and' does 
not match its profile data (counter 'time_profiler') [-Werror=coverage-mismatch]
Objects/boolobject.c: In function 'bool_new':
Objects/boolobject.c:185:1: error: the control flow of function 'bool_new' does 
not match its profile data (counter 'arcs') [-Werror=coverage-mismatch]
Objects/boolobject.c:185:1: error: the control flow of function 'bool_new' does 
not match its profile data (counter 'time_profiler') [-Werror=coverage-mismatch]
gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall 
   -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter 
-Wno-missing-field-initializers -Werror=implicit-function-declaration 
-fprofile-use -fprofile-correction  -I. -I./Include   -fPIC -DPy_BUILD_CORE -o 
Objects/bytearrayobject.o Objects/bytearrayobject.c
cc1: some warnings being treated as errors
Makefile:1652: recipe for target 'Objects/boolobject.o' failed
make[1]: *** [Objects/boolobject.o] Error 1
make[1]: *** Waiting for unfinished jobs
Objects/abstract.c: In function 'PyObject_GetIter':
Objects/abstract.c:2642:1: error: the control flow of function 
'PyObject_GetIter' does not match its profile data (counter 'arcs') 
[-Werror=coverage-mismatch]
 }
 ^
Objects/abstract.c:2642:1: error: the control flow of function 
'PyObject_GetIter' does not match its profile data (counter 'indirect_call') 
[-Werror=coverage-mismatch]
Objects/abstract.c:2642:1: error: the control flow of function 
'PyObject_GetIter' does not match its profile data (counter 'time_profiler') 
[-Werror=coverage-mismatch]
Objects/abstract.c: In function 'PySequence_InPlaceRepeat':
Objects/abstract.c:2642:1: error: the control flow of function 
'PySequence_InPlaceRepeat' does not match its profile data (counter 'arcs') 
[-Werror=coverage-mismatch]
Objects/abstract.c:2642:1: error: the control flow of function 
'PySequence_InPlaceRepeat' does not match its profile dat
```

[issue31963] AMD64 Debian PGO 3.x buildbot: compilation failed with an internal compiler error in create_edge

2020-05-24 Thread Albert Christianto


Albert Christianto  added the comment:

Sorry for my late response. 
Well, thank you very much for your fast response to help me. 
Actually, I had solved the problem three hours after I posted my question, 
hehehe.
I found this tip about uncleaned files left behind after executing "make clean" 
(https://bugs.python.org/msg346553). So, I decided to wipe out all Python 3.7 
files from my computer and repeat the installation process. It worked!!
Anyway, thank you again for your response and your tip. It really made my day 
to know I am not alone while doing some Python debugging. (^v^)
best regards,
Albert Christianto

--

___
Python tracker 
<https://bugs.python.org/issue31963>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41468] Unrecoverable server exiting

2020-08-03 Thread Albert Francis


New submission from Albert Francis :

How do I solve "Unrecoverable, server exiting" in IDLE?

--
assignee: terry.reedy
components: IDLE
messages: 374766
nosy: albertpython, terry.reedy
priority: normal
severity: normal
status: open
title: Unrecoverable server exiting
type: behavior
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue41468>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41468] Unrecoverable server exiting

2020-08-04 Thread Albert Francis


Albert Francis  added the comment:

Dear Sir,

I got the solution. Thanks

On Tue, 4 Aug 2020, 16:55 Terry J. Reedy,  wrote:

>
> Terry J. Reedy  added the comment:
>
> One should never see this message.  As far as I remember, I have seen it
> only once in the last several years.  It is intended to indicate a 'random'
> non-reproducible glitch in the communication machinery connecting the IDLE
> GUI process and the user code execution process.  The most likely solution
> is to retry what you were doing.  A report should only be made is the error
> is reproducible. "Unrecoverable, server exiting" is meant to convey this,
> but the meaning should be explained in the message.  So I consider this a
> doc improvement issue.
>
> If you report a repeatable failure, then we can look into that.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue41468>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue41468>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41468] Unrecoverable server exiting

2020-08-10 Thread Albert Francis


Albert Francis  added the comment:

Got it, thanks!

On Mon, 10 Aug 2020, 19:26 Terry J. Reedy,  wrote:

>
> Terry J. Reedy  added the comment:
>
> Test error fixed on issue 41514.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue41468>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue41468>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41538] Allow customizing python interpreter in venv.EnvBuilder

2020-08-13 Thread Albert Cervin


New submission from Albert Cervin :

When creating a virtualenv using venv.EnvBuilder, it always uses 
sys._base_executable. However, in some embedded cases (Blender being one 
example), it is not set to a valid Python executable. The proposal is to add a 
keyword parameter to the EnvBuilder constructor for specifying which python 
interpreter to use, much like the -p flag in virtualenv.

--
components: Library (Lib)
messages: 375293
nosy: abbec
priority: normal
severity: normal
status: open
title: Allow customizing python interpreter in venv.EnvBuilder
type: enhancement
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue41538>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41538] Allow customizing python interpreter in venv.EnvBuilder

2020-08-13 Thread Albert Cervin


Change by Albert Cervin :


--
keywords: +patch
pull_requests: +20981
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21854

___
Python tracker 
<https://bugs.python.org/issue41538>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30250] StringIO truncate behavior of current position

2017-05-03 Thread Albert Zeyer

New submission from Albert Zeyer:

The doc says that StringIO.truncate should not change the current position.
Consider this code:

  try:
      import StringIO  # Python 2
  except ImportError:
      import io as StringIO  # Python 3

  def assert_equal(a, b):  # small helper so the snippet is self-contained
      assert a == b, "%r != %r" % (a, b)

  buf = StringIO.StringIO()
  assert_equal(buf.getvalue(), "")
  print("buf: %r" % buf.getvalue())

  buf.write("hello")
  print("buf: %r" % buf.getvalue())
  assert_equal(buf.getvalue(), "hello")
  buf.truncate(0)
  print("buf: %r" % buf.getvalue())
  assert_equal(buf.getvalue(), "")

  buf.write("hello")
  print("buf: %r" % buf.getvalue())
  assert_equal(buf.getvalue(), "\x00\x00\x00\x00\x00hello")
  buf.truncate(0)
  print("buf: %r" % buf.getvalue())
  assert_equal(buf.getvalue(), "")


On Python 3.6, I get the output:

buf: ''
buf: 'hello'
buf: ''
buf: '\x00\x00\x00\x00\x00hello'

On Python 2.7, I get the output:

buf: ''
buf: 'hello'
buf: ''
buf: 'hello'


Thus it seems that Python 2.7 StringIO.truncate actually resets the position in 
this case, or there is some other bug in Python 2.7. At least from the doc, it 
seems that the Python 3.6 behavior is the expected behavior.
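For code that must behave the same on both versions, an explicit seek() after truncate() sidesteps the discrepancy; a small sketch:

```python
import io

buf = io.StringIO()
buf.write("hello")
buf.truncate(0)
buf.seek(0)  # truncate() leaves the position at 5; rewind explicitly
buf.write("world")
print(repr(buf.getvalue()))  # 'world'
```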

--
components: IO
messages: 292866
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: StringIO truncate behavior of current position
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue30250>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31814] subprocess_fork_exec more stable with vfork

2017-10-18 Thread Albert Zeyer

New submission from Albert Zeyer :

subprocess_fork_exec currently calls fork().

I propose to use vfork() or posix_spawn() or syscall(SYS_clone, SIGCHLD, 0) 
instead if possible and if there is no preexec_fn. The difference would be that 
fork() will call any atfork handlers (registered via pthread_atfork()), while 
the suggested calls would not.

There are cases where atfork handlers are registered which are not safe to be 
called e.g. in multi-threaded environments. In the case of subprocess_fork_exec 
without preexec_fn, there is no need to call those atfork handlers, so avoiding 
this could avoid potential problems. It's maybe acceptable if a pure fork() 
without exec() doesn't work in this case anymore, but there is no reason that a 
fork()+exec() should not work in any such cases. This is fixed by my proposed 
solution.

An example case is OpenBLAS with OpenMP, which registers an atfork handler that 
is not safe to call while other threads are running.
See here:
https://github.com/tensorflow/tensorflow/issues/13802
https://github.com/xianyi/OpenBLAS/issues/240
https://trac.sagemath.org/ticket/22021

About fork+exec without the atfork handlers, see here for alternatives (like 
vfork):
https://stackoverflow.com/questions/46810597/forkexec-without-atfork-handlers/
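A hedged sketch of the kind of API the report asks for: os.posix_spawn (added later, in Python 3.8) performs fork+exec in one step and may be implemented via vfork()/posix_spawn(3) internally, so pthread_atfork() handlers are not run the way a plain fork() would run them (assuming a POSIX platform):

```python
import os
import sys

# Spawn a child process without going through Python-level fork().
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('child ok')"],
    dict(os.environ),
)
_, status = os.waitpid(pid, 0)
```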

--
components: Interpreter Core
messages: 304587
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: subprocess_fork_exec more stable with vfork
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue31814>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31814] subprocess_fork_exec more stable with vfork

2017-10-19 Thread Albert Zeyer

Albert Zeyer  added the comment:

This is a related issue, although with different argumentation:
https://bugs.python.org/issue20104

--

___
Python tracker 
<https://bugs.python.org/issue31814>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31814] subprocess_fork_exec more stable with vfork

2017-10-20 Thread Albert Zeyer

Albert Zeyer  added the comment:

Here is some more background for a case where this occurs:
https://stackoverflow.com/questions/46849566/multi-threaded-openblas-and-spawning-subprocesses

My proposal here would fix this.

--

___
Python tracker 
<https://bugs.python.org/issue31814>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24564] shutil.copytree fails when copying NFS to NFS

2017-10-25 Thread Albert Zeyer

Albert Zeyer  added the comment:

I'm also affected by this, with Python 3.6. My home directory is on a 
ZFS-backed NFS share.
See here for details:
https://github.com/Linuxbrew/homebrew-core/issues/4799

Basically:
Copying setuptools.egg-info to 
/u/zeyer/.linuxbrew/lib/python3.6/site-packages/setuptools-36.5.0-py3.6.egg-info
error: [Errno 5] Input/output error: 
'/u/zeyer/.linuxbrew/lib/python3.6/site-packages/setuptools-36.5.0-py3.6.egg-info/zip-safe'

Note that other tools, such as `mv` and `cp`, also report errors about setting 
`system.nfs4_acl`. But they just seem to ignore that and go on. I think this is 
the right thing to do here. You could print a warning about that, but then just 
go on. Maybe especially just for `system.nfs4_acl`.
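One way to get that mv/cp-like behavior today is a custom copy_function that downgrades metadata I/O errors to warnings; a hedged sketch (the helper name and the errno whitelist are assumptions, not an existing shutil API):

```python
import errno
import os
import shutil
import tempfile
import warnings

def copy_tolerant(src, dst, *, follow_symlinks=True):
    """Copy file data; only warn if metadata (e.g. NFSv4 ACLs) can't be set."""
    shutil.copyfile(src, dst, follow_symlinks=follow_symlinks)
    try:
        shutil.copystat(src, dst, follow_symlinks=follow_symlinks)
    except OSError as exc:
        if exc.errno not in (errno.EIO, errno.EPERM):
            raise
        warnings.warn("metadata not copied for %r: %s" % (src, exc))
    return dst

# usable as: shutil.copytree(srcdir, dstdir, copy_function=copy_tolerant)
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "a.txt")
dst = os.path.join(tmpdir, "b.txt")
with open(src, "w") as f:
    f.write("data")
copy_tolerant(src, dst)
```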

--
nosy: +Albert.Zeyer
versions: +Python 3.6

___
Python tracker 
<https://bugs.python.org/issue24564>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10049] Add a "no-op" (null) context manager to contextlib (Rejected: use contextlib.ExitStack())

2017-11-09 Thread Albert Zeyer

Albert Zeyer  added the comment:

Note that this indeed seems confusing. I just found this thread by searching 
for a null context manager, because TensorFlow introduced _NullContextmanager 
in their code and I wondered why this is not provided by the Python stdlib.
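For the record, this did eventually land: contextlib.nullcontext was added in Python 3.7, so the hand-rolled helper is no longer needed there:

```python
from contextlib import nullcontext  # stdlib since Python 3.7

# nullcontext() is a no-op; nullcontext(value) additionally binds `as value`.
with nullcontext():
    pass

with nullcontext(42) as answer:
    result = answer
```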

--
nosy: +Albert.Zeyer

___
Python tracker 
<https://bugs.python.org/issue10049>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33827] Generators with lru_cache can be non-intuitive

2018-06-11 Thread Michel Albert


New submission from Michel Albert :

Consider the following code:

# filename: foo.py

from functools import lru_cache


@lru_cache(10)
def bar():
yield 10
yield 20
yield 30


# This loop will work as expected
for row in bar():
print(row)

# This loop will not loop over anything.
# The cache will return an already consumed generator.
for row in bar():
print(row)


This behaviour is natural, but it is almost invisible to the caller of "bar".

The main issue is one of "surprise". When inspecting the return value of "bar" 
it is clear that it is a generator:

>>> import foo
>>> foo.bar()
<generator object bar at 0x...>

**Very** careful inspection will reveal that each call will return the same 
generator instance.

So to an observant user the following is an expected behaviour:

>>> result = foo.bar()
>>> for row in result:
...    print(row)
...
10
20
30
>>> for row in result:
... print(row)
...
>>>

However, the following is not:

>>> import foo
>>> result = foo.bar()
>>> for row in result:
... print(row)
...
10
20
30
>>> result = foo.bar()
>>> for row in result:
... print(row)
...
>>>


Would it make sense to emit a warning (or even raise an exception) in 
`lru_cache` if the return value of the cached function is a generator?

I can think of situation where it makes sense to combine the two. For example 
the situation I am currently in:

I have a piece of code which loops several times over the same SNMP table. 
Having a generator makes the application far more responsive. And having the 
cache makes it even faster on subsequent calls. But the gain I get from the 
cache is bigger than the gain from the generator. So I would be okay with 
converting the result to a list before storing it in the cache.

What is your opinion on this issue? Would it make sense to add a warning?
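The list-before-caching workaround mentioned at the end can be sketched like this (the generator here is a stand-in for the SNMP table walk):

```python
from functools import lru_cache

def snmp_table_rows():
    # stand-in for the expensive, lazily produced table
    yield 10
    yield 20
    yield 30

@lru_cache(maxsize=10)
def bar():
    # materialize before caching, so every call returns a re-iterable value
    return tuple(snmp_table_rows())

first = list(bar())
second = list(bar())  # served from the cache, iterates fine again
```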

--
messages: 319279
nosy: exhuma
priority: normal
severity: normal
status: open
title: Generators with lru_cache can be non-intuitive
type: behavior

___
Python tracker 
<https://bugs.python.org/issue33827>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14658] Overwriting dict.__getattr__ is inconsistent

2012-04-23 Thread Albert Zeyer

New submission from Albert Zeyer :

```
class Foo1(dict):
def __getattr__(self, key): return self[key]
def __setattr__(self, key, value): self[key] = value

class Foo2(dict):
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__

o1 = Foo1()
o1.x = 42
print(o1, o1.x)

o2 = Foo2()
o2.x = 42
print(o2, o2.x)
```

With CPython 2.5, 2.6 (similarly in 3.2), I get:
({'x': 42}, 42)
({}, 42)

With PyPy 1.5.0, I get the expected output::
({'x': 42}, 42)
({'x': 42}, 42)

I asked this also on SO: 
http://stackoverflow.com/questions/6305267/python-inconsistence-in-the-way-you-define-the-function-setattr

From the answers, I am not exactly sure whether this is considered a bug in 
CPython or not. Anyway, I just wanted to post this here.
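For what it's worth, the variant with explicitly defined methods (Foo1) is the portable one; a sketch that also raises AttributeError for missing keys, as the attribute protocol expects:

```python
class AttrDict(dict):
    # Explicit method definitions behave consistently across implementations,
    # unlike aliasing dict.__getitem__ directly on the class.
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value

d = AttrDict()
d.x = 42
```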

--
components: None
messages: 159099
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: Overwriting dict.__getattr__ is inconsistent
type: behavior
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue14658>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15885] @staticmethod __getattr__ doesn't work

2012-09-08 Thread Albert Zeyer

New submission from Albert Zeyer:

Code:

```
class Wrapper:
@staticmethod
def __getattr__(item):
return repr(item) # dummy

a = Wrapper()
print(a.foo)
```

Expected output: 'foo'

Actual output with Python 2.7:

Traceback (most recent call last):
  File "test_staticmethodattr.py", line 7, in 
print(a.foo)
TypeError: 'staticmethod' object is not callable

Python 3.2 does return the expected ('foo').
PyPy returns the expected 'foo'.

--
components: Interpreter Core
messages: 170070
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: @staticmethod __getattr__ doesn't work
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue15885>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15885] @staticmethod __getattr__ doesn't work

2012-09-09 Thread Albert Zeyer

Albert Zeyer added the comment:

I don't quite understand. Shouldn't __getattr__ also work in old-style classes?

And the error itself ('staticmethod' object is not callable), shouldn't that be 
impossible?

--

___
Python tracker 
<http://bugs.python.org/issue15885>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20062] Remove emacs page from devguide

2014-03-18 Thread Albert Looney

Albert Looney added the comment:

Removing the Emacs page from the devguide.

I am certainly new at this, so if this is incorrect please provide feedback.

--
keywords: +patch
nosy: +alooney
Added file: http://bugs.python.org/file34503/index.patch

___
Python tracker 
<http://bugs.python.org/issue20062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20062] Remove emacs page from devguide

2014-03-20 Thread Albert Looney

Albert Looney added the comment:

The patch should be fixed now and removes the emacs.rst file.

It seems to me that the Emacs information that is being deleted is already in 
the wiki.

https://wiki.python.org/moin/EmacsEditor

--
Added file: http://bugs.python.org/file34542/devguide.patch

___
Python tracker 
<http://bugs.python.org/issue20062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20815] ipaddress unit tests PEP8

2014-03-23 Thread Michel Albert

Michel Albert added the comment:

It seems the contributor agreement form has been processed. As I understand it, 
the asterisk on my name confirms this.

I also verified that this patch cleanly applies to the most recent revision.

--

___
Python tracker 
<http://bugs.python.org/issue20815>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20825] containment test for "ip_network in ip_network"

2014-03-23 Thread Michel Albert

Michel Albert added the comment:

Hi again,

The contribution agreement has been processed, and the patch still cleanly 
applies to the latest revision of branch `default`.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20826] Faster implementation to collapse consecutive ip-networks

2014-03-23 Thread Michel Albert

Michel Albert added the comment:

Sorry for the late reply. I wanted to take some time and give a more detailed 
explanation. At least to the best of my abilities :)

I attached a zip-file with my quick-and-dirty test-rig. The zip contains:

 * gendata.py -- The script I used to generate test-data
 * testdata.lst -- My test-data set (for reproducability)
 * tester.py -- A simple script using ``timeit.timeit``.

I am not sure how sensitive the data is I am working with, so I prefer not to 
put any of the real data on a public forum. Instead, I wrote a small script 
which generates a data-set which makes the performance difference visible 
(``gendata.py``). The data which I processed actually created an even worse 
case, but it's difficult to reproduce. In my case, the data-set I used is in 
the file named ``testdata.lst``.

I then ran the operation 5 times using ``timeit`` (tester.py).

Let me also outline an explanation to what it happening:

It is possible, that through one "merge" operation, a second may become 
possible. For the sake of readability, let's use IPv4 addresses, and consider 
the following list:

[a_1, a_2, ..., a_n, 192.168.1.0/31, 192.168.1.2/32, 192.168.1.3/32, b_1, 
b_2, ..., b_n]

This can be reduced to

[a_1, a_2, ..., a_n, 192.168.1.0/31, 192.168.1.2/31, b_1, b_2, ..., b_n]

Which in turn can then be reduced to:

[a_1, a_2, ..., a_n, 192.168.1.0/30, b_1, b_2, ..., b_n]
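The merge cascade above can be reproduced directly with the stdlib entry point whose implementation is being discussed:

```python
import ipaddress

nets = [ipaddress.ip_network('192.168.1.0/31'),
        ipaddress.ip_network('192.168.1.2/32'),
        ipaddress.ip_network('192.168.1.3/32')]

# The two /32s merge into 192.168.1.2/31, which then merges with the /31
# into a single /30: exactly the two-step reconciliation described above.
merged = list(ipaddress.collapse_addresses(nets))
print(merged)  # [IPv4Network('192.168.1.0/30')]
```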

The current implementation, sets a boolean (``optimized``) to ``True`` if any 
merge has been performed. If yes, it re-runs through the whole list until no 
optimisation is done. Those re-runs also include [a1..an] and [b1..bn], which 
is unnecessary. With the given data-set, this gives the following result:

Execution time: 48.27790632040014 seconds
./python tester.py  244.29s user 0.06s system 99% cpu 4:04.51 total

With the shift/reduce approach, only as many nodes are visited as necessary. If 
a "reduce" is made, it "backtracks" as much as possible, but not further. So in 
the above example, nodes [a1..an] will only be visited once, and [b1..bn] will 
only be visited once the complete optimisation of the example addresses has 
been performed. With the given data-set, this gives the following result:

Execution time: 20.298685277199912 seconds
./python tester.py  104.20s user 0.14s system 99% cpu 1:44.58 total

If my thoughts are correct, both implementations should have a similar 
"best-case", but the "worst-case" differs significantly. I am not well-versed 
with the Big-O notation, especially the best/average/worst case difference. 
Neither am I good at math. But I believe both are, strictly speaking, O(n). 
More precisely, O(k*n), where k is proportional to the number of reconciliation 
steps needed (that is, when one merger makes another merger possible). But k is 
much smaller in the shift/reduce approach, as only as many elements need to be 
revisited as necessary, instead of all of them.

--
Added file: http://bugs.python.org/file34583/testrig.zip

___
Python tracker 
<http://bugs.python.org/issue20826>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20825] containment test for "ip_network in ip_network"

2014-03-23 Thread Michel Albert

Michel Albert added the comment:

I made the changes mentioned by r.david.murray

I am not sure if the modifications in ``Doc/whatsnew/3.5.rst`` are correct. I 
tried to follow the notes at the top of the file, but it's not clear to me if 
it should have gone into ``News/Misc`` or into ``Doc/whatsnew/3.5.rst``.

On another side-note: I attached this as an ``-r3`` file, but I could have 
replaced the existing patch as well. Which method is preferred? Replacing 
existing patches on the issue or adding new revisions?

--
Added file: http://bugs.python.org/file34588/net-in-net-r3.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20062] Remove emacs page from devguide

2014-03-26 Thread Albert Looney

Changes by Albert Looney :


Removed file: http://bugs.python.org/file34503/index.patch

___
Python tracker 
<http://bugs.python.org/issue20062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16385] evaluating dict with repeated keys gives no error/warnings

2012-11-02 Thread Albert Ferras

New submission from Albert Ferras:

I normally use dictionaries for configuration purposes in python, but there's a 
problem where I have a dictionary with many key<->values and one of the keys is 
repeated.
For example:

lives_in = { 'lion': ['Africa', 'America'],
             'parrot': ['Europe'],
             # ... 100+ more rows here
             'lion': ['Europe'],
             # ... 100+ more rows here
           }

will end up with lives_in['lion'] = ['Europe']. There's no way to detect 
that I've written a mistake in the code because Python won't tell me there's a 
duplicate key assigned. It's easy to see when you have few keys but hard when 
you've got many.

I think it should at least raise a warning when this happens.
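Pending such a warning in the interpreter, the check can be done with the ast module (linters like pyflakes implement essentially this); a minimal sketch, only handling constant keys:

```python
import ast
import warnings

def warn_duplicate_dict_keys(source, filename="<config>"):
    """Warn about literal dict keys that appear more than once."""
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Dict):
            seen = set()
            for key in node.keys:  # key is None for **expansions; skip those
                if isinstance(key, ast.Constant):
                    if key.value in seen:
                        warnings.warn("duplicate key %r on line %d"
                                      % (key.value, key.lineno))
                    seen.add(key.value)

warn_duplicate_dict_keys("lives_in = {'lion': ['Africa'], 'lion': ['Europe']}")
```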

--
components: Interpreter Core
messages: 174507
nosy: Albert.Ferras
priority: normal
severity: normal
status: open
title: evaluating dict with repeated keys gives no error/warnings
type: behavior
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue16385>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16385] evaluating dict with repeated keys gives no warnings/errors

2012-11-02 Thread Albert Ferras

Albert Ferras added the comment:

I would use json, but it allows me to set list/strings, etc.. not python 
objects like I'd want

--

___
Python tracker 
<http://bugs.python.org/issue16385>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16385] evaluating dict with repeated keys gives no warnings/errors

2012-11-02 Thread Albert Ferras

Albert Ferras added the comment:

sorry: *it only allows me

--

___
Python tracker 
<http://bugs.python.org/issue16385>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16385] evaluating dict with repeated keys gives no warnings/errors

2012-11-02 Thread Albert Ferras

Albert Ferras added the comment:

also, it creates confusion when this happens

--

___
Python tracker 
<http://bugs.python.org/issue16385>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17294] compile-flag for single-execution to return value instead of printing it

2013-10-14 Thread Albert Zeyer

Albert Zeyer added the comment:

I don't know in advance whether I have an expression, and I want it to also 
work if it is not an expression. Basically, I really want the 'single' 
behavior. (My not-so-uncommon use case: an interactive shell where the output 
on stdout does not make sense. Also, I might want to save references to 
returned values.)

sys.displayhook is not an option in any seriously big project because you don't 
want to overwrite that globally.

--
resolution: rejected -> 
status: closed -> open

___
Python tracker 
<http://bugs.python.org/issue17294>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17294] compile-flag for single-execution to return value instead of printing it

2013-10-22 Thread Albert Zeyer

Albert Zeyer added the comment:

Thanks a lot for the long and detailed response! I didn't mean to start a 
header war; I thought that my request was misunderstood and thus the header 
changes were by mistake. But I guess it is a good suggestion to leave that 
decision to a core dev.

I still think that this would have been more straightforward in the first 
place:

for statement in user_input():
  if statement:
value = exec(compile(statement, '', 'single'))
if value is not None: print value

Because it is more explicit. But because introducing such an incompatible 
change is bad, I thought it's a good idea to add another compile-mode.

Your `ee_compile` seems somewhat inefficient to me because it calls `compile` 
twice, and I don't like "try one thing, then try another" approaches very 
much as rock-solid solutions. (Of course, neither is 
`interactive_py_compile`; that one just shows what I want.)
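A sketch of the "compile twice" approach discussed above (the thread's `ee_compile`; the exact behavior of that helper is an assumption here, as is the `eval_or_exec` name): try 'eval' mode first, and fall back to 'exec' mode for statements.

```python
def eval_or_exec(source, env):
    # Try compiling as an expression; statements raise SyntaxError here.
    try:
        code = compile(source, '<input>', 'eval')
    except SyntaxError:
        # Fall back to statement execution; there is no value to return.
        exec(compile(source, '<input>', 'exec'), env)
        return None
    return eval(code, env)

env = {}
print(eval_or_exec('1 + 1', env))   # expression: its value comes back
print(eval_or_exec('x = 5', env))   # statement: None
print(env['x'])                     # the statement did run
```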

--

___
Python tracker 
<http://bugs.python.org/issue17294>
___



[issue4613] Can't figure out where SyntaxError: can not delete variable 'x' referenced in nested scope us coming from in python shows no traceback

2014-06-26 Thread Albert Hopkins

Albert Hopkins added the comment:

You can close this one out.  I don't even remember the use case anymore.

--

___
Python tracker 
<http://bugs.python.org/issue4613>
___



[issue22020] tutorial 9.10. Generators statement error

2014-07-20 Thread Albert Ho

New submission from Albert Ho:

https://docs.python.org/3/tutorial/classes.html
>>> for char in reverse('golf'):

I found that reverse didn't work,
and I checked the doc:
https://docs.python.org/3.4/library/functions.html#reversed
reversed(seq)

I guess they just forgot to change the statement.
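For context, the tutorial's ``reverse`` in section 9.10 is a user-defined generator introduced just above that loop, not the builtin ``reversed`` (the exact body below is reproduced from memory, so treat it as an assumption); with the definition in scope, the quoted loop works:

```python
# Generator from the tutorial's 9.10 example: yields items back to front.
def reverse(data):
    for index in range(len(data) - 1, -1, -1):
        yield data[index]

print(''.join(reverse('golf')))  # flog
```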

--
messages: 223560
nosy: rt135792005
priority: normal
severity: normal
status: open
title: tutorial 9.10. Generators statement error
type: compile error
versions: Python 3.4

___
Python tracker 
<http://bugs.python.org/issue22020>
___



[issue20276] ctypes._dlopen should not force RTLD_NOW

2014-01-15 Thread Albert Zeyer

New submission from Albert Zeyer:

On MacOSX, when you build an ARC-enabled Dylib with backward compatibility for 
e.g. MacOSX 10.6, some unresolved functions like 
`_objc_retainAutoreleaseReturnValue` might end up in your Dylib.

Some reference about the issue:
1. http://stackoverflow.com/q/21119425/133374
2. http://osdir.com/ml/python.ctypes/2006-10/msg00029.html
3. https://groups.google.com/forum/#!topic/comp.lang.python/DKmNGwyLl3w

Thus, RTLD_NOW is often not an option for MacOSX.

This affects mostly `py_dl_open()` from ctypes.
But it is also related how you set `dlopenflags` in `PyInterpreterState_New()`.

I suggest making RTLD_LAZY the default on MacOSX (or is there a good reason 
not to?).
Also, ctypes should have options for both RTLD_NOW and RTLD_LAZY so that both 
can be used.

This is also consistent with the behavior of the [dl 
module](http://docs.python.org/2/library/dl.html).
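What the requested behavior looks like from the caller's side in Python 3, where ``os`` exposes the ``RTLD_*`` constants and ``ctypes.CDLL`` accepts a ``mode`` argument (the library name lookup and a POSIX platform are assumptions in this sketch). Note that, as this report describes, ctypes currently ORs ``RTLD_NOW`` into the mode, so the ``RTLD_LAZY`` request is effectively ignored today.

```python
import ctypes
import ctypes.util
import os

# Ask for lazy binding explicitly; the fallback name assumes a
# typical Linux system.
name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(name, mode=os.RTLD_LAZY)
print(libm._name)
```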

--
components: ctypes
messages: 208226
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: ctypes._dlopen should not force RTLD_NOW
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue20276>
___



[issue20815] ipaddress unit tests PEP8

2014-03-01 Thread Michel Albert

New submission from Michel Albert:

While I was looking at the source of the ipaddress unit-tests, I noticed a 
couple of PEP8 violations. This patch fixes these (verified using the ``pep8`` 
tool).

There are no behavioural changes. Only white-space.

Test-cases ran successfully before and after the change was made.

--
components: Tests
files: test_ipaddress_pep8.patch
keywords: patch
messages: 212497
nosy: exhuma, ncoghlan, pmoody
priority: normal
severity: normal
status: open
title: ipaddress unit tests PEP8
versions: Python 3.5
Added file: http://bugs.python.org/file34257/test_ipaddress_pep8.patch

___
Python tracker 
<http://bugs.python.org/issue20815>
___



[issue20815] ipaddress unit tests PEP8

2014-03-01 Thread Michel Albert

Michel Albert added the comment:

Thanks for the quick reply!

I did not know the pep8 tool added its own rules :( I read PEP 8 a long 
while ago and have since relied on the tool to do "the right thing". Many of 
its idiosyncrasies have probably made their way into my blood since :(

And you're right: The PEP is actually explicitly saying that it's okay to leave 
out the white-space to make operator precedence more visible (for reference: 
http://legacy.python.org/dev/peps/pep-0008/#other-recommendations).

I will undo those changes.

Is there anything else that immediately caught your eye so I can address it in 
the update?

--

___
Python tracker 
<http://bugs.python.org/issue20815>
___



[issue20815] ipaddress unit tests PEP8

2014-03-02 Thread Michel Albert

Michel Albert added the comment:

Here's a new patch which addresses white-space issues without touching the old 
tests.

--
Added file: http://bugs.python.org/file34265/test_ipaddress_pep8-r3.patch

___
Python tracker 
<http://bugs.python.org/issue20815>
___



[issue20825] containment test for "ip_network in ip_network"

2014-03-02 Thread Michel Albert

New submission from Michel Albert:

The ipaddress module always returns ``False`` when testing if a network is 
contained in another network. However, I feel this should be a valid test. No? 
Is there any reason why this is fixed to ``False``?

In case not, here's a patch which implements this test.

Note that by design, IP networks can never overlap "half-way". In 
cases where this should return ``False``, you either have a network that lies 
completely "to the left", or completely "to the right". In the case it should 
return ``True`` the smaller network is always completely bounded by the larger 
network's network- and broadcast address.

I needed to change two containment tests as they were in conflict with this 
change. These tests were ``self.v6net not in self.v6net`` and ``self.v4net not 
in self.v4net``. The reason these two failed is that the new containment test 
checks the target range *including* broadcast and network address. So ``a in 
a`` always returns true.

This could be changed by excluding one of the two boundaries, and by that 
forcing the "containee" to be smaller than the "container". But it would make 
the check a bit more complex, as you would need to add an exception for the 
case that both are identical.

Backwards compatibility is a good question. Strictly put, this would break it. 
However, I can't think of any reason why anyone would expect ``a in a`` to be 
false in the case of IP-Addresses.

Just as a side note, I work at our national network provider and am 
currently implementing a tool dealing with a lot of IP addresses. We have 
run into the need to test ``net in net`` a couple of times and ran into bugs 
because the stdlib returns ``False`` where you technically expect it to be 
``True``.

--
components: Library (Lib)
files: net-in-net.patch
keywords: patch
messages: 212550
nosy: exhuma, ncoghlan, pmoody
priority: normal
severity: normal
status: open
title: containment test for "ip_network in ip_network"
versions: Python 3.5
Added file: http://bugs.python.org/file34266/net-in-net.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2014-03-02 Thread Michel Albert

Michel Albert added the comment:

Hmm... after thinking about this, I kind of agree. I was about to state 
something about the fact that you could consider networks like an "ordered 
set". And use that to justify my addition :) But the more I think about it, the 
more I am okay with your point.

I quickly tested the following:

>>> a = ip_network('10.0.0.0/24')
>>> b = ip_network('10.0.0.0/30')
>>> a <= b
True
>>> b <= a
False

Which is wrong when considering "containment".

What about an instance-method? Something like ``b.contained_in(a)``?

At least that would be explicit and avoids confusion. Because the existing 
``__lt__`` implementation makes sense in the way it's already used.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20826] Faster implementation to collapse consecutive ip-networks

2014-03-02 Thread Michel Albert

New submission from Michel Albert:

This alternative implementation runs over the ``addresses`` collection only 
once, and "backtracks" only if necessary. Inspired by a "shift-reduce" approach.

Technically both are O(n), so the best case is always the same. But the old 
implementation runs over the *complete* list multiple times until it cannot 
make any more optimisations. The new implementation only repeats the 
optimisation on elements which require reconciliation.

Tests on a local machine have shown a considerable increase in speed on large 
collections of elements (iirc about twice as fast on average).
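For reference, the stdlib entry point being optimized here, with its expected result on a small input (example input mine):

```python
# ipaddress.collapse_addresses merges consecutive networks: two
# adjacent /25s collapse into the covering /24.
from ipaddress import collapse_addresses, ip_network

nets = [ip_network('192.0.2.0/25'), ip_network('192.0.2.128/25')]
print(list(collapse_addresses(nets)))  # [IPv4Network('192.0.2.0/24')]
```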

--
components: Library (Lib)
files: faster-collapse-addresses.patch
keywords: patch
messages: 212553
nosy: exhuma, ncoghlan, pmoody
priority: normal
severity: normal
status: open
title: Faster implementation to collapse consecutive ip-networks
type: performance
versions: Python 3.5
Added file: http://bugs.python.org/file34267/faster-collapse-addresses.patch

___
Python tracker 
<http://bugs.python.org/issue20826>
___



[issue20815] ipaddress unit tests PEP8

2014-03-03 Thread Michel Albert

Michel Albert added the comment:

I strongly agree with Raymond's points! They are all valid.

I should note that I submitted this patch - as mentioned by Nick - to 
familiarise myself with the patch submission process. I decided to make 
harmless changes which won't risk breaking anything.

--

___
Python tracker 
<http://bugs.python.org/issue20815>
___



[issue20825] containment test for "ip_network in ip_network"

2014-03-04 Thread Michel Albert

Michel Albert added the comment:

I second "supernet_of" and "subnet_of". I'll implement it as soon as I get 
around it.

I have been thinking about using ``in`` and ``<=`` and, while I initially liked 
the idea for tests, I find both operators too ambiguous.

With ``in`` there's the already mentioned ambiguity of containment/inclusion. 
And ``<=`` could mean a smaller size (has fewer individual hosts), but could 
also mean that it is a subnet, or even that it is "located to the left".

Naming it ``subnet_of`` makes it 100% clear what it does.

Currently, ``a <= b`` returns ``True`` if a comes before/lies on the left of b.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2014-03-06 Thread Michel Albert

Michel Albert added the comment:

Here's a new patch implementing both ``subnet_of`` and ``supernet_of``.

It also contains the relevant docs and unit-tests.

--
Added file: http://bugs.python.org/file34292/net-in-net-r2.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2014-03-11 Thread Michel Albert

Michel Albert added the comment:

Yes. I signed it last Friday if I recall correctly.

As I understood it, the way for you to tell if it's done is that my username 
will be followed by an asterisk.

But I'm not in a hurry. Once I get the confirmation, I can just ping you again 
via a comment here, so you don't need to monitor it yourself.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20815] ipaddress unit tests PEP8

2014-03-12 Thread Michel Albert

Michel Albert added the comment:

Did so already last weekend. I suppose it will take some time to be processed.

I can ping you via a message here once I receive the confirmation.

--

___
Python tracker 
<http://bugs.python.org/issue20815>
___



[issue6631] urlparse.urlunsplit() can't handle relative files (for urllib*.open()

2009-08-03 Thread albert Mietus

New submission from albert Mietus :

The functions urlparse.url{,un}split() and urllib{,2}.open() do not work 
together for relative, local files, due to a bug in urlunsplit().

Given a file f='./rel/path/to/file.html', it can be opened directly by 
urllib.open(f), but not in urllib2, as the latter needs a scheme.
We can create a sound URL with split/unsplit and a default scheme:
f2=urlparse.urlunsplit(urlparse.urlsplit(f,'file')); which works in most 
cases, HOWEVER a bogus netloc is added for relative file paths.

I have isolated this "buggy" function, added some local test code and made 
a patch/workaround in my file 'unsplit.py', which is included. I hope this 
will contribute to a real patch.
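The round trip being described, in modern spelling (Python 3 moved these helpers into urllib.parse; whether a bogus netloc appears depends on the Python version, so no exact output is asserted here):

```python
# Build a URL for a relative local path by round-tripping through
# urlsplit/urlunsplit with a default 'file' scheme.
from urllib.parse import urlsplit, urlunsplit

f = 'rel/path/to/file.html'
parts = urlsplit(f, scheme='file')
print(parts.scheme)       # file
print(urlunsplit(parts))  # the relative path should survive unchanged
```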


--Greetings, Albert

Albert Mietus
Don't send spam mail!
My mission: http://SoftwareBeterMaken.nl  product, process & image.
My life in short:
http://albert.mietus.nl/Doc/CV_ALbert.html

--
components: Library (Lib)
files: unsplit.py
messages: 91222
nosy: albert
severity: normal
status: open
title: urlparse.urlunsplit() can't handle relative files (for urllib*.open()
type: performance
Added file: http://bugs.python.org/file14637/unsplit.py

___
Python tracker 
<http://bugs.python.org/issue6631>
___



[issue6631] urlparse.urlunsplit() can't handle relative files (for urllib*.open()

2009-08-07 Thread albert Mietus

albert Mietus  added the comment:

There was a bug in the workaround:

if not ( scheme == 'file' and not netloc and url[0] != '/'):

The {{{and url[0] != '/'}}} part was missing (corrected above).

The effect: split/unsplit of file:///path resulted in file:/path.

--

___
Python tracker 
<http://bugs.python.org/issue6631>
___



[issue7113] ConfigParser load speedup

2009-10-12 Thread albert hofkamp

New submission from albert hofkamp :

Current implementation (r71564) uses "'%s\n%s' % (old_val, new_line)" to
merge multi-line options into one string.
For options with many lines, this wastes a lot of CPU power.

The attached patch against r71564 fixes this problem by first building a
list of lines for each loaded option and, after reading the whole file,
merging them with the already loaded data. That way, the '\n'.join()
is performed only once.
The patched ConfigParser.py passes test/test_cfgparser.py (with Python
2.5).

We have witnessed a reduction from 4 hours to 3 seconds loading time
with Python 2.6 and an option of 80 lines.
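A minimal sketch of the two strategies (function names and the input size are mine):

```python
# Quadratic: re-concatenate the accumulated value for every
# continuation line, copying the whole string each time.
def merge_quadratic(lines):
    value = lines[0]
    for line in lines[1:]:
        value = '%s\n%s' % (value, line)
    return value

# Linear: collect the lines first and join them once at the end,
# which is what the patch does after reading the whole file.
def merge_linear(lines):
    return '\n'.join(lines)

lines = ['continuation-%d' % i for i in range(100)]
print(merge_quadratic(lines) == merge_linear(lines))  # True
```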

--
components: Library (Lib)
files: speedfix_71564.patch
keywords: patch
messages: 93895
nosy: aioryi
severity: normal
status: open
title: ConfigParser load speedup
type: performance
versions: Python 2.6
Added file: http://bugs.python.org/file15109/speedfix_71564.patch

___
Python tracker 
<http://bugs.python.org/issue7113>
___



[issue20825] containment test for "ip_network in ip_network"

2016-06-25 Thread Michel Albert

Michel Albert added the comment:

I just realised that the latest patch on this no longer applies properly. I 
have fixed the issue and I am currently in the process of running the 
unit-tests which takes a while. Once those pass, I'll update some metadata and 
resubmit.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2016-06-25 Thread Michel Albert

Michel Albert added the comment:

Tests pass properly.

Is there anything else left to do?

Here's the fixed patch (net-in-net-r4.patch)

--
Added file: http://bugs.python.org/file43534/net-in-net-r4.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2016-06-25 Thread Michel Albert

Michel Albert added the comment:

Updated patch, taking into account notes from the previous patch-reviews

--
Added file: http://bugs.python.org/file43535/net-in-net-r5.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2016-06-25 Thread Michel Albert

Michel Albert added the comment:

I don't quite see how the operator module could help. I don't have much 
experience with it though, so I might be missing something...

I don't see how I can relate one check to the other. The example I gave in the 
patch review was the following:

With the existing implementation:

'192.168.1.0/25' subnet of '192.168.1.128/25' -> False
'192.168.1.0/25' supernet of '192.168.1.128/25' -> False

With the proposal to simply return "not subnet_of(...)" it would become:

'192.168.1.0/25' subnet of '192.168.1.128/25' -> False
'192.168.1.0/25' supernet of '192.168.1.128/25' -> True

which would be wrong.
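These semantics can be checked against the API that eventually landed in the stdlib (``subnet_of``/``supernet_of`` on ``ipaddress`` network objects, available since Python 3.7):

```python
from ipaddress import ip_network

# Disjoint sibling /25s: neither is a subnet nor a supernet of the
# other, so neither check may be implemented as the negation of the other.
left = ip_network('192.168.1.0/25')
right = ip_network('192.168.1.128/25')
print(left.subnet_of(right), left.supernet_of(right))  # False False

# Containment proper: the /30 is a subnet of the covering /24.
small = ip_network('192.168.1.0/30')
big = ip_network('192.168.1.0/24')
print(small.subnet_of(big), big.supernet_of(small))    # True True
```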


I have now added the new test-cases for the TypeError and removed the 
code-duplication by introducing a new "private" function. Let me know what you 
think.


I am running all test cases again and I'll uploaded it once they finished.

--

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue20825] containment test for "ip_network in ip_network"

2016-06-25 Thread Michel Albert

Michel Albert added the comment:

New patch with proposed changes.

--
Added file: http://bugs.python.org/file43537/net-in-net-r6.patch

___
Python tracker 
<http://bugs.python.org/issue20825>
___



[issue17263] crash when tp_dealloc allows other threads

2013-02-20 Thread Albert Zeyer

New submission from Albert Zeyer:

If you have Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS in some tp_dealloc 
and you use such objects in thread-local storage, you might get crashes, 
depending on which thread is trying to clean up such an object, and when.

I haven't fully figured out the details, but I have a somewhat reduced 
testcase. Note that I encountered this in practice because the sqlite 
connection object does this (the GIL is released while it disconnects).
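The Python-side shape of the pattern being described (a sketch only, not the crashing test case itself; reproducing the crash needs the C extension linked below and a racing shutdown):

```python
import sqlite3
import threading

tls = threading.local()
results = []

def worker():
    # The connection lives in thread-local storage; its teardown runs
    # when the thread's state is cleared, and closing it can release
    # the GIL -- the ingredient this report identifies.
    tls.conn = sqlite3.connect(':memory:')
    results.append(tls.conn.execute('select 1').fetchone()[0])

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # [1]
```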

This is the C code with some dummy type which has a tp_dealloc which just 
sleeps for some seconds while the GIL is released: 
https://github.com/albertz/playground/blob/master/testcrash_python_threadlocal.c

This is the Python code: 
https://github.com/albertz/playground/blob/master/testcrash_python_threadlocal_py.py

The Python code also contains some code path with a workaround which I'm using 
currently to avoid such crashes in my application.

--
components: Interpreter Core
messages: 182577
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: crash when tp_dealloc allows other threads
type: crash
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue17263>
___



[issue17263] crash when tp_dealloc allows other threads

2013-02-22 Thread Albert Zeyer

Albert Zeyer added the comment:

The latest 2.7 hg still crashes.

--

___
Python tracker 
<http://bugs.python.org/issue17263>
___



[issue17263] crash when tp_dealloc allows other threads

2013-02-22 Thread Albert Zeyer

Albert Zeyer added the comment:

The backtrace:

Thread 0:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib  0x7fff8a54e386 __semwait_signal + 10
1   libsystem_c.dylib   0x7fff85e30800 nanosleep + 163
2   libsystem_c.dylib   0x7fff85e30717 usleep + 54
3   testcrash_python_threadlocal.so 0x0001002ddd40 test_dealloc + 48
4   python.exe  0x0001000400a9 dict_dealloc + 153 
(dictobject.c:1010)
5   python.exe  0x0001000432d3 PyDict_DelItem + 259 
(dictobject.c:855)
6   python.exe  0x0001000d7f27 
_localdummy_destroyed + 71 (threadmodule.c:585)
7   python.exe  0x00016c61 PyObject_Call + 97 
(abstract.c:2529)
8   python.exe  0x00016e42 
PyObject_CallFunctionObjArgs + 370 (abstract.c:2761)
9   python.exe  0x00010006b2e6 
PyObject_ClearWeakRefs + 534 (weakrefobject.c:892)
10  python.exe  0x0001000d746b localdummy_dealloc + 
27 (threadmodule.c:231)
11  python.exe  0x0001000400a9 dict_dealloc + 153 
(dictobject.c:1010)
12  python.exe  0x0001000c003b PyThreadState_Clear 
+ 139 (pystate.c:240)
13  python.exe  0x0001000c02c8 
PyInterpreterState_Clear + 56 (pystate.c:104)
14  python.exe  0x0001000c1c68 Py_Finalize + 344 
(pythonrun.c:504)
15  python.exe  0x0001000d5891 Py_Main + 3041 
(main.c:665)
16  python.exe  0x00010a74 start + 52

Thread 1:
0   libsystem_kernel.dylib  0x7fff8a54e386 __semwait_signal + 10
1   libsystem_c.dylib   0x7fff85e30800 nanosleep + 163
2   libsystem_c.dylib   0x7fff85e30717 usleep + 54
3   testcrash_python_threadlocal.so 0x0001002ddd40 test_dealloc + 48
4   python.exe  0x0001000400a9 dict_dealloc + 153 
(dictobject.c:1010)
5   python.exe  0x0001000432d3 PyDict_DelItem + 259 
(dictobject.c:855)
6   python.exe  0x0001000d7f27 
_localdummy_destroyed + 71 (threadmodule.c:585)
7   python.exe  0x00016c61 PyObject_Call + 97 
(abstract.c:2529)
8   python.exe  0x00016e42 
PyObject_CallFunctionObjArgs + 370 (abstract.c:2761)
9   python.exe  0x00010006b2e6 
PyObject_ClearWeakRefs + 534 (weakrefobject.c:892)
10  python.exe  0x0001000d746b localdummy_dealloc + 
27 (threadmodule.c:231)
11  python.exe  0x0001000400a9 dict_dealloc + 153 
(dictobject.c:1010)
12  python.exe  0x0001000c003b PyThreadState_Clear 
+ 139 (pystate.c:240)
13  python.exe  0x0001000d7ec4 t_bootstrap + 372 
(threadmodule.c:643)
14  libsystem_c.dylib   0x7fff85da6742 _pthread_start + 327
15  libsystem_c.dylib   0x7fff85d93181 thread_start + 13

Thread 2:
0   libsystem_kernel.dylib  0x7fff8a54e322 __select + 10
1   time.so 0x0001002fb01b time_sleep + 139 
(timemodule.c:948)
2   python.exe  0x00010009fcfb PyEval_EvalFrameEx + 
18011 (ceval.c:4021)
3   python.exe  0x0001000a30f3 fast_function + 179 
(ceval.c:4107)
4   python.exe  0x00010009fdad PyEval_EvalFrameEx + 
18189 (ceval.c:4042)
5   python.exe  0x0001000a2fb7 PyEval_EvalCodeEx + 
2103 (ceval.c:3253)
6   python.exe  0x00010002f8cb function_call + 347 
(funcobject.c:526)
7   python.exe  0x00016c61 PyObject_Call + 97 
(abstract.c:2529)
8   python.exe  0x0001000a066a PyEval_EvalFrameEx + 
20426 (ceval.c:4334)
9   python.exe  0x0001000a30f3 fast_function + 179 
(ceval.c:4107)
10  python.exe  0x00010009fdad PyEval_EvalFrameEx + 
18189 (ceval.c:4042)
11  python.exe  0x0001000a30f3 fast_function + 179 
(ceval.c:4107)
12  python.exe  0x00010009fdad PyEval_EvalFrameEx + 
18189 (ceval.c:4042)
13  python.exe  0x0001000a2fb7 PyEval_EvalCodeEx + 
2103 (ceval.c:3253)
14  python.exe  0x00010002f8cb function_call + 347 
(funcobject.c:526)
15  python.exe  0x00016c61 PyObject_Call + 97 
(abstract.c:2529)
16  python.exe  0x000100018b07 instancemethod_call 
+ 439 (classobject.c:2603)
17  python.exe  0x00016c61 PyObject_Call + 97 
(abstract.c:2529)
18  python.exe

[issue17263] crash when tp_dealloc allows other threads

2013-02-23 Thread Albert Zeyer

Albert Zeyer added the comment:

Note that in my original application where I encountered this (with sqlite), 
the backtrace looks slightly different. It is at shutdown, but not at 
interpreter shutdown - the main thread is still running.

https://github.com/albertz/music-player/issues/23

I was trying to reproduce it in a similar way with this test case but in the 
test case, so far I could only reproduce the crash when it does the interpreter 
shutdown.

--

___
Python tracker 
<http://bugs.python.org/issue17263>
___



[issue17263] crash when tp_dealloc allows other threads

2013-02-23 Thread Albert Zeyer

Albert Zeyer added the comment:

Here is one. Others are in the issue report on GitHub.

In Thread 5, the PyObject_SetAttr is where some attribute containing a 
threading.local object is set to None. This threading.local object had a 
reference to a sqlite connection object (in some TLS contextes). This should 
also be the actual crashing thread. I use faulthandler which makes it look like 
Thread 0 crashed in the crash reporter.

I had this crash about 5% of the time - but totally unpredictable. But it was 
always happening in exactly that line where the attribute was set to None.


Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib  0x7fff8a54e0fa __psynch_cvwait + 10
1   libsystem_c.dylib   0x7fff85daaf89 _pthread_cond_wait + 869
2   org.python.python   0x00010006f54e PyThread_acquire_lock + 
96
3   org.python.python   0x00010001d8e3 PyEval_RestoreThread + 61
4   org.python.python   0x000100075bf3 0x19000 + 445427
5   org.python.python   0x000100020041 PyEval_EvalFrameEx + 7548
6   org.python.python   0x00010001e281 PyEval_EvalCodeEx + 1956
7   org.python.python   0x000100024661 0x19000 + 112225
8   org.python.python   0x0001000200d2 PyEval_EvalFrameEx + 7693
9   org.python.python   0x00010001e281 PyEval_EvalCodeEx + 1956
10  org.python.python   0x000100024661 0x19000 + 112225
11  org.python.python   0x0001000200d2 PyEval_EvalFrameEx + 7693
12  org.python.python   0x00010001e281 PyEval_EvalCodeEx + 1956
13  org.python.python   0x00010005df78 0x19000 + 348024
14  org.python.python   0x00010001caba PyObject_Call + 97
15  _objc.so0x000104615898 0x10460 + 88216
16  libffi.dylib0x7fff8236e8a6 ffi_closure_unix64_inner 
+ 508
17  libffi.dylib0x7fff8236df66 ffi_closure_unix64 + 70
18  com.apple.AppKit0x7fff84f63f3f -[NSApplication 
_docController:shouldTerminate:] + 75
19  com.apple.AppKit0x7fff84f63e4e 
__91-[NSDocumentController(NSInternal) 
_closeAllDocumentsWithDelegate:shouldTerminateSelector:]_block_invoke_0 + 159
20  com.apple.AppKit0x7fff84f63cea 
-[NSDocumentController(NSInternal) 
_closeAllDocumentsWithDelegate:shouldTerminateSelector:] + 1557
21  com.apple.AppKit0x7fff84f636ae 
-[NSDocumentController(NSInternal) 
__closeAllDocumentsWithDelegate:shouldTerminateSelector:] + 265
22  com.apple.AppKit0x7fff84f6357f -[NSApplication 
_shouldTerminate] + 772
23  com.apple.AppKit0x7fff84f9134f 
-[NSApplication(NSAppleEventHandling) _handleAEQuit] + 403
24  com.apple.AppKit0x7fff84d40261 
-[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 660
25  com.apple.Foundation0x7fff867e112b -[NSAppleEventManager 
dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308
26  com.apple.Foundation0x7fff867e0f8d 
_NSAppleEventManagerGenericHandler + 106
27  com.apple.AE0x7fff832eeb48 
aeDispatchAppleEvent(AEDesc const*, AEDesc*, unsigned int, unsigned char*) + 307
28  com.apple.AE0x7fff832ee9a9 
dispatchEventAndSendReply(AEDesc const*, AEDesc*) + 37
29  com.apple.AE0x7fff832ee869 aeProcessAppleEvent + 318
30  com.apple.HIToolbox 0x7fff8e19f8e9 AEProcessAppleEvent + 100
31  com.apple.AppKit0x7fff84d3c916 _DPSNextEvent + 1456
32  com.apple.AppKit0x7fff84d3bed2 -[NSApplication 
nextEventMatchingMask:untilDate:inMode:dequeue:] + 128
33  com.apple.AppKit0x7fff84d33283 -[NSApplication run] + 
517
34  libffi.dylib0x7fff8236dde4 ffi_call_unix64 + 76
35  libffi.dylib0x7fff8236e619 ffi_call + 853
36  _objc.so0x00010461a663 PyObjCFFI_Caller + 1980
37  _objc.so0x00010462f43e 0x10460 + 193598
38  org.python.python   0x00010001caba PyObject_Call + 97
39  org.python.python   0x000100020225 PyEval_EvalFrameEx + 8032
40  org.python.python   0x0001000245eb 0x19000 + 112107
41  org.python.python   0x0001000200d2 PyEval_EvalFrameEx + 7693
42  org.python.python   0x00010001e281 PyEval_EvalCodeEx + 1956
43  org.python.python   0x00010001dad7 PyEval_EvalCode + 54
44  org.python.python   0x000100054933 0x19000 + 309555
45  org.python.python   0x0001000549ff PyRun_FileExFlags + 165
46  org.python.python   0x0001000543e9 PyRun_SimpleFileExFlags 
+ 410
47  albertzeyer.MusicPlayer

[issue17263] crash when tp_dealloc allows other threads

2013-02-23 Thread Albert Zeyer

Albert Zeyer added the comment:

Sadly, that is quite complicated or almost impossible. It needs the MacOSX 
system Python and that one lacks debugging information.

I just tried with the CPython from hg-2.7. But it seems the official Python 
doesn't have objc bindings (and I also need Cocoa bindings), so I can't 
easily run this right now (and another GUI is not yet implemented).

--

___
Python tracker 
<http://bugs.python.org/issue17263>
___



[issue17294] compile-flag for single-execution to return value instead of printing it

2013-02-25 Thread Albert Zeyer

New submission from Albert Zeyer:

`compile(s, "", "single")` would generate a code object which 
prints the value of the evaluated string if that is an expression. This is what 
you would normally want in a REPL.

Instead of printing the value, it might make more sense to return it and to 
leave it to the developer - there are many cases where it shouldn't end up on 
stdout but somewhere else.

There could be an additional compile-flag which would make a code-object 
returning the value instead of printing it.

Note that I have come up with a workaround:

def interactive_py_compile(source, filename=""):
    c = compile(source, filename, "single")

    # we expect this at the end:
    #   PRINT_EXPR
    #   LOAD_CONST
    #   RETURN_VALUE
    import dis
    if ord(c.co_code[-5]) != dis.opmap["PRINT_EXPR"]:
        return c
    assert ord(c.co_code[-4]) == dis.opmap["LOAD_CONST"]
    assert ord(c.co_code[-1]) == dis.opmap["RETURN_VALUE"]

    code = c.co_code[:-5]
    code += chr(dis.opmap["RETURN_VALUE"])

    CodeArgs = [
        "argcount", "nlocals", "stacksize", "flags", "code",
        "consts", "names", "varnames", "filename", "name",
        "firstlineno", "lnotab", "freevars", "cellvars"]
    c_dict = dict([(arg, getattr(c, "co_" + arg)) for arg in CodeArgs])
    c_dict["code"] = code

    import types
    c = types.CodeType(*[c_dict[arg] for arg in CodeArgs])
    return c
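An alternative that avoids bytecode patching entirely (a sketch, not part of the report; `compile_for_repl` is a hypothetical name, and this uses Python 3 syntax): parse the source with the `ast` module and, if it is a single expression statement, compile just that expression in "eval" mode, whose code object returns the value by itself:

```python
import ast

def compile_for_repl(source, filename="<input>"):
    """Run `source`; return the value of a lone expression, else None."""
    tree = ast.parse(source, filename, "exec")
    if len(tree.body) == 1 and isinstance(tree.body[0], ast.Expr):
        # A single expression statement: compile it in "eval" mode so
        # its value is returned to the caller instead of printed.
        expr = ast.fix_missing_locations(ast.Expression(tree.body[0].value))
        return eval(compile(expr, filename, "eval"))
    # Anything else (statements, multiple lines): just execute it.
    exec(compile(tree, filename, "exec"))
    return None

print(compile_for_repl("1 + 2"))  # 3
print(compile_for_repl("x = 5"))  # None
```

This sidesteps the opcode layout assumptions of the bytecode-patching approach, at the cost of parsing the source twice.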


My related StackOverflow question:
http://stackoverflow.com/questions/15059372/python-use-of-eval-in-interactive-terminal-how-to-get-return-value-what-compi

--
components: Interpreter Core
messages: 182934
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: compile-flag for single-execution to return value instead of printing it
type: enhancement
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5

___
Python tracker 
<http://bugs.python.org/issue17294>
___



[issue17263] crash when tp_dealloc allows other threads

2013-02-25 Thread Albert Zeyer

Albert Zeyer added the comment:

The symbols are there because it is a library which exports all of its 
symbols. Other debugging information is not there, and I don't know of any 
place where I can get it.

It currently cannot work on Linux in the same way because the GUI is 
Cocoa-only right now. I'm trying to get it to run with another Python on Mac, 
though.

Note that in threadmodule.c, in local_clear, we are iterating through all 
threads:

    /* Remove all strong references to dummies from the thread states */
    if (self->key
        && (tstate = PyThreadState_Get())
        && tstate->interp) {
        for (tstate = PyInterpreterState_ThreadHead(tstate->interp);
             tstate;
             tstate = PyThreadState_Next(tstate))
            if (tstate->dict &&
                PyDict_GetItem(tstate->dict, self->key))
                PyDict_DelItem(tstate->dict, self->key);
    }

If the GIL is released inside PyDict_DelItem and the list of thread states is 
altered in the meantime, is that a problem for this loop? Maybe tstate becomes 
invalid there.

I also noticed this part in another backtrace of the same crash:

Thread 2:
0   libsystem_kernel.dylib  0x7fff8a54e0fa __psynch_cvwait + 10
1   libsystem_c.dylib   0x7fff85daaf89 _pthread_cond_wait + 869
2   org.python.python   0x00010006f54e PyThread_acquire_lock + 96
3   org.python.python   0x00010001d8e3 PyEval_RestoreThread + 61
4   org.python.python   0x000100053351 PyGILState_Ensure + 93
5   _objc.so0x000103b89b6e 0x103b8 + 39790
6   libobjc.A.dylib 0x7fff880c6230 (anonymous namespace)::AutoreleasePoolPage::pop(void*) + 464
7   libobjc.A.dylib 0x7fff880c85a2 (anonymous namespace)::AutoreleasePoolPage::tls_dealloc(void*) + 42
8   libsystem_c.dylib   0x7fff85dad4fe _pthread_tsd_cleanup + 240
9   libsystem_c.dylib   0x7fff85da69a2 _pthread_exit + 146
10  libsystem_c.dylib   0x7fff85da674d _pthread_start + 338
11  libsystem_c.dylib   0x7fff85d93181 thread_start + 13


This seems to be a non-Python thread, so PyGILState_Ensure would have created 
a new thread state, and this would have altered the list.

--

___
Python tracker 
<http://bugs.python.org/issue17263>
___


