[issue12170] Bytes objects do not accept integers to many functions

2011-05-24 Thread Max

New submission from Max :

Bytes objects when indexed provide integers, but do not accept them to many 
functions, making them inconsistent with other sequences.

Basic example:
>>> test = b'012'
>>> n = test[1]
>>> n
49
>>> n in test
True
>>> test.index(n)
TypeError: expected an object with the buffer interface.

It is certainly unusual for n to be in the sequence, but not to be able to find 
it.  I would expect the result to be 1.  This set of commands with list, 
strings, tuples, but not bytes objects.

I suspect, from issue #10616, that all the following functions would be 
affected:
"bytes methods: partition, rpartition, find, index, rfind, rindex, count, 
translate, replace, startswith, endswith"

It would make more sense to me that instead of only supporting buffer interface 
objects, they also accept a single integer, and treat it as if it were provided 
a length-1 bytes object.

The use case I came across this problem was something like this:

Given seq1 and seq2, sequences of the same type:
[seq1.index(x) for x in seq2]

This works for strings, lists, tuples, but not bytes.
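A workaround available in the meantime (not part of the original report) is to wrap the integer back into a length-1 bytes object before searching; a sketch:

```python
test = b'012'
n = test[1]          # indexing a bytes object yields an int (49, the code for '1')

# Wrapping the int in a length-1 bytes object makes index()/find() accept it:
assert n in test
assert test.index(bytes([n])) == 1
```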

--
components: Interpreter Core
messages: 136786
nosy: max-alleged
priority: normal
severity: normal
status: open
title: Bytes objects do not accept integers to many functions
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue12170>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12170] Bytes objects do not accept integers to many functions

2011-05-24 Thread Max

Max  added the comment:

"This set of commands with list, strings, tuples, but not bytes objects."
should read
"This set of commands works with list, strings, tuples, but not bytes objects."

--

___
Python tracker 
<http://bugs.python.org/issue12170>
___



[issue12170] Bytes.index() and bytes.count() should accept byte ints

2011-05-25 Thread Max

Max  added the comment:

Fair enough.

I think it would make sense for the string methods to also accept single ints 
where possible as well:

For haystack and needles both strings:
[haystack.find(n) for n in needles]

For both bytes, it's a bit contortionist:
[haystack.find(needles[i:i+1]) for i in range(len(needles))]

One ends up doing a lot of the [i:i+1] bending when using bytes functions.
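For concreteness, here are the two spellings side by side (the haystack/needles values are made up for illustration):

```python
haystack = b'hello world'
needles = b'lo'

# The slicing contortion described above:
r1 = [haystack.find(needles[i:i+1]) for i in range(len(needles))]
# Equivalent, arguably clearer: rebuild each int as a length-1 bytes object.
r2 = [haystack.find(bytes([n])) for n in needles]
assert r1 == r2 == [2, 4]
```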

--
type: behavior -> 
versions:  -Python 3.3

___
Python tracker 
<http://bugs.python.org/issue12170>
___



[issue12170] Bytes.index() and bytes.count() should accept byte ints

2011-05-25 Thread Max

Changes by Max :


--
type:  -> behavior
versions: +Python 3.3

___
Python tracker 
<http://bugs.python.org/issue12170>
___



[issue10029] bug in sample code in documentation

2010-10-05 Thread Max

New submission from Max :

The sample code explaining zip function is incorrect at 
http://docs.python.org/py3k/library/functions.html?highlight=zip#zip:

def zip(*iterables):
    # zip('ABCD', 'xy') --> Ax By
    iterables = map(iter, iterables)
    while iterables:
        yield tuple(map(next, iterables))

See http://stackoverflow.com/questions/3865640/understanding-zip-function for 
discussion.
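For reference, a corrected pure-Python equivalent (a sketch using a sentinel, along the lines of what the documentation was later changed to) might look like:

```python
def zip_equivalent(*iterables):
    # Corrected equivalent of the built-in zip: stop cleanly when the
    # shortest iterable is exhausted, using a sentinel instead of relying
    # on StopIteration leaking out of map(next, ...).
    sentinel = object()
    iterators = [iter(it) for it in iterables]
    while iterators:
        result = []
        for it in iterators:
            elem = next(it, sentinel)
            if elem is sentinel:
                return
            result.append(elem)
        yield tuple(result)
```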

--
assignee: d...@python
components: Documentation
messages: 118025
nosy: d...@python, max
priority: normal
severity: normal
status: open
title: bug in sample code in documentation
type: behavior
versions: Python 3.1

___
Python tracker 
<http://bugs.python.org/issue10029>
___



[issue10029] "Equivalent to" code for zip is wrong in Python 3

2010-10-07 Thread Max

Max  added the comment:

Personally, I find it impossible in some cases to understand exactly what a 
function does just from reading a textual description. In those cases, I always 
refer to the equivalent code if it's given. In fact that's the reason I was 
looking at the zip equivalent function!

I would feel it's a loss if equivalent code disappeared from the docs.

I understand sometimes the code requires maintenance, but I'd rather live with 
some temporary bugs than lose the equivalent code.

As to subtleties of how it works, that's not really a concern, if that's the 
only way to understand the precise meaning of whatever it explains.

--

___
Python tracker 
<http://bugs.python.org/issue10029>
___



[issue10654] test_datetime fails on Python3.2 windows binary

2011-03-02 Thread Max

Max  added the comment:

This is still occurring with the release version of Python 3.2, installed from 
the 32-bit MSI, on Windows XP.

--
nosy: +max-alleged

___
Python tracker 
<http://bugs.python.org/issue10654>
___



[issue5208] urllib2.build_opener([handler, ...]) incorrect signature in docs

2009-02-10 Thread Max

New submission from Max :

The build_opener() function of urllib2 is specified as:

urllib2.build_opener([handler, ...])

I think it should be:

urllib2.build_opener(handler, ...)

see
http://docs.python.org/library/urllib2.html?highlight=build_opener

--
assignee: georg.brandl
components: Documentation
messages: 81567
nosy: Böhm, georg.brandl
severity: normal
status: open
title: urllib2.build_opener([handler, ...]) incorrect signature in docs
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue5208>
___



[issue38279] multiprocessing example enhancement

2019-09-25 Thread Max


Change by Max :


--
keywords: +patch
pull_requests: +15979
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/16398

___
Python tracker 
<https://bugs.python.org/issue38279>
___



[issue45393] help() on operator precedence has confusing entries "avait" "x" and "not" "x"

2021-10-06 Thread Max


New submission from Max :

Nobody seems to have noticed this AFAICS: 
If you type, e.g., help('+') to get help on operator precedence, the first 
column gives a list of operators for each row corresponding to a given 
precedence. However, the row for "not" (and similarly for "await") has the entry

"not" "x"

That looks as if there were two operators, "not" and "x". But the letter x is 
just an argument to the operator, so it should be:

 "not x"

exactly as for "+x" and "-x" and "~x" and "x[index]" and "x.attribute", where 
also x is not part of the operator but an argument.

On the corresponding web page 
https://docs.python.org/3/reference/expressions.html#operator-summary
it is displayed correctly, there are no quotes.

--
assignee: docs@python
components: Documentation
messages: 403321
nosy: MFH, docs@python
priority: normal
severity: normal
status: open
title: help() on operator precedence has confusing entries "avait" "x" and 
"not" "x"
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"

2021-10-06 Thread Max


Max  added the comment:

Thanks for fixing the typo, didn't know how to do that when I spotted it (I'm 
new to this). 
You also removed Python version 3.6, 3.7, 3.8, however, I just tested on 
pythonanywhere,
>>> sys.version
'3.7.0 (default, Aug 22 2018, 20:50:05) \n[GCC 5.4.0 20160609]'
So I can confirm that the bug *is* there on 3.7 (so I put this back in the list 
- unless it was removed in a later 3.7.x (do you mean that?) and put back in 
later versions...?)
It is also on the Python 3.9.7 I'm running on my laptop, so I'd greatly be 
surprised if it were not present on the other two versions you also removed.

--
versions: +Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"

2021-10-07 Thread Max


Max  added the comment:

option 1 looks most attractive to me (and will also look most attractive in the 
rendering, IMHO -- certainly better than "await" "x", in any case).

P.S.: OK, thanks for explanations concerning 3.6 - 3.8. I do understand that it 
won't be fixed for these versions (not certain why not if possible at no cost), 
but I do not understand why these labels must be removed. The bug does exist 
but should simply be considered as "nofix" for these versions (or not), given 
that it's not in the "security" category. The fact that it won't be fixed, for 
whatever reason, should not mean that it should not be listed as existing, 
there.

--

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue39603] Injection in http.client

2020-02-10 Thread Max


New submission from Max :

I recently came across a bug during a pentest that's allowed me to perform some 
really interesting attacks on a target. While originally discovered in 
requests, I had been forwarded to one of the urllib3 developers after agreeing 
that fixing it at its lowest level would be preferable. I was informed that 
the vulnerability is also present in http.client and that I should report it 
here as well.

The 'method' parameter is not filtered to prevent the injection from altering 
the entire request.

For example:
>>> conn = http.client.HTTPConnection("localhost", 80)
>>> conn.request(method="GET / HTTP/1.1\r\nHost: abc\r\nRemainder:",
... url="/index.html")

This will result in the following request being generated:
GET / HTTP/1.1
Host: abc
Remainder: /index.html HTTP/1.1
Host: localhost
Accept-Encoding: identity

This was originally found in an HTTP proxy that was utilising Requests. It 
allowed me to manipulate the original path to access different files from an 
internal server since the developers had assumed that the method would filter 
out non-standard HTTP methods.

The recommended solution is to only allow the standard HTTP methods of GET, 
HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH.

An alternate solution that would allow programmers to use non-standard methods 
would be to only support characters [a-z] and stop reading at any special 
characters (especially newlines and spaces).
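A minimal sketch along the lines of the second suggestion (the helper name `validate_method` is invented for illustration; it is not the actual http.client fix):

```python
import re

# Hypothetical helper: accept only ASCII letters as the method name,
# rejecting spaces, \r and \n that would let the method rewrite the request.
_METHOD_RE = re.compile(r'[A-Za-z]+')

def validate_method(method):
    if not _METHOD_RE.fullmatch(method):
        raise ValueError('invalid HTTP method: %r' % method)
    return method
```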

--
components: Library (Lib)
messages: 361710
nosy: maxpl0it
priority: normal
severity: normal
status: open
title: Injection in http.client
type: security
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue39603] [security] http.client: HTTP Header Injection in the HTTP method

2020-02-11 Thread Max


Max  added the comment:

I agree that the solution is quite restrictive.
Restricting to ASCII characters alone would certainly work.

--

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue39603] [security] http.client: HTTP Header Injection in the HTTP method

2020-07-22 Thread Max


Max  added the comment:

I've just noticed an issue with the current version of the patch. It should 
also include 0x20 (space) since that can also be used to manipulate the request.

--

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue14445] Providing more fine-grained control over assert statements

2012-03-29 Thread Max

New submission from Max :

Currently, the -O optimizer flag disables assert statements.

I want to ask that more fine-grained control is offered to users over the 
assert statements. In many cases, it would be nice to have the option of 
keeping asserts in release code, while still performing optimizations (if any 
are offered in the future). It can be achieved by removing the "disable 
assertions" feature of the -O flag, and instead adding a new flag that does 
nothing but disables asserts.
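Until such a flag exists, the usual workaround is an explicit check that -O cannot strip; a sketch (the helper name is invented):

```python
def require(condition, message=''):
    # Unlike the assert statement, this is ordinary code, so it is
    # not removed when Python runs with the -O flag.
    if not condition:
        raise AssertionError(message)

require(1 + 1 == 2)   # passes silently
```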

--
messages: 157070
nosy: max
priority: normal
severity: normal
status: open
title: Providing more fine-grained control over assert statements
type: enhancement
versions: Python 3.4

___
Python tracker 
<http://bugs.python.org/issue14445>
___



[issue35787] shlex.split inserts extra item on backslash space space

2019-01-20 Thread Max


New submission from Max :

I believe in both cases below, the output should be ['a', 'b']; the extra ' ' 
inserted in the list is incorrect:

python3.6
Python 3.6.2 (default, Aug  4 2017, 14:35:04)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shlex
>>> shlex.split('a \ b')
['a', ' b']
>>> shlex.split('a \  b')
['a', ' ', 'b']
>>>

Doc reference: https://docs.python.org/3/library/shlex.html#parsing-rules
> Non-quoted escape characters (e.g. '\') preserve the literal value of the 
> next character that follows;

I believe this implies that backslash space should be just space; and then two 
adjacent spaces should be used (just like a single space) as a separator 
between arguments.
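The reported behavior reproduces directly (outputs copied from the session above):

```python
import shlex

# Backslash-space yields a literal space glued to the following token:
print(shlex.split(r'a \ b'))   # ['a', ' b']
# With two spaces, the escaped space becomes a token of its own:
print(shlex.split(r'a \  b'))  # ['a', ' ', 'b']
```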

--
components: Library (Lib)
messages: 334081
nosy: max
priority: normal
severity: normal
status: open
title: shlex.split inserts extra item on backslash space space
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue35787>
___



[issue29795] Clarify how to share multiprocessing primitives

2017-03-11 Thread Max

New submission from Max:

It seems both me and many other people (judging from SO questions) are confused 
about whether it's ok to write this:

from multiprocessing import Process, Queue
q = Queue()

def f():
    q.put([42, None, 'hello'])

def main():
    p = Process(target=f)
    p.start()
    print(q.get())  # prints "[42, None, 'hello']"
    p.join()

if __name__ == '__main__':
    main()

It's not ok (doesn't work on Windows presumably because somehow when it's 
pickled, the connection between global queues in the two processes is lost; 
works on Linux, because I guess fork keeps more information than pickle, so the 
connection is maintained).

I thought it would be good to clarify in the docs that all the Queue() and 
Manager().* and other similar objects should be passed as parameters not just 
defined as globals.
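The portable pattern, then, is to pass the queue explicitly; a sketch of the corrected example:

```python
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

def main():
    q = Queue()
    p = Process(target=f, args=(q,))  # pass the queue, don't rely on globals
    p.start()
    result = q.get()
    p.join()
    return result

if __name__ == '__main__':
    print(main())
```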

--
assignee: docs@python
components: Documentation
messages: 289454
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Clarify how to share multiprocessing primitives
type: behavior
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29795>
___



[issue29795] Clarify how to share multiprocessing primitives

2017-03-11 Thread Max

Max added the comment:

How about inserting this text somewhere:

Note that sharing and synchronization objects (such as `Queue()`, `Pipe()`, 
`Manager()`, `Lock()`, `Semaphore()`) should be made available to a new process 
by passing them as arguments to the `target` function invoked by the `run()` 
method. Making these objects visible through global variables will only work 
when the process was started using `fork` (and as such sacrifices portability 
for no special benefit).

--

___
Python tracker 
<http://bugs.python.org/issue29795>
___



[issue29797] Deadlock with multiprocessing.Queue()

2017-03-11 Thread Max

New submission from Max:

Using multiprocessing.Queue() with several processes writing very fast results 
in a deadlock both on Windows and UNIX.

For example, this code:

from multiprocessing import Process, Queue, Manager
import time, sys

def simulate(q, n_results):
    for i in range(n_results):
        time.sleep(0.01)
        q.put(i)

def main():
    n_workers = int(sys.argv[1])
    n_results = int(sys.argv[2])

    q = Queue()
    proc_list = [Process(target=simulate,
                         args=(q, n_results),
                         daemon=True) for i in range(n_workers)]

    for proc in proc_list:
        proc.start()

    for i in range(5):
        time.sleep(1)
        print('current approximate queue size:', q.qsize())
        alive = [p.pid for p in proc_list if p.is_alive()]
        if alive:
            print(len(alive), 'processes alive; among them:', alive[:5])
        else:
            break

    for p in proc_list:
        p.join()

    print('final appr queue size', q.qsize())


if __name__ == '__main__':
    main()


hangs on Windows 10 (python 3.6) with 2 workers and 1000 results each, and on 
Ubuntu 16.04 (python 3.5) with 100 workers and 100 results each. The print out 
shows that the queue has reached the full size, but a bunch of processes are 
still alive. Presumably, they somehow manage to lock themselves out even though 
they don't depend on each other (must be in the implementation of Queue()):

current approximate queue size: 9984
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]
current approximate queue size: 1
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]

The deadlock disappears once multiprocessing.Queue() is replaced with 
multiprocessing.Manager().Queue() - or at least I wasn't able to replicate it 
with a reasonable number of processes and results.

--
components: Library (Lib)
messages: 289479
nosy: max
priority: normal
severity: normal
status: open
title: Deadlock with multiprocessing.Queue()
type: behavior
versions: Python 3.5, Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29797>
___



[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Max

Max added the comment:

Yes, this makes sense. My bad, I didn't realize processes might need to wait 
until the queue is consumed.

I don't think there's any need to update the docs either, nobody should have 
production code that never reads the queue (mine was a test of some other 
issue).
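For readers hitting the same symptom: the fix is to drain the queue before joining, since a child process cannot exit while its feeder thread still has queued data to flush. A sketch:

```python
from multiprocessing import Process, Queue

def worker(q, n_results):
    for i in range(n_results):
        q.put(i)

def main(n_workers=4, n_results=100):
    q = Queue()
    procs = [Process(target=worker, args=(q, n_results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    # Drain first: calling join() before get() can deadlock once the
    # underlying pipe buffer fills up.
    results = [q.get() for _ in range(n_workers * n_results)]
    for p in procs:
        p.join()
    return len(results)

if __name__ == '__main__':
    print(main())
```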

--

___
Python tracker 
<http://bugs.python.org/issue29797>
___



[issue29795] Clarify how to share multiprocessing primitives

2017-03-12 Thread Max

Max added the comment:

Somewhat related is this statement from Programming Guidelines:

> When using the spawn or forkserver start methods many types from 
> multiprocessing need to be picklable so that child processes can use them. 
> However, one should generally avoid sending shared objects to other processes 
> using pipes or queues. Instead you should arrange the program so that a 
> process which needs access to a shared resource created elsewhere can inherit 
> it from an ancestor process.

Since on Windows, even "inheritance" is really the same pickle + pipe executed 
inside CPython, I assume the entire paragraph is intended for UNIX platform 
only (might be worth clarifying, btw).

On Linux, "inheritance" works faster, and can deal with more complex objects 
compared to pickle with pipe/queue -- but it's equally true whether it's 
inheritance through global variables or through arguments to the target 
function. There's no reason to prefer global variables over explicit arguments.

So the text I proposed earlier wouldn't conflict with this one. It would just 
encourage programmers to use function arguments instead of global variables: 
because it doesn't matter on Linux but makes the code portable to Windows.

--

___
Python tracker 
<http://bugs.python.org/issue29795>
___



[issue29795] Clarify how to share multiprocessing primitives

2017-03-12 Thread Max

Max added the comment:

Actually, never mind, I think one of the paragraphs in the Programming 
Guidelines ("Explicitly pass resources to child processes") basically explains 
everything already. I just didn't notice it until @noxdafox pointed it out to 
me on SO.

Close please.

--

___
Python tracker 
<http://bugs.python.org/issue29795>
___



[issue29982] tempfile.TemporaryDirectory fails to delete itself

2017-04-04 Thread Max

New submission from Max:

There's a known issue with `shutil.rmtree` on Windows, in that it fails 
intermittently. 

The issue is well known 
(https://mail.python.org/pipermail/python-dev/2013-September/128353.html), and 
the agreement is that it cannot be cleanly solved inside `shutil` and should 
instead be solved by the calling app. Specifically, python devs themselves 
faced it in their test suite and solved it by retrying delete.

However, what to do about `tempfile.TemporaryDirectory`? Is it considered the 
calling app, and therefore should retry delete when it calls `shutil.rmtree` in 
its `cleanup` method?

I don't think `tempfile` is protected by the same argument that `shutil.rmtree` 
is protected, in that it's too messy to solve it in the standard library. My 
rationale is that while it's very easy for the end user to retry 
`shutil.rmtree`, it's far more difficult to fix the problem with 
`tempfile.TempDirectory` not deleting itself - how would the end user retry the 
`cleanup` method (which is called from `weakref.finalizer`)?

So perhaps the retry loop should be added to `cleanup`.
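A retry loop of the kind used in CPython's own test suite might look like this (the helper name and parameters are illustrative, not an actual stdlib API):

```python
import os
import shutil
import time

def rmtree_with_retries(path, attempts=5, delay=0.1):
    # Retry rmtree to ride out transient sharing violations (e.g. antivirus
    # or indexing services briefly holding files open on Windows).
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```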

--
components: Library (Lib)
messages: 291130
nosy: max
priority: normal
severity: normal
status: open
title: tempfile.TemporaryDirectory fails to delete itself
type: behavior
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29982>
___



[issue30026] Hashable doesn't check for __eq__

2017-04-09 Thread Max

New submission from Max:

I think collections.abc.Hashable.__subclasshook__ should check __eq__ method in 
addition to __hash__ method. This helps detect classes that are unhashable due 
to:

__eq__ = None

Of course, it still cannot detect:

def __eq__(self, other): return NotImplemented

but it's better than nothing.

In addition, it's probably worth documenting that explicitly inheriting from 
Hashable has (correct but unexpected) effect of *suppressing* hashability that 
was already present:

from collections.abc import Hashable
class X: pass
assert issubclass(X, Hashable)
x = X()

class X(Hashable): pass
assert issubclass(X, Hashable)
x = X() # Can't instantiate abstract class X with abstract methods

--
assignee: docs@python
components: Documentation, Interpreter Core
messages: 291382
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Hashable doesn't check for __eq__

___
Python tracker 
<http://bugs.python.org/issue30026>
___



[issue30026] Hashable doesn't check for __eq__

2017-04-09 Thread Max

Max added the comment:

Sorry, this should be just a documentation issue.

I just realized that __eq__ = None isn't correct anyway, so instead we should 
just document that Hashable cannot check for __eq__ and that explicitly 
deriving from Hashable suppresses hashability.

--
components:  -Interpreter Core

___
Python tracker 
<http://bugs.python.org/issue30026>
___



[issue29842] Make Executor.map work with infinite/large inputs correctly

2017-05-14 Thread Max

Max added the comment:

I'm also concerned about this (undocumented) inconsistency between map and 
Executor.map.

I think you would want to make your PR limited to `ThreadPoolExecutor`. The 
`ProcessPoolExecutor` already does everything you want with its `chunksize` 
paramater, and adding `prefetch` to it will jeopardize the optimization for 
which `chunksize` is intended.

Actually, I was even thinking whether it might be worth merging `chunksize` and 
`prefetch` arguments. The semantics of the two arguments is similar but not 
identical. Specifically, for `ProcessPoolExecutor`, there is pretty clear 
pressure to increase the value of `chunksize` to reduce amortized IPC costs; 
there is no IPC with threads, so the pressure to increase `prefetch` is much 
more situational (e.g., in the busy pool example I give below).

For `ThreadPoolExecutor`, I prefer your implementation over the current one, 
but I want to point out that it is not strictly better, in the sense that *with 
default arguments*, there are situations where the current implementation 
behaves better.

In many cases your implementation behaves much better. If the input is too 
large, it prevents out of memory condition. In addition, if the pool is not 
busy when `map` is called, your implementation will also be faster, since it 
will submit the first input for processing earlier.

But consider the case where input is produced slower than it can be processed 
(`iterables` may fetch data from a database, but the callable `fn` may be a 
fast in-memory transformation). Now suppose the `Executor.map` is called when 
the pool is busy, so there'll be a delay before processing begins. In this 
case, the most efficient approach is to get as much input as possible while the 
pool is busy, since eventually (when the pool is freed up) it will become the 
bottleneck. This is exactly what the current implementation does.

The implementation you propose will (by default) only prefetch a small number 
of input items. Then when the pool becomes available, it will quickly run out 
of prefetched input, and so it will be less efficient than the current 
implementation. This is especially unfortunate since the entire time the pool 
was busy, `Executor.map` is just blocking the main thread so it's literally 
doing nothing useful.

Of course, the client can tweak `prefetch` argument to achieve better 
performance. Still, I wanted to make sure this issue is considered before the 
new implementation is adopted.

From the performance perspective, an even more efficient implementation would 
be one that uses three background threads:

- one to prefetch items from the input
- one to send items to the workers for processing
- one to yield results as they become available

It has a disadvantage of being slightly more complex, so I don't know if it 
really belongs in the standard library.

Its advantage is that it will waste less time: it fetches inputs without pause, 
it submits them for processing without pause, and it makes results available to 
the client as soon as they are processed. (I have implemented and tried this 
approach, but not in production.)

But even this implementation requires tuning. In the case with the busy pool 
that I described above, one would want to prefetch as much input as possible, 
but that may cause too much memory consumption and also possibly waste 
computation resources (if the most of input produced proves to be unneeded in 
the end).
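A rough sketch of the bounded-prefetch idea under discussion (not the actual PR's implementation; names are illustrative):

```python
import collections
import itertools
from concurrent.futures import ThreadPoolExecutor

def map_with_prefetch(executor, fn, iterable, prefetch=4):
    # Keep at most `prefetch` submitted-but-unconsumed futures in flight,
    # so infinite or very large inputs don't exhaust memory.
    it = iter(iterable)
    pending = collections.deque(
        executor.submit(fn, x) for x in itertools.islice(it, prefetch))
    while pending:
        future = pending.popleft()
        for x in itertools.islice(it, 1):   # top the buffer back up
            pending.append(executor.submit(fn, x))
        yield future.result()

with ThreadPoolExecutor(max_workers=2) as ex:
    # Works even with an infinite input such as itertools.count():
    first = list(itertools.islice(
        map_with_prefetch(ex, lambda x: x * x, itertools.count()), 5))
```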

--
nosy: +max

___
Python tracker 
<http://bugs.python.org/issue29842>
___



[issue29842] Make Executor.map work with infinite/large inputs correctly

2017-05-15 Thread Max

Max added the comment:

Correction: this PR is useful for `ProcessPoolExecutor` as well. I thought 
`chunksize` parameter handles infinite generators already, but I was wrong. 
And, as long as the number of items prefetched is a multiple of `chunksize`, 
there are no issues with the chunksize optimization either.

And a minor correction: when listing the advantages of this PR, I should have 
said: "In addition, if the pool is not busy when `map` is called, your 
implementation will also be more responsive, since it will yield the first 
result earlier."

--

___
Python tracker 
<http://bugs.python.org/issue29842>
___



[issue30488] Documentation for subprocess.STDOUT needs clarification

2017-05-26 Thread Max

New submission from Max:

The documentation states that subprocess.STDOUT is:

Special value that can be used as the stderr argument to Popen and indicates 
that standard error should go into the same handle as standard output.

However, when Popen is called with stdout=None, stderr=subprocess.STDOUT, 
stderr is not redirected to stdout and continues to be sent to stderr.

To reproduce the problem:

$ python >/dev/null -c 'import subprocess;\
subprocess.call(["ls", "/404"], stderr=subprocess.STDOUT)'

and observe the error message appearing on the console (assuming /404 directory 
does not exist).

This was reported on SO 5 years ago: 
https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout.

The SO attributed this to a documentation issue, but arguably it should be 
considered a bug because there seems to be no reason to make subprocess.STDOUT 
unusable in this very common use case.
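In other words, the merge only takes effect when stdout itself is redirected; a sketch of the working spelling:

```python
import subprocess
import sys

# With stdout captured, stderr=subprocess.STDOUT merges the streams as documented:
proc = subprocess.run(
    [sys.executable, '-c', 'import sys; sys.stderr.write("to stderr")'],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
assert b'to stderr' in proc.stdout
```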

--
components: Interpreter Core
messages: 294560
nosy: max
priority: normal
severity: normal
status: open
title: Documentation for subprocess.STDOUT needs clarification
type: behavior
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<http://bugs.python.org/issue30488>
___



[issue30517] Enum does not recognize enum.auto as unique values

2017-05-30 Thread Max

New submission from Max:

This probably shouldn't happen:

import enum

class E(enum.Enum):
  A = enum.auto
  B = enum.auto

x = E.B.value
print(x) # 
print(E(x))  # E.A

The first print() is kinda ok, I don't really care about which value was used 
by the implementation. But the second print() seems surprising.

By the same token, this probably shouldn't raise an exception (it does now):

import enum

@enum.unique
class E(enum.Enum):
  A = enum.auto
  B = enum.auto
  C = object()

and `dir(E)` shouldn't skip `B` in its output (it does now).
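As the follow-up below notes, everything works once auto is actually called; the intended spelling is:

```python
import enum

class E(enum.Enum):
    A = enum.auto()   # the call is required: auto() generates the next value
    B = enum.auto()

assert E.A.value == 1 and E.B.value == 2
assert E(E.B.value) is E.B
```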

--
components: Library (Lib)
messages: 294804
nosy: max
priority: normal
severity: normal
status: open
title: Enum does not recognize enum.auto as unique values
type: behavior
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue30517>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30517] Enum does not recognize enum.auto as unique values

2017-05-31 Thread Max

Max added the comment:

Ah sorry about that ... Yes, everything works fine when used properly.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue30517>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9592] Limitations in objects returned by multiprocessing Pool

2012-09-11 Thread Max

Max added the comment:

I propose to close this issue as fixed.

The first two problems in the OP are now resolved through patches to pickle.

The third problem is addressed by issue5370: it is a documented feature of 
pickle that anyone who defines __setattr__ / __getattr__ that depend on an 
internal state must also take care to restore that state during unpickling. 
Otherwise, the code is not pickle-safe, and by extension, not 
multiprocessing-safe.
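A sketch of the pattern the comment describes (the `Cache` class is illustrative, not from the original report): a class whose __getattr__ depends on internal state restores that state explicitly during unpickling, which makes it pickle-safe and therefore multiprocessing-safe.

```python
import pickle

class Cache:
    def __init__(self):
        self._cache = {}          # internal invariant: _cache always exists

    def __getattr__(self, name):
        # Relies on the _cache invariant; accessed via object.__getattribute__
        # to avoid recursing into __getattr__ itself.
        cache = object.__getattribute__(self, "_cache")
        if name in cache:
            return cache[name]
        raise AttributeError(name)

    def __reduce__(self):
        # Restore the internal state explicitly on unpickling.
        return (Cache, (), self._cache)

    def __setstate__(self, state):
        self._cache = state

c = Cache()
c._cache["answer"] = 42
c2 = pickle.loads(pickle.dumps(c))
```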

--
nosy: +max

___
Python tracker 
<http://bugs.python.org/issue9592>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15981] improve documentation of __hash__

2012-09-20 Thread Max

New submission from Max:

In dev/reference/datamodel#object.__hash__, there are two paragraphs that seem 
inconsistent. The first paragraph seems to say that a class that overrides 
__eq__() *should* explicitly flag itself as unhashable. The next paragraph says 
that a class that overrides __eq__() *will be* flagged unhashable by default. 
Which one is it?

Here are the two paragraphs:

Classes which inherit a __hash__() method from a parent class but change the 
meaning of __eq__() such that the hash value returned is no longer appropriate 
(e.g. by switching to a value-based concept of equality instead of the default 
identity based equality) can explicitly flag themselves as being unhashable by 
setting __hash__ = None in the class definition. Doing so means that not only 
will instances of the class raise an appropriate TypeError when a program 
attempts to retrieve their hash value, but they will also be correctly 
identified as unhashable when checking isinstance(obj, collections.Hashable) 
(unlike classes which define their own __hash__() to explicitly raise 
TypeError).

If a class that overrides __eq__() needs to retain the implementation of 
__hash__() from a parent class, the interpreter must be told this explicitly by 
setting __hash__ = <ParentClass>.__hash__. Otherwise the inheritance of 
__hash__() will be blocked, just as if __hash__ had been explicitly set to None.
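The behavior the two paragraphs describe can be sketched (class names are made up for the example):

```python
# Overriding __eq__ without defining __hash__ blocks the inherited hash:
class Point:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return isinstance(other, Point) and self.x == other.x

try:
    hash(Point(1))
    blocked = False
except TypeError:                 # __hash__ was implicitly set to None
    blocked = True

# Explicitly retaining the parent implementation re-enables hashing:
class IdPoint(Point):
    __hash__ = object.__hash__

works = isinstance(hash(IdPoint(1)), int)
```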

--
assignee: docs@python
components: Documentation
messages: 170798
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: improve documentation of __hash__
type: enhancement
versions: Python 3.3

___
Python tracker 
<http://bugs.python.org/issue15981>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15997] NotImplemented needs to be documented

2012-09-20 Thread Max

New submission from Max:

Quoting from 
http://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy:

NotImplemented
This type has a single value. There is a single object with this value. This 
object is accessed through the built-in name NotImplemented. Numeric methods 
and rich comparison methods may return this value if they do not implement the 
operation for the operands provided. (The interpreter will then try the 
reflected operation, or some other fallback, depending on the operator.) Its 
truth value is true.

This is not a sufficient description of NotImplemented behavior. What does 
"reflected operation" mean (I assume it is other.__eq__(self), but it needs to 
be clarified), and what does "or some other fallback" mean (wouldn't developers 
need to know?)? It also doesn't state what happens if the reflected operation 
or the fallback again returns NotImplemented.

The rest of the documentation doesn't seem to talk about this either, despite 
several mentions of NotImplemented, with references to other sections.

This is a particularly serious problem because Python's behavior changed in 
this respect not long ago.
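A short demonstration of the reflected-operation fallback being asked about (illustrative classes, not from the docs):

```python
class Left:
    def __eq__(self, other):
        return NotImplemented   # decline; interpreter tries the other side

class Right:
    def __eq__(self, other):
        return True             # reflected call: Right.__eq__(right, left)

# Left.__eq__ returns NotImplemented, so == falls back to the reflected
# call on the right operand, which answers True:
assert Left() == Right()

# If both sides return NotImplemented, the final fallback for == and !=
# is identity comparison:
assert Left() != Left()
a = Left()
assert a == a
```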

--
assignee: docs@python
components: Documentation
messages: 170860
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: NotImplemented needs to be documented
type: enhancement
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue15997>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15997] NotImplemented needs to be documented

2012-09-20 Thread Max

Max added the comment:

I agree about the reflected operation, although the wording could be clearer 
("will try the reflected operation" is better worded as "will return the result 
of the reflected operation called on the swapped arguments").

But what does "or some other fallback" mean? And what if the reflected 
operation or the fallback again returns NotImplemented, or is actually not 
implemented? Is this covered somewhere else in the docs?

--

___
Python tracker 
<http://bugs.python.org/issue15997>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16128] hashable documentation error

2012-10-04 Thread Max

New submission from Max:

http://docs.python.org/dev/glossary.html?highlight=hashable says:

Objects which are instances of user-defined classes are hashable by default; 
they all compare unequal, and their hash value is their id().

Since x == x returns True by default, "they all compare unequal" isn't quite 
right.

In addition, both the above paragraph and 
http://docs.python.org/dev/reference/datamodel.html?highlight=__eq__#object.__hash__
 say:

User-defined classes have __eq__() and __hash__() methods by default; with 
them, all objects compare unequal (except with themselves) and x.__hash__() 
returns an appropriate value such that x == y implies both that x is y and 
hash(x) == hash(y).

This is correct, but may leave some confusion with the reader about what 
happens to a subclass of a built-in class (which doesn't use the default 
behavior, but instead simply inherits the parent's __hash__ and __eq__).
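A sketch of the subclass case in question, next to the default behaviour (illustrative classes):

```python
class MyInt(int):
    pass

# MyInt does not use the default identity-based __eq__/__hash__; it simply
# inherits int's value-based behaviour:
a, b = MyInt(7), MyInt(7)
assert a is not b
assert a == b                  # value-based equality, unlike plain objects
assert hash(a) == hash(b)      # derived from the value 7, not from id()

class Plain:
    pass

# Default behaviour for user-defined classes: instances compare unequal
# to everything except themselves.
p, q = Plain(), Plain()
assert p != q and p == p
```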

--
assignee: docs@python
components: Documentation
messages: 171935
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: hashable documentation error
type: enhancement
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue16128>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21214] PEP8 doesn't verify last line.

2014-04-14 Thread Max

New submission from Max:

PEP8 doesn't verify the last line at all, so W292 will never be checked.
Reproducible on PEP8 >= 1.5.0.

--
messages: 216072
nosy: f1ashhimself
priority: normal
severity: normal
status: open
title: PEP8 doesn't verify last line.
type: behavior

___
Python tracker 
<http://bugs.python.org/issue21214>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28785] Clarify the behavior of NotImplemented

2016-11-24 Thread Max

New submission from Max:

Currently, there's no clear statement as to what exactly the fallback is in 
case `__eq__` returns `NotImplemented`.  It would be good to clarify the 
behavior of `NotImplemented`; at least for `__eq__`, but perhaps also other 
rich comparison methods. For example: "When `NotImplemented` is returned from a 
rich comparison method, the interpreter behaves as if the rich comparison 
method was not defined in the first place." See 
http://stackoverflow.com/questions/40780004/returning-notimplemented-from-eq 
for more discussion.
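The proposed wording can be checked with a sketch: returning NotImplemented from __eq__ behaves like not defining __eq__ at all, with == falling back to identity comparison in both cases (illustrative classes):

```python
class Declines:
    def __eq__(self, other):
        return NotImplemented   # decline the comparison

class Undefined:
    pass                        # no __eq__ at all

d1, d2 = Declines(), Declines()
u1, u2 = Undefined(), Undefined()

# Both classes end up with identity-based equality:
assert d1 != d2 and d1 == d1
assert u1 != u2 and u1 == u1
```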

--
assignee: docs@python
components: Documentation
messages: 281616
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Clarify the behavior of NotImplemented
type: enhancement
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue28785>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28785] Clarify the behavior of NotImplemented

2016-11-24 Thread Max

Max added the comment:

Martin - what you suggest is precisely what I had in mind (but didn't phrase it 
as well):

> to document the above sort of behaviour as being directly associated with 
> operations like as == and !=, and only indirectly associated with the 
> NotImplemented object and the __eq__() method

Also a minor typo: you meant "If that call returns NotImplemented, the first 
fallback is to try the *reverse* call."

--

___
Python tracker 
<http://bugs.python.org/issue28785>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29415] Exposing handle._callback and handle._args in asyncio

2017-02-01 Thread Max

New submission from Max:

Is it safe to use the _callback and _args attributes of asyncio.Handle? Is it 
possible to officially expose them as public API?

My use case: 

handle = event_loop.call_later(delay, callback)

# this function can be triggered by some events
def reschedule(handle):
  event_loop.call_later(new_delay, handle._callback, *handle._args)
  handle.cancel()
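For reference, a sketch that avoids the private attributes by storing the callback and args explicitly (`Reschedulable` is a hypothetical helper name, not an asyncio API):

```python
import asyncio

class Reschedulable:
    """Hypothetical helper: keep callback/args ourselves instead of
    reading the private Handle._callback / Handle._args attributes."""
    def __init__(self, loop, delay, callback, *args):
        self._loop = loop
        self._callback = callback
        self._args = args
        self._handle = loop.call_later(delay, callback, *args)

    def reschedule(self, new_delay):
        # Cancel the pending timer and schedule a fresh one.
        self._handle.cancel()
        self._handle = self._loop.call_later(
            new_delay, self._callback, *self._args)

fired = []
loop = asyncio.new_event_loop()
timer = Reschedulable(loop, 60.0, fired.append, "done")
timer.reschedule(0.01)          # pull the timer in; fires almost at once
loop.run_until_complete(asyncio.sleep(0.05))
loop.close()
```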

--
components: asyncio
messages: 286709
nosy: gvanrossum, max, yselivanov
priority: normal
severity: normal
status: open
title: Exposing handle._callback and handle._args in asyncio
type: enhancement
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29415] Exposing handle._callback and handle._args in asyncio

2017-02-01 Thread Max

Max added the comment:

@yselivanov I just wanted to use the handler to avoid storing the callback and 
args in my own data structure (I would just store the handlers whenever I may 
need to reschedule). Not a big deal, I don't have to use handler as a storage 
space, if it's not supported across implementations.

--

___
Python tracker 
<http://bugs.python.org/issue29415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29597] __new__ / __init__ calls during unpickling not documented correctly

2017-02-18 Thread Max

New submission from Max:

According to the 
[docs](https://docs.python.org/3/library/pickle.html#pickling-class-instances):

> Note: At unpickling time, some methods like `__getattr__()`, 
> `__getattribute__()`, or `__setattr__()` may be called upon the instance. In 
> case those methods rely on some internal invariant being true, the type 
> should implement `__getnewargs__()` or `__getnewargs_ex__()` to establish 
> such an invariant; otherwise, neither `__new__()` nor `__init__()` will be 
> called.

It seems, however, that this note is incorrect. First, `__new__` is called even 
if `__getnewargs__` isn't implemented. Second, `__init__` is not called even if 
it is (while the note didn't say that `__init__` would be called when 
`__getnewargs__` is defined, the wording does seem to imply it).


import pickle

class A:
def __new__(cls, *args):
print('__new__ called with', args)
return object.__new__(cls)

def __init__(self, *args):
print('__init__ called with', args)
self.args = args

def __getnewargs__(self):
print('called')
return ()

a = A(1)
s = pickle.dumps(a)
a = pickle.loads(s) # __new__ called, not __init__
delattr(A, '__getnewargs__') 
a = A(1)
s = pickle.dumps(a)
a = pickle.loads(s) # __new__ called, not __init__

--
assignee: docs@python
components: Documentation
messages: 288088
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: __new__ / __init__ calls during unpickling not documented correctly
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29597>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15373] copy.copy() does not properly copy os.environment

2022-03-01 Thread Max Katsev


Max Katsev  added the comment:

Note that deepcopy doesn't work either, even though it looks like it does at 
first glance (which is arguably worse, since it's harder to notice):

Python 3.8.6 (default, Jun  4 2021, 05:16:01)
>>> import copy, os, subprocess
>>> env_copy = copy.deepcopy(os.environ)
>>> env_copy["TEST"] = "oh no"
>>> os.environ["TEST"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/fbcode/platform009/lib/python3.8/os.py", line 675, in __getitem__
    raise KeyError(key) from None
KeyError: 'TEST'
>>> subprocess.run("echo $TEST", shell=True,
...                capture_output=True).stdout.decode()
'oh no\n'
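A sketch of a copy that does detach from the process environment (the key name is made up for the demo):

```python
import os

# os.environ.copy() (or dict(os.environ)) returns a plain dict, so writes
# to the copy no longer call putenv() behind the scenes:
env_copy = os.environ.copy()
env_copy["ISSUE15373_TEST"] = "oh no"

assert "ISSUE15373_TEST" not in os.environ   # parent environment untouched
# A plain dict is also what subprocess expects for its env= argument.
```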

--
nosy: +mkatsev

___
Python tracker 
<https://bugs.python.org/issue15373>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46935] import of submodule pollutes global namespace

2022-03-05 Thread Max Bachmann


New submission from Max Bachmann :

In my environment I installed the following two libraries:
```
pip install rapidfuzz
pip install python-Levenshtein
```
Those two libraries have the following structures:
rapidfuzz
|-distance
  |- __init__.py (from . import Levenshtein)
  |- Levenshtein.*.so
|-__init__.py (from rapidfuzz import distance)


Levenshtein
|-__init__.py

When importing Levenshtein first everything behaves as expected:
```
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.apply_edit(       Levenshtein.jaro_winkler(     Levenshtein.ratio(
Levenshtein.distance(         Levenshtein.matching_blocks(  Levenshtein.seqratio(
Levenshtein.editops(          Levenshtein.median(           Levenshtein.setmedian(
Levenshtein.hamming(          Levenshtein.median_improve(   Levenshtein.setratio(
Levenshtein.inverse(          Levenshtein.opcodes(          Levenshtein.subtract_edit(
Levenshtein.jaro(             Levenshtein.quickmedian(
>>> import rapidfuzz
>>> Levenshtein.
Levenshtein.apply_edit(       Levenshtein.jaro_winkler(     Levenshtein.ratio(
Levenshtein.distance(         Levenshtein.matching_blocks(  Levenshtein.seqratio(
Levenshtein.editops(          Levenshtein.median(           Levenshtein.setmedian(
Levenshtein.hamming(          Levenshtein.median_improve(   Levenshtein.setratio(
Levenshtein.inverse(          Levenshtein.opcodes(          Levenshtein.subtract_edit(
Levenshtein.jaro(             Levenshtein.quickmedian(
```

However, when importing rapidfuzz first, `import Levenshtein` imports 
`rapidfuzz.distance.Levenshtein` instead:
```
>>> import rapidfuzz
>>> Levenshtein
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'Levenshtein' is not defined
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.array(        Levenshtein.normalized_distance(    Levenshtein.similarity(
Levenshtein.distance(     Levenshtein.normalized_similarity(
Levenshtein.editops(      Levenshtein.opcodes(
```

My expectation was that in both cases `import Levenshtein` should import the 
`Levenshtein` module. I could reproduce this behavior on all Python versions I 
had available (Python3.8 - Python3.10) on Ubuntu and Fedora.
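One mechanism that produces exactly this symptom can be sketched directly with sys.modules (the fake module here is illustrative; it stands in for whatever the other package registered under the top-level name):

```python
import sys
import types

# Whatever a package leaves in sys.modules under a top-level name wins:
# a later `import Levenshtein` returns the cached entry instead of
# searching sys.path for the real package.
fake = types.ModuleType("Levenshtein")
fake.marker = "installed by another package"
sys.modules["Levenshtein"] = fake

import Levenshtein
assert Levenshtein is fake
```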

--
components: Interpreter Core
messages: 414599
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: import of submodule pollutes global namespace
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46935>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46935] import of submodule pollutes global namespace

2022-03-05 Thread Max Bachmann


Max Bachmann  added the comment:

It appears this only occurs when a C extension is involved. When the .so is 
imported first, it is preferred over the .py file that the user would like to 
import. I could not find any documentation on this behavior, so I assume it is 
not intended.

My current workaround is to use a unique name for the C extension and to import 
everything from a Python file with the corresponding name.

--

___
Python tracker 
<https://bugs.python.org/issue46935>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46935] import of submodule pollutes global namespace

2022-03-06 Thread Max Bachmann


Max Bachmann  added the comment:

Thanks Dennis. This helped me track down the issue in rapidfuzz.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue46935>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10243] Packaged Pythons

2010-10-29 Thread Max Skaller

New submission from Max Skaller :

Not sure if this is a bug or not. I am unable to find libpython.so for Python3 
on either my Mac or Ubuntu. Perhaps this is a packaging fault, however some 
documentation in the Wiki suggests otherwise. It appears the builders have 
reverted to an archaic linkage pattern which I helped to get rid of (let's see, 
perhaps a decade ago?). Python 2.6, for example, does ship a shared library.

Python must be shipped with code linked as follows, I will explain below why, 
but first the list:

1) The mainline (C main() function) must be a stub which calls the real 
mainline, which is located in libpython.

2) The mainline MUST be compiled with C++ not C.

3) All extension libraries and add-ons to Python provided as shared libraries 
must be explicitly linked against libpython.

In particular it is NOT acceptable for any extension or shared library 
component to expect to find its symbols in the host application executable as 
the Wiki documentation seems to suggest (in a section which explains a bit of a 
workaround for OSX frameworks).

Now the reason it MUST be this way. First, any C++ code which is to be linked 
into an application, either statically, dynamically at load time, or under 
program control at run time, may require certain stuff to be in place (RTTI, 
streams, exception handling stuff, or whatever) which can only be put in place 
in the initialisation of the main application. Although the details are 
platform specific, it is simply not safe to permit C++ extension modules unless 
this step is taken.

Legacy or embedded systems may have to make do with a C mainline, and systems 
which don't support dynamic loading can also do without C++ compilation 
provided the pre-loaded extensions are all C.

On most major platforms, however, a C++ driver stub is required.

The second issue is also quite simple. It is quite incorrect in a modern 
computing environment to assume an *application* will be hosting the python 
interpreter directly. It is not only possible, it is in fact the case for my 
project, that were the Python interpreter to be called, it would from a shared 
library loaded under program control at run time.

Such an interpreter cannot be loaded at all if it isn't present in a library: 
it either has to be statically linked into the shared library making the call, 
with some ugly linker switches to make sure no symbols are dropped, or it has 
to be loaded dynamically. The latter case is the only viable option if the run 
time linker is unable to share symbols to a loaded application, and even if 
that is possible and can be arranged it is not likely to work so well if 
multiple shared libraries try to do it.

Similarly, even if you managed to load it somehow, any dynamically loaded 
extensions may or may not be able to find the symbols.

The ONLY reliable way to ensure extensions can find libpython symbols is to 
link them against libpython.

In fact, the mainline application should not only contain NO libpython symbols 
specifically to disable this wrong practice and break any bad extensions that 
rely on it, it should also, as explained, contain exactly one reference to 
libpython, which calls Python with argc, argv[] as if it were the mainline.

Just as motivation here: my product is an ultra-high performance programming 
language with a special construction to allow Python C-extension modules to be 
built. Its target is usually a shared library and that library is produced by 
first generating C++ and then compiling it with your native C++ compiler.

For a generated program to call Python interpreter, it HAS to be available in a 
shared library, and for any extension modules that interpreter loads, they HAVE 
to get their symbols from that shared library, and, if the generated program is 
itself a Python module, then if that module is to be loaded from any C 
extension, including itself or some other extension, it HAS to be linked 
against libpython which HAS to be loaded dynamically by the loader.

The unfortunate downside of this is that it is NOT POSSIBLE to have a huge 
statically linked Python executable which just loads C extensions and nothing 
else happens. If you're loading any C extensions dynamically libpython must be 
loaded dynamically too.

Just to be clear: I can easily build it the way I want it but this will not 
solve my problem, which is to support clients who wish to use my product to 
generate high performance Python callable modules which will just work "out of 
the box" with existing Python code. In particular, replacing some slow modules 
with optimised ones would be more or less entirely transparent .. except that 
at the moment it could only work with Python 2.x since Python 3.x shipments 
don't seem to have any shared libpython included (and I just changed my 
compiler to support Python 3 modules instead of Python 2).

--
components: Installation
messages: 119963
nosy: Max.Skaller
pri

[issue10243] Packaged Pythons

2010-11-03 Thread Max Skaller

Max Skaller  added the comment:

On Sat, Oct 30, 2010 at 6:40 PM, Martin v. Löwis wrote:

It may be there is none. You need to read the bit where I explain that I am
not building Python, I'm grabbing pre-made packages, for OSX and for Ubuntu.

The problem is that these packages don't seem to supply a dynamic link
version.

My problem cannot be solved by telling me I can build Python myself with
the --enable-shared switch, because I am not the client. I am a vendor
supplying a tool that can generate Python shared libraries which cannot run
unless the CLIENT has a shared library version of libpython. So you're
telling me to tell THEM to build Python with --enable-shared switch which is
out of the question for many of them, who may, for example, be programmers
working in a shop where they do not have the ability to change the system
installed by their system admin.

So the problem is that the *packagers* are not supplying the dynamic lib.

Surely that is not the Python dev's issue directly, but it IS an issue the
Python dev's can do something about, by talking to the packagers.

Anyhow, I will give up. I can't test the feature of the compiler I have
implemented
because I don't have a libpython.so and I have no intention of building one,
because I can't expect all those Python users out there to do it either.

It seems you really don't understand the requirements for dynamic linking:
my application code is exclusively in a dlopen()d shared library, so if it
is used AS a python module, or wants itself to USE a Python extension
module OR the Python interpreter itself, it cannot do so.

The top level application is a fixed mainline which does not include
libpython.a or any such symbols.

It's a basic design principle, Meyer called it "the principle of explicit
interfaces"
which means: if you depend on something make the dependency explicit.

Extension modules which do not *explicitly* link against libpython break
this rule.

--
Added file: http://bugs.python.org/file19480/unnamed

___
Python tracker 
<http://bugs.python.org/issue10243>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10243] Packaged Pythons

2010-11-04 Thread Max Skaller

Max Skaller  added the comment:

On Thu, Nov 4, 2010 at 5:19 PM, Ned Deily  wrote:

>
> Ned Deily  added the comment:
>
> For what it's worth, the python.org installers for Mac OS X do include a
> libpython shared library.  As of Python 2.7 (and 3.2), the installer
> includes a symlink to make it easier to find:
>
> $ cd /Library/Frameworks/Python.framework/Versions/2.7/lib
> $ ls -l libpython2.7.dylib
>

Ok.. so why is it called Python instead of Python.dylib?

/Library/Frameworks/Python.framework>file Python
Python: broken symbolic link to Versions/Current/Python

/Library/Frameworks/Python.framework/Versions/3.1>file Python
Python: Mach-O universal binary with 2 architectures
Python (for architecture ppc):  Mach-O dynamically linked shared library ppc
Python (for architecture i386): Mach-O dynamically linked shared library
i386

Hmm .. i386? Oh dear, I'm running Snow Leopard and I generate 64 bit code.

--
Added file: http://bugs.python.org/file19498/unnamed

___
Python tracker 
<http://bugs.python.org/issue10243>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1647654] No obvious and correct way to get the time zone offset

2010-11-22 Thread Max Arnold

Max Arnold  added the comment:

Our region recently switched to another timezone and I've noticed similar issue 
while using Mercurial. There is some (hopefully) useful details: 
http://mercurial.selenic.com/bts/issue2511

--
nosy: +LwarX

___
Python tracker 
<http://bugs.python.org/issue1647654>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1752] logging.basicConfig misleading behaviour

2008-01-06 Thread Max Ischenko

New submission from Max Ischenko:

Function logging.basicConfig has a confusing and undocumented behaviour:
it does nothing if there are any handlers already present in the root logger.

It could be more explicit, say, by giving a ValueError in such cases.
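A sketch of the behaviour in question (and of the force keyword that Python 3.8 eventually added to address it):

```python
import logging

root = logging.getLogger()
root.addHandler(logging.NullHandler())    # root now "has handlers"

# basicConfig silently does nothing here -- no error, and even the
# requested level is not applied:
logging.basicConfig(level=logging.DEBUG)
assert root.level == logging.WARNING      # still the default

# Python 3.8 added force=True to make the reconfiguration explicit:
logging.basicConfig(level=logging.DEBUG, force=True)
assert root.level == logging.DEBUG
```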

--
components: Library (Lib)
messages: 59437
nosy: imax
severity: normal
status: open
title: logging.basicConfig misleading behaviour
type: behavior
versions: Python 2.5

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1752>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4979] random.uniform can return its upper limit

2009-01-17 Thread Max Hailperin

New submission from Max Hailperin :

The documentation for random.uniform says that random.uniform(a,b) should 
return a number strictly less than b, assuming a<b (and conversely, strictly 
greater than b if a>b). Thus both of the following expressions should always 
evaluate to False:

a<b and random.uniform(a,b)==b
a>b and random.uniform(a,b)==b

However, because of floating-point rounding, random.uniform can in fact return 
its upper limit b.

___
Python tracker 
<http://bugs.python.org/issue4979>
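The claim can be checked empirically with a sketch: here b is chosen exactly one ulp above a, so the rounding in a + (b-a)*random() frequently lands on b.

```python
import random

# b is the smallest double strictly greater than a. uniform(a, b) computes
# a + (b - a) * random(); whenever the product exceeds half an ulp, the
# sum rounds up to b, so the "exclusive" upper limit is returned often.
a = 1.0
b = a + 2.0 ** -52
hits = sum(random.uniform(a, b) == b for _ in range(10_000))
```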
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38279] multiprocessing example enhancement

2019-09-25 Thread Max Voss


New submission from Max Voss :

Hello all,

I've been trying to understand multiprocessing for a while, I tried multiple 
times. The PR is a suggested enhancement to the example that made it "click" 
for me. Or should I say, produced a working result that made sense to me.

Details for each change in the PR. It's short too.

The concept of multiprocessing is easy enough, but the syntax is so unlike 
regular Python, and so much happens "behind the curtain" so to speak, that it 
took me a while. When I looked for multiprocessing advice online, many answers 
seemed unsure whether or how their solution worked.

Generally I'd like to help write documentation. So this is a test to see how 
good your issue handling process is too. :)

--
assignee: docs@python
components: Documentation
messages: 353222
nosy: BMV, docs@python
priority: normal
severity: normal
status: open
title: multiprocessing example enhancement
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38279>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43565] PyUnicode_KIND macro does not have the specified return type

2021-03-19 Thread Max Bachmann


New submission from Max Bachmann :

The documentation states that the PyUnicode_KIND macro has the following 
interface:
- int PyUnicode_KIND(PyObject *o)
However, it actually returns a value of the underlying type of the 
PyUnicode_Kind enum, which could just as well be, e.g., an unsigned int.

--
components: C API
messages: 389133
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: PyUnicode_KIND macro does not have the specified return type
type: behavior

___
Python tracker 
<https://bugs.python.org/issue43565>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26680] Incorporating float.is_integer into Decimal

2021-03-21 Thread Max Prokop


Change by Max Prokop :


--
components: +2to3 (2.x to 3.x conversion tool), Argument Clinic, Build, C API, 
Cross-Build, Demos and Tools, Distutils, Documentation, asyncio, ctypes
nosy: +Alex.Willmer, asvetlov, dstufft, eric.araujo, larry, yselivanov
type: enhancement -> compile error
Added file: https://bugs.python.org/file49898/Mobile_Signup.vcf

___
Python tracker 
<https://bugs.python.org/issue26680>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41100] Support macOS 11 and Apple Silicon Macs

2021-04-08 Thread Max Bélanger

Change by Max Bélanger :


--
nosy: +maxbelanger
nosy_count: 18.0 -> 19.0
pull_requests: +24010
pull_request: https://github.com/python/cpython/pull/25274

___
Python tracker 
<https://bugs.python.org/issue41100>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-04-08 Thread Max Bélanger

Change by Max Bélanger :


--
nosy: +maxbelanger
nosy_count: 4.0 -> 5.0
pull_requests: +24011
pull_request: https://github.com/python/cpython/pull/25274

___
Python tracker 
<https://bugs.python.org/issue42688>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44153] Signaling an asyncio subprocess raises ProcessLookupError, depending on timing

2021-05-16 Thread Max Marrone


New submission from Max Marrone :

# Summary

Basic use of `asyncio.subprocess.Process.terminate()` can raise a 
`ProcessLookupError`, depending on the timing of the subprocess's exit.

I assume (but haven't checked) that this problem extends to `.kill()` and 
`.send_signal()`.

This breaks the expected POSIX semantics of signaling and waiting on a process. 
See the "Expected behavior" section.


# Test case

I've tested this on macOS 11.2.3 with Python 3.7.9 and Python 3.10.0a7, both 
installed via pyenv.

```
import asyncio
import sys

# Tested with:
# asyncio.ThreadedChildWatcher (3.10.0a7  only)
# asyncio.MultiLoopChildWatcher (3.10.0a7 only)
# asyncio.SafeChildWatcher (3.7.9 and 3.10.0a7)
# asyncio.FastChildWatcher (3.7.9 and 3.10.0a7)
# Not tested with asyncio.PidfdChildWatcher because I'm not on Linux.
WATCHER_CLASS = asyncio.FastChildWatcher

async def main():
# Dummy command that should be executable cross-platform.
process = await asyncio.subprocess.create_subprocess_exec(
sys.executable, "--version"
)

for i in range(20):
# I think the problem is that the event loop opportunistically wait()s
# all outstanding subprocesses on its own. Do a bunch of separate
# sleep() calls to give it a bunch of chances to do this, for reliable
# reproduction.
#
# I'm not sure if this is strictly necessary for the problem to happen.
# On my machine, the problem also happens with a single sleep(2.0).
await asyncio.sleep(0.1)

process.terminate() # This unexpectedly errors with ProcessLookupError.

print(await process.wait())

asyncio.set_child_watcher(WATCHER_CLASS())
asyncio.run(main())
```

The `process.terminate()` call raises a `ProcessLookupError`:

```
Traceback (most recent call last):
  File "kill_is_broken.py", line 29, in <module>
asyncio.run(main())
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/runners.py", 
line 43, in run
return loop.run_until_complete(main)
  File 
"/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_events.py", line 
587, in run_until_complete
return future.result()
  File "kill_is_broken.py", line 24, in main
process.terminate() # This errors with ProcessLookupError.
  File 
"/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/subprocess.py", line 
131, in terminate
self._transport.terminate()
  File 
"/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", 
line 150, in terminate
self._check_proc()
  File 
"/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", 
line 143, in _check_proc
raise ProcessLookupError()
ProcessLookupError
```


# Expected behavior and discussion

Normally, with POSIX semantics, the `wait()` syscall tells the operating system 
that we won't send any more signals to that process, and that it's safe for the 
operating system to recycle that process's PID. This comment from Jack O'Connor 
on another issue explains it well: https://bugs.python.org/issue40550#msg382427

So, I expect that on any given `asyncio.subprocess.Process`, if I call 
`.terminate()`, `.kill()`, or `.send_signal()` before I call `.wait()`, then:

* It should not raise a `ProcessLookupError`.
* The asyncio internals shouldn't do anything with a stale PID. (A stale PID is 
one that used to belong to our subprocess, but that we've since consumed 
through a `wait()` syscall, allowing the operating system to recycle it).

asyncio internals are mostly over my head. But I *think* the problem is that 
the event loop opportunistically calls the `wait()` syscall on our child 
processes. So, as implemented, there's a race condition. If the event loop's 
`wait()` syscall happens to come before my `.terminate()` call, my 
`.terminate()` call will raise a `ProcessLookupError`.

So, as a corollary to the expectations listed above, I think the implementation 
details should be either:

* Ideally, the asyncio internals should not call syscall `wait()` on a process 
until *I* call `wait()` on that process. 
* Failing that, `.terminate()`, `.kill()` and `.send_signal()` should 
no-op if the asyncio internals have already called `.wait()` on that process.
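Until then, a caller-side sketch of a workaround (the helper name is made up; 
it merely swallows the race rather than fixing it):

```python
import contextlib

def terminate_quietly(process):
    """Terminate an asyncio subprocess, ignoring the race where the event
    loop has already reaped the child (a workaround sketch, not a fix for
    the underlying issue)."""
    with contextlib.suppress(ProcessLookupError):
        process.terminate()
```

The obvious cost is that any genuine ProcessLookupError is hidden as well.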

--
components: asyncio
messages: 393764
nosy: asvetlov, syntaxcoloring, yselivanov
priority: normal
severity: normal
status: open
title: Signaling an asyncio subprocess raises ProcessLookupError, depending on 
timing
type: behavior
versions: Python 3.10, Python 3.7

___
Python tracker 
<https://bugs.python.org/issue44153>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44153] Signaling an asyncio subprocess might raise ProcessLookupError, even if you haven't called .wait() yet

2021-05-16 Thread Max Marrone


Change by Max Marrone :


--
title: Signaling an asyncio subprocess raises ProcessLookupError, depending on 
timing -> Signaling an asyncio subprocess might raise ProcessLookupError, even 
if you haven't called .wait() yet

___
Python tracker 
<https://bugs.python.org/issue44153>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

New submission from Max Bachmann :

I noticed odd behavior with the Unicode character \U00010900 when inserting 
the character literally rather than as an escape sequence. Here is the result 
on the Python console for both 3.6 and 3.9:
```
>>> s = '0𐤀00'
>>> s
'0𐤀00'
>>> ls = list(s)
>>> ls
['0', '𐤀', '0', '0']
>>> s[0]
'0'
>>> s[1]
'𐤀'
>>> s[2]
'0'
>>> s[3]
'0'
>>> ls[0]
'0'
>>> ls[1]
'𐤀'
>>> ls[2]
'0'
>>> ls[3]
'0'
```

It appears that, for some reason, in this specific case the character is 
actually stored in a different position than the one shown when printing the 
complete string. Note that the string already behaves strangely when selecting 
it in the console: when selecting the special character, the last 3 characters 
are highlighted at once (probably because the console already considers this 
character to be in the second position).

The same behavior does not occur when directly using the Unicode code point:
```
>>> s='000\U00010900'
>>> s
'000𐤀'
>>> s[0]
'0'
>>> s[1]
'0'
>>> s[2]
'0'
>>> s[3]
'𐤀'
```

This was tested using the following Python versions:
```
Python 3.6.0 (default, Dec 29 2020, 02:18:14) 
[GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux

Python 3.9.6 (default, Jul 16 2021, 00:00:00) 
[GCC 11.1.1 20210531 (Red Hat 11.1.1-3)] on linux
```
on Fedora 34

--
components: Unicode
messages: 401078
nosy: ezio.melotti, maxbachmann, vstinner
priority: normal
severity: normal
status: open
title: Incorrect handling of unicode character \U00010900
type: behavior
versions: Python 3.6, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue45105>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

This is the result of copy-pasting the example posted above on Windows, using
```
Python 3.7.8 (tags/v3.7.8:4b47a5b6ba, Jun 28 2020, 08:53:46) [MSC v.1916 64 bit 
(AMD64)] on win32
```
which appears to run into similar problems:
```
>>> s = '0��00' 
>>> 
>>> 
>>> 
>>>   >>> s 
>>> 
>>> 
>>> 
>>> 
>>> '0𐤀00'  
>>> 
>>> 
>>> 
>>>   >>> ls = list(s)  
>>> 
>>> 
>>> 
>>> 
>>> >>> ls  
>>> 
>>> 
>>> 
>>>   ['0', '𐤀', '0', '0']  
>>> 
>>> 
>>> 
>>> 
>>> >>> s[0]
>>> 
>>> 
>>> 
>>>   '0'   
>>> 
>>> 
>>> 
>>> 
>>> >>> s[1]
>>> 
>>> 
>>> 
>>>   '𐤀'
```

--

___
Python tracker 
<https://bugs.python.org/issue45105>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

> That is using Python 3.9 in the xfce4-terminal. Which xterm are you using?

This was in the default gnome terminal that is pre-installed on Fedora 34 and 
on windows I directly opened the Python Terminal. I just installed 
xfce4-terminal on my Fedora 34 machine which has exactly the same behavior for 
me that I had in the gnome terminal.

> But regardless, I cannot replicate the behavior you show where list(s) is 
> different from indexing the characters one by one.

That is what surprised me the most. I just ran into this because this was 
somehow generated when fuzz testing my code using hypothesis (which uncovered 
an unrelated bug in my application). However I was quite confused by the 
character order when debugging it.

My original case was:
```
s1='00'
s2='9010𐤀000\x8dÀĀĀĀ222Ā'
parts = [s2[max(0, i) : min(len(s2), i+len(s1))] for i in range(-len(s1), 
len(s2))]
for part in parts:
print(list(part))
```
which produced
```
[]
['9']
['9', '0']
['9', '0', '1']
['9', '0', '1', '0']
['9', '0', '1', '0', '𐤀']
['9', '0', '1', '0', '𐤀', '0']
['0', '1', '0', '𐤀', '0', '0']
['1', '0', '𐤀', '0', '0', '0']
['0', '𐤀', '0', '0', '0', '\x8d']
['𐤀', '0', '0', '0', '\x8d', 'À']
['0', '0', '0', '\x8d', 'À', 'Ā']
['0', '0', '\x8d', 'À', 'Ā', 'Ā']
['0', '\x8d', 'À', 'Ā', 'Ā', 'Ā']
['\x8d', 'À', 'Ā', 'Ā', 'Ā', '2']
['À', 'Ā', 'Ā', 'Ā', '2', '2']
['Ā', 'Ā', 'Ā', '2', '2', '2']
['Ā', 'Ā', '2', '2', '2', 'Ā']
['Ā', '2', '2', '2', 'Ā']
['2', '2', '2', 'Ā']
['2', '2', 'Ā']
['2', 'Ā']
['ĀÀ]
```
which has a missing single quote:
  - ['ĀÀ]
changing direction of characters including commas:
  - ['1', '0', '𐤀', '0', '0', '0']
changing direction back:
  - ['𐤀', '0', '0', '0', '\x8d', 'À']

> AFAICT, there is no bug here. It's just confusing how Unicode right-to-left 
> characters in the repr() can modify how it's displayed in the 
> console/terminal.

Yes it appears the same confusion occurs in other applications like Firefox and 
VS Code.
Thanks to @eryksun and @steven.daprano for testing and telling me about 
bidirectional writing in Unicode (the more I know about Unicode, the more it 
scares me).

--
status: pending -> open

___
Python tracker 
<https://bugs.python.org/issue45105>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

As far as a I understood this is caused by the same reason:

```
>>> s = '123\U00010900456'
>>> s
'123𐤀456'
>>> list(s)
['1', '2', '3', '𐤀', '4', '5', '6']
# note that everything including the commas is mirrored until ] is reached
>>> s[3]
'𐤀'
>>> list(s)[3]
'𐤀'
>>> ls = list(s)
>>> ls[3] += 'a'
>>> ls
['1', '2', '3', '𐤀a', '4', '5', '6']
```

Which as far as I understood is the expected behavior when a right-to-left 
character is encountered.
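A quick way to convince yourself that only the rendering is affected is a set 
of checks that never prints the string:

```python
import unicodedata

s = '123\U00010900456'
# The code points are stored in logical order; only the terminal's
# bidirectional rendering reorders them visually.
assert len(s) == 7
assert s[3] == '\U00010900'
assert list(s)[3] == '\U00010900'
# \U00010900 (PHOENICIAN LETTER ALF) is a strong right-to-left character:
assert unicodedata.bidirectional('\U00010900') == 'R'
```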

--

___
Python tracker 
<https://bugs.python.org/issue45105>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address

2019-12-01 Thread Max Coplan


New submission from Max Coplan :

Trying to use the new Python 3 `IPv4Address` objects fails with the following 
error
```
File 
"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py",
 line 1270, in _ensure_resolved
info = _ipaddr_info(host, port, family, type, proto, *address[2:])
  File 
"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py",
 line 134, in _ipaddr_info
if '%' in host:
TypeError: argument of type 'IPv4Address' is not iterable
```
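A minimal sketch of the mismatch and the caller-side workaround (stringify the 
address before handing it to asyncio):

```python
import ipaddress

host = ipaddress.IPv4Address('127.0.0.1')

# The membership test asyncio performs internally fails on address objects:
try:
    '%' in host
except TypeError:
    pass  # "argument of type 'IPv4Address' is not iterable"

# Workaround until asyncio accepts address objects directly:
assert '%' not in str(host)
# e.g. loop.create_connection(factory, host=str(host), port=8080)
```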

--
components: asyncio
messages: 357697
nosy: Max Coplan, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: asyncio cannot handle Python3 IPv4Address or IPv6 Address
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue38952>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address

2019-12-01 Thread Max Coplan


Change by Max Coplan :


--
keywords: +patch
pull_requests: +16913
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/17434

___
Python tracker 
<https://bugs.python.org/issue38952>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38952] asyncio cannot handle Python3 IPv4Address

2019-12-01 Thread Max Coplan


Change by Max Coplan :


--
title: asyncio cannot handle Python3 IPv4Address or IPv6 Address -> asyncio 
cannot handle Python3 IPv4Address

___
Python tracker 
<https://bugs.python.org/issue38952>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38952] asyncio cannot handle Python3 IPv4Address

2019-12-02 Thread Max Coplan

Max Coplan  added the comment:

Well I’ve submitted a fix for it.  It isn’t perfect.  Well, while it doesn’t 
look perfect, it actually worked with everything I’ve thrown at it, and seems 
to be a very robust and sufficient fix.

--

___
Python tracker 
<https://bugs.python.org/issue38952>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30825] csv.Sniffer does not detect lineterminator

2020-02-03 Thread Max Vorobev


Change by Max Vorobev :


--
keywords: +patch
pull_requests: +17708
stage: test needed -> patch review
pull_request: https://github.com/python/cpython/pull/18336

___
Python tracker 
<https://bugs.python.org/issue30825>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42629] PyObject_Call not behaving as documented

2020-12-12 Thread Max Bachmann


New submission from Max Bachmann :

The documentation of PyObject_Call here: 
https://docs.python.org/3/c-api/call.html#c.PyObject_Call
states that it is the equivalent of the Python expression: callable(*args, 
**kwargs).

so I would expect:
PyObject* args = PyTuple_New(0);
PyObject* kwargs = PyDict_New();
PyObject_Call(funcObj, args, kwargs)

to behave similar to
args = []
kwargs = {}
func(*args, **kwargs)

However, this is not the case: when I edit keywds inside

PyObject* func(PyObject* /*self*/, PyObject* /*args*/, PyObject* keywds)
{
  PyObject* str = PyUnicode_FromString("test_str");
  PyDict_SetItemString(keywds, "test", str);
  Py_DECREF(str);  /* PyDict_SetItemString does not steal the reference */
  Py_RETURN_NONE;
}

it changes the original dictionary passed into PyObject_Call. I was wondering 
whether this means that:
a) it is not allowed to modify the keywds argument passed to a 
PyCFunctionWithKeywords
b) when calling PyObject_Call it is required to copy the kwargs for the call 
using PyDict_Copy

Neither the documentation of PyObject_Call nor the documentation of 
PyCFunctionWithKeywords 
(https://docs.python.org/3/c-api/structures.html#c.PyCFunctionWithKeywords) 
made this clear to me.
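For contrast, a pure-Python callee never sees the caller's dict, because the 
**kwargs unpacking builds a fresh one; this is the behavior the documented 
equivalence suggests:

```python
def func(**kwargs):
    # Mutating the received dict is harmless at the Python level:
    kwargs['test'] = 'test_str'

caller_kwargs = {}
func(**caller_kwargs)
assert caller_kwargs == {}  # the caller's dict is untouched
```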

--
components: C API
messages: 382927
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: PyObject_Call not behaving as documented
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42629>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43221] German Text Conversion Using Upper() and Lower()

2021-02-13 Thread Max Parry

New submission from Max Parry :

The German alphabet has four extra characters (ä, ö, ü and ß) when compared to 
the UK/USA alphabet.  Until 2017 the character ß was normally only lower case.  
Upper case ß was represented by SS.  In 2017 upper case ß was introduced, 
although SS is still often/usually used instead.  It is important to note that, 
as far as I can see, upper case ß and lower case ß are identical.

The upper() method converts upper or lower case ß to SS.  N.B. ä, ö and ü are 
handled correctly.  Lower() seems to work correctly.
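For reference, the behavior in question can be checked directly (the first 
assertion is the one under discussion):

```python
# Lowercase sharp s uppercases to 'SS' (per the Unicode SpecialCasing rules):
assert 'ß'.upper() == 'SS'
# It is unchanged by lower() and folds to 'ss' for caseless comparison:
assert 'ß'.lower() == 'ß'
assert 'ß'.casefold() == 'ss'
# The capital sharp s (U+1E9E, added to Unicode in 2008) lowercases to ß
# and is left unchanged by upper():
assert '\u1e9e'.lower() == 'ß'
assert '\u1e9e'.upper() == '\u1e9e'
```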

Please note that German is my second language and everything I say about the 
language, its history and its use might not be reliable.  Happy to be corrected.

--
components: Windows
messages: 386938
nosy: Strongbow, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: German Text Conversion Using Upper() and Lower()
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43221>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43377] _PyErr_Display should be available in the CPython-specific API

2021-03-03 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
nosy: +maxbelanger
nosy_count: 1.0 -> 2.0
pull_requests: +23495
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24719

___
Python tracker 
<https://bugs.python.org/issue43377>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7856] cannot decode from or encode to big5 \xf9\xd8

2021-03-09 Thread Max Bolingbroke

Max Bolingbroke  added the comment:

As of Python 3.7.9 this also affects \xf9\xd6 which should be \u7881 in 
Unicode. This character is the second character of 宏碁 which is the name of the 
Taiwanese electronics manufacturer Acer.

You can work around the issue using big5hkscs just like with the original 
\xf9\xd8 problem.

It looks like the F9D6–F9FE characters all come from the Big5-ETen extension 
(https://en.wikipedia.org/wiki/Big5#ETEN_extensions, 
https://moztw.org/docs/big5/table/eten.txt) which is so popular that it is a 
defacto standard. Big5-2003 (mentioned in a comment below) seems to be an 
extension of Big5-ETen. For what it's worth, whatwg includes these mappings in 
their own big5 reference tables: https://encoding.spec.whatwg.org/big5.html. 

Unfortunately Big5 is still in common use in Taiwan. It's pretty funny that 
Python fails to decode Big5 documents containing the name of one of Taiwan's 
largest multinationals :-)
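A minimal reproduction plus the workaround; the expected mapping F9D6 -> 
U+7881 is the one from the ETen tables referenced above:

```python
raw = b'\xf9\xd6'  # second character of 宏碁 in Big5-ETen

# The stock big5 codec rejects this ETen extension pair (as of the
# Python versions discussed here):
try:
    raw.decode('big5')
except UnicodeDecodeError:
    pass

# big5hkscs includes the ETen mappings and decodes it correctly:
assert raw.decode('big5hkscs') == '\u7881'
```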

--
nosy: +batterseapower

___
Python tracker 
<https://bugs.python.org/issue7856>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41100] Support macOS 11 and Apple Silicon Macs

2020-11-16 Thread Max Desiatov


Change by Max Desiatov :


--
nosy:  -MaxDesiatov

___
Python tracker 
<https://bugs.python.org/issue41100>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9592] Limitations in objects returned by multiprocessing Pool

2012-03-07 Thread Max Franks

Max Franks  added the comment:

Issue 3 is not related to the other 2. See this post 
http://bugs.python.org/issue5370. As haypo said, it has to do with unpickling 
objects. The post above gives a solution by using the __setstate__ function.

--
nosy: +eliquious

___
Python tracker 
<http://bugs.python.org/issue9592>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

Max Staff added the comment:

Yes I know about the errno. There would be two ways to resolve this:

One way would be to introduce a new exception class. That would be nice 
because it's almost impossible to reliably check the allowed filename length 
(except by trial and error), and I have quite a few functions where I would 
want the error to propagate further as long as it's not an ENAMETOOLONG.

The other way would be by introducing a new syntax feature ("except OSError as 
e if e.errno == errno.ENAMETOOLONG:") but I don't think that that approach is 
reasonable.
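For reference, the errno-checking pattern this is about looks like this (a 
sketch; the function name is illustrative):

```python
import errno

def save(path, data):
    try:
        with open(path, 'wb') as f:
            f.write(data)
    except OSError as e:
        if e.errno != errno.ENAMETOOLONG:
            raise  # propagate everything that isn't "File name too long"
        # fall through: handle/shorten the file name here
```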

--

___
Python tracker 
<http://bugs.python.org/issue30641>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

Max Staff added the comment:

...at least those are the only two ways that I can think of.

--

___
Python tracker 
<http://bugs.python.org/issue30641>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30685] Multiprocessing Send to Manager Fails for Large Payload

2017-06-16 Thread Max Ehrlich

New submission from Max Ehrlich:

On line 393 of multiprocessing/connection.py, the size of the payload to be 
sent is serialized as an integer. This fails for sending large payloads. It 
should probably be serialized as a long or better yet a long long.

--
components: Library (Lib)
messages: 296210
nosy: maxehr
priority: normal
severity: normal
status: open
title: Multiprocessing Send to Manager Fails for Large Payload
versions: Python 3.5

___
Python tracker 
<http://bugs.python.org/issue30685>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-06-30 Thread Max Rothman

New submission from Max Rothman:

For a function f with the signature f(foo=None), the following three calls are 
equivalent:

f(None)
f(foo=None)
f()

However, only the first two are equivalent in the eyes of 
unittest.mock.Mock.assert_called_with:

>>> with patch('__main__.f', autospec=True) as f_mock:
f_mock(foo=None)
f_mock.assert_called_with(None)

>>> with patch('__main__.f', autospec=True) as f_mock:
f_mock(None)
f_mock.assert_called_with()
AssertionError: Expected call: f()  Actual call: f(None)

This is definitely surprising to new users (it was surprising to me!) and 
unnecessarily couples tests to how a particular piece of code happens to call a 
function.
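A runnable version of the surprise, using create_autospec (assumed here to 
behave like patch(..., autospec=True) for call matching):

```python
from unittest.mock import create_autospec

def f(foo=None):
    pass

f_mock = create_autospec(f)
f_mock(None)

# Positional and keyword spellings of the same argument do match:
f_mock.assert_called_with(foo=None)

# ...but the declared default is not taken into account:
try:
    f_mock.assert_called_with()
except AssertionError:
    pass  # Expected call: f()  Actual call: f(None)
```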

--
components: Library (Lib)
messages: 297433
nosy: Max Rothman
priority: normal
severity: normal
status: open
title: unittest.mock.Mocks with specs aren't aware of default arguments
versions: Python 2.7, Python 3.6

___
Python tracker 
<http://bugs.python.org/issue30821>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-06-30 Thread Max Rothman

Max Rothman added the comment:

I'd be happy to look at submitting a patch for this, but it'd be helpful to be 
able to ask questions of someone more familiar with unittest.mock's code.

--

___
Python tracker 
<http://bugs.python.org/issue30821>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30825] csv.Sniffer does not detect lineterminator

2017-07-01 Thread Max Vorobev

New submission from Max Vorobev:

Line terminator defaults to '\r\n' while detecting dialect in csv.Sniffer
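A short sketch of the behavior (the sample and delimiter are illustrative):

```python
import csv

sample = "a;b;c\nd;e;f\n"
dialect = csv.Sniffer().sniff(sample)

# The delimiter is detected from the sample...
assert dialect.delimiter == ';'
# ...but the line terminator is never inspected and stays at the default:
assert dialect.lineterminator == '\r\n'
```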

--
components: Library (Lib)
messages: 297497
nosy: Max Vorobev
priority: normal
severity: normal
status: open
title: csv.Sniffer does not detect lineterminator
type: behavior
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue30825>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30825] csv.Sniffer does not detect lineterminator

2017-07-01 Thread Max Vorobev

Changes by Max Vorobev :


--
pull_requests: +2595

___
Python tracker 
<http://bugs.python.org/issue30825>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-07-12 Thread Max Rothman

Max Rothman added the comment:

> Generally the called with asserts can only be used to match the *actual 
> call*, and they don't determine "equivalence".

That's fair, but as unittest.mock stands now, it *does* check equivalence, but 
only partially, which is more confusing to users than either checking 
equivalence or not.

> I'm not convinced there's a massive use case - generally you want to make 
> asserts about what your code actually does - not just check if it does 
> something equivalent to your assert.

To me, making asserts about what your code actually does means not having tests 
fail because a function call switches to a set of equivalent but different 
arguments. As a developer, I care about the state in the parent and the state 
in the child, and I trust Python to work out the details in between. If Python 
treats two forms as equivalent, why shouldn't our tests?

--

___
Python tracker 
<http://bugs.python.org/issue30821>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-07-18 Thread Max Rothman

Max Rothman added the comment:

Hi, just wanted to ping this again and see if there was any movement.

--

___
Python tracker 
<http://bugs.python.org/issue30821>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Argparse improperly handles "-_"

2017-03-03 Thread Max Rothman

New submission from Max Rothman:

In the case detailed below, argparse.ArgumentParser improperly parses the 
argument string "-_":
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('first')
print(parser.parse_args(['-_']))
```

Expected behavior: prints Namespace(first='-_')
Actual behavior: prints usage message

The issue seems to be specific to the string "-_". Either character alone or 
both in the opposite order does not trigger the issue.
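For completeness, the documented escape hatch is a bare '--', which forces 
everything after it to be treated as positional:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('first')

# Without '--', '-_' is mistaken for an option and triggers the usage message.
ns = parser.parse_args(['--', '-_'])
assert ns.first == '-_'
```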

--
components: Library (Lib)
messages: 288929
nosy: Max Rothman
priority: normal
severity: normal
status: open
title: Argparse improperly handles "-_"
type: behavior
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Argparse improperly handles "-_"

2017-03-04 Thread Max Rothman

Max Rothman added the comment:

Martin: huh, I didn't notice that documentation. The error message definitely 
could be improved.

It still seems like an odd choice given that argparse knows about the expected 
spec, so it knows whether there are any options or not. Perhaps one could 
enable/disable this cautious behavior with a flag passed to ArgumentParser? It 
was rather surprising in my case, since I was parsing morse code and the 
arguments were random combinations of "-", "_", and "*", so it wasn't 
immediately obvious what the issue was.

--

___
Python tracker 
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Argparse improperly handles "-_"

2017-03-12 Thread Max Rothman

Max Rothman added the comment:

I think that makes sense, but there's still an open question: what should the 
correct way be to allow dashes to be present at the beginning of positional 
arguments?

--

___
Python tracker 
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

New submission from Max Staff:

There are different ways to catch exceptions of the type "OSError": by using 
"except OSError as e:" and then checking the errno, or by using "except 
FileNotFoundError as e:" or "except FileExistsError as e:" or whatever error 
one wants to catch. There's no such way for the above-mentioned error that 
occurs when a filename is too long for the filesystem/OS.

------
components: IO
messages: 295810
nosy: Max Staff
priority: normal
severity: normal
status: open
title: No way to specify "File name too long" error in except statement.
type: behavior
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<http://bugs.python.org/issue30641>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28627] [alpine] shutil.copytree fail to copy a direcotry with broken symlinks

2018-04-18 Thread Max Rees

Max Rees  added the comment:

Actually the symlinks don't need to be broken. It fails for any kind of symlink
on musl.

$ ls -l /tmp/symtest
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees  0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf

$ python3
>>> import shutil; shutil.copytree('/tmp/symtest', '/tmp/symtest2', 
>>> symlinks=True)
shutil.Error: [('/tmp/symtest/resolv.conf', '/tmp/symtest2/resolv.conf', "[Errno
95] Not supported: '/tmp/symtest2/resolv.conf'"), ('/tmp/symtest/empty',
'/tmp/symtest2/empty', "[Errno 95] Not supported: '/tmp/symtest2/empty'")]

$ ls -l /tmp/symtest2
total 0
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees  0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf

The implication of these bugs is that things like pip may fail if they call
shutil.copytree(..., symlinks=True) on a directory that contains symlinks(!)

Attached is a patch that works around the issue but does not address why chmod
is returning OSError instead of NotImplementedError.

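Until a fix lands, callers can filter the collected errors themselves. A 
workaround sketch (`copytree_tolerant` is a hypothetical helper; it assumes 
the failures surface as "[Errno 95] Not supported" strings inside the 
shutil.Error, as in the listing above):

```python
import shutil

def copytree_tolerant(src, dst):
    try:
        shutil.copytree(src, dst, symlinks=True)
    except shutil.Error as exc:
        # copytree collects (src, dst, why) tuples; drop the ones that are
        # only the chmod-on-symlink failure and re-raise anything else.
        remaining = [e for e in exc.args[0] if "Errno 95" not in e[2]]
        if remaining:
            raise shutil.Error(remaining)
```

On libcs where lchmod works (or is correctly reported as unsupported), the 
except branch is simply never taken and behavior is unchanged.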
--
keywords: +patch
nosy: +sroracle
Added file: https://bugs.python.org/file47540/musl-eopnotsupp.patch

___
Python tracker 
<https://bugs.python.org/issue28627>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-10-11 Thread Max Rothman

Max Rothman  added the comment:

Hi, I'd like to wrap this ticket up and get some kind of resolution, whether 
it's accepted or not. I'm new to the Python community, what's the right way to 
prompt a discussion about this sort of thing? Should I have taken it to one of 
the mailing lists?

--

___
Python tracker 
<https://bugs.python.org/issue30821>
___



[issue31903] `_scproxy` calls SystemConfiguration functions in a way that can cause deadlocks

2017-10-30 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +4148
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue31903>
___



[issue32280] Expose `_PyRuntime` through a section name

2017-12-11 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +4700
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue32280>
___



[issue32282] When using a Windows XP compatible toolset, `socketmodule.c` fails to build

2017-12-11 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +4702
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue32282>
___



[issue32285] In `unicodedata`, it should be possible to check a unistr's normal form without necessarily copying it

2017-12-11 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +4703
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue32285>
___



[issue35022] MagicMock should support `__fspath__`

2018-10-18 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9307
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35022>
___



[issue35025] Compiling `timemodule.c` can fail on macOS due to availability warnings

2018-10-18 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9308
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35025>
___



[issue35080] The tests for the `dis` module can be too rigid when changing opcodes

2018-10-26 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9469
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35080>
___



[issue35139] Statically linking pyexpat in Modules/Setup fails to compile on macOS

2018-11-01 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9599
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35139>
___



[issue35203] Windows Installer Ignores Launcher Installer Options Where The Python Launcher Is Already Present

2018-11-09 Thread Max Bowsher


Change by Max Bowsher :


--
nosy: +Max Bowsher

___
Python tracker 
<https://bugs.python.org/issue35203>
___



[issue18197] insufficient error checking causes crash on windows

2013-06-11 Thread Max DeLiso

New submission from Max DeLiso:

hi.

if you cross compile the mercurial native extensions against python 2.7.5 (x64) 
on 64 bit windows 7 and then try to clone something, it will crash. 

I believe the reason for this is that the C runtime functions in the Microsoft 
CRT will throw a Win32 exception if they are given invalid parameters, and 
since the return value of fileno() is not checked in Objects/fileobject.c, if a 
file handle is passed to fileno() and the result is not a valid file descriptor, 
that invalid descriptor will get passed to _fstat64i32, an invalid parameter 
exception will be raised, and the program will crash.

here's the function with the alleged bug:

static PyFileObject*
dircheck(PyFileObject* f)
{
#if defined(HAVE_FSTAT) && defined(S_IFDIR) && defined(EISDIR)
    struct stat buf;
    if (f->f_fp == NULL)
        return f;
    /* the next line is the problem: fileno()'s return value
       never gets checked */
    if (fstat(fileno(f->f_fp), &buf) == 0 &&
        S_ISDIR(buf.st_mode)) {
        char *msg = strerror(EISDIR);
        PyObject *exc = PyObject_CallFunction(PyExc_IOError, "(isO)",
                                              EISDIR, msg, f->f_name);
        PyErr_SetObject(PyExc_IOError, exc);
        Py_XDECREF(exc);
        return NULL;
    }
#endif
    return f;
}

here's the stack trace:

>   msvcr90.dll!_invalid_parameter()    Unknown
    msvcr90.dll!_fstat64i32()    Unknown
    python27.dll!dircheck(PyFileObject * f) Line 127    C
    python27.dll!fill_file_fields(PyFileObject * f, _iobuf * fp, _object * name, char * mode, int (_iobuf *) * close) Line 183    C
    python27.dll!PyFile_FromFile(_iobuf * fp, char * name, char * mode, int (_iobuf *) * close) Line 484    C

here's a dump summary:

Dump Summary

Process Name:   python.exe : c:\Python27\python.exe
Process Architecture:   x64
Exception Code: 0xC417
Exception Information:  
Heap Information:   Present

about the patch:

The attached patch fixes that behavior and doesn't break any test cases on 
Windows or Linux. It applies against the current trunk of CPython. The return 
value of fileno() should be checked for correctness anyway, even on *nix. The 
extra overhead is tiny (one comparison, a conditional jump, and a few extra 
bytes of stack space), but it catches some weird edge cases.

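The same defensive pattern, transposed to Python for illustration 
(`safe_fstat` is a hypothetical name; the real fix is the C-level check in 
dircheck() shown above):

```python
import os

def safe_fstat(fd):
    # Validate the descriptor before handing it to fstat(), instead of
    # letting the MSVC runtime abort with an invalid-parameter error.
    if fd < 0:
        raise ValueError("invalid file descriptor: %d" % fd)
    return os.fstat(fd)
```

The point is the same in both languages: reject the sentinel error value 
before it reaches a call that treats it as a fatal misuse of the API.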
here are the steps to reproduce:

download the python 2.7.5 installer for windows
download the mercurial 2.6.2 source release
build the native extensions with 64 bit microsoft compilers
try to hg clone any remote repo 
(it should crash)

here are some version strings:

Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on 
win32
Microsoft (R) C/C++ Optimizing Compiler Version 17.00.60315.1 for x64
mercurial 2.6.2

here are some links:

in particular, read the bits about the invalid parameter exception:

_fstat64i32: 
http://msdn.microsoft.com/en-US/library/221w8e43%28v=vs.80%29.aspx 

_fileno:
http://msdn.microsoft.com/en-US/library/zs6wbdhx%28v=vs.80%29.aspx

Please let me know if my patch needs work or if I missed something.
Thanks!

--
components: IO
files: fileobject_fix.patch
hgrepos: 199
keywords: patch
messages: 191012
nosy: maxdeliso
priority: normal
severity: normal
status: open
title: insufficient error checking causes crash on windows
type: crash
versions: Python 2.7
Added file: http://bugs.python.org/file30552/fileobject_fix.patch

___
Python tracker 
<http://bugs.python.org/issue18197>
___



[issue18197] insufficient error checking causes crash on windows

2013-06-11 Thread Max DeLiso

Changes by Max DeLiso :


--
hgrepos:  -199

___
Python tracker 
<http://bugs.python.org/issue18197>
___


