tzickel added the comment:
It should be noted that this causes a big headache for users of requests /
urllib3 / etc., since those libraries log a warning on every multipart response
because of this bug, and it may send people off debugging perfectly valid code:
https://github.com/urllib3/urllib3
tzickel added the comment:
The documentation needs to scrub these methods as well; for example:
https://docs.python.org/3/library/asyncio-stream.html#asyncio.StreamReader.at_eof
still mentions them.
--
nosy: +tzickel
___
Python tracker
<ht
New submission from tzickel :
bpo-36051 added an optimization to release the GIL under certain conditions
when joining bytes, but it missed a critical path.
If the number of items being joined is less than or equal to NB_STATIC_BUFFERS
(10), then static_buffers will be used to hold the buffers.
https
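At the Python level the two paths are indistinguishable; which one is taken depends only on the item count. A quick sketch (assuming NB_STATIC_BUFFERS is 10, as the report says — the constant itself is internal to CPython):

```python
# <= NB_STATIC_BUFFERS items: the static_buffers path described above
few = [bytearray(b"x")] * 10
# > NB_STATIC_BUFFERS items: the heap-allocated buffer-array path
many = [bytearray(b"x")] * 11

# The observable result is of course identical either way
assert b"".join(few) == b"x" * 10
assert b"".join(many) == b"x" * 11
```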
Change by tzickel :
--
nosy: +bmerry, inada.naoki
Python tracker <https://bugs.python.org/issue39974>
tzickel added the comment:
Also, in line:
https://github.com/python/cpython/blob/d07d9f4c43bc85a77021bcc7d77643f8ebb605cf/Objects/stringlib/join.h#L85
perhaps add a check that the backing object is really mutable?
(Py_buffer.readonly
tzickel added the comment:
Also, semi-related (dunno where to discuss it): would a good .join()
optimization be to add an optional length parameter, like .join(iterable,
length=10), so that when running in that code path it would skip all the calls
to PySequence_Fast (which converts non-list
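A rough Python-level sketch of the proposed signature — `join_with_length` and its `length` parameter are hypothetical, not an existing API. In C, a trusted item count would let the implementation size the output and buffer array up front without first materializing the iterable via PySequence_Fast:

```python
def join_with_length(sep, iterable, length):
    """Hypothetical sketch of the proposal: the caller promises how many
    items the iterable yields, so a C implementation could preallocate
    instead of converting to a sequence first. At the Python level we can
    only emulate the semantics."""
    items = list(iterable)          # the C version would avoid this step
    assert len(items) == length     # the caller's promise
    return sep.join(items)

assert join_with_length(b"-", iter([b"a", b"b"]), 2) == b"a-b"
```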
tzickel added the comment:
My mistake...
--
resolution: -> not a bug
Python tracker <https://bugs.python.org/issue39974>
tzickel added the comment:
Regarding getting the buffer and releasing the GIL, if it's wrong, why not fix
other places in the code that do it, like:
https://github.com/python/cpython/blob/611836a69a7a98bb106b4d315ed76a1e17266f4f/Modules/posixmodule.c#L9619
The GIL is released, the sy
New submission from tzickel :
I have code that tries to be smart and prepare data to be chunked efficiently
before sending, so I was happy to read about:
https://docs.python.org/3/library/asyncio-protocol.html#asyncio.WriteTransport.writelines
Only to see that it simply does:
self.write(b
tzickel added the comment:
BTW, if desired, a much simpler PR can be made, where writelines simply calls
sendmsg on the input if no buffer exists, and only otherwise concatenates and
acts like the current code base.
--
Python tracker
<ht
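Assuming direct access to the underlying socket, the alternative described above might be sketched roughly like this (`writelines_sketch` and `pending` are illustrative names, not asyncio API):

```python
import socket

def writelines_sketch(sock, chunks, pending=b""):
    """Sketch of the simpler alternative: if nothing is buffered, hand the
    chunk list to sendmsg() for a scatter-gather send with no intermediate
    copy; otherwise fall back to concatenating, as the current code does."""
    if not pending:
        return sock.sendmsg(chunks)          # no b''.join() copy
    return sock.send(pending + b"".join(chunks))

# usage sketch over a local socket pair
a, b = socket.socketpair()
writelines_sketch(a, [b"he", b"llo"])
assert b.recv(5) == b"hello"
```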
New submission from tzickel :
os.writev and socket.sendmsg accept an iterable, but the return value is the
number of bytes sent. That is not helpful, as the user has to write manual code
to figure out which part of the iterable was not sent.
I propose to make a version of the functions where
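The manual bookkeeping such a user currently has to write might look roughly like this (a Unix-only sketch using os.writev; `writev_all` is an illustrative name):

```python
import os

def writev_all(fd, buffers):
    """Retry os.writev until every buffer is fully written, trimming the
    buffer list by the byte count returned from each call -- exactly the
    bookkeeping the report says callers must reinvent today."""
    bufs = [memoryview(b) for b in buffers]
    while bufs:
        n = os.writev(fd, bufs)
        # drop buffers that were sent completely
        while bufs and n >= len(bufs[0]):
            n -= len(bufs[0])
            bufs.pop(0)
        # trim a partially sent buffer
        if bufs and n:
            bufs[0] = bufs[0][n:]
```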
New submission from tzickel :
I converted some code from Python to the C-API and was surprised that the code
stopped working.
Basically, the "c" parsing option allows 1-char bytes or bytearray inputs
and converts them to a C char.
But just as indexing a bytes object returns an int,
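The Python-level mismatch behind the report can be seen directly:

```python
# A length-1 bytes or bytearray is what the C-level "c" format converter
# accepts:
ch = b"A"
assert len(ch) == 1

# But indexing bytes in Python 3 yields an int, not a length-1 object,
# which is the mismatch described above:
assert b"ABC"[0] == 65
assert b"ABC"[0:1] == b"A"         # slicing preserves the bytes type
assert bytes([b"ABC"[0]]) == b"A"  # converting the int back
```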
New submission from tzickel :
I am writing this as a bug, as I have an object which implements the buffer
protocol but not __len__.
SSL's recv_into seems to require the buffer object to implement __len__,
unlike socket's recv_into, which uses the buffer protocol l
tzickel added the comment:
One should be careful with this modification because of the Windows definition
of processor groups.
For example, if multi-threaded code thinks that by reading the value of the new
os.cpu_count() it can use all the cores returned, by default it cannot, as in
windows
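A hedged illustration of the distinction being drawn here — os.sched_getaffinity is unavailable on Windows and macOS, which is part of the point:

```python
import os

# All logical CPUs the OS knows about
logical = os.cpu_count()

try:
    # CPUs the current process may actually run on. Not available on
    # Windows (where, by default, a process is bound to one processor
    # group) or macOS.
    usable = len(os.sched_getaffinity(0))
except AttributeError:
    usable = logical  # no portable answer on these platforms

assert 1 <= usable <= logical
```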
tzickel added the comment:
A. It would be nice to add a test that tests this.
B. Now that Pool is cleaning up properly, any of its functions which return
another object (like imap's IMapIterator) need to hold a reference to the Pool,
so it won't get cleaned up b
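The situation in B can be sketched with a ThreadPool; this only works if the returned iterator keeps the pool alive, which is exactly what the comment argues for:

```python
from multiprocessing.pool import ThreadPool

def squares(n):
    pool = ThreadPool(2)
    # 'pool' goes out of scope the moment this function returns, so the
    # returned IMapIterator must hold a reference to it, or the results
    # could never be consumed safely.
    return pool.imap(lambda x: x * x, range(n))

assert list(squares(4)) == [0, 1, 4, 9]
```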
tzickel added the comment:
Here is something quick I did to check if it works (it works), but I'm not
fluent in multiprocessing code, so if I'm missing something or doing something
wrong, feel free to tell me:
https://github.com/tzickel/cpython/commit/ec63a43706f3bf615ab7ed30fb0956
tzickel added the comment:
It's important to note that before those PRs, that code would leak the Pool
instance until the process ends (once per call).
https://github.com/python/cpython/compare/master...tzickel:fix34172
Is my proposed fix (till I get it to
tzickel added the comment:
I don't mind. I think my code is ready for review, but I'm not versed in this,
so if you think you have something better, feel free to open a PR, or tell me
if I should submit mine and you can comment on it:
https://github.com/python/cpython/compare/m
tzickel added the comment:
The previous posts here touch on all these subjects:
A. The documentation explicitly says: "When the pool object is garbage
collected terminate() will be called immediately." (This held until a code
refactor 9 years ago introduced this bug.)
B. Large amount o
tzickel added the comment:
Reverting the code will cause another class of problems, like the ones that
made me fix it. Programs written like the example Pablo gave (and ones I've
seen) will quietly leak child processes, file descriptors (for the pipes) and
memory to a varying degree
tzickel added the comment:
https://bugs.python.org/issue35267
--
Python tracker <https://bugs.python.org/issue35378>
tzickel added the comment:
+1
--
Python tracker <https://bugs.python.org/issue35378>
tzickel added the comment:
OK, this issue has been biting me a few more times in production, so for now
I've added the environment variable PYTHONDONTWRITEBYTECODE, which resolves it
(but it's a hack). I'm sure I am not the only one with it (recall that this is
happening in
tzickel added the comment:
Ignore the hash appended / linked at the start of each shell command (it's
output from docker, not related to Python commits).
BTW, I forgot to mention: of course, when doing the fault injection on the .py
files, the error is bad as well; it should be I/O
tzickel added the comment:
Added a script to check if the bug exists (provided you have an updated strace,
4.15 or above).
Without patch:
# ./import_io_check.sh
strace: Requested path 'tmp.py' resolved into '/root/tmp.py'
read(3, 0x55fc3a71cc50, 4096) = -1
New submission from tzickel :
In multiprocessing.Pool documentation it's written "When the pool object is
garbage collected terminate() will be called immediately.":
https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing.pool.Pool.terminate
A. This does not h
tzickel added the comment:
>>> from multiprocessing import Pool
>>> import gc
>>> a = Pool(10)
>>> del a
>>> gc.collect()
0
>>>
After this, there are still left behind Process (Pool) or Dummy (ThreadPool)
and big _cache data (If you did s
tzickel added the comment:
But alas that does not work...
--
nosy: +davin, pitrou
Python tracker <https://bugs.python.org/issue34172>
tzickel added the comment:
What other object in the standard lib leaks resources when deleted in CPython?
Even that documentation says the garbage collector will eventually destroy it,
just like here... I think there is an implementation bug
tzickel added the comment:
I think I've found the code bug causing the leak:
https://github.com/python/cpython/blob/caa331d492acc67d8f4edd16542cebfabbbe1e79/Lib/multiprocessing/pool.py#L180
There is a circular reference between the Pool object, and the
self._worker_handler Thread o
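A minimal reproduction of that kind of cycle — `PoolLike` is a stand-in, not the real Pool:

```python
import gc
import threading
import weakref

class PoolLike:
    """Stand-in for the cycle described above: the object holds a Thread
    whose target is a bound method, and the bound method refers back to
    the object."""
    def __init__(self):
        self._worker_handler = threading.Thread(target=self._handle_workers)

    def _handle_workers(self):
        pass

gc.disable()                 # keep the demonstration deterministic
p = PoolLike()
alive = weakref.ref(p)
del p
assert alive() is not None   # plain refcounting cannot free the cycle
gc.enable()
gc.collect()
assert alive() is None       # only the cyclic collector reclaims it
```

This is why a `__del__`-style "terminate on garbage collection" behaviour never triggers promptly: the object is only reachable through a cycle, so it waits for a cyclic GC pass.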
Change by tzickel :
--
pull_requests: +7971
stage: -> patch review
Python tracker <https://bugs.python.org/issue25083>
Change by tzickel :
--
keywords: +patch
pull_requests: +7972
stage: -> patch review
Python tracker <https://bugs.python.org/issue34172>
New submission from tzickel :
When compiling on ubuntu 18.04 the 2.7 branch, I get this warning:
gcc -pthread -c -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
-Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE
-DPYTHONPATH='":plat-linux2:lib-
tzickel added the comment:
Changing Py_FatalError prototype to add: __attribute__((noreturn)) also stops
the warning.
--
Python tracker
<https://bugs.python.org/issue34
tzickel added the comment:
It actually makes tons of sense that while the thread is running, the object
representing it is alive. After the thread finishes its work, the object dies.
>>> import time, threading, weakref, gc
>>> t = threading.Thread(target=time.sleep,
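The truncated session above can be reproduced roughly as follows (timing-dependent, hence the generous sleep):

```python
import gc
import threading
import time
import weakref

t = threading.Thread(target=time.sleep, args=(0.5,))
t.start()
ref = weakref.ref(t)
del t
gc.collect()

still = ref()
assert still is not None   # the threading machinery references a running thread

still.join()               # wait for the thread to finish
del still                  # drop our temporary strong reference
gc.collect()
assert ref() is None       # nothing keeps the finished thread object alive
```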
Change by tzickel :
--
pull_requests: +9072
Python tracker <https://bugs.python.org/issue34172>
tzickel added the comment:
It's ok, you only did it twice :) I've submitted a manual 2.7 fix on GH.
--
Python tracker
<https://bugs.python.org/is
New submission from tzickel :
https://github.com/requests/requests/issues/4553#issuecomment-431514753
It was fixed in Python 3 by using a weakref, but not backported to Python 2.
Also, might it be nice to somehow run that leak test in more places to detect
such issues?
--
components
Change by tzickel :
--
keywords: +patch
pull_requests: +9344
stage: -> patch review
Python tracker <https://bugs.python.org/issue35030>
tzickel added the comment:
You can see the testing code here:
https://github.com/numpy/numpy/blob/eb40e161e2e593762da9c77858343e3720351ce7/numpy/testing/_private/utils.py#L2199
It calls gc.collect at the end and only throws this error if it returns a
non-zero value (after
tzickel added the comment:
I see, so basically this would be a problem only if the root object had a
__del__ method, and then the GC wouldn't reclaim it?
--
Python tracker
<https://bugs.python.org/is
tzickel added the comment:
Is this commit interesting? It has fewer lines, is simpler, makes no cycles to
collect, and it seems, in my limited benchmark, faster than the current
implementation.
https://github.com/tzickel/cpython/commit/7e8b70b67cd1b817182be4dd2285bd136e6b156d
tzickel added the comment:
Sorry, ignore it. Closed the PR as well.
--
Python tracker <https://bugs.python.org/issue35030>
Change by tzickel :
--
pull_requests: +9540
Python tracker <https://bugs.python.org/issue3243>
New submission from tzickel :
Sometimes you want to do something based on whether the item existed before
removal. So instead of checking if it exists, then removing and doing
something, it would be nice to make the function return True or False based on
whether the element existed
tzickel added the comment:
This patch was opened for 2.7 but never applied there?
https://github.com/python/cpython/pull/10226
This causes a bug with the requests HTTP library (and others, as well as
httplib) when you want to send an iterable object as POST data (in a
non-chunked way), it
tzickel added the comment:
I would think that .discard is the equivalent of dict's .pop (instead of
wasting time once checking and once removing; also, the data in a set is the
data, there is no value to check).
Even the standard lib has lots of uses of dict.pop(key, None) to not
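A sketch of the proposed semantics as a plain helper (not the real set API, where discard returns None):

```python
def discard(s, item):
    """Remove 'item' from set 's' and report whether it was present,
    mirroring the dict.pop(key, None) idiom mentioned above -- one
    operation instead of a membership check followed by a removal."""
    try:
        s.remove(item)
        return True
    except KeyError:
        return False

s = {1, 2}
assert discard(s, 1) is True    # was present, now removed
assert discard(s, 1) is False   # already gone
assert s == {2}
```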
New submission from tzickel :
There was a TODO in the code about this:
https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Modules/_io/iobase.c#L909
--
components: IO
messages: 329629
nosy: tzickel
priority: normal
severity: normal
status: open
title: Use
Change by tzickel :
--
keywords: +patch
pull_requests: +9726
stage: -> patch review
Python tracker <https://bugs.python.org/issue35210>
Change by tzickel :
--
nosy: +benjamin.peterson, stutzbach
Python tracker <https://bugs.python.org/issue35210>
tzickel added the comment:
How is that different from the situation today? The bytearray passed to
readinto() is deleted before the function ends.
This revision simply changes 2 mallocs and a memcpy into 1 malloc and a
potential realloc
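The allocation pattern under discussion, sketched at the Python level (the C patch works on raw buffers, but the shape is the same; `read_exact` is an illustrative name):

```python
import io

def read_exact(f, n):
    """One bytearray, one readinto(), and a shrink on short reads --
    instead of read() building a bytes object that is then copied into a
    caller-owned buffer (two allocations plus a memcpy)."""
    buf = bytearray(n)
    got = f.readinto(buf)
    del buf[got:]        # the potential realloc down, as described
    return buf

assert read_exact(io.BytesIO(b"abcdef"), 4) == b"abcd"
assert read_exact(io.BytesIO(b"ab"), 4) == b"ab"    # short read shrinks
```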
tzickel added the comment:
I think that if someone tries that, this code will raise an exception at the
resize part (since the reference count will be higher than one); a check can be
added to fall back to the previous behaviour in that case. If it's a required
check, I can a
tzickel added the comment:
Ahh, very interesting discussion. BTW, how is this code different from
https://github.com/python/cpython/blame/50ff02b43145f33f8e28ffbfcc6a9d15c4749a64/Modules/_io/bufferedio.c
which does exactly the same thing? (i.e. the memoryview can leak there as
well
New submission from tzickel:
In Windows, Python's os.environ currently handles case sensitivity differently
than the OS. While it's true that the OS is case insensitive, it does preserve
the case that you first set it with.
For example:
C:\Users\user>set aSD=Blah
C:\Users\use
tzickel added the comment:
My issue is that somebody wants to pass a few dict-like environment variables
as prefix_key=value, but wants to preserve the case of the key for usage in
Python, so the .keys() space needs to be enumerated.
A workaround for this issue can be importing nt and
tzickel added the comment:
Steve, I've checked in Python 3.5.2, and os.environ.keys() still uppercases
everything when scanning (for my use case). Has it changed since then?
--
Python tracker
<http://bugs.python.org/is
tzickel added the comment:
any chance for 2.6.12 ? 4 line patch.
--
Python tracker <http://bugs.python.org/issue25083>
tzickel added the comment:
Sorry Brett of course I meant the upcoming 2.7.12
--
Python tracker <http://bugs.python.org/issue25083>
New submission from tzickel:
Python 2 has a wrong artificial limit on the amount of memory that can be
allocated in ctypes via sequence repeating (i.e. using create_string_buffer or
c_char * ).
The problem is practical on 64-bit Windows, when running 64-bit Python, since
on that platform the
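For reference, the two spellings mentioned can be exercised with small sizes; the reported problem only appears with allocations beyond the artificial cap on 64-bit builds:

```python
import ctypes

# Sequence repeating: build an array type of a requested size
Buf = ctypes.c_char * 16
raw = Buf()
assert ctypes.sizeof(raw) == 16

# create_string_buffer goes through the same repetition machinery
buf = ctypes.create_string_buffer(16)
assert ctypes.sizeof(buf) == 16
```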
New submission from tzickel:
I had a non-reproducible issue occur a few times in which Python 2.7.9 would
produce .pyc files with empty code objects on a network drive under Windows.
The .pyc might have been created due to intermittent network errors that are
hard to reproduce reliably. The
tzickel added the comment:
You are not looking at the correct code; the function you are pointing to,
check_compiled_module, is run to check the existing .pyc (it is a good question
why the .pyc is overwritten, but that is a secondary issue, which I cannot
reproduce on demand, as I've said
Changes by tzickel :
--
nosy: +brett.cannon, meador.inge
Python tracker <http://bugs.python.org/issue25083>
tzickel added the comment:
As for the "example" .pyc, just create an empty 0-byte .py file and compile it;
that is the same .pyc that is created on my system (except in my case the .py
is not empty).
Just so people don't have to trace the code like I did, here is the traceback
tzickel added the comment:
Not sure why nobody has responded yet, but I have gone ahead and made a patch
for the problem against 2.7 HEAD. It would be great if someone with more
understanding of Python's source could say whether this is the optimal place to
do the ferror test. I am able to see that
tzickel added the comment:
Although I haven't reviewed the Python 3.5 code, I've put a breakpoint on calls
to "ferror" in the debugger, and it seems that Python 3 does not check the file
status on import either...
--
nosy:
tzickel added the comment:
TL;DR:
Python 2 forgot to do I/O error checking when reading .py files from disk. In
some rare situations this can bite Python hard and cause it to bork .pyc
files.
I checked Python 3; it checks the I/O in a different / better way.
The next Python 2.7 is out in 1.5
tzickel added the comment:
1. You are correct, the issue I am talking about is in parsing source files
(although, because Python caches them as .pyc, it's a worse situation).
2. The example you give is EINTR handling (which is mostly about handling I/O
operations interrupted by signals and retrying
New submission from tzickel:
In Windows, there is a mechanism called SEH that allows C/C++ programs to catch
OS exceptions (such as divide by zero, page faults, etc.).
Python's ctypes module for some reason forces the user to wrap all ctypes FFI
calls with a special SEH wrapper that con
tzickel added the comment:
Meador Inge, any other questions regarding the issue? I can't believe 2.7.11 is
coming out soon and nobody is taking this issue seriously enough...
--
Python tracker
<http://bugs.python.org/is
New submission from tzickel:
A few issues regarding threads:
A. (Python 2 & 3) The documentation (https://docs.python.org/3/c-api/init.html)
about initializing the GIL/Threading system does not specify that calling
PyEval_InitThreads actually binds the calling thread as the main_thread in
Changes by tzickel :
--
nosy: +pitrou
Python tracker <http://bugs.python.org/issue26003>
tzickel added the comment:
I think that the documentation regarding PyGILState_Ensure and
PyEval_InitThreads should be clarified, as written up in issue #26003
--
nosy: +tzickel
Python tracker
<http://bugs.python.org/issue19
Changes by tzickel :
--
nosy: +serhiy.storchaka
Python tracker <http://bugs.python.org/issue25083>
tzickel added the comment:
Just encountered this issue as well.
It's not related to newlines, but to not supporting HTTP persistent
connections (wsgi.input is the socket's I/O directly, and if the client uses a
persistent connection, then the .read() will block forever).
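A common mitigation, sketched below with names of my own choosing: bounding the read by CONTENT_LENGTH avoids blocking forever on a kept-alive socket:

```python
import io

def read_body(environ):
    """Never call wsgi.input.read() with no size on a persistent
    connection -- bound it by CONTENT_LENGTH instead, so the server does
    not block waiting for the client to close the socket."""
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        length = 0
    return environ["wsgi.input"].read(length)

# usage sketch with an in-memory stand-in for the socket
environ = {"CONTENT_LENGTH": "5", "wsgi.input": io.BytesIO(b"hello, more")}
assert read_body(environ) == b"hello"
```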