Min RK added the comment:
It appears that `connect_read_pipe` also doesn't accept pipes returned by
`os.pipe`. If that's the case, what _does_ `ProactorEventLoop.connect_read_pipe`
accept? I haven't been able to find any examples of `connect_read_pipe` that
work on Windows.
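For reference, the pattern that does work on Unix (with the selector event loop) looks roughly like this sketch; the raw fd from `os.pipe` has to be wrapped in a file object before being handed to `connect_read_pipe`:

```python
import asyncio
import os

async def read_from_pipe():
    loop = asyncio.get_running_loop()
    r_fd, w_fd = os.pipe()
    # connect_read_pipe expects a file-like object, not a raw fd
    r_file = os.fdopen(r_fd, 'rb')
    reader = asyncio.StreamReader()
    protocol = asyncio.StreamReaderProtocol(reader)
    transport, _ = await loop.connect_read_pipe(lambda: protocol, r_file)
    os.write(w_fd, b'hello\n')
    os.close(w_fd)
    data = await reader.readline()
    transport.close()
    return data

print(asyncio.run(read_from_pipe()))  # b'hello\n' on Unix
```

On Windows, ProactorEventLoop rejects this same object, which is the gap being asked about.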
Min RK added the comment:
Oops, I interpreted "not deprecated by oversight" as the opposite of what you
meant. Sorry! All clear now.
--
___
Python tracker
<https://bugs.python.o
Min RK added the comment:
Thank you! I think I have enough information to update.
> IMHO, asyncio.set_event_loop()...[is] not deprecated by oversight.
I'm curious, what is an appropriate use of `asyncio.set_event_loop()` if you
can never get the event loop with `get_event_loop()`?
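One pattern where `set_event_loop()` still seems meaningful is explicit loop management: the code that owns the loop installs it so that legacy callers of `get_event_loop()` can find it. A minimal sketch:

```python
import asyncio

async def work():
    return 42

# create and install a loop explicitly, run, then uninstall and close
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)  # visible to code that still calls get_event_loop()
try:
    result = loop.run_until_complete(work())
finally:
    asyncio.set_event_loop(None)
    loop.close()
print(result)  # 42
```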
Min RK added the comment:
Further digging reveals that `policy.get_event_loop()` is _not_ deprecated
while `asyncio.get_event_loop()` is. Is that intentional? Does that mean
switching our calls to `get_event_loop_policy().get_event_loop()` should
continue to work without deprecation warnings?
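A minimal check of the policy-level API (behavior as of 3.10/3.11; note the policy system itself has since been slated for deprecation in later releases):

```python
import asyncio

policy = asyncio.get_event_loop_policy()
loop = asyncio.new_event_loop()
policy.set_event_loop(loop)
# the policy method answers without going through the deprecated
# asyncio.get_event_loop() path
assert policy.get_event_loop() is loop
loop.close()
```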
Min RK added the comment:
The comments in this thread suggest that `set_event_loop` should also be
deprecated, but it hasn't been. It doesn't seem to have any use without
`get_event_loop()`.
I'm trying to understand the consequences of these changes for IPython, and
m
Min RK added the comment:
We just ran into this in Jupyter, where we removed an expensive pre-processing
step for data structures passed to json.dumps that had taken care of this:
https://github.com/jupyter/jupyter_client/pull/706
My expectation was that our `default` would be c
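For context, `json.dumps` only consults `default` for types it cannot serialize natively; a sketch of the kind of fallback involved (the types here are illustrative, not Jupyter's actual ones):

```python
import datetime
import json

def default(obj):
    # fallback for types json doesn't know natively (illustrative choices)
    if isinstance(obj, datetime.date):
        return obj.isoformat()
    if isinstance(obj, set):
        return sorted(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

doc = {"when": datetime.date(2021, 9, 30), "tags": {"b", "a"}}
print(json.dumps(doc, default=default))
# {"when": "2021-09-30", "tags": ["a", "b"]}
```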
Change by Min RK :
--
nosy: +minrk
nosy_count: 5.0 -> 6.0
pull_requests: +27016
pull_request: https://github.com/python/cpython/pull/28648
Min RK added the comment:
A hiccup to using uvloop is that it doesn't support Windows yet
(https://github.com/MagicStack/uvloop/issues/14), so it can't be used in the
affected environment.
I'm exploring this again for pyzmq / Jupyter, and currently investigating
relyi
Min RK added the comment:
In the A/B vote, I cast mine for B, for what it is worth, but it is not
strongly held.
From the IPython side, I don't view our particular issue as a major regression
for users. The only affected case for us is interactively typed string
literals.
Changes by RK-5wWm9h :
--
title: Wrong documentation for unicode and str comparison -> Wrong
documentation (Language Ref) for unicode and str comparison
New submission from RK-5wWm9h:
PROBLEM (IN BRIEF):
In the currently published 2.7.13 The Python Standard Library (Library
Reference manual) section 5.6 "Sequence Types"
(https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange)
New submission from RK-5wWm9h:
PROBLEM (IN BRIEF):
In the currently published 2.7.13 The Python Language Reference manual, section
5.9 "Comparisons"
(https://docs.python.org/2/reference/expressions.html#comparisons):
"If both are numbers, they are converted to a common type."
Min RK added the comment:
This affects IPython (specifically the traitlets component), which is what
prompted the report. We were able to push out a release of traitlets with a
workaround for the bug (4.3.1), but earlier versions of IPython / traitlets
will still be affected (all IPython >
New submission from Min RK:
HMAC digest methods call inner.digest() with no arguments, but new-in-3.6 shake
algorithms require a length argument.
possible solutions:
1. add optional length argument to HMAC.[hex]digest, and pass through to inner
hash object
2. set hmac.digest_size, and use
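The mismatch in a nutshell: SHAKE digests take a mandatory output length, while HMAC assumes a fixed `digest_size` on the inner hash. A sketch of the two interfaces:

```python
import hashlib
import hmac

# SHAKE is an extendable-output function: digest()/hexdigest() need a length
print(hashlib.shake_128(b"payload").hexdigest(16))  # 16 bytes -> 32 hex chars

# fixed-length algorithms expose digest_size, which HMAC relies on
mac = hmac.new(b"key", b"payload", "sha256")
print(mac.digest_size)  # 32
```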
Min RK added the comment:
I pulled just now and saw changes in dictobject.c, and just wanted to confirm
the memory growth bug is still in changeset 56294e03ad89 (I think I used the
right hash, this time).
Min RK added the comment:
> Ah, is the leak happen in 3.6b1?
The leak happens in 3.6b1 and master as of an hour ago (git:
3c06edfe9463f1cf81bc34b702f165ad71ff79b8, hg:r103797)
--
title: Memory leak in new 3.6 dictionary resize -> Unbounded memory growth
resizing split-table
Min RK added the comment:
> dictresize() is called for converting split table to combined table.
> How is it triggered many times?
every `self.__dict__.pop` triggers a resize. According to
https://www.python.org/dev/peps/pep-0412/#split-table-dictionaries
`obj.__dict__` is always a
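A sketch of the triggering pattern described above; on a fixed interpreter the dict's footprint stays bounded across repeated pop/reassign cycles:

```python
import sys

class C:
    def __init__(self):
        self.a = 1

obj = C()
sizes = []
for _ in range(5):
    # popping from __dict__ converts the shared split table to a combined one,
    # triggering a resize each iteration
    obj.__dict__.pop('a')
    obj.a = 1
    sizes.append(sys.getsizeof(obj.__dict__))
print(sizes)
```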
Min RK added the comment:
I can add the cpython_only decorator, but I'm not sure it is the right thing to
do. I would expect the code in the test to pass on any Python implementation,
which would suggest that it should not be cpython_only, right? If you still
think so, I'
Changes by Min RK :
--
title: Memory leak in dictionary resize -> Memory leak in new 3.6 dictionary
resize
Min RK added the comment:
This patch fixes the memory leak in split-dict resizing.
Each time dict_resize is called, it gets a new, larger size `> minused`. If
this is triggered many times, it will keep growing in size by a factor of two
each time, as the previous size is passed as minused
New submission from Min RK:
There is a memory leak in the new dictionary resizing in 3.6, which can cause
memory exhaustion in just a few iterations.
I don't fully understand the details of the bug, but it happens when resizing a
dict with a split table several times. The only way t
Changes by rk :
Removed file:
http://bugs.python.org/file43815/bug_configparser_default_section.py
rk added the comment:
Verified/tested with Python 2.7.9, 3.2.6, 3.3.6, 3.4.2, 3.5.1.
The bug exists in all versions, so I've added 3.2, 3.3, 3.4 again.
I've also attached an updated testcase, which now works in both Python 2 and
Python 3.
--
versions: +Python 3.2, Python 3.3, Python 3.4
rk added the comment:
(removed Python 2.7, since default_section was not supported there)
--
versions: -Python 2.7
New submission from rk:
Modifying "default_section" in the configparser at runtime does not behave as
described.
The documentation says about default_section:
When default_section is given, it specifies the name for the special section
holding default values for other se
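For reference, the documented behavior when `default_section` is set at construction time works as expected (a minimal sketch; the bug concerns changing the attribute afterwards):

```python
import configparser

# values in the default section are visible from every other section
cp = configparser.ConfigParser(default_section='COMMON')
cp.read_string("""
[COMMON]
retries = 3

[server]
host = example.com
""")
print(cp['server']['retries'])  # '3', inherited from COMMON
```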
Min RK added the comment:
update patch to use file context manager on temporary source file
it should apply cleanly on current default (778ccbe3cf74)
--
Added file:
http://bugs.python.org/file42399/0001-cleanup-tempfiles-in-has_function.patch
Min RK added the comment:
Absolutely, I'll try to do that tomorrow.
Changes by Min RK :
Added file: http://bugs.python.org/file41658/b.py
Changes by Min RK :
Added file: http://bugs.python.org/file41659/main.py
New submission from Min RK:
`PyImport_GetModuleDict: no module dictionary!` can be raised during interpreter
shutdown if a `__del__` method results in a warning. This only happens on
Python 3.5.
The prompting case is IPython 4.0.2 and traitlets 4.1.0. An IPython
ExtensionManager calls
Changes by Min RK :
Added file: http://bugs.python.org/file41657/a.py
New submission from Min RK:
One of the nits noted in http://bugs.python.org/issue717152, which introduced
ccompiler.has_function, was that it does not clean up after itself.
This patch uses a TemporaryDirectory context to ensure that the files created
during has_function are cleaned up
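The cleanup relies on the standard `tempfile.TemporaryDirectory` context manager, roughly:

```python
import os
import tempfile

# TemporaryDirectory removes everything it contains when the context exits
with tempfile.TemporaryDirectory() as tmpdir:
    # has_function-style scratch source file, created inside the managed dir
    src = os.path.join(tmpdir, 'conftest.c')
    with open(src, 'w') as f:
        f.write('int main(void) { return 0; }\n')
    assert os.path.exists(src)

print(os.path.exists(tmpdir))  # False: the scratch files are gone
```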
Min RK added the comment:
On a bit of further investigation, the NFS files have an xattr
`system.nfs4_acl`. This can be read, but attempting to write it fails with
EINVAL. Attempting to copy from NFS to non-NFS fails with ENOTSUP, which is
caught and ignored, but copying from NFS to NFS
Min RK added the comment:
> Just because a feature can be misused doesn't make it a bad feature.
That's fair. I'm just not aware of any uses of this feature that aren't
misuses, hence the patch.
> Perhaps you could submit a fix for this to the setuptools maintai
Min RK added the comment:
> Could you please post an example of where the feature is problematic ?
setuptools/easy_install is the major one, which effectively does `sys.path[:0]
= pth_contents`, breaking import priority. This has been known to result in
adding `/usr/lib/pythonX.Y/d
Min RK added the comment:
Thanks for the feedback, I thought it might be a long shot. I will go back to
removing the *use* of the feature everywhere I can find it, since it is so
problematic and rarely, if ever, desirable.
> it's an essential feature that has been documented for a v
New submission from Min RK:
.pth files currently allow execution of arbitrary code, triggered by lines
starting with `import`. This is a rarely understood and often misbehaving
feature. easy_install has used this feature to ensure that its packages are
highest priority (even higher than
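To illustrate the feature being flagged: `site` exec()s any `.pth` line starting with `import`. The same code path can be triggered with `site.addsitedir` (the file name and environment variable here are arbitrary demo values):

```python
import os
import site
import tempfile

# a .pth file whose line starts with "import " is executed, not added to sys.path
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'demo.pth'), 'w') as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'ran'\n")

site.addsitedir(tmpdir)  # processes .pth files the same way startup does
print(os.environ.get('PTH_DEMO'))  # 'ran'
```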
Min RK added the comment:
`--prefix` vs `--user` is the only conflict I have encountered, but based on
the way it works, it could just as easily happen with any of the various other
conflicting options in install (install_base, exec_prefix, etc.), though that
might not be very common.
There
New submission from Min RK:
Background:
Some Python distros (OS X, Debian, Homebrew, others) want the default
installation prefix for packages to differ from sys.prefix. OS X and Debian
accomplish this by patching distutils itself, with special cases like `if
sys.prefix == '/System/Li
Min RK added the comment:
Thanks for your help and patience. Closing as slightly unfortunate, but not
unintended behavior.
--
resolution: -> not a bug
status: open -> closed
Min RK added the comment:
Thanks for clarifying that there is indeed a reference cycle by way of the
module, I hadn't realized that.
The gc blocking behavior is exactly why I brought up the issue. The real code
where this causes a problem (rather than the toy example I attached) is in
New submission from Min RK:
Reference counts appear to be ignored at process cleanup, which allows
inter-dependent `__del__` methods to hang on exit. The problem does not seem to
occur for garbage collection of any other context (functions, etc.).
I have a case where one object must be
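A common workaround for `__del__` ordering problems at shutdown is `weakref.finalize`, which guarantees the callback runs at most once and no later than interpreter exit (a sketch, not the original code):

```python
import weakref

log = []

class Resource:
    def __init__(self, name):
        self.name = name
        # finalize avoids relying on __del__ ordering at interpreter exit
        self._finalizer = weakref.finalize(self, log.append, f"closed {name}")

r = Resource("db")
r._finalizer()          # explicit, deterministic cleanup; runs at most once
print(log)  # ['closed db']
```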
I've seen another bug submission similar to this. I am using 2.3.4 and
get almost the exact same error. I'm on a Linux box (kernel 2.6.9-5.ELsmp),
and the same code runs fine on other machines and previous versions of
Python. Here's the code snippet:
msg = MIMEMultipart()
COMMASPACE = ', '
msg['S
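For comparison, the modern shape of that snippet, with the recipient list joined on COMMASPACE (addresses are placeholders):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
COMMASPACE = ', '
msg['Subject'] = 'report'
msg['To'] = COMMASPACE.join(['a@example.com', 'b@example.com'])
msg.attach(MIMEText('body'))
print(msg['To'])  # 'a@example.com, b@example.com'
```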