Grzegorz Grzywacz added the comment:
No, this PR originally fixes a similar but different issue. However, it also fixes issue29406 as a side effect.
--
Python tracker
<https://bugs.python.org/issue30
Grzegorz Grzywacz added the comment:
The tests are reusing finished futures. The `_yield_and_decref` function does not clear
waiters in finished futures.
In the initial merge I proposed clearing the waiters, but after review we decided it
should be removed.
I am confused now: should we change the tests or
Grzegorz Grzywacz added the comment:
This has already been reported and a patch was proposed here: #30006
--
nosy: +grzgrzgrz3
Python tracker
<http://bugs.python.org/issue30
Grzegorz Grzywacz added the comment:
The existing mock implementation already has that feature: Mock attributes can be
limited with the `spec` argument.
>>> inner_m = Mock(spec=["method2"], **{"method2.return_value": 1})
>>> m = Mock(spec=["me
Grzegorz Grzywacz added the comment:
No one has responded yet, so maybe this is unclear. I will clarify what is going on,
why I made this change, what we gain from it, and why this is not an ideal
solution.
I will focus on the SSL-layer shutdown, since that is what this issue is about.
We have a connection asyncio <->
Grzegorz Grzywacz added the comment:
This is not a problem with madis-data.ncep.noaa.gov not doing the SSL shutdown; this
is a problem with asyncio not doing it.
The patch from issue #30698 fixes this too.
--
nosy: +grzgrzgrz3
Python tracker
<h
Changes by Grzegorz Grzywacz :
--
pull_requests: +2320
Python tracker
<http://bugs.python.org/issue30698>
New submission from Grzegorz Grzywacz:
On shutdown, asyncio does not send the shutdown confirmation (the TLS close_notify)
to the other side. After doing unwrap, _SSLPipe calls the shutdown callback, where the
transport is closed, so the remaining ssldata is never sent.
--
components: asyncio
messages: 296295
nosy: grzgrzgrz3
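A rough sketch of the intended shutdown order, using ssl.MemoryBIO/SSLObject the way
_SSLPipe does internally (the transport object and the error handling here are
placeholders, not the actual asyncio code):

import ssl

# Sketch only: unwrap() puts our close_notify record into the outgoing BIO; it
# must be written to the transport *before* close(), otherwise the peer never
# receives the shutdown confirmation.
def shutdown_ssl(sslobj, outgoing, transport):
    try:
        sslobj.unwrap()             # initiate the TLS shutdown
    except ssl.SSLWantReadError:
        pass                        # peer's close_notify not received yet
    pending = outgoing.read()       # bytes produced by unwrap()
    if pending:
        transport.write(pending)    # flush the confirmation first...
    transport.close()               # ...then close the transport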
Grzegorz Grzywacz added the comment:
Of course, it should be `if not`:
diff --git a/Lib/multiprocessing/queues.py b/Lib/multiprocessing/queues.py
index dda03dd..514f991 100644
--- a/Lib/multiprocessing/queues.py
+++ b/Lib/multiprocessing/queues.py
@@ -101,7 +101,7 @@ class Queue(object
Grzegorz Grzywacz added the comment:
It looks like the buildbot is too slow for timeout=0.1.
I am guessing `0.1` is too low because we have the wrong condition in Queue.get.
It should be:
diff --git a/Lib/multiprocessing/queues.py b/Lib/multiprocessing/queues.py
index dda03dd..42e9884 100644
--- a/Lib
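Since both diffs above are truncated, here is only a generic illustration of the
point (not the actual CPython patch): a timed get() computes the remaining time from
a deadline and must raise Empty only when poll() reports that nothing arrived in that
window, hence the `if not`:

import time
from queue import Empty

# Illustrative pattern: without the `not`, Empty would be raised exactly when
# data *is* available, so short timeouts like 0.1 s fail spuriously.
def timed_get(poll, recv, timeout):
    deadline = time.monotonic() + timeout
    remaining = deadline - time.monotonic()
    if not poll(remaining):         # nothing readable within the timeout
        raise Empty
    return recv()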
Changes by Grzegorz Grzywacz :
--
pull_requests: +1947
Python tracker
<http://bugs.python.org/issue30514>
Grzegorz Grzywacz added the comment:
./Lib/test/test_poplib.py
sub-issue issue30514
(This corrects the issue number given in my previous comment.)
--
Python tracker
<http://bugs.python.org/issue28
Grzegorz Grzywacz added the comment:
./Lib/test/test_poplib.py
sub-issue issue28533
--
___
Python tracker
<http://bugs.python.org/issue28533>
New submission from Grzegorz Grzywacz:
sub-issue of issue28533
--
components: Tests
messages: 294770
nosy: grzgrzgrz3
priority: normal
severity: normal
status: open
title: test_poplib replace asyncore
versions: Python 3.6, Python 3.7
Grzegorz Grzywacz added the comment:
I would like to work on this issue.
I think it's a good idea to split this task into a few parts/PRs.
Let me start with ./Lib/test/test_poplib.py.
What about rewriting the POP3 server stub using asyncio? I think requests could be
handled synchronously,
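A rough sketch of what such an asyncio-based stub could look like (the class name and
the command handling here are illustrative, not the actual test code):

import asyncio

# Minimal POP3 server stub on top of asyncio.start_server; each command is
# answered synchronously inside the connection handler.
class DummyPOP3Server:
    def __init__(self, host="127.0.0.1", port=0):
        self.host = host
        self.port = port
        self.server = None

    async def _handle(self, reader, writer):
        writer.write(b"+OK dummy pop3 server ready\r\n")
        while True:
            line = await reader.readline()
            if not line:
                break
            cmd = line.strip().upper()
            if cmd == b"QUIT":
                writer.write(b"+OK bye\r\n")
                await writer.drain()
                break
            writer.write(b"+OK\r\n")        # accept everything else
            await writer.drain()
        writer.close()

    async def start(self):
        self.server = await asyncio.start_server(self._handle, self.host, self.port)
        self.port = self.server.sockets[0].getsockname()[1]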
Changes by Grzegorz Grzywacz :
--
pull_requests: +1778
Python tracker
<http://bugs.python.org/issue30414>
New submission from Grzegorz Grzywacz:
multiprocessing.Queue runs a background feeder thread. The feeder serializes
buffered data and sends it to the pipe.
The issue is with exception handling: the feeder catches all exceptions, but outside
of its main loop, so after an exception is handled the feeder does not go back into
the loop and stops sending data.
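A minimal reproduction sketch of that behaviour (the class name is only illustrative):
one object that fails to serialize kills the feeder, and later, perfectly picklable
items never reach the reader.

import multiprocessing
import queue

# An object whose serialization fails inside the background feeder thread.
class Unpicklable:
    def __reduce__(self):
        raise RuntimeError("cannot pickle me")

if __name__ == "__main__":
    q = multiprocessing.Queue()
    q.put(Unpicklable())        # the feeder raises while serializing this
    q.put("hello")              # with the bug, this item is silently lost
    try:
        print(q.get(timeout=1))
    except queue.Empty:
        print("feeder died; 'hello' never arrived")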
Grzegorz Grzywacz added the comment:
I think this does not solve the issue yet. There is still a possibility that
different tests/test runners spawn threads and 'fool' the test case. I think we
should avoid relying on the `thread._count` value wherever possible.
For the master branch `threa
Changes by Grzegorz Grzywacz :
--
pull_requests: +1676
Python tracker
<http://bugs.python.org/issue30357>
Grzegorz Grzywacz added the comment:
The problem is with the test
test_thread.ThreadRunningTests.test_save_exception_state_on_error when other
tests leave threads running.
test_save_exception_state_on_error relies on thread._get_count(); if this value
decreases, the test assumes the thread is finished with is
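A sketch of the race (illustrative, not the actual test code): a wait based on the
interpreter-wide thread count can be satisfied by an unrelated leftover thread
exiting, rather than by the thread the test started.

import _thread
import time

# Test-style wait: start a thread, then wait for the global thread count to
# fall back to its starting value. If a thread left over from another test
# exits during the wait, the count drops and the test wrongly concludes that
# its own thread has finished.
start_count = _thread._count()
_thread.start_new_thread(time.sleep, (0.1,))
while _thread._count() > start_count:
    time.sleep(0.01)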
Grzegorz Grzywacz added the comment:
> We just ran into the exact same issue here in Google using a
> ThreadPoolExecutor.map call
It looks like map has a similar issue. I created a GitHub PR with map fixed.
> Concurrent package was added in 3.2. How backport it 2.7?
There is official, u
Changes by Grzegorz Grzywacz :
--
pull_requests: +1658
Python tracker
<http://bugs.python.org/issue27144>
Changes by Grzegorz Grzywacz :
--
keywords: +patch
Added file: http://bugs.python.org/file43038/issue27144.patch
Python tracker
<http://bugs.python.org/issue27
New submission from Grzegorz Grzywacz:
The as_completed generator keeps references to all passed futures until
StopIteration. This can lead to serious memory inefficiency.
The solution is to drop the references from the internal lists and yield each future
ad hoc.
I have submitted a patch and a reproduction sample.
I can create
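A small sketch of the effect being described (sizes and names are illustrative): even
after the caller drops every reference, each yielded future, and its large result,
can stay alive inside the generator until it is exhausted.

import concurrent.futures as cf

def produce(n):
    return bytes(1024 * 1024)    # a 1 MiB result makes the retention visible

if __name__ == "__main__":
    with cf.ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(produce, n) for n in range(50)]
        pending = cf.as_completed(futures)
        del futures                   # drop the caller's references
        for fut in pending:
            data = fut.result()
            del data, fut             # with the reported behaviour, the generator
                                      # still references the future until StopIteration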