Ask Solem added the comment:
Perhaps we could add a self._finally to the event loop itself?
Like loop._ready, but a list of callbacks run_until_complete will call before
returning?
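A hypothetical sketch of the suggestion (FinallyLoop, add_finally and _finally are made-up names, not an existing asyncio API): collect callbacks on the loop and drain them before run_until_complete returns.

[code]
# Hypothetical wrapper illustrating the idea; asyncio has no loop._finally.
import asyncio

class FinallyLoop:
    def __init__(self, loop=None):
        self._loop = loop or asyncio.new_event_loop()
        self._finally = []              # like loop._ready, but run last

    def add_finally(self, callback):
        self._finally.append(callback)

    def run_until_complete(self, coro):
        try:
            return self._loop.run_until_complete(coro)
        finally:
            for callback in self._finally:
                callback()              # drained before returning
[/code]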
Ask Solem added the comment:
Ah, so the extra call_soon means it needs a:
[code]
loop.run_until_complete(asyncio.sleep(0))
[/code]
before the self.assertTrue(it.finally_executed)
to finish executing agen.close().
Why is create_task different? Does it execute an iteration of the
Ask Solem added the comment:
This patch is quite dated now and I have fixed many bugs since. The feature is
available in billiard and is working well, but the code has diverged quite a lot
from Python trunk. I will be updating billiard to reflect the changes for
Python 3.4 soon (billiard is
Ask Solem added the comment:
I vote to close too, as it's very hard to fix in a clean way.
A big problem, though, is that there is a standard for defining exceptions that
also ensures the exception is pickleable (always call Exception.__init__
with the original args), and that standard is not documented
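A minimal sketch of that convention (class name illustrative): pass the original constructor arguments to Exception.__init__ so the exception round-trips through pickle.

[code]
# Keeping the original args intact makes the exception picklable.
import pickle

class TaskError(Exception):                      # illustrative name
    def __init__(self, code, message):
        Exception.__init__(self, code, message)  # original args preserved
        self.code = code
        self.message = message

err = pickle.loads(pickle.dumps(TaskError(42, "boom")))
assert err.code == 42 and err.message == "boom"
[/code]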
Ask Solem added the comment:
Later works, or just close it. I can open up a new issue to merge the
improvements in billiard later.
> The execv stuff certainly won't go in by Py3.3. There has not been
> consensus that adding it is a good idea.
> (I also have the unit tests
Ask Solem added the comment:
Well, I still don't know exactly why restarting the socket read made it work,
but the patch solved an issue where newly started pool processes would be stuck
in a socket read forever (happening to maybe 1 in 500 new processes).
This and a dozen other pool-related
Ask Solem added the comment:
@swindmill, if you provide a doc/test patch then this can probably be merged.
@pitrou, we could change it to `setup_queues`, though I don't think
even changing the name of "private" methods is a good idea. It could simply be
an alias to `_setup_
Ask Solem added the comment:
I have suspected for some time that this may be necessary, not merely useful,
and issue6721 seems to verify that. In addition to adding the keyword
arg to Process, it should also be added to Pool and Manager.
Is anyone working on a patch? If not I will
Ask Solem added the comment:
How would you replace the following functionality
with the multiple with statement syntax:

[code]
x = (A(), B(), C())
with nested(*x) as context:
    ...
[/code]

It seems to me that nested() is still useful for this particular
use case.
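For reference, the same dynamic case can be covered without nested() by contextlib.ExitStack (added in Python 3.3, after this comment was written); a minimal sketch:

[code]
# Entering a runtime-determined collection of context managers.
from contextlib import ExitStack

x = (open("a.txt", "w"), open("b.txt", "w"), open("c.txt", "w"))  # illustrative
with ExitStack() as stack:
    context = [stack.enter_context(cm) for cm in x]
    # everything in x is entered here; all are exited when the block ends
[/code]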
--
nosy: +asksol
Ask Solem added the comment:
This is great! I always wondered if it was really necessary to use C for this.
10µs overhead should be worth it ;)
I've read the patch, but not carefully. So far nothing jumps out at me either.
Cheers!
Ask Solem added the comment:
While it makes sense for `join` to raise an error on timeout, that could
possibly break existing code, so I don't think that is an option. Adding a
note in the documentation would be great.
Ask Solem added the comment:
Ah, this is something I've seen as well; it's part of a bug that I haven't
created an issue for yet.
Ask Solem added the comment:
Since you can't specify the return code, `self.terminate` is less flexible than
`sys.exit`.
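A minimal sketch of the difference: sys.exit lets the child pick its own return code, while terminate() kills it from the outside (SIGTERM on Unix, so a fixed negative exitcode).

[code]
# The child chooses its return code with sys.exit.
import sys
from multiprocessing import Process

def child():
    sys.exit(3)

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    p.join()
    print(p.exitcode)   # 3; after terminate() it would be -15 on Unix
[/code]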
I think the original intent is clear here: the method is there for the parent
to control the child. You are of course welcome to argue otherwise.
By the way, I just
Ask Solem added the comment:
Pickling on put makes sense to me. I can't think of cases where this could
break existing code either. I think this may also resolve issue 8323.
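A minimal sketch of the idea (eager_put is a hypothetical helper, not the patch itself): serializing in the caller makes an unpicklable object fail immediately at put(), instead of crashing the background feeder thread later.

[code]
# Hypothetical illustration of "pickle on put": fail in the caller.
import pickle

def eager_put(queue, obj):
    data = pickle.dumps(obj)   # PicklingError is raised here, in the caller
    queue.put(data)            # plain bytes are always safe to enqueue

# The consumer side then unpickles: obj = pickle.loads(queue.get())
[/code]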
--
stage: -> unit test needed
Changes by Ask Solem :
--
resolution: -> invalid
status: open -> closed
Python tracker
<http://bugs.python.org/issue5573>
Ask Solem added the comment:
It seems that Process.terminate is not meant to be used by the child, but only
the parent.
From the documentation:
"Note that the start(), join(), is_alive() and exit_code methods
should only be called by the process that created the process object."
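A short sketch of the documented usage: the parent that created the Process object is the one that calls terminate().

[code]
# terminate() called from the parent, per the documentation quoted above.
import time
from multiprocessing import Process

def child():
    time.sleep(60)

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    p.terminate()       # the parent controls the child
    p.join()
    print(p.exitcode)   # negative signal number on Unix (e.g. -15)
[/code]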
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue10133>
Ask Solem added the comment:
Can't reproduce on Python 2.7, but can indeed reproduce on 2.6. Issue fixed?
Ask Solem added the comment:
What is the status of this issue? There are several platforms listed here,
which I unfortunately don't have access to.
--
nosy: +asksol
Changes by Ask Solem :
--
resolution: -> invalid
status: open -> closed
Python tracker
<http://bugs.python.org/issue9733>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue5930>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue7292>
Ask Solem added the comment:
I don't know about the socket internals, but I find the behavior
acceptable. It may not be feasible to change it now anyway, as there may be
people already depending on it (e.g. not handling errors occurring at
Ask Solem added the comment:
I can't seem to reproduce this on trunk...
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue7474>
Ask Solem added the comment:
Matthew, would you be willing to write tests + documentation for this?
Ask Solem added the comment:
Queue uses multiprocessing.util.Finalize, which uses weakrefs to track when the
object goes out of scope, so this is actually expected behavior.
IMHO it is not a very good approach, but changing the API to use explicit close
methods is a little late at this point.
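A small sketch of the pattern described (the Resource class is illustrative): util.Finalize ties cleanup to a weakref, so it fires when the owning object is garbage collected.

[code]
# The weakref-driven cleanup pattern Queue relies on internally.
from multiprocessing import util

class Resource:
    def __init__(self, handle):
        self._handle = handle        # illustrative resource with a close()
        util.Finalize(self, Resource._cleanup, args=(handle,))

    @staticmethod
    def _cleanup(handle):
        handle.close()               # runs when the Resource is collected
[/code]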
Ask Solem added the comment:
Aha, no. I see now that you use connection.send_bytes instead.
Then I can't think of any issues with this patch, but I don't know why
it was done this way in the first place.
Ask Solem added the comment:
AFAICS the object will be pickled twice with this patch.
See Modules/_multiprocessing/connection.h: connection_send_obj.
--
nosy: +asksol
Ask Solem added the comment:
Updated doc patch
--
nosy: +asksol
Added file: http://bugs.python.org/file19350/issue-4999.diff
Ask Solem added the comment:
Please add the traceback; I can't seem to find any obvious places where this
would happen now.
Also, what version are you currently using?
I agree with the fileno, but I'd say close is a reasonable method to implement,
especially for stdin/std
Changes by Ask Solem :
--
resolution: -> invalid
status: open -> closed
Python tracker
<http://bugs.python.org/issue10128>
Ask Solem added the comment:
Is this on Windows? Does it work for you now?
Python tracker
<http://bugs.python.org/issue10128>
New submission from Ask Solem :
While working on an "autoscaling" (yes, people call it that...) feature for
Celery, I noticed that the processes created by the _handle_workers thread
don't always work. I have reproduced this in general by just using the
maxtasksperchild
Ask Solem added the comment:
Did you finish the code to reproduce the problem?
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue8144>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue8094>
Ask Solem added the comment:
Could you please reduce this to the shortest possible example that reproduces
the problem?
--
nosy: +asksol
Ask Solem added the comment:
I created a small doc patch for this (attached).
--
keywords: +needs review, patch
nosy: +asksol
versions: +Python 3.1 -Python 2.6
Added file: http://bugs.python.org/file18967/multiprocessing-issue7707.patch
Ask Solem added the comment:
Maybe surprising, but not so weird if you think about what happens
behind the scenes.
When you do:
>>> x = man.list()
>>> x.append({})
you send an empty dict to the manager to be appended to x. When you do:
>>> x[0]
{}
you
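A minimal sketch of the behavior being described: the item you get back from a manager list is a local copy, so mutating it in place never reaches the manager, while reassigning the slot does.

[code]
# Mutating a retrieved item is lost; reassigning the slot is not.
from multiprocessing import Manager

if __name__ == "__main__":
    man = Manager()
    x = man.list()
    x.append({})
    x[0]["key"] = "value"        # mutates a local copy only
    print(x[0])                  # {}: the change never reached the manager
    x[0] = {"key": "value"}      # sends a new value back; this sticks
    print(x[0])                  # {'key': 'value'}
[/code]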
Ask Solem added the comment:
As no one is able to confirm that this is still an issue, I'm closing it. It
can be reopened if necessary.
--
resolution: -> out of date
Ask Solem added the comment:
As no one has been able to confirm that this is still an issue, I'm closing it
as "out of date". The issue can be reopened if necessary.
--
resolution: accepted -> out of date
status: open -> closed
Ask Solem added the comment:
> I expected I could iterate over a DictProxy as I do over a
> regular dict.
DictProxy doesn't support iterkeys(), itervalues(), or iteritems() either.
So while `iter(d)` could do `iter(d.keys())` behind the scenes, it would mask
the fact that
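A short sketch of the workaround implied above, iterating the snapshot that keys() returns (keys() is supported by DictProxy):

[code]
# Iterating a manager dict through keys().
from multiprocessing import Manager

if __name__ == "__main__":
    d = Manager().dict(a=1, b=2)
    for key in d.keys():       # keys() fetches a snapshot from the manager
        print(key, d[key])     # each d[key] is another manager round-trip
[/code]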
Changes by Ask Solem :
--
stage: needs patch -> unit test needed
Python tracker
<http://bugs.python.org/issue6407>
Ask Solem added the comment:
Are there really any test/doc changes needed for this?
Python tracker
<http://bugs.python.org/issue6407>
Changes by Ask Solem :
--
nosy: +asksol
stage: -> needs patch
Python tracker
<http://bugs.python.org/issue3093>
Changes by Ask Solem :
--
keywords: +needs review
nosy: +asksol
Python tracker
<http://bugs.python.org/issue8534>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue5501>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue3831>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue4892>
Changes by Ask Solem :
--
resolution: -> postponed
stage: unit test needed -> needs patch
Python tracker
<http://bugs.python.org/issue3735>
Ask Solem added the comment:
By the way, I'm also working on writing some simple benchmarks for the
multiple-queues approach, just to see if there's any overhead to worry about.
Ask Solem added the comment:
> - A worker removes a job from the queue and is killed before
> sending an ACK.
Yeah, this may be a problem. I was thinking we could make sure the task is
acked before child process shutdown. Kill -9 is then not safe, but do we really
want to guarantee t
Ask Solem added the comment:
New patch attached (termination-trackjobs3.patch).
> Hmm, a few notes. I have a bunch of nitpicks, but those
> can wait for a later iteration. (Just one style nit: I
> noticed a few unneeded whitespace changes... please try
> not to do that, as it mak
Ask Solem added the comment:
This is a nice feature, but it's also very specific and can be implemented
by extending what's already there.
Could you make a patch for this that applies to the py3k branch? If no one has
the time for this, then we should probably just close the issue
Ask Solem added the comment:
Duplicate of 3518?
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue5862>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue7060>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue7123>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue6653>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue3518>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue6417>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue6407>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue6362>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue6056>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue3111>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue3125>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue5573>
Ask Solem added the comment:
> Does the problem make sense/do you have any ideas for an alternate
> solution?
Well, I still haven't given up on the trackjobs patch. I changed it to use a
single queue for both the acks and the result (see new patch attached:
multiprocessing-
Ask Solem added the comment:
On closer look, your patch is also ignoring SystemExit. I think it's beneficial
to honor SystemExit, so a user could use this as a means to replace the current
process with a new one.
If we keep that behavior, the real problem here is that the
result handler
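A minimal sketch of what honoring SystemExit could look like in the worker (worker_loop is illustrative, not the actual pool code): re-raise SystemExit so the process exits and can be replaced, while ordinary exceptions become results.

[code]
# Illustrative worker loop: SystemExit propagates, other errors are returned.
def worker_loop(get_task, put_result):
    while True:
        func, args = get_task()
        try:
            result = (True, func(*args))
        except SystemExit:
            raise                      # honored: the worker process exits
        except Exception as exc:
            result = (False, exc)      # ordinary errors become results
        put_result(result)
[/code]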
Ask Solem added the comment:
This is related to our discussions at #9205 as well
(http://bugs.python.org/issue9205), as the final patch there will also fix this
issue.
Ask Solem added the comment:
@greg
Been very busy lately, just had some time now to look at your patch.
I'm very ambivalent about using one SimpleQueue per process. What is the reason
for doing that?
Ask Solem added the comment:
> A potential implementation is in termination.patch. Basically,
> try to shut down gracefully, but if you timeout, just give up and
> kill everything.
You can't have a sensible default timeout, because the worker may be processing
something important
Ask Solem added the comment:
Btw, the current problem with termination3.patch seems to be that the
MainProcess somehow appears in self._pool. I have no idea how it gets there.
Maybe some unrelated issue that appears when forking that late in the tests
Ask Solem added the comment:
> At first glance, looks like there are a number of sites where you don't
> change the blocking calls to non-blocking calls (e.g. get()). Almost all of
> the get()s have the potential to be called when there is no possibility for
> the
Changes by Ask Solem :
Added file:
http://bugs.python.org/file18026/multiprocessing-tr...@82502-termination-trackjobs.patch
Ask Solem added the comment:
> but if you make a blocking call such as in the following program,
> you'll get a hang
Yeah, and for that we could use the same approach as for the maps.
But I've just implemented the accept callback approach, which should be
superior.
Ask Solem added the comment:
> Really? I could be misremembering, but I believe you deal
> with the case of the result being unpickleable. I.e. you
> deal with the put(result) failing, but not the get() in the
> result handler.
Your example is demonstrating the pickle error on p
Ask Solem added the comment:
Just some small cosmetic changes to the patch.
(added multiprocessing-tr...@82502-termination3.patch)
--
Added file:
http://bugs.python.org/file18015/multiprocessing-tr...@82502-termination3.patch
Changes by Ask Solem :
Removed file:
http://bugs.python.org/file18013/multiprocessing-tr...@82502-termination2.patch
Ask Solem added the comment:
Updated patch with Greg's suggestions.
(multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch)
--
Added file:
http://bugs.python.org/file18014/multiprocessing-tr...@82502-handle_worker_encoding_errors2.
Ask Solem added the comment:
Ok, I implemented my suggestions in the attached patch
(multiprocessing-tr...@82502-termination2.patch). What do you think?
Greg, maybe we could keep the behavior in termination.patch as an option for
map jobs? It is certainly a problem that map jobs won
Ask Solem added the comment:
Greg,
> Before I forget, looks like we also need to deal with the
> result from a worker being un-unpickleable:
This is what my patch in bug 9244 does...
> Yep. Again, as things stand, once you've lost an worker,
> you've lost a task, a
Ask Solem added the comment:
Jesse wrote:
> We can work around the shutdown issue (really, bug 9207) by
> ignoring the exception such as shutdown.patch does, or passing in
> references/adding references to the functions those methods need. Or (as
> Brett suggested) converting t
Ask Solem added the comment:
> To be clear, the errback change and the unpickleable result
> change are actually orthogonal, right?
Yes, it could be a separate issue. Jesse, do you think I should open
up a separate issue for this?
> Why not add an error_callback for map_async
Ask Solem added the comment:
There's one more thing:

[code]
if exitcode is not None:
    cleaned = True
    if exitcode != 0 and not worker._termination_requested:
        abnormal.append((worker.pid, exitcode))
[/code]

Instead of restarting crashed worker processes it
Changes by Ask Solem :
--
keywords: +patch
Added file:
http://bugs.python.org/file17985/multiprocessing-tr...@82502-apply-semaphore.patch
New submission from Ask Solem :
This patch adds the `waitforslot` argument to apply_async. If set to `True`,
apply_async will not return until there is a worker available to process the
job.
This is implemented by a semaphore that is released by the result handler
whenever a new result is
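A rough sketch of the mechanism (SlotGate and its methods are hypothetical names): a semaphore sized to the worker count, acquired before dispatching a job and released by the result handler.

[code]
# Hypothetical illustration of waitforslot: one semaphore slot per worker.
import threading

class SlotGate:
    def __init__(self, n_workers):
        self._sem = threading.Semaphore(n_workers)

    def acquire_slot(self):
        self._sem.acquire()    # apply_async blocks here until a worker is free

    def release_slot(self):
        self._sem.release()    # called by the result handler on each result
[/code]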
Changes by Ask Solem :
--
title: multiprocessing.pool: Worker crashes if result can't be encoded result
(with patch) -> multiprocessing.pool: Worker crashes if result can't be encoded
Changes by Ask Solem :
--
title: multiprocessing.pool: Pool crashes if worker can't encode result (with
patch) -> multiprocessing.pool: Worker crashes if result can't be encoded
result (with patch)
Ask Solem added the comment:
For reference I opened up a new issue for the put() case here:
http://bugs.python.org/issue9244
New submission from Ask Solem :
If the target function returns an unpickleable value the worker process
crashes. This patch tries to safely handle unpickleable errors, while enabling
the user to inspect such errors after the fact.
In addition a new argument has been added to apply_async
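A short reproducer of the failure mode (on the Python of the time this crashed the worker; modern versions raise multiprocessing.pool.MaybeEncodingError from get() instead):

[code]
# The target returns something pickle can't handle (a lambda).
from multiprocessing import Pool

def bad(x):
    return lambda: x           # unpicklable return value

if __name__ == "__main__":
    with Pool(1) as pool:
        res = pool.apply_async(bad, (1,))
        res.get()              # fails when the result can't be encoded
[/code]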
Ask Solem added the comment:
I think I misunderstood the purpose of the patch. This is about handling errors
on get(), not on put() like I was working on. So sorry for that confusion.
What kind of errors are you having that makes the get() call fail?
If the queue is not working, then I guess
Ask Solem added the comment:
> Unfortunately, if you've lost a worker, you are no
> longer guaranteed that cache will eventually be empty.
> In particular, you may have lost a task, which could
> result in an ApplyResult waiting forever for a _set call.
> More generally,
Ask Solem added the comment:
termination.patch: in the result handler you've added:

[code]
while cache and thread._state != TERMINATE and not failed
[/code]

Why are you terminating the second pass after finding a failed process?
Unpickleable errors and other errors occurring in the worker body ar
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue9207>
Changes by Ask Solem :
--
nosy: +asksol
Python tracker
<http://bugs.python.org/issue9162>
Ask Solem added the comment:
Patch for multiprocessing adding the daemon kwarg is attached.
--
nosy: +asksol
Added file:
http://bugs.python.org/file15376/6064-multiprocessing-daemon-kwarg.patch
Changes by Ask Solem :
--
keywords: +patch
Added file: http://bugs.python.org/file15375/6615.patch
Python tracker
<http://bugs.python.org/issue6615>
Ask Solem added the comment:
Are we sure this fits the scope of multiprocessing? It's a nice feature,
but such a long and complex example in the documentation is wrong IMHO. If
this is something people need, it should be implemented as a reusable
solution, not as an example people copy
Changes by Ask Solem :
Added file: http://bugs.python.org/file15374/7285.patch
Python tracker
<http://bugs.python.org/issue7285>
Ask Solem added the comment:
Amaury Forgeot d'Arc wrote:
> And I'd follow the same path: provide a way to build a launcher -
> a .exe file that simply starts python with the given script.
Sounds good, but how would you expect to set the process name
for a subprocess