On 2012-08-02, Laszlo Nagy wrote:
>
>> I still don't get it. shm_unlink() works the same way unlink() does.
>> The resource itself doesn't cease to exist until all open file
>> handles are closed. From the shm_unlink() man page on Linux:
>>
>> The operation of shm_unlink() is analogous to
I still don't get it. shm_unlink() works the same way unlink() does.
The resource itself doesn't cease to exist until all open file handles
are closed. From the shm_unlink() man page on Linux:
The operation of shm_unlink() is analogous to unlink(2): it
removes a shared memory object name, and, once all processes have
unmapped the object, de-allocates and destroys the contents of the
associated memory region.
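A minimal sketch of these semantics using the newer multiprocessing.shared_memory module (Python 3.8+, so not available at the time of this thread); it shows that unlinking removes the name while the segment stays usable until the last handle is closed:

    from multiprocessing import shared_memory

    # Create a named segment, write to it, then unlink the name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"
    shm.unlink()               # name is gone; memory survives while mapped
    print(bytes(shm.buf[:5]))  # still readable: b'hello'
    shm.close()                # last mapping closed; memory is released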
On 2012-08-01, Laszlo Nagy wrote:
>
things get more tricky, because I can't use queues and pipes to
communicate with a running process that is not my child, correct?
>>> Yes, I think that is correct.
>> I don't understand why detaching a child process on Linux/Unix would
>> make
On Aug 1, 2012, at 9:25 AM, andrea crotti wrote:
> [beanstalk] does look nice and I would like to have something like that..
> But since I have to convince my boss of another external dependency, I
> think it might be worth trying out zeromq instead, which can also do
> similar things and looks m
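For what it's worth, a hedged sketch of the zeromq option using pyzmq; the address and the JSON payloads are made up for illustration:

    import zmq

    def server():
        # Long-running process: answer requests from any unrelated client.
        ctx = zmq.Context()
        sock = ctx.socket(zmq.REP)
        sock.bind("tcp://127.0.0.1:5555")   # assumed address
        while True:
            msg = sock.recv_json()
            sock.send_json({"ok": True, "echo": msg})

    def client():
        # Any other process; no parent/child relationship required.
        ctx = zmq.Context()
        sock = ctx.socket(zmq.REQ)
        sock.connect("tcp://127.0.0.1:5555")
        sock.send_json({"cmd": "ping"})
        print(sock.recv_json())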
2012/8/1 Laszlo Nagy :
>
> So detaching the child process will not make IPC stop working. But exiting
> from the original parent process will. (And why else would you detach the
> child?)
>
Well, it makes perfect sense if it stops working
Yes, I think that is correct.
I don't understand why detaching a child process on Linux/Unix would
make IPC stop working. Can somebody explain?
It is implemented with shared memory. I think (although I'm not 100%
sure) that shared memory is created *and freed up* (shm_unlink()
system call)
things get more tricky, because I can't use queues and pipes to
communicate with a running process that is not my child, correct?
Yes, I think that is correct.
I don't understand why detaching a child process on Linux/Unix would
make IPC stop working. Can somebody explain?
It is implemented with shared memory. I think (although I'm not 100%
sure) that shared memory is created *and freed up* (shm_unlink()
system call)
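One stdlib way around the "not my child" limitation, sketched here with multiprocessing.connection; the address and authkey are assumptions, but any process that knows both can talk to the listener:

    from multiprocessing.connection import Listener, Client

    ADDRESS = ("localhost", 6000)   # assumed host/port
    AUTHKEY = b"secret"             # assumed shared secret

    def serve():
        # Runs in the long-lived process, child or not.
        listener = Listener(ADDRESS, authkey=AUTHKEY)
        conn = listener.accept()
        print(conn.recv())          # e.g. [1, 2, 3]
        conn.close()
        listener.close()

    def send(payload):
        # Runs in any unrelated process.
        conn = Client(ADDRESS, authkey=AUTHKEY)
        conn.send(payload)
        conn.close()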
On 2012-08-01, Laszlo Nagy wrote:
>>
>> As I wrote "I found many nice things (Pipe, Manager and so on), but
>> actually even this seems to work:" yes, I did read the documentation.
> Sorry, I did not want to be offensive.
>>
>> I was just surprised that it worked better than I expected even
>> without Pipes and Queues, but now I understand why..
2012/8/1 Roy Smith :
> In article ,
> Laszlo Nagy wrote:
>
>> Yes, I think that is correct. Instead of detaching a child process, you
>> can create independent processes and use other frameworks for IPC. For
>> example, Pyro. It is not as effective as multiprocessing.Queue, but in
>> return, you
On 2012-08-01 12:59, Roy Smith wrote:
In article ,
Laszlo Nagy wrote:
Yes, I think that is correct. Instead of detaching a child process, you
can create independent processes and use other frameworks for IPC. For
example, Pyro. It is not as effective as multiprocessing.Queue, but in
return,
The most effective IPC is usually through shared memory. But there is no
OS-independent standard Python module that can communicate over shared
memory.
It's true that shared memory is faster than serializing objects over a
TCP connection. On the other hand, it's hard to imagine anything written
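As a rough, Unix-only illustration of that point: anonymous mmap is shared across fork(), but nothing like this is portable to Windows from the stdlib alone:

    import mmap
    import os

    buf = mmap.mmap(-1, 1024)   # anonymous, MAP_SHARED by default on Unix

    pid = os.fork()
    if pid == 0:
        # Child: write into the shared region and exit.
        buf.seek(0)
        buf.write(b"from child")
        os._exit(0)

    os.waitpid(pid, 0)          # parent: wait for the child, then read
    buf.seek(0)
    print(buf.read(10))         # b'from child'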
In article ,
Laszlo Nagy wrote:
> Yes, I think that is correct. Instead of detaching a child process, you
> can create independent processes and use other frameworks for IPC. For
> example, Pyro. It is not as effective as multiprocessing.Queue, but in
> return, you will have the option to ru
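A hedged sketch of the Pyro suggestion, assuming Pyro4 is installed; the Worker class and its add() method are invented for illustration:

    import Pyro4

    @Pyro4.expose
    class Worker(object):
        def add(self, items):
            return sum(items)

    def main():
        daemon = Pyro4.Daemon()           # network server for this process
        uri = daemon.register(Worker())   # hand this URI to any client
        print("worker uri:", uri)
        daemon.requestLoop()

    # In a completely unrelated client process:
    #   proxy = Pyro4.Proxy(uri)
    #   proxy.add([1, 2, 3])              # returns 6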
Yes, I know we don't care about Windows for this particular project..
I think mixing multiprocessing and fork should do no harm, but it is
probably unnecessary, since I'm already in another process after the
fork, so I can just make it run what I want.
Otherwise, is there a way to do the same thing using only
multiprocessing?
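For reference, the classic Unix double-fork detach that the thread keeps circling around, sketched without error handling:

    import os
    import sys

    def detach():
        pid = os.fork()
        if pid > 0:
            os.waitpid(pid, 0)   # reap the short-lived middle process
            return False         # caller is still the original parent
        os.setsid()              # become session leader, drop the tty
        if os.fork() > 0:
            os._exit(0)          # middle process exits immediately
        return True              # grandchild is fully detached

    if detach():
        sys.stdout.write("detached, pid %d\n" % os.getpid())
        os._exit(0)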
2012/8/1 Laszlo Nagy :
> One thing is sure: os.fork() doesn't work under Microsoft Windows. Under
> Unix, I'm not sure if os.fork() can be mixed with
> multiprocessing.Process.start(). I could not find official documentation on
> that. This must be tested on your actual platform. And don't forget t
Thanks, there is another thing which, in theory, is able to interact
with running processes:
https://github.com/lmacken/pyrasite
I don't know, though, whether it's a good idea to use a similar approach
for production code; as far as I understand, it uses gdb. In theory,
though, I could set up
2012/8/1 Laszlo Nagy :
>> I was just surprised that it worked better than I expected even
>> without Pipes and Queues, but now I understand why..
>>
>> Anyway, now I would like to be able to detach subprocesses to avoid the
>> nasty code reloading that I was talking about in another thread, but
>> t
As I wrote "I found many nice things (Pipe, Manager and so on), but
actually even this seems to work:" yes, I did read the documentation.
Sorry, I did not want to be offensive.
I was just surprised that it worked better than I expected even
without Pipes and Queues, but now I understand why..
Anyway, now I would like to be able to detach subprocesses to avoid the
nasty code reloading that I was talking about in another thread.
2012/7/31 Laszlo Nagy :
>> I think I got it now: if I just mix the start before another add,
>> inside Process.run it won't see the new data that has been added after
>> the start. So this way is perfectly safe only until the process is
>> launched; if it's running, I need to use some multiprocess-aware data
>> structure
I think I got it now: if I just mix the start before another add,
inside Process.run it won't see the new data that has been added after
the start. So this way is perfectly safe only until the process is
launched; if it's running, I need to use some multiprocess-aware data
structure
>
> def procs():
>     mp = MyProcess()
>     mp.add([1,2,3])
>     mp.start()
>     mp.add([2,3,4])
>     # with the join we are actually waiting for the end of the running time
>     mp.join()
>     print(mp)
>
I think I got it now: if I just mix the start before another
add, inside th
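A sketch of the multiprocess-aware version being described: items added after start() reach the child because they travel over a multiprocessing.Queue instead of living in the parent's copy of a plain list (MyProcess here is a stand-in for the class in the thread):

    from multiprocessing import Process, Queue

    class MyProcess(Process):
        def __init__(self):
            super(MyProcess, self).__init__()
            self.queue = Queue()

        def add(self, items):
            self.queue.put(items)

        def run(self):
            # Runs in the child; sees everything put on the queue.
            while True:
                items = self.queue.get()
                if items is None:      # sentinel: time to stop
                    break
                print("got", items)

    if __name__ == "__main__":
        mp = MyProcess()
        mp.add([1, 2, 3])
        mp.start()
        mp.add([2, 3, 4])      # visible to the child, unlike a plain list
        mp.queue.put(None)     # ask the child to finish
        mp.join()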