On 5/4/2020 9:05 PM, John Ladasky wrote:
On Monday, May 4, 2020 at 4:09:53 PM UTC-7, Terry Reedy wrote:
[snip]
Hi Terry,
Thanks for your reply. I have been hacking at this for a few hours. I have
learned two things:
1. Windows hangs unless you explicitly pass any references you want to us
On 5/4/2020 3:26 PM, John Ladasky wrote:
Several years ago I built an application using multiprocessing. It only needed
to work in Linux. I got it working fine. At the time, concurrent.futures did
not exist.
My current project is an application which includes a PyQt5 GUI, and a live
video
> On Apr 6, 2020, at 12:19 PM, David Raymond wrote:
>
> Attempting reply as much for my own understanding.
>
> Are you on Mac? I think this is the pertinent bit for you:
> Changed in version 3.8: On macOS, the spawn start method is now the default.
> The fork start method should be considered u
= mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
...
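That initializer/initargs call is the usual way to hand a Queue to pool workers under the spawn start method. A minimal sketch of the pattern, assuming the hypothetical names pool_init and mp_comm_queue from the fragment above:

import multiprocessing as mp

def pool_init(queue):
    # Stash the queue in a module-level global so worker functions can reach it.
    global mp_comm_queue
    mp_comm_queue = queue

def worker(item):
    mp_comm_queue.put(('done', item))
    return item * 2

if __name__ == '__main__':
    mp_comm_queue = mp.Queue()
    # Passing the queue via initargs happens at worker start-up, which is the
    # supported way to share it; passing it later as a map() argument fails.
    pool = mp.Pool(initializer=pool_init, initargs=(mp_comm_queue,))
    print(pool.map(worker, range(5)))
    for _ in range(5):
        print(mp_comm_queue.get())
    pool.close()
    pool.join()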
-Original Message-
From: David Raymond
Sent: Monday, April 6, 2020 4:19 PM
To: python-list@python.org
Subject: RE: Multiprocessing queue sharing and python3.8
Attempting reply as much for my own understanding.
Are you on Mac? I think this is the pertinent bit for you:
Changed in version 3.8: On macOS, the spawn start method is now the default.
The fork start method should be considered unsafe as it can lead to crashes of
the subprocess. See bpo-33725.
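For code that relied on the old default, the start method can still be chosen explicitly; a minimal sketch (the 'fork' method is only available on POSIX, and the crash risk noted above still applies):

import multiprocessing as mp

def work(x):
    return x * x

if __name__ == '__main__':
    ctx = mp.get_context('fork')      # or mp.set_start_method('fork') once, early
    with ctx.Pool(4) as pool:
        print(pool.map(work, range(10)))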
On 05Feb2020 15:48, Israel Brewster wrote:
In a number of places I have constructs where I launch several
processes using the multiprocessing library, then loop through said
processes calling join() on each one to wait until they are all
complete. In general, this works well, with the *apparen
answered here https://www.reddit.com/r/Python/comments/dxhgec/how_does_multiprocessing_convert_a_methodrun_in/
basically starts two PVMs - the whole fork, check 'pid' trick.. one
process continues as the main thread and the other calls 'run'
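The fork-and-check-pid trick being described looks roughly like this (a POSIX-only sketch, not the actual multiprocessing source):

import os

def run():
    print('child process', os.getpid(), 'doing the work')

pid = os.fork()
if pid == 0:
    run()               # the child ends up in run()
    os._exit(0)
else:
    os.waitpid(pid, 0)  # the parent continues as the main process
    print('parent', os.getpid(), 'carries on')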
On 03/07/2019 18.37, Israel Brewster wrote:
> I have a script that benefits greatly from multiprocessing (it’s generating a
> bunch of images from data). Of course, as expected each process uses a chunk
> of memory, and the more processes there are, the more memory used. The amount
> used per pr
On 2019-07-03 08:37:50 -0800, Israel Brewster wrote:
> 1) Determine the total amount of RAM in the machine (how?), assume an
> average of 10GB per process, and only launch as many processes as
> calculated to fit. Easy, but would run the risk of under-utilizing the
> processing capabilities and tak
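The "how?" part is straightforward on Linux with just the standard library (psutil would be the portable route); a sketch, with the 10GB-per-process figure from the post taken as a given:

import os
from multiprocessing import Pool

GB = 1024 ** 3
total_ram = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')   # bytes, Linux
per_process = 10 * GB                                                   # assumed average
n_workers = max(1, min(os.cpu_count(), total_ram // per_process))

print('%.1f GB RAM -> %d workers' % (total_ram / GB, n_workers))
# pool = Pool(n_workers)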
On 7/3/19 9:37 AM, ijbrews...@alaska.edu wrote:
I have a script that benefits greatly from multiprocessing (it’s generating a
bunch of images from data). Of course, as expected each process uses a chunk of
memory, and the more processes there are, the more memory used. The amount used
per pro
With multiprocessing you can take advantage of multi-core processing, as it
launches a separate Python interpreter process and communicates with it via
shared memory (at least on Windows). The big advantage of the multiprocessing
module is that the interaction between processes is much richer than
subpr
Re: " My understanding (so far) is that the tradeoff of using multiprocessing
is that my manager script can not exit until all the work processes it starts
finish. If one of the worker scripts locks up, this could be problematic. Is
there a way to use multiprocessing where processes are launched
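The manager does not have to sit in join(); it can poll the workers and keep doing other checks. A minimal sketch with a hypothetical worker function:

import time
from multiprocessing import Process

def worker(seconds):
    time.sleep(seconds)        # stand-in for a real worker script

if __name__ == '__main__':
    procs = [Process(target=worker, args=(n,)) for n in (1, 2, 3)]
    for p in procs:
        p.start()

    while any(p.is_alive() for p in procs):
        # monitor other conditions here instead of blocking in join()
        time.sleep(0.5)

    for p in procs:
        p.join()               # returns immediately; nothing is still alive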
On Wed, Mar 13, 2019 at 2:01 AM Malcolm Greene wrote:
>
> Use case: I have a Python manager script that monitors several conditions
> (not just time based schedules) that needs to launch Python worker scripts to
> respond to the conditions it detects. Several of these worker scripts may end
> u
George: apologies for mis-identifying you as the OP.
Israel:
Actually not a ’toy example’ at all. It is simply the first step in gridding
some data I am working with - a problem that is solved by tools like SatPy, but
unfortunately I can’t use SatPy because it doesn’t recognize my file format,
and you can’t load data directly. Writing a custom file import
I don't know whether this is a toy example; having a grid of this size is not
uncommon. True, it would make more sense to distribute more work to each
box, if there were any. One has to find a proper balance, as with many other
things in life. I simply responded to a question by the OP.
George
George
On 21/02/19 1:15 PM, george trojan wrote:
def create_box(x_y):
    return geometry.box(x_y[0] - 1, x_y[1], x_y[0], x_y[1] - 1)
x_range = range(1, 1001)
y_range = range(1, 801)
x_y_range = list(itertools.product(x_range, y_range))
grid = list(map(create_box, x_y_range))
Which creates and populates an 800x1000 “grid” (represented as a fl
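For reference, a sketch of pushing that same map through a pool (shapely assumed, as in the snippet); whether it beats the plain map depends on how the cost of building each box compares with the cost of pickling it back to the parent, so a large chunksize is the main knob:

import itertools
from multiprocessing import Pool
from shapely import geometry          # assumed, as in the original code

def create_box(x_y):
    return geometry.box(x_y[0] - 1, x_y[1], x_y[0], x_y[1] - 1)

if __name__ == '__main__':
    x_y_range = list(itertools.product(range(1, 1001), range(1, 801)))
    with Pool() as pool:
        grid = pool.map(create_box, x_y_range, chunksize=10000)
    print(len(grid))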
I don't have anything to add regarding your experiments with
multiprocessing, but:
Israel Brewster writes:
> Which creates and populates an 800x1000 “grid” (represented as a flat
> list at this point) of “boxes”, where a box is a
> shapely.geometry.box(). This takes about 10 seconds to run.
Thi
On Friday, August 10, 2018 at 3:35:46 PM UTC+2, Niels Kristian Jensen wrote:
> Please refer to:
>
(cut)
It appears that Python is simply not supported on Cygwin (!):
https://bugs.python.org/issue30563
Best regards,
Niels Kristian
On 2018-03-21 09:27:37 -0400, Larry Martell wrote:
> Yeah, I saw that and I wasn't trying to reinvent the wheel. On this
> page https://docs.python.org/2/library/multiprocessing.html it says
> this:
>
> The multiprocessing package offers both local and remote concurrency,
> effectively side-steppi
On Wed, 21 Mar 2018 02:20:16 +0000, Larry Martell wrote:
> Is there a way to use the multiprocessing lib to run a job on a remote
> host?
Don't try to re-invent the wheel. This is a solved problem.
https://stackoverflow.com/questions/1879971/what-is-the-current-choice-for-doing-rpc-in-python
I'
There is a trick that I use when data transfer is the performance killer. Just
save your big array first (for instance in an .hdf5 file) and send the
workers the indices to retrieve the portion of the array you are interested in
instead of the actual subarray.
Anyway there are cases where m
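A sketch of that trick, assuming h5py and a made-up file name; each worker gets only a pair of indices and reads its own slice from disk:

import h5py                            # assumed; any sliceable on-disk format works
import numpy as np
from multiprocessing import Pool

FNAME = 'big_array.h5'                 # hypothetical

def work_on_slice(bounds):
    start, stop = bounds
    with h5py.File(FNAME, 'r') as f:
        chunk = f['data'][start:stop]  # only this portion is read by the worker
    return float(chunk.sum())

if __name__ == '__main__':
    with h5py.File(FNAME, 'w') as f:   # save the big array once, up front
        f.create_dataset('data', data=np.random.rand(1_000_000))
    slices = [(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
    with Pool() as pool:
        print(sum(pool.map(work_on_slice, slices)))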
Yes, it is a simplification and I am using numpy at lower layers. You correctly
observe that it's a simple operation, but it's not a shift; it's actually
multidimensional vector algebra in numpy. So the - is more conceptual and takes
the place of hundreds of subtractions. But the example does dem
Correct me if I'm wrong, but at a high level you appear to basically
just have a mapping of strings to values and you are then shifting all
of those values by a fixed constant (in this case, `z = 5`). Why are you
using a dict at all? It would be better to use something like a numpy
array or a serie
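A sketch of that suggestion with made-up data: keep the keys in a list and the values in a single numpy array, and the shift becomes one vectorized subtraction with no multiprocessing needed at all:

import numpy as np

keys = ['word%d' % i for i in range(16000)]   # hypothetical 16k keys
values = np.random.rand(len(keys))            # one array instead of a dict of scalars
z = 5

shifted = values - z                          # the whole "loop" in one operation
result = dict(zip(keys, shifted))             # rebuild a mapping only if needed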
I refactored the map call to break dict_keys into cpu_count() chunks (so each
f() call gets to run continuously over n/cpu_count() items); virtually the same
results. pool.map is much slower (4x) than regular map, and I don't know why.
On Wed, Oct 18, 2017 at 9:46 AM, Jason wrote:
> #When I change line19 to True to use the multiprocessing stuff it all slows
> down.
>
> from multiprocessing import Process, Manager, Pool, cpu_count
> from timeit import default_timer as timer
>
> def f(a,b):
> return dict_words[a]-b
Since
#When I change line19 to True to use the multiprocessing stuff it all slows
down.
from multiprocessing import Process, Manager, Pool, cpu_count
from timeit import default_timer as timer
def f(a,b):
    return dict_words[a]-b
def f_unpack(args):
    return f(*args)
def init():
I've read the docs several times, but I still have questions.
I've even used multiprocessing before, but not map() from it.
I am not sure if map() will let me use a common object (via a manager) and if
so, how to set that up.
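map() itself only passes items of the iterable, so the common object usually goes in either through the pool initializer or as a Manager proxy bound in with functools.partial. A minimal sketch of the second route:

from functools import partial
from multiprocessing import Manager, Pool

def f(shared, key):
    return shared[key] * 2                 # every worker sees the same managed dict

if __name__ == '__main__':
    manager = Manager()
    shared = manager.dict({'a': 1, 'b': 2, 'c': 3})
    with Pool() as pool:
        print(pool.map(partial(f, shared), sorted(shared.keys())))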
On 10/17/2017 10:52 AM, Jason wrote:
I've got a problem that I thought would scale well across cores.
What OS?
def f(t):
    return t[0] - d[t[1]]
d = {k: np.array(k) for k in entries_16k}
e = np.array()
pool.map(f, [(e, k) for k in d])
*Every* multiprocessing example in the doc intentiona
Could you post a full code snippet? If the lists of 16k numpy arrays are
fixed (say you read them from a file), you could just generate random
values that could be fed into the code as your list would.
It's hard to say how things could be sped up without a bit more specificity.
Cheers,
Thomas
On 3/28/2017 2:51 PM, Frank Miles wrote:
I tried running a bit of example code from the py2.7 docs
(16.6.1.2. Exchanging objects between processes)
only to have it fail. The code is simply:
#
from multiprocessing import Process, Queue
def f(q):
q.put([42, None, 'hello'])
if
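The remainder of that docs example is short; roughly (from the same section of the 2.7 docs):

from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()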
On 2016-12-23 21:56, Charles Hixson wrote:
I was looking to avoid using a UDP connection to transfer messages
between processes, so I thought I'd use multiprocessing (which I expect
would be faster), but...I sure would appreciate an explanation of this
problem.
When I run the code (below) instea
On 2016-05-24 23:17, Noah wrote:
Hi,
I am using this example:
http://spartanideas.msu.edu/2014/06/20/an-introduction-to-parallel-programming-using-pythons-multiprocessing-module/
I am sending and receiving communication from the worker processes.
Two issues. The join is only getting to the pr
On Mon, Mar 21, 2016 at 1:46 PM, Michael Welle wrote:
> Wait on the result means to set a multiprocessing.Event if one of the
> consumers finds the sentinel task and wait for it on the producer? Hmm,
> that might be better than incrementing a counter. But still, it couples
> the consumers and the
On Mon, Mar 21, 2016 at 4:25 AM, Michael Welle wrote:
> Hello,
>
> I use a multiprocessing pool. My producer calls pool.map_async()
> to fill the pool's job queue. It can do that quite fast, while the
> consumer processes need much more time to empty the job queue. Since the
> producer can create
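One way to keep the producer from racing arbitrarily far ahead (a sketch, with a hypothetical consume() task): switch from one big map_async() to apply_async() and gate submissions with a semaphore that the result callback releases:

import time
from multiprocessing import Pool
from threading import BoundedSemaphore

MAX_IN_FLIGHT = 100

def consume(n):                 # hypothetical slow consumer task
    time.sleep(0.01)
    return n

if __name__ == '__main__':
    slots = BoundedSemaphore(MAX_IN_FLIGHT)
    results = []

    def done(result):           # runs in the producer process
        results.append(result)
        slots.release()

    pool = Pool(4)
    for n in range(10000):
        slots.acquire()         # blocks once MAX_IN_FLIGHT tasks are queued
        pool.apply_async(consume, (n,), callback=done)
    pool.close()
    pool.join()
    print(len(results))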
Thanks for the responses.
I will create another thread to supply a more realistic example.
On Tue, Sep 29, 2015 at 10:12 AM, Oscar Benjamin wrote:
> On Tue, 29 Sep 2015 at 02:22 Rita wrote:
>
>> I am using the multiprocessing with apply_async to do some work. Each
>> task takes a few seconds
On Tue, 29 Sep 2015 at 02:22 Rita wrote:
> I am using the multiprocessing with apply_async to do some work. Each task
> takes a few seconds but I have several thousand tasks. I was wondering if
> there is a more efficient method and especially when I plan to operate on
> large memory arrays (n
Rita wrote:
> I am using the multiprocessing with apply_async to do some work. Each task
> takes a few seconds but I have several thousand tasks. I was wondering if
> there is a more efficient method and especially when I plan to operate on
> large memory arrays (numpy)
>
> Here is what I ha
On Sat, May 9, 2015 at 12:31 AM, Michael Welle wrote:
>> As a general rule, queues need to have both ends operating
>> simultaneously, otherwise you're likely to have them blocking. In
>> theory, your code should all work with ridiculously low queue sizes;
>> the only cost will be concurrency (sin
On Fri, May 8, 2015 at 8:08 PM, Michael Welle wrote:
> Hello,
>
> what's wrong with [0]? As num_tasks gets higher proc.join() seems to
> block forever. First I thought the magical frontier is around 32k tasks,
> but then it seemed to work with 40k tasks. Now I'm stuck around 7k
> tasks. I think I
On 04/21/2015 07:54 PM, Dennis Lee Bieber wrote:
On Tue, 21 Apr 2015 18:12:53 +0100, Paulo da Silva
declaimed the following:
Yes. fork will do that. I have just looked at it and it is the same as
unix fork (module os). I am thinking of launching several forks that
will produce .png images an
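A sketch of that idea with matplotlib (assumed), using the non-interactive Agg backend so every worker builds and saves its own figure; the resulting PNGs can be stitched into the PDF afterwards:

import matplotlib
matplotlib.use('Agg')              # no GUI needed inside worker processes
import matplotlib.pyplot as plt
import numpy as np
from multiprocessing import Pool

def render(i):
    fig, ax = plt.subplots()
    ax.plot(np.random.rand(100))   # stand-in for the real, expensive graphic
    fname = 'graph_%03d.png' % i
    fig.savefig(fname)
    plt.close(fig)                 # release the figure's memory in this worker
    return fname

if __name__ == '__main__':
    with Pool() as pool:
        print(pool.map(render, range(100)))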
On Wed, Apr 22, 2015 at 1:53 AM, Paulo da Silva
wrote:
> Yes, I have 8 cores and the graphics' processes calculation are all
> independent. The problem I have is that if there is any way to generate
> independent figures in matplotlib. The logic seems to be build the
> graphic and save it. I was t
On 04/20/2015 10:14 PM, Paulo da Silva wrote:
I have a program that generates about 100 relatively complex graphics and
writes them to a pdf book.
It takes a while!
Is there any possibility of using multiprocessing to build the graphics
and then use several calls to savefig(), i.e. some kind of gra
Sturla Molden :
> Only a handful of POSIX functions are required to be "fork safe", i.e.
> callable on each side of a fork without an exec.
That is a pretty surprising statement. Forking without an exec is a
routine way to do multiprocessing.
I understand there are things to consider, but all sy
Andres Riancho wrote:
> Spawn, and I took that from the multiprocessing 3 documentation, will
> create a new process without using fork().
> This means that no memory
> is shared between the MainProcess and the spawn'ed sub-process created
> by multiprocessing.
If you memory map a segment with
Skip Montanaro wrote:
> Can you explain what you see as the difference between "spawn" and "fork"
> in this context? Are you using Windows perhaps? I don't know anything
> obviously different between the two terms on Unix systems.
spawn is fork + exec.
Only a handful of POSIX functions are requ
On Wed, Jan 28, 2015 at 7:07 AM, Andres Riancho
wrote:
> The feature I'm specially interested in is the ability to spawn
> processes [1] instead of forking, which is not present in the 2.7
> version of the module.
>
Can you explain what you see as the difference between "spawn" and "fork"
in thi
On Wed, Jan 14, 2015 at 2:16 PM, Chris Angelico wrote:
> And then you seek to run multiple workers. If my reading is correct,
> one of them (whichever one happens to get there first) will read the
> STOP marker and finish; the others will all be blocked, waiting for
> more work (which will never
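The usual fix for that is one sentinel per worker, so every consumer eventually reads its own STOP; a minimal sketch:

from multiprocessing import Process, Queue

STOP = 'STOP'

def consumer(q):
    while True:
        item = q.get()
        if item == STOP:
            break
        # ... process item ...

if __name__ == '__main__':
    q = Queue()
    workers = [Process(target=consumer, args=(q,)) for _ in range(4)]
    for w in workers:
        w.start()
    for item in range(100):
        q.put(item)
    for _ in workers:          # one STOP marker for each worker
        q.put(STOP)
    for w in workers:
        w.join()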
On Thu, Jan 15, 2015 at 8:55 AM, wrote:
> I am trying to run a series of scripts on the Amazon cloud, multiprocessing
> on the 32 cores of our AWS instance. The scripts run well, and the queuing
> seems to work BUT, although the processes run to completion, the script below
> that runs the qu
On 2014-12-31 19:33, Charles Hixson wrote:
In order to allow multiple processes to access a database (currently
SQLite) I want to run the process in a separate thread. Because it will
be accessed from multiple processes I intend to use Queues for shifting
messages back and forth. But none of th
Why not use a multiuser database server instead of trying to make one? You
do not have the resources to do a better job on your own. You know where to
find Firebird SQL, MariaDB, MySQL, PostgreSQL, IBM DB2, Oracle, etc.
Personally I prefer Firebird because, like SQLite, the database is stored in
a fi
On Wed, Jul 16, 2014 at 6:32 AM, Charles Hixson
wrote:
> from queue import Empty, Full
Not sure what this is for, you never use those names (and I don't have
a 'queue' module to import from). Dropped that line. In any case, I
don't think it's your problem...
> if __name__ == "__main__":
> db
Gary Herron :
> On 07/13/2014 04:53 PM, Paul LaFollette wrote:
>> I have thrown together a little C/UNIX program that forks a child
>> process, then proceeds to let the child and parent alternate. Either
>> can run until it pauses itself and wakes the other.
>>
>> [...]
>
> What do you gain from u
On Sun, 13 Jul 2014 19:53:09 -0400, Paul LaFollette wrote:
> I have thrown together a little C/UNIX program that forks a child
> process, then proceeds to let the child and parent alternate. Either
> can run until it pauses itself and wakes the other.
>
> I would like to know if there be a way t
On 07/13/2014 04:53 PM, Paul LaFollette wrote:
Kind people,
I have thrown together a little C/UNIX program that forks a child
process, then proceeds to let the child and parent alternate. Either
can run until it pauses itself and wakes the other.
I would like to know if there be a way to cre
Could you post
a) what the output looks like now (sans the logging part)
b) what output do you expect
In any event, this routine does not look right to me:
def consume_queue(queue_name):
    conn = boto.connect_sqs()
    q = conn.get_queue(queue_name)
    m = q.read()
    while m is not None:
        yiel
I received a suggestion off list that my results indicated that the jobs were
not being distributed across the pool workers. I used
mp.current_process().name to confirm this suggestion.
Alan Isaac
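That check is a one-liner inside the worker function; a minimal sketch:

import multiprocessing as mp

def f(x):
    print(mp.current_process().name, 'handled', x)   # which worker got which job
    return x * x

if __name__ == '__main__':
    with mp.Pool(4) as pool:
        pool.map(f, range(8), chunksize=1)           # chunksize=1 spreads jobs around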
> #--- temp.py -
> #run at Python 2.7 command prompt
> import time
> import multiprocessing as mp
> lst = []
> lstlst = []
>
> def alist(x):
>     lst.append(x)
>     lstlst.append(lst)
>     print "a"
>     return lst
>
> if __name__=='__main__':
>     pool = mp
Thank you for the response. Processing time is very important, so I suspect
having to write to disk will take more time than letting the other
processes complete without finding the answer. So I did some profiling; one
process finds the answer in about 250ms, but since I can't stop the other
processe
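One way to stop paying for the losers once the first answer arrives (a sketch with a hypothetical search() worker): iterate results as they complete and let leaving the with-block terminate the rest:

from multiprocessing import Pool

def search(seed):                  # hypothetical ~250 ms worker; None means "no answer"
    return seed if seed == 3 else None

if __name__ == '__main__':
    answer = None
    with Pool(4) as pool:
        for result in pool.imap_unordered(search, range(16)):
            if result is not None:
                answer = result
                break              # exiting the with-block terminates remaining workers
    print('answer:', answer)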
On 02/11/2013 02:35, smhall05 wrote:
I am using a basic multiprocessing snippet I found:
#-
from multiprocessing import Pool
def f(x):
    return x*x
if __name__ == '__main__':
    pool = Pool(processes=4) # start 4 worker pro
On 10/18/2013 8:52 AM, Марк Коренберг wrote:
import prctl
This is not a stdlib module.
prctl.set_pdeathsig(...)
if os.getppid() == 1:
raise AlreadyDead()
What is your point?
Your signature said
>Segmentation fault
If you meant that the above code segfaults, then there is a bug in
prct
On 18/10/13 13:18, John Ladasky wrote:
What a lovely thread title! And just in time for Halloween! :^)
LOL
Couldn't that be construed as "sexism"?
Next we'll have a new long moronic thread about sexism and
discrimination in mail subjects. Which will, as usual, leave a lot of
satisfied eg
What a lovely thread title! And just in time for Halloween! :^)
On 08/10/2013 06:34, Chandru Rajendran wrote:
Hi all,
Please give me an idea about Multiprocessing and Multithreading.
Thanks & Regards,
Chandru
I'll assume that you're a newbie so I'll keep it simple.
Multiprocessing is about more than one process and multithreading is
about more than on
On 10/8/2013 1:34 AM, Chandru Rajendran wrote:
Please give me an idea about Multiprocessing and Multithreading.
Please give us some idea of what you know and what you actually want to
know.
Paul Pittlerson writes:
[...]
>     def run(self):
>         while True:
>             sleep(0.1)
>             if not self.q.empty():
>                 print self.q.get()
>             else:
>                 break
[...]
> This works great on lin
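The empty()/break pattern is racy: an empty queue only means nothing has arrived yet, which is one reason it can behave differently on Linux and Windows. A more robust sketch blocks on get() and stops on a sentinel:

from multiprocessing import Process, Queue

DONE = 'DONE'

def worker(q):
    for n in range(5):
        q.put('tick %d' % n)
    q.put(DONE)                    # tell the reader we are finished

def reader(q):
    while True:
        msg = q.get()              # blocks instead of polling empty()
        if msg == DONE:
            break
        print(msg)

if __name__ == '__main__':
    q = Queue()
    w = Process(target=worker, args=(q,))
    w.start()
    reader(q)
    w.join()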
On Fri, Sep 6, 2013 at 1:27 PM, Paul Pittlerson wrote:
> Ok here is the fixed and shortened version of my script:
Before going any further, I think you need to return to marduk's
response and consider if you really and truly need both threads and
fork (via multiprocessing).
http://www.linuxprogr
Ok here is the fixed and shortened version of my script:
#!/usr/bin/python
from multiprocessing import Process, Queue, current_process
from threading import Thread
from time import sleep
class Worker():
    def __init__(self, Que):
        self._pid = current_process().pid
        self.q
Piet van Oostrum writes:
>     def run(self):
>         for n in range(5):
>             self.que.put('%s tick %d' % (self._pid, n))
>             # do some work
>             time.sleep(1)
>         self.que.put('%s has exited' % self._pid)
To prevent the 'exited' message from disappearing if there
Paul Pittlerson writes:
> On Friday, September 6, 2013 1:46:40 AM UTC+3, Chris Angelico wrote:
>
>> The first thing I notice is that your Debugger will quit as soon as
>> its one-secondly poll results in no data. This may or may not be a
>> problem for your code, but I'd classify it as code smell