> After the change, it runs in 5 seconds on four cores as opposed to
> 15 seconds on a single
> core. Thank you for taking the time to help me with this. It's been a
> learning experience.
Great news! Happy to help. This stuff is pretty new to me, too :)
Never mind, kindly ignore my last post ;-) You called it. The map
inside my decode function returns a lazy seq, and it was being
realized on demand by the doseq towards the end of the program. To
make matters worse, I was consuming the agents in a serial fashion,
completely eliminating any parallelism.
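The fix described above can be sketched roughly like this. The poster's actual `decode` isn't shown in the thread, so `expensive-lookup` and the shape of `decode` below are illustrative stand-ins; the point is only that wrapping `map` in `doall` realizes the whole seq immediately instead of leaving the work to whoever walks it later:

```clojure
;; Stand-in for the real per-sum work; not from the original post.
(defn expensive-lookup [md5-sum]
  (count md5-sum))

;; Without doall, map returns a lazy seq that would only be realized
;; when the doseq at the end of the program walks it.  doall forces
;; every element here, so the work happens up front.
(defn decode [md5-sums]
  (doall (map expensive-lookup md5-sums)))

(decode ["88148433eeb5d372c0e352e38ac39aca"])  ; => (32)
```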
Dropping work-buckets and spawn-agents entirely, I've replaced them
with the following, but the CPU still sits at 100% usage, and the run
time is still ~15 seconds.
(def work-units (doall (for [x (range 15)]
                         "88148433eeb5d372c0e352e38ac39aca")))

(def agents [(agent work-units)
             (agent work-units)])
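For comparison, here is one way (not the thread's actual code) the agents could actually be put to work in parallel. Each agent holds its own bucket instead of every agent holding all the work; `send` dispatches actions onto a fixed-size thread pool (appropriate for CPU-bound work like this), and `await` blocks until every dispatched action has finished. `process-bucket` is a stand-in for the real brute-force work:

```clojure
(def work-units (doall (for [x (range 16)]
                         "88148433eeb5d372c0e352e38ac39aca")))

;; one agent per bucket of four units
(def agents (mapv agent (partition 4 work-units)))

;; stand-in for the real per-unit work
(defn process-bucket [bucket]
  (mapv count bucket))

;; fan the work out, then wait for all agents to finish
(doseq [a agents] (send a process-bucket))
(apply await agents)
(println (map deref agents))
```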
Try doall:
http://clojure.org/api#toc216
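To see the difference concretely, a small sketch (not from the thread): `for` defers its body until the seq is realized, while wrapping it in `doall` forces everything up front:

```clojure
;; `for` is lazy: the body does not run when this def is evaluated,
;; so nothing prints yet.
(def lazy-units (for [x (range 3)]
                  (do (println "realizing" x) x)))

;; `doall` walks the whole seq immediately, so all three
;; "realizing" lines print right here.
(def eager-units (doall (for [x (range 3)]
                          (do (println "realizing" x) x))))
```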
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated.
I've tried this in lieu of the way I was generating the work units.
(def work-units (for [x (range 100)]
                  "88148433eeb5d372c0e352e38ac39aca"))
I know that for is still lazy, so I did this after the binding of
work-buckets:

(println work-buckets) ; yielded expected result
((88148433eeb5d372c0e35
> I'm testing on a quad-core box, and I'm seeing virtually identical
> performance numbers switching between one agent and four agents. I've
> also tried larger numbers like 12 agents, but the results are the
> same. The load average also stays the same, hovering around 1.0, and
> the execution time doesn't change.
Having learned about agents recently, I've created a pretty contrived
program, which I believed would easily lend itself to parallel
processing. The program takes a list of MD5 sums and then does a
brute-force search to find the corresponding four-character strings
they were generated from.
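The brute force described above can be sketched along these lines. This is not the poster's program; `md5-hex` and `crack` are illustrative names, and the search is restricted to lowercase a–z candidates as an assumption:

```clojure
(import 'java.security.MessageDigest)

;; MD5 hex digest of a string, via java.security.MessageDigest
(defn md5-hex [^String s]
  (let [digest (.digest (MessageDigest/getInstance "MD5")
                        (.getBytes s "UTF-8"))]
    (apply str (map #(format "%02x" %) digest))))

(def letters (map char (range (int \a) (inc (int \z)))))

;; exhaustively try four-character lowercase strings until one
;; hashes to the target sum
(defn crack [target]
  (first (for [a letters b letters c letters d letters
               :let [candidate (str a b c d)]
               :when (= target (md5-hex candidate))]
           candidate)))

(crack (md5-hex "test"))  ; => "test"
```

Searching all 26^4 = 456,976 candidates per sum is exactly the kind of embarrassingly parallel, CPU-bound work that should scale across cores once the laziness and serial-consumption issues above are fixed.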