I've tried this in lieu of the way I was generating the work units.

(def work-units (for [x (range 100)]
                  "88148433eeb5d372c0e352e38ac39aca"))

I know that for is still lazy, so I did this after the binding of work-
buckets:

(println work-buckets) ; yielded expected result
((88148433eeb5d372c0e352e38ac39aca
88148433eeb5d372c0e352e38ac39aca....

I'm assuming that if I can see work-buckets from the print statement,
it's fully materialized. This part all happened in less than a second.
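
Incidentally, to take laziness out of the picture entirely, I think I
can force realization at definition time with vec or doall rather than
relying on the println. A rough sketch; the split into four buckets is
just my guess at how work-buckets gets built:

;; Force full realization up front so no agent waits on lazy generation.
(def work-units
  (vec (for [x (range 100)]
         "88148433eeb5d372c0e352e38ac39aca")))

;; Guessed bucket split: four chunks of 25, one per agent.
(def work-buckets (partition 25 work-units))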
From here, all that's left to do is spawn four agents (assigning each
a work bucket), send each a decode function, and then loop over the
result set. With that in place, I'm still seeing the same results as before.
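
For reference, the agent part I'm describing looks roughly like this.
decode here is only a placeholder for the real decode function, and
work-buckets is the guessed split from above:

;; Placeholder for the real decode function (assumption).
(defn decode [md5-string]
  md5-string)

;; Decode every unit in a bucket; doall forces the work inside the agent.
(defn decode-bucket [bucket]
  (doall (map decode bucket)))

;; One agent per bucket, each sent the decode work.
(def workers (map agent work-buckets))

(doseq [w workers]
  (send w decode-bucket))

(apply await workers)   ; block until every agent has finished

;; Loop over the result set.
(doseq [result (mapcat deref workers)]
  (println result))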

On Jun 15, 6:36 pm, Richard Newman <holyg...@gmail.com> wrote:
> > I'm testing on a quad-core box, and I'm seeing virtually identical
> > performance numbers switching between one agent and four agents. I've
> > also tried larger numbers like 12 agents, but the results are the
> > same. The load average also stays the same hovering around 1.0, and
> > the execution time is static at approximately 15 seconds.
>
> It looks to me like your work unit generation is lazy. I also surmise  
> that it's taking 15 seconds to do 50,000 MD5 computations == 0.3ms per  
> computation, and that might include the rest of your program, too.
>
> Is it possible that the lazy computation of the work unit -- which is  
> not parallelized -- is only running fast enough to supply one agent  
> with input?
>
> Have you tried completely materializing your work units before timing  
> the agent part?
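
To be concrete about the timing, this is roughly how I'd split the
measurement, using the names from the sketches above:

;; Phase 1: fully realize the buckets on their own and see what that costs.
(time (doall (map doall work-buckets)))

;; Phase 2: time only the agent dispatch and the wait for results.
(time
 (let [workers (map agent work-buckets)]
   (doseq [w workers] (send w decode-bucket))
   (apply await workers)))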