Thank you very much for your replies. I will definitely take a look at
core.matrix. I really hate the fact that I had to use Java arrays to make
it fast. I'll take a look at transducers as well.
Kind regards,
Jose.
On Monday, December 22, 2014 7:09:27 PM UTC-5, Christopher Small wrote:
I'll second the use of core.matrix. It's a wonderful, idiomatic, fast
library, and I hope to see folks continue to rally around it.
On Monday, December 22, 2014 3:47:59 AM UTC-7, Mikera wrote:
For most array operations (e.g. dot products on vectors), I strongly
recommend trying out the recent core.matrix implementations. We've put a
lot of effort into fast implementations and a nice clean Clojure API, so I'd
love to see them used where it makes sense!
For example, vectorz-clj can be […]
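As a rough illustration of the kind of usage being recommended (a sketch, not code from the post; it assumes net.mikera/core.matrix and net.mikera/vectorz-clj are on the classpath), a dot product through core.matrix with the vectorz backend could look like:

;; Hypothetical sketch: core.matrix dot product on the vectorz-clj backend.
(require '[clojure.core.matrix :as m])

;; Select the pure-JVM Vectorz implementation (needs vectorz-clj on the classpath).
(m/set-current-implementation :vectorz)

(def v1 (m/array [1.0 2.0 3.0]))
(def v2 (m/array [4.0 5.0 6.0]))

(m/dot v1 v2)   ;; => 32.0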
Interesting read, Jose, thanks!
It might be interesting to try a transducer on
(defn dot-prod
"Returns the dot product of two vectors"
[v1 v2]
(reduce + (map * v1 v2)))
if you can get your hands on the 1.7 alpha and have the time and inclination to
do it. Transducers have been shown to be faster […]
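One possible shape for that experiment, sketched here as a guess rather than taken from the thread (whether it actually beats the reduce/map version would need benchmarking, e.g. with Criterium):

;; Hedged sketch: transducer-based dot product, assuming Clojure 1.7+.
;; The pairing step (map vector v1 v2) is still a lazy zip, so any speedup
;; is not guaranteed; measure before drawing conclusions.
(defn dot-prod-xf
  "Returns the dot product of two vectors using transduce."
  [v1 v2]
  (transduce (map (fn [[a b]] (* a b)))
             +
             (map vector v1 v2)))

(dot-prod-xf [1 2 3] [4 5 6])   ;; => 32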
Regarding the speed optimizations: execution time for a given model was
reduced from 2735 seconds to 70 seconds over several versions, through a
number of different optimizations.
The same calculation implemented in C# takes 12 seconds using the same
computer and OS. Maybe the Clojure code can still be improved […]
Hi everyone:
Sorry that it has taken so long. I've just released the software on GitHub
under the EPL. It can be found at:
https://github.com/iosephus/gema
Kind regards,
Jose.
On Tuesday, June 3, 2014 12:46:55 PM UTC-5, Mars0i wrote:
>
> (def ones (doall (repeat 1000 1)))
> (bench (def _ (doall (map rand ones))))  ; 189 microseconds average time
> (bench (def _ (doall (pmap rand ones)))) ; 948 microseconds average time
>
For the record, I worried later that rand was […]
Will take a look at the bigml/sampling library...
On Tuesday, June 3, 2014 7:52:06 PM UTC-4, Jose M. Perez Sanchez wrote:
Thank you very much. I'm using the Colt random number generator directly.
I've managed to reduce computing time by orders of magnitude using type
hints and Java arrays in some critical parts. I haven't had the time to
write a report on this for the list, since I have been busy with other
projects […]
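For readers curious what "type hints and Java arrays" can look like in practice, here is a generic, illustrative sketch (not the simulator's actual code) of a dot product over primitive double arrays:

;; Illustrative only: ^doubles hints plus areduce keep the arithmetic on
;; primitive doubles and avoid reflection and boxing.
(defn dot-prod-arrays
  "Dot product of two double arrays."
  ^double [^doubles xs ^doubles ys]
  (areduce xs i acc 0.0
           (+ acc (* (aget xs i) (aget ys i)))))

(dot-prod-arrays (double-array [1.0 2.0 3.0])
                 (double-array [4.0 5.0 6.0]))   ;; => 32.0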
Jose,
This is an old thread, and whatever problems you might be dealing with now,
they're probably not the same ones as when the thread was active. However,
I think that if parallel code uses the built-in Clojure random number
functions, there is probably a bottleneck in access to the RNG. […]
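A common workaround for that kind of contention (a sketch, not something spelled out in the message) is to give each thread its own generator, for example via java.util.concurrent.ThreadLocalRandom:

;; Hypothetical sketch: per-thread random numbers instead of one shared RNG.
;; ThreadLocalRandom (JDK 7+) keeps separate generator state per thread.
(import 'java.util.concurrent.ThreadLocalRandom)

(defn thread-local-rand
  "Uniform double in [0, 1) from the current thread's own generator."
  ^double []
  (.nextDouble (ThreadLocalRandom/current)))

;; Usage inside parallel work, e.g.:
;; (pmap (fn [_] (thread-local-rand)) (range 8))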
Yes, the step extract function encodes the total number of steps and any
intermediate steps whose values are to be saved.
I made the following changes to the code:
1 - Store results locally in the threads and return them when the thread
function exits, instead of using a global vector. This does not […]
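A generic illustration of that first change (simulate-particle and run-all are placeholder names, not the project's code): each worker builds its own result vector and everything is merged once at the end, instead of conj-ing into a shared structure from every thread:

;; Sketch of the "accumulate locally, merge at the end" pattern.
(defn simulate-particle
  "Placeholder for the real per-particle work; returns that particle's results."
  [seed]
  (vec (range seed (+ seed 3))))

(defn run-all
  "Runs the particles in parallel; each task returns its own vector,
  and the merge happens once on the calling thread."
  [seeds]
  (->> seeds
       (pmap simulate-particle)
       (reduce into [])))

;; (run-all (range 4))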
Hi Jose,
I think you should try making the core iteration purely functional,
meaning no agents, atoms, or side-effecting functions like the random
number generator.
I assume the number of steps you evolve the particle for is encoded in
step-extract-fn?
What you probably want is something like
(loop [i 0
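The quoted example is cut off in the archive snippet; one plausible, purely illustrative shape for such a functional stepping loop (run-particle, move* and n-steps are made-up names, with move* standing in for a pure version of move) would be:

;; Hedged reconstruction of the kind of loop that was probably meant:
;; pure loop/recur over the step index, threading the particle state and
;; accumulating the trajectory, with no atoms or agents involved.
(defn run-particle [particle move* n-steps]
  (loop [i    0
         p    particle
         traj [particle]]
    (if (< i n-steps)
      (let [p' (move* p)]
        (recur (inc i) p' (conj traj p')))
      traj)))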
Hi Andy, cej38, kovas:
Thanks for the replies. I plan to release the whole code soon (waiting for
institutional authorization).
I do use laziness both within the move function to select the allowed
random displacements and when iterating the move function to generate the
trajectory. Lazy structures […]
Sounds like some form of overhead is dominating the computation. How
are the infinite sequences being consumed? Is it one thread per
sequence? How compute-intensive is (move particle)? What kind of
numbers are we talking about in terms of steps and particles?
If move is fast, you probably need to batch […]
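A minimal sketch of what batching could look like here (assumed, not taken from the message; run-batched and step-fn are illustrative names): give each parallel task a chunk of particles rather than one cheap step each:

;; Illustrative batching sketch: chunk the work so each pmap task does
;; enough computation to amortize the coordination overhead.
(defn run-batched [particles step-fn batch-size]
  (->> (partition-all batch-size particles)
       (pmap (fn [batch] (mapv step-fn batch)))   ;; one task per batch
       (apply concat)
       vec))

;; (run-batched (range 10000) inc 512)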
It is hard to say where the root of your problem lies without a closer look
at the code. I would look closely at laziness. I find that lazy
evaluation really kills parallelization.
On Friday, November 8, 2013 4:42:11 PM UTC-5, Jose M. Perez Sanchez wrote:
Hi Andy:
Yes, this breaks the embarrassing parallelism indeed. When the calculations are
done for real this isn't a problem though, because these conj operations to
the global list would happen sporadically (on average once every couple of
seconds or so), so the probability of a thread waiting for […]
Jose:
On re-reading your original post, I noticed one statement you made that may
be of interest: "The resulting vector for each particle is then added
(conj) to a global vector for latter storage."
Do you mean that there is a single global vector that is conj'd onto by all
N threads? Is this vector […]
Hi Andy:
Thanks a lot for your reply. I'll do more careful testing in the very near
future, and there is surely a lot to optimize in my code. I must say I did
expect some reduction in computing speed coming from an already optimized
codebase with the performance-critical parts written in C, and there is […]
Jose:
I am not aware of any conclusive explanation for the issue, and would love
to know one if anyone finds out.
At least in the case of the program mentioned in the other discussion
thread, much better speedup was achieved by running N different JVM
processes, each single-threaded, on a machine […]
Hello everyone:
This is my first post here. I'm a researcher writing numerical simulation
software in Clojure. Actually, I'm porting an app a coworker and I wrote in
C/Python (called GEMA) to Clojure. The app has been in use for a while in
our group, but became very difficult to maintain due […]