Maybe I'll just use my SimPy models for now and wait for clj-sim ;)
Any chance of sharing?
Cheers
Andreas

On Saturday, 30 November 2013 15:40:10 UTC+10:30, Ben Mabey wrote:
>
>  On 11/29/13, 9:16 PM, Cedric Greevey wrote:
>  
>  On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey <b...@benmabey.com> wrote:
>
>
>>  On 11/29/13, 8:33 PM, Cedric Greevey wrote:
>>  
>>  Have you checked for other sources of performance hits? Boxing, var 
>> lookups, and especially reflection.
>>   
>>  As I said, I haven't done any optimization yet. :)  I did check for 
>> reflection though and didn't see any. 
>>
>>   
>>  I'd expect a reasonably optimized Clojure version to outperform a Python 
>> version by a very large factor -- 10x just for being JITted JVM bytecode 
>> instead of interpreted Python, times another however-many-cores-you-have 
>> for core.async keeping all your processors warm vs. Python and its GIL 
>> limiting the Python version to single-threaded performance.
>>  
>>  This task does not benefit from the multiplexing that core.async 
>> provides, at least not for a single simulation, which has no clear 
>> logical partition that can be run in parallel.  The primary benefit 
>> core.async provides in this case is escaping callback hell.
>>  
>  
>  Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's 
> hard to get Clojure to run that slowly *without* reflection, unless you're 
> hitting one of those cases where parallelizing actually makes things worse. 
> core.async will be trying to multithread your code even though the nature 
> of the task, with all its blocking, limits it to effectively serial 
> execution, so perhaps some of the slowdown comes from context switches that 
> aren't buying you anything for what they cost? The GIL-afflicted Python 
> code, by contrast, wouldn't pay for those context switches at all.
>  
>  
> I had expected the context switching to cause a performance hit, but I had 
> never measured how big it was. I just did, and I got a 1.62x speed 
> improvement[1], which means the Clojure version is now only 1.2x slower 
> than the SimPy version. :)
>
> Right now the thread pool in core.async is hardcoded. So for this 
> experiment I hacked in a fixed thread pool of size one. I asked about 
> making core.async's thread pool swappable/parameterized during the 
> unsession at the Conj, and the idea was not well received. For most use 
> cases I think the current thread pool is fine, but for this particular one 
> it appears it is not...
>
> -Ben
>
> 1. Full benchmark... compare to times here: 
> https://gist.github.com/bmabey/7714431
> WARNING: Final GC required 5.486725933787122 % of runtime
> WARNING: Final GC required 12.905903007134539 % of runtime
> Evaluation count : 6 in 6 samples of 1 calls.
>              Execution time mean : 392.457499 ms
>     Execution time std-deviation : 8.225849 ms
>    Execution time lower quantile : 384.192999 ms ( 2.5%)
>    Execution time upper quantile : 401.027249 ms (97.5%)
>                    Overhead used : 1.847987 ns
>
>  
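
To make the callback-hell point above concrete, here is a minimal sketch of a
simulation-style process written against core.async's public API. The
customer/run-simulation names are made up for illustration (this is not Ben's
clj-sim code), and it uses wall-clock timeouts where a real discrete-event
simulator like SimPy would advance a virtual clock; the point is only that
each process reads as straight-line code in a go block instead of nested
callbacks.

(require '[clojure.core.async :refer [go chan <! >! <!! timeout]])

;; One simulated "process": arrive, travel to the queue, get served, report.
(defn customer [id results]
  (go
    (let [arrival (System/currentTimeMillis)]
      (<! (timeout (rand-int 50)))           ; travel time
      (<! (timeout (+ 20 (rand-int 30))))    ; service time
      (>! results {:id id
                   :elapsed-ms (- (System/currentTimeMillis) arrival)}))))

;; Launch n customer processes and block until each one has reported.
(defn run-simulation [n]
  (let [results (chan)]
    (dotimes [i n] (customer i results))
    (vec (repeatedly n #(<!! results)))))

(comment
  (run-simulation 5))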

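And for a rough sense of what the size-one pool hack amounts to: this is not
core.async's actual dispatcher (which Ben had to patch by hand), and
single-thread-pool / dispatch! are illustrative names only, but the
substituted executor is essentially just a java.util.concurrent pool with one
thread, so parked go blocks always resume on the same thread and there are no
cross-thread context switches to pay for.

(import '[java.util.concurrent Executors ExecutorService TimeUnit])

;; A single worker thread: everything submitted here runs serially.
(def ^ExecutorService single-thread-pool
  (Executors/newFixedThreadPool 1))

;; Hand work to the size-one pool.
(defn dispatch! [f]
  (.execute single-thread-pool ^Runnable f))

(comment
  (dispatch! #(println "resumed on" (.getName (Thread/currentThread))))
  (.shutdown single-thread-pool)
  (.awaitTermination single-thread-pool 1 TimeUnit/SECONDS))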