Dredging this back up: you've read the scenario properly. Timers are 
coalesced to a particular resolution.
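
For context, the coalescing is visible directly: two timeout calls that land 
in the same resolution window can hand back the identical channel. A tiny 
illustrative check (not from the thread; exact behavior depends on your 
core.async version and on TIMEOUT_RESOLUTION_MS):

(require '[clojure.core.async :refer [timeout]])

;; Requested back to back, these usually fall into the same coalescing
;; bucket and so may be the very same channel object.
(let [a (timeout 1000)
      b (timeout 1000)]
  (identical? a b))   ;; often true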

Use cases needing more than 1024 active handles on a channel can use a 
mult.  For example, if you had to time out every request at the same time, 
exactly 5 minutes in the future, you could make (mult (timeout 300000)) and 
then give every request a fresh tap of that.
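
A rough sketch of that pattern (untested; `await-with-timeout` and 
`request-ch` are illustrative names of my own, not from anyone's actual code):

(require '[clojure.core.async :refer [chan mult tap untap timeout go alt!]])

;; One shared 5-minute timer, wrapped in a mult so every request taps its
;; own channel instead of piling takes onto a single timeout channel.
(def shared-timeout (mult (timeout 300000)))

(defn await-with-timeout [request-ch]
  (let [t (tap shared-timeout (chan))]
    (go
      (let [result (alt!
                     request-ch ([v] v)            ;; response arrived in time
                     t          ([_] :timed-out))] ;; source closed after 5 min
        (untap shared-timeout t)
        result))))

Each tap is its own channel, so no single channel accumulates anywhere near 
1024 pending takes.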

In the fixed 100 case I think you're getting lucky, whereas with the random 
window you're getting unfavorable coalescing, which seems counterintuitive.
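
To make the mechanism concrete, something in this spirit (my reconstruction, 
not the original repro) is the failure mode: lots of go blocks parking on 
timeouts whose deadlines coalesce into the same resolution bucket, so one 
cached channel collects more than 1024 pending takes and trips core.async's 
assertion:

(require '[clojure.core.async :refer [go <! timeout]])

;; Thousands of go blocks, each parking on a timeout whose deadline lands
;; within a few ms of the others; many of these coalesce onto the same
;; cached channel, which can push it past the 1024-pending-takes limit.
(dotimes [_ 5000]
  (go (<! (timeout (+ 100 (rand-int 5))))))

Whether the fixed (timeout 100) variant or the random-window variant trips 
the limit then comes down to how those takes happen to spread across 
resolution windows, per the above.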


On Friday, March 28, 2014 9:52:47 AM UTC-4, Peter Taoussanis wrote:
>
> One thing I'm not clear on: if I've understood your explanation correctly, 
> I would expect the 100ms timeout to produce this error _more_ (not less) 
> often.
>
> So can I just confirm some things here?
>
> 1. `async/timeout` calls can (always?) get "cached" to the nearest 
> TIMEOUT_RESOLUTION_MS.
> 2. In this tight loop example, that means that `<!` is sometimes getting 
> called against the same (cached) timeout channel.
> 3. It's happening sufficiently often (due to the high loop count + speed) to 
> overflow the [unbuffered] timeout channel's implicit take buffer.
>
> Is that all right?
>
> If so, why isn't the fixed `(async/timeout 100)` channel producing the 
> same (or worse) behaviour? Is something preventing it from being cached in 
> the same way?
>
