Hi All,

Thank you so much for your replies!

For my particular use case ("tail -f"-ing multiple files and writing the 
entries into a db), I'm using pmap to process each file on its own thread, 
and within each file I'm using doseq to write the entries to the db. It 
seems to be working well (though I still need to benchmark it).
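
In case it's useful to anyone reading the archives, the shape of it is 
roughly this (just a sketch: tail-lines, parse-entry, and write-entry! are 
hypothetical stand-ins for my actual tailing, parsing, and db code):

  (defn process-file!
    "Follow one file and write each appended entry to the db."
    [f]
    ;; tail-lines would return a lazy, blocking seq of appended lines
    (doseq [line (tail-lines f)]
      (write-entry! (parse-entry line))))  ; side effect per entry

  (defn process-files!
    "Process each file on its own thread via pmap."
    [files]
    ;; pmap is lazy, so dorun forces it for its side effects
    (dorun (pmap process-file! files)))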

Thanks to your help, I have a better understanding of how doseq, dorun, et 
al. work.

On Friday, October 18, 2013 12:05:50 AM UTC-7, Stefan Kamphausen wrote:
>
> Hi,
>
> On Friday, October 18, 2013 12:12:31 AM UTC+2, Brian Craft wrote:
>>
>> I briefly tried working with the reducers library, which generally made 
>> things 2-3 times slower, presumably because I'm using it incorrectly. I 
>> would really like to see more reducers examples, e.g. for this case: 
>> reading a seq larger than memory, doing transforms on the data, and then 
>> executing side effects.
>>
>
> I used reducers for processing lots of XML files.  Probably the most 
> common pitfall is that fold only does parallel computation when working 
> on a vector.  While all the XML data would not have fit into memory, the 
> vector of filenames to read from certainly did, and that made a big 
> difference.  Plus, I reduced the chunk size from the default of 512 to 1.
>
>
> Cheers,
> Stefan
>
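
To make Stefan's suggestion above concrete, here is roughly what I 
understand that setup to look like (a sketch only; process-xml-file is a 
hypothetical placeholder, and the reduce/combine functions are just 
illustrative):

  (require '[clojure.core.reducers :as r])

  (defn process-all
    "fold only runs in parallel over a vector (or map), so fold over the
    vector of filenames rather than the file contents themselves."
    [filenames]
    (r/fold 1                                ; chunk size 1: one file per task
            (fn ([] []) ([a b] (into a b)))  ; combine partial results
            (fn [acc f] (conj acc (process-xml-file f)))
            (vec filenames)))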
