On Tue, Apr 6, 2010 at 2:12 PM, Steve <sjh_cassan...@shic.co.uk> wrote:
...
> Should I assume that it isn't common practice to write updates
> atomically in-real time, and batch process them 'off-line' to increase
> the atomic granularity?  It seems an obvious strategy... possibly one
> for which an implementation might use "MapReduce" or something similar?
> I don't want to re-invent the wheel, of course.

I think this is indeed a common and common-sense approach.

Traditionally this has actually been done by even simpler systems:
writing log files on hosts, shipping them to a central location, and
computing aggregates from them there. Key/value stores can be used to
reduce the latency that is the problematic part of simple log-based
aggregation.
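To illustrate the batching idea, here is a minimal sketch (the class name,
`flush_interval` parameter, and dict-backed `store` are all hypothetical,
not from any particular system): fine-grained events accumulate in memory
and are flushed to the backing key/value store as coarse aggregates, trading
a bounded delay for far fewer writes.

```python
import time
from collections import Counter

class BatchAggregator:
    """Hypothetical sketch: buffer counts in memory, flush them to a
    key/value store in batches to increase atomic granularity."""

    def __init__(self, store, flush_interval=5.0):
        self.store = store                  # any dict-like key/value client
        self.flush_interval = flush_interval
        self.pending = Counter()            # in-memory, not yet persisted
        self.last_flush = time.monotonic()

    def record(self, key, count=1):
        self.pending[key] += count
        # Flush only when the interval has elapsed, not on every event.
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        for key, delta in self.pending.items():
            # Read-modify-write shown for simplicity; a real store would
            # use its atomic increment operation instead.
            self.store[key] = self.store.get(key, 0) + delta
        self.pending.clear()
        self.last_flush = time.monotonic()

store = {}
agg = BatchAggregator(store, flush_interval=0.0)  # flush eagerly for the demo
agg.record("page:/home")
agg.record("page:/home", 2)
agg.flush()
# store now holds the aggregated count for "page:/home"
```

The trade-off is exactly the one discussed above: the aggregates lag reality
by at most one flush interval, but each key receives one write per batch
instead of one write per event.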

-+ Tatu +-
