If you want to avoid overwhelming the cluster, there's an easy trick you can
try.
In the RDBMS world, we frequently batch requests and issue 5,000 or 10,000
updates at once. We then put a delay of a second or so in the code after the
commit so that the transaction log has time to catch up.
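The same batch-and-pause pattern applies here. A minimal sketch (in Python rather than Erlang, and using a hypothetical process_item callback instead of real Riak client calls) might look like:

```python
import time

def reindex_in_batches(keys, process_item, batch_size=5000, delay_s=1.0):
    """Process keys in fixed-size batches, sleeping between batches
    so the cluster has time to absorb each burst of read/writes."""
    batch = []
    for key in keys:
        batch.append(key)
        if len(batch) >= batch_size:
            for k in batch:
                process_item(k)  # read/modify/write one item
            batch.clear()
            time.sleep(delay_s)  # let the cluster catch up
    for k in batch:  # flush any remainder smaller than a full batch
        process_item(k)
```

Tuning batch_size and delay_s lets you trade total reindex time against the load the cluster sees at any moment.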
Hi,
I need to reindex a bucket with ~4 million items. If I do a
streaming list-keys using the Erlang client and then read/write the
items as the keys come in, it puts too much load on the cluster, and
other mapred queries that get run time out. I already have a date-based
index on the items and wa