On Dec 28, 2012, at 11:57 AM, Brian Roach <ro...@basho.com> wrote:
> On Fri, Dec 28, 2012 at 11:37 AM, Dietrich Featherston
> <dietrich.feathers...@gmail.com> wrote:
>>
>> All socket operations. It looks as though those that open a new
>> socket are especially impacted. We are running 1.2.1 with the
>> leveldb backend. Same 9-node SSD cluster I have posted details
>> about to the list before, but I don't have access to all of the
>> specifics at the moment.
>
> Sorry, I meant what type of Riak operations? Store, fetch,
> MapReduce, etc.? What is actually timing out?

Primarily stores, but I did see one case of socket timeouts while
simply building a new connection pool with the rjc (pool setup
sketched at the end of this mail).

>> I suspect that there are additional timeouts to be configured and
>> that the previous default values have been lowered. I tried
>> bumping the requestTimeout to no avail. This wouldn't explain the
>> strange latency spikes (via /stats) we saw as we began rolling out
>> the new driver.
>
> It really shouldn't change any of this. Even the withoutFetch()
> feature, as it just ... doesn't do a fetch.
>
> Since you're using that new feature, how are you using it? Is this
> storing new objects, or are you providing a vclock from a previous
> fetch?

We are simply doing a put (sketch below). It is not uncommon for keys
to be overwritten, but we are not providing a vector clock. There is
a dedicated master performing the write for a given key upstream from
riak, and overwriting is always safe (assuming last-one-wins), but we
don't hold onto the vector clock. It seems possible, perhaps likely,
that we are inadvertently invoking some riak consistency machinery by
turning off the get-before-put via withoutFetch(). Would it help to
coordinate writes in some other way?

Somewhat related: I've been curious about writing a smart riak client
that routes a read or write to a node based on the preflist for the
key, to avoid unnecessary internal handoff where possible. Two things
strike me, though: 1) the preflist would need to be computed outside
of riak (rough sketch at the end of this mail), and 2) I'm unsure how
impactful the change would be without better understanding where
internal riak bottlenecks present themselves. Perhaps best left for
another thread.
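For reference, here is roughly how we build the pool. Class and
builder method names are from my reading of the 1.x PB client config
and may not be exact, so treat this as illustrative rather than
gospel; the host and the timeout/pool values are made up:

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.raw.pbc.PBClientConfig;

    // Hedged sketch: single PB endpoint with explicit timeouts
    PBClientConfig conf = new PBClientConfig.Builder()
        .withHost("riak1.example.com")        // hypothetical host
        .withPort(8087)                       // default PB port
        .withConnectionTimeoutMillis(2000)    // socket connect timeout
        .withRequestTimeoutMillis(5000)       // the requestTimeout we bumped
        .withPoolSize(50)                     // max connections in the pool
        .build();
    IRiakClient client = RiakFactory.newClient(conf);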
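The write path itself is essentially the following (again a sketch;
the bucket, key, and payload are hypothetical, and exception handling
is elided):

    import com.basho.riak.client.bucket.Bucket;

    Bucket bucket = client.fetchBucket("events").execute();
    byte[] payload = "{\"v\":1}".getBytes();  // stand-in for the real value

    // Plain put: no read-before-write, and no vclock supplied
    bucket.store("some-key", payload)
          .withoutFetch()
          .execute();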
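As for computing the preflist client-side: as I understand it,
riak_core hashes term_to_binary({Bucket, Key}) with SHA-1
(riak_core_util:chash_key/1) and maps the digest onto a 2^160 ring
split into ring_creation_size equal partitions, with the first
primary being the partition that follows the hash position.
Partition-to-node ownership would still have to come from the ring
state. A rough, untested sketch of just the hashing part, which
reproduces the Erlang external term format for a 2-tuple of binaries:

    import java.io.ByteArrayOutputStream;
    import java.math.BigInteger;
    import java.security.MessageDigest;

    public class RingPosition {
        // the consistent-hash space is 2^160 (SHA-1 output)
        private static final BigInteger RING_TOP = BigInteger.ONE.shiftLeft(160);

        // term_to_binary({<<Bucket>>, <<Key>>}) for two binaries:
        // 131 version byte, 104 SMALL_TUPLE_EXT + arity, then two
        // BINARY_EXTs (tag 109, 4-byte big-endian length, payload)
        static byte[] bucketKeyTerm(byte[] bucket, byte[] key) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            out.write(131);
            out.write(104);
            out.write(2);
            writeBinaryExt(out, bucket);
            writeBinaryExt(out, key);
            return out.toByteArray();
        }

        private static void writeBinaryExt(ByteArrayOutputStream out, byte[] data) {
            out.write(109);
            out.write((data.length >>> 24) & 0xff);
            out.write((data.length >>> 16) & 0xff);
            out.write((data.length >>> 8) & 0xff);
            out.write(data.length & 0xff);
            out.write(data, 0, data.length);
        }

        // Ordinal of the first primary partition for this key on a
        // ring of ringSize partitions (modulo wrap/off-by-one details
        // I'd want to verify against riak_core's chash module)
        static int firstPrimary(byte[] bucket, byte[] key, int ringSize)
                throws Exception {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            BigInteger pos = new BigInteger(1, sha1.digest(bucketKeyTerm(bucket, key)));
            BigInteger inc = RING_TOP.divide(BigInteger.valueOf(ringSize));
            return pos.divide(inc).add(BigInteger.ONE)
                      .mod(BigInteger.valueOf(ringSize)).intValue();
        }
    }

Even with that, the client would need the current ring (who owns
which partition) to pick a node, so this is speculative at best.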