On Tue, Feb 14, 2012 at 8:01 PM, aaron morton <aa...@thelastpickle.com> wrote:
> And the output from tpstats is ?

I can't reproduce it at the moment ;-( nodetool is throwing 'Failed to
retrieve RMIServer stub:' - which I'm guessing/hoping is related to the
stalled bootstrap.

> A
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 14/02/2012, at 12:43 PM, Franc Carter wrote:
>
> On Tue, Feb 14, 2012 at 6:06 AM, aaron morton <aa...@thelastpickle.com> wrote:
>
>> What CL are you reading at ?
>
> Quorum
>
>> Write ops go to RF number of nodes; read ops go to RF number of nodes 10%
>> of the time (the default probability that Read Repair will run) and to CL
>> number of nodes 90% of the time. With 2 nodes and RF 2 the QUORUM is 2, so
>> every request will involve all nodes.
>
> Yep, the thing that confuses me is the different behaviour when reading
> from one node versus two.
>
>> As to why the pending list gets longer, do you have some more info ? What
>> process are you using to measure ? It's hard to guess why. In this setup
>> every node will have the data and should be able to do a local read and
>> then a read on the other node.
>
> I have four pycassa clients, two making requests to one server and two
> making requests to the other (or all four making requests to the same
> server). The requested keys don't overlap and I would expect/assume the
> keys are in the key cache.
>
> I am looking at the output of nodetool -h tpstats
>
> cheers
>
>> On 14/02/2012, at 12:47 AM, Franc Carter wrote:
>>
>> Hi,
>>
>> I've been looking at tpstats as various test queries run and I noticed
>> something I don't understand.
>>
>> I have a two node cluster with RF=2 on which I run 4 parallel queries;
>> each job goes through a list of keys doing a multiget for 2 keys at a
>> time. If two of the queries go to one node and the other two go to a
>> different node, then the pending queue on the node gets much longer than
>> if they all go to the one node.
>>
>> I'm clearly missing something here, as I would have expected the opposite.
>>
>> cheers

--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
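
For reference, Aaron's point about QUORUM with RF=2 on a 2-node cluster is just
the standard quorum arithmetic; a minimal sketch in plain Python (nothing
pycassa-specific):

# Cassandra computes the quorum for a replication factor as floor(RF/2) + 1.
def quorum(rf):
    return rf // 2 + 1

assert quorum(2) == 2   # RF=2: every QUORUM read/write must touch both replicas
assert quorum(3) == 2   # RF=3 would let one replica lag behind

So with two nodes and RF=2 there is no request, at QUORUM, that only one node
can answer on its own.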
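And a rough sketch of the client-side access pattern described in the thread,
assuming pycassa 1.x; the keyspace, column family and key names below are
illustrative only, not taken from the actual test setup:

from pycassa import ConnectionPool, ColumnFamily, ConsistencyLevel

# Each client pins its connection pool to a single node, as in the test setup.
pool = ConnectionPool('MyKeyspace', server_list=['node1:9160'])
cf = ColumnFamily(pool, 'MyCF')

keys = ['key-%06d' % i for i in range(10000)]   # placeholder key list
for i in range(0, len(keys), 2):                # multiget 2 keys at a time
    rows = cf.multiget(keys[i:i + 2],
                       read_consistency_level=ConsistencyLevel.QUORUM)

Watching tpstats on each node while a loop like this runs against one node
versus both is how the pending-queue difference above was observed.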