"Does it show statistics for just the current node on which I am running, or for the entire cluster?" --> only current node.
"Are the read latencies shown the latencies within a single host, or are they the end-to-end latencies from the coordinator node?" --> cfhistograms shows metrics at table/node level, proxyhistograms shows metrics at cluster/coordinator level On Sun, Nov 16, 2014 at 10:31 PM, Clint Kelly <clint.ke...@gmail.com> wrote: > Thanks, Mark! > > A couple of other questions about the command: > > * Does it show statistics for just the current node on which I am > running, or for the entire cluster? > * Are the read latencies shown the latencies within a single host, or > are they the end-to-end latencies from the coordinator node? > > -Clint > > > On Sun, Nov 16, 2014 at 10:05 AM, Mark Reddy <mark.l.re...@gmail.com> > wrote: > > Hi Clint, > > > > The values of SSTables, Write Latency and Read Latency will be reset on > node > > start/restart and after running the cfhistograms command itself. > > > > The values of Row Size and Column Count are calculated at startup and > then > > re-evaluated during compaction. > > > > > > Mark > > > > > > On 16 November 2014 17:12, Clint Kelly <clint.ke...@gmail.com> wrote: > >> > >> Hi all, > >> > >> Over what time range does "nodetool cfhistograms" operate? > >> > >> I am using Cassandra 2.0.8.39. > >> > >> I am trying to debug some very high 95th and 99th percentile read > >> latencies in an application that I'm working on. > >> > >> I tried running nodetool cfhistograms to get a flavor for the > >> distribution of read latencies and also to see how many SSTables our > >> reads are using, and I saw different results yesterday versus this > >> morning, so I assume the time window is fairly tight (like an hour or > >> so). I vaguely recall Aaron Morton talking about this during the > >> training course I took at Cassandra Summit, but I cannot find my notes > >> and I think the behavior of the tool with regard to time windows > >> changed from version to version. > >> > >> Thanks! > >> > >> -Clint > > > > >