Keyspace: ActivityFeed
        Read Count: 699443
        Read Latency: 16.11017477192566 ms.
        Write Count: 69264920
        Write Latency: 0.020393242755495856 ms.
        Pending Tasks: 0
...snip....

                Column Family: Events
                SSTable count: 5
                Space used (live): 680625289
                Space used (total): 680625289
                Memtable Columns Count: 65974
                Memtable Data Size: 6901772
                Memtable Switch Count: 121
                Read Count: 232378
                Read Latency: 0.396 ms.
                Write Count: 919233
                Write Latency: 0.055 ms.
                Pending Tasks: 0
                Key cache capacity: 47
                Key cache size: 0
                Key cache hit rate: NaN
                Row cache capacity: 500000
                Row cache size: 62768
                Row cache hit rate: 0.007716049382716049
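
(For reference: a quick way to double-check those cache numbers is to pull the raw counters over JMX and divide them yourself, per the "use getHits / getRequests" suggestion further down the thread. The sketch below is only illustrative -- the JMX host/port and the ObjectName for the Events row cache are assumptions, so confirm them in jconsole before relying on them.)

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CacheHitRateProbe {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint; adjust host/port to the node being inspected.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();

        // Assumed ObjectName for the Events row cache -- verify the exact
        // name in jconsole first.
        ObjectName rowCache = new ObjectName(
                "org.apache.cassandra.db:type=Caches,keyspace=ActivityFeed,cache=EventsRowCache");

        // Cumulative hit rate from the raw counters (getHits / getRequests),
        // which stays meaningful even when getRecentHitRate reports NaN.
        long hits = ((Number) mbs.getAttribute(rowCache, "Hits")).longValue();
        long requests = ((Number) mbs.getAttribute(rowCache, "Requests")).longValue();
        System.out.println("row cache hit rate: "
                + (requests == 0 ? Double.NaN : (double) hits / requests));

        jmxc.close();
    }
}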


On Wed, Mar 31, 2010 at 4:15 PM, Jonathan Ellis <jbel...@gmail.com> wrote:

> What does the CFS mbean think read latencies are?  Possibly something
> else is introducing latency after the read.
>
> On Wed, Mar 31, 2010 at 5:37 PM, James Golick <jamesgol...@gmail.com>
> wrote:
> > Standard CF. 10 columns per row. Between about 800 bytes and 2k total per
> > row.
> > On Wed, Mar 31, 2010 at 3:06 PM, Chris Goffinet <goffi...@digg.com>
> > wrote:
> >>
> >> How many columns in each row?
> >> -Chris
> >> On Mar 31, 2010, at 2:54 PM, James Golick wrote:
> >>
> >> I just tried running the same multi_get against cassandra 1000 times,
> >> assuming that would force it into the cache.
> >> I'm definitely seeing a 5-10ms improvement, but it's still looking like
> >> 20-30ms on average. Would you expect it to be faster than that?
> >> - James
> >>
> >> On Wed, Mar 31, 2010 at 11:44 AM, Jonathan Ellis <jbel...@gmail.com>
> >> wrote:
> >>>
> >>> But then you'd still be caching the same things memcached is, so
> >>> unless you have a lot more RAM, you'll presumably miss the same rows
> >>> too.
> >>>
> >>> The only 2-layer approach that makes sense to me would be to have
> >>> the Cassandra key cache at 100% behind memcached for the actual rows,
> >>> which will actually reduce the penalty for a memcache miss by
> >>> half-ish.
> >>>
> >>> On Wed, Mar 31, 2010 at 1:32 PM, David Strauss <da...@fourkitchens.com>
> >>> wrote:
> >>> > Or, if faking memcached misses is too high a price to pay, queue some
> >>> > proportion of the reads to replay asynchronously against Cassandra.
> >>> >
> >>> > On Wed, 2010-03-31 at 11:04 -0500, Jonathan Ellis wrote:
> >>> >> Can you redirect some of the reads from memcache to cassandra?  Sounds
> >>> >> like the cache isn't getting warmed up.
> >>> >>
> >>> >> On Wed, Mar 31, 2010 at 11:01 AM, James Golick <jamesgol...@gmail.com>
> >>> >> wrote:
> >>> >> > I'm testing on the live cluster, but most of the production reads
> >>> >> > are being served by the cache. It's definitely the right CF.
> >>> >> >
> >>> >> > On Wed, Mar 31, 2010 at 8:30 AM, Jonathan Ellis <jbel...@gmail.com>
> >>> >> > wrote:
> >>> >> >>
> >>> >> >> On Wed, Mar 31, 2010 at 12:01 AM, James Golick
> >>> >> >> <jamesgol...@gmail.com>
> >>> >> >> wrote:
> >>> >> >> > Okay, so now my row cache hit rate jumps between 1.0, 99.5, 95.6,
> >>> >> >> > and NaN. Seems like that stat is a little broken.
> >>> >> >>
> >>> >> >> Sounds like you aren't getting enough requests for
> >>> >> >> getRecentHitRate to make sense.  Use getHits / getRequests.
> >>> >> >>
> >>> >> >> But if you aren't getting enough requests for getRecentHitRate, are
> >>> >> >> you sure you're tuning the cache on the right CF for your 35ms test?
> >>> >> >> Are you testing live?  If not, what's your methodology here?
> >>> >> >>
> >>> >> >> -Jonathan
> >>> >> >
> >>> >> >
> >>> >
> >>> >
> >>> >
> >>> >
> >>
> >>
> >
> >
>
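
For anyone who wants to try the two-layer setup Jonathan describes above (Cassandra key cache at 100% behind memcached), the ColumnFamily definition in storage-conf.xml would look roughly like the sketch below. Treat it as an assumption: the KeysCached/RowsCached attribute names are taken from the 0.6-era config format, and the RowsCached value just mirrors the row cache capacity in the cfstats output at the top -- verify both against your own version before using it.

  <!-- Sketch only: cache every key so a memcached miss costs a single
       data read instead of an index seek plus a data read. -->
  <ColumnFamily Name="Events"
                KeysCached="100%"
                RowsCached="500000"/>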
