How long to read just 10 columns?
On Wed, Apr 14, 2010 at 3:19 PM, James Golick wrote:
> The values are empty. It's 3000 UUIDs.
The values are empty. It's 3000 UUIDs.
On Wed, Apr 14, 2010 at 12:40 PM, Avinash Lakshman <avinash.laksh...@gmail.com> wrote:
> How large are the values? How much data on disk?
How large are the values? How much data on disk?
On Wednesday, April 14, 2010, James Golick wrote:
> Just for the record, I am able to repeat this locally.
Just for the record, I am able to repeat this locally.
I'm seeing around 150ms to read 1000 columns from a row that has 3000 in it.
If I enable the rowcache, that goes down to about 90ms. According to my
profile, 90% of the time is being spent waiting for cassandra to respond, so
it's not thrift.
On Wed, Apr 14, 2010 at 10:31 AM, Mike Malone wrote:
> ...
>
> Couldn't you cache a list of keys that were returned for the key range, then
> cache individual rows separately or not at all?
> By "blowing away rows queried by key" I'm guessing you mean "pushing them
> out of the LRU cache," not exp...
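For context, this is roughly the shape of the read being timed: a single-row
slice asking for 1000 of the row's 3000 columns. A minimal sketch against the
0.6-era Thrift Python bindings; the keyspace and column family names
(Keyspace1, Events), the row key, and the generated-module layout are
assumptions, not details from the thread:

    import time

    from thrift.protocol import TBinaryProtocol
    from thrift.transport import TSocket, TTransport
    from cassandra import Cassandra  # Thrift-generated bindings (assumed layout)
    from cassandra.ttypes import (ColumnParent, ConsistencyLevel,
                                  SlicePredicate, SliceRange)

    # Connect to a local node on the default Thrift port.
    socket = TSocket.TSocket('localhost', 9160)
    transport = TTransport.TBufferedTransport(socket)
    client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
    transport.open()

    # Slice the first 1000 columns of one row; empty start/finish means
    # "unbounded", so only count limits the slice. count=10 would answer
    # the 10-column question at the top of the thread.
    predicate = SlicePredicate(slice_range=SliceRange(
        start='', finish='', reversed=False, count=1000))
    parent = ColumnParent(column_family='Events')

    t0 = time.time()
    columns = client.get_slice('Keyspace1', 'some-row-key', parent,
                               predicate, ConsistencyLevel.ONE)
    print('%d columns in %.0fms' % (len(columns), (time.time() - t0) * 1000))

The row cache James enables was, in 0.6, a per-column-family setting in
storage-conf.xml (the RowsCached attribute); as discussed further down the
thread, it helps get_slice but is bypassed by get_range_slices.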
That helped a little, but it's still quite slow. Now it's around 20-35ms
on average, sometimes as high as 70ms.
On Wed, Apr 14, 2010 at 8:50 AM, James Golick wrote:
> Right - that makes sense. I'm only fetching one row. I'll give it a try with
> get_slice().
Right - that makes sense. I'm only fetching one row. I'll give it a try with
get_slice().
Thanks,
-James
On Wed, Apr 14, 2010 at 7:45 AM, Jonathan Ellis wrote:
> 35-50ms for how many rows of 1000 columns each?
35-50ms for how many rows of 1000 columns each?
get_range_slices does not use the row cache, for the same reason that
oracle doesn't cache tuples from sequential scans -- blowing away
1000s of rows worth of recently used rows queried by key, for a swath
of rows from the scan, is the wrong call more often than not.
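To make the distinction concrete, here are the two call shapes side by side,
reusing the client, parent, and predicate from the sketch earlier on this
page; the KeyRange fields and method signatures again follow the 0.6 Thrift
interface as I understand it, so treat them as assumptions:

    from cassandra.ttypes import KeyRange

    # Range scan: walks rows in key order and bypasses the row cache, so
    # even a scan that touches a single row pays the full read path.
    key_range = KeyRange(start_key='some-row-key', end_key='some-row-key',
                         count=1)
    key_slices = client.get_range_slices('Keyspace1', parent, predicate,
                                         key_range, ConsistencyLevel.ONE)

    # By-key read: eligible for the row cache, which is why switching to
    # get_slice (and enabling RowsCached) changes the numbers above.
    columns = client.get_slice('Keyspace1', 'some-row-key', parent,
                               predicate, ConsistencyLevel.ONE)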
Yes, I find that get_range_slices takes an incredibly long time to return
the results.
---
Gautam
On Tue, Apr 13, 2010 at 2:00 PM, James Golick wrote:
> Hi All,
> I'm seeing about 35-50ms to read 1000 columns from a CF using
> get_range_slices. The columns are TimeUUIDType with empty values.
> ...
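Since the columns are TimeUUIDType, the comparator sorts them by timestamp,
and a common pattern for rows like this (not one discussed in the thread) is
a reversed slice with empty bounds to fetch the newest entries first. A
sketch, reusing the client, parent, and imports from the example earlier on
this page:

    # reversed=True returns the time-ordered columns newest-first; with
    # empty start/finish bounds, only count limits the slice.
    newest_first = SlicePredicate(slice_range=SliceRange(
        start='', finish='', reversed=True, count=1000))
    columns = client.get_slice('Keyspace1', 'some-row-key', parent,
                               newest_first, ConsistencyLevel.ONE)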