That helped a little, but it's still quite slow. Now it's around 20-35ms
on average, sometimes as high as 70ms.
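For reference, here is a minimal sketch of the kind of keyed read being timed here. It assumes the 0.6-era Thrift interface (per-call keyspace, String row keys, byte[] column names) and placeholder keyspace/column-family/key names ("Keyspace1", "Events", "some-row-key"); exact signatures differ between Cassandra versions, so treat it as an outline rather than the actual test code.

import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class GetSliceTiming {
    public static void main(String[] args) throws Exception {
        // Raw Thrift connection; host, port, keyspace, CF, and key are placeholders.
        TTransport transport = new TSocket("localhost", 9160);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();

        // Ask for up to 1000 columns of the row, in comparator order.
        SliceRange range = new SliceRange();
        range.setStart(new byte[0]);   // empty start/finish = the whole row
        range.setFinish(new byte[0]);
        range.setReversed(false);
        range.setCount(1000);
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(range);

        ColumnParent parent = new ColumnParent("Events"); // hypothetical CF name

        // Time repeated keyed reads; after the first read the row should be
        // hot in the row cache, so later iterations measure the cached path.
        for (int i = 0; i < 500; i++) {
            long start = System.nanoTime();
            List<ColumnOrSuperColumn> cols = client.get_slice(
                    "Keyspace1", "some-row-key", parent, predicate, ConsistencyLevel.ONE);
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println(cols.size() + " columns in " + elapsedMs + " ms");
        }

        transport.close();
    }
}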

On Wed, Apr 14, 2010 at 8:50 AM, James Golick <jamesgol...@gmail.com> wrote:

> Right - that makes sense. I'm only fetching one row. I'll give it a try with
> get_slice().
>
> Thanks,
>
> -James
>
>
> On Wed, Apr 14, 2010 at 7:45 AM, Jonathan Ellis <jbel...@gmail.com> wrote:
>
>> 35-50ms for how many rows of 1000 columns each?
>>
>> get_range_slices does not use the row cache, for the same reason that
>> Oracle doesn't cache tuples from sequential scans -- evicting thousands of
>> recently used rows that were read by key, in favor of a swath of rows from
>> a scan, is the wrong call more often than it is the right one.
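
To make the distinction concrete, here is a rough sketch of the two call shapes, under the same assumptions as the sketch above (0.6-era Thrift signatures, placeholder keyspace/CF/key names): get_range_slices pages over a span of row keys and, per the explanation above, does not go through the row cache, while get_slice addresses a single row by key and can be served from it.

import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.KeyRange;
import org.apache.cassandra.thrift.KeySlice;
import org.apache.cassandra.thrift.SlicePredicate;

public class ScanVsKeyedRead {

    // Range scan: pages through a span of row keys. The rows it returns
    // are not pulled through the row cache.
    static List<KeySlice> scanPage(Cassandra.Client client, ColumnParent parent,
                                   SlicePredicate predicate) throws Exception {
        KeyRange keyRange = new KeyRange();
        keyRange.setStart_key("");  // empty start/end key = start anywhere on the ring
        keyRange.setEnd_key("");
        keyRange.setCount(100);     // one page of up to 100 rows
        return client.get_range_slices("Keyspace1", parent, predicate, keyRange,
                ConsistencyLevel.ONE);
    }

    // Keyed read: addresses a single row by key, so it can be served
    // from the row cache once the row is resident.
    static List<ColumnOrSuperColumn> keyedRead(Cassandra.Client client, ColumnParent parent,
                                               SlicePredicate predicate) throws Exception {
        return client.get_slice("Keyspace1", "some-row-key", parent, predicate,
                ConsistencyLevel.ONE);
    }
}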
>>
>> On Tue, Apr 13, 2010 at 1:00 PM, James Golick <jamesgol...@gmail.com>
>> wrote:
>> > Hi All,
>> > I'm seeing about 35-50ms to read 1000 columns from a CF using
>> > get_range_slices. The columns are TimeUUIDType with empty values.
>> > The row cache is enabled and I'm running the query 500 times in a row,
>> > so I can only assume the row is cached.
>> > Is that about what's expected, or am I doing something wrong? (It's from
>> > Java this time, so it's not Ruby Thrift being slow).
>> > - James
>>
>
>
