Answers prefixed with [PP]
_
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: 21 March 2013 23:11
To: user@cassandra.apache.org
Subject: Re: Unable to fetch large amount of rows
+ Did run cfhistograms, the results are interesting (Note: row cache is disabled):
SSTables in cfhistograms is a friend here. It tells you how many sstables were read from per read.
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 21/03/2013, at 4:48 PM, Pushkar Prasad wrote:
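The cfhistograms check suggested above is run with nodetool; a minimal sketch (the keyspace and column family names here are placeholders, not from the thread):

```shell
# Print per-read SSTable counts plus read/write latency and
# row/column-count histograms for one column family.
# "ks" and "traffic_stats" are hypothetical names.
nodetool -h 127.0.0.1 cfhistograms ks traffic_stats
```

If the SSTables column shows most reads touching several sstables, the data for a partition is spread across many files and read latency tends to rise accordingly.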
Yes, I'm reading from a single partition.
-Original Message-
From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
Sent: 21 March 2013 01:38
To: user@cassandra.apache.org
Subject: Re: Unable to fetch large amount of rows
Is your use case reading from a single partition? If so, you may …
Sent: March 20, 2013 11:41 AM
To: user@cassandra.apache.org
Subject: RE: Unable to fetch large amount of rows
Hi aaron.
I added pagination, and things seem to have started performing much better.
With 1000 page size …
… response when queried from a different node, can something be done?
Thanks
Pushkar
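In the Cassandra 1.2 era, before driver-side automatic paging, pagination within a wide partition was typically done by hand with the clustering key. A sketch of that pattern for this thread's schema (the table name and underscored column names are assumptions; the literal values are placeholders):

```sql
-- First page of one TimeStamp partition:
SELECT MACAddress, Data_Transfer, Data_Rate, LocationID
FROM traffic_stats
WHERE TimeStamp = '2013-03-20 11:00:00'
LIMIT 1000;

-- Next page: restart after the last MACAddress already seen,
-- relying on the clustering order within the partition.
SELECT MACAddress, Data_Transfer, Data_Rate, LocationID
FROM traffic_stats
WHERE TimeStamp = '2013-03-20 11:00:00'
  AND MACAddress > '00:1a:2b:3c:4d:5e'
LIMIT 1000;
```

Each query then reads a bounded slice instead of the whole 500K-row partition, which is why a 1000-row page size avoids the RPC timeout.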
_
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: 20 March 2013 15:02
To: user@cassandra.apache.org
Subject: Re: Unable to fetch large amount of rows
The query returns fine if I request …
> … me to return the records? And if
> data is stored sequentially, is there any alternative that would allow me to
> fetch all the records quickly (by sequential disk fetch)?
>
> Thanks
> Pushkar
>
> -Original Message-
> From: aaron morton [mailto:aa...@thelastpickle.com]
To: user@cassandra.apache.org
Subject: Re: Unable to fetch large amount of rows
> I have 1000 timestamps, and for each timestamp, I have 500K different
> MACAddress.
So you are trying to read about 2 million columns?
500K MACAddresses each with 3 other columns?
> When I run the following query, I get RPC Timeout exceptions:
What is the exception?
Is it a client side socket …
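The "about 2 million columns" estimate above can be reproduced with quick arithmetic. The row-marker term is an assumption about how CQL3 rows were laid out as storage-engine columns in that era; the thread itself only gives the 500K and 3-column figures:

```python
# Back-of-envelope size of one TimeStamp partition in this schema.
mac_addresses_per_timestamp = 500_000
non_key_columns = 3  # Data Transfer, Data Rate, LocationID
row_marker = 1       # assumed CQL3 row marker per logical row

columns_per_partition = mac_addresses_per_timestamp * (non_key_columns + row_marker)
print(columns_per_partition)  # 2000000 storage-engine columns in one partition
```

Reading that many columns in a single request comfortably explains an RPC timeout, independent of any client-side issue.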
Hi,
I have the following schema:
TimeStamp
MACAddress
Data Transfer
Data Rate
LocationID
PKEY is (TimeStamp, MACAddress). That means partitioning is on TimeStamp,
and data is ordered by MACAddress, and stored together physically (let me
know if my understanding is wrong). I have 1000 timestamps, and for each
timestamp, I have 500K different MACAddress.
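The schema described above might look like this in CQL; the table name and the underscored column names are assumptions, since the thread only lists the fields:

```sql
-- Partition key: TimeStamp; clustering key: MACAddress.
-- All rows sharing a TimeStamp land in one partition, sorted by
-- MACAddress, matching the "stored together physically" reading above.
CREATE TABLE traffic_stats (
    TimeStamp     timestamp,
    MACAddress    text,
    Data_Transfer bigint,
    Data_Rate     double,
    LocationID    int,
    PRIMARY KEY (TimeStamp, MACAddress)
);
```

With this layout a query restricted to one TimeStamp is a single-partition read, which is efficient per page but unbounded if all 500K clustering rows are requested at once.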