>From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
>Sent: 26 February 2013 09:32
>To: user@cassandra.apache.org
>Subject: Re: Read Perf
>
>In that case, make sure you don't plan on going into the millions, or test
>the limit, as I'm pretty sure it can't go above 10 million. (from p
Yep. So the read will remain constant in this case?

>-Original Message-
>From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
>Sent: 26 February 2013 09:21
>To: user@cassandra.apache.org
>Subject: Re: Read Perf
>
Thanks. For our case, the number of rows will more or less be the same. The
only thing which changes is the columns, and they keep getting added.

-Original Message-
From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
Sent: 26 February 2013 09:21
To: user@cassandra.apache.org
Subject: Re: Read Perf
To find stuff on disk, there is a bloom filter in memory for each SSTable. Per
the docs, 1 billion rows takes about 2 GB of RAM for bloom filters, so read
performance really will depend heavily on your number of rows. As you get more
rows, you may need to raise the bloom filter false-positive chance so it uses
less RAM, but that means slower reads.
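
For reference, the knob being described above is the per-table
bloom_filter_fp_chance property. Below is a minimal sketch of raising it via
the DataStax Python driver; the contact point, keyspace/table name, and the
0.1 value are placeholders rather than anything taken from this thread.

# Raise the bloom filter false-positive target for one table. A higher value
# means smaller in-memory filters (less RAM per SSTable) but more false
# positives, i.e. more needless disk reads for keys that are not in an SSTable.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # placeholder contact point
session = cluster.connect()

session.execute(
    "ALTER TABLE my_keyspace.my_table WITH bloom_filter_fp_chance = 0.1"
)

cluster.shutdown()
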
Uh, so look at your await time of *107.3*. From the iostat man page: "await:
The average time (in milliseconds) for I/O requests issued to the device to
be served. This includes the time spent by the requests in queue and the
time spent servicing them."
If the key you are reading from is not
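
As a rough illustration of checking that await figure programmatically instead
of eyeballing iostat output, here is a sketch that shells out to iostat -x and
picks the await column by header name; the device name is a placeholder, and
newer sysstat versions split await into r_await/w_await, in which case this
returns None.

# Pull the await value (in ms) for one block device out of iostat -x output.
import subprocess

def device_await(device="sda"):
    out = subprocess.run(
        ["iostat", "-x"], capture_output=True, text=True, check=True
    ).stdout
    header = None
    for line in out.splitlines():
        cols = line.split()
        if not cols:
            continue
        if cols[0] in ("Device:", "Device"):
            header = cols
            continue
        if header and cols[0] == device:
            if "await" not in header:
                return None  # column layout differs on this sysstat version
            return float(cols[header.index("await")])
    return None

if __name__ == "__main__":
    # Anything around 100 ms, like the 107.3 above, points at an overloaded disk.
    print(device_await("sda"))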