Re: Data Size on each node

2015-09-04 Thread Alprema
Hi, I agree with Alain; we have the same kind of problem here (4 DCs, ~1 TB/node) and we are replacing our big servers full of spinning drives with a larger number of smaller servers with SSDs (microservers are quite efficient in terms of rack space and cost). Kévin On Tue, Sep 1, 2015 at 1:11

Re: Periodic Anti-Entropy repair

2015-05-22 Thread Alprema
Did you have a look at Reaper? https://github.com/spotify/cassandra-reaper It's a project created by Spotify to address this issue; I have not evaluated it yet, but it looks promising. On May 22, 2015 9:01 PM, "Brice Argenson" wrote: > Hi everyone, > > We are currently migrating from DSE to Apache
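
A minimal home-grown alternative, sketched here only to illustrate the idea (the weekly cadence, the error handling, and the assumption that nodetool is on the PATH are all placeholders, not a recommendation from this thread), is to schedule "nodetool repair -pr" on each node:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch: run a primary-range repair on this node once a week.
    // Running "repair -pr" on every node covers the whole ring exactly once.
    public class PeriodicRepair {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(PeriodicRepair::runRepair, 0, 7, TimeUnit.DAYS);
        }

        private static void runRepair() {
            try {
                Process p = new ProcessBuilder("nodetool", "repair", "-pr")
                        .inheritIO()
                        .start();
                int exitCode = p.waitFor();
                if (exitCode != 0) {
                    System.err.println("nodetool repair exited with " + exitCode);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

A dedicated tool is still preferable because it can pace repairs per keyspace/table and alert on failures, which this sketch does not attempt.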

Re: Out of memory on wide row read

2015-05-15 Thread Alprema
I will file a Jira for that, thanks. On May 12, 2015 10:15 PM, "Jack Krupansky" wrote: > Sounds like it's worth a Jira - Cassandra should protect itself from > innocent mistakes or excessive requests from clients. Maybe there should be > a timeout or result size (bytes in addition to count) lim
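
Until there is such a server-side safeguard, the usual client-side mitigation is driver paging with a bounded fetch size, so that neither the coordinator nor the client materializes the whole partition for one response. A rough sketch in the DataStax Java driver 2.x style (the keyspace, table, column names, and key types are invented for illustration):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    // Sketch: iterate over a very wide partition one page at a time instead of
    // pulling it back in a single response.
    public class WideRowRead {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("metrics")) {

                Statement stmt = new SimpleStatement(
                        "SELECT ts, value FROM samples WHERE series_id = ? AND day = ?",
                        42, "2015-05-12");
                stmt.setFetchSize(1000); // one page = 1000 rows, not the whole partition

                ResultSet rs = session.execute(stmt);
                double sum = 0;
                for (Row row : rs) {     // further pages are fetched as iteration advances
                    sum += row.getDouble("value");
                }
                System.out.println("sum = " + sum);
            }
        }
    }

Paging is already on by default with the native protocol, but the page size is worth tuning down when individual rows or cells are large.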

Re: Read performance

2015-05-11 Thread Alprema
erties define how many possible disk reads Cassandra has to do to get > all the data you need depending on which SSTables have data for your > partition key. > > On Fri, May 8, 2015 at 6:25 PM, Alprema wrote: > >> I was planning on using a more "server-friendly" st

Re: Read performance

2015-05-08 Thread Alprema
2:34 PM, Bryan Holladay wrote: > Try breaking it up into smaller chunks using multiple threads and token > ranges. 86400 is pretty large. I found ~1000 results per query is good. > This will spread the burden across all servers a little more evenly. > > On Thu, May 7, 2015 at 4:
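
For a query that stays inside a single partition (one series, one day), the same chunking idea can be applied on the clustering key rather than on token ranges, which mainly help when many partitions are scanned. A rough sketch, reusing the hypothetical schema assumed under the original question further down (one-hour slices of a per-second series, a few slices in flight at a time):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Sketch: fetch one day of per-second samples (86,400 rows) for a single
    // series in one-hour slices (3,600 rows each) instead of one huge query.
    public class ChunkedRead {
        public static void main(String[] args) throws Exception {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("metrics")) {

                PreparedStatement ps = session.prepare(
                        "SELECT ts, value FROM samples "
                      + "WHERE series_id = ? AND day = ? AND ts >= ? AND ts < ?");

                long dayStartMillis = 1431043200000L; // 2015-05-08 00:00:00 UTC
                long hourMillis = 3_600_000L;

                ExecutorService pool = Executors.newFixedThreadPool(4);
                List<Future<List<Row>>> futures = new ArrayList<>();
                for (int hour = 0; hour < 24; hour++) {
                    final long from = dayStartMillis + hour * hourMillis;
                    final long to = from + hourMillis;
                    Callable<List<Row>> slice = () -> session.execute(
                            ps.bind(42, "2015-05-08", new Date(from), new Date(to))).all();
                    futures.add(pool.submit(slice));
                }

                int total = 0;
                for (Future<List<Row>> f : futures) {
                    total += f.get().size(); // collect slice results in submission order
                }
                pool.shutdown();
                System.out.println("rows read: " + total);
            }
        }
    }

The slice size and thread count here are arbitrary; the point is only that each individual query stays small, as the advice above suggests.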

Read performance

2015-05-07 Thread Alprema
Hi, I am writing an application that will periodically read large amounts of data from Cassandra and I am experiencing odd performance. My column family is a classic time series one, with series ID and Day as partition key and a timestamp as clustering key, the value being a double. The query I r
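
For reference, a table matching that description could look like the one below; the keyspace, table and column names, and the exact key types, are guesses since the post does not include the DDL:

    // Hypothetical DDL matching the description: (series ID, day) as the
    // partition key, a per-second timestamp as the clustering key, and a
    // double value. All names and types are invented for illustration.
    public class TimeSeriesSchema {
        static final String CREATE_TABLE =
                "CREATE TABLE metrics.samples ("
              + "  series_id int,"
              + "  day text,"          // one partition per series per day
              + "  ts timestamp,"
              + "  value double,"
              + "  PRIMARY KEY ((series_id, day), ts)"
              + ") WITH CLUSTERING ORDER BY (ts ASC)";

        // The kind of read the post describes: one series, one full day,
        // i.e. 86,400 rows when the series has one point per second.
        static final String READ_DAY =
                "SELECT ts, value FROM metrics.samples WHERE series_id = ? AND day = ?";
    }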