Prior to Cassandra 0.7 there was a 2GB limit on row sizes, because the
entire row had to fit in memory for compaction. As far as I'm aware, in
Cassandra 0.7 the limit has changed to 2^31 (approximately 2 billion)
columns per row.
See http://wiki.apache.org/cassandra/CassandraLimitations for more details.
Provided at least one node receives the write, it will eventually be written
to all replicas. A failure to meet the requested ConsistencyLevel is just
that: a failure to meet the requested consistency, not a failure to write the
data itself. Once a write is received by a node it will eventually reach all
replicas; there is no rollback.
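To make that concrete, a QUORUM write looks something like the sketch below.
This is against the 0.7 Thrift API from memory, so treat the exact signatures
as assumptions (0.6 differs slightly, and Keyspace1/Standard1 are just
placeholder names); the point is how the two failure exceptions should be read.

    import java.nio.ByteBuffer;

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.TimedOutException;
    import org.apache.cassandra.thrift.UnavailableException;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class QuorumWrite {
        public static void main(String[] args) throws Exception {
            TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("Keyspace1");

            Column col = new Column();
            col.setName(ByteBuffer.wrap("name".getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap("value".getBytes("UTF-8")));
            col.setTimestamp(System.currentTimeMillis() * 1000); // microseconds by convention

            try {
                client.insert(ByteBuffer.wrap("row1".getBytes("UTF-8")),
                              new ColumnParent("Standard1"), col,
                              ConsistencyLevel.QUORUM);
            } catch (UnavailableException e) {
                // Not enough replicas were up to attempt QUORUM; the write was
                // rejected up front and can safely be retried.
            } catch (TimedOutException e) {
                // QUORUM was not acknowledged in time, but any replica that did
                // receive the write will still propagate it; there is no rollback.
            } finally {
                transport.close();
            }
        }
    }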
See http://www.riptano.com/docs/0.6.5/operations/tuning

Regards,
Nick Telford
On 4 November 2010 22:20, Alaa Zubaidi wrote:
> Thanks for the advice...
> We are running on Windows, and I just added more memory to my system (16GB).
> I will run the test.
If you're bottlenecking on read I/O, making proper use of Cassandra's key
cache and row cache will improve things dramatically.

A little maths using the numbers you've provided tells me that you have
about 80GB of "hot" data (data valid in a 4 hour period). That's obviously
too much to directly cache.
> On 9/3/2010 1:51 PM, Nick Telford wrote:
>
> Which ConsistencyLevels did you use for your batchMutate() and getSlice()
> operations?
>
> ConsistencyLevels directly dictate the level of consistency you will get
> with your data.
>
> Regards,
>
> Nick Telford
Which ConsistencyLevels did you use for your batchMutate() and getSlice()
operations?
ConsistencyLevels directly dictate the level of consistency you will get
with your data.
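For example, if the writes go in through batchMutate() at ConsistencyLevel.ONE
and the getSlice() also reads at ONE, the read can legitimately hit a replica
the write hasn't reached yet. Reading and writing at QUORUM (so that R + W > N)
closes that window. Here is a rough sketch of the read side using the raw 0.7
Thrift get_slice call, which the higher-level clients wrap; again, treat the
exact signatures as assumptions:

    import java.nio.ByteBuffer;
    import java.util.List;

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ColumnOrSuperColumn;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;

    public class QuorumRead {
        // Assumes an already-opened Cassandra.Client (see the write sketch above).
        static List<ColumnOrSuperColumn> readRow(Cassandra.Client client, String key)
                throws Exception {
            SlicePredicate predicate = new SlicePredicate();
            // Empty start/finish means "all columns", capped at 100 here.
            predicate.setSlice_range(new SliceRange(ByteBuffer.wrap(new byte[0]),
                                                    ByteBuffer.wrap(new byte[0]),
                                                    false, 100));
            // A QUORUM read paired with a QUORUM write is guaranteed to see it.
            return client.get_slice(ByteBuffer.wrap(key.getBytes("UTF-8")),
                                    new ColumnParent("Standard1"), predicate,
                                    ConsistencyLevel.QUORUM);
        }
    }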
Regards,
Nick Telford
On 3 September 2010 12:03, Hugo wrote:
> Hi,
>
> I'm performing tests with Cassandra