1 - Yes
2 - Yes (no major compaction needed, upgradesstables should do the job)

As always, in case of doubt, test it. In this case you can even do it
using a local machine.

Alain


2014-04-29 9:57 GMT+02:00 Katriel Traum:

> Hello,
Hello,
I am running mostly Cassandra 1.2 on my clusters, and I want to migrate my
current Snappy-compressed CFs to LZ4.
Changing the schema is easy; my questions are:
1. Will previous, Snappy-compressed tables still be readable?
2. Will upgradesstables convert my current CFs from Snappy to LZ4?
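For reference, a minimal sketch of what the change looks like on 1.2
(keyspace and table names invented):

    ALTER TABLE myks.mycf
      WITH compression = {'sstable_compression' : 'LZ4Compressor'};

followed on each node by:

    nodetool upgradesstables -a myks mycf

The -a flag forces rewriting sstables that are already on the current
format version, which matters here since a compression change alone does
not bump the sstable version; check your version's nodetool usage output
to confirm the flag is available.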
Hello list,
I have a cluster of 3 nodes with RF=3. The cluster's load is a daily bulk
write/delete/compact cycle, with reads the rest of the time.
For better read performance, and to make sure data is 100% consistent, we
write with "ALL" and read with "ONE", stopping the write process if there
is a problem.
My pr
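For what it's worth, the consistency math here checks out: with N = RF = 3,
writing at ALL (W = 3) and reading at ONE (R = 1) gives R + W = 4 > N = 3,
so every read overlaps the write set and is guaranteed to hit at least one
replica that saw the write, as long as the writes actually succeed.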
> dc=us-east
> rack=1e
>
> After this step is complete on all nodes, you can then add a new
> datacenter, specifying a different dc and rack in the
> cassandra-rackdc.properties of the new DC. Make sure you upgrade your
> initial datacenter to 1
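To make that concrete, a hypothetical cassandra-rackdc.properties for the
new DC (dc/rack values invented; this file is read by
GossipingPropertyFileSnitch):

    # cassandra-rackdc.properties on each node of the new datacenter
    dc=us-west
    rack=1a

Once the new DC's nodes have joined, stream the existing data to each of
them from the old DC:

    nodetool rebuild us-east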
Hello list.
I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
I've read here and in other places that the best way to migrate to vnodes
is to add a new DC with the same number of nodes and run rebuild on each
of them.
However, I'm faced with the fact that I'm using the EC2MultiRegion snitch,
which
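For context, the vnode half of such a migration is just configuration on
the new DC's nodes; a sketch, using the commonly used default:

    # cassandra.yaml on each node of the new, vnode-enabled DC
    num_tokens: 256
    # leave initial_token unset when vnodes are enabled

The old DC keeps its single tokens until it is decommissioned.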
22, 2014 at 11:13 PM, Katriel Traum wrote:

> I was wondering if anyone has any pointers or advice regarding using the
> row cache vs leaving it up to the OS buffer cache.
>
> I run cassandra 1.1 and 1.2 with JNA, so off-heap row cache is an option.

Many pe
Hello list,
I was wondering if anyone has any pointers or advice regarding using the row
cache vs leaving it up to the OS buffer cache.
I run cassandra 1.1 and 1.2 with JNA, so off-heap row cache is an option.
Any input appreciated.
Katriel
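For anyone searching later, the knobs in question on 1.1/1.2 look roughly
like this (the size is illustrative):

    # cassandra.yaml
    row_cache_size_in_mb: 512
    row_cache_provider: SerializingCacheProvider  # off-heap; requires JNA

and per column family (1.2 CQL3 syntax; table name made up):

    ALTER TABLE myks.mycf WITH caching = 'rows_only';

SerializingCacheProvider stores cached rows off-heap, hence the JNA
requirement; the per-CF caching attribute controls which CFs use the cache.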
Hello list,
I have a 2-DC setup with DC1:3, DC2:3 replication factors. DC1 has 6 nodes,
DC2 has 3. This whole setup runs on AWS, running cassandra 1.1.
Here's my nodetool ring:

1.1.1.1   eu-west   1a   Up   Normal   55.07 GB   50.00%   0
2.2.2.1   us-east   1b   Up
> The problem is that NetworkTopologyStrategy will try to pick nodes that
> have a different rack when going around the ring, so the second node in
> each rack always gets skipped unless it was the first node picked. Your
> nodes in eu-west go a,a,b,b,c,c but they should be a,b,c,a,b,c.

On Wed, No
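To make the skipping concrete at RF=3: walking the ring from a node in
rack a, NTS takes it, skips the next node because it is also in rack a,
takes the first b node, skips the second b, and takes the first c. The
second node of each rack pair only becomes a replica for ranges whose walk
starts on it, which is how ownership ends up skewed instead of even.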
Hello list,
I have a problem with my cluster ownership not being as expected.
I have a 2-DC cluster using NetworkTopologyStrategy and the
EC2MultiRegionSnitch with cassandra 1.1.5. My placement strategy for all
keyspaces is {eu-west: 3, us-east: 3}, and I have 6 nodes in eu-west and 3
in us-east.
I
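For reference, that placement corresponds to a keyspace definition along
these lines (1.2-style CQL shown, keyspace name invented; 1.1 uses the
older strategy_class syntax):

    ALTER KEYSPACE myks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'eu-west': 3, 'us-east': 3};

The DC names must match what the snitch reports; EC2MultiRegionSnitch
derives the DC from the AWS region (e.g. us-east) and the rack from the
availability zone (e.g. 1e).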