We are using version 1.2.4 and it is difficult to shut down the embedded
version. But you don't have to. Just check in each test setup method whether
embedded Cassandra is already running and start it if necessary. Then
create keyspaces/tables in setup methods and drop them in teardown methods;
a sketch of this pattern is below.
For us this works well.
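A minimal sketch of the pattern (assuming JUnit 4 and the DataStax Java
driver 2.x; EmbeddedCassandraService ships with Cassandra, and the class,
keyspace and table names are just placeholders):

// Rough sketch: start embedded Cassandra once per JVM and reuse it.
// Assumes a test cassandra.yaml is reachable via the cassandra.config
// system property and that the native transport (port 9042) is enabled.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.cassandra.service.EmbeddedCassandraService;
import org.junit.After;
import org.junit.Before;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public abstract class EmbeddedCassandraTest {

    private static EmbeddedCassandraService cassandra; // started once, never stopped
    private Cluster cluster;
    protected Session session;

    // True if something is already listening on host:port.
    private static boolean isRunning(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 200);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    @Before
    public void startEmbeddedAndCreateSchema() throws IOException {
        if (cassandra == null && !isRunning("127.0.0.1", 9042)) {
            cassandra = new EmbeddedCassandraService();
            cassandra.start(); // no shutdown; the daemon dies with the JVM
        }
        cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        session = cluster.connect();
        session.execute("CREATE KEYSPACE test_ks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE test_ks.users (id int PRIMARY KEY, name text)");
    }

    @After
    public void dropSchema() {
        session.execute("DROP KEYSPACE test_ks");
        cluster.close();
    }
}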
As far as I know there is nothing hard coded in Cassandra that kicks in
every 4 hours. Turn on GC logging, maybe dump the output of jstat to a
file, and correlate this data with the Cassandra logs. Cassandra logs are
pretty good at telling you what is going on.
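For example (the log paths and sampling interval are placeholders; on a
stock install the JVM flags belong in conf/cassandra-env.sh):

  # GC logging, appended to the Cassandra JVM options:
  JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

  # Sample GC utilization every 10 seconds for later correlation:
  jstat -gcutil <cassandra-pid> 10000 >> /var/log/cassandra/jstat.log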
2013/12/10 Joel Samuelsson
> Hello,
If you lose RF or more nodes, any data whose replicas all lived on those
nodes is gone, so it's a good idea to have a recent backup. Another
situation is when you deploy a bug in the software and start writing crap
data to Cassandra. Replication does not help there, and depending on the
situation you need to restore from a backup.
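For reference, the standard snapshot tooling covers the backup part (the
tag, keyspace and column family names below are placeholders):

  # Hard-links the current SSTables under the keyspace's snapshots/ directory
  nodetool snapshot -t before_release my_keyspace

  # Restore, roughly: copy the snapshot SSTables back into the column
  # family's data directory, then load them without a restart:
  nodetool refresh my_keyspace my_columnfamily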
We just migrated a Cassandra cluster on EC2 to another instance type. We
replaced one server after another; this creates problems similar to what
you describe.
We simply stop Cassandra, copy the complete data dir to an EBS volume,
terminate the server, launch another server with the same IP, copy the
data back, and start Cassandra again.
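A sketch of that per-node procedure (the mount point, paths and service
commands here are assumptions, not the exact commands we used):

  nodetool drain                               # flush memtables, stop accepting writes
  sudo /etc/init.d/cassandra stop
  cp -a /var/lib/cassandra /mnt/ebs/cassandra  # preserve the data dir on EBS

  # terminate the instance, launch the replacement with the same IP,
  # attach the EBS volume again, then:
  cp -a /mnt/ebs/cassandra/. /var/lib/cassandra/
  sudo /etc/init.d/cassandra start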
) to a new sstable (%s)"
>
> In the logs.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/02/2013, at 6:13 AM, Andre Sprenger wrote:
Hi,
I'm running a 6 node Cassandra 1.1.5 cluster on EC2. We switched to
leveled compaction a couple of weeks ago, and this has been successful.
A few days ago, 3 of the nodes started to log the following exception
during compaction of a particular column family:
ERROR [CompactionExecutor:726] 2013-0