Hello Mitchell,
I think it is due to your replication factor, which, I assume, is 2 since
you have only 2 nodes in the cluster.
If you are using an even number of nodes, Cassandra cannot run
queries that require QUORUM participants.
So, I think you have to expand your cluster to 3 nodes and mak
I'm testing triggers as part of a project and would like to add some
logging to it. I'm using the same log structure as in the trigger example
InvertedIndex but can't seem to find any logs. Where would I find the
logging? In the system logs or somewhere else?
/Joel
Hi Marcelo,
I could create a fast copy program by repurposing some python apps that I
am using for benchmarking the python driver - do you still need this?
With high levels of concurrency and multiple subprocess workers, based on
my current actual benchmarks, I think I can get well over 1,000 row
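For reference, a minimal sketch of what such a concurrent copier could look
like with the DataStax Python driver's execute_concurrent_with_args; the
contact point, keyspace, table and column names below are illustrative
assumptions, not taken from any real setup:

from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

# Connect to the cluster (hypothetical contact point and keyspace).
cluster = Cluster(['10.10.1.1'])
session = cluster.connect('my_keyspace')

# Prepare the destination insert once so it can be reused for every row.
insert = session.prepare(
    "INSERT INTO dest_table (id, payload) VALUES (?, ?)")

# Stream rows out of the (hypothetical) source table and write them back
# concurrently; 'concurrency' bounds the number of in-flight requests.
rows = session.execute("SELECT id, payload FROM source_table")
args = [(r.id, r.payload) for r in rows]
execute_concurrent_with_args(session, insert, args, concurrency=100)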
I found that I had logged with too low a log level, so the messages were
filtered out of the system log. Logging at a more severe log level made the
messages appear in the system log.
/Joel
2014-06-03 16:30 GMT+02:00 Joel Samuelsson :
> I'm testing triggers as part of a project and would like to
One of the nodes in a cassandra cluster has died.
I'm using cassandra 2.0.7 throughout.
When I do a nodetool status this is what I see (real addresses have been
replaced with fake 10 nets)
[root@beta-new:/opt] #nodetool status
Datacenter: datacenter1
===
Statu
Hi,
in the last week we saw at least two emails about dead node
replacement. Though I have seen the documentation about how to do this, I am
not sure I understand why it is required.
Assuming the replication factor is >2, if a node dies, why does it matter?
If a new node is added, shouldn't
A dead node is still allocated key ranges, and Cassandra will wait for it
to come back online rather than redistributing its data. It needs to be
decommissioned or replaced by a new node for it to be truly dead as far as
the cluster is concerned.
On Tue, Jun 3, 2014 at 11:12 AM, Prem Yadav wrote
>
> Assuming the replication factor is >2, if a node dies, why does it matter?
> If a new node is added, shouldn't it just take the chunk of data it
> served as the "primary" node from the other existing nodes?
> Why do we need to worry about replacing the dead node?
The reason this matters is
Thanks Mongo maven :)
I understand why you need to do this.
My question was more from the architecture point of view. Why doesn't Cassandra
just redistribute the data? Is it because of the gossip protocol?
Thanks,
Prem
On 3 Jun 2014, at 17:35, Curious Patient wrote:
>> Assuming replication
>
> Thanks Mongo maven :)
> I understand why you need to do this.
> My question was more from the architecture point of view. Why doesn't
> Cassandra just redistribute the data? Is it because of the gossip protocol?
Sure.. well I've attempted to launch new nodes to redistribute the data on
a tem
On Tue, Jun 3, 2014 at 8:41 AM, Curious Patient
wrote:
> I then started following this documentation on how to replace the node:
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html?scroll=task_ds_aks_15q_gk
>
...
> And set the initial to
To be fair, it might be best to represent hex as 0xdeaf or 0xDEAF instead
of just 'deaf'
On Sun, Jun 1, 2014 at 8:37 PM, David Daeschler
wrote:
> I wouldn't worry unless it changes from deaf to deadbeef
>
>
> On Sun, Jun 1, 2014 at 11:34 PM, Tim Dunphy wrote:
>
>> This post should definitely ma
On Fri, May 30, 2014 at 4:08 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> Basically you sort of confirmed that if down_time > max_hint_window_in_ms
> the only way to bring DC1 up-to-date is anti-entropy repair.
>
Also, read repair does not help either as we assumed that down_time
Hi Rob,
> If you are replacing an address, you need to use the identical
> initial_token to the node you are replacing, not the token -1.
Thanks, I hope that does the trick. Btw, was my idea of how to get at the
initial token of the missing/dead node correct?
i.e.
nodetool ring | grep 10.10.1
On Tue, Jun 3, 2014 at 10:53 AM, Curious Patient
wrote:
> I want to be sure I'm using the right token.
>
In nodetool ring, if you're not using vnodes, only one token should be
listed with both the IP of the old node and the status Down.
If you are using vnodes, it's a comma delimited list in initial_token,
Repairing the range is an expensive operation and don't forget--just
because a node is down does not mean it's dead. I take nodes down for
maintenance all the time--maybe there was a security update that needed to
be applied, for example, or perhaps a kernel update. There are a multitude
of reaso
>
> In nodetool ring, if you're not using vnodes, only one token should be
> listed with both the IP of the old node and the status Down.
> If you are using vnodes, it's a comma delimited list in initial_token,
> which you can get from :
> nodetool info -T | grep Token | awk '{print $3}' | paste -s
On Tue, Jun 3, 2014 at 11:03 AM, Curious Patient
wrote:
> In nodetool ring, if you're not using vnodes, only one token should be
>> listed with both the IP of the old node and the status Down.
>> If you are using vnodes, it's a comma delimited list in initial_token,
>> which you can get from :
>>
Has anyone seen this error on Cassandra 1.2.9? We have not done any upgrades or
changes to column families since we went live in Feb 2014.
We are getting the following error when we run nodetool cleanup or nodetool
repair on one of our production nodes.
We have 2 data centers with 2 node
Hi All.
I have a system that's going to make possibly several concurrent changes to
a running total. I know I could use a counter for this. However, I have
extra metadata I can store with the changes which would allow me to replay
the changes. If I use a counter and it loses some writes I can't rec
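For what it's worth, a minimal sketch of that replay-friendly alternative
(one immutable row per change, summed on read) with the Python driver; the
keyspace, table and column names are illustrative assumptions, and the
CREATE TABLE IF NOT EXISTS form assumes Cassandra 2.0+:

import uuid
from cassandra.cluster import Cluster

cluster = Cluster(['10.10.1.1'])           # hypothetical contact point
session = cluster.connect('my_keyspace')   # hypothetical keyspace

# One row per change; the metadata column lets you audit or replay later.
session.execute("""
    CREATE TABLE IF NOT EXISTS running_total (
        total_id  text,
        change_id timeuuid,
        delta     bigint,
        meta      text,
        PRIMARY KEY (total_id, change_id))""")

# Each change is an idempotent insert keyed by its own change_id, so a
# retried write cannot double-count the way a replayed counter increment can.
session.execute(
    "INSERT INTO running_total (total_id, change_id, delta, meta) "
    "VALUES (%s, %s, %s, %s)",
    ('orders', uuid.uuid1(), 42, 'order #123 placed'))

# The current value is the sum of the deltas, computed client-side.
total = sum(row.delta for row in session.execute(
    "SELECT delta FROM running_total WHERE total_id = %s", ('orders',)))

The trade-off is that reads slow down as the partition grows, so old deltas
would eventually need to be rolled up into a checkpoint row.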
Thanks Haebin, I scaled up to a 3 node system and it now behaves as expected.
Was trying to simplify the test case but shot myself in the foot instead.
Mitchell
From: monster@gmail.com [mailto:monster@gmail.com] On Behalf Of
Frederick Haebin Na
Sent: Tuesday, June 03, 2014 2:17 AM
To:
Thanks for your responses!
Matt, I did a test with 4 nodes, 2 in each DC and the answer appears to
be yes. The tokens seem to be unique across the entire cluster, not just
on a per DC basis. I don't know if the number of nodes deployed is
enough to reassure me, but this is my conclusion for no
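In case it helps, a small sketch of how non-vnode tokens are often generated
for that kind of layout: the usual evenly spaced Murmur3 formula per DC, with
the second DC's tokens shifted by a small constant so every token stays
unique across the whole cluster. The node counts and the +100 offset below
are illustrative, not from any real cluster:

# Evenly spaced Murmur3Partitioner tokens for one DC; the token space
# runs from -2**63 to 2**63 - 1.
def murmur3_tokens(node_count, offset=0):
    return [(2**64 // node_count) * i - 2**63 + offset
            for i in range(node_count)]

dc1 = murmur3_tokens(2)              # [-9223372036854775808, 0]
dc2 = murmur3_tokens(2, offset=100)  # same spacing, shifted to stay unique
print(dc1)
print(dc2)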
I should have said that earlier really... I am using 1.2.16 and Vnodes
are enabled.
Thanks,
Vasilis
--
Kind Regards,
Vasileios Vlachos
Just out of curiosity, for a dead node, would it be possible to just
- replace the node (no data in data/commit dirs), same IP Address, same
hostname.
- restore the cassandra.yaml (initial_token etc)
- set auto_bootstrap:false
- start it up and then run a nodetool rebuild ?
Or would the Host
Thanks Vasileios. I think I need to make a call as to whether to switch to
vnodes or stick with tokens for my Multi-DC cluster.
Would you be able to show a nodetool ring/status from your cluster to see
what the token assignment looks like ?
Thanks
Matt
On Wed, Jun 4, 2014 at 8:31 AM, Vasileio
Indeed Alex, the problem was the RPC timeouts on the server...
Thanks a lot, it's simple, but I was losing time thinking my client config
was wrong!
[]s
2014-06-02 18:15 GMT-03:00 Alex Popescu :
> If I'm reading this correctly, what you are seeing is the read_timeout on
> Cassandra side and no
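For anyone hitting the same symptom, a hedged sketch of the two timeouts
involved: the per-request timeout on the Python driver side, and the
read_request_timeout_in_ms that Cassandra itself enforces in cassandra.yaml
(which was the setting that actually mattered here). The contact point,
keyspace, query and values below are illustrative assumptions:

from cassandra.cluster import Cluster

cluster = Cluster(['10.10.1.1'])           # hypothetical contact point
session = cluster.connect('my_keyspace')   # hypothetical keyspace

# Client side: give this request more time before the driver gives up.
# Server side, read_request_timeout_in_ms in cassandra.yaml still applies,
# so a slow query can time out there even with a generous client timeout.
rows = session.execute("SELECT * FROM source_table LIMIT 100", timeout=30.0)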
Hi Michael,
For sure I would be interested in this program!
I am new both to Python and to CQL. I started creating this copier, but
was having problems with timeouts. Alex solved my problem here on the list,
but I think I will still have a lot of trouble making the copy work well.
I open sou
On Tue, Jun 3, 2014 at 3:48 PM, Matthew Allen
wrote:
> Just out of curiosity, for a dead node, would it be possible to just
>
> - replace the node (no data in data/commit dirs), same IP Address, same
> hostname.
> - restore the cassandra.yaml (initial_token etc)
> - set auto_bootstrap:false
>
> That would work, but until CASSANDRA-6961 [1] there is no way to prevent
> this node from having a long window where it may serve stale
> reads at CLs below QUORUM, until the rebuild completes.
Thanks Robert, this makes perfect sense. Do you know if CASSANDRA-6961
will be ported to 1.2.x ?
And a