I've been trying a number of combinations of starting/stopping machines,
with and without various seeds, and I can't recreate the problem now. I
strongly suspect it was because we were trying to use the
DatacenterShardStrategy code. Between changing snitches and shard
strategy classes, we haven't been able to recreate the problem so far.
It's likely we also had a misconfiguration in the
DatacenterShardStrategy properties files, or a mismatch between the
snitch and strategy classes, at the time.
Sorry I didn't respond sooner; it would have saved you from spending
time on this testing. :(
However, perhaps Victor Jevdokimov might have a combination where this
occurred, as he stated in an earlier email.
Ron
Gary Dusbabek wrote:
I was unable to duplicate this problem using a 3-node cluster. Here
were my steps:
1. Bring up a seed node and give it a schema using loadSchemaFromYaml.
2. Bring up a second node. It received the schema from the seed node.
3. Bring the seed node down.
4. Bring up a third node, but set its seed to be the second node
(this is important!).
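The key to step 4 is the seed list in the third node's cassandra.yaml. A sketch, using made-up example addresses:

```yaml
# cassandra.yaml on the third node (addresses are hypothetical):
# point the seed list at the second node, not at the downed original seed.
seeds:
    - 10.0.0.2   # the second node, which is still up
```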
Is it possible that in your testing you had only one seed node (the
original node), and that this is the node you shut down? If a node
cannot contact a seed node, it never really joins the cluster and
effectively becomes its own cluster of one node. In that case it
follows that it would never receive schema definitions either.
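The behavior described above can be sketched as a toy model. This is purely illustrative Python, not Cassandra's actual gossip code; the function and node names are made up:

```python
# Toy model of seed-based cluster membership: a joining node must reach
# at least one live seed to learn the cluster state; otherwise it ends
# up alone in its own single-node "cluster" and never sees the schema.

def join_cluster(new_node, seeds, live_nodes):
    """Return the set of cluster members the new node ends up seeing."""
    reachable_seeds = [s for s in seeds if s in live_nodes]
    if not reachable_seeds:
        # No seed reachable: the node never joins and forms a cluster
        # of one, so it also never receives schema definitions.
        return {new_node}
    return set(live_nodes) | {new_node}

# Node 1 (the only configured seed) is down; node 3 is stranded alone.
print(join_cluster("node3", seeds=["node1"], live_nodes={"node2"}))
# With node 2 configured as a seed, node 3 joins the existing cluster.
print(join_cluster("node3", seeds=["node2"], live_nodes={"node2"}))
```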
If this isn't the case and you're still experiencing the problem,
please let me know the order in which you bring nodes up and down so I
can replicate it.
Gary.
On Fri, Jun 11, 2010 at 06:42, Gary Dusbabek <gdusba...@gmail.com> wrote:
I've filed this as
https://issues.apache.org/jira/browse/CASSANDRA-1182. I've created
steps to reproduce based on your email and placed them in the ticket
description. Can you confirm that I've described things correctly?
Gary.
On Thu, Jun 10, 2010 at 17:16, Ronald Park <ronald.p...@cbs.com> wrote:
Hi,
We've been fiddling around with a small Cassandra cluster, bringing nodes up
and down, to get a feel for how things are replicated and how spinning up a
new node works (before having to learn it live :). We are using trunk
because we want the expiration feature. Along with that comes the
new keyspace API to load the 'schema'.
What we found was that, if the node on which we originally loaded the
keyspace is down when a new node is added, the new node does not get the
keyspace schema. In some regards, that original node seems to act as the
'master', at least for distributing the keyspace definition. Is this a
known limitation?
Thanks,
Ron