Check the server logs to see if any errors are reported. If possible, can you 
change the logging level to DEBUG and run it again? 
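(For reference, in 1.1 the log level is set in conf/log4j-server.properties; a minimal sketch, assuming the default layout:)

```properties
# conf/log4j-server.properties -- change the root logger from the
# default INFO to DEBUG; the appender names (stdout, R) are the
# stock ones shipped with 1.1
log4j.rootLogger=DEBUG,stdout,R
```

Cassandra watches that file and should pick the change up without a restart, though restarting the node works too.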

> Note that the UUID did not change, 
Sounds fishy.

There is a similar issue that was fixed in 1.1.3, 
https://issues.apache.org/jira/browse/CASSANDRA-4432, but I doubt it applies 
here.
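It may also be worth confirming which schema UUID each node actually holds. In cassandra-cli, `describe cluster;` lists the schema versions per node; a sketch of what to look for (the IPs below are placeholders, the UUID is the one from your session):

```
[default@foobar] describe cluster;
Cluster Information:
   Schema versions:
        7745dd06-ee5d-3e74-8734-7cdc18871e67: [10.0.0.1, 10.0.0.2]
```

If the two nodes report different UUIDs, that points at a schema disagreement rather than a problem with the keyspace itself.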

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 19/07/2012, at 5:27 AM, Douglas Muth wrote:

> Hi folks,
> 
> I have an interesting problem in Cassandra 1.1.2. A Google search
> wasn't much help, so I thought I'd ask here.
> 
> Essentially, I have a "problem keyspace" in my 2-node cluster that
> keeps me from changing the replication factor on a specific keyspace.
> It's probably easier to show what I'm seeing in cassandra-cli:
> 
> [default@foobar] update keyspace test1 with strategy_options =
> {replication_factor:1};
> 2d5f0d16-bb4b-3d75-a084-911fe39f7629
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@foobar] update keyspace test1 with strategy_options =
> {replication_factor:1};
> 7745dd06-ee5d-3e74-8734-7cdc18871e67
> Waiting for schema agreement...
> ... schemas agree across the cluster
> 
> Even though keyspace "test1" had a replication_factor of 1 to start
> with, each of the above UPDATE KEYSPACE commands caused a new UUID to
> be generated for the schema, which I assume is normal and expected.
> 
> Then I try it with the problem keyspace:
> 
> [default@foobar] update keyspace foobar with strategy_options =
> {replication_factor:1};
> 7745dd06-ee5d-3e74-8734-7cdc18871e67
> Waiting for schema agreement...
> ... schemas agree across the cluster
> 
> Note that the UUID did not change, and the replication_factor in the
> underlying database did not change either.
> 
> The funny thing is that foobar had a replication_factor of 1
> yesterday, then I brought my second node online and changed the
> replication_factor to 2 without incident.  I only ran into issues when
> I tried changing it back to 1.
> 
> I tried running "nodetool clean" on both nodes, but the problem persists.
> 
> Any suggestions?
> 
> Thanks,
> 
> -- Doug
> 
> -- 
> http://twitter.com/dmuth
