Actually, I think it is a different issue (or a freak issue)… the invocation in 
InternalResponseStage is part of the “schema pull” mechanism this ticket 
relates to, and in my case it is actually what repaired (thankfully) the schema 
disagreement, once the disagreement was eventually noticed via gossip. For 
whatever reason, the “schema push” mechanism got broken for some nodes. 
Strange, as I say, since the push code looks for live nodes according to 
gossip, and all nodes were up according to gossip info at the time. So, sadly, 
the new debug logging in the pull path won’t help… if it happens again, I’ll 
have some more context to dig deeper before just going in and fixing the 
problem by restarting the nodes, which is what I did today.
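
As an aside for anyone whose automated jobs hit the same agreement timeout: 
each node publishes the schema version it is currently on in the system.local 
and system.peers tables, so a job can poll those and wait for convergence 
before touching a newly created table. Here is a minimal sketch, assuming a 
reasonably recent DataStax Python driver; the helper name, contact point, and 
timeouts are placeholders rather than our actual tooling:

    import time
    from cassandra.cluster import Cluster

    def wait_for_schema_agreement(session, timeout=60.0, interval=2.0):
        """Return True once every node reports the same schema_version,
        False if they still disagree when the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            local = session.execute(
                "SELECT schema_version FROM system.local").one()
            peers = session.execute(
                "SELECT schema_version FROM system.peers")
            versions = {local.schema_version}
            versions |= {row.schema_version for row in peers
                         if row.schema_version is not None}  # skip stale rows
            if len(versions) == 1:
                return True   # cluster agrees; safe to use the new table
            time.sleep(interval)
        return False          # disagreement persisted; go look at the nodes

    if __name__ == "__main__":
        session = Cluster(["127.0.0.1"]).connect()  # placeholder contact point
        if not wait_for_schema_agreement(session):
            raise RuntimeError("schema did not reach agreement in time")

The quicker manual equivalents are nodetool describecluster, which lists the 
schema versions the endpoints report, and nodetool gossipinfo, which shows the 
SCHEMA application state gossip holds for each node.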

On Aug 8, 2014, at 4:37 PM, graham sanderson <gra...@vast.com> wrote:

> Ok thanks - I guess I can at least enable the debug logging added for that 
> issue to see if it is deliberately choosing not to pull the schema… no repro 
> case, but it may happen again!
> 
> On Aug 8, 2014, at 4:21 PM, Robert Coli <rc...@eventbrite.com> wrote:
> 
>> On Fri, Aug 8, 2014 at 1:45 PM, graham sanderson <gra...@vast.com> wrote:
>> We have some data that is partitioned into tables created periodically (once 
>> a day). This morning, the automated process that creates them timed out 
>> because the schema did not reach agreement quickly enough after we created a 
>> new empty table.
>> 
>> I have seen this on 1.2.16, but it was supposed to be fixed in 1.2.18 and 
>> 2.0.7.
>> 
>> https://issues.apache.org/jira/browse/CASSANDRA-6971
>> 
>> If you can repro on 2.0.9, I would file a JIRA with repro steps and link it 
>> in a reply to this thread.
>> 
>> =Rob 
> 
