Thank you very much for pointing this out, Victor. Really useful to know.
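
For the archives, a minimal way to act on that (just a sketch: it assumes
GNU find for -print -quit, and <path to datadir> substituted for the real
location) might be:

$ if [ -z "$(find <path to datadir> -name '*-ib-*' -print -quit)" ]; then
      echo "no old-format sstables found; upgradesstables looks complete"
  fi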

On Wed, Sep 16, 2015 at 4:55 PM, Victor Chen <victor.h.c...@gmail.com>
wrote:

> Yes, you can examine the actual sstables in your cassandra data dir. That
> will tell you what version sstables you have on that node.
>
> You can refer to this link:
> http://www.bajb.net/2013/03/cassandra-sstable-format-version-numbers/
> (which I found via the Google search phrase "sstable versions") to see
> which version you need to look for -- the relevant section of the link
> says:
>
>> Cassandra stores the version of the SSTable within the filename,
>> following the format *Keyspace-ColumnFamily-(optional tmp
>> marker-)SSTableFormat-generation*
>>
>
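> As an illustration (hypothetical keyspace/CF names; the generation is
> just a counter, and real files also carry a component suffix such as
> -Data.db), a 1.2.x-format sstable for column family "users" in keyspace
> "ks1" would look something like:
>
> ks1-users-ib-5-Data.db
>
> After upgradesstables on 2.0.x, the same data would be rewritten as
> something like ks1-users-jb-6-Data.db (the generation just keeps
> counting up).
>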
> FYI -- at least in the cassandra-2.1 branch of the source code -- you can
> find descriptions of the sstable format versions in the comments of
> Descriptor.java. It looks like for your old and new versions, you'd be
> looking for something like:
>
> for 1.2.1:
> $ find <path to datadir> -name "*-ib-*" -ls
>
> for 2.0.1:
> $ find <path to datadir> -name "*-jb-*" -ls
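>
> If you just want a quick count of how many old-format files remain (a
> sketch along the same lines; a count of 0 on every node would suggest
> upgradesstables has finished on each of them):
>
> $ find <path to datadir> -name "*-ib-*" | wc -l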
>
>
> On Wed, Sep 16, 2015 at 10:02 AM, Vasileios Vlachos <
> vasileiosvlac...@gmail.com> wrote:
>
>>
>> Hello Rob, and thanks for your reply,
>>
>> In the end we had to wait for upgradesstables to finish on every node,
>> just to eliminate it as the cause of any weird behaviour after the
>> upgrade. However, this process can take a long time in a cluster with a
>> large number of nodes, which means no new work can be done for that
>> period.
>>
>>> 1) TRUNCATE requires all known nodes to be available in order to
>>> succeed; if you are restarting one, it won't be available.
>>>
>>
>> I suppose all means all here, not just all replicas, is that right? Not
>> directly related to the original question, but that might explain why we
>> sometimes end up with peculiar behaviour when we run TRUNCATE. We've now
>> taken the approach of DROPping the CF and recreating it when possible
>> (even though this is still problematic when reusing the same CF name).
>>
>>
>>> 2) in theory, the newly upgraded nodes might not get the DDL schema
>>> update properly due to some incompatible change
>>>
>>> To check for 2, do :
>>> "
>>> nodetool gossipinfo | grep SCHEMA |sort | uniq -c | sort -n
>>> "
>>>
>>> Run it before and after, and make sure the schema propagates correctly.
>>> There should be a new schema version on all nodes after each DDL change;
>>> if there is, you will likely be able to see the new schema on all the
>>> newly upgraded nodes.
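>>>
>>> As an illustration (made-up UUIDs; the exact gossipinfo output format
>>> varies a little between versions), a node that has not yet picked up
>>> the latest schema would show up as a minority version in that output:
>>>
>>>       1 SCHEMA:1bf41b6a-5e47-3f39-9ef4-bbd6ce2c6ac1
>>>       5 SCHEMA:ea63e099-37c5-3d7b-9ace-32f4c833653d
>>>
>>> With full agreement, this collapses to a single line whose count equals
>>> the number of nodes.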
>>>
>>>
>> Yes, this makes perfect sense. We monitor schema changes across the
>> cluster every minute with Nagios by checking the JMX console. It is an
>> important thing to monitor in several situations (running migrations, for
>> example, or during upgrades like the one you describe here).
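>>
>> For a quick manual check (assuming the nodetool version in use supports
>> it), nodetool describecluster also reports which nodes are on which
>> schema version:
>>
>> $ nodetool describecluster
>>
>> With full schema agreement, the "Schema versions" section should list a
>> single version with every node's address under it.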
>>
>> Is there a way to find out whether upgradesstables has been run against
>> a particular node or not?
>>
>> Many Thanks,
>> Vasilis
>>
>
>
