#1 The cause of this problem is a CREATE TABLE statement collision. Do
*not* generate tables dynamically from multiple clients, even with IF NOT
EXISTS. The first thing you need to do is fix your code so that this does
not happen. Instead, create your tables manually from cqlsh, allowing time
for the schema to settle (see the example below).
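
For example, assuming a hypothetical keyspace my_ks and table user_info
(substitute your real keyspace and schema), you would run the DDL once by
hand:

cqlsh -e "CREATE TABLE my_ks.user_info (user_id uuid PRIMARY KEY, name text);"

and then give the schema a moment to propagate to every node before the
application starts writing.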

#2 Here's the fix:

1) *Change your code to not automatically re-create tables (even with IF
NOT EXISTS).*

2) Run a rolling restart to ensure the schema matches across nodes, then
run nodetool describecluster around your cluster and check that there is
only one schema version (example output below).
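
For example (the cluster name, addresses, and UUID below are placeholders):

nodetool describecluster

Cluster Information:
        Name: Production Cluster
        Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
        Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
        Schema versions:
                86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.2, 10.0.0.3]

If more than one UUID appears under "Schema versions", the nodes still
disagree on the schema.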

ON EACH NODE:
3) Check your filesystem to see whether you have two (or more) directories
for the table in question under the data directory (example below).
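
For example, with the default data_file_directories of
/var/lib/cassandra/data (adjust to match your cassandra.yaml):

ls -d /var/lib/cassandra/data/<keyspace>/<table name>-*

Two or more entries that differ only in the UUID suffix after the table
name mean this node still has duplicate directories for that table.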

IF THERE ARE TWO OR MORE DIRECTORIES:
4) Identify from system.schema_columnfamilies which cf ID is the "new"
one (currently in use):

cqlsh -e "select * from system.schema_columnfamilies" | grep <table name>
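
The cf_id column in that output is the table's UUID, and the on-disk
directory name is just the table name followed by that UUID with the
dashes stripped. For instance, in your listing the directory
user_info-2fa076221b1011e58b954ffc8e9bfaa6/ corresponds to
cf_id 2fa07622-1b10-11e5-8b95-4ffc8e9bfaa6. The cf ID returned by the
query above is the one the cluster currently considers "new".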


5) Move the data from the "old" directory to the "new" one and remove the
old directory (a minimal sketch follows below).
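
A minimal sketch, assuming the data directory above and placeholder
<old-cfid>/<new-cfid> suffixes (use the hex IDs you identified in step 4,
and consider backing up the old directory first):

cd /var/lib/cassandra/data/<keyspace>
mv <table name>-<old-cfid>/* <table name>-<new-cfid>/
rmdir <table name>-<old-cfid>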

6) If there are multiple "old" directories, repeat step 5 for each of them.

7) Run nodetool refresh (example below).
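
For example, for a hypothetical keyspace my_ks:

nodetool refresh my_ks <table name>

This makes the node load the SSTables you just moved without a restart.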

IF THERE IS ONLY ONE DIRECTORY:

No further action is needed.

All the best,


Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com


On Fri, Jul 10, 2015 at 12:15 PM, Saladi Naidu <naidusp2...@yahoo.com>
wrote:

> My understanding is that the Cassandra file structure follows the naming
> convention below:
>
> /cassandra/data/<keyspace>/<table>/
>
>
>
> Whereas our file structure is as below: each table has multiple
> directories, and when we drop and recreate tables the old directories
> remain. Also, when we dropped the table one node was down; when it came
> back we ran nodetool repair, and the repair kept failing with the cfId
> error listed below.
>
>
> drwxr-xr-x. 16 cass cass 4096 May 24 06:49 ../
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> application_by_user-e0eec95019a211e58b954ffc8e9bfaa6/
> drwxr-xr-x.  2 cass cass 4096 Jun 25 10:15
> application_info-4dba2bf0054f11e58b954ffc8e9bfaa6/
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> application_info-a0ee65d019a311e58b954ffc8e9bfaa6/
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> configproperties-228ea2e0c13811e4aa1d4ffc8e9bfaa6/
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> user_activation-95d005f019a311e58b954ffc8e9bfaa6/
> drwxr-xr-x.  3 cass cass 4096 Jun 25 10:16
> user_app_permission-9fddcd62ffbe11e4a25a45259f96ec68/
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> user_credential-86cfff1019a311e58b954ffc8e9bfaa6/
> drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09
> user_info-2fa076221b1011e58b954ffc8e9bfaa6/
> drwxr-xr-x.  2 cass cass 4096 Jun 25 10:15
> user_info-36028c00054f11e58b954ffc8e9bfaa6/
> drwxr-xr-x.  3 cass cass 4096 Jun 25 10:15
> user_info-fe1d7b101a5711e58b954ffc8e9bfaa6/
> drwxr-xr-x.  4 cass cass 4096 Jun 25 10:16
> user_role-9ed0ca30ffbe11e4b71d09335ad2d5a9/
>
>
> WARN  [Thread-2579] 2015-07-02 16:02:27,523 IncomingTcpConnection.java:91
> - UnknownColumnFamilyException reading from socket; closing
> org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
> cfId=218e3c90-1b0e-11e5-a34b-d7c17b3e318a
>     at
> org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
> ~[apache-cassandra-2.1.2.jar:2.1.2]
>
>
> Naidu Saladi
>
