Hi,
We will follow the recommendation not to use materialized views.
Thanks a lot to both of you!
You helped me a lot.
Oh, and besides: we are also using the Lagom framework :) So we will also
be able to regenerate a Read-Side if we have to.
greetings,
Michael
On 07.06.2018 13:45, Evelyn Smith wrote:
Hey Michael,
In case you have a production cluster set up with multiple nodes,
assuming you have RF > 1, it’s easier to just replace the broken
node and restore its data. (For future reference.)
I wasn’t sure at the time whether “view” was referring to a materialised
view, although Pradeep’s comment along with your own suggests it might
be (I didn’t get a chance to look through the code to confirm whether
the view was an MV or something else, and I’m not that familiar with the
code base).
As for the choice of using materialised views: they aren’t being
deprecated, but they are currently marked as experimental, and most
people strongly advise against using them. If you can avoid them, do.
They’re associated with a lot of bugs and scalability issues, and
they’re just hard to get right unless you are exceptionally
familiar with Cassandra.
Regards,
Evelyn.
On 7 Jun 2018, at 3:05 am, Pradeep Chhetri <prad...@stashaway.com>
wrote:
Hi Michael,
We have faced the same situation as yours in our production
environment, where we suddenly got an "Unknown CF Exception" for
materialized views too. We are using Lagom apps with Cassandra for
persistence. In our case, since these views can be regenerated from
the original events, we were able to recover safely.
A few suggestions from my operations experience:
1) Upgrade your Cassandra cluster to 3.11.2, because it contains lots
of bug fixes specific to materialized views.
2) Never let your application create/update/delete Cassandra tables or
materialized views. Always create them manually, to make sure that only
one connection is performing schema changes.
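
As an illustration (the keyspace and table below are made-up names,
not the actual schema Lagom generates), you would run the DDL once, by
hand, from a single cqlsh session, instead of letting every
application instance race to do it at startup:

    -- run once, manually, from one cqlsh session (illustrative names)
    CREATE KEYSPACE IF NOT EXISTS my_service
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

    CREATE TABLE IF NOT EXISTS my_service.messages (
        persistence_id text,
        partition_nr   bigint,
        sequence_nr    bigint,
        event          blob,
        PRIMARY KEY ((persistence_id, partition_nr), sequence_nr)
    );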
Regards,
Pradeep
On Wed, Jun 6, 2018 at 9:44 PM, <m...@vis.at> wrote:
Hi Evelyn,
Thanks a lot for your detailed response.
The data is not important. We've already wiped the data and created
a new Cassandra installation. The data re-import task is already
running. We've lost a couple of months of data, but in this case
that doesn't matter.
Nevertheless, we will try what you told us, just to be
smarter/faster if this happens in production (where we will set up a
cluster with multiple Cassandra nodes anyway). I will drop
you a note when we are done.
Hmmm... the problem is within a "View". Are these the materialized
views?
I'm asking because:
* Someone on the internet (Stack Overflow, if I recall correctly)
mentioned that materialized views are to be deprecated.
* I attended a DataStax workshop in Zurich a couple of days ago
where a DataStax employee told me that we should not use
materialized views; it is better to create and fill all tables
directly (roughly the pattern sketched below).
Would you also recommend not using materialized views? As this
problem is related to a view, maybe we could avoid it
simply by following that recommendation.
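
If I understood the workshop correctly, the idea is something like the
following (completely made-up tables, just to check that I got the
concept): instead of a materialized view, the application maintains
the second, query-oriented table itself, e.g. with a logged batch:

    -- base table
    CREATE TABLE users (
        id    uuid PRIMARY KEY,
        email text,
        name  text
    );

    -- query table maintained by the application instead of an MV
    CREATE TABLE users_by_email (
        email text PRIMARY KEY,
        id    uuid,
        name  text
    );

    -- write both tables together so they stay in sync
    BEGIN BATCH
      INSERT INTO users (id, email, name)
        VALUES (62c36092-82a1-3a00-93d1-46196ee77204,
                'alice@example.com', 'Alice');
      INSERT INTO users_by_email (email, id, name)
        VALUES ('alice@example.com',
                62c36092-82a1-3a00-93d1-46196ee77204, 'Alice');
    APPLY BATCH;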
Thanks a lot again!
Greetings,
Michael
On 06.06.2018 16:48, Evelyn Smith wrote:
Hi Michael,
So I looked at the code; here are the key stages of your error message:
1. at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:292)
[apache-cassandra-3.11.0.jar:3.11.0]
At this step Cassandra is running through the keyspaces in its
schema, turning off compactions for all tables before it starts
replaying the commit log (so it isn’t an issue with the commit
log).
2. at org.apache.cassandra.db.Keyspace.open(Keyspace.java:127)
~[apache-cassandra-3.11.0.jar:3.11.0]
Loading the keyspace related to the column family that is erroring out.
3. at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:324)
~[apache-cassandra-3.11.0.jar:3.11.0]
Cassandra has initialised the column family and is reloading the
view.
4. at
org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:204)
~[apache-cassandra-3.11.0.jar:3.11.0]
At this point I haven’t had enough time to tell whether Cassandra is
requesting info on a specific column or still requesting
information on a column family. Regardless, given we have already
ruled out issues with the SSTables and their directory, and Cassandra
is yet to start processing the commit log, this suggests to me that
something is wrong in one of the system keyspaces storing the schema
information.
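
If you want to take a look yourself: on 3.x the schema lives in the
system_schema keyspace, and you can query it directly (substitute your
actual keyspace name below), for example:

    SELECT keyspace_name, view_name, base_table_name
      FROM system_schema.views;

    SELECT keyspace_name, table_name
      FROM system_schema.tables
     WHERE keyspace_name = 'your_keyspace';

If the view shows up in system_schema.views but its base table looks
off (or vice versa), that would point at the schema tables rather than
your data.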
There should definitely be a way to resolve this with zero data loss
by either:
1. Fixing the issue in the system keyspace SSTables (hard)
2. Replaying the commit log on a new Cassandra node that has been
restored from the current one (I’m not sure if this is possible, but
I’ll figure it out tomorrow)
The alternative, if you are OK with losing the commit log, is to back
up the data and restore it to a new node (or the same node with
everything blown away). This isn’t a trivial process, though
I’ve done it a few times.
How important is the data?
Happy to come back to this tomorrow (I need some sleep).
Regards,
Eevee.
On 5 Jun 2018, at 7:32 pm, m...@vis.at wrote:
Keyspace.getColumnFamilyStore
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org