Hi Alain,
We are adding 12 tables in a weekly job, and dropping the history table.
Before it adds/drops each table, our job checks for schema mismatch by running
"SELECT peer, schema_version, tokens FROM peers".
nodetool describecluster looks OK, only one schema version.
Cluster Info
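Concretely, the pre-check is along these lines (the peers table lives in the
system keyspace; the invocation below is just a sketch of our job, host is a
placeholder):

# rough sketch of the check the job runs before each ALTER/CREATE/DROP
cqlsh <host> -e "SELECT peer, schema_version, tokens FROM system.peers;"
nodetool describecluster   # should report a single schema version across all nodes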
Hello Roy,
The name of the table makes me think that you might be doing automated
changes to the schema. I just dug into this topic for someone else, and schema
changes are way less consistent than standard Cassandra operations (see
https://issues.apache.org/jira/browse/CASSANDRA-10699).
> sessions_raw
Maybe print out the value into the logfile? That should lead to some clue
about where the problem might be.
On Tue, May 7, 2019 at 4:58 PM Paul Chandler wrote:
Roy, we spent a long time trying to fix it but didn't find a solution. It was a
test cluster, so we ended up rebuilding the cluster rather than spending any
more time trying to fix the corruption. We had worked out what caused it, so we
were happy it wasn't going to occur in production. Sorry
I can say that it happens now as well; currently no node has been
added/removed.
The corrupted sstables are usually the index files, and on some machines the
sstable does not even exist on the filesystem.
On one machine I was able to dump the sstable to a dump file without any
issue . Any idea how to ta
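For what it's worth, the dump itself was just a plain sstabledump run, roughly
(path and generation below are illustrative):

sstabledump /var/lib/cassandra/data/<keyspace>/<table>-<id>/mc-1234-big-Data.db > /tmp/sstable-dump.json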
Roy,
I have seen this exception before, when a column had been dropped and then
re-added with the same name but a different type. In particular, we dropped a
column and re-created it as static, then got this exception from the old
sstables created prior to the DDL change.
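For illustration, the kind of sequence I mean (keyspace, table, column and
type are made up):

# drop a column, then re-add it with the same name as a static column
cqlsh -e "ALTER TABLE ks.sessions DROP col1;"
cqlsh -e "ALTER TABLE ks.sessions ADD col1 text static;"
# in our case, old sstables written before the DROP then threw this exception on read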
Not sure if this applies in
Can the disk have bad sectors? fsck or something similar can help.
Long shot: a repair or some other operation conflicting. I would leave that to
others.
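Something along these lines (device names are placeholders; only run fsck with
the filesystem unmounted):

sudo smartctl -H -a /dev/sda    # SMART health status and error counters (smartmontools)
sudo fsck -n /dev/sdb1          # read-only filesystem check; unmount first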
On Mon, May 6, 2019 at 3:50 PM Roy Burstein wrote:
It happens on the same column families, and they have the same DDL (as
already posted). I did not check it after cleanup.
On Mon, May 6, 2019, 23:43 Nitan Kainth wrote:
This is strange, I have never seen this. Does it happen to the same column
family? Does it happen after cleanup?
On Mon, May 6, 2019 at 3:41 PM Roy Burstein wrote:
Yes.
On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
Roy,
You mean all nodes show corruption when you add a node to the cluster?
Regards,
Nitan
Cell: 510 449 9629
> On May 6, 2019, at 2:48 PM, Roy Burstein wrote:
It happened on all the servers in the cluster every time I have added a node.
This is a new cluster, nothing was upgraded here; we have a similar cluster
running on C* 2.1.15 with no issues.
We are aware of the scrub utility, it just reproduces every time we add a
node to the cluster.
We have many t
Before you scrub, from which version were you upgrading and can you post a(n
anonymized) schema?
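One simple way to grab the schema for anonymizing (plain cqlsh, host is a
placeholder):

cqlsh <host> -e "DESCRIBE SCHEMA" > schema.cql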
--
Jeff Jirsa
> On May 6, 2019, at 11:37 AM, Nitan Kainth wrote:
Did you try sstablescrub?
If that doesn't work, you can delete all files of this sstable id and then
run repair -pr on this node.
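Roughly (keyspace/table names and the sstable generation are placeholders):

nodetool scrub <keyspace> <table>       # try an online scrub first
# or offline, with the node stopped:
sstablescrub <keyspace> <table>
# if the sstable is unrecoverable, delete all of its component files, e.g.
# rm /var/lib/cassandra/data/<keyspace>/<table>-*/mc-1234-big-*
# then bring the data back with:
nodetool repair -pr <keyspace> <table>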
On Mon, May 6, 2019 at 9:20 AM Roy Burstein wrote:
> Hi,
> We are having issues with Cassandra 3.11.4; after adding a node to the
> cluster we get many corrupted file