Thanks Jeff for the reply. Answers inlined.
> Tombstones probably aren't clearing because the same partition exists with
> older timestamps in other files (this is the "sstableexpiredblockers" problem,
> or "overlaps").
>>The RF is 2, so there are two copies of one partition on two nodes. So my
>>meth
I have checked the dmesg and messages logs; there is no eth* content in them, so I
think there was no network connection issue.
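(For reference, a few commands commonly used to check for interface-level errors beyond the logs; "eth0" below is just an example name, adjust to the actual interface:)

    dmesg | grep -i eth
    grep -i eth /var/log/messages
    # Per-interface error/drop counters
    ip -s link show eth0
    ethtool -S eth0 | grep -iE 'err|drop'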
Best Regards,
倪项菲/ David Ni
中移德电网络科技有限公司
Virtue Intelligent Network Ltd, co.
Add: 2003, 20F, No.35 Luojia Creative City, Luoyu Road, Wuhan, Hubei
Mob: +86 13797007811|Tel: + 86
Thank you, JH.
I took a look and I think I understand it.
Maybe, in summary:
if all the operations are LWT, there is no issue even if clocks are
drifted, because it's ballot-based.
But if some of the operations are non-LWT and clocks are drifted,
it might cause issues
(like overwriting da
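For reference, a minimal cqlsh sketch of the two write paths being contrasted; the keyspace/table ks.t and its columns are made-up placeholders:

    # LWT write: serialized through Paxos, ordered by ballot rather than by the clock
    cqlsh -e "UPDATE ks.t SET val = 'new' WHERE id = 1 IF val = 'old';"
    # Plain (non-LWT) write: ordered purely by its timestamp, so a node or client
    # with a clock running ahead can make this write shadow later ones
    cqlsh -e "UPDATE ks.t SET val = 'new' WHERE id = 1;"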
Tombstones probably aren't clearing because the same partition exists with
older timestamps in other files (this is the "sstableexpiredblockers"
problem, or "overlaps").
If you're certain you are ok losing that data, then you could stop the
node, remove lb-143951-big-*, and start the node. This i
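A minimal sketch of the overlap check mentioned above; the keyspace and table names are placeholders:

    # Lists sstables whose older overlapping data blocks fully-expired sstables
    # from being dropped under TWCS
    sstableexpiredblockers <keyspace> <table>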
Only one node having the problem is suspicious. It may be that your
application is improperly pooling connections, or you have a hardware
problem.
I don't see anything in nodetool that explains it, though you certainly have
a data model likely to cause problems over time (the cardinality of
rt_ac_sta
Look for errors on your network interface. I think you have periodic errors
in your network connectivity
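If connection pooling is the suspect, one rough check (assuming the default native-transport port 9042) is to count established client connections per source IP on the affected node; a single application host holding an unusually large number suggests a pooling problem:

    netstat -tn | grep ':9042' | grep ESTABLISHED | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn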
"Who do you think made the first stone spear? The Asperger guy.
If you get rid of the autism genetics, there would be no Silicon Valley"
Temple Grandin
Daemeon C.M. Reiydelle, San Fr
Hi All,
I changed STCS to TWCS months ago and left some old sstable files. Some are
almost all tombstones. To release disk space, I issued a compaction command on one
file via JMX. After the compaction was done, I got one new file with almost the
same size as the old one. It seems no tombstones were cleaned
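One way to see whether a given sstable's tombstones are actually droppable is sstablemetadata; the file path below is a placeholder:

    # Prints estimated droppable tombstones plus min/max timestamps for one sstable
    sstablemetadata /path/to/<keyspace>/<table>/<name>-big-Data.db | grep -iE 'droppable|timestamp'

If another sstable still overlaps the same partitions with older timestamps, expired tombstones cannot be purged even after compacting the file.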
That warning isn’t sufficient to understand why the node is going down
Cassandra 3.9 has some pretty serious known issues - upgrading to 3.11.3 is
likely a good idea
Are the nodes coming up on their own? Or are you restarting them?
Paste the output of nodetool tpstats and nodetool cfstats
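A minimal sketch for collecting that output, assuming a default package install with logs under /var/log/cassandra:

    nodetool tpstats > tpstats.txt
    nodetool cfstats > cfstats.txt
    # Long GC pauses and errors around the time of the crash are the usual suspects
    grep -iE 'GCInspector|ERROR|OutOfMemory' /var/log/cassandra/system.log | tail -n 50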
Hi Cassandra experts,
I am facing an issue: a node goes down every day in a 6-node cluster. The cluster
is just in one DC.
Every node has 4C 16G, and the heap configuration is MAX_HEAP_SIZE=8192m
HEAP_NEWSIZE=512m. Every node holds about 200G of data, the RF for the business CF is
3, and a node goes down one tim
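For context, those heap values normally live in conf/cassandra-env.sh; a quick sketch to confirm what the running JVM actually received:

    # cassandra-env.sh (values as described above)
    # MAX_HEAP_SIZE="8192M"
    # HEAP_NEWSIZE="512M"     # becomes -Xmn when the CMS collector is in use
    # Check the flags of the running process
    ps -ef | grep -i cassandra | grep -oE -e '-Xm[sxn][0-9]+[MmGg]'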
Jeff and Christophe,
Thank you very much! I'll take a look.
Jero
On Sun, Mar 25, 2018 at 10:21 PM, Jeff Jirsa wrote:
> Probably closer to https://issues.apache.org/jira/browse/CASSANDRA-13289
>
>
> Will be in 4.0
> --
> Jeff Jirsa
>
>
> On Mar 25, 2018, at 4:44 PM, Christophe Schmitz <
> chri
If you check the Jira issue I linked, you'll see a recent comment describing a
potential explanation for my mixed LWT/non-LWT problem. So, it looks like there
can be some edge cases.
I'd say that if data was inserted a while ago (seconds) there should be no
problems.
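When diagnosing which write "won" in a mixed LWT/non-LWT overwrite, the effective timestamp can be read back directly; ks.t and its columns are placeholders:

    # writetime() returns the winning value's write timestamp in microseconds since epoch
    cqlsh -e "SELECT val, writetime(val) FROM ks.t WHERE id = 1;"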
--
Jacques-Henri Berthemet