Hi Habash,
The reason for the "Cannot replace_address /10.xx.xx.xxx.xx because it doesn't
exist in gossip" error during the replace is that the dead node's gossip
information did not survive the full cluster restart (of the rest of the nodes).
I faced this issue before; you can check my experience of …
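For reference, a replacement normally goes through the replace-address JVM flag; a
minimal sketch of that step (the env-file path and service manager are assumptions,
not from your setup):

# on the fresh replacement node, BEFORE its first start; this only works
# while the dead node's state is still present in the cluster's gossip
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.xx.xx.xxx.xx"' \
  >> /etc/cassandra/cassandra-env.sh
sudo systemctl start cassandra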
Env:
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
I am trying to ingest a CSV that has dates in MM/DD/YYYY format (%m/%d/%Y).
While trying to load, I am providing WITH datetimeformat = '%m/%d/%Y',
but I still get errored out: *time data '03/12/2019' does not match format …*
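For what it's worth, a minimal cqlsh COPY sketch with that option (keyspace, table,
column, and file names here are hypothetical):

COPY my_ks.events (id, event_date)
FROM 'events.csv'
WITH HEADER = true AND DATETIMEFORMAT = '%m/%d/%Y';

One caveat worth checking: DATETIMEFORMAT is applied to timestamp values; if
event_date is declared as the CQL date type rather than timestamp, cqlsh may not
honour the option for it, which could explain the mismatch.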
Is this using GPFS (GossipingPropertyFileSnitch)? If so, can you open a JIRA? It
feels like GPFS is potentially not persisting the rack/DC info into system.peers
and loses the DC on restart. This is somewhat understandable, but it definitely
deserves a JIRA.
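As a quick check of that theory (a sketch; run it on one of the surviving nodes),
the persisted topology can be read directly:

SELECT peer, data_center, rack FROM system.peers;

If the dead node's row is gone, or its data_center/rack are null after the restart,
that would point the same way.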
On Thu, Mar 14, 2019 at 11:44 PM Stefan Miklosovic <stefan.mikloso...@> wrote:
On Thu, Mar 14, 2019 at 3:42 PM Fd Habash wrote:
> I can conclusively say, none of these commands were run. However, I think
> this is the likely scenario …
>
>
>
> If you have a cluster of three nodes 1,2,3 …
>
> - If node 3 shows as DN
> - Restart C* on nodes 1 & 2
> - Nodetool status should NOT show node 3 anymore …
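That sequence, as a hedged shell sketch (the IPs and the service manager are
illustrative):

nodetool status                        # node 3 (10.0.0.3 here) shows as DN
sudo systemctl restart cassandra       # run on nodes 1 and 2
# after the bounce, the dead node's gossip state may be gone entirely:
nodetool gossipinfo | grep 10.0.0.3    # no output => replace_address will fail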
Do you have a cassandra-topology.properties file in place? If so, GPFS will
instantiate a PropertyFileSnitch using that file for compatibility mode. Then, when
gossip state doesn't contain any endpoint info about the down node (because you
bounced the whole cluster), instead of reading the rack & DC from gossip, it falls
back to whatever cassandra-topology.properties says for that endpoint.
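A hedged illustration of the two snitch files involved (all values made up):

# cassandra-rackdc.properties — what GPFS gossips for the local node
dc=dc1
rack=rack1

# cassandra-topology.properties — if this file exists, GPFS falls back to it
# for endpoints it has no gossip state for; a node missing from it gets the default
10.xx.xx.xxx.xx=dc1:rack1
default=dc1:rack1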
Hey guys,
Can someone give me some ideas, or link some good material, for determining a good
/ aggressive tombstone strategy? I want to make sure my tombstones are getting
purged as soon as possible to reclaim disk.
Thanks
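For context, the usual knobs are per-table; a hedged CQL sketch (the table name and
values are illustrative, not recommendations):

-- gc_grace_seconds must stay longer than your repair interval,
-- otherwise deleted data can resurrect
ALTER TABLE my_ks.my_table
WITH gc_grace_seconds = 86400
AND compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'tombstone_threshold': '0.2',
  'unchecked_tombstone_compaction': 'true'
};

tombstone_threshold triggers a compaction once an SSTable's estimated tombstone
ratio passes the value; unchecked_tombstone_compaction lets single-SSTable
tombstone compactions run even when overlap checks would normally block them.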
3 pods deployed in OpenShift. Read requests are timing out due to GC pauses. Can
you please look at the parameters and values below to see if anything is out of
place? Thanks
cat cassandra.yaml
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: …
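Since the symptom is GC pauses, jvm.options usually matters more than
cassandra.yaml; a hedged sketch for 3.11 (the heap size is illustrative and has to
fit the pod's memory limit):

# jvm.options — 3.11 ships with CMS enabled and a commented-out G1 section
-Xms8G
-Xmx8G
# to try G1 instead, comment out the CMS flags and uncomment:
#-XX:+UseG1GC
#-XX:MaxGCPauseMillis=500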
1. What was the read request? Are you fetching a single row, a million,
something else?
2. What are your GC settings?
3. What's the hardware in use? What resources have been allocated to each
instance?
4. Did you see this issue after a single request or is the cluster under
heavy load?
If you're …
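A few hedged commands that would answer 2-4 from a node itself (the log path is the
package default and may differ in your image):

nodetool gcstats                                  # GC pause totals since last call
grep GCInspector /var/log/cassandra/system.log    # pauses long enough to be logged
nodetool tpstats                                  # pending/blocked thread pools
nodetool info | grep -i heap                      # heap used vs. configured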
Hi,
I want to add something:
5. Do you know on which table you are getting these read timeouts?
6. If yes, can you check whether you have excessive tombstone activity on it?
7. How often do you run repair?
8. Can you send the system.log and also the output of nodetool tpstats?
9. Is swap enabled or not?
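For 6 and 9, a couple of hedged checks (the keyspace/table and log path are
illustrative):

nodetool tablestats my_ks.my_table | grep -i tombstone   # tombstones per slice
grep -i tombstone /var/log/cassandra/system.log          # tombstone warnings from reads
swapon --show                                            # empty output => swap is off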
B