I've emailed you a raw log file of an instance of this happening.
I've been monitoring the timing of events in tpstats and the logs more
closely, and I believe this is what is happening:
- For some reason, C* triggers a flush storm (I say "some reason" because
I'm sure there is one, but I have had no luck pinning down what it is so far)
Hello Maxime
Increasing the flush writers won't help if your disk I/O is not keeping up.
I've had a look at the log file; below are some remarks:
1) There are a lot of SSTables on disk for some tables (events, for example,
but not only that one). I've seen that some compactions involve up to 32 SSTables.
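If it helps, here is roughly how to check that kind of thing from the shell
(the keyspace/table names below are placeholders; the flush pool name in
tpstats differs a bit between Cassandra versions):

# SSTable count for a suspect table
nodetool cfstats my_keyspace.events | grep -i "sstable count"
# pending/active compactions
nodetool compactionstats
# flush backlog (pool is named FlushWriter or MemtableFlushWriter depending on version)
nodetool tpstats | grep -i flushwriter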
What about the nodes in the private cloud cluster? If I specify
ec2MultiRegion, it fails, since the snitch tries to invoke the AWS API on the
node. Should I specify GossipingPropertyFileSnitch instead? I am not sure if
I can mix and match the two. Can someone advise me?
thx
srinivas
Thank you very much for your reply. This is a deeper interpretation of the
logs than I can do at the moment.
Regarding 2), it's a good assumption on your part, but in this case,
non-obviously, the loc table's primary key is actually not id; the schema
changed over time, which has led to this odd naming.
If the issue is related to I/O, you're going to want to determine whether
you're saturated. Take a look at `iostat -dmx 1`; you'll see avgqu-sz
(queue size) and svctm (service time). The higher those numbers
are, the more overwhelmed your disk is.
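For example, something like this (assuming the sysstat package is installed;
the exact column set varies slightly between iostat versions):

# -d devices, -m MB/s, -x extended stats, 1-second samples, 10 reports
iostat -dmx 1 10
# watch avgqu-sz and svctm/await on the disks holding your data and commitlog;
# sustained high values while flushes pile up usually mean the disk is saturated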
On Sun, Oct 26, 2014 at 12:01 PM, DuyHai Doan wrote:
I would try PropertyFileSnitch and use the public IPs of the nodes in AWS.
You'll need to set up the configuration files on each node.
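As a sketch (all IPs and DC/rack names below are made up; the file has to be
identical on every node, and endpoint_snitch must be set to PropertyFileSnitch
in cassandra.yaml):

# conf/cassandra-topology.properties (example values only)
# AWS nodes, listed by their public IPs
54.210.10.11=AWS:RAC1
54.210.10.12=AWS:RAC1
# private cloud nodes
10.0.0.21=PRIVATE:RAC1
10.0.0.22=PRIVATE:RAC1
# fallback for any node not listed above
default=PRIVATE:RAC1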
> On Oct 26, 2014, at 9:44 PM, Srinivas Chamarthi
> wrote:
>
> What about the nodes in the private cloud cluster? If I specify
> ec2MultiRegion, it
Hey all,
I'm trying to decommission a node.
First I'm getting a status:
[root@beta-new:/usr/local] #nodetool status
Note: Ownership information does not include topology; for complete
information, specify a keyspace
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal
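The plan, roughly, is something like this (the host ID below is just a
placeholder):

# on the node being removed, while it is still up:
nodetool decommission
# or, if the node were already down, from any live node:
nodetool status            # note the Host ID of the dead node
nodetool removenode <host-id>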
"Should doing a major compaction on those nodes lead to a restructuration
of the SSTables?" --> Beware of the major compaction on SizeTiered, it will
create 2 giant SSTables and the expired/outdated/tombstone columns in this
big file will be never cleaned since the SSTable will never get a chance t
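A quick way to see whether you already have such giant SSTables is to check
the on-disk file sizes, e.g. (assuming the default data directory; keyspace
and table names below are placeholders):

# largest data files first; one file far bigger than the rest is the telltale sign
ls -lhS /var/lib/cassandra/data/my_keyspace/events*/*-Data.db | head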
Hmm, thanks for the reading material.
I initially followed some (perhaps too old) maintenance scripts, which
included a weekly 'nodetool compact'. Is there a way for me to undo the
damage? Tombstones will be a very important issue for me, since the dataset
is very much a rolling dataset that uses TTLs heavily.