The data didn't spread evenly on disks
Hi there,

I am using 2.0.10 and my Cassandra node has 6 disks, with 6 data directories configured in cassandra.yaml. But the data was not stored evenly across these 6 disks:

disk1 67% used
disk2 100% used
disk3 100% used
disk4 76% used
disk5 69% used
disk6 81% used

So:

1. Is there a way to make the data spread evenly across the disks?
2. I set 'disk_failure_policy: best_effort', so when a disk is full, will the node still serve reads, or stop working entirely?
3. Any other suggestions?
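[For reference, a minimal sketch of the cassandra.yaml settings being described; the mount points below are hypothetical placeholders, not the poster's actual paths:]

    data_file_directories:
        - /mnt/disk1/cassandra/data
        - /mnt/disk2/cassandra/data
        - /mnt/disk3/cassandra/data
        - /mnt/disk4/cassandra/data
        - /mnt/disk5/cassandra/data
        - /mnt/disk6/cassandra/data

    # best_effort is meant to blacklist a failed data directory and keep
    # answering requests from the SSTables that remain readable, whereas
    # 'stop' shuts down gossip and client transports for the whole node.
    disk_failure_policy: best_effort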
Re: The data didn't spread evenly on disks
What compaction strategy are you using?

venkat

From: Yatong Zhang
Sent: Saturday, November 1, 2014 12:32 PM
To: user@cassandra.apache.org

> Hi there,
>
> I am using 2.0.10 and my Cassandra node has 6 disks, with 6 data directories configured in cassandra.yaml. But the data was not stored evenly across these 6 disks:
>
> disk1 67% used
> disk2 100% used
> disk3 100% used
> disk4 76% used
> disk5 69% used
> disk6 81% used
>
> So:
>
> 1. Is there a way to make the data spread evenly across the disks?
> 2. I set 'disk_failure_policy: best_effort', so when a disk is full, will the node still serve reads, or stop working entirely?
> 3. Any other suggestions?
Netstats > 100% streaming
We've been commissioning some new nodes on a 2.0.10 community edition cluster, and we're seeing streams that look like they're shipping way more data than they ought to for individual files during bootstrap:

/var/lib/cassandra/data/x/y/x-y-jb-11748-Data.db 3756423/3715409 bytes(101%) sent to /1.2.3.4
/var/lib/cassandra/data/x/y/x-y-jb-11043-Data.db 584745/570432 bytes(102%) sent to /1.2.3.4
/var/lib/cassandra/data/x/z/x-z-jb-525-Data.db 13020828/11141590 bytes(116%) sent to /1.2.3.4
/var/lib/cassandra/data/x/w/x-w-jb-539-Data.db 1044124/51404 bytes(2031%) sent to /1.2.3.4
/var/lib/cassandra/data/x/v/x-v-jb-546-Data.db 971447/22253 bytes(4365%) sent to /1.2.3.4
/var/lib/cassandra/data/x/y/x-y-jb-10404-Data.db 6225920/23215181 bytes(26%) sent to /1.2.3.4

Has anyone else seen something like this, and is this something we should be worried about? I haven't been able to find any information about this symptom.

-Eric
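[The figures above are from the streaming section of nodetool netstats (per the subject line). A sketch of how one might keep an eye on it while a bootstrap is in flight; host and interval are placeholders:]

    # Poll streaming progress on the joining node every 10 seconds
    watch -n 10 'nodetool -h 127.0.0.1 netstats'

    # Cross-check against compaction activity on the same node
    nodetool -h 127.0.0.1 compactionstats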
Re: The data didn't spread evenly on disks
leveled compaction

On Sat, Nov 1, 2014 at 5:53 PM, venkat sam wrote:

> What compaction strategy are you using?
>
> venkat
>
> From: Yatong Zhang
> Sent: Saturday, November 1, 2014 12:32 PM
> To: user@cassandra.apache.org
>
> Hi there,
>
> I am using 2.0.10 and my Cassandra node has 6 disks, with 6 data directories configured in cassandra.yaml. But the data was not stored evenly across these 6 disks:
>
> disk1 67% used
> disk2 100% used
> disk3 100% used
> disk4 76% used
> disk5 69% used
> disk6 81% used
>
> So:
>
> 1. Is there a way to make the data spread evenly across the disks?
> 2. I set 'disk_failure_policy: best_effort', so when a disk is full, will the node still serve reads, or stop working entirely?
> 3. Any other suggestions?
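[If it helps to quantify the imbalance per data directory, a sketch under the same hypothetical mount points as above; substitute the actual data_file_directories, keyspace, and table names:]

    # Free space on each data disk
    df -h /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 /mnt/disk6

    # On-disk size of one table's SSTables in each data directory
    du -sh /mnt/disk*/cassandra/data/<keyspace>/<table>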
Force purging of tombstones
Hi,

I recently ran a migration where I modified (essentially, deleted and re-inserted) a large number of clustering keys. This obviously created a lot of tombstones, and I am now receiving warnings in my logs that the number of tombstones is above a certain threshold when I query all rows for a certain partition key. I suspect this might lead to RPC timeouts in certain cases.

The migration was done more than `gc_grace_seconds` ago, so these tombstones can safely be removed by now.

Question: Is there any way for me to force Cassandra to purge tombstones for all sstables of a certain column family (running LCS)?

I asked this question on IRC, and AFAIK the only way would be to switch to the SizeTiered compaction strategy, issue a major compaction, and then switch back to LCS. Would there be any implications/side effects of executing this procedure?

Thanks,
Jens

———
Jens Rantil
Backend engineer
Tink AB

Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se
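[For concreteness, a sketch of the procedure described above. The keyspace and table names (my_ks.my_table) are placeholders, and any compaction sub-options you rely on would need to be restated in the ALTER statements:]

    # 1. Switch the table from LCS to SizeTiered
    cqlsh <<'CQL'
    ALTER TABLE my_ks.my_table
      WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
    CQL

    # 2. Major compaction: merges the table's SSTables, which lets tombstones
    #    older than gc_grace_seconds be dropped
    nodetool compact my_ks my_table

    # 3. Switch back to LCS; the resulting large SSTable will be re-leveled,
    #    which can cause a burst of compaction activity
    cqlsh <<'CQL'
    ALTER TABLE my_ks.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy'};
    CQL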