Hi Jim,
It's 600 megabits, not megabytes, so roughly 600/8 = 75 MB/s. Also,
streaming happens from any 3 nodes at a time.
However, we have also tried the default streaming throughput, which is 200
megabits per second (25 MB/s), and still hit the same issue. The heap is
set to 8 GB on GCP and seems pretty
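For what it's worth, the streaming cap can be checked and changed at
runtime with nodetool, no restart needed; a minimal sketch (the 100 Mb/s
value is only an example to experiment with):

    # Show the current cap, in megabits per second:
    nodetool getstreamthroughput
    # Lower it, e.g. to 100 Mb/s, then re-test the rebuild:
    nodetool setstreamthroughput 100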
I ran into a similar issue before. What I did was reduce the heap size
for the rebuild and reduce the stream throughput.
But it depends on the version and your environment, so it may not be your
case; I just hope it helps.
With ps -ef | grep java you will see a new java process for the rebuild;
check what memory size it uses. If it uses the default, it may
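A minimal sketch of checking that process's memory on Linux (<pid> is a
placeholder to substitute with the actual process id):

    # Find the rebuild-related java process and its -Xmx/-Xms heap flags:
    ps -ef | grep java
    # Show the resident memory of that process:
    ps -o pid,rss,cmd -p <pid>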
Anyone else having problems doing this?
wget -qO - https://www.apache.org/dist/cassandra/KEYS | apt-key add -
gpg: key AD055BC2: no valid user IDs
This seems to refer to the following section...
-----END PGP PUBLIC KEY BLOCK-----
pub ed25519 2021-10-05 [SC]
7882099F3A0033DDD63D100C92BBDF
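Assuming GnuPG 2.2 or later, the KEYS file can be inspected without
importing anything, to see which key that section belongs to:

    wget -qO - https://www.apache.org/dist/cassandra/KEYS | gpg --show-keys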
I think this is a bit extreme. If you know that 100% of all queries that
write to the table include a TTL, not having a TTL on the table is just
fine. You just need to ensure that you always write correctly.
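For example, a per-query TTL write looks like this; a sketch in which the
ks.events table and its columns are hypothetical (2678400 s = 31 days):

    cqlsh -e "INSERT INTO ks.events (id, ts, payload)
              VALUES (uuid(), toTimestamp(now()), 'example')
              USING TTL 2678400;"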
On Wed, Oct 6, 2021 at 8:57 AM Bowen Song wrote:
> TWCS without a table TTL is unlikely
TWCS without a table TTL is unlikely to work correctly, and adding the
table TTL retrospectively alone is also unlikely to fix the existing
issue. You may need to add the table default TTL and update all existing
data to reflect the TTL change, and then trigger a major compaction to
update the
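For reference, a major compaction on a single table can be triggered with
nodetool; a minimal sketch, with hypothetical keyspace/table names:

    nodetool compact ks events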
Hi, it was not set before. I have set it now to ensure all data have a TTL.
Thanks for your help.
On 06/10/2021 at 13:47, Bowen Song wrote:
What is the table's default TTL? (Note: it may be different from the
TTL of the data in the table)
On 06/10/2021 09:42, Michel Barret wrote:
Hello,
I'm trying to
Thank you for your pointers. sstablemetadata seems to show that we have
data without a TTL (= 0); I don't know how that can appear in our system.
- I replaced our per-query TTL with the table's default TTL.
- I reduced gc_grace_seconds to one day.
- I applied unchecked_tombstone_compaction (on 31 days of data); all three
changes are sketched as CQL below.
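A sketch of those three changes, assuming a hypothetical table ks.events;
note that ALTER TABLE replaces the whole compaction map, so the existing
TWCS options have to be restated (2678400 s = 31 days):

    cqlsh -e "ALTER TABLE ks.events
      WITH default_time_to_live = 2678400
      AND gc_grace_seconds = 86400
      AND compaction = {'class': 'TimeWindowCompactionStrategy',
                        'compaction_window_size': '24',
                        'compaction_window_unit': 'HOURS',
                        'max_threshold': '32',
                        'unchecked_tombstone_compaction': 'true'};"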
What is the table's default TTL? (Note: it may be different from the
TTL of the data in the table)
On 06/10/2021 09:42, Michel Barret wrote:
Hello,
I'm trying to use Cassandra (3.11.5) with 8 nodes (in a single
datacenter). I use one simple table; all data is inserted with a 31-day
TTL (the data
Hi Michel,
I have had similar problems in the past, and found this Last Pickle post
very useful: https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
This should help you pinpoint what is stopping the SSTables from being
deleted. Assuming you are never manually deleting records from the table
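One way to pinpoint it is sstablemetadata, which prints each SSTable's
timestamp range and estimated droppable tombstones; a sketch, with
hypothetical data paths for a Cassandra 3.11 install:

    sstablemetadata /var/lib/cassandra/data/ks/events-*/mc-*-big-Data.db \
        | grep -E 'Minimum timestamp|Maximum timestamp|droppable'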
Hello,
I'm trying to use Cassandra (3.11.5) with 8 nodes (in a single
datacenter). I use one simple table; all data is inserted with a 31-day
TTL (the data is never updated).
I use the TWCS strategy with the options below (a full table definition is
sketched after the list):
- 'compaction_window_size': '24'
- 'compaction_window_unit': 'HOURS'
- 'max_threshold': '32'
-
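Put together, those options look like this on a table definition; a
minimal sketch with a hypothetical ks.events table, including the
table-level 31-day default TTL suggested elsewhere in the thread:

    cqlsh -e "CREATE TABLE ks.events (
        id uuid, ts timestamp, payload text,
        PRIMARY KEY (id, ts)
      ) WITH default_time_to_live = 2678400
        AND compaction = {'class': 'TimeWindowCompactionStrategy',
                          'compaction_window_size': '24',
                          'compaction_window_unit': 'HOURS',
                          'max_threshold': '32'};"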