Hi, thanks for the prompt reply.
I've tried this. Here's what I'm writing:
bytes: 3 capacity: 3 limit: 3 offset: 0
Here's what I'm reading:
cell buffer size: 1048576 capacity: 1048576 limit: 212 arrayOffset: 0
It still does not seem right. I would have expected Cassandra to allocate a
buffer the
I'm not a Java developer, but to the best of my knowledge the
ByteBuffer.array() method returns the whole backing byte array, not just
the part of it that is meaningful (i.e. has ever been written
to). You may want to check the difference between bb.capacity() and
bb.limit(), and also check
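To make the capacity()/limit() distinction concrete, here is a minimal, self-contained Java sketch (not Cassandra code; the values only mimic the numbers above). It shows that array() exposes the entire backing array regardless of how much was written, while limit()/remaining() bound the meaningful bytes, and how to copy out only that readable window:

    import java.nio.ByteBuffer;
    import java.util.Arrays;

    public class ByteBufferSliceDemo {
        public static void main(String[] args) {
            // Simulate a large backing buffer of which only a few bytes are meaningful.
            ByteBuffer bb = ByteBuffer.allocate(1_048_576);   // capacity: 1048576
            bb.put(new byte[] {1, 2, 3});                     // write 3 meaningful bytes
            bb.flip();                                        // limit becomes 3, position 0

            // array() exposes the whole backing array, regardless of limit.
            System.out.println("array length: " + bb.array().length);  // 1048576
            System.out.println("capacity:     " + bb.capacity());      // 1048576
            System.out.println("limit:        " + bb.limit());         // 3
            System.out.println("remaining:    " + bb.remaining());     // 3

            // Copy only the readable window
            // [arrayOffset() + position(), arrayOffset() + limit()).
            byte[] meaningful = Arrays.copyOfRange(
                    bb.array(),
                    bb.arrayOffset() + bb.position(),
                    bb.arrayOffset() + bb.limit());
            System.out.println("meaningful bytes: " + meaningful.length); // 3
        }
    }

With the numbers from the read path above, the same arithmetic would give 212 meaningful bytes out of a 1048576-byte backing array.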
Hi,
I'm new to the list but not new to Cassandra. I'm writing an app on top of
C* and I have come across an issue (huge cell buffer size after applying a
mutation) that I haven't been able to resolve yet. I would appreciate any
suggestions or help. Here are the details:
1. I have a c
Hi Paul,
It's only problematic if you are trying to do *a lot of* subrange
incremental repairs. The whole point of incremental repair is that each
run is incremental and only touches recently changed data, so you
shouldn't need to split each node into too many subranges to re
Thanks Erick and Bowen
I do find all the different parameters for repairs confusing, and even reading
up on it now, I see DataStax warns against incremental repairs with -pr, but
the code here seems to make that warning unnecessary.
Anyway, running it like this produces data in the syst
Hi Erick,
From the source code:
https://github.com/apache/cassandra/blob/6709111ed007a54b3e42884853f89cabd38e4316/src/java/org/apache/cassandra/service/StorageService.java#L4042
The -pr option has no effect if -st and -et are specified. Therefore,
the command results in an incremental repair
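For what it's worth, here is a heavily simplified, hypothetical Java sketch of that precedence. It is not the actual StorageService code and all names are made up for illustration: when explicit -st/-et tokens are supplied, the given subrange is repaired and the -pr flag has no effect; otherwise -pr selects the node's primary ranges.

    import java.util.Collection;
    import java.util.List;

    /** Hypothetical illustration of repair-range selection; not Cassandra's real API. */
    final class RepairRangeSketch
    {
        /** Placeholder token range type for the sketch. */
        record Range(long start, long end) {}

        /**
         * If -st/-et were given, repair exactly that subrange and ignore -pr.
         * Otherwise fall back to the node's primary ranges (-pr) or to all
         * ranges the node replicates.
         */
        static Collection<Range> rangesToRepair(String startToken,
                                                String endToken,
                                                boolean primaryRange,
                                                Collection<Range> primaryRanges,
                                                Collection<Range> localRanges)
        {
            if (startToken != null && endToken != null)
            {
                // Explicit subrange wins; -pr has no effect on this branch.
                return List.of(new Range(Long.parseLong(startToken),
                                         Long.parseLong(endToken)));
            }
            return primaryRange ? primaryRanges : localRanges;
        }
    }

So, if I remember correctly, a command along the lines of "nodetool repair -st <start> -et <end> <keyspace>" repairs just that token range, and because recent Cassandra versions default to incremental repair unless --full is passed, it ends up incremental whether or not -pr is on the command line.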