Hi,
We are using Cassandra to develop our application, and we use a secondary
index on one of our tables for faster reads. In production we have now seen
disk usage growing on the table that has the secondary index. This has become
a problem for us since we have a lot of data that need to s
This is not the case during host replacement, correct?
On Tue, Oct 16, 2018 at 10:04 AM Jeremiah D Jordan <
jeremiah.jor...@gmail.com> wrote:
> As long as we are correctly storing such things in the system tables and
> reading them out of the system tables when we do not have the information
> fro
Hi,
It's really not appreciably slower compared to the decompression we are going
to do, which will take several microseconds. Decompression will also be
faster, because we will do less unnecessary decompression, and the
decompression itself may be faster since it may fit in
I think if we're going to drop it to 16k, we should invest in the compact
sequencing as well. Just lowering it to 16k will have a potentially painful
impact on anyone running low-memory nodes, but if we can do it without the
memory impact, I don't think there's any reason to wait another major
versi
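The memory impact being worried about here can be put in back-of-the-envelope terms: Cassandra keeps one offset per compressed chunk in memory for each SSTable, so halving the chunk size multiplies the number of offsets. This sketch assumes an 8-byte offset per chunk (an assumption about the in-memory representation; it does not model the compact encoding mentioned above):

```python
# Back-of-the-envelope estimate of compression-offset memory,
# assuming one 8-byte offset per chunk is held in memory.
KIB = 1024
GIB = 1024 ** 3

def offset_memory_bytes(data_bytes: int, chunk_kib: int,
                        offset_bytes: int = 8) -> int:
    """Approximate offset memory for `data_bytes` of on-disk data."""
    num_chunks = data_bytes // (chunk_kib * KIB)
    return num_chunks * offset_bytes

terabyte = 1024 * GIB  # 1 TiB of data on the node
for chunk in (64, 16):
    mib = offset_memory_bytes(terabyte, chunk) // (1024 * 1024)
    print(f"{chunk:>2} KiB chunks over 1 TiB: {mib} MiB of offsets")
# 64 KiB chunks over 1 TiB: 128 MiB of offsets
# 16 KiB chunks over 1 TiB: 512 MiB of offsets
```

A 4x jump like this is noise on a large heap but real pain on a low-memory node, which is why a more compact offset representation is suggested as a prerequisite.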
+1
I would guess a lot of C* clusters/tables have this option set to the
default value, and not many of them need to read such big chunks of data.
I believe this will greatly limit disk overreads for a fair share (a large
majority?) of new users. It seems fair enough to change this
Hi Mick,
Can you share the link to the cwiki if you have started it?
Thanks
Sankalp
> On Oct 4, 2018, at 5:20 PM, Mick Semb Wever wrote:
>
> Dinesh / Sankalp,
>
> My suggestion was to document the landscape, in the hope of, and as an
> attempt at, better understanding the requirements possible to