John, anything I say will be as a collective 'we', since it has been a team
effort here at Trip and I've just been the hired gun helping out a bit. I'm
more of a Postgres and Java guy, so filter my answers accordingly.
I can't say we saw as much relevance to tuning chunk cache size as we did
When I run the command 'select ttl(udt_field) from table;' I'm getting the error
'InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot
use selection function ttl on collections"'. How can I get the TTL from a UDT
field?
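For what it's worth, Cassandra rejects ttl() and writetime() on multi-cell columns, and a non-frozen UDT is stored as one cell per field, so there is no single TTL to select. A sketch of one possible workaround, using a hypothetical table and type for illustration, is to declare the column frozen so the whole value is a single cell:

```sql
-- Hypothetical schema for illustration only.
CREATE TYPE address (street text, city text);

-- A frozen UDT is serialized as a single cell, so the whole value
-- shares one write time and one TTL.
CREATE TABLE t (
    id int PRIMARY KEY,
    info frozen<address>
);

-- With the column frozen, selecting its TTL should be accepted:
SELECT ttl(info) FROM t;
```

The trade-off is that a frozen UDT can only be overwritten as a whole, not updated field by field.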
Mark Furlong
Options, assuming the data model and configuration are good and the data size
per node is less than 1 TB (though there is no such benchmark):
1. Scale the infrastructure for more memory.
2. Try changing disk_access_mode to mmap_index_only.
In that case you should not have any in-memory DB tables.
3. Though DataStax does not recommend it, and
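For reference, disk_access_mode is a largely undocumented cassandra.yaml setting. A minimal sketch of option 2, assuming an otherwise default configuration:

```yaml
# cassandra.yaml
# Default is 'auto': mmap both data and index files on 64-bit JVMs.
# 'mmap_index_only' mmaps only the index files and reads data files
# via standard buffered I/O, trading some read speed for lower
# virtual-memory / off-heap pressure.
disk_access_mode: mmap_index_only
```

A node restart is required for the change to take effect.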
Reid,
I've only been working with Cassandra for 2 years, and this echoes my
experience as well.
Regarding the cache use, I know every use case is different, but have you
experimented and found any performance benefit to increasing its size?
Thanks,
John Belliveau
On Mon, Dec 2, 2019, 11:07 AM
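On the cache-sizing question above: the chunk cache is controlled by file_cache_size_in_mb in cassandra.yaml. A minimal sketch, assuming Cassandra 3.6 or later (the value shown is an example, not a recommendation):

```yaml
# cassandra.yaml
# The chunk cache holds recently read SSTable chunks off-heap.
# If unset, the default is the smaller of 1/4 of the heap or 512 MB.
file_cache_size_in_mb: 1024   # example value only; benchmark before raising
```

Whether a larger cache helps is workload-dependent; compare the chunk cache hit rate (visible in nodetool info) before and after changing it.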
I am sorry! That is true. I forgot the "*not*"!
1. It's *not* recommended to rely on the commit log after a node failure.
Cassandra has other options, such as the replication factor, as a
substitute solution.
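The arithmetic behind that advice: with replication factor RF, a QUORUM read or write needs floor(RF/2) + 1 replicas, so RF=3 keeps QUORUM operations available through a single node failure. A minimal sketch (the function names are illustrative, not a Cassandra API):

```python
def quorum(rf: int) -> int:
    # Replicas required for a QUORUM read or write.
    return rf // 2 + 1

def tolerated_failures(rf: int) -> int:
    # Nodes that can be down while QUORUM still succeeds.
    return rf - quorum(rf)

for rf in (1, 3, 5):
    print(f"RF={rf}: quorum={quorum(rf)}, "
          f"survives {tolerated_failures(rf)} node(s) down")
```

With RF=3, quorum is 2, so losing one node still leaves enough replicas to serve reads and writes.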
*VafaTech.com - A Total Solution for Data Gathering & Analysis*
On Tue, Dec 3, 2019 at 10:42 AM Adarsh Kum