Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread Reid Pinchback
John, anything I’ll say will be as a collective ‘we’ since it has been a team effort here at Trip, and I’ve just been the hired gun to help out a bit. I’m more of a Postgres and Java guy so filter my answers accordingly. I can’t say we saw as much relevance to tuning chunk cache size, as we did
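For context on this thread's subject line: the 512MiB figure matches the chunk cache limit, which cassandra.yaml controls via file_cache_size_in_mb (by default 512MiB, or a quarter of the heap if that is smaller). A minimal sketch of the setting follows; the value shown is an illustrative assumption, not a recommendation drawn from this thread:

    # cassandra.yaml (sketch; value is illustrative, not a recommendation)
    # The chunk cache holds recently read SSTable chunks off-heap. When the
    # limit is reached, Cassandra logs "Maximum memory usage reached (...),
    # cannot allocate chunk of 1.000MiB" and allocates the buffer outside
    # the cache instead, so the message is a tuning hint rather than an error.
    file_cache_size_in_mb: 1024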

TTL on UDT

2019-12-03 Thread Mark Furlong
When I run the command 'select ttl(udt_field) from table;' I'm getting an error 'InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot use selection function ttl on collections"'. How can I get the TTL from a UDT field? Mark Furlong
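A sketch of the usual explanation, hedged since the thread's replies are not shown here: ttl() reads the expiry of a single cell, and a non-frozen UDT is stored as a multi-cell value (like a collection), which is why the query is rejected; a frozen<...> UDT is written as one cell, so ttl() can be applied to it. Illustrative CQL with hypothetical type and table names:

    -- Sketch only; the type and table names are hypothetical, not from the thread.
    CREATE TYPE IF NOT EXISTS addr (street text, city text);

    -- Non-frozen UDT column: multi-cell, so ttl(home) fails with
    -- "Cannot use selection function ttl on collections".
    CREATE TABLE IF NOT EXISTS users_nonfrozen (id int PRIMARY KEY, home addr);

    -- Frozen UDT column: stored as a single cell, so ttl() is allowed.
    CREATE TABLE IF NOT EXISTS users_frozen (id int PRIMARY KEY, home frozen<addr>);
    INSERT INTO users_frozen (id, home)
      VALUES (1, {street: '1 Main St', city: 'Example City'}) USING TTL 86400;
    SELECT ttl(home) FROM users_frozen WHERE id = 1;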

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread Shishir Kumar
Options, assuming the data model and configuration are good and the data size per node is less than 1 TB (though there is no firm benchmark for that): 1. Scale infrastructure for more memory. 2. Try changing disk_access_mode to mmap_index_only; in this case you should not have any in-memory DB tables. 3. Though DataStax does not recommend
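For readers following option 2: disk_access_mode is not present in the default cassandra.yaml but is a recognized setting; a minimal sketch, assuming Cassandra 3.x defaults, is below (illustrative, not a recommendation from this thread):

    # cassandra.yaml sketch (illustrative only)
    # The default is "auto", which memory-maps both data and index files on
    # 64-bit JVMs. "mmap_index_only" memory-maps only the index files, so
    # SSTable data is read through buffered I/O rather than mmapped chunks.
    disk_access_mode: mmap_index_only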

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread John Belliveau
Reid, I've only been working with Cassandra for 2 years, and this echoes my experience as well. Regarding the cache use, I know every use case is different, but have you experimented and found any performance benefit to increasing its size? Thanks, John Belliveau On Mon, Dec 2, 2019, 11:07 AM

Re: Optimal backup strategy

2019-12-03 Thread Hossein Ghiyasi Mehr
I am sorry! This is true. I forgot "*not*"! 1. It's *not* recommended to rely on the commit log after a node failure. Cassandra has other options, such as the replication factor, as a substitute solution. On Tue, Dec 3, 2019 at 10:42 AM Adarsh Kum