I actually never set Xmx above 32 GB for any Java application unless it genuinely needs more, simply because of this fact: "once you exceed this 32 GiB border the JVM will stop using compressed object pointers, effectively reducing the available memory. That means that to increase your JVM heap above 32 GiB you must go way above it: increasing the heap from 32 GiB to anything below ~48 GiB will actually decrease the amount of available memory (!) because compressed object pointers are no longer there."

One other thing: why won't the default Cassandra setup
go above 8 GB of heap, even if 256 GB of RAM is available (considering that the default configs should be useful for most cases)?

Also, most data structures can be moved off-heap, even the memtables, and this has been recommended for better performance (although I have never changed the default configs to move anything off-heap).

Sent using Zoho Mail

Tue, 17 Jul 2018, 17:22, Rahul Singh
<rahul.xavier.si...@gmail.com>:

I usually don't want to put more than 1.0-1.5 TB (at the most) per node. Beyond that, streaming becomes slower than my patience allows; staying under it keeps the repair / compaction processes lean. Memory depends on how much you plan to keep in memory in terms of key / row cache. For my uses, no less than 64 GB, if not more ~ 128 GB. The lowest I've gone is 16 GB, but that's for dev purposes only.

--
Rahul Singh
rahul.si...@anant.us
https://www.anant.us/datastax
Anant Corporation

On Jul 17, 2018, 8:26 AM -0400, Vsevolod Filaretov <vsfilare...@gmail.com> wrote:

What are the general community's and/or your personal experience viewpoints on the question of Cassandra node RAM amount vs. data stored per node?

Thank you very much.

Best regards,
Vsevolod.
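The compressed-oops border quoted at the top of the thread can be observed from inside the JVM itself. A minimal sketch, HotSpot-specific (it relies on the com.sun.management diagnostic bean, which is not part of the standard Java API on all JVMs):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Prints whether this HotSpot JVM is using compressed object pointers,
// plus the configured max heap. Try it with -Xmx31g vs -Xmx33g to see
// the flag flip off around the ~32 GiB border discussed above.
public class CompressedOopsCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String oops = hotspot.getVMOption("UseCompressedOops").getValue();
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("UseCompressedOops = " + oops);
        System.out.println("Max heap (GiB) = " + (maxHeapBytes >> 30));
    }
}
```

On a stock HotSpot JVM, running this with -Xmx31g should report true and -Xmx33g should report false; the exact cutoff sits slightly below 32 GiB and depends on heap base alignment, so check your own JVM rather than assuming.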

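On the off-heap memtables point raised in the thread: in Cassandra this is a cassandra.yaml change. A minimal sketch using option names from the Cassandra 3.x configuration; verify them against the cassandra.yaml shipped with your version before relying on this:

```yaml
# cassandra.yaml (Cassandra 3.x option names; check your version's file).
# heap_buffers (default) keeps memtables entirely on-heap;
# offheap_buffers moves cell values off-heap;
# offheap_objects moves whole cells off-heap, reducing GC pressure.
memtable_allocation_type: offheap_objects

# Optional cap on off-heap memtable space in MB; if unset,
# Cassandra defaults it to 1/4 of the heap size.
# memtable_offheap_space_in_mb: 2048
```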