> On Apr 9, 2021, at 6:15 AM, Joe Obernberger wrote:
>
> We run a ~1PByte HBase cluster on top of Hadoop/HDFS that works pretty well.
> I would love to be able to use Cassandra instead on a system like that.
>
1PB is definitely in the range of viable Cassandra clusters today.
4.0 has gone some way toward enabling denser nodes, but it wasn't a main
focus. We're probably still only expecting 4TB-8TB nodes to be feasible
(and then maybe only for expert users). The main problems with dense nodes
tend to be streaming, compaction, and repairs.
Correct. It's also worth noting that if you delete the log files and restart C*,
the CompactionLogger will find the earliest available file number,
starting from 0. You'll have to explore external tooling for
proper log rotation, as the CompactionLogger doesn't use the logging system
to write its files.
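One hedged sketch of external rotation (the `compaction-*.log` naming and the log directory are assumptions; check where your install actually writes these files):

```shell
#!/bin/sh
# Hedged sketch: keep only the 10 newest CompactionLogger files.
# The compaction-*.log pattern and the default log directory are
# assumptions; point LOGDIR at wherever your install writes them.
LOGDIR="${1:-/var/log/cassandra}"
ls -1t "$LOGDIR"/compaction-*.log 2>/dev/null | tail -n +11 | while read -r f; do
  rm -f -- "$f"
done
```

Run from cron if the node is long-lived; remember the numbering restarts from the lowest free slot after a Cassandra restart, so sort by mtime rather than by file number.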
Yes, that warning will still appear because it's a startup check and doesn't
take the disk_access_mode setting into account.
You may be able to cope with mmapping just the indexes. Note this is still
not an ideal solution, as you won't be making full use of your available memory.
raft.so - Cassandra consulting
Also,
I just restarted my Cassandra process after setting "disk_access_mode:
mmap_index_only" and I still see the same WARN message. I believe it's
just a startup check and doesn't rely on the disk_access_mode value:
WARN [main] 2021-04-16 00:08:00,088 StartupChecks.java:311 - Maximum
number of memory map areas per process (vm.max_map_count) 65530 is too low
Thank you Kane and Jeff.
Can I survive with a low vm.max_map_count of 65530 with "disk_access_mode:
mmap_index_only"? Does this hold true even for higher workloads with
larger datasets, like ~1TB per node?
On Thu, Apr 15, 2021 at 4:43 PM Jeff Jirsa wrote:
disk_access_mode = mmap_index_only to use fewer maps (or disable it entirely,
as appropriate).
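For reference, a minimal sketch of how that would look in cassandra.yaml (the option is internal and largely undocumented, and note the double "s" in disk_access_mode):

```yaml
# disk_access_mode is not present in the shipped cassandra.yaml; add it
# at the top level. Known values include auto (the default), mmap,
# mmap_index_only, and standard.
disk_access_mode: mmap_index_only
```

A restart is required for the change to take effect.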
On Thu, Apr 15, 2021 at 4:42 PM Kane Wilson wrote:
Cassandra mmaps SSTables into memory, of which there can be many files
(including all their indexes and whatnot). Typically it'll do so greedily
until you run out of RAM. 65k map areas tends to be quite low and can
easily be exceeded - you'd likely need very low density nodes to avoid
going over 65k.
Hello All,
The recommended settings for Cassandra suggest raising vm.max_map_count
above the default 65530:
WARN [main] 2021-04-14 19:10:52,528 StartupChecks.java:311 - Maximum
number of memory map areas per process (vm.max_map_count) 65530 is too low,
recommended value: 1048575
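The usual remedy is raising the sysctl. A hedged sketch (1048575 is the value commonly recommended for Cassandra, but use the figure from your own startup warning; the write itself requires root):

```shell
# Show the current limit; 65530 is the common kernel default that
# trips the Cassandra startup check.
cat /proc/sys/vm/max_map_count
# To raise it (run as root; 1048575 is commonly recommended, but use
# the figure from your own startup warning if it differs):
#   sysctl -w vm.max_map_count=1048575
#   echo 'vm.max_map_count = 1048575' > /etc/sysctl.d/99-cassandra.conf
```

The sysctl.d file makes the change survive reboots; `sysctl -w` alone applies only to the running kernel.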