Thank you all.
I found a doc from DataStax which says compression is enabled by default
and set to LZ4Compressor.
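In case it helps anyone else, this is roughly how to confirm it on an existing
table (my_ks.my_table is just a placeholder, and the option names vary slightly
between Cassandra versions):

    cqlsh -e "DESCRIBE TABLE my_ks.my_table;"
    # look for the compression = {'class': ... 'LZ4Compressor' ...} clause in the output
    cqlsh -e "ALTER TABLE my_ks.my_table WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 64};"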
Hi Oliver,
I don't have a quick answer (or any answer yet), though we ran into a
similar issue and I'm wondering about your environment and some configs.
- Operating system?
- Cloud or on-premise?
- Version of Cassandra?
- Version of Java?
- Compaction strategy?
- Primarily read or primarily write?
100,000 rows is pretty small. Import your data to your cluster, do a nodetool
flush on each node, then you can see how much disk space is actually used.
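If it helps, a rough sequence for that check (assuming your keyspace is called
my_ks; adjust names and data paths to your install):

    nodetool flush my_ks                     # write memtables to disk on this node
    nodetool tablestats my_ks                # per-table space used on this node
    du -sh /var/lib/cassandra/data/my_ks     # raw on-disk size; path varies by install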
There are different compression options available to you when you create the
table. It also matters whether the rows are in separate partitions or share a
partition.
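For example, a minimal sketch of setting compression at table-creation time
(the names are placeholders and the exact option names depend on your Cassandra
version):

    cqlsh -e "CREATE TABLE my_ks.my_table (pk uuid, ck int, payload text, PRIMARY KEY (pk, ck)) WITH compression = {'class': 'LZ4Compressor'};"

In a schema like that sketch, rows sharing the same pk land in one partition
while distinct pk values spread rows across partitions, which changes the
per-partition overhead and how well the compressed chunks pack.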
As others pointed out, compression will reduce the size, and replication
(across nodes) will increase the total size.
The other thing to note is that you can have multiple versions of the data in
different sstables, plus tombstones related to deletions and TTLs, indexes,
and any snapshots, all of which add to what you see on disk.
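A couple of commands that can help you see where that space is going
(my_ks.my_table is a placeholder, and the exact stat labels differ a little
between versions):

    nodetool tablestats my_ks.my_table   # look at the "Space used" lines
    nodetool listsnapshots               # snapshots kept on disk by this node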
Any read-only file systems? Have you tried starting it from the command line
(instead of as a service)? Sometimes that will give a more helpful error when
start-up can’t complete.
If your error is literally what you included, it looks like the executable
can’t find the cassandra.yaml file.
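For reference, a rough sketch of a foreground start; the paths assume a
typical package install and may differ on your system:

    cassandra -f                          # run in the foreground so errors print to the terminal
    echo $CASSANDRA_CONF                  # if set, the startup script looks for cassandra.yaml here
    ls -l /etc/cassandra/cassandra.yaml   # or <install_dir>/conf/cassandra.yaml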
I will ag