On Mon, Oct 3, 2011 at 1:19 PM, Ramesh Natarajan wrote:
> Thanks for the pointers. I checked the system and the iostat output showed that
> we are saturating the disk at 100%. The disk is a SCSI device exposed by ESXi,
> and it is running on a dedicated LUN as RAID10 (4 x 600GB 15k drives) connected
> to the ESX host via iSCSI.
Yes, look at cassandra.yaml; there is a section about throttling compaction.
You still *want* multi-threaded compaction. Throttling will occur across all
threads. The reason is that you don't want to get stuck compacting
bigger files while the smaller ones build up waiting for the bigger compactions.
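For reference, the relevant knobs in cassandra.yaml look roughly like the
sketch below (names as of the 0.8/1.x line; the values are illustrative, not
recommendations, so check the comments in your own yaml):

```yaml
# Throttle total compaction I/O across all compaction threads.
# 16 is the shipped default; 0 disables throttling entirely.
compaction_throughput_mb_per_sec: 16

# Allow compactions to run in parallel instead of one at a time.
multithreaded_compaction: true

# Optionally cap the number of concurrent compactor threads
# (defaults to the number of cores when unset).
concurrent_compactors: 4
```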
Thanks for the pointers. I checked the system and the iostat output showed that
we are saturating the disk at 100%. The disk is a SCSI device exposed by ESXi,
and it is running on a dedicated LUN as RAID10 (4 x 600GB 15k drives) connected
to the ESX host via iSCSI.
When I run compactionstats I see we are compacting.
Most likely what is happening is that you are running single-threaded
compaction. Look at cassandra.yaml for how to enable multi-threaded
compaction. As more data comes into the system, bigger files get created
during compaction. You could be in a situation where you might be compacting
at a high I/O cost.
To understand what's going on, you might want to first do just a
write test and look at the results, then do just the read tests, and
then do combined read/write tests.
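One way to drive those separate phases is the stress tool that ships with
Cassandra. The path and flags below are from the 0.8/1.0-era tool and are an
assumption; the node names are placeholders, so check them against your
distribution before running:

```shell
# Write-only pass: 1M keys against two of the nodes.
tools/stress/bin/stress -d node1,node2 -n 1000000 -o insert
# Read-only pass over the same key range.
tools/stress/bin/stress -d node1,node2 -n 1000000 -o read
```

Compare iostat and nodetool output between the two passes before mixing them.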
Since you mentioned high updates/deletes, I should also ask: what is your CL
(consistency level) for writes/reads? High updates/deletes combined with a
high CL could be a factor here.
I will start another test run to collect these stats. Our test model is in
the neighborhood of 4500 inserts, 8000 updates/deletes, and 1500 reads every
second across 6 servers.
Can you elaborate more on reducing the heap space? Do you think the 17GB RSS
is a problem?
thanks
Ramesh
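(For reference on the heap question: in this vintage of Cassandra the heap is
sized in conf/cassandra-env.sh, either auto-calculated or set explicitly. A
sketch; the values are illustrative assumptions, not a recommendation:)

```shell
# conf/cassandra-env.sh -- explicit sizing instead of the auto-calculated
# default. Setting both variables overrides the calculation.
MAX_HEAP_SIZE="8G"     # cap the JVM heap well below the VM's 20GB of RAM
HEAP_NEWSIZE="800M"    # young generation; commonly ~100MB per CPU core
```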
On Mon, Oct 3, 2011:
I am wondering if you are seeing issues because of more frequent
compactions kicking in. Is this primarily write ops or reads too?
During the period of test gather data like:
1. cfstats
2. tpstats
3. compactionstats
4. netstats
5. iostat
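A sketch of gathering those during the test window, printed as a dry run so
the commands are visible; it assumes nodetool is on PATH and the node listens
on localhost (adjust -h for your setup):

```shell
#!/bin/sh
# Capture the nodetool diagnostics while the test is running.
HOST=localhost
for tool in cfstats tpstats compactionstats netstats; do
    # Printed as a dry run here; swap echo for the real invocation
    # (optionally redirecting each into a timestamped log file).
    echo "nodetool -h $HOST $tool"
done
# Extended per-device disk stats: 12 samples, 5 seconds apart.
echo "iostat -x 5 12"
```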
You have RSS memory close to 17GB. Maybe someone can give feedback on that.
Maybe try the row cache?
Have you enabled mlock? (You need jna.jar, and to set ulimit -l.)
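For the mlock/JNA piece, a sketch of the usual setup; the paths and the
"cassandra" user name are assumptions for illustration:

```shell
# 1. Drop jna.jar into Cassandra's lib/ directory so the JVM can mlock its
#    memory via JNA.
# 2. Raise the locked-memory limit for the user running Cassandra, e.g. in
#    /etc/security/limits.conf:
#      cassandra  soft  memlock  unlimited
#      cassandra  hard  memlock  unlimited
# 3. Verify the limit in the shell that launches Cassandra:
ulimit -l
```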
Using iostat -x would also give you more clues about disk performance.
On Mon, Oct 3, 2011 at 10:12 AM, Ramesh Natarajan wrote:
> I am running a cassandra cluster of 6 nodes running RHEL6 virtualized by
> ESXi 5.0.
We have 5 CF. Attached is the output from the describe command. We don't
have row cache enabled.
Thanks
Ramesh
Keyspace: MSA:
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Durable Writes: true
Options: [replication_factor:3]
Column Families:
ColumnFamily: admin
On Mon, Oct 3, 2011 at 10:12 AM, Ramesh Natarajan wrote:
> I am running a cassandra cluster of 6 nodes running RHEL6 virtualized by
> ESXi 5.0. Each VM is configured with 20GB of ram and 12 cores. Our test
> setup performs about 3000 inserts per second. The cassandra data partition
> is on an XFS filesystem
I am running a cassandra cluster of 6 nodes running RHEL6 virtualized by
ESXi 5.0. Each VM is configured with 20GB of ram and 12 cores. Our test
setup performs about 3000 inserts per second. The cassandra data partition
is on an XFS filesystem mounted with options
(noatime,nodiratime,nobarrier,l…
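(The option list is truncated in the archive. An fstab line showing only the
flags that are visible above; the device and mount point are illustrative
assumptions:)

```shell
# /etc/fstab -- device and mount point are placeholders
/dev/sdb1  /var/lib/cassandra  xfs  noatime,nodiratime,nobarrier  0 0
```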