That will give you a snapshot of the thread pools. Look at
ROW-READ-STAGE and check the active and pending counts. If many requests are
pending, it means the cluster is not able to keep up with the read requests
coming in.
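For reference, something like this run against each node should give you that snapshot (the host name is just a placeholder for one of your nodes):

    nodetool -h <node_host> tpstats

Then read off the Active and Pending columns on the read-stage row.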
Thanks,
Jahangir Mohammed.
On Thu, Nov 24, 2011 at 2:14 PM, Patrik Modesto wrote:
We have our own servers; each has a 16-core CPU, 32 GB RAM and 8x 1 TB disks.
I didn't check tpstats, just iotop, where Cassandra used all the I/O capacity
when compacting/repairing.
I had to completely clean the test cluster, but I'll check tpstats in
production. What should I look for?
Regards,
Patrik
What I know is that the timeout is because of increased load on the node due to repair.
Hardware? EC2?
Did you check tpstats?
On Thu, Nov 24, 2011 at 11:42 AM, Patrik Modesto
wrote:
> Thanks for the reply. I know I can configure a longer timeout but in our use
> case, a reply longer than 1 second is unacceptable.
> I'm measuring a high load value on a few nodes during the update process
> (which is normal), but one node keeps the high load after the process for a
> long time.
I would say that either the reading that you do is overloading that
one node and other traffic is getting piled up as a result, or y
Thanks for the reply. I know I can configure a longer timeout but in our use
case, a reply longer than 1 second is unacceptable.
What I don't understand is why I get timeouts while reading a different
keyspace than the one the repair is working on. I get timeouts even during
compaction.
Besides usual access we d
The Go libraries for Thrift are broken in various ways. The official one
(from Thrift 0.8 SVN) does not support optional struct members, as you have
found out. The fork in https://github.com/pomack/thrift4go supports
optional members but does not work in r60. One workaround is to use 0.8 SVN
and on
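For anyone reading along, "optional struct members" refers to Thrift IDL fields declared like the one below; this is just a made-up illustration, not the poster's actual schema:

    struct Example {
      1: required string name,
      2: optional i32 count
    }

The official 0.8 SVN Go generator mishandles the optional field, as described above.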
Do you use any client which gives you this timeout?
If you don't specify any timeout from the client, look at rpc_timeout_in_ms.
Increase it and see if you still suffer this.
Repair is a costly process.
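For reference, that setting lives in conf/cassandra.yaml; the value below is only an illustrative sketch in milliseconds, not a recommendation:

    # how long the coordinator waits for replicas (default is 10000 ms)
    rpc_timeout_in_ms: 20000

You need to restart the node for the change to take effect.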
Thanks,
Jahangir Mohammed.
On Thu, Nov 24, 2011 at 2:45 AM, Patrik Modesto wrote:
> Hi,
>
>
I'm seeing this error on a 0.8.x node again. This node did suffer a crash,
and the Cassandra data is on a RAID 0 array. The array was remounted
correctly and the XFS filesystem did not report any issues.
Given that RF=3, I have the following questions:
0. Can a storage problem cause this?
1. During re
Just remove its token from the ring using
nodetool removetoken
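For example (this is just the general shape; run it from any live node, and take the dead node's token from the nodetool ring output):

    nodetool -h <live_node_host> removetoken <token_of_dead_node>

Once it finishes, nodetool ring should no longer show that token.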
2011/11/23 Maxim Potekhin :
> This was discussed a long time ago, but I need to know what's the state of
> the art answer to that:
> assume one of my few nodes is very dead. I have no resources or time to fix
> it. Data is replicated
Hi,
We need help with choosing correct tokens for ByteOrderedPartitioner.
Originally the key was supposed to be member_id-mmdd,
but since we need to make range scans on the same member_id and varying date
ranges (mmdd),
we decided to use ByteOrderedPartitioner, so we need that same member
wil
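For what it's worth, under ByteOrderedPartitioner keys are compared as raw bytes, so keys of the form member_id-mmdd sort first by member_id and then by date, and a range scan over one member only needs the member_id prefix as its bounds. A made-up illustration (zero-padded ids and dates are my assumption, since the message above is cut off):

    00042-0101
    00042-1124
    00043-0101

All rows for member 00042 are contiguous, so a key-range query (e.g. get_range_slices) from "00042-0000" to "00042-9999" covers exactly that member's dates.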
Hi,
Whenever I start up Cassandra 1.0.3, I notice the following warning:
WARN [main] 2011-11-24 07:57:13,129 CFMetaData.java (line 402) Unable to
instantiate cache provider
org.apache.cassandra.cache.SerializingCacheProvider; using default
org.apache.cassandra.cache.ConcurrentLinkedHashCacheProv