data/1.1/system/IndexInfo
>>> 786556 /spool1/cassandra/data/1.1/system
>>> 1161944 /spool1/cassandra/data/1.1/
>>>
>>>
>> And also 700+MB in the commitlog. Neither of which seemed to 'go away' on
>> its own when idle or even after running nodetool repair/cleanup and even
>> dropping the keyspace.
>>
>> I suppose these hints and commitlog may be the reason behind the huge
>> difference in load on nodes -- but why does it happen, and more
>> importantly, why does it keep accumulating?
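For reference, a per-directory breakdown plus cfstats usually shows which system column families are actually holding that space. The paths below simply mirror the layout quoted above, and the grep label assumes 1.1-era cfstats output, so adjust both as needed:

  du -sh /spool1/cassandra/data/1.1/system/*   # per-CF usage inside the system keyspace
  du -sh /spool1/cassandra/data/1.1/*          # per-keyspace usage
  nodetool -h localhost cfstats | grep -A 15 'Column Family: HintsColumnFamily'

In 1.1, undelivered hints are stored in system.HintsColumnFamily, so a bloated system keyspace usually points at hints piling up for a node the coordinator considers unreachable or slow, rather than at user data.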
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Multiple-Data-Center-shows-very-uneven-load-tp7584197p7584256.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
>> 1119) Node /55.555.555.5 state jump to normal
>> INFO [HintedHandoff:1] 2012-12-11 10:57:19,607 HintedHandOffManager.java
>> (line 296) Started hinted handoff for token: Token(bytes[6c01]) with IP:
>> /55.555.555.5
>> INFO [GossipStage:1] 2012-12-11 10:57:19,607 Gossiper.java
> INFO [FlushWriter:1] 2012-12-11 10:57:19,612 Memtable.java (line 264)
> Writing Memtable-LocationInfo@133441329(21/26 serialized/live bytes, 1 ops)
> INFO [FlushWriter:1] 2012-12-11 10:57:19,617 Memtable.java (line 305)
> Completed flushing
> /spool1/cassandra/data/1.1/system/LocationInfo/system-LocationInfo-hf-22-Data.db
> (75 bytes) for commitlog position ReplayPosition(segmentId=1355223438516,
> position=614)
> INFO [GossipStage:1] 2012-12-11 10:57:19,618 Gossiper.java (line 848) Node
> /33.333.333.3 has restarted, now UP
> INFO [GossipStage:1] 2012-12-11 10:57:19,618 Gossiper.java (line 816)
> InetAddress /33.333.333.3 is now UP
> INFO [GossipStage:1] 2012-12-11 10:57:19,619 StorageService.java (line
> 1119) Node /33.333.333.3 state jump to normal
> INFO [HintedHandoff:1] 2012-12-11 10:57:27,258 HintedHandOffManager.java
> (line 392) Finished hinted handoff of 0 rows to endpoint /22.222.22.2
> INFO [HintedHandoff:1] 2012-12-11 10:57:27,258 HintedHandOffManager.java
> (line 296) Started hinted handoff for token: Token(bytes[6c03]) with IP:
> /66.666.666.6
> INFO [HintedHandoff:1] 2012-12-11 10:57:27,259 HintedHandOffManager.java
> (line 392) Finished hinted handoff of 0 rows to endpoint /66.666.666.6
> INFO [HintedHandoff:1] 2012-12-11 10:57:27,259 HintedHandOffManager.java
> (line 296) Started hinted handoff for token: Token(bytes[6601]) with IP:
> /33.333.333.3
> INFO [HintedHandoff:1] 2012-12-11 10:57:27,260 HintedHandOffManager.java
> (line 392) Finished hinted handoff of 0 rows to endpoint /33.333.333.3
>
>
> I'd much appreciate it if someone can provide some insight as to what is going
> on and what (if anything) needs to be done.
>
> Best regards,
> Sergey
>
>
>
> --
> View this message in context:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Multiple-Data-Center-shows-very-uneven-load-tp7584197p7584232.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
> Nabble.com.
I am running repairs now. I checked CF stats and they all appear to have
very similar max and average row sizes between the lowest loaded node and
the highest. One thing I did notice is that nodetool netstats shows files
for a column family I dropped named payload and they never appear to go
away.
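One guess worth ruling out for the dropped 'payload' column family, assuming auto_snapshot was left at its default of true in cassandra.yaml: in 1.1 a DROP snapshots the data first, so the old SSTables stay on disk (and keep showing up in listings) until the snapshots are cleared. For example, on each node:

  ls /spool1/cassandra/data/1.1/<keyspace>/payload/snapshots/   # path pattern assumed from the layout quoted earlier
  nodetool -h <node> clearsnapshot                              # removes snapshots, freeing the space

That would explain space that never goes away after the drop, though not by itself why netstats still references those files.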
>
> And here is syslog from Cassandra when I restarted 11.111.111.1 node:
I would check the logs for Dropped Message alerts, and run repair if you have
not.
I would also look at the nodetool CF stats on each node to check the row size.
It may be the case that you have some very wide rows stored on nodes
10.56.92.196, 10.28.91.8, 10.56.92.198 and 10.28.91.2.
Hope that helps.
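To make those checks concrete (assuming a 1.1-era nodetool and the default log location, both of which may differ here):

  grep -i dropped /var/log/cassandra/system.log   # dropped-message warnings from MessagingService
  nodetool -h <node> tpstats                      # thread pool stats; also lists dropped message counts
  nodetool -h <node> cfstats | egrep 'Column Family:|Compacted row maximum size|Compacted row mean size'
  nodetool -h <node> repair                       # anti-entropy repair, run per node

Comparing the 'Compacted row maximum size' figures across nodes is a quick way to spot the wide rows mentioned above.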
Hello,
I have base Cassandra 1.1.7 installed in two data centers with 3 nodes each
using a PropertyFileSnitch as outlined below. When I run a nodetool ring, I see
a very uneven load. Any idea what could be going on? I have not added/removed
any nodes or changed the replication scheme or count
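The topology file referred to as "outlined below" did not survive the excerpt, but for reference a PropertyFileSnitch layout for two data centers with three nodes each generally looks like the sketch below. The addresses, data center names and racks are placeholders (reusing the sanitized IPs from the thread plus one made-up address), not the poster's actual file:

  # conf/cassandra-topology.properties: <node IP>=<data center>:<rack>
  11.111.111.1=DC1:RAC1
  22.222.22.2=DC1:RAC1
  33.333.333.3=DC1:RAC1
  44.444.444.4=DC2:RAC1
  55.555.555.5=DC2:RAC1
  66.666.666.6=DC2:RAC1
  # fallback for any node not listed above
  default=DC1:RAC1

On 1.1 (no vnodes), an uneven 'Load' column in nodetool ring with two data centers is more often down to the second DC's tokens not being offset from the first's, or to hint/snapshot buildup like the system-keyspace growth discussed earlier in the thread, than to the snitch definition itself.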