The I/O errors are caused by a disk failure. Syslog contains entries like these:
Jan 16 09:53:24 --- kernel: [7065781.460804] sd 4:0:0:0: [sda] Add. Sense:
Unrecovered read error
Jan 16 09:53:24 --- kernel: [7065781.460810] sd 4:0:0:0: [sda] CDB: Read(10):
28 00 11 cf 60 70 00 00 08 00
Jan 16
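When syslog fills with unrecovered-read errors like the above, a quick way to pull the relevant lines together is a filtered grep (a minimal sketch; the log path is an assumption and varies by distro):

```shell
# Assumption: the log lives at /var/log/syslog (Debian/Ubuntu style);
# RHEL-style systems use /var/log/messages instead.
LOG=${LOG:-/var/log/syslog}
if [ -r "$LOG" ]; then
    # Show the most recent kernel disk-error lines
    grep -iE 'unrecovered read error|medium error|i/o error' "$LOG" | tail -n 20
else
    echo "no readable log at $LOG; try: dmesg | grep -i 'sd '"
fi
```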
Yep, I think I can. Here you are: https://github.com/tivv/cassandra-balancer
2012/1/15 Carlos Pérez Miguel
> If you can share it, that would be great
>
> Carlos Pérez Miguel
>
>
>
> 2012/1/15 Віталій Тимчишин :
> > Yep. Wrote a Groovy script this Friday to perform autobalancing :)
> I am
> > g
Hi,
I have a 4-node cluster running version 1.0.3.
This is what I get when I run nodetool ring:
Address      DC          Rack   Status  State   Load      Owns     Token
                                                                   127605887595351923798765477786913079296
10.8.193.87  datacenter1 rack1  Up      Normal  46.47 GB  25.00%   0
The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.0.7.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
http://cassand
Is it technically possible, without breaking the basic LevelDB
algorithm, to have configurable SSTable size and count on different levels?
something like:
level 1 - 10 x 50 MB tables
level 2 - 60 x 40 MB tables
level 3 - 150 x 30 MB tables
I am interested in deeper LevelDB research, because curre
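For what it's worth, the proposed settings above imply these per-level totals; a quick sketch using only the numbers from the question:

```shell
# Per-level totals for the proposed configuration:
#   level 1: 10 x 50 MB, level 2: 60 x 40 MB, level 3: 150 x 30 MB
for spec in "1 10 50" "2 60 40" "3 150 30"; do
    set -- $spec   # $1 = level, $2 = table count, $3 = table size (MB)
    echo "level $1: $2 tables x $3 MB = $(( $2 * $3 )) MB"
done
```

So total capacity still grows level by level (500 MB, 2400 MB, 4500 MB) even as the individual table size shrinks.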
Unfortunately, I'm not doing a 1-1 migration; I'm moving data from a 15-node to
a 6-node cluster. In this case, that means an excessive amount of time spent
repairing data put onto the wrong machines.
Also, the bulkloader's requirement of having either a different IP address or a
different mac
Is it possible to add Brisk only nodes to standard C* cluster? So if
we have node A,B,C with standard C* then add Brisk node D,E,F for
analytics?
Hello,
I've been trying to retrieve rows based on key range but every single time
I test, Hector retrieves ALL the rows, no matter the range I give it.
What can I possibly be doing wrong? Thanks.
I'm doing a test on a single-node RF=1 cluster (c* 1.0.5) with one column
family (I've added & trunca
eeek, HW errors.
I would guess (that's all it is) that an I/O error may have stopped the schema
from migrating.
Stop cassandra on that node and copy the files off as best you can.
I would then try a node replacement
First remove the failed new node with nodetool decommission or removetoken.
You can cross check the load with the SSTable Live metric for each CF in
nodetool cfstats.
Can you also double-check what you are seeing on disk? (sorry, got to ask :) )
Finally, compare du -h and df -h to make sure they match. (I'm sure they will; it's
just a simple way to check that disk usage makes sense.)
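The checks above might look roughly like this (a sketch; the host, token, and data path are placeholders taken from this thread, so adjust for your cluster — the nodetool lines are commented out since they only work against a live node):

```shell
# 1) Remove the failed node: decommission if it is still up,
#    removetoken with its token if it is down.
# nodetool -h 10.8.193.87 decommission
# nodetool removetoken 127605887595351923798765477786913079296

# 2) Cross-check load per column family via cfstats.
# nodetool -h 10.8.193.87 cfstats

# 3) Compare what the data directory actually holds (du) with what
#    the filesystem reports used (df). Defaults to "." so it runs
#    anywhere; point CASSANDRA_DATA at your real data directory.
DATA_DIR=${CASSANDRA_DATA:-.}
du -sh "$DATA_DIR"
df -h "$DATA_DIR"
```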