I'm going to tell you guys the answers I could find so far.

On Tuesday, July 26, 2011, Rafael Almeida <almeida...@yahoo.com> wrote:
> I couldn't find much documentation regarding how to make a cluster, but it
> seemed simple enough. At cassandra server A (10.0.0.2) I had seeds:
> "localhost". At server B (10.0.0.3) I configured seeds: "10.0.0.2" and
> auto_bootstrap: true. Then I created a keyspace and a few column families
> in it.
>
> I immediately began to add items and to get all these "Internal error
> processing get" errors. I found it quite odd; I thought it had to do with
> the load I was putting in, seeing that a few small tests had worked before.
> I spent quite some time debugging, when I finally decided to write this
> e-mail. I wanted to double check stuff, so I ran nodetool to see if
> everything was right. To my surprise, only one of the nodes was available.
> It took a little while for the other one to show up as Joining and then
> as Normal.
>
> After I waited that period, I was able to insert items into the cluster
> with no error at all. Is that expected behaviour? What is the recommended
> way to set up a cluster? Should it be done manually: setting up the
> machines, creating all keyspaces and column families, then checking
> nodetool and waiting for it to get stable?
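Short answer first: the important part is that both nodes point at the same
seed. A rough cassandra.yaml sketch for a two-node setup (assuming 10.0.0.2
is the designated seed; the cluster name is just a placeholder and the exact
layout depends on your Cassandra version -- this is the 0.8-style
seed_provider form):

    # cassandra.yaml on BOTH 10.0.0.2 and 10.0.0.3 (sketch only)
    cluster_name: 'Test Cluster'
    listen_address: 10.0.0.2          # 10.0.0.3 on the other machine
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.2"     # the same single seed on both nodes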
The problem that I was having was mainly because I had set node A as seed
of B and B as seed of A. I don't know what possessed me!

Regarding the schema configuration: I made a schema file and I load it
using

    cassandra-cli -h localhost --batch < schema-file

It works alright (a rough sketch of such a schema file is at the end of
this mail).

> On a side note, sometimes I get "Default TException" (that seems to
> happen when the machine is under a heavier load than usual); commonly,
> retrying the read or insert right after works fine. Is that what's
> supposed to happen? Perhaps I should raise some timeout somewhere?

I still don't get why that error was so frequent. At first I was testing it
on workstations, where people would compile stuff and run all sorts of
software. I think that slowed things down considerably and the system was
having a hard time managing connections from the application. After I moved
it to dedicated computers those problems ceased to happen.

> This is what ./bin/nodetool -h localhost ring reports:
>
> Address    DC           Rack   Status  State   Load     Owns     Token
>                                                                  119105113551249187083945476614048008053
> 10.0.0.3   datacenter1  rack1  Up      Normal  3.43 GB  65.90%   61078635599166706937511052402724559481
> 10.0.0.2   datacenter1  rack1  Up      Normal  1.77 GB  34.10%   119105113551249187083945476614048008053
>
> It's still adding stuff. I have no idea why B owns so many more keys
> than A.

It happened due to my weird double-seed configuration. Now everything is
fine. I've explained how tokens work on a different thread.

Cheers,
Rafael
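P.S. For completeness, a hypothetical schema-file for the --batch invocation
above could look something like this (the keyspace and column family names
are made up, and the exact keyspace options differ a bit between
cassandra-cli versions -- see "help create keyspace;" in the CLI):

    create keyspace MyKeyspace
        with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
        and strategy_options = [{replication_factor:2}];
    use MyKeyspace;
    create column family Users with comparator = UTF8Type;
    create column family Events with comparator = UTF8Type;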