Yes. I even tried starting just one node and then bootstrapping another node into it. (However, a few days ago, right at the beginning, the cluster was unstable and unresponsive and I had to restart it. Maybe something went wrong back then.)

Anyway, I will export all the data and re-import it with RandomPartitioner, which I should have used from the beginning.

Thanks,
Thibaut
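(For readers finding this thread later: on 0.6.x the partitioner switch Thibaut describes is a one-line change in conf/storage-conf.xml, but since the partitioner decides how keys are placed on the ring, the existing data directories have to be cleared and the data re-imported, which is exactly what he is doing above. Illustrative snippet only:)

    <!-- conf/storage-conf.xml: hash keys onto the ring instead of storing them in key order -->
    <Partitioner>org.apache.cassandra.dht.RandomPartitioner</Partitioner>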
On Thu, Oct 28, 2010 at 12:35 AM, Tyler Hobbs <ty...@riptano.com> wrote:

> Not sure if this is the cause, but do all of your nodes have the same seed
> list? Did you bring up the seeds first?
>
> - Tyler
>
>
> On Wed, Oct 27, 2010 at 1:46 PM, Thibaut Britz <thibaut.br...@trendiction.com> wrote:
>
>> Depending on the range I choose, manually choosing a token will also fail
>> (the node never exits bootstrap, and streams doesn't list any open streams).
>>
>>
>> INFO [Thread-53] 2010-10-27 20:33:37,399 SSTableReader.java (line 120)
>> Sampling index for /hd2/cassandra/data/table_xyz/table_xyz-3-Data.db
>> INFO [Thread-53] 2010-10-27 20:33:37,444 StreamCompletionHandler.java
>> (line 64) Streaming added /hd2/cassandra/data/table_xyz/table_xyz-3-Data.db
>>
>> Stacktrace:
>>
>> "pool-1-thread-53" prio=10 tid=0x00000000412f2800 nid=0x215c runnable
>> [0x00007fd7cf217000]
>>    java.lang.Thread.State: RUNNABLE
>>         at java.net.SocketInputStream.socketRead0(Native Method)
>>         at java.net.SocketInputStream.read(SocketInputStream.java:129)
>>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>         at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>         - locked <0x00007fd7e77e0520> (a java.io.BufferedInputStream)
>>         at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:126)
>>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>         at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
>>         at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
>>         at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
>>         at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:1154)
>>         at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>         at java.lang.Thread.run(Thread.java:662)
>>
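(Side note on the manual-token attempts above, with made-up values: on 0.6 a joining node normally gets its token from the InitialToken element in its own storage-conf.xml, with AutoBootstrap turned on, and the transfer can be watched with nodetool. Exact option syntax may differ slightly between 0.6 minor releases, so treat this as a sketch.)

    <!-- storage-conf.xml on the joining node; 'tm' is an invented OPP token inside the hot key range -->
    <AutoBootstrap>true</AutoBootstrap>
    <InitialToken>tm</InitialToken>

    # from a shell, check what the joining node is doing (hypothetical address)
    bin/nodetool -host 192.168.1.16 streams
    bin/nodetool -host 192.168.1.16 ring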
>> On Wed, Oct 27, 2010 at 8:27 PM, Thibaut Britz <thibaut.br...@trendiction.com> wrote:
>>
>>> Hello Tyler,
>>>
>>> Thanks for the quick answer. That's true, I should have noticed.
>>>
>>> I also tried kicking out one node, clearing all its directories and then
>>> restarting it with the bootstrap option. It received a few files, but then
>>> just sat there in bootstrapping mode (streams always printed bootstrapping
>>> without any files open), forever (> 15 minutes). I stopped the application
>>> so it couldn't be load related, and also tried with a fresh cluster restart.
>>> What could cause this?
>>>
>>> (This should have the advantage that cassandra chooses a key in my range
>>> which splits the range evenly?)
>>>
>>> Thanks,
>>> Thibaut
>>>
>>> On Wed, Oct 27, 2010 at 7:40 PM, Tyler Hobbs <ty...@riptano.com> wrote:
>>>
>>>> With OrderPreservingPartitioner, you have to keep the ring balanced
>>>> manually. This is why people frequently suggest that you use
>>>> RandomPartitioner unless you absolutely have to do otherwise. With OPP,
>>>> keys are *not* evenly distributed around the ring.
>>>>
>>>> Apparently you have lots of keys that are between ~'t' and 'x', so start
>>>> bunching your tokens there.
>>>>
>>>> - Tyler
>>>>
>>>> On Wed, Oct 27, 2010 at 12:00 PM, Thibaut Britz <thibaut.br...@trendiction.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a little java hector test application running which writes and
>>>>> reads data to my little cassandra cluster (7 nodes).
>>>>>
>>>>> The data doesn't get load-balanced at all:
>>>>>
>>>>> 192.168.1.12  Up  178.32 MB  8S6VvT7oKNcQTso3  |<--|
>>>>> 192.168.1.14  Up  30.12 MB   9tybk3nB6JCtqQU1  |   ^
>>>>> 192.168.1.15  Up  11.96 MB   RZVG3NC3ksqjEmYE  v   |
>>>>> 192.168.1.16  Up  668.7 KB   aTV6W12YxxMI31Z8  |   ^
>>>>> 192.168.1.10  Up  22.86 GB   u5iaQxEfyUSwnPn1  v   |
>>>>> 192.168.1.13  Up  22.5 GB    vZlWeU8b6LBeAcAY  |   ^
>>>>> 192.168.1.11  Up  22.27 GB   xrmaUS6nnrYFSk8e  |-->|
>>>>>
>>>>> What could be the issue? I couldn't find anything related to this in the FAQ.
>>>>>
>>>>> Will data (writes) always be added to the server I connect to? If so,
>>>>> why are the replicas then always stored on the same two other machines?
>>>>>
>>>>> (Tested with
>>>>> <Partitioner>org.apache.cassandra.dht.OrderPreservingPartitioner</Partitioner>
>>>>> on 0.6.5 and replication level 3.)
>>>>>
>>>>> Thanks,
>>>>> Thibaut
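(For completeness, the alternative Tyler describes, staying on OrderPreservingPartitioner and balancing the ring by hand, would mean moving the lightly loaded nodes onto tokens inside the overloaded ~'t'..'x' key range. With OPP a token is just a key string, so the tokens below are invented placeholders that would have to be picked from the real key distribution; nodetool move streams data, so it is best done one node at a time. A sketch, not a recipe:)

    # move three of the nearly empty nodes into the hot range (hypothetical tokens and hosts)
    bin/nodetool -host 192.168.1.12 move tm000000
    bin/nodetool -host 192.168.1.14 move v5000000
    bin/nodetool -host 192.168.1.15 move wm000000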