Hi Thomas,

Can you share your client code for the iteration? It would probably help me catch my problem. Does anyone know where in the Cassandra source the integration tests for this functionality on the random partitioner are?
Note that I posted a specific example where the iteration failed, and I was not throwing out good keys, only duplicate ones. That means one of two things:

1) I'm somehow using the API incorrectly.
2) I am the only one encountering a bug.

My money is on 1), of course. I can check the Thrift API against what my Scala client is calling under the hood.

-Adam

-----Original Message-----
From: th.hel...@gmail.com on behalf of Thomas Heller
Sent: Fri 8/6/2010 7:17 PM
To: user@cassandra.apache.org
Subject: Re: error using get_range_slice with random partitioner

On Sat, Aug 7, 2010 at 1:05 AM, Adam Crain <adam.cr...@greenenergycorp.com> wrote:
> I took this approach... reject the first result of subsequent get_range_slice
> requests. If you look back at the output I posted (below) you'll notice that not
> all of the 30 keys [key1...key30] get listed! The iteration dies and can't
> proceed past key2.
>
> 1) 1st batch gets 10 unique keys.
> 2) 2nd batch only gets 9 unique keys, with the 1st being a repeat.
> 3) 3rd batch only gets 2 unique keys "".
>
> That means the iteration didn't see 9 keys in the CF. Key7 and Key30 are
> missing, for example.

Remember that the returned results are NOT sorted, so whenever you drop the first result by default, you might be dropping a good one. At least that would be my guess here.

I have iteration implemented in my client and everything is working as expected; so far I have never had duplicates (running 0.6.3). I'm using tokens for range_slices though, and increment/decrement for get_slice only.

/thomas
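For reference, here is a rough sketch of the key-based paging pattern being discussed, written against what I believe is the 0.6-era Thrift API (string keys, keyspace passed directly to get_range_slices). The keyspace/column family names ("Keyspace1"/"Standard1"), host, port, and batch size are placeholders, not anything from the thread. It drops duplicates by tracking already-seen keys instead of blindly discarding the first element of each batch, since with RandomPartitioner the rows come back in token order, not key order:

// Sketch only: paging over all rows of a CF under RandomPartitioner,
// assuming Cassandra 0.6.x Thrift-generated classes.
import scala.collection.JavaConverters._
import org.apache.thrift.transport.TSocket
import org.apache.thrift.protocol.TBinaryProtocol
import org.apache.cassandra.thrift.{Cassandra, ColumnParent, ConsistencyLevel,
  KeyRange, SlicePredicate, SliceRange}

object RangeSliceIteration {
  def main(args: Array[String]): Unit = {
    val transport = new TSocket("localhost", 9160)   // placeholder host/port
    transport.open()
    val client = new Cassandra.Client(new TBinaryProtocol(transport))

    val parent = new ColumnParent("Standard1")       // placeholder CF
    val predicate = new SlicePredicate()
    // Empty start/finish = all columns, capped at 10 per row for this example.
    predicate.setSlice_range(new SliceRange(Array[Byte](), Array[Byte](), false, 10))

    val batchSize = 10
    var startKey = ""                 // empty start key = beginning of the ring
    var seen = Set.empty[String]
    var done = false

    while (!done) {
      val range = new KeyRange(batchSize)
      range.setStart_key(startKey)
      range.setEnd_key("")

      val slices = client.get_range_slices(
        "Keyspace1", parent, predicate, range, ConsistencyLevel.ONE).asScala

      // The last key of the previous batch is returned again at the start of
      // this one; filter by key rather than dropping the first result by position.
      val fresh = slices.map(_.getKey).filterNot(seen.contains)
      fresh.foreach(println)
      seen ++= fresh

      if (slices.size < batchSize) done = true
      else startKey = slices.last.getKey   // results arrive in token order
    }
    transport.close()
  }
}

The token-based variant Thomas mentions would instead set start_token on the KeyRange (e.g. the MD5 token of the last key returned, as computed by RandomPartitioner) and leave the keys unset; that avoids the duplicate-first-row question entirely, at the cost of computing tokens client side.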