Re: data dropped when using sstableloader?

2013-11-27 Thread Ross Black
wrote:
> On Wed, Nov 27, 2013 at 3:12 AM, Ross Black wrote:
>> Using Cassandra 1.2.10, I am trying to load sstable data into a cluster
>> of 6 machines.

This may be affecting you:
https://issues.apache.org/jira/browse/CASSANDRA-6272

Using 1.2.

data dropped when using sstableloader?

2013-11-27 Thread Ross Black
Hi, Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of 6 machines. The machines are using vnodes, and are configured with NetworkTopologyStrategy replication=3 and LeveledCompactionStrategy on the tables being loaded. The sstable data was generated using SSTableSimpleUnsort

Re: tombstones problem with 1.0.8

2012-04-04 Thread Ross Black
in 0.8.2?

> On Wed, Mar 21, 2012 at 8:38 PM, Ross Black wrote:
> > Hi,
> >
> > We recently moved from 0.8.2 to 1.0.8 and the behaviour seems to have
> > changed so that tombstones are now not being deleted.
> >
> > Our application continually adds and

Re: tombstones problem with 1.0.8

2012-03-28 Thread Ross Black
pointless.

Thanks,
Ross

On 28 March 2012 23:13, Radim Kolar wrote:
> On 28.3.2012 13:14, Ross Black wrote:
> > Radim,
> > We are only deleting columns. *Rows are never deleted.*
> i suggest to change app to delete rows. try composite keys.

Re: tombstones problem with 1.0.8

2012-03-28 Thread Ross Black
> (which refreshes the tombstone on them), the deleted columns *should* get
> cleaned up, right? (Even though the row itself continually gets new columns
> inserted and other columns deleted?)
>
> Thanks,
> John
>
> On Tue, Mar 27, 2012 at 2:21 AM, Radim Kol

Re: tombstones problem with 1.0.8

2012-03-27 Thread Ross Black
Any pointers on what I should be looking for in our application that would be stopping the deletion of tombstones?

Thanks,
Ross

On 26 March 2012 16:27, Viktor Jevdokimov wrote:
> Upon read from S1 & S6 rows are merged, T3 timestamp wins.
> T1 will be deleted upon S1 compaction with S6 or manual
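Viktor's point above — that when rows from two sstables are merged on read, the cell with the highest timestamp wins, whether it is a live value or a tombstone — can be sketched as a minimal reconcile rule. The class and field names here are illustrative, not Cassandra's actual internals:

```java
public class Reconcile {
    // Illustrative cell: a value (or tombstone marker) with a write timestamp.
    static final class Cell {
        final String value;   // null represents a tombstone
        final long timestamp;
        Cell(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
        boolean isTombstone() { return value == null; }
    }

    // On read, the cell with the highest timestamp wins. (Cassandra breaks
    // ties deterministically; this sketch simply prefers the first argument.)
    static Cell reconcile(Cell a, Cell b) {
        return b.timestamp > a.timestamp ? b : a;
    }

    public static void main(String[] args) {
        Cell t1 = new Cell("v1", 1);  // live write at T1 (e.g. in sstable S1)
        Cell t3 = new Cell(null, 3);  // tombstone at T3 (e.g. in sstable S6)
        System.out.println(reconcile(t1, t3).isTombstone()); // true: T3 wins
    }
}
```

This is why a newer tombstone shadows an older value until compaction physically removes both.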

Re: tombstones problem with 1.0.8

2012-03-23 Thread Ross Black
se of the named addressee and may be > confidential. If you are not the intended recipient, you are reminded that > the information remains the property of the sender. You must not use, > disclose, distribute, copy, print or rely on this e-mail. If you have > received this message in erro

Re: tombstones problem with 1.0.8

2012-03-22 Thread Ross Black

tombstones problem with 1.0.8

2012-03-21 Thread Ross Black
Hi, We recently moved from 0.8.2 to 1.0.8 and the behaviour seems to have changed so that tombstones are now not being deleted. Our application continually adds and removes columns from Cassandra. We have set a short gc_grace time (3600) since our application would automatically delete zombies i
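The gc_grace setting mentioned here controls when a tombstone becomes eligible for removal: a tombstone may only be purged at compaction once gc_grace seconds have elapsed since the deletion, so every replica has had a chance to receive it. A minimal sketch of that rule (the helper name is ours, not Cassandra's):

```java
public class TombstoneGc {
    // A tombstone may be purged at compaction only after gc_grace seconds
    // have elapsed since the deletion was written; purging it earlier risks
    // a replica that missed the delete resurrecting the column (a "zombie").
    static boolean purgeable(long deletedAtSec, int gcGraceSec, long nowSec) {
        return nowSec >= deletedAtSec + gcGraceSec;
    }

    public static void main(String[] args) {
        long deletedAt = 1_000_000L;
        int gcGrace = 3600; // the short grace period from the post
        System.out.println(purgeable(deletedAt, gcGrace, deletedAt + 100));  // false
        System.out.println(purgeable(deletedAt, gcGrace, deletedAt + 4000)); // true
    }
}
```

Note that being past gc_grace is necessary but not sufficient: the tombstone is only dropped when a compaction actually includes the relevant sstables.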

Re: problem with keys returned from multiget_slice

2012-02-01 Thread Ross Black
I just solved it. It was my mistake with using ByteBuffer: the array() method returns the entire backing array without considering the index offset into the array. It works using:

String rowName = Charset.forName(UTF_8).decode(entry.getKey()).toString();

Ross

On 1 February 2012 22:42, Ross Black
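The pitfall described here is easy to reproduce: ByteBuffer.array() exposes the whole backing array and ignores arrayOffset() and position(), so a key that is a slice of a larger buffer comes back with extra bytes, while decoding the buffer itself honours position and limit. A small self-contained illustration (class and method names are ours):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class KeyDecode {
    // Correct: decode the buffer, which honours position/limit/arrayOffset.
    static String decodeKey(ByteBuffer key) {
        // duplicate() so decoding does not disturb the caller's position
        return StandardCharsets.UTF_8.decode(key.duplicate()).toString();
    }

    // Buggy: array() returns the entire backing array, offsets ignored.
    static String decodeKeyBroken(ByteBuffer key) {
        return new String(key.array(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A key that is a 6-byte window into a larger backing array, similar
        // to what a thrift multiget_slice result can hold.
        byte[] backing = "junk-rowkey-junk".getBytes(StandardCharsets.UTF_8);
        ByteBuffer key = ByteBuffer.wrap(backing, 5, 6).slice();

        System.out.println(decodeKeyBroken(key)); // junk-rowkey-junk
        System.out.println(decodeKey(key));       // rowkey
    }
}
```

The same applies to any buffer returned by thrift: only the bytes between position() and limit() belong to the value.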

Re: copy data from multi-node cluster to single node

2011-07-19 Thread Ross Black
pact it down.
>
> Hope that helps.
>
> -----
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 5 Jul 2011, at 03:05, Ross Black wrote:
>
> Hi,
>
> I am using Cassandra 0.7.5 on Linux machines.

copy data from multi-node cluster to single node

2011-07-04 Thread Ross Black
Hi, I am using Cassandra 0.7.5 on Linux machines. I am trying to backup data from a multi-node cluster (3 nodes) and restore it into a single node cluster that has a different name (for development testing). The multi-node cluster is backed up using clustertool global_snapshot, and then I copy t

problem with large batch mutation set

2011-04-06 Thread Ross Black
Hi, I am using the thrift client batch_mutate method with Cassandra 0.7.0 on Ubuntu 10.10. When the size of the mutations gets too large, the client fails with the following exception: Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset at
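A common workaround for this failure mode — assuming, as the exception suggests, that the server drops the connection on an oversized thrift request — is to cap the number of mutations per batch_mutate call. A generic chunking helper, sketched without the thrift types:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // Split a large list of mutations into batches of at most maxPerBatch,
    // so each thrift batch_mutate call stays under the server's frame limit.
    static <T> List<List<T>> chunk(List<T> mutations, int maxPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < mutations.size(); i += maxPerBatch) {
            batches.add(new ArrayList<>(
                mutations.subList(i, Math.min(i + maxPerBatch, mutations.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> mutations = new ArrayList<>();
        for (int i = 0; i < 2500; i++) mutations.add(i);
        // 2500 mutations in batches of 1000 -> 3 calls: 1000 + 1000 + 500
        System.out.println(chunk(mutations, 1000).size());
    }
}
```

The right cap depends on column sizes as well as count, so in practice it is tuned against the server's configured frame/message size limits rather than hard-coded.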