wrote:
>
> On Wed, Nov 27, 2013 at 3:12 AM, Ross Black wrote:
>
>> Using Cassandra 1.2.10, I am trying to load sstable data into a cluster
>> of 6 machines.
>
>
> This may be affecting you:
> https://issues.apache.org/jira/browse/CASSANDRA-6272
>
> Using 1.2.
Hi,
Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of
6 machines.
The machines are using vnodes, and are configured with
NetworkTopologyStrategy replication=3 and LeveledCompactionStrategy on the
tables being loaded.
The sstable data was generated using SSTableSimpleUnsortedWriter.
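For reference, a minimal sketch of generating sstables with SSTableSimpleUnsortedWriter
using the 1.2-era API; the keyspace and table names, comparator, partitioner and buffer
size below are illustrative assumptions, not details taken from this thread:

    import java.io.File;
    import org.apache.cassandra.db.marshal.AsciiType;
    import org.apache.cassandra.dht.Murmur3Partitioner;
    import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
    import org.apache.cassandra.utils.ByteBufferUtil;

    public class BulkWriterSketch {
        public static void main(String[] args) throws Exception {
            // sstableloader expects a <keyspace>/<table> directory layout.
            File dir = new File("/tmp/bulk/MyKeyspace/MyTable");

            SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                    dir,
                    new Murmur3Partitioner(),  // must match the target cluster's partitioner
                    "MyKeyspace",
                    "MyTable",
                    AsciiType.instance,        // column-name comparator
                    null,                      // no super columns
                    64);                       // MB buffered in memory before each sstable is flushed

            long timestamp = System.currentTimeMillis() * 1000; // microseconds
            writer.newRow(ByteBufferUtil.bytes("row-1"));
            writer.addColumn(ByteBufferUtil.bytes("col"), ByteBufferUtil.bytes("value"), timestamp);
            writer.close();

            // The files can then be streamed in with:
            //   sstableloader -d <host> /tmp/bulk/MyKeyspace/MyTable
        }
    }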
in 0.8.2?
>
> On Wed, Mar 21, 2012 at 8:38 PM, Ross Black
> wrote:
> > Hi,
> >
> > We recently moved from 0.8.2 to 1.0.8 and the behaviour seems to have
> > changed so that tombstones are now not being deleted.
> >
> > Our application continually adds and removes columns from Cassandra.
pointless.
Thanks,
Ross
On 28 March 2012 23:13, Radim Kolar wrote:
> On 28.3.2012 13:14, Ross Black wrote:
>
> Radim,
>
> We are only deleting columns. *Rows are never deleted.*
>
> I suggest changing the app to delete rows. Try composite keys.
>
> (which refreshes the
> tombstone on them), the deleted columns *should* get cleaned up, right?
> (Even though the row itself continually gets new columns inserted and
> other columns deleted?)
>
> Thanks,
> John
>
>
>
>
> On Tue, Mar 27, 2012 at 2:21 AM, Radim Kolar wrote:
Any pointers on what I should be looking for in our application that would
be stopping the deletion of tombstones?
Thanks,
Ross
On 26 March 2012 16:27, Viktor Jevdokimov wrote:
> Upon read from S1 & S6 rows are merged, T3 timestamp wins.
> T1 will be deleted upon S1 compaction with S6 or manual compaction.
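To make the timestamp rule concrete, here is a small thrift sketch; it assumes S1 and S6
are two sstables that both hold the row, T1 < T3 are client-supplied timestamps, and
"MyCF" is a placeholder column family:

    import java.nio.ByteBuffer;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ColumnPath;
    import org.apache.cassandra.thrift.ConsistencyLevel;

    public class TimestampMergeSketch {
        // Write a column at T1, then delete it at the newer T3. A read merges the
        // value (in S1) with the tombstone (in S6); the highest timestamp wins, so
        // the column appears deleted. After gc_grace_seconds a compaction covering
        // both sstables can purge the value and the tombstone together.
        static void demo(Cassandra.Client client, ByteBuffer rowKey) throws Exception {
            long t1 = 1000L;
            long t3 = 3000L;

            Column col = new Column();
            col.setName(ByteBuffer.wrap("c".getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap("v".getBytes("UTF-8")));
            col.setTimestamp(t1);
            client.insert(rowKey, new ColumnParent("MyCF"), col, ConsistencyLevel.QUORUM);

            ColumnPath path = new ColumnPath("MyCF");
            path.setColumn(ByteBuffer.wrap("c".getBytes("UTF-8")));
            client.remove(rowKey, path, t3, ConsistencyLevel.QUORUM); // tombstone at T3
        }
    }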
Hi,
We recently moved from 0.8.2 to 1.0.8 and the behaviour seems to have
changed so that tombstones are now not being deleted.
Our application continually adds and removes columns from Cassandra. We
have set a short gc_grace time (3600) since our application would
automatically delete zombies if they reappear.
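For context, gc_grace_seconds is a per column family setting. A hedged sketch of lowering
it to 3600 over thrift, with placeholder keyspace and column family names (the
cassandra-cli equivalent is roughly: update column family MyCF with gc_grace = 3600;):

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.CfDef;
    import org.apache.cassandra.thrift.KsDef;

    public class GcGraceSketch {
        // Fetch the current definition of "MyCF" and lower gc_grace_seconds to one
        // hour. Tombstones older than that become candidates for removal at the
        // next compaction that includes all fragments of the row.
        static void setGcGrace(Cassandra.Client client) throws Exception {
            client.set_keyspace("MyKeyspace");
            KsDef ks = client.describe_keyspace("MyKeyspace");
            for (CfDef cf : ks.getCf_defs()) {
                if (cf.getName().equals("MyCF")) {
                    cf.setGc_grace_seconds(3600);
                    client.system_update_column_family(cf); // returns the new schema version
                }
            }
        }
    }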
I just solved it.
It was my mistake with using ByteBuffer: the array() method returns the
entire backing array without considering the buffer's offset and position.
It works using:
String rowName = Charset.forName(UTF_8).decode(entry.getKey()).toString();
Ross
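As a self-contained illustration of that pitfall (the padded backing array and key
contents below are made up for the demo):

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    public class ByteBufferDemo {
        public static void main(String[] args) {
            Charset utf8 = Charset.forName("UTF-8");

            // A buffer that views only the "row-1" slice of a larger backing array,
            // similar to the key ByteBuffers handed back by the thrift client.
            byte[] backing = "XXXXrow-1YYYY".getBytes(utf8);
            ByteBuffer key = ByteBuffer.wrap(backing, 4, 5).slice();

            // Wrong: array() returns the whole backing array, padding included.
            String bad = new String(key.array(), utf8);

            // Right: decode() honours position and limit (duplicate() leaves the
            // original buffer's position untouched).
            String good = utf8.decode(key.duplicate()).toString();

            System.out.println(bad);  // XXXXrow-1YYYY
            System.out.println(good); // row-1
        }
    }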
On 1 February 2012 22:42, Ross Black wrote:
> compact it down.
>
> Hope that helps.
>
> -----
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 5 Jul 2011, at 03:05, Ross Black wrote:
>
> Hi,
>
> I am using Cassandra 0.7.5 on Linux machines.
>
Hi,
I am using Cassandra 0.7.5 on Linux machines.
I am trying to backup data from a multi-node cluster (3 nodes) and restore
it into a single node cluster that has a different name (for development
testing).
The multi-node cluster is backed up using clustertool global_snapshot, and
then I copy the snapshot files to the single-node cluster.
Hi,
I am using the thrift client batch_mutate method with Cassandra 0.7.0 on
Ubuntu 10.10.
When the size of the mutations gets too large, the client fails with the
following exception:
Caused by: org.apache.thrift.transport.TTransportException:
java.net.SocketException: Connection reset
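A common workaround is to cap how much data goes into each batch_mutate call; raising
thrift_framed_transport_size_in_mb / thrift_max_message_length_in_mb in cassandra.yaml is
the other lever. A minimal sketch, assuming a connected Cassandra.Client and an already
built mutation map; the rows-per-batch limit is arbitrary:

    import java.nio.ByteBuffer;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.Mutation;

    public class ChunkedBatchMutate {
        // Sends the mutation map in slices of at most rowsPerBatch row keys so that
        // no single thrift message grows past the server's frame/message size limit.
        static void send(Cassandra.Client client,
                         Map<ByteBuffer, Map<String, List<Mutation>>> all,
                         int rowsPerBatch) throws Exception {
            Map<ByteBuffer, Map<String, List<Mutation>>> chunk =
                    new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
            for (Map.Entry<ByteBuffer, Map<String, List<Mutation>>> e : all.entrySet()) {
                chunk.put(e.getKey(), e.getValue());
                if (chunk.size() >= rowsPerBatch) {
                    client.batch_mutate(chunk, ConsistencyLevel.QUORUM);
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                client.batch_mutate(chunk, ConsistencyLevel.QUORUM);
            }
        }
    }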