In my cluster setup I have two datacenters with 5 hosts in one DC and 3 in
the other.
In the 5-host DC I'd like to remove two hosts so I'd get 3 and 3 in each.
The two nodes I'd like to decommission have less RAM than the other 3, so
they operate more slowly.
What's the most effective way to decommission them?
My decommission was progressing OK, although very slowly, but I'll send
another question to the list about that...
The exception must have been a hiccup; I hope I won't get it again...
On Tue, May 18, 2010 at 4:10 PM, Gary Dusbabek wrote:
> If I had to guess, I'd say that something at the trans
That sounds like it, thanks.
On Tue, May 18, 2010 at 3:53 PM, Roger Schildmeijer
wrote:
> This is hopefully fixed in trunk (CASSANDRA-757 (revision 938597));
> "Replace synchronization in Gossiper with concurrent data structures and
> volatile fields."
>
> // Roger Schildmeijer
>
>
> On Tue, May 1
Run nodetool streams.
On May 18, 2010 4:14 PM, "Maxim Kramarenko" wrote:
Hi!
After nodetool decommission, the data size on all nodes has doubled, the node is
still up and in the ring, and there is no streaming and no tmp SSTables now.
BTW, I have an ssh connection to the server, so after running nodetool decommission I
expect that
In a 5-node cluster, I noticed in our client error log that one of the
nodes was consistently throwing cassandra_UnavailableException during
a read operation.
Looking into JMX, it was obvious that one of the nodes' view of the
ring was out of sync.
$ nodetool -host 192.168.20.150 ring
Address
2010/5/19 Maxim Kramarenko:
> Hi!
>
> We have a mail archive application, so we have a lot of data (30TB on multiple
> nodes) and need to delete data after a few months of storage.
>
> Questions are:
>
> 1) Compaction requires extra space to run. What happens if a node has no
> extra space for compaction
We are currently working on a prototype that uses Cassandra for a
realtime-ish statistics system. This seems to be quite a common use
case. If people are interested, maybe it would be worth collaborating on
this beyond design discussions on the list. But first let me explain
our approach and where w
Thanks Jonathan, using MySQL as an id sequence generator is definitely a
good option. One thing though: does using sequential ids defeat the purpose
of the random partitioner?
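As a quick sanity check on my own question: as I understand it, the random partitioner places rows by the MD5 hash of the row key, so even strictly sequential ids should map to widely scattered tokens. A minimal sketch of that idea (plain Python, not Cassandra's actual implementation):

```python
import hashlib

def token(key: bytes) -> int:
    # Sketch of RandomPartitioner-style placement: derive a token from
    # the MD5 digest of the row key, giving a value in [0, 2**128).
    return int.from_bytes(hashlib.md5(key).digest(), "big")

# Sequential ids still produce scattered tokens:
tokens = [token(str(i).encode()) for i in range(1, 4)]
print(tokens)
```

If that's right, sequential ids cost nothing for load distribution; they would only matter if you wanted range scans, which the random partitioner doesn't support anyway.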
On Tue, May 18, 2010 at 11:25 PM, Jonathan Ellis wrote:
> Those are 2 of the 3 options (the other one being, continue to
>
Hello!
I have a 3-node cluster: node1, node2, node3. Replication factor = 2.
I ran decommission on node3 and it's in progress, moving data to node1.
Ring on all nodes shows all 3 nodes up, no problems (but node1 responds
with a 3-5 sec delay).
I tried to execute a few "get" statements using the cli, l
Hi!
We have a mail archive application, so we have a lot of data (30TB on
multiple nodes) and need to delete data after a few months of storage.
Questions are:
1) Compaction requires extra space to run. What happens if a node has
no extra space for compaction? Will it crash, or just stop compacting?
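To make question 1 concrete, here is the back-of-the-envelope check I've been doing (the worst-case assumption is mine, not an official figure): a major compaction that rewrites all SSTables writes the merged SSTable in full before deleting the inputs, so it can temporarily need roughly as much free disk as the live data being compacted. With made-up numbers:

```python
# Hypothetical node figures; substitute your own.
live_data_gb = 400   # total SSTable data on the node
free_disk_gb = 250   # free space on the data volume

# Worst case for a major compaction: the merged SSTable is written out
# in full before the old SSTables are removed, so you need about
# live_data_gb of free space on top of the data itself.
needed_gb = live_data_gb
fits = free_disk_gb >= needed_gb
print(fits)  # prints False: this node lacks the headroom
```

On these numbers the node couldn't complete a full major compaction, which is why I'm asking what actually happens in that situation.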
Thanks for your information.
I looked at some of the source code of the implementation. There are still some questions:
1. How do I know that the binary write message was sent to the endpoint successfully?
2. What will happen if some of the natural endpoints are dead?
Thanks again.
On Wed, May 19, 2010 at 2:26 PM, Jonathan Ellis wrote: