Hi,
My cluster was running fine. I rebooted all three nodes (one by one), and
now all nodes are back up and running. "nodetool status" shows UP for all
three nodes on all three nodes:
--  Address      Load       Tokens  Owns  Host ID  Rack
UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a7
Yes, each of the three nodes sees all three as UN.
Also, connecting from a local Cassandra machine using cqlsh, I can run the
same query just fine (with QUORUM consistency level).
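For reference, this is roughly what I run locally (keyspace and table names
here are placeholders, not the real ones):

cqlsh> CONSISTENCY QUORUM;
cqlsh> SELECT * FROM my_keyspace.my_table WHERE id = 42;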
On 4 February 2016 at 21:02, Robert Coli wrote:
> On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <
> flavien.char...@gmail.com> wrote:
>
> Your client has seen the nodes down and has kept them marked
> that way (without retrying). Depending on the client, you may have options
> to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will
> probably fix the problem for now.
>
> Sean Durity
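As a concrete illustration of the suggestion above: with the DataStax
drivers, retry and reconnection behaviour is set when building the Cluster.
A minimal Java sketch (the C# driver exposes equivalent builder options;
the contact point and policy choices here are placeholders, not a
recommendation):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
import com.datastax.driver.core.policies.ExponentialReconnectionPolicy;

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.0.1")
        // How failed statements are retried.
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        // How a host marked down is re-tried: back off from 1s up to 60s
        // between reconnection attempts.
        .withReconnectionPolicy(new ExponentialReconnectionPolicy(1000, 60000))
        .build();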
> On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon wrote:
>
>> I'm using the C# driver 2.5.2. I did try restarting the client
>> application, but that didn't make any difference; I still get the same
>> error after the restart.
>>
On Feb 4, 2016, at 6:32 PM, Flavien Charlon wrote:
>
> No, there was no other change. I did run "apt-get upgrade" before
> rebooting, but Cassandra has not been upgraded.
>
> On 4 February 2016 at 22:48, Bryan Cheng wrote:
>
>> Hey Flavien!
>>
>> Did your re
Hi,
I am using Size Tiered Compaction (Cassandra 2.1.2). Minor compaction is not
triggering even though it should. See the SSTables on disk:
http://pastebin.com/PSwZ5mrT
You can see that we have 41 SSTables between 60MB and 85MB, which should
trigger a compaction unless I am missing something.
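For what it's worth, the table is on what I believe are the default STCS
settings, roughly the following (keyspace and table names are placeholders):

ALTER TABLE my_keyspace.my_table WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': 4,
    'max_threshold': 32
};

With min_threshold = 4, any bucket of similarly-sized SSTables holding at
least 4 of them should be eligible for a minor compaction, so 41 tables in
the 60-85MB range ought to qualify.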
Is tha
> Best Regards!
>
> Chao Yan
> My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>
> My Weibo: http://weibo.com/herewearenow
>
> 2015-01-19 3:51 GMT+08:00 Flavien Charlon:
Thanks Roland. Good to know; I will try that. Do you know the JIRA ticket
number of that bug?
Thanks,
Flavien
On 19 January 2015 at 06:15, Roland Etzenhammer
wrote:
> Hi Flavien,
>
> I hit some problems with minor compactions recently (just some days ago) -
> but with many more tables. In my case
Hi,
When writing to Cassandra using CL = QUORUM (or anything less than ALL), is
it correct to say that Cassandra tries to write to all the replicas, but
only waits for a quorum?
If so, what can cause some replicas to become out of sync when they're all
online?
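(For RF = 3 I assume that means quorum = floor(3/2) + 1 = 2, so the write is
acknowledged once 2 replicas respond, while the third still receives it
without the client waiting on it.)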
Thanks
Flavien
> - read repair (see
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dmlClientRequestsRead.html
> )
> - nodetool repair (manually:
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
> )
>
> Regards
> Andi
threshold for compaction. He speculated it may be a bug. At the
> same time in a different thread, Roland had a similar problem, and Tyler's
> proposed workaround seemed to work for him.
>
> On Tue, Jan 20, 2015 at 3:35 PM, Robert Coli wrote:
>
>> On Sun, Jan 18, 2015, Flavien Charlon <flavien.char...@gmail.com> wrote:
>
>> Thanks Andi. The reason I was asking is that even though my nodes have
>> been 100% available and no write has been rejected, when running an
>> incremental repair, the logs still indicate that some ranges are out of
>> sync.
I don't think you can run nodetool repair on a single-node cluster.
Still, one day or another you'll have to reboot your server, at which point
your cluster will be down. If you want high availability, you should use a
3-node cluster with RF = 3.
On 22 January 2015 at 18:10, Robert Coli wrote:
Did you run incremental repair? Incremental repair is broken in 2.1 and
tends to create way too many SSTables.
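For context, on 2.1 a full repair is the default and incremental repair is
opt-in, e.g.:

nodetool repair        # full repair (the 2.1 default)
nodetool repair -inc   # incremental repair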
On 2 February 2015 at 18:05, 曹志富 wrote:
> Hi all,
> I have an 18-node C* cluster running Cassandra 2.1.2. Some nodes have
> about 40,000+ SSTables.
>
> My compaction strategy is STCS.
>
> Coul
I already experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium to high insert load on the cluster. The repair goes
into a bad state and starts creating way more SSTables than it should (eve
Hi,
What is the process to re-bootstrap a node after hard drive failure
(Cassandra 2.1.3)?
This is the same node as before, but the data folder has been wiped,
and I would like to re-bootstrap it from the data stored on the other nodes
of the cluster (I have RF=3).
I am not using vnodes.
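From what I've read, the usual approach here is to restart the node with the
replace_address flag rather than a plain bootstrap; a sketch, assuming the
node's old IP is 10.0.0.1:

# in cassandra-env.sh on the wiped node, before starting Cassandra:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.1"

(Removing the option again once the node has finished streaming.) Is that
the right way to do it?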
Thanks
> Run "nodetool rebuild" on this node.
>
> 2015-03-25 9:20 GMT+08:00 Flavien Charlon:
>
>> Hi,
>>
>> What is the process to re-bootstrap a node after hard drive failure
>> (Cassandra 2.1.3)?
>>
>> This is the same node as before, but the data folder has been wiped, and
>> I would like to re-bootstrap it from the data stored on the other nodes
>> of the cluster (I have RF=3).