Although I didn't get an answer on this, it's worth noting that removing the
compaction_in_progress folder resolved the issue.
From: Walsh, Stephen
Sent: 17 September 2015 16:37
To: 'user@cassandra.apache.org'
Subject: RE: Cassandra shutdown during large number of compactions - now fails to start
On Wed, Sep 16, 2015 at 3:39 AM, Richard Dawe wrote:
> In that mixed non-EC2/EC2 environment, with GossipingPropertyFileSnitch,
> it seems like you would need to simulate what Ec2Snitch does, and manually
> configure GPFS to treat each Availability Zone as a rack.
>
Yes, you configure GPFS with cassandra-rackdc.properties on each node, setting dc to
the region and rack to the availability zone so it matches what Ec2Snitch would report.
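For example, a minimal cassandra-rackdc.properties sketch for a node in that setup (the
region and zone names below are assumptions for illustration):

    # cassandra-rackdc.properties on an EC2 node in us-east-1a (names assumed)
    # Ec2Snitch treats the region as the datacenter and the availability zone
    # suffix as the rack, so GPFS is configured to report the same values.
    dc=us-east
    rack=1a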
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.0.17.
This is most likely the final release for the 2.0 release series.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising performance.
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.0-rc1.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source and binary distributions are available from the site above.
Compatible versions of the python-driver (3.0.0a3) and java-driver (3.0.0-alpha3)
will be published to PyPI and Maven later today.
In the meantime, you can use the versions bundled with Cassandra 3.0.0-rc1.
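If it helps, a hedged sketch of pulling those driver versions once published (the Maven
coordinates below are the usual DataStax ones and are an assumption here):

    # Python driver, pinned to the pre-release above
    pip install cassandra-driver==3.0.0a3

    <!-- Java driver dependency (coordinates assumed) -->
    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
      <version>3.0.0-alpha3</version>
    </dependency>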
--
AY
On September 21, 2015 at 09:04:57, Jake Luciani (j...@apache.org) wrote:
The Cassandra team is pleased to announce the release of Apache Cassandra version 3.0.0-rc1.
Hi there,
I have a dead node in our cluster, which is in a weird state right now and
cannot be removed from the cluster.
The nodetool status output shows:
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns  Host ID
Order is decommission, remove, assassinate.
Which have you tried?
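For reference, a hedged sketch of that escalation path (the host ID and IP below are
placeholders):

    # 1. decommission: run on the node itself; only works while it is still up
    nodetool decommission

    # 2. remove: run from any live node, using the dead node's Host ID from nodetool status
    nodetool removenode <host-id>

    # 3. assassinate: last resort; `nodetool assassinate <ip>` on 2.2+, or the
    #    Gossiper JMX operation unsafeAssassinateEndpoint(<ip>) on older versions
    nodetool assassinate <ip>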
On Sep 21, 2015 10:47 AM, "Dikang Gu" wrote:
> Hi there,
>
> I have a dead node in our cluster, which is in a weird state right now and
> cannot be removed from the cluster.
>
> The nodetool status output shows:
> Datacenter: DC1
> ===
I have tried all of them, and none of them worked.
1. decommission: the host had a hardware issue, and I cannot connect to it.
2. remove: there is no Host ID shown for it, so removenode did not work.
3. unsafeAssassinateEndpoint: it throws the NPE I pasted before. Can we fix it?
Thanks
Dikang.
On Mon, Se
John,
Yes, the Trilio solution is private and today it is for Cassandra running in
VMware and OpenStack environments. AWS support is on the roadmap. Will reach out
separately to give you a demo after the summit.
Thanks,
Sanjay
_
Sanjay Baronia
VP of Product & Solutions Management
Thank you for your reply Jeff!
I will switch to Cassandra 2.1.9.
Quick follow-up question: do the schema and settings I have set up look
alright? My timestamp column's type is blob; I was wondering if this could
confuse DTCS.
On Sun, Sep 20, 2015 at 3:37 PM, Jeff Jirsa wrote:
> 2.1.4 is getting
The timestamp involved here isn’t the one defined in the schema; it’s the
timestamp written on each cell when you apply a mutation (write).
That timestamp is the one returned by WRITETIME(), and visible in
sstablemetadata – it’s not visible in the schema directly.
Failing to have the proper units on those cell timestamps will confuse DTCS.
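A small sketch of looking at those cell timestamps directly (the keyspace, table, and
column names are made up for illustration):

    -- WRITETIME() returns the cell's write timestamp in microseconds since the
    -- epoch; this is what DTCS buckets on, not the blob column in the schema.
    SELECT sensor_id, event_time, WRITETIME(payload)
    FROM metrics.readings
    LIMIT 10;

    -- The cell timestamp can also be set explicitly on a write:
    INSERT INTO metrics.readings (sensor_id, event_time, payload)
    VALUES ('s1', '2015-09-20 12:00:00+0000', 0xdeadbeef)
    USING TIMESTAMP 1442750400000000;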
Hi,
My application issues more read requests than writes. Under load, the read
latency reported by cfstats for one of the tables is quite high, around 43 ms:
Local read count: 114479357
Local read latency: 43.442 ms
Local write count: 22288868
Local write latency:
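For context, a hedged example of how these per-table numbers are pulled (keyspace and
table names assumed):

    # per-table read/write counts and latencies; cfstats on 2.1, later renamed tablestats
    nodetool cfstats my_keyspace.my_table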
Hi,
When a node is dead, is it supposed to exist in the ring? When I find that a
node is lost and check with nodetool and OpsCenter, I still see the lost
node in the token ring. When I call describe_ring, the lost node is also
returned. Is this how it is supposed to be? Why doesn't the C* server hide
the lost node?
On Mon, Sep 21, 2015 at 8:32 PM, Shenghua(Daniel) Wan wrote:
> Hi,
> When a node is dead, is it supposed to exist in the ring?
>
It is still considered part of the cluster. Imagine a case where you do a
rolling restart: the node would be temporarily out of service for maybe a few
minutes to a few hours.
A dead node should exist in the ring until it is replaced. If you remove a node
without a replacement, you’ll end up with that replica’s ownership being placed
onto another node without the data having been transferred, and queries against
that range will falsely return empty records until a repair is run.
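A hedged sketch of the repair step referred to above (keyspace name assumed; -pr limits
each node to its primary ranges so every range is repaired once across the ring):

    # run on each node in turn
    nodetool repair -pr my_keyspace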