he
> /var/data/cassandra_new/cassandra/*
> folders back into the cluster if you still have it.
>
> -Jeremiah
>
>
>
> On Oct 20, 2016, at 3:58 PM, Branton Davis
> wrote:
>
> Howdy folks. I asked some about this in IRC yesterday, but we're looking
> to hopefully confirm a couple of things for our sanity.
pping.
Thanks for the assurance. I'm thinking (hoping) that we're good.
On Thu, Oct 20, 2016 at 11:24 PM, kurt Greaves wrote:
>
> On 20 October 2016 at 20:58, Branton Davis
> wrote:
>
>> Would they have taken on the token ranges of the original nodes or acted
>>
u run "nodetool cleanup".
> So to answer your question, I don't think the data have been moved away.
> More likely you have extra duplicates here.
>
> Yabin
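For reference, the cleanup step Yabin mentions is run on each pre-existing node once the new nodes have finished joining; a minimal sketch (the keyspace name is only an example):
# drops data for token ranges this node no longer owns
/usr/local/cassandra/bin/nodetool cleanup
# or limit it to a single keyspace
/usr/local/cassandra/bin/nodetool cleanup my_keyspace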
>
> On Thu, Oct 20, 2016 at 6:41 PM, Branton Davis wrote:
>
>> Thanks for the response, Yabin. H
> always suggest keeping the Cassandra application data
> directory completely separate from the system keyspace directory (e.g. they
> shouldn't share a common parent folder).
>
> Regards,
>
> Yabin
>
> On Thu, Oct 20, 2016 at 4:58 PM, Branton Davis wrote:
>
>> Howdy folks.
Howdy folks. I asked some about this in IRC yesterday, but we're looking
to hopefully confirm a couple of things for our sanity.
Yesterday, I was performing an operation on a 21-node cluster (vnodes,
replication factor 3, NetworkTopologyStrategy, and the nodes are balanced
across 3 AZs on AWS EC2).
I doubt that's true anymore. EBS volumes, while previously discouraged,
are the most flexible way to go, and are very reliable. You can attach,
detach, and snapshot them too. If you don't need provisioned IOPS, the GP2
SSDs are more cost-effective and allow you to balance IOPS with cost.
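For what it's worth, that attach/detach/snapshot workflow scripts easily with the AWS CLI; a rough sketch (volume, instance, and device IDs are placeholders):
# flush memtables first so the on-disk SSTables are current
nodetool flush
# snapshot the data volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "cassandra data"
# move the volume to another instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf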
On Mon
This may be a silly question, but has anyone considered making
the mailing list accept unsubscribe requests this way? Or at least filter
them out and auto-respond with a message explaining how to unsubscribe? Seems
like it should be pretty simple and would make it easier for folks to leave
and le
This isn't a direct answer to your question, but jolokia (
https://jolokia.org/) may be a useful alternative. It runs as an agent
attached to your cassandra process and provides a REST API for JMX.
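Roughly, the setup looks like this (agent path, port, and metric name are illustrative, not a recommendation):
# in cassandra-env.sh, load the agent into the Cassandra JVM
JVM_OPTS="$JVM_OPTS -javaagent:/opt/jolokia/jolokia-jvm-agent.jar=port=8778,host=127.0.0.1"
# then any JMX attribute is readable over HTTP, e.g. the node's reported load
curl http://127.0.0.1:8778/jolokia/read/org.apache.cassandra.metrics:type=Storage,name=Load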
On Tue, Jul 19, 2016 at 11:19 AM, Ricardo Sancho
wrote:
> Is anyone using a custom reporter to pl
e node and my
> last rsync now has to copy only a few files which is quite fast and so the
> downtime for that node is within minutes.
>
> Jan
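The trick Jan describes is essentially an iterative rsync, so only the final pass happens with the node down; a sketch with illustrative paths:
# pre-copy while Cassandra is still running; repeat until the delta is small
rsync -a /var/data/cassandra/ /var/data/cassandra_new/
rsync -a /var/data/cassandra/ /var/data/cassandra_new/
# final, fast pass with the node stopped, then bring it back up
service cassandra stop
rsync -a --delete /var/data/cassandra/ /var/data/cassandra_new/
service cassandra start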
>
>
>
> Sent from my iPhone
>
> On 18.02.2016 at 22:12, Branton Davis wrote:
>
> Alain, thanks for sharing! I&
ssandra/data2
# unmount second volume
umount /dev/xvdf
# In AWS console:
# - detach sdf volume
# - delete volume
# remove mount directory
rm -Rf /var/data/cassandra_data2/
# restart cassandra
service cassandra start
# run repair
/usr/local/cassandra/bin/nodetool repair -pr
On Thu, Feb 18
lastpickle.com
>
> 2016-02-18 8:28 GMT+01:00 Anishek Agarwal:
>
>> Hey Branton,
>>
>> Please do let us know if you face any problems doing this.
>>
>> Thanks
>> anishek
>>
>> On Thu, Feb 18, 2016 at 3:33 AM, Branton Davis <
>> branton.da...@sp
We're about to do the same thing. It shouldn't be necessary to shut down
the entire cluster, right?
On Wed, Feb 17, 2016 at 12:45 PM, Robert Coli wrote:
>
>
> On Tue, Feb 16, 2016 at 11:29 PM, Anishek Agarwal
> wrote:
>>
>> To accomplish this can I just copy the data from disk1 to disk2 with i
ing safely.
>
> On Tue, 16 Feb 2016 at 10:57 Robert Coli wrote:
>
>> On Sat, Feb 13, 2016 at 4:30 PM, Branton Davis <
>> branton.da...@spanning.com> wrote:
>>
>>> We use SizeTieredCompaction. The nodes were about 67% full and we were
>>> plann
Yep, nodes were added one at a time and I ran nodetool clearsnapshot (there
weren't any snapshots). The way I finally got past this was adding a second volume via
data_file_directories (JBOD).
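Concretely, adding a second volume via data_file_directories comes down to something like this (a sketch; the device name and paths are illustrative, not the exact steps used here):
# format and mount the new volume, and let the cassandra user own it
mkfs.ext4 /dev/xvdf
mkdir -p /var/data/cassandra_data2
mount /dev/xvdf /var/data/cassandra_data2
chown -R cassandra:cassandra /var/data/cassandra_data2
# add the new directory as a second entry under data_file_directories in cassandra.yaml:
#   data_file_directories:
#       - /var/data/cassandra/data
#       - /var/data/cassandra_data2
# then restart the node
service cassandra restart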
On Tue, Feb 16, 2016 at 12:57 PM, Robert Coli wrote:
> On Sat, Feb 13, 2016 at 4:30 PM, Branton Davis wrote:
reed again.
>
> If using SizeTieredCompaction you can end up with very large sstables, as I
> do (>250 GB each). In the worst case a compaction could need roughly twice
> that space, which is why I set my disk usage alert threshold to 45%.
>
> Just my 2 cents.
> Jan
>
> Sent from my iPhone
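One way to eyeball that worst-case headroom is to look at the largest SSTable data files on disk; a sketch, assuming GNU find and an illustrative data path:
# largest SSTable data files first; compacting the top few together can
# temporarily need roughly their combined size again
find /var/data/cassandra -name '*-Data.db' -printf '%s\t%p\n' | sort -rn | head -20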
One of our clusters had a strange thing happen tonight. It's a 3 node
cluster, running 2.1.10. The primary keyspace has RF 3, vnodes with 256
tokens.
This evening, over the course of about 6 hours, disk usage increased from
around 700GB to around 900GB on only one node. I was at a loss as to wh
If you use Chef, there's this cookbook:
https://github.com/michaelklishin/cassandra-chef-cookbook
It's not perfect, but you can make a wrapper cookbook pretty easily to
fix/extend it to do anything you need.
On Wed, Jan 27, 2016 at 11:25 PM, Richard L. Burton III
wrote:
> I'm curious to see if
We recently went down the rabbit hole of trying to understand the output of
lsof. lsof -n has a lot of duplicates (files opened by multiple threads).
Use 'lsof -p $PID' or 'lsof -u cassandra' instead.
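A quick way to get a truthful count (the CassandraDaemon process-name match is an assumption about a standard install):
# count file descriptors actually held by the Cassandra process,
# avoiding the per-thread duplicates that plain 'lsof -n' reports
CASSANDRA_PID=$(pgrep -f CassandraDaemon)
lsof -p "$CASSANDRA_PID" | wc -l
# or by user
lsof -u cassandra | wc -l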
On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng wrote:
> Is your compaction progressing as expect
http://www.datastax.com/gartner-magic-quadrant-odbms
>
> DataStax is the fastest, most scalable distributed database technology,
> delivering Apache Cassandra to the world’s most innovative enterprises.
> Datastax is built to be agile, always-on, and predictably scalable to any
> size
On Tue, Oct 20, 2015 at 3:31 PM, Robert Coli wrote:
> On Tue, Oct 20, 2015 at 9:13 AM, Branton Davis wrote:
>
>>
>>> Just to clarify, I was thinking about a scenario/disaster where we lost
>> the entire cluster and had to rebuild from backups. I assumed we wou
Howdy Cassandra folks.
Crickets here and it's sort of unsettling that we're alone with this
issue. Is it appropriate to create a JIRA issue for this or is there maybe
another way to deal with it?
Thanks!
On Sun, Oct 18, 2015 at 1:55 PM, Branton Davis
wrote:
> Hey all.
>
>
On Mon, Oct 19, 2015 at 5:42 PM, Robert Coli wrote:
> On Mon, Oct 19, 2015 at 9:20 AM, Branton Davis wrote:
>
>> Is that also true if you're standing up multiple nodes from backups that
>> already have data? Could you not stand up more than one at a time since
>
Is that also true if you're standing up multiple nodes from backups that
already have data? Could you not stand up more than one at a time since
they already have the data?
On Mon, Oct 19, 2015 at 10:48 AM, Eric Stevens wrote:
> It seems to me that as long as cleanup hasn't happened, if you
> *
Hey all.
We've been seeing this warning on one of our clusters:
2015-10-18 14:28:52,898 WARN [ValidationExecutor:14]
org.apache.cassandra.db.context.CounterContext invalid global counter shard
detected; (4aa69016-4cf8-4585-8f23-e59af050d174, 1, 67158) and
(4aa69016-4cf8-4585-8f23-e59af050d174, 1