user <user@cassandra.apache.org> wrote:
> Perhaps have a read here?
> https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/operations/opsAddNodeToCluster.html
>
>
> On 04/04/2023 06:41, David Tinker wrote:
>
> Ok. Have to psych myself up to the add node task a bit. Didn't go well the
> first time
hen you’re done
> (successfully) will remove a lot of it.
>
>
>
> On Apr 3, 2023, at 8:14 PM, David Tinker wrote:
>
>
> Looks like the remove has sorted things out. Thanks.
>
> One thing I am wondering about is why the nodes are carrying a lot more
>
and bring your cluster back.
>>
>> Next time when you are doing something like this again, please test it
>> out on a non-production environment, make sure everything works as expected
>> before moving onto the production.
>>
>>
>> On 03/04/2023 06:28, Davi
ed in the same rack. TBH -
> I'd build out two more nodes to have 6 nodes across 3 racks (2 in each),
> just to ensure even distribution. Otherwise, you might notice that the
> nodes sharing a rack will consume disk at a different rate than the nodes
> which have their own ra
I have a 3 node cluster using the GossipingPropertyFileSnitch and
replication factor of 3. All nodes are leased hardware and more or less the
same. The cassandra-rackdc.properties files look like this:
dc=dc1
rack=rack1
(rack2 and rack3 for the other nodes)
Now I need to expand the cluster. I was
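For what it's worth, a rough sketch of the new (4th) node's cassandra-rackdc.properties if it simply joins one of the existing racks (rack1 here is only an example):

    dc=dc1
    rack=rack1

The usual sequence after that: keep the new node out of its own seed list, leave auto_bootstrap at its default of true, start it, wait for it to show UN in nodetool status, then run nodetool cleanup on the older nodes.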
> On 03/04/2023 06:28, David Tinker wrote:
>
> Should I use assassinate or removenode? Given that there is some data on
> the node. Or will that be found on the other nodes? Sorry for all the
> questions but I really
run nodetool rebuild on the new node
>
> If you assassinate it now you violate consistency for your most recent
> writes
>
>
>
> On Apr 2, 2023, at 10:22 PM, Carlos Diaz wrote:
>
>
> That's what nodetool assassinate will do.
>
> On Sun, Apr 2, 2023 at 10:19 P
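A minimal sketch of the rebuild being suggested, assuming the single data centre is the dc1 from the cassandra-rackdc.properties above:

    # stream existing data onto the node that skipped bootstrap
    nodetool rebuild -- dc1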
o.
>
> On Sun, Apr 2, 2023 at 10:19 PM David Tinker
> wrote:
>
>> Is it possible for me to remove the node from the cluster i.e. to undo
>> this mess and get the cluster operating again?
>>
>> On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz wrote:
>>
>>>
d list. However, if you do decide to fix
> the issue with the racks first, assassinate this node (nodetool assassinate
> ), and update the rack name before you restart.
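Roughly, the two options being weighed here look like this (the host ID and IP are placeholders):

    nodetool status                  # note the Host ID of the problem node
    nodetool removenode <host-id>    # preferred: streams its data back to the remaining replicas
    nodetool assassinate <ip>        # last resort: drops the node without streaming anything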
>
> On Sun, Apr 2, 2023 at 10:06 PM David Tinker
> wrote:
>
>> It is also in the seeds list for the other
I should add that the new node does have some data.
On Mon, Apr 3, 2023 at 7:04 AM David Tinker wrote:
> It is also in the seeds list for the other nodes. Should I remove it from
> those, restart them one at a time, then restart it?
>
> /etc/cassandra # grep -i bootstrap *
>
at 7:01 AM Carlos Diaz wrote:
> Just remove it from the seed list in the cassandra.yaml file and restart
> the node. Make sure that auto_bootstrap is set to true first though.
>
> On Sun, Apr 2, 2023 at 9:59 PM David Tinker
> wrote:
>
>> So likely because I made it a seed n
So likely because I made it a seed node when I added it to the cluster it
didn't do the bootstrap process. How can I recover this?
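A sketch of the relevant bits of cassandra.yaml on the new node (addresses are placeholders); the point of the advice above is that a node which still needs to bootstrap must not list itself as a seed:

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.0.2"   # existing nodes only, not this node
    auto_bootstrap: true                     # this is also the default when the line is absent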
On Mon, Apr 3, 2023 at 6:41 AM David Tinker wrote:
> Yes replication factor is 3.
>
> I ran nodetool repair -pr on all the nodes (one at a time
multiples of your replication factor in order to keep the "racks"
> balanced. In other words, this node should have been added to rack 1, 2 or
> 3.
>
> Having said that, you should be able to easily fix your problem by running
> a nodetool repair -pr on the new node.
>
> O
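For reference, the repair being suggested, run on one node at a time (the keyspace name is a placeholder):

    nodetool repair -pr my_keyspace

The -pr flag repairs only that node's primary token ranges, which is why it has to be run on every node in turn to cover the whole ring.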
Hi All
I recently added a node to my 3 node Cassandra 4.0.5 cluster and now many
reads are not returning rows! What do I need to do to fix this? There
weren't any errors in the logs or other problems that I could see. I
expected the cluster to balance itself but this hasn't happened (yet?). The
no
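One quick way to tell a consistency problem from actual data loss is to retry a failing read at a higher consistency level in cqlsh (keyspace, table and key are placeholders):

    CONSISTENCY QUORUM;
    SELECT * FROM my_keyspace.my_table WHERE id = 'some-key';

If the rows come back at QUORUM but not at ONE, the data is still on the older replicas and repair will sort it out.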
Hi Guys
I need to backup my 3 node Cassandra cluster to a remote machine. Is there
a tool like Barman (really nice streaming backup tool for Postgresql) for
Cassandra? Or does everyone roll their own scripts using snapshots and so
on?
The data is on all 3 nodes using about 900G of space on each.
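If you do roll your own with snapshots, the usual shape is snapshot plus rsync, roughly like this (tag, keyspace, paths and host are placeholders):

    nodetool snapshot -t nightly my_keyspace       # hard-links the live SSTables under each table's snapshots/nightly
    rsync -aR /var/lib/cassandra/data/my_keyspace/*/snapshots/nightly \
          backup-host:/backups/$(hostname)/
    nodetool clearsnapshot -t nightly

A cqlsh "DESCRIBE KEYSPACE my_keyspace" dump alongside the SSTables makes restores much easier.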
Thanks guys. The IP address hasn't changed so I will go ahead and start the
server and repair.
On Mon, Mar 1, 2021 at 1:50 PM Erick Ramirez
wrote:
> If the node's only been down for less than gc_grace_seconds and the data
> in the drives are intact, you should be fine just booting the server and
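The "start the server and repair" step mentioned above, roughly (keyspace is a placeholder):

    nodetool status              # wait for the node to show UN again
    nodetool repair my_keyspace  # catch-up repair in case any hints were dropped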
Hi Guys
I have a 3 node cluster running 4.0b3 with all data replicated to all 3
nodes. This morning one of the servers started randomly rebooting (up for a
minute or two then reboot) for a couple of hours. The cluster continued
running normally during this time (nice!).
My hosting company has rep
I could really use zstd compression! So if it's not too buggy I will take a
chance :) Tx
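Zstd is enabled per table in 4.0; a sketch, with the table name and level as placeholders:

    ALTER TABLE my_keyspace.my_table
      WITH compression = {'class': 'ZstdCompressor', 'compression_level': 3};

Existing SSTables only pick up the new compressor as they are rewritten by compaction (or by nodetool upgradesstables -a).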
> On Fri, Jan 17, 2014 at 1:41 AM, David Tinker wrote:
>
>> I have an app t
I have an app that stores lots of bits of text in Cassandra. One of
the things I need to do is keep a global word frequency table.
Something like this:
CREATE TABLE IF NOT EXISTS word_count (
word text,
count counter,
PRIMARY KEY (word)
);
This is slow to read as the rows (100's of thousands
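Assuming the count column is meant to be a Cassandra counter, the write and single-word read look like this; the slow part described here is scanning the hundreds of thousands of rows, which no column type fixes:

    UPDATE word_count SET count = count + 1 WHERE word = 'cassandra';
    SELECT count FROM word_count WHERE word = 'cassandra';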
We are seeing the exact same exception in our logs. Is there any workaround?
We never delete rows but we do a lot of updates. Is that where the
tombstones are coming from?
On Wed, Dec 25, 2013 at 5:24 PM, Sanjeeth Kumar wrote:
> Hi all,
> One of my cassandra nodes crashes with the following ex
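Updates can generate tombstones even without DELETEs; two common cases, sketched with placeholder table and column names:

    UPDATE ks.tbl SET info = null WHERE id = 'x';        -- writing a null creates a cell tombstone
    UPDATE ks.tbl SET tags = ['a', 'b'] WHERE id = 'x';  -- replacing a whole collection writes a range tombstone first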
Done. https://datastax-oss.atlassian.net/browse/JAVA-231
On Thu, Dec 19, 2013 at 10:42 AM, Sylvain Lebresne wrote:
> Mind opening a ticket on https://datastax-oss.atlassian.net/browse/JAVA?
> It's almost surely a bug.
>
> --
> Sylvain
>
>
> On Thu, Dec 19, 2013 at 8
We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
now and then we get the following exception:
2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler -
Unexpected error while querying /x.x.x.x
java.lang.Nu
parallelize your inserts. Unlogged batches is one way to do it (it's
> really
> all Cassandra does with unlogged batch, parallelizing). But as John Sanda
> mentioned, another option is to do the parallelization client side, with
> executeAsync.
>
> --
> Sylvain
>
>
>
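A rough sketch of the client-side parallelization Sylvain describes, using the 2.0-era Java driver API from this thread; the contact point is a placeholder and test.wibble is the table from the snippets below:

    import com.datastax.driver.core.*;
    import java.util.ArrayList;
    import java.util.List;

    public class ParallelInsert {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            PreparedStatement ps = session.prepare(
                    "INSERT INTO test.wibble (id, info) VALUES (?, ?)");

            // Fire all inserts asynchronously, then wait for each one to finish.
            List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
            for (int i = 0; i < 1000; i++) {
                futures.add(session.executeAsync(ps.bind("" + i, "aa" + i)));
            }
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
            cluster.close();
        }
    }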
Yes, that's what I found.
This is faster:
for (int i = 0; i < 1000; i++) session.execute("INSERT INTO
test.wibble (id, info) VALUES ('${"" + i}', '${"aa" + i}')")
Than this:
def ps = session.prepare("INSERT INTO test.wibble (id, info) VALUES (?, ?)")
for (int i = 0; i < 1000; i++) session.execute
I have tried the DataStax Java driver and it seems the fastest way to
insert data is to compose a CQL string with all parameters inline.
This loop takes 2500ms or so on my test cluster:
PreparedStatement ps = session.prepare("INSERT INTO perf_test.wibble
(id, info) VALUES (?, ?)")
for (int i = 0;
>
> If you do end up using it, make sure to monitor write latency so you
> don't get hit by the bus.
>
>
> On Sat, Nov 16, 2013 at 6:12 AM, David Tinker
> wrot
Sat, Nov 16, 2013 at 11:05 AM, Philippe wrote:
>>
>> Hi David, we tried it two years ago and the performance of the USB stick
>> was so dismal we stopped.
>> Cheers
>>
>> On 16 Nov 2013 at 15:13, "David Tinker" wrote:
>>
>>> Our hosting
Our hosting provider has a cost effective server with 2 x 4TB disks
with a 16G (or 64G) USB thumb drive option. Would it make sense to put
the Cassandra commit log on the USB thumb disk and use RAID0 to use
both 4TB disks for data (and Ubuntu 12.04)?
Anyone know how long USB flash disks last when
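If anyone does try it, the split is just two settings in cassandra.yaml (mount points are placeholders):

    commitlog_directory: /mnt/usbstick/cassandra/commitlog
    data_file_directories:
        - /var/lib/cassandra/data    # the RAID0 array over the two 4TB disks

Commit log segments are rewritten continuously, which is roughly the worst write pattern for cheap flash, so wear and latency are the things to watch.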