It could also be related to the size of the mutations being written to your
hint files. Are you seeing any mutation warnings? If so, temporarily
increasing the commitlog segment size should help solve the problem.
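For reference, the relevant knobs in cassandra.yaml look roughly like this
(a sketch for 3.x; the values are illustrative, not a recommendation):

    # cassandra.yaml (illustrative values, Cassandra 3.x)
    # The maximum mutation size defaults to half the commitlog segment size,
    # so the stock 32 MiB segment caps a single mutation at 16 MiB.
    commitlog_segment_size_in_mb: 64
    # Or pin the limit explicitly (in KiB); when unset it is derived as
    # commitlog_segment_size_in_mb * 1024 / 2.
    # max_mutation_size_in_kb: 32768

The new segment size needs a rolling restart to take effect, and it is worth
reverting once the oversized writes have been dealt with.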
Thanks,
On Tue, Nov 16, 2021 at 1:29 PM Bowen Song wrote:
> I think your problem is likely ...
... policies?
Thanks in advance.
On Tue, Sep 15, 2020 at 12:57 AM Sandeep Nethi wrote:
Thanks Erick.
We are using the DataStax Java driver (
https://docs.datastax.com/en/developer/java-driver/).
Driver version number: 3.3.0
C* version: 3.11.6
Regards,
Sandeep
On Sat, Sep 12, 2020 at 2:41 PM Erick Ramirez
wrote:
> That would be my last option to add a new host as contact point but ...
> ... definition) and partitioner is used across nodes?
>
> ~Asad
>
> *From:* Sandeep Nethi
> *Sent:* Tuesday, September 8, 2020 1:12 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: C...
> ... which you are giving as contact points
>
> On Tue, Sep 8, 2020 at 11:27 AM Sandeep Nethi wrote:
Yes, all nodes are UN and no issues identified. In fact, I could see some
client connections on the new nodes with telnet, but am not seeing any traffic.
Cassandra version: 3.11.6
Load Balancing policy used is default with no custom policies.
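For what it's worth, the 3.x Java driver default amounts to roughly the
following (just a sketch; the addresses here are made up):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class DefaultPolicyExample {
        public static void main(String[] args) {
            // Driver 3.x default: token-aware wrapping DC-aware round-robin.
            // Without withLocalDc(), the local DC is inferred from the
            // contact points supplied below (hypothetical addresses).
            Cluster cluster = Cluster.builder()
                    .addContactPoints("10.0.0.1", "10.0.0.2", "10.0.0.3")
                    .withLoadBalancingPolicy(new TokenAwarePolicy(
                            DCAwareRoundRobinPolicy.builder().build()))
                    .build();
            System.out.println(cluster.getClusterName());
            cluster.close();
        }
    }

Since the default infers the local DC from the contact points, it is worth
double-checking that the new nodes sit in the DC the driver treats as local.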
Thanks,
On Tue, Sep 8, 2020 at 5:52 PM Erick Ramirez wrote:
> ...
Thanks Erick.
Just to confirm, my application connection string is using host address
details and not a VIP/some other load balancer in between.
So, in order for my application to successfully establish the control
connection and be able to read the system tables to learn about the
topology automatically ...
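(One quick way to verify what the driver actually discovered is to dump its
metadata; a sketch against driver 3.x, with a hypothetical contact point:)

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;

    public class TopologyCheck {
        public static void main(String[] args) {
            // Connect through one known node; the control connection then
            // reads system.local/system.peers and discovers the rest.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1") // hypothetical existing node
                    .build();
            cluster.init();
            for (Host host : cluster.getMetadata().getAllHosts()) {
                System.out.printf("%s dc=%s rack=%s%n",
                        host.getAddress(), host.getDatacenter(), host.getRack());
            }
            cluster.close();
        }
    }

If the newly added nodes are missing from that output, the client never
learned about them.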
Hello everyone,
Hope everything is going well!
Coming to the problem,
I have recently scaled out an existing 3-nodes/rack Cassandra cluster by an
additional 3 nodes (1 in each rack). The scale-out was successful, all the
nodes are UN, and there are no errors from the application servers.
But what I've been observing is ...
Hi,
I think there is no way to revert sstables to the previous version once they
have been upgraded and have taken new writes on the upgraded version.
But you can try a workaround: create a secondary datacenter on 3.11.0 while
the primary datacenter is upgraded to 3.11.5. This way, if you come
across ...
Hi Nitan,
You shouldn't have any issues if you set things up properly.
A few possible issues (each can become a bottleneck):
* CPU allocation (instances can compete)
* Disk throughput & IOPS
* Port allocations (see the sketch below)
* Network throughput
* Consistency issues.
And there are workarounds for all of the above ...
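For example, for the port allocations point, each instance on a shared host
needs its own set (illustrative values only):

    # cassandra.yaml for instance 1 (illustrative values)
    storage_port: 7000
    ssl_storage_port: 7001
    native_transport_port: 9042

    # cassandra.yaml for instance 2, shifted so the two never collide
    storage_port: 7100
    ssl_storage_port: 7101
    native_transport_port: 9142
    # (the JMX port, set in cassandra-env.sh, must also differ per instance)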
> ... is no running full repair.
> I cannot see any log about repair in system.log these days.
> Does full repair have anything to do with using a large amount of memory?
>
> Thanks.
>
> On 2019/05/01 10:47:50, Sandeep Nethi wrote:
Are you by any chance running the full repair on these nodes?
Thanks,
Sandeep
On Wed, 1 May 2019 at 10:46 PM, Mia wrote:
> Hello, Ayub.
>
> I'm using Apache Cassandra, not the DSE edition, so I have never used the
> DSE Search feature.
> In my case, all the nodes of the cluster have the same problem ...
Hi Kunal,
The simple solution for this case would be as follows:
1. Run a *full repair*.
2. Add a firewall rule to block traffic on port 7000 (and 7001 if SSL is
enabled) between the two datacenters' nodes.
3. Check the status of the Cassandra cluster from both datacenters; each DC
must show the other DC's nodes as down ...
... Singh wrote:
> Can you define "inconsistent" results? What's the topology of the
> cluster? What were you expecting and what did you get?
>
> On Thu, Mar 14, 2019 at 7:09 AM sandeep nethi wrote:
Hello,
Does anyone experience inconsistent results after restoring Cassandra
3.11.1 with the refresh command? Was there any bug in this version of
Cassandra?
Thanks in advance.
Regards,
Sandeep