Hi, I would say "it depends" - as it always does. I have had a 21-node cluster
running in production in one DC, with versions ranging from 3.11.1 to 3.11.6,
without a single issue for over a year. I just upgraded all nodes
to 3.11.6 for the sake of consistency.
Sent from my iPhone
Wow, I didn't expect such a fundamental change in a minor version. -
But then I can at least say that I have a 21-node production cluster
running with all different versions from 3.11.0 to 3.11.4.
On Thu, 5 Mar 2020 at 08:09, Erick Ramirez <erick.rami...@datastax.com> wrote:
> Are y
Are you telling me that Cassandra changes the sstable file format when updating
from 3.11.4 to 3.11.5?
Sent from my iPhone
> On 05.03.2020 at 02:05, Sandeep Nethi wrote:
>
>
> Hi,
>
> I think there is no way to revert sstables to the previous version once they
> are upgraded and have the new format.
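
For what it's worth, the format a node is writing can be read off the sstable
file names - the two letters in front of the generation number are the format
version. A rough sketch, assuming the default data directory; keyspace and
table names are placeholders:

    # note the version prefix of the data files, e.g. mc-42-big-Data.db vs. md-42-big-Data.db
    ls /var/lib/cassandra/data/<keyspace>/<table>-*/ | grep 'Data.db'
    # taking a snapshot before upgrading keeps hard links to the old-format files
    # around as a fallback
    nodetool snapshot -t pre_upgrade <keyspace>
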
1. Do the same people where you work operate the cluster and also write
the application code?
yes
2. Do you have a metrics stack that allows you to see graphs of
various metrics with all the nodes displayed together?
no
3. Do you have a log stack that allows you to
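
In the meantime, a few things you can get per node straight from the shell
without any metrics or log stack (a rough sketch; run against each node):

    # thread-pool backlogs and dropped messages on this node
    nodetool tpstats
    # per-table latencies, sstable counts and partition sizes
    nodetool tablestats <keyspace>
    # ring view: state (UN/UJ/DN), load and ownership per node
    nodetool status
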
Just turn it off. There is no persistent change to the cluster until the node
has finished bootstrapping and is in status UN.
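
Concretely, something like this (the service name assumes a package install):

    # on any live node: a bootstrapping node shows up as UJ, a normal one as UN
    nodetool status
    # stopping the joining node simply abandons the bootstrap
    sudo systemctl stop cassandra
    # wipe its data and commitlog directories (cassandra.yaml: data_file_directories,
    # commitlog_directory) before pointing it at the new topology and retrying
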
Sent from my iPhone
> On 12.01.2019 at 22:36, Osman YOZGATLIOĞLU wrote:
>
> Hello,
>
> I have one joining node. I decided to change cluster topology and I need to
And by all means, do not treat Cassandra as a relational database. - Beware
of the limitations of CQL in contrast to SQL.
I don't want to argue against Cassandra, because I like it for what it was
primarily designed for - horizontal scalability for HUGE amounts of data.
It is good at accessing your data by
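
To make that concrete, here is a minimal CQL sketch of the partition-key-oriented
access pattern (keyspace, table and column names are made up for illustration):

    # the table is modelled around the query it has to serve
    cqlsh <<'CQL'
    CREATE KEYSPACE IF NOT EXISTS shop
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
    CREATE TABLE IF NOT EXISTS shop.orders_by_customer (
        customer_id uuid,
        order_id    timeuuid,
        total       decimal,
        PRIMARY KEY ((customer_id), order_id)
    ) WITH CLUSTERING ORDER BY (order_id DESC);
    -- efficient: the query is restricted to a single partition
    SELECT * FROM shop.orders_by_customer
     WHERE customer_id = 1b4e28ba-2fa1-11d2-883f-0016d3cca427
     LIMIT 10;
    -- what you give up versus SQL: no joins, no cheap ad-hoc filtering on non-key columns
    CQL
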
Thank you all for your hints on this.
I added another data folder on the commitlog disk to cover the immediate urgency.
The next step will be to reorganize and deduplicate the data into a second table.
Then drop the original one, clear the snapshot, and consolidate all the data
files back away from the commitlog disk, one after another?
Or maybe there is a different way to free some disk space? - Any suggestions?
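
In case it is useful to others, the disk-space levers I know of map to commands
roughly like this (keyspace/table names are placeholders):

    # snapshots (including the auto-snapshot taken on DROP/TRUNCATE) hold on to
    # disk space until they are cleared
    nodetool listsnapshots
    nodetool clearsnapshot -t <snapshot_name>
    # give back data this node no longer owns (e.g. after nodes were added)
    nodetool cleanup <keyspace>
    # rewrite sstables to drop deleted/expired data (available since 3.10)
    nodetool garbagecollect <keyspace> <table>
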
best regards
Jürgen Albersdorfer
As far as I have seen, you have not configured the outbound consistency level,
which defaults to LOCAL_QUORUM. Try ONE. Then there might still be a
configuration issue - concurrent compactors maybe, or resource contention on the
CPU with the test code.
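
If the test can be approximated from cqlsh, the consistency level can be switched
per session for a quick comparison, something like this (keyspace/table are
placeholders; cqlsh itself defaults to ONE):

    cqlsh -e "CONSISTENCY ONE;          SELECT * FROM <ks>.<table> LIMIT 10;"
    cqlsh -e "CONSISTENCY LOCAL_QUORUM; SELECT * FROM <ks>.<table> LIMIT 10;"
    # in the application, the equivalent is setting the consistency level on the
    # driver's session or statements rather than relying on the default
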
Sent from my iPhone
> On 02.03.2018 at 18:36
I'm not sure, but from what I've seen in the code, I would assume that
something is wrong with your config file (cassandra.yaml).
Maybe there is an earlier error or warning in your logs.
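
Two quick checks along those lines (paths assume a package install; adjust for a
tarball layout):

    # scan for earlier ERROR/WARN entries before the failure
    grep -nE "ERROR|WARN" /var/log/cassandra/system.log | head -n 20
    # catch plain YAML syntax mistakes in the config (requires PyYAML)
    python -c "import yaml; yaml.safe_load(open('/etc/cassandra/cassandra.yaml'))"
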
2018-02-23 12:11 GMT+01:00 Jonathan Baynes :
> Hi Community,
>
>
>
> Can anyone give me some pointers where to loo
or so nodes - I know there's at least one person I talked to in
> IRC with a 1700 host cluster, but that'd be beyond what I'd ever do
> personally.
>
>
>
>> On Tue, Feb 20, 2018 at 12:34 PM, Jürgen Albersdorfer
>> wrote:
>> Thanks Jeff,
e, each streaming from a small number of neighbors, without overlap.
>
> It takes a bit more tooling (or manual token calculation) outside of
> cassandra, but works well in practice for "large" clusters.
>
>
>
>
>> On Tue, Feb 20, 2018 at 4:42 AM, J
> I have not
> yet seen anyone adding a node every day like this ;) Have fun!
>
>
>
> On 20 February 2018 at 13:42, Jürgen Albersdorfer wrote:
>
>> Hi, I'm wondering if it is possible resp. would it make sense to limit
>> concurrent streaming when joining a new n
Hi, I'm wondering whether it is possible, or would make sense, to limit
concurrent streaming when joining a new node to the cluster.
I'm currently operating a 15-node C* cluster (v3.11.1) and joining another
node every day.
'nodetool netstats' shows that it always streams data from all other nodes.
Ho
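
As far as I know there is no switch to limit how many peers a joining node
streams from (with vnodes it pulls its ranges from most of the ring), but the
streaming bandwidth can be throttled on the sending nodes; a rough sketch:

    # temporary, in megabits per second, run on each existing node
    nodetool setstreamthroughput 100
    nodetool getstreamthroughput
    # the permanent equivalent in cassandra.yaml:
    #   stream_throughput_outbound_megabits_per_sec: 100
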
2018-02-09 12:07 GMT+01:00 Jürgen Albersdorfer :
> Hi Jon,
> Should I register with JIRA and open an issue, or will you do so?
> I'm currently trying to bootstrap another node - with 100 GB RAM this
> time - and I'm recording Java heap memory over time via JConsole, top
>
1.8.0_161 is not yet supported - try 1.8.0_151
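
A quick way to confirm what the node actually runs (the JAVA_HOME path below is
just an example and distribution-specific):

    java -version                     # e.g. 1.8.0_151 vs. 1.8.0_161
    # the cassandra start script honours JAVA_HOME if it is set, so pinning it
    # is the simplest way to force a specific JDK
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
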
> On 13.02.2018 at 16:44, Irtiza Ali wrote:
>
> 1.8.0_161
ve to share some heap dump
> for further investigation (sorry I'm not gonna help lot on this matter)
>
> On 7 February 2018 at 14:43, Jürgen Albersdorfer
> wrote:
>
>> Hi Nicolas,
>>
>> Do you know how many sstables this new node is supposed to receive?
>>
> concurrent_compactors has
> any effect ?
>
> Which memtable_allocation_type are you using ?
>
>
> On 7 February 2018 at 12:38, Jürgen Albersdorfer
> wrote:
>
>> Hi, I always face an issue when bootstrapping a Node having less than
>> 184GB RAM (156GB JVM HEAP) on our 10 Nod
Hi, I always face an issue when bootstrapping a node having less than 184 GB
of RAM (156 GB JVM heap) on our 10-node C* 3.11.1 cluster.
During bootstrap, when I watch cassandra.log, I observe growth in the JVM
heap old gen which no longer gets significantly freed.
I know that the JVM collects the old gen
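
For the record, the old-gen growth can also be watched without JConsole, e.g.
(the pgrep pattern assumes the standard CassandraDaemon main class):

    # O = old-gen occupancy %, FGC/FGCT = full GC count/time; sample every 10 s
    jstat -gcutil "$(pgrep -f CassandraDaemon)" 10000
    # heap histogram of the biggest consumers (note: :live triggers a full GC)
    jmap -histo:live "$(pgrep -f CassandraDaemon)" | head -n 30
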
- No gossip backlog;
> proceeding
> INFO [GossipTasks:1] 2018-02-03 11:33:20,114 Gossiper.java:1046 -
> InetAddress /10.10.10.222 is now DOWN   <<<<< I have no idea why this appeared
> in the logs
> INFO [main] 2018-02-03 11:33:20,566 NativeTransportService.java:70 - Netty
Cool, good to know. Do you know whether this is still true for 3.11.1?
> On 03.02.2018 at 08:19, Oleksandr Shulgin wrote:
>
> On 3 Feb 2018 02:42, "Kyrylo Lebediev" wrote:
> Thanks, Oleksandr,
> In my case I'll need to replace all nodes in the cluster (one-by-one), so
> streaming will introduce p
> you're taking the right approach, probably just need some more tuning. Might
> be on the Cassandra side as well (concurrent_reads/writes).
>
>
> On 1 Feb. 2018 19:06, "Jürgen Albersdorfer" wrote:
> Hi Kurt, thanks for your response.
> I indeed utilized Spark -
copy between tables). https://www.instaclustr.com/support/documentation/apache-spark/using-spark-to-sample-data-from-one-cassandra-cluster-and-write-to-another/
>
> On 30 January 2018 at 21:05, Jürgen Albersdorfer
> wrote:
>
>> Hi, We are using C* 3.11.1 with a 9 N
Hi, we are using C* 3.11.1 with a 9-node cluster built on CentOS servers, each
having 2x quad-core Xeon, 128 GB of RAM, and two separate 2 TB spinning disks,
one for the log and one for data, with Spark on top.
Due to a bad schema (partitions of about 4 to 8 GB) I need to copy a whole
table into another ha
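
For anyone hitting the same thing: the usual cure for 4-8 GB partitions is to
give the target table a finer partition key (e.g. add a time bucket) so each
partition stays small, and then copy the data over with Spark or another bulk
loader. A rough sketch with made-up names (the keyspace is assumed to exist):

    cqlsh <<'CQL'
    -- v2 of the table: the day bucket joins the partition key, so one sensor's
    -- data is spread over many small partitions instead of one huge one
    CREATE TABLE IF NOT EXISTS myks.events_v2 (
        sensor_id  uuid,
        day        date,
        event_time timestamp,
        payload    blob,
        PRIMARY KEY ((sensor_id, day), event_time)
    );
    CQL
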
You might want to have a look at this:
https://www.digitalocean.com/community/tutorials/how-to-run-a-multi-node-cluster-database-with-cassandra-on-ubuntu-14-04
It worked great for me, but I have to admit that while my cluster is running, it
is not yet in production.
From: @Nandan@ [mailto:nandanpriyada