In fact, I truncated the hints table to stabilize the cluster. Through the heap
dumps I was able to identify the table against which there were numerous queries.
Then I focused on the system_traces.sessions table around the time the OOM occurred. It
turned out to be a full table scan on a large table, which caused
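For anyone retracing this, a minimal sketch of the commands involved (the workflow is my assumption, but the commands and the system tables are standard):

    # Drop the stored hints on a node (the stabilization step mentioned above)
    nodetool truncatehints

    # Look at traced sessions around the time of the OOM
    cqlsh -e "SELECT session_id, started_at, duration, request FROM system_traces.sessions LIMIT 20;"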
On 2017-03-06 07:04 (-0800), "Thakrar, Jayesh"
wrote:
> Thanks Hannu - also considered that option.
> However, that's a trial and error and will have to play with the
> collision/false-positive fraction.
> And each iteration will most likely result in a compaction storm - so I was
> hoping f
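If the fraction being tuned here is the table's bloom_filter_fp_chance (my assumption; keyspace and table names below are made up), each iteration is a schema change followed by an SSTable rewrite, which is exactly where the compaction load comes from:

    -- Hypothetical table; adjust the bloom filter false-positive target per iteration
    ALTER TABLE my_ks.my_table WITH bloom_filter_fp_chance = 0.01;

    # Rewrite SSTables so the new bloom filters are built (heavy I/O)
    nodetool upgradesstables -a my_ks my_table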
On 2017-03-03 09:18 (-0800), Shravan Ch wrote:
>
> nodetool compactionstats -H
> pending tasks: 3
> compaction type   keyspace   table   completed   total   unit   progress
> Compaction         system     hints   28.
On 2017-03-04 07:23 (-0800), "Thakrar, Jayesh"
wrote:
> LCS does not rule out frequent updates - it just means compaction will run more
> often, which can increase compaction activity
> (which again can be throttled as needed).
> But STCS will guarantee OOM when you h
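For context, switching between the two strategies mentioned above is a single schema change, and compaction can be throttled per node with nodetool (keyspace/table names are hypothetical):

    -- Move a hypothetical table to LCS
    ALTER TABLE my_ks.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};

    # Throttle compaction throughput on this node (MB/s; 0 disables throttling)
    nodetool setcompactionthroughput 16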
--
sent from Dan Rathbone's tech/work email account
http://rathboneventures.com - my company
http://danrathbone.com -- personal site
Hi,
Before: 1 cluster with 2 DCs, 3 nodes in each DC.
Now: 1 cluster with 1 DC, 6 nodes in this DC.
Is that right?
If yes, depending on the RF - and assuming NetworkTopologyStrategy - I would do:
- RF = 2 => 2 C* racks, one rack in each AZ
- RF = 3 => 3 C* racks, one rack in each AZ
In other words, I
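As a concrete sketch of the RF part of this advice (keyspace and DC names are made up; the rack assignment comes from the snitch on each node, not from CQL):

    -- 3 replicas in the single remaining DC; NetworkTopologyStrategy spreads
    -- them across distinct racks (i.e. AZs) where possible
    CREATE KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};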
Hi Richard,
It depends on the snitch and the replication strategy in use.
Here's a link to a blog post about how we deployed C* across 3 AZs:
http://highscalability.com/blog/2016/8/1/how-to-setup-a-highly-available-multi-az-cassandra-cluster-o.html
Best,
Tommaso
On Mar 7, 2017 18:05, "Ney, Richard"
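To make the snitch point concrete, a sketch of the node-side settings (values are examples, not taken from the thread):

    # cassandra.yaml - Ec2Snitch maps the AWS region to the DC and the AZ to the rack
    endpoint_snitch: Ec2Snitch

    # or, with GossipingPropertyFileSnitch, set cassandra-rackdc.properties on each node:
    # dc=dc1
    # rack=us-east-1a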
On Sun, Mar 5, 2017 at 11:53 PM, anuja jain wrote:
> Is there a difference between creating a column of type
> frozen<list<double>> and frozen<list_double>, where list_double is a UDT of
> type frozen<list<double>>?
>
Yes, there is a difference in serialization format: the first will be
serialized directly as a list, the second will be
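For readers following along, a sketch of the two column shapes being compared (keyspace, type, and field names are made up for illustration):

    -- A UDT wrapping the list in a single field
    CREATE TYPE my_ks.list_double (items frozen<list<double>>);

    -- Column typed directly as a frozen list
    CREATE TABLE my_ks.t_plain (id int PRIMARY KEY, vals frozen<list<double>>);

    -- Column typed as the frozen UDT
    CREATE TABLE my_ks.t_udt (id int PRIMARY KEY, vals frozen<list_double>);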
I'd recommend three availability zones. In this case, if you lose one AZ
you still have a quorum (assuming a replication factor of 3).
Andrey
On Tue, Mar 7, 2017 at 9:05 AM, Ney, Richard wrote:
> We’ve collapsed our 2 DC – 3 node Cassandra clusters into a single 6 node
> Cassandra cluster split be
We’ve collapsed our 2 DC – 3 node Cassandra clusters into a single 6 node
Cassandra cluster split between two AWS availability zones.
Are there any behaviors we need to take into account to ensure the Cassandra
cluster stability with this configuration?
RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH
Hi,
Why did the host ID change?
Probably this node's data folder (at least the system keyspace) was erased. Or the
nodes changed their IPs - do you use dynamic IPs?
Best regards, Vladimir Yudovin,
Winguzone - Cloud Cassandra Hosting
On Mon, 06 Mar 2017 22:44:50 -0500 Joe Olson
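For anyone checking this on their own cluster, the host ID is visible from nodetool (it lives in the local system tables, so wiping the data directory produces a new one on restart):

    # Host ID of the local node
    nodetool info | grep "^ID"

    # Host ID column for every node in the ring
    nodetool status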
Hi Satoshi,
>> One correction on my previous email: at 2.1.8 of the driver, Netty 4.0 was
>> in use, so please disregard my comments about the netty dependency changing
>> from 3.9 to 4.0; there is a difference in version, but it's only at the
>> patch level (4.0.27 to 4.0.37)
>>
>
Does your comment m