Are you sure you’re blocked on internode and not commitlog? Batch is typically
not what people expect (group commitlog in 4.0 is probably closer to what you
think batch does).
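For reference, these are the cassandra.yaml knobs in question (values shown are illustrative, not recommendations):

    # periodic (3.x default): acks don't wait for the fsync
    commitlog_sync: periodic
    commitlog_sync_period_in_ms: 10000

    # batch: every write waits for a commitlog fsync before acking
    # commitlog_sync: batch
    # commitlog_sync_batch_window_in_ms: 2

    # group (new in 4.0): writes still wait for the fsync, but syncs
    # are coalesced across writers instead of issued per batch window
    # commitlog_sync: group
    # commitlog_sync_group_window_in_ms: 15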
--
Jeff Jirsa
> On Nov 27, 2018, at 10:55 PM, Yuji Ito wrote:
>
> Hi,
>
> Thank you for the reply.
> I've measured
I think what you're looking for might be solved by CASSANDRA-8303. However, I
am not sure if anybody is working on it. Generally, you want to create separate
clusters for users to physically isolate them. What you propose has been
discussed in the past, and it is something that is currently unsupported.
Hi Jeff,
I've not looked at the new inter-node latency in 4.0 yet.
I think it isn't blocked by commitlog.
In 3.11.3, I've probed each Paxos phase and commitlog sync.
(In this investigation, I didn't use the cassandra-stress tool. The workload
consists of LWT read requests.)
The table below shows the average latency
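For context, a minimal sketch of the kind of LWT traffic being measured (keyspace, table, and values are hypothetical):

    -- conditional write: drives the Paxos rounds probed above
    -- (prepare/promise, read, propose/accept, commit)
    INSERT INTO my_ks.accounts (id, balance) VALUES (1, 100) IF NOT EXISTS;

    -- linearizable read: also goes through Paxos
    CONSISTENCY SERIAL;    -- cqlsh command
    SELECT balance FROM my_ks.accounts WHERE id = 1;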
Hi,
CASSANDRA-8303 talks about more granular control at the query level. What
we are looking at is throttling on the basis of the number of queries
received for different keyspaces. This is what request_scheduler and
request_scheduler_options provide for clients connecting via thrift.
Regards
On
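For reference, the thrift-era knobs being discussed look like this in cassandra.yaml (keyspace names and values are hypothetical):

    request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
    request_scheduler_id: keyspace
    request_scheduler_options:
        throttle_limit: 80      # max in-flight requests per client
        default_weight: 5       # requests taken per round-robin turn
        weights:
            tenant_a_ks: 1
            tenant_b_ks: 5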
Hello,
I have a 2-DC Cassandra 3.0.14 setup. I need to add 2 new nodes to each DC.
I started one node in dc1 and it's already joining; 3 TB of 50 TB finished in
2 weeks. One year of TTL'd time series data with TWCS.
I know it's not best practice...
I want to start one node in dc2 and cassandra refused to
(I am sending the previous mail again because it seems that it has not been
sent properly.)
Hi experts,
I am running 2 datacenters, each containing five nodes (10 nodes total, all
3.11.3).
My data is replicated once in each data center. (REPLICATION = { 'class' :
'org.apache.cassandra.locator.NetworkTopologyStrategy', ...
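For reference, a keyspace replicated once per data center (as described above) is defined like this; keyspace and DC names are hypothetical:

    CREATE KEYSPACE my_ks WITH REPLICATION = {
        'class' : 'NetworkTopologyStrategy',
        'datacenter1' : 1,
        'datacenter2' : 1
    };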
You can use auto_bootstrap set to false to add a new node to the ring, it
will calculate the token range for the new node, but will not start
streaming the data.
This way you can add several nodes to the ring quickly. After that,
you can run nodetool rebuild -- <source-dc> to start streaming the data.
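A minimal sketch of that procedure (DC name hypothetical; note the caveat in the reply below):

    # in cassandra.yaml on each new node, before its first start:
    #     auto_bootstrap: false
    # then, once the node has joined the ring:
    nodetool rebuild -- datacenter1    # stream data from the named source DC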
This violates any consistency guarantees you have and isn’t the right approach
unless you know what you’re giving up (correctness, typically)
--
Jeff Jirsa
> On Nov 28, 2018, at 2:40 AM, Vitali Dyachuk wrote:
>
> You can use auto_bootstrap set to false to add a new node to the ring, it
> wi
Hi All,
I'm exploring Cassandra for our project and would like to know the best
practices for handling transactions in real time. Please also suggest any
drivers or tools that are available for this.
I've read about the Apache Kundera transaction layer over Cassandra; are
there any bottlenecks with it?
Pl
Agree with Jeff here, using auto_bootstrap:false is probably not what you
want.
Have you increased your streaming throughput?
Upgrading to 3.11 might reduce the time by quite a bit:
https://issues.apache.org/jira/browse/CASSANDRA-9766
You'd be doing committers a huge favor if you grabbed some hi
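For reference, a sketch of checking and raising the streaming cap (the value is illustrative):

    nodetool getstreamthroughput        # current cap, in Mb/s
    nodetool setstreamthroughput 400    # raise the cap; 0 = unthrottled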
Hi All,
I need to use C* as a kind of flow-through data store - maybe this is
different from the queue antipattern? Lots of data comes in (10 MB/sec/node),
remains for e.g. 1 hour, and should then be evicted. It is not critical if
data occasionally disappears/gets lost.
Thankful for any advice
I think you answered your own question, sort of.
When you expand a cluster, it copies the appropriate rows to the new
node(s) but doesn't automatically remove them from the old nodes. When you
ran cleanup on datacenter1, it cleared out those old extra copies. I would
suggest running a repair first
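A minimal sketch of that sequence (keyspace name hypothetical):

    nodetool repair -full my_ks    # first, make sure replicas are in sync
    nodetool cleanup my_ks         # then drop data each node no longer owns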
Probably fine as long as there’s some concept of time in the partition key to
keep them from growing unbounded.
Use TWCS, TTLs and something like 5-10 minute buckets. Don’t use RF=1, but you
can write at CL ONE. TWCS will largely just drop whole sstables as they expire
(especially with 3.11 an
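A hedged sketch of that layout (table name, bucket size, and compaction window are illustrative):

    CREATE TABLE my_ks.events (
        bucket  timestamp,    -- e.g. the 10-minute bucket the event falls in
        ts      timeuuid,
        payload blob,
        PRIMARY KEY (bucket, ts)
    ) WITH default_time_to_live = 3600    -- expire rows after 1 hour
      AND compaction = {
        'class' : 'TimeWindowCompactionStrategy',
        'compaction_window_unit' : 'MINUTES',
        'compaction_window_size' : 10
      };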
Thanks for the excellent advice, this was extremely helpful! I did not know
about TWCS... it's curing a lot of headaches.
Adam
On Wed, Nov 28, 2018 at 20:47, Jeff Jirsa wrote:
> Probably fine as long as there’s some concept of time in the partition key
> to keep them from growing unbounded.
>
> U
Thank you for your response.
Following your advice, I will run repair on datacenter2. Do I have to run
repair on every node in datacenter2?
There are no snapshots when I check with nodetool listsnapshots.
Thank you.
> On 29 Nov 2018, at 4:31 AM, Elliott Sims wrote:
>
> I think you answered your ow