Hi,
Change to durable_writes = false
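For reference, durable_writes is a per-keyspace option; a minimal CQL sketch, assuming a keyspace named my_keyspace (a placeholder):

```cql
-- Disable the commit log for writes to this keyspace.
-- "my_keyspace" is a placeholder; substitute your own keyspace name.
ALTER KEYSPACE my_keyspace WITH durable_writes = false;
```

Note that with durable_writes = false, writes skip the commit log, so data not yet flushed to SSTables can be lost if a node crashes.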
And please post the results.
Thanks.
On 05/22/2017 10:08 PM, Jonathan Haddad wrote:
> How many CPUs are you using for interrupts?
>
> http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux
>
> Have you tried making a flame graph
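The linked article describes pinning IRQs to specific CPUs by writing a hex bitmask to /proc/irq/&lt;N&gt;/smp_affinity, where bit i corresponds to CPU i. A minimal sketch of how that mask is computed (the function name is mine, not from the article):

```python
# Sketch: compute the hex bitmask written to /proc/irq/<N>/smp_affinity
# to pin an interrupt to a given set of CPUs. Bit i of the mask
# corresponds to CPU i.

def smp_affinity_mask(cpus):
    """Return the smp_affinity hex string for a collection of CPU indices."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")
```

For example, pinning an IRQ to CPUs 2 and 3 yields the mask "c" (binary 1100), which you would write to that IRQ's smp_affinity file.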
WARN [SharedPool-Worker-1] 2017-05-22 20:28:46,204 BatchStatement.java
(line 253) Batch of prepared statements for [site24x7.wm_rawstats_tb,
site24x7.wm_rawstats] is of size 6122, exceeding specified threshold of
5120 by 1002
We are frequently getting this message in the logs, so I wanted to restrict the batch size.
Hi,
If you were to know the batch size on the client side to make sure it does not
get above the 5 KB limit, so that you can "limit the number of statements in
a batch", then I suspect you do not need batches at all, right? See
https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/
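As a sketch of that client-side idea (the helper name and the size estimate are mine, not from the blog post): group statements so each batch's estimated size stays under the 5 KB warn threshold. The real driver-side serialized size differs from the CQL string length, so this is only an approximation.

```python
# Hypothetical sketch: split a stream of CQL statements into batches whose
# estimated total size stays under Cassandra's default warn threshold
# (batch_size_warn_threshold_in_kb = 5). Sizing by CQL string length is a
# rough stand-in for the real serialized batch size.

BATCH_WARN_BYTES = 5 * 1024  # 5 KB warning threshold from cassandra.yaml

def chunk_statements(statements, max_bytes=BATCH_WARN_BYTES):
    """Yield lists of statements whose estimated size stays under max_bytes."""
    batch, size = [], 0
    for stmt in statements:
        stmt_size = len(stmt.encode("utf-8"))
        if batch and size + stmt_size > max_bytes:
            yield batch
            batch, size = [], 0
        batch.append(stmt)
        size += stmt_size
    if batch:
        yield batch
```

But as the post argues, if you are splitting like this purely for throughput, individual async writes are usually the better choice anyway.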
As for
We are running on I3s since they came out. NVMe SSDs are really fast and I
managed to push them to 75k IOPs.
As Bhuvan mentioned the i3 storage is ephemeral. If you can work around it
and plan for failure recovery you are good to go.
I ran Cassandra on m4s before and had no problems with EBS volumes.
Oh, so all the data is lost if the instance is shut down or restarted (for that
instance)? If we take a naïve approach of backing up the directory and
restoring it whenever we have to bring the instance down and back up, will that
work as a strategy? Data is only kept around for 2 days and is TTL'd.
> Oh, so all the data is lost if the instance is shutdown or restarted (for
that instance)?
When you restart the OS, you're technically not shutting down the
instance. As long as the instance isn't stopped / terminated, your data is
fine. I ran my databases on ephemeral storage for years without issue.
I'm experimenting with bcache to see about using the ephemeral storage as a
cache backed with EBS. Not sure if that makes sense in your use case though.
On Tue, May 23, 2017 at 9:43 AM Jonathan Haddad wrote:
> > Oh, so all the data is lost if the instance is shutdown or restarted
> (for that instance)?
Thanks! So, I assume that as long as we make sure we never explicitly “shutdown”
the instance, we are good. Are you also saying we won’t be able to snapshot a
directory with ephemeral storage and that is why EBS is better? We’re just
finding that to get a reasonable amount of IOPS (gp2) out of EBS gets expensive.
Exactly. You can easily build a solid backup/restore with snapshots and
automate it in case all hell breaks loose.
EBS volumes are expensive right now, and with i3 you get many more IOPS and
a reasonable disk size for 1/2 to 1/3 of the price.
Best,
Matija
On Tue, May 23, 2017 at 1:29 PM, Gopal, Dhruv wrote:
Another option that I like the idea of but unfortunately never see used is
ZFS, with EBS for storage and the ephemeral SSD drive as L2ARC.
You'd get the performance of ephemeral storage with all the features of
EBS. Something to consider.
On Tue, May 23, 2017 at 10:30 AM Gopal, Dhruva wrote:
Note that EBS durability isn't perfect; you cannot rely on it entirely:
https://aws.amazon.com/ebs/details/
"Amazon EBS volumes are designed for an annual failure rate (AFR) of
between 0.1% - 0.2%, where failure refers to a complete or partial loss of
the volume, depending on the size and performance of the volume."
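As a back-of-the-envelope illustration of what that AFR means at cluster scale (assuming independent volume failures, which is my simplifying assumption, not an AWS claim):

```python
# Sketch: probability that at least one of n EBS volumes fails in a year,
# given a per-volume annual failure rate (AFR), assuming independent failures.

def p_any_volume_failure(afr, volumes):
    """P(at least one failure) = 1 - P(no failures) = 1 - (1 - afr)^n."""
    return 1 - (1 - afr) ** volumes

# At the 0.2% upper bound with 20 volumes, this works out to roughly a
# 3.9% chance of losing at least one volume per year.
```

That is why, even on EBS, you still need replication and a backup/restore story rather than treating volumes as indestructible.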
Hi,
I never used version 2.0.x, but I think port 7000 isn't enough.
Try enabling:
7000 inter-node
7001 SSL inter-node
9042 CQL
9160 Thrift (it is enabled in that version)
And:
* In cassandra.yaml, add the property “broadcast_address” set to the local IPv4 address.
* In cassandra.yaml, change “listen_address” to the private IP.
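A minimal cassandra.yaml sketch of those two settings, with placeholder addresses (substitute your node's real IPs):

```yaml
# cassandra.yaml sketch -- addresses below are placeholders, not real values
listen_address: 10.0.0.12       # private IPv4 this node binds for inter-node traffic
broadcast_address: 10.0.0.12    # address advertised to the other nodes
```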
Hello,
I will preface this and say that all of the nodes have been running for
about the same amount of time and were not restarted before running
nodetool tpstats.
This is more for my understanding than anything else, but I have a 20 node
Cassandra cluster running Cassandra 3.0.3. I have 0 read a
This is really atypical.
What about nodetool compactionstats?
Any crontab jobs on each node, like nodetool repair, etc.?
Also, security-wise, do these 2 nodes have the same ports open?
Same configuration, same JVM params?
Does nodetool ring look normal?
Cheers.
On 23-05-2017 20:11, Andrew Jorgensen wrote:
> Hello,
>
I think this is overstating it. If the instance ever stops, you'll lose the
data. That means if the server crashes, for example, or if Amazon decides
your instance requires maintenance.
On Tue, May 23, 2017 at 10:30 AM Gopal, Dhruva
wrote:
> Thanks! So, I assume that as long we make sure we never
Yes, we can only reboot.
But with RF=2 or higher, it's only like a fresh node restart.
EBS is a network-attached disk; spinning disk or SSD is almost the same.
It's better to take the "risk" and use i-type instances.
Cheers.
On 23-05-2017 21:39, sfesc...@gmail.com wrote:
> I think this is overstating it.
By that do you mean it’s like bootstrapping a node if it fails or is shut down,
and with an RF of 2 or higher, data will get replicated when it’s brought
back up?
From: Cogumelos Maravilha
Date: Tuesday, May 23, 2017 at 1:52 PM
To: "user@cassandra.apache.org"
Subject: Re: EC2 instance recommendation
Are the 3 sending clients maxed out?
Are you seeing JVM GC pauses?
On 2017-05-22 14:02 (-0700), Eric Pederson wrote:
> Hi all:
>
> I'm new to Cassandra and I'm doing some performance testing. One of the things
> that I'm testing is ingestion throughput. My server setup is:
>
>- 3 node cluster
When you are running a stress test, a 1-1 match of clients to servers won't
saturate a cluster. I would go closer to 3-5 clients per server, so 10-15
clients against your 3 node cluster.
Patrick
On Tue, May 23, 2017 at 4:18 PM, Jeff Jirsa wrote:
>
> Are the 3 sending clients maxed out?
> Are you seeing JVM GC pauses?
Thanks, Akhil, for the response.
I have set memtable_allocation_type to off-heap. But Cassandra 2.1.x does
not allow setting memtable_heap_space_in_mb: 0.
It mentions we need to assign some positive value to the heap space. In that
case, will the memtable still use JVM heap space?
Can anyone suggest below
We run on both ephemeral and persistent on AWS. Ephemeral storage is the
local storage attached to the server host. We don't have extreme write &
read, so EBS is fine.
If you ever shut down the EC2 instance, your data is guaranteed to be gone
because AWS moves your VM to another host after every shutdown.
Hi Varun,
Look at the recommendation for offheap_objects and memtable flush writers
and readers in the following guide
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html. In the
guide and cassandra.yaml the default is suggested as a good starting point.
If you want to use the defaults
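A minimal cassandra.yaml sketch of the setting discussed above, for Cassandra 2.1 (per that tuning guide; this is a starting point, not a tuned configuration):

```yaml
# cassandra.yaml (Cassandra 2.1) -- sketch only
memtable_allocation_type: offheap_objects
# Leave memtable_heap_space_in_mb and memtable_offheap_space_in_mb unset
# to take the defaults (each 1/4 of the JVM heap).
```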
Exactly.
On 23-05-2017 23:55, Gopal, Dhruva wrote:
>
> By that do you mean it’s like bootstrapping a node if it fails or is
> shutdown and with a RF that is 2 or higher, data will get replicated
> when it’s brought up?
>
>
>
> From: Cogumelos Maravilha
> Date: Tuesday, May 23, 2017 at 1:52