One of our clusters had a strange thing happen tonight. It's a 3 node
cluster, running 2.1.10. The primary keyspace has RF 3, vnodes with 256
tokens.
This evening, over the course of about 6 hours, disk usage increased from
around 700GB to around 900GB on only one node. I was at a loss as to wh
Greetings,
Following Jack's and Matt's suggestions, I moved the doc to Google Docs and
added to it all the security gaps in Cassandra I was able to discover
(please see the second table, below the first).
Here is an updated link to my document:
https://docs.google.com/document/d/13-yu-1a0MMkBiJFPNkYoTd1H
Jack,
I updated my document with all the security gaps I was able to find and
posted it there:
https://docs.google.com/document/d/13-yu-1a0MMkBiJFPNkYoTd1Hzed9tgKltWi6hFLZbsk/edit?usp=sharing
Thanks,
Oleg
On Thu, Feb 11, 2016 at 4:09 PM, oleg yusim wrote:
> Jack,
>
> I asked my management, if
Jack,
I updated my document with all the security gaps I was able to discover
(see the second table, below the first one). I also moved the document to
Google Docs from Word doc, shared on Google Drive, following Matt's
suggestion.
Please see the updated link:
https://docs.google.com/document/d/1
Thanks Carlos...This certainly helps..
Sent from Yahoo Mail on Android
On Fri, Feb 12, 2016 at 2:02 AM, Carlos Alonso wrote:
Hi Hari.
I'd suggest having a customers table like this:
CREATE TABLE customers ( customerid UUID, name VARCHAR, email VARCHAR,
phonenr VARCHAR, PRIMARY KEY(nam
I have one seed node and one non-seed node managed by opscenter.
I previously ran Cassandra daemons on 3 other non-seed nodes.
How do I let opscenter discover these 3 other nodes ?
Thanks
On Fri, Feb 12, 2016 at 2:17 PM, Ted Yu wrote:
> When I retried node addition from opscenter UI, I got th
When I retried node addition from opscenter UI, I got this:
Installation stage failed: The following packages are already installed:
dse-libsqoop, dse-full, dse-libmahout, dse-liblog4j, dse-libspark,
dse-libsolr, dse-libtomcat, dse-demos, dse-libcassandra; The following
packages are already
I believe you missed this note :
1. Attention: Depending on your environment, you might need to replace @ in
your email address with %40 and escape any character in your password
that is used in your operating system's command line. Examples: \! and \|.
On Sat, Feb 13, 2016 at 3:15
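The percent-encoding that note describes can be checked mechanically; a minimal sketch (the address and password below are placeholders, not anyone's real credentials):

```python
from urllib.parse import quote

# Percent-encode yum-repo credentials: '@' must become '%40', and shell
# metacharacters in the password are safest fully encoded as well.
email = "someone@gmail.com"   # placeholder address
password = "p@ss!word"        # placeholder password

encoded_email = quote(email, safe="")
encoded_password = quote(password, safe="")

print(encoded_email)     # someone%40gmail.com
print(encoded_password)  # p%40ss%21word
```

The encoded pair can then be spliced into the `baseurl` line of /etc/yum.repos.d/datastax.repo.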
I posted this on the IRC but wasn't able to receive any help. I have two
nodes running 3.0.3. They're in different datacenters, connected by
openvpn. When I go to bootstrap the new node it handshakes fine, but always
gets this error while transferring data:
http://gobin.io/oMll
If I follow the lo
Hi,
I followed this guide:
https://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/install/installRHELdse.html
and populated /etc/yum.repos.d/datastax.repo with DataStax Academy account
info.
[Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'gmail.com:p
as...@rpm.datastax.com'"
Wide rows? How wide? How many rows per partition, typically and at the
extreme? How many clustering columns?
When you restart the node does it revert to completely normal response?
Which release of Cassandra?
Does every node eventually hit this problem?
After a restart, how long before the prob
If you have internode_compression: all, try disabling it.
Also I would move to STCS if you have a lot of tombstones.
If they get piled up in higher levels you have to wait until those higher
levels get compacted before you get them out.
For G1 your heap is too small. Bump that to 16GB (or at least 1
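Concretely, those two changes land in different places; a sketch, assuming a stock cassandra.yaml and a table currently on LeveledCompactionStrategy (the keyspace and table names are illustrative):

```yaml
# cassandra.yaml -- turn off internode compression
# (valid values: all, dc, none)
internode_compression: none
```

```sql
-- Switch the affected table to size-tiered compaction:
ALTER TABLE my_ks.my_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
```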
I had to decrease streaming throughput to 10 (from the default 200) in order to
avoid the effect of a rising number of SSTables and compaction tasks
while running repair. It's working very slowly but it's stable and doesn't
hurt the whole cluster. Will try to adjust configuration gradually to see
if
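For reference, that throttle can be changed on a live node without a restart; a sketch of the relevant commands (10 mirrors the value used above):

```shell
# Drop streaming throughput from the default 200 Mb/s to 10 Mb/s:
nodetool setstreamthroughput 10

# Confirm the new cap:
nodetool getstreamthroughput

# To make it permanent, set this in cassandra.yaml instead:
#   stream_throughput_outbound_megabits_per_sec: 10
```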
There was a recent performance inefficiency in nodetool status with virtual
nodes that will be fixed in upcoming releases (CASSANDRA-7238), so it
should be faster once that fix lands.
You can also query StorageServiceMBean.getLiveNodes() via JMX (jolokia or
some other jmx client). For a list of useful
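As a sketch of the jolokia route (the port and the exact response shape are assumptions here; the MBean path matches Cassandra's StorageService):

```python
import json

# jolokia exposes JMX attributes over plain HTTP GET, e.g.:
#   http://<host>:8778/jolokia/read/org.apache.cassandra.db:type=StorageService/LiveNodes
# Fetch that URL with any HTTP client, then extract the node list:

def parse_live_nodes(payload):
    """Pull the live-node addresses out of a jolokia read response."""
    body = json.loads(payload)
    return body["value"]

# Abridged example of what jolokia returns for that read:
sample = '{"status": 200, "value": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]}'
live = parse_live_nodes(sample)
print(live)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```

Alerting on "boxes being DN" then reduces to comparing this list against the expected node inventory.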
Is there a faster way to get the output of 'nodetool status' ?
I want us to more aggressively monitor for 'nodetool status' and boxes
being DN...
I was thinking something like jolokia and REST but I'm not sure if there
are variables exported by jolokia for nodetool status.
Thoughts?
--
We’re
If you positive this is not compaction related I would:
1. check disk IOPs and latency on the EBS volume. (dstat)
2. turn GC log on in cassandra-env.sh and use jstat to see what is
happening to your HEAP.
I have been asking about compactions initially because if you have one (1)
big ta
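The two checks suggested above could be run roughly like this (the jstat interval and the pgrep pattern are illustrative):

```shell
# 1. Disk IOPs and latency on the EBS volume, 10-second samples:
dstat --disk --disk-util --disk-tps 10

# 2. Heap behaviour: with GC logging enabled in cassandra-env.sh,
#    watch generation occupancy and GC time live, once per second:
jstat -gcutil $(pgrep -f CassandraDaemon) 1000
```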
> Does the load decrease and the node answers requests “normally” when you do
> disable auto-compaction? You actually see pending compactions on nodes having
> high load correct?
Nope.
> All seems legit here. Using G1 GC?
Yes
Problems also occurred on nodes without pending compactions.
> On
> On Feb 12, 2016, at 9:24 AM, Skvazh Roman wrote:
>
> I have disabled autocompaction and stop it on highload node.
Does the load decrease and the node answers requests “normally” when you do
disable auto-compaction? You actually see pending compactions on nodes having
high load correct?
> H
If you are interested in a solution that maintains scripts, there are at
least a few projects available,
https://github.com/comeara/pillar - Runs on the JVM and written in Scala.
Scripts are CQL files.
https://github.com/Contrast-Security-OSS/cassandra-migration - Runs on JVM
and I believe a port
I have disabled autocompaction and stopped it on the high-load node.
It freezes all nodes sequentially, 2-6 simultaneously.
Heap is 8 GB. gc_grace is 86400.
All SSTables are about 200-300 MB.
$ nodetool compactionstats
pending tasks: 14
$ dstat -lvnr 10
---load-avg--- ---procs--- --memory-usage- ---p
At the time when the load is high and you have to restart, do you see any
pending compactions when using `nodetool compactionstats`?
Possible to see a `nodetool compactionstats` taken *when* the load is too high?
Have you checked the size of your SSTables for that big table? Any large ones
in
After disabling binary, gossip, and thrift, the node blocks on 16 read stages and
[iadmin@ip-10-0-25-46 ~]$ nodetool tpstats
Pool Name                        Active   Pending   Completed   Blocked   All time blocked
MutationStage                         0         0    19587002         0                  0
R
There is 1-4 compactions at that moment.
We have many tombstones which do not get removed.
DroppableTombstoneRatio is 5-6 (greater than 1).
> On 12 Feb 2016, at 15:53, Julien Anguenot wrote:
>
> Hey,
>
> What about compactions count when that is happening?
>
> J.
>
>
>> On Feb 12, 2016, at
"we cannot use it in set (manager.dsl().update().fromBaseTable().)" --> this is
normal and intended: updating a column that belongs to the primary key is
forbidden.
On Fri, Feb 12, 2016 at 1:50 PM, Atul Saroha
wrote:
> Sorry, it was my understanding issue with solution 2. Thanks for the solution
>
>
>
Hey,
What about compactions count when that is happening?
J.
> On Feb 12, 2016, at 3:06 AM, Skvazh Roman wrote:
>
> Hello!
> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5
> TB 4000 PIOPS EBS drive.
> Sometimes one or two nodes user cpu spikes to 100%, load
Sorry, it was my understanding issue with solution 2. Thanks for the solution
-
Atul Saroha
*Sr. Software Engineer*
*M*: +91 8447784271 *T*: +91 124-415-6069 *EXT*: 12369
Plot # 362, AS
Thanks for the reply. We would go with solution 1.
One more thing, which might be a bug. We are using version 4.0.1, and the query
of solution 2 is not possible. c3 is a clustering key, and no option is visible
for this clustering key:
1. we cannot use it in set (manager.dsl().update().fromBaseTable().)
2
"How could I achieve Delta Update through this ORM where I want to
insert one row for the id, c3, c4 columns only"
2 ways:
1. Create an empty PrimeUser entity with only id, c3 and c4 values. Use
manager
.crud()
.insert(entity)
.withInsertStrategy(InsertStrateg
Hello!
We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5
TB 4000 PIOPS EBS drive.
Sometimes user CPU on one or two nodes spikes to 100%, load average rises to
20-30, and read requests drop off.
Only a restart of the Cassandra service helps.
Please advise.
One big table with wide
Hi Hari.
I'd suggest having a customers table like this:
CREATE TABLE customers (
customerid UUID,
name VARCHAR,
email VARCHAR,
phonenr VARCHAR,
PRIMARY KEY(name, email, phonenr)
).
This way your inserts could be INSERT INTO customers (customerid, ...)
VALUES (...) IF NOT EXISTS;
After
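Spelled out against the schema above, the conditional insert would look like this (the values are placeholders; uuid() generates the id server-side):

```sql
INSERT INTO customers (customerid, name, email, phonenr)
VALUES (uuid(), 'Hari', 'hari@example.com', '+1-555-0100')
IF NOT EXISTS;
```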
Hello, I have a scenario where I need to create a customer master table in
Cassandra which has attributes like customerid, name, email, phonenr, etc.
What is the best way to model such a table in Cassandra, keeping in mind that I
will be using customer id to populate customer information from other
Thanks Doan,
We have been evaluating Achilles and are now close to finalizing on it.
We are looking at one use case, as I mentioned above for static columns:
> CREATE TABLE IF NOT EXISTS ks.PrimeUser(
> id int,
> c1 text STATIC,
> c2 boolean STATIC,
> c3 text,
> c4 text,
> PRIMARY KEY (id, c