cqlsh version 5.0.1. nodetool tpstats looks good, the log looks good. And I
used the specified port 9042. And it immediately returns a failure (in less
than 3 seconds). By the way, where should I use '--connect-timeout'? cqlsh
doesn't seem to have such a parameter.
2016-03-18 17:29 GMT+08:00 Alain RODRIGUEZ :
> Is the
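A note on the timeout question: as far as I know, --connect-timeout was only
added to cqlsh in later 2.1/2.2 point releases, so the cqlsh 5.0.1 you have
likely doesn't accept it. As an alternative way to probe connectivity with an
explicit timeout, here is a minimal sketch using the DataStax Python driver;
the host address is a placeholder, adjust it to your node:

    # pip install cassandra-driver
    from cassandra.cluster import Cluster

    # '10.0.0.1' is a placeholder for your node's address
    cluster = Cluster(['10.0.0.1'], port=9042, connect_timeout=10)  # seconds
    try:
        session = cluster.connect()
        row = session.execute("SELECT release_version FROM system.local").one()
        print(row.release_version)
    finally:
        cluster.shutdown()

If this also fails in well under the timeout, the rejection is probably coming
from the network layer (closed port, wrong address, firewall) rather than from
a slow node.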
Hey Clint,
we have two separate rings which don't talk to each other, but both have
the same DC name "DCX".
@Raja,
We had already gone down the path you suggested.
thanks all
anishek
On Fri, Mar 18, 2016 at 8:01 AM, Reddy Raja wrote:
> Yes. Here are the steps.
> You will have to change t
Hi,
I want to understand how Expiring columns work in Cassandra.
Query: The documentation says that once the TTL of a column expires, tombstones
are created/marked when the sstable gets compacted. Is there a possibility that
a query (range scan/row query) returns expired column data just because the
s
It seems you have problems with your PATH, so system commands are not being
found. You need to recover your original PATH first; you can find out how to
do this on Stack Overflow.
Another problem that seems to be happening is that path names with spaces
are not being interpreted correctly, so if fixin
Anuj, there are a couple of aspects that are important -
1. Cassandra will not return data with expired TTLs even if compaction has not run
2. How is 1 possible? The read path will eliminate the expired TTLs from the result
3. All tombstones (explicit delete as well as expired TTLs) hang around for
Hi,
I'm using Cassandra 2.2.4 and we are getting ~4k messages per hour, per
node.
It doesn't look good to me. Is it normal? If not, any idea on what might be
wrong?
Thanks
Herbert
I think the answer is no. There are explicit checks in the read code path to
ignore anything that's past the TTL (based on the local time of the node in
question).
From: Anuj Wadehra [mailto:anujw_2...@yahoo.co.in]
Sent: Monday, March 21, 2016 5:19 AM
To: User
Subject: Expiring Columns
Hi,
I want t
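To make the read-path behaviour concrete, here is a minimal sketch with the
DataStax Python driver (the keyspace and table names are made up for
illustration): an expired cell vanishes from query results long before any
flush or compaction touches the data.

    import time
    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('demo_ks')  # hypothetical keyspace

    # hypothetical table: CREATE TABLE events (id int PRIMARY KEY, payload text)
    session.execute(
        "INSERT INTO events (id, payload) VALUES (1, 'hello') USING TTL 5")
    print(session.execute("SELECT payload FROM events WHERE id = 1").one())
    # -> Row(payload='hello')

    time.sleep(6)  # wait past the 5-second TTL; no compaction has run
    print(session.execute("SELECT payload FROM events WHERE id = 1").one())
    # -> None: the read path filters out expired cells

    cluster.shutdown()

Compaction and gc_grace only determine when the expired data is physically
purged from disk, not what queries return.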
This will only make failure detector calculations less accurate, so if
you're not seeing nodes flapping UP/DOWN or being incorrectly marked as
down, this shouldn't be a big problem. You'll probably want to tune
cassandra.fd_max_interval_ms for your cluster environment, but I'm not
really familiar wi
Hi,
We added a bunch of new nodes to a cluster (2.1.13) and everything went
fine, except for the number of pending compactions, which is staying quite
high on a subset of the new nodes. Over the past 3 days, the pending
compactions have never dropped below ~130 on those nodes, with peaks of
~200. On
Hi, thanks for the detailed information, it is useful.
SSTables in each level: [43/4, 92/10, 125/100, 0, 0, 0, 0, 0, 0]
Looks like compaction is indeed not doing so hot.
What hardware do you use? Can you see it running at the limits (CPU / disk
IO)? Are there any errors in the system logs, are disks
On Mon, Mar 21, 2016 at 2:15 PM, Alain RODRIGUEZ wrote:
>
> What hardware do you use? Can you see it running at the limits (CPU /
> disks IO)? Is there any error on system logs, are disks doing fine?
>
>
Nodes are c3.2xlarge instances on AWS. The nodes are relatively idle, and,
as said in the ori
On Mon, Mar 21, 2016 at 12:50 PM, Gianluca Borello
wrote:
>
> - It's also interesting to notice how the compaction in the previous
> example is trying to compact ~37 GB, which is essentially the whole size of
> the column family message_data1 as reported by cfstats:
>
Also related to this point,
> We added a bunch of new nodes to a cluster (2.1.13) and everything went fine,
> except for the number of pending compactions that is staying quite high on a
> subset of the new nodes. Over the past 3 days, the pending compactions have
> never been less than ~130 on such nodes, with peaks of ~2
Thank you for your reply, I definitely appreciate the tip on the compressed
size.
I understand your point; in fact, whenever we bootstrap a new node we see a
huge number of pending compactions (on the order of thousands), and they
usually decrease steadily until they reach 0 in just a few hours. With
Hi all,
We recently encountered a scenario with a Cassandra 2.0 deployment. Cassandra
detected a corrupted sstable, and when we attempted to scrub the sstable (with
all the associated sstables), the corrupted sstable was not included in the
sstable list. This continued until we restarted Cassandra
Hi guys,
So, quick background: we are using Outworkers (previously Websudos) Phantom
v1.22.0, which appears to be using DataStax driver 3.0.0. We are running
Scala 2.10 inside Samza on YARN (CDH 5.4.4) with Oracle JDK 8.
This is all pointing at a 3-node dev cluster of DataStax Community v2.1.13
Are you running repairs?
You may try:
- increasing concurrent_compactors to 8 (the max in 2.1.x)
- increasing compaction_throughput to more than 16 MB/s (48 may be a good start)
What kind of data are you storing in these tables? Time series?
2016-03-21 23:37 GMT+01:00 Gianluca Borello :
> Thank yo
Thank you for your reply. To address your points:
- We are not running repairs
- Yes, we are storing timeseries-like binary blobs where data is heavily
TTLed (essentially the entire column family is incrementally refreshed with
completely new data every few days)
- I have tried increasing c
Hello,
Using Cassandra 2.0.17, on one of the 7 nodes I see that the "Load" column
from nodetool status shows around 279.34 GB, whereas doing df -h on the two
mounted disks the total is about 400 GB. Any reason why this difference could
show up, and how do I go about finding the cause of it?
T