Are you running Chef/Puppet or similar?
From: Varun Barala
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, July 26, 2016 at 10:15 PM
To: "user@cassandra.apache.org"
Subject: regarding drain process
Hi all,
Recently I'm facing a problem with Cassandra nodes. Nodes go down very
frequently …
I have a table that I'm storing ad impression data in with every row being
an impression. I want to get a count of total rows / impressions. I know
that there is in the ball park of 200-400 million rows in this table and
from my reading, "Number of keys" in the output of cfstats should be a
reasonable approximation of that …
The number of keys is the number of *partition keys*, not row keys. You
have ~39434 partitions, ranging from 311 bytes to 386 MB. Looks like you
have some wide partitions that contain many of your rows.
Chris Lohfink
On Wed, Jul 27, 2016 at 1:44 PM, Luke Jolly wrote:
> I have a table that I'm storing ad impression data in with every row being
> an impression. I want to get a count of total rows / impressions. …
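As an illustration of the distinction Chris is drawing, here is a hedged sketch of how one might size this up with nodetool (the keyspace/table names are placeholders, not taken from the thread):

# "Number of keys" here counts partitions, not CQL rows
nodetool cfstats my_keyspace.impressions | grep "Number of keys"

# Per-partition size and cell-count percentiles; a typical cell count per
# partition times the partition count gives a very rough row estimate
nodetool cfhistograms my_keyspace impressions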
Hi;
We have a column family that has around 1000 rows, one of which is really
huge (about a million columns). 95% of that row is tombstones. Since there
is just one SSTable, no compaction is going to be kicked off. Is there any
way we can get rid of the tombstones in that row?
User-defined compaction …
You can run file-level compaction using JMX to get rid of tombstones in one
SSTable. Ensure gc_grace_seconds is set such that
current time >= deletion (tombstone) time + gc_grace_seconds
File-level compaction:
/usr/bin/java -jar cmdline-jmxclient-0.10.3.jar - localhost:{port} org.apache.cassandra.db:type=CompactionManager …
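A hedged end-to-end sketch of that call (the paths, JMX port, and keyspace/SSTable names below are placeholders, not taken from this thread):

# Find the Data.db file of the SSTable holding the tombstoned row
ls /var/lib/cassandra/data/my_keyspace/my_table/*-Data.db

# Trigger a single-SSTable ("user defined") compaction through the
# CompactionManager MBean; 7199 is the default JMX port
/usr/bin/java -jar cmdline-jmxclient-0.10.3.jar - localhost:7199 \
  org.apache.cassandra.db:type=CompactionManager \
  forceUserDefinedCompaction=my_keyspace,my_keyspace-my_table-jb-42-Data.db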
This feature is also exposed directly in nodetool from Cassandra 3.4 onward:
nodetool compact --user-defined
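For example (a sketch; the data file path is a placeholder and the exact file name depends on your layout and SSTable format):

# Compact just the SSTable(s) that carry the tombstones (Cassandra 3.4+)
nodetool compact --user-defined /var/lib/cassandra/data/my_keyspace/my_table-<table-id>/mc-42-big-Data.db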
On Wed, Jul 27, 2016 at 9:58 PM, Vinay Chella wrote:
> You can run file level compaction using JMX to get rid of tombstones in
> one SSTable. Ensure you set GC_Grace_seconds such that
> current time >= deletion (tombstone) time + gc_grace_seconds …
Thanks Vinay and DuyHai.
We are using version 2.0.14. I did a "user defined compaction" following
the instructions in the link below, but the tombstones still persist even
after that.
https://gist.github.com/jeromatron/e238e5795b3e79866b83
Also, we changed tombstone_compaction_interval to 1800.
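One option sometimes suggested in this situation, shown here only as a hedged sketch (keyspace/table names are placeholders, and this is not something confirmed in the thread): let the compaction strategy recompact a lone SSTable on its own once its droppable-tombstone ratio is high enough, by enabling unchecked_tombstone_compaction alongside the interval changed above.

-- Keep whatever compaction class the table already uses
ALTER TABLE my_keyspace.my_table WITH compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'tombstone_compaction_interval': '1800',
  'unchecked_tombstone_compaction': 'true'
};

The unchecked option tells the strategy to attempt single-SSTable tombstone compactions without the usual pre-checks that can otherwise prevent them from running.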
Were you able to troubleshoot this yet? Private IPs for listen_address, a
public IP for broadcast_address, and prefer_local=true in
cassandra-rackdc.properties should be sufficient to make nodes in the same
DC communicate over the private address, so something must be going on there.
Can you check in your …
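For reference, a hedged sketch of the setup being described (the addresses, DC, and rack values are made-up placeholders):

# cassandra.yaml on each node
listen_address: 10.0.1.12        # private IP, used for intra-DC traffic
broadcast_address: 203.0.113.12  # public IP, advertised to peers in other DCs

# cassandra-rackdc.properties (assuming GossipingPropertyFileSnitch or similar)
dc=DC1
rack=RAC1
# make same-DC peers reconnect over the private address
prefer_local=true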
Is there any other way to get an estimate of rows?
On Wed, Jul 27, 2016 at 2:49 PM Chris Lohfink wrote:
> The number of keys is the number of *partition keys*, not row keys. You
> have ~39434 partitions, ranging from 311 bytes to 386 MB. Looks like you
> have some wide partitions that contain many of your rows.
This looks somewhat related to CASSANDRA-9630. What is the C* version?
Can you check with netstat whether other nodes keep connections to the
stopped node in the CLOSE_WAIT state? And also whether the problem disappears
if you run nodetool disablegossip before stopping the node?
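A hedged way to run those checks (the stopped node's IP and the service command are placeholders):

# On a peer node: look for connections to the stopped node stuck in CLOSE_WAIT
netstat -tn | grep 10.0.1.12 | grep CLOSE_WAIT

# Shutdown sequence to test on the node being stopped
nodetool disablegossip
nodetool drain
sudo service cassandra stop   # or your init system's equivalent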
Thanks Paulo for the reply.
Cassandra version is 3.0.8. I will test what you said and share the results.
On Wed, Jul 27, 2016 at 2:01 PM, Paulo Motta
wrote:
> This looks somewhat related to CASSANDRA-9630. What is the C* version?
>
> Can you check with netstat whether other nodes keep connections to the
> stopped node in the CLOSE_WAIT state? …
What is your GC_grace_seconds set to?
On Wed, Jul 27, 2016 at 1:13 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> thanks Vinay and DuyHai.
>
> We are using version 2.0.14. I did a "user defined compaction" following
> the instructions in the link below, but the tombstones still persist even
> after that. …
It's set to 1800, Vinay.
bloom_filter_fp_chance=0.01 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.10 AND
gc_grace_seconds=1800 AND
index_interval=128 AND
read_repair_chance=0.00 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND …
Also, the SSTable in question is only about 220 KB in size.
thanks
On Wed, Jul 27, 2016 at 5:41 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> it's set to 1800 Vinay.
>
> bloom_filter_fp_chance=0.01 AND
> caching='KEYS_ONLY' AND
> comment='' AND
> dclocal_read_repair_chance=0.10 AND …
220kb worth of tombstones doesn’t seem like enough to worry about.
From: sai krishnam raju potturi
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, July 27, 2016 at 2:43 PM
To: Cassandra Users
Subject: Re: Re : Purging tombstones from a particular row in SSTable
Also, the SSTable in question is only about 220 KB in size. …
The read queries are continuously failing because of the tombstones, though:
"Request did not complete within rpc_timeout."
thanks
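A hedged way to confirm the SSTable really is tombstone-heavy (the path is a placeholder; sstablemetadata ships with Cassandra, under tools/bin in 2.0-era installs):

# Prints, among other things, the estimated droppable tombstone ratio
sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table/*-Data.db | grep -i tombstone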
On Wed, Jul 27, 2016 at 5:51 PM, Jeff Jirsa
wrote:
> 220kb worth of tombstones doesn’t seem like enough to worry about.
Paulo,
I can confirm that the problem is as you stated. Some or all of the other
nodes are keeping a connection in the CLOSE_WAIT state. Those nodes are seen
as DN from the point of view of the node I have restarted the Cassandra
service on. But nodetool disablegossip did not fix the problem.
This sounds like an issue that can potentially affect many users. Is it not
the case? Do we have a solution for this?
> This sounds like an issue that can potentially affect many users. Is it
> not the case?
This seems to affect only some configurations, especially EC2, but not all
for some reason (it might be related to the default TCP timeout configuration).
> Do we have a solution for this?
Watch https://issues.apa
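Since the default TCP timeout is mentioned: a mitigation commonly suggested for EC2 is to tighten the kernel keepalive settings so half-open inter-node connections are detected and closed sooner. A hedged sketch; the values are the ones usually quoted for AWS, not something confirmed in this thread:

sudo sysctl -w net.ipv4.tcp_keepalive_time=60    # start probing idle connections after 60s
sudo sysctl -w net.ipv4.tcp_keepalive_probes=3   # give up after 3 failed probes
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=10   # 10s between probes
# persist the settings in /etc/sysctl.conf (or a file under /etc/sysctl.d/) to survive reboots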
Hi,
I just released a detailed post about tombstones today that might be of
some interest to you:
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
> 220kb worth of tombstones doesn’t seem like enough to worry about.
+1
I believe you might be missing some other bigger SSTables …
No real evidence it's the case here, but the one time I've seen tombstones
that refused to go away despite many attempts at compactions, etc., it turned
out to be due to the data being written (and deleted) with invalid
timestamps years in the future (we guessed due to the time being set wrong
somewhere). …
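A hedged way to check a suspect SSTable for far-future write timestamps (the path is a placeholder; the values printed are microseconds since the epoch):

# Minimum/maximum write timestamps recorded in the SSTable metadata
sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table/*-Data.db | grep -i timestamp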
Hi, data modeling question,
I have been investigating cassandra to store small objects as a trivial
replacement for s3. GET/PUT/DELETE are all easy, but LIST is what is tripping
me up.
S3 does a hierarchical list that kinda simulates traversing folders.
http://docs.aws.amazon.com/AmazonS3/l
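One hedged way such a hierarchical LIST is often modelled in CQL (a sketch only; the table and column names are made up, not from this thread): key each "folder" level as its own partition, so a prefix-with-delimiter listing becomes a single-partition query.

-- Placeholder schema: one partition per bucket + parent "folder"
CREATE TABLE objects_by_prefix (
    bucket      text,
    parent_path text,     -- e.g. 'photos/2016/'
    name        text,     -- object or sub-folder name directly under parent_path
    is_folder   boolean,
    size        bigint,
    PRIMARY KEY ((bucket, parent_path), name)
);

-- LIST with bucket='my-bucket', prefix='photos/2016/', delimiter='/':
-- SELECT name, is_folder, size FROM objects_by_prefix
--   WHERE bucket = 'my-bucket' AND parent_path = 'photos/2016/';

The trade-off is that writes must maintain an entry per path level, and very large "folders" become wide partitions.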