Hey Jeff,
One of the nodes with high GC has 1400 SSTables; all other nodes have
about 500-900 SSTables. The other node with high GC has 636 SSTables.
The average row size for compacted partitions is about 1640 bytes on all
nodes. We have replication factor 3, but the problem is only on two n
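For reference, per-table SSTable counts like the ones quoted here can be pulled with something along these lines (a shell sketch; the keyspace name is a placeholder, not from the thread):

    nodetool cfstats my_keyspace | grep "SSTable count"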
Hi,
I once implemented one-way replication from an RDBMS to Cassandra using
triggers on the source database side. If you timestamp the changes at the
source, it's possible to apply the same timestamps on the Cassandra side as well, and that
takes care of a lot of the ordering of the changes. Assuming tha
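For reference, a minimal CQL sketch of that idea (keyspace, table and values are hypothetical); the change time captured by the source trigger, in microseconds, is passed as the write timestamp:

    INSERT INTO my_ks.account (id, balance) VALUES (42, 100.0)
      USING TIMESTAMP 1456920000000000;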
Also, MAX_HEAP_SIZE=6G and HEAP_NEWSIZE=4G.
On Wed, Mar 2, 2016 at 1:40 PM, Anishek Agarwal wrote:
> Hey Jeff,
>
> one of the nodes with high GC has 1400 SST tables, all other nodes have
> about 500-900 SST tables. the other node with high GC has 636 SST tables.
>
> the average row size for compa
Hi all.
I am getting two FSReadErrors:
FSReadError in
..\..\data\system\compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca\system-compaction_history-ka-329-CompressionInfo.db
FSReadError in
..\..\data\system\sstable_activity-5a1ff267ace03f128563cfae6103c65e\system-sstable_activity-ka-475-CompressionI
Hi Anishek,
Even though it highly depends on your workload, here are my thoughts:
`HEAP_NEWSIZE=4G` is probably far too high (try 1200M to 2G).
`MAX_HEAP_SIZE=6G` might be too low; how much memory is available? (You
might want to keep this as is, or even reduce it, if you have less than 16 GB
of native
Hi Fred,
Corrupted data due to software is quite rare nowadays. I would first have a look
at the filesystem to see if everything is OK. I recently had a case
where the FS was unmounted and mounted back in read-only mode; Cassandra did
not like it.
1. You should indeed give a try to:
nodeto
Thanks a lot Alain for the details.
> `HEAP_NEWSIZE=4G.` is probably far too high (try 1200M <-> 2G)
> `MAX_HEAP_SIZE=6G` might be too low, how much memory is available (You
> might want to keep this as it or even reduce it if you have less than 16 GB
> of native memory. Go with 8 GB if you have a
It looks like you are doing good work with this cluster and know a lot
about the JVM, that's good :-).
> our machine configurations are: 2 x 800 GB SSD, 48 cores, 64 GB RAM
That's good hardware too.
With 64 GB of RAM I would probably directly give a try to
`MAX_HEAP_SIZE=8G` on one of the 2 bad n
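For reference, a sketch of how those sizes might be set in conf/cassandra-env.sh (the values simply mirror the suggestions above; tune per node):

    MAX_HEAP_SIZE="8G"      # total JVM heap
    HEAP_NEWSIZE="2G"       # young generation, well below the 4G currently in use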
On Tue, Mar 1, 2016 at 8:30 PM, ANG ANG wrote:
> Reference:
> http://stackoverflow.com/questions/35712166/broken-links-in-apache-cassandra-home-page/35724686#35724686
>
> The following links are broken in the Apache Cassandra Home/Welcome page:
>
> "materialized views":
> http://www.datastax.com/d
Thanks Asher.
Yes, I agree. It would be better if someone could help us with clear
documentation about this.
As the cross-datacenter communication is through private IPs, I would consider
updating broadcast_address to the private IP and using Ec2MultiRegionSnitch.
On Tue, Mar 1, 2016 at 3:26 PM, Asher Newcomer
Thanks Robert,
All the nodes in both datacenters are in DSE Search mode (Solr). We may have
an analytics datacenter as well in the future. Will this have any impact on using
Ec2MultiRegionSnitch?
On Tue, Mar 1, 2016 at 7:10 PM, Robert Coli wrote:
> On Tue, Mar 1, 2016 at 12:12 PM, Arun Sandu
> wrote:
Can you post a gist of the output of jstat -gccause (60 seconds worth)? I
think it's cool you're willing to experiment with alternative JVM settings,
but I've never seen anyone use a max tenuring threshold of 50 either and I
can't imagine it's helpful. Keep in mind if your objects are actually
reach
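For reference, a capture like the one being asked for might look like this on the affected node (the PID is the Cassandra process; 60 one-second samples):

    jstat -gccause <cassandra_pid> 1000 60 > gccause.txt

Then paste gccause.txt into a gist.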
Hi Rakesh,
The client's default consistency level is ONE. From your query it seems you
are using CQL, so could you try the same query after setting CONSISTENCY ALL when you
enter cqlsh?
Regards
Amit Singh
From: Rakesh Kumar [mailto:dcrunch...@aim.com]
Sent: Wednesday, March 02, 2016 3:25 AM
To: u
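A minimal cqlsh sketch of what Amit suggests above (the keyspace, table and query are placeholders; substitute the original query):

    cqlsh> CONSISTENCY ALL;
    cqlsh> SELECT * FROM my_keyspace.my_table WHERE id = 1;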
Hi Anishek,
We too faced a similar problem in 2.0.14, and after doing some research we configured a
few parameters in cassandra.yaml and were able to overcome the GC pauses. Those are:
- memtable_flush_writers: increased from 1 to 3, as from the tpstats output
we can see mutations being dropped, so it mean
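For reference, the change described in that bullet would look roughly like this in cassandra.yaml (the value mirrors the email above; tune it to your write load and disks):

    memtable_flush_writers: 3    # raised from 1, per the tuning described above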
On Wed, Mar 2, 2016 at 8:10 AM, Peddi, Praveen wrote:
> We have few dead nodes in the cluster (Amazon ASG removed those thinking
> there is an issue with health). Now we are trying to remove those dead
> nodes from the cluster so that other nodes can take over. As soon as I
> execute nodetool rem
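For reference, the removal being attempted looks roughly like this (the host ID of each dead node comes from the DN rows of nodetool status):

    nodetool status                              # note the Host ID of each DN (down) node
    nodetool removenode <host_id_of_dead_node>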
On Wed, Mar 2, 2016 at 7:21 AM, Arun Sandu wrote:
>
> All the nodes in both datacenters are in DSE Search Mode(Solr). We may
> have analytics datacenter as well in future. Will this have any impact in
> using Ec2MultiRegionSnitch?
>
This list does not support DSE, but as I understand it, they cre
On Wed, Mar 2, 2016 at 7:00 AM, Eric Evans wrote:
> On Tue, Mar 1, 2016 at 8:30 PM, ANG ANG wrote:
> > "#cassandra channel": http://freenode.net/
>
> The latter, while not presently useful, links to a "coming soon..."
> page for Freenode. It might be pedantic to insist it's not broken, but I
> don't
Hi Robert,
Thanks for your response.
Replication factor is 3.
We are in the process of upgrading to 2.2.4. We have had too many performance
issues with later versions of Cassandra (I have asked for help related to
that in the forum). We are close to getting to similar performance now and
Hi y'all,
I am writing to a cluster fairly fast and seeing this odd behavior happen,
seemingly on single nodes at a time. The node starts to take more and more
memory (the instance has 48 GB of memory, running G1GC). tpstats shows that
MemtableReclaimMemory Pending starts to grow first, then later
MutationStage
I should also note: Cassandra 2.2.5, CentOS 6.7.
On Wed, Mar 2, 2016 at 1:34 PM, Dan Kinder wrote:
> Hi y'all,
>
> I am writing to a cluster fairly fast and seeing this odd behavior happen,
> seemingly to single nodes at a time. The node starts to take more and more
> memory (instance has 48GB memo
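For reference, one way to watch those two pending counts grow on the affected node (a plain shell sketch, not something from the thread):

    watch -n 5 'nodetool tpstats | egrep "MemtableReclaimMemory|MutationStage"'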
Hi Praveen,
> We are not removing multiple nodes at the same time. All dead nodes are
> from same AZ so there were no errors when the nodes were down as expected
> (because we use QUORUM)
Do you use at least 3 distinct AZs? If so, you should indeed be fine
regarding data integrity. Also, repair s
Just thought of a solution that might actually work even better:
you could try replacing one node at a time (instead of removing them).
I believe this would decrease the amount of streaming significantly.
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
Go
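For reference, a sketch of the replacement procedure that DataStax page describes (Cassandra 2.x; the IP is a placeholder for the dead node being replaced):

    # on the fresh replacement node, in conf/cassandra-env.sh, before the first start:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<dead_node_ip>"
    # start Cassandra; the node streams the dead node's data and takes over its tokens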
Hi
I am trying to insert data into Cassandra using Achilles that contains
only the partition key and static columns (all other columns and the clustering key
are null), but I am getting this error:
info.archinnov.achilles.exception.AchillesException: Field 'membername' in
entity of type 'com.xxx.domain.cassandra
You're right, it's a bug.
I have created an issue to fix it here:
https://github.com/doanduyhai/Achilles/issues/240
Fortunately, you can use the query API right now to insert the static
columns:
PreparedStatement ps = INSERT INTO
BoundStatement bs = ps.bind(...)
manager.query(