Cassandra crashed on two out of 10 nodes in my cluster within one day. The
error is:

ERROR [CompactionExecutor:3389] 2018-07-10 11:27:58,857
CassandraDaemon.java:228 - Exception in thread
Thread[CompactionExecutor:3389,1,main] org.apache.cassandra.io.FSReadError:
java.io.IOException: Map failed
Could the problem be that the process ran out of file handles? The usual
recommendation is to tune that limit higher than the default.
Hannu
> On 12 Jul 2018, at 12:44, onmstester onmstester wrote:
>
> Cassandra crashed on two out of 10 nodes in my cluster within one day, the
> error is:
>
> ERROR [Compactio…
Kurt,
The same is mentioned in the Apache documentation too; I am not able to find it
right now.
But my question is:
How can I set a TTL for a whole column?
On Wed, Jul 11, 2018 at 11:36 PM, kurt greaves wrote:
> The Datastax documentation is wrong. It won't error, and it shouldn't. If
> you want to fix…
To set a TTL on a column only, and not on the whole CQL row, use UPDATE
instead:

UPDATE <table> USING TTL xxx SET <column> = <value> WHERE partition = yyy;
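For concreteness, a minimal sketch (the ks.users table, its columns, and the
values are made up for illustration):

CREATE TABLE ks.users (partition int PRIMARY KEY, name text, email text);
INSERT INTO ks.users (partition, name, email) VALUES (1, 'alice', 'alice@example.com');

-- rewrite only the email cell with a TTL; name gets no TTL
UPDATE ks.users USING TTL 3600 SET email = 'alice@example.com' WHERE partition = 1;

-- TTL(email) now returns ~3600, TTL(name) returns null
SELECT TTL(email), TTL(name) FROM ks.users WHERE partition = 1;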
On Thu, Jul 12, 2018 at 2:42 PM, Nitan Kainth wrote:
> Kurt,
>
> The same is mentioned in the Apache documentation too; I am not able to find it
> right now.
>
> But my questi…
Okay, so that means a regular update, and any TTL set with a write, overrides
the default setting. Which means the DataStax documentation is incorrect and
should be updated.
Sent from my iPhone
> On Jul 12, 2018, at 9:35 AM, DuyHai Doan wrote:
>
> To set TTL on a column only and not on the whole CQL row, use U…
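To make the override concrete, a small sketch (table name and values are
hypothetical):

CREATE TABLE ks.events (id int PRIMARY KEY, v text)
  WITH default_time_to_live = 86400;

-- cells from this write expire after 60 s, overriding the table default
INSERT INTO ks.events (id, v) VALUES (1, 'x') USING TTL 60;

-- a write without USING TTL falls back to the 86400 s table default
INSERT INTO ks.events (id, v) VALUES (2, 'y');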
Probably close - maybe file handles or map counts. The output of ulimit -a
and/or cat /proc/sys/vm/max_map_count would be useful.
--
Jeff Jirsa
> On Jul 12, 2018, at 3:47 AM, Hannu Kröger wrote:
>
> Could the problem be that the process ran out of file handles? Recommendation
> is to tune that higher t…
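For concreteness, a sketch of the checks Jeff suggests (the pgrep pattern and
the 1048575 value follow common guidance and are assumptions here, not
something from this thread):

# per-process limits, run as the user Cassandra runs as
ulimit -a

# kernel cap on memory-mapped regions per process; "Map failed" often means it was hit
cat /proc/sys/vm/max_map_count

# how many mappings the Cassandra process currently holds
wc -l /proc/$(pgrep -f CassandraDaemon)/maps

# raise the cap to the commonly recommended value (add to sysctl.conf to persist)
sysctl -w vm.max_map_count=1048575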
Hi everyone,

> If several nodes experience brief outages simultaneously, substantial
> memory pressure can build up on the coordinator. The coordinator tracks
> how many hints it is currently writing, and if the number increases too
> much, the coordinator refuses writes and throws the OverloadedException
> error.
Hi,
What amount of data can a Cassandra 3 server in a cluster serve at most?
The documentation says it is only 1 TB.
If the load is not high (only about 100 requests per second with 1 KB
of data each), is it safe to go above 1 TB per server (let's say 5 TB per
server)?
What would be a safe maximum disk size for a server?
You can certainly go higher than a terabyte - 4 TB or so is common. I've heard
of people doing up to 12 TB, with the awareness that time to replace scales
with size on disk, so a very large host will take longer to rebuild than a
small host.
The 50% free-space guidance only applies to size-tiered compaction.
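As a rough worked example of that scaling (the 100 MB/s effective streaming
rate is an assumption; real rates depend on network, disks, and stream
throttling):

rebuild time ≈ bytes to stream / effective stream rate
 4 TB:  4e12 B / 1e8 B/s =  40,000 s ≈ 11 hours
12 TB: 12e12 B / 1e8 B/s = 120,000 s ≈ 33 hours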
Hi All,
Can anybody let me know the best approach for decommissioning a node in the
cluster? My cluster is using vnodes. Is there any way to verify that all the
data of the decommissioning node has been moved to the remaining nodes before
completely shutting down the server?
I followed the procedure below:
1) …
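For reference, a minimal sketch of the usual vnode decommission flow (general
guidance; not necessarily the procedure referred to above):

# on the node being removed: streams its token ranges to the remaining nodes
nodetool decommission

# on the same node: watch streaming progress until it finishes
nodetool netstats

# from any other node: the node shows as UL (leaving) and then drops out of the ring
nodetool status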
Folks,
I have a question regarding how mutations from batch statements fire
triggers.
In an unlogged batch with a single-partition mutation, I'm expecting one
partition to be affected and returned... but does the trigger fire for each
and every row? In a logged batch, in a single partition, I'm expecting…
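For context, a sketch against the Cassandra 3.x trigger interface, where
augment() receives one Partition per partition update, so a single-partition
batch reaches the trigger once, with all of its rows inside that update (the
class here is hypothetical):

import java.util.Collection;
import java.util.Collections;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.db.rows.UnfilteredRowIterator;
import org.apache.cassandra.triggers.ITrigger;

// Invoked once per partition update, not once per row.
public class RowCountingTrigger implements ITrigger
{
    public Collection<Mutation> augment(Partition update)
    {
        int rows = 0;
        try (UnfilteredRowIterator it = update.unfilteredIterator())
        {
            while (it.hasNext()) { it.next(); rows++; }  // every row of this update
        }
        System.out.println("partition update carrying " + rows + " rows");
        return Collections.emptyList();  // no extra mutations to apply
    }
}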
Refs:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesHintedHandoff.html
On Thu, Jul 12, 2018 at 7:46 PM Karthick V wrote:
> Hi everyone,
>
>> If several nodes experience brief outages simultaneously, substantial
>> memory pressure can build up on the coordinator. …
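For orientation, the knobs that bound hint buildup live in cassandra.yaml; a
sketch with what I believe are the stock 3.x defaults (check your own yaml
before tuning):

# cassandra.yaml - hinted handoff section
hinted_handoff_enabled: true
# stop accumulating hints for a dead node after this window (3 hours)
max_hint_window_in_ms: 10800000
# throttle for hint delivery, in KB per second
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2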