Thanks Anthony! I’ve made a note to include that information in the
documentation. You’re right. It won’t work as intended unless that is
configured properly.
I’m also favoring a couple of other guidelines for Slender Cassandra:
1. SSDs only, no spinning disks
2. At least two co
It's fine and intended behaviour; upgradesstables also has the same effect.
Basically, cleanup operates on all SSTables on a node (for each table): it
will cancel any in-progress compactions and instead run cleanup across
them, since you can't have two different compactions that include the same
file. The
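If you want to watch this happen, something like the following should show
it (a sketch; the keyspace name is just an example):

    nodetool compactionstats        # the in-progress compaction that gets cancelled
    nodetool cleanup my_keyspace    # interrupts it and runs cleanup over the same SSTables

The "Compaction interrupted" INFO line is that cancellation, not an error.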
Hello Vlad,
I don't remember the context of the sentences, but I see no
incompatibility. I believe the misunderstanding comes from the word
'compaction', which is rather broad. In the first sentence it refers to
tombstone compactions; in the second it is about standard, regular
compaction accordin
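For reference, tombstone compactions are the single-SSTable compactions
driven by the strategy's tombstone subproperties, e.g. (a sketch; the
keyspace/table names and threshold value are just examples):

    ALTER TABLE ks.tbl WITH compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'tombstone_threshold': '0.2',
        'unchecked_tombstone_compaction': 'true'
    };

Regular compaction is the normal multi-SSTable merge the strategy schedules
on its own.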
Typically, long-lived connections are better, so global.
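With the Java driver that means something like this (a minimal sketch; the
contact point and query are placeholders):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    // Build one Cluster/Session at startup and share it across requests.
    Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")   // placeholder contact point
            .build();
    Session session = cluster.connect();

    // Reuse the same session for every request...
    session.execute("SELECT release_version FROM system.local");

    // ...and close it once, at application shutdown, not per request.
    cluster.close();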
--
Jeff Jirsa
> On Jan 22, 2018, at 3:28 AM, Andreou, Arys (Nokia - GR/Athens)
> wrote:
>
> It turns out it was a mistake in the client’s implementation.
> The session was created for each request but it was never shut down, so all the
>
Hello,
when triggering a "nodetool cleanup" with Cassandra 3.11, the nodetool call
almost returns instantly and I see the following INFO log.
INFO [CompactionExecutor:54] 2018-01-22 12:59:53,903
CompactionManager.java:1777 - Compaction interrupted:
Compaction@fc9b0073-1008-3a07-aeb9-baf6f3cd0
Hello,
Some other thoughts:
- Are you using secured internode communications (and therefore port 7001
instead)? See the cassandra.yaml snippet after this list.
- A rolling restart might help; have you tried restarting a few, or all, of
the nodes?
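If internode encryption is enabled, the storage port moves from 7000 to
7001; the relevant cassandra.yaml settings look like this (illustrative
values):

    ssl_storage_port: 7001
    server_encryption_options:
        internode_encryption: all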
This issue is very weird and I am only making poor guesses here. This is
not an issue I have seen i
It turns out it was a mistake in the client’s implementation.
The session was created for each request but it was never shut down, so all
the connections were left open.
I only needed to execute cluster.shutdown() once the request was over.
I do have a follow up question though.
Is it better to have a
You can increase the system open-files limit;
also, if you compact, the number of open files will go down.
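For example (a sketch, assuming Cassandra runs as the cassandra user):

    # check the current limit in the shell Cassandra starts from
    ulimit -n

    # raise it permanently via /etc/security/limits.conf
    cassandra - nofile 100000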
On Mon, Jan 22, 2018 at 10:19 AM, Dor Laor wrote:
> It's a high number; your compaction may be running behind, and thus
> many small sstables exist. However, you're also including the
> number of network connections in the cal
It's a high number; your compaction may be running behind, and thus
many small sstables exist. However, you're also including the
number of network connections in the calculation (everything
in *nix is a file). If it makes you feel better, my laptop
has 40k open files for Chrome...
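One caveat (my assumption about where the huge figure comes from): lsof
also lists memory-mapped files, and Cassandra mmaps its SSTables, so the
raw count is inflated. Counting file descriptors directly gives a truer
per-process figure:

    ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l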
On Sun, Jan 21, 2018 at 11:59
Hi,
I keep getting a "Last error: Too many open files" followed by a list of node
IPs.
The output of "lsof -n|grep java|wc -l" is about 674970 on each node.
What is a normal number of open files?
Thank you.