By the way, does anyone know what happens if I run a user defined
compaction on an sstable that's already in compaction?
On Sun, Sep 3, 2017 at 2:55 PM, Shalom Sagges wrote:
> Try this blog by The Last Pickle:
>
> http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
>
You'll get the WARN "Will not compact {}: it is not an active sstable" :)
On 4 September 2017 at 12:07, Shalom Sagges wrote:
> By the way, does anyone know what happens if I run a user defined
> compaction on an sstable that's already in compaction?
Wrong copy/paste!
Looking at the code, it should do nothing:
// look up the sstables now that we're on the compaction executor, so we
// don't try to re-compact something that was already being compacted earlier.
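For anyone who wants to try this themselves: since Cassandra 3.4 a user-defined compaction can be triggered directly from nodetool (older versions need the `forceUserDefinedCompaction` JMX operation on the `CompactionManager` MBean). The sstable path below is just a placeholder:

```
# Trigger a user-defined compaction on one specific sstable (Cassandra 3.4+).
# Replace the path with a real Data.db file from your data directory.
nodetool compact --user-defined /var/lib/cassandra/data/ks/tbl-*/mc-42-big-Data.db
```

As the code comment above says, sstables that are already part of a running compaction are simply looked up again on the compaction executor and skipped, so re-submitting one is harmless.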
On 4 September 2017 at 13:54, Nicolas Guyomar wrote:
> You'll get the WARN "Will not compact {}: it is not an active sstable" :)
Try checking the Percent Repaired reported in nodetool cfstats
Likely. I believe counter mutations are a tad more expensive than a normal
mutation. If you're doing a lot of counter updates, that probably doesn't
help. Regardless, a high number of pending reads/mutations is generally not
good and indicates the node is overloaded. Are you just seeing this on
th
It can happen on any of the nodes. We can have a large number of pending
tasks on ReadStage and CounterMutationStage. We'll try increasing
concurrent_counter_writes to see how it changes things.
> Likely. I believe counter mutations are a tad more expensive than a
> normal mutation. If you're doing a lot of counter updates that probably
> doesn't help.
I'm going to try different options. Do any of you have some experience
with tweaking one of those conf parameters to improve read throughput,
especially in the case of counter tables?
1/ using SSD:
trickle_fsync: true
trickle_fsync_interval_in_kb: 1024
2/ concurrent_compactors to the number of
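Gathering the settings discussed in this thread into one cassandra.yaml sketch; the trickle_fsync values are the ones quoted above, while the other two are illustrative placeholders, not recommendations:

```yaml
# cassandra.yaml -- settings mentioned in this thread (illustrative values)
trickle_fsync: true                 # generally only worthwhile on SSDs
trickle_fsync_interval_in_kb: 1024
concurrent_counter_writes: 64       # default is 32; raise if CounterMutationStage backs up
concurrent_compactors: 8            # placeholder; often set to number of cores or disks
```

Changes to these settings require a node restart to take effect, so it's worth testing them on one node and watching `nodetool tpstats` before rolling them out cluster-wide.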
Thanks! :-)
On Mon, Sep 4, 2017 at 2:56 PM, Nicolas Guyomar wrote:
> Wrong copy/paste!
>
> Looking at the code, it should do nothing:
>
> // look up the sstables now that we're on the compaction executor, so we
> // don't try to re-compact something that was already being compacted earlier.
Hello Kurt,
Thanks for the help :)
On Fri, Sep 1, 2017 at 1:12 PM, Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:
> yes looks like I am missing that.
>
> Let me test on one node and try a full cluster restore.
>
> will update here once I complete my test
>
Hi,
Is there any way to set the *gc_grace_seconds* parameter in the stress tool
command?
Regards
You can create the schema in advance with custom table options, and stress
will happily use it as-is.
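A minimal sketch of that approach, assuming the default names cassandra-stress uses (keyspace `keyspace1`, table `standard1`) and a hypothetical gc_grace_seconds value; run this in cqlsh before starting the stress run:

```sql
-- Pre-create the stress schema with a custom gc_grace_seconds;
-- cassandra-stress will reuse the existing table instead of creating it.
CREATE KEYSPACE IF NOT EXISTS keyspace1
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS keyspace1.standard1 (
    key blob PRIMARY KEY,
    "C0" blob, "C1" blob, "C2" blob, "C3" blob, "C4" blob
) WITH gc_grace_seconds = 3600;  -- placeholder value
```

The column layout above mirrors the default table cassandra-stress generates; if you use a stress user profile instead, you can put the table options directly in the profile's `table_definition`.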
--
Jeff Jirsa
> On Sep 4, 2017, at 10:25 AM, Akshit Jain wrote:
>
> Hi,
> Is there any way to set the gc_grace_seconds parameter in the stress tool
> command?
>
> Regards