2017-04-13 8:34 GMT+02:00 Roland Otta <roland.o...@willhaben.at>:

hi,

we have the following issue on our 3.10 development cluster.

we are doing regular repairs with thelastpickle's fork of cassandra-reaper.
sometimes the repair (it is a full repair in that case) hangs because
of a stuck validation compaction.

nodetool compactionstats gives me

a1bb45c0-1fc6-11e7-81de-0fb0b3f5a345 Validation bds ad_event
805955242 841258085 bytes 95.80%

we have here no more progress for hours.
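The figures above are what nodetool compactionstats prints; the same data can
also be sampled over JMX to confirm that the "completed" counter really is not
moving. Below is a minimal sketch (not from the thread), assuming the default
JMX port 7199, no JMX authentication, and the CompactionManager MBean that
nodetool itself reads; adjust host and credentials for your cluster.

import java.util.List;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Samples the list of running compactions (the data behind
 * "nodetool compactionstats") twice over JMX, so you can check whether the
 * completed counter of the Validation task moves between samples.
 */
public class CompactionProgressSample
{
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        ObjectName compactionManager =
                new ObjectName("org.apache.cassandra.db:type=CompactionManager");

        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            for (int sample = 0; sample < 2; sample++)
            {
                List<Map<String, String>> running = (List<Map<String, String>>)
                        mbsc.getAttribute(compactionManager, "Compactions");
                for (Map<String, String> task : running)
                    System.out.println(task); // raw property map of each running compaction/validation task

                if (sample == 0)
                    Thread.sleep(5 * 60 * 1000); // wait five minutes between the two samples
            }
        }
    }
}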
On Thu, 2017-04-13 at 08:47 +0200, benjamin roth wrote:

You should connect to the node with JConsole and see where the compaction
thread is stuck.
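If clicking through JConsole is awkward, the same thread information can be
pulled programmatically through the node's JMX port. A minimal sketch (not
from the thread), assuming the default JMX port 7199, no authentication, and
that the interesting threads have "Validation" or "Compaction" in their names
(e.g. ValidationExecutor / CompactionExecutor); adjust for your environment.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Connects to a Cassandra node over JMX (the same interface JConsole uses)
 * and prints the stack of every thread whose name mentions Validation or
 * Compaction, so you can see where a stuck validation is spinning or blocked.
 */
public class DumpCompactionThreads
{
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

            // dumpAllThreads returns full stack traces for all live threads
            for (ThreadInfo info : threads.dumpAllThreads(false, false))
            {
                String name = info.getThreadName();
                if (!name.contains("Validation") && !name.contains("Compaction"))
                    continue;

                System.out.println("\"" + name + "\" " + info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace())
                    System.out.println("    at " + frame);
                System.out.println();
            }
        }
    }
}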
The follow-up reply included the stack trace of the stuck compaction thread:

> org.apache.cassandra.db.compaction.CompactionManager$13.call(CompactionManager.java:933)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1371495133.run(Unknown Source)
> java.lang.Thread.run(Thread.java:745)

From an earlier thread on user@cassandra.apache.org ("validation compaction", October 2014):

Thanks for the reply Rob.

Date: Thu, 16 Oct 2014 11:46:52 -0700
Subject: Re: validation compaction
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Thu, Oct 16, 2014 at 6:41 AM, S C wrote:
> Bob,
>

Bob is my father's name. Unless you need a gastrointestinal consult, you
probably don't want to ask "Bob Coli" a question... ;P

> Default compression is Snappy compression and I have seen compression
> ranging between 2-4% (just as the doc s...

From: ...@outlook.com
To: user@cassandra.apache.org
Subject: RE: validation compaction
Date: Tue, 14 Oct 2014 17:09:14 -0500
Thanks Rob.
Date: Mon, 13 Oct 2014 13:42:39 -0700
Subject: Re: validation compaction
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Mon, Oct 13, 2014 at 1:04 PM, S C wrote:
> I have started repairing a 10 node cluster with one of the table having >
> 1TB of data. I notice that the validation compaction actually shows >3 TB
> in the "nodetool compactionstats" bytes total. However, I have less than
> 1TB data on the machine. If I take into consideration of...
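A back-of-the-envelope check of those numbers (an illustration, not part of
the thread): if the validation totals in compactionstats reflect uncompressed
data while the roughly 1 TB on disk is Snappy-compressed, the reported bytes
can legitimately exceed the on-disk size. The 0.33 ratio below is an assumed
example value; the real per-table figure is the "SSTable Compression Ratio"
reported by nodetool cfstats.

/**
 * Rough estimate of the validation "bytes total" for a table whose on-disk
 * (compressed) size and compression ratio are known. Both input values here
 * are illustrative assumptions, not numbers taken from the thread.
 */
public class ValidationBytesEstimate
{
    public static void main(String[] args)
    {
        double onDiskBytes = 1.0e12;      // ~1 TB of compressed SSTables on disk
        double compressionRatio = 0.33;   // assumed ratio: compressed size / uncompressed size

        // If validation walks uncompressed data, its total is roughly
        // the on-disk size divided by the compression ratio.
        double uncompressedBytes = onDiskBytes / compressionRatio;

        System.out.printf("expected validation bytes total: %.1f TB%n",
                          uncompressedBytes / 1.0e12); // ~3.0 TB
    }
}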