Just to add, this doesn't happen in just one pool.
When I changed the "data" pool's replica size from 2 to 3, a few PGs (3) got
stuck too.
pg 0.7c is active+clean+degraded, acting [8,2]
pg 0.48 is active+clean+degraded, acting [4,8]
pg 0.1f is active+clean+degraded, acting [5,7]
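For reference, the list above came from something like the following; the
grep is just one way to pull out the degraded lines, and pg 0.7c is just one
of the PGs listed:

# ceph osd pool set data size 3
# ceph health detail | grep degraded
# ceph pg 0.7c query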
I am already on optimal tunables.
Hi Sage,
Thanks. After re-reading the documentation, I realized that osd.8 was not
under host3 in the CRUSH map. After adding osd.8 to host3, the PGs are now
in "active+remapped":
# ceph pg 3.45 query
{ "state": "active+remapped",
"epoch": 1374,
"up": [
4,
8],
"acting": [
4,
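For completeness, the move amounts to something like the following; the
weight and root=default are placeholders for whatever the tree already uses,
and the exact crush set syntax can differ slightly between releases:

# ceph osd tree                                          # osd.8 was not under host3
# ceph osd crush set osd.8 1.0 root=default host=host3
# ceph osd tree                                          # osd.8 now shows under host3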
On Tue, 4 Jun 2013, YIP Wai Peng wrote:
> Sorry, to set things in context, I had some other problems last weekend.
> Setting it to optimal tunables helped (although I am on the older kernel).
> Since it worked, I was inclined to believe that the tunables do work on the
> older kernel.
> That being said, I will upgrade the kernel to see if this is the issue.
On Tue, 4 Jun 2013, Wolfgang Hennerbichler wrote:
> On Mon, Jun 03, 2013 at 08:58:00PM -0700, Sage Weil wrote:
>
> > My first guess is that you do not have the newer crush tunables set and
> > some placements are not quite right. If you are prepared for some data
> > migration, and are not using an older kernel client, try
> >
> > ceph osd crush tunables optimal
Sorry, to set things in context, I had some other problems last weekend.
Setting it to optimal tunables helped (although I am on the older kernel).
Since it worked, I was inclined to believe that the tunables do work on
the older kernel.
That being said, I will upgrade the kernel to see if this is the issue.
On Mon, Jun 03, 2013 at 08:58:00PM -0700, Sage Weil wrote:
> My first guess is that you do not have the newer crush tunables set and
> some placements are not quite right. If you are prepared for some data
> migration, and are not using an older kernel client, try
>
> ceph osd crush tunables optimal
Hi Sage,
It is on optimal tunables already. However, I'm on kernel
2.6.32-358.6.2.el6.x86_64. Will the tunables take effect or do I have to
upgrade to something newer?
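For what it's worth, one way to double-check what the map actually has is to
decompile it, something like this (the /tmp paths are just scratch files):

# ceph osd getcrushmap -o /tmp/crushmap
# crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
# head /tmp/crushmap.txt      # the "tunable ..." lines at the top show the current values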
- WP
On Tue, Jun 4, 2013 at 11:58 AM, Sage Weil wrote:
> On Tue, 4 Jun 2013, YIP Wai Peng wrote:
> > Hi all,
> > I'm running ceph on CentOS6 on 3 hosts, with 3 OSD each (total 9 OSD).
On Tue, 4 Jun 2013, YIP Wai Peng wrote:
> Hi all,
> I'm running ceph on CentOS6 on 3 hosts, with 3 OSD each (total 9 OSD).
> When I increased one of my pools' rep size from 2 to 3, 6 PGs got stuck in
> active+clean+degraded mode, but no new replicas were created.
My first guess is that you do not have the newer crush tunables set and
some placements are not quite right. If you are prepared for some data
migration, and are not using an older kernel client, try

ceph osd crush tunables optimal
Hi all,
I'm running ceph on CentOS6 on 3 hosts, with 3 OSD each (total 9 OSD).
When I increased one of my pools' rep size from 2 to 3, 6 PGs got stuck in
active+clean+degraded mode, but no new replicas were created.
One of the problematic PGs has the following (snipped for brevity):
{ "state": "active+clean+degraded",
...