Hi,
we have also been seeing this kind of behaviour for a few weeks on our
not-so-performance-critical HDD pools.
We haven't spent much time on the problem yet, because there are
currently more important tasks - but here are a few details:
Running the following loop results in the following output:
Which configuration option determines the MDS timeout period?
William Lawton
From: Gregory Farnum
Sent: Thursday, August 30, 2018 5:46 PM
To: William Lawton
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] MDS does not always failover to hot standby on reboot
Yes, this is a consequence
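On the timeout question above: as far as I know the option that governs this is
mds_beacon_grace - how long missed MDS beacons are tolerated before the daemon is
marked laggy/failed and a standby is promoted. A minimal sketch for checking and
raising it, assuming a daemon named mds.a:

  # current grace period on a running MDS (default is 15 seconds)
  ceph daemon mds.a config get mds_beacon_grace

  # raise it at runtime; the monitors consult the same option when deciding
  # to fail an MDS over to a standby, so apply it there as well
  ceph tell mds.* injectargs '--mds_beacon_grace 60'
  ceph tell mon.* injectargs '--mds_beacon_grace 60'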
Hi Jones,
I still don't think creating an OSD on a partition will work. The
reason is that SES creates an additional partition per OSD resulting
in something like this:
vdb      253:16   0    5G  0 disk
├─vdb1   253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2
Hi,
For those facing (lots of) active+clean+inconsistent PGs after the Luminous
12.2.6 metadata corruption and the 12.2.7 upgrade, I'd like to explain how I
finally got rid of them.
Disclaimer: my cluster doesn't contain highly valuable data, and I can sort of
recreate what it actually contains
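For reference, the usual starting point for chasing inconsistent PGs is something
along these lines (a generic sketch, not necessarily the procedure described here;
1.2f is a made-up PG id):

  # list the inconsistent PGs
  ceph health detail | grep inconsistent

  # inspect what exactly is inconsistent in a given PG
  rados list-inconsistent-obj 1.2f --format=json-pretty

  # then, once the damage is understood, ask the primary to repair it
  ceph pg repair 1.2f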
Hi - I am using the Ceph Luminous release. What journal settings are
needed for the OSDs here?
NOTE: I used SSDs for journal till Jewel release.
Thanks
Swami
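For what it's worth: with BlueStore on Luminous there is no Filestore journal any
more; the rough equivalent of the old SSD journal is putting the RocksDB/WAL on the
SSD. A hedged sketch, assuming /dev/sdb is the data disk and /dev/nvme0n1p1 a
partition on the fast device:

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  # optionally a separate --block.wal partition can be given as well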
arad...@tma-0.net writes:
> Can anyone confirm if the Ceph repos for Debian/Ubuntu contain packages for
> Debian? I'm not seeing any, but maybe I'm missing something...
>
> I'm seeing ceph-deploy install an older version of ceph on the nodes (from
> the
> Debian repo) and then failing when I ru
Did you change the default pg_num or pgp_num, so that the pools that did show up
pushed you past the mon_max_pg_per_osd limit?
On Fri, 31 Aug 2018 at 17:20, Robert Stanford wrote:
>
> I installed a new Luminous cluster. Everything is fine so far. Then I
> tried to start RGW and got this error:
>
> 2018
So we now have a different error. I ran `ceph fs reset k8s` because of the
map that was in the strange state. Now I'm getting the following error in
the MDS log when it tries to 'join' the cluster (even though it's the only
one):
https://gist.github.com/Marlinc/59d0a9fe3c34fed86c3aba2ebff850fb
In the end it was because I hadn't completed the upgrade with "ceph osd
require-osd-release luminous". After setting that, I had the default
backfillfull ratio (0.9 I think) and was able to change it with ceph osd
set-backfillfull-ratio.
Potential gotcha for a Jewel -> Luminous upgrade if you delay the
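For anyone hitting the same thing, the sequence was roughly (a sketch; 0.92 is just
an example value):

  # finish the Jewel -> Luminous upgrade
  ceph osd require-osd-release luminous

  # after that the backfillfull ratio can be adjusted again
  ceph osd set-backfillfull-ratio 0.92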
Hi all
Trying to add a new host to a Luminous cluster, I'm doing one OSD at a
time. I've only added one so far but it's getting too full.
The drive is the same size (4TB) as all the others in the cluster, and all OSDs
have a crush weight of 3.63689. Average usage on the drives is 81.70%.
With the new OSD I
I am adding a node like this; I think it is more efficient, because in
your case you will have data being moved around within the added node (between
the newly added OSDs there). So far no problems with this.
Maybe limit your backfills with
ceph tell osd.* injectargs --osd_max_backfills=X
because PGs being move
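Spelled out, that throttling is something like the following (a sketch; 1 is already
the Luminous default, 4 is just an example):

  # keep backfill gentle while the new node fills up
  ceph tell osd.* injectargs '--osd_max_backfills 1'
  # watch progress
  ceph -s
  # once the cluster has settled you can raise it again to speed up the tail end
  ceph tell osd.* injectargs '--osd_max_backfills 4'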
Hello everyone,
I am in the process of adding an additional osd server to my small ceph cluster
as well as migrating from filestore to bluestore. Here is my setup at the
moment:
Ceph - 12.2.5 , running on Ubuntu 16.04 with latest updates
3 x osd servers with 10x3TB SAS drives, 2 x Intel S371
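For the BlueStore migration part, the per-OSD conversion is roughly the documented
"mark out and replace" procedure; a sketch assuming osd.12 lives on /dev/sdc (not a
drop-in script):

  ceph osd out 12
  # wait until the data has drained off the OSD
  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdc
  # recreate it as BlueStore, reusing the same OSD id
  ceph-volume lvm create --bluestore --data /dev/sdc --osd-id 12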
Hi Marc
I like that approach, although I think I'd go in smaller weight increments.
I'm still a bit confused by the behaviour I'm seeing; it looks like I've got
things weighted correctly. Red Hat's docs recommend doing one OSD at a time,
and I'm sure that's how I've done it on other clusters in the past
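The smaller increments would look roughly like this (osd.24 and the intermediate
weights are hypothetical):

  ceph osd crush reweight osd.24 1.0
  # wait for backfill to settle, then step up towards the final weight
  ceph osd crush reweight osd.24 2.0
  ceph osd crush reweight osd.24 3.63689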
Yes, you are right. I had moved the fs_meta pool (16 PGs) to the SSDs. I had
to check the crush rules, but that pool is only 200MB. It still puzzles me
why Ceph 'out of the box' is not distributing data more evenly.
I will try the balancer first thing, when remapping of the newly added
node has fin
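The balancer bit, for anyone else on Luminous, is along these lines (a sketch; upmap
mode additionally requires all clients to be Luminous or newer):

  ceph mgr module enable balancer
  ceph balancer mode crush-compat
  ceph balancer on
  ceph balancer status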
Hi all,
I am new to Ceph and we are setting up a new RadosGW and Ceph storage cluster
on Luminous. We are using only EC for our `buckets.data` pool at the moment.
However, I just read the Red Hat Ceph Object Gateway for Production article, and
it mentions an extra duplicated `buckets.non-ec`
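For reference, the non-ec pool is the data_extra_pool that RGW uses for
multipart-upload bookkeeping, which an EC pool cannot handle, so it stays replicated
even when buckets.data is EC. A hedged sketch of wiring it up, assuming the default
zone and placement names:

  ceph osd pool create default.rgw.buckets.non-ec 32 32 replicated
  radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement \
      --data-extra-pool=default.rgw.buckets.non-ec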
Version 12.2.8 seems broken. Someone earlier on the ML had an MDS issue. We
accidentally upgraded an OpenStack compute node from 12.2.7 to 12.2.8 (librbd)
and it caused all kinds of issues writing to the VM disks.
From: ceph-users on behalf of Nicolas Huillard
I don't think those issues are known... Could you elaborate on your
librbd issues with v12.2.8 ?
-- dan
On Tue, Sep 4, 2018 at 7:30 AM Linh Vu wrote:
>
> Version 12.2.8 seems broken. Someone earlier on the ML had a MDS issue. We
> accidentally upgraded an openstack compute node from 12.2.7 to 1
We're going to reproduce this again in testing (12.2.8 drops right between our
previous testing and going production) and compare it to 12.2.7. Will update
with our findings soon. :)
From: Dan van der Ster
Sent: Tuesday, 4 September 2018 3:41:01 PM
To: Linh Vu
C