Hi Dave,
Probably not complete, but I know 2 interesting ways to get the configuration
of a BlueStore OSD:
1/ the /show-label/ command of the /ceph-bluestore-tool/ utility
Ex:
$ sudo ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
2/ the /config show/ and /perf dump/ commands of the daemon admin socket,
available at least from Luminous
to Nautilus...
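For the second way, a minimal sketch of admin-socket queries, assuming the commands are run on the OSD host and that /osd.0/ is the daemon of interest (the id is a placeholder):

```shell
# Show the full effective configuration of a running OSD
# through its admin socket:
sudo ceph daemon osd.0 config show

# Dump performance counters; an optional logger name narrows
# the output to the BlueStore counters:
sudo ceph daemon osd.0 perf dump bluestore
```

Unlike /show-label/, these require the OSD daemon to be running, since they go through its admin socket rather than reading the on-disk label.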
Cheers,
rv
On 01/05/2020 07:14, Alex Gorbachev wrote:
Herve,
On Wed, Apr 29, 2020 at 2:57 PM Herve Ballans
<herve.ball...@ias.u-psud.fr> wrote:
Hi Alex,
Thanks a lot for your tips. I note that for my planned upgrade.
I take the opportunity h
Hi Paul,
So we did the upgrade of our cluster by upgrading Ceph and Debian at the
same time.
And you were right, it worked out perfectly, with no problems at all.
We mainly followed the steps described here, so about a dozen steps.
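For reference, the dozen steps roughly follow the usual rolling-upgrade order; a sketch below, where the exact package commands and the node-by-node loop are assumptions rather than our literal commands:

```shell
# Prevent rebalancing while daemons restart during the upgrade:
sudo ceph osd set noout

# On each node, upgrade packages (Ceph + Debian together in our case):
sudo apt update && sudo apt full-upgrade

# Restart daemons in order: mons first, then mgrs, OSDs, MDS --
# one node at a time, waiting for the cluster to settle in between:
sudo systemctl restart ceph-mon.target
sudo systemctl restart ceph-mgr.target
sudo systemctl restart ceph-osd.target
sudo systemctl restart ceph-mds.target

# Once every daemon runs the new release:
sudo ceph osd require-osd-release nautilus
sudo ceph osd unset noout
sudo ceph versions    # confirm all daemons report Nautilus
```

The `noout` flag is what keeps the cluster from starting a large recovery each time an OSD node restarts.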
Your assistant seems to be powerful, as you do that with one button, while
in our case it took a very long time and had a strong impact on Ceph
performance during this operation (several hours).
Did you notice that too on your side ?
Thanks again,
Hervé
On 29/04/2020 20:39, Alex Gorbachev wrote:
On Wed, Apr 29, 2020 at 11:54 AM Herve Ballans
<herve.ball...@ias.u-psud.fr> wrote:
Hi all,
I'm planning to upgrade one of my Ceph clusters, currently on Luminous
12.2.13 / Debian Stretch (updated).
On this cluster, Luminous is packaged from the official Ceph repo (deb
https://download.ceph.com/debian-luminous/ stretch main)
I would like to upgrade it to Debian Buster and Nautilus. What is the
correct way?
both ways should work. You can first enable the second active MDS with
$ sudo ceph fs set /my_fs/ max_mds 2
and afterwards disable standby-replay or the other way around. I don't
think there's "the one correct" way.
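For completeness, a sketch of the other order (the filesystem name /my_fs/ is the same placeholder as above; the /allow_standby_replay/ flag is per filesystem on Nautilus):

```shell
# Disable standby-replay first; the daemon falls back to a
# normal standby:
sudo ceph fs set my_fs allow_standby_replay false

# Then raise the number of active MDS ranks:
sudo ceph fs set my_fs max_mds 2

# Check the resulting MDS layout:
sudo ceph fs status my_fs
```

Either order ends up in the same active/active/standby state; the cluster promotes a standby to fill the second active rank once `max_mds` is raised.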
Regards,
Eugen
Quoting Herve
Hello to all confined people (and the others too) !
On one of my Ceph clusters (Nautilus 14.2.3), I previously set up 3 MDS
daemons in an active/standby-replay/standby configuration.
For design reasons, I would like to replace this configuration with an
active/active/standby one.
It means replacing the standby-replay daemon with a second active one.