[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Dietmar Rieder
On 2020-03-24 23:37, Sage Weil wrote:
> On Tue, 24 Mar 2020, konstantin.ilya...@mediascope.net wrote:
>> Is it possible to provide instructions about upgrading from CentOS 7 +
>> ceph 14.2.8 to CentOS 8 + ceph 15.2.0?
> 
> You have ~2 options:
> 
> - First, upgrade Ceph packages to 15.2.0.  Note that your dashboard will 
> break temporarily.  Then, upgrade each host to CentOS 8.  Your dashboard 
> should "un-break" when the el8 ceph packages are installed.
> 
> - Combine the Ceph upgrade with a transition to cephadm based on 
> these directions:
> 
>   https://docs.ceph.com/docs/octopus/cephadm/adoption/
> 
> After the transition, you can either stick with el7 indefinitely (cephadm 
> doesn't care too much about the host OS) or upgrade the host to centos8.
> 
> - First upgrade each host to CentOS8, then upgrade Ceph.  This will 
> eventually be possible, but at the moment we don't have el8 packages built 
> for nautilus.  :/


as far as I know (and please prove me wrong), there is no in-place upgrade path
from CentOS 7 to 8. One has to do a fresh installation, right?

Dietmar

-- 
_
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Institute of Bioinformatics
Innrain 80, 6020 Innsbruck
Email: dietmar.rie...@i-med.ac.at
Web:   http://www.icbi.at
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread konstantin . ilyasov
That is why I am asking for upgrade instructions.
I really don't understand how to upgrade/reinstall CentOS 7 to 8 without 
affecting the operation of the cluster.
As far as I know, this process is easier on Debian, but we deployed our 
Nautilus cluster on CentOS because there weren't any 14.x packages for Debian 
Stretch (9) or Buster (10).
P.S.: if this is even possible, I would like to know how to upgrade servers 
with CentOS 7 + ceph 14.2.8 to Debian 10 with ceph 15.2.0 (we have servers with 
OSDs only and 3 servers with Mon/Mgr/Mds).
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Simon Oosthoek
On 25/03/2020 10:10, konstantin.ilya...@mediascope.net wrote:
> That is why i am asking that question about upgrade instruction.
> I really don`t understand, how to upgrade/reinstall CentOS 7 to 8 without 
> affecting the work of cluster.
> As i know, this process is easier on Debian, but we deployed our cluster 
> Nautilus on CentOS because there weren`t any packages for 14.x for Debian 
> Stretch (9) or Buster(10).
> P.s.: if this is even possible, i would like to know how to upgrade servers 
> with CentOs7 + ceph 14.2.8 to Debian 10 with ceph 15.2.0 (we have servers 
> with OSD only and 3 servers with Mon/Mgr/Mds)
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 

I guess you could upgrade each node one by one: upgrade/reinstall the
OS, install Ceph 15 and re-initialise the OSDs if necessary. Though it
would be nice if there were a way to re-integrate the OSDs from the
previous installation...
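
Roughly, per node, I'd imagine something like this (only a sketch, assuming
BlueStore OSDs created with ceph-volume; none of this is tested):

# stop the cluster from rebalancing while the node is down
ceph osd set noout
# reinstall the OS, install the Ceph 15 packages, restore /etc/ceph
# then let ceph-volume rediscover and start the existing OSDs
ceph-volume lvm activate --all
# once the node's OSDs are back up and in
ceph osd unset noout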

Personally, I'm planning to wait a while before upgrading to Ceph 15, not
least because it's not convenient to do stuff like OS upgrades
from home ;-)

Currently we're running Ubuntu 18.04 on the ceph nodes; I'd like to
upgrade to Ubuntu 20.04 first and then to Ceph 15.

Cheers

/Simon
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Wido den Hollander



On 3/25/20 10:24 AM, Simon Oosthoek wrote:
> On 25/03/2020 10:10, konstantin.ilya...@mediascope.net wrote:
>> That is why i am asking that question about upgrade instruction.
>> I really don`t understand, how to upgrade/reinstall CentOS 7 to 8 without 
>> affecting the work of cluster.
>> As i know, this process is easier on Debian, but we deployed our cluster 
>> Nautilus on CentOS because there weren`t any packages for 14.x for Debian 
>> Stretch (9) or Buster(10).
>> P.s.: if this is even possible, i would like to know how to upgrade servers 
>> with CentOs7 + ceph 14.2.8 to Debian 10 with ceph 15.2.0 (we have servers 
>> with OSD only and 3 servers with Mon/Mgr/Mds)
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
> 
> I guess you could upgrade each node one by one. So upgrade/reinstall the
> OS, install Ceph 15 and re-initialise the OSDs if necessary. Though it
> would be nice if there was a way to re-integrate the OSDs from the
> previous installation...
> 

That works just fine. You can re-install the host OS and have
ceph-volume scan all the volumes. The OSDs should then just come back.
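
For reference, a minimal sketch of what that re-scan looks like on a
reinstalled host (assuming the OSDs were created with ceph-volume/LVM;
adjust to your layout, this is not a tested recipe):

ceph-volume lvm activate --all       # rediscovers the LVM-backed OSDs and starts them

# for older ceph-disk created OSDs the rough equivalent is:
ceph-volume simple scan /dev/sdX1    # per data partition (placeholder device)
ceph-volume simple activate --all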

Or you can take the even safer route by removing OSDs completely from
the cluster and wiping a box.

We did this recently with a customer. In the meantime they took the
opportunity to also flash the firmware of all the components, and the
machines came back with a completely fresh installation.

> Personally, I'm planning to wait for a while to upgrade to Ceph 15, not
> in the least because it's not convenient to do stuff like OS upgrades
> from home ;-)
> 
> Currently we're running ubuntu 18.04 on the ceph nodes, I'd like to
> upgrade to ubuntu 20.04 and then to ceph 15.
> 

I think many people will do this. I wouldn't run 15.2.0 on my production
environment right away.

Wido

> Cheers
> 
> /Simon
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Sasha Litvak
I assume upgrading a cluster running in docker/podman containers should be a
non-issue, right? Just making sure. I also wonder if anything in this case
differs from the normal container upgrade scenario,
i.e. monitors -> mgrs -> osds -> mdss -> clients.

Thank you,

On Wed, Mar 25, 2020, 5:32 AM Wido den Hollander  wrote:

>
>
> On 3/25/20 10:24 AM, Simon Oosthoek wrote:
> > On 25/03/2020 10:10, konstantin.ilya...@mediascope.net wrote:
> >> That is why i am asking that question about upgrade instruction.
> >> I really don`t understand, how to upgrade/reinstall CentOS 7 to 8
> without affecting the work of cluster.
> >> As i know, this process is easier on Debian, but we deployed our
> cluster Nautilus on CentOS because there weren`t any packages for 14.x for
> Debian Stretch (9) or Buster(10).
> >> P.s.: if this is even possible, i would like to know how to upgrade
> servers with CentOs7 + ceph 14.2.8 to Debian 10 with ceph 15.2.0 (we have
> servers with OSD only and 3 servers with Mon/Mgr/Mds)
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> >
> > I guess you could upgrade each node one by one. So upgrade/reinstall the
> > OS, install Ceph 15 and re-initialise the OSDs if necessary. Though it
> > would be nice if there was a way to re-integrate the OSDs from the
> > previous installation...
> >
>
> That works just fine. You can re-install the host OS and have
> ceph-volume scan all the volumes. The OSDs should then just come back.
>
> Or you can take the even safer route by removing OSDs completely from
> the cluster and wiping a box.
>
> Did this recently with a customer. In the meantime they took the
> oppertunity to also flash the firmware of all the components and the
> machines came back again with a complete fresh installation.
>
> > Personally, I'm planning to wait for a while to upgrade to Ceph 15, not
> > in the least because it's not convenient to do stuff like OS upgrades
> > from home ;-)
> >
> > Currently we're running ubuntu 18.04 on the ceph nodes, I'd like to
> > upgrade to ubuntu 20.04 and then to ceph 15.
> >
>
> I think many people will do this. I wouldn't run 15.2.0 on my production
> environment right away.
>
> Wido
>
> > Cheers
> >
> > /Simon
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Mikael Öhman
Hi all,

Using the sendfile function to write data to CephFS, the data doesn't end up 
being written.
From the client that writes the file, it looks correct at first, but from all 
other ceph clients the size is 0 bytes. After re-mounting the filesystem, the 
data is lost.
I didn't see any errors; the data just doesn't get written, as if it were only 
cached in the cephfs client.
Writing just an extra byte at the end of the file (without sendfile) seems to 
trigger the actual write of all the data.

Could someone else confirm whether they are also seeing this issue? I'm on ceph 
13.2.8, using the kernel module for mounting on CentOS 7.

I've used this sendfile-example for the example below:
https://github.com/pijewski/sendfile-example/blob/master/sendfile.c

Using a small 27 byte source file.
# ls -lh examples/
-rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src
# ./sendfile examples/src examples/dst 27
# ls -lh examples/
--x--- 1 root c3-staff 27 Mar 24 18:12 dst
-rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src

But the directory size is still only 27 bytes:
# ls -lhd examples
drwxr-sr-x 1 root c3-staff 27 Mar 24 18:15 examples

and on all other cephfs clients, the file is empty:
# ls -lh examples/
--x--- 1 root c3-staff  0 Mar 24 18:12 dst
-rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src

Is this a bug in cephfs, or should I not expect sendfile to work (as it is not 
POSIX compliant)? There are no errors reported from what I can see, and it is 
100% reproducible.

Best regards, Mikael
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Help: corrupt pg

2020-03-25 Thread Jake Grimmett

Dear All,

We are "in a bit of a pickle"...

No reply to my message (23/03/2020),  subject  "OSD: FAILED 
ceph_assert(clone_size.count(clone))"


So I'm presuming it's not possible to recover the crashed OSD.

This is bad news, as one pg may be lost (we are using EC 8+2; pg dump 
shows [NONE,NONE,NONE,388,125,25,427,226,77,154]).


Without this pg we have 1.8PB of broken cephfs.

I could rebuild the cluster from scratch, but this means no user backups 
for a couple of weeks.


The cluster has 10 nodes, uses an EC 8+2 pool for cephfs data 
(with a replicated NVMe metadata pool) and is running Nautilus 14.2.8


Clearly, it would be nicer if we could fix the OSD, but if this isn't 
possible, can someone confirm that the right procedure to recover from a 
corrupt pg is:


1) Stop all client access
2) find all files that store data on the bad pg, with:
# cephfs-data-scan pg_files /backup 5.750 2> /dev/null > /root/bad_files
3) delete all of these bad files - presumably using truncate? or is "rm" 
fine?

4) destroy the bad pg
# ceph osd force-create-pg 5.750
5) Copy the missing files back with rsync or similar...
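
In command form, steps 2-4 would be roughly as follows (a sketch only; I'm
assuming the filesystem is mounted at /cephfs on an admin node and that the
pg_files output is usable relative to that mount - both are assumptions):

# 2) list files that touch the bad pg
cephfs-data-scan pg_files /backup 5.750 2> /dev/null > /root/bad_files
# 3) remove them
while read -r f; do rm -f -- "/cephfs/$f"; done < /root/bad_files
# 4) recreate the pg empty (newer releases may ask for --yes-i-really-mean-it)
ceph osd force-create-pg 5.750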

a better "recipe" or other advice gratefully received,

best regards,
Jake




Note: I am working from home until further notice.

For help, contact unixad...@mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How can I recover PGs in state 'unknown', where OSD location seems to be lost?

2020-03-25 Thread Mark S. Holliman
So I've managed to use ceph-objectstore-tool to locate the pgs in 'unknown' 
state on the OSDs, but how do I tell the rest of the system where to find them? 
Is there a command for setting the OSDs associated with a PG? Or, less 
ideally, is there a table somewhere I can hack to do this by hand?
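
For example, would exporting the PG from the OSD where I found it and importing
it into an OSD the cluster is already using do the trick? Something like the
following (just a sketch; the OSD ids, pgid and paths are placeholders, and both
OSDs would be stopped first):

systemctl stop ceph-osd@12 ceph-osd@45
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 1.2a --op export --file /tmp/1.2a.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-45 \
    --op import --file /tmp/1.2a.export
systemctl start ceph-osd@12 ceph-osd@45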


Mark Holliman
Wide Field Astronomy Unit
Institute for Astronomy
University of Edinburgh
The University of Edinburgh is a charitable body, registered in Scotland, with 
registration number SC005336.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Help: corrupt pg

2020-03-25 Thread Eugen Block

Hi,

is there any chance to recover the other failing OSDs that seem to  
have one chunk of this PG? Do the other OSDs fail with the same error?



Zitat von Jake Grimmett :


Dear All,

We are "in a bit of a pickle"...

No reply to my message (23/03/2020),  subject  "OSD: FAILED  
ceph_assert(clone_size.count(clone))"


So I'm presuming it's not possible to recover the crashed OSD

This is bad news, as one pg may be lost, (we are using EC 8+2, pg  
dump shows [NONE,NONE,NONE,388,125,25,427,226,77,154] )


Without this pg we have 1.8PB of broken cephfs.

I could rebuild the cluster from scratch, but this means no user  
backups for a couple of weeks.


The cluster has 10 nodes, uses an EC 8:2 pool for cephfs data  
(replicated NVMe metdata pool) and is running Nautilus 14.2.8


Clearly, it would be nicer if we could fix the OSD, but if this  
isn't possible, can someone confirm that the right procedure to  
recover from a corrupt pg is:


1) Stop all client access
2) find all files that store data on the bad pg, with:
# cephfs-data-scan pg_files /backup 5.750 2> /dev/null > /root/bad_files
3) delete all of these bad files - presumably using truncate? or is  
"rm" fine?

4) destroy the bad pg
# ceph osd  force-create-pg 5.750
5) Copy the missing files back with rsync or similar...

a better "recipe" or other advice gratefully received,

best regards,
Jake




Note: I am working from home until further notice.

For help, contact unixad...@mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: March Ceph Science User Group Virtual Meeting

2020-03-25 Thread Kevin Hrpcek
I made a mistake: 9am US Central is no longer equal to 4pm Central 
European. The actual time is now 10am US Central, so in 20 minutes if 
people are interested. I'll include UTC from now on.


Kevin

On 3/18/20 7:43 AM, Kevin Hrpcek wrote:

Hello,

We will be having a Ceph science/research/big cluster call on 
Wednesday March 25th. If anyone wants to discuss something specific 
they can add it to the pad linked below. If you have questions or 
comments you can contact me.


This is an informal open call of community members mostly from 
hpc/htc/research environments where we discuss whatever is on our 
minds regarding ceph. Updates, outages, features, maintenance, 
etc...there is no set presenter but I do attempt to keep the 
conversation lively.


https://pad.ceph.com/p/Ceph_Science_User_Group_20200325

Due to the worldwide effects of covid-19 I'm thinking it won't hurt to 
try to host this call. If only a few people join the call we can then 
decide to continue or cancel it.


Ceph calendar event details:

March 25, 2020
9am US Central
4pm Central European

We try to keep it to an hour or less.

Description: Main pad for discussions:
https://pad.ceph.com/p/Ceph_Science_User_Group_Index

Meetings will be recorded and posted to the Ceph Youtube channel.
To join the meeting on a computer or mobile phone:
https://bluejeans.com/908675367?src=calendarLink

To join from a Red Hat Deskphone or Softphone, dial: 84336.
Connecting directly from a room system?
    1.) Dial: 199.48.152.152 or bjn.vc
    2.) Enter Meeting ID: 908675367
Just want to dial in on your phone?
    1.) Dial one of the following numbers: 408-915-6466 (US)
    See all numbers: https://www.redhat.com/en/conference-numbers
    2.) Enter Meeting ID: 908675367
    3.) Press #
Want to test your video connection? https://bluejeans.com/111




Kevin



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos


I had something similar. My OSDs were disabled; maybe the nautilus installer 
does that. Check:

systemctl is-enabled ceph-osd@0 

https://tracker.ceph.com/issues/44102
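
A quick way to check (and enable) all the affected ones in one go - just a
sketch using the OSD ids that are down on your host:

for i in 4 15 17 20 21 27 32; do systemctl is-enabled ceph-osd@$i || systemctl enable ceph-osd@$i; done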


 

-Original Message-
From: Ml Ml [mailto:mliebher...@googlemail.com] 
Sent: 25 March 2020 16:05
To: ceph-users
Subject: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
Nautilus

Hello list,

i upgraded to Debian 10, after that i upgraded from luminous to 
nautilus.
I restarted the mons, then the OSDs.

Everything was up and healthy.
After rebooting a node, only 3/10 OSD start up:

-4   20.07686 host ceph03
 4   hdd  2.67020 osd.4 down  1.0 1.0
 5   hdd  1.71660 osd.5   up  1.0 1.0
 6   hdd  1.71660 osd.6   up  1.0 1.0
10   hdd  2.67029 osd.10  up  1.0 1.0
15   hdd  2.0 osd.15down  1.0 1.0
17   hdd  1.2 osd.17down  1.0 1.0
20   hdd  1.71649 osd.20down  1.0 1.0
21   hdd  2.0 osd.21down  1.0 1.0
27   hdd  1.71649 osd.27down  1.0 1.0
32   hdd  2.67020 osd.32down  1.0 1.0

root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser 
ceph --setgroup ceph
2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring 
on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140) no 
keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring 
on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468) no 
keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx 
failed to fetch mon config (--no-mon-config to skip)

root@ceph03:~# df
Filesystem 1K-blocksUsed Available Use% Mounted on
udev24624580   0  24624580   0% /dev
tmpfs49282169544   4918672   1% /run
/dev/sda3   47930248 5209760  40262684  12% /
tmpfs   24641068   0  24641068   0% /dev/shm
tmpfs   5120   0  5120   0% /run/lock
tmpfs   24641068   0  24641068   0% /sys/fs/cgroup
/dev/sda1 944120  144752734192  17% /boot
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-1
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-6
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-5
tmpfs   24641068  24  24641044   1% 
/var/lib/ceph/osd/ceph-10
tmpfs4928212   0   4928212   0% /run/user/0

root@ceph03:~# ceph-volume lvm list


== osd.1 ===

  [block]
/dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-4
7bd-8ddc-102b98444067

  block device
/dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-4
7bd-8ddc-102b98444067
  block uuidHSK6Da-elP2-CFYz-s0RH-UNiw-bey0-dVcml1
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  b4987093-4fa5-47bd-8ddc-102b98444067
  osd id1
  type  block
  vdo   0
  devices   /dev/sdj

== osd.10 ==

  [block]
/dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4
b85-9799-988eee3b2b3f

  block device
/dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4
b85-9799-988eee3b2b3f
  block uuid440fNG-guO2-l1WJ-m5cR-GUkz-ZTUd-Fcz5Ml
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  fa241441-1758-4b85-9799-988eee3b2b3f
  osd id10
  type  block
  vdo   0
  devices   /dev/sdl

== osd.5 ===

  [block]
/dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4
e50-9bb5-775bacd854af

  block device
/dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4
e50-9bb5-775bacd854af
  block uuidZ6VeNx-S9sg-ZOsh-HTw9-ykTc-YBrh-qFwz5i
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  112e0c75-f61b-4e50-9bb5-775bacd854af
  osd id5
  type  block
  vdo   0
  devices   /dev/sdb

== osd.6 ===

  [block]
/dev/ce

[ceph-users] OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
Hello list,

i upgraded to Debian 10, after that i upgraded from luminous to nautilus.
I restarted the mons, then the OSDs.

Everything was up and healthy.
After rebooting a node, only 3/10 OSD start up:

-4   20.07686 host ceph03
 4   hdd  2.67020 osd.4 down  1.0 1.0
 5   hdd  1.71660 osd.5   up  1.0 1.0
 6   hdd  1.71660 osd.6   up  1.0 1.0
10   hdd  2.67029 osd.10  up  1.0 1.0
15   hdd  2.0 osd.15down  1.0 1.0
17   hdd  1.2 osd.17down  1.0 1.0
20   hdd  1.71649 osd.20down  1.0 1.0
21   hdd  2.0 osd.21down  1.0 1.0
27   hdd  1.71649 osd.27down  1.0 1.0
32   hdd  2.67020 osd.32down  1.0 1.0

root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser
ceph --setgroup ceph
2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring
on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140)
no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring
on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468)
no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
failed to fetch mon config (--no-mon-config to skip)

root@ceph03:~# df
Filesystem 1K-blocksUsed Available Use% Mounted on
udev24624580   0  24624580   0% /dev
tmpfs49282169544   4918672   1% /run
/dev/sda3   47930248 5209760  40262684  12% /
tmpfs   24641068   0  24641068   0% /dev/shm
tmpfs   5120   0  5120   0% /run/lock
tmpfs   24641068   0  24641068   0% /sys/fs/cgroup
/dev/sda1 944120  144752734192  17% /boot
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-1
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-6
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-5
tmpfs   24641068  24  24641044   1% /var/lib/ceph/osd/ceph-10
tmpfs4928212   0   4928212   0% /run/user/0

root@ceph03:~# ceph-volume lvm list


== osd.1 ===

  [block]
/dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067

  block device
/dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067
  block uuidHSK6Da-elP2-CFYz-s0RH-UNiw-bey0-dVcml1
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  b4987093-4fa5-47bd-8ddc-102b98444067
  osd id1
  type  block
  vdo   0
  devices   /dev/sdj

== osd.10 ==

  [block]
/dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f

  block device
/dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f
  block uuid440fNG-guO2-l1WJ-m5cR-GUkz-ZTUd-Fcz5Ml
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  fa241441-1758-4b85-9799-988eee3b2b3f
  osd id10
  type  block
  vdo   0
  devices   /dev/sdl

== osd.5 ===

  [block]
/dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af

  block device
/dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af
  block uuidZ6VeNx-S9sg-ZOsh-HTw9-ykTc-YBrh-qFwz5i
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster name  ceph
  crush device classNone
  encrypted 0
  osd fsid  112e0c75-f61b-4e50-9bb5-775bacd854af
  osd id5
  type  block
  vdo   0
  devices   /dev/sdb

== osd.6 ===

  [block]
/dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48

  block device
/dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48
  block uuidG9qHxC-dN0b-XBes-QVss-Bzwa-7Xtw-ikksgM
  cephx lockbox secret
  cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
  cluster na

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
 Try this

chown ceph.ceph /dev/sdc2
chown ceph.ceph /dev/sdd2
chown ceph.ceph /dev/sde2
chown ceph.ceph /dev/sdf2
chown ceph.ceph /dev/sdg2
chown ceph.ceph /dev/sdh2



-Original Message-
From: Ml Ml [mailto:mliebher...@googlemail.com] 
Sent: 25 March 2020 16:22
To: Marc Roos
Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
Nautilus

They where indeed disabled. I enabled them with:

systemctl enable ceph-osd@4
systemctl enable ceph-osd@15
systemctl enable ceph-osd@17
systemctl enable ceph-osd@20
systemctl enable ceph-osd@21
systemctl enable ceph-osd@27
systemctl enable ceph-osd@32

But they still wont start.

On Wed, Mar 25, 2020 at 4:09 PM Marc Roos  
wrote:
>
>
> I had something similar. My osd were disabled, maybe this installer of 

> nautilus does that check
>
> systemctl is-enabled ceph-osd@0
>
> https://tracker.ceph.com/issues/44102
>
>
>
>
> -Original Message-
> From: Ml Ml [mailto:mliebher...@googlemail.com]
> Sent: 25 March 2020 16:05
> To: ceph-users
> Subject: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> Nautilus
>
> Hello list,
>
> i upgraded to Debian 10, after that i upgraded from luminous to 
> nautilus.
> I restarted the mons, then the OSDs.
>
> Everything was up and healthy.
> After rebooting a node, only 3/10 OSD start up:
>
> -4   20.07686 host ceph03
>  4   hdd  2.67020 osd.4 down  1.0 1.0
>  5   hdd  1.71660 osd.5   up  1.0 1.0
>  6   hdd  1.71660 osd.6   up  1.0 1.0
> 10   hdd  2.67029 osd.10  up  1.0 1.0
> 15   hdd  2.0 osd.15down  1.0 1.0
> 17   hdd  1.2 osd.17down  1.0 1.0
> 20   hdd  1.71649 osd.20down  1.0 1.0
> 21   hdd  2.0 osd.21down  1.0 1.0
> 27   hdd  1.71649 osd.27down  1.0 1.0
> 32   hdd  2.67020 osd.32down  1.0 1.0
>
> root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser 
> ceph --setgroup ceph
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring 

> on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140) 
> no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring 

> on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468) 
> no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx 

> failed to fetch mon config (--no-mon-config to skip)
>
> root@ceph03:~# df
> Filesystem 1K-blocksUsed Available Use% Mounted on
> udev24624580   0  24624580   0% /dev
> tmpfs49282169544   4918672   1% /run
> /dev/sda3   47930248 5209760  40262684  12% /
> tmpfs   24641068   0  24641068   0% /dev/shm
> tmpfs   5120   0  5120   0% /run/lock
> tmpfs   24641068   0  24641068   0% /sys/fs/cgroup
> /dev/sda1 944120  144752734192  17% /boot
> tmpfs   24641068  24  24641044   1% 
/var/lib/ceph/osd/ceph-1
> tmpfs   24641068  24  24641044   1% 
/var/lib/ceph/osd/ceph-6
> tmpfs   24641068  24  24641044   1% 
/var/lib/ceph/osd/ceph-5
> tmpfs   24641068  24  24641044   1%
> /var/lib/ceph/osd/ceph-10
> tmpfs4928212   0   4928212   0% /run/user/0
>
> root@ceph03:~# ceph-volume lvm list
>
>
> == osd.1 ===
>
>   [block]
> /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5
> -4
> 7bd-8ddc-102b98444067
>
>   block device
> /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5
> -4
> 7bd-8ddc-102b98444067
>   block uuidHSK6Da-elP2-CFYz-s0RH-UNiw-bey0-dVcml1
>   cephx lockbox secret
>   cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
>   cluster name  ceph
>   crush device classNone
>   encrypted 0
>   osd fsid  b4987093-4fa5-47bd-8ddc-102b98444067
>   osd id1
>   type  block
>   vdo   0
>   devices   /dev/sdj
>
> == osd.10 ==
>
>   [block]
> /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758
> -4
> b85-9799-988eee3b2b3f
>
>   block device
> /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758
> -4
> b85-9799-988eee3b2b3f
>   block uuid440fNG-guO2-l1WJ-m5cR-GUkz-ZTUd-Fcz5Ml
>   cephx lockbox secret
>   cluster fsid  5436dd5d-83d4-4dc8-a93b-60ab5db145df
>   cluster name  ceph
>   crush device classNone
>   encrypted 0
>   osd fsid  fa241441-1758-4b85-9799-988eee3b2b3f
>   osd id 

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
What does the osd error log say? I already have bluestore; if you have 
filestore, maybe you should inspect the mounted fs of /dev/sdd2, e.g. 
maybe the permissions there need to be changed. But first check the errors 
of one osd.

(You did reset the failed service with something like "systemctl 
reset-failed ceph-osd@0", didn't you?)
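
If the disks with the sdX1/sdX2 partitions are old ceph-disk created OSDs, they
may also need to be re-registered on Nautilus, since the old ceph-disk/udev
activation is gone there. A rough sketch (the device name is a placeholder and
this is untested):

ceph-volume simple scan /dev/sdd1      # scan the OSD data partition
ceph-volume simple activate --all      # activate everything that was scanned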


-Original Message-
From: Ml Ml [mailto:mliebher...@googlemail.com] 
Sent: 25 March 2020 16:37
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
Nautilus

Still no luck.

But the working OSDs have no partition:
OSD.1 => /dev/sdj
OSD.5 => /dev/sdb
OSD.6 => /dev/sdbc
OSD.10 => /dev/sdl


Where as the rest has:

root@ceph03:~# ls -l /dev/sd*
brw-rw 1 root disk 8,   0 Mar 25 16:23 /dev/sda
brw-rw 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
brw-rw 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
brw-rw 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
brw-rw 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
brw-rw 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
brw-rw 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
brw-rw 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
brw-rw 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
brw-rw 1 root disk 8,  64 Mar 25 16:23 /dev/sde
brw-rw 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
brw-rw 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
brw-rw 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
brw-rw 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
brw-rw 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
brw-rw 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
brw-rw 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
brw-rw 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
brw-rw 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
brw-rw 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
brw-rw 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
brw-rw 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
brw-rw 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
brw-rw 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
brw-rw 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
brw-rw 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
brw-rw 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
brw-rw 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
brw-rw 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
brw-rw 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
brw-rw 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
brw-rw 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2

Did i miss to convert to bluestore or something?


On Wed, Mar 25, 2020 at 4:23 PM Marc Roos  
wrote:
>
>  Try this
>
> chown ceph.ceph /dev/sdc2
> chown ceph.ceph /dev/sdd2
> chown ceph.ceph /dev/sde2
> chown ceph.ceph /dev/sdf2
> chown ceph.ceph /dev/sdg2
> chown ceph.ceph /dev/sdh2
>
>
>
> -Original Message-
> From: Ml Ml [mailto:mliebher...@googlemail.com]
> Sent: 25 March 2020 16:22
> To: Marc Roos
> Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> Nautilus
>
> They where indeed disabled. I enabled them with:
>
> systemctl enable ceph-osd@4
> systemctl enable ceph-osd@15
> systemctl enable ceph-osd@17
> systemctl enable ceph-osd@20
> systemctl enable ceph-osd@21
> systemctl enable ceph-osd@27
> systemctl enable ceph-osd@32
>
> But they still wont start.
>
> On Wed, Mar 25, 2020 at 4:09 PM Marc Roos 
> wrote:
> >
> >
> > I had something similar. My osd were disabled, maybe this installer 
> > of
>
> > nautilus does that check
> >
> > systemctl is-enabled ceph-osd@0
> >
> > https://tracker.ceph.com/issues/44102
> >
> >
> >
> >
> > -Original Message-
> > From: Ml Ml [mailto:mliebher...@googlemail.com]
> > Sent: 25 March 2020 16:05
> > To: ceph-users
> > Subject: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> > Nautilus
> >
> > Hello list,
> >
> > i upgraded to Debian 10, after that i upgraded from luminous to 
> > nautilus.
> > I restarted the mons, then the OSDs.
> >
> > Everything was up and healthy.
> > After rebooting a node, only 3/10 OSD start up:
> >
> > -4   20.07686 host ceph03
> >  4   hdd  2.67020 osd.4 down  1.0 1.0
> >  5   hdd  1.71660 osd.5   up  1.0 1.0
> >  6   hdd  1.71660 osd.6   up  1.0 1.0
> > 10   hdd  2.67029 osd.10  up  1.0 1.0
> > 15   hdd  2.0 osd.15down  1.0 1.0
> > 17   hdd  1.2 osd.17down  1.0 1.0
> > 20   hdd  1.71649 osd.20down  1.0 1.0
> > 21   hdd  2.0 osd.21down  1.0 1.0
> > 27   hdd  1.71649 osd.27down  1.0 1.0
> > 32   hdd  2.67020 osd.32down  1.0 1.0
> >
> > root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser 

> > ceph --setgroup ceph
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a 
> > keyring
>
> > on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140) 

> > no keyring found at /var/lib/ceph/osd/ceph-32/keyring, 

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
Still no luck.

But the working OSDs have no partition:
OSD.1 => /dev/sdj
OSD.5 => /dev/sdb
OSD.6 => /dev/sdbc
OSD.10 => /dev/sdl


Whereas the rest have:

root@ceph03:~# ls -l /dev/sd*
brw-rw 1 root disk 8,   0 Mar 25 16:23 /dev/sda
brw-rw 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
brw-rw 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
brw-rw 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
brw-rw 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
brw-rw 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
brw-rw 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
brw-rw 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
brw-rw 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
brw-rw 1 root disk 8,  64 Mar 25 16:23 /dev/sde
brw-rw 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
brw-rw 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
brw-rw 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
brw-rw 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
brw-rw 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
brw-rw 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
brw-rw 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
brw-rw 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
brw-rw 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
brw-rw 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
brw-rw 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
brw-rw 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
brw-rw 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
brw-rw 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
brw-rw 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
brw-rw 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
brw-rw 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
brw-rw 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
brw-rw 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
brw-rw 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
brw-rw 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
brw-rw 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2

Did I miss converting to bluestore or something?


On Wed, Mar 25, 2020 at 4:23 PM Marc Roos  wrote:
>
>  Try this
>
> chown ceph.ceph /dev/sdc2
> chown ceph.ceph /dev/sdd2
> chown ceph.ceph /dev/sde2
> chown ceph.ceph /dev/sdf2
> chown ceph.ceph /dev/sdg2
> chown ceph.ceph /dev/sdh2
>
>
>
> -Original Message-
> From: Ml Ml [mailto:mliebher...@googlemail.com]
> Sent: 25 March 2020 16:22
> To: Marc Roos
> Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with
> Nautilus
>
> They where indeed disabled. I enabled them with:
>
> systemctl enable ceph-osd@4
> systemctl enable ceph-osd@15
> systemctl enable ceph-osd@17
> systemctl enable ceph-osd@20
> systemctl enable ceph-osd@21
> systemctl enable ceph-osd@27
> systemctl enable ceph-osd@32
>
> But they still wont start.
>
> On Wed, Mar 25, 2020 at 4:09 PM Marc Roos 
> wrote:
> >
> >
> > I had something similar. My osd were disabled, maybe this installer of
>
> > nautilus does that check
> >
> > systemctl is-enabled ceph-osd@0
> >
> > https://tracker.ceph.com/issues/44102
> >
> >
> >
> >
> > -Original Message-
> > From: Ml Ml [mailto:mliebher...@googlemail.com]
> > Sent: 25 March 2020 16:05
> > To: ceph-users
> > Subject: [ceph-users] OSDs wont mount on Debian 10 (Buster) with
> > Nautilus
> >
> > Hello list,
> >
> > i upgraded to Debian 10, after that i upgraded from luminous to
> > nautilus.
> > I restarted the mons, then the OSDs.
> >
> > Everything was up and healthy.
> > After rebooting a node, only 3/10 OSD start up:
> >
> > -4   20.07686 host ceph03
> >  4   hdd  2.67020 osd.4 down  1.0 1.0
> >  5   hdd  1.71660 osd.5   up  1.0 1.0
> >  6   hdd  1.71660 osd.6   up  1.0 1.0
> > 10   hdd  2.67029 osd.10  up  1.0 1.0
> > 15   hdd  2.0 osd.15down  1.0 1.0
> > 17   hdd  1.2 osd.17down  1.0 1.0
> > 20   hdd  1.71649 osd.20down  1.0 1.0
> > 21   hdd  2.0 osd.21down  1.0 1.0
> > 27   hdd  1.71649 osd.27down  1.0 1.0
> > 32   hdd  2.67020 osd.32down  1.0 1.0
> >
> > root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser
> > ceph --setgroup ceph
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring
>
> > on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140)
> > no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring
>
> > on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468)
> > no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
>
> > failed to fetch mon config (--no-mon-config to skip)
> >
> > root@ceph03:~# df
> > Filesystem 1K-blocksUsed Available Use% Mounted on
> > udev24624580   0  24624580   0% /dev
> > tmpfs49282169544   49186

[ceph-users] Re: Help: corrupt pg

2020-03-25 Thread Jake Grimmett

Hi Eugen,

Many thanks for your reply.

The other two OSDs are up and running and are being used by other pgs with 
no problem; for some reason this pg refuses to use them.


The other two OSDs that are missing from this pg crashed at different 
times last month, each OSD crashed when we tried to fix a pg with 
recovery_unfound by running a command like:


# ceph pg  5.3fa mark_unfound_lost delete

the osd crash is shown in the osd log file here 



"mark_unfound_lost delete" occurs at line 3708

This caused the primary osd to crash with: PrimaryLogPG.cc: 11550: 
FAILED ceph_assert(head_obc)


when the osd tries to restart, we see lots of log entries similar to:

    -3> 2020-02-10 12:25:58.795 7f5935dfe700  1 get compressor lz4 = 
0x55cd193d34a0


and...

-1274> 2020-02-10 12:23:24.661 7f5936e00700  5 
bluestore(/var/lib/ceph/osd/ceph-443) _do_alloc_write  0x2 bytes 
compressed using lz4 failed with errcode = -1, leaving uncompressed


the osd then repeatedly crashes with "PrimaryLogPG.cc: 11550: FAILED 
ceph_assert(head_obc)" but with no more "compressor lz4" entries


the only fix we found was to destroy & recreate the osd, and then allow 
ceph to recover.


We thought that we could fix the small number of recovery unfound pgs by 
allowing their primary OSD to crash, and then recreating it.


Unfortunately, while I was waiting for the pg to heal, we seem to have 
got caught by another bug, as another osd in this pg got hit with


"OSD: FAILED ceph_assert(clone_size.count(clone))". this log is here:



the full "ceph pg dump" for this failed pg is

[root@ceph1 ~]# ceph pg dump | grep ^5.750
dumped all
5.750    190408  0    0 0   0 
569643615603   0  0 3090 
3090 down+remapped 2020-03-25 
11:17:47.228805  35398'3381328  35968:3266057 
[234,354,304,388,125,25,427,226,77,154]    234 
[NONE,NONE,NONE,388,125,25,427,226,77,154]    388 24471'3200829 
2020-01-28 15:48:35.574934   24471'3200829 2020-01-28 15:48:35.574934


I did notice this other LZ4 corruption bug: 
https://tracker.ceph.com/issues/39525 - not sure if there is any relation.


best regards,

Jake


On 25/03/2020 14:22, Eugen Block wrote:

Hi,

is there any chance to recover the other failing OSDs that seem to 
have one chunk of this PG? Do the other OSDs fail with the same error?



Zitat von Jake Grimmett :


Dear All,

We are "in a bit of a pickle"...

No reply to my message (23/03/2020),  subject  "OSD: FAILED 
ceph_assert(clone_size.count(clone))"


So I'm presuming it's not possible to recover the crashed OSD

This is bad news, as one pg may be lost, (we are using EC 8+2, pg 
dump shows [NONE,NONE,NONE,388,125,25,427,226,77,154] )


Without this pg we have 1.8PB of broken cephfs.

I could rebuild the cluster from scratch, but this means no user 
backups for a couple of weeks.


The cluster has 10 nodes, uses an EC 8:2 pool for cephfs data 
(replicated NVMe metdata pool) and is running Nautilus 14.2.8


Clearly, it would be nicer if we could fix the OSD, but if this isn't 
possible, can someone confirm that the right procedure to recover 
from a corrupt pg is:


1) Stop all client access
2) find all files that store data on the bad pg, with:
# cephfs-data-scan pg_files /backup 5.750 2> /dev/null > /root/bad_files
3) delete all of these bad files - presumably using truncate? or is 
"rm" fine?

4) destroy the bad pg
# ceph osd  force-create-pg 5.750
5) Copy the missing files back with rsync or similar...

a better "recipe" or other advice gratefully received,

best regards,
Jake




Note: I am working from home until further notice.

For help, contact unixad...@mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


Note: I am working from home until further notice.
For help, contact unixad...@mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Bryan Stillwell
On Mar 24, 2020, at 5:38 AM, Abhishek Lekshmanan  wrote:
> #. Upgrade monitors by installing the new packages and restarting the
>   monitor daemons.  For example, on each monitor host,::
> 
> # systemctl restart ceph-mon.target
> 
>   Once all monitors are up, verify that the monitor upgrade is
>   complete by looking for the `octopus` string in the mon
>   map.  The command::
> 
> # ceph mon dump | grep min_mon_release
> 
>   should report::
> 
> min_mon_release 15 (nautilus)

I believe this should say:

  min_mon_release 15 (octopus)

Bryan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos


Still down? 


-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with 
Nautilus

What does the osd error log say? I already have bluestore, if you have 
file store maybe you should inspect the mounted fs of /dev/sdd2 eg. 
maybe there permissions need to be changed. But first check the errors 
of one osd.

( You did you reset the failed service with somethig like this systemctl 
reset-failed ceph-osd@0 not? )


-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
Nautilus

Still no luck.

But the working OSDs have no partition:
OSD.1 => /dev/sdj
OSD.5 => /dev/sdb
OSD.6 => /dev/sdbc
OSD.10 => /dev/sdl


Where as the rest has:

root@ceph03:~# ls -l /dev/sd*
brw-rw 1 root disk 8,   0 Mar 25 16:23 /dev/sda
brw-rw 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
brw-rw 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
brw-rw 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
brw-rw 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
brw-rw 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
brw-rw 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
brw-rw 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
brw-rw 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
brw-rw 1 root disk 8,  64 Mar 25 16:23 /dev/sde
brw-rw 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
brw-rw 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
brw-rw 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
brw-rw 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
brw-rw 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
brw-rw 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
brw-rw 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
brw-rw 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
brw-rw 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
brw-rw 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
brw-rw 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
brw-rw 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
brw-rw 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
brw-rw 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
brw-rw 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
brw-rw 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
brw-rw 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
brw-rw 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
brw-rw 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
brw-rw 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
brw-rw 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
brw-rw 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2

Did i miss to convert to bluestore or something?


On Wed, Mar 25, 2020 at 4:23 PM Marc Roos  
wrote:
>
>  Try this
>
> chown ceph.ceph /dev/sdc2
> chown ceph.ceph /dev/sdd2
> chown ceph.ceph /dev/sde2
> chown ceph.ceph /dev/sdf2
> chown ceph.ceph /dev/sdg2
> chown ceph.ceph /dev/sdh2
>
>
>
> -Original Message-
> From: Ml Ml [mailto:mliebher...@googlemail.com]
> Sent: 25 March 2020 16:22
> To: Marc Roos
> Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> Nautilus
>
> They where indeed disabled. I enabled them with:
>
> systemctl enable ceph-osd@4
> systemctl enable ceph-osd@15
> systemctl enable ceph-osd@17
> systemctl enable ceph-osd@20
> systemctl enable ceph-osd@21
> systemctl enable ceph-osd@27
> systemctl enable ceph-osd@32
>
> But they still wont start.
>
> On Wed, Mar 25, 2020 at 4:09 PM Marc Roos 
> wrote:
> >
> >
> > I had something similar. My osd were disabled, maybe this installer 
> > of
>
> > nautilus does that check
> >
> > systemctl is-enabled ceph-osd@0
> >
> > https://tracker.ceph.com/issues/44102
> >
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Jeff Layton
On Wed, 2020-03-25 at 12:14 +, Mikael Öhman wrote:
> Hi all,
> 
> Using sendfile function to write data to cephfs, the data doesn't end up 
> being written.
> From the client that writes the file, it looks correct at first, but from all 
> other ceph clients, the size is 0 bytes. Re-mounting the filesystem, the data 
> is lost.
> I didn't see any errors, the data just doesn't get written, as if it's just 
> cached in cephfs client.
> Writing just an extra byte at the end of the file (without sendfile), it 
> seems to trigger the actual write of all the data.
> 
> Could someone else confirm if they are also seeing such issue? I'm on ceph 
> 13.2.8, using kernel module for mounting on CentOS7.
> 
> I've used this sendfile-example for the example below:
> https://github.com/pijewski/sendfile-example/blob/master/sendfile.c
> 
> Using a small 27 byte source file.
> # ls -lh examples/
> -rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src
> # ./sendfile examples/src examples/dst 27
> # ls -lh examples/
> --x--- 1 root c3-staff 27 Mar 24 18:12 dst
> -rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src
> 
> But, directory is still on 27 bytes:
> # ls -lhd examples
> drwxr-sr-x 1 root c3-staff 27 Mar 24 18:15 examples
> 
> and on all other cephfs clients, the file is empty:
> # ls -lh examples/
> --x--- 1 root c3-staff  0 Mar 24 18:12 dst
> -rw-r--r-- 1 root c3-staff 27 Mar 24 18:04 src
> 
> Is this a bug in cephfs, or should I not expect sendfile to work (as it is 
> not posix compliant). There are no error reported from what i can see, and it 
> is 100% reproducible 

(sorry for the resend, the original got caught up in moderation as I
sent it from wrong address)

This sounds like a kernel client bug in an old Centos7 kernel. The
program seems to work as expected on current mainline kernels, and on
3.10.0-1062.1.2.el7.x86_64 (the latest one I had on my client).

What kernel version are you using on the client? If you're not on the
latest version Centos7 kernel version, then it'd be good to try that and
see if it's still reproducible. For the record:

[jlayton@centos7 ~]$ sudo umount -f /mnt/cephfs ; sudo mount /mnt/cephfs ; 
./sendfile ./testfile /mnt/cephfs/testfile 27 ; sudo umount /mnt/cephfs ; sudo 
mount /mnt/cephfs ; ls -l /mnt/cephfs
Sent 0 KiB over sendfile(3EXT) of 0 KiB requested 
total 1
drwxr-xr-x 1 rootroot 1 Mar 25 08:53 foo
drwxrwxrwx 1 rootroot 0 Mar 25 08:37 scratch
drwxr-xr-x 1 rootroot57 Mar 25 08:44 test
-rw-r--r-- 1 jlayton jlayton 27 Mar 25 12:35 testfile
[jlayton@centos7 ~]$ uname -r
3.10.0-1062.1.2.el7.x86_64

Attached is the cleaned-up version of sendfile.c that I was using.
-- 
Jeff Layton 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Tecnologia Charne.Net



Yes, I was going to suggest the same on this page:

https://docs.ceph.com/docs/master/releases/octopus/


-Javier

El 25/3/20 a las 14:20, Bryan Stillwell escribió:

On Mar 24, 2020, at 5:38 AM, Abhishek Lekshmanan  wrote:

#. Upgrade monitors by installing the new packages and restarting the
   monitor daemons.  For example, on each monitor host,::

 # systemctl restart ceph-mon.target

   Once all monitors are up, verify that the monitor upgrade is
   complete by looking for the `octopus` string in the mon
   map.  The command::

 # ceph mon dump | grep min_mon_release

   should report::

 min_mon_release 15 (nautilus)

I believe this should say:

   min_mon_release 15 (octopus)

Bryan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Luminous upgrade question

2020-03-25 Thread Shain Miley
Hi,
We are thinking about upgrading our cluster currently running ceph version 
12.2.12.  I am wondering if we should be looking at upgrading to the latest 
version of Mimic or the latest version Nautilus.

Can anyone here please provide a suggestion…I continue to be a little bit 
confused about the correct Ceph upgrade path given all the various versions 
that are out there currently.

Thank you in advance.

Shain
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Luminous upgrade question

2020-03-25 Thread cassiano
I've upgraded from luminous to nautilus a few days ago and had only one 
issue: slow ops on the monitors, which I struggled to trace back to a 
malfunctioning client.


This issue was causing the monitors to keep crashing constantly.

When I figured out that the client was actually sending the requests 
causing the slow ops and halted that client, the cluster started 
behaving OK. The client was a KVM virtual machine using an RBD image as 
its virtual hard disk.
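
In case it helps anyone chasing the same symptom, a rough way to see which
client the slow ops belong to (assuming you can reach the mon admin socket;
the mon id below is a placeholder):

ceph health detail | grep -i "slow ops"
ceph daemon mon.ceph1 ops     # in-flight ops include the originating client address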




On 2020-03-25 15:28, Shain Miley wrote:

Hi,
We are thinking about upgrading our cluster currently running ceph
version 12.2.12.  I am wondering if we should be looking at upgrading
to the latest version of Mimic or the latest version Nautilus.

Can anyone here please provide a suggestion…I continue to be a little
bit confused about the correct Ceph upgrade path given all the various
versions that are out there currently.

Thank you in advance.

Shain
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Luminous upgrade question

2020-03-25 Thread Marc Roos


I upgraded from Luminous to Nautilus without any problems. Maybe check 
whether you are currently using cephfs snapshots; I think those are disabled 
by default. Can't remember.
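
If they do turn out to be off after the upgrade, re-enabling them is (as far
as I recall) a one-liner; "cephfs" below is a placeholder for the filesystem
name:

ceph fs set cephfs allow_new_snaps true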




-Original Message-
Sent: 25 March 2020 19:29
To: ceph-users@ceph.io
Subject: [ceph-users] Luminous upgrade question

Hi,
We are thinking about upgrading our cluster currently running ceph 
version 12.2.12.  I am wondering if we should be looking at upgrading to 
the latest version of Mimic or the latest version Nautilus.

Can anyone here please provide a suggestion... I continue to be a little 
bit confused about the correct Ceph upgrade path given all the various 
versions that are out there currently.

Thank you in advance.

Shain
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos


You have to be careful with upgrading like this; sometimes upgrades 
between versions require a scrub of all OSDs. Good luck! :)
 

-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with 
Nautilus

i brought them up manually. Decided to upgrade to octopus, but i am 
stuck there now.
I will open  a new thread for it.

On Wed, Mar 25, 2020 at 6:23 PM Marc Roos  
wrote:
>
>
> Still down?
>
>
> -Original Message-
> Cc: ceph-users
> Subject: [ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with 
> Nautilus
>
> What does the osd error log say? I already have bluestore, if you have 

> file store maybe you should inspect the mounted fs of /dev/sdd2 eg.
> maybe there permissions need to be changed. But first check the errors 

> of one osd.
>
> ( You did you reset the failed service with somethig like this 
> systemctl reset-failed ceph-osd@0 not? )
>
>
> -Original Message-
> Cc: ceph-users
> Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> Nautilus
>
> Still no luck.
>
> But the working OSDs have no partition:
> OSD.1 => /dev/sdj
> OSD.5 => /dev/sdb
> OSD.6 => /dev/sdbc
> OSD.10 => /dev/sdl
>
>
> Where as the rest has:
>
> root@ceph03:~# ls -l /dev/sd*
> brw-rw 1 root disk 8,   0 Mar 25 16:23 /dev/sda
> brw-rw 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
> brw-rw 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
> brw-rw 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
> brw-rw 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
> brw-rw 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
> brw-rw 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
> brw-rw 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
> brw-rw 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
> brw-rw 1 root disk 8,  64 Mar 25 16:23 /dev/sde
> brw-rw 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
> brw-rw 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
> brw-rw 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
> brw-rw 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
> brw-rw 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
> brw-rw 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
> brw-rw 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
> brw-rw 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
> brw-rw 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
> brw-rw 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
> brw-rw 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
> brw-rw 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
> brw-rw 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
> brw-rw 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
> brw-rw 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
> brw-rw 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
> brw-rw 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
> brw-rw 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
> brw-rw 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
> brw-rw 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
> brw-rw 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
> brw-rw 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2
>
> Did I miss converting to BlueStore or something?
>
>
> On Wed, Mar 25, 2020 at 4:23 PM Marc Roos 
> wrote:
> >
> >  Try this
> >
> > chown ceph.ceph /dev/sdc2
> > chown ceph.ceph /dev/sdd2
> > chown ceph.ceph /dev/sde2
> > chown ceph.ceph /dev/sdf2
> > chown ceph.ceph /dev/sdg2
> > chown ceph.ceph /dev/sdh2
> >
> >
> >
> > -Original Message-
> > From: Ml Ml [mailto:mliebher...@googlemail.com]
> > Sent: 25 March 2020 16:22
> > To: Marc Roos
> > Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with 
> > Nautilus
> >
> > They were indeed disabled. I enabled them with:
> >
> > systemctl enable ceph-osd@4
> > systemctl enable ceph-osd@15
> > systemctl enable ceph-osd@17
> > systemctl enable ceph-osd@20
> > systemctl enable ceph-osd@21
> > systemctl enable ceph-osd@27
> > systemctl enable ceph-osd@32
> >
> > But they still won't start.
> >
> > On Wed, Mar 25, 2020 at 4:09 PM Marc Roos 
> > wrote:
> > >
> > >
> > > I had something similar. My OSDs were disabled; maybe this installer of 
> > > Nautilus does that check:
> > >
> > > systemctl is-enabled ceph-osd@0
> > >
> > > https://tracker.ceph.com/issues/44102
> > >
>
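
Regarding the question quoted above about converting to BlueStore: a
hedged way to check what each OSD is actually running, with osd.17 just an
example id, is to ask the cluster for the OSD metadata:

ceph osd metadata 17 | grep -E 'osd_objectstore|ceph_version'
# or summarise the object store type across the whole cluster
ceph osd count-metadata osd_objectstore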

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
I brought them up manually. I decided to upgrade to Octopus, but I am
stuck there now.
I will open a new thread for it.

On Wed, Mar 25, 2020 at 6:23 PM Marc Roos  wrote:
>
>
> Still down?
>
>
> -Original Message-
> Cc: ceph-users
> Subject: [ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with
> Nautilus
>
> What does the OSD error log say? I already have BlueStore; if you have
> FileStore, maybe you should inspect the mounted fs of /dev/sdd2, e.g.
> maybe the permissions there need to be changed. But first check the errors
> of one OSD.
>
> ( You did reset the failed service with something like this: systemctl
> reset-failed ceph-osd@0, didn't you? )
>
>
> -Original Message-
> Cc: ceph-users
> Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with
> Nautilus
>
> Still no luck.
>
> But the working OSDs have no partition:
> OSD.1 => /dev/sdj
> OSD.5 => /dev/sdb
> OSD.6 => /dev/sdbc
> OSD.10 => /dev/sdl
>
>
> Whereas the rest have:
>
> root@ceph03:~# ls -l /dev/sd*
> brw-rw 1 root disk 8,   0 Mar 25 16:23 /dev/sda
> brw-rw 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
> brw-rw 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
> brw-rw 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
> brw-rw 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
> brw-rw 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
> brw-rw 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
> brw-rw 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
> brw-rw 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
> brw-rw 1 root disk 8,  64 Mar 25 16:23 /dev/sde
> brw-rw 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
> brw-rw 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
> brw-rw 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
> brw-rw 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
> brw-rw 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
> brw-rw 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
> brw-rw 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
> brw-rw 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
> brw-rw 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
> brw-rw 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
> brw-rw 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
> brw-rw 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
> brw-rw 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
> brw-rw 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
> brw-rw 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
> brw-rw 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
> brw-rw 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
> brw-rw 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
> brw-rw 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
> brw-rw 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
> brw-rw 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
> brw-rw 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2
>
> Did I miss converting to BlueStore or something?
>
>
> On Wed, Mar 25, 2020 at 4:23 PM Marc Roos 
> wrote:
> >
> >  Try this
> >
> > chown ceph.ceph /dev/sdc2
> > chown ceph.ceph /dev/sdd2
> > chown ceph.ceph /dev/sde2
> > chown ceph.ceph /dev/sdf2
> > chown ceph.ceph /dev/sdg2
> > chown ceph.ceph /dev/sdh2
> >
> >
> >
> > -Original Message-
> > From: Ml Ml [mailto:mliebher...@googlemail.com]
> > Sent: 25 March 2020 16:22
> > To: Marc Roos
> > Subject: Re: [ceph-users] OSDs wont mount on Debian 10 (Buster) with
> > Nautilus
> >
> > They were indeed disabled. I enabled them with:
> >
> > systemctl enable ceph-osd@4
> > systemctl enable ceph-osd@15
> > systemctl enable ceph-osd@17
> > systemctl enable ceph-osd@20
> > systemctl enable ceph-osd@21
> > systemctl enable ceph-osd@27
> > systemctl enable ceph-osd@32
> >
> > But they still won't start.
> >
> > On Wed, Mar 25, 2020 at 4:09 PM Marc Roos 
> > wrote:
> > >
> > >
> > > I had something similar. My OSDs were disabled; maybe this installer of
> > > Nautilus does that check:
> > >
> > > systemctl is-enabled ceph-osd@0
> > >
> > > https://tracker.ceph.com/issues/44102
> > >
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] octopus upgrade stuck: Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.

2020-03-25 Thread Ml Ml
Hello List,

i followed:
 https://ceph.io/releases/v15-2-0-octopus-released/

I came from a healthy Nautilus and I am stuck at:
  5.) Upgrade all OSDs by installing the new packages and restarting
the ceph-osd daemons on all OSD hosts

When I try to start an OSD like this, I get:
  /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
...
2020-03-25T20:11:03.292+0100 7f2762874e00 -1 osd.32 57223
log_to_monitors {default=true}
ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
`map->require_osd_release >= ceph_release_t::mimic' failed.
ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
`map->require_osd_release >= ceph_release_t::mimic' failed.
*** Caught signal (Aborted) **
 in thread 7f274854f700 thread_name:tp_osd_tp
Aborted



My current status:

root@ceph03:~# ceph osd tree
ID  CLASS  WEIGHTTYPE NAMESTATUS  REWEIGHT  PRI-AFF
-1 60.70999  root default
-2 20.25140  host ceph01
 0hdd   1.71089  osd.0up   1.0  1.0
 8hdd   2.67029  osd.8up   1.0  1.0
11hdd   1.5  osd.11   up   1.0  1.0
12hdd   1.5  osd.12   up   1.0  1.0
14hdd   2.7  osd.14   up   1.0  1.0
18hdd   1.5  osd.18   up   1.0  1.0
22hdd   2.7  osd.22   up   1.0  1.0
23hdd   2.7  osd.23   up   1.0  1.0
26hdd   2.67029  osd.26   up   1.0  1.0
-3 23.05193  host ceph02
 2hdd   2.67029  osd.2up   1.0  1.0
 3hdd   2.0  osd.3up   1.0  1.0
 7hdd   2.67029  osd.7up   1.0  1.0
 9hdd   2.67029  osd.9up   1.0  1.0
13hdd   2.0  osd.13   up   1.0  1.0
16hdd   1.5  osd.16   up   1.0  1.0
19hdd   2.38409  osd.19   up   1.0  1.0
24hdd   2.67020  osd.24   up   1.0  1.0
25hdd   1.71649  osd.25   up   1.0  1.0
28hdd   2.67029  osd.28   up   1.0  1.0
-4 17.40666  host ceph03
 5hdd   1.71660  osd.5  down   1.0  1.0
 6hdd   1.71660  osd.6  down   1.0  1.0
10hdd   2.67029  osd.10 down   1.0  1.0
15hdd   2.0  osd.15 down   1.0  1.0
17hdd   1.2  osd.17 down   1.0  1.0
20hdd   1.71649  osd.20 down   1.0  1.0
21hdd   2.0  osd.21 down   1.0  1.0
27hdd   1.71649  osd.27 down   1.0  1.0
32hdd   2.67020  osd.32 down   1.0  1.0

root@ceph03:~#  ceph osd dump | grep require_osd_release
require_osd_release nautilus

root@ceph03:~# ceph osd versions
{
"ceph version 14.2.8 (88c3b82e8bc76d3444c2d84a30c4a380d6169d46)
nautilus (stable)": 19
}

root@ceph03:~# ceph mon dump | grep min_mon_release
dumped monmap epoch 12
min_mon_release 15 (octopus)


ceph versions
{
"mon": {
"ceph version 15.2.0
(dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
},
"mgr": {
"ceph version 15.2.0
(dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
},
"osd": {
"ceph version 14.2.8
(88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19
},
"mds": {
"ceph version 15.2.0
(dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
},
"overall": {
"ceph version 14.2.8
(88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19,
"ceph version 15.2.0
(dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 9
}
}


Why does it complain about map->require_osd_release >= ceph_release_t::mimic?
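
For reference, a hedged sketch of the surrounding checks from the upgrade
notes linked above; whether any of this explains the assertion is an
assumption, not something confirmed here:

ceph osd dump | grep require_osd_release
ceph osd versions
ceph mon versions
# only once every OSD actually runs 15.2.0 do the notes say to finish with
ceph osd require-osd-release octopus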

Cheers,
Michael
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: octopus upgrade stuck: Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.

2020-03-25 Thread Ml Ml
In the logs it says:

2020-03-25T22:10:00.823+0100 7f0bd5320e00  0 
/build/ceph-15.2.0/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-25T22:10:00.823+0100 7f0bd5320e00  0 osd.32 57223 crush map
has features 288232576282525696, adjusting msgr requires for clients
2020-03-25T22:10:00.823+0100 7f0bd5320e00  0 osd.32 57223 crush map
has features 288232576282525696 was 8705, adjusting msgr requires for
mons
2020-03-25T22:10:00.823+0100 7f0bd5320e00  0 osd.32 57223 crush map
has features 1008808516661821440, adjusting msgr requires for osds
2020-03-25T22:10:00.823+0100 7f0bd5320e00  1 osd.32 57223
check_osdmap_features require_osd_release unknown -> luminous
2020-03-25T22:10:04.695+0100 7f0bd5320e00  0 osd.32 57223 load_pgs
2020-03-25T22:10:10.907+0100 7f0bcc01d700  4 rocksdb:
[db/compaction_job.cc:1332] [default] [JOB 3] Generated table #59886:
2107241 keys, 72886355 bytes
2020-03-25T22:10:10.907+0100 7f0bcc01d700  4 rocksdb: EVENT_LOG_v1
{"time_micros": 1585170610911598, "cf_name": "default", "job": 3,
"event": "table_file_creation", "file_number": 59886, "file_size":
72886355, "table_properties": {"data_size": 67112666, "index_size":
504659, "filter_size": 5268165, "raw_key_size": 38673953,
"raw_average_key_size": 18, "raw_value_size": 35746098,
"raw_average_value_size": 16, "num_data_blocks": 16488, "num_entries":
2107241, "filter_policy_name": "rocksdb.BuiltinBloomFilter"}}
2020-03-25T22:10:13.047+0100 7f0bd5320e00  0 osd.32 57223 load_pgs
opened 230 pgs
2020-03-25T22:10:13.047+0100 7f0bd5320e00 -1 osd.32 57223
log_to_monitors {default=true}
2020-03-25T22:10:13.107+0100 7f0bd5320e00  0 osd.32 57223 done with
init, starting boot process
2020-03-25T22:10:13.107+0100 7f0bd5320e00  1 osd.32 57223 start_boot


Does the line:
  check_osdmap_features require_osd_release unknown -> luminous
mean it thinks the local OSD itself is Luminous?
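
A hedged sketch of how one could inspect the osdmap stored on the stopped
OSD itself, to see which release flag it actually carries; the data path
and the osd id are assumptions based on the output above:

# with osd.32 stopped
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-32 --op get-osdmap --file /tmp/osdmap.32
osdmaptool --print /tmp/osdmap.32 | grep require_osd_release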

On Wed, Mar 25, 2020 at 8:12 PM Ml Ml  wrote:
>
> Hello List,
>
> i followed:
>  https://ceph.io/releases/v15-2-0-octopus-released/
>
> I came from a healthy Nautilus and I am stuck at:
>   5.) Upgrade all OSDs by installing the new packages and restarting
> the ceph-osd daemons on all OSD hosts
>
> When I try to start an OSD like this, I get:
>   /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
> ...
> 2020-03-25T20:11:03.292+0100 7f2762874e00 -1 osd.32 57223
> log_to_monitors {default=true}
> ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
> PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
> `map->require_osd_release >= ceph_release_t::mimic' failed.
> ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
> PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
> `map->require_osd_release >= ceph_release_t::mimic' failed.
> *** Caught signal (Aborted) **
>  in thread 7f274854f700 thread_name:tp_osd_tp
> Aborted
>
>
>
> My current status:
>
> root@ceph03:~# ceph osd tree
> ID  CLASS  WEIGHTTYPE NAMESTATUS  REWEIGHT  PRI-AFF
> -1 60.70999  root default
> -2 20.25140  host ceph01
>  0hdd   1.71089  osd.0up   1.0  1.0
>  8hdd   2.67029  osd.8up   1.0  1.0
> 11hdd   1.5  osd.11   up   1.0  1.0
> 12hdd   1.5  osd.12   up   1.0  1.0
> 14hdd   2.7  osd.14   up   1.0  1.0
> 18hdd   1.5  osd.18   up   1.0  1.0
> 22hdd   2.7  osd.22   up   1.0  1.0
> 23hdd   2.7  osd.23   up   1.0  1.0
> 26hdd   2.67029  osd.26   up   1.0  1.0
> -3 23.05193  host ceph02
>  2hdd   2.67029  osd.2up   1.0  1.0
>  3hdd   2.0  osd.3up   1.0  1.0
>  7hdd   2.67029  osd.7up   1.0  1.0
>  9hdd   2.67029  osd.9up   1.0  1.0
> 13hdd   2.0  osd.13   up   1.0  1.0
> 16hdd   1.5  osd.16   up   1.0  1.0
> 19hdd   2.38409  osd.19   up   1.0  1.0
> 24hdd   2.67020  osd.24   up   1.0  1.0
> 25hdd   1.71649  osd.25   up   1.0  1.0
> 28hdd   2.67029  osd.28   up   1.0  1.0
> -4 17.40666  host ceph03
>  5hdd   1.71660  osd.5  down   1.0  1.0
>  6hdd   1.71660  osd.6  down   1.0  1.0
> 10hdd   2.67029  osd.10 down   1.0  1.0
> 15hdd   2.0  osd.15 down   1.0  1.0
> 17hdd   1.2  osd.17 down   1.0  1.0
> 20hdd   1.71649  osd.20 down   1.0  1.0
> 21hdd   2.0  osd.21 down   1.0  1.0
> 27hdd   1.71649  osd.27 down   1.0  1.0
> 32hdd   2.67020  osd.32 down   1.0  1.00

[ceph-users] Re: Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Mikael Öhman
Hi Jeff! (Also, I'm sorry for the resend; I did exactly the same with my 
message as well!)

Unfortunately, the answer wasn't that simple, as I am on the latest C7 kernel 
as well:
uname -r
3.10.0-1062.1.2.el7.x86_64

I did some more testing, and it's a bit difficult to trigger this reliably when 
using a line like you do.
If I paste a sendfile+remount line like that, it does seem to trigger a proper 
write, but if I put some delay in, it fails. About 10 seconds seems to be 
enough for me:

root@hermes:~# /root/sendfile sendfile.c /cephyr/dest 27; sleep 10; umount 
/cephyr; mount /cephyr; ls -l /cephyr/dest
Sent 0 KiB over sendfile(3EXT) of 0 KiB requested 
-rw-r--r-- 1 root root 0 Mar 25 23:01 /cephyr/dest

Easier yet: I can reliably see the problem immediately by not re-mounting 
anything and just looking at the file from another node:
root@hermes:~# /root/sendfile sendfile.c /cephyr/dest 27; ls -l /cephyr/dest
Sent 0 KiB over sendfile(3EXT) of 0 KiB requested 
-rw-r--r-- 1 root root 27 Mar 25 23:10 /cephyr/dest
and then, go to another node and look a couple seconds later:
[c3-micke@hebbe-c1 ~]$ ls -l /cephyr/dest
-rw-r--r-- 1 root root 0 Mar 25 23:10 /cephyr/dest

Best regards, Mikael
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread vitalif
I have a question regarding this problem - is it possible to rebuild 
bluestore allocation metadata? I could try it to test if it's an 
allocator problem...
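
For context, a hedged sketch of the offline bluestore fsck/repair
invocations discussed below, run with the OSD stopped; osd.0 is just an
example id, and whether repair can rebuild allocation metadata in this
version is exactly the open question:

systemctl stop ceph-osd@0
ceph-bluestore-tool fsck   --path /var/lib/ceph/osd/ceph-0
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0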



Hi.

I'm experiencing some kind of a space leak in Bluestore. I use EC,
compression and snapshots. First I thought that the leak was caused by
"virtual clones" (issue #38184). However, then I got rid of most of
the snapshots, but continued to experience the problem.

I suspected something when I added a new disk to the cluster and free
space in the cluster didn't increase (!).

So to track down the issue I moved one PG (34.1a) using upmaps from
osd11,6,0 to osd6,0,7 and then back to osd11,6,0.

It ate +59 GB after the first move and +51 GB after the second. As I
understand this proves that it's not #38184. Devirtualization of
virtual clones couldn't eat additional space after SECOND rebalance of
the same PG.

The PG has ~39000 objects, it is EC 2+1 and the compression is
enabled. Compression ratio is about ~2.7 in my setup, so the PG should
use ~90 GB raw space.

Before and after moving the PG I stopped osd0, mounted it with
ceph-objectstore-tool with debug bluestore = 20/20 and opened the
34.1a***/all directory. It seems to dump all object extents into the
log in that case. So now I have two logs with all allocated extents
for osd0 (I hope all extents are there). I parsed both logs and added
all compressed blob sizes together ("get_ref Blob ... 0x2 -> 0x...
compressed"). But they add up to ~39 GB before first rebalance
(34.1as2), ~22 GB after it (34.1as1) and ~41 GB again after the second
move (34.1as2) which doesn't indicate a leak.

But the raw space usage still exceeds initial by a lot. So it's clear
that there's a leak somewhere.

What additional details can I provide for you to identify the bug?

I posted the same message in the issue tracker,
https://tracker.ceph.com/issues/44731

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Igor Fedotov
Bluestore fsck/repair detects and fixes leaks at the Bluestore level, but I 
doubt your issue is here.


To be honest, I don't understand from the overview why you think that 
there are any leaks at all.


Not sure whether this is relevant but from my experience space "leaks" 
are sometimes caused by 64K allocation unit and keeping tons of small 
files or massive small EC overwrites.


To verify whether this is applicable, you might want to inspect the bluestore 
performance counters (bluestore_stored vs. bluestore_allocated) to 
estimate your losses due to the high allocation unit.
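
A minimal sketch of that check, assuming the default admin socket setup and
run on the host where the OSD lives; osd.0 is just an example:

ceph daemon osd.0 perf dump | grep -E '"bluestore_(stored|allocated)"'
# bluestore_allocated much larger than bluestore_stored points at
# allocation-unit overhead rather than a real leak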


A significant difference at multiple OSDs might indicate that the overhead is 
caused by high allocation granularity. Compression might make this 
analysis not that simple, though...



Thanks,

Igor


On 3/26/2020 1:19 AM, vita...@yourcmc.ru wrote:
I have a question regarding this problem - is it possible to rebuild 
bluestore allocation metadata? I could try it to test if it's an 
allocator problem...



Hi.

I'm experiencing some kind of a space leak in Bluestore. I use EC,
compression and snapshots. First I thought that the leak was caused by
"virtual clones" (issue #38184). However, then I got rid of most of
the snapshots, but continued to experience the problem.

I suspected something when I added a new disk to the cluster and free
space in the cluster didn't increase (!).

So to track down the issue I moved one PG (34.1a) using upmaps from
osd11,6,0 to osd6,0,7 and then back to osd11,6,0.

It ate +59 GB after the first move and +51 GB after the second. As I
understand this proves that it's not #38184. Devirtualization of
virtual clones couldn't eat additional space after SECOND rebalance of
the same PG.

The PG has ~39000 objects, it is EC 2+1 and the compression is
enabled. Compression ratio is about ~2.7 in my setup, so the PG should
use ~90 GB raw space.

Before and after moving the PG I stopped osd0, mounted it with
ceph-objectstore-tool with debug bluestore = 20/20 and opened the
34.1a***/all directory. It seems to dump all object extents into the
log in that case. So now I have two logs with all allocated extents
for osd0 (I hope all extents are there). I parsed both logs and added
all compressed blob sizes together ("get_ref Blob ... 0x2 -> 0x...
compressed"). But they add up to ~39 GB before first rebalance
(34.1as2), ~22 GB after it (34.1as1) and ~41 GB again after the second
move (34.1as2) which doesn't indicate a leak.

But the raw space usage still exceeds initial by a lot. So it's clear
that there's a leak somewhere.

What additional details can I provide for you to identify the bug?

I posted the same message in the issue tracker,
https://tracker.ceph.com/issues/44731

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Виталий Филиппов
Hi Igor,

I think so because
1) space usage increases after each rebalance. Even when the same pg is moved 
twice (!)
2) I use 4k min_alloc_size from the beginning

One crazy hypothesis is that maybe ceph allocates space for uncompressed 
objects, then compresses them and leaks (uncompressed-compressed) space. Really 
crazy idea but who knows o_O.

I already did a deep fsck, it didn't help... what else could I check?...
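
One hedged way to poke at that hypothesis is to compare the compression
counters on an OSD; if space really were reserved for the uncompressed
size, compressed_allocated would track compressed_original rather than the
compressed data size:

ceph daemon osd.0 perf dump | grep -E '"bluestore_compressed(_allocated|_original)?"'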

On 26 March 2020, 1:40:52 GMT+03:00, Igor Fedotov  wrote:
>Bluestore fsck/repair detect and fix leaks at Bluestore level but I 
>doubt your issue is here.
>
>To be honest I don't understand from the overview why do you think that
>
>there are any leaks at all
>
>Not sure whether this is relevant but from my experience space "leaks" 
>are sometimes caused by 64K allocation unit and keeping tons of small 
>files or massive small EC overwrites.
>
>To verify if this is applicable you might want to inspect bluestore 
>performance counters (bluestore_stored vs. bluestore_allocated) to 
>estimate your losses due to high allocation units.
>
>Significant difference at multiple OSDs might indicate that overhead is
>
>caused by high allocation granularity. Compression might make this 
>analysis not that simple though...
>
>
>Thanks,
>
>Igor
>
>
>On 3/26/2020 1:19 AM, vita...@yourcmc.ru wrote:
>> I have a question regarding this problem - is it possible to rebuild 
>> bluestore allocation metadata? I could try it to test if it's an 
>> allocator problem...
>>
>>> Hi.
>>>
>>> I'm experiencing some kind of a space leak in Bluestore. I use EC,
>>> compression and snapshots. First I thought that the leak was caused
>by
>>> "virtual clones" (issue #38184). However, then I got rid of most of
>>> the snapshots, but continued to experience the problem.
>>>
>>> I suspected something when I added a new disk to the cluster and
>free
>>> space in the cluster didn't increase (!).
>>>
>>> So to track down the issue I moved one PG (34.1a) using upmaps from
>>> osd11,6,0 to osd6,0,7 and then back to osd11,6,0.
>>>
>>> It ate +59 GB after the first move and +51 GB after the second. As I
>>> understand this proves that it's not #38184. Devirtualization of
>>> virtual clones couldn't eat additional space after SECOND rebalance
>of
>>> the same PG.
>>>
>>> The PG has ~39000 objects, it is EC 2+1 and the compression is
>>> enabled. Compression ratio is about ~2.7 in my setup, so the PG
>should
>>> use ~90 GB raw space.
>>>
>>> Before and after moving the PG I stopped osd0, mounted it with
>>> ceph-objectstore-tool with debug bluestore = 20/20 and opened the
>>> 34.1a***/all directory. It seems to dump all object extents into the
>>> log in that case. So now I have two logs with all allocated extents
>>> for osd0 (I hope all extents are there). I parsed both logs and
>added
>>> all compressed blob sizes together ("get_ref Blob ... 0x2 ->
>0x...
>>> compressed"). But they add up to ~39 GB before first rebalance
>>> (34.1as2), ~22 GB after it (34.1as1) and ~41 GB again after the
>second
>>> move (34.1as2) which doesn't indicate a leak.
>>>
>>> But the raw space usage still exceeds initial by a lot. So it's
>clear
>>> that there's a leak somewhere.
>>>
>>> What additional details can I provide for you to identify the bug?
>>>
>>> I posted the same message in the issue tracker,
>>> https://tracker.ceph.com/issues/44731

-- 
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io