The ceph-volume systemd services make sure that the right partitions are
mounted at /var/lib/ceph/osd/ceph-X before the corresponding OSD daemon starts.

In "simple" mode the service gets the necessary information from a json
file (long-hex-string.json) in /etc/ceph
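
For example, for the OSD from your status output below the mapping would look
roughly like this (a sketch only: the <id>-<osd-fsid>.json naming and the field
names are from memory, so check what scan actually wrote on your box):

# the JSON blob the ceph-volume@simple-0-... unit reads at activation
cat /etc/ceph/osd/0-6585a10b-917f-4458-a464-b4dd729ef174.json
# it contains the cluster fsid, the OSD id ("whoami") and the data
# partition, e.g. "data": {"path": "/dev/sdb1", "uuid": "..."},
# which is what ends up mounted at /var/lib/ceph/osd/ceph-0

# verify that the mount the unit is responsible for is in place
findmnt /var/lib/ceph/osd/ceph-0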

ceph-volume simple scan and ceph-volume simple activate create the JSON file
and the systemd unit, respectively.
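
A minimal walk-through for one OSD (hedged sketch; the device and the unit
name are just taken from your mail below):

# scan the data partition and write the JSON metadata for that OSD
ceph-volume simple scan /dev/sdb1

# activate mounts the data dir and enables the matching systemd unit
# so the OSD is brought up again on the next boot
ceph-volume simple activate --all

# the unit that activate enabled (simple-<id>-<osd fsid>)
systemctl is-enabled ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service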

ceph-disk used udev for activation instead, which was *very* messy and a
frequent cause of long startup delays (I've seen > 40 minutes on encrypted
ceph-disk OSDs).

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 1:03 PM Ranjan Ghosh <gh...@pw6.de> wrote:

> Hi all,
>
> After upgrading to Ubuntu 19.10 and consequently from Mimic to Nautilus, I
> had a mini-shock when my OSDs didn't come up. Okay, I should have read the
> docs more closely; I had to do:
>
> # ceph-volume simple scan /dev/sdb1
>
> # ceph-volume simple activate --all
>
> Hooray. The OSDs came back to life. And I saw that some weird services
> were created. I didn't give it much thought at first, but later I noticed
> there is now a new service in town:
>
> ===
>
> root@yak1 ~ # systemctl status
> ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service
> ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service - Ceph
> Volume activation: simple-0-6585a10b-917f-4458-a464-b4dd729ef174
>    Loaded: loaded (/lib/systemd/system/ceph-volume@.service; enabled;
> vendor preset: enabled)
>    Active: inactive (dead) since Wed 2019-12-04 23:29:15 CET; 13h ago
>  Main PID: 10048 (code=exited, status=0/SUCCESS)
>
> ===
>
> Hmm. It's dead. But my cluster is alive & kicking. Everything is working.
> Why is this needed? Should I be worried? Or can I safely delete that
> service from /etc/systemd/... since it's not running anyway?
>
> Another, probably minor issue:
>
> I still get a HEALTH_WARN "1 MDSs report oversized cache". But it doesn't
> tell me any details and I cannot find anything in the logs. What should I
> do to resolve this? Set mds_cache_memory_limit? How do I determine an
> acceptable value?
>
>
> Thank you / Best regards
>
> Ranjan
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
