Hi Paul,

Thanks for the explanation. I didn't know about the JSON file yet.
That's certainly good to know. What I still don't understand, though:
Why is my service marked inactive/dead? Shouldn't it be running?

If I run:

systemctl start
ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service

nothing seems to happen:

===

root@yak1 /etc/ceph/osd # systemctl status
ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service
● ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service -
Ceph Volume activation: simple-0-6585a10b-917f-4458-a464-b4dd729ef174
   Loaded: loaded (/lib/systemd/system/ceph-volume@.service; enabled;
vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-12-05 14:14:08 CET; 2min 13s ago
 Main PID: 27281 (code=exited, status=0/SUCCESS)

Dec 05 14:14:08 yak1 systemd[1]: Starting Ceph Volume activation:
simple-0-6585a10b-917f-4458-a464-b4dd729ef174...
Dec 05 14:14:08 yak1 systemd[1]:
ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service:
Current command vanished from the unit file, execution of the command
Dec 05 14:14:08 yak1 sh[27281]: Running command: /usr/sbin/ceph-volume
simple trigger 0-6585a10b-917f-4458-a464-b4dd729ef174
Dec 05 14:14:08 yak1 systemd[1]:
ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service:
Succeeded.
Dec 05 14:14:08 yak1 systemd[1]: Started Ceph Volume activation:
simple-0-6585a10b-917f-4458-a464-b4dd729ef174.

===

It says status=0/SUCCESS, and the log says "Succeeded". But then why is
"Started Ceph Volume activation" the last log entry? It sounds like
something is unfinished.
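
Or is this unit perhaps only meant to run once and then exit? I assume
(unverified guess on my part) that the effective unit definition and its
type could be checked with something like:

# show the unit file backing this instance
systemctl cat ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service

# show whether it is a one-shot job that is expected to exit after success
systemctl show -p Type,RemainAfterExit,SubState \
    ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service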

The mount point seems to be mounted perfectly, though:

/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs
(rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Shouldn't that service be running continuously?
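
Or is the ceph-volume unit just a helper, and the thing that should be
running continuously is the OSD daemon itself, i.e. the ceph-osd@<id>
unit? Going by the mount point above, that would presumably be id 0 here:

# assuming the standard ceph-osd@<id> unit naming, with id 0 taken from the mount above
systemctl status ceph-osd@0.service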


BR

Ranjan


On 05.12.19 at 13:25, Paul Emmerich wrote:
> The ceph-volume services make sure that the right partitions are
> mounted at /var/lib/ceph/osd/ceph-X
>
> In "simple" mode the service gets the necessary information from a
> json file (long-hex-string.json) in /etc/ceph
>
> ceph-volume simple scan/activate create the json file and systemd unit.
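>
> For example, something like this should list them (exact location can
> vary between versions: the json files may live in /etc/ceph/osd/ rather
> than directly in /etc/ceph):
>
> ls /etc/ceph/*.json /etc/ceph/osd/*.json
> systemctl list-unit-files 'ceph-volume@*'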
>
> ceph-disk used udev instead for the activation which was *very* messy
> and a frequent cause of long startup delays (seen > 40 minutes on
> encrypted ceph-disk OSDs)
>
> Paul
>
> -- 
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Thu, Dec 5, 2019 at 1:03 PM Ranjan Ghosh <gh...@pw6.de> wrote:
>
>     Hi all,
>
>     After upgrading to Ubuntu 19.10 and consequently from Mimic to
>     Nautilus, I had a mini-shock when my OSDs didn't come up. Okay, I
>     should have read the docs more closely; I had to run:
>
>     # ceph-volume simple scan /dev/sdb1
>
>     # ceph-volume simple activate --all
>
>     Hooray. The OSDs came back to life. And I saw that some weird
>     services were created. I didn't give it much thought at first, but
>     later I noticed there is now a new service in town:
>
>     ===
>
>     root@yak1 ~ # systemctl status
>     ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service
>
>     ceph-volume@simple-0-6585a10b-917f-4458-a464-b4dd729ef174.service
>     - Ceph Volume activation:
>     simple-0-6585a10b-917f-4458-a464-b4dd729ef174
>        Loaded: loaded (/lib/systemd/system/ceph-volume@.service;
>     enabled; vendor preset: enabled)
>        Active: inactive (dead) since Wed 2019-12-04 23:29:15 CET; 13h ago
>      Main PID: 10048 (code=exited, status=0/SUCCESS)
>
>     ===
>
>     Hmm. It's dead. But my cluster is alive & kicking. Everything is
>     working. Why is this service needed? Should I be worried? Or can I
>     safely delete that service from /etc/systemd/... since it's not
>     running anyway?
>
>     Another, probably minor issue:
>
>     I still get a HEALTH_WARN "1 MDSs report oversized cache". But it
>     doesn't tell me any details and I cannot find anything in the
>     logs. What should I do to resolve this? Set
>     mds_cache_memory_limit? How do I determine an acceptable value?
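>
>     If the fix is simply to raise mds_cache_memory_limit, I assume the
>     command would be something along these lines (the value is just a
>     placeholder for 4 GiB in bytes, not a recommendation):
>
>     # ceph config set mds mds_cache_memory_limit 4294967296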
>
>
>     Thank you / Best regards
>
>     Ranjan
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
