Hi,

Last night I upgraded a Luminous cluster to Nautilus. All went well, but there
was one sleep-depriving issue I would like to prevent from happening next week
when I upgrade another cluster. Maybe you can help me figure out what
actually happened.

So I upgraded the packages and restarted the mons and mgrs, and then started
restarting the OSDs on one of the nodes (the rough commands are sketched after
the log excerpts). Below are the start and 'start_boot' times for each OSD;
between those two log lines the disks were reading at full speed, I think
scanning the whole disk.

2020-08-19 02:08:10.568 7fd742b09c80  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:09:33.591 7fd742b09c80  1 osd.8 2188 start_boot

2020-08-19 02:08:10.592 7fb453887c80  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:17:40.878 7fb453887c80  1 osd.5 2188 start_boot

2020-08-19 02:08:10.836 7f907bc0cc80  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:19:58.462 7f907bc0cc80  1 osd.3 2188 start_boot

2020-08-19 02:08:10.584 7f1ca892cc80  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 03:13:24.179 7f1ca892cc80  1 osd.11 2188 start_boot

2020-08-19 02:08:10.568 7f059f80dc80  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 04:06:55.342 7f059f80dc80  1 osd.14 2188 start_boot
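
For completeness, the restart sequence was roughly the following (from memory;
these are the standard systemd units shipped with the packages, your unit names
may differ):

systemctl restart ceph-mon.target   # on each mon node
systemctl restart ceph-mgr.target   # on each mgr node
systemctl restart ceph-osd.target   # on the OSD node, restarts all OSDs on that node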

So, while this is not an issue that breaks anything, I would like to know how I
can arrange for the OSDs to do this 'maintenance' beforehand, so I don't have
to wait so long during the next upgrade. :)

I do also see a warning in the logs: "store not yet converted to per-pool
stats". Is that related?

Thanks!
--
Mark Schouten <m...@tuxis.nl>

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 
