Over the summer, I upgraded my cluster from Nautilus to Pacific and 
converted it to cephadm afterward.  Over the past couple of weeks, I've 
been converting my OSDs to use NVMe drives for db+wal storage.  The 
process for each node: schedule that node's OSDs for removal, wait for 
the removal to finish, delete the old PVs and zap the drives, and let 
the orchestrator do its thing.
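
For reference, this is roughly what I've been doing per node (the OSD 
IDs, hostname, and device path below are placeholders, and the OSD 
service spec was applied beforehand):

    # drain and remove this node's OSDs
    ceph orch osd rm 12 13 14 15
    ceph orch osd rm status    # repeat until the removals finish

    # clean up the leftover LVM PVs by hand, then wipe the data drives
    # so they show up as available again
    ceph orch device zap <hostname> /dev/sdX --force

    # the OSD spec I already have applied then recreates the OSDs with
    # db+wal on the NVMe; this just confirms the spec is in place
    ceph orch ls osd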

Over this past weekend, the cluster threw up a HEALTH_WARN due to mismatched 
daemon versions.  Apparently the recreated OSDs are reporting different version 
information from the old daemons.

New OSDs:

- Container Image Name: docker.io/ceph/daemon-base:latest-pacific-devel
- Container Image ID: d253896d959e
- Version: 16.2.5-226-g7c9eb137

Old OSDs and other daemons:

- Container Image Name: docker.io/ceph/ceph:v16
- Container Image ID: 6933c2a0b7dd
- Version: 16.2.5
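
(In case it matters, I'm just going off the standard status/orch 
commands here, e.g.:

    ceph health detail
    ceph orch ps
    ceph versions

and summarizing the output above rather than pasting it verbatim.)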

I'm assuming this is not actually a problem and will go away the next 
time I upgrade the cluster, but I figured I'd throw it out here in case 
someone with more knowledge than I have thinks otherwise.  If it's not a 
problem, is there a way to silence the warning until I next run an 
upgrade?  And is there an explanation for why the recreated OSDs came up 
on a different image in the first place?
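
My untested guess at a workaround, assuming the warning is the 
DAEMON_OLD_VERSION health check, would be something along these lines:

    # mute the check until the next upgrade (the duration is arbitrary)
    ceph health mute DAEMON_OLD_VERSION 4w

    # or point cephadm back at the stock image and redeploy the new OSDs
    ceph config set global container_image docker.io/ceph/ceph:v16
    ceph orch daemon redeploy osd.<id>

but I'd rather hear from someone who actually knows before I start 
poking at it.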

-----
Edward Huyer
Golisano College of Computing and Information Sciences
Rochester Institute of Technology
Golisano 70-2373
152 Lomb Memorial Drive
Rochester, NY 14623
585-475-6651
erh...@rit.edu

Obligatory Legalese:
The information transmitted, including attachments, is intended only for the 
person(s) or entity to which it is addressed and may contain confidential 
and/or privileged material. Any review, retransmission, dissemination or other 
use of, or taking of any action in reliance upon this information by persons or 
entities other than the intended recipient is prohibited. If you received this 
in error, please contact the sender and destroy any copies of this information.
