Well, we figured it out :)
This mailing list post fixed our problem:
http://www.spinics.net/lists/ceph-users/msg24220.html
We had to mark down the OSDs that were falsely reported as up, and then
restart all of the OSDs.
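Roughly what we ran, in case it helps anyone else (osd.0 and osd.1 are
example IDs, and the systemctl unit names assume a systemd-based Infernalis
install; adjust both for your cluster):

# mark each OSD that the monitors wrongly show as up
ceph osd down 0
ceph osd down 1

# then restart every OSD daemon so it re-registers with the monitors
systemctl restart ceph-osd@0
systemctl restart ceph-osd@1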
Thanks!
On Tue, Jan 5, 2016 at 6:43 PM, Mike Carlson wrote:
> Hey ceph-users,
>
> We upgraded from Hammer to Infernalis, stopped all of the OSDs to change
> the user permissions from root to ceph, and now all of our OSDs are down
> (some say they are up, but the status says they are booting):
>
> ceph -s
>     cluster cabd1728-2eca-4e18-a581-b4885364e5a4
>      health HEALTH_WARN