Re: [ceph-users] Monitor stuck at "probing"

2019-06-16 Thread Joshua M. Boniface
Do you happen to be running on Debian Buster? I'm running into a similar problem, though in my case I'm bootstrapping a new cluster using a manual method (well, automated by Ansible following the Manual Install guide). The very first time I bootstrap, it seems fine; then, if I purge all the ceph-* packages…
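(For anyone hitting the same thing: a quick way to confirm a monitor is stuck probing is to query its admin socket. A minimal sketch, assuming the mon ID matches the short hostname and a default socket path:

    # Query the monitor's own view of its state; a stuck mon reports "probing"
    ceph daemon mon.$(hostname -s) mon_status | grep '"state"'

If it stays in "probing", also check that the peers listed in its monmap are reachable on their mon ports.)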

[ceph-users] bluestore_allocated vs bluestore_stored

2019-06-16 Thread Maged Mokhtar
Hi all, I want to better understand the difference between bluestore_allocated and bluestore_stored in the case of no compression. If I am writing fixed-size objects larger than the min alloc size, would bluestore_allocated still be higher than bluestore_stored? If so, is this a permanent overhead…
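Both values can be read straight from an OSD's perf counters; a minimal sketch, assuming osd.0 and a default admin socket:

    # Dump perf counters for osd.0 and pick out the two BlueStore values
    ceph daemon osd.0 perf dump | grep -E '"bluestore_(allocated|stored)"'

bluestore_allocated counts disk space reserved by the allocator, while bluestore_stored counts the logical bytes of object data, so allocation granularity alone can make the first exceed the second.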

[ceph-users] Simple bash script to reboot OSD nodes one by one

2019-06-16 Thread Alex Gorbachev
We use the following script after upgrades, and whenever it is necessary to reboot OSD nodes one at a time, making sure all PGs are healthy before rebooting the next node. I thought it may be helpful to share. The 600 seconds may need to be adjusted based on your load, OSD types, etc. #!/bin/bash …
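The script body is truncated in the archive; below is a minimal sketch of the same idea, where the node names, SSH access, and the use of the 600 seconds as a per-node health-wait timeout are all assumptions to adapt:

    #!/bin/bash
    # Reboot OSD nodes one at a time, waiting for the cluster to report
    # HEALTH_OK before moving on to the next node. Placeholders throughout.
    NODES="osd1 osd2 osd3"   # assumed node names
    TIMEOUT=600              # seconds to wait for health, per node

    for node in $NODES; do
        echo "Rebooting $node ..."
        ssh "$node" sudo reboot
        sleep 60             # allow the node to go down and come back up
        waited=0
        until ceph health | grep -q HEALTH_OK; do
            sleep 10
            waited=$((waited + 10))
            if [ "$waited" -ge "$TIMEOUT" ]; then
                echo "Cluster not healthy after ${TIMEOUT}s, aborting." >&2
                exit 1
            fi
        done
    done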

[ceph-users] Broken mirrors: hk, us-east, de, se, cz, gigenet

2019-06-16 Thread Hector Martin
http://hk.ceph.com/: Looks like this mirror has the 13.2.6 release files, but is missing most of the 13.2.6 debs. Is the sync process broken?
http://us-east.ceph.com/: Returns a CloudFlare error
http://de.ceph.com/: NXDOMAIN
http://se.ceph.com/: 503 Service Unavailable
http://mirrors.gig…
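These failures are easy to re-check from the shell; a minimal sketch (the gigenet mirror is omitted here because its URL is truncated above):

    # Probe each broken mirror from the subject line and print its HTTP status
    for m in hk us-east de se cz; do
        status=$(curl -sI --max-time 10 "http://$m.ceph.com/" | head -n1)
        echo "$m.ceph.com: ${status:-no response (DNS failure or timeout)}"
    done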

Re: [ceph-users] Broken mirrors: hk, us-east, de, se, cz, gigenet

2019-06-16 Thread Mart van Santen
hk: I have limited disk capacity; the disk filled up, and at this point I do not have any way to extend capacity. I will recheck the monitoring of this. I've now excluded archive, hammer, and giant for now, so at least the newer versions are there. It is currently syncing, which will take some time…