If you are looking under POOLS:
USED is the amount of data stored, in kB unless specified in M or G.
refer here:
http://docs.ceph.com/docs/master/rados/operations/monitoring/
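For example (illustrative output, the numbers are made up):

$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    278G     265G      13056M       4.58
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      1024k     0         88326M        25

So here the rbd pool stores 1024 kB of data; a value like 116M or 2G would be megabytes or gigabytes.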
On Wed, Dec 28, 2016 at 1:32 PM, M Ranga Swami Reddy wrote:
> Hello,
>
> From the "ceph df" command output, how should the USED details be read?
On Fri, Dec 16, 2016 at 3:01 PM, sandeep.cool...@gmail.com wrote:
> Hi,
>
> The manual method is good if you have a small number of OSDs, but with
> more than 200 OSDs it will be a very time-consuming task to create them
> like this:
> /jewel/rados/operations/add-or-rm-osds/
>
>
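One way to automate this with jewel-era tooling is a loop over ceph-disk; a rough sketch, the device names are placeholders:

for dev in /dev/sd{b..k}; do
    # create GPT data + journal partitions carrying the Ceph type codes
    ceph-disk prepare "$dev"
done
# udev then triggers ceph-disk activate on the new partitions automatically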
> -- Original --
> *From:* "Burkhard Linke";
> *Date:* Fri, Dec 16, 2016 05:09 PM
> *To:* "CEPH list";
> *Subject:* Re: [ceph-users] 2 OSD's per drive, unable to start
Hi Burkhard,
How can I achieve that, so that all the OSDs auto-start at boot time?
Regards,
Sandeep
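A minimal sketch, assuming systemd-managed OSDs with the default jewel unit names (the OSD ids are placeholders):

# enable each OSD instance so systemd starts it at boot
systemctl enable ceph-osd@0.service ceph-osd@1.service
# the umbrella target should be enabled as well
systemctl enable ceph.target

With ceph-disk/udev managed disks the OSDs are normally activated at boot from the GPT partition type codes instead, so this is mainly needed for manually created OSDs.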
On Fri, Dec 16, 2016 at 2:39 PM, Burkhard Linke <burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote:
Hi,
I was trying a scenario where I have partitioned my drive (/dev/sdb) into
4 partitions (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal"
# sgdisk -n 2:0:+1024 /dev/sdb -c 2:"ceph journal"
# sgdisk -n 3:0:+4096 /dev/sdb -c 3:"ceph data"
# sgdisk -n 4:0:+4096 /dev/sdb -c 4:"ceph data"
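Note that for the jewel udev rules to auto-activate these partitions at boot, they also need the Ceph GPT type codes; a hedged sketch, where the GUIDs are the standard ceph-disk ones and the partition numbers assume the layout above:

# mark the journal partitions
sgdisk -t 1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
sgdisk -t 2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# mark the OSD data partitions
sgdisk -t 3:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
sgdisk -t 4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb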
Hi,
I'm using the jewel (10.2.4) release on CentOS 7.2. After rebooting one of the
OSD nodes, the OSDs don't start, even after trying 'systemctl start
ceph-osd@.service'.
Do we have to make an entry in fstab for our ceph OSD folders, or does ceph
do that automatically?
Then I mounted the correct partition
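For reference, with jewel's ceph-disk the data partition is mounted under /var/lib/ceph/osd/ at activation time, so no fstab entry should be needed. An illustrative check (the partition and OSD id are placeholders):

# mount the data partition and start the OSD
ceph-disk activate /dev/sdb3
# confirm the mount and the unit state
mount | grep /var/lib/ceph/osd
systemctl status ceph-osd@0.service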