Looks like I found the answer. The preparation was not done the proper way. I
found valuable information on the `ceph-disk prepare --help` page, and the
cluster is operating much better now; a sketch of the re-prepare is below the listings:
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda       8:0    0 931,5G  0 disk
├─sda1    8:1    0   476M  0 part
├─sda2    8:2
sudo df -h:
udev       3,9G     0  3,9G   0% /dev
tmpfs      790M   19M  771M   3% /run
/dev/md0    46G  2,5G   41G   6% /
tmpfs      3,9G     0  3,9G   0% /dev/shm
tmpfs      5,0M     0  5,0M   0% /run/lock
tmpfs      3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1
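For the archive, the kind of whole-disk BlueStore prepare that the help page
describes looks roughly like this. A sketch only, not necessarily the exact
commands from my history; /dev/sdb stands in for one of my data disks, adjust
the device name, and note that zap destroys everything on it:

# wipe the old partition layout on the OSD disk (destructive!)
sudo ceph-disk zap /dev/sdb
# let ceph-disk partition the whole disk as a BlueStore OSD
sudo ceph-disk prepare --bluestore /dev/sdb
# activate the freshly prepared data partition
sudo ceph-disk activate /dev/sdb1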
I'm not seeing anything here that would indicate a problem. The weights,
cluster size, etc. all say that Ceph only sees 30GB per OSD, and I don't see
what is causing the discrepancy. Anyone else have any ideas?
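For reference, the per-OSD sizes and weights I'm talking about come from the
standard views, along these lines:

ceph osd df     # per-OSD size, used, available and weight
ceph osd tree   # CRUSH weights per host/OSD
ceph df         # cluster-wide and per-pool totals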
On Mon, Jun 26, 2017, 5:02 PM Papp Rudolf Péter wrote:
> sudo df -h:
> udev
And the `sudo df -h`? Also a `ceph df` might be helpful to see what's
going on.
On Mon, Jun 26, 2017 at 4:41 PM Papp Rudolf Péter wrote:
> Hi David!
>
> lsblk:
>
> NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda       8:0    0 931,5G  0 disk
> ├─sda1    8:1    0   476M  0 part
> ├─sda2    8
sudo df -h:
udev       3,9G     0  3,9G   0% /dev
tmpfs      790M   19M  771M   3% /run
/dev/md0    46G  2,5G   41G   6% /
tmpfs      3,9G     0  3,9G   0% /dev/shm
tmpfs      5,0M     0  5,0M   0% /run/lock
tmpfs      3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1
Hi David!
lsblk:
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 931,5G  0 disk
├─sda1    8:1    0   476M  0 part
├─sda2    8:2    0  46,6G  0 part
│ └─md0   9:0    0  46,5G  0 raid1 /
└─sda3    8:3    0 884,5G  0 part  /var/lib/ceph/osd/ceph-3
sdb       8:16   0 931,5G  0 disk
├
The output of `sudo df -h` would also be helpful. Sudo/root is generally
required because the OSD folders are only readable by the Ceph user.
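A quick way to see why, if you're curious (ceph-3 here is just a placeholder
OSD id; substitute your own):

ls -ld /var/lib/ceph/osd/ceph-3      # the mountpoint itself, typically owned ceph:ceph
sudo ls -l /var/lib/ceph/osd/ceph-3  # reading the contents needs root or the ceph user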
On Mon, Jun 26, 2017 at 4:37 PM David Turner wrote:
> What is the output of `lsblk`?
>
> On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote:
>
>> D
What is the output of `lsblk`?
On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote:
> Dear cephers,
>
> Could someone show me a URL where I can find how Ceph calculates the
> available space?
>
> I've installed a small Ceph (Kraken) environment with BlueStore OSDs.
> The servers contain 2