The output of `sudo df -h`:

 Filesystem      Size  Used Avail Use% Mounted on
 udev            3,9G     0  3,9G   0% /dev
 tmpfs           790M   19M  771M   3% /run
 /dev/md0         46G  2,5G   41G   6% /
 tmpfs           3,9G     0  3,9G   0% /dev/shm
 tmpfs           5,0M     0  5,0M   0% /run/lock
 tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
 /dev/sdb1       476M  3,4M  472M   1% /boot/efi
 /dev/sda3       885G  1,4G  883G   1% /var/lib/ceph/osd/ceph-3
 /dev/sdb3       885G  1,6G  883G   1% /var/lib/ceph/osd/ceph-0
 tmpfs           790M     0  790M   0% /run/user/1001
On 2017-06-26 22:39, David Turner wrote:
The output of `sudo df -h` would also be helpful. Sudo/root is
generally required because the OSD folders are only readable by the
Ceph user.
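For a single OSD, `df` can also be pointed at the mount directly; a minimal example (the path follows the usual /var/lib/ceph/osd/<cluster>-<id> layout):

 sudo df -h /var/lib/ceph/osd/ceph-0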
On Mon, Jun 26, 2017 at 4:37 PM David Turner <drakonst...@gmail.com> wrote:
What is the output of `lsblk`?
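If the default view is too terse, an explicit column selection helps; a minimal example (these are standard lsblk columns):

 lsblk -o NAME,SIZE,TYPE,MOUNTPOINT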
On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter <p...@peer.hu> wrote:
Dear cephers,
Could someone point me to a URL that describes how Ceph calculates the available space?
I've installed a small Ceph (Kraken) environment with BlueStore OSDs. Each server contains 2 disks and 1 SSD. On each disk, partition 1 is UEFI (~500 MB), partition 2 is RAID (~50 GB), and partition 3 is the Ceph partition (450-950 GB). One server has two 500 GB HDDs and two servers have 1 TB HDDs, three servers in total.
For example, the HDD partitions:

 Device        Start        End    Sectors   Size Type
 /dev/sdb1      2048     976895     974848   476M EFI System
 /dev/sdb2    976896   98633727   97656832  46,6G Linux RAID
 /dev/sdb3  98633728 1953525134 1854891407 884,5G Ceph OSD
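(A listing in this format can be produced with `fdisk -l` on a GPT disk; I'm assuming that's how it was generated:)

 sudo fdisk -l /dev/sdb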
Info from `ceph-disk`:

 /dev/sda :
  /dev/sda1 other, vfat
  /dev/sda2 other, linux_raid_member
  /dev/sda3 ceph data, active, cluster ceph, osd.4, block.db /dev/sdc1, block.wal /dev/sdc2
 /dev/sdb :
  /dev/sdb1 other, vfat, mounted on /boot/efi
  /dev/sdb2 other, linux_raid_member
  /dev/sdb3 ceph data, active, cluster ceph, osd.1, block.db /dev/sdc3, block.wal /dev/sdc4
 /dev/sdc :
  /dev/sdc1 ceph block.db, for /dev/sda3
  /dev/sdc2 ceph block.wal, for /dev/sda3
  /dev/sdc3 ceph block.db, for /dev/sdb3
  /dev/sdc4 ceph block.wal, for /dev/sdb3
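(This matches the output format of `ceph-disk list`, presumably run as root:)

 sudo ceph-disk list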
The sizes reported by `ceph osd df tree`:
 ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS TYPE NAME
 -1 0.17578        - 179G   104M   179G   0.06 1.00   0 root default
 -2 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl2
  0 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.0
  3 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.3
 -3 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl3
  1 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.1
  4 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.4
 -4 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl1
  2 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.2
  5 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.5
           TOTAL    179G   104M   179G   0.06
 MIN/MAX VAR: 1.00/1.00  STDDEV: 0
Each OSD reports ~30 GB, roughly 10 percent of the smallest real partition size, with 3x replication. Could it be that the system is using the wrong partition (the 2nd one in this scenario) for the usable-space calculation? Can I write more data than the calculated amount? Any other hints?
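For comparison, a rough back-of-the-envelope calculation from the numbers above (my arithmetic, assuming two ~450 GB and four ~884.5 GB OSD partitions):

 2 x ~450G + 4 x ~884.5G = ~4.4T expected raw capacity
 ~4.4T / 3 (replication) = ~1.5T expected usable space
 vs. the reported TOTAL of 179G (~30G per OSD)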
Thank you!
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com