Looks like I found the answer: the preparation was not done the proper way. I found valuable information in the `ceph-disk prepare --help` output, and the cluster is operating much better now.
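
For the record, the prepare step looks roughly like this (a sketch only; the exact flags and device paths here are examples, so check `ceph-disk prepare --help` on your version):

  sudo ceph-disk prepare --bluestore /dev/sda --block.db /dev/sdc --block.wal /dev/sdc
  sudo ceph-disk prepare --bluestore /dev/sdb --block.db /dev/sdc --block.wal /dev/sdc

Given whole devices like that, ceph-disk creates the small ceph data partition plus a separate block partition on each HDD, and block.db/block.wal partitions on the SSD, as the outputs below show. lsblk: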

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 931,5G  0 disk
├─sda1    8:1    0   476M  0 part
├─sda2    8:2    0  46,6G  0 part
│ └─md0   9:0    0  46,5G  0 raid1 /
├─sda3    8:3    0   100M  0 part /var/lib/ceph/osd/ceph-0
└─sda4    8:4    0 884,4G  0 part
sdb       8:16   0 931,5G  0 disk
├─sdb1    8:17   0   476M  0 part  /boot/efi
├─sdb2    8:18   0  46,6G  0 part
│ └─md0   9:0    0  46,5G  0 raid1 /
├─sdb3    8:19   0   100M  0 part /var/lib/ceph/osd/ceph-5
└─sdb4    8:20   0 884,4G  0 part
sdc       8:32   0 232,9G  0 disk
├─sdc1    8:33   0    20G  0 part
├─sdc2    8:34   0   576M  0 part
├─sdc3    8:35   0    20G  0 part
└─sdc4    8:36   0   576M  0 part

sudo df -h:
udev            3,9G     0  3,9G   0% /dev
tmpfs           790M   59M  732M   8% /run
/dev/md0         46G  2,5G   41G   6% /
tmpfs           3,9G     0  3,9G   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1       476M  3,4M  472M   1% /boot/efi
/dev/sda3        94M  5,4M   89M   6% /var/lib/ceph/osd/ceph-0
/dev/sdb3        94M  5,4M   89M   6% /var/lib/ceph/osd/ceph-5
tmpfs           790M     0  790M   0% /run/user/1001

ceph-disk list:
/dev/sda :
 /dev/sda1 other, vfat
 /dev/sda2 other, linux_raid_member
/dev/sda3 ceph data, active, cluster ceph, osd.0, block /dev/sda4, block.db /dev/sdc1, block.wal /dev/sdc2
 /dev/sda4 ceph block, for /dev/sda3
/dev/sdb :
 /dev/sdb1 other, vfat, mounted on /boot/efi
 /dev/sdb2 other, linux_raid_member
/dev/sdb3 ceph data, active, cluster ceph, osd.5, block /dev/sdb4, block.db /dev/sdc3, block.wal /dev/sdc4
 /dev/sdb4 ceph block, for /dev/sdb3
/dev/sdc :
 /dev/sdc1 ceph block.db, for /dev/sda3
 /dev/sdc2 ceph block.wal, for /dev/sda3
 /dev/sdc3 ceph block.db, for /dev/sdb3
 /dev/sdc4 ceph block.wal, for /dev/sdb3

ceph osd df tree:
ID WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE VAR  PGS TYPE NAME
-1 4.38956        - 4494G   111M 4494G 0.00 1.00   0 root default
-2 0.85678        -  877G 38008k  877G 0.00 1.71   0 host cl1
 3 0.42839  1.00000  438G 19004k  438G 0.00 1.71 117         osd.3
 4 0.42839  1.00000  438G 19004k  438G 0.00 1.71 139         osd.4
-3 1.76639        - 1808G 38008k 1808G 0.00 0.83   0 host cl2
 0 0.88319  1.00000  904G 19004k  904G 0.00 0.83 119         osd.0
 5 0.88319  1.00000  904G 19004k  904G 0.00 0.83 137         osd.5
-4 1.76639        - 1808G 38008k 1808G 0.00 0.83   0 host cl3
 1 0.88319  1.00000  904G 19004k  904G 0.00 0.83 133         osd.1
 2 0.88319  1.00000  904G 19004k  904G 0.00 0.83 123         osd.2
              TOTAL 4494G   111M 4494G 0.00
MIN/MAX VAR: 0.83/1.71  STDDEV: 0.00

David, thanks for your attention!


On 2017-06-27 06:06, Papp Rudolf Péter wrote:

sudo df -h:
udev            3,9G     0  3,9G   0% /dev
tmpfs           790M   19M  771M   3% /run
/dev/md0         46G  2,5G   41G   6% /
tmpfs           3,9G     0  3,9G   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1       476M  3,4M  472M   1% /boot/efi
/dev/sda3       885G  1,4G  883G   1% /var/lib/ceph/osd/ceph-3
/dev/sdb3       885G  1,6G  883G   1% /var/lib/ceph/osd/ceph-0
tmpfs           790M     0  790M   0% /run/user/1001

ceph df:
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    179G      179G         116M          0.06
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    dev      6         0         0        61401M           0


On 2017-06-26 22:55, David Turner wrote:
And the `sudo df -h`? Also a `ceph df` might be helpful to see what's going on.

On Mon, Jun 26, 2017 at 4:41 PM Papp Rudolf Péter <p...@peer.hu> wrote:

    Hi David!

    lsblk:

    NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda       8:0    0 931,5G  0 disk
    ├─sda1    8:1    0   476M  0 part
    ├─sda2    8:2    0  46,6G  0 part
    │ └─md0   9:0    0  46,5G  0 raid1 /
    └─sda3    8:3    0 884,5G  0 part /var/lib/ceph/osd/ceph-3
    sdb       8:16   0 931,5G  0 disk
    ├─sdb1    8:17   0   476M  0 part  /boot/efi
    ├─sdb2    8:18   0  46,6G  0 part
    │ └─md0   9:0    0  46,5G  0 raid1 /
    └─sdb3    8:19   0 884,5G  0 part /var/lib/ceph/osd/ceph-0
    sdc       8:32   0 232,9G  0 disk
    ├─sdc1    8:33   0    20G  0 part
    ├─sdc2    8:34   0   576M  0 part
    ├─sdc3    8:35   0    20G  0 part
    └─sdc4    8:36   0   576M  0 part


    On 2017-06-26 22:37, David Turner wrote:
    What is the output of `lsblk`?

    On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter <p...@peer.hu> wrote:

        Dear cephers,

        Could someone point me to a URL where I can find how Ceph
        calculates the available space?

        I've installed a small Ceph (Kraken) environment with BlueStore
        OSDs. Each server contains 2 disks and 1 SSD. On each disk the
        1st partition is UEFI (~500 MB), the 2nd is RAID (~50 GB), and
        the 3rd is the Ceph partition (450-950 GB). One server has two
        500 GB HDDs and two have 1 TB HDDs, three servers in total.

        For example, the HDD partitions:
        /dev/sdb1      2048     976895     974848   476M EFI System
        /dev/sdb2    976896   98633727   97656832  46,6G Linux RAID
        /dev/sdb3  98633728 1953525134 1854891407 884,5G Ceph OSD
        info from ceph-disk:
          /dev/sda :
          /dev/sda1 other, vfat
          /dev/sda2 other, linux_raid_member
          /dev/sda3 ceph data, active, cluster ceph, osd.4, block.db /dev/sdc1, block.wal /dev/sdc2
        /dev/sdb :
          /dev/sdb1 other, vfat, mounted on /boot/efi
          /dev/sdb2 other, linux_raid_member
          /dev/sdb3 ceph data, active, cluster ceph, osd.1, block.db /dev/sdc3, block.wal /dev/sdc4
        /dev/sdc :
          /dev/sdc1 ceph block.db, for /dev/sda3
          /dev/sdc2 ceph block.wal, for /dev/sda3
          /dev/sdc3 ceph block.db, for /dev/sdb3
          /dev/sdc4 ceph block.wal, for /dev/sdb3

        The reported size from ceph osd df tree:
        ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS TYPE NAME
        -1 0.17578        -   179G   104M   179G 0.06 1.00   0 root default
        -2 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl2
         0 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.0
         3 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.3
        -3 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl3
         1 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.1
         4 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.4
        -4 0.05859        - 61439M 35696k 61405M 0.06 1.00   0     host cl1
         2 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.2
         5 0.02930  1.00000 30719M 17848k 30702M 0.06 1.00   0         osd.5
                      TOTAL   179G   104M   179G 0.06
        MIN/MAX VAR: 1.00/1.00  STDDEV: 0

        ~30 GB each, about 10 percent of the smallest real size, with 3x
        replication. Could it be that the system is using the wrong
        partition (the 2nd one in this scenario) for the usable-space
        calculation? Can I write more data than the calculated amount?
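
        (A quick way to see what BlueStore is actually sizing an OSD from,
        sketched with example paths and IDs, run on the OSD's host:

          ls -l /var/lib/ceph/osd/ceph-3/block
          sudo ceph daemon osd.3 config get bluestore_block_size

        If block is a plain file on the mounted data partition rather than
        a symlink to a partition, the OSD capacity comes from that file's
        size, i.e. bluestore_block_size, not from the big partition.)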

        Any other hints?

        Thank you!



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
