OK, thank you!

I removed the OSD afterward just in case, but I will re-add it and update the thread if things still don't look right.
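In case it is useful to anyone following along, the re-add should just be a repeat of the commands quoted below, after making sure the removed OSD is fully gone from the cluster first. A rough sketch (osd.81 is the id the new OSD was given in the output below, and the purge is only needed if it still shows up in 'ceph osd tree'):

#ceph osd purge 81 --yes-i-really-mean-it
#ceph-deploy disk zap hqosd7 /dev/sdk
#ceph-deploy osd create --data /dev/sdk hqosd7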

Shain

On 2/1/19 6:35 PM, Vladimir Prokofev wrote:
Your output looks a bit weird, but still, this is normal for bluestore. It creates a small separate metadata partition, presented as a filesystem (tmpfs in your output) mounted in /var/lib/ceph/osd, while the real data partition is hidden as a raw (bluestore) block device.
It's no longer possible to check disk utilisation with df when using bluestore.
To check your OSD capacity, use 'ceph osd df'.
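For example, something along these lines (osd.81 and /dev/sdk taken from your output; adjust to your own ids and devices):

#ceph osd df
#ceph-volume lvm list
#lsblk /dev/sdk

'ceph osd df' should report the new OSD at roughly the full 3.7T, while 'ceph-volume lvm list' and 'lsblk' on the host show the logical volume that actually holds the data. The tmpfs mount you see in df only carries the OSD metadata.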

On Sat, 2 Feb 2019 at 02:07, Shain Miley <smi...@npr.org> wrote:

    Hi,

    I went to replace a disk today (something I had not had to do in a
    while), and after adding it the results looked rather odd compared
    with past replacements:

    I was attempting to replace /dev/sdk on one of our osd nodes:

    #ceph-deploy disk zap hqosd7 /dev/sdk
    #ceph-deploy osd create --data /dev/sdk hqosd7

    [ceph_deploy.conf][DEBUG ] found configuration file at:
    /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy
    osd create --data /dev/sdk hqosd7
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  bluestore                     : None
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa3b1065a70>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  fs_type                       : xfs
    [ceph_deploy.cli][INFO  ]  block_wal                     : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  journal                       : None
    [ceph_deploy.cli][INFO  ]  subcommand                    : create
    [ceph_deploy.cli][INFO  ]  host                          : hqosd7
    [ceph_deploy.cli][INFO  ]  filestore                     : None
    [ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fa3b14b3398>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  zap_disk                      : False
    [ceph_deploy.cli][INFO  ]  data                          : /dev/sdk
    [ceph_deploy.cli][INFO  ]  block_db                      : None
    [ceph_deploy.cli][INFO  ]  dmcrypt                       : False
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  debug                         : False
    [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data
    device
    /dev/sdk
    [hqosd7][DEBUG ] connected to host: hqosd7
    [hqosd7][DEBUG ] detect platform information from remote host
    [hqosd7][DEBUG ] detect machine type
    [hqosd7][DEBUG ] find the location of an executable
    [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 16.04 xenial
    [ceph_deploy.osd][DEBUG ] Deploying osd to hqosd7
    [hqosd7][DEBUG ] write cluster configuration to
    /etc/ceph/{cluster}.conf
    [hqosd7][DEBUG ] find the location of an executable
    [hqosd7][INFO  ] Running command: /usr/sbin/ceph-volume --cluster
    ceph
    lvm create --bluestore --data /dev/sdk
    [hqosd7][DEBUG ] Running command: /usr/bin/ceph-authtool
    --gen-print-key
    [hqosd7][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name
    client.bootstrap-osd --keyring
    /var/lib/ceph/bootstrap-osd/ceph.keyring
    -i - osd new c98a11d1-9b7f-487e-8c69-72fc662927d4
    [hqosd7][DEBUG ] Running command: vgcreate --force --yes
    ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1 /dev/sdk
    [hqosd7][DEBUG ]  stdout: Physical volume "/dev/sdk" successfully
    created
    [hqosd7][DEBUG ]  stdout: Volume group
    "ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1" successfully created
    [hqosd7][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n
    osd-block-c98a11d1-9b7f-487e-8c69-72fc662927d4
    ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1
    [hqosd7][DEBUG ]  stdout: Logical volume
    "osd-block-c98a11d1-9b7f-487e-8c69-72fc662927d4" created.
    [hqosd7][DEBUG ] Running command: /usr/bin/ceph-authtool
    --gen-print-key
    [hqosd7][DEBUG ] Running command: mount -t tmpfs tmpfs
    /var/lib/ceph/osd/ceph-81
    [hqosd7][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-0
    [hqosd7][DEBUG ] Running command: ln -s /dev/ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1/osd-block-c98a11d1-9b7f-487e-8c69-72fc662927d4 /var/lib/ceph/osd/ceph-81/block
    [hqosd7][DEBUG ] Running command: ceph --cluster ceph --name
    client.bootstrap-osd --keyring
    /var/lib/ceph/bootstrap-osd/ceph.keyring
    mon getmap -o /var/lib/ceph/osd/ceph-81/activate.monmap
    [hqosd7][DEBUG ]  stderr: got monmap epoch 2
    [hqosd7][DEBUG ] Running command: ceph-authtool
    /var/lib/ceph/osd/ceph-81/keyring --create-keyring --name osd.81
    --add-key AQCyyFRcSwWqGBAAKZR8rcWIEknj/o3rsehOdA==
    [hqosd7][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-81/keyring
    [hqosd7][DEBUG ]  stdout: added entity osd.81 auth auth(auid =
    18446744073709551615 key=AQCyyFRcSwWqGBAAKZR8rcWIEknj/o3rsehOdA==
    with 0
    caps)
    [hqosd7][DEBUG ] Running command: chown -R ceph:ceph
    /var/lib/ceph/osd/ceph-81/keyring
    [hqosd7][DEBUG ] Running command: chown -R ceph:ceph
    /var/lib/ceph/osd/ceph-81/
    [hqosd7][DEBUG ] Running command: /usr/bin/ceph-osd --cluster ceph
    --osd-objectstore bluestore --mkfs -i 81 --monmap
    /var/lib/ceph/osd/ceph-81/activate.monmap --keyfile - --osd-data
    /var/lib/ceph/osd/ceph-81/ --osd-uuid
    c98a11d1-9b7f-487e-8c69-72fc662927d4 --setuser ceph --setgroup ceph
    [hqosd7][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdk
    [hqosd7][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1/osd-block-c98a11d1-9b7f-487e-8c69-72fc662927d4 --path /var/lib/ceph/osd/ceph-81
    [hqosd7][DEBUG ] Running command: ln -snf /dev/ceph-bbe0e44e-afc9-4cf1-9f1a-ed7d20f796c1/osd-block-c98a11d1-9b7f-487e-8c69-72fc662927d4 /var/lib/ceph/osd/ceph-81/block
    [hqosd7][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-0
    [hqosd7][DEBUG ] Running command: chown -R ceph:ceph
    /var/lib/ceph/osd/ceph-81
    [hqosd7][DEBUG ] Running command: systemctl enable
    ceph-volume@lvm-81-c98a11d1-9b7f-487e-8c69-72fc662927d4
    [hqosd7][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-81-c98a11d1-9b7f-487e-8c69-72fc662927d4.service to /lib/systemd/system/ceph-volume@.service.
    [hqosd7][DEBUG ] Running command: systemctl start ceph-osd@81
    [hqosd7][DEBUG ] --> ceph-volume lvm activate successful for osd
    ID: 81
    [hqosd7][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdk
    [hqosd7][INFO  ] checking OSD status...
    [hqosd7][DEBUG ] find the location of an executable
    [hqosd7][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd
    stat
    --format=json
    [hqosd7][WARNIN] there are 2 OSDs down
    [hqosd7][WARNIN] there are 2 OSDs out
    [ceph_deploy.osd][DEBUG ] Host hqosd7 is now ready for osd use.

    _________________________________________________________


    However, when I listed the partitions on the server, this is what I
    found (osd.81 was showing up as 32G as opposed to the 3.7T that the
    drive actually is):

    /dev/sdm1       3.7T  2.9T  756G  80% /var/lib/ceph/osd/ceph-77
    tmpfs            32G   48K   32G   1% /var/lib/ceph/osd/ceph-81

    __________________________________________________________
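    To see where the data for osd.81 actually lives, the block symlink
    that ceph-volume created can be checked directly (paths taken from
    the output above):

    #readlink -f /var/lib/ceph/osd/ceph-81/block
    #blockdev --getsize64 /var/lib/ceph/osd/ceph-81/block

    The second command should print the size of the underlying logical
    volume (roughly 4TB here), not the 32G of the tmpfs mount.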

    Here is some output from fdisk as well:

    Disk /dev/sdm: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: CD3A35E7-CF85-4E79-9911-B80099349C85

    Device        Start        End    Sectors  Size Type
    /dev/sdm1  20973568 7812939742 7791966175  3.6T Ceph OSD
    /dev/sdm2      2048   20971520   20969473   10G Ceph Journal

    Partition table entries are not in disk order.


    Disk /dev/sdk: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes


    Disk /dev/mapper/ceph--bbe0e44e--afc9--4cf1--9f1a--ed7d20f796c1-osd--block--c98a11d1--9b7f--487e--8c69--72fc662927d4: 3.7 TiB, 4000220971008 bytes, 7812931584 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    __________________________________________________________


    I would normally spend more time looking around for an answer myself;
    however, our cluster is a little tight on space and I really need to
    replace 2 or 3 drives ASAP in order to resolve some of the
    'backfillfull' errors I am seeing.
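
    (The warnings themselves come from 'ceph health detail', which lists
    the OSDs triggering them, and 'ceph osd df' shows per-OSD
    utilisation:)

    #ceph health detail
    #ceph osd df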

    I am assuming this isn't normal; however, this would be the first
    bluestore OSD added to this cluster, so I am not really sure.

    Thanks in advance,

    Shain

    --
    NPR | Shain Miley | Manager of Infrastructure, Digital Media |
    smi...@npr.org | 202.513.3649


--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smi...@npr.org | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
