Hi Cephers,
  I am in the process of migrating a cluster from FileStore to BlueStore,
but I'm concerned about warnings that keep popping up against the new
BlueStore devices. I see messages like the following; the specific OSD
changes, but it's always on one of the few hosts I've already converted to
BlueStore.

6 ops are blocked > 32.768 sec on osd.219
1 osds have slow requests
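
In case it helps, this is roughly how I've been trying to dig into the
blocked ops, by querying the admin socket on the node that hosts the
flagged OSD (osd.219 is just the one from the warning above):

# run on the node hosting osd.219, assuming the admin socket is at its default path
sudo ceph daemon osd.219 dump_ops_in_flight   # ops currently in flight / blocked
sudo ceph daemon osd.219 dump_historic_ops    # recent slow ops with per-stage timings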

I'm running 12.2.4; have any of you seen similar issues? It seems as though
these messages pop up more often when a PG on one of the BlueStore OSDs is
involved in a scrub. I'll include my BlueStore creation process below, in
case something there is causing the issue. (sdb, sdc, and sdd are SATA
HDDs; sde and sdf are SSDs.)
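
To chase the scrub correlation I've mostly been watching which PGs are
scrubbing while the warning is active, and experimenting with throttling
scrubs; a rough sketch of what I've tried (the 0.1 s sleep value is just a
guess on my part, not a recommendation):

# list PGs currently scrubbing or deep-scrubbing, and the OSDs they map to
ceph pg dump pgs_brief 2>/dev/null | grep scrub
# throttle scrubbing at runtime as an experiment (not persisted across restarts)
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'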


## Process used to create osds
sudo ceph-disk zap /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo ceph-volume lvm zap /dev/sdb
sudo ceph-volume lvm zap /dev/sdc
sudo ceph-volume lvm zap /dev/sdd
sudo ceph-volume lvm zap /dev/sde
sudo ceph-volume lvm zap /dev/sdf
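# carve a 133 GiB block.db partition out of the SSD (sdf) for each spinning disk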
sudo sgdisk -n 0:2048:+133GiB -t 0:FFFF -c 1:"ceph block.db sdb" /dev/sdf
sudo sgdisk -n 0:0:+133GiB -t 0:FFFF -c 2:"ceph block.db sdc" /dev/sdf
sudo sgdisk -n 0:0:+133GiB -t 0:FFFF -c 3:"ceph block.db sdd" /dev/sdf
sudo sgdisk -n 0:0:+133GiB -t 0:FFFF -c 4:"ceph block.db sde" /dev/sdf
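# create the BlueStore OSDs: data on the HDD, block.db on the matching sdf partition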
sudo ceph-volume lvm create --bluestore --crush-device-class hdd \
    --data /dev/sdb --block.db /dev/sdf1
sudo ceph-volume lvm create --bluestore --crush-device-class hdd \
    --data /dev/sdc --block.db /dev/sdf2
sudo ceph-volume lvm create --bluestore --crush-device-class hdd \
    --data /dev/sdd --block.db /dev/sdf3
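
For completeness, this is how I've been double-checking that the converted
OSDs actually picked up the separate block.db partition (assuming osd.219
is one of them):

# show the data/db devices backing each OSD on this node
sudo ceph-volume lvm list
# confirm the db device and objectstore type from the cluster side
ceph osd metadata 219 | grep -E 'bluefs|osd_objectstore'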