Yes, I created all four LUNs with these sizes:

lun0 - 5120G
lun1 - 5121G
lun2 - 5122G
lun3 - 5123G

It's always one GB more per LUN...
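For what it's worth, those sizes line up exactly with the byte values in the tcmu-runner error below (so "G" here is GiB, 1 GiB = 1073741824 bytes):

###

5121 GiB = 5121 * 1073741824 = 5498631880704   (actual size of rbd/production.lun1)
5120 GiB = 5120 * 1073741824 = 5497558138880   (the "requested new size", an even 5 TiB)

###

That matches what Jason says below: ceph-iscsi-config thinks the image is an even 5 TiB, one GiB less than what the image actually is.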
Is there any newer ceph-iscsi-config package than the one I have installed?

ceph-iscsi-config-2.6-2.6.el7.noarch

Then I could try to update the package and see if the error is fixed ...

________________________________
From: Jason Dillaman <jdill...@redhat.com>
Sent: Wednesday, October 2, 2019 16:00:03
To: Kilian Ries
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

On Wed, Oct 2, 2019 at 9:50 AM Kilian Ries <m...@kilian-ries.de> wrote:
>
> Hi,
>
> I'm running a Ceph Mimic cluster with 4x iSCSI gateway nodes. The cluster
> was set up via ceph-ansible v3.2-stable. I just checked my nodes and saw
> that only two of the four configured iSCSI gw nodes are working correctly.
> I first noticed it via gwcli:
>
> ###
>
> $ gwcli -d ls
> Traceback (most recent call last):
>   File "/usr/bin/gwcli", line 191, in <module>
>     main()
>   File "/usr/bin/gwcli", line 103, in main
>     root_node.refresh()
>   File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 87, in refresh
>     raise GatewayError
> gwcli.utils.GatewayError
>
> ###
>
> I investigated and noticed that both "rbd-target-api" and "rbd-target-gw"
> are not running. I was not able to restart them via systemd. I then found
> that even tcmu-runner is not running and it exits with the following error:
>
> ###
>
> tcmu_rbd_check_image_size:827 rbd/production.lun1: Mismatched sizes. RBD
> image size 5498631880704. Requested new size 5497558138880.
>
> ###
>
> Now I have the situation that two nodes are running correctly and two
> can't start tcmu-runner. I don't know where the image size mismatches are
> coming from - I haven't configured or resized any of the images.
>
> Is there any chance to get my two iSCSI gw nodes back working?

It sounds like you are potentially hitting [1]. The ceph-iscsi-config
library thinks your image size is 5TiB but you actually have a 5121GiB
(~5.001TiB) RBD image. Any clue how your RBD image got to be 1GiB larger
than an even 5TiB?

> The following packages are installed:
>
> $ rpm -qa | egrep "ceph|iscsi|tcmu|rst|kernel"
>
> libtcmu-1.4.0-106.gd17d24e.el7.x86_64
> ceph-iscsi-cli-2.7-2.7.el7.noarch
> kernel-3.10.0-957.5.1.el7.x86_64
> ceph-base-13.2.5-0.el7.x86_64
> ceph-iscsi-config-2.6-2.6.el7.noarch
> ceph-common-13.2.5-0.el7.x86_64
> ceph-selinux-13.2.5-0.el7.x86_64
> kernel-tools-libs-3.10.0-957.5.1.el7.x86_64
> python-cephfs-13.2.5-0.el7.x86_64
> ceph-osd-13.2.5-0.el7.x86_64
> kernel-headers-3.10.0-957.5.1.el7.x86_64
> kernel-tools-3.10.0-957.5.1.el7.x86_64
> kernel-3.10.0-957.1.3.el7.x86_64
> libcephfs2-13.2.5-0.el7.x86_64
> kernel-3.10.0-862.14.4.el7.x86_64
> tcmu-runner-1.4.0-106.gd17d24e.el7.x86_64
>
> Thanks,
> Greets
>
> Kilian

[1] https://github.com/ceph/ceph-iscsi-config/pull/68

--
Jason
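PS: To cross-check on the cluster side, the actual provisioned size can be read straight from rbd; a minimal check, using the pool/image name from the tcmu-runner error above:

###

$ rbd info rbd/production.lun1 | grep size
$ rbd du rbd/production.lun1

###

rbd info reports the provisioned size of the image, which should make the extra GiB visible directly.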
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com