[ceph-users] multiple active connections to a single LUN

2018-12-28 Thread Никитенко Виталий
Hi, I installed the iSCSI gateway as described at http://docs.ceph.com/docs/master/rbd/iscsi-overview/ (kernel version 4.16.1, Ceph v12.2.10 Luminous). Earlier I used the tgt iSCSI target: I could connect 3 ESXi servers to one LUN as a shared drive, and if one ESXi was dropped the machines were transferred to others from the

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-28 Thread jesper
Hi. Just an update - this looks awesome, and in an 8x5 company, Christmas is a good period to rebalance a cluster :-) >> I'll try it out again - last time I tried, it complained about older clients - >> it should be better now. > upmap is supported since kernel 4.13. > >> Second - should the rewei
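[Editor's note: a minimal sketch of the balancer steps discussed in this thread, driven from Python through the stock ceph CLI. The commands are the standard Luminous compat-client and balancer commands; running them wholesale on a production cluster is left to your judgment.]

import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout as text."""
    return subprocess.check_output(("ceph",) + args).decode()

# upmap needs Luminous-or-newer clients (kernel >= 4.13 for the kernel client,
# per the thread above); this may also want --yes-i-really-mean-it if older
# clients are still connected.
print(ceph("osd", "set-require-min-compat-client", "luminous"))

# Switch the mgr balancer module to upmap mode and enable it.
print(ceph("balancer", "mode", "upmap"))
print(ceph("balancer", "on"))

# Check what the balancer is planning / has done.
print(ceph("balancer", "status"))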

Re: [ceph-users] utilization of rbd volume

2018-12-28 Thread Jason Dillaman
There isn't anything built-in to Ceph to help with that. You could either increase the logging level for the OSDs and grep out "rbd_data." object accesses (assuming v2 image format), group by the image id, and figure out which image is being hammered. You could also create custom "iptables" LOG rul
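[Editor's note: as a rough illustration of the log-scraping approach Jason describes (not an official tool), a short Python sketch that counts rbd_data.<image_id>.* object names in an OSD log and prints the busiest image ids. The log path is only an example, and the single assumption about the log format is that object names appear verbatim in the lines once OSD logging is turned up.]

import collections
import re
import sys

# Data objects of a v2-format RBD image are named rbd_data.<image id>.<object no>
OBJ_RE = re.compile(r"rbd_data\.([0-9a-f]+)\.")

def busiest_images(logfile):
    counts = collections.Counter()
    with open(logfile) as f:
        for line in f:
            for image_id in OBJ_RE.findall(line):
                counts[image_id] += 1
    return counts

if __name__ == "__main__":
    # e.g. python rbd_hotspots.py /var/log/ceph/ceph-osd.0.log
    for image_id, hits in busiest_images(sys.argv[1]).most_common(10):
        print(image_id, hits, "object accesses")

An image id reported this way can be matched back to an image name via the block_name_prefix field shown by rbd info <image>.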

Re: [ceph-users] utilization of rbd volume

2018-12-28 Thread Sinan Polat
Hi Jason, Thanks for your reply. Unfortunately we do not have access to the clients. We are running Red Hat Ceph 2.x, which is based on Jewel; that means we cannot pinpoint who or what is causing the load on the cluster, am I right? Thanks! Sinan > On 28 Dec 2018, at 15:14, Jason Dillaman

Re: [ceph-users] EC pools grinding to a screeching halt on Luminous

2018-12-28 Thread Mohamad Gebai
Hi Marcus, On 12/27/18 4:21 PM, Marcus Murwall wrote: > Hey Mohamad > > I work with Florian on this issue. > Just reinstalled the ceph cluster and triggered the error again. > Looking at iostat -x 1 there is basically no activity at all against > any of the osds. > We get blocked ops all over the
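[Editor's note: for readers following along, a hedged Python sketch of one way to look at those blocked ops. It calls the standard admin-socket command ceph daemon osd.N dump_ops_in_flight on the OSD host and prints ops older than a threshold; the OSD ids and the 30-second cutoff are assumptions, and the ops/age/description fields reflect the usual Luminous output.]

import json
import subprocess

def ops_in_flight(osd_id):
    """Ask a local OSD's admin socket for its current in-flight ops."""
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "dump_ops_in_flight"])
    return json.loads(out).get("ops", [])

def report_slow_ops(osd_ids, min_age=30.0):
    for osd_id in osd_ids:
        for op in ops_in_flight(osd_id):
            if op.get("age", 0) >= min_age:
                print(f"osd.{osd_id}: {op['age']:.0f}s {op['description']}")

if __name__ == "__main__":
    # OSD ids local to this host -- adjust to your layout.
    report_slow_ops([0, 1, 2])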

Re: [ceph-users] utilization of rbd volume

2018-12-28 Thread Jason Dillaman
With the current releases of Ceph, the only way to accomplish this is by gathering the IO stats on each client node. However, with the future Nautilus release, this data will be available directly from the OSDs. On Fri, Dec 28, 2018 at 6:18 AM Sinan Polat wrote: > > Hi all, > > We have a coup
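[Editor's note: a hedged sketch of the per-client gathering Jason mentions, for librbd clients that have an admin socket enabled in their ceph.conf. It shells out to ceph --admin-daemon <asok> perf dump and picks out the librbd sections; the socket path glob and the rd/wr counter names are assumptions about a typical setup, not guaranteed defaults.]

import glob
import json
import subprocess

def perf_dump(asok):
    """Dump perf counters from a client admin socket."""
    out = subprocess.check_output(["ceph", "--admin-daemon", asok, "perf", "dump"])
    return json.loads(out)

# 'admin socket' must be enabled in the client's ceph.conf; the glob below is
# only an example location.
for asok in glob.glob("/var/run/ceph/ceph-client.*.asok"):
    for section, counters in perf_dump(asok).items():
        # librbd sections are typically named librbd-<image id>-<pool>-<image>,
        # with cumulative rd/wr op counts and rd_bytes/wr_bytes.
        if section.startswith("librbd"):
            print(asok, section,
                  "reads:", counters.get("rd"),
                  "writes:", counters.get("wr"))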

[ceph-users] utilization of rbd volume

2018-12-28 Thread Sinan Polat
Hi all, We have a couple of hundred RBD volumes/disks in our Ceph cluster; each RBD disk is mounted by a different client. Currently we see quite high IOPS on the cluster, but we don't know which client/RBD is causing it. Is there an easy way to see the utilization per RBD disk? Thank

Re: [ceph-users] list admin issues

2018-12-28 Thread Ilya Dryomov
On Sat, Dec 22, 2018 at 7:18 PM Brian : wrote: > > Sorry to drag this one up again. > > Just got the unsubscribed due to excessive bounces thing. > > 'Your membership in the mailing list ceph-users has been disabled due > to excessive bounces The last bounce received from you was dated > 21-Dec-20