Hi, I installed the iSCSI gateway as described at
http://docs.ceph.com/docs/master/rbd/iscsi-overview/
kernel version 4.16.1
Ceph v12.2.10 Luminous
Earlier I used the tgt iSCSI target. I could connect 3 ESXi servers to one LUN
as a shared drive, and if one ESXi host was dropped the machines were
transferred to the others from the
Hi. .. Just an update - This looks awesome.. and in an 8x5 company -
Christmas is a good period to rebalance a cluster :-)
>> I'll try it out again - last time I tried it complained about older clients -
>> it should be better now.
> upmap is supported since kernel 4.13.
>
>> Second - should the rewei
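For reference, the sequence I have in mind is roughly the usual one below
(just a sketch, not yet validated on this cluster - the
set-require-min-compat-client step is the part that refuses to proceed while
older clients are still connected):

    ceph features                                    # check what the connected clients support
    ceph osd set-require-min-compat-client luminous  # required before upmap can be used
    ceph mgr module enable balancer                  # if not already enabled
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status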
There isn't anything built-in to Ceph to help with that. You could
either increase the logging level for the OSDs and grep out
"rbd_data." object accesses (assuming v2 image format), group by the
image id, and figure out which image is being hammered. You could also
create custom "iptables" LOG rules
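As a rough sketch of the log-grepping approach (assuming default log
locations, a v2-format image and the "rbd" pool; debug_ms 1 is just one
level that happens to log the incoming ops, and it is noisy, so turn it
back down afterwards):

    ceph tell 'osd.*' injectargs '--debug_ms 1'   # temporarily log incoming ops on all OSDs
    # on each OSD host (or after aggregating the logs), count accesses per image id
    grep -hoE 'rbd_data\.[0-9a-f]+' /var/log/ceph/ceph-osd.*.log \
        | sort | uniq -c | sort -rn | head
    ceph tell 'osd.*' injectargs '--debug_ms 0'   # turn message logging back off

    # map the busiest id back to an image name via its block_name_prefix
    for img in $(rbd ls rbd); do
        echo -n "$img: "; rbd info rbd/$img | grep block_name_prefix
    done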
Hi Jason,
Thanks for your reply.
Unfortunately we do not have access to the clients.
We are running Red Hat Ceph 2.x, which is based on Jewel; that means we cannot
pinpoint who or what is causing the load on the cluster, am I right?
Thanks!
Sinan
> On 28 Dec 2018, at 15:14, Jason Dillaman
Hi Marcus,
On 12/27/18 4:21 PM, Marcus Murwall wrote:
> Hey Mohamad
>
> I work with Florian on this issue.
> Just reinstalled the ceph cluster and triggered the error again.
> Looking at iostat -x 1 there is basically no activity at all against
> any of the osds.
> We get blocked ops all over the
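While you are at it, it may be worth dumping what those blocked ops are
actually waiting on, e.g. (rough sketch - substitute a real OSD id taken from
ceph health detail, and run the daemon commands on the host carrying that OSD):

    ceph health detail                       # shows which OSDs report slow/blocked requests
    ceph daemon osd.0 dump_ops_in_flight     # ops currently stuck, with their current state
    ceph daemon osd.0 dump_historic_ops      # recent slow ops, with per-step timings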
With the current releases of Ceph, the only way to accomplish this is
by gathering the IO stats on each client node. However, with the
future Nautilus release, this data will be available directly from
the OSDs.
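To illustrate both, roughly (the admin socket path below is only an example
and depends on how the client is configured):

    # today: per-client librbd perf counters, if the client exposes an admin socket
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.1234.asok perf dump
    # look for the "librbd-<image id>-<pool>-<image>" section (rd, wr, rd_bytes, wr_bytes)

    # with Nautilus: OSD-side per-image stats via the rbd_support mgr module
    ceph mgr module enable rbd_support    # if not already enabled
    rbd perf image iotop                  # top-like view of the busiest images
    rbd perf image iostat                 # periodic per-image IOPS / throughput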
On Fri, Dec 28, 2018 at 6:18 AM Sinan Polat wrote:
>
> Hi all,
>
> We have a coup
Hi all,
We have a couple of hundred RBD volumes/disks in our Ceph cluster; each RBD
disk is mounted by a different client. Currently we see quite high IOPS
on the cluster, but we don't know which client/RBD is causing it.
Is there an easy way to see the utilization per RBD disk?
Thank
On Sat, Dec 22, 2018 at 7:18 PM Brian : wrote:
>
> Sorry to drag this one up again.
>
> Just got the unsubscribed due to excessive bounces thing.
>
> 'Your membership in the mailing list ceph-users has been disabled due
> to excessive bounces. The last bounce received from you was dated
> 21-Dec-20