Re: [ceph-users] [jewel] High fs_apply_latency osds

2018-03-10 Thread shadow_lin
Hi Chris, The OSDs are running on ARM nodes. Every node has a two-core 1.5 GHz 32-bit ARM CPU and 2 GB of RAM and runs 2 OSDs. Each HDD is 10 TB and the journal is colocated with the data on the same disk. The drives are half full now, but the problem I described also happened when the HDDs were empty. The filesystem is ext4 ...
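
For anyone chasing similar symptoms, the latency can be confirmed directly from the cluster (a minimal sketch; osd.12 is just a placeholder ID and the commands assume a Jewel-era FileStore OSD with the default admin keyring):

  # per-OSD commit/apply latency in ms, as reported by the OSDs themselves
  ceph osd perf

  # drill into one OSD's own counters on the host that runs it
  ceph daemon osd.12 perf dump | grep -A 3 apply_latency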

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-10 Thread shadow_lin
Hi Mike, So for now only the SUSE kernel with target_rbd_core and tcmu-runner can run active/passive multipath safely? I am a newbie to iSCSI. I think the overwrite problem caused by stuck IO getting executed later can happen with both active/active and active/passive. What makes active/passive safer than active/active ...

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-10 Thread Jason Dillaman
On Sat, Mar 10, 2018 at 7:42 AM, shadow_lin wrote: > Hi Mike, > So for now only the SUSE kernel with target_rbd_core and tcmu-runner can run active/passive multipath safely? Negative, the LIO / tcmu-runner implementation documented here [1] is safe for active/passive. > I am a newbie to iSCSI. I think ...

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-10 Thread shadow_lin
Hi Jason, > As discussed in this thread, for active/passive, upon initiator failover, we used the RBD exclusive-lock feature to blacklist the old "active" iSCSI target gateway so that it cannot talk w/ the Ceph cluster before new writes are accepted on the new target gateway. I can get duri...
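
The fencing step being discussed can be illustrated with plain CLI commands (a sketch only; the pool/image name and gateway address are placeholders, and the real ceph-iscsi tooling performs this automatically on failover):

  # exclusive-lock must be enabled on the backing image
  rbd feature enable iscsi-pool/lun0 exclusive-lock

  # see which client currently holds the lock
  rbd lock list iscsi-pool/lun0

  # blacklist the old "active" gateway so its stale in-flight writes can never land
  ceph osd blacklist add 192.168.1.10:0/0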

Re: [ceph-users] New Ceph cluster design

2018-03-10 Thread Vincent Godin
Hi, As I understand it, you'll have one RAID1 of two SSDs for 12 HDDs. A WAL is used for all writes on your host. If you have good SSDs, they can handle 450-550 MBps. Your 12 SATA HDDs can handle 12 x 100 MBps, that is to say 1200 MBps. So your RAID 1 will be the bottleneck with this design. A good ...
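
The arithmetic behind that conclusion, spelled out (a sketch; the 100 MB/s per HDD and ~500 MB/s per SSD figures are the assumptions from the message above, not measurements):

  # aggregate sequential write bandwidth of the data disks
  echo $((12 * 100))   # 1200 MB/s across the 12 HDDs

  # a RAID1 pair writes every byte to both SSDs, so the shared
  # WAL/journal device tops out at roughly one SSD's throughput
  echo $((1 * 500))    # ~500 MB/s through the RAID1 pair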

[ceph-users] (no subject)

2018-03-10 Thread Nathan Dehnel
Trying to create an OSD: gentooserver ~ # ceph-volume lvm create --data /dev/sdb Running command: ceph-authtool --gen-print-key Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e70500fe-0d51-48c3-a607-414957886726 Running command: ...
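
When ceph-volume stops partway through like this, it can help to see what was actually provisioned before retrying (a sketch; the OSD id 0 is a placeholder, while the fsid is the one printed by the failed run above):

  # list LVs that ceph-volume knows about, with their osd id and osd fsid
  ceph-volume lvm list

  # if the OSD was created but never started, it can be activated by hand
  ceph-volume lvm activate 0 e70500fe-0d51-48c3-a607-414957886726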

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-10 Thread Jason Dillaman
On Sat, Mar 10, 2018 at 10:11 AM, shadow_lin wrote: > Hi Jason, >> As discussed in this thread, for active/passive, upon initiator failover, we used the RBD exclusive-lock feature to blacklist the old "active" iSCSI target gateway so that it cannot talk w/ the Ceph cluster before new writes ...

Re: [ceph-users] (no subject)

2018-03-10 Thread Oliver Freyermuth
Hi Nathan, this indeed appears to be a Gentoo-specific issue. They install the file at /usr/libexec/ceph/ceph-osd-prestart.sh instead of /usr/lib/ceph/ceph-osd-prestart.sh. It depends on how strongly you follow the FHS (http://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html), which is the ...
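
A local workaround, until packaging settles on one path, is a systemd drop-in that repoints the prestart step (a sketch; it assumes the stock ceph-osd@.service runs the prestart script from ExecStartPre, as the upstream unit files do, and osd 0 is a placeholder):

  # /etc/systemd/system/ceph-osd@.service.d/prestart-path.conf
  [Service]
  ExecStartPre=
  ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i

  # then reload units and restart the affected OSD
  systemctl daemon-reload
  systemctl restart ceph-osd@0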

[ceph-users] rbd-nbd not resizing even after kernel tweaks

2018-03-10 Thread Alex Gorbachev
I am running into the problem described in https://lkml.org/lkml/2018/2/19/565 and https://tracker.ceph.com/issues/23137. I went ahead and built a custom kernel reverting the change https://github.com/torvalds/linux/commit/639812a1ed9bf49ae2c026086fbf975339cd1eef. After that, a resize shows up in lsblk ...
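
For reference, the resize check described above looks like this (a sketch; the pool/image name and nbd device number are placeholders):

  # grow the image on the Ceph side (size is in MB by default)
  rbd resize rbd/myimage --size 20480

  # the mapped rbd-nbd device should reflect the new size without a remap
  blockdev --getsize64 /dev/nbd0
  lsblk /dev/nbd0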

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-10 Thread Maged Mokhtar
-- From: "Jason Dillaman" Sent: Sunday, March 11, 2018 1:46 AM To: "shadow_lin" Cc: "Lazuardi Nasution"; "Ceph Users" Subject: Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock On Sat, Mar 10, 2018 at 10:11 AM, shadow_lin ...