Hi Chris,
The OSDs are running on ARM nodes. Every node has a two-core 1.5GHz 32-bit ARM
CPU and 2GB RAM and runs 2 OSDs. The HDD is 10TB and the journal is colocated
with data on the same disk.
Drives are half full now, but the problem I described also happened when the
HDDs were empty. Filesystem is ext4 bec
Hi Mike,
So for now, only the SUSE kernel with target_rbd_core and tcmu-runner can run
active/passive multipath safely?
I am a newbie to iSCSI. I think the problem of stuck IO getting executed and
causing overwrites can happen with both active/active and active/passive.
What makes the active/passive safer than act
On Sat, Mar 10, 2018 at 7:42 AM, shadow_lin wrote:
> Hi Mike,
> So for now, only the SUSE kernel with target_rbd_core and tcmu-runner can run
> active/passive multipath safely?
Negative, the LIO / tcmu-runner implementation documented here [1] is
safe for active/passive.
> I am a newbie to iscsi. I t
Hi Jason,
>As discussed in this thread, for active/passive, upon initiator
>failover, we used the RBD exclusive-lock feature to blacklist the old
>"active" iSCSI target gateway so that it cannot talk w/ the Ceph
>cluster before new writes are accepted on the new target gateway.
I can get duri
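For context, the blacklisting is also visible and controllable from the Ceph side; a rough sketch with a made-up client address (in the gateway workflow this normally happens automatically on failover, so this is only illustrative):
ceph osd blacklist ls                                # list current blacklist entries
ceph osd blacklist add 192.168.1.10:0/123456 3600    # fence an old gateway's client address for an hour
ceph osd blacklist rm 192.168.1.10:0/123456          # clear it once the old gateway is known to be down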
Hi,
As I understand it, you'll have one RAID1 of two SSDs for 12 HDDs. The
WAL is used for all writes on your host. If you have good SSDs, they
can handle 450-550 MBps. Your 12 SATA HDDs can handle 12 x 100 MBps,
that is to say 1200 MBps. So your RAID1 will be the bottleneck with
this design. A goo
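To spell out the rough arithmetic behind that (assuming ~100 MBps per SATA HDD and ~500 MBps per good SSD, which are ballpark figures):
12 HDDs x ~100 MBps        = ~1200 MBps of aggregate HDD write bandwidth
2 SSDs in RAID1 (mirrored) = ~450-550 MBps effective, since every WAL write hits both SSDs
So the shared WAL mirror saturates at well under half of what the HDDs could absorb.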
Trying to create an OSD:
gentooserver ~ # ceph-volume lvm create --data /dev/sdb
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e70500fe-0d51-48c3-a607-414957886726
Runn
On Sat, Mar 10, 2018 at 10:11 AM, shadow_lin wrote:
> Hi Jason,
>
>>As discussed in this thread, for active/passive, upon initiator
>>failover, we used the RBD exclusive-lock feature to blacklist the old
>>"active" iSCSI target gateway so that it cannot talk w/ the Ceph
>>cluster before new writes
Hi Nathan,
this indeed appears to be a Gentoo-specific issue.
They install the file at:
/usr/libexec/ceph/ceph-osd-prestart.sh
instead of
/usr/lib/ceph/ceph-osd-prestart.sh
It depends on how strictly you follow the FHS (
http://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html )
which is th
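If the stock unit really hardcodes the /usr/lib/ceph path in its ExecStartPre (that is how the upstream unit looks to me, but treat it as an assumption), a systemd drop-in on the Gentoo side could redirect it, roughly:
# /etc/systemd/system/ceph-osd@.service.d/gentoo-prestart.conf   (hypothetical drop-in name)
[Service]
ExecStartPre=
ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i
followed by a systemctl daemon-reload. The empty ExecStartPre= line clears the inherited value before setting the new path.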
I am running into the problem described in
https://lkml.org/lkml/2018/2/19/565 and
https://tracker.ceph.com/issues/23137
I went ahead and built a custom kernel reverting the change
https://github.com/torvalds/linux/commit/639812a1ed9bf49ae2c026086fbf975339cd1eef
After that, the resize shows up in lsblk.
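In case anyone wants to reproduce it, a minimal check I would expect to show the mismatch (pool/image names are just examples):
rbd resize rbd/testimg --size 20480        # grow the image to 20 GiB
rbd info rbd/testimg | grep size           # librbd reports the new size right away
lsblk /dev/rbd0                            # stale size with the affected kernels
blockdev --getsize64 /dev/rbd0             # same, until the commit above is reverted or fixed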
--
From: "Jason Dillaman"
Sent: Sunday, March 11, 2018 1:46 AM
To: "shadow_lin"
Cc: "Lazuardi Nasution" ; "Ceph Users"
Subject: Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive
Lock
On Sat, Mar 10, 2018 at 10:11 AM, shadow