Hello,

[Unusual setup]
Last week, I finally managed to get oVirt 4.2.1.7 working with iSCSI multipathing on both hosts and guests, connected to a Dell EqualLogic SAN which provides one single virtual IP. My hosts have two dedicated NICs for iSCSI, but on the same VLAN. Torture tests showed good resilience.
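For reference, the host-side binding was roughly the following (interface names and the portal address below are illustrative, not my exact values):

```shell
# Create one iSCSI iface per dedicated NIC (names are examples)
iscsiadm -m iface -I iscsi-nic1 --op new
iscsiadm -m iface -I iscsi-nic1 --op update -n iface.net_ifacename -v em1
iscsiadm -m iface -I iscsi-nic2 --op new
iscsiadm -m iface -I iscsi-nic2 --op update -n iface.net_ifacename -v em2

# Discover and log in through both ifaces against the single group IP
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260 -I iscsi-nic1 -I iscsi-nic2
iscsiadm -m node -L all

# Each LUN should then show two paths
multipath -ll
```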

[Classical setup]
But this year we plan to create at least two additional DCs, connecting their hosts to a "classical" SAN, i.e. one which provides TWO IPs on segregated, non-routed VLANs, and we'd like to use the same iSCSI-multipathing feature.
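On such a setup, I would expect discovery and login to go strictly per portal and per VLAN, along these lines (addresses and iface names are hypothetical):

```shell
# VLAN A: NIC em1 talks only to the portal on its own VLAN
iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260 -I iscsi-nic1
# VLAN B: NIC em2 talks only to the portal on the other VLAN
iscsiadm -m discovery -t sendtargets -p 192.168.20.10:3260 -I iscsi-nic2
iscsiadm -m node -L all
```

What is unclear to me is how oVirt's "iSCSI Multipathing" networks map onto this when the two portals are not mutually reachable.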

The discussion below could lead one to think that oVirt needs the two iSCSI VLANs to be routed, allowing the hosts in one VLAN to access resources in the other.
As Vinicius explained, this is, to say the least, not a best practice.

Searching through the mailing list archive, I found no answer to Vinicius' question.

May a Red Hat storage and/or network expert enlighten us on these points?

Regards,

--
Nicolas Ecarnot

On 21/07/2017 at 20:56, Vinícius Ferrão wrote:

On 21 Jul 2017, at 15:12, Yaniv Kaul <[email protected]> wrote:



On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão <[email protected]> wrote:

    Hello,

    I skipped this message entirely yesterday. So this is by
    design? Because the best practices for iSCSI MPIO, as far as I
    know, recommend two completely separate paths. If this can't be
    achieved with oVirt, what's the point of running MPIO?


With regular storage it is quite easy to achieve using 'iSCSI bonding'.
I think the Dell storage is a bit different and requires some more investigation - or experience with it.
 Y.

Yaniv, thank you for answering this. I'm really hoping that a solution will be found.

Actually, I'm not running anything from Dell. My storage system is FreeNAS, which is pretty standard, and as far as I know iSCSI best practices dictate segregated networks for proper operation.

All other major virtualization products support iSCSI this way: vSphere, XenServer and Hyper-V. So I was really surprised that oVirt (and even RHV; I requested a trial yesterday) does not implement iSCSI with the well-known best practices.

There's a picture of the architecture that I took from Google when searching for "mpio best practices": https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640

And as you can see, it's segregated networks on a machine reaching the same target.

In my case, my datacenter has five hypervisor machines, each with two NICs dedicated to iSCSI. The two NICs connect to different converged Ethernet switches, and the storage is connected the same way.

So it really does not make sense that the first NIC can reach the second NIC's target. In case of a switch failure the cluster would go down anyway, so what's the point of running MPIO? Right?

Thanks once again,
V.
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
