Hi, I've tried bridged iSCSI traffic with Xen domUs and got successful logins to ietd and SCST portals. You need to disable at least TX (not sure about RX) checksum offloading on the physical NIC as well as on the vifs. I'm currently not sure whether and how this can be done for the vifs from inside the dom0; my test domUs were set up to disable checksums during ifup. Throughput was horrible, and packet loss on the physical interfaces was around 20%. I wouldn't recommend this setup...
My current setup uses CLVM on top of multipath iSCSI LUNs inside the dom0.
domU I/O is done via the usual blkback/blkfront. Using multipath's
path_grouping_policy multibus, rr_weight uniform and rr_min_io 1, I'm able to
get ~160 MB/s read and ~190 MB/s write speed with dd oflag=direct/iflag=direct
and bs=1G. Obviously, concurrent random I/O from the domUs doesn't reach that throughput...
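The multipath settings mentioned above would look roughly like this in /etc/multipath.conf (only the relevant defaults are shown as a sketch; device sections and blacklists are omitted, and this is not the exact file from that setup):

```
# /etc/multipath.conf (relevant excerpt)
defaults {
    # Put all paths into a single priority group so I/O is
    # spread across both iSCSI sessions simultaneously.
    path_grouping_policy    multibus

    # Weight every path equally in the round-robin.
    rr_weight               uniform

    # Switch to the next path after every single I/O request.
    rr_min_io               1
}
```

The throughput numbers above would be measured with something like `dd if=/dev/mapper/<mpath-device> of=/dev/null iflag=direct bs=1G count=1` (device name assumed).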
Servers are Supermicro X8DTN+-F with an additional Intel dual-port GBit NIC.
According to lspci, all NICs are 82576. eth0 is bridged for the public domU IP,
eth1 is bridged for the private domU IP, and eth3 and eth4 are connected to
separate iSCSI switches and are only available to dom0.
Regarding your current question about creating the iSCSI interfaces: No, I'm
not binding to MAC addresses. Just enabling
/proc/sys/net/ipv4/conf/*/arp_filter and using two different IP ranges for the
two iSCSI paths is enough, and much easier to configure (and to remember). For
EqualLogic's "one IP to bind them all" concept, arp_filter alone did it, even
with two IPs in the same range on two interfaces.
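A sketch of the arp_filter setup described above (the interface names eth3/eth4 are taken from the dom0 layout mentioned earlier; adjust them to your own iSCSI NICs):

```shell
# Make each NIC answer ARP only for addresses configured on it --
# needed when two interfaces have IPs in the same (or a directly
# reachable) iSCSI subnet, as with EqualLogic's single group IP.
sysctl -w net.ipv4.conf.all.arp_filter=1

# Or enable it per interface instead of globally:
sysctl -w net.ipv4.conf.eth3.arp_filter=1
sysctl -w net.ipv4.conf.eth4.arp_filter=1

# Persist across reboots:
echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
```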
After reading a few lines of the Google Groups thread, I noticed that bonding
the iSCSI interfaces is no longer on your list. In my tests, too, dm-multipath
showed much better performance than bonding.
--
Stephan Seitz
Senior System Administrator
netz-haut GmbH
multimediale kommunikation
Zweierweg 22
97074 Würzburg
Telefon: 0931 2876247
Telefax: 0931 2876248
Web: www.netz-haut.de
Amtsgericht Würzburg – HRB 10764
Geschäftsführer: Michael Daut, Kai Neugebauer
On Friday, 2011-10-21 at 09:53 -0400, Hoot, Joseph wrote:
Hi all,
I've been attempting to do the following for a couple of years now and
have not had success. And we're getting ready to start sandboxing a newer
environment. As such, I'd like to approach the topic that I touched on in the
following thread:
https://groups.google.com/d/msg/open-iscsi/LpDRjQF_E0k/OIF1XZYdcRIJ
Basically we want the following using only 4 nics:
eth0 = bond0 (primary nic)
eth1 = bond0 (other slave nic)
eth2 = iSCSI nic 1 (MTU=9000)
eth3 = iSCSI nic 2 (MTU=9000)
eth2 and eth3 are created as iscsiadm ifaces using the following syntax:
iscsiadm -m iface -Ieth2 -o new
iscsiadm -m iface -Ieth3 -o new
iscsiadm -m iface -Ieth2 -o update -n iface.hwaddress -v 00:12:07:CE:24:E8
iscsiadm -m iface -Ieth3 -o update -n iface.hwaddress -v 00:12:07:CE:24:E9
1) iSCSI dom0 initiation - We want the dom0 to be able to mount iSCSI
storage through two ifaces (and therefore 2 iSCSI sessions) to achieve better
throughput through multiple load-balancing algorithms.
2) iSCSI block devices passed through as block devices to the VMs -
used so that we can re-present (at the dom0 layer) to the VM guest. This allows
us to have better security (i.e., we don't need to include any virtual NICs
inside the guests that would sit on a common storage network; the guest only
sees a block device).
3) VM guest iSCSI initiation - Should the need arise, we would want
to be able to present the following NICs to a guest VM:
- vif0 = public network - connected using a vlan that is carved off of
bond0 and then connected to a software bridge (via bridge-utils). This is like
xenbr0. Since the bridge has bond0 connected to it, the active-passive load
balancing is already achieved at that layer. So I wouldn't need to also do
bonding in the guest vm.
- vif1 = nic assigned to a bridge that is connected to eth2 above.
- vif2 = another NIC assigned to a different bridge that is connected to
eth3 above.
* NOTE - vif1 and vif2 would then be available for me to set up
iscsiadm ifaces inside the guest. This would allow load balancing and better
throughput inside the guest itself. And since Oracle VM (which is what I'm
using as my hypervisor -- it just uses Xen) doesn't officially support the
EqualLogic HIT Kit, I would then be able to install the Dell EqualLogic HIT Kit
inside my VM guests to achieve better routing of the iSCSI traffic.
I attempted to do this in the Oracle VM 2.2.2 world and was
unsuccessful. I also attempted to set this up using a non-EqualLogic iSCSI
target (I used IET) and was still NOT able to get it to behave the way one
would expect. The results are always the same -- namely, discovery works just
fine on the dom0 nodes (so my routing tables should have nothing to do with
this), but when I actually go to log into the targets at the OVM server, I am
unable to. It's almost like something in the bridge-utils package or the iSCSI
kernel modules is not allowing this to work properly.
I am attaching an ovm_storage.png to this thread (not sure if it'll
make it into the Google Groups or not).
The example I used in ovm_storage.png is the following (again, this
is ONLY the storage side... so let's not worry about bond0, eth0, and eth1 at
the moment):
eth2 is set up in /etc/sysconfig/network-scripts to connect to a bridge
called "viscsi0".
eth3 is set up in /etc/sysconfig/network-scripts to connect to a bridge
called "viscsi1".
So I have 2 software bridges:
viscsi0 - has port eth2 connected to it
viscsi1 - has port eth3 connected to it
Then, I carved off a viscsi0:1 sub-interface in
/etc/sysconfig/network-scripts/ to allow that bridge device to have an IP. This
IP is for the 1st iscsiadm iface to be set up. I showed the commands above that
I used to create the eth2 and eth3 ifaces. NOTE - the ifaces were created
using iface.hwaddress set to the Ethernet MAC addresses.
The viscsi1:1 sub-interface is created in the same fashion. I think you
could also just leave the IP on the bridge itself, but since OVM 2.2.2 worked
this way, I had tested it like this.
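A sketch of the network-scripts files implied by the layout above (file contents, IP address, and netmask are illustrative assumptions, not taken from the actual setup):

```
# /etc/sysconfig/network-scripts/ifcfg-eth2
# Physical iSCSI NIC, enslaved to the viscsi0 bridge; carries no IP itself.
DEVICE=eth2
ONBOOT=yes
BRIDGE=viscsi0
MTU=9000

# /etc/sysconfig/network-scripts/ifcfg-viscsi0
# The software bridge eth2 plugs into.
DEVICE=viscsi0
TYPE=Bridge
ONBOOT=yes
MTU=9000

# /etc/sysconfig/network-scripts/ifcfg-viscsi0:1
# Sub-interface holding the IP used by the first iscsiadm iface.
DEVICE=viscsi0:1
ONBOOT=yes
IPADDR=10.1.1.11        # assumed address on the first iSCSI subnet
NETMASK=255.255.255.0
```

The eth3/viscsi1/viscsi1:1 files would mirror these with the second subnet's address.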
Then, without even going to the next step of seeing if I can get a VM
with 3 NICs in it (1 for the public network and 2 for iSCSI connections), I
simply created the iscsiadm ifaces and tried to discover the iSCSI targets
that I would need to connect to the OVM server for use as repositories.
I do `iscsiadm -m discovery -t st -p <ip_of_eql_box_here>` and it
returns the list of targets (by the way, I have made sure to delete all known
targets prior to testing this). When I go to log into the targets, it is then
unable to connect. Like I said, I've tested this both with EqualLogic on
physical hardware and, inside a VMware host-only network, using IET targets.
The results are always the same.
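For reference, the login step that fails here would look something like this (the target IQN and portal IP are placeholders, not real values from the setup):

```shell
# Discovery bound to the two ifaces created earlier:
iscsiadm -m discovery -t st -p 10.1.1.50 -I eth2 -I eth3

# Logging in to one discovered target through a specific iface --
# this is the step that fails in the bridged configuration:
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-target \
         -p 10.1.1.50 -I eth2 --login
```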
QUESTIONS:
-------------------
- Has anyone gotten something like this to work? If yes, did you
create your iSCSI ifaces differently than I did? What was your solution to get
it to work?
Thanks,
Joe
