On 2012-01-24 21:59, James Carlson wrote:
Robin Axelsson wrote:
If you have two interfaces inside the same zone that have the same IP
prefix, then you have to have IPMP configured, or all bets are off.
Maybe it'll work.  But probably not.  And it was never supported that
way by Sun.
The idea I have with using two NICs is to create a separation between
the virtual machine(s) and the host system so that the network activity
of the virtual machine(s) won't interfere with the network activity of
the physical host machine.
Nice idea, but it unfortunately won't work.  When two interfaces are
plumbed up like that -- regardless of what VM or bridge or hub or
virtualness there might be -- the kernel sees two IP interfaces
configured with the same IP prefix (subnet), and it considers them to be
completely interchangeable.  It can (and will!) use either one at any
time.  You don't have control over where the packets go.

Well, unless you get into playing tricks with IP Filter.  And if you do
that, then you're in a much deeper world of hurt, at least in terms of
performance.
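(To be concrete: the sort of trick I mean is IP Filter rules along
these lines in /etc/ipf/ipf.conf -- only a rough sketch, using the two
addresses mentioned later in this thread as placeholders:

  # keep the VM-side address off the host's "main" interface
  block out quick on e1000g1 from 10.40.137.171/32 to any
  # and keep the host's address off the VM-side interface
  block out quick on e1000g2 from 10.40.137.185/32 to any

reloaded with "ipf -Fa -f /etc/ipf/ipf.conf".  It can be made to limp
along, but it's ugly and it costs you.)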
Here's what the VirtualBox manual says about bridged networking:

"*Bridged networking*:

This is for more advanced networking needs such as network simulations and running servers in a guest. When enabled, VirtualBox connects to one of your installed network cards and exchanges network packets directly, circumventing your host operating system's network stack.

With bridged networking, VirtualBox uses a device driver on your _*host*_ system that filters data from your physical network adapter. This driver is therefore called a "net filter" driver. This allows VirtualBox to intercept data from the physical network and inject data into it, effectively creating a new network interface in software. When a guest is using such a new software interface, it looks to the host system as though the guest were physically connected to the interface using a network cable: the host can send data to the guest through that interface and receive data from it. This means that you can set up routing or bridging between the guest and the rest of your network.

For this to work, VirtualBox needs a device driver on your host system. The way bridged networking works has been completely rewritten with VirtualBox 2.0 and 2.1, depending on the host operating system. From the user perspective, the main difference is that complex configuration is no longer necessary on any of the supported host operating systems."

The virtual hub that creates the bridge between the VM network ports and
the physical port taps into the network stack of the host machine, and I
suspect that this configuration is not entirely seamless. I think that
the virtual bridge interferes with the network stack so letting the
virtual bridge have its own network port to play around with has turned
out to be a good idea, at least when I was running OSOL b134 - OI148a.
I think you're going about this the wrong way, at least with respect to
these two physical interfaces.

I suspect that the right answer is to plumb only *ONE* of them in the
zone, and then use the other by name inside the VM when creating the
virtual hub.  That second interface should not be plumbed or configured
to use IP inside the regular OpenIndiana environment.  That way, you'll
have two independent paths to the network.
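In VirtualBox terms that would look roughly like this -- a sketch only;
"myvm" is a placeholder for your actual VM name:

  # host side: make sure the interface carries no IP configuration at all
  ifconfig e1000g2 unplumb
  ifconfig e1000g2 inet6 unplumb
  # VM side: bridge the guest NIC straight onto the bare interface
  VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 e1000g2

That way the guest rides e1000g2 on its own, and the host's IP stack
never touches that interface.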
Perhaps the way to do it is to create a dedicated jail/zone for VirtualBox to run in and "plumb the e1000g2" to that zone. I'm a little curious as to how this would affect performance; I'm not sure if you have to split up the CPU cores etc. between zones or if that is taken care of automatically, as the zones pretty much share the same kernel (and its task scheduler).
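If I try that, I imagine it would be an exclusive-IP zone so that e1000g2 belongs entirely to the zone's own stack -- an untested sketch, with 'vboxzone' and the zonepath as placeholders:

  zonecfg -z vboxzone
  zonecfg:vboxzone> create
  zonecfg:vboxzone> set zonepath=/zones/vboxzone
  zonecfg:vboxzone> set ip-type=exclusive
  zonecfg:vboxzone> add net
  zonecfg:vboxzone:net> set physical=e1000g2
  zonecfg:vboxzone:net> end
  zonecfg:vboxzone> commit
  zonecfg:vboxzone> exit

My understanding is that CPU time is handled by the one shared kernel scheduler unless resource pools or caps are configured explicitly, so I shouldn't have to split up cores by hand.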
I suppose I could try to configure IPMP. I guess I will have to
throw away the DHCP configuration and go for fixed IPs all the way, as
DHCP only gives me two IP addresses and I will need four of them. But then
we have the problem with the VMs and how to separate them from the
network stack of the host.
It's possible to have DHCP generate multiple addresses per interface.
And it's possible to use IPMP with just one IP address per interface (in
fact, you can use it with as few as one IP address per *group*).  And
it's possible to configure an IPMP group with some static addresses and
some DHCP.
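Roughly, with the post-Clearview ifconfig syntax -- only a sketch, with
your 10.40.137.185 address reused as a placeholder data address; see
ifconfig(1M) and in.mpathd(1M) for the real details:

  # create the IPMP meta-interface and give it the single data address
  ifconfig ipmp0 ipmp
  ifconfig ipmp0 10.40.137.185 netmask + broadcast + up
  # put both physical interfaces into the group; they need no data addresses
  ifconfig e1000g1 group ipmp0 up
  ifconfig e1000g2 group ipmp0 up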
In order to make DHCP generate more IP addresses I guess I have to generate a few (virtual) MAC addresses. Maybe ifconfig handles this internally.
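Or maybe the cleaner way to get an extra MAC address is a Crossbow VNIC -- just a guess on my part:

  # a VNIC gets its own (randomly generated) MAC address by default
  dladm create-vnic -l e1000g1 vnic0
  ifconfig vnic0 plumb
  ifconfig vnic0 dhcp start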

But read the documentation in the man pages.  IPMP may or may not be
what you really want here.  Based on the "isolation" demands mentioned,
I suspect it's not.  The only reason I mentioned it is that your current
IP configuration is invalid (unsupported, might not work, good luck with
that) without IPMP -- that doesn't mean you should use IPMP, but that
you should rethink the whole configuration.

One of the many interesting problems that happens with multiple
interfaces configured on the same network is that you get multicast and
broadcast traffic multiplication: each single message will be received
and processed by each of the interfaces.  Besides the flood of traffic
this causes (and the seriously bad things that will happen if you do any
multicast forwarding), it can also expose timing problems in protocols
that are listening to those packets.  When using IPMP, one working
interface is automatically designated to receive all incoming broadcast
and multicast traffic, and the others are disabled and receive unicast
only.  Without IPMP, you don't have that protection.

Another interesting problem is source address usage.  When the system
sends a packet, it doesn't really care what source address is used, so
long as the address is valid on SOME interface on the system.  The
output interface is chosen only by the destination IP address on the
packet -- not the source -- so you'll see packets with source address
"A" going out interface with address "B."  You might think you're
controlling interface usage by binding some local address, but you're
really not, because that's not how IP actually works.  With IPMP,
there's special logic engaged that picks source IP addresses to match
the output interface within the group, and then keeps the connection (to
the extent possible) on the same interface.

But those are just two small ways in which multiple interfaces
configured in this manner are a Bad Thing.  A more fundamental issue is
that it was just never designed to be used that way, and if you do so,
you're a test pilot.
This was very interesting and insightful. I've always wondered how Windows tells the difference between two network connections in a machine; now I see that it doesn't. Sometimes this can get corrupted in Windows and sever the internet connection completely. If I understand correctly, the TCP stack in Windows is borrowed from Sun. I guess this is a little OT, it's just a reflection.
I will follow these instructions if I choose to configure IPMP:
http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/
Wow, that's old.  You might want to dig up something a little more
modern.  Before OpenIndiana branched off of OpenSolaris (or before
Oracle slammed the door shut), a lot of work went into IPMP to make it
much more flexible.
I'll see if there is something more up-to-date. There are no man entries for 'ipmp' in OI and 'apropos' doesn't work for me.
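Maybe the relevant pages are filed under in.mpathd and ifconfig rather than 'ipmp', and perhaps apropos just needs its index rebuilt; I'll try something like:

  catman -w        # rebuild the windex database so apropos / man -k work
  man in.mpathd
  man ifconfig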

If you run "/sbin/route monitor" when the system is working fine and
leave it running until a problem happens, do you see any output produced?

If so, then this could fairly readily point the way to the problem.

With one port I mean that only one port is physically connected to the
switch; all other ports are disconnected. So I guess ifconfig
<port_id> unplumb would have no effect on such ports.
Not so!

In terms of getting the kernel's IRE entries correct, it doesn't matter
so much where the physical wires go.  It matters a whole lot what you do
with "ifconfig."
OK, but when it is not connected it has no IP address (since it is configured over DHCP) that could interfere with the multicast and the IP setup. Maybe this is a problem when the address is static.

I managed to reproduce a few short freezes while "/sbin/route monitor"
was running over ssh, but it didn't spit out any messages; perhaps I
should run it on a local terminal instead.
If the freeze is damaging network communication, then doing that to some
destination (such as ">  /tmp/monitor.out") that won't be affected by the
network is probably a good idea.
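In other words, something like:

  /sbin/route monitor > /tmp/monitor.out 2>&1 &

and then look at the file after the next freeze.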

It's a guess.  I'm guessing that what's really going on is that you have
either interfaces, IREs, or ARP entries flapping in the breeze due to
the odd configuration on this system.  "route monitor" should reveal that.

But it could be almost anything else.  Looking at kstats and/or
capturing network traffic may be necessary in order to find the problem.
I'll report back when I experience the next freeze-up...
I looked at the time stamps
of the entries in the /var/adm/messages and they do not match the
freeze-ups by the minute.
I assume that refers to the NWAM messages previously reported.  No, I
don't think those are the proximate cause of your problem.


I've been playing around with 'ifconfig <interface> unplumb' (as a superuser of course) but it doesn't appear to do anything on e1000g2. As a memory refresher, here's the setup:

e1000g1: 10.40.137.185, DHCP (the computer name is associated with this address in the /etc/hosts file)
e1000g2: 10.40.137.171, DHCP (bridged network of the VM is attached to this port)
rge0: <no IP address>, DHCP (no cable attached)

No error message comes after that command, when issuing 'ifconfig -a' everything looks the same, and '/sbin/route monitor' doesn't yield any messages. If I do the same on rge0 it disappears from the IPv4 section of the 'ifconfig -a' output but remains in the IPv6 section. I also see messages coming out of the route monitor (RTM_IFINFO, RTM_DELETE and RTM_DELADDR...) as a result of the unplumb command on rge0, which I think should be expected.

I can see that e1000g2 is operating in promiscuous mode (whatever that means), which the other Ethernet interfaces are not.

I tried 'ifconfig e1000g2 down' and the "UP" flag disappeared from the port in 'ifconfig -a'. Once again this only applies to IPv4; IPv6 remains unaffected. The route monitor yielded messages this time. The IP address 10.40.137.171 is still there, however, and the bridged network connection of the VM seems to be unaffected by this command (which would be desired if that command really severed the IFP from the host).

If I try to unplumb it again (ifconfig e1000g2 unplumb) I get the error:
"ifconfig: cannot unplumb e1000g2: Invalid argument provided"
which is a bit strange. The route monitor yields the message (in digested form): RTM_LOSING (Kernel Suspects partitioning) <DST,GATEWAY,NETMASK,IFA> default 10.40.137.1 default default
right after this command.

As a comparison; if I do the same on the rge0 which is already unplumbed I get:
"ifconfig: cannot unplumb rge0: Interface does not exist"
and no messages on the route monitor.

Maybe I'm using a bad monkey wrench...
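If I get another chance I'll try dropping the DHCP lease first and then unplumbing both protocol instances, in case that's what the 'Invalid argument' is about (just a guess on my part):

  ifconfig e1000g2 dhcp release     # let dhcpagent stop managing it first
  ifconfig e1000g2 unplumb          # the IPv4 instance
  ifconfig e1000g2 inet6 unplumb    # the IPv6 instance is plumbed separately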

Robin.
_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
