Jesse,

M2/eth0 is physically connected to M3/eth0. Both of these machines are running
OVS and have a GRE tunnel configured between them.
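
For reference, the GRE port on M2 is set up roughly like this (br0 is a
placeholder bridge name, and <M3-eth0-IP> stands for M3's underlay address,
which isn't shown in the diagram below, so treat both as illustrative):

    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 eth1        # port facing M1
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=<M3-eth0-IP> # placeholder: M3's eth0 address

M3 mirrors this, with remote_ip pointing back at M2's eth0 address.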


M2/eth1 is physically connected to M1/eth0. M1 is a client machine whose ping
requests/responses are expected to be GRE-encapsulated between M2 and M3.

For this purpose, I have configured a flow on M2 to match all packets arriving
on the eth1 port with M1's eth0 source MAC, and to send them out via the GRE
port (gre0); a minimal sketch of that flow is below.
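
Sketch of the flow (the OpenFlow port numbers and the MAC address here are
placeholders; the actual port numbers can be checked with "ovs-ofctl show br0"):

    ovs-ofctl add-flow br0 "in_port=2,dl_src=<M1-eth0-MAC>,actions=output:3"
    # in_port=2 : eth1, the port facing M1 (placeholder number)
    # output:3  : gre0, the GRE tunnel port towards M3 (placeholder number)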


Thanks

Rashmi


On Tue, Dec 17, 2013 at 10:38 PM, Jesse Gross <je...@nicira.com> wrote:

> On Tue, Dec 17, 2013 at 1:00 AM, Rashmi Rashmi <rashmi....@gmail.com>
> wrote:
> > Hello,
> >
> >
> >
> > I am using a GRE tunnel with OVS. Here is the setup detail:
> >
> >
> >
> >
> >
> > +--------------+  eth0 (11.0.0.2)  eth1  +----------------+  eth0    eth0  +----------------+
> > |  M1 (Linux)  |-------------------------|   M2 (Linux)   |----------------|   M3 (Linux)   |
> > +--------------+                         +----------------+                +----------------+
> >                                           p0 (11.0.0.1)                     p0 (11.0.0.3)
> >                                                gre0  <====================>  gre0
> >
> > I am running OVS (1.9.3) on Machines M2 & M3.
> >
> >
> >
> > For GRE tunnel between M2 & M3 –
> >
> > On pinging 11.0.0.3 from M1, the pkts coming out of M2.eth0 interface are
> > GRE encapsulated. But on machine M3, I see that the single ping-request
> pkt
> > is being looped multiple times.
>
> What are eth0/1 connected to on M2 and M3? It sounds like unknown
> unicast is being flooded over those ports.
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
