[ovs-dev] [PATCH net v2 2/2] openvswitch: restore OVS_FLOW_CMD_NEW notifications

2014-09-18 Thread Nicolas Dichtel
From: Samuel Gauthier 

Since commit fb5d1e9e127a ("openvswitch: Build flow cmd netlink reply only if
needed."), the new flows are not notified to the listeners of OVS_FLOW_MCGROUP.

This commit fixes the problem by using the genl-aware helper, i.e.
genl_has_listeners() instead of netlink_has_listeners().

Signed-off-by: Samuel Gauthier 
Signed-off-by: Nicolas Dichtel 
---

v2: add patch 1/2

 net/openvswitch/datapath.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 91d66b7e64ac..64dc864a417f 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -78,11 +78,12 @@ static const struct genl_multicast_group ovs_dp_vport_multicast_group = {
 
 /* Check if need to build a reply message.
  * OVS userspace sets the NLM_F_ECHO flag if it needs the reply. */
-static bool ovs_must_notify(struct genl_info *info,
-   const struct genl_multicast_group *grp)
+static bool ovs_must_notify(struct genl_family *family, struct genl_info *info,
+   unsigned int group)
 {
return info->nlhdr->nlmsg_flags & NLM_F_ECHO ||
-   netlink_has_listeners(genl_info_net(info)->genl_sock, 0);
+  genl_has_listeners(family, genl_info_net(info)->genl_sock,
+ group);
 }
 
 static void ovs_notify(struct genl_family *family,
@@ -763,7 +764,7 @@ static struct sk_buff *ovs_flow_cmd_alloc_info(const struct sw_flow_actions *act
 {
struct sk_buff *skb;
 
-   if (!always && !ovs_must_notify(info, &ovs_dp_flow_multicast_group))
+   if (!always && !ovs_must_notify(&dp_flow_genl_family, info, 0))
return NULL;
 
	skb = genlmsg_new_unicast(ovs_flow_cmd_msg_size(acts), info, GFP_KERNEL);
-- 
2.1.0

___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


[ovs-dev] [PATCH net v2 1/2] genetlink: add function genl_has_listeners()

2014-09-18 Thread Nicolas Dichtel
This function is the counterpart of the function netlink_has_listeners().

Signed-off-by: Nicolas Dichtel 
---

v2: add patch 1/2

 include/net/genetlink.h | 8 
 1 file changed, 8 insertions(+)

diff --git a/include/net/genetlink.h b/include/net/genetlink.h
index 93695f0e22a5..af10c2cf8a1d 100644
--- a/include/net/genetlink.h
+++ b/include/net/genetlink.h
@@ -394,4 +394,12 @@ static inline int genl_set_err(struct genl_family *family, struct net *net,
return netlink_set_err(net->genl_sock, portid, group, code);
 }
 
+static inline int genl_has_listeners(struct genl_family *family,
+struct sock *sk, unsigned int group)
+{
+   if (WARN_ON_ONCE(group >= family->n_mcgrps))
+   return -EINVAL;
+   group = family->mcgrp_offset + group;
+   return netlink_has_listeners(sk, group);
+}
 #endif /* __NET_GENERIC_NETLINK_H */
-- 
2.1.0



[ovs-dev] checking load on OVS

2014-09-18 Thread Neha Joshi
Hi all,

I want to check statistics of Open vSwitch in terms of CPU, load average, and
memory.
I found that I need to set *other_config:enable-statistics* to *true*.
Can anyone help with how to do this? Is there a specific command for it?

Thanks,
neha
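For reference, other_config keys on the Open_vSwitch table are normally
changed with ovs-vsctl; a hedged sketch (verify the column and key against
your version's ovs-vsctl(8) and ovs-vswitchd.conf.db(5)):

```shell
# Enable periodic statistics (CPU, load average, memory) collection
ovs-vsctl set Open_vSwitch . other_config:enable-statistics=true

# Once enabled, the gathered values show up in the 'statistics' column
ovs-vsctl --columns=statistics list Open_vSwitch
```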


Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Samuel Ghinet
Eitan,


> In case that a handle is duplicated in user mode there will be multiple file 
> handles per file object associated with the handle returned by the system 
> when the device was opened.
As I remember, when working with userspace HANDLEs and FILE_OBJECT ptrs in the
kernel, even though only one device was used and many file HANDLEs were opened
for this same device, each HANDLE corresponded to one FILE_OBJECT ptr.
There was no case in which two or more userspace HANDLEs shared the same
FILE_OBJECT ptr.

"there will be multiple file handles per file object" - Are you sure this is a 
real case?

Thanks,
Sam


From: Eitan Eliahu [elia...@vmware.com]
Sent: Monday, September 15, 2014 6:50 PM
To: Samuel Ghinet; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

Each time that a device is opened a new file object is created with a 
corresponding handle. In case that a handle is duplicated in user mode there 
will be multiple file handles per file object associated with the handle 
returned by the system when the device was opened.
We usually open the device per socket. I wouldn't count on the I/O on a handle
and the closing of that handle coming from the same thread context.
Thanks,
Eitan

From: Samuel Ghinet [mailto:sghi...@cloudbasesolutions.com]
Sent: Sunday, September 14, 2014 8:04 PM
To: Eitan Eliahu; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

I have a question here,
is a file HANDLE normally used by only one userspace thread at a time (so that
if you have multiple threads, each thread will have its own unique file
HANDLEs), or may each thread use the file HANDLEs opened by other threads?

Thanks,
Sam

From: Eitan Eliahu [elia...@vmware.com]
Sent: Saturday, September 13, 2014 12:01 AM
To: Nithin Raju
Cc: Samuel Ghinet; dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup
My understanding is that the Cleanup callback is called when the file handle is
closed by user mode. However, at that time you can still have outstanding I/O
requests in the driver, or even synchronous I/O requests sent to the driver from
the context of a different user-mode thread. We do complete the outstanding
queued IRPs, but I could see a race condition here.

Do you really need to allocate memory to hold the Dump OVS message? Please 
note, that the memory associated with the IRP is always available until we 
complete the IRP. I am not sure we need to create a copy of it. If you need to 
hold status variables (e.g. dump index across multiple I/O dump requests) you 
can add them to the device instance itself. Also, it would be nice that each 
Dump request would be self-contained. I know it requires some user mode change 
(store the dump index in the socket structure).

Thanks,
Eitan

From: Nithin Raju
Sent: Friday, September 12, 2014 1:02 PM
To: Eitan Eliahu
Cc: Samuel Ghinet; dev@openvswitch.org
Subject: Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

On Sep 12, 2014, at 12:46 PM, Eitan Eliahu <elia...@vmware.com> wrote:

Sorry for coming late on this one.
We should free the dump state when the system calls the driver on cleanup as 
you did. But, the cleanup IOCTL can be (actually will be) executed from a 
different thread context. This means that we need to protect  dumpState.ovsMsg.

By the time IRP_MJ_CLEANUP has been called, IRP_MJ_CLOSE has already been 
called. So, my understanding is that there's no scope for a userspace process 
to be calling into the kernel. Pls. let me know if I'm missing anything.

Thanks,
-- Nithin


Re: [ovs-dev] [PATCH v2] datapath-windows: Netlink command: vport dump

2014-09-18 Thread Samuel Ghinet
I meant to send it to ML.

From: Samuel Ghinet
Sent: Wednesday, September 17, 2014 7:46 PM
To: Eitan Eliahu
Subject: RE: [PATCH v2] datapath-windows: Netlink command: vport dump

Hi Eitan,

Yes, you're right, I forgot to check the return values from the NlMsgPut calls. 
:)

> Do we check the size of the out message before we populate it with the port 
> NL attributes?
It's stored in OvsCreateMsgFromVport as the maxPayloadSize param, and 
initialized in NlBufInit.

Sam

From: Eitan Eliahu [elia...@vmware.com]
Sent: Wednesday, September 17, 2014 7:30 PM
To: Samuel Ghinet; dev@openvswitch.org
Cc: Nithin Raju; Saurabh Shah; Ankur Sharma; Alin Serdean
Subject: RE: [PATCH v2] datapath-windows: Netlink command: vport dump

Hi Sam,
Looks good. Some comments:
Shouldn't we check the return code from NlMsgPutTailU32() ?
Do we check the size of the out message before we populate it with the port NL 
attributes?
Thanks,
Eitan

-Original Message-
From: Samuel Ghinet [mailto:sghi...@cloudbasesolutions.com]
Sent: Wednesday, September 17, 2014 6:09 AM
To: dev@openvswitch.org
Cc: Nithin Raju; Eitan Eliahu; Saurabh Shah; Ankur Sharma; Alin Serdean
Subject: [PATCH v2] datapath-windows: Netlink command: vport dump

Functionality for vport dump.
Later, when we add more netlink dump commands, some common code will need to be
split out into functions.

Notes:
a) the current implementation of vport dump assumes the datapath feature
"multiple upcall pids" is not used. A single upcall pid is used instead.
c) the vxlan destination udp port is currently a constant. When it becomes
configurable, the vport options netlink attribute will become relevant.

Signed-off-by: Samuel Ghinet 
---
 datapath-windows/ovsext/Datapath.c | 222 -
 datapath-windows/ovsext/Vport.h|  11 +-
 2 files changed, 227 insertions(+), 6 deletions(-)

diff --git a/datapath-windows/ovsext/Datapath.c b/datapath-windows/ovsext/Datapath.c
index d90ae23..5e9aadf 100644
--- a/datapath-windows/ovsext/Datapath.c
+++ b/datapath-windows/ovsext/Datapath.c
@@ -101,6 +101,9 @@ static NTSTATUS OvsGetPidCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
 static NTSTATUS OvsGetDpCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
                                    UINT32 *replyLen);

+static NTSTATUS OvsGetVportCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+  UINT32 *replyLen);
+
 /*
  * The various netlink families, along with the supported commands. Most of
  * these families and commands are part of the openvswitch specification for a 
@@ -156,14 +159,20 @@ NETLINK_FAMILY nlPacketFamilyOps = {
 };

 /* Netlink vport family. */
+NETLINK_CMD nlVportFamilyCmdOps[] = {
+{ OVS_VPORT_CMD_GET, OvsGetVportCmdHandler,
+OVS_WRITE_DEV_OP | OVS_READ_DEV_OP, FALSE } };
+
+/* Netlink vport family. */
 /* XXX: Add commands here. */
 NETLINK_FAMILY nlVportFamilyOps = {
 .name = OVS_VPORT_FAMILY,
 .id   = OVS_WIN_NL_VPORT_FAMILY_ID,
 .version  = OVS_VPORT_VERSION,
 .maxAttr  = OVS_VPORT_ATTR_MAX,
-.cmds = NULL, /* XXX: placeholder. */
-.opsCount = 0
+.cmds = nlVportFamilyCmdOps,
+.opsCount = ARRAY_SIZE(nlVportFamilyCmdOps)
 };

 /* Netlink flow family. */
@@ -668,10 +677,11 @@ OvsDeviceControl(PDEVICE_OBJECT deviceObject,
 break;
 case OVS_WIN_NL_PACKET_FAMILY_ID:
 case OVS_WIN_NL_FLOW_FAMILY_ID:
-case OVS_WIN_NL_VPORT_FAMILY_ID:
 status = STATUS_NOT_IMPLEMENTED;
 goto done;
-
+case OVS_WIN_NL_VPORT_FAMILY_ID:
+nlFamilyOps = &nlVportFamilyOps;
+break;
 default:
 status = STATUS_INVALID_PARAMETER;
 goto done;
@@ -966,6 +976,210 @@ OvsSetupDumpStart(POVS_USER_PARAMS_CONTEXT usrParamsCtx)
 }

+/*
+ * XXX: When "dump done" and "error" netlink types will be implemented, we
+ * will need a few more BuildMsgOut functions, which would ultimately call a
+ * generic BuildMsgOutFromMsgIn.
+ */
+
+static VOID
+BuildMsgOutFromMsgIn(POVS_MESSAGE msgIn, POVS_MESSAGE msgOut, UINT16 flags)
+{
+msgOut->nlMsg.nlmsgType = msgIn->nlMsg.nlmsgType;
+msgOut->nlMsg.nlmsgFlags = flags;
+msgOut->nlMsg.nlmsgSeq = msgIn->nlMsg.nlmsgSeq;
+msgOut->nlMsg.nlmsgPid = msgIn->nlMsg.nlmsgPid;
+msgOut->nlMsg.nlmsgLen = sizeof *msgOut;
+
+msgOut->genlMsg.cmd = msgIn->genlMsg.cmd;
+msgOut->genlMsg.version = nlDatapathFamilyOps.version;
+msgOut->genlMsg.reserved = 0;
+}
+
+static NTSTATUS
+OvsCreateMsgFromVport(POVS_VPORT_ENTRY vport,
+                      POVS_MESSAGE msgOut,
+                      UINT32 maxPayloadSize)
+{
+NL_BUFFER nlBuffer;
+OVS_VPORT_FULL_STATS vportStats;
+
+NlBufInit(&nlBuffer, (PCHAR)msgOut + sizeof (*msgOut),
+ maxPayloadSize);
+
+NlMsgPutTailU32(&nlBuffer, OVS_VPORT_ATTR_PORT_NO, vport->portNo);
+NlMsgPutTailU32(&nlBuffer, OVS_VPORT_ATTR_TYPE, vport->ovsType);

Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Eitan Eliahu
Hi Sam,
As far as I understand, when a process calls CreateFile the system will create a
file handle and return it to the process. The file handle (in UM) is associated
with a file object (in KM). Both objects are valid until the process closes the
handle or terminates.
This means that if the device is opened multiple times, the kernel will have
multiple file objects, each one corresponding to a file handle returned by
CreateFile(). All these file objects are associated with the driver device
object.
The exception is when a process duplicates a handle or when a child process
inherits a handle from a parent process. In that case the system holds multiple
file handles which correspond to a single file object.
I hope this answers your question.
Eitan


Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Samuel Ghinet
Hi Eitan,

By duplication of HANDLEs, you mean assignment (like "HANDLE a =
CreateFile(params); HANDLE b = a;")?

Do we have cases in vswitchd where we keep multiple duplicates / instances
of the same HANDLE, or where we create child processes that may inherit the
parent handle?

Thanks,
Sam


Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Eitan Eliahu
No Sam, duplication of handles is done by calling DuplicateHandle().
I am not sure if OVS UM calls it. The general case is that one handle
corresponds to a single file object.
Thanks,
Eitan


Re: [ovs-dev] [PATCH 1/5 v1] datapath-windows: return TRUE on success in NlAttrValidate

2014-09-18 Thread Samuel Ghinet
Hi Nithin,

Good catch!

I would normally prefer the validation in a function to be done like:
ret = TRUE;

if (fail_cond_1) {
  log();
  ret = FALSE;
  goto done;
}

if (fail_cond_2) {
  log();
  ret = FALSE;
  goto done;
}

done:
return ret;

It would be a bit more text, but I think it improves clarity a bit on the ways
"ret" is used and changed.

Anyway, it's good the way it is in NlAttrValidate as well.

Acked-by: Samuel Ghinet 

Sam

Date: Tue, 16 Sep 2014 19:06:07 -0700
From: Nithin Raju 
To: dev@openvswitch.org, sghi...@cloudbasesolutions.com,
elia...@vmware.com, ankursha...@vmware.com
Subject: [ovs-dev] [PATCH 1/5 v1] datapath-windows: return TRUE on
success in NlAttrValidate
Message-ID: <1410919571-34249-1-git-send-email-nit...@vmware.com>

Signed-off-by: Nithin Raju 
---
 datapath-windows/ovsext/Netlink/Netlink.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/datapath-windows/ovsext/Netlink/Netlink.c b/datapath-windows/ovsext/Netlink/Netlink.c
index 5faf07f..f6ee4c4 100644
--- a/datapath-windows/ovsext/Netlink/Netlink.c
+++ b/datapath-windows/ovsext/Netlink/Netlink.c
@@ -819,6 +819,7 @@ NlAttrValidate(const PNL_ATTR nla, const PNL_POLICY policy)
 goto done;
 }
 }
+ret = TRUE;

 done:
 return ret;
--
1.7.4.1


Re: [ovs-dev] [PATCH 4/5 v1] extract-odp-netlink-windows-dp-h: add definition of IFNAMSIZ

2014-09-18 Thread Samuel Ghinet
Acked-by: Samuel Ghinet 

Date: Tue, 16 Sep 2014 19:06:10 -0700
From: Nithin Raju 
To: dev@openvswitch.org, sghi...@cloudbasesolutions.com,
elia...@vmware.com, ankursha...@vmware.com
Subject: [ovs-dev] [PATCH 4/5 v1] extract-odp-netlink-windows-dp-h:
add definition of IFNAMSIZ
Message-ID: <1410919571-34249-4-git-send-email-nit...@vmware.com>

The Windows kernel datapath needs the definition of 'IFNAMSIZ' for
specifying attribute sizes in netlink policies. Adding the definition
of 'IFNAMSIZ' to be part of OvsDpInterface.h similar to ETH_ADDR_LEN.

Signed-off-by: Nithin Raju 
---
 build-aux/extract-odp-netlink-windows-dp-h |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/build-aux/extract-odp-netlink-windows-dp-h b/build-aux/extract-odp-netlink-windows-dp-h
index f2d9f07..70514cb 100755
--- a/build-aux/extract-odp-netlink-windows-dp-h
+++ b/build-aux/extract-odp-netlink-windows-dp-h
@@ -18,7 +18,8 @@ s,,"Types.h",

 # Add ETH_ADDR_LEN macro to avoid including userspace packet.h
 s,#include ,\n#ifndef ETH_ADDR_LEN \
-#define ETH_ADDR_LEN  6 \n#endif,
+#define ETH_ADDR_LEN  6 \n#endif \
+\n#ifndef IFNAMSIZ \n#define IFNAMSIZ 16 \n#endif,

 # Use OVS's own ETH_ADDR_LEN instead of Linux-specific ETH_ALEN.
 s/ETH_ALEN/ETH_ADDR_LEN/
--
1.7.4.1




Re: [ovs-dev] [PATCH 5/5 v1] datapath-windows: add OVS_DP_CMD_SET and OVS_DP_CMD_GET transaction support

2014-09-18 Thread Samuel Ghinet
Hey Nithin,

I think this could work refactored a bit (such as to better separate the
commands), but we should postpone the refactoring to a later time.

Acked-by: Samuel Ghinet 


Date: Tue, 16 Sep 2014 19:06:11 -0700
From: Nithin Raju 
To: dev@openvswitch.org, sghi...@cloudbasesolutions.com,
elia...@vmware.com, ankursha...@vmware.com
Subject: [ovs-dev] [PATCH 5/5 v1] datapath-windows: add OVS_DP_CMD_SET
and OVS_DP_CMD_GET transaction support
Message-ID: <1410919571-34249-5-git-send-email-nit...@vmware.com>

In this patch, we add support for two commands, both of them are issued
as part of transactions semantics from userspace:
1. OVS_DP_CMD_SET is used to get the properties of a DP as well as set
   some properties. The set operation does not seem to make much sense for
   the Windows datapath right now.
2. There's already support for the OVS_DP_CMD_GET command issued via the
   dump semantics from userspace. It turns out that userspace can issue
   OVS_DP_CMD_GET as a transaction.

There's a lot of common code between these two commands, hence combining
the implementation and the review.

This also refactors some of the code in the implementation of the dump-based
OVS_DP_CMD_GET, and updates some of the comments.

Validation:
- With these series of patches, I was able to run the following command:
> .\utilities\ovs-dpctl.exe show
system@ovs-system:
lookups: hit:0 missed:22 lost:0
flows: 0
- I got so far as to hit the PORT_DUMP command which is currently not
implemented.

Signed-off-by: Nithin Raju 
Tested-by: Nithin Raju 
Reported-at: https://github.com/openvswitch/ovs-issues/issues/38
---
 datapath-windows/ovsext/Datapath.c |  151 ++--
 1 files changed, 143 insertions(+), 8 deletions(-)

diff --git a/datapath-windows/ovsext/Datapath.c b/datapath-windows/ovsext/Datapath.c
index 2d1836c..25a1ea0 100644
--- a/datapath-windows/ovsext/Datapath.c
+++ b/datapath-windows/ovsext/Datapath.c
@@ -101,6 +101,16 @@ static NTSTATUS OvsGetPidCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
 static NTSTATUS OvsGetDpCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
                                    UINT32 *replyLen);

+static NTSTATUS OvsSetDpCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+   UINT32 *replyLen);
+
+static NTSTATUS HandleGetDpTransaction(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+   UINT32 *replyLen);
+static NTSTATUS HandleGetDpDump(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+UINT32 *replyLen);
+static NTSTATUS HandleDpTransaction(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+UINT32 *replyLen);
+
 /*
  * The various netlink families, along with the supported commands. Most of
  * these families and commands are part of the openvswitch specification for a
@@ -130,8 +140,15 @@ NETLINK_FAMILY nlControlFamilyOps = {
 NETLINK_CMD nlDatapathFamilyCmdOps[] = {
 { .cmd = OVS_DP_CMD_GET,
   .handler = OvsGetDpCmdHandler,
-  .supportedDevOp  = OVS_WRITE_DEV_OP | OVS_READ_DEV_OP,
+  .supportedDevOp  = OVS_WRITE_DEV_OP | OVS_READ_DEV_OP |
+ OVS_TRANSACTION_DEV_OP,
   .validateDpIndex = FALSE
+},
+{ .cmd = OVS_DP_CMD_SET,
+  .handler = OvsSetDpCmdHandler,
+  .supportedDevOp  = OVS_WRITE_DEV_OP | OVS_READ_DEV_OP |
+ OVS_TRANSACTION_DEV_OP,
+  .validateDpIndex = TRUE
 }
 };

@@ -705,8 +722,6 @@ done:
  * --
  * Function to validate a netlink command. Only certain combinations of
  * (device operation, netlink family, command) are valid.
- *
- * XXX: Take policy into consideration.
  * --
  */
 static NTSTATUS
@@ -784,9 +799,10 @@ InvokeNetlinkCmdHandler(POVS_USER_PARAMS_CONTEXT 
usrParamsCtx,
 return status;
 }

-
 /*
  * --
+ *  Command Handler for 'OVS_CTRL_CMD_WIN_GET_PID'.
+ *
  *  Each handle on the device is assigned a unique PID when the handle is
  *  created. On platforms that support netlink natively, the PID is available
  *  to userspace when the netlink socket is created. However, without native
@@ -837,7 +853,7 @@ OvsDpFillInfo(POVS_SWITCH_CONTEXT ovsSwitchContext,
 PNL_MSG_HDR nlMsg;

 /* XXX: Add API for nlBuf->bufRemLen. */
-ASSERT(NlBufAt(nlBuf, 0, 0) != 0 && nlBuf->bufRemLen >= sizeof msgOutTmp);
+ASSERT(NlBufAt(nlBuf, 0, 0) != 0 && nlBuf->bufRemLen >= sizeof *msgIn);

 msgOutTmp.nlMsg.nlmsgType = OVS_WIN_NL_DATAPATH_FAMILY_ID;
 msgOutTmp.nlMsg.nlmsgFlags = 0;  /* XXX: ? */
@@ -871,17 +887,49 @@ OvsDpFillInfo(POVS_SWITCH_CONTEXT ovsSwitchContext,
 return writeOk ? STATUS_SUCCESS : STATUS_INVALID_BUFFER_SIZE;
 }

-
 /*

Re: [ovs-dev] [PATCH] datapath-windows: NetLink kernel side, Event subscription and notification

2014-09-18 Thread Samuel Ghinet
Looks good to me!

Acked-by: Samuel Ghinet 

Date: Wed, 17 Sep 2014 23:12:05 -0700
From: Eitan Eliahu 
To: dev@openvswitch.org
Subject: [ovs-dev] [PATCH] datapath-windows: NetLink kernel side,
Event subscription and notification
Message-ID: <1411020725-3952-1-git-send-email-elia...@vmware.com>

This code handles event notification subscription for a user mode thread
that joins an MC group. The event wait handler queues an IRP that is
completed upon a change in port state.
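The pend-and-complete pattern can be modeled portably as follows. This is an illustrative user-space model, not the actual Windows kernel API: the struct and function names are invented, and the real code parks an IRP and returns STATUS_PENDING.

```c
#include <assert.h>
#include <stddef.h>

/* Portable model of the pending-IRP pattern: user mode parks one
 * request; the "driver" completes it when a port event occurs. */
struct pending_req {
    int completed;
    int event_data;     /* filled in on completion */
};

static struct pending_req *g_pending;   /* at most one parked request */

/* Analogue of OvsPendEventCmdHandler: park the request
 * (the real handler returns STATUS_PENDING here). */
static int pend_event(struct pending_req *req)
{
    req->completed = 0;
    g_pending = req;
    return 0;
}

/* Analogue of the port-state-change path: complete the parked request. */
static void port_event_fired(int data)
{
    if (g_pending) {
        g_pending->event_data = data;
        g_pending->completed = 1;
        g_pending = NULL;
    }
}
```

User mode re-pends a request immediately after each completion, so a request is outstanding at all times.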

Signed-off-by: Eitan Eliahu 
---
 datapath-windows/ovsext/Datapath.c | 103 +
 1 file changed, 93 insertions(+), 10 deletions(-)

diff --git a/datapath-windows/ovsext/Datapath.c 
b/datapath-windows/ovsext/Datapath.c
index 2d1836c..16aa56b 100644
--- a/datapath-windows/ovsext/Datapath.c
+++ b/datapath-windows/ovsext/Datapath.c
@@ -64,12 +64,12 @@
  * Handler for a given netlink command. Not all the parameters are used by all
  * the handlers.
  */
-typedef NTSTATUS (*NetlinkCmdHandler)(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
-  UINT32 *replyLen);
+typedef NTSTATUS(NetlinkCmdHandler)(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+UINT32 *replyLen);

 typedef struct _NETLINK_CMD {
 UINT16 cmd;
-NetlinkCmdHandler handler;
+NetlinkCmdHandler *handler;
 UINT32 supportedDevOp;  /* Supported device operations. */
 BOOLEAN validateDpIndex;/* Does command require a valid DP argument. */
 } NETLINK_CMD, *PNETLINK_CMD;
@@ -95,11 +95,10 @@ typedef struct _NETLINK_FAMILY {
 #define OVS_TRANSACTION_DEV_OP   (1 << 2)

 /* Handlers for the various netlink commands. */
-static NTSTATUS OvsGetPidCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
-UINT32 *replyLen);
-
-static NTSTATUS OvsGetDpCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
-   UINT32 *replyLen);
+static NetlinkCmdHandler OvsGetPidCmdHandler,
+ OvsGetDpCmdHandler,
+ OvsPendEventCmdHandler,
+ OvsSubscribeEventCmdHandler;

 /*
  * The various netlink families, along with the supported commands. Most of
@@ -113,7 +112,17 @@ NETLINK_CMD nlControlFamilyCmdOps[] = {
 { .cmd = OVS_CTRL_CMD_WIN_GET_PID,
   .handler = OvsGetPidCmdHandler,
   .supportedDevOp  = OVS_TRANSACTION_DEV_OP,
-  .validateDpIndex = FALSE
+  .validateDpIndex = FALSE,
+},
+{ .cmd = OVS_CTRL_CMD_WIN_PEND_REQ,
+  .handler = OvsPendEventCmdHandler,
+  .supportedDevOp = OVS_WRITE_DEV_OP,
+  .validateDpIndex = TRUE,
+},
+{ .cmd = OVS_CTRL_CMD_MC_SUBSCRIBE_REQ,
+  .handler = OvsSubscribeEventCmdHandler,
+  .supportedDevOp = OVS_WRITE_DEV_OP,
+  .validateDpIndex = TRUE,
 }
 };

@@ -776,7 +785,11 @@ InvokeNetlinkCmdHandler(POVS_USER_PARAMS_CONTEXT 
usrParamsCtx,

 for (i = 0; i < nlFamilyOps->opsCount; i++) {
 if (nlFamilyOps->cmds[i].cmd == usrParamsCtx->ovsMsg->genlMsg.cmd) {
-status = nlFamilyOps->cmds[i].handler(usrParamsCtx, replyLen);
+NetlinkCmdHandler *handler = nlFamilyOps->cmds[i].handler;
+ASSERT(handler);
+if (handler) {
+status = handler(usrParamsCtx, replyLen);
+}
 break;
 }
 }
@@ -874,6 +887,76 @@ OvsDpFillInfo(POVS_SWITCH_CONTEXT ovsSwitchContext,

 /*
  * --
+ * Handler for queueing an IRP used for event notification. The IRP is
+ * completed when a port state changes. STATUS_PENDING is returned on
+ * success. User mode keeps a pending IRP at all times.
+ * --
+ */
+static NTSTATUS
+OvsPendEventCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+   UINT32 *replyLen)
+{
+NDIS_STATUS status;
+
+UNREFERENCED_PARAMETER(replyLen);
+
+POVS_OPEN_INSTANCE instance =
+(POVS_OPEN_INSTANCE)usrParamsCtx->ovsInstance;
+POVS_MESSAGE msgIn = (POVS_MESSAGE)usrParamsCtx->inputBuffer;
+OVS_EVENT_POLL poll;
+
+poll.dpNo = msgIn->ovsHdr.dp_ifindex;
+status = OvsWaitEventIoctl(usrParamsCtx->irp, instance->fileObject,
+   &poll, sizeof poll);
+return status;
+}
+
+/*
+ * --
+ *  Handler for subscribing to the event queue.
+ * --
+ */
+static NTSTATUS
+OvsSubscribeEventCmdHandler(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
+UINT32 *replyLen)
+{
+NDIS_STATUS status;
+OVS_EVENT_SUBSCRIBE request;
+BOOLEAN rc;
+UINT8 join;
+PNL_ATTR attrs[2];
+const NL_POLICY policy[] =  {
+[OVS_NL_ATTR_MCAST_GRP] = {.type = NL_

Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Samuel Ghinet
Thanks Eitan for clarifications!

I have checked the userspace code, there is no call for DuplicateHandle.
I am not sure whether we should put checks in kernel code now  for a possible 
future usage of this, or simply leave it to be handled later - if / when such a 
case will appear.

Sam

From: Eitan Eliahu [elia...@vmware.com]
Sent: Thursday, September 18, 2014 2:35 PM
To: Samuel Ghinet; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

No Sam, duplication of handles is done by calling DuplicateHandle().
I am not sure if OVS UM calls it. The general case is that one handle
corresponds to a single file object.
Thanks,
Eitan

From: Samuel Ghinet [mailto:sghi...@cloudbasesolutions.com]
Sent: Thursday, September 18, 2014 4:27 AM
To: Eitan Eliahu; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

Hi Eitan,

By duplication of HANDLES, you mean assignment (like, "HANDLE a = 
CreateFile(params); HANDLE b = a;")?

Do we have cases in the vswitchd where we keep multiple duplicates / instances
of the same HANDLE, or where we create child processes that may inherit the
parent handle?

Thanks,
Sam,

From: Eitan Eliahu [elia...@vmware.com]
Sent: Thursday, September 18, 2014 2:16 PM
To: Samuel Ghinet; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup
Hi Sam,
As far as I understand, when a process calls CreateFile the system will create a
file handle and return it to the process. The file handle (in UM) is associated
with a file object (in KM). Both objects are valid until the process closes the
handle or terminates.
This means that if the device is opened multiple times the kernel will have
multiple file objects, each of which corresponds to a file handle returned by
CreateFile(). All these file objects are associated with the driver device
object.
The exception is when a process duplicates a handle or when a child process
inherits a handle from a parent process. In this case the system holds multiple
file handles that correspond to a single file object.
I hope this answers your question.
Eitan
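The handle-vs-file-object relationship described above can be captured in a toy refcounting model. This is a portable illustration only: the names mimic CreateFile/DuplicateHandle/CloseHandle but none of this is the Win32 API itself.

```c
#include <assert.h>

/* Toy model: every "CreateFile" yields a new file object with one
 * handle; "DuplicateHandle" adds a second handle to the SAME object. */
struct file_object {
    int refcount;       /* number of handles referring to this object */
};

/* "CreateFile": new file object, one handle. */
static struct file_object *open_device(struct file_object *storage)
{
    storage->refcount = 1;
    return storage;
}

/* "DuplicateHandle": same file object, one more handle. */
static struct file_object *duplicate_handle(struct file_object *obj)
{
    obj->refcount++;
    return obj;
}

/* "CloseHandle": the object goes away with its last handle. */
static int close_handle(struct file_object *obj)
{
    return --obj->refcount;     /* 0 => file object destroyed */
}
```

Two plain opens give two distinct file objects; only duplication (or inheritance) makes two handles share one object, which is the case the kernel cleanup path would have to consider.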

From: Samuel Ghinet [mailto:sghi...@cloudbasesolutions.com]
Sent: Thursday, September 18, 2014 3:53 AM
To: Eitan Eliahu; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

Eitan,


> In case that a handle is duplicated in user mode there will be multiple file 
> handles per file object associated with the handle returned by the system 
> when the device was opened.
As I remember when working with userspace HANDLEs and FILE_OBJECT ptrs in 
kernel, even though only one device was used and many file HANDLEs were opened 
for this same device, each HANDLE corresponded to one FILE_OBJECT ptr.
There was no case in which 2 or more userspace HANDLEs shared the same 
FILE_OBJECT ptr.

"there will be multiple file handles per file object" - Are you sure this is a 
real case?

Thanks,
Sam

From: Eitan Eliahu [elia...@vmware.com]
Sent: Monday, September 15, 2014 6:50 PM
To: Samuel Ghinet; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup
Each time that a device is opened a new file object is created with a 
corresponding handle. In case that a handle is duplicated in user mode there 
will be multiple file handles per file object associated with the handle 
returned by the system when the device was opened.
We usually open the device per socket. I wouldn’t count on that that I/O on a 
handle and the closing of a handle would be from the same thread context.
Thanks,
Eitan

From: Samuel Ghinet [mailto:sghi...@cloudbasesolutions.com]
Sent: Sunday, September 14, 2014 8:04 PM
To: Eitan Eliahu; Nithin Raju
Cc: dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup

I have a question here,
is a file HANDLE normally used by only one userspace thread at a time (so that 
if you have multiple threads, each thread will have its own unique file 
HANDLEs), or each thread may use the file HANDLEs opened by other threads?

Thanks,
Sam

From: Eitan Eliahu [elia...@vmware.com]
Sent: Saturday, September 13, 2014 12:01 AM
To: Nithin Raju
Cc: Samuel Ghinet; dev@openvswitch.org
Subject: RE: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during 
instance cleanup
My understating is the Cleanup callback is called when the file handle is 
closed by user mode. However, at that time you can still have outstanding I/O 
requests in 

Re: [ovs-dev] [PATCH] datapath-windows: cleanup dump state during instance cleanup

2014-09-18 Thread Eitan Eliahu
Hi Sam, all we need to do is to protect critical variable in the Cleanup 
handler. In this case specifically we can store the dump state variables in the 
per process instance (rather than allocating memory) so it will avoid the 
possible race condition in freeing dynamic memory.
Everything else will be handled by the system,
Thanks!
Eitan
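Eitan's suggestion — keeping the dump state inside the per-handle instance rather than allocating it dynamically — can be sketched as follows. The struct and field names are illustrative placeholders, not the real Datapath.c structures.

```c
#include <assert.h>
#include <string.h>

/* Inline dump state: no malloc/free, so the Cleanup path cannot race
 * with a free of dynamically allocated dump memory. */
struct dump_state {
    int in_progress;
    unsigned int next_index;    /* resume cursor for the next dump call */
};

struct open_instance {
    struct dump_state dump;     /* embedded: lives as long as the handle */
};

static void dump_start(struct open_instance *inst)
{
    inst->dump.in_progress = 1;
    inst->dump.next_index = 0;
}

/* Cleanup just resets the embedded state; there is nothing to free. */
static void instance_cleanup(struct open_instance *inst)
{
    memset(&inst->dump, 0, sizeof inst->dump);
}
```

Because the state is embedded, the only remaining concern is protecting it against concurrent access, as noted above.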


[ovs-dev] [PATCH v2] datapath: Provide compatibility for kernels up to 3.17

2014-09-18 Thread Thomas Graf
Port datapath to work with kernels up to 3.17 and use 3.16.2 as
the new kernel for CI testing.

Tested with 3.14, 3.16.2, and net-next (3.17).

Signed-off-by: Thomas Graf 
Co-authored-by: Madhu Challa 
---
v2:
 - Swapped ignore_df local_df compat direction
 - Provide alloc_netdev() compat macro to avoid #ifdef
 - iptunnel_xmit() version for > 3.12 && < 3.15

 .travis.yml |  2 +-
 .travis/build.sh|  6 ++--
 acinclude.m4| 12 ++--
 datapath/linux/Modules.mk   |  1 +
 datapath/linux/compat/include/linux/if.h| 12 
 datapath/linux/compat/include/linux/netdevice.h | 21 ++
 datapath/linux/compat/include/linux/skbuff.h|  4 +++
 datapath/linux/compat/include/net/ip_tunnels.h  | 14 +-
 datapath/linux/compat/include/net/udp.h | 37 +
 datapath/linux/compat/include/net/vxlan.h   |  5 
 datapath/linux/compat/ip_tunnels_core.c |  2 +-
 datapath/linux/compat/vxlan.c   |  3 +-
 datapath/vport-geneve.c |  8 --
 datapath/vport-gre.c|  4 +--
 datapath/vport-internal_dev.c   |  4 +--
 datapath/vport-lisp.c   |  4 +--
 datapath/vport-netdev.c |  3 +-
 datapath/vport-vxlan.c  | 12 ++--
 18 files changed, 114 insertions(+), 40 deletions(-)
 create mode 100644 datapath/linux/compat/include/net/udp.h

diff --git a/.travis.yml b/.travis.yml
index b66f821..4dfe15e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -7,7 +7,7 @@ before_install: ./.travis/prepare.sh
 
 env:
   - OPTS="--disable-ssl"
-  - TESTSUITE=1 KERNEL=1 OPTS="--with-linux=./linux-3.14.7"
+  - TESTSUITE=1 KERNEL=1 OPTS="--with-linux=./linux-3.16.2"
   - KERNEL=1 DPDK=1 OPTS="--with-dpdk=./dpdk-1.7.0/build"
 
 script: ./.travis/build.sh $OPTS
diff --git a/.travis/build.sh b/.travis/build.sh
index 0a23969..b7ce5a0 100755
--- a/.travis/build.sh
+++ b/.travis/build.sh
@@ -7,9 +7,9 @@ CFLAGS="-Werror"
 
 function install_kernel()
 {
-wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.7.tar.gz
-tar xzvf linux-3.14.7.tar.gz > /dev/null
-cd linux-3.14.7
+wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.2.tar.gz
+tar xzvf linux-3.16.2.tar.gz > /dev/null
+cd linux-3.16.2
 make allmodconfig
 make net/openvswitch/
 KERNELSRC=$(pwd)
diff --git a/acinclude.m4 b/acinclude.m4
index e3a694b..6e31d88 100644
--- a/acinclude.m4
+++ b/acinclude.m4
@@ -134,10 +134,10 @@ AC_DEFUN([OVS_CHECK_LINUX], [
 AC_MSG_RESULT([$kversion])
 
 if test "$version" -ge 3; then
-   if test "$version" = 3 && test "$patchlevel" -le 14; then
+   if test "$version" = 3 && test "$patchlevel" -le 17; then
   : # Linux 3.x
else
- AC_ERROR([Linux kernel in $KBUILD is version $kversion, but version 
newer than 3.14.x is not supported (please refer to the FAQ for advice)])
+ AC_ERROR([Linux kernel in $KBUILD is version $kversion, but version 
newer than 3.17.x is not supported (please refer to the FAQ for advice)])
fi
 else
if test "$version" -le 1 || test "$patchlevel" -le 5 || test 
"$sublevel" -le 31; then
@@ -370,6 +370,14 @@ AC_DEFUN([OVS_CHECK_LINUX_COMPAT], [
 
   OVS_GREP_IFELSE([$KSRC/include/linux/openvswitch.h], 
[openvswitch_handle_frame_hook],
   [OVS_DEFINE([HAVE_RHEL_OVS_HOOK])])
+  OVS_GREP_IFELSE([$KSRC/include/net/vxlan.h], [bool xnet],
+  [OVS_DEFINE([HAVE_VXLAN_XMIT_SKB_XNET_ARG])])
+  OVS_GREP_IFELSE([$KSRC/include/net/udp.h], [udp_flow_src_port],
+  [OVS_DEFINE([HAVE_UDP_FLOW_SRC_PORT])])
+  OVS_GREP_IFELSE([$KSRC/include/linux/skbuff.h], [ignore_df:1],
+  [OVS_DEFINE([HAVE_IGNORE_DF_RENAME])])
+  OVS_GREP_IFELSE([$KSRC/include/uapi/linux/netdevice.h], [NET_NAME_UNKNOWN],
+  [OVS_DEFINE([HAVE_NET_NAME_UNKNOWN])])
 
   OVS_CHECK_LOG2_H
 
diff --git a/datapath/linux/Modules.mk b/datapath/linux/Modules.mk
index f7c64e2..274b1f1 100644
--- a/datapath/linux/Modules.mk
+++ b/datapath/linux/Modules.mk
@@ -72,6 +72,7 @@ openvswitch_headers += \
linux/compat/include/net/ipv6.h \
linux/compat/include/net/net_namespace.h \
linux/compat/include/net/netlink.h \
+   linux/compat/include/net/udp.h \
linux/compat/include/net/sock.h \
linux/compat/include/net/vxlan.h \
linux/compat/include/net/sctp/checksum.h
diff --git a/datapath/linux/compat/include/linux/if.h 
b/datapath/linux/compat/include/linux/if.h
index c4c656c..3beb61d 100644
--- a/datapath/linux/compat/include/linux/if.h
+++ b/datapath/linux/compat/include/linux/if.h
@@ -3,16 +3,4 @@
 
 #include_next 
 
-#ifndef IFF_TX_SKB_SHARING
-#define IFF_TX_SKB_SHARING 0
-#endif
-
-#ifndef IFF_OVS_DATAPATH
-#define IFF_OVS_DATAPATH 0
-#endif

Re: [ovs-dev] Openflow 1.4: Eviction mechanism implementation

2014-09-18 Thread Saloni Jain
Thanks for the suggestion; we have done more investigation of the code.

We are thinking of implementing eviction on the basis of importance, as
done in the current OVS code for lifetime, by following the same algorithm
as mentioned in conf.db(5) (page 33).

As you suggested, eviction can be enabled both through ovs-vsctl
create...overflow-policy=evict and through ovs-ofctl mod-table...evict.
Our understanding is that the ovs-vsctl command changes the bridge
configuration stored in conf.db, whereas the ovs-ofctl mod-table command
changes table state.

Currently the ovs-vsctl create...overflow-policy=evict command has the
following flow:
bridge_run() -> bridge_run__() -> ofproto_run(), which differentiates
between the OPENFLOW and EVICT states.
Then bridge_reconfigure() (which is called when there is a database
change) -> bridge_configure_tables() -> ofproto_configure_tables() ->
ofproto_enable_eviction(), which sets the eviction fields for a table.

But mod-table is an OpenFlow request with an OpenFlow handler; it does not
change any bridge configuration, so bridge_reconfigure() is not called and
the eviction fields cannot be set.

As per our understanding, it is not correct to change the database through
ofctl commands, i.e. to call bridge_reconfigure() through mod-table.

Query 1 - Is the above understanding correct?

Query 2 - Should we come up with a new approach, or can you provide any
inputs on how we can configure a table's eviction_fields without calling
bridge_reconfigure() via an ofctl command?
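The importance-based policy being discussed can be sketched as a victim-selection function: evict the rule with the lowest importance, breaking ties by earliest expiration, per the OpenFlow 1.4 importance semantics layered on the existing lifetime policy. The struct below is a simplification for illustration, not ovs-vswitchd's rule type.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified flow-table rule: only the fields eviction looks at. */
struct rule {
    unsigned short importance;  /* OpenFlow 1.4 eviction importance */
    long long expiration;       /* smaller = expires sooner */
};

/* Pick the eviction victim: lowest importance first, then earliest
 * expiration among rules of equal importance. */
static const struct rule *
pick_eviction_victim(const struct rule *rules, size_t n)
{
    const struct rule *victim = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!victim
            || rules[i].importance < victim->importance
            || (rules[i].importance == victim->importance
                && rules[i].expiration < victim->expiration)) {
            victim = &rules[i];
        }
    }
    return victim;
}
```

In OVS proper this selection runs through eviction groups (eviction_group_add_rule and friends) rather than a linear scan, but the ordering criterion is the part table-mod would configure.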

Thanks and Regards,
Saloni Jain
Tata Consultancy Services
Mailto: saloni.j...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting




From:   Ben Pfaff 
To: Saloni Jain 
Cc: dev@openvswitch.org, Deepankar Gupta , 
Hiteshi Madan , Sanjay6 Singh 

Date:   09/15/2014 08:54 PM
Subject:Re: Fw: Re: [ovs-dev] Openflow 1.4: Eviction mechanism 
implementation



At this point I think you should invest time in reading and
understanding code instead of asking more questions.  I have already
answered many questions.

On Mon, Sep 15, 2014 at 05:55:01PM +0530, Saloni Jain wrote:
>  It would be great help if someone could provide the inputs asap.
> Thanks in advance for your inputs and suggestions.
> 
> Thanks and Regards,
> Saloni Jain
>  Tata Consultancy Services
>  Mailto: saloni.j...@tcs.com
>  Website: http://www.tcs.com
>  
>  
> 
> 
> 
> -Forwarded by Saloni Jain/DEL/TCS on 09/15/2014 05:51PM -
> To: Ben Pfaff , dev@openvswitch.org
> From: Saloni Jain/DEL/TCS
> Date: 09/12/2014 05:00PM
> Cc: Deepankar Gupta , Hiteshi Madan 
, Sanjay6 Singh 
> Subject: Re: [ovs-dev] Openflow 1.4: Eviction mechanism implementation
> 
> Hi Ben/Team,
> 
> > I would enable eviction if it is configured through the database or 
through the OpenFlow connection.
> 
> In current implementation, to enable eviction using create table ... 
overflow-policy=evict, oftable_enable_eviction() in ofproto.c is called 
which sets table->eviction fields.
> 
> I've been trying to do the same using table-mod command implementation 
but haven't been successful. It would be of great help if you could point 
me to some piece of code through which I can call 
oftable_enable_eviction() or can set table->eviction_fields using 
mod_table command.
> 
> Thanks in advance for you suggestion and support.
> 
> Thanks and Regards,
> Saloni Jain
> Tata Consultancy Services
> Mailto: saloni.j...@tcs.com
> Website: http://www.tcs.com
> 
> 
> 
> To: Saloni Jain 
> From: Ben Pfaff 
> Date: 09/11/2014 11:03PM
> Cc: dev@openvswitch.org, Hiteshi Madan , 
Deepankar Gupta , Sanjay6 Singh 

> Subject: Re: [ovs-dev] Openflow 1.4: Eviction mechanism implementation
> 
> On Thu, Sep 11, 2014 at 05:34:34PM +0530, Saloni Jain wrote:
> > 1. As per current implementation, "ovs-vsctl create table ... 
overflow-policy=evict" command, configures openvswitch database(conf.db) 
and sets table structure eviction fields and eviction groups.
> >
> > 2. As per our understanding, the implementation of eviction for 
importance as per openflow specification 1.4, should be in parallel with 
existing functionality, i.e. if eviction for importance is set through 
mod-table command and importance parameter is given by the user,then we 
must create the eviction groups(eviction_group_add_rule) and at th

Re: [ovs-dev] [PATCH] ofproto: Increase default datapath max_idle time.

2014-09-18 Thread Flavio Leitner
On Fri, Sep 12, 2014 at 10:03:56AM +1200, Joe Stringer wrote:
> The datapath max_idle value determines how long to wait before deleting
> an idle datapath flow when operating below the flow_limit. This patch
> increases the max_idle to 10 seconds, which allows datapath flows to
> remain cached even if they are used less consistently, and provides a
> small improvement in the supported number of flows when operating around
> the flow_limit.

Could you tell what would be the motivation behind this change?

fbl
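For concreteness, the knob under discussion reduces to a one-line predicate: a datapath flow is deleted once it has been unused for max_idle milliseconds (when operating below the flow limit). This is a sketch of the policy only, not the actual revalidator code.

```c
#include <assert.h>

/* Has this datapath flow been idle longer than max_idle?
 * All times in milliseconds, matching OFPROTO_MAX_IDLE_DEFAULT. */
static int flow_expired(long long now_ms, long long last_used_ms,
                        long long max_idle_ms)
{
    return now_ms - last_used_ms > max_idle_ms;
}
```

With the old 1500 ms default a flow used every couple of seconds keeps getting evicted and re-installed; raising the default to 10 s keeps such intermittently used flows cached.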

> Signed-off-by: Joe Stringer 
> ---
>  ofproto/ofproto.h |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/ofproto/ofproto.h b/ofproto/ofproto.h
> index 1b8709a..d60b198 100644
> --- a/ofproto/ofproto.h
> +++ b/ofproto/ofproto.h
> @@ -267,7 +267,7 @@ int ofproto_port_dump_done(struct ofproto_port_dump *);
>  )
>  
>  #define OFPROTO_FLOW_LIMIT_DEFAULT 20
> -#define OFPROTO_MAX_IDLE_DEFAULT 1500
> +#define OFPROTO_MAX_IDLE_DEFAULT 10000 /* ms */
>  
>  const char *ofproto_port_open_type(const char *datapath_type,
> const char *port_type);
> -- 
> 1.7.10.4
> 
> ___
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
> 
___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


[ovs-dev] [PATCH v2] FAQ: Add an entry about reconfiguration

2014-09-18 Thread YAMAMOTO Takashi
It seems that the behaviour is not so intuitive.
cf. https://bugs.launchpad.net/neutron/+bug/1346861

Signed-off-by: YAMAMOTO Takashi 
---
 FAQ | 21 +
 1 file changed, 21 insertions(+)

diff --git a/FAQ b/FAQ
index df5ac0e..d9ccc6e 100644
--- a/FAQ
+++ b/FAQ
@@ -731,6 +731,27 @@ A: It depends on mechanisms and configurations you want to 
use.
you want to use ebtables rules.)  On NetBSD, you might want to
consider using the bridge(4) with BRIDGE_IPF option.
 
+Q: It seems that Open vSwitch does nothing when I removed a port and
+   then immediately put it back.  For example, consider that p1 is
+   a port of type=internal:
+
+   ovs-vsctl del-port br0 p1 -- \
+   add-port br0 p1 -- \
+   set interface p1 type=internal
+
+A: It's expected that Open vSwitch "skips" intermediate steps especially
+   when it's in the middle of an OVSDB transaction.
+
+   If you want to make Open vSwitch actually destroy and then re-create
+   the port for some side effects like resetting kernel setting for the
+   corresponding interface, you need to separate operations into multiple
+   OVSDB transactions.  In the following example, the first ovs-vsctl,
+   which doesn't have --no-wait, will block until Open vSwitch reloads
+   the new configuration and removes the port:
+
+   ovs-vsctl del-port br0 p1
+   ovs-vsctl add-port br0 p1 -- \
+   set interface p1 type=internal
 
 Quality of Service (QoS)
 
-- 
1.9.4

___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


Re: [ovs-dev] [PATCH net v2 1/2] genetlink: add function genl_has_listeners()

2014-09-18 Thread Pravin Shelar
On Thu, Sep 18, 2014 at 1:31 AM, Nicolas Dichtel
 wrote:
> This function is the counterpart of the function netlink_has_listeners().
>
> Signed-off-by: Nicolas Dichtel 

Looks good.
Acked-by: Pravin B Shelar 

> ---
>
> v2: add patch 1/2
>
>  include/net/genetlink.h | 8 
>  1 file changed, 8 insertions(+)
>
> diff --git a/include/net/genetlink.h b/include/net/genetlink.h
> index 93695f0e22a5..af10c2cf8a1d 100644
> --- a/include/net/genetlink.h
> +++ b/include/net/genetlink.h
> @@ -394,4 +394,12 @@ static inline int genl_set_err(struct genl_family 
> *family, struct net *net,
> return netlink_set_err(net->genl_sock, portid, group, code);
>  }
>
> +static inline int genl_has_listeners(struct genl_family *family,
> +struct sock *sk, unsigned int group)
> +{
> +   if (WARN_ON_ONCE(group >= family->n_mcgrps))
> +   return -EINVAL;
> +   group = family->mcgrp_offset + group;
> +   return netlink_has_listeners(sk, group);
> +}
>  #endif /* __NET_GENERIC_NETLINK_H */
> --
> 2.1.0
>
___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


Re: [ovs-dev] [PATCH net v2 2/2] openvswitch: restore OVS_FLOW_CMD_NEW notifications

2014-09-18 Thread Pravin Shelar
On Thu, Sep 18, 2014 at 1:31 AM, Nicolas Dichtel
 wrote:
> From: Samuel Gauthier 
>
> Since commit fb5d1e9e127a ("openvswitch: Build flow cmd netlink reply only if 
> needed."),
> the new flows are not notified to the listeners of OVS_FLOW_MCGROUP.
>
> This commit fixes the problem by using the genl function, i.e.
> genl_has_listeners() instead of netlink_has_listeners().
>
> Signed-off-by: Samuel Gauthier 
> Signed-off-by: Nicolas Dichtel 

Thanks for the fix.
Acked-by: Pravin B Shelar 

> ---
>
> v2: add patch 1/2
>
>  net/openvswitch/datapath.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
> index 91d66b7e64ac..64dc864a417f 100644
> --- a/net/openvswitch/datapath.c
> +++ b/net/openvswitch/datapath.c
> @@ -78,11 +78,12 @@ static const struct genl_multicast_group 
> ovs_dp_vport_multicast_group = {
>
>  /* Check if need to build a reply message.
>   * OVS userspace sets the NLM_F_ECHO flag if it needs the reply. */
> -static bool ovs_must_notify(struct genl_info *info,
> -   const struct genl_multicast_group *grp)
> +static bool ovs_must_notify(struct genl_family *family, struct genl_info 
> *info,
> +   unsigned int group)
>  {
> return info->nlhdr->nlmsg_flags & NLM_F_ECHO ||
> -   netlink_has_listeners(genl_info_net(info)->genl_sock, 0);
> +  genl_has_listeners(family, genl_info_net(info)->genl_sock,
> + group);
>  }
>
>  static void ovs_notify(struct genl_family *family,
> @@ -763,7 +764,7 @@ static struct sk_buff *ovs_flow_cmd_alloc_info(const 
> struct sw_flow_actions *act
>  {
> struct sk_buff *skb;
>
> -   if (!always && !ovs_must_notify(info, &ovs_dp_flow_multicast_group))
> +   if (!always && !ovs_must_notify(&dp_flow_genl_family, info, 0))
> return NULL;
>
> skb = genlmsg_new_unicast(ovs_flow_cmd_msg_size(acts), info, 
> GFP_KERNEL);
> --
> 2.1.0
>
___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


[ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Alex Wang
When OVS is running with a large topology (e.g. a large number of
interfaces), the stats and status updates to ovsdb become huge and
normally require multiple runs of the ovsdb jsonrpc message processing
loop to consume.  This can also leave jsonrpc messages backlogged in OVS.

This commit adds a warning message to flag an excessive jsonrpc sending
backlog.
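The shape of the check the patch adds can be sketched standalone as follows. The real code uses OVS's list and VLOG machinery on struct jsonrpc; the struct, threshold name, and plain fprintf here are illustrative substitutes.

```c
#include <assert.h>
#include <stdio.h>

#define BACKLOG_WARN_MSGS 50    /* same threshold as the patch */

/* Illustrative stand-in for the jsonrpc output queue accounting. */
struct backlog {
    size_t msgs;    /* queued, not-yet-sent messages */
    size_t bytes;   /* total bytes backlogged */
};

/* Account for one queued message and warn when the queue is excessive.
 * Returns 1 if a warning was emitted (real code: VLOG_WARN). */
static int backlog_push(struct backlog *b, size_t msg_len)
{
    b->msgs++;
    b->bytes += msg_len;
    if (b->msgs >= BACKLOG_WARN_MSGS) {
        fprintf(stderr, "excessive sending backlog: %zu msgs, %zu bytes\n",
                b->msgs, b->bytes);
        return 1;
    }
    return 0;
}
```

One design note: since jsonrpc_send() runs per message, a threshold check like this fires on every send once the queue stays above 50, so the production version may want rate limiting.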

Signed-off-by: Alex Wang 
---
 lib/jsonrpc.c |6 ++
 1 file changed, 6 insertions(+)

diff --git a/lib/jsonrpc.c b/lib/jsonrpc.c
index 0841ad8..c0f80bc 100644
--- a/lib/jsonrpc.c
+++ b/lib/jsonrpc.c
@@ -259,6 +259,12 @@ jsonrpc_send(struct jsonrpc *rpc, struct jsonrpc_msg *msg)
 list_push_back(&rpc->output, &buf->list_node);
 rpc->backlog += length;
 
+if (list_size(&rpc->output) >= 50) {
+VLOG_WARN("excessive sending backlog, jsonrpc: %s, num of msgs: "
+  "%"PRIuSIZE", backlog: %"PRIuSIZE".", rpc->name,
+  list_size(&rpc->output), rpc->backlog);
+}
+
 if (rpc->backlog == length) {
 jsonrpc_run(rpc);
 }
-- 
1.7.9.5



[ovs-dev] Original New Cisco Switches,Modules and Firewalls

2014-09-18 Thread Elaine Hu
Original New Cisco Switches,Modules and Firewalls available for sale now.

Hello dear,
Good day !
Following is Original New Cisco Switches,Modules and Firewalls available for 
sale now.
Any needs of them, please feel free to contact me.
Skype: elaine.hu87

Part Number        Unit Price  Stock Qty
WS-C2960-24TC-L    $395        224
WS-C2960-24TT-L    $395        20
WS-C2960-48TC-S    $695        36
WS-C2960X-24TS-L   $885        25
WS-C2960X-24PS-L   $1,180      20
ASA5505-K8         $350        20
ASA5505-BUN-K9     $395        15
X2-10GB-LR         $615        28
GLC-SX-MMD         $68         100
GLC-LH-SMD         $98         100
WS-C3560G-24PS-S   $1,835      30
WS-C2960S-48FPD-L  $2,450      12
WS-C3750X-24T-L    $1,515      20
WS-C3750X-24S-S    $6,220      3
WS-C3850-24P-S     $3,010      10
WS-C3850-48T-S     $4,725      10
WS-C3850-48P-S     $5,260      10
WS-C3850-24T-S     $2,675      10
PWR-C1-715WAC=     $420        10
PWR-C1-350WAC=     $235        10
C3850-NM-4-1G      $235        10

Looking forward to your early reply !
Thanks & Regards,
Elaine Hu / Sales Manager
Asia Information Technology HK International Limited
Tel: 86+ 01062669007
WhatsApp: 008613051026936
Skype ID: elaine.hu87
Email: cisco-ela...@hcyxxx.com (mailto:cisco-ela...@hcyxxx.com)

elainehu.ci...@gmail.com (mailto:elainehu.ci...@gmail.com)

Web: www.networkingcisco.com
www.hcyxxx.en.alibaba.com


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Flavio Leitner
On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> When ovs is running with large topology (e.g. large number of
> interfaces), the stats and status update to ovsdb become huge and
> normally require multiple run of ovsdb jsonrpc message processing
> loop to consume.  Also, this could cause jsonrpc messages backlogged
> in ovs.
> 
> This commit adds a warning message to warn the excessive backlog
> for jsonrpc.

Does that mean this message will periodically show up in the log
because of the large topology?

fbl

> 
> Signed-off-by: Alex Wang 
> ---
>  lib/jsonrpc.c |6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/lib/jsonrpc.c b/lib/jsonrpc.c
> index 0841ad8..c0f80bc 100644
> --- a/lib/jsonrpc.c
> +++ b/lib/jsonrpc.c
> @@ -259,6 +259,12 @@ jsonrpc_send(struct jsonrpc *rpc, struct jsonrpc_msg 
> *msg)
>  list_push_back(&rpc->output, &buf->list_node);
>  rpc->backlog += length;
>  
> +if (list_size(&rpc->output) >= 50) {
> +VLOG_WARN("excessive sending backlog, jsonrpc: %s, num of msgs: "
> +  "%"PRIuSIZE", backlog: %"PRIuSIZE".", rpc->name,
> +  list_size(&rpc->output), rpc->backlog);
> +}
> +
>  if (rpc->backlog == length) {
>  jsonrpc_run(rpc);
>  }
> -- 
> 1.7.9.5
> 


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Alex Wang
On Thu, Sep 18, 2014 at 1:07 PM, Flavio Leitner  wrote:

> On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > When ovs is running with large topology (e.g. large number of
> > interfaces), the stats and status update to ovsdb become huge and
> > normally require multiple run of ovsdb jsonrpc message processing
> > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > in ovs.
> >
> > This commit adds a warning message to warn the excessive backlog
> > for jsonrpc.
>
> Does that mean this message will periodically show up in the log
> because of the large topology?
>
> fbl
>
>

I'm about to send a patch to rate limit the statistics update.

So, expectedly, we should never see this warning.


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Flavio Leitner
On Thu, Sep 18, 2014 at 01:10:00PM -0700, Alex Wang wrote:
> On Thu, Sep 18, 2014 at 1:07 PM, Flavio Leitner  wrote:
> 
> > On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > > When ovs is running with large topology (e.g. large number of
> > > interfaces), the stats and status update to ovsdb become huge and
> > > normally require multiple run of ovsdb jsonrpc message processing
> > > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > > in ovs.
> > >
> > > This commit adds a warning message to warn the excessive backlog
> > > for jsonrpc.
> >
> > Does that mean this message will periodically show up in the log
> > because of the large topology?
> 
> I'm about to send a patch to rate limit the statistics update.
> 
> So, expectedly, we should never see this warning.
> 

Ok, my understanding is that after your rate limit patch,
the only chance to see that message is if there is a bug?

fbl


Re: [ovs-dev] [PATCH] ofproto: Increase default datapath max_idle time.

2014-09-18 Thread Ethan Jackson
In our experience, most people actually expect this number to be
fairly high.  For example, our traffic generator runs a little bit of
traffic to populate caches, then pauses for a couple of seconds before
finally starting the main test.  On top of that, we've had some minor
complaints from people managing deployments about how low this number
is.

Originally we used 1.5 seconds because the flow revalidation was so
slow that we wanted to seriously limit the size of the flow table to
be sure we can keep up.  However, now that we've made such scale
improvements, bumping the default up seems reasonable.

All of that said, we don't feel all that strongly about this change,
is there a specific concern you have?  Perhaps we can address it...

Ethan

On Thu, Sep 18, 2014 at 5:56 AM, Flavio Leitner  wrote:
> On Fri, Sep 12, 2014 at 10:03:56AM +1200, Joe Stringer wrote:
>> The datapath max_idle value determines how long to wait before deleting
>> an idle datapath flow when operating below the flow_limit. This patch
>> increases the max_idle to 10 seconds, which allows datapath flows to be
>> remain cached even if they are used less consistently, and provides a
>> small improvement in the supported number of flows when operating around
>> the flow_limit.
>
> Could you tell what would be the motivation behind this change?
>
> fbl
>
>> Signed-off-by: Joe Stringer 
>> ---
>>  ofproto/ofproto.h |2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/ofproto/ofproto.h b/ofproto/ofproto.h
>> index 1b8709a..d60b198 100644
>> --- a/ofproto/ofproto.h
>> +++ b/ofproto/ofproto.h
>> @@ -267,7 +267,7 @@ int ofproto_port_dump_done(struct ofproto_port_dump *);
>>  )
>>
>>  #define OFPROTO_FLOW_LIMIT_DEFAULT 200000
>> -#define OFPROTO_MAX_IDLE_DEFAULT 1500
>> +#define OFPROTO_MAX_IDLE_DEFAULT 10000 /* ms */
>>
>>  const char *ofproto_port_open_type(const char *datapath_type,
>> const char *port_type);
>> --
>> 1.7.10.4
>>


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Alex Wang
Yes, that is correct,

Thanks,

On Thu, Sep 18, 2014 at 1:15 PM, Flavio Leitner  wrote:

> On Thu, Sep 18, 2014 at 01:10:00PM -0700, Alex Wang wrote:
> > On Thu, Sep 18, 2014 at 1:07 PM, Flavio Leitner  wrote:
> >
> > > On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > > > When ovs is running with large topology (e.g. large number of
> > > > interfaces), the stats and status update to ovsdb become huge and
> > > > normally require multiple run of ovsdb jsonrpc message processing
> > > > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > > > in ovs.
> > > >
> > > > This commit adds a warning message to warn the excessive backlog
> > > > for jsonrpc.
> > >
> > > Does that mean this message will periodically show up in the log
> > > because of the large topology?
> >
> > I'm about to send a patch to rate limit the statistics update.
> >
> > So, expectedly, we should never see this warning.
> >
>
> Ok, my understanding is that after your rate limit patch,
> the only chance to see that message is if there is a bug?
>
> fbl
>


Re: [ovs-dev] [PATCH] ofproto: Increase default datapath max_idle time.

2014-09-18 Thread Flavio Leitner
On Thu, Sep 18, 2014 at 01:18:57PM -0700, Ethan Jackson wrote:
> In our experience, most people actually expect this number to be
> fairly high.  For example, our traffic generator runs a little bit of
> traffic to populate caches, pauses for a couple of seconds, before
> finally starting the main test.  On top of that, we've had some minor
> complaints from people managing deployments about how low this number
> is.
> 
> Originally we used 1.5 seconds because the flow revalidation was so
> slow that we wanted to seriously limit the size of the flow table to
> be sure we can keep up.  However, now that we've made such scale
> improvements, bumping the default up seems reasonable.
> 
> All of that said, we don't feel all that strongly about this change,
> is there a specific concern you have?  Perhaps we can address it...

Not anymore, you addressed all my concerns in your reply.
Thanks for explaining it,
fbl



Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Flavio Leitner
On Thu, Sep 18, 2014 at 01:20:30PM -0700, Alex Wang wrote:
> Yes, that is correct,

Perfect, thanks!
fbl

> 
> Thanks,
> 
> On Thu, Sep 18, 2014 at 1:15 PM, Flavio Leitner  wrote:
> 
> > On Thu, Sep 18, 2014 at 01:10:00PM -0700, Alex Wang wrote:
> > > On Thu, Sep 18, 2014 at 1:07 PM, Flavio Leitner  wrote:
> > >
> > > > On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > > > > When ovs is running with large topology (e.g. large number of
> > > > > interfaces), the stats and status update to ovsdb become huge and
> > > > > normally require multiple run of ovsdb jsonrpc message processing
> > > > > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > > > > in ovs.
> > > > >
> > > > > This commit adds a warning message to warn the excessive backlog
> > > > > for jsonrpc.
> > > >
> > > > Does that mean this message will periodically show up in the log
> > > > because of the large topology?
> > >
> > > I'm about to send a patch to rate limit the statistics update.
> > >
> > > So, expectedly, we should never see this warning.
> > >
> >
> > Ok, my understanding is that after your rate limit patch,
> > the only chance to see that message is if there is a bug?
> >
> > fbl
> >


Re: [ovs-dev] [PATCH 1/5 v1] datapath-windows: return TRUE on success in NlAttrValidate

2014-09-18 Thread Nithin Raju

On Sep 18, 2014, at 4:35 AM, Samuel Ghinet 
 wrote:

> Hi Nithin,
> 
> Good catch!
> 
> I would normally prefer the validation in a function to be done like:
> ret = TRUE;
> 
> if (fail_cond_1) {
>  log();
>  ret = FALSE;
>  goto done;
> }
> 
> if (fail_cond_2) {
>  log();
>  ret = FALSE;
>  goto done;
> }
> 
> done:
> return ret;
> 
> It would be a bit more text, but I think it improves clarity a bit on the 
> ways "ret" is used and changed.
> 
> Anyway, it's good the way it is in NlAttrValidate as well.
> 
> Acked-by: Samuel Ghinet 

Thanks for the review. I've incorporated your comments.

-- Nithin


Re: [ovs-dev] [PATCH v2] FAQ: Add an entry about reconfiguration

2014-09-18 Thread Ben Pfaff
On Fri, Sep 19, 2014 at 12:24:38AM +0900, YAMAMOTO Takashi wrote:
> It seems that the behaviour is not so intuitive.
> cf. https://bugs.launchpad.net/neutron/+bug/1346861
> 
> Signed-off-by: YAMAMOTO Takashi 

I am not sure that I understand the bug report there.  It might be
reporting an actual bug in OVS.

The goal of ovs-vswitchd regarding the database is to make sure that
the state of the system is kept up-to-date with whatever is in the
database.  Maybe that bug report is saying, "If I have a port, and
then I del-port/add-port that in a single transaction, ovs-vswitchd
does not actually delete a port and then readd it at the datapath
level."  If it is saying that, then it is correct.  But that could
also happen if you use multiple transactions, because when it is busy
ovs-vswitchd might "miss" some of the intermediate transactions and
just implement the overall effect.

The reason that two ovs-vsctl calls always deletes and readds a port
is a little different: the first ovs-vsctl waits for its transaction
to take effect before executing.  If you use "--no-wait", then you
just have two bare transactions and won't get the behavior of a
del-port followed by an add-port 100% of the time.


Re: [ovs-dev] [ovs-discuss] checking load on OVS

2014-09-18 Thread Ben Pfaff
On Thu, Sep 18, 2014 at 02:17:47PM +0530, Neha Joshi wrote:
> I want to check statistics of openvswitch in terms of CPU, load average and
> memory.
> I found that I need to set *other_config:enable-statistics* to *true*.
> Can anyone help with how to do this?  I mean, is there a command for it?

ovs-vsctl set Open_vSwitch . other_config:enable-statistics=true


[ovs-dev] debugging OVS

2014-09-18 Thread samantha Andares
Hi, 
 
I am seeing packets dropped on the way out of an OVS bridge. The TX dropped
counter increases in ovs-dpctl but not at the physical interface level
(ifconfig).
 
Can anyone tell me how to enable some OVS debugging so I can find the cause
and fix the problem?
 
thanks.
 
sam


Re: [ovs-dev] Openflow 1.4: Eviction mechanism implementation

2014-09-18 Thread Ben Pfaff
As I said before, I would enable eviction if it is configured through
the database or through the OpenFlow connection.  This does not mean
that enabling eviction through OpenFlow would modify the database.

I don't think you are looking at the current codebase, on the master
branch.  The current code does not have OPENFLOW or EVICT states.  We
only accept new features in the master branch.

On Thu, Sep 18, 2014 at 06:21:11PM +0530, Saloni Jain wrote:
> Thanks for the suggestion and we have done more investigation on code.
> 
> We are thinking of implementing eviction on the basis of importance as 
> done in current ovs code for lifetime by following the same algorithm as 
> mentioned in conf.db(5)(Page 33).
> 
> As suggested by you we can enable eviction by both ovs-vsctl 
> create...overflow-policy=evict and through ovs-ofctl mod-table...evict 
> commands. Our understanding is that ovs-vsctl command does changes in 
> bridge configuration present in conf.db where as ovs-ofctl mod-table 
> command changes table state.
> 
> Currently ovs-vsctl create...overflow-policy=evict command has following 
> flow:
> bridge_run()-> bridge_run__ -> ofproto_run() which differentiates between 
> OPENFLOW and EVICT state.
> Then bridge_reconfigure() (which is called when there is a database 
> change)-> bridge_configure_tables() -> ofproto_configure_tables() -> 
> ofproto_enable_eviction() which sets eviction fields for a table
> 
> But mod-table is an openflow request having openflow handler and does not 
> change any of bridge configuration so bridge_reconfigure() is not called 
> and so the eviction fields cannot be set.
> 
> As per our understanding it is not correct to change db through ofctl 
> commands i.e. to call bridge_reconfigure() through mod-table.
> 
> Query 1 - Is the above understanding correct?
> 
> Query 2 - Should we come up with a new approach or can you provide any 
> inputs on how can we configure table eviction_fields without calling 
> bridge_reconfigure() via ofctl command?
> 
> Thanks and Regards,
> Saloni Jain
> Tata Consultancy Services
> Mailto: saloni.j...@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty.   IT Services
> Business Solutions
> Consulting
> 
> 
> 
> 
> From:   Ben Pfaff 
> To: Saloni Jain 
> Cc: dev@openvswitch.org, Deepankar Gupta , 
> Hiteshi Madan , Sanjay6 Singh 
> 
> Date:   09/15/2014 08:54 PM
> Subject:Re: Fw: Re: [ovs-dev] Openflow 1.4: Eviction mechanism 
> implementation
> 
> 
> 
> At this point I think you should invest time in reading and
> understanding code instead of asking more questions.  I have already
> answered many questions.
> 
> On Mon, Sep 15, 2014 at 05:55:01PM +0530, Saloni Jain wrote:
> >  It would be great help if someone could provide the inputs asap.
> > Thanks in advance for your inputs and suggestions.
> > 
> > Thanks and Regards,
> > Saloni Jain
> >  Tata Consultancy Services
> >  Mailto: saloni.j...@tcs.com
> >  Website: http://www.tcs.com
> >  
> >  Experience certainty. IT Services
> >Business Solutions
> >Consulting
> >  
> > 
> > 
> > 
> > -Forwarded by Saloni Jain/DEL/TCS on 09/15/2014 05:51PM -
> > To: Ben Pfaff , dev@openvswitch.org
> > From: Saloni Jain/DEL/TCS
> > Date: 09/12/2014 05:00PM
> > Cc: Deepankar Gupta , Hiteshi Madan 
> , Sanjay6 Singh 
> > Subject: Re: [ovs-dev] Openflow 1.4: Eviction mechanism implementation
> > 
> > Hi Ben/Team,
> > 
> > > I would enable eviction if it is configured through the database or 
> through the OpenFlow connection.
> > 
> > In current implementation, to enable eviction using create table ... 
> overflow-policy=evict, oftable_enable_eviction() in ofproto.c is called 
> which sets table->eviction fields.
> > 
> > I've been trying to do the same using table-mod command implementation 
> but haven't been successful. It would be of great help if you could point 
> me to some piece of code through which I can call 
> oftable_enable_eviction() or can set table->eviction_fields using 
> mod_table command.
> > 
> > Thanks in advance for you suggestion and support.
> > 
> > Thanks and Regards,
> > Saloni Jain
> > Tata Consultancy Services
> > Mailto: saloni.j...@tcs.com
> > Website: http://www.tcs.com
> > 
> > Experience certainty.  IT Services
> >Business Solutions
> >Consulting
> > 
> > 
> > To: Saloni Jain 
> > From: Ben Pfaff 
> > Date: 09/11/2014 11:03PM
> > Cc: dev@openvswitch.org, Hiteshi Madan , 
> Deepankar Gupta , Sanjay6 Singh 
> 
> > Sub

Re: [ovs-dev] [PATCH] ofproto: Increase default datapath max_idle time.

2014-09-18 Thread Ben Pfaff
On Thu, Sep 18, 2014 at 05:24:12PM -0300, Flavio Leitner wrote:
> On Thu, Sep 18, 2014 at 01:18:57PM -0700, Ethan Jackson wrote:
> > In our experience, most people actually expect this number to be
> > fairly high.  For example, our traffic generator runs a little bit of
> > traffic to populate caches, pauses for a couple of seconds, before
> > finally starting the main test.  On top of that, we've had some minor
> > complaints from people managing deployments about how low this number
> > is.
> > 
> > Originally we used 1.5 seconds because the flow revalidation was so
> > slow that we wanted to seriously limit the size of the flow table to
> > be sure we can keep up.  However, now that we've made such scale
> > improvements, bumping the default up seems reasonable.
> > 
> > All of that said, we don't feel all that strongly about this change,
> > is there a specific concern you have?  Perhaps we can address it...
> 
> Not anymore, you addressed all my concerns in your reply.
> Thanks for explaining it,

Ethan, I hope you will add this context to the commit message?

Thanks,

Ben.


Re: [ovs-dev] [PATCH] ofproto: Increase default datapath max_idle time.

2014-09-18 Thread Ben Pfaff
On Thu, Sep 18, 2014 at 2:48 PM, Ben Pfaff  wrote:
> On Thu, Sep 18, 2014 at 05:24:12PM -0300, Flavio Leitner wrote:
>> On Thu, Sep 18, 2014 at 01:18:57PM -0700, Ethan Jackson wrote:
>> > In our experience, most people actually expect this number to be
>> > fairly high.  For example, our traffic generator runs a little bit of
>> > traffic to populate caches, pauses for a couple of seconds, before
>> > finally starting the main test.  On top of that, we've had some minor
>> > complaints from people managing deployments about how low this number
>> > is.
>> >
>> > Originally we used 1.5 seconds because the flow revalidation was so
>> > slow that we wanted to seriously limit the size of the flow table to
>> > be sure we can keep up.  However, now that we've made such scale
>> > improvements, bumping the default up seems reasonable.
>> >
>> > All of that said, we don't feel all that strongly about this change,
>> > is there a specific concern you have?  Perhaps we can address it...
>>
>> Not anymore, you addressed all my concerns in your reply.
>> Thanks for explaining it,
>
> Ethan, I hope you will add this context to the commit message?

Oops, Ethan isn't the author and it's already been committed.

Never mind


Re: [ovs-dev] [PATCH] datapath-windows: fix OVS_VPORT_TYPE

2014-09-18 Thread Nithin Raju

On Sep 17, 2014, at 6:06 AM, Samuel Ghinet  
wrote:

> The windows ovs kernel uses an OVS_VPORT_TYPE enum that is incompatible with
> the userspace counterpart (enum ovs_vport_type from openvswitch.h). We must 
> use
> the same enum type - enum ovs_vport_type - for the netlink communication to 
> work
> properly.
> 
> This patch makes the fix: "typedef enum ovs_vport_type OVS_VPORT_TYPE" and
> changes the afferent kernel driver code:
> o) vport types synthetic and emulated turn to: netdev
> o) vport type internal turns to: internal
> o) vport type external turns to: netdev (plus, we hold a field in vport, 
> "isExternal")
> 
> This patch is required for the ovs-dpctl show and vswitchd vport commands to 
> work
> properly.
> 
> Signed-off-by: Samuel Ghinet 


LG. Thanks for doing this.

The Hyper-V internal seems to map neatly into the OVS_VPORT_INTERNAL since it 
is a port that is connected to OVS as well as has a presence on the host.

Acked-by: Nithin Raju 

-- Nithin


[ovs-dev] [PATCH 1/3] bridge: Rate limit the statistics update.

2014-09-18 Thread Alex Wang
When ovs is running with a large topology (e.g. a large number of
interfaces), the stats update to ovsdb becomes huge and normally
requires multiple runs of the ovsdb jsonrpc message processing loop
to consume.

To prevent the periodic stats update from backlogging in the
jsonrpc sending queue, this commit adds rate-limiting logic which
only allows a new update once the previous one is done.

Signed-off-by: Alex Wang 
---
 vswitchd/bridge.c |   54 ++---
 1 file changed, 31 insertions(+), 23 deletions(-)

diff --git a/vswitchd/bridge.c b/vswitchd/bridge.c
index 1060719..3631cbe 100644
--- a/vswitchd/bridge.c
+++ b/vswitchd/bridge.c
@@ -192,6 +192,10 @@ static bool status_txn_try_again;
 static int stats_timer_interval;
 static long long int stats_timer = LLONG_MIN;
 
+/* Current stats database transaction, NULL if there is no ongoing
+ * transaction. */
+static struct ovsdb_idl_txn *stats_txn;
+
 /* In some datapaths, creating and destroying OpenFlow ports can be extremely
  * expensive.  This can cause bridge_reconfigure() to take a long time during
  * which no other work can be done.  To deal with this problem, we limit port
@@ -2744,35 +2748,39 @@ bridge_run(void)
 
 /* Refresh interface and mirror stats if necessary. */
 if (time_msec() >= stats_timer) {
-if (cfg) {
-struct ovsdb_idl_txn *txn;
-
-txn = ovsdb_idl_txn_create(idl);
-HMAP_FOR_EACH (br, node, &all_bridges) {
-struct port *port;
-struct mirror *m;
-
-HMAP_FOR_EACH (port, hmap_node, &br->ports) {
-struct iface *iface;
+enum ovsdb_idl_txn_status status;
 
-LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
-iface_refresh_stats(iface);
+/* Rate limit the update.  Do not start a new update if the
+ * previous one is not done. */
+if (!stats_txn) {
+if (cfg) {
+stats_txn = ovsdb_idl_txn_create(idl);
+HMAP_FOR_EACH (br, node, &all_bridges) {
+struct port *port;
+struct mirror *m;
+
+HMAP_FOR_EACH (port, hmap_node, &br->ports) {
+struct iface *iface;
+
+LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
+iface_refresh_stats(iface);
+}
+port_refresh_stp_stats(port);
+}
+HMAP_FOR_EACH (m, hmap_node, &br->mirrors) {
+mirror_refresh_stats(m);
 }
-
-port_refresh_stp_stats(port);
-}
-
-HMAP_FOR_EACH (m, hmap_node, &br->mirrors) {
-mirror_refresh_stats(m);
 }
-
+refresh_controller_status();
 }
-refresh_controller_status();
-ovsdb_idl_txn_commit(txn);
-ovsdb_idl_txn_destroy(txn); /* XXX */
 }
 
-stats_timer = time_msec() + stats_timer_interval;
+status = ovsdb_idl_txn_commit(stats_txn);
+if (status != TXN_INCOMPLETE) {
+stats_timer = time_msec() + stats_timer_interval;
+ovsdb_idl_txn_destroy(stats_txn);
+stats_txn = NULL;
+}
 }
 
 if (!status_txn) {
-- 
1.7.9.5



[ovs-dev] [PATCH 0/3] Rate limit stats update.

2014-09-18 Thread Alex Wang
When testing ovs with 10K or more tunnels with cfm/bfd enabled,
it can be observed that the memory utilization of the ovs-vswitchd
process keeps growing.  After investigation, it was found that
the periodic statistics update to ovsdb becomes huge when there
are that many interfaces and takes ovsdb a long time to consume.
The backlog of subsequent updates in the jsonrpc sending queue
causes the increase in memory utilization.

This series of patches solves the issue by adding rate-limit
logic to the statistics update.  It also refactors the code,
making it easier to understand.

Alex Wang (3):
  bridge: Rate limit the statistics update.
  bridge: Refactor the stats and status update.
  jsonrpc: Warn about excessive sending backlog.

 lib/jsonrpc.c |6 ++
 vswitchd/bridge.c |  211 +
 2 files changed, 122 insertions(+), 95 deletions(-)

-- 
1.7.9.5



[ovs-dev] [PATCH 2/3] bridge: Refactor the stats and status update.

2014-09-18 Thread Alex Wang
This commit refactors the stats and status update in bridge_run()
by moving the corresponding code into separate functions.  This
makes the code better organized.

Signed-off-by: Alex Wang 
---
 vswitchd/bridge.c |  211 -
 1 file changed, 112 insertions(+), 99 deletions(-)

diff --git a/vswitchd/bridge.c b/vswitchd/bridge.c
index 3631cbe..f2e464c 100644
--- a/vswitchd/bridge.c
+++ b/vswitchd/bridge.c
@@ -2595,6 +2595,114 @@ refresh_controller_status(void)
 }
 
 static void
+run_stats_update(void)
+{
+const struct ovsrec_open_vswitch *cfg = ovsrec_open_vswitch_first(idl);
+int stats_interval;
+
+if (!cfg) {
+return;
+}
+
+/* Statistics update interval should always be greater than or equal to
+ * 5000 ms. */
+stats_interval = MAX(smap_get_int(&cfg->other_config,
+  "stats-update-interval",
+  5000), 5000);
+if (stats_timer_interval != stats_interval) {
+stats_timer_interval = stats_interval;
+stats_timer = LLONG_MIN;
+}
+
+/* Refresh interface and mirror stats if necessary. */
+if (time_msec() >= stats_timer) {
+enum ovsdb_idl_txn_status status;
+
+/* Rate limit the update.  Do not start a new update if the
+ * previous one is not done. */
+if (!stats_txn) {
+struct bridge *br;
+
+stats_txn = ovsdb_idl_txn_create(idl);
+HMAP_FOR_EACH (br, node, &all_bridges) {
+struct port *port;
+struct mirror *m;
+
+HMAP_FOR_EACH (port, hmap_node, &br->ports) {
+struct iface *iface;
+
+LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
+iface_refresh_stats(iface);
+}
+port_refresh_stp_stats(port);
+}
+HMAP_FOR_EACH (m, hmap_node, &br->mirrors) {
+mirror_refresh_stats(m);
+}
+}
+refresh_controller_status();
+}
+
+status = ovsdb_idl_txn_commit(stats_txn);
+if (status != TXN_INCOMPLETE) {
+stats_timer = time_msec() + stats_timer_interval;
+ovsdb_idl_txn_destroy(stats_txn);
+stats_txn = NULL;
+}
+}
+}
+
+static void
+run_status_update(void)
+{
+uint64_t seq;
+
+/* Check the need to update status. */
+seq = seq_read(connectivity_seq_get());
+if (seq != connectivity_seqno || status_txn_try_again) {
+enum ovsdb_idl_txn_status status;
+
+/* Rate limit the update.  Do not start a new update if the
+ * previous one is not done. */
+if (!status_txn) {
+struct bridge *br;
+
+connectivity_seqno = seq;
+status_txn = ovsdb_idl_txn_create(idl);
+HMAP_FOR_EACH (br, node, &all_bridges) {
+struct port *port;
+
+br_refresh_stp_status(br);
+br_refresh_rstp_status(br);
+HMAP_FOR_EACH (port, hmap_node, &br->ports) {
+struct iface *iface;
+
+port_refresh_stp_status(port);
+port_refresh_rstp_status(port);
+LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
+iface_refresh_netdev_status(iface);
+iface_refresh_ofproto_status(iface);
+}
+}
+}
+}
+
+status = ovsdb_idl_txn_commit(status_txn);
+if (status != TXN_INCOMPLETE) {
+ovsdb_idl_txn_destroy(status_txn);
+status_txn = NULL;
+
+/* Sets the 'status_txn_try_again' if the transaction fails. */
+if (status == TXN_SUCCESS || status == TXN_UNCHANGED) {
+status_txn_try_again = false;
+} else {
+status_txn_try_again = true;
+}
+}
+}
+}
+
+static void
 bridge_run__(void)
 {
 struct bridge *br;
@@ -2622,8 +2730,6 @@ bridge_run(void)
 const struct ovsrec_open_vswitch *cfg;
 
 bool vlan_splinters_changed;
-struct bridge *br;
-int stats_interval;
 
 ovsrec_open_vswitch_init(&null_cfg);
 
@@ -2682,6 +2788,8 @@ bridge_run(void)
  * usage has changed. */
 vlan_splinters_changed = false;
 if (vlan_splinters_enabled_anywhere) {
+struct bridge *br;
+
 HMAP_FOR_EACH (br, node, &all_bridges) {
 if (ofproto_has_vlan_usage_changed(br->ofproto)) {
 vlan_splinters_changed = true;
@@ -2732,103 +2840,8 @@ bridge_run(void)
 }
 }
 
-/* Statistics update interval should always be greater than or equal to
- * 5000 ms. */
-if (cfg) {
-stats_interval = MAX(smap_get_int(&cfg->other_config,
-  "stats-update-interval",
- 

[ovs-dev] [PATCH 3/3] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Alex Wang
This commit adds a warning message to flag an excessive sending
backlog in jsonrpc.

Signed-off-by: Alex Wang 
---
 lib/jsonrpc.c |6 ++
 1 file changed, 6 insertions(+)

diff --git a/lib/jsonrpc.c b/lib/jsonrpc.c
index 0841ad8..c0f80bc 100644
--- a/lib/jsonrpc.c
+++ b/lib/jsonrpc.c
@@ -259,6 +259,12 @@ jsonrpc_send(struct jsonrpc *rpc, struct jsonrpc_msg *msg)
 list_push_back(&rpc->output, &buf->list_node);
 rpc->backlog += length;
 
+if (list_size(&rpc->output) >= 50) {
+VLOG_WARN("excessive sending backlog, jsonrpc: %s, num of msgs: "
+  "%"PRIuSIZE", backlog: %"PRIuSIZE".", rpc->name,
+  list_size(&rpc->output), rpc->backlog);
+}
+
 if (rpc->backlog == length) {
 jsonrpc_run(rpc);
 }
-- 
1.7.9.5

___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev


Re: [ovs-dev] [PATCH] datapath-windows: NetLink kernel side, Event subscription and notification

2014-09-18 Thread Ben Pfaff
On Wed, Sep 17, 2014 at 11:12:05PM -0700, Eitan Eliahu wrote:
> This code handles an event notification subscription for a user mode thread
> which joins an MC group. The event wait handler queues an IRP which is
> completed upon change in a port state.
> 
> Signed-off-by: Eitan Eliahu 

I fixed a couple of whitespace errors:

Applying: datapath-windows: NetLink kernel side, Event subscription and 
notification
/home/blp/nicira/ovs/.git/rebase-apply/patch:35: trailing whitespace.
 OvsGetDpCmdHandler, 
/home/blp/nicira/ovs/.git/rebase-apply/patch:138: trailing whitespace.
status = OvsSubscribeEventIoctl(instance->fileObject, &request, 
warning: 2 lines applied after fixing whitespace errors.

added the acks, and applied this to master.

Thanks,

Ben.


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Ben Pfaff
On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> When ovs is running with large topology (e.g. large number of
> interfaces), the stats and status update to ovsdb become huge and
> normally require multiple run of ovsdb jsonrpc message processing
> loop to consume.  Also, this could cause jsonrpc messages backlogged
> in ovs.
> 
> This commit adds a warning message to warn the excessive backlog
> for jsonrpc.
> 
> Signed-off-by: Alex Wang 

Is this the correct way to do it?  I would have thought that as long as
there are any backlogged stats messages, ovs-vswitchd should defer
sending any further stats updates until the backlog clears.


Re: [ovs-dev] [PATCH] datapath-windows: fix OVS_VPORT_TYPE

2014-09-18 Thread Ben Pfaff
On Wed, Sep 17, 2014 at 01:06:43PM +, Samuel Ghinet wrote:
> The windows ovs kernel uses an OVS_VPORT_TYPE enum that is incompatible with
> the userspace counterpart (enum ovs_vport_type from openvswitch.h). We must 
> use
> the same enum type - enum ovs_vport_type - for the netlink communication to 
> work
> properly.
> 
> This patch makes the fix: "typedef enum ovs_vport_type OVS_VPORT_TYPE" and
> changes the afferent kernel driver code:
> o) vport types synthetic and emulated turn to: netdev
> o) vport type internal turns to: internal
> o) vport type external turns to: netdev (plus, we hold a field in vport, 
> "isExternal")
> 
> This patch is required for the ovs-dpctl show and vswitchd vport commands to 
> work
> properly.
> 
> Signed-off-by: Samuel Ghinet 

This didn't apply for me:

Applying: datapath-windows: fix OVS_VPORT_TYPE
error: patch failed: datapath-windows/include/OvsDpInterfaceExt.h:80
error: datapath-windows/include/OvsDpInterfaceExt.h: patch does not apply
error: patch failed: datapath-windows/ovsext/Vport.h:93
error: datapath-windows/ovsext/Vport.h: patch does not apply
Patch failed at 0001 datapath-windows: fix OVS_VPORT_TYPE
The copy of the patch that failed is found in:
   /home/blp/nicira/ovs/.git/rebase-apply/patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Alex Wang
Hey Ben,

I sent out a new series of patches here:
http://openvswitch.org/pipermail/dev/2014-September/045949.html
which does exactly what you said.

Forgot to mark this thread as invalid.

Thanks,
Alex Wang,

On Thu, Sep 18, 2014 at 4:00 PM, Ben Pfaff  wrote:

> On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > When ovs is running with large topology (e.g. large number of
> > interfaces), the stats and status update to ovsdb become huge and
> > normally require multiple run of ovsdb jsonrpc message processing
> > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > in ovs.
> >
> > This commit adds a warning message to warn the excessive backlog
> > for jsonrpc.
> >
> > Signed-off-by: Alex Wang 
>
> Is this the correct way to do it?  I would have thought that as long as
> there are any backlogged stats messages, ovs-vswitchd should defer
> sending any further stats updates until the backlog clears.
>


Re: [ovs-dev] [PATCH] jsonrpc: Warn about excessive sending backlog.

2014-09-18 Thread Ben Pfaff
Ah, great.  I don't think of that as "rate limiting", hence my
confusion.

I'll review the series.

Thank you!

On Thu, Sep 18, 2014 at 04:04:18PM -0700, Alex Wang wrote:
> Hey Ben,
> 
> I sent out a new series of patches here:
> http://openvswitch.org/pipermail/dev/2014-September/045949.html
> which does exactly what you said.
> 
> Forgot to mark this thread as invalid.
> 
> Thanks,
> Alex Wang,
> 
> On Thu, Sep 18, 2014 at 4:00 PM, Ben Pfaff  wrote:
> 
> > On Thu, Sep 18, 2014 at 11:37:08AM -0700, Alex Wang wrote:
> > > When ovs is running with large topology (e.g. large number of
> > > interfaces), the stats and status update to ovsdb become huge and
> > > normally require multiple run of ovsdb jsonrpc message processing
> > > loop to consume.  Also, this could cause jsonrpc messages backlogged
> > > in ovs.
> > >
> > > This commit adds a warning message to warn the excessive backlog
> > > for jsonrpc.
> > >
> > > Signed-off-by: Alex Wang 
> >
> > Is this the correct way to do it?  I would have thought that as long as
> > there are any backlogged stats messages, ovs-vswitchd should defer
> > sending any further stats updates until the backlog clears.
> >


Re: [ovs-dev] [PATCH v2] datapath-windows: Netlink command: vport dump

2014-09-18 Thread Nithin Raju
hi Samuel,
Thanks for sending out the changes.

Some high level comments are:
1. You might have to rebase your change when Eitan's change gets checked in. He 
has coalesced the declarations of the command handlers.
2. We'll have to get rid of the OvsDumpVportIoctl() sometime. We need not do it 
in this patch itself.

I have noted the comments inline. Pls. have a look.

On Sep 17, 2014, at 6:08 AM, Samuel Ghinet  
wrote:
> /* Netlink vport family. */
> +NETLINK_CMD nlVportFamilyCmdOps[] = {
> +{ OVS_VPORT_CMD_GET, OvsGetVportCmdHandler,
> +OVS_WRITE_DEV_OP | OVS_READ_DEV_OP, FALSE }

I am not 100% sure which is the correct approach, but other commands have 
aligned OVS_WRITE_DEV_OP with OVS_VPORT_CMD_GET.

Is the ValidateDpIndex FALSE? I'd think it is TRUE. Pls. see the following code 
in dpif-linux.c
dpif_linux_port_dump_start__(const struct dpif_linux *dpif,
 struct nl_dump *dump)
[...]
request.cmd = OVS_VPORT_CMD_GET; 
request.dp_ifindex = dpif->dp_ifindex;


> +/* Netlink vport family. */
> /* XXX: Add commands here. */

You can get rid of the two lines above. The formatting followed is:

/* Netlink XYS family. */




> @@ -966,6 +976,210 @@ OvsSetupDumpStart(POVS_USER_PARAMS_CONTEXT usrParamsCtx)
> }
> 
> /*
> +* XXX: When "dump done" and "error" netlink types will be implemented, we 
> will
> +* need a few more BuildMsgOut functions, which would ultimately call a 
> generic
> +* BuildMsgOutFromMsgIn.
> +*/
> +
> +static VOID
> +BuildMsgOutFromMsgIn(POVS_MESSAGE msgIn, POVS_MESSAGE msgOut, UINT16 flags)
> +{
> +msgOut->nlMsg.nlmsgType = msgIn->nlMsg.nlmsgType;
> +msgOut->nlMsg.nlmsgFlags = flags;
> +msgOut->nlMsg.nlmsgSeq = msgIn->nlMsg.nlmsgSeq;
> +msgOut->nlMsg.nlmsgPid = msgIn->nlMsg.nlmsgPid;
> +msgOut->nlMsg.nlmsgLen = sizeof *msgOut;
> +
> +msgOut->genlMsg.cmd = msgIn->genlMsg.cmd;
> +msgOut->genlMsg.version = nlDatapathFamilyOps.version;
> +msgOut->genlMsg.reserved = 0;
> +}
> +
> +static NTSTATUS
> +OvsCreateMsgFromVport(POVS_VPORT_ENTRY vport,
> +  POVS_MESSAGE msgOut,
> +  UINT32 maxPayloadSize)
> +{
> +NL_BUFFER nlBuffer;
> +OVS_VPORT_FULL_STATS vportStats;
> +
> +NlBufInit(&nlBuffer, (PCHAR)msgOut + sizeof (*msgOut), maxPayloadSize);

I think we should avoid initializing 'nlBuffer' each time and initialize it 
only once in the caller. Each call to OvsCreateMsgFromVport() should continue 
from the offset where it left off. The caller of OvsCreateMsgFromVport() should 
be responsible for writing out the OVS_MESSAGE, and OvsCreateMsgFromVport() 
should only write out the VPORT data.

> +
> +NlMsgPutTailU32(&nlBuffer, OVS_VPORT_ATTR_PORT_NO, vport->portNo);
> +NlMsgPutTailU32(&nlBuffer, OVS_VPORT_ATTR_TYPE, vport->ovsType);
> +
> +NlMsgPutTailString(&nlBuffer, OVS_VPORT_ATTR_NAME, vport->ovsName);
> +
> +/*
> +* XXX: when we implement OVS_DP_ATTR_USER_FEATURES in datapath,
> +* we'll need to check the OVS_DP_F_VPORT_PIDS flag: if it is set,
> +* it means we have an array of pids, instead of a single pid.
> +* ATM we assume we have one pid only.
> +*/

Multiple line comments should be formatted as:
/*
 *
 */

and not

/*
*
*
*/

s/ATM/Currently,


> +
> +NlMsgPutTailU32(&nlBuffer, OVS_VPORT_ATTR_UPCALL_PID, vport->upcallPid);
> +
> +/*stats*/
> +vportStats.rxPackets = vport->stats.rxPackets;
> +vportStats.rxBytes = vport->stats.rxBytes;
> +vportStats.txPackets = vport->stats.txPackets;
> +vportStats.txBytes = vport->stats.txBytes;
> +vportStats.rxErrors = vport->errStats.rxErrors;
> +vportStats.txErrors = vport->errStats.txErrors;
> +vportStats.rxDropped = vport->errStats.rxDropped;
> +vportStats.txDropped = vport->errStats.txDropped;
> +
> +NlMsgPutTailUnspec(&nlBuffer, OVS_VPORT_ATTR_STATS, (PCHAR)&vportStats,
> +  sizeof(OVS_VPORT_FULL_STATS));

How are you sure that there's enough space in the buffer? NlMsgPutTailUnspec() 
might return FALSE, and that failure needs to be translated into "end of this 
dump-next operation".

> +
> +/*
> +* XXX: when vxlan udp dest port becomes configurable, we will also need
> +* to add vport options
> +*/
> +
> +msgOut->nlMsg.nlmsgLen = NlBufSize(&nlBuffer) + sizeof(OVS_MESSAGE);

I think it is best if the caller of OvsCreateMsgFromVport() wrote out the 
OVS_MESSAGE like you are doing, and here you can just do the following. It is 
best if NlBufSize() includes the OVS_MESSAGE.
msgOut->nlMsg.nlmsgLen = NlBufSize(&nlBuffer);


> +
> +return STATUS_SUCCESS;
> +}
> +
> +static NTSTATUS
> +OvsGetVportDumpNext(POVS_USER_PARAMS_CONTEXT usrParamsCtx,
> +UINT32 *replyLen)
> +{
> +POVS_MESSAGE msgIn;
> +POVS_MESSAGE msgOut = (POVS_MESSAGE)usrParamsCtx->outputBuffer;
> +POVS_OPEN_INSTANCE instance =
> +(POVS_OPEN_INSTANCE)usrParamsCtx->ovsInstance;
> +LOCK_STATE_EX lockState;
> +UINT32 i = 0;
> +
> +/*
> 

Re: [ovs-dev] [PATCH] datapath-windows: fix OVS_VPORT_TYPE

2014-09-18 Thread Nithin Raju
>> Signed-off-by: Samuel Ghinet 
> 
> This didn't apply for me:
> 
>Applying: datapath-windows: fix OVS_VPORT_TYPE
>error: patch failed: datapath-windows/include/OvsDpInterfaceExt.h:80
>error: datapath-windows/include/OvsDpInterfaceExt.h: patch does not apply
>error: patch failed: datapath-windows/ovsext/Vport.h:93
>error: datapath-windows/ovsext/Vport.h: patch does not apply
>Patch failed at 0001 datapath-windows: fix OVS_VPORT_TYPE
>The copy of the patch that failed is found in:
>   /home/blp/nicira/ovs/.git/rebase-apply/patch
>When you have resolved this problem, run "git am --continue".
>If you prefer to skip this patch, run "git am --skip" instead.
>To restore the original branch and stop patching, run "git am --abort".

Sam,
This might be because there's the following code that got committed in 
OvsDpInterfaceExt.h:

83 typedef struct ovs_dp_stats OVS_DP_STATS; 

Rebasing your patch should fix the issue.

-- Nithin


Re: [ovs-dev] [PATCHv2 2/2] datapath: Use nla_parse_strict() for netlink parsing.

2014-09-18 Thread Pravin Shelar
On Wed, Sep 17, 2014 at 9:11 PM, Pravin Shelar  wrote:
> On Wed, Sep 17, 2014 at 9:02 PM, Joe Stringer  wrote:
>>
>>
>> On 18 September 2014 13:02, Joe Stringer  wrote:


 > @@ -229,17 +244,19 @@ static bool match_validate(const struct
 > sw_flow_match *match,
 > }
 > }
 >
 > -   if ((key_attrs & key_expected) != key_expected) {
 > +   attrs = attrs_to_bitmask(key_attrs, OVS_KEY_ATTR_MAX);
 > +   if ((attrs & key_expected) != attrs) {
 > /* Key attributes check failed. */
 > OVS_NLERR("Missing expected key attributes
 > (key_attrs=%llx, expected=%llx).\n",
 > -   (unsigned long long)key_attrs,
 > (unsigned long long)key_expected);
 > +   (unsigned long long)attrs, (unsigned
 > long long)key_expected);
 > return false;
 > }
>>
>>
>> I also accidentally changed the key_attrs comparison line, which I'll roll
>> in a fix for.
>>

 

 > @@ -611,30 +569,31 @@ int ovs_nla_put_egress_tunnel_key(struct sk_buff
 > *skb,
 > egress_tun_info->options_len);
 >  }
 >
 > -static int metadata_from_nlattrs(struct sw_flow_match *match,  u64
 > *attrs,
 > +static int metadata_from_nlattrs(struct sw_flow_match *match,
 >  const struct nlattr **a, bool is_mask)
 >  {
 > -   if (*attrs & (1ULL << OVS_KEY_ATTR_DP_HASH)) {
 > +   if (a[OVS_KEY_ATTR_DP_HASH]) {
 > u32 hash_val = nla_get_u32(a[OVS_KEY_ATTR_DP_HASH]);
 >
 > SW_FLOW_KEY_PUT(match, ovs_flow_hash, hash_val,
 > is_mask);
 > -   *attrs &= ~(1ULL << OVS_KEY_ATTR_DP_HASH);
 > +   a[OVS_KEY_ATTR_DP_HASH] = NULL;
 > }
 If you change a[], then match_validate() does not work.
>>>
>>>
>>> I thought that I was misunderstanding something here, now I see that
>>> ovs_key_from_nlattrs() doesn't modify the caller's version of the attrs
>>> bitmask on master.
>>>
>>> Would you prefer that the ovs_key_from_nlattrs() function made a local
>>> copy of 'a' and use that here or just convert it to a bitmask and keep
>>> metadata_from_nlattrs() unchanged from master?
>>>
>>
>> Actually, as far as I can tell, the logic below is the only reason we are
>> changing 'a' in metadata_from_nlattrs(). We can drop all of the a[foo] =
>> NULL and it all becomes simpler.
>>
>>
> Sounds good to me. But I would like to take opinion from Andy.
>

I had offline chat with Andy. He is ok with it.


Re: [ovs-dev] [PATCH] datapath-windows: Handle NBLs with multiple NBs

2014-09-18 Thread Nithin Raju
hi Sam,
Pls. find my replies inline.

On Sep 14, 2014, at 9:12 PM, Samuel Ghinet 
 wrote:

> Hello Nithin,
> 
> Thanks a lot again for the review!

Welcome. As you said, the code is intricate. It is fine though. Thanks for 
working on this one.

> -
>> More than that I was a little concerned about the changes in OvsTunnelTx to 
>> split a TSOed NBL (with multiple NBs) into multiple smaller NBLs. I believe 
>> this would have a performance impact (when we measure it).
> 
>> The reason why we did not call OvsDoFlowLookupOutput() on each NB earlier is 
>> that we know all of the NBs have the same L2, L3 and L4 header. The fields 
>> that are different in the L3 and L4 headers are not part of the flow key, so 
>> there's no reason to do a separate flow lookup and actions execute. It is an 
>> optimization we had, and there was no correctness issue with that 
>> optimization.
> 
>> But, I understand that this does not suit the change you want to make where 
>> you want to update the pipeline to process one NB at a time rather than NBL 
>> at a time. Can I ask what is the advantage of that change? This change can 
>> potentially cause bad performance.
> There was an email I had sent about this, long ago.
> And the answer is that MS, in its doc, makes very few guarantees about the 
> things shared in common by the NBs in a NBL.
> Namely, we do have L2 the same. Yes.
> L3? we don't. We have here only the protocol type the same. If the packet is 
> TCP or UDP, we also have src ip and src destination the same, and src ports 
> and dest ports the same. But that's quite it.
> There are a lot of fields in the packet info / flow keys regarding L3 and L4 
> headers that the MS documentation does not give any guarantee about.
> We could theoretically assume, for instance, that if we have NBs of a tcp 
> connection, we would generally treat them the same. But practically, we can 
> apply a 'set action' if tcp flags = X (which may be for only one of the NBs 
> in the NBL), or if it is the first ipv4 / ipv6 fragment in a series or not, 
> etc. (which may all be NBs in the same NBL).
> 
> An optimized version of the "NBs-> NBLs" would have to do advanced flow match 
> checks, where it knows that say, tcp flags = wildcard, the same with others, 
> and it knows it needs not check flow match for the NBs that follow in the 
> NBL. 

Sam,
One of us is missing something here. Let me clarify what I meant to say.

VM sends a TSO packet which is, say, 20k, and MSS is 1k. Let's assume that the 
entire 20k is in one NB, i.e. the NBL has only 1 NB. Such a packet would be subject to 
tunneling in OvsDoEncapVxlan(). OvsDoEncapVxlan() calls into 
OvsTcpSegmentNBL(), which creates 20 NBs. Each of the NBs is guaranteed to have 
the same L2, L3 and L4, except for the following differences:
1. TCP sequence numbers
2. TCP length
3. TCP checksum

None of these fields that are different between the different NBs are part of 
the flow key. So, we don't need to do a separate flow lookup for each of them.

Furthermore, it is not NDIS that is creating such an NBL, but OVS itself, i.e. 
the function OvsTcpSegmentNBL(). So, it is guaranteed that all the NBs fall into 
the same flow key.

> We could have an IRC chat on this, if you think it could use more 
> clarifications.

Sure, we could chat more on IRC about this.

> 
>>> +atDispatch = NDIS_TEST_SEND_AT_DISPATCH_LEVEL(sendFlags) ?
>>> +NDIS_RWL_AT_DISPATCH_LEVEL : 0;
>> 
>> If there's some way to check if the 'switchContext->dispatchLock' is already 
>> taken, we should do it. At a minimum, we can ASSERT if we are already at 
>> DISPATCH level. I'm implying that the DPC level would have been upgraded to 
>> DISPATCH_LEVEL because we have taken the 'switchContext->dispatchLock'.
> Actually, we should be at DPC level, because OvsProcessOneNb is called by the 
> NDIS Send callback, which is always called (from observation) at DPC level.

Right, we can assume that the NDIS Send callback happens at DISPATCH_LEVEL; or, 
since we hold a spin lock, we can definitely be sure that we are at 
DISPATCH_LEVEL. So, we don't need 'atDispatch'.


>>> +ASSERT(KeGetCurrentIrql() == DISPATCH_LEVEL);
>> 
>> Cool, we have this ASSERT. Let's assume that we are at DISPATCH_LEVEL then. 
>> We don't need the 'atDispatch' variable.
> Actually, I think I should remove this ASSERT instead. Even though, from 
> observation, the NDIS Send callback is always called at dispatch level: they 
> give us a flag, and I think it is better to use that only.

Yes, that and also we hold a spinlock already.

>>> +newNbl = OvsSplitNblByNB(origNbl, switchContext, TRUE);
>>> +if (!newNbl) {
>>> +RtlInitUnicodeString(&filterReason, L"Cannot allocate new NBL: 
>>> partial"
>>> + L"copy NB to multiple NBLs.");
>>> +status = NDIS_STATUS_RESOURCES;
>>> +goto Cleanup;
>>> +}
>> 
>> Jumping to 'Cleanup' here or from code down the line would add 'origNbl' 

[ovs-dev] [PATCH] netdev-dpdk: Pass queue id to dpdk_do_tx_copy().

2014-09-18 Thread Alex Wang
Since dpdk_do_tx_copy() will be called by both pmd and
non-pmd thread, it should take the queue id as input.
The current ovs always uses NON_PMD_THREAD_TX_QUEUE
as queue id, which causes unprotected multi-access
to the same queue.

This commit fixes the issue by passing the queue id
to dpdk_do_tx_copy().

Reported-by: Ethan Jackson 
Signed-off-by: Alex Wang 
---
 lib/netdev-dpdk.c |   11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 1e12e8b..f2a42e8 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -70,8 +70,6 @@ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 
20);
 #define MP_CACHE_SZ  (256 * 2)
 #define SOCKET0  0
 
-#define NON_PMD_THREAD_TX_QUEUE 0
-
 #define NIC_PORT_RX_Q_SIZE 2048  /* Size of Physical NIC RX Queue, Max 
(n+32<=4096)*/
 #define NIC_PORT_TX_Q_SIZE 2048  /* Size of Physical NIC TX Queue, Max 
(n+32<=4096)*/
 
@@ -783,7 +781,8 @@ dpdk_queue_pkts(struct netdev_dpdk *dev, int qid,
 
 /* Tx function. Transmit packets indefinitely */
 static void
-dpdk_do_tx_copy(struct netdev *netdev, struct dpif_packet ** pkts, int cnt)
+dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dpif_packet ** pkts,
+int cnt)
 OVS_NO_THREAD_SAFETY_ANALYSIS
 {
 struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
@@ -832,8 +831,8 @@ dpdk_do_tx_copy(struct netdev *netdev, struct dpif_packet 
** pkts, int cnt)
 ovs_mutex_unlock(&dev->mutex);
 }
 
-dpdk_queue_pkts(dev, NON_PMD_THREAD_TX_QUEUE, mbufs, newcnt);
-dpdk_queue_flush(dev, NON_PMD_THREAD_TX_QUEUE);
+dpdk_queue_pkts(dev, qid, mbufs, newcnt);
+dpdk_queue_flush(dev, qid);
 
 if (!thread_is_pmd()) {
 ovs_mutex_unlock(&nonpmd_mempool_mutex);
@@ -849,7 +848,7 @@ netdev_dpdk_send(struct netdev *netdev, int qid, struct 
dpif_packet **pkts,
 int i;
 
 if (!may_steal || pkts[0]->ofpbuf.source != OFPBUF_DPDK) {
-dpdk_do_tx_copy(netdev, pkts, cnt);
+dpdk_do_tx_copy(netdev, qid, pkts, cnt);
 
 if (may_steal) {
 for (i = 0; i < cnt; i++) {
-- 
1.7.9.5



Re: [ovs-dev] [PATCH] netdev-dpdk: Pass queue id to dpdk_do_tx_copy().

2014-09-18 Thread Daniele Di Proietto
Thanks for the fix Alex!

Acked-by: Daniele Di Proietto 

On 9/18/14, 5:09 PM, "Alex Wang"  wrote:

>Since dpdk_do_tx_copy() will be called by both pmd and
>non-pmd thread, it should take the queue id as input.
>The current ovs always uses NON_PMD_THREAD_TX_QUEUE
>as queue id, which causes unprotected multi-access
>to the same queue.
>
>This commit fixes the issue by passing the queue id
>to dpdk_do_tx_copy().
>
>Reported-by: Ethan Jackson 
>Signed-off-by: Alex Wang 
>---
> lib/netdev-dpdk.c |   11 +--
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
>diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>index 1e12e8b..f2a42e8 100644
>--- a/lib/netdev-dpdk.c
>+++ b/lib/netdev-dpdk.c
>@@ -70,8 +70,6 @@ static struct vlog_rate_limit rl =
>VLOG_RATE_LIMIT_INIT(5, 20);
> #define MP_CACHE_SZ  (256 * 2)
> #define SOCKET0  0
> 
>-#define NON_PMD_THREAD_TX_QUEUE 0
>-
> #define NIC_PORT_RX_Q_SIZE 2048  /* Size of Physical NIC RX Queue, Max
>(n+32<=4096)*/
> #define NIC_PORT_TX_Q_SIZE 2048  /* Size of Physical NIC TX Queue, Max
>(n+32<=4096)*/
> 
>@@ -783,7 +781,8 @@ dpdk_queue_pkts(struct netdev_dpdk *dev, int qid,
> 
> /* Tx function. Transmit packets indefinitely */
> static void
>-dpdk_do_tx_copy(struct netdev *netdev, struct dpif_packet ** pkts, int
>cnt)
>+dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dpif_packet **
>pkts,
>+int cnt)
> OVS_NO_THREAD_SAFETY_ANALYSIS
> {
> struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>@@ -832,8 +831,8 @@ dpdk_do_tx_copy(struct netdev *netdev, struct
>dpif_packet ** pkts, int cnt)
> ovs_mutex_unlock(&dev->mutex);
> }
> 
>-dpdk_queue_pkts(dev, NON_PMD_THREAD_TX_QUEUE, mbufs, newcnt);
>-dpdk_queue_flush(dev, NON_PMD_THREAD_TX_QUEUE);
>+dpdk_queue_pkts(dev, qid, mbufs, newcnt);
>+dpdk_queue_flush(dev, qid);
> 
> if (!thread_is_pmd()) {
> ovs_mutex_unlock(&nonpmd_mempool_mutex);
>@@ -849,7 +848,7 @@ netdev_dpdk_send(struct netdev *netdev, int qid,
>struct dpif_packet **pkts,
> int i;
> 
> if (!may_steal || pkts[0]->ofpbuf.source != OFPBUF_DPDK) {
>-dpdk_do_tx_copy(netdev, pkts, cnt);
>+dpdk_do_tx_copy(netdev, qid, pkts, cnt);
> 
> if (may_steal) {
> for (i = 0; i < cnt; i++) {
>-- 
>1.7.9.5
>



Re: [ovs-dev] [PATCH] netdev-dpdk: Pass queue id to dpdk_do_tx_copy().

2014-09-18 Thread Alex Wang
Thx a lot,

Applied to master!

On Thu, Sep 18, 2014 at 5:21 PM, Daniele Di Proietto  wrote:

> Thanks for the fix Alex!
>
> Acked-by: Daniele Di Proietto 
>
> On 9/18/14, 5:09 PM, "Alex Wang"  wrote:
>
> >Since dpdk_do_tx_copy() will be called by both pmd and
> >non-pmd thread, it should take the queue id as input.
> >The current ovs always uses NON_PMD_THREAD_TX_QUEUE
> >as queue id, which causes unprotected multi-access
> >to the same queue.
> >
> >This commit fixes the issue by passing the queue id
> >to dpdk_do_tx_copy().
> >
> >Reported-by: Ethan Jackson 
> >Signed-off-by: Alex Wang 
> >---
> > lib/netdev-dpdk.c |   11 +--
> > 1 file changed, 5 insertions(+), 6 deletions(-)
> >
> >diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> >index 1e12e8b..f2a42e8 100644
> >--- a/lib/netdev-dpdk.c
> >+++ b/lib/netdev-dpdk.c
> >@@ -70,8 +70,6 @@ static struct vlog_rate_limit rl =
> >VLOG_RATE_LIMIT_INIT(5, 20);
> > #define MP_CACHE_SZ  (256 * 2)
> > #define SOCKET0  0
> >
> >-#define NON_PMD_THREAD_TX_QUEUE 0
> >-
> > #define NIC_PORT_RX_Q_SIZE 2048  /* Size of Physical NIC RX Queue, Max
> >(n+32<=4096)*/
> > #define NIC_PORT_TX_Q_SIZE 2048  /* Size of Physical NIC TX Queue, Max
> >(n+32<=4096)*/
> >
> >@@ -783,7 +781,8 @@ dpdk_queue_pkts(struct netdev_dpdk *dev, int qid,
> >
> > /* Tx function. Transmit packets indefinitely */
> > static void
> >-dpdk_do_tx_copy(struct netdev *netdev, struct dpif_packet ** pkts, int
> >cnt)
> >+dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dpif_packet **
> >pkts,
> >+int cnt)
> > OVS_NO_THREAD_SAFETY_ANALYSIS
> > {
> > struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> >@@ -832,8 +831,8 @@ dpdk_do_tx_copy(struct netdev *netdev, struct
> >dpif_packet ** pkts, int cnt)
> > ovs_mutex_unlock(&dev->mutex);
> > }
> >
> >-dpdk_queue_pkts(dev, NON_PMD_THREAD_TX_QUEUE, mbufs, newcnt);
> >-dpdk_queue_flush(dev, NON_PMD_THREAD_TX_QUEUE);
> >+dpdk_queue_pkts(dev, qid, mbufs, newcnt);
> >+dpdk_queue_flush(dev, qid);
> >
> > if (!thread_is_pmd()) {
> > ovs_mutex_unlock(&nonpmd_mempool_mutex);
> >@@ -849,7 +848,7 @@ netdev_dpdk_send(struct netdev *netdev, int qid,
> >struct dpif_packet **pkts,
> > int i;
> >
> > if (!may_steal || pkts[0]->ofpbuf.source != OFPBUF_DPDK) {
> >-dpdk_do_tx_copy(netdev, pkts, cnt);
> >+dpdk_do_tx_copy(netdev, qid, pkts, cnt);
> >
> > if (may_steal) {
> > for (i = 0; i < cnt; i++) {
> >--
> >1.7.9.5
> >
>
>


[ovs-dev] [PATCH] ovs-pki: Use SHA-512 instead of MD5 as message digest.

2014-09-18 Thread Ben Pfaff
This fixes numerous testsuite failures of the form "SSL_connect:
error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message
digest algorithm" on systems that disable MD5 in OpenSSL.  CentOS 7 is one
example.  Presumably it increases security as well for anyone who generates
certificates based on a new configuration created by the new ovs-pki.

Reported-by: Robert Strickler 
Signed-off-by: Ben Pfaff 
---
 AUTHORS  | 1 +
 NEWS | 3 +++
 utilities/ovs-pki.in | 4 ++--
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/AUTHORS b/AUTHORS
index e3fe7ba..47bbd82 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -268,6 +268,7 @@ Ralf Heiringhoffr...@frosty-geek.net
 Ram Jothikumar  rjothiku...@nicira.com
 Ramana Reddygtvrre...@gmail.com
 Rob Sherwoodrob.sherw...@bigswitch.com
+Robert Strickleranomal...@gmail.com
 Roger Leigh rle...@codelibre.net
 Rogério Vinhal Nunes
 Roman Sokolkov  rsokol...@gmail.com
diff --git a/NEWS b/NEWS
index 6cbb315..f9ea90f 100644
--- a/NEWS
+++ b/NEWS
@@ -20,6 +20,9 @@ Post-v2.3.0
  * "resubmit" actions may now be included in action sets.  The resubmit
is executed last, and only if the action set has no "output" or "group"
action.
+   - ovs-pki: Changed message digest algorithm from MD5 to SHA-512 because
+ MD5 is no longer secure and some operating systems have started to disable
+ it in OpenSSL.
- ovsdb-server: New OVSDB protocol extension allows inequality tests on
  "optional scalar" columns.  See ovsdb-server(1) for details.
- test-controller has been renamed ovs-testcontroller at request of users
diff --git a/utilities/ovs-pki.in b/utilities/ovs-pki.in
index 6081a5e..8745355 100755
--- a/utilities/ovs-pki.in
+++ b/utilities/ovs-pki.in
@@ -1,6 +1,6 @@
 #! /bin/sh
 
-# Copyright (c) 2008, 2009, 2010, 2011, 2012, 2013 Nicira, Inc.
+# Copyright (c) 2008, 2009, 2010, 2011, 2012, 2013, 2014 Nicira, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -274,7 +274,7 @@ private_key= $dir/private/cakey.pem# CA private key
 RANDFILE   = $dir/private/.rand# random number file
 default_days   = 3650  # how long to certify for
 default_crl_days= 30   # how long before next CRL
-default_md = md5   # md to use
+default_md = sha512# md to use
 policy = policy# default policy
 email_in_dn= no# Don't add the email into cert DN
 name_opt   = ca_default# Subject name display option
-- 
1.9.1
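One way to sanity-check the effect of the default_md change (the file paths and CN below are made up for the example): generate a throwaway self-signed certificate with SHA-512 and inspect its signature algorithm, which is what certificates produced under the new configuration should report.

```shell
# Generate a disposable key + self-signed cert using SHA-512 as the digest.
openssl req -x509 -newkey rsa:2048 -nodes -sha512 \
    -keyout /tmp/test-key.pem -out /tmp/test-cert.pem \
    -days 1 -subj "/CN=sha512-test"

# Confirm the digest; should report sha512WithRSAEncryption rather than
# the md5WithRSAEncryption an old ovs-pki configuration would produce.
openssl x509 -in /tmp/test-cert.pem -noout -text | grep "Signature Algorithm"
```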



Re: [ovs-dev] [PATCH] datapath: Remove pkt_key from OVS_CB.

2014-09-18 Thread Andy Zhou
look good in general. a few small comments in line.

Acked-by: Andy Zhou 

On Wed, Sep 17, 2014 at 6:58 PM, Pravin B Shelar  wrote:
> OVS keeps a pointer to the packet key in skb->cb, but the packet key is
> stored on the stack. This could make the code a bit tricky. So it is better to
> get rid of the pointer.
>
> Signed-off-by: Pravin B Shelar 
> ---
>  datapath/actions.c| 294 
> +++---
>  datapath/datapath.c   |  35 +++---
>  datapath/datapath.h   |   8 +-
>  datapath/flow.c   |   1 -
>  datapath/vport-lisp.c |  22 ++--
>  datapath/vport.c  |   2 +-
>  6 files changed, 144 insertions(+), 218 deletions(-)
>
> diff --git a/datapath/actions.c b/datapath/actions.c
> index 8d18848..0fc32d6 100644
> --- a/datapath/actions.c
> +++ b/datapath/actions.c
> @@ -40,6 +40,10 @@
>  #include "vlan.h"
>  #include "vport.h"
>
> +static int do_execute_actions(struct datapath *dp, struct sk_buff *skb,
> + struct sw_flow_key *key,
> + const struct nlattr *attr, int len);
> +
>  struct deferred_action {
> struct sk_buff *skb;
> const struct nlattr *actions;
> @@ -72,8 +76,7 @@ static bool action_fifo_is_empty(struct action_fifo *fifo)
> return (fifo->head == fifo->tail);
>  }
>
> -static struct deferred_action *
> -action_fifo_get(struct action_fifo *fifo)
> +static struct deferred_action *action_fifo_get(struct action_fifo *fifo)
>  {
> if (action_fifo_is_empty(fifo))
> return NULL;
> @@ -81,8 +84,7 @@ action_fifo_get(struct action_fifo *fifo)
> return &fifo->fifo[fifo->tail++];
>  }
>
> -static struct deferred_action *
> -action_fifo_put(struct action_fifo *fifo)
> +static struct deferred_action *action_fifo_put(struct action_fifo *fifo)
>  {
> if (fifo->head >= DEFERRED_ACTION_FIFO_SIZE - 1)
> return NULL;
> @@ -90,15 +92,10 @@ action_fifo_put(struct action_fifo *fifo)
> return &fifo->fifo[fifo->head++];
>  }
>
> -static void flow_key_clone(struct sk_buff *skb, struct sw_flow_key *new_key)
> -{
> -   *new_key = *OVS_CB(skb)->pkt_key;
> -   OVS_CB(skb)->pkt_key = new_key;
> -}
> -
>  /* Return true if fifo is not full */
> -static bool add_deferred_actions(struct sk_buff *skb,
> -const struct nlattr *attr)
> +static struct deferred_action *add_deferred_actions(struct sk_buff *skb,
> +   struct sw_flow_key *key,
> +   const struct nlattr *attr)
The comment above ("Return true if fifo is not full") needs to be updated as well, since the function now returns a pointer rather than a bool.
>  {
> struct action_fifo *fifo;
> struct deferred_action *da;
> @@ -108,109 +105,22 @@ static bool add_deferred_actions(struct sk_buff *skb,
> if (da) {
> da->skb = skb;
> da->actions = attr;
> -   flow_key_clone(skb, &da->pkt_key);
> +   da->pkt_key = *key;
> }
>
> -   return (da != NULL);
> -}
> -
> -static void flow_key_set_recirc_id(struct sk_buff *skb, u32 recirc_id)
> -{
> -   OVS_CB(skb)->pkt_key->recirc_id = recirc_id;
> -}
> -
> -static void flow_key_set_priority(struct sk_buff *skb, u32 priority)
> -{
> -   OVS_CB(skb)->pkt_key->phy.priority = priority;
> -}
> -
> -static void flow_key_set_skb_mark(struct sk_buff *skb, u32 skb_mark)
> -{
> -   OVS_CB(skb)->pkt_key->phy.skb_mark = skb_mark;
> -}
> -
> -static void flow_key_set_eth_src(struct sk_buff *skb, const u8 addr[])
> -{
> -   ether_addr_copy(OVS_CB(skb)->pkt_key->eth.src, addr);
> -}
> -
> -static void flow_key_set_eth_dst(struct sk_buff *skb, const u8 addr[])
> -{
> -   ether_addr_copy(OVS_CB(skb)->pkt_key->eth.dst, addr);
> -}
> -
> -static void flow_key_set_vlan_tci(struct sk_buff *skb, __be16 tci)
> -{
> -   OVS_CB(skb)->pkt_key->eth.tci = tci;
> -}
> -
> -static void flow_key_set_mpls_top_lse(struct sk_buff *skb, __be32 top_lse)
> -{
> -   OVS_CB(skb)->pkt_key->mpls.top_lse = top_lse;
> -}
> -
> -static void flow_key_set_ipv4_src(struct sk_buff *skb, __be32 addr)
> -{
> -   OVS_CB(skb)->pkt_key->ipv4.addr.src = addr;
> -}
> -
> -static void flow_key_set_ipv4_dst(struct sk_buff *skb, __be32 addr)
> -{
> -   OVS_CB(skb)->pkt_key->ipv4.addr.src = addr;
> -}
> -
> -static void flow_key_set_ip_tos(struct sk_buff *skb, u8 tos)
> -{
> -   OVS_CB(skb)->pkt_key->ip.tos = tos;
> -}
> -
> -static void flow_key_set_ip_ttl(struct sk_buff *skb, u8 ttl)
> -{
> -   OVS_CB(skb)->pkt_key->ip.ttl = ttl;
> -}
> -
> -static void flow_key_set_ipv6_src(struct sk_buff *skb,
> - const __be32 addr[4])
> -{
> -   memcpy(&OVS_CB(skb)->pkt_key->ipv6.addr.src, addr, sizeof(__be32[4]));
> -}
> -
> -static void flow_key_set_ipv6_dst(struct sk_buff *skb,
> - const __be32 addr[4])
> -{
> -   memcpy(&OVS_CB(skb)->pkt_key->ipv6.addr.dst, addr, sizeof(__be32[4]));
> -}
> -
> -static void flo