Re: [dpdk-dev] [PATCH 4/4] examples/vhost: fix uninitialized desc indexes

2017-06-03 Thread Maxime Coquelin



On 06/02/2017 01:20 PM, Jerin Jacob wrote:

Fixing the below error by returning from the function early
when count == 0.

Issue flagged by GCC 7.1.1

examples/vhost/virtio_net.c:370:38: error: ‘desc_indexes[0]’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
   rte_prefetch0(&vr->desc[desc_indexes[0]]);

Fixes: ca059fa5e290 ("examples/vhost: demonstrate the new generic APIs")

Cc: sta...@dpdk.org
Signed-off-by: Jerin Jacob 


Reviewed-by: Maxime Coquelin 


---
  examples/vhost/virtio_net.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
index cc2c3d882..5e1ed44a5 100644
--- a/examples/vhost/virtio_net.c
+++ b/examples/vhost/virtio_net.c
@@ -350,6 +350,9 @@ vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 	count = RTE_MIN(count, MAX_PKT_BURST);
 	count = RTE_MIN(count, free_entries);
 
+	if (unlikely(count == 0))
+		return 0;
+
 	/*
 	 * Retrieve all of the head indexes first and pre-update used entries
 	 * to avoid caching issues.
@@ -385,8 +388,6 @@ vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 		}
 	}
 
-	if (!i)
-		return 0;
 
 	queue->last_avail_idx += i;
 	queue->last_used_idx += i;



Thanks,
Maxime


Re: [dpdk-dev] [RFC] proposal of allowing personal/project repos on DPDK.org

2017-06-03 Thread Manoj Mallawaarachchi
Dear All,

I hope GitHub is the best place, as that is what many people already use.

On Fri, 6/2/17, Dumitrescu, Cristian  wrote:

 Subject: Re: [dpdk-dev] [RFC] proposal of allowing personal/project repos on DPDK.org
 To: "Thomas Monjalon" , "Bie, Tiwei" 
 Cc: "dev@dpdk.org" , "Richardson, Bruce" 
, "yuanhan@linux.intel.com" 
, "Ananyev, Konstantin" 

 Date: Friday, June 2, 2017, 10:37 PM
 
 > Why? Do you need to host a repo on dpdk.org to try a new idea?
 > 
 
 Prototyping new DPDK-related
 ideas and sharing them with the DPDK community, with some of
 them likely to eventually make their way into DPDK once
 accepted and mature enough.
 
 
 > I am against adding some
 user repos in this list:
 >     http://dpdk.org/browse/
 > I think the list of official repos must be
 kept light for good visibility.
 
 We could have a single project called sandbox
 mentioned in this list; whoever is interested can drill
 down into it?
 
 > 
 > But we can imagine a forge for users at a
 different location like
 >     http://dpdk.org/users/
 > However, why not use another public forge
 for this need?
 
 It is easier to share DPDK-related ideas on dpdk.org
 than in other places.
 
 


Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation

2017-06-03 Thread Ananyev, Konstantin


> 
> The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members of
> struct rte_ring are 128-byte aligned, and therefore the whole struct needs
> 128-byte alignment according to the ABI, so that the 128-byte alignment of
> the fields can be guaranteed.

Ah ok, missed the fact that rte_ring is 128B aligned these days.
BTW, I probably missed the initial discussion, but what was the reason for that?
Konstantin

> 
> If the allocation is only 64-byte aligned, the beginning of the prod and cons 
> fields may not actually be 128-byte aligned (but we've told the
> compiler that they are using the __rte_aligned macro).  Accessing these 
> fields when they are misaligned will work in practice on x86 (as long
> as the compiler doesn't use e.g. aligned SSE instructions), but it is 
> undefined behavior according to the C standard, and UBSan (-
> fsanitize=undefined) checks for this.
> 
> Thanks,
> -- Daniel Verkamp
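
To make the quoted alignment argument concrete, here is a small standalone
sketch. The struct names are hypothetical (not the real rte_ring layout);
the point is only that a type with 128-byte aligned members needs an
allocation aligned to the type, which is what the patch's
rte_memzone_reserve_aligned(..., __alignof__(*r)) call provides.

/*
 * Illustration only -- hypothetical types, not the real rte_ring layout.
 * A struct whose members are 128-byte aligned is itself 128-byte aligned,
 * so a 64-byte (cache-line) aligned allocation is not sufficient for it.
 */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct headtail {
	alignas(128) volatile unsigned int head;  /* stand-in for prod/cons */
	volatile unsigned int tail;
};

struct ring_like {
	struct headtail prod;
	struct headtail cons;
};

int main(void)
{
	/*
	 * alignof(struct ring_like) == 128 here, so the allocation must
	 * honour it; aligned_alloc() plays the role that
	 * rte_memzone_reserve_aligned() plays in the patch.
	 */
	struct ring_like *r = aligned_alloc(alignof(struct ring_like), sizeof(*r));

	if (r == NULL)
		return 1;
	printf("alignof = %zu, addr %% 128 = %ju\n",
	       alignof(struct ring_like), (uintmax_t)((uintptr_t)r % 128));
	free(r);
	return 0;
}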
> 
> > -Original Message-
> > From: Ananyev, Konstantin
> > Sent: Friday, June 2, 2017 1:52 PM
> > To: Verkamp, Daniel ; dev@dpdk.org
> > Cc: Verkamp, Daniel 
> > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> >
> >
> > > -Original Message-
> > > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Daniel Verkamp
> > > Sent: Friday, June 2, 2017 9:12 PM
> > > To: dev@dpdk.org
> > > Cc: Verkamp, Daniel 
> > > Subject: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > > rte_memzone_reserve() provides cache line alignment, but
> > > struct rte_ring may require more than cache line alignment: on x86-64,
> > > it needs 128-byte alignment due to PROD_ALIGN and CONS_ALIGN, which are
> > > 128 bytes, but cache line size is 64 bytes.
> >
> > Hmm but what for?
> > I understand we need our rte_ring cache-line aligned,
> > but why do you want it 2 cache-line aligned?
> > Konstantin
> >
> > >
> > > Fixes runtime warnings with UBSan enabled.
> > >
> > > Fixes: d9f0d3a1ffd4 ("ring: remove split cacheline build setting")
> > >
> > > Signed-off-by: Daniel Verkamp 
> > > ---
> > >
> > > v2: fixed checkpatch warnings
> > >
> > >  lib/librte_ring/rte_ring.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> > > index 5f98c33..6f58faf 100644
> > > --- a/lib/librte_ring/rte_ring.c
> > > +++ b/lib/librte_ring/rte_ring.c
> > > @@ -189,7 +189,8 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
> > >   /* reserve a memory zone for this ring. If we can't get rte_config or
> > >* we are secondary process, the memzone_reserve function will set
> > >* rte_errno for us appropriately - hence no check in this this function */
> > > - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > + mz = rte_memzone_reserve_aligned(mz_name, ring_size, socket_id,
> > > +  mz_flags, __alignof__(*r));
> > >   if (mz != NULL) {
> > >   r = mz->addr;
> > >   /* no need to check return value here, we already checked the
> > > --
> > > 2.9.4



Re: [dpdk-dev] [RFCv2] service core concept

2017-06-03 Thread Ananyev, Konstantin

Hi Harry,

> >
> > Thanks everybody for the input on the RFC - appreciated! From an 
> > application point-of-view, I
> > see merit in Konstantin's suggestions for the API, over the RFC I sent 
> > previously. I will re-
> > work the API taking inspiration from both APIs and send an RFCv2, you'll be 
> > on CC :)
> 
> 
> Hi All,
> 
> This email is the v2 RFC for Service Cores, in response to the v1 sent 
> previously[1].
> The service-cores API has been updated, and various concepts have changed.
> 
> The API has been redesigned, with Konstantin's API suggestions as a base, and 
> the
> other comments taken into account. In my opinion this v2 API is more 
> application-centric,
> and enables the common application tasks much better. Such tasks are for 
> example start/stop
> of a service, and add/remove of a service core.
> 
> In particular this version of the API enables applications that are not aware 
> of services to
> benefit from the services concept, as EAL args can be used to setup services 
> and service cores.
> With this design, switching to/from SW/HW PMD is transparent to the 
> application. An example
> use-case is the Eventdev HW PMD to Eventdev SW PMD that requires a service 
> core.
> 
> I have noted the implementation comments that were raised on the v1. For v2, 
> I think our time
> is better spent looking at the API design, and I will handle implementation 
> feedback in the
> follow-up patchset to v2 RFC.
> 
> Below a summary of what we are trying to achieve, and the current API design.
> Have a good weekend! Cheers, -Harry

Looks good to me in general.
The only comment I have - do we really need to put it into rte_eal_init()
and a new EAL command-line parameter for it? 
Might be better to leave it to the particular app to decide.
Konstantin

> 
> 
> Summary of goals (summarized from a previous email)
> 1. Allowing libraries and drivers to register the fact that they require 
> background processing
> 2. Providing support for easily multiplexing these independent functions from 
> different libs onto a different core
> 3. Providing support for the application to configure the running of these 
> background services on specific cores
> 4. Once configured, hiding these services and the cores they run on from the 
> rest of the application,
>so that the rest of the app logic does not need to change depending on 
> whether service cores are in use or not
> 
> 
> === RFC v2 API ===
> 
> There are 3 parts to the API; they separate the concerns of each "user" of 
> the API:
>   - Service Registration
>   - Service Config
>   - ServiceCore Config
> 
> Service Registration:
> A service requires a core. It only knows about its NUMA node, and that it 
> requires CPU time.
> Registration of a service requires only that information.
> 
> Service Config:
> An application may configure what service runs where using the Service Config 
> APIs.
> EAL is capable of performing this during rte_eal_init() if requested by 
> passing a
> --service-cores argument. The application (mostly) calls these functions at 
> initialization
> time, with start() and stop() being available to dynamically switch on/off a 
> service if required.
> 
> ServiceCore Config
> An application can start/stop or add/remove service-lcores using the 
> ServiceCore Config, allowing
> dynamically scaling the number of used lcores by services. Lcores used as 
> service-cores are removed
> from the application coremask, and are not available to remote_launch() as 
> they are already in use.
> 
> 
> Service Registration:
> int32_t rte_service_register(const struct rte_service_spec *spec);
> int32_t rte_service_unregister(struct rte_service_spec *service);
> 
> Service Configuration:
> uint32_t rte_service_get_count(void);
> struct rte_service_spec *rte_service_get_by_id(uint32_t id);
> const char *rte_service_get_name(const struct rte_service_spec *service);
> int32_t rte_service_set_coremask(struct rte_service_spec *service,
>		const rte_cpuset_t *coremask);
> int32_t rte_service_start(struct rte_service_spec *service); /* runtime function */
> int32_t rte_service_stop(struct rte_service_spec *service);  /* runtime function */
> 
> ServiceCore Configuration:
> int32_t rte_service_cores_start(void);
> int32_t rte_service_cores_stop(void);
> int32_t rte_service_cores_add(const rte_cpuset_t *coremask);
> int32_t rte_service_cores_del(const rte_cpuset_t *coremask);
> 
> 
> I am working on a patchset - but for now I would appreciate general design 
> feedback,
> particularly if a specific use-case is not handled.
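
For illustration, a minimal sketch of how an application might use the
proposed v2 API, based only on the declarations above. The header name, the
rte_service_spec fields and the callback prototype are assumptions; the RFC
does not define them, so treat this as a sketch of intent, not the
implementation.

/*
 * Sketch of intended application usage of the proposed service-cores API.
 * Header name, rte_service_spec fields and the callback prototype are
 * assumed for illustration.
 */
#include <rte_lcore.h>
#include <rte_service.h>	/* assumed header name */

static int32_t
my_service_run(void *userdata)	/* assumed callback prototype */
{
	(void)userdata;
	/* one iteration of background work, e.g. poll a SW eventdev PMD */
	return 0;
}

static int
app_setup_service(void)
{
	struct rte_service_spec spec = {
		.name      = "my_background_service",	/* assumed field */
		.callback  = my_service_run,		/* assumed field */
		.socket_id = (int)rte_socket_id(),	/* assumed field */
	};
	rte_cpuset_t mask;

	if (rte_service_register(&spec) != 0)
		return -1;

	/* donate lcore 2 to the service-core pool and map the service to it */
	CPU_ZERO(&mask);
	CPU_SET(2, &mask);
	if (rte_service_cores_add(&mask) != 0 ||
	    rte_service_set_coremask(&spec, &mask) != 0)
		return -1;

	/* start the service, then start the service lcores */
	if (rte_service_start(&spec) != 0)
		return -1;
	return rte_service_cores_start();
}

The same calls could be driven by EAL from a --service-cores argument, which
is the point Konstantin questions above: whether this belongs in
rte_eal_init() or should be left to the application.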


Re: [dpdk-dev] Getting meta data from different pipelines in ip_pipeline application

2017-06-03 Thread Dumitrescu, Cristian
Hi Nidhia,

You should go ahead and test it. Did you already test it and find that it did
not work as you expected?

Personally, I cannot see any reason why it should not work, but you need to
test it yourself. I am assuming that pipelines 4, 5, 6 below have the
passthrough type and pipeline 7 has the flow classification type.

Regards,
Cristian

> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Nidhia Varghese
> Sent: Thursday, June 1, 2017 1:59 PM
> To: us...@dpdk.org; dev@dpdk.org
> Subject: Re: [dpdk-dev] Getting meta data from different pipelines in
> ip_pipeline application
> 
> Hi,
> 
> Any comments on the below thread?
> 
> Thanks,
> Nidhia
> 
> On Tue, May 30, 2017 at 12:33 PM, Nidhia Varghese <
> nidhiavarghes...@gmail.com> wrote:
> 
> > Hi,
> >
> > This is how I want my pipelines to work:-
> > Pipeline 4:
> > Has to get the source mac address and save it in the 160th (160-167) metadata
> > field in the mbuf.
> > Pipeline 5:
> > Will take the vlan id and store it at the 168th (168-175) offset of the same
> > mbuf.
> > Pipeline 6:
> > Takes the incoming port id, which is at the 23rd position of the mbuf, and
> > stores it at the 175th offset. (It will overlap with the stored vlan id field,
> > but since I need only that one bit and I have to access all three fields
> > together as a 16-byte key value, I have done this overlapping.)
> > Pipeline 7:
> > Key offset is given as 160 and key size is given as 16, so that 160 to 175
> > will be available, as we have stored those in metadata in the previous
> > three pipelines.
> >
> > I want to know whether my logic will work if I write the config file as
> > shown below. Will pipeline 7 be able to get the stored source mac address,
> > vlan id and port id from the given key offset?
> >
> > [PIPELINE4]
> > ..
> > ..
> > dma_size = 8
> > dma_dst_offset = 160
> > dma_src_offset = 262
> > dma_src_mask = 
> > dma_hash_offset = 192
> >
> > [PIPELINE5]
> > ..
> > ..
> > dma_size = 8
> > dma_dst_offset = 168
> > dma_src_offset = 268
> > dma_src_mask = 0FFF
> > dma_hash_offset = 200
> >
> > [PIPELINE6]
> > ..
> > ..
> > dma_size = 8
> > dma_dst_offset = 175
> > dma_src_offset = 23
> > dma_src_mask = FF00
> > dma_hash_offset = 208
> >
> > [PIPELINE7]
> > ..
> > ..
> > key_size = 16
> > key_offset = 160
> > key_mask = 0FFF00FF
> > ..
> >
> > Thanks for your reply and help.
> >
> > Regards,
> > Nidhia Varghese
> >
> 
> 
> 
> --
> 
> Regards,
> Nidhia Varghese


Re: [dpdk-dev] [dpdk-stable] [PATCH] vhost: fix crash on NUMA

2017-06-03 Thread Yuanhan Liu
On Fri, Jun 02, 2017 at 10:20:38AM +0200, Jens Freimann wrote:
> On Fri, Jun 02, 2017 at 08:14:46AM +0800, Yuanhan Liu wrote:
> > The queue allocation was changed, from allocating one queue-pair at a
> > time to one queue at a time. Most of the changes have been done, but
> > just with one being missed: the size of coping the old queue is still
> 
> s/coping/copying/ ?

right, thanks.

> 
> > based on queue-pair at numa_realloc(), which leads to overwritten issue.
> > As a result, crash may happen.
> > 
> > Fix it by specifying the right copy size. Also, the net queue macros
> > are not used any more. Remove them.
> > 
> > Fixes: ab4d7b9f1afc ("vhost: turn queue pair to vring")
> > 
> > Cc: sta...@dpdk.org
> > Reported-by: Ciara Loftus 
> > Signed-off-by: Yuanhan Liu 
> > ---
> >  lib/librte_vhost/vhost_user.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> 
> Reviewed-by: Jens Freimann 

Applied to dpdk-next-virtio.

--yliu
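
For readers who do not want to dig through vhost_user.c, a sketch of the idea
behind the fix. This is not the actual numa_realloc() code; the function name
realloc_vq and the "vhost.h" include are illustrative, and struct
vhost_virtqueue is the library-internal type.

/*
 * Sketch only: once queues are allocated one vring at a time, the copy done
 * during NUMA reallocation must cover exactly one struct vhost_virtqueue;
 * copying a whole queue pair overwrites the neighbouring queue's memory.
 */
#include <string.h>
#include <rte_malloc.h>
#include "vhost.h"	/* internal header defining struct vhost_virtqueue */

static struct vhost_virtqueue *
realloc_vq(struct vhost_virtqueue *old_vq, int newnode)
{
	struct vhost_virtqueue *new_vq;

	new_vq = rte_malloc_socket(NULL, sizeof(*new_vq), 0, newnode);
	if (new_vq == NULL)
		return old_vq;

	memcpy(new_vq, old_vq, sizeof(*new_vq)); /* was sized for a queue pair */
	rte_free(old_vq);
	return new_vq;
}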


[dpdk-dev] [PATCH v3]net/mlx5: implement drop action in hardware classifier

2017-06-03 Thread Shachar Beiser
Remove/add blank lines according to conventions.

Shachar Beiser (1):
  net/mlx5: implement drop action in hardware classifier

 drivers/net/mlx5/Makefile|  5 +
 drivers/net/mlx5/mlx5_flow.c | 17 +
 2 files changed, 22 insertions(+)

-- 
1.8.3.1



[dpdk-dev] [PATCH v3] net/mlx5: implement drop action in hardware classifier

2017-06-03 Thread Shachar Beiser
The current drop action is implemented as a queue tail drop,
which requires instantiating multiple WQs to maintain a high drop rate.
This commit implements the drop action in the hardware classifier.
This makes it possible to reduce the number of contexts needed for the drop,
without affecting the drop rate.

Signed-off-by: Shachar Beiser 
---
 drivers/net/mlx5/Makefile|  5 +
 drivers/net/mlx5/mlx5_flow.c | 17 +
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index c079959..daf8013 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -101,6 +101,11 @@ mlx5_autoconf.h.new: FORCE
 mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
$Q $(RM) -f -- '$@'
$Q sh -- '$<' '$@' \
+   HAVE_VERBS_IBV_EXP_FLOW_SPEC_ACTION_DROP \
+   infiniband/verbs_exp.h \
+   enum IBV_EXP_FLOW_SPEC_ACTION_DROP \
+   $(AUTOCONF_OUTPUT)
+   $Q sh -- '$<' '$@' \
HAVE_VERBS_IBV_EXP_CQ_COMPRESSED_CQE \
infiniband/verbs_exp.h \
enum IBV_EXP_CQ_COMPRESSED_CQE \
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index adcbe3f..77a8c04 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -53,7 +53,11 @@
 #include "mlx5_prm.h"
 
 /* Number of Work Queue necessary for the DROP queue. */
+#ifndef HAVE_VERBS_IBV_EXP_FLOW_SPEC_ACTION_DROP
 #define MLX5_DROP_WQ_N 4
+#else
+#define MLX5_DROP_WQ_N 1
+#endif
 
 static int
 mlx5_flow_create_eth(const struct rte_flow_item *item,
@@ -993,6 +997,10 @@ struct mlx5_flow_action {
   struct rte_flow_error *error)
 {
struct rte_flow *rte_flow;
+#ifdef HAVE_VERBS_IBV_EXP_FLOW_SPEC_ACTION_DROP
+   struct ibv_exp_flow_spec_action_drop *drop;
+   unsigned int size = sizeof(struct ibv_exp_flow_spec_action_drop);
+#endif
 
assert(priv->pd);
assert(priv->ctx);
@@ -1007,6 +1015,15 @@ struct mlx5_flow_action {
rte_flow->qp = priv->flow_drop_queue->qp;
if (!priv->started)
return rte_flow;
+#ifdef HAVE_VERBS_IBV_EXP_FLOW_SPEC_ACTION_DROP
+   drop = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+   *drop = (struct ibv_exp_flow_spec_action_drop){
+   .type = IBV_EXP_FLOW_SPEC_ACTION_DROP,
+   .size = size,
+   };
+   ++flow->ibv_attr->num_of_specs;
+   flow->offset += sizeof(struct ibv_exp_flow_spec_action_drop);
+#endif
rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp,
 rte_flow->ibv_attr);
if (!rte_flow->ibv_flow) {
-- 
1.8.3.1



[dpdk-dev] [PATCH v3 0/2] Various fixes for contigmem

2017-06-03 Thread Tiwei Bie
vm_page_replace_checked() was introduced in FreeBSD 11, so the
build is broken on FreeBSD 10. The fix is to use vm_page_replace()
directly and do the check in the caller.

---

v2: destroy the initialized mtx before failing or unloading
v3: fix build on FreeBSD 10, refine comments

Tiwei Bie (2):
  contigmem: free the allocated memory when error occurs
  contigmem: don't zero the pages during each mmap

 lib/librte_eal/bsdapp/contigmem/contigmem.c | 197 
 1 file changed, 171 insertions(+), 26 deletions(-)

-- 
2.12.1



[dpdk-dev] [PATCH v3 2/2] contigmem: don't zero the pages during each mmap

2017-06-03 Thread Tiwei Bie
Don't zero the pages during each mmap. Instead, only zero the pages
when they are not already mmapped. Otherwise, the multi-process
support will be broken, as the pages will be zeroed when secondary
processes map the memory. In addition, track the open and mmap operations
on the cdev, and prevent the module from being unloaded while it is
still in use.

Fixes: 82f931805506 ("contigmem: zero all pages during mmap")
Cc: sta...@dpdk.org

Signed-off-by: Tiwei Bie 
Acked-by: Bruce Richardson 
---
 lib/librte_eal/bsdapp/contigmem/contigmem.c | 186 
 1 file changed, 160 insertions(+), 26 deletions(-)

diff --git a/lib/librte_eal/bsdapp/contigmem/contigmem.c 
b/lib/librte_eal/bsdapp/contigmem/contigmem.c
index 03e3e8def..e8fb9087d 100644
--- a/lib/librte_eal/bsdapp/contigmem/contigmem.c
+++ b/lib/librte_eal/bsdapp/contigmem/contigmem.c
@@ -50,24 +50,37 @@ __FBSDID("$FreeBSD$");
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
+#include 
+
+struct contigmem_buffer {
+   void   *addr;
+   int refcnt;
+   struct mtx  mtx;
+};
+
+struct contigmem_vm_handle {
+   int buffer_index;
+};
 
 static int  contigmem_load(void);
 static int  contigmem_unload(void);
 static int  contigmem_physaddr(SYSCTL_HANDLER_ARGS);
 
-static d_mmap_t contigmem_mmap;
 static d_mmap_single_t  contigmem_mmap_single;
 static d_open_t contigmem_open;
+static d_close_t contigmem_close;
 
 static int  contigmem_num_buffers = RTE_CONTIGMEM_DEFAULT_NUM_BUFS;
 static int64_t  contigmem_buffer_size = RTE_CONTIGMEM_DEFAULT_BUF_SIZE;
 
 static eventhandler_tag contigmem_eh_tag;
-static void *contigmem_buffers[RTE_CONTIGMEM_MAX_NUM_BUFS];
+static struct contigmem_buffer contigmem_buffers[RTE_CONTIGMEM_MAX_NUM_BUFS];
 static struct cdev *contigmem_cdev = NULL;
+static int  contigmem_refcnt;
 
 TUNABLE_INT("hw.contigmem.num_buffers", &contigmem_num_buffers);
 TUNABLE_QUAD("hw.contigmem.buffer_size", &contigmem_buffer_size);
@@ -78,6 +91,8 @@ SYSCTL_INT(_hw_contigmem, OID_AUTO, num_buffers, CTLFLAG_RD,
&contigmem_num_buffers, 0, "Number of contigmem buffers allocated");
 SYSCTL_QUAD(_hw_contigmem, OID_AUTO, buffer_size, CTLFLAG_RD,
&contigmem_buffer_size, 0, "Size of each contiguous buffer");
+SYSCTL_INT(_hw_contigmem, OID_AUTO, num_references, CTLFLAG_RD,
+   &contigmem_refcnt, 0, "Number of references to contigmem");
 
 static SYSCTL_NODE(_hw_contigmem, OID_AUTO, physaddr, CTLFLAG_RD, 0,
"physaddr");
@@ -114,9 +129,10 @@ MODULE_VERSION(contigmem, 1);
 static struct cdevsw contigmem_ops = {
.d_name = "contigmem",
.d_version  = D_VERSION,
-   .d_mmap = contigmem_mmap,
+   .d_flags = D_TRACKCLOSE,
.d_mmap_single  = contigmem_mmap_single,
.d_open = contigmem_open,
+   .d_close = contigmem_close,
 };
 
 static int
@@ -124,6 +140,7 @@ contigmem_load()
 {
char index_string[8], description[32];
int  i, error = 0;
+   void *addr;
 
if (contigmem_num_buffers > RTE_CONTIGMEM_MAX_NUM_BUFS) {
printf("%d buffers requested is greater than %d allowed\n",
@@ -141,18 +158,20 @@ contigmem_load()
}
 
for (i = 0; i < contigmem_num_buffers; i++) {
-   contigmem_buffers[i] =
-			contigmalloc(contigmem_buffer_size, M_CONTIGMEM, M_ZERO, 0,
-   BUS_SPACE_MAXADDR, contigmem_buffer_size, 0);
-
-   if (contigmem_buffers[i] == NULL) {
+   addr = contigmalloc(contigmem_buffer_size, M_CONTIGMEM, M_ZERO,
+   0, BUS_SPACE_MAXADDR, contigmem_buffer_size, 0);
+   if (addr == NULL) {
printf("contigmalloc failed for buffer %d\n", i);
error = ENOMEM;
goto error;
}
 
-   printf("%2u: virt=%p phys=%p\n", i, contigmem_buffers[i],
-			(void *)pmap_kextract((vm_offset_t)contigmem_buffers[i]));
+   printf("%2u: virt=%p phys=%p\n", i, addr,
+   (void *)pmap_kextract((vm_offset_t)addr));
+
+   mtx_init(&contigmem_buffers[i].mtx, "contigmem", NULL, MTX_DEF);
+   contigmem_buffers[i].addr = addr;
+   contigmem_buffers[i].refcnt = 0;
 
snprintf(index_string, sizeof(index_string), "%d", i);
snprintf(description, sizeof(description),
@@ -170,10 +189,13 @@ contigmem_load()
return 0;
 
 error:
-   for (i = 0; i < contigmem_num_buffers; i++)
-   if (contigmem_buffers[i] != NULL)
-   contigfree(contigmem_buffers[i], contigmem_buffer_size,
-   M_CONTIGMEM);
+   for (i = 0; i < contigmem_num_buffers; i++) {
+   

[dpdk-dev] [PATCH v3 1/2] contigmem: free the allocated memory when error occurs

2017-06-03 Thread Tiwei Bie
Fixes: 764bf26873b9 ("add FreeBSD support")
Cc: sta...@dpdk.org

Signed-off-by: Tiwei Bie 
Acked-by: Bruce Richardson 
---
 lib/librte_eal/bsdapp/contigmem/contigmem.c | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/bsdapp/contigmem/contigmem.c 
b/lib/librte_eal/bsdapp/contigmem/contigmem.c
index da971debe..03e3e8def 100644
--- a/lib/librte_eal/bsdapp/contigmem/contigmem.c
+++ b/lib/librte_eal/bsdapp/contigmem/contigmem.c
@@ -123,19 +123,21 @@ static int
 contigmem_load()
 {
char index_string[8], description[32];
-   int  i;
+   int  i, error = 0;
 
if (contigmem_num_buffers > RTE_CONTIGMEM_MAX_NUM_BUFS) {
printf("%d buffers requested is greater than %d allowed\n",
				contigmem_num_buffers, RTE_CONTIGMEM_MAX_NUM_BUFS);
-   return EINVAL;
+   error = EINVAL;
+   goto error;
}
 
if (contigmem_buffer_size < PAGE_SIZE ||
	    (contigmem_buffer_size & (contigmem_buffer_size - 1)) != 0) {
printf("buffer size 0x%lx is not greater than PAGE_SIZE and "
"power of two\n", contigmem_buffer_size);
-   return EINVAL;
+   error = EINVAL;
+   goto error;
}
 
for (i = 0; i < contigmem_num_buffers; i++) {
@@ -145,7 +147,8 @@ contigmem_load()
 
if (contigmem_buffers[i] == NULL) {
printf("contigmalloc failed for buffer %d\n", i);
-   return ENOMEM;
+   error = ENOMEM;
+   goto error;
}
 
printf("%2u: virt=%p phys=%p\n", i, contigmem_buffers[i],
@@ -165,6 +168,14 @@ contigmem_load()
GID_WHEEL, 0600, "contigmem");
 
return 0;
+
+error:
+   for (i = 0; i < contigmem_num_buffers; i++)
+   if (contigmem_buffers[i] != NULL)
+   contigfree(contigmem_buffers[i], contigmem_buffer_size,
+   M_CONTIGMEM);
+
+   return error;
 }
 
 static int
-- 
2.13.0