[dpdk-dev] RSS for double vlan tagged packets
I have also tried enabling vlan_extended support in port_conf and setting rss_hf to all 1s:

.hw_vlan_extend = 1
.rss_hf = ~0

Still there is no change...

Surya

On May 12, 2014, at 12:38 PM, Surya Nimmagadda wrote:

> Hi,
>
> I am using RSS functionality on 82599 with the dpdk igb_uio driver.
>
> I am able to get a proper hash for untagged and single tagged packets. But for
> double tagged packets, it is not working. I get hash value 0 for all packets.
>
> Is there any additional flag I need to enable for this?
>
> Regards,
> Surya
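For context, a minimal sketch of the port configuration being described. The field names assume the 1.6/1.7-era struct rte_eth_conf layout; treat it as illustrative of what was tried, not as a verified fix for the double-tag case.

#include <rte_ethdev.h>

/* RSS enabled with all hash functions requested, plus extended (double)
 * VLAN support - the same settings as in the message above. Whether the
 * 82599 actually hashes double tagged packets is the open question here. */
static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
		.hw_vlan_extend = 1,	/* extended/double VLAN, as tried above */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,	/* use the default RSS key */
			.rss_hf = ~0,		/* request every hash function */
		},
	},
};

This configuration would be applied with rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf), with nb_rxq > 1 so RSS actually spreads packets across queues.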
[dpdk-dev] RSS for double vlan tagged packets
I don't think the hardware supports using RSS on packets with a double VLAN tag.

Regards,
/Bruce

> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Surya Nimmagadda
> Sent: Tuesday, May 13, 2014 3:17 AM
> To: dev at dpdk.org
> Subject: Re: [dpdk-dev] RSS for double vlan tagged packets
>
> I have also tried by enabling vlan_extended support in port_conf and set rss_hf
> as all 1s.
>
> .hw_vlan_extend = 1
> .rss_hf = ~0
>
> Still there is no change...
>
> Surya
>
> On May 12, 2014, at 12:38 PM, Surya Nimmagadda wrote:
>
> > Hi,
> >
> > I am using RSS functionality on 82599 with dpdk igb_uio driver.
> >
> > I am able to get proper hash for untagged and single tagged packets. But for
> > double tagged packets, it is not working. I get hash value 0 for all packets.
> >
> > Is there any additional flag I need to enable for this?
> >
> > Regards,
> > Surya
[dpdk-dev] Need help "How to use dpdk on Mellanox MT27520"
Hi,

I am using a Mellanox Technologies MT27520 Family card and it is not recognized
by the DPDK test programs. When I run the l2fwd binary, I get the error below.
Please guide me on how to solve the issue. I am using the DPDK-1.6.0r2 version.

# ./build/l2fwd -c 3 -n 2 --huge-dir /mnt/huge -m 4096 -b :01:00.0 -b :01:00.1 -b :01:00.2 -b :01:00.3 -b :00:19.0 -- -p 3
EAL: Setting up memory...
EAL: Ask a virtual area of 0x1 bytes
EAL: Virtual area found at 0x2aa9aaa0 (size = 0x1)
EAL: Requesting 2048 pages of size 2MB from socket 0
EAL: TSC frequency is ~3410018 KHz
EAL: Master core 0 is ready (tid=a83c8880)
EAL: Core 1 is ready (tid=a7360700)
EAL: Error - exiting with code: 1
  Cause: No Ethernet ports - bye

[root at localhost dpdk-1.6.0r2]# lspci -s 02:00.0 -x -vv
02:00.0 Network controller: Mellanox Technologies MT27520 Family
        Subsystem: Mellanox Technologies Device 0003
        Product Name: CX354A - ConnectX-3 Pro QSFP
        Kernel driver in use: mlx4_core

Thanks
Nagesh
[dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset
Hi Stephen,

On 05/12/2014 07:13 PM, Stephen Hemminger wrote:
> In cloned mbuf
>   rte_pktmbuf_mtod(m, char *) points to the original data.
>   RTE_MBUF_TO_BADDR(m) points to buffer in the mbuf which we
>   use for metadata (timestamp).

I still don't see the problem. Let's take an example: m2 is a clone of m1.

Before applying the patch series, we have:

- rte_pktmbuf_mtod(m1) points to m1->pkt.data
- RTE_MBUF_TO_BADDR(m1) points to m1->buf_addr
- rte_pktmbuf_mtod(m2) points to m1->pkt.data
- RTE_MBUF_TO_BADDR(m2) points to m2->buf_addr

After the patches:

- rte_pktmbuf_mtod(m1) points to m1->buf_addr + m1->data_off
- RTE_MBUF_TO_BADDR(m1) points to m1->buf_addr
- rte_pktmbuf_mtod(m2) points to m1->buf_addr + m2->data_off
- RTE_MBUF_TO_BADDR(m2) points to m2->buf_addr

I assume this is equivalent, as m2->data_off will have the same value as
m1->data_off.

Have you identified a specific test case that fails? The mbuf autotest is
successful; if you think there is something else to test, it should be added
in the test app.

I don't use this feature, but by the way it seems that the macros
RTE_MBUF_TO_BADDR(mb) and RTE_MBUF_FROM_BADDR(ba) won't return the proper
value if the application initializes a mbuf pool with an object size !=
sizeof(rte_mbuf). I thought it was something quite common, as it allows the
application to add its own info in the mbuf.

Regards,
Olivier
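To make the pointer-versus-offset discussion concrete, here is a sketch of what the accessor change amounts to. The _old/_new suffixes are added here for comparison; the real macros in the patch series may differ in naming and casts.

/* Before the series: the mbuf carries a pointer to the start of data. */
#define rte_pktmbuf_mtod_old(m, t)  ((t)((m)->pkt.data))

/* After the series: the mbuf carries an offset from buf_addr instead,
 * so resolving the data pointer costs one extra add. */
#define rte_pktmbuf_mtod_new(m, t)  ((t)((char *)(m)->buf_addr + (m)->data_off))

For a clone m2 of m1, both variants resolve to the same address as long as m2->data_off mirrors m1->data_off when the clone is attached, which is the equivalence argued above.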
[dpdk-dev] [PATCH v3 2/6] examples: use rte.extsubdir.mk to process subdirectories
Signed-off-by: Olivier Matz
---
 examples/l2fwd-ivshmem/Makefile                  |  9 +
 examples/multi_process/Makefile                  | 16 +++-
 examples/multi_process/client_server_mp/Makefile | 15 ++-
 examples/quota_watermark/Makefile                | 12 +++-
 4 files changed, 17 insertions(+), 35 deletions(-)

change included in v3: use x86_64-default-linuxapp-gcc instead of
x86_64-ivshmem-linuxapp-gcc for default RTE_TARGET of multi_process example
(was a bad copy/paste).

diff --git a/examples/l2fwd-ivshmem/Makefile b/examples/l2fwd-ivshmem/Makefile
index 7286b37..df59ed8 100644
--- a/examples/l2fwd-ivshmem/Makefile
+++ b/examples/l2fwd-ivshmem/Makefile
@@ -37,14 +37,7 @@ endif
 RTE_TARGET ?= x86_64-ivshmem-linuxapp-gcc

 include $(RTE_SDK)/mk/rte.vars.mk
-unexport RTE_SRCDIR RTE_OUTPUT RTE_EXTMK

 DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += host guest

-.PHONY: all clean $(DIRS-y)
-
-all: $(DIRS-y)
-clean: $(DIRS-y)
-
-$(DIRS-y):
-	$(MAKE) -C $@ $(MAKECMDGOALS)
+include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/multi_process/Makefile b/examples/multi_process/Makefile
index ba96a7e..5e01f9a 100644
--- a/examples/multi_process/Makefile
+++ b/examples/multi_process/Makefile
@@ -33,15 +33,13 @@ ifeq ($(RTE_SDK),)
 $(error "Please define RTE_SDK environment variable")
 endif

-include $(RTE_SDK)/mk/rte.vars.mk
-unexport RTE_SRCDIR RTE_OUTPUT RTE_EXTMK
-
-DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += $(wildcard *_mp)
+# Default target, can be overriden by command line or environment
+RTE_TARGET ?= x86_64-default-linuxapp-gcc

-.PHONY: all clean $(DIRS-y)
+include $(RTE_SDK)/mk/rte.vars.mk

-all: $(DIRS-y)
-clean: $(DIRS-y)
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += client_server_mp
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += simple_mp
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += symmetric_mp

-$(DIRS-y):
-	$(MAKE) -C $@ $(MAKECMDGOALS)
+include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/multi_process/client_server_mp/Makefile b/examples/multi_process/client_server_mp/Makefile
index 24d31b0..d2046ba 100644
--- a/examples/multi_process/client_server_mp/Makefile
+++ b/examples/multi_process/client_server_mp/Makefile
@@ -33,15 +33,12 @@ ifeq ($(RTE_SDK),)
 $(error "Please define RTE_SDK environment variable")
 endif

-include $(RTE_SDK)/mk/rte.vars.mk
-unexport RTE_SRCDIR RTE_OUTPUT RTE_EXTMK
-
-DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += $(wildcard mp_*)
+# Default target, can be overriden by command line or environment
+RTE_TARGET ?= x86_64-default-linuxapp-gcc

-.PHONY: all clean $(DIRS-y)
+include $(RTE_SDK)/mk/rte.vars.mk

-all: $(DIRS-y)
-clean: $(DIRS-y)
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += mp_client
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += mp_server

-$(DIRS-y):
-	$(MAKE) -C $@ $(MAKECMDGOALS)
+include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/quota_watermark/Makefile b/examples/quota_watermark/Makefile
index 5596dcc..e4d54c2 100644
--- a/examples/quota_watermark/Makefile
+++ b/examples/quota_watermark/Makefile
@@ -37,14 +37,8 @@ endif
 RTE_TARGET ?= x86_64-default-linuxapp-gcc

 include $(RTE_SDK)/mk/rte.vars.mk
-unexport RTE_SRCDIR RTE_OUTPUT RTE_EXTMK

-DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += $(wildcard qw*)
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += qw
+DIRS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += qwctl

-.PHONY: all clean $(DIRS-y)
-
-all: $(DIRS-y)
-clean: $(DIRS-y)
-
-$(DIRS-y):
-	$(MAKE) -C $@ $(MAKECMDGOALS)
+include $(RTE_SDK)/mk/rte.extsubdir.mk
--
1.9.2
[dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset
An alternative way to save 6 bytes (without the side effects this change has)
would be to change the mempool struct * to a uint16_t mempool_id. That limits
the changes to a return function, and the performance impact of that can be
mitigated quite easily.

-Venky

-----Original Message-----
From: Neil Horman [mailto:nhor...@tuxdriver.com]
Sent: Monday, May 12, 2014 11:40 AM
To: Venkatesan, Venky
Cc: Olivier MATZ; Thomas Monjalon; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset

On Mon, May 12, 2014 at 04:06:23PM +0000, Venkatesan, Venky wrote:
> Olivier,
>
> The impact isn't going to be felt on the driver quite as much (and can
> be mitigated) - the driver runs a pretty low IPC (~1.7) compared to
> some of the more optimized code above it that actually accesses the
> data. The problem with the dependent compute is like this - in effect
> you are changing
>
> struct eth_hdr * eth = (struct eth_hdr *) m->data; to
> struct eth_hdr * eth = (struct eth_hdr *) ((char *)m->buf_addr + m->data_offset);
>
> We have some code that actually processes 4-8 packets in parallel (parse +
> hash), with a pretty high IPC. What we've done here is essentially replace
> a simple load with a load, load, add sequence in front of it. There is
> no real way to do these computations in parallel for multiple packets - it
> has to be done one or two at a time. What suffers is the IPC of the overall
> function that does the parse/hash quite significantly. It's those functions
> that I worry about more than the driver. I haven't yet been able to come up
> with a mitigation for this.
>
> Neil,
>
> The last time we looked at this change - and it's been a while ago - the
> negative effect on the upper level functions built on this was on the order
> of about 15-20%. It will probably get worse once we tune the code even
> more. Hope the above explanation gives you a flavour of the problem this
> will introduce.
>
I'm sorry, it doesn't. I take you at your word that it was a problem, but I
don't think we can just categorically deny patches based on past testing of
potentially similar code, especially given that this series attempts to
improve some traffic pattern via the implementation of TSO (meaning the net
result will be different based on the use case). I understand what you're
saying above, that this code incurs a second load operation (though I would
think they could be implemented in parallel, or at the very least accelerated
by clever placement of data_offset relative to buf_addr to ensure that the
second load was cache hot). Regardless, my point is, just saying that this
can't be done because you saw a performance hit with something similar in the
past isn't helpful. If you think that's a problem, then we really need to get
details of your test case and the measurements you took so that they can be
reproduced, and confirmed or refuted.

Regards
Neil.

> Regards,
> -Venky
>
>
>
>
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz at 6wind.com]
> Sent: Monday, May 12, 2014 8:07 AM
> To: Neil Horman; Venkatesan, Venky
> Cc: Thomas Monjalon; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer
> by an offset
>
> Hi Venky,
>
> On 05/12/2014 04:41 PM, Neil Horman wrote:
> >> This is a hugely problematic change, and has a pretty large
> >> performance impact (because of the dependency to compute and access).
> >> We debated this for a long time during the early days of DPDK and
> >> decided against it. This is also a repeated sequence - the driver
> >> will do it twice (Rx + Tx) and the next level stack will do it
> >> twice (Rx + Tx) ...
> >>
> >> My vote is to reject this particular change to the mbuf.
> >>
> >> Regards,
> >> -Venky
> >>
> > Do you have performance numbers to compare throughput with and
> > without this change? I always feel suspicious when I see the spectre
> > of performance used to support or deny a change without supporting
> > reasoning or metrics.
>
> I agree with Neil. My feeling is that it won't impact performance, and it is
> correlated with the forwarding tests I've done with this patch.
>
> I don't really understand what would cost more by storing the offset
> instead of the virtual address. I agree that each time the stack
> accesses the beginning of the mbuf, there will be an arithmetic
> operation, but it is compensated by other operations that will be
> accelerated:
>
> - When receiving a packet, the driver will do:
>
>     m->data_off = RTE_PKTMBUF_HEADROOM;
>
>   instead of:
>
>     m->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
>
> - Each time the stack prepends data, it has to check if the headroom
>   is large enough to do the operation. This will be faster as data_off
>   is the headroom.
>
> - When transmitting a packet, the driver will get the physical address:
>
>     phys_addr = m->buf_phy
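A small sketch of the headroom check mentioned above, under the offset-based layout. The struct and helpers are simplified stand-ins; the real rte_pktmbuf_headroom() and rte_pktmbuf_prepend() in the series may differ in detail.

#include <stdint.h>
#include <stddef.h>

/* Minimal subset of the mbuf fields relevant to the discussion. */
struct mbuf_sketch {
	void     *buf_addr;	/* start of the underlying buffer */
	uint16_t  data_off;	/* offset of the first data byte */
	uint16_t  data_len;
};

/* With the offset representation, the headroom is simply data_off. */
static inline uint16_t headroom(const struct mbuf_sketch *m)
{
	return m->data_off;
}

/* Prepending 'len' bytes only needs a compare and a subtract; no pointer
 * arithmetic against buf_addr is required until the data is touched. */
static inline char *prepend(struct mbuf_sketch *m, uint16_t len)
{
	if (len > headroom(m))
		return NULL;
	m->data_off -= len;
	m->data_len += len;
	return (char *)m->buf_addr + m->data_off;
}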
[dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset
Hi Venky,

2014-05-13 13:54, Venkatesan, Venky:
> An alternative way to save 6 bytes (without the side effects this change
> has) would be to change the mempool struct * to a uint16_t mempool_id. That
> limits the changes to a return function, and the performance impact of that
> can be mitigated quite easily.

It's very difficult to compare things without code examples.
Please provide:
- a patch for your proposal
- an example application that allows us to test and understand the
  performance issue you are pointing out

PS: please don't top post, it makes this thread difficult to read
--
Thomas
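For readers following the thread, a purely hypothetical sketch of what the one-line mempool_id idea could mean in practice. None of these names exist in DPDK; this is not the patch Thomas is asking for, only an illustration of the trade-off being discussed.

#include <stdint.h>

#define MAX_MEMPOOLS 1024	/* hypothetical bound on registered pools */

struct rte_mempool;		/* opaque here */

/* Hypothetical global table mapping a small integer id to its pool. */
static struct rte_mempool *mempool_table[MAX_MEMPOOLS];

/* The mbuf would then store a 2-byte id instead of an 8-byte pointer,
 * saving 6 bytes; freeing an mbuf would pay one extra table lookup. */
static inline struct rte_mempool *mempool_from_id(uint16_t id)
{
	return mempool_table[id];
}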
[dpdk-dev] [PATCH] version: 1.7.0-rc0
> > Start development cycle for version 1.7.0.
> >
> > This new development workflow introduces a new versioning scheme.
> > Instead of having releases r0, r1, r2, etc, there will be release
> > candidates. Last number has special meanings:
> > < 16 numbers are reserved for release candidates (RTE_VER_SUFFIX is -rc)
> > 16 is reserved for the release (RTE_VER_SUFFIX must be unset)
> > > 16 numbers can be used locally (RTE_VER_SUFFIX must be set)
> >
> > Signed-off-by: Thomas Monjalon
>
> Acked-by: Bruce Richardson

Applied for version 1.7.0
--
Thomas
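A tiny illustration of the numbering rule quoted above. The helper name and the "+local" label are made up for the example; only the < 16 / 16 / > 16 split comes from the commit message.

#include <stdio.h>

/* Interpret the last component of a 1.7-style version number,
 * following the rule quoted above. */
static const char *suffix_for(int last)
{
	if (last < 16)
		return "-rc";	/* release candidate, e.g. 1.7.0-rc0 .. -rc15 */
	if (last == 16)
		return "";	/* the official release: no suffix */
	return "+local";	/* local builds must set their own suffix */
}

int main(void)
{
	printf("0  -> \"%s\"\n", suffix_for(0));	/* "-rc"    */
	printf("16 -> \"%s\"\n", suffix_for(16));	/* ""       */
	printf("42 -> \"%s\"\n", suffix_for(42));	/* "+local" */
	return 0;
}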
[dpdk-dev] [PATCH] EAL: Take reserved hugepages into account
2014-04-16 11:11, Burakov, Anatoly:
> Some applications reserve hugepages for later use, but DPDK doesn't take
> reserved pages into account when calculating the number of available
> hugepages. This patch adds reading from the "resv_hugepages" file in
> addition to "free_hugepages".

Acked-by: Thomas Monjalon

Applied for version 1.7.0
--
Thomas
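A minimal sketch of the accounting the patch describes, reading the kernel's per-size hugepage counters from sysfs. The helper and the 2 MB path are only for illustration; the patch itself modifies the EAL's hugepage discovery code.

#include <stdio.h>

/* Read one numeric counter from the 2 MB hugepage directory in sysfs. */
static long read_hp_counter(const char *name)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		"/sys/kernel/mm/hugepages/hugepages-2048kB/%s", name);
	f = fopen(path, "r");
	if (f == NULL)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	long free_hp = read_hp_counter("free_hugepages");
	long resv_hp = read_hp_counter("resv_hugepages");

	/* Pages already reserved by other applications cannot be used,
	 * so the usable count is free minus reserved. */
	if (free_hp >= 0 && resv_hp >= 0)
		printf("usable 2MB hugepages: %ld\n", free_hp - resv_hp);
	return 0;
}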
[dpdk-dev] [PATCH v2 0/7] pci cleanup
2014-05-09 15:15, David Marchand:
> Hello all,
>
> Here is an attempt at having an equal implementation in bsd and linux
> eal_pci.c. It results in the following changes:
> - checks on driver flag in bsd which were missing
> - remove virtio-uio workaround in linux eal_pci.c
> - remove deprecated RTE_EAL_UNBIND_PORTS option
>
> Along the way, I discovered two small bugs: a mem leak in linux eal_pci.c
> and a fd leak in both bsd and linux eal_pci.c.
>
> Changes included in v2:
> - fix another mem leak noticed by Anatoly Burakov

First version was acked:
Acked-by: Anatoly Burakov
Acked-by: Neil Horman

Applied for version 1.7.0
--
Thomas
[dpdk-dev] [PATCH v2 0/2] ring: allow to init a rte_ring outside of an rte_memzone
> These 2 patches add 2 new functions that permit initializing and using
> a rte_ring anywhere in memory.
>
> Before these patches, only rte_ring_create() was available. This function
> allocates a rte_memzone (that cannot be freed) and initializes a ring
> inside.
>
> This series allows doing the following:
>   size = rte_ring_get_memsize(1024);
>   r = malloc(size);
>   rte_ring_init(r, "my_ring", 1024, 0);
>
> Changes included in v2:
> - fix syntax of function definitions in rte_ring_get_memsize()
> - use RTE_ALIGN() to get the nearest higher multiple of the cache line size
> - fix description of rte_ring_init() in doxygen comments
>
> Olivier Matz (2):
>   ring: introduce rte_ring_get_memsize()
>   ring: introduce rte_ring_init()

Acked-by: Konstantin Ananyev

Applied for version 1.7.0
--
Thomas
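A slightly fuller sketch of the usage pattern from the cover letter, with error handling added. It assumes the semantics described for the new functions (rte_ring_get_memsize() returning a negative value on error, rte_ring_init() returning 0 on success); the ring size and flags come from the example above.

#include <stdlib.h>
#include <sys/types.h>
#include <rte_ring.h>

/* Create a ring in plain malloc'd memory instead of a rte_memzone,
 * following the example in the cover letter. */
static struct rte_ring *make_heap_ring(void)
{
	ssize_t size = rte_ring_get_memsize(1024);	/* 1024 slots, power of 2 */
	struct rte_ring *r;

	if (size < 0)
		return NULL;
	r = malloc(size);
	if (r == NULL)
		return NULL;
	if (rte_ring_init(r, "my_ring", 1024, 0) < 0) {
		free(r);
		return NULL;
	}
	return r;
}

Since the ring lives outside a memzone, releasing it is simply free(r) once it is no longer used, which is exactly what rte_ring_create() cannot offer.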
[dpdk-dev] [PATCH v2] eal: change default per socket memory allocation
Hi Venky,

There were comments on the first version of this patch and you suggested
to try this new implementation. So do you acknowledge this patch?

Thanks for your review

2014-05-09 15:30, David Marchand:
> From: Didier Pallard
>
> Currently, if there is more memory in hugepages than the amount
> requested by dpdk application, the memory is allocated by taking as much
> memory as possible from each socket, starting from first one.
> For example if a system is configured with 8 GB in 2 sockets (4 GB per
> socket), and dpdk is requesting only 4GB of memory, all memory will be
> taken in socket 0 (that have exactly 4GB of free hugepages) even if some
> cores are configured on socket 1, and there are free hugepages on socket
> 1...
>
> Change this behaviour to allocate memory on all sockets where some cores
> are configured, spreading the memory amongst sockets using following
> ratio per socket:
> N° of cores configured on the socket / Total number of configured cores
> * requested memory
>
> This algorithm is used when memory amount is specified globally using
> -m option. Per socket memory allocation can always be done using
> --socket-mem option.
>
> Changes included in v2:
> - only update linux implementation as bsd looks not to be ready for numa
> - if new algorithm fails, then defaults to previous behaviour
>
> Signed-off-by: Didier Pallard
> Signed-off-by: David Marchand
> ---
>  lib/librte_eal/linuxapp/eal/eal_memory.c | 50 +++---
>  1 file changed, 45 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index 73a6394..471dcfd 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -881,13 +881,53 @@ calc_num_pages_per_socket(uint64_t * memory,
>  	if (num_hp_info == 0)
>  		return -1;
>
> -	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
> -		/* if specific memory amounts per socket weren't requested */
> -		if (internal_config.force_sockets == 0) {
> +	/* if specific memory amounts per socket weren't requested */
> +	if (internal_config.force_sockets == 0) {
> +		int cpu_per_socket[RTE_MAX_NUMA_NODES];
> +		size_t default_size, total_size;
> +		unsigned lcore_id;
> +
> +		/* Compute number of cores per socket */
> +		memset(cpu_per_socket, 0, sizeof(cpu_per_socket));
> +		RTE_LCORE_FOREACH(lcore_id) {
> +			cpu_per_socket[rte_lcore_to_socket_id(lcore_id)]++;
> +		}
> +
> +		/*
> +		 * Automatically spread requested memory amongst detected sockets according
> +		 * to number of cores from cpu mask present on each socket
> +		 */
> +		total_size = internal_config.memory;
> +		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
> +
> +			/* Set memory amount per socket */
> +			default_size = (internal_config.memory * cpu_per_socket[socket])
> +					/ rte_lcore_count();
> +
> +			/* Limit to maximum available memory on socket */
> +			default_size = RTE_MIN(default_size, get_socket_mem_size(socket));
> +
> +			/* Update sizes */
> +			memory[socket] = default_size;
> +			total_size -= default_size;
> +		}
> +
> +		/*
> +		 * If some memory is remaining, try to allocate it by getting all
> +		 * available memory from sockets, one after the other
> +		 */
> +		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
>  			/* take whatever is available */
> -			memory[socket] = RTE_MIN(get_socket_mem_size(socket),
> -					total_mem);
> +			default_size = RTE_MIN(get_socket_mem_size(socket) - memory[socket],
> +					total_size);
> +
> +			/* Update sizes */
> +			memory[socket] += default_size;
> +			total_size -= default_size;
>  		}
> +	}
> +
> +	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
>  		/* skips if the memory on specific socket wasn't requested */
>  		for (i = 0; i < num_hp_info && memory[socket] != 0; i++){
>  			hp_used[i].hugedir = hp_info[i].hugedir;
[dpdk-dev] [PATCH v2] eal: change default per socket memory allocation
From: Didier Pallard

Currently, if there is more memory in hugepages than the amount
requested by dpdk application, the memory is allocated by taking as much
memory as possible from each socket, starting from first one.
For example if a system is configured with 8 GB in 2 sockets (4 GB per
socket), and dpdk is requesting only 4GB of memory, all memory will be
taken in socket 0 (that have exactly 4GB of free hugepages) even if some
cores are configured on socket 1, and there are free hugepages on socket
1...

Change this behaviour to allocate memory on all sockets where some cores
are configured, spreading the memory amongst sockets using following
ratio per socket:
N° of cores configured on the socket / Total number of configured cores
* requested memory

This algorithm is used when memory amount is specified globally using
-m option. Per socket memory allocation can always be done using
--socket-mem option.

Changes included in v2:
- only update linux implementation as bsd looks not to be ready for numa
- if new algorithm fails, then defaults to previous behaviour

Signed-off-by: Didier Pallard
Signed-off-by: David Marchand
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 50 +++---
 1 file changed, 45 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 73a6394..471dcfd 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -881,13 +881,53 @@ calc_num_pages_per_socket(uint64_t * memory,
 	if (num_hp_info == 0)
 		return -1;

-	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
-		/* if specific memory amounts per socket weren't requested */
-		if (internal_config.force_sockets == 0) {
+	/* if specific memory amounts per socket weren't requested */
+	if (internal_config.force_sockets == 0) {
+		int cpu_per_socket[RTE_MAX_NUMA_NODES];
+		size_t default_size, total_size;
+		unsigned lcore_id;
+
+		/* Compute number of cores per socket */
+		memset(cpu_per_socket, 0, sizeof(cpu_per_socket));
+		RTE_LCORE_FOREACH(lcore_id) {
+			cpu_per_socket[rte_lcore_to_socket_id(lcore_id)]++;
+		}
+
+		/*
+		 * Automatically spread requested memory amongst detected sockets according
+		 * to number of cores from cpu mask present on each socket
+		 */
+		total_size = internal_config.memory;
+		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
+
+			/* Set memory amount per socket */
+			default_size = (internal_config.memory * cpu_per_socket[socket])
+					/ rte_lcore_count();
+
+			/* Limit to maximum available memory on socket */
+			default_size = RTE_MIN(default_size, get_socket_mem_size(socket));
+
+			/* Update sizes */
+			memory[socket] = default_size;
+			total_size -= default_size;
+		}
+
+		/*
+		 * If some memory is remaining, try to allocate it by getting all
+		 * available memory from sockets, one after the other
+		 */
+		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
 			/* take whatever is available */
-			memory[socket] = RTE_MIN(get_socket_mem_size(socket),
-					total_mem);
+			default_size = RTE_MIN(get_socket_mem_size(socket) - memory[socket],
+					total_size);
+
+			/* Update sizes */
+			memory[socket] += default_size;
+			total_size -= default_size;
 		}
+	}
+
+	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
 		/* skips if the memory on specific socket wasn't requested */
 		for (i = 0; i < num_hp_info && memory[socket] != 0; i++){
 			hp_used[i].hugedir = hp_info[i].hugedir;
--
1.7.10.4

Acked-by: Venky Venkatesan
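A worked example of the ratio described in the commit message, for a hypothetical box with 6 configured cores on socket 0, 2 on socket 1, and -m 4096. The numbers are made up; the formula mirrors the per-socket share computed in calc_num_pages_per_socket() above, before it is capped by the hugepage memory actually present on each socket.

#include <stdio.h>

int main(void)
{
	/* Hypothetical configuration: -m 4096 with 6 cores on socket 0
	 * and 2 cores on socket 1 (8 configured cores in total). */
	unsigned long requested_mb = 4096;
	unsigned cpu_per_socket[2] = { 6, 2 };
	unsigned total_cores = 8;
	int socket;

	for (socket = 0; socket < 2; socket++) {
		/* Same ratio as the patch: requested * cores_on_socket / total_cores. */
		unsigned long share = requested_mb * cpu_per_socket[socket] / total_cores;
		printf("socket %d: %lu MB\n", socket, share);
		/* -> socket 0: 3072 MB, socket 1: 1024 MB */
	}
	return 0;
}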
[dpdk-dev] Heads up: Fedora packaging plans
Hey all-

This isn't really germane to dpdk development, but Thomas and Vincent, you
expressed interest in my progress regarding packaging of dpdk for Fedora, so
I figured I would post here in case others were interested. Please find my
current effort to do so here:

http://people.redhat.com/nhorman/dpdk-1.7.0-0.1.gitb20539d68.src.rpm

I've made some changes from the stock spec file included in dpdk:

* Modified the version and release values to be separate from the name. I did
  some reading on requirements for packaging and it seems we can be a bit more
  lax with the ABI version on a pre-release, I think, so I set up the N-V-R to
  use pre-release conventions, which makes sense, given that this is a 1.7.0
  pre-release. The git tag on the release value will get bumped as we move
  forward in the patch series.

* Added config files to match desired configs for Fedora (i.e. disabled PMDs
  that require out of tree kernel modules).

* Removed the Packager tag (Fedora doesn't use those).

* Moved the package target directories to include the N-V of the package in
  the path names. This allows multiple versions of the dpdk to be installed in
  parallel (i.e. dpdk-1.7.0 files are in /lib/dpdk-1.7.0,
  /usr/include/dpdk-1.7.0, etc). This is how java packages allow for multiple
  version installs, and it makes sense given the ABI instability in dpdk. It
  will require that developers add some -I / -L paths to their makefiles to
  pull in the proper version, but I think that's a fair tradeoff.

My plan is to go through the review process with this package, and update to
the tagged 1.7.0 as soon as it's ready.

Neil