[dpdk-dev] Testing vmdq sample application
Hi,

I am trying to run the vmdq sample application, but I am not sure how to test it. Can anyone please help with a detailed procedure for testing this sample application?
--
Regards
Ankit Batra
[dpdk-dev] TCP/IP stack for DPDK
Hi,

On Tue, Sep 09, 2014 at 09:09:11AM -0700, Jeff Shaw wrote:
> > You can find the code from the link: https://github.com/dpdk-net/netdp
>
> Hi zimeiw, when will you be posting the source code to github?
> I can only find a static lib and some header files.

It's BSD licensed, getting only the binary should be good enough for you. :-)

A.
[dpdk-dev] Testing vmdq sample application
Hi,

Firstly, compile the application:

1. Go to the examples directory:
   export RTE_SDK=/path/to/rte_sdk
   cd ${RTE_SDK}/examples/vmdq
2. Set the target (a default target is used if not specified). For example:
   export RTE_TARGET=x86_64-native-linuxapp-gcc
   See the Intel® DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
   make

Then run the application. To run the example in a linuxapp environment:

user@target:~$ ./build/vmdq_app -c f -n 4 -- --nb-pools 8

If you use a 1G NIC, 8 pools are available.
If you use a 10G NIC, 64 pools are available.

At last, send packets with a VLAN tag to select a pool. The VLAN tag and pool have the following mapping:

const uint16_t vlan_tags[] = {
	0,  1,  2,  3,  4,  5,  6,  7,  // It occupies pool 0 ~ pool 7, one for each
	8,  9,  10, 11, 12, 13, 14, 15, // It occupies pool 8 ~ pool 15, one for each
	16, 17, 18, 19, 20, 21, 22, 23, // It occupies pool 16 ~ pool 23, one for each
	24, 25, 26, 27, 28, 29, 30, 31, // It occupies pool 24 ~ pool 31, one for each
	32, 33, 34, 35, 36, 37, 38, 39, // It occupies pool 32 ~ pool 39, one for each
	40, 41, 42, 43, 44, 45, 46, 47, // It occupies pool 40 ~ pool 47, one for each
	48, 49, 50, 51, 52, 53, 54, 55, // It occupies pool 48 ~ pool 55, one for each
	56, 57, 58, 59, 60, 61, 62, 63, // It occupies pool 56 ~ pool 63, one for each
};

Thanks
Changchun

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of ANKIT BATRA
> Sent: Wednesday, September 10, 2014 3:23 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Testing vmdq sample application
>
> Hi,
>
> I am trying to run the vmdq sample application, but I am not sure how to
> test it. Can anyone please help with a detailed procedure for testing this
> sample application?
> --
> Regards
> Ankit Batra
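[Editor's note] For background on why the tag selects a pool, here is a condensed sketch of how such a vlan_tags[] table is typically wired into the port configuration. This is illustrative, not a verbatim copy of the sample; the helper name build_vmdq_conf is made up, and the 8-pool (1G NIC) case is assumed:

#include <string.h>
#include <rte_ethdev.h>

static const uint16_t vlan_tags[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
#define NUM_POOLS (sizeof(vlan_tags) / sizeof(vlan_tags[0]))

static void
build_vmdq_conf(struct rte_eth_conf *conf)
{
	struct rte_eth_vmdq_rx_conf *vmdq = &conf->rx_adv_conf.vmdq_rx_conf;
	unsigned i;

	memset(conf, 0, sizeof(*conf));
	conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;	/* pool chosen by VLAN tag */
	vmdq->nb_queue_pools = ETH_8_POOLS;		/* 8 pools on a 1G NIC */
	vmdq->enable_default_pool = 0;			/* untagged frames are dropped */
	vmdq->nb_pool_maps = NUM_POOLS;
	for (i = 0; i < NUM_POOLS; i++) {
		vmdq->pool_map[i].vlan_id = vlan_tags[i];
		vmdq->pool_map[i].pools = 1ULL << i;	/* vlan_tags[i] -> pool i */
	}
}

The resulting rte_eth_conf is then passed to rte_eth_dev_configure(); with this mapping, a frame tagged with VLAN 5 lands in pool 5's RX queues.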
[dpdk-dev] [PATCH] igb_ethdev.c: complete switches for i211 NC
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Sergey Mironov
> Sent: Thursday, September 4, 2014 4:35 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] igb_ethdev.c: complete switches for i211 NC
>
> Hi! I have got an update for my "i212 problem". First of all, I found that I
> have made a mistake. My controller is named i211, not i212 :) Next, the i211
> controller is handled by the lib_pmd_e1000 driver.
> Unfortunately, it looks like its support is poor. For example, igb_ethdev.c
> contains the function eth_igb_infos_get(), which should set the number of
> tx/rx queues supported by the hardware. It contains a huge [switch], but
> there is no i211 case! That is why I see zeros every time I try to start
> this NC. Also, there are a few other places which mention i210 but not i211.
> I've attached a patch which adds the necessary i211 support, but I didn't
> check it enough to say it is totally correct. For now I see that it is just
> able to send and receive some packets.
>
> Could you please review/correct it?
> ---
>  lib/librte_pmd_e1000/igb_ethdev.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_pmd_e1000/igb_ethdev.c
> b/lib/librte_pmd_e1000/igb_ethdev.c
> index f93a460..23e638d 100644
> --- a/lib/librte_pmd_e1000/igb_ethdev.c
> +++ b/lib/librte_pmd_e1000/igb_ethdev.c
> @@ -689,7 +689,8 @@ eth_igb_start(struct rte_eth_dev *dev)
>  	 * value of Write-Back Threshold registers.
>  	 */
>  	if ((hw->mac.type == e1000_82576) || (hw->mac.type == e1000_82580) ||
> -		(hw->mac.type == e1000_i350) || (hw->mac.type == e1000_i210)) {
> +		(hw->mac.type == e1000_i350) || (hw->mac.type == e1000_i210) ||
> +		(hw->mac.type == e1000_i211)) {
>  		uint32_t ivar;
>
>  		/* Enable all RX & TX queues in the IVAR registers */
> @@ -837,7 +838,7 @@ igb_get_rx_buffer_size(struct e1000_hw *hw)
>  		rx_buf_size = (E1000_READ_REG(hw, E1000_RXPBS) & 0xf);
>  		rx_buf_size = (uint32_t) e1000_rxpbs_adjust_82580(rx_buf_size);
>  		rx_buf_size = (rx_buf_size << 10);
> -	} else if (hw->mac.type == e1000_i210) {
> +	} else if (hw->mac.type == e1000_i210 || hw->mac.type == e1000_i211) {
>  		rx_buf_size = (E1000_READ_REG(hw, E1000_RXPBS) & 0x3f) << 10;
>  	} else {
>  		rx_buf_size = (E1000_READ_REG(hw, E1000_PBA) & 0xffff) << 10;
> @@ -1179,6 +1180,12 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
>  		dev_info->max_vmdq_pools = 0;
>  		break;
>
> +	case e1000_i211:
> +		dev_info->max_rx_queues = 2;
> +		dev_info->max_tx_queues = 2;
> +		dev_info->max_vmdq_pools = 0;
> +		break;
> +
>  	case e1000_vfadapt:
>  		dev_info->max_rx_queues = 2;
>  		dev_info->max_tx_queues = 2;
> --
> 1.8.4.3

Reviewed-by: Helin Zhang

It is really good to have this patch supporting i211! Thank you very much!

Regards,
Helin
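[Editor's note] For anyone hitting the same symptom ("zeros every time I try to start this NC"), a quick way to confirm whether the PMD now advertises the queues is to query the device info after EAL/port init; a minimal sketch (the helper name is illustrative, port 0 assumed):

#include <stdio.h>
#include <rte_ethdev.h>

static void
check_i211_queues(uint8_t port_id)
{
	struct rte_eth_dev_info info;

	/* Filled in by the PMD's infos_get callback, i.e. eth_igb_infos_get() */
	rte_eth_dev_info_get(port_id, &info);
	printf("port %u: max_rx_queues=%u max_tx_queues=%u\n",
	       port_id, (unsigned)info.max_rx_queues,
	       (unsigned)info.max_tx_queues);
}

With the patch applied, an i211 should report 2/2 instead of 0/0.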
[dpdk-dev] TCP/IP stack for DPDK
Hi,

Currently the netdp lib source code is still not open. But we could provide some hooks in netdp for users to handle packets specially if needed.

At 2014-09-10 05:49:22, "Aaro Koskinen" wrote:
>Hi,
>
>On Tue, Sep 09, 2014 at 09:09:11AM -0700, Jeff Shaw wrote:
>> > You can find the code from the link: https://github.com/dpdk-net/netdp
>>
>> Hi zimeiw, when will you be posting the source code to github?
>> I can only find a static lib and some header files.
>
>It's BSD licensed, getting only the binary should be good enough
>for you. :-)
>
>A.
[dpdk-dev] reg:porting intel dpdk kit as LINC switch
Hi All,

I want to port my Intel DPDK kit to a LINC switch. Could you help me with how I can do it?
--
Regards,
ANAND
[dpdk-dev] reg:porting intel dpdk kit as LINC switch
> Hi All,
> I want to port my Intel DPDK kit to a LINC switch. Could you help me with
> how I can do it?

* There are two open source software switches using DPDK. You could use them
  as a reference:
	https://github.com/openvswitch/ovs (look at lib/netdev-dpdk.c)
	https://github.com/01org/dpdk-ovs
* The Packet Framework in dpdk may also be a good starting point.
* At the very least you will probably want to modify the LINC dataplane to
  use dpdk PMDs.

One thing I am not sure about is how you would call DPDK functions from LINC, as it is written in Erlang. However, I am not familiar with Erlang.

Regards,

> --
> Regards,
> ANAND
[dpdk-dev] reg:porting intel dpdk kit as LINC switch
I think if he is using the OTP framework, it has some interfacing for other languages as well, but I am not sure how efficient it is. In order to keep up the performance, it is better to evaluate those interfaces first.

Thanks,
Rashmin

On Sep 10, 2014 1:07 AM, "Gray, Mark D" wrote:
> > Hi All,
> > I want to port my Intel DPDK kit to a LINC switch. Could you help me with
> > how I can do it?
>
> * There are two open source software switches using DPDK. You could use
>   them as a reference:
>	https://github.com/openvswitch/ovs (look at lib/netdev-dpdk.c)
>	https://github.com/01org/dpdk-ovs
> * The Packet Framework in dpdk may also be a good starting point.
> * At the very least you will probably want to modify the LINC dataplane to
>   use dpdk PMDs.
>
> One thing I am not sure about is how you would call DPDK functions from
> LINC, as it is written in Erlang. However, I am not familiar with Erlang.
>
> Regards,
>
> > --
> > Regards,
> > ANAND
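[Editor's note] To make the Erlang-to-DPDK question concrete, below is a minimal sketch of one such interface: a NIF (native implemented function) written in C that wraps a single DPDK call. The module name dpdk_nif and the function are invented for illustration; a real integration would also need EAL initialization and care with scheduler blocking, since long-running NIF calls stall the Erlang VM (dirty schedulers or a dedicated polling thread are the usual answers):

#include <erl_nif.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Erlang-callable: dpdk_nif:rx_burst_count(Port) -> non_neg_integer() */
static ERL_NIF_TERM
rx_burst_count(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
	unsigned int port;
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, i;

	if (argc != 1 || !enif_get_uint(env, argv[0], &port))
		return enif_make_badarg(env);

	/* Poll one burst from queue 0 and hand the count back to Erlang. */
	nb_rx = rte_eth_rx_burst((uint8_t)port, 0, bufs, BURST_SIZE);
	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(bufs[i]);

	return enif_make_uint(env, nb_rx);
}

static ErlNifFunc nif_funcs[] = {
	{"rx_burst_count", 1, rx_burst_count}
};

ERL_NIF_INIT(dpdk_nif, nif_funcs, NULL, NULL, NULL, NULL)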
[dpdk-dev] [PATCH 07/13] mbuf: use macros only to access the mbuf metadata
On Mon, Sep 08, 2014 at 02:05:41PM +0200, Olivier MATZ wrote:
> Hi Bruce,
>
> On 09/03/2014 05:49 PM, Bruce Richardson wrote:
> > Removed the explicit zero-sized metadata definition at the end of the
> > mbuf data structure. Updated the metadata macros to take account of this
> > change so that all existing code which uses those macros still works.
> >
> > Signed-off-by: Bruce Richardson
> > ---
> >  lib/librte_mbuf/rte_mbuf.h | 22 ++++++++--------------
> >  1 file changed, 8 insertions(+), 14 deletions(-)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 5260001..ca66d9a 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -166,31 +166,25 @@ struct rte_mbuf {
> >  	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
> >  	struct rte_mbuf *next;    /**< Next segment of scattered packet. */
> >
> > -	union {
> > -		uint8_t metadata[0];
> > -		uint16_t metadata16[0];
> > -		uint32_t metadata32[0];
> > -		uint64_t metadata64[0];
> > -	} __rte_cache_aligned;
> >  } __rte_cache_aligned;
> >
> >  #define RTE_MBUF_METADATA_UINT8(mbuf, offset) \
> > -	(mbuf->metadata[offset])
> > +	(((uint8_t *)&(mbuf)[1])[offset])
> >  #define RTE_MBUF_METADATA_UINT16(mbuf, offset) \
> > -	(mbuf->metadata16[offset/sizeof(uint16_t)])
> > +	(((uint16_t *)&(mbuf)[1])[offset/sizeof(uint16_t)])
> >  #define RTE_MBUF_METADATA_UINT32(mbuf, offset) \
> > -	(mbuf->metadata32[offset/sizeof(uint32_t)])
> > +	(((uint32_t *)&(mbuf)[1])[offset/sizeof(uint32_t)])
> >  #define RTE_MBUF_METADATA_UINT64(mbuf, offset) \
> > -	(mbuf->metadata64[offset/sizeof(uint64_t)])
> > +	(((uint64_t *)&(mbuf)[1])[offset/sizeof(uint64_t)])
> >
> >  #define RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset) \
> > -	(&mbuf->metadata[offset])
> > +	(&RTE_MBUF_METADATA_UINT8(mbuf, offset))
> >  #define RTE_MBUF_METADATA_UINT16_PTR(mbuf, offset) \
> > -	(&mbuf->metadata16[offset/sizeof(uint16_t)])
> > +	(&RTE_MBUF_METADATA_UINT16(mbuf, offset))
> >  #define RTE_MBUF_METADATA_UINT32_PTR(mbuf, offset) \
> > -	(&mbuf->metadata32[offset/sizeof(uint32_t)])
> > +	(&RTE_MBUF_METADATA_UINT32(mbuf, offset))
> >  #define RTE_MBUF_METADATA_UINT64_PTR(mbuf, offset) \
> > -	(&mbuf->metadata64[offset/sizeof(uint64_t)])
> > +	(&RTE_MBUF_METADATA_UINT64(mbuf, offset))
> >
> >  /**
> >   * Given the buf_addr returns the pointer to corresponding mbuf.
> >
>
> I think it goes in the good direction. So:
> Acked-by: Olivier Matz
>
> Just one question: why not remove the RTE_MBUF_METADATA*() macros?
> I'd just provide one macro that gives a (void*) to the first byte
> after the mbuf structure.
>
> The format of the metadata is up to the application, which usually
> casts (m + 1) into a private structure, making the macros not very
> useful. I suggest moving these macros outside rte_mbuf.h, into an
> application-specific or library-specific header. What do you think?
>

Things look to work if I just move the definitions en masse into rte_port.h. Is that the sort of change you were thinking of? I was wondering about replacing the typed macros here with a generic one to access just beyond the definition of the mbuf structure, but on further thought, I believe that using the buf_addr pointer in the mbuf data structure is probably enough for most applications. [An alternative to moving the definitions into rte_port.h is to move them into rte_table.h and having port_frag.c use the buf_addr pointer instead of a macro to get at the metadata. All other references to the macros, apart from two in that port file, are in the tables or in apps that use the tables lib.]

What do you think?

/Bruce
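[Editor's note] For concreteness, a minimal sketch of the single generic macro Olivier describes: one (void *) to the first byte after the mbuf, with the application casting it to its own private type. The RTE_MBUF_METADATA name and the my_pkt_meta layout are illustrative, not part of the patch:

#include <stdint.h>
#include <rte_mbuf.h>

/* Generic accessor: metadata lives immediately after struct rte_mbuf. */
#define RTE_MBUF_METADATA(m) ((void *)((m) + 1))

/* Hypothetical application-private metadata layout. */
struct my_pkt_meta {
	uint32_t flow_id;
	uint32_t color;
};

static inline struct my_pkt_meta *
pkt_meta(struct rte_mbuf *m)
{
	return (struct my_pkt_meta *)RTE_MBUF_METADATA(m);
}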
[dpdk-dev] [PATCH 07/13] mbuf: use macros only to access the mbuf metadata
Hi Bruce,

On 09/10/2014 05:09 PM, Bruce Richardson wrote:
>> Just one question: why not remove the RTE_MBUF_METADATA*() macros?
>> I'd just provide one macro that gives a (void*) to the first byte
>> after the mbuf structure.
>>
>> The format of the metadata is up to the application, which usually
>> casts (m + 1) into a private structure, making the macros not very
>> useful. I suggest moving these macros outside rte_mbuf.h, into an
>> application-specific or library-specific header. What do you think?
>>
>
> Things look to work if I just move the definitions en masse into
> rte_port.h. Is that the sort of change you were thinking of?
> I was wondering about replacing the typed macros here with a
> generic one to access just beyond the definition of the mbuf
> structure, but on further thought, I believe that using the
> buf_addr pointer in the mbuf data structure is probably enough
> for most applications. [An alternative to moving the definitions
> into rte_port.h is to move them into rte_table.h and having
> port_frag.c use the buf_addr pointer instead of a macro to get at
> the metadata. All other references to the macros, apart from two
> in that port file, are in the tables or in apps that use the
> tables lib.]
> What do you think?

Yes, moving the macros into rte_port.h looks good to me, since the libraries using rte_ports are the users of this specific metadata format.

Thanks!
Olivier
[dpdk-dev] Testing vmdq sample application
Is your traffic VLAN tagged? I think vmdq_app has "conf.enable_default_pool = 0;", so untagged traffic will be dropped.

Thanks,
Weichun

On Wed, Sep 10, 2014 at 12:00 PM, ANKIT BATRA wrote:
> Hi,
>
> I have started the application on my host machine. From another terminal on
> the host machine I ran "sudo killall -HUP vmdq_dcb_app" and sent packets to
> the NIC card from another machine. But on the terminal where the vmdq
> application is running, I see that no packets are coming; all rows and
> columns are 0. And where will the VMs come into the picture here for
> testing this? Please suggest and correct me if I am doing anything
> incorrect.
>
> On Wed, Sep 10, 2014 at 6:24 AM, Ouyang, Changchun <
> changchun.ouyang@intel.com> wrote:
>
>> Hi
>>
>> Firstly, compile the application:
>> 1. Go to the examples directory:
>> export RTE_SDK=/path/to/rte_sdk
>> cd ${RTE_SDK}/examples/vmdq
>> 2. Set the target (a default target is used if not specified). For example:
>> export RTE_TARGET=x86_64-native-linuxapp-gcc
>> See the Intel® DPDK Getting Started Guide for possible RTE_TARGET values.
>> 3. Build the application:
>> make
>>
>> Then run the application. To run the example in a linuxapp environment:
>> user@target:~$ ./build/vmdq_app -c f -n 4 -- --nb-pools 8
>>
>> If you use a 1G NIC, 8 pools are available.
>> If you use a 10G NIC, 64 pools are available.
>>
>> At last, send packets with a VLAN tag to select a pool.
>>
>> The VLAN tag and pool have the following mapping:
>> const uint16_t vlan_tags[] = {
>>         0,  1,  2,  3,  4,  5,  6,  7,  // It occupies pool 0 ~ pool 7, one for each
>>         8,  9,  10, 11, 12, 13, 14, 15, // It occupies pool 8 ~ pool 15, one for each
>>         16, 17, 18, 19, 20, 21, 22, 23, // It occupies pool 16 ~ pool 23, one for each
>>         24, 25, 26, 27, 28, 29, 30, 31, // It occupies pool 24 ~ pool 31, one for each
>>         32, 33, 34, 35, 36, 37, 38, 39, // It occupies pool 32 ~ pool 39, one for each
>>         40, 41, 42, 43, 44, 45, 46, 47, // It occupies pool 40 ~ pool 47, one for each
>>         48, 49, 50, 51, 52, 53, 54, 55, // It occupies pool 48 ~ pool 55, one for each
>>         56, 57, 58, 59, 60, 61, 62, 63, // It occupies pool 56 ~ pool 63, one for each
>> };
>>
>> Thanks
>> Changchun
>>
>> > -----Original Message-----
>> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of ANKIT BATRA
>> > Sent: Wednesday, September 10, 2014 3:23 AM
>> > To: dev@dpdk.org
>> > Subject: [dpdk-dev] Testing vmdq sample application
>> >
>> > Hi,
>> >
>> > I am trying to run the vmdq sample application, but I am not sure how to
>> > test it. Can anyone please help with a detailed procedure for testing
>> > this sample application?
>> > --
>> > Regards
>> > Ankit Batra
>
> --
> Regards
> Ankit Batra
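[Editor's note] If the missing VLAN tag turns out to be the problem but you still want untagged frames to be counted, a minimal sketch of the relevant knobs (field names are from struct rte_eth_conf; pool 0 is chosen arbitrarily, and the helper name is made up):

#include <rte_ethdev.h>

static void
accept_untagged(struct rte_eth_conf *conf)
{
	/* Route frames with no VLAN tag to pool 0 instead of dropping them. */
	conf->rx_adv_conf.vmdq_rx_conf.enable_default_pool = 1;
	conf->rx_adv_conf.vmdq_rx_conf.default_pool = 0;
}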
[dpdk-dev] [PATCH] Fix librte_pmd_pcap driver double stop error
From: Nicolás Pernas Maradei

The librte_pmd_pcap driver was opening the pcaps/interfaces only at init time and closing them only when the port was being stopped. This behaviour would cause problems (leading to segfault) if the user closed the port 2 times. The first time, the pcaps/interfaces would be closed normally, but libpcap would throw an error causing a segfault if the closed pcaps/interfaces were closed again. This is solved by re-opening the pcaps/interfaces when the port is started (only if they weren't open already, for example at init time).

Signed-off-by: Nicolás Pernas Maradei
---
 lib/librte_pmd_pcap/rte_eth_pcap.c | 254 ++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 202 insertions(+), 52 deletions(-)

diff --git a/lib/librte_pmd_pcap/rte_eth_pcap.c b/lib/librte_pmd_pcap/rte_eth_pcap.c
index eebe768..f4d501d 100644
--- a/lib/librte_pmd_pcap/rte_eth_pcap.c
+++ b/lib/librte_pmd_pcap/rte_eth_pcap.c
@@ -66,6 +66,8 @@ struct pcap_rx_queue {
 	struct rte_mempool *mb_pool;
 	volatile unsigned long rx_pkts;
 	volatile unsigned long err_pkts;
+	const char *name;
+	const char *type;
 };
 
 struct pcap_tx_queue {
@@ -73,17 +75,23 @@
 	pcap_t *pcap;
 	volatile unsigned long tx_pkts;
 	volatile unsigned long err_pkts;
+	const char *name;
+	const char *type;
 };
 
 struct rx_pcaps {
 	unsigned num_of_rx;
 	pcap_t *pcaps[RTE_PMD_RING_MAX_RX_RINGS];
+	const char *names[RTE_PMD_RING_MAX_RX_RINGS];
+	const char *types[RTE_PMD_RING_MAX_RX_RINGS];
 };
 
 struct tx_pcaps {
 	unsigned num_of_tx;
 	pcap_dumper_t *dumpers[RTE_PMD_RING_MAX_TX_RINGS];
 	pcap_t *pcaps[RTE_PMD_RING_MAX_RX_RINGS];
+	const char *names[RTE_PMD_RING_MAX_RX_RINGS];
+	const char *types[RTE_PMD_RING_MAX_RX_RINGS];
 };
 
 struct pmd_internals {
@@ -105,6 +113,10 @@ const char *valid_arguments[] = {
 	NULL
 };
 
+static int open_single_tx_pcap(const char *pcap_filename, pcap_dumper_t **dumper);
+static int open_single_rx_pcap(const char *pcap_filename, pcap_t **pcap);
+static int open_single_iface(const char *iface, pcap_t **pcap);
+
 static struct ether_addr eth_addr = { .addr_bytes = { 0, 0, 0, 0x1, 0x2, 0x3 } };
 static const char *drivername = "Pcap PMD";
 static struct rte_eth_link pmd_link = {
@@ -253,6 +265,59 @@ eth_pcap_tx(void *queue,
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
+	unsigned i;
+	struct pmd_internals *internals = dev->data->dev_private;
+	unsigned nb_rxq = internals->nb_rx_queues;
+	unsigned nb_txq = internals->nb_tx_queues;
+	struct pcap_tx_queue *tx;
+	struct pcap_rx_queue *rx;
+
+	/* Special iface case. Single pcap is open and shared between tx/rx. */
+	if (nb_rxq == nb_txq && nb_rxq == 1) {
+		tx = &internals->tx_queue[0];
+		rx = &internals->rx_queue[0];
+
+		if (!tx->pcap && strcmp(tx->type, ETH_PCAP_IFACE_ARG) == 0) {
+			if (open_single_iface(tx->name, &tx->pcap) < 0)
+				return -1;
+			rx->pcap = tx->pcap;
+			return 0;
+		}
+	}
+
+	/* If not open already, open tx pcaps/dumpers */
+	for (i = 0; i < internals->nb_tx_queues; i++) {
+		tx = &internals->tx_queue[i];
+
+		if (!tx->dumper && strcmp(tx->type, ETH_PCAP_TX_PCAP_ARG) == 0) {
+			if (open_single_tx_pcap(tx->name, &tx->dumper) < 0)
+				return -1;
+		}
+
+		else if (!tx->pcap && strcmp(tx->type, ETH_PCAP_TX_IFACE_ARG) == 0) {
+			if (open_single_iface(tx->name, &tx->pcap) < 0)
+				return -1;
+		}
+	}
+
+	/* If not open already, open rx pcaps */
+	for (i = 0; i < internals->nb_rx_queues; i++) {
+		rx = &internals->rx_queue[i];
+
+		if (rx->pcap != NULL)
+			continue;
+
+		if (strcmp(rx->type, ETH_PCAP_RX_PCAP_ARG) == 0) {
+			if (open_single_rx_pcap(rx->name, &rx->pcap) < 0)
+				return -1;
+		}
+
+		else if (strcmp(rx->type, ETH_PCAP_RX_IFACE_ARG) == 0) {
+			if (open_single_iface(rx->name, &rx->pcap) < 0)
+				return -1;
+		}
+	}
+
 	dev->data->dev_link.link_status = 1;
 	return 0;
 }
@@ -266,17 +331,45 @@ static void
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	unsigned i;
-	pcap_dumper_t *dumper;
-	pcap_t *pcap;
 	struct pmd_internals *internals = dev->data->dev_private;
+	struct pcap_tx_queue *tx;
+	struct pcap_rx_queue *rx;
+	unsigned nb_rxq = internals->nb_rx_queues;
+	unsigned nb_txq = internals->nb_tx_queues;
+
+	/* Special iface case. Single pcap is
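[Editor's note] To illustrate the failure mode the commit message describes, a minimal sketch of the triggering call sequence (API names are the standard ethdev ones; whether the second stop actually reaches the PMD depends on the ethdev layer of your DPDK version):

#include <stdio.h>
#include <rte_ethdev.h>

static void
stop_twice_then_start(uint8_t port_id)
{
	rte_eth_dev_stop(port_id);	/* closes the pcaps/dumpers */
	rte_eth_dev_stop(port_id);	/* pre-fix: second close of the same handles -> segfault */

	if (rte_eth_dev_start(port_id) != 0)	/* post-fix: handles re-opened here */
		printf("failed to restart port %u\n", port_id);
}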
[dpdk-dev] l2fwd does not send packets
Hi,

The l2fwd sample application in my environment does not send packets through the TX port. I run DPDK inside a KVM guest. The NIC ports are VFs assigned to the VM by PCI passthrough.

Environment:
Host OS: Ubuntu 14.04 x86_64
NIC: Intel X540-T1
Guest OS: Ubuntu 14.04 x86_64
DPDK: v1.7.0

Some findings:
1. l2fwd reports 511 packets sent when the max tx descriptor count is 512. The number changes to 1023 if the max tx descriptor count is set to 1024.
2. On the receiver side, no packets are captured.

Does anyone know the issue and the corresponding fix? Thanks.

Best,
Xin
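[Editor's note] The 511/512 and 1023/1024 pattern suggests the mbufs are enqueued until the TX ring fills (a 512-descriptor ring accepts 511 entries) but the hardware never drains it. Not a diagnosis, but a small sketch that often helps narrow this down: check the VF's link state and TX counters after forwarding has run for a while (the helper name is illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_port_state(uint8_t port_id)
{
	struct rte_eth_link link;
	struct rte_eth_stats stats;

	rte_eth_link_get_nowait(port_id, &link);	/* non-blocking link query */
	rte_eth_stats_get(port_id, &stats);

	printf("port %u: link %s, %d Mbps\n", port_id,
	       link.link_status ? "up" : "down", (int)link.link_speed);
	printf("  opackets=%" PRIu64 " oerrors=%" PRIu64 "\n",
	       stats.opackets, stats.oerrors);
}

If opackets never grows past the ring size while oerrors stays at 0, the descriptors are queued but never completed; on an X540 VF that is a hint to look at the PF/VF mailbox state and whether the VF was fully reset, not a confirmed root cause.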
[dpdk-dev] dpdk starting issue with descending virtual address allocation in new kernel
Hi All,

We have a kernel config question to consult you on. DPDK fails to start due to an mbuf creation issue with the new kernel 3.14.17 + grsecurity patches. We tried to trace down the issue: it seems that the virtual addresses of the huge pages are allocated from high to low by the kernel, whereas DPDK expects them to go from low to high in order to treat them as consecutive. See the dumped virtual addresses below: the first is 0x71042140, then 0x71042120, where previously it would be 0x71042120 first, then 0x71042140. But they are still consecutive.

Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 00:0c:29:b3:30:db
Create: Default RX 0:0 - Memory used (MBUFs 4096 x (size 1984 + Hdr 64)) + 790720 = 8965 KB

Zone 0: name:, phys:0x6ac0, len:0x2080, virt:0x71042140, socket_id:0, flags:0
Zone 1: name:, phys:0x6ac02080, len:0x1d10c0, virt:0x710421402080, socket_id:0, flags:0
Zone 2: name:, phys:0x6ae0, len:0x16, virt:0x71042120, socket_id:0, flags:0
Zone 3: name:, phys:0x6add3140, len:0x11a00, virt:0x7104215d3140, socket_id:0, flags:0
Zone 4: name:, phys:0x6ade4b40, len:0x300, virt:0x7104215e4b40, socket_id:0, flags:0
Zone 5: name:, phys:0x6ade4e80, len:0x200, virt:0x7104215e4e80, socket_id:0, flags:0
Zone 6: name:, phys:0x6ade5080, len:0x10080, virt:0x7104215e5080, socket_id:0, flags:0
Segment 0: phys:0x6ac0, len:2097152, virt:0x71042140, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x6ae0, len:2097152, virt:0x71042120, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x6b00, len:2097152, virt:0x71042100, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6b20, len:2097152, virt:0x710420e0, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x6b40, len:2097152, virt:0x710420c0, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x6b60, len:2097152, virt:0x710420a0, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 6: phys:0x6b80, len:2097152, virt:0x71042080, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 7: phys:0x6ba0, len:2097152, virt:0x71042060, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 8: phys:0x6bc0, len:2097152, virt:0x71042040, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 9: phys:0x6be0, len:2097152, virt:0x71042020, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0

Related DPDK code is in dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c :: rte_eal_hugepage_init():

for (i = 0; i < nr_hugefiles; i++) {
	new_memseg = 0;

	/* if this is a new section, create a new memseg */
	if (i == 0)
		new_memseg = 1;
	else if (hugepage[i].socket_id != hugepage[i-1].socket_id)
		new_memseg = 1;
	else if (hugepage[i].size != hugepage[i-1].size)
		new_memseg = 1;
	else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
	    hugepage[i].size)
		new_memseg = 1;
	else if (((unsigned long)hugepage[i].final_va -
	    (unsigned long)hugepage[i-1].final_va) != hugepage[i].size) {
		new_memseg = 1;
	}

Is this a known issue? Is there any workaround? Or could you advise which kernel config may relate to this kernel behavior change?

Thanks,
Michael
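[Editor's note] As a quick way to confirm the allocation direction outside of DPDK, here is a minimal standalone sketch (assumes 2 MB hugepages are reserved via /proc/sys/vm/nr_hugepages and that the kernel exposes MAP_HUGETLB):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t sz = 2UL << 20;	/* one 2 MB hugepage */
	void *a = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	void *b = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (a == MAP_FAILED || b == MAP_FAILED) {
		perror("mmap");	/* e.g. no hugepages reserved */
		return 1;
	}
	printf("first map:  %p\nsecond map: %p\n", a, b);
	printf("kernel hands out %s virtual addresses\n",
	       (char *)b > (char *)a ? "ascending" : "descending");
	return 0;
}

If this prints descending on the grsecurity kernel but ascending on a stock one, the difference most likely comes from the hardened mmap layout (e.g. PaX randomization / top-down allocation), and the adjacency check in eal_memory.c above will then split every hugepage into its own memseg, leaving no contiguous zone large enough for the mbuf pool.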