We just don't have IPFIX collectors set up right now. We have a system for
syslog and I was hoping to integrate VPP SNAT into that as well.
Thanks for the quick response.
On Thu, Jun 22, 2017 at 2:50 PM, Ole Troan wrote:
> Tell me more.
>
> Ole
>
> On 22 Jun 2017, at 23:37, Matt Paska wrote:
>
Dear VPP community,
Today is RC0 day, hooray ☺
The release branch for 17.07 has now been pulled: stable/1707, and tags laid.
From this point forward, up until the release date on July 19th, we need to be
disciplined with respect to bugfix commits. Here are a few common-sense
suggestions.
Tell me more.
Ole
> On 22 Jun 2017, at 23:37, Matt Paska wrote:
>
> Yes, we do need it. What's the current plan?
>
>> On Thu, Jun 22, 2017 at 1:05 PM, Ole Troan wrote:
>> Matt,
>>
>> We will not have it in 17.07.
>> We have IPFIX support now, as well as deterministic NAT.
>>
>> Do you need it?
Yes, we do need it. What's the current plan?
On Thu, Jun 22, 2017 at 1:05 PM, Ole Troan wrote:
> Matt,
>
> We will not have it in 17.07.
> We have IPFIX support now, as well as deterministic NAT.
>
> Do you need it?
>
> Cheers
> Ole
>
> On 22 Jun 2017, at 21:33, Matt Paska wrote:
>
> Hi,
>
> Is SNAT syslog-based logging support still planned for 17.07?
Dear Sergio,
That was enough of a hint. We weren't calling rte_eth_tx_prepare(...) at all.
It's decidedly not free if you're not using h/w offload. Once I added a call to
it, the problem disappeared...
Thanks... Dave
From: Sergio Gonzalez Monroy [mailto:sergio.gonzalez.mon...@intel.com]
Sent:
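For anyone tracing the same symptom: a minimal sketch of where rte_eth_tx_prepare() sits in a transmit path (DPDK 17.x API names; port, queue, pkts and n_pkts are assumed to come from the surrounding driver code):

```c
/* Run tx_prepare over the burst before tx_burst whenever offload flags
 * may be set; some PMDs (i40e among them) depend on it to fix up
 * per-packet state such as the checksum seed. */
uint16_t n_ok = rte_eth_tx_prepare (port, queue, pkts, n_pkts);
if (n_ok < n_pkts)
  {
    /* pkts[n_ok] was rejected; rte_errno holds the reason. */
  }
uint16_t n_tx = rte_eth_tx_burst (port, queue, pkts, n_ok);
```

For PMDs that need no fix-ups the prepare callback is a NULL pointer and the call costs almost nothing, which is why always calling it is the recommended pattern.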
Matt,
We will not have it in 17.07.
We have IPFIX support now, as well as deterministic NAT.
Do you need it?
Cheers
Ole
> On 22 Jun 2017, at 21:33, Matt Paska wrote:
>
> Hi,
>
> Is SNAT syslog-based logging support still planned for 17.07? I see it
> mentioned on the release plan
> wiki(htt
Hi,
Is SNAT syslog-based logging support still planned for 17.07? I see it
mentioned on the release plan wiki
(https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_17.07) but
not on the SNAT work list (https://wiki.fd.io/view/VPP/SNAT).
Thanks
On Tue, May 23, 2017 at 10:45 PM, Matus Fa
I have to admit that I did not know about this API until I saw your
email and had a chat with some folks about it.
From my understanding, not all PMDs need such "pseudo-header", so the
DPDK approach would be to always call the API, then PMDs that do not
need it would have NULL function pointer
Dear Sergio,
Thanks for confirming that the packet_type isn't interesting. I thought so,
but...
As the code stands, I wasn't computing any TCP checksum at all. Does the i40e
hardware / PMD expect tcp->checksum = <the pseudo-header checksum>? Do other
PMDs expect the same?
Thanks... Dave
From: Sergio Gonzalez Monroy [ma
On 22/06/2017 16:11, Dave Barach (dbarach) wrote:
Folks,
I'm having a hard time trying to convince an Intel Fortville (i40e)
PMD / NIC to compute and insert correct TCP checksums.
In addition to the typical struct rte_mbuf setup, I add the following:
if (b->flags & VLIB_BUFFER_TCP_CHECKSUM_OFFLOAD)
Folks,
I'm having a hard time trying to convince an Intel Fortville (i40e) PMD / NIC
to compute and insert correct TCP checksums.
In addition to the typical struct rte_mbuf setup, I add the following:
if (b->flags & VLIB_BUFFER_TCP_CHECKSUM_OFFLOAD)
{
mb->packet_type =
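For comparison, the usual rte_mbuf fields involved in TCP-over-IPv4 checksum offload in DPDK of that vintage are the offload flags and the header lengths (a sketch, not VPP's actual code; the pre-18.x PKT_TX_* flag names are used, and the length values are assumptions for an untagged, option-free packet):

```c
/* Fragment: mb is a struct rte_mbuf *. */
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
mb->l2_len = 14;   /* untagged Ethernet header */
mb->l3_len = 20;   /* IPv4 header without options */
```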
Hi,
> On 21 Jun 2017, at 21:47, Ole Troan wrote:
>
> In interesting news, Sander Steffann, the author of DHCPKit, has integrated
> his software with VPP.
/me waves
> http://dhcpkit-vpp.readthedocs.io/en/stable/
> http://dhcpkit.readthedocs.io/en/stable/
>
> It depends
The general case for hashing keys larger than the machine word size is to use
hash_get_mem and friends instead of hash_get. You're then hashing the contents
of a memory object, which can be of any length. src/vnet/vxlan/ makes use of
this in the IPv6 case; at some point, for 32-bit compatibility
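In vppinfra terms, the mem-key variant looks roughly like this (a sketch; demo_key_t and the allocation pattern are illustrative, not real VPP types, and note the table keeps a pointer to the key, so the key storage must outlive the entry):

```c
typedef struct
{
  ip6_address_t src, dst;   /* more than one machine word of key */
  u32 vni;
} demo_key_t;               /* illustrative, not a real VPP type */

uword *h = hash_create_mem (0, sizeof (demo_key_t), sizeof (uword));

demo_key_t *k = clib_mem_alloc (sizeof (*k));
/* ... fill *k ... */
hash_set_mem (h, k, value);        /* hashes the key's bytes        */
uword *p = hash_get_mem (h, k);    /* works on 32- and 64-bit hosts */
```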
Hi,
hash_create and hash_get use a u64 as the key on a 64-bit system.
On a 32-bit system the key is cast to a uword, so the high 32 bits of
information are dropped.
gm->tunnel_by_key4 = hash_create (0, sizeof (uword));
hash_set (gm->tunnel_by_key4, key4, t - gm->tunnels);
p = hash_get (gm->tunnel_by_key4, key4);
Dear Zhangpan,
This behavior is 100% attributable to the low-vector-rate epoll_pwait(...) call
in .../src/vlib/unix/input.c. If the vector rate is less than 2, vpp currently
sleeps for 10ms between polls. We can discuss how to parameterize and/or change
this behavior.
HTH... Dave
From: vpp-de
Choonon,
> After updating the configuration as below, I can see flow information:
>
> flowprobe params record l3
> flowprobe add-feature TengigabitEthernet4/0/0 l2
>
> One more question:
> the flow is outbound traffic. Can I collect inbound traffic?
It is currently only an output feature.
give
Hello Ole,
After updating the configuration as below, I can see flow information:
flowprobe params record l3
flowprobe add-feature TengigabitEthernet4/0/0 l2
One more question:
The flow is outbound traffic. Can I collect inbound traffic?
Thanks,
Choonho Son
On Thu, Jun 22, 2017 at 5:31 PM, Ole Troan wrote:
Hi Choonho,
Specify l2 in the feature add command. That determines where the IPFIX
monitoring is done (in l2 bridging or ip4/ip6 forwarding).
The params command selects what information to collect.
Cheers
Ole
> On 22 Jun 2017, at 03:07, Choonho Son wrote:
>
> Hi All,
>
> I want to collect flow inf
1 main thread + 2 worker threads
-- Original --
From: "wang.hui56";
Date: Thu, Jun 22, 2017 03:06 PM
To: "fortitude.zhang";
Cc: "leiyanzhang";
"vpp-dev";
Subject: Reply: Re: [vpp-dev] Reply: [vpp-dev] delay is error in ping with multi
worker thread
stage2, how many worker threads do you have?
stage2, how many worker threads do you have?
I remember it was normal with 1 main thread + 1 worker thread.
Sent from my zMail
Original mail
From: 张东亚
To: 雷彦章
Cc: 王辉10067165, vpp-dev
Date: 2017-06-22 11:09:18
Subject: Re: [vpp-dev] Reply: [vpp-dev] delay is error in ping with