On 04/11/2019 06:48 PM, Paul Vinciguerra wrote:
I looked at this a while back. The spec file doesn’t pick up the
packages from the makefile. I can look at it in a few hours if you
like. Let me know.
I had intended to import the new packages from EPEL into CentOS. The EPEL
dependency causes problems…
I looked at this a while back. The spec file doesn’t pick up the packages from
the makefile. I can look at it in a few hours if you like. Let me know.
> On Apr 11, 2019, at 6:25 PM, Thomas F Herbert wrote:
>
> Dave and Ed,
>
> The Makefile has dependencies that work. However, you will need e
Dave and Ed,
The Makefile has dependencies that work. However, you will need epel.
The packages required are:
yum install epel-release
yum install mbedtls
yum install python36
--Tom
On 04/11/2019 05:11 PM, Florin Coras wrote:
We do have it as a dependency because we want to make sure the plugin still
builds. However, given that most of those who use tls use the openssl engine,
we might consider not having mbedtls as a dependency for the packages that we
push to packagecloud ...
On a separate note, what centos do we
On Apr 11, 2019, at 2:22 PM, Florin Coras
<fcoras.li...@gmail.com> wrote:
Are we building rpms on a host that has mbedtls installed?
Yes, the CentOS container has mbedtls installed… it has for quite some time,
since it is a listed prerequisite straight out of the Makefile.
Ed
On Thu, 11 Apr 2019 10:24:45 -0700
"Stephen Hemminger via Lists.Fd.Io"
wrote:
> On Thu, 11 Apr 2019 09:30:12 +
> "Benoit Ganne (bganne)" wrote:
>
> > Hi Stephen,
> >
> > > The rdma-core stuff is likely to be a problem on Azure. The Mellanox
> > > device is always hidden as a secondary d
Are we building rpms on a host that has mbedtls installed? Just uninstalling it
should disable the tlsmbedtls plugin, which is what we pretty much want.
Florin
> On Apr 11, 2019, at 1:00 PM, Dave Wallace wrote:
>
> Tom/Billy,
>
> Do you know what the current status is with the VPP rpm's on
Tom/Billy,
Do you know what the current status is with the VPP RPMs on master/19.04?
I'm trying to install the VPP 19.04 packages from packagecloud.io on a
centos7 Vagrant/virtualbox VM (using .../vpp/extras/vagrant/Vagrantfile)
to verify that the packages are correct. The 19.01 packages ins
On Thu, 11 Apr 2019 09:30:12 +
"Benoit Ganne (bganne)" wrote:
> Hi Stephen,
>
> > The rdma-core stuff is likely to be a problem on Azure. The Mellanox
> > device is always hidden as a secondary device behind a synthetic virtual
> > device based on VMBus. There are two DPDK different ways thi
Hi Satish,
From the error counters below, it seems that ip local is not the one dropping
the packets. Also, if the packet is not TCP or UDP, the checksum should not be
computed, as per ip4_local_check_l4_csum [1]. Additionally, packets that fail
the source ip check should be dropped with IP4_ERROR_SPOOFED_LOC
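The skip-for-non-TCP/UDP behavior described above can be sketched in Python. This is a hypothetical illustration, not VPP's actual C code in ip4_local_check_l4_csum; the function names here are made up for the example:

```python
# Hypothetical Python illustration (not VPP's C code): mirrors the idea
# that the L4 checksum is only verified for TCP and UDP payloads; other
# protocols (e.g. GRE 47, ICMP 1) skip the check entirely.

IP_PROTO_TCP = 6
IP_PROTO_UDP = 17

def needs_l4_checksum_check(ip_proto: int) -> bool:
    """Only TCP and UDP payloads get their L4 checksum verified."""
    return ip_proto in (IP_PROTO_TCP, IP_PROTO_UDP)

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF
```

So a GRE-encapsulated packet (protocol 47) would never reach the checksum verification step in the first place.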
Hi VPP-Dev,
We recently moved to 1810 from 1801 and found the following issue.
Is this a known issue, or can anyone help with what's happening here?
1. We are using memif in IP mode.
2. Some Control plane process sends message over MEMIF to VPP process ( memif
mode = IP )
3. We are using a custom ip-p
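For context, a memif interface in IP mode (as in steps 1–2 above) is typically created on the VPP side with CLI along these lines; the socket path, IDs, and address below are illustrative, not taken from the report:

```
create memif socket id 1 filename /run/vpp/memif.sock
create interface memif socket-id 1 id 0 master mode ip
set interface state memif1/0 up
set interface ip address memif1/0 192.0.2.1/24
```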
> On 11 Apr 2019, at 13:57, Mohamed Mohamed via Lists.Fd.Io
> wrote:
>
> Hi All:
>
> Today DPDK bind script will name the interfaces based on their PCI address
> so for example if I have the following 2 interfaces
> 00:08.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
>
Hi All:
Today the DPDK bind script names the interfaces based on their PCI address,
so for example if I have the following 2 interfaces
00:08.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
Controller (rev 02)
00:09.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethern
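To make the naming scheme concrete, here is a hypothetical sketch of deriving a deterministic name from a PCI address, similar in spirit to how VPP names DPDK interfaces (e.g. 00:08.0 becomes GigabitEthernet0/8/0). This is not the actual bind-script logic, and the prefix is illustrative:

```python
# Hypothetical helper: turn a PCI address (optionally with a domain,
# e.g. "0000:00:08.0") into a VPP-style interface name. Illustrative
# only; real naming also depends on the device model and driver.
import re

_PCI_RE = re.compile(r"(?:[0-9a-fA-F]{4}:)?([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])")

def pci_to_name(pci: str, prefix: str = "GigabitEthernet") -> str:
    m = _PCI_RE.fullmatch(pci)
    if m is None:
        raise ValueError(f"not a PCI address: {pci!r}")
    bus, slot, func = (int(g, 16) for g in m.groups())
    return f"{prefix}{bus:x}/{slot:x}/{func:x}"
```

Because the name is a pure function of the PCI address, it stays stable across reboots as long as the device keeps its slot.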
Can VPP and a non-VPP Linux process share a single database (say, via shared memory)?
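As a generic illustration of the idea (this is not a VPP API; VPP has its own shared-memory segments such as the binary API region), any two Linux processes can share a plain named region. The segment name "vpp_demo_db" is made up for the example:

```python
# Generic POSIX-style shared-memory illustration, not VPP-specific.
# One process creates a named segment; another attaches by name.
from multiprocessing import shared_memory

# "Writer" side: create a named segment and store some bytes.
writer = shared_memory.SharedMemory(name="vpp_demo_db", create=True, size=64)
writer.buf[:5] = b"hello"

# "Reader" side (could be a different process): attach by name and read.
reader = shared_memory.SharedMemory(name="vpp_demo_db")
data = bytes(reader.buf[:5])

reader.close()
writer.close()
writer.unlink()  # remove the segment once both sides are done
```

On the VPP side the equivalent would be a C plugin mapping the same region; synchronization (locks or lock-free structures) is the hard part and is left out here.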
[Edited Message Follows]
Hi VPP-Dev,
We recently moved to 1810 from 1801 and found the following issue.
Is this a known issue, or can anyone help with what's happening here?
1. We are using memif in IP mode.
2. Some Control plane process sends message over MEMIF to VPP process ( memif
mode = IP )
3. We
Hi Stephen,
> The rdma-core stuff is likely to be a problem on Azure. The Mellanox
> device is always hidden as a secondary device behind a synthetic virtual
device based on VMBus. There are two different ways this is used in DPDK. One
> is with vdev_netvsc/failsafe/tap and the other is with netvsc
There is another physical port bridged to loop1, which is on the 192.168.15.0/24
network. The packets coming inside the GRE tunnel are for the 192.168.15.0/24
network.
Also, I just want to understand why SNAT is blocked when forwarding is
enabled. Someone might have a requirement to SNAT first and then do
Shahid,
Right, so the GRE packets shouldn’t go through the NAT at all.
Is the GRE tunnel itself marked as inside?
I should have thought this was supported with https://jira.fd.io/browse/VPP-447
Let me see if I can reproduce.
Best regards,
Ole
> On 10 Apr 2019, at 12:55, Shahid Khan wrote:
>
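For anyone trying to reproduce this, marking the GRE tunnel as a NAT inside interface can be sketched with vppctl roughly as follows; the addresses and the outside interface name are illustrative, and this assumes the nat44 plugin CLI:

```
create gre tunnel src 198.51.100.1 dst 198.51.100.2
set interface state gre0 up
set interface nat44 in gre0 out GigabitEthernet0/0/0
nat44 add interface address GigabitEthernet0/0/0
```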