+1
On 11/8/2019 11:15 AM, Andrew Yourtchenko wrote:
> Ah. Well makes sense. But then the solution is simple - rather than
> disabling *all* the plugins by default, disable individually all
> plugins you know of “now”.
> That way you get the backwards compatibility automatically.
You will still need
You can look up the msg_id against the plugin registration. No?
I was looking at this to squash the PAPI log messages for when a plugin is
not loaded.
```
DBGvpp# show api plugin
Plugin API message ID ranges...
Name            First-ID        Last-ID
abf_fc925b52
```
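As a sketch of that msg_id lookup, the ranges reported by `show api plugin` can be searched directly. The plugin names and ID numbers below are invented for illustration; real values come from the plugin registration.

```python
# Hypothetical sketch: map a PAPI msg_id back to the plugin that
# registered it, using (first_id, last_id) ranges like those printed
# by "show api plugin". The ranges here are illustrative only.

def find_plugin(msg_id, ranges):
    """Return the plugin whose [first_id, last_id] range covers msg_id."""
    for name, first_id, last_id in ranges:
        if first_id <= msg_id <= last_id:
            return name
    return None  # msg_id belongs to no loaded plugin

ranges = [
    ("abf", 800, 820),  # made-up numbers for illustration
    ("acl", 821, 900),
]

print(find_plugin(810, ranges))  # -> abf
print(find_plugin(999, ranges))  # -> None
```

A `None` result would be exactly the "plugin not loaded" case whose log messages are being squashed.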
> On 8 Nov 2019, at 17:16, Andrew Yourtchenko wrote:
>
> Unlike an API change, which can have a huge impact radius, the decision about
> the breaking case of the “chipped off” plugin takes O(1) time when manual
> (and can be trivially fully automated), so a blocking process does not appear
>
Ah. Well makes sense. But then the solution is simple - rather than disabling
*all* the plugins by default, disable individually all plugins you know of
“now”.
That way you get the backwards compatibility automatically.
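A startup.conf sketch of that approach, listing known plugins individually instead of `plugin default { disable }` (the plugin file names here are illustrative, not an exhaustive list):

```
plugins {
    ## Disable only the plugins known today; plugins added in future
    ## releases then stay enabled by default.
    plugin abf_plugin.so { disable }
    plugin acl_plugin.so { disable }
    ## ... one line per currently known plugin ...
    plugin dpdk_plugin.so { enable }
}
```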
You will still need to be vigilant in case you want to be precise about ha
> plugins are not disabled by default
So? The default is not minimal; we are explicitly disabling the default plugins:

```
plugins {
    plugin default { disable }
    plugin dpdk_plugin.so { enable }
}
```
Vratko.
From: vpp-dev@lists.fd.io On Behalf Of Andrew Yourtchenko
Sent: Friday, November 8, 2019 4:17 PM
To: Vra
These “new” plugins are not disabled by default, so how come they are not loaded
in your scenario if you say you don’t know about them?
What does the config that experiences problems look like ?
--a
> On 8 Nov 2019, at 14:47, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at
> Cisco) via List
Questions: if an API message handler is moved into a different plugin,
does that constitute a (backward-incompatible) API change?
If yes, should we use checks and processes similar to those for CRC changes?
Example (not merged yet): sw_interface_ip6nd_ra_config
is being moved from vnet [0] to plugins/ip6
Hi all,
I am testing the performance of memif connecting multiple VPP instances. In my
simple test, I am finding that with each additional memif connection, overall
packet throughput drops drastically.
1. I configured VPP and ran a performance test.
Command:
```
set int ip address TenGigabitEthern
```
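For context, a minimal memif setup between two VPP instances might look like the following vppctl sketch. The interface names and addresses are illustrative, not taken from the reporter's configuration:

```
## On one VPP instance (master side); run the mirror-image "slave"
## commands on the peer. Addresses are examples only.
create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 192.168.1.1/24
```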
Looks like a straightforward sequencing issue. We should set
vlib_mains[next_thread_index]->check_frame_queues = 1 after marking the frame
queue element valid / calling vlib_put_frame_queue_elt, not when switching to a
new destination queue.
I'll push a patch for you to test.
HTH... Dave
Hello everyone.
I am working on automating the "git bisect" process,
mainly for locating performance regressions and progressions
(it is also usable for locating breakages and fixes).
Of course, the process works correctly
only if the performance results are stable enough.
And we know from the per-patch p
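A minimal sketch of how such a bisect helper might classify one noisy measurement. The 5% noise margin and the thresholds are assumptions for illustration, not CSIT's actual method; the exit codes follow git-bisect's `run` conventions (0 = good, 1 = bad, 125 = skip):

```python
# Hypothetical classification step for a "git bisect run" script:
# compare one throughput sample against a reference value, with a
# noise margin so unstable results are skipped rather than misjudged.

def classify(measured_mpps, reference_mpps, noise_fraction=0.05):
    """Return a git-bisect exit code for one performance sample."""
    margin = reference_mpps * noise_fraction
    if measured_mpps >= reference_mpps - margin:
        return 0    # good: within noise of the reference
    if measured_mpps < reference_mpps - 2 * margin:
        return 1    # bad: clearly below the reference
    return 125      # skip: inside the ambiguous band, too close to call

print(classify(9.9, 10.0))  # -> 0
print(classify(8.0, 10.0))  # -> 1
print(classify(9.3, 10.0))  # -> 125
```

The skip band is what makes the automation tolerant of the result instability mentioned above: an ambiguous sample causes git bisect to try a neighbouring commit instead of committing to a wrong verdict.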
Hi Chuan,
> The weird thing is that when I reduced the number of workers
> everything worked fine. I did send 8.5Gbps udp/tcp traffic over the two
> machines. I also saw encryption/decryption happening. How could this be
> possible without crypto engine?
Hmm that should not have happened :)
Frank