> I think this should handle the unlink case you mention; however, perhaps you
> have identified a genuine bug. If you have more info or a sample config/app
> that easily demonstrates the issue, that would help reproduce/debug here?
Hi Harry,
The bug report includes a simple test application
> Not related to this question: are you planning to use rte_event_port_unlink()
> in the fast path?
> Does rte_event_stop() work for you if it is in the slow path?
Hi Jerin,
Sorry for missing your question earlier. We need rte_event_port_link() /
rte_event_port_unlink() for doing load balancing, so call
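That pattern of relinking queues between worker ports at runtime might look
roughly like the sketch below. This is a minimal illustration, not code from
the thread: the port/queue IDs are hypothetical and real code would need
fuller error handling.

#include <rte_eventdev.h>

/* Move one queue from a busy worker's port to an idle worker's port at
 * runtime. Port and queue IDs here are purely illustrative. */
static int
rebalance_queue(uint8_t dev_id, uint8_t busy_port, uint8_t idle_port,
		uint8_t queue_id)
{
	uint8_t q = queue_id;

	/* Ask the PMD to stop scheduling this queue to busy_port; on some
	 * PMDs (e.g. event/sw) this completes asynchronously. */
	if (rte_event_port_unlink(dev_id, busy_port, &q, 1) != 1)
		return -1;

	/* Link the same queue to idle_port with default (normal) priority. */
	if (rte_event_port_link(dev_id, idle_port, &q, NULL, 1) != 1)
		return -1;

	return 0;
}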
Hi,
In bug report https://bugs.dpdk.org/show_bug.cgi?id=60 we have been discussing
issues related to events ending up in the wrong ports after calling
rte_event_port_unlink(). In addition to finding a few bugs, we have identified a
need for a new API call (or documentation extension) for an application to
detect when an unlink operation has completed.
Hi,
rte_event_dev_start() requires that all queues be linked, which makes
writing applications that link/unlink queues at runtime cumbersome.
E.g. the application has to dummy-link all queues before rte_event_dev_start()
and then unlink them after the call. This alone wouldn't be
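One possible shape of that dummy-link workaround, sketched under the
assumption that linking every configured queue to a single port before start
is enough to satisfy the check; the IDs are illustrative and error handling is
minimal:

#include <rte_eventdev.h>

/* Workaround sketch: satisfy the "all queues linked" requirement by linking
 * everything to one port before start, then immediately unlinking again so
 * the real links can be managed at runtime. */
static int
start_with_dummy_links(uint8_t dev_id, uint8_t port_id)
{
	int ret;

	/* queues == NULL links all configured queues at normal priority. */
	ret = rte_event_port_link(dev_id, port_id, NULL, NULL, 0);
	if (ret < 0)
		return ret;

	ret = rte_event_dev_start(dev_id);
	if (ret < 0)
		return ret;

	/* queues == NULL unlinks every queue currently linked to the port. */
	ret = rte_event_port_unlink(dev_id, port_id, NULL, 0);
	return ret < 0 ? ret : 0;
}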
> I don't think that the eventdev API requires a 1:1 lcore/port mapping, so
> really a PMD should be able to handle any thread calling any port.
>
> The event/sw PMD allows any thread to call dequeue/enqueue on any port,
> so long as it is not being accessed by another thread.
>
>> For this "runtime scale down" use-case the missing information is being
>> able to identify when an unlink is complete. After that (and after ensuring
>> the port buffer is empty) the application can be guaranteed that no more
>> events are going to be sent to that port, and the application ca
I think the end result we're hoping for is something like the pseudocode below
(keep in mind that the event/sw PMD has a service-core thread running it, so no
application code is involved there):
int worker_poll = 1;

worker() {
	while (worker_poll) {
		// dequeue events from this worker's port and process them
	}
}
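A concrete, simplified C version of that worker loop, using the standard
dequeue call; this is not from the original mail, and the burst size and the
worker_poll flag are purely illustrative:

#include <rte_common.h>
#include <rte_eventdev.h>

volatile int worker_poll = 1;

/* Worker loop: each worker polls its own event port, while the event/sw
 * scheduler itself runs on a separate service core. */
static void
worker(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev[32];

	while (worker_poll) {
		uint16_t nb = rte_event_dequeue_burst(dev_id, port_id, ev,
						      RTE_DIM(ev), 0);
		for (uint16_t i = 0; i < nb; i++) {
			/* ... process ev[i] and forward/release it ... */
		}
	}
}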
> # Other than that, I am still not able to understand why the application
> cannot simply wait until rte_event_port_unlink() returns.
Making rte_event_port_unlink() blocking would be troublesome if one doesn't care
about unlink completion, e.g. when doing dynamic load balancing.
>
> # What in real world use c
>>
>> I'm not sure I understand the issue here.
>> Is anybody suggesting making unlink() blocking?
>>
>> For certain PMDs, perhaps unlink() must be handled synchronously.
>> For other PMDs (e.g. event/sw) there are multiple threads involved,
>> so it must be async. Hence, the APIs should be async
>
> +/**
> + * Returns the number of unlinks in progress.
> + *
> + * This function provides the application with a method to detect when an
> + * unlink has been completed by the implementation. See
> + * *rte_event_port_unlink* on how to issue unlink requests.
> + *
> + * @param dev_id
> + *
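Assuming the function ends up being named rte_event_port_unlinks_in_progress()
(as it is in later eventdev versions), a typical "runtime scale down" caller
might use it roughly as below. This is a sketch, not code from the patch:

#include <unistd.h>
#include <rte_eventdev.h>

/* Runtime scale-down: stop scheduling to a port and wait until the PMD
 * reports that all requested unlinks have actually taken effect. */
static void
quiesce_port(uint8_t dev_id, uint8_t port_id)
{
	/* Request unlinking of every queue currently linked to this port. */
	rte_event_port_unlink(dev_id, port_id, NULL, 0);

	/* Poll until no unlinks remain in progress. */
	while (rte_event_port_unlinks_in_progress(dev_id, port_id) > 0)
		usleep(100);

	/* From here on no new events will be scheduled to port_id; anything
	 * already sitting in the port still has to be drained by its worker. */
}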
> On 21 Sep 2018, at 13:25, Harry van Haaren wrote:
>
> This commit fixes the cq index checks when unlinking
> ports/queues while the scheduler core is running.
> Previously, the == comparison could be "skipped"
> in particular corner cases. With the check being changed
> to >= this is resol
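A generic illustration (not the scheduler code itself) of why an exact
equality check can be "skipped" when the bound moves underneath the loop,
while a >= check still terminates correctly:

#include <stdint.h>

/* Illustrative only -- not the event/sw PMD code. If 'limit' can shrink while
 * the loop runs (for example because another thread unlinks a queue), an
 * exact 'idx == *limit' exit test can be stepped over and the loop overruns
 * the valid range; 'idx >= *limit' cannot be stepped over. */
static int
sum_up_to(const int *data, volatile const uint16_t *limit)
{
	int sum = 0;

	for (uint16_t idx = 0; ; idx++) {
		if (idx >= *limit)	/* with 'idx == *limit' this can be missed */
			break;
		sum += data[idx];
	}
	return sum;
}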
>> What is not clear to me is the motivation to use weak here instead of simply
>> using the CONFIG_RTE_I40E_INC_VECTOR macro to exclude the stubs in
>> i40e_rxtx.c. It will make the library smaller and avoid issues like this one,
>> which are quite hard to troubleshoot.
>
> Since this issue was seen in fd.io, I d
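To make the trade-off concrete, here is a condensed sketch of the two
approaches being compared. The function name follows i40e, the guard macro is
written as it appears in the thread with the CONFIG_ build-config prefix
dropped, and USE_WEAK_STUBS is just a switch for this illustration; this is
not the actual driver code.

#include <stdint.h>
#include <rte_mbuf.h>

#ifdef USE_WEAK_STUBS
/* Weak stub: always compiled into the library and meant to be overridden at
 * link time by the real vector implementation. If the linker ever keeps the
 * stub (e.g. static-archive link ordering), the failure is silent and hard
 * to troubleshoot. */
uint16_t __attribute__((weak))
i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	(void)rx_queue; (void)rx_pkts; (void)nb_pkts;
	return 0; /* "vector RX not available" */
}
#else
/* Macro-guarded stub: compiled only when the vector code is disabled, so
 * exactly one definition of the symbol ever exists in the library. */
#ifndef RTE_I40E_INC_VECTOR
uint16_t
i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	(void)rx_queue; (void)rx_pkts; (void)nb_pkts;
	return 0;
}
#endif
#endif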
> -----Original Message-----
> From: Sergio Gonzalez Monroy [mailto:sergio.gonzalez.monroy at intel.com]
> Sent: Friday, July 01, 2016 1:05 PM
> To: Elo, Matias (Nokia - FI/Espoo) ; dev at dpdk.org
> Cc: ferruh.yigit at intel.com; damarion at cisco.com
> Subject: Re: [dpdk-d
Hi,
The SW eventdev rx adapter has an internal enqueue buffer
'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC
until at least BATCH_SIZE (=32) packets have been received before enqueueing
them to eventdev. For example, in the case of validation testing, where often a
s
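A condensed illustration of the buffering behaviour described above. The field
and constant names are taken from the thread, but the code below is a
simplified paraphrase, not the adapter sources:

#include <stdint.h>
#include <rte_common.h>
#include <rte_eventdev.h>

#define BATCH_SIZE 32

struct event_enqueue_buffer {
	struct rte_event events[2 * BATCH_SIZE];
	uint16_t count;
};

/* Stage freshly received events and flush them to the eventdev only once at
 * least BATCH_SIZE of them have accumulated. With low-rate traffic (fewer
 * than 32 packets, as in many validation tests) the threshold is never
 * reached and the packets sit in the buffer indefinitely. */
static void
stage_and_maybe_flush(uint8_t dev_id, uint8_t port_id,
		      struct event_enqueue_buffer *buf,
		      const struct rte_event *ev, uint16_t nb)
{
	for (uint16_t i = 0;
	     i < nb && buf->count < (uint16_t)RTE_DIM(buf->events); i++)
		buf->events[buf->count++] = ev[i];

	if (buf->count >= BATCH_SIZE) {
		uint16_t sent = 0;

		/* Retry until everything staged is accepted (simplified). */
		while (sent < buf->count)
			sent += rte_event_enqueue_new_burst(dev_id, port_id,
							    &buf->events[sent],
							    buf->count - sent);
		buf->count = 0;
	}
}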
> On 2019-05-10 15:30, Thomas Monjalon wrote:
>> Any review please?
>
> Reviewed-by: Mattias Rönnblom
>
> Matias Elo reported "Thanks, I’ve tested this patch and can confirm that it
> fixes the problem." for the (nearly identical) v2 of this patch.
I’ve now also tested the v3:
Tested-by: M