On 05/12/16 15:12, Neil Horman wrote:
On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
On 02/12/16 14:57, Bruce Richardson wrote:
On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
2016-12-02 14:15, Fan Zhang:
This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.
Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.
The scheduler PMD can be used to fill the throughput gap between a
physical core and the existing cryptodevs to increase the overall
performance. For example, if a physical core can produce crypto ops at a
higher rate than a single cryptodev can process them, the scheduler PMD
can be introduced to attach more than one cryptodev.
This initial implementation is limited to supporting the following
scheduling modes (a rough configuration sketch follows the list):
- CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst the attached
software slave cryptodevs; to set this mode, one or more software
cryptodevs must first have been attached to the scheduler).
- CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst the attached
hardware slave cryptodevs (QAT); to set this mode, one or more QAT
devices must first have been attached to the scheduler).
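For illustration, a rough sketch of bringing up the scheduler and
selecting the software round-robin mode; only the mode name comes from
this patch, while the device name "crypto_scheduler0" and the helpers
crypto_sched_attach_slave()/crypto_sched_mode_set() are hypothetical
placeholders, not a settled API:

    #include <rte_cryptodev.h>

    /* Look up the scheduler vdev by (hypothetical) name. */
    int sched_id = rte_cryptodev_get_dev_id("crypto_scheduler0");

    /* Attach two software slave cryptodevs, then pick the SW
     * round-robin mode. Both helper names are placeholders. */
    crypto_sched_attach_slave(sched_id, sw_dev_id_0);
    crypto_sched_attach_slave(sched_id, sw_dev_id_1);
    crypto_sched_mode_set(sched_id, CRYPTO_SCHED_SW_ROUND_ROBIN_MODE);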
Could it be implemented on top of the eventdev API?
Not really. The eventdev API is for different types of scheduling
between multiple sources that are all polling for packets, compared to
this, which is more analogous - as I understand it - to the bonding PMD
for ethdev.
To make something like this work with an eventdev API you would need to
use one of the following models:
* have worker cores for offloading packets to the different crypto
blocks pulling from the eventdev APIs. This would make it difficult to
do any "smart" scheduling of crypto operations between the blocks,
e.g. that one crypto instance may be better at certain types of
operations than another.
* move the logic in this driver into an existing eventdev instance,
which uses the eventdev API rather than the crypto APIs and so has an
extra level of "structure abstraction" that has to be worked through.
It's just not really a good fit.
So for this workload, I believe the pseudo-cryptodev instance is the
best way to go.
/Bruce
As Bruce says, this is much more analogous to the ethdev bonding driver;
the main idea is to allow different crypto op scheduling mechanisms to be
defined transparently to an application. This could be load-balancing
across multiple hw crypto devices, or having a software crypto device act
as a backup device for a hw accelerator if it becomes oversubscribed. I
think the main advantage of the crypto-scheduler approach is that the
data path of the application doesn't need any knowledge that scheduling
is happening at all; it just uses a different crypto device id, which
then manages the distribution of crypto work.
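To make that transparency concrete, the data path below is plain
cryptodev code; only the dev_id refers to the scheduler. A minimal
sketch, assuming sched_dev_id holds the scheduler's device id, queue
pair 0 is configured, and the ops have already been built:

    #include <rte_crypto.h>
    #include <rte_cryptodev.h>

    #define BURST_SIZE 32

    struct rte_crypto_op *ops[BURST_SIZE];     /* ops built by the app */
    struct rte_crypto_op *deq_ops[BURST_SIZE];
    uint16_t nb_enq, nb_deq;

    /* Enqueue to the scheduler's dev_id exactly as to any other
     * cryptodev; distribution across the slaves happens behind this
     * call. */
    nb_enq = rte_cryptodev_enqueue_burst(sched_dev_id, 0, ops,
                                         BURST_SIZE);

    /* Dequeue likewise; with an ordering scheduler the ops come back
     * in their original enqueue order, regardless of which slave
     * processed them. */
    nb_deq = rte_cryptodev_dequeue_burst(sched_dev_id, 0, deq_ops,
                                         BURST_SIZE);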
This is a good deal like the bonding pmd, and so from a certain standpoint it
makes sense to do this, but whereas the bonding pmd is meant to create a single
path to a logical network over several physical networks, this pmd really only
focuses on maximizing throughput, and for that we already have tools. As Thomas
mentions, there is the eventdev library, but from my view the distributor
library already fits this bill. It already is a basic framework to process
mbufs in parallel according to whatever policy you want to implement, which
sounds like exactly what the goal of this pmd is.
Neil
Hey Neil,
this is actually intended to act and look a good deal like the ethernet
bonding device, but to handle crypto scheduling use cases.
For example, take the case where multiple hw accelerators may be
available. We want to provide user applications with a mechanism to
transparently balance work across all devices without having to manage
the load balancing details or to guarantee the ordering of the processed
ops on the dequeue_burst side. In this case the application
would just use the crypto dev_id of the scheduler and it would look
after balancing the workload across the available hw accelerators.
                +-------------------+
                |  Crypto Sch PMD   |
                |                   |
                | ORDERING / RR SCH |
                +-------------------+
                    ^     ^     ^
                    |     |     |
        +-----------+     |     +-----------+
        |                 |                 |
        V                 V                 V
+---------------+ +---------------+ +---------------+
| Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
+---------------+ +---------------+ +---------------+
Another use case we hope to support is migration of processing from one
device to another, where a hw and a sw crypto pmd can be bound to the
same crypto scheduler and the crypto processing transparently migrated
from the hw to the sw pmd. This would allow hw accelerators to be
hot-plug attached/detached in a guest VM (a sketch follows the diagram
below).
            +----------------+
            | Crypto Sch PMD |
            |                |
            | MIGRATION SCH  |
            +----------------+
                |        |
        +-------+        +------+
        |                       |
        V                       V
+---------------+       +---------------+
| Crypto HW PMD |       | Crypto SW PMD |
|   (Active)    |       |  (Inactive)   |
+---------------+       +---------------+
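A rough sketch of binding that hw/sw pair, reusing the same hypothetical
helpers as above; CRYPTO_SCHED_MIGRATION_MODE is likewise only an
illustrative name for the mode shown in the diagram:

    /* Hypothetical: attach one hw slave (initially active) and one sw
     * slave (initially inactive), then select the migration mode; the
     * scheduler decides when to shift work between the two. */
    crypto_sched_attach_slave(sched_id, qat_dev_id);
    crypto_sched_attach_slave(sched_id, aesni_mb_dev_id);
    crypto_sched_mode_set(sched_id, CRYPTO_SCHED_MIGRATION_MODE);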
The main point is that this isn't envisaged as just a mechanism for
scheduling crypto workloads across multiple cores, but as a framework
for allowing different scheduling mechanisms to be introduced to handle
different crypto scheduling problems, and to do so in a way which is
completely transparent to the data path of an application. Like the eth
bonding driver, we want to support creating the crypto scheduler from
EAL options, which allow specification of the scheduling mode and of the
crypto pmds which are to be bound to that crypto scheduler.
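Purely as an illustration, that could look like the eth bonding vdev
syntax, e.g. an EAL option along the lines of
--vdev "crypto_scheduler0,slave=...,mode=round-robin", or its
programmatic equivalent; the vdev name and the slave=/mode= argument
keys below are not settled by this patch:

    #include <rte_dev.h>

    /* Hypothetical equivalent of an EAL --vdev option; the device name
     * and the slave=/mode= keys are illustrative only. */
    rte_eal_vdev_init("crypto_scheduler0",
                      "slave=cryptodev_qat0,slave=cryptodev_aesni_mb0,"
                      "mode=round-robin");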