Hello Patrick,

Thank you for sharing, Gregory. I did not get an opportunity to look through the
code today, but I did run through the presentation. A few points I noted:
1. The presentation shows an example testpmd testcase for creating a flow rule,
and then shows a validation step in which standard out is compared against the
expected string ("flow rule x created"), from which we conclude whether flow
rules can be created. Are you also sending packets according to the flow rules
and validating that what is sent/received corresponds to the expected behavior
of the flow rules? When I look at the old DTS framework and an example flow
rules testsuite (https://doc.dpdk.org/dts/test_plans/rte_flow_test_plan.html),
with which we want feature parity, I think that validation for this testing
framework needs to primarily rely on comparing packets sent and packets
received.
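Something like the following sketch is what I have in mind (the field names and
the compare helper here are purely illustrative, not existing DTS code):

```python
# Illustrative sketch: validate by comparing a received packet's fields
# against the expected ones, instead of only checking testpmd's stdout.
# The field names and helper below are hypothetical, not a DTS API.

def packet_matches(expected: dict, received: dict) -> bool:
    """Return True if every expected field is present with the same value."""
    return all(received.get(field) == value for field, value in expected.items())

# Expected result of a flow rule that, e.g., pushes VLAN ID 3103:
expected = {"dst_mac": "aa:bb:cc:dd:ee:aa", "vlan_id": 3103}

# A (mock) received packet as parsed from the sniffer:
received = {"src_mac": "11:22:33:44:55:66",
            "dst_mac": "aa:bb:cc:dd:ee:aa",
            "vlan_id": 3103,
            "sport": 1234}

assert packet_matches(expected, received)
```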

The unit test infrastructure validates flow rule creation and
a result produced by that flow.
A flow result is triggered by a packet.
However, flow result validation cannot always be done by inspecting a packet.
The unit tests implement 2 flow validation methods.

The first validation method tests testpmd output triggered by a test packet.

Example: use the MODIFY_FIELD action to copy the packet VLAN ID to the flow TAG item.
A flow tag is an internal flow resource; it must be validated inside the DPDK application.

The test creates 2 flow rules:

Rule 1: use MODIFY_FIELD to copy the packet VLAN ID to the flow TAG item:
pattern eth / vlan / end \
actions modify_field op set dst_type tag ... src_type vlan_id ... / end

Rule 2: validate the TAG item:
pattern tag data is 0x31 ... / end actions mark id 0xaaa / rss / end

The test sends a packet with VLAN ID 0x31: / Dot1Q(vlan=0x31) /
The test then matches the testpmd output triggered by the packet against
`FDIR matched ID=0xaaa`.
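A minimal sketch of that stdout check (the match string comes from the example
above; the parsing helper and the sample output fragment are illustrative
assumptions, not part of the framework):

```python
import re

# Illustrative helper: scan testpmd verbose output for the FDIR mark ID
# that rule 2 attaches to packets whose TAG item matched.
def fdir_mark_id(testpmd_output: str):
    """Return the FDIR mark ID reported by testpmd, or None if absent."""
    match = re.search(r"FDIR matched ID=(0x[0-9a-fA-F]+)", testpmd_output)
    return int(match.group(1), 16) if match else None

# Assumed shape of the verbose output after sending the Dot1Q(vlan=0x31) packet:
output = "port 0/queue 3: received 1 packets\n  FDIR matched ID=0xaaa\n"
assert fdir_mark_id(output) == 0xaaa
```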

The second validation method tests a packet after it was processed by a flow.

The unit test operates in a static environment. It does not compare
source and target packets; the test "knows" the valid target packet configuration.

Example: push VLAN header into a packet.

There is a single flow rule in that example:
pattern eth / end \
actions of_push_vlan ethertype 0x8100 / \
        of_set_vlan_vid vlan_vid 3103 .../ port_id id 1 / end


There are 2 Scapy processes in that test: `tg` runs on a peer host and
sends a source packet; `vm` runs on the same host as testpmd and validates
the incoming packet.

Phase 0 prepares the test packet on `tg` and starts an AsyncSniffer on `vm`.
Phase 1 sends the packet.
Phase 2 validates the packet.
The test can repeat phases 1 and 2.


phase0:
  vm: |
    sniff = AsyncSniffer(iface=pf1vf0, filter='udp and src port 1234')

  tg: |
    udp_packet = Ether(src='11:22:33:44:55:66',
                       dst='aa:bb:cc:dd:ee:aa')/
                 IP(src='1.1.1.1', dst='2.2.2.2')/
                 UDP(sport=1234, dport=5678)/Raw('== TEST ==')

phase1: &phase1
  vm: sniff.start()
  tg: sendp(udp_packet, iface=pf1)

phase2: &phase2
  vm: |
    cap = sniff.stop()
    if len(cap[UDP]) > 0: cap[UDP][0][Ether].command()
  result:
        vm: vlan=3103
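In phase 2, Scapy's `.command()` renders the captured packet as a constructor
string, so checking the pushed VLAN ID reduces to extracting `vlan=...` from
that string. A stdlib sketch of that check (the sample `command()` string below
is an assumption about the captured packet, not actual output):

```python
import re

# Scapy's .command() renders a packet as a constructor expression such as
# "Ether(...)/Dot1Q(vlan=3103)/IP(...)/UDP(...)". Extract the VLAN ID
# pushed by the of_push_vlan / of_set_vlan_vid flow rule.
def vlan_from_command(cmd: str):
    """Return the Dot1Q vlan value from a Scapy command() string, or None."""
    match = re.search(r"Dot1Q\([^)]*vlan=(\d+)", cmd)
    return int(match.group(1)) if match else None

# Assumed shape of the captured packet's command() output:
cmd = ("Ether(src='11:22:33:44:55:66', dst='aa:bb:cc:dd:ee:aa')"
       "/Dot1Q(vlan=3103)/IP(src='1.1.1.1', dst='2.2.2.2')"
       "/UDP(sport=1234, dport=5678)/Raw(load=b'== TEST ==')")
assert vlan_from_command(cmd) == 3103
```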

In any case, there may be some testsuites small enough in scope that
validating by standard out in this way is appropriate. I'm not sure,
but we should keep our options open.

2. If the implementation overhead is not too significant for the configuration
step in the DTS execution, a "--fast" option like the one you use may be a good
improvement for the framework. In your mind, is the main benefit A. reduced
execution time, B. reduced user setup time (no need to write a full config
file), or C. something else?

A user must always provide the test configuration.
However, a host can already have a prepared setup before the test execution.
In that case the user can skip the host setup phase and reduce execution time.
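As an illustration of that trade-off (the flag name matches the email, but the
phase functions below are hypothetical placeholders, not the real framework):

```python
import argparse

# Hypothetical phase stubs standing in for the real setup/run steps.
def setup_host():
    return "host setup done"

def run_tests():
    return "tests executed"

def main(argv=None):
    parser = argparse.ArgumentParser()
    # --fast skips host setup when the host is already prepared,
    # reducing overall execution time.
    parser.add_argument("--fast", action="store_true",
                        help="assume the host is already set up")
    args = parser.parse_args(argv)

    phases = []
    if not args.fast:
        phases.append(setup_host())
    phases.append(run_tests())
    return phases

assert main(["--fast"]) == ["tests executed"]
assert main([]) == ["host setup done", "tests executed"]
```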
 

Thanks for making this available to us so we can use it as a reference in
making DTS better. :)
