On 26. 8. 2024 19:11, Jeremy Spewock wrote:
On Wed, Aug 21, 2024 at 10:53 AM Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:
<snip>
  @dataclass
  class TestPmdPort(TextParser):
      """Dataclass representing the result of testpmd's ``show port info`` 
command."""
@@ -962,3 +1043,96 @@ def _close(self) -> None:
          self.stop()
          self.send_command("quit", "Bye...")
          return super()._close()
+
+    """
+    ====== Capability retrieval methods ======
+    """
+
+    def get_capabilities_rxq_info(
+        self,
+        supported_capabilities: MutableSet["NicCapability"],
+        unsupported_capabilities: MutableSet["NicCapability"],
+    ) -> None:
+        """Get all rxq capabilities and divide them into supported and 
unsupported.
+
+        Args:
+            supported_capabilities: Supported capabilities will be added to this set.
+            unsupported_capabilities: Unsupported capabilities will be added to this set.
+        """
+        self._logger.debug("Getting rxq capabilities.")
+        command = f"show rxq info {self.ports[0].id} 0"
+        rxq_info = TestPmdRxqInfo.parse(self.send_command(command))
+        if rxq_info.rx_scattered_packets:
+            supported_capabilities.add(NicCapability.SCATTERED_RX_ENABLED)
+        else:
+            unsupported_capabilities.add(NicCapability.SCATTERED_RX_ENABLED)
+
+    """
+    ====== Decorator methods ======
+    """
+
+    @staticmethod
+    def config_mtu_9000(testpmd_method: TestPmdShellSimpleMethod) -> TestPmdShellDecoratedMethod:

It might be more valuable for me to make a method for configuring the
MTU of all ports so that you don't have to do the loops yourself; I
can add this to the MTU patch once I update that and rebase it on
main.


Sure, if you add that, I'll use it here. :-)
What won't work with that is the per-port restoration of MTU. But if we assume that the MTU is always the same for all ports, then I don't think that's going to be a problem. This assumption doesn't seem unreasonable; I don't see a scenario where it would differ.
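
For illustration, a rough sketch of what such a TestPmdShell helper could look like (set_ports_mtu is just a placeholder name; it reuses the existing set_port_mtu() and returns the original values so that per-port restoration stays possible if we ever need it):

def set_ports_mtu(self, mtu: int, verify: bool = False) -> dict[int, int]:
    """Set `mtu` on all ports and return each port's original MTU, keyed by port id."""
    original_mtus = {port.id: port.mtu if port.mtu else 1500 for port in self.ports}
    for port_id in original_mtus:
        self.set_port_mtu(port_id=port_id, mtu=mtu, verify=verify)
    return original_mtus

The decorator would then only have to restore the returned values after running the decorated method.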

+        """Configure MTU to 9000 on all ports, run `testpmd_method`, then 
revert.
+
+        Args:
+            testpmd_method: The method to decorate.
+
+        Returns:
+            The method decorated with setting and reverting MTU.
+        """
+
+        def wrapper(testpmd_shell: Self):
+            original_mtus = []
+            for port in testpmd_shell.ports:
+                original_mtus.append((port.id, port.mtu))
+                testpmd_shell.set_port_mtu(port_id=port.id, mtu=9000, verify=False)
+            testpmd_method(testpmd_shell)
+            for port_id, mtu in original_mtus:
+                testpmd_shell.set_port_mtu(port_id=port_id, mtu=mtu if mtu else 1500, verify=False)
+
+        return wrapper
<snip>
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 8899f07f76..9a79e6ebb3 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -5,14 +5,40 @@

  This module provides a protocol that defines the common attributes of test cases and suites
  and support for test environment capabilities.
+
+Many test cases are testing features not available on all hardware.
+
+The module also allows developers to mark test cases or suites a requiring certain

small typo: I think you meant " mark test cases or suites *as*
requiring certain..."


Ack.

+hardware capabilities with the :func:`requires` decorator.
+
+Example:
+    .. code:: python
+
+        from framework.test_suite import TestSuite, func_test
+        from framework.testbed_model.capability import NicCapability, requires
+        class TestPmdBufferScatter(TestSuite):
+            # only the test case requires the scattered_rx capability
+            # other test cases may not require it
+            @requires(NicCapability.scattered_rx)

Is it worth updating this to what the enum actually holds
(SCATTERED_RX_ENABLED), or not really, since it is just an example in a
docstring? I think it could go either way, but it might be better to
keep it consistent, at least to start.


Yes, I overlooked this.

+            @func_test
+            def test_scatter_mbuf_2048(self):
<snip>

@@ -96,6 +122,128 @@ def __hash__(self) -> int:
          """The subclasses must be hashable so that they can be stored in 
sets."""


+@dataclass
+class DecoratedNicCapability(Capability):
+    """A wrapper around 
:class:`~framework.remote_session.testpmd_shell.NicCapability`.
+
+    Some NIC capabilities are only present or listed as supported only under certain conditions,
+    such as when a particular configuration is in place. This is achieved by allowing users to pass
+    a decorator function that decorates the function that gets the support status of the capability.
+
+    New instances should be created with the :meth:`create_unique` class method to ensure
+    there are no duplicate instances.
+
+    Attributes:
+        nic_capability: The NIC capability that partly defines each instance.
+        capability_decorator: The decorator function that will be passed the function associated
+            with `nic_capability` when discovering the support status of the capability.
+            Each instance is defined by `capability_decorator` along with `nic_capability`.
+    """
+
+    nic_capability: NicCapability
+    capability_decorator: TestPmdShellDecorator | None
+    _unique_capabilities: ClassVar[
+        dict[Tuple[NicCapability, TestPmdShellDecorator | None], Self]
+    ] = {}
+
+    @classmethod
+    def get_unique(
+        cls, nic_capability: NicCapability, decorator_fn: TestPmdShellDecorator | None
+    ) -> "DecoratedNicCapability":

This idea of get_unique really confused me at first. After reading
different parts of the code to learn how it is being used, I think I
understand now what it's for. My current understanding is basically
that you're using an uninstantiated class as essentially a factory
that stores a dictionary that you are using to hold singletons.

Just a note, these are not singletons, just similar to them. A singleton means only one instance of a class can exist. This class allows more instances, but it does limit them. It's closer to an Enum, which works exactly the same way, except that only attribute names are taken into consideration (with Enums).

It
might be confusing to me in general because I haven't really seen this
idea of dynamically modifying attributes of a class itself rather than
an instance of the class used this way. Understanding it now, it makes
sense what you are trying to do and how this is essentially a nice
cache/factory for singleton values for each capability, but it might
be helpful to document a little more somehow that _unique_capabilities
is really just a container for the singleton capabilities, and that
the top-level class is modified to keep a consistent state throughout
the framework.

Again, it could just be me having not really seen this idea used
before, but it was strange to wrap my head around at first since I'm
more used to class methods being used to read the state of attributes.


I'm thinking of adding this to get_unique's docstring:

This is a factory method that implements a quasi-enum pattern.
The instances of this class are stored in a class variable, _unique_capabilities.

If an instance with `nic_capability` and `decorator_fn` as inputs doesn't exist, it is created and added to _unique_capabilities. If it exists, it is returned so that a new identical instance is not created.
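
To illustrate the pattern in isolation, here's a standalone toy example (not the actual DecoratedNicCapability code, just the caching idea):

from dataclasses import dataclass
from typing import ClassVar


@dataclass(frozen=True)
class UniqueThing:
    name: str
    # Container for the already created instances; the class itself acts as the factory.
    _instances: ClassVar[dict[str, "UniqueThing"]] = {}

    @classmethod
    def get_unique(cls, name: str) -> "UniqueThing":
        # Create an instance only the first time these inputs are seen,
        # otherwise return the stored one.
        if name not in cls._instances:
            cls._instances[name] = cls(name)
        return cls._instances[name]


assert UniqueThing.get_unique("a") is UniqueThing.get_unique("a")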


+        """Get the capability uniquely identified by `nic_capability` and 
`decorator_fn`.
+
+        Args:
+            nic_capability: The NIC capability.
+            decorator_fn: The function that will be passed the function associated
+                with `nic_capability` when discovering the support status of the capability.
+
+        Returns:
+            The capability uniquely identified by `nic_capability` and `decorator_fn`.
+        """
+        if (nic_capability, decorator_fn) not in cls._unique_capabilities:
+            cls._unique_capabilities[(nic_capability, decorator_fn)] = cls(
+                nic_capability, decorator_fn
+            )
+        return cls._unique_capabilities[(nic_capability, decorator_fn)]
+
+    @classmethod
+    def get_supported_capabilities(
+        cls, sut_node: SutNode, topology: "Topology"
+    ) -> set["DecoratedNicCapability"]:
+        """Overrides :meth:`~Capability.get_supported_capabilities`.
+
+        The capabilities are first sorted by decorators, then reduced into a single function which
+        is then passed to the decorator. This way we only execute each decorator only once.

This second sentence repeats the word "only", but I don't think that's
really necessary, and it might flow better with just one of them
instead of both.


Ack.

+        """
+        supported_conditional_capabilities: set["DecoratedNicCapability"] = set()
+        logger = get_dts_logger(f"{sut_node.name}.{cls.__name__}")
+        if topology.type is Topology.type.no_link:
+            logger.debug(
+                "No links available in the current topology, not getting NIC 
capabilities."
+            )
+            return supported_conditional_capabilities
+        logger.debug(
+            f"Checking which NIC capabilities from {cls.capabilities_to_check} are 
supported."
+        )
+        if cls.capabilities_to_check:
+            capabilities_to_check_map = cls._get_decorated_capabilities_map()
+            with TestPmdShell(sut_node, privileged=True) as testpmd_shell:
+                for conditional_capability_fn, capabilities in capabilities_to_check_map.items():
+                    supported_capabilities: set[NicCapability] = set()
+                    unsupported_capabilities: set[NicCapability] = set()
+                    capability_fn = cls._reduce_capabilities(
+                        capabilities, supported_capabilities, unsupported_capabilities
+                    )

This combines calling all of the capabilities into one function, but
if there are multiple capabilities that use the same underlying
testpmd function won't this call the same method multiple times? Or is
this handled by two Enum values in NicCapability that have the same
testpmd method as their value hashing to the same thing? For example,
if there are two capabilities that both require show rxq info and the
same decorator (scatter and some other capability X), won't this call
`show rxq info` twice even though you already know that the capability
is supported after the first call? It's not really harmful for this to
happen, but it would go against the idea of calling a method and
getting all of the capabilities that you can the first time. Maybe, if
it's a problem, it could be fixed with a conditional check which
verifies whether `capability` is already in `supported_capabilities` or
`unsupported_capabilities`?


All you say is true. The whole reason for using all these sets is so that we don't call the functions multiple times. The check you mention is exactly what's missing.


+                    if conditional_capability_fn:
+                        capability_fn = conditional_capability_fn(capability_fn)
+                    capability_fn(testpmd_shell)
+                    for supported_capability in supported_capabilities:
+                        for capability in capabilities:
+                            if supported_capability == capability.nic_capability:
+                                supported_conditional_capabilities.add(capability)

I might be misunderstanding, but is this also achievable by just writing:

for capability in capabilities:
     if capability.nic_capability in supported_capabilities:
         supported_conditional_capabilities.add(capability)

I think that would be functionally the same, but I think it reads
easier than a nested loop.


It is the same thing, I'll change it.

+
+        logger.debug(f"Found supported capabilities 
{supported_conditional_capabilities}.")
+        return supported_conditional_capabilities
+
+    @classmethod
+    def _get_decorated_capabilities_map(
+        cls,
+    ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
+        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
+        for capability in cls.capabilities_to_check:
+            if capability.capability_decorator not in capabilities_map:
+                capabilities_map[capability.capability_decorator] = set()
+            capabilities_map[capability.capability_decorator].add(capability)
+
+        return capabilities_map
+
+    @classmethod
+    def _reduce_capabilities(
+        cls,
+        capabilities: set["DecoratedNicCapability"],
+        supported_capabilities: MutableSet,
+        unsupported_capabilities: MutableSet,
+    ) -> TestPmdShellSimpleMethod:
+        def reduced_fn(testpmd_shell: TestPmdShell) -> None:
+            for capability in capabilities:

This is where I'll add the fix:
if capability.nic_capability not in supported_capabilities | unsupported_capabilities:
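
In context, the reduced function would then look roughly like this (just a sketch; the guard compares the NicCapability member, since that's what the two sets end up holding):

def reduced_fn(testpmd_shell: TestPmdShell) -> None:
    for capability in capabilities:
        # Skip capabilities whose support status was already discovered
        # by an earlier call in this loop.
        if capability.nic_capability not in supported_capabilities | unsupported_capabilities:
            capability.nic_capability(
                testpmd_shell, supported_capabilities, unsupported_capabilities
            )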

+                capability.nic_capability(
+                    testpmd_shell, supported_capabilities, unsupported_capabilities
+                )
+
+        return reduced_fn

Would it make sense to put these two methods above
get_supported_capabilities since that is where they are used? I might
be in favor of it just because it would save you from having to look
further down in the diff to find what the method does and then go back
up, but I also understand that it looks like you might have been
sorting methods by private vs. public so if you think it makes more
sense to leave them here that is also viable.


I sorted it this way so that the code is easier to read (in my opinion). I read a method and what it does; when that method calls a method I haven't seen, I go look beneath it for the definition. To me, this is preferable to reading methods I haven't seen before. Or, put another way, the methods are sorted in the order they're used in the code (that's how the code is executed and that's why this order feels natural to me).

+
+    def __hash__(self) -> int:
+        """Instances are identified by :attr:`nic_capability` and 
:attr:`capability_decorator`."""
+        return hash((self.nic_capability, self.capability_decorator))

I guess my question above is asking if `hash(self.nic_capability) ==
hash(self.nic_capability.value())` because, if they aren't, then I
think the map will contain multiple capabilities that use the same
testpmd function since the capabilities themselves are unique, and
then because the get_supported_capabilities() method above just calls
whatever is in this map, it would call it twice. I think the whole
point of the NoAliasEnum is making sure that they don't hash to the
same thing. I could be missing something, but, if I am, maybe some
kind of comment showing where this is handled would be helpful.


I think the simple fix in _reduce_capabilities() addresses this, right?

+
+    def __repr__(self) -> str:
+        """Easy to read string of :attr:`nic_capability` and 
:attr:`capability_decorator`."""
+        condition_fn_name = ""
+        if self.capability_decorator:
+            condition_fn_name = f"{self.capability_decorator.__qualname__}|"
+        return f"{condition_fn_name}{self.nic_capability}"
+
+
  class TestProtocol(Protocol):
      """Common test suite and test case attributes."""

@@ -116,6 +264,34 @@ def get_test_cases(cls, test_case_sublist: Sequence[str] | None = None) -> tuple
          raise NotImplementedError()


+def requires(
+    *nic_capabilities: NicCapability,
+    decorator_fn: TestPmdShellDecorator | None = None,
+) -> Callable[[type[TestProtocol]], type[TestProtocol]]:
+    """A decorator that adds the required capabilities to a test case or test 
suite.
+
+    Args:
+        nic_capabilities: The NIC capabilities that are required by the test case or test suite.
+        decorator_fn: The decorator function that will be used when getting
+            NIC capability support status.
+        topology_type: The topology type the test suite or case requires.
+
+    Returns:
+        The decorated test case or test suite.
+    """
+
+    def add_required_capability(test_case_or_suite: type[TestProtocol]) -> type[TestProtocol]:
+        for nic_capability in nic_capabilities:
+            decorated_nic_capability = DecoratedNicCapability.get_unique(
+                nic_capability, decorator_fn
+            )
+            decorated_nic_capability.add_to_required(test_case_or_suite)
+
+        return test_case_or_suite
+
+    return add_required_capability
+
+
  def get_supported_capabilities(
      sut_node: SutNode,
      topology_config: Topology,
diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
index 178a40385e..713549a5b2 100644
--- a/dts/tests/TestSuite_pmd_buffer_scatter.py
+++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
@@ -25,6 +25,7 @@
  from framework.params.testpmd import SimpleForwardingModes
  from framework.remote_session.testpmd_shell import TestPmdShell
  from framework.test_suite import TestSuite, func_test
+from framework.testbed_model.capability import NicCapability, requires


  class TestPmdBufferScatter(TestSuite):
@@ -123,6 +124,7 @@ def pmd_scatter(self, mbsize: int) -> None:
                      f"{offset}.",
                  )

+    @requires(NicCapability.SCATTERED_RX_ENABLED, decorator_fn=TestPmdShell.config_mtu_9000)

Is it possible to instead associate the required decorator with the
scattered_rx capability itself? Since the configuration is required to
check the capability, I don't think there will ever be a case where
`decorator_fn` isn't required here, or a case where it is ever
anything other than modifying the MTU. Maybe it is more clear from the
reader's perspective this way that there are other things happening
under-the-hood, but it also saves developers from having to specify
something static when we already know beforehand what they need to
specify.

Doing so would probably mess up some of what you have written in the
way of DecoratedNicCapability and it might be more difficult to do it
in a way that only calls the decorator method once if there are
multiple capabilities that require the same decorator.

Maybe something that you could do is make the NicCapability class in
Testpmd have values that are tuples of (decorator_fn | None,
get_capabilities_fn), and then you can still have the
DecoratedNicCapability class and the methods wouldn't really need to
change. I think the main thing that would change is just that the
decorator_fn is collected from the capability/enum instead of the
requires() method. You could potentially make get_unique easier as
well, since you can just rely on the enum values, as you already know
what is required. Then you could take the pairs from that enum and
create a mapping like you have now of which ones require which
decorators and keep the same idea.


All good points, this is a really good suggestion. Great for the writer of the tests and basically no downsides, except maybe if there is a capability which works under different conditions (and we'd want to use all of those). But even then, we could have different capability names for tuples with the same capability but a different decorator_fn.

I'll rework this, seems very much worth it.
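
Roughly what I have in mind (just a sketch; the exact tuple layout and names may well change in the rework):

class NicCapability(NoAliasEnum):
    # Each member pairs the capability retrieval function with the decorator
    # (or None) that has to wrap it while probing support.
    SCATTERED_RX_ENABLED = (
        TestPmdShell.get_capabilities_rxq_info,
        TestPmdShell.config_mtu_9000,
    )

requires() would then take only the NicCapability members, and DecoratedNicCapability.get_unique() could read the decorator from the enum value instead of a decorator_fn argument.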

      @func_test
      def test_scatter_mbuf_2048(self) -> None:
          """Run the :meth:`pmd_scatter` test with `mbsize` set to 2048."""
--
2.34.1

