On 9. 9. 2024 17:55, Jeremy Spewock wrote:
On Mon, Sep 9, 2024 at 8:16 AM Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:



On 12. 8. 2024 19:22, jspew...@iol.unh.edu wrote:
From: Jeremy Spewock <jspew...@iol.unh.edu>

The DTS framework in its current state supports binding ports to
different drivers on the SUT node but not the TG node. The TG node
already has the information that it needs about the different drivers
that it has available in the configuration file, but it did not
previously have access to the devbind script, so it did not use that
information for anything.

This patch moves the steps to copy the DPDK tarball into the Node class
rather than the SUT node class, and calls this method on the TG node as
well as the SUT. It also moves the driver binding step into the Node
class, so the TG follows the same port-binding pattern that already
existed on the SUT.


This is a very inefficient way to do this. We'll have to build DPDK
twice and that's very time consuming. I was thinking in terms of just

This patch shouldn't be compiling DPDK twice; are you referring to the
process of copying the tarball over and extracting it taking too long?
If so, it makes sense that it takes longer than we need for this one
task. I figured it wouldn't hurt to have the whole DPDK directory
there, and that it could even be useful to have if the TG ever needed
it. That, and it seemed like the most straightforward approach that
keeps the two nodes set up in a similar way. Extracting the tarball is
obviously pretty quick, so I guess the real question here is whether it
is fine to add the time of one extra SCP of the DPDK tarball.


Ah, I didn't look carefully at the split. This is fine, but there are some things I noticed.

1. As Patrick mentioned, the docstrings in Node.set_up_build_target() and SutNode.set_up_build_target() would need to be updated.
2. Why are we binding ports on the TG node?
3. This shouldn't really be part of set_up_build_target(); set_up_test_run() is a better place to put it, as we don't need to copy the tarball for each build target. And, as I realized when thinking about the property (down below), we don't even need to do it per test run; once per TG node's lifetime is enough (see the sketch below).
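
To illustrate what I mean by the ordering, here's a toy, self-contained sketch; only the set_up_*/tear_down_* hook names come from the framework, the FakeNode stub and the loop are made up just to show how often each hook runs:

    class FakeNode:
        """Stand-in for Node, only to show how often each hook runs."""

        def set_up_test_run(self) -> None:
            print("set_up_test_run: copy the devbind script here, once per test run")

        def tear_down_test_run(self) -> None:
            print("tear_down_test_run: reset the remote script path here")

        def set_up_build_target(self) -> None:
            print("set_up_build_target: runs once per build target")

        def tear_down_build_target(self) -> None:
            print("tear_down_build_target")


    node = FakeNode()
    node.set_up_test_run()
    for _ in range(2):  # a test run can contain several build targets
        node.set_up_build_target()
        node.tear_down_build_target()
    node.tear_down_test_run()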

copying the script to the TG node and storing its location on the TG
node. We should have access to the script whether DTS is run from the
repository or a tarball.

We should have access to it regardless, but extracting only that one
script would work differently depending on whether DTS is run from a
tarball or a repository, since (I believe, at least) I would have to use
the tarfile library to read and extract just that one file to copy over
if it was a tarball. It would be faster, I assume, so if you think it's
worth it I could make the change. Unless you are saying that we wouldn't
need to take the devbind script from the tarball that is passed into the
DTS run at all, and could instead assume that we can just go one
directory up from `dts/` on the runner host. That could be an
interesting idea which would be faster, but I wasn't sure if that was
fine to do, since (I don't think, at least) there is anything that
technically ties you to running from inside a DPDK directory other than
the docker container.

You can run DTS from any directory, but currently DTS is always going to be in a DPDK tree (there's no other way to get DTS), so I think it's safe to assume the script is there. We can put a variable pointing to dpdk_root into utils.py and use that.
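
Something like this in utils.py is what I have in mind (just a sketch: the DPDK_ROOT/DEVBIND_SCRIPT_PATH names are made up, and the parents[2] count assumes utils.py stays at dts/framework/utils.py inside the DPDK tree):

    from pathlib import Path

    # dts/framework/utils.py -> dts/framework -> dts -> DPDK root
    DPDK_ROOT = Path(__file__).resolve().parents[2]

    # The devbind script sits in usertools/ at the top of the DPDK tree.
    DEVBIND_SCRIPT_PATH = DPDK_ROOT / "usertools" / "dpdk-devbind.py"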

My idea was copying that one file, nothing else (no tarball or anything would be needed). I think we'd only need to move _remote_tmp_dir and _path_to_devbind_script to Node and then implement set_up_test_run() on the TG node to copy just the script (with self.main_session.copy_to()) and set _path_to_devbind_script. And I guess also set _path_to_devbind_script in SutNode.tear_down_build_target() and TGNode.tear_down_test_run(), since those seem to be missing.
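
Roughly something like this on the TG node side (a sketch only; set_up_test_run(), tear_down_test_run() and self.main_session.copy_to() come from the framework, while the config argument, the DEVBIND_SCRIPT_PATH constant from the utils.py sketch above and the exact remote path handling are assumptions):

    from pathlib import PurePosixPath

    from framework.utils import DEVBIND_SCRIPT_PATH  # from the utils.py sketch above


    class TGNode(Node):
        def set_up_test_run(self, test_run_config) -> None:
            super().set_up_test_run(test_run_config)
            # Copy only the devbind script from the local DPDK tree to the TG node.
            self.main_session.copy_to(DEVBIND_SCRIPT_PATH, self._remote_tmp_dir)
            self._path_to_devbind_script = PurePosixPath(
                self._remote_tmp_dir, DEVBIND_SCRIPT_PATH.name
            )

        def tear_down_test_run(self) -> None:
            super().tear_down_test_run()
            self._path_to_devbind_script = None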

But there's actually one more thing we could improve on top of that. _path_to_devbind_script could be a property which would be used the same way in SutNode, but in TGNode, we could copy the script only if it's None and set it back to None only when closing the Node (as we need to copy the script only once per TG node lifespan).
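
That is, something like this on the TG node (again just a sketch with the assumed names from above; whether Node.close() is the right place for the reset is also an assumption):

    @property
    def path_to_devbind_script(self) -> PurePosixPath:
        # Copy the script lazily the first time it's needed and cache the
        # remote path for the rest of the TG node's lifetime.
        if self._path_to_devbind_script is None:
            self.main_session.copy_to(DEVBIND_SCRIPT_PATH, self._remote_tmp_dir)
            self._path_to_devbind_script = PurePosixPath(
                self._remote_tmp_dir, DEVBIND_SCRIPT_PATH.name
            )
        return self._path_to_devbind_script

    def close(self) -> None:
        # Reset only when the node is closed, not per build target or test run.
        self._path_to_devbind_script = None
        super().close()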
