Hi Juergen,
On 09/12/2020 16:16, Juergen Gross wrote:
Add /domain/<domid> directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be implemented.
Signed-off-by: Juergen Gross <jgr...@suse.com>
---
V3:
- new patch
---
docs/misc/hypfs-paths.pandoc | 10 +++
xen/common/Makefile | 1 +
xen/common/hypfs_dom.c | 137 +++++++++++++++++++++++++++++++++++
3 files changed, 148 insertions(+)
create mode 100644 xen/common/hypfs_dom.c
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index e86f7d0dbe..116642e367 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -34,6 +34,7 @@ not containing any '/' character. The names "." and ".." are
reserved
for file system internal use.
VALUES are strings and can take the following forms (note that this represents
+>>>>>>> patched
This seems to be a left-over of a merge.
only the syntax used in this document):
* STRING -- an arbitrary 0-delimited byte string.
@@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
Writing a value is allowed only for cpupools with no cpu assigned and if the
architecture is supporting different scheduling granularities.
[...]
+
+static int domain_dir_read(const struct hypfs_entry *entry,
+ XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+ int ret = 0;
+ const struct domain *d;
+
+ for_each_domain ( d )
This is definitely going to be an issue if you have a lot of domains
running, as Xen is not preemptible.
I think the first step is to make sure that HYPFS can scale without
hogging a pCPU for a long time.
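To illustrate the kind of pattern I have in mind (an untested,
self-contained sketch with made-up names and constants -- not the actual
hypfs or Xen interfaces), the read loop could bail out after a bounded
amount of work, return an -ERESTART-style code, and record a
continuation point so the caller can resume rather than hog the pCPU:

```c
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for a preemption check such as Xen's
 * hypercall_preempt_check(); here we simply pretend preemption is
 * requested after every 3 domains processed. */
static int processed;
static bool preempt_check(void)
{
    return processed > 0 && processed % 3 == 0;
}

/* Hypothetical values for the sketch only. */
#define NR_DOMAINS 8
#define ERESTART   85

/* Hypothetical read loop: emit one domain per iteration, but stop at
 * the preemption check and record where to resume, instead of walking
 * the whole domain list in one go. */
static int domain_dir_read(unsigned int *start)
{
    unsigned int domid;

    for ( domid = *start; domid < NR_DOMAINS; domid++ )
    {
        printf("emit d%u\n", domid);
        processed++;
        if ( preempt_check() )
        {
            *start = domid + 1;   /* resume here on the next call */
            return -ERESTART;
        }
    }

    return 0;
}
```

The caller would then loop on -ERESTART (in Xen, via a hypercall
continuation) until the read completes.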
Cheers,
--
Julien Grall