Hi Guys,
Lately, I have been trying to play with the Xen driver domain. I have been able
to make it work when both the driver domain OS and the guest OS run as
paravirtualized (PV) machines. However, it doesn't work when either of them is a
hardware-virtualized machine (HVM). Therefore, I have the
I was trying to understand the following things regarding the PV driver.
1. Who creates the frontend and backend instances?
2. When are these instances created?
3. How are the xenbus directories created? What is the hierarchy of these
directories?
4. What is the role of "vifname" and who sets it?
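For reference, here is roughly what I see on my working PV setup (the exact
paths and names below are from memory, so please treat them as approximate):

    # DomU config fragment: 'backend=' names the driver domain, and 'vifname='
    # optionally names the backend-side interface (names here are illustrative).
    vif = [ 'bridge=xenbr0,backend=driverdom,vifname=vif-guest0' ]

    # After 'xl create', the toolstack writes a frontend/backend pair into
    # xenstore, roughly under:
    #   /local/domain/<backend-domid>/backend/vif/<guest-domid>/<devid>  (backend)
    #   /local/domain/<guest-domid>/device/vif/<devid>                   (frontend)
    # which can be inspected with, e.g.:
    xenstore-ls /local/domain/0/backend/vif   # replace 0 with the backend domid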
Please let me know.
Anthony and Roger, thanks for your informative responses. It helped a lot.
> I'm however unsure by what you mean with instance, so you might have
> to clarify exactly what you mean in order to get a more concise
> reply.
Let's say there are two DomU's, and their respective network interfaces ar
Hi everyone,
Lately, I have been experimenting with 10Gb NIC performance on Xen domains. I
have found that network performance is very poor for PV networking when a
driver domain is used as a network backend.
In my experimental setup, I have two machines connected by the 10Gb network: a
server
The driver domain is HVM. Both the driver domain and
On Monday, April 27, 2020, 1:28:13 AM EDT, Jürgen Groß wrote:
> Is the driver domain PV or HVM?
The driver domain is HVM.
> How many vcpus do dom0, the driver domain and the guest have?
Dom0 has 12 vcpus, pinned. Both the driver d
> Driver domains with passthrough devices need to perform IOMMU
> operations in order to add/remove page table entries when doing grant
> maps (ie: IOMMU TLB flushes), while dom0 doesn't need to because it
> has the whole RAM identity mapped in the IOMMU tables. Depending on
> how fast your IOMMU is and wh
> Do you get the expected performance from the driver domain when not
> using it as a backend? Ie: running the iperf benchmarks directly on
> the driver domain and not on the guest.
Yes, the bandwidth between the driver domain and the client machine is close to
10Gbits/sec.
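For completeness, the iperf runs I mention are roughly along these lines (the
flags, duration, and address are illustrative):

    # On the client machine across the 10Gb link:
    iperf -s

    # In the guest for the backend test, or in the driver domain itself for
    # the baseline measurement:
    iperf -c 10.0.0.2 -t 30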
Hi all,
I was doing some experiments on the Xen network Driver Domain using Ubuntu
18.04. I was able to see that the driver domain works fine when I run it in PV
mode. However, I wasn't able to make the driver domain work when I run it in
HVM mode. I get the following error when I want my DomU to u
Hi Marek,
Thanks for your response. The server machine I am using for this setup is an
x86_64 Intel Xeon. For the Dom0, I am using Ubuntu 18.04 running on kernel
version 5.0.0-37-generic. My Xen version is 4.9.2.
For the HVM driver domain, I am using Ubuntu 18.04 running on kernel version
5.0
> I don't see what is wrong here. Are you sure the backend domain is running?
If you mean the HVM network driver domain, then yes, I am running the backend
domain.
> Probably irrelevant at this stage, but do you have "xendriverdomain" service
> running in the backend?
I do not have this service running.
I wasn't able to make the HVM driver domain work even with the latest Xen
version, which is 4.14. I see the 'xendriverdomain' init script in the
/etc/init.d/ directory, but it doesn't seem to be running in the background.
On the other hand, I see the official "Qubes OS Architecture" document
(
> > builder = "hvm"
> > name = "ubuntu-doment-hvm"
> This name...
> > vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]
> ...and this name don't match.
Jason,
Thanks for pointing this out. I feel very stupid. The problem is not solved
yet, but I was able to get to the next step wi
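For anyone else hitting this: the fix is simply to make the two names match.
Assuming the driver domain is indeed meant to be called ubuntu-domnet-hvm, the
relevant lines would be:

    # Driver domain's own config file:
    builder = "hvm"
    name = "ubuntu-domnet-hvm"

    # Guest (DomU) config file, pointing the vif backend at that domain:
    vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]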
Roger,
> You can also start xl devd manually, as that will give you verbose
> output of what's going on. In the driver domain:
> # killall xl
> # xl -vvv devd -F
> That should give you detailed output of what's going on in the driver
> domain, can you paste the output you get from the driver doma
> BTW, are you creating the driver domain with 'driver_domain=1' in the xl
> config file?
No, I wasn't aware of the 'driver_domain' configuration option before, and this
is what I was missing. With this configuration option, I was able to make the
HVM driver domain work. However, the PV drive
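For reference, the option goes in the driver domain's own xl config file, e.g.
(domain name reused from the earlier snippet):

    # Marks this domain as a driver domain, so the toolstack sets up what the
    # domain needs in order to host backends for other guests.
    name = "ubuntu-domnet-hvm"
    driver_domain = 1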
> 'xl devd' should add the backend interfaces (vifX.Y) to the bridge if
> properly configured, as it should be calling the hotplug scripts to do that.
Yes, running 'xl devd' in the driver domain before launching the DomU solved
the bridge issue. Thanks a lot.
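So the working sequence, as far as I can tell, is roughly (config file name
illustrative):

    # In the driver domain: run the backend daemon (the xendriverdomain init
    # script does essentially the same thing).
    xl devd

    # Then, in dom0: create the guest whose vif backend points at the driver
    # domain.
    xl create domu.cfg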
So, for the people who end up rea
Hello,
This email is to request the Xen community's feedback on our work implementing
Xen’s driver domains using the unikernel virtual machine model (as opposed to
using general-purpose OSs like Linux) to reduce the attack surface, among other
benefits. The effort, called Kite, has implemented d
Hi,
Back in the year 2020, I was inquiring into the status of PCI passthrough
support for PVH guests. At that time, Arm people were working on using vPCI for
guest VMs. The expectation was to port that implementation to x86 once ready.
I was wondering if there is any update on this. Does Xen su
driver domain, like we can do with Xen on x86?
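On x86, what I have in mind is the usual xl passthrough flow, roughly like the
following (the domain name and BDF are just examples):

    # In dom0: make the device assignable, then hand it to the driver domain.
    xl pci-assignable-add 0000:03:00.0
    xl pci-attach ubuntu-domnet-hvm 0000:03:00.0

    # Or statically, via the driver domain's config file:
    pci = [ '0000:03:00.0' ]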
Please let me know.
Regards,
Mehrab
On Monday, February 7, 2022, 02:57:45 AM EST, Jan Beulich
wrote:
On 06.02.2022 06:59, tosher 1 wrote:
> Back in the year 2020, I was inquiring into the status of PCI passthrough
> support f
On 10 Feb 2022, at 07:22, tosher 1 wrote:
>
> Hi Jan,
>
> Thanks for letting me know this status.
>
> I am wondering if PCI passthrough is at least available in Arm for other
> virtualization modes like PV, HVM, or PVHVM. For example, is it possible for
> someone to attach a P
Hi Julien,
Thanks for the clarification!
Regards,
Mehrab
On Thursday, February 10, 2022, 06:12:53 PM EST, Julien Grall
wrote:
Hi Bertrand,
On 10/02/2022 08:32, Bertrand Marquis wrote:
>> On 10 Feb 2022, at 07:22, tosher 1 wrote:
>>
>> Hi Jan,
>>
>> T
Hi,
As of Xen 4.10, PCI passthrough support was not available in PVH mode. I was
wondering if PCI passthrough support was added in a later version.
It would be great to know the latest status of the PCI passthrough support for
the Xen PVM mode. Please let me know if you have any updates on
Hi Roger,
> I think you meant PVH mode in the sentence above instead of PVM?
Sorry, that was a typo. I meant PVH.
> Arm folks are working on using vPCI for domUs, which could easily be picked
> up by x86 once ready. There's also the option to import xenpt [0] from Paul
> Durrant and use it wit