Reading https://openwrt.org/docs/guide-user/virtualization/start it is clear this area hasn't seen much love. Several portions appear to target developers exclusively, rather than being meant for serious use. As such I was unsurprised to discover https://openwrt.org/docs/guide-developer/start#testing_openwrt_in_a_virtual_machine, which is exactly the same information duplicated in the developer guide. This area seems to need work.
One obvious problem is mixing Docker and LXC in with virtualization. Containers are useful for very basic testing, but they cannot handle many types of testing which VMs can.

It is notable that `xm` was deprecated by the Xen Project and replaced with `xl` years ago, and has since disappeared from Xen entirely (the xend toolstack behind it was removed back in Xen 4.5). Both examples also use older boot methods. If you're doing a new setup, you likely want PvGRUB. PyGRUB is semi-deprecated (PvGRUB is far superior on x86, though not yet available on other architectures), while direct kernel boot is now more often used to load PvGRUB or EDK2 as a bootloader.

The image directly under "Where is my router?" (https://openwrt.org/docs/guide-user/virtualization/virtualbox-advanced#where_is_my_router) makes me wonder whether the creator of the image had an interesting insight.

A useful capability of modern hypervisors is the ability to pass hardware devices (notably PCI and USB) into a VM. I've confirmed Xen, Bhyve and KVM have this capability, and I would be surprised if any other hypervisor lacks it, provided appropriate hardware is available (notably an IOMMU). How exactly you do this varies from hypervisor to hypervisor, but the basic idea is the hypervisor removes/hides a device from the host/control domain (dom0 in Xen terms) and installs it into another VM. This can be done with most devices, notably graphics cards, ethernet NICs and 802.11 cards.

The fun bit is: what happens if you have a hypervisor machine with a spare ethernet NIC and 802.11 card, not being used for any other purpose? On such a machine you could run OpenWRT in a VM. One interface is the physical ethernet NIC, one is the 802.11 interface, and a last interface goes to the hypervisor's internal software switch.

Thing is, this has all the characteristics of a serious use of OpenWRT. If the physical ethernet interface connects to your ISP, it can handle NAT for both the 802.11 devices and everything else on the network. The only difference is this merely uses /part/ of a high-powered computer, rather than all of an embedded device. Certainly not my intended target, but imagining a Hyper-V system using OpenWRT to handle the upstream connection seems an interesting idea.

The problems I foresee with this setup revolve around choices made for OpenWRT which are appropriate for embedded devices, but inappropriate for dynamic systems. Notably, much of OpenWRT is built around knowing the platform ahead of time, whereas when running on a hypervisor most drivers should be dynamically loaded. Except much of that support is stripped out by OpenWRT to reduce kernel size.

To make the above concrete, some rough sketches of the Xen/xl side follow. These are from memory, so treat paths and device addresses as examples rather than gospel.
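On the xm-to-xl point, the command-line change is mostly mechanical. A minimal before/after, assuming a config at /etc/xen/openwrt.cfg and a domain named "openwrt" (both names hypothetical):

    # deprecated xm toolstack (long removed upstream):
    xm create /etc/xen/openwrt.cfg
    xm console openwrt

    # current xl toolstack equivalents:
    xl create /etc/xen/openwrt.cfg
    xl console openwrt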
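For PvGRUB, a minimal PV guest config sketch. The path to the grub-xen binary is distro-dependent (the one below is where Debian's grub-xen-host package puts it), and the disk path, memory size and bridge name are assumptions; older toolstacks spell "type =" as "builder =":

    name   = "openwrt"
    type   = "pv"
    # PvGRUB2 runs inside the guest and reads the grub.cfg
    # found on the guest's own disk image:
    kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
    memory = 256
    vcpus  = 1
    disk   = [ "/var/lib/xen/openwrt.img,raw,xvda,rw" ]
    vif    = [ "bridge=br0" ]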
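For passthrough under Xen, this is the rough shape. The PCI addresses are examples (find real ones with lspci on the host), and the device generally needs to be bound to xen-pciback first:

    # make the device assignable, then hand it to a running guest:
    xl pci-assignable-add 01:00.0
    xl pci-attach openwrt 01:00.0

    # or statically, in the guest config:
    pci = [ "01:00.0" ]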
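Putting it together, the three-NIC router VM could look something like this. Again a sketch: the PCI addresses, image path and br-lan bridge name are all assumptions for illustration:

    name   = "openwrt-router"
    type   = "pv"
    kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
    memory = 256
    disk   = [ "/var/lib/xen/openwrt.img,raw,xvda,rw" ]
    # WAN NIC and 802.11 card passed through as real hardware:
    pci    = [ "01:00.0", "02:00.0" ]
    # LAN side: a virtual interface on the internal software switch:
    vif    = [ "bridge=br-lan" ]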
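And on the driver problem: the stock x86-64 images at least ship with opkg, so the guest can pull in kernel modules for whatever hardware was passed through. The package names below are purely illustrative; the right kmods depend on the actual NIC and 802.11 card, and the wpad variant name differs between releases:

    # inside the OpenWRT guest:
    opkg update
    opkg install kmod-e1000e kmod-ath9k wpad-basic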