Hello Theo, Mike, All,

@Theo: understood, it is important to protect the developers and the project goals.
@Mike: thanks for your generosity with the time you took on this thread. Yes, I want Mike to make VMM more awesome :)
@Mike: keep up the good work.
I can't disagree with any point that Theo made in his email on this thread. That said, unfortunately I can't always choose my hypervisor, and I dearly want to run OpenBSD on it (Proxmox)... I do think, based on the fact that OpenBSD 6.0-6.2 works on PVE 4.4, that this is probably a virtual-hardware issue, not necessarily an OpenBSD issue. I will raise it with the PVE support guys (as I have already been doing since mid-July).

Any further posts on this thread from me will hopefully be for other OpenBSD users' benefit (if I make progress), and are certainly not intended as a request or a distraction for core OpenBSD developers.

All the best,
Tom Smyth

On 27 October 2017 at 06:37, Theo de Raadt <dera...@openbsd.org> wrote:
> Tom,
>
> A virtual machine setup is an operating system running on an operating
> system on top of an operating system.
>
> OK, not quite. The middle one, the VM itself, is a bit less complex
> than a full operating system as machine-independent code goes, but
> nevertheless the machine-dependent bat-shit-crazy stuff is far more
> complex, with gobs of extremely messy nuances to face on both sides,
> because x86 is a fucking minefield.
>
> Everyone needs to adjust their expectation that all 3 layers are
> perfect, AND not assume that it is our layer doing the wrong thing.
>
> Really the layers should simplify, but the current marketplace is still
> gaining more value out of product differentiation than
> simplification+convergence, both sw and hw.
>
> Even if our subsystem isn't doing something 'right', it is NOT the
> stated goal of OpenBSD to run well on every garbage VM, because it has
> become impossible for the little guy to be perfect.
>
> Concerted efforts to diagnose and improve these low-level issues use
> the same crowd of people who are trying to improve other edges which
> may be more important. Do you want our vmm to work well? Or do you
> want us to work better on someone else's vmm? Sorry, limited
> skillset; pick what you want mlarkin to focus on! But that is unfair,
> and even if he listened to your wishlist, UNPRODUCTIVE.
>
> Where does this go? Get ready for monopolies in everything, or
> oligopolies at best... or fight their establishment.
>
>> Just to say, the gaps in ping response seem to get worse as the
>> uptime increases, i.e.:
>> with the uptime around 5 minutes, the gaps between ping results are
>> around 1 sec (what I consider normal)
>> with the uptime around 2 hrs 45 minutes, the gaps between ping
>> results are 13 sec
>> with the uptime around 8 hrs 30 minutes, the gaps between ping
>> results are 35 seconds
>>
>> Output of sysctl kern.timecounter below:
>>
>> kern.timecounter.tick=1
>> kern.timecounter.timestepwarnings=0
>> kern.timecounter.hardware=acpihpet0
>> kern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000)
>> dummy(-1000000)
>>
>> I will change the timecounter from the ACPI HPET to i8254 now and
>> report back later on.
>> Thanks
>>
>>
>> On 26 October 2017 at 20:25, Mike Belopuhov <m...@belopuhov.com> wrote:
>> > On Thu, Oct 26, 2017 at 19:05 +0100, Tom Smyth wrote:
>> >> Lads,
>> >>
>> >> I'm pleased to say that in my testing, OpenBSD 6.1 and OpenBSD 6.2
>> >> Release amd64 appear to work a little better in Proxmox PVE 5.1 as
>> >> released this week.
>> >>
>> >> I used ISO version 5.1-722cc488-1 from Proxmox,
>> >> updated on 24 October 2017.
>> >>
>> >> The console no longer freezes, but after a few hours the console
>> >> (the VGA console accessed via the Proxmox web interface) seems to
>> >> lag a little: the interval between pings, for instance, grows to
>> >> 13 seconds, which is a bit strange... i.e. it takes 13 seconds for
>> >> each line of ping output, which is unusual.
>> >> I'll report more feedback later, but at least OpenBSD is not
>> >> freezing as badly in this version of Proxmox PVE 5.1.
>> >>
>> >
>> > Hi,
>> >
>> > Can you please show us the output of "sysctl kern.timecounter".
>> > If you're currently using acpihpet0, can you please try
>> > switching to acpitimer0 (and if that doesn't help, i8254) via
>> >
>> > sysctl kern.timecounter.hardware=acpitimer0
>> >
>> > and attempt to reproduce the 13 second delay.
>> >
>> > Regards,
>> > Mike
>> >
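[Editor's note] For anyone else chasing this, below is a minimal sketch of the diagnostic steps Mike suggests above, plus one way to timestamp the ping output so the gaps can be measured rather than eyeballed. The gateway address 192.0.2.1 is a placeholder; substitute a host on your own network.

    # Show the current timecounter configuration; kern.timecounter.choice
    # lists the available sources with their quality ratings in parentheses.
    sysctl kern.timecounter

    # Switch the active timecounter at runtime (takes effect immediately,
    # no reboot required); try acpitimer0 first, then i8254 if the lag
    # persists.
    sysctl kern.timecounter.hardware=acpitimer0

    # To keep the setting across reboots, add it to /etc/sysctl.conf.
    echo kern.timecounter.hardware=acpitimer0 >> /etc/sysctl.conf

    # Prefix each line of ping output with the wall-clock time so the
    # gap between replies is visible directly (192.0.2.1 is a placeholder).
    ping 192.0.2.1 | while read -r line; do
        echo "$(date +%T) $line"
    done

If the lag disappears with i8254 (the lowest-quality source in the choice list above) but not with the ACPI timers, that would tend to support Tom's suspicion that the emulated timer hardware in PVE, rather than OpenBSD, is at fault.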