On Thu, Dec 12 2024, Eric Auger <eric.au...@redhat.com> wrote:

> Connie,
>
> On 12/6/24 12:21, Cornelia Huck wrote:
>> Whether it makes sense to continue with the approach of tweaking values
>> in the ID registers in general. If we want to be able to migrate
>> between cpus that do not differ wildly, we'll encounter differences
>> that cannot be expressed via FEAT_xxx -- e.g. when comparing various
>> AmpereAltra Max systems, they only differ in parts of CTR_EL0 -- which
>> is not a feature register, but a writable register.
> In v1 most of the commenters said they would prefer to see FEAT props
> instead of IDREG field props. I think we shall try to go in this
> direction anyway. As you pointed out, there will be some cases where
> FEAT won't be enough (CTR_EL0 is a good example). So I tend to think
> the end solution will be a mix of FEAT and ID reg field props.
Some analysis of FEAT_xxx mappings:
https://lore.kernel.org/qemu-devel/87ikstn8sc....@redhat.com/
(Actually, ~190 of the FEAT_xxx map to a single value in a single
register, so the mappings themselves are easy; the problem is the sheer
number of them.) We probably should simply not support FEAT_xxx that are
solely defined via dependencies.

Some more real-world examples from some cpu pairings I had looked at:
https://lore.kernel.org/qemu-devel/87ldx2krdp....@redhat.com/
(but also see Peter's follow-up; the endianness field is actually
covered by a feature)

The values-in-registers-not-covered-by-features we are currently aware
of are:
- number of breakpoints
- PARange values
- GIC
- some fields in CTR_EL0
(see also
https://lore.kernel.org/qemu-devel/4fb49b5b02bb417399ee871b2c85b...@huawei.com/
for the latter two)
Also, MIDR/REVIDR handling.

Given that we'll need a mix if we support FEAT_xxx, should we mandate
the FEAT_xxx syntax if there is a mapping and allow direct specification
of register fields only if there is none, or allow them as alternatives
(with proper priority handling, or alias handling?)

> Personally I would smoothly migrate what we can from ID reg field
> props to FEAT props (maybe using prop aliases?), starting from the
> easiest 1-1 mappings and then addressing the FEATs that are more
> complex but are explicitly needed to enable the use cases we are
> interested in at Red Hat: migration within the Ampere AltraMax family,
> migration within the NVidia Grace family, migration within the
> AmpereOne family, and migration between Graviton3/4.

For these, we'll already need the mix (my examples above all came from
these use cases.)

(Of course, the existing legacy props need to be expressed as well. I
guess they should map to registers directly.)

> We have no info about others' use cases. If some of you want to see
> some other live migration combinations addressed, please raise your
> voice. Some CSPs may have their own LM solution/requirements but they
> don't use qemu.
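To make the mandate-FEAT-where-a-mapping-exists option concrete, here is
a rough Python sketch (not QEMU code; the table layout, helper names,
and the idea of rejecting a raw field prop when a FEAT covers it are all
illustrative -- only the FEAT_LSE and FEAT_BTI field positions are taken
from the architecture):

```python
# Illustrative 1-1 FEAT_xxx -> ID-register-field table:
# (register, msb, lsb, field value that advertises the feature)
FEAT_MAP = {
    "FEAT_LSE": ("ID_AA64ISAR0_EL1", 23, 20, 2),  # Atomic field
    "FEAT_BTI": ("ID_AA64PFR1_EL1",   3,  0, 1),  # BT field
}

def set_feature(idregs, feat):
    """Set the ID register field that a 1-1 mapped feature corresponds to."""
    reg, msb, lsb, val = FEAT_MAP[feat]
    mask = ((1 << (msb - lsb + 1)) - 1) << lsb
    idregs[reg] = (idregs.get(reg, 0) & ~mask) | (val << lsb)

def set_field(idregs, reg, msb, lsb, val):
    """Allow a raw register-field prop only where no FEAT_xxx covers it
    (e.g. the CTR_EL0 fields mentioned above)."""
    for feat, (r, m, l, _) in FEAT_MAP.items():
        if (r, m, l) == (reg, msb, lsb):
            raise ValueError(f"use {feat} instead of a raw field prop")
    mask = ((1 << (msb - lsb + 1)) - 1) << lsb
    idregs[reg] = (idregs.get(reg, 0) & ~mask) | (val << lsb)
```

With the alternative (allow both syntaxes) we would instead need a
priority or alias rule deciding which prop wins when both are given.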
> So I think we shall concentrate on those use cases.
>
> You did the exercise to identify the most prevalent patterns for FEAT
> to IDREG field mappings. I think we should now encode this conversion
> table for those which are needed in the above use cases.

I'd focus on the actually needed features first, as otherwise it's
really overwhelming.

> From a named model point of view, since I do not see much traction
> upstream besides the Red Hat use cases, targeting ARM spec revision
> baselines may be overkill. Personally I would try to focus on the
> above models: AltraMax, AmpereOne, Grace, ... Or maybe the ARM cores
> they may be derived from. According to the discussion we had with Marc
> in [1] it seems it does not make sense to target migration between
> very heterogeneous machines, and Dan said we would prefer to avoid
> adding plenty of FEAT add-ons to named models. So I would rather be as
> close as possible to a specific family definition.

Using e.g. Neoverse-V2 as a base currently looks most attractive to me
-- going with Armv<x>.<y> would probably give a larger diff (although
the diff for Graviton3/4 is pretty large anyway.)

> Thanks
>
> Eric
>
> [1]
> https://lore.kernel.org/all/c879fda9-db5a-4743-805d-03c0acba8...@redhat.com/#r
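P.S.: the "smaller diff" criterion for picking a base model can be made
mechanical: given per-register field layouts, count the fields a
candidate base would need overridden to reach the target. A hypothetical
sketch (the layout and model values below are made up for illustration,
not real cpu data):

```python
def differing_fields(base, target, layout):
    """Yield (reg, field) pairs whose values differ between two cpus.
    layout: reg -> list of (field_name, msb, lsb)."""
    for reg, fields in layout.items():
        b, t = base.get(reg, 0), target.get(reg, 0)
        for name, msb, lsb in fields:
            mask = (1 << (msb - lsb + 1)) - 1
            if (b >> lsb) & mask != (t >> lsb) & mask:
                yield reg, name

def best_base(candidates, target, layout):
    """Return the candidate base model needing the fewest overrides."""
    return min(candidates,
               key=lambda n: sum(1 for _ in
                                 differing_fields(candidates[n],
                                                  target, layout)))
```

This is only a way to quantify "Neoverse-V2 gives a smaller diff than
Armv<x>.<y>"; it says nothing about which overrides are expressible as
FEAT props versus raw fields.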