On Mon, Jul 30, 2018 at 11:16:38AM +0200, David Hildenbrand wrote:
> On 27.07.2018 14:55, Cornelia Huck wrote:
> > On Wed, 25 Jul 2018 11:12:33 +0200
> > David Hildenbrand <da...@redhat.com> wrote:
> >
> >> The "max" CPU model behaves like "-cpu host" when KVM is enabled, and like
> >> a CPU with the maximum possible feature set when TCG is enabled.
> >>
> >> While the "host" model cannot be used under TCG ("kvm_required"), the
> >> "max" model can and "Enables all features supported by the accelerator in
> >> the current host".
> >>
> >> So we can treat "host" just as a special case of "max" (like x86 does).
> >> It differs from the "qemu" CPU model under TCG in that compatibility
> >> handling will not be performed and that some experimental CPU features
> >> not yet part of the "qemu" model might be indicated.
> >>
> >> Right now these are (under TCG, see "qemu_MAX"):
> >> - stfle53
> >> - msa5-base
> >> - zpci
> >>
> >> This currently results in the following warning when starting QEMU TCG
> >> with the "max" model:
> >> "qemu-system-s390x: warning: 'msa5-base' requires 'kimd-sha-512'."
> >>
> >> The "qemu" model (used as default in QEMU under TCG) will continue to
> >> work without such warnings. The "max" model in the current form
> >> might be interesting for kvm-unit-tests (where we would e.g. now also
> >> test "msa5-base").
> >>
> >> The "max" model is neither static nor migration safe (like the "host"
> >> model). It is independent of the machine but depends on the accelerator.
> >> It can be used to detect the maximum CPU model also under TCG from upper
> >> layers without having to care about CPU model names for CPU model
> >> expansion.
> >>
> >> Signed-off-by: David Hildenbrand <da...@redhat.com>
> >> ---
> >>  target/s390x/cpu_models.c | 81 +++++++++++++++++++++++++++------------
> >>  1 file changed, 56 insertions(+), 25 deletions(-)
> >
> > So, what's the outcome? Can I merge this with the discussed minor
> > edits, or should I wait for a v2?
> >
>
> Eduardo identified possible optimizations independent of this patch, so
> we should be good to go. @Eduardo, please correct me if I'm wrong!
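Just as a pointer for anyone wanting to consume this from upper layers:
a minimal sketch of probing the "max" model through QMP's
query-cpu-model-expansion (standard QAPI arguments assumed; the exact
set of properties returned will of course depend on the QEMU version
and the accelerator in use):

  { "execute": "query-cpu-model-expansion",
    "arguments": { "type": "full",
                   "model": { "name": "max" } } }

The reply carries the expanded model with its individual feature
properties, so tooling can discover the maximum model without
hard-coding CPU model names.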
This version still looks good to me; my Reviewed-by line still applies.

Thanks!

-- 
Eduardo