On Sat, 4 Jul 2020 14:34:04 -0400 "Michael S. Tsirkin" <m...@redhat.com> wrote:
> On Tue, Jun 16, 2020 at 06:50:34AM +0200, Halil Pasic wrote:
> > The atomic_cmpxchg() loop is broken because we occasionally end up with
> > old and _old having different values (a legit compiler can generate code
> > that accessed *ind_addr again to pick up a value for _old instead of
> > using the value of old that was already fetched according to the
> > rules of the abstract machine). This means the underlying CS instruction
> > may use a different old (_old) than the one we intended to use if
> > atomic_cmpxchg() performed the xchg part.
> 
> And was this ever observed in the field? Or is this a theoretical issue?
> commit log should probably say ...
> 

It was observed in the field (Christian already answered). I think the
message already implies this, because the only conditional wording is about
the compiler behavior.

> > 
> > Let us use volatile to force the rules of the abstract machine for
> > accesses to *ind_addr. Let us also rewrite the loop so, we that the
> 
> we that -> we know that?

s/we//

It would be nice to fix this before the patch gets merged.

> 
> > new old is used to compute the new desired value if the xchg part
> > is not performed.
> > 
> > Signed-off-by: Halil Pasic <pa...@linux.ibm.com>
> > Reported-by: Andre Wild <andre.wi...@ibm.com>
> > Fixes: 7e7494627f ("s390x/virtio-ccw: Adapter interrupt support.")
> > ---
> >  hw/s390x/virtio-ccw.c | 18 ++++++++++--------
> >  1 file changed, 10 insertions(+), 8 deletions(-)
> > 
> > diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
> > index c1f4bb1d33..3c988a000b 100644
> > --- a/hw/s390x/virtio-ccw.c
> > +++ b/hw/s390x/virtio-ccw.c
> > @@ -786,9 +786,10 @@ static inline VirtioCcwDevice *to_virtio_ccw_dev_fast(DeviceState *d)
> >  static uint8_t virtio_set_ind_atomic(SubchDev *sch, uint64_t ind_loc,
> >                                       uint8_t to_be_set)
> >  {
> > -    uint8_t ind_old, ind_new;
> > +    uint8_t expected, actual;
> >      hwaddr len = 1;
> > -    uint8_t *ind_addr;
> > +    /* avoid multiple fetches */
> > +    uint8_t volatile *ind_addr;
> >  
> >      ind_addr = cpu_physical_memory_map(ind_loc, &len, true);
> >      if (!ind_addr) {
> > @@ -796,14 +797,15 @@ static uint8_t virtio_set_ind_atomic(SubchDev *sch, uint64_t ind_loc,
> >                       __func__, sch->cssid, sch->ssid, sch->schid);
> >          return -1;
> >      }
> > +    actual = *ind_addr;
> >      do {
> > -        ind_old = *ind_addr;
> > -        ind_new = ind_old | to_be_set;
> > -    } while (atomic_cmpxchg(ind_addr, ind_old, ind_new) != ind_old);
> > -    trace_virtio_ccw_set_ind(ind_loc, ind_old, ind_new);
> > -    cpu_physical_memory_unmap(ind_addr, len, 1, len);
> > +        expected = actual;
> > +        actual = atomic_cmpxchg(ind_addr, expected, expected | to_be_set);
> > +    } while (actual != expected);
> > +    trace_virtio_ccw_set_ind(ind_loc, actual, actual | to_be_set);
> > +    cpu_physical_memory_unmap((void *)ind_addr, len, 1, len);
> >  
> > -    return ind_old;
> > +    return actual;
> >  }
> 
> I wonder whether cpuXX APIs should accept volatile pointers, too:
> casting away volatile is always suspicious.
> But that is a separate issue ...
> 

Nod. Thanks for having a look!

> >  static void virtio_ccw_notify(DeviceState *d, uint16_t vector)
> > -- 
> > 2.17.1
> 
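
For anyone who wants to play with the loop shape outside QEMU, here is a
minimal, self-contained sketch. The function name set_bits_atomic() and the
GCC/Clang __atomic_compare_exchange_n() builtin are my stand-ins for
virtio_set_ind_atomic() and QEMU's atomic_cmpxchg(); the point is only that
the value observed by a failed compare-and-swap is reused as the next
expected value, so there is no second plain read of the indicator byte that
the compiler could duplicate.

/* cas_loop.c - sketch of the fixed loop shape, not the QEMU implementation */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t set_bits_atomic(volatile uint8_t *ind_addr, uint8_t to_be_set)
{
    /* one explicit fetch; afterwards only CAS results are used */
    uint8_t expected = *ind_addr;

    /*
     * On failure, __atomic_compare_exchange_n() writes the value it found
     * into 'expected', so the retry uses exactly what the CAS observed
     * instead of re-reading *ind_addr.
     */
    while (!__atomic_compare_exchange_n(ind_addr, &expected,
                                        (uint8_t)(expected | to_be_set),
                                        false, __ATOMIC_SEQ_CST,
                                        __ATOMIC_SEQ_CST)) {
        /* retry with the updated 'expected' */
    }
    return expected; /* old value at the moment the exchange succeeded */
}

int main(void)
{
    static volatile uint8_t indicator = 0x01;
    uint8_t old = set_bits_atomic(&indicator, 0x80);

    printf("old=0x%02x new=0x%02x\n", (unsigned)old, (unsigned)indicator);
    return 0;
}

This is the same shape as the patched loop above: the expected value is
seeded once, and every retry feeds the compare-and-swap result back in.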