On Tue, Dec 22, 2015 at 10:06:25AM -0500, Eric Shelton wrote:
> The XSA mentions that "PV frontend patches will be developed and
> released (publicly) after the embargo date."  Has anything been done
> towards this that should also be incorporated into MiniOS?  On a
> system utilizing a "driver domain," where a backend is running on a
> domain that is considered unprivileged and untrusted (such as the
> example described in http://wiki.xenproject.org/wiki/Driver_Domain),
> it seems XSA-155-style double fetch vulnerabilities in the frontends
> are also a potential security concern, and should be eliminated.
> However, perhaps that does not include pcifront, since pciback would
> always be running in dom0.
And BTW the same applies to Linux frontends, for which I also haven't
seen any public development. Attached is my email to the
xen-security-issues-discuss list (sent during the embargo), with the
patches included there. I haven't received any response.

PS Dropping minios-devel

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
--- Begin Message ---
On Fri, Dec 11, 2015 at 11:52:17AM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Dec 11, 2015 at 12:03:01PM +0100, Marek Marczykowski-Górecki wrote:
> > Hi,
> > 
> > In advisory it's said that PV frontend patches will be developed and
> > released (publicly) after the embargo date. But in some cases (namely:
> > if driver domains are in use), frontends may be more trusted/privileged
> > than backends. So it would make sense to consider developing frontend
> > patches before embargo end date. At least for the most common ones:
> > network and block.
> > 
> > Do you have some list of places that needs to be fixed? I can try to
> > help with patches, but need some input on this...

Patches (against 4.4) for netfront and blkfront attached. I don't have a
test case for the actual XSA-155 attack, but at least the patches don't
break normal usage. Some comments/questions below. Sorry for the delay.

> Any code where the shared page is being accessed. For example
> blkif_interrupt.

Added RING_COPY_RESPONSE here. It would probably be enough to use just
READ_ONCE(bret->operation), but blkif_response isn't that big, so the
safer option is better.

The other places in the block frontend are about preparing a request -
it should not reuse data already put into the shared ring, because a
(malicious) backend might already have changed it. Generally I added a
local structure on the stack, filled it there, and only then put it on
the ring. And saved a copy to info->shadow[id].req (the original one,
not the one from the shared ring!). Maybe it would be better to use
info->shadow[id].req in the first place (and then copy it to the shared
ring)? Theoretically one memory copy less. Does it matter?

> We probably should add another macro: RING_COPY_RESPONSE which
> would replace the RING_GET_RESPONSE and copy the response to
> an on stack variable and then operated on.
> 
> Ditto with xennet_alloc_rx_buffers and the RING_GET_REQUEST.

I'm not sure about this one - there are only writes to that request, no
reads. Or am I missing some obscure code there?

> And xennet_tx_buf_gc.

Additionally added BUG_ON(id >= NET_TX_RING_SIZE). I'm not sure about
the impact of the backend handing back an out-of-range id, so this makes
sure it's a DoS in the _worst_ case.

> There is also xennet_tx_setup_grant.

It looks like the tx pointer outlives this function. So instead of
changing it to a local copy, I made sure no data is read from that
structure (it is only written to). Similarly for the gso handling in
xennet_start_xmit - it's only filled with data. Not sure about
xennet_move_rx_slot, but I guess it's safe.

> xen-pcifront looks to be safe - it memcopies the fields. However
> as I've found out - doing 'memcpy' does not mean the compiler
> will do that - it may convert that just to a pointer operation.
> So an barrier() will have to be sprinkled in 'do_pci_op'.

There is already a wmb() after the first memcpy. Is it necessary to have
another one before the memcpy in the other direction?
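[To make the bug class concrete: the fragment below is a simplified,
hypothetical illustration of the double-fetch pattern on a shared ring
and the copy-first fix that the patches implement. struct resp, NR_OPS
and the handlers table are made-up names, not code from any frontend.]

    #define NR_OPS 2

    static void op_read(void)  { /* ... */ }
    static void op_write(void) { /* ... */ }

    static void (*handlers[NR_OPS])(void) = { op_read, op_write };

    struct resp {
        unsigned int op;
        unsigned int id;
    };

    /* VULNERABLE: several fetches from memory the backend can rewrite
     * in between.  The bounds check sees one value of shared->op; the
     * array index (and the call) may see another. */
    static void handle_unsafe(volatile struct resp *shared)
    {
        if (shared->op < NR_OPS && handlers[shared->op])
            handlers[shared->op]();
    }

    /* SAFE: snapshot the whole response into private memory first,
     * then validate and use only the local copy - the same idea as
     * RING_COPY_RESPONSE() below. */
    static void handle_safe(volatile struct resp *shared)
    {
        struct resp local = *shared;    /* single copy out of shared memory */

        if (local.op < NR_OPS && handlers[local.op])
            handlers[local.op]();
    }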
$ shasum -a 256 *.patch
f7b0c45dbd2c91bee5cc151b1c53b5b452e4e448991204ab6ac6aeb3b5811674  xsa155-linux-0008-xen-Add-RING_COPY_RESPONSE.patch
a9f34c6b0b746bdd49594ddabab71ff18477b1cbad62d7cd380f3ca5dc9ff7e3  xsa155-linux-0009-xen-netfront-copy-response-out-of-shared-buffer-befo.patch
e924fb7b344135e21c7df9ed6297165275646ea97b9d3072b9f1cc0152d88bec  xsa155-linux-0010-xen-netfront-do-not-use-data-already-exposed-to-back.patch
03ac14dcbfb2ccd9e1c375e57e2c9184075d29ffadbaf696a8b015f7eb6bb2c0  xsa155-linux-0011-xen-netfront-add-range-check-for-Tx-response-id.patch
e71bbd472b33af4424bee936019b11af2dfc1e3512d13d1aad9713fd8b337227  xsa155-linux-0012-xen-blkfront-make-local-copy-of-response-before-usin.patch
4c99b24ed5092e45c504c84312b619cb77c7ee1169b95002a9ea77fb96523d1d  xsa155-linux-0013-xen-blkfront-prepare-request-locally-only-then-put-i.patch

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

From 8322f4eddaf1fe5a9bdf5252c8140daa8bad60fd Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Tue, 15 Dec 2015 21:35:14 +0100
Subject: [PATCH 08/13] xen: Add RING_COPY_RESPONSE()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

Using RING_GET_RESPONSE() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected). Safe usage of a response
generally requires taking a local copy.

Provide a RING_COPY_RESPONSE() macro to use instead of
RING_GET_RESPONSE() and an open-coded memcpy(). This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

CC: sta...@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 include/xen/interface/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..312415c 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -198,6 +198,20 @@ struct __name##_back_ring {				\
 #define RING_GET_RESPONSE(_r, _idx)				\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
+/*
+ * Get a local copy of a response.
+ *
+ * Use this in preference to RING_GET_RESPONSE() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _rsp is a struct which consists of only bitfields.
+ */
+#define RING_COPY_RESPONSE(_r, _idx, _rsp) do {			\
+	/* Use volatile to force the copy into _rsp. */		\
+	*(_rsp) = *(volatile typeof(_rsp))RING_GET_RESPONSE(_r, _idx);	\
+} while (0)
+
 /* Loop termination condition: Would the specified index overflow the ring? */
 #define RING_REQUEST_CONS_OVERFLOW(_r, _cons)			\
     (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
-- 
2.1.0

From 3a1006355114da4b8fc4b935a64928b7f6ae374f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Wed, 16 Dec 2015 05:09:55 +0100
Subject: [PATCH 09/13] xen-netfront: copy response out of shared buffer
 before accessing it
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

Make a local copy of the response, otherwise the backend might modify
it while the frontend is already processing it - leading to a time of
check / time of use issue.

Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c | 51 +++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d6abf19..2af5100 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -372,13 +372,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	rmb(); /* Ensure we see responses up to 'rp'. */
 
 	for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
-		struct xen_netif_tx_response *txrsp;
+		struct xen_netif_tx_response txrsp;
 
-		txrsp = RING_GET_RESPONSE(&queue->tx, cons);
-		if (txrsp->status == XEN_NETIF_RSP_NULL)
+		RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
+		if (txrsp.status == XEN_NETIF_RSP_NULL)
 			continue;
 
-		id = txrsp->id;
+		id = txrsp.id;
 		skb = queue->tx_skbs[id].skb;
 		if (unlikely(gnttab_query_foreign_access(
 			queue->grant_tx_ref[id]) != 0)) {
@@ -721,7 +721,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			     RING_IDX rp)
 
 {
-	struct xen_netif_extra_info *extra;
+	struct xen_netif_extra_info extra;
 	struct device *dev = &queue->info->netdev->dev;
 	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
@@ -737,24 +737,23 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			break;
 		}
 
-		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
 
-		if (unlikely(!extra->type ||
-			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
+		if (unlikely(!extra.type ||
+			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			if (net_ratelimit())
 				dev_warn(dev, "Invalid extra type: %d\n",
-					extra->type);
+					extra.type);
 			err = -EINVAL;
 		} else {
-			memcpy(&extras[extra->type - 1], extra,
-			       sizeof(*extra));
+			memcpy(&extras[extra.type - 1], &extra,
+			       sizeof(extra));
 		}
 
 		skb = xennet_get_rx_skb(queue, cons);
 		ref = xennet_get_rx_ref(queue, cons);
 		xennet_move_rx_slot(queue, skb, ref);
-	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
+	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	queue->rx.rsp_cons = cons;
 	return err;
@@ -764,28 +763,28 @@ static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
-	struct xen_netif_rx_response *rx = &rinfo->rx;
+	struct xen_netif_rx_response rx = rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
 	struct device *dev = &queue->info->netdev->dev;
 	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
 	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
-	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
+	int max = MAX_SKB_FRAGS + (rx.status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
-	if (rx->flags & XEN_NETRXF_extra_info) {
+	if (rx.flags & XEN_NETRXF_extra_info) {
 		err = xennet_get_extras(queue, extras, rp);
 		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
-		if (unlikely(rx->status < 0 ||
-			     rx->offset + rx->status > XEN_PAGE_SIZE)) {
+		if (unlikely(rx.status < 0 ||
+			     rx.offset + rx.status > XEN_PAGE_SIZE)) {
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %u, size: %d\n",
-					 rx->offset, rx->status);
+					 rx.offset, rx.status);
 			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
@@ -799,7 +798,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 		if (ref == GRANT_INVALID_REF) {
 			if (net_ratelimit())
 				dev_warn(dev, "Bad rx response id %d.\n",
-					 rx->id);
+					 rx.id);
 			err = -EINVAL;
 			goto next;
 		}
@@ -812,7 +811,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 		__skb_queue_tail(list, skb);
 
 next:
-		if (!(rx->flags & XEN_NETRXF_more_data))
+		if (!(rx.flags & XEN_NETRXF_more_data))
 			break;
 
 		if (cons + slots == rp) {
@@ -822,7 +821,7 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx);
 		skb = xennet_get_rx_skb(queue, cons + slots);
 		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
@@ -878,9 +877,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
-		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		struct xen_netif_rx_response rx;
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
 			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
@@ -891,7 +890,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 		BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
 
 		skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
-				rx->offset, rx->status, PAGE_SIZE);
+				rx.offset, rx.status, PAGE_SIZE);
 
 		skb_shinfo(nskb)->nr_frags = 0;
 		kfree_skb(nskb);
@@ -987,7 +986,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
+		RING_COPY_RESPONSE(&queue->rx, i, rx);
 		memset(extras, 0, sizeof(rinfo.extras));
 
 		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
-- 
2.1.0

From 2adc557330dde5b474d885518d2663180d3c8f45 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Wed, 16 Dec 2015 05:19:37 +0100
Subject: [PATCH 10/13] xen-netfront: do not use data already exposed to
 backend
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

The backend may freely modify anything on the shared page, so use the
data which was supposed to be written there, instead of reading it back
from the shared page.

This unfortunately requires putting xennet_make_first_txreq inline into
xennet_start_xmit (its only use), because we need info.size, which isn't
available anywhere else (other than the shared page).

This is part of XSA155.

CC: sta...@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c | 32 +++++++++++---------------------
 1 file changed, 11 insertions(+), 21 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2af5100..959e479 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -453,23 +453,7 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
 	tx->flags = 0;
 
 	info->tx = tx;
-	info->size += tx->size;
-}
-
-static struct xen_netif_tx_request *xennet_make_first_txreq(
-	struct netfront_queue *queue, struct sk_buff *skb,
-	struct page *page, unsigned int offset, unsigned int len)
-{
-	struct xennet_gnttab_make_txreq info = {
-		.queue = queue,
-		.skb = skb,
-		.page = page,
-		.size = 0,
-	};
-
-	gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
-
-	return info.tx;
+	info->size += len;
 }
 
 static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
@@ -564,6 +548,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct netfront_info *np = netdev_priv(dev);
 	struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
 	struct xen_netif_tx_request *tx, *first_tx;
+	struct xennet_gnttab_make_txreq info;
 	unsigned int i;
 	int notify;
 	int slots;
@@ -614,14 +599,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* First request for the linear area. */
-	first_tx = tx = xennet_make_first_txreq(queue, skb,
-						page, offset, len);
-	offset += tx->size;
+	info.queue = queue;
+	info.skb = skb;
+	info.page = page;
+	info.size = 0;
+	gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
+
+	first_tx = tx = info.tx;
+	offset += info.size;
 	if (offset == PAGE_SIZE) {
 		page++;
 		offset = 0;
 	}
-	len -= tx->size;
+	len -= info.size;
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		/* local packet? */
-- 
2.1.0

From 76a020d3b2023ca02961eab38318ef2d6f1338d9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Wed, 16 Dec 2015 05:22:24 +0100
Subject: [PATCH 11/13] xen-netfront: add range check for Tx response id
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

The Tx response ID is fetched from the shared page, so make sure it is
sane before using it as an array index.

This is part of XSA155.

CC: sta...@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 959e479..94309e6 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -379,6 +379,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 			continue;
 
 		id = txrsp.id;
+		BUG_ON(id >= NET_TX_RING_SIZE);
 		skb = queue->tx_skbs[id].skb;
 		if (unlikely(gnttab_query_foreign_access(
 			queue->grant_tx_ref[id]) != 0)) {
-- 
2.1.0

From ef0d243bfeaf1da8854c26f89536dc1b69c56602 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Wed, 16 Dec 2015 05:51:10 +0100
Subject: [PATCH 12/13] xen-blkfront: make local copy of response before
 using it
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

Data on the shared page can be changed at any time by the backend. Make
a local copy, which is no longer controlled by the backend, and only
then access it.

This is part of XSA155.

CC: sta...@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 drivers/block/xen-blkfront.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2fee2ee..5d7eb04 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1296,7 +1296,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
 	struct request *req;
-	struct blkif_response *bret;
+	struct blkif_response bret;
 	RING_IDX i, rp;
 	unsigned long flags;
 	struct blkfront_info *info = (struct blkfront_info *)dev_id;
@@ -1316,8 +1316,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	for (i = info->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
 
-		bret = RING_GET_RESPONSE(&info->ring, i);
-		id   = bret->id;
+		RING_COPY_RESPONSE(&info->ring, i, &bret);
+		id = bret.id;
 		/*
 		 * The backend has messed up and given us an id that we would
 		 * never have given to it (we stamp it up to BLK_RING_SIZE -
@@ -1325,29 +1325,29 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		 */
 		if (id >= BLK_RING_SIZE(info)) {
 			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			/* We can't safely get the 'struct request' as
 			 * the id is busted. */
 			continue;
 		}
 		req  = info->shadow[id].request;
 
-		if (bret->operation != BLKIF_OP_DISCARD)
-			blkif_completion(&info->shadow[id], info, bret);
+		if (bret.operation != BLKIF_OP_DISCARD)
+			blkif_completion(&info->shadow[id], info, &bret);
 
 		if (add_id_to_freelist(info, id)) {
 			WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			continue;
 		}
 
-		error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
-		switch (bret->operation) {
+		error = (bret.status == BLKIF_RSP_OKAY) ? 0 : -EIO;
+		switch (bret.operation) {
 		case BLKIF_OP_DISCARD:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				struct request_queue *rq = info->rq;
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-					   info->gd->disk_name, op_name(bret->operation));
+					   info->gd->disk_name, op_name(bret.operation));
 				error = -EOPNOTSUPP;
 				info->feature_discard = 0;
 				info->feature_secdiscard = 0;
@@ -1358,15 +1358,15 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			break;
 		case BLKIF_OP_FLUSH_DISKCACHE:
 		case BLKIF_OP_WRITE_BARRIER:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				error = -EOPNOTSUPP;
 			}
-			if (unlikely(bret->status == BLKIF_RSP_ERROR &&
+			if (unlikely(bret.status == BLKIF_RSP_ERROR &&
 				     info->shadow[id].req.u.rw.nr_segments == 0)) {
 				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				error = -EOPNOTSUPP;
 			}
 			if (unlikely(error)) {
@@ -1378,9 +1378,9 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			/* fall through */
 		case BLKIF_OP_READ:
 		case BLKIF_OP_WRITE:
-			if (unlikely(bret->status != BLKIF_RSP_OKAY))
+			if (unlikely(bret.status != BLKIF_RSP_OKAY))
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
-					"request: %x\n", bret->status);
+					"request: %x\n", bret.status);
 
 			blk_mq_complete_request(req, error);
 			break;
-- 
2.1.0

From 74aaa42e1f25309a163acd00083ecbbc186fbb47 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marma...@invisiblethingslab.com>
Date: Wed, 16 Dec 2015 06:07:14 +0100
Subject: [PATCH 13/13] xen-blkfront: prepare request locally, only then put
 it on the shared ring
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Organization: Invisible Things Lab
Cc: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>

Do not reuse data which theoretically might already have been modified
by the backend. This is mostly about the private copy of the request
(info->shadow[id].req) - make sure the request saved there is really
the one just filled.

This is part of XSA155.

CC: sta...@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marma...@invisiblethingslab.com>
---
 drivers/block/xen-blkfront.c | 56 ++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 26 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5d7eb04..514cf18 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -459,27 +459,29 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
 static int blkif_queue_discard_req(struct request *req)
 {
 	struct blkfront_info *info = req->rq_disk->private_data;
-	struct blkif_request *ring_req;
+	struct blkif_request ring_req;
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
-	ring_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
 	id = get_id_from_freelist(info);
 	info->shadow[id].request = req;
 
-	ring_req->operation = BLKIF_OP_DISCARD;
-	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
-	ring_req->u.discard.id = id;
-	ring_req->u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
+	ring_req.operation = BLKIF_OP_DISCARD;
+	ring_req.u.discard.nr_sectors = blk_rq_sectors(req);
+	ring_req.u.discard.id = id;
+	ring_req.u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
 	if ((req->cmd_flags & REQ_SECURE) && info->feature_secdiscard)
-		ring_req->u.discard.flag = BLKIF_DISCARD_SECURE;
+		ring_req.u.discard.flag = BLKIF_DISCARD_SECURE;
 	else
-		ring_req->u.discard.flag = 0;
+		ring_req.u.discard.flag = 0;
 
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt) = ring_req;
+	wmb();
 	info->ring.req_prod_pvt++;
 
 	/* Keep a private copy so we can reissue requests when recovering. */
-	info->shadow[id].req = *ring_req;
+	info->shadow[id].req = ring_req;
 
 	return 0;
 }
@@ -569,7 +571,7 @@ static void blkif_setup_rw_req_grant(unsigned long gfn, unsigned int offset,
 static int blkif_queue_rw_req(struct request *req)
 {
 	struct blkfront_info *info = req->rq_disk->private_data;
-	struct blkif_request *ring_req;
+	struct blkif_request ring_req;
 	unsigned long id;
 	int i;
 	struct setup_rw_req setup = {
@@ -613,7 +615,6 @@ static int blkif_queue_rw_req(struct request *req)
 		new_persistent_gnts = 0;
 
 	/* Fill out a communications ring structure. */
-	ring_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
 	id = get_id_from_freelist(info);
 	info->shadow[id].request = req;
 
@@ -628,7 +629,7 @@ static int blkif_queue_rw_req(struct request *req)
 	for_each_sg(info->shadow[id].sg, sg, num_sg, i)
 		num_grant += gnttab_count_grant(sg->offset, sg->length);
 
-	ring_req->u.rw.id = id;
+	ring_req.u.rw.id = id;
 	info->shadow[id].num_sg = num_sg;
 	if (num_grant > BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 		/*
@@ -636,16 +637,16 @@ static int blkif_queue_rw_req(struct request *req)
 		 * BLKIF_OP_WRITE
 		 */
 		BUG_ON(req->cmd_flags & (REQ_FLUSH | REQ_FUA));
-		ring_req->operation = BLKIF_OP_INDIRECT;
-		ring_req->u.indirect.indirect_op = rq_data_dir(req) ?
+		ring_req.operation = BLKIF_OP_INDIRECT;
+		ring_req.u.indirect.indirect_op = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		ring_req->u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.indirect.handle = info->handle;
-		ring_req->u.indirect.nr_segments = num_grant;
+		ring_req.u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.indirect.handle = info->handle;
+		ring_req.u.indirect.nr_segments = num_grant;
 	} else {
-		ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.rw.handle = info->handle;
-		ring_req->operation = rq_data_dir(req) ?
+		ring_req.u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.rw.handle = info->handle;
+		ring_req.operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
 		if (req->cmd_flags & (REQ_FLUSH | REQ_FUA)) {
 			/*
@@ -658,21 +659,21 @@ static int blkif_queue_rw_req(struct request *req)
 			switch (info->feature_flush &
 				((REQ_FLUSH|REQ_FUA))) {
 			case REQ_FLUSH|REQ_FUA:
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_WRITE_BARRIER;
 				break;
 			case REQ_FLUSH:
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_FLUSH_DISKCACHE;
 				break;
 			default:
-				ring_req->operation = 0;
+				ring_req.operation = 0;
 			}
 		}
-		ring_req->u.rw.nr_segments = num_grant;
+		ring_req.u.rw.nr_segments = num_grant;
 	}
 
-	setup.ring_req = ring_req;
+	setup.ring_req = &ring_req;
 	setup.id = id;
 	for_each_sg(info->shadow[id].sg, sg, num_sg, i) {
 		BUG_ON(sg->offset + sg->length > PAGE_SIZE);
@@ -694,10 +695,13 @@ static int blkif_queue_rw_req(struct request *req)
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt) = ring_req;
+	wmb();
 	info->ring.req_prod_pvt++;
 
 	/* Keep a private copy so we can reissue requests when recovering. */
-	info->shadow[id].req = *ring_req;
+	info->shadow[id].req = ring_req;
 
 	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
-- 
2.1.0
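
[Usage note: with RING_COPY_RESPONSE() in place, every consumer loop
follows the same shape - snapshot, validate, then use only the local
copy. A minimal sketch of such a loop; struct myif_response and
process_response() are hypothetical names, the ring plumbing is the
standard one from ring.h:]

    /* Hypothetical frontend response loop built on RING_COPY_RESPONSE(). */
    struct myif_response rsp;
    RING_IDX cons, prod;

    prod = front_ring.sring->rsp_prod;
    rmb();	/* ensure we see responses up to 'prod' */

    for (cons = front_ring.rsp_cons; cons != prod; cons++) {
        RING_COPY_RESPONSE(&front_ring, cons, &rsp);	/* snapshot first */
        if (rsp.id >= RING_SIZE(&front_ring))	/* validate the local copy */
            continue;	/* backend handed back a bogus id */
        process_response(&rsp);	/* use only the local copy */
    }
    front_ring.rsp_cons = cons;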
--- End Message ---