> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu at linux.intel.com]
> Sent: Wednesday, November 18, 2015 10:57 AM
> To: Rich Lane <rich.lane at bigswitch.com>
> Cc: dev at dpdk.org; Xie, Huawei <huawei.xie at intel.com>; Wang, Zhihong <zhihong.wang at intel.com>; Richardson, Bruce <bruce.richardson at intel.com>
> Subject: Re: [PATCH] vhost: avoid buffer overflow in update_secure_len
>
> On Tue, Nov 17, 2015 at 08:39:30AM -0800, Rich Lane wrote:
> >
> > I don't think that adding a SIGINT handler is the right solution,
> > though. The guest app could be killed with another signal (SIGKILL).
>
> Good point.
>
> > Worse, a malicious or
> > buggy guest could write to just that field. vhost should not crash no
> > matter what the guest writes into the virtqueues.
>
> Yeah, I agree with you: though we could fix this issue on the source side,
> we should also add some defense here.
>
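[Illustration only, not code from this thread: in the guest application, the SIGINT-handler approach mentioned above would amount to roughly the sketch below; register_cleanup_handler() and the stop flag are hypothetical names. It only covers an orderly shutdown -- SIGKILL cannot be caught -- which is why the host-side checks in the patch further down are still needed.]

    #include <signal.h>

    /* Set by the handler; the application's main loop polls this flag
     * and releases its virtio resources before exiting. */
    static volatile sig_atomic_t vhost_app_stop;

    static void
    sigint_handler(int signum)
    {
            (void)signum;
            vhost_app_stop = 1;
    }

    /* Hypothetical registration helper; returns 0 on success. */
    int
    register_cleanup_handler(void)
    {
            struct sigaction sa;

            sa.sa_handler = sigint_handler;
            sa.sa_flags = 0;
            sigemptyset(&sa.sa_mask);
            return sigaction(SIGINT, &sa, NULL);
    }
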
Exactly, DPDK should be able to take care of both ends:

 1. Provide an interface for resource cleanup
 2. Be prepared in case the app doesn't shut down properly

> How about the following patch then?
>
> Note that the vec_id overflow check should be done before referencing it,
> not after. Hence I moved it ahead.
>
> 	--yliu
>
> ---
> diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> index 9322ce6..08f5942 100644
> --- a/lib/librte_vhost/vhost_rxtx.c
> +++ b/lib/librte_vhost/vhost_rxtx.c
> @@ -132,6 +132,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
>
>  		/* Get descriptor from available ring */
>  		desc = &vq->desc[head[packet_success]];
> +		if (desc->len == 0)
> +			break;
>
>  		buff = pkts[packet_success];
>
> @@ -153,6 +155,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
>  			/* Buffer address translation. */
>  			buff_addr = gpa_to_vva(dev, desc->addr);
>  		} else {
> +			if (desc->len < vq->vhost_hlen)
> +				break;
>  			vb_offset += vq->vhost_hlen;
>  			hdr = 1;
>  		}
> @@ -446,6 +450,9 @@ update_secure_len(struct vhost_virtqueue *vq, uint32_t id,
>  	uint32_t vec_id = *vec_idx;
>
>  	do {
> +		if (vec_id >= BUF_VECTOR_MAX)
> +			break;
> +
>  		next_desc = 0;
>  		len += vq->desc[idx].len;
>  		vq->buf_vec[vec_id].buf_addr = vq->desc[idx].addr;
> @@ -519,6 +526,8 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
>  				goto merge_rx_exit;
>  			} else {
>  				update_secure_len(vq, res_cur_idx, &secure_len, &vec_idx);
> +				if (secure_len == 0)
> +					goto merge_rx_exit;
>  				res_cur_idx++;
>  			}
>  		} while (pkt_len > secure_len);
> @@ -631,6 +640,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
>  	uint8_t alloc_err = 0;
>
>  	desc = &vq->desc[head[entry_success]];
> +	if (desc->len == 0)
> +		break;
>
>  	/* Discard first buffer as it is the virtio header */
>  	if (desc->flags & VRING_DESC_F_NEXT) {
> @@ -638,6 +649,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
>  		vb_offset = 0;
>  		vb_avail = desc->len;
>  	} else {
> +		if (desc->len < vq->vhost_hlen)
> +			break;
>  		vb_offset = vq->vhost_hlen;
>  		vb_avail = desc->len - vb_offset;
>  	}
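[Illustration only, not code from this thread: the defensive checks the patch adds boil down to the pattern below, collected here into hypothetical helpers (desc_len_is_valid() and vec_id_in_bounds() do not exist in the tree). The patch open-codes the same tests at each call site, and, per the note above, performs the vec_id bound check before buf_vec[vec_id] is ever referenced.]

    #include <stdbool.h>
    #include <stdint.h>

    /* A descriptor handed over by the guest is only usable if it has a
     * non-zero length and, when it is expected to carry the virtio-net
     * header, is at least vhost_hlen bytes long. */
    static inline bool
    desc_len_is_valid(uint32_t desc_len, uint32_t vhost_hlen, bool carries_hdr)
    {
            if (desc_len == 0)
                    return false;
            if (carries_hdr && desc_len < vhost_hlen)
                    return false;
            return true;
    }

    /* The buf_vec[] index must be validated before buf_vec[vec_id] is
     * referenced; checking it afterwards would already have accessed
     * memory out of bounds. */
    static inline bool
    vec_id_in_bounds(uint32_t vec_id, uint32_t buf_vector_max)
    {
            return vec_id < buf_vector_max;
    }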