On Mon, Apr 23, 2018 at 10:15:18PM +0200, Björn Töpel wrote:
> 2018-04-23 22:11 GMT+02:00 Michael S. Tsirkin <m...@redhat.com>:
> > On Mon, Apr 23, 2018 at 10:00:15PM +0200, Björn Töpel wrote:
> >> 2018-04-23 18:18 GMT+02:00 Michael S. Tsirkin <m...@redhat.com>:
> >>
> >> [...]
> >>
> >> >> +static void xdp_umem_unpin_pages(struct xdp_umem *umem)
> >> >> +{
> >> >> +     unsigned int i;
> >> >> +
> >> >> +     if (umem->pgs) {
> >> >> +             for (i = 0; i < umem->npgs; i++)
> >> >
> >> > Since you pin them with FOLL_WRITE, I assume these pages
> >> > are written to.
> >> > Don't you need set_page_dirty_lock here?
> >> >
> >>
> >> Hmm, I actually *removed* it from the RFC V2, but after doing some
> >> homework, I think you're right. Thanks for pointing this out!
> >>
> >> Thinking more about this: this function is called from sk_destruct,
> >> and in the Tx case sk_destruct can be called from interrupt
> >> context, where set_page_dirty_lock cannot be called.
> >>
> >> Are there any preferred ways of solving this? Scheduling the whole
> >> xsk_destruct call to a workqueue is one way (I think). Any
> >> cleaner/better way?
> >>
> >> [...]
> >
> > Defer unpinning pages until the next tx call?
> >
> 
> If the sock is released, there won't be another tx call.

Unpin them on socket release too? A rough sketch of what I mean is below.

> Or am I
> missing something obvious?
> 
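
Roughly what I have in mind, as an untested sketch only (the work member
in struct xdp_umem and the xdp_umem_release_work()/
xdp_umem_release_deferred() helpers are made up for illustration): mark
each page dirty before dropping its reference, and defer the actual
unpin to a workqueue so it never runs from interrupt context:

static void xdp_umem_unpin_pages(struct xdp_umem *umem)
{
	unsigned int i;

	if (umem->pgs) {
		for (i = 0; i < umem->npgs; i++) {
			struct page *page = umem->pgs[i];

			/* Pinned with FOLL_WRITE, so the pages may have
			 * been written to: mark them dirty before
			 * releasing the reference.
			 */
			set_page_dirty_lock(page);
			put_page(page);
		}

		kfree(umem->pgs);
		umem->pgs = NULL;
	}
}

/* Hypothetical deferral: the unpin (and the rest of the teardown)
 * runs from process context, so set_page_dirty_lock() is safe even
 * when sk_destruct fires from interrupt context.
 */
static void xdp_umem_release_work(struct work_struct *work)
{
	struct xdp_umem *umem = container_of(work, struct xdp_umem, work);

	xdp_umem_unpin_pages(umem);
	/* ... remaining teardown ... */
}

static void xdp_umem_release_deferred(struct xdp_umem *umem)
{
	INIT_WORK(&umem->work, xdp_umem_release_work);
	schedule_work(&umem->work);
}

That keeps sk_destruct itself cheap and interrupt-safe, at the cost of
the pages staying pinned slightly longer.
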
> >
> >> >> +static int __xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
> >> >> +{
> >> >> +     u32 frame_size = mr->frame_size, frame_headroom = mr->frame_headroom;
> >> >> +     u64 addr = mr->addr, size = mr->len;
> >> >> +     unsigned int nframes;
> >> >> +     int size_chk, err;
> >> >> +
> >> >> +     if (frame_size < XDP_UMEM_MIN_FRAME_SIZE || frame_size > PAGE_SIZE) {
> >> >> +             /* Strictly speaking we could support this, if:
> >> >> +              * - huge pages, or*
> >> >
> >> > what does "or*" here mean?
> >> >
> >>
> >> Oops, I'll change to just 'or' in the next revision.
> >>
> >>
> >> Thanks!
> >> Björn
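
On the registration side, a minimal user space sketch of what this
function ends up validating, assuming the uapi names proposed in this
series (AF_XDP, SOL_XDP, XDP_UMEM_REG and the
addr/len/frame_size/frame_headroom layout of struct xdp_umem_reg);
NUM_FRAMES and FRAME_SIZE are arbitrary example values, with FRAME_SIZE
kept within [XDP_UMEM_MIN_FRAME_SIZE, PAGE_SIZE]:

#include <linux/if_xdp.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <unistd.h>

#define NUM_FRAMES 1024	/* example value */
#define FRAME_SIZE 2048	/* must be in [XDP_UMEM_MIN_FRAME_SIZE, PAGE_SIZE] */

/* fd: an AF_XDP socket, e.g. from socket(AF_XDP, SOCK_RAW, 0) */
static int register_umem(int fd, void **bufp)
{
	struct xdp_umem_reg mr = {};
	void *buf;

	/* Page-aligned buffer area that backs the umem frames. */
	if (posix_memalign(&buf, getpagesize(), NUM_FRAMES * FRAME_SIZE))
		return -1;

	mr.addr = (unsigned long)buf;
	mr.len = NUM_FRAMES * FRAME_SIZE;
	mr.frame_size = FRAME_SIZE;
	mr.frame_headroom = 0;

	if (setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr))) {
		free(buf);
		return -1;
	}

	*bufp = buf;
	return 0;
}
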
