Hi Aaron,

On Thu, Aug 22, 2019 at 6:06 AM Aaron Williams <awilli...@marvell.com> wrote:
>
> Hi Bin,
>
> On Wednesday, August 21, 2019 8:23:50 AM PDT Bin Meng wrote:
> > Hi Aaron,
> >
> > On Wed, Aug 21, 2019 at 7:26 PM Aaron Williams <awilli...@marvell.com> wrote:
> > > Hi Bin,
> > >
> > > I submitted another patch via git. Hopefully it went through. I'm new
> > > to getting email to work with git, since until now nobody in my group
> > > has had access to a working SMTP server, so I'm still learning how to
> > > use git send-email.
> > >
> > > -Aaron
> > >
> > > On Wednesday, August 21, 2019 12:55:59 AM PDT Bin Meng wrote:
> > > > Hi Aaron,
> > > >
> > > > On Wed, Aug 21, 2019 at 8:34 AM Aaron Williams <awilli...@marvell.com> wrote:
> > > > > When large writes take place I saw a Samsung EVO 970+ return a
> > > > > status value of 0x13, PRP Offset Invalid. I tracked this down to
> > > > > the improper handling of PRP entries. The blocks the PRP entries
> > > > > are placed in cannot cross a page boundary and thus should be
> > > > > allocated on page boundaries. This is how the Linux kernel driver
> > > > > works.
> > > > >
> > > > > With this patch, the PRP pool is allocated on a page boundary and,
> > > > > other than the very first allocation, the pool size is a multiple
> > > > > of the page size. Each page can hold (4096 / 8) - 1 entries, since
> > > > > the last entry must point to the next page in the pool.
> > > >
> > > > Please write more words per line, about 70 characters per line.
> > > >
> > > > > Signed-off-by: Aaron Williams <awilli...@marvell.com>
> > > > > ---
> > > > >  drivers/nvme/nvme.c | 9 ++++++---
> > > > >  1 file changed, 6 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > > index 7008a54a6d..ae64459edf 100644
> > > > > --- a/drivers/nvme/nvme.c
> > > > > +++ b/drivers/nvme/nvme.c
> > > > > @@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> > > > >         int length = total_len;
> > > > >         int i, nprps;
> > > > >         length -= (page_size - offset);
> > > > > +       u32 prps_per_page = (page_size >> 3) - 1;
> > > > > +       u32 num_pages;
> > > >
> > > > nits: please move these 2 above the line "length -= (page_size - offset);"
> > >
> > > Done.
> > >
> > > > >         if (length <= 0) {
> > > > >                 *prp2 = 0;
> > > > > @@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> > > > >         }
> > > > >
> > > > >         nprps = DIV_ROUND_UP(length, page_size);
> > > > > +       num_pages = (nprps + prps_per_page - 1) / prps_per_page;
> > > >
> > > > use DIV_ROUND_UP()
> > >
> > > Done
> > >
> > > > >         if (nprps > dev->prp_entry_num) {
> > > >
> > > > I think we should adjust nprps before the comparison here:
> > > >
> > > >     nprps += num_pages - 1;
> > > >
> > > > >                 free(dev->prp_pool);
> > > > > -               dev->prp_pool = malloc(nprps << 3);
> > > > > +               dev->prp_pool = memalign(page_size, num_pages * page_size);
> > > >
> > > > Then we need only do: dev->prp_pool = memalign(page_size, nprps << 3)?
> > >
> > > We can't use nprps << 3 because if the prps span more than a single
> > > page then we lose a prp per page.
> >
> > Looks like you missed my comment above.
> > If we adjust nprps by "nprps += num_pages - 1", then we don't lose
> > the last prp per page.
>
> This would work.
>
> > Mallocing num_pages * page_size exceeds the real needs. We should
> > only allocate the exact prp size we need.
>
> This is true, but if we were forced to increase the pool once, there
> is a good chance it will need to be increased again. I do not see an
> issue with allocating based on the number of pages required rather
> than the number of bytes. At most 4K - 8 bytes will be wasted, and on
> the other hand this can avoid additional memalign calls later.
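
Putting your patch and the adjustment above together, I would expect
the allocation path to end up roughly like this (an untested sketch;
the dev->prp_entry_num update is my assumption about how to keep the
"nprps > dev->prp_entry_num" check consistent with the new pool size):

        nprps = DIV_ROUND_UP(length, page_size);
        num_pages = DIV_ROUND_UP(nprps, prps_per_page);
        /* the last entry of each page links to the next page */
        nprps += num_pages - 1;

        if (nprps > dev->prp_entry_num) {
                free(dev->prp_pool);
                /*
                 * Allocate whole pages: at most 4K - 8 bytes are
                 * wasted, and growing in page-sized increments avoids
                 * repeated memalign() calls as transfer sizes grow.
                 */
                dev->prp_pool = memalign(page_size, num_pages * page_size);
                if (!dev->prp_pool)
                        return -ENOMEM;
                /* total slots in the pool, including link entries */
                dev->prp_entry_num = (page_size >> 3) * num_pages;
        }

For example, with a 4 KiB page size each pool page holds 511 PRP
entries plus one link entry. A 4 MiB transfer needs 1024 data entries,
so num_pages = 3 and nprps becomes 1026, which fits in the 3 * 512 =
1536 slots allocated.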
OK, please add some comments there indicating why we are doing this.
Thanks.

[snip]

Regards,
Bin