On Thu, Sep 21, 2023 at 07:59:51PM +0200, Christophe JAILLET wrote:
> On 21/09/2023 at 17:37, Frank Li wrote:
> > From: Guanhua Gao <guanhua....@nxp.com>
> > 
> > Set DMA mask and coherent DMA mask to enable 64-bit addressing.
> > 
> > Signed-off-by: Guanhua Gao <guanhua....@nxp.com>
> > Signed-off-by: Hou Zhiqiang <zhiqiang....@nxp.com>
> > Signed-off-by: Frank Li <frank...@nxp.com>
> > ---
> >   drivers/pci/controller/dwc/pci-layerscape-ep.c | 5 +++++
> >   1 file changed, 5 insertions(+)
> > 
> > diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c 
> > b/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > index de4c1758a6c33..6fd0dea38a32c 100644
> > --- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > +++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > @@ -249,6 +249,11 @@ static int __init ls_pcie_ep_probe(struct 
> > platform_device *pdev)
> >     pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
> > +   /* set 64-bit DMA mask and coherent DMA mask */
> > +   if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
> > +           if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
> 
> As stated in [1], dma_set_mask() with a 64-bit mask will never
> fail if dev->dma_mask is non-NULL.
> 
> So, if it fails, the 32-bit case will also fail for the same reason.
> There is no need for the 2nd test.
> 
> 
> See [1] for Christoph Hellwig comment about it.

I don't think that is true. Below is dma_set_mask()'s implementation:

int dma_set_mask(struct device *dev, u64 mask)
{
        /*
         * Truncate the mask to the actually supported dma_addr_t width to
         * avoid generating unsupportable addresses.
         */
        mask = (dma_addr_t)mask;

        if (!dev->dma_mask || !dma_supported(dev, mask))
                                ^^^^^^^
                return -EIO;

        arch_dma_set_mask(dev, mask);
        *dev->dma_mask = mask;
        return 0;
}

dma_supported() may return failure:

static int dma_supported(struct device *dev, u64 mask)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        /*
         * ->dma_supported sets the bypass flag, so we must always call
         * into the method here unless the device is truly direct mapped.
         */
        if (!ops)
                return dma_direct_supported(dev, mask);
        if (!ops->dma_supported)
                return 1;
        return ops->dma_supported(dev, mask);
                    ^^^^^^
                        DMA driver or IOMMU driver may return failure.
}

 
Frank

> 
> CJ
> 
> 
> [1]: https://lkml.org/lkml/2021/6/7/398
> 
> > +                   return -EIO;
> > +
> >     platform_set_drvdata(pdev, pcie);
> >     ret = dw_pcie_ep_init(&pci->ep);
> 
