On Wed, Apr 24, 2019 at 07:53:17AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 23, 2019 at 06:32:40PM +0800, Ming Lei wrote:
> > big, the whole pre-allocation for the sg list can consume huge memory.
> > For example, with lpfc, nr_hw_queues can be 70 and each queue's depth
> > can be 3781, so the pre-allocation for the data sg list can be
> > 70*3781*2k = 517MB for a single HBA.
> 
> We should probably limit the number of queues to something actually
> useful, independent of your patch..
> 
> > +static bool scsi_use_inline_sg(struct scsi_cmnd *cmd)
> > +{
> > +   struct scatterlist *sg = (void *)cmd + sizeof(struct scsi_cmnd) +
> > +           cmd->device->host->hostt->cmd_size;
> > +
> > +   return cmd->sdb.table.sgl == sg;
> > +}
> 
> It might make more sense to have a helper to calculate the inline
> sg address and use that for the comparison in scsi_mq_free_sgtables
> and any other place that wants the address.

Good idea, and the helper can be used in scsi_mq_prep_fn() too.
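
Something along these lines (untested, and the scsi_inline_sg() name is
just a suggestion):

static inline struct scatterlist *scsi_inline_sg(struct scsi_cmnd *cmd)
{
	/* the inline sg area sits right after the per-driver cmd data */
	return (void *)cmd + sizeof(struct scsi_cmnd) +
		cmd->device->host->hostt->cmd_size;
}

static bool scsi_use_inline_sg(struct scsi_cmnd *cmd)
{
	return cmd->sdb.table.sgl == scsi_inline_sg(cmd);
}

Then scsi_mq_free_sgtables() and scsi_mq_prep_fn() can both call
scsi_inline_sg(cmd) instead of open-coding the address calculation.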

> 
> > +   if (cmd->sdb.table.nents && !scsi_use_inline_sg(cmd))
> > +           sg_free_table_chained(&cmd->sdb.table, false);
> 
> This removes the last use of the first_chunk parameter to
> sg_free_table_chained, please remove the parameter in an additional
> patch.

The NVMe FC and RDMA transports still use first_chunk, so it can't be
removed yet.
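
For reference, this is roughly how a transport keeps using first_chunk
with a per-request inline sg array (illustrative only, the my_request /
my_* / MY_INLINE_SG_CNT names are hypothetical, not the actual NVMe code):

#include <linux/scatterlist.h>
#include <linux/errno.h>

#define MY_INLINE_SG_CNT	2	/* hypothetical inline segment count */

struct my_request {
	struct sg_table		sg_table;
	struct scatterlist	first_sgl[MY_INLINE_SG_CNT];
};

static int my_map_data(struct my_request *req, int nr_segs)
{
	/* point the table at the inline array before allocating */
	req->sg_table.sgl = req->first_sgl;

	/*
	 * first_chunk != NULL: small requests stay in the inline array,
	 * larger ones get additional chunks chained onto it.
	 */
	if (sg_alloc_table_chained(&req->sg_table, nr_segs,
				   req->sg_table.sgl))
		return -ENOMEM;
	return 0;
}

static void my_unmap_data(struct my_request *req)
{
	/*
	 * first_chunk == true: only chained chunks are freed, the inline
	 * array embedded in the request is left alone.
	 */
	sg_free_table_chained(&req->sg_table, true);
}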

> 
> > +   if (nr_segs <= SCSI_INLINE_SG_CNT)
> > +           sdb->table.nents = sdb->table.orig_nents =
> > +                   SCSI_INLINE_SG_CNT;
> 
> Don't we need a sg_init_table here?

Will do it in V2.
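
Roughly like this (untested, and assuming sdb->table.sgl has already been
pointed at the inline area by then):

	if (nr_segs <= SCSI_INLINE_SG_CNT) {
		sdb->table.nents = sdb->table.orig_nents =
			SCSI_INLINE_SG_CNT;
		/* clear the inline entries and mark the last one as end */
		sg_init_table(sdb->table.sgl, SCSI_INLINE_SG_CNT);
	} else if (unlikely(sg_alloc_table_chained(&sdb->table, nr_segs,
						   NULL))) {
		return BLK_STS_RESOURCE;
	}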

> 
> > +   else if (unlikely(sg_alloc_table_chained(&sdb->table, nr_segs,
> > +                                   NULL)))
> >             return BLK_STS_RESOURCE;
> 
> We should probably also be able to drop the last parameter to
> sg_alloc_table_chained now.

As with sg_free_table_chained above, the NVMe FC and RDMA transports
still pass a first_chunk to sg_alloc_table_chained, so that parameter
has to stay for now.

Thanks,
Ming
