On Mon, Feb 10, 2014 at 04:06:27PM -0700, Jens Axboe wrote:
> It obviously all depends on the access pattern. X threads for X tags
> would work perfectly well with per-cpu tagging, if they are doing
> sync IO. And similarly, 8 threads each having low queue depth would
> be fine. However, it all fal
On Tue, Feb 11, 2014 at 06:42:40AM -0800, James Bottomley wrote:
> > Unfortunately that's not true in SCSI land, where most drivers do per-lun
> > tagging, and the cmd_per_lun values are very low and very often
> > single digits, as a simple grep for cmd_per_lun will tell.
>
> Remember we do sh
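The single-digit cmd_per_lun values mentioned above are easy to see in the tree: many SCSI host templates declare tiny per-LUN queue depths. A hedged illustration follows; "example_lld" and the numbers are made up, only the scsi_host_template fields are real:

#include <linux/module.h>
#include <scsi/scsi_host.h>

/* Illustrative only: a typical low-depth SCSI host template.  The driver
 * name and values are hypothetical; a grep for cmd_per_lun turns up many
 * real drivers with similarly small numbers. */
static struct scsi_host_template example_lld_template = {
	.module		= THIS_MODULE,
	.name		= "example_lld",
	.can_queue	= 16,	/* outstanding commands per host */
	.cmd_per_lun	= 2,	/* single-digit per-LUN depth */
	.this_id	= -1,
};

With depths like that, a per-cpu cache holding back even one or two tags per CPU can strand most of a LUN's tag space on idle CPUs, which is the concern running through this thread.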
On Tue, 2014-02-11 at 01:12 -0800, Christoph Hellwig wrote:
> On Mon, Feb 10, 2014 at 04:06:27PM -0700, Jens Axboe wrote:
> > For the common case, I'd assume that anywhere between 31..256 tags
> > is "normal". That's where the majority of devices will end up being,
> > largely. So single digits would be an anomaly.
On Mon, Feb 10, 2014 at 04:06:27PM -0700, Jens Axboe wrote:
> For the common case, I'd assume that anywhere between 31..256 tags
> is "normal". That's where the majority of devices will end up being,
> largely. So single digits would be an anomaly.
Unfortunately that's not true in SCSI land, where most drivers do per-lun
tagging, and the cmd_per_lun values are very low and very often single
digits, as a simple grep for cmd_per_lun will tell.
On 02/10/2014 03:41 PM, Kent Overstreet wrote:
On Mon, Feb 10, 2014 at 09:26:15AM -0700, Jens Axboe wrote:
On 02/10/2014 03:32 AM, Christoph Hellwig wrote:
On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
Yeah, that was my first thought when I posted "percpu_ida: Allow va
On Mon, Feb 10, 2014 at 09:26:15AM -0700, Jens Axboe wrote:
>
>
> On 02/10/2014 03:32 AM, Christoph Hellwig wrote:
> >On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
> >>Yeah, that was my first thought when I posted "percpu_ida: Allow variable
> >>maximum number of cached tags"
On 02/10/2014 03:32 AM, Christoph Hellwig wrote:
On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
Yeah, that was my first thought when I posted "percpu_ida: Allow variable
maximum number of cached tags" patch a few months ago. But I am back-
pedalling as it does not appear
On Mon, Feb 10, 2014 at 04:49:17PM +0100, Alexander Gordeev wrote:
> > Do we really always need the pool for these classes of devices?
> >
> > Pulling tags from local caches to the pool just to (nearly) dry it at
> > the very next iteration does not seem beneficial. Not to mention caches
> > vs p
On Mon, Feb 10, 2014 at 01:29:42PM +0100, Alexander Gordeev wrote:
> > We'll definitely need a fix to be able to allow the whole tag space.
> > For large numbers of tags per device the flush might work, but for
> > devices with a low number of tags we need something more efficient. The
> > case of
On Mon, Feb 10, 2014 at 02:32:11AM -0800, Christoph Hellwig wrote:
> > Maybe we can get away with a per-cpu timeout that flushes a batch of tags
> > from local caches to the pool? Each local allocation would restart the
> > timer, but once allocation requests stopped coming on a CPU the tags
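A minimal sketch of that timer idea, assuming a percpu_ida-style pool; the flush helper and the struct fields are hypothetical, and the timer is assumed to have been set up with setup_timer() during pool init:

#include <linux/timer.h>
#include <linux/percpu.h>
#include <linux/jiffies.h>

/* Hypothetical per-cpu state: a small tag cache plus an idle timer. */
struct tag_cache {
	unsigned nr_cached;
	struct timer_list idle_timer;
};

static DEFINE_PER_CPU(struct tag_cache, tag_caches);

#define TAG_IDLE_FLUSH_MS	10	/* assumed idle interval */

static void flush_cpu_tags_to_pool(struct tag_cache *tc);	/* hypothetical helper */

/* Timer callback: no allocation happened here recently, so return the
 * cached tags to the shared pool. */
static void tag_cache_idle_flush(unsigned long data)
{
	flush_cpu_tags_to_pool((struct tag_cache *)data);
}

/* Called on every successful local allocation: push the deadline out, so
 * only a CPU that has gone quiet donates its cached tags back. */
static void tag_cache_touch(struct tag_cache *tc)
{
	mod_timer(&tc->idle_timer,
		  jiffies + msecs_to_jiffies(TAG_IDLE_FLUSH_MS));
}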
On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
> Yeah, that was my first thought when I posted "percpu_ida: Allow variable
> maximum number of cached tags" patch a few months ago. But I am back-
> pedalling as it does not appear to solve the fundamental problem - what is the
>
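The idea behind that patch title, sketched under assumptions (the names and the formula are illustrative, not the posted patch): derive the per-cpu cache ceiling from the pool size, so that a small tag space cannot be mostly swallowed by the caches.

#include <linux/kernel.h>
#include <linux/cpumask.h>

#define EXAMPLE_PCPU_CACHE_MAX	8	/* assumed default ceiling */

/* Illustrative only: cap each CPU's cache so that, in the worst case, the
 * caches together hide at most half of the tag space. */
static unsigned example_pcpu_max_cached(unsigned nr_tags)
{
	unsigned cap = nr_tags / (2 * num_possible_cpus());

	return clamp_t(unsigned, cap, 1, EXAMPLE_PCPU_CACHE_MAX);
}

Even with such a cap, a pool with fewer tags than CPUs still lets the caches hold everything, which is the fundamental problem the mail above goes on to raise.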
On Mon, Jan 06, 2014 at 01:47:26PM -0800, Kent Overstreet wrote:
> Ok, so I hadn't really given any thought to that kind of use case; insofar as
> I had I would've been skeptical percpu tag allocation made sense for 32
> different tags at all.
>
> We really don't want to screw over the users
On Mon, Jan 06, 2014 at 01:52:19PM -0700, Jens Axboe wrote:
> On 01/06/2014 01:46 PM, Kent Overstreet wrote:
> > On Sun, Jan 05, 2014 at 09:13:00PM +0800, Shaohua Li wrote:
>
> >>> - we explicitly don't guarantee that all
> >>> the tags will be available for allocation at any given time, only half
On 01/06/2014 01:46 PM, Kent Overstreet wrote:
> On Sun, Jan 05, 2014 at 09:13:00PM +0800, Shaohua Li wrote:
>>> - we explicitly don't guarantee that all
>>> the tags will be available for allocation at any given time, only half
>>> of them.
>>
>> only half of the tags can be used? this is scarin
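Given that design, the practical consequence for a driver is to size the pool at roughly twice the queue depth it actually needs, so the guaranteed half still covers the hardware limit. A hedged sketch; the driver-side function name is made up, percpu_ida_init is the real initializer:

#include <linux/percpu_ida.h>

/* Illustrative: if only half of the tag space is guaranteed allocatable at
 * any moment, ask for twice the depth the hardware can actually take. */
static int example_init_tags(struct percpu_ida *pool, unsigned hw_queue_depth)
{
	return percpu_ida_init(pool, 2 * hw_queue_depth);
}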
On Sun, Jan 05, 2014 at 09:13:00PM +0800, Shaohua Li wrote:
> On Sat, Jan 04, 2014 at 01:08:04PM -0800, Kent Overstreet wrote:
> > On Tue, Dec 31, 2013 at 11:38:27AM +0800, Shaohua Li wrote:
> > >
> > > steal_tags only happens when free tags are more than half of the total
> > > tags.
> > > This i
On Sat, Jan 04, 2014 at 01:08:04PM -0800, Kent Overstreet wrote:
> On Tue, Dec 31, 2013 at 11:38:27AM +0800, Shaohua Li wrote:
> >
> > steal_tags only happens when free tags are more than half of the total tags.
> > This is too restrictive and can cause a livelock. I found one cpu has free tags,
> > bu
On Tue, Dec 31, 2013 at 11:38:27AM +0800, Shaohua Li wrote:
>
> steal_tags only happens when free tags are more than half of the total tags.
> This is too restrictive and can cause a livelock. I found one cpu has free tags,
> but other cpus can't steal (threads are bound to specific cpus), so threads which
>
steal_tags only happens when free tags are more than half of the total tags.
This is too restrictive and can cause a livelock. I found one cpu has free tags,
but other cpus can't steal (threads are bound to specific cpus), so threads which
want to allocate tags are always sleeping. I found this when I run n
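A compressed userspace model of the failure mode described above; the threshold mirrors the report, but the names and the simplified stealing rule are assumptions, not the actual lib/percpu_ida.c code:

#include <stdbool.h>

/* Toy model of per-cpu tag caching with a "steal only when more than half
 * of all tags are free" rule. */
struct tag_pool {
	unsigned nr_tags;		/* total tags                       */
	unsigned nr_free_shared;	/* free tags on the shared list     */
	unsigned nr_free_cached;	/* free tags held in per-cpu caches */
};

static bool may_steal(const struct tag_pool *p)
{
	return p->nr_free_shared + p->nr_free_cached > p->nr_tags / 2;
}

int main(void)
{
	/* Half of the tags are free, but they all sit in the cache of a CPU
	 * that no runnable task is pinned to.  16 > 32/2 is false, so no
	 * stealing is allowed and allocators on other CPUs sleep forever
	 * even though free tags exist. */
	struct tag_pool p = {
		.nr_tags = 32,
		.nr_free_shared = 0,
		.nr_free_cached = 16,
	};

	return may_steal(&p) ? 0 : 1;
}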