On Mon, 2015-04-06 at 23:17 -0700, Christoph Hellwig wrote:
> On Thu, Apr 02, 2015 at 01:57:10PM -0700, Nicholas A. Bellinger wrote:
> > > mempools are for I/O paths that make guaranteed progress. While the
> > > callers of core_enable_device_list_for_node are in the control path,
> > > and not in
On Thu, Apr 02, 2015 at 01:57:10PM -0700, Nicholas A. Bellinger wrote:
> > mempools are for I/O paths that make guaranteed progress. While the
> > callers of core_enable_device_list_for_node are in the control path,
> > and not in a very deep callstack, fairly shallow below the configfs
> > interface
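For context, the alternative Christoph is pointing at is an ordinary sleeping allocation in the control path, with the free still deferred via RCU. A minimal sketch, assuming struct se_dev_entry carries a struct rcu_head; the helper names are made up for illustration and are not from the series:

    #include <linux/slab.h>
    #include <linux/rcupdate.h>
    #include <target/target_core_base.h>

    /*
     * Hypothetical sketch: an ordinary sleeping allocation in the configfs
     * control path.  Callers may sleep, so GFP_KERNEL is fine and an
     * allocation failure can simply be returned to userspace -- no mempool
     * is needed to guarantee forward progress here.
     */
    static struct se_dev_entry *lun_entry_alloc(void)
    {
            return kzalloc(sizeof(struct se_dev_entry), GFP_KERNEL);
    }

    /* Freeing can still wait out RCU readers without a mempool. */
    static void lun_entry_free(struct se_dev_entry *deve)
    {
            kfree_rcu(deve, rcu_head);  /* assumes a struct rcu_head member */
    }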
On Thu, 2015-04-02 at 01:56 -0700, Christoph Hellwig wrote:
> On Wed, Apr 01, 2015 at 10:37:27PM -0700, Nicholas A. Bellinger wrote:
> > On Wed, 2015-04-01 at 00:04 -0700, Christoph Hellwig wrote:
> > > On Tue, Mar 31, 2015 at 11:51:56PM -0700, Nicholas A. Bellinger wrote:
> > > > Since last week,
On Wed, Apr 01, 2015 at 10:37:27PM -0700, Nicholas A. Bellinger wrote:
> On Wed, 2015-04-01 at 00:04 -0700, Christoph Hellwig wrote:
> > On Tue, Mar 31, 2015 at 11:51:56PM -0700, Nicholas A. Bellinger wrote:
> > > Since last week, enable/disable device_list code has been converted to
> > > use mempool + call_rcu()
On Wed, 2015-04-01 at 00:04 -0700, Christoph Hellwig wrote:
> On Tue, Mar 31, 2015 at 11:51:56PM -0700, Nicholas A. Bellinger wrote:
> > Since last week, enable/disable device_list code has been converted to
> > use mempool + call_rcu() and performs the RCU pointer swap in
> > se_node_acl->lun_entry_hlist[] under se_node_acl->lun_entry_mutex.
On Tue, Mar 31, 2015 at 11:51:56PM -0700, Nicholas A. Bellinger wrote:
> Since last week, enable/disable device_list code has been converted to
> use mempool + call_rcu() and performs the RCU pointer swap in
> se_node_acl->lun_entry_hlist[] under se_node_acl->lun_entry_mutex.
Why use a mempool the
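The update-side pattern described in the quoted text above (publish the new entry under se_node_acl->lun_entry_mutex, then retire the old one via call_rcu()) boils down to something like the sketch below. Field and helper names are assumptions for illustration, not the actual patch, and kfree() stands in for whichever allocator ends up backing the entries:

    #include <linux/rculist.h>
    #include <linux/slab.h>
    #include <target/target_core_base.h>

    /*
     * Illustrative RCU callback: runs only after all readers that could
     * still see the old entry have left their rcu_read_lock() sections.
     */
    static void lun_entry_rcu_free(struct rcu_head *head)
    {
            struct se_dev_entry *deve =
                    container_of(head, struct se_dev_entry, rcu_head);

            kfree(deve);
    }

    /*
     * Swap an old mapping for a new one.  Readers walking the hlist under
     * rcu_read_lock() see either the old or the new entry, never a torn
     * state; writers serialize on the mutex.
     */
    static void lun_entry_replace(struct se_node_acl *nacl,
                                  struct se_dev_entry *old,
                                  struct se_dev_entry *new)
    {
            mutex_lock(&nacl->lun_entry_mutex);
            hlist_replace_rcu(&old->link, &new->link);
            mutex_unlock(&nacl->lun_entry_mutex);

            call_rcu(&old->rcu_head, lun_entry_rcu_free);
    }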
On Mon, 2015-03-30 at 05:08 -0700, Christoph Hellwig wrote:
> I went through this in detail, and the odd patch split that splits
> one change up into multiple patches, but then also mixes up other
> changes makes it hard to read.
>
Thanks for having a look. Will fix up patch ordering for -v2.
>
On Mon, Mar 30, 2015 at 12:35:55PM -0700, Andy Grover wrote:
> So what's next? Were you going to do the LUNs too? In addition to +perf and
> less mem usage, going down this path also leads to >256 LUNs per tpg.
This just started as a rework of Nic's series, so I'd like to avoid scope
creep. I think
On 03/30/2015 12:21 PM, Christoph Hellwig wrote:
On Mon, Mar 30, 2015 at 12:16:04PM -0700, Andy Grover wrote:
I dug my patches up and rebased on top of target-pending/for-next. Pushed
here:
git://git.kernel.org/pub/scm/linux/kernel/git/grover/linux.git
mar30-dynalloc-deve
https://git.kernel.or
On Mon, Mar 30, 2015 at 12:16:04PM -0700, Andy Grover wrote:
> I dug my patches up and rebased on top of target-pending/for-next. Pushed
> here:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/grover/linux.git
> mar30-dynalloc-deve
>
> https://git.kernel.org/cgit/linux/kernel/git/grover/linux.g
On 03/30/2015 05:08 AM, Christoph Hellwig wrote:
Fortunately there is the old patch from Andy to make the se_dev_entry
dynamically allocated, which comes in useful here. With that we might
only change the rdonly flag on a live dev entry, or assign an ACL when
it previously was NULL, something th
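A flag change on a live, dynamically allocated entry needs neither a new allocation nor a grace period; only creating a mapping that did not exist before does. A rough sketch of that distinction, with the lookup helper and field names assumed for illustration rather than taken from any of the patches:

    #include <linux/rculist.h>
    #include <linux/slab.h>
    #include <target/target_core_base.h>

    /*
     * Hypothetical sketch, not the actual patch.  Assumes a single
     * lun_entry_hlist head and a bool lun_access_ro flag on the entry.
     */
    static int set_mapped_lun_access(struct se_node_acl *nacl, u64 mapped_lun,
                                     bool lun_access_ro)
    {
            struct se_dev_entry *deve, *new;

            new = kzalloc(sizeof(*new), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;

            mutex_lock(&nacl->lun_entry_mutex);
            deve = find_deve(nacl, mapped_lun);     /* assumed lookup helper */
            if (deve) {
                    /*
                     * Existing mapping: update the live entry in place.  No
                     * reallocation and no RCU grace period is needed for a
                     * simple flag change.
                     */
                    deve->lun_access_ro = lun_access_ro;
                    mutex_unlock(&nacl->lun_entry_mutex);
                    kfree(new);
                    return 0;
            }

            /* No existing mapping: publish a fresh entry for RCU readers. */
            new->mapped_lun = mapped_lun;
            new->lun_access_ro = lun_access_ro;
            hlist_add_head_rcu(&new->link, &nacl->lun_entry_hlist);
            mutex_unlock(&nacl->lun_entry_mutex);
            return 0;
    }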
I went through this in detail, and the odd patch split that splits
one change up into multiple patches, but then also mixes up other
changes, makes it hard to read.
So I started rebasing the tree to understand it, moving your various
cleanups to the front of the series. Of course while reading through
From: Nicholas Bellinger
Hi Hannes, HCH, & Sagi,
Here is an initial pass at the conversion of se_node_acl->device_list[]
to use RCU-protected pointers for se_lun fast-path lookup code.
The big advantage with RCU is that transport_lookup_cmd_lun() can now
run completely lock-less using existing
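The lock-less fast path referred to here is the standard RCU read-side pattern: walk the per-ACL hlist under rcu_read_lock(), dereference the published se_lun pointer, and pin it with a reference before leaving the critical section. A minimal sketch, with field names assumed for illustration rather than taken from the series:

    #include <linux/rculist.h>
    #include <linux/percpu-refcount.h>
    #include <target/target_core_base.h>

    /* Illustrative lock-less LUN lookup on the I/O submission path. */
    static struct se_lun *lookup_lun_rcu(struct se_node_acl *nacl,
                                         u64 unpacked_lun)
    {
            struct se_dev_entry *deve;
            struct se_lun *lun = NULL;

            rcu_read_lock();
            hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) {
                    if (deve->mapped_lun != unpacked_lun)
                            continue;
                    lun = rcu_dereference(deve->se_lun);
                    /*
                     * Pin the LUN before leaving the RCU section so it
                     * cannot be torn down underneath the caller.
                     */
                    if (lun && !percpu_ref_tryget_live(&lun->lun_ref))
                            lun = NULL;
                    break;
            }
            rcu_read_unlock();

            return lun;
    }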