On Thu 30-03-17 19:19:59, Ilya Dryomov wrote:
> On Thu, Mar 30, 2017 at 6:12 PM, Michal Hocko wrote:
> > On Thu 30-03-17 17:06:51, Ilya Dryomov wrote:
> > [...]
> >> > But if the allocation is stuck then the holder of the lock cannot make
> >> > forward progress and it is effectively deadlocked because other IO
> >> > depends on the lock it holds.
On Thu, Mar 30, 2017 at 6:12 PM, Michal Hocko wrote:
> On Thu 30-03-17 17:06:51, Ilya Dryomov wrote:
> [...]
>> > But if the allocation is stuck then the holder of the lock cannot make
>> > forward progress and it is effectively deadlocked because other IO
>> > depends on the lock it holds. May
On Thu 30-03-17 17:06:51, Ilya Dryomov wrote:
[...]
> > But if the allocation is stuck then the holder of the lock cannot make
> > forward progress and it is effectively deadlocked because other IO
> > depends on the lock it holds. Maybe I just ask bad questions but what
>
> Only I/O to the sam
On Thu, Mar 30, 2017 at 4:36 PM, Michal Hocko wrote:
> On Thu 30-03-17 15:48:42, Ilya Dryomov wrote:
>> On Thu, Mar 30, 2017 at 1:21 PM, Michal Hocko wrote:
> [...]
>> > familiar with Ceph at all but does any of its (slab) shrinkers generate
>> > IO to recurse back?
>>
>> We don't register any custom shrinkers.
On Thu 30-03-17 15:48:42, Ilya Dryomov wrote:
> On Thu, Mar 30, 2017 at 1:21 PM, Michal Hocko wrote:
[...]
> > familiar with Ceph at all but does any of its (slab) shrinkers generate
> > IO to recurse back?
>
> We don't register any custom shrinkers. This is XFS on top of rbd,
> a ceph-backed block device
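For context on the shrinker question: a "custom shrinker" is a per-subsystem reclaim callback registered with the MM via register_shrinker(). The skeleton below is purely illustrative (it is not libceph or rbd code, and it assumes the register_shrinker() API of the 4.x era); a shrinker whose scan_objects() submitted I/O is exactly the kind of reclaim recursion being asked about, and the point above is that libceph/rbd register no such callback.

#include <linux/module.h>
#include <linux/shrinker.h>

/* Illustrative shrinker skeleton, not ceph code. */
static unsigned long demo_count_objects(struct shrinker *shrink,
					struct shrink_control *sc)
{
	return 0;			/* nothing cached in this demo */
}

static unsigned long demo_scan_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	/*
	 * A real shrinker frees up to sc->nr_to_scan objects here.
	 * Submitting I/O from this path is what would recurse back
	 * into the block layer while reclaim is already in progress.
	 */
	return SHRINK_STOP;
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count_objects,
	.scan_objects	= demo_scan_objects,
	.seeks		= DEFAULT_SEEKS,
};

static int __init demo_init(void)
{
	return register_shrinker(&demo_shrinker);
}

static void __exit demo_exit(void)
{
	unregister_shrinker(&demo_shrinker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");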
On Thu 30-03-17 15:53:35, Ilya Dryomov wrote:
> On Thu, Mar 30, 2017 at 8:25 AM, Michal Hocko wrote:
> > On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
[...]
> >> are you saying it's OK for a block
> >> device to recurse back into the filesystem when doing I/O, potentially
> >> generating more I/O?
On Thu, Mar 30, 2017 at 8:25 AM, Michal Hocko wrote:
> On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
>> On Wed, Mar 29, 2017 at 1:16 PM, Michal Hocko wrote:
>> > On Wed 29-03-17 13:10:01, Ilya Dryomov wrote:
>> >> On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
>> >> > On Wed 29-03-17 12:41:26, Michal Hocko wrote:
On Thu, Mar 30, 2017 at 1:21 PM, Michal Hocko wrote:
> On Thu 30-03-17 12:02:03, Ilya Dryomov wrote:
>> On Thu, Mar 30, 2017 at 8:25 AM, Michal Hocko wrote:
>> > On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
> [...]
>> >> We got rid of osdc->request_mutex in 4.7, so these workers are almost
>> >> independent in newer kernels and should be able to free up memory
On Thu 30-03-17 12:02:03, Ilya Dryomov wrote:
> On Thu, Mar 30, 2017 at 8:25 AM, Michal Hocko wrote:
> > On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
[...]
> >> We got rid of osdc->request_mutex in 4.7, so these workers are almost
>> independent in newer kernels and should be able to free up memory
On Thu, Mar 30, 2017 at 8:25 AM, Michal Hocko wrote:
> On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
>> On Wed, Mar 29, 2017 at 1:16 PM, Michal Hocko wrote:
>> > On Wed 29-03-17 13:10:01, Ilya Dryomov wrote:
>> >> On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
>> >> > On Wed 29-03-17 12:41:26, Michal Hocko wrote:
On Wed 29-03-17 16:25:18, Ilya Dryomov wrote:
> On Wed, Mar 29, 2017 at 1:16 PM, Michal Hocko wrote:
> > On Wed 29-03-17 13:10:01, Ilya Dryomov wrote:
> >> On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
> >> > On Wed 29-03-17 12:41:26, Michal Hocko wrote:
> >> > [...]
> >> >> > ceph_con_workfn
On Wed, Mar 29, 2017 at 1:49 PM, Brian Foster wrote:
> On Wed, Mar 29, 2017 at 01:18:34PM +0200, Michal Hocko wrote:
>> On Wed 29-03-17 13:14:42, Ilya Dryomov wrote:
>> > On Wed, Mar 29, 2017 at 1:05 PM, Brian Foster wrote:
>> > > On Wed, Mar 29, 2017 at 12:41:26PM +0200, Michal Hocko wrote:
>> >
On Wed, Mar 29, 2017 at 1:16 PM, Michal Hocko wrote:
> On Wed 29-03-17 13:10:01, Ilya Dryomov wrote:
>> On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
>> > On Wed 29-03-17 12:41:26, Michal Hocko wrote:
>> > [...]
>> >> > ceph_con_workfn
>> >> > mutex_lock(&con->mutex) # ceph_connection::mutex
On Wed, Mar 29, 2017 at 01:18:34PM +0200, Michal Hocko wrote:
> On Wed 29-03-17 13:14:42, Ilya Dryomov wrote:
> > On Wed, Mar 29, 2017 at 1:05 PM, Brian Foster wrote:
> > > On Wed, Mar 29, 2017 at 12:41:26PM +0200, Michal Hocko wrote:
> > >> [CC xfs guys]
> > >>
> > >> On Wed 29-03-17 11:21:44, Ilya Dryomov wrote:
On Wed 29-03-17 13:14:42, Ilya Dryomov wrote:
> On Wed, Mar 29, 2017 at 1:05 PM, Brian Foster wrote:
> > On Wed, Mar 29, 2017 at 12:41:26PM +0200, Michal Hocko wrote:
> >> [CC xfs guys]
> >>
> >> On Wed 29-03-17 11:21:44, Ilya Dryomov wrote:
> >> [...]
> >> > This is a set of stack traces from http://tracker.ceph.com/issues/19309
On Wed, Mar 29, 2017 at 1:05 PM, Brian Foster wrote:
> On Wed, Mar 29, 2017 at 12:41:26PM +0200, Michal Hocko wrote:
>> [CC xfs guys]
>>
>> On Wed 29-03-17 11:21:44, Ilya Dryomov wrote:
>> [...]
>> > This is a set of stack traces from http://tracker.ceph.com/issues/19309
>> > (linked in the changelog):
On Wed 29-03-17 13:10:01, Ilya Dryomov wrote:
> On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
> > On Wed 29-03-17 12:41:26, Michal Hocko wrote:
> > [...]
> >> > ceph_con_workfn
> >> > mutex_lock(&con->mutex) # ceph_connection::mutex
> >> > try_write
> >> > ceph_tcp_connect
> >> > sock_create_kern
On Wed, Mar 29, 2017 at 12:55 PM, Michal Hocko wrote:
> On Wed 29-03-17 12:41:26, Michal Hocko wrote:
> [...]
>> > ceph_con_workfn
>> > mutex_lock(&con->mutex) # ceph_connection::mutex
>> > try_write
>> > ceph_tcp_connect
>> > sock_create_kern
>> > GFP_KERNEL allocation
>> > allocator recurses into XFS, more I/O is issued
On Wed, Mar 29, 2017 at 12:41:26PM +0200, Michal Hocko wrote:
> [CC xfs guys]
>
> On Wed 29-03-17 11:21:44, Ilya Dryomov wrote:
> [...]
> > This is a set of stack traces from http://tracker.ceph.com/issues/19309
> > (linked in the changelog):
> >
> > Workqueue: ceph-msgr con_work [libceph]
On Wed 29-03-17 12:41:26, Michal Hocko wrote:
[...]
> > ceph_con_workfn
> > mutex_lock(&con->mutex) # ceph_connection::mutex
> > try_write
> > ceph_tcp_connect
> > sock_create_kern
> > GFP_KERNEL allocation
> > allocator recurses into XFS, more I/O is issued
One mo
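Why forcing GFP_NOIO breaks the cycle shown in the trace above: a task flagged PF_MEMALLOC_NOIO has __GFP_IO and __GFP_FS stripped from its allocations, so direct reclaim in that context will neither call back into the filesystem nor issue more block I/O. In the 4.x kernels under discussion the masking is done by a small helper, roughly as sketched here (simplified from memalloc_noio_flags() in include/linux/sched.h):

/*
 * Simplified sketch of the 4.x-era memalloc_noio_flags() helper: with
 * PF_MEMALLOC_NOIO set on the task, a nominally GFP_KERNEL allocation
 * is degraded to GFP_NOIO, so reclaim cannot recurse into XFS or queue
 * more rbd I/O while con->mutex is held.
 */
static inline gfp_t memalloc_noio_flags(gfp_t flags)
{
	if (unlikely(current->flags & PF_MEMALLOC_NOIO))
		flags &= ~(__GFP_IO | __GFP_FS);
	return flags;
}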
[CC xfs guys]
On Wed 29-03-17 11:21:44, Ilya Dryomov wrote:
[...]
> This is a set of stack traces from http://tracker.ceph.com/issues/19309
> (linked in the changelog):
>
> Workqueue: ceph-msgr con_work [libceph]
On Tue, Mar 28, 2017 at 3:30 PM, Michal Hocko wrote:
> On Tue 28-03-17 15:23:58, Ilya Dryomov wrote:
>> On Tue, Mar 28, 2017 at 2:43 PM, Michal Hocko wrote:
>> > On Tue 28-03-17 14:30:45, Greg KH wrote:
>> >> 4.4-stable review patch. If anyone has any objections, please let me
>> >> know.
>> >
On Tue 28-03-17 15:23:58, Ilya Dryomov wrote:
> On Tue, Mar 28, 2017 at 2:43 PM, Michal Hocko wrote:
> > On Tue 28-03-17 14:30:45, Greg KH wrote:
> >> 4.4-stable review patch. If anyone has any objections, please let me know.
> >
> > I haven't seen the original patch but the changelog makes me worried.
On Tue, Mar 28, 2017 at 2:43 PM, Michal Hocko wrote:
> On Tue 28-03-17 14:30:45, Greg KH wrote:
>> 4.4-stable review patch. If anyone has any objections, please let me know.
>
> I haven't seen the original patch but the changelog makes me worried.
> How exactly is this a problem? Where do we lock up?
On Tue 28-03-17 14:30:45, Greg KH wrote:
> 4.4-stable review patch. If anyone has any objections, please let me know.
I haven't seen the original patch but the changelog makes me worried.
How exactly is this a problem? Where do we lock up? Does rbd/libceph take
any xfs locks?
> --
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Ilya Dryomov
commit 633ee407b9d15a75ac9740ba9d3338815e1fcb95 upstream.
sock_alloc_inode() allocates socket+inode and socket_wq with
GFP_KERNEL, which is not allowed on the writeback path:
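The change under review is summed up by its title, "libceph: force GFP_NOIO for socket allocations". A minimal sketch of that idea is shown below, assuming the memalloc_noio_save()/memalloc_noio_restore() pair from <linux/sched.h>; the wrapper name is hypothetical and the actual patch may scope the NOIO region differently (e.g. around the whole messenger workfn rather than just the connect path):

#include <linux/sched.h>

/*
 * Hypothetical illustration, not the literal upstream diff: mark the
 * task PF_MEMALLOC_NOIO around socket setup so that sock_alloc_inode()
 * and the socket_wq allocation, which use GFP_KERNEL internally, cannot
 * enter direct reclaim that writes back through XFS/rbd while the
 * messenger holds con->mutex.
 */
static int con_sock_connect_noio(struct ceph_connection *con)
{
	unsigned int noio_flag = memalloc_noio_save();
	int ret;

	ret = ceph_tcp_connect(con);	/* allocates the socket */

	memalloc_noio_restore(noio_flag);
	return ret;
}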