On Fri, Nov 2, 2018 at 10:05 PM Dave Hansen wrote:
> On 11/2/18 6:22 AM, Vovo Yang wrote:
> > Chris helped to answer this question:
> > Though it includes a few non-shmemfs objects, see
> > debugfs/dri/0/i915_gem_objects and the "bound objects".
> >
> > Example i915_gem_objects output:
591 ob...
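A minimal userspace sketch of pulling that accounting out of debugfs, assuming
debugfs is mounted at /sys/kernel/debug and the i915 device is card 0 (both
assumptions; the file is root-only):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];
	/* Path assumes debugfs at /sys/kernel/debug and i915 as card 0. */
	FILE *f = fopen("/sys/kernel/debug/dri/0/i915_gem_objects", "r");

	if (!f) {
		perror("i915_gem_objects");
		return 1;
	}
	/* Print the per-category accounting lines, including the
	 * "bound objects" line Chris refers to. */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "objects"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}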
On Fri 02-11-18 20:35:11, Vovo Yang wrote:
> On Thu, Nov 1, 2018 at 9:10 PM Michal Hocko wrote:
> > OK, so that explains my question about the test case. Even though you
> > generate a lot of page cache, the amount is still too small to trigger
> > pagecache-mostly reclaim, and the anon LRUs are scanned as well.
On 11/2/18 6:22 AM, Vovo Yang wrote:
> On Thu, Nov 1, 2018 at 10:30 PM Dave Hansen wrote:
>> On 11/1/18 5:06 AM, Vovo Yang wrote:
mlock() and ramfs usage are pretty easy to track down. /proc/$pid/smaps
or /proc/meminfo can show us mlock(), and good ol' 'df' and friends can
show us the extent of ramfs-pinned memory.
On Thu, Nov 1, 2018 at 10:30 PM Dave Hansen wrote:
> On 11/1/18 5:06 AM, Vovo Yang wrote:
> >> mlock() and ramfs usage are pretty easy to track down. /proc/$pid/smaps
> >> or /proc/meminfo can show us mlock(), and good ol' 'df' and friends can
> >> show us the extent of ramfs-pinned memory.
> >>
>
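A minimal sketch of that kind of check from userspace, just grepping the
relevant counters out of /proc/meminfo (nothing here is i915-specific):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	/* Unevictable counts pages on the unevictable LRU (which, with the
	 * patch under discussion, would include driver-pinned shmemfs
	 * pages); Mlocked counts pages pinned via mlock(). */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Unevictable:", 12) ||
		    !strncmp(line, "Mlocked:", 8))
			fputs(line, stdout);
	fclose(f);
	return 0;
}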
On Thu, Nov 1, 2018 at 9:10 PM Michal Hocko wrote:
> OK, so that explains my question about the test case. Even though you
> generate a lot of page cache, the amount is still too small to trigger
> pagecache-mostly reclaim, and the anon LRUs are scanned as well.
>
> Now to the difference with the previous...
On 11/1/18 5:06 AM, Vovo Yang wrote:
>> mlock() and ramfs usage are pretty easy to track down. /proc/$pid/smaps
>> or /proc/meminfo can show us mlock(), and good ol' 'df' and friends can
>> show us the extent of ramfs-pinned memory.
>>
>> With these, if we see "Unevictable" in meminfo bump up, we a...
On Thu 01-11-18 19:28:46, Vovo Yang wrote:
> On Thu, Nov 1, 2018 at 12:42 AM Michal Hocko wrote:
> > On Wed 31-10-18 07:40:14, Dave Hansen wrote:
> > > Didn't we create the unevictable lists in the first place because
> > > scanning alone was observed to be so expensive in some scenarios?
> >
> >
Quoting Chris Wilson (2018-10-31 09:41:55)
> Quoting Kuo-Hsin Yang (2018-10-31 08:19:45)
> > The i915 driver uses shmemfs to allocate backing storage for gem
> > objects. These shmemfs pages can be pinned (increased ref count) by
> > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan...
On Wed, Oct 31, 2018 at 10:19 PM Dave Hansen wrote:
> On 10/31/18 1:19 AM, owner-linux...@kvack.org wrote:
> > -These are currently used in two places in the kernel:
> > +These are currently used in three places in the kernel:
> >
> > (1) By ramfs to mark the address spaces of its inodes when they are created,
> > and this mark remains for the life of the inode...
On Thu, Nov 1, 2018 at 12:42 AM Michal Hocko wrote:
> On Wed 31-10-18 07:40:14, Dave Hansen wrote:
> > Didn't we create the unevictable lists in the first place because
> > scanning alone was observed to be so expensive in some scenarios?
>
> Yes, that is the case. I might have just misunderstood the c...
On Wed 31-10-18 07:40:14, Dave Hansen wrote:
> On 10/31/18 7:24 AM, Michal Hocko wrote:
> > I am also wondering whether unevictable page culling can be
> > really visible when we do the anon LRU reclaim, because the swap path is
> > quite expensive on its own.
>
> Didn't we create the unevictable lists in the first place because
> scanning alone was observed to be so expensive in some scenarios?
On 10/31/18 7:24 AM, Michal Hocko wrote:
> I am also wondering whether unevictable page culling can be
> really visible when we do the anon LRU reclaim, because the swap path is
> quite expensive on its own.
Didn't we create the unevictable lists in the first place because
scanning alone was observed to be so expensive in some scenarios?
On Wed 31-10-18 16:19:45, Kuo-Hsin Yang wrote:
[...]
> The previous mapping_set_unevictable patch is worse on gem_syslatency
> because it defers to vmscan to move these pages to the unevictable list,
> and the test measures latency to allocate 2MiB pages. This performance
> impact can be solved by e...
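A rough sketch of what explicit culling could look like, assuming the
pagevec-based check_move_unevictable_pages() this series ends up exporting;
the function name and batching below are illustrative, and locking and page
release are elided:

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/swap.h>

/* Instead of leaving the pages on the unevictable LRU for vmscan to
 * rediscover lazily, clear the mapping flag and push the pages back to
 * the evictable LRUs in batches right away. */
static void release_backing_store(struct address_space *mapping,
				  struct page **pages, int nr)
{
	struct pagevec pvec;
	int i;

	mapping_clear_unevictable(mapping);
	pagevec_init(&pvec);
	for (i = 0; i < nr; i++) {
		/* pagevec_add() returns 0 when the batch is full. */
		if (!pagevec_add(&pvec, pages[i])) {
			check_move_unevictable_pages(&pvec);
			pagevec_reinit(&pvec);
		}
	}
	if (pagevec_count(&pvec))
		check_move_unevictable_pages(&pvec);
}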
On 10/31/18 1:19 AM, owner-linux...@kvack.org wrote:
> -These are currently used in two places in the kernel:
> +These are currently used in three places in the kernel:
>
> (1) By ramfs to mark the address spaces of its inodes when they are created,
> and this mark remains for the life of the inode.
On Wed, Oct 31, 2018 at 5:42 PM Chris Wilson wrote:
> Will do. As you are confident, I'll try a few different machines. :)
> -Chris
Great! Thanks for your help. :)
Vovo
Quoting Kuo-Hsin Yang (2018-10-31 08:19:45)
> The i915 driver uses shmemfs to allocate backing storage for gem
> objects. These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages...
The i915 driver uses shmemfs to allocate backing storage for gem
objects. These shmemfs pages can be pinned (increased ref count) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. In some extreme cases,
all pages in the inactive anon LRU are pinned...
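For reference, a minimal sketch of the flag handling the description above
implies, assuming the driver reaches its shmemfs backing store through a
struct file (i915 uses obj->base.filp->f_mapping); the helper names here are
illustrative, but mapping_set_unevictable()/mapping_clear_unevictable() are
the existing pagemap.h helpers:

#include <linux/fs.h>
#include <linux/pagemap.h>

/* While the driver holds extra references on the shmemfs pages, set the
 * mapping flag so vmscan puts them on the unevictable LRU instead of
 * rescanning them on every reclaim pass. */
static void pin_backing_store(struct file *filp)
{
	mapping_set_unevictable(filp->f_mapping);
}

/* On unpin, clear the flag; pages already sitting on the unevictable LRU
 * still have to be moved back explicitly (see the culling sketch earlier
 * in the thread). */
static void unpin_backing_store(struct file *filp)
{
	mapping_clear_unevictable(filp->f_mapping);
}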