On Fri, Feb 03, 2017 at 04:14:14PM +0100, Robert Richter wrote:
> On 17.01.17 19:16:56, Will Deacon wrote:
> > I can't really see the trend given that, for system time, your
> > pfn_valid_within results have a variance of ~9 and the early_pfn_valid
> > results have a variance of ~92. Given that the variance seems to come
> > about due to the reboots, I think we need more

On 17.01.17 19:16:56, Will Deacon wrote:
> I can't really see the trend given that, for system time, your
> pfn_valid_within results have a variance of ~9 and the early_pfn_valid
> results have a variance of ~92. Given that the variance seems to come
> about due to the reboots, I think we need more
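
A note on the numbers: "variance" here is presumably the sample variance
of the three sys-time measurements for each configuration, in seconds^2.
A minimal sketch of that computation; the values below are made up for
illustration, not taken from the thread:

    #include <stdio.h>

    /* sample variance (n-1 denominator) of n timing measurements */
    static double sample_variance(const double *x, int n)
    {
            double mean = 0.0, ss = 0.0;
            int i;

            for (i = 0; i < n; i++)
                    mean += x[i];
            mean /= n;

            for (i = 0; i < n; i++)
                    ss += (x[i] - mean) * (x[i] - mean);

            return ss / (n - 1);
    }

    int main(void)
    {
            /* hypothetical sys times of three runs, in seconds */
            double sys_times[] = { 1015.2, 1010.4, 1013.9 };

            printf("variance: %.1f s^2\n", sample_variance(sys_times, 3));
            return 0;
    }
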
On Tue, Jan 17, 2017 at 11:00:15AM +0100, Robert Richter wrote:
> On 13.01.17 14:15:00, Robert Richter wrote:
> > On 13.01.17 09:19:04, Will Deacon wrote:
> > > On Thu, Jan 12, 2017 at 07:58:25PM +0100, Robert Richter wrote:
> > > > On 12.01.17 16:05:36, Will Deacon wrote:
> > > > > On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:

On 13.01.17 14:15:00, Robert Richter wrote:
> On 13.01.17 09:19:04, Will Deacon wrote:
> > On Thu, Jan 12, 2017 at 07:58:25PM +0100, Robert Richter wrote:
> > > On 12.01.17 16:05:36, Will Deacon wrote:
> > > > On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:
> > >
> > > > > Kernel compile times (3 runs each):

On 13.01.17 09:19:04, Will Deacon wrote:
> On Thu, Jan 12, 2017 at 07:58:25PM +0100, Robert Richter wrote:
> > On 12.01.17 16:05:36, Will Deacon wrote:
> > > On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:
> >
> > > > Kernel compile times (3 runs each):
> > > >
> > > > pfn_valid_within():

On Thu, Jan 12, 2017 at 07:58:25PM +0100, Robert Richter wrote:
> On 12.01.17 16:05:36, Will Deacon wrote:
> > On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:
>
> > > Kernel compile times (3 runs each):
> > >
> > > pfn_valid_within():
> > >
> > > real    6m4.088s
> > > user    372m57.607s

On 12.01.17 16:05:36, Will Deacon wrote:
> On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:
> > Kernel compile times (3 runs each):
> >
> > pfn_valid_within():
> >
> > real    6m4.088s
> > user    372m57.607s
> > sys     16m55.158s
> >
> > real    6m1.532s
> > user    372m48.453s

Hi Robert,
On Mon, Jan 09, 2017 at 12:53:20PM +0100, Robert Richter wrote:
> On 06.01.17 08:37:25, Ard Biesheuvel wrote:
> > Any comments on the performance impact (including boot time) ?
>
> I did a kernel compile test and kernel mode time increases by about
> 2.2%. Though this is already significant, we need a more suitable mem
> benchmark here for further testing.

On 06.01.17 08:37:25, Ard Biesheuvel wrote:
> Any comments on the performance impact (including boot time) ?
I did a kernel compile test and kernel mode time increases by about
2.2%. Though this is already significant, we need a more suitable mem
benchmark here for further testing.
For boot time
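
For context, the two macros whose cost is discussed in this thread, as
defined in include/linux/mmzone.h in the v4.9 time frame (abridged):
pfn_valid_within() only becomes a real pfn_valid() call when
CONFIG_HOLES_IN_ZONE is set, while early_pfn_valid() is always one on
SPARSEMEM configurations. The runtime cost of the former is presumably
what shows up as kernel mode overhead in the compile tests above.

    /* include/linux/mmzone.h (v4.9 time frame, abridged) */

    #ifdef CONFIG_HOLES_IN_ZONE
    /* zones may have holes: every pfn has to be checked individually */
    #define pfn_valid_within(pfn) pfn_valid(pfn)
    #else
    /* no holes within a MAX_ORDER block: the check compiles away */
    #define pfn_valid_within(pfn) (1)
    #endif

    /* with CONFIG_SPARSEMEM, early_pfn_valid() is a real pfn_valid() */
    #define early_pfn_valid(pfn)        pfn_valid(pfn)
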
Thanks Hanjun,
On Mon, Jan 9, 2017 at 10:39 AM, Hanjun Guo wrote:
> Hi Prakash,
> I didn't test "cpuset01" on D05, but according to the testing at
> Linaro, the full LTP test passed on D05 with Ard's 2 patches.
>
>> Any idea what might be causing this issue?
>
> Since it's not happening on D05,

On 2017/1/6 16:37, Ard Biesheuvel wrote:
> On 6 January 2017 at 01:07, Hanjun Guo wrote:
>> On 2017/1/5 10:03, Hanjun Guo wrote:
>>> On 2017/1/4 21:56, Ard Biesheuvel wrote:
>>>> On 16 December 2016 at 16:54, Robert Richter wrote:
>>>>> On ThunderX systems with certain memory configurations we see the
>>>>> following BUG_ON():

Hi Prakash,
On 2017/1/6 13:22, Prakash B wrote:
> Hi Hanjun,
>> an update here, tested on 4.9:
>> - Applied Ard's two patches only
>> - Applied Robert's patch only
>> Both of them work fine on D05 with NUMA enabled, which means it boots
>> OK and the LTP MM stress test passes.
> It is not related to this patch set.

On 6 January 2017 at 01:07, Hanjun Guo wrote:
> On 2017/1/5 10:03, Hanjun Guo wrote:
>>
>> On 2017/1/4 21:56, Ard Biesheuvel wrote:
>>>
>>> On 16 December 2016 at 16:54, Robert Richter wrote:
>>>> On ThunderX systems with certain memory configurations we see the
>>>> following BUG_ON():

Hi Hanjun,
> an update here, tested on 4.9:
>
> - Applied Ard's two patches only
> - Applied Robert's patch only
>
> Both of them work fine on D05 with NUMA enabled, which means it boots
> OK and the LTP MM stress test passes.
It is not related to this patch set.
LTP "cpuset01" test crashes wi
On 2017/1/5 10:03, Hanjun Guo wrote:
On 2017/1/4 21:56, Ard Biesheuvel wrote:
On 16 December 2016 at 16:54, Robert Richter wrote:
On ThunderX systems with certain memory configurations we see the
following BUG_ON():
kernel BUG at mm/page_alloc.c:1848!
This happens for some configs with 64k
On 04.01.17 13:56:39, Ard Biesheuvel wrote:
> Given that you are touching arch/arm/ as well as arch/arm64, could you
> explain why only arm64 needs this treatment? Is it simply because we
> don't have NUMA support there?
I haven't considered a solution for arch/arm yet. The fixes are
independent.

On 2017/1/4 21:56, Ard Biesheuvel wrote:
> On 16 December 2016 at 16:54, Robert Richter wrote:
>> On ThunderX systems with certain memory configurations we see the
>> following BUG_ON():
>> kernel BUG at mm/page_alloc.c:1848!
>> This happens for some configs with 64k page size enabled. The BUG_ON()
>> checks if the start and end page of a memmap range belong to the same
>> zone.

On 16 December 2016 at 16:54, Robert Richter wrote:
> On ThunderX systems with certain memory configurations we see the
> following BUG_ON():
>
> kernel BUG at mm/page_alloc.c:1848!
>
> This happens for some configs with 64k page size enabled. The BUG_ON()
> checks if the start and end page of a memmap range belong to the same
> zone.

On ThunderX systems with certain memory configurations we see the
following BUG_ON():
kernel BUG at mm/page_alloc.c:1848!
This happens for some configs with 64k page size enabled. The BUG_ON()
checks if the start and end page of a memmap range belong to the same
zone.
The BUG_ON() check fails if a
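
For reference, the check that fires lives in move_freepages() in
mm/page_alloc.c; in the v4.9 time frame it reads roughly as follows
(abridged, in-tree comment paraphrased):

    /* mm/page_alloc.c, move_freepages() (abridged) */
    #ifndef CONFIG_HOLES_IN_ZONE
            /*
             * page_zone() is not safe to call in this context when
             * CONFIG_HOLES_IN_ZONE is set, and the check is probably
             * redundant anyway, since move_freepages_block() checks
             * the zone boundaries.
             */
            BUG_ON(page_zone(start_page) != page_zone(end_page));
    #endif
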