Commit a25bd72badfa ("powerpc/mm/radix: Workaround prefetch issue with
KVM") introduced a number of workarounds as coming out of a guest with
the mmu enabled would make the cpu start running in hypervisor
state with the PID value from the guest. The cpu will then start
prefetching for the hypervisor
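For illustration only: the gist of such a workaround is to move the CPU onto a safe translation context before it runs in hypervisor state with the MMU still on, so any speculative prefetch uses a harmless PID. The helper below is a hypothetical sketch, not the code from a25bd72badfa; mtspr(), SPRN_PID and isync() are the real powerpc primitives.

#include <asm/reg.h>    /* mtspr(), SPRN_PID */
#include <asm/synch.h>  /* isync() */

/*
 * Hypothetical sketch: switch away from the guest PID before running
 * in hypervisor state, so prefetches cannot use a stale guest context.
 */
static void switch_to_host_pid(void)
{
	mtspr(SPRN_PID, 0);	/* PID 0 == kernel/hypervisor context */
	isync();		/* make the context change visible before any further fetch */
}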
On 12/5/19 10:35 AM, Sebastian Andrzej Siewior wrote:
> On 2019-12-03 10:56:35 [-0600], Rob Herring wrote:
>>> Another possibility would be to make the cache be dependent
>>> upon not CONFIG_PPC. It might be possible to disable the
>>> cache with a minimal code change.
>>
>> I'd rather not do that
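For context, the rejected suggestion amounts to compiling the phandle cache out on powerpc rather than teaching it about freed nodes. A hypothetical sketch of what that guard would look like (names and sizes are illustrative, not drivers/of/base.c as-is):

#include <linux/of.h>

#ifndef CONFIG_PPC
/* Cache kept as today: a small direct-mapped array of recently
 * resolved phandles (illustrative size and name). */
#define PHANDLE_CACHE_SZ	128
static struct device_node *phandle_cache[PHANDLE_CACHE_SZ];

static struct device_node *phandle_cache_lookup(phandle handle)
{
	return phandle_cache[handle % PHANDLE_CACHE_SZ];
}
#else
/* The rejected idea: on powerpc, always fall back to the full tree walk. */
static inline struct device_node *phandle_cache_lookup(phandle handle)
{
	return NULL;
}
#endif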
On 12/3/19 10:56 AM, Rob Herring wrote:
> On Mon, Dec 2, 2019 at 10:28 PM Frank Rowand wrote:
>>
>> On 12/2/19 10:12 PM, Michael Ellerman wrote:
>>> Frank Rowand writes:
On 11/29/19 9:10 AM, Sebastian Andrzej Siewior wrote:
> I've been looking at phandle_cache and noticed the following:
On 12/3/19 12:35 PM, Segher Boessenkool wrote:
> Hi!
>
> On Tue, Dec 03, 2019 at 03:03:22PM +1100, Michael Ellerman wrote:
>> Sebastian Andrzej Siewior writes:
>> I've certainly heard it said that on some OF's the phandle was just ==
>> the address of the internal representation, and I guess mayb
Michael Ellerman writes:
> Russell Currey writes:
>> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
>> W+X page at boot by default. This can be tested with
>> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
>> kernel log during boot.
>>
>> powerpc doe
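Conceptually, the DEBUG_WX check behind that boot-time report walks every kernel mapping and complains about any range that is both writable and executable. A self-contained sketch of that idea (not the powerpc ptdump implementation):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative flattened view of one kernel mapping. */
struct mapping {
	unsigned long start, end;
	bool writable, executable;
};

/* Count and report W+X ranges; the real check warns in the kernel log
 * when this count is non-zero at boot. */
static unsigned int check_wx(const struct mapping *maps, size_t n)
{
	unsigned int wx = 0;

	for (size_t i = 0; i < n; i++) {
		if (maps[i].writable && maps[i].executable) {
			printf("W+X mapping: 0x%lx-0x%lx\n",
			       maps[i].start, maps[i].end);
			wx++;
		}
	}
	return wx;
}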
On Tue, Nov 26, 2019 at 05:02:03PM -0800, Haren Myneni wrote:
> [PATCH 01/14] powerpc/vas: Describe vas-port and interrupts properties
Something wrong here with the subject in the body.
>
> Signed-off-by: Haren Myneni
> ---
> Documentation/devicetree/bindings/powerpc/ibm,vas.txt | 5 +
> 1
On Thu, Dec 05, 2019 at 02:02:17PM +0530 Srikar Dronamraju wrote:
> With commit 247f2f6f3c70 ("sched/core: Don't schedule threads on pre-empted
> vCPUs"), scheduler avoids preempted vCPUs to schedule tasks on wakeup.
> This leads to wrong choice of CPU, which in-turn leads to larger wakeup
> latenc
On Thu, Dec 05, 2019 at 02:02:18PM +0530 Srikar Dronamraju wrote:
> With the shared processor static key available, is_shared_processor()
> can return without having to query the lppaca structure.
>
> Cc: Parth Shah
> Cc: Ihor Pasichnyk
> Cc: Juri Lelli
> Cc: Phil Auld
> Cc: Waiman Long
> Sig
Hi Alastair,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on v5.4-rc8]
[also build test ERROR on char-misc/char-misc-testing]
[cannot apply to linux-nvdimm/libnvdimm-for-next linus/master next-20191205]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
Hi Nathan,
Nathan Lynch wrote:
Hi Kamalesh,
Kamalesh Babulal writes:
On 12/5/19 3:54 AM, Nathan Lynch wrote:
"Gautham R. Shenoy" writes:
Tools such as lparstat, which are used to compute utilization, need
to know the [S]PURR ticks when the cpu was busy or idle. The [S]PURR
counters are alre
Gautham R. Shenoy wrote:
From: "Gautham R. Shenoy"
On Pseries LPARs, to calculate utilization, we need to know the
[S]PURR ticks when the CPUs were busy or idle.
The total PURR and SPURR ticks are already exposed via the per-cpu
sysfs files /sys/devices/system/cpu/cpuX/purr and
/sys/devices/sy
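For a sense of how such counters get used: once busy and idle [S]PURR ticks are available separately, utilization over an interval is simply the busy delta divided by the total delta. A hypothetical sketch (the struct and field names are made up, not the sysfs interface):

/* Hypothetical sketch: utilization from two successive PURR samples. */
struct purr_sample {
	unsigned long long busy;	/* PURR ticks accumulated while busy */
	unsigned long long idle;	/* PURR ticks accumulated while idle */
};

static double purr_utilization(const struct purr_sample *prev,
			       const struct purr_sample *cur)
{
	unsigned long long busy = cur->busy - prev->busy;
	unsigned long long idle = cur->idle - prev->idle;

	if (busy + idle == 0)
		return 0.0;
	return (double)busy / (double)(busy + idle);
}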
On 2019-12-03 10:56:35 [-0600], Rob Herring wrote:
> > Another possibility would be to make the cache be dependent
> > upon not CONFIG_PPC. It might be possible to disable the
> > cache with a minimal code change.
>
> I'd rather not do that.
>
> And yes, as mentioned earlier I don't like the com
Hi Kamalesh,
Kamalesh Babulal writes:
> On 12/5/19 3:54 AM, Nathan Lynch wrote:
>> "Gautham R. Shenoy" writes:
>>>
>>> Tools such as lparstat, which are used to compute utilization, need
>>> to know the [S]PURR ticks when the cpu was busy or idle. The [S]PURR
>>> counters are already exposed throu
On 12/5/19 3:54 AM, Nathan Lynch wrote:
> "Gautham R. Shenoy" writes:
>> From: "Gautham R. Shenoy"
>>
>> On PSeries LPARs, data center planners desire a more accurate
>> view of system utilization per resource such as CPU to plan the system
>> capacity requirements better. Such accuracy can
On 12/5/19 3:32 AM, Srikar Dronamraju wrote:
> With the shared processor static key available, is_shared_processor()
> can return without having to query the lppaca structure.
>
> Cc: Parth Shah
> Cc: Ihor Pasichnyk
> Cc: Juri Lelli
> Cc: Phil Auld
> Cc: Waiman Long
> Signed-off-by: Srikar Dro
Hi,
On 04/12/19 19:14, Srikar Dronamraju wrote:
> With commit 247f2f6f3c70 ("sched/core: Don't schedule threads on pre-empted
> vCPUs"), scheduler avoids preempted vCPUs to schedule tasks on wakeup.
> This leads to wrong choice of CPU, which in-turn leads to larger wakeup
> latencies. Eventually,
On Thu, Dec 5, 2019 at 11:18 AM Michael Walle wrote:
>
> Hi Daniel,
>
> On 2019-12-05 09:43, Daniel Baluta wrote:
> > On Fri, Nov 29, 2019 at 12:40 AM Michael Walle
> > wrote:
> >>
> >> The LS1028A SoC uses the same interrupt line for adjacent SAIs. Use
> >> IRQF_SHARED to be able to use these
Hi Daniel,
On 2019-12-05 09:43, Daniel Baluta wrote:
On Fri, Nov 29, 2019 at 12:40 AM Michael Walle
wrote:
The LS1028A SoC uses the same interrupt line for adjacent SAIs. Use
IRQF_SHARED to be able to use these SAIs simultaneously.
Hi Michael,
Thanks for the patch. We have a similar chan
On Fri, Nov 29, 2019 at 12:40 AM Michael Walle wrote:
>
> The LS1028A SoC uses the same interrupt line for adjacent SAIs. Use
> IRQF_SHARED to be able to use these SAIs simultaneously.
Hi Michael,
Thanks for the patch. We have a similar change inside our internal tree
(it is on my long TODO list
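For reference, sharing one line between two SAIs is the usual IRQF_SHARED pattern: each instance registers its own handler with a unique dev_id and returns IRQ_NONE when its hardware did not raise the interrupt. A hedged sketch (the struct and register offset are illustrative, not the fsl_sai driver):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/device.h>
#include <linux/types.h>

/* Illustrative per-instance state; not the fsl_sai structures. */
struct sai_instance {
	void __iomem *regs;
	int irq;
};

static irqreturn_t sai_isr(int irq, void *dev_id)
{
	struct sai_instance *sai = dev_id;
	u32 status = readl(sai->regs);	/* hypothetical status register at offset 0 */

	if (!status)
		return IRQ_NONE;	/* the other SAI on this line raised it */

	writel(status, sai->regs);	/* ack what we handled */
	return IRQ_HANDLED;
}

static int sai_request_irq(struct device *dev, struct sai_instance *sai)
{
	/* IRQF_SHARED lets both adjacent SAIs register on the same line;
	 * a unique dev_id tells the IRQ core which handler owns which device. */
	return devm_request_irq(dev, sai->irq, sai_isr, IRQF_SHARED,
				dev_name(dev), sai);
}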
With the shared processor static key available, is_shared_processor()
can return without having to query the lppaca structure.
Cc: Parth Shah
Cc: Ihor Pasichnyk
Cc: Juri Lelli
Cc: Phil Auld
Cc: Waiman Long
Signed-off-by: Srikar Dronamraju
---
Changelog v1 (https://patchwork.ozlabs.org/patch/
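To make the trade-off concrete: the change replaces a load-and-test of an lppaca field on every call with a static branch patched once at boot. A reconstruction for illustration, not the patch itself:

#include <linux/jump_label.h>
#include <asm/lppaca.h>

DECLARE_STATIC_KEY_FALSE(shared_processor);

/* Illustrative "before": each call dereferences the lppaca. */
static inline bool is_shared_processor_via_lppaca(void)
{
	return lppaca_shared_proc(get_lppaca());
}

/* Illustrative "after": the shared/dedicated answer is baked into the
 * instruction stream as a static branch, set once during boot. */
static inline bool is_shared_processor_via_static_key(void)
{
	return static_branch_unlikely(&shared_processor);
}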
With commit 247f2f6f3c70 ("sched/core: Don't schedule threads on pre-empted
vCPUs"), scheduler avoids preempted vCPUs to schedule tasks on wakeup.
This leads to wrong choice of CPU, which in-turn leads to larger wakeup
latencies. Eventually, it leads to performance regression in latency
sensitive b
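The mechanism at issue sits in the wakeup path's idle-CPU test: a CPU whose vCPU is currently preempted by the hypervisor is treated as unusable, which is what pushes the wakeup onto a different (and possibly worse) CPU. A simplified sketch of the shape of that check, not the scheduler source; idle_cpu() and vcpu_is_preempted() are the real kernel helpers:

#include <linux/sched.h>

/* Simplified sketch of the check added around commit 247f2f6f3c70:
 * an otherwise idle CPU is skipped at wakeup if its vCPU is preempted. */
static inline bool usable_idle_cpu(int cpu)
{
	return idle_cpu(cpu) && !vcpu_is_preempted(cpu);
}

On a dedicated LPAR vcpu_is_preempted() should never report true, so the series presumably aims to make that the case and stop rejecting genuinely idle CPUs there.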
On Wed, Dec 04, 2019 at 12:42:32PM -0800, Ram Pai wrote:
> > The other approach we could use for that - which would still allow
> > H_PUT_TCE_INDIRECT, would be to allocate the TCE buffer page from the
> > same pool that we use for the bounce buffers. I assume there must
> > already be some sort o