On 10/02/2024 17:28, Luca Weiss wrote:
> Add the compatible for the SAW2 for L2 cache found on MSM8226.
>
> Signed-off-by: Luca Weiss
> ---
> Documentation/devicetree/bindings/soc/qcom/qcom,saw2.yaml | 1 +
Acked-by: Krzysztof Kozlowski
Best regards,
Krzysztof
Add the compatible for the SAW2 for L2 cache found on MSM8226.
Signed-off-by: Luca Weiss
---
Documentation/devicetree/bindings/soc/qcom/qcom,saw2.yaml | 1 +
1 file changed, 1 insertion(+)
diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,saw2.yaml
b/Documentation/devicetree
On Fri, Apr 09, 2021, Yang Weijiang wrote:
> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
> read/write them and after they're changed. If CET guest entry-load bit is not
> set by L1 guest, migrate them to L2 manually.
>
> Opportunistically r
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
read/write them and after they're changed. If CET guest entry-load bit is not
set by L1 guest, migrate them to L2 manually.
Opportunistically remove one blank line in previous patch.
Suggested-by: Sean Christoph
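For orientation, a condensed sketch of the vmcs02 -> vmcs12 direction of this sync, using the field and helper names quoted in the hunks further down this thread; the wrapper function itself is illustrative, not the actual patch:

/*
 * Sketch: CET state is one of the "rarely updated" fields, so it is read
 * back from vmcs02 into vmcs12 only in the rare-field sync path.  Whether
 * to additionally guard this on vmcs12's entry controls or guest_cr4.CET
 * is exactly what is being debated in this thread.
 */
static void sync_vmcs02_to_vmcs12_cet_sketch(struct vmcs12 *vmcs12)
{
	if (!kvm_cet_supported())
		return;

	vmcs12->guest_ssp     = vmcs_readl(GUEST_SSP);
	vmcs12->guest_s_cet   = vmcs_readl(GUEST_S_CET);
	vmcs12->guest_ssp_tbl = vmcs_readl(GUEST_INTR_SSP_TABLE);
}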
Add L2 metrics.
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
.../arch/arm64/hisilicon/hip08/metrics.json | 42 +++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64
Commit 08ed77e414ab ("perf vendor events amd: Add recommended events")
added the hits event "L2 Cache Hits from L2 HWPF" with the same metric
expression as the accesses event "L2 Cache Accesses from L2 HWPF":
$ perf list --details
...
l2_cache_accesses_from_l2_hwp
Add L2 metrics.
Signed-off-by: John Garry
---
.../arch/arm64/hisilicon/hip08/metrics.json | 42 +++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
red
> > > > value since
> > > > setting MSRs would have written the value into vmcs01.
> > >
> > > Then the code nested_vmx_enter_non_root_mode() would look like:
> > >
> > > if (kvm_cet_supported() && !vmx->nested
alid
> physical address instead of 0.
>
> Tested:
> kvm-unit-tests
> kvm selftests
> Fedora L1 L2
>
> Signed-off-by: Cathy Avery
> ---
> arch/x86/kvm/svm/nested.c | 9 ++---
> arch/x86/kvm/svm/svm.c| 2 +-
> 2 files changed, 7 insertions(+), 4 deletio
;
> > !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE)) {
> > ...
> > }
> >
> > I have another concern now, if vm_entry_controls.load_cet_state == false,
> > and L1
> > updated vmcs fields, so the latest states are in vmcs12, but they canno
On Tue, Mar 23, 2021 at 10:59:01AM +0800, Xu Yihang wrote:
> Fixes the following W=1 kernel build warning(s):
> ../arch/x86/kernel/cpu/intel.c: In function ‘init_intel’:
> ../arch/x86/kernel/cpu/intel.c:644:20: warning: variable ‘l2’ set but not
> used [-Wunused-but-set-variable]
&
Fixes the following W=1 kernel build warning(s):
../arch/x86/kernel/cpu/intel.c: In function ‘init_intel’:
../arch/x86/kernel/cpu/intel.c:644:20: warning: variable ‘l2’ set but not used
[-Wunused-but-set-variable]
unsigned int l1, l2;
^~
Compilation command(s):
make
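For context, a standalone illustration of this warning class (not the actual kernel change; read_l2_assoc() is just a stand-in): -Wunused-but-set-variable fires when a local is assigned but its value is never read, and the usual remedies are to use the value or drop the dead store.

/* warn.c: compile with  gcc -Wall -Wextra -c warn.c  (or W=1 in kbuild) */
unsigned int read_l2_assoc(void);

void triggers_warning(void)
{
	unsigned int l2;

	l2 = read_l2_assoc();	/* 'l2' is set but never read -> warning */
}

void fixed(void)
{
	read_l2_assoc();	/* keep only the side effect */
}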
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is
> > > trying to
> > > read/write them and after they're changed. If CET guest entry-load bit is
> > > not
> > > set by L1 guest, migrate them to L2 manually.
> > >
Clean up the x2APIC MSR bitmap interception code for L2, which is the last
holdout of open coded bitmap manipulations. Freshen up the SDM/PRM
comment, rename the function to make it abundantly clear the funky
behavior is x2APIC specific, and explain _why_ vmcs01's bitmap is ignored
(the pre
selftests
Fedora L1 L2
Signed-off-by: Cathy Avery
---
arch/x86/kvm/svm/nested.c | 9 ++---
arch/x86/kvm/svm/svm.c| 2 +-
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 8523f60adb92..6f9a40e002bc 100644
--- a/arch/x86
ad bit is
> > not
> > set by L1 guest, migrate them to L2 manually.
> >
> > Opportunistically remove one blank line and add minor fix for MPX.
> >
> > Suggested-by: Sean Christopherson
> > Signed-off-by: Yang Weijiang
> > ---
On Mon, Mar 15, 2021, Yang Weijiang wrote:
> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
> read/write them and after they're changed. If CET guest entry-load bit is not
> set by L1 guest, migrate them to L2 manually.
>
> Opportunistically r
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
read/write them and after they're changed. If CET guest entry-load bit is not
set by L1 guest, migrate them to L2 manually.
Opportunistically remove one blank line and add a minor fix for MPX.
Suggested-by:
nt, can it be checked by BNDCFGS
> > EN bit?
> > E.g.:
> >
> > if (kvm_mpx_supported() && (vmcs12->guest_bndcfgs & 1))
> >
> > > Same would apply to CET. Not sure it'd be a net positive in terms of
> > > perfo
both support MPX?
>
> For MPX, if guest_cpuid_has() is not efficient, can it be checked by BNDCFGS
> EN bit?
> E.g.:
>
> if (kvm_mpx_supported() && (vmcs12->guest_bndcfgs & 1))
>
> > Same would apply to CET. Not sure it'd be a net positive in terms of
>
Hi Paolo,
On 3/6/21 5:56 AM, Paolo Bonzini wrote:
> On 05/03/21 23:57, Dongli Zhang wrote:
>> The new per-cpu stat 'nested_run' is introduced in order to track if L1 VM
>> is running or used to run L2 VM.
>>
>> An example of the usage of 'nested_run'
g vmcs12->guest_cr4.CET?
> E.g.:
> if (kvm_cet_supported() && (vmcs12->guest_cr4 & X86_CR4_CET))
>
> >
> > > + vmcs12->guest_ssp = vmcs_readl(GUEST_SSP);
> > > + vmcs12->guest_s_cet = vmcs_readl(GUEST_S_CET);
> > >
_cr4.CET?
E.g.:
if (kvm_cet_supported() && (vmcs12->guest_cr4 & X86_CR4_CET))
>
> > + vmcs12->guest_ssp = vmcs_readl(GUEST_SSP);
> > + vmcs12->guest_s_cet = vmcs_readl(GUEST_S_CET);
> > + vmcs12->guest_ssp_tbl = vmcs_readl(GUEST_INT
On 05/03/21 23:57, Dongli Zhang wrote:
The new per-cpu stat 'nested_run' is introduced in order to track if L1 VM
is running or used to run L2 VM.
An example of the usage of 'nested_run' is to help the host administrator
to easily track if any L1 VM is used to run L2 VM. Su
The new per-cpu stat 'nested_run' is introduced in order to track if L1 VM
is running or used to run L2 VM.
An example of the usage of 'nested_run' is to help the host administrator
to easily track if any L1 VM is used to run L2 VM. Suppose there is issue
that ma
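A rough sketch of how such a counter is plumbed (the field name comes from the patch description; where exactly it is incremented, e.g. in the nested VMRUN/VMLAUNCH entry paths, is an assumption here):

/* Sketch: one more per-vCPU counter, bumped whenever L1 enters L2. */
struct kvm_vcpu_stat {
	/* ...existing counters... */
	u64 nested_run;
};

static inline void account_nested_run(struct kvm_vcpu *vcpu)
{
	++vcpu->stat.nested_run;	/* surfaced to userspace like the other per-vcpu stats */
}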
dl(GUEST_INTR_SSP_TABLE);
> + }
>
> vmx->nested.need_sync_vmcs02_to_vmcs12_rare = false;
> }
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 9d3a557949ac..36dc4fdb0909 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
>
t; not
> > set by L1 guest, migrate them to L2 manually.
> >
> > Suggested-by: Sean Christopherson
> > Signed-off-by: Yang Weijiang
> > ---
> > arch/x86/kvm/cpuid.c | 1 -
> > arch/x86/kvm/vmx/nested.c | 30 ++
>
Yang Weijiang writes:
> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
> read/write them and after they're changed. If CET guest entry-load bit is not
> set by L1 guest, migrate them to L2 manually.
>
> Suggested-by: Sean Christopherson
&
is
> > not
> > set by L1 guest, migrate them to L2 manually.
> >
> > Suggested-by: Sean Christopherson
> > Signed-off-by: Yang Weijiang
>
> Hi Weijiang, can you post the complete series again? Thanks!
Sure, sent v3 version to include all the patches. Thank
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
read/write them and after they're changed. If CET guest entry-load bit is not
set by L1 guest, migrate them to L2 manually.
Suggested-by: Sean Christopherson
Signed-off-by: Yang Weijiang
---
arch/x86/kvm/cp
On 03/03/21 07:04, Yang Weijiang wrote:
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
read/write them and after they're changed. If CET guest entry-load bit is not
set by L1 guest, migrate them to L2 manually.
Suggested-by: Sean Christopherson
Signed-o
to
> >> read/write them and after they're changed. If CET guest entry-load bit is
> >> not
> >> set by L1 guest, migrate them to L2 manually.
> >>
> >> Suggested-by: Sean Christopherson
> >> Signed-off-by: Yang Weijiang
> >>
On 02/03/21 18:45, Sean Christopherson wrote:
If KVM (L0) intercepts #GP, but L1 does not, then L2 can kill L1 by
triggering triple fault. On both VMX and SVM, if the CPU hits a fault
while vectoring an injected #DF (or I suppose any #DF), any intercept
from the hypervisor takes priority over
From: Cathy Avery
svm->vmcb will now point to a separate vmcb for L1 (not nested) or L2
(nested).
The main advantages are removing get_host_vmcb and hsave, in favor of
concepts that are shared with VMX.
We no longer need to stash the L1 registers in hsave while L2
runs, but we need
Synthesize a nested VM-Exit if L2 triggers an emulated triple fault
instead of exiting to userspace, which likely will kill L1. Any flow
that does KVM_REQ_TRIPLE_FAULT is suspect, but the most common scenario
for L2 killing L1 is if L0 (KVM) intercepts a contributory exception that
is
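A heavily simplified sketch of that behaviour (the helper shown is illustrative; the real series routes this through a nested-ops callback):

/*
 * Sketch: if a triple fault is raised while running L2, reflect it to L1
 * as a TRIPLE_FAULT VM-Exit rather than reporting shutdown to userspace,
 * which would effectively kill L1.
 */
static bool handle_triple_fault_sketch(struct kvm_vcpu *vcpu)
{
	if (is_guest_mode(vcpu)) {
		nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
		return true;	/* consumed: L1 decides what to do with it */
	}
	return false;		/* fall back to the normal shutdown path */
}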
On Tue, Mar 02, 2021, Paolo Bonzini wrote:
> On 02/03/21 18:45, Sean Christopherson wrote:
> > If KVM (L0) intercepts #GP, but L1 does not, then L2 can kill L1 by
> > triggering triple fault. On both VMX and SVM, if the CPU hits a fault
> > while vectoring an injected #DF (
If KVM (L0) intercepts #GP, but L1 does not, then L2 can kill L1 by
triggering triple fault. On both VMX and SVM, if the CPU hits a fault
while vectoring an injected #DF (or I suppose any #DF), any intercept
from the hypervisor takes priority over triple fault. #PF is unlikely to
be intercepted
On 02/03/21 01:59, Sean Christopherson wrote:
+ svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
Same question for VMCB_CR2.
Besides the question of how much AMD processors actually use the clean
bits (a quick test suggests "not much"), in this specific case I suspect
that
On 02/03/21 13:56, Cathy Avery wrote:
On 3/1/21 7:59 PM, Sean Christopherson wrote:
On Mon, Mar 01, 2021, Cathy Avery wrote:
svm->nested.vmcb12_gpa = 0;
+ svm->nested.last_vmcb12_gpa = 0;
This should not be 0 to avoid a false match. "-1" should be okay.
kvm_set_rflags(&
On 3/1/21 7:59 PM, Sean Christopherson wrote:
On Mon, Mar 01, 2021, Cathy Avery wrote:
kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
svm_set_efer(&svm->vcpu, vmcb12->save.efer);
svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
svm_set_cr4(&svm->vcp
Sean Christopherson writes:
> +Vitaly
>
> On Thu, Feb 25, 2021, Yang Weijiang wrote:
>> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying
>> to
>> read/write them and after they're changed. If CET guest entry-load bit is not
>&g
On 02/03/21 10:05, Yang Weijiang wrote:
I got some description from MSFT as below, do you mean that:
GuestSsp uses clean field GUEST_BASIC (bit 10)
GuestSCet/GuestInterruptSspTableAddr uses GUEST_GRP1 (bit 11)
HostSCet/HostSsp/HostInterruptSspTableAddr uses HOST_GRP1 (bit 14)
If it is, should t
. If CET guest entry-load bit is
> > not
> > set by L1 guest, migrate them to L2 manually.
> >
> > Suggested-by: Sean Christopherson
> > Signed-off-by: Yang Weijiang
> >
> > change in v2:
> > - Per Sean's review feedback, change CET guest state
On Mon, Mar 01, 2021, Cathy Avery wrote:
> kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
> svm_set_efer(&svm->vcpu, vmcb12->save.efer);
> svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
> svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
Why not utilize VMCB_CR?
Use the vmcb12 control clean field to determine which vmcb12.save
registers were marked dirty in order to minimize register copies
when switching from L1 to L2. Those L12 registers marked as dirty need
to be copied to L2's vmcb as they will be used to update the vmcb
state cache for the L2
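A condensed sketch of that approach (clean-bit and helper names as they appear in KVM's SVM code and in the hunks quoted above; which save registers can legitimately be keyed off which clean bit is precisely what the thread is debating):

/*
 * Sketch: skip reloading L2 control registers from vmcb12 when L1 left
 * them clean and we are entering the same vmcb12 as last time.
 */
static void nested_load_cregs_sketch(struct vcpu_svm *svm, struct vmcb *vmcb12)
{
	bool new_vmcb12 = svm->nested.vmcb12_gpa != svm->nested.last_vmcb12_gpa;

	if (new_vmcb12 || !(vmcb12->control.clean & BIT(VMCB_CR))) {
		svm_set_efer(&svm->vcpu, vmcb12->save.efer);
		svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
		svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
	}
}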
+Vitaly
On Thu, Feb 25, 2021, Yang Weijiang wrote:
> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
> read/write them and after they're changed. If CET guest entry-load bit is not
> set by L1 guest, migrate them to L2 manually.
>
>
These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
read/write them and after they're changed. If CET guest entry-load bit is not
set by L1 guest, migrate them to L2 manually.
Suggested-by: Sean Christopherson
Signed-off-by: Yang Weijiang
change in v2:
- Per S
On Thu, Feb 11, 2021 at 09:18:03AM -0800, Sean Christopherson wrote:
> On Tue, Feb 09, 2021, Yang Weijiang wrote:
> > When L2 guest status has been changed by L1 QEMU/KVM, sync the change back
> > to L2 guest before the latter's next vm-entry. On the other hand, if it's
&g
Unconditionally disable PML in vmcs02; KVM emulates PML purely in the
MMU, e.g. vmx_flush_pml_buffer() doesn't even try to copy the L2 GPAs
from vmcs02's buffer to vmcs12. At best, enabling PML is a nop. At
worst, it will cause vmx_flush_pml_buffer() to record bogus GFNs in the
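In code terms this boils down to masking the PML bit out of whatever secondary execution controls KVM programs into vmcs02; prepare_vmcs02_secondary_ctls() below is a hypothetical wrapper, only the masking line reflects the described change:

/* Sketch: never run L2 with hardware PML; the MMU handles L2 dirty logging. */
static u32 prepare_vmcs02_secondary_ctls(u32 exec_control)
{
	exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
	return exec_control;
}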
On Tue, Feb 09, 2021, Yang Weijiang wrote:
> When L2 guest status has been changed by L1 QEMU/KVM, sync the change back
> to L2 guest before the latter's next vm-entry. On the other hand, if it's
> changed due to L2 guest, sync it back so as to let L1 guest see the change.
>
umented in
https://git-scm.com/docs/git-format-patch]
url:
https://github.com/0day-ci/linux/commits/Yang-Weijiang/KVM-nVMX-Sync-L2-guest-CET-states-between-L1-L2/20210209-162909
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: x86_64-rhel (attached as .config)
compi
When L2 guest status has been changed by L1 QEMU/KVM, sync the change back
to L2 guest before the latter's next vm-entry. On the other hand, if it's
changed due to L2 guest, sync it back so as to let L1 guest see the change.
Signed-off-by: Yang Weijiang
---
arch/x86/kvm/vmx/nes
From: Kan Liang
The TMA method level 2 metrics are supported starting with the Intel Sapphire
Rapids server, which exposes four L2 Topdown metrics events to user
space. There are eight L2 events in total. The other four L2 Topdown
metrics events are calculated from the corresponding L1 and the exposed
L2
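For reference, the standard TMA relations that let the remaining four level-2 metrics be derived from the level-1 values plus the four exposed level-2 events (stated from the TMA methodology in general, not taken from this patch):

Light_Operations = Retiring        - Heavy_Operations
Machine_Clears   = Bad_Speculation - Branch_Mispredicts
Fetch_Bandwidth  = Frontend_Bound  - Fetch_Latency
Core_Bound       = Backend_Bound   - Memory_Bound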
Implement the support for SAW v4.1, used in at least MSM8998,
SDM630, SDM660 and APQ variants and, while at it, also add the
configuration for the SDM630/660 Silver and Gold cluster L2
Adaptive Voltage Scaler: this is also one of the prerequisites
to allow the OSM controller to perform DCVS
G_CFG] = 0x08,
[SPM_REG_SPM_CTL] = 0x30,
@@ -149,6 +161,10 @@ static const struct of_device_id spm_match_table[] = {
.data = &spm_reg_660_gold_l2 },
{ .compatible = "qcom,sdm660-silver-saw2-v4.1-l2",
.data = &spm_reg_660_silver_l2 }
From: Julian Wiedmann
[ Upstream commit f9c4845385c8f6631ebd5dddfb019ea7a285fba4 ]
ip_finish_output_gso() may call .ndo_features_check() even before the
skb has an L2 header. This conflicts with qeth_get_ip_version()'s attempt
to inspect the L2 header via vlan_eth_hdr().
Swit
On Thu, 2021-01-07 at 04:38 +0200, Maxim Levitsky wrote:
> On Wed, 2021-01-06 at 10:17 -0800, Sean Christopherson wrote:
> > On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> > > If migration happens while L2 entry with an injected event to L2 is
> > > pending,
> > &g
On Wed, 2021-01-06 at 10:17 -0800, Sean Christopherson wrote:
> On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> > If migration happens while L2 entry with an injected event to L2 is pending,
> > we weren't including the event in the migration state and it would be
> &g
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> If migration happens while L2 entry with an injected event to L2 is pending,
> we weren't including the event in the migration state and it would be
> lost leading to L2 hang.
But the injected event should still be
s,
> Maxim Levitsky
>
> Maxim Levitsky (2):
> KVM: VMX: create vmx_process_injected_event
> KVM: nVMX: fix for disappearing L1->L2 event injection on L1 migration
>
> arch/x86/kvm/vmx/nested.c | 12
> arch/x86/kvm/vmx/vmx.c| 60 ---
> arch/x86/kvm/vmx/vmx.h| 4 +++
> 3 files changed, 47 insertions(+), 29 deletions(-)
>
> --
> 2.26.2
>
>
If migration happens while L2 entry with an injected event to L2 is pending,
we weren't including the event in the migration state and it would be
lost, leading to an L2 hang.
Fix this by queueing the injected event in a similar manner to how we queue
interrupted injections.
This can be reproduc
cted_event
KVM: nVMX: fix for disappearing L1->L2 event injection on L1 migration
arch/x86/kvm/vmx/nested.c | 12
arch/x86/kvm/vmx/vmx.c| 60 ---
arch/x86/kvm/vmx/vmx.h| 4 +++
3 files changed, 47 insertions(+), 29 deletions(-)
--
2.26.2
On Thu, 10 Dec 2020 16:08:54 +0530, Gautham R. Shenoy wrote:
> This is the v2 of the patchset to extend parsing of "ibm,thread-groups"
> property
> to discover the Shared-L2 cache information.
>
> The previous versions can be found here :
>
> v2 :
> ht
* Gautham R. Shenoy [2020-12-10 16:08:59]:
> From: "Gautham R. Shenoy"
>
> On POWER platforms where only some groups of threads within a core
> share the L2-cache (indicated by the ibm,thread-groups device-tree
> property), we currently print the incorrect shared_cpu_m
* Gautham R. Shenoy [2020-12-10 16:08:58]:
> From: "Gautham R. Shenoy"
>
> On POWER systems, groups of threads within a core sharing the L2-cache
> can be indicated by the "ibm,thread-groups" property array with the
> identifier "2".
>
> T
On Thu, 10 Dec 2020 15:58:02 +0530, Yash Shah wrote:
> The L2 cache controller in SiFive FU740 has 4 ECC interrupt sources as
> compared to 3 in FU540. Update the DT documentation accordingly with
> "compatible" and "interrupt" property changes.
On Thu, 10 Dec 2020 02:28:02 PST (-0800), yash.s...@sifive.com wrote:
The L2 cache controller in SiFive FU740 has 4 ECC interrupt sources as
compared to 3 in FU540. Update the DT documentation accordingly with
"compatible" and "interrupt" property changes.
This generally l
From: "Gautham R. Shenoy"
Hi,
This is the v2 of the patchset to extend parsing of "ibm,thread-groups" property
to discover the Shared-L2 cache information.
The previous versions can be found here :
v2 :
https://lore.kernel.org/linuxppc-dev/1607533700-5
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the popul
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t
SiFive FU740 has 4 ECC interrupt sources as compared to 3 in FU540.
Update the L2 cache controller driver to support this additional
interrupt in case of FU740-C000 chip.
Signed-off-by: Yash Shah
---
drivers/soc/sifive/sifive_l2_cache.c | 27 ---
1 file changed, 24
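A rough sketch of the shape of such a driver change, assuming the interrupt count is keyed off the compatible string (the compatible value and the helper below are assumptions based on the binding update, not the actual patch):

#include <linux/interrupt.h>
#include <linux/of.h>
#include <linux/of_irq.h>

#define SIFIVE_L2_MAX_ECCINTR	4

/* Sketch: FU740-C000 has one more ECC interrupt source than FU540. */
static int sifive_l2_request_irqs_sketch(struct device_node *np,
					 irq_handler_t handler)
{
	int nr_irqs, i, irq, rc;

	nr_irqs = of_device_is_compatible(np, "sifive,fu740-c000-ccache") ?
		  SIFIVE_L2_MAX_ECCINTR : SIFIVE_L2_MAX_ECCINTR - 1;

	for (i = 0; i < nr_irqs; i++) {
		irq = irq_of_parse_and_map(np, i);
		rc = request_irq(irq, handler, 0, "l2_ecc", NULL);
		if (rc)
			return rc;
	}
	return 0;
}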
The L2 cache controller in SiFive FU740 has 4 ECC interrupt sources as
compared to 3 in FU540. Update the DT documentation accordingly with
"compatible" and "interrupt" property changes.
Signed-off-by: Yash Shah
---
.../devicetree/bindings/riscv/sifiv
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the popul
From: "Gautham R. Shenoy"
Hi,
This is the v2 of the patchset to extend parsing of "ibm,thread-groups" property
to discover the Shared-L2 cache information.
The v1 can be found here :
https://lore.kernel.org/linuxppc-dev/1607057327-29822-1-git-send-email-...@
* Gautham R Shenoy [2020-12-08 23:12:37]:
>
> > For L2 we have thread_group_l2_cache_map to store the tasks from the thread
> > group. but cpu_l2_cache_map for keeping track of tasks.
>
> >
> > I think we should do some renaming to keep the names consistent.
>
On Wed, Dec 09, 2020 at 02:09:21PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-12-08 23:26:47]:
>
> > > The drawback of this is even if cpus 0,2,4,6 are released L1 cache will
> > > not
> > > be released. Is this as expected?
> >
> > cacheinfo populates the cache->shared_cpu_map
* Gautham R Shenoy [2020-12-08 23:26:47]:
> > The drawback of this is even if cpus 0,2,4,6 are released L1 cache will not
> > be released. Is this as expected?
>
> cacheinfo populates the cache->shared_cpu_map on the basis of which
> CPUs share the common device-tree node for a particular cache.
> Subject: Re: [PATCH v2 1/2] RISC-V: Update l2 cache DT documentation to
> add support for SiFive FU740
On Mon, Nov 30, 2020 at 11:13:03AM +0530, Yash Shah wrote:
> The L2 cache controller in SiFive FU740 has 4 ECC interrupt sources as
> compared to 3 in FU540. Update the DT documentation accordingly with
> "compatible" and "interrupt" property changes.
'dt-bin
_shares_l2;
> > /*
> > * On big-core systems, each core has two groups of CPUs each of which
> > * has its own L1-cache. The thread-siblings which share l1-cache with
> > * @cpu can be obtained via cpu_smallcore_mask().
> > + *
> > + * On some big-core system
Hello Srikar,
On Mon, Dec 07, 2020 at 06:10:39PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:46]:
>
> > From: "Gautham R. Shenoy"
> >
> > On POWER systems, groups of threads within a core sharing the L2-cache
> > can be indic
link to L2 state. When the link doesn't
go to L2 state, Tegra194 requires the LTSSM to be disabled to allow PHY
to start the next link up process cleanly during suspend/resume sequence.
Failing to disable LTSSM results in the PCIe link not coming up in the
next resume cycle.
Is this a Tegra19
[+cc Jingoo, Gustavo]
On Thu, Dec 03, 2020 at 07:04:51PM +0530, Vidya Sagar wrote:
> PCIe cards like Marvell SATA controller and some of the Samsung NVMe
> drives don't support taking the link to L2 state. When the link doesn't
> go to L2 state, Tegra194 requires the LTSSM to b
own L1-cache. The thread-siblings which share l1-cache with
> * @cpu can be obtained via cpu_smallcore_mask().
> + *
> + * On some big-core systems, the L2 cache is shared only between some
> + * groups of siblings. This is already parsed and encoded in
> + * cpu
* Gautham R. Shenoy [2020-12-04 10:18:46]:
> From: "Gautham R. Shenoy"
>
> On POWER systems, groups of threads within a core sharing the L2-cache
> can be indicated by the "ibm,thread-groups" property array with the
> identifier "2".
>
> T
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the popul
s. The "ibm,ppc-interrupt-server#s" of
the first group is {8,10,12,14} and the
"ibm,ppc-interrupt-server#s" of the second group is
{9,11,13,15}. Property "2" indicates that the threads in each group
share the L2-cache.
The existing code assumes that the "
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t