Re: [PATCH v6 00/15] integrity: Introduce the Integrity Digest Cache

2024-12-04 Thread Eric Snowberg


> On Dec 4, 2024, at 3:44 AM, Roberto Sassu wrote:
> 
> On Tue, 2024-12-03 at 20:06 +0000, Eric Snowberg wrote:
>> 
>>> On Nov 26, 2024, at 3:41 AM, Roberto Sassu wrote:
>>> 
>>> On Tue, 2024-11-26 at 00:13 +0000, Eric Snowberg wrote:
 
> On Nov 19, 2024, at 3:49 AM, Roberto Sassu wrote:
> 
> From: Roberto Sassu 
> 
> The Integrity Digest Cache can also help IMA with appraisal. IMA can simply
> look up the calculated digest of an accessed file in the list of digests
> extracted from package headers, after verifying the header signature. It is
> sufficient to verify only one signature for all files in the package, as
> opposed to verifying a signature for each file.
 
 Is there a way to maintain integrity over time?  Today if a CVE is
 discovered in a signed program, the program hash can be added to the
 blacklist keyring.  Later if IMA appraisal is used, the signature
 validation will fail just for that program.  With the Integrity Digest
 Cache, is there a way to do this?
>>> 
>>> As far as I can see, the ima_check_blacklist() call is before
>>> ima_appraise_measurement(). If it fails, appraisal with the Integrity
>>> Digest Cache will not be done.
>> 
>> 
>> It is good the program hash would be checked beforehand and fail if it is 
>> contained on the list. 
>> 
>> The .ima keyring may contain many keys.  If one of the keys was later 
>> revoked and added to the .blacklist, wouldn't this be missed?  It would 
>> be caught during signature validation when the file is later appraised, but 
>> now this step isn't taking place.  Correct?
> 
> For files included in the digest lists, yes, there won't be detection
> of later revocation of a key. However, it will still work at package
> level/digest list level, since they are still appraised with a
> signature.
> 
> We can add a mechanism (if it does not already exist) to invalidate the
> integrity status based on key revocation, which can be propagated to
> files verified with the affected digest lists.
> 
>> With IMA appraisal, it is easy to maintain authenticity but challenging to 
>> maintain integrity over time. In user-space there are constantly new CVEs.  
>> To maintain integrity over time, either keys need to be rotated in the .ima 
>> keyring or program hashes need to be frequently added to the .blacklist.   
>> If neither is done, for an end-user on a distro, IMA-appraisal basically 
>> guarantees authenticity.
>> 
>> While I understand the intent of the series is to increase performance,
>> have you considered using this to give the end-user the ability to maintain
>> integrity of their system?  What I mean is, instead of trying to import
>> anything from an RPM, just have the end-user provide this information in
>> some format to the Digest Cache.  User-space tools could be built to
>> collect and format
> 
> This is already possible: digest-cache-tools
> (https://github.com/linux-integrity/digest-cache-tools) already allows
> creating a digest list with the files a user wants.
> 
> But in this case, the user is vouching for having taken the correct
> measurement of the file at the time it was added to the digest list. This
> would instead be automatically guaranteed by RPMs or other packages
> shipped with Linux distributions.
> 
> To mitigate the concerns of CVEs, we can probably implement a rollback
> prevention mechanism, which would not allow loading a previous version
> of a digest list.

IMHO, pursuing this with the end-user in control of what is contained
within the Digest Cache, rather than the distro, would provide more value.
Allowing the end-user to easily update their Digest Cache without having to
do any type of revocation for old and vulnerable applications with CVEs
would be very beneficial.

Is there a belief the Digest Cache would be used without signed kernel 
modules?  Is the performance gain worth changing how kernel modules 
get loaded at boot?  Couldn't this part just be dropped for easier acceptance?  
Integrity is already maintained with the current model of appended signatures. 



Re: [PATCH 5/5] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory

2024-12-04 Thread Yicong Yang
On 2024/12/3 17:38, Marc Zyngier wrote:
> On Mon, 02 Dec 2024 13:55:04 +0000,
> Yicong Yang wrote:
>>
>> From: Yicong Yang 
>>
>> FEAT_LS64* instructions only support access to Device/Uncacheable
>> memory, otherwise a data abort for unsupported Exclusive or atomic
> 
> Not quite. FEAT_LS64WB explicitly supports Write-Back mappings.

Thanks for the information. I didn't know this since it's not listed in
DDI0487K.a; I have seen this feature in [1]. I will refine the comments and
mention that this behaviour applies only when FEAT_LS64WB is not implemented.

[1] https://developer.arm.com/documentation/109697/2024_09/Feature-descriptions/The-Armv9-6-architecture-extension

> 
>> access (0x35) is generated per spec. It is implementation defined to
>> which exception level the fault is routed, and it is possible for it to
>> be routed to EL2 on a VHE VM. Per DDI0487K.a Section
>> C3.2.12.2, Single-copy atomic 64-byte load/store:
>>
>>   The check is performed against the resulting memory type after all
>>   enabled stages of translation. In this case the fault is reported
>>   at the final enabled stage of translation.
>>
>> If it is implemented such that the DABT is generated at the final
>> enabled stage of translation (stage-2), inject a DABT into the guest to
>> handle it.
>>
>> Signed-off-by: Yicong Yang 
>> ---
>>  arch/arm64/kvm/mmu.c | 14 ++
>>  1 file changed, 14 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index c9d46ad57e52..b7e6f0a27537 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>>  return 1;
>>  }
>>  
>> +	/*
>> +	 * If an instruction of FEAT_{LS64, LS64_V, LS64_ACCDATA} operates on
>> +	 * an unsupported memory region, a DABT for unsupported Exclusive or
>> +	 * atomic access is generated. It is implementation defined whether
>> +	 * the exception is taken as a stage-1 DABT or at the final enabled
>> +	 * stage of translation (stage-2 in the case that brings us here).
>> +	 * Inject a DABT into the guest to handle it if it is implemented as
>> +	 * a stage-2 DABT.
>> +	 */
>> +	if (esr_fsc_is_excl_atomic_fault(esr)) {
>> +		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
>> +		return 1;
>> +	}
> 
> This doesn't seem quite right. This is injecting an *External* Data
> Abort, which is not what the spec says happens, as you are emulating
> the *first* acceptable behaviour:
> 
>   "The check is performed at each enabled stage of translation, and
>the fault is reported for the first stage of translation that
>provides an inappropriate memory type. In this case, the value of
>the HCR_EL2.DC bit does not cause accesses generated by the
>instructions to generate a stage 1 Data abort,"
> 
> So while the exception is reported at a different EL, the fault should
> still be an "unsupported Exclusive or atomic access". 

Yes, it is trying to emulate the first behaviour so that the guest OS can
handle it the way the host OS does: send a SIGBUS to the EL0 task if it is
operating on an unsupported memory type. Currently, if such a case happens,
the VM is killed because this handling is missing. Keeping the injected
FSC as "unsupported Exclusive or atomic access" would indeed be more
proper; I will make that change in the next version.

> But that's also
> assuming that S2 has a device mapping, and it is EL1 that did
> something wrong. Surely you should check the IPA against its memory
> type?

In my case it happens when both the S1 and S2 mappings are Normal memory,
for example when running the hwcap test (PATCH 3/5) in the VM. But yes, the
memory type should be checked here.

> 
> Further questions: what happens when a L2 guest triggers such fault?
> I don't think you can arbitrarily route it back to L2 without
> looking at why it faulted.
> 

I will look into how this affects the nested VM case. But as you point out,
it is too early to inject the abort here; at the very least we should check
the memory type. Will address it.

Thanks.







Re: [PATCH 2/5] arm64: Add support for FEAT_{LS64, LS64_V, LS64_ACCDATA}

2024-12-04 Thread Yicong Yang
On 2024/12/3 17:38, Marc Zyngier wrote:
> On Mon, 02 Dec 2024 13:55:01 +0000,
> Yicong Yang wrote:
>>
>> From: Yicong Yang 
>>
>> Armv8.7 introduces single-copy atomic 64-byte loads and stores
>> instructions and its variants named under FEAT_{LS64, LS64_V,
>> LS64_ACCDATA}. These features are identified by ID_AA64ISAR1_EL1.LS64
>> and the use of such instructions in userspace (EL0) can be trapped.
>> In order to support the use of corresponding instructions in userspace:
>> - Make ID_AA64ISAR1_EL1.LS64 visible to userspace
>> - Add identifying and enabling in the cpufeature list
>> - Expose support for these features to userspace through HWCAP
>>   and cpuinfo
>>
>> Signed-off-by: Yicong Yang 
>> ---
>>  Documentation/arch/arm64/booting.rst| 28 ++
>>  Documentation/arch/arm64/elf_hwcaps.rst |  9 
>>  arch/arm64/include/asm/hwcap.h  |  3 ++
>>  arch/arm64/include/uapi/asm/hwcap.h |  3 ++
>>  arch/arm64/kernel/cpufeature.c  | 70 -
>>  arch/arm64/kernel/cpuinfo.c |  3 ++
>>  arch/arm64/tools/cpucaps|  3 ++
>>  7 files changed, 118 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
>> index 3278fb4bf219..c35cfe9da501 100644
>> --- a/Documentation/arch/arm64/booting.rst
>> +++ b/Documentation/arch/arm64/booting.rst
>> @@ -449,6 +449,34 @@ Before jumping into the kernel, the following conditions must be met:
>>  
>>  - HFGWTR_EL2.nGCS_EL0 (bit 52) must be initialised to 0b1.
>>  
>> +  For CPUs with support for 64-byte loads and stores without status (FEAT_LS64):
>> +
>> +  - If the kernel is entered at EL1 and EL2 is present:
>> +
>> +- HCRX_EL2.EnALS (bit 1) must be initialised to 0b1.
>> +
>> +  For CPUs with support for 64-byte loads and stores with status (FEAT_LS64_V):
>> +
>> +  - If the kernel is entered at EL1 and EL2 is present:
>> +
>> +- HCRX_EL2.EnASR (bit 2) must be initialised to 0b1.
>> +
>> +  For CPUs with support for 64-byte EL0 stores with status (FEAT_LS64_ACCDATA):
>> +
>> +  - If EL3 is present:
>> +
>> +- SCR_EL3.EnAS0 (bit 36) must be initialised to 0b1.
>> +
>> +- SCR_EL3.ADEn (bit 37) must be initialised to 0b1.
>> +
>> +  - If the kernel is entered at EL1 and EL2 is present:
>> +
>> +- HCRX_EL2.EnAS0 (bit 0) must be initialised to 0b1.
>> +
>> +- HFGRTR_EL2.nACCDATA_EL1 (bit 50) must be initialised to 0b1.
>> +
>> +- HFGWTR_EL2.nACCDATA_EL1 (bit 50) must be initialised to 0b1.
>> +
>>  The requirements described above for CPU mode, caches, MMUs, architected
>>  timers, coherency and system registers apply to all CPUs.  All CPUs must
>>  enter the kernel in the same exception level.  Where the values documented
>> diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst
>> index 2ff922a406ad..6cb2594f0803 100644
>> --- a/Documentation/arch/arm64/elf_hwcaps.rst
>> +++ b/Documentation/arch/arm64/elf_hwcaps.rst
>> @@ -372,6 +372,15 @@ HWCAP2_SME_SF8DP4
>>  HWCAP2_POE
>>  Functionality implied by ID_AA64MMFR3_EL1.S1POE == 0b0001.
>>  
>> +HWCAP3_LS64
>> +Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0001.
>> +
>> +HWCAP3_LS64_V
>> +Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0010.
>> +
>> +HWCAP3_LS64_ACCDATA
>> +Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0011.
>> +
> 
> I don't mind the two others, but I seriously question exposing ST64BV0
> to userspace. How is ACCDATA_EL1 populated? How is it context-switched?
> 
> As it stands, this either does the wrong thing by always having the
> low 32bit set to an UNKNOWN value, or actively leaks kernel data.
> TBH, I don't see it being usable in practice (the more I read this
> part of the architecture, the more broken it looks).
> 

You're right, exposing LS64_ACCDATA alone to userspace won't make it
usable, since ACCDATA_EL1 cannot be accessed from EL0. I will drop it at
this stage.

Thanks.




Re: [PATCH v6 00/15] integrity: Introduce the Integrity Digest Cache

2024-12-04 Thread Roberto Sassu
On Tue, 2024-12-03 at 20:06 +0000, Eric Snowberg wrote:
> 
> > On Nov 26, 2024, at 3:41 AM, Roberto Sassu wrote:
> > 
> > On Tue, 2024-11-26 at 00:13 +0000, Eric Snowberg wrote:
> > > 
> > > > On Nov 19, 2024, at 3:49 AM, Roberto Sassu wrote:
> > > > 
> > > > From: Roberto Sassu 
> > > > 
> > > > The Integrity Digest Cache can also help IMA with appraisal. IMA can simply
> > > > look up the calculated digest of an accessed file in the list of digests
> > > > extracted from package headers, after verifying the header signature. It is
> > > > sufficient to verify only one signature for all files in the package, as
> > > > opposed to verifying a signature for each file.
> > > 
> > > Is there a way to maintain integrity over time?  Today if a CVE is
> > > discovered in a signed program, the program hash can be added to the
> > > blacklist keyring.  Later if IMA appraisal is used, the signature
> > > validation will fail just for that program.  With the Integrity Digest
> > > Cache, is there a way to do this?
> > 
> > As far as I can see, the ima_check_blacklist() call is before
> > ima_appraise_measurement(). If it fails, appraisal with the Integrity
> > Digest Cache will not be done.
> 
> 
> It is good the program hash would be checked beforehand and fail if it is 
> contained on the list. 
> 
> The .ima keyring may contain many keys.  If one of the keys was later 
> revoked and added to the .blacklist, wouldn't this be missed?  It would 
> be caught during signature validation when the file is later appraised, but 
> now this step isn't taking place.  Correct?

For files included in the digest lists, yes, there won't be detection
of later revocation of a key. However, it will still work at package
level/digest list level, since they are still appraised with a
signature.

We can add a mechanism (if it does not already exist) to invalidate the
integrity status based on key revocation, which can be propagated to
files verified with the affected digest lists.

> With IMA appraisal, it is easy to maintain authenticity but challenging to 
> maintain integrity over time. In user-space there are constantly new CVEs.  
> To maintain integrity over time, either keys need to be rotated in the .ima 
> keyring or program hashes need to be frequently added to the .blacklist.   
> If neither is done, for an end-user on a distro, IMA-appraisal basically 
> guarantees authenticity.
> 
> While I understand the intent of the series is to increase performance,
> have you considered using this to give the end-user the ability to maintain
> integrity of their system?  What I mean is, instead of trying to import
> anything from an RPM, just have the end-user provide this information in
> some format to the Digest Cache.  User-space tools could be built to
> collect and format

This is already possible: digest-cache-tools
(https://github.com/linux-integrity/digest-cache-tools) already allows
creating a digest list with the files a user wants.

But in this case, the user is vouching for having taken the correct
measurement of the file at the time it was added to the digest list. This
would instead be automatically guaranteed by RPMs or other packages
shipped with Linux distributions.

To mitigate the concerns of CVEs, we can probably implement a rollback
prevention mechanism, which would not allow loading a previous version
of a digest list.

> the data needed by the Digest Cache.  This data may allow multiple versions
> of the same program.  The data would then be signed by one of the system
> kernel keys (either something in the secondary or machine keyring), to
> maintain a root of trust.  This would give the end-user the ability to have
> integrity however they see fit.  This leaves the distro to provide signed
> programs and the end-user the ability to decide what level of software they
> want to run on their system.  If something isn't in the Digest Cache, it
> gets bumped down to the traditional IMA-appraisal.  I think it would
> simplify the problem you are trying to solve,

All you say is already possible. Users can generate and sign their
digest lists, and enroll their key in a kernel keyring.

> especially around the missing kernel PGP code required for all this to work,
> since it wouldn't be necessary.  With this approach, besides the performance
> gain, the end-user would gain the ability to maintain integrity that is
> enforced by the kernel.

From what I understood, Linus would not be against the




[PATCH v4 1/5] docs: maintainer-pgp-guide.rst: add a reference for kernel.org sign

2024-12-04 Thread Mauro Carvalho Chehab
The media profile documentation will point to kernel.org sign.
Add a link to it.

Signed-off-by: Mauro Carvalho Chehab 
---

Patch resent, as linux-media was not on its Cc list.


 Documentation/process/maintainer-pgp-guide.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/process/maintainer-pgp-guide.rst b/Documentation/process/maintainer-pgp-guide.rst
index f5277993b195..795ef8d89271 100644
--- a/Documentation/process/maintainer-pgp-guide.rst
+++ b/Documentation/process/maintainer-pgp-guide.rst
@@ -903,6 +903,8 @@ the new default in GnuPG v2). To set it, add (or modify) the
 
 trust-model tofu+pgp
 
+.. _kernel_org_trust_repository:
+
 Using the kernel.org web of trust repository
 
 
-- 
2.47.1




Re: [PATCH net-next 2/4] netconsole: Add option to auto-populate CPU number in userdata

2024-12-04 Thread Breno Leitao
On Tue, Nov 19, 2024 at 09:07:45AM -0800, Breno Leitao wrote:
> > >  #endif
> > >   boolenabled;
> > >   boolextended;
> > 
> > > + /* Check if CPU NR should be populated, and append it to the user
> > > +  * dictionary.
> > > +  */
> > > + if (child_count < MAX_USERDATA_ITEMS && nt->userdata_auto & AUTO_CPU_NR)
> > > + scnprintf(&nt->userdata_complete[complete_idx],
> > > +   MAX_USERDATA_ENTRY_LENGTH, " cpu=%u\n",
> > > +   raw_smp_processor_id());
> > 
> > I guess it may be tricky for backward compat, but shouldn't we return
> > an error rather than silently skip?
> 
> yes, this should be easy to do, in fact. Probably return -E2BIG to
> userspace when trying to update the entry. I thought about something like
> the following patch, and piggy-backing on it.

Back to this topic, in fact, this is not needed at all. 

The configfs make-item helper (userdatum_make_item()) checks for
exceeding entries, and fails if an additional entry would be created.


static struct config_item *userdatum_make_item(struct config_group *group,
					       const char *name)
{
	...
	child_count = list_count_nodes(&nt->userdata_group.cg_children);
	if (child_count >= MAX_USERDATA_ITEMS)
		return ERR_PTR(-ENOSPC);
	...
}


I've sent an additional test for this mechanism, and made the check in
update_userdata() a warning instead of just silently dropping the entry.

https://lore.kernel.org/all/20241204-netcons_overflow_test-v1-0-a85a8d0ac...@debian.org/