Re: [RESEND PATCH] tee: add kernel internal client interface

2018-07-13 Thread Sumit Garg
On Fri, 13 Jul 2018 at 14:54, Jens Wiklander  wrote:
>
> [+Sumit]
>
> On Mon, Jul 09, 2018 at 08:15:49AM +0200, Jens Wiklander wrote:
> > Adds a kernel internal TEE client interface to be used by other drivers.
> >
> > Signed-off-by: Jens Wiklander 
> > ---

Thanks Jens for this patch. I have reviewed and tested this patch on
Developerbox [1]. Following is brief description of my test-case:

Developerbox doesn't have support for hardware based TRNG. But it does
have 7 on-chip thermal sensors accessible from Secure world only. So I
wrote OP-TEE static TA to collect Entropy using thermal noise from
these sensors.
After using the interface provided by this patch, I am able to write
"hw_random" char driver for Developerbox to get Entropy from OP-TEE
static TA which could be further used by user-space daemon (rngd).

Reviewed-by: Sumit Garg 
Tested-by: Sumit Garg 

[1] https://www.96boards.org/product/developerbox/

-Sumit

> >  drivers/tee/tee_core.c  | 113 +---
> >  include/linux/tee_drv.h |  73 ++
> >  2 files changed, 179 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
> > index dd46b758852a..7b2bb4c50058 100644
> > --- a/drivers/tee/tee_core.c
> > +++ b/drivers/tee/tee_core.c
> > @@ -38,15 +38,13 @@ static DEFINE_SPINLOCK(driver_lock);
> >  static struct class *tee_class;
> >  static dev_t tee_devt;
> >
> > -static int tee_open(struct inode *inode, struct file *filp)
> > +static struct tee_context *teedev_open(struct tee_device *teedev)
> >  {
> >   int rc;
> > - struct tee_device *teedev;
> >   struct tee_context *ctx;
> >
> > - teedev = container_of(inode->i_cdev, struct tee_device, cdev);
> >   if (!tee_device_get(teedev))
> > - return -EINVAL;
> > + return ERR_PTR(-EINVAL);
> >
> >   ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> >   if (!ctx) {
> > @@ -57,16 +55,16 @@ static int tee_open(struct inode *inode, struct file 
> > *filp)
> >   kref_init(&ctx->refcount);
> >   ctx->teedev = teedev;
> >   INIT_LIST_HEAD(&ctx->list_shm);
> > - filp->private_data = ctx;
> >   rc = teedev->desc->ops->open(ctx);
> >   if (rc)
> >   goto err;
> >
> > - return 0;
> > + return ctx;
> >  err:
> >   kfree(ctx);
> >   tee_device_put(teedev);
> > - return rc;
> > + return ERR_PTR(rc);
> > +
> >  }
> >
> >  void teedev_ctx_get(struct tee_context *ctx)
> > @@ -100,6 +98,18 @@ static void teedev_close_context(struct tee_context 
> > *ctx)
> >   teedev_ctx_put(ctx);
> >  }
> >
> > +static int tee_open(struct inode *inode, struct file *filp)
> > +{
> > + struct tee_context *ctx;
> > +
> > + ctx = teedev_open(container_of(inode->i_cdev, struct tee_device, 
> > cdev));
> > + if (IS_ERR(ctx))
> > + return PTR_ERR(ctx);
> > +
> > + filp->private_data = ctx;
> > + return 0;
> > +}
> > +
> >  static int tee_release(struct inode *inode, struct file *filp)
> >  {
> >   teedev_close_context(filp->private_data);
> > @@ -928,6 +938,95 @@ void *tee_get_drvdata(struct tee_device *teedev)
> >  }
> >  EXPORT_SYMBOL_GPL(tee_get_drvdata);
> >
> > +struct match_dev_data {
> > + struct tee_ioctl_version_data *vers;
> > + const void *data;
> > + int (*match)(struct tee_ioctl_version_data *, const void *);
> > +};
> > +
> > +static int match_dev(struct device *dev, const void *data)
> > +{
> > + const struct match_dev_data *match_data = data;
> > + struct tee_device *teedev = container_of(dev, struct tee_device, dev);
> > +
> > + teedev->desc->ops->get_version(teedev, match_data->vers);
> > + return match_data->match(match_data->vers, match_data->data);
> > +}
> > +
> > +struct tee_context *
> > +tee_client_open_context(struct tee_context *start,
> > + int (*match)(struct tee_ioctl_version_data *,
> > +  const void *),
> > + const void *data, struct tee_ioctl_version_data *vers)
> > +{
> > + struct device *dev = NULL;
> > + struct device *put_dev = NULL;
> > + struct tee_context *ctx = NULL;
> > + struct tee_ioctl_version_data v;
> > + struct match_dev_data match_data = { 

Re: [RFC PATCH 0/2] allow optee to be exposed on ACPI systems

2019-01-01 Thread Sumit Garg
On Fri, 28 Dec 2018 at 00:31, Ard Biesheuvel  wrote:
>
> Similar to how OP-TEE is exposed as a pseudo device under /firmware/optee
> on DT systems, permit OP-TEE presence to be exposed via a device object
> in the ACPI namespace. This makes it possible to model the OP-TEE interface
> as a platform device gets instantiated automatically both on DT and ACPI
> systems, and implement the driver as a platform driver that is able to
> use the generic device properties API to access the 'method' attribute
> as well as potential future extensions to the binding that introduce
> new attributes.
>
> What remains to be discussed is how to expose OP-TEE pseudo devices,
> e.g., Sumit's RNG implementation on SynQuacer which we would like to
> bind a Linux driver to.
>
> Cc: Jens Wiklander 
> Cc: Sumit Garg 
> Cc: Graeme Gregory 
> Cc: Jerome Forissier 
>
> Ard Biesheuvel (2):
>   optee: model OP-TEE as a platform device/driver
>   optee: add ACPI support
>
>  drivers/tee/optee/core.c | 94 +---
>  1 file changed, 41 insertions(+), 53 deletions(-)
>

Looks good to me.

Acked-by: Sumit Garg 


> --
> 2.19.2
>


Re: [PATCH v8 2/4] KEYS: trusted: Introduce TEE based Trusted Keys

2021-02-15 Thread Sumit Garg
On Fri, 12 Feb 2021 at 05:04, Jarkko Sakkinen  wrote:
>
> On Mon, Jan 25, 2021 at 02:47:38PM +0530, Sumit Garg wrote:
> > Hi Jarkko,
> >
> > On Fri, 22 Jan 2021 at 23:42, Jarkko Sakkinen  wrote:
> > >
> > > On Thu, Jan 21, 2021 at 05:23:45PM +0100, Jerome Forissier wrote:
> > > >
> > > >
> > > > On 1/21/21 4:24 PM, Jarkko Sakkinen wrote:
> > > > > On Thu, Jan 21, 2021 at 05:07:42PM +0200, Jarkko Sakkinen wrote:
> > > > >> On Thu, Jan 21, 2021 at 09:44:07AM +0100, Jerome Forissier wrote:
> > > > >>>
> > > > >>>
> > > > >>> On 1/21/21 1:02 AM, Jarkko Sakkinen via OP-TEE wrote:
> > > > >>>> On Wed, Jan 20, 2021 at 12:53:28PM +0530, Sumit Garg wrote:
> > > > >>>>> On Wed, 20 Jan 2021 at 07:01, Jarkko Sakkinen  
> > > > >>>>> wrote:
> > > > >>>>>>
> > > > >>>>>> On Tue, Jan 19, 2021 at 12:30:42PM +0200, Jarkko Sakkinen wrote:
> > > > >>>>>>> On Fri, Jan 15, 2021 at 11:32:31AM +0530, Sumit Garg wrote:
> > > > >>>>>>>> On Thu, 14 Jan 2021 at 07:35, Jarkko Sakkinen 
> > > > >>>>>>>>  wrote:
> > > > >>>>>>>>>
> > > > >>>>>>>>> On Wed, Jan 13, 2021 at 04:47:00PM +0530, Sumit Garg wrote:
> > > > >>>>>>>>>> Hi Jarkko,
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> On Mon, 11 Jan 2021 at 22:05, Jarkko Sakkinen 
> > > > >>>>>>>>>>  wrote:
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> On Tue, Nov 03, 2020 at 09:31:44PM +0530, Sumit Garg wrote:
> > > > >>>>>>>>>>>> Add support for TEE based trusted keys where TEE provides 
> > > > >>>>>>>>>>>> the functionality
> > > > >>>>>>>>>>>> to seal and unseal trusted keys using hardware unique key.
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> Refer to Documentation/tee.txt for detailed information 
> > > > >>>>>>>>>>>> about TEE.
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> Signed-off-by: Sumit Garg 
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> I haven't yet got QEMU environment working with aarch64, 
> > > > >>>>>>>>>>> this produces
> > > > >>>>>>>>>>> just a blank screen:
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> ./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu 
> > > > >>>>>>>>>>> cortex-a53 -smp 1 -kernel output/images/Image -initrd 
> > > > >>>>>>>>>>> output/images/rootfs.cpio -serial stdio
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> My BuildRoot fork for TPM and keyring testing is located 
> > > > >>>>>>>>>>> over here:
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/buildroot-tpmdd.git/
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> The "ARM version" is at this point in aarch64 branch. Over 
> > > > >>>>>>>>>>> time I will
> > > > >>>>>>>>>>> define tpmdd-x86_64 and tpmdd-aarch64 boards and everything 
> > > > >>>>>>>>>>> will be then
> > > > >>>>>>>>>>> in the master branch.
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> To create identical images you just need to
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>&g

Re: [PATCH v8 1/4] KEYS: trusted: Add generic trusted keys framework

2021-02-15 Thread Sumit Garg
On Wed, 10 Feb 2021 at 22:30, Jarkko Sakkinen  wrote:
>
> On Tue, Nov 03, 2020 at 09:31:43PM +0530, Sumit Garg wrote:
> > + case Opt_new:
> > + key_len = payload->key_len;
> > + ret = static_call(trusted_key_get_random)(payload->key,
> > +   key_len);
> > + if (ret != key_len) {
> > + pr_info("trusted_key: key_create failed (%d)\n", ret);
> > + goto out;
> > + }
>
> This repeats a regression in existing code, i.e. does not check
> "ret < 0" condition. I noticed this now when I rebased the code
> on top of my fixes.
>
> I.e. it's fixed in my master branch, which caused a merge conflict,
> and I found this.
>

Okay, I will rebase the next version to your master branch.

-Sumit

> /Jarkko


Re: [PATCH v8 1/4] KEYS: trusted: Add generic trusted keys framework

2021-02-15 Thread Sumit Garg
On Tue, 24 Nov 2020 at 09:12, Jarkko Sakkinen  wrote:
>
> On Tue, Nov 03, 2020 at 09:31:43PM +0530, Sumit Garg wrote:
> > Current trusted keys framework is tightly coupled to use TPM device as
> > an underlying implementation which makes it difficult for implementations
> > like Trusted Execution Environment (TEE) etc. to provide trusted keys
> > support in case platform doesn't posses a TPM device.
> >
> > Add a generic trusted keys framework where underlying implementations
> > can be easily plugged in. Create struct trusted_key_ops to achieve this,
> > which contains necessary functions of a backend.
> >
> > Also, define a module parameter in order to select a particular trust
> > source in case a platform support multiple trust sources. In case its
> > not specified then implementation itetrates through trust sources list
> > starting with TPM and assign the first trust source as a backend which
> > has initiazed successfully during iteration.
> >
> > Note that current implementation only supports a single trust source at
> > runtime which is either selectable at compile time or during boot via
> > aforementioned module parameter.
> >
> > Suggested-by: Jarkko Sakkinen 
> > Signed-off-by: Sumit Garg 
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt |  12 +
> >  include/keys/trusted-type.h |  47 
>
>
>
> >  include/keys/trusted_tpm.h  |  17 +-
> >  security/keys/trusted-keys/Makefile |   1 +
> >  security/keys/trusted-keys/trusted_core.c   | 350 
> > 
> >  security/keys/trusted-keys/trusted_tpm1.c   | 336 
> > ---
> >  6 files changed, 468 insertions(+), 295 deletions(-)
> >  create mode 100644 security/keys/trusted-keys/trusted_core.c
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt 
> > b/Documentation/admin-guide/kernel-parameters.txt
> > index 526d65d..df9b9fe 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -5392,6 +5392,18 @@
> >   See Documentation/admin-guide/mm/transhuge.rst
> >   for more details.
> >
> > + trusted.source= [KEYS]
> > + Format: 
> > + This parameter identifies the trust source as a 
> > backend
> > + for trusted keys implementation. Supported trust
> > + sources:
> > + - "tpm"
> > + - "tee"
> > + If not specified then it defaults to iterating through
> > + the trust source list starting with TPM and assigns 
> > the
> > + first trust source as a backend which is initialized
> > + successfully during iteration.
> > +
> >   tsc=Disable clocksource stability checks for TSC.
> >   Format: 
> >   [x86] reliable: mark tsc clocksource as reliable, this
> > diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
> > index a94c03a..a566451 100644
> > --- a/include/keys/trusted-type.h
> > +++ b/include/keys/trusted-type.h
> > @@ -40,6 +40,53 @@ struct trusted_key_options {
> >   uint32_t policyhandle;
> >  };
> >
> > +struct trusted_key_ops {
> > + /*
> > +  * flag to indicate if trusted key implementation supports migration
> > +  * or not.
> > +  */
> > + unsigned char migratable;
> > +
> > + /* Initialize key interface. */
> > + int (*init)(void);
> > +
> > + /* Seal a key. */
> > + int (*seal)(struct trusted_key_payload *p, char *datablob);
> > +
> > + /* Unseal a key. */
> > + int (*unseal)(struct trusted_key_payload *p, char *datablob);
> > +
> > + /* Get a randomized key. */
> > + int (*get_random)(unsigned char *key, size_t key_len);
> > +
> > + /* Exit key interface. */
> > + void (*exit)(void);
> > +};
> > +
> > +struct trusted_key_source {
> > + char *name;
> > + struct trusted_key_ops *ops;
> > +};
> > +
> >  extern struct key_type key_type_trusted;
> >
> > +#define TRUSTED_DEBUG 0
> > +
> > +#if TRUSTED_DEBUG
> > +static inline void dump_payload(struct trusted_key_payload *p)
> > +{
> > + pr_info("trusted_key: key_len %d\n

Re: [PATCH v5 4/4] hwrng: add OP-TEE based rng driver

2019-01-27 Thread Sumit Garg
Hi Herbert,

On Thu, 24 Jan 2019 at 11:25, Sumit Garg  wrote:
>
> On ARM SoC's with TrustZone enabled, peripherals like entropy sources
> might not be accessible to normal world (linux in this case) and rather
> accessible to secure world (OP-TEE in this case) only. So this driver
> aims to provides a generic interface to OP-TEE based random number
> generator service.
>
> This driver registers on TEE bus to interact with OP-TEE based rng
> device/service.
>
> Signed-off-by: Sumit Garg 

Do you have any comments/feedback on this patch before I send next version (v6)?

-Sumit

> ---
>  MAINTAINERS|   5 +
>  drivers/char/hw_random/Kconfig |  15 ++
>  drivers/char/hw_random/Makefile|   1 +
>  drivers/char/hw_random/optee-rng.c | 274 
> +
>  4 files changed, 295 insertions(+)
>  create mode 100644 drivers/char/hw_random/optee-rng.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 51029a4..dcef7e9 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -11262,6 +11262,11 @@ M: Jens Wiklander 
>  S: Maintained
>  F:     drivers/tee/optee/
>
> +OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
> +M: Sumit Garg 
> +S: Maintained
> +F: drivers/char/hw_random/optee-rng.c
> +
>  OPA-VNIC DRIVER
>  M: Dennis Dalessandro 
>  M: Niranjana Vishwanathapura 
> diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
> index dac895d..25a7d8f 100644
> --- a/drivers/char/hw_random/Kconfig
> +++ b/drivers/char/hw_random/Kconfig
> @@ -424,6 +424,21 @@ config HW_RANDOM_EXYNOS
>   will be called exynos-trng.
>
>   If unsure, say Y.
> +
> +config HW_RANDOM_OPTEE
> +   tristate "OP-TEE based Random Number Generator support"
> +   depends on OPTEE
> +   default HW_RANDOM
> +   help
> + This  driver provides support for OP-TEE based Random Number
> + Generator on ARM SoCs where hardware entropy sources are not
> + accessible to normal world (Linux).
> +
> + To compile this driver as a module, choose M here: the module
> + will be called optee-rng.
> +
> + If unsure, say Y.
> +
>  endif # HW_RANDOM
>
>  config UML_RANDOM
> diff --git a/drivers/char/hw_random/Makefile b/drivers/char/hw_random/Makefile
> index e35ec3c..7c9ef4a 100644
> --- a/drivers/char/hw_random/Makefile
> +++ b/drivers/char/hw_random/Makefile
> @@ -38,3 +38,4 @@ obj-$(CONFIG_HW_RANDOM_CAVIUM) += cavium-rng.o 
> cavium-rng-vf.o
>  obj-$(CONFIG_HW_RANDOM_MTK)+= mtk-rng.o
>  obj-$(CONFIG_HW_RANDOM_S390) += s390-trng.o
>  obj-$(CONFIG_HW_RANDOM_KEYSTONE) += ks-sa-rng.o
> +obj-$(CONFIG_HW_RANDOM_OPTEE) += optee-rng.o
> diff --git a/drivers/char/hw_random/optee-rng.c 
> b/drivers/char/hw_random/optee-rng.c
> new file mode 100644
> index 000..4ad0eca
> --- /dev/null
> +++ b/drivers/char/hw_random/optee-rng.c
> @@ -0,0 +1,274 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018-2019 Linaro Ltd.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#define DRIVER_NAME "optee-rng"
> +
> +#define TEE_ERROR_HEALTH_TEST_FAIL 0x0001
> +
> +/*
> + * TA_CMD_GET_ENTROPY - Get Entropy from RNG
> + *
> + * param[0] (inout memref) - Entropy buffer memory reference
> + * param[1] unused
> + * param[2] unused
> + * param[3] unused
> + *
> + * Result:
> + * TEE_SUCCESS - Invoke command success
> + * TEE_ERROR_BAD_PARAMETERS - Incorrect input param
> + * TEE_ERROR_NOT_SUPPORTED - Requested entropy size greater than size of pool
> + * TEE_ERROR_HEALTH_TEST_FAIL - Continuous health testing failed
> + */
> +#define TA_CMD_GET_ENTROPY 0x0
> +
> +/*
> + * TA_CMD_GET_RNG_INFO - Get RNG information
> + *
> + * param[0] (out value) - value.a: RNG data-rate in bytes per second
> + *value.b: Quality/Entropy per 1024 bit of data
> + * param[1] unused
> + * param[2] unused
> + * param[3] unused
> + *
> + * Result:
> + * TEE_SUCCESS - Invoke command success
> + * TEE_ERROR_BAD_PARAMETERS - Incorrect input param
> + */
> +#define TA_CMD_GET_RNG_INFO0x1
> +
> +#define MAX_ENTROPY_REQ_SZ (4 * 1024)
> +
> +static struct tee_context *ctx;
> +static struct tee_shm *entropy_shm_pool;
> +static u32 ta_rng_data_rate;
> +static u32 ta_rng_session_id;
> +
> +static size_t get_optee_rng_data(void *buf, size_t req_size)
> +{
> +   u32 ret = 0;
> +   u8 *rng_data = NULL;
> +   size_t rng_size = 0;
> +   struct tee_i

Re: [PATCH v4] kdb: Simplify kdb commands registration

2021-02-22 Thread Sumit Garg
On Mon, 22 Feb 2021 at 19:17, Daniel Thompson
 wrote:
>
> On Mon, Feb 22, 2021 at 06:33:18PM +0530, Sumit Garg wrote:
> > On Mon, 22 Feb 2021 at 17:35, Daniel Thompson
> >  wrote:
> > >
> > > On Thu, Feb 18, 2021 at 05:39:58PM +0530, Sumit Garg wrote:
> > > > Simplify kdb commands registration via using linked list instead of
> > > > static array for commands storage.
> > > >
> > > > Signed-off-by: Sumit Garg 
> > > > ---
> > > >
> > > > Changes in v4:
> > > > - Fix kdb commands memory allocation issue prior to slab being available
> > > >   with an array of statically allocated commands. Now it works fine with
> > > >   kgdbwait.
> > >
> > > I'm not sure this is the right approach. It's still faking dynamic usage
> > > when none of the callers at this stage of the boot actually are dynamic.
> > >
> >
> > Okay, as an alternative I came across dbg_kmalloc()/dbg_kfree() as well but 
> > ...
>
> Last time I traced these functions I concluded that this heap can be
> removed if the symbol handling code is refactored a little.

Yeah, I also observed symbol handing code being the only user. So, I
will try to rework that code and see if we can get rid of this custom
heap.

> I'd be
> *seriously* reluctant to add any new callers... which I assume from your
> later comments you can live with ;-) .
>

Yes that's fine with me.

-Sumit

>
> Daniel.


Re: [PATCH] kernel: debug: Handle breakpoints in kernel .init.text section

2021-02-23 Thread Sumit Garg
Thanks Doug for your comments.

On Tue, 23 Feb 2021 at 05:28, Doug Anderson  wrote:
>
> Hi,
>
> On Fri, Feb 19, 2021 at 12:03 AM Sumit Garg  wrote:
> >
> > Currently breakpoints in kernel .init.text section are not handled
> > correctly while allowing to remove them even after corresponding pages
> > have been freed.
> >
> > In order to keep track of .init.text section breakpoints, add another
> > breakpoint state as BP_ACTIVE_INIT and don't try to free these
> > breakpoints once the system is in running state.
> >
> > To be clear there is still a very small window between call to
> > free_initmem() and system_state = SYSTEM_RUNNING which can lead to
> > removal of freed .init.text section breakpoints but I think we can live
> > with that.
>
> I know kdb / kgdb tries to keep out of the way of the rest of the
> system and so there's a bias to just try to infer the state of the
> rest of the system, but this feels like a halfway solution when really
> a cleaner solution really wouldn't intrude much on the main kernel.
> It seems like it's at least worth asking if we can just add a call
> like kgdb_drop_init_breakpoints() into main.c.  Then we don't have to
> try to guess the state...
>

Sounds reasonable, will post RFC for this. I think we should call such
function as kgdb_free_init_mem() in similar way as:
- kprobe_free_init_mem()
- ftrace_free_init_mem()

>
> > Suggested-by: Peter Zijlstra 
> > Signed-off-by: Sumit Garg 
> > ---
> >  include/linux/kgdb.h  |  3 ++-
> >  kernel/debug/debug_core.c | 17 +
> >  2 files changed, 15 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
> > index 0d6cf64..57b8885 100644
> > --- a/include/linux/kgdb.h
> > +++ b/include/linux/kgdb.h
> > @@ -71,7 +71,8 @@ enum kgdb_bpstate {
> > BP_UNDEFINED = 0,
> > BP_REMOVED,
> > BP_SET,
> > -   BP_ACTIVE
> > +   BP_ACTIVE_INIT,
> > +   BP_ACTIVE,
> >  };
> >
> >  struct kgdb_bkpt {
> > diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
> > index af6e8b4f..229dd11 100644
> > --- a/kernel/debug/debug_core.c
> > +++ b/kernel/debug/debug_core.c
> > @@ -324,7 +324,11 @@ int dbg_activate_sw_breakpoints(void)
> > }
> >
> > kgdb_flush_swbreak_addr(kgdb_break[i].bpt_addr);
> > -   kgdb_break[i].state = BP_ACTIVE;
> > +   if (system_state >= SYSTEM_RUNNING ||
> > +   !init_section_contains((void *)kgdb_break[i].bpt_addr, 
> > 0))
>
> I haven't searched through all the code, but is there any chance that
> this could trigger incorrectly?  After we free the init memory could
> it be re-allocated to something that would contain code that would
> execute in kernel context and now we'd be unable to set breakpoints in
> that area?
>

"BP_ACTIVE_INIT" state is added specifically to handle this scenario
as to keep track of breakpoints that actually belong to the .init.text
section. And we should be able to again set breakpoints after free
since below change in this patch would mark them as "BP_UNDEFINED":

@@ -378,8 +382,13 @@ int dbg_deactivate_sw_breakpoints(void)
int i;

for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
-   if (kgdb_break[i].state != BP_ACTIVE)
+   if (kgdb_break[i].state < BP_ACTIVE_INIT)
+   continue;
+   if (system_state >= SYSTEM_RUNNING &&
+   kgdb_break[i].state == BP_ACTIVE_INIT) {
+   kgdb_break[i].state = BP_UNDEFINED;
continue;
+   }
error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
if (error) {
pr_info("BP remove failed: %lx\n",

>
> > +   kgdb_break[i].state = BP_ACTIVE;
> > +   else
> > +   kgdb_break[i].state = BP_ACTIVE_INIT;
>
> I don't really see what the "BP_ACTIVE_INIT" state gets you.  Why not
> just leave it as "BP_ACTIVE" and put all the logic fully in
> dbg_deactivate_sw_breakpoints()?

Please see my response above.

>
> ...or, if we can inject a call in main.c we can do a one time delete
> of all "init" breakpoints and get rid of all this logic? Heck, even
> if we can't get called by "main.c", we still only need to do a
> one-time drop of all init breakpoints the first time we drop into the
> debugger after they are freed, right?

Yes and th

[PATCH] kdb: Remove redundant function definitions/prototypes

2021-02-23 Thread Sumit Garg
Cleanup kdb code to get rid of unused function definitions/prototypes.

Signed-off-by: Sumit Garg 
---
 kernel/debug/kdb/kdb_main.c|  2 +-
 kernel/debug/kdb/kdb_private.h |  3 ---
 kernel/debug/kdb/kdb_support.c | 18 --
 3 files changed, 1 insertion(+), 22 deletions(-)

diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index b29f7f375afb..2b9f0bb3542e 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -410,7 +410,7 @@ int kdbgetularg(const char *arg, unsigned long *value)
return 0;
 }
 
-int kdbgetu64arg(const char *arg, u64 *value)
+static int kdbgetu64arg(const char *arg, u64 *value)
 {
char *endp;
u64 val;
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index 12d0abab73ee..99ec64cfe791 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -103,7 +103,6 @@ extern int kdb_getword(unsigned long *, unsigned long, 
size_t);
 extern int kdb_putword(unsigned long, unsigned long, size_t);
 
 extern int kdbgetularg(const char *, unsigned long *);
-extern int kdbgetu64arg(const char *, u64 *);
 extern char *kdbgetenv(const char *);
 extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 long *, char **);
@@ -209,9 +208,7 @@ extern unsigned long kdb_task_state(const struct 
task_struct *p,
unsigned long mask);
 extern void kdb_ps_suppressed(void);
 extern void kdb_ps1(const struct task_struct *p);
-extern void kdb_print_nameval(const char *name, unsigned long val);
 extern void kdb_send_sig(struct task_struct *p, int sig);
-extern void kdb_meminfo_proc_show(void);
 extern char kdb_getchar(void);
 extern char *kdb_getstr(char *, size_t, const char *);
 extern void kdb_gdb_state_pass(char *buf);
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index 6226502ce049..b59aad1f0b55 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -665,24 +665,6 @@ unsigned long kdb_task_state(const struct task_struct *p, 
unsigned long mask)
return (mask & kdb_task_state_string(state)) != 0;
 }
 
-/*
- * kdb_print_nameval - Print a name and its value, converting the
- * value to a symbol lookup if possible.
- * Inputs:
- * namefield name to print
- * val value of field
- */
-void kdb_print_nameval(const char *name, unsigned long val)
-{
-   kdb_symtab_t symtab;
-   kdb_printf("  %-11.11s ", name);
-   if (kdbnearsym(val, &symtab))
-   kdb_symbol_print(val, &symtab,
-KDB_SP_VALUE|KDB_SP_SYMSIZE|KDB_SP_NEWLINE);
-   else
-   kdb_printf("0x%lx\n", val);
-}
-
 /* Last ditch allocator for debugging, so we can still debug even when
  * the GFP_ATOMIC pool has been exhausted.  The algorithms are tuned
  * for space usage, not for speed.  One smallish memory pool, the free
-- 
2.25.1



Re: [PATCH] kernel: debug: Handle breakpoints in kernel .init.text section

2021-02-23 Thread Sumit Garg
On Tue, 23 Feb 2021 at 18:24, Daniel Thompson
 wrote:
>
> On Tue, Feb 23, 2021 at 02:33:50PM +0530, Sumit Garg wrote:
> > Thanks Doug for your comments.
> >
> > On Tue, 23 Feb 2021 at 05:28, Doug Anderson  wrote:
> > > > To be clear there is still a very small window between call to
> > > > free_initmem() and system_state = SYSTEM_RUNNING which can lead to
> > > > removal of freed .init.text section breakpoints but I think we can live
> > > > with that.
> > >
> > > I know kdb / kgdb tries to keep out of the way of the rest of the
> > > system and so there's a bias to just try to infer the state of the
> > > rest of the system, but this feels like a halfway solution when really
> > > a cleaner solution really wouldn't intrude much on the main kernel.
> > > It seems like it's at least worth asking if we can just add a call
> > > like kgdb_drop_init_breakpoints() into main.c.  Then we don't have to
> > > try to guess the state...
>
> Just for the record, +1. This would be a better approach.
>
>
> > Sounds reasonable, will post RFC for this. I think we should call such
> > function as kgdb_free_init_mem() in similar way as:
> > - kprobe_free_init_mem()
> > - ftrace_free_init_mem()
>
> As is matching the names...
>
>
> > @@ -378,8 +382,13 @@ int dbg_deactivate_sw_breakpoints(void)
> > int i;
> >
> > for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
> > -   if (kgdb_break[i].state != BP_ACTIVE)
> > +   if (kgdb_break[i].state < BP_ACTIVE_INIT)
> > +   continue;
> > +   if (system_state >= SYSTEM_RUNNING &&
> > +   kgdb_break[i].state == BP_ACTIVE_INIT) {
> > +   kgdb_break[i].state = BP_UNDEFINED;
> > continue;
> > +   }
> > error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
> > if (error) {
> > pr_info("BP remove failed: %lx\n",
> >
> > >
> > > > +   kgdb_break[i].state = BP_ACTIVE;
> > > > +   else
> > > > +   kgdb_break[i].state = BP_ACTIVE_INIT;
> > >
> > > I don't really see what the "BP_ACTIVE_INIT" state gets you.  Why not
> > > just leave it as "BP_ACTIVE" and put all the logic fully in
> > > dbg_deactivate_sw_breakpoints()?
> >
> > Please see my response above.
> >
> > [which was]
> > > "BP_ACTIVE_INIT" state is added specifically to handle this scenario
> > > as to keep track of breakpoints that actually belong to the .init.text
> > > section. And we should be able to again set breakpoints after free
> > > since below change in this patch would mark them as "BP_UNDEFINED":
>
> This answer does not say whether the BP_ACTIVE_INIT state needs to be
> per-breakpoint state or whether we can infer it from the global state.
>
> Changing the state of breakpoints in .init is a one-shot activity
> whether it is triggered explicitly (e.g. kgdb_free_init_mem) or implicitly
> (run the first time dbg_deactivate_sw_breakpoints is called with the system
> state >= running).
>
> As Doug has suggested it is quite possible to unify all the logic to
> handle .init within a single function by running that function when the
> state changes globally.
>

Ah, I see. Thanks for further clarification. Will get rid of
BP_ACTIVE_INIT state.

-Sumit

>
> Daniel.


Re: [PATCH] kdb: Remove redundant function definitions/prototypes

2021-02-23 Thread Sumit Garg
On Tue, 23 Feb 2021 at 21:39, Doug Anderson  wrote:
>
> Hi,
>
> On Tue, Feb 23, 2021 at 4:01 AM Sumit Garg  wrote:
> >
> > @@ -103,7 +103,6 @@ extern int kdb_getword(unsigned long *, unsigned long, 
> > size_t);
> >  extern int kdb_putword(unsigned long, unsigned long, size_t);
> >
> >  extern int kdbgetularg(const char *, unsigned long *);
> > -extern int kdbgetu64arg(const char *, u64 *);
>
> IMO you should leave kdbgetu64arg() the way it was.  It is symmetric
> to all of the other similar functions and even if there are no
> external users of kdbgetu64arg() now it seems like it makes sense to
> keep it matching.
>

Okay, will keep kdbgetu64arg() the way it was.

-Sumit

>
> > @@ -209,9 +208,7 @@ extern unsigned long kdb_task_state(const struct 
> > task_struct *p,
> > unsigned long mask);
> >  extern void kdb_ps_suppressed(void);
> >  extern void kdb_ps1(const struct task_struct *p);
> > -extern void kdb_print_nameval(const char *name, unsigned long val);
> >  extern void kdb_send_sig(struct task_struct *p, int sig);
> > -extern void kdb_meminfo_proc_show(void);
>
> Getting rid of kdb_print_nameval() / kdb_meminfo_proc_show() makes sense to 
> me.
>
>
> >  extern char kdb_getchar(void);
> >  extern char *kdb_getstr(char *, size_t, const char *);
> >  extern void kdb_gdb_state_pass(char *buf);
> > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > index 6226502ce049..b59aad1f0b55 100644
> > --- a/kernel/debug/kdb/kdb_support.c
> > +++ b/kernel/debug/kdb/kdb_support.c
> > @@ -665,24 +665,6 @@ unsigned long kdb_task_state(const struct task_struct 
> > *p, unsigned long mask)
> > return (mask & kdb_task_state_string(state)) != 0;
> >  }
> >
> > -/*
> > - * kdb_print_nameval - Print a name and its value, converting the
> > - * value to a symbol lookup if possible.
> > - * Inputs:
> > - * namefield name to print
> > - * val value of field
> > - */
> > -void kdb_print_nameval(const char *name, unsigned long val)
> > -{
> > -   kdb_symtab_t symtab;
> > -   kdb_printf("  %-11.11s ", name);
> > -   if (kdbnearsym(val, &symtab))
> > -   kdb_symbol_print(val, &symtab,
> > -
> > KDB_SP_VALUE|KDB_SP_SYMSIZE|KDB_SP_NEWLINE);
> > -   else
> > -   kdb_printf("0x%lx\n", val);
> > -}
> > -
>
> Getting rid of kdb_print_nameval() makes sense to me.
>
> -Doug


[PATCH v5] kdb: Simplify kdb commands registration

2021-02-23 Thread Sumit Garg
Simplify kdb commands registration via using linked list instead of
static array for commands storage.

Signed-off-by: Sumit Garg 
---

Changes in v5:
- Introduce new method: kdb_register_table() to register static kdb
  main and breakpoint command tables instead of using statically
  allocated commands.

Changes in v4:
- Fix kdb commands memory allocation issue prior to slab being available
  with an array of statically allocated commands. Now it works fine with
  kgdbwait.
- Fix a misc checkpatch warning.
- I have dropped Doug's review tag as I think this version includes a
  major fix that should be reviewed again.

Changes in v3:
- Remove redundant "if" check.
- Pick up review tag from Doug.

Changes in v2:
- Remove redundant NULL check for "cmd_name".
- Incorporate misc. comment.

 kernel/debug/kdb/kdb_bp.c  |  81 --
 kernel/debug/kdb/kdb_main.c| 472 -
 kernel/debug/kdb/kdb_private.h |   3 +
 3 files changed, 343 insertions(+), 213 deletions(-)

diff --git a/kernel/debug/kdb/kdb_bp.c b/kernel/debug/kdb/kdb_bp.c
index ec4940146612..c15a1c6abfd6 100644
--- a/kernel/debug/kdb/kdb_bp.c
+++ b/kernel/debug/kdb/kdb_bp.c
@@ -522,6 +522,60 @@ static int kdb_ss(int argc, const char **argv)
return KDB_CMD_SS;
 }
 
+static kdbtab_t bptab[] = {
+   {   .cmd_name = "bp",
+   .cmd_func = kdb_bp,
+   .cmd_usage = "[]",
+   .cmd_help = "Set/Display breakpoints",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS,
+   },
+   {   .cmd_name = "bl",
+   .cmd_func = kdb_bp,
+   .cmd_usage = "[]",
+   .cmd_help = "Display breakpoints",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS,
+   },
+   {   .cmd_name = "bc",
+   .cmd_func = kdb_bc,
+   .cmd_usage = "",
+   .cmd_help = "Clear Breakpoint",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL,
+   },
+   {   .cmd_name = "be",
+   .cmd_func = kdb_bc,
+   .cmd_usage = "",
+   .cmd_help = "Enable Breakpoint",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL,
+   },
+   {   .cmd_name = "bd",
+   .cmd_func = kdb_bc,
+   .cmd_usage = "",
+   .cmd_help = "Disable Breakpoint",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL,
+   },
+   {   .cmd_name = "ss",
+   .cmd_func = kdb_ss,
+   .cmd_usage = "",
+   .cmd_help = "Single Step",
+   .cmd_minlen = 1,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS,
+   },
+};
+
+static kdbtab_t bphcmd = {
+   .cmd_name = "bph",
+   .cmd_func = kdb_bp,
+   .cmd_usage = "[]",
+   .cmd_help = "[datar [length]|dataw [length]]   Set hw brk",
+   .cmd_minlen = 0,
+   .cmd_flags = KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS,
+};
+
 /* Initialize the breakpoint table and registerbreakpoint commands. */
 
 void __init kdb_initbptab(void)
@@ -537,30 +591,7 @@ void __init kdb_initbptab(void)
for (i = 0, bp = kdb_breakpoints; i < KDB_MAXBPT; i++, bp++)
bp->bp_free = 1;
 
-   kdb_register_flags("bp", kdb_bp, "[]",
-   "Set/Display breakpoints", 0,
-   KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
-   kdb_register_flags("bl", kdb_bp, "[]",
-   "Display breakpoints", 0,
-   KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
+   kdb_register_table(bptab, ARRAY_SIZE(bptab));
if (arch_kgdb_ops.flags & KGDB_HW_BREAKPOINT)
-   kdb_register_flags("bph", kdb_bp, "[]",
-   "[datar [length]|dataw [length]]   Set hw brk", 0,
-   KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
-   kdb_register_flags("bc", kdb_bc, "",
-   "Clear Breakpoint", 0,
-   KDB_ENABLE_FLOW_CTRL);
-   kdb_register_flags("be", kdb_bc, "",
-   "Enable Breakpoint", 0,
-   KDB_ENABLE_FLOW_CTRL);
-   kdb_register_flags("bd", kdb_bc, "",
-   "Disable Breakpoint", 0,
-   KDB_ENABLE_FLOW_CTRL);
-
-   kdb_register_flags("ss", kdb_ss, "",
-   "Single Step", 1,
-   KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
-   /*
-* Arch

[PATCH v2] kdb: Remove redundant function definitions/prototypes

2021-02-23 Thread Sumit Garg
Cleanup kdb code to get rid of unused function definitions/prototypes.

Signed-off-by: Sumit Garg 
---

Changes in v2:
- Keep kdbgetu64arg() the way it was.

 kernel/debug/kdb/kdb_private.h |  2 --
 kernel/debug/kdb/kdb_support.c | 18 --
 2 files changed, 20 deletions(-)

diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index 3cf8d9e47939..b857a84de3b5 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -210,9 +210,7 @@ extern unsigned long kdb_task_state(const struct 
task_struct *p,
unsigned long mask);
 extern void kdb_ps_suppressed(void);
 extern void kdb_ps1(const struct task_struct *p);
-extern void kdb_print_nameval(const char *name, unsigned long val);
 extern void kdb_send_sig(struct task_struct *p, int sig);
-extern void kdb_meminfo_proc_show(void);
 extern char kdb_getchar(void);
 extern char *kdb_getstr(char *, size_t, const char *);
 extern void kdb_gdb_state_pass(char *buf);
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index 6226502ce049..b59aad1f0b55 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -665,24 +665,6 @@ unsigned long kdb_task_state(const struct task_struct *p, 
unsigned long mask)
return (mask & kdb_task_state_string(state)) != 0;
 }
 
-/*
- * kdb_print_nameval - Print a name and its value, converting the
- * value to a symbol lookup if possible.
- * Inputs:
- * namefield name to print
- * val value of field
- */
-void kdb_print_nameval(const char *name, unsigned long val)
-{
-   kdb_symtab_t symtab;
-   kdb_printf("  %-11.11s ", name);
-   if (kdbnearsym(val, &symtab))
-   kdb_symbol_print(val, &symtab,
-KDB_SP_VALUE|KDB_SP_SYMSIZE|KDB_SP_NEWLINE);
-   else
-   kdb_printf("0x%lx\n", val);
-}
-
 /* Last ditch allocator for debugging, so we can still debug even when
  * the GFP_ATOMIC pool has been exhausted.  The algorithms are tuned
  * for space usage, not for speed.  One smallish memory pool, the free
-- 
2.25.1



[PATCH] kgdb: Fix to kill breakpoints on initmem after boot

2021-02-24 Thread Sumit Garg
Currently breakpoints in kernel .init.text section are not handled
correctly while allowing to remove them even after corresponding pages
have been freed.

Fix it via killing .init.text section breakpoints just prior to initmem
pages being freed.

Suggested-by: Doug Anderson 
Signed-off-by: Sumit Garg 
---
 include/linux/kgdb.h  |  2 ++
 init/main.c   |  1 +
 kernel/debug/debug_core.c | 11 +++
 3 files changed, 14 insertions(+)

diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
index 57b8885708e5..3aa503ef06fc 100644
--- a/include/linux/kgdb.h
+++ b/include/linux/kgdb.h
@@ -361,9 +361,11 @@ extern atomic_tkgdb_active;
 extern bool dbg_is_early;
 extern void __init dbg_late_init(void);
 extern void kgdb_panic(const char *msg);
+extern void kgdb_free_init_mem(void);
 #else /* ! CONFIG_KGDB */
 #define in_dbg_master() (0)
 #define dbg_late_init()
 static inline void kgdb_panic(const char *msg) {}
+static inline void kgdb_free_init_mem(void) { }
 #endif /* ! CONFIG_KGDB */
 #endif /* _KGDB_H_ */
diff --git a/init/main.c b/init/main.c
index c68d784376ca..a446ca3d334e 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1417,6 +1417,7 @@ static int __ref kernel_init(void *unused)
async_synchronize_full();
kprobe_free_init_mem();
ftrace_free_init_mem();
+   kgdb_free_init_mem();
free_initmem();
mark_readonly();
 
diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
index 229dd119f430..319381e95d1d 100644
--- a/kernel/debug/debug_core.c
+++ b/kernel/debug/debug_core.c
@@ -465,6 +465,17 @@ int dbg_remove_all_break(void)
return 0;
 }
 
+void kgdb_free_init_mem(void)
+{
+   int i;
+
+   /* Clear init memory breakpoints. */
+   for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
+   if (init_section_contains((void *)kgdb_break[i].bpt_addr, 0))
+   kgdb_break[i].state = BP_UNDEFINED;
+   }
+}
+
 #ifdef CONFIG_KGDB_KDB
 void kdb_dump_stack_on_cpu(int cpu)
 {
-- 
2.25.1



Re: [pet...@infradead.org: Re: [PATCH] x86/kgdb: Allow removal of early BPs]

2021-02-17 Thread Sumit Garg
Hi Peter,

> On Mon, Dec 14, 2020 at 03:13:12PM +0100, Stefan Saecherl wrote:
>
> > One thing to consider when doing this is that code can go away during boot
> > (e.g. .init.text). Previously kgdb_arch_remove_breakpoint handled this case
> > gracefully by just having copy_to_kernel_nofault fail but if one then calls
> > text_poke_kgdb the system dies due to the BUG_ON we moved out of
> > __text_poke.  To avoid this __text_poke now returns an error in case of a
> > nonpresent code page and the error is handled at call site.
>
> So what if the page is reused and now exists again?
>
> We keep track of the init state, how about you look at that and not poke
> at .init.text after it's freed instead?
>

Makes sense. I'll see if I can patch the debug core to get an
architecture neutral fix for this.

-Sumit


[PATCH v4] kdb: Simplify kdb commands registration

2021-02-18 Thread Sumit Garg
Simplify kdb commands registration via using linked list instead of
static array for commands storage.

Signed-off-by: Sumit Garg 
---

Changes in v4:
- Fix kdb commands memory allocation issue prior to slab being available
  with an array of statically allocated commands. Now it works fine with
  kgdbwait.
- Fix a misc checkpatch warning.
- I have dropped Doug's review tag as I think this version includes a
  major fix that should be reviewed again.

Changes in v3:
- Remove redundant "if" check.
- Pick up review tag from Doug.

Changes in v2:
- Remove redundant NULL check for "cmd_name".
- Incorporate misc. comment.

 kernel/debug/kdb/kdb_main.c| 129 ++---
 kernel/debug/kdb/kdb_private.h |   2 +
 2 files changed, 47 insertions(+), 84 deletions(-)

diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 930ac1b..5215e04 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -84,15 +85,12 @@ static unsigned int kdb_continue_catastrophic =
 static unsigned int kdb_continue_catastrophic;
 #endif
 
-/* kdb_commands describes the available commands. */
-static kdbtab_t *kdb_commands;
-#define KDB_BASE_CMD_MAX 50
-static int kdb_max_commands = KDB_BASE_CMD_MAX;
-static kdbtab_t kdb_base_commands[KDB_BASE_CMD_MAX];
-#define for_each_kdbcmd(cmd, num)  \
-   for ((cmd) = kdb_base_commands, (num) = 0;  \
-num < kdb_max_commands;\
-num++, num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++)
+/* kdb_cmds_head describes the available commands. */
+static LIST_HEAD(kdb_cmds_head);
+
+#define KDB_CMD_INIT_MAX   50
+static int kdb_cmd_init_idx;
+static kdbtab_t kdb_commands_init[KDB_CMD_INIT_MAX];
 
 typedef struct _kdbmsg {
int km_diag;/* kdb diagnostic */
@@ -921,7 +919,7 @@ int kdb_parse(const char *cmdstr)
char *cp;
char *cpp, quoted;
kdbtab_t *tp;
-   int i, escaped, ignore_errors = 0, check_grep = 0;
+   int escaped, ignore_errors = 0, check_grep = 0;
 
/*
 * First tokenize the command string.
@@ -1011,25 +1009,17 @@ int kdb_parse(const char *cmdstr)
++argv[0];
}
 
-   for_each_kdbcmd(tp, i) {
-   if (tp->cmd_name) {
-   /*
-* If this command is allowed to be abbreviated,
-* check to see if this is it.
-*/
-
-   if (tp->cmd_minlen
-&& (strlen(argv[0]) <= tp->cmd_minlen)) {
-   if (strncmp(argv[0],
-   tp->cmd_name,
-   tp->cmd_minlen) == 0) {
-   break;
-   }
-   }
+   list_for_each_entry(tp, &kdb_cmds_head, list_node) {
+   /*
+* If this command is allowed to be abbreviated,
+* check to see if this is it.
+*/
+   if (tp->cmd_minlen && (strlen(argv[0]) <= tp->cmd_minlen) &&
+   (strncmp(argv[0], tp->cmd_name, tp->cmd_minlen) == 0))
+   break;
 
-   if (strcmp(argv[0], tp->cmd_name) == 0)
-   break;
-   }
+   if (strcmp(argv[0], tp->cmd_name) == 0)
+   break;
}
 
/*
@@ -1037,19 +1027,15 @@ int kdb_parse(const char *cmdstr)
 * few characters of this match any of the known commands.
 * e.g., md1c20 should match md.
 */
-   if (i == kdb_max_commands) {
-   for_each_kdbcmd(tp, i) {
-   if (tp->cmd_name) {
-   if (strncmp(argv[0],
-   tp->cmd_name,
-   strlen(tp->cmd_name)) == 0) {
-   break;
-   }
-   }
+   if (list_entry_is_head(tp, &kdb_cmds_head, list_node)) {
+   list_for_each_entry(tp, &kdb_cmds_head, list_node) {
+   if (strncmp(argv[0], tp->cmd_name,
+   strlen(tp->cmd_name)) == 0)
+   break;
}
}
 
-   if (i < kdb_max_commands) {
+   if (!list_entry_is_head(tp, &kdb_cmds_head, list_node)) {
int result;
 
if (!kdb_check_flags(tp->cmd_flags, kdb_cmd_enabled, argc <= 1))
@@ -2428,17 +2414,14 @@ static int kdb_kgdb(int argc, const char **argv

[PATCH] kernel: debug: Handle breakpoints in kernel .init.text section

2021-02-19 Thread Sumit Garg
Currently breakpoints in kernel .init.text section are not handled
correctly while allowing to remove them even after corresponding pages
have been freed.

In order to keep track of .init.text section breakpoints, add another
breakpoint state as BP_ACTIVE_INIT and don't try to free these
breakpoints once the system is in running state.

To be clear there is still a very small window between call to
free_initmem() and system_state = SYSTEM_RUNNING which can lead to
removal of freed .init.text section breakpoints but I think we can live
with that.

Suggested-by: Peter Zijlstra 
Signed-off-by: Sumit Garg 
---
 include/linux/kgdb.h  |  3 ++-
 kernel/debug/debug_core.c | 17 +
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
index 0d6cf64..57b8885 100644
--- a/include/linux/kgdb.h
+++ b/include/linux/kgdb.h
@@ -71,7 +71,8 @@ enum kgdb_bpstate {
BP_UNDEFINED = 0,
BP_REMOVED,
BP_SET,
-   BP_ACTIVE
+   BP_ACTIVE_INIT,
+   BP_ACTIVE,
 };
 
 struct kgdb_bkpt {
diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
index af6e8b4f..229dd11 100644
--- a/kernel/debug/debug_core.c
+++ b/kernel/debug/debug_core.c
@@ -324,7 +324,11 @@ int dbg_activate_sw_breakpoints(void)
}
 
kgdb_flush_swbreak_addr(kgdb_break[i].bpt_addr);
-   kgdb_break[i].state = BP_ACTIVE;
+   if (system_state >= SYSTEM_RUNNING ||
+   !init_section_contains((void *)kgdb_break[i].bpt_addr, 0))
+   kgdb_break[i].state = BP_ACTIVE;
+   else
+   kgdb_break[i].state = BP_ACTIVE_INIT;
}
return ret;
 }
@@ -378,8 +382,13 @@ int dbg_deactivate_sw_breakpoints(void)
int i;
 
for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
-   if (kgdb_break[i].state != BP_ACTIVE)
+   if (kgdb_break[i].state < BP_ACTIVE_INIT)
+   continue;
+   if (system_state >= SYSTEM_RUNNING &&
+   kgdb_break[i].state == BP_ACTIVE_INIT) {
+   kgdb_break[i].state = BP_UNDEFINED;
continue;
+   }
error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
if (error) {
pr_info("BP remove failed: %lx\n",
@@ -425,7 +434,7 @@ int kgdb_has_hit_break(unsigned long addr)
int i;
 
for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
-   if (kgdb_break[i].state == BP_ACTIVE &&
+   if (kgdb_break[i].state >= BP_ACTIVE_INIT &&
kgdb_break[i].bpt_addr == addr)
return 1;
}
@@ -439,7 +448,7 @@ int dbg_remove_all_break(void)
 
/* Clear memory breakpoints. */
for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
-   if (kgdb_break[i].state != BP_ACTIVE)
+   if (kgdb_break[i].state < BP_ACTIVE_INIT)
goto setundefined;
error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
if (error)
-- 
2.7.4



Re: [PATCH v5] arm64: Enable perf events based hard lockup detector

2021-02-19 Thread Sumit Garg
Hi Will, Mark,

On Fri, 15 Jan 2021 at 17:32, Sumit Garg  wrote:
>
> With the recent feature added to enable perf events to use pseudo NMIs
> as interrupts on platforms which support GICv3 or later, its now been
> possible to enable hard lockup detector (or NMI watchdog) on arm64
> platforms. So enable corresponding support.
>
> One thing to note here is that normally lockup detector is initialized
> just after the early initcalls but PMU on arm64 comes up much later as
> device_initcall(). So we need to re-initialize lockup detection once
> PMU has been initialized.
>
> Signed-off-by: Sumit Garg 
> ---
>
> Changes in v5:
> - Fix lockup_detector_init() invocation to be rather invoked from CPU
>   binded context as it makes heavy use of per-cpu variables and shouldn't
>   be invoked from preemptible context.
>

Do you have any further comments on this?

Lecopzer,

Does this feature work fine for you now?

-Sumit

> Changes in v4:
> - Rebased to latest pmu v7 NMI patch-set [1] and in turn use "has_nmi"
>   hook to know if PMU IRQ has been requested as an NMI.
> - Add check for return value prior to initializing hard-lockup detector.
>
> [1] https://lkml.org/lkml/2020/9/24/458
>
> Changes in v3:
> - Rebased to latest pmu NMI patch-set [1].
> - Addressed misc. comments from Stephen.
>
> [1] https://lkml.org/lkml/2020/8/19/671
>
> Changes since RFC:
> - Rebased on top of Alex's WIP-pmu-nmi branch.
> - Add comment for safe max. CPU frequency.
> - Misc. cleanup.
>
>  arch/arm64/Kconfig |  2 ++
>  arch/arm64/kernel/perf_event.c | 48 
> --
>  drivers/perf/arm_pmu.c |  5 +
>  include/linux/perf/arm_pmu.h   |  2 ++
>  4 files changed, 55 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f39568b..05e1735 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -174,6 +174,8 @@ config ARM64
> select HAVE_NMI
> select HAVE_PATA_PLATFORM
> select HAVE_PERF_EVENTS
> +   select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI && HW_PERF_EVENTS
> +   select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && 
> HAVE_PERF_EVENTS_NMI
> select HAVE_PERF_REGS
> select HAVE_PERF_USER_STACK_DUMP
> select HAVE_REGS_AND_STACK_ACCESS_API
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 3605f77a..bafb7c8 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -23,6 +23,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>
>  /* ARMv8 Cortex-A53 specific event types. */
>  #define ARMV8_A53_PERFCTR_PREF_LINEFILL0xC2
> @@ -1246,12 +1248,30 @@ static struct platform_driver armv8_pmu_driver = {
> .probe  = armv8_pmu_device_probe,
>  };
>
> +static int __init lockup_detector_init_fn(void *data)
> +{
> +   lockup_detector_init();
> +   return 0;
> +}
> +
>  static int __init armv8_pmu_driver_init(void)
>  {
> +   int ret;
> +
> if (acpi_disabled)
> -   return platform_driver_register(&armv8_pmu_driver);
> +   ret = platform_driver_register(&armv8_pmu_driver);
> else
> -   return arm_pmu_acpi_probe(armv8_pmuv3_init);
> +   ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
> +
> +   /*
> +* Try to re-initialize lockup detector after PMU init in
> +* case PMU events are triggered via NMIs.
> +*/
> +   if (ret == 0 && arm_pmu_irq_is_nmi())
> +   smp_call_on_cpu(raw_smp_processor_id(), 
> lockup_detector_init_fn,
> +   NULL, false);
> +
> +   return ret;
>  }
>  device_initcall(armv8_pmu_driver_init)
>
> @@ -1309,3 +1329,27 @@ void arch_perf_update_userpage(struct perf_event 
> *event,
> userpg->cap_user_time_zero = 1;
> userpg->cap_user_time_short = 1;
>  }
> +
> +#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
> +/*
> + * Safe maximum CPU frequency in case a particular platform doesn't implement
> + * cpufreq driver. Although, architecture doesn't put any restrictions on
> + * maximum frequency but 5 GHz seems to be safe maximum given the available
> + * Arm CPUs in the market which are clocked much less than 5 GHz. On the 
> other
> + * hand, we can't make it much higher as it would lead to a large hard-lockup
> + * detection timeout on parts which are running slower (eg. 1GHz on
> + * Developerbox) and doesn't possess a cpufreq driver.
> + */
> +#define SA

Re: Migration to trusted keys: sealing user-provided key?

2021-02-03 Thread Sumit Garg
On Tue, 2 Feb 2021 at 18:04, Jan Lübbe  wrote:
>
> On Tue, 2021-02-02 at 17:45 +0530, Sumit Garg wrote:
> > Hi Jan,
> >
> > On Sun, 31 Jan 2021 at 23:40, James Bottomley  wrote:
> > >
> > > On Sun, 2021-01-31 at 15:14 +0100, Jan Lübbe wrote:
> > > > On Sun, 2021-01-31 at 07:09 -0500, Mimi Zohar wrote:
> > > > > On Sat, 2021-01-30 at 19:53 +0200, Jarkko Sakkinen wrote:
> > > > > > On Thu, 2021-01-28 at 18:31 +0100, Ahmad Fatoum wrote:
> > > > > > > Hello,
> > > > > > >
> > > > > > > I've been looking into how a migration to using
> > > > > > > trusted/encrypted keys would look like (particularly with dm-
> > > > > > > crypt).
> > > > > > >
> > > > > > > Currently, it seems the the only way is to re-encrypt the
> > > > > > > partitions because trusted/encrypted keys always generate their
> > > > > > > payloads from RNG.
> > > > > > >
> > > > > > > If instead there was a key command to initialize a new
> > > > > > > trusted/encrypted key with a user provided value, users could
> > > > > > > use whatever mechanism they used beforehand to get a plaintext
> > > > > > > key and use that to initialize a new trusted/encrypted key.
> > > > > > > From there on, the key will be like any other trusted/encrypted
> > > > > > > key and not be disclosed again to userspace.
> > > > > > >
> > > > > > > What are your thoughts on this? Would an API like
> > > > > > >
> > > > > > >   keyctl add trusted dmcrypt-key 'set ' # user-
> > > > > > > supplied content
> > > > > > >
> > > > > > > be acceptable?
> > > > > >
> > > > > > Maybe it's the lack of knowledge with dm-crypt, but why this
> > > > > > would be useful? Just want to understand the bottleneck, that's
> > > > > > all.
> > > >
> > > > Our goal in this case is to move away from having the dm-crypt key
> > > > material accessible to user-space on embedded devices. For an
> > > > existing dm-crypt volume, this key is fixed. A key can be loaded into
> > > > user key type and used by dm-crypt (cryptsetup can already do it this
> > > > way). But at this point, you can still do 'keyctl read' on that key,
> > > > exposing the key material to user space.
> > > >
> > > > Currently, with both encrypted and trusted keys, you can only
> > > > generate new random keys, not import existing key material.
> > > >
> > > > James Bottomley mentioned in the other reply that the key format will
> > > > become compatible with the openssl_tpm2_engine, which would provide a
> > > > workaround. This wouldn't work with OP-TEE-based trusted keys (see
> > > > Sumit Garg's series), though.
> > >
> > > Assuming OP-TEE has the same use model as the TPM, someone will
> > > eventually realise the need for interoperable key formats between key
> > > consumers and then it will work in the same way once the kernel gets
> > > updated to speak whatever format they come up with.
> >
> > IIUC, James re-work for TPM trusted keys is to allow loading of sealed
> > trusted keys directly via user-space (with proper authorization) into
> > the kernel keyring.
> >
> > I think similar should be achievable with OP-TEE (via extending pseudo
> > TA [1]) as well to allow restricted user-space access (with proper
> > authorization) to generate sealed trusted key blob that should be
> > interoperable with the kernel. Currently OP-TEE exposes trusted key
> > interfaces for kernel users only.
>
> What is the security benefit of having the key blob creation in user-space
> instead of in the kernel? Key import is a standard operation in HSMs or 
> PKCS#11
> tokens.

User authentication, AFAIK most of the HSMs or PKCS#11 require that
for key import. But IIUC, your suggested approach to load plain key
into kernel keyring and say it's *trusted* without any user
authentication, would it really be a trusted key? What prevents a
rogue user from making his key as the dm-crypt trusted key?

>
> I mainly see the downside of having to add another API to access the 
> underlying
> functionality (be it trusted key TA or the NXP CAAM HW *) and requiring
> platform-specific userspace code.

I am not sure why you would call the standardized TEE interface [1] to
be platform-specific, it is meant to be platform agnostic. And I think
we can have openssl_tee_engine on similar lines as the
openssl_tpm2_engine.

[1] https://globalplatform.org/specs-library/tee-client-api-specification/

-Sumit

>
> This CAAM specific API (in out-of-tree patches) was exactly the part I was
> trying to get rid of. ;)
>
> Regards,
> Jan
>
> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
>


Re: [PATCH] tee: optee: remove need_resched() before cond_resched()

2021-01-31 Thread Sumit Garg
Hi Jens,

On Fri, 29 Jan 2021 at 18:59, Jens Wiklander  wrote:
>
> Hi Rouven and Sumit,
>
> On Mon, Jan 25, 2021 at 10:58 AM Jens Wiklander via OP-TEE
>  wrote:
> >
> > Hi Rouven and Sumit,
> >
> > On Mon, Jan 25, 2021 at 10:55 AM Jens Wiklander
> >  wrote:
> > >
> > > Testing need_resched() before cond_resched() is not needed as an
> > > equivalent test is done internally in cond_resched(). So drop the
> > > need_resched() test.
> > >
> > > Fixes: dcb3b06d9c34 ("tee: optee: replace might_sleep with cond_resched")
> > > Signed-off-by: Jens Wiklander 
> > > ---
> > >  drivers/tee/optee/call.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > Can you please test to see that this works in your setups too?
>
> Does this work for you? I'd like to get this patch ready for v5.12.

It does work for me as well.

Tested-by: Sumit Garg 

-Sumit

>
> Thanks,
> Jens


Re: [PATCH] optee: sync OP-TEE headers

2021-02-01 Thread Sumit Garg
On Fri, 29 Jan 2021 at 19:13, Jens Wiklander via OP-TEE
 wrote:
>
> Pulls in updates in the internal headers from OP-TEE OS [1]. A few
> defines has been shortened, hence the changes in rpc.c. Defines not used
> by the driver in tee_rpc_cmd.h has been filtered out.
>
> Note that this does not change the ABI.
>
> Link: [1] https://github.com/OP-TEE/optee_os
> Signed-off-by: Jens Wiklander 
> ---
>  drivers/tee/optee/optee_msg.h | 154 ++
>  drivers/tee/optee/optee_rpc_cmd.h | 103 
>  drivers/tee/optee/optee_smc.h |  70 +-
>  drivers/tee/optee/rpc.c   |  39 
>  4 files changed, 178 insertions(+), 188 deletions(-)
>  create mode 100644 drivers/tee/optee/optee_rpc_cmd.h
>

Looks good to me apart from the minor nit below.

Reviewed-by: Sumit Garg 

> diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h
> index 7b2d919da2ac..7c4723b8 100644
> --- a/drivers/tee/optee/optee_msg.h
> +++ b/drivers/tee/optee/optee_msg.h
> @@ -1,6 +1,6 @@
>  /* SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) */
>  /*
> - * Copyright (c) 2015-2019, Linaro Limited
> + * Copyright (c) 2015-2021, Linaro Limited
>   */
>  #ifndef _OPTEE_MSG_H
>  #define _OPTEE_MSG_H
> @@ -12,11 +12,9 @@
>   * This file defines the OP-TEE message protocol used to communicate
>   * with an instance of OP-TEE running in secure world.
>   *
> - * This file is divided into three sections.
> + * This file is divided into two sections.
>   * 1. Formatting of messages.
>   * 2. Requests from normal world
> - * 3. Requests from secure world, Remote Procedure Call (RPC), handled by
> - *tee-supplicant.
>   */
>
>  
> /*
> @@ -54,8 +52,8 @@
>   * Every entry in buffer should point to a 4k page beginning (12 least
>   * significant bits must be equal to zero).
>   *
> - * 12 least significant bints of optee_msg_param.u.tmem.buf_ptr should hold 
> page
> - * offset of the user buffer.
> + * 12 least significant bits of optee_msg_param.u.tmem.buf_ptr should hold
> + * page offset of user buffer.
>   *
>   * So, entries should be placed like members of this structure:
>   *
> @@ -176,17 +174,9 @@ struct optee_msg_param {
>   * @params: the parameters supplied to the OS Command
>   *
>   * All normal calls to Trusted OS uses this struct. If cmd requires further
> - * information than what these field holds it can be passed as a parameter
> + * information than what these fields hold it can be passed as a parameter
>   * tagged as meta (setting the OPTEE_MSG_ATTR_META bit in corresponding
> - * attrs field). All parameters tagged as meta has to come first.
> - *
> - * Temp memref parameters can be fragmented if supported by the Trusted OS
> - * (when optee_smc.h is bearer of this protocol this is indicated with
> - * OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM). If a logical memref parameter is
> - * fragmented then has all but the last fragment the
> - * OPTEE_MSG_ATTR_FRAGMENT bit set in attrs. Even if a memref is fragmented
> - * it will still be presented as a single logical memref to the Trusted
> - * Application.
> + * attrs field). All parameters tagged as meta have to come first.
>   */
>  struct optee_msg_arg {
> u32 cmd;
> @@ -290,13 +280,10 @@ struct optee_msg_arg {
>   * OPTEE_MSG_CMD_REGISTER_SHM registers a shared memory reference. The
>   * information is passed as:
>   * [in] param[0].attr  OPTEE_MSG_ATTR_TYPE_TMEM_INPUT
> - * [| OPTEE_MSG_ATTR_FRAGMENT]
> + * [| OPTEE_MSG_ATTR_NONCONTIG]
>   * [in] param[0].u.tmem.buf_ptrphysical address (of first 
> fragment)
>   * [in] param[0].u.tmem.size   size (of first fragment)
>   * [in] param[0].u.tmem.shm_refholds shared memory reference
> - * ...
> - * The shared memory can optionally be fragmented, temp memrefs can follow
> - * each other with all but the last with the OPTEE_MSG_ATTR_FRAGMENT bit set.
>   *
>   * OPTEE_MSG_CMD_UNREGISTER_SHM unregisteres a previously registered shared

nit: since you are touching this file, s/unregisteres/unregisters/

-Sumit

>   * memory reference. The information is passed as:
> @@ -313,131 +300,4 @@ struct optee_msg_arg {
>  #define OPTEE_MSG_CMD_UNREGISTER_SHM   5
>  #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004
>
> -/*
> - * Part 3 - Requests from secure world, RPC
> - 
> */
> -
> -/*
> - * All RPC is done

Re: Migration to trusted keys: sealing user-provided key?

2021-02-02 Thread Sumit Garg
Hi Jan,

On Sun, 31 Jan 2021 at 23:40, James Bottomley  wrote:
>
> On Sun, 2021-01-31 at 15:14 +0100, Jan Lübbe wrote:
> > On Sun, 2021-01-31 at 07:09 -0500, Mimi Zohar wrote:
> > > On Sat, 2021-01-30 at 19:53 +0200, Jarkko Sakkinen wrote:
> > > > On Thu, 2021-01-28 at 18:31 +0100, Ahmad Fatoum wrote:
> > > > > Hello,
> > > > >
> > > > > I've been looking into how a migration to using
> > > > > trusted/encrypted keys would look like (particularly with dm-
> > > > > crypt).
> > > > >
> > > > > Currently, it seems the the only way is to re-encrypt the
> > > > > partitions because trusted/encrypted keys always generate their
> > > > > payloads from RNG.
> > > > >
> > > > > If instead there was a key command to initialize a new
> > > > > trusted/encrypted key with a user provided value, users could
> > > > > use whatever mechanism they used beforehand to get a plaintext
> > > > > key and use that to initialize a new trusted/encrypted key.
> > > > > From there on, the key will be like any other trusted/encrypted
> > > > > key and not be disclosed again to userspace.
> > > > >
> > > > > What are your thoughts on this? Would an API like
> > > > >
> > > > >   keyctl add trusted dmcrypt-key 'set ' # user-
> > > > > supplied content
> > > > >
> > > > > be acceptable?
> > > >
> > > > Maybe it's the lack of knowledge with dm-crypt, but why this
> > > > would be useful? Just want to understand the bottleneck, that's
> > > > all.
> >
> > Our goal in this case is to move away from having the dm-crypt key
> > material accessible to user-space on embedded devices. For an
> > existing dm-crypt volume, this key is fixed. A key can be loaded into
> > user key type and used by dm-crypt (cryptsetup can already do it this
> > way). But at this point, you can still do 'keyctl read' on that key,
> > exposing the key material to user space.
> >
> > Currently, with both encrypted and trusted keys, you can only
> > generate new random keys, not import existing key material.
> >
> > James Bottomley mentioned in the other reply that the key format will
> > become compatible with the openssl_tpm2_engine, which would provide a
> > workaround. This wouldn't work with OP-TEE-based trusted keys (see
> > Sumit Garg's series), though.
>
> Assuming OP-TEE has the same use model as the TPM, someone will
> eventually realise the need for interoperable key formats between key
> consumers and then it will work in the same way once the kernel gets
> updated to speak whatever format they come up with.

IIUC, James re-work for TPM trusted keys is to allow loading of sealed
trusted keys directly via user-space (with proper authorization) into
the kernel keyring.

I think similar should be achievable with OP-TEE (via extending pseudo
TA [1]) as well to allow restricted user-space access (with proper
authorization) to generate sealed trusted key blob that should be
interoperable with the kernel. Currently OP-TEE exposes trusted key
interfaces for kernel users only.

[1] https://github.com/OP-TEE/optee_os/blob/master/ta/trusted_keys/entry.c

-Sumit

>
> > > We upstreamed "trusted" & "encrypted" keys together in order to
> > > address this sort of problem.   Instead of directly using a
> > > "trusted" key for persistent file signatures being stored as
> > > xattrs, the "encrypted" key provides one level of
> > > indirection.   The "encrypted" key may be encrypted/decrypted with
> > > either a TPM based "trusted" key or with a "user" type symmetric
> > > key[1].
> > >
> > > Instead of modifying "trusted" keys, use a "user" type "encrypted"
> > > key.
> >
> > I don't see how this would help. When using dm-crypt with an
> > encrypted key, I can't use my existing key material.
> >
> > Except for the migration aspect, trusted keys seem ideal. Only a
> > single exported blob needs to be stored and can only be loaded/used
> > again on the same (trusted) system. Userspace cannot extract the key
> > material.
>
> Yes, that's what I was thinking ... especially when you can add policy
> to the keys, which includes PCR locking.  Part of the problem is that
> changing policy, which you have to do if something happens to update
> the PCR values, is technically a migration, so your trusted keys for
> dm-crypt are really going to have to be migrateable.
>
> > To get to this point on systems in the field without re-encryption of
> > the whole storage, only the initial trusted/encrypted key creation
> > would need to allow passing in existing key material.
>
> What about a third option: why not make dm-crypt store the master key
> it uses as an encrypted key (if a parent trusted key is available)?
> That way you'd be able to extract the encrypted form of the key as
> root, but wouldn't be able to extract the actual master key.
>
> James
>
>


Re: Migration to trusted keys: sealing user-provided key?

2021-02-03 Thread Sumit Garg
On Wed, 3 Feb 2021 at 19:16, Jan Lübbe  wrote:
>
> On Wed, 2021-02-03 at 17:20 +0530, Sumit Garg wrote:
> > On Tue, 2 Feb 2021 at 18:04, Jan Lübbe  wrote:
> > >
> > > On Tue, 2021-02-02 at 17:45 +0530, Sumit Garg wrote:
> > > > Hi Jan,
> > > >
> > > > On Sun, 31 Jan 2021 at 23:40, James Bottomley  
> > > > wrote:
> > > > >
> > > > > On Sun, 2021-01-31 at 15:14 +0100, Jan Lübbe wrote:
> > > > > > On Sun, 2021-01-31 at 07:09 -0500, Mimi Zohar wrote:
> > > > > > > On Sat, 2021-01-30 at 19:53 +0200, Jarkko Sakkinen wrote:
> > > > > > > > On Thu, 2021-01-28 at 18:31 +0100, Ahmad Fatoum wrote:
> > > > > > > > > Hello,
> > > > > > > > >
> > > > > > > > > I've been looking into how a migration to using
> > > > > > > > > trusted/encrypted keys would look like (particularly with dm-
> > > > > > > > > crypt).
> > > > > > > > >
> > > > > > > > > Currently, it seems the the only way is to re-encrypt the
> > > > > > > > > partitions because trusted/encrypted keys always generate 
> > > > > > > > > their
> > > > > > > > > payloads from RNG.
> > > > > > > > >
> > > > > > > > > If instead there was a key command to initialize a new
> > > > > > > > > trusted/encrypted key with a user provided value, users could
> > > > > > > > > use whatever mechanism they used beforehand to get a plaintext
> > > > > > > > > key and use that to initialize a new trusted/encrypted key.
> > > > > > > > > From there on, the key will be like any other 
> > > > > > > > > trusted/encrypted
> > > > > > > > > key and not be disclosed again to userspace.
> > > > > > > > >
> > > > > > > > > What are your thoughts on this? Would an API like
> > > > > > > > >
> > > > > > > > >   keyctl add trusted dmcrypt-key 'set ' # user-
> > > > > > > > > supplied content
> > > > > > > > >
> > > > > > > > > be acceptable?
> > > > > > > >
> > > > > > > > Maybe it's the lack of knowledge with dm-crypt, but why this
> > > > > > > > would be useful? Just want to understand the bottleneck, that's
> > > > > > > > all.
> > > > > >
> > > > > > Our goal in this case is to move away from having the dm-crypt key
> > > > > > material accessible to user-space on embedded devices. For an
> > > > > > existing dm-crypt volume, this key is fixed. A key can be loaded 
> > > > > > into
> > > > > > user key type and used by dm-crypt (cryptsetup can already do it 
> > > > > > this
> > > > > > way). But at this point, you can still do 'keyctl read' on that key,
> > > > > > exposing the key material to user space.
> > > > > >
> > > > > > Currently, with both encrypted and trusted keys, you can only
> > > > > > generate new random keys, not import existing key material.
> > > > > >
> > > > > > James Bottomley mentioned in the other reply that the key format 
> > > > > > will
> > > > > > become compatible with the openssl_tpm2_engine, which would provide 
> > > > > > a
> > > > > > workaround. This wouldn't work with OP-TEE-based trusted keys (see
> > > > > > Sumit Garg's series), though.
> > > > >
> > > > > Assuming OP-TEE has the same use model as the TPM, someone will
> > > > > eventually realise the need for interoperable key formats between key
> > > > > consumers and then it will work in the same way once the kernel gets
> > > > > updated to speak whatever format they come up with.
> > > >
> > > > IIUC, James re-work for TPM trusted keys is to allow loading of sealed
> > > > trusted keys directly via user-space (with proper authorization) into
> > > > the kernel keyring.
> > > >
> > > > I think similar should be achievable with OP-TE

[PATCH v2] kdb: Refactor env variables get/set code

2021-02-04 Thread Sumit Garg
Add two new kdb environment access methods, kdb_setenv() and
kdb_printenv(), in order to abstract out the environment access code
from the kdb command functions.

Also, replace (char *)0 with NULL as the initializer for the
environment variables array.

Signed-off-by: Sumit Garg 
---

Changes in v2:
- Get rid of code motion to separate kdb_env.c file.
- Replace (char *)0 with NULL.
- Use kernel-doc style function comments.
- s/kdb_prienv/kdb_printenv/

 kernel/debug/kdb/kdb_main.c | 166 +---
 1 file changed, 93 insertions(+), 73 deletions(-)

diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 588062a..b257d35 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -142,40 +142,40 @@ static const int __nkdb_err = ARRAY_SIZE(kdbmsgs);
 
 static char *__env[] = {
 #if defined(CONFIG_SMP)
- "PROMPT=[%d]kdb> ",
+   "PROMPT=[%d]kdb> ",
 #else
- "PROMPT=kdb> ",
+   "PROMPT=kdb> ",
 #endif
- "MOREPROMPT=more> ",
- "RADIX=16",
- "MDCOUNT=8",  /* lines of md output */
- KDB_PLATFORM_ENV,
- "DTABCOUNT=30",
- "NOSECT=1",
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
- (char *)0,
+   "MOREPROMPT=more> ",
+   "RADIX=16",
+   "MDCOUNT=8",/* lines of md output */
+   KDB_PLATFORM_ENV,
+   "DTABCOUNT=30",
+   "NOSECT=1",
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
+   NULL,
 };
 
 static const int __nenv = ARRAY_SIZE(__env);
@@ -318,6 +318,65 @@ int kdbgetintenv(const char *match, int *value)
 }
 
 /*
+ * kdb_setenv() - Alter an existing environment variable or create a new one.
+ * @var: Name of the variable
+ * @val: Value of the variable
+ *
+ * Return: Zero on success, a kdb diagnostic on failure.
+ */
+static int kdb_setenv(const char *var, const char *val)
+{
+   int i;
+   char *ep;
+   size_t varlen, vallen;
+
+   varlen = strlen(var);
+   vallen = strlen(val);
+   ep = kdballocenv(varlen + vallen + 2);
+   if (ep == (char *)0)
+   return KDB_ENVBUFFULL;
+
+   sprintf(ep, "%s=%s", var, val);
+
+   ep[varlen+vallen+1] = '\0';
+
+   for (i = 0; i < __nenv; i++) {
+   if (__env[i]
+&& ((strncmp(__env[i], var, varlen) == 0)
+  && ((__env[i][varlen] == '\0')
+   || (__env[i][varlen] == '=')))) {
+   __env[i] = ep;
+   return 0;
+   }
+   }
+
+   /*
+* Wasn't existing variable.  Fit into slot.
+*/
+   for (i = 0; i < __nenv-1; i++) {
+   if (__env[i] == (char *)0) {
+   __env[i] = ep;
+   return 0;
+   }
+   }
+
+   return KDB_ENVFULL;
+}
+
+/*
+ * kdb_printenv() - Display the current environment variables.
+ */
+static void kdb_printenv(void)
+{
+   int i;
+
+   for (i = 0; i < __nenv; i++) {
+   if (__env[i])
+   kdb_printf("%s\n", __env[i]);
+   }
+}
+
+/*
  * kdbgetularg - This function will convert a numeric string into an
  * unsigned long value.
  * Parameters:
@@ -374,10 +433,6 @@ int kdbgetu64arg(const char *arg, u64 *value)
  */
 int kdb_set(int argc, const char **argv)
 {
-   int i;
-   char *ep;
-   size_t varlen, vallen;
-
/*
 * we can be invoked two ways:
 *   set var=valueargv[1]="var", argv[2]="value"
@@ -422,37 +477,7 @@ int kdb_set(int argc, const char **argv)
 * Tokenizer squashed the '=' sign.  argv[1] is variable
 * name, argv[2] = value.
 */
-   varlen = strlen(argv[1]);
-   vallen = strlen(argv[2]);
-   ep = kdballocenv(varlen + vallen + 2);
-   if (ep == (char *)0)
-   return KDB_ENVBUFFULL;
-
-   sprintf(ep, "%s=%s", argv[1], argv[2]);
-
-   ep[varlen+vallen+1] = '\0';
-
-   for (i = 0; i < __nenv; i++) {
-   if (__env[i]
-&& ((strncmp(__env[i], argv[1], varlen) == 0)
-  && ((__env[i][varlen] == '\0')
-   || (__env[i][varlen] == '=')))) {
-

DMA direct mapping fix for 5.4 and earlier stable branches

2021-02-08 Thread Sumit Garg
Hi Christoph, Greg,

Currently we are observing an incorrect address translation in the DMA
direct mapping methods on the 5.4 stable kernel while sharing a dmabuf
from one device to another, where both devices have their own coherent
DMA memory pools.

I was able to root-cause this issue: it is caused by an incorrect
virt-to-phys translation, via virt_to_page(), for addresses belonging
to vmalloc space. Looking at the mainline kernel, this patch [1]
changes the address translation from virt->to->phys to dma->to->phys,
which fixes the issue observed on the 5.4 stable kernel as well
(minimal fix [2]).

So I would like to seek your suggestion regarding a backport to stable
kernels (5.4 or earlier): should we backport the complete mainline
commit [1], or should we just apply the minimal fix [2]?

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34dc0ea6bc960f1f57b2148f01a3f4da23f87013
[2] minimal fix required for 5.4 stable kernel:

commit bb0b3ff6e54d78370b6b0c04426f0d9192f31795
Author: Sumit Garg 
Date:   Wed Feb 3 13:08:37 2021 +0530

dma-mapping: Fix common get_sgtable and mmap methods

Currently the common get_sgtable and mmap methods can only handle
normal kernel addresses, leading to incorrect handling of vmalloc
addresses, which are a common means for DMA coherent memory mapping.

So instead of cpu_addr, directly decode the physical address from
dma_addr and hence derive the corresponding page and pfn values. In
this way we can handle normal kernel addresses as well as vmalloc
addresses.

This fix is inspired by the following mainline commit:

34dc0ea6bc96 ("dma-direct: provide mmap and get_sgtable method overrides")

This fixes an issue observed during dmabuf sharing from one device to
another where both devices have their own coherent DMA memory pools.

    Signed-off-by: Sumit Garg 

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 8682a53..034bbae 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -127,7 +127,7 @@ int dma_common_get_sgtable(struct device *dev,
struct sg_table *sgt,
return -ENXIO;
page = pfn_to_page(pfn);
} else {
-   page = virt_to_page(cpu_addr);
+   page = pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)));
}

ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -214,7 +214,7 @@ int dma_common_mmap(struct device *dev, struct
vm_area_struct *vma,
if (!pfn_valid(pfn))
return -ENXIO;
} else {
-   pfn = page_to_pfn(virt_to_page(cpu_addr));
+   pfn = PHYS_PFN(dma_to_phys(dev, dma_addr));
}

return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,


Re: DMA direct mapping fix for 5.4 and earlier stable branches

2021-02-08 Thread Sumit Garg
Thanks Greg for your response.

On Tue, 9 Feb 2021 at 12:28, Greg Kroah-Hartman
 wrote:
>
> On Tue, Feb 09, 2021 at 11:39:25AM +0530, Sumit Garg wrote:
> > Hi Christoph, Greg,
> >
> > Currently we are observing an incorrect address translation
> > corresponding to DMA direct mapping methods on 5.4 stable kernel while
> > sharing dmabuf from one device to another where both devices have
> > their own coherent DMA memory pools.
>
> What devices have this problem?

The problem is seen with V4L2 device drivers which are currently under
development for the UniPhier PXs3 Reference Board from Socionext [1].
Following is a brief description of the test framework:

The issue is observed while trying to construct a GStreamer pipeline
leveraging the hardware video converter engine (VPE device) and the
hardware video encode/decode engine (CODEC device), where we use the
dmabuf framework for zero-copy.

Example GStreamer pipeline is:
gst-launch-1.0 -v -e videotestsrc \
> ! video/x-raw, width=480, height=270, format=NV15 \
> ! v4l2convert device=/dev/vpe0 capture-io-mode=dmabuf-import \
> ! video/x-raw, width=480, height=270, format=NV12 \
> ! v4l2h265enc device=/dev/codec0 output-io-mode=dmabuf \
> ! video/x-h265, format=byte-stream, width=480, height=270 \
> ! filesink location=out.hevc

Using GStreamer's V4L2 plugin,
- v4l2convert controls VPE driver,
- v4l2h265enc controls CODEC driver.

In the above pipeline, VPE driver imports CODEC driver's DMABUF for Zero-Copy.

[1] arch/arm64/boot/dts/socionext/uniphier-pxs3-ref.dts

> And why can't then just use 5.10 to
> solve this issue as that problem has always been present for them,
> right?

The drivers are currently under development and Socionext has chosen
the 5.4 stable kernel for their development. So I will let Obayashi-san
answer whether it's possible for them to migrate to 5.10 instead.

BTW, this problem belongs to the common code, so others may experience
this issue as well.
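
To illustrate that, here is a rough sketch of the path that ends up in
the common code (hypothetical driver code, not taken from the Socionext
drivers; export_coherent_buf() is an invented name):

#include <linux/dma-mapping.h>

/* Hypothetical dmabuf-exporter style path, for illustration only */
static int export_coherent_buf(struct device *dev, size_t size,
			       struct sg_table *sgt)
{
	dma_addr_t dma_addr;
	void *cpu_addr;

	/* cpu_addr comes from the per-device coherent pool, i.e. vmalloc space */
	cpu_addr = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/*
	 * For a dma-coherent device this reaches dma_common_get_sgtable(),
	 * which on 5.4 does virt_to_page(cpu_addr) and hence derives a
	 * bogus page for a vmalloc address. (Freeing omitted for brevity.)
	 */
	return dma_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
}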

>
> > I am able to root cause this issue which is caused by incorrect virt
> > to phys translation for addresses belonging to vmalloc space using
> > virt_to_page(). But while looking at the mainline kernel, this patch
> > [1] changes address translation from virt->to->phys to dma->to->phys
> > which fixes the issue observed on 5.4 stable kernel as well (minimal
> > fix [2]).
> >
> > So I would like to seek your suggestion for backport to stable kernels
> > (5.4 or earlier) as to whether we should backport the complete
> > mainline commit [1] or we should just apply the minimal fix [2]?
>
> Whenever you try to create a "minimal" fix, 90% of the time it is wrong
> and does not work and I end up having to deal with the mess.

I agree with your concerns about applying a non-mainline commit onto a
stable kernel.

>  What
> prevents you from doing the real thing here?  Are the patches to big?
>

IMHO, yes, the mainline patch is big enough that it touches multiple
architectures. But if that's the preferred way, then I can backport the
mainline patch instead.

> And again, why not just use 5.10 for this hardware?  What hardware is
> it?
>

Please see my response above.

-Sumit

> thanks,
>
> greg k-h


Re: [PATCH v3] kdb: Simplify kdb commands registration

2021-02-09 Thread Sumit Garg
On Mon, 8 Feb 2021 at 19:18, Daniel Thompson  wrote:
>
> On Mon, Feb 08, 2021 at 03:18:19PM +0530, Sumit Garg wrote:
> > On Mon, 8 Feb 2021 at 15:13, Daniel Thompson  
> > wrote:
> > >
> > > On Fri, Jan 29, 2021 at 03:47:07PM +0530, Sumit Garg wrote:
> > > > @@ -1011,25 +1005,17 @@ int kdb_parse(const char *cmdstr)
> > > >   ++argv[0];
> > > >   }
> > > >
> > > > - for_each_kdbcmd(tp, i) {
> > > > - if (tp->cmd_name) {
> > > > - /*
> > > > -  * If this command is allowed to be abbreviated,
> > > > -  * check to see if this is it.
> > > > -  */
> > > > -
> > > > - if (tp->cmd_minlen
> > > > -  && (strlen(argv[0]) <= tp->cmd_minlen)) {
> > > > - if (strncmp(argv[0],
> > > > - tp->cmd_name,
> > > > - tp->cmd_minlen) == 0) {
> > > > - break;
> > > > - }
> > > > - }
> > > > -
> > > > - if (strcmp(argv[0], tp->cmd_name) == 0)
> > > > + list_for_each_entry(tp, &kdb_cmds_head, list_node) {
> > > > + /*
> > > > +  * If this command is allowed to be abbreviated,
> > > > +  * check to see if this is it.
> > > > +  */
> > > > + if (tp->cmd_minlen && (strlen(argv[0]) <= tp->cmd_minlen) 
> > > > &&
> > > > + (strncmp(argv[0], tp->cmd_name, tp->cmd_minlen) == 0))
> > > >   break;
> > >
> > > Looks like you forgot to unindent this line.
> > >
> > > I will fix it up but... checkpatch would have found this.
> > >
> >
> > Ah, I missed to run checkpatch on v3. Thanks for fixing this up.
>
> Unfortunately, it's not just checkpatch. This patch also causes a
> large number of test suite regressions. In particular it looks like
> kgdbwait does not work with this patch applied.
>
> The problem occurs on multiple architectures all with
> close-to-defconfig kernels. However to share one specific
> failure, x86_64_defconfig plus the following is not bootable:
>
> ../scripts/config --enable DEBUG_INFO --enable DEBUG_FS \
>   --enable KALLSYMS_ALL --enable MAGIC_SYSRQ --enable KGDB \
>   --enable KGDB_TESTS --enable KGDB_KDB --enable KDB_KEYBOARD \
>   --enable LKDTM
>
> Try:
>
> qemu-system-x86_64 \
>   -enable-kvm -m 1G -smp 2 -nographic
>   -kernel arch/x86/boot/bzImage \
>   -append "console=ttyS0,115200 kgdboc=ttyS0 kgdbwait"
>

Thanks Daniel for this report. I am able to reproduce it with
"kgdbwait" and will investigate it.

-Sumit

>
> Daniel.


Re: DMA direct mapping fix for 5.4 and earlier stable branches

2021-02-09 Thread Sumit Garg
Hi Christoph,

On Tue, 9 Feb 2021 at 15:06, Christoph Hellwig  wrote:
>
> On Tue, Feb 09, 2021 at 10:23:12AM +0100, Greg KH wrote:
> > >   From the view point of ZeroCopy using DMABUF, is 5.4 not
> > > mature enough, and is 5.10 enough mature ?
> > >   This is the most important point for judging migration.
> >
> > How do you judge "mature"?
> >
> > And again, if a feature isn't present in a specific kernel version, why
> > would you think that it would be a viable solution for you to use?
>
> I'm pretty sure dma_get_sgtable has been around much longer and was
> supposed to work, but only really did work properly for arm32, and
> for platforms with coherent DMA.  I bet he is using non-coherent arm64,

It's an arm64 platform using coherent DMA where device coherent DMA
memory pool is defined in the DT as follows:

reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;


encbuffer: encbuffer@0xb000 {
compatible = "shared-dma-pool";
reg = <0 0xb000 0 0x0800>; // this area used with dma-coherent
no-map;
};

};

Device is dma-coherent as per following DT property:

codec {
compatible = "socionext,uniphier-pxs3-codec";

memory-region = <&encbuffer>;
dma-coherent;

};

And the call chain to create the device coherent DMA pool is as
follows:

rmem_dma_device_init();
  dma_init_coherent_memory();
    memremap();
      ioremap_wc();

which simply maps the coherent DMA memory into vmalloc space on arm64.
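
To make the mismatch concrete, here is a small illustrative helper
(hypothetical code, not from the drivers or from the patch;
coherent_buf_page() is an invented name) showing how the backing page
could be resolved without assuming cpu_addr is a linear map address:

#include <linux/dma-direct.h>
#include <linux/mm.h>

/* Hypothetical helper, for illustration only */
static struct page *coherent_buf_page(struct device *dev, void *cpu_addr,
				      dma_addr_t dma_addr)
{
	/* Buffers mapped via memremap()/ioremap_wc() live in vmalloc space */
	if (is_vmalloc_addr(cpu_addr))
		return vmalloc_to_page(cpu_addr); /* what DMA debug does, see [1] */

	/*
	 * For dma-direct devices, deriving the page from dma_addr (as the
	 * minimal fix does) works for both kinds of addresses, so no
	 * cpu_addr check is needed there.
	 */
	return pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_addr)));
}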

The thing I am unclear about is why this is called a new feature
rather than a bug in dma_common_get_sgtable(), which fails to handle
vmalloc addresses, while at the same time the DMA debug APIs
specifically handle vmalloc addresses [1].

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/kernel/dma/debug.c?h=linux-5.4.y#n1462

-Sumit

> and it would be broken for other drivers there as well if people did
> test them, which they apparently so far did not.


[PATCH] kdb: Get rid of custom debug heap allocator

2021-02-25 Thread Sumit Garg
Currently the only user of the debug heap is kdbnearsym(), which can
be modified to instead ask the caller to supply a buffer for the symbol
name. So do that and modify kdbnearsym() callers to pass a symbol name
buffer allocated on the stack, and hence remove the custom debug heap
allocator.

This change has been tested using kgdbtest on arm64 which doesn't show
any regressions.

Suggested-by: Daniel Thompson 
Signed-off-by: Sumit Garg 
---
 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_main.c |   6 +-
 kernel/debug/kdb/kdb_private.h  |   7 +-
 kernel/debug/kdb/kdb_support.c  | 294 +---
 4 files changed, 11 insertions(+), 297 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 */
kdb_common_deinit_state();
KDB_STATE_CLEAR(PAGER);
-   kdbnearsym_cleanup();
if (error == KDB_CMD_KGDB) {
if (KDB_STATE(DOING_KGDB))
KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 9d69169582c6..ca525a3e0032 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
char symbol = '\0';
char *cp;
kdb_symtab_t symtab;
+   char namebuf[KSYM_NAME_LEN];
 
/*
 * If the enable flags prohibit both arbitrary memory access
@@ -585,7 +586,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
}
 
if (!found)
-   found = kdbnearsym(addr, &symtab);
+   found = kdbnearsym(addr, &symtab, namebuf);
 
(*nextarg)++;
 
@@ -1503,6 +1504,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long 
addr,
int i;
int j;
unsigned long word;
+   char namebuf[KSYM_NAME_LEN];
 
memset(cbuf, '\0', sizeof(cbuf));
if (phys)
@@ -1518,7 +1520,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long 
addr,
break;
kdb_printf(fmtstr, word);
if (symbolic)
-   kdbnearsym(word, &symtab);
+   kdbnearsym(word, &symtab, namebuf);
else
memset(&symtab, 0, sizeof(symtab));
if (symtab.sym_name) {
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..1707eeebc59a 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -108,8 +108,7 @@ extern char *kdbgetenv(const char *);
 extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
-extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
+extern int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *, unsigned 
int);
 
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..319b36ceedf9 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];  /* arbitrary size */
-
 /*
  * kdbnearsym -Return the name of the symbol with the nearest address
  * less than 'addr'.
@@ -79,13 +77,11 @@ static char *kdb_name_table[100];   /* arbitrary size */
  * hold active strings, no kdb caller of kdbnearsym makes more
  * than ~20 later calls before using a saved value.
  */
-int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
 {
int ret = 0;
unsigned long symbolsize = 0;
unsigned long offset = 0;
-#define knt1_size 128  /* must be >= kallsyms table size */
-   char *knt1 = NULL;
 
if (KDB_DEBUG(AR))
kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, 
symtab);
@@ -93,14 +89,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 
if (addr < 4096)
goto out;
-   knt1 = debug_kmalloc(knt1_size, GFP_ATOMIC);

Re: [PATCH] kgdb: Fix to kill breakpoints on initmem after boot

2021-02-25 Thread Sumit Garg
On Wed, 24 Feb 2021 at 23:39, Doug Anderson  wrote:
>
> Hi,
>
> On Wed, Feb 24, 2021 at 12:17 AM Sumit Garg  wrote:
> >
> > Currently breakpoints in kernel .init.text section are not handled
> > correctly while allowing to remove them even after corresponding pages
> > have been freed.
> >
> > Fix it via killing .init.text section breakpoints just prior to initmem
> > pages being freed.
>
> It might be worth it to mention that HW breakpoints aren't handled by
> this patch but it's probably not such a big deal.
>
>
> > Suggested-by: Doug Anderson 
> > Signed-off-by: Sumit Garg 
> > ---
> >  include/linux/kgdb.h  |  2 ++
> >  init/main.c   |  1 +
> >  kernel/debug/debug_core.c | 11 +++
> >  3 files changed, 14 insertions(+)
> >
> > diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
> > index 57b8885708e5..3aa503ef06fc 100644
> > --- a/include/linux/kgdb.h
> > +++ b/include/linux/kgdb.h
> > @@ -361,9 +361,11 @@ extern atomic_tkgdb_active;
> >  extern bool dbg_is_early;
> >  extern void __init dbg_late_init(void);
> >  extern void kgdb_panic(const char *msg);
> > +extern void kgdb_free_init_mem(void);
> >  #else /* ! CONFIG_KGDB */
> >  #define in_dbg_master() (0)
> >  #define dbg_late_init()
> >  static inline void kgdb_panic(const char *msg) {}
> > +static inline void kgdb_free_init_mem(void) { }
> >  #endif /* ! CONFIG_KGDB */
> >  #endif /* _KGDB_H_ */
> > diff --git a/init/main.c b/init/main.c
> > index c68d784376ca..a446ca3d334e 100644
> > --- a/init/main.c
> > +++ b/init/main.c
> > @@ -1417,6 +1417,7 @@ static int __ref kernel_init(void *unused)
> > async_synchronize_full();
> > kprobe_free_init_mem();
> > ftrace_free_init_mem();
> > +   kgdb_free_init_mem();
> > free_initmem();
> > mark_readonly();
> >
> > diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
> > index 229dd119f430..319381e95d1d 100644
> > --- a/kernel/debug/debug_core.c
> > +++ b/kernel/debug/debug_core.c
> > @@ -465,6 +465,17 @@ int dbg_remove_all_break(void)
> > return 0;
> >  }
> >
> > +void kgdb_free_init_mem(void)
> > +{
> > +   int i;
> > +
> > +   /* Clear init memory breakpoints. */
> > +   for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
> > +   if (init_section_contains((void *)kgdb_break[i].bpt_addr, 
> > 0))
>
> A nit, but instead of 0 should this be passing "BREAK_INSTR_SIZE" ?
>
> Also: even if memory is about to get freed it still seems like it'd be
> wise to call this:
>
>   kgdb_arch_remove_breakpoint(&kgdb_break[i]);
>
> It looks like it shouldn't matter today but just in case an
> architecture decides to do something fancy in the future it might not
> hurt to tell it that the breakpoint is going away.
>
>
> Everything here is pretty nitty, though.  This looks good to me now.
>
> Reviewed-by: Douglas Anderson 

Thanks Doug for your review.

-Sumit


Re: [PATCH] kgdb: Fix to kill breakpoints on initmem after boot

2021-02-25 Thread Sumit Garg
On Wed, 24 Feb 2021 at 23:50, Andrew Morton  wrote:
>
> On Wed, 24 Feb 2021 10:09:25 -0800 Doug Anderson  
> wrote:
>
> > On Wed, Feb 24, 2021 at 12:17 AM Sumit Garg  wrote:
> > >
> > > Currently breakpoints in kernel .init.text section are not handled
> > > correctly while allowing to remove them even after corresponding pages
> > > have been freed.
> > >
> > > Fix it via killing .init.text section breakpoints just prior to initmem
> > > pages being freed.
> >
> > It might be worth it to mention that HW breakpoints aren't handled by
> > this patch but it's probably not such a big deal.
>
> I added that to the changelog, thanks.
>

Thanks Andrew for picking this up.

-Sumit

> I'll take your response to be the coveted acked-by :)


Re: [PATCH] kgdb: Fix to kill breakpoints on initmem after boot

2021-02-25 Thread Sumit Garg
+ stable ML

On Thu, 25 Feb 2021 at 21:26, Daniel Thompson
 wrote:
>
> On Wed, Feb 24, 2021 at 01:46:52PM +0530, Sumit Garg wrote:
> > Currently breakpoints in kernel .init.text section are not handled
> > correctly while allowing to remove them even after corresponding pages
> > have been freed.
> >
> > Fix it via killing .init.text section breakpoints just prior to initmem
> > pages being freed.
> >
> > Suggested-by: Doug Anderson 
> > Signed-off-by: Sumit Garg 
>
> I saw Andrew has picked this one up. That's ok for me:
> Acked-by: Daniel Thompson 
>
> I already enriched kgdbtest to cover this (and they pass) so I guess
> this is also:
> Tested-by: Daniel Thompson 
>

Thanks Daniel.

> BTW this is not Cc:ed to stable and I do wonder if it crosses the
> threshold to be considered a fix rather than a feature. Normally I
> consider adding safety rails for kgdb to be a new feature but, in this
> case, the problem would easily ensnare an inexperienced developer who is
> doing nothing more than debugging their own driver (assuming they
> correctly marked their probe function as .init) so I think this weighs
> in favour of being a fix.
>

Makes sense, Cc:ed stable.

-Sumit

>
> Daniel.
>
>
> > ---
> >  include/linux/kgdb.h  |  2 ++
> >  init/main.c   |  1 +
> >  kernel/debug/debug_core.c | 11 +++
> >  3 files changed, 14 insertions(+)
> >
> > diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
> > index 57b8885708e5..3aa503ef06fc 100644
> > --- a/include/linux/kgdb.h
> > +++ b/include/linux/kgdb.h
> > @@ -361,9 +361,11 @@ extern atomic_t  kgdb_active;
> >  extern bool dbg_is_early;
> >  extern void __init dbg_late_init(void);
> >  extern void kgdb_panic(const char *msg);
> > +extern void kgdb_free_init_mem(void);
> >  #else /* ! CONFIG_KGDB */
> >  #define in_dbg_master() (0)
> >  #define dbg_late_init()
> >  static inline void kgdb_panic(const char *msg) {}
> > +static inline void kgdb_free_init_mem(void) { }
> >  #endif /* ! CONFIG_KGDB */
> >  #endif /* _KGDB_H_ */
> > diff --git a/init/main.c b/init/main.c
> > index c68d784376ca..a446ca3d334e 100644
> > --- a/init/main.c
> > +++ b/init/main.c
> > @@ -1417,6 +1417,7 @@ static int __ref kernel_init(void *unused)
> >   async_synchronize_full();
> >   kprobe_free_init_mem();
> >   ftrace_free_init_mem();
> > + kgdb_free_init_mem();
> >   free_initmem();
> >   mark_readonly();
> >
> > diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
> > index 229dd119f430..319381e95d1d 100644
> > --- a/kernel/debug/debug_core.c
> > +++ b/kernel/debug/debug_core.c
> > @@ -465,6 +465,17 @@ int dbg_remove_all_break(void)
> >   return 0;
> >  }
> >
> > +void kgdb_free_init_mem(void)
> > +{
> > + int i;
> > +
> > + /* Clear init memory breakpoints. */
> > + for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) {
> > + if (init_section_contains((void *)kgdb_break[i].bpt_addr, 0))
> > + kgdb_break[i].state = BP_UNDEFINED;
> > + }
> > +}
> > +
> >  #ifdef CONFIG_KGDB_KDB
> >  void kdb_dump_stack_on_cpu(int cpu)
> >  {
> > --
> > 2.25.1


Re: [PATCH] kdb: Get rid of custom debug heap allocator

2021-02-25 Thread Sumit Garg
On Thu, 25 Feb 2021 at 17:49, Daniel Thompson
 wrote:
>
> On Thu, Feb 25, 2021 at 04:52:58PM +0530, Sumit Garg wrote:
> > Currently the only user for debug heap is kdbnearsym() which can be
> > modified to rather ask the caller to supply a buffer for symbol name.
> > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > allocated from stack and hence remove custom debug heap allocator.
>
> Is it really a good idea to increase stack usage this much? I thought
> several architectures will take the debug exception on existing stacks
> (and that these can nest with other exceptions).
>
> The reason I'm concerned is that AFAICT the *purpose* of the current
> heap is to minimise stack usage... and that this has the effect of
> improving debugger robustness when we take exceptions on small shared
> stacks.
>
> The reason I called the heap redundant is that currently it also allows
> us to have nested calls to kdbnearsym() whilst not consuming stack. In
> this case, when I say nested I mean new calls to kdbnearsym() before the
> previous caller has consumed the output rather than truely recursive
> calls.
>
> This is why I think the heap is pointless. In "normal" usage I don't
> think there will never be a nested call to kdbnearsym() so I think a
> single static buffer will suffice.
>
> Technically speaking there is one way that kdbnearsym() can nest but I
> think it is OK for that to be considered out-of-scope.
>
> To explain...
>
> It can nest is if we recursively enter the debugger! Recursive entry
> should never happen, is pretty much untestable and, even if we tested
> it, it is not a bug for an architeture to choose not to support it.
> Nevertheless kgdb/kdb does include logic to handle this if an
> architecture does make it as far are executing the trap. Note that
> even if the architecture does somehow land in the debug trap there's
> a strong chance the system is is too broken to resume (since we just
> took an impossible trap). Therefore kdb will inhibit resume unless the
> operator admits what they are doing won't work before trying to do it.
>
> Therefore I think it is ok for namebuf to be statically allocated and
> the only thing we need do for stability is ensure that kdbnearsym()
> guarantees that namebuf[sizeof(namebuf)-1] == '\0' regardless of the
> symbol length. Thus if by some miracle the system can resume after the
> user has ignored the warning then kdb can't take a bad memory access
> when it tries to print an overwritten symbol name. They see a few
> garbage characters... but since they just told us to do something
> crazy they should be expecting that.
>

Thanks for the detailed explanation. I see the reasoning to not use
stack and it does sound reasonable to use statically allocated namebuf
with a stability guarantee.

>
> Daniel.
>
>
> PS The code to guarantee that if we read past the end of the string
>we will still see a '\'0' before making an invalid memory access
>should be well commented though... because its pretty nasty.
>

Sure, I will add a proper comment.

-Sumit

>
> >
> > This change has been tested using kgdbtest on arm64 which doesn't show
> > any regressions.
> >
> > Suggested-by: Daniel Thompson 
> > Signed-off-by: Sumit Garg 
> > ---
> >  kernel/debug/kdb/kdb_debugger.c |   1 -
> >  kernel/debug/kdb/kdb_main.c |   6 +-
> >  kernel/debug/kdb/kdb_private.h  |   7 +-
> >  kernel/debug/kdb/kdb_support.c  | 294 +---
> >  4 files changed, 11 insertions(+), 297 deletions(-)
> >
> > diff --git a/kernel/debug/kdb/kdb_debugger.c 
> > b/kernel/debug/kdb/kdb_debugger.c
> > index 0220afda3200..e91fc3e4edd5 100644
> > --- a/kernel/debug/kdb/kdb_debugger.c
> > +++ b/kernel/debug/kdb/kdb_debugger.c
> > @@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
> >*/
> >   kdb_common_deinit_state();
> >   KDB_STATE_CLEAR(PAGER);
> > - kdbnearsym_cleanup();
> >   if (error == KDB_CMD_KGDB) {
> >   if (KDB_STATE(DOING_KGDB))
> >   KDB_STATE_CLEAR(DOING_KGDB);
> > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > index 9d69169582c6..ca525a3e0032 100644
> > --- a/kernel/debug/kdb/kdb_main.c
> > +++ b/kernel/debug/kdb/kdb_main.c
> > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int 
> > *nextarg,
> >   char symbol = '\0';
> >   char *cp;
> >   kdb_symtab_t symtab;
> > + char namebuf[KSYM_NAME_LEN];
> >
> >   /*
> >  

Re: [PATCH] kdb: Get rid of custom debug heap allocator

2021-02-26 Thread Sumit Garg
On Fri, 26 Feb 2021 at 12:54, Sumit Garg  wrote:
>
> On Thu, 25 Feb 2021 at 17:49, Daniel Thompson
>  wrote:
> >
> > On Thu, Feb 25, 2021 at 04:52:58PM +0530, Sumit Garg wrote:
> > > Currently the only user for debug heap is kdbnearsym() which can be
> > > modified to rather ask the caller to supply a buffer for symbol name.
> > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > allocated from stack and hence remove custom debug heap allocator.
> >
> > Is it really a good idea to increase stack usage this much? I thought
> > several architectures will take the debug exception on existing stacks
> > (and that these can nest with other exceptions).
> >
> > The reason I'm concerned is that AFAICT the *purpose* of the current
> > heap is to minimise stack usage... and that this has the effect of
> > improving debugger robustness when we take exceptions on small shared
> > stacks.
> >
> > The reason I called the heap redundant is that currently it also allows
> > us to have nested calls to kdbnearsym() whilst not consuming stack. In
> > this case, when I say nested I mean new calls to kdbnearsym() before the
> > previous caller has consumed the output rather than truely recursive
> > calls.
> >
> > This is why I think the heap is pointless. In "normal" usage I don't
> > think there will never be a nested call to kdbnearsym() so I think a
> > single static buffer will suffice.
> >
> > Technically speaking there is one way that kdbnearsym() can nest but I
> > think it is OK for that to be considered out-of-scope.
> >
> > To explain...
> >
> > It can nest is if we recursively enter the debugger! Recursive entry
> > should never happen, is pretty much untestable and, even if we tested
> > it, it is not a bug for an architeture to choose not to support it.
> > Nevertheless kgdb/kdb does include logic to handle this if an
> > architecture does make it as far are executing the trap. Note that
> > even if the architecture does somehow land in the debug trap there's
> > a strong chance the system is is too broken to resume (since we just
> > took an impossible trap). Therefore kdb will inhibit resume unless the
> > operator admits what they are doing won't work before trying to do it.
> >
> > Therefore I think it is ok for namebuf to be statically allocated and
> > the only thing we need do for stability is ensure that kdbnearsym()
> > guarantees that namebuf[sizeof(namebuf)-1] == '\0' regardless of the
> > symbol length. Thus if by some miracle the system can resume after the
> > user has ignored the warning then kdb can't take a bad memory access
> > when it tries to print an overwritten symbol name. They see a few
> > garbage characters... but since they just told us to do something
> > crazy they should be expecting that.
> >
>
> Thanks for the detailed explanation. I see the reasoning to not use
> stack and it does sound reasonable to use statically allocated namebuf
> with a stability guarantee.
>

It looks like kallsyms_lookup() already takes care of
namebuf[KSYM_NAME_LEN - 1] = 0; [1]. So I think we don't require it in
kdbnearsym().

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/kallsyms.c#n294

-Sumit

> >
> > Daniel.
> >
> >
> > PS The code to guarantee that if we read past the end of the string
> >we will still see a '\'0' before making an invalid memory access
> >should be well commented though... because its pretty nasty.
> >
>
> Sure, I will add a proper comment.
>
> -Sumit
>
> >
> > >
> > > This change has been tested using kgdbtest on arm64 which doesn't show
> > > any regressions.
> > >
> > > Suggested-by: Daniel Thompson 
> > > Signed-off-by: Sumit Garg 
> > > ---
> > >  kernel/debug/kdb/kdb_debugger.c |   1 -
> > >  kernel/debug/kdb/kdb_main.c |   6 +-
> > >  kernel/debug/kdb/kdb_private.h  |   7 +-
> > >  kernel/debug/kdb/kdb_support.c  | 294 +---
> > >  4 files changed, 11 insertions(+), 297 deletions(-)
> > >
> > > diff --git a/kernel/debug/kdb/kdb_debugger.c 
> > > b/kernel/debug/kdb/kdb_debugger.c
> > > index 0220afda3200..e91fc3e4edd5 100644
> > > --- a/kernel/debug/kdb/kdb_debugger.c
> > > +++ b/kernel/debug/kdb/kdb_debugger.c
> > > @@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
> > >*/
> > >   

Re: [PATCH] kgdb: Fix to kill breakpoints on initmem after boot

2021-02-26 Thread Sumit Garg
On Fri, 26 Feb 2021 at 13:01, Greg KH  wrote:
>
> On Fri, Feb 26, 2021 at 12:32:07PM +0530, Sumit Garg wrote:
> > + stable ML
> >
> > On Thu, 25 Feb 2021 at 21:26, Daniel Thompson
> >  wrote:
> > >
> > > On Wed, Feb 24, 2021 at 01:46:52PM +0530, Sumit Garg wrote:
> > > > Currently breakpoints in kernel .init.text section are not handled
> > > > correctly while allowing to remove them even after corresponding pages
> > > > have been freed.
> > > >
> > > > Fix it via killing .init.text section breakpoints just prior to initmem
> > > > pages being freed.
> > > >
> > > > Suggested-by: Doug Anderson 
> > > > Signed-off-by: Sumit Garg 
> > >
> > > I saw Andrew has picked this one up. That's ok for me:
> > > Acked-by: Daniel Thompson 
> > >
> > > I already enriched kgdbtest to cover this (and they pass) so I guess
> > > this is also:
> > > Tested-by: Daniel Thompson 
> > >
> >
> > Thanks Daniel.
> >
> > > BTW this is not Cc:ed to stable and I do wonder if it crosses the
> > > threshold to be considered a fix rather than a feature. Normally I
> > > consider adding safety rails for kgdb to be a new feature but, in this
> > > case, the problem would easily ensnare an inexperienced developer who is
> > > doing nothing more than debugging their own driver (assuming they
> > > correctly marked their probe function as .init) so I think this weighs
> > > in favour of being a fix.
> > >
> >
> > Makes sense, Cc:ed stable.
>
>
> 
>
> This is not the correct way to submit patches for inclusion in the
> stable kernel tree.  Please read:
> https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
>
> 

Thanks for the pointer. Let me wait for this patch to land in Linus’
tree and then I will drop a mail to sta...@vger.kernel.org.

-Sumit


[PATCH v2] kdb: Get rid of custom debug heap allocator

2021-02-26 Thread Sumit Garg
Currently the only user of the debug heap is kdbnearsym(), which can
be modified to instead ask the caller to supply a buffer for the symbol
name. So do that and modify kdbnearsym() callers to pass a statically
allocated symbol name buffer, and hence remove the custom debug heap
allocator.

This change has been tested using kgdbtest on arm64 which doesn't show
any regressions.

Suggested-by: Daniel Thompson 
Signed-off-by: Sumit Garg 
---

Changes in v2:
- Allocate namebuf statically instead of stack to maintain debugger
  robustness.

 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_main.c |   6 +-
 kernel/debug/kdb/kdb_private.h  |   7 +-
 kernel/debug/kdb/kdb_support.c  | 294 +---
 4 files changed, 11 insertions(+), 297 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 */
kdb_common_deinit_state();
KDB_STATE_CLEAR(PAGER);
-   kdbnearsym_cleanup();
if (error == KDB_CMD_KGDB) {
if (KDB_STATE(DOING_KGDB))
KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 9d69169582c6..6efe9ec53906 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
char symbol = '\0';
char *cp;
kdb_symtab_t symtab;
+   static char namebuf[KSYM_NAME_LEN];
 
/*
 * If the enable flags prohibit both arbitrary memory access
@@ -585,7 +586,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
}
 
if (!found)
-   found = kdbnearsym(addr, &symtab);
+   found = kdbnearsym(addr, &symtab, namebuf);
 
(*nextarg)++;
 
@@ -1503,6 +1504,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long 
addr,
int i;
int j;
unsigned long word;
+   static char namebuf[KSYM_NAME_LEN];
 
memset(cbuf, '\0', sizeof(cbuf));
if (phys)
@@ -1518,7 +1520,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long 
addr,
break;
kdb_printf(fmtstr, word);
if (symbolic)
-   kdbnearsym(word, &symtab);
+   kdbnearsym(word, &symtab, namebuf);
else
memset(&symtab, 0, sizeof(symtab));
if (symtab.sym_name) {
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..1707eeebc59a 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -108,8 +108,7 @@ extern char *kdbgetenv(const char *);
 extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
-extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
+extern int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *, unsigned 
int);
 
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..9b907a84f2db 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];  /* arbitrary size */
-
 /*
  * kdbnearsym -Return the name of the symbol with the nearest address
  * less than 'addr'.
@@ -79,13 +77,11 @@ static char *kdb_name_table[100];   /* arbitrary size */
  * hold active strings, no kdb caller of kdbnearsym makes more
  * than ~20 later calls before using a saved value.
  */
-int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
 {
int ret = 0;
unsigned long symbolsize = 0;
unsigned long offset = 0;
-#define knt1_size 128  /* must be >= kallsyms table size */
-   char *knt1 = NULL;
 
if (KDB_DEBUG(AR))
kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, 
symtab);
@@ -93,14 +89,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symta

Re: [PATCH v2] kdb: Get rid of custom debug heap allocator

2021-02-26 Thread Sumit Garg
On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
 wrote:
>
> On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > Currently the only user for debug heap is kdbnearsym() which can be
> > modified to rather ask the caller to supply a buffer for symbol name.
> > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > allocated statically and hence remove custom debug heap allocator.
>
> Why make the callers do this?
>
> The LRU buffers were managed inside kdbnearsym() why does switching to
> an approach with a single buffer require us to push that buffer out to
> the callers?
>

Earlier the LRU buffers maintained namebuf uniqueness per caller (up
to 100 callers), but if we switch to a single entry inside kdbnearsym()
then all callers need to share a common buffer. Since symtab->sym_name
points into that buffer, a second lookup would overwrite the name
returned by the first, leading to incorrect results from the following
simple sequence:

kdbnearsym(word, &symtab1);
kdbnearsym(word, &symtab2);
kdb_symbol_print(word, &symtab1, 0);
kdb_symbol_print(word, &symtab2, 0);

But if we change to a unique static namebuf per caller then the
following sequence will work:

kdbnearsym(word, &symtab1, namebuf1);
kdbnearsym(word, &symtab2, namebuf2);
kdb_symbol_print(word, &symtab1, 0);
kdb_symbol_print(word, &symtab2, 0);

>
> > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > index 9d69169582c6..6efe9ec53906 100644
> > --- a/kernel/debug/kdb/kdb_main.c
> > +++ b/kernel/debug/kdb/kdb_main.c
> > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int 
> > *nextarg,
>
> The documentation comment for this function has not been updated to
> describe the new contract on callers of this function (e.g. if they
> consume the symbol name they must do so before calling kdbgetaddrarg()
> (and maybe kdbnearsym() again).
>

I am not sure if I follow you here. If we have a unique static buffer
per caller then why do we need this new contract?

>
> >   char symbol = '\0';
> >   char *cp;
> >   kdb_symtab_t symtab;
> > + static char namebuf[KSYM_NAME_LEN];
> >
> >   /*
> >* If the enable flags prohibit both arbitrary memory access
> > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > index b59aad1f0b55..9b907a84f2db 100644
> > --- a/kernel/debug/kdb/kdb_support.c
> > +++ b/kernel/debug/kdb/kdb_support.c
> > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t 
> > *symtab)
> >  }
> >  EXPORT_SYMBOL(kdbgetsymval);
> >
> > -static char *kdb_name_table[100];/* arbitrary size */
> > -
> >  /*
> >   * kdbnearsym -  Return the name of the symbol with the nearest address
> >   *   less than 'addr'.
>
> Again the documentation comment has not been updated and, in this case,
> is now misleading.

Okay, I will fix it.

>
> If we move the static buffer here then the remarks section on this
> function is a really good place to describe what the callers must do to
> manage the static buffer safely as well as a convenient place to mention
> that we tolerate the reuse of the static buffer if kdb is re-entered
> becase a) kdb is broken if that happens and b) we are crash resilient
> if if does.
>
>
> > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> >   *   hold active strings, no kdb caller of kdbnearsym makes more
> >   *   than ~20 later calls before using a saved value.
> >   */
> > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
>
> As above, I don't understand why we need to add namebuf here. I think
> the prototype can remain the same.
>
> Think of it simply: we have reduced the cache from having 100 entries
> to having just 1 ;-) .

Please see my response above.

-Sumit

>
>
> Daniel.


Re: [PATCH v4 4/4] kdb: Switch to use safer dbg_io_ops over console APIs

2020-05-31 Thread Sumit Garg
On Sun, 31 May 2020 at 10:58, kbuild test robot  wrote:
>
> Hi Sumit,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on tty/tty-testing]
> [also build test ERROR on usb/usb-testing v5.7-rc7 next-20200529]
> [cannot apply to kgdb/kgdb-next]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see 
> https://stackoverflow.com/a/37406982]
>
> url:    
> https://github.com/0day-ci/linux/commits/Sumit-Garg/kdb-Improve-console-handling/20200531-075431
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git 
> tty-testing
> config: x86_64-allyesconfig (attached as .config)
> compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 
> 2388a096e7865c043e83ece4e26654bd3d1a20d5)
> reproduce (this is a W=1 build):
> wget 
> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
> ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # install x86_64 cross compiling tool for clang build
> # apt-get install binutils-x86-64-linux-gnu
> # save the attached .config to linux build tree
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kbuild test robot 
>
> All errors (new ones prefixed by >>, old ones prefixed by <<):
>
> >> drivers/usb/early/ehci-dbgp.c:1062:24: error: assigning to 'struct console 
> >> *' from incompatible type 'struct console'; take the address with &
> kgdbdbgp_io_ops.cons = early_dbgp_console;
> ^ ~~
> &
> 1 error generated.
>

Ah, my bad. Will fix it up in the next version.

-Sumit

> vim +1062 drivers/usb/early/ehci-dbgp.c
>
>   1046
>   1047  static int __init kgdbdbgp_parse_config(char *str)
>   1048  {
>   1049  char *ptr;
>   1050
>   1051  if (!ehci_debug) {
>   1052  if (early_dbgp_init(str))
>   1053  return -1;
>   1054  }
>   1055  ptr = strchr(str, ',');
>   1056  if (ptr) {
>   1057  ptr++;
>   1058  kgdbdbgp_wait_time = simple_strtoul(ptr, &ptr, 10);
>   1059  }
>   1060  kgdb_register_io_module(&kgdbdbgp_io_ops);
>   1061  if (early_dbgp_console.index != -1)
> > 1062  kgdbdbgp_io_ops.cons = early_dbgp_console;
>   1063
>   1064  return 0;
>   1065  }
>   1066  early_param("kgdbdbgp", kgdbdbgp_parse_config);
>   1067
>
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


Re: [Tee-dev] [PATCH v2] drivers: optee: allow op-tee to access devices on the i2c bus

2020-05-31 Thread Sumit Garg
Hi Jorge,

On Mon, 1 Jun 2020 at 04:41, Jorge Ramirez-Ortiz  wrote:
>
> Some secure elements like NXP's SE050 sit on I2C buses. For OP-TEE to
> control this type of cryptographic device it needs coordinated access
> to the bus, so collisions and RUNTIME_PM don't get in the way.
>
> This trampoline driver allows OP-TEE to access them.
>

This sounds like an interesting use-case, but I would like to
understand how secure this communication interface with the secure
element is. For comparison, in the case of RPMB, the secure world data
is encrypted before it flows via tee-supplicant to the RPMB device.

-Sumit

> Signed-off-by: Jorge Ramirez-Ortiz 
> ---
>  drivers/tee/optee/optee_msg.h | 18 +++
>  drivers/tee/optee/rpc.c   | 57 +++
>  2 files changed, 75 insertions(+)
>
> diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h
> index 795bc19ae17a..b6cc964fdeea 100644
> --- a/drivers/tee/optee/optee_msg.h
> +++ b/drivers/tee/optee/optee_msg.h
> @@ -419,4 +419,22 @@ struct optee_msg_arg {
>   */
>  #define OPTEE_MSG_RPC_CMD_SHM_FREE 7
>
> +/*
> + * Access a device on an i2c bus
> + *
> + * [in]  param[0].u.value.a  mode: RD(0), WR(1)
> + * [in]  param[0].u.value.b  i2c adapter
> + * [in]  param[0].u.value.c  i2c chip
> + *
> + * [io]  param[1].u.tmem.buf_ptr   physical address
> + * [io]  param[1].u.tmem.size  transfer size in bytes
> + * [io]  param[1].u.tmem.shm_ref   shared memory reference
> + *
> + * [out]  param[0].u.value.a   bytes transferred
> + *
> + */
> +#define OPTEE_MSG_RPC_CMD_I2C_TRANSFER 8
> +#define OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD 0
> +#define OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR 1
> +
>  #endif /* _OPTEE_MSG_H */
> diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c
> index b4ade54d1f28..21d452805c6f 100644
> --- a/drivers/tee/optee/rpc.c
> +++ b/drivers/tee/optee/rpc.c
> @@ -9,6 +9,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include "optee_private.h"
>  #include "optee_smc.h"
>
> @@ -48,6 +49,59 @@ static void handle_rpc_func_cmd_get_time(struct 
> optee_msg_arg *arg)
>  bad:
> arg->ret = TEEC_ERROR_BAD_PARAMETERS;
>  }
> +static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx,
> +struct optee_msg_arg *arg)
> +{
> +   struct i2c_client client;
> +   struct tee_shm *shm;
> +   int i, ret;
> +   char *buf;
> +   uint32_t attr[] = {
> +   OPTEE_MSG_ATTR_TYPE_VALUE_INPUT,
> +   OPTEE_MSG_ATTR_TYPE_TMEM_INOUT,
> +   OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT,
> +   };
> +
> +   if (arg->num_params != ARRAY_SIZE(attr))
> +   goto bad;
> +
> +   for (i = 0; i < ARRAY_SIZE(attr); i++)
> +   if ((arg->params[i].attr & OPTEE_MSG_ATTR_TYPE_MASK) != 
> attr[i])
> +   goto bad;
> +
> +   shm = (struct tee_shm *)(unsigned long)arg->params[1].u.tmem.shm_ref;
> +   buf = (char *)shm->kaddr;
> +
> +   client.addr = arg->params[0].u.value.c;
> +   client.adapter = i2c_get_adapter(arg->params[0].u.value.b);
> +   if (!client.adapter)
> +   goto bad;
> +
> +   snprintf(client.name, I2C_NAME_SIZE, "i2c%d", client.adapter->nr);
> +
> +   switch (arg->params[0].u.value.a) {
> +   case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD:
> +   ret = i2c_master_recv(&client, buf, 
> arg->params[1].u.tmem.size);
> +   break;
> +   case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR:
> +   ret = i2c_master_send(&client, buf, 
> arg->params[1].u.tmem.size);
> +   break;
> +   default:
> +   i2c_put_adapter(client.adapter);
> +   goto bad;
> +   }
> +
> +   if (ret >= 0) {
> +   arg->params[2].u.value.a = ret;
> +   arg->ret = TEEC_SUCCESS;
> +   } else
> +   arg->ret = TEEC_ERROR_COMMUNICATION;
> +
> +   i2c_put_adapter(client.adapter);
> +   return;
> +bad:
> +   arg->ret = TEEC_ERROR_BAD_PARAMETERS;
> +}
>
>  static struct wq_entry *wq_entry_get(struct optee_wait_queue *wq, u32 key)
>  {
> @@ -382,6 +436,9 @@ static void handle_rpc_func_cmd(struct tee_context *ctx, 
> struct optee *optee,
> case OPTEE_MSG_RPC_CMD_SHM_FREE:
> handle_rpc_func_cmd_shm_free(ctx, arg);
> break;
> +   case OPTEE_MSG_RPC_CMD_I2C_TRANSFER:
> +   handle_rpc_func_cmd_i2c_transfer(ctx, arg);
> +   break;
> default:
> handle_rpc_supp_cmd(ctx, arg);
> }
> --
> 2.17.1
>
> ___
> Tee-dev mailing list
> tee-...@lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/tee-dev


Re: [PATCH v4 1/4] KEYS: trusted: Add generic trusted keys framework

2020-06-01 Thread Sumit Garg
On Mon, 1 Jun 2020 at 07:30, Jarkko Sakkinen
 wrote:
>
> On Wed, May 06, 2020 at 03:10:14PM +0530, Sumit Garg wrote:
> > Current trusted keys framework is tightly coupled to use TPM device as
> > an underlying implementation which makes it difficult for implementations
> > like Trusted Execution Environment (TEE) etc. to provide trusted keys
> > support in case the platform doesn't possess a TPM device.
> >
> > So this patch tries to add generic trusted keys framework where underlying
> > implementations like TPM, TEE etc. could be easily plugged-in.
> >
> > Suggested-by: Jarkko Sakkinen 
> > Signed-off-by: Sumit Garg 
> > ---
> >  include/keys/trusted-type.h |  45 
> >  include/keys/trusted_tpm.h  |  15 --
> >  security/keys/trusted-keys/Makefile |   1 +
> >  security/keys/trusted-keys/trusted_common.c | 333 
> > +++
>
> I think trusted_core.c would be a better name (less ambiguous).
>

Okay.

> >  security/keys/trusted-keys/trusted_tpm1.c   | 335 
> > +---
> >  5 files changed, 437 insertions(+), 292 deletions(-)
> >  create mode 100644 security/keys/trusted-keys/trusted_common.c
> >
> > diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
> > index a94c03a..5559010 100644
> > --- a/include/keys/trusted-type.h
> > +++ b/include/keys/trusted-type.h
> > @@ -40,6 +40,51 @@ struct trusted_key_options {
> >   uint32_t policyhandle;
> >  };
> >
> > +struct trusted_key_ops {
> > + /*
> > +  * flag to indicate if trusted key implementation supports migration
> > +  * or not.
> > +  */
> > + unsigned char migratable;
> > +
> > + /* trusted key init */
> > + int (*init)(void);
>
> /* Init a key. */
>

This API isn't initializing a key but rather the underlying interface
(see init_tpm_trusted()). So how about:

/* Initialize key interface */

> > +
> > + /* seal a trusted key */
> > + int (*seal)(struct trusted_key_payload *p, char *datablob);
>
> /* Seal a key. */
>

Ack.

> > +
> > + /* unseal a trusted key */
> > + int (*unseal)(struct trusted_key_payload *p, char *datablob);
>
> /* Unseal a key. */
>

Ack.

> > +
> > + /* get random trusted key */
> > + int (*get_random)(unsigned char *key, size_t key_len);
>
> /* Get a randomized key. */
>

Ack.

> > +
> > + /* trusted key cleanup */
> > + void (*cleanup)(void);
>
> Please remove this from this commit since it is not in use in the scope
> of this commit. You should instead make a separate commit just for this
> callback, which explains what it is and how it will be used in the
> follow up commits.
>

This API is relevant to TPM as well (see cleanup_tpm_trusted()), but I
guess the "cleanup()" terminology is causing some confusion, so how
about calling it "exit()" instead?
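
To illustrate, just a sketch of the rename (field list taken from the hunk
quoted above):

struct trusted_key_ops {
	/*
	 * flag to indicate if trusted key implementation supports migration
	 * or not.
	 */
	unsigned char migratable;

	int (*init)(void);	/* initialize the key interface */
	int (*seal)(struct trusted_key_payload *p, char *datablob);
	int (*unseal)(struct trusted_key_payload *p, char *datablob);
	int (*get_random)(unsigned char *key, size_t key_len);
	void (*exit)(void);	/* tear down the key interface */
};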

>
> > +};
> > +
> >  extern struct key_type key_type_trusted;
> > +#if defined(CONFIG_TCG_TPM)
> > +extern struct trusted_key_ops tpm_trusted_key_ops;
> > +#endif
> > +
> > +#define TRUSTED_DEBUG 0
> > +
> > +#if TRUSTED_DEBUG
> > +static inline void dump_payload(struct trusted_key_payload *p)
> > +{
> > + pr_info("trusted_key: key_len %d\n", p->key_len);
> > + print_hex_dump(KERN_INFO, "key ", DUMP_PREFIX_NONE,
> > +16, 1, p->key, p->key_len, 0);
> > + pr_info("trusted_key: bloblen %d\n", p->blob_len);
> > + print_hex_dump(KERN_INFO, "blob ", DUMP_PREFIX_NONE,
> > +16, 1, p->blob, p->blob_len, 0);
> > + pr_info("trusted_key: migratable %d\n", p->migratable);
> > +}
> > +#else
> > +static inline void dump_payload(struct trusted_key_payload *p)
> > +{
> > +}
> > +#endif
> >
> >  #endif /* _KEYS_TRUSTED_TYPE_H */
> > diff --git a/include/keys/trusted_tpm.h b/include/keys/trusted_tpm.h
> > index a56d8e1..5753231 100644
> > --- a/include/keys/trusted_tpm.h
> > +++ b/include/keys/trusted_tpm.h
> > @@ -60,17 +60,6 @@ static inline void dump_options(struct 
> > trusted_key_options *o)
> >  16, 1, o->pcrinfo, o->pcrinfo_len, 0);
> >  }
> >
> > -static inline void dump_payload(struct trusted_key_payload *p)
> > -{
> > - pr_info("trusted_key: key_len %d\n", p->key_len);
> > - print_hex_dump(KERN_INFO,

Re: [PATCH v4 1/4] KEYS: trusted: Add generic trusted keys framework

2020-06-01 Thread Sumit Garg
On Mon, 1 Jun 2020 at 07:41, Jarkko Sakkinen
 wrote:
>
> On Wed, May 06, 2020 at 03:10:14PM +0530, Sumit Garg wrote:
> > Current trusted keys framework is tightly coupled to use TPM device as
> > an underlying implementation which makes it difficult for implementations
> > like Trusted Execution Environment (TEE) etc. to provide trusted keys
> > support in case the platform doesn't possess a TPM device.
> >
> > So this patch tries to add generic trusted keys framework where underlying
> > implementations like TPM, TEE etc. could be easily plugged-in.
> >
> > Suggested-by: Jarkko Sakkinen 
> > Signed-off-by: Sumit Garg 
> > ---
> >  include/keys/trusted-type.h |  45 
> >  include/keys/trusted_tpm.h  |  15 --
> >  security/keys/trusted-keys/Makefile |   1 +
> >  security/keys/trusted-keys/trusted_common.c | 333 
> > +++
> >  security/keys/trusted-keys/trusted_tpm1.c   | 335 
> > +---
> >  5 files changed, 437 insertions(+), 292 deletions(-)
> >  create mode 100644 security/keys/trusted-keys/trusted_common.c
> >
> > diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
> > index a94c03a..5559010 100644
> > --- a/include/keys/trusted-type.h
> > +++ b/include/keys/trusted-type.h
> > @@ -40,6 +40,51 @@ struct trusted_key_options {
> >   uint32_t policyhandle;
> >  };
> >
> > +struct trusted_key_ops {
> > + /*
> > +  * flag to indicate if trusted key implementation supports migration
> > +  * or not.
> > +  */
> > + unsigned char migratable;
> > +
> > + /* trusted key init */
> > + int (*init)(void);
> > +
> > + /* seal a trusted key */
> > + int (*seal)(struct trusted_key_payload *p, char *datablob);
> > +
> > + /* unseal a trusted key */
> > + int (*unseal)(struct trusted_key_payload *p, char *datablob);
> > +
> > + /* get random trusted key */
> > + int (*get_random)(unsigned char *key, size_t key_len);
> > +
> > + /* trusted key cleanup */
> > + void (*cleanup)(void);
> > +};
> > +
> >  extern struct key_type key_type_trusted;
> > +#if defined(CONFIG_TCG_TPM)
> > +extern struct trusted_key_ops tpm_trusted_key_ops;
> > +#endif
> > +
> > +#define TRUSTED_DEBUG 0
> > +
> > +#if TRUSTED_DEBUG
> > +static inline void dump_payload(struct trusted_key_payload *p)
> > +{
> > + pr_info("trusted_key: key_len %d\n", p->key_len);
> > + print_hex_dump(KERN_INFO, "key ", DUMP_PREFIX_NONE,
> > +16, 1, p->key, p->key_len, 0);
> > + pr_info("trusted_key: bloblen %d\n", p->blob_len);
> > + print_hex_dump(KERN_INFO, "blob ", DUMP_PREFIX_NONE,
> > +16, 1, p->blob, p->blob_len, 0);
> > + pr_info("trusted_key: migratable %d\n", p->migratable);
> > +}
> > +#else
> > +static inline void dump_payload(struct trusted_key_payload *p)
> > +{
> > +}
> > +#endif
> >
> >  #endif /* _KEYS_TRUSTED_TYPE_H */
> > diff --git a/include/keys/trusted_tpm.h b/include/keys/trusted_tpm.h
> > index a56d8e1..5753231 100644
> > --- a/include/keys/trusted_tpm.h
> > +++ b/include/keys/trusted_tpm.h
> > @@ -60,17 +60,6 @@ static inline void dump_options(struct 
> > trusted_key_options *o)
> >  16, 1, o->pcrinfo, o->pcrinfo_len, 0);
> >  }
> >
> > -static inline void dump_payload(struct trusted_key_payload *p)
> > -{
> > - pr_info("trusted_key: key_len %d\n", p->key_len);
> > - print_hex_dump(KERN_INFO, "key ", DUMP_PREFIX_NONE,
> > -16, 1, p->key, p->key_len, 0);
> > - pr_info("trusted_key: bloblen %d\n", p->blob_len);
> > - print_hex_dump(KERN_INFO, "blob ", DUMP_PREFIX_NONE,
> > -16, 1, p->blob, p->blob_len, 0);
> > - pr_info("trusted_key: migratable %d\n", p->migratable);
> > -}
> > -
> >  static inline void dump_sess(struct osapsess *s)
> >  {
> >   print_hex_dump(KERN_INFO, "trusted-key: handle ", DUMP_PREFIX_NONE,
> > @@ -96,10 +85,6 @@ static inline void dump_options(struct 
> > trusted_key_options *o)
> >  {
> >  }
> >
> > -static inline void dump_payload(struct trusted_key_payload *p)
> > -{
> > -}
> > -
> >  static inline void dum

Re: [PATCHv5 2/3] optee: use uuid for sysfs driver entry

2020-06-01 Thread Sumit Garg
On Fri, 29 May 2020 at 13:57, Maxim Uvarov  wrote:
>
> OP-TEE device names for sysfs need to be unique
> and it's better if they will mean something. UUID for name
> looks like good solution:
> /sys/bus/tee/devices/optee-ta-<uuid>
>

I think this description is a little vague and fails to explain why
we are doing this. How about:

===
With the evolving use-cases for the TEE bus, it is now required to
support a multi-stage enumeration process. But using a simple index
doesn't suffice for this and instead leads to duplicate sysfs entries.
So switch to the more informative device UUID for the sysfs entry,
like:

/sys/bus/tee/devices/optee-ta-<uuid>


> Signed-off-by: Maxim Uvarov 
> ---
>  Documentation/ABI/testing/sysfs-bus-optee-devices | 8 
>  MAINTAINERS   | 2 ++
>  drivers/tee/optee/device.c| 6 +++---
>  3 files changed, 13 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/ABI/testing/sysfs-bus-optee-devices
>

I think this patch belongs as patch #1 in this series given the dependency.

> diff --git a/Documentation/ABI/testing/sysfs-bus-optee-devices 
> b/Documentation/ABI/testing/sysfs-bus-optee-devices
> new file mode 100644
> index ..0ae04ae5374a
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-bus-optee-devices
> @@ -0,0 +1,8 @@
> +What:  /sys/bus/tee/devices/optee-ta-<uuid>/
> +Date:   May 2020
> +KernelVersion   5.7
> +Contact:tee-...@lists.linaro.org
> +Description:
> +   OP-TEE bus provides reference to registered drivers under 
> this directory. The <uuid>
> +   matches Trusted Application (TA) driver and corresponding TA 
> in secure OS. Drivers
> +   are free to create needed API under optee-ta-<uuid> directory.
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ecc0749810b0..52717ede29fc 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -12516,8 +12516,10 @@ OP-TEE DRIVER
>  M: Jens Wiklander 
>  L: tee-...@lists.linaro.org
>  S: Maintained
> +F: Documentation/ABI/testing/sysfs-bus-optee-devices
>  F: drivers/tee/optee/
>
> +

Unnecessary blank line.

-Sumit

>  OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
>  M: Sumit Garg 
>  L: tee-...@lists.linaro.org
> diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
> index d4931dad07aa..2eb1c0283aec 100644
> --- a/drivers/tee/optee/device.c
> +++ b/drivers/tee/optee/device.c
> @@ -65,7 +65,7 @@ static int get_devices(struct tee_context *ctx, u32 session,
> return 0;
>  }
>
> -static int optee_register_device(const uuid_t *device_uuid, u32 device_id)
> +static int optee_register_device(const uuid_t *device_uuid)
>  {
> struct tee_client_device *optee_device = NULL;
> int rc;
> @@ -75,7 +75,7 @@ static int optee_register_device(const uuid_t *device_uuid, 
> u32 device_id)
> return -ENOMEM;
>
> optee_device->dev.bus = &tee_bus_type;
> -   dev_set_name(&optee_device->dev, "optee-clnt%u", device_id);
> +   dev_set_name(&optee_device->dev, "optee-ta-%pUl", device_uuid);
> uuid_copy(&optee_device->id.uuid, device_uuid);
>
> rc = device_register(&optee_device->dev);
> @@ -144,7 +144,7 @@ static int __optee_enumerate_devices(u32 func)
> num_devices = shm_size / sizeof(uuid_t);
>
> for (idx = 0; idx < num_devices; idx++) {
> -   rc = optee_register_device(&device_uuid[idx], idx);
> +   rc = optee_register_device(&device_uuid[idx]);
> if (rc)
> goto out_shm;
> }
> --
> 2.17.1
>


Re: [PATCHv5 1/3] optee: do drivers initialization before and after tee-supplicant run

2020-06-01 Thread Sumit Garg
On Fri, 29 May 2020 at 13:57, Maxim Uvarov  wrote:
>
> Some drivers (like ftpm) can operate only after tee-supplicant
> runs because of tee-supplicant provides things like storage
> services.  This patch splits probe of non tee-supplicant dependable
> drivers to the early stage, and after tee-supplicant run probe other
> drivers.
>
> Signed-off-by: Maxim Uvarov 
> Suggested-by: Sumit Garg 
> Suggested-by: Arnd Bergmann 
> ---
>  drivers/tee/optee/core.c  | 24 +---
>  drivers/tee/optee/device.c| 17 +++--
>  drivers/tee/optee/optee_private.h | 10 +-
>  3 files changed, 41 insertions(+), 10 deletions(-)
>

Commit subject sounds a little vague, so how about:


optee: enable support for multi-stage bus enumeration


Then in the commit description, you can elaborate on what it actually means.

> diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
> index 99698b8a3a74..bf0851fdf108 100644
> --- a/drivers/tee/optee/core.c
> +++ b/drivers/tee/optee/core.c
> @@ -17,6 +17,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include "optee_private.h"
>  #include "optee_smc.h"
>  #include "shm_pool.h"
> @@ -218,6 +219,11 @@ static void optee_get_version(struct tee_device *teedev,
> *vers = v;
>  }
>
> +static void optee_bus_scan(struct work_struct *work)
> +{
> +   WARN_ON(optee_enumerate_devices(PTA_CMD_GET_DEVICES_SUPP));
> +}
> +
>  static int optee_open(struct tee_context *ctx)
>  {
> struct optee_context_data *ctxdata;
> @@ -241,8 +247,18 @@ static int optee_open(struct tee_context *ctx)
> kfree(ctxdata);
> return -EBUSY;
> }
> -   }
>
> +   if (!optee->scan_bus_done) {
> +   INIT_WORK(&optee->scan_bus_work, optee_bus_scan);
> +   optee->scan_bus_wq = 
> create_workqueue("optee_bus_scan");
> +   if (!optee->scan_bus_wq) {
> +   kfree(ctxdata);
> +   return -ECHILD;
> +   }
> +   queue_work(optee->scan_bus_wq, &optee->scan_bus_work);
> +   optee->scan_bus_done = true;
> +   }
> +   }
> mutex_init(&ctxdata->mutex);
> INIT_LIST_HEAD(&ctxdata->sess_list);
>
> @@ -296,8 +312,10 @@ static void optee_release(struct tee_context *ctx)
>
> ctx->data = NULL;
>
> -   if (teedev == optee->supp_teedev)
> +   if (teedev == optee->supp_teedev) {
> +   destroy_workqueue(optee->scan_bus_wq);

Doesn't it deserve a prior check "if(optee->scan_bus_wq)" as we only
allocate it once during multiple tee-supplicant instances?

> optee_supp_release(&optee->supp);
> +   }
>  }
>
>  static const struct tee_driver_ops optee_ops = {
> @@ -675,7 +693,7 @@ static int optee_probe(struct platform_device *pdev)
>
> platform_set_drvdata(pdev, optee);
>
> -   rc = optee_enumerate_devices();
> +   rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES);
> if (rc) {
> optee_remove(pdev);
> return rc;
> diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
> index e3a148521ec1..d4931dad07aa 100644
> --- a/drivers/tee/optee/device.c
> +++ b/drivers/tee/optee/device.c
> @@ -21,7 +21,6 @@
>   * TEE_ERROR_BAD_PARAMETERS - Incorrect input param
>   * TEE_ERROR_SHORT_BUFFER - Output buffer size less than required
>   */

This comment needs to be moved as well.

> -#define PTA_CMD_GET_DEVICES0x0
>
>  static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void 
> *data)
>  {
> @@ -32,7 +31,8 @@ static int optee_ctx_match(struct tee_ioctl_version_data 
> *ver, const void *data)
>  }
>
>  static int get_devices(struct tee_context *ctx, u32 session,
> -  struct tee_shm *device_shm, u32 *shm_size)
> +  struct tee_shm *device_shm, u32 *shm_size,
> +  u32 func)
>  {
> int ret = 0;
> struct tee_ioctl_invoke_arg inv_arg;
> @@ -42,7 +42,7 @@ static int get_devices(struct tee_context *ctx, u32 session,
> memset(¶m, 0, sizeof(param));
>
> /* Invoke PTA_CMD_GET_DEVICES function */

You can get rid of this comment.

-Sumit

> -   inv_arg.func = PTA_CMD_GET_DEVICES;
> +   inv_arg.func = func;
> inv_arg.session = session;
> inv_arg.num_params = 4;
>
> @@ -87,7 +87,7 @@ st

Re: [PATCHv5 3/3] tpm_ftpm_tee: register driver on TEE bus

2020-06-01 Thread Sumit Garg
On Fri, 29 May 2020 at 13:57, Maxim Uvarov  wrote:
>
> Register driver on the TEE bus. The module tee registers bus,
> and module optee calls optee_enumerate_devices() to scan
> all devices on the bus. Trusted Application for this driver
> can be Early TA's (can be compiled into optee-os). In that
> case it will be on OPTEE bus before linux booting. Also
> optee-suplicant application is needed to be loaded between
> OPTEE module and ftpm module to maintain functionality
> for fTPM driver.

I think this description merely describes the functioning of the TEE bus
and misses what value-add the TEE bus provides compared to the platform
bus.

Consider:


The OP-TEE based fTPM Trusted Application depends on tee-supplicant to
provide an NV RAM implementation backed by RPMB secure storage. This
dependency can be resolved via the TEE bus, where the fTPM driver probe
is only invoked once the fTPM device has been registered on the bus,
which only happens after tee-supplicant is up and running. Additionally,
the TEE bus provides automatic device enumeration.


With that, the implementation looks good to me, so feel free to add:

Reviewed-by: Sumit Garg 

-Sumit

>
> Signed-off-by: Maxim Uvarov 
> Suggested-by: Sumit Garg 
> Suggested-by: Arnd Bergmann 
> ---
>  drivers/char/tpm/tpm_ftpm_tee.c | 70 -
>  1 file changed, 60 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c
> index 22bf553ccf9d..28da638360d8 100644
> --- a/drivers/char/tpm/tpm_ftpm_tee.c
> +++ b/drivers/char/tpm/tpm_ftpm_tee.c
> @@ -214,11 +214,10 @@ static int ftpm_tee_match(struct tee_ioctl_version_data 
> *ver, const void *data)
>   * Return:
>   * On success, 0. On failure, -errno.
>   */
> -static int ftpm_tee_probe(struct platform_device *pdev)
> +static int ftpm_tee_probe(struct device *dev)
>  {
> int rc;
> struct tpm_chip *chip;
> -   struct device *dev = &pdev->dev;
> struct ftpm_tee_private *pvt_data = NULL;
> struct tee_ioctl_open_session_arg sess_arg;
>
> @@ -297,6 +296,13 @@ static int ftpm_tee_probe(struct platform_device *pdev)
> return rc;
>  }
>
> +static int ftpm_plat_tee_probe(struct platform_device *pdev)
> +{
> +   struct device *dev = &pdev->dev;
> +
> +   return ftpm_tee_probe(dev);
> +}
> +
>  /**
>   * ftpm_tee_remove() - remove the TPM device
>   * @pdev: the platform_device description.
> @@ -304,9 +310,9 @@ static int ftpm_tee_probe(struct platform_device *pdev)
>   * Return:
>   * 0 always.
>   */
> -static int ftpm_tee_remove(struct platform_device *pdev)
> +static int ftpm_tee_remove(struct device *dev)
>  {
> -   struct ftpm_tee_private *pvt_data = dev_get_drvdata(&pdev->dev);
> +   struct ftpm_tee_private *pvt_data = dev_get_drvdata(dev);
>
> /* Release the chip */
> tpm_chip_unregister(pvt_data->chip);
> @@ -328,11 +334,18 @@ static int ftpm_tee_remove(struct platform_device *pdev)
> return 0;
>  }
>
> +static int ftpm_plat_tee_remove(struct platform_device *pdev)
> +{
> +   struct device *dev = &pdev->dev;
> +
> +   return ftpm_tee_remove(dev);
> +}
> +
>  /**
>   * ftpm_tee_shutdown() - shutdown the TPM device
>   * @pdev: the platform_device description.
>   */
> -static void ftpm_tee_shutdown(struct platform_device *pdev)
> +static void ftpm_plat_tee_shutdown(struct platform_device *pdev)
>  {
> struct ftpm_tee_private *pvt_data = dev_get_drvdata(&pdev->dev);
>
> @@ -347,17 +360,54 @@ static const struct of_device_id of_ftpm_tee_ids[] = {
>  };
>  MODULE_DEVICE_TABLE(of, of_ftpm_tee_ids);
>
> -static struct platform_driver ftpm_tee_driver = {
> +static struct platform_driver ftpm_tee_plat_driver = {
> .driver = {
> .name = "ftpm-tee",
> .of_match_table = of_match_ptr(of_ftpm_tee_ids),
> },
> -   .probe = ftpm_tee_probe,
> -   .remove = ftpm_tee_remove,
> -   .shutdown = ftpm_tee_shutdown,
> +   .shutdown = ftpm_plat_tee_shutdown,
> +   .probe = ftpm_plat_tee_probe,
> +   .remove = ftpm_plat_tee_remove,
> +};
> +
> +/* UUID of the fTPM TA */
> +static const struct tee_client_device_id optee_ftpm_id_table[] = {
> +   {UUID_INIT(0xbc50d971, 0xd4c9, 0x42c4,
> +  0x82, 0xcb, 0x34, 0x3f, 0xb7, 0xf3, 0x78, 0x96)},
> +   {}
>  };
>
> -module_platform_driver(ftpm_tee_driver);
> +MODULE_DEVICE_TABLE(tee, optee_ftpm_id_table);
> +
> +static struct tee_client_driver ftpm_tee_driver = {
> +   .id_table   = optee_ftpm_id_table,
> +   .driver = {
> +   

Re: [PATCH 0/3] arm64: perf: Add support for Perf NMI interrupts

2020-05-18 Thread Sumit Garg
On Mon, 18 May 2020 at 19:49, Mark Rutland  wrote:
>
> On Mon, May 18, 2020 at 07:39:23PM +0530, Sumit Garg wrote:
> > On Mon, 18 May 2020 at 16:47, Alexandru Elisei  
> > wrote:
> > > On 5/18/20 11:45 AM, Mark Rutland wrote:
> > > > On Mon, May 18, 2020 at 02:26:00PM +0800, Lecopzer Chen wrote:
> > > >> HI Sumit,
> > > >>
> > > >> Thanks for your information.
> > > >>
> > > >> I've already implemented IPI (same as you did [1], little difference
> > > >> in detail), hardlockup detector and perf in last year(2019) for
> > > >> debuggability.
> > > >> And now we tend to upstream to reduce kernel maintaining effort.
> > > >> I'm glad if someone in ARM can do this work :)
> > > >>
> > > >> Hi Julien,
> > > >>
> > > >> Does any Arm maintainers can proceed this action?
> > > > Alexandru (Cc'd) has been rebasing and reworking Julien's patches, which
> > > > is my preferred approach.
> > > >
> > > > I understand that's not quite ready for posting since he's investigating
> > > > some of the nastier subtleties (e.g. mutual exclusion with the NMI), but
> > > > maybe we can put the work-in-progress patches somewhere in the mean
> > > > time.
> > > >
> > > > Alexandru, do you have an idea of what needs to be done, and/or when you
> > > > expect you could post that?
> > >
> > > I'm currently working on rebasing the patches on top of 5.7-rc5, when I 
> > > have
> > > something usable I'll post a link (should be a couple of days). After 
> > > that I will
> > > address the review comments, and I plan to do a thorough testing because 
> > > I'm not
> > > 100% confident that some of the assumptions around the locks that were 
> > > removed are
> > > correct. My guess is this will take a few weeks.
> > >
> >
> > Thanks Mark, Alex for the status updates on perf NMI feature.
> >
> > Alex,
> >
> > As the hard-lockup detection patch [1] has a dependency on perf NMI
> > patch-set, I will rebase and test hard-lockup detector when you have
> > got a working tree. But due to the dependency, I think patch [1]
> > should be accepted along with perf NMI patch-set. So would you be open
> > to include this patch as part of your series?
> >
> > [1] 
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2020-May/732227.html
>
> While it depends on the perf NMI bits, I don't think it makes sense to
> tie that into the series given it's trying to achieve something very
> different.
>
> I think that should be reposted separately once the perf NMI bits are in
> shape.

Okay, fair enough. Will keep it as a separate patch then.

-Sumit

>
> Thanks,
> Mark.


Re: [PATCH 06/11] irqchip/gic-v3: Configure SGIs as standard interrupts

2020-05-20 Thread Sumit Garg
Hi Marc,

On Tue, 19 May 2020 at 21:48, Marc Zyngier  wrote:
>
> Change the way we deal with GICv3 SGIs by turning them into proper
> IRQs, and calling into the arch code to register the interrupt range
> instead of a callback.
>
> Signed-off-by: Marc Zyngier 
> ---
>  drivers/irqchip/irq-gic-v3.c | 91 +---
>  1 file changed, 53 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 23d7c87da407..d57289057b75 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -36,6 +36,9 @@
>  #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996(1ULL << 0)
>  #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539  (1ULL << 1)
>
> +#define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1)
> +#define GIC_IRQ_TYPE_SGI   (GIC_IRQ_TYPE_LPI + 2)
> +
>  struct redist_region {
> void __iomem*redist_base;
> phys_addr_t phys_base;
> @@ -657,38 +660,14 @@ static asmlinkage void __exception_irq_entry 
> gic_handle_irq(struct pt_regs *regs
> if ((irqnr >= 1020 && irqnr <= 1023))
> return;
>
> -   /* Treat anything but SGIs in a uniform way */
> -   if (likely(irqnr > 15)) {
> -   int err;
> -
> -   if (static_branch_likely(&supports_deactivate_key))
> -   gic_write_eoir(irqnr);
> -   else
> -   isb();
> -
> -   err = handle_domain_irq(gic_data.domain, irqnr, regs);
> -   if (err) {
> -   WARN_ONCE(true, "Unexpected interrupt received!\n");
> -   gic_deactivate_unhandled(irqnr);
> -   }
> -   return;
> -   }
> -   if (irqnr < 16) {
> +   if (static_branch_likely(&supports_deactivate_key))
> gic_write_eoir(irqnr);
> -   if (static_branch_likely(&supports_deactivate_key))
> -   gic_write_dir(irqnr);
> -#ifdef CONFIG_SMP
> -   /*
> -* Unlike GICv2, we don't need an smp_rmb() here.
> -* The control dependency from gic_read_iar to
> -* the ISB in gic_write_eoir is enough to ensure
> -* that any shared data read by handle_IPI will
> -* be read after the ACK.
> -*/
> -   handle_IPI(irqnr, regs);
> -#else
> -   WARN_ONCE(true, "Unexpected SGI received!\n");
> -#endif
> +   else
> +   isb();
> +
> +   if (handle_domain_irq(gic_data.domain, irqnr, regs)) {
> +   WARN_ONCE(true, "Unexpected interrupt received!\n");
> +   gic_deactivate_unhandled(irqnr);
> }
>  }
>
> @@ -1136,11 +1115,11 @@ static void gic_send_sgi(u64 cluster_id, u16 tlist, 
> unsigned int irq)
> gic_write_sgi1r(val);
>  }
>
> -static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
> +static void gic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
>  {
> int cpu;
>
> -   if (WARN_ON(irq >= 16))
> +   if (WARN_ON(d->hwirq >= 16))
> return;
>
> /*
> @@ -1154,7 +1133,7 @@ static void gic_raise_softirq(const struct cpumask 
> *mask, unsigned int irq)
> u16 tlist;
>
> tlist = gic_compute_target_list(&cpu, mask, cluster_id);
> -   gic_send_sgi(cluster_id, tlist, irq);
> +   gic_send_sgi(cluster_id, tlist, d->hwirq);
> }
>
> /* Force the above writes to ICC_SGI1R_EL1 to be executed */
> @@ -1163,10 +1142,36 @@ static void gic_raise_softirq(const struct cpumask 
> *mask, unsigned int irq)
>
>  static void gic_smp_init(void)
>  {
> -   set_smp_cross_call(gic_raise_softirq);
> +   struct irq_fwspec sgi_fwspec = {
> +   .fwnode = gic_data.fwnode,
> +   };
> +   int base_sgi;
> +
> cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_GIC_STARTING,
>   "irqchip/arm/gicv3:starting",
>   gic_starting_cpu, NULL);
> +
> +   if (is_of_node(gic_data.fwnode)) {
> +   /* DT */
> +   sgi_fwspec.param_count = 3;
> +   sgi_fwspec.param[0] = GIC_IRQ_TYPE_SGI;
> +   sgi_fwspec.param[1] = 0;
> +   sgi_fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
> +   } else {
> +   /* ACPI */
> +   sgi_fwspec.param_count = 2;
> +   sgi_fwspec.param[0] = 0;
> +   sgi_fwspec.param[1] = IRQ_TYPE_EDGE_RISING;
> +   }
> +
> +   /* Register all 8 non-secure SGIs */
> +   base_sgi = __irq_domain_alloc_irqs(gic_data.domain, -1, 8,
> +  NUMA_NO_NODE, &sgi_fwspec,
> +  false, NULL);
> +   if (WARN_ON(base_sgi <= 0))
> +   return;
> +
> +   set_smp_ipi_range(base_sgi, 8);
>  }
>
>  static

[PATCH v2 0/4] arm64: Introduce new IPI as IPI_CALL_NMI_FUNC

2020-05-20 Thread Sumit Garg
With pseudo NMI support available, it is possible to configure SGIs to be
triggered as pseudo NMIs running in NMI context. Kernel features such as
kgdb rely on NMI support to round up CPUs which are stuck in a hard
lockup state with interrupts disabled.

This patch-set adds support for IPI_CALL_NMI_FUNC, which can be triggered
as a pseudo NMI and is in turn leveraged by kgdb to round up CPUs.

After this patch-set we should be able to get a backtrace for a CPU
stuck in HARDLOCKUP. Have a look at the example below from a testcase run
on Developerbox:

$ echo HARDLOCKUP > /sys/kernel/debug/provoke-crash/DIRECT

# Enter kdb via Magic SysRq

[11]kdb> btc
btc: cpu status: Currently on cpu 10
Available cpus: 0-7(I), 8, 9(I), 10, 11-23(I)

Stack traceback for pid 619
0x000871bc9c00  619  618  18   R  0x000871bca5c0  bash
CPU: 8 PID: 619 Comm: bash Not tainted 5.7.0-rc6-00762-g3804420 #77
Hardware name: Socionext SynQuacer E-series DeveloperBox, BIOS build #73 Apr  6 
2020
Call trace:
 dump_backtrace+0x0/0x198
 show_stack+0x18/0x28
 dump_stack+0xb8/0x100
 kgdb_cpu_enter+0x5c0/0x5f8
 kgdb_nmicallback+0xa0/0xa8
 ipi_kgdb_nmicallback+0x24/0x30
 ipi_handler+0x160/0x1b8
 handle_percpu_devid_fasteoi_ipi+0x44/0x58
 generic_handle_irq+0x30/0x48
 handle_domain_nmi+0x44/0x80
 gic_handle_irq+0x140/0x2a0
 el1_irq+0xcc/0x180
 lkdtm_HARDLOCKUP+0x10/0x18
 direct_entry+0x124/0x1c0
 full_proxy_write+0x60/0xb0
 __vfs_write+0x1c/0x48
 vfs_write+0xe4/0x1d0
 ksys_write+0x6c/0xf8
 __arm64_sys_write+0x1c/0x28
 el0_svc_common.constprop.0+0x74/0x1f0
 do_el0_svc+0x24/0x90
 el0_sync_handler+0x178/0x2b8
 el0_sync+0x158/0x180


Changes since RFC version [1]:
- Switch to using the generic interrupt framework to turn an IPI into
  an NMI.
- Dependent on Marc's patch-set [2] which turns IPIs into normal
  interrupts.
- Addressed misc. comments from Doug on patch #4.
- Posted the kgdb NMI printk() fixup separately; it has since evolved to
  be solved with a different approach, changing kgdb's interception of
  printk() in the common printk() code (see patch [3]).

[1] https://lkml.org/lkml/2020/4/24/328
[2] https://lkml.org/lkml/2020/5/19/710
[3] https://lkml.org/lkml/2020/5/20/418

Sumit Garg (4):
  arm64: smp: Introduce a new IPI as IPI_CALL_NMI_FUNC
  irqchip/gic-v3: Enable support for SGIs to act as NMIs
  arm64: smp: Setup IPI_CALL_NMI_FUNC as a pseudo NMI
  arm64: kgdb: Round up cpus using IPI_CALL_NMI_FUNC

 arch/arm64/include/asm/hardirq.h |  2 +-
 arch/arm64/include/asm/kgdb.h|  8 +++
 arch/arm64/include/asm/smp.h |  1 +
 arch/arm64/kernel/kgdb.c | 21 +
 arch/arm64/kernel/smp.c  | 49 
 drivers/irqchip/irq-gic-v3.c | 13 +--
 6 files changed, 81 insertions(+), 13 deletions(-)

-- 
2.7.4



[PATCH v2 1/4] arm64: smp: Introduce a new IPI as IPI_CALL_NMI_FUNC

2020-05-20 Thread Sumit Garg
Introduce a new inter-processor interrupt, IPI_CALL_NMI_FUNC, that can
be invoked to run special handlers in NMI context. One example of such a
handler is kgdb_nmicallback(), which is invoked in order to round up CPUs
to enter kgdb context.

Currently pseudo NMIs are only supported on arm64 platforms which
incorporate a GICv3 or later version of the interrupt controller. In case
a particular platform doesn't support pseudo NMIs, IPI_CALL_NMI_FUNC will
act as a normal IPI which can still be used to invoke special handlers.

Signed-off-by: Sumit Garg 
---
 arch/arm64/include/asm/hardirq.h |  2 +-
 arch/arm64/include/asm/smp.h |  1 +
 arch/arm64/kernel/smp.c  | 13 -
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index 87ad961..abaa23a 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -13,7 +13,7 @@
 #include 
 #include 
 
-#define NR_IPI 7
+#define NR_IPI 8
 
 typedef struct {
unsigned int __softirq_pending;
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index bec6ef0..b4602de 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -106,6 +106,7 @@ extern void secondary_entry(void);
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
+extern void arch_send_call_nmi_func_ipi_mask(const struct cpumask *mask);
 
 #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
 extern void arch_send_wakeup_ipi_mask(const struct cpumask *mask);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d29823a..236784e 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -81,7 +81,8 @@ enum ipi_msg_type {
IPI_CPU_CRASH_STOP,
IPI_TIMER,
IPI_IRQ_WORK,
-   IPI_WAKEUP
+   IPI_WAKEUP,
+   IPI_CALL_NMI_FUNC
 };
 
 #ifdef CONFIG_HOTPLUG_CPU
@@ -802,6 +803,7 @@ static const char *ipi_types[NR_IPI] __tracepoint_string = {
S(IPI_TIMER, "Timer broadcast interrupts"),
S(IPI_IRQ_WORK, "IRQ work interrupts"),
S(IPI_WAKEUP, "CPU wake-up interrupts"),
+   S(IPI_CALL_NMI_FUNC, "NMI function call interrupts"),
 };
 
 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr);
@@ -855,6 +857,11 @@ void arch_irq_work_raise(void)
 }
 #endif
 
+void arch_send_call_nmi_func_ipi_mask(const struct cpumask *mask)
+{
+   smp_cross_call(mask, IPI_CALL_NMI_FUNC);
+}
+
 static void local_cpu_stop(void)
 {
set_cpu_online(smp_processor_id(), false);
@@ -949,6 +956,10 @@ static void do_handle_IPI(int ipinr)
break;
 #endif
 
+   case IPI_CALL_NMI_FUNC:
+   /* nop, IPI handlers for special features can be added here. */
+   break;
+
default:
pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr);
break;
-- 
2.7.4



[PATCH v2 3/4] arm64: smp: Setup IPI_CALL_NMI_FUNC as a pseudo NMI

2020-05-20 Thread Sumit Garg
Set up IPI_CALL_NMI_FUNC as a pseudo NMI using generic interrupt framework
APIs. In case a platform doesn't provide support for pseudo NMIs, fall
back to IPI_CALL_NMI_FUNC being a normal interrupt.

Signed-off-by: Sumit Garg 
---
 arch/arm64/kernel/smp.c | 35 ++-
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 236784e..c5e42a1 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -68,6 +68,7 @@ struct secondary_data secondary_data;
 int cpus_stuck_in_kernel;
 
 static int ipi_irq_base;
+static int ipi_nmi = -1;
 static int nr_ipi = NR_IPI;
 static struct irq_desc *ipi_desc[NR_IPI];
 
@@ -986,8 +987,14 @@ static void ipi_setup(int cpu)
if (ipi_irq_base) {
int i;
 
-   for (i = 0; i < nr_ipi; i++)
-   enable_percpu_irq(ipi_irq_base + i, 0);
+   for (i = 0; i < nr_ipi; i++) {
+   if (ipi_nmi == ipi_irq_base + i) {
+   if (!prepare_percpu_nmi(ipi_nmi))
+   enable_percpu_nmi(ipi_nmi, 0);
+   } else {
+   enable_percpu_irq(ipi_irq_base + i, 0);
+   }
+   }
}
 }
 
@@ -997,23 +1004,33 @@ static void ipi_teardown(int cpu)
int i;
 
for (i = 0; i < nr_ipi; i++)
-   disable_percpu_irq(ipi_irq_base + i);
+   if (ipi_nmi == ipi_irq_base + i) {
+   disable_percpu_nmi(ipi_nmi);
+   teardown_percpu_nmi(ipi_nmi);
+   } else {
+   disable_percpu_irq(ipi_irq_base + i);
+   }
}
 }
 
 void __init set_smp_ipi_range(int ipi_base, int n)
 {
-   int i;
+   int i, err;
 
WARN_ON(n < NR_IPI);
nr_ipi = min(n, NR_IPI);
 
-   for (i = 0; i < nr_ipi; i++) {
-   int err;
+   err = request_percpu_nmi(ipi_base + IPI_CALL_NMI_FUNC,
+ipi_handler, "IPI", &irq_stat);
+   if (!err)
+   ipi_nmi = ipi_base + IPI_CALL_NMI_FUNC;
 
-   err = request_percpu_irq(ipi_base + i, ipi_handler,
-"IPI", &irq_stat);
-   WARN_ON(err);
+   for (i = 0; i < nr_ipi; i++) {
+   if (ipi_base + i != ipi_nmi) {
+   err = request_percpu_irq(ipi_base + i, ipi_handler,
+"IPI", &irq_stat);
+   WARN_ON(err);
+   }
 
ipi_desc[i] = irq_to_desc(ipi_base + i);
irq_set_status_flags(ipi_base + i, IRQ_NO_ACCOUNTING);
-- 
2.7.4



[PATCH v2 2/4] irqchip/gic-v3: Enable support for SGIs to act as NMIs

2020-05-20 Thread Sumit Garg
Add support to handle SGIs as regular NMIs. Since SGIs (IPIs) default to
the special flow handler handle_percpu_devid_fasteoi_ipi(), skip the NMI
handler update in the case of SGIs.

Also, enable NMI support prior to gic_smp_init(), as the allocation of SGIs
as IRQs/NMIs happens as part of that routine.

Signed-off-by: Sumit Garg 
---
 drivers/irqchip/irq-gic-v3.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 82095b8..ceef63b 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -477,6 +477,11 @@ static int gic_irq_nmi_setup(struct irq_data *d)
if (WARN_ON(gic_irq(d) >= 8192))
return -EINVAL;
 
+   if (get_intid_range(d) == SGI_RANGE) {
+   gic_irq_set_prio(d, GICD_INT_NMI_PRI);
+   return 0;
+   }
+
/* desc lock should already be held */
if (gic_irq_in_rdist(d)) {
u32 idx = gic_get_ppi_index(d);
@@ -514,6 +519,11 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
if (WARN_ON(gic_irq(d) >= 8192))
return;
 
+   if (get_intid_range(d) == SGI_RANGE) {
+   gic_irq_set_prio(d, GICD_INT_DEF_PRI);
+   return;
+   }
+
/* desc lock should already be held */
if (gic_irq_in_rdist(d)) {
u32 idx = gic_get_ppi_index(d);
@@ -1675,6 +1685,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
 
gic_dist_init();
gic_cpu_init();
+   gic_enable_nmi_support();
gic_smp_init();
gic_cpu_pm_init();
 
@@ -1686,8 +1697,6 @@ static int __init gic_init_bases(void __iomem *dist_base,
gicv2m_init(handle, gic_data.domain);
}
 
-   gic_enable_nmi_support();
-
return 0;
 
 out_free:
-- 
2.7.4



[PATCH v2 4/4] arm64: kgdb: Round up cpus using IPI_CALL_NMI_FUNC

2020-05-20 Thread Sumit Garg
arm64 platforms with GICv3 or later support pseudo NMIs, which can be
leveraged to round up CPUs that are stuck in a hard lockup state with
interrupts disabled, something that wouldn't be possible with a normal
IPI.

So switch to rounding up CPUs using IPI_CALL_NMI_FUNC instead. In case
a particular arm64 platform doesn't support pseudo NMIs,
IPI_CALL_NMI_FUNC will act as a normal IPI, which maintains the existing
kgdb functionality.

Signed-off-by: Sumit Garg 
---
 arch/arm64/include/asm/kgdb.h |  8 
 arch/arm64/kernel/kgdb.c  | 21 +
 arch/arm64/kernel/smp.c   |  3 ++-
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kgdb.h b/arch/arm64/include/asm/kgdb.h
index 21fc85e..6f3d3af 100644
--- a/arch/arm64/include/asm/kgdb.h
+++ b/arch/arm64/include/asm/kgdb.h
@@ -24,6 +24,14 @@ static inline void arch_kgdb_breakpoint(void)
 extern void kgdb_handle_bus_error(void);
 extern int kgdb_fault_expected;
 
+#ifdef CONFIG_KGDB
+extern void ipi_kgdb_nmicallback(int cpu, void *regs);
+#else
+static inline void ipi_kgdb_nmicallback(int cpu, void *regs)
+{
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 /*
diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
index 4311992..ee932ba 100644
--- a/arch/arm64/kernel/kgdb.c
+++ b/arch/arm64/kernel/kgdb.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -353,3 +354,23 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
return aarch64_insn_write((void *)bpt->bpt_addr,
*(u32 *)bpt->saved_instr);
 }
+
+void ipi_kgdb_nmicallback(int cpu, void *regs)
+{
+   if (atomic_read(&kgdb_active) != -1)
+   kgdb_nmicallback(cpu, regs);
+}
+
+#ifdef CONFIG_SMP
+void kgdb_roundup_cpus(void)
+{
+   struct cpumask mask;
+
+   cpumask_copy(&mask, cpu_online_mask);
+   cpumask_clear_cpu(raw_smp_processor_id(), &mask);
+   if (cpumask_empty(&mask))
+   return;
+
+   arch_send_call_nmi_func_ipi_mask(&mask);
+}
+#endif
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index c5e42a1..3baace7 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -31,6 +31,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -958,7 +959,7 @@ static void do_handle_IPI(int ipinr)
 #endif
 
case IPI_CALL_NMI_FUNC:
-   /* nop, IPI handlers for special features can be added here. */
+   ipi_kgdb_nmicallback(cpu, get_irq_regs());
break;
 
default:
-- 
2.7.4



Re: [PATCH v2 00/17] arm/arm64: Turning IPIs into normal interrupts

2020-08-11 Thread Sumit Garg
Hi Marc,

On Thu, 25 Jun 2020 at 01:28, Marc Zyngier  wrote:
>
> For as long as SMP ARM has existed, IPIs have been handled as
> something special. The arch code and the interrupt controller exchange
> a couple of hooks (one to generate an IPI, another to handle it).
>
> Although this is perfectly manageable, it prevents the use of features
> that we could use if IPIs were Linux IRQs (such as pseudo-NMIs). It
> also means that each interrupt controller driver has to follow an
> architecture-specific interface instead of just implementing the base
> irqchip functionalities. The arch code also duplicates a number of
> things that the core irq code already does (such as calling
> set_irq_regs(), irq_enter()...).
>
> This series tries to remedy this on arm/arm64 by offering a new
> registration interface where the irqchip gives the arch code a range
> of interrupts to use for IPIs. The arch code requests these as normal
> per-cpu interrupts.
>
> The bulk of the work is at the interrupt controller level, where all 5
> irqchips used on arm+SMP/arm64 get converted.
>
> Finally, we drop the legacy registration interface as well as the
> custom statistics accounting.
>
> Note that I have had a look at providing a "generic" interface by
> expanding the kernel/irq/ipi.c bag of helpers, but so far all
> irqchips have very different requirements, so there is hardly anything
> to consolidate for now. Maybe some as hip04 and the Marvell horror get
> cleaned up (the latter certainly could do with a good dusting).
>
> This has been tested on a bunch of 32 and 64bit guests (GICv2, GICv3),
> as well as 64bit bare metal (GICv3). The RPi part has only been tested
> in QEMU as a 64bit guest, while the HiSi and Marvell parts have only
> been compile-tested.

This series works perfectly fine on Developerbox.

I just want to follow up on when you are planning to push this series
upstream. Are you waiting for the other irqchips (apart from GIC) to be
reviewed?

Actually, my work to turn an IPI into a pseudo NMI [1] depends on this
patch-set.

[1] https://lkml.org/lkml/2020/5/20/488

-Sumit

>
> * From v1:
>   - Clarified the effect of nesting irq_enter/exit (Russell)
>   - Changed the point where we tear IPIs down on (Valentin)
>   - IPIs are no longer accessible from DT
>   - HIP04 and Armada 370-XP have been converted, but are untested
>   - arch-specific kstat accounting is removed
>   - ARM's legacy interface is dropped
>
> Marc Zyngier (17):
>   genirq: Add fasteoi IPI flow
>   genirq: Allow interrupts to be excluded from /proc/interrupts
>   arm64: Allow IPIs to be handled as normal interrupts
>   ARM: Allow IPIs to be handled as normal interrupts
>   irqchip/gic-v3: Describe the SGI range
>   irqchip/gic-v3: Configure SGIs as standard interrupts
>   irqchip/gic: Atomically update affinity
>   irqchip/gic: Refactor SMP configuration
>   irqchip/gic: Configure SGIs as standard interrupts
>   irqchip/gic-common: Don't enable SGIs by default
>   irqchip/bcm2836: Configure mailbox interrupts as standard interrupts
>   irqchip/hip04: Configure IPIs as standard interrupts
>   irqchip/armada-370-xp: Configure IPIs as standard interrupts
>   arm64: Kill __smp_cross_call and co
>   arm64: Remove custom IRQ stat accounting
>   ARM: Kill __smp_cross_call and co
>   ARM: Remove custom IRQ stat accounting
>
>  arch/arm/Kconfig|   1 +
>  arch/arm/include/asm/hardirq.h  |  17 --
>  arch/arm/include/asm/smp.h  |   5 +-
>  arch/arm/kernel/smp.c   | 135 +-
>  arch/arm64/Kconfig  |   1 +
>  arch/arm64/include/asm/hardirq.h|   9 -
>  arch/arm64/include/asm/irq_work.h   |   4 +-
>  arch/arm64/include/asm/smp.h|   6 +-
>  arch/arm64/kernel/smp.c | 119 -
>  drivers/irqchip/irq-armada-370-xp.c | 262 +++-
>  drivers/irqchip/irq-bcm2836.c   | 151 +---
>  drivers/irqchip/irq-gic-common.c|   3 -
>  drivers/irqchip/irq-gic-v3.c|  99 ++-
>  drivers/irqchip/irq-gic.c   | 183 ++-
>  drivers/irqchip/irq-hip04.c |  89 +-
>  include/linux/irq.h |   5 +-
>  kernel/irq/chip.c   |  27 +++
>  kernel/irq/debugfs.c|   1 +
>  kernel/irq/proc.c   |   2 +-
>  kernel/irq/settings.h   |   7 +
>  20 files changed, 713 insertions(+), 413 deletions(-)
>
> --
> 2.27.0
>


Re: [RFC 0/5] Introduce NMI aware serial drivers

2020-08-11 Thread Sumit Garg
On Tue, 21 Jul 2020 at 17:40, Sumit Garg  wrote:
>
> Make it possible for UARTs to trigger magic sysrq from an NMI. With the
> advent of pseudo NMIs on arm64 it became quite generic to request serial
> device interrupt as an NMI rather than IRQ. And having NMI driven serial
> RX will allow us to trigger magic sysrq as an NMI and hence drop into
> kernel debugger in NMI context.
>
> The major use-case is to add NMI debugging capabilities to the kernel
> in order to debug scenarios such as:
> - Primary CPU is stuck in deadlock with interrupts disabled and hence
>   doesn't honor serial device interrupt. So having magic sysrq triggered
>   as an NMI is helpful for debugging.
> - Always enabled NMI based magic sysrq irrespective of whether the serial
>   TTY port is active or not.
>
> Currently there is an existing kgdb NMI serial driver which provides
> partial implementation in upstream to have a separate ttyNMI0 port but
> that remained in silos with the serial core/drivers which made it a bit
> odd to enable using serial device interrupt and hence remained unused. It
> seems to be clearly intended to avoid almost all custom NMI changes to
> the UART driver.
>
> But this patch-set allows the serial core/drivers to be NMI aware which
> in turn provides NMI debugging capabilities via magic sysrq and hence
> there is no specific reason to keep this special driver. So remove it
> instead.
>
> Approach:
> -
>
> The overall idea is to intercept serial RX characters in NMI context, if
> those are specific to magic sysrq then allow corresponding handler to run
> in NMI context. Otherwise, defer all other RX and TX operations onto IRQ
> work queue in order to run those in normal interrupt context.
>
> This approach is demonstrated using amba-pl011 driver.
>
> Patch-wise description:
> ---
>
> Patch #1 prepares magic sysrq handler to be NMI aware.
> Patch #2 adds NMI framework to serial core.
> Patch #3 and #4 demonstrates NMI aware uart port using amba-pl011 driver.
> Patch #5 removes kgdb NMI serial driver.
>
> Goal of this RFC:
> -
>
> My main reason for sharing this as an RFC is to help decide whether or
> not to continue with this approach. The next step for me would to port
> the work to a system with an 8250 UART.
>

A gentle reminder to seek feedback on this series.

-Sumit

> Usage:
> --
>
> This RFC has been developed on top of 5.8-rc3 and if anyone is interested
> to give this a try on QEMU, just enable following config options
> additional to arm64 defconfig:
>
> CONFIG_KGDB=y
> CONFIG_KGDB_KDB=y
> CONFIG_ARM64_PSEUDO_NMI=y
>
> Qemu command line to test:
>
> $ qemu-system-aarch64 -nographic -machine virt,gic-version=3 -cpu cortex-a57 \
>   -smp 2 -kernel arch/arm64/boot/Image -append 'console=ttyAMA0,38400 \
>   keep_bootcon root=/dev/vda2 irqchip.gicv3_pseudo_nmi=1 kgdboc=ttyAMA0' \
>   -initrd rootfs-arm64.cpio.gz
>
> NMI entry into kgdb via sysrq:
> - Ctrl a + b + g
>
> Reference:
> --
>
> For more details about NMI/FIQ debugger, refer to this blog post [1].
>
> [1] https://www.linaro.org/blog/debugging-arm-kernels-using-nmifiq/
>
> I do look forward to your comments and feedback.
>
> Sumit Garg (5):
>   tty/sysrq: Make sysrq handler NMI aware
>   serial: core: Add framework to allow NMI aware serial drivers
>   serial: amba-pl011: Re-order APIs definition
>   serial: amba-pl011: Enable NMI aware uart port
>   serial: Remove KGDB NMI serial driver
>
>  drivers/tty/serial/Kconfig   |  19 --
>  drivers/tty/serial/Makefile  |   1 -
>  drivers/tty/serial/amba-pl011.c  | 232 +---
>  drivers/tty/serial/kgdb_nmi.c| 383 
> ---
>  drivers/tty/serial/kgdboc.c  |   8 -
>  drivers/tty/serial/serial_core.c | 120 +++-
>  drivers/tty/sysrq.c  |  33 +++-
>  include/linux/kgdb.h |  10 -
>  include/linux/serial_core.h  |  67 +++
>  include/linux/sysrq.h|   1 +
>  kernel/debug/debug_core.c|   1 +
>  11 files changed, 386 insertions(+), 489 deletions(-)
>  delete mode 100644 drivers/tty/serial/kgdb_nmi.c
>
> --
> 2.7.4
>


Re: [RFC 0/5] Introduce NMI aware serial drivers

2020-08-11 Thread Sumit Garg
Hi Greg,

Thanks for your comments.

On Tue, 11 Aug 2020 at 19:27, Greg Kroah-Hartman
 wrote:
>
> On Tue, Aug 11, 2020 at 07:20:26PM +0530, Sumit Garg wrote:
> > On Tue, 21 Jul 2020 at 17:40, Sumit Garg  wrote:
> > >
> > > Make it possible for UARTs to trigger magic sysrq from an NMI. With the
> > > advent of pseudo NMIs on arm64 it became quite generic to request serial
> > > device interrupt as an NMI rather than IRQ. And having NMI driven serial
> > > RX will allow us to trigger magic sysrq as an NMI and hence drop into
> > > kernel debugger in NMI context.
> > >
> > > The major use-case is to add NMI debugging capabilities to the kernel
> > > in order to debug scenarios such as:
> > > - Primary CPU is stuck in deadlock with interrupts disabled and hence
> > >   doesn't honor serial device interrupt. So having magic sysrq triggered
> > >   as an NMI is helpful for debugging.
> > > - Always enabled NMI based magic sysrq irrespective of whether the serial
> > >   TTY port is active or not.
> > >
> > > Currently there is an existing kgdb NMI serial driver which provides
> > > partial implementation in upstream to have a separate ttyNMI0 port but
> > > that remained in silos with the serial core/drivers which made it a bit
> > > odd to enable using serial device interrupt and hence remained unused. It
> > > seems to be clearly intended to avoid almost all custom NMI changes to
> > > the UART driver.
> > >
> > > But this patch-set allows the serial core/drivers to be NMI aware which
> > > in turn provides NMI debugging capabilities via magic sysrq and hence
> > > there is no specific reason to keep this special driver. So remove it
> > > instead.
> > >
> > > Approach:
> > > -
> > >
> > > The overall idea is to intercept serial RX characters in NMI context, if
> > > those are specific to magic sysrq then allow corresponding handler to run
> > > in NMI context. Otherwise, defer all other RX and TX operations onto IRQ
> > > work queue in order to run those in normal interrupt context.
> > >
> > > This approach is demonstrated using amba-pl011 driver.
> > >
> > > Patch-wise description:
> > > ---
> > >
> > > Patch #1 prepares magic sysrq handler to be NMI aware.
> > > Patch #2 adds NMI framework to serial core.
> > > Patch #3 and #4 demonstrates NMI aware uart port using amba-pl011 driver.
> > > Patch #5 removes kgdb NMI serial driver.
> > >
> > > Goal of this RFC:
> > > -
> > >
> > > My main reason for sharing this as an RFC is to help decide whether or
> > > not to continue with this approach. The next step for me would to port
> > > the work to a system with an 8250 UART.
> > >
> >
> > A gentle reminder to seek feedback on this series.
>
> It's the middle of the merge window, and I can't do anything.
>
> Also, I almost never review RFC patches as I have have way too many
> patches that people think are "right" to review first...
>

Okay, I understand and I can definitely wait for your feedback.

> I suggest you work to flesh this out first and submit something that you
> feels works properly.
>

IIUC, in order to make this approach substantial I need to make it
work with the 8250 UART (a major serial driver), correct? Currently it
works properly with the amba-pl011 driver.

> good luck!
>

Thanks.

-Sumit

> greg k-h


Re: [RFC 0/5] Introduce NMI aware serial drivers

2020-08-11 Thread Sumit Garg
On Tue, 11 Aug 2020 at 20:28, Greg Kroah-Hartman
 wrote:
>
> On Tue, Aug 11, 2020 at 07:59:24PM +0530, Sumit Garg wrote:
> > Hi Greg,
> >
> > Thanks for your comments.
> >
> > On Tue, 11 Aug 2020 at 19:27, Greg Kroah-Hartman
> >  wrote:
> > >
> > > On Tue, Aug 11, 2020 at 07:20:26PM +0530, Sumit Garg wrote:
> > > > On Tue, 21 Jul 2020 at 17:40, Sumit Garg  wrote:
> > > > >
> > > > > Make it possible for UARTs to trigger magic sysrq from an NMI. With 
> > > > > the
> > > > > advent of pseudo NMIs on arm64 it became quite generic to request 
> > > > > serial
> > > > > device interrupt as an NMI rather than IRQ. And having NMI driven 
> > > > > serial
> > > > > RX will allow us to trigger magic sysrq as an NMI and hence drop into
> > > > > kernel debugger in NMI context.
> > > > >
> > > > > The major use-case is to add NMI debugging capabilities to the kernel
> > > > > in order to debug scenarios such as:
> > > > > - Primary CPU is stuck in deadlock with interrupts disabled and hence
> > > > >   doesn't honor serial device interrupt. So having magic sysrq 
> > > > > triggered
> > > > >   as an NMI is helpful for debugging.
> > > > > - Always enabled NMI based magic sysrq irrespective of whether the 
> > > > > serial
> > > > >   TTY port is active or not.
> > > > >
> > > > > Currently there is an existing kgdb NMI serial driver which provides
> > > > > partial implementation in upstream to have a separate ttyNMI0 port but
> > > > > that remained in silos with the serial core/drivers which made it a 
> > > > > bit
> > > > > odd to enable using serial device interrupt and hence remained 
> > > > > unused. It
> > > > > seems to be clearly intended to avoid almost all custom NMI changes to
> > > > > the UART driver.
> > > > >
> > > > > But this patch-set allows the serial core/drivers to be NMI aware 
> > > > > which
> > > > > in turn provides NMI debugging capabilities via magic sysrq and hence
> > > > > there is no specific reason to keep this special driver. So remove it
> > > > > instead.
> > > > >
> > > > > Approach:
> > > > > -
> > > > >
> > > > > The overall idea is to intercept serial RX characters in NMI context, 
> > > > > if
> > > > > those are specific to magic sysrq then allow corresponding handler to 
> > > > > run
> > > > > in NMI context. Otherwise, defer all other RX and TX operations onto 
> > > > > IRQ
> > > > > work queue in order to run those in normal interrupt context.
> > > > >
> > > > > This approach is demonstrated using amba-pl011 driver.
> > > > >
> > > > > Patch-wise description:
> > > > > ---
> > > > >
> > > > > Patch #1 prepares magic sysrq handler to be NMI aware.
> > > > > Patch #2 adds NMI framework to serial core.
> > > > > Patch #3 and #4 demonstrates NMI aware uart port using amba-pl011 
> > > > > driver.
> > > > > Patch #5 removes kgdb NMI serial driver.
> > > > >
> > > > > Goal of this RFC:
> > > > > -
> > > > >
> > > > > My main reason for sharing this as an RFC is to help decide whether or
> > > > > not to continue with this approach. The next step for me would be to port
> > > > > the work to a system with an 8250 UART.
> > > > >
> > > >
> > > > A gentle reminder to seek feedback on this series.
> > >
> > > It's the middle of the merge window, and I can't do anything.
> > >
> > > Also, I almost never review RFC patches as I have have way too many
> > > patches that people think are "right" to review first...
> > >
> >
> > Okay, I understand and I can definitely wait for your feedback.
>
> My feedback here is this:
>
> > > I suggest you work to flesh this out first and submit something that you
> > > feels works properly.
>
> :)
>
> > IIUC, in order to make this approach substantial I need to make it
> > work with 8250 UART (major serial driver), correct? As currently it
> > works properly for amba-pl011 driver.
>
> Yes, try to do that, or better yet, make it work with all serial drivers
> automatically.

I would like to make serial drivers work automatically, but
unfortunately the interrupt request/handling code is pretty specific
to the corresponding serial driver.

BTW, I will look for ways to make it much easier for serial
drivers to adapt.

-Sumit

>
> thanks,
>
> greg k-h


Re: [RFC 0/5] Introduce NMI aware serial drivers

2020-08-12 Thread Sumit Garg
Hi Doug,

On Tue, 11 Aug 2020 at 22:46, Doug Anderson  wrote:
>
> Hi,
>
> On Tue, Aug 11, 2020 at 7:58 AM Greg Kroah-Hartman
>  wrote:
> >
> > On Tue, Aug 11, 2020 at 07:59:24PM +0530, Sumit Garg wrote:
> > > Hi Greg,
> > >
> > > Thanks for your comments.
> > >
> > > On Tue, 11 Aug 2020 at 19:27, Greg Kroah-Hartman
> > >  wrote:
> > > >
> > > > On Tue, Aug 11, 2020 at 07:20:26PM +0530, Sumit Garg wrote:
> > > > > On Tue, 21 Jul 2020 at 17:40, Sumit Garg  
> > > > > wrote:
> > > > > >
> > > > > > Make it possible for UARTs to trigger magic sysrq from an NMI. With 
> > > > > > the
> > > > > > advent of pseudo NMIs on arm64 it became quite generic to request 
> > > > > > serial
> > > > > > device interrupt as an NMI rather than IRQ. And having NMI driven 
> > > > > > serial
> > > > > > RX will allow us to trigger magic sysrq as an NMI and hence drop 
> > > > > > into
> > > > > > kernel debugger in NMI context.
> > > > > >
> > > > > > The major use-case is to add NMI debugging capabilities to the 
> > > > > > kernel
> > > > > > in order to debug scenarios such as:
> > > > > > - Primary CPU is stuck in deadlock with interrupts disabled and 
> > > > > > hence
> > > > > >   doesn't honor serial device interrupt. So having magic sysrq 
> > > > > > triggered
> > > > > >   as an NMI is helpful for debugging.
> > > > > > - Always enabled NMI based magic sysrq irrespective of whether the 
> > > > > > serial
> > > > > >   TTY port is active or not.
> > > > > >
> > > > > > Currently there is an existing kgdb NMI serial driver which provides
> > > > > > partial implementation in upstream to have a separate ttyNMI0 port 
> > > > > > but
> > > > > > that remained in silos with the serial core/drivers which made it a 
> > > > > > bit
> > > > > > odd to enable using serial device interrupt and hence remained 
> > > > > > unused. It
> > > > > > seems to be clearly intended to avoid almost all custom NMI changes 
> > > > > > to
> > > > > > the UART driver.
> > > > > >
> > > > > > But this patch-set allows the serial core/drivers to be NMI aware 
> > > > > > which
> > > > > > in turn provides NMI debugging capabilities via magic sysrq and 
> > > > > > hence
> > > > > > there is no specific reason to keep this special driver. So remove 
> > > > > > it
> > > > > > instead.
> > > > > >
> > > > > > Approach:
> > > > > > -
> > > > > >
> > > > > > The overall idea is to intercept serial RX characters in NMI 
> > > > > > context, if
> > > > > > those are specific to magic sysrq then allow corresponding handler 
> > > > > > to run
> > > > > > in NMI context. Otherwise, defer all other RX and TX operations 
> > > > > > onto IRQ
> > > > > > work queue in order to run those in normal interrupt context.
> > > > > >
> > > > > > This approach is demonstrated using amba-pl011 driver.
> > > > > >
> > > > > > Patch-wise description:
> > > > > > ---
> > > > > >
> > > > > > Patch #1 prepares magic sysrq handler to be NMI aware.
> > > > > > Patch #2 adds NMI framework to serial core.
> > > > > > Patch #3 and #4 demonstrates NMI aware uart port using amba-pl011 
> > > > > > driver.
> > > > > > Patch #5 removes kgdb NMI serial driver.
> > > > > >
> > > > > > Goal of this RFC:
> > > > > > -
> > > > > >
> > > > > > My main reason for sharing this as an RFC is to help decide whether 
> > > > > > or
> > > > > > not to continue with this approach. The next step for me would be to 
> > > > > > port
> > > > > > the work to a system with an 8250 UART.
> > > > > >
> > > > >

Re: [PATCHv2 2/2] hwrng: optee: fix wait use case

2020-08-05 Thread Sumit Garg
Apologies for my delayed response as I was busy with some other tasks
along with holidays.

On Fri, 24 Jul 2020 at 19:53, Jorge Ramirez-Ortiz, Foundries
 wrote:
>
> On 24/07/20, Sumit Garg wrote:
> > On Thu, 23 Jul 2020 at 14:16, Jorge Ramirez-Ortiz  
> > wrote:
> > >
> > > The current code waits for data to be available before attempting a
> > > second read. However the second read would not be executed as the
> > > while loop exits.
> > >
> > > This fix does not wait if all data has been read and reads a second
> > > time if only partial data was retrieved on the first read.
> > >
> > > This fix also does not attempt to read if not data is requested.
> >
> > I am not sure how this is possible, can you elaborate?
>
> currently, if the user sets max 0, get_optee_rng_data will regardless
> issuese a call to the secure world requesting 0 bytes from the RNG
>

This case is already handled by the core API: rng_dev_read().

> with this patch, this request is avoided.
>
> >
> > >
> > > Signed-off-by: Jorge Ramirez-Ortiz 
> > > ---
> > >  v2: tidy up the while loop to avoid reading when no data is requested
> > >
> > >  drivers/char/hw_random/optee-rng.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/char/hw_random/optee-rng.c 
> > > b/drivers/char/hw_random/optee-rng.c
> > > index 5bc4700c4dae..a99d82949981 100644
> > > --- a/drivers/char/hw_random/optee-rng.c
> > > +++ b/drivers/char/hw_random/optee-rng.c
> > > @@ -122,14 +122,14 @@ static int optee_rng_read(struct hwrng *rng, void 
> > > *buf, size_t max, bool wait)
> > > if (max > MAX_ENTROPY_REQ_SZ)
> > > max = MAX_ENTROPY_REQ_SZ;
> > >
> > > -   while (read == 0) {
> > > +   while (read < max) {
> > > rng_size = get_optee_rng_data(pvt_data, data, (max - 
> > > read));
> > >
> > > data += rng_size;
> > > read += rng_size;
> > >
> > > if (wait && pvt_data->data_rate) {
> > > -   if (timeout-- == 0)
> > > +   if ((timeout-- == 0) || (read == max))
> >
> > If read == max, would there be any sleep?
>
> no but I see no reason why there should be a wait since we already have
> all the data that we need; the msleep is only required when we need to
> wait for the RNG to generate entropy for the number of bytes we are
> requesting. if we are requesting 0 bytes, the entropy is already
> available. at leat this is what makes sense to me.
>

Wouldn't it lead to a call like msleep(0), which means no wait as well?

-Sumit

>
> >
> > -Sumit
> >
> > > return read;
> > > msleep((1000 * (max - read)) / 
> > > pvt_data->data_rate);
> > > } else {
> > > --
> > > 2.17.1
> > >


Re: [PATCHv2 2/2] hwrng: optee: fix wait use case

2020-08-05 Thread Sumit Garg
On Thu, 6 Aug 2020 at 02:08, Jorge Ramirez-Ortiz, Foundries
 wrote:
>
> On 05/08/20, Sumit Garg wrote:
> > Apologies for my delayed response as I was busy with some other tasks
> > along with holidays.
>
> no pb! was just making sure this wasnt falling through some cracks.
>
> >
> > On Fri, 24 Jul 2020 at 19:53, Jorge Ramirez-Ortiz, Foundries
> >  wrote:
> > >
> > > On 24/07/20, Sumit Garg wrote:
> > > > On Thu, 23 Jul 2020 at 14:16, Jorge Ramirez-Ortiz  
> > > > wrote:
> > > > >
> > > > > The current code waits for data to be available before attempting a
> > > > > second read. However the second read would not be executed as the
> > > > > while loop exits.
> > > > >
> > > > > This fix does not wait if all data has been read and reads a second
> > > > > time if only partial data was retrieved on the first read.
> > > > >
> > > > > This fix also does not attempt to read if not data is requested.
> > > >
> > > > I am not sure how this is possible, can you elaborate?
> > >
> > > currently, if the user sets max 0, get_optee_rng_data will regardless
> > > issuese a call to the secure world requesting 0 bytes from the RNG
> > >
> >
> > This case is already handled by core API: rng_dev_read().
>
> ah ok good point, you are right
> but yeah, there is no consequence to the actual patch.
>

So, at least you could get rid of the corresponding text from the commit message.

> >
> > > with this patch, this request is avoided.
> > >
> > > >
> > > > >
> > > > > Signed-off-by: Jorge Ramirez-Ortiz 
> > > > > ---
> > > > >  v2: tidy up the while loop to avoid reading when no data is requested
> > > > >
> > > > >  drivers/char/hw_random/optee-rng.c | 4 ++--
> > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/char/hw_random/optee-rng.c 
> > > > > b/drivers/char/hw_random/optee-rng.c
> > > > > index 5bc4700c4dae..a99d82949981 100644
> > > > > --- a/drivers/char/hw_random/optee-rng.c
> > > > > +++ b/drivers/char/hw_random/optee-rng.c
> > > > > @@ -122,14 +122,14 @@ static int optee_rng_read(struct hwrng *rng, 
> > > > > void *buf, size_t max, bool wait)
> > > > > if (max > MAX_ENTROPY_REQ_SZ)
> > > > > max = MAX_ENTROPY_REQ_SZ;
> > > > >
> > > > > -   while (read == 0) {
> > > > > +   while (read < max) {
> > > > > rng_size = get_optee_rng_data(pvt_data, data, (max - 
> > > > > read));
> > > > >
> > > > > data += rng_size;
> > > > > read += rng_size;
> > > > >
> > > > > if (wait && pvt_data->data_rate) {
> > > > > -   if (timeout-- == 0)
> > > > > +   if ((timeout-- == 0) || (read == max))
> > > >
> > > > If read == max, would there be any sleep?
> > >
> > > no but I see no reason why there should be a wait since we already have
> > > all the data that we need; the msleep is only required when we need to
> > > wait for the RNG to generate entropy for the number of bytes we are
> > > requesting. if we are requesting 0 bytes, the entropy is already
> > > available. at leat this is what makes sense to me.
> > >
> >
> > Wouldn't it lead to a call as msleep(0); that means no wait as well?
>
> I dont understand: there is no reason to wait if read == max and this
> patch will not wait: if read == max it calls 'return read'
>
> am I misunderstanding your point?

What I mean is that we shouldn't require this extra check here, as
the existing implementation didn't wait when read == max either.
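
For reference, below is a minimal, hypothetical sketch (not the actual
driver code) of the same loop from the quoted v2 patch with the extra
read == max check dropped; it is only a fragment to illustrate the
control-flow point above, with get_optee_rng_data() and the surrounding
locals assumed to be as in the patch:

	/* Sketch only: mirrors the quoted v2 loop, minus the extra check. */
	while (read < max) {
		rng_size = get_optee_rng_data(pvt_data, data, max - read);

		data += rng_size;
		read += rng_size;

		if (wait && pvt_data->data_rate) {
			if (timeout-- == 0)
				return read;
			/*
			 * When read == max, the argument below is 0 and the
			 * while condition ends the loop on the next pass.
			 */
			msleep((1000 * (max - read)) / pvt_data->data_rate);
		} else {
			return read;
		}
	}
	return read;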

-Sumit

>
> >
> > -Sumit
> >
> > >
> > > >
> > > > -Sumit
> > > >
> > > > > return read;
> > > > > msleep((1000 * (max - read)) / 
> > > > > pvt_data->data_rate);
> > > > > } else {
> > > > > --
> > > > > 2.17.1
> > > > >


Re: [PATCHv2 2/2] hwrng: optee: fix wait use case

2020-08-05 Thread Sumit Garg
On Thu, 6 Aug 2020 at 12:00, Jorge Ramirez-Ortiz, Foundries
 wrote:
>
> On 06/08/20, Sumit Garg wrote:
> > On Thu, 6 Aug 2020 at 02:08, Jorge Ramirez-Ortiz, Foundries
> >  wrote:
> > >
> > > On 05/08/20, Sumit Garg wrote:
> > > > Apologies for my delayed response as I was busy with some other tasks
> > > > along with holidays.
> > >
> > > no pb! was just making sure this wasnt falling through some cracks.
> > >
> > > >
> > > > On Fri, 24 Jul 2020 at 19:53, Jorge Ramirez-Ortiz, Foundries
> > > >  wrote:
> > > > >
> > > > > On 24/07/20, Sumit Garg wrote:
> > > > > > On Thu, 23 Jul 2020 at 14:16, Jorge Ramirez-Ortiz 
> > > > > >  wrote:
> > > > > > >
> > > > > > > The current code waits for data to be available before attempting 
> > > > > > > a
> > > > > > > second read. However the second read would not be executed as the
> > > > > > > while loop exits.
> > > > > > >
> > > > > > > This fix does not wait if all data has been read and reads a 
> > > > > > > second
> > > > > > > time if only partial data was retrieved on the first read.
> > > > > > >
> > > > > > > This fix also does not attempt to read if not data is requested.
> > > > > >
> > > > > > I am not sure how this is possible, can you elaborate?
> > > > >
> > > > > currently, if the user sets max 0, get_optee_rng_data will regardless
> > > > > issuese a call to the secure world requesting 0 bytes from the RNG
> > > > >
> > > >
> > > > This case is already handled by core API: rng_dev_read().
> > >
> > > ah ok good point, you are right
> > > but yeah, there is no consequence to the actual patch.
> > >
> >
> > So, at least you could get rid of the corresponding text from commit 
> > message.
> >
> > > >
> > > > > with this patch, this request is avoided.
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Signed-off-by: Jorge Ramirez-Ortiz 
> > > > > > > ---
> > > > > > >  v2: tidy up the while loop to avoid reading when no data is 
> > > > > > > requested
> > > > > > >
> > > > > > >  drivers/char/hw_random/optee-rng.c | 4 ++--
> > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/char/hw_random/optee-rng.c 
> > > > > > > b/drivers/char/hw_random/optee-rng.c
> > > > > > > index 5bc4700c4dae..a99d82949981 100644
> > > > > > > --- a/drivers/char/hw_random/optee-rng.c
> > > > > > > +++ b/drivers/char/hw_random/optee-rng.c
> > > > > > > @@ -122,14 +122,14 @@ static int optee_rng_read(struct hwrng 
> > > > > > > *rng, void *buf, size_t max, bool wait)
> > > > > > > if (max > MAX_ENTROPY_REQ_SZ)
> > > > > > > max = MAX_ENTROPY_REQ_SZ;
> > > > > > >
> > > > > > > -   while (read == 0) {
> > > > > > > +   while (read < max) {
> > > > > > > rng_size = get_optee_rng_data(pvt_data, data, 
> > > > > > > (max - read));
> > > > > > >
> > > > > > > data += rng_size;
> > > > > > > read += rng_size;
> > > > > > >
> > > > > > > if (wait && pvt_data->data_rate) {
> > > > > > > -   if (timeout-- == 0)
> > > > > > > +   if ((timeout-- == 0) || (read == max))
> > > > > >
> > > > > > If read == max, would there be any sleep?
> > > > >
> > > > > no but I see no reason why there should be a wait since we already 
> > > > > have
> > > > > all the data that we need; the msleep is only required when we need to
> > > > > wait for the RNG to generate entropy for the number of bytes we are
> > > > > requesting. if we are requesting 0 bytes, the entropy is already
> > > > > 

Re: [PATCHv2 2/2] hwrng: optee: fix wait use case

2020-08-06 Thread Sumit Garg
On Thu, 6 Aug 2020 at 13:44, Jorge Ramirez-Ortiz, Foundries
 wrote:
>
> On 06/08/20, Sumit Garg wrote:
> > On Thu, 6 Aug 2020 at 12:00, Jorge Ramirez-Ortiz, Foundries
> >  wrote:
> > >
> > > On 06/08/20, Sumit Garg wrote:
> > > > On Thu, 6 Aug 2020 at 02:08, Jorge Ramirez-Ortiz, Foundries
> > > >  wrote:
> > > > >
> > > > > On 05/08/20, Sumit Garg wrote:
> > > > > > Apologies for my delayed response as I was busy with some other 
> > > > > > tasks
> > > > > > along with holidays.
> > > > >
> > > > > no pb! was just making sure this wasnt falling through some cracks.
> > > > >
> > > > > >
> > > > > > On Fri, 24 Jul 2020 at 19:53, Jorge Ramirez-Ortiz, Foundries
> > > > > >  wrote:
> > > > > > >
> > > > > > > On 24/07/20, Sumit Garg wrote:
> > > > > > > > On Thu, 23 Jul 2020 at 14:16, Jorge Ramirez-Ortiz 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > The current code waits for data to be available before 
> > > > > > > > > attempting a
> > > > > > > > > second read. However the second read would not be executed as 
> > > > > > > > > the
> > > > > > > > > while loop exits.
> > > > > > > > >
> > > > > > > > > This fix does not wait if all data has been read and reads a 
> > > > > > > > > second
> > > > > > > > > time if only partial data was retrieved on the first read.
> > > > > > > > >
> > > > > > > > > This fix also does not attempt to read if not data is 
> > > > > > > > > requested.
> > > > > > > >
> > > > > > > > I am not sure how this is possible, can you elaborate?
> > > > > > >
> > > > > > > currently, if the user sets max 0, get_optee_rng_data will 
> > > > > > > regardless
> > > > > > > issuese a call to the secure world requesting 0 bytes from the RNG
> > > > > > >
> > > > > >
> > > > > > This case is already handled by core API: rng_dev_read().
> > > > >
> > > > > ah ok good point, you are right
> > > > > but yeah, there is no consequence to the actual patch.
> > > > >
> > > >
> > > > So, at least you could get rid of the corresponding text from commit 
> > > > message.
> > > >
> > > > > >
> > > > > > > with this patch, this request is avoided.
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Signed-off-by: Jorge Ramirez-Ortiz 
> > > > > > > > > ---
> > > > > > > > >  v2: tidy up the while loop to avoid reading when no data is 
> > > > > > > > > requested
> > > > > > > > >
> > > > > > > > >  drivers/char/hw_random/optee-rng.c | 4 ++--
> > > > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/char/hw_random/optee-rng.c 
> > > > > > > > > b/drivers/char/hw_random/optee-rng.c
> > > > > > > > > index 5bc4700c4dae..a99d82949981 100644
> > > > > > > > > --- a/drivers/char/hw_random/optee-rng.c
> > > > > > > > > +++ b/drivers/char/hw_random/optee-rng.c
> > > > > > > > > @@ -122,14 +122,14 @@ static int optee_rng_read(struct hwrng 
> > > > > > > > > *rng, void *buf, size_t max, bool wait)
> > > > > > > > > if (max > MAX_ENTROPY_REQ_SZ)
> > > > > > > > > max = MAX_ENTROPY_REQ_SZ;
> > > > > > > > >
> > > > > > > > > -   while (read == 0) {
> > > > > > > > > +   while (read < max) {
> > > > > > > > > rng_size = get_optee_rng_data(pvt_data, data, 
> > > > > > > > > (max - read));
> > > > > >

Re: [RFC 2/5] serial: core: Add framework to allow NMI aware serial drivers

2020-08-18 Thread Sumit Garg
On Mon, 17 Aug 2020 at 19:58, Daniel Thompson
 wrote:
>
> On Mon, Aug 17, 2020 at 05:57:03PM +0530, Sumit Garg wrote:
> > On Fri, 14 Aug 2020 at 19:43, Daniel Thompson
> >  wrote:
> > > On Fri, Aug 14, 2020 at 04:47:11PM +0530, Sumit Garg wrote:
> > > Does it look better if you create a new type to map the two structures
> > > together. Alternatively are there enough existing use-cases to want to
> > > extend irq_work_queue() with irq_work_schedule() or something similar?
> > >
> >
> > Thanks for your suggestion, irq_work_schedule() looked even better
> > without any overhead, see below:
> >
> > diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
> > index 3082378..1eade89 100644
> > --- a/include/linux/irq_work.h
> > +++ b/include/linux/irq_work.h
> > @@ -3,6 +3,7 @@
> >  #define _LINUX_IRQ_WORK_H
> >
> >  #include 
> > +#include 
> >
> >  /*
> >   * An entry can be in one of four states:
> > @@ -24,6 +25,11 @@ struct irq_work {
> > void (*func)(struct irq_work *);
> >  };
> >
> > +struct irq_work_schedule {
> > +   struct irq_work work;
> > +   struct work_struct *sched_work;
> > +};
> > +
> >  static inline
> >  void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
> >  {
> > @@ -39,6 +45,7 @@ void init_irq_work(struct irq_work *work, void
> > (*func)(struct irq_work *))
> >
> >  bool irq_work_queue(struct irq_work *work);
> >  bool irq_work_queue_on(struct irq_work *work, int cpu);
> > +bool irq_work_schedule(struct work_struct *sched_work);
> >
> >  void irq_work_tick(void);
> >  void irq_work_sync(struct irq_work *work);
> > diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> > index eca8396..3880316 100644
> > --- a/kernel/irq_work.c
> > +++ b/kernel/irq_work.c
> > @@ -24,6 +24,8 @@
> >  static DEFINE_PER_CPU(struct llist_head, raised_list);
> >  static DEFINE_PER_CPU(struct llist_head, lazy_list);
> >
> > +static struct irq_work_schedule irq_work_sched;
> > +
> >  /*
> >   * Claim the entry so that no one else will poke at it.
> >   */
> > @@ -79,6 +81,25 @@ bool irq_work_queue(struct irq_work *work)
> >  }
> >  EXPORT_SYMBOL_GPL(irq_work_queue);
> >
> > +static void irq_work_schedule_fn(struct irq_work *work)
> > +{
> > +   struct irq_work_schedule *irq_work_sched =
> > +   container_of(work, struct irq_work_schedule, work);
> > +
> > +   if (irq_work_sched->sched_work)
> > +   schedule_work(irq_work_sched->sched_work);
> > +}
> > +
> > +/* Schedule work via irq work queue */
> > +bool irq_work_schedule(struct work_struct *sched_work)
> > +{
> > +   init_irq_work(&irq_work_sched.work, irq_work_schedule_fn);
> > +   irq_work_sched.sched_work = sched_work;
> > +
> > +   return irq_work_queue(&irq_work_sched.work);
> > +}
> > +EXPORT_SYMBOL_GPL(irq_work_schedule);
> > +
>
> This is irredeemably broken.
>
> Even if we didn't care about dropping events (which we do) then when you
> overwrite irq_work_sched with a copy of another work_struct, either of
> which could currently be enqueued somewhere, then you will cause some
> very nasty corruption.
>

Okay, I see your point. I think there isn't a way to avoid a
caller-specific struct such as:

struct nmi_queuable_work_struct {
  struct work_struct work;
  struct irq_work iw;
};

So in that case I will shift to the approach suggested by Doug and
rather have a new nmi_schedule_work() API.
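
For completeness, here is a minimal sketch of how such a caller-specific
wrapper could look. This is only an illustration of the idea; the names
nmi_queuable_work_struct, nmi_init_work() and nmi_schedule_work() are
placeholders and not an existing kernel API:

#include <linux/irq_work.h>
#include <linux/workqueue.h>

struct nmi_queuable_work_struct {
	struct work_struct work;
	struct irq_work iw;
};

/* Runs in IRQ context, where schedule_work() is safe to call. */
static void nmi_work_trampoline(struct irq_work *iw)
{
	struct nmi_queuable_work_struct *nwork =
		container_of(iw, struct nmi_queuable_work_struct, iw);

	schedule_work(&nwork->work);
}

static inline void nmi_init_work(struct nmi_queuable_work_struct *nwork,
				 work_func_t func)
{
	INIT_WORK(&nwork->work, func);
	init_irq_work(&nwork->iw, nmi_work_trampoline);
}

/* Callable from NMI context: each caller owns its own nwork instance. */
static inline bool nmi_schedule_work(struct nmi_queuable_work_struct *nwork)
{
	return irq_work_queue(&nwork->iw);
}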

-Sumit

>
> Daniel.


Re: [RFC 2/5] serial: core: Add framework to allow NMI aware serial drivers

2020-08-18 Thread Sumit Garg
On Mon, 17 Aug 2020 at 20:02, Daniel Thompson
 wrote:
>
> On Mon, Aug 17, 2020 at 07:53:55PM +0530, Sumit Garg wrote:
> > On Mon, 17 Aug 2020 at 19:27, Doug Anderson  wrote:
> > >
> > > Hi,
> > >
> > > On Mon, Aug 17, 2020 at 5:27 AM Sumit Garg  wrote:
> > > >
> > > > Thanks for your suggestion, irq_work_schedule() looked even better
> > > > without any overhead, see below:
> > > >
> > > > diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
> > > > index 3082378..1eade89 100644
> > > > --- a/include/linux/irq_work.h
> > > > +++ b/include/linux/irq_work.h
> > > > @@ -3,6 +3,7 @@
> > > >  #define _LINUX_IRQ_WORK_H
> > > >
> > > >  #include 
> > > > +#include 
> > > >
> > > >  /*
> > > >   * An entry can be in one of four states:
> > > > @@ -24,6 +25,11 @@ struct irq_work {
> > > > void (*func)(struct irq_work *);
> > > >  };
> > > >
> > > > +struct irq_work_schedule {
> > > > +   struct irq_work work;
> > > > +   struct work_struct *sched_work;
> > > > +};
> > > > +
> > > >  static inline
> > > >  void init_irq_work(struct irq_work *work, void (*func)(struct irq_work 
> > > > *))
> > > >  {
> > > > @@ -39,6 +45,7 @@ void init_irq_work(struct irq_work *work, void
> > > > (*func)(struct irq_work *))
> > > >
> > > >  bool irq_work_queue(struct irq_work *work);
> > > >  bool irq_work_queue_on(struct irq_work *work, int cpu);
> > > > +bool irq_work_schedule(struct work_struct *sched_work);
> > > >
> > > >  void irq_work_tick(void);
> > > >  void irq_work_sync(struct irq_work *work);
> > > > diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> > > > index eca8396..3880316 100644
> > > > --- a/kernel/irq_work.c
> > > > +++ b/kernel/irq_work.c
> > > > @@ -24,6 +24,8 @@
> > > >  static DEFINE_PER_CPU(struct llist_head, raised_list);
> > > >  static DEFINE_PER_CPU(struct llist_head, lazy_list);
> > > >
> > > > +static struct irq_work_schedule irq_work_sched;
> > > > +
> > > >  /*
> > > >   * Claim the entry so that no one else will poke at it.
> > > >   */
> > > > @@ -79,6 +81,25 @@ bool irq_work_queue(struct irq_work *work)
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(irq_work_queue);
> > > >
> > > > +static void irq_work_schedule_fn(struct irq_work *work)
> > > > +{
> > > > +   struct irq_work_schedule *irq_work_sched =
> > > > +   container_of(work, struct irq_work_schedule, work);
> > > > +
> > > > +   if (irq_work_sched->sched_work)
> > > > +   schedule_work(irq_work_sched->sched_work);
> > > > +}
> > > > +
> > > > +/* Schedule work via irq work queue */
> > > > +bool irq_work_schedule(struct work_struct *sched_work)
> > > > +{
> > > > +   init_irq_work(&irq_work_sched.work, irq_work_schedule_fn);
> > > > +   irq_work_sched.sched_work = sched_work;
> > > > +
> > > > +   return irq_work_queue(&irq_work_sched.work);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(irq_work_schedule);
> > >
> > > Wait, howzat work?  There's a single global variable that you stash
> > > the "sched_work" into with no locking?  What if two people schedule
> > > work at the same time?
> >
> > This API is intended to be invoked from NMI context only, so I think
> > there will be a single user at a time.
>
> How can you possibly know that?

I guess here you are referring to NMI nesting, correct?

Anyway, I am going to shift to another implementation as mentioned in
the other thread.

-Sumit

>
> This is library code, not a helper in a driver.
>
>
> Daniel.
>
>
> > And we can make that explicit
> > as well:
> >
> > +/* Schedule work via irq work queue */
> > +bool irq_work_schedule(struct work_struct *sched_work)
> > +{
> > +   if (in_nmi()) {
> > +   init_irq_work(&irq_work_sched.work, irq_work_schedule_fn);
> > +   irq_work_sched.sched_work = sched_work;
> > +
> > +   return irq_work_queue(&irq_work_sched.work);
> > +   }
> > +
> > +   return false;
> > +}
> > +EXPORT_SYMBOL_GPL(irq_work_schedule);
> >
> > -Sumit
> >
> > >
> > > -Doug


Re: [RFC 1/5] tty/sysrq: Make sysrq handler NMI aware

2020-08-18 Thread Sumit Garg
On Mon, 17 Aug 2020 at 22:49, Doug Anderson  wrote:
>
> Hi,
>
> On Mon, Aug 17, 2020 at 7:08 AM Sumit Garg  wrote:
> >
> > On Fri, 14 Aug 2020 at 20:27, Doug Anderson  wrote:
> > >
> > > Hi,
> > >
> > > On Fri, Aug 14, 2020 at 12:24 AM Sumit Garg  wrote:
> > > >
> > > > + Peter (author of irq_work.c)
> > > >
> > > > On Thu, 13 Aug 2020 at 05:30, Doug Anderson  
> > > > wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Tue, Jul 21, 2020 at 5:10 AM Sumit Garg  
> > > > > wrote:
> > > > > >
> > > > > > In a future patch we will add support to the serial core to make it
> > > > > > possible to trigger a magic sysrq from an NMI context. Prepare for 
> > > > > > this
> > > > > > by marking some sysrq actions as NMI safe. Safe actions will be 
> > > > > > allowed
> > > > > > to run from NMI context whilst that cannot run from an NMI will be 
> > > > > > queued
> > > > > > as irq_work for later processing.
> > > > > >
> > > > > > A particular sysrq handler is only marked as NMI safe in case the 
> > > > > > handler
> > > > > > isn't contending for any synchronization primitives as in NMI 
> > > > > > context
> > > > > > they are expected to cause deadlocks. Note that the debug sysrq do 
> > > > > > not
> > > > > > contend for any synchronization primitives. It does call 
> > > > > > kgdb_breakpoint()
> > > > > > to provoke a trap but that trap handler should be NMI safe on
> > > > > > architectures that implement an NMI.
> > > > > >
> > > > > > Signed-off-by: Sumit Garg 
> > > > > > ---
> > > > > >  drivers/tty/sysrq.c   | 33 -
> > > > > >  include/linux/sysrq.h |  1 +
> > > > > >  kernel/debug/debug_core.c |  1 +
> > > > > >  3 files changed, 34 insertions(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
> > > > > > index 7c95afa9..8017e33 100644
> > > > > > --- a/drivers/tty/sysrq.c
> > > > > > +++ b/drivers/tty/sysrq.c
> > > > > > @@ -50,6 +50,8 @@
> > > > > >  #include 
> > > > > >  #include 
> > > > > >  #include 
> > > > > > +#include 
> > > > > > +#include 
> > > > > >
> > > > > >  #include 
> > > > > >  #include 
> > > > > > @@ -111,6 +113,7 @@ static const struct sysrq_key_op 
> > > > > > sysrq_loglevel_op = {
> > > > > > .help_msg   = "loglevel(0-9)",
> > > > > > .action_msg = "Changing Loglevel",
> > > > > > .enable_mask= SYSRQ_ENABLE_LOG,
> > > > > > +   .nmi_safe   = true,
> > > > > >  };
> > > > > >
> > > > > >  #ifdef CONFIG_VT
> > > > > > @@ -157,6 +160,7 @@ static const struct sysrq_key_op sysrq_crash_op 
> > > > > > = {
> > > > > > .help_msg   = "crash(c)",
> > > > > > .action_msg = "Trigger a crash",
> > > > > > .enable_mask= SYSRQ_ENABLE_DUMP,
> > > > > > +   .nmi_safe   = true,
> > > > > >  };
> > > > > >
> > > > > >  static void sysrq_handle_reboot(int key)
> > > > > > @@ -170,6 +174,7 @@ static const struct sysrq_key_op 
> > > > > > sysrq_reboot_op = {
> > > > > > .help_msg   = "reboot(b)",
> > > > > > .action_msg = "Resetting",
> > > > > > .enable_mask= SYSRQ_ENABLE_BOOT,
> > > > > > +   .nmi_safe   = true,
> > > > > >  };
> > > > > >
> > > > > >  const struct sysrq_key_op *__sysrq_reboot_op = &sysrq_reboot_op;
> > > > > > @@ -217,6 +222,7 @@ static const struct sysrq_key_op 
> > > > > > sysrq_showlocks_op = {
> > > > > > .handler= sys

Re: [PATCH v8 0/4] Introduce TEE based Trusted Keys support

2020-12-08 Thread Sumit Garg
Hi Jarkko,

Apologies for the delay in my response as I was busy with other high
priority work.

On Fri, 4 Dec 2020 at 10:46, Jarkko Sakkinen  wrote:
>
> On Fri, Nov 06, 2020 at 04:52:52PM +0200, Jarkko Sakkinen wrote:
> > On Fri, Nov 06, 2020 at 03:02:41PM +0530, Sumit Garg wrote:
> > > On Thu, 5 Nov 2020 at 10:37, Jarkko Sakkinen  wrote:
> > > >
> > > > On Tue, Nov 03, 2020 at 09:31:42PM +0530, Sumit Garg wrote:
> > > > > Add support for TEE based trusted keys where TEE provides the 
> > > > > functionality
> > > > > to seal and unseal trusted keys using hardware unique key. Also, this 
> > > > > is
> > > > > an alternative in case platform doesn't possess a TPM device.
> > > > >
> > > > > This patch-set has been tested with OP-TEE based early TA which is 
> > > > > already
> > > > > merged in upstream [1].
> > > >
> > > > Is the new RPI400 computer a platform that can be used for testing
> > > > patch sets like this? I've been looking for a while something ARM64
> > > > based with similar convenience as Intel NUC's, and on the surface
> > > > this new RPI product looks great for kernel testing purposes.
> > >
> > > Here [1] is the list of supported versions of Raspberry Pi in OP-TEE.
> > > The easiest approach would be to pick up a supported version or else
> > > do an OP-TEE port for an unsupported one (which should involve minimal
> > > effort).
> > >
> > > [1] 
> > > https://optee.readthedocs.io/en/latest/building/devices/rpi3.html#what-versions-of-raspberry-pi-will-work
> > >
> > > -Sumit
> >
> > If porting is doable, then I'll just order RPI 400, and test with QEMU
> > up until either I port OP-TEE myself or someone else does it.
> >
> > For seldom ARM testing, RPI 400 is really convenient device with its
> > boxed form factor.
>
> I'm now a proud owner of Raspberry Pi 400 home computer :-)
>
> I also found instructions on how to boot a custom OS from a USB stick:
>
> https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md
>
> Also, my favorite build system BuildRoot has bunch of of the shelf
> configs:
>
> ➜  buildroot-sgx (master) ✔ ls -1 configs | grep raspberry
> raspberrypi0_defconfig
> raspberrypi0w_defconfig
> raspberrypi2_defconfig
> raspberrypi3_64_defconfig
> raspberrypi3_defconfig
> raspberrypi3_qt5we_defconfig
> raspberrypi4_64_defconfig
> raspberrypi4_defconfig
> raspberrypi_defconfig
>
> I.e. I'm capable of compiling kernel and user space and boot it up
> with it.
>
> Further, I can select this compilation option:
>
> BR2_TARGET_OPTEE_OS:
>
>   OP-TEE OS provides the secure world boot image and the trust
>   application development kit of the OP-TEE project. OP-TEE OS
>   also provides generic trusted application one can embedded
>   into its system.
>
>   http://github.com/OP-TEE/optee_os
>
> Is that what I want? If I put this all together and apply your patches,
> should the expectation be that I can use trusted keys?
>

Firstly, you need to do an OP-TEE port for the RPI 400 (refer to [1] for
guidelines). Then, in order to boot up OP-TEE on the RPI 400, you can
refer to the Raspberry Pi 3 build instructions [2].

[1] https://optee.readthedocs.io/en/latest/architecture/porting_guidelines.html
[2] 
https://optee.readthedocs.io/en/latest/building/devices/rpi3.html#build-instructions

> Please note that I had a few remarks about your patches (minor but need
> to be fixed), but this version is already solid enough for testing.
>

Sure, I will incorporate your remarks and Randy's documentation
comments in the next version.

-Sumit

> /Jarkko


Re: [PATCH v8 2/4] KEYS: trusted: Introduce TEE based Trusted Keys

2021-01-13 Thread Sumit Garg
Hi Jarkko,

On Mon, 11 Jan 2021 at 22:05, Jarkko Sakkinen  wrote:
>
> On Tue, Nov 03, 2020 at 09:31:44PM +0530, Sumit Garg wrote:
> > Add support for TEE based trusted keys where TEE provides the functionality
> > to seal and unseal trusted keys using hardware unique key.
> >
> > Refer to Documentation/tee.txt for detailed information about TEE.
> >
> > Signed-off-by: Sumit Garg 
>
> I haven't yet got QEMU environment working with aarch64, this produces
> just a blank screen:
>
> ./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu cortex-a53 -smp 1 
> -kernel output/images/Image -initrd output/images/rootfs.cpio -serial stdio
>
> My BuildRoot fork for TPM and keyring testing is located over here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/buildroot-tpmdd.git/
>
> The "ARM version" is at this point in aarch64 branch. Over time I will
> define tpmdd-x86_64 and tpmdd-aarch64 boards and everything will be then
> in the master branch.
>
> To create identical images you just need to
>
> $ make tpmdd_defconfig && make
>
> Can you check if you see anything obviously wrong? I'm eager to test this
> patch set, and in bigger picture I really need to have ready to run
> aarch64 environment available.

I would rather suggest you follow the steps listed here [1], since to test
this feature on QEMU aarch64 we need to build firmware such as TF-A,
OP-TEE, UEFI etc., which are all integrated into the OP-TEE QEMU build
system [2]. It would then be easier to migrate them to your
buildroot environment as well.

[1] https://lists.trustedfirmware.org/pipermail/op-tee/2020-May/27.html
[2] https://optee.readthedocs.io/en/latest/building/devices/qemu.html#qemu-v8

-Sumit

>
> /Jarkko


[PATCH] kdb: Simplify kdb commands registration

2021-01-19 Thread Sumit Garg
Simplify kdb commands registration by using a linked list instead of a
static array for command storage.

Signed-off-by: Sumit Garg 
---
 kernel/debug/kdb/kdb_main.c| 78 ++
 kernel/debug/kdb/kdb_private.h |  1 +
 2 files changed, 20 insertions(+), 59 deletions(-)

diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 930ac1b..93ac0f5 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -84,15 +85,8 @@ static unsigned int kdb_continue_catastrophic =
 static unsigned int kdb_continue_catastrophic;
 #endif
 
-/* kdb_commands describes the available commands. */
-static kdbtab_t *kdb_commands;
-#define KDB_BASE_CMD_MAX 50
-static int kdb_max_commands = KDB_BASE_CMD_MAX;
-static kdbtab_t kdb_base_commands[KDB_BASE_CMD_MAX];
-#define for_each_kdbcmd(cmd, num)  \
-   for ((cmd) = kdb_base_commands, (num) = 0;  \
-num < kdb_max_commands;\
-num++, num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++)
+/* kdb_cmds_head describes the available commands. */
+static LIST_HEAD(kdb_cmds_head);
 
 typedef struct _kdbmsg {
int km_diag;/* kdb diagnostic */
@@ -921,7 +915,7 @@ int kdb_parse(const char *cmdstr)
char *cp;
char *cpp, quoted;
kdbtab_t *tp;
-   int i, escaped, ignore_errors = 0, check_grep = 0;
+   int escaped, ignore_errors = 0, check_grep = 0;
 
/*
 * First tokenize the command string.
@@ -1011,7 +1005,7 @@ int kdb_parse(const char *cmdstr)
++argv[0];
}
 
-   for_each_kdbcmd(tp, i) {
+   list_for_each_entry(tp, &kdb_cmds_head, list_node) {
if (tp->cmd_name) {
/*
 * If this command is allowed to be abbreviated,
@@ -1037,8 +1031,8 @@ int kdb_parse(const char *cmdstr)
 * few characters of this match any of the known commands.
 * e.g., md1c20 should match md.
 */
-   if (i == kdb_max_commands) {
-   for_each_kdbcmd(tp, i) {
+   if (list_entry_is_head(tp, &kdb_cmds_head, list_node)) {
+   list_for_each_entry(tp, &kdb_cmds_head, list_node) {
if (tp->cmd_name) {
if (strncmp(argv[0],
tp->cmd_name,
@@ -1049,7 +1043,7 @@ int kdb_parse(const char *cmdstr)
}
}
 
-   if (i < kdb_max_commands) {
+   if (!list_entry_is_head(tp, &kdb_cmds_head, list_node)) {
int result;
 
if (!kdb_check_flags(tp->cmd_flags, kdb_cmd_enabled, argc <= 1))
@@ -2428,12 +2422,11 @@ static int kdb_kgdb(int argc, const char **argv)
 static int kdb_help(int argc, const char **argv)
 {
kdbtab_t *kt;
-   int i;
 
kdb_printf("%-15.15s %-20.20s %s\n", "Command", "Usage", "Description");
kdb_printf("-"
   "-\n");
-   for_each_kdbcmd(kt, i) {
+   list_for_each_entry(kt, &kdb_cmds_head, list_node) {
char *space = "";
if (KDB_FLAG(CMD_INTERRUPT))
return 0;
@@ -2667,13 +2660,9 @@ int kdb_register_flags(char *cmd,
   short minlen,
   kdb_cmdflags_t flags)
 {
-   int i;
kdbtab_t *kp;
 
-   /*
-*  Brute force method to determine duplicates
-*/
-   for_each_kdbcmd(kp, i) {
+   list_for_each_entry(kp, &kdb_cmds_head, list_node) {
if (kp->cmd_name && (strcmp(kp->cmd_name, cmd) == 0)) {
kdb_printf("Duplicate kdb command registered: "
"%s, func %px help %s\n", cmd, func, help);
@@ -2681,35 +2670,10 @@ int kdb_register_flags(char *cmd,
}
}
 
-   /*
-* Insert command into first available location in table
-*/
-   for_each_kdbcmd(kp, i) {
-   if (kp->cmd_name == NULL)
-   break;
-   }
-
-   if (i >= kdb_max_commands) {
-   kdbtab_t *new = kmalloc_array(kdb_max_commands -
-   KDB_BASE_CMD_MAX +
-   kdb_command_extend,
- sizeof(*new),
- GFP_KDB);
-   if (!new) {
-   kdb_printf("Could not allocate new kdb_command "
-  "table\n");
-   return 1;
- 

Re: [PATCH v2] arm64: perf: Fix access percpu variables in preemptible context

2021-01-08 Thread Sumit Garg
On Mon, 21 Dec 2020 at 21:53, Lecopzer Chen  wrote:
>
> commit 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
> reinitilizes lockup detector after arm64 PMU is initialized and open
> a window for accessing smp_processor_id() in preemptible context.
> Since hardlockup_detector_perf_init() always called in init stage
> with a single cpu, but we initialize lockup detector after the init task
> is migratable.
>
> Fix this by utilizing lockup detector reconfiguration which calls
> softlockup_start_all() on each cpu and calls watatchdog_nmi_enable() later.
> Because softlockup_start_all() use IPI call function to make sure
> watatchdog_nmi_enable() will bind on each cpu and fix this issue.

IMO, this just creates an unnecessary dependency of the hardlockup detector
init on the softlockup detector (see the alternative definition of
lockup_detector_reconfigure()).

>
> BUG: using smp_processor_id() in preemptible [] code: swapper/0/1

How about just the below fix in order to make CONFIG_DEBUG_PREEMPT happy?

diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 247bf0b1582c..db06ee28f48e 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -165,7 +165,7 @@ static void watchdog_overflow_callback(struct
perf_event *event,

 static int hardlockup_detector_event_create(void)
 {
-   unsigned int cpu = smp_processor_id();
+   unsigned int cpu = raw_smp_processor_id();
struct perf_event_attr *wd_attr;
struct perf_event *evt;

-Sumit

> caller is debug_smp_processor_id+0x20/0x2c
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.10.0+ #276
> Hardware name: linux,dummy-virt (DT)
> Call trace:
>   dump_backtrace+0x0/0x3c0
>   show_stack+0x20/0x6c
>   dump_stack+0x2f0/0x42c
>   check_preemption_disabled+0x1cc/0x1dc
>   debug_smp_processor_id+0x20/0x2c
>   hardlockup_detector_event_create+0x34/0x18c
>   hardlockup_detector_perf_init+0x2c/0x134
>   watchdog_nmi_probe+0x18/0x24
>   lockup_detector_init+0x44/0xa8
>   armv8_pmu_driver_init+0x54/0x78
>   do_one_initcall+0x184/0x43c
>   kernel_init_freeable+0x368/0x380
>   kernel_init+0x1c/0x1cc
>   ret_from_fork+0x10/0x30
>
>
> Fixes: 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
> Signed-off-by: Lecopzer Chen 
> Reported-by: kernel test robot 
> Cc: Sumit Garg 
> ---
>
> Changelog v1 -> v2:
> * 
> https://lore.kernel.org/lkml/20201217130617.32202-1-lecopzer.c...@mediatek.com/
> * Move solution from kernel/watchdog_hld.c to arm64 perf_event
> * avoid preemptive kmalloc in preempt_disable().
>
>
>
>  arch/arm64/kernel/perf_event.c | 16 
>  1 file changed, 16 insertions(+)
>
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 38bb07eff872..c03e21210bbb 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -1345,4 +1345,20 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
>
> return (u64)max_cpu_freq * watchdog_thresh;
>  }
> +
> +/*
> + * hardlockup_detector_perf_init() always call in init stage with a single
> + * cpu. In arm64 case, we re-initialize lockup detector after pmu driver
> + * initialized. Lockup detector initial function use lots of percpu variables
> + * and this makes CONFIG_DEBUG_PREEMPT unhappy because we are now in
> + * preemptive context.
> + * Return 0 if the nmi is ready and register nmi hardlockup detector by
> + * lockup detector reconfiguration.
> + */
> +int __init watchdog_nmi_probe(void)
> +{
> +   if (arm_pmu_irq_is_nmi())
> +   return 0;
> +   return -ENODEV;
> +}
>  #endif
> --
> 2.25.1
>


Re: [PATCH v8 2/4] KEYS: trusted: Introduce TEE based Trusted Keys

2021-01-14 Thread Sumit Garg
On Thu, 14 Jan 2021 at 07:35, Jarkko Sakkinen  wrote:
>
> On Wed, Jan 13, 2021 at 04:47:00PM +0530, Sumit Garg wrote:
> > Hi Jarkko,
> >
> > On Mon, 11 Jan 2021 at 22:05, Jarkko Sakkinen  wrote:
> > >
> > > On Tue, Nov 03, 2020 at 09:31:44PM +0530, Sumit Garg wrote:
> > > > Add support for TEE based trusted keys where TEE provides the 
> > > > functionality
> > > > to seal and unseal trusted keys using hardware unique key.
> > > >
> > > > Refer to Documentation/tee.txt for detailed information about TEE.
> > > >
> > > > Signed-off-by: Sumit Garg 
> > >
> > > I haven't yet got QEMU environment working with aarch64, this produces
> > > just a blank screen:
> > >
> > > ./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu cortex-a53 -smp 1 
> > > -kernel output/images/Image -initrd output/images/rootfs.cpio -serial 
> > > stdio
> > >
> > > My BuildRoot fork for TPM and keyring testing is located over here:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/buildroot-tpmdd.git/
> > >
> > > The "ARM version" is at this point in aarch64 branch. Over time I will
> > > define tpmdd-x86_64 and tpmdd-aarch64 boards and everything will be then
> > > in the master branch.
> > >
> > > To create identical images you just need to
> > >
> > > $ make tpmdd_defconfig && make
> > >
> > > Can you check if you see anything obviously wrong? I'm eager to test this
> > > patch set, and in bigger picture I really need to have ready to run
> > > aarch64 environment available.
> >
> > I would rather suggest you to follow steps listed here [1] as to test
> > this feature on Qemu aarch64 we need to build firmwares such as TF-A,
> > OP-TEE, UEFI etc. which are all integrated into OP-TEE Qemu build
> > system [2]. And then it would be easier to migrate them to your
> > buildroot environment as well.
> >
> > [1] https://lists.trustedfirmware.org/pipermail/op-tee/2020-May/27.html
> > [2] 
> > https://optee.readthedocs.io/en/latest/building/devices/qemu.html#qemu-v8
> >
> > -Sumit
>
> Can you provide 'keyctl_change'? Otherwise, the steps are easy to follow.
>

$ cat keyctl_change
diff --git a/common.mk b/common.mk
index aeb7b41..663e528 100644
--- a/common.mk
+++ b/common.mk
@@ -229,6 +229,7 @@ BR2_PACKAGE_OPTEE_TEST_SDK ?= $(OPTEE_OS_TA_DEV_KIT_DIR)
 BR2_PACKAGE_OPTEE_TEST_SITE ?= $(OPTEE_TEST_PATH)
 BR2_PACKAGE_STRACE ?= y
 BR2_TARGET_GENERIC_GETTY_PORT ?= $(if
$(CFG_NW_CONSOLE_UART),ttyAMA$(CFG_NW_CONSOLE_UART),ttyAMA0)
+BR2_PACKAGE_KEYUTILS := y

 # All BR2_* variables from the makefile or the environment are appended to
 # ../out-br/extra.conf. All values are quoted "..." except y and n.
diff --git a/kconfigs/qemu.conf b/kconfigs/qemu.conf
index 368c18a..832ab74 100644
--- a/kconfigs/qemu.conf
+++ b/kconfigs/qemu.conf
@@ -20,3 +20,5 @@ CONFIG_9P_FS=y
 CONFIG_9P_FS_POSIX_ACL=y
 CONFIG_HW_RANDOM=y
 CONFIG_HW_RANDOM_VIRTIO=y
+CONFIG_TRUSTED_KEYS=y
+CONFIG_ENCRYPTED_KEYS=y

> After I've successfully tested 2/4, I'd suggest that you roll out one more
> version and CC the documentation patch to Elaine and Mini, and clearly
> remark in the commit message that TEE is a standard, with a link to the
> specification.
>

Sure, I will roll out the next version after your testing.

-Sumit

> /Jarkko


[PATCH v5] arm64: Enable perf events based hard lockup detector

2021-01-15 Thread Sumit Garg
With the recent feature added to enable perf events to use pseudo NMIs
as interrupts on platforms which support GICv3 or later, it is now
possible to enable the hard lockup detector (or NMI watchdog) on arm64
platforms. So enable the corresponding support.

One thing to note here is that normally the lockup detector is initialized
just after the early initcalls, but the PMU on arm64 comes up much later as
a device_initcall(). So we need to re-initialize lockup detection once the
PMU has been initialized.

Signed-off-by: Sumit Garg 
---

Changes in v5:
- Fix lockup_detector_init() invocation to be rather invoked from CPU
  binded context as it makes heavy use of per-cpu variables and shouldn't
  be invoked from preemptible context.

Changes in v4:
- Rebased to latest pmu v7 NMI patch-set [1] and in turn use "has_nmi"
  hook to know if PMU IRQ has been requested as an NMI.
- Add check for return value prior to initializing hard-lockup detector.

[1] https://lkml.org/lkml/2020/9/24/458

Changes in v3:
- Rebased to latest pmu NMI patch-set [1].
- Addressed misc. comments from Stephen.

[1] https://lkml.org/lkml/2020/8/19/671

Changes since RFC:
- Rebased on top of Alex's WIP-pmu-nmi branch.
- Add comment for safe max. CPU frequency.
- Misc. cleanup.

 arch/arm64/Kconfig |  2 ++
 arch/arm64/kernel/perf_event.c | 48 --
 drivers/perf/arm_pmu.c |  5 +
 include/linux/perf/arm_pmu.h   |  2 ++
 4 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b..05e1735 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -174,6 +174,8 @@ config ARM64
select HAVE_NMI
select HAVE_PATA_PLATFORM
select HAVE_PERF_EVENTS
+   select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI && HW_PERF_EVENTS
+   select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && 
HAVE_PERF_EVENTS_NMI
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3605f77a..bafb7c8 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -23,6 +23,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL0xC2
@@ -1246,12 +1248,30 @@ static struct platform_driver armv8_pmu_driver = {
.probe  = armv8_pmu_device_probe,
 };
 
+static int __init lockup_detector_init_fn(void *data)
+{
+   lockup_detector_init();
+   return 0;
+}
+
 static int __init armv8_pmu_driver_init(void)
 {
+   int ret;
+
if (acpi_disabled)
-   return platform_driver_register(&armv8_pmu_driver);
+   ret = platform_driver_register(&armv8_pmu_driver);
else
-   return arm_pmu_acpi_probe(armv8_pmuv3_init);
+   ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
+
+   /*
+* Try to re-initialize lockup detector after PMU init in
+* case PMU events are triggered via NMIs.
+*/
+   if (ret == 0 && arm_pmu_irq_is_nmi())
+   smp_call_on_cpu(raw_smp_processor_id(), lockup_detector_init_fn,
+   NULL, false);
+
+   return ret;
 }
 device_initcall(armv8_pmu_driver_init)
 
@@ -1309,3 +1329,27 @@ void arch_perf_update_userpage(struct perf_event *event,
userpg->cap_user_time_zero = 1;
userpg->cap_user_time_short = 1;
 }
+
+#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
+/*
+ * Safe maximum CPU frequency in case a particular platform doesn't implement
+ * cpufreq driver. Although, architecture doesn't put any restrictions on
+ * maximum frequency but 5 GHz seems to be safe maximum given the available
+ * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
+ * hand, we can't make it much higher as it would lead to a large hard-lockup
+ * detection timeout on parts which are running slower (eg. 1GHz on
+ * Developerbox) and doesn't possess a cpufreq driver.
+ */
+#define SAFE_MAX_CPU_FREQ  50UL // 5 GHz
+u64 hw_nmi_get_sample_period(int watchdog_thresh)
+{
+   unsigned int cpu = smp_processor_id();
+   unsigned long max_cpu_freq;
+
+   max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
+   if (!max_cpu_freq)
+   max_cpu_freq = SAFE_MAX_CPU_FREQ;
+
+   return (u64)max_cpu_freq * watchdog_thresh;
+}
+#endif
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index cb2f55f..794a37d 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -726,6 +726,11 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
return per_cpu(hw_events->irq, cpu);
 }
 
+bool arm_pmu_irq_is_nmi(void)
+{
+   return has_nmi;
+}
+
 /*
  * PMU hardware loses all context w

Re: [PATCH v1 3/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-03-24 Thread Sumit Garg
On Wed, 24 Mar 2021 at 14:56, Ahmad Fatoum  wrote:
>
> Hello Mimi,
>
> On 23.03.21 19:07, Mimi Zohar wrote:
> > On Tue, 2021-03-23 at 17:35 +0100, Ahmad Fatoum wrote:
> >> On 21.03.21 21:48, Horia Geantă wrote:
> >>> caam has random number generation capabilities, so it's worth using that
> >>> by implementing .get_random.
> >>
> >> If the CAAM HWRNG is already seeding the kernel RNG, why not use the 
> >> kernel's?
> >>
> >> Makes for less code duplication IMO.
> >
> > Using kernel RNG, in general, for trusted keys has been discussed
> > before.   Please refer to Dave Safford's detailed explanation for not
> > using it [1].
>
> The argument seems to boil down to:
>
>  - TPM RNG are known to be of good quality
>  - Trusted keys always used it so far
>
> Both are fine by me for TPMs, but the CAAM backend is new code and neither 
> point
> really applies.
>
> get_random_bytes_wait is already used for generating key material elsewhere.
> Why shouldn't new trusted key backends be able to do the same thing?
>

Please refer to the documented trusted keys behaviour here [1]. New
trusted key backends should align with this behaviour, and in your case
CAAM offers a HWRNG so we should rather be using that.

Also, do update the documentation corresponding to CAAM as a trusted keys backend.

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd.git/tree/Documentation/security/keys/trusted-encrypted.rst#n87

-Sumit

> Cheers,
> Ahmad
>
> >
> > thanks,
> >
> > Mimi
> >
> > [1]
> > https://lore.kernel.org/linux-integrity/bca04d5d9a3b764c9b7405bba4d4a3c035f2a...@alpmbapa12.e2k.ad.ge.com/
> >
> >
> >
>
> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH v1 3/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-03-24 Thread Sumit Garg
On Wed, 24 Mar 2021 at 19:37, Ahmad Fatoum  wrote:
>
> Hello Sumit,
>
> On 24.03.21 11:47, Sumit Garg wrote:
> > On Wed, 24 Mar 2021 at 14:56, Ahmad Fatoum  wrote:
> >>
> >> Hello Mimi,
> >>
> >> On 23.03.21 19:07, Mimi Zohar wrote:
> >>> On Tue, 2021-03-23 at 17:35 +0100, Ahmad Fatoum wrote:
> >>>> On 21.03.21 21:48, Horia Geantă wrote:
> >>>>> caam has random number generation capabilities, so it's worth using that
> >>>>> by implementing .get_random.
> >>>>
> >>>> If the CAAM HWRNG is already seeding the kernel RNG, why not use the 
> >>>> kernel's?
> >>>>
> >>>> Makes for less code duplication IMO.
> >>>
> >>> Using kernel RNG, in general, for trusted keys has been discussed
> >>> before.   Please refer to Dave Safford's detailed explanation for not
> >>> using it [1].
> >>
> >> The argument seems to boil down to:
> >>
> >>  - TPM RNG are known to be of good quality
> >>  - Trusted keys always used it so far
> >>
> >> Both are fine by me for TPMs, but the CAAM backend is new code and neither 
> >> point
> >> really applies.
> >>
> >> get_random_bytes_wait is already used for generating key material 
> >> elsewhere.
> >> Why shouldn't new trusted key backends be able to do the same thing?
> >>
> >
> > Please refer to documented trusted keys behaviour here [1]. New
> > trusted key backends should align to this behaviour and in your case
> > CAAM offers HWRNG so we should be better using that.
>
> Why is it better?
>
> Can you explain what benefit a CAAM user would have if the trusted key
> randomness comes directly out of the CAAM instead of indirectly from
> the kernel entropy pool that is seeded by it?

IMO, in the case of trusted keys the user's trust comes from the trusted
keys backend, which is CAAM here. If a user doesn't trust CAAM to act as a
reliable RNG source, then CAAM shouldn't be used as a trust source in the
first place.

And I think building a user's trust in the kernel RNG implementation, with
its multiple entropy contributions, is pretty difficult compared with
trusting the CAAM HWRNG implementation alone.

-Sumit

>
> > Also, do update documentation corresponding to CAAM as a trusted keys 
> > backend.
>
> Yes. The documentation should be updated for CAAM and it should describe
> how the key material is derived. Will do so for v2.
>
> Cheers,
> Ahmad
>
> >
> > [1] 
> > https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd.git/tree/Documentation/security/keys/trusted-encrypted.rst#n87
> >
> > -Sumit
> >
> >> Cheers,
> >> Ahmad
> >>
> >>>
> >>> thanks,
> >>>
> >>> Mimi
> >>>
> >>> [1]
> >>> https://lore.kernel.org/linux-integrity/bca04d5d9a3b764c9b7405bba4d4a3c035f2a...@alpmbapa12.e2k.ad.ge.com/
> >>>
> >>>
> >>>
> >>
> >> --
> >> Pengutronix e.K.   | |
> >> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> >> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> >> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
> >
>
> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH 1/1] tee: optee: do not check memref size on return from Secure World

2021-03-25 Thread Sumit Garg
On Mon, 22 Mar 2021 at 16:11, Jerome Forissier via OP-TEE
 wrote:
>
> When Secure World returns, it may have changed the size attribute of the
> memory references passed as [in/out] parameters. The GlobalPlatform TEE
> Internal Core API specification does not restrict the values that this
> size can take. In particular, Secure World may increase the value to be
> larger than the size of the input buffer to indicate that it needs more.
>
> Therefore, the size check in optee_from_msg_param() is incorrect and
> needs to be removed. This fixes a number of failed test cases in the
> GlobalPlatform TEE Initial Configuration Test Suite v2_0_0_0-2017_06_09
> when OP-TEE is compiled without dynamic shared memory support
> (CFG_CORE_DYN_SHM=n).
>
> Suggested-by: Jens Wiklander 
> Signed-off-by: Jerome Forissier 
> ---
>  drivers/tee/optee/core.c | 10 --
>  1 file changed, 10 deletions(-)
>

Looks good to me.

Reviewed-by: Sumit Garg 

-Sumit

> diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
> index 319a1e701163..ddb8f9ecf307 100644
> --- a/drivers/tee/optee/core.c
> +++ b/drivers/tee/optee/core.c
> @@ -79,16 +79,6 @@ int optee_from_msg_param(struct tee_param *params, size_t 
> num_params,
> return rc;
> p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
> p->u.memref.shm = shm;
> -
> -   /* Check that the memref is covered by the shm object 
> */
> -   if (p->u.memref.size) {
> -   size_t o = p->u.memref.shm_offs +
> -  p->u.memref.size - 1;
> -
> -   rc = tee_shm_get_pa(shm, o, NULL);
> -   if (rc)
> -   return rc;
> -   }
> break;
> case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
> case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
> --
> 2.25.1
>


Re: [PATCH] kdb: Refactor kdb_defcmd implementation

2021-03-22 Thread Sumit Garg
On Fri, 19 Mar 2021 at 22:47, Daniel Thompson
 wrote:
>
> On Tue, Mar 09, 2021 at 05:47:47PM +0530, Sumit Garg wrote:
> > Switch to use kdbtab_t instead of separate struct defcmd_set since
> > now we have kdb_register_table() to register pre-allocated kdb commands.
>
> This needs rewriting. I've been struggling for some time to figure out
> what it actually means and how it relates to the patch. I'm
> starting to conclude that this might not be my fault!
>

Okay.

>
> > Also, switch to use a linked list for sub-commands instead of dynamic
> > array which makes traversing the sub-commands list simpler.
>
> We can't call these things sub-commands! These days a sub-command
> implies something like `git subcommand` and kdb doesn't have anything
> like that.
>

To me, defcmd_set implied that we are defining a kdb command which
runs a list of other kdb commands, which I termed sub-commands
here. But yes, I agree with you that these don't resemble `git
subcommand`.

>
> > +struct kdb_subcmd {
> > + char*scmd_name; /* Sub-command name */
> > + struct  list_head list_node;/* Sub-command node */
> > +};
> > +
> >  /* The KDB shell command table */
> >  typedef struct _kdbtab {
> >   char*cmd_name;  /* Command name */
> > @@ -175,6 +181,7 @@ typedef struct _kdbtab {
> >   kdb_cmdflags_t cmd_flags;   /* Command behaviour flags */
> >   struct list_head list_node; /* Command list */
> >   boolis_dynamic; /* Command table allocation type */
> > + struct list_head kdb_scmds_head; /* Sub-commands list */
> >  } kdbtab_t;
>
> Perhaps this should be more like:
>
> struct defcmd_set {
> kdbtab_t cmd;
> struct list_head commands;
>
> };
>
> This still gets registered using kdb_register_table() but it keeps the
> macro code all in one place:
>
> kdb_register_table(&macro->cmd, 1);
>
> I think that is what I *meant* to suggest ;-) . It also avoids having to
> talk about sub-commands!

Okay, I will use this struct instead.

> BTW I'm open to giving defcmd_set a better name
> (kdb_macro?)
>

kdb_macro sounds more appropriate.

> but I don't see why we want to give all commands a macro
> list.

I am not sure if I follow you here but I think it's better to
distinguish between a normal kdb command and a kdb command which is a
super-set (or macro) representing a list of other kdb commands.
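
For reference, a rough sketch of the direction being discussed (names are
tentative, not taken from a final patch): the macro embeds a regular command
entry plus its own list of command lines, so ordinary kdbtab_t entries stay
untouched:

struct kdb_macro_statement {
        char *statement;                /* one kdb command line to run */
        struct list_head list_node;     /* node in the macro's list */
};

struct kdb_macro {
        kdbtab_t cmd;                   /* registered via kdb_register_table(&m->cmd, 1) */
        struct list_head statements;    /* list of struct kdb_macro_statement */
};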

-Sumit

>
> Daniel.


Re: [PATCH v2] kdb: Get rid of custom debug heap allocator

2021-03-22 Thread Sumit Garg
On Fri, 19 Mar 2021 at 23:05, Daniel Thompson
 wrote:
>
> On Mon, Mar 01, 2021 at 11:33:00AM +0530, Sumit Garg wrote:
> > On Fri, 26 Feb 2021 at 23:07, Daniel Thompson
> >  wrote:
> > >
> > > On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> > > > On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> > > >  wrote:
> > > > >
> > > > > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > > > > Currently the only user for debug heap is kdbnearsym() which can be
> > > > > > modified to rather ask the caller to supply a buffer for symbol 
> > > > > > name.
> > > > > > So do that and modify kdbnearsym() callers to pass a symbol name 
> > > > > > buffer
> > > > > > allocated statically and hence remove custom debug heap allocator.
> > > > >
> > > > > Why make the callers do this?
> > > > >
> > > > > The LRU buffers were managed inside kdbnearsym() why does switching to
> > > > > an approach with a single buffer require us to push that buffer out to
> > > > > the callers?
> > > > >
> > > >
> > > > Earlier the LRU buffers managed namebuf uniqueness per caller (upto
> > > > 100 callers)
> > >
> > > The uniqueness is per symbol, not per caller.
> > >
> >
> > Agree.
> >
> > > > but if we switch to single entry in kdbnearsym() then all
> > > > callers need to share common buffer which will lead to incorrect
> > > > results from following simple sequence:
> > > >
> > > > kdbnearsym(word, &symtab1);
> > > > kdbnearsym(word, &symtab2);
> > > > kdb_symbol_print(word, &symtab1, 0);
> > > > kdb_symbol_print(word, &symtab2, 0);
> > > >
> > > > But if we change to a unique static namebuf per caller then the
> > > > following sequence will work:
> > > >
> > > > kdbnearsym(word, &symtab1, namebuf1);
> > > > kdbnearsym(word, &symtab2, namebuf2);
> > > > kdb_symbol_print(word, &symtab1, 0);
> > > > kdb_symbol_print(word, &symtab2, 0);
> > >
> > > This is true but do any of the callers of kdbnearsym ever do this?
> >
> > No, but any of prospective callers may need this.
> >
> > > The
> > > main reaason that heap stuck out as redundant was that I've only ever
> > > seen the output of kdbnearsym() consumed almost immediately by a print.
> > >
> >
> > Yeah but I think the alternative proposed in this patch isn't as
> > burdensome as the heap and tries to somewhat match existing
> > functionality.
> >
> > > I wrote an early version of a patch like this that just shrunk the LRU
> > > cache down to 2 and avoided any heap usage... but I threw it away
> > > when I realized we never carry cached values outside the function
> > > that obtained them.
> > >
> >
> > Okay, so if you still think that having a single static buffer inside
> > kdbnearsym() is an appropriate approach for time being then I will
> > switch to use that instead.
>
> Sorry to drop this thread for so long.
>
> On reflection I still have a few concerns about the current code.
> To be clear this is not really about wasting 128 bytes of RAM (your
> patch saves 256K after all).
>
> It's more that the current static buffers "look weird". They are static
> so any competent OS programmer reads them and thinks "but what about
> concurrency/reentrancy?". With the static buffers scattered through the
> code they don't have a single place to find the answer.
>
> I originally proposed handling this by the static buffer horror in
> kdbnearsym() and describing how it all works in the header comment!
> As much as anything this was to centralize the commentary in the
> contract for calling kdbnearsym(). Hence nobody should write the
> theoretic bug you describe because they read the contract!
>
> You are welcome to counter propose but you must ensure that there are
> equivalent comments so our "competent OS programmer" from the paragraph
> above can figure out how the static buffer works without having to run
> `git blame` and dig out the patch history.
>

Okay, I understand your point here. Let me go ahead with a single
static buffer in kdbnearsym() with a proper header comment.
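
To make the agreed contract concrete, a small illustrative sequence (variable
names are arbitrary): with one shared static namebuf inside kdbnearsym(), a
caller must consume the returned name before the next lookup reuses the
buffer:

        kdbnearsym(addr1, &symtab);
        kdb_symbol_print(addr1, &symtab, 0);    /* use the name now... */
        kdbnearsym(addr2, &symtab);             /* ...this lookup overwrites namebuf */
        kdb_symbol_print(addr2, &symtab, 0);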

-Sumit

>
> Daniel.
>
>
>
> >
> > -Sumit
> >
> > >
> > &

Re: [PATCH v3] kdb: Refactor env variables get/set code

2021-03-22 Thread Sumit Garg
Hi Daniel,

On Mon, 8 Feb 2021 at 13:32, Sumit Garg  wrote:
>
> Add two new kdb environment access methods as kdb_setenv() and
> kdb_printenv() in order to abstract out environment access code
> from kdb command functions.
>
> Also, replace (char *)0 with NULL as an initializer for environment
> variables array.
>
> Signed-off-by: Sumit Garg 
> Reviewed-by: Douglas Anderson 
> ---
>
> Changes in v3:
> - Remove redundant '\0' char assignment.
> - Pick up Doug's review tag.
>
> Changes in v2:
> - Get rid of code motion to separate kdb_env.c file.
> - Replace (char *)0 with NULL.
> - Use kernel-doc style function comments.
> - s/kdb_prienv/kdb_printenv/
>
>  kernel/debug/kdb/kdb_main.c | 164 
> 
>  1 file changed, 91 insertions(+), 73 deletions(-)
>

Do you have any further comments on this? If not, can you pick this up as well?

-Sumit

> diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> index 588062a..69b8f55 100644
> --- a/kernel/debug/kdb/kdb_main.c
> +++ b/kernel/debug/kdb/kdb_main.c
> @@ -142,40 +142,40 @@ static const int __nkdb_err = ARRAY_SIZE(kdbmsgs);
>
>  static char *__env[] = {
>  #if defined(CONFIG_SMP)
> - "PROMPT=[%d]kdb> ",
> +   "PROMPT=[%d]kdb> ",
>  #else
> - "PROMPT=kdb> ",
> +   "PROMPT=kdb> ",
>  #endif
> - "MOREPROMPT=more> ",
> - "RADIX=16",
> - "MDCOUNT=8",  /* lines of md output */
> - KDB_PLATFORM_ENV,
> - "DTABCOUNT=30",
> - "NOSECT=1",
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> - (char *)0,
> +   "MOREPROMPT=more> ",
> +   "RADIX=16",
> +   "MDCOUNT=8",/* lines of md output */
> +   KDB_PLATFORM_ENV,
> +   "DTABCOUNT=30",
> +   "NOSECT=1",
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
> +   NULL,
>  };
>
>  static const int __nenv = ARRAY_SIZE(__env);
> @@ -318,6 +318,63 @@ int kdbgetintenv(const char *match, int *value)
>  }
>
>  /*
> + * kdb_setenv() - Alter an existing environment variable or create a new one.
> + * @var: Name of the variable
> + * @val: Value of the variable
> + *
> + * Return: Zero on success, a kdb diagnostic on failure.
> + */
> +static int kdb_setenv(const char *var, const char *val)
> +{
> +   int i;
> +   char *ep;
> +   size_t varlen, vallen;
> +
> +   varlen = strlen(var);
> +   vallen = strlen(val);
> +   ep = kdballocenv(varlen + vallen + 2);
> +   if (ep == (char *)0)
> +   return KDB_ENVBUFFULL;
> +
> +   sprintf(ep, "%s=%s", var, val);
> +
> +   for (i = 0; i < __nenv; i++) {
> +   if (__env[i]
> +&& ((strncmp(__env[i], var, varlen) == 0)
> +  && ((__env[i][varlen] == '\0')
> +   || (__env[i][varlen] == '=')))) {
> +   __env[i] = ep;
> +   return 0;
> +   }
> +   }
> +
> +   /*
> +* Wasn't existing variable.  Fit into slot.
> +*/
> +   for (i = 0; i < __nenv-1; i++) {
> +   if (__env[i] == (char *)0) {
> +   __env[i] = ep;
> +   return 0;
> +   }
> +   }
> +
> +   return KDB_ENVFULL;
> +}
> +
> +/*
> + * kdb_printenv() - Display the current environment variables.
> + */
> +static void kdb_printenv(void)
> +{
> +   int i;
> +
> +   for (i = 0; i < __nenv; i++) {
> +   if (__env[i])
> +   kdb_printf("%s\n", __env[i]);
> +   }
> +}
> +
> +/*
>   * kdbgetularg - This function will convert a numeric string into an
>   * unsigned long value.
>   * Parameters:
> @@ -374,10 

[PATCH v2] kdb: Get rid of custom debug heap allocator

2021-03-22 Thread Sumit Garg
Currently the only user of the debug heap is kdbnearsym(), which can be
modified to instead use a statically allocated buffer for the symbol name,
as per its current usage. So do that and hence remove the custom debug heap
allocator.

Note that this change puts a restriction on kdbnearsym() callers to use the
shared namebuf carefully: a caller should consume the returned symbol
immediately, before making another call to fetch a different symbol.

This change has been tested using kgdbtest on arm64 which doesn't show
any regressions.

Suggested-by: Daniel Thompson 
Signed-off-by: Sumit Garg 
---

Changes in v2:
- Use single static buffer for symbol name in kdbnearsym() instead of
  per caller buffers allocated on stack.

 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_private.h  |   5 -
 kernel/debug/kdb/kdb_support.c  | 318 ++--
 3 files changed, 15 insertions(+), 309 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 */
kdb_common_deinit_state();
KDB_STATE_CLEAR(PAGER);
-   kdbnearsym_cleanup();
if (error == KDB_CMD_KGDB) {
if (KDB_STATE(DOING_KGDB))
KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..ec91d7e02334 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -109,7 +109,6 @@ extern int kdbgetaddrarg(int, const char **, int*, unsigned 
long *,
 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
 extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *, unsigned 
int);
 
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..e131d74abb8d 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,35 +57,26 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];  /* arbitrary size */
-
 /*
- * kdbnearsym -Return the name of the symbol with the nearest address
- * less than 'addr'.
+ * kdbnearsym() - Return the name of the symbol with the nearest address
+ *less than @addr.
+ * @addr: Address to check for near symbol
+ * @symtab: Structure to receive results
  *
- * Parameters:
- * addrAddress to check for symbol near
- * symtab  Structure to receive results
- * Returns:
- * 0   No sections contain this address, symtab zero filled
- * 1   Address mapped to module/symbol/section, data in symtab
- * Remarks:
- * 2.6 kallsyms has a "feature" where it unpacks the name into a
- * string.  If that string is reused before the caller expects it
- * then the caller sees its string change without warning.  To
- * avoid cluttering up the main kdb code with lots of kdb_strdup,
- * tests and kfree calls, kdbnearsym maintains an LRU list of the
- * last few unique strings.  The list is sized large enough to
- * hold active strings, no kdb caller of kdbnearsym makes more
- * than ~20 later calls before using a saved value.
+ * Note here that only a single statically allocated namebuf is used for every
+ * symbol, so the caller should consume it immediately, before another call
+ * fetches a different symbol.
+ *
+ * Return:
+ * * 0 - No sections contain this address, symtab zero filled
+ * * 1 - Address mapped to module/symbol/section, data in symtab
  */
 int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 {
int ret = 0;
unsigned long symbolsize = 0;
unsigned long offset = 0;
-#define knt1_size 128  /* must be >= kallsyms table size */
-   char *knt1 = NULL;
+   static char namebuf[KSYM_NAME_LEN];
 
if (KDB_DEBUG(AR))
kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, 
symtab);
@@ -93,14 +84,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 
if (addr < 4096)
goto out;
-   knt1 = debug_kmalloc(knt1_size, GFP_ATOMIC);
-   if (!knt1) {
-   kdb_printf("kdbnearsym: addr=0x%lx cannot kmalloc knt1\n",
-  addr);
- 

Re: [PATCH v1 0/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-03-23 Thread Sumit Garg
On Tue, 23 Mar 2021 at 22:04, Ahmad Fatoum  wrote:
>
> Hello Horia,
>
> On 21.03.21 21:01, Horia Geantă wrote:
> > On 3/16/2021 7:02 PM, Ahmad Fatoum wrote:
> >> This patch series builds on top of Sumit's rework to have the CAAM as yet 
> >> another
> >> trusted key backend.
> >>
> > Shouldn't the description under TRUSTED_KEYS (in security/keys/Kconfig)
> > be updated to reflect the availability of multiple backends?
>
> This is indeed no longer correct. It also depends on TCG_TPM, which AFAIU
> is not really needed for the new TEE backend.
>
> @Sumit, can you confirm?
>

Yes, that's correct. Let me share a separate patch to fix that.

-Sumit

> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH v9 0/4] Introduce TEE based Trusted Keys support

2021-03-04 Thread Sumit Garg
Hi Jarkko,

On Mon, 1 Mar 2021 at 18:41, Sumit Garg  wrote:
>
> Add support for TEE based trusted keys where TEE provides the functionality
> to seal and unseal trusted keys using hardware unique key. Also, this is
> an alternative in case platform doesn't possess a TPM device.
>
> This patch-set has been tested with OP-TEE based early TA which is already
> merged in upstream [1].
>
> [1] 
> https://github.com/OP-TEE/optee_os/commit/f86ab8e7e0de869dfa25ca05a37ee070d7e5b86b
>
> Changes in v9:
> 1. Rebased to latest tpmdd/master.
> 2. Defined pr_fmt() and removed redundant tags.
> 3. Patch #2: incorporated misc. comments.
> 4. Patch #3: incorporated doc changes from Elaine and misc. comments
>from Randy.
> 5. Patch #4: reverted to separate maintainer entry as per request from
>Jarkko.
> 6. Added Jarkko's Tested-by: tag on patch #2.

It looks like we don't have any further comments on this patch-set. So
would you be able to pick up this patch-set?

-Sumit

>
> Changes in v8:
> 1. Added static calls support instead of indirect calls.
> 2. Documented trusted keys source module parameter.
> 3. Refined patch #1 commit message discription.
> 4. Addressed misc. comments on patch #2.
> 5. Added myself as Trusted Keys co-maintainer instead.
> 6. Rebased to latest tpmdd master.
>
> Changes in v7:
> 1. Added a trusted.source module parameter in order to enforce user's
>choice in case a particular platform posses both TPM and TEE.
> 2. Refine commit description for patch #1.
>
> Changes in v6:
> 1. Revert back to dynamic detection of trust source.
> 2. Drop author mention from trusted_core.c and trusted_tpm1.c files.
> 3. Rebased to latest tpmdd/master.
>
> Changes in v5:
> 1. Drop dynamic detection of trust source and use compile time flags
>instead.
> 2. Rename trusted_common.c -> trusted_core.c.
> 3. Rename callback: cleanup() -> exit().
> 4. Drop "tk" acronym.
> 5. Other misc. comments.
> 6. Added review tags for patch #3 and #4.
>
> Changes in v4:
> 1. Pushed independent TEE features separately:
>   - Part of recent TEE PR: https://lkml.org/lkml/2020/5/4/1062
> 2. Updated trusted-encrypted doc with TEE as a new trust source.
> 3. Rebased onto latest tpmdd/master.
>
> Changes in v3:
> 1. Update patch #2 to support registration of multiple kernel pages.
> 2. Incoporate dependency patch #4 in this patch-set:
>https://patchwork.kernel.org/patch/11091435/
>
> Changes in v2:
> 1. Add reviewed-by tags for patch #1 and #2.
> 2. Incorporate comments from Jens for patch #3.
> 3. Switch to use generic trusted keys framework.
>
> Sumit Garg (4):
>   KEYS: trusted: Add generic trusted keys framework
>   KEYS: trusted: Introduce TEE based Trusted Keys
>   doc: trusted-encrypted: updates with TEE as a new trust source
>   MAINTAINERS: Add entry for TEE based Trusted Keys
>
>  .../admin-guide/kernel-parameters.txt |  12 +
>  .../security/keys/trusted-encrypted.rst   | 171 ++--
>  MAINTAINERS   |   8 +
>  include/keys/trusted-type.h   |  53 +++
>  include/keys/trusted_tee.h|  16 +
>  include/keys/trusted_tpm.h|  29 +-
>  security/keys/trusted-keys/Makefile   |   2 +
>  security/keys/trusted-keys/trusted_core.c | 358 +
>  security/keys/trusted-keys/trusted_tee.c  | 317 +++
>  security/keys/trusted-keys/trusted_tpm1.c | 366 --
>  10 files changed, 981 insertions(+), 351 deletions(-)
>  create mode 100644 include/keys/trusted_tee.h
>  create mode 100644 security/keys/trusted-keys/trusted_core.c
>  create mode 100644 security/keys/trusted-keys/trusted_tee.c
>
> --
> 2.25.1
>


Re: [PATCH v5] kdb: Simplify kdb commands registration

2021-03-04 Thread Sumit Garg
Hi Doug,

On Tue, 2 Mar 2021 at 00:10, Doug Anderson  wrote:
>
> Hi,
>
> On Tue, Feb 23, 2021 at 11:08 PM Sumit Garg  wrote:
> >
> > Simplify kdb commands registration via using linked list instead of
> > static array for commands storage.
> >
> > Signed-off-by: Sumit Garg 
> > ---
> >
> > Changes in v5:
> > - Introduce new method: kdb_register_table() to register static kdb
> >   main and breakpoint command tables instead of using statically
> >   allocated commands.
> >
> > Changes in v4:
> > - Fix kdb commands memory allocation issue prior to slab being available
> >   with an array of statically allocated commands. Now it works fine with
> >   kgdbwait.
> > - Fix a misc checkpatch warning.
> > - I have dropped Doug's review tag as I think this version includes a
> >   major fix that should be reviewed again.
> >
> > Changes in v3:
> > - Remove redundant "if" check.
> > - Pick up review tag from Doug.
> >
> > Changes in v2:
> > - Remove redundant NULL check for "cmd_name".
> > - Incorporate misc. comment.
> >
> >  kernel/debug/kdb/kdb_bp.c  |  81 --
> >  kernel/debug/kdb/kdb_main.c| 472 -
> >  kernel/debug/kdb/kdb_private.h |   3 +
> >  3 files changed, 343 insertions(+), 213 deletions(-)
>
> This looks good to me, thanks!
>
> Random notes:
>
> * We no longer check for "duplicate" commands for any of these
> statically allocated ones, but I guess that's fine.

Yeah, I think that check is redundant for static ones.

>
> * Presumably nothing outside of kdb/kgdb itself needs the ability to
> allocate commands statically.  The only user I see now is ftrace and
> it looks like it runs late enough that it should be fine.

Agree.

>
> Reviewed-by: Douglas Anderson 
>

Thanks,
-Sumit

>
> -Doug


Re: [PATCH v5] arm64: Enable perf events based hard lockup detector

2021-03-30 Thread Sumit Garg
On Tue, 30 Mar 2021 at 14:07, Lecopzer Chen  wrote:
>
> > > Hi Will, Mark,
> > >
> > > On Fri, 15 Jan 2021 at 17:32, Sumit Garg  wrote:
> > > >
> > > > With the recent feature added to enable perf events to use pseudo NMIs
> > > > as interrupts on platforms which support GICv3 or later, it's now
> > > > possible to enable the hard lockup detector (or NMI watchdog) on arm64
> > > > platforms. So enable corresponding support.
> > > >
> > > > One thing to note here is that normally lockup detector is initialized
> > > > just after the early initcalls but PMU on arm64 comes up much later as
> > > > device_initcall(). So we need to re-initialize lockup detection once
> > > > PMU has been initialized.
> > > >
> > > > Signed-off-by: Sumit Garg 
> > > > ---
> > > >
> > > > Changes in v5:
> > > > - Fix lockup_detector_init() invocation to be rather invoked from CPU
> > > >   bound context as it makes heavy use of per-cpu variables and 
> > > > shouldn't
> > > >   be invoked from preemptible context.
> > > >
> > >
> > > Do you have any further comments on this?
> > >
> > > Lecopzer,
> > >
> > > Does this feature work fine for you now?
> >
> > This really fixes the warning. I have real hardware for testing this now.

Thanks for the testing. I'll take that as an implicit Tested-by.

> > but do we need to call lockup_detector_init() for each cpu?
> >
> > In init/main.c, it's only called by cpu 0 for once.
>
> Oh sorry, I just misread the code, please ignore previous mail.
>

No worries.

-Sumit

>
> BRs,
> Lecopzer


Re: [PATCH v1 3/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-03-30 Thread Sumit Garg
On Mon, 29 Mar 2021 at 01:07, Jarkko Sakkinen  wrote:
>
> On Sat, Mar 27, 2021 at 01:41:24PM +0100, David Gstir wrote:
> > Hi!
> >
> > > On 25.03.2021, at 06:26, Sumit Garg  wrote:
> > >
> > > On Wed, 24 Mar 2021 at 19:37, Ahmad Fatoum  
> > > wrote:
> > >>
> > >> Hello Sumit,
> > >>
> > >> On 24.03.21 11:47, Sumit Garg wrote:
> > >>> On Wed, 24 Mar 2021 at 14:56, Ahmad Fatoum  
> > >>> wrote:
> > >>>>
> > >>>> Hello Mimi,
> > >>>>
> > >>>> On 23.03.21 19:07, Mimi Zohar wrote:
> > >>>>> On Tue, 2021-03-23 at 17:35 +0100, Ahmad Fatoum wrote:
> > >>>>>> On 21.03.21 21:48, Horia Geantă wrote:
> > >>>>>>> caam has random number generation capabilities, so it's worth using 
> > >>>>>>> that
> > >>>>>>> by implementing .get_random.
> > >>>>>>
> > >>>>>> If the CAAM HWRNG is already seeding the kernel RNG, why not use the 
> > >>>>>> kernel's?
> > >>>>>>
> > >>>>>> Makes for less code duplication IMO.
> > >>>>>
> > >>>>> Using kernel RNG, in general, for trusted keys has been discussed
> > >>>>> before.   Please refer to Dave Safford's detailed explanation for not
> > >>>>> using it [1].
> > >>>>
> > >>>> The argument seems to boil down to:
> > >>>>
> > >>>> - TPM RNG are known to be of good quality
> > >>>> - Trusted keys always used it so far
> > >>>>
> > >>>> Both are fine by me for TPMs, but the CAAM backend is new code and 
> > >>>> neither point
> > >>>> really applies.
> > >>>>
> > >>>> get_random_bytes_wait is already used for generating key material 
> > >>>> elsewhere.
> > >>>> Why shouldn't new trusted key backends be able to do the same thing?
> > >>>>
> > >>>
> > >>> Please refer to documented trusted keys behaviour here [1]. New
> > >>> trusted key backends should align to this behaviour and in your case
> > >>> CAAM offers HWRNG so we should be better using that.
> > >>
> > >> Why is it better?
> > >>
> > >> Can you explain what benefit a CAAM user would have if the trusted key
> > >> randomness comes directly out of the CAAM instead of indirectly from
> > >> the kernel entropy pool that is seeded by it?
> > >
> > > IMO, user trust in case of trusted keys comes from trusted keys
> > > backend which is CAAM here. If a user doesn't trust that CAAM would
> > > act as a reliable source for RNG then CAAM shouldn't be used as a
> > > trust source in the first place.
> > >
> > > And I think building user's trust for kernel RNG implementation with
> > > multiple entropy contributions is pretty difficult when compared with
> > > CAAM HWRNG implementation.
> >
> > Generally speaking, I'd say trusting the CAAM RNG and trusting in its
> > other features are two separate things. However, reading through the CAAM
> > key blob spec I’ve got here, CAAM key blob keys (the keys that secure a 
> > blob’s
> > content) are generated using its internal RNG. So I'd say if the CAAM RNG
> > is insecure, so are the generated key blobs. Maybe somebody with more insight
> > into the CAAM internals can verify that, but I don’t see any point in using
> > the kernel’s RNG as long as we let CAAM generate the key blob keys for us.
>
> Here's my long'ish analysis. Please read it to the end if at all
> possible, and apologies, I usually try to keep my comms short, but
> this requires some more meat than usual.
>
> The Bad News
> 
>
> Now that we add multiple hardware trust sources for trusted keys, will
> there ever be a scenario where a trusted key is originally sealed with a
> backing hardware A, unsealed, and resealed with hardware B?
>
> The hardware and vendor neutral way to generate the key material would be
> unconditionally always just the kernel RNG.
>
> CAAM is actually worse than TCG because it's not even a standards body, if
> I got it right. Not a lot but at least a tiny fraction.
>
> This brings an open item in TEE patches: trusted_tee_get_rando

Re: [PATCH v5] arm64: Enable perf events based hard lockup detector

2021-04-12 Thread Sumit Garg
Hi Will,

On Tue, 30 Mar 2021 at 18:00, Sumit Garg  wrote:
>
> On Tue, 30 Mar 2021 at 14:07, Lecopzer Chen  
> wrote:
> >
> > > > Hi Will, Mark,
> > > >
> > > > On Fri, 15 Jan 2021 at 17:32, Sumit Garg  wrote:
> > > > >
> > > > > With the recent feature added to enable perf events to use pseudo NMIs
> > > > > as interrupts on platforms which support GICv3 or later, its now been
> > > > > possible to enable hard lockup detector (or NMI watchdog) on arm64
> > > > > platforms. So enable corresponding support.
> > > > >
> > > > > One thing to note here is that normally lockup detector is initialized
> > > > > just after the early initcalls but PMU on arm64 comes up much later as
> > > > > device_initcall(). So we need to re-initialize lockup detection once
> > > > > PMU has been initialized.
> > > > >
> > > > > Signed-off-by: Sumit Garg 
> > > > > ---
> > > > >
> > > > > Changes in v5:
> > > > > - Fix lockup_detector_init() invocation to be rather invoked from CPU
> > > > >   binded context as it makes heavy use of per-cpu variables and 
> > > > > shouldn't
> > > > >   be invoked from preemptible context.
> > > > >
> > > >
> > > > Do you have any further comments on this?
> > > >

Since there aren't any further comments, can you re-pick this feature for 5.13?

-Sumit

> > > > Lecopzer,
> > > >
> > > > Does this feature work fine for you now?
> > >
> > > This really fixes the warning, I have a real hardware for testing this 
> > > now.
>
> Thanks for the testing. I assume it as an implicit Tested-by.
>
> > > but do we need to call lockup_detector_init() for each cpu?
> > >
> > > In init/main.c, it's only called by cpu 0 for once.
> >
> > Oh sorry, I just misread the code, please ignore previous mail.
> >
>
> No worries.
>
> -Sumit
>
> >
> > BRs,
> > Lecopzer


Re: [PATCH][next] KEYS: trusted: Fix missing null return from kzalloc call

2021-04-12 Thread Sumit Garg
On Mon, 12 Apr 2021 at 21:31, Colin King  wrote:
>
> From: Colin Ian King 
>
> The kzalloc call can return null with the GFP_KERNEL flag so
> add a null check and exit via a new error exit label. Use the
> same exit error label for another error path too.
>
> Addresses-Coverity: ("Dereference null return value")
> Fixes: 830027e2cb55 ("KEYS: trusted: Add generic trusted keys framework")
> Signed-off-by: Colin Ian King 
> ---
>  security/keys/trusted-keys/trusted_core.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
>

Ah, it's my bad. Thanks for fixing this issue.

Reviewed-by: Sumit Garg 

-Sumit

> diff --git a/security/keys/trusted-keys/trusted_core.c 
> b/security/keys/trusted-keys/trusted_core.c
> index ec3a066a4b42..90774793f0b1 100644
> --- a/security/keys/trusted-keys/trusted_core.c
> +++ b/security/keys/trusted-keys/trusted_core.c
> @@ -116,11 +116,13 @@ static struct trusted_key_payload 
> *trusted_payload_alloc(struct key *key)
>
> ret = key_payload_reserve(key, sizeof(*p));
> if (ret < 0)
> -   return p;
> +   goto err;
> p = kzalloc(sizeof(*p), GFP_KERNEL);
> +   if (!p)
> +   goto err;
>
> p->migratable = migratable;
> -
> +err:
> return p;
>  }
>
> --
> 2.30.2
>


Re: [PATCH][next] KEYS: trusted: Fix missing null return from kzalloc call

2021-04-12 Thread Sumit Garg
On Mon, 12 Apr 2021 at 22:34, Colin Ian King  wrote:
>
> On 12/04/2021 17:48, James Bottomley wrote:
> > On Mon, 2021-04-12 at 17:01 +0100, Colin King wrote:
> >> From: Colin Ian King 
> >>
> >> The kzalloc call can return null with the GFP_KERNEL flag so
> >> add a null check and exit via a new error exit label. Use the
> >> same exit error label for another error path too.
> >>
> >> Addresses-Coverity: ("Dereference null return value")
> >> Fixes: 830027e2cb55 ("KEYS: trusted: Add generic trusted keys
> >> framework")
> >> Signed-off-by: Colin Ian King 
> >> ---
> >>  security/keys/trusted-keys/trusted_core.c | 6 --
> >>  1 file changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/security/keys/trusted-keys/trusted_core.c
> >> b/security/keys/trusted-keys/trusted_core.c
> >> index ec3a066a4b42..90774793f0b1 100644
> >> --- a/security/keys/trusted-keys/trusted_core.c
> >> +++ b/security/keys/trusted-keys/trusted_core.c
> >> @@ -116,11 +116,13 @@ static struct trusted_key_payload
> >> *trusted_payload_alloc(struct key *key)
> >>
> >>  ret = key_payload_reserve(key, sizeof(*p));
> >>  if (ret < 0)
> >> -return p;
> >> +goto err;
> >>  p = kzalloc(sizeof(*p), GFP_KERNEL);
> >> +if (!p)
> >> +goto err;
> >>
> >>  p->migratable = migratable;
> >> -
> >> +err:
> >>  return p;
> >
> > This is clearly a code migration bug in
> >
> > commit 251c85bd106099e6f388a89e88e12d14de2c9cda
> > Author: Sumit Garg 
> > Date:   Mon Mar 1 18:41:24 2021 +0530
> >
> > KEYS: trusted: Add generic trusted keys framework
> >
> > Which has for addition to trusted_core.c:
> >
> > +static struct trusted_key_payload *trusted_payload_alloc(struct key
> > *key)
> > +{
> > +   struct trusted_key_payload *p = NULL;
> > +   int ret;
> > +
> > +   ret = key_payload_reserve(key, sizeof(*p));
> > +   if (ret < 0)
> > +   return p;
> > +   p = kzalloc(sizeof(*p), GFP_KERNEL);
> > +
> > +   p->migratable = migratable;
> > +
> > +   return p;
> > +}
> >
> > And for trusted_tpm1.c:
> >
> > -static struct trusted_key_payload *trusted_payload_alloc(struct key
> > *key)
> > -{
> > -   struct trusted_key_payload *p = NULL;
> > -   int ret;
> > -
> > -   ret = key_payload_reserve(key, sizeof *p);
> > -   if (ret < 0)
> > -   return p;
> > -   p = kzalloc(sizeof *p, GFP_KERNEL);
> > -   if (p)
> > -   p->migratable = 1; /* migratable by default */
> > -   return p;
> > -}
> >
> > The trusted_tpm1.c code was correct and we got this bug introduced by
> > what should have been a simple cut and paste ... how did that happen?

It was a little more than just cut and paste: I generalized the
"migratable" flag to be provided by the corresponding trust source's
ops struct.

> > And therefore, how safe is the rest of the extraction into
> > trusted_core.c?
> >
>
> fortunately it gets caught by static analysis, but it does make me also
> concerned about what else has changed and how this gets through review.
>

I agree that extraction into trusted_core.c was a complex change but
this patch has been up for review for almost 2 years [1]. And
extensive testing can't catch this sort of bug as allocation wouldn't
normally fail.

[1] https://lwn.net/Articles/795416/

-Sumit

> > James
> >
> >
>


Re: [PATCH v1 0/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-04-01 Thread Sumit Garg
On Thu, 1 Apr 2021 at 19:00, Ahmad Fatoum  wrote:
>
> Hello Richard, Sumit,
>
> On 01.04.21 15:17, Richard Weinberger wrote:
> > Sumit,
> >
> > - Original Message -
> >> Von: "Sumit Garg" 
> >> IIUC, this would require support for multiple trusted keys backends at
> >> runtime but currently the trusted keys subsystem only supports a
> >> single backend which is selected via kernel module parameter during
> >> boot.
> >>
> >> So the trusted keys framework needs to evolve to support multiple
> >> trust sources at runtime but I would like to understand the use-cases
> >> first. IMO, selecting the best trust source available on a platform
> >> for trusted keys should be a one time operation, so why do we need to
> >> have other backends available at runtime as well?
> >
> > I thought about devices with a TPM-Chip and CAAM.

In this case, why would one prefer to use CAAM when you have a
standards-compliant TPM chip which additionally offers sealing to specific
PCR (integrity measurement) values?

> > IMHO allowing only one backend at the same time is a little over simplified.
>
> It is, but I'd rather leave this until it's actually needed.
> What can be done now is adopting a format for the exported keys that would
> make this extension seamless in future.
>

+1

-Sumit

> Cheers,
> Ahmad
>
> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH v1 0/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-04-01 Thread Sumit Garg
On Thu, 1 Apr 2021 at 15:36, Ahmad Fatoum  wrote:
>
> Hello Richard,
>
> On 31.03.21 21:36, Richard Weinberger wrote:
> > James,
> >
> > - Original Message -
> >> Von: "James Bottomley" 
> >> Well, yes.  For the TPM, there's a defined ASN.1 format for the keys:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/jejb/openssl_tpm2_engine.git/tree/tpm2-asn.h
> >>
> >> and part of the design of the file is that it's distinguishable either
> >> in DER or PEM (by the guards) format so any crypto application can know
> >> it's dealing with a TPM key simply by inspecting the file.  I think you
> >> need the same thing for CAAM and any other format.
> >>
> >> We're encouraging new ASN.1 formats to be of the form
> >>
> >> SEQUENCE {
> >>type   OBJECT IDENTIFIER
> >>... key specific fields ...
> >> }
> >>
> >> Where you choose a defined OID to represent the key and that means
> >> every key even in DER form begins with a unique binary signature.
> >
> > I like this idea.
> > Ahmad, what do you think?
> >
> > That way we could also get rid off the kernel parameter and all the fall 
> > back logic,
> > given that we find a way to reliable detect TEE blobs too...
>
> Sounds good to me. Sumit, your thoughts on doing this for TEE as well?
>

AFAIU, ASN.1 formatting should be independent of trusted keys backends
and could be abstracted into the trusted keys core layer so that every
backend can be plugged in seamlessly.

James,

Would it be possible to achieve this?

-Sumit

> >
> > Thanks,
> > //richard
> >
>
> --
> Pengutronix e.K.   | |
> Steuerwalder Str. 21   | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany  | Phone: +49-5121-206917-0|
> Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH v1 0/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-04-01 Thread Sumit Garg
Hi Richard,

On Wed, 31 Mar 2021 at 03:34, Richard Weinberger
 wrote:
>
> Ahmad,
>
> On Wed, Mar 17, 2021 at 3:08 PM Ahmad Fatoum  wrote:
> > keyctl add trusted $KEYNAME "load $(cat ~/kmk.blob)" @s
>
> Is there a reason why we can't pass the desired backend name in the
> trusted key parameters?
> e.g.
> keyctl add trusted $KEYNAME "backendtype caam load $(cat ~/kmk.blob)" @s
>

IIUC, this would require support for multiple trusted keys backends at
runtime but currently the trusted keys subsystem only supports a
single backend which is selected via kernel module parameter during
boot.

So the trusted keys framework needs to evolve to support multiple
trust sources at runtime, but I would like to understand the use-cases
first. IMO, selecting the best trust source available on a platform
for trusted keys should be a one-time operation, so why do we need to
have other backends available at runtime as well?
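
To make that one-time selection concrete, here is roughly how it works in
this series (a simplified sketch, not the exact patch): at boot the core
walks the built-in backends and binds the first one that matches the
trusted.source= module parameter, or the first one that initializes
successfully if no preference was given:

static int __init init_trusted(void)
{
        int i, ret = -ENODEV;

        for (i = 0; i < ARRAY_SIZE(trusted_key_sources); i++) {
                if (trusted_key_source &&
                    strncmp(trusted_key_source, trusted_key_sources[i].name,
                            strlen(trusted_key_sources[i].name)))
                        continue;       /* not the backend the user asked for */

                ret = trusted_key_sources[i].ops->init();
                if (!ret)
                        break;          /* first working backend wins */
        }

        return ret;
}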

-Sumit

> --
> Thanks,
> //richard


Re: [PATCH v1 0/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-04-01 Thread Sumit Garg
On Thu, 1 Apr 2021 at 19:29, Richard Weinberger  wrote:
>
> Sumit,
>
> - Original Message -
> > Von: "Sumit Garg" 
> > In this case why would one prefer to use CAAM when you have standards
> > compliant TPM-Chip which additionally offers sealing to specific PCR
> > (integrity measurement) values.
>
> I don't think we can dictate what good/sane solutions are and which are not.
> Both CAAM and TPM have pros and cons, I don't see why supporting both is a 
> bad idea.

I didn't mean to say that supporting both is a bad idea; rather, I
was looking for use-cases where a one-time selection of the best trust
source (whether it be TPM, TEE, CAAM, etc.) for a platform
wouldn't suffice for user needs.

>
> >> > IMHO allowing only one backend at the same time is a little over 
> >> > simplified.
> >>
> >> It is, but I'd rather leave this until it's actually needed.
> >> What can be done now is adopting a format for the exported keys that would
> >> make this extension seamless in future.
> >>
> >
> > +1
>
> As long we don't make multiple backends at runtime impossible I'm
> fine and will happily add support for it when needed. :-)
>

You are most welcome to add such support. I will be happy to review it.

-Sumit

> Thanks,
> //richard


Re: [PATCH v1 3/3] KEYS: trusted: Introduce support for NXP CAAM-based trusted keys

2021-03-17 Thread Sumit Garg
Hi Richard,

On Wed, 17 Mar 2021 at 04:45, Richard Weinberger
 wrote:
>
> Ahmad,
>
> On Tue, Mar 16, 2021 at 6:24 PM Ahmad Fatoum  wrote:
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +
> > +struct caam_blob_priv *blobifier;
>
> Who is using this pointer too?
> Otherwise I'd suggest marking it static.
>
> >  module_param_named(source, trusted_key_source, charp, 0);
> > -MODULE_PARM_DESC(source, "Select trusted keys source (tpm or tee)");
> > +MODULE_PARM_DESC(source, "Select trusted keys source (tpm, tee or caam)");
>
> I didn't closely follow the previous discussions, but is a module
> parameter really the right approach?
> Is there also a way to set it via something like device tree?
>

It's there to support a platform which possesses multiple trusted keys
backends, so that a user is able to select during boot which one to
use as the backend.

-Sumit

> --
> Thanks,
> //richard


Re: [PATCH] objtool,static_call: Don't emit static_call_site for .exit.text

2021-03-17 Thread Sumit Garg
On Wed, 17 Mar 2021 at 18:16, Peter Zijlstra  wrote:
>
> On Wed, Mar 17, 2021 at 05:25:48PM +0530, Sumit Garg wrote:
> > Thanks Peter for this fix. It does work for me on qemu for x86. Can
> > you turn this into a proper fix patch? BTW, feel free to add:
>
> Per the below, the original patch ought to be fixed as well, to not use
> static_call() in __exit.

Okay, fair enough.

Jarkko,

Can you please incorporate the following change to the original patch as well?

diff --git a/security/keys/trusted-keys/trusted_core.c
b/security/keys/trusted-keys/trusted_core.c
index ec3a066a4b42..bef52d1ebe5e 100644
--- a/security/keys/trusted-keys/trusted_core.c
+++ b/security/keys/trusted-keys/trusted_core.c
@@ -41,7 +41,7 @@ DEFINE_STATIC_CALL_NULL(trusted_key_unseal,
*trusted_key_sources[0].ops->unseal);
 DEFINE_STATIC_CALL_NULL(trusted_key_get_random,
*trusted_key_sources[0].ops->get_random);
-DEFINE_STATIC_CALL_NULL(trusted_key_exit, *trusted_key_sources[0].ops->exit);
+static void (*trusted_key_exit)(void);
 static unsigned char migratable;

 enum {
@@ -328,8 +328,7 @@ static int __init init_trusted(void)
   trusted_key_sources[i].ops->unseal);
static_call_update(trusted_key_get_random,
   trusted_key_sources[i].ops->get_random);
-   static_call_update(trusted_key_exit,
-  trusted_key_sources[i].ops->exit);
+   trusted_key_exit = trusted_key_sources[i].ops->exit;
migratable = trusted_key_sources[i].ops->migratable;

ret = static_call(trusted_key_init)();
@@ -349,7 +348,8 @@ static int __init init_trusted(void)

 static void __exit cleanup_trusted(void)
 {
-   static_call(trusted_key_exit)();
+   if (trusted_key_exit)
+   trusted_key_exit();
 }

 late_initcall(init_trusted);

-Sumit

>
> ---
> Subject: objtool,static_call: Don't emit static_call_site for .exit.text
> From: Peter Zijlstra 
> Date: Wed Mar 17 13:35:05 CET 2021
>
> Functions marked __exit are (somewhat surprisingly) discarded at
> runtime when built-in. This means that static_call(), when used in
> __exit functions, will generate static_call_site entries that point
> into reclaimed space.
>
> Simply skip such sites and emit a WARN about it. By not emitting a
> static_call_site the site will remain pointed at the trampoline, which
> is also maintained, so things will work as expected, albeit with the
> extra indirection.
>
> The WARN is so that people are aware of this; and arguably it simply
> isn't a good idea to use static_call() in __exit code anyway, since
> module unload is never a performance critical path.
>
> Reported-by: Sumit Garg 
> Signed-off-by: Peter Zijlstra (Intel) 
> Tested-by: Sumit Garg 
> ---
>  tools/objtool/check.c |   32 
>  1 file changed, 20 insertions(+), 12 deletions(-)
>
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -850,6 +850,22 @@ static int add_ignore_alternatives(struc
> return 0;
>  }
>
> +static inline void static_call_add(struct instruction *insn,
> +  struct objtool_file *file)
> +{
> +   if (!insn->call_dest->static_call_tramp)
> +   return;
> +
> +   if (!strcmp(insn->sec->name, ".exit.text")) {
> +   WARN_FUNC("static_call in .exit.text, skipping inline 
> patching",
> + insn->sec, insn->offset);
> +   return;
> +   }
> +
> +   list_add_tail(&insn->static_call_node,
> + &file->static_call_list);
> +}
> +
>  /*
>   * Find the destination instructions for all jumps.
>   */
> @@ -888,10 +904,7 @@ static int add_jump_destinations(struct
> } else if (insn->func) {
> /* internal or external sibling call (with reloc) */
> insn->call_dest = reloc->sym;
> -   if (insn->call_dest->static_call_tramp) {
> -   list_add_tail(&insn->static_call_node,
> - &file->static_call_list);
> -   }
> +   static_call_add(insn, file);
> continue;
> } else if (reloc->sym->sec->idx) {
> dest_sec = reloc->sym->sec;
> @@ -950,10 +963,7 @@ static int add_jump_destinations(struct
>
> /* internal sibling call (without reloc) */
> insn->call_d

Re: [PATCH] objtool,static_call: Don't emit static_call_site for .exit.text

2021-03-17 Thread Sumit Garg
On Thu, 18 Mar 2021 at 03:26, Jarkko Sakkinen  wrote:
>
> On Wed, Mar 17, 2021 at 07:07:07PM +0530, Sumit Garg wrote:
> > On Wed, 17 Mar 2021 at 18:16, Peter Zijlstra  wrote:
> > >
> > > On Wed, Mar 17, 2021 at 05:25:48PM +0530, Sumit Garg wrote:
> > > > Thanks Peter for this fix. It does work for me on qemu for x86. Can
> > > > you turn this into a proper fix patch? BTW, feel free to add:
> > >
> > > Per the below, the original patch ought to be fixed as well, to not use
> > > static_call() in __exit.
> >
> > Okay, fair enough.
> >
> > Jarkko,
> >
> > Can you please incorporate the following change to the original patch as 
> > well?
>
> Can you roll-out a proper patch of this?

Okay, I will post a separate patch for this.

-Sumit

>
> /Jarkko


Re: [PATCH 0/3] static_call() vs __exit fixes

2021-03-18 Thread Sumit Garg
On Thu, 18 Mar 2021 at 17:10, Peter Zijlstra  wrote:
>
> Hi,
>
> After more poking a new set of patches to fix static_call() vs __exit
> functions. These patches replace the patch I posted yesterday:
>
>   https://lkml.kernel.org/r/yfh6br61b5gk8...@hirez.programming.kicks-ass.net
>
> Since I've reproduced the problem locally, and these patches do seem to fully
> cure things, I'll shortly queue them for tip/locking/urgent.
>

Thanks Peter for these fixes, works fine for me.

FWIW:

Tested-by: Sumit Garg 

-Sumit


Re: [PATCH v8 2/4] KEYS: trusted: Introduce TEE based Trusted Keys

2021-02-21 Thread Sumit Garg
On Tue, 16 Feb 2021 at 12:59, Jarkko Sakkinen  wrote:
>
> On Mon, Feb 15, 2021 at 06:37:00PM +0530, Sumit Garg wrote:
> > On Fri, 12 Feb 2021 at 05:04, Jarkko Sakkinen  wrote:
> > >
> > > On Mon, Jan 25, 2021 at 02:47:38PM +0530, Sumit Garg wrote:
> > > > Hi Jarkko,
> > > >
> > > > On Fri, 22 Jan 2021 at 23:42, Jarkko Sakkinen  wrote:
> > > > >
> > > > > On Thu, Jan 21, 2021 at 05:23:45PM +0100, Jerome Forissier wrote:
> > > > > >
> > > > > >
> > > > > > On 1/21/21 4:24 PM, Jarkko Sakkinen wrote:
> > > > > > > On Thu, Jan 21, 2021 at 05:07:42PM +0200, Jarkko Sakkinen wrote:
> > > > > > >> On Thu, Jan 21, 2021 at 09:44:07AM +0100, Jerome Forissier wrote:
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> On 1/21/21 1:02 AM, Jarkko Sakkinen via OP-TEE wrote:
> > > > > > >>>> On Wed, Jan 20, 2021 at 12:53:28PM +0530, Sumit Garg wrote:
> > > > > > >>>>> On Wed, 20 Jan 2021 at 07:01, Jarkko Sakkinen 
> > > > > > >>>>>  wrote:
> > > > > > >>>>>>
> > > > > > >>>>>> On Tue, Jan 19, 2021 at 12:30:42PM +0200, Jarkko Sakkinen 
> > > > > > >>>>>> wrote:
> > > > > > >>>>>>> On Fri, Jan 15, 2021 at 11:32:31AM +0530, Sumit Garg wrote:
> > > > > > >>>>>>>> On Thu, 14 Jan 2021 at 07:35, Jarkko Sakkinen 
> > > > > > >>>>>>>>  wrote:
> > > > > > >>>>>>>>>
> > > > > > >>>>>>>>> On Wed, Jan 13, 2021 at 04:47:00PM +0530, Sumit Garg 
> > > > > > >>>>>>>>> wrote:
> > > > > > >>>>>>>>>> Hi Jarkko,
> > > > > > >>>>>>>>>>
> > > > > > >>>>>>>>>> On Mon, 11 Jan 2021 at 22:05, Jarkko Sakkinen 
> > > > > > >>>>>>>>>>  wrote:
> > > > > > >>>>>>>>>>>
> > > > > > >>>>>>>>>>> On Tue, Nov 03, 2020 at 09:31:44PM +0530, Sumit Garg 
> > > > > > >>>>>>>>>>> wrote:
> > > > > > >>>>>>>>>>>> Add support for TEE based trusted keys where TEE 
> > > > > > >>>>>>>>>>>> provides the functionality
> > > > > > >>>>>>>>>>>> to seal and unseal trusted keys using hardware unique 
> > > > > > >>>>>>>>>>>> key.
> > > > > > >>>>>>>>>>>>
> > > > > > >>>>>>>>>>>> Refer to Documentation/tee.txt for detailed 
> > > > > > >>>>>>>>>>>> information about TEE.
> > > > > > >>>>>>>>>>>>
> > > > > > >>>>>>>>>>>> Signed-off-by: Sumit Garg 
> > > > > > >>>>>>>>>>>
> > > > > > >>>>>>>>>>> I haven't yet got QEMU environment working with 
> > > > > > >>>>>>>>>>> aarch64, this produces
> > > > > > >>>>>>>>>>> just a blank screen:
> > > > > > >>>>>>>>>>>
> > > > > > >>>>>>>>>>> ./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu 
> > > > > > >>>>>>>>>>> cortex-a53 -smp 1 -kernel output/images/Image -initrd 
> > > > > > >>>>>>>>>>> output/images/rootfs.cpio -serial stdio
> > > > > > >>>>>>>>>>>
> > > > > > >>>>>>>>>>> My BuildRoot fork for TPM and keyring testing is 
> > > > > > >>>>>>>>>>> located over here:
> > > > > > >>>>>>>>>>>
> > > > > > >>>>>>>>>>> https://git.kernel.org/p

Re: [PATCH v4] kdb: Simplify kdb commands registration

2021-02-22 Thread Sumit Garg
On Mon, 22 Feb 2021 at 17:35, Daniel Thompson
 wrote:
>
> On Thu, Feb 18, 2021 at 05:39:58PM +0530, Sumit Garg wrote:
> > Simplify kdb commands registration via using linked list instead of
> > static array for commands storage.
> >
> > Signed-off-by: Sumit Garg 
> > ---
> >
> > Changes in v4:
> > - Fix kdb commands memory allocation issue prior to slab being available
> >   with an array of statically allocated commands. Now it works fine with
> >   kgdbwait.
>
> I'm not sure this is the right approach. It's still faking dynamic usage
> when none of the callers at this stage of the boot actually are dynamic.
>

Okay, as an alternative I came across dbg_kmalloc()/dbg_kfree() as well but ...

> Consider instead what would happen if there was a kdb_register_table() that
> took a kdbtab_t pointer and an length and enqueued them to the new list.
>
> The effect of this is that most of the existing kdb_register() and
> kdb_register_flags() calls would become (self documenting) static
> tables instead:
>
> kdb_register_flags("md", kdb_md, "",
>   "Display Memory Contents, also mdWcN, e.g. md8c1", 1,
>   KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
> ...
>
> Effectively becomes:
>
> kdbtab_t maintab[] = {
> { .cmd_name = "md",
>   .cmd_func = kdb_md,
>   .cmd_usage = "",
>   .cmd_help = "Display Memory Contents, also mdWcN, e.g. md8c1",
>   .cmd_minlen = 1,
>   .cmd_flags = KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS,
> },
> ...
> };
>
> kdb_register_table(maintab, ARRAY_SIZE(maintab));
>

... this approach sounds more appropriate since these commands look
static in nature.

> At that point the only users of kdb_register_flags() would be the macro
> logic and that already relies on the slabs so it is OK to have dynamic
> memory allocation for that.

Makes sense, will use this approach instead.
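
For reference, a minimal sketch of what such a kdb_register_table() helper
could look like (signature assumed here, not taken from a final patch): it
simply enqueues each statically allocated entry on the command list, with no
dynamic allocation needed:

void kdb_register_table(kdbtab_t *kp, size_t len)
{
        while (len--) {
                list_add_tail(&kp->list_node, &kdb_cmds_head);
                kp++;
        }
}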

>
> Daniel.
>
>
> PS It is also possible to switch the macro logic to simplify the
>allocation by embedded a kdbtab_t into struct defcmd_set. That
>would also even more tidy up of registration code... but that
>could (and should) be in another patch so it doesn't all
>have to land together.
>

Okay.

-Sumit

>
> > - Fix a misc checkpatch warning.
> > - I have dropped Doug's review tag as I think this version includes a
> >   major fix that should be reviewed again.
> >
> > Changes in v3:
> > - Remove redundant "if" check.
> > - Pick up review tag from Doug.
> >
> > Changes in v2:
> > - Remove redundant NULL check for "cmd_name".
> > - Incorporate misc. comment.
> >
> >  kernel/debug/kdb/kdb_main.c| 129 
> > ++---
> >  kernel/debug/kdb/kdb_private.h |   2 +
> >  2 files changed, 47 insertions(+), 84 deletions(-)
> >
> > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > index 930ac1b..5215e04 100644
> > --- a/kernel/debug/kdb/kdb_main.c
> > +++ b/kernel/debug/kdb/kdb_main.c
> > @@ -33,6 +33,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -84,15 +85,12 @@ static unsigned int kdb_continue_catastrophic =
> >  static unsigned int kdb_continue_catastrophic;
> >  #endif
> >
> > -/* kdb_commands describes the available commands. */
> > -static kdbtab_t *kdb_commands;
> > -#define KDB_BASE_CMD_MAX 50
> > -static int kdb_max_commands = KDB_BASE_CMD_MAX;
> > -static kdbtab_t kdb_base_commands[KDB_BASE_CMD_MAX];
> > -#define for_each_kdbcmd(cmd, num)\
> > - for ((cmd) = kdb_base_commands, (num) = 0;  \
> > -  num < kdb_max_commands;\
> > -  num++, num == KDB_BASE_CMD_MAX ? cmd = kdb_commands : cmd++)
> > +/* kdb_cmds_head describes the available commands. */
> > +static LIST_HEAD(kdb_cmds_head);
> > +
> > +#define KDB_CMD_INIT_MAX 50
> > +static int kdb_cmd_init_idx;
> > +static kdbtab_t kdb_commands_init[KDB_CMD_INIT_MAX];
> >
> >  typedef struct _kdbmsg {
> >   int km_diag;/* kdb diagnostic */
> > @@ -921,7 +919,7 @@ int kdb_parse(const char *cmdstr)
> >   char *cp;
> >   char *cpp, quoted;
> >   kdbtab_t *tp;
> > - int i, escaped, ignore_errors = 0, check_grep = 0;
> > + int escaped, ig

Re: [PATCH v2] kdb: Get rid of custom debug heap allocator

2021-02-28 Thread Sumit Garg
On Fri, 26 Feb 2021 at 23:07, Daniel Thompson
 wrote:
>
> On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> > On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> >  wrote:
> > >
> > > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > > Currently the only user for debug heap is kdbnearsym() which can be
> > > > modified to rather ask the caller to supply a buffer for symbol name.
> > > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > > allocated statically and hence remove custom debug heap allocator.
> > >
> > > Why make the callers do this?
> > >
> > > The LRU buffers were managed inside kdbnearsym() why does switching to
> > > an approach with a single buffer require us to push that buffer out to
> > > the callers?
> > >
> >
> > Earlier the LRU buffers managed namebuf uniqueness per caller (up to
> > 100 callers)
>
> The uniqueness is per symbol, not per caller.
>

Agree.

> > but if we switch to single entry in kdbnearsym() then all
> > callers need to share common buffer which will lead to incorrect
> > results from following simple sequence:
> >
> > kdbnearsym(word, &symtab1);
> > kdbnearsym(word, &symtab2);
> > kdb_symbol_print(word, &symtab1, 0);
> > kdb_symbol_print(word, &symtab2, 0);
> >
> > But if we change to a unique static namebuf per caller then the
> > following sequence will work:
> >
> > kdbnearsym(word, &symtab1, namebuf1);
> > kdbnearsym(word, &symtab2, namebuf2);
> > kdb_symbol_print(word, &symtab1, 0);
> > kdb_symbol_print(word, &symtab2, 0);
>
> This is true but do any of the callers of kdbnearsym ever do this?

No, but any prospective caller may need this.

> The
> main reason the heap stuck out as redundant was that I've only ever
> seen the output of kdbnearsym() consumed almost immediately by a print.
>

Yeah but I think the alternative proposed in this patch isn't as
burdensome as the heap and tries to somewhat match existing
functionality.

> I wrote an early version of a patch like this that just shrunk the LRU
> cache down to 2 and avoided any heap usage... but I threw it away
> when I realized we never carry cached values outside the function
> that obtained them.
>

Okay, so if you still think that having a single static buffer inside
kdbnearsym() is an appropriate approach for the time being, then I will
switch to using that instead.
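To make the trade-off concrete, that variant would look roughly like the
sketch below (simplified kallsyms handling; the point is that the returned
symtab->sym_name stays valid only until the next call):

#include <linux/kallsyms.h>
#include <linux/string.h>
#include "kdb_private.h"	/* kdb_symtab_t */

/* Sketch only: one static name buffer inside kdbnearsym(). */
int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
{
	static char namebuf[KSYM_NAME_LEN];
	unsigned long symbolsize = 0, offset = 0;

	memset(symtab, 0, sizeof(*symtab));
	symtab->sym_name = kallsyms_lookup(addr, &symbolsize, &offset,
					   (char **)&symtab->mod_name, namebuf);
	if (!symtab->sym_name)
		return 0;

	symtab->sym_start = addr - offset;
	symtab->sym_end = symtab->sym_start + symbolsize;
	return 1;
}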

-Sumit

>
> > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int 
> > > > *nextarg,
> >
> > >
> > > > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > > > index 9d69169582c6..6efe9ec53906 100644
> > > > --- a/kernel/debug/kdb/kdb_main.c
> > > > +++ b/kernel/debug/kdb/kdb_main.c
> > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int 
> > > > *nextarg,
> > >
> > > The documentation comment for this function has not been updated to
> > > describe the new contract on callers of this function (e.g. if they
> > > consume the symbol name they must do so before calling kdbgetaddrarg()
> > > (and maybe kdbnearsym() again).
> > >
> >
> > I am not sure if I follow you here. If we have a unique static buffer
> > per caller then why do we need this new contract?
>
> I traced the code wrong. I thought it shared symtab->sym_name with its
> own caller... but it doesn't, it shares symname with its caller and
> that's totally different...
>
>
> Daniel.
>
> >
> > >
> > > >   char symbol = '\0';
> > > >   char *cp;
> > > >   kdb_symtab_t symtab;
> > > > + static char namebuf[KSYM_NAME_LEN];
> > > >
> > > >   /*
> > > >* If the enable flags prohibit both arbitrary memory access
> > > > diff --git a/kernel/debug/kdb/kdb_support.c 
> > > > b/kernel/debug/kdb/kdb_support.c
> > > > index b59aad1f0b55..9b907a84f2db 100644
> > > > --- a/kernel/debug/kdb/kdb_support.c
> > > > +++ b/kernel/debug/kdb/kdb_support.c
> > > > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t 
> > > > *symtab)
> > > >  }
> > > >  EXPORT_SYMBOL(kdbgetsymval);
> > > >
> > > > -static char *kdb_name_table[100];/* arbitrary size */
> > > > -
> > > >  /*
> > > >   * kdbnearsym -  Return the

[PATCH v9 0/4] Introduce TEE based Trusted Keys support

2021-03-01 Thread Sumit Garg
Add support for TEE based trusted keys where TEE provides the functionality
to seal and unseal trusted keys using a hardware unique key. Also, this is
an alternative in case the platform doesn't possess a TPM device.

This patch-set has been tested with OP-TEE based early TA which is already
merged in upstream [1].

[1] 
https://github.com/OP-TEE/optee_os/commit/f86ab8e7e0de869dfa25ca05a37ee070d7e5b86b

Changes in v9:
1. Rebased to latest tpmdd/master.
2. Defined pr_fmt() and removed redundant tags.
3. Patch #2: incorporated misc. comments.
4. Patch #3: incorporated doc changes from Elaine and misc. comments
   from Randy.
5. Patch #4: reverted to separate maintainer entry as per request from
   Jarkko.
6. Added Jarkko's Tested-by: tag on patch #2.

Changes in v8:
1. Added static calls support instead of indirect calls.
2. Documented trusted keys source module parameter.
3. Refined patch #1 commit message description.
4. Addressed misc. comments on patch #2.
5. Added myself as Trusted Keys co-maintainer instead.
6. Rebased to latest tpmdd master.

Changes in v7:
1. Added a trusted.source module parameter in order to enforce the user's
   choice in case a particular platform possesses both TPM and TEE.
2. Refine commit description for patch #1.

Changes in v6:
1. Revert back to dynamic detection of trust source.
2. Drop author mention from trusted_core.c and trusted_tpm1.c files.
3. Rebased to latest tpmdd/master.

Changes in v5:
1. Drop dynamic detection of trust source and use compile time flags
   instead.
2. Rename trusted_common.c -> trusted_core.c.
3. Rename callback: cleanup() -> exit().
4. Drop "tk" acronym.
5. Other misc. comments.
6. Added review tags for patch #3 and #4.

Changes in v4:
1. Pushed independent TEE features separately:
  - Part of recent TEE PR: https://lkml.org/lkml/2020/5/4/1062
2. Updated trusted-encrypted doc with TEE as a new trust source.
3. Rebased onto latest tpmdd/master.

Changes in v3:
1. Update patch #2 to support registration of multiple kernel pages.
2. Incorporate dependency patch #4 in this patch-set:
   https://patchwork.kernel.org/patch/11091435/

Changes in v2:
1. Add reviewed-by tags for patch #1 and #2.
2. Incorporate comments from Jens for patch #3.
3. Switch to use generic trusted keys framework.

Sumit Garg (4):
  KEYS: trusted: Add generic trusted keys framework
  KEYS: trusted: Introduce TEE based Trusted Keys
  doc: trusted-encrypted: updates with TEE as a new trust source
  MAINTAINERS: Add entry for TEE based Trusted Keys

 .../admin-guide/kernel-parameters.txt |  12 +
 .../security/keys/trusted-encrypted.rst   | 171 ++--
 MAINTAINERS   |   8 +
 include/keys/trusted-type.h   |  53 +++
 include/keys/trusted_tee.h|  16 +
 include/keys/trusted_tpm.h|  29 +-
 security/keys/trusted-keys/Makefile   |   2 +
 security/keys/trusted-keys/trusted_core.c | 358 +
 security/keys/trusted-keys/trusted_tee.c  | 317 +++
 security/keys/trusted-keys/trusted_tpm1.c | 366 --
 10 files changed, 981 insertions(+), 351 deletions(-)
 create mode 100644 include/keys/trusted_tee.h
 create mode 100644 security/keys/trusted-keys/trusted_core.c
 create mode 100644 security/keys/trusted-keys/trusted_tee.c

-- 
2.25.1



[PATCH v9 4/4] MAINTAINERS: Add entry for TEE based Trusted Keys

2021-03-01 Thread Sumit Garg
Add MAINTAINERS entry for TEE based Trusted Keys framework.

Signed-off-by: Sumit Garg 
Acked-by: Jarkko Sakkinen 
---
 MAINTAINERS | 8 
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 1d75afad615f..eb1ac9c90f7f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9830,6 +9830,14 @@ F:   include/keys/trusted-type.h
 F: include/keys/trusted_tpm.h
 F: security/keys/trusted-keys/
 
+KEYS-TRUSTED-TEE
+M: Sumit Garg 
+L: linux-integr...@vger.kernel.org
+L: keyri...@vger.kernel.org
+S: Supported
+F: include/keys/trusted_tee.h
+F: security/keys/trusted-keys/trusted_tee.c
+
 KEYS/KEYRINGS
 M: David Howells 
 M: Jarkko Sakkinen 
-- 
2.25.1



[PATCH v9 1/4] KEYS: trusted: Add generic trusted keys framework

2021-03-01 Thread Sumit Garg
The current trusted keys framework is tightly coupled to the TPM device as
its underlying implementation, which makes it difficult for implementations
like a Trusted Execution Environment (TEE) to provide trusted keys support
in case the platform doesn't possess a TPM device.

Add a generic trusted keys framework where underlying implementations
can be easily plugged in. Create struct trusted_key_ops to achieve this,
which contains necessary functions of a backend.

Also, define a module parameter in order to select a particular trust
source in case a platform supports multiple trust sources. If it is not
specified, the implementation iterates through the trust source list
starting with TPM and assigns the first trust source that initializes
successfully as the backend.

Note that the current implementation only supports a single trust source
at runtime, which is either selectable at compile time or during boot via
the aforementioned module parameter.
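(Illustration only, not part of the patch.) A backend plugs in by filling
the trusted_key_ops structure added below; the "foo" names here are
placeholders:

#include <linux/errno.h>
#include <keys/trusted-type.h>

static int foo_init(void)
{
	/* Probe/initialize the underlying trust source; 0 on success. */
	return 0;
}

static int foo_seal(struct trusted_key_payload *p, char *datablob)
{
	/* Encrypt p->key into p->blob and set p->blob_len. */
	return -EOPNOTSUPP;
}

static int foo_unseal(struct trusted_key_payload *p, char *datablob)
{
	/* Decrypt p->blob back into p->key and set p->key_len. */
	return -EOPNOTSUPP;
}

static int foo_get_random(unsigned char *key, size_t key_len)
{
	/* Optionally source key material from the trust source. */
	return -EOPNOTSUPP;
}

static void foo_exit(void)
{
}

struct trusted_key_ops trusted_key_foo_ops = {
	.migratable = 0,	/* sealed blobs are bound to this trust source */
	.init = foo_init,
	.seal = foo_seal,
	.unseal = foo_unseal,
	.get_random = foo_get_random,
	.exit = foo_exit,
};

The core lists each available backend in trusted_key_sources[], and on a
platform with several sources the user can force one at boot with e.g.
trusted.source=tee.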

Suggested-by: Jarkko Sakkinen 
Signed-off-by: Sumit Garg 
---
 .../admin-guide/kernel-parameters.txt |  12 +
 include/keys/trusted-type.h   |  53 +++
 include/keys/trusted_tpm.h|  29 +-
 security/keys/trusted-keys/Makefile   |   1 +
 security/keys/trusted-keys/trusted_core.c | 354 +
 security/keys/trusted-keys/trusted_tpm1.c | 366 --
 6 files changed, 497 insertions(+), 318 deletions(-)
 create mode 100644 security/keys/trusted-keys/trusted_core.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt 
b/Documentation/admin-guide/kernel-parameters.txt
index 0ac883777318..fbc828994b06 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5459,6 +5459,18 @@
See Documentation/admin-guide/mm/transhuge.rst
for more details.
 
+   trusted.source= [KEYS]
+   Format: 
+   This parameter identifies the trust source as a backend
+   for trusted keys implementation. Supported trust
+   sources:
+   - "tpm"
+   - "tee"
+   If not specified then it defaults to iterating through
+   the trust source list starting with TPM and assigns the
+   first trust source as a backend which is initialized
+   successfully during iteration.
+
tsc=Disable clocksource stability checks for TSC.
Format: 
[x86] reliable: mark tsc clocksource as reliable, this
diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
index a94c03a61d8f..24016898ca41 100644
--- a/include/keys/trusted-type.h
+++ b/include/keys/trusted-type.h
@@ -11,6 +11,12 @@
 #include 
 #include 
 
+#ifdef pr_fmt
+#undef pr_fmt
+#endif
+
+#define pr_fmt(fmt) "trusted_key: " fmt
+
 #define MIN_KEY_SIZE   32
 #define MAX_KEY_SIZE   128
 #define MAX_BLOB_SIZE  512
@@ -40,6 +46,53 @@ struct trusted_key_options {
uint32_t policyhandle;
 };
 
+struct trusted_key_ops {
+   /*
+* flag to indicate if trusted key implementation supports migration
+* or not.
+*/
+   unsigned char migratable;
+
+   /* Initialize key interface. */
+   int (*init)(void);
+
+   /* Seal a key. */
+   int (*seal)(struct trusted_key_payload *p, char *datablob);
+
+   /* Unseal a key. */
+   int (*unseal)(struct trusted_key_payload *p, char *datablob);
+
+   /* Get a randomized key. */
+   int (*get_random)(unsigned char *key, size_t key_len);
+
+   /* Exit key interface. */
+   void (*exit)(void);
+};
+
+struct trusted_key_source {
+   char *name;
+   struct trusted_key_ops *ops;
+};
+
 extern struct key_type key_type_trusted;
 
+#define TRUSTED_DEBUG 0
+
+#if TRUSTED_DEBUG
+static inline void dump_payload(struct trusted_key_payload *p)
+{
+   pr_info("key_len %d\n", p->key_len);
+   print_hex_dump(KERN_INFO, "key ", DUMP_PREFIX_NONE,
+  16, 1, p->key, p->key_len, 0);
+   pr_info("bloblen %d\n", p->blob_len);
+   print_hex_dump(KERN_INFO, "blob ", DUMP_PREFIX_NONE,
+  16, 1, p->blob, p->blob_len, 0);
+   pr_info("migratable %d\n", p->migratable);
+}
+#else
+static inline void dump_payload(struct trusted_key_payload *p)
+{
+}
+#endif
+
 #endif /* _KEYS_TRUSTED_TYPE_H */
diff --git a/include/keys/trusted_tpm.h b/include/keys/trusted_tpm.h
index a56d8e1298f2..7769b726863a 100644
--- a/include/keys/trusted_tpm.h
+++ b/include/keys/trusted_tpm.h
@@ -16,6 +16,8 @@
 #define LOAD32N(buffer, offset)(*(uint32_t *)&buffer[offset])
 #define LOAD16(buffer, offset) 

[PATCH v9 2/4] KEYS: trusted: Introduce TEE based Trusted Keys

2021-03-01 Thread Sumit Garg
Add support for TEE based trusted keys where TEE provides the functionality
to seal and unseal trusted keys using a hardware unique key.

Refer to Documentation/staging/tee.rst for detailed information about TEE.

Signed-off-by: Sumit Garg 
Tested-by: Jarkko Sakkinen 
---
 include/keys/trusted_tee.h|  16 ++
 security/keys/trusted-keys/Makefile   |   1 +
 security/keys/trusted-keys/trusted_core.c |   4 +
 security/keys/trusted-keys/trusted_tee.c  | 317 ++
 4 files changed, 338 insertions(+)
 create mode 100644 include/keys/trusted_tee.h
 create mode 100644 security/keys/trusted-keys/trusted_tee.c

diff --git a/include/keys/trusted_tee.h b/include/keys/trusted_tee.h
new file mode 100644
index ..151be25a979e
--- /dev/null
+++ b/include/keys/trusted_tee.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019-2021 Linaro Ltd.
+ *
+ * Author:
+ * Sumit Garg 
+ */
+
+#ifndef __TEE_TRUSTED_KEY_H
+#define __TEE_TRUSTED_KEY_H
+
+#include 
+
+extern struct trusted_key_ops trusted_key_tee_ops;
+
+#endif
diff --git a/security/keys/trusted-keys/Makefile 
b/security/keys/trusted-keys/Makefile
index 49e3bcfe704f..347021d5d1f9 100644
--- a/security/keys/trusted-keys/Makefile
+++ b/security/keys/trusted-keys/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_TRUSTED_KEYS) += trusted.o
 trusted-y += trusted_core.o
 trusted-y += trusted_tpm1.o
 trusted-y += trusted_tpm2.o
+trusted-$(CONFIG_TEE) += trusted_tee.o
diff --git a/security/keys/trusted-keys/trusted_core.c 
b/security/keys/trusted-keys/trusted_core.c
index 0db86b44605d..ec3a066a4b42 100644
--- a/security/keys/trusted-keys/trusted_core.c
+++ b/security/keys/trusted-keys/trusted_core.c
@@ -8,6 +8,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -29,6 +30,9 @@ static const struct trusted_key_source trusted_key_sources[] 
= {
 #if defined(CONFIG_TCG_TPM)
{ "tpm", &trusted_key_tpm_ops },
 #endif
+#if defined(CONFIG_TEE)
+   { "tee", &trusted_key_tee_ops },
+#endif
 };
 
 DEFINE_STATIC_CALL_NULL(trusted_key_init, *trusted_key_sources[0].ops->init);
diff --git a/security/keys/trusted-keys/trusted_tee.c 
b/security/keys/trusted-keys/trusted_tee.c
new file mode 100644
index ..62983d98a252
--- /dev/null
+++ b/security/keys/trusted-keys/trusted_tee.c
@@ -0,0 +1,317 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019-2021 Linaro Ltd.
+ *
+ * Author:
+ * Sumit Garg 
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#define DRIVER_NAME "trusted-key-tee"
+
+/*
+ * Get random data for symmetric key
+ *
+ * [out] memref[0]Random data
+ */
+#define TA_CMD_GET_RANDOM  0x0
+
+/*
+ * Seal trusted key using hardware unique key
+ *
+ * [in]  memref[0]Plain key
+ * [out] memref[1]Sealed key datablob
+ */
+#define TA_CMD_SEAL0x1
+
+/*
+ * Unseal trusted key using hardware unique key
+ *
+ * [in]  memref[0]Sealed key datablob
+ * [out] memref[1]Plain key
+ */
+#define TA_CMD_UNSEAL  0x2
+
+/**
+ * struct trusted_key_tee_private - TEE Trusted key private data
+ * @dev:   TEE based Trusted key device.
+ * @ctx:   TEE context handler.
+ * @session_id:Trusted key TA session identifier.
+ * @shm_pool:  Memory pool shared with TEE device.
+ */
+struct trusted_key_tee_private {
+   struct device *dev;
+   struct tee_context *ctx;
+   u32 session_id;
+   struct tee_shm *shm_pool;
+};
+
+static struct trusted_key_tee_private pvt_data;
+
+/*
+ * Have the TEE seal(encrypt) the symmetric key
+ */
+static int trusted_tee_seal(struct trusted_key_payload *p, char *datablob)
+{
+   int ret;
+   struct tee_ioctl_invoke_arg inv_arg;
+   struct tee_param param[4];
+   struct tee_shm *reg_shm_in = NULL, *reg_shm_out = NULL;
+
+   memset(&inv_arg, 0, sizeof(inv_arg));
+   memset(¶m, 0, sizeof(param));
+
+   reg_shm_in = tee_shm_register(pvt_data.ctx, (unsigned long)p->key,
+ p->key_len, TEE_SHM_DMA_BUF |
+ TEE_SHM_KERNEL_MAPPED);
+   if (IS_ERR(reg_shm_in)) {
+   dev_err(pvt_data.dev, "key shm register failed\n");
+   return PTR_ERR(reg_shm_in);
+   }
+
+   reg_shm_out = tee_shm_register(pvt_data.ctx, (unsigned long)p->blob,
+  sizeof(p->blob), TEE_SHM_DMA_BUF |
+  TEE_SHM_KERNEL_MAPPED);
+   if (IS_ERR(reg_shm_out)) {
+   dev_err(pvt_data.dev, "blob shm register failed\n");
+   ret = PTR_ERR(reg_shm_out);
+   goto out;
+   }
+
+   inv_arg.func = TA_CMD_SEAL;
+   inv_arg.session = pvt_data.session_id;
+   inv_arg.num_params = 4;
+
+   param[0].attr = TEE_I

[PATCH v9 3/4] doc: trusted-encrypted: updates with TEE as a new trust source

2021-03-01 Thread Sumit Garg
Update documentation for Trusted and Encrypted Keys with TEE as a new
trust source. Following is a brief description of the updates:

- Add a section to demonstrate a list of supported devices along with
  their security properties/guarantees.
- Add a key generation section.
- Updates for usage section including differences specific to a trust
  source.

Co-developed-by: Elaine Palmer 
Signed-off-by: Elaine Palmer 
Signed-off-by: Sumit Garg 
---
 .../security/keys/trusted-encrypted.rst   | 171 ++
 1 file changed, 138 insertions(+), 33 deletions(-)

diff --git a/Documentation/security/keys/trusted-encrypted.rst 
b/Documentation/security/keys/trusted-encrypted.rst
index 1da879a68640..5369403837ae 100644
--- a/Documentation/security/keys/trusted-encrypted.rst
+++ b/Documentation/security/keys/trusted-encrypted.rst
@@ -6,30 +6,127 @@ Trusted and Encrypted Keys are two new key types added to 
the existing kernel
 key ring service.  Both of these new types are variable length symmetric keys,
 and in both cases all keys are created in the kernel, and user space sees,
 stores, and loads only encrypted blobs.  Trusted Keys require the availability
-of a Trusted Platform Module (TPM) chip for greater security, while Encrypted
-Keys can be used on any system.  All user level blobs, are displayed and loaded
-in hex ascii for convenience, and are integrity verified.
+of a Trust Source for greater security, while Encrypted Keys can be used on any
+system. All user level blobs, are displayed and loaded in hex ASCII for
+convenience, and are integrity verified.
 
-Trusted Keys use a TPM both to generate and to seal the keys.  Keys are sealed
-under a 2048 bit RSA key in the TPM, and optionally sealed to specified PCR
-(integrity measurement) values, and only unsealed by the TPM, if PCRs and blob
-integrity verifications match.  A loaded Trusted Key can be updated with new
-(future) PCR values, so keys are easily migrated to new pcr values, such as
-when the kernel and initramfs are updated.  The same key can have many saved
-blobs under different PCR values, so multiple boots are easily supported.
 
-TPM 1.2

+Trust Source
+
 
-By default, trusted keys are sealed under the SRK, which has the default
-authorization value (20 zeros).  This can be set at takeownership time with the
-trouser's utility: "tpm_takeownership -u -z".
+A trust source provides the source of security for Trusted Keys.  This
+section lists currently supported trust sources, along with their security
+considerations.  Whether or not a trust source is sufficiently safe depends
+on the strength and correctness of its implementation, as well as the threat
+environment for a specific use case.  Since the kernel doesn't know what the
+environment is, and there is no metric of trust, it is dependent on the
+consumer of the Trusted Keys to determine if the trust source is sufficiently
+safe.
 
-TPM 2.0

+  *  Root of trust for storage
 
-The user must first create a storage key and make it persistent, so the key is
-available after reboot. This can be done using the following commands.
+ (1) TPM (Trusted Platform Module: hardware device)
+
+ Rooted to Storage Root Key (SRK) which never leaves the TPM that
+ provides crypto operation to establish root of trust for storage.
+
+ (2) TEE (Trusted Execution Environment: OP-TEE based on Arm TrustZone)
+
+ Rooted to Hardware Unique Key (HUK) which is generally burnt in on-chip
+ fuses and is accessible to TEE only.
+
+  *  Execution isolation
+
+ (1) TPM
+
+ Fixed set of operations running in isolated execution environment.
+
+ (2) TEE
+
+ Customizable set of operations running in isolated execution
+ environment verified via Secure/Trusted boot process.
+
+  * Optional binding to platform integrity state
+
+ (1) TPM
+
+ Keys can be optionally sealed to specified PCR (integrity measurement)
+ values, and only unsealed by the TPM, if PCRs and blob integrity
+ verifications match. A loaded Trusted Key can be updated with new
+ (future) PCR values, so keys are easily migrated to new PCR values,
+ such as when the kernel and initramfs are updated. The same key can
+ have many saved blobs under different PCR values, so multiple boots are
+ easily supported.
+
+ (2) TEE
+
+ Relies on Secure/Trusted boot process for platform integrity. It can
+ be extended with TEE based measured boot process.
+
+  *  Interfaces and APIs
+
+ (1) TPM
+
+ TPMs have well-documented, standardized interfaces and APIs.
+
+ (2) TEE
+
+ TEEs have well-documented, standardized client interface and APIs. For
+ more details refer to ``Documentation/staging/tee.rst``.
+
+
+  *  Threat model
+
+ The strength and appropriateness of a particular TPM or TEE for a given
+ purpose must be assessed when u
