On Sat, Aug 15, 2020 at 01:54:34PM +0200, Mark Kettenis wrote:
> > Date: Sat, 15 Aug 2020 20:21:09 +1000
> > From: Jonathan Gray <j...@jsg.id.au>
> >
> > On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > > > From: Mark Kettenis <mark.kette...@xs4all.nl>
> > > >
> > > > I suppose a way to test this properly is to pick a system call and
> > > > replace a copyin() with a direct access? That will succeed without
> > > > PAN but should fail with PAN enabled, right?
> > >
> > > So that does indeed work. However, the result is a hard hang. So
> > > here is an additional diff that makes sure we panic instead. The idea
> > > is that all user-space access from the kernel should be done by the
> > > special unprivileged load/store instructions.
> >
> > Would disabling PSTATE.PAN in copyin/copyout along the lines of how
> > stac/clac is done for SMAP avoid having to test the instruction type
> > entirely?
>
> No. The problem is that we need to catch permission faults caused by
> PAN. But the faulting address may be valid in the sense that UVM has
> a mapping for it that allows the requested access. In that case we
> end up looping since uvm_fault() returns 0 and we retry the faulting
> instruction.
>
> > Is it faulting on valid copyin/copyout the way you have it at the
> > moment? I don't quite follow what is going on.
>
> The copyin/copyout functions use the unprivileged load/store
> instructions (LDTR/STTR) which bypass PAN. But we may still fault
> because the memory hasn't been faulted in or because the memory has
> been marked read-only for CoW or for MOD/REF emulation. And some of
> those faults manifest themselves as permission faults as well.
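
(To make that concrete for anyone reading along: conceptually the copy
loop looks something like the C sketch below. This is not the real
arm64 copyin(), which is written in assembly and recovers from faults
via pcb_onfault; the sketch only shows where the unprivileged loads
come in.)

/*
 * Rough sketch of the copyin() idea: fetch user bytes with the
 * unprivileged LDTRB instruction, so the access is checked against
 * EL0 permissions and is allowed under PAN, then store into kernel
 * memory with an ordinary privileged store.  Error handling omitted.
 */
static int
copyin_sketch(const void *uaddr, void *kaddr, size_t len)
{
	const unsigned char *src = uaddr;
	unsigned char *dst = kaddr;
	unsigned int b;

	while (len-- > 0) {
		/* may still fault: page not resident, CoW, MOD/REF emulation */
		__asm volatile("ldtrb %w0, [%1]" : "=r"(b) : "r"(src));
		*dst++ = b;
		src++;
	}
	return 0;
}
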
>
> Currently (without the diff quoted below) those faults will be handled
> just fine. The diff below needs to make sure this continues to be the
> case. The easiest way to do that is to check the instruction.
>
> Note that this check is in the "slow path". In most cases the address
> touched by copyin/copyout will already be in the page tables since
> userland touched it already.
>
> Does that clarify things?
Yes, thanks. I'm fine with both of these diffs going in but still think
you should change the mask.
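
To spell out the mask point, this is my reading of the "Load/store
register (unprivileged)" encoding group; the sketch below is the same
check written against an instruction word, with the fields the mask
actually tests spelled out:

/*
 * What the suggested mask tests, per the Arm ARM decode table for
 * "Load/store register (unprivileged)":
 *
 *   bits [29:27] == 0b111
 *   bits [25:24] == 0b00
 *   bit  [21]    == 0
 *   bits [11:10] == 0b10   (the unprivileged variant)
 *
 * 0x3f200c00 additionally tests bit 26, which the table does not use
 * to decode this group, hence 0x3b200c00.
 */
static inline int
is_unpriv_ldst_alt(uint32_t insn)
{
	return ((insn & 0x3b200c00) == 0x38000800);
}

If I've assembled them correctly, ldtr w0, [x1] (0xb8400820) passes
this check while a plain ldr w0, [x1] (0xb9400020) does not.
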
>
>
> > > Index: arch/arm64/arm64/trap.c
> > > ===================================================================
> > > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> > > retrieving revision 1.27
> > > diff -u -p -r1.27 trap.c
> > > --- arch/arm64/arm64/trap.c 6 Jan 2020 12:37:30 -0000 1.27
> > > +++ arch/arm64/arm64/trap.c 14 Aug 2020 21:05:54 -0000
> > > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
> > >
> > > void dumpregs(struct trapframe*);
> > >
> > > +/* Check whether we're executing an unprivileged load/store instruction. */
> > > +static inline int
> > > +is_unpriv_ldst(uint64_t elr)
> > > +{
> > > +	uint32_t insn = *(uint32_t *)elr;
> > > +	return ((insn & 0x3f200c00) == 0x38000800);
> >
> > The value of op1 (bit 26) is not used according to the table in the Arm
> > ARM. The mask would be better as 0x3b200c00
> >
> >
> > > +}
> > > +
> > > static void
> > > data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
> > > int lower, int exe)
> > > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
> > >  		/* The top bit tells us which range to use */
> > >  		if ((far >> 63) == 1)
> > >  			map = kernel_map;
> > > -		else
> > > +		else {
> > > +			/*
> > > +			 * Only allow user-space access using
> > > +			 * unprivileged load/store instructions.
> > > +			 */
> > > +			if (!is_unpriv_ldst(frame->tf_elr)) {
> > > +				panic("attempt to access user address"
> > > +				    " 0x%llx from EL1", far);
> > > +			}
> > > +
> > >  			map = &p->p_vmspace->vm_map;
> > > +		}
> > >  	}
> > >  
> > >  	if (exe)
> > >
> > >
> >
>
>