Actually, it is possible to make this simpler and avoid the gotos altogether.
With the changes I have made, the check comes down to the following conditional:

+    /* Use the target EL, current execution state and SCR/HCR settings to
+     * determine whether the corresponding CPSR bit is used to mask the
+     * interrupt.
+     */
+    if ((tar_el > cur_el) && (tar_el != 1) && (scr || hcr)) {
+        if (arm_el_is_aa64(env, 3) || !secure) {
+            unmasked = 1;
+        }
+    }
+
+    /* The PSTATE bits only mask the interrupt if we have not overridden the
+     * ability above.
+     */
+    return unmasked || pstate_unmasked;

scr and hcr are set depending on the exception type, so both EXCP_IRQ and
EXCP_FIQ fall through to this check.  The conditional weeds out all the cases
where we can ignore/override the CPSR masking bits.
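
For reference, here is a small stand-alone model of that decision -- not the
QEMU code itself, just the logic above with the per-exception routing bit
(SCR.IRQ/HCR.IMO for IRQs, SCR.FIQ/HCR.FMO for FIQs) and the "EL3 is AArch64"
test passed in as plain booleans -- which compiles and runs on its own:

#include <stdbool.h>
#include <stdio.h>

static bool excp_unmasked(unsigned int cur_el, unsigned int tar_el,
                          bool scr, bool hcr, bool el3_is_aa64,
                          bool secure, bool pstate_unmasked)
{
    bool unmasked = false;

    /* An exception routed to a higher EL (other than EL1) by SCR or HCR
     * overrides the CPSR/PSTATE mask bit, unless EL3 is AArch32 and we
     * are secure.
     */
    if ((tar_el > cur_el) && (tar_el != 1) && (scr || hcr)) {
        if (el3_is_aa64 || !secure) {
            unmasked = true;
        }
    }

    /* Otherwise the PSTATE/CPSR bit decides. */
    return unmasked || pstate_unmasked;
}

int main(void)
{
    /* IRQ taken from non-secure EL0, routed to an AArch64 EL3 by SCR.IRQ,
     * with PSTATE.I set (locally masked): still taken, so this prints 1.
     */
    printf("%d\n", excp_unmasked(0, 3, true, false, true, false, false));
    return 0;
}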

Removed table and reworked function in v9.

On 31 October 2014 14:00, Peter Maydell <peter.mayd...@linaro.org> wrote:

> On 30 October 2014 21:28, Greg Bellows <greg.bell...@linaro.org> wrote:
> > This patch extends arm_excp_unmasked() to use lookup tables for
> > determining whether IRQ and FIQ exceptions are masked.  The lookup tables
> > are based on the ARMv8 and ARMv7 specification physical interrupt masking
> > tables.
> >
> > If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception
> > levels other than EL3 if SCR.{FIQ|IRQ} is set to 1 (routed to EL3).
> >
> > Signed-off-by: Greg Bellows <greg.bell...@linaro.org>
> >
> > ---
> >
> > v7 -> v8
> > - Add IRQ and FIQ exception masking lookup tables.
> > - Rewrite patch to use lookup tables for determining whether an exception
> >   is masked or not.
> >
> > v5 -> v6
> > - Globally change Aarch# to AArch#
> > - Fixed comment termination
> >
> > v4 -> v5
> > - Merge with v4 patch 10
> > ---
> >  target-arm/cpu.h | 218 ++++++++++++++++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 190 insertions(+), 28 deletions(-)
>
> Having got through the rest of the series I'm coming back
> to this one, which I skipped because it needed checking
> against the ARM ARM.
>
> I definitely don't like the use of tables here -- it is in almost
> all of the cases simply repeating the routing calculations.
> All you actually need here is:
>
>     if (target_el < current_el) {
>         /* Don't take exception */
>         return false;
>     }
>     if (target_el > current_el && target_el != 1) {
>         if (target_el == 3 && !arm_el_is_aa64(env, 3)) {
>             /* In a 32 bit EL3 there are some awkward special cases where we
>              * must honour the PSTATE/CPSR mask bits even when taking the
>              * exception to EL3.
>              */
>             if (arm_is_secure(env)) {
>                 goto honour_masking;
>             }
>             /* (We know at this point that SCR.FIQ/IRQ must be set.) */
>             if (excp_idx == EXCP_FIQ && HCR.FMO == 0 && SCR.FW == 1) {
>                 goto honour_masking;
>             }
>             /* If we supported SError then the async external abort routing
>              * would have a similar case for SCR.AW here. There is no SCR.IW
>              * for IRQs.
>              */
>             if (excp_idx == EXCP_IRQ && HCR.IMO == 0) {
>                 goto honour_masking;
>             }
>         }
>         /* Take the exception unconditionally. */
>         return true;
>     }
> honour_masking:
>     /* If we get here then we check the PSTATE flags. */
>
> (I think the 'gotos' here are clearer than turning the if statement
> inside out to avoid them...)
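
For comparison, here is a compilable model of that goto-based structure, with
the excp_idx/HCR.FMO/SCR.FW/HCR.IMO tests collapsed into booleans passed in by
the caller (the register plumbing isn't the point here).  The conditional at
the top of this mail is essentially the same check turned inside out:

#include <stdbool.h>

bool excp_unmasked_model(unsigned int current_el, unsigned int target_el,
                         bool el3_is_aa64, bool secure,
                         bool fiq_special_case, bool irq_special_case,
                         bool pstate_unmasked)
{
    if (target_el < current_el) {
        /* Don't take the exception. */
        return false;
    }
    if (target_el > current_el && target_el != 1) {
        if (target_el == 3 && !el3_is_aa64) {
            /* Awkward 32-bit EL3 cases: honour the PSTATE/CPSR mask bits. */
            if (secure || fiq_special_case || irq_special_case) {
                goto honour_masking;
            }
        }
        /* Take the exception unconditionally. */
        return true;
    }
honour_masking:
    /* Fall back to the PSTATE/CPSR mask bits. */
    return pstate_unmasked;
}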
>
> > -    unsigned int target_el = arm_excp_target_el(cs, excp_idx);
> > -    /* FIXME: Use actual secure state.  */
> > -    bool secure = false;
> > -    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
> > -    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
> > -    /* ARMv7-M interrupt return works by loading a magic value
> > -     * into the PC.  On real hardware the load causes the
> > -     * return to occur.  The qemu implementation performs the
> > -     * jump normally, then does the exception return when the
> > -     * CPU tries to execute code at the magic address.
> > -     * This will cause the magic PC value to be pushed to
> > -     * the stack if an interrupt occurred at the wrong time.
> > -     * We avoid this by disabling interrupts when
> > -     * pc contains a magic address.
>
> You'll probably find it easier to base on top of the patches
> I sent out that split out the M profile code from this
> function (I'm planning to put those into target-arm.next soon
> so they get in before hardfreeze).
>
> thanks
> -- PMM
>
