On 01/11/2018 11:21 AM, Peter Maydell wrote:
> On 11 January 2018 at 19:10, Richard Henderson
> <richard.hender...@linaro.org> wrote:
>> On 01/11/2018 10:06 AM, Peter Maydell wrote:
>>> On 18 December 2017 at 17:45, Richard Henderson
>>> <richard.hender...@linaro.org> wrote:
>
>>>> +# Pattern examples:
>>>> +#
>>>> +#   addl_r   010000 ..... ..... .... 0000000 ..... @opr
>>>> +#   addl_i   010000 ..... ..... .... 0000000 ..... @opi
>>>> +#
>>>
>>> I think we should insist that a pattern defines all the
>>> bits (either as constant values or as fields that get
>>> passed to the decode function).  That will help prevent
>>> accidental under-decoding.
>>
>> Hmm.  What do you suggest then for bits that the cpu does not decode at all?
>> This doesn't happen with ARM (I don't think) but it does happen with HPPA,
>> and probably others.
>
> Arm does have undecoded bits (they're in brackets in encoding diagrams),
> but they're UNPREDICTABLE if you don't set them right, so ideally we
> check them all and UNDEF.  Our current aarch32 decoder doesn't always
> do this, and it's non-obvious when that happens.
>
>> I suppose I could either wrap it in a field that the translator ignores, or
>> choose another character besides ".", e.g.
>>
>>   mfia   000000 xxxxx 00000 xxx 10100101 t:5
>>
>> where bits [21-25] and bits [13-15] really are ignored by hardware.
>
> Yes, I'd like to see something so that if you want the translator
> to ignore a bit you have to explicitly mark it as to be ignored.
Ok.

> Something I noticed the doc comment doesn't mention: what are the
> semantics if the patterns you declare overlap?  Is this a purely
> declarative language where you have to make sure an insn can only
> match one pattern (or get an error, presumably), or is there an
> implicit "match starting from the top, so put looser patterns
> last" process?

It *should* error.  But I'm not sure that it does.
It's probably worth adding some unit tests for this...


r~
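
P.S. For concreteness, here is roughly what the "every bit must be
accounted for" rule could look like.  This is only an illustrative
Python sketch over a simplified subset of the syntax discussed above
(fixed bits, name:width fields, and 'x' as the proposed explicit-ignore
marker), not the decodetree implementation itself; the function name
and error wording are made up:

import re

FIELD = re.compile(r'^([a-zA-Z_][a-zA-Z0-9_]*):(\d+)$')

def parse_pattern(name, tokens, width=32):
    """Return (fixedmask, fixedvalue, fields) for one pattern line."""
    pos = width
    fixedmask = fixedvalue = 0
    fields = {}
    for t in tokens:
        m = FIELD.match(t)
        if m:
            # Named field, e.g. "t:5": bits passed to the translator.
            n = int(m.group(2))
            pos -= n
            fields[m.group(1)] = (pos, n)
        else:
            # A string of bit characters, msb first.
            for c in t:
                pos -= 1
                if c in '01':
                    fixedmask |= 1 << pos
                    fixedvalue |= (c == '1') << pos
                elif c == 'x':
                    pass        # explicitly ignored by hardware
                else:
                    # '.' or anything else: the bit is unaccounted for.
                    raise ValueError('%s: bit %d not decoded' % (name, pos))
    if pos != 0:
        raise ValueError('%s: %d bits described, expected %d'
                         % (name, width - pos, width))
    return fixedmask, fixedvalue, fields

# The mfia example above parses cleanly; replace an 'x' with '.' and
# it would be rejected as under-decoded.
parse_pattern('mfia', '000000 xxxxx 00000 xxx 10100101 t:5'.split())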
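
P.P.S. And a sketch of the overlap check, in unit-test form since
that's what's missing.  Again illustrative only: assuming each pattern
reduces to a (fixedmask, fixedvalue) pair as above, two patterns can
both match some instruction word exactly when their fixed bits agree
everywhere both patterns constrain the same bit.

import unittest

def patterns_overlap(a, b):
    """True if some instruction word matches both (mask, value) pairs."""
    (m1, v1), (m2, v2) = a, b
    common = m1 & m2                    # bits fixed by both patterns
    return (v1 & common) == (v2 & common)

class TestPatternOverlap(unittest.TestCase):
    def test_distinct_opcodes_disjoint(self):
        # 01xx vs 10xx: they disagree in a fixed bit, so no word
        # can match both.
        self.assertFalse(patterns_overlap((0b1100, 0b0100),
                                          (0b1100, 0b1000)))

    def test_looser_pattern_overlaps(self):
        # 1xxx vs 11xx: the looser pattern shadows the tighter one
        # (e.g. the word 0b1100 matches both), which should be
        # reported as an error rather than a silent first-match-wins.
        self.assertTrue(patterns_overlap((0b1000, 0b1000),
                                         (0b1100, 0b1100)))

if __name__ == '__main__':
    unittest.main()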