On 30/04/2019 18:00, Richard Henderson wrote:
> On 4/28/19 7:38 AM, Mark Cave-Ayland wrote:
>>  #define VSX_MADD(op, nels, tp, fld, maddflgs, afrm, sfprf, r2sp)        \
>>  void helper_##op(CPUPPCState *env, uint32_t opcode,                     \
>> -                 ppc_vsr_t *xt, ppc_vsr_t *xa, ppc_vsr_t *xb)           \
>> +                 ppc_vsr_t *xt, ppc_vsr_t *xa,                          \
>> +                 ppc_vsr_t *b, ppc_vsr_t *c)                            \
>>  {                                                                       \
>> -    ppc_vsr_t *b, *c;                                                   \
>>      int i;                                                              \
>>                                                                          \
>> -    if (afrm) { /* AxB + T */                                           \
>> -        b = xb;                                                         \
>> -        c = xt;                                                         \
>> -    } else { /* AxT + B */                                              \
>> -        b = xt;                                                         \
>> -        c = xb;                                                         \
>> -    }                                                                   \
>
> The afrm argument is no longer used.
> This also means that e.g.
>
>   VSX_MADD(xsmaddadp, 1, float64, VsrD(0), MADD_FLGS, 1, 1, 0)
>   VSX_MADD(xsmaddmdp, 1, float64, VsrD(0), MADD_FLGS, 0, 1, 0)
>
> are redundant. Similarly with all of the other pairs.
Agreed. What do you think is the best solution here - maybe a double macro
that looks something like this?

#define VSX_MADD(op, prec, nels, tp, fld, maddflgs, sfprf, r2sp)            \
    _VSX_MADD(op##a##prec, nels, tp, fld, maddflgs, sfprf, r2sp)            \
    _VSX_MADD(op##m##prec, nels, tp, fld, maddflgs, sfprf, r2sp)

VSX_MADD(xsmadd, dp, 1, float64, VsrD(0), MADD_FLGS, 1, 0)

ATB,

Mark.