On Wed, Jun 04, 2014 at 11:34:14PM +0200, Stephane Eranian wrote:
> +
> +	/*
> +	 * Modify static constraint with current dynamic
> +	 * state of thread
> +	 *
> +	 * EXCLUSIVE: sibling counter measuring exclusive event
> +	 * SHARED   : sibling counter measuring non-exclusive event
> +	 * UNUSED   : sibling counter unused
> +	 */
> +	for_each_set_bit(i, cx->idxmsk, X86_PMC_IDX_MAX) {
> +		/*
> +		 * exclusive event in sibling counter
> +		 * our corresponding counter cannot be used
> +		 * regardless of our event
> +		 */
> +		if (xl->state[i] == INTEL_EXCL_EXCLUSIVE)
> +			__clear_bit(i, cx->idxmsk);
> +		/*
> +		 * if measuring an exclusive event, sibling
> +		 * measuring non-exclusive, then counter cannot
> +		 * be used
> +		 */
> +		if (is_excl && xl->state[i] == INTEL_EXCL_SHARED)
> +			__clear_bit(i, cx->idxmsk);
> +	}
> +
> +	/*
> +	 * recompute actual bit weight for scheduling algorithm
> +	 */
> +	cx->weight = hweight64(cx->idxmsk64);
So I think we talked about this a bit; what happens if CPU0 (taking your 4-core HSW-client example) is first to program its counters and takes all 4 in exclusive mode? Then there are none left for CPU4. Did I miss where we avoid that problem, or is that an actual issue?
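To make the concern concrete, here is a small standalone sketch of the filtering rule from the quoted hunk (the names and the userspace setup are mine, not the patch's): once the sibling thread holds all four counters in exclusive mode, the constraint mask for the other thread collapses to zero, which is the starvation case I'm asking about.

/*
 * Standalone illustration, not kernel code.  The enum values and the
 * filter_idxmsk() helper are simplified stand-ins for xl->state[] and
 * the loop in the quoted hunk; they are assumptions for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define NUM_COUNTERS 4

enum excl_state { UNUSED, SHARED, EXCLUSIVE };

/* Apply the same two clearing rules as the quoted loop. */
static uint64_t filter_idxmsk(uint64_t idxmsk,
			      const enum excl_state *sibling,
			      int is_excl)
{
	int i;

	for (i = 0; i < NUM_COUNTERS; i++) {
		if (!(idxmsk & (1ULL << i)))
			continue;
		/* sibling holds the counter exclusively: never usable */
		if (sibling[i] == EXCLUSIVE)
			idxmsk &= ~(1ULL << i);
		/* we need exclusivity but sibling is sharing: not usable */
		if (is_excl && sibling[i] == SHARED)
			idxmsk &= ~(1ULL << i);
	}
	return idxmsk;
}

int main(void)
{
	/* CPU0 got in first and took all four counters exclusively. */
	enum excl_state cpu0_state[NUM_COUNTERS] = {
		EXCLUSIVE, EXCLUSIVE, EXCLUSIVE, EXCLUSIVE
	};
	/* CPU4 (the HT sibling) now tries to schedule any event. */
	uint64_t idxmsk = (1ULL << NUM_COUNTERS) - 1;	/* 0xf */

	idxmsk = filter_idxmsk(idxmsk, cpu0_state, 0);
	printf("CPU4 usable counter mask: 0x%llx\n",
	       (unsigned long long)idxmsk);	/* prints 0x0 */
	return 0;
}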