On Thu, Oct 09, 2014 at 06:34:40PM +0200, Stephane Eranian wrote:
> From: Maria Dimakopoulou <maria.n.dimakopou...@gmail.com>

SNIP

> +static struct event_constraint *
> +intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
> +			    int idx, struct event_constraint *c)
> +{
> +	struct event_constraint *cx;
> +	struct intel_excl_cntrs *excl_cntrs = cpuc->excl_cntrs;
> +	struct intel_excl_states *xl, *xlo;
> +	int is_excl, i;

SNIP

> +	/*
> +	 * Modify static constraint with current dynamic
> +	 * state of thread
> +	 *
> +	 * EXCLUSIVE: sibling counter measuring exclusive event
> +	 * SHARED   : sibling counter measuring non-exclusive event
> +	 * UNUSED   : sibling counter unused
> +	 */
> +	for_each_set_bit(i, cx->idxmsk, X86_PMC_IDX_MAX) {
> +		/*
> +		 * exclusive event in sibling counter
> +		 * our corresponding counter cannot be used
> +		 * regardless of our event
> +		 */
> +		if (xl->state[i] == INTEL_EXCL_EXCLUSIVE)
> +			__clear_bit(i, cx->idxmsk);

if we want to check the sibling counter, shouldn't we check xlo->state[i] instead? Like:

	if (xlo->state[i] == INTEL_EXCL_EXCLUSIVE)
		__clear_bit(i, cx->idxmsk);

and also in the condition below?

> +		/*
> +		 * if measuring an exclusive event, sibling
> +		 * measuring non-exclusive, then counter cannot
> +		 * be used
> +		 */
> +		if (is_excl && xl->state[i] == INTEL_EXCL_SHARED)
> +			__clear_bit(i, cx->idxmsk);
> +	}
> +
> +	/*
> +	 * recompute actual bit weight for scheduling algorithm
> +	 */
> +	cx->weight = hweight64(cx->idxmsk64);
> +
> +	/*
> +	 * if we return an empty mask, then switch
> +	 * back to static empty constraint to avoid
> +	 * the cost of freeing later on
> +	 */
> +	if (cx->weight == 0)
> +		cx = &emptyconstraint;
> +

SNIP

jirka