On 10/4/21 13:16, Richard Biener wrote:
> I meant in merge_one_data do not check ->stamp or ->checksum but instead rely
> on the counter merging code to detect mismatches (there's read_mismatch and
> read_error).  There are multiple things we can do when we run into those:
>
>   - when we did not actually merge any counter yet we could issue the
>     warning as before and drop the old data on the floor
>   - when we _did_ merge some counters already we could hard-error
>     (I suppose we can't roll back merging that took place already)
>   - we could do the merging in two stages: first see whether the data
>     matches and only if it did perform the merging
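The two-stage option could look roughly like the sketch below. This is a simplified model, not the actual gcov code: the struct and function names are invented for illustration (the real implementation works with gcov_type counters and struct gcov_ctr_info inside merge_one_data).

```c
#include <stddef.h>

/* Hypothetical simplified model of per-function counter data.
   These names are invented; real gcov uses different types. */
struct fn_counters {
    unsigned checksum;   /* configuration/CFG checksum of the function */
    size_t n;            /* number of counters */
    long long *vals;     /* counter values */
};

/* Stage 1: verify that every function's checksum and counter count match. */
static int merge_compatible(const struct fn_counters *a,
                            const struct fn_counters *b, size_t nfns)
{
    for (size_t i = 0; i < nfns; i++)
        if (a[i].checksum != b[i].checksum || a[i].n != b[i].n)
            return 0;
    return 1;
}

/* Stage 2: runs only if stage 1 succeeded, so a mismatch never
   leaves partially merged data and no roll-back is needed. */
static int merge_counters(struct fn_counters *dst,
                          const struct fn_counters *src, size_t nfns)
{
    if (!merge_compatible(dst, src, nfns))
        return -1;  /* mismatch: old data is left untouched */
    for (size_t i = 0; i < nfns; i++)
        for (size_t j = 0; j < dst[i].n; j++)
            dst[i].vals[j] += src[i].vals[j];
    return 0;
}
```

The point of the split is that the expensive part (detecting a mismatch) happens before any destination counter is modified.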

I get your point: you are basically suggesting fine-grained, per-function
merging. I don't like it much, as it's typically a mistake in the build setup
when two objects (with different checksums) want to emit a profile to the
same .gcda file.

My patch handles the obvious situation where an object file is built exactly
the same way (so not, e.g., once with -O0 and once with -O2).


> Note that all of the changes (including yours) have user-visible effects and
> the behavior is somewhat unobvious.  Not merging when the object was
> re-built is indeed the most obvious behavior so I'm not sure it's a good
> idea.  A new env variable to say whether to simply keep the _old_ data
> when merging in new data isn't possible would be another "fix" I guess?

Even for a situation where the checksum matches but the timestamp is
different? Sure, we can provide environment variables that tweak the behavior.
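Such a knob might be read along these lines. To be clear, GCOV_MERGE_POLICY is an invented name for illustration, not an existing gcov environment variable, and the policy values are only a sketch of what could be offered.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical policy switch for what to do when merging is not
   possible; the variable name and values are invented. */
enum merge_policy { MERGE_ERROR, MERGE_KEEP_OLD, MERGE_OVERWRITE };

static enum merge_policy get_merge_policy(void)
{
    const char *p = getenv("GCOV_MERGE_POLICY");
    if (p && !strcmp(p, "keep-old"))
        return MERGE_KEEP_OLD;   /* keep existing .gcda data on mismatch */
    if (p && !strcmp(p, "overwrite"))
        return MERGE_OVERWRITE;  /* drop old data, write the new profile */
    return MERGE_ERROR;          /* default: warn/error as today */
}
```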

Cheers,
Martin
