http://gcc.gnu.org/bugzilla/show_bug.cgi?id=54487
--- Comment #8 from Teresa Johnson <tejohnson at google dot com> 2012-09-06 18:58:55 UTC ---

I think I have a solution for the issue that H.J. is encountering. Details below.

Markus and H.J., would you be able to try the following patch to see whether it addresses the failure you were seeing? Markus, were you only seeing failures when using a parallel make?

Index: libgcc/libgcov.c
===================================================================
--- libgcc/libgcov.c    (revision 191035)
+++ libgcc/libgcov.c    (working copy)
@@ -707,7 +707,9 @@ gcov_exit (void)
               memcpy (cs_all, cs_prg, sizeof (*cs_all));
             else if (!all_prg.checksum
                      && (!GCOV_LOCKED || cs_all->runs == cs_prg->runs)
-                     && memcmp (cs_all, cs_prg, sizeof (*cs_all)))
+                     && memcmp (cs_all, cs_prg,
+                                sizeof (*cs_all) - (sizeof (gcov_bucket_type)
+                                                    * GCOV_HISTOGRAM_SIZE)))
               {
                 fprintf (stderr, "profiling:%s:Invocation mismatch - some data files may have been removed%s\n",
                          gi_filename, GCOV_LOCKED

After looking at the cp-demangle matching issue for a while, I finally realized that, in my case at least, it was a valid issue: the preprocessed cp-demangle source code did not match the existing cp-demangle.gcda file. I tracked that down to different includes being used due to a difference in the libiberty configure. The libiberty config.log showed that some of the configure checks, which were using the instrumented prev-gcc/xgcc, failed with errors like:

profiling:/home/tejohnson/extra/gcc_trunk_4_validate4/gcc/dwarf2out.gcda:Invocation mismatch - some data files may have been removed
configure:3427: $? = 0
configure: failed program was:
...

When profile merging happens, there are sanity checks to ensure that the merged summaries for all object files are the same. These checks fail in some cases due to small differences in the merged histograms, resulting in the message above. The total of the counter values in the histogram is the same, but there are slight differences in the cumulative counter values assigned to consecutive buckets. This can happen because of the way the cumulative counter values are apportioned out to the counters when merging: if the summaries are merged in different orders by the parallel runs, the integer division truncation may produce small differences. Ultimately these differences do not matter much, since the sum of all the counter values saved in the histograms is consistent and the differences are small and insignificant.

The best solution is to ignore the histogram when doing the sanity check and just compare the high-level summary info (sum_all, sum_max, run_max, etc.). That seems to address the issue I was having; at least, I haven't been able to reproduce it yet.
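To make the order-dependence concrete, here is a minimal toy sketch (not libgcov's actual merge algorithm; the bucket layout, the apportioning scheme, and all of the names in it are simplified assumptions). On every merge it redistributes each bucket's cumulative value evenly over that bucket's counters with integer division, pins the remainder to one counter so the grand total is preserved, and then re-buckets the approximated counters. Merging the same three run histograms in two different orders then produces slightly different per-bucket cumulative values with an identical total:

/* Toy model only -- NOT libgcov's merge code.  Demonstrates how
   integer-division apportioning plus re-bucketing makes the merged
   histogram depend on merge order while the total stays the same.  */
#include <stdio.h>

#define NBUCKETS 16

struct hist
{
  unsigned num[NBUCKETS];           /* number of counters per bucket */
  unsigned long long cum[NBUCKETS]; /* cumulative counter value per bucket */
};

/* Log2-style bucket index, loosely modeled on a log-scaled histogram.  */
static int
bucket_of (unsigned long long v)
{
  int b = 0;
  while (v > 1 && b < NBUCKETS - 1)
    {
      v >>= 1;
      b++;
    }
  return b;
}

static void
add_counter (struct hist *h, unsigned long long v)
{
  int b = bucket_of (v);
  h->num[b]++;
  h->cum[b] += v;
}

/* Fold SRC into DST, then re-apportion each bucket's cumulative value
   evenly over its counters using integer division (remainder pinned to
   the first counter so the total is preserved) and re-bucket the
   approximated counters.  The truncation plus re-bucketing is what
   makes the result depend on merge order.  */
static void
merge_into (struct hist *dst, const struct hist *src)
{
  struct hist out = { {0}, {0} };
  int b;
  for (b = 0; b < NBUCKETS; b++)
    {
      unsigned num = dst->num[b] + src->num[b];
      unsigned long long cum = dst->cum[b] + src->cum[b];
      unsigned long long avg, rem;
      unsigned i;
      if (!num)
        continue;
      avg = cum / num;
      rem = cum % num;
      for (i = 0; i < num; i++)
        add_counter (&out, avg + (i == 0 ? rem : 0));
    }
  *dst = out;
}

static void
print_hist (const char *tag, const struct hist *h)
{
  unsigned long long total = 0;
  int b;
  printf ("%s:", tag);
  for (b = 0; b < NBUCKETS; b++)
    if (h->num[b])
      {
        printf ("  [%d] num=%u cum=%llu", b, h->num[b], h->cum[b]);
        total += h->cum[b];
      }
  printf ("  (total=%llu)\n", total);
}

int
main (void)
{
  /* Arbitrary toy counter values for three runs.  */
  struct hist a = { {0}, {0} }, b = { {0}, {0} }, c = { {0}, {0} };
  struct hist order1, order2;

  add_counter (&a, 7); add_counter (&a, 6); add_counter (&a, 7);
  add_counter (&b, 1);
  add_counter (&c, 4);

  order1 = a;              /* merge B first, then C */
  merge_into (&order1, &b);
  merge_into (&order1, &c);

  order2 = a;              /* merge C first, then B */
  merge_into (&order2, &c);
  merge_into (&order2, &b);

  print_hist ("A+B then +C", &order1);
  print_hist ("A+C then +B", &order2);
  return 0;
}

With this toy data the first order ends up splitting run A's counters across buckets 2 and 3 while the second keeps them all in bucket 2, yet both orders report the same total. A byte-wise memcmp of the two histograms would therefore flag a difference even though the high-level summary values agree, which is the kind of harmless mismatch the patch above avoids by excluding the histogram from the comparison.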