On Fri, Jun 28, 2019 at 09:36:14PM -0400, Derrick Stolee wrote:

> > Still, if it's not too painful to add them in time-wise, it probably
> > makes sense for the coverage tests to be as exhaustive as possible.
> 
> Unfortunately, running the t9*.sh tests even once (i.e. in only one of
> the two runs: first with default options, then again with several
> GIT_TEST_* options) pushes the build past the three-hour limit, and
> the builds time out.

Is that because you're running the tests sequentially, due to the
corruption of the gcov files?

I think something like this would work to get per-script profiles:

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 4b346467df..81841191d2 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -369,6 +369,9 @@ TZ=UTC
 export LANG LC_ALL PAGER TZ
 EDITOR=:
 
+GCOV_PREFIX=$TEST_RESULTS_BASE.gcov
+export GCOV_PREFIX
+
 # GIT_TEST_GETTEXT_POISON should not influence git commands executed
 # during initialization of test-lib and the test repo. Back it up,
 # unset and then restore after initialization is finished.


And then you can reassemble that with something like this (gcov-tool
comes with gcc):

  for i in t/test-results/t*.gcov; do
    echo >&2 "Merging $i..."
    gcov-tool merge -o . . "$i/$PWD"
  done
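To spell out why the merge takes "$i/$PWD": gcov prepends GCOV_PREFIX to
the absolute path of each .gcda file it writes (absent any
GCOV_PREFIX_STRIP), so every per-script tree ends up containing a full
mirror of the build tree. A quick sketch of the path arithmetic (the
build directory and script name here are made up):

```shell
# Hypothetical paths, for illustration only.
BUILD_DIR=/home/me/git
GCOV_PREFIX=$BUILD_DIR/t/test-results/t0001-init.gcov

# With GCOV_PREFIX set (and no GCOV_PREFIX_STRIP), counters that would
# normally be written to $BUILD_DIR/builtin/init-db.gcda instead land at:
echo "$GCOV_PREFIX$BUILD_DIR/builtin/init-db.gcda"
```

which prints the prefix followed by the full original path, i.e. exactly
the "$i/$PWD" directory the loop hands to gcov-tool.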

The merge is pretty slow, though (and necessarily serial). I wonder if
you'd do better to dump gcov output from each directory and then collate
it as text. I've heard lcov also has better support for handling
multiple runs like this.
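If lcov does turn out to be the better route, a rough, untested sketch
of the per-script capture-and-combine (assuming lcov is installed, and
reusing the same per-script trees as above) might be:

```shell
# Rough sketch, untested: capture one tracefile per test-script tree...
for i in t/test-results/t*.gcov; do
  lcov --capture --directory "$i/$PWD" --output-file "$i.info"
done

# ...then assemble one --add-tracefile option per capture and do a
# single combine pass.
args=
for f in t/test-results/t*.gcov.info; do
  args="$args --add-tracefile $f"
done
lcov $args --output-file merged.info
```

The combine step should parallelize better than gcov-tool, too, since
the captures are independent and only the final merge is serial.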

-Peff
