Hi all,

I'm working on a pretty big Go project. It consists of several hundred 
packages, most of which have tests. When we run the tests, the go test 
tool does the following:

* for each package with test files, it extends that package with the 
package's *_test files and creates two extra packages (pxtest and pmain); 
details are in the code 
<https://github.com/golang/go/blob/374546d800124e9ab4d51b75e335a71f866f3ef8/src/cmd/go/internal/load/test.go#L42-L55> 
(see the sketch after this list)
* compiles those packages and their dependencies
* links them into a per-package test binary
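
For context, the split between in-package tests and external (xtest) tests 
looks roughly like this; the file layout and the example.com/foo import path 
are made up for illustration:

    // foo/foo.go
    package foo

    func Add(a, b int) int { return a + b }

    // foo/foo_test.go -- same package name, so it simply extends "foo"
    // for the test build.
    package foo

    import "testing"

    func TestAddInternal(t *testing.T) {
        if Add(1, 2) != 3 {
            t.Fatal("unexpected sum")
        }
    }

    // foo/foo_x_test.go -- package foo_test, compiled as the separate
    // pxtest package.
    package foo_test

    import (
        "testing"

        "example.com/foo"
    )

    func TestAddExternal(t *testing.T) {
        if foo.Add(1, 2) != 3 {
            t.Fatal("unexpected sum")
        }
    }

On top of these, go test generates the pmain package (_testmain.go) with the 
actual main function, and everything is linked into one test binary per 
package.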

The last stage (linking) is very CPU- and memory-intensive for our project, 
because all the tests have a lot of dependencies. We have a pipeline that 
builds the tests in parallel, and we've applied all the speed optimizations 
we know of (the -w and -s linker flags, etc.).
As an experiment, we even forked the go test tool to add the ability to 
build tests from different packages into one binary (yes, there are some 
caveats, like circular dependencies between xtest packages and clashes over 
global state such as flags, but this is manageable). So now we can build the 
tests into one binary (with limitations), and instead of N link operations 
(where N is the number of packages with tests) we have only one.
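
To make the flag caveat concrete: test files in different packages may 
register the same flag name on the default flag.CommandLine set, which is 
fine per test binary but blows up once everything is linked together. A 
minimal runnable sketch (the pkga/pkgb names are made up):

    package main

    import "flag"

    // Imagine these two registrations living in two different packages'
    // test files; each per-package test binary registers "config" once
    // and works fine.
    var fromPkgA = flag.String("config", "", "config path used by pkga's tests")

    // Once both packages are linked into one test binary, the second
    // registration on the default flag.CommandLine panics at init time
    // with "flag redefined: config".
    var fromPkgB = flag.String("config", "", "config path used by pkgb's tests")

    func main() {
        flag.Parse()
    }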

But this approach is a little hacky, so the question is: are there any other 
ways to optimize the link time of tests? Maybe it's possible to build the 
whole project as a shared library and link the tests dynamically? Or some 
other way?
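
To frame the shared library idea, I guess the relevant flags would be 
-buildmode=shared and -linkshared (ELF/Linux only, as far as I know), i.e. 
something like the following, though I haven't verified how this behaves for 
test binaries:

    go install -buildmode=shared std
    go install -buildmode=shared -linkshared ./...
    go test -linkshared ./...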

Thanks a lot! 
