What you want to do is common, but it's application-specific enough that there aren't many generalized solutions. What I've always done is write a Go program that takes a while to run (I shoot for at least a minute) and exercises your BeefyFunc in whatever ways make sense to you. Then I set up a Jenkins job that runs it on every commit on fixed hardware and uploads the timings to some kind of metrics database like DataDog or Prometheus; both of those can trigger alerts on outliers. I realize this doesn't help much if you don't have DataDog, Prometheus or Jenkins.
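Roughly, the harness boils down to something like the sketch below. BeefyFunc here is just a stand-in for your real function, and the plain print at the end is where you'd hand the number to whatever DataDog/Prometheus client or CI scraper you actually use:

    // Standalone timing harness, run by CI on every commit on fixed hardware.
    package main

    import (
        "fmt"
        "testing"
    )

    // BeefyFunc is a placeholder for the hot function you want to track.
    func BeefyFunc() {
        // ... the real work ...
    }

    func main() {
        // testing.Benchmark runs the function enough times to get a stable
        // measurement, the same way `go test -bench` would.
        res := testing.Benchmark(func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                BeefyFunc()
            }
        })

        // Emit the number in a form your CI job can scrape and forward to
        // your metrics store (DataDog, a Prometheus pushgateway, etc.).
        fmt.Printf("beefyfunc_ns_per_op %d\n", res.NsPerOp())
    }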
--
Marcin

On Mon, Mar 15, 2021 at 10:45 AM Jeremy French <ibi...@gmail.com> wrote:

> I keep running into this solution to a particular problem, but when I go
> to search on how to do it, I find that not only are there seemingly no
> solutions out there in search-results-land, there doesn't seem to be
> anyone else even asking about it. This leads me to suspect that I'm going
> about it in completely the wrong mindset, and I'd appreciate anyone
> helping me see how everyone else is solving this problem.
>
> The issue is this: let's say I have some function - call it BeefyFunc() -
> that is heavily relied upon and executed many times per second in my
> high-traffic, high-availability critical application. And let's say that
> BeefyFunc() is very complex in the sense that it calls many other
> functions which call other functions, etc., and these called functions
> are spread out all over the code base and maintained by a broad and
> distributed development team (or even just one forgetful, overworked
> developer). If BeefyFunc() currently executes at 80 ns/op on my
> development machine, I want to make sure that no one makes a change to an
> underlying function which causes BeefyFunc to balloon up to 800 ns/op.
> And more to the point, if that ballooning does happen, I want someone to
> be notified, rather than having to rely on a QA person's ability to
> notice a problem while scanning the benchmark reports. I want that
> problem to be highlighted in the same way a test failure would be.
>
> So it seems like the logical solution would be to create a test that runs
> a benchmark and makes sure the benchmark results are within some
> acceptable range. I realize that benchmarks are going to differ from
> machine to machine, or based on CPU load, etc. But it seems like it would
> still be useful in a CI/CD situation or on my personal dev machine, where
> the machine hardware is stable and known and the CPU load can be
> reasonably predicted, so that at least a sufficiently wide range for
> benchmarks could be reasonably enforced and tested for.
>
> Am I being stupid? Or is this a solved problem and it's just my google-fu
> that's failing me?
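For what it's worth, the in-test version you describe can also be done with testing.Benchmark. Here's a rough sketch, assuming it only runs on known CI hardware, that BeefyFunc lives in the package under test, and using an arbitrary 200 ns/op ceiling picked to be much looser than your 80 ns/op baseline so normal machine noise doesn't fail the build:

    package myapp

    import "testing"

    // TestBeefyFuncBudget fails the build if BeefyFunc regresses past a
    // generous ns/op budget. Only meaningful on fixed, known hardware.
    func TestBeefyFuncBudget(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping benchmark guard in -short mode")
        }
        res := testing.Benchmark(func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                BeefyFunc()
            }
        })
        const maxNsPerOp = 200 // hypothetical budget for this CI machine
        if got := res.NsPerOp(); got > maxNsPerOp {
            t.Errorf("BeefyFunc took %d ns/op, budget is %d ns/op", got, maxNsPerOp)
        }
    }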