Testing style will tend to be shaped by both personal preference and the 
needs of the project.
Personally, I do a ton of data-driven tests, as required by my
data-intensive projects (custom databases, and analytics on them).

I often use a fork of goconvey for my testing. It promotes a BDD style
of documenting your tests, which is very helpful when revisiting them
months or years later. You state the intent of the test in a string
that is passed into the test framework. The typical
"Given... when... then..." style is useful.

https://github.com/smartystreets/goconvey
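
A minimal sketch of the style, using the upstream goconvey API; the
TestSum function and the summing code are just placeholders for
illustration:

import (
    "testing"

    . "github.com/smartystreets/goconvey/convey"
)

func TestSum(t *testing.T) {
    Convey("Given a slice of integers", t, func() {
        xs := []int{1, 2, 3}

        Convey("When they are summed", func() {
            total := 0
            for _, x := range xs {
                total += x
            }

            Convey("Then the total should be 6", func() {
                So(total, ShouldEqual, 6)
            })
        })
    })
}

The strings are what you see in the test output, so a failure reads
like a sentence describing the broken behavior.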

For tests of a database, I tend to prefer to write both my test output
and my expected output to files. This keeps the expected output
independent of the code. Since the output of the database may change
dramatically (a small code change can alter thousands of lines of
output), this makes it very easy to update the expected correct output
when making a change to the processing pipeline.

If the expected output were hardcoded in the test code, updating tests
when the correct output changes a lot would be excruciating. By using
files to store the thousands of lines of output, updating tests when
the expected output changes is very easy. Just copy the new output
over the expected file, and done. Standard command line tools like
diff, head, and tail make it easy to compare observed and expected
output of the test. I wrote a simple test function to compare the two
files on disk; I'll copy it below. Expected output files are version
controlled, just like the test code.

This also addresses a major pain point of doing the extensive testing
needed to develop working software. If you code the expected values
into your tests, they become a lot of work to update when the expected
values change. Have you ever broken 500 tests with one code change?
I certainly have. With the expected output on disk, it's all updated
with a short bash script.

Admittedly, this is perhaps an unusual approach. Developers unfamiliar
with it may have a knee-jerk reaction and call it outrageous. Yet it is
extremely effective and efficient for my projects.

Hopefully this example gives you a sense of why the 
best practice is very dependent on the
project at hand.

It is worth being familiar with table-driven tests as well.
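
A bare-bones example of that pattern, with a made-up Add function and
cases chosen only for illustration:

import "testing"

func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
    cases := []struct {
        name string
        a, b int
        want int
    }{
        {"zeros", 0, 0, 0},
        {"positives", 2, 3, 5},
        {"negatives", -2, -3, -5},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            if got := Add(tc.a, tc.b); got != tc.want {
                t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
            }
        })
    }
}

Each case is one row in the table, and adding coverage is just adding
a row.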

Here is the file-comparison helper I mentioned above:

import (
    "bytes"
    "fmt"
    "os/exec"
)

// CompareFiles runs diff on the two given files, and returns the length of
// the diff output along with the diff output itself. A length of 0 means
// the files match (ignoring whitespace changes, per -b).
func CompareFiles(expected string, observed string) (int, []byte) {
    cmd := exec.Command("/usr/bin/diff", "-b", observed, expected)
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        // diff exits with status 1 when the files differ; only treat
        // anything else (missing file, bad flags, ...) as a real error.
        if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 1 {
            fmt.Printf("CompareFiles(): error during '\ndiff %s %s\n': %s",
                observed, expected, err)
            return -1, nil // unknown, but not 0
        }
    }
    N := len(out.Bytes())
    return N, out.Bytes()
}
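
And here is roughly how I use it from a test. The testdata paths and
the runQuery helper below are made up for this sketch; in a real test
the observed file holds whatever the pipeline under test produces:

import (
    "os"
    "testing"
)

// runQuery stands in for whatever produces the output under test.
func runQuery() string { return "row1\nrow2\n" }

func TestQueryOutput(t *testing.T) {
    // The observed file is regenerated on every run; the expected
    // ("golden") file is checked into version control with the tests.
    observed := "testdata/query.observed.txt"
    expected := "testdata/query.expected.txt"

    if err := os.WriteFile(observed, []byte(runQuery()), 0o644); err != nil {
        t.Fatal(err)
    }

    n, diff := CompareFiles(expected, observed)
    if n != 0 {
        t.Errorf("observed output differs from expected (%d bytes of diff):\n%s",
            n, diff)
    }
}

When a code change legitimately alters the output, updating the test is
just copying query.observed.txt over query.expected.txt and committing.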

On Thursday, December 22, 2022 at 3:57:51 PM UTC-6 nishant...@keploy.io 
wrote:

> Hi everyone!
> I am new to Golang and currently trying to research about Golang and what 
> are the best practices for writing test-cases in Golang.
> While I have been able to find some resources online, I still have a doubt 
> that I hope you can help me with. My doubt is related to the best practices 
> for writing test-cases in Golang. Would love to know what practices fellow 
> Golang devs follows.
> Thank you in advance for your help.
>
