On Mon, Apr 24, 2017, 21:31 Sam Whited <s...@samwhited.com> wrote:

> On Mon, Apr 24, 2017 at 6:31 PM, Dan Kortschak
> <dan.kortsc...@adelaide.edu.au> wrote:
> > We (gonum) would extend the security exception to include scientific
> > code; there are far too many peer reviewed works that depend on code
> > that will blithely continue after an error condition that should stop
> > execution or log failure.
>
> Also a great example! The main take away here is that we should always
> design for failure, and sometimes the primary failure mode should be
> "abort at all costs and let the application developer know that
> something catastrophic happened which could lead to worse things
> happening in the future".
>
> —Sam
>

In this example we're considering panic as a mechanism for preventing
otherwise avoidable code bugs. What happens when the same code begins
silencing panics and continuing on? Do we add a new level of panic that
overcomes the normal recovery mechanism? The fundamental assertion made
by panic advocates is that you know better than I do when my program
should end, and that you want some mechanism to enforce that opinion on
me.

I'll argue that sticking to idiomatic errors returned by function
calls, combined with static analysis tools like errcheck, is sufficient
to cover every scenario where panic might otherwise be used to signal
an error state. If you want to use panic internally within an API,
that's completely acceptable so long as the panic never escapes the API
boundary. To quote the Go blog on the subject:

    "The convention in the Go libraries is that even when a package uses
    panic internally, its external API still presents explicit error
    return values."
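
That convention can be sketched as follows (Parse and parseError are
hypothetical names, assuming the usual recover-at-the-boundary
pattern): the parser unwinds via panic internally, and the exported
function converts the panic back into an explicit error return.

```go
package main

import (
	"errors"
	"fmt"
)

// parseError is used internally to unwind the parser via panic.
type parseError struct{ err error }

// parseDigit panics on bad input; this panic never escapes Parse.
func parseDigit(b byte) int {
	if b < '0' || b > '9' {
		panic(parseError{errors.New("not a digit: " + string(b))})
	}
	return int(b - '0')
}

// Parse is the external API: it converts internal panics back into an
// explicit error return, so no panic crosses the package boundary.
func Parse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			pe, ok := r.(parseError)
			if !ok {
				panic(r) // not one of ours: re-panic
			}
			err = pe.err
		}
	}()
	for i := 0; i < len(s); i++ {
		n = n*10 + parseDigit(s[i])
	}
	return n, nil
}

func main() {
	n, err := Parse("42")
	fmt.Println(n, err)
	_, err = Parse("4x")
	fmt.Println(err)
}
```

Note the type assertion in the deferred function: recovering only the
package's own panic value and re-panicking on anything else keeps
genuine bugs loud instead of accidentally silencing them.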
