On Thu, May 1, 2025, at 7:47 AM, Juris Evertovskis wrote:
> On 2025-04-29 17:29, Matthew Weier O'Phinney wrote:
>
>>  
>> * Exceptions should not be used for normal application logic flow. If the 
>> "error" is recoverable and/or expected, use a different mechanism so you can 
>> use standard conditional branching.
>>  
>> As such, there are a lot of situations where I may not want to use 
>> exceptions. Two common ones:
>>  
>> * Input validation. In most cases, _invalid input is expected_, and a 
>> condition you will handle in your code. Exceptions are a really poor 
>> mechanism for this.
>> * "Not found" conditions, such as not finding a matching row in a database 
>> or a cache. Again, this is expected, and something you should handle via 
>> conditionals.
> I don't want to make this into a quarrel, please consider this to be a 
> genuine question — I'm trying to understand the viewpoint behind the 
> need for such "failed result" channel.
>
> I'm considering this scenario: An update request comes into a 
> controller and passes a superficial validation of field types. The 
> 'troller invokes an action which in turn invokes a service or whatever 
> the chain is. Somewhere along the call stack once all the data is 
> loaded we realize that the request was invalid all along, e.g. the 
> status can't be changed to X because that's not applicable for objects 
> of kind B that have previously been in status Z.
>
> In such situations I have found (according to my experience) the 
> following solution to be a good, robust and maintainable pattern:
>
> Once I find the request was invalid, I throw a ValidationException. No 
> matter how deep in the stack I am. No matter that the callers don't 
> know I might've thrown that. The exception will be caught and handled 
> by some boundary layer (controller, middleware, error handler or 
> whatever), formatted properly and returned to the user in a 
> request-appropriate form.
>
> I currently have no urge to return an indication of invalidity manually 
> and pass it up the call stack layer by layer. Should I want that? In my 
> experience such patterns (requiring each layer to do an `if` for the 
> possible issue and return up the stack instead of continuing the 
> execution) get very clumsy for complex actions. Or have I misunderstood 
> the use case that you had in mind?
>
> BR,
> Juris

The key distinction is here:

> Somewhere along the call stack once all the data is 
> loaded we realize that the request was invalid all along

combined with:

> No matter that the callers don't 
> know I might've thrown that.

Addressing the second part first: unchecked exceptions mean I have no idea at 
all whether an exception is going to get thrown 30 calls down the stack from me.  
Literally any line in my function that calls anything could be the last.  Is my 
code ready for that?  Can it handle that?  Or do I need to put a try-finally 
inside every function just in case?

Admittedly, in a garbage-collected language that concern is vastly reduced, to 
the point that most people don't think about it.  But that's not because it has 
gone away entirely.  Are you writing a file and need to write a final line to it 
before closing the handle?  Are you in the middle of a DB transaction that isn't 
using a closure wrapper to commit or roll back automatically (which PDO does not 
provide natively)?

Technically, if you're writing this code:

$pdo->beginTransaction();

foreach ($something as $val) {
  $write = transform($val);
  // Hypothetical insert; any statement that writes to the DB will do.
  $stmt = $pdo->prepare('INSERT INTO items (value) VALUES (?)');
  $stmt->execute([$write]);
}

$pdo->commit();

That's unsafe because transform() might throw, and if it does, the commit() 
line is never reached.  So you really have to put it in a try-catch-finally.  
(Usually people push that off to a wrapping closure these days, but that 
introduces extra friction and you have to know to do it.)
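For instance, the safe version looks roughly like this (the table and column 
names are placeholders, and inTransaction() is a hypothetical user-land helper, 
not something PDO ships):

$pdo->beginTransaction();
try {
  foreach ($something as $val) {
    $write = transform($val);
    $stmt = $pdo->prepare('INSERT INTO items (value) VALUES (?)');
    $stmt->execute([$write]);
  }
  $pdo->commit();
} catch (\Throwable $e) {
  // Undo the partial work, then let the exception keep unwinding.
  $pdo->rollBack();
  throw $e;
}

// The wrapping-closure variant hides that boilerplate in a helper...
function inTransaction(PDO $pdo, callable $fn): void {
  $pdo->beginTransaction();
  try {
    $fn($pdo);
    $pdo->commit();
  } catch (\Throwable $e) {
    $pdo->rollBack();
    throw $e;
  }
}

// ...which you then have to know to reach for:
inTransaction($pdo, function (PDO $pdo) use ($something) {
  foreach ($something as $val) {
    $stmt = $pdo->prepare('INSERT INTO items (value) VALUES (?)');
    $stmt->execute([transform($val)]);
  }
});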

Or similarly, 

$fp = fopen('out.csv', 'w');
fwrite($fp, "Header here\n");
foreach ($input as $data) {
  $line = transform($data);
  fputcsv($fp, $line);
}
fwrite($fp, "Footer here\n");
fclose($fp);

If transform() throws on the 4th entry, you now have an incomplete file written 
to disk.  And *literally any function you could conceive of* could do this to 
you.  The contract of every function is implicitly "and I might also unwind the 
call stack 30 levels at any time and crash the program, cool?"
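As a sketch, defending against that looks something like the following.  
(Deleting the partial file on failure is just one possible policy; the point is 
that you have to pick one and write it out.)

$fp = fopen('out.csv', 'w');
try {
  fwrite($fp, "Header here\n");
  foreach ($input as $data) {
    $line = transform($data);
    fputcsv($fp, $line);
  }
  fwrite($fp, "Footer here\n");
} catch (\Throwable $e) {
  // Don't leave a half-written file behind.
  fclose($fp);
  unlink('out.csv');
  throw $e;
}
fclose($fp);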

You are correct that unchecked exceptions let you throw from way down in the 
stack if something goes wrong later.  Which brings us back to the first point: 
if you find yourself needing that, it's a code smell telling you that you should 
be validating your data sooner.  That does naturally lead to a different 
architectural approach.  Error return channels are an "in the small" feature: 
they're intended to make the contract between one function and another more 
robust.  One can build a system-wide error pattern out of them, but they're 
fundamentally an in-the-small feature.

So in the example you list, I would offer:

1. Do more than "superficial validation" at the higher level.  Validate field 
types *and* that a user is authorized to write this value (for example).
2. As discussed, we do need some trivially easy way to explicitly pass an error 
value back to our caller.  That should be easy enough that it's not a burden to 
do, but still explicit enough that both the person reading and the person 
writing the code know that an error is being propagated.  This still requires 
research to determine what would work best for PHP.
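To give a very rough illustration of the kind of in-the-small contract I mean 
(the ValidationError class, Order, canTransitionTo(), and the rest are made up 
for this sketch; this is not a syntax proposal):

// Hypothetical user-land types, for illustration only.
final class ValidationError {
  public function __construct(public readonly string $message) {}
}

function changeStatus(Order $order, string $newStatus): Order|ValidationError {
  if (!$order->canTransitionTo($newStatus)) {
    return new ValidationError("Cannot move to $newStatus from {$order->status}");
  }
  // ... persist the change ...
  return $order;
}

// The caller has to acknowledge the error channel explicitly,
// right where the call happens.
$result = changeStatus($order, 'X');
if ($result instanceof ValidationError) {
  // Map it to a 422 response, or hand it up the stack, but deliberately.
}

The important property is that the error path is visible in the function's 
signature and at the call site, rather than being an invisible side channel.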

Wouldn't the heavy use of error return channels create additional friction in 
some places and cause us to shift how we write code?  Yes, grasshopper, that is 
the point. :-)  The language should help us write robust, error-proof code, and 
its affordances and frictions should naturally nudge us in that direction.  Just 
as explicit typing made certain patterns less comfortable, we moved away from 
them, and we are better for it.

--Larry Garfield
