All,

On 4/5/21 16:20, Christopher Schultz wrote:
Peter,

On 4/5/21 12:35, Peter Kreuser wrote:
All,

On 05.04.2021 at 14:38, Christopher Schultz <ch...@christopherschultz.net> wrote:

André,

On 4/4/21 06:23, André Warnier (tomcat/perl) wrote:
Hi.
I have a question which may be totally off-topic for this list, but it has been puzzling me for a while, and I figure that someone here may be able to provide some clue as to the answer, or at least some interesting points of view. In various places (including on this list), I have seen multiple occurrences of a certain way of writing a test, namely:
  if (null == request.getCharacterEncoding()) {
as opposed to
  if (request.getCharacterEncoding() == null) {
Granted, the two are equivalent in the end.
But it would seem to me, maybe naively, that the second form better corresponds to some "semantic logic", by which one wants to know whether a certain a-priori unknown piece of data (here, the value obtained by retrieving the character encoding of the current request) is defined (not null) or not (null).

Said another way: we don't want to know if "null" is equal to anything; we want to know if request.getCharacterEncoding() is null or not. Or in yet another way: the focus (or the "subject" of the test) here is "request.getCharacterEncoding()" (which we don't know), and not "null" (which we know already).

Or, more literarily, given that the syntax of most (all?) programming languages is based on English (if, then, else, new, for, while, until, exit, continue, etc.), we (*) do normally ask "is your coffee cold?" and not "is cold your coffee?".

On the other hand, in English, coffee which is not hot is called "cold coffee" but in e.g. Spanish, it's "coffee cold".

So why do (some) people write it the other way ?

I personally put the null first because of my background in C. C compilers (especially older ones) would happily compile this code without batting an eyelash:

  char *s;

  s = call_some_function();

  if(s = NULL) {
    // do some stuff
  }

Guess what? "Do some stuff" is always executed, and s is always null.

If you switch the operands, the compiler will refuse to compile it, because you can't assign a value to NULL:

  if(NULL = s) {
    // Compiler will refuse to compile
  }


Isn't it true that a single differing bit would already make the comparison false - so the result would not have to be tested completely?

I'm not sure what you mean, here.

This isn't an issue in Java: conditional predicates (the stuff inside the "if" statement) must be boolean expressions. C and C++ will both happily treat a number as what programmers typically consider to be a boolean (remember: C had no built-in boolean type before C99's _Bool), and use the truthiness of the number to decide what to do.

In C, NULL (the constant) is typically defined to be (void*)0, and (surprise!) the only truthy numeric value in C is 0. So,

  if(s = NULL) {
    // Stuff
  }

does two things:

1. Assigns the value of 0 to s (nulling-out any pointer you had)
and
2. Executes the body of the conditional, since 0 is considered true

Chuck didn't have the heart to publicly point out that this is 100% wrong, but it is. He guessed correctly that I was remembering that a 0 return value from many functions means "all is well" or similar.

Actually, I was remembering that strcmp returns 0 when the strings are equal, and so you need to logically-invert that value when checking to see if two strings are equal:

  if(!strcmp("foo", "bar")) {
    // reads like a not-equal test, but this body actually
    // runs only when the two strings ARE equal
    // (so with "foo" and "bar" it never runs at all)
  }

In C, this can be disastrous for a few reasons, not the least of which is the simple (lack of) correctness of the behavior relative to the programmer's likely intent: nulling a pointer can lead to memory leaks.

So, let's re-do that example again, shall we?

   if(s = NULL) {
     // Stuff
   }

This will null-out your pointer and *not* execute the stuff you should do when your pointer is NULL.

It gets more fun when you do something like this:

   if(s = NULL) {
     // Stuff
   } else {
     free(s); // no boom here, oddly: s is already NULL and
              // free(NULL) is a no-op - but the memory you
              // meant to free has leaked
   }

:(

Anyway, the whole point is that I tend to lead with rvalues, as they are not assignable and therefore trigger compiler errors for simple typos that are syntactically valid in C. This is much less of an issue in Java, and one of the reasons it's a "safer" language than C.

-chris

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
