IF() Statement in Excel

Perl laughs in your general direction. It’s a language that added special duplicate low-precedence logical operators (and, or) so that short-circuiting can be used more effectively as a control structure. Like:
open FILE, "myfile.txt" or die "couldn't open file";

And frankly, C/C++ depends on the short circuiting also. For instance:
if ((ptr != NULL) && (ptr->next != NULL)) { // ...

That wouldn’t be a valid check if not for the short-circuiting. That said, one can overuse the effect.

Any reasonable optimising compiler will reduce any of these to the same code. There is no performance-gain argument, just a human-readability one.
I have mixed feelings about the short circuiting semantics. They feel awfully hacky, but I use them mercilessly.

Then again, code like

if (ptr && ptr->next) { // etc

deserves to die. Yet I have had colleagues for whom this is SOP, and they don’t get why I complain.

Are you talking about that specific case or more generally? I do think it’s useful to reduce null-pointer checks as much as possible, but sometimes there’s no way around it, and constructs like the above are both correct and idiomatic.

I wouldn’t use Perl-style short-circuit programming in C, though. Though I’m not quite sure I can pin down what the difference is. I guess maybe it’s about whether the right-most clause is exceptional or expected. In the “open or die” example, the open is expected to succeed and only rarely pass control to the die. But for “p && p->next”, presumably p is normally expected to be valid. I’m not sure I’ve fully thought through this, though.

My view is that if (p && p->whatever == blah blah) is terse to the point of being confusing.

I grant the idiom exists.

But it came up in the days of 80 column screens and 10KLoC projects where terseness was a clear unequivocal virtue. In the modern era of big screens, auto-pretty- and consistent-formatting and 1M LoC projects, terseness is greatly overrated IMO.

if (p != null && p->whatever == blah blah) is a more ‘honest’ and clear explanation of the concept.

They’re both depending on short-circuiting, though, which is the point of contention here (at least I think it is). You can’t legally evaluate the right-hand unless the left-hand is true (whether or not it’s expressed as “p != NULL”).

C#, FWIW, requires testing against “null”. It doesn’t have implicit object-reference-to-bool conversion.

Ágreed. I was beefing only about the idiom of if(p) standing in for if(p != null).

I never liked it when I was writing C and I was very glad when C# did not carry that (IMO) mistake forward.

New dictionary just dropped.

I won’t give up C/C++ until I get another language where I can tell “by sight” how it’s being compiled into machine code. Rust is getting close, but it’s not there yet. C# is nice for some types of app programming, but is unusable when I need things to go fast. If I can’t tell when an object will fit into a cache line, it’s not worth my time for performance code.
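
To make that concrete, the kind of thing I mean is being able to state the layout right in the source, roughly like this (a sketch only; the 64-byte line size, hot_node and hot_table are all invented for illustration):

#include <assert.h>
#include <stdint.h>

struct hot_node {
    uint64_t key;
    uint64_t value;
    struct hot_node *next;
    uint32_t flags;
    uint32_t pad;
};

/* Fails at compile time if the layout ever grows past one (assumed) cache line. */
static_assert(sizeof(struct hot_node) <= 64, "hot_node no longer fits a cache line");

/* And if it also needs to start on a line boundary: */
_Alignas(64) static struct hot_node hot_table[1024];

In C/C++ I can read that and know what the memory looks like; in most managed languages I can’t even ask the question.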

I’ve added the US international keyboard so I can properly and easily write things like El Niño and outré.

The problem is there are two keystroke sequences to turn that keyboard on: one that makes sense, which you need to poke quite deliberately, and one that I tap inadvertently and unnoticed two or three times per sentence.

Which results in random doubled single or double quotes, and wild runaway loose umlauts. I catch most of them. But leaks happen. :slight_smile:

Well, C, as I used to say, is a glorified assembler. C++ doesn’t get a pass from me. Not when you start adding code into the language that runs at compile time. C++ lies too often.

Even then, a good optimising compiler can do some pretty evil things. Depending upon your target ISA.

My maxim has always been to give the compiler all the information it needs to generate tight code. It can almost always do a better job than a human. Let it do the common sub-expression elimination, code hoisting, loop optimisation etc. Where you get problems is when you do indeed need to manage caches. But then you need to manage cache boundary alignments, and issues like array striding and cache associativity architecture (if you are doing numerical stuff) start to really bite. Modern compilers provide pragmas for a lot of this, so again, let the compiler do the work with full information about what you need.
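
A small example of what I mean by giving it the information (just a sketch; the function is made up): the restrict qualifiers promise the compiler that the arrays cannot alias, so it is free to vectorise and hoist loads without fearing that a store through y changes x.

#include <stddef.h>

/* Without restrict, the compiler must assume x and y might overlap and
   reload x[i] after every store to y[i]. With it, it can do the job properly. */
void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}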

The problem with idioms like 0 == false == null is that this isn’t actually true for every machine you might code for. It assumes that there is nothing you might actually need to access at address 0, or indeed anywhere in the entire first page of memory. Fine if you are on a machine with virtual memory and all the support that comes with that. Not so good on embedded systems.
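
On that sort of part it is perfectly normal to have real, addressable things sitting in the first page, e.g. (a sketch; the register name and address are invented):

#include <stdint.h>

/* A memory-mapped status register that happens to live in the first page
   of the address space (the 0x40 is made up for illustration; on plenty of
   small parts the vector table really does start at address 0 itself). */
#define STATUS_REG (*(volatile uint32_t *)0x00000040u)

int device_ready(void)
{
    return (STATUS_REG & 0x1u) != 0;
}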

Worse, I see code like if (x) { z = y/x; }, and that is just bad behaviour.
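
Spelled out (same fragment, just made explicit), the objection is that truthiness is doing double duty as a divide-by-zero guard:

/* the original: truthiness standing in for a zero check */
if (x) {
    z = y / x;
}

/* the honest version, which also makes the untaken case visible */
if (x != 0) {
    z = y / x;
} else {
    /* z is silently left alone; is that actually what was intended? */
}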

I don’t believe this is true. Certainly any address other than 0 in the first page is fine; you’re probably thinking of some systems that implement hardware detection of null dereferences by demapping the first page. (This only detects null derefs with an offset less than a page size, but in practice that catches almost everything.) An embedded system doesn’t need to do this; it can leave the first page mapped to valid memory at the cost of not detecting non-zero-offset null dereferences.

For address 0 itself, the C standard (at least the last time I looked at it) doesn’t require that the null pointer actually have the value 0, it’s just written as “0” in the source code. The compiler can compile that to any value known to be an invalid pointer.
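
Concretely, the usual illustration of that distinction (a sketch):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int *p = 0;              /* null pointer constant: p is null whatever
                                bit pattern this platform uses for null */
    int *q;
    memset(&q, 0, sizeof q); /* all-bits-zero: not guaranteed to be a null
                                pointer on a machine where null isn't 0 */

    if (p == NULL)
        puts("p is null by definition");
    /* (q == NULL) is only reliably true where the null representation
       happens to be all-bits-zero. */
    return 0;
}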

That is sort of my point. Maybe the compiler is smart enough to realise that if(ptr){} should compile to ptr != machine_specific_null, but I would not be betting my life on it. Sure, coding for some embedded system should require more care, and an awareness of the portability problems, but even simple embedded systems now boast significant capability, and the use of a plethora of libraries for add-on capability is common.

I’ve spent enough of my life writing C and C++ code that does minute bit twiddling of addresses (to do things like implement software virtual memory with interesting capabilities) to never want to trust the compiler to get this sort of thing right. There is far too much code out there that treats int and char* as the same thing. It shouldn’t be so, but it is. I won’t trust a compiler to get it right. If you have the need to code in C, you are almost by definition taking on the responsibility yourself. Otherwise you would be using a more advanced language.
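
When I do have to round-trip addresses through integers, the one concession I make to the standard is uintptr_t rather than int. For example, a tag-in-the-low-bits trick (a sketch; the function names and the two-bit scheme are just an example):

#include <stdint.h>

/* Stash a small tag in the low bits of an aligned pointer and recover it.
   uintptr_t is the only integer type with any standard claim to hold a
   pointer value; int emphatically is not. */
static inline void *tag_ptr(void *p, unsigned tag)
{
    return (void *)((uintptr_t)p | (uintptr_t)(tag & 0x3u));
}

static inline void *untag_ptr(void *p)
{
    return (void *)((uintptr_t)p & ~(uintptr_t)0x3u);
}

static inline unsigned ptr_tag(void *p)
{
    return (unsigned)((uintptr_t)p & 0x3u);
}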