From Russ Cox

Lumping both non-portable and buggy code into the same category was a mistake. As time has gone on, the way compilers treat undefined behavior has led to more and more unexpectedly broken programs, to the point where it is becoming difficult to tell whether any program will compile to the meaning in the original source. This post looks at a few examples and then tries to make some general observations. In particular, today’s C and C++ prioritize performance to the clear detriment of correctness.

I am not claiming that anything should change about C and C++. I just want people to recognize that the current versions of these sacrifice correctness for performance. To some extent, all languages do this: there is almost always a tradeoff between performance and slower, safer implementations. Go has data races in part for performance reasons: we could have done everything by message copying or with a single global lock instead, but the performance wins of shared memory were too large to pass up. For C and C++, though, it seems no performance win is too small to trade against correctness.

  • mo_ztt ✅@lemmy.world · 11 months ago

    I’m definitely open to the idea that C and C++ have problems, but the things listed in this article aren’t them. He lists some very weird behavior by the clang compiler, and then blames it on C despite the fact that in my mind they’re clearly misfeatures of clang. He talks about uncertainty of arithmetic overflow… unless I’ve missed something, every chip architecture that 99% of programmers will ever encounter uses two’s complement, so the undefined behavior he talks about is in practice defined.

    He says:

    But not all systems where C and C++ run have hardware memory protection. For example, I wrote my first C and C++ programs using Turbo C on an MS-DOS system. Reading or writing a null pointer did not cause any kind of fault: the program just touched the memory at location zero and kept running. The correctness of my code improved dramatically when I moved to a Unix system that made those programs crash at the moment of the mistake. Because the behavior is non-portable, though, dereferencing a null pointer is undefined behavior.

    This is the same thing. He’s taking something that’s been a non-issue in practice for decades and deciding it’s an issue again. Yes, programming in C has some huge and unnecessary difficulties on non-memory-protected systems. The next time I’m working on that MS-DOS project, I’ll be sure to do it in Python to avoid those difficulties. OH WAIT

    Etc etc. C++ actually has enough big flaws to fill an essay ten times this long about things that cause active pain to working programmers every day… but no, we’re unhappy that arithmetic overflow depends on the machine’s reliably-predictable behavior, instead of being written into the C standard regardless of the machine architecture. It just seems like a very weird and esoteric list of things to complain about.

    Edit: Actually, I thought about it, and I don’t think clang’s behavior is wrong in the examples he cites. Basically, you’re using an uninitialized variable, and choosing to use compiler settings which make that legal, and the compiler is saying “Okay, you didn’t give me a value for this variable, so I’m just going to pick one that’s convenient for me and do my optimizations according to the value I picked.” Is that the best thing for it to do? Maybe not; it certainly violates the principle of least surprise. But, it’s hard for me to say it’s the compiler’s fault that you constructed a program that does something surprising when uninitialized variables you’re using happen to have certain values.
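
    To make that concrete, here’s a rough sketch (not necessarily the article’s exact code, and the outcome depends on compiler version and flags) of the well-known pattern where “picking a convenient value” goes badly: a function pointer that is never assigned on any executed path.

    ```cpp
    #include <cstdlib>

    using Handler = void (*)();
    static Handler handler;   // never assigned before use; zero-initialized (null)

    static void format_disk() { std::system("echo pretend this formats a disk"); }

    // Never called, but it is the only place 'handler' is ever assigned.
    void never_called() { handler = format_disk; }

    int main() {
        // Calling a null function pointer is UB, so an optimizer may assume
        // 'handler' must have been assigned, and turn this into a direct
        // call to format_disk(). Clang has been observed to do exactly that
        // at -O2; behavior varies by compiler, version, and flags.
        handler();
    }
    ```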

    • Sonotsugipaa@lemmy.dbzer0.com · 11 months ago

      Of all the things the article could have used to make its point, it should have mentioned the issue of type punning through type aliasing (fancy words for "reinterpret_cast from uint32_t* to std::float32_t*"), which is something that can realistically lead to incredibly sneaky bugs with all popular compilers.
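
      To make the failure mode concrete, here’s a hedged sketch: reading a uint32_t object through a float lvalue breaks the strict-aliasing rule, so the optimizer may cache or reorder loads in surprising ways; copying the bytes with memcpy (or C++20’s std::bit_cast) is the well-defined way to pun the bits.

      ```cpp
      #include <cstdint>
      #include <cstring>

      // UB: accessing a uint32_t object through a float lvalue violates
      // strict aliasing, so the compiler may assume the two never alias.
      float pun_bad(std::uint32_t* u) {
          return *reinterpret_cast<float*>(u);
      }

      // Well-defined: copy the bytes instead. Compilers typically lower
      // this memcpy to a single register move anyway.
      float pun_ok(std::uint32_t u) {
          float f;
          std::memcpy(&f, &u, sizeof f);
          return f;
      }
      ```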

      • mo_ztt ✅@lemmy.world · 11 months ago

        I could talk for a long time about things I don’t like about C++. This type of stuff doesn’t even scratch the surface lol.

        Years and years ago I actually wrote up a pretty in-depth blog post talking about it, even going so far as to show that it’s not even faster than the competitors once you’ve added in all the overbloated garbage that it calls a standard library. I wrote up a little toy implementation of some problem in C, Python, C++, and a couple of other languages, and lo and behold the C one was faster by a mile, while the C++ one using all the easier C++ abstractions was pretty comparable with the others and actually slower than the Perl implementation.

    • metiulekm@sh.itjust.works · 11 months ago

      Edit: Actually, I thought about it, and I don’t think clang’s behavior is wrong in the examples he cites. Basically, you’re using an uninitialized variable, and choosing to use compiler settings which make that legal, and the compiler is saying “Okay, you didn’t give me a value for this variable, so I’m just going to pick one that’s convenient for me and do my optimizations according to the value I picked.” Is that the best thing for it to do? Maybe not; it certainly violates the principle of least surprise. But, it’s hard for me to say it’s the compiler’s fault that you constructed a program that does something surprising when uninitialized variables you’re using happen to have certain values.

      You got it correct in this edit. But the important part is that gcc will also do this, and both compilers are kinda expected to. The article cites some standards committee discussions: somebody proposed making signed integer overflow well-defined in C++20, and the committee decided against it. Likewise, around 13 years ago somebody proposed forbidding the optimization that removes infinite loops, and the committee decided it should be allowed. So these optimizations are clearly seen as features.
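
      A hedged sketch of the kind of optimization being kept (exact output depends on the compiler, version, and flags such as -fwrapv): since signed overflow is UB, the compiler may assume it never happens and simplify the arithmetic away.

      ```cpp
      #include <climits>
      #include <cstdio>

      // Because signed overflow is UB, an optimizer may fold x + 1 > x to
      // 'true' for every x, including INT_MAX. With -fwrapv (wrapping
      // overflow) it would have to return false for INT_MAX.
      bool plus_one_is_bigger(int x) {
          return x + 1 > x;
      }

      int main() {
          std::printf("%d\n", plus_one_is_bigger(INT_MAX));  // often 1 at -O2
      }
      ```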

      And these are not theoretical issues by any means; there has been, for instance, this vulnerability in the kernel: https://lwn.net/Articles/342330/, which happened because the compiler simply removed a null pointer check.
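
      The shape of that bug, roughly (a simplified sketch, not the actual kernel code, which the LWN article walks through): the pointer is dereferenced before it is checked, so the compiler is allowed to conclude the check is dead code and delete it.

      ```cpp
      struct device {
          int flags;
      };

      int read_flags(device* dev) {
          int flags = dev->flags;   // dereference first: UB if dev is null
          if (dev == nullptr)       // compiler may treat this as unreachable
              return -1;            // ...and remove the check entirely
          return flags;
      }
      ```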

      • mo_ztt ✅@lemmy.world · 11 months ago

        Right, exactly. If you’re using C in this day and age, that means you want to be one step above assembly language. Saying C should attempt to emulate one specific architecture – for operations as basic as signed integer add and subtract – when you’re on some other, weirder architecture is counter to the whole point. From the point of view of the standard, the behavior is “undefined,” but from the point of view of the programmer it’s very much defined: it means whatever those operations do in reality on my current architecture.

        That example of the NULL pointer use in the kernel was pretty fascinating. I’d say that’s another exact example of the same thing: Russ Cox apparently wants the behavior to be “defined” by the standard, but that’s just not how C works or should work. The behavior is defined; the behavior is whatever the processor does when you read memory from address 0. Trying to say it should be something else just means you want to use a language other than C – which again is fine, but for writing a kernel, I think you’re going to have a hard time arguing that the language needs to introduce an extra layer of semantics between the code author and the CPU.

        • qwertyasdef@programming.dev · 11 months ago

          The behavior is defined; the behavior is whatever the processor does when you read memory from address 0.

          If that were true, there would be no problem. Unfortunately, what actually happens is that compilers use the undefined behavior as an excuse to mangle your program, in the name of optimization, far beyond what mere variation in processor behavior could cause. In the kernel bug, the issue wasn’t that the null pointer dereference was undefined per se; the real issue was that the subsequent null check got optimized out because of the preceding undefined behavior.

          • mo_ztt ✅@lemmy.world · 11 months ago

            Well… I partially agree with you. The final step in the failure chain was the optimizer assuming that dereferencing NULL would have blown up the program, but (1) that honestly seems like a pretty defensible choice, since it’s accurate 99.999% of the time, and (2) it has nothing to do with the language design. It’s just an optimizer bug. It’s in the same category as C code that mucks around with its own stack, or single-threaded code that has to have stuff marked volatile because of crazy pointer interactions; you just run into complex problems sometimes when your language gets that close to machine code.

            I guess where I disagree is that I don’t think a NULL pointer dereference is undefined. In the spec, it is. In a running program, I think it’s fair to say it should dereference address 0. For example, I think it’s fine for an implementation of assert() to do that to abort the program, and I would be unhappy if a compiler maker said “well, the behavior’s undefined, so it’s okay if the program just keeps going even though you dereferenced NULL to abort it.”

            The broader assertion that C is a badly-designed language because it has these important things undefined, I would disagree with; I think there needs to be a category of “not nailed down in the spec because it’s machine-dependent,” and any effort to make those things defined machine-independently would mean C wouldn’t fulfill the role it’s supposed to fulfill as a language.

            • Sonotsugipaa@lemmy.dbzer0.com · 11 months ago

              I’m not sure about C, but C++ often describes certain things as “implementation defined”;
              one such example is casting a pointer to a sufficiently big integral type.

              For example, you can assign a float* p0 to a size_t i, then i to a float* p1 and expect that p0 == p1.
              Here the compiler is free to choose how to calculate i, but other than that the compiler’s behavior is predictable.
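
              A hedged sketch of that round trip, using std::uintptr_t rather than size_t, since only the former (where it exists) is guaranteed to be wide enough for a pointer:

              ```cpp
              #include <cassert>
              #include <cstdint>

              int main() {
                  float f = 1.0f;
                  float* p0 = &f;
                  // The integer value of 'i' is implementation-defined, but
                  // converting it back yields a pointer equal to the original.
                  std::uintptr_t i = reinterpret_cast<std::uintptr_t>(p0);
                  float* p1 = reinterpret_cast<float*>(i);
                  assert(p0 == p1);
              }
              ```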

              “Undefined behavior” is not “machine-dependent” code - it’s “these are seemingly fine instructions that do not make sense when put together, so we’re allowing the compiler to assume that you’re not going to do this” code.

              That said, UB is typically the result of “clever” programming that ignores best practices, aside from extreme cases prosecuted by the fiercest language lawyers (like empty while(true) loops that may or may not boot Skynet, or that one time that atan2(0,0) erased from this universe all traces of Half Life 3).
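
              For the curious, a hedged sketch of the while(true) case (pre-C++26 rules; actual behavior varies by compiler and optimization level): a side-effect-free infinite loop violates the forward-progress assumption, so the optimizer is allowed to assume it terminates.

              ```cpp
              #include <cstdio>

              int main() {
                  // No side effects and never terminates: UB under pre-C++26
                  // forward-progress rules. Some optimizers have been observed
                  // to delete the loop, letting the line below actually run.
                  while (true) { }
                  std::puts("should be unreachable");
              }
              ```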

              • mo_ztt ✅@lemmy.world · 11 months ago

                For example, you can assign a float* p0 to a size_t i, then i to a float* p1 and expect that p0 == p1. Here the compiler is free to choose how to calculate i, but other than that the compiler’s behavior is predictable.

                I don’t think this specific example is true, but I get the broader point. Actually, “implementation defined” is maybe a better term for this class of “undefined in the language spec but still reliable” behavior, yes.

                “Undefined behavior” is not “machine-dependent” code

                In C, that’s exactly what it is (or rather, there is some undefined-in-the-spec behavior which is machine dependent). I feel like I keep just repeating myself – dereferencing 0 is one of those things, overflowing an int is one of those things. It can’t be in the C language spec because it’s machine-dependent, but it’s also not “undefined” in the sense you’re talking about (“clever” programming by relying on something outside the spec that’s not really official or formally reliable.) The behavior you get is defined, in the manual for your OS or processor, and perfectly consistent and reliable.

                I’m taking the linked author at his word that these things are termed as “undefined” in the language spec. If what you’re saying is that they should be called “implementation defined” and “undefined” should mean something else, that makes 100% sense to me and I can get behind it.

                The linked author seems to think that because those things exist (whatever we call them), C is flawed. I’m not sure what solution he would propose other than doing away with the whole concept of code that compiles down close to the bare metal… in which case what kernel does he want to switch to for his personal machine?

  • mrkite@programming.dev · 11 months ago

    My problem with C/C++ is that the people behind the spec have sacrificed our sanity in the name of “compiler optimization”. Signed overflow behaves the same on every CPU on the planet, so why is it undefined behaviour? Even more insane, they specify that intN_t must be implemented via two’s complement… but signed overflow is still undefined, because compilers want to pretend they run on pixie dust instead of real hardware.