• Björn Tantau@swg-empire.de · +301 · 5 months ago

    Fake news.

    Both Windows and Linux have their respective SIGTERM and SIGKILL equivalents. And both usually try SIGTERM before resorting to SIGKILL. That’s what systemd’s dreaded “a stop job is running” is. It waits a minute or so for the SIGTERM to be honoured before SIGKILLing the offending process.
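
    In code terms, the escalation looks roughly like this (a minimal sketch, assuming the libc crate; `pid` is a hypothetical target process):

    ```rust
    // Ask politely first, then force - the same TERM-then-KILL escalation
    // systemd performs during a stop job.
    fn terminate(pid: libc::pid_t) {
        unsafe { libc::kill(pid, libc::SIGTERM) };               // polite request to exit
        std::thread::sleep(std::time::Duration::from_secs(90));  // grace period
        if unsafe { libc::kill(pid, 0) } == 0 {
            // Still alive after the grace period: force it.
            // SIGKILL cannot be caught, blocked, or ignored.
            unsafe { libc::kill(pid, libc::SIGKILL) };
        }
    }
    ```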

    • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · +48/−5 · 5 months ago

      Also fake because zombie processes.

      I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. Ended up rebooting, which was a sort of baby-with-the-bathwater solution.

      Zombie processes still infuriate me. While I’m not a Rust developer, nor do I particularly care about the language, I’m eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of making it to useful desktop status. A good microkernel would address so many of the worst aspects of Linux.

      • CameronDev@programming.dev · +80/−1 · edited · 5 months ago

        Zombie processes are already dead. They aren’t executing; the kernel is just keeping a reference to them so their parent process can check their return code (waitpid).

        All processes become zombies briefly after they exit; usually their parents just wait on them promptly. If a parent exits without waiting on its child, the child gets reparented to init, which will wait on it. If the parent stays alive but doesn’t wait on the child, the child remains a zombie until the parent exits and triggers the reparenting.

        It’s not really Linux’s fault if processes don’t clean up their children correctly, and I’m 99% sure you can zombie a child on Redox, given it’s a POSIX OS.

        Edit: https://gist.github.com/cameroncros/8ae3def101efc08be2cd69846d9dcc81 - Rust program to generate orphans.
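
        For comparison, the zombie case itself is tiny (a minimal sketch, assuming the libc crate; this is not the linked gist):

        ```rust
        // Fork a child that exits immediately, then never reap it. While the
        // parent sleeps, `ps -o pid,stat,comm` shows the child in state Z ("defunct").
        fn main() {
            let pid = unsafe { libc::fork() };
            if pid == 0 {
                std::process::exit(0); // child: dead, but not yet waited on
            }
            std::thread::sleep(std::time::Duration::from_secs(60)); // parent: never waits
            // Reaping it would be:
            // unsafe { libc::waitpid(pid, std::ptr::null_mut(), 0) };
        }
        ```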

        • senkora@lemmy.zip · +3 · 5 months ago

          I haven’t tried this, but if you just need the parent to call waitpid on the child’s pid then you should be able to do that by attaching to the process via gdb, breaking, and then manually invoking waitpid and continuing.
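
          Something along these lines (a sketch; the PIDs are hypothetical, and the cast tells gdb the return type when libc lacks debug info):

          ```
          $ gdb -p 1234                          # attach to the parent; this stops it
          (gdb) call (int) waitpid(5678, 0, 0)   # reap the zombie child
          (gdb) detach
          ```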

          • CameronDev@programming.dev · +8 · edited · 5 months ago

            I think that should do it. I’ll try later today and report back.

            Of course, this risks getting into an even worse state, because if the parent later tries to correctly wait for its child, the call will hang.

            Edit: This will clean up the orphan/defunct process.

            If the parent ever tried to wait, it would either get ECHILD if there are no children, or it would block until a child exited.

            Will likely cause follow-on issues - reaping someone else’s children is generally frowned upon :D.

      • MNByChoice@midwest.social · +26/−1 · 5 months ago

        Zombie processes are hilarious. They are the unkillable package delivery person of the Linux system. They have some data that must be delivered before they can die. Before they are allowed to die.

        Sometimes just listening to them is all they want. (strace them, or redirect their output anywhere.)

        Sometimes, the whole village has to burn. (Reboot)

      • Diabolo96@lemmy.dbzer0.com · +9 · 5 months ago

        RedoxOS will likely never become feature-complete enough to be a stable, useful, daily-drivable OS. It’s currently a hobbyist OS that is mainly used as a testbed for OS programming in Rust.

        If the RedoxOS devs could port the COSMIC DE, they’d have one of the best toy OSes, and it might get used on some serious projects. That could bring in enough funds to become a viable OS used by megacorps on infrastructure where security is critical, which may lead it to develop into a truly daily-drivable OS.

      • uis@lemm.ee · +2 · 5 months ago

        OK, how would a change of kernel fix a userspace program not reading a return value? And if you just want to use a microkernel, then use either HURD or whatever DragonflyBSD uses.

        But generally, microkernels are not the solution to the problems most people claim they would solve, especially in the post-Meltdown era.

        • This particular issue could be solved in most cases in a monolithic kernel. That it isn’t is by design. But it’s a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists the mount is still in use by the zombie process - a process the kernel provides no mechanism for terminating.

          It is provable by experiment on Linux using FUSE filesystems. Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have a permanently mounted NFS mount point. Now mount something using FUSE, say a remote WebDAV share. Run the same zombie process there. Again, the mount point is unmountable. Now kill the FUSE process itself. The mount point will be unmounted and disappear.

          This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module. And in a well-designed microkernel, even processes using the module can in many cases continue functioning as if the restarted kernel module never changed.

          FUSE is really close to the capabilities of microkernels, except it’s only for filesystems. In a microkernel, nearly everything is like FUSE. A Linux kernel compiled such that everything is a loadable module, not hard-linked into the kernel, is close to a microkernel - except without the benefits of actually being a microkernel.

          Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

          • uis@lemm.ee · +2 · edited · 5 months ago

            This particular issue could be solved in most cases in a monolithic kernel. That it isn’t is by design.

            It was (see CLONE_DETACHED, here) and is (source).

            Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have a permanently mounted NFS mount point.

            OK, that is not really a good implementation. I’m not sure the standard requires zombie processes to keep mount points (unless the executable is located in that fs) until the return value is read. Unless there is a call to get the CWD of another process. Oh, wait - can’t ptrace issue a syscall on behalf of a zombie process, or something like that? Or use the VFS of that process? If so, then it makes sense to keep the mount point.

            Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module.

            except without the benefits of actually being a microkernel.

            Except Linux does this too. If the graphics module crashes, I can still SSH into the system. And when I developed a driver for the RK3328 TRNG, it crashed a lot; I replaced it without rebooting.

            Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

            As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

            • As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

              I’ve heard this from several people, but you’re the lucky number at which I’d heard it enough times that I bothered to gather some references to refute it.

              First, this is an argument that derives from first-generation microkernels, and in particular MINIX, which - as a teaching-aid OS - never tried to play the benchmark game. It’s been repeated, like dogma, through several iterations of microkernels, which have in the interim largely erased the performance lead of monolithic kernels. One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure. A second paper running benchmarks on L4Linux vs Linux concluded that the microkernel penalty was only about 5%-10% for applications compared with the monolithic Linux kernel.

              That is not MUCH slower, and - indeed - unless you’re doing HPC applications, it is close enough to be unnoticeable.

              Edit: I was originally going to omit this, as it’s propaganda from a vested interest and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I’m including it post facto.

              • uis@lemm.ee · +1 · edited · 5 months ago

                First, this is an argument that derives from first-generation microkernels, and in particular MINIX, which - as a teaching-aid OS - never tried to play the benchmark game.

                Indeed, first-generation microkernels were so bad that Jochen Liedtke, in rage, created L3 “to show how it’s done”. While it was faster than existing microkernels, it was still slow.

                One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure.

                1. The paper was written in the pre-Meltdown era.
                2. The paper is about hybrid kernels, and gutted Mach (XNU) is used as the example.
                3. Nowadays (after Meltdown) all cache levels are usually invalidated during a context switch. Processors try to add mechanisms to avoid this, but those create new vulnerabilities.

                A second paper running benchmarks on L4Linux vs Linux concluded that the microkernel penalty was only about 5%-10% for applications compared with the monolithic Linux kernel.

                1. Waaaaay before the Meltdown era.

                I’ll mark quotes from the paper with double quotes.

                “a Linux version that executes on top of a first-generation Mach-derived µ-kernel.”

                1. So, a hybrid kernel. Not as bad as a microkernel.

                “The corresponding penalty is 5 times higher for a co-located in-kernel version of MkLinux, and 7 times higher for a user-level version of MkLinux.”

                Wait, what? Co-located in-kernel? So, a loadable module?

                “In particular, we show (1) how performance can be improved by implementing some Unix services and variants of them directly above the L4 µ-kernel”

                1. No surprise here. Hybrids are faster than microkernels. Kinda proves my point that moving closer to monolithic improves performance.

                Right now I’ve stopped at the end of the second page of the paper. Maybe I’ll continue later.

                this blog entry

                Will read.

        • areyouevenreal@lemm.ee · +1 · 5 months ago

          But generally, microkernels are not the solution to the problems most people claim they would solve, especially in the post-Meltdown era.

          Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.

          • uis@lemm.ee · +2 · 5 months ago

            Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.

            Many people think that microkernels are the only way to run one program across multiple machines without modifying it. A counterexample to that claim is Plan 9, which had this capability with a monolithic kernel.

            • areyouevenreal@lemm.ee · +2 · 5 months ago

              That’s not something I ever associated with microkernels to be honest. That’s just clustering.

              I was more interested in having minimal kernels with a bunch of processes handling low level stuff like file systems that could be restarted if they died. The other cool thing was virtualized kernels.

              • uis@lemm.ee · +1 · 5 months ago

                Well, even monolithic Linux can restart an fs driver if it dies. I think.

      • Vilian@lemmy.ca · +1/−1 · 5 months ago

        Nah, you can have microkernel features on Linux, but you can’t have monolithic-kernel features on a microkernel. There are zero arguments in favor of a microkernel, except being a novel project.

        • ORLY.

          Do explain how you can have microkernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I’m really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that’d really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn’t getting security patches).

          I’d love to hear how monolithic kernels have solved these.

          • frezik@midwest.social · +3 · 5 months ago

            I’ve been hoping that we can sneak more and more things into userspace on Linux. Then, one day, Linus will wake up and discover he’s accidentally made a microkernel.

          • areyouevenreal@lemm.ee · +2 · 5 months ago

            I thought the point of LTS kernels is that they still get patches despite being old.

            Other than that, though, you’re right on the money. I think they don’t know what the characteristics of a microkernel are. I think they mean that a microkernel can’t have all the features of a monolithic kernel; what they fail to realise is that that might actually be a good thing.

            • I thought the point of LTS kernels is that they still get patches despite being old.

              Well, yeah, you’re right. My shameful admission is that I’m not using LTS because I wanted to play with bcachefs and it’s not in LTS. Maybe there’s a package for LTS now that’d let me at it, but, still. It’s a bad excuse, but there you go.

              I think a lot of people also don’t realize that most of the performance issues have been worked around, and if RedoxOS is paying attention to advances in the microkernel field and isn’t trying to solve every problem in isolation, they could end up with close to monolithic-kernel performance. Certainly close to Windows performance, and that seems good enough for industry.

              I don’t think microkernels will ever compete in the HPC field, but I highly doubt anyone complaining about the performance penalty of microkernel architecture would actually notice a difference.

              • areyouevenreal@lemm.ee · +2 · 5 months ago

                Windows is a hybrid kernel with some interesting layers of abstraction, all of which make it slower. It’s also full of junkware these days. So beating it shouldn’t be that hard.

                Yeah, to be fair, in HPC it’s probably easier to just set up a watchdog and reboot the node in case of issues. No need for the extra resilience.

                • That’s my point. If you’re l33t gaming, what matters is your GPU anyway. If HPC, sure, use whatever architecture gets you the most bang for your buck, which is probably going to be a monolithic kernel (but maybe not - nanokernels allow processes essentially direct access to hardware, with minimal abstraction, like X11 DRI, and might allow even faster solutions to be programmed). For most people, the slight performance edge of a monolithic kernel over a modern, optimized microkernel design will probably not be noticeable.

                  I keep getting people telling me monolithic kernels are way faster, dude, but most are just parroting the state of things from decades ago and ignoring many of the advancements microkernels like L4 have made in the intervening years. But I need to go find links and put together references before I counter-claim, and right now I have other things I’d rather be doing.

          • Vilian@lemmy.ca · +1 · 5 months ago

            You don’t need a microkernel to install modules, nor to keep a crash in a given module from bringing the kernel down; you can program it in isolation. They don’t do that now because it’s unnecessary, but Android does, and there’s work being done in that direction: https://www.phoronix.com/news/Ubuntu-Rust-Scheduler-Micro

            The thing is that it’s harder to do, which is why nobody does it, but it’s not impossible; you also need to give the kernel the foundations to support it.

              • Vilian@lemmy.ca · +1 · 5 months ago

                Bro thinking a Chromecast OS is gonna run on Google’s servers 💀. Microkernels have their uses in embedded systems, we know; saying they are a replacement for monolithic kernels is dumb. Also, can’t companies do different/hacky projects anymore?

    • Mojave@lemmy.world · +35 · 5 months ago

      Clicking End Task in the Windows Task Manager has definitely let a hanging task live on in its non-responsive state for multiple hours before.

      • Björn Tantau@swg-empire.de · +16/−1 · 5 months ago

        It’s been a while since I’ve been on Windows, but I distinctly remember some button to kill a task without waiting. Maybe they removed it to make Windows soooo much more user friendly.

        • Rev3rze@feddit.nl · +19 · 5 months ago

          Off the top of my head: right-click the task and hit End Process. That has literally never failed me. Back in Windows XP it might sometimes not actually kill the process, but then there was always the “kill process tree” button to fall back on.

          • Zoot@reddthat.com · +7 · 5 months ago

            Yep, “Kill Process Tree” was typically the nuke from orbit. You’ll likely destroy any unsaved data, but it works nicely when Steam has 12 processes running at once.

            • Aux@lemmy.world · +3 · 5 months ago

              It’s not really a nuke, as some processes might be protected. The nuke is to use debugger privileges. Far Manager can kill processes using debugger privileges; that will literally nuke anything, and in an instant: the app won’t even receive any signals or anything.

        • blind3rdeye@lemm.ee · +7 · 5 months ago

          The normal Windows Task Manager’s “End Task” button just politely asks the app to close - but later it will tell the user if the app is unresponsive, and offer to brutally murder it instead.

          There is also the Sysinternals Process Explorer, which is basically an “expert” version of the Task Manager. Process Explorer does allow you to just kill a task outright.

      • Aux@lemmy.world · +9 · 5 months ago

        “End Task” doesn’t terminate the app; it only sends a message asking the window to close itself. The app then decides what to do on its own. For example, if the app has multiple windows open, it might close the active one but continue running with the other windows open. Or it might ignore the message completely.

    • DefederateLemmyMl@feddit.nl · +11 · 5 months ago

      That’s what systemd’s dreaded “a stop job is running” is

      The worst part of that is that you can’t quickly log in to check what it is (so that maybe you could prevent it in the future), or kill it anyway, because it’s likely to be something stupid and unimportant. And if it actually was important, well… it’s gonna be shot in the head in a minute anyway, and there’s nothing you can do to prevent it, so what’s the point of delaying?

      • Björn Tantau@swg-empire.de · +21 · 5 months ago

        so what’s the point of delaying?

        In the best case, the offending process actually does shut down cleanly before the time is up. For example, some databases, like Redis, keep written data in memory for fast access before actually writing it to disk. If you were to kill such a process before all the data is written, you’d lose it.

        So admins of servers like these might even opt to increase the timeout, depending on their configuration and disk speed.
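
        For example, a drop-in override for a single unit (a sketch, assuming a unit named redis.service):

        ```ini
        # /etc/systemd/system/redis.service.d/override.conf
        [Service]
        TimeoutStopSec=5min
        ```

        After a `systemctl daemon-reload`, systemd will wait five minutes for this unit’s SIGTERM to be honoured before escalating to SIGKILL.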

        • DefederateLemmyMl@feddit.nl · +12 · edited · 5 months ago

          I know what it’s theoretically for; I still think it’s a bad implementation.

          1. It often doesn’t tell you clearly what it is waiting for.
          2. It doesn’t allow you to check out what’s going on with the process that isn’t responding, because logins are already disabled.
          3. It doesn’t allow you to cancel the wait and terminate the process anyway. Nine times out of ten, when I get it, it has been because of something stupid like a stale NFS mount or a bug in a unit file.
          4. If it is actually something important, like your Redis example, it doesn’t allow you to cancel the shutdown or give it more time. Who’s to say your Redis instance will be able to persist its state to disk within 90 seconds, or any other arbitrary time?

          Finally, I think that well-written applications should be resilient to being terminated unexpectedly. If, as in your Redis example, you put data in memory without it being backed by persistent storage, you should expect to lose it. After all, power outages and crashes happen as well.

    • Avid Amoeba@lemmy.ca · +8 · 5 months ago

      Stop jobs are a systemd-ism, and they’re nice. I think the desktop environment kills its children on its own during reboot, and that might not be as nice. Graphical browsers often complain about having been killed after a reboot in GNOME.

      • Perry@lemy.lol · +3 · edited · 5 months ago

        AFAIK running Firefox in a terminal and pressing ^C (SIGINT) has roughly the same effect as logging out or powering off in GNOME (SIGTERM, if you’re using systemd). This gives the browser (or any other process with crash recovery) enough time to save its data and exit gracefully, so crash recovery works the next time it runs.

        Please correct me if I’m wrong

        • uis@lemm.ee · +5 · 5 months ago

          SIGTERM, if you’re using systemd

          It’s been SIGTERM since the original init.

    • Constant Pain@lemmy.world · +3 · 5 months ago

      Windows gives you the option to kill on shutdown if the app is trying to delay the process. I think it’s ideal.

    • dorumon@lemmy.world · +2 · 4 months ago

      BTW, you can control how long systemd waits after SIGTERM before it resorts to SIGKILL. I don’t know why people complain so much about it; it’s really just there so that things on your computer shut down properly, without data corruption or anything bad happening after a reboot or the next time you turn your computer on.
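
      For example, the global default lives in system.conf (a sketch; the shipped default is 90 seconds, which is where the infamous wait comes from):

      ```ini
      # /etc/systemd/system.conf
      [Manager]
      DefaultTimeoutStopSec=30s
      ```

      Individual units can override this with `TimeoutStopSec=` in their [Service] section.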