• Ephera@lemmy.ml · 16 points · 22 hours ago

    Yeah, linked lists are rarely a good idea. Modern CPUs load contiguous regions of memory into cache, so array-backed lists have better performance in virtually all situations.

    In a way, I’d want to argue that you should actually only ever roll your own linked lists, because you should only use linked lists when you’re not working in-memory, i.e. when array-backed lists are not an option to begin with.

    • Sasquatch@lemmy.ml · 2 points · 4 hours ago

      I'm not sure how malloc() works, but I would guess it attempts to squeeze new allocations into partially-filled memory pages, right? Wouldn't that largely offset the inefficiency?

    • [object Object]@lemmy.ca · 9 points · 20 hours ago

      You really need frequent middle insertion (insert joke here) for the linked list to become better than an array list.
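      A minimal sketch of that trade-off (hypothetical Python, not from the thread): inserting into the middle of an array-backed list shifts every later element, while a linked list splices in a node in constant time once you already hold a reference to the predecessor.

```python
# Hypothetical illustration: middle insertion in an array list vs. a
# hand-rolled singly linked list. Names (Node, insert_after) are made up.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_after(node, value):
    # O(1): pointer surgery only, the tail of the list is untouched.
    node.next = Node(value, node.next)

# Array-backed list: insert in the middle is O(n), every later element shifts.
arr = [1, 2, 4]
arr.insert(2, 3)
assert arr == [1, 2, 3, 4]

# Linked list: same logical insert without rewriting the tail.
head = Node(1, Node(2, Node(4)))
insert_after(head.next, 3)

out = []
n = head
while n:
    out.append(n.value)
    n = n.next
assert out == [1, 2, 3, 4]
```

      The catch, as the comments above note, is that you pay O(n) anyway to *find* the predecessor by walking the list, so the win only materializes when you already hold a node reference.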

      • Zagorath@quokk.au · 4 points · 19 hours ago

        Oh damn, I was just about to reply to your reply to @fishface@piefed.social (which is literally directly above this comment, on my screen) suggesting exactly this. Glad that Piefed initially failed to register my clicking “reply”.

    • FishFace@piefed.social · 4 points · 21 hours ago

      What would you use if you don't know how much space you're going to need in advance, and you're only going to read the data once for every time the structure gets created?

      • [object Object]@lemmy.ca · 4 points · 20 hours ago

        Array list/vector types often have dynamic resizing built in, and benchmarking, if you can, always helps.

        • FishFace@piefed.social · 7 points · 19 hours ago

          Yes, but dynamic resize typically means copying all of the old data to the new destination, whereas a linked list does not need to do this. The time complexity of reading a large quantity of data into a linked list is O(N), but reading it into an array can end up being O(N^2) or at best O(N log N).

          You can make the things in your list big chunks so that you don’t pay much penalty on cache performance.

          I thought of another good example situation: a text buffer for an editor. If you use an array, then on large documents inserting a character at the beginning of the document requires you to rewrite the rest of the array, every single character, to move everything up. If you use a linked list of chunks, you can cap the amount of rewriting you need to do at the size of a single chunk.
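          A rough sketch of that chunked-buffer idea (hypothetical Python; names and the tiny chunk size are made up for illustration): the document is a linked list of fixed-size chunks, so an insert at the front rewrites at most one small chunk instead of the whole text.

```python
# Hypothetical sketch of a text buffer as a linked list of chunks.
# Real editors use structures like ropes or piece tables with much
# larger chunks; CHUNK_SIZE is tiny here just to exercise the logic.

CHUNK_SIZE = 4

class Chunk:
    def __init__(self, text, nxt=None):
        self.text = text
        self.next = nxt

def insert_front(head, ch):
    """Insert one character at position 0, touching only the first chunk."""
    if head is None or len(head.text) >= CHUNK_SIZE:
        # First chunk is full (or absent): start a new one. The rest of
        # the document is not rewritten at all.
        return Chunk(ch, head)
    # Rewrite just this one small chunk, never the whole buffer.
    head.text = ch + head.text
    return head

def render(head):
    parts = []
    while head:
        parts.append(head.text)
        head = head.next
    return "".join(parts)

buf = None
for ch in reversed("hello world"):
    buf = insert_front(buf, ch)
assert render(buf) == "hello world"
```

          With a flat array, each of those front inserts would have shifted the entire document; here each one costs at most CHUNK_SIZE character moves.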

          • anton@lemmy.blahaj.zone · 2 points · 2 hours ago

            Expanding a dynamic array by powers of 2 has amortized constant cost per append, so filling one up from empty is O(n).
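            The amortized argument can be checked directly (hypothetical Python, simulating the doubling policy rather than using any real allocator): with capacity doubling, n appends copy fewer than 2n elements in total, so the cost is linear, not quadratic.

```python
# Simulate a doubling dynamic array and count element copies.
# Growth events copy 1 + 2 + 4 + ... + n/2 < n elements in total,
# so n appends cost O(n) overall.

def count_copies(n):
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # grow: copy every existing element
            capacity *= 2
        size += 1
    return copies

assert count_copies(8) == 1 + 2 + 4        # copies at sizes 1, 2, 4
assert count_copies(1_000_000) < 2_000_000  # linear in n, not quadratic
```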

            • FishFace@piefed.social · 1 point · 56 minutes ago

              Well I just had to work it out again myself and you’re right. I dunno what scenario I was thinking of that had worse complexity and whether it was really due to dynamic arrays; I just remember getting asked about it in some interview and somehow the answer ended up being “use a linked list and the time complexity goes down to linear” /shrug

              Thanks for the correction!