• [object Object]@lemmy.ca · 1 day ago

    Array list/vector types often have dynamic resizing built in, and if you can benchmark it, that always helps.

    • FishFace@piefed.social · 1 day ago

      Yes, but dynamic resize typically means copying all of the old data to the new destination, whereas a linked list does not need to do this. The time complexity of reading a large quantity of data into a linked list is O(N), but reading it into an array can end up being O(N^2) or at best O(N log N).

      You can make the elements of your list big chunks so that you don’t pay much of a penalty in cache performance.

      I thought of another good example situation: a text buffer for an editor. If you use an array, then on large documents inserting a character at the beginning of the document requires you to rewrite the rest of the array, every single character, to move everything up. If you use a linked list of chunks, you can cap the amount of rewriting you need to do at the size of a single chunk.
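      The chunked approach above can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: a linked list of fixed-size chunks where inserting a character at the front only rewrites (or splits) the first chunk, so the cost is capped by the chunk size rather than the document size.

```python
# Hypothetical sketch of a text buffer as a linked list of chunks.
# Inserting at the front touches at most one chunk; when that chunk is
# full, we split it instead of shifting the entire document.

CHUNK_CAP = 4  # tiny capacity so splits are easy to see; real editors use far larger chunks

class Chunk:
    def __init__(self, text=""):
        self.text = text  # characters stored in this chunk
        self.next = None  # next chunk in the list

class ChunkedBuffer:
    def __init__(self, text=""):
        self.head = Chunk()
        for ch in reversed(text):
            self.insert_front(ch)

    def insert_front(self, ch):
        head = self.head
        if len(head.text) >= CHUNK_CAP:
            # Split: cost is bounded by CHUNK_CAP, not by document length.
            new_head = Chunk()
            new_head.next = head
            self.head = new_head
            head = new_head
        head.text = ch + head.text  # rewrite only this one small chunk

    def read(self):
        parts, node = [], self.head
        while node:
            parts.append(node.text)
            node = node.next
        return "".join(parts)

buf = ChunkedBuffer("hello world")
buf.insert_front("!")
print(buf.read())  # "!hello world"
```

      With a plain array, that `insert_front` would have had to shift every character in the buffer; here it rewrote at most `CHUNK_CAP` characters.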

      • anton@lemmy.blahaj.zone · 9 hours ago

        Expanding a dynamic array by powers of 2 has amortized constant complexity per append, so filling one up from empty is O(n).
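        You can verify the amortized bound by counting copies directly. A hypothetical sketch (names are mine, not from the thread): simulate appending n items to a doubling array and tally how many element copies the resizes cost; the total stays below 2n, so the average per append is constant.

```python
# Count element copies made while appending n items to a doubling array.
# The copies from resizes sum to 1 + 2 + 4 + ... < 2n, hence O(n) overall.

def fill_count_copies(n):
    cap, size, copies = 1, 0, 0
    for _ in range(n):
        if size == cap:
            copies += size  # resize: copy every existing element once
            cap *= 2
        size += 1
    return copies

for n in (10, 1000, 10**6):
    assert fill_count_copies(n) < 2 * n  # amortized O(1) per append
```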

        • FishFace@piefed.social · 8 hours ago

          Well, I just had to work it out again myself, and you’re right. I dunno what scenario I was thinking of that had worse complexity, or whether it was really due to dynamic arrays; I just remember getting asked about it in some interview and somehow the answer ended up being “use a linked list and the time complexity goes down to linear” /shrug

          Thanks for the correction!