• wolfrasin@lemmy.today · 1 point · 2 minutes ago

    Oh yes this. Not using AI is best, but if you must: keep putting it on some random corpo's tab and bankrupt them if they're going to insist on having AI 'help bots'.

  • mlg@lemmy.world · 41 points · 17 hours ago

    I haven’t kept up with the prompt injection memes, but I wonder if anyone has bothered making a frontend or Ollama plugin that just uses the thousands of random public chatbots to get free high-token-rate usage lol.

  • [object Object]@lemmy.ca · 59 points · 23 hours ago

    Writing a linked list in Python just feels wrong. I know it’s probably homework.

    If you must, the stdlib has collections.deque, which is going to be faster and simpler than rolling your own. Plus no reference semantics to deal with.
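    For reference, a quick sketch of what collections.deque gives you here (it's backed by a doubly linked list of blocks, so both ends are O(1)):

```python
from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)   # O(1) prepend, where list.insert(0, x) would be O(n)
d.append(4)       # O(1) append
d.popleft()       # O(1) pop from the front, returns 0
print(list(d))    # [1, 2, 3, 4]
```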

    • Ephera@lemmy.ml · 14 points · 21 hours ago

      Yeah, linked lists are rarely a good idea. Modern memory optimization, where contiguous regions of memory are loaded into CPU caches, means that array-backed lists have better performance in virtually all situations.

      In a way, I’d want to argue that you should actually only ever roll your own linked lists, because you should only use linked lists when you’re not working in-memory, i.e. when array-backed lists are not an option to begin with.

      • Sasquatch@lemmy.ml · 1 point · 2 hours ago

        I'm not sure how malloc() works, but I'd guess it tries to squeeze new allocations into partially filled memory pages, right? Wouldn't that largely offset the inefficiency?

      • [object Object]@lemmy.ca · 9 points · 18 hours ago

        You really need frequent middle insertion (insert joke here) for the linked list to become better than an array list.
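        To illustrate the middle-insertion case with a minimal (hypothetical) hand-rolled node: given a handle on the predecessor node, the linked-list splice is O(1), while list.insert has to shift every later element:

```python
class Node:
    """Minimal singly linked list node (illustration only)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1) splice: just two pointer writes, no shifting.
    node.next = Node(value, node.next)

# Build 1 -> 2 -> 4 and keep a handle on the middle node.
head = Node(1, Node(2, Node(4)))
insert_after(head.next, 3)   # now 1 -> 2 -> 3 -> 4

# The equivalent array-list operation shifts everything after index 2:
arr = [1, 2, 4]
arr.insert(2, 3)             # O(n) worst case
```

        The catch, of course, is that you only get the O(1) splice if you already hold a reference to the node; finding it by index is O(n) again.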

        • Zagorath@quokk.au · 4 points · 18 hours ago

          Oh damn, I was just about to reply to your reply to @fishface@piefed.social (which is literally directly above this comment, on my screen) suggesting exactly this. Glad that Piefed initially failed to register my clicking “reply”.

      • FishFace@piefed.social · 4 points · 19 hours ago

        What would you use if you didn't know how much space you were going to need in advance, and you were only going to read the data once for each time the structure got created?

        • [object Object]@lemmy.ca · 4 points · 18 hours ago

          Array list/vector types usually have dynamic resizing built in, and benchmarking it, if you can, always helps.

          • FishFace@piefed.social · 7 points · 18 hours ago

            Yes, but dynamic resize typically means copying all of the old data to the new destination, whereas a linked list does not need to do this. The time complexity of reading a large quantity of data into a linked list is O(N), but reading it into an array can end up being O(N^2) or at best O(N log N).

            You can make the things in your list big chunks so that you don’t pay much penalty on cache performance.

            I thought of another good example situation: a text buffer for an editor. If you use an array, then on large documents inserting a character at the beginning of the document requires you to rewrite the rest of the array, every single character, to move everything up. If you use a linked list of chunks, you can cap the amount of rewriting you need to do at the size of a single chunk.
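            A toy sketch of that chunked-buffer idea (illustration only; a Python list of small string chunks stands in for the linked list of chunks, and a real editor would use a rope or piece table and split oversized chunks):

```python
class ChunkBuffer:
    """Toy text buffer: inserting only rewrites one small chunk."""
    def __init__(self, text, chunk_size=4):
        self.chunks = [text[i:i + chunk_size]
                       for i in range(0, len(text), chunk_size)] or [""]

    def insert(self, pos, s):
        # Find the chunk containing pos; only that chunk is rewritten.
        for i, chunk in enumerate(self.chunks):
            if pos <= len(chunk):
                self.chunks[i] = chunk[:pos] + s + chunk[pos:]
                return
            pos -= len(chunk)
        self.chunks.append(s)  # past the end: append

    def text(self):
        return "".join(self.chunks)

buf = ChunkBuffer("hello world")
buf.insert(0, "X")       # rewrites only the first 4-char chunk
print(buf.text())        # Xhello world
```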

            • anton@lemmy.blahaj.zone · 1 point · 13 minutes ago

              Expanding a dynamic array to powers of 2 has amortized constant complexity so filling one up from empty is O(n).
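              You can check the amortized claim with a quick sketch that counts element copies for a doubling array (hypothetical model, not CPython's actual growth factor):

```python
def copies_for_doubling(n):
    """Count element copies while growing a doubling array
    from empty through n appends."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size     # realloc copies every existing element
            capacity *= 2
        size += 1
    return copies

# Total copies stay below 2n, so n appends cost O(n) overall.
print(copies_for_doubling(1000))   # 1023, which is < 2 * 1000
```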

  • sp3ctr4l@lemmy.dbzer0.com · 12 up / 1 down · 20 hours ago

    Personally, I’ve been using Ecosia as a coding assistant for shit my local LLM on my SteamDeck can’t figure out without triggering thermal threshold warnings.

    It's… theoretically more eco-friendly than, uh, Ronald McDonald teaches Python.

    • lyralycan@sh.itjust.works · 18 up / 1 down · 17 hours ago

      Wait, Ecosia has an AI assistant? One, that sounds ridiculously counterproductive to their goal of being eco; two, I need to check that out haha

      • sp3ctr4l@lemmy.dbzer0.com · 5 points · 16 hours ago

        It has an AI assisted search feature.

        Which also just happens to have basically no guidelines preventing it from analyzing and rewriting code that you copy-paste into it.

        lol.

        But uh, they have some kind of scheme where AI usage or search queries = a pledge to plant trees.

        … ???

        At least with most models I’ve run locally, they’ll all at least try to read code, tell you how it works, suggest improvements, etc.

        You could probably just copy-paste a block of random code into any AI 'thing' and ask it to help you with it, and there's a pretty good chance it would try to.