I guess we all kinda knew that, but it’s always nice to have a study backing your opinions.

  • UnderpantsWeevil@lemmy.world · 10 months ago

    I’d choose something like kagi but I guess many people will rather cheap out

    I often feel as though these paid-for services aren’t delivering a meaningfully better product. After all, it isn’t as though Google’s problem is a lack of cash to spend on optimization. The problem is that they’re a profit-motivated firm fixated on minimizing costs and maximizing revenue. Kagi has far less money to spend on optimization than Google, and the same profit-chasing incentives.

    If there were a GitHub / Linux distro equivalent to a modern search engine - or even a Wikipedia-style curated collaborative effort - I’d be happy to kick in for that (the way I donate to those projects). For all the shit Wikipedia gets as “Spook-o-pedia”, it at least has a public change history and an engaged community of participants. If Kagi is just going to kick me back the same Wiki article at a higher point in the results list than Google, why buy their premium service when I can just donate to Wiki and search there directly?

    If I’m just getting a feed of paywalled news outlets like the NYT or WaPo, it’s the same question: why not just pay them directly and use their internal search?

    Other than screening out the crap that Google or Bing vomit up, what is the value-add of Kagi? And why shouldn’t I expect to see the same shit-creep in Kagi that I’ve seen in Google or Bing over the last decade? Because I’m paying them? Fuck, I subscribe to Google and Amazon services, and they haven’t gotten any better.

    • hannes3120@feddit.de · 10 months ago

      The problem is that it’s just incredibly expensive to keep scanning and indexing the web over and over in a way that makes it possible to search within seconds.

      And the problem with search engines is that you can’t make the algorithm completely open source, since that would make it too easy to manipulate the results with SEO, which is exactly what’s destroying Google.

      • UnderpantsWeevil@lemmy.world · 10 months ago

        you can’t make the algorithm completely open source since that would make it too easy to manipulate

        I don’t think “security through obscurity” has ever been an effective precautionary measure. SEO works today because it’s possible to intuit how the algorithm behaves without ever seeing its interior code.

        Knowing the interior of the code gives black hats a chance to manipulate the algorithm, but it also gives white hats the chance to advise alternative optimization strategies. Again, consider an algorithm that biases itself toward websites without ads. The means of gaming that system would run contrary to the incentives for clickbait. What’s more, search engines and ad-blockers would now have a common cause, which would have knock-on effects of its own.
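        To make the idea concrete, here’s a minimal sketch of what such an openly published ranking rule could look like: a scoring function anyone can read, where relevance is discounted by ad density. Every field name, weight, and URL below is invented for illustration; no real search engine is claimed to work this way.

```python
# Hypothetical transparent ranking heuristic: relevance is discounted
# by the fraction of the page taken up by ads, so gaming the score
# means *removing* ads rather than stuffing keywords onto ad farms.

def rank_score(page: dict) -> float:
    """Higher relevance is better; heavy ad load drags the score down."""
    relevance = page["relevance"]  # e.g. 0.0-1.0 from text matching (assumed)
    ad_ratio = page["ad_ratio"]    # fraction of page area that is ads (assumed)
    return relevance * (1.0 - ad_ratio)

pages = [
    {"url": "a.example", "relevance": 0.9, "ad_ratio": 0.6},  # ad-heavy clickbait
    {"url": "b.example", "relevance": 0.8, "ad_ratio": 0.0},  # ad-free page
]

# The slightly less "relevant" but ad-free page wins: 0.8 vs 0.9 * 0.4 = 0.36.
ranked = sorted(pages, key=rank_score, reverse=True)
```

        Because the rule is public, site owners know exactly how to improve their ranking, and the only way to do it aligns with what readers actually want.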

        But this would mean moving toward an internet model that is friendlier to open-source, collaboratively managed, not-for-profit content. That’s not something companies like Google and Microsoft want to encourage. And that’s the real barrier to such an implementation.