“I found it very weird that there essentially is no way to browse the web in an open manner. So that’s what I am trying to build,” the founder of Stract said.
So have many others, except they didn’t start a company based on it. As soon as it is part of a company, it is no longer free and open.
The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
Paradoxically (or not), restrictions on selling software are a fundamental violation of freedom. When the OSS movement says free, it means freedom as in free to do what you want, not free as in free beer. Of course, that freedom also includes the freedom to give it away.
So in practice, that usually results in exactly what you lament: free software with a business model on top to support its development and pay programmers so they can eat.
Why? It depends on the business model; even RMS says it’s OK to make money with open source.
What are the actual reasonable outcomes here:
- The search engine becomes successful and requires monetization to pay for the hosting/indexing costs
- The search engine does not become successful and the ever increasing cost of indexing the entire internet forces monetization or shut down
- You self host your own version, in which case you need to start indexing yourself (see problem #2)
I think what would be interesting is to get everyone who self-hosts this to do part of the indexing. As in, find some way to split the indexing work over the self-hosted instances running this search engine, then make sure “the internet” is divided somewhat reasonably between them. Kind of like what crypto does, but producing indexes instead of nothing.
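One simple way to sketch that split (this is purely illustrative, not anything Stract actually does): hash each domain and assign it to one of N instances. Every instance can compute the same assignment locally, so no central coordinator is needed to divide the work.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: deterministically assign each domain to one of
// `num_instances` self-hosted nodes by hashing the domain name.
fn assigned_instance(domain: &str, num_instances: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    domain.hash(&mut hasher);
    hasher.finish() % num_instances
}
```

A real deployment would probably want consistent hashing instead of a plain modulo, so that adding or removing an instance only remaps a small fraction of domains rather than nearly all of them.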
That would give random strangers (at least partial) control over what is indexed and how and you’d have to trust them all. I’m not sure that’s a great idea.
There are ways to get around this: give every indexing job to multiple nodes, decide the result by majority vote between those nodes, and penalize (i.e. exclude) nodes that repeatedly produce results that don’t match the majority. Basically what distributed research projects have done for decades.
Getting the details of such a system right wouldn’t be easy, but it’s far from impossible.
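The majority-vote idea above could look roughly like this (a minimal sketch under the assumption that each node reports a hash of the index shard it produced; none of this is from the actual project):

```rust
use std::collections::HashMap;

// Hypothetical sketch: each (node, result_hash) pair is one node's answer
// for the same indexing job. Accept the hash a strict majority agrees on,
// and return the nodes that disagreed so they can be penalized.
fn majority_result(results: &[(&str, u64)]) -> Option<(u64, Vec<String>)> {
    // Count how many nodes produced each result hash.
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for (_, hash) in results {
        *counts.entry(*hash).or_insert(0) += 1;
    }
    // The winning result needs a strict majority of all nodes.
    let (winner, votes) = counts.into_iter().max_by_key(|(_, c)| *c)?;
    if votes * 2 <= results.len() {
        return None; // no majority: the job should be re-assigned
    }
    // Nodes that disagreed with the majority get flagged.
    let dissenters = results
        .iter()
        .filter(|(_, h)| *h != winner)
        .map(|(node, _)| node.to_string())
        .collect();
    Some((winner, dissenters))
}
```

The hard parts this glosses over are exactly the ones mentioned above: making crawls reproducible enough that honest nodes hash to the same result, and stopping a coordinated group from forming its own “majority.”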
I wonder how it compares with searxng. I do like that it’s written in Rust instead of Python.
It’s got a fully independent search index according to the README. SearxNG, LibreX, LibreY, etc. just take results from multiple search engines and combine them.
Mh, but there are (were?) other search engines where you could crawl the web yourself. I remember doing that for the lolz, but I can’t remember the name.
Ah, that’s the one I was thinking of, as mentioned here: https://h4.io/@helioselene/111908397221160157
Interesting. The creator included the !bang feature. Nice. Gonna have to play with this more.
yup, every engine that supports !bangs gets my attention immediately.
I should probably know what this does but I’m thinking I don’t. Could somebody explain?
They’re ways to search on a specific site from the engine’s search bar. For instance,
!gsch cows
will search for cows on Google Scholar from DuckDuckGo. I don’t know how standardized bangs are across engines, but they’re super useful if you use a bunch of obscure search tools day to day.
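Under the hood, bang resolution can be as simple as a prefix lookup plus URL substitution. A minimal sketch (the bang codes and URL templates here are illustrative, not taken from any engine’s actual table):

```rust
// Hypothetical sketch: if the query starts with a known !bang, redirect
// it to that site's search URL; otherwise fall through to normal search.
fn resolve_bang(query: &str) -> Option<String> {
    // (bang, search-URL template with {} as the query placeholder)
    let bangs = [
        ("!gsch", "https://scholar.google.com/scholar?q={}"),
        ("!w", "https://en.wikipedia.org/wiki/Special:Search?search={}"),
    ];
    let (bang, rest) = query.split_once(' ')?;
    let template = bangs.iter().find(|(b, _)| *b == bang)?.1;
    // A real engine would percent-encode `rest` before substituting it.
    Some(template.replace("{}", rest.trim()))
}
```

So `resolve_bang("!gsch cows")` yields the Google Scholar search URL for “cows”, while a query with no recognized bang returns `None` and gets handled as an ordinary search.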
there is a business around it, and the project doesn’t really have any track record, so no trust has built up yet. I would tread carefully.
Ok, hold on…
Can it be self-hosted?
Looks like it from the readme!
Amazing, will try this out on the Pi then.
I was wondering the same, but I didn’t find any information on how it builds the search index. I guess it takes quite a while until it’s usable. It might also be very dependent on the speed of the internet connection and the available storage.
From the GitHub page linked in this post:
We recommend everyone to use the hosted version at stract.com, but you can also follow the steps outlined in CONTRIBUTING.md to setup the engine locally.