That's ~20M (Firefox) to ~30M (Chromium) lines of code as a dependency for your application, just for auth. This applies even if you have a slick CLI app like rclone. If you want to connect it to Google Drive, you still need a browser to do the OAuth2 flow. All of this just so we have a safe, known location to stash auth cookies.
It would be sweet if there was a lightweight protocol where you could lay out a basic consent UI (maybe with a simple JSON format) that can be rendered outside the browser. Then you need a way to connect to a central trusted cookie store. You could still redirect to a separate app, but it wouldn't need to be nearly as complicated as a browser.
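As a sketch of what that "basic consent UI in a simple JSON format" might look like — every field name here is hypothetical, invented for illustration, not part of any existing spec:

```python
import json

# Hypothetical consent descriptor a provider could serve instead of an
# HTML consent page. All field names here are made up for illustration.
consent_request = {
    "client": "rclone",
    "provider": "drive.example.com",
    "scopes": [
        {"id": "files.read", "label": "Read your files"},
        {"id": "files.write", "label": "Modify your files"},
    ],
    "actions": ["approve", "deny"],
}

def render_plaintext(req):
    """Render the descriptor as text any TUI (or voice UI) could show."""
    lines = [f"{req['client']} wants access to {req['provider']}:"]
    lines += [f"  - {s['label']}" for s in req["scopes"]]
    return "\n".join(lines)

print(render_plaintext(consent_request))
```

The point being: a trusted local agent could render this however it likes — ncurses, GTK dialog, whatever — and a full HTML/CSS/JS engine never enters the picture.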
As an aside, to my knowledge OAuth2 still works in PaleMoon. I just downloaded the source and did a count with CLOC, and it looks like there's "only" ~13.5M lines of code. :)
It's a fair point, and there are specs defined for these uses. Something like rclone could certainly do it this way if Google supports it on their end. But IMO the UX of browser-redirect OAuth is actually pretty dang good. I would like to have that available for CLI apps. What if you could literally import an ncurses library directly into your app and do the flow in-process? I'm not even sure if there's a way to do that securely but it would be sweet.
> As an aside, to my knowledge OAuth2 still works in PaleMoon.
I mean, the security of OAuth as a UX flow is that you're entering your username and password on the site you're authenticating with, not the intermediary. (And this is verifiable by looking at the address bar, insofar as that's a reliable option.)
I recall using an Electron app that did OAuth using its own web view, and despite being a fairly well-known app, I had some misgivings because I had no way of knowing if that Google login interface was actually Google's OAuth page or a mock-up generated by the app. (Not trying to spare the guilty here, I can't remember which app. Too many suspects.)
I don't see how you could avoid this problem with an imported ncurses library. Even if the requesting process launched a separate trusted binary to do the authentication flow, verifying that you were actually interacting with that program instead of a mockup is very far into the weeds of tech savvy, and at best is even slower than doing the process manually.
The only option I can see working is to have a dedicated OAuth app. You copy-paste some sort of request token out of the client, it prompts you for username and password, then it gives you a code to enter into the client. Basically the same as a how CLI apps negotiate OAuth now, except you never leave the terminal.
Not saying it's a better idea, just that it's the only one.
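As a toy model of that copy-paste flow — all names here are invented, and the crypto is only a stand-in; real OAuth's device authorization grant (RFC 8628) does something structurally similar over HTTPS rather than the clipboard:

```python
import hmac, hashlib, secrets

# Toy simulation of the copy-paste flow described above. The "auth app"
# and the provider's server share a secret; the client never sees the
# user's credentials or that secret.
SERVER_SECRET = secrets.token_bytes(32)

def check_credentials(username, password):
    # Stand-in for a real credential check against the provider.
    return (username, password) == ("alice", "hunter2")

def client_make_request_token():
    return secrets.token_urlsafe(16)  # user copies this out of the CLI

def auth_app_issue_code(request_token, username, password):
    if not check_credentials(username, password):
        raise PermissionError("bad credentials")
    # Short code binding this approval to the request token.
    return hmac.new(SERVER_SECRET, request_token.encode(),
                    hashlib.sha256).hexdigest()[:12]

def server_redeem_code(request_token, code):
    expected = hmac.new(SERVER_SECRET, request_token.encode(),
                        hashlib.sha256).hexdigest()[:12]
    if hmac.compare_digest(code, expected):
        return "access-" + secrets.token_urlsafe(8)
    return None

token = client_make_request_token()                    # step 1: copy out of client
code = auth_app_issue_code(token, "alice", "hunter2")  # step 2: trusted auth app
access_token = server_redeem_code(token, code)         # step 3: paste back into client
```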
Although since we're here, discussing numbers of lines of code—how much do you think web browsing performance has to do with browser complexity versus the websites themselves? I've always assumed the primary issue was the latter. Does e.g. Hacker News have higher system requirements in Chrome 97 vs Chrome 1.0?
Browser complexity worries me for other reasons—namely, the web isn't really a standard if there's only one implementation that matters, and because browsers are so complex, no one can really hope to create a new one.
Well, the problem is that OAuth providers will only give you your tokens if you go through impossibly complex websites.
It's 100% the fault of websites - but websites allow themselves to get fat because browsers allow it, and because they allow it websites get even fatter. It's a codependent system. The medium makes the media etc...
I can only share your concern about having the monoculture we have in practice. Gemini is the best tool we have not only to escape this complexity, but hopefully put some sense into the mind of web designers.
> It would be sweet if there was a lightweight protocol where you could lay out a basic consent UI
This also feels like it should be a part of the OS, since it's "select user, input password, and maybe 2fa, store auth". It doesn't need a full-blown web-browser.
It does, however, need a properly defined protocol.
I don't think this is true. My first (and last, yuck) foray into golang was forking and improving on a REST API client for Questrade. I used it to write a bot that would alert me via push notifications on my phone when certain options contracts met favorable criteria, and despite the auth being OAuth2 the whole thing was hand-tooled in golang. No browser anywhere near it.
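For reference, the browserless part of that kind of setup is usually the standard OAuth2 refresh-token grant, which is just a plain HTTP POST. A stdlib Python sketch — the endpoint URL is a placeholder, not Questrade's (or anyone's) actual API:

```python
from urllib.parse import urlencode
import urllib.request

# Sketch of a standard OAuth2 refresh-token grant: no browser involved,
# just form-encoded POST to the provider's token endpoint.
# The URL below is a placeholder, not a real endpoint.
TOKEN_URL = "https://auth.example.com/oauth2/token"

def build_refresh_request(refresh_token, client_id):
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }).encode()
    return urllib.request.Request(
        TOKEN_URL, data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_refresh_request("abc123", "my-cli-app")
# urllib.request.urlopen(req) would return a JSON body containing a
# fresh access token (network call omitted here).
```

The catch, as discussed above, is that you still need a browser once to obtain the initial refresh token; after that, a bot can run headless forever.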
Granted, but isn't it also true that the services most people want to use in general require a browser?
That said: cookies may have been a mistake (opinion). I should back that up with more evidence, but until I can put a complete argument with evidence together -- that's all I would like to say on the matter for now.
That seems like it's always required then. I think maybe I'm not understanding what you mean. Can you describe the flow in a bit more detail? I would be very interested in doing OAuth2 without a web browser.
Most content on the web is static text, audio, and video, and shouldn't need a Rube Goldberg virtual machine to consume. I've been advocating splitting the web into the "document web" (ie the web as it was originally conceived) and the "app web" (which is cool and useful but a different thing) for a while now. There should be two different programs for consuming them.
And to your point, if simple web browsers became useful again, maybe simple operating systems would be viable once more, further reducing the dependencies involved.
This kind of idea would be really nicely paired with good Microformats support, which continues to be a very good idea. That way we can find, say, a recipe or an address on a web page in a reusable way and without needing magical heuristics.
(Of course, "reusable" in theory, with the caveat that everybody forgot about microformats around when Google decided they could machine learn their way out of everything).
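As a sketch of the "reusable" part: schema.org JSON-LD (the machine-readable markup that largely displaced classic microformats) can be pulled out of a page with nothing but the stdlib, no heuristics needed. The sample page here is made up:

```python
import json
from html.parser import HTMLParser

# Sketch: extract schema.org JSON-LD blocks from a page using only
# Python's stdlib HTML parser.
class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._buf = None
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._buf = []  # start capturing script content

    def handle_data(self, data):
        if self._buf is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._buf is not None:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = None

page = """<html><head><script type="application/ld+json">
{"@type": "Recipe", "name": "Pancakes", "recipeYield": "4 servings"}
</script></head><body>Pretty pictures and ads here</body></html>"""

parser = JSONLDExtractor()
parser.feed(page)
recipes = [b for b in parser.blocks if b.get("@type") == "Recipe"]
```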
Ever try to start a forum? It's a monumental task with no guarantee of success. You may even need to employ people to grow and maintain one.
And once you've finally grown one of these hubs that accumulates recipes, lyrics, real estate listings, classifieds, etc. (whatever you had in mind) there's no incentive to make it as easy as possible to share it with the world. Once you get over "ugh, everyone just wants to make a buck", there's the fact that it wasn't free to build and maintain the platform to begin with. And perhaps the only incentive to build the platform was the idea that people would pay for the value.
Or, who is supposed to do the work of curating and organizing all of this information and then producing an API so that others can build on it, and why haven't they started? There are probably some inconvenient truths in the answer beyond cynicism.
Care to share or explain more? Sounds awesome.
I'm not sure I'm interpreting you correctly here, but I think I'm on the other side of this. The problem is that many modern websites are godawful. I think the story pretty much ends there. If websites were not awful, we wouldn't find ourselves appalled by the idea of just embedding a browser.
Modern web browsers feature a 'reader mode' as a countermeasure to the fact that much modern web design is significantly worse than having no web design at all.
If you're serious about a 'lightweight' alternative to the lumbering horror-show of the modern web, the way forward is either Gemini, or a formalised simple subset of HTML.
> That way we can find, say, a recipe or an address on a web page in a reusable way and without needing magical heuristics.
I think the find and reusable aspects here are really two very different problems.
The reusable part is easy. HTML is already reusable. A standardised simple subset would be even more so. 
The find part is trickier. Discovering decent content is harder, as there's an arms race of ad-funded spammers trying to out-compete legitimate recipe sites in search-engine rankings. (There's also the possibility of search engines not being motivated to work on delivering good search results.)
The idea of having a choice between native GUI applications and web apps has been with us for some time. Email is probably the best example: we've long had the choice between webmail and native email clients. Beyond webmail, these days even Microsoft Word has a web-based version. There are of course both advantages and disadvantages to web-based applications.
I wonder if we even need the search engines? I think a lot of the things we've come to rely on them for could easily be handled in other ways. Recipes for example. You don't really want the best recipe page for a given dish. You want a good quality recipe site that has a recipe for that dish. Quality of recipe sites ebb and flow as they sell out and incentives change, but generally you would probably only need to be aware of the top 2-3 sites. This is exactly the type of information that is easily stored as "tribal knowledge" on a subreddit, forum sticky, community wiki, or even blasting out to your Facebook friends "hey what's everyone's favorite recipe site?"
Is that an accurate assessment?
You actually don't need to. Gemini is a client-server architecture just like the web. You can grab a Gemini client, or use some of the web portals out there that make a server-side Gemini request and render the result to the browser.
> or "type in search" feature
There is a limited form of input allowed, but this input shows up as a query parameter essentially. This is how you can do search or offer some form of interactivity and how the current Gemini search engines/crawlers work.
> basically as if gemini was entirely reduced to mostly static websites with no forms of any kind possible
Again Gemini does allow limited forms, but it only accepts a single query parameter as input. You can, of course, parse the parameter however you choose, but the general culture is that there's very little interactivity and most everything is a static page.
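That single-query-parameter mechanism can be sketched in a few lines, assuming only the basics of the Gemini spec (a status-10 "INPUT" response header carries a prompt, and the client retries with the percent-encoded query appended):

```python
from urllib.parse import quote

# Sketch of the Gemini input dance. A response header line looks like
# "<status> <meta>"; status 10 means INPUT and meta is the user prompt.
def parse_header(header_line):
    status, _, meta = header_line.partition(" ")
    return int(status), meta

def next_request(url, header_line, get_input):
    status, meta = parse_header(header_line)
    if status == 10:  # INPUT: ask the user, retry with ?query appended
        return url + "?" + quote(get_input(meta))
    return None  # 2x success, 3x redirect, etc. handled elsewhere

req = next_request(
    "gemini://example.org/search",
    "10 Enter search query",
    lambda prompt: "gemini protocol",
)
# req is the follow-up URL the client would fetch next
```

This is the entire extent of Gemini's interactivity — no forms, no POST bodies, just that one query string.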
With common (and evolving) formats -- and incentives for publishers to provide their information within those formats -- we could then have much simpler, more streamlined tools to use and remix that data in application-specific ways.
 - https://schema.org/docs/full.html
Seems it's used for SEO hacking. For instance, on recipes, why wouldn't a site give their recipe a super-high rating? Those sites are awful SEO spam adservers, basically.
But business info that can be used in maps seems pretty valuable to Google.
Google uses a subset of schema.org, validates it according to its own stricter specification and extends it in other ways. It doesn't naively consume everything you provide. For example:
> If the Recipe structured data contains a single review, the reviewer's name must be a valid person or organization. For example, "50% off ingredients" is not a valid name for a reviewer.
> Warning: If your site violates one or more of these guidelines, then Google may take manual action against it. Once you have remedied the problem, you can submit your site for reconsideration.
For a while, people were pitching this as Web 2.0. It's also what RSS and podcasts still are.
Unfortunately, most of the websites we visit are revenue-generating and want to control their presentation.
This project seems designed to work even without the cooperation of the sites you're interacting with; all the site-specific modules are maintained by the community. Adversarial interoperability at its finest.
In extreme cases, one could imagine a module running a full headless browser on the back-end, pretending to be a user scrolling around and clicking stuff, while presenting the actual user with a clean front-end.
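The "community-maintained module per site" idea boils down to a small shared interface that every module implements, whatever scraping horrors hide behind it. A hypothetical sketch — this is not Woob's actual API; all names are invented:

```python
from abc import ABC, abstractmethod

# Hypothetical per-site module interface, in the spirit of Woob's
# backends (not Woob's real API).
class SiteModule(ABC):
    name: str  # registry key for this module

    @abstractmethod
    def list_items(self):
        """Return site content as plain dicts, however it was obtained."""

class ExampleForum(SiteModule):
    name = "exampleforum"

    def list_items(self):
        # A real module might drive a headless browser or parse HTML
        # here; this one returns canned data to show the shape.
        return [{"title": "Hello", "author": "alice"}]

def registry(modules):
    """Index modules by name so a shared front-end can dispatch to them."""
    return {m.name: m for m in modules}

mods = registry([ExampleForum()])
items = mods["exampleforum"].list_items()
```

The front-end only ever sees the plain dicts, which is what makes swapping a scraper for an official API (or a headless browser) invisible to the user.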
- A social media website without a frontend. We just provide a fully exposed API and Oauth, and devs can create their own client to interact with the social network. This would give devs the freedom to create their own experiences without locking users into one specific way of using the social network.
- "Cloud" content hosting as a service. You'd be able to build your own frontend for interacting with a website / blog, and then include our JS code and your site's content will automatically be populated in. This would keep the frontend clean, simple, and cheap, while offloading posts, comments, and other advanced functionality to the service.
Of course both are purely experimental ideas, with no potential real world meaning :D
Not being a jerk but this concept was one of the major ideas behind Web 2.0 but fizzled out. Services would provide data endpoints that your user agent (browser or whatever) would tie together. Even your identity was just a bunch of meta tags in the headers of your web page pointing to things like FOAF or OPML files that linked to people you knew or sites you liked.
Your User Generated Content would just be your blog posts that could be easily followed by someone with an RSS reader. Things like photos or videos would work the same way, as someone would just follow your Flickr feed (which you could point to with a metatag on your homepage).
The key takeaway was that everyone would host their own data, deciding what to publish or make public, and then smarter clients (other sites or apps) would collect this information and do whatever graph analysis you wanted.
But normal people don't want to run their own servers, and tying disparate services together is non-trivial. So we got UGC, but it was/is hosted on social media sites. They made it easier to put together an online presence than self-hosting everything.
Take CouchDB and store all activities as ActivityStreams documents.
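A sketch of what one such document might look like, using the ActivityStreams 2.0 vocabulary with CouchDB's `_id` as the document key (the exact field choices here are illustrative):

```python
import json
import uuid
from datetime import datetime, timezone

# Sketch: an ActivityStreams 2.0 "Create" activity shaped as a CouchDB
# document. `_id` is CouchDB's document key; the rest follows the AS2
# vocabulary.
def make_activity(actor, text):
    return {
        "_id": str(uuid.uuid4()),
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "published": datetime.now(timezone.utc).isoformat(),
        "object": {"type": "Note", "content": text},
    }

doc = make_activity("https://example.org/users/alice", "Hello, fediverse!")
# json.dumps(doc) is roughly what you'd PUT to CouchDB at /db/{_id}
```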
> "Cloud" content hosting as a service.
"Headless CMS" is the term you are looking for, and it is already a big industry https://jamstack.org/headless-cms/
I'm sure there are many sites which help provide better context for the fediverse, but here, check this one out: https://fediverse.party/en/fediverse
It has fallen out of fashion tho.
> Also eliminates the possibility of sites engaging in annoying or abusive behavior by putting users in full control of the client rather than the site operator. Obviously it can't work for every site, but it's quite the interesting concept.
That’s the job of the User Agent after all, acting on behalf of the user.
That's what HTTP is. You're free to write a client that isn't a browser that sends and receives the same API messages as any HTTP client app does. Most people use browsers, but there's also things like iOS and Android apps that consume the same APIs as browsers, or Postman that directly communicates with the APIs, etc.
The APIs that sit on top of HTTP are even sort of standardized, in the sense that HTTP verbs mean the same everywhere (in theory, but some devs get it wrong).
The only hard bit that no one has really solved in a nice way is how you discover the APIs in the first place. There are things like WSDL, but it's horrible.
However, right about the same time web apps were taking over the world there were thick client apps that were solving the problems of installation and updates. Two of the prominent thick client applications doing this were iTunes and the browsers themselves.
Now fast forward a decade to the early teens and the ubiquitous use of smart phones. What is the single largest determining factor of platform success? Is it the ability for web apps to render on your platform's web browser or is it the breadth and depth of your platform's app store?
My rant is over, I wish web apps would die. I've wished that for most of the 21st century.
The current differences in attitudes between mobile and non-mobile apps is pretty interesting.
If you visit Reddit or Twitter from a browser that's running on a mobile OS then they make it extremely clear that they would prefer that you use a dedicated app. When visiting from a non-mobile OS not only do you not see such messages, but a dedicated app doesn't even exist.
In one place the browser is good enough to be the only option, in the other it's so inferior that they will bully and harass you until you stop using it.
An interesting thing happened here as well - because of the iOS app store and its auto-update policy, "thick" clients came back in vogue again. It just happened that they're running on phones, not full PCs. I'd argue these count as thick client apps. :-) Consider WhatsApp or WeChat - each has a complete and full ecosystem of messaging, apps, and utility functions. It actually (I think) supersedes AOL 2 or 3 in terms of functionality. There's DMs, Chat, Apps...
At first I thought this was like an API to integrate web content into your own apps. But now it looks more like Groupware, in the sense that Woob is actually your user interface and there are just modules to consume content from random websites.
It goes back to the old idea where you would have one dedicated desktop application for each thing you wanted to do on the internet, like read news, send mail, listen to music, view a calendar... turning your computer into a utilitarian appliance. Rather than a portal for businesses to spend a lot of time and money building their own dedicated user interfaces to lock you in. The latter has made life more difficult, where we have to constantly learn every business's new interface, there's always competition between missing features, and the dedicated UI (or platform) becomes a way for the business to squeeze more out of the user.
And there are no ads. I just realized there's an entire generation who have never seen technology without advertisements. I wonder what they'd make of this.
Age vs ad blocker usage (female, male)
16-24 43.2%, 49.2%
25-34 43.0%, 47.6%
35-44 38.4%, 44.8%
45-54 33.5%, 39.1%
55-65 32.1%, 37.3%
> If provided, icons are preferred to be parodic or humorous in nature for
> legal reasons, however there are no restrictions on the quality or style of humor.
Also, a fun fact: this project changed its name recently; it was called Weboob before: https://weboob.org
> When weboob was started in 2010, 11 years ago, the name was chosen, without a
> hidden agenda, since as a French speaker, "boob" wasn't part of my vocabulary.
> Following its release and the ensuing reactions, during its first years, the
> project was complemented with various provocative elements (icons, application
> names, English slurs in the code). This was done with the sole motive that at
> that time, it was seen as "fun".
But when the project gained traction he realized that the name was probably not appropriate for people building business apps with it, which he wanted to support.
> But in practice, it's been years the project isn't following this approach
> anymore, it's used as an essential building block of professional companies, the
> provocative elements are progressively removed, and the professionnalisation[sic] question is being raised.
Source: Weboob will become woob − https://lists.symlink.me/pipermail/weboob/2021-February/0016...
I think this sentence has been carefully constructed to be kind of true. The original author certainly knew what "boob" meant at the time, and the name "weboob" was voluntarily chosen as a pun. Now if you ask him, maybe he will deny it (I have other memories, but who cares), or maybe he will simply say that we do not use "boob" on a day-to-day basis in French, so the pun seemed completely inconsequential.
But frankly, that's not a big deal, and nobody gives a shit to begin with. Everybody has been young, and some big bosses of far bigger companies have way more annoying histories. As for his company "budget insight", IIRC they already had contracts with some banks when it was still called "weboob" with even more terrible module names available.
Maybe, but not certainly. Do not assume too much English proficiency in France in 2010. It has changed a lot in the past decade, with most of the French hacker spaces disappearing or fusing into the global internet, but I still remember the 2000s, when we were hanging around in French IRC channels and forums.
Doesn't really inspire confidence in their professionalism or trustworthiness with handling financial transactions, if you ask me.
Quirky naming and crude humour can be orthogonal to functionality.
Also, separately (though I would not be surprised if it's frequented by the same/similar folks who have interests in Gemini), there is also the tildeverse...again, more text-heavy environments. See: https://tildeverse.org/
And, as I have stated in another comment, there is the fediverse (e.g. Mastodon, Pleroma, etc.), so the ability to leverage APIs to interact with other folks and their content without explicitly needing a typical web browser exists, and flourishes.
I'll end by stating that there are a few exciting things - like the above items I mention as well as this neat Woob platform - which to me seem very fun, a little new, and yet at the same time in some ways nostalgic...maybe they won't make the morning news, and likely only attract geeks, but it is all still exciting - at least for me!
I can see the normal site just fine on Lynx browser (maybe that's the point?)
Edit: ah, I see, they're doing a JWZ and redirecting based on referral, but going a step further and setting a cookie. Cute, but also terribly immature.
...Tilde.net used to be invite-only, not open to anyone creating an account...so if you're interested - and it's still not open to the public - I can send you an invite.
I never did figure out if any of the GUI clients are still actively developed, and I'd appreciate it if anyone who knows about this could point me towards a good client.
 Some software listed here: http://www.loc.gov/z3950/agency/resources/software.html
in woob-weather, with weather.com backend, I've been getting "Error(weather): 401 Client Error: Unauthorized";
in woob-gallery, with imgur backend, when I attempt to download an image the module crashes with "FileNotFoundError: [Errno 2] No such file or directory: ''"
I like the idea though and I'll keep trying further.
Update: I resolved the image-gallery problem by specifying the foldername (so: using "download ID 1 foldername" instead of "download ID"). BUT: it looks like I'm unable to download text descriptions that sometimes accompany the images.
Since it is all client side, it can be dubbed a "browser" not a "scraper" and one might hope popularity is high enough that active blocking of it is blatantly user hostile. Granted one hopes that, like EasyList and uBO and others have shown, the community can outpace site owners. Not appearing headless (tunneling captchas, literal mousemove events in pseudo-random human-like ways, etc) should be doable.
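The human-like mousemove idea can be sketched as a jittered Bezier curve between two points — the parameters below are made up for illustration, not tuned against any real bot detector:

```python
import random

# Sketch: generate a roughly human-looking mouse path between two points
# using a quadratic Bezier curve plus per-point jitter -- the kind of
# sequence a module could replay as synthetic mousemove events.
def mouse_path(start, end, steps=30, wobble=8.0, rng=random):
    (x0, y0), (x1, y1) = start, end
    # A random control point bows the path instead of a straight line.
    cx = (x0 + x1) / 2 + rng.uniform(-100, 100)
    cy = (y0 + y1) / 2 + rng.uniform(-100, 100)
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        points.append((x + rng.uniform(-wobble, wobble),
                       y + rng.uniform(-wobble, wobble)))
    points[0], points[-1] = start, end  # pin the endpoints exactly
    return points

path = mouse_path((0, 0), (800, 600))
```

A real evasion module would also vary the timing between points; constant-interval events are themselves a headless tell.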
It's something I have thought about and once dubbed "recapitate" (https://github.com/cretz/software-ideas/issues/82) and plan to revisit. I have seen many versions of this attempted. We need to encourage shared data extraction tools.
Yes, as a user another government gets to read your posts, but I mean yet another.
To get on, I literally just knocked on a few doors in San Francisco and got authorized, so many people here can too. You could probably do it at a park.
Note: Hong Kong citizens cannot do it for US citizens, even though I was trying. It has to be a mainland Chinese person.
Do not install or operate it on any trusted device. Do not connect it to your home network. Do not store personal details on your Weibo device. Do not ever send sensitive information or talk to real contacts on Weibo.
So the user experience on a Chinese service is simply not different enough for me to treat it differently.
However, there is a world of difference between sloppily shutting down vaccine or election misinformation, and actively censoring a tennis star reporting a sexual assault by a politburo member.
There is a world of difference between taking down shit-talking politicians' Twitter profiles, and actively censoring the genocide of an ethnic minority.
Ads that slurp up personal data are bad. Threatening political dissidents abroad directly is incomparable.
To say "well they are all the same bad" is to be willfully blind to the basic facts.
 - https://www.theatlantic.com/technology/archive/2019/03/what-...
 - https://www.thecut.com/2021/12/the-disappearance-of-peng-shu...
Haha I’m not saying that, I’m saying it has nothing to do with my participation in those platforms because I know what to expect and my lack of participation changes nothing.
My words are the user experience is not different enough.
For everyone else, check out that great robust example of a web outside of browsers.
An amusing detour.
I can't for the life of me remember what that type of service was. It was back in the era of anonymous remailers.. any ideas?
The developer was listed as Laurent Bachelier (https://github.com/laurentb). Searching him up, he unfortunately seems to have committed suicide a year ago. Bizarrely, with some links to right-wing political groups? (First result on Google for "Laurent Bachelier".)
My deepest condolences to the programmers of this project who have lost someone who I assume was a close friend and co-worker.
Woob does a lot more than just banks. It allows you to get any of your data. Adding additional providers is piss easy too.