We Must Revive Gopherspace (2017) (matto.nl)
230 points by stargrave on Feb 16, 2019 | hide | past | favorite | 179 comments

HTML ain't the problem; you can build websites without tracking. If you somehow managed to pull enough users to Gopher, they'd just write Gopher Chrome and start adding new features that conveniently allow tracking into it, and gradually kill off the original protocol (see EEE). The problem is economic, and the solution must be too.

I’m old enough to have used gopher on vt100 terminals as an undergrad in college to try and do some ‘work’. And when http/www arrived, it didn’t take long to switch to a better mousetrap. And this wasn’t just because you could now render a gif in NCSA Mosaic on indigo workstation. Everything was just better in this new http world.

Let’s fast forward to today: yes, we’ve gone overboard all over, but then again, Gopher [I think] doesn’t come standard with TLS, and it hasn’t gone through the evolution that HTTP[S] has — the evolution that made it the robust and scalable backbone it is today.

What I’m trying to say is that we should not casually float around pipe dreams about switching to ancient tech that wasn’t that good to begin with. Yes, electric cars were a thing already in the early 1900s, and maybe we took a wrong turn with the combustion engine, but with Gopher, I think we should let sleeping dogs lie, and focus on improving the next version of QUIC, or even inventing something entirely new that would address many of the concerns in the article without sacrificing years of innovation since we abandoned Gopher. Heck, this new thing might as well run on TCP port 70, never mind that UDP appears to be the thing now[0].

[0] https://en.m.wikipedia.org/wiki/HTTP/3

Well said. The article conflates a content / rendering problem with a protocol solution.

A lightweight HTTP/TLS subset that severely limits client-side execution expectations would seem to accomplish the same goals.

While repurposing all the amazing tech we've built since the 1990s.

Essentially, "just pass me the bare minimum of response to make Firefox Reader View work."

... but then we wouldn't be able to serve high-value targeted ads, would we?

If the desire is to make online content more readable, it might be worth starting with the assumption that all content downloaded from the network will be read on a black-and-white ereader device with no persistent internet connection.

This assumption might require substantially reworking the hyperlink model of the internet, so that external references to content delivered by third parties are sharply distinguished from internal references to other pages within the same work.

Your idea of hypermedia with an offline browsing assumption is very good! Imagine an "offline archive" format that contains a document D + a pre-downloaded copy of all referred documents R1, R2, ..., Rn, along with necessary assets to render R1,R2..Rn in some useful manner (e.g. save html + main-narrative images from each page Ri, but skip everything else).

This "offline archive format" has numerous benefits: (A) Cognitive benefits of a limited/standard UI for information (e.g. "read on a black-and-white ereader device"), (B) Accessibility: standardizing on text would make life easier for people using screen readers, (C) Performance (since everything is accessed on localhost), (D) Async access: reaching the "edge" of the subgraph of the internet you have pre-downloaded on your localnet could be recorded and queued up for async retrieval by "opportunistic means" (e.g., next time you connect to free wifi somewhere you retrieve the content and resolve those queued "HTTP promises"), (E) Cognitive benefits of staying on task when doing research (read the actual paper you wanted to read, instead of getting lost reading the references, and the references' references).

I'm not sure what "standard" for offline media (A) we should target... Do we allow video or not? On the one hand video has great usefulness as a communication medium; on the other it's a very passive medium, often associated with entertainment rather than information. Hard choice if you ask me.

I'm sure such "pre-fetched HTTP" exists already in some form, no? Or is it just not that useful if you only have "one hop" in the graph? How hard would it be to crawl/scrape 2 hops? 3 hops? I think we could have a pretty good offline internet experience with a few hops. For me personally, I think async interactions with the internet limited to 3 hops would improve my focus — I'm thinking of hckrnews crawled + 3 hops of web content linked, a clone of any github repo encountered (if <10MB), and maybe doi links resolved to the actual paper from sci-hub. Having access to this would be 80%+ of the daily "internet value" delivered for me, and more importantly allow me to cut myself off from useless information like news and youtube entertainment.
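To get a feel for the idea, here's a rough sketch of an N-hop pre-fetcher in Python (stdlib only; the breadth-first traversal is the point — the fetch function, timeout, and error handling are simplified assumptions, not a real archiver):

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_hops=3, fetch=None):
    """Breadth-first pre-fetch of every page within max_hops links.

    Returns {url: html}. `fetch` can be swapped out for testing, or
    for an 'opportunistic' offline queue.
    """
    if fetch is None:
        fetch = lambda url: urlopen(url, timeout=10).read().decode("utf-8", "replace")
    archive = {}
    queue = deque([(start_url, 0)])
    while queue:
        url, hops = queue.popleft()
        if url in archive or hops > max_hops:
            continue
        try:
            html = fetch(url)
        except Exception:
            continue  # a real client would queue this as an "HTTP promise"
        archive[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            queue.append((urljoin(url, link), hops + 1))
    return archive
```

A real tool would additionally respect robots.txt, dedupe assets, and record failed fetches for later opportunistic retrieval.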

update: found WARC https://en.wikipedia.org/wiki/Web_ARChive http://archive-access.sourceforge.net/warc/warc_file_format-...

The issue is this thrashes caching at both the local and network levels, decreases overall hit rate, and doesn't scale as links-per-page increase.

How many links from any given page are ever taken? And is it worth network capacity and storage to cache any given one?

What if the Web was filesystem accessible?


FYI, that's noted in the article:

Plan 9 OS and the 9P protocol

I'm carving out a subsection for this, as the concept appears to contain a number of the elements (though not all of them) mentioned above. See Wikipedia's 9P (protocol) entry for more:

In particular, 9P supports Plan 9 applications through file servers:

acme: a text editor/development environment

rio: the Plan 9 windowing system

plumber: interprocess communication

ftpfs: an FTP client which presents the files and directories on a remote FTP server in the local namespace

wikifs: a wiki editing tool which presents a remote wiki as files in the local namespace

webfs: a file server that retrieves data from URLs and presents the contents and details of responses as files in the local namespace


> Essentially, "just pass me the bare minimum of response to make Firefox Reader View work."

I wish it were possible to use an HTML meta tag to declare to the user-agent that it should show the content in reader view.

Then sites that only want to provide text and images and no ads etc could be implemented without any CSS at all and minimal amounts of markup and still be nice and readable on all devices thanks to user-agent reader view taking care of the presentation.

So you're really wishing that we had sensible default css, instead of the terrible user agent styles that we are forced to keep for compatibility.

No, because that is not realistic for the exact reason you mention — that would break a lot of sites. Probably the vast majority even.

And I don’t see any real benefit in changing the defaults either. Most sites want to provide custom CSS. The point of reader view is to make simple articles consisting of text and images comfortable to read on your specific device. Device/resolution specific defaults would be at least as painful, and probably more painful, to override for every site that wants to use custom CSS.

Whereas an explicit meta tag telling the user-agent to use reader view is entirely feasible. Such a meta tag does not interfere with existing pages, requires nothing extra from sites that don’t want it, and would still fall back to something that works for all end-users whose user-agents don’t implement it (because browsers ignore meta tags they don’t understand so those browsers would render the page in the exact same way they would any unstyled page). And on top of that this theoretical meta tag would be really easy for browser vendors to implement — they parse the HTML, see the meta tag and trigger the reader view mode that they have already implemented.

The solution is to stop respecting the style tag and enforce sane defaults that the user has control over. The Web was meant to be a means of creating academic documents with fancy citations in a tree structure, not a glossy magazine delivery mechanism.

>just pass me the bare minimum of response to make Firefox Reader View work.

Couldn't an HTTP proxy work here? Something you run on localhost that fetches web pages, stripping them of trackers, adverts, and other undesirable stuff? Perhaps with enough development, it could even selectively strip CSS and JavaScript — e.g. it would be really nice if websites stopped shipping scrollbar hijacking / custom scrollbar code.
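Filtering proxies like Privoxy have long done something in this vein. As a sketch of just the stripping step such a localhost proxy could apply (Python stdlib; the tag list and attribute handling are deliberately simplified assumptions):

```python
from html.parser import HTMLParser

# Tags whose entire subtree we drop; a real proxy would also filter
# by third-party domain and consult known tracker blocklists.
STRIPPED = {"script", "style", "iframe"}

class Stripper(HTMLParser):
    """Re-emit HTML with script/style/iframe subtrees removed."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in STRIPPED:
            self.skip_depth += 1
            return
        if self.skip_depth == 0:
            attr_text = "".join(f' {k}="{v}"' for k, v in attrs if v is not None)
            self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag in STRIPPED:
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if self.skip_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.out.append(data)

def strip_page(html):
    s = Stripper()
    s.feed(html)
    return "".join(s.out)
```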

> Essentially, "just pass me the bare minimum of response to make Firefox Reader View work."

What would that constraint look like? Is animation and interaction prohibited there, for example?

I think the one-sentence summary would be "Content without code, and a bare minimum of formatting."

The lesson from the modern web being, if you give web developers a toolbox, they'll figure out how to build a surveillance system.

And for decades we've been creating more powerful tools (Flash/ActionScript, ActiveX, JavaScript, Wasm, etc).

The answer would seem to be that we should be far more careful what tools we allow to be used.

(Note: Not saying restrict all the web this way, but if one wanted to build parent article's wikipedia-esque info web)

I don't think the issue is the tools available but the motivation behind their use. Until we get rid of the ad revenue model we'll be stuck with user profiling and tracking. Micropayments or some other model will have to take its place before we see a return to the ideals we used to aspire to.

I'd prohibit interaction aside from clicking on links to get to the next page. As for animation, only embedded videos.

If we allow all those, it's just the modern web again.

If there's support for embedded assets (you mention videos, so I assume also images, etc), how do you prevent "tracking pixels" that can monitor what content is viewed per IP address?

I think it's generally useful to look at email here, where only a small subset of HTML — primarily photos + text — is reliably supported across clients.

>how do you prevent "tracking pixels" that can monitor what content is viewed per IP address?

No third party connections allowed. As for the originating server, they already know you requested the page, no?

Content addressing can help here. If every asset is identified by a cryptographic hash, the user agent is free to fetch it from any available source. In the case of images, you can hash over the rendered pixels instead of the file encoding, so all transparent 1x1 images are considered interchangeable.

Alternatively/in addition, the user agent can treat embedded assets like textbooks do, and present them all as numbered, boxed, and captioned figures.
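A minimal sketch of the content-addressing part (Python; it hashes whatever decoded bytes it is given — hashing actual rendered pixels would additionally require an image decoder, which is omitted here):

```python
import hashlib

def address(content: bytes) -> str:
    """Content address: hex SHA-256 of the decoded bytes.

    For images you would hash the decoded pixel buffer rather than
    the file encoding, so every transparent 1x1 image — whatever its
    compression or metadata — collapses to the same address.
    """
    return hashlib.sha256(content).hexdigest()

def fetch_by_address(addr, sources):
    """Try each source (peer, cache, mirror — the UA doesn't care which)
    and verify the bytes against the requested hash before use."""
    for source in sources:
        data = source(addr)
        if data is not None and address(data) == addr:
            return data
    raise LookupError("no source could supply " + addr[:12])
```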

If there was a way for me as a web developer to tell browsers "render this page in reader view" I would definitely use it.

Just use plain output and a little bit of CSS. http://bettermotherfuckingwebsite.com

> Black on white? How often do you see that kind of contrast in real life? Tone it down a bit

I never understood why people have a problem with normal contrast. Low contrast is hard to read.

#444 on white is hardly "low contrast".

That doesn't use the user's preferred font/line/column size and background color.

There's a subset of crusty techies like us who would like this.

The vast teeming hordes of Kardashian fans and youtube addicts would not. They like where we are now.

>render a gif in NCSA Mosaic on indigo workstation

You know what I would love to see: a comparison chart of all historic SGI machines, with their MSRP from their heyday versus comparable compute capabilities in modern devices.

I remember when we were moving ILM/Lucas to the Presidio — and they were throwing out massive SGI machines, which cost hundreds of thousands at the time — but some of them were turned into kegerators...

I recall those times too, however I switched back to gopher because the web was click-click-crash if you were running windows or on the college unix workstations, it was click-click-hang.

Moving to another protocol, like Gopher, seems like an economic solution to me.

A parallel protocol and hypermedia format that's restricted enough to prevent tracking isn't going to attract everyone. It's going to attract the subset of users who care enough about privacy to give up "rich Web" features like single page applications and animated HTML canvas elements in return.

That's not a group that's likely to start immediately demanding to build in the features needed to create infiniscrolling Pinterest feeds. And it's also going to be a much more restricted group, meaning businesses won't stand to profit much by pushing for it. So there might not be any economic incentive to do it.

At least, that's how it might be at first. If it remains a nice place to be for 5 years, I'd call it a decent run. 10, and I'd be ecstatic.

Why not just put up a web page or forum without the features you hate? Build a search engine for non-JS pages or whatever. Why do you need gopher?

Someone already did:


You need the user agent and transport layer to enforce privacy protections.

The modern web cannot do that.

(I’m not convinced gopher can either — I think NNTP was actually better at that — but like the thought experiment.)

Usenet is still going strong! It's weird the looks I get from "techies" when I tell them I still use irc, usenet, and rss... and it's shameful.

Ironically, on HN you got the local equivalent in the form of downvotes.

The downvote/bury mechanic makes HN just as poor as SN or Reddit. If one must implement downvotes, at least provide a tally, such as on Ars, rather than the default silly text fading (which one should disable).

Precisely. This is leveraging the “network effect” that has thus far mostly been a tool of corporate profit for mass benefit.

I yearn for a future where knowledge is distributed in a practical, organized way such as via Gopher or similar, along with tapping the full potential of email as a one-to-many, many-to-many, and many-to-one communications medium where data is distributed and address ownership is maintained.

>I yearn for a future where knowledge is distributed in a practical, organized way such as via Gopher or similar, along with tapping the full potential of email as a one-to-many, many-to-many, and many-to-one communications medium where data is distributed and address ownership is maintained.

We already have that, it's called the World Wide Web.

You seem to be under the impression that the popularity of the web is due to centralization by and commercialization by corporate interests, but that isn't the case. It's still entirely feasible to distribute knowledge in a "practical, organized way" using HTML and HTTP, and people do use it for things besides the three social media sites people now mistakenly believe comprise the entire web.

I started using the Internet at a time when HTTP traffic already exceeded Gopher+FTP traffic, but not by orders of magnitude.

I'd start nearly any session with Gopher, and it would end in either some web pages or an FTP server. Gopher was the go-to because it was organised unlike the rat's nest of links you had to deal with on the web.

This wasn't really fixed until the advent of big centralised search engines (and even then, the early ones weren't worth a damn).

Even the internet as a whole isn't the problem. Corporations can share your data just as easily if it's from a paper form that you gave to them. It's not going to be long until cameras, drones, microphones everywhere are spying on everybody.

It'd be nice to have a quieter place like old gopher, but yeah the real increase in privacy it would bring is somewhat illusory.

Say we've built the system of economic incentives for privacy (opt-in), and managed to convince 10% of users and 50% of site owners to convert. Extremely hard.

What do users get in the end? Half of the web is still tracking them. And many of the big guys still track them.

Not enough if you ask me. That's what makes it so difficult.

So let's solve that: let's build a search engine that lets me filter sites according to privacy. Ah, and it has to be perceived as being as good as Google — because in today's world, in many jobs, you cannot give up an information advantage.

That's kind of an impossible mission.

DuckDuckGo should implement that, or at least put a privacy indicator.

They do. Their browser plugin grades sites both on the number of tracking scripts and the quality of their privacy policy.


> HTML ain't the problem;

Gopher is an alternative to HTTP, not HTML. HTML can be used with gopher, and since documents served via gopher can be accessed by URLs, its hypermedia features are usable over gopher. (But gopher is read-only, unlike HTTP, and HTML's built-in form behavior presupposes HTTP and its verbs even though it uses URLs — so even with JS making the gopher call, a form would only get you the equivalent of GET forms.)

In principle, at least; other than Lynx and an extension for Firefox, I don't think any current browser has support for Gopher anyway.

> If you somehow managed to pull enough users to Gopher, they'd just write Gopher Chrome and start adding new features that conveniently allow tracking into it

You can do IP based tracking on any TCP/IP protocol, and if the clients have JS (or other scripting, but if you make it an alternate channel for HTML, JS is the obvious choice) support and aren't a monoculture, and have differences detectable with JS, you can do client fingerprinting on top of that. Yes, even with the gopher protocol alone, as long as your server can treat certain crafted requests specially.
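To make the "crafted requests" point concrete, here is a hypothetical sketch (Python; the hostnames and menu items are invented) of a gopher server that tracks visitors with nothing but selectors — each menu it serves embeds a per-visit token, so every follow-up request identifies the session without cookies or JS:

```python
import secrets

SESSIONS = {}  # token -> list of (ip, path) requests seen for that visitor

def serve_menu(selector, remote_ip):
    """Return a gopher menu whose item selectors embed a session token.

    Menu lines are tab-separated: type char + display, selector, host,
    port. No cookies or JS needed — the token rides along in every
    selector the client follows.
    """
    token, _, path = selector.partition("/")
    if token not in SESSIONS:
        token = secrets.token_hex(8)  # fresh visitor, mint a token
        SESSIONS[token] = []
    SESSIONS[token].append((remote_ip, path))
    lines = [
        "0About this server\t%s/about\texample.org\t70" % token,
        "1More documents\t%s/docs\texample.org\t70" % token,
    ]
    return "\r\n".join(lines) + "\r\n.\r\n"
```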

> The problem is economic, and the solution must be too.

Alright, let's talk to our representatives and ask them to consider taxing tracking and data collection.

That's totally true.

But then again, the harm of advertising and surveillance capitalism is a thing. So is the focus on data hoarding, vendor lock-in, favoring prettiness over utility.

I really wish we could run a parallel web. One optimized for utility, where data and content is available in maximally useful form, where users are in control of their rendering, and are free to use whatever automation they want. Not a replacement web, just one for people who are willing to jump through some hoops in order to avoid the crap that's on the mainstream one.

I don't know much about Gopher yet (I'm starting to learn now), but maybe such a parallel web could be developed there?

Gopher might be amenable to that. It's a much more limited protocol, and AFAIK doesn't allow inline images or any server-defined styling.

Well, let's be explicit here. The wire protocol doesn't care (you can serve HTML documents from a gopher server with inline content), but gopher menus indeed don't allow those sorts of features.

Freenet is kind of that parallel web - decentralized, very hard to surveil, mostly made of simple HTML documents and pictures.

And nobody uses it.

How is Freenet different from Tor?

Tor follows the traditional client/server model, it just uses a few connection layers to attempt to make it harder for the server (or any listener, of course) to identify the origin of the connection. That means someone must be running that server, which means it's possible to shut it down and punish the operators, and those operators can in turn identify the clients by any means except those directly prevented by the layering model.

Freenet is a distributed content-addressed store, much like IPFS, except that rather than you directly fetching content from other users who have specifically "pinned" that content (outing you both as interested in the material), the request hops over multiple users, leaving it cached (in an encrypted form, for plausible deniability) over their machines, so that it's very hard to tell who actually requested it.

Using technology to kickstart the discussion about the economic solution is not a bad idea. It would be nice to have a neutral place to discuss technology, and that's slowly ceasing to be a thing.

Indeed[1] You always have a choice!

[1] https://artlists.org/privacy-policy/

just like RSS. text feeds weren't the problem - advertisers declaring it "stealing content" and trying to force advertisements everywhere are the problem.

usenet was also great until spam went nuts.

Pay people to use Gopher? Maybe state funded.

> The problem is economic, and the solution must be too.

FB for example is US-based company. Last time I checked you are not forced to accept outside money, get VC rounds or go IPO.

Mark chose to go the "American path," that is, capitalism to the maximum, so of course I will lose an argument over why he is trying to maximize profits. But nothing stopped him from building sponsorship agreements with e.g. Fortune 500 corps, instead of building a bidding platform à la Google.

I'm pretty sure if you signed up the Fortune 500 and had, for example, 500 rotating banners, it would give you enough funds to run operations and pay every employee a $150,000 salary. Plus having exactly ZERO tracking cookies and ZERO malicious JS scripts following you. It's quite possible given FB's size and reach, but again, "this is America, this is business."

How about reviving the “blogosphere” instead? Does it even need reviving? Most of the personal or tech blogs I visit do not have heavy ads or tracking on them, still offer full RSS articles and so on. People who care still have a lot of nice web sites to go to.

Maybe what we need is a search engine that penalises JS and tracker use.

The web of the 90s is alive on tor. On tor the idea of running third party executable code in the age of spectre is (properly) seen as absurd. We just need to bring back webrings and we'll be set.

Since it's on tor there's no need for evil centralization for DoS protection since it's baked into the protocol. Additionally your onion vanity name you brute forced cannot simply be taken away from you if there's political or social pressure on your registrar or above.

No, we don't need gopher. We need people to stop running third party code like it's some normal thing. We need devs to stop making websites that don't render unless you run their code.

It's really not that hard to run a hidden service. No harder than running a webserver. And everyone's home connections are fast enough now.

Color me intrigued. How do I get started? (I have the Tor browser but can never find anything worthwhile, reddit just talks about illegal stuff)

> We just need to bring back webrings


FYI, it's quite common in the gopher space to host behind a hidden service.

I'm not even sure it's about search engines. Blogs got much of their viewership via feeds. However, RSS/Atom feeds lost the mindshare war with social media feeds. And that's not despite tracking, but in big part because of it - the more the feed knows about you, the higher chance it has at showing you something you'll like. Breaking out of this is more of a human nature than even an economical thing.

RSS is not really a discovery medium, unless you subscribe to link lists though. Facebook and Twitter do have the advantage that they surface 'interesting' stuff that you previously did not know about, at least in theory.

This is why I think we should have a search engine that searches only in the 'cool' (read: old-style) web.

RSS could be a discovery medium if combined with a discovery platform (social media). This is why I think Mastodon's (or any other federated decentralized social network) priority should be allowing aggregating external feeds.

Fewer people read blogs. Why should more people write them?

I have no proof, but I have a hunch that although fewer people read blogs in comparison with Facebook and Twitter relatively speaking, more people read blogs now than in the late 90’s and the 00’s in absolute numbers.

I don’t think that social media has “stolen” many blog readers, rather the number of people on the web increased by helluvalot.

That's the thing: Twitter and Facebook are nothing but least-effort blogs. Twitter in particular was touted as a "micro-blogging" platform.

For yourself. A web log is a diary, an external memory, a memoir.

The fact that you can share it to help others as well makes it a blog.

That is not the case more often than it is.

Why not just serve static text over HTTP? At least then you'd have the ability to inline images. This--the use of Javascript and other technology for tracking purposes--isn't a problem for Gopher to solve. It's a problem for web content creators.

Correct. Here are some examples for content creators of what a website can look like without all the bloat:





Hadn't seen the last two. There's a slippery slope effect going on.

Really, I feel like it's showing 0.01% of the process that got us to where we are today.

I have no idea how I could ever repay you for those precious links -- you made my night. ( Or how I haven't ever encountered those until now. )

I was loving these until they said they were satire. Is it taboo to actually be serious about minimal design?

satire != comedy

Instead of fixing the protocol, maybe the solution is to come up with a modern browser that doesn't support javascript at all. If there's widespread adoption of such a browser, then that would change the trend.

I think the USP could have to be something about "reading friendly" or "consistent reading experience" etc.

It's already possible to block javascript in modern browsers, why do we need browsers that can't support javascript in any case?

Javascript is just a general purpose programming language. It has uses besides tracking and advertising.

> a modern browser that doesn't support javascript

Modern-ish: https://www.dillo.org/

(it's very far from a lot of CSS features, but it works)

Well, Gopher is one solution to the problem -- a technical one. Getting web content creators to knock it off is another solution -- a social one.

The problem with that is that it requires restraint, and I just don't see much of that in web content creators. In an environment where everyone is competing for attention and clicks and tracking, how do you expect people to willingly do less?

The only approach I could possibly see working is picking some subset of web browser functionality, and branding it as the Next Cool Thing, somehow. Maybe some way to badge or advertise pages, like "JS-free", or "KB-fast" (all content < 1.0MB).

You can still do some pretty amazing designs in a million bytes of HTML+CSS.

something like 'works without JS'. The only things I'm now using JS for on my site:

- dark/light auto or manual theme switching

- syntax highlighting ( https://prismjs.com/ ), because in-code version makes the text code in `pre` unreadable.

Up until maybe 10 years ago basically all forums were just HTML, CSS, a tonne of emoji images, and a light sprinkling of JavaScript to implement a rich text editor. All the heavy lifting was done server side. And it usually scaled pretty well, considering if you needed to upgrade to a bigger machine you physically had to have someone switch out hardware.

Gopher is a really fun (and constrained) protocol. I’ve experimented a bit with interactive gopher servers in the past.

A cool thing is that you can build a server in an afternoon starting with nothing more than your favorite programming language, some TCP server docs, and the wikipedia page.
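The afternoon claim is believable: per RFC 1436, the client sends one selector line and the server writes a document and closes the connection. A hedged sketch of the core loop (Python; the port, the documents, and the non-RFC 'i' info-line type are illustrative):

```python
import socket

# Selector -> document. Real servers map selectors onto a directory tree.
DOCS = {
    "": "iWelcome to a tiny gopher hole\tfake\texample.org\t0\r\n"
        "0About\tabout\tlocalhost\t7070\r\n.\r\n",
    "about": "Built in an afternoon, as promised.\r\n.\r\n",
}

def handle(conn):
    """One request per connection: read the selector line, write the
    document (or an error item), close. That's the whole protocol."""
    data = b""
    while not data.endswith(b"\n") and len(data) < 512:
        chunk = conn.recv(1)
        if not chunk:
            break
        data += chunk
    selector = data.decode("ascii", "replace").strip("\r\n")
    reply = DOCS.get(selector, "3Not found\tfake\texample.org\t0\r\n.\r\n")
    conn.sendall(reply.encode("ascii"))
    conn.close()

def serve(host="127.0.0.1", port=7070):
    # Python 3.8+: create_server binds and listens in one call.
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            handle(conn)
```

Point `lynx gopher://localhost:7070/` at it and you have a gopher hole.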

I’d love to see people build some gopher sites to do stupid and crazy things. Interactive fiction over gopher? Sure! SQL to gopher gateway with ascii viz? Awesome!

Everyone should have a gopher hole... probably firewalled off of any production networks.

I always like to refer to Ian Hickson's Requirements for Replacing the Web [0] when this topic comes up. They seem to encapsulate well the social, technological and economic dynamics at play when discussing replacing the Web. However, few attempts (Crockford's Seif Project [1][2], MS's Project Atlantis and Project Gazelle [3][4][5]) seem to have heeded this wisdom.

[0] https://webcache.googleusercontent.com/search?q=cache:8zGGJQ...

[1] http://seif.place/

[2] https://youtu.be/1uflg7LDmzI

[3] https://mickens.seas.harvard.edu/publications/atlantis-robus...

[4] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[5] https://www.microsoft.com/en-us/research/blog/browser-not-br...

> I always like to refer to Ian Hickson's Requirements for Replacing the Web [0] when this topic comes up.

It's not that hard. Just iterate on any old idea that's even slightly more appealing to hack on than a full-blown browser. That includes... let's see... nearly anything!

Then just be smart and dedicated about specifying the behavior of the new thing and figuring out workarounds for the awful parts.

Ian Hickson did it[0].

[0] https://webcache.googleusercontent.com/search?q=cache:8zGGJQ...

I've half seriously evangelized a few times here for a .text TLD.

It wouldn't solve everything, but would make a nice playground that might be taken interesting places.

The article discusses reviving gopher, but doesn't mention how to access it (sure, I could invest a bit of time and effort googling how to do that, but that seems beside the point for an article evangelising its revival).

Yes, I was a little disappointed by that too. It even has a "gopher://..." link at the end and when I click on it I can't even open it. Just tell me how I can open the one example you provide man!

There was a time when gopher: scheme URLs would just work, because GOPHER support was built in to popular WWW browsers. Netscape Navigator and Internet Explorer both had it, for example.

It's not that gopher: is some novelty that no-one has ever adopted. It's that a WWW browser nowadays lacks quite a lot of things that used to be commonly built-in to WWW browsers. gopher: scheme support has gone completely, as has news: support. ftp: support has been reimplemented several times, and is significantly poorer now than it used to be.

* http://jdebp.eu./FGA/web-browser-ftp-hall-of-shame.html

I use lynx to view gopher content, although I did write a gopher client in ruby with ncurses at one point for fun.

At the risk of shameless self-promotion, there are Firefox add-ons and at least several mobile clients. Disclaimer: I wrote a number of the ones on this page.


(Yes, it's accessible over Gopher too, just to be difficult)

This thread points to a few Gopher clients. I like:


It's cross-platform: I use the statically linked Linux binary. Loads very quickly, like Dillo. Author commented on HN once on something unrelated and I stumbled across it on his site. Doesn't do images (yet) and I just discovered that it doesn't let me highlight text (bummer) but overall... nice client to have.

Hopefully the author (runtimeterror) continues to work on 'Little Gopher'.

This is what OP's linked gopher page looks like:


The article's author: true to his word. Still keeping his gopher page current - with the latest post updated 11jan2019.

Can the visual styles be set by the page, or is it client-side?

As mentioned, Lynx still supports the protocol. It's an easy protocol to implement [1], such that you could probably write one in shell using little more than `nc`, `less` and possibly some `awk` (or similar) to format the index pages (it's mostly text anyway).

[1] I wrote about the technical differences between http and gopher http://boston.conman.org/2019/01/12.2
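In the same spirit, the client side fits in a few lines: send the selector plus CRLF, read until the server closes, and split menu lines on tabs (field order per RFC 1436). A sketch in Python rather than shell; the Floodgap host below is a real public gopher server, used here only as an example:

```python
import socket

def gopher_fetch(host, selector="", port=70):
    """The whole request: send 'selector\\r\\n', then read until the
    server closes the connection."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            chunk = s.recv(4096)
            if not chunk:
                return b"".join(chunks)
            chunks.append(chunk)

def parse_menu_line(line):
    """Menu lines are tab-separated: type char + display string,
    selector, host, port. Returns (type, display, selector, host, port)."""
    display, selector, host, port = line.split("\t")[:4]
    return display[0], display[1:], selector, host, int(port)

# e.g. gopher_fetch("gopher.floodgap.com") returns the root menu
```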

nor does it try to explain what it actually is and why someone would need to care, let alone actually use it.

I was toying with an idea a while back of making sites just for non-visual browsers. The idea was basically just a piece of CSS blocking the display of content and letting users know: "this is a web 0.5 website. This site is best viewed in a terminal". The enforced rules were kind of a gentleman's (gentleperson's) code of no CSS, no JS.

The conclusions I got were that the thing had crazy fast loading (it's even weird when you can no longer distinguish local from server), that it would actually be quite an enjoyable coding experience since it's suddenly just 50% of the work, and that the rendering of web pages in terminal browsers is actually really nice.
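The CSS trick described could be as simple as this (a sketch, not the original code):

```css
/* Hypothetical "web 0.5" stylesheet: graphical browsers render only the
   notice; terminal browsers ignore CSS and show the content as-is. */
body > * { display: none; }
body::before {
  display: block;
  content: "This is a web 0.5 website. This site is best viewed in a terminal.";
}
```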

It's called txt files.

Good example: http://textfiles.com/magazines/LOD/lod-1

GP is talking about text that is hidden in a normal web browser and only visible in something that ignores CSS.

Like putting the entire website in the noscript tag?

> Gopher is not HTML

Gopher can easily serve HTML content (and any other content type, too)

I made a Gopher HackerNews proxy a few years ago, you can see it in action by running

    lynx gopher://hn.irth.pl
and check out the source at https://github.com/irth/gophernews

There's also gopher://hngopher.com/

Oh, this one's pretty cool. Makes me wanna do a similar thing for another website that's as nice looking.

>Gopher is a much feature-less protocol than html

HTML is not a protocol, that's HTTP.

We need a new mode for Firefox: an extremely restricted form of HTML5 without JavaScript. Call it html0.

<doctype html0>

No JS, no third-party content; only html5+, css3+, text, images, videos, audio and other stuff.

It seems possible to achieve this today with Content-Security-Policy and Feature-Policy[1].

However, even without the help of those headers, one could also have some discipline (perhaps also respect for users) and refrain from putting tracking and other undesirable things onto their website.

This doesn't seem to be a technical problem, so a technical solution - especially an opt-in one - probably won't help.

[1] https://scotthelme.co.uk/a-new-security-header-feature-polic...
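As a sketch, the headers might look like this (the header names are real; the exact directive set is just an example and would depend on the site):

```http
Content-Security-Policy: default-src 'self'; script-src 'none'; object-src 'none'; frame-src 'none'
Feature-Policy: geolocation 'none'; camera 'none'; microphone 'none'
```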

If you have the ability to set the doctype of a page, don't you also already have the ability to not load thirdparty content?

The point is to allow the User Agent to display an icon meaning "This page says it doesn't need third-party media, so it will be prevented from loading any".

That seems like a silly reason to have a restricted subset of HTML. Just show that icon on existing HTML pages which don't include third party content.

Like reading mode (F9) on certain websites? Doesn't show ads, so why should big business support it?


It's time to develop a new independent web-zero with no sugar. Use a mode for Firefox and punish the cruft and bloat.

You can do this today with CSPs, but page authors aren't interested.

How about not developing sites that break when JS is turned off? Why has it become a standard to make websites completely in JavaScript when it brings nothing positive to the table whatsoever? Who came up with this idiocy?

As someone who grew up with a 1200 baud modem and never used gopher: why would I start using gopher? What even is it? Can I use it to host webpages? It sounds like if "tracking is impossible" it probably can't use html+javascript? Why would I want to use that?

There is no good reason to use gopher other than for nostalgic reasons.

Gopher menus are quick to parse and interact with even on very constrained systems, and the protocol is very simple. Given some of the other responses in this thread, I don't think those things are simply "nostalgia."

I think you could argue that gopher has few practical uses, and while (as a Gopher user) I don't personally agree, I think the position is defensible depending on what your use cases are. But Gopher is a good example of how a minimal protocol can still offer services of some reasonable basic functionality, and I think that's worth something more than reminiscence.

I love the Gopher protocol but there is no practical difference between a gopher service and a web server serving directory listings and files.

You're excluding gopher menus themselves, which can be customized and act as miniature documents, and search servers, which can take queries and offer basic interactivity.

A gopher menu does not have to be a directory listing.

I did a hackday project where I wrote a script that converted our intranet at work into a gopher site. Before I started, I was really enthused about it, but once I started, it just became evident how much of a kludge these early protocols were.

It's the COBOL of page description languages. It's truly horrible; it's not like HTML was just a minor improvement, it's a complete conceptual shift. Gopher is just a tab-delimited file, so Excel is the best editor for it.

The first character is the type of the item: it can be a submenu (1), a text doc (0), a GIF (g), an image (I), a binary file (9), a BinHex file (4), or it can tell you the name of a mirror server so you can load balance?? (+).

How do you take form input like a street address? You can't; it's one-way data transfer.

Item type 7 is how you handle queries, and how things like Veronica were implemented, so it's not one way. It's definitely constrained, but not impossible. There was also item type 2 for CSO, though that is much less common.

Gopher+ had ASK forms which were much like HTML form controls but were, like much of Gopher+, complex to implement and not widely adopted. Some recent clients and servers support arguments over stdin like POST requests, but this too is not widely implemented.
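For reference, a server-side menu is nothing more than tab-separated lines like these (hostnames hypothetical), terminated by a line with a single dot:

```
1Subdirectory	/docs	gopher.example.org	70
0A text file	/docs/readme.txt	gopher.example.org	70
7Search this server	/search	gopher.example.org	70
.
```

For the type-7 search item, the client sends the selector, a tab, and then the user's query string as the request.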

Most of these tools like Archie and Veronica seemed like magical services when I started cruising the internet. I recently learned that Archie, when it started, was nothing more than a collection of `ls -lR`s of various FTP servers that would be searched with grep. Which, in hindsight, was the obvious Unixy way of doing it.

I'd rather have a conservative version of our current web standards: strip things back to a sensible subset of what we have now, and possibly put some kind of heavy rate-limiting or quotas on any client-side code that's run.

The web is no longer open if you need the funds and backing of a megacorp in order to implement a renderer that covers the whole standard.

Let's dial things back to before we entered the darkest timeline: the fork in the road where HTML5 happened instead of XHTML2 development being continued. A stricter, saner markup standard with less overhead to developing a rendering engine, and shitcan scripts as well.

Blast from the past. If anyone wants to download NCSA Mosaic and load the author's gopher site with it, here ya go:

https://github.com/alandipert/ncsa-mosaic (binaries in Ubuntu's Snap Store, probably in other distros too)

Find out more at: https://en.wikipedia.org/wiki/Mosaic_(web_browser)

For those looking in the comments for places to explore in gopherspace, I would recommend starting here (use lynx):

  gopher://sdf.org # large community
  gopher://floodgap.com # a venerable gopher presence
  gopher://bitreich.org # small but very active community
  gopher://gopher.black/1/moku-pona  # my phlog listing aggregator

One tangent from this consideration would be:

What would it take to make Content-Type: text/markdown a reality for web publishers?
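For what it's worth, text/markdown is already a registered media type (RFC 7763, with an optional variant parameter), so a server can emit it today; the open question is whether clients render it or just offer a download:

```http
HTTP/1.1 200 OK
Content-Type: text/markdown; charset=UTF-8; variant=CommonMark

# Hello

Just *markdown*, no HTML wrapper.
```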

What would be the point?

Markdown is a textual format intended to result in HTML anyway, and it includes the entirety of HTML already in its spec.

To start with, deciding what constitutes markdown, i.e., a spec. There are a bunch of incompatible "flavours" out there.

https://commonmark.org/ is a good start.

This hasn't hindered HTML.

There are around 281 Gopher servers active on the Internet at the moment:


Will be interesting to see whether that number shifts in the near future.

My memory is getting kind of blurry on this, but wasn't Gopher heading in a direction where someone wanted to extract licensing fees from it?

I do remember discovering how WWW had made some leaps forward and promptly abandoning my project to write a Gopher+ server and instead turning what I was working on into an HTTP server. Sadly I never bothered publishing the code since interesting things were happening with the NCSA httpd code at the time (something which eventually turned into Apache)

Back in the day, yes, UMN tried this and it probably did indeed kill the ecosystem. They backpedaled later but the damage was done.


The nature of this problem (that companies are able to track you) is not so much technological as economic. Even if gopher had been the only alternative back then, it would have evolved just as HTML/HTTP did to support ads and tracking.

All a content provider that doesn't want to serve ads and tracking has to do is not implement it. While content creators are still bound to whatever their publishing platform chooses to do (e.g. any content on Medium is subject to Medium's tracking practices), using an inferior technology is simply not a realistic solution. This is essentially a human issue, technology has little to do with it.

You want to enable ad-free, tracking-free mass publishing? Provide a free publishing platform. The catch? Someone has to pay for it.

You don't want to be tracked? Disable javascript. Some sites stopped working? Oh yeah, tracking you is how they pay the cost (nominal or economical) for serving you content.

I would see merit however, in a search engine that allowed filtering for non-javascript friendly content.

There is a public gopher proxy you can use e.g. http://gopher.floodgap.com/gopher/gw?gopher://box.matto.nl:7...

Once again, an internet user decides that in order to solve a social problem, we must move people to an ancient internet protocol for serving web pages rather than actually address the social problem by dealing with the real world entities performing the tracking.

As others are saying, HTML isn't the problem and Gopher isn't the solution: any bidirectional request-response protocol can be used to track clients, because there's a record of interactions the server can save. Client-side scripting as now commonly used on the Web increases the likelihood that some of these events occur despite the user's intent, but hosts can track and profile you by IP just fine, and if this hypothetical Gopher revival came to pass, it would also revive an interest in server-side ad serving and log mining that dynamic ads have long made obsolete.

The two solutions are to: (a) not interact with hosts who track you -- which is hard to know ahead of time -- or (b) use a one-way broadcast protocol that leaves no ability for hosts to collect an interaction stream. And this exists too, from over-the-air television and radio, to teletext [1] and datacasting [2]. Compare the business models: unencrypted broadcast streams are full of ads too, but you don't get tracked. Or, the services are encrypted and the key exchange is moved out of band; you trade a bit of your privacy to establish an ongoing customer relationship to access gated content.

Of course, broadcast on public airwaves is heavily regulated, and broadcast on unlicensed spectrum is sufficiently intertwined with and streamlined into wireless internet to be hidden in plain sight. Despite its technical merits, a broadcast 'renaissance' of sorts isn't likely to attract a discretionary audience without a real integrated commercial offering raising awareness -- amateur radio and tech demos don't have universal appeal, but a sleek device that accesses compelling first-party content in a privacy-preserving way might. But it's also a technical gamble when more proven solutions are less risky, and the kinds of players who deliver integrated offerings can deliver their service over IP with less fuss.

[1] https://en.wikipedia.org/wiki/Teletext [2] https://en.wikipedia.org/wiki/Datacasting

I don't know if I necessarily want Gopher back, but I often dream of returning to the days when "the Internet" was primarily Usenet, IRC, Telnet, and email.

Don’t forget FTP. So much FTP.

The article makes the incorrect assumption that tracking depends on HTML and/or JS/images. If we managed to revive Gopher, browser makers would soon build tracking into browsers, and publishers would simply track on the server side like Cloudflare does already (https://www.cloudflare.com/analytics/).

I remember during my undergrad days, my university (McGill) used to have its classified ads accessible via gopher. It was pretty popular and fairly easy to use. Surprisingly, there were quite a few non-technical people on there, e.g., posting apartments to sublet. This was in the days of Windows 2k/ME, so people had lower expectations for user interfaces back then.

It should at least have a small guide on how to get started: server software, client software, how to make a "page".

You could block all IP ranges for known trackers via firewall. And also disable JavaScript, cookies and media content. Or just surf the web using an old browser. Serious webmasters still make sure their web pages work in more than just Chrome.

My biggest issue with "gopher" is that I don't know how secure it is. How do I know the connection I'm using is secure and hasn't been intercepted? The current clients don't show that at all, if it's even possible. I couldn't care less about tracking when the content isn't trustworthy.

What do Gopher pages look like, are they mostly ascii or is the format weird? Why did HTML/HTTP become the standard over Gopher? It seems like Gopher could be capable of doing similar things to the web, just nobody bothered to expand on it or the standard is frozen in time.

With the risk of sounding patronizing, the wikipedia page provides answers to all your questions https://en.wikipedia.org/wiki/Gopher_(protocol)

Thank you, I appreciate the link. It's been a while since I've looked at that page, so I probably forgot about some of what is there.

I think the unchecked expansion of arguably questionable capabilities is exactly what the author's objection is. Otherwise, it just makes Gopher into an unnecessary second-rate HTTP.

(Disclaimer: I maintain gopher.floodgap.com)

When they talk about "putting content on gopher", what do they mean? Gopher is basically FTP with the ability to link to other sites. Other than text blogs or videos, what sort of content would we put on there?

If Github was still independent, a Gopher service for Github would make sense. Gopher is basically a file server, and Github is a file repository.

I was wondering if Net News / NNTP / Usenet could address some of the distributed use cases people are trying to throw at blockchain?

Blockchain wasn't really on my mind, but NNTP is something I think we should consider reviving.

Reddit and Facebook have taken over the old forums and mailing lists, but I feel that those markets would be served equally well, or better, by NNTP.

The Reddit redesign makes it clear what direction they are moving in, and I fear that it will kill off all the interesting subreddits, where people have real discussions. In its place will be an endless stream of memes, pictures and angry anti-Trump posts. All those subreddits will scatter and their users will be left without a "home".

The village I live in has a Facebook group. It's a closed group, so there's no browsing without a Facebook account. I'm relying on my wife to inform me if anything interesting is posted. It's sad, because it's pretty much the only source you can turn to if it smells like the entire village is burning or the local power plant is making a funny sound. All the stuff that's too small for even local news, or is happening right now.

Usenet would, in my mind, be a great place to host the communities currently on Facebook and Reddit. They would be safe from corporate control, or shifts in focus from their "hosting partner", and everyone would have equal and open access. Spam might be the unsolved problem, but I feel like that is something we can manage.

I know that a Usenet comeback, with all the hopes and dreams I have for it, isn't coming. People don't like NNTP, they like Facebook.

Anyone know the easiest way to get a server up and running? I google “open source nntp “ with not great results.

I wouldn't really classify it as a "good" NNTP server (nor would I recommend it, as it's probably not fully compliant), but nntpchan is one possible route if you don't mind the imageboard aftertaste. It's not meant for mainline Usenet. I made it out of frustration with INN's feed syntax, and because another daemon I used at the time, written in Python, was abandoned (it was an abomination, but it worked).


Depending on your usecase, Leafnode might be reasonable.

Personally, when I needed NNTP I sprung straight for INN, but be prepared to educate yourself.

http://leafnode.sourceforge.net/ https://www.eyrie.org/~eagle/software/inn/

Disable JS and this problem fixes itself.

WTF is gopher space? Asking for a friend.

How am I going to get my grandma to use gopher? Solve that, then you'll have my support.

Pleroma, the alternative ActivityPub server to Mastodon, has a built in Gopher service. :)

Ironically, I cannot go to the link he provides. gopher: links are not supported by Safari. :D


> If you build it, they will come.

People already built it. And not even talking about old gopher. Adblockers are that now.

People who are technical enough see the benefit and swear by it. We just need to make it easier to use. Maybe an adblocker add-on with live support and constant monitoring (and tweaking of the rules) is a product that you can sell by the millions?

Canvas fingerprinting? Gone. Third-party cookies? Gone. Auto-play media? Gone. Etc. Everyone says that privacy is the most expensive luxury nowadays. Maybe we need to commoditize it?

Ad blocker with constant monitoring looks like a fun absurdist art project.

Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.

Bold of the author to openly admit this.

How is that bold? It's common knowledge.

It's bold of the author to admit that they are carefully tracking and monitoring everyone using this site to create and enhance profiles on them, particularly given the context of the article.

This particular site has no external trackers or analytics scripts.

But the claim made in the article is that "every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored."

Obviously, if this is true, then it must be true for that site as well. Otherwise that assertion is just FUD and hyperbole, and it undermines the credibility of the argument being made about the scale of the evil of the modern web, and the necessity of a simpler, non-HTML based protocol to avoid those evils.

All traffic is monitored by the countless agencies around the world, the datacenters, the ISPs, spyware, malicious browser extensions, companies "helping" you by making a backup of your bookmarks and history, and so on, so it is definitely true for this site as well.

In that sense, it would be true for gopher as well.

But the article is clearly describing javascript, analytics and tracking within HTML, with the solution being Gopher's "featurelessness." But it's possible to build an HTML page without analytics and tracking, or even with non-malicious javascript, so the premise that the only way to escape that is to leave the web entirely for simpler and more restrictive pastures is untrue.

Not that the point needs to be belabored but it's worth pointing out that the article opens with a patent falsehood.

>Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.

* if you use traditional web browsers.

I've been moving more and more of my browsing over to Tor.

Both Reddit and HN can be browsed, though the former requires JS to fully function. (A persistent problem across the web)

I can't do all my browsing on Tor, but I can do a substantial chunk. Conversely, I can maintain "clean" profiles tied to my real name that seem to simply check email, read the news a bit, and check the weather.


I tried using QubesOS for my primary (desktop-ish) laptop: i7, 0.5TB SSD, 32GB RAM for $500 on eBay. And I would have kept using it if it hadn't been for research into SDRs.

I needed the performance from USB for SDRs, and the way Qubes does it sends all USB data to a USBvm to prevent against all sorts of bad USB attacks (badUSB, rubber duckys, usb-gsm gateways, etc).

But if I wasn't doing SDR work, you can bet on it that I'd be using Qubes.

I've been considering it. My current machine is slow enough that just running Qubes (which requires two VMs - one gateway, one workstation) chokes, so I stick to TBB.

It might be more economical to buy an old chromebook or something to repurpose for TAILS rather than buy a really souped up laptop for Qubes.

Then again, I'm not doing anything evil, so my threat model is a little looser. (I don't think they're out to get me specifically, but they will happily siphon up whatever they can get.)

A big problem I have with QubesOS, which is a good idea, isn't that it sacrifices performance in the name of security by default, but that it has no escape hatch for when you actually really need stuff to work... and its absurdly high resource requirements. It is fundamentally not a usable OS for any but a few niche use cases, in my opinion.

Will this protect you from JS fingerprinting? Will it protect you from software fingerprinting? (Device IDs and serial numbers, etc.)

What do you mean by "JS fingerprinting"? If you use a Whonix VM on Tor, there's really two parts - a Gateway VM and a workstation. So even if JS "punches through" the workstation it literally cannot transmit or access your external IP.

If you do things like resize your window or install nonstandard applications, that could make you a unique Whonix user, but not reveal your true IP.
