Hacker News
We Need Software Updates Forever (ieee.org)
143 points by pabs3 on Sept 23, 2021 | 216 comments



I've become increasingly convinced that this "update culture" is really about encouraging rather than preventing forced obsolescence. If they would just make it work right the first time, the way software used to be developed, maybe we wouldn't have this continual cycle of fixing and breaking things with every update, but then they wouldn't be able to sell you anything new. Instead, by conditioning the population to accept and even demand continual updates, companies maintain more control over their products and can change them however they want, which of course can include making them less functional and more user-hostile.

It's the same reason why anything "smart" or cloud-dependent is best avoided. For example, all my white goods are several decades old and most of them don't even have any electronics.

No, we don't "need software updates forever", despite how much the corporatocracy tries its best to persuade you onto the treadmill.


I know of almost no software that's ever finished. This is not some conspiracy. It's just a fact of life. Examples: image editors need to load new formats. Old computers didn't have cameras; now they do, and much software would suck without support for them. New input devices come out (mice, touch screens), and software needs an update to support them. Very old computers didn't have networking. Once networking exists, plenty of software will be effectively obsolete if not updated to take advantage. Old software was ASCII only; then the net connected us to the world, and now all that old software needs to support full Unicode. Old software didn't support useful rich copy and paste; new use cases appear (copy and paste text, images, audio clips, video, URL links, rich text, structured graphics), and old software that can't handle them is obsolete. Old software didn't handle multiple resolutions. Try running some non-HiDPI-aware Windows software on a HiDPI screen: good luck reading the text. Old software thought it could own the machine; OSes realized they needed to put the user in control, and now the software needs to be updated to ask for permission instead of just crashing when it fails to access something.

Most software has to be updated, period.


>> I know of almost no software that's ever finished.

All appliances, including TVs, until recently. The tens to hundreds of modules on cars, until recently.

I knew someone who worked on over-the-air updates at <big car company> who said the software teams kept wanting to push updates after the start of production. I said "no, updates are so you can avoid the cost of recalls, not so they can finish late" and we had complete agreement. Updates are potentially good in allowing critical fixes, but they are more commonly wanted by developers who can't wrap up a solid release on schedule.

To be clear, suppliers to the auto companies have HARD deadlines - your customer has hundreds of suppliers delivering components for a particular model year SOP and they aren't going to disrupt everyone else because your software team is late. This leads to incremental (manageable) improvements rather than big new developments (which do happen but not tied to a particular program). Abusing OTA updates to allow more slop in this is probably going to lead to some issues.


> Examples: image editors need to load new formats

Alternatively, they could be designed to support plugins.


Someone still needs to write those, and you'd need to be a psychic to succeed in making an immutable API that can handle all possible future requirements.

E.g., up until a while back, 32-bit ARGB as exchange format would've been sufficient, but is now insufficient to handle 10 bit channels and/or stuff like HDR. What if we get a standard for 3D images? It gets worse if you want to handle animated images as well, whatever exchange format you specify will break once a decade at least. And god forbid someone tries DRM for images again…


Amiga did it. We can do it again. Besides, such an API is a slow-moving target, so still better than status quo.


Are we thinking about the same thing? Amiga did not have a perfectly future-proof API. Such an invention would be the holy grail of API design.


Some future proofing is better than none, right?


The status quo is the continued existence of IrfanView, which is probably as close as it gets to the ideal. Hasn't changed much since the 1990s, and looks like it, but supports about everything via plugins.


IrfanView is updated constantly though.


Yes, but the idea is that even if the original author dies, people can still keep using the last release of the application, but independently write new plugins to interpret new image formats that come out in the future.

Winamp has a similar property. If in 2030 FLAC is replaced by some fancy neural-network based lossless encoder, it can still read the files and play them without touching the core of the application, because all that's needed is a single plugin.


Well IrfanView is open source, so if the original author dies I would expect someone else to pick it up.

Your description might help with one specific area (compression algorithms) but there are numerous other areas where you can't just use a plugin.


By the point your text-processing utility that was ASCII-capable gets a plugin to support UTF-8 or UTF-16, you've made so much of the core modular that it no longer serves the benefit you proposed plugins for. Replacing core plugins of a program that provide its actual usefulness is no better than just upgrading the program.
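
A minimal sketch (Python, purely for illustration) of how an ASCII-era assumption breaks: code that treats text as raw bytes works right up until a multi-byte UTF-8 character shows up.

```python
# An ASCII-era habit: process text as raw bytes, e.g. reverse it byte-wise.
def reverse_bytes(data: bytes) -> bytes:
    return data[::-1]

# Fine for pure ASCII, where one byte is one character:
print(reverse_bytes("hello".encode("utf-8")).decode("utf-8"))  # olleh

# 'é' is two bytes in UTF-8; reversing the bytes splits the pair apart,
# leaving a stray continuation byte that no longer decodes.
try:
    reverse_bytes("héllo".encode("utf-8")).decode("utf-8")
except UnicodeDecodeError as exc:
    print("byte-wise reversal corrupted the text:", exc)
```

Fixing this means changing the core text-handling logic, not bolting on a plugin.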

Modularity can be useful, but there's always a core, and sometimes the core needs a change.

And that's before we even get into security issues. Sometimes entirely new classes of security problems are discovered, and if they weren't known at the time of writing, how were the authors supposed to guard against them? How much can we fault browsers for not guarding against spectre/meltdown when they were first written and released?


This works until something about the image violates your expectations. To use the image example, say you have an image editor built on the common assumption that a pixel is a 3-byte item, with 256 possible values per channel. Lots of stuff worked this way; it's sort of a base assumption around which you have to build all your other stuff. Violating it requires you to change sizes everywhere, which may mean the performance of your fancy vectorized algorithm plummets somewhere.

Anyway, lots of stuff follows this 8-bit-per-channel API. Until you use RAW images or any number of higher color depths. Then you need 12 or 16 or 24 or 32 (or more) bits per channel.
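
The baked-in assumption can be sketched in a few lines (Python, illustrative only): a routine that clamps against a hard-coded 8-bit ceiling silently destroys higher-bit-depth data.

```python
# A core routine written around the "channels are 0-255" assumption.
def brighten_8bit(pixels, amount):
    # The 8-bit ceiling is hard-coded into the math.
    return [min(p + amount, 255) for p in pixels]

# Correct for classic 8-bit data:
print(brighten_8bit([10, 200, 250], 20))   # [30, 220, 255]

# Feed it 16-bit-per-channel data and everything clips to white:
print(brighten_8bit([1000, 40000], 20))    # [255, 255]
```

No plugin can rescue this; the constant (and usually the data types around it) has to change throughout the core.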


That's an update too.


If it was the case that you needed your image processing software to handle new formats, or your OS to handle a camera, you would go to the supplier and buy an upgrade.

If that case were at all common, then upgrades would be hugely lucrative, and software distributors wouldn't be forcing people to move to a different model.


> I know of almost no software that's ever finished.

I know of quite a lot of software that is "finished" as far as I'm concerned. Developers tend not to think it is, though.


And not just software, _everything_ is updated quite regularly because of technological advances and changing preferences: cars, clothes, beverages, laws, ...


Tools generally don't. I know they've added some antivibration stuff to hammers, but an old hammer still works.

As far as socket sets and crow bars go I can't think of a new feature that happened in my life time.

For screwdrivers, the biggest change is that companies keep inventing new heads for security-by-obscurity reasons, not better performance.


For screwdrivers that may be the case now, or it may not be. Screws became popularized in the 1700s, but it wasn't until the early 1900s that we see evidence of people trying different heads (the square head), and it wasn't until the Phillips head in the 1930s that we adopted a standard. Even then it was a while before it entered consumer goods; I encountered a 1940 Fibber McGee and Molly episode (which is what led me to look up the history of the screw) where Fibber went to buy a screwdriver and literally asked for "a screwdriver with a black handle", no other qualifier, and the shopkeeper didn't ask for clarification.


Screwdrivers? Who needs them when you have a perfectly good hammer, amirite?


Reminds me of a joke I made in passing by the pool this summer. All the ladies were drinking their hard seltzer cans, which have been very trendy the last couple of years in the US. All of them were using Yeti or similar insulated can holders (coozies). My joke was something about how the entire beverage industry aligned on a new narrow, tall can size for seltzers just so they'd all have to buy new drink accessories.


I agree with you that a lot of software needs open ended development, but I don't believe most of it does. I like the philosophy used in the development of the Gemini protocol: software should have a clearly defined scope, it should break not bend and it should eventually be done. I think the vast majority of software doesn't actually need continuous updates.

Look at the downsides, web browsers in particular are the biggest example of the downsides to this approach. They've become megalithic blobs of unmaintainable bloat. If they'd just stuck to delivering documents and let "web app" developers develop standalone clients we wouldn't be in the situation we are in with the web.

Some software can never be done, but most of it should be done.


If web browsers had stuck to just delivering documents, the Internet wouldn't have become what it is today. We'd end up with a bunch of AOL-like walled gardens, which benefits nobody.

Your "get off my lawn" ideal for web browsers would have never worked.


That's just ridiculous. It's not a dichotomy between news sites running megabytes of software on your machine and proprietary walled garden networks. There was a near decade long golden age where people used protocols and clients, browsers delivered documents and everyone used the same global network, and you're conveniently just ignoring that to make your invalid point.

Do you like what the web is today? I know almost nobody that does. "We wouldn't have the web we have today" isn't a selling point.


I have many issues with what the web is today, but removing all interactive features from web browsers would be a very negative change in my opinion.


Why?

Think about this scenario. You boot your machine, and it boots to a beautiful environment, with all the software you need to do the things you want to do. But most of it is useless because you have to start this other piece of software that's a blank rectangle, and in that software you must type the name of the application you want to run, and it will fetch the application over the network and render it. This blank rectangle must constantly be updated with new features in order to continue fetching software to run over the network properly. Whole teams need to be employed by non profits to keep this blank rectangle up to date. An operating system inside of an operating system. You don't think this is senseless?

Now imagine this scenario: you click a link to a video, the stream opens in VLC. You click subscribe, the link opens in thunderbird. If a company wants to have some interactive service they have to give you a client, they can't just externalize that workload on a browser maintainer. Isn't that the web we wanted?


It is ridiculous and senseless, but not a popular view here since there are a lot of HN'ers whose salary depends on that rectangle being the OS instead of the user's actual OS. What's even more ridiculous is that we also now have software that you download and "install" onto the actual OS, but the software itself comes bundled with another copy of that rectangle plus all the machinery underneath it. Installing an OS on an OS for each application. We have gone mad.

OS vendors could have prevented this madness, but chose the path of deliberate incompatibility with each other, so we're left with the world as it is rather than as it should be.


I sometimes joke that I am going to build a VM that runs a docker container inside which there is only an electron wrapper around Facebook.com as an art project.


You could've omitted most of your description and just said, "If a company wants to have some interactive service they have to give you a client."

> Isn't that the web we wanted?

No, because you just described the world we're actually in, where everything from Starbucks to the laundromat exhorts you to install their app (going so far as to nag you about it, e.g. when looking them up on a map and clicking through to their website). Despite your framing it in a positive light, we know this isn't the case.

Let me try the same reframing trick to make the Web sound like the best option:

If a company wants to offer you something, whether a physical item or informational content, then they have a way for you to fill out a form and get it. There's a way to deliver digital forms in a universally understood format, and you can use a software agent to deal with them for you and handle them in whatever way you think is best.


That's not a problem on platforms like ChromeOS where web apps are treated like standard apps, from the user point of view they're indistinguishable.

A solution on other operating systems would be to use Electron, but I've seen many people on here also cry foul about that.


Electron is an even worse aspect of what I described, I thought about adding a line about it but decided to stay on point. You create an entire system designed for web apps, then you create basically a site specific browser to render just one web app because users need a dedicated client. It's another step in the senseless direction.


What is senseless about that? It's common for apps to bundle their dependencies, and with Electron the bundle just happens to be an app platform for HTML, CSS and Javascript that we colloquially call a web browser. I'm not really a fan of Electron myself, but some of the hatred it seems to get on this website is pretty irrational and unreasonable, in my opinion.


So Electron is "great" in the sense that it is a workaround to this disaster I have described above that is the web. You have to build a web app, but you want a client application, and building both takes a lot of resources, so you just wrap your web app in Electron. It's a stopgap and it's an enabler. It is not good for the user. As much as I dislike complex web apps, I will sooner run one in a tab in my browser than download an Electron app.

Did you know that Electron apps basically bundle the entire Chromium browser into every application built with it? Have you ever tried running more than a couple of Electron apps at once?


I'm fully aware of what Electron is, you don't have to explain it. But for these teams I don't see how Electron is practically any different from any other cross-platform framework like Java. It's the same purpose. The teams using it are going for a "write once run anywhere" approach. It's not a stopgap, they're never going to spend the money on building two or more applications if they don't have to. If you think that's not good for the user, you may want to try explaining to the users that having a native app is going to cost them several times as much. That's the situations I've been in, and the cheap and easy Javascript-based solution (Electron, React Native, Flutter, etc) usually seems to win.


That's only because web apps are the default.

You build a system, the web app is supposed to be a demo of what your system can do. But since the days of Facebook, YouTube, reddit, the web app is the product itself. This is why these companies always break third party apps that connect to their API.

So you're in one of a few situations. You have to build a web app because precedent, and you don't want cost overruns. Also use case comes second to business case. And we end up where we are. It's a mess for the user, and I don't think it's sustainable, which means this problem won't exist long term.


I think they break third party apps because it's a real pain to keep maintaining a bunch of incompatible old versions of an API forever.

Also your last paragraph doesn't follow to me because the whole reason these apps exist is because they are web apps. They wouldn't exist otherwise. That's the whole reason they are sustainable now in the first place. They're the default because it's the cheapest and easiest way to deliver a product for billions of users.


How about this scenario: I vote on a comment on HN, my email client opens up with an email pre-filled by the "mailto:" link, and then I have to send it to cast my vote. What a great experience!


I hate to throw the word "strawman" around because it gets done plenty enough already, but you just made up the most ridiculous scenario you could and then attributed it to my point to refute it.

How about this: you try turning JS off and go to HN and realize it works just fine without the bloat.


I had no idea HN had a fallback like that, but it makes sense. I'll concede that point.

So you're fine with web pages that function as applications, you just want the user experience to be stuck in 1995.


I want the user experience to be stuck in 2095. I want the user experience to be better than it is now.

I'll give you an example of a web app that I like: GitHub. It is built as a web app because it makes sense as a web app, it works really well, and there's no cruft. It could be better if it were an API with user-built clients, something git is designed for anyway, but it's not bad.

I would be fine theoretically with web apps, if there were some way to ensure that actual web apps that need interactive functionality were the only ones that could use it. In the situation we are in now, every document is a web app; every piece of content you try to see online loads its own application just to render. I have a video player on my machine; I shouldn't have to download and render a video player onto a one-size-fits-all abstraction layer every time I want to play a video. I shouldn't have to download an entire rendering framework just to display a CNN article.

The right way to do the web is with APIs, client applications and protocols. If you're delivering a document, I should use a client application to render documents, which is what browsers are supposed to be. If you're building an interactive service, give your community an API to build a client, or build a client yourself. An instant messaging application should not be rendered in a document delivery program delivered over HTTP. If there's some use case that requires a standard, a standalone application that implements that standard is the way to go. Think XMPP and Pidgin. How terrible would it have been if XMPP could only be used in a web app?

These design decisions are not about the user, they're deliberately designed badly because incentives are misaligned. Can't find what the user bought on the internet and sell that info to marketing companies if you just deliver content and let the user choose what software they execute.

The future is what I'm looking at, not the past. To deliver a truly useful web you have to determine its shortcomings and rethink the architecture that led to those shortcomings. Saying "this was a bad design decision and look at where it led us" is not the same as saying "let's go back to 1995."


I don't think anyone is arguing against removal of things like HTML forms, which are arguably the only good interactivity that browsers have; and indeed, a lot of "web 2.0" sites worked very well that way, no JS required.

This includes all the web forums, blogs, and even sites like YouTube and Twitter which used to actually work without JS (and were much faster too).


"The Internet today" friggin' blows. Get off my lawn indeed, lol


agreed. your software isn’t finished when it passes final qa. it isn’t finished when you deploy it to production. it isn’t finished when it has no known bugs. it’s only finished when your last user is dead.


A lot of Unix utilities have not been updated for years.

My thoughts on finishing software: https://gavinhoward.com/2019/11/finishing-software/ .


All software needs to be maintained much like food needs to be grown every year. The environment, need, situation, and security around the software is always changing so static software rots. Some rots quickly and some rots slowly but without maintenance it all rots.


> This is not some conspiracy. It's just a fact of life.

I disagree.

> Examples: image editors need to load new formats.

An image editor doesn't need to load new formats. An image editor just needs a library to convert to a format it can edit more easily, which almost always ends up being a bitmap of some sort.

That smells like a library. And that smells like not one library for _all_ image formats but instead one library for _each_ image format.

User wants to support the new copyright-compressed-image-video? Add a library. User doesn't want support for old-copyright-expired-and-is-now-public-domain-image-video? Don't update that library.

> Old computers didn't have cameras; now they do, and much software would suck without support for them.

Nearly all software sucks with support for them! Different features on the camera, different features in the software. What's needed is for software to just have a library to process an image stream and let the OS handle the camera...

> New input devices come out (mice, touch screens), software needs an update to support them.

What's new about mice that needs a software update? What's new about touch screens that needs a software update? These things haven't fundamentally changed in fifty years.

> Old old computers didn't have networking. Once networking exists plenty of software will be effectively obsolete if not updated to take advantage

I disagree. Plenty of software shouldn't _need_ networking. That image editor? Yeah no it doesn't need networking. Your OS does though. Your OS should transparently provide any networking that you might need to edit an image on the network. Mount that network drive and your image editor should see it as if it's a local file. No update needed to the app.

> Old software was ASCII only, then the net connected us to the world and now all that old software needs to support full unicode.

Well yes but have you noticed how broken full unicode support often is? ASCII just worked.

> Old software didn't support useful rich copy and paste

And thank God for it. Rich copy and paste is a security nightmare and always ends up pasting junk and trash and vomit into something. I don't want rich copy and paste. How do I turn it off??!?!?!?!

> OSes realized they needed to put the user in control.

Hahahaha yeah... uhhh... no. Modern paid-for OSes (Windows, macOS, Android, et al) often take away more user control than they give. Dumb down the user interface so that they don't have to teach the user how things actually work and don't have to support that weird use case that the one power user used and most people wouldn't even understand. So take it away because it's just not worth the cost involved.


The problem with your suggestions is that the program would still need to be updated to support those new libraries. And new mice and touchscreens have gotten new hardware features, which require changes in the software to support them. Even something as simple as upping the resolution of the device can require an API break to widen the data type. Networking is also not something that can just be tacked on by the OS, if you want a decent experience then it needs to be built into the program.

Unicode is required if you want to have decent support for non-English text, for a really basic example you might want to try writing an ASCII document that includes multiple languages in it and see how it doesn't really work at all.

I'm not sure what you mean rich copy and paste is a security nightmare, programs that don't handle a complex data type will just ignore that type. A text editor will commonly just negotiate the type down to "text/plain" or equivalent.
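
The negotiation described above can be sketched roughly (Python, much simplified; real clipboards such as X11 selections or Win32 clipboard formats go through OS APIs, but the shape is the same): the copying program offers several representations, and the pasting program takes the richest one it understands.

```python
# The copying program offers multiple representations of the same content.
offered = {
    "text/html": "<b>bold</b>",
    "text/plain": "bold",
}

def paste(offered_types, preferences):
    """Pick the first offered type in the paster's order of preference."""
    for mime in preferences:
        if mime in offered_types:
            return mime, offered_types[mime]
    return None, None

# A rich editor prefers HTML; a plain-text editor only asks for text/plain.
print(paste(offered, ["text/html", "text/plain"]))  # ('text/html', '<b>bold</b>')
print(paste(offered, ["text/plain"]))               # ('text/plain', 'bold')
```

The plain-text program never even sees the HTML; it simply negotiates the type down.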


> the program would still need to be updated to support those new libraries

Really? You think programs can't just load all of the libraries in a given directory?
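
A directory-scanning loader is easy to sketch (Python, with a hypothetical plugin contract: each plugin file defines `decode(data)`). Note, though, that it only works because every plugin conforms to one fixed API; a plugin needing a richer interface still forces a core update.

```python
import importlib.util
import pathlib

def load_plugins(plugin_dir):
    """Load every .py file in plugin_dir that exposes decode(data)."""
    plugins = {}
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # The fixed contract: a plugin must define decode(data) -> image.
        if callable(getattr(module, "decode", None)):
            plugins[path.stem] = module.decode
    return plugins
```

Dropping a new `decode`-shaped plugin into the directory adds a format with no core change; anything outside that contract does not fit.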

> And new mice and touchscreens have gotten new hardware features

Yeah, like... more buttons and higher DPI

except they still boil down to the exact same interface. "Button X was pressed... button Y was released... moved +X+Y"

For real the biggest innovation I've seen in touchscreens is a sensor that tells how far away a stylus is or how hard it's been pressed. And that's really _not_ a big breaking change to require an update to every piece of software.

> Even something as simple as upping the resolution of the device can require an API break to widen the data type.

Yeah okay but that hasn't really happened any time recently. Or are you going to tell me that we should support a screen whose resolution is measured in _more than_ billions of pixels wide and tall? because I'm pretty sure your eye can't even _see_ that many pixels.

> Unicode is required if you want to have decent support for non-English text

Yes okay that's nice. That doesn't change the fact that unicode is broken in most apps.

> programs that don't handle a complex data type will just ignore that type

Programs that do handle that type are exactly what I want to stop. I absolutely never ever want to copy HTML from somewhere but there's often no way to disable it. Formatted text? Trash. Send me a file. Picture? Trash. Send me a file.


>Really? You think programs can't just load all of the libraries in a given directory?

They can't, a library implements a specific API.

>except they still boil down to the exact same interface. "Button X was pressed... button Y was released... moved +X+Y"

Well no, with multitouch the interface has changed from that to a series of events, and you have to handle gestures too. Some apps still don't support these. This isn't just for mobile devices, laptops all have these touchpads now too.

>Or are you going to tell me that we should support a screen whose resolution is measured in _more than_ billions of pixels wide and tall?

This would be for pointing devices, so yeah, with a high DPI mouse then you might want to make it 32-bit or 64-bit. This wouldn't be for billions of pixels but to subdivide into many fractions of a pixel. Example: the original Win32 API only uses 16 bits for mouse coordinates, so a new API is needed and the apps need to be changed if they want to take full advantage of that mouse.
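
The packing can be illustrated in a few lines (Python used only for the arithmetic; the real API is C). Win32 delivers mouse coordinates as two signed 16-bit fields packed into one 32-bit lParam, so any coordinate outside the 16-bit range silently wraps:

```python
# WM_MOUSEMOVE packs coordinates as: low word = x, high word = y.
def make_lparam(x, y):
    return ((y & 0xFFFF) << 16) | (x & 0xFFFF)

def get_xy(lparam):
    # Mimics GET_X_LPARAM / GET_Y_LPARAM: extract 16 bits and sign-extend.
    def signed16(v):
        return v - 0x10000 if v >= 0x8000 else v
    return signed16(lparam & 0xFFFF), signed16((lparam >> 16) & 0xFFFF)

print(get_xy(make_lparam(5120, 1600)))  # (5120, 1600) -- fits in 16 bits
print(get_xy(make_lparam(70000, 0)))    # (4464, 0) -- 70000 wraps mod 65536
```

Widening those fields means a new message or API, and apps must be updated to use it.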

>That doesn't change the fact that unicode is broken in most apps.

Please report those bugs to the developer, that should be fixed.

>Programs that do handle that type are exactly what I want to stop. I absolutely never ever want to copy HTML from somewhere but there's often no way to disable it. Formatted text? Trash. Send me a file. Picture? Trash. Send me a file.

I can't agree with this, I copy formatted text and pictures all the time. Your program should have the ability to remove the formatting.


> They can't, a library implements a specific API.

I'm sorry, what? That's the whole point of putting things into a library. Every library should be compatible with the common set of APIs. Any library that isn't compatible is defective.

> with multitouch the interface has changed from that to a series of events, and you have to handle gestures too. Some apps still don't support these. This isn't just for mobile devices, laptops all have these touchpads now too.

A series of events is exactly what a mouse has always reported.

Gestures? Apps don't need to support gestures. Even iOS often doesn't understand the gestures that it claims it supports. Why on earth do you think an app should support something that the OS itself can't support?

> subdivide into many fractions of a pixel

The whole point of a pixel is to be the smallest point that a program needs to understand.

> the original Win32 API only uses 16 bits for mouse coordinates

My screen resolution is 2560x1600. I have two monitors spanned, giving me 5120x1600. I can see someone having a couple dozen of them to get up beyond 65k width. But at that point the user is already reaching the limits of the hardware to render such a large number of pixels, the limits of physical space to even _have_ them.

Given that the Win32 API has been around for a few decades and I don't think we're actually reaching the hardware capabilities to need the upgrade... I'm going to simply say that you're being hyperbolic.

> Please report those bugs to the developer, that should be fixed.

No, fuck that. I have better things to do with my time than to be every developer's free QA.


I want a right to repair, both for hardware and software. If the makers don't want to provide fixes forever, they should at least allow me to fix it when it breaks. I'm happy if they just provide the source code with a nice OSS licence.


Strictly speaking, for a pure right to repair, it doesn't even need to be OSS licensed.

Germany's Left Party (inspired name, I know…) is lobbying for exactly that arrangement: the license can be restrictive, as long as users are given access to the source and permission to fix bugs themselves.


This is a very simplistic view, as the source code alone may not be enough to build/fix most complex closed-source applications. You have IDEs that may be costly, expensive components whose development builds cannot be released, etc.


I find it funny that a Left Party would not promote/defend copyleft tooth and nail. Sounds more like a "pro-social Right" party :)


Political left and right have very very different meaning depending on where you live in the world :)

As a French person, anything which defends the environment and consumer rights (vs corporate interests) sounds lefty enough to me.


> Political left and right have very very different meaning depending on where you live in the world :)

I'm curious if you have resources on that topic. It was my understanding that the left/right division used to designate the French national assembly, where royalists and other reactionaries would sit on the "right" and various stripes of progressives on the "left", that this model was imported to other newly-founded so-called "democracies", and that the words took new meaning when communism (whether authoritarian marxism or libertarian anarchism) began spreading in the mid-19th century.

> anything which defends the environment and consumer rights (vs corporate interests) sounds lefty enough

As a French person myself, I don't agree with this interpretation. Not so long ago, a "fair capitalism" system was a selling point of the centrist movement, and was advertised by a considerable segment of the "droite sociale". The left was defined by promoting the abolition of the capitalist system, with further sub-classifications: socialists promoted reforms as a means of achieving communism (which arguably fails every time), communists promoted dictatorship of the proletariat to build communism (which arguably fails every time, and produces results very similar to industrial capitalism), and anarchists and libertarian communists promoted decentralization of power and workers'/citizens' self-organization to build communism (which arguably succeeded every time, but was shortly crushed by authoritarian reaction [0]).

In the 80s, the Socialist party rose to power and abandoned all socialist ideals, until it adopted the market economy as official policy some time in the 90s, if I remember correctly. Likewise, the Communist party seems to have abandoned most campaigning around expropriation, instead focusing on the much diluted/vague "workers'/citizens' rights" concepts.

Today, apart from the NPA (Nouveau Parti Anticapitaliste), I don't see any mainstream (>1% votes) leftist party in the public discourse. I do see an abundance of further-left initiatives from the bottom up (not political parties), though, in the form of ZADs, self-organized workers' coops and unions, squats, and other collectives for social struggles against racism, patriarchy, etc. Of course they're not the ones the media would tell you about, or as the old '68 saying goes, "the revolution will not be televised" :-)

[0] The 1917 revolution in Russia started as a libertarian communist movement organized around local "soviets", before Lenin, Trotsky, and other power-hungry tyrants moved in and killed everyone (see "Kronstadt" or "Makhnovtchina"). Likewise, the 1936 revolution in Spain was mostly led by anarchist discourse and practice, until the USSR-backed communist party (which had considerably more weapons despite having much less popular support) massacred everyone.


Yes the definition depends on where you are and when too indeed :)

I think the mainstream opinion is that the PS, LFI and EELV are left parties. Everyone is free to define left and right as they prefer (or any other word actually) but it's easier to use the same meaning as most other people to communicate ;) I do agree though that this is a bit subjective, and for example whether LREM is left, center or right is probably a matter of your political opinion :)

One important thing I realized a while ago with something called the "political compass" [1] is that there are actually 2 main axes. Depending on when/where you are, left and right can be associated with different sides of those axes.

[1] https://www.politicalcompass.org


> I think the mainstream opinion is that the PS, LFI and EELV are left parties.

I think that's the picture the propaganda apparatus is trying to build, yes. But compare their programs to those of the PS, PCF, and Greens (ancestors of those modern parties) from 40 years ago and you may have a few surprises in how the entire political apparatus has moved to the right over the years.

> LREM is left, center or right if probably a matter of your political opinion

LREM is definitely right-wing. They have regularly attacked the poor and the oppressed, while removing taxes on the rich. It could be considered a centrist policy ("all taxes should be equal") if it were accompanied by a traditional centrist "civil rights" program for political freedom and dismantling of monopolies (separation of investment/savings banks, break-up of media from other industry interests, defending the right to protest, etc). Which it was not.

In the political compass, LREM/Macron policy is no doubt categorized in the top-right corner, in the authoritarian capitalist quadrant.

> Depending on when/where you are, left and right can be associated to different side of the axis.

Agreed, but there is a common understanding that left means redistribution of wealth while right means private property. Moreover, i would personally argue that the political compass is good for evaluating political speeches/opinions, but fails to grasp the complexity of facts in practice. For example, the USSR is usually painted as "authoritarian left", while in practice many critiques have pointed out that "There is no communism in Russia" [0].

[0] https://theanarchistlibrary.org/library/emma-goldman-there-i... (1935)


Could be interesting if there were a sort of tax on throwaway technology, something around a few principles:

a) Any manufacturer who ships an electronic device that fails to remain operational for 10 years will pay a short-term e-waste tax every year for every device that failed to remain operational.

b) Any device that talks to the internet will not be considered operational unless either it receives regular security updates and supporting backend services -or- if the manufacturer chooses to close the line of business, all firmware and backend code must be released under an OSS license so that end users may pick up the slack if they so choose.

Some additional guarantees around media licenses in the same vein would be pretty cool as well (although it seems that all the media companies are moving away from buying the right to consume on their service to just a regular fixed licensing fee for access to everything).


> I'm happy if they just provide the source code

I agree with you, but I figure it's worth mentioning that—OSS license aside—it's often possible to get source code from the original programmers many years after EOL.

I got my hands on the DEC VTStar source code a few years ago just by doing some digging on LinkedIn and messaging the right people.


You can’t create perfect software no matter how hard you try. Security updates especially need to be a forever thing, lest some major RCE comes along and is used on tech support scam sites to cause actual harm to the user.


Read OP's point again: you don't need security upgrades in many cases. Neither my dishwasher nor my fridge is, or should be, connected. I could imagine them having some kind of protocol for remote maintenance (showing power consumption, warning about damage), but that's it.

Software for utility devices should not be a selling point. It should just be there and do its job and maybe get an update every five years or so.

Why does my TV need updates nowadays? My car? Do I have to download updates for my clothes or my bed soon? How about the toilet?

These updates are indeed just a way to turn products you once used to buy into leased stuff. Every business loves to lease: There's much more profit in it and the customer has no say about how long they use a particular device. But as a customer I hate it.

Here's just one instance: I had an old Kindle Paperwhite. I loved that thing for piling all the stupid, cheap scifi on it that I would otherwise have bought as cheap paperbacks that would then clutter my apartment. Literally two weeks after the device went out of warranty, Amazon decided to push an over-the-air upgrade. And it bricked the device. Now, customer care was nice enough to send me a new one, but what if they hadn't? Who is legally to blame? I would have to explain the concept of over-the-air upgrades and how software can sometimes destroy hardware to a judge, and that judge would have to apply law from a hundred years ago to that situation. In a court case over maybe $100 - good luck with that.


> Why does my TV need updates nowadays? My car? Do I have to download updates for my clothes or my bed soon? How about the toilet?

Those things don't all need software to exist. But good luck buying a new TV or car these days that isn't 100% reliant on software to keep doing what you bought it to do. (And I'm sure there are clothes/furniture/toilet manufacturers looking very closely at John Deere's current business model to try and see if they can turn "buyers" into "recurring licensees" of their products.)

But I think that's missing the article's point. It talks about "some old gadgets" and "each booted up successfully" and "Two perfectly good pieces of electronic gear have become useless, simply for want of software updates." - it's clearly not talking about clothes or beds, probably not cars (although arguably they count), and probably is talking about modern TVs (many of which get thrown away when the ancient Android version they run can no longer update to new versions of the apps the TV came with).

I'm not sure whether "We should mandate that device manufacturers set aside a portion of the purchase price of a gadget to support ongoing software maintenance, forcing them to budget for a future they'd rather ignore." is the only/right answer, but the article certainly raises an important question.


> "Two perfectly good pieces of electronic gear have become useless, simply for want of software updates"

isn't even true. The PSP is "useless" (in his book) for want of hardware updates and the same is probably true of the tablet to be honest.

(Personally, my PSP isn't useless; I even have a secondary wifi running 802.11b and WEP for it and other old hardware.)


> Why does my TV need updates nowadays? My car?

Because they're connected to the internet or to internet-connected networks.

Anything that's connected to the internet needs to be receiving at least security updates. If it's not connected to the internet, it's your call whether the update is worthwhile.

If you don't want to update it, don't connect it to the internet. Easy.

If you want to connect it to the internet, you need to accept the rules of society and one of those is don't leave vulnerable systems online where they can become part of a botnet and mess things up for the rest of us.

I could not care less about your data, but we've seen with hacked security cameras and such that insecure crap left on the internet causes problems for everyone, not just their owners.


> How about the toilet?

So long as you flush the cache manually after each use, no, the toilet does not need software. Cache invalidation is classically a very difficult problem in software.


Extremely few software updates I'm presented with seem to be focused on security. Including those specifically saying they are. It's all adding/removing/tweaking features.


We need to encourage a culture of maintenance releases at companies where we work. As a user, I want the software I use to look and behave the same, and have the same feature set forever, with only under-the-hood security updates and bug fixes. I don't want the constant UX churn, constant feature cram/bloat, constant regressions and increased CPU/memory usage.

But with most companies' development process, you're stuck having to take the bad with the good. Their release tree is one long trunk, and to get the security updates you must also take the latest PM's pet features and the disruptive results of their latest UX designer's art project. Companies need to learn how to branch at major versions and offer minor maintenance releases on each of those branches, while offering the next version.

EDIT: If 'grep' was developed like most commercial/startup software, its command-line switches would all change with every release, it would require an internet connection, it would be able to grep through Tweets, and it would phone home to its developer with logs of every file you grep through.
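The branch-at-major-versions policy described above can be sketched as a simple filter over changes: trunk takes everything, while a maintenance branch takes only security and bug fixes. A toy Python model of that filter (all names invented for illustration, not any real tool's API):

```python
from dataclasses import dataclass

@dataclass
class Change:
    id: str
    kind: str  # "security", "bugfix", "feature", or "ux"

# Policy: trunk gets everything; a maintenance branch gets only
# security and bug fixes, so users on v1.x never see feature churn.
BACKPORT_KINDS = {"security", "bugfix"}

def backport_queue(changes: list[Change]) -> list[str]:
    """Return the ids of changes eligible for a maintenance branch."""
    return [c.id for c in changes if c.kind in BACKPORT_KINDS]

changes = [
    Change("c1", "security"),
    Change("c2", "feature"),
    Change("c3", "bugfix"),
    Change("c4", "ux"),
]
print(backport_queue(changes))  # only c1 and c3 qualify
```

In git terms this is just a cherry-pick policy onto release branches; the point is that such a filter exists at all, rather than every user being forced onto trunk.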


I guess the underlying issue is that maintenance is afforded as few resources as necessary and is hence not somewhere most want to end up. So there is a lot of incentive to keep products from ending up in maintenance mode even if they really should. I find it quite sad. So many offerings seem to be made worse when people can't leave well enough alone.


This hits on one of the ways that I think the software industry has gone very wrong -- no longer separating security updates from feature updates.

I shouldn't have to accept whatever feature/UI madness developers have cooked up in order to have security issues fixed.


That isn't just a problem of misaligned incentives, it's more fundamental than that. It's also a problem with how software "construction" tools work at a very basic level. Look around at open source projects, they don't have this perverse incentive, and which of them ship security updates separate from the feature updates?

Nobody does, because it causes a combinatorial explosion in code branches and in testing. This isn't just a problem with the higher-level convenience tools like compilers and package managers, it's a problem with the actual source code as formulated today. We'd need an entirely different way of writing code in order to do that without a massive increase in programmer/testing hours. Personally I don't think it will ever realistically happen, but I'd love to be proven wrong.


As a user, it doesn't matter to me why software updates have degraded like this. All that matters is that they have.

As a developer, I don't really understand. I've been working on large, complex software that targets multiple operating systems for years, and we don't have any such issues.

It sounds to me like a lot of companies are using development methodologies that are a bit broken...

I've long thought that the reason for this is rather different -- the industry really wants to go to continuous-release models (which I don't think is a good thing, but that's a separate issue), which make security-only releases a bit nonsensical.


This is a really reasonable counterpoint, but maybe we need to live in a world where that becomes a problem for some third party (or if right-to-repair really takes off).

You buy a device, the manufacturer finds out there are flaws -- if it's under their promised timeframe (there's an incentive for them to offer one) they offer you updates. If you're outside of their timeframe, you hope for a third party or you determine whether you must upgrade. As the number of devices outside the support range grows, a market is made.

This isn't a new problem -- software can copy the robust recipe that automobiles have left, hopefully making fewer mistakes this time and letting a little less corruption in.


Sure you can: make liability a thing like in other industries, and make development practices that avoid RCEs a legal requirement.

Most RCEs come to happen because the industry largely considers security best practices an afterthought and keeps pushing for the use of unsafe languages.

Don't complain about death tolls when people keep driving without helmets and seat belts.


I made a blog post under the humorous title of "Never update anything" which does sound a bit like what you're suggesting: https://blog.kronis.dev/articles/never-update-anything

It's probably a bit chaotic, but some of the arguments that I made were that "updates" is too broad a term to be meaningful, since you might be interested in security updates without any other sort of breaking change being forced upon you:

  Types of updates are the problem
  
  The problem here is that we never differentiate between the types of updates properly:
   - major leaps in the development of the software which add new features
   - security updates that don't change any actual functionality apart from the security fix
   - bug fixes that should not alter any functionality in significant ways
   - small backwards compatible feature updates that are essentially opt in

Essentially, you might want stable and secure software and not much more, since if it was enough for you at a certain point in time, then it might be the same way in 10 years, instead of needing new features that break your logic numerous times over that period.
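These categories map fairly cleanly onto semantic versioning, assuming a project actually follows it. A minimal sketch of how a "stable and secure, nothing more" policy could be expressed in code (hypothetical helper names, not from the linked post):

```python
def classify_update(old: str, new: str) -> str:
    """Classify an update as 'major', 'minor', or 'patch' under semver.

    Assumes simple 'X.Y.Z' version strings and that the project
    follows semantic versioning conventions.
    """
    o = tuple(int(p) for p in old.split("."))
    n = tuple(int(p) for p in new.split("."))
    if n[0] != o[0]:
        return "major"   # big feature leaps, possibly breaking changes
    if n[1] != o[1]:
        return "minor"   # backwards-compatible, essentially opt-in features
    return "patch"       # security/bug fixes only, no functional change

# A pinning policy for "stable and secure, nothing more" would then
# auto-apply only patch-level updates:
def auto_apply(old: str, new: str) -> bool:
    return classify_update(old, new) == "patch"
```

Of course this only works to the extent that vendors honestly separate their changes into those buckets, which is exactly what most of them don't do.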

But perhaps the current situation that we're in is simply a reflection of the current industry and market conditions, where companies try to have their products remain competitive.

You can't sell stable software (or hardware) that has a new feature once every 10 years, when there are products out there that release new stuff monthly - at least when it comes to the layperson, otherwise we wouldn't have enterprises like Google or Facebook make significant UI/UX changes, since none of those make sense from a purely engineering oriented point of view.

Of course, that's just my interpretation after applying Occam's razor - for some reason we don't hear much about extremely stable products as success stories.


Extremely stable products do exist. The Z80 wouldn't still be made (after almost 50 years) unless software needs that CPU, and at this point that has to mean stable software, right?

But in this case, the software works as it did, provided that you give it the surroundings for which it was made. The change needed isn't one of the four classes you point out, but rather a fifth:

- support for new networking technology

For the N95, a new web stack, presumably including xhr, webrtc and the other goodies so many apps use nowadays. For the Playstation, the ability to connect to new WLANs.

I suspect that new hardware would be needed in both of these cases. Running webrtc on a CPU from 2007 sounds dubious, and didn't 802.11foo change something extremely low-level wrt. hardware?


Some functionality might only be possible by being insecure, so a security update may have to remove functionality.


Nope. A security update that removes a feature should add a checkbox to a control panel that's pre-checked to disable functionality. The system administrator is prominently notified that they must take a look at whether this breaks their use case, and make the right call following risk assessment.
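That checkbox pattern is essentially a feature flag shipped with the security update: the risky behavior defaults to off, and re-enabling it is an explicit administrative decision. A minimal sketch, with the setting name invented for illustration:

```python
# Hypothetical settings store; a real system would persist this and
# surface the checkbox in a control panel with a prominent warning.
settings = {
    # The security update ships with the risky feature pre-disabled...
    "insecure_feature_disabled": True,
}

def insecure_feature_enabled(settings: dict) -> bool:
    """The insecure feature runs only if the admin explicitly re-enabled it."""
    # Unknown/missing setting also means disabled: fail safe by default.
    return not settings.get("insecure_feature_disabled", True)

print(insecure_feature_enabled(settings))  # safe default after the update

# ...but an admin who has done a risk assessment can opt back in:
settings["insecure_feature_disabled"] = False
print(insecure_feature_enabled(settings))
```

The design choice that matters is the fail-safe default: absence of a decision means the insecure path stays off.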


In that case, we're probably (hopefully) talking about air gapped environments, like medical devices that aren't networked or internal corporate networks with very particular host access rules.

In those cases, you might truly want no updates of any kind, with which i agree - updates should always be compatible with the "opt out" approach, or even do "opt in" (like Debian has an unattended upgrades package, for example: https://wiki.debian.org/UnattendedUpgrades).

That said, due to the world that we live in, it's risky to forgo any sorts of security updates, especially in the context of networked devices and systems!


I mean, I was thinking more about the thing where WMF files used to allow you to run scripts inside the files, which created security issues https://www.computerworld.com/article/2826084/what-you-need-... - lots of functionality that is enabled to make things easier for applications is fundamentally insecure, so pushing a fix for the security issue will sometimes mean removing functionality from the application, which the user has to accept.

So the premise that we could remove security issues but not touch functionality seems utopian.


I was asked about 5 years ago to help keep an application alive that was running on NT; it was doing an extremely important function and could not be replaced easily. Nobody knew how long it had been running, and the developers had obviously long gone.

I suggested we immediately isolate all the servers, clients and the network, create snapshots of the hard disks (on isolated disks), and try to virtualise the images so they're not tied to the hardware.

As far as I know that thing will keep going forever, to the point that the tech will be (or already is?) so old not many people could hack it.


> as far as I know that thing will keep going forever to the point the tech will be (or already is?) so old not many people could hack it

You're underestimating young people's ability to grasp and enjoy obsolete technologies. I am personally fascinated with DOS and Palm OS despite definitely not being around to witness their heydays. Give me a few years and the binaries/manuals people had back in the day, and I will hack it one way or another.


You're overestimating anyone's ability to spend years trying to hack a single target with documentation that likely no longer exists when there are likely both easier and more profitable targets out there.


The documentation and software is still extant if you know where to look, including in archives created specifically for this purpose. If I didn't have either of those, then obviously I couldn't get very far.


You're right, and I personally think the more people who have this way of thinking the better. But the number of people who can be motivated to do this stuff, along with the time required, makes for a much smaller group.

I've done my fair share of reverse engineering machine code in the past - I do see it as a puzzle to be solved.


> If they would just make it work right the first time, the way software used to be developed

Back in my day kids were respectful of their elders, politicians didn't lie and the sun shone gloriously every day.


Exactly. Remember Windows 95?

There always has been stuff that was made to work and last, and there always has been tons more stuff that was a bug ridden "worse is better" kind of deal with great marketing.


I remember windows 3.1.

Back in those days, the way they shipped - mmm it was a thing of beauty. I had to reinstall it every 2-3 months it was that good.


OTOH, if it wasn't that good we would be using something else, better than Win 3.1, and history would have erased Windows.


The big lie started when they conned people into calling the next version, 3.11, "three point one one". It's pretty obvious now with the benefit of hindsight that it is an eleven, and big tech had to hide the fact they were transitioning from completed code to incomplete code in the unreleased versions 2 through 10.


> It's the same reason why anything "smart" or cloud-dependent is best avoided. For example, all my white goods are several decades old and most of them don't even have any electronics.

I would definitely welcome regulations dis-incentivizing IoT crapware like connected fridges and toasters and whatnot. However, for devices that are intended to run software (general-purpose computers), shouldn't it be a requirement that all data sheets and source code be released under a copyleft license?

Can't we have the best of both worlds?


I agree that better technology shouldn't need continuous updates, but I do think there are two more likely drivers than planned obsolescence, differing from early technology.

Firstly, security. Older devices weren't connected to the entire world with bad actors attempting to exploit anything they can. Until people start writing software that has zero security exploits, anything connected to the internet needs an update path.

Secondly, most apps in use today have a server component. Maintaining a perfect backwards compatible server for all clients ever released is a very tall order.

Also, I think it's a strong indictment of current OS technology that updates are in question at all. If everything were as modular as it really should be, you'd have the core components of the kernel, the drivers for the hardware bits (both of which really should be secure and not rely on servers), and then everything else should be in the user's complete control, to update or not as they please.


People could just not connect some devices to the wider world. Make them offline, or support airgapped usage patterns, or support a 1:1 connection only with a management interface (like a KVM), or support fully online use but with as simple a protocol as you can to reduce surface area.


>If they would just make it work right the first time, the way software used to be developed

Was this ever true? Was there really some golden age where everything worked or were we just blissfully ignorant to the problems?


Generally most of my electronics that contain a microcontroller but no internet have been bug free.

I think part of this comes from the much smaller testing surface - my old vacuum cleaner has one button for power and one dial for suction power, so it's quite simple to write a bug-free implementation. Same for my dishwasher, laundry machine, etc. They generally come with a very limited number of inputs, and a microcontroller that runs through a state machine based on the inputs.


I don't know how old you are, but pre-internet electronics just worked, because it was assumed that there was probably no connection - this was before everything was internet connected. Yes, everything did just work until component failure.

* Game consoles and games were plug and play. Until the PS3 (late 2000s), none of the game consoles required any sort of updates. Yes, there were hardware revisions, but what you bought in the box was what you got.

* Before the PC was dominant in the UK, most computers just switched on and you operated them.

* Mobile phones didn't require updates; the only thing I ever needed to do was restart the phone because the SIM needed to update a setting when I wasn't receiving SMS. Those old Nokia phones still work, btw, while many of my newer smartphones don't work or can't be updated now.

* Almost all my media was on VHS and/or Cassette. As long as the player worked, my media would play.

I tend to do a lot of retro-gaming / retro computing now and all this stuff that is pre-internet and doesn't assume an internet connection just works (even on modern operating systems) without much interference.

There were of course bad products, but they didn't require an internet connection either, so a bad product was just something you replaced with something better if you cared to.


Sort of. Some things were - and still are - a lot better. I've been in companies where all software was assumed to spend 6 months in test after it was finished - the customers would then do 3 months of extensive lab testing and only buy it if it passed, so perfection was needed.


PlayStation 2 games were generally bug-free. Same goes for Nintendo cartridges.


This is completely not true, all those old games had tons of bugs.

I think we don't think of them as having a ton of bugs for a few reasons. First, we didn't have the social media forums to post bugs and read about them, so unless you personally encountered a bug, you wouldn't know about it - and since most bugs only trigger under rare circumstances, most people weren't going to know about them. Second, since we knew there were never going to be updates, we didn't think of things not working as 'bugs' that could be fixed... they became just part of the game.


> Since most bugs only trigger under rare circumstances

That's true of old games (when updates couldn't be delivered), but certainly not any newer AAA game at release that I can think of.


Those games had a ton of bugs. Look into speed run tech. Super Mario World even has an arbitrary code execution bug you can trigger from controller input.

https://www.youtube.com/watch?v=v_KsonqcMv0


Probably not bug free, but the bugs that remained were subtle enough to be difficult to notice, or they happened in rare circumstances, or they actually worked to the benefit of the game.

Video games have never been produced in an environment where bugs can be completely stamped out.


> "update culture" is really about encouraging forced obsolescence

> If they would just make it work right the first time

What makes constant updates unavoidable is the ever-connectedness of the current world. If you miss out on security updates, then your sensitive data is at higher risk of being harvested, or the device can be joined into a botnet. This is why we can't return to the old world of doing software.

Also it's not like software was written to be perfect, like, ever. Product versioning is not new to the current era.


Security updates alone pretty much refute your entire premise. Not to mention that if you don't have software updates, all your options for repair are costly and leave you without a device for a while. Scaled to everyone, that's mountains of electronics and time wasted. The problem is not software updates.


I don't think companies should be forced to maintain their software forever -- that's too much of an unpredictable burden for the company to bear. Also it's fraught with problems, like what happens if the company goes out of business?

But what I do think is they should be forced to release their source code if they no longer want to maintain it. If a server is required for the software, then they should release the server source such that anyone can run one. If there is a hostname that the software connects to, they should be required to push an update that allows you to change that before shutting down the service and releasing the source.


You could have a system of "software trusts" where source code is held in escrow. Businesses and governments could require this in vendor contracts.


Good idea. The escrow company should also be in the business of advertising for their clients. "Here is a list of SaaS products you can depend on for the long term" they'd say, and I'd believe them because, worst case, the source code gets released. Whenever I thought something like "I need a finance app" I'd head on over to saasescro.com to see who they recommend. The escrow company could then use this clout to win more customers.

We're heading towards a world where there's 20 SaaS apps for everything imaginable (or are we already there?), this could be a competitive edge.


This has been common practice for decades.


> But what I do think is they should be forced to release their source code if they no longer want to maintain it.

This is a common suggestion from open source fans but it quickly fails on practicality. What if the code includes licensed software from a third party? What about trade secrets of the developer, or information provided to them in confidence by a third party?

You quickly end up back at software developers either having to maintain indefinitely or accept an onerous set of restrictions and either way the end result is that a lot of software never gets made at all.

There is definitely a serious problem with having our modern dependence on software and yet accepting the very limited support that some developers have tried to get away with lately, but I don't think attempts to force the source open are likely to be the solution we need.


If they begin the project knowing that it will one day be open source, they can build accordingly. Make the law only apply to products released one year after the law goes into effect.

> What if the code includes licensed software from a third party?

The third party would be subject to the same law, or the law could be written that liability is removed. Then it's up to the third party to evaluate the risk.

> What about trade secrets of the developer,

That's a tradeoff they'd have to make -- reveal the trade secret or pay to maintain the software.

> or information provided to them in confidence by a third party?

Since the third party would be aware of the potential for the software to be open source, they would have to decide if they want to reveal that or not, or require that part to be removed before open sourcing and replaced with equivalent functionality.


Sorry, but this just seems completely unrealistic. The commercial software development industry in any country that adopted such a law would become a toxic wasteland. Open source has always had a terrible track record of producing effective, commercially viable results, and you would be reducing the whole industry to that kind of level. Not that you could pass the laws required to make it happen without causing a major diplomatic problem anyway since it would probably violate about a million provisions of the WIPO treaties and so on.


To be honest, the biggest problem I see is that with massive corporate projects, fixing them after original company lost interest will be too difficult and expensive.


How would this work?

Any such law has limited jurisdiction. There are already many examples of software development moving to avoid problematic laws.

There are circumstances where further maintenance is all but impossible that happen in the real world.

Open sourcing software cannot always grant all rights required to use said software legally, many of which may not be owned by the company. Some software has dependencies on third-parties that can void the relationship and for which no alternative exists, in some cases because the capabilities are unique and trade secrets.

This either creates a de facto requirement to support all released versions of software indefinitely or a giant and easily exploitable loophole that renders it useless. How does removal of features, common in long-lived software, work under this regime?

It would be nearly impossible to define "maintenance" in law such that it doesn't lead to de facto non-maintenance that also doesn't create unlimited liability that courts are unlikely to enforce.

Furthermore, this would violate laws that expressly prohibit disclosure of code in some cases, that are unlikely to be repealed anytime soon. Any such exceptions would be widely exploited.

This policy would create some perverse incentives and a massive number of loopholes.


It could work like intellectual property law was meant to work in the first place?

Patents are a limited monopoly granted in exchange for all of the technical details of the invention, so that the invention can be built by others after the monopoly period expires. "All of the technical details" for a software patent should be expanded to explicitly require "all of the relevant source code", and for good measure "relevant" should be defined widely and probably "virally" (à la "copyleft"; maybe requiring a "copyleft" license?).

Copyright is a limited monopoly granted in exchange for helping to ensure that the product eventually becomes available to the public domain. For many years the US had a legal requirement, for a copyright to be active, that a copy of every book be sent to the Library of Congress for cataloging. (This is no longer a requirement, though some US publishers still do it as a courtesy.) A requirement could easily be made that for software copyrights to be enforced in a court of law, that same software must have all of its source code registered with a US Software Source Code Depository. We collectively could do a lot of interesting things to build automated tools for that.

Back in the day when copyright required published books to be sent to the Library of Congress, not every publisher followed the law, so it was exploited some. But if you get caught exploiting it by not registering your source code and are then taken to court over it: you lose your copyright immediately, the court has the power to order you to hand over your source code, and at the very least your software becomes fair game, as part of the public domain, for reverse engineering if the source code can't be found.

On the surface it sounds like a great idea. It's also a very unlikely idea because it would require new legislation and would be opposed by many lobbies of corporate lawyers.


This presumes that the software companies own all of their intellectual property. Most software companies don't own all the intellectual property that goes into their software or even all of the source code. Open source without the dependent intellectual property is a lot less useful and may be subject to incompatible legal regimes.


It doesn't presume anything of the sort. The dependent intellectual property is still intellectual property and would be subject to the exact same considerations. (Per the proposal: If copyright, code for that dependent IP would need to be registered with a source code depository or be subject to forfeiture of said copyright in a court of law; if patented the code would need to be open source and maybe even copyleft to begin with.) The public may need to play whack-a-mole to access the full source code of larger projects, bring dependents and then their dependents and so on to court until the entire thing is open source, but that's still a better deal than the current situation where the public domain is owed nothing ever.

(ETA: And "incompatibility between legal regimes" is mostly moot due to the Berne Convention anyway, at least per my lay understanding of it. That said, the proposal is entirely hypothetical and no such legislation would likely pass in any country so worry about international complications is mostly cart-before-the-horse.)


I'm not sure why any of that stuff should be protected in the first place.

100 years ago, a company could build a machine. It could be full of "trade secrets". But at the end of the day, anyone could tear it apart and figure out how it was made and how to repair it. If you were willing to fix it, you could use that machine forever.

Why should a modern "machine" be any different? Why should how the machine works be legally protected after the company no longer makes it or supports it?

This is basically just an extension of the right to repair argument.


> We should mandate that device manufacturers set aside a portion of the purchase price of a gadget to support ongoing software maintenance, forcing them to budget for a future they'd rather ignore.

I think the problem is that any portion of the purchase price will eventually run out over time. Let's say that I purchase a $1,000 phone and the company sets aside $250 for ongoing software maintenance. How much should they budget, and when? If they spend the money too quickly, it will run out too quickly. But no matter how prudent they are, it will run out.

I think the problem with software is that it's often unmaintainable by the end user. When you buy a house or a car, you don't expect infinite free maintenance of it. You simply expect the ability to have third parties maintain it for a fee. With closed-source software, you don't really have the option to hire a third party to maintain the software for your PSP.

However, if we're expecting the original manufacturer to support it for free, that support can't be indefinite. Any amount of the purchase price put away (even 100%) would eventually get exhausted.

> Or maybe they aren't ignoring the future so much as trying to manage it by speeding up product obsolescence, because it typically sparks another purchase.

I think this is one of the reasons that so many companies have moved to subscription models. If there's ongoing revenue, there's an ongoing reason to support it. Even then, if consumers have moved on, there might not be enough revenue coming in for that device to justify working on it. If there are 12 people who still want to use a PSP and they're paying $10/mo, that won't cover one engineer to keep updating the software. I'm guessing there are more than 12 people who'd like to use a PSP, but you can understand that there is some point at which there aren't enough users even with a subscription model.

In a lot of ways, it feels like the author is making two arguments.

1) Companies should provide free updates for their products forever via a single purchase. That strikes me as a weak argument. Nothing lasts forever. Too many things are abandoned too soon and that is a problem - one that could be handled by some more modest proposals like supporting devices for 6 years or something.

2) We should have a right-to-maintain. This is somewhat at odds with the original one. The right-to-repair isn't about free repairs from the manufacturer, but about our ability to maintain our own things (or get third parties to maintain them).

Maybe a right-to-maintain could be useful without going the direction of totally free software. It seems like it might be difficult to mandate this, but one could imagine saying that consumers should have access to the source code to make modifications to maintain abandoned systems.

However, I think there are issues in general. As more items rely on services that cost money to operate, at some point a device manufacturer will want to stop paying for those services if they're no longer used and for a product they no longer sell.

I think it's also important to think about what this means for products that were financially unsuccessful to begin with. When we think of something like the Playstation, we think of something that makes a lot of money for Sony. What about a device that sells poorly and the company has lost a lot of money on the device itself? I'm not talking about some sort of loss-leader where their strategy is to make up the money some other way. I spend $100M creating a new smartphone and I sell 100,000 of them at $500 a piece with a cost-of-goods of $200. I've spent $120M and taken in $50M in revenue. Having to support these devices for decades would mean that companies wouldn't challenge incumbents. I'm not going to go up against Samsung and others if I might get saddled with a huge maintenance burden. You'd only be able to launch a product if you knew that you were going to be able to handle the maintenance burden.

That would probably make the market a lot less competitive than it is now. So many companies wouldn't try things for fear of getting saddled with large, ongoing maintenance costs. You'd rather not try your chances at competing with incumbents and consumers would be left with fewer options.

I think some of that could be accommodated by tying maintenance to number of units sold or profits achieved. However, that would mean the maintenance window for a product depends on how popular it is. Of course, we already see this a bit in the real world. Companies do maintain popular and profitable products longer than unpopular ones. If we're asking manufacturers to "set aside a portion of the purchase price" then we're giving cheaper and less popular devices less money for maintenance than more expensive devices. That seems to defeat the purpose of buying with the confidence that it'll be maintained. In fact, you'd want to spend your money with large incumbents rather than supporting a new competitor since the incumbent device will likely sell well and be supported longer.

I agree that devices should have their software maintained longer. It's one of the things that annoys me so much about the Android ecosystem. I just don't see anything in this article that proposes a workable solution. Free maintenance forever isn't workable and would likely distort the marketplace in other ways with reduced competition since the burden of an unsuccessful product isn't just the immediate losses, but the ongoing maintenance losses. While the author briefly talks about a right-to-maintain, the idea is never actually explored and the article is dominated by the idea of the original manufacturer maintaining things indefinitely rather than us having the ability to maintain our own software.

I think a more limited scope proposal might include things like a limited maintenance commitment in established categories where we've seen the lifecycle of devices and such (ex. one could argue for something in the 3-7 year range for smartphones); unlocked boot loaders so that third-party software could be run on hardware; a requirement that key parts of source code be released (even if the release only allows the source and modifications to be run on the original hardware, ie. not free "FOSS" software) if the devices are no longer maintained so that third-parties could offer service for it.


I wish that for large appliances - where the material weight of the appliance is more than, say, 95% of the total weight/volume compared to the electronics PCB - there were a dumb-mode feature (either as a brand-differentiating marketing feature or a legislated requirement). Note: I actually believe in free markets and believe this could be a huge selling-point feature that consumers would pay extra for if marketed correctly as more sustainable.

By dumb mode, I mean just a basic non internet connected, 1980's style refrigerator/washer/garage door opener/microwave mode. So you have a smart mode that works until Samsung stops providing smartMicrowave 3.0 updates at which point you can flip a DIP switch on the back and just use it as a plain ol' microwave.


I agree with your point, but the % weight might need adjustment. For example, a PCB the size of an iPhone 13 (146x71mm) [0], weighs maybe 40.61g [1]. The PCB in an iPhone is approx .25 of the surface area [2]. The result is right at 5% of the weight of an iPhone 13.

(40.61*.25)/204 = 0.05

0. https://www.apple.com/iphone-13-pro/specs/

1. https://www.leiton.de/leiton-tools-weight-calculation.html

2. https://www.ifixit.com/Teardown/iPhone+12+and+12+Pro+Teardow...
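For what it's worth, the back-of-the-envelope estimate above can be checked directly (all figures are the commenter's estimates from the linked sources, not measured values):

```python
# Rough check of the PCB-weight estimate above.
full_size_pcb_g = 40.61      # weight of a PCB the size of the iPhone 13 footprint
board_area_fraction = 0.25   # the logic board covers ~1/4 of that area
phone_weight_g = 204         # total weight of an iPhone 13

board_weight_g = full_size_pcb_g * board_area_fraction
fraction_of_phone = board_weight_g / phone_weight_g
print(round(fraction_of_phone, 3))  # -> 0.05, i.e. right at 5% of the phone's weight
```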


Yes, upon further reflection, perhaps it's more a rule of thumb than a specific weight ratio. It's mostly for manufacturers whose appliances were historically not internet connected (i.e. microwave, fridge) and where much of the functionality is not internet related, versus a tablet whose sole purpose is connectedness.

It's mostly about providing support for a "non-interconnected" UX mode which doesn't bug you to create an account on the manufacturer's website, or even more frustratingly bug you constantly about being unable to reach the manufacturer's cloud servers, which are no longer being maintained by the manufacturer themselves.

And the point of it is perhaps less "there should be regulation" and more that - as an end consumer - I might as well post this on a forum where consumer electronics product managers for said devices are likely reading comments, and get some gears turning and minds shifting.


The vast majority of consumers would not be willing to pay the extra cost associated.


That did not even slow down the "smart TV" phenomenon.

We do not live in efficient markets.


The extra cost would be building the "dumb mode".

Smart TVs took over in part because they were cheaper than regular TVs because they could be subsidized by ads, placement deals for apps, etc.

Also most normal consumers don't care all that much about the negatives, to them it means they need to figure out one less device, one less remote, etc. to watch Netflix.

You can still buy TVs without these things from the vendors' business divisions, but you'll be paying more for fewer features, which means it will never catch on in the retail market.

Also, in the end, you can just use it without connecting it to the network. I bought a new LG CX soon after a zero-click exploit was discovered affecting them. I just left it offline and used my other devices as sources until that was patched.


I think it's interesting that the author mentions the Playstation Portable as an example: it seems like the reason it doesn't work actually _isn't_ a fault of the device manufacturer:

> You might think that a 15-year-old gaming console wouldn't even be operating, but Sony's build quality is such that, with the exception of a very tired lithium-Ion battery, the unit is in perfect condition. It runs but can't connect to modern Wi-Fi without an update, which it can't access without an update to its firmware (a classic catch-22).

In this case, developers seem to actually have provided updates, but it just doesn't work! Reminds me of when I booted up an old laptop to find that the old root SSL certificates don't work for sites today and everything's HTTPS now ¯\_(ツ)_/¯

It really points to an issue with net-connected tech (and yes, the PSP actually benefits from being online unlike certain ridiculous IOT devices): really, even if the manufacturer designs with long term support in mind, the rest of the world doesn't. So I don't know quite how tractable a problem this really is in the Internet age; the solution might be just better electronics recycling rather than indefinite updates


> It runs but can't connect to modern Wi-Fi without an update, which it can't access without an update to its firmware (a classic catch-22).

This argument is moot - the PSP is capable of upgrading from its memory card even in complete absence of a WiFi network.


That is great flexibility and design! Also, it might be possible to either find an older access point, or configure a modern one to use an older standard, so all hope should not be lost. :)


Also, doesn’t the PSP use those little disc cartridges? I don’t see why you couldn’t still play those games.


>But even if they fell wholly on the purchaser, consumers would, I suspect, be willing to pay a few dollars more for a gadget if that meant reliable access to software for it—indefinitely

The author's premise is entirely wrong, and years of consumer behavior show this. If you offer consumers a $99.99 microwave that will die in 2 years and a $107.99 microwave that will last 5, the VAST majority will buy the cheaper one regardless of the cost per year.
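To make the hypothetical numbers above concrete, the cost-per-year arithmetic looks like this (the prices and lifespans are the comment's invented examples, not real products):

```python
# Cost per year of ownership for the two hypothetical microwaves above.
microwaves = {
    "cheap":   (99.99, 2),    # (price in dollars, expected lifespan in years)
    "durable": (107.99, 5),
}
for name, (price, years) in microwaves.items():
    cost_per_year = price / years
    print(f"{name}: ${cost_per_year:.2f}/year")
# The durable model costs less than half as much per year of use,
# yet (per the comment) most buyers still pick the lower sticker price.
```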


Is this so? How does the consumer know that the slightly pricier one is more durable and not just slightly pricier?


I think one problem is that the device that's cheaper isn't exactly advertising the fact that it will die in a year. In fact, when it is a brand new one that just hit the market, who even knows when it will die? This is the kind of thing that requires some historical knowledge which only a good salesman will be able to tell you, but if you're just buying online by five star reviews of happy customers who also bought it the last year, how are you going to know it'll die next year?

And next year, there will be a new model, so all those one-star review updates to last year's model will not be seen by someone in the market for a new one.


Exactly. Moreover, less expensive doesn't mean worse quality. True for clothes, food, and electronics.


Refrigerators. Christ, next time I get a fridge, KISS is going to be the primary feature I'm looking for.


The only fancy thing you want is a dual compressor, so one compressor for the fridge and one for the freezer. (Otherwise they sometimes do things like heating one compartment with the lightbulb....)


The warranty should be a clue but is not often advertised.


I’ve once bought a warranty extension for my washing machine, only to discover that it is impossible to reach them.


It’s never a choice between $99.99 and $107.99, usually the difference is 2-5-10x.


I agree with the author but I think they have it backwards. Don't factor it into the cost but offer a way to pay for updates if you want it. I'd be willing to pay for an extra year of software updates for an old Android phone rather than get a new phone.


What rate do you expect to be paying? How does it compare to the salaries for a development team?


I believe this is true but I believe it's because they actually expect both to last about the same duration.

Too many products either aren't worth the effort of a warranty claim process or the company makes it almost impossible to use.

Too many products are just the same rebranded imported goods that you can not tell anything from the product description unless it's a major product from a brand.

IMO portable electronics have made large strides in the last 10 to 15 years. So much so that wanting to use an old device with the same but working software is a pretty small hill to die on.

It is essentially impossible to support a device for 10+ years unless it runs a standardized OS and the component drivers are kept working. Most ARM SoC chips do not get more than 2-3 years of support.

Maintaining cloud services is also very expensive since no one wants or likes having old, unmaintainable stuff around and constantly updating stuff is a lot of work/money.

TLDR; This is only possible if devices and software are as open as x86 + Linux computers.


Not if you brand the more expensive one as a GAMING model.


Not always the best option from a marketing perspective. Years ago I was trying to get a laptop with a decent-ish GPU so that I could game on it while away on business trips. My work would buy me basically anything I wanted, except GAMING products. All I wanted was the GAMING model, with the BUSINESS branding.


Heh, I would love more boring laptops with good GPUs, I always feel bad giving our data scientists or computer vision engineers garish LED monstrosities just so they can train faster locally. You _could_ probably do okay running games on a ‘workstation’ model but the consumer hardware is usually faster.


Perhaps the law needs to be that software must be open sourced some years after product launch, or when official support ends - then the community can take over?


Commercial software often contains components licensed from third parties. So a vendor can't really open source their code in a meaningful way without also getting permission from the whole dependency chain. And in practice that's often impossible.


OP proposed solving the problem with an actual law, which presumably would override that (very common) excuse.


Same as patent law — After 20 years, no protection? Except the cycle is rather 5 years.

As a startup, I’m ok. But customers will be hit by vulns every year, either with the OS or any layer up to my software, and one of them will have to be upgraded.


Stronger than that. When releasing the gadget, require that the firmware source build tree be put in escrow. After 5 years it is opened to enable ongoing maintenance.

The initial release of the gadget ships with firmware built by the escrow build process. This ensures the company actually provides a tree that builds the real thing.


Good idea, better than classic escrow: with classic escrow, your customers are incentivized to make you go bankrupt, so that they recover the source code and eat into your profits.

But that doesn’t solve the vulnerabilities and the need to have 0-day updates.


It is a tall order to expect companies to provide up-to-date code for open sourcing.


If we collectively agreed to do it, that's not an issue

You provide a copy to a specified organisation, which keeps the physical copy locked until date X. If you release something on the local market and the source is not deposited, you get fined until you do. It would only need regulation - which of course we won't get, due to the many companies that would fight this idea.


We already do this with national libraries holding a copy of each book ever published. We can do it with software.


How do you figure? Your position sounds like "oh we made this thing but no, we cant show you how, too hard". That is not generally acceptable in society. Can you point out where I've misunderstood you?


Trade secrets have been an accepted part of society for a long time. Coke and KFC don't have to tell you their formulas. Tesla doesn't have to tell you how Autopilot is analyzing images.

Unless you want patent protection, you have no obligation to show your process.


What we also need is open source, open documentation, and open firmware. That way, when manufacturers lose economic interest in software updates, users can take over. I.e., open-source access should be enforced by law.

We also need modular hardware, so that the processor / main board - the brain of the device - can be replaced independently of the other hardware.

For example the entertainment system software of a car will get much more out of date than the physical hardware of the car itself. For environmental reasons this needs to be addressed. Same with other devices such as TVs the physical screen will last longer than the software in the device.


I think introducing a law to require open documentation plus repairability at launch, and open-source+hardware at EOL (maybe even before say 5 years after launch) could work. Businesses keeping their secrets to make a profit and get a head start for a while is OK - it should create competition. Driving pollution with e-waste through forced obsolescence is not.


We also need companies to be obligated to provide source code on demand when they fail to provide a required fix or support.


Or simple software that Just Works.

The Ford Electronic Engine Control IV first appeared in Ford vehicles in the 1980s. It was designed to last for at least 30 years. (I was around when it was being designed.) The program is in a read-only memory etched into the silicon and cannot be changed. It was tested very thoroughly before it shipped by the millions. There are still vehicles on the road with that unit, mostly older Ford trucks. I still have a 1985 Ford Bronco with it.

Safety-critical functions should be like that, not in downloadable software.


> Safety-critical functions should be like that, not in downloadable software.

And then the concepts of self-driving, driver assists, and other such functionality come into play and all of a sudden things are way more muddy - due to the extreme complexity of those, it's inevitable that patches and upgrades will be necessary.

> Or simple software that Just Works.

That's why you're perhaps spot on here - software should be bulletproof, but achieving that goal might need us to simply do less.

However that's not what the industry is striving towards, perhaps apart from aerospace engineering, but even there profit margins have invaded as a metric that's optimized towards.

It's unfortunate that it's going to cost many people their lives: https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...


> The devilish details come in decisions about who should bear those costs. But even if they fell wholly on the purchaser, consumers would, I suspect, be willing to pay a few dollars more for a gadget if that meant reliable access to software for it—indefinitely.

I think the author has a hard time realizing the cost of what they are asking for.

Infinite support is not going to cost just a few dollars more; it will cost, well, infinite money. But even if we take "forever" to mean something like 30 years, it's a crazy amount of work that only goes up every year, as finding/retaining someone who knows how the system works becomes harder, and just keeping pace with technological advancement also becomes harder.


Old working devices could last for decades with their original function or some newer, simpler function. For example, I use old phones as clocks.

There are a couple of things I would like to see at device end of life:

1. A final update that disables all software related locks so I can load whatever software I want and change any files I want on the device to keep it going.

2. Access to the last version of the developer tools and documentation that can create software for the device, and for those tools to be available on the latest OS or even a frozen virtual machine containing an older OS and apps.

3. Ideally, access to source code or a new software licence that allows decompiling the software.


Okay, that'll be $10,000 per seat, per year, but to get that deal you'll have to reserve the first 100 years.

In advance, thanks for your understanding. We'll start breeding the support staff as soon as your check clears.


This. Can you imagine factoring the cost of keeping the N95, launched in 2006, updated? All the related software?


The most likely outcome of demanding software support would be the elimination of perpetual software licenses and maybe logic bombs in the hardware to destroy the hardware after a certain calendar date.

So buy an iPhone in 2023 (or really any hardware), and the software license says it'll only work until 2025, after which it's illegal to operate, all licenses revoked, and only a 'computer pirate' would illegally try to run a 2023 iPhone after 2025. To enforce that, if an onboard real-time clock sees a date significantly beyond Jan 2025, say... March 2025, it'll fire some FET transistors to crowbar the battery circuit and blow the internal fuse (hopefully that's all that happens LOL...)

It'll be greenwash marketed as "making the world safe for those dangerous old lithium batteries". "Help prevent lithium battery fires while also saving the planet, by recycling your old phone every two years"

A distant side effect of that will be something like forcing the adoption of certificate-protected NTP service, because honeypot MITM types might find it hilarious to spoof a coffee shop's wifi, get everyone to connect to their fake network, then emit NTP packets claiming it's 2040, thus making all equipment on the network self-destruct. Imagine a cheap little $5 raspi-level piece of hardware that can be plugged into any open ethernet port on any enterprise network, determine and take over the local NTP service via some trickery or whatever, then brick every piece of enterprise hardware on the LAN, LOL. Maybe it bricks unless the enterprise makes a special payment that's only 10% of the hardware cost of all enterprise hardware, LOL... The future is always an exciting time for infosec workers, LOL.

"We" already convinced people to replace incandescent lightbulbs that lasted a decade with "LEDs that will last forever" except home depot only sells LED products that are engineered with insufficient heatsinks such that LEDs only last two years, so now people buy new bulbs five times as often as the "bad old days" because they think they are "saving the environment" with LEDs. I'm sure we could pull this off with phones and any other equipment that uses software. Heck, put five cent microcontrollers with RTCs into each LED light bulb to make absolutely certain nobody gets more than two years life out of LED products....


Somewhere after right-to-repair comes prohibiting built-in obsolescence and I suspect we'll get there eventually, as environmental awareness among younger generations becomes more influential on consumer spending if not before.

But we can bet that plenty of businesses will try to play the kinds of games you're talking about first.


I don't see any reason for greenwashing, or fearmongering along the lines of "think of the children", or virtue signaling in general, to decline in effectiveness, so they'll likely keep on using those strategies.

Younger generations understand less tech than older generations, although they use tech more than older generations. What I'm getting at is a societal wide trend where young people are more likely to own an e-bike but old people are more likely to be shade-tree-mechanics doing their own car brake jobs or whatever. Young people own and use iphones and streaming, old people own and use hammers and soldering irons.


> Younger generations understand less tech than older generations, although they use tech more than older generations.

That's an interesting premise but I don't know how true it really is. Certainly today's younger generations haven't necessarily had the kind of curious exploration phase that those around GenX had and a lot of the modern technology is much more about lock-ins and walled gardens at the moment. That's a relatively recent disease, though, and breaking down some of the tech giants and their de facto monopolies might go some way towards curing it.

At the same time, the digital native generations tend to have much more street smarts than their predecessors in that world. Today's teenagers are much less attached to individual personas online and change social media accounts frequently. They know not to give out their personal details or even their real name too readily. They're often more savvy about common online threats than their parents, and staying safe online is routinely taught in schools now. And they are often much more willing to engage in campaigns about issues like equality and environmental concerns, which their whole generation recognises as an existential threat and a sign of generational inequality, and quite knowledgeable about these issues.

It's not hard to imagine a near-future scenario where a big company tries on some sort of greenwashing to justify its obviously greedy strategy of forced obsolescence, it gets called out prominently by the next Greta or Malala, that message spreads virally throughout whichever social media platforms are popular with younger generations at the time, and that corporation's stock price crashes over the next few days as their PR team scramble to tell everyone that environmental protection is very important to them and their previously unannounced follow-up to the earlier announcement involves reversing it as quietly as possible.


Upvoted, because you have good points, but...

>> Younger generations understand less tech than older generations, although they use tech more than older generations.

> That's an interesting premise but I don't know how true it really is.

An article came out recently about that: https://www.theverge.com/22684730/students-file-folder-direc... .


The contrast in that article is very much what I mean. Kids are not using files and folders today, they don't understand what is going on under the hood the way many of the previous generation did because they have never had to. But when it comes to navigating Insta, grandpa had better take a seat and watch the experts.

There has definitely been a loss of capability and effectiveness in one sense (and that is a very serious concern, just not the one I'm talking about here) but in another sense the kids are very naturally navigating things like social networks and phone apps in a way many of their elders find similarly unintuitive. I think Big Tech challenges the assumptions modern kids make about how their devices should work at its peril.


This is the wrong approach. Rather than "software updates forever" how about when you stop providing updates you must provide that software to the community? The community can maintain it themselves, if they so desire.

My solution puts an interesting pressure on companies: if you don't want your trade secrets being released to the public then you have to keep supporting the software. The companies may very well decide that after 10 years they don't care. Which is fine. Now the community can take over. If there's no community to take over then the product is dead and there's no point expending any more effort maintaining the software.

Wouldn't this work better and be far more practical?


We do not need updates forever.

Updates become scary because you never know what BS they decided to add that you never needed, which will break things that you did need.

We do not need updates forever. We need quality and respect.

We need something that 'just works' in the first place and the source code to fix it when it doesn't or when the company isn't there anymore.

We need control over our own devices because they are becoming more and more an extension of the mind, and just as your mind should be protected, the extension of it should be protected too.

Which means complete control over updates and what they contain. It means full control over each aspect of the device whether it is hardware or software.

Updates have become another form of control over your device, your workflow, and eventually your life. They have become a tool to shove down your throat something you never wanted. They have become a tool to disrespect your choices.

If you reject updates they bombard you with endless messages and warnings that pop up in the middle of your work, sometimes even breaking it. Talk about user hostility.

Apple, for instance, even abuses your internet traffic: it simply downloads updates without asking, and there is no way to turn it off. It is especially "useful" during travel, when you share a connection from one device with another; the other device "thinks" it is on Wi-Fi, so it downloads a 5 GB update, leaving you without internet access when you need it most, not to mention the cost. Thanks, Apple. You simply cannot turn it off.

This is a clear example of disrespect for people and their choices. Asking on their support forum didn't help. Well, guess what, dear Apple: ignoring people and their requests will not help either. We will bombard you with messages and demands until you start respecting our choices, for as long as you bombard us with "invitations to update" without an option to turn them off. I think Apple and many others do not realize that the shove-it-down-your-throat tactic will not work in the long run.

We need full control over our devices and the sooner they understand it the better.


In the corporate sphere, software is treated as an ever-evolving amalgamation of code and people. This is why enterprise software services keep changing and breaking over time; without the people, the software stops working. Also, as the people change, the code changes. The code is often overcomplicated; there is a lot of politics going on whereby developers introduce unnecessary complexity into the code as a way to make themselves essential to the company. The code is too tightly coupled with individuals or teams. When some individuals leave or the teams are shuffled around, their code has to be replaced. The code is also often tightly coupled to specific hardware, operating systems or SaaS services; developers in big corporations rarely bother to make their code portable across different cloud providers or operating systems.

In the open source sphere, software is more often treated as just code. There is not such a strong incentive to play politics because there is no money or job security involved. Because the software often needs to run on a wide range of systems, across different cloud providers, by different people/companies, on different operating systems with many different execution engines, coupling to specific hardware, systems or SaaS services is strongly discouraged. When open source does become coupled to some external service, it's more likely to be a blockchain, which has this 'forever' quality (due to the decentralization aspect) and almost never changes.


Forever maintenance sounds absurd. "Right-to-maintain" seems much better, as in: "please release blueprints and SDKs, allowing users to run community-powered update systems."


I mean, I already wonder what will happen when Square Enix pulls life support from "Rise of the Tomb Raider"'s "endurance mode" servers. A whole game mode made useless. It would be nice if the community could patch the game and point it towards different servers maintained by the community. Same for the PS3 store: why can't I download and run arbitrary software on the console?


Solutions that mandate companies to continuously and actively maintain software can't and won't work. Even if they have to set aside a budget, they could obey the letter of the law and still fail to do it, no matter how the law is framed. If we're going to make a mandate, it should be that the firmware for any device you refuse to maintain must be open-sourced and made available to the public. This way, if anyone cares to keep the device running, they can, even if the company goes out of business.


Exactly, otherwise it's no different than saying you need to support IE8 because 0.05% of the world still use it. Very unreasonable imho.


Seems to me that this piece comes directly from Magical Christmasland, never taking into account the fact that companies are entities that operate for a profit. Why would they spend resources on maintaining old software when they can just cobble together something that (despite being full of flaws) works well enough that most customers will never complain?

"[...] consumers would, I suspect, be willing to pay a few dollars more for a gadget if that meant reliable access to software for it—indefinitely"

Even if customers were fine with it, I doubt it would work. Not only is maintaining multiple pieces of decades-old software extremely complicated in practice but, mostly, why bother? Instead, companies are more than happy to focus on selling you a shiny new toy every few years. This way they have greater profit margins, and the vast majority of customers are totally fine with it.


> It runs but can't connect to modern Wi-Fi without an update, which it can't access without an update to its firmware (a classic catch-22).

In this case the issue is that the PSP is so old that its hardware simply cannot connect to WPA2 networks (it can connect to WPA and WPA/WPA2 mixed networks, however). This is not something that a software update can fix.

Also, honestly, internet access is not very useful on a PSP, especially now. Pretty much all games have shut down their online services, games themselves can be played from Universal Media Disc, and the web browser is pretty much useless with the modern web; with the 32 MB/64 MB of RAM that the console has, I don't think Sony could update the web browser even if they wanted to.


I thought that the auth happened in the "supplicant" software, so a WPA-capable device should be WPA2-capable, as long as there's enough room for new (and maybe bigger) software and storage of new (and maybe bigger) state. Or does the PSP do the supplication in hardware?
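For what it's worth, on Linux the WPA-vs-WPA2 choice is indeed made in the supplicant software: a wpa_supplicant.conf network block selects the protocol per network. A hypothetical sketch (the SSID and passphrase below are placeholders):

```
# Hypothetical wpa_supplicant.conf network block; SSID and passphrase are placeholders.
network={
    ssid="ExampleNetwork"
    # The same WPA-PSK key management option covers both WPA and WPA2
    key_mgmt=WPA-PSK
    # proto=RSN selects WPA2; proto=WPA would select the original WPA
    proto=RSN
    psk="placeholder passphrase"
}
```

Whether the same flexibility applies to the PSP depends on where its Wi-Fi chipset draws the hardware/software line, which is exactly the question above.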


I predict that this would be circumvented just like environmental cleanup laws are.

You create a new company, release the product, promise the moon. You cash out on the profits of the initial sale. The company goes bankrupt, so sadly you cannot provide support anymore.


Problems with circumvention require better laws, or better law enforcement. The idea is still valid.


Good luck finding the people who are willing to support an old operating system forever. Even Linux shuts off older versions from updates.


You can find lots of instances of people maintaining old software, often just fans. Nobody is going to maintain anything just for you, but we should at least give them the opportunity, just in case there is enough critical mass.

About Linux: isn't switching to a new version an update in itself? Maybe your remark would be more accurate as: "older versions don't get fixes." In that case, as I said, maybe there is enough of a market, or maybe you have enough money to pay someone to do the job.

The problem is proprietary systems that remain closed when going out of business. It's great when at least some of them shut down but say: "here is the code just in case someone wants it; for us it no longer makes monetary sense."


Because manufacturers choose to make a crappy kernel fork with hacky drivers. So it's on them to provide updates to that crap-kernel. If they instead made sure their drivers were in mainline, they could just use official current kernels as updates, and that is how it's meant to be done.


If that crap-kernel passed QA tests in its day, what would be the excuse for updating it? If the use case does not change, then the crap-kernel remains valid.

Example: my Korg Kronos is a synthesizer workstation that runs an outdated Linux kernel on an obsolete motherboard. But it just works; it's a musical instrument, so why would it need new updates? Security could be arguable (it even has an FTP server, so, yes, but it's a musical keyboard, not a server). New features? The product feature set was closed when I bought it. There have been a few updates, but basically everything is the same.


What do you mean, "willing"? Shouldn't they be obligated to? If I pay my own money for the software, it should work, no?

For how long? A day? Two? Or maybe a week? If so, they should say it will work for one week. Otherwise I do not see a reason why it should stop working.

If it stops working, it's fraud, isn't it?

For instance, I bought an iPhone 4. The YouTube app doesn't work there anymore, but all links are redirected to it, so basically you cannot watch videos at all, even in Safari. Weather doesn't work there anymore. Safari doesn't open many sites anymore. Xcode doesn't support it anymore, so I couldn't even put my own software on it. Transferring photos to an M1 Mac didn't work last time I checked.

Why should it stop working? They should fix it, no? Or at least provide the source code so I could fix it if they do not want to.

And this also raises the question of having full access to a device you own, especially because those who sell it to you have a tendency to write garbage software anyway.


Perhaps we can just start with updates to the cacerts file so I can still access websites.


What we need are stable standards. Eventually. I can understand why internet protocols, (some) file formats, scripting languages, wifi protocols, etc. are still in flux and why everything is a chaotic mess right now. It is difficult to get everything right. Time is needed. But I think there is a lack of discussion about how important standards are as a long-term goal. We need to be able to deploy code that can operate in a stable environment, just like the physical objects we design can operate in a world made of natural laws that will not suddenly change around them for no reason (well, rarely change).


I can't even get replacement parts for my still-in-production toaster oven, much less for things that stopped being produced 3+ years ago. Honestly, I think software has had noticeably better support than hardware. Hardware is typically "no" or "here's a different replacement" or "do it yourself (if that's even possible, oh and that voids the warranty)". Software often gets free patches, easily applied by everyone, for a couple of years or more, and those patches are often available for a decade or longer, even beyond the lifespan of the company, thanks to (sometimes sketchy) alternate hosting sites, which also supply them for free.

But to the broader point: I think the cause may be that much of software is built with the intent of it being one-shot. Build to a target as fast as is reasonably possible, then abandon it to do a better job next time. And because there will be numerous flaws with such a quickly-produced thing, make sure to hide your shame, along with your competitive advantage, which others could duplicate for free because there is zero material cost.

So, broadly, the current status quo makes sense. But yeah, it has downsides.

I sure would like open-source everything though. That seems to be the only way out.


It's very simple, actually:

Products that are connected to the internet directly or indirectly need updates forever.

Products that are not connected to the internet could keep the same software version.


> Products that are connected to the internet directly or indirectly need updates forever.

Are you prepared to buy that product with the infinite amount of billed work hours included in the price? If not, where do you expect those wages to be paid from?


That's taken care of by parent's second paragraph - if you're not comfortable with supporting a device indefinitely, make it offline.


What if you want to build a phone handset that you don't want to support forever? It has to be online but there is no realistic way the original hardware will support continuous upgrades, even my Samsung S8, which is hardly ancient, is already slow (for reasons unknown) and will not survive very long.


You're basically suggesting no internet or networks, ever. No company will agree to perpetual support of anything. It's not practical or financially viable to do so.


If a company has to provide software updates forever for no revenue, eventually its business model will go upside down.

I instead prefer a model where a tax is applied to all IoT sales. The manufacturer then gets paid monthly out of a fund created by that tax, and penalties for various abuses can be deducted from that fund: appliance support lasting less than 30 years, electronics support less than 15, security or privacy violations, etc.

Essentially, IoT companies need to earn their sale price over time.


They could release their sources. That would be nice.

I'm still getting use out of a Nintendo New 2DS XL thanks to the homebrew community. It's really nice to write software for it thanks to the efforts of talented folks to develop and maintain the toolchains... but the more advanced consoles are getting harder and harder to maintain by design.

It's one thing to maintain an NES or DMG-001 but to maintain PSPs and Nintendo Switches once they go out of fashion?


I wish that manufacturers had to announce the supported software lifetime of a product, so that you could compare and contrast it on a general spec sheet, like energy efficiency.

I also wish that device manufacturers would offer a marginal or low-cost software update service, on a yearly subscription fee, to ensure the longest lifespan out of a device. In the case of Sonos in particular, I would be happy to shell out a few bucks a year to keep my older speakers up to date, maybe give us access to AirPlay 2 or other standards. How many iPhone camera tricks used to sell the latest model are just computational at this point?

I feel like adding a long tail support subscription would really help the economics for B2C device manufacturing while moving us away from the mentality of continuously junking our silicon.


From the article: "We have seen a global right-to-repair movement emerge from maker communities and start to influence public policy around such things as the availability of spare parts. I'd argue that there should be a parallel right-to-maintain movement."

That sounds fine, but the emphasis is on manufacturer hardware and software support, when the issue really is that devices are locked down.

Why not unlock the hardware and publish specs after a few years, so people can update their own hardware with new ROMs, updates, alternative firmware and software, etc.? The devs at XDA Developers constantly have to work around these issues trying to support their own old (and new) hardware.


I'm more in favor of expiration dates with legally-defined minimums.

Why? For many of these devices, the developers and teams have long since moved on. If the code was good, it might be possible for an outsider to figure out how to update it. BUT, if the code was poorly-written, it might take a prohibitively long time for an outsider to figure out how to update it.

Requiring infinite updates forever just isn't practical. No one will remember how to maintain software forever. But, a clearly communicated (and legally-enforced) expiration date where a company has an obligation to provide updates is practical.


Not gonna happen; this is not an accident. Obsolescence is planned across most industries. That said, we do need to buy less crap, and try to only buy stuff that will last.


> we do need to buy less crap

I really wish there was an easy way to do this. Price has largely been decoupled from quality. Reviews are fake. Flaws are not obvious until it's too late. You don't have the right to repair. Warranties are useless. Companies are basically not competing on quality--they're not directly colluding but in effect...


I was pretty amazed last year to discover that macOS Catalina's Finder did a perfectly fine job of restoring an early-2000s-era iPod Nano once connected through "the dongle of shame" (USB-C to USB-A).

Connoisseurs will appreciate that at the time of shipping, the device required the now-deprecated iTunes software. So someone, bless them, probably took the time to port those old firmwares to the new Finder-based interface.

PS: No, it doesn't run on iOS 15, but should we realistically expect that? :P


Some of the problems that this article mentions (like the PSP not connecting to modern wifi) can't even be addressed by software/firmware updates. The PSP's hardware supports 802.11b and WEP. There is no way to make it support newer 802.11 standards, nor to support WPA2, just like virtually all other hardware which was released before WPA2 existed.

Also, the PSP can install firmware updates from game discs, and I believe also from local (USB-accessible) storage.


> indefinitely

Uh, that's an infinite cost.


But with a finite net present cost.


So, anyone still use their PDP-8 for email and web browsing?

I find myself falling into both the pro and anti camps. Old hardware is cool and still useful, but the world does incompatibly move on. ASICs are a thing... and they obviously can't be built for standards that don't yet exist. E.g.: try running IPv6 on older network gear.

Also, hardware tends to get more capable for less power over time which leads to real environmental impact. OTOH: electronic waste is also an impacting problem.

"Forever" is an impossibly long time in tech... and some tech is definitely made to be cheap and ephemeral. Maybe "we" should start a classification system that gives some idea of the expected lifespan of a piece of technology. At that point, consumers can decide what a computer is worth if it will be supported (or supportable) for 5, 10, 15, ... years.


At the very least I would want it to be in working condition in a museum.

If I owned a PDP-8 I would want to still write software for it as a hobby. It would be a very cool email tool and with a signature "Sent from my PDP-8"!


Agreed! A museum piece seems like it should be in the condition it was in when the machine was used, not upgraded with a TCP/IP stack and thicknet/ethernet/wifi/whatever.

If you are super into this, you could always emulate a PDP-8 on an FPGA, set up some kind of telephony link with a local machine and make an email client with that signature... all the kludge at 1/1000th the energy cost. :)

More to the point: a categorization system would be helpful.

E.g.: "this product relies on a cloud service" vs. "this product can work in complete isolation given power and maybe cooling."

I would expect a PDP-8 to be solidly in the latter category (works in isolation).

I would expect a cheap feature phone has a very limited lifespan because it's filled with mass-produced, cheap circuitry and radios that only work in the context of current standards. Once cell-network technology moves on, those feature phones become useless.

Both are reasonable categories to make a device fit into. For the people that want to pay extra for a toaster that is upgradeable or has swappable components, they should be able to find that as a standardized metric or certification. Some people just want a cheap device to plug in and make toast.


Updates are a necessity in my opinion, but yes, they come with a price, and it is really great to see updates that genuinely take things up a notch.


Even if it cannot connect to the internet, the PSP can still run games, can't it? That isn't "useless" in my book.


I'd be good with disclosing on the package the date when support and updates for my product will end. Then I can make a good decision.


As a user, that's fine by me -- as long as nobody is forced to take an update.


Well that is NEVER going to happen.


Sure, send the bill to the IEEE?


Could we agree on a wiring harness for devices that is compatible with some sort of Raspberry Pi-like device, so that, worst case, we swap in a card running Linux when the caps die on the original PCB?

I don’t think we need this eternal support for entertainment systems, but we are transitioning into this same problem set for major appliances and that’s just not a good deal.



