>...something I’m very concerned about is the use of AI for autonomous weapons. This is another area where we fight against media stereotypes. So when the media talk about autonomous weapons, they invariably have a picture of a Terminator. Always. And I tell journalists, I’m not going to talk to you if you put a picture of a Terminator in the article. And they always say, well, I don’t have any control over that, that’s a different part of the newspaper, but it always happens anyway.
See also: video calling, internet as a social revolution, coronavirus
> Covid has exposed how incompetent the British state is, from top to bottom
> Come the inevitable inquiry into the events of the past year, it is not only politicians who should carry the can. All the components of Britain’s government, central and local, should be tested – the constitution as a whole should be under examination.
Interesting how the media is missing from that list.
Many journalists truly have no shame.
Then we see this change in tone a month later in April 2020: https://www.theguardian.com/commentisfree/2020/apr/02/wrong-...
I highly recommend going back and looking at the history for yourself and NOT taking the parent comment at face value. Decide for yourself: is this guy just telling readers what they want to hear, or did he change his mind as the situation played out? Was he honest? Are his articles worth reading now, and were they worth reading at the time of publication? What would you have written in his situation?
> Every medical expert I have heard on the subject is reasonable and calm.
All the healthcare professionals I know were taking covid-19 very seriously, especially when they heard it had arrived in the UK.
Doctors started renting bedsits so they could keep away from their families. Nurses were sharing recipes that used only tinned food, and telling people to get some extras in. (This was well before we went into lockdown.)
And it won't be sudden. People will be eased into it over time. And not even that much time: we went from nothing to carrying one or more permanently-on tracking devices at all times within just a decade or two, and we freely give companies permission to track us because they (say they) offer convenience in return.
It's not an oppressive regime, it's companies, which funnel money to the regime so they get a free pass to do whatever they like.
They were accused of fear mongering at the time, and told that they should worry about the flu, and not about a virus which only killed a few hundred people.
AI robots will probably not be intentionally violent towards humans. AIs are also evolving, but in an artificial world. In this world, AIs compete for humans' favor: if they do well, if we're happy with them, we grant them computing power and replicate them. This selects for very specialized AIs with very deterministic behavior. Nobody wants an AI with unpredictable behavior.
AIs that survive are the ones that are best adapted to serving humans. The danger is not that the AI itself harms humans, but that humans want to harm or exploit other humans through AI. There could be a danger in AIs developed by the military, but I'm not too worried, because they'll most likely be extremely special-purpose, with multiple fail-safes. Nobody wants to develop an AI that could kill the ones developing or using it. I'm most worried about AIs developed for economic exploitation. It's what we're motivated to work on, and it's the area where the most development is being done, so it's probably where we'll first see advanced AIs causing problems. Arguably we already have (algorithms on social media platforms promoting disinformation).
The thought that an AI will somehow gain some kind of general intelligence and conclude that the logical thing to do is to eliminate humans is a fantasy. We don't select for AIs with general intelligence, if there even is such a thing. Most likely we are overestimating our own intelligence; it's probably not as "general" as we like to think. We don't generally kill because it's the logical thing to do, but because of emotional reactions which are a product of our evolution.
The example of the paperclip maximizer is really dumb. Such an AI would not be selected for general intelligence, and there's no reason to think general intelligence will occur accidentally. Even if it somehow gained this magical general intelligence, the decision of whether to murder humans to secure metal resources, or to work with them, is probably absolutely undecidable: even the most intelligent AI imaginable could not consider all the factors, so the default would be no action. An AI would not have emotions, produced through natural evolution, that it could use as a heuristic to decide what to do here. That's not a problem for us humans; we have a built-in drive to consider killing someone outside our group, even if there's no rational argument to do it.
It wasn't popular, but neither was it unheard of, for people to recognize social media as antisocial a decade ago. Plenty of people would not be the least bit surprised by that headline. People didn't just decide not to join Facebook and Twitter in a vacuum; there was plenty of media coverage (and decades of fiction) warning about social media ten years ago.
And the fear of robots was always more about their human controllers.
AI is its own issue, although the idea that we are being manipulated by a perverse AI has crossed my mind, as an entertaining but not realistic idea.
The real power behind this was not social media. It was television. No need to rehash the history of Fox News and Trump. The key point is that news detached from real-world facts became the major input for a sizable fraction of the population. Fox discovered that there's a huge market for telling people what they want to hear. Notably, to the exclusion of contrary views.
Supply-side propaganda has been around for centuries. Now we have market demand for propaganda, demand that pulls media into being even more radical. That's new. Eventually even Fox felt they'd gone too far, and then they started losing viewers to Breitbart and OAN. It really is demand pull, not supply push.
Social media let you listen to people you want to listen to. So it amplifies this phenomenon. But it didn't create it.
It seems as though there's this widespread belief these days that things you don't like are harmful to society.
I'm not sure it's entirely new, though; people keep repeating stories from the rise of fascism in the 1930s. Material that tells you your country is great and all the problems are the fault of the Other is always going to be tempting.