AI is a 'Glorified Tape Recorder,' Says Theoretical Physicist Michio Kaku (2023) (observer.com)
18 points by Brajeshwar 7 months ago | 53 comments



I'd really love to see his more recent real physics publications.

arXiv lists his latest as 24 years old:

https://arxiv.org/search/?query=Michio+Kaku&searchtype=all&s...

Didn't he pivot to being a public-speaker figure around the 2000s?


A lot of people dismiss LLMs based on how ridiculously simple next token prediction is.

Human nerve cells look ridiculously simple relative to what a clump of them can do. So does a transistor that CPUs are built out of.
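To make the "ridiculously simple" point concrete, here is a toy bigram counter in Python. This is a hypothetical illustration of the bare next-token-prediction objective, nothing like how a real LLM is implemented:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it and how often."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Greedy next-token prediction: return the most frequent follower."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat
```

The whole "learning objective" fits in a dozen lines; the surprise is what happens when the predictor is scaled up by many orders of magnitude.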


I don't dismiss LLMs. They're amazing tools. I dismiss the idea that they are sentient.


Very few people claim they are sentient. The more common claim is that they have shown qualities that make it reasonable to think we are getting close to it - for widely varying estimates of "close".

If you were to look at human brain cells without ever having seen a functioning brain built from them, you might well question whether they'd be enough too.


I don't think anyone claims that. The claim that one hears more often is the AI "understands" what it talks about.


I don't get that. To truly predict the next token, you would have to understand everything in the universe. Maybe people haven't thought enough about what a powerful learning objective that is. The fact that it's simply defined is deceptive, because as a heuristic people associate complex definitions with powerful tools.


>To truly predict the next token you have to understand everything in the universe.

What do "truly" and "understand" mean here, when LLMs are known for making errors and generating semantically correct but contextually meaningless results? Do they truly understand everything in the universe, or is the appearance of understanding the result of that universe being composed of data created by humans, so that relevant and meaningful results are statistically more likely than not?


[deleted]


> No, human nerve cells are not "ridiculously simple." The idea that neurons = transistors is old-fashioned

That's not what the OP or the literal quote you pulled from them said. They said a single neuron is ridiculously simple _relative_ to what a clump of them can do.


I deleted my comment, it was a mistake to get involved in this conversation.


They are complicated because they have to do something much harder than neural nets: they have to build themselves starting from a single cell. When a neural net can secrete GPUs to enlarge itself the way a brain grows, then it will face the same constraints brains do. But then it would have to carry extra apparatus. A neural net doesn't need to operate for 80-90 years without hardware servicing; the brain builds in redundancy from the start to compensate. And brains run on an energy budget of just a few watts, not kilowatts.

Out of all that complex biology in the brain, maybe only a small part contributes to intelligent behavior; most of it is for support or redundancy. So I don't care that emulating a single neuron may take a 3-layer MLP with 1,000 units or more. A model that fits on a DVD can speak fluent language, write code, and solve tasks - that is the useful capacity we are looking for.
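The "fits on a DVD" claim holds up on a back-of-envelope check. The specific figures below (a 7B-parameter model, 4-bit quantization) are my assumptions, not something the comment states:

```python
# Back-of-envelope check of the "fits on a DVD" claim.
# Assumed figures: a 7B-parameter model quantized to 4 bits per weight.
params = 7_000_000_000
bits_per_weight = 4
size_gb = params * bits_per_weight / 8 / 1e9  # bytes -> GB

dvd_gb = 4.7  # single-layer DVD capacity
print(f"{size_gb:.1f} GB model vs {dvd_gb} GB DVD -> fits: {size_gb < dvd_gb}")
```

At full 16-bit precision the same model would be about 14 GB and would not fit; quantization is doing the work here.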


I deleted my comment, it was a mistake to get involved in this conversation.


the claim was not that anything is simple (other than llm output). the claim was about how things “look,” which can be done with an untrained eye.

A hand looks simple to a child. to a doctor? orthopedic surgeon?

worse, to an untrained eye, your response looks like it is from an llm. that would explain missing the nuance. not the only explanation by far, but when combined with the talk of genetic effects in the footnote…


I deleted my comment, it was a mistake to get involved in this conversation.


that wasn’t the mistake.


"Why do we care..."

It's not about him especially.

What are all your possible sources of other opinions to consider?

* Informed, or at least intelligent, biased stakeholders. (People with money, status, or even genuine academic interest in it)

* Uninformed or unintelligent biased stakeholders. (Maybe politicians or religious figures)

* Informed, or at least intelligent, non-stakeholders. (Kaku)

* Uninformed or unintelligent non-stakeholders. (Me)

The other three groups are blasting out their opinions like they actually matter, at all times, and that matters because, through one mechanism or another, be it voting or the market, we make policy by simple popularity.

I would say that is why the opinion of anyone in that group matters. Not because it's him by name, but because it's anyone not in the other three groups.


I don't understand why people care what he has to say on any of the topics he gets asked to give an opinion on.


This type of thing always reminds me of Dave Chappelle’s bit on Ja Rule being asked for his opinion on September 11th: https://youtu.be/Mo-ddYhXAZc?si=rKYGUpZJgb0RU7Qg


I always said AI was the equivalent of algorithmic sleight of hand, and I still feel that way with these LLMs out there.


I just asked my GPT friend about this.

He was a little salty about always being picked on as not intelligent.

"If AI is a glorified tape recorder, then humans are quirky old computers, running on organic software that requires coffee to boot up, prone to emotional overheating, and occasionally needing to be put into sleep mode to prevent system crashes."


Tape recorders don’t get randomly added to my start menu and ship my data back to Microsoft.


And a computer is a glorified abacus.

Given the possible range of complexity that hides inside the word 'glorified', I'm not sure it's particularly helpful.


In the same vein: 'Humans are glorified apes.'


> 'Humans are glorified apes.'

Humans are actually apes. No one thinks AI is actually a tape recorder.


a gun is a glorified slingshot. that makes them _not dangerous_ I guess


People tend to like thinking their particular area of expertise is the acme of human pursuit, and everything else gets more attention than it warrants.


Internet is a glorified telephone.


I am a glorified amoeba.


Not that it's wrong, but why is the opinion of a string theorist on LLMs significant?


Everyone's having a field day dunking on Michio Kaku for talking out of his field but let's be realistic - everyone talks out of their ass here, all the time. This community is particularly well known for its strain of aggressive ignorance on all sorts of subjects, with physics threads especially being peppered with crankery. If we limited conversation on HN only to people with relevant, real world experience in the subject being discussed, this place would turn into a mausoleum. Kaku claiming AI is a glorified tape recorder is no more or less insightful than the countless people here parroting "humans also do x" on the other side. It's an emotional argument.

Maybe instead of attacking the person, it would be better to attack his argument. As of my writing this, there are 36 comments, and not one seems willing or able to make a coherent, technical counterargument. It's just "lol, who's this nerd?"


He's not wrong, he's just saying stuff we're already familiar with and we're annoyed someone who's well known as being a dick in person is getting more attention.


Seems reasonable to me. Right now, LLMs are basically regurgitating an average of the whole internet. Dog-level AI indeed.

There might be a breakthrough in grokking and building abstractions https://pair.withgoogle.com/explorables/grokking/ and at that point the scary thing is that a big tech billionaire or state government has no incentive to share that with humanity, but rather to use selfishly. That's the scary inflection point on the horizon.


Who cares. It's useful as hell. Here's one of my questions yesterday: https://chat.openai.com/share/500a4377-7cf5-47b3-b833-74b85d.... You know how long it would take me to Google this crap and piece together a coherent answer after going through a rabbit hole of JSTOR reviews, math stackexchange and Reddits? I don't have the time for that, and if that is what regurgitating an average of the whole internet means, I am very glad someone decided to build something like that.


cough https://www.google.com/search?client=firefox-b-d&q=What+are+... have you heard of "site:reddit.com"?

I will say that "site:reddit.com" is merely a poor man's way of telling Google "don't give me low quality machine generated results". Which, ironically, is exactly what you want ... from an AI!

Funny ... it's like a text-based version of the plot of Terminator: only machines ... can protect you from the machines!

I do think you're right btw, but for me the best applications of AI are, well, applied. As in "write tests for code X", "use library Y to ...". The code is definitely not useful as-is, far from it. But ...


Yes that's kind of my point, I don't _want_ to go and read through the pages returned by a search engine. I have a clear question and I want an answer. If I want to dig more, I would. That's exactly what these LLMs are providing, it's extremely useful.


Seems to work even better without the `site:` qualifier.


Dogs can't do that


That frankly sounds like someone who has never used ChatGPT for anything beyond "write me a poem" and has read a lot of news about artists' work being stolen.

A valid point certainly but missing a key angle.

The fact that the models are capable of following basic data-processing instructions (i.e., "here is a table, I need you to do xyz with it") tells me this "they're just parrots" view fundamentally doesn't capture where the utility and usefulness lie.


Michio Kaku likes to speculate a whole lot, and is no stranger to pushing pseudoscience. I don't mean to say this as an ad hominem attack, maybe he's right in this instance, but not caring about his opinion in an area outside his expertise is a rational course of action given the limited number of hours in our lives.


Google is a glorified library. Excavator is a glorified spade. See, I don’t have to be Michio Kaku to do that. I wonder if sometimes people like Kaku make these outlandish statements for publicity.


Actually, those all work fine and don't invalidate his point. No one thinks an excavator drives itself, not even highly autonomous farm equipment. That was the point: to say that LLMs are merely equipment.


Interestingly though, he seems to use AI and LLMs interchangeably.

AI risk != LLMs; the latter just happened to be so impressive that they shocked us into worrying more about the former.

Which is arguably a good thing: humanity beginning preparations for a dangerous tool before the actually dangerous part has even happened.


I do agree with that. It's a facet of the same premise: if a thing is mere equipment, then everyone knows it needs an operator. You don't start a car up and turn it loose to run down the road by itself. A thing doesn't have to have agency to be dangerous.

Except we are starting to do exactly that, because too many people don't realize that neither LLMs nor other current AI systems actually have understanding and agency.

Or really, they do have agency because we're giving it to them; but we're giving agency to things that don't have understanding, and that's the problem.


Hacker News is a glorified virtual Colosseum whose emperor is dang


"Yes! Yes, it is!" --Alan Turing


Same can be said of my manager.


I am quite critical of LLMs but I really don't see the point of asking Michio Kaku what he thinks, other than cynically leveraging his status as a physicist so he can be a "thoughtfluencer" in areas he probably doesn't know very much about.

Becoming a celebrity seems to be the worst thing that can happen to a thinker.


>other than cynically leveraging his status as a physicist so he can be a "thoughtfluencer"

So should we only be listening to the people with a financial stake in the outcomes, in either direction?

Not saying he's correct, or even that I agree. But it seems like we should listen to random, smart people.


I'd listen to Kaku on this if it were clear he had actually spent significant time using, developing, and understanding LLMs, but it's apparent that he hasn't and is just parroting other takes he's seen online ;)


That he's calling it a glorified tape recorder suggests he'd have been smarter to answer that he doesn't know the field and so has nothing to say about it, and it's a strong argument for why listening to random smart people is not necessarily going to be very useful.


But then he'd never have made it to Hacker News.


Is he actually smart, though? This statement is profoundly stupid. Is he intentionally saying something stupid because, as an academic, he feels threatened by AI, or is he actually just kinda dumb outside his field?


What does it mean to be critical of LLMs? That they are overhyped, or that they are not intelligent?



