
I'm in the following camp: it is wrong to think about the world or the models as "complex systems" that may or may not be understood by human intelligence. There is no meaning beyond that which is created by humans. There is no 'truth' that we can grasp in parts but not entirely. Being unable to understand these complex systems means that we have framed them in a way (e.g. as millions of matrix operations) that does not allow for our symbol-based, causal mode of reasoning. That is on us, not on our capabilities or the universe.

All our theories are built on observation, so the fact that these empirical models yield such useful results is a great thing - it satisfies the need to observe and act. The models' missing explainability merely means we have less ability to act precisely - it does not devalue our ability to act coarsely.




But the human brain has limited working memory and experience. Even in software development we are often teetering at the edge of our mental power to grasp and relate ideas. We have tried so much to manage complexity, but real-world complexity doesn't care about human capabilities. So there might be high-dimensional problems where we simply can't use our brains directly.


A human mind is perfectly capable of following the same instructions as the computer did. Computers are stupidly simple and completely deterministic.

The concern is about "holding it all in your head", and depending on your preferred level of abstraction, "all" can perfectly reasonably be held in your head. For example: "This program generates the most likely outputs" makes perfect sense to me, even if I don't understand some of the code. I understand the system. Programmers went through this decades ago. Physicists had to do it too. Now, chemists I suppose.


Abstraction isn't the silver bullet. Not everything is abstractable.

"This program generates the most likely outputs" isn't a scientific explanation, it's teleology.


"this tool works better than my intuition" absolutely is science. "be quiet and calculate" is a well worn mantra in physics is it not?


“Calculate” in that phrase refers to doing the math, and the understanding that comes with it, not pressing the “=” button on a calculator.


Why do you think systems of partial differential equations (common in physics) somehow provide more understanding than the corresponding ML math? At the end of the day, both produce results using lots of matrix multiplications.
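
To make that concrete (a rough sketch; the grid size, step sizes, and random weights below are purely illustrative): an explicit finite-difference step of the 1D heat equation and a dense network layer are, numerically, the same kind of operation - a matrix-vector product.

    import numpy as np

    n, dt, dx, alpha = 100, 1e-4, 2e-2, 1.0      # illustrative grid and step sizes
    r = alpha * dt / dx**2                       # 0.25, within the explicit-scheme stability limit

    # Explicit finite-difference step for the 1D heat equation: u_next = A @ u
    A = (1 - 2 * r) * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
    u = np.random.rand(n)
    u_next = A @ u                               # one PDE time step

    # A dense layer's forward pass is the same operation plus a nonlinearity
    W, b = np.random.randn(n, n), np.random.randn(n)
    h = np.tanh(W @ u + b)                       # one NN layer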


... because people understand things about what is being described when dealing with such systems in physics, while they don't understand how the learned weights in NNs produce the overall behavior? (For one thing, the number of parameters is much greater with the NNs.)


Looking at the Navier-Stokes equations tells you very little about the weather tomorrow.


Sure. It does tell you things about fluids though.


What is an example of something that isn't abstractable?


Stuff that we can't program directly, but can program using machine learning.

Speech recognition. OCR. Recommendation engines.

You don't write OCR by going "if there's a line at this angle going for this long and it crosses another line at this angle then it's an A".

There are too many variables, and the influence of each of them is too small and too tightly coupled with the others, to abstract it into something understandable to a human brain. (A toy sketch below makes the contrast with hand-written rules concrete.)
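
To illustrate (a toy sketch using scikit-learn's bundled digits dataset; the hidden-layer size and the final score are illustrative, not a claim about real OCR systems): instead of hand-writing rules about strokes and angles, you hand a learned model labelled pixels and let it find its own features.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # No hand-written "if there's a line at this angle..." rules:
    # the model infers its own features from labelled pixel data.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))     # typically well above 0.9 on this toy set

And the fitted weights sitting in clf.coefs_ are exactly the kind of thing nobody can read off as a human-sized rule.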


AI arguably accomplishes this using some form of abstraction though does it not?

Or, considering the art world broadly: artists routinely engage in various forms of unusual abstraction.


> AI arguably accomplishes this using some form of abstraction though does it not?

It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.

> artists routinely engage in various forms of unusual abstraction

Abstraction in art is just another, unrelated meaning of the word. Like execution of a program vs execution of a person. You could argue executing the journalist for his opinions isn't bad, because execution of mspaint.exe is perfectly fine, but it won't get you far :)


> It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.

Abstraction doesn't have to be perfect, just as "logic" doesn't have to be.

> Abstraction in art is just another, unrelated meaning of the word.

Speaking of art: have you seen the movie The Matrix? It's rather relevant here.


This is just wrong.

While the individual operations a computer performs are, in isolation, computable by humans, the billions of rapid computations are unachievable by humans. In just a few seconds, a computer can perform more basic arithmetic operations than a human could in a lifetime.
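
Rough numbers (a back-of-the-envelope sketch; the rates are assumptions: roughly 10^9 simple operations per second for one core, and about one per second for sustained mental arithmetic):

    cpu_ops_per_sec   = 1e9                    # assumed: roughly one modern core
    human_ops_per_sec = 1                      # assumed: optimistic sustained mental arithmetic
    human_lifetime_s  = 80 * 365 * 24 * 3600   # ~2.5e9 seconds in 80 years

    seconds_needed = human_ops_per_sec * human_lifetime_s / cpu_ops_per_sec
    print(seconds_needed)                      # ~2.5 s of CPU time matches a lifetime by hand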


I'm not saying it's achievable, I'm saying it's not magic. A chemist who wishes to understand what the model is doing can get as far as anyone else, and can reach a level of "this prediction machine works well and I understand how to use and change it". Even if it requires another PhD in CS.

That the tools have become complex is not a reason to fret in science. No more than statistical physics or quantum mechanics or CNNs for image processing - it's complex and opaque and hard to explain, but perfectly reproducible. "It works better than my intuition" is a level of sophistication that most methods are probably doomed to achieve.


"There is no 'truth' that we can grasp in parts but not entirely."

The value of pi is a simple counterexample.


We can predict the digits of pi with a formula; to me, that counts as grasping it.
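
One standard way to do it (Gibbons' unbounded spigot algorithm, sketched here as just one example of such a formula):

    from itertools import islice

    def pi_digits():
        """Gibbons' unbounded spigot algorithm: yields decimal digits of pi one by one."""
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    print(list(islice(pi_digits(), 10)))   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]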



> There is no 'truth' that we can grasp in parts but not entirely

It appears that your own comment is disproving this statement


> There is no 'truth' that we can grasp in parts but not entirely.

If anyone actually thought this way -- no one does -- they definitely wouldn't build models like this.



