I'm in the following camp:
It is wrong to think about the world or the models as "complex systems" that may or may not be understood by human intelligence. There is no meaning beyond that which is created by humans. There is no 'truth' that we can grasp in parts but not entirely.
Being unable to understand these complex systems means that we have framed them in a way (e.g. millions of matrix operations) that does not suit our symbol-based, causal mode of reasoning. That is on us, not on our capabilities or the universe.
All our theories are built on observation, so the fact that these empirical models yield such useful results is a great thing - it satisfies the need to observe and act. The models' lack of explainability merely means we are less able to act precisely - it does not devalue our ability to act coarsely.
But the human brain has limited working memory and experience. Even in software development we are often teetering at the edge of our mental capacity to grasp and relate ideas. We have tried so much to manage complexity, but real-world complexity doesn't care about human capabilities. So there might be high-dimensional problems where we simply can't use our brains directly.
A human mind is perfectly capable of following the same instructions as the computer did. Computers are stupidly simple and completely deterministic.
The concern is about "holding it all in your head", and depending on your preferred level of abstraction, "all" can perfectly reasonably be held in your head. For example: "This program generates the most likely outputs" makes perfect sense to me, even if I don't understand some of the code. I understand the system. Programmers went through this decades ago. Physicists had to do it too. Now, chemists I suppose.
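A toy sketch of that point (the 2-3-1 network and all the numbers here are made up for illustration): the forward pass of a neural network is nothing but simple, deterministic arithmetic that a person could follow with pencil and paper. The difficulty is only the number of steps, never the nature of any single step.

```python
# Toy sketch: a two-layer network's forward pass written as plain loops.
# Every individual step is grade-school arithmetic; a human could execute
# it by hand, just unimaginably slowly. (All weights are made up.)

def forward(x, W1, b1, W2, b2):
    # hidden layer: weighted sums followed by a simple threshold (ReLU)
    hidden = []
    for j in range(len(b1)):
        total = b1[j]
        for i in range(len(x)):
            total += W1[j][i] * x[i]          # one multiply, one add
        hidden.append(total if total > 0 else 0.0)

    # output layer: another round of weighted sums
    out = []
    for k in range(len(b2)):
        total = b2[k]
        for j in range(len(hidden)):
            total += W2[k][j] * hidden[j]
        out.append(total)
    return out

# A 2-3-1 network: small enough to trace entirely by hand.
x  = [1.0, 2.0]
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.2]
print(forward(x, W1, b1, W2, b2))
```

The same loop with layers of width 10,000 is no different in kind, only in count - which is exactly where "holding it all in your head" stops being about the mechanism and starts being about the chosen level of abstraction.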
Why do you think systems of partial differential equations (common in physics) somehow provide more understanding than the corresponding ML math? At the end of the day, both produce results using lots of matrix multiplications.
... because people understand what is being described when dealing with such systems in physics, and people don't understand how the learned weights in NNs produce the overall behavior? (For one thing, the number of parameters is much greater with the NNs.)
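A rough sketch of the contrast both comments are gesturing at (sizes and coefficients here are made up): a discretized PDE and a neural-network layer both reduce to a matrix-vector product, but the physics matrix has a small, hand-derived structure where every entry corresponds to a term in the derivation, while the learned matrix is a dense block of fitted numbers with no story attached to any individual entry.

```python
import numpy as np

n = 100

# 1D heat equation, explicit finite differences: the update matrix is
# tridiagonal, and every entry comes from a term you can point to in the
# derivation (each point is nudged toward the average of its neighbours).
alpha = 0.1
A = np.eye(n) + alpha * (np.diag(-2.0 * np.ones(n))
                         + np.diag(np.ones(n - 1), 1)
                         + np.diag(np.ones(n - 1), -1))

# One neural-network layer: the same kind of matrix-vector product, but the
# 100 * 100 = 10,000 entries were fitted rather than derived, and none of
# them means anything on its own. (Random weights stand in for trained ones.)
W = np.random.randn(n, n) / np.sqrt(n)

u = np.random.rand(n)
u_next = A @ u                      # one PDE time step
h = np.maximum(W @ u, 0.0)          # one NN layer with a ReLU
```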
Stuff that we can't program directly, but can program using machine learning.
Speech recognition. OCR. Recommendation engines.
You don't write OCR by going "if there's a line at this angle going for this long and it crosses another line at this angle then it's an A".
There are too many variables, and the influence of each one is too small and too tightly coupled with the others, to abstract it all into something understandable to a human brain.
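For the flavour of the difference, here is a hedged sketch using scikit-learn's small built-in digits dataset as a stand-in for real OCR (this is an illustration, not anyone's production system): nobody writes the line-and-angle rules; the "program" is just thousands of tiny fitted pixel weights that no one ever writes down or reads.

```python
# Sketch: "programming" digit recognition by learning instead of by rules.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digits, 64 pixels per image
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# No "if there's a line at this angle... then it's an A" anywhere here;
# the model fits 64 pixel weights per class, and that set of numbers
# *is* the program.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
print("learned parameters:", model.coef_.size)  # hundreds of opaque numbers
```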
> AI arguably accomplishes this using some form of abstraction though does it not?
It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.
> artists routinely engage in various forms of unusual abstraction
Abstraction in art is just another, unrelated meaning of the word. Like execution of a program vs execution of a person. You could argue executing the journalist for his opinions isn't bad, because execution of mspaint.exe is perfectly fine, but it won't get you far :)
While each individual operation a computer performs is something a human could compute, the billions of rapid computations involved are unachievable by hand. In just a few seconds, a computer can perform more basic arithmetic operations than a human could in a lifetime.
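A back-of-the-envelope version of that claim (the rates are rough assumptions: roughly 10^9 simple operations per second for the machine, and one per second, nonstop, for the person):

```python
# Rough arithmetic behind "a few seconds vs. a lifetime".
human_rate = 1                   # ops per second, generously, without ever stopping
lifetime_s = 80 * 365.25 * 24 * 3600
human_lifetime_ops = human_rate * lifetime_s        # ~2.5e9 operations

computer_rate = 1e9              # a conservative figure for one modern CPU core
seconds_needed = human_lifetime_ops / computer_rate
print(f"{human_lifetime_ops:.2e} ops, done by the machine in {seconds_needed:.1f} s")
```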
I'm not saying it's achievable, I'm saying it's not magic. A chemist who wishes to understand what the model is doing can get as far as anyone else, and can reach a level of "this prediction machine works well and I understand how to use and change it". Even if it requires another PhD in CS.
That the tools became complex is no reason to fret in science. No more than statistical physics, quantum mechanics, or CNNs for image processing - it's complex and opaque and hard to explain, but perfectly reproducible. "It works better than my intuition" is a level of sophistication that most methods are probably doomed to achieve.