Humans began to rapidly accumulate technological knowledge 600k years ago (asu.edu)
273 points by geox 4 months ago | 267 comments



The news article says humans, but the earliest humans (Homo sapiens) appeared around 300kya. The actual paper uses the word hominids rather than humans.


The word human is commonly used for both modern humans and members of the entire genus Homo. Hominids is a more general superset that isn't strictly correct here. The term hominin is more appropriate in this context and what they actually use in the abstract.

In my opinion though, "human" is the better word here for conveying the right mix of informality without implying the specific semantics of "Hominini sans Pan".


Is it?

This is literally the first time I've seen the word human applied to other hominids. I see many discussions about Neanderthals and Denisovans and so on. I have never seen them referred to as human.


I'm not sure where you are reading, but both laypeople and scientists commonly use the word "human" to refer to the genus Homo. To look at just one example, Ian Tattersall called one of his books "Extinct Humans" and it is a look at the history of the genus Homo:

https://www.amazon.com/Extinct-Humans-Ian-Tattersall/dp/0813...


I'll admit this is the first I've encountered the concept that "Neanderthals and Denisovans and so on" are not human. Maybe not biologically modern humans, but I'd certainly consider them fundamentally human.


Bizarre, this is the first I've encountered the concept of someone thinking Neanderthals and Denisovans were "Human".



We're all part of the genus "homo", but to me, "human" refers to "homo sapiens". I was sufficiently surprised to hear that humans were around 600k years ago that I came here to comment.

The Wikipedia article [1] on humans makes the same point. It does acknowledge that some use the term human to refer to all members of the genus homo, but this is not the common usage.

[1] https://en.m.wikipedia.org/wiki/Human


I think it’s not uncommon to refer to Neanderthals as early humans, I’m sure I’ve read that in many places.

The Natural History Museum, for example, refers to them as early humans: https://www.nhm.ac.uk/discover/who-were-the-neanderthals.htm...


It is also reasonable to argue that they and we are subspecies of the same species, rather than separate species.


Of course we are not separate species since successful interbreeding did happen (when the populations interacted).


If you look closely at it, the concept of "species" is impossible to define exactly.

That's fine in general. Many words are like that. But it makes it flawed as a scientific concept.


Donkeys and horses are not the same species, but they interbreed. Their offspring are called mules. Although mules rarely have offspring, so perhaps not a great example.


There is supposedly neanderthal DNA in some modern humans, implying that offspring were viable. Breeding resulting in viable offspring is one of the only consistent definitions of what a species even is.


We could make that the definition, but we'd be doing a lot of redefinition: coyotes and wolves would become the same species, as would lions and jaguars. Fertility issues tend to increase with genetic distance but aren't guaranteed; for example, mules are usually but not always sterile.


What about the 2nd generation of mules, then the 3rd? If the probability of viability keeps dropping generation after generation, will the lineage eventually die out?


It doesn't work like that. Backcrossing with one of the parent species (assuming fertility in the first place) tends, as you'd expect, to increase the likelihood of fertile offspring in proportion to the number of generations. And that's exactly what you'd expect in any hybridization event. Anyway, mules are actually pretty special in that horses and donkeys are fairly distant relatives (they diverged 4 million years ago) and have different numbers of chromosomes. All members of Homo (all thought to have emerged more recently than 3 million years ago) could probably interbreed, and the ones that had the opportunity probably did.

It's really unfortunate that schools tend to simplify the definition of species in this way, because it's just not meaningfully true at all. We could "make" it true by actually defining species this way (at least for animals), but it'd radically transform our taxonomies.


That's not the definition of species. Many species can interbreed, sometimes even species that are pretty distantly related.


I have never heard anyone refer to Neanderthals as human unless they are talking in a "those are cavemen, early humans" way that's wrong. Where is this coming from? Is it a non-English-world thing?

Generally Neanderthals are pointed out as an exception to cross species fertility since... humans have some Neanderthal DNA.


> Generally Neanderthals are pointed out as an exception to cross species fertility since...

There's no such rule and Neanderthals are not notable as an exception. Fertility is just a very rough proxy for genetic distance, which is correlated to our arbitrary "species" buckets but by no means a real line or hard rule. Many, many reasonably closely related species can interbreed, like jaguars and lions. Most of homo that had the opportunity could probably interbreed.


Wherever it’s coming from, it’s not based on any particular language. Museums in English-speaking countries use the term.


> I have never seen them referred to as human.

I have never seen anything else. But then in French, they are called Neandertal men and Denisova men. So pretty clearly humans.


I think it's typical in non-technical English to use "humans" to refer to homo sapiens only, unless you qualify it, like "archaic/early humans". Without additional context I wouldn't assume somebody talking about humans meant to include e.g. Neanderthals.


Qualifications like, “600k years ago”? That’s pretty clearly talking about the humans of 600k years ago. “Early humans” are still humans.


> “Early humans” are still humans.

I respectfully disagree, as stated in my earlier post. It would be nice if human language worked like that, but it does not always. A "stone frigate" isn't a boat, an "iron lung" isn't a respiratory organ.


Those things aren't closely related biologically, so not a good analogy. "Human" isn't a precise biological category, it's just based on how the word is used, unlike homo sapiens. And some people, including scientists, use it to mean closely related species of hominids. Or hominids generally.


I agree? I didn't know close biological relation was a prerequisite for a good analogy, but I'll gladly oblige: vampire squid, velvet ant, slipper lobster, naked ape.


I can't speak to what you have or haven't seen before, but yes it's quite common as informal language among anthropologists with particular kinds of views. Sometimes people will use "modern humans" or "archaic humans" or some other variation to differentiate, but not always and it's usually pretty clear from context regardless.

This is just one of many examples of definitions being extremely unstandardized in human evolution. You get used to it after a while.


Usage is definitely mixed, but I’m surprised you haven’t encountered this. From Wikipedia:

> Although some scientists equate the term "humans" with all members of the genus Homo, in common usage it generally refers to Homo sapiens, the only extant member. All other members of the genus Homo, which are now extinct, are known as archaic humans, and the term "modern human" is used to distinguish Homo sapiens from archaic humans.


Being 2.3% neanderthal, I'm... not sure what to think about it.

I think it's a case of the classic issue of trying to make sharp boundaries in a continuum. There just are no fully satisfying answers.


Wait a minute. I am part Neanderthal.

Does this mean I am only part human?


You're also part bacterium and part treeshrew, so I couldn't really say for sure.


If, 500,000 years ago, a hominid sat by a fire they lit, sharpening a stone tool, would you say:

"someone / a person is sharpening a tool" or "it / a hominid is sharpening a tool"?


I personally would call them a human, but this seems like a false equivalence unless you believe that personhood is something exclusive to humans. “Someone / a hominid” is perfectly valid and could at the same time be “someone / a person”.


Uh, I guess you aren’t reading the same things I am. Neanderthals were definitely “human”.


Human === Homo sapiens _sapiens_. Homo sapiens is a different thing


H. s. sapiens is not universally accepted either. As a practical matter, it'd raise a lot more eyebrows in an actual conversation with anthropologists than the informal usage of humans = genus Homo I mention above.


Can we all agree that 'human' should be defined as all members of the genus Homo?


I would be fairly happy with that definition, but I wouldn't rely on it out-of-context.

(I would definitely call Neanderthals "human" in almost any context.)


According to 23andme I am 2 to 4% neanderthal, and I think of myself as human...


While funny, I don't think this has much bearing on the topic at hand: we share a lot of DNA with chimps as well (or rather, the last common ancestor between chimps and humans). But that doesn't mean we should call chimps human.

So the shared genes alone can't be used as a reason to call Neanderthals human.


True, but while all humans have chimp dna, many humans have 0 percent neanderthal dna - are they more human than me?


No human had a chimp ancestor. But humans share a fairly recent common ancestor with chimps.

Some humans had Neanderthal ancestors, some did not, as you rightly suggest. However, all humans and Neanderthals share a common ancestor that's much more recent than our last common ancestor with chimps.

Ie we only have 'chimp DNA' in the sense that we share a lot of DNA with them, and that we haven't changed too much since our last common ancestor. Exactly the same is true of us and Neanderthals.


A single-word common name usually refers to a genus: "oak" is anything in Quercus, "wagtail" is anything in Motacilla and so on. That's not conclusive because there are plenty of exceptions, and "human" could easily qualify as a special case, but I don't see why "human" shouldn't be any member of Homo.


No



It’s fascinating to think about the number of PUs (procedural units) it takes to make a modern tool. Something as simple as a modern hammer must number in the thousands and a mobile phone in the millions or billions.


You might find the essay "I, Pencil" by Leonard Read interesting. It's told from the point of view of a pencil who talks about the complexity of his own creation and all of the components involved in the process.


That essay also loosely inspired the opening scene of Lord of War, which showcases the journey of a bullet from an underground mine to an Eastern European factory all the way into the head of an African child soldier.


Loved that scene. Reminds me of all the "autobiography" stories we had to write for various items such as cars, horses, computers and pens, back in school.


There's also Thomas Thwaites' "building a toaster... from scratch" project that goes into the details of how even an extremely simple appliance involves materials and processes that are effectively impossible for a single person to replicate (spoiler: he has to give up and "cheat" on some things).


Likewise, someone (I forget who) made a sandwich from scratch - growing the wheat and produce, raising the pigs and cows, the works.



Paraphrased here in 2 minutes by Milton Friedman: https://www.youtube.com/watch?v=67tHtpac5ws


It's just ideological drivel. People have never had any difficulty cooperating at that level.


Except for over 300,000 years ago.


One part of the drivel - he says don't let the government inhibit the invisible hand of the market that Adam Smith discusses in The Wealth of Nations. But in The Wealth of Nations, the invisible hand is the hand of the government inhibiting free trade between nations.


It gets inhibited by the banking system anyway, you would at least have to adopt the Chicago plan (full reserve banking) to avoid it. As it is now, it's about who gets given the money, rather than anything similar to free market.


That idea is pretty similar to "assembly theory" no? In the sense of how much information or evolution was necessary to generate some artifact, be it a benzene molecule, a stone tool, or an iphone


The same came to my mind. I think there may be some elements of assembly which have to do with biological process that don’t apply to “accumulation of technological knowledge,” but I need to reread it.

https://www.nature.com/articles/s41586-023-06600-9

> Here, we introduce AT, which addresses these challenges by describing how novelty generation and selection can operate in forward-evolving processes. The framework of AT allows us to predict features of new discoveries during selection, and to quantify how much selection was necessary to produce observed objects, without having to prespecify individuals or units of selection.
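
To make the "quantify how much assembly" idea concrete, here's a toy sketch (my own, not from the paper) of an assembly index for short strings: the minimum number of join operations needed to build a target when every intermediate built along the way can be reused. The real assembly theory work applies this to molecules inferred from mass spectrometry, not strings, and doesn't use this brute-force search; names like assembly_index below are just illustrative.

  from collections import deque

  def assembly_index(target: str) -> int:
      """Minimum number of join steps to build `target` from its characters,
      where any string built along the way can be reused in later joins."""
      start = frozenset(target)          # single characters come "for free"
      if target in start:                # a one-character target needs no joins
          return 0
      queue = deque([(start, 0)])
      seen = {start}
      while queue:
          pool, steps = queue.popleft()
          for a in pool:                 # one step: join any two objects
              for b in pool:             # already in the pool (reuse allowed)
                  joined = a + b
                  if joined == target:
                      return steps + 1
                  if joined in target:   # keep only substrings of the target
                      new_pool = pool | {joined}
                      if new_pool not in seen:
                          seen.add(new_pool)
                          queue.append((new_pool, steps + 1))
      raise ValueError("unreachable: any string can be built from its characters")

  print(assembly_index("abcb"))  # 3: build "ab", build "cb", then join them
  print(assembly_index("abab"))  # 2: build "ab" once, then join it with itself

The property it illustrates is reuse: repetitive objects like "abab" are cheaper to assemble than equally long non-repetitive ones, which is roughly why assembly theory treats complex objects found in abundance as a signature of selection rather than chance.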


It took humans 300k years, all of us, our collective output, to reach this point. Yet people insist on comparing a human who is part of society with an LLM alone, which doesn't even have search and has very limited context, just closed-book remembering.


You seem to be hallucinating a strawman


Who is comparing LLMs with humans?

As far as I can tell nobody is.

An LLM is a great tool that helps our brains the same way a hammer helps our hands.


Wrong thread, perhaps?


Billions of transistors in a smart phone, and those took many^many machines to build


The factories that build them are like something out of a sci-fi film. Quite mind-blowing.


There was a great article posted the other day that explained, in detailed but followable terms, all the processes that go into making a chip. I didn't realise that a single wafer can spend months in the production line.


The genius of photolithography (and its descendants) is that each chip is printed all at once: all the transistors at once, over several steps for the several layers. This is what makes chips inexpensive.


The genius of photolithography is many things. I wouldn't say the wafer process is any more special than the statistical models that can predict where a nozzle needs to point to lay down substrate with more precision than the nozzle itself can provide, or a number of other important inventions there.


using the same process that was used to make a single transistor (and package and attach connecting leads to it)...

...to make an entire circuit (and package and attach connecting leads to it)...

is the invention of the integrated circuit

and photolithography made that possible. It subsequently had a large number of important follow on innovations, but which were conceptually proper subsets of it.

had some other process made integrated circuits possible, that other process would have subsequently had many important innovations.

what flowed from photolithographic chip making is still flowing today.


Is "procedural units" a commonly used term? I couldn't find any info in a quick search. I'd love to hear more about the concept


You have to think that there were breakthroughs in communication technology — not just language in general, but possibly also one individual who happened to be good at explaining things, either before or after language, who not only taught more people but also taught them how to teach — that led to step changes in technology.


It all starts with imitation of some sufficient fidelity and generality that seeds a runaway evolutionary process. Language evolved to improve the fidelity of imitation much like how genes evolved pathways to minimize random mutations during copying.

Once imitation gets good enough (general and accurate), we're capable of spreading behaviors (phenotypes) without having to wait for folks to be born and grow up.


That's culture, and it's known to exist among animals in a limited way.

We don't really know what the upper bound for non humans is, because we don't know exactly what's being communicated by, for example, whale song.


I wonder whether we (as humans) are merely the first species that managed to overcome the initial barriers to developing culture and tech, thereby preventing any other species from doing the same (for now).

It feels like bipedalism, opposable thumbs, strong social behaviour and other factors were the perfect storm at the perfect time.


There's a professor at St Andrews, Kevin Laland I think, who has done a lot of work on imitation. His theory is that imitation is generally useful, but without high enough copying fidelity culture won't develop.

For example, monkeys have been known to exhibit fashion trends (hanging a bit of grass out of their ear). But these trends fizzle out over time, before more behaviors can be layered on top.

I think you’re right about those factors creating a good environment for this to happen and I bet they’re sufficient but not necessary. Too bad we don’t have another example :)


Chickens are bipedal. There are tree frogs with opposable thumbs. Elephants and cetaceans have strong social behavior. Other hominids have all these.

What other animals make fire? Cooked food was the game changer.

https://www.livescience.com/5946-chimps-master-step-controll...


Fire is rad but I don’t see how it explains modern civilization.

If most animals are operating on a calorie deficit and fixing this can lead to larger brains then why haven’t domesticated animals evolved towards our level of intelligence?


I don't think fire and therefore cooked food was THE change, though I think it definitely helped, but

>If most animals are operating on a calorie deficit and fixing this can lead to larger brains then why haven’t domesticated animals evolved towards our level of intelligence?

Because farm animals do not evolve; they are bred. We have directed their evolutionary paths for centuries, away from what they would do on their own, towards creatures that produce more milk, more meat, or fattier meat.


Probably because our interference with their breeding selects for non-smart qualities.


What interference? The animal world was free to evolve for the majority of the history of the earth. Literally millions of years and nothing came of it. Until we came along.


Domestication, the process we put animals through to make them peaceful enough to provide a calorie surplus without stomping the early humans to death because they got scared by a branch.


yes


Modern civilization is multi-species.

From a calories-per-unit-of-labor standpoint, dogs and cats have now clearly eclipsed humanity, using their emotional intelligence to create a post-scarcity Marxian utopia where they can mostly do whatever they want all day while humans toil for them.


Other species have culture (behavioural traits/patterns and social organisation/cues that are communicated, not innately inherited via genes and development), and other species have tech (making intentional modifications to their environment, creating structures that better support their existence).

What seems to be rare is the ability to use culture as a medium to store and transmit tech.


It's that Yuval Noah Harari book, isn't it?

What separates humans is that we can construct common fictions that we actually believe in, e.g. "my job is to maximize shareholder value", where both the corporation and the responsibilities towards it are made up but generally believed.

There's of course a rather large elephant in this room, too, that decides a lot of things about our lives.


Harari’s book is awesome but I think he gets stuck too far up the hierarchy.

Trying to describe culture with stories feels like trying to describe biology with proteins. It gets us most of the way there but also adds a whole ton of complexity because it misses the fundamental nature of the system.

Humans are great at general and accurate imitation. This probably seeded a runaway evolutionary process, the result of which was tools, fire, and language.


I think if any animal has a shot at sentience, it'll be dogs. We've been improving their diets and breeding for intelligence.


This is what the book Ishmael by Daniel Quinn proposes. Interesting read.


The same thing probably happened early on: chemical evolution may have thrown up multiple replicators, but our ancestors had some small edge which allowed them to literally eat the competition.


This was at least 200k years before the advent of speech if you go by the hyoid bone evidence.


What if the first language was a sort of sign language, and vocalizations were only auxiliary, optional? With time, humans who vocalized had a better chance to be understood, and their vocal tract evolved. Humans instinctively gesture to this day when speaking.


This has been hypothesized, but the fact that there aren’t many or any purely sign languages still around and other primates don’t show signs of using signs makes that seem like a reach, IMO


>other primates don’t show signs of using signs

other primates don't show signs of using vocal language either, yet humans have it

>there aren’t many or any purely sign languages still around

we're the only species of modern humans who survived, maybe Neanderthals/Denisovans etc. used sign language, who knows

maybe the fact that we started using vocal language gave us a big evolutionary advantage, making other similar species without developed vocal tracts extinct (through our warfare/assimilation)

deaf/mute people around the world have always historically come up with ways to talk using signs (different unrelated systems), i.e. we still have the means to do it, but it's unnecessary when you can produce sounds (freeing your hands to do work)


Yes these are the arguments in support of the idea. I’m not going to dismiss the idea completely but it’s not at the top of the list of likelihood imo


I would consider this evidence that language predates human vocalizations. We already know deaf children will invent signs and gestures to communicate, and language is not dependent on a hyoid bone. Do we have any way of dating the relevant neural structures through genetics?


I think we’d have trouble because we’d have to tie the gene to a specific linguistic cognitive function. My hypothesis is that humans configured themselves into self-replicating group structures I’d call institutions, and language evolved as a way to facilitate that. These institutions exhibit all the thermodynamic properties of life, and they have goal directed behavior independent of individual humans.


FOXP2 was thought to be the genetic basis for language, but this has been overturned[1]. As far as I know there isn't a good candidate for a gene selected for language.

[1] https://www.the-scientist.com/language-gene-dethroned-64608


It doesn't even have to be verbal; it can be through showing. Many animals teach each other that way, and that goes back much further than 300k years ago. But there must've been a change in early humans that made it more effective, made it go beyond basic skills.

The other thing to consider is that they reached a point where they could gather food / survive more easily. If less time needs to be spent getting food - because they've reached a level of intelligence where they can, for example, store food for longer, or prepare / cook it for more efficient calorie gathering, or grow food, or share food / acquire more than an individual needs, etc - then there's more time left for cultural exchange, experimentation and play. That is, spend more time experimenting with a stone tool to make it better beyond the base necessity.

disclaimer: I have no idea what I'm talking about. I don't actually believe early humans spent that much time surviving, looking at apes they seem to spend a lot of time just sitting around.


This matches up well with my hypothesis that language developed as a mediator of institutional behavior and not the other way around. https://spacechimplife.com/institutional-code-and-human-beha...


It’s not necessarily scientifically accurate at all but this is the premise of Spaceship Earth, the ride in the big geodesic sphere, at EPCOT in Disney World. It’s an interesting take on history being accelerated by advances in communication technology, and not just language.


I've read a lot of hypotheses about what could have happened.

Some say it was language.

Others say it was the development of consciousness, which came with an improved theory of mind.

Others say it was thinking in what-if/hypothetical scenarios.

Maybe they all developed in lockstep, somehow.


Or cultural breakthroughs - either societies that started to value learning, or the advancement of such cultures?


Theory: there are no humans without language. Consider: what language do you think in?



There are also humans who can't conjure up an image in their head. A Mozilla cofounder wrote a fairly famous piece about his own experience.

https://www.theverge.com/2016/4/25/11501230/blake-ross-cant-...

If there are people who can’t picture and people who don’t have an inner dialogue, I think it lends more credence to the idea that we don’t have free will and are just a bunch of chemicals controlling our behavior. It also makes you think about consciousness and whether it’s even real.


>If there are people who can’t picture and people who don’t have an inner dialogue, I think it lends more credence to the idea that we don’t have free will

That seems like a bizarre leap. What's the connection?


Because it implies that we only behave based on how we feel, not how we think. The thinking part is an illusion. Therefore, our “consciousness” has no effect on how we behave.

I’m not willing to die on this hill by the way so if someone else comes along and argues otherwise, I’m open to other ideas.


There's alien hand syndrome in split-brain people, where the verbal hemisphere will invent its own confidently incorrect explanation of "why I just did that" when the non-verbal hemisphere does something with the hand it controls. And there's Marvin Minsky's society of mind. But none of this undermines free will, it just means the will is a function of lots of components, only some of which are involved in contemplative thought (and not necessarily verbally).


Just because you're reliant on words to think doesn't mean others are.

Language certainly helps shape thoughts, even to the extent that, certainly for older kids and adults, different languages influence and constrain thought in different ways.

But consider dreams. These are adjacent to thoughts, and don't require language. Some dreamers are even able to express will not only on their actions, but over the dream itself. And if proof were needed of a conscious agent, it's there even in an apparently unconscious being.


> The thinking part is an illusion.

Except for this part of thinking, paradoxically.

https://plato.stanford.edu/entries/self-reference/


Turn it around. How would "consciousness" manifest if we behaved based on how we think, not how we feel? You need to prove the opposite too.

There is no prerequisite that it be either/or; I guess it is both.


Thinking isn't even a path to "free will". Thinking is pretty clearly determined by exterior stimulus just like feeling is.


How would thinking be determined by exterior stimulus, when we all vary to some extent to the same stimulus? Behaviorism was found wanting because it didn't account properly for what goes on between the ears. And the rest of the body, which makes us individuals. The idea of free will comes from the fact that people do make different choices. We make different choices than our past selves did. From a compatibilist perspective, one could say free will comes from being part of determining what goes on. Things don't just happen to us. We participate in what does happen. We're not simply puppets on the string of the external world. We're just as much a part of causing things as everything else is.


> How would thinking be determined by exterior stimulus, when we all vary to some extent to the same stimulus?

Because, and this may shock you: we are not all the same person

You make an excellent point that people respond differently to a given stimulus. This is explained by those people having had different genetics, environments, upbringings and even breakfasts leading up to the point of the test. It has been demonstrated that your metabolic state will vastly change your response to stimuli throughout a day.

I ate something different for breakfast today than yesterday because one sounded tastier than the other. I made this decision not because I have an eternal soul making randomized decisions like a roulette machine, but because yesterday I craved salty food and today yogurt sounded better. I am a different person today than I was yesterday. My gut, brain and everything else have been changed by all the stimulus I experienced yesterday and are now influencing my responses today.


Where does "clearly" originate, and what does it refer to?

Is "just" a synonym for "exactly" in this context?


"Clearly" refers to the obviousness or self-evidence of the statement. In this context, "just" means "similarly" or "in the same way," or ig "exactly" if you like.

"Clearly" because exterior stimuli clearly influence both thinking and feeling, a concept supported by common experience and scientific understanding of human cognition and emotions.


How are all of these measured?


There are actually a ton of different ways. The most obvious is probably MRIs, of which many have been conducted with the subject exposed to a huge variety of different stimuli. Then there are the good old-fashioned behavioral studies, ranging from the classic "does the subject jerk their hand back in the presence of heat?" to the more recent hungry-judge phenomenon.

I wouldn't think the fairly self-evident assertion that exterior stimulus changes both the internal state of a person and their behavior would be controversial.

At this point in history, "free will" is really just the god of the gaps, and those gaps shrink every year. It's probably a useful religious concept, but as far as reality goes it's the least interesting question one could ask about the whole human experience.


Did you remember to check/contemplate whether your "measurements" are correct?

This seems like a rather recursively self-referential problem.


I'm not really sure what you mean here, could you elaborate?


Well, above you seem to be describing "how things are". What if what you're describing is not the things themselves, but rather only a model of the things?

Like, where are the details of how you are "measuring" these things? And what measurement instrument returns values like "is clearly" and "is really just"? I can think of only one.


Still not entirely sure what exactly you're getting at here.

Yes what I am describing is my opinion, which I have formed from looking at evidence. My opinion isn't really a measurement in the sense that one can measure activity in parts of a brain with an MRI. It is really just the state of the system.

I'm not really interested in abstract epistemological definitions of free will because they aren't useful in the way that neurology, psychology and biology generally are. I am interested in predicting or explaining behavior or observations, mostly. You might be interested in that kind of exploration, which is great! If you think it does have bearing on such things I am totally interested in hearing how.


> Yes what I am describing is my opinion

Have you an opinion on whether your opinion is necessarily true?

> If you think it does have bearing on such things I am totally interested in hearing how.

I happen to believe humans have > 0 free will, and that if we do it derives from cognition, at least primarily. If this is the case, I believe that it would be advantageous to be able to exert control over cognition on demand. I also believe that doing something often requires trying to do it, and that if one doesn't think something can be done, it decreases the chances that one will try to do it, in turn decreasing the chances that it gets done.

I believe this to be > 0 "true", and that it has extremely broad applicability.


So yeah, I don't disagree with you that attempting to exert control over oneself is a desirable thing, or that it generally results in things that wouldn't have happened if one didn't work to have control over one's cognition, behavior etc. This isn't at all at odds with my deterministic view, in my opinion.

I guess what I am asking is what is causing you to exert control over cognition? I would say it's the sum of too many variables to count (including previous states of the system) acting on you to cause you, the system, to be in such a state that that is just what it does. Given enough time and resources we could probably come up with a half-decent accounting of the most important of those variables and be able to explain why the system is doing a given thing. In this example that means explaining why the system is modifying its own state in some way. What would you say is the cause?

> Have you an opinion on whether your opinion is necessarily true?

Yes my opinion is that my opinion is likely true as I believe it is supported by evidence. I would bet on it but I am not certain about it.


> ...or that it generally results in things that wouldn't have happened if one didn't work to have control over one's cognition, behavior etc. This isn't at all at odds with my deterministic view, in my opinion.

Can you explain how it is not? As I see it, my theory is directly breaking out of determinism.

> I guess what I am asking is what is causing you to exert control over cognition?

Consciousness (self-awareness, will & determination, etc...the "how" of which I make no claims of knowledge about). I absolutely agree that it is substantially out of our control and thus at least semi-deterministic, where we would differ (at least) is on the 100% part.

There are many problems, here are some:

https://en.m.wikipedia.org/wiki/Decidability_(logic)

https://en.m.wikipedia.org/wiki/Necessity_and_sufficiency

https://en.m.wikipedia.org/wiki/Direct_and_indirect_realism

https://en.m.wikipedia.org/wiki/Cotard%27s_syndrome


Influence != Determine


Yes, it is indeed possible that in that small and shrinking gap of behavior that isn't explained by some complex set of circumstances and stimuli there lies some magic immaterial soul that grants the ineffable quality of free will. But since we have yet to find any evidence for such a thing, I wouldn't keep my hopes up. Also, there are much more interesting questions to be asking.


Might "explanations" have a (perhaps private, or semi-private) epistemic value attribute? That could certainly change the game up.


Well most of the explanations I personally am interested in have a public and therefore high epistemic value. Since they are published and repeatable. Certain behavioral tests or self-report surveys, on the other hand, have lower value because of the private nature of what we're intending to test. They still have some value though.

Private experiences may be very important for individuals and, like I said elsewhere, free will is certainly a useful idea in religious contexts. I do not believe this has any bearing on practical matters such as our ability to predict or understand the actions taken by any given system, such as humans or computers.


> Well most of the explanations I personally am interested in have a public and therefore high epistemic value. Since they are published and repeatable.

Are you saying that these two claims have some sort of scientific ~proofs:

1. Thinking isn't even a path to "free will".

2a. Thinking is pretty clearly determined by exterior stimulus

2b. just like feeling is.

Note: a definition for "determined by" (in percentage of total causality) would be required in the specification, as would a non-ambiguous definition for "just like".


> Are you saying that these two claims have some sort of scientific ~proofs

No. I think this statement is a conclusion only indirectly supported by scientific fact. This is a logical leap from more and more behaviors and internal states being correlated with various external stimuli over time. I believe it is only a matter of time until it becomes entirely possible (though silly, and it will likely not be done because it would only serve to prove a foregone conclusion and would be an unfathomably huge undertaking) to provide a complete accounting of all the factors that influence a system, bringing it to the state where it reacts in a given way to some stimulus.

The only other conclusion I can see is that human beings are not deterministic and there is some magical spark, i.e. an everlasting soul, making randomized decisions for us.


> I think this statement is a conclusion only indirectly supported by scientific fact.

Can you expand on this "supported by" scientific fact? Is it something more than is not inconsistent with, and more towards must be, necessarily?


It only implies that some people think (or perceive thinking) differently.


I find it hard to believe that they can't at all imagine what a tree looks like or imagine the face of a friend. I can understand some difficulty in conjuring a perfect image, but nothing?


The fun thing is that for those of us who cannot, it's just as hard to believe others can conjure images. Yes, the idea of mental imagery is deeply ingrained in our language, but I'd always assumed it was metaphorical till I learned of aphantasia when I was 30.

One of the more cliche, and not super useful, tests is: "imagine a ball on a table; someone pushes the ball and it begins to roll. What color is the ball?" For me that was a revelatory statement, because I'd never considered that others might give the ball a color, or size, or texture as they imagine it. I assume not everyone with the ability to visualize does, but it seems like many do, according to the literature. To me it's just a statement: a ball is rolling, pushed by a nondescript person.


So for me I definitely visualized a ball on a table, but the color wasn’t resolved, if that makes sense. After hearing the question asked it kind of snaps out of superposition into a red color (but there was definitely a color-choosing step that happened after hearing the question).

Same thing happens if you ask “what surface is the table on” or “what country is this image in”. It’s layers I can add to the mental state, but if they’re not important they’re just not there.

I’d say the closest thing to what I was “seeing” before the color question is something like a wireframe, or maybe the gray color of a Blender model without colors/textures applied. Grey in the sense that you don’t really notice it’s grey, you just understand the grey color means color is absent.


You described my experience well as well, except my abstract colorless ball was barely even visual. I was quite happy to entertain the idea of a ball on a table with very few concepts activated - roundness for the ball, flatness for the table, gravity holding the ball on the table. Rather like a physics problem.

When forced to dereference a color, it felt like an entire system was booted up. Not only did the ball have color, it also had specular reflections. There was lighting. The table gained an abstract sense of having texture.

What I find particularly fascinating is that my mind also assigned "red". I wonder if that is a coincidence, or a deep reflection of something about how brains work. Supporting evidence: in languages with only three words for color, the three colors are "light", "dark", and "red": https://en.wikipedia.org/wiki/Color_term#Stage_II_(red)


I'll also chime in and say I was able to visualize the generic ball-ness and as soon as the question of color arose, I also immediately picked red despite not being my favorite color or anything. I blame childhood depictions of balls lol


I imagined a small, red, dense foam ball actively being placed by a feminine hand with matching red nails and a fancy bracelet onto a white marble slab with olive green marbling. The slab is a couple inches thick and overhangs the cabinets below it. The point of view, like watching from my own eyes, is looking down at the counter with the ball being placed by the ephemeral hand as if they stood opposite me.

What color was the ball? I didn't have to think; I knew it was red. However, I can't tell you much about the cabinets or the arm or body attached to the hand. Totally unrendered and void.


I’m curious, what kind of work do you do? Do your dreams have that kind of detail?


Software engineer, but I've also been a graphic designer, a photo editor, a fire and casualty insurance agent, a financial services advisor, a construction worker (very basic carpentry, electrical, plumbing, roofing, etc), and high school math teacher, all before being a software developer turned manager turned developer again. And, yeah, dreams have that much detail, but also the extra depth that comes in dreams like knowing intent, emotions, relative history, etc


That’s a really cool description. Makes me think about people complaining of having unwelcome images brought to mind. Which I can relate to through songs. If someone says Baby Shark it’s going to start playing whether I want it to or not.

For me when someone asks “What color is it?” I think to myself “I don’t know you’re the one telling the story you tell me.”


My ball was kind of an indeterminate grey, too, but it was on a granite countertop of a kitchen island in an open concept kitchen/living room. Like one from a Bounty commercial or something.


You accurately described my experience.


I basically only can see in dreams and visions, but rarely remember full images. Basically, everything in my mind is in a compressed symbolic state.

The most notable difference is if you ask me to imagine something, there isn’t any detail in the “image” that I haven’t intentionally placed there. “Imagine the face of a stranger you haven’t met before; what color are their eyes?” Idk, I can add eye color to the image, but I certainly can’t just observe it, because both before and after it’s just the concept, not like a picture I can just look at.


For me, it is a picture to look at. Like a bust, I had a head and shoulders to look at. The stranger's eyes were deep brown; practically black. They were half asian half white, had black hair, palish white skin, dark eyes, a flat near-smirk. He was wearing a blue collared button up shirt with a mild jeans like texture. The sleeves were rolled up. The buttons were white.


Same, at least 90% or so.

I'm terrible at recognising faces - changing a "symbolic" feature like facial hair or glasses will completely throw me - but great at reading maps.


I feel like I kinda have this, and I'd describe it like this:

With actual vision, there's a pipeline of steps: light hits actual cone/rod cells -> optic nerve fires -> brain stage 1 -> brain stage 2 -> brain stage 3... where each of those stages also have various side effects associated with the experience of "seeing"

I can't synthesize "brain stage 1" at all I don't think, I need my optic nerve to send some signals to "see". But I think I might be synthesizing "brain stage 2" when I imagine seeing e.g. a red apple in a pretty similar way to actually seeing it - I can feel "red apple vibes" but there is no image of a red apple in my field of view. My brain state certainly contains some data about the color of the imaginary apple and the shape of it, but it's not nearly the same as actually seeing it.

This is all astonishingly hard to explain in a way that communicates accurately between two people though.


Absolutely same. I wonder how much of it is most people being like this and many of those saying "well yes of course I see the apple" without really thinking about it. If someone actually sees the apple, v.s. imagining what it would be like if they were seeing it, they're hallucinating. Not that it wouldn't be cool to be able to do that on command, I just wonder how many people really can.


I really see it. It's not hallucinating as the imagined image is not in the real world. It's like a different input to the visual processing part of my brain (but both inputs work at once, a bit like looking at a different thing with each eye). I have visual memories (snapshots I can call up) of things I never saw with my eyes (e.g. scenes from novels) that are as strong as any that came from my eyes.


Can you do it with your eyes open?

Have you tried drawing a real object then the imagined one and seeing which ends up better? I've always thought that this would be a great skill for visual artists.


It's part of the reason why discussions about consciousness often result in people talking past each other, when they start with the assumption that other humans think just like them.


I started training to develop the ability around age 10 or so. As far as I can tell, it wasn't an innate capacity in my case.


I see nothing at all.

I spent 45 years or so thinking people were talking metaphorically when talking about picturing things, because surely they couldn't actually see things while awake?

I see things when I dream, so I know what it is like, and some years ago I had a single experience during meditation I've never managed to replicate, but otherwise nothing while awake.


So if I told you to close your eyes and think about your best friend's face....you cannot see ANYTHING?


No. Nothing.

EDIT: This is a great article about Ed Catmull and aphantasia: https://www.bbc.co.uk/news/health-47830256


Interesting and thanks for answering.

If you're on HN, I assume this hasn't impacted you academically?


No reason why it would. My abstract reasoning is far above average, and for that matter my ability to draw used to be far above average, though I'm a few decades out of practice. I "visualize" what things look like and where things are in relationship to each other in the sense that I know where things are with a level of precision well above average - I just can't see it in front of me, though I know where they are and what they look like.

I use the term "visualize" because that is what I thought people meant when they said to visualize or imagine things. I remember the shape of the visual rendition of source code, for example, and that is usually the basis for how I navigate large code bases. And I know what parts of papers I last read 30 years ago look like, but I can't see them.

I think the biggest way it has impacted me is that e.g. when it comes to fiction, I find visual descriptions of things usually bore me unless the language used in itself is particularly compelling because the words themselves are beautiful to me. So I often skip and skim visual descriptions.


There's a spectrum, but some people see nothing at all. It's known as aphantasia.


If consciousness, or rather "philosophy of mind", interests you, and you like thinking about it, your position that "consciousness isn't real" (I simplify here) is a more and more talked-about position, and its proponents are called "illusionists". Basically, they think we have an illusion of consciousness/subjective experience. Once again, poorly simplified; I'm not smart/dedicated enough to totally understand the idea, and it is hard to explain something you don't totally get.


What even is "free will"? It's an idea that falls apart on close inspection.

You think "I will wash the dishes". You wash the dishes. Ta-daa, free will!

A paralyzed person thinks "I will raise my hand". Nothing happens. No free will!

An ADHD person says to themself "I will wash the dishes". They don't get washed. A different part is broken than to the paralyzed person, but the result is the same.

A lazy person says to their roommate "I refuse to wash the dishes". Free will? They "could", if they were not lazy. Just as the ADHD person "could" if they did not have executive dysfunction. Just as the paralyzed person "could" if their spinal cord were intact.

"Free will", as a philosophical construct, is nothing more than an attempt by the ego to regain a sense of control in the face of the irrefutable realization that the universe is governed by rigid laws, and we are made of universe. (And no, quantum randomness doesn't help you - a random choice is hardly more of an extension of will than a deterministic one).


I agree with the only caveat that the universe should have a maximum amount of precision it uses to calculate every next frame, and at the margin of error of these calculations, non-determinism may arise.


Even if those assumptions were true (and why should they be? the universe is not a computer, does not have "frames") what would that have to do with "free will"?


Why should yours? It seemed like you liked musing about this topic so I added a bit.

You (and I) have no idea if the universe is or is not a computer. If you don't see the connection of free will to the error margin of a deterministic system then I'm not sure how else to put it. Just imagine a deterministic program getting a bit flipped by a cosmic ray and turning non-deterministic, and apply that to the universe's system of laws.


I'm also not seeing the connection to free will. Did the program choose to be broken by a cosmic ray? Did the cosmic ray choose to break the program? Stochasticity doesn't imply volition.


It's the only way I've come up with that allows for free will, maybe other people have other ways. If the universe is not like a computer then I have even less ability to explain where free will would be possible.

Cosmic rays are just an example to explain that there's external causes of non-determinism to a deterministic system. Another example I gave above was the limit of precision of the system itself, allowing for "free will" (meaning something which isn't perfectly explainable by just the laws+starting conditions).

To me it's obvious the universe is deterministic, but the fun part is imagining ways in which it might not be. Comparing the universe to a computer is a fun way to think about it.


We're too deep so I can't reply anymore, but I didn't say this was an explanation for it, more like a possible requirement for it. There may be other ways in which free will would be possible. Remember, I said already multiple times I start from the premise there is none, but it's fun to take the other side and try to come up with possibilities. I think you're too boring if you just take a position and never try to defeat it in your mind!


  10 print "If the universe is non-deterministic, that provides an explanation for free will!"
  20 print "How? Coin flips don't have free will..."
  30 goto 10


>You (and I) have no idea if the universe is or is not a computer.

You actually cannot just say "Man, the universe is so complicated, so it is possible it could be anything!"

That's a non-sequitur and not sound reasoning or logic.


I didn't say it could be anything - I said it could be a computer! I don't think the universe can be a towel for example.


The examples you gave make no sense.

Firstly, a person suffering from ADHD can do the dishes, despite the odds being against them. And a paralyzed person can raise their hand, given advances in technology.

And these are bad examples, as being incapacitated isn't an argument against free-will. The paralyzed person still wants to raise their hand, even if they are unable at the moment. And the far more difficult question is whether that want was entirely predetermined by the universe or not.

Quantum randomness and the huge number of variables at play point to the fact that believing that we don't have free-will is a non-falsifiable notion.

Consider this thought experiment: God comes to you and says ... "Here are 2 universes you can observe, one in which people have free-will, and another, that looks similar, but its people are just sophisticated automatons. Make an experiment to say which is which."

Your irrefutable argument quickly falls apart because there's no way you can show any relevant evidence for it. And it's worse than talking about the absence of God, because we experience free-will every day. It's like talking of consciousness — we can't define it yet, but we know it's real, as we're thinking and talking about it. And a theory being non-falsifiable does not make it false. God could exist and you could have a soul.

You may or may not believe in free-will. But not believing in free-will is dangerous because it makes one believe that everything is predestined, so there's no point in doing anything, no point in struggling to achieve anything, no point in trying to escape your condition. If no free-will is possible, does life have any value at all? And that's the actual philosophical bullshit.


Mental imagery and mental language could both be a layer above a lower more basic form of abstract thought. You might not need either to think, plan, and reflect effectively.


I don't see the connection.

When I am doing math or painting I am not spelling out what to do in my head. You can probably do just fine without an inner monologue.


With regards to consciousness per se being real, that question was resolved with Cogito, ergo sum.


Indeed, there are two types of people in that regard, and each is usually blown away to learn that the other type exists. One thinks in words; the other has no words, just a smooth stream of thought going.


That's wild - I am clearly the first sort, because I cannot imagine what "a smooth stream of thought" would even be if it were not expressed in words.


For what it's worth, I used to be certain that I thought in words too. Then I moved to a different country and used a second language often enough that I sometimes think in it. I then realized that there are periods where I think in neither my native language nor in my second language, even when I'm thinking. YMMV of course.


This describes my experience nicely too, although I didn't quite realize it until reading this. I was thinking I just thought in words and had a tendency to mix up the bits of various languages I know some words from, but occasionally I have some thought, then for one reason or another become conscious of the thought and get confused which language I was thinking in. Sometimes this even manifests as trying to mix languages (eg at one point I used to struggle to not sprinkle in french and farsi words into normal english, despite speaking english at a native level and only knowing the basics of the other two).

I've always thought of my mental model as an endless conversation with myself, but I think the more fitting description would be a "smooth" series of thoughts which only materialize into language when I explicitly focus on those thoughts as their own thing.

I do also think visually for things that have a visual component though.


Do you find your domain of thought shifts when you do not think with language? For example, do you feel you are able to do complex reasoning without language?


I usually dream in English - but I don't speak English. But I understand everything thanks to my dream's subtitles.


It is like your motor-control thoughts: you don't think "let me pick up this spoon", you just pick it up. When given an equation you do not think through the steps; if x - 9 = 22, you just say x is 31, yet you have skipped the steps in your mind. There exists a base mental representation of these things (some call it a small world model of a large world model) encoded in an abstract knowledge representation, sort of like a compiler's abstract syntax tree. And this representation is sort of universal, in that you can take it from someone and give it to another person and they will automatically know it, provided that they have the requisite properties of the world model in which it exists. For instance, a thought about the symmetric group S3 could be transferred provided that the requisite structure for groups exists (just the definition, and the concept of rotation, which already exists by default in mammals).


I don't think in words unless I am speaking, writing, or modeling a conversational interaction. The vast majority of my thoughts are in the form of sense impressions, motor sequences, visualization, or wordless intuition.

I'm sure you think that way too, you probably just layer a narrative over it. The sibling comment about picking up a spoon is an example I sometimes use - see yourself walk to the kitchen, move your hand to open the drawer, pick up a spoon, pour the tea, scoop the sugar. I can describe them but it's not natively linguistic to me.

I'm hell at rearranging furniture or putting together an engine, not so good at positive self talk.


It’s a process of serialization. I frequently have thoughts or ideas that take me a while to express in words. Being stuck to words seems so limiting, slow, and linear that I have a hard time believing it; surely there are more fundamental mental processes generating the words and the monologue is just a serialization of thought? Right?


That's an interesting perspective, and I think you are on to something.

I have some experience with a meditative/dissociative state in which that monologue - which I think of as the "narrator process" - can be observed as just one of many mental subsystems, neither containing the whole of my consciousness nor acting as the agent of my will. The narrator merely describes the feelings which arise in other mental components and arranges them, along with the actions I take, into some plausible linear causal sequence.

Minds differ in many ways, and perhaps one way your mind and mine differ is that words flow quickly for me and do not feel slow or limiting; so I suppose I am easily fooled into perceiving that narrative as the medium of thought in itself. It had not occurred to me to describe the activities of the other mental subsystems as thoughts, but why shouldn't they be? And now I have a better guess at what it might be like to experience the world in a different way. Thank you!


> Indeed, there are two types of people in that regard

> One thinks in words

No. There are people who believe they think in words, because they haven't bothered to examine the question, but there are no people who think in words.

Think about the number of people you've ever seen do a double take at the idea that "I don't know how to put this into words".


I absolutely think in words. There are no pictures or any other mental models. There is literally a narrative of words and blackness inside my head.


Do you prefer lions or tigers? BBQ sauce or ketchup? Green or blue?

I bet you can answer all of these at a moment's notice, but where does your answer come from? Have you ever sat down and tried to reason out which one you like more? Or do you just 'know' the answer and then 'come up' with the justification afterwards?

People can think in words, but it's certainly not their only way of thinking. I think the thinking in words is kind of like "thinking on paper" where you're trying to explicitly reason through something. The thinking process itself seems to be something on a deeper level.


The crux is the word 'I'. When I say 'I' think, do I mean the conscious part of me which has direct experience of that thinking? If so, then I am denying all of the thinking that 'I' don't do, but my brain/body does.

From that perspective, the experience of thinking in words or pictures is distinct from actually thinking in words or pictures. Saying one thinks in one of these ways seems to be saying what they identify thinking with.

For example, I don't usually think of fantasy as thinking. If I day dream, I wouldn't say I am thinking, but that is fairly visual. To what degree am I saying something about myself vs my identity if I say I do or don't think in words given that context?

Relatedly, I've noticed that when it comes to remembering something, it is not 'I' that remember. Rather 'I' set up mental cues and direct focus, which then hopefully causes the memory to be placed within my awareness. This happens below the level of direct experience. But I might say I failed to remember, taking responsibility for something that 'I' - the part separated from the automatic functions of the body - did not do.

So I'm suggesting statements about words vs pictures are about ego, metaphor and meaning-building, and not about actual mechanisms or communicating actual differences in the experience of thinking.

It can be difficult to talk about these things because such conversations implicitly occur between our identities, not between who we actually are - something beyond our grasp - and the noise this introduces is something I don't know how to surmount, or if it can be surmounted.


No, you don't think in words. The words in your head are a side effect of the thoughts. They aren't the thoughts.


Why is it so hard to accept that different people can have a different internal experience of our shared reality? Perhaps different people have conscious awareness of different aspects of their own cognition pipelines though the pipeline is similar for most people, or perhaps there is a more fundamental diversity in how different people think. I find it interesting though how strong an aversion some have to being told what's going on in their own heads.


I'm highly suspicious that the whole "inner vs no-inner monologue" thing doesn't actually exist and has more to do with these things being extremely hard to explain using language, and people describing their subjective experience in subtly different ways.


I 100% do not think in words or sentences or language. It feels more like node-traversing a graph of concepts, but it's all completely abstract. While I am able to consciously play back my thoughts linguistically or visually with some effort (and at the cost of efficiency), by default there are neither words nor images.

French is my first language, I'm fluent in English, and I know a bit about many languages... But I've never had an inner voice or thought in language, even when I was very young.


Right, very like me. Language is the front end; I use it to express thoughts when needed (to record, communicate and check reasoning). The back end is working in some kind of abstract space.

I suspect people who say they think in language are not really thinking "in" language. I think we all have this abstract conceptual space in us. I'm not even sure what it means to think in language; where do the ideas and concepts come from that are being expressed? I guess some people have to internally "hear" a thought expressed before it is real to them.


Internal monologue (or dialogue, as it can play out as imagined conversations) is an apt name for it. Imagine reading a play. Everything is expressed as language, and ideas and concepts are expressed through that language.


What I am saying is that probably everyone thinks the way you do, it's just that not everyone would describe it in that way. For example, I'm sure you came up with this comment in your head at some point before typing it and many people would describe this as an inner monologue, whereas you apparently do not.


Oh, perhaps. And I didn't mean to suggest that I was special in any way, but... When I discuss this with people in real life, it is very rare that I find someone who relates to this way of describing how I think. Most people are just stuck in literal wide-eyed disbelief and either tell me they think in literal words, fragments or sentences, experiencing a real "inner voice", or they tell me that they visualize things somehow.

Sometimes people seem to not be sure how to describe the experience of thinking... Which I suppose explains a lot! /laughs


Yep, make a statement like this and somebody will always pop up and say "but me, but I, because I characterize my inner experience as such-and-such and I like to think of myself as special ... that makes the distinction real."


Good, I have the same suspicions about "no mental imagery vs. vivid mental imagery". Perhaps people who claim either thing are just being, uh, imaginative.


"inner monologue" exists whenever people have hopes, dreams, plans, etc--thats what humans being intentional agents mean. Sure, there is a spectrum: those whose monologue is so active during the wakeful time to those whose daily brief monologue is about what to say in tomorrow's scrum standup.


Research indicates that reality is more complex and messy than that.

https://www.livescience.com/does-everyone-have-inner-monolog...

It seems more likely that a theory that bases being an intentional agent on the notion of an inner monologue is not the best model for what's happening.


We think in something Steven Pinker termed “mentalese”. It is distinct from human language. There are many examples of people who are cognitively normal, but either lack or have severely impaired language for various reasons.

The distinction between reason and language is not widely appreciated, and is a main if not the primary reason people overestimate the abilities of LLMs.


This really reminds me of the recent “Helen Keller on life before self-consciousness” and the discussion about people who were born deaf and how their thinking changed after learning sign language.

https://news.ycombinator.com/item?id=40466814


Stating the mentalese hypothesis as fact is a bit tenuous


Sure, like most areas involving intelligence, much work is yet to be done. However, mentalese or the Language of Thought Hypothesis, occupies a much stronger position than the hypothesis that human language itself is our fundamental engine of reason, which is almost assuredly not true. This last fact has serious implications for the valuations of the majority of AI companies.


This is sort of the chicken and egg question though. You're basically asking if pre-human primates had language, which plausibly they did, and so consequently did the first humans who were their descendants. But somebody was the first whether they were homo sapiens or not.


As I replied to the parent comment: this was at least 200k years before speech, if you go by the hyoid bone evidence.


Last week there was an essay posted by Helen Keller describing her own development of consciousness. Great essay, and she describes its development.


We don't think in language. If you take the time to become fluent in two or more radically different languages then this becomes obvious.


I suspect most of our thinking is not in language until we try to articulate it to others.


Ancient human anthropology is a fascinating subject because we are still discovering so much yet know so little.

Stone tools remained essentially unchanged for a long, long time. 600kya would have been within the range where there were still multiple homo species occupying very similar regions. An important thing to remember is that the homo family tree is more like a bush than a tree, with lines getting very blurry and a lot of interbreeding. It makes me wonder how much technology like this actually came from our precise lineage or was shared/stolen/learned amongst human species.


This is a tale of one million years ago. Life evolves slowly and our equipment was essentially the same as today. They must have been the original thinkers and the original lovers, with motivations and daydreams we would easily recognize. We are Nature's first experiment of an advanced thinking species writing its own rules, so why has our recent recorded history been a tale of human upheaval, driven by the actions of empires and regional powers?

I propose that for the longest march of human prehistory, small villages along a trade route that shared genes and language, goods and services, were a very stable unit of human society. And I mean civilized on every scale and over time, where the simple desire of the young to start a family and live eventful, fulfilling lives in peace is the norm, and they had done so. This tale is in the subtle and expressive language of today and happens in fast-time with a single individual, but consider that the events and astronomical observations could have unfolded on any timescale - over a lifetime, or even several. The end result being that even ancient people could glimpse their place in the universe, and an essential truth about their world.

We had socialized and innovated but had no need for stone monuments to cheat time, and they would not have survived anyway. If there are any, they are buried in the deep sediments of East Africa. You and I are their only legacy.

~Stargazer: A Novel of One Million Years Ago


This would strongly point to speech developing around that time, but there's no mention of language or speech in the article - odd.


The common ancestor of neanderthals and homo sapiens had a larger than normal hole in the skull (the hypoglossal canal) which contains the nerve bundle going to the tongue. They're pretty sure that was the result of language use before 1M BC. There may have been several stages of later improvement after the initial development.


Yet some other evidence places it much later, at around 130kya. Go figure.


It's unclear that tech transfer is speech-driven. People can read machines and reason about purpose without natural language. We've all learned tricks from undocumented code. I've learned mechanical tricks poking around scrap yards.


Maybe unclear to you. May I recommend Orality and Literacy by Walter J. Ong, which talks about how text and writing transformed human culture.

The point (which I learned from the book) is that writing transformed human culture but did not define it; humans have a pre-literate (i.e. pre-writing) culture whose origin is literally lost to history.

I'm saying that tech transfer can absolutely be speech-driven, but I admit that text-driven technological transfer is in some ways an improved version of oral-only cultural transmission (i.e. knowledge preservation).


No one is denying that tech transfer can be speech driven, but that doesn't mean that this particular increase in tech transfer was because of speech.


Right, but there is certainly going to be a ceiling to how far one can go. With writing, knowledge can be copied nearly verbatim and kept for centuries, whereas the human mind is brittle: it forgets, fuzzes, and randomly changes knowledge, and it can contain only so much data.


That ceiling is pretty far above the 18 steps needed for flint knapping.


What durable artifacts of either are presented in the article? It's paywalled, I haven't read the whole thing.



Not directly on topic, but: Didn't use to be so interested in (or, allow my attention to be diverted to?) early human history and anthropology, until on recommendation I watched a movie "Man from Earth" [0]. Talk about baking your noodle: The story of someone claiming to be over 10K years of age, intimated to a group of academic peers that draw on their respective domain knowledge to assess the claim. Draws a fascinating, engrossing, not implausible yarn. Ya gotta wonder .. What would it have been like? Exactly how did we climb out of the muck?

[0] https://www.youtube.com/results?search_query=man+from+earth Viewable at no charge on Youtube!


Another ASU researcher, Sander van der Leeuw, gave a nearly decade-old presentation at the "More is Different" conference on complexity theory in Singapore that looks at how stone tool archaeology tracks with intellectual capacity (most specifically with the size of short-term memory: 7 +/- 2 items for modern humans, according to most accounts).

Video, 1h6m running time: <https://yewtu.be/watch?v=pOyQqPi28ug>

(I'm aware Google / YouTube are making alternative interface usage more challenging. MPV handles this stream well, as should tools such as VLC which incorporate it and/or ytdl.)

Video is admittedly long but has high information density.


Just imagine where we'll be 600k years from now.


The past 100 years of development are unfathomable already - even though some sci-fi from 100 years ago was accurate, so there was some imagination of what things could be like already. I like to believe I can vaguely predict what the next 100 years will bring, but that's based in a cynicism that the big scientific discoveries have been done already.

But 1000 or more years, unfathomable. It's the one reason where I wish time travel or suspended animation was real, just so I can see what the future might bring. Assuming it meanders on as it has for the past few thousand years, and there's no Great Filter event.


In the fuel tanks of future Cephalopoda.


I was surprised to learn that octopus fossils exist that predate dinosaurs. I dropped all hope for their eventual world domination. If they haven't done it by now...


I don’t think our species will make it that long. We will have scienced our way to some sort of singularity or be extinct within several hundred thousand years.


Unrecognizable as a species.


Long gone, having been killed by AI research probably.


"The result is, our cultures — from technological problems and solutions to how we organize our institutions — are too complex for individuals to invent on their own.

Yes, solo geniuses might often make unbelievable inventions on their own.

But science, progress and innovation are all social concepts at the end of the day.

I find this pretty amazing.


You will find "the secret of our success" an interesting read.

One thing I remember is: There are some cultures out there hunting seals with some tools that are made using parts... of seals. Some European explorers almost perished where locals were just living their routine (supported by their cultural baggage).


My guess is that the emergence of language allows cultures to teach and "accumulate" knowledge.


Language transmission for sure. The Tower of Babel slowed technology considerably and by design.


The Tower of Babel?


Genesis 11 in the Bible.

Men decide to build a tower to Heaven. God isn't having that so he scrambles every language so the workers can't communicate with each other. They put down their tools and migrate to other locations around the globe.


And to think it culminates in tech companies emitting more CO2 than Belgium to make celebrity deepfakes and tell us to put glue on pizza. Wow.


I personally believe forms of writing and record keeping are far, far older than we think, having been discovered and forgotten repeatedly. Very hard for something hundreds of thousands of years old to be preserved. Once you can pass on information orally and written through the generations, knowledge will always improve.


Record keeping doesn't need writing. Like the Incas with their knotted ropes (probably not what they're called in English).

Also, while some central Asians could read and write (they had courier relays to deliver letters), their administrative/taxing/military system, the decimal system, worked without any writing for two thousand years: a mark on a wooden branch for each person in an arban, a mark on another for each arban, again for each ja'un, again for each minggan. That's how they counted, and this was taught without text for at least 1500 years (the Mongols wrote something about it in the 13th century, but the system dates back at least to 300 BCE and the Xiongnu).
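As a rough sketch of how that hierarchy of marks would scale - assuming the usual base-10 groupings of about 10, 100, and 1,000 people per arban, ja'un, and minggan; the code and numbers are purely illustrative, not a reconstruction of the actual practice:

    def tally_marks(people: int) -> dict:
        # one notch per full group at each level of the hierarchy
        return {
            "arban (10s)": people // 10,
            "ja'un (100s)": people // 100,
            "minggan (1000s)": people // 1000,
        }

    print(tally_marks(4327))
    # {'arban (10s)': 432, "ja'un (100s)": 43, 'minggan (1000s)': 4}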


What's amusing of course is that the word text shares a root with textiles and hence the notion of weaving thread or cloth.

I'd strongly suggest not getting bogged down in the details of what various forms of notional recording are --- ink on paper, etching on stone, holes punched in paper, magnetic field alignments in rust (spinning, sequential, or otherwise), bitfields in memory arrays, holographic images ...

Records are created by varying matter in space to transmit messages over time.

Signals are created by varying energy in time to transmit messages over space.

Or expanded slightly:

Signals transmit encoded symbolic messages from a transmitter across space through a channel by variations in energy over time to a receiver potentially creating a new record.

Records transmit encoded symbolic messages from a writer through a substrate across time by variations in matter over space to a reader potentially creating a new signal.


You managed to explain my own thoughts to me, thank you.


Thanks.

The symmetric equivalence between records and signals is one that I seem to have come up with myself (I'm unaware of it being noted elsewhere, though bits of it have been observed, e.g., speech is conversation in space, writing is conversation in time). How significant it is I really don't know, though something tells me it should be important.



> Once you can pass on information orally and written through the generations, knowledge will always improve.

I suggest reading up on how we found the cure for scurvy and then lost it. The problem wasn’t that someone forgot to record “the cure for scurvy is vitamin C”; the problem is that people were wrong about what they “knew” cured the disease.

Basically: the act of preserving and carrying on the wrong bits of knowledge led to a regression with deadly consequences.

The social side of things can’t be ignored either. Rulers, religions, and pop culture can and do choose to selectively remember history. In recent years people even take delight in being uninformed about various topics.

So it seems like there’s a sort of knowledge decay working against general progress. The question is: will progress always (on average) increase faster than that decay or will we reach some kind of equilibrium or perhaps even regress (idiocracy, for example)?


> Once you can pass on information orally and written through the generations, knowledge will always improve.

Nope. You can pass on myths the same way. All that happens is that you get more refined myths that always have "evidence" for them at any point in time, because they are refined by each generation to fit any new observations.

It's worse with oral knowledge[1]: oral "knowledge" that is passed down is frequently more damaging than not passing anything down at all. In the absence of any knowledge on a subject, people will try to gain some (whether experimentally or not), but with "knowledge" passed down, there will be pockets of people who actively resist any attempt to discard that "knowledge".[2]

[1] Ever play a game of telephone? Each generation changes the message enough so that even a short period of 30 generations is enough to completely obliterate any of the original message.

[2] See every religion ever.
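A back-of-the-envelope sketch of how fast the telephone effect compounds (the 5% per-retelling error rate is a made-up number, purely for illustration): if each generation independently garbles a given detail with probability p, the chance it survives n retellings intact is (1 - p)^n.

    p, n = 0.05, 30
    survival = (1 - p) ** n
    print(f"{survival:.1%}")   # ~21.5%: most details don't survive 30 retellings unchanged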


> Each generation changes the message enough so that even a short period of 30 generations is enough to completely obliterate any of the original message.

Wouldn't multiple transmission between generations and the comparison of diverging stories of the older generation in a newer generation work like an error correction?


> Wouldn't multiple transmission between generations and the comparison of diverging stories of the older generation in a newer generation work like an error correction?

I doubt it - which one is the "correct" message is going to be up to chance, not up to a network effect.

Even if you take the optimistic approach and only retain that knowledge that is common to all branches, there is a vanishingly small chance that the correct knowledge would be retained, because the errors in transmission are not going to be completely random - the sort of error one oral storyteller introduces is going to be similar to the errors introduced by other storytellers.[1]

It's why the history of tribes who did not write things down is treated as myths: the myth-makers are likely to weave whatever current affairs into existing mythology to "explain" new observations, in the process discarding what was there in the first place.

When the record is written down, at least we can read what was thought at the time of writing.

[1] Completely made-up example: Original story -> Crocodile dragged off the tribe's best warrior. Probable error after gen-10 -> Crocodile is double the original size. Probable error after gen-20 -> Longer than a man. Probable error after gen-30 -> Taller than a man. Probable error after gen-40 -> Walks around, bipedal.
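A quick simulation of why redundancy across branches only corrects independent errors (all names and probabilities here are made up for illustration): with independent drift, a majority vote across many retellings usually recovers the original detail, but when the storytellers share the same bias, extra branches add almost nothing.

    import random

    random.seed(0)
    TRUE, DRIFT = "crocodile", "bipedal monster"

    def retellings(k, p_err, correlated):
        if correlated:
            # shared bias: every branch drifts together with probability p_err
            return [DRIFT if random.random() < p_err else TRUE] * k
        # independent errors: each branch drifts on its own
        return [DRIFT if random.random() < p_err else TRUE for _ in range(k)]

    def majority(tellings):
        return max(set(tellings), key=tellings.count)

    for correlated in (False, True):
        wins = sum(majority(retellings(25, 0.3, correlated)) == TRUE
                   for _ in range(10_000))
        print(f"correlated={correlated}: majority vote correct in {wins / 100:.1f}% of runs")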


Why don't they just look for the monolith ? :-)


What if they did find it and that's why we don't send crewed missions to the moon anymore?

Think about it. The first manmade object on the moon was in 1959. The first pineapple on pizza was in 1962. Coincidence, or lunar monolith? You decide!


Yet more evidence that unknown civilizations rose and fell multiple times before recorded history began ~4000 BC.

(The Why Files fans can insert here "The Younger Dryas!")


>Humans began to rapidly accumulate technological knowledge 600k years ago

I knew I was late to the party.


Humans began to rapidly accumulate technological knowledge around the Renaissance, with the invention of the printing press.

The farther back we go, the more we have to use tongue-in-cheek definitions of "rapidly".


But China had the printing press and gunpowder and plenty of other things that went on to be repurposed in the Renaissance. I feel like the narrative of the modern exponential technological growth curve kicking off with the Renaissance is rather Eurocentric, and it misses that the left end of that graph is likely so shallow that it's hard to differentiate the "before" from the "after".


Slightly unrelated, but James Burke explains this really well in Connections[1] S01E03.

Basically he says one of the reasons China didn't experience the rapid growth Europe went through during the Renaissance was that Chinese society at the time had a strict hierarchy. You weren't allowed to rise through the ranks with your inventions or technologies, no matter how revolutionary they were. "No incentive, no change."

[1] https://en.wikipedia.org/wiki/Connections_(British_TV_series...


Was the Renaissance driven by concerns of social mobility? Galileo's telescope was not calculated to improve his rank in society.


It certainly was, and initially did improve his reputation.

If you look at what happened, he could have benefited hugely had he not been obnoxious and gone out of his way to annoy people. He mocked the powerful, and he insisted that the Copernican cosmology (the sun is the centre of the universe) was not just a theory, but had been proven true.


I suppose there are more ways to be paid: fame / rank, money, or even piety.


Imagine what they’ll say 600,000 years from now.

“Humans began to acquire knowledge around the time of Dacro and the Second Wormhole Leap. The farther back we go…”


Agents probably won't refer to all members of Homo as humans, I would imagine.


The word "rapid", just like "fast", is a relative term. Turtles move rapidly, compared to snails.


The knowledge was there, but it was in part thanks to the printing press (and political / sociological reform, e.g. schools and improving quality of life) that it became more accessible to more people.


More recent history seems to paint another picture - knowledge is quickly gained as cognitive ability increases, and is even more rapidly lost once it diminishes.

Think of not only the replication crisis in science, but also the desperate effort to retain "tacit knowledge", the failure of which often results in outright abandonment and outsourcing to somewhere else.

The technology to land on the Moon seems to be effectively lost. The supersonic airliner is for some reason not available either.

The same happened in Rome, and Rome itself, together with Greece, rediscovered civilization after a much darker age before them.

In light of that, the idea that any real knowledge could be retained for that long is preposterous.


>>> The technology to land on the Moon seems to be effectively lost.

The technology might be, but not the knowledge. The tech just became obsolete and probably wouldn't meet today's safety standards anyway. I doubt we'd have much trouble recreating a better equivalent if the incentives existed.


We couldn't. It was just a well-known example. Writing doesn't help much - the words lose their meaning without people capable of acquiring the knowledge beforehand. Whatever you preserve is worthless without the mountain of implicit knowledge that can only be acquired, not taught. Knowledge is being lost even as elderly workers try to pass their knowledge to new hires with zero success.


> The supersonic airliner is for some reason not available either.

This whole comment shows a solid amount of ignorance of history, but this particular quote is completely preposterous. "for some reason" is money, plain and simple. Just go read the Wikipedia article on Concorde. And there are plenty of supersonic military airplanes, the technology is definitely there.

> Same happened in Rome, and Rome itself together with Greece rediscovered civilization from a much darker age before them.

Did you ever hear about Ancient Egypt?


There were many like Egypt; Egypt was exceptional in that it sort of survived (but actually not really), while the rest got wiped out completely. Then a dark age followed.



