The Unparalleled Genius of John von Neumann (2019) (cantorsparadise.com)
484 points by jdkee on April 6, 2021 | 261 comments



> von Neumann proposed a description for a computer architecture now known as the von Neumann architecture

Well, he actually described an already existing design by Eckert and Mauchly under his own name, and this paper was illegally disclosed. For those interested in this and other fascinating stories about ENIAC, here is a good book: https://www.amazon.com/Eniac-Triumphs-Tragedies-Worlds-Compu... Also worth reading is the review by Jean Bartik (here: https://www.amazon.com/gp/customer-reviews/R3K2DSB6UE1X7H/re...), who was there and witnessed everything firsthand.


This isn't really true. There was no existing architecture and even if there was, Eckert and Mauchly weren't the only ones involved. ENIAC itself encapsulated almost none of the Von Neumann architecture principles. While it's true Von Neumann's paper started as a summary of ongoing discussions on computer architecture for the EDVAC with a group that included Eckert, Mauchly, Arthur Burks, and others, Von Neumann's proofs of the architecture's viability within the paper were entirely original. Hence the name.

Jean Bartik was a hugely biased source who used personal recollection to quarrel with anyone who offered any view of computer history that didn't put ENIAC front and center. She seems to have spent the latter part of her life writing strident negative Amazon reviews of any book which didn't place Eckert and Mauchly and the ENIAC as the primary source of all computer innovation. Her reviews [1] of Who Invented the Computer? The Legal Battle That Changed Computing History in particular were extremely negative personal attacks on the author and her husband (Alice and Arthur Burks) for having written books which dared question ENIAC's supremacy.

[1] https://www.amazon.com/gp/customer-reviews/R2MERA4EUZ8M17


There is not only Jean Bartik; there is a lot of recorded history, including court cases that left many files and affidavits. Bartik's commentary suggested itself because it accompanies the book I referenced. Burks' case is also amply documented: he wanted to be registered as an inventor on the patent after the fact, which was denied to him, even by a court. In his bitterness, he then made many claims. The comment by Bartik you reference should be seen in this context.


Fascinating when individuals of such advanced intelligence are not obsessed with immortality. Either they believe it is not possible, or they are secretly a god trying to escape the boredom of already having long since achieved it.


Or maybe they are horrified with the idea of living forever...


Or they have realized that immortality is actually a dumb idea. Either you talk about real immortality, where you live forever (and forever is a really long time), or you talk about being able to decide when to die. In that case, dying accidentally today becomes much more expensive, so you become obsessed with avoiding accidental death today and miss the whole point of living a longer life...


Quoting a character in one of Stanisław Lem's stories, "People do not really want to live forever -- they simply do not want to die." Always struck me as a rather profound take on the whole immortality deal.


Von Neumann definitely didn't want to die. According to Wigner: "When Jancsi realized that he was incurably ill, his logic made him conclude that he would cease to exist, cease to have thoughts. The full content of this was incomprehensible to him and it horrified him. We all know this and try to accept it, but it is always difficult. It was heartbreaking to watch the frustration in his mind when the hope was gone. His fate was unavoidable to him, but it was still unacceptable."


His friends and colleagues were very disappointed that at the very end he turned to religion to help him cope with his fears. He even had a priest visit him for last rites. (Source: A Beautiful Mind)


To add another source, Greg Egan's Permutation City is what drove home for me how profoundly inhuman immortality is.


I think it's incredibly arrogant to talk with such certainty about biological immortality. It's not something we can fathom at this time.


You are suggesting that death is somehow less dumb when you don’t know anything about the true existential nature of what that is.


If you get philosophical, I don't think you can really know anything about anything. However, in practice I have every reason to assume that I have been practically dead, i.e. non-existent, for the vast majority of the lifetime of our universe (and of whatever universes there may have been outside ours, whatever that may mean). And I do not recall non-existence being that bad... Dying, that's another thing, something I know nothing about and am not looking forward to - quite yet. But I know that the number of things I want to do here is finite. And based on knowing myself and the feelings of quite a few old people, I am quite sure that when those are done, I would be happy to let go. (The only small thing I might want from an approximately infinite life would be to know how this life story here on earth/in the universe ends.)


Well said


Sometimes I wish von Neumann had gone into another field, such as genetics.


He basically did.

He was creating frameworks for understanding self-replicating systems before the nature of DNA had been determined.


Now the water is really muddy...

Is there a definitive book or wiki page on the whole thing, i.e. Von Neumann's actual contribution to the so-called von Neumann architecture?


Here is the wiki page on the report and the controversy

https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the...


“““ The treatment of the preliminary report as a publication (in the legal sense) was the source of bitter acrimony between factions of the EDVAC design team for two reasons.[3] First, publication amounted to a public disclosure that prevented the EDVAC from being patented; second, some on the EDVAC design team contended that the stored-program concept had evolved out of meetings at the University of Pennsylvania's Moore School of Electrical Engineering predating von Neumann's activity as a consultant there, and that much of the work represented in the First Draft was no more than a translation of the discussed concepts into the language of formal logic in which von Neumann was fluent. Hence, failure of von Neumann and Goldstine to list others as authors on the First Draft led credit to be attributed to von Neumann alone. (See Matthew effect and Stigler's law.) ”””

This seems more vague than what has been disclosed here...


How does Turing’s work factor into this? Does Bartik also discredit his influence on computer architecture?


According to J. Presper Eckert in his oral history interview [1], the Moore School was actually collecting patent applications when von Neumann went public with the report, which is why the principal computer architecture is in the public domain. And, according to Eckert, this was done deliberately and in von Neumann's self-interest. He also accused von Neumann of having done something similar at the IAS. (According to von Neumann, it was merely an oversight that his name was the only one on the report, and it had been just an internal proposal for a draft of the final paper.)

Eckert sums this up damningly on p. 35: "You know, we finally regarded von Neuman as a huckster of other people's ideas with Goldstine as his principle mission salesman. Now, if you don't believe this, talk to Julian Biglow at the Institute for Advanced Study(…) Von Neumann was stealing ideas and trying to pretend work done at the Moore School was work he had done."

[1] Oral history interview with J. Presper Eckert conducted by Nancy Stern, 1977: https://conservancy.umn.edu/handle/11299/107275

PDF: https://conservancy.umn.edu/bitstream/handle/11299/107275/oh...


Interesting if true but it seems odd to think that Von Neumann of all people in the 20th century perhaps would need to steal someone's ideas. From what I've read of him he was more likely to improve on them before the ideator had finished speaking.

But certainly those were days (if indeed we have left them) when 'great men' like him could and would take credit for something like that for reasons of pride and posterity.


Eckert accused von Neumann of doing this specifically so that he could consult for IBM on the matter.

However, there is a problem with this: as I understand it, IBM wasn't that interested in computers at the time and actually stumbled into (electronic digital) computing via a government contract to replicate Whirlwind for the Semi-Automatic Ground Environment (SAGE). On the other hand, Eckert also pointed out an interest shown repeatedly by von Neumann in bringing commercial third parties into the project, so the concern may have applied to any third party that might be interested in his consultancy, which historically turned out to be IBM.


IBM was not yet interested in electronic computers.

IBM was developing electromechanical computers, and for a few years the electromechanical IBM SSEC (operating since January 1948) was the fastest stored-program computer, solving many important problems for the US government and for various companies.

After their competitor Remington Rand introduced the first commercial electronic computers (UNIVAC), IBM entered this market very quickly with a few commercial models (IBM 701 & IBM 702), and then also with the faster electronic computers made for the US military in the NORC and SAGE projects.


People don't steal ideas because they have to; they do it because they can and because they are ambitious. They steal them because it helps build a cult of genius around themselves.


What a crime that such 22 year old memories are locked away in an amazon comment, to be lost whenever amazon deems it necessary to cull comments older than a certain age.


It appears someone (some machine?) asked the Web Archive to save it before I did.

https://web.archive.org/web/20210407045457/https://www.amazo...


The whole internet is like this. Everything is atrophying.

Remember personal websites and blogs? Barely anybody runs them anymore.

Archive Team is doing amazing work, and we need to support them.


It's not locked away. Copy the text and save it on your disk if you think it's important to you. Publish it on your own blog if you think it's important to everyone.

People seem to forget about the "save" functionality included in every web browser for some reason. You should save anything you deem important enough if you fear it might be gone one day. This is no different from acquiring a copy of an important book for your own shelf.

Where would you prefer this comment to reside? Where would it have lived before the web?


While I'm generally in favor of preserving digital history, I don't really disagree. In a world where so many utterances and so much video can be preserved--and just wait until a lot of people are wearing AR glasses everywhere and AI/ML systems can increasingly tag and organize all that recording and associate it with real-world identities--I'm not sure we even want everything to be saved, even if it were possible.


ENIAC, the world's first computer? What about the Z3 by Konrad Zuse? https://en.m.wikipedia.org/wiki/Z3_(computer)


The winners write history.

It's interesting that the Z3 was denied funding from the German gov't for not being war-important. It turns out they were ahead of the computing game and squandered it, as US and GB relied on computers for code breaking and performing calculations for developing the atom bomb.

It's not a blunder to the tune of invading Russia, but it's still interesting.


Yeah, I disagree here. The general public's knowledge of WW2, and their perception of that war, has been very disproportionately influenced and shaped by the post-war memoirs of German generals and other important figures of the Third Reich. That is why there are so many myths and outright falsehoods about the war (pushed by those authors to rehabilitate themselves and distance themselves from their own actions) that are still very widely believed to be true. It's pretty fitting that "history is written by the victors" is one of those narratives.

Plus, it makes even less sense when WW2 Germany is almost always believed to have been way ahead of the Allies technologically, way more than they actually were. So unless the Allies were very bad at rewriting history to erase Germany's contributions... I'd guess it's more due to it being a very obscure technology with almost no "cool" factor and very little influence on the overall technology, compared to the V-2 or jet fighters.


So was the German Z3 not the first digital computer? Or were the Germans ahead of the Allies technologically in this case?


There was no first computer. The further you go back the less it becomes a computer, the further you go forward the less it becomes a first.

I can really recommend the book "ENIAC in Action" on this matter and of course on the ENIAC generally.


It’s rather like the development of the automobile. The first computers—EDVAC, ENIAC, Z3, ABC—all had idiosyncrasies that reflect their being transitional technologies.


But what was the zeroth computer?


Interestingly, “computer” was the term applied to office workers whose responsibility was doing calculations. Which is why the first machines were called “automatic computers”. (Fast forward to today, “computing” is perceived now mostly as “using a computer”, such as web browsing or reading email.)


It can be argued that Z3 was not as general purpose (Turing complete) as ENIAC.

"The Z3 was demonstrated in 1998 to be, in principle, Turing-complete. However, because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation."


The claim was the first programmable, general-purpose "electronic computer". Zuse's machine was electromechanical (i.e. based on relays instead of tubes).


That seems a little unclear.

In the title it says: "Eniac: The Triumphs and Tragedies of the World's First Computer"

In the outline: "Presents a history of the world's first programmable computer, ENIAC"

In the reviews, once: "Nobody doubts the pair designed and built ENIAC, the world's first fully electronic computer", and another time: "This account of how an engineer barely out of college and a physicist with dreams of predicting the weather, conceived and built the world's first computer."


On page 3 of the book: "They built ENIAC, the first digital, general-purpose, electronic computer". It all depends on how you define the term "computer". Not to forget that "computers" at that time were the women who did the calculations; see e.g. https://en.wikipedia.org/wiki/Computer#Etymology. Nearly all computers today are electronic and digital; so ENIAC was one of the pioneers of this development (the Colossus was another one).


True but on the other hand, he kind of made computers 'open source'


> he kind of made computers 'open source'

The only "positive" (depeding on perspective) effect of his illegal and morally questionable action was that the patent was declared null and void due to the novelty-damaging effect of his report, among other issues supporting this outcome. The computers however did not become more "open source" because of this, just cheaper.


Declaring a patent null and void seems like a positive outcome.

Patents are legally enforced monopolies after all.


So the end justifies all means? Consultants and employees who reveal trade secrets to sabotage patents are heroes in your eyes, not criminals?


Still a semi-positive outcome no? Computers becoming cheaper?


J. Presper Eckert (the Eckert in "Eckert & Mauchly") in his oral history interview [1], p.46f:

> "Originally we called ENIAC the "MANIAC" when it didn't work right. And later they borrowed that name for some other actual machine. But if you worked for von Neumann on the MANIAC, then if you invented something it belonged to you. Well, on some relatively short notice, like it might have been a week, or a month, or something, a short time before the deadlines hit, von Neumann went down and published all that stuff. All the reports of the engineers went to the Library of Congress which put a bar on any patents being obtained by any of his employees. And when they complained about it to him, he just said, "Well, that's tough; that's the way I think; that stuff should be in the public domain." Now there is a perfectly obvious reason for this. He was consulting with people like IBM. If the things weren't patented that would be a problem for IBM. The idea was he was selling ideas to other people...if it wasn't covered by patents, he would have been selling something they couldn't use. They would have come back and said, now, "what kind of a consultant are you, coming up with new ideas which are already patented by others and we can't use them." But if these ideas could come under the public domain, then he could go around and sell them to people. That was his game."

[1] Oral history interview with J. Presper Eckert (1977): https://conservancy.umn.edu/handle/11299/107275


For humanity, the publication of the von Neumann report was certainly a very positive thing, because it launched a large number of research projects aimed at building electronic computers, both in the USA and in many other countries.

There is absolutely no doubt that without this publication and with a patent claiming the design of a computer with stored program, the evolution of the computers would have been much slower.

Whoever pushed the report to be published did a huge service to everybody else but Eckert and Mauchly.

Also, Eckert and his coworkers were deluded if they thought they could have gained anything by having a patent on the stored-program computer.

Nobody prevented them from building good computers even without a patent, but they did not have the intellectual and financial resources to do much until they were bought by Remington Rand, which led to the introduction of UNIVAC in 1951.

If they had a patent, the result would have been that Remington Rand would have had a monopoly on the computer market, with inferior products, and all the great advances in computers that have been done in other places, like NIST, MIT or IBM, would have happened only many years later.

I have read both the von Neumann report, which is a model of clarity, and all the early documents that remain from Eckert, Mauchly et al.

It is likely that they already had some ideas about stored-program computers, but there is no doubt that their ideas were very confused.

Von Neumann distinguished what was right from what was wrong in their ideas, then clarified, organized and completed all those concepts and synthesized them in a coherent theory that could be easily understood and used by anyone else.

The contribution of von Neumann is much more important than Eckert tries to imply.

Moreover, while Eckert et al. were upset that their own contributions were not acknowledged as well as they should have been, they also failed to acknowledge the important ideas about electronic computers that they themselves had taken from Atanasoff, without ever mentioning him.

Unlike Eckert et al., but like von Neumann, Atanasoff was also someone able to express his concepts very clearly, so you could easily learn from him. For example, a report about the classification of computer memories, written by Atanasoff at the beginning of WWII, was extremely advanced for those days (Atanasoff invented what is now called DRAM).


I want to add that the fast evolution of computers caused by the publication of the von Neumann report and the lack of "IP protection" is paralleled by the fast evolution of semiconductor devices after the discovery of the transistor, which was also caused by the lack of "IP protection" in the modern sense.

Even though the transistors were patented, the patents were licensed to anyone for negligible fees or even for free (e.g. for hearing aids).

If the essential transistor patents had been used like worthless patents are used today in preventing competition, the history of the computer and electronics industries would have been very different, with progress delayed by decades.

So both the publication of the von Neumann report and the generous handling of the transistor patents have been very influential in shaping the world of today, while many inventions whose inventors clung to their patents were either doomed or began to be used only after the patents had expired, more than 20 years later.


Regarding "very confused ideas":

To be fair, Mauchly proposed combined program and data storage in a mercury delay line, much like it was done on the UNIVAC I, in 1945, before von Neumann arrived. (Allegedly, there had been ideas for stored program in the ENIAC group even earlier than this.) Mauchly also invented Short Code (1949, originally named Brief Code), the first programming language used in production, and the Critical Path Method for automatic scheduling. The group around Maurice Wilkes attended his 1946 lectures on the design of digital computers and went from there to build the EDSAC, the very first modern computer.


I was looking through the comments to see if someone mentioned Jean Bartik already! Just to add to this comment, she also wrote an autobiographical book, which is worth reading: https://www.amazon.co.uk/Pioneer-Programmer-Jennings-Compute...

It is clearly a very personal and biased account, but I think there are good reasons to believe many of the things she is saying, even when they go against the commonly accepted history "written by the winners".


Wow... How is this not more well known? It's not like von Neumann doesn't have plenty of other important contributions to his name.


It is far from settled whether the GP’s interpretation is in fact what actually happened. I highly recommend The Innovators by Walter Isaacson. Among other things, it covers this “controversy” in great detail.


History is not science. There will never be absolute certainty because human language is always open to interpretation. Even when the evidence is clear, the historiography that relates to that evidence is, again, open to interpretation. The interpretation of terms and situations changes over time, and facts can later be interpreted differently (even maliciously). Not even witness statements in a court case, of which there were several in the present case with hundreds of pages of legal reporting, are immune to interpretation. So yes, I can't prove to you that it happened that way because all of my sources are open to interpretation. There are enough other events in history that are considered sufficiently certain, and yet continue to be disputed by certain people for various motives. Some states even consider it necessary to criminalize the questioning of certain historical facts for this reason.


Have a look the oral history interview with J. Presper Eckert, it's a long lasting controversy: https://conservancy.umn.edu/handle/11299/107275

PDF: https://conservancy.umn.edu/bitstream/handle/11299/107275/oh...


The oral history of the person who claims credit was stolen from him is not a reliable source.


Mind that he repeatedly refers to Julian Bigelow of the IAS as a (neutral) witness. There have been plenty of lawsuits regarding this… Probably, the same also applies to any commentary and/or writing by von Neumann.


Lawsuits? Unless you're talking personal suits, the only one I know of, Honeywell v. Sperry Rand, did not go well for Eckert or Mauchly, personally or professionally, especially Mauchly.


There was already a lawsuit in 1962 (Bell Telephone Laboratories vs. Sperry Rand) in which the court upheld the validity of the ENIAC patent. There was another case in 1964 in which Shaw, Sharpless, and Burks demanded that their names be added as inventors to the patent, which was not successful.


My (moral) problems with von Neumann are of an entirely different nature, namely his persistent advocacy of a "preemptive" nuclear first strike before the USSR could put a thermonuclear bomb into production, which might be regarded as attempted, cold-blooded genocide. So, not my personal hero.


"The second world war over, von Neumann accepted a string of commercial consultancies. But he was increasingly drawn into the defence establishment. An earnest advocate of the development of the H-Bomb, he later described himself as "violently anti-communist, and a good deal more militaristic than most". After his death, Life magazine reported that he had said in 1950: "If you say why not bomb them [the Russians] tomorrow, I say why not bomb them today? If you say today at five o'clock, I say why not one o'clock?"

Though frequently quoted as evidence that von Neumann favoured a first strike "preventive war" against the Soviet Union, this is a weak reed on which to build such a claim. It seems more likely it was characteristic - though tasteless and bloodthirsty - banter designed to emphasise his hawkishness. ...

Von Neumann was not an unthinking militarist: he believed that the world's only chance of avoiding destructive conflict was world government, but - since he could not see any practical way of achieving this - he was determined to defend his adopted country with as much armed might as possible. "

http://www.math.uwaterloo.ca/~hwolkowi//neumann1.html

Unlike Teller, von Neumann is not renowned for what you describe as "persistent advocacy" of nuclear first strike. AFAIK, von Neumann was not especially politically active in promoting US gov't aggression of any kind, again, unlike Teller.


As I understand it, there was a period of about two months during the Eisenhower presidency when a group tried to urge the administration towards a first strike. And, again as I understand it, von Neumann played a prominent role in this. At least, that is what I was referring to. (Compare Eisenhower's remarks on the military-industrial complex in this context.)


I don't think there is any doubt that a "preemptive" nuclear first strike by the US would have morally and legally been regarded as genocide.

The last time we had a debate on HN on this topic a number of people seemed to think it would have been a good idea - which I found genuinely disturbing. Hopefully I was just being trolled.


Genocide is about destruction of a people not simply large numbers of deaths. The Holocaust is the model people assume, but China’s current genocide of the Uyghur is genocide based on intent even without large scale slaughter.

Now you might be overestimating the US nuclear arsenal at the time, remember he died in 1957. At the time a nuclear exchange could have killed a lot of people, but only on the scale of a major bombing campaign not glassing the country.

As such, assuming US attacked and occupied Russia after an early nuclear exchange, it would have been war but not fit the definition of genocide because the intent to destroy a people simply wasn’t there.

PS: One of the less well known examples, the US forcibly sending American Indian children to boarding schools to be Americanized actually fits the UN definition of genocide. That’s the heart of the issue, the intent to destroy a people or culture rather than say conquest or even retaliation.


In 1961 the Pentagon's own estimates gave the expected casualties at 600 million plus when it had ~20K weapons. In 1950 the US had ~2500 weapons so scaling that back proportionally still looks like 10s of millions dead.
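
Scaling that back-of-the-envelope estimate linearly by arsenal size gives roughly 600 million × (2,500 / 20,000) ≈ 75 million, i.e. on the order of tens of millions.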

Edit: I suspect maximum lethality of the US stockpile would have been during the 1960s when they still used large numbers of very large and dirty weapons.


In 1961 the US had ICBMs, large numbers of vastly more powerful hydrogen bombs, and very accurate targeting methods. In 1950 the US was still limited to long-range bombers, many or even most of which would have been shot down, so they would have concentrated on the most critical military targets.

Thus, even 10 million dead in 1950 seems very high, unless they simply tried to maximize civilian casualties, which itself would reasonably qualify as genocide. And again, Russia lost ~27 million people in WWII and that isn't considered a genocide of Russia, so numbers alone, while horrific, don't qualify.


I agree that a "preventative" war in 1950 would have been a very different thing from one in 1961. However, von Neumann was active as a political adviser until very close to his death, and I was unaware that he had rejected the idea of such an attack - there were certainly others who favoured such a thing well into the 1960s.


Sure, though my understanding was that this was specifically about an attack before Russia started mass production of atomic bombs. A first strike after that point would likely have been vastly more costly for both sides. He was very much aware of the idea, which is why he's credited in the name and acronym MAD.

In the end the world avoided a large scale nuclear exchange, but when faced with a seemingly like war between two nuclear armed superpowers I can understand the argument.


On the other hand, there is still no universal agreement on the morality of the wartime bombing campaigns that led to high numbers of civilian casualties.


I'm always reminded of these lines from Daniel Ellsberg, describing his experiences in the early 1960s:

"The total death toll as calculated by the Joint Chiefs, from a U.S. first strike aimed primarily at the Soviet Union and China, would be roughly 600 million dead. A hundred Holocausts.

I remember what I thought when I held the single sheet with the graph on it. I thought, this piece of paper should not exist. It should never have existed. Not in America. Not anywhere, ever. It depicted evil beyond any human project that had ever existed. There should be nothing on Earth, nothing real, that it referred to."

https://apjjf.org/-Daniel-Ellsberg/3222/article.html

Edit: The staggering thing about the estimate of 600 million dead was that this was known to be an underestimate as the US didn't take the thermal effects of nuclear weapons into account during planning as they were regarded as too unpredictable. Actual 'real' estimates would be over a billion dead....


Most people who have a lot of things named after them wanted to have a lot of things named after them. Consider: why is more named after von Neumann than after Erdős?



There's a large number of things named after Euler as well -- so many that sometimes things are named after the first person to prove them after Euler: https://en.wikipedia.org/wiki/List_of_things_named_after_Leo...


How come it's known as "von Neumann" architecture?


The name "von Neumann" is rightly applied to this, because von Neumann was the first who has written a complete and consistent description of this architecture, so that anybody could understand it and build computers based on this architecture.

The Eckert group had some rather vague ideas about the architecture of their future computer, the successor of ENIAC, and those ideas certainly included some kind of stored program (not necessarily stored in the same memory as the data; stored programs were not new, as they were common in mechanical or electromechanical computers).

However the Eckert group was secretive, because they hoped to start after the war their own company to make computers and become rich, so it is not known for sure how much von Neumann learned from them and how much of what von Neumann wrote were his own ideas.

In any case, I am more grateful to von Neumann than to the Eckert group, because all the other more important computer projects started from the von Neumann report, while the work of Eckert et al. had rather little influence.

The ENIAC computer was very important only as a demonstration that it is possible to make very fast electronic computers, otherwise it had much less influence on the structure of computers than the many other slower computers made around that time.


They were "secretive" because they were obligated to do so. It was a secret, military project, and nothing could be published. Ironically, it was the security officer (H. Goldstein) who (illegally) circulated the Neumann paper; no one had expected it, least of all the actual inventors.


You are right, but even after they were no longer bound by secrecy they never published anything important that would help advance computer technology as much as the publications of other computer projects, e.g. from Cambridge, IAS, NIST, MIT or even IBM.

I have not seen any document from them that could establish credibly how much von Neumann learned from them and how much they have learned from von Neumann.

At least in other cases where some people had secret knowledge that became public only much later, e.g. in the case of the public-key cryptography discovered by the British before Diffie-Hellman-Merkle, they could show some classified reports containing the secret knowledge.

The Eckert group has not shown any previous documents containing the ideas from the von Neumann report; they just claimed that most of the ideas had already been communicated during the group's earlier discussions.

That might be true, or not, but it does not matter much.

Only the publication of the von Neumann report was really important, regardless who was the source for the ideas contained in it.


> but even after they were no longer bound by secrecy they never published anything important

There were a lot of publications; the first one, in September 1945 (three months after Goldstine disclosed the von Neumann paper), was "Automatic High Speed Computing, A Progress Report on the EDVAC" (see https://www.computerhistory.org/collections/catalog/10272462...), which was also prior to the patent and yet another reason for its invalidity.

> I have not seen any document from them that could establish credibly how much von Neumann learned from them and how much they have learned from von Neumann.

Looking at the timeline, von Neumann was not aware of the project before the ENIAC design was completed.


Did von Neumann himself ever acknowledge that the whole thing originated somewhere else or did he cite any of that group's findings?

In all documentaries on the origins of computers that I've watched, von Neumann gets all of the recognition while many of us have heard about Eckert and Mauchly from a Wikipedia article or comment deep down a relevant article.


This reminds me of the Newton-Leibniz calculus priority dispute, with Newton not publishing on calculus until after Leibniz had, although with that we now at least have the benefit of contemporary correspondence to refer to.


So, a slightly longer version of the story: Eckert and Mauchly had designed the architecture. Herman Goldstine, the military liaison of the ENIAC project and something of a social climber, sought out von Neumann to see if he was interested. Von Neumann was, came to a lot of design meetings, and made contributions about what the instruction set should be. Then, when he was heading out to Los Alamos on a trip, he had some downtime during travel and wrote up the design for the group. Goldstine typed it up and circulated it with just von Neumann's name on it.

It is a point of discredit to von Neumann that he did not disavow the naming of the architecture.

Funnily, Goldstine's wife was one of the early programmers and well respected in that area. Goldstine himself made no significant contributions to computing.


Stigler's law of eponymy?


Here is a fairly old documentary about 'Johnny' von Neumann:

https://www.youtube.com/watch?v=Y2jiQXI6nrE

He was a myth even in his own lifetime but parts of that film hint at the man. It features interviews with many of his old colleagues, discussion of what it was like to work with him, etc.

If you just watch one bit, personally I love the anecdote that begins around here:

https://youtu.be/Y2jiQXI6nrE?t=2604

It is a story of his incredible facility for mental arithmetic that I hadn't heard before. It is arguably one of his least impressive skills but for me it illustrates so clearly the gap between him and his peers.


The coining of the term "entropy" in information theory is just one example of von Neumann's creative genius:

When Shannon first derived his famous formula for information, he asked von Neumann what he should call it and von Neumann replied: "You should call it entropy for two reasons: first, because that is what the formula is in statistical mechanics; but second and more important, as nobody knows what entropy really is, whenever you use the term you will always be at an advantage!"

http://www.spatialcomplexity.info/what-von-neumann-said-to-s...

https://en.wikipedia.org/wiki/Entropy_(information_theory)
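
For reference, the parallel von Neumann was pointing at is between Shannon's information entropy and the Gibbs entropy of statistical mechanics, which have essentially the same form (differing only in the constant and the base of the logarithm):

    H(X) = -\sum_i p_i \log_2 p_i        (Shannon, information theory)

    S = -k_B \sum_i p_i \ln p_i          (Gibbs, statistical mechanics)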


Funny you mention that story, I came across it for the first time about 3 days ago in a book I was reading -- Grammatical Man. It's one of the best reads I've found on the history of information theory. Funnily enough the friend that lent it to me isn't even a tech person, he's an electrician who's more interested in occultism and esotericism than mathematics and technology.

Every time I read about the 20th century history of computing, it sounds like it was a wild time. Every advance or discovery seems to have some humorous anecdote attached to it.


I hope you like this biography of Paul Erdős. Great read. If you like it, recommend it to others. I have gifted it to many friends, even those who are not mathematicians, STEMmy, or mathematically inclined. Hope you get a chance to read it.

https://en.wikipedia.org/wiki/The_Man_Who_Loved_Only_Numbers


I often use von Neumann's quote, "Young man, in mathematics you don't understand things. You just get used to them.", to console myself when I have to read a math book multiple times to really understand something.

The quote also has truth in it. I had no problem accepting that 0! = 1 only because I learnt that fact early in school. However, I struggled quite a bit to accept that span({}) = {0}, even though it is not that different from 0!=1 and I knew multiple explanations. It seems the later one learns a new concept, the longer it takes to accept it.


Yes, those two facts about zero/empty cases (and so many more) are definitely related, and this class of facts is one of my favourites! Usually, if you're dealing with something algebraic in flavour (which is a very vague concept, sorry), there will be a sensible way to define the zero/empty case. This is often a good test of whether you have a uniform concept that works for all n without corner cases.

It almost irritates me when I read a book or a paper and they say that the zero/empty case is "by convention". I almost want to yell, "no! it's because that's how you make the definition uniform!"

Addition is usually defined as a binary operation, a+b, but really it should be defined as an n-ary operation; associativity tells us that doing "two layers" of addition should boil down to doing a single layer of addition on the concatenated list of operands. That forces 0-ary addition to be zero, which can always be added to the list of operands without affecting the result.

Something similar happens with empty products (which explains the factorial), empty spans, etc. In all cases, the trick is to figure out, what is the equivalent of associativity? What "syntactic" operations on the inputs (for example, concatenating a list of lists of operands) correspond to operations on the outputs (you can get the total sum by first computing partial sums)?
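
A quick Python sketch of that point (my own illustration, assuming nothing beyond the standard library): the empty case has to be the identity of the operation, or the "flattening" rule of associativity breaks.

    import math

    # Empty sum and empty product are the identities of + and *.
    assert sum([]) == 0
    assert math.prod([]) == 1   # math.prod is available in Python 3.8+

    # Associativity as "flattening": summing the partial sums of chunks
    # must equal summing the concatenated operands, even when a chunk is empty.
    chunks = [[1, 2], [], [3]]
    assert sum(sum(c) for c in chunks) == sum([1, 2, 3])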

A fun puzzle, if you enjoy this kind of thing: what's the determinant of the 0x0 matrix (over your favourite field or ring)? For all (square) sizes, the determinant of the zero matrix is zero, but the determinant of the identity matrix is one, and the 0x0 matrix is kind of both. So which pattern should win? Which one is stronger? I know my own answer ;)


I was also puzzled by det(0x0) being 1, because I had built an intuition that determinant of a matrix was the volume of the parallelepiped represented by the matrix. I made my peace by accepting that my intuition on volume implies that volume is defined in a space that has positive dimensions, and by treating zero space as an algebraic construct.


Now you're reminding me of a wacky math conversation I had at Mathcamp [1] with a much smarter guy, who was talking about more esoteric definitions of volume in euclidean space. Something like:

- n-dimensional volume is a function from (some) subsets of space to real numbers

- it should be additive under union

- it should scale by t^n when you scale the space by a factor of t

I think the upshot of the conversation was that 0-dimensional volume of a shape should be its Euler characteristic. In the simple case of a finite set of points, the "volume" would be the number of points.

And by your earlier comment, span({}) consists of a single point, so its volume should be 1. It all works!

[1] https://www.mathcamp.org/


> what's the determinant of the 0x0 matrix (over your favourite field or ring)?

1, because 0x0 seems like a more elegant base case for the recursive det formula than 1x1.
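
A minimal sketch of that view (my own illustrative code, not anything from the thread): define the determinant by Laplace expansion along the first row, with the 0x0 matrix as the base case, and the base case has to be 1 for the recursion to reproduce the usual values.

    def det(m):
        # Determinant via Laplace expansion along the first row.
        # An empty list of rows represents the 0x0 matrix.
        n = len(m)
        if n == 0:
            return 1  # base case: det of the 0x0 matrix
        return sum(
            (-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
            for j in range(n)
        )

    assert det([]) == 1
    assert det([[5]]) == 5                # the 1x1 case reduces to the 0x0 base case
    assert det([[1, 2], [3, 4]]) == -2    # 1*4 - 2*3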


Yeah, it took a while to sink into my head that many of these "wait, why is span({}) = {0}?" kinds of cases have answers that sum up as "because anything else means other rules are inconsistent, and the whole thing is either less useful or useless". It's "arbitrary", but it's either the only useful option, or sometimes a simple(st) one of many.

Even just one number theory course helped a lot, since it brought that kind of consistency into its own concept, where [this set of rules] forms a ring, and [this set] forms a field, etc.


To be fair, in this case the rule is not quite arbitrary, because it, in fact, follows from the definition of span (as a subspace).


Define the determinant to live in the ring quotiented by the annihilator of the module. Then the determinant of the 0 by 0 matrix is both 0 and 1.


Couple more fun examples:

all([]) == True

any([]) == False


This is a bit intuitive:

all: no false elements

any: not all elements are false


Plenty of things are intuitive if you have the right mental model backing them. I'd wager some folks think of all/any as "everything is true"/"at least one is true", which makes the empty case trickier.

Mental models often get spicy with empty/"corner" cases. This isn't quite the same, but a lot of kids struggle with division as sharing rather than division as measuring, which makes division by a number less than 1 conceptually difficult. http://langfordmath.com/ECEMath/Multiplication/DivModels.htm...


> there will be a sensible way to define the zero/empty case. This is often a good test of whether you have a uniform concept that works for all n without corner cases.

And so, begun the array indexing war has.


A polynomial always includes a member (monomial) with the power zero. It seems natural, therefore, to index the coefficients correspondingly. In other situations, 1 may be the more natural starting index.


The useful corner case/"neutral element" for array indexing is the zero-width interval in an arbitrary position.


Fourier coefficients need a zeroth element too.


> Young man, in mathematics you don't understand things. You just get used to them.

Not quite in the same weight class as von Neumann, but Matt Parker's "There's a trick for dealing with that in mathematics, called 'not really worrying about it'" when discussing results that don't mesh with our intuitive understanding is another nice one.


I’d say, rather, try and build your intuition in accordance to what mathematics, in fact, tells you (which is achieved by doing exercises).


0! = 1 is not hard to accept, since it follows a rule. You just need to look at it backwards: to get the previous factorial (N-1)!, you divide N! by N.

4! = 24;  24 / 4 = 6,  so 3! = 6

3! = 6;   6 / 3 = 2,   so 2! = 2

2! = 2;   2 / 2 = 1,   so 1! = 1

1! = 1;   1 / 1 = 1,   so 0! = 1


I was going to joke, "So you're saying -1! = 1/0" but then I thought I'd check Wikipedia first and that's exactly the reason they give for factorials of negatives being undefined, which spoils the joke.


This looks even more natural from the programmer’s standpoint: reduction of a set of numbers by summation starts with 0, and reduction by multiplication, with 1; so, if the set is empty (has zero elements) the result is simply the starting number (0 or 1, respectively).


And so, in general, the reduction of an empty list should always be the neutral element of the operator.
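
In Python that is exactly what the initializer argument of functools.reduce is for (a minimal sketch):

    from functools import reduce
    import operator

    # Folding an empty list only works if you supply the operator's
    # neutral element as the starting value.
    assert reduce(operator.add, [], 0) == 0   # empty sum
    assert reduce(operator.mul, [], 1) == 1   # empty product
    # Without an initializer, reduce() on an empty sequence raises TypeError.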


Even funnier is how this roughly works for -1!


For the last two years I've been learning von Neumann's native language, Hungarian. It's a very difficult language for an English speaker to acquire (and vice versa), but the quote could be adapted perfectly to how I feel learning Hungarian: "Young man, in Hungarian you don't understand things. You just get used to them." I wonder if von Neumann had a similar experience early on while learning English, and whether it shaped his view of mathematics as well.


It’s quite rare for someone to learn Hungarian, what a surprise! (I’m a native speaker)

Feel free to ask me anything you have trouble with. In the reverse direction, double negative is something that I sometimes have to pay attention to, and I don’t know about another language that employs it.


There's one of his maxims I like better: You don't know something until you can prove it 3 different ways.

In mathematics, it begs the question of what "understanding" really means. To understand an object doesn't necessarily mean divining an unquestionable structure by chance. What usually happens is that you're investigating some kind of problem (usually with real-world applications, if distant), and then you find you need a certain tool or a certain theory to simplify your problem, make it more tractable or more abstract. For example, you could be studying permutations, and from there the Binomial comes naturally, as well as the factorial function. From this theory, comes several definitions. The definitions are such to further you goal: they are the ones that make your tool easier to use, simpler, more "streamlined", more suitable to approach your application with minimal special cases. This is how something like '0!' is defined, and how most theories are discovered.

The thing about understanding is that it's a bit much to require yourself to truly "understand" something.

How can one know when he's reached "understanding"? Being used to it should be good enough for most purposes. If you know the rules, and you know how to apply them, that's mathematics.

Perhaps another direction to understanding is seeing a thing through a variety of lenses (connecting it to different fields), expanding your ability to apply a tool, seeing more broadly. That's when you generalize, and you're able to see what you had as a special case (another definition of understanding): from addition to algebra, to rings, to abstract algebra. From numbers to equations to functions, at each step perhaps you "understand" the fundamentals better by having a broader perspective on generalization. But of course that's only useful if your generalization is useful at all.


I've never seen that quote before, but it feels very human.

I feel like there's an intuition which comes along with learning things. Or maybe learning is simply developing intuition.

At some point things somehow just make sense in your mind because you've built up an intuition of how it works.


I learned it the hard way, as I hit a wall when taking a course on abstract algebra, that the only path to truly understanding college-level math is building solid intuition. Otherwise, the sheer number of definitions and theorems will just be overwhelming. Of course, intuition is as critical for pre-college math. It's just that we somehow get the intuition naturally probably because we get to experience all kinds of examples on a daily basis to hone our intuition unconsciously.


At some deep level, a key to understanding abstract-algebraic objects is to think about them geometrically.

L'algèbre n’est qu’une géométrie écrite, la géométrie n'est qu'une algèbre figurée. ("Algebra is but written geometry; geometry is but figured algebra.") — Sophie Germain


I agree, negative numbers learned early are used with aplomb but imaginary numbers learned later are initially met with suspicion.


I wonder if e.g. using constructive math would make things easier since every step of a constructive proof is understandable.


Unfortunately, constructive vs classical (vs linear, etc.) applies to proofs, but this is really about definitions. Proofs can be correct or incorrect pretty straightforwardly, but definitions being correct or not is really a matter of taste. (And as someone who's been formalising some mathematics in Lean recently, definitions are so much trickier to get right than proofs!)


My understanding is that since any mathematical proof can (in principle) be reduced to manipulation with formal objects (symbols), it’s always “constructive.”


Not really. Check out the proof of Hilbert's Nullstellensatz. He got tons of criticism for proving the existence of a relation without constructing that relation.


But he did construct the proof! (So, the argument can only be about what axioms the proof is based on.)


This is not what constructivity means in mathematics.


What I was objecting to was that constructive proofs are somehow easier to understand.


He was beyond brilliant, of that there is no doubt. And without in any way trying to take away from that fact, reading the following excerpt from the article...

"[..] von Neumann both worked alongside and collaborated with some of the foremost figures of twentieth century science. He went to high school with Eugene Wigner, collaborated with Hermann Weyl at ETH, attended lectures by Albert Einstein in Berlin, worked under David Hilbert at Göttingen, with Alan Turing and Oskar Morgenstern in Princeton, with Niels Bohr in Copenhagen and was close with both Richard Feynman and J. Robert Oppenheimer at Los Alamos."

...makes me wonder how many of his accomplishments were due to his own brilliance alone, and how many were due to both it and the stimulating environments he kept finding himself in.

For my own part, I see a big difference in my own output, and in the quality of that output, when I get the opportunity to interact with high-caliber people (knowledgeable, emotionally intelligent, inquisitive and open) on a regular basis.


A lot of those particularly accomplished people he interacted with specifically pointed him out as exceptional.

For example, from this article about his eidetic memory:

"One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how A Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes."


Another:

> “There was a seminar for advanced students in Zürich that I was teaching and von Neumann was in the class. I came to a certain theorem, and I said it is not proved and it may be difficult. von Neumann didn’t say anything but after five minutes he raised his hand. When I called on him he went to the blackboard and proceeded to write down the proof. After that I was afraid of von Neumann” — George Pólya

The nature vs. nurture argument has its merits in general, but sometimes nature just produces a complete freak. Being surrounded by geniuses probably isn't that valuable when you're head and shoulders above even them.


The nature vs. nurture argument has its merits in general, but sometimes nature just produces a complete freak.

Possibly less of a freak relative to his surroundings, compared to the difference relative to the rest of us:

Here's some argument from both sides of the nature v nurture debate (from https://slatestarcodex.com/2017/05/26/the-atomic-bomb-consid... )

The locality here suggests some "nature":

the Manhattan Project was led by a group of Hungarian supergeniuses, all born in Budapest between 1890 and 1920. These included Manhattan Project founder Leo Szilard, H-bomb creator Edward Teller, Nobel-Prize-winning quantum physicist Eugene Wigner, and legendary polymath John von Neumann, namesake of the List Of Things Named After John Von Neumann.

Or, maybe it's not just localized gene pool, but an experience common to all of them? (from same esay)

The coincidences actually pile up beyond this. Von Neumann, Wigner, and possibly Teller all went to the same central Budapest high school at about the same time, leading a friend to joke about the atomic bomb being basically a Hungarian high school science fair project. [...]

In this case, the guy was Laszlo Ratz, legendary Budapest high school math teacher. I didn’t even know people told legends about high school math teachers, but apparently they do, and this guy features in a lot of them. There is apparently a Laszlo Ratz Memorial Congress for high school math teachers each year, and a Laszlo Ratz medal for services to the profession. There are plaques and statues to this guy. It’s pretty impressive.


My favorite, from Edward Teller:

“von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.”


Fermi to one of his PhD students:

"You know how much faster I am in thinking than you are? That's how much faster von Neumann is compared to me."


These things are a virtuous circle. The more brilliant you are, the more brilliant people want to hang around, and the more brilliant you become. And loop.


"Work with the smartest people who will work with you" has become one of the pieces of career advice I give myself.


it's all mythmaking, a custom as old as time and the ancestor of our modern obsession with promotion. exceptional people aren't born so much as constructed to tickle our imagination and ego.

every culture makes myths but obsession is where it becomes problematic. we must have idols, so we will make them opportunistically rather than leave them to organic discovery. there's enough randomness and manipulation in the people we remember historically to not put too much faith in the objectiveness of any of these constructions.

that's not to say that no credit is due them, but that it's overwhelmingly likely overstated, per this obsession.

also note that any story title with a name prominently featured is simply going to be subjective (practically by definition).


Is it just me, or does it seem like there were a lot of celebrity-level STEM heroes to come out of the early 20th century? These folks all seemed to work together closely.

Maybe these things take time, but it just doesn't seem like there is anything remotely like this going on today, despite far greater access to education and increases in global population.

Is it perhaps that the limelight is less on mathematicians than it was during the early 20th century?


They all worked closely together because they all worked on the Manhattan Project. When there’s a single huge government project requiring the nation’s smartest people and on which the survival of the nation depends, it shouldn’t be too surprising that all of the top scientists and mathematicians and engineers all end up working together. :)

The fact that so many of them specifically grew up in Hungary, went to school in Germany, and came to the United States is interesting to note though!


There is interesting blog post about it here:

https://www.lesswrong.com/posts/xPJKZyPCvap4Fven8/the-atomic...


They immigrated because they could see that another war was brewing in Europe. So basically the US can thank the Nazis for that. They all had Jewish heritage (which is a very interesting fact in itself!) so it was sort of personal to them as well.


It's not just you. This whole group of (Hungarian) scientists is known as "The Martians" [1]

[1] https://en.wikipedia.org/wiki/The_Martians_(scientists)


Scientists become famous partly due to their accomplishments and partly due to their life story.

Would John Nash be famous if it weren’t for his tragic struggles with mental health? Would I know about Feynman if it weren’t for “Surely You’re Joking Mr. Feynman”, his safecracking, and the topless bars? Would Stephen Hawking be a household name if he weren’t disabled?

WW2 gave a fascinating context to many scientific projects and made lives of these scientists more colorful than they would have been otherwise. Ditto for the Cold War.

It’s up for debate if there’s a slowdown in science nowadays. But it’s certain that lives of those who do science are much more boring to the outsiders. Papers, conferences, labs, petty academic squabbles, fights over funding, perhaps some consulting for the government or the private sector. It’s much harder to create a mythology out of that.

When I think of famous 21st century mathematicians I think of Perelman (a recluse who proved a famous conjecture and refused the Fields Medal) or Zhang (worked as an accountant and at a Subway, then as a lecturer at UNH before making a breakthrough on the twin primes conjecture). Ordinary geniuses who went from top graduate programs to top professorships are comparatively anonymous to the general public.


I would offer Terry Tao as a counterexample. The only thing strange about him is how, well, not-strange he is!


That seems more a point in favor than a counter.

Terry Tao is surely very accomplished, but is he anywhere near as famous?


Besides if Tao has a claim to fame among the general public it’s because he was a child prodigy (a good story!). It’s hard to find some article about him that doesn’t mention his extraordinary childhood accomplishments.


It was a relatively small community, even before Manhattan project they all knew each other. Academic science is still very much like this, but the difference now is that there are so many more scientists, very few of their results are known to a wide audience.


I think research has become a lot bigger, more accessible, collaborative and derivative since the early 20th century. There are probably tons of graduate students with the intellect of an Einstein or a von Neumann who will likely never get the same recognition because they will be competing with (or working with) others for the same breakthroughs.

We had a lot of early "heroes" in Artificial Intelligence, but their accomplishments were eclipsed by newer breakthroughs in a short amount of time by equally smart people. Overall, this is great for humanity since we are pushing boundaries much quicker than was possible earlier.


> Is it perhaps that the limelight is less on mathematicians than it was during the early 20th century?

Everything makes sense looking backwards. It is quite possible there are people who exist today in various fields (think AI, crypto, space, vehicles) who will go down in history the same way. Kids will look back in 50 years citing their names, just as we cite Dirac, Einstein, Feynman, Heisenberg, etc. To us, these people are just "scientists" and "engineers" who are also alive at the same time we are. The reward is reaped years later.

Quantum Mechanics' reward was computers and health tech for the most part, but most of that truly started popping off long after the Manhattan project celebs died.


How is computing downwind from QM? I would trace computing's lineage back to the invention of the transistor or the earlier vacuum tube (advancements in electrical engineering, and the beginning of electronics), and logically back to Boole.

I don't think QM has had its payoff yet, but certainly general and special relativity have paid off in things like GPS, and probably more examples that I don't know about.


Matching the impact von Neumann made by himself in the 20th century might take dozens of von Neumanns today.


Or one Katalin Karikó.


I think mathematicians only become famous when they are being driven by (or driving) an industrial revolution. Just look at the digital revolution. I had never heard of George Boole until a quick Google search, but he laid down the framework for Boolean algebra in 1850. Claude Shannon is a much more recognized name, even though the impact of information theory is smaller than that of Boolean algebra (considering that BA is a prerequisite). Is it possible that George Boole was as popular in the 1930's as Shannon is now? Maybe. Perhaps the fame of mathematicians is staggered, as the applications often drive their fame, and the applications often come ~50 years later.


It was a much smaller field back then. Reading about computing in the 70's (especially ARPANET), the same handful of names keep popping up.

Also, they may be celebrities to us. But ask the average person who John von Neumann, Claude Shannon, Edsger Dijkstra, or Paul Erdős was, and they'll probably have no idea.


> it just doesn't seem like there is anything remotely like this going on today

It seems to me like the STEM heroes these days are working together on vaccine technologies. If you follow developments in mRNA technology you'll see there's a lot of international cooperation. It seems to me that current collaboration is on an even greater scale than the STEM heroes of the early 20th century, but it probably won't become clear to many until we look on it in retrospect 50 years from now.


Two words: SILICON VALLEY. Plus a lot of incest with Stanford grad students and young Stanford professors.


All of these Hungarian STEM heroes (https://en.wikipedia.org/wiki/The_Martians_(scientists)) were pre-Silicon Valley people. It was still Valley of Heart's Delight back then.

Not sure who their successors were. I guess there's Knuth.


Which episode are you referring to?


It staggers me that in an age of incredible innovation and technology, we still have no idea what biologically separates a man like this from the rest of us.

I'd like to take the idealistic position of (somewhat irresponsibly) hoping that some level of genius can be fostered and created in anyone from a young age. But when I read articles like this I feel the compulsive need to sit back in my chair and ponder just how much more primitive my mind is in comparison to that of a man like John.


To repeat a quote that probably appears in every Von Neumann post,

"Teller also said "von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us."


Obligatory Edward Teller talking about Von Neumann => https://youtu.be/ra4K6WPUkpk


László Polgár believed this and turned his daughters into chess prodigies.

https://en.wikipedia.org/wiki/Judit_Polg%C3%A1r


It’s pretty clear that the Polgár daughters had a very favorable genetic disposition on their side as well.


Do we actually have no idea what the biological/genetic differences between geniuses and average people are? That's interesting.

> I'd like to take the idyllic position of (somewhat irresponsibly) hoping that some level of genius can be fostered and created in anyone from a young age

So you believe that genius is environmental and not genetic? Or maybe we are thinking about different definitions of genius.


My personal experience is that it's a bit of both.

I am not a genius, but I can tell that I'm likely above average. I was raised in an environment that did not value intellect or education, and I always felt out of place. It definitely felt innate, and there are lots of stories from people in similar circumstances.

On the other hand, I never did well in school, especially math. One of my teachers literally called me stupid. So I had this self-perception of just not being very good at STEM. Had I not fallen into programming by accident, I would never have changed that perception of myself. I imagine there are very many people who had similar experiences, but were not so lucky as to learn their actual potential.

So, it's almost certainly a combination of factors.


I believe the foundations have to be genetic. There are a multitude of factors involved and one of them is having a strong memory. But there also needs to be a drive and thirst for knowledge.


I think drive and thirst for knowledge is more “nurture”, but the base is heavily correlated with genetics.

I think it is the same with people holding world records in sports. You can get insanely good no matter where you come from (barring some unfortunate medical conditions) with only drive and enough practice, but to win a gold medal, you have to be born for a given sport (on top of the insane amount of motivation and practice spent on it, of course!). Also, it is probably less sport-specific, so someone who became a world-class soccer player would have been similarly good in a similar sport.


Absolutely. Drive and determination can get you closer to your personal genetic ceiling. But if you don't have lucky genetics your ceiling will be lower than the world class level. Whether that is sports, math, painting, whatever.


The difference between humans and other apes is genetic, but anything more than that is not. Humans have nearly no genetic diversity because of population bottlenecks; there are barely any interesting differences between us.

It's also not that useful to think about. Phenylketonuria is a condition that makes you less intelligent if you drink Diet Coke, and most people don't have the gene for it. But not having phenylketonuria isn't a "gene for intelligence", it's just not having a disease.


Both points you make are massively incorrect.

Humans do have massive genetic diversity, in physical characteristics as well as mental ones. In the last decade, GWAS analysis has now led to the ability to identify the gene combinations responsible for intelligence (as well as height, various cancer predispositions etc), and predictive ability is high (i.e. predicting cohort differences in income, years of education etc just from genetic material).

It is also extremely useful and important to think about how to address both the nature and nurture components of intelligence. Preventing conditions detrimental to brain development (for example by using unleaded fuel and paints) is some of the best bang for the buck at a population level, and maximizing the genetic potential of your offspring by avoiding genetic faults and choosing positive traits while doing IVF will be an unavoidable consequence of current research.


> In the last decade, GWAS analysis has now led to the ability to identify the gene combinations responsible for intelligence (as well as height, various cancer predispositions etc), and predictive ability is high (i.e. predicting cohort differences in income, years of education etc just from genetic material)

That you seem to think GWAS operates at a genetic level, just for a start, leads me to doubt very strongly you will be able to provide any supporting citations for the claims you make here, which are wildly heterodox verging upon eugenicist.

I am no longer close to the field of bioinformatics, though, so perhaps you will surprise me!


From a genetic standpoint, all humans are 99.9% identical beasts.

There are of course some genetic components in potential abilities.

But I strongly believe that most of the differences we are seeing are built on compounded interest of environmental factors, starting from a very young age.


You strongly believe this based on what?

Being "99.9 percent identical" doesn't necessarily preclude substantial genetic variation in intelligence just like it doesn't preclude such variation in height or skin tone.

https://en.m.wikipedia.org/wiki/Heritability_of_IQ


Heritability of traits is not always directly genetic.


Correct.

I was asking what the basis is for your confidence that it positively isn't substantially genetic.

We know that intelligence is highly heritable, which leaves a few possible explanations, genetics being one of them (along with prenatal nutrition, prenatal lead exposure, etc).

If you're going to claim that genetics is only a small part of the reason that it's heritable, then that has its own burden of proof that needs to be met.


I'm not sure this is a viable argument considering that we also share much of our DNA with apes/monkeys.

It seems like minor genetic differences can create large disparities.

I definitely think environmental factors can compound on a genetic baseline though, and of course the earlier you start compounding the greater the returns.


The difference between apes and humans is much larger than that between humans, by at least an order of magnitude.


But that's not a sufficient counter-argument that the GGP post's reasoning is sound. Even single base pair mutations can have huge impacts. A surprising percentage of the human genome is retroviral in origin. As far as I know, "Genetic clock" analysis is the only sound application of counting base pair differences.

I don't think the GP is arguing one way or the other. They seem to be mostly questioning how wide the error bars should be on these statements. It looks like a Socratic twist on "citation needed" without an implication that statements are necessarily wrong.


Indeed, some cognitive disabilities are based in genetics, but those are usually quite obvious. The rest of us humans are pretty much the same. A young child's brain is, by nature, like a sponge, and impressions received at an early age are extremely critical.


Children's growth and development are also seriously affected by things that happened before they were born, like their mother's diet and exposure to local pollutants, which cause effects that could be mistaken for genetic differences. This is why twin studies and such things don't actually prove anything.


Well, one amino acid change can turn someone’s whole life into suffering, so I don’t see how the 99.9% means anything. If the difference is only in a few genes, it can already have a huge effect.


By now we should have some unequivocal evidence to support your conjecture. I think it would be nice if we did but we don't.


[flagged]


Why not?


> we still have no idea what biologically separates a man like this from the rest of us

I don't think it was a biological separation. Neumann, and his brilliant peers, all grew up in Hungary in the midst of political and ethnic oppression with significant economic and language barriers to overcome. Books have been written about it, and I can't accurately recount the facts. But it seems to me that what separates him and his peers is their perseverance in the face of adversity. Having overcome the very difficult circumstances in which they were born prepared them for great success later in life. It's inspiring, but in some ways it's also survivorship bias, since there were uncountable people who were born in the same circumstances and did not overcome. I think similar things are happening with refugees today. Those who overcome their circumstances often go on to be outstanding examples of genius and accomplishment in their fields.


I think you are confusing von Neumann with someone else, because that is not the case, at all. He grew up in a wealthy family which was ennobled by the emperor when he was 10 (hence the von), and spoke Hungarian natively. In childhood he was taught by governesses in numerous major European languages before going to a Christian school for the Hungarian elite, while receiving private tutoring from a renowned mathematician. Then he went to prestigious universities in Germany and Switzerland, before becoming the youngest Privatdozent in University of Berlin history.

The situation obviously rapidly changed in Europe, but Von Neumann left for Princeton before the 30s began. He grew up in a life of great privilege.


As I said "I can't accurately recount the facts", but Hungary was definitely not a country of privilege at really any time in it's history, much less the late 1800s and the early 1900s. The facts of Neumann's own life may have been much more fortunate than his country of birth. That is something I don't know much about. But I would imagine, no matter how privileged his life was, the fact that he was born in a country that was oppressed by a foreign superpower, lived through a holocaust of his ethnic group, and spoke Hungarian as his first language in a world dominated by German and English, shaped who he was.


"Anybody who looks at living organisms knows perfectly well that they can produce other organisms like themselves. This is their normal function, they wouldn’t exist if they didn’t do this, and it’s not plausible that this is the reason why they abound in the world. In other words, living organisms are very complicated aggregations of elementary parts, and by any reasonable theory of probability or thermodynamics highly improbable. That they should occur in the world at all is a miracle of the first magnitude; the only thing which removes, or mitigates, this miracle is that they reproduce themselves. Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable. But a reasonable milieu is already a thermodynamically much less improbable thing. So, the operations of probability somehow leave a loophole at this point, and it is by the process of self-reproduction that they are pierced."

— John von Neumann


In this I disagree with von Neumann. The relevant phrase to understand my point of departure is "That which persists exists." There is a continuum of strategies for persistence, and reproduction is only one of them. While thermodynamics over a large enough timescale dictates eventual disorder, over shorter timescales and in specific locales it allows for a temporary increase in order. This is no more or less probable than any other arrangement. What's fundamentally fascinating is that there are forces which resist change. It is that resistance to change on the small scale that allows for the persistence of patterns on the large scale. Atoms resist change, persist and thus continue to exist. Molecules resist change. Crystal structures. Arrangements of molecules into cells. The patterns of behavior in individuals. There are layers upon layers of this same phenomenon of the ability to resist change via different strategies and inherent properties. Locally this means that there is an anti-entropic tendency built into the fabric of the universe, and into all subsequent layers of organization within it. It's bloody wonderful, but hardly improbable.


I am by no means knowledgeable enough on this topic, but what we think of as increased order may not always equate to physically having larger order.

E.g., in your atom/molecule example, it may only be more orderly to a human brain, when in actuality the entropy cost of the tight packing is overcome by the favorable energy state at the temperatures we are used to. So there may be a difference between a fixed order maintained by energy expenditure, like a cell (Neumann’s point), and a seemingly ordered state that in physical actuality requires no additional energy, like a salt crystal (your point). With this (maybe faulty) distinction, Neumann’s point still stands in my reading.

If we were to have an immortal supercell that didn’t reproduce, it would likely cease to exist over an “infinite” timeline due to some rare external event, even if it were superior in every way to a simple cell that only reproduces. On the other hand a “single” crystal will similarly not survive a long timeline, but because its structure follows simply from the laws of nature, it will form again in the same way.



> Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable.

This is the line of argument in favor of the "simulation theory": "If in a distant future a civilization ever manages to create a simulation, who says we are not already in one of them?"


My Dad knew and worked with von Neumann at the Institute for Advanced Study in Princeton. My Dad says nice things about him and mentioned eccentricities like working through the night and playing music loudly.


Hi Mark. My name is Jørgen Veisdal, I wrote the essay. I’d love to interview your dad for a book project I am doing. Any chance he might be available for something like that?


Email me. I am visiting my Dad next week.


I think John von Neumann was a genius, but I also think that at the time there was also a confluence of great minds across mathematics and science in general, which likely fed into and accelerated the advancements.

Maybe it's still happening but we no longer hear the names, though potentially it was also just one of those alignments of possibilities.


I fully agree with this. While he was definitely a smart person, he was also lucky to be a polymath during a time of big transition for math and science. Knowing multiple fields, polymaths have a big advantage in systemizing a new school of thought. Being a grand master of one school is less advantageous for this specific goal.

Also, I think there are many more polymaths in our time than in the 1900s, thanks to the internet. A single person can consume much more information than before. This makes it difficult for a single person to get all the spotlight. I believe we have a lot of geniuses backstage whom the public simply doesn’t know exist.


I often wonder if there are points in human understanding where there is an opportune amount of existing knowledge, current exploration of knowledge, and sets of ideas floating around that leads to these sorts of surges of shared ideas and principles in disciplines.

While there's no doubt from accounts that Neumann was brilliant, I've observed other brilliant people who I feel could be groundbreaking minds given the right set of circumstances.

A bit of nature vs nurture, environment vs natural talent. I feel as though humanity often has its Neumann, Einstein, Euler, Ramanujan, etc., but they need the right set of circumstances to really shine through. They could just be outliers, but when you see a collection of these sorts of minds during a time period, I can't help but think there is an environmental factor enabling it. Even people as brilliant as Newton had an equivalent in Leibniz with the invention of calculus (Newton was clearly above and beyond though).


He and Einstein also did not have to contend with getting jobs optimizing ad placement.

Necessity is the mother of invention. But abstract thought requires time to focus on things outside the mundane.


Einstein had a job as a patent clerk!


> Maybe it's still happening but we no longer hear the names

We do hear the names, it's just difficult to recognise which ones are going to be the names that echo for generations.

I would be completely unsurprised if articles like this were written about Tao in 50 years' time.


It's also possible there no longer is an environment that supports these kinds of people.


Some strange political beliefs but indeed an unparalleled genius. (He advocated for immediately nuking the USSR!) His faculty with infinite series and other mathematical tricks was rather amazing.


If you think war is inevitable anyway, it makes sense to nuke first while the other side is still weak.

Luckily the war didn't happen, but it was a reasonable possibility. So only in hindsight do these beliefs look stupid (and evil).

Von Neumann also influenced the government on military strategy and basically invented MAD, which has worked great so far.


> He advocated for immediately nuking the USSR!

Considering the oppression that his native country experienced at the hands of the USSR, and the fact that the implications of nuking an entire country weren't really understood yet at that point, this view is hardly surprising.


Or that “if you don’t do it, they will”.


I don’t find advocating for attacking the USSR with nuclear weapons prior to their acquisition of the same to be that strange.

Consider that Bertrand Russell - the well-known and self-described pacifist - shared that view as well.

To greatly simplify the concept: if you believed that total war between the USSR and the West was inevitable, attacking first while the US was the only nuclear power would substantially limit casualties and ensure that the West won.


Always in awe to read about assemblies of great minds like at Princeton's IAS.

Does another such assembly exist today? I don't think it does but want to know if others can point to one.

Did it happen then because the discoveries were "low hanging" enough that they would be found eventually, such that today we've basically exhausted the major discoverable parts of nature given human limitations, like a depleting orebody where what's left are marginally economic residuals? Or was there something in particular about Western societies a century or two ago that produced such brilliance (or even specifically the Habsburgs' Austria-Hungary, where von Neumann was from), which perhaps we can learn from to steer things today?


Just reading the duties of his assistant makes you realize you should really reserve the word Genius for a few. And he certainly was one of them:

"The Duties of John von Neumann’s Assistant"

https://www.cantorsparadise.com/the-duties-of-john-von-neuma...


There have been attempts at this (a new Bell Labs, Janelia Farm) but I feel like the current academic system filters out the type of minds that work in such situations, and every attempt at a new Bell Labs recruits from the same system.


When new fields are being researched there’s a ton of low hanging fundamental work that can be done. Over the years all of the low hanging fundamental work is done and research gets more specialized and the amount required to learn increases dramatically.

I don’t think there’s any reason to think people like Gauss and von Neumann are much smarter than a modern day equivalent, say Terence Tao. But Tao’s work, despite being across many fields, is very specialized and not as well publicized.


Reminds me of the comment made by JFK in 1962 when hosting the Nobel Prize winners at the White House: "I think this is the most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House, with the possible exception of when Thomas Jefferson dined alone."


Microsoft Research and Google Labs have thrown obscene amounts of money at luminaries. But I am not aware of major discoveries or products made by them. Probably past their prime.


I’ve already linked it in this comment thread, but it seems related here as well (no affiliation):

https://www.lesswrong.com/posts/xPJKZyPCvap4Fven8/the-atomic...


[flagged]


[flagged]


It's quite controversial to study the IQ of different racial groups, mostly because of history (the racists of the 19th and 20th centuries were fascinated with racial differences) and because it can be used as a weapon by racist groups, and right or wrong it feels a bit like a politically incorrect or even a racist thing to do (I'm not passing judgment on race/IQ studies, just saying why scientists may steer away). As a Jew I'm not totally comfortable with these studies btw; other than a general curiosity I don't see what they accomplish. I can even see how they give more reason to hate Jews in certain circumstances, so no thanks, we're quite fine already.


Well, I think it is interesting to study, and it can further our knowledge of areas that can’t really be researched ethically otherwise.

It can also further our medical knowledge; for example, Ashkenazi Jews do have more neurological problems, which may explain their higher IQs on average.

But I do agree that the results should be used sparingly (though it can’t be helped; bad people will say what they want about a given race without any truth).


Discussed at the time (of the article, not of von Neumann):

The Unparalleled Genius of John von Neumann - https://news.ycombinator.com/item?id=21542753 - Nov 2019 (319 comments)


"You know, Herb, Johnny can do calculations in his head ten times as fast as I can! And I can do them ten times as fast as you can, Herb, so you can see how impressive Johnny is!"

- Enrico Fermi


And he was only -51-, correction, 53 when he passed… One has to wonder what more he would have accomplished had he lived another 30 years or more.

As an aside, are there many pure research institutes left? It seems like academic life is now mostly spent chasing grants and managing teaching load, and very very few people get to do pure research.


I heard or read a story from Freeman Dyson about a somewhat legendary conference where von Neumann was to present his version of Hilbert's problems, presumably about the new computing age, I think at the IAS. It was highly anticipated, but Dyson and everybody else present were shocked because von Neumann was in a state of advanced mental decline when he presented it, and people there agreed to pretend it hadn't happened. Sorry that I can't find a reference to it, the memory of it is vague.


This is probably the story in page 220 of [1], beginning at "Von Neumann got into trouble at the end of his life because he was really a frog but everyone expected him to fly like a bird.".

[1] http://www.uvm.edu/pdodds/files/papers/others/2009/dyson2009...


Great read, thanks.


That was exactly it, thanks for posting the link!


So many brilliant ideas in so few pages!



Oops, I must have misread, thanks, updated


Microsoft Research, Google Labs. Unclear if they have developed anything significant.


As a counterpoint:

“Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs” — John Backus, Communications of the ACM, 1978.

http://www.csc.villanova.edu/~beck/csc8310/BackusFP.pdf
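
(To make the contrast concrete: if memory serves, the paper's running example is the inner product, written as a composition of whole-array operations instead of a word-at-a-time loop. A rough Python sketch of the same contrast, not Backus's FP notation, just an illustrative analogue:)

  # "von Neumann style": word-at-a-time assignment into a mutable cell
  def inner_imperative(xs, ys):
      total = 0
      for i in range(len(xs)):
          total += xs[i] * ys[i]
      return total

  # Function-level style: compose whole-sequence operations
  from functools import reduce
  from operator import add, mul

  def inner_functional(xs, ys):
      return reduce(add, map(mul, xs, ys), 0)

  # Both compute 1*4 + 2*5 + 3*6 = 32
  assert inner_imperative([1, 2, 3], [4, 5, 6]) == 32
  assert inner_functional([1, 2, 3], [4, 5, 6]) == 32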


OT but when I open an article like this in Firefox's reader view, it often displays each image twice, first a blurry version and then the actual image. This is frustrating when I want to print a long article from reader view as I have to go through and delete all the blurry duplicate images first using developer tools (given that I don't want to waste ink or paper). Is there a better way I can fix or work around this?

I'm also interested because, as my username implies, I'm learning web dev and I would like to ensure if possible that any websites I make don't have this issue for people wanting to print pages from them.


First the article about Dirac and Feynman, and now this article about von Neumann. Cantor's Paradise has been a revelation to me.

It reminds me of my teenage years when I was a young student of mathematics and voraciously devoured the MacTutor History of Mathematics archive: https://mathshistory.st-andrews.ac.uk/

Thank you, Jorgen Veisdal.

My only appeal to you is that you not put these articles behind the Medium paywall. But if you are earning significant income via Medium, I retract my appeal.


“Voraciously devoured” induces in me a mental image of some existential situation forcing you to physically eat a book for survival.


Haha, it felt like it. I read those biographies obsessively. :)


It was striking for me when I read his paper on self-replicating machines, basically a design spec for DNA, and that was in the 40s. I still can't believe it.


This post actually suggests cloning John von Neumann as one of the options if we ever have adversarial AGI :) https://fantasticanachronism.com/2021/03/23/two-paths-to-the...


I love this idea. I hope his DNA is preserved somewhere in case one day we really want to do this


Is it just me, or does anyone else feel a good bit of insecurity wash over them when they encounter such tales of genius?


insecurity...pfft...it's usually a short lived existential crisis for me :D


As a pretty average person, articles like this often depress me. Like, what is the point of trying to learn stuff when I will never be able to do what he did at 8? We only need a few geniuses to do things and then push their solutions to humanity. The rest of us are useless.


Stephen Wolfram says he only works on the things he knows only he can do.

I've been listening to his podcast, it has a lot of great Q&A!


> The Unparalleled Genius of John von Neumann

There are a lot of mathematicians out there and these kinds of statements are not helping. Von Neumann's work was actually concentrated around a small set of ideas (the study of measure/probability). Most of his work is related to that. Even the von Neumann algebras were, in his interpretation, representations of measures. You can also argue that he was pushing already existing ideas further (way further...). In contrast, Einstein was creating a brand new theory. Von Neumann was a great mathematician and for sure a genius. But he was not by any means unparalleled.


As easy as it is to admire intelligence, it's really motivation which is key, and which is the admirable thing about people. VN could have accomplished far more if he'd been less concerned with prestige, academies, spy stuff, and so on. If he'd opted to isolate himself a bit more. Like his colleague Gödel, or like Grothendieck.

Then there would have been many more theorems bearing his name:

https://en.wikipedia.org/wiki/List_of_theorems


By most accounts Gödel lived an anxious, unhappy life. He died, basically, from paranoia.

Not to say that von Neumann lived a happy life — he sounds like a bit of an asshole — but I wonder whether Gödel would have traded his achievements for less isolation and a less tortured existence?


Yes, I wonder too. Yet isolation does seem to be a component of creative achievement.

Feynman would be another example like Von Neumann. Off-the-charts intelligence and admired for his intelligence, his honesty and sense of adventure. But limited creative output due apparently to his pursuit of sex and intrigue.


There seems to be a typo in the title (Hungarian speaker):

“von Neumann (1926). His third paper Az általános nalmazelmélet axiomatikus folépitése, his doctoral dissertation”

It should be halmazelmélet and fölépítése/felépítése


Reading stories like this makes me certain that one of the biggest breakthroughs the human race will achieve is when we cure cancer.

Our progress will start increasing even more.

Imagine what our world would look like if the average person lived to over 90 and was still mentally sharp.

Imagine an additional 40 years of Neumann, 20 years of Einstein, or, from more modern times, Steve Jobs or my personal favorite, Paul Allen.

Paul Allen is one of my personal heroes, and I cannot help but drop a tear whenever I think of how the technology world would look if Paul Allen hadn't gotten cancer and had to retire from Microsoft before the age of 40.

If anyone is interested, I implore you to read Paul Allen's autobiography, Idea Man. One of the best books I have ever read; you are amazed at what kind of man Paul Allen was.

One of my favorite stories is when Allen and Gates made their first BASIC and had to go to Albuquerque to pitch it to their first customer. Gates did not dare leave Harvard yet, so they sent Allen alone to go deal with it, and while Allen was on the plane, he realized that while they had developed the BASIC, they hadn't made the bootstrap loader. So Allen, minutes before landing, grabbed a steno pad and started writing the code in machine language, all from his head...

And it worked... I mean what can you say to that, except feel insignificant?


It is a dark view, but I’m not sure the general population living longer would be necessarily beneficial to society. But it has more to do with all the implicit propaganda around us and the flaws of democracy. And with the fact that older people seem to have a more rigid world view. So all I’m saying is that we have to understand the human brain’s psychology as it gets older.


I agree it will be big breakthrough, but more because it will improve the quality of life of the average person. Don't most of these geniuses produce their best work in their 20s? Not sure if adding another 20 unproductive years would help much.



> includes the first introduction of the concept of a class, defined using the primitive notions of functions and arguments.

Might he be the inventor of OO programming ?? :)


In "metamathematics" in the 1920s, sets were divided into 2 kinds or types as an attempt to evade a certain paradox. ("The set of all men who do not shave themselves" if I recall correctly.) One of those kinds of sets were called classes.

So, nothing to do with OOP.
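
(For reference, the paradox being recalled is presumably Russell's paradox, of which the barber story is the popular retelling; a one-line sketch:

  $$ R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; ( R \in R \iff R \notin R ) $$

Roughly, the fix in von Neumann's class-based approach is that such a collection is a proper class rather than a set, so it cannot be a member of anything and the contradiction never gets off the ground.)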


What fascinates me is that this man with one of the greatest raw intellects of anyone who ever lived was also very human and flawed in some ways. The guy who would fall asleep not knowing how to solve a math problem and immediately have the solution upon waking (one of the anecdotes I've read about him) was the same guy who liked to ogle women excessively and crack dirty jokes.


  Young man, in mathematics you don't understand things. You just get used to them.


Let's speculate this is entirely nurture. How do you get as smart as von Neumann? Zettelkasten?


Von Neumann did it by watching loads of Khan Academy videos on YouTube, taking regular Vitamin D supplements, using Anki cards and practicing mindfulness meditation.


I didn't know I was in such auspicious company!


you joke, but you would have entirely better outcomes with the above regimen than without


I should add that Von Neumann only started with maths at 39. He’s actually a self-taught PHP programmer by background.


don't forget the keto diet and intermittent fasting


Have a lot of money, be privileged to work with lots of very smart people, and spend a lot of time learning, from the sound of this article.

I'm always super-suspicious when I hear someone's brilliance described in anecdotes. Some people are better at promoting themselves than others. So perhaps judge him only by his published work and not his reputation.


By the age of six Von Neumann could divide eight-digit numbers mentally and converse in Ancient Greek. If it is nurture, it is something accidental which happens in early childhood, not something you work on and learn about later in life.


Apropos von Neumann: why is there no Manhattan Project for corona?


The raw science is not what held back the response. The mRNA vaccines were designed within a week. Testing and approval were the bottlenecks.


I read this article in 2019, but now Medium has put it behind a paywall.

I really hate Medium so much.


.


Fwiw, RMS is being cancelled after doing his best work on software (yes, I'm predicting that his prime is behind him). You can understand that in different ways, but primarily it's just timing (changing culture; RMS is old too).


It's a shame that so many "great" people are (or were) also awful people.


WTH?!

Why are you bringing up RMS?

The social norms were different back then. And even then von Neumann is not known as someone who harmed women (or men).

Those women might have equally looked at men. It’s getting ridiculous.



