Turing Machines, on the other hand, were instantly obviously mechanizable. It was clear that one could build a physical machine to run any Turing program without human input. By proving that they could simulate the other systems, Turing showed that those other systems could be mechanizable as well.
I don't understand why Schmidhuber continues to ignore this crucial point.
Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth. Many disagreements that seem important now will be forgotten. Many findings that seem important now will turn out not to be. Many names that are very prominent now will fade into obscurity. As always.
So, he has:
>Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth.
Well, credit for the field of CS is already being assigned now, as it was yesterday and 20 years ago. It's not as if this future credit would be the first such, or that it would be more accurate (or that it would not feed off the current popular views).
~ Sir Thomas Browne, Hydriotaphia (1658)
But that downplays her accomplishments too much. She didn't write the "first program" but she was the first to understand what computers would be capable of doing (for example, that by assigning numbers to letters and symbols, computers could do more than simply perform numerical computations), and she was the first to invent foundational control flow structures such as loops and conditionals. Her program was much more rigorously defined and sophisticated than any previous examples.
>The longest program that Menabrea presented was 11 operations long and contained no loops or branches; Lovelace’s program contains 25 operations and a nested loop (and thus branching).
From that, I very much recognized that she had the attitude and experience of a "programmer," so I would say she was the first programmer, in the modern sense.
The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulæ of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention. It is however pretty evident, on general principles, that in devising for mathematical truths a new form in which to record and throw themselves out for actual use, views are likely to be induced, which should again react on the more theoretical phase of the subject. There are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained."
-- Ada Lovelace, 1842
I learned about it in Walter Isaacson's The Innovators.
That almost sounds to me like saying that the Wright brothers "made a few simple flights".
> first to invent foundational control flow structures such as loops
I wonder how sigma notation fits into this. Clearly the notion of expressing arbitrarily repeated operations using a fixed amount of information (which is what a loop is, essentially) was known at least to Euler.
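As a toy illustration of that point (my own, not from the thread): sigma notation already compresses an unbounded number of additions into a fixed-size description, which is exactly what a loop does:

```python
# Sigma notation sum_{k=lo}^{hi} f(k), written as a loop: the description
# is fixed-size, but the number of additions it denotes grows with hi.
def sigma(f, lo, hi):
    total = 0
    for k in range(lo, hi + 1):
        total += f(k)
    return total

print(sigma(lambda k: k * k, 1, 10))  # sum of squares 1..10 -> 385
```

The names here (`sigma`, `f`, `lo`, `hi`) are my own; the point is just that a finite rule stands in for arbitrarily many repetitions.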
Also, the fact that the machine enabled these things in the first place (unlike even some of the later machines such as Z3) suggests that its designer was either aware of this necessity to begin with, or at the very least in possession of preternatural prescience. In that case the use of these features in some programs but not in others would be not a matter of inventing them in the former programs but instead a matter of choosing to exploit existing hardware features, or declining to do so, depending on what program you're looking at.
You can even go further back. Algorithms with loops were known already to Babylonian mathematicians. So you don't need to resort to preternatural prescience.
The Z3 was not intended as a general computing device but as a practical help for engineers. Because of that, you can't say it was missing something it didn't need to do its job. Whereas when Zuse designed Plankalkül, loops and conditional branches were naturally included in the design.
Richard Pearse gets written off in the same way to elevate the Wright brothers' flying accomplishments.
Pearse was just pursuing powered flight as a hobby in rural New Zealand; he didn't bother informing the press, and didn't even tell the government about his flights and engineering designs until WWII, 40 years later.
Maybe the first programmer who wasn't also a hardware engineer.
That said, I'm sure in the case of Ada Lovelace there is at least some element of misogyny involved.
Or merely the first to express it? I'm pretty sure Babbage himself, as the inventor, understood well what computers would be capable of doing.
> There is a seemingly minor difference whose significance emerged only later. Many of Gödel's instruction sequences were series of multiplications of number-coded storage contents by integers. Gödel did not care that the computational complexity of such multiplications tends to increase with storage size. Similarly, Church also ignored the context-dependent spatio-temporal complexity of the basic instructions in his algorithms. Turing and Post, however, adopted a traditional, reductionist, minimalist, binary view of computing. Their machine models permitted only very simple elementary binary instructions with constant complexity, like the early binary machine model of Leibniz (1679)[L79][LA14][HO66] and Zuse's 1936 patent application...
I presume by "seemingly minor" Schmidhuber implies "it turns out to be very important".
To echo my other comment here, it’s not that Church himself knew that. Even five years ago people did not know that. It’s not a question of priority. But I find it fascinating and reassuring nevertheless, as actually programming e.g. a self-interpreter for a Turing machine—or for general recursive functions, or for combinatory calculus—is an absolute slog.
Dal Lago, Martini (2008). “The weak lambda calculus as a reasonable machine”. DOI:10.1016/j.tcs.2008.01.044, apparently not on arXiv.
Accattoli (2019). “A fresh look at the lambda-calculus”. DOI: 10.4230/LIPIcs.FSCD.2019.1, apparently not on arXiv.
Accattoli, Dal Lago (2014). “Beta reduction is invariant, indeed”. DOI: 10.1145/2603088.2603105, arXiv: 1405.3311 [cs.LO].
Accattoli, Dal Lago (2016). “(Leftmost-outermost) beta reduction is invariant, indeed”. DOI: 10.2168/LMCS-12(1:4)2016, arXiv: 1601.01233 [cs.PL].
Forster, Kunze, Roth (2020). “The weak call-by-value lambda-calculus is reasonable for both time and space”. DOI: 10.1145/3371095, arXiv: 1902.07515 [cs.CC].
Now that I’m looking through bibliographies, there are apparently other relevant intervening papers in the vicinity, even by some of the same authors, but these are what I’ve looked at personally.
Bonus: the paper
Hackett, Hutton (2019). “Call-by-need is clairvoyant call-by-value”. DOI: 10.1145/3341718, apparently not on arXiv.
is unrelated to questions of complexity but is just so absolutely lovely.
> Nevertheless, according to Wang,[WA74-96] it was Turing's work (1936) that convinced Gödel of the universality of both his own approach (1931-34) and Church's (1935).
Unless "according to Wang" is meant as "I don't know if I believe it", it's apparently documented that Gödel himself thought Turing's contribution was major and shed important light on the mathematical implications of Gödel's own work.
There's never any one person that invents anything, it's always built on work that came before.
Reading the OP, I got increasingly bored and tired... ok, what's your point? Yes, clearly Gödel and Church especially did foundational work without which Turing's work would not be possible -- and I don't think anyone denies it; anyone with any kind of computer science education is surely familiar with Gödel and Church. It's not like they are languishing in obscurity; they are both very well known and respected! Gödel especially is widely considered a real giant in his fields. I am confident that neither Gödel nor Church is going to be forgotten for many generations.
But Turing made important contributions that took important steps necessary for the development of CS as a field. It's not a mystery or undeserved why Turing's name ends up remembered.
The OP's point is just that any singular "inventor" is always building on the work of many who have come before? OK, sure. Boring. So we should never promote anyone as making important foundational contributions? Well, people aren't gonna stop. Boring.
It is difficult to get a man to understand something when his salary depends upon his not understanding it.
But wait a minute, you might say, facts are facts.
And if everyone had the time and resources to discover and digest every fact, your understanding would be the end of it.
But everyone doesn't have time and resources. To compensate, we rely on others to curate facts for us. When we encounter an internally consistent subset of facts that suits our ideals and our interests, we adopt that point of view.
There are infinitely many subsets of curated facts that can be presented as internally consistent. That's why there are so many different points of view.
What bearing does this have on Turing's role in computer science, and his latter-day fame in a society that came to be defined by silicon logic?
The First Computers Were Human (and Mostly Women)
Turing, in addition to stating an ontology of computing, dared to invite the question: what is the difference between a computer and a human?
I cringe when I read ill-researched essays like these, because they assert a relationship between women and computing that genuinely did not exist back then.
For the vast, vast majority of women in the field of computing at this time, they were nothing more than glorified calculators. Yes, there were a few women scientists and mathematicians (by few, I mean literally handfuls). Yes, it was a male dominated field.
But the overwhelming majority of women working in this industry at this time did secretarial style busywork. It wasn't glorious. It was underappreciated. It sucked.
These types of essays are a genuine attempt to write a history that did not exist. It is gaslighting in literary form, the likes of which the article we are discussing right now is attempting to rectify.
I'm not sure what you are saying? Could you clarify?
What is "secretarial style busywork"?
Is this article also "ill-researched"?
What would a better researched view of history say?
I'm not trying to troll you or make you write an essay here - just trying to glean the main points of what you believe so I can see your point of view.
Are you familiar with the view that the "Turing Test" was first conceived by Turing as a challenge to discern the gender of a human hidden behind a screen?
In my comment I was trying to point to Turing's interests in the nature of gender and humanity. I find that expansive and prescient in the sense that "computing" to us moderns is more about NLP than calculating maths, and yet, it's all the same thing under the hood.
There’s also the point (mentioned e.g. by Wadler in his “Propositions as types” talk) that Gödel didn’t actually realize how pervasive universal computation was and actively pushed back against Turing proclaiming the equivalence. This is not to accuse him—he wasn’t particularly obstinate or unfair about it and, furthermore, came to understand the idea fairly quickly. But it’s not at all uncommon in science for the one who actually invented a thing to fail to realize what it was that they just invented, and somebody else comes years or in some unfortunate cases decades later and announces the founding of a new area of knowledge on that already-discovered site. Whom the later naming will associate with the discovery is a toss-up, but it’s generally fair and in good taste, I think, to mention both.
So your argument is, because it is unclear how to "physically implement the arbitrary-function minimization operator", Turing is the better "theoretical backbone" and has founded computer science?
Church had piles of stuff that he and his students produced that were computable with the lambda calculus. Basically, all of the natural number functions that a person thinks are intuitively mechanically computable, those folks had showed how to lambda compute. With this evidence, he proposed to Godel (they were working together at Princeton at the time), who was considered the world's expert, taking "lambda-calculable" as a mathematically precise version of "mechanically computable." But Godel was notoriously careful, and he did not accept the thought as perfectly clear.
That is, they had a subset of the things that could be mechanically computed. But was it the entire set? Or was there something that some discrete and deterministic mechanism could be made to do that would lead to more than Church's set?
Imagine you are Dedekind and you are looking at the primitive recursive functions and (1) any such function is intuitively mechanically computable, and (2) you are able to work out how to define a pile of things like prime factorization of an integer using this system. You might well conjecture that this is it. But we know (and Godel and Church and Turing knew) that this is not it, that you need to add unbounded search of some kind (this is what minimization does) to get more things that are intuitively mechanically computable.
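A minimal sketch of what minimization adds (my own illustration, not from the thread): the mu operator performs an unbounded search for the least n where a function hits zero. Unlike anything primitive recursion can express, there is no bound on the search fixed in advance:

```python
# The mu (minimization) operator: least n such that f(n) == 0.
# There is no a priori bound on the search, so this may loop forever
# if no such n exists -- that is exactly what primitive recursion lacks.
def mu(f):
    n = 0
    while f(n) != 0:
        n += 1
    return n

# Example: integer square root of 10 via minimization --
# the least n with (n + 1)^2 > 10, i.e. 3.
print(mu(lambda n: 0 if (n + 1) ** 2 > 10 else 1))  # -> 3
```

The possible non-termination is the price of the extra power: mu-recursive functions are partial, where primitive recursive functions are always total.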
I agree that the minimization operator is not as easy to picture with gears and levers as some of the other operations. But the issue in '36 was that a person could worry that there was even more. Just as minimization is not as easy to picture, and the need for it didn't hit Dedekind with great force, could there be something else out there that we have all missed?
That worry disappeared when Godel read Turing's masterful analysis. It convinced him that this is what a machine can do.
"That this really is the correct definition of mechanical computability was established beyond any doubt by Turing." Church felt the same way, writing that Turing machines have "the advantage of making the identification with effectiveness ..."
Yes, the set of functions computable with mu recursion, or with lambda calculus, is the same as the set of functions computable with a Turing machine. (Turing in his original paper showed that the set is the same for his system and Church's, and the proofs for mu recursion and many other systems are well-known.)
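As a small, hedged illustration of that equivalence (mine, not from the thread): Church numerals let plain lambdas compute arithmetic exactly as the lambda calculus would, and converting back to an ordinary integer shows the two models agreeing on the answer:

```python
# Church numerals: the numeral n is the function applying f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def church(k):   # ordinary int -> Church numeral
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):   # Church numeral -> ordinary int
    return n(lambda k: k + 1)(0)

print(to_int(add(church(2))(church(5))))  # -> 7
print(to_int(mul(church(3))(church(4))))  # -> 12
```

This only exhibits one direction informally (lambda terms simulated on a conventional machine); the actual equivalence proofs referenced above are of course far more careful.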
> I'll totally agree that from a teacher's perspective, the Turing machine is the better explanation.
When I learned this, the instructor did not use Turing machines. They used mu recursion. That has the advantage that you don't first define a machine and then derive the set of functions, instead you go straight to the functions. But I agree that Sipser (which I understand to be the most popular text today) defines a computable function as one that is computed by Turing machine, and to my mind that has the advantage of honestly suggesting to students what they are about.
It does seem weird to me though that we're letting our engineering limitations determine how we think about these things mathematically.
Turing machines do have mathematical advantages in some areas. See this sibling comment: https://news.ycombinator.com/item?id=28541800
The Rekursiv advantage is its ability to do recursion on the bare-metal. It's described as an object-oriented architecture. Its memory was a persistent store of objects, with each object having its size, type, and position known in memory.
I came into programming/computer science from a mathematics degree; I read some old treatises on recursion theory and fell in love. I couldn't ever quite wrap my head around the Turing Machine formalism, but kept at it for a while. Finding Barendregt's paper was a huge shock! I grasped it much quicker. So, yes, lambda calculus and the Turing Machine formalism are equivalent in explanatory power, but there are also reasons someone might prefer one to the other. So, yes, for me, the value _is_ the formalism.
As to why I think the Rekursiv would provide a good platform for implementing lambda calculus on the bare-metal, that's entirely due to Rekursiv's memory model advantage and the fact that it has a user-writable ISA. Why would someone choose to implement the lambda calculus on bare-metal? You call it "fetishism," I call it fun!
More generally, I just really like the idea of having a machine with a user-writable ISA.
 Theory of Recursive Functions and Effective Computability: https://openlibrary.org/books/OL2738948M/Theory_of_recursive...
 Introduction to Lambda Calculus: https://www.cse.chalmers.se/research/group/logic/TypesSS05/E...
Thanks for clearing up that it's the formalism you find interesting. Also, to offer a counterpoint, I'm also from a math background, but I was more of an analysis person (as much as one can be in mathematics where it's all related) than an algebra person, and when I did some FP research, it often felt like where all the algebraists go to play computer science. I feel like analysts are underrepresented in PLT (and overrepresented in complexity theory!) but this is already going off-topic, so cheers.
Out of curiosity, can you identify any areas in PLT that could be made more analyst-friendly?
Intuitively, it feels that PLT is almost necessarily of the algebraist; to me, one of the big divides is the discreteness of algebra vs the continuity of analysis. Would it help if there was a PLT that exhibited a greater degree of continuousness? If so, what do you think that might look like?
That's also a very interesting perspective on analysis. To get a better feel: is your joy of analysis in getting into the "internals" of an algebra, to directly derive properties of elements within the algebra as opposed to relying solely on global properties endemic to the construction of the algebra?
I'd also love to see your talk! Do you think a RISC Rekursiv could be achieved? What value, if any, do you think such might have in our current world?
The slides for my 2019 talk about Smalltalk computers (in LibreOffice and PDF formats):
and the video (1 hour and 13 minutes):
If you replace the Rekursiv's special microcode memory with a cache to allow microcode to live in main memory you will essentially have a RISC version of the machine. I have adopted this solution in several of my own designs.
Could I purchase a Merlin 6 or a Pegasus 2000?
Speaking of which, the way that the Pegasus 2000's eGUI documentation is written is incredibly dear. The "world" and "heaven" analogies are great!
If I may ask, from your perspective, what's more elegant about Rekursiv's design? Is it the philosophy or implementation? I'd also love any links to orthogonal persistence!
That theory continues to bear fruit as the history of programming languages is almost entirely about bringing new and "better" abstractions to problems and then translating them to "dumber, more practical" machines. We have programming languages today modeled directly off the elegance of (though now sometimes still a few steps removed from being direct implementations of) the lambda calculus and mu-recursive functions, and the amazing thing is that they work great even given how "dumb" and "inelegant" our machines can be in practice.
See also https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
This is already proven to be the case. Mu-recursion (which IIRC is not Godel's general recursive functions, despite what Wikipedia says; Kleene was the originator of the mu operator. Godel's general recursive functions are defined in a separate, but yet again equivalent way that directly extends primitive recursion), Turing machines, and the lambda calculus are all proven to be exactly equivalent to each other. The fact that these three independent approaches to computability are all equivalent is why we have strong informal justification that the Church-Turing Thesis (a non-mathematical statement) holds.
Separately, there's a sentiment that I've seen come up several times on HN that somehow the lambda calculus, mu-recursion, or general recursion is more "mathematical" and Turing machines are less "mathematical."
I want to push back on that. The mathematical field of computability is based almost entirely off of Turing machines because there are many classes of mathematical and logical problems that are easy to state with Turing machines and extremely awkward/almost impossible to state with the lambda calculus and mu-recursion (this is consistent with my previous statement that the three are all equivalent in power because computability theory often deals with non-computable functions, in particular trying to specify exactly how non-computable something is). The notion of oracles, which then leads to a rich theory of things like Turing jumps and the arithmetical hierarchy, is trivial to state with Turing machines and very unwieldy to state in these other formalisms.
Likewise the lambda calculus and mu-recursion (but not general recursion) provide a very poor foundation to do complexity theory in CS. Unlike Turing machines, where it is fairly easy to discern what is a constant time operator, the story is much more complicated for the lambda calculus, where to the best of my knowledge, analyzing complexity in the formalism of the lambda calculus, instead of translating it to Turing machines, is still an open problem.
There is indeed a mathematical elegance to Turing machines that makes it so that most of the mathematics of computability is studied with Turing machines rather than the lambda calculus.
The lambda calculus on the other hand is invaluable when studying programming language theory, but we should not mistake PLT to be representative of the wider field of mathematics or theoretical CS.
EDIT: I should perhaps make clear that if I put on my mathematical hat, mu-recursive functions seem like the most familiar formalism, because they align with a common way families of things are defined in mathematics (specify individual members and then generate the rest through some relationship). However, I would contend that for the majority of mathematicians outside of computability theory, the lambda calculus and Turing machines seem equally "strange."
However, yes, I do think that 'mechanization' or physical implementation is a crucial piece of Turing's contribution that is wrongly ignored in this article. And I think without mechanization, there is no CS as we understand it.
"Likewise, Konrad Zuse never got a Turing award despite having created the world's first working programmable general computer 1935-41. [...] It was pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, [...]"
In my view, Turing's contribution is providing a plausible definition of computation along with a deep and comprehensive theoretical characterization of the properties of this model of computation. This is why Turing machines form the basis of theoretical computer science, and not other models such as lambda calculus. I think saying that Turing machines were adopted since they were merely more convenient is highly misleading.
I think this pattern repeats a lot: There are many cases where you can point to multiple people who invented similar ideas around the same time, but it is typically the person who provided the most deep and comprehensive treatment of the subject that ultimately gets most of the credit. This depth is not conveyed in pop science attributions such as "Turing invented computation", but this doesn't mean Turing doesn't deserve the credit.
His Turing award citation: "Professor Wilkes is best known as the builder and designer of the EDSAC, the first computer with an internally stored program."
The 2009 award to Chuck Thacker, on the other hand, was clearly based on his contributions to hardware. I have the impression that ACM had a change in policy around that time.
But officially I seem to be wrong:
In the "hardware" category Wilkes is listed as the only one, while Thacker, with Brooks, Cocke and Wilkes are in the "computer architecture" category.
The word computer means multiple things. In one sense it's the abstraction of universal computation. Imagine a world where actual physical computers didn't progress to universal computation, but were stuck being purpose-built to the present day. The field of computer science would be utterly different because they couldn't actually compute anything with their science. They could just discuss computability in an abstract sense. It'd be like physics without the particle colliders or telescopes or lasers.
I think of the founders of computer science more like the founding fathers of America, rather than a single guy named Turing, but some are more memorable than others.
It’s not clear that you need a theory of computation to build a stored-program computer. You might not clearly understand the theoretical capability of such a machine, but that wouldn’t prevent you from doing many useful things with it.
Harold Abelson points out in one of his lectures that computer science isn't about computers any more than biology is about microscopes.
From that perspective, it is clear that Turing found an existing discipline of computer science and made major contributions to it. But it doesn't really seem right to say that he invented or founded computer science.
But it's certainly not the study of what mechanical computers can do. Among other things, mechanical computers all have bounded resources unlike Turing machines.
Nobody is saying that's not important. But the field as a whole began before mechanical computers were invented or practical, and there are several subfields that are basically indistinguishable from mathematics.
In any university, Chemistry is not the study of beakers nor is astronomy the study of telescopes.
Both Software Engineering and Computer Engineering exist, as sub-disciplines, but it doesn't make sense to argue that somehow studying Graphene (a chemical which exists) is Chemistry while studying non-blocking algorithms is only Applied Computer Science somehow just because such algorithms could be used on an actual computer.
If N is sufficiently (polynomially) large, the two are approximately equal.
Yup, so do humans with paper and pencil.
Also, if you ever work with the Turing machine (a finite-state control hooked up to an infinite recording tape), it is not at all "intuitive" that this construction comprehensively captures the world of computation.
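For concreteness, here is a minimal sketch of that construction (my own, with a made-up example machine): a finite transition table driving a head over an unbounded tape. The example machine scans right over a block of 1s and appends one more:

```python
# A Turing machine: finite control (the transition table) plus an
# unbounded tape. delta maps (state, symbol) -> (write, move, next_state).
def run_tm(delta, tape, state, halt="H"):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    head = 0
    while state != halt:
        sym = cells.get(head, "_")
        write, move, state = delta[(state, sym)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

# Hypothetical example machine: skip right over 1s, write one more 1, halt.
delta = {
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("1", "R", "H"),
}
print(run_tm(delta, "111", "scan"))  # -> "1111"
```

The control here is deterministic and finite; all the unboundedness lives in the tape, which is the whole trick of the model.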
Is Lisp such a mechanization?
The famous Lisp Machines of the 1980s were reasonably traditional Von Neumann computers with nice features like tagged memory and optimized stacks.
Much closer to mechanizing lambda calculus, but still a bit Von Neumannish, is the SECD virtual machine for Lisp and friends:
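To make the flavor of that concrete, here is a toy call-by-value sketch in Python (my own simplification, not Landin's original specification): four registers — Stack, Environment, Control, Dump — reducing lambda terms directly, with closures as the values:

```python
# Minimal SECD-style evaluator for call-by-value lambda calculus.
# Terms: 'x' (variable), ('lam', x, body), ('app', f, a).
# Values are closures: (param, body, env).
def secd(term):
    s, e, c, d = [], {}, [term], []
    while True:
        if not c:
            if not d:                  # nothing left to do: result on stack
                return s[-1]
            s1, e1, c1 = d.pop()       # restore the saved state (the Dump)
            s1.append(s[-1])
            s, e, c = s1, e1, c1
            continue
        t = c.pop()
        if t == "@":                   # apply: pop argument and closure
            arg, (x, body, env) = s.pop(), s.pop()
            d.append((s, e, c))        # save current state on the dump
            s, e, c = [], {**env, x: arg}, [body]
        elif isinstance(t, str):       # variable: look up in environment
            s.append(e[t])
        elif t[0] == "lam":            # abstraction: build a closure
            s.append((t[1], t[2], e))
        else:                          # application: eval f, then a, then apply
            c.extend(["@", t[2], t[1]])

# (lambda x. x) (lambda y. y)  -->  the closure for (lambda y. y)
print(secd(("app", ("lam", "x", "x"), ("lam", "y", "y"))))
```

A real SECD machine also handles primitives and has a more refined instruction set; this only shows how far from a von Neumann register machine the state shape already is.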
Seems to be a lot of uneasiness, of late, about the way credit is allocated in science. IMO, it's mistaken to point this at the top: Nobel laureates, heroic icons like Einstein or Turing. These figures are supposed to be idolized and idealized. Yes, this is "untrue," technically. But it serves many purposes. A Nobel Prize win elevates science by singling out scientists for hero status. Achilles elevated Greece by giving Greeks something to collectively aspire to or adulate.
If you're already deeply interested in computer science, of course the detailed narrative recognizing dozens of brilliant early computer scientists is richer. Of course!
Where poor credit allocation matters isn't historical hero narratives, it's at the working scientist level. The grants & positions level. Here, it's important to be accurate, fair, etc. Being inaccurate, unfair or corrupt at this level creates actual deficits.
The Turing machine is a key conceptual model for understanding the basics of computation. The Turing Test is a great model for thinking about what being intelligent means. Hardly a week goes by without the term Turing Complete appearing somewhere in a HN comment. The fact that he also played an important role in the design and construction of actual practical computing machines, and did so to fight nazis seals the deal.
Of course there's more to it, there's plenty of credit to go around, but Turing is the perfect entry point for people to appreciate and learn more about all the work that went into the founding of computer science. It elevates the profile of the whole field.
People waste their lives in service of causes and ideas that just are not grounded in reality. Not just in the philosophical sense that we cannot know truth, but in the practical sense of "the outcome you want will not flow from the actions you are taking today". Narratives are inferior to truth when it comes to making decisions.
I'm reminded of "The Glass Bead Game," which discusses an academic society that forgets the names of contributors, since they're all just part of the flow of humanity.
In that context narratives end up glorifying endless minutiae.
But then we've invented and perfected writing, developed symbolic languages and notations (e.g. math, musical), long-duration storage media for text, and eventually networked digital computers. In terms of communicating and preserving knowledge, stories are pretty much the worst possible option you can choose.
We're comfortable with narratives because we didn't have anything else for hundreds of thousands of years. Stories are pretty much hardwired into our brains. But that doesn't make them the right choice, now that we've figured out much better alternatives.
More than that, I'm personally suspicious of stories being used in communication. There's no good reason to use them, and there's plenty of bad ones - it so happens that what makes a good story robust over time is the same thing you need to manipulate people into believing lies and shut off critical thinking.
As far as preserving information goes, no argument there. Stories aren’t a good way to preserve the truth of matters for future generations. To look and determine if the stories told have truth in them requires more detailed writing.
What has empirically brought more folks into careers in science, dry textbooks foisted by teachers or Star Trek? I'd argue Star Trek and science fiction more generally. You can chalk that up to human failings if you like, but inspiration is a need that can't be avoided if you wish to convince.
This is why a historian can read, understand (both the pros and the cons), and respect books that represent an economic history, a social history, an information history, a microhistory, and even a great-man history of a given subject without trouble.
More reason for engineers to take humanities courses!
Also, the more I learn about my heroes the more I realize that they never saw themselves as ubermensch. If anything, self doubt seems to be the common thread. I think this angle does not get enough attention.
However, I agree with you on a broader point. This is just one perspective. Here is another one: Turing the historical figure is necessarily oversold because many more people than Turing the real person contributed to his aggrandizement. Like all cultural icons, Turing the idea outlived and outshined Turing the man.
I think most of that reaction isn't coming from your narrative on history, it's from accusing other commenter's narratives of being false.
If there was no narrative, no idols, no celebrities, would people be less motivated to do science? Why do we need to lie to ourselves so?
> If you're already deeply interested in computer science, of course the detailed narrative recognizing dozens of brilliant early computer scientists is richer. Of course!
Personally I'm mostly uninterested in who did what, but maybe that's just me. It seems obvious to me that nearly every scientific discovery could have been done equally well by millions of people, it's just a matter of who had the resources to be educated, who decided to research the problem, who managed to snipe the answer first, and who had the right connections to get it acknowledged. They're still great achievements, for sure, but they're not the markers of exceptional genius we want to think they are, not for Turing or Einstein, but not for anyone at all, really.
The point isn't to prove that they're special. The point is that something special happened and these people are designated symbols for that... and they're kind of selected for being good at this. We're not doing this for them, they're dead. The celebrity of Einstein is a deification of his relativity theories. We need idols for our symbolic world, to work without them in the real one.
Learning about why correlation doesn't equal causation (and spurious correlations) is more impactful if you also learn about Wakefield's sins at the same time. He's a villain.
Archimedes and the bathtub is a great story - and I learned it in elementary school and still remember it and the lessons it teaches. We like to associate people with events and they help for learning and retaining information.
It's easy to see then that such events allow for the eventual "recruitment" of other scientists, and show society that "science is working" and "solves important problems".
Both of which serve to enrich the scientific world with new researchers and funding to keep the engine running.
Because we're flesh and blood, i.e. utterly irrational.
> If there was no narrative, no idols, no celebrities, would people be less motivated to do science? Why do we need to lie to ourselves so?
Yes, definitely, a huge amount of what motivates scientists is desire for fame, being considered a genius, Nobel prizes, scientific immortality, and so on. It is entirely unrealistic to imagine that we can stop being like this, it's almost a religious belief, akin to thinking that, one day, people can live without sin.
> Personally I'm mostly uninterested in who did what, but maybe that's just me. It seems obvious to me that nearly every scientific discovery could have been done equally well by millions of people, it's just a matter of who had the resources to be educated, who decided to research the problem, who managed to snipe the answer first, and who had the right connections to get it acknowledged. They're still great achievements, for sure, but they're not the markers of exceptional genius we want to think they are, not for Turing or Einstein, but not for anyone at all, really.
This may be an accurate description of your personality, in which you're one in a million, or it may be that you're ignorant about the things that actually drive you. The vast majority of people are driven by some kind of desire for fame, recognition, status, upvotes, and so on.
Suggesting that Turing and Einstein were not "exceptional geniuses" is bizarre. Even in proper context, they were exceptional geniuses, just among other, lesser-known, exceptional geniuses. If we take your view seriously, we remove all human agency and uniqueness, we remove the idea of an "achievement" and we can only give credit to luck, the historical process, and various contingent circumstances. Even if your view is accurate, people simply cannot live that way. Creating narratives is part of what makes us human and narratives need protagonists (idols, heroes, whatever).
That might do more harm than good. Once someone wins a Nobel, their productivity tends to decrease. Fighting over credit can be really toxic (see Newton vs Leibniz which probably stunted the development of calculus) and lead to less collaboration and knowledge sharing.
It may be unrealistic to think we can be different, but at least seeing that it's problematic should be unrelated to that. It's unrealistic to think crime will stop, but we should at least try to minimize it.
Or it might be that people who are driven by fame and recognition are more likely to become famous than those who aren't, which skews our idea of what motivates people. Given how much society emphasizes fame and money as markers of success, I feel people tend to be mistaken in the other direction: many people think they are, or should be, driven by fame or money even when it simply contradicts their personality.
Even if it was indeed the case that most people are motivated by fame, I think those who aren't are more like 1 in 3 or 1 in 4 than 1 in a million. It might be 1 in a million in actually famous people, but not in the population at large.
> Even in proper context, they were exceptional geniuses, just among other, lesser-known, exceptional geniuses.
If I am correct that millions of people had the capability, that would place "exceptional genius" at 1 in 1000, or 1 in 10000. I think that's a reasonable ballpark.
> If we take your view seriously, we remove all human agency and uniqueness, we remove the idea of an "achievement" and we can only give credit to luck, the historical process, and various contingent circumstances.
Whether we acknowledge exceptional geniuses or not, it remains the case that 99.99% of people are not exceptional geniuses. Are you saying these people don't have agency, or that they aren't unique? I think we all have agency, we're all unique, and we all have achievements. Some achievements are more impactful than others, some achievements are more impressive than others, but these are not necessarily the same, and neither is necessarily remembered, because what matters most is not the person or the achievement, but how the story fits in the narrative. In any case, you don't need to care about that narrative to care about or acknowledge agency, uniqueness or achievement.
Of late? You should read up on the Newton/Leibniz hysterics over who invented calculus. The arguments over who invented the first light bulb, car, etc. Whether Greek knowledge (the foundation of Western civilization) originated in the Near East or even in India. Heck, people still argue about who "discovered" America first. There is national, religious, ethnic, racial, gender, sexuality pride tied to "priority". It's not just in science/math, it applies to everything.
> These figures are supposed to be idolized and idealized
Why? They weren't particularly good people. Neither were saints.
> Achilles elevated Greece by giving Greeks something to collectively aspire to or adulate.
Are you talking about science/knowledge or politics? But you are right on the point. It's what this is all about at the end of the day. Politics.
Without politics, the discovery/knowledge would be what is important. Because of politics, the people become the focal point.
But one soon realizes you probably won't get heroic credit even if you do contribute something heroic, neutralizing that encouragement.
Therefore, you'd better do it for the love of the work itself or for how it helps others.
There's no limit to what you can accomplish if you don't mind who gets the credit.
This has always been the case: medieval and Renaissance thinkers would publish anagrams of their key findings because they didn't want to give someone else the advantage of knowing the finding but also wanted to prove that they thought of the idea. IIRC, Isaac Newton did not publish any of his findings until someone else threatened to publish their independent results. And he's known as the creator of calculus because the British Royal Society got into an academic slap-fight with the French.
I feel the era of great thinkers who single-handedly performed disruptive breakthroughs in their field, the Galileos and Newtons, ended with Einstein's era (and even Einstein stood on the shoulders of giants).
No one works in isolation any more, and that is not a bad thing. You can subject any relevant figure to a similar analysis and come up with the same results; it's absurd to expect such an overwhelming figure as Albert Einstein to emerge these days.
But if you need to choose a Founding Father of Computing Science for the general public, I'd say Alan Turing is the best candidate. Scholars will give due credit to Church, Zuse, von Neumann and all the others.
Move Newton, Faraday, Maxwell and Einstein 10 km away from where they were born, surround them by a different set of chimps, and the story doesn't end the same way.
A good book by Niall Ferguson, The Square and the Tower, makes the case that traditionally historians have studied individuals instead of groups because it's easier to collect data on one chimp than on the entire troupe.
Doesn't diminish their achievement in my mind.
I'm sympathetic to the Marxian view of Great Men; I think it's no coincidence that the related work of Gödel and Turing was published within a couple of decades of one another, or that the ideas of Copernicus, Kepler and Galileo emerged around the same time as one another.
I'm certainly impressed by the greatness of Great Men; but I'm hard-pressed to find one whose discoveries were so remarkable, in the context of their times, that no one else could have been expected to make similar discoveries around the same time.
But Turing probably isn't more important to how you get from the Treaty of Bern in 1874 (creating the UPU, so you could now practically write letters in Paris and send them to New York and it Just Works™, albeit expensively and slowly) to the Internet than, say, Gödel (more fundamental observations about the nature of mathematics that underpin computation) or Grace Hopper (the first compiler, although today we'd call it only a linker). Her Navy bosses couldn't immediately see any value in it. But Grace is apparently the first to make use of the meta-applicability of computing: the minutiae of actually programming the computer are tiresome, a lot of rote tasks perfectly suited to a machine, so why not have the computer do those parts for you?
von Neumann simply described the work of Eckert and Mauchly on the ENIAC in a written report. And his name was on the report which made people think that he came up with the idea, which was false. It also resulted in a patent dispute -- it's interesting to imagine what would have happened if the concept had been patented. The book goes into detail on this.
The Wikipedia article also talks about Turing machines as a precedent that store data and code in the same place. But ironically I'd say that probably gives him too much credit! I think a large share should go to the people who designed a working machine, because it's easy to come up with the idea of an airplane; much harder to make it work :) And to me it seems unlikely that the Turing Machine, which was an idea created to prove mathematical facts, was a major inspiration for the design of the ENIAC.
Finally, even though the author of this web page has his own credit dispute, I appreciate this elaboration on the credit assigned to Turing.
Actually, it was a study group to come up with the successor to ENIAC (called EDVAC) which included Eckert, Mauchly, von Neumann, Goldstine and Burks. Von Neumann was the last to join, but wrote down the group's conclusions in a memo meant for the group. Herman Goldstine typed that up into a nice report but listed von Neumann as the sole author and distributed 24 copies to researchers. Many new copies of the report were made and circulated causing confusion about who had created the ideas.
George Dyson's "Turing's Cathedral", on the other hand, argues that von Neumann's close relationship with Gödel had a major role in getting the stored program idea adopted by the EDVAC group.
People have had that perched-on-giants feeling for some time:
This concept has been traced to the 12th century, attributed to Bernard of Chartres. Its most familiar expression in English is by Isaac Newton in 1675: "If I have seen further it is by standing on the shoulders of Giants."
Once you start digging you realize that nothing is as simple. For example for physics, "Physics for Poets" by Robert H. March is an eye opener.
I agree with this. It's certainly the case that I wish more people knew of Alonzo Church and Kurt Gödel, but you have to realize in a "PR" sense that it's simply not going to be feasible to teach the general public about their contributions.
And Turing's contributions were genuinely ground-breaking, there's a reason that computer science is lousy with concepts named after or by him (Turing machines, Turing-completeness, even the word "computing" was arguably coined in "On Computable Numbers"). He also thought deeply and hard about the philosophical implications to computing in a way that others didn't (the "Turing test" being the obvious example).
In addition: when a mathematically inclined person describes any kind of mathematical concept to laymen, the first question is always "Yeah, but what is that actually useful for?", asked with a certain amount of disdain. With Turing, the answer is powerful: "How about defeating the Nazis and laying the foundation for modern society?". That case is harder to make for Church or Gödel: they obviously didn't work for the GC&CS, and "lambda calculus" as a concept is a much more abstract thing than Turing machines, which laymen can readily understand (i.e. it's "just" a computer).
Add to that the fact that Turing's story is not just about computing, or code-breaking, it's also the story of the suffering that society inflicted on gay men. The fact that he was shamed into suicide is just all the more reason to celebrate him now.
I agree with the basic point of the article, but I have no issue with giving Alan Turing this title. He earned it.
He applied computational thinking all over the place, showing great foresight in https://en.wikipedia.org/wiki/Turing_pattern
Please don’t diminish his legacy by repeating this lie.
Turing's suicide is contentious and circumstantial at best. His documented behaviour had none of the very common signs of suicide - there was no note, he had plans for later in the week, and none of his close friends noted any change in behaviour.
So, Founding Fathers of computing science becomes mixed - starting from those low brow thinkers we call journalists - with the idea of Founding Father of computing. And this is not only unfair, but technically wrong.
Just look at his previous blog post, in which he explains that the most cited neural networks all cite works by him. These papers cite dozens of papers, so a lot of other groups that are active in AI can claim the same thing...
For example, Turing published an independent proof of the undecidability of the Entscheidungsproblem in the [TUR] article, just a month after Church, a fact the article forgets to highlight.
Originality is easily defined as who did something first.
This might not be the same as the influence of some work. It might be that someone else does a lot of the groundbreaking work which actually makes something work (e.g. Goodfellow et al. for GANs). You can say the GAN paper had more influence than Schmidhuber's Adversarial Curiosity Principle.
Also, of course, some newer authors might not know all the old work, so people can arrive at the same ideas independently. When Goodfellow got the idea for GANs, he might not have known about Schmidhuber's Adversarial Curiosity.
The problem is, sometimes people did know about the earlier original work but intentionally do not cite it. You cannot really know. People will of course tell you they did not know. But this can be fixed by simply adding the citation. It looks bad, of course, when there are signs that they should have known, i.e. that it really was intentional.
There is also a lot of arguing about when something is the same idea and when it is a different, novel idea. This can be ambiguous. But for most of the cases discussed by Schmidhuber, when you look at the core idea, this is actually not so much the case. And it is not much of a problem anyway: there is far less argument about whether something is at least related, and related work should still be cited.
The question is then which work one should cite. I would say all the relevant references: definitely the original work, but also other influential work. Many people cite only the latter, and this is one of Schmidhuber's criticisms, that people give too little credit (or no credit) to the original work.
It seemed more like he felt he was unfairly being uncredited. Which is probably why he wrote this - he now cares deeply about giving credit to the right people.
I understand why he'd care about that if he'd been uncredited and watched peers be overcredited, but I'd hardly call it a noble work, even if it is understandable.
You_again wants his work and that of others properly recognised. For example, his article titled Critique of Paper by "Deep Learning Conspiracy" (Nature 521 p. 436), which is referenced by your link to Wikipedia, cites a couple dozen pioneers of deep learning, quite apart from Schmidhuber himself. Quoting from it:
>> 2. LBH discuss the importance and problems of gradient descent-based learning through backpropagation (BP), and cite their own papers on BP, plus a few others, but fail to mention BP's inventors. BP's continuous form was derived in the early 1960s (Bryson, 1961; Kelley, 1960; Bryson and Ho, 1969). Dreyfus (1962) published the elegant derivation of BP based on the chain rule only. BP's modern efficient version for discrete sparse networks (including FORTRAN code) was published by Linnainmaa (1970). Dreyfus (1973) used BP to change weights of controllers in proportion to such gradients. By 1980, automatic differentiation could derive BP for any differentiable graph (Speelpenning, 1980). Werbos (1982) published the first application of BP to NNs, extending thoughts in his 1974 thesis (cited by LBH), which did not have Linnainmaa's (1970) modern, efficient form of BP. BP for NNs on computers 10,000 times faster per Dollar than those of the 1960s can yield useful internal representations, as shown by Rumelhart et al. (1986), who also did not cite BP's inventors.
That is not "wanting to be the center of attention". It is very much asking for proper attribution of research results. Failing to do so is a scientific scandal, especially when the work cited has contributed towards a Turing award.
However, his greatest impact came probably through his contribution to cracking the Enigma code, used by the German military during the Second World War. He worked with Gordon Welchman at Bletchley Park in the UK. The famous code-breaking Colossus machine, however, was designed by Tommy Flowers (not by Turing). The British cryptographers built on earlier foundational work by Polish mathematicians Marian Rejewski, Jerzy Rozycki and Henryk Zygalski who were the first to break the Enigma code (none of them were even mentioned in the movie). Some say this was decisive for defeating the Third Reich.
Yes, Turing worked on Enigma and the Bombe to automate breaking of the code. However, Colossus wasn't for Enigma (it was for Tunny) and Turing didn't work on it. This paragraph seems confused about which is which.
Also, the fact that the Polish who did the original work weren't mentioned in the movie is just one of many things horribly wrong with that movie. It's so bad it shouldn't be considered canon.
> Polish mathematicians Marian Rejewski, Jerzy Rozycki
> and Henryk Zygalski who were the first to break the Enigma
The person who toured us around told us a brief story of how Enigma was broken, starting with Bletchley Park. I, or maybe my friend, asked who was the first to break Enigma, and he immediately answered that it was Turing, then noticed our puzzled expressions and added something along the lines of 'ah.. yeah.. and the Polish did some minor work too' :). Just an anecdote.
Since when are movies supposed to be accurate historical references? They are made to be entertaining, so facts get kicked out of the door from Day 1.
As it happens, Verity Stob panned the movie (justifiably, IMHO), in her splendidly British style, for much more than just getting the facts wrong.
You can't tell a complex story in a short time with numerous characters on screen. In books you can. In movies it's borderline impossible, and therefore simplifying/dumbing things down is a filter you need to apply first.
And dumbing-down is not equivalent to simplifying, unless you do it in a dumb way.
Talented writers take one issue, or one theme, from the big picture and weave a story from that.
I think this is a joking extrapolation to the real life, and the "author" here is, in best case real life, or as an approximation, historians' position based on evidence.
Since it wasn't linked, Bombe was based on the Polish Bomba machine:
But science fields do need marketing. All children have heroes they look up to. Putting focus on Turing's achievements is merely creating a pop star figure in the mainstream, which I think is a good thing: a smart dude works on a problem that saves World War 2 and now powers your phone and your TikTok app. Once you are actually interested in the field you can work out the nuances and the falsehoods in that claim.
Evaluating earlier work in some field throughout history always leads to a complex graph of achievements, but you cannot put that graph in the name of an annual prize. Do we change "Turing Award" to "Frege-Cantor-Godel-Church-Turing"?
After reading that I sat here for a minute and racked my brain as to who my childhood 'hero' might be. I can't remember a single person.
It's amusing to me how much of intellectual work deals in a currency of status. Getting/giving credit for things appears to be the Prime Directive, at least among the observers. We've now graduated to not only stressing who is responsible but what demographic groups they are a part of.
Now, it could be that the real deal groundbreaking folks don't give a damn. Tip o' the hat to those people.
The vast majority of people working anywhere near mathematics, physical sciences or electrical engineering (the 3 founding pillars of CS) in the 1920s and 1930s probably worked on problems related to WW2 during WW2. You can equally state that motivating claim for a lot of other people.
I think Turing gets the Media Treatment^TM because there's a lot of Tragic Hero Energy in his story:
<A gay man in an era that summarily rejected him [and we tell this story in an era that is extremely oversensitive and hyper-reactive to this particular sort of injustice]; a smart, shy pupil whose closest childhood friend (and suspected lover) died early of a now-extinct illness; a mathematician who dreamed of how numbers and lookup tables could hold a conversation, saw them used and counter-used to destroy cities and murder millions, then was finally rewarded with prison and humiliation by the people he fought for.>
Turing himself of course deserves all praise and glory and the righteous anger for how he was treated in his last years, but I think our era's affinity for him is just the old mechanism of people digging the past for battles that reflect the moral values they're currently (fighting for|winning|losing), see also the misguided claim that Ada Lovelace is the first "computer programmer", usually followed by a looong screed about Women In Tech.
We just like a good story to exaggerate and make it reflect our current moral memes, and the idea of a Man|Woman Ahead Of Their Times is a catch by this standard.
Jailing or sterilizing gay people for having sex is evil. End of story. It has only been 20 years since this was the law in some US states. I see no reason why vigorous rejection of this sort of policy as monstrous can possibly be seen as "oversensitive and hyper-reactive".
The point isn't that this oversensitivity is misplaced, the point is that it's moral outrage porn that the tellers of the story use in a smart way to get a reaction from you.
This isn't necessarily a bad thing if it's just one or two story among others, after all the purpose of art is to get strong reactions out of its audience. But when every such story has to lean hard into the moral aspect to the exclusion of all else it becomes a trite children story told for nothing else but feel-good points.
Consider the amount of articles on trump during his presidency. How much of it was high-quality investigative journalism telling you things you don't know, and how much was "Trump tweeted something shockingly stupid, here are a list of experts you don't need telling you this is shockingly stupid, this means End Of Democracy (EOD)" ? The latter articles are technically true, but it's trite and accomplishes nothing but pulling on your memetic levers to get you to like/share/feel-bad|good-all-day.
So much for the Polish Cipher Bureau. Not so many tragic hero opportunities there.
The Bombe helps break Enigma, and thus is an early part of Ultra and arguably does "save World War 2" but it has no more relevance to your phone or your TikTok app than does the Rubik's cube or the slide rule.
Colossus isn't very far from the Bombe today, you might likely visit both on the same trip, but Turing didn't build it, and although it's clearly in some sense a computer, it is critically lacking in some features you'd want from a general purpose computer since it had a single purpose, to break Tunny in the mid 1940s.
In some sense Colossus is relevant to your phone and TikTok, because it is a computer, but, Turing didn't work on it and it isn't their direct ancestor by any means at all.
Bletchley Park work was still classified in those days, so not much was known about that. Much to the annoyance of Tommy Flowers, who built the Colossus code-breaking machine but couldn't mention it in post-war job-hunting. (Incidentally, Colossus was not a general-purpose computer. It was a key-tester, like a Bitcoin miner ASIC.)
As I've pointed out before, the big problem was memory. IBM had electronic addition and multiplication, with tubes, in test before WWII. But the memory situation was really bad. Two tubes per bit. Or electromagnetic relays. Or electromechanical counter wheels, IBM's mainstay in the tabulator era. To store N bits, you had to build at least N somethings. Babbage's Analytical Engine called for a memory of a ring of rows of counter wheels, a mechanism the size of a locomotive. Access time would have been seconds.
Much of early computing was about kludges to get some kind of memory. The "Manchester Baby" had a CRT-based memory, the Williams tube. That was the first computer to actually run a program stored in memory. Frederic C. Williams, Tom Kilburn, and Geoff Tootill, 1948.
After that, everybody got in on the act. Mercury tanks and long metal rods were built. Really slow, and the whole memory has to go past for each read, so twice the memory means twice the access time. Then there were magnetic drum machines. Magnetic tape. Finally, magnetic cores. At last, random access, but a million dollars a megabyte as late as 1970. Memory was a choice between really expensive and really slow well into the 1980s.
Turing was involved with one of the early machines, Pilot ACE/ACE. But he quit before it was finished.
If the problem is that some important early contributions to CS are being overlooked, the solution is to promote those contributions.
By framing this as Turing vs. others, the focus is squarely on Turing and his contributions. It puts itself in the position of having to beat down and minimize Turing's contributions before raising up other contributions. Pretty much setting itself up to fail to convince very many.
Instead, e.g., present the narrative of those early contributions, showing how they provided the foundation Turing worked from.
(edit: I should add: I understand perfectly that the point of this article probably isn’t to actually convince anyone of anything, but is just a hot take meant to get people worked up for the page views. So mission accomplished, from that perspective, I guess.)
This is a misunderstanding of the Turing machine model. The Turing machine is not designed to be a realistically implementable physical machine, and indeed there are no details in Turing's paper on how such a physical machine could be achieved.
Instead, the Turing machine model is designed to be a mechanistic model of what a mathematician does (in rote day-to-day tasks at least). The tape is a representation of the notebook where the mathematician writes down notes. The read head is a representation of the mathematician reading from the notebook.
It's fascinating to read the paper because of this, since it spends quite a few paragraphs showing that this simplification doesn't lose anything of the mathematician's work. It spends some time noting that even though paper is two-dimensional, it can be losslessly represented on a one-dimensional tape. It spends time noting that writing/reading one symbol at a time is a good enough approximation for human writing/reading. It spends time noting that the next step doesn't need to depend on more than the current symbol + internal state, as human attention is also focused on a limited number of symbols.
This is actually why Turing's paper is so convincing on the argument of universal computation - not because the machine is realizable, but because it's hard to invent anything that a mathematician does while calculating that is not captured by the Turing machine model.
I very much recommend reading the first part of the paper to see this argument (the second part, where it is proven that this abstract machine can in fact compute all known numbers, is more technical and flew over my own head).
PDF: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
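The model described above is small enough to sketch in code. Here's a minimal simulator, with a rule table invented for illustration (none of this is from Turing's paper; it's just the standard tape/head/state-table model). The example machine increments a binary number written on the tape:

```python
# Minimal one-tape Turing machine sketch: a sparse tape, a read/write
# head, and a finite table of (state, symbol) -> (write, move, next state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run until the machine reaches the 'halt' state; return tape contents.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # only cells ever touched are stored
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example rule table: binary incrementer. Walk right to the end of the
# number, then add 1 with carry while walking back left.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "_"): ("1", -1, "done"),
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "halt"),
}

print(run_turing_machine(increment, "1011"))  # 1011 (11) -> 1100 (12)
```

The point of the sketch is how little machinery is involved: the whole "computer" is the dozen-line loop; everything interesting lives in the rule table.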
I’ve read the paper. I think we just take different things from it, possibly because you have a background in mathematics?
To me, the main takeaway (if I imagine reading it in 1936) is that a universal Turing machine is not all that complicated, and arouses the “I could build this thing”-intuition.
That of course doesn’t mean that Turing intended it to be realizable, that’s not my point. But he appeals to an engineer’s intuition. It’s that intuitive link that’s valuable and unique IMHO.
BTW, I think your takeaway is probably clearer in Gödel’s work.
Moreover, although it turns out that that model of computation is very robust and sufficient for all purposes in physics (unless black holes or something allow hypercomputation), Turing does not really show that definitively, and in a sense it can't be definitively shown. All we have is a lack of counterexamples (admittedly a very convincing one).
I don’t see why this intuition is that helpful generally either; Turing machines don't really help at an implementation level with modern engineering problems as far as I can tell. Most of the time you know that what you want to do is possible in finite time &c.—presumably the difficulty is doing what you want to do, and going via the Turing formalism would be a bit odd.
On the contrary, I think this is one of the advantages of Turing’s model: I can imagine standing there in my garage looking on as my universal Turing machine is running low on tape on the left side, and then simply attaching a new roll of fresh empty tape at the end, holding it as it is fed into the machine. :)
It’s simply the least leaky abstraction of computation.
I think this is often part of the misunderstanding when you stumble into a post by someone who's confused about 0.999... = 1. People sometimes write things like: "0.999... only moves closer and closer to 1, it never reaches 1." I think that highlights a deeper point than people usually give these comments credit for. The thing is, 0.999... doesn't "move" anywhere, it's considered a completed object right from the beginning.
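The "completed object" view can be made precise with a standard geometric-series identity (this is textbook material, not something from the thread): the notation 0.999… names the limit of the partial sums, and that limit is exactly 1:

```latex
0.\overline{9}
  \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
  \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
  \;=\; 1
```

No term of the sequence 0.9, 0.99, 0.999, … equals 1, but the value of the series, the object the notation refers to, does.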
Anyway, the point is that Turing machines are not like this at all. They only look at a fixed-size part of the tape during each step, from which it follows that they have only used a finite amount of tape at any point of their execution.
So for any given (halting) computation, you don't actually need an infinite tape, you just need "enough", without changing the result. This is important because it makes Turing machines a model for practical computers. For example, the device you're reading this on has gigabytes of tape, and that's big enough for many, many, many kinds of computation.
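The "only finitely much tape at any point" observation can be made concrete with a small sketch. This is a toy model of my own, not anything from Turing's paper: the names (`Tape`, `moveL`, `run`, `used`) are made up, and the tape is a zipper that only materializes cells the head has actually visited, so each step adds at most one cell.

```haskell
-- Hypothetical single-tape machine sketch; names are illustrative.
data Sym = Blank | One deriving (Eq, Show)

-- Zipper: cells to the left (nearest first), the scanned cell,
-- cells to the right. Only visited cells are ever stored, so the
-- conceptually infinite tape costs finite memory at every step.
data Tape = Tape [Sym] Sym [Sym] deriving Show

moveL, moveR :: Tape -> Tape
moveL (Tape [] c rs)     = Tape [] Blank (c:rs)   -- splice in fresh blank tape
moveL (Tape (l:ls) c rs) = Tape ls l (c:rs)
moveR (Tape ls c [])     = Tape (c:ls) Blank []   -- splice in fresh blank tape
moveR (Tape ls c (r:rs)) = Tape (c:ls) r rs

-- A toy program: write One and move right, n times, then halt.
run :: Int -> Tape -> Tape
run 0 t              = t
run n (Tape ls _ rs) = run (n - 1) (moveR (Tape ls One rs))

-- Number of cells materialized so far.
used :: Tape -> Int
used (Tape ls _ rs) = length ls + 1 + length rs
```

Starting from the empty tape `Tape [] Blank []`, `run 3` leaves three `One`s behind the head and has materialized only four cells: the physical tape grows in lockstep with the steps taken, exactly the point above.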
In a practical sense, Turing machines don't voraciously consume tape. Adding extra feet of tape gives you an exponential increase in what you can compute. So if you set up a program to be reasonably judicious with its tape use, you can just say that if it reaches an end, you pause it for a day, head to the shop, and buy another reel. Big computations take a lot of time anyway.
IMO the given example muddies the waters a bit by conflating the conceptual tape a given machine is running on (which might be infinite for non-halting programs) with the physical tape you've used so far (for which it suffices for such tape to be finite at any fixed moment in time, though the amount needed might grow unboundedly as time increases).
However, for this argument to work, we need to accept both that all computation is captured by Turing machines, and also that what Turing machines do is in fact computable. In essence, Turing machine <=> realizable machine. Maybe some people are more impressed by one, others more by the other direction of that double implication.
Or more accurately, what human computers did in those days (i.e. the rooms full of people algorithmically working out numerical calculations for e.g. physicists or whatever without understanding what they were doing beyond the mechanical steps they were taking). In other words a formalization of so-called effective methods.
Not proving, conjecturing. It has not been proven to this day: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
Also, it can’t perform any computation, if we say hypercomputation is a form of computation. Hypercomputation is (as far as we know) physically impossible, but so strictly speaking are Turing machines - a true Turing machine has unlimited storage, and an unlimited amount of time in which to complete its computations - any Turing machine you could physically construct would only be a finite approximation of a real one.
That's really important in an industry saturated with hype, elitism and general nonsense. Anything I can do in Rust, you can do in Assembly, so I've got some explaining to do (I can probably succeed in this example, others maybe not).
If Turing actually failed to deliver the goods on "completeness", I'd really like to resolve that.
Of course the Turing machine is easier to understand because... it's a theoretical machine, after all. On the other hand, lambda calculus was weirder; I didn't get it until learning Haskell :D
I think register machines are more intuitive than Turing machines - they are much closer to how real world computers work.
The Turing machine model is a mechanically plausible abstraction of a human performing a computation by following some steps in their mind and with a notebook. The tape stands in for the notebook. The read head stands in for the human's eyes. The write head stands in for their pencil hand. The internal state of the machine stands in for their mental state. At every step, you either read a symbol from the notebook tape and change your mental state in relation to this symbol, or write a symbol based on the current mental state. The procedure itself can be written on the tape, and you can occasionally refer back to it.
The original paper spends a good few pages working out this metaphor and showing that the machine model perfectly abstracts the mathematician's work of computation.
But on the physical side of the link they are much less intuitive IMHO: it’s much less clear that “this is just a machine that I could build in my garage”.
All of the details of how a symbol is read and recognized, how it alters the internal state, how the next symbol is chosen, or even how many symbols you actually need are not mentioned, and are even considered irrelevant. Turing wasn't building one of these; he was proving that this model captures all known computation, and that even so it is undecidable whether such a machine will ever stop for an arbitrary computation.
But any individual Turing machine is. Building a simple one is not very hard, and you can imagine supplying it with more and more tape as it needs it.
It’s thus the only model of computation that I can fully imagine “working”. And that to me is the beauty of Turing’s contribution.
If you're trying to imagine something you can mechanically assemble out of discrete components, it's not so great. You need an unlimited number of components hooked up in complicated ways.
A Turing machine is a fixed-size and relatively simple box, plus a long tape that feeds through.
It passes the popular bar for new knowledge because it illustrates a conflict and resolves it, but it's in a cognitive style that seems to conflate debasing knowledge with discovering it. It is how people are educated these days, where it is sufficient to destabilize and neutralize target ideas instead of augmenting or improving them, so it's no surprise, but the article seemed like a good example to articulate what is irritating about this trend in educated thinking.
I think everyone with an interest in theoretical CS should work through Turing's 1936 paper at one point in their life. For me, the important part of that paper is how convincingly it argues that the proposed machine model is not just yet another model of computation, but a complete model of computation: that everything you could reasonably call computation is encompassed by these machines.
So there's a finality to Turing's definition of computation: these problems are unsolvable not just with this model, but with any reasonable machine model. It's very hard to make the same case for lambda calculus, which is (in my opinion) a large part of what made Turing's paper so groundbreaking.
- The core mechanisms of the machine for breaking Enigma "quickly" came from the Poles
- The machine wasn't even a generalized computer!
I just felt really misled! Perhaps the biggest thing is that Turing probably made substantial contributions to both the academic/theoretical side and the practical side, but it feels like we are missing opportunities to describe the confluence of so many people's ideas in this period of history that led to the current "machines that read instructions and then act upon them in a generalized fashion, very quickly".
This article seems to be that, and it's super interesting
Among so much that’s just plain wrong, I really dislike the insidious idea that Turing’s horrible punishment at the hands of the state was wrong because he was a unique genius and war hero. No, it was wrong because he was a human being and being gay should not be a crime!
That line of thought makes it harder to argue that no, Turing may have been a genius but wasn’t unique, he was just a significant player in a rich field. That doesn’t make him any less interesting.
100% agree, an unfortunate mentality all too present in society, where we tend to build narratives of feeling bad for people because they are exceptional, and not because they are people
See the classic kids story of “oh the store tried to kick the hobo out but actually he was a millionaire!!!” How about treating all people like human beings even if they aren’t like… valuable to you
Just don't expect historical accuracy from a Hollywood movie. Cleopatra didn't look like Elizabeth Taylor either.
Edit: I'm Scottish so Braveheart is the obvious example - entertaining movie but wildly inaccurate and even manages to get who/what the term "braveheart" refers to wrong.
From what I know, this might be because he worked for the Nazis, his prototypes were mostly destroyed in the war and somewhat forgotten, and they seem not to have been influential on the general branch of computing.
The "for" keyword used in the "for" loops comes from Algol 60, which inherited it from its predecessor from 1958, IAL.
The "for" loops were introduced in IAL by Heinz Rutishauser, who had used "fuer" loops (i.e. "for" in German) for the first time in 1951, when he had published a paper describing a computer programming language, 3 years before Fortran introduced the "do" loops in 1954.
In the Rutishauser paper from 1951, he quotes Zuse with his paper about a programming language (published in 1948) as the inspiration for his own improved programming language, which included the "for" loops.
Interesting to speculate what would have happened if Zuse had been paper-clipped to the US and given a huge budget.
“While Zuse never became a member of the Nazi Party, he is not known to have expressed any doubts or qualms about working for the Nazi war effort.”
That might seem pedantic, and it is, but you need to define exactly where the line is drawn and, more so, give a good reason why. In fact it's not even that simple: WE need to decide and agree on where the line is drawn, and all of us have to agree why. Otherwise one man's pedantry is another man's important attribution.
Obviously that's not going to happen anytime soon, so for now, figureheads like Einstein and Turing do the job. And they certainly do deserve credit to some degree. That, or we stop giving credit completely, which (a) seems like a good way to destroy knowledge and (b) isn't going to happen anytime soon.
Edit: As another commenter pointed out, if Einstein or the like were born somewhere else and lived around a different group of people, there's a chance he wouldn't have become a figurehead, or he would have made fewer or more or different discoveries. Therefore, there's a third option for attribution, in which everyone who has ever lived up until those discoveries gets equal credit. If I were 60 or so years older, I'd deserve as much credit for the Turing machine as Turing himself. So would you. Of course, this is pretty much as good as no credit to anyone at all, but fixing it again requires a joint agreement on where the line is drawn.
It reminds me of voting systems, but maybe that’s just because of the election yesterday. If you want to give singular nontransferrable credit, the things you say are important because giving someone credit takes it away from someone else. Division and fighting become the right answers. But if you spread the credit around, saying Leibniz and Newton both get calculus credit (and probably not just those two!), then discussions of which one should get the title of The One And Only Calculus Hero just seems absurd.
The problem with my wheel example is that it demonstrates the absurdity of trying to assign credit to anyone involved, but it doesn't quite demonstrate how difficult it is to even draw a rough line. Does the inventor of the abacus get credit in the creation of computer science? The discovery of electricity? And what about all the (comparatively) minor inventions and discoveries off the back of those?
As far as I can see it, it either needs to be completely subjective, or there needs to be a line. Maybe it doesn't need to be incredibly specific, although at that point some subjectivity creeps back in
Goedel apparently believed that it was impossible to explicitly enumerate all the partial computable functions over the integers, until Turing proved him wrong. He reasoned as follows:
- You can't enumerate the total functions because Cantor's diagonal argument is constructive. Given a claimed enumeration of N^N, you can perform Cantor's diagonal construction to produce a computable element of N^N not in the enumeration. So N^N is computably uncountable.
- Classical set theory says that a superset of an uncountable set is also uncountable.
The problem is that while a superset of an uncountable set is uncountable, a superset of a computably uncountable set may instead be computably countable. The partial functions over the integers show that this is indeed the case.
Cantor's construction is blocked because a partial function can output a degenerate value called non-termination. A computable function must map the non-termination value to itself. Since it isn't mapped to something different, Cantor's construction can't produce something outside an enumeration.
It shows that the unsolvability of the halting problem follows from:
- The computable uncountability of N^N, which can be proved using an identical argument to the one Cantor used to prove the set-theoretic uncountability of N^N.
- The computable countability of the partial maps from N to N.
If the halting problem were solvable, then the second bullet point would contradict the first. So it's essentially a weird twist on set theory that uses constructive logic.
However, that's not the point I wanted to make. I wouldn't like calling it computably countable even if everyone else did, simply because it gives the wrong intuition about subsets.
The term "countable" is used in the slides, where it means computably enumerable. The adjective "computably" is used when there's a need to distinguish set-theoretic notions from similarly behaved computability-theoretic notions. Otherwise the meaning of the term "countable" can be context-dependent.
> it gives the wrong intuition about subsets
In constructive logic, a countable set can contain an uncountable subset. The misleading intuition (in the context of non-classical logics) is based on classical logic where countability is a statement about the size of a set. Whether you think constructive logic is a good way of explaining computability theory is another question, but it's certainly a viable way of doing it.
It's like how the term "line" can mean something different in hyperbolic geometry from what it means in Euclidean geometry. You could argue that it might mislead people about the nature of parallel lines, but that's why hyperbolic geometry is not Euclidean geometry. Another example is using "multiplication" to refer to an operation on matrices, which might make people think that AB=BA when that usually isn't true. Mathematics is all about re-using terms and pointing out that there are differences.
Admittedly, the slides do use the term "enumerable" as well, so that's another option. When there's a possibility for confusion with set theory, you can say "computably enumerable" as you suggested.
 Made lots of edits. Hopefully, that's it.
> The computable countability of the partial maps from N to N.
Can anybody give an example?
does this have any to do with rationals?
or is it more related to limits and calculus?
I'm going to use Haskell, which I'm going to assume you know. I'm using it because it seems closer to the math. The type for naturals is:
data Nat = Zero | Succ Nat
and then all Haskell functions of type `Nat -> Nat` represent partial functions. They're not total functions because they might enter an infinite loop for some inputs. You can clearly enumerate all Haskell functions which are syntactically correct and which have type Nat->Nat, so the partial functions of that type are computably enumerable.
But consider the total functions of type Nat->Nat (i.e. those that never enter an infinite loop). Assume you can have a function `en :: Nat -> (Nat -> Nat)` which can output every total function of type Nat->Nat.
Then the function `counterexample n = Succ (en n n)` is a function that cannot be outputted by `en`, and therefore `en` fails to enumerate all total functions.
I've got other things to do, unfortunately, so I can't say more than this.
 Fixed the counterexample.
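The diagonal argument above can be run against a concrete (toy) candidate enumeration. The `en` below is my own made-up example that only ever produces three total functions, cycling; a real enumeration would be derived from program syntax, but the diagonal works against any candidate: `counterexample` differs from `en n` at input `n` by construction, so no `en` can cover all total functions.

```haskell
-- Same Nat as in the parent comment, with deriving added for testing.
data Nat = Zero | Succ Nat deriving (Eq, Show)

toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

-- A toy "enumeration" of total functions Nat -> Nat (illustrative only).
en :: Nat -> (Nat -> Nat)
en Zero                   = id          -- en 0: identity
en (Succ Zero)            = Succ        -- en 1: successor
en (Succ (Succ Zero))     = const Zero  -- en 2: constant zero
en (Succ (Succ (Succ n))) = en n        -- cycle back around

-- Cantor's diagonal: differs from en n at input n, by construction.
counterexample :: Nat -> Nat
counterexample n = Succ (en n n)
```

For instance `counterexample Zero` is `Succ (id Zero)`, i.e. 1, while `en Zero Zero` is 0; the same mismatch at the diagonal happens for every index, which is exactly why `counterexample` (itself a perfectly good total function) cannot appear anywhere in the enumeration.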
Is this a finite process?
It's almost like Schmidhuber hates Turing and wants the world to worship Godel.
It's personal for him.
He also wrote a tweet, and maybe another post, where he ranked all the countries of Europe and said that Europeans collectively got the most medals at the Olympics and hence are superior.
He suffers from this weird and serious case of European Supremacy.
I know about his actual works (not what he claims to have done).
And based on that I respect him.
He also has got a cult of people following him and who will gang up on any dissenters, i.e. people not thinking of him as a walking god.
Schmidhuber has actual contributions, I know.
Now he has moved to a country with an extremely bad human rights record, with policies and enforcement that put the Taliban to shame.
Now he is promoting that institution left and right at every opportunity he gets.
He is very hard to take seriously.
History seems to like a single origin story.
"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex.
I guess whoever wrote that line didn't even get the irony.
What Turing did in 1937 might not have been an advance over what Godel and Church did. But if you want to make the case for building a general purpose computer out of mechanical relays or vacuum tubes in 1946, you need a semantic advance. Turing machines did that.
Nobody would read Godel and say "let's build ENIAC." Tommy Flowers did read Turing and say that. THAT is the difference.
It's like Einstein versus Lorentz. Same mathematics, different semantic interpretation.