Usually the combination of names for a theory means that researchers recognize both versions as largely equivalent (e.g. the Newton-Leibniz axiom).
If the combined theories are to serve as the definition of computability, the contention should hold for each theory in isolation. If a Turing machine cannot compute every computable problem, then why bother defining a Turing machine? What is the purpose of the definition, if not to show that any computable problem can be solved using the device? The same goes for the lambda calculus.
> can't be computed by a Turing machine or the lambda calculus
I agree they're equivalent; that's been proven. Maybe it would be clearer if it said "both a Turing machine and the lambda calculus"? That's how I meant it.
Anyway, I was just trying to add to the discussion by pointing out that what makes the Church-Turing Thesis a thesis (or conjecture) is that it hasn't been proven that "any computable problem can be solved using the device". That's generally accepted, but not proven.
The equivalence of Turing machines and the lambda calculus is actually a theorem, though, because it has been proven.
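One direction of that equivalence is easy to see concretely: the lambda calculus can be embedded directly in any Turing-complete language. A minimal sketch using Python lambdas (the names `zero`, `succ`, `add`, and `to_int` are just illustrative labels for the standard Church-numeral encodings):

```python
# Church numerals: numbers encoded as pure functions.
# A numeral n takes a function f and returns "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral back to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
# to_int(add(two)(three)) == 5
```

The other direction, encoding a Turing machine as a lambda term, is the harder half of the proof, but both reductions are standard results.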
Yet it's a conjecture because it hasn't been proven - in fact, it's unprovable. Church-Turing is in effect a statement of belief about what computation is; Strong Church-Turing is a statement of belief about our universe.
But it's normal to accept Church-Turing (though not Strong Church-Turing) because, once you follow what Gödel, Church, and Turing uncovered, the alternatives seem intuitively crazy - Church-Turing has to be right. It's like accepting Modus Ponens and Modus Tollens. I can't present a useful logical argument for why you should accept those, because my logical argument will end up relying on them; if you won't accept them, then we've nothing further to talk about.
I agree with that. Would you agree that the Church-Turing Thesis is different from the equivalence between the lambda calculus and Turing machines? I was trying to argue that even if Church-Turing were somehow (magically) invalidated, the equivalence of the lambda calculus and Turing machines would still hold.
Thanks for bringing up Strong Church-Turing. It was interesting to read about it here (https://plato.stanford.edu/entries/church-turing/#StroWeakFo...). I think I had a broader interpretation of Church-Turing than I should have. I was thinking that if we somehow discovered a working oracle machine on Mars, it would invalidate Church-Turing. But that article says:
> the thesis concerns what a human being can achieve when working by rote, with paper and pencil (ignoring contingencies such as boredom, death, or insufficiency of paper)
Not a belief: it is a definition. If somebody comes up with something useful that doesn't fit, this becomes "classical" computation, and the new kind gets an adjective -- say, "quantum", that's catchy -- and then they both get to be computation.
Defining what you can't prove isn't a dodge; it's the foundation of all rigorous thinking. You can only prove anything within the bounds of a formalism, and anything is provably impossible only within that formalism. Thus in Einstein-Lorentz space, going faster than light is impossible. Out here, it's true only if E-L is right about that. (And we know it's not right about everything, but still maybe -- probably -- about that.)
Quantum computers aren't doing anything that you couldn't do with a Turing machine. You can get a perfectly nice emulator that will run all quantum computer algorithms on your Turing machine today. They'll just be annoyingly slow.
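As a toy illustration of why such emulation is possible, here is a single-qubit statevector simulation (the gate matrix and helper function below are invented for this sketch, not any particular emulator's API). Classical simulation just multiplies matrices; the cost is that the state vector grows exponentially with qubit count, which is why real emulators are "annoyingly slow":

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1.0, 0.0]            # the qubit starts in |0>
state = apply_gate(H, state)  # now in superposition
probs = [abs(a) ** 2 for a in state]  # measurement probabilities
```

An n-qubit version of this needs a vector of 2^n amplitudes, so the emulation is exponential in space and time but perfectly computable.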
There are some machines that don't exist which are categorically more capable than a Turing machine, such as Clock Doubling Machines (which get to do an infinite amount of Turing computation in finite time), but those don't exist at all, so while philosophers can insist upon debating them, it won't do you any practical good.
What you are describing sounds like non-deterministic behavior of a thread-safe primitive, e.g. non-deterministic ordering in a synchronized queue. On the other hand, a true race condition in the queue could mean messages overwriting each other.
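A minimal Python sketch of that distinction (the producer setup is invented for illustration): with a `queue.Queue`, the interleaving of items from concurrent producers is non-deterministic, but every message arrives intact; a true race on an unsynchronized structure could lose or overwrite messages instead.

```python
import queue
import threading

q = queue.Queue()  # internally locked: safe for concurrent put/get

def producer(tag):
    # Each producer enqueues 100 tagged messages.
    for i in range(100):
        q.put((tag, i))

threads = [threading.Thread(target=producer, args=(t,)) for t in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

items = [q.get() for _ in range(200)]
# The relative order of "a" and "b" items varies run to run,
# but all 200 messages are present exactly once.
```

With a plain list and no lock, the same pattern could (under some interpreters and data structures) drop updates; the non-determinism above is benign, the race is not.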
Having recently listened to John McWhorter's lectures on the history of human language, I was surprised to find out that there is no reliable evidence for the Sapir-Whorf hypothesis (that language affects thought). The Wikipedia article on linguistic relativism states:
> The strongest form of the theory is linguistic determinism, which holds that language entirely determines the range of cognitive processes. The hypothesis of linguistic determinism is now generally agreed to be false.
The Pragmatic Programmer is pretty zealous about DRY. For instance, the authors describe how they inserted sample code snippets into the book using specialized scripts developed for that purpose; a simple copy-paste wasn't good enough (see pp. 100-101). Granted, they wanted to make sure the code snippets were tested and updated as needed, but repeating code anywhere seems to be bad practice according to this definition.
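A hypothetical sketch of that kind of extraction script (the marker format and function name here are invented for illustration; the book doesn't publish its actual tooling). The idea is that the document pulls snippets from a tested source file, so there is never a hand-copied duplicate to drift out of date:

```python
def extract_snippet(source_text, name):
    """Return the region between START:<name> and END:<name> marker comments."""
    lines = source_text.splitlines()
    start = lines.index(f"# START:{name}") + 1
    end = lines.index(f"# END:{name}")
    return "\n".join(lines[start:end])

# A source file that is compiled/tested as part of the build,
# with a marked region destined for the book text:
example = """\
# START:hello
def hello():
    return "hello"
# END:hello
"""
```

The build would then splice `extract_snippet(example, "hello")` into the manuscript, keeping the prose and the tested code in sync automatically.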