A computer is a physical implementation of an idea called the Turing machine, named after Alan Turing, the father of computing. The machine can be in one of a finite set of states {S₀, S₁, S₂, … Sₙ}, where each state has a set of rules telling the machine what to write into memory at the position of its read/write head and where to move next.

For most computers the output would be a screen and the input would be a pointer or key press, but for a Turing machine, the input and output are both a symbol from an alphabet such as {A, B, C} or {0, 1}, written at the location of the head on a (theoretically infinite) strip of memory. The input tells the machine what state to go into, and the state says what symbol to write to memory and whether to move the head left or right, or stay put. Here’s an example of a 2-state Turing machine (excluding the halting state H, in which it does nothing) using a binary alphabet:

State | On 0 | On 1
---|---|---
S₀ | Write 1, go right, change to S₁ | Write 1, go left, change to S₁
S₁ | Write 1, go left, change to S₀ | Write 1, go right, change to H

What does it do? Well, let’s take a look at the memory over time T, starting in state S₀ on a blank tape:

Time | State | Tape (head position in brackets)
---|---|---
T₀ | S₀ | 0 0 0 [0] 0 0
T₁ | S₁ | 0 0 0 1 [0] 0
T₂ | S₀ | 0 0 0 [1] 1 0
T₃ | S₁ | 0 0 [0] 1 1 0
T₄ | S₀ | 0 [0] 1 1 1 0
T₅ | S₁ | 0 1 [1] 1 1 0

Notice how the machine alternates between states S₀ and S₁. However, after reading its first 1 while in state S₁ (at time T₅), the head moves right and halts. Among machines with a two-symbol alphabet and two states, this is essentially the Turing machine that prints the most 1s onto an initially blank memory, called a “Busy Beaver”. If you gave it another state to go into, it would have more decision-making options available and could be programmed to print out six 1s. Or if you gave its alphabet another symbol to work with, it could print nine 1s. But if you gave this machine a three-symbol alphabet *and* three states, it could print over 347 million 1s before halting.
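The trace above can be sketched as a short simulation. The rule table below transcribes the transition table from this article; the sparse-dictionary tape encoding and state names are my own choices for the sketch.

```python
# A minimal simulation of the 2-state busy beaver described above.
# The tape is sparse: a dict mapping positions to 1s, with every
# unvisited cell implicitly 0. Rules map (state, symbol read) to
# (symbol to write, head move, next state).
rules = {
    ("S0", 0): (1, +1, "S1"),
    ("S0", 1): (1, -1, "S1"),
    ("S1", 0): (1, -1, "S0"),
    ("S1", 1): (1, +1, "H"),
}

tape, head, state = {}, 0, "S0"
while state != "H":
    symbol = tape.get(head, 0)
    write, move, state = rules[(state, symbol)]
    tape[head] = write
    head += move

print(sum(tape.values()))  # number of 1s left on the tape: 4
```

Running it confirms the trace: the machine halts after six steps with four 1s on the tape.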

Let’s switch tracks for a bit and think about how people think. We’re not sure how subjective thought works, but communication can tell us a lot about the state of things objectively.

Consider the interface between human and computer communication. Human languages tend to be high-level and difficult to dissect, whereas computer languages tend to be low-level and exact in their instructions. Part of programming is finding the middle ground where you can specify a task at a level both the programmer and the computer understand. Whenever you write in a high-level language and run the code on a computer, the source code is compiled into an object program that can be executed on that machine.

Whenever we communicate in a high-level language, we don’t go over every detail one by one the way computers do when interpreting instructions. We tend to communicate through our voices, and these sounds can be organized as linguistic symbols (phonemes), which can be translated into a grammatical language, which can then be interpreted as high-level programming that the computer can begin to understand. “Give me that box”, for example, can be translated from speech into a more detailed, reorganized syntax of labels such as: “[subject]-me [command]-give [direct object]-box [qualifier]-that [indirect object]-them”. This in turn can be compiled into machine code for running the task of recognizing the box and the speaker, then actually moving the box carefully into their possession.
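As a toy illustration of that relabeling step (the role assignments here are hand-written for this one sentence, not derived from any real grammar; an actual parser would infer them from rules):

```python
# Reorganize the utterance "Give me that box" into the kind of
# labeled slots described above. The role table is hypothetical,
# hard-coded for exactly this sentence.
utterance = "give me that box"

roles = {
    "give": "command",
    "me": "subject",
    "that": "qualifier",
    "box": "direct object",
}

parsed = [f"[{roles[word]}]-{word}" for word in utterance.split()]
print(" ".join(parsed))
# [command]-give [subject]-me [qualifier]-that [direct object]-box
```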

Breaking down this syntax into executable commands runs through another model of computation, one equivalent to the Turing machine: the lambda calculus.

The lambda calculus ignores the state of the process used to get an answer from given variables. The syntax *λx. 2x* is a function that takes in *x* and spits out *2x* for any value of *x*. Similarly, *(λx. λy. x)(5, 12)* is a function that takes in *x* and *y* and spits out *x* for the values *x = 5* and *y = 12*. There is no internal mechanism like the one you can see in a Turing machine, but the two representations of basic calculations are isomorphic. That is, they can be translated into each other.
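These expressions map almost directly onto Python’s own lambdas, written here in curried form (one argument at a time), which is how the lambda calculus handles multiple arguments:

```python
# λx. 2x — takes x, returns 2x
double = lambda x: 2 * x

# λx. λy. x — takes x and y, returns x
const = lambda x: lambda y: x

print(double(7))     # 14
print(const(5)(12))  # 5
```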

*Y = λf.(λx. f (x x)) (λx. f (x x))* is called the Y combinator, and it allows programming languages to encode recursion. See Raúl Rojas’ tutorial for a step-by-step guide to the lambda calculus.
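A sketch of recursion-from-combinators in Python: the Y combinator as written above diverges under Python’s strict evaluation, so this uses its eta-expanded variant (often called the Z combinator), which delays the self-application behind an extra lambda. The factorial step function is my own example, not from the tutorial.

```python
# Z = λf.(λx. f (λv. x x v)) (λx. f (λv. x x v))
# The inner λv wrapper postpones x(x) until it's actually called,
# which is what keeps a strict language from looping forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial "step": it receives the recursive call as an argument
# rather than naming itself, so no explicit recursion is needed.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```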

There are many ways to break down a problem into fundamental components. Hopefully I’ve shown how all complex tasks in a computer are essentially broken down into bit manipulation and simple instructions that can be represented through a Turing machine or the lambda calculus.