The Lambda Calculus

Or: Wikipedia and Nightmares

Review

Functions

Recursion

First Class Functions

Closure

Currying

Functions

Core concept

 

"Take input and return output"

 

abstract some processing

def add(x: int, y: int) -> int:
    return x + y

int add(int x, int y) {
    return x + y;
}

def my_print(string: str):
    print(string)

Recursion

We've used it

 

when a function calls itself

def factorial(x):
    if x == 1:
        return 1
    else:
        return x * factorial(x-1) 

"First Class"

Might be new

 

functions are data and can be manipulated as such

 

essentially:

a function is a value that you can manipulate in the same way you can manipulate an int or a String

def map(f: Function[T] -> T, li: List[T]) -> List[T]:
    result = []
    for item in li:
        result.append(f(item))
    return result

def double(x: int) -> int:
    return x * 2

map(double, [1,2,3])  # [2,4,6]
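
As a runnable counterpart with real Python type hints (renamed map_list here so it doesn't shadow the builtin map; that name is mine, not the slide's):

from typing import Callable, List, TypeVar

T = TypeVar("T")

def map_list(f: Callable[[T], T], li: List[T]) -> List[T]:
    # apply f to every item and collect the results
    result: List[T] = []
    for item in li:
        result.append(f(item))
    return result

map_list(lambda x: x * 2, [1, 2, 3])  # [2, 4, 6]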

Closure

Defining a Function within a function

 

variables in outer scopes can be accessed from inner ones

 

when you call the outer function, it returns a new inner function that has captured (closed over) the outer function's variables

def map(f, li):
    result = []
    for item in li:
        result.append(f(item))
    return result

def addx(x):
    def adder(val):
        return x + val
    return adder

add3 = addx(3)
map(add3, [1,2,3])  # [4,5,6]

Currying

Functions need only one arg

 

without loss of generality:

this can be applied to functions of any number of arguments

 

what is the type signature of add_curried?

def add(x, y):
    return x + y

def add_curried(x):
    def add_inner(y):
        return x + y
    return add_inner

add(1, 2) == add_curried(1)(2)
add(a, b) == add_curried(a)(b)  # for all (a, b)
                                # of type int
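
Related, though not identical: Python's functools.partial gives partial application, which covers the common use of currying. A quick sketch, reusing the add above:

from functools import partial

def add(x, y):  # same add as above
    return x + y

add_1 = partial(add, 1)  # fix the first argument
add_1(2) == add(1, 2)    # True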

Type Signature

add_curried takes in an integer

 

it returns a function

 

that function takes in an int and returns an int

def add_curried(x: int) -> Function[int] -> int:
    def add_inner(y: int) -> int:
        return x + y
    return add_inner
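
In standard Python annotations, the pseudotype above would be written with typing.Callable; a minimal sketch:

from typing import Callable

def add_curried(x: int) -> Callable[[int], int]:
    def add_inner(y: int) -> int:
        return x + y
    return add_inner

add_curried(2)(3)  # 5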

Simplification

We strip things away

 

Functions have no side effects

 

Functions always return something

 

Functions are immediately used

 

 

No worries about the outside world

(reductions!)

 

No need for a return statement

 

No need for a name

Lambdas

All functions can be either immediately invoked

add = lambda x, y: x + y  # eww!
add(2,3) == 5

(lambda x, y: x + y)(2,3) == 5

# Putting it together with previous slides

(lambda x: lambda y : x + y)(2)(3) == 5

(lambda x:
    lambda y:  # returned by the function of x
        x + y  # returned by the function of y
)  # <- a function of x that returns a function of y
(2)  # <- applying it to 2 leaves a function of y
(3)  # <- applying that to 3 gives an integer

Lambdas

Or immediately passed into another function

(lambda f, x, y: f(x) + f(y))(lambda n: n + 1, 2, 3) == 7
#  or
(lambda f:
    lambda x:
        lambda y:
            f(x) + f(y)
)(lambda n: n + 1)(2)(3)

Lambda Calculus

Because mathematicians hate you

 

replace "lambda" with "λ"

replace "x:" with "x."

Now for some examples

Lambda Calculus

x  # is valid
"a value x"
a "lambda term"

t(x)
f(t)(x)  # vaguely equivalent to f(t, x)
"the function t called with the value x"
tx  # is also valid
ftx  # is equivalent to ((ft)x)
an "application"

lambda x: t
"the function that takes in x and returns t"
λx.t  # is valid
an "abstraction"

That's it

Those are the rules

Some Basic Functions

lambda x: x
λx.x
the identity function

lambda f: lambda x: f(x)
λf.λx.fx
the function "apply"

Church Numerals

lambda f: lambda x: x
lambda f: lambda x: f(x)
lambda f: lambda x: f(f(x))
λf.λx.x  # Zero
λf.λx.fx  # One, this is the same as "apply"
λf.λx.f(fx)  # Two

huh?

Type signature of this function?

(Function[T] -> T) -> Function[T] -> T

we can simplify this to

(T -> T) -> T -> T
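
A quick sketch in Python (to_int is a helper name of my own, not from the slides): a numeral applied to a successor function and a starting value gives back an ordinary int.

zero = lambda f: lambda x: x
one = lambda f: lambda x: f(x)
two = lambda f: lambda x: f(f(x))

to_int = lambda n: n(lambda i: i + 1)(0)

to_int(zero), to_int(one), to_int(two)  # (0, 1, 2)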

Resolution

λf.λx.x 1+ 0  # Zero
λf=(1+).λx.x 0
λx.x 0
λx=0.x
λx=0.0
0

λf.λx.fx 1+ 0  # One
λf=(1+).λx.fx 0
λf=(1+).λx.1+x 0
λx.1+x 0
λx=0.1+x
λx=0.1+0
1+0

λf.λx.f(fx) 1+ 0  # Two
λf=(1+).λx.f(fx) 0
λf=(1+).λx.1+1+x 0
λx.1+1+x 0
λx=0.1+1+x
λx=0.1+1+0
1+1+0

f = 1+
x = 0

In Code

(lambda f:
    lambda x:
        f(f(x))  # two
)(lambda n: n + 1)(0) == 2

# frozensets below, because plain Python sets aren't hashable
(lambda f:
    lambda x:
        f(f(x))  # two
)(lambda n: n | frozenset({n}))(frozenset()) \
    == frozenset({frozenset(), frozenset({frozenset()})})  # two in ZF axioms

More useful functions

λf.λz.fz  # one

λn.λf.λz.f (n f z)  # "successor"
(T -> T) -> T -> T  # one

? -> (T -> T) -> T -> T  # successor

(n f z) means that n is a function on two args,
    f and z
f is of type (T -> T) and z is of type T

so n is of type (T -> T) -> T -> T

we know some things with that type signature:
Church Numerals

C -> (T -> T) -> T -> T  # successor, where C is a Church numeral's type

More useful functions

λf.λz.fz  # one

λn.λf.λz.f (n f z)  # "successor"

λm.λn.λf.λz. m f (n f z)  # "plus"
(T -> T) -> T -> T  # one

C -> (T -> T) -> T -> T  # successor

? -> C -> (T -> T) -> T -> T  # plus

(n f z) resolves to type T
f is type (T -> T)
so m is of type (T -> T) -> T -> T

C -> C -> (T -> T) -> T -> T  # plus
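
A sketch of both functions in Python (successor, plus, and to_int are my names):

successor = lambda n: lambda f: lambda z: f(n(f)(z))
plus = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

one = lambda f: lambda z: f(z)
to_int = lambda n: n(lambda i: i + 1)(0)

to_int(successor(one))             # 2
to_int(plus(one)(successor(one)))  # 3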

Another Example

λf.λz.fz  # one

λn.λf.λz.f (n f z)  # "successor"

λm.λn.λf.λz. m f (n f z)  # "plus"
m = ONE = λg.λx.gx
n = ONE = λh.λy.hy
f = 1+                     arg 1   2    3  4
z = 0                        \/   \/   \/ \/
(λm.λn.λf.λz. m f (n f z)) (ONE) (ONE) 1+ 0
λm=(ONE).λn.λf.λz. m f (n f z) (ONE) 1+ 0
λm=(ONE).λn.λf.λz.λg.λx.g x f (n f z) (ONE) 1+ 0
λn.λf.λz.λg.λx.g x f (n f z) (ONE) 1+ 0
λn=(ONE).λf.λz.λg.λx.g x f (n f z) 1+ 0
λn=(ONE).λf.λz.λg.λx.g x f (ONE f z) 1+ 0
λf.λz.λg.λx.g x f (ONE f z) 1+ 0
λf=(1+).λz.λg.λx.g x f (ONE f z) 0
λf=(1+).λz.λg.λx.g x 1+ (ONE 1+ z) 0
λz.λg.λx.g x 1+ (ONE 1+ z) 0
λz=0.λg.λx.g x 1+ (ONE 1+ z)
λz=0.λg.λx.g x 1+ (ONE 1+ 0)
λg.λx.g x 1+ (ONE 1+ 0)
λg=(1+).λx.g x (ONE 1+ 0)
λg=(1+).λx.1+ x (ONE 1+ 0)
λx.1+ x (ONE 1+ 0)
λx.1+ x (λh.λy.hy 1+ 0)
λx=(λh.λy.hy 1+ 0).1+ x
1+ λh.λy.hy 1+ 0
1+ λh=(1+).λy.hy 0
1+ λh=(1+).λy.1+y 0
1+ λy.1+y 0
1+ λy=0.1+y
1+ λy=0.1+0
1+1+0  # == 2!
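
The same reduction, checked in Python (a sketch; the capitalized names are mine):

ONE = lambda f: lambda z: f(z)
PLUS = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

PLUS(ONE)(ONE)(lambda n: n + 1)(0)  # 2, i.e. 1+1+0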

Turing Machines

Turing Complete

To prove Turing completeness, we need to show that we can simulate a Turing machine

 

So we'll prove that we can simulate an arbitrary finite state machine as well as "memory"

A state is just a value; states could simply be integers

 

A transition is simply a function from a previous state to some future state

Turing Complete

That leaves: A tape/memory

Turing Complete

So I'll start with booleans

 

These *are* conditionals

λx.λy.x  # True
λx.λy.y  # False
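
A sketch in Python (TRUE, FALSE, and IF are my names): a Church boolean picks one of its two arguments, which is exactly an if/else.

TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

IF = lambda b: lambda then_branch: lambda else_branch: b(then_branch)(else_branch)

IF(TRUE)("yes")("no")   # "yes"
IF(FALSE)("yes")("no")  # "no"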

And Tuples

 A Tuple holds 2 things

λx.λy.λf.f x y

x is the first thing
y is the second thing
f is the "getter function"

so
(λx.λy.λf.f x y) "A" "B" is a pair
λf.f "A" "B"

Tuple Pickers

"The first thing in

the tuple of

"A" and "B"

(λp.p λx.λy.x)  # First
(λp.p λx.λy.y)  # Second

λx.λy.x  # is True
λx.λy.y  # is False

(λp.p λx.λy.x) (λx.λy.λf.f x y) "A" "B"
(λp.p λx.λy.x) (λf.f "A" "B")
(λf.f "A" "B") λx.λy.x
λx.λy.x "A" "B"
"A"

Linked List

A tuple whose first item is

a value and second item is another tuple

("A", | )
      -> ("B", | )
               -> ("C" | )
                       -> ("D" \ )

Turing Complete!

That's it!

we have the pieces we need

 - A Tape (memory)

 - Transition functions with conditionals

 - States

Putting it all Together

How do we get to the next item in a linked list?

λx.λy.λf.f x y  # a linked list node (tuple)
(λp.p) λx.λy.y  # "SECOND"

(λn.λf.λx.f (n f x)) n SECOND HEAD

SUCCESSOR(N)(SECOND)(HEAD)
("A", | )
      -> ("B", | )
               -> ("C" | )
                       -> ("D" \ )

Useful?

Can be fast

Side-effect-free functions can be evaluated in parallel (fast)

Efficient way of thinking

λx.x == λy.y  # α-conversion
x=("Hello") == "Hello"  # substitution
λx=("Hello").x == "Hello"  # β-reduction
λx.(f x) == f  # η-conversion

Combinators

Y?

λh.((λx.h (x x)) (λx.h (x x))) g
λh=g.((λx.h (x x)) (λx.h (x x)))
λx.g (x x) (λx.g (x x)) -----
λx=(λx.g (x x)).g (x x)     |
g (λx.g (x x) λx.g (x x))   |
   λx.g (x x) λx.g (x x) <--|

Combinators

Y?

This is the Y combinator

 

recursion!

λh.((λx.h (x x)) (λx.h (x x))) g
λh=g.((λx.h (x x)) (λx.h (x x)))
λx.g (x x) (λx.g (x x)) -----
λx=(λx.g (x x)).g (x x)     |
g (λx.g (x x) λx.g (x x))   |
   λx.g (x x) λx.g (x x) <--|

h(g) == g(h(g))
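
In Python (a sketch, not from the slides): Python evaluates arguments eagerly, so the Y combinator as written loops forever; the eta-expanded variant, often called the Z combinator, works and gives recursion without any function ever naming itself.

Z = lambda h: (lambda x: h(lambda v: x(x)(v)))(lambda x: h(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
fact(5)  # 120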