We have seen that dependent type theory is flexible enough to encode a
broad array of data types and objects. A simple device makes it
possible to encode any assertion you might want to make, as well:
there is a type Prop
, whose elements are taken to be
propositions. The usual logical connectives are simply functions that
take propositions and return a proposition.
variables p q r : Prop
#check and -- Prop → Prop → Prop
#check or -- Prop → Prop → Prop
#check not -- Prop → Prop
#check p ∧ (p → r)
#check p ∧ ¬ (q ∨ ¬ r)
There is no corresponding constant for implication: if p
and q
are
propositions, the arrow in p → q
is in fact an instance of the arrow
used to construct function spaces. We will return to this in the next
section.
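For instance (a small illustrative check, reusing the variables p and q declared above), Lean assigns the type Prop to p → q just as it does to the other connectives, and a lambda abstraction inhabiting such a type is a function:
variables p q : Prop
#check p → q          -- Prop
#check λ h : p, h     -- p → p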
A predicate on a type α
is a function from α
to Prop
, and a
binary relation on α
is a function that takes two arguments in α
and returns an element of Prop
.
variables p q : ℕ → Prop
variable r : ℕ → ℕ → Prop
variables m n : ℕ
#check p m ∧ q n
#check p m ∧ ¬ r m n → q n
#check ∀ x, p x → q x
#check ∀ x, ∃ y, r x y
The universal quantifier is really an instance of the dependent function space construction; again, more on this in the next section. Lean notation supports bounded quantifiers:
variables p q : ℕ → Prop
variable n : ℕ
-- BEGIN
#check ∀ x ≤ n, p x → q x
variables (α : Type) (s t : list α) (a b : α)
#check ∀ x ∈ s, x ∈ t
-- END
With these resources, we can start writing substantial mathematical assertions:
namespace nat
-- BEGIN
def dvd (m n : ℕ) : Prop := ∃ k, n = m * k
instance : has_dvd ℕ := ⟨dvd⟩
def even (n : ℕ) : Prop := 2 ∣ n
def prime (p : ℕ) : Prop := p ≥ 2 ∧ ∀ n, n ∣ p → n = 1 ∨ n = p
def Fermat : Prop := ∀ n > 2, ∀ (a b c : ℕ), a ≠ 0 → b ≠ 0 → a^n + b^n ≠ c^n
def Goldbach : Prop := ∀ n > 2, even n → ∃ p q, prime p ∧ prime q ∧ n = p + q
def twin_primes : Prop := ∀ n, ∃ p > n, prime p ∧ prime (p + 2)
-- END
end nat
Of course, what we really want are means to prove such assertions, which is what we turn to next.
Given the expressive power of dependent type theory, it should by now
not be too surprising that the language is rich enough to encode
proofs as well. In fact, the CIC employs a device known as the
Curry-Howard isomorphism
, or propositions as types
, that makes
writing and checking proofs especially convenient. Remember that if
φ
is any expression of type Prop
, we are thinking of φ
as a
proposition, or an assertion. In that case, we think of a term t :
φ
as being a proof of φ
. The rules of CIC support this
interpretation: given t : φ → ψ
and s : φ
, the term t s : ψ
describes the result of using modus ponens. To construct a proof of φ
→ ψ
, it suffices to construct a term t : ψ
, assuming
hypothetically x : φ
. The resulting proof is written λ x, t
, which
is to say, the proof is an instance of lambda abstraction.
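As a small sketch of these two rules (illustrative, not taken from the library), application plays the role of modus ponens, and lambda abstraction proves an implication:
example (p q : Prop) (t : p → q) (s : p) : q := t s
example (p q : Prop) (s : q) : p → q := λ x, s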
Similar considerations hold for the universal quantifier. The net
effect is that we can use the same notation we use for function
application to apply theorems to parameters and hypotheses. For
example, the theorem and.left
in the standard library has the
following type:
∀ {a b : Prop}, a ∧ b → a
Notice that the arguments a
and b
are implicit. This means that if
h
is any expression of the form a ∧ b
, then and.left h : a
is a
proof of a
. Similarly, add_comm
, which expresses the commutativity
of addition for any type that is an instance of an
additive commutative semigroup, has the following type:
∀ {α : Type u} [s : add_comm_semigroup α] (a b : α), a + b = b + a
The second argument, that is, the relevant algebraic structure, is
inferred by class inference. Given, say, m n : ℕ
, the expression
add_comm m n
then represents the fact that m + n = n + m
.
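For example (a short illustrative check, with hypothetical variables), both theorems are applied just like functions:
variables (a b : Prop) (h : a ∧ b)
variables m n : ℕ
#check and.left h      -- a
#check add_comm m n    -- m + n = n + m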
Now the task of proving a proposition boils down to the task of constructing an expression of the right type, and Lean is designed to help us do this. We can provide such an expression explicitly:
example (a b : Prop) : a ∧ b → b ∧ a :=
λ h, and.intro (and.right h) (and.left h)
We can use projections and anonymous constructors to express the proof even more concisely, though somewhat cryptically:
example (a b : Prop) : a ∧ b → b ∧ a :=
λ h, ⟨h.right, h.left⟩
In the opposite direction, Lean provides syntactic sugar that allows us to annotate assumptions and goals, and build up a proof incrementally:
example (a b : Prop) : a ∧ b → b ∧ a :=
assume h : a ∧ b,
have ha : a, from and.left h,
have hb : b, from and.right h,
show b ∧ a, from and.intro hb ha
You can write proofs incrementally using sorry
to temporarily fill
in any intermediate step.
example (a b : Prop) : a ∧ b → b ∧ a :=
assume h : a ∧ b,
have ha : a, from sorry,
have hb : b, from sorry,
show b ∧ a, from and.intro hb ha
Lean notices that you are cheating, but will otherwise confirm that
the proof is correct modulo the instances of sorry
. Replacing one of
them by an underscore tells Lean that it should infer the value of
that expression. Lean’s elaborator will not prove propositions for us
without explicit instructions to do so, but the error message will show
you exactly what needs to be proved, and what hypotheses are
available.
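For example (a variant of the proof above), replacing the first sorry with an underscore produces an error message that shows exactly what needs to be proved and which hypotheses are available:
example (a b : Prop) : a ∧ b → b ∧ a :=
assume h : a ∧ b,
have ha : a, from _,        -- the error message reports the goal and the available hypotheses
have hb : b, from sorry,
show b ∧ a, from and.intro hb ha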
Lean supports the use of tactics
, which are instructions that tell
the system how to construct a term or proof.
example (a b : Prop) : a ∧ b → b ∧ a :=
begin
intro h, cases h, split,
repeat { assumption }
end
These commands can be used to invoke automation, like the simplifier:
example (a b : Prop) : a ∧ b → b ∧ a :=
begin intro, simp_using_hs end
Anywhere Lean’s parser expects an expression, you can enter tactic
mode with a begin ... end
block, or with the by
keyword.
example (a b : Prop) : a ∧ b → b ∧ a :=
assume h : a ∧ b,
have ha : a, from h.left,
have hb : b, from h.right,
show b ∧ a,
begin split, repeat { assumption } end
Conversely, in a begin ... end
block, Lean provides various ways of
specifying an explicit term:
example (a b : Prop) : a ∧ b → b ∧ a :=
begin
intro h, cases h with ha hb,
exact and.intro hb ha
end
We can even pass back and forth between the two modes freely:
example (a b : Prop) : a ∧ b → b ∧ a :=
begin
intro h, cases h with ha hb,
exact and.intro (by assumption) (by assumption)
end
This lets us write proofs in a manner that lays out the structure explicitly and provides briefer hints and instructions where appropriate, just as in an ordinary mathematical proof.
When writing proof terms explicitly, Lean provides the word suppose
to introduce an assumption without a label, and the label can be
omitted in the
command as well. In this case, we can refer to
the anonymous fact that was most recently added to the context with
the keyword this
. We can also refer to such anonymous facts by surrounding the
statement of the proposition with French quotes, obtained by typing
\f<
and \f>
.
example (a b : Prop) : a ∧ b → b ∧ a :=
suppose a ∧ b,
have a, from this.left,
have b, from ‹a ∧ b›.right,
show b ∧ a, from and.intro ‹b› ‹a›
These anonymous elements of the context are also visible to tactics and automation:
example (a b : Prop) : a ∧ b → b ∧ a :=
suppose a ∧ b,
have a, from this.left,
have b, from ‹a ∧ b›.right,
show b ∧ a, begin split, repeat { assumption } end
example (a b : Prop) : a ∧ b → b ∧ a :=
suppose a ∧ b,
have a, from this.left,
have b, from ‹a ∧ b›.right,
show b ∧ a, by simp_using_hs
Because inductive types are so fundamental, Lean’s proof language provides a number of ways of carrying out proofs by induction. Suppose, for example, we define exponentiation generically in any monoid.
namespace hide
-- BEGIN
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
-- END
end hide
We use the rewrite tactic rw
to rewrite an expression with a
sequence of identities.
The theorem pow_succ
states that a^(succ n) = a * a^n
. The monoid
in question is not assumed to be commutative, so it requires a proof
by induction to show that a^(succ n) = a^n * a
.
namespace hide
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
-- BEGIN
theorem pow_succ' (a : α) (n : ℕ) : a^(succ n) = a^n * a :=
nat.rec_on n
(show a^(succ 0) = a^0 * a,
by simp [pow_zero, one_mul, pow_succ])
(take n,
assume ih : a^(succ n) = a^n * a,
show a^(succ (succ n)) = a^(succ n) * a,
by rw [pow_succ, ih, -mul_assoc, -pow_succ, ih])
-- END
end hide
By the propositions-as-types correspondence, the same principle nat.rec_on
governs both
induction and recursion on the natural numbers, and works as you would
expect: you prove the base case, and then carry out the induction
step. Lean has a special proof mode, calc
, that facilitates writing
calculational proofs. It can be used in this case to make the argument
more readable:
namespace hide
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
-- BEGIN
theorem pow_succ' (a : α) (n : ℕ) : a^(succ n) = a^n * a :=
nat.rec_on n
(show a^(succ 0) = a^0 * a,
by simp [pow_zero, one_mul, pow_succ])
(take n,
assume ih : a^(succ n) = a^n * a,
show a^(succ (succ n)) = a^(succ n) * a, from
calc
a^(succ (succ n)) = a * a^(succ n) : by rw pow_succ
... = a * (a^n * a) : by rw ih
... = (a * a^n) * a : by rw mul_assoc
... = a^(succ n) * a : by rw -pow_succ)
-- END
end hide
The calc
mode can be used with inequalities and transitive relations
that have been registered with the system.
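For instance (an illustrative sketch using the order on the natural numbers), a single calculation can mix ≤ and <:
example (a b c d : ℕ) (h₁ : a ≤ b) (h₂ : b < c) (h₃ : c ≤ d) : a < d :=
calc a ≤ b : h₁
     ... < c : h₂
     ... ≤ d : h₃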
By the propositions-as-types correspondence, induction is just a form of recursion, and so the function definition system can be used to write proofs by induction as well.
namespace hide
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
-- BEGIN
theorem pow_succ' (a : α) : ∀ n, a^(succ n) = a^n * a
| 0 := by simp [pow_zero, one_mul, pow_succ]
| (succ n) := by rw [pow_succ, pow_succ' n, -mul_assoc,
-pow_succ, pow_succ' n]
-- END
end hide
Here the rewrite tactic uses the inductive hypothesis pow_succ' n. In an inductive proof like this, any structurally decreasing recursive call can serve as an appeal to the inductive hypothesis.
Finally, one can write a tactic proof using the induction tactic, which reverts any hypotheses in the context that depend on the induction variable and reintroduces them in each case. The with clause names the variable used in the inductive step, as well as the inductive hypothesis.
namespace hide
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
-- BEGIN
theorem pow_succ' (a : α) (n : ℕ) : a^(succ n) = a^n * a :=
begin
induction n with n ih,
{ simp [pow_zero, one_mul, pow_succ] },
rw [pow_succ, ih, -mul_assoc, -pow_succ, ih]
end
-- END
end hide
Here is another example of a proof that uses the induction tactic.
namespace hide
universe u
variable {α : Type u}
variable [monoid α]
open nat
def pow (a : α) : ℕ → α
| 0 := 1
| (n + 1) := a * pow n
infix `^` := pow
theorem pow_zero (a : α) : a^0 = 1 := rfl
theorem pow_succ (a : α) (n : ℕ) : a^(succ n) = a * a^n := rfl
theorem pow_succ' (a : α) (n : ℕ) : a^(succ n) = a^n * a :=
begin
induction n with n ih,
{ simp [pow_zero, one_mul, pow_succ] },
rw [pow_succ, ih, -mul_assoc, -pow_succ, ih]
end
-- BEGIN
theorem pow_add (a : α) (m n : ℕ) : a^(m + n) = a^m * a^n :=
begin
induction n with n ih,
{ simp [add_zero, pow_zero, mul_one] },
rw [add_succ, pow_succ', ih, pow_succ', mul_assoc]
end
-- END
end hide
Recall the recursive definitions of the append
and length
functions for lists from Section 3.2.
namespace hide
open list
universe u
variable {α : Type u}
-- BEGIN
def append : list α → list α → list α
| [] l := l
| (h :: s) t := h :: (append s t)
def length : list α → nat
| [] := 0
| (a :: l) := length l + 1
-- END
end hide
The natural way to prove things about these is to use induction on lists. Here are some examples.
universe u
variable {α : Type u}
open nat list
-- BEGIN
theorem append_nil (t : list α) : t ++ [] = t :=
begin induction t with a t ih, reflexivity, simp [nil_append, cons_append, ih] end
theorem append.assoc (s t u : list α) : s ++ t ++ u = s ++ (t ++ u) :=
begin induction s with a s ih, reflexivity, simp [cons_append, ih] end
theorem length_append (s t : list α) : length (s ++ t) = length s + length t :=
begin
induction s with a s ih,
simp [nil_append, length],
simp [length_cons, cons_append, ih]
end
theorem eq_nil_of_length_eq_zero : ∀ {l : list α}, length l = 0 → l = []
| [] h := rfl
| (a::s) h := by contradiction
theorem ne_nil_of_length_eq_succ : ∀ {l : list α} {n : nat}, length l = succ n → l ≠ []
| [] n h := by contradiction
| (a::l) n h := begin intro leq, contradiction end
-- END
The first three are tactic-style proofs, whereas the last two use the function definition package.
[To do: describe all the axioms of Lean, including classical axioms.]