% !TEX program = lualatex
% The following code introduces a new \if macro which we use to switch
% compilation between the published and internet versions of the book
%
\newif\ifmine
%
% The default is false, which means we compile the internet version.
% Un-comment the next line to compile the published version:
%
%\minetrue
%
\documentclass{amsart}
\usepackage[hidelinks]{hyperref}
\usepackage{tensor}
\usepackage{comment}
\usepackage{enumitem}
\usepackage{moreenum}
\usepackage{graphicx}
\usepackage{ifthen}
\usepackage{stmaryrd}
\usepackage[svgnames]{xcolor}
\usepackage{fullpage}
\hypersetup{
colorlinks,
linkcolor={red!50!black},
citecolor={blue!50!black},
urlcolor={blue!80!black}
}
\usepackage{mathpartir}%I think for \inferrule
% Font packages
\usepackage[no-math]{fontspec}
\usepackage{realscripts}
% Unicode mathematics fonts
\usepackage{unicode-math}
\setmathfont{Asana Math}[Alternate = 2]
% Font imports, for some reason this has to be after
% the unicode-math stuff.
\setmainfont{CormorantGaramond}[
Numbers = Lining,
Ligatures = NoCommon,
Kerning = On,
UprightFont = *-Medium,
ItalicFont = *-MediumItalic,
BoldFont = *-Bold,
BoldItalicFont = *-BoldItalic
]
\setsansfont{texgyreheros}[
Scale=0.9129,
Ligatures = NoCommon,
UprightFont = *-Regular,
ItalicFont = *-Italic,
BoldFont = *-Bold,
BoldItalicFont = *-BoldItalic
]
\setmonofont{SourceCodePro}[
Scale=0.8333,
UprightFont = *-Regular,
ItalicFont = *-MediumItalic,
BoldFont = *-Bold,
BoldItalicFont = *-BoldItalic
]
% AMS Packages
\usepackage{amsmath}
\usepackage{amsxtra}
\usepackage{amsthm}
% We use TikZ for diagrams
\usepackage{tikz}
\usepackage{tikz-cd}
\usepackage{makebox}%to try and fix the spacing in some diagrams with wildly divergent node sizes
\renewcommand{\theenumi}{\roman{enumi}} %roman numerals in enumerate
% Adjust list environments.
\setlist{}
\setenumerate{leftmargin=*,labelindent=0\parindent}
\setitemize{leftmargin=\parindent}%,labelindent=0.5\parindent}
%\setdescription{leftmargin=1em}
\newcommand{\todo}[1]
{ {\bfseries \color{blue} #1 }}
\newcommand{\lecture}[1]{\vspace{.1cm}\centerline{\fbox{\textbf{#1}}}\vspace{.1cm}}
\theoremstyle{theorem}
\newtheorem*{thm}{Theorem}
\newtheorem*{lem}{Lemma}
\newtheorem*{fact}{Fact}
\newtheorem*{cor}{Corollary}
\newtheorem*{prop}{Proposition}
\theoremstyle{definition}
\newtheorem*{defn}{Definition}
\newtheorem*{ntn}{Notation}
\newtheorem*{post}{Postulate}
\newtheorem*{ax}{Axiom}
\newtheorem*{ex}{Example}
\newtheorem*{nex}{Non-Example}
\newtheorem*{exc}{Exercise}
\newtheorem*{exnex}{Example/Non-Example}
\newtheorem*{tf}{T/F}
\newtheorem*{q}{Q}
\newtheorem*{rQ}{Rhetorical Q}
\newtheorem*{rev}{Review}
\theoremstyle{remark}
\newtheorem*{rmk}{Remark}
\newtheorem*{war}{Warning}
\newtheorem*{dig}{Digression}
%\makeatletter
%\let\c@equation\c@thm
%\makeatother
%\numberwithin{equation}{section}
\newcommand{\cat}[1]{\textup{\textsf{#1}}}% for categories
\newcommand{\fun}[1]{\textup{#1}}%for functors
%math operators
\DeclareMathOperator{\dom}{\mathrm{dom}}
\DeclareMathOperator{\cod}{\mathrm{cod}}
\DeclareMathOperator{\ob}{\mathrm{ob}}
\DeclareMathOperator{\mor}{\mathrm{mor}}
\DeclareMathOperator*{\colim}{\mathrm{colim}}
\newcommand{\hocolim}{\mathrm{hocolim}}
\newcommand{\wcolim}{\mathrm{wcolim}}
\newcommand{\holim}{\mathrm{holim}}
\newcommand{\op}{\mathrm{op}}
\newcommand{\co}{\mathrm{co}}
\newcommand{\Nat}{\mathrm{Nat}}
\newcommand{\End}{\mathrm{End}}
\newcommand{\Aut}{\mathrm{Aut}}
\newcommand{\Sym}{\mathrm{Sym}}
\newcommand{\coim}{\mathrm{coim}}
\newcommand{\To}{\Rightarrow}
\newcommand{\coker}{\mathrm{coker}}
\newcommand{\Map}{\mathord{\text{\normalfont{\textsf{Map}}}}}
\newcommand{\Fun}{\mathord{\text{\normalfont{\textsf{Fun}}}}}
\newcommand{\Hom}{\mathord{\text{\normalfont{\textsf{Hom}}}}}
\newcommand{\Ho}{\mathord{\text{\normalfont{\textsf{Ho}}}}}
\newcommand{\h}{\cat{h}}
\DeclareMathOperator{\Lan}{\fun{Lan}}
\DeclareMathOperator{\Ran}{\fun{Ran}}
\newcommand{\comma}{\!\downarrow\!}
%special blackboard bold characters
\newcommand{\bbefamily}{\fontencoding{U}\fontfamily{bbold}\selectfont}
\newcommand{\textbbe}[1]{{\bbefamily #1}}
\DeclareMathAlphabet{\mathbbe}{U}{bbold}{m}{n}
\def\DDelta{{\mbfDelta}}
%categories?
\newcommand{\cA}{\mathsf{A}}
\newcommand{\cB}{\mathsf{B}}
\newcommand{\cC}{\mathsf{C}}
\newcommand{\cD}{\mathsf{D}}
\newcommand{\cE}{\mathsf{E}}
\newcommand{\cF}{\mathsf{F}}
\newcommand{\cG}{\mathsf{G}}
\newcommand{\cI}{\mathsf{I}}
\newcommand{\cJ}{\mathsf{J}}
\newcommand{\cK}{\mathsf{K}}
\newcommand{\cL}{\mathsf{L}}
\newcommand{\cM}{\mathsf{M}}
\newcommand{\cN}{\mathsf{N}}
\newcommand{\cP}{\mathsf{P}}
\newcommand{\cS}{\mathsf{S}}
\newcommand{\cT}{\mathsf{T}}
\newcommand{\cV}{\mathsf{V}}
\newcommand{\0}{\mathbbe{0}}
\newcommand{\1}{\mathbbe{1}}
\newcommand{\2}{\mathbbe{2}}
\newcommand{\3}{\mathbbe{3}}
\newcommand{\4}{\mathbbe{4}}
\newcommand{\iso}{\mathbbe{I}}
%blackboard bold
\renewcommand{\AA}{\mathbb{A}}
\newcommand{\CC}{\mathbb{C}}
\newcommand{\FF}{\mathbb{F}}
\newcommand{\LL}{\mathbb{L}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\kk}{\mathbbe{k}}
\newcommand{\sA}{\mathcal{A}}
\newcommand{\sB}{\mathcal{B}}
\newcommand{\sC}{\mathcal{C}}
\newcommand{\sJ}{\mathcal{J}}
\newcommand{\sL}{\mathcal{L}}
\newcommand{\sR}{\mathcal{R}}
%type theory stuff
\newcommand{\univ}{{~\texttt{type}~}}
\newcommand{\judgment}{\mathcal{J}}
\newcommand{\term}[1]{{\textup{\texttt{#1}}}}
\newcommand{\type}[1]{{\textup{#1}}}
\newcommand{\comp}{\term{comp}}
\newcommand{\id}{\term{id}}
\newcommand{\bN}{{\mathbb{N}}}
\newcommand{\bT}{{\mathbb{T}}}
\newcommand{\suc}{\term{succ}_{\bN}}
\newcommand{\ind}{\term{ind}}
\newcommand{\inl}{\term{inl}}
\newcommand{\inr}{\term{inr}}
\newcommand{\pair}{\term{pair}}
\newcommand{\pr}{\term{pr}}
\newcommand{\bZ}{{\mathbb{Z}}}
\newcommand{\refl}{\term{refl}}
\newcommand{\pathind}{\term{path}\text{-}\term{ind}}
\newcommand{\concat}{\term{concat}}
\newcommand{\inv}{\term{inv}}
\newcommand{\assoc}{\term{assoc}}
\newcommand{\ap}{\term{ap}}
\newcommand{\apcoh}[1]{\term{ap}\text{-}\term{#1}}
\newcommand{\tr}{\term{tr}}
\newcommand{\apd}{\term{apd}}
\newcommand{\const}{\term{const}}
\newcommand{\glue}{\term{glue}}
\newcommand{\UU}{{\mathcal{U}}}
\newcommand{\sT}{\mathcal{T}}
\newcommand{\Id}{\textup{\text{Id}}}
\newcommand{\Eq}{\textup{\text{Eq}}}
\newcommand{\bool}{\type{bool}}
\newcommand{\true}{\term{true}}
\newcommand{\false}{\term{false}}
\newcommand{\is}[1]{\type{is-{#1}}}
\newcommand{\iscontr}{\type{is-contr}}
\newcommand{\isprop}{\type{is-prop}}
\newcommand{\isequiv}{\type{is-equiv}}
\newcommand{\fib}{\type{fib}}
\newcommand{\Prop}{\type{Prop}_{\UU}}
\newcommand{\Set}{\type{Set}_{\UU}}
\renewcommand{\iff}{\leftrightarrow}
\newcommand{\mere}[1]{\|{#1}\|}
\newcommand{\set}[1]{\|{#1}\|_0}
\newcommand{\im}[1]{\type{im}(#1)}
\newcommand{\ev}{\term{ev}}
\newcommand{\Fin}{\type{Fin}}
\newcommand{\Sone}{\mathbf{S}^1}
\newcommand{\Sn}[1]{\mathbf{S}^{#1}}
\newcommand{\base}{\term{base}}
\newcommand{\lloop}{\term{loop}}
\newcommand{\mul}{\term{mul}}
\begin{document}
\title{Math 721: Homotopy Type Theory}
\author{Emily Riehl}
\date{Fall 2021}
%\begin{abstract}
%\end{abstract}
\address{Dept.~of Mathematics\\Johns Hopkins University\\3400 N Charles St\\Baltimore, MD 21218}
\email{[email protected]}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\ifmine
\subsection*{Index cards:} name, what I should call you, pronouns, year, relevant previous coursework, why you are here, something fun
\subsection*{Personal history:}
The prehistory of homotopy type theory dates to a 1998 paper of Martin Hofmann and Thomas Streicher called ``The groupoid interpretation of type theory,'' which I'll tell you about later. Key early developments came in the mid aughts, with the major players coming together for the first time in 2010 at Carnegie Mellon. This expanded to a larger group at Oberwolfach in 2011, followed by a special year at IAS in 2012--2013, during which the HoTT book (not our book) was written.
I got my PhD in 2011. I started hearing a little about this in my final years of grad school but learned most of what I'll tell you about this semester during my postdoc and while here at Johns Hopkins. The point of this is to say that learning doesn't stop when you earn your PhD. Learning also doesn't necessarily precede teaching. My goal for this semester is to know a lot more \texttt{agda} by the end than I do right now. My promise to you is that I'll do all the homework before I assign it. I don't promise that I'll be able to answer very many of your questions about what is going on under the hood, but that's okay too: it's probably useful for all of us to be reminded that figures of authority don't always know what they're talking about.
\subsection*{Syllabus:}
The goal for this course is to change the way you think about doing mathematics. I certainly have. Along the way, I hope you all learn a lot, though as is typical in a graduate course, that's somewhat in proportion to the amount of work you put in. This also has a lot to do with your background. If you're coming from CS, you'll probably leave the course knowing way more \texttt{agda} than I do, but perhaps a bit less about synthetic homotopy theory. For others, it will be vice versa. Still others might engage most with the philosophical implications of these developments. I'm equally happy with all of these directions.
There are two ``active learning'' activities. The first is writing your own formal proofs in the computer proof assistant \texttt{agda}. This is hard in the sense that if you make one typo the whole thing breaks, but a lot of help is available. I'll say more about it when the time comes. In particular, I'm hoping many folks will attend group office hours on Thursday evenings from 5--6pm. In the first meeting there won't be a problem set due: instead the goal will be to get \texttt{agda} installed. Come see me if you're worried about technical issues.
The second active learning activity is a final project. I'm very flexible about the parameters. Essentially you should pick something that sounds fun to you.
Questions?
\fi
\part{Martin-L\"of's Dependent Type Theory}
\section*{August 30: Dependent Type Theory}
Martin-L\"{o}f's dependent type theory is a formal language for writing mathematics: both constructions of mathematical objects and proofs of mathematical propositions. As we shall discover, these two things are treated in parallel (in contrast to classical Set theory plus first-order logic, where the latter supplies the proof calculus and the former gives the language which you use to state things to prove).
\subsection*{Judgments and contexts}
I find it helpful to imagine I'm teaching a computer to do mathematics. It's also helpful to forget that you know other ways of doing mathematics.\footnote{Indeed, there are very deep theorems that describe how to interpret dependent type theory into classical set-based mathematics. You're welcome to investigate these for your final project but they are beyond the scope of this course.}
\begin{defn} There are four kinds of \textbf{judgments} in dependent type theory, which you can think of as the ``grammatically correct'' expressions:
\begin{enumerate}
\item $\Gamma \vdash A \univ$, meaning that $A$ is a well-formed type in \textbf{context} $\Gamma$ (more about this soon).
\item $\Gamma \vdash a : A$, meaning that $a$ is a well-formed term of type $A$ in context $\Gamma$.
\item $\Gamma \vdash A \doteq B \univ$, meaning that $A$ and $B$ are \textbf{judgmentally} or \textbf{definitionally} equal types in context $\Gamma$.
\item $\Gamma \vdash a \doteq b : A$, meaning that $a$ and $b$ are judgmentally equal terms of type $A$ in context $\Gamma$.
\end{enumerate}
These might be collectively abbreviated by $\Gamma \vdash \judgment$.
\end{defn}
The statement of a mathematical theorem often begins with an expression like ``Let $n$ and $m$ be positive integers, with $n < m$, and let $\vec{v}_1,\ldots, \vec{v}_m$ be vectors in $\RR^n$. Then \ldots'' This statement of the hypotheses defines a \textbf{context}, a finite list of types and hypothetical terms (called \textbf{variables}\footnote{We're not going to say anything about proper syntax for variables and instead rely on instinct to recognize proper and improper usage.}) satisfying an inductive condition that each type can be derived in the context of the previous types and terms using the inference rules of type theory.
\begin{defn} A \textbf{context} is a finite list of variable declarations:
\[ x_1 : A_1, x_2 : A_2(x_1), \ldots, x_n : A_n(x_1,\ldots, x_{n-1})\]
satisfying the condition that for each $1\leq k \leq n$ we can derive the judgment
\[ x_1 : A_1, \ldots, x_{k-1} : A_{k-1}(x_1,\ldots, x_{k-2}) \vdash A_k(x_1,\ldots, x_{k-1}) \univ\]
using the inference rules of type theory.
\end{defn}
We'll introduce the inference rules shortly but the idea is that it needs to be possible to form the type $A_k(x_1,\ldots, x_{k-1})$ given terms $x_1, \ldots, x_{k-1}$ of the previously-formed types.
\begin{ex} For example, there is a unique context of length zero: the empty context.
\end{ex}
\begin{ex} $n : \NN, m : \NN, p : n < m, \vec{v} : (\RR^n)^m$ is a context. Here $n : \NN, m : \NN \vdash n < m$ is a dependent type that corresponds to the relation $\{ (n,m) \in \NN \times \NN \mid n < m\}$ and the variable $p$ is a witness that $n < m$ is true (more about this later).
\end{ex}
\subsection*{Type families}
Absolutely everything in dependent type theory is context dependent so we always assume we're working in a background context $\Gamma$. Let's focus on the primary two judgment forms.
\begin{defn} Given a type $A$ in context $\Gamma$ a \textbf{family} of types over $A$ in context $\Gamma$ is a type $B(x)$ in context $\Gamma, x : A$, as represented by the judgment:
\[ \Gamma, x : A \vdash B(x) \univ\]
We also say that $B(x)$ is a type \textbf{indexed} by $x : A$, in context $\Gamma$.
\end{defn}
\begin{ex} $\RR^n$ is a type indexed by $n \in \NN$.
\end{ex}
\begin{defn} Consider a type family $B$ over $A$ in context $\Gamma$. A \textbf{section} of the family $B$ over $A$ in context $\Gamma$ is a term of type $B(x)$ in context $\Gamma, x : A$, as represented by the judgment:
\[ \Gamma, x : A \vdash b(x) : B(x) \]
We say that $b$ is a \textbf{section} of the family $B$ over $A$ in context $\Gamma$ or that $b(x)$ is a term of type $B(x)$ indexed by $x : A$ in context $\Gamma$.
\end{defn}
\begin{ex} $\vec{0}_n : \RR^n$ is a term dependent on $n \in \NN$.
\end{ex}
\begin{exc} If you've heard the word ``section'' before you should think about how that usage relates to the one here.
\end{exc}
\subsection*{Inference rules}
There are five kinds of inference rules that collectively describe the structural rules of dependent type theory. They are:
\begin{enumerate}
\item Rules postulating that judgmental equality is an equivalence relation:
\[
\inferrule{\Gamma \vdash A \univ}{\Gamma \vdash A \doteq A \univ}\quad
\inferrule{\Gamma \vdash A \doteq B \univ}{\Gamma \vdash B \doteq A \univ}\quad
\inferrule{\Gamma \vdash A \doteq B \univ \\ \Gamma \vdash B \doteq C \univ}{\Gamma \vdash A \doteq C \univ}
\]
and similarly for judgmental equality between terms.
\item Variable conversion rules for judgmental equality between types:
\[
\inferrule{\Gamma \vdash A \doteq A' \univ \\ \Gamma, x : A, \Delta \vdash \judgment}{\Gamma, x : A', \Delta \vdash \judgment}
\]
\item Substitution rules:
\[
\inferrule{\Gamma \vdash a : A \\ \Gamma, x : A, \Delta \vdash \judgment}{\Gamma, \Delta[a/x] \vdash \judgment[a/x]}
\]
If $\Delta$ is the context $y_1 : B_1(x),\ldots, y_n : B_n(x,y_1,\ldots, y_{n-1})$ then $\Delta[a/x]$ is the context $y_1 : B_1(a), \ldots, y_n : B_n(a,y_1,\ldots, y_{n-1})$. A similar substitution is performed in the judgment $\judgment[a/x]$. Further rules indicate that substitution by judgmentally equal terms gives judgmentally equal results.
\item Weakening rules:
\[
\inferrule{ \Gamma \vdash A \univ \\ \Gamma,\Delta \vdash \judgment}{\Gamma, x : A, \Delta \vdash \judgment}
\]
E.g., if $A$ and $B$ are types in context $\Gamma$, then $B$ is also a type in context $\Gamma, x : A$.
\item The generic term:
\[
\inferrule{\Gamma \vdash A \univ }{\Gamma, x : A \vdash x :A}
\]
This will be used to define the identity function of any type.
\end{enumerate}
\subsection*{Derivations}
A derivation in type theory is a finite rooted tree where each node is a valid rule of inference. The root is the conclusion.
\begin{ex} The interchange rule is derived as follows
\[
\inferrule{
\inferrule{
\inferrule{ \Gamma \vdash B \univ}{\Gamma, y :B \vdash y : B}}
{\Gamma, y : B, x : A\vdash y : B} \qquad
\inferrule{
\inferrule{}{\Gamma \vdash B \univ} \\ \inferrule{\Gamma, x : A, y : B, \Delta \vdash \judgment}{\Gamma, x : A, z : B, \Delta[z/y] \vdash \judgment[z/y]}}
{\Gamma, y : B, x : A, z : B, \Delta[z/y] \vdash \judgment[z/y]}
}{\Gamma, y : B, x : A, \Delta \vdash \judgment}
\]
\end{ex}
\section*{September 1: Dependent function types \& the natural numbers}
\subsection*{The rules for dependent function types}
Consider a section $b$ of a family $B$ over $A$ in context $\Gamma$, as encoded by a judgment:
\[ \Gamma, x : A \vdash b(x) : B(x).\]
We think of the section $b$ as a function that takes as input $x : A$ and produces a term $b(x) : B(x)$. Since the type of the output is allowed to depend on the term being input, this isn't quite an ordinary function but a \textbf{dependent function}. The type of all dependent functions is the \textbf{dependent function type}
\[ \Pi_{x : A} B(x)\]
What is a thing in mathematics? Structuralism says the ontology of a thing is determined by its behavior. In dependent type theory, we define dependent function types by stating their rules, which have the following forms:
\begin{enumerate}
\item \textbf{formation rules} tell us how a type may be formed
\item \textbf{introduction rules} tell us how to introduce new terms of the type
\item \textbf{elimination rules} tell us how the terms of a type may be used
\item \textbf{computation rules} tell us how the introduction and elimination rules interact
\end{enumerate}
There are also \textbf{congruence rules} that tell us that all constructions respect judgmental equality. See \cite{Rijke} for more details.
\begin{defn}[dependent function types]
The $\Pi$-\textbf{formation rule} has the form:
\[
\inferrule{ \Gamma, x : A \vdash B(x) \univ}{\Gamma \vdash \Pi_{x :A} B(x) \univ}
\]
The $\Pi$-\textbf{introduction rule} has the form:
\[
\inferrule{ \Gamma, x : A \vdash b(x) : B(x)}{ \Gamma \vdash \lambda x. b(x) : \Pi_{x :A} B(x)}
\]
The $\lambda$-\textbf{abstraction} $\lambda x. b(x)$ can be thought of as notation for $x \mapsto b(x)$.
The $\Pi$-\textbf{elimination rule} has the form of the evaluation function:
\[
\inferrule{ \Gamma \vdash f : \Pi_{x :A} B(x)}{\Gamma, x :A \vdash f(x) : B(x)}
\]
Finally, there are two computation rules: the $\beta$-\textbf{rule}
\[
\inferrule{\Gamma , x :A \vdash b(x) : B(x) }{ \Gamma, x : A \vdash (\lambda y. b(y))(x) \doteq b(x) : B(x)}
\]
and the $\eta$-\textbf{rule}, which says that all elements of a $\Pi$-type are dependent functions:
\[
\inferrule{ \Gamma \vdash f : \Pi_{x : A}B(x)}{\Gamma \vdash \lambda x. f(x) \doteq f : \Pi_{x :A} B(x)}\]
\end{defn}
\subsection*{Ordinary function types}
\begin{defn}[function types]
The formation rule is derived from the formation rule for $\Pi$-types together with weakening:
\[
\inferrule{
\inferrule{ \Gamma \vdash A \univ \\ \Gamma \vdash B \univ}{\Gamma, x : A \vdash B \univ}}
{\Gamma \vdash \Pi_{x : A}B \univ}
\]
We adopt the notation
\[ A \to B \coloneq \Pi_{x : A} B\]
for the dependent function type in the case where the type family $B$ is constant over $x : A$.
The introduction, evaluation, and computation rules are instances of term conversion: eg
\[
\inferrule{ \Gamma \vdash B \univ \\ \Gamma, x : A \vdash b(x) : B}
{ \Gamma \vdash \lambda x. b(x) : A \to B}
\qquad
\inferrule{ \Gamma \vdash f : A \to B}{\Gamma, x : A \vdash f(x) : B}
\]
plus the two computation rules:
\[
\inferrule{ \Gamma \vdash B \univ \\ \Gamma, x : A \vdash b(x) : B}{ \Gamma, x : A \vdash (\lambda y. b(y))(x) \doteq b(x) : B}
\qquad
\inferrule{ \Gamma \vdash f : A \to B}{\Gamma \vdash \lambda x. f(x) \doteq f : A \to B}
\]
\end{defn}
\begin{defn} Identity functions are defined as follows:
\[
\inferrule{
\inferrule{ \Gamma \vdash A \univ}{\Gamma, x : A \vdash x : A}}
{ \Gamma \vdash \lambda x. x : A \to A}\]
which is traditionally denoted by $\id_A \coloneq \lambda x. x$.
\end{defn}
The idea of composition is that given functions $f \colon A \to B$ and $g \colon B \to C$ you should get a function $g \circ f \colon A \to C$. Using infix notation you might denote this function by $\_\circ\_$.
\begin{q} $\_\circ\_$ is itself a function, so it's a term of some type. What type?\footnote{Really the type should involve three universe variables but let's save this for next week.}
\end{q}
\begin{defn} Composition has the form:
\[
\inferrule{ \Gamma \vdash A \univ \\ \Gamma \vdash B \univ \\ \Gamma \vdash C \univ}{\Gamma \vdash \_\circ\_ : (B \to C) \to ((A \to B) \to (A \to C))}
\]
It is defined by
\[ \_\circ\_ \coloneq \lambda g. \lambda f . \lambda x. g(f(x))\]
which can be understood as the term constructed by three applications of the $\Pi$-introduction rule followed by two applications of the $\Pi$-elimination rule.
\end{defn}
Composition is associative essentially because both $(h\circ g)\circ f$ and $h \circ (g \circ f)$ are defined by $\lambda x. h(g(f(x)))$. We'll think about this more formally when we come back to identity types.
Similarly, you can compute that for all $f : A \to B$, $\id_B \circ f \doteq f : A \to B$ and $f \circ \id_A \doteq f : A \to B$.
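Peeking ahead to the formal side: in \texttt{agda}, the proof assistant introduced on September 8, these constructions might be rendered roughly as follows. This is only an illustrative sketch, assuming the universe \texttt{UU lzero} used in the course's \texttt{agda} files, with \texttt{comp} as our placeholder name for the infix composition operator.
\begin{quote}
\texttt{-- a sketch; UU and lzero as in the course setup\\
id : \{A : UU lzero\} -> A -> A\\
id x = x\\[1ex]
comp : \{A B C : UU lzero\} -> (B -> C) -> (A -> B) -> (A -> C)\\
comp g f x = g (f x)}
\end{quote}
With these definitions \texttt{comp g f x} computes to \texttt{g (f x)} by the $\beta$-rule, which is exactly why the associativity and unit computations above hold judgmentally.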
\subsection*{The type of natural numbers}
The type $\bN$ of natural numbers is the archetypical example of an \textbf{inductive type}, about which more soon. It is given by rules which say that it has a term $0_\bN : \bN$, that it has a successor function $\suc : \bN \to\bN$, and that it satisfies the induction principle.
The $\bN$-formation rule is
\[
\inferrule{~}{\vdash \bN \univ}
\]
In other words, $\bN$ is a type in the empty context.
There are two $\bN$-introduction rules:
\[
\inferrule{~}{\vdash 0_\bN : \bN} \qquad
\inferrule{~}{\vdash \suc : \bN \to \bN}
\]
\begin{dig}[traditional induction]
In traditional first-order logic, the principle of $\bN$-induction is stated in terms of a \textbf{predicate} $P$ over $\bN$. One way to think about $P$ is as a function $P \colon \bN \to \{ \top, \bot\}$. That is, for each $n \in \bN$, $P(n)$ is either true or false. We could also think of $P$ as an indexed family of sets $(P(n))_{n \in \bN}$ where for each $n$ either $P(n) = \emptyset$ (corresponding to $P(n)$ being false) or $P(n) = *$ (corresponding to $P(n)$ being true).
The induction principle then says \[ \forall P : \{\top,\bot\}^\bN, \left(P(0) \wedge (\forall n, P(n) \to P(n+1))\right) \to \forall n, P(n).\]
\end{dig}
In dependent type theory it is most natural to let $P$ be an arbitrary type family over $\bN$. This is a stronger assumption, as we'll see.
\begin{q}
What then corresponds to a proof that $\forall n, P(n)$?
\end{q}
The induction principle is encoded by the following rule:
\[
\inferrule{ \Gamma, n : \bN \vdash P(n) \univ \\ \Gamma \vdash p_0 : P(0_\bN) \\ \Gamma \vdash p_S : \Pi_{n : \bN} (P(n) \to P(\suc(n))) }
{ \Gamma \vdash \term{ind}_\bN(p_0,p_S) : \Pi_{n : \bN} P(n)}
\]
\begin{rmk} There are other forms this rule might take that are interderivable with this one.
\end{rmk}
The computation rules say that the function $\term{ind}_\bN(p_0,p_S) : \Pi_{n : \bN} P(n)$ behaves like it should on $0_\bN$ and successors:
\[
\inferrule{ \Gamma , n : \bN \vdash P(n) \univ \\ \Gamma \vdash p_0 : P(0_\bN) \\ \Gamma \vdash p_S : \Pi_{n : \bN} (P(n) \to P(\suc(n))) }
{ \Gamma \vdash \term{ind}_\bN(p_0,p_S)(0_\bN) \doteq p_0 : P(0_\bN)}
\]
and under the same premises
\[ \Gamma, n : \bN \vdash \ind_\bN(p_0, p_S)(\suc(n)) \doteq p_S(n, \ind_\bN(p_0,p_S)(n)) : P(\suc(n)).\]
These computation rules don't matter so much if the type family $n : \bN \vdash P(n)$ is really a predicate --- $P(n)$ is either true or false and that's the end of the story --- but they do matter if $P(n)$ is more like an indexed family of sets. In the latter case, $\term{ind}_\bN(p_0,p_S)$ is the recursive function defined from $p_0$ and $p_S$ and these are the computation rules for that recursion.
\begin{rmk} Recall Peano's axioms for the natural numbers:
\begin{enumerate}
\item $0_\bN \in \bN$
\item $\suc : \bN \to \bN$
\item $\forall n, \suc(n) \neq 0_\bN$
\item $\forall n,m, \suc(n)= \suc(m) \to n=m$
\item induction
\end{enumerate}
We'll be able to \emph{prove} the missing two axioms from the induction principle we've assumed once we have identity types and universes. We'll come back to this in a few weeks.
\end{rmk}
\subsection*{Addition on the natural numbers}
\begin{rmk} When addition is defined by recursion on the second variable, from the computation rules associated to function types and the natural numbers type you can derive judgmental equalities \[
m + 0 \doteq m \quad \text{and} \quad m + \suc(n) \doteq \suc(m + n).
\]
But you can't derive the symmetric judgmental equalities.
\end{rmk}
We \emph{will} be able to prove such equalities using the identity types, to be introduced shortly.
\subsection*{Pattern matching}
To define a dependent function $f : \Pi_{n : \bN} P(n)$ by induction on $n$ it suffices, by the elimination rule for the natural numbers type, to provide two terms:
\[ p_0 : P(0_\bN) \qquad p_S : \Pi_{n : \bN} P(n) \to P(\suc(n)).\]
Thus the definition of $f$ may be presented by writing
\[ f(0_\bN) \coloneq p_0 \qquad f(\suc(n)) \coloneq p_S(n,f(n)).\]
This defines the function $f$ by \textbf{pattern matching} on the variable $n$. When a function is defined in this form, the judgmental equalities accompanying the definition are immediately displayed.
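Here is how the natural numbers and addition defined by pattern matching might look in \texttt{agda} (introduced in the next lecture). This is a sketch with our own names \texttt{nat}, \texttt{zero-nat}, \texttt{succ-nat}, and \texttt{add-nat}, again assuming the universe \texttt{UU lzero} from the course setup.
\begin{quote}
\texttt{-- a sketch; names are ours\\
data nat : UU lzero where\\ \indent
zero-nat : nat\\ \indent
succ-nat : nat -> nat\\[1ex]
add-nat : nat -> nat -> nat\\
add-nat m zero-nat = m\\
add-nat m (succ-nat n) = succ-nat (add-nat m n)}
\end{quote}
The two clauses of \texttt{add-nat} display exactly the judgmental equalities $m + 0 \doteq m$ and $m + \suc(n) \doteq \suc(m+n)$.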
\section*{September 8: The formal proof assistant \texttt{agda}}
See \url{https://github.com/emilyriehl/721/blob/master/introduction.agda}
\section*{September 13: Inductive types}
The rules for the natural numbers type $\bN$ tell us:
\begin{enumerate}
\item how to form terms in $\bN$, and
\item how to define dependent functions in $\Pi_{n : \bN} P(n)$ for any type family $n : \bN \vdash P(n) \univ$,
\end{enumerate}
while providing two computation rules for those dependent functions.
Many types can be specified by stating how to form their terms and how to define dependent functions out of them. Such types are called \textbf{inductive types}.
\subsection*{The idea of inductive types}
Recall a type is specified by its formation rules, its introduction rules, its elimination rules, and its computation rules. For inductive types, the introduction rules specify the \textbf{constructors} of the inductive type, while the elimination rule provides the \textbf{induction principle}. The computation rules provide definitional equalities for the induction principle.
In more detail:
\begin{enumerate}
\item The constructors tell us what structure the identity type is given with.
\item The induction principle defines sections of any type family over the inductive type by specifying the behavior at the constructors.
\item The computation rules assert that the inductively defined section agrees on the constructors with the data used to define it. So there is one computation rule for each constructor.
\end{enumerate}
\subsection*{The unit type}
The formal definition of the \textbf{unit type} is as follows:
\[
\inferrule{}{\vdash \1 \univ} \qquad
\inferrule{}{\vdash \star : \1} \qquad
\inferrule{ x : \1 \vdash P(x) \univ \\ p : P(\star)}{ x: \1 \vdash \ind_\1(p,x) : P(x)} \qquad
\inferrule{ x : \1 \vdash P(x) \univ \\ p : P(\star)}{x : \1 \vdash \ind_\1(p,\star) \doteq p : P(\star)}
\]
As an inductive type, the definition is packaged as follows:
\begin{defn} The \textbf{unit type} is a type $\1$ equipped with a term $\star : \1$ satisfying the induction principle that for any family $x : \1 \vdash P(x)$ there is a function
\[ \ind_\1 : P(\star) \to \Pi_{x : \1} P(x)\]
with the computation rule $\ind_\1(p,\star) \doteq p$.
\end{defn}
In agda, this definition has the form:
\begin{quote}
\texttt{data unit : UU lzero where\\ \indent
star : unit}
\end{quote}
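Under the same conventions, the induction principle and its computation rule can be realized by a single pattern-matching clause. This is a sketch rather than the exact course code; the name \texttt{ind-unit} is ours.
\begin{quote}
\texttt{-- a sketch; matching on star gives the computation rule\\
ind-unit : (P : unit -> UU lzero) -> P star -> (x : unit) -> P x\\
ind-unit P p star = p}
\end{quote}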
\begin{q} What does the induction rule look like for a constant type family $A$ that does not depend on $\1$?
\end{q}
\subsection*{The empty type}
\begin{defn}
The empty type is a type $\emptyset$ satisfying the induction principle that for any family of types $x : \emptyset \vdash P(x)$ there is a term
\[ \ind_\emptyset : \Pi_{x : \emptyset} P(x).\]
\end{defn}
That is, the empty type is the inductive type with no constructors, so there are no computation rules. In agda, this definition has the form:
\begin{quote}
\texttt{data empty : UU lzero where}
\end{quote}
\begin{rmk} As a special case of the elimination rule for the empty type we have
\[
\inferrule{ \vdash A \univ}{ \term{ex-falso} \coloneq \ind_\emptyset : \emptyset \to A}
\]
By the elimination rule for function types it follows that if we had a term $x : \emptyset$ then we could get a term in any type. The name comes from the Latin \emph{ex falso quodlibet}: ``from falsehood, anything.''
\end{rmk}
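In \texttt{agda}, this elimination can be implemented with the \emph{absurd pattern} \texttt{()}, which signals that there are no cases to consider. A sketch, continuing the conventions above:
\begin{quote}
\texttt{-- a sketch; () is agda's absurd pattern\\
ex-falso : \{A : UU lzero\} -> empty -> A\\
ex-falso ()}
\end{quote}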
We've already seen a few glimpses of logic in type theory, something we'll discuss more formally soon. The basic idea is that we can interpret the formation of a type as akin to the process of formulating a mathematical statement, which could be a sentence (if it's a type in the empty context) or a predicate (if it's a dependent type). The act of constructing a term in that type is then analogous to proving the proposition so encoded.
These ideas motivate the logically-inflected terms in what follows.
For instance, we can use the empty type to define a negation operation on types:
\begin{defn}
For any type $A$, we define its \textbf{negation} by $\neg A \coloneq A \to \emptyset$ and say the type $A$ \textbf{is empty} if there is a term in this type.
\end{defn}
\begin{rmk} To construct a term of type $\neg A$, use the introduction rule for function types and assume given a term $a : A$. The task then is to derive a term of $\emptyset$. In other words, we prove $\neg A$ by assuming $A$ and deriving a contradiction. This proof technique is called \textbf{proof of negation}.
This should be contrasted with \textbf{proof by contradiction}, which aims to prove a proposition $P$ by assuming $\neg P$ and deriving a contradiction. This uses the logical step ``$\neg \neg P$ implies $P$.'' In type theory, however, $\neg \neg A$ is the type of functions
\[ \neg \neg A \coloneq (A \to \emptyset) \to \emptyset\] and it is not possible in general to use a term in this type to construct a term of type $A$.
\end{rmk}
The law of contraposition does work, at least in one direction.
\begin{prop} For any types $P$ and $Q$ there is a function
\[ (P \to Q) \to (\neg Q \to \neg P).\]
\end{prop}
\begin{proof}
By $\lambda$-abstraction assume given $f : P \to Q$ and $\tilde{q} : Q \to \emptyset$. We seek a term in $P \to \emptyset$, which we obtain simply by composing: $\tilde{q} \circ f : P \to \emptyset$. Thus
\[\lambda f. \lambda \tilde{q} .\lambda p. \tilde{q}(f(p)) : (P \to Q) \to (\neg Q \to \neg P). \qedhere\]
\end{proof}
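As a sketch in \texttt{agda} (with our own names \texttt{neg} and \texttt{contrapositive}, and the \texttt{empty} type above), both the negation operation and this proof are one-liners:
\begin{quote}
\texttt{-- a sketch; names are ours\\
neg : UU lzero -> UU lzero\\
neg A = A -> empty\\[1ex]
contrapositive : \{P Q : UU lzero\} -> (P -> Q) -> neg Q -> neg P\\
contrapositive f nq p = nq (f p)}
\end{quote}
The body \texttt{nq (f p)} is precisely the composite $\tilde{q} \circ f$ from the proof.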
\subsection*{Coproducts}
Inductive types can be defined outside the empty context. For instance, the formation and introduction rules for the coproduct type have the form:
\[
\inferrule{ \Gamma \vdash A \univ \\ \Gamma \vdash B \univ}{\Gamma \vdash A + B \univ}
\]
\[
\inferrule{ \Gamma \vdash A \univ \\ \Gamma \vdash B \univ \\ \Gamma \vdash a : A}{ \Gamma \vdash \inl a : A + B} \qquad
\inferrule{ \Gamma \vdash A \univ \\ \Gamma \vdash B \univ \\ \Gamma \vdash b : B}{ \Gamma \vdash \inr b : A + B}
\]
\begin{defn} Given types $A$ and $B$ the \textbf{coproduct type} is the type $A + B$ equipped with
\[ \inl : A \to A + B \qquad \inr : B \to A+ B\]
satisfying the induction principle that says that for any family of types $x : A + B \vdash P(x) \univ$ there is a term
\[ \ind_+ : \left( \Pi_{x : A} P(\inl (x))\right) \to \left( \Pi_{y : B} P(\inr(y))\right) \to \Pi_{z : A + B}P(z)\]
satisfying the computation rules
\[ \ind_+(f,g, \inl(x)) \doteq f(x) \qquad \ind_+(f,g, \inr(y)) \doteq g(y).\]
\end{defn}
Note that as a special case we have
\[ \ind_+ : (A \to X) \to (B \to X) \to (A + B \to X)\]
which is similar to the elimination rule for disjunction in first order logic: if you've proven that $A$ implies $X$ and that $B$ implies $X$ then you can conclude that $A$ or $B$ implies $X$.
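In \texttt{agda}, the coproduct and its induction principle might be sketched as follows (the names \texttt{coprod} and \texttt{ind-coprod} are ours); note that the two computation rules are exactly the two defining clauses.
\begin{quote}
\texttt{-- a sketch; names are ours\\
data coprod (A B : UU lzero) : UU lzero where\\ \indent
inl : A -> coprod A B\\ \indent
inr : B -> coprod A B\\[1ex]
ind-coprod : \{A B : UU lzero\} (P : coprod A B -> UU lzero) ->\\ \indent
((x : A) -> P (inl x)) -> ((y : B) -> P (inr y)) ->\\ \indent
(z : coprod A B) -> P z\\
ind-coprod P f g (inl x) = f x\\
ind-coprod P f g (inr y) = g y}
\end{quote}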
\subsection*{The type of integers}
There are many ways to define the integers in Martin-L\"{o}f type theory, one of which is as follows:
\begin{defn} Define the \textbf{integers} to be the type $\bZ \coloneq \bN + (\1 + \bN)$ which comes equipped with inclusions:
\[ \term{in-pos} \coloneq \inr\circ\inr : \bN \to \bZ \qquad \term{in-neg} \coloneq \inl : \bN \to \bZ\]
and constants
\[ -1_\bZ \coloneq \term{in-neg}(0_\bN) \qquad 0_\bZ \coloneq \inr (\inl(\star)) \qquad 1_{\bZ} \coloneq \term{in-pos}(0_\bN).\]
\end{defn}
Since $\bZ$ is built from inductive types it is then an inductive type given with its own induction principle.
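Reusing the hypothetical \texttt{coprod}, \texttt{nat}, and \texttt{unit} definitions from the sketches above, the integers and their inclusions might look like this:
\begin{quote}
\texttt{-- a sketch; builds on the earlier sketched definitions\\
int : UU lzero\\
int = coprod nat (coprod unit nat)\\[1ex]
in-pos : nat -> int\\
in-pos n = inr (inr n)\\[1ex]
in-neg : nat -> int\\
in-neg n = inl n\\[1ex]
zero-int : int\\
zero-int = inr (inl star)}
\end{quote}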
\subsection*{Dependent pair types}
Of all the inductive types we've introduced, the final one is perhaps the most important.
Recall a \textbf{dependent function} $\lambda x . f(x) : \Pi_{x : A}B(x)$ is like an ordinary function except the output type is allowed to vary with the input term. Similarly, a \textbf{dependent pair} $(a,b) : \Sigma_{x : A}B(x)$ is like an ordinary (ordered) pair except the type of the second term $b : B(a)$ is allowed to vary with the first term $a :A$.
\begin{defn} Consider a type family $x : A \vdash B(x) \univ$. The \textbf{dependent pair type} or $\Sigma$-\textbf{type} $\Sigma_{x :A}B(x)$ is the inductive type equipped with the function
\[ \pair : \Pi_{x : A} \left(B(x) \to \Sigma_{y : A}B(y)\right).\]
The induction principle asserts that for any family of types $p : \Sigma_{x :A} B(x) \vdash P(p) \univ$ there is a function
\[ \ind_\Sigma : \left( \Pi_{x :A} \Pi_{y : B(x)}P(\pair(x,y)) \right) \to \left( \Pi_{z : \Sigma_{x :A}B(x)}P(z) \right)
\]
satisfying the computation rule $\ind_\Sigma (g,\pair(x,y)) \doteq g(x,y)$.
\end{defn}
It is common to write ``$(x,y)$'' as shorthand for ``$\pair(x,y)$.''
\begin{defn} Given a type family $x :A \vdash B(x) \univ$, by the induction principle for $\Sigma$-types we have a function
\[ \pr_1 : \Sigma_{x :A}B(x) \to A\]
defined by $\pr_1(x,y) \coloneq x$ and a dependent function
\[ \pr_2 : \Pi_{ p : \Sigma_{x :A}B(x)} B(\pr_1(p))\]
defined by $\pr_2(x,y) \coloneq y$.
\end{defn}
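A sketch of $\Sigma$-types and the two projections in \texttt{agda}, under the same conventions as before (\texttt{Sigma} and the projection names are ours):
\begin{quote}
\texttt{-- a sketch; pattern matching on pair is Sigma-induction\\
data Sigma (A : UU lzero) (B : A -> UU lzero) : UU lzero where\\ \indent
pair : (x : A) -> B x -> Sigma A B\\[1ex]
pr1 : \{A : UU lzero\} \{B : A -> UU lzero\} -> Sigma A B -> A\\
pr1 (pair x y) = x\\[1ex]
pr2 : \{A : UU lzero\} \{B : A -> UU lzero\} -> (p : Sigma A B) -> B (pr1 p)\\
pr2 (pair x y) = y}
\end{quote}
Note that the type of \texttt{pr2} mentions \texttt{pr1}, so the second projection really is a dependent function.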
When $B$ is a constant type family over $A$, the type $\Sigma_{x : A}B$ is the type of ordinary pairs $(x,y)$ where $x :A $ and $y: B$. Thus \textbf{product types} arise as special cases of $\Sigma$-types.
\begin{defn} Given types $A$ and $B$ their product type is the type $A \times B \coloneq \Sigma_{x :A}B$. It comes with a pairing function
\[ (-,-) : A \to B \to A \times B\] and satisfies an induction principle:
\[ \ind_\times : \left(\Pi_{x : A} \Pi_{y : B} P(x,y)\right) \to \Pi_{z : A \times B} P(z)\]
satisfying the computation rule $\ind_\times (g,(x,y)) \doteq g(x,y)$.
\end{defn}
As a special case, we have
\[ \ind_\times : (A \to B \to C) \to ((A \times B) \to C).\]
This is the inverse of the \textbf{currying function}. Thus $\ind_\times$ and $\ind_\Sigma$ sometimes go by the name \textbf{uncurrying}.
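To illustrate, here is a sketch of the non-dependent special case in \texttt{agda}, with a hypothetical helper \texttt{const-fam} standing in for the constant type family (\texttt{prod} and \texttt{uncurry} are likewise our names):
\begin{quote}
\texttt{-- a sketch; const-fam, prod, and uncurry are our names\\
const-fam : \{A : UU lzero\} -> UU lzero -> (A -> UU lzero)\\
const-fam B x = B\\[1ex]
prod : UU lzero -> UU lzero -> UU lzero\\
prod A B = Sigma A (const-fam B)\\[1ex]
uncurry : \{A B C : UU lzero\} -> (A -> B -> C) -> prod A B -> C\\
uncurry g (pair x y) = g x y}
\end{quote}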
\section*{September 15: Identity types}
We have started to develop an analogy in which types play the role of mathematical propositions and terms in a type play the role of proofs of that proposition. More exactly, we might think of a type as a ``proof-relevant'' proposition, the distinction being that the individual proofs of a given proposition---the terms of the type---are first class mathematical objects, which may be used as ingredients in future proofs, rather than mere witnesses to the truth of the particular proposition.
The various constructions on types that we have discussed are analogous to the logical operations ``and,'' ``or,'' ``implies,'' ``not,'' ``there exists,'' and ``for all.'' We also have the unit type $\1$ to represent the proposition $\top$ and the empty type $\emptyset$ to represent the proposition $\bot$. There is one further ingredient from first-order logic that is missing a counterpart in dependent type theory: the logical operation ``$=$.''
Given a type $A$ and two terms $x,y : A$ it is sensible to ask whether $x = y$. From the point of view of types as proof-relevant propositions, ``$x=y$'' should be the name of a type, in fact a dependent type. The formation rule for \textbf{identity types} says
\[
\inferrule{ \Gamma \vdash A \univ}
{\Gamma, x : A, y : A \vdash x =_A y \univ}
\]
where ``$x=y$'' is commonly used as an abbreviation for ``$x=_Ay$'' when the type of $x$ and $y$ is clear from context. A term $p : x = y$ of an identity type is called an \textbf{identification} of $x$ and $y$ or a \textbf{path} from $x$ to $y$ (more about this second term later). Identifications have a rich structure that follows from a very simple characterization of the identity type due to Per Martin-L\"{o}f: it is the inductive type family freely generated by the reflexivity terms.
\subsection*{The inductive definition of identity types}
We can define identity types as inductive types in either a one-sided or two-sided fashion. The induction rule may be easier to understand from the one-sided point of view, so we present it first.
\begin{defn}[one-sided identity types] Given a type $A$ and a term $a : A$, the \textbf{identity type} of $A$ at $a$ is the inductive family of types $x : A \vdash a =_A x \univ$ with a single constructor $\refl_a : a =_A a$. The induction principle postulates that for any type family $x : A, p : a =_A x \vdash P(x,p) \univ$ there is a function
\[ \pathind_a : P(a, \refl_a) \to \Pi_{x : A} \Pi_{p : a =_A x} P(x,p)\]
satisfying $\pathind_a(q,a, \refl_a) \doteq q$.
\end{defn}
This is a very strong induction principle: it says that to prove a predicate $P(x,p)$ depending on any term $x :A$ and any identification $p : a =_A x$ it suffices to assume $x$ is $a$ and $p$ is $\refl_a$ and prove $P(a,\refl_a)$.
More formally, identity types are defined by the following rules:
\[
\inferrule{ \Gamma \vdash a : A}{ \Gamma, x : A \vdash a =_A x \univ} \qquad
\inferrule{\Gamma \vdash a : A}{\Gamma \vdash \refl_a : a =_A a}\]
\[
\inferrule{\Gamma \vdash a : A \\ \Gamma, x : A, p: a=_A x \vdash P(x,p) \univ}{\Gamma \vdash \pathind_a : P(a,\refl_a) \to \Pi_{x :A}\Pi_{p : a =_A x}P(x,p)} \qquad
\inferrule{\Gamma \vdash a : A \\ \Gamma, x : A, p: a=_A x \vdash P(x,p) \univ \\ \Gamma \vdash q : P(a,\refl_a)}{\Gamma \vdash \pathind_a(q,a, \refl_a) \doteq q : P(a,\refl_a)}
\]
Equally, the identity type can be considered in a two-sided fashion:
\begin{defn}[two-sided identity types]
Given a type $A$, the \textbf{identity type} of $A$ is the inductive family of types $x : A, y :A \vdash x =_A y \univ$ with a single constructor $x : A \vdash \refl_x : x =_A x$. The induction principle postulates that for any type family $x : A, y : A, p : x =_A y \vdash P(x,y,p) \univ$ there is a function
\[ \pathind : \Pi_{a : A} P(a, a, \refl_a) \to \Pi_{x : A} \Pi_{y: A}\Pi_{p : x =_A y} P(x,y,p)\]
satisfying $\pathind(q,a, a,\refl_a) \doteq q$.
\end{defn}
In this form, the identity types are defined by the following rules:
\[
\inferrule{ \Gamma \vdash A \univ}{ \Gamma, x : A, y : A \vdash x =_A y \univ} \qquad
\inferrule{\Gamma \vdash A \univ}{\Gamma, x :A \vdash \refl_x : x =_A x}\]
\[
\inferrule{\Gamma \vdash A \univ \\ \Gamma, x : A, y : A, p: x=_A y \vdash P(x,y,p) \univ}{\Gamma \vdash \pathind : \Pi_{a : A} P(a,a,\refl_a) \to \Pi_{x :A}\Pi_{y :A}\Pi_{p : x =_A y}P(x,y, p)} \quad
\inferrule{\Gamma \vdash A \univ \\ \Gamma, x : A, y : A, p: x=_A y \vdash P(x,y,p) \univ \\ \Gamma \vdash q : \Pi_{a : A} P(a,a,\refl_a)}{\Gamma, a : A \vdash \pathind (q,a,a,\refl_a) \doteq q : P(a,a,\refl_a)}
\]
These presentations are interderivable.
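In \texttt{agda}, the one-sided definition can be sketched as an inductive family with one constructor, and path induction is then literally pattern matching on \texttt{refl} (the names \texttt{Id} and \texttt{ind-Id} are ours):
\begin{quote}
\texttt{-- a sketch; the dot pattern .a is forced by matching refl\\
data Id \{A : UU lzero\} (a : A) : A -> UU lzero where\\ \indent
refl : Id a a\\[1ex]
ind-Id : \{A : UU lzero\} (a : A) (P : (x : A) -> Id a x -> UU lzero) ->\\ \indent
P a refl -> (x : A) (p : Id a x) -> P x p\\
ind-Id a P q .a refl = q}
\end{quote}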
\subsection*{The groupoid structure on types}
Mathematical equality, as traditionally understood, is an equivalence relation: it's reflexive, symmetric, and transitive. But all we've asserted about identity types is that they are inductively generated by the reflexivity terms! As we'll now start to discover, considerable additional structure follows.
\begin{prop}[symmetry] For any type $A$, there is an \textbf{inverse operation}
\[ \inv : \Pi_{x, y :A} x = y \to y =x.\]
\end{prop}
\begin{proof}
We define $\inv$ by path induction. By the introduction rule for function types it suffices to define $\inv(p) : y =x$ for $p : x = y$. Consider the type family $x : A, y : A, p: x = y \vdash P(x,y,p) \coloneq y = x$. By path induction, to inhabit $y=x$ it suffices to assume $y$ is $x$ and $p$ is $\refl_x$, in which case we may define $\inv(\refl_x) \coloneq \refl_x : x=x$. Thus $\inv$ is
\[ \pathind (\lambda x. \refl_x) : \Pi_{x :A} \Pi_{y :A} \Pi_{p : x = y}\, y =x.\]
\end{proof}
\begin{ntn} Write $p^{-1}$ for $\inv(p)$.
\end{ntn}
\begin{prop}[transitivity]
For any type $A$, there is a \textbf{concatenation} operation
\[ \concat : \Pi_{x,y,z :A} x= y \to y=z \to x = z.\]
\end{prop}
\begin{proof}
We define $\concat$ by appealing to the path induction principle for identity types.
By the introduction rule for dependent function types, to define $\concat$ you may assume given $p : x=y$. The task is then to define $\concat (p) : \Pi_{z} y =z \to x = z$. For this, consider the type family $x :A , y : A, p : x = y \vdash P(x,y,p)$ where $P(x,y,p) \coloneq \Pi_{z : A} (y = z) \to (x = z)$. To get a term of this type by applying the function $\pathind$, it suffices to assume $y$ is $x$ and $p$ is $\refl_x$. So we need only define $\concat( \refl_x) : \Pi_{z :A} x =z \to x = z$ and we define this to be the identity function $\id_{x = z}$. Thus the function $\concat$ is
\[ \pathind( \lambda x. \lambda z. \id_{x=z}) : \Pi_{x :A}\Pi_{y :A} \Pi_{p : x = y} \Pi_{z :A}\, y=z \to x =z,\]
which can be regarded as a function in the type $\Pi_{x,y,z :A} x= y \to y=z \to x = z$ by swapping the order of the arguments $p$ and $z$.
\end{proof}
\begin{ntn} Write $p \cdot q$ for $\concat(p,q)$.
\end{ntn}
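In \texttt{agda}, both of these proofs are single pattern-matching clauses on \texttt{refl}, which is \texttt{agda}'s form of path induction. A sketch, continuing the hypothetical \texttt{Id} family above:
\begin{quote}
\texttt{-- a sketch; concat does induction on its first argument\\
inv : \{A : UU lzero\} \{x y : A\} -> Id x y -> Id y x\\
inv refl = refl\\[1ex]
concat : \{A : UU lzero\} \{x y z : A\} -> Id x y -> Id y z -> Id x z\\
concat refl q = q}
\end{quote}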
While the elimination rule for identity types is quite strong, the corresponding computation rule is relatively weak. It's not strong enough to show that $(p \cdot q) \cdot r$ and $p \cdot (q \cdot r)$ are judgmentally equal for any $p : x = y$, $q : y = z$, and $r : z = w$. In fact there are countermodels that show that this is false in general. However, since both $(p \cdot q) \cdot r$ and $p \cdot (q \cdot r)$ are terms of type $x = w$ we can ask whether there is an identification between them and it turns out this is always true.
\begin{prop}[associativity] Given $x,y,z,w :A$ and identifications $p : x = y$, $q : y =z$, and $r : z = w$, there is an associator
\[ \assoc (p,q,r) : (p \cdot q) \cdot r = p \cdot (q \cdot r)\]
\end{prop}
\begin{proof} We define $\assoc(p,q,r)$ by path induction.
Consider the type family $x :A, y : A, p: x = y \vdash \Pi_{z :A} \Pi_{q : y =z} \Pi_{w:A} \Pi_{r : z =w} (p \cdot q) \cdot r = p \cdot (q \cdot r)$. To define a term $\assoc (p,q,r)$ of this type it suffices to assume $y$ is $x$ and $p$ is $\refl_x$ and define
\[ \lambda z. \lambda q. \lambda w. \lambda r. \assoc(\refl_x, q,r) : \Pi_{z :A} \Pi_{q : x =z} \Pi_{w:A} \Pi_{r : z =w} (\refl_x \cdot q) \cdot r = \refl_x \cdot (q \cdot r).\]
By the definition of concatenation, $\refl_x \cdot q \doteq q$ and $\refl_x \cdot (q \cdot r) \doteq q \cdot r$. So we must define
\[ \assoc(\refl_x, q,r) : q \cdot r = q \cdot r\]
and we can take this term to be $\refl_{q \cdot r}$.
\end{proof}
\begin{prop}[units] For any type $A$, there are left and right \textbf{unit laws}
\[ \lambda x. \lambda y. \lambda p. \term{left-unit}(p) : \Pi_{x,y : A} \Pi_{p: x = y} \refl_x \cdot p = p \qquad \lambda x. \lambda y. \lambda p.\term{right-unit}(p) : \Pi_{x , y : A} \Pi_{p : x = y} p \cdot \refl_y = p.\]
\end{prop}
\begin{proof}
We are asked to define dependent functions that take $x, y :A$ and $p : x = y$ and produce terms
\[ \term{left-unit}(p) : \refl_x \cdot p = p \qquad \term{right-unit}(p) : p \cdot \refl_y = p.\]
By path induction, it suffices to assume $y$ is $x$ and $p$ is $\refl_x$, in which case we require terms
\[ \term{left-unit}(\refl_x) : \refl_x \cdot \refl_x = \refl_x \qquad \term{right-unit}(\refl_x) : \refl_x \cdot \refl_x = \refl_x.\] By the definition of concatenation $\refl_x \cdot \refl_x \doteq \refl_x$ so we can take $\refl_{\refl_x}$ as both $\term{left-unit}(\refl_x)$ and $\term{right-unit}(\refl_x)$.
\end{proof}
\begin{prop}[inverses] For any type $A$, there are left and right \textbf{inverse laws}
\[ \lambda x. \lambda y. \lambda p.\term{left-inv}(p) : \Pi_{x,y :A} \Pi_{p : x= y} p^{-1} \cdot p = \refl_y \qquad \lambda x. \lambda y. \lambda p.\term{right-inv}(p) : \Pi_{x,y :A} \Pi_{p : x=y} p \cdot p^{-1} = \refl_x.\]
\end{prop}
\begin{proof}
We are asked to define dependent functions that take $x, y :A$ and $p : x = y$ and produce terms
\[ \term{left-inv}(p) : p^{-1} \cdot p = \refl_y \qquad \term{right-inv}(p) : p \cdot p^{-1} = \refl_x.\]
By path induction, it suffices to assume $y$ is $x$ and $p$ is $\refl_x$, in which case we require terms
\[ \term{left-inv}(\refl_x) : \refl_x^{-1} \cdot \refl_x = \refl_x \qquad \term{right-inv}(\refl_x) : \refl_x \cdot \refl_x^{-1} = \refl_x.\]
By the definitions of concatenation and inverses, again both left-hand and right-hand sides are judgmentally equal so we take $\term{left-inv}(\refl_x)$ and $\term{right-inv}(\refl_x)$ to be $\refl_{\refl_x}$.
\end{proof}
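The same style disposes of the groupoid laws in \texttt{agda}. For instance, here is a sketch of the associator and the right unit law in terms of the \texttt{concat} above; as in the proofs, the left unit law would need no induction at all.
\begin{quote}
\texttt{-- a sketch; after matching p = refl both sides compute\\
assoc : \{A : UU lzero\} \{x y z w : A\} (p : Id x y) (q : Id y z) (r : Id z w) ->\\ \indent
Id (concat (concat p q) r) (concat p (concat q r))\\
assoc refl q r = refl\\[1ex]
right-unit : \{A : UU lzero\} \{x y : A\} (p : Id x y) -> Id (concat p refl) p\\
right-unit refl = refl}
\end{quote}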
\section*{September 20: More identity types}
\subsection*{Types as \texorpdfstring{$\infty$}{infinity}-groupoids}
Martin-L\"{o}f's rules for the identity types date from a 1975 paper ``An Intuitionistic Theory of Types.'' In the following two decades, there was a conjecture that went by the name ``uniqueness of identity proofs" that for any $x,y : A$, $p, q : x =_A y$, the type $p =_{x=_A y} q$ is inhabited, meaning that it's possible to construct an identification between $p$ and $q$. In 1994, Martin Hofmann and Thomas Streicher constructed a model of Martin-L\"{o}f's dependent type theory in the category of groupoids that refutes uniqueness of identity proofs.\footnote{The technical details of what exactly it means to ``construct a model of type theory'' are quite elaborate and would be interesting to explore as a final project.}
In the Hofmann-Streicher model, types $A$ correspond to \emph{groupoids} and terms $x, y : A$ correspond to \emph{objects} in the groupoid. An identification $p : x = y$ corresponds to a(n iso)morphism $p : x \to y$ in the groupoid, while an identification between identifications exists if and only if $p$ and $q$ define the same morphism. Since there are groupoids with multiple distinct morphisms between a fixed pair of objects, we see that it is not always the case that $p=_{x=_A y} q$. Following Hofmann-Streicher, it made sense to start viewing types as more akin to groupoids than to sets. The proofs of symmetry and transitivity for identity types are more accurately described as inverses and concatenation operations in a groupoid. As we've seen, these satisfy various associativity, unit, and inverse laws---up to identification at least---as required by a groupoid.
But that last caveat is important. We've shown that for any type $A$, its identity types $x,y :A \vdash x=_A y \univ$ give it something like the structure of a groupoid. But for each $x,y :A$, $x=_A y$ is also a type, so \emph{its} identity types $p,q: x=_A y \vdash p =_{x=_A y}q \univ$ give $x=_A y$ its own groupoid structure. And the higher identity types, $\alpha, \beta : p =_{x=_A y}q \vdash \alpha= \beta \univ$ give $p =_{x=_A y}q$ its own groupoid structure and so on. So a modern point of view is that the types in Martin-L\"{o}f's dependent type theory should be thought of as $\infty$-\emph{groupoids}.
If $A$ is an $\infty$-groupoid, its terms $x :A$ might be called \textbf{points} and its identifications $p : x =_A y$ might be called \textbf{paths}. This explains the modern name ``path induction'' for the induction principle for identity types. These ideas are at the heart of the homotopical interpretation of type theory, about which more later.
\subsection*{The uniqueness of \texorpdfstring{$\refl$}{refl}}
The definition of the identity types says that the family of types $a =x$ indexed by $x : A$ is inductively generated by the term $\refl_a : a = a$. It does \emph{not} say that the type $a =a$ is inductively generated by $a : A$. In particular, we cannot apply path induction to prove that $p = \refl_a$ for any $p : a = a$ because in this case neither endpoint of the identity type is free.
There is a sense however in which the reflexivity term is unique:
\begin{prop} For any type $A$ and $a : A$, $(a, \refl_a)$ is the unique term of the type $\Sigma_{x : A} a =x$. That is, for any $z : \Sigma_{x :A} a =x$, there is an identification $(a, \refl_a) = z$.
\end{prop}
\begin{proof} We're trying to define a dependent function that takes $z : \Sigma_{x :A} a =x$ and gives a term in the identity type $(a,\refl_a) =_{\Sigma_{x :A} a =x} z$. By $\Sigma$-induction it suffices to assume $z$ is a pair $(x,p)$ where $x :A$ and $p : a =x$ and construct an identification $(a, \refl_a) =_{\Sigma_{x :A} a =x} (x,p)$. So now we're trying to define a dependent function that takes $x :A$ and $p : a =x$ and constructs an identification $(a, \refl_a) =_{\Sigma_{x :A} a =x} (x,p)$. By path induction, it suffices to assume $x$ is $a$ and $p$ is $\refl_a$. But now we can use reflexivity to show that $(a, \refl_a) = (a, \refl_a)$.
\end{proof}
In terminology to be introduced later, this result says that the type $\Sigma_{x : A} a =x$ is \textbf{contractible} with the term $(a, \refl_a)$ serving as its \textbf{center of contraction}.
\subsection*{The action of paths on functions}
The structural rules of type theory guarantee that any function (and indeed any construction in type theory) preserves definitional equality. We now show that in addition every function preserves identifications.
\begin{prop} Let $f \colon A \to B$. There is an operation that defines the \textbf{action on paths} of $f$
\[ \ap_f : \Pi_{x,y :A} (x=y) \to (f(x) = f(y))\]
that satisfies the coherence conditions
\[ \apcoh{id}_A : \Pi_{x,y:A} \Pi_{p : x= y} p = \ap_{\id_A}(p)\]
\[ \apcoh{comp}(f,g) : \Pi_{x,y:A} \Pi_{p: x=y} \ap_g(\ap_f(p))= \ap_{g\circ f}(p).\]
\end{prop}
\begin{proof}
By path induction to define $\ap_f(p) : f(x)=f(y)$ it suffices to assume $y$ is $x$ and $p$ is $\refl_x$. We may then define $\ap_f(\refl_x) \coloneq \refl_{f(x)} : f(x) = f(x)$.
Next to define $\apcoh{id}_A$ it similarly suffices to suppose $y$ is $x$ and $p$ is $\refl_x$. Since $\ap_{\id_A}(\refl_x)\doteq \refl_x$, we may define $\apcoh{id}_A(\refl_x) \coloneq \refl_{\refl_x} : \refl_x = \refl_x$.
Finally, to define $\apcoh{comp}(f,g)$, by path induction we may again assume $y$ is $x$ and $p$ is $\refl_x$. Since both $\ap_g(\ap_f(\refl_x))$ and $\ap_{g \circ f}(\refl_x)$ are defined to be $\refl_{g(f(x))}$ we may define $\apcoh{comp}(f,g)(\refl_x)$ to be $\refl_{\refl_{g(f(x))}}$.
\end{proof}
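A sketch of the basic construction in \texttt{agda}, continuing the conventions above:
\begin{quote}
\texttt{-- a sketch; ap f refl computes to refl, giving the coherences\\
ap : \{A B : UU lzero\} (f : A -> B) \{x y : A\} -> Id x y -> Id (f x) (f y)\\
ap f refl = refl}
\end{quote}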
If the types $A$ and $B$ are thought of as $\infty$-groupoids, then $f \colon A \to B$ can be thought of as a functor of $\infty$-groupoids in a sense hinted at by the following lemma.
\begin{lem} For $f \colon A \to B$ there are identifications
\begin{align*}
\apcoh{refl}(f,x) &: \ap_f (\refl_x) = \refl_{f(x)} \\
\apcoh{inv}(f,p) &: \ap_f (p^{-1}) = \ap_f(p)^{-1} \\
\apcoh{concat}(f,p,q) &: \ap_f(p \cdot q) = \ap_f(p) \cdot \ap_f(q)
\end{align*}
for every $p : x= y$ and $q: y = z$.
\end{lem}
\begin{proof}
For the first coherence, there is a definitional equality $\ap_f (\refl_x) \doteq \refl_{f(x)}$ so we take $\apcoh{refl}(f,x) \coloneq \refl_{\refl_{f(x)}}$.
We define $\apcoh{inv}(f,p)$ by path induction on $p$ by defining $\apcoh{inv}(f,\refl_x) \coloneq \refl_{\refl_{f(x)}}$.
Similarly, we define $\apcoh{concat}(f,p,q)$ by path induction on $p$ (since concat was defined by path induction on $p$) by defining $\apcoh{concat}(f,\refl_x,q)$ to be $\refl_{\ap_f(q)}$.
\end{proof}
\subsection*{Transport}
The term $\ap_f$ defines the action of a non-dependent function $f \colon A \to B$ on paths in $A$. It's natural to ask whether a dependent function $f : \Pi_{z:A}B(z)$ also induces an action on paths. There's a challenge here, though. If $x,y : A$ are terms belonging to the base type, then we can form the type $x=_A y$ to ask whether they are identifiable. But the terms $f(x) : B(x)$ and $f(y) : B(y)$ belong to different types and are not directly comparable. Nevertheless, if there is a path $p : x = y$ identifying $x$ with $y$, intuition suggests there should be some way to compare $f(y)$ to $f(x)$.
To achieve this, we must first construct a different sort of action-on-paths function: the \textbf{transport} function for a type family $x :A \vdash B(x) \univ$, which, given an identification $p : x = y$ in the base type, can be used to transport any term of $B(x)$ to a term of $B(y)$.
\begin{prop} For any type family $x : A \vdash B(x) \univ$, there is a \textbf{transport operation}
\[ \tr_B : \Pi_{x,y :A} (x=y) \to (B(x) \to B(y)).\]
\end{prop}
\begin{proof}
By path induction it suffices to define $\tr_B(\refl_x) \coloneq \id_{B(x)}$.
\end{proof}
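A sketch in \texttt{agda}:
\begin{quote}
\texttt{-- a sketch; tr B refl is the identity function\\
tr : \{A : UU lzero\} (B : A -> UU lzero) \{x y : A\} -> Id x y -> B x -> B y\\
tr B refl b = b}
\end{quote}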
As an application of transport we can now define the action on paths of a dependent function.
\begin{prop} For any dependent function $f : \Pi_{z: A} B(z)$ and identification $p : x=_A y$ there is a path
\[ \apd_f(p) : \tr_B(p, f(x)) =_{B(y)} f(y).\]
\end{prop}
\begin{proof}
The function
\[ \lambda x . \lambda y. \lambda p. \apd_f(p) : \Pi_{x,y:A} \Pi_{p : x=y} \tr_B(p, f(x)) =_{B(y)} f(y)\]
may be defined by path induction on $p$. It suffices to construct a path
\[ \lambda x . \apd_f(\refl_x) : \Pi_{x :A} \tr_B(\refl_x,f(x)) =_{B(x)} f(x).\]
Since $\tr_B(\refl_x, f(x)) \doteq f(x)$ we may define $\apd_f(\refl_x) \coloneq \refl_{f(x)}$.
\end{proof}
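A sketch in \texttt{agda}, using the \texttt{tr} above; after matching \texttt{refl}, the goal reduces to \texttt{Id (f x) (f x)}:
\begin{quote}
\texttt{-- a sketch\\
apd : \{A : UU lzero\} \{B : A -> UU lzero\} (f : (x : A) -> B x) \{x y : A\}\\ \indent
(p : Id x y) -> Id (tr B p (f x)) (f y)\\
apd f refl = refl}
\end{quote}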
\subsection*{The laws of addition on $\bN$}
Recall that we defined the addition of natural numbers in such a way that
\[ m+ 0 \doteq m \qquad m+ \suc(n) \doteq \suc(m+n)\]
by induction on the second variable. With this definition, these are the only definitional equalities. However, it is possible to produce identifications proving the other commutative monoid axioms.
\begin{lem} For any $n : \bN$ there are identifications
\[\term{left-unit-law-add}_\bN(n) : 0 + n = n \qquad \term{right-unit-law-add}_\bN(n) : n+0=n.\]
\end{lem}
\begin{proof}
The second of these can be taken to be $\refl_n$ but the first is more complicated. We define $\term{left-unit-law-add}_\bN(n)$ by induction on $n : \bN$. When $n = 0$, $0+0=0$ holds by reflexivity.
For the inductive step, we may assume an identification $p : 0 + n = n$ and must show $0+\suc(n) = \suc(n)$. By the definition of addition, $0 + \suc(n) \doteq \suc(0+n)$, so it suffices to construct an identification \[ \suc(0+n) = \suc(n),\]
and the action on paths of $\suc : \bN \to \bN$ supplies the term $\ap_{\suc}(p) : \suc(0+n) = \suc(n)$.
\end{proof}
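As a sketch, the entire proof of the left unit law in \texttt{agda} is two clauses, reusing the hypothetical \texttt{nat}, \texttt{add-nat}, \texttt{Id}, and \texttt{ap} from the earlier sketches:
\begin{quote}
\texttt{-- a sketch; the successor clause is ap applied to the IH\\
left-unit-law-add-nat : (n : nat) -> Id (add-nat zero-nat n) n\\
left-unit-law-add-nat zero-nat = refl\\
left-unit-law-add-nat (succ-nat n) = ap succ-nat (left-unit-law-add-nat n)}
\end{quote}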
\begin{prop} For any $m,n : \bN$ there are identifications
\begin{align*} \term{left-successor-law-add}_\bN(m,n) &: \suc(m) + n = \suc(m+n) \\
\term{right-successor-law-add}_\bN(m,n) &: m+ \suc(n) = \suc(m+n)
\end{align*}
\end{prop}
\begin{proof}
Again the second identification holds judgmentally so we define \[\term{right-successor-law-add}_\bN(m,n) \coloneq \refl_{\suc(m+n)}.\] We construct the former using induction on $n : \bN$. The base case $\suc(m)+0 = \suc(m+0)$ holds by $\refl_{\suc(m)}$. For the inductive step we assume we have an identification $p : \suc(m)+n = \suc(m+n)$. Our goal is to show that $\suc(m)+\suc(n) = \suc(m+\suc(n))$. By the action on paths of $\suc : \bN \to \bN$ we obtain a term
\[ \ap_{\suc}(p) : \suc(\suc(m)+n) = \suc(\suc(m+n))\] but here the left-hand side is judgmentally equal to $\suc(m)+\suc(n)$ while the right-hand side is judgmentally equal to $\suc(m+ \suc(n))$.