%%%% 26.1: Weak AI: Can Machines Act Intelligently? (3 exercises, 1 labelled)
%%%% ========================================================================
\begin{exercise}[disability-exercise]%
Go through Turing's\nindex{Turing, A.} list of alleged
``disabilities''\index{disabilities} of machines, identifying which
have been achieved, which are achievable in principle by
a program, and which are still problematic because they require
conscious mental states.
\end{exercise}
% id=26.0 section=26.1.1
\begin{uexercise}
Find and analyze an account in the popular media of one or
more of the arguments to the effect that AI is impossible.
\end{uexercise}
% id=26.3 section=26.1
\begin{iexercise}
Attempt to write definitions of the terms ``intelligence,'' ``thinking,'' and
``consciousness.'' Suggest some possible objections to your definitions.
\end{iexercise}
% id=26.5 section=26.1
%%%% 26.2: Strong AI: Can Machines Really Think? (4 exercises, 1 labelled)
%%%% =====================================================================
\begin{iexercise}
Does a refutation of the Chinese\index{Chinese Room} room argument
necessarily prove that appropriately programmed computers have mental
states? Does an acceptance of the argument necessarily mean that
computers cannot have mental states?
\end{iexercise}
% id=26.1 section=26.2.3
\begin{exercise}[brain-prosthesis-exercise]%
In the brain\index{brain!replacement} replacement argument, it is
important to be able to restore the subject's brain to normal, such
that its external behavior is as it would have been if the operation
had not taken place. Can the skeptic reasonably object that this would
require updating those neurophysiological properties of the neurons
relating to conscious experience, as distinct from those involved in
the functional behavior of the neurons?
\end{exercise}
% id=26.2 section=26.2.2
\begin{uexercise}
Suppose that a Prolog program containing many clauses about the rules of British citizenship
is compiled and run on an ordinary computer. Analyze
the ``brain states'' of the computer under wide and narrow content.
\end{uexercise}
% id=26.6 section=26.2.1
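% For concreteness, the clauses mentioned in the exercise above might look
% something like the following minimal sketch, loosely in the spirit of the
% well-known logic-programming formalization of the British Nationality Act;
% the predicate names and facts here are purely illustrative, not part of any
% actual program.
% \begin{verbatim}
% % Hypothetical fragment of a British-citizenship knowledge base.
% british_citizen(X) :-
%     born_in_uk(X),
%     parent_of(P, X),
%     settled_in_uk(P).
% born_in_uk(anne).
% parent_of(bob, anne).
% settled_in_uk(bob).
% % Example query: ?- british_citizen(anne).   % succeeds
% \end{verbatim}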
\begin{exercise}
Alan Perlis \citeyear{Perlis:1982} wrote, ``A year spent in artificial intelligence is enough
to make one believe in God''. He also wrote, in a letter to Philip Davis, that one of
the central dreams of computer science is that ``through the performance
of computers and their programs we will remove all doubt that there is only
a chemical distinction between the living and nonliving world.''
To what extent does the progress made so far in artificial intelligence
shed light on these issues? Suppose that at some future date, the AI endeavor
has been completely successful; that is, we have built intelligent agents
capable of carrying out any human cognitive task at human levels of ability.
To what extent would that shed light on these issues?
\end{exercise}
% id=26.7 section=26.2
%%%% 26.3: The Ethics and Risks of Developing Artificial Intelligence (5 exercises, 0 labelled)
%%%% ==========================================================================================
\begin{exercise}
Compare the social impact of artificial intelligence in the last fifty years
with the social impact of the introduction of electric appliances and the
internal combustion engine in the fifty years between 1890 and 1940.
\end{exercise}
% id=26.4 section=26.3
\begin{exercise}
I. J. Good claims that intelligence is the most important quality, and that
building ultraintelligent machines will change everything. A sentient cheetah
counters, ``Actually, speed is more important; if we could build ultrafast machines,
that would change everything,'' and a sentient elephant claims, ``You're both wrong;
what we need is ultrastrong machines.'' What do you think of these arguments?
\end{exercise}
% id=26.8 section=26.3
\begin{exercise}
Analyze the potential threats from AI technology to society. What threats
are most serious, and how might they be combated? How do they compare to
the potential benefits?
\end{exercise}
% id=26.9 section=26.3
\begin{exercise}
How do the potential threats from AI technology compare with those from other computer
science technologies, and with bio-, nano-, and nuclear technologies?
\end{exercise}
% id=26.10 section=26.3
\begin{exercise}
Some critics object that AI is impossible, while others object that
it is {\em too} possible and that ultraintelligent machines pose a threat.
Which of these objections do you think is more plausible? Would it be a
contradiction for someone to hold both positions?
\end{exercise}
% id=26.11 section=26.3
% \begin{exercise}
% Suppose we take Searle's Chinese room
% paper, and switch the words ``room'' and ``human'' (and
% related terms) throughout, so that the paper argues that humans
% are not conscious beings, since they are composed of parts that
% have no understanding. If you believe Searle's paper is valid, would
% you also have to believe the altered paper? If you believe humans are
% conscious, does this refute the altered paper? The original paper?
% What might Searle's \nindex{Searle, J.~R.} response be?
% \end{exercise}
% \begin{exercise}
% Under the correspondence theory, what kinds of propositions can be represented
% by a logical agent? A reflex (condition--action) agent?
% \end{exercise}