diff --git a/18.06-PS9.ipynb b/18.06-PS9.ipynb new file mode 100644 index 00000000..ef96f393 --- /dev/null +++ b/18.06-PS9.ipynb @@ -0,0 +1,1373 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO: Removing Homework (unregistered)\n", + "INFO: Cloning Homework from https://github.com/shashi/Homework.jl.git\n", + "INFO: Computing changes...\n" + ] + }, + { + "data": { + "text/html": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n", + " \n", + " \n", + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO: No packages to install, update or remove\n", + "WARNING: This version of the GnuTLS library (3.2.11) is deprecated\n", + "and contains known security vulnerabilities. Please upgrade to a\n", + "more recent version.\n" + ] + }, + { + "data": { + "text/html": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "" + ], + "text/plain": [ + "Html(\"\")" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "Pkg.rm(\"Homework\")\n", + "Pkg.clone(\"https://github.com/shashi/Homework.jl.git\")\n", + "using Homework\n", + "\n", + "\n", + "Homework.show_mit_form()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1. Linear Transformations\n", + "\n", + "1a (2 pts) Which transformations with input v=(v$_1$,v$_2$) are linear?

\n", + "\n", + "1. T(v)=(v$_2$,v$_1$)
\n", + "2. T(v)=(v$_1$,v$_1$)
\n", + "3. T(v)=(0,v$_1$)
\n", + "4. T(v)=(0,1)
\n", + "5. T(v)=v$_1$-v$_2$
\n", + "6. T(v)= v$_1$v$_2$\n", + "\n", + "Format: If 1,2,and 6 are linear write [1,2,6] etc.\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "1a" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1b (1 pt) Which transformations satisfy T(v+w)=T(v)+T(w). The input is v=(v$_1$,v$_2$,v$_3$).

\n", + "\n", + "1. T(v)=v/$\\|v\\|$
\n", + "2. T(v)=v$_1$+v$_2$+v$_3$
\n", + "3. T(v)=(v$_1$,2v$_2$,3v$_3$)
\n", + "4. T(v) = largest component of v\n", + "\n", + "Format: If 1 and 4 work write [1,4] etc." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "1b" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1c (1 pt.) Which transformations satisfy T(cv)=cT(v). The input is v=(v$_1$,v$_2$,v$_3$)\n", + "\n", + "\n", + "1. T(v)=v/$\\|v\\|$
\n", + "2. T(v)=v$_1$+v$_2$+v$_3$
\n", + "3. T(v)=(v$_1$,2v$_2$,3v$_3$)
\n", + "4. T(v) = largest component of v\n", + "\n", + "Format: If 1 and 4 work write [1,4] etc." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "1c" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2. Transpose as a Linear Transformation\n", + "\n", + "(2 pts.) The transformation T that transposes every matrix is definitely linear. Which of these extra properties are true? \n", + "\n", + "1. T$^2$ = the identity transformation.
\n", + "2. The kernel of T is the zero matrix.
\n", + "2. Every matrix is in the range of T.
\n", + "4. T(M) = −M is impossible\n", + "\n", + "Format: If 1 and 4 are true write [1,4] etc.\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "2" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3. Derivative as a Linear Transformation\n", + "\n", + "(2 pts.) The transformation S takes the second derivative. Let 1,x,x$^2$,x$^3$ be an ordered basis for degree 3 polynomials in x. Find the 4 by 4 matrix B for S with respect to this ordered basis.\n", + "\n", + "Format: [[1 2 3 4],[1 2 3 4],[1 2 3 4],[1 2 3 4]]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 5, + "question": "3" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 4. Fibonacci Matrix SVD\n", + "\n", + "Let the SVD of $A=\\left[\\begin{matrix}1 & 1\\\\1 & 0\\end{matrix}\\right]$\n", + "be $A=U\\Sigma V^T$ where the first row of $U$ is positive.\n", + "\n", + "Enter U, $\\Sigma$ and $V$ numerically, we will check a few decimal places.\n", + "\n", + "(If you want to use Julia, try -svd(A)[1], diagm(svd(A)[2]), -svd(A)[3] to\n", + "match the format of MITx)\n", + "\n", + "4a.(.5 pts) U=" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 1, + "question": "4a" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "4b.(1 pt) $\\Sigma$=" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 1, + "question": "4b" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "4c. (.5 pts) V=" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 1, + "question": "4c" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5. Constructing a matrix of rank one\n", + "\n", + "5a (1 pt.) What is the matrix A with rank one that has Av=12u for v=$\\frac{1}{2}$(1,1,1,1) and u=$\\frac{1}{3}$(2,2,1)?\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "5a" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "5b. (1 pt.) What is the (non-zero) singular value of $A$?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "5b" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 6. Singular Value Decomposition of a Symmetric Matrix\n", + "\n", + "6a. (2 pts.)\n", + "Suppose A is a 2 by 2 symmetric matrix with unit eigenvectors u$_1$ and u$_2$ and corresponding eigenvalues λ$_1$=3 and λ$_2$=−2. What are possible singular value decompositions for A?\n", + "\n", + "1. U=(u$_1$,u$_2$), $\\Sigma=\\left[\\begin{matrix}3 & 0\\\\0 & -2\\end{matrix}\\right]$, V=(u$_1$,u$_2$)
\n", + "2. U=(u$_1$,-u$_2$), $\\Sigma=\\left[\\begin{matrix}3 & 0\\\\0 & 2\\end{matrix}\\right]$, V=(u$_1$,u$_2$)
\n", + "3. U=(u$_1$,u$_2$), $\\Sigma=\\left[\\begin{matrix}3 & 0\\\\0 & 2\\end{matrix}\\right]$, V=(u$_1$,u$_2$)
\n", + "4. U=(u$_1$,u$_2$), $\\Sigma=\\left[\\begin{matrix}3 & 0\\\\0 & 2\\end{matrix}\\right]$, V=(u$_1$,-u$_2$)
\n", + "\n", + "Format: If 1 and 4 work write [1,4] etc." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 2, + "question": "6a" + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true + }, + "source": [ + "## 7. Linear Fitting of Data\n", + "\n", + "\n", + "Suppose you have the following 3 dimensional points:\n", + "\n", + "{⟨0,2,1⟩,⟨0,4,3⟩,⟨1,4,5⟩,⟨1,8,6⟩,⟨1,8,10⟩,⟨4,8,14⟩,⟨5,9,13⟩}

\n", + "We'd like to find the best fit plane that will fit these data points. There are many ways that we can attempt this.\n", + "\n", + "We can use least squares approximation\n", + "\n", + "$\\begin{bmatrix} x_1 & y_1 & 1 \\\\ x_2 & y_2 & 1\\\\ x_3 & y_3 & 1 \\\\ \\vdots & \\vdots & \\vdots \\end{bmatrix} \\begin{bmatrix} A\\\\ B \\\\ C \\end{bmatrix} = \\begin{bmatrix} z_1 \\\\ z_2 \\\\ z_3 \\\\ \\vdots \\end{bmatrix}\n", + "$\n", + "\n", + "to find the best fit plane of the form z=Ax+By+C.\n", + "However, you might ask yourself why is the z-coordinate on the right, and x and y on the left?\n", + "Why should z be different? This strategy \n", + "assumes that the x and y coordinates are correct, and the error is all in the z-direction, perpendicular to the xy-plane.\n", + "\n", + "We could instead assume the error is along the x or the y direction and use least square approximation to find the best fit plane of the form x=Ay+Bz+C or y=Ax+Bz+C. However, each of these methods makes a pretty strong assumption about in which direction the error or noise in the data occurs.\n", + "\n", + "Another method is to make an assumption that the data is mostly correct, and lies on some plane, and that the error is small in comparison. A priori, we make no assumption about the direction of this error. This method is called principal component analysis and applies SVD. Let's see how.\n", + "\n", + "PRINCIPAL COMPONENT ANALYSIS\n", + "\n", + "We will explain this method in the context of the problem we are using. Let A be the matrix whose rows are the data of 3 dimensional points above. However, a more general method is described below. First, find the mean data point μ=[μ$_x$ μ$_y$ μ$_z$]. We use this mean to create a matrix M with zero mean by taking the rows of A and subtracting μ from each row.\n", + "\n", + "Once we find the plane P that the data in M lie on, the data from A lie on P+μ.\n", + "\n", + "Question: Which plane do the data in M come from? The answer is to use SVD. If we look at svd(M), we get M=USV$^T$. S has 3 nonzero singular values if our data is not already on a plane. However, one singular value will be smaller than the other two. The matrix V=[v$_1$ v$_2$ v$_3$] consists of orthogonal vectors. The conjecture is that the data come from the plane spanned by v$_1$ and v$_2$, and that the error occurs in the direction of v$_3$. Because v$_3$ corresponds to the smallest singular value, this minimizes the magnitude of the error in this data.\n", + "\n", + "GENERAL PRINCIPAL COMPONENT ANALYSIS\n", + "\n", + "Suppose your data is a collection of m-dimensional vectors: D=[d$_1$, …, $d_n$]. But we suppose that this data should fit some k-dimensional subspace. It doesn't because there is error, noise that is perpendicular to this k-dimensional subspace. The question posed is: what subspace did this data most likely come from? And what dimension is that subspace?\n", + "\n", + "Method.\n", + "\n", + "* Find the expected value of your data μ=$\\sum_{i=1}^n d_i/n$.\n", + "\n", + "* Create a new, zero mean matrix M whose ith row is d$_i$−μ.\n", + "\n", + "* Find svd(M)=USV$^T$.\n", + "\n", + "* Identify k such that σ$_k$>>σ$_{k+1}$.\n", + "\n", + "* Conjecture that data came from the span of μ and the first k columns of V. These vectors are called the principal components." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "7a) (6 pts)\n", + "\n", + "\n", + "This problem is exercise 14.4 in \n", + "\n", + "“Linear Algebra and Probability for Computer Science Applications\", by Ernest Davis.\n", + "\n", + "Consider the following set of three dimensional points we introduced earlier.\n", + "\n", + "{⟨0,2,1⟩,⟨0,4,3⟩,⟨1,4,5⟩,⟨1,8,6⟩,⟨1,8,10⟩,⟨4,8,14⟩,⟨5,9,13⟩}\n", + "Find the least squares estimate for z taken as a linear combination of x and y; e.g. z=ax+by+c.\n", + "\n", + "Define a matrix A whose rows are the data above:" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "alert": "info", + "collapsed": false + }, + "outputs": [ + { + "data": { + "text/plain": [ + "3-element Array{Int64,1}:\n", + " 1\n", + " 3\n", + " 5" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Complete A (currently with three points) to the seven data points (not graded) \n", + "A= [ 0 2 1\n", + " 0 4 3\n", + " 1 4 5 ]\n", + "\n", + "\n", + "# Also obtain A_ls (least squares)\n", + "A_ls = [A[:,1:2] ones(size(A,1))]\n", + "z = A[:,3]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let zz be the data predicted by the least squares approximation. (You can use backslash: \"\\\" )
\n", + "Compute norm(zz-z), the norm of the difference between zz and the actual data for the z-coordinates " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# Fill in the blanks for two points\n", + "zz = ?? #(Hint: Use A_ls, z, *, and \\ )\n", + "norm(z-zz)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 6, + "precision": 3, + "question": "7a" + }, + "outputs": [], + "source": [ + "A_ls = # Use 2:3 this time\n", + "x=\n", + "xx = # Use x this time\n", + "norm(x-xx)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "7b) ( 6 pts.)\n", + "\n", + "\n", + "Take the same data from before.\n", + "\n", + "{⟨0,2,1⟩,⟨0,4,3⟩,⟨1,4,5⟩,⟨1,8,6⟩,⟨1,8,10⟩,⟨4,8,14⟩,⟨5,9,13⟩}
\n", + "This time, find the least squares estimate for x as a linear combination of y and z; e.g. x = A + By + Cz. Do you think the error will be larger or smaller than in part (a)? Compute norm(x-xx)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "data": { + "text/plain": [ + "2.097079044348641" + ] + }, + "execution_count": 45, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "A_ls = # Use 2:3 this time\n", + "x=\n", + "xx = # Use x this time\n", + "norm(x-xx)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 6, + "precision": 3, + "question": "7b" + }, + "outputs": [], + "source": [ + "# We can't grade you but try to understand how this works\n", + "μ = mean(A,1)\n", + "M = broadcast(-,A,μ) # Make mean 0\n", + "U,S,V = svd(M)\n", + "best_fit = broadcast(+, U[:,1:2]*diagm(S[1:2])*V[:,1:2]', μ) # Make rank 2 and add back mean" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "7c) (6 pts.)\n", + "\n", + "Looking at the same data one last time.\n", + "\n", + "{⟨0,2,1⟩,⟨0,4,3⟩,⟨1,4,5⟩,⟨1,8,6⟩,⟨1,8,10⟩,⟨4,8,14⟩,⟨5,9,13⟩}
\n", + "Find the best-fit plane from a principal component analysis. Use U,S,V = svd(A) to find the 3 singular value decomposition of A. (Julia stores the diagonal of S as a vector which is less wasteful than storing a whole matrix)" + ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "data": { + "text/plain": [ + "7x3 Array{Float64,2}:\n", + " -0.14732 1.94221 1.08145\n", + " 0.0728165 4.02856 2.95974\n", + " 1.1453 4.05699 4.91967\n", + " 0.449134 7.78393 6.30457\n", + " 1.96438 8.37827 9.4668 \n", + " 4.4242 8.16639 13.7655 \n", + " 4.0915 8.64365 13.5023 " + ] + }, + "execution_count": 47, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# We can't grade you but try to understand how this works\n", + "μ = mean(A,1)\n", + "M = broadcast(-,A,μ) # Make mean 0\n", + "U,S,V = svd(M)\n", + "best_fit = broadcast(+, U[:,1:2]*diagm(S[1:2])*V[:,1:2]', μ) # Make rank 2 and add back mean" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "max_attempts": 1000, + "max_score": 6, + "precision": 3, + "question": "7c" + }, + "outputs": [], + "source": [ + "# For six points create a vector of length three which\n", + "# has norm(best_fit[:,j]-A[:,j]) for j=1:3\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 8. COMPRESSION ALGORITHMS\n", + "\n", + "(10 pts.) The following is a classical example of an application of svd, although it is not typically used for image compression .\n", + "\n", + "A compression algorithm Φ takes in data D and constructs a representation E=Φ(D) such that:\n", + "\n", + "The computer memory required to record E is less than that required by natural encoding of D.\n", + "\n", + "There is a decompression algorithm Ψ to reconstruct D from E. If Ψ(E)=D, the algorithms Φ is called lossless compression. If Ψ(E) is approximately D, then Φ is called lossy compression.\n", + "\n", + "The singular value decomposition gives lossy compression algorithms.\n", + "\n", + "Suppose image data is stored in an m by n matrix M, let USV$^T$=svd(M). Choose k<\"score\",\"Incorrect attempts\"=>\"attempts\"}),None[],None[])" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
<table><tr><th></th><th>Names</th><th>total</th><th>1a</th><th>1b</th><th>1c</th><th>2</th><th>3</th><th>4a</th><th>4b</th><th>4c</th><th>5a</th><th>5b</th><th>6a</th><th>7a</th><th>7b</th><th>7c</th></tr>
<tr><th>1</th><td>MAX</td><td>40.0</td><td>2.0</td><td>2.0</td><td>2.0</td><td>2.0</td><td>5.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>2.0</td><td>2.0</td><td>2.0</td><td>6.0</td><td>6.0</td><td>6.0</td></tr></table>
" + ], + "text/plain": [ + "1x16 DataFrame\n", + "| Row | Names | total | 1a |\n", + "|-----|-------|-----------------------|------------------------|\n", + "| 1 | \"MAX\" | Colored(\"black\",40.0) | Colored(\"black\",\"2.0\") |\n", + "\n", + "| Row | 1b | 1c |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"2.0\") | Colored(\"black\",\"2.0\") |\n", + "\n", + "| Row | 2 | 3 |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"2.0\") | Colored(\"black\",\"5.0\") |\n", + "\n", + "| Row | 4a | 4b |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"1.0\") | Colored(\"black\",\"1.0\") |\n", + "\n", + "| Row | 4c | 5a |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"1.0\") | Colored(\"black\",\"2.0\") |\n", + "\n", + "| Row | 5b | 6a |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"2.0\") | Colored(\"black\",\"2.0\") |\n", + "\n", + "| Row | 7a | 7b |\n", + "|-----|------------------------|------------------------|\n", + "| 1 | Colored(\"black\",\"6.0\") | Colored(\"black\",\"6.0\") |\n", + "\n", + "| Row | 7c |\n", + "|-----|------------------------|\n", + "| 1 | Colored(\"black\",\"6.0\") |" + ] + }, + "execution_count": 27, + "metadata": { + "comm_id": "45043668-2592-4b40-825f-01c049efa7fc", + "reactive": true + }, + "output_type": "execute_result" + } + ], + "source": [ + "Homework.progress()" + ] + } + ], + "metadata": { + "homework": { + "admins": [ + "mit.edelman@gmail.com" + ], + "course": "MIT-18.06-Spring-2015", + "mode": "answering", + "problemset": "pset9" + }, + "kernelspec": { + "display_name": "Julia 0.3.12", + "language": "julia", + "name": "julia-0.3" + }, + "language_info": { + "file_extension": ".jl", + "mimetype": "application/julia", + "name": "julia", + "version": "0.3.12" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/README.md b/README.md index ec91b7cf..8e79c8d1 100644 --- a/README.md +++ b/README.md @@ -1,17 +1,17 @@ Students in 18.06 have the option of using through the [juliabox.org](https://juliabox.org) website. For those who don't read instructions:
-`download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS8.ipynb", "18.06-PS8.ipynb" )` +`download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS9.ipynb", "18.06-PS9.ipynb" )` -## Instructions for Problem Set 8 (and so on) +## Instructions for Problem Set 9 (and so on) 0. The Julia option (only!) may be completed by midnight Friday night for full credit (12:01AM Saturday to be precise) 1. Login to [juliabox.org](https://juliabox.org) with your google account.
-2. Go back into an existing (or a new notebook) and copy and paste the next line:
`download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS8.ipynb", "18.06-PS8.ipynb" )` +2. Go back into an existing (or a new notebook) and copy and paste the next line:
`download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS9.ipynb", "18.06-PS9.ipynb" )`
in a new cell and execute -4. Go back to the notebooks tab (or hit the IJ icon), refresh, and enjoy pset 8 +4. Go back to the notebooks tab (or hit the IJ icon), refresh, and enjoy pset 9 3. If you filled in your Gmail and MIT addresses once this semester, it is not necessary to do so again. 4. If you are trying Julia for the first time, open up a new notebook (any kernel is fine) and follow the instructions on line 3
3. The MITx and Julia problems are nearly identical, though there are some minor changes due to format and language. We hope the Julia set is more fun. 4. Ask Julia questions through Piazza and very likely Professor Edelman will answer them quickly. 6. Don't worry if something technically goes wrong. We can always do things manually; just send a friendly note through Piazza or to edelman at mit.edu. -7. If you need to re-download the notebook to get some fixes you can run `download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS8.ipynb", "18.06-PS8_a.ipynb" )` (notice the changed name as the second argument to `download`. Now you can open the new notebook and you *only need to answer the problems you haven't already or had trouble with*. Your score from the previous notebook will be saved. +7. If you need to re-download the notebook to get some fixes you can run `download("https://raw.githubusercontent.com/alanedelman/18.06_Spring_2015/master/18.06-PS9.ipynb", "18.06-PS9_a.ipynb" )` (notice the changed name as the second argument to `download`). Now you can open the new notebook and you *only need to answer the problems you haven't already answered or had trouble with*. Your score from the previous notebook will be saved. 8. Notice that the files on the GitHub page are timestamped. If you are having trouble with a problem for technical reasons, check for updates. 9. If you see a dead kernel, please follow the instructions on https://piazza.com/class/iebnrdsl7th4h7?cid=130