I think the indexing-into-a-`Float64` behavior exists so that Numbers can be used like a 0-tensor when necessary, e.g. for linear algebra. See JuliaLang/julia#1871.
It works because of https://github.com/JuliaLang/julia/blob/master/base/number.jl#L94, and it might be slower because it has to check that the index is 1, though I would hope the compiler optimizes that check away. For what it's worth, it's actually a couple of ms slower *without* the indexing on my machine, so the difference is probably just `@btime` noise. I agree the indexing isn't necessary in this case anyway.
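The scalar-as-0-tensor behavior described above is easy to check at the REPL. This is a minimal sketch (nothing here is specific to the lecture code); in recent Julia versions a `Number` reports 0-dimensional shape and accepts indexing as long as the index is 1:

```julia
x = 2.5

# A Float64 looks like a 0-dimensional container:
size(x)    # ()
length(x)  # 1
ndims(x)   # 0

# Indexing works, but only with index 1 (or no index at all,
# like a 0-dim array). This is what makes `val[1]` in the notes "work":
x[]        # 2.5
x[1]       # 2.5

# Any other index throws:
# x[2]     # BoundsError
```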
Yes, oops: we probably shouldn't index into `val`. It's accidentally relying on the 0-tensor behavior. I think I correct it in the video, but I forgot to fix the notes.
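For reference, a sketch of what the corrected kernel looks like. The function name and loop body here are illustrative, not the exact code in `optimizing.jmd`:

```julia
# Illustrative only: the real function lives in lecture2/optimizing.jmd.
function inner!(C, A, B)
    for j in axes(A, 2), i in axes(A, 1)
        val = A[i, j] + B[i, j]  # val is a plain Float64
        C[i, j] = val            # corrected: use the scalar directly, no val[1]
    end
    return C
end

A = rand(4, 4); B = rand(4, 4); C = similar(A)
inner!(C, A, B)
C == A .+ B   # true
```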
In this line in Lecture 2, you calculate `val = A[i,j] + B[i,j]`, which should come out as a `Float64`. In the next line, you index into `val`, the `Float64`, which somehow works without erroring: https://github.com/mitmath/18337/blob/cdd7b2078048d83ff1180f7c8832ff2efb3ad058/lecture2/optimizing.jmd#L157

The function compiles and runs correctly; however, I don't know whether the indexing is necessary or just left over from a previous function. In addition, at least on my machine, `@btime` is faster if you take the indexing out.
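The timing claim can be checked with a benchmark along these lines. This is a hedged sketch: the function names and array sizes are illustrative, not taken from the lecture notes:

```julia
using BenchmarkTools

# Variant with the scalar indexing from the notes
# (val[1] also bounds-checks that the index is 1):
function sum_indexed!(C, A, B)
    for j in axes(A, 2), i in axes(A, 1)
        val = A[i, j] + B[i, j]
        C[i, j] = val[1]
    end
    return C
end

# Variant without the indexing:
function sum_plain!(C, A, B)
    for j in axes(A, 2), i in axes(A, 1)
        C[i, j] = A[i, j] + B[i, j]
    end
    return C
end

A = rand(100, 100); B = rand(100, 100); C = similar(A)

@btime sum_indexed!($C, $A, $B)
@btime sum_plain!($C, $A, $B)
```

Both variants compute the same result, and small gaps between the two timings are typically within `@btime` noise, so results may go either way across machines.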