
Inference: slow for for loop, fast for corresponding while #16122

Closed
timholy opened this issue Apr 29, 2016 · 4 comments
Labels
compiler:inference Type inference compiler:latency Compiler latency

Comments

timholy (Member) commented Apr 29, 2016

Pkg.clone("https://github.com/timholy/ArrayIteration.jl.git")
Pkg.checkout("ArrayIteration", "teh/stored")

using ArrayIteration

A = Int[1 3; 2 4]
B = Array{Int}(2, 2)

julia> @time include("/tmp/slow1.jl")
 23.763574 seconds (31.16 M allocations: 1.095 GB, 2.28% gc time)

where slow1.jl is defined as

for (I, a) in sync(index(B, :, 2), value(A, :, 1))
    B[I] = a
end

In contrast, in a fresh session

julia> @time include("/tmp/slow2.jl")
  0.219750 seconds (130.95 k allocations: 5.498 MB)

where slow2.jl is the manually expanded while-loop version:

iter = sync(index(B, :, 2), value(A, :, 1))
s = start(iter)
while !done(iter, s)
    (I, a), s = next(iter, s)
    B[I] = a
end
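For context: `for` loops in the Julia of that era were lowered to exactly this `start`/`done`/`next` pattern, so the two files are semantically equivalent. Since Julia 0.7 the iteration protocol is `iterate` instead. Below is a sketch of the same manual expansion in current Julia, with `zip` over views standing in for `sync`/`index`/`value` (ArrayIteration.jl is assumed unavailable), and the loop wrapped in a hypothetical function `manual_copy!` to avoid top-level soft-scope pitfalls:

```julia
# Sketch: manual expansion of a for loop via the modern `iterate` protocol.
# `zip` over views is a stand-in for ArrayIteration's sync/index/value.
function manual_copy!(B, A)
    iter = zip(eachindex(view(B, :, 2)), view(A, :, 1))
    next = iterate(iter)              # returns nothing, or (item, state)
    while next !== nothing
        ((I, a), s) = next            # item is the (index, value) pair
        view(B, :, 2)[I] = a
        next = iterate(iter, s)
    end
    return B
end

A = Int[1 3; 2 4]
B = Array{Int}(undef, 2, 2)
manual_copy!(B, A)                    # copies column 1 of A into column 2 of B
```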

julia> versioninfo()
Julia Version 0.5.0-dev+3780
Commit 811fe45* (2016-04-28 09:02 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7 CPU       L 640  @ 2.13GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, westmere)
vtjnash (Member) commented Apr 29, 2016

Are you testing this at the top level? What happens if you put both in a function?
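For reference, the function-wrapped version being suggested would look something like this sketch (`copycol!` is a hypothetical name; Base views stand in for ArrayIteration's `sync`/`index`/`value`, and the array constructor uses current `undef` syntax):

```julia
# Hypothetical function wrapper for the loop from slow1.jl.
function copycol!(B, A)
    for (I, a) in zip(eachindex(view(B, :, 2)), view(A, :, 1))
        view(B, :, 2)[I] = a
    end
    return B
end

A = Int[1 3; 2 4]
B = Array{Int}(undef, 2, 2)
copycol!(B, A)    # first call pays compilation; subsequent calls are fast
```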

timholy (Member, Author) commented Apr 29, 2016

Hmm, then it's fast. (I thought I'd tested that, but evidently not.)

So it may not be an inference thing?

vtjnash closed this as completed May 3, 2016
timholy (Member, Author) commented May 3, 2016

With regards to closing: perhaps it's worth emphasizing that this is a 2×2 matrix, and that roughly 24 seconds is quite a long time to wait for a loop over all of its elements. Since compilation is fast when the loop is in a function, it's also not an inescapable compilation overhead. In other words, this is likely not your run-of-the-mill "avoid globals, put it in a function" problem.

OK to reopen?

vtjnash (Member) commented Mar 12, 2020

If I understand correctly what this was doing, it's fast now:

julia> @time for (I, a) in zip(eachindex(view(B, :, 2)), view(A, :, 1))
           view(B, :, 2)[I] = a
       end
  0.000021 seconds (21 allocations: 672 bytes)
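As a side note, that snippet reconstructs `view(B, :, 2)` on every iteration; hoisting the view out of the loop is slightly more idiomatic. A sketch, reusing the 2×2 matrices from the original report:

```julia
A = Int[1 3; 2 4]
B = Array{Int}(undef, 2, 2)

Bcol = view(B, :, 2)    # hoisted: the view is constructed once
for (I, a) in zip(eachindex(Bcol), view(A, :, 1))
    Bcol[I] = a         # in-place write through the view into B
end
```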

vtjnash closed this as completed Mar 12, 2020