Parallelize Pkg.precompile #2018

Merged (15 commits, Sep 15, 2020)
Changes from 6 commits
34 changes: 31 additions & 3 deletions src/API.jl
@@ -894,15 +894,33 @@ end

precompile() = precompile(Context())
function precompile(ctx::Context)

is_stdlib_from_name(name::String) = name in values(stdlibs())

printpkgstyle(ctx, :Precompiling, "project...")

num_tasks = parse(Int, get(ENV, "JULIA_NUM_PRECOMPILE_TASKS", string(Sys.CPU_THREADS + 1)))
Member:

Seems kinda excessive to introduce an env variable for this. It's so specific.

Member:

I'm not sure how else to gate this. Suggestions? There are some concerns that with a large core count this could accidentally OOM.

@KristofferC (Sep 13, 2020):

How much memory does each worker use, approximately? Isn't this the case for every parallel workload that uses memory? Does this scale up to very high core counts? Perhaps just setting an upper cap is OK.

I guess we should look at nthreads but everyone runs with that equal to 1.
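One way to combine the env variable with a hard upper cap, as floated above, might look like the following sketch (not the PR's code; `default_num_tasks` and the cap value of 8 are arbitrary illustrative choices):

```julia
# Sketch: respect JULIA_NUM_PRECOMPILE_TASKS if set, but never exceed a
# hard cap, bounding worst-case memory use on high-core-count machines.
# The cap of 8 is an arbitrary illustrative value, not the PR's behavior.
function default_num_tasks(; cap::Int = 8)
    requested = parse(Int, get(ENV, "JULIA_NUM_PRECOMPILE_TASKS",
                               string(Sys.CPU_THREADS + 1)))
    return min(requested, cap)
end
```

This keeps the env knob for tuning while addressing the OOM concern on machines with very many cores.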

Member:

> Isn't this the case for every parallel workload that uses memory?

Yes, but most other workloads allow tuning (e.g. via -t).

> I guess we should look at nthreads but everyone runs with that equal to 1.

There's that, and also Lyndon's comment above that this is more like a multiprocessing thing than a multithreading thing. I also agree that I shouldn't have to limit my computation's thread count to limit the precompilation, and vice versa.

Member:

I'd rather have it as a normal argument to the precompile function then. This is exactly what we already have to limit parallelism in the asynchronous package downloader.

Looking at it, funnily enough we do have an env variable for the package downloader but that seems like it was added as a workaround for something:

Pkg.jl/src/Types.jl

Lines 329 to 331 in ede7b07

# NOTE: The JULIA_PKG_CONCURRENCY environment variable is likely to be removed in
# the future. It currently stands as an unofficial workaround for issue #795.
num_concurrent_downloads::Int = haskey(ENV, "JULIA_PKG_CONCURRENCY") ? parse(Int, ENV["JULIA_PKG_CONCURRENCY"]) : 8

@KristofferC (Sep 14, 2020):

Although if we at some point want to run this automatically when a package is updated, there is no chance to give this argument.

Perhaps there should be a .julia/config/PkgConfig.toml where things like this could be set?

Member:

Anyway, let's go with this for now. Can always tweak it later.

parallel_limiter = Channel{Bool}(num_tasks)

man = Pkg.Types.read_manifest(ctx.env.manifest_file)
pkgids = [Base.PkgId(first(dep), last(dep).name) for dep in man if !Pkg.Operations.is_stdlib(first(dep))]
Member:

I know this filtering was here before, but why is it necessary to filter out stdlibs?

(tmp.9L2rmMdEXO) pkg> st
Status `/tmp/tmp.9L2rmMdEXO/Project.toml`
  [37e2e46d] LinearAlgebra

(tmp.9L2rmMdEXO) pkg> precompile
Precompiling project...

julia> using LinearAlgebra
[ Info: Precompiling LinearAlgebra [37e2e46d-f89d-539d-b4ee-838fcccc9c8e]

Member Author:

Aren't stdlibs always going to be precompiled already? And if you're dev-ing them, they'd need to have their uuid removed, so they wouldn't identify as stdlibs in that check?

Member:

Right, forgot to say that this is when you compile Julia without them in the sysimg. Perhaps we can instead filter based on if the package is already loaded? That should work for both regular packages and stdlibs. If it is a stdlib that is in the sysimg it doesn't need to precompile, and if it is a regular package that is already loaded in the session it is probably just precompiled from the using?

Member Author:

But what if they're loaded, and in need of recompiling? Perhaps the filter just isn't needed at all?

Member:

Yeah, I am not sure what happens if you try to precompile stdlibs that are loaded, though. Since no precompile files exist, will that spend time on precompiling them anyway? At least we can add the filter I suggested to the stdlibs.

Member Author:

#2021 updated with this now (the PkgId version)

Member:

I thought the stdlib check was for some kind of optimization (launching julia takes a few hundred ms even if you don't do anything). So, I think the correct predicate here is "is it in the sysimage?" rather than "is it a stdlib?", since non-stdlib packages can be in the sysimage and there is no point in calling compilecache for them. This also covers the exotic situation where some stdlibs are not in the sysimage.

I think is_stdlib_and_loaded is better than nothing. But I feel it's a bit of a half-way solution if my assumption (that the stdlib check was an optimization) is correct.

Member:

That's true, I didn't think about regular packages in the sysimg. But perhaps #2018 (comment) is a good enough approximation of that? It seems pretty strange to (i) load a dependency, (ii) update its version, (iii) pkg> precompile, (iv) restart Julia, and expect everything to be precompiled?

Member Author:

Although #2021 is looking good, I do like the properness of in_sysimage. It explains exactly why we'd always want to skip. I'll prepare a PR.

Member:

> It seems pretty strange to (i) load a dependency, (ii) update its version, (iii) pkg> precompile, (iv) restart Julia and expect everything to be precompiled?

@fredrikekre Hmm... That was my expectation, actually. I generally expect pkg> $cmd and shell> jlpkg $cmd to be identical (when a project is not activated). Anyway, what do you think about #2021 + JuliaLang/julia#37652? I think in_sysimage is simple enough and nice to have.

pkg_dep_lists = [collect(keys(last(dep).deps)) for dep in man if !Pkg.Operations.is_stdlib(first(dep))]
filter!.(!is_stdlib_from_name, pkg_dep_lists)

pkgids = [Base.PkgId(uuid, name) for (name, uuid) in ctx.env.project.deps if !is_stdlib(uuid)]
if ctx.env.pkg !== nothing && isfile( joinpath( dirname(ctx.env.project_file), "src", ctx.env.pkg.name * ".jl") )
push!(pkgids, Base.PkgId(ctx.env.pkg.uuid, ctx.env.pkg.name))
push!(pkg_dep_lists, collect(keys(ctx.env.project.deps)))
end

precomp_events = Dict{String,Base.Event}()
for pkgid in pkgids
precomp_events[pkgid.name] = Base.Event()
end

precomp_tasks = Task[]

# TODO: since we are a complete list, but not topologically sorted, handling of recursion will be completely at random
for pkg in pkgids
for (i, pkg) in pairs(pkgids)
paths = Base.find_all_in_cache_path(pkg)
sourcepath = Base.locate_package(pkg)
sourcepath === nothing && continue
@@ -917,9 +935,19 @@ function precompile(ctx::Context)
break
end
if stale
Base.compilecache(pkg, sourcepath)
t = @async begin
length(pkg_dep_lists[i]) > 0 && wait.(map(x->precomp_events[x], pkg_dep_lists[i]))
put!(parallel_limiter, true)
Base.compilecache(pkg, sourcepath)
Member:

To limit parallelization, I suggest launching a million tasks like this, but creating a Channel(num_tasks) and having each task put!() something into it just before this call to compilecache, then take!() something back out when it's finished. This creates, essentially, an N-parallel critical section: N tasks can be running that section at once, while all others are blocked, waiting for the channel to free up space.
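The N-parallel critical section described above can be sketched as follows (a minimal illustration, not the PR's exact code; `run_limited` and `do_work` are hypothetical names standing in for the precompile loop and Base.compilecache):

```julia
# Sketch of the channel-based limiter suggested above: each task put!s a
# token before the expensive call and take!s it back afterwards, so at most
# num_tasks tasks run the critical section concurrently.
function run_limited(work_items, do_work; num_tasks::Int = 4)
    limiter = Channel{Bool}(num_tasks)
    tasks = map(work_items) do item
        @async begin
            put!(limiter, true)   # blocks while num_tasks tokens are in flight
            try
                do_work(item)
            finally
                take!(limiter)    # free the slot even if do_work throws
            end
        end
    end
    foreach(wait, tasks)
end
```

The try/finally ensures a slot is returned even when the work errors, which speaks to the "what happens if this throws?" question raised later in the thread.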

Member Author:

Nice. Ok

Member:

What happens if this throws?

Member:

@staticfloat aren't you describing a (counting) Semaphore?

Member:

Yep, pretty much.

Member Author:

This PR now has a Semaphore approach, and I tried out a channel-based approach here, which doesn't seem simpler: master...ianshmean:ib/parallel_precomp_chanelbased

What should we move forward with?
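For comparison, the semaphore variant of the same limiter might be sketched like this (illustrative only; `run_with_semaphore` and `do_work` are hypothetical names, while Base.Semaphore, Base.acquire, and Base.release are the Base primitives involved):

```julia
# Sketch of the semaphore-based limiter: acquire a slot before the expensive
# call, and release it in a finally block so a throwing task cannot leak it.
function run_with_semaphore(work_items, do_work; num_tasks::Int = 4)
    sem = Base.Semaphore(num_tasks)
    tasks = map(work_items) do item
        @async begin
            Base.acquire(sem)
            try
                do_work(item)
            finally
                Base.release(sem)
            end
        end
    end
    foreach(wait, tasks)
end
```

Functionally this is equivalent to the channel version; the semaphore just makes the counting intent explicit.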

Member Author:

cc @tkf just to bring the conversation to a single thread

Member:

I'd actually implement precompile differently by using a recursive function that returns was_recompiled::Bool and wrapping the recursive calls with tasks. This way, we don't need to implement a future-like construct (i.e., was_processed + was_recompiled). The error handling would probably be more straightforward this way. Resource control is probably still easier with semaphore (unless we have an easy-to-use task pool and future in Base or stdlib) although I wish there were Base.acquire(f, semaphore).

But this is the kind of thing the trade-off is not completely clear until you have a concrete implementation. So, I think it's reasonable to defer this to future refactoring.
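The recursive design described in the comment above might look roughly like this sketch (hypothetical names throughout: `precompile_all`, `deps_of`, and `compile_one` are stand-ins, and the memoized Task table plays the role of the future-like construct):

```julia
# Sketch of the recursive alternative: each package's worker recurses into
# its dependencies first, then compiles, returning was_recompiled::Bool.
# A memo table of Tasks ensures each package is processed exactly once.
function precompile_all(pkgs, deps_of, compile_one)
    memo = Dict{String,Task}()
    function precompile_pkg(pkg)::Bool
        t = get!(memo, pkg) do
            @async begin
                # Wrap the recursive calls in tasks, as suggested above.
                dep_tasks = [@async precompile_pkg(d) for d in deps_of(pkg)]
                foreach(wait, dep_tasks)
                compile_one(pkg)   # returns was_recompiled::Bool
            end
        end
        fetch(t)
    end
    Dict(pkg => precompile_pkg(pkg) for pkg in pkgs)
end
```

A resource limiter (semaphore or channel) would still be needed around compile_one, and a cyclic dependency graph would deadlock this sketch, so it is only a starting point for the refactoring deferred above.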

notify(precomp_events[pkg.name])
take!(parallel_limiter)
end
push!(precomp_tasks, t)
else
notify(precomp_events[pkg.name])
end
end
wait.(precomp_tasks)
nothing
end

Expand Down