optimizer: alias-aware SROA #43888
Conversation
With this PR, will the pattern of …
This is an optimizer-time change, so that pattern is still fine for blocking inference-time specialization (which is most of what we wanted to block; LLVM was already able to SROA this). That said, we have …
The actual benchmarking pattern is …
All tests should be green now. @nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.
Implements a simple Julia-level array allocation elimination on top of #43888.

```julia
julia> code_typed((String,String)) do s, t
           a = Vector{Base.RefValue{String}}(undef, 2)
           a[1] = Ref(s)
           a[2] = Ref(t)
           return a[1][]
       end
```

```diff
diff --git a/master b/pr
index 9c8da14380..5b63d08190 100644
--- a/master
+++ b/pr
@@ -1,11 +1,4 @@
 1-element Vector{Any}:
  CodeInfo(
-1 ─ %1 = $(Expr(:foreigncall, :(:jl_alloc_array_1d), Vector{Base.RefValue{String}}, svec(Any, Int64), 0, :(:ccall), Vector{Base.RefValue{String}}, 2, 2))::Vector{Base.RefValue{String}}
-│   %2 = %new(Base.RefValue{String}, s)::Base.RefValue{String}
-│        Base.arrayset(true, %1, %2, 1)::Vector{Base.RefValue{String}}
-│   %4 = %new(Base.RefValue{String}, t)::Base.RefValue{String}
-│        Base.arrayset(true, %1, %4, 2)::Vector{Base.RefValue{String}}
-│   %6 = Base.arrayref(true, %1, 1)::Base.RefValue{String}
-│   %7 = Base.getfield(%6, :x)::String
-└──      return %7
+1 ─     return s
 ) => String
```

Still, this array SROA is very limited and can handle only trivial examples (though I confirmed this version already eliminates a few array allocations during the sysimg build). For those who are interested, I added some discussion of array optimization [here](https://aviatesk.github.io/EscapeAnalysis.jl/dev/#EA-Array-Analysis).
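As a hedged illustration of the "trivial examples" limitation mentioned above (this sketch is not taken from the PR, and the function name is made up): once an index is only known at run time, the loads and stores can no longer be resolved to concrete array slots, so the allocation presumably remains.

```julia
# Illustrative only: compared with the constant-index example above, here `i` is
# a runtime value, so this first array SROA pass would presumably not eliminate
# the Vector allocation (an assumption, not verified against the PR).
function dynamic_index(s, t, i)
    a = Vector{Base.RefValue{String}}(undef, 2)
    a[1] = Ref(s)
    a[2] = Ref(t)
    return a[i][]
end
```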
Some non-scientific numbers:
@nanosoldier
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.
Your package evaluation job has completed - possible issues were detected. A full report can be found here.
This commit ports [EscapeAnalysis.jl](https://github.com/aviatesk/EscapeAnalysis.jl) into Julia base. You can find the documentation of this escape analysis at [this GitHub page](https://aviatesk.github.io/EscapeAnalysis.jl/dev/)[^1].

[^1]: The same documentation will be included in Julia's developer documentation by this commit.

This escape analysis will hopefully be an enabling technology for various memory-related optimizations in Julia's high-level compilation pipeline. Possible target optimizations include alias-aware SROA (#43888), array SROA (#43909), the `mutating_arrayfreeze` optimization (#42465), stack allocation of mutables, finalizer elision, and so on[^2].

[^2]: It would also be interesting if LLVM-level optimizations could consume IPO information derived by this escape analysis to broaden optimization possibilities.

The primary motivation for porting EA in this PR is to check its impact on latency as well as to get feedback from a broader range of developers. The plan is to first introduce EA in this commit, and then merge the depending PRs built on top of it, like #43888, #43909 and #42465.

This commit simply defines and runs EA inside Julia's base compiler and enables the existing test suite with it. In this commit, we just run EA before inlining to generate the IPO cache. In the depending PRs, EA will be invoked again after inlining to be used for various local optimizations.
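As a rough, hedged illustration of the basic distinction an escape analysis draws (the code below is illustrative only and is not taken from this PR; `SINK`, `no_escape`, and `does_escape` are made-up names):

```julia
# Illustrative only: two allocations with different escape behavior.
const SINK = Any[]

function no_escape(x)
    r = Ref(x)        # `r` never leaves this frame; a candidate for elimination
    return r[]
end

function does_escape(x)
    r = Ref(x)
    push!(SINK, r)    # `r` escapes through a global container, so it must stay heap-allocated
    return r[]
end
```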
This commit ports [EscapeAnalysis.jl](https://github.com/aviatesk/EscapeAnalysis.jl) into Julia base. You can find the documentation of this escape analysis at [this GitHub page](https://aviatesk.github.io/EscapeAnalysis.jl/dev/)[^3].

[^3]: The same documentation will be included in Julia's developer documentation by this commit.

This escape analysis will hopefully be an enabling technology for various memory-related optimizations in Julia's high-level compilation pipeline. Possible target optimizations include alias-aware SROA (#43888), array SROA (#43909), the `mutating_arrayfreeze` optimization (#42465), stack allocation of mutables, finalizer elision, and so on[^4].

[^4]: It would also be interesting if LLVM-level optimizations could consume IPO information derived by this escape analysis to broaden optimization possibilities.

The primary motivation for porting EA with this PR is to check its impact on latency as well as to get feedback from a broader range of developers. The plan is to first introduce EA into Julia Base with this commit, and then merge the depending PRs built on top of it later.

This commit simply defines EA inside Julia's base compiler and enables the existing test suite with it. In this commit we don't run EA at all, so it shouldn't affect Julia-level compilation latency. In the depending PRs, EA will run in two stages:
- `IPO EA`: run EA on the pre-inlining state to generate an IPO-valid cache
- `Local EA`: run EA on the post-inlining state to generate local escape information used for various optimizations

In order to integrate `IPO EA` with our compilation cache system, this commit also implements a new `CodeInstance.argescapes` field that keeps the IPO-valid cache generated by `IPO EA`.
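As a hedged, illustrative sketch (not code from this commit, and `readref`/`caller` are made-up names) of why per-callee, IPO-valid escape information is useful: if a non-inlined callee provably does not capture its argument, a caller's allocation can still be treated as non-escaping.

```julia
# Illustrative only: `readref` does not let `r` escape, so cached interprocedural
# escape information about `readref` would let the caller's Ref be treated as a
# local, non-escaping allocation even though the call is not inlined.
@noinline readref(r::Base.RefValue) = r[]

function caller(x)
    r = Ref(x)        # only ever passed to `readref`, which does not capture it
    return readref(r)
end
```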
Enhances SROA of mutables using the novel Julia-level escape analysis (on top of #43800):

1. alias-aware SROA, mutable ϕ-node elimination
2. `isdefined` check elimination
3. load-forwarding for non-eliminable but analyzable mutables

---

1. alias-aware SROA, mutable ϕ-node elimination

EA's alias analysis allows this new SROA to handle nested mutable allocations pretty well. Now we can eliminate the heap allocations completely from this insanely nested example in a single analysis/optimization pass:

```julia
julia> function refs(x)
           (Ref(Ref(Ref(Ref(Ref(Ref(Ref(Ref(Ref(Ref((x))))))))))))[][][][][][][][][][]
       end
refs (generic function with 1 method)

julia> refs("julia"); @allocated refs("julia")
0
```

EA can also analyze escape of ϕ-nodes as well as their aliasing. Mutable ϕ-nodes can be eliminated even in very tricky cases like:

```julia
julia> code_typed((Bool,String,)) do cond, x
           # these allocations form multiple ϕ-nodes
           if cond
               ϕ2 = ϕ1 = Ref{Any}("foo")
           else
               ϕ2 = ϕ1 = Ref{Any}("bar")
           end
           ϕ2[] = x
           y = ϕ1[] # => x
           return y
       end
1-element Vector{Any}:
 CodeInfo(
1 ─     goto #3 if not cond
2 ─     goto #4
3 ─     nothing::Nothing
4 ┄     return x
) => Any
```

Combined with the powerful alias analysis and ϕ-node handling above, allocations in the following "realistic" examples will be fully optimized:

```julia
julia> # demonstrate the power of our field / alias analysis with realistic end to end examples
       # adapted from http://wiki.luajit.org/Allocation-Sinking-Optimization#implementation%5B
       abstract type AbstractPoint{T} end

julia> struct Point{T} <: AbstractPoint{T}
           x::T
           y::T
       end

julia> mutable struct MPoint{T} <: AbstractPoint{T}
           x::T
           y::T
       end

julia> add(a::P, b::P) where P<:AbstractPoint = P(a.x + b.x, a.y + b.y);

julia> function compute_point(T, n, ax, ay, bx, by)
           a = T(ax, ay)
           b = T(bx, by)
           for i in 0:(n-1)
               a = add(add(a, b), b)
           end
           a.x, a.y
       end;

julia> function compute_point(n, a, b)
           for i in 0:(n-1)
               a = add(add(a, b), b)
           end
           a.x, a.y
       end;

julia> function compute_point!(n, a, b)
           for i in 0:(n-1)
               a′ = add(add(a, b), b)
               a.x = a′.x
               a.y = a′.y
           end
       end;

julia> compute_point(MPoint, 10, 1+.5, 2+.5, 2+.25, 4+.75);

julia> compute_point(MPoint, 10, 1+.5im, 2+.5im, 2+.25im, 4+.75im);

julia> @allocated compute_point(MPoint, 10000, 1+.5, 2+.5, 2+.25, 4+.75)
0

julia> @allocated compute_point(MPoint, 10000, 1+.5im, 2+.5im, 2+.25im, 4+.75im)
0

julia> compute_point(10, MPoint(1+.5, 2+.5), MPoint(2+.25, 4+.75));

julia> compute_point(10, MPoint(1+.5im, 2+.5im), MPoint(2+.25im, 4+.75im));

julia> @allocated compute_point(10000, MPoint(1+.5, 2+.5), MPoint(2+.25, 4+.75))
0

julia> @allocated compute_point(10000, MPoint(1+.5im, 2+.5im), MPoint(2+.25im, 4+.75im))
0

julia> af, bf = MPoint(1+.5, 2+.5), MPoint(2+.25, 4+.75);

julia> ac, bc = MPoint(1+.5im, 2+.5im), MPoint(2+.25im, 4+.75im);

julia> compute_point!(10, af, bf);

julia> compute_point!(10, ac, bc);

julia> @allocated compute_point!(10000, af, bf)
0

julia> @allocated compute_point!(10000, ac, bc)
0
```

2. `isdefined` check elimination

This commit also implements a simple optimization to eliminate `isdefined` calls by checking load-forwardability. This optimization may be especially useful to eliminate the extra allocation involved with a capturing closure, e.g. @vchuravy's old example:

```julia
julia> callit(f, args...) = f(args...);

julia> function isdefined_elim()
           local arr::Vector{Any}
           callit() do
               arr = Any[]
           end
           return arr
       end;

julia> code_typed(isdefined_elim)
1-element Vector{Any}:
 CodeInfo(
1 ─ %1 = $(Expr(:foreigncall, :(:jl_alloc_array_1d), Vector{Any}, svec(Any, Int64), 0, :(:ccall), Vector{Any}, 0, 0))::Vector{Any}
└──      goto #3 if not true
2 ─      goto #4
3 ─      $(Expr(:throw_undef_if_not, :arr, false))::Any
4 ┄      return %1
) => Vector{Any}
```

3. load-forwarding for non-eliminable but analyzable mutables

EA also allows us to forward loads even when the mutable allocation can't be eliminated but its fields are still known precisely. The load forwarding might be useful since it may derive new type information that succeeding optimization passes can use (or just because it allows simpler code transformations down the line):

```julia
julia> code_typed((Bool,String,)) do c, s
           r = Ref{Any}(s)
           if c
               return r[]::String # adce_pass! will further eliminate this type assert call also
           else
               return r
           end
       end
1-element Vector{Any}:
 CodeInfo(
1 ─ %1 = %new(Base.RefValue{Any}, s)::Base.RefValue{Any}
└──      goto #3 if not c
2 ─      return s
3 ─      return %1
) => Union{Base.RefValue{Any}, String}
```

---

Please refer to the newly added test cases for more examples. Also, EA's alias analysis is general and already reasons about arrays, so this EA-based SROA will hopefully be generalized for array SROA as well.
Mostly adapted from #43888. Should be more robust and cover more cases.
@aviatesk can you rebase this?