Inlining loses ABI-relevant target feature information at call operations #70563

Open
RalfJung opened this issue Oct 28, 2023 · 1 comment
RalfJung commented Oct 28, 2023

The behavior of the call terminator is (currently) very sensitive to the surrounding target features: e.g. on x86-64, when an argument of type <8 x float> is passed, the exact calling convention used for that argument depends on whether the function that contains the call has the AVX feature enabled.

This means that the seemingly harmless operation of moving call from one function to another (e.g. via inlining) can lead to the behavior of call changing, which is obviously bad! This is what happens in this example, where the uninlined program would behave as intended (caller and callee use matching ABI everywhere), but the program produced by clang -emit-llvm calls no_target_feature in a context where the AVX feature is available, thus breaking the call.

Before:

; no target features
define internal void @no_target_feature_intermediate(float %dummy, ptr align 32 %x) unnamed_addr #0 {
start:
  %0 = load <8 x float>, ptr %x, align 32
  ; call uses "no-avx" ABI
  call void @no_target_feature(float %dummy, <8 x float> %0)
  ret void
}

; "target-features"="+avx"
define void @with_target_feature(ptr align 32 %x) unnamed_addr #2 {
start:
  %0 = alloca <8 x float>, align 32
  %1 = load <8 x float>, ptr %x, align 32
  store <8 x float> %1, ptr %0, align 32
  call void @no_target_feature_intermediate(float 0.000000e+00, ptr align 32 %0)
  ret void
}

After:

; "target-features"="+avx"
define void @with_target_feature(ptr align 32 %x) unnamed_addr #1 {
start:
  %0 = alloca <8 x float>, align 32
  %1 = load <8 x float>, ptr %x, align 32
  store <8 x float> %1, ptr %0, align 32
  %2 = load <8 x float>, ptr %0, align 32
  ; call uses "avx" ABI
  call void @no_target_feature(float 0.000000e+00, <8 x float> %2)
  ret void
}
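The Before/After flip can be sketched with a toy model (plain Python with assumed names, not actual LLVM code) of how lowering currently derives the ABI for a vector argument from the enclosing function's target features:

```python
def vector_arg_abi(enclosing_fn_features):
    """Toy model: the ABI chosen for an <8 x float> argument depends only
    on the target features of the function *containing* the call."""
    return "avx" if "avx" in enclosing_fn_features else "no-avx"

# Before inlining: the call sits in no_target_feature_intermediate,
# which has no target features, so the "no-avx" ABI is used.
before = vector_arg_abi(frozenset())

# After inlining into with_target_feature ("target-features"="+avx"),
# the very same call is lowered with the "avx" ABI -- a silent ABI flip.
after = vector_arg_abi(frozenset({"avx"}))

assert (before, after) == ("no-avx", "avx")
```

The call instruction itself is unchanged by inlining; only its surroundings change, which is exactly why the resulting miscompilation is so easy to trigger and so hard to spot in the IR.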

(We are running into this in Rust, where it causes code that should be completely fine to misbehave, so it's a critical issue for us that we'd like to fix.)

The obvious way to avoid this is to not do inlining when the target features differ between caller and callee, but that seems like a big hammer that leaves a lot of optimization potential on the table -- and indeed, if I understood correctly, the most recent attempt to enforce this for all kinds of inlining (including alwaysinline) quickly led to problems.

An alternative would be to make call less context-dependent. After all, it's not the inlining itself that is the fundamental problem here; it's the fact that call doesn't know the ABI it has to use for the callee, and takes the caller's target features to fill that information gap -- a fragile heuristic, as the inlining issue shows. So I wonder if it wouldn't be possible to still do the inlining, but end up with code like

; "target-features"="+avx"
define void @with_target_feature(ptr align 32 %x) unnamed_addr #1 {
start:
  %0 = alloca <8 x float>, align 32
  %1 = load <8 x float>, ptr %x, align 32
  store <8 x float> %1, ptr %0, align 32
  %2 = load <8 x float>, ptr %0, align 32
  call void @no_target_feature(float 0.000000e+00, <8 x float> %2) !abi-target-features=""
  ret void
}

IOW, the call terminator would stop relying on context clues (which are fragile since inlining changes the context), and instead it would carry an explicit annotation saying which target features should be considered when determining how arguments must be passed for this call. This approach would enable arbitrary inlining from "less feature" to "more feature" functions, provided the inliner takes care to equip the call instructions it is copying with the right annotation.
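Extending the toy model above (again plain Python with assumed names, not LLVM internals), the proposal amounts to lowering consulting a per-call annotation that the inliner stamps onto calls it copies, instead of consulting the enclosing function:

```python
def vector_arg_abi(call):
    """Toy model of the proposed behavior: the ABI is read off the call's
    own annotation, never off the enclosing function."""
    return "avx" if "avx" in call["abi_target_features"] else "no-avx"

def inline_call(call, original_caller_features):
    """Toy inliner step: when copying a call out of its original function,
    record that function's target features on the call itself."""
    stamped = dict(call)
    stamped.setdefault("abi_target_features", original_caller_features)
    return stamped

# The call originally lives in a function with no target features.
call = {"callee": "no_target_feature"}

# Inline it into an AVX function: the annotation preserves the original
# (empty) feature set, so the ABI stays "no-avx" in the new context.
moved = inline_call(call, original_caller_features=frozenset())
assert vector_arg_abi(moved) == "no-avx"
```

The key property is that the annotation travels with the call, so any later pass that moves the instruction again cannot re-introduce the context dependence.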

(I don't know the LLVM internal data structures so I'm not sure how that annotation would best be represented, but LLVM already supports tons of annotations at various instructions so I hope adding one more isn't too hard.)

@RalfJung

This limitation in LLVM also prevents us from supporting (without unnecessary overhead) some pretty natural-looking Rust programs; see rust-lang/rust#132865 for details.
