
Parallel RBI generation is not deterministic because of autoload #618

Closed · vinistock opened this issue Nov 17, 2021 · 1 comment · Fixed by #639
Labels: bug (Something isn't working)

@vinistock (Member):

When generating RBIs, we require the bundle/application ahead of time in the main process, but that is not sufficient to load all of the application's code.

When we delegate generation to the Executor, the SymbolTableCompiler triggers autoloads the first time a constant is referenced inside a forked worker. The other workers never learn about the autoloaded constant, which leads to RBIs that are not deterministic.

It's possible to verify this by inspecting $LOADED_FEATURES outside and inside of the forked workers: the workers print different sets of loaded features, which is most likely the source of the non-deterministic behavior when running in parallel.

# lib/tapioca/generators/gem.rb

def generate
  # ...

  # Snapshot the feature list before forking. The .dup matters: without it,
  # current_features aliases the same array object as $LOADED_FEATURES, so
  # the diff below would always be empty.
  current_features = $LOADED_FEATURES.dup
  Executor.new(.....) do |gem|
    # Anything printed here was required inside this forked worker only.
    puts $LOADED_FEATURES - current_features
    # ...
  end
  # ...
end
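
To see the same divergence outside of Tapioca, here is a minimal, self-contained sketch; the temp file, the Foo constant, and the direct use of fork are illustrative assumptions, not Tapioca code. A constant autoloaded inside a forked child is loaded only in that child, so the parent (and any sibling forked from it) ends up autoloading it again independently:

# repro.rb -- hypothetical standalone script, not part of Tapioca
require "tmpdir"

Dir.mktmpdir do |dir|
  path = File.join(dir, "foo.rb")
  File.write(path, "module Foo; end\n")
  autoload :Foo, path

  snapshot = $LOADED_FEATURES.dup

  pid = fork do
    Foo # first reference fires the autoload, but only inside this child
    puts "child:  #{($LOADED_FEATURES - snapshot).inspect}"
  end
  Process.wait(pid)

  # The parent never loaded foo.rb, so the next worker forked from it
  # will have to autoload Foo again at some arbitrary later point.
  puts "parent: #{($LOADED_FEATURES - snapshot).inspect}" # => []
end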
vinistock added the bug label on Nov 17, 2021
@paracycle (Member):

I have been working on triggering autoloads eagerly before starting to generate gem RBIs. I think I can get that merged this week, which should solve this as well.
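
The merged fix is in #639; as a rough sketch of the approach described above, one could eagerly resolve every pending autoload in the main process, before the Executor forks, by walking the constant tree. The method name, recursion strategy, and rescue set here are assumptions for illustration, not the actual implementation:

# Hypothetical sketch only -- not the code merged in #639.
require "set"

def eagerly_trigger_autoloads(mod = Object, seen = Set.new)
  return unless seen.add?(mod) # guard against constant cycles

  mod.constants(false).each do |name|
    const =
      begin
        mod.const_get(name) # resolving the constant fires any pending autoload
      rescue LoadError, NameError
        next # some gems ship broken autoloads; skip them and keep walking
      end

    eagerly_trigger_autoloads(const, seen) if const.is_a?(Module)
  end
end

# Called in the main process before forking, so every $LOADED_FEATURES
# entry is already present and inherited by all workers at fork time.
eagerly_trigger_autoloads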
