TARDIS may produce different results when multithreading. #2021

Closed

Rodot- opened this issue May 10, 2022 · 2 comments

Rodot- (Contributor) commented May 10, 2022

Describe the bug
Updating the estimators in parallel during the Monte Carlo main loop is not an atomic operation, which results in invalid values when running with multiple threads.

Any fix should be accompanied by multithreaded tests.
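
As an illustration of the failure mode, here is a minimal sketch using numba's prange. This is not TARDIS code; the names (racy_estimator_update, n_packets, n_shells) are made up:

```python
import numpy as np
from numba import njit, prange


@njit(parallel=True)
def racy_estimator_update(n_packets, n_shells):
    # Stand-in for an estimator array that packets contribute to during transport.
    estimator = np.zeros(n_shells)
    for i in prange(n_packets):
        shell = i % n_shells  # stand-in for the shell a packet interacts in
        # Read-modify-write on a shared array element: two threads can read the
        # same old value, and one of the contributions is silently lost.
        estimator[shell] += 1.0
    return estimator


# Exact with one thread; with several threads the total can come out low and
# differ from run to run (the expected value here is 1_000_000).
print(racy_estimator_update(1_000_000, 20).sum())
```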

To Reproduce
Hard to compare; see the continuum branch.

Rodot- added the bug label May 10, 2022
jayantbhakar (Member) commented

I think the progress bars have the same issue. I am trying to figure out a way around it: I have gone through the code, and the multithreaded loop is causing the problem, but I have not found a way to restrict the progress-bar updates to a single execution within that loop.
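
One possible workaround, sketched here under the assumption that the bar (e.g. a tqdm bar) can be driven from the Python side rather than from inside the compiled loop: transport the packets in chunks and only update the bar from the main thread between chunk calls. The functions run_packet_chunk and run_with_progress below are hypothetical, not TARDIS code.

```python
import numpy as np
from numba import njit, prange
from tqdm import tqdm


@njit(parallel=True)
def run_packet_chunk(seeds):
    # Stand-in for transporting one chunk of Monte Carlo packets in parallel.
    out = np.empty(seeds.size)
    for i in prange(seeds.size):
        out[i] = seeds[i] * 2.0  # placeholder for the real per-packet work
    return out


def run_with_progress(n_packets, chunk_size=10_000):
    seeds = np.arange(n_packets, dtype=np.float64)
    results = np.empty(n_packets)
    with tqdm(total=n_packets) as bar:
        for start in range(0, n_packets, chunk_size):
            chunk = seeds[start:start + chunk_size]
            results[start:start + chunk.size] = run_packet_chunk(chunk)
            # The bar is only touched here, by the main thread, so the
            # worker threads never race on it.
            bar.update(chunk.size)
    return results
```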

This was referenced May 11, 2022
Rodot- self-assigned this May 11, 2022
epassaro added the bug 🐞 label and removed the bug label May 18, 2022
Rodot- (Contributor, Author) commented May 23, 2022

It should be noted that while this was partially fixed by #2022, there may still be inconsistencies in the v_packet energy histograms, as they are not properly handled in parallel (the likelihood of running into such problems is lower, but non-zero).
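
One common way to avoid this kind of race, shown here only as a sketch (this is not the fix from #2022, and the function and argument names are made up), is to give each thread a private copy of the histogram and reduce the copies deterministically after the parallel region:

```python
import numpy as np
from numba import njit, prange


@njit(parallel=True)
def threaded_energy_histogram(energies, bin_edges, n_threads):
    # n_threads can be taken from numba.get_num_threads() by the caller.
    n_bins = bin_edges.size - 1
    # One private histogram row per thread, so no element is ever shared.
    local_hist = np.zeros((n_threads, n_bins))
    for t in prange(n_threads):
        # Each thread processes a strided subset of the packets.
        for i in range(t, energies.size, n_threads):
            idx = np.searchsorted(bin_edges, energies[i], side='right') - 1
            if idx >= 0 and idx < n_bins:
                local_hist[t, idx] += energies[i]
    # Deterministic reduction after the parallel region.
    hist = np.zeros(n_bins)
    for t in range(n_threads):
        hist += local_hist[t]
    return hist
```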
