
Macro-atom speedup changes cause incorrect ionization states #1045

Closed · kslong opened this issue Jan 9, 2024 · 5 comments · Fixed by #1106

Comments

kslong (Collaborator) commented Jan 9, 2024

There are several indications that something has failed in one of the recent updates to Python intended to speed up macro-atom calculations. There are several threads of issues, not the least of which is that when macro atoms are invoked, the Cloudy-Python comparisons fail for the macro atoms, which was not the case in 2140b19 from 231029. The problem is apparent in 3a49b22, which also failed James' Balmer test.

kslong (Collaborator, author) commented Jan 10, 2024

I found a mistake that caused the Cloudy/Python comparisons to fail entirely. The results are still not as good as before, but rather than a programming error, this may be because the integrations are not as accurate.

This is what py87f produced (before the change):

[image: Cloudy/Python ionization comparison for py87f]

and this is what 87g produces after the fix:

[image: Cloudy/Python ionization comparison for 87g]

As one can see, both 87f and 87g fail at very low IP, but the problem persists over a wider range of IP for the new, cruder but faster integration routine.
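(For context only, not code from the repository: the sketch below illustrates the kind of trade-off being discussed, contrasting GSL's error-controlled adaptive quadrature, gsl_integration_qags, here taken to stand in for the GSL routines mentioned in this thread, with a crude fixed-sample trapezoid rule of the sort a faster scheme might use. The integrand and the trapezoid helper are hypothetical stand-ins.)

```c
/* Illustrative only: adaptive GSL quadrature vs. a crude fixed-grid
 * trapezoid rule.  The integrand is a stand-in; the real macro-atom
 * integrands live in the Python (SIROCCO) source. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_integration.h>

/* Hypothetical integrand with GSL's expected signature */
static double
example_integrand (double x, void *params)
{
  (void) params;
  return exp (-x) * pow (x, 2.0);
}

/* Crude, fast alternative: fixed n-point trapezoid rule */
static double
trapezoid (double (*f) (double, void *), double a, double b, int n)
{
  double h = (b - a) / n;
  double sum = 0.5 * (f (a, NULL) + f (b, NULL));
  for (int i = 1; i < n; i++)
    sum += f (a + i * h, NULL);
  return sum * h;
}

int
main (void)
{
  gsl_integration_workspace *w = gsl_integration_workspace_alloc (1000);
  gsl_function F = { &example_integrand, NULL };
  double result, abserr;

  /* Adaptive quadrature with explicit error control */
  gsl_integration_qags (&F, 0.0, 20.0, 0.0, 1e-8, 1000, w, &result, &abserr);
  printf ("gsl_integration_qags: %.10f (est. error %.2e)\n", result, abserr);

  /* Fixed-sample trapezoid: faster per call, but no error control */
  printf ("trapezoid, 100 pts  : %.10f\n",
          trapezoid (example_integrand, 0.0, 20.0, 100));

  gsl_integration_workspace_free (w);
  return 0;
}
```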

jhmatthews (Collaborator) commented:

For reference, here are the results of a similar test from my thesis:
[screenshot: thesis Cloudy/Python comparison results]

@kslong obviously the low-IP behaviour needs sorting out, but in some sense the comparison before the change was already an improvement on my calculations in some areas. Is this using your new atomic dataset for He?

kslong (Collaborator, author) commented Jan 11, 2024

No, I have been testing with data/h10_hetop_standard80.dat

The results with the branch indicated above from 231019 are very close to what is in your thesis, @jhmatthews. The differences are due to the fact that my Python grid did not go all the way down to -8 in IP. The model comparisons I posted above are for converged models, which I believe is also what your thesis contains.

I've now looked at how the ionization changes with ionization cycle in one of the models where there is a difference between 87f and 87h. In the first cycle there is good agreement (with the fix I made yesterday), but they arrive at different final ionization fractions after 5 or so cycles; specifically, 87f has significantly more He I than 87h.

Unfortunately, the fact that the problem does not express itself in the initial cycle means that tracking it down is going to be more difficult.

kslong added a commit that referenced this issue Jan 12, 2024
This version has the correct Cloudy/Python comparisons for H and He macro atoms, and avoids the problems of #1045
kslong (Collaborator, author) commented Jan 12, 2024

Further inspection of the Cloudy/Python comparisons shows that, in the portions of parameter space around log IP = -4, 87h with the new integration scheme has higher electron temperatures than the old one. This is consistent with the fact that we are getting less He I in that region.

Note that, so as not to leave "dev" in a broken state, I have replaced the version of matom that was in dev with the old version, which produces the correct comparisons. The new integration scheme can still be studied using the branch bug1045_macro.

kslong (Collaborator, author) commented Feb 19, 2024

The GSL integration has been restored on the dev branch. Work on speeding up the integrations will continue, but will be reported in #1038.
