Interaction between macro_pops and saha/lucy_mazzali #40
We discussed this in the 130814 telecon with SS, NSH, JM and CK. By going through the code, we came to a rather different conclusion from the one above. In Lucy/Mazzali ionisation mode (mode 3 in the pf file, mode 2 in nebular_concentrations), nebular_concentrations does the following:

partition_functions (xplasma, mode);  // t_r with weights
m = concentrations (xplasma, 0);      // Saha equation using t_r
m = lucy (xplasma);                   // Main routine for running Lucy/Mazzali

i.e., it works out the partition functions, then works out the Saha abundances, then applies the Lucy/Mazzali correction factors (in non-macro mode). In macro mode, both of these functions also call macro_pops.
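Schematically, the macro-mode call structure is as follows (reconstructed from the discussion in this thread, not verbatim source):

concentrations (xplasma, 0);   /* calls saha(), which fills xplasma with Saha
                                  densities for every ion, macro levels included;
                                  macro_pops() is also called from here */
lucy (xplasma);                /* applies the Lucy/Mazzali corrections; in 76
                                  this also calls macro_pops(), in 58 it does
                                  not (see the discussion below) */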
There are a number of problems with this. For one, the iteration on n_e is controlled by two constants:

#define MAXITERATIONS 200     // the number of loops to do to try to converge in ne
#define FRACTIONAL_ERROR 0.03 // the change in n_e which causes a break out of the loop

These need to be set differently for macro pops to obtain the correct answer. Note also that MAXITERATIONS is only set to 20 in python 58. Are we confident that these values are good enough even for the non-macro-atom mode? So, what does this mean?
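(For orientation, here is a minimal sketch of the kind of loop these two constants control; the names ne, ne_old and compute_densities() are illustrative assumptions, not the actual source:)

/* Minimal sketch of an n_e convergence loop of the kind the constants above
   govern. Requires math.h for fabs(). */
int niterate;
double ne = ne_guess;                       /* some initial estimate of n_e */

for (niterate = 0; niterate < MAXITERATIONS; niterate++)
{
  double ne_old = ne;
  ne = compute_densities (xplasma, ne_old); /* Saha (+ corrections) at this n_e */
  if (fabs (ne - ne_old) / ne_old < FRACTIONAL_ERROR)
    break;                                  /* change in n_e small enough: done */
}
if (niterate == MAXITERATIONS)
  Error ("does not converge on ne\n");      /* the error report mentioned below */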
So here are a couple of comments about all of this. First, it is definitely true that much of what is contained in the ionization-calculation part of the routine is "historical", developed as we were trying to understand how to do the ionization calculation. We were also only really testing situations where the plasma was essentially fully ionized, so that ne did not change much. Indeed, my understanding at the time was that the only reason we had to iterate was to get a better estimate of ne. I presume that when James talks about needing more iterations, he is talking about cases where the temperature is fairly low, and this is the reason ne keeps changing. Even so, it seems odd to me that under normal situations we should have to iterate 200 times and still only reach a fractional error of 0.03; if we are getting close to this, we ought to figure out whether there is a better approach. If memory serves me correctly, it is possible to write down the set of equations that includes all of the ion abundances and ne in a way that avoids the iteration. I thought that was what Stuart did in his program.
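(For concreteness, the closed system alluded to here can be sketched as the standard Saha relations plus charge conservation; this is an illustration of the idea, not a statement of what any particular code does:)

$$
\frac{n_{i+1}\,n_e}{n_i} = \frac{2\,U_{i+1}}{U_i}\left(\frac{2\pi m_e k T}{h^2}\right)^{3/2} e^{-\chi_i/kT},
\qquad
n_e = \sum_{\mathrm{elements}}\sum_i q_i\, n_i,
$$

where $U_i$ are the partition functions, $\chi_i$ the ionization potentials and $q_i$ the charge of stage $i$. Solving this coupled nonlinear system for all the $n_i$ and $n_e$ simultaneously (e.g. by Newton-Raphson on $n_e$) replaces the fixed-point iteration.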
Hi Knox - hope you're enjoying the beach! The problem isn't that ne doesn't converge; probably ne converges immediately (without iteration). The problem is that the macro atom level populations are currently being overwritten by LTE populations, which are inserted as part of the process of computing LTE ion populations before going into the Lucy/Mazzali correction stuff. That shouldn't have been allowed: the "density" quantities associated with the macro atom levels should never be modified by anything other than macro_pops. The effect is that the escape probabilities used to solve the rate equations are not consistent with the level populations themselves, which has a knock-on effect. If iterations for ne had been occurring, the situation would have been somewhat better, because that part of the code does not interfere with the macro atom populations, so during the iteration on ne it would effectively have been iterating on the macro atom populations too, which would be fine. However, that's not really the elegant way to fix it; the correct thing would be to somehow forbid saha from overwriting the macro atom densities... or to avoid the saha call completely. Cheers from a wet and grey Belfast!
Yes, sorry, I probably didn't explain that clearly enough. A few interesting tests: so far I've just been testing the sensitivity to FRACTIONAL_ERROR and MAXITERATIONS in both 58 and 76, and the results are a little odd. Remember that I'm still doing these diagnostics with one-cycle runs for ease and simplicity, and that the question is not whether we converge on ne, but rather whether we run macro_pops enough times after calling saha that we forget about our initial guess, which is first of all not the guess we should be using, and secondly a different guess between the two versions.
...but of course, it wouldn't be that simple! You'll see that in 76 there is almost no sensitivity to these two variables...
Mysteries:
Ideas:
Conclusions:
Things to do:
Hi James, sounds like you are making some progress. I agree that it seems odd that v76 doesn't care about this, though. I guess I'm not up to speed: why are the partition functions different, and what values do they have in the two cases? (I would have expected that for H you always get "2" for HI and "1" for HII, pretty much regardless of what one does!?) Are the partition functions even computed for the macro atoms? Stuart
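(For context, the values quoted here follow from the usual definition of the partition function; this is a standard-textbook aside, not taken from the code:)

$$ U = \sum_j g_j\, e^{-E_j/kT} \approx g_0 \quad \text{when } kT \ll E_1, $$

with $g_0 = 2$ for H I (the $n=1$ level has statistical weight $2n^2 = 2$) and $g_0 = 1$ for H II (a bare proton), hence the "2" and "1".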
OK, this is getting really odd. When I change the code so that the only time saha gets called for macro ions is in the wind initialisation, and also keep FRACTIONAL_ERROR and MAXITERATIONS at 1e-8 and 20000, I get 58 to agree ALMOST PERFECTLY with 76. Here are the actual figures when you don't call saha in either version and both have the same improved convergence conditions:
What is going on?? This is bizarre, but without understanding it yet, I think it may be almost solved! I'll take a look at the partition functions in a moment. The only relevance they have here is that they are used in the Saha equation; they shouldn't be used at all in a macro-atom scheme if it's done properly.
Right, just talking with Christian, and I think we've cracked it. The reason the above happens is that the lucy_mazzali1 routine in 58 doesn't actually call macro_pops at all (something I'd missed because we'd been debugging 76 in the telecon, whoops), but it does in 76. So here's what happens.
Hope that makes some sort of sense; it's still a bit fuzzy in my head, but I think we might finally have cracked it! The bad news, of course, is that this could mean the line strengths in Sim et al. 2005 are out by a factor of ~2. I'm currently running a test with 58 to see how this changes the final spectrum, and whether that spectrum will even agree with 76. I'll also probably need to get hold of 55 to test Sim 2005 properly, as that is the version Stuart used.
So do I infer from all of this that the problem is isolated to the macro-atom case? I hope so. Or has the fact that we set the convergence criterion to 0.03 also produced some inaccurate results? Also, since you have invested a lot of time in this, please add plenty of comments to the various code bits to explain better why we did what we did. We ought to make the code as clean as possible as well, eliminating any options that are unused and explaining what situations the various ones are used for. Ideally, some of the modes, which currently appear to be passed around as bare numbers, should be replaced by #defined variables so that they are more transparent. We should probably also use this as an opportunity to explain, in the "user's manual" section of the google site, how each of the various modes is calculated, at least in general terms.
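(As an illustration of the kind of #defines meant here; the name is hypothetical, but the number is the one quoted above for the Lucy/Mazzali mode:)

#define IONMODE_ML93 2   /* hypothetical name: Lucy/Mazzali mode as passed to
                            nebular_concentrations (mode 3 in the pf file) */

m = nebular_concentrations (xplasma, IONMODE_ML93);   /* instead of a bare "2" */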
Yes, I think it is unique to macro atoms, although we may want to think about how that value is set. Agreed that we'll need some cleaning up and substantial commenting of this section.
OK, so I think the best solution to this is going to be as follows. saha is called in concentrations and, if we are going to preserve the current way of doing things, this still needs to happen so that python can treat some ions as simple and some as macro. The function saha loops over all the ions and populates the xplasma structure with Saha ion densities. Here's the relevant loop:

for (nion = first + 1; nion < last; nion++)
{
  b = xsaha * partition[nion]
    * exp (-ion[nion - 1].ip / (BOLTZMANN * t)) / (ne * partition[nion - 1]);
  // if (b > big && nh < 1e5)   // only modify things if the density is high enough to matter
  if (b > big)
    b = big;                    // limit the step so there is no chance of overflow
  a = density[nion - 1] * b;
  sum += density[nion] = a;
  if (density[nion] < 0.0)
    mytrap ();
  if (sane_check (sum))
    Error ("saha:sane_check failed for density summation\n");
}

a = nh * ele[nelem].abun / sum;   // normalise to the element's total abundance

for (nion = first; nion < last; nion++)
{
  density[nion] *= a;
  if (sane_check (density[nion]))
    Error ("saha:sane_check failed for density of ion %i\n", nion);
}

So I think two if statements, plus substantial commenting, should do the trick:

if ((ion[nion].macro_info == 0) || (geo.macro_ioniz_mode == 0))

inside each loop over nion. This means the loop bodies will only be entered if we are before the ionization cycles and want our initial Saha guess at the densities (geo.macro_ioniz_mode == 0), or if the ion in question is being treated as a simple ion (ion[nion].macro_info == 0). We will also need to do something similar once the macro atom mode is implemented in ionization mode 7 (#43), and we probably don't need to call macro_pops in both concentrations and lucy. And then we may need to think about FRACTIONAL_ERROR: whether it is correct, and whether we even want macro_pops to be called in the same place with the ne criteria.
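A minimal sketch of what the guarded first loop might look like (the proposed condition wrapped around the existing body; this is a sketch, not a tested patch, and it leaves open how the normalisation by sum should treat the macro ions that are skipped):

for (nion = first + 1; nion < last; nion++)
{
  /* Only overwrite the density with a Saha value if this ion is treated
     as a simple ion, or if we are still initialising the wind. */
  if ((ion[nion].macro_info == 0) || (geo.macro_ioniz_mode == 0))
  {
    b = xsaha * partition[nion]
      * exp (-ion[nion - 1].ip / (BOLTZMANN * t)) / (ne * partition[nion - 1]);
    if (b > big)
      b = big;                  /* limit the step so there is no chance of overflow */
    a = density[nion - 1] * b;
    sum += density[nion] = a;
  }
  /* The second loop (the renormalisation by a) would get the same guard. */
}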
We now understand this problem, but a few questions still remain
I have done some tests with a fiducial-style model, but with mode 3 ionization so that we call saha. They suggest that the code doesn't spend much time in this routine at all, so decreasing FRACTIONAL_ERROR should be fine in that regard. Thus an easy fix may be just to set it to a low value, although I'm not sure how reliable this is. If we set it to 1/ne_max, where ne_max is the highest electron density we care about, then that should be stringent enough in most situations. We also have an error report for 'does not converge on ne'; with max iterations set to 200, this was never triggered in my test model. I'm slightly unsure how to proceed with this one in a reliable way. Perhaps the only solution is to rewrite the procedure in matom mode, but that could be tricky given that we want simple ions to be treated in the normal way (which was presumably the motivation for doing it this way in the first place).
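(A sketch of that suggestion; NE_MAX is a hypothetical name and the value shown is only an example ceiling:)

#define NE_MAX 1.0e14                     /* cm^-3: hypothetical highest n_e we care about */
#define FRACTIONAL_ERROR (1.0 / NE_MAX)   /* stringent enough at the densities of interest */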
I'm closing this issue, as the FRACTIONAL_ERROR problem is now identified elsewhere.
Originally I flagged this bug up as a concern about the lucy_mazzali routine interacting with macro_pops, but in the end we discovered that it was due to a problem with saha abundances and convergence criteria. This is a macro-specific problem that is understood (to an extent) but unsolved, and it requires some thought about the best fix: it will need a combination of stopping saha abundances being calculated for macro ions, and some thought about the best way to converge on macro level populations. There may also be questions to ask about why we are even calling the saha function during the ionization cycles, as we should surely use the output from the last cycle as our initial guess and then calculate the corrected saha equation without overwriting everything.
The problem is broadly described as follows (see the comments below for the progress on the bug over time and more discussion).
With FRACTIONAL_ERROR changed to 1e-12 in both versions, and saha not called after the first ionization cycle, we were able to get the following answers,
which I am happy to declare close enough to a success to proceed with a better way of getting the level populations in macro atom mode, provided that the spectra and level emissivities are fairly similar.