
System type "previous" does not work #290

Closed
jhmatthews opened this issue Aug 30, 2017 · 14 comments

jhmatthews commented Aug 30, 2017

Sam had discovered some strange behaviour with restarted runs (using the "previous" option). This option is accessed via the question System_type(0=star,1=binary,2=agn,3=previous). It's broken in the following way.

The basic logic is as follows (streamlined for ease of reading):

```c
rdint ("System_type(0=star,1=binary,2=agn,3=previous)", &geo.system_type);

if (geo.system_type == SYSTEM_TYPE_PREVIOUS)
  {
    /* read the old wind file */
    wind_read (files.old_windsave);
    geo.run_type = SYSTEM_TYPE_PREVIOUS;
    ndomain = geo.ndomain;
  }

if (geo.system_type != SYSTEM_TYPE_PREVIOUS)
  {
    rdint ("disk.type(0=no.disk,1=standard.flat.disk,2=vertically.extended.disk)", &geo.disk_type);
    rdint ("Number.of.wind.components", &ndomains);

    /* do more stuff that eventually increments `ndomain` by the number of
       requested domains, i.e. `ndomains` - confusing! */
  }
```

The issue is that `geo.system_type` is set from the .pf file, but reading the old wind file then restores the saved `geo` structure, overwriting `geo.system_type` with whatever the previous run used (AGN, binary, etc.). As a result, the wrong number of domains (2 when it should be 1) ends up stored in `ndomains`, which is later copied over to `geo.ndomain`, and after that who knows what happens.

Proposed fix: I think one just has to replace `if (geo.system_type != SYSTEM_TYPE_PREVIOUS)` with `else`, or with `if (geo.run_type != SYSTEM_TYPE_PREVIOUS)`.
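To make the failure mode concrete, here is a minimal, self-contained sketch of the logic above. The names `mock_wind_read()` and `asks_domain_questions()` are hypothetical stand-ins (not the real source), and the struct is stripped down to the fields involved; the point is that the buggy guard re-asks the domain questions after a restore, while a guard on `run_type` does not:

```c
#include <string.h>

enum
{
  SYSTEM_TYPE_AGN = 2,
  SYSTEM_TYPE_PREVIOUS = 3
};

struct geometry
{
  int system_type;   /* answer to the System_type question in the .pf file */
  int run_type;      /* set AFTER wind_read(), so the restore can't clobber it */
  int ndomain;
};

/* Simulates wind_read(): restoring a saved model copies the whole geo
   struct back, overwriting system_type with the old run's value. */
static void
mock_wind_read (struct geometry *geo)
{
  struct geometry saved = { SYSTEM_TYPE_AGN, 0, 1 };  /* old run was AGN */
  memcpy (geo, &saved, sizeof *geo);
}

/* Runs the setup logic and returns 1 if the domain questions would be
   asked again; use_run_type selects the proposed fix. */
int
asks_domain_questions (int use_run_type)
{
  struct geometry geo = { 0, 0, 0 };
  geo.system_type = SYSTEM_TYPE_PREVIOUS;   /* from rdint() */

  if (geo.system_type == SYSTEM_TYPE_PREVIOUS)
    {
      mock_wind_read (&geo);                /* clobbers geo.system_type! */
      geo.run_type = SYSTEM_TYPE_PREVIOUS;
    }

  if (use_run_type)
    return geo.run_type != SYSTEM_TYPE_PREVIOUS;     /* fixed guard */
  else
    return geo.system_type != SYSTEM_TYPE_PREVIOUS;  /* buggy guard */
}
```

With the buggy guard the questions are re-asked (and `ndomains` ends up wrong); with the `run_type` guard they are correctly skipped.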

As an aside, it is very confusing having variables called "ndomain" and "ndomains" and "geo.ndomain" to do various convoluted domain number jobs. We should probably be more descriptive with variable names to avoid issues. Very easy to miss an s and get no warning. I think I would rather have cumbersome names that are descriptive than shorter names that are ambiguous.

@jhmatthews jhmatthews added bug Bugs in the code domain urgent labels Aug 30, 2017
@jhmatthews

@Higginbottom - probably worth checking that this doesn't affect hydro runs, I presume not.

@jhmatthews

The main issue here is fixed by agnwinds/python@0f04465

@smangham had been finding some other issues, so we will wait and see whether this fix changes those; it could have led to all sorts of strange effects by setting up multiple domains on top of each other.


kslong commented Sep 11, 2017 via email

@jhmatthews

So "Previous" used to be a "wind type", but it got changed to a "system type" - rightly, I think. That meant some new code was introduced and the logic of reading in the inputs was changed. In the process, the above bug was introduced.

So originally I was concerned about a bug in code logic that just meant "previous" runs didn't work as they used to or as I expect.

Sam has also spotted some strange behaviour which may or may not be caused by said bug. Sam or I will update in due course.

@smangham

I reran a 'Previous' run with the fix, and whilst it stopped declaring 2 domains, we still had much higher matom emissivities than in the original run.


kslong commented Nov 1, 2017

@smangham I still would like to see a simple set of examples of this problem. It is something that should be sorted out. Could you post the relevant .pf files here, with instructions on what to run?


smangham commented Nov 1, 2017

Archived as GitHub won't accept .pf files:
Archive.zip

If you run original.pf, then resume.pf, and use py_wind to get the matom level 4 emissivity for both, then diff the output files, they won't be identical. These are real differences, not just machine-precision issues, as can be seen from a scatter plot of r vs. emissivity.


kslong commented Mar 20, 2018

We might also need to test rtheta with "previous" for any solution to this. Using the existing rtheta test would mean Nick's fix for this gets tested as well.

@jhmatthews

The error described by Sam is due to k-packet emissivities and macro-atom emissivities not being zeroed when you read in a previous model. As a result, the emissivities are added to those that are already stored in the macromain and plasmamain structures.

The safest thing to do is zero them in get_matom_f(). This is done in this commit: agnwinds/python@25e8299
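The accumulation bug and the fix can be sketched in a minimal, self-contained form. This is a hypothetical mock-up, not the real `get_matom_f()` (the real code works on the `plasmamain`/`macromain` structures): zeroing the stored emissivities before accumulating means a restarted run does not add fresh contributions on top of the stale values read in from the old wind file.

```c
#include <stddef.h>

#define NCELLS 4

struct cell
{
  double kpkt_emiss;   /* k-packet emissivity accumulator */
};

/* Analogue of the fix: reset the accumulators before the calculation. */
static void
zero_emissivities (struct cell *cells, size_t n)
{
  for (size_t i = 0; i < n; i++)
    cells[i].kpkt_emiss = 0.0;
}

/* Stand-in for one pass of the emissivity calculation. */
static void
accumulate_emissivities (struct cell *cells, size_t n, double contribution)
{
  for (size_t i = 0; i < n; i++)
    cells[i].kpkt_emiss += contribution;
}

/* Emissivity in cell 0 after a restart: stale values are already present
   (restored from the old wind file), then one fresh pass runs.  With
   zeroing, only the fresh pass survives. */
double
restart_emissivity (int zero_first)
{
  struct cell cells[NCELLS] = { { 0.0 } };

  accumulate_emissivities (cells, NCELLS, 1.0);  /* stale, from the restore */
  if (zero_first)
    zero_emissivities (cells, NCELLS);           /* the fix */
  accumulate_emissivities (cells, NCELLS, 2.0);  /* fresh spectral cycle */

  return cells[0].kpkt_emiss;
}
```

Without zeroing, the stale and fresh passes sum, which is exactly the "much higher matom emissivities" Sam saw in restarted runs.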


kslong commented Feb 15, 2019

I tried to confirm that this could be closed, but #473 needs to be fixed to enable this.

@jhmatthews

Can we test this again now the other issue is fixed, @kslong ?


kslong commented Mar 12, 2019

As far as I can tell there are no differences between the restarted version and the original version. Updated versions of the parameter files are here:

z.models.190312.tar.gz

But I still don't understand the sequence of py_wind commands I was supposed to execute to verify this, so I am not confident that we have actually solved the problem. The emissivities all seem to be zero.

I used dev (commit 357bf26)

(run in single processor mode)

Can @jhmatthews or @smangham provide more instruction about this (the exact sequence of commands after "M")? Alternatively, I can provide the wind_save files etc. on Dropbox and someone else can check whether this error is solved.

As an aside, there are some sequences of commands to py_wind in the Matom area that generate segmentation faults. Also, related to this, if we are going to continue to use py_wind, then we should probably have it generate a command file or something so we can duplicate a specific set of commands. If others think this is important, we should open a separate issue for it. It certainly would have helped in this case.

@jhmatthews

Hi Knox. The commands after "M" should just be "2" for emissivities, then whichever level you want to look at ("3" for H alpha). Note that the line in question must lie in the wavelength range specified.

If the emissivities are zero, this is normally because no spectral cycles have been run (emissivities are computed in the first spectral cycle). This test needs to be modified slightly so that we are comparing two models that both compute spectral cycles, as in Sam's original archive; just running 1 spectral cycle in original.pf should be sufficient. If the emissivities are zero in resume.pf, this implies a bug.

I should be able to look at this in the next few days if needed.

The segfaults are probably related to #473

@jhmatthews

I've verified that the emissivities are now the same in restarted and normal runs as long as both have run a spectral cycle. This can now be closed.
