
Cannot read in any of the model compilations on my Mac #113

Closed
kslong opened this issue Sep 8, 2014 · 7 comments

Comments

@kslong
Collaborator

kslong commented Sep 8, 2014

I tried to calculate a model on my Mac today using one of the .ls models. I found that the program crashed after reading in 400 models. To get the program to work, I needed to reduce the size of NWAVES in models.h and recompile.

Obviously this is some kind of segmentation fault due to limitations of program (or stack) size on my machine, but it is something we need to deal with somehow. I don't actually understand what sets the maximum, whether it is the physical memory on my machine (16 Gbytes) or something else.

Minimally, we need to trap the problem, and then warn the user what is causing it and how he/she should deal with it.
Medium range: right now when we add a spectrum, it adds an array NWAVES long. We could try to set up the arrays adaptively, so that their length is adjusted to the length of each spectrum.
Maximally, we might ask if there is a way around the problem altogether. One (slow) way would be to read in and work out the spectra of only those models that are actually needed in python, and to do it once.

For example, for the star, instead of storing all the files and interpolating, we could work out which two spectra we need to interpolate between, read those files in one at a time, create the interpolation, and store only that. If we did that for each ring of the disk, we presumably would not need nearly as many models as we have.

Worst case, we need to tell the user how to fix the problem.

@kslong kslong added this to the Public Release milestone Sep 8, 2014
@kslong
Collaborator Author

kslong commented Sep 8, 2014

Note that models is an external static structure that is set up at compilation, so it is not immediately obvious why it fails part way into reading the models.

However, as the failure happens some way (about 400 models) into reading, it still seems as if this is some kind of memory segmentation/overflow problem.

The structure is dominated by wavelengths and fluxes. We read both of these variables into doubles. We could save a factor of 2 in space by switching to floats, and another factor of 2 if we wished by storing the wavelengths separately, since these will be mostly the same.

Note that the problem with not storing the spectra, I think, is that when we construct spectra of the disk, there is some "cleverness" involved. I believe we change the ring boundaries themselves to divide up the disk differently when the frequency boundaries are changed.

@jhmatthews
Collaborator

@kslong - what do you get as the error message when it crashes? Is it a segmentation fault or an abort trap 6?

@jhmatthews
Collaborator

@kslong would you be able to give a few more details here: the parameter file, what you changed NWAVES to, and what the actual error output looks like? I need to check that the error I'm producing is the same one.

@kslong
Collaborator Author

kslong commented Sep 24, 2014

It was simply a fairly standard run, which I believe included kurucz91.ls for the star/disk. I first tried kur_tlusty_hybrid.ls, but when that failed, I went to something simpler. This was when I (re)discovered that we just set the number NWAVES in spectrum.h

@jhmatthews
Collaborator

@kslong

I have isolated AN error - I am not entirely sure if it is the same as yours. My version reads in all 460 models of kur_tlusty_hybrid.ls, then crashes just after doing so in the routine get_spectype() (see lines around 2717 of python.c):

rdstr ("Model_file", model_list);
get_models (model_list, 2, spectype);
strcpy (geo.model_list[get_spectype_count], model_list);    // Copy it to geo 
strcpy (get_spectype_oldname, model_list);  // Also copy it back to the old name  <---CRASHES HERE
get_spectype_count++;

The reason it crashes depends on the length of the masterfile name, because get_spectype_oldname is declared as:

char get_spectype_oldname[] = "data/kurucz91/kurucz91.ls";

Thus if your filename is longer than this, the strcpy call will write past the end of the buffer, because there isn't enough space to store the new filename. This can be fixed by simply declaring:

char get_spectype_oldname[LINELENGTH] = "data/kurucz91/kurucz91.ls";

Instead.

If it is really changing NWAVES that fixes it for you, then this must be different, but I wonder if you also changed the name of your masterfile at some point.

@kslong
Collaborator Author

kslong commented Sep 29, 2014

You could easily be correct about this. I will see if I can recreate the error and provide a more complete description of the problem.

@kslong
Collaborator Author

kslong commented Sep 29, 2014

I don't appear to be able to reproduce this error on my Mac, using what I believe is the same .pf file I had the problem with previously and the version of the dev branch I have. It is possible, as @jhmatthews suggests, that this was solved by the fix to #113.

I am closing this issue until it can be demonstrated that it remains a problem.

@kslong kslong closed this as completed Sep 29, 2014