Running VELOCIraptor on large RAMSES cosmo hydro snapshots #119
Hi @sorcej, I think it is likely that the reader isn't parsing the header correctly. I will say that I have tried to make it work, but I've encountered several different RAMSES formats with different structure in the header. Since the binary is not self-describing, I did not know how to make it work in general. Could you provide a description of the binary data?
Thanks @pelahi! Ok, so I switched to the development branch and I have the same issue. I do not have a reader in C++ unfortunately, but common readers in Python or Fortran typically work on the simulation (for instance, https://github.com/florentrenaud/rdramses/blob/master/rdramses.f90 works after some modifications to use longint). So there should not be anything particularly unusual, except that I had to use longint for both DM and star particles.
Here might be one of the problems in the reader: I am using longint for nstarTotal (because there are more than 2^31 stars). It is probably not the only one... I am not sure whether it is easy to add an option to use longint for both nstartot and nparttot.
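The longint mismatch can be illustrated with a minimal reader sketch. RAMSES writes Fortran unformatted records ([int32 length][payload][int32 length]); if a count such as nstartot is written as a 64-bit integer but decoded as 32-bit, both the value and the record framing go wrong. The function names and the exact record layout below are illustrative, not the actual VELOCIraptor reader:

```python
import io
import struct

def read_fortran_record(f):
    """Read one Fortran unformatted (sequential) record:
    [int32 payload length][payload bytes][int32 payload length]."""
    head = f.read(4)
    if len(head) < 4:
        return None  # end of file
    (n,) = struct.unpack("<i", head)
    payload = f.read(n)
    (n2,) = struct.unpack("<i", f.read(4))
    if n != n2:
        raise IOError("corrupt record markers: %d != %d" % (n, n2))
    return payload

def parse_count(payload, longint):
    """Decode a particle/star count written either as int64
    ('longint') or int32, depending on the simulation setup."""
    (count,) = struct.unpack("<q" if longint else "<i", payload)
    return count

# A star count above 2^31 only round-trips with the 8-byte form.
buf = io.BytesIO(struct.pack("<iqi", 8, 5_000_000_000, 8))
nstartot = parse_count(read_fortran_record(buf), longint=True)
```

Because the record length marker changes with the field width (8 bytes vs 4), reading an int64 field as int32 also desynchronises every record that follows, which matches the kind of downstream crashes described in this thread.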
(@pelahi): Ok, I have fixed and modified a few things; now I am stuck a bit further, in MPINumInDomain.
Hi @sorcej, apologies, I'll be a bit slow in replying for the next two days as I have to finish marking assignments for a high performance computing course. Do you mind creating a draft PR with your proposed changes so I can have a look?
Thanks @pelahi, and sorry for the delay. I cannot do any PR from the supercomputer unfortunately. Anyway, I tried to understand where it was crashing further on and I finally managed to pinpoint it: it is in MPIInitialDomainDecompositionWithMesh(opt). Not sure if it is because I am using too many or not enough cores? Update: I found it. I was using too many cores, so n3 did not fit in an unsigned int. With fewer cores, I am now stuck a bit further, when broadcasting... I am continuing to explore.
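The n3 overflow is easy to reproduce: if the decomposition mesh resolution grows with the number of MPI tasks, the total cell count nside^3 can exceed what a 32-bit unsigned integer can hold. A hypothetical check (names assumed, not VELOCIraptor's actual code):

```python
UINT32_MAX = 2**32 - 1  # largest value an unsigned 32-bit int holds

def mesh_cells_fit_uint32(nside):
    """Return True if an nside^3 decomposition mesh can be
    indexed with a 32-bit unsigned integer."""
    return nside ** 3 <= UINT32_MAX

ok = mesh_cells_fit_uint32(1024)       # 2^30 cells: fits
too_big = mesh_cells_fit_uint32(2048)  # 2^33 cells: overflows
```

Guarding the mesh size this way (or promoting the cell index to a 64-bit type) would turn the silent wraparound into a clear error.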
After several days, I decided to simply remove the option opt.impiusemesh to try to go further. I will try later to re-add it... |
@pelahi ok, so now I am again stuck a bit further. I again had to bypass dmp_mass = 1.0 / (opt.Neff*opt.Neff*opt.Neff) * (OmegaM - OmegaB) / OmegaM; as it gives something negative, so I had to use particle IDs instead, in mpiramsescio.cxx this time. ...
Hi @sorcej, so the dmp_mass calculation was based on reading some RAMSES data where it was easier to quickly calculate the mass of the dark matter particles from the matter density minus the baryon density and the effective resolution of the simulation. Regarding your error: it could be something to do with ints being used where values exceed ~2e9, overflowing a signed int and coming out negative. Could this be the case?
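A sketch of the two points above: the dark-matter particle mass as a fraction of the box's total mass, and how computing Neff^3 in a signed 32-bit int wraps negative once Neff >= 1291 (since 1291^3 > 2^31 - 1), which would make dmp_mass come out negative. The int32 emulation below is illustrative, not the C++ code itself:

```python
def dm_particle_mass(neff, omega_m, omega_b):
    """dmp_mass = 1/Neff^3 * (OmegaM - OmegaB)/OmegaM,
    i.e. the DM particle mass in units of the box's total mass."""
    return 1.0 / float(neff) ** 3 * (omega_m - omega_b) / omega_m

def as_int32(x):
    """Emulate C signed 32-bit integer wraparound."""
    return ((x + 2**31) % 2**32) - 2**31

neff = 1300
exact = neff ** 3          # 2_197_000_000, exceeds 2^31 - 1
wrapped = as_int32(exact)  # comes out negative after wraparound
```

Computing Neff as a float (or a 64-bit integer) before cubing, as in dm_particle_mass above, avoids the wraparound entirely.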
@pelahi ok thanks, now I get the dmp_mass. I actually had to change Neff manually though; I will try to add it as an option (unless there is an option that I missed). Regarding the malloc error: I am not sure yet, but I think something went wrong when reading the particle positions, so they cannot be properly assigned to the different tasks. I am trying to fix it.
@pelahi Ok, I have fixed the particle positions. I still need to understand something about the IDs, but it looks better now. I think I now have yet another problem to solve when writing: [mpiexec@i05r01c01s04] HYD_sock_write (../../../../../src/pm/i_hydra/libhydra/sock/hydra_sock_intel.c:360): write error (Bad file descriptor) I still of course have load-balancing issues, but for now I leave it as is. I might also have to fix some units, but I will see later. Thanks again :)
Hi @sorcej, not certain about the write error. When does this happen? Could you provide the associated velociraptor output?
@pelahi The code writes the .configuration, .siminfo and .units files and then stops, but it is still in the middle of doing SearchFullSet; I confirm that it is stuck in pfof=SearchFullSet(opt,Nlocal,Part,ngroup) again... I fixed the long IDs and the long trees, but I still have a seg fault, and the debug mode is not helpful.
@pelahi ok, I finally managed to narrow down where it crashes: it is when trying to build a new tree. I have yet to find out why it crashes for some of the tasks. Sometimes it gives me a free(): invalid pointer and sometimes a double free or corruption (out), but I do not get why, and why only for some of the tasks.
Hi @sorcej, would you have a small RAMSES input example I could try? It would help me debug the issue.
Hi @pelahi, sorry for not getting back to you earlier. I was attending conferences (actually I will be leaving again on Sunday). Anyway, it works with a small example. That said, I managed to get outputs even with the large simulation when dealing only with DM particles. I still need to check these outputs and whether they make sense. Then I will try to deal also with the stars, etc. Thanks
Hi @pelahi, any chance that you still have the config file used for the paper https://arxiv.org/pdf/1806.11417? I am interested in finding galaxies and their properties. I tried using and adapting the config file sample_galaxycatalog_run.cfg in the examples folder, but with no success yet. Thanks
Hi @sorcej, apologies for the late reply. The best config is https://github.com/pelahi/VELOCIraptor-STF/blob/development/examples/sample_galaxycatalog_run.cfg as you noted, but you might want to try keeping the 3DFOF envelope if you are looking to replicate Rodrigo's paper. Now, can you provide some information as to what issues you are encountering? Is it a processing issue, or is it that you are not getting the intracluster light you were expecting?
Thanks a lot @pelahi for your answer, and sorry for the delay in getting back to you. I had to put this analysis aside for a while. By keeping the 3DFOF envelope, you mean Keep_FOF=1, correct? About the issues, I had several. At first it was not even running, but that apparently ended up being a problem with the machine... Now it seems to be fixed. Currently, I am a bit perplexed by the catalogs I get. If only star particles are used, I do not really understand how to read the "properties" output file, which seems to have information similar to what I would get with dark matter particles. But perhaps I did not activate the proper output? Thanks for your help.
Hi @sorcej, so the autocalculated properties were very focused on the typical halo + galaxy properties people often require. If you have an idea of the properties you'd like to calculate, I could quickly make a branch that might calculate the output you need. Otherwise, you can use the python tools to load the properties of the catalog.
@pelahi, thanks a lot for your answer. In the outputs, though, I am missing the galaxy properties you mentioned. I must have done something wrong there. I only have typical halo properties, but computed on star particles... This is why this is a bit obscure to me. In the meantime, I will have a look at the tools you mentioned. Thanks
Hi @pelahi, sorry, I am still super confused about velociraptor_00218.catalog_particles.unbound.0 and velociraptor_00218.catalog_particles.0. Why is the sum of the particle counts in these two files not equal to the total number of DM particles in the simulation? Thanks for any pointers. I am clearly struggling to understand the outputs.
Hi @sorcej, so the *.catalog_particles and *.catalog_particles.unbound files arose from the format of listing particles used by SUBFIND, which listed "bound" and "unbound" particles in separate files. This distinction really only makes physical sense when extracting data from the entire simulation (all particle types). In your case with just stars, I would simply combine the lists and not treat the distinction as anything real.
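On the counting question, one likely reason the two files do not sum to the simulation total (this is my reading of the output format, not an authoritative statement) is that both files only list particles belonging to some group; particles outside any structure appear in neither. A toy sketch:

```python
def grouped_particle_count(bound_ids, unbound_ids):
    """Number of distinct particles listed across the bound and
    unbound catalogues; particles in no group appear in neither."""
    return len(set(bound_ids) | set(unbound_ids))

n_total = 10               # toy simulation with 10 particles
bound = [0, 1, 2, 3]       # group members classified as bound
unbound = [4, 5]           # group members classified as unbound
n_grouped = grouped_particle_count(bound, unbound)
n_field = n_total - n_grouped  # particles belonging to no group
```

In the toy example, 6 of the 10 particles are listed across the two catalogues and the remaining 4 are "field" particles, so the two file counts are not expected to add up to the total.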
Hi @pelahi, sorry, I was unclear. I was first looking at the results using dark matter particles, trying to understand the outputs; I understand your point about star particles though. Could you please point me to some explanation of these outputs, so that I can read the particles (either DM or stars) belonging to a given object (either halo or galaxy) and derive properties that are not in the velociraptor_00218.properties.0 files (for example, for the star particles I do not see the age and metallicity of the particles belonging to a given galaxy)? Thank you so much
Sure @sorcej, it's here: https://velociraptor-stf.readthedocs.io/en/latest/output.html. Note that to have extra star properties read and then used, you will likely need to adjust the RAMSES io to ensure that these are stored, and you also need to compile the code with the appropriate options. You can even store extra custom properties with specific names (see for example the config https://github.com/pelahi/VELOCIraptor-STF/blob/development/examples/sample_swifthydro_3dfof_subhalo_extra_properties.cfg, specifically its extra-property entries). However, both will likely need an update to the RAMSES interface and the substructure_properties files to calculate what you desire. Can you be precise about exactly what you want to calculate, and then we can see if updates are actually needed?
Thanks @pelahi. Weirdly enough, I think what I need would come with -DVR_USE_STAR=ON, but when I compile with it, STARON is still not defined... At a minimum, what I would need is not only the IDs of the particles that belong to the galaxies but also their positions, velocities, masses, ages and metallicities, so that I can derive galaxy properties.
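To check whether the star support actually made it into the build, one can inspect the CMake cache and the compile definitions. The option and macro names below are taken from this thread (VR_USE_STAR, STARON) and may differ between branches; treat this as a sketch under those assumptions:

```shell
# reconfigure the build with star support enabled
cmake .. -DVR_USE_STAR=ON

# confirm the option was recorded in the cache
grep VR_USE_STAR CMakeCache.txt

# confirm the macro reaches the compiler (location varies by generator)
grep -R "STARON" CMakeCache.txt compile_commands.json 2>/dev/null
```

If the option does not appear in the cache, a stale CMakeCache.txt is a common culprit: remove it (or the whole build directory) and reconfigure from scratch.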