Mesh integration #10
Conversation
A few initial thoughts... the integration with the @connoramoreno code is good. For the mesh analysis code... I've suggested some things to break up the code.
parametric_stellarator.py
Outdated
h5m_path = f'{cwd}/coil_mesh.h5m'
h5m_conversion = subprocess.run(f'mbconvert {exo_path} {h5m_path}', shell = True)
I think we can do this with pymoab commands by just reading and writing a file in different formats, possibly in a new MOAB instance but maybe just into a new meshset? It will be lighter weight than launching a subprocess.
I looked in the MOAB bitbucket for a pymoab version of mbconvert, but could not find anything. Is there a specific function that you have in mind to replace the subprocess command?
Something like:
myFileset = pymoab.EntitySet()
myMoab.read_file('file.exo', fileset = myFileset)
myMoab.write_file('file.h5m', fileset = myFileset)
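For reference, a minimal sketch using the PyMOAB calls that appear later in this PR (the file names here are placeholders):

from pymoab import core

mb = core.Core()
# Read the EXODUS file into a new meshset, then write that set out as .h5m
fileset = mb.create_meshset()
mb.load_file('file.exo', fileset)
mb.write_file('file.h5m', [fileset])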
coils_tet_mesh.py
Outdated
cubit.cmd(f'import step "{coils_path}" heal')
cubit.cmd('volume size auto factor 5')

# For-loop iterates over all 48 coils included in the coils.step file
Comment still refers to 48 coils
coils_tet_mesh.py
Outdated
# Create the mesh
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
Repeated line... does it do anything?
coils_tet_mesh.py
Outdated
cubit.cmd(f'mesh volume {coil_num}')

# Set log file path and initiate logging
log_split = log_general_path.split('.')
Do this once before the loop, and perhaps name the components for clarity, something like log_prefix and log_ext?
coils_tet_mesh.py
Outdated
# Mesh quality metrics
quality_metric_list = ['shape', 'aspect ratio', 'condition no.', 'distortion', 'element volume',
                       'equiangle skew', 'equivolume skew', 'inradius', 'jacobian',
                       'normalized inradius', 'relative size', 'scaled jacobian', 'shape and size']
Define this once before the loop
coils_tet_mesh.py
Outdated
# Read the text file
with open(log_path, 'r') as file:
    text = file.read()

# Cut out each of the string 'tables'
start_table = 'Function Name'
end_table = 'Finished Command:'
ranges = []
start_index = 0

while True:
    # Find the starting index after the previous occurrence
    start_index = text.find(start_table, start_index)

    # Break the loop if the starting cue is not found
    if start_index == -1:
        break

    # Find the ending index after the starting index
    end_index = text.find(end_table, start_index + 1)

    # Break the loop if the ending cue is not found
    if end_index == -1:
        break

    # Extract the range between the starting and ending index
    range_text = text[start_index : end_index + 1]
    ranges.append(range_text)

    # Update the starting index for the next iteration
    start_index = end_index + 1
Maybe this can be a function to reduce cognitive load
Arguably, the file reading should perhaps be outside the function for separation of concerns
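A possible shape for that helper, as a sketch (the function name is a placeholder; the cue strings come from the snippet above, and the file read stays with the caller):

def find_tables(text, start_cue='Function Name', end_cue='Finished Command:'):
    # Collect every substring bounded by the start and end cues
    ranges = []
    start_index = 0
    while True:
        start_index = text.find(start_cue, start_index)
        if start_index == -1:
            break
        end_index = text.find(end_cue, start_index + 1)
        if end_index == -1:
            break
        ranges.append(text[start_index:end_index + 1])
        start_index = end_index + 1
    return ranges

# The caller keeps the I/O
with open(log_path, 'r') as file:
    tables = find_tables(file.read())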
coils_tet_mesh.py
Outdated
range_split = qual_metric[qual_metric.index('-\n'):].split(' ')

row_extraction = []
for i in range_split:
    if i != '' and "(" not in i:
        row_extraction.append(i)
offset = len(row_extraction) - 4
row_extraction[0] = [''.join(row_extraction[:offset])]

for j in np.arange(1,5):
    row_extraction[j] = row_extraction[offset + (j-1) : offset + j]
row_extraction = row_extraction[:5]
row_extraction[0][0] = row_extraction[0][0].split('-\n')[1]

data_lists = [function_names, averages, std_devs, minima, maxima]
indices = np.arange(5)
for index, data_list in zip(indices, data_lists):
    if index == 0:
        data_list.append(row_extraction[index][0])
    else:
        data_list.append(float(row_extraction[index][0]))
Maybe another function to reduce cognitive load
We'll also want to review whether there are better ways to do this parsing....
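One way that snippet could be folded into a helper, as a sketch based on the existing logic (the name parse_table_row is a placeholder):

def parse_table_row(qual_metric):
    # Keep only the tokens after the '-\n' separator, dropping blanks and parenthesized entries
    range_split = qual_metric[qual_metric.index('-\n'):].split(' ')
    row_extraction = [i for i in range_split if i != '' and '(' not in i]
    offset = len(row_extraction) - 4
    # The leading tokens form the metric name; the last four are its statistics
    name = ''.join(row_extraction[:offset]).split('-\n')[1]
    average, std_dev, minimum, maximum = (float(x) for x in row_extraction[offset:offset + 4])
    return name, average, std_dev, minimum, maximum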
coils_tet_mesh.py
Outdated
# Recreate the geometry and mesh with the best_analytic
cubit.cmd('reset')
cubit.cmd(f'import step "{coils_path}" heal')
cubit.cmd('volume size auto factor 5')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'mesh volume {coil_num}')
cubit.cmd(f'mesh volume {coil_num}')
cubit.cmd(f'quality volume {coil_num} {best_analytic} global draw mesh list')
I'm not sure this is doing anything meaningful...
Thanks for the comments, Paul. I'll get on them now and see if I can implement these improvements.
Force-pushed from c4757b4 to f00a903
Sorry about the whirlwind of commits in the past few minutes. I had made an accidental commit when trying to update the
Somewhere you have lost the integration of meshing within the main package.
Alright, the integration has been re-incorporated and updated to include the PyMOAB commands.
Looks great! The integrated meshing works well; there's still some refining to do on the stand-alone analysis.
parametric_stellarator.py
Outdated
# Get current working directory
cwd = os.getcwd()
# Exodus export
exo_path = f'{cwd}/coil_mesh.exo'
Define the base_name of the file:
base_name = 'coil_mesh'
exo_path = f'{file_path}/{base_name}.exo'
and use that in both places. This will facilitate changing it in the future.
parametric_stellarator.py
Outdated
# Conditionally export tetrahedral meshing
if magnets['meshing']:
    # Get current working directory
    cwd = os.getcwd()
Suggested change:
- cwd = os.getcwd()
+ file_path = os.getcwd()
parametric_stellarator.py
Outdated
# Create a new meshset for each coil iteration
coil_meshset = mb.create_meshset()
# Add the current coil's mesh to the meshset
mb.add_entities(coil_meshset, [exodus_set])
# Set .h5m path
h5m_path = f'{cwd}/coil_mesh.h5m'
# Write the current coil's meshset to an individual .h5m file
mb.write_file(h5m_path, [coil_meshset])
Suggested change:
- # Create a new meshset for each coil iteration
- coil_meshset = mb.create_meshset()
- # Add the current coil's mesh to the meshset
- mb.add_entities(coil_meshset, [exodus_set])
- # Set .h5m path
- h5m_path = f'{cwd}/coil_mesh.h5m'
- # Write the current coil's meshset to an individual .h5m file
- mb.write_file(h5m_path, [coil_meshset])
+ # Set .h5m path
+ h5m_path = f'{file_path}/{base_name}.h5m'
+ # Write the current coil's meshset to an individual .h5m file
+ mb.write_file(h5m_path, [exodus_set])
coils_tet_mesh.py
Outdated
log_split = log_general_path.split('.')
log_prefix = log_split[0]
log_ext = log_split[1]
Might be easier with the pathlib package/module?
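For instance, a sketch with pathlib (assuming log_general_path is something like 'coils.log'; with_name keeps any directory part intact):

from pathlib import Path

log_general = Path(log_general_path)
# e.g. 'coils.log' -> 'coils_3.log' for coil_num == 3
log_path = log_general.with_name(f'{log_general.stem}_{coil_num}{log_general.suffix}')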
coils_tet_mesh.py
Outdated
# Override the current mesh analytic with the best quality metric
cubit.cmd(f'quality volume {coil_num} {best_analytic} global draw mesh list')
May not be necessary?? We have this in the log file and/or data frame already
coils_tet_mesh.py
Outdated
# For-loop iterates over all of the coils included in the coils.step file
for coil_num in np.arange(1,total_coil_count + 1):

    # Create the mesh
    cubit.cmd(f'volume {coil_num} scheme tetmesh')
    cubit.cmd(f'mesh volume {coil_num}')

    # Set log file path and initiate logging
    log_path = f"{log_prefix}_{coil_num}.{log_ext}"
    cubit.cmd(f'logging on file "{log_path}"')

    # Run mesh quality metrics for each relevant metric
    for metric in quality_metric_list:
        qual_command = f'quality volume {coil_num} {metric} global draw mesh list'
        mesh_analytics = cubit.cmd(qual_command)
Separation of concerns & cognitive load:
Make this a function that creates and does the quality measures on each mesh
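A rough sketch of what that function could look like (the name and argument list are placeholders, built from the snippet above):

def mesh_and_measure_coil(coil_num, quality_metric_list, log_prefix, log_ext):
    # Create the tetrahedral mesh for one coil
    cubit.cmd(f'volume {coil_num} scheme tetmesh')
    cubit.cmd(f'mesh volume {coil_num}')

    # Log the quality metrics for this coil
    log_path = f'{log_prefix}_{coil_num}.{log_ext}'
    cubit.cmd(f'logging on file "{log_path}"')
    for metric in quality_metric_list:
        cubit.cmd(f'quality volume {coil_num} {metric} global draw mesh list')
    return log_path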
coils_tet_mesh.py
Outdated
# MESH QUALITY ANALYSIS

# Read the log file
with open(log_path, 'r') as file:
    text = file.read()

# Extract the data from the log file using the extract_tabular_data function
function_names, averages, std_devs, minima, maxima = extract_tabular_data(text)

# Separate multi-word function names to be able to match-case with Cubit names
function_names = split_function_names(function_names)

# Compile the extracted tabular data into a Pandas data frame
quality_metrics_df = pd.DataFrame({'Function Name': function_names,
                                   'Average': averages,
                                   'Standard Deviation': std_devs,
                                   'Minimum': minima,
                                   'Maximum': maxima})

# Remove rows where the standard deviation equals 0
# Found this for a few cases and it appeared too implausible to keep
quality_metrics_df = quality_metrics_df.loc[quality_metrics_df['Standard Deviation'] != 0]
Separation of concerns & cognitive load:
Make this a method that builds & cleans the dataframe
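For example, something along these lines (the function name is a placeholder; extract_tabular_data and split_function_names are the helpers already referenced above):

def build_quality_dataframe(log_path):
    # Read the log and extract the tabular quality data
    with open(log_path, 'r') as file:
        text = file.read()
    function_names, averages, std_devs, minima, maxima = extract_tabular_data(text)
    function_names = split_function_names(function_names)

    # Compile into a data frame and drop implausible zero-deviation rows
    df = pd.DataFrame({'Function Name': function_names,
                       'Average': averages,
                       'Standard Deviation': std_devs,
                       'Minimum': minima,
                       'Maximum': maxima})
    return df.loc[df['Standard Deviation'] != 0]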
coils_tet_mesh.py
Outdated
# MESH EXPORT

# Export the mesh as an EXODUS file
cubit.cmd(f'export mesh "{exo_path}"')

# Initialize the MOAB core instance
mb = core.Core()

# Load the EXODUS file
exodus_set = mb.create_meshset()
mb.load_file(exo_path, exodus_set)

# Create a new meshset for each coil iteration
coil_meshset = mb.create_meshset()

# Add the current coil's mesh to the meshset
mb.add_entities(coil_meshset, [exodus_set])

# Write the current coil's meshset to an individual .h5m file
mb.write_file(h5m_path, [coil_meshset])
Method to export the mesh - make it optional based on the user input
Also update as in the parametric stellarator code
Consider a method that can be reused here and in parametric_stellarator for mesh export as H5M.
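A sketch of such a shared helper (the name export_mesh_h5m is a placeholder; the calls mirror those already used in this PR):

def export_mesh_h5m(exo_path, h5m_path):
    # Export the current Cubit mesh as EXODUS, then convert it to .h5m via PyMOAB
    cubit.cmd(f'export mesh "{exo_path}"')
    mb = core.Core()
    exodus_set = mb.create_meshset()
    mb.load_file(exo_path, exodus_set)
    mb.write_file(h5m_path, [exodus_set])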
coils_tet_mesh.py
Outdated
# CSV EXPORT

# Rename 'index' column to 'Coil Number'
top_results.reset_index(inplace = True)
top_results.rename(columns = {'index' : 'Coil Number'}, inplace = True)

# Fix indexing for coil numbers
top_results = top_results.transpose()
top_results.iloc[0] = np.arange(1, total_coil_count + 1)
top_results = top_results.transpose()
print(top_results.head())

# Export the top_results data frame as a .csv
top_results.to_csv(csv_path)
Function to write CSV - optional?
Separation of concerns: separate analysis from I/O
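For instance (a sketch; the function name and flag are placeholders):

def export_top_results(top_results, csv_path, csv_export=True):
    # Keep the analysis upstream; this function only handles the optional write
    if csv_export:
        top_results.to_csv(csv_path)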
Some quick feedback, very much in the fine-tuning/style category.
parametric_stellarator.py
Outdated
# Conditionally export tetrahedral meshing
if magnets['meshing']:
    # Assign desired directory
    file_path = os.getcwd()
    # Define the base name of the file
    base_name = 'coil_mesh'
    # Exodus export
    exo_path = f'{file_path}/{base_name}.exo'
    cubit.cmd(f'export mesh "{exo_path}"')
    # Initialize the MOAB core instance
    mb = core.Core()
    # Load the EXODUS file
    exodus_set = mb.create_meshset()
    mb.load_file(exo_path, exodus_set)
    # Set .h5m path
    h5m_path = f'{file_path}/{base_name}.h5m'
    # Write the current coil's meshset to an individual .h5m file
    mb.write_file(h5m_path, [exodus_set])
A couple of fine-tuning thoughts related to best practice, readability and cognitive load:
- Consider whether all the comments are necessary; good choice of variable names and code structure can make comments redundant, and they interrupt the flow of reading a snippet of code. Comments can often be reserved for explaining more complicated algorithmic choices. (See the sketch after this list.)
- If the two filename definitions are adjacent, the code can be more readable: their association with each other is clearer, and they are kept out of the "action" flow of the read/write steps.
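For example, grouping the path definitions ahead of the read/write steps (a sketch of the snippet above with most comments dropped):

if magnets['meshing']:
    file_path = os.getcwd()
    base_name = 'coil_mesh'
    exo_path = f'{file_path}/{base_name}.exo'
    h5m_path = f'{file_path}/{base_name}.h5m'

    # Export from Cubit as EXODUS, then rewrite as .h5m with PyMOAB
    cubit.cmd(f'export mesh "{exo_path}"')
    mb = core.Core()
    exodus_set = mb.create_meshset()
    mb.load_file(exo_path, exodus_set)
    mb.write_file(h5m_path, [exodus_set])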
A few more ways to streamline and simplify some of the algorithms.
I think we perhaps should have sat down to discuss the cleanest way to extract data from the logs - it seems far more complicated than it probably needs to be.
There may be some value in separating the coils_tet_mesh.py script from the rest of this PR.
# Create an argument parser
parser = argparse.ArgumentParser(description="Magnet Coils Tetrahedral Mesh Script")

# Define command-line arguments
parser.add_argument("coils_path", type=str, help="Path to the coils .step file")
parser.add_argument("total_coil_count", type=int, help="Number of coils")
parser.add_argument("--cwd", action="store_true", help="Export outputs to current working directory")
parser.add_argument("--mesh-export", action="store_true", help="Export meshes as .exo and .h5m files")
parser.add_argument("--log-path", type=str, help="Path to .log file prefix")
parser.add_argument("--exo-path", type=str, help="Path to .exo file")
parser.add_argument("--h5m-path", type=str, help="Path to .h5m file")
parser.add_argument("--csv-export", action="store_true", help="Export data as a .csv file")
parser.add_argument("--csv-path", type=str, help="Path to .csv file")

# Parse the command-line arguments
args = parser.parse_args()
This needs to be buried in a method as well.
One goal is to allow this to be imported as a module into another python script. Ideally, no code should run at the time of import - or only code that is necessary at the time of import. As it is, this parser code will run when the module is imported and may cause issues.
That method can then either be called from within the conditional below, or from within coils_tet_mesh directly.
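A sketch of that structure (the function names are placeholders, and the final call assumes the script's main routine is exposed as a coils_tet_mesh function):

def parse_args():
    parser = argparse.ArgumentParser(description="Magnet Coils Tetrahedral Mesh Script")
    parser.add_argument("coils_path", type=str, help="Path to the coils .step file")
    parser.add_argument("total_coil_count", type=int, help="Number of coils")
    # ... remaining optional arguments ...
    return parser.parse_args()

if __name__ == "__main__":
    # Nothing runs on import; the parser is only invoked when the script is executed directly
    args = parse_args()
    coils_tet_mesh(args.coils_path, args.total_coil_count)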
# Define command-line arguments
parser.add_argument("coils_path", type=str, help="Path to the coils .step file")
parser.add_argument("total_coil_count", type=int, help="Number of coils")
parser.add_argument("--cwd", action="store_true", help="Export outputs to current working directory")
For most tools, this is the default, and the other options override it. You've created an unusual hybrid where the other options define the output locations and this option overrides them. For the sake of moving forward, I'd recommend removing many of these options - we can add them back in as needed.
for char in name:
    # Check if the character is an uppercase letter
    if char.isupper() and current_word.isupper() and len(current_word) > 1:
        # If the character is uppercase and the current_word is also uppercase
        # and not a single character, consider it a separate word and append
        # the current_word to the split_words list.
        split_words.append(current_word)
        current_word = char
    elif char.isupper() and current_word and not current_word.isupper():
        # If the character is uppercase and the current_word exists but is not
        # uppercase, consider it a separate word and append the current_word
        # to the split_words list.
        split_words.append(current_word)
        current_word = char
    else:
        # If the character is not uppercase or doesn't meet the above conditions,
        # append it to the current_word to build the word.
        current_word += char

# Append the last word to the split_words list
split_words.append(current_word)
A slightly simpler approach might be to insert a special character (underscore?) before each upper-case letter and then use the Python split().
Suggested change (replacing the loop above):
name_split = ''
delim = '_'
for char in name:
    # insert a delim character before each upper case letter to facilitate splitting
    if char.isupper():
        name_split += delim
    name_split += char
split_words = name_split.split(delim)
indices = np.arange(5)
for index, data_list in zip(indices, data_lists):
    if index == 0:
        data_list.append(row_extraction[index][0])
    else:
        data_list.append(float(row_extraction[index][0]))
Suggested change (replacing the zip over indices above):
for index, data_list in enumerate(data_lists):
    if index == 0:
        data_list.append(row_extraction[index][0])
    else:
        data_list.append(float(row_extraction[index][0]))
This PR integrates tetrahedral meshing into the magnet coil creation, so that it can be directly incorporated into the parametric stellarator, if desired.
ExampleScript.py was also modified to allow for the meshing.