
Mesh integration #10

Merged — 15 commits, merged Oct 2, 2023
Conversation

@eitan-weinstein (Contributor) commented Jul 27, 2023

This PR integrates tetrahedral meshing into the magnet coil creation, so that it can be directly incorporated into the parametric stellarator, if desired. ExampleScript.py was also modified to allow for the meshing.

@gonuke (Member) left a comment

A few initial thoughts... the integration with the @connoramoreno code is good. For the mesh analysis code... I've suggested some things to break up the code.

Comment on lines 218 to 219
h5m_path = f'{cwd}/coil_mesh.h5m'
h5m_conversion = subprocess.run(f'mbconvert {exo_path} {h5m_path}', shell = True)
Member:

I think we can do this with pymoab commands by just reading and writing a file in different formats, possibly in a new MOAB instance but maybe just into a new meshset? It will be lighter weight than launching a subprocess.

Contributor Author:

I looked in the MOAB bitbucket for a pymoab version of mbconvert, but could not find anything. Is there a specific function that you have in mind to replace the subprocess command?

Member:

Something like:

myMoab = core.Core()
myFileset = myMoab.create_meshset()
myMoab.load_file('file.exo', myFileset)
myMoab.write_file('file.h5m', [myFileset])

cubit.cmd(f'import step "{coils_path}" heal')
cubit.cmd('volume size auto factor 5')

# For-loop iterates over all 48 coils included in the coils.step file
Member:

Comment still refers to 48 coils


# Create the mesh
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
Member:

Repeated line... does it do anything?

cubit.cmd(f'mesh volume {coil_num}')

# Set log file path and initiate logging
log_split = log_general_path.split('.')
Member:

Do this once before the loop; perhaps name the components for clarity, something like log_prefix and log_ext?

Comment on lines 55 to 58
# Mesh quality metrics
quality_metric_list = ['shape', 'aspect ratio', 'condition no.', 'distortion', 'element volume',
'equiangle skew', 'equivolume skew','inradius', 'jacobian',
'normalized inradius', 'relative size', 'scaled jacobian', 'shape and size']
Member:

Define this once before the loop

Comment on lines 68 to 98
# Read the text file
with open(log_path, 'r') as file:
    text = file.read()

# Cut out each of the string 'tables'
start_table = 'Function Name'
end_table = 'Finished Command:'
ranges = []
start_index = 0

while True:
    # Find the starting index after the previous occurrence
    start_index = text.find(start_table, start_index)

    # Break the loop if the starting cue is not found
    if start_index == -1:
        break

    # Find the ending index after the starting index
    end_index = text.find(end_table, start_index + 1)

    # Break the loop if the ending cue is not found
    if end_index == -1:
        break

    # Extract the range between the starting and ending index
    range_text = text[start_index : end_index + 1]
    ranges.append(range_text)

    # Update the starting index for the next iteration
    start_index = end_index + 1
Member:

Maybe this can be a function to reduce cognitive load
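One way the extraction loop above could be factored out, keeping the same start/end cues (a sketch, not the final refactor):

```python
def extract_table_ranges(text, start_cue='Function Name', end_cue='Finished Command:'):
    """Collect each substring spanning start_cue up to the next end_cue."""
    ranges = []
    start_index = text.find(start_cue)
    while start_index != -1:
        end_index = text.find(end_cue, start_index + 1)
        if end_index == -1:
            break
        # Match the original slicing convention (end index inclusive)
        ranges.append(text[start_index:end_index + 1])
        start_index = text.find(start_cue, end_index + 1)
    return ranges
```

The caller would read the log file and pass `text` in, which also addresses the separation-of-concerns point below about keeping file I/O outside the function.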

Member:

Arguably, the file reading should perhaps be outside the function for separation of concerns

Comment on lines 108 to 128
range_split = qual_metric[qual_metric.index('-\n'):].split(' ')

row_extraction = []
for i in range_split:
    if i != '' and "(" not in i:
        row_extraction.append(i)
offset = len(row_extraction) - 4
row_extraction[0] = [''.join(row_extraction[:offset])]

for j in np.arange(1, 5):
    row_extraction[j] = row_extraction[offset + (j-1) : offset + j]
row_extraction = row_extraction[:5]
row_extraction[0][0] = row_extraction[0][0].split('-\n')[1]

data_lists = [function_names, averages, std_devs, minima, maxima]
indices = np.arange(5)
for index, data_list in zip(indices, data_lists):
    if index == 0:
        data_list.append(row_extraction[index][0])
    else:
        data_list.append(float(row_extraction[index][0]))
Member:

Maybe another function to reduce cognitive load

Member:

We'll also want to review whether there are better ways to do this parsing....
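For illustration, one such alternative is a regex-based row parse; this sketch assumes a hypothetical row layout of a metric name followed by four numeric columns (average, std. dev., minimum, maximum):

```python
import re

def parse_metric_row(line):
    """Parse a hypothetical 'Name avg std min max' row into (name, four floats)."""
    pattern = r'\s*(.+?)\s+([-+.\deE]+)\s+([-+.\deE]+)\s+([-+.\deE]+)\s+([-+.\deE]+)\s*$'
    name, avg, std, mn, mx = re.match(pattern, line).groups()
    return name, float(avg), float(std), float(mn), float(mx)
```

The lazy name group absorbs multi-word names such as "Aspect Ratio" without any offset bookkeeping; whether this fits the actual Cubit log layout would need checking against a real log.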

Comment on lines 171 to 179
# Recreate the geometry and mesh with the best_analytic
cubit.cmd('reset')
cubit.cmd(f'import step "{coils_path}" heal')
cubit.cmd('volume size auto factor 5')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'volume {coil_num} scheme tetmesh')
cubit.cmd(f'mesh volume {coil_num}')
cubit.cmd(f'mesh volume {coil_num}')
cubit.cmd(f'quality volume {coil_num} {best_analytic} global draw mesh list')
Member:

I'm not sure this is doing anything meaningful...

@eitan-weinstein (Contributor Author):

Thanks for the comments, Paul. I'll get on them now and see if I can implement these improvements.

@eitan-weinstein (Contributor Author):

Sorry about the whirlwind of commits in the past few minutes. I had made an accidental commit when trying to update the coils_tet_mesh.py script and then made a few compounding mistakes on GitHub trying to fix it all. It looks like it should be updated properly now. My newest change incorporates the PyMOAB commands in the mesh file conversion, as well as the export of a .csv file containing the mesh quality analytics for each coil.

@gonuke (Member) commented Aug 8, 2023

Somewhere you have lost the integration of meshing within the main package.

@eitan-weinstein (Contributor Author):

Alright, the integration has been re-incorporated and updated to include the PyMOAB commands.

@gonuke (Member) left a comment

Looks great! The integrated meshing is in good shape; the stand-alone analysis still needs some refining.

# Get current working directory
cwd = os.getcwd()
# Exodus export
exo_path = f'{cwd}/coil_mesh.exo'
Member:

Define the base_name of the file

base_name = 'coil_mesh'
exo_path = f'{file_path}/{base_name}.exo'

and use that in both places. This will facilitate changing it in the future.

# Conditionally export tetrahedral meshing
if magnets['meshing']:
    # Get current working directory
    cwd = os.getcwd()
Member:

Suggested change
-    cwd = os.getcwd()
+    file_path = os.getcwd()

Comment on lines 223 to 230
# Create a new meshset for each coil iteration
coil_meshset = mb.create_meshset()
# Add the current coil's mesh to the meshset
mb.add_entities(coil_meshset, [exodus_set])
# Set .h5m path
h5m_path = f'{cwd}/coil_mesh.h5m'
# Write the current coil's meshset to an individual .h5m file
mb.write_file(h5m_path, [coil_meshset])
Member:

Suggested change
-    # Create a new meshset for each coil iteration
-    coil_meshset = mb.create_meshset()
-    # Add the current coil's mesh to the meshset
-    mb.add_entities(coil_meshset, [exodus_set])
-    # Set .h5m path
-    h5m_path = f'{cwd}/coil_mesh.h5m'
-    # Write the current coil's meshset to an individual .h5m file
-    mb.write_file(h5m_path, [coil_meshset])
+    # Set .h5m path
+    h5m_path = f'{file_path}/{base_name}.h5m'
+    # Write the current coil's meshset to an individual .h5m file
+    mb.write_file(h5m_path, [exodus_set])

Comment on lines 173 to 175
log_split = log_general_path.split('.')
log_prefix = log_split[0]
log_ext = log_split[1]
Member:

might be easier with the pathlib package/module??
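For reference, a pathlib-based version might look like this (the example path is hypothetical):

```python
from pathlib import Path

log_general_path = Path('coil_mesh.log')  # hypothetical base log path
log_prefix = log_general_path.stem        # 'coil_mesh'
log_ext = log_general_path.suffix         # '.log' (note: includes the dot)

# Per-coil log path, e.g. coil_mesh_3.log
coil_num = 3
log_path = log_general_path.with_name(f"{log_prefix}_{coil_num}{log_ext}")
```

This avoids manual string splitting and also behaves correctly when the filename contains extra dots.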

Comment on lines 232 to 233
# Override the current mesh analytic with the best quality metric
cubit.cmd(f'quality volume {coil_num} {best_analytic} global draw mesh list')
Member:

May not be necessary?? We have this in the log file and/or data frame already

Comment on lines 188 to 202
# For-loop iterates over all of the coils included in the coils.step file
for coil_num in np.arange(1, total_coil_count + 1):

    # Create the mesh
    cubit.cmd(f'volume {coil_num} scheme tetmesh')
    cubit.cmd(f'mesh volume {coil_num}')

    # Set log file path and initiate logging
    log_path = f"{log_prefix}_{coil_num}.{log_ext}"
    cubit.cmd(f'logging on file "{log_path}"')

    # Run mesh quality metrics for each relevant metric
    for metric in quality_metric_list:
        qual_command = f'quality volume {coil_num} {metric} global draw mesh list'
        mesh_analytics = cubit.cmd(qual_command)
Member:

Separation of concerns & cognitive load:
Make this a function that creates and does the quality measures on each mesh
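As a sketch of that separation, a helper could build the Cubit command strings for one coil and leave the actual cubit.cmd() calls to the caller (the helper name is hypothetical):

```python
def coil_mesh_commands(coil_num, quality_metric_list, log_prefix, log_ext):
    """Build the Cubit command strings for meshing and quality checks on one coil.

    Hypothetical helper: the caller would pass each returned string to cubit.cmd().
    """
    commands = [
        f'volume {coil_num} scheme tetmesh',
        f'mesh volume {coil_num}',
        f'logging on file "{log_prefix}_{coil_num}.{log_ext}"',
    ]
    commands += [
        f'quality volume {coil_num} {metric} global draw mesh list'
        for metric in quality_metric_list
    ]
    return commands
```

Keeping command construction separate from execution also makes the loop body trivially testable without a Cubit session.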

Comment on lines 205 to 226
# MESH QUALITY ANALYSIS

# Read the log file
with open(log_path, 'r') as file:
text = file.read()

# Extract the data from the log file using the extract_tabular_data function
function_names, averages, std_devs, minima, maxima = extract_tabular_data(text)

# Separate multi-word function names to be able to match-case with Cubit names
function_names = split_function_names(function_names)

# Compile the extracted tabular data into a Pandas data frame
quality_metrics_df = pd.DataFrame({'Function Name': function_names,
'Average': averages,
'Standard Deviation': std_devs,
'Minimum': minima,
'Maximum': maxima})

# Remove rows where the standard deviation equals 0
# Found this for a few cases and it appeared too implausible to keep
quality_metrics_df = quality_metrics_df.loc[quality_metrics_df['Standard Deviation'] != 0]
Member:

Separation of concerns & cognitive load:
Make this a method that builds & cleans the dataframe
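A minimal sketch of such a method, reusing the construction and zero-std filter from the snippet above (the function name is hypothetical):

```python
import pandas as pd

def build_quality_dataframe(function_names, averages, std_devs, minima, maxima):
    """Assemble the quality-metric table and drop implausible zero-std rows."""
    df = pd.DataFrame({'Function Name': function_names,
                       'Average': averages,
                       'Standard Deviation': std_devs,
                       'Minimum': minima,
                       'Maximum': maxima})
    # Remove rows where the standard deviation equals 0
    return df.loc[df['Standard Deviation'] != 0]
```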

Comment on lines 242 to 261
# MESH EXPORT

# Export the mesh as an EXODUS file
cubit.cmd(f'export mesh "{exo_path}"')

# Initialize the MOAB core instance
mb = core.Core()

# Load the EXODUS file
exodus_set = mb.create_meshset()
mb.load_file(exo_path, exodus_set)

# Create a new meshset for each coil iteration
coil_meshset = mb.create_meshset()

# Add the current coil's mesh to the meshset
mb.add_entities(coil_meshset, [exodus_set])

# Write the current coil's meshset to an individual .h5m file
mb.write_file(h5m_path, [coil_meshset])
Member:

Method to export the mesh - make it optional based on the user input

Member:

Also update as in the parametric stellarator code

Member:

Consider a method that can be reused here and in parametric_stellarator for mesh export as H5M.

Comment on lines 264 to 277
# CSV EXPORT

# Rename 'index' column to 'Coil Number'
top_results.reset_index(inplace = True)
top_results.rename(columns = {'index' : 'Coil Number'}, inplace = True)

# Fix indexing for coil numbers
top_results = top_results.transpose()
top_results.iloc[0] = np.arange(1, total_coil_count + 1)
top_results = top_results.transpose()
print(top_results.head())

# Export the top_results data frame as a .csv
top_results.to_csv(csv_path)
Member:

Function to write CSV - optional?
Separation of concerns: separate analysis from I/O

@gonuke (Member) left a comment

Some quick feedback - very much in the fine tuning style aspects

Comment on lines 211 to 228
# Conditionally export tetrahedral meshing
if magnets['meshing']:
    # Assign desired directory
    file_path = os.getcwd()
    # Define the base name of the file
    base_name = 'coil_mesh'
    # Exodus export
    exo_path = f'{file_path}/{base_name}.exo'
    cubit.cmd(f'export mesh "{exo_path}"')
    # Initialize the MOAB core instance
    mb = core.Core()
    # Load the EXODUS file
    exodus_set = mb.create_meshset()
    mb.load_file(exo_path, exodus_set)
    # Set .h5m path
    h5m_path = f'{file_path}/{base_name}.h5m'
    # Write the current coil's meshset to an individual .h5m file
    mb.write_file(h5m_path, [exodus_set])
Member:

A couple of fine-tuning thoughts related to best practice, readability, and cognitive load:

  • Consider whether all the comments are necessary; good variable names and code structure can make comments redundant, and they interrupt the flow of reading a snippet of code. Comments can often be reserved for explaining more complicated algorithmic choices.
  • If the two filename definitions are adjacent, their association with each other is more readable, and it removes them from the "action" flow of the read/write steps.

@gonuke (Member) left a comment

A few more ways to streamline and simplify some of the algorithms.

I think we perhaps should have sat down to discuss the cleanest way to extract data from the logs - it seems far more complicated than it probably needs to be.

There may be some value in separating the coils_tet_mesh.py script from the rest of this PR.

Comment on lines +385 to +401
# Create an argument parser
parser = argparse.ArgumentParser(description="Magnet Coils Tetrahedral Mesh Script")

# Define command-line arguments
parser.add_argument("coils_path", type=str, help="Path to the coils .step file")
parser.add_argument("total_coil_count", type=int, help="Number of coils")
parser.add_argument("--cwd", action="store_true", help="Export outputs to current working directory")
parser.add_argument("--mesh-export", action="store_true", help="Export meshes as .exo and .h5m files")
parser.add_argument("--log-path", type=str, help="Path to .log file prefix")
parser.add_argument("--exo-path", type=str, help="Path to .exo file")
parser.add_argument("--h5m-path", type=str, help="Path to .h5m file")
parser.add_argument("--csv-export", action="store_true", help="Export data as a .csv file")
parser.add_argument("--csv-path", type=str, help="Path to .csv file")

# Parse the command-line arguments
args = parser.parse_args()

Member:

This needs to be buried in a method as well.

One goal is to allow this to be imported as a module into another python script. Ideally, no code should run at the time of import - or only code that is necessary at the time of import. As it is, this parser code will run when the module is imported and may cause issues.
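A sketch of that structure, reduced to the two positional arguments for brevity (passing argv=None keeps the function import-safe and testable):

```python
import argparse

def parse_args(argv=None):
    """Build and run the CLI parser; argv=None reads sys.argv when run as a script."""
    parser = argparse.ArgumentParser(
        description="Magnet Coils Tetrahedral Mesh Script")
    parser.add_argument("coils_path", type=str,
                        help="Path to the coils .step file")
    parser.add_argument("total_coil_count", type=int,
                        help="Number of coils")
    return parser.parse_args(argv)
```

The script's entry point would then call parse_args() under an `if __name__ == '__main__':` guard, so nothing runs at import time.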

Member:

That method can then either be called from within the conditional below, or from within coils_tet_mesh directly

# Define command-line arguments
parser.add_argument("coils_path", type=str, help="Path to the coils .step file")
parser.add_argument("total_coil_count", type=int, help="Number of coils")
parser.add_argument("--cwd", action="store_true", help="Export outputs to current working directory")
Member:

For most tools, this is the default, and the others override it. You've created an unusual hybrid where the other options define the output locations and this option overrides those. For the sake of moving forward, I'd recommend removing many of these options - we can add them back in as needed.

Comment on lines +114 to +135
for char in name:
    # Check if the character is an uppercase letter
    if char.isupper() and current_word.isupper() and len(current_word) > 1:
        # If the character is uppercase and the current_word is also uppercase
        # and not a single character, consider it a separate word and append
        # the current_word to the split_words list.
        split_words.append(current_word)
        current_word = char
    elif char.isupper() and current_word and not current_word.isupper():
        # If the character is uppercase and the current_word exists but is not
        # uppercase, consider it a separate word and append the current_word
        # to the split_words list.
        split_words.append(current_word)
        current_word = char
    else:
        # If the character is not uppercase or doesn't meet the above conditions,
        # append it to the current_word to build the word.
        current_word += char

# Append the last word to the split_words list
split_words.append(current_word)

Member:

A slightly simpler approach might be to insert a special character (underscore?) before each upper case and then use the python split()

Suggested change
-    for char in name:
-        # Check if the character is an uppercase letter
-        if char.isupper() and current_word.isupper() and len(current_word) > 1:
-            # If the character is uppercase and the current_word is also uppercase
-            # and not a single character, consider it a separate word and append
-            # the current_word to the split_words list.
-            split_words.append(current_word)
-            current_word = char
-        elif char.isupper() and current_word and not current_word.isupper():
-            # If the character is uppercase and the current_word exists but is not
-            # uppercase, consider it a separate word and append the current_word
-            # to the split_words list.
-            split_words.append(current_word)
-            current_word = char
-        else:
-            # If the character is not uppercase or doesn't meet the above conditions,
-            # append it to the current_word to build the word.
-            current_word += char
-
-    # Append the last word to the split_words list
-    split_words.append(current_word)
+    name_split = ''
+    delim = '_'
+    for char in name:
+        # Insert a delim character before each uppercase letter to facilitate splitting
+        if char.isupper():
+            name_split += delim
+        name_split += char
+    # Strip any leading delimiter so split() does not yield an empty first word
+    split_words = name_split.strip(delim).split(delim)
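Alternatively, a regex can insert the break and split in one step; this sketch also avoids producing an empty first word when the name starts with a capital:

```python
import re

def split_function_name(name):
    """Split a CamelCase metric name by inserting a space before each interior capital."""
    return re.sub(r'(?<!^)(?=[A-Z])', ' ', name).split()
```

Note that, like the delimiter approach, this would split an all-caps run ("ABC") into single letters, which may or may not matter for the Cubit metric names.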

Comment on lines +77 to +82
indices = np.arange(5)
for index, data_list in zip(indices, data_lists):
    if index == 0:
        data_list.append(row_extraction[index][0])
    else:
        data_list.append(float(row_extraction[index][0]))
Member:

Suggested change
-    indices = np.arange(5)
-    for index, data_list in zip(indices, data_lists):
-        if index == 0:
-            data_list.append(row_extraction[index][0])
-        else:
-            data_list.append(float(row_extraction[index][0]))
+    for index, data_list in enumerate(data_lists):
+        if index == 0:
+            data_list.append(row_extraction[index][0])
+        else:
+            data_list.append(float(row_extraction[index][0]))

@gonuke merged commit 45d802b into svalinn:main on Oct 2, 2023