Change magnet modeling workflow to use CadQuery #139
Conversation
A few comments here, including some data structure design improvements for streamlining the code
parastell/magnet_coils.py (Outdated)
parallel_parts = []
for dir, tangent in zip(normal_dirs, self.tangents):
    parallel_parts.append(np.dot(dir, tangent))
parallel_parts = np.array(parallel_parts)
If these are numpy arrays then this block becomes
parallel_parts = np.dot(normal_dirs, self.tangents)
Or close (maybe need to specify an axis?)
As mentioned above, these are all numpy arrays. Unfortunately, the numpy functions that perform the dot product, dot and inner, do not have a way of specifying an axis along which to operate. Without a specified axis, they will just perform a full matrix operation.
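A minimal sketch of the behavior described above, assuming normal_dirs and self.tangents are both (N, 3) NumPy arrays (the shapes and sample values here are assumptions for illustration, not taken from the parastell code):

```python
import numpy as np

N = 5
normal_dirs = np.random.rand(N, 3)
tangents = np.random.rand(N, 3)

# np.dot has no axis argument; for two (N, 3) arrays it attempts a matrix
# product and fails because the inner dimensions (3 and N) do not match.
try:
    np.dot(normal_dirs, tangents)
except ValueError as err:
    print(err)

# np.inner contracts the last axes of both arrays, producing the full
# (N, N) matrix of every pairwise dot product rather than N scalars.
print(np.inner(normal_dirs, tangents).shape)  # (5, 5)
```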
Can we structure the matrices such that a full matrix operation is the right thing to do?
Force-pushed from f1e019b to 13bf0db.
Looking a lot better... just a few final polishing questions
parastell/magnet_coils.py (Outdated)
parallel_parts = []
for dir, tangent in zip(normal_dirs, self.tangents):
    parallel_parts.append(np.dot(dir, tangent))
parallel_parts = np.array(parallel_parts)
Isn't this the same as
parallel_parts = np.matmul(normal_dirs, self.tangents.transpose())
Almost. We'd then need to take the diagonal of the resultant matrix (parallel_parts is supposed to be an array of scalars and np.matmul(normal_dirs, self.tangents.transpose()) results in an NxN matrix where N is the number of points on the filament), but that's easy enough to do.
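A small sketch of that approach, again assuming (N, 3) arrays with illustrative names rather than the actual parastell data:

```python
import numpy as np

N = 5
normal_dirs = np.random.rand(N, 3)
tangents = np.random.rand(N, 3)

# Reference result: one scalar dot product per point, as in the original loop.
loop_result = np.array([np.dot(d, t) for d, t in zip(normal_dirs, tangents)])

# (N, 3) @ (3, N) gives the full (N, N) matrix of pairwise dot products;
# the per-point scalars are its diagonal.
parallel_parts = np.diagonal(np.matmul(normal_dirs, tangents.transpose()))

assert np.allclose(parallel_parts, loop_result)
```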
Maybe I described the transpose backwards then? It should definitely be possible to get a vector of N scalars from multiplying a 3xN matrix times an Nx3 matrix, right?
Force-pushed from 9cb3464 to 3fbecaf.
Still puzzled by the matrix multiply
parallel_parts = np.diagonal(
    np.matmul(outward_dirs, self.tangents.transpose())
)
See my comment elsewhere... would this get you a vector if you swapped the transpose?
parallel_parts = np.matmul(outward_dirs.transpose(), self.tangents)
They are both 3 x N matrices, right? So there must be some way to arrange them such that a matrix multiplication results in a 1 x N vector without needing to take the diagonal...
Perhaps I'm not seeing the same thing as you, but I don't believe there's a way for a matrix product to be smaller in a dimension than the smallest dimension of the original matrices. I.e., for two Nx3 matrices, we can produce a 3x3 matrix (first matrix transposed) or an NxN matrix (second matrix transposed). That said, there may be some NumPy operation capable of producing what we want, but I can't seem to find one...
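For completeness, one vectorized idiom that does yield the N per-point scalars directly, without forming the NxN intermediate, is np.einsum (or an elementwise multiply followed by a sum over the last axis). A minimal sketch under the same assumed (N, 3) shapes, offered as an aside rather than something adopted in this PR:

```python
import numpy as np

N = 5
normal_dirs = np.random.rand(N, 3)
tangents = np.random.rand(N, 3)

# einsum contracts the shared last axis row by row, giving N scalars
# directly with no (N, N) intermediate matrix.
parallel_parts = np.einsum("ij,ij->i", normal_dirs, tangents)

# Equivalent elementwise form.
assert np.allclose(parallel_parts, np.sum(normal_dirs * tangents, axis=-1))
```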
I really am getting old.... and would probably fail my qual... I'll stop questioning these things when they work...
Sorry for the runaround on all these issues...
Models the magnets using CadQuery instead of Coreform Cubit. Furthermore, restricts magnet cross-section to rectangular, removing the option for circular magnets.
Closes #138
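A minimal CadQuery sketch of the kind of operation this change relies on, sweeping a rectangular cross-section along a filament path; the workplanes, point list, and dimensions below are placeholders for illustration, not values from the parastell implementation:

```python
import cadquery as cq

# Placeholder filament path: a spline through a few points in the XZ plane.
path = cq.Workplane("XZ").spline([(0, 0), (5, 2), (10, 0), (15, -2)])

# Rectangular magnet cross-section swept along the path.
magnet = cq.Workplane("XY").rect(4.0, 2.0).sweep(path)

# Export the resulting solid, e.g. to a STEP file.
cq.exporters.export(magnet, "magnet.step")
```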