suggestions for 00-intro #31
"Well" seems to be broadly-defined here. Using an HPC resource well in terms of performance may be well outside of the scope of this lesson, as it can be dependent on things like the problem domain, the specific algorithm or application, and available hardware. At the most, we can have learners write and run simple parallel programs, so that they may understand some of the factors influencing performance/correctness in a parallel setting. I would propose that the following is a more novice-friendly objective:
Any thoughts? |
Good point Ashwin. My goal here is not optimization/performance but simple things: matching the number of threads to the number of cores you request, knowing how many jobs you can/should submit at once (good citizenship), etc. I like your version!
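For what thread/core matching looks like in practice, here is a minimal sketch in C with OpenMP. It assumes a Slurm-style scheduler that exports `SLURM_CPUS_PER_TASK`; other schedulers expose the granted core count through different variables.

```c
/* Sketch only: match OpenMP threads to the cores the scheduler granted,
 * rather than letting the runtime default to every core on the node.
 * Compile with: cc -fopenmp match_threads.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* SLURM_CPUS_PER_TASK is set by Slurm; fall back to 1 if it is absent. */
    const char *cpus = getenv("SLURM_CPUS_PER_TASK");
    int n = cpus ? atoi(cpus) : 1;
    omp_set_num_threads(n);

    #pragma omp parallel
    #pragma omp single  /* print once, from a single thread */
    printf("using %d threads on %d requested core(s)\n",
           omp_get_num_threads(), n);
    return 0;
}
```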
I second @shwina's comment and would go with this version.
I'm just a silent listener on the Carpentry lists, but I found this issue while googling all the HPC Carpentry efforts, so I'll throw in a random user-community comment: I would 100% second the very real need for this. Every HPC training event I've been involved in ran into problems when we didn't go through these high-level, context-setting concepts. I made the attached diagram as an attempt to convey some of it, but more detail on how these concepts map to different real-world platforms (grids, institute and national clusters, your laptop) and which parallel coding tools you'd use for each would be valuable. I'm not suggesting that a Carpentry lesson could go into the details of parallel programming, which obviously takes a full semester course, but just describing, e.g., that OpenMP is a tool for a shared-memory system and that you would use MPI on a distributed-memory system because... can be super helpful.
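To give a feel for the shared- vs distributed-memory distinction described above, here is a toy hybrid "hello" in C (illustrative only, not lesson material): OpenMP threads share memory within one process, while MPI ranks are separate processes that would communicate by message passing.

```c
/* Sketch: both parallel models in one program.
 * Compile with: mpicc -fopenmp hello_hybrid.c
 * Run with (e.g.): mpirun -n 2 ./a.out */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               /* distributed memory: one process per rank */
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel                  /* shared memory: threads within this process */
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```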
@r4space thanks for the feedback and for sharing the diagram. Within the group we discussed starting off with this set of lessons, going as far as using the scheduler. We then considered creating another set of lessons. Constructive feedback is always very welcome.
I love the diagram, @r4space! This is exactly the kind of thing I think we should include, so that people can make better choices about what they're doing. Not talking about how to program, or going into too many details, but just enough that there's a big-picture motivation and a framework to use later. It's tricky to figure out how to present it so learners get the big picture without being overwhelmed, but I'm glad I'm not the only one who wants to see it happen.
"simple things like -- match the number of threads to the number of cores you request" I don't think this is simple. 1. hyperthreads? 2. how do you set the number of threads to begin with? OpenMP is very dynamic in number of threads created, and a large number of threads can improve performance through statistical evening out of the load 3. and no mention of affinity? |
@VictorEijkhout thanks for the feedback. You are right, but... this is not the league this material is playing in. While topics such as hyperthreading and affinity may get a mention in the session, at this point we want to convey concepts that novice HPC users can relate to easily. That is a tough challenge in its own right, and CPU cores are already mind-boggling for someone who has never written a parallel program. I'm afraid adding more computer-architecture detail will only confuse learners (at least in my experience, and for the audience we have in mind).
Suggestions of key points to cover:
Conclusion: this workshop will show you not only how to use a cluster/distributed system, but when/why you would want to use one and how to use it *well*.