
Improve episode 2 #44

Open
11 tasks
Tracked by #40
gcroci2 opened this issue Dec 4, 2024 · 0 comments
gcroci2 commented Dec 4, 2024

  • Focus more on this part; allow more time for exercises.
  • Research how data are actually stored (or how providers claim they are) when using chat vs. command modes, and also when using Copilot.
  • What is the policy at the eScience Center?
  • Covering the “theory” beforehand is a bit boring given this; participants will pick these things up as they use the tools. Just do a recap at the end.
  • Avoid jargon like “lenses” and “boilerplate”, or explain more clearly what these terms mean.
  • Autocomplete can also be useful for in-line comments; mention this.
  • Remove the slides for this part and explain the modes during the hands-on
  • In general, the best-practices slides can be made uniform (take a look at the episode content as well).
    • The autocomplete best-practices slide uses inconsistent programming-language formats.
  • The exercise is too wordy; at least while teaching, split it into three parts and type them along with the students (shorten the prompt if possible).
    • The point about prompt engineering comes across better live: first omit the final instruction and show that the assistant does something weird, then add it and show that you get a better result.
  • Consider choosing a different coding example. During the session the problem was about database reading; it would be nice to see another example where the problem concerns, for instance, a function or the code logic.
  • Consider adding the opinions of eScience Center RSEs on this topic, especially given that AI-tool performance still seems hard to quantify.
    • For example, consider another exercise where you submit legacy code to the AI assistant, asking for an explanation, documentation, possible pitfalls, and improvements. Dealing with legacy code is a situation researchers find themselves in very often, which could make it a valuable case study.
    • How do you retrieve the history of Command interactions in Codeium?
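A minimal sketch of the kind of snippet the proposed legacy-code exercise could use (the function name, data, and behavior here are invented for illustration, not taken from the episode material). Learners would paste it into the assistant and ask for an explanation, documentation, possible pitfalls, and improvements:

```python
# Deliberately terse, legacy-style code for the proposed exercise.
# It works, but has cryptic names, no docstring, and a manual loop
# where a comprehension would be clearer.

def proc(d, t=0):
    r = []
    for k in d:
        if d[k] > t:
            r.append((k, d[k]))
    r.sort(key=lambda x: -x[1])  # sort by value, descending
    return r

# Usage: keep readings above a threshold, highest first.
readings = {"a": 3, "b": 7, "c": 1}
print(proc(readings, 2))  # [('b', 7), ('a', 3)]
```

Something this small keeps the focus on the assistant's explanation and refactoring suggestions rather than on understanding the code itself.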
@gcroci2 gcroci2 changed the title Episode 2 Improve episode 2 Dec 4, 2024
@gcroci2 gcroci2 self-assigned this Dec 4, 2024
@gcroci2 gcroci2 moved this to Todo in lesson dev Dec 4, 2024