Cmd + K AI helper in live editor #1
base: main
Conversation
Just saving my work in case I need it later. Upon reflection, I don't think the live component is needed at all.
Also added Anthropic and improved the styling.
# TODO I'm not handling errors here and haven't thought
# much about what happens if the liveview process dies and this
# is still running
Latest LiveView (v0.20) supports assign_async for use cases precisely like these, so once we update it, you should be able to use it just fine!
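For reference, a minimal sketch of what that could look like with `assign_async` — the module names and the `MyApp.AI.complete/1` call are hypothetical placeholders:

```elixir
defmodule MyAppWeb.AiHelperLive do
  use Phoenix.LiveView

  def handle_event("ask_ai", %{"prompt" => prompt}, socket) do
    # assign_async runs the function in a task supervised alongside the
    # LiveView: it is cancelled if the LiveView process dies, and
    # failures land in the async assign instead of crashing anything.
    {:noreply,
     assign_async(socket, :suggestion, fn ->
       {:ok, %{suggestion: MyApp.AI.complete(prompt)}}
     end)}
  end

  def render(assigns) do
    ~H"""
    <.async_result :let={suggestion} assign={@suggestion}>
      <:loading>Thinking...</:loading>
      <:failed :let={_reason}>Something went wrong</:failed>
      <pre><%= suggestion %></pre>
    </.async_result>
    """
  end
end
```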
Hi @jonastemplestein, the demo looks impressive!
Unfortunately, it also shows the problem with AI: the code it generates for fizzbuzz is actually inefficient, because it appends to the list on every operation, which has to copy the left side. A better approach is to prepend and then reverse. :D
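To make the difference concrete, here is a sketch, with a hypothetical `fizzbuzz/1` standing in for the generated per-number function:

```elixir
# Appending copies the already-accumulated list on every step,
# so building the result is O(n²) overall.
Enum.reduce(1..100, [], fn i, acc -> acc ++ [fizzbuzz(i)] end)

# Prepending is O(1) per element; a single Enum.reverse/1 at the
# end restores the order, for O(n) overall.
1..100
|> Enum.reduce([], fn i, acc -> [fizzbuzz(i) | acc] end)
|> Enum.reverse()
```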
Asking it to write doctests was fantastic though. I think the refactoring/debugging/explanation features may bring more value than code generation.
The code overall looks good, although I did not look at the JS bits. Perhaps one suggestion is to change the Finch streaming so that, instead of streaming every token, it streams only the parts that you care about. It will definitely be less data to send around processes. You may also be able to provide better APIs depending on whether you are expecting a diff/code or a chat conversation. Perhaps you could ask the model not to bother sending text when all you want is code.
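A sketch of that kind of filtering, assuming a hypothetical `extract_code_deltas/1` parser for the provider's streaming chunks and a `MyApp.Finch` pool:

```elixir
def stream_code(request, lv_pid) do
  Finch.stream(request, MyApp.Finch, :ok, fn
    {:status, _status}, acc -> acc
    {:headers, _headers}, acc -> acc
    {:data, chunk}, acc ->
      # Parse the chunk inside the stream callback and forward only the
      # code deltas, so the JSON envelope and non-code tokens never
      # cross the process boundary.
      for code <- extract_code_deltas(chunk) do
        send(lv_pid, {:code_delta, code})
      end

      acc
  end)
end
```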
Multi-message conversations (question: should the messages state live on the server or the client? Probably the client)
It depends: do you want the AI state to be shared across all users or not?
Thanks for having a look! I really appreciate it 🙏 I'll keep plugging away at this, as I'm learning a lot and keen to have such a feature in Livebook. If this is something you'd be interested in launching in Livebook, I'm happy to help with that effort or share what I've learned so far. I agree the code generation isn't replacing human engineers at the moment, but it can give them more leverage, especially when working in a new stack, as lots of people coming to Elixir via Livebook are. And then, in a year or two, when there's a more powerful model, you can just plug that in.
Thanks, will have a look. We can strip out the various bits of JSON before it hits a process boundary, but we do need to stream non-code tokens, as we need them for back-and-forth conversations.
Oh, I don't think it makes sense for other users to see these inline "conversations". Although for a code-interpreter-like outer loop that lives in a separate panel and creates multiple cells, maybe. The "server" solution I was thinking of was just to keep the full agent responses in the LiveView process, which would save having to send the full response back to the client (who only needs to show the code). That said, I think it makes most sense to send everything to the client.
Actually, one more question if I may... The biggest design decision I go back and forth on is whether it would be better to implement this as a live component, as I had originally done. That would make it faster to build the UI and add basic interactions, loading states, etc. without hand-writing a bunch of fiddly JavaScript, especially once I add more complex UI states for back-and-forth conversations.

But the downside of the live component approach was that there was overall more code in more places, as I still needed a somewhat beefy JS class to deal with the Monaco API and the token streaming (which was hard to do efficiently via live component assigns).

What's your view on this? Or maybe it doesn't really matter.
I don't think the difference between LV and LC is going to matter for how much JS you need to write. LC can still be useful to decouple large parts of the code and logic... but at this point I would stick with "it doesn't really matter". Once you need to pass the notebook as context, then maybe you will have more reasons to push in one direction or the other.

Btw, did you have any feedback about training ChatGPT or Claude on the Elixir docs?
I did, yes! I just got home yesterday and was going to write it up. What's the best way to share with interested parties in a less public forum?
Email me at my GitHub email and I can share with the Livebook team, if that's ok. :) Thank you!
(Video attachment: Screen.Recording.2023-09-27.at.19.52.37_compressed_rotated.mp4)
As discussed here, here's my attempt to bring similar functionality to the live editor.
A few notes
TODO
Required before this is useful / can be shipped
Ideas for some time later