Web/App Generators? e.g: websim.ai, tabby, io.google, v0.dev, claude3.5 #142
Had a quick look. Could be promising. I've been saying that "AI" will replace all …
@nelsonic you are full of clever citations!!! Tofu-dreg!!! Indeed, it's kind of that, but I have no talent and no patience for the CSS side, so this would give me a kind of base. Not expecting more. I know a lot of people use Copilot, but unfortunately, I don't have a free one. But I found https://tabby.tabbyml.com/. Seems like yet another Copilot, free (for the moment). I always wonder how people can afford to pay for the servers for this. What is the product?
I don't have this problem 😄.
So, since you can import the server messages in a LiveView app, you can also potentially debug Elixir errors? No idea if the LLM has effectively developed Elixir knowledge.
Thanks for sharing the link to the Forum topic. 🔗 🤖 💭 I don't have time to dig deeper right now, but it looks promising! 💭
Yes, and it made me think of a wise response:
More seriously, it is good in some sense. An Elixir-RAG that digested the whole Elixir GitHub might be good, but not quite yet.
A little complaint. Then, for some reason, I could not render anymore: I had a "no root layout" error ?!! I had no idea what was happening. Yes, I compared file by file to try to understand, but I lost patience and it did not help me. So many layers of Phoenix niceties that I gave up! => I created a brand new project, copied the code back, and it worked again. I was hoping Claude would be more clever than me, but no: my brute force worked and was faster. Arguably, I did not learn anything by not finding the problem, but neither did Claude. So the code Claude produced is pretty good in some sense. Arguably, my subject is not rocket science. Also, I don't have the feeling of working with a senior dev, but rather with a good helper. I also tried ChatGPT in parallel, as for some reason I seem to have access to v4. I would say that ChatGPT seemed less accurate: I corrected the code for MediaPipe and ML5 as it seemed to use an older API. Saying that, I almost used "him" instead of "it" when I talk about Claude or ChatGPT. I'm a bit schizophrenic, aren't I? 😄 I hope I corrected them all. Now, I was thinking of letting Claude produce a kind of Dwyl post on using WebSockets and Channels to pass binary data. If you are interested in having this, I will do it (mostly because I will forget how to do it!).
Correction! Claude IS good.
So I played with Claude. Here is my wonderful weekend work: LiveView and passing binary data with Channels or raw WebSockets. At least I learnt something! https://github.com/ndrean/phoenix-websockets I would say: "don't rely on it". I asked for some JS code to perhaps zip or deflate the images I sent, and it happily proposed one. I may try to run this on Fly, just to see how this demo works. If you think it is valuable work for you to have this as a repo in Dwyl, let me know.
Very interesting! How much of this was Claude?
Claude produced the solution when I wanted to send a "large" file from the server to the browser. The problem was to pass indexes. But don't copy blindly.... Claude produced this server-side:

```elixir
File.stream!("channel.jpg", 1024 * 10)
|> Stream.with_index()
|> Enum.each(fn {chunk, index} ->
  push(socket, "new chunk", {:binary, <<index::32, chunk::binary>>})
end)
```

https://github.com/ndrean/phoenix-websockets/blob/d2b1d332ba05f21a4611cf1a7462f4f04905d4d9/lib/ws_web/channels/image_channel.ex#L24

and produced this client-side:

```js
let view = new DataView(payload);
let index = view.getInt32(0);
let chunk = payload.slice(4);
imageChunks[index] = chunk;
```

The rest is "standard", as the Channels docs document passing binary data (I found another clue on the ElixirForum regarding Phoenix.Socket.Transport, in a response from José Valim). Claude also refactored and corrected my JS code for sending big files from the browser to the server.
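For the browser-to-server direction, here is a minimal sketch of what the receiving channel callback can look like, assuming the same 4-byte index prefix as above (the function head and assign names are illustrative, not taken from the repo):

```elixir
# Receiving side: peel the 4-byte index off the binary payload,
# mirroring the client-side DataView code above.
def handle_in("new chunk", {:binary, <<index::32, chunk::binary>>}, socket) do
  # Accumulate chunks keyed by index in the socket assigns.
  chunks = Map.put(socket.assigns[:chunks] || %{}, index, chunk)
  {:noreply, assign(socket, :chunks, chunks)}
end
```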
Yeah, it's only going to get better. Have you tried Melty? 💭
No, not yet. I only have a small M1 with 8GB of RAM. I'm putting it here in order to come back to it later. One day perhaps, an Ubuntu PC with 32GB of RAM and a solid GPU....
And waiting for the AI bubble to burst to get a cheap second-hand GPU 🤔
Yeah, the energy demands for all these chips are insane! When the AI Bubble 🫧 bursts, and it will ... the whole economy is coming crashing down hard 📉 😬 It's incredible how … The "investors" (speculators) who rode this wave and got out …
In France, it's around €0.25 per kWh. What seems surprising is how Apple tackles this trend. Once again, it is a closed ecosystem: you can't plug a GPU into your Mac. I know there is Metal and Core ML, but I am not sure how you can leverage them. So I asked my new friend (I have no idea if ChatGPT is hallucinating or not...):
Response:

> Using Apple's Core ML directly in the Elixir language isn't straightforward because Core ML is a framework designed for use with languages like Swift and Objective-C, which are native to Apple's ecosystem. Elixir, being a functional language that runs on the Erlang virtual machine (BEAM), doesn't have direct bindings to Core ML. However, you can still integrate Core ML models in an Elixir-based project by using a few different approaches:
>
> - Approach 1: Using Ports or NIFs
> - Approach 2: External Microservice
> - Approach 3: Using Core ML via Python
>
> Summary: While Elixir doesn't natively support Core ML, you can integrate Core ML functionality into your Elixir projects by one of the approaches above. This allows you to benefit from the M-series chips and Core ML while developing in Elixir, albeit with some additional complexity.
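To make Approach 1 concrete, here is a minimal sketch of driving a hypothetical Python wrapper script from Elixir via a Port. Both `coreml_predict.py` and its line-based protocol are assumptions for illustration; the real script would load the model with `coremltools`:

```elixir
# Sketch only: `coreml_predict.py` is a hypothetical script that reads one
# line of input on stdin and prints one line of prediction on stdout.
defmodule CoreMLPort do
  def predict(input) when is_binary(input) do
    port = Port.open({:spawn, "python3 coreml_predict.py"}, [:binary, :exit_status])
    send(port, {self(), {:command, input <> "\n"}})

    receive do
      {^port, {:data, result}} ->
        Port.close(port)
        {:ok, String.trim(result)}

      {^port, {:exit_status, status}} ->
        {:error, {:exit, status}}
    after
      5_000 -> {:error, :timeout}
    end
  end
end

# Usage: CoreMLPort.predict("some input") #=> {:ok, "prediction"}
```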
Meanwhile ... nelsonic/nelsonic.github.io#912 (comment) 🤯
@ndrean recommend watching: https://youtu.be/U_cSLPv34xk 📺 Thoughts? 💭
FULLY AGREE with him. But, saying that, who cares if I agree with him! 😀 Why are there so many experts/gurus on this new AI paradigm? It is hard to say smart things, so let's stick to small facts. ML is good for a few scoped tasks, but certainly not at developing a full project, at the time of writing (saved 😬). However, its capacity for "inventing" can be very useful: aren't they (pretty? very?) good at hallucinating things for you, like images? For the Elixir ecosystem, it still needs much more fine-tuning. Check my little RAG pipeline test on the Phoenix LiveView documentation to see, through an example, how a RAG pipeline improves the LLM's inference.
Things improve quickly. Take a look at Google's Gemini 1.5; as explained in the paper below, it accepts 1M tokens, up to 10M, as the context window, or roughly 30,000 LOC up to 300,000. A significant improvement which might make the RAG even better. In particular, they demo interacting with three.js. This works, as they say, with JavaScript, but probably not with small languages like Elixir and its evolving frameworks. The big limitation: the cost of using this. For example, the pricing of the "small" Gemini Flash.
If you are looking for something to read, the paper below explains the possible next-gen models surpassing the transformer model (2017): it is called SSM (state space model, 2021). It is supposed to be fast and to support long context (what the RAG does, in some way). An implementation: Jamba 1.5. You may need .... two 80GB GPUs 🤨 but - good news - a quantized version (8-bit precision) may only need one GPU. We are saved 🥳 https://huggingface.co/ai21labs/AI21-Jamba-1.5-Mini The SSM explained:
Did someone try grok? They claim to have developed LPU chips. So CPU, GPU, TPU, and LPU.... Lots of marketing?

> Language Processing Units (LPUs) are chips developed to aid in the development of LLMs. Built from the ground up to make language training fast and efficient, LPUs aren't going to replace GPUs as the preferred chip for AI tasks, but there are likely tasks where they'll soon be the best possible choice. [...] Architecturally, LPUs are designed for sequential, rather than parallel, computationally intensive applications....

Let's try to understand another new marketing concept - "optimistic UI" - and ask in the Groq console. The response:

> Optimistic UI is a concept in Phoenix LiveView that focuses on providing a better user experience by allowing the user to see the effects of their actions instantly, even if there's a possibility that the action might eventually fail. This is achieved by introducing a temporary, optimistic state in the client before sending a request to the server.
>
> In other words, Optimistic UI in Phoenix LiveView is a way to temporarily update the UI to reflect a change made by the user, before the server has confirmed whether the change was valid. This approach allows for faster and more interactive user interfaces.
>
> Here's how it works:
>
> 1. The user makes a change to the UI, such as editing a text field or clicking a button.
> 2. The LiveView client-side code captures this change and temporarily updates the UI to reflect this change.
> 3. The client then sends a request to the server to update the state.
> 4. If the server confirms the change (e.g., the text field value is valid), the client updates the state permanently and sends the new state back to the server.
> 5. If the server rejects the change (e.g., the text field value is not valid), the client rolls back the state change and sends the previous state back to the server.
>
> This approach allows for a better user experience, as users can see the effects of their actions immediately, without having to wait for the server's confirmation.
>
> For example, consider a form that allows users to edit their profile information. Using Optimistic UI, when a user edits their name, the text field on the screen is updated instantly, even before the request is sent to the server. If the server rejects the change (for example, because the username is not valid), the client can roll back the change, and the user will immediately see the previous username value.
>
> In the context of Phoenix LiveView, you can implement Optimistic UI by using the `push_event` function to update the state and the UI while the request is being sent, and then using the `assign` function to undo the change if the server rejects it.
@ndrean did you mean Groq?
This response from …
Ah yes, I did not know about Grok (the X one 😁). Below is an "uncontexted" answer from …. At least, with my "tiny" RAG attempt, I really saw the difference in the answer when the LLM is fed with some ("good") context ("good" because the context is semantically close to the question and based on "real" documentation).

> Optimistic UI is a fascinating concept that can greatly enhance the user experience in Phoenix LiveView applications.
>
> In a nutshell, optimistic UI is a technique that allows users to edit data while a long-running operation is being performed, by keeping track of the changes and only redoing the UI update if the operation was successful.
>
> In LiveView, optimistic UI is particularly useful when dealing with complex operations like creating or updating records in a database. These operations might take some time, and in the meantime, the user might want to continue editing other data or performing other actions. Optimistic UI ensures that the user's changes are persisted and only applied if the operation is successful.
>
> Here's a high-level overview of how optimistic UI works in LiveView:
>
> - **Initial Fetch**: When a page loads, LiveView fetches the initial data from the database or an external API. This data is used to render the UI.
> - **Initial Rendering**: The UI is rendered based on the initial data, and the user can start interacting with the page.
> - **Editing**: The user makes changes to the data, and the changes are persisted locally by LiveView. This saved data is then sent in batches to the server for processing upon each update.
> - **Server Processing**: The server processes the changes (e.g., updates the database) and sends the result back to the client.
> - **Result Rendering**: LiveView receives the result from the server and renders the updated UI. If there were any issues with the update (e.g., invalid data, record already exists), LiveView will display an error message and retain the previously entered data.
>
> Optimistic UI has several benefits:
>
> - **Improved User Experience**: Users can continue to edit data while operations are being performed, leading to a better overall experience.
> - **Reduced Network Traffic**: Since the user's changes are only sent to the server in batches, network traffic is reduced.
> - **Better Error Handling**: If an error occurs during processing, the user's changes are retained, allowing them to review and resolve the issue.
>
> In Phoenix LiveView, you can implement optimistic UI using its built-in features and libraries. For example, LiveView provides a `patch/3` function for updating data on the server, which supports optimistic concurrency. Additionally, libraries like `ebrief` offer more advanced features for handling optimistic UI in LiveView applications.
>
> In summary, optimistic UI is a technique for creating more responsive and user-friendly interfaces in Phoenix LiveView applications by allowing users to edit data while long-running operations are being performed, while keeping track of the changes and only redoing the UI update if the operation is successful.
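Note that neither `patch/3` nor `ebrief` appear in the official LiveView docs, so that last part looks hallucinated. For contrast, here is a minimal sketch of one real way to get optimistic UI in LiveView (module, id, and event names are illustrative): `Phoenix.LiveView.JS` commands run in the browser immediately, and the server's subsequent render confirms or overwrites the optimistic state.

```elixir
defmodule MyAppWeb.LikeLive do
  use Phoenix.LiveView
  alias Phoenix.LiveView.JS

  def mount(_params, _session, socket), do: {:ok, assign(socket, :count, 0)}

  def render(assigns) do
    ~H"""
    <button id="like-btn" phx-click={JS.add_class("liked") |> JS.push("like")}>
      Likes: <%= @count %>
    </button>
    """
  end

  # The server stays the source of truth: its reply re-renders the button,
  # confirming (or overwriting) the optimistic client-side change.
  def handle_event("like", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end
end
```

The `JS.add_class("liked")` part executes client-side before any round-trip, which is the "optimistic" bit; the `JS.push("like")` then asks the server to make it real.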
For reference, a repo that illustrates the ChatGPT o1 trend (instead of increasing the model size or training time, you increase inference time, meaning more "reasoning"). The repo: … The costs of using it: … It also uses Streamlit, which is a cool Python library. Read about …
Yet another promise... It focusses on coding in … People who don't use AI will be out of a job soon? Link to Pythagora: … The (convincing) YT link: …
Maybe too political to appear here? Reading that OpenAI raised more than $6bn (https://finance.yahoo.com/news/openai-closed-funding-round-raising-161842157.html) while losing $5bn, and is still valued at $150bn, I feel dizzy with these figures. I believe it's a way to kill the competition, just like Amazon did for 15 years in a row. OpenAI is like a magnet for investors, and only a few other competitors will drain cash. If the bubble doesn't burst immediately, and given that AI will become essential everywhere, this could work. Indeed, I read that they expect to double the price from $20 to $40 in the next couple of years: this gives hope to the investors, as customers might become hostages. This could be the idea. But again, this will work only if there is no more competition, if AI is everywhere and essential, and if they stop being toxic: so many people leaving! Looks like a typical big French IT services company: toxic, 30% turnover, treating customers like sh*t. My own experience; I've been on both sides :)
It's fine to be "political". 😆 Sam is raising as much cash as he can because he knows what is coming! 😬 📉
Don't know to what extent … By far the biggest determinant of how much … I firmly believe in …
Ah, just discovered your "pirate" channel nelsonic/nelsonic.github.io#733 ! Plenty of resources to read for the coming winter evenings :)
@nelsonic yes! I did not try the audio though. The speech is very American-style, isn't it? 😄
@nelsonic Another gun from Google, a bit like NotebookLM: https://illuminate.google.com/home. It also produces audio, perhaps less "American", or less opinionated if you prefer. If you drive a lot, it can be nice.
@ndrean voices on …
Computer use ... https://www.anthropic.com/news/3-5-models-and-computer-use 🤖 💻 💭
In the link below, you can discover plenty of cool projects: … For example: …
Apple MLX: MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research. MLX is available as an Nx backend. Tested! Instead of the usual backend, add `{:emlx, github: "elixir-nx/emlx", override: true}` to your deps. It does not like integers, only floats ⭐️
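For reference, a minimal sketch of how that might be wired up. Untested assumption: EMLX exposes an `EMLX.Backend` module, as Nx backends conventionally do:

```elixir
# Sketch: use MLX as the default Nx backend via the emlx bindings.
Mix.install([
  {:nx, "~> 0.9"},
  {:emlx, github: "elixir-nx/emlx", override: true}
])

# Assumption: the backend module is named EMLX.Backend.
Nx.default_backend(EMLX.Backend)

# Stick to floats: as noted above, integer tensors are not supported.
t = Nx.tensor([1.0, 2.0, 3.0])
t |> Nx.multiply(t) |> Nx.sum() |> IO.inspect()
```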
Take a look at this.
It creates a website for you. Yet another AI-assisted tech?
Yes, perhaps, but honestly, it's pretty good! I mean, it populates content based on the theme you give it, and it understands your orders. If you say, "I want a website to promote herbal teas (why not!) with pictures inspired by Zen philosophy and related to Japan, presenting my products in cards, and zooming in on a picture only when you hover over it", it does all this. You can create games etc. You can check what others created; I checked DOOM. You can download the code, check the HTML, CSS. Pretty good! For the least gifted like me, I can say that it quickly gives a solid foundation to prototype something.
Disclaimer: I don't make any money from this. It's free, so you must be the product, but I did not understand how the creators think they will make money, nor how they can finance the server costs to make this work.
https://websim.ai/