Web/App Generators? e.g: websim.ai, tabby, io.google, v0.dev, claude3.5 #142

Open
ndrean opened this issue Jul 16, 2024 · 43 comments

@ndrean

ndrean commented Jul 16, 2024

Take a look at this.

It creates a website for you. Yet another AI-assisted tech?
Yes, perhaps, but honestly, it's pretty good! I mean, it populates the site based on the theme you give it, and it understands your instructions. If you say, "I want a website to promote herbal teas (why not!) with pictures inspired by Zen philosophy and related to Japan, presenting my products in cards, and only zooming in on a picture when you hover over it", it does all this. You can create games, etc. You can check what others created, and I checked DOOM. You can download the code and check the HTML and CSS. Pretty good! For the less gifted like me, I can say that it quickly gives a solid foundation to prototype something.

Disclaimer: I don't make any money from this; it's free, so you must be the product, but I did not understand how the creators think they will make money, nor how they can finance the server costs to make this work.

https://websim.ai/

Screenshot 2024-07-16 at 22 09 37
@nelsonic
Member

Had a quick look. Could be promising. I've been saying that "AI" will replace all Dev for a while ...
The allure of "AI" building a website/app for us is too tempting for many people to resist. 🤖
For now it's just templates and generators (i.e. smoke and mirrors). 🪞
I wish I could agree with you that an "AI"-generated site provides a "solid foundation" ... 💭
Sadly, it's more of a "Tofu-dreg" that only the "AI" can maintain.
And if people don't learn how to extend and maintain their idea, it dies.

@nelsonic nelsonic added the discuss Share your constructive thoughts on how to make progress with this issue label Jul 19, 2024
@ndrean
Author

ndrean commented Jul 19, 2024

@nelsonic you are full of clever citations!!! Tofu-dreg!!! Indeed, it's kind of that, but I have no talent and no patience for the CSS side, so this would give me a kind of base. Not expecting more.

I know a lot of people use Copilot, but unfortunately, I don't have a free one. But I found https://tabby.tabbyml.com/. Seems like yet another Copilot, free (for the moment). I always wonder how people can afford to pay for the servers behind this. What is the product?

@nelsonic
Member

Copilot is good for certain tasks but I found it gets in my way a lot too so I've disabled it. 🙅
I was an early beta tester and am still hopeful for it. 🤞
However it will rapidly become a "crutch" for people who think they are "10x Engineers" because they appear to be producing lots of code ... but if they don't understand it and nobody else in their team can either ... 😬

@ndrean
Author

ndrean commented Jul 20, 2024

I don't have this problem 😄.
I don't expect it to code for me, but instead I looked into this because I thought - like I imagine 99% of people - that it could be nice (?) for:

  • helping with naming variables or functions,
  • pasting some code and asking for an explanation (I tried with mine),
  • proposing a skeleton of code when you use some case or switch, things that are repetitive,
  • maybe helping to understand errors,
  • ...

I am targeting widely used languages, such as JavaScript or Go. I don't expect anything for Elixir.

So, Tabby? Maybe. Tools seem to pop up all the time and improve quite a lot.

I've been saying that "nelsonic/nelsonic.github.io#733" will replace all Dev for a while ...

Screenshot 2024-07-20 at 17 01 47

Since you can import the server messages in a LiveView app, you can also potentially debug Elixir errors? No idea whether the LLM has effectively developed any Elixir knowledge.

@ndrean ndrean changed the title Take a look at websim.ai Take a look at websim.ai, tabby, io.google Jul 20, 2024
@nelsonic nelsonic changed the title Take a look at websim.ai, tabby, io.google Web/App Generators? e.g: websim.ai, tabby, io.google, v0.dev Jul 24, 2024
@nelsonic
Member

https://v0.dev
image

@ndrean
Author

ndrean commented Jul 24, 2024

A thread from the Elixir Forum (which, by the way, doesn't use an Elixir-based framework for its own forum either).

Screenshot 2024-07-24 at 23 21 13

Screenshot 2024-07-24 at 23 22 56

@ndrean ndrean changed the title Web/App Generators? e.g: websim.ai, tabby, io.google, v0.dev Web/App Generators? e.g: websim.ai, tabby, io.google, v0.dev, claude3.5 Jul 24, 2024
@nelsonic
Member

Thanks for sharing the link to the Forum topic. 🔗 🤖 💭
https://github.com/paul-gauthier/aider + https://www.anthropic.com/news/claude-3-5-sonnet
looks like a good combo. 👌
Have you tried it?

I don't have time to dig deeper right now, but it looks promising! 💭

@ndrean
Author

ndrean commented Jul 25, 2024

Have you tried it?

Yes, and it made me think of a wise response:

I've been saying that "nelsonic/nelsonic.github.io#733" will replace all Dev for a while ...🤔

More seriously, it is good in some sense. An Elixir RAG that has digested the whole of Elixir's GitHub might be good, but not quite yet.
My example, if you have the patience and time to spare. I wanted to do a rather simple task: compare image-captioning techs running pre-trained models, either server-side or in the browser.
I was lazy, so I asked for some JS code for ml5.js, MediaPipe and Transformers.js to implement it in the browser.
I had a dilemma: I needed the models to evaluate the same image, which I got randomly from "picsum". Should I fetch it from the browser, send it to the server to run the model, and in parallel run the client-side models, or should I get it from the server, run the model there, and in parallel send it to the browser and run the client-side models?
It responded with a server-side request (I specified Req). Pretty good: Bumblebee, EXLA, but it failed to convert the image into a tensor and forgot to declare the Nx backend (and so did I...).
Then it converted the image into base64 to populate the <img src=...> that is rendered via an assign server-side, and then ran the models on the DOM element. It was better than me because it reminded me to pass the assigns in the mount, which I had not done, so nothing rendered.
However, this means that the image data is sent via the LiveSocket as a string. I asked Claude: who would transform an image into base64 and increase the payload by about 30%, as well as the latency? I suggested solutions such as:

  • file upload,
  • pubsub the binary from the LiveView to a Channel, send the binary via the userSocket (or whatever name you give it), and directly use "onmessage" on the userSocket to get a binary payload (see the sketch just after this list),
  • set up a raw WebSocket directly instead of the Channel overhead, directly saving about 30% of the data transferred. It got mixed up, so I showed it the solution using Phoenix.Socket.Transport.
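
The PubSub-to-Channel option looks roughly like this (a sketch only; the topic, event name and module names are made up, and the Channel is assumed to have subscribed to the topic in join/3):

# In the LiveView (or any process), once the image binary exists server-side:
def broadcast_image(id, jpeg_binary) do
  Phoenix.PubSub.broadcast(MyApp.PubSub, "image:#{id}", {:image, jpeg_binary})
end

# In the Channel, which subscribed to "image:" <> id in join/3:
def handle_info({:image, jpeg_binary}, socket) do
  # pushed as a binary frame: no base64, so none of the ~30% overhead
  push(socket, "image", {:binary, jpeg_binary})
  {:noreply, socket}
end

On the JS side the "image" event then arrives with a binary payload instead of a base64 string.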

A little complaint. The Bumblebee library wants a tensor; honestly, when do you ever have a tensor at hand? Most probably a file, or a buffer. Compare with the client-side libraries: they all simply take a DOM element - say <img> - as input and work on it. So more than Claude's response, I thought that Bumblebee itself was kinda rough, so I excused Claude.
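
For the record, the server-side path we were fumbling with boils down to something like this (a sketch, not Claude's code; the BLIP checkpoint and the picsum URL are my assumptions, and you need req, bumblebee, exla and stb_image in mix.exs):

# Declare the Nx backend once (the step both Claude and I forgot).
Nx.global_default_backend(EXLA.Backend)

repo = {:hf, "Salesforce/blip-image-captioning-base"}
{:ok, model_info} = Bumblebee.load_model(repo)
{:ok, featurizer} = Bumblebee.load_featurizer(repo)
{:ok, tokenizer} = Bumblebee.load_tokenizer(repo)
{:ok, generation_config} = Bumblebee.load_generation_config(repo)

serving = Bumblebee.Vision.image_to_text(model_info, featurizer, tokenizer, generation_config)

# The part that went wrong: turning the fetched JPEG into the tensor Bumblebee wants.
%{body: jpeg} = Req.get!("https://picsum.photos/400")
{:ok, image} = StbImage.read_binary(jpeg)

Nx.Serving.run(serving, StbImage.to_nx(image))
#=> %{results: [%{text: "..."}]}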

Then for some reason, I could not render anymore: I had an error "no root layout"?! I had no idea what was happening. Yes, I compared file by file to try to understand, but I lost patience and it did not help me. So many layers of Phoenix niceties that I gave up! => I created a brand-new project, copied the code back in, and it worked again. I was hoping Claude would be cleverer than me, but no, my brute force worked and was faster. Arguably, I did not learn anything by not finding the problem, but neither did Claude.

So the code Claude produced is pretty good in some sense. Arguably, my subject is not rocket science. Also, I don't have the feeling of working with a senior dev, but rather with a good helper. I also tried ChatGPT in parallel, as for some reason I seem to have access to v4. I would say that ChatGPT seemed less accurate; I corrected the code for MediaPipe and ml5 as it seemed to use an older API. That said, I almost used "him" instead of "it" when I talk about Claude or ChatGPT. I'm a bit schizophrenic, aren't I? 😄 I hope I corrected it.

Now, I was thinking of letting Claude produce a kind of Dwyl post on using WebSockets and Channels to pass binary data. If you are interested in having this, I will do it (mostly because I will otherwise forget how to do it!).
But I will continue as long as it is free, because I enjoyed it! I almost never got stuck on a stupid problem.

@ndrean
Author

ndrean commented Jul 25, 2024

Correction! Claude IS good.
An example. I want to secure a raw WebSocket connection by passing a user token and having it checked on connect. This is basic. But I have a problem when I declare "check_origin: false": Sobelow does not like it. I needed help with this, as I am 100% sure I would get it wrong, with the usual confusion between runtime and compile-time configuration. Just hard. For example, between System.get_env and System.fetch_env!, one works and the other doesn't, Application.compile_env, etc. Unless you are well versed in this, it is hell. Well, Claude solved it for me with NimbleOptions and it seems to work!
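
Roughly, the connect-time token check looks like this (a sketch in my own words, not Claude's NimbleOptions version; the module names, the "user socket" salt and the max_age are placeholders):

defmodule MyAppWeb.RawSocket do
  @behaviour Phoenix.Socket.Transport

  # Mounted in the endpoint with: socket "/raw-ws", MyAppWeb.RawSocket, websocket: true
  @impl true
  def child_spec(_opts) do
    # No process needed for this transport; return a spec that exits immediately.
    %{id: __MODULE__, start: {Task, :start_link, [fn -> :ok end]}, restart: :transient}
  end

  @impl true
  def connect(%{params: %{"token" => token}}) do
    # Must use the same salt and max_age as where the token was signed.
    case Phoenix.Token.verify(MyAppWeb.Endpoint, "user socket", token, max_age: 86_400) do
      {:ok, user_id} -> {:ok, %{user_id: user_id}}
      {:error, _reason} -> :error
    end
  end

  def connect(_state), do: :error

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_in({_message, _opts}, state), do: {:ok, state}

  @impl true
  def handle_info(_message, state), do: {:ok, state}

  @impl true
  def terminate(_reason, _state), do: :ok
end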

@ndrean
Author

ndrean commented Jul 29, 2024

So I played with Claude. Here is my wonderful week-end work: LiveView and passing binary data with Channels or raw WebSockets. At least I learnt something!

https://github.com/ndrean/phoenix-websockets

I would say: "don't rely on it". I asked for some JS code to perhaps zip or deflate images I sent. It happily proposed pako. Then I suggested to use the native compression stream, but the JS code wasn't good. The good part is that the code is not difficult. I ran it but discovered that compressing an image is useless, as JPEG is already compressed. So I asked why I hardly saw a difference and Claude responded that indeed JPEG is maybe not a good fit.... a bit of a waste of time wasting, isn't it? I asked Claude to explain a bit the code, but most was paraphrasing. Some good bits however.

I may try to run this on Fly, just to see how this demo works.

If you think it is valuable work for you to have this as a repo in Dwyl, let me know.

@nelsonic
Member

Very interesting! How much of this was Claude output? 💭

@ndrean
Author

ndrean commented Jul 29, 2024

Claude produced the solution when I wanted to send a "large" file from the server to the browser. The problem was to pass indexes.

But don't copy blindly...

Claude produced this server-side:

# Stream the file in 10 KB chunks and prefix each chunk with a 4-byte index
# so the client can reassemble them in order.
File.stream!("channel.jpg", 1024 * 10)
|> Stream.with_index()
|> Enum.each(fn {chunk, index} ->
  push(socket, "new chunk", {:binary, <<index::32, chunk::binary>>})
end)

https://github.com/ndrean/phoenix-websockets/blob/d2b1d332ba05f21a4611cf1a7462f4f04905d4d9/lib/ws_web/channels/image_channel.ex#L24
You see the trick: the JS should read the data as an array of bytes, read the first 4 bytes... and glue everything back together in the right order by index.

and produced this client-side:

// payload is an ArrayBuffer: the first 4 bytes carry the chunk index
let view = new DataView(payload);
let index = view.getInt32(0);
let chunk = payload.slice(4);
imageChunks[index] = chunk;

https://github.com/ndrean/phoenix-websockets/blob/d2b1d332ba05f21a4611cf1a7462f4f04905d4d9/assets/js/channelHook.js#L18

The rest is "standard", as the Channels docs cover passing binary data (I found another clue on the Elixir Forum regarding Phoenix.Socket.Transport, in a response from J. Valim).

Claude refactored and corrected my JS code when sending big files from the browser to the server:

https://github.com/ndrean/phoenix-websockets/blob/d2b1d332ba05f21a4611cf1a7462f4f04905d4d9/assets/js/channelHook.js#L74
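
The server side of that upload path is the mirror image. Roughly (a simplified sketch, not the exact code from the repo; the event names and the reassembly are made up):

# Binary Channel payloads arrive as {:binary, data}; the first 4 bytes are the chunk index.
def handle_in("chunk", {:binary, <<index::32, chunk::binary>>}, socket) do
  chunks = Map.put(socket.assigns[:chunks] || %{}, index, chunk)
  {:reply, :ok, assign(socket, :chunks, chunks)}
end

def handle_in("done", _payload, socket) do
  file =
    socket.assigns
    |> Map.get(:chunks, %{})
    |> Enum.sort_by(fn {index, _chunk} -> index end)
    |> Enum.map_join(fn {_index, chunk} -> chunk end)

  # Do something with the reassembled binary, e.g. File.write!/2.
  {:reply, {:ok, %{size: byte_size(file)}}, socket}
end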

@ndrean
Author

ndrean commented Sep 3, 2024

By Charlie Holtz

Screenshot 2024-09-03 at 16 31 23

@nelsonic
Member

nelsonic commented Sep 3, 2024

Yeah, it's only going to get better. Have you tried Melty? 💭

@ndrean
Author

ndrean commented Sep 3, 2024

No, not yet. I have only a small M1 with 8GB of RAM. I put it here in order to come back later. One day perhaps, an Ubuntu PC with 32GB of RAM and a solid GPU....

@ndrean
Author

ndrean commented Sep 3, 2024

And waiting for the AI bubble to burst to get a cheap second-hand GPU 🤔
Seriously, so many apps using AI/GPUs! How can they pay the bill?
Not even talking about who makes the money, just thinking of all the electricity you must produce for air conditioning and powering all this.
Norway should be top 1 in the world for greenwashing this: no air conditioning 😄 and an incredible amount of hydropower for free.

@nelsonic
Member

nelsonic commented Sep 4, 2024

Which GPU will you get? 💭 and is your electricity price cheap enough to power it? ⚡ 💸

Yeah, the energy demands of all these chips are insane!
10-20x more energy per query than a regular Google search!
Norway, Iceland and Greenland are all well-positioned for building epic data centers.

When the Ai Bubble 🫧 bursts, and it will ... the whole economy is coming crashing down hard 📉 😬
There's been waaaay too much CapEx on Ai hardware in the last few years. 💰 🔥

It's incredible how intel (INTC) has missed the boat on "Ai" while NVDA has sky-rocketed! 📈 🚀 🤯
image

The "investors" (speculators) who rode this wave and got out before the bubble burst are laughing so hard on their yachts ... 💭

@ndrean
Author

ndrean commented Sep 4, 2024

In France, it's around €0.25 per kWh.

What seems surprising is how Apple tackles this trend. Once again, it is a closed ecosystem. You can't plug a GPU into your Mac. I know there are Metal and Core ML, but I am not sure how you can leverage them.

So I asked my new friend (I have no idea if ChatGPT is hallucinating or not...):

Question: Can I use Apple Core ML with the Elixir language on an M-series chip?

Response:


Using Apple’s Core ML directly in the Elixir language isn't straightforward because Core ML is a framework designed for use with languages like Swift and Objective-C, which are native to Apple's ecosystem. Elixir, being a functional language that runs on the Erlang virtual machine (BEAM), doesn't have direct bindings to Core ML. However, you can still integrate Core ML models in an Elixir-based project by using a few different approaches:

Approach 1: Using Ports or NIFs

  1. Ports:

    • Ports allow Elixir to communicate with external programs written in other languages like Swift or Objective-C.
    • You could write a small Swift/Objective-C application that uses Core ML to perform machine learning tasks, and then use Elixir to interact with that application via a port.
  2. NIFs (Native Implemented Functions):

    • NIFs are a way to run C code directly in the Elixir VM. However, NIFs are generally not recommended for heavy computations or tasks that could block the VM.
    • You could theoretically write a NIF that bridges to a Core ML model, but this would involve writing a lot of C and potentially Swift/Objective-C code.

Approach 2: External Microservice

  1. Microservice Architecture:

    • Another approach is to create a separate microservice in Swift (or another language that can use Core ML) and then interact with it from your Elixir application via HTTP or a similar protocol.
    • This service would load and run the Core ML models, and your Elixir application would send data to it and receive predictions.
  2. Inter-process Communication:

    • You could also use a messaging system like RabbitMQ or Redis to communicate between your Elixir app and a separate Core ML service.

Approach 3: Using Core ML via Python

  1. Use a Python Bridge:
    • Since Python has many machine learning libraries and can interact with Core ML (using the coremltools library), you could use Elixir’s :erlang.port_open to run Python scripts from Elixir.
    • The Python script would handle the Core ML model, and Elixir would handle the rest of the application.

Summary

While Elixir doesn’t natively support Core ML, you can integrate Core ML functionality into your Elixir projects by:

  • Using ports or NIFs to call out to Swift/Objective-C code.
  • Setting up a microservice in Swift or Python that your Elixir app can communicate with.
  • Leveraging Python as an intermediary between Elixir and Core ML.

This allows you to benefit from the M-series chips and Core ML while developing in Elixir, albeit with some additional complexity.
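
For the Ports approach, the Elixir side would look something like this (a sketch only: the coreml_runner helper is imaginary; the point is just the Port plumbing, with {:packet, 4} handling the message framing):

defmodule CoreMLPort do
  # Talk to an external (Swift or Python) helper that owns the Core ML model:
  # it reads a length-prefixed image from stdin and writes the prediction back.
  @helper "/usr/local/bin/coreml_runner"

  def predict(image_binary, timeout \\ 10_000) do
    port = Port.open({:spawn_executable, @helper}, [:binary, {:packet, 4}, :exit_status])
    send(port, {self(), {:command, image_binary}})

    receive do
      {^port, {:data, prediction}} -> {:ok, prediction}
      {^port, {:exit_status, status}} -> {:error, {:helper_exited, status}}
    after
      timeout -> {:error, :timeout}
    end
  end
end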

@nelsonic
Member

nelsonic commented Sep 4, 2024

@ndrean
Author

ndrean commented Sep 4, 2024

Yes, with an Intel chip, but that is old. Even I have an Apple Silicon chip! 😁

Of course, I am focusing on dev mode, not production mode, where macOS does not exist anymore.

But I am still puzzled:

Screenshot 2024-09-04 at 09 19 19

@nelsonic
Member

nelsonic commented Sep 4, 2024

Meanwhile ... nelsonic/nelsonic.github.io#912 (comment) 🤯

@ndrean
Author

ndrean commented Sep 4, 2024

But all is not lost! See the current work:
Screenshot 2024-09-04 at 12 13 24

@nelsonic
Member

nelsonic commented Sep 4, 2024

@ndrean recommend watching: https://youtu.be/U_cSLPv34xk 📺 Thoughts? 💭

@ndrean
Author

ndrean commented Sep 4, 2024

FULLY AGREE with him.

But, saying that, who cares if I agree with him! 😀

Why are there so many experts/gurus on this new AI paradigm?
"everybody is a CEO, and everyone is lying" ... to themselves firstly 🫢
But wait, why??

It is hard to say smart things, so let's stick to small facts.

ML is good for a few scoped tasks, but certainly not at developing a full project, at the time of writing (saved 😬).
Firstly, it invents a lot. A good parametrised generator can do a better job.
Then, you are also limited by the length of the context you pass to an LLM, and the responses suffer from this.

However, its capacity for "inventing" can be very useful. Aren't they (pretty? very?) good at hallucinating things for you, like images?
For example, ML is also known to be quite good at finding clusters and similarities, and at discovering patterns, when using various algorithms.
ML is known to be good at recognition (OCR, captioning...) and in fields related to technical document search: law, patents, technical documentation... Even mathematicians use it. In general, fields where you might need an advanced technical workforce exploring data in a semantic way, not in a linear way. The problem for these industries is more about building an efficient, permanent data-ingestion system for the search engine(s).

For the Elixir ecosystem, it still needs much more fine-tuning. Check my little RAG pipeline test on the Phoenix LiveView documentation to see, through an example, how a RAG pipeline improves the LLM's inference.
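
The core retrieval step of that RAG test boils down to something like this (a sketch; the embedding model, chunking and prompt assembly are left out, and the module name is made up):

defmodule RagRetrieval do
  # Rank pre-computed documentation-chunk embeddings against the query embedding
  # by cosine similarity; the top chunks get prepended to the prompt as context.
  def top_chunks(query_emb, chunk_embs, chunks, k \\ 3) do
    q = Nx.divide(query_emb, Nx.LinAlg.norm(query_emb))

    chunk_embs
    |> Enum.map(fn emb ->
      emb
      |> Nx.divide(Nx.LinAlg.norm(emb))
      |> Nx.dot(q)
      |> Nx.to_number()
    end)
    |> Enum.zip(chunks)
    |> Enum.sort_by(fn {score, _chunk} -> score end, :desc)
    |> Enum.take(k)
  end
end

Feeding those chunks to the model as context is what makes the answers stick to the real documentation instead of inventing APIs.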

@ndrean
Author

ndrean commented Sep 5, 2024

Things improve quickly. Take a look at Google's Gemini 1.5: as explained in the paper below, it accepts 1M tokens, up to 10M, as the context window, i.e. roughly 30,000 and up to 300,000 LOC. A significant improvement which might make the RAG even better.

Screenshot 2024-09-05 at 10 27 52

In particular, they demo interacting with three.js. This works, as said, with JavaScript, but probably not with small languages like Elixir and an evolving framework.

Screenshot 2024-09-05 at 10 31 52

The big limitation: the costs of using this. For example, the "small" Gemini Flash:

Screenshot 2024-09-05 at 10 53 10

@ndrean
Author

ndrean commented Sep 6, 2024

If you are looking for something to read, the paper below explains the possible next-gen models surpassing the transformer model (2017): they are called SSMs (2021). They are supposed to be fast and to support long contexts (what RAG does, in some way).

An implementation: Jamba 1.5. You may need... two 80 GB GPUs 🤨 but - good news - a quantized version (8-bit precision) may only need one GPU. We are saved 🥳

https://huggingface.co/ai21labs/AI21-Jamba-1.5-Mini

The SSM explained:

Screenshot 2024-09-06 at 22 11 26

@ndrean
Author

ndrean commented Sep 7, 2024

A new toy to test: Codeium (it's like Copilot)
Screenshot 2024-09-07 at 15 41 23

Training: Codeium challenges "only" GPT-3.5, so it must not be that good....
Screenshot 2024-09-07 at 15 42 51

FAQ: Elixir?
Screenshot 2024-09-07 at 15 44 32

@ndrean
Author

ndrean commented Sep 12, 2024

Did someone try grok? They claim to have developed LPU chips:

Screenshot 2024-09-12 at 09 09 18

So CPU, GPU, TPU, and LPU....

Lots of marketing?

Language Processing Units (LPUs) are chips developed to aid in the development of LLMs. 
Built from the ground-up to make language training fast and efficient, 
LPUs aren’t going to replace GPUs as the preferred chip for AI tasks, 
but there are likely tasks where they’ll soon be the best possible choice. [...]
Architecturally, LPUs are designed for sequential, rather than parallel, 
computationally intensive applications....

Let's try to understand another new marketing concept - "optimistic UI" - and ask in the Groq console:

Screenshot 2024-09-12 at 08 57 42

The response:

Optimistic UI is a concept in Phoenix LiveView that focuses on providing a better user experience by allowing the user to see the effects of their actions instantly, even if there's a possibility that the action might eventually fail. This is achieved by introducing a temporary, optimistic state in the client before sending a request to the server.

In other words, Optimistic UI in Phoenix LiveView is a way to temporarily update the UI to reflect a change made by the user, before the server has confirmed whether the change was valid. This approach allows for faster and more interactive user interfaces.

Here's how it works:

1.  The user makes a change to the UI, such as editing a text field or clicking a button.

2.  The LiveView client side code captures this change and temporarily updates the UI to reflect this change.

3.  The client then sends a request to the server to update the state.

4.  If the server confirms the change (e.g., the text field value is valid), the client updates the state permanently and sends the new state back to the server.

5.  If the server rejects the change (e.g., the text field value is not valid), the client rolls back the state change and sends the previous state back to the server.

This approach allows for a better user experience, as users can see the effects of their actions immediately, without having to wait for the server's confirmation.

For example, consider a form that allows users to edit their profile information. Using Optimistic UI, when a user edits their name, the text field on the screen is updated instantly, even before the request is sent to the server. If the server rejects the change (for example, because the username is not valid), the client can roll back the change, and the user will immediately see the previous username value.

In the context of Phoenix LiveView, you can implement Optimistic UI by using the `push_event` function to update the state and the UI while the request is being sent, and then using the `assign` function to undo the change if the server rejects it.

@nelsonic
Member

@ndrean did you mean groq (with the q)? ... groq != grok 😉

groq are building their own LPU chips using novel architecture
and have built a "Tokens as a Service" dev platform on top of that innovation.
Definitely a lot of marketing hype to pump valuation. 📈 🙄
But appears to have some unique advantages. 👌

Grok is built by x.ai (Elon & Co.) using Twitter X.com and other data
to attempt to beat OpenAI (Closed Ai) at LLM and Image Gen

This response from groq on Optimistic UI seems decent. 👍
I cannot help but think: this is just lifted or synthesised from the docs. 💭

@ndrean
Author

ndrean commented Sep 12, 2024

ah yes grok, groq, ngrok....

Did not know about Grok (the X one 😁).

Below is an "uncontexted" answer from Groq (the LPU one 😀). It hallucinates (ebrief does not exist).
Perhaps the link I gave above helped the AI to behave a bit?

At least, with my "tiny" RAG attempt, I really saw the difference in the answer when the LLM is fed with some ("good") context ("good" because the context is semantically close to the question and based on "real" documentation).

Optimistic UI is a fascinating concept that can greatly enhance the user experience 
in Phoenix LiveView applications. 
In a nutshell, optimistic UI is a technique that allows users to edit data 
while a long-running operation is being performed, 
by keeping track of the changes and only redoing the UI update 
if the operation was successful.

In LiveView, optimistic UI is particularly useful when dealing 
with complex operations like creating or updating records in a database. 
These operations might take some time, and in the meantime,
the user might want to continue editing other data or performing other actions. 
Optimistic UI ensures that the user's changes are persisted 
and only applied if the operation is successful.

Here's a high-level overview of how optimistic UI works in LiveView:

**Initial Fetch**: When a page loads, LiveView fetches the initial data 
from the database or an external API. This data is used to render the UI.
**Initial Rendering**: The UI is rendered based on the initial data, 
and the user can start interacting with the page.
**Editing**: The user makes changes to the data, 
and the changes are persisted locally by LiveView. 
This saved data is then sent in batches to the server for processing upon each update.
 **Server Processing**: The server processes the changes
 (e.g., updates the database) and sends the result back to the client.
**Result Rendering**: LiveView receives the result from the server 
and renders the updated UI. If there were any issues with the update 
(e.g., invalid data, record already exists), LiveView will display an error message
 and retain the previously entered data.

Optimistic UI has several benefits:

**Improved User Experience**: Users can continue to edit data 
while operations are being performed, leading to a better overall experience.
 **Reduced Network Traffic**: Since the user's changes are only sent 
to the server in batches, network traffic is reduced.
 **Better Error Handling**: If an error occurs during processing, 
the user's changes are retained, allowing them to review and resolve the issue.

In Phoenix LiveView, you can implement optimistic UI using its built-in features 
and libraries. 
For example, LiveView provides a `patch/3` function for updating data on the server, 
which supports optimistic concurrency. 
Additionally, libraries like `ebrief` offer more advanced features 
for handling optimistic UI in LiveView applications.

In summary, optimistic UI is a technique for creating more responsive 
and user-friendly interfaces in Phoenix LiveView applications 
by allowing users to edit data while long-running operations are being performed, 
while keeping track of the changes and only redoing 
the UI update if the operation is successful.

@ndrean
Author

ndrean commented Sep 19, 2024

For reference, a repo that illustrates the chatgpt-o1 trend (instead of increasing the model size or training time, you increase inference time, meaning more "reasoning"). It works with groq.

I believe the Llama model can be installed locally, but not groq, which needs an API key.

The repo:

Screenshot 2024-09-19 at 21 20 18

The costs of using groq seem very high: check below
Screenshot 2024-09-19 at 21 30 54

It also uses Streamlit, which is a cool Python library.
And LLMs are good (?) at generating Streamlit code: https://blog.streamlit.io/generative-ai-and-streamlit-a-perfect-match/

Read about streamlit below:

Screenshot 2024-09-19 at 21 24 15

@ndrean
Author

ndrean commented Oct 3, 2024

Yet another promise... Pythagora, a kind of "no code" tool with powerful state-machine capabilities, which works as an extension to VS Code.
I could not try it yet; I just watched the video (link below).

It focusses on coding in Node.js along with MongoDB.

People who don't use AI will be out of a job soon?

Link to Pythagora:

Screenshot 2024-10-03 at 10 57 38

The (convincing) YT link:

Screenshot 2024-10-03 at 11 01 16

@ndrean
Author

ndrean commented Oct 4, 2024

Maybe too political to appear here?

Reading that OpenAI raised more than $6bn (https://finance.yahoo.com/news/openai-closed-funding-round-raising-161842157.html) while losing $5bn, and is still valued at $150bn. I feel dizzy with these figures.

I believe it's a way to kill the competition, just like Amazon did for 15 years in a row. OpenAI is like a magnet for investors, and only a few other competitors will drain cash. If the bubble doesn't burst immediately, and given that AI will become essential everywhere, this could work. Indeed, I read that they expect to double the price from $20 to $40 in the next couple of years: this gives hope to the investors, as customers might become hostages. This could be the idea. But again, this will only work if there is no more competition, if AI is everywhere and essential, and if they stop being toxic: so many people leaving! It looks like a typical French big IT services company: toxic, 30% turnover, treating customers like sh*t. My own experience; I've been on both sides :)

@nelsonic
Member

nelsonic commented Oct 6, 2024

It's fine to be "political". 😆

Sam is raising as much cash as he can because he knows what is coming! 😬 📉
OpenAI ClosedAi is exclusively focussed on product dev and making as much money as possible 💰
and near-zero on research and their original mission. 👎
The only code they release is client/interface libraries, no models; ref: https://github.com/OPENAI ... 😕
Which allow Devs/Apps to access their proprietary models via gated/controlled access.

ClosedAi is already over-valued considering their epic losses and limited "moat";
AAPL know it ... that's why they didn't participate in the funding
even though they are customers (for "Apple Intelligence") and have $61 Bn of cash collecting dust.
There are so many startups chasing the Ai pie and the hardware is moving fast. 🏎️
Sam & Co know that they need to scale fast and capture mind/market-share. ⏳

I don't know to what extent ClosedAi can double their price from $20/month to $40/month 👆
without churning a significant percentage of customers. 💭

By far the biggest determinant of how much ClosedAi can charge is what Google are charging for Gemini:
https://workspace.google.com/solutions/ai/#plan
image

I firmly believe in Demis Hassabis to lead Google DeepMind into the future 📈 🧠 (Disclosure: Long GOOG)
because they are still focussed on deep technical research with practical industry applications: 🧑‍🔬 🧪
https://deepmind.google/discover/the-podcast/demis-hassabis-the-interview/
Just look at the progress they have made with AlphaFold: wikipedia.org/AlphaFold 🧬
And what they are doing with Isomorphic Labs: isomorphiclabs.com 🚀
Even though (former CEO) Eric Schmidt thinks Google will flounder ... 🙃 youtu.be/7PMUVqtXS0A
Eric is not (and never was) a deeply technical engineer; he's a manager, Demis is the real deal Ai researcher with many published + highly cited papers: scholar.google.com/citations?user=dYpPMQEAAAAJ and all DeepMind research is Open!

image

@ndrean
Author

ndrean commented Oct 6, 2024

ah, just discovered your "pirate" channel nelsonic/nelsonic.github.io#733 ! Plenty of resources to read for the coming winter evenings :)

@nelsonic
Member

nelsonic commented Oct 6, 2024

@ndrean have you tried NotebookLM? #149 😬

@ndrean
Author

ndrean commented Oct 7, 2024

@nelsonic yes! I did not try the audio though. The speech is very American in style, isn't it? 😄
Instead, I use it to digest long, chatty YouTube videos when I want a summary of the subject 😬

@ndrean
Author

ndrean commented Oct 17, 2024

@nelsonic Another gun from Google, a bit like NotebookLM: https://illuminate.google.com/home. It also produces audio, perhaps less "American", or less opinionated if you prefer. If you drive a lot, it can be nice.

@nelsonic
Member

@ndrean voices on illuminate are very similar to NotebookLM. Definitely a useful tool. 👌

@nelsonic
Member

computer use ... https://www.anthropic.com/news/3-5-models-and-computer-use 🤖 💻 💭

@ndrean
Author

ndrean commented Nov 23, 2024

In the link below, you can discover plenty of cool projects:

Screenshot 2024-11-23 at 17 48 29

For example:

https://foyle.io/

https://openwebui.com/

@ndrean
Author

ndrean commented Nov 30, 2024

Game changer, again! Well, this looks cool. Starting with this YT video:

Screenshot 2024-11-30 at 17 45 01

Anthropic webpage about MCP

Screenshot 2024-11-30 at 17 49 24

About Model Context Protocol (GitHub repo)

Screenshot 2024-11-30 at 17 51 17

Looks like Elixir could be well suited to provide such a thing. Alas, building it won't bring in a penny, nor even recognition, I'm afraid.

@ndrean
Author

ndrean commented Dec 2, 2024

Apple MLX: MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research.

MLX is available as an Nx backend:

Screenshot 2024-12-02 at 10 01 31

Tested

Instead of EXLA, use:

{:emlx, github: "elixir-nx/emlx", override: true}

It does not like integers, only floats.
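
A sketch of what I mean (assuming the backend module is EMLX.Backend; check the emlx README for the exact name):

# Route Nx operations through MLX instead of EXLA.
Nx.global_default_backend(EMLX.Backend)

# Floats work fine ...
Nx.tensor([1.0, 2.0, 3.0]) |> Nx.multiply(2.0)

# ... while integer tensors are where I hit the wall:
# Nx.tensor([1, 2, 3]) |> Nx.multiply(2)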

⭐️
