This project aims to compare the cost of buying a GPU versus the cost of using an API for Large Language Model (LLM) inference.
At the end of the day, this is very crude, but it's a start, and it was immensely helpful for me in understanding what LLM inference via an API actually costs.
If you like it, AWESOME... if not, sorry... I will try harder next time. :D
You can try it here
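For intuition, here is a minimal sketch of the kind of break-even arithmetic a comparison like this involves. All numbers and names below are illustrative assumptions for the sketch, not the project's actual inputs, defaults, or code:

```ts
// Hypothetical break-even sketch: when does buying a GPU beat paying an API?
// Every value here is an illustrative assumption, not a project default.
const gpuPriceUsd = 2000;          // assumed up-front cost of the GPU
const gpuPowerWatts = 350;         // assumed power draw under load
const electricityUsdPerKwh = 0.15; // assumed local electricity price
const gpuTokensPerSecond = 50;     // assumed sustained inference throughput
const apiUsdPerMillionTokens = 2;  // assumed blended API price per 1M tokens

// Running cost of producing one million tokens on the GPU (electricity only,
// ignoring the up-front hardware price for the moment).
const hoursPerMillionTokens = 1_000_000 / gpuTokensPerSecond / 3600;
const electricityUsdPerMillionTokens =
  hoursPerMillionTokens * (gpuPowerWatts / 1000) * electricityUsdPerKwh;

// Tokens needed before the GPU pays for itself: the up-front price divided
// by the per-million-token saving versus the API.
const savingPerMillionTokens =
  apiUsdPerMillionTokens - electricityUsdPerMillionTokens;
const breakEvenMillionTokens = gpuPriceUsd / savingPerMillionTokens;

console.log(
  `Electricity: $${electricityUsdPerMillionTokens.toFixed(2)} per 1M tokens; ` +
    `break-even after ~${breakEvenMillionTokens.toFixed(0)}M tokens`
);
```

With these made-up numbers, the GPU breaks even only after roughly a billion tokens, which is exactly the kind of sensitivity the page lets you explore with your own inputs.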
- Install `http-server`, either locally or globally (globally puts the `http-server` command on your `PATH`):

  ```sh
  npm install http-server
  npm install -g http-server
  ```
- Run the following command to start the HTTP server:

  ```sh
  ./run.sh
  ```
- Open the web page in your default browser by visiting http://localhost:8080
- Make sure Node.js is installed on your machine before running the script.
- Report / PDF download isn't working yet... my bad.