GPU versus API

Description

This project aims to compare the cost of buying a GPU versus the cost of using an API for Large Language Model (LLM) inference.

At the end of the day, this is very crude, but it's a start, and it was immensely helpful for me in understanding what API-based LLM inference actually costs.
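The arithmetic behind a comparison like this is simple to sketch. The numbers below are hypothetical placeholders (not the app's actual defaults): a one-time GPU price, power draw, local throughput, and a blended API price per million tokens.

```javascript
// Back-of-the-envelope break-even between buying a GPU and paying per token.
// All figures are hypothetical placeholders, not values taken from the app.
const gpuPriceUsd = 1600;             // one-time hardware cost
const gpuPowerWatts = 350;            // power draw under load
const electricityUsdPerKwh = 0.15;
const gpuTokensPerSecond = 50;        // local inference throughput
const apiUsdPerMillionTokens = 0.5;   // blended API price

// Electricity cost to generate one million tokens locally.
const hoursPerMillionTokens = 1e6 / gpuTokensPerSecond / 3600;
const electricityUsdPerMillionTokens =
  (gpuPowerWatts / 1000) * hoursPerMillionTokens * electricityUsdPerKwh;

// Break-even point: API spend equals GPU price plus local power cost.
// apiRate * t = gpuPrice + elecRate * t  =>  t = gpuPrice / (apiRate - elecRate)
const breakEvenMillionTokens =
  gpuPriceUsd / (apiUsdPerMillionTokens - electricityUsdPerMillionTokens);

console.log(`Local power cost: $${electricityUsdPerMillionTokens.toFixed(2)} per million tokens`);
console.log(`Break-even at about ${Math.round(breakEvenMillionTokens)} million tokens`);
```

With these placeholder numbers the GPU only pays for itself after several billion tokens, which is why assumptions about utilization dominate the answer.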

If you like it, AWESOME... if not, sorry... I will try harder next time. :D

Demo

You can try it here

Screenshots

Screenshot 1

Screenshot 2

Screenshot 3

Screenshot 4

Installation

  1. Install http-server, either locally in the project or globally:

    npm install http-server
    # or, globally:
    npm install -g http-server

Usage

  1. Run the following command to start the HTTP server:

    ./run.sh
  2. Open the web page in your default browser by visiting:

    http://localhost:8080
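A `run.sh` like the one above is typically just a thin wrapper around http-server. A hypothetical sketch (the repository's actual script may differ):

```shell
#!/usr/bin/env sh
# Serve the current directory on port 8080 so the page is
# reachable at http://localhost:8080.
http-server . -p 8080
```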

Notes

  • Make sure to have Node.js installed on your machine before running the script.
  • Report / PDF download not working yet... my bad.
