clydedevv/cosmos-endpoint-cache

Caches Cosmos RPC/API calls for configurable periods of time

Optimize Cosmos query calls by caching responses in a local key-value store for a configurable period of time.

This program sits on top of another server and acts as middleware between the requesting client and the actual Cosmos RPC/API server.

It supports:

  • Variable-length cache times (for both RPC methods & REST URL endpoints)

  • Disabling specific endpoints entirely from being queried (e.g. the REST API /accounts)

  • Caching only until the next block (via Tendermint RPC event subscription)

  • Cached RPC requests

  • Cached REST requests

  • Swagger + OpenAPI support (openapi.yml cached)

  • HttpBatchClient (for RPC with the Tendermint 0.34 client)

  • Statistics (optional /stats endpoint with password)

  • Basic websocket passthrough support for the Keplr wallet (TODO)

  • Block indexing (TODO)
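
Once the cache is running in front of a node, clients query it exactly as they would the underlying RPC/REST server. A minimal sketch of a cached RPC and a cached REST call is below; the ports 5001 (RPC) and 5000 (REST) are assumptions and should match whatever you configure in your .env.

# Hypothetical ports; use the ones set in your .env
# Cached Tendermint RPC call (normally served on 26657)
curl -s http://localhost:5001/status

# Cached REST (LCD) call (normally served on 1317)
curl -s http://localhost:5000/cosmos/base/tendermint/v1beta1/blocks/latest

Repeating either request within its configured cache window should return the stored response instead of hitting the node again.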

Public Endpoints

  • Juno

  • Akash

  • CosmosHub

  • Comdex

  • Chihuahua

  • Injective

Pre-Requirements

  • A Cosmos RPC / REST server endpoint (state synced, full node, or archive).
  • A reverse proxy (to forward a subdomain to the endpoint cache on a machine)

NOTE: Redis was used in the past. If you still wish to use Redis, it is available in v0.0.8.
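
Before putting the cache in front of a node, it can help to confirm the upstream endpoints respond. A quick check, assuming the default local ports 26657 (RPC) and 1317 (REST):

# Upstream Tendermint RPC
curl -s http://localhost:26657/status > /dev/null && echo "RPC reachable"

# Upstream REST (LCD)
curl -s http://localhost:1317/cosmos/base/tendermint/v1beta1/node_info > /dev/null && echo "REST reachable"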

Where to run

Ideally, you should run this on your RPC/REST node so queries stay on localhost. However, you can also run it on other infrastructure, including the reverse proxy itself or another separate node. This makes it possible to run on cloud providers like Akash, AWS, GCP, Azure, etc.


Setup

python3 -m pip install -r requirements/requirements.txt --upgrade

# Copy the ENV file, then edit it to your needs
cp configs/.env .env

# Update which endpoints you want to disable / allow (regex) & how long to cache each for.
cp configs/cache_times.json cache_times.json

# Then run each to ensure it was set up correctly
python3 rest.py
# ctrl + c
python3 rpc.py
# ctrl + c

# If all is good, continue on.
# NOTE: You can only run 1 of each locally at a time because of WSGI. Run both in parallel as systemd services.

# Then point your NGINX / CADDY config to this port rather than the default 26657 / 1317 endpoints
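
To run rest.py and rpc.py in parallel as the note above suggests, a systemd unit per process works well. Below is a minimal sketch for the RPC side; the install path and service name are assumptions to adapt, and an equivalent unit is needed for rest.py.

# Hypothetical path /home/user/cosmos-endpoint-cache; adjust to where you cloned the repo
sudo tee /etc/systemd/system/cosmos-rpc-cache.service > /dev/null <<'EOF'
[Unit]
Description=Cosmos endpoint cache (RPC)
After=network-online.target

[Service]
WorkingDirectory=/home/user/cosmos-endpoint-cache
ExecStart=/usr/bin/python3 rpc.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cosmos-rpc-cache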

Running in Production

Documentation
