Welcome to the Heurist Miner setup guide. This document is designed to help you get started with the Heurist Miner, a tool for participating in the Heurist testnet mining program. Whether you're a seasoned miner or new to cryptocurrency mining, we've structured this guide to make the setup process as straightforward as possible.
Heurist Miner allows users to contribute to the Heurist network by performing AI inference tasks in exchange for rewards. This guide will take you through the necessary steps to set up your mining operation.
To learn more about the hardware requirements for AI inference, read Heurist's Guide to AI article.
For those eager to dive in, here's a quick overview of the setup process:
- Check system requirements and compatibility notes.
- Configure your Miner ID(s).
- Generate or import identity wallets.
- Install necessary software (CUDA, Python).
- Choose your setup: Windows or Linux guide.
- Install miner scripts and dependencies.
- Run the miner program.
- Preview Version: You're working with a preview version of the Heurist Miner. Expect some bumps along the way. For assistance, join the #miner-chat channel in the Heurist Discord.
- System Requirements: Advanced users may skip steps they've already completed, but compatibility checks are recommended.
- CUDA Compatibility: CUDA versions 12.1 or 12.2 are advised for compatibility with PyTorch.
- Python Installation: If Python 3.x is already installed on your system, you may not need to reinstall Miniconda and Python. However, managing dependencies via a Conda environment is recommended.
- CUDA Installation: For those with CUDA pre-installed, ensure that the installed PyTorch version (`pytorch-cuda`) matches your CUDA version.
To prevent impersonation, every miner should have a pair of wallets.
- Identity wallet: It should not hold any funds. The private key is stored locally on the miner machine. This wallet is generated by the script `auth/generator.py`.
- Reward wallet (Miner ID): It is the wallet that receives points. It may hold an NFT to boost rewards. Its address is shared publicly.
We use a Soul-Bound Token (SBT) to store the 1:1 relationship between an identity wallet and a reward wallet. The SBT contract is currently deployed on the zkSync Era testnet and can be found here. It will be migrated to zkSync mainnet in the future. The identity-reward wallet binding is established automatically after your miner processes some jobs.
Each miner request must be accompanied by a signature from the identity wallet to prevent unauthorized actions.
- Create a `.env` File: In the root of your `miner-release` directory, create a file named `.env`. This will store the miner IDs for your operation, formatted as shown in the provided `env.example`.
- Ethereum Addresses as IDs: In the `.env` file, assign an Ethereum wallet address as a miner ID for each GPU. These IDs are essential for tracking contributions and ensuring accurate reward distribution. If you use multiple GPUs, assign each a unique address or reuse a common address, depending on your preference.
- Add Custom Tags: Enhance monitoring by adding a suffix to your miner ID. This allows for individual performance tracking of each GPU.
Default Tags Example:
MINER_ID_0=0xYourFirstWalletAddressHere
MINER_ID_1=0xYourSecondWalletAddressHere
Custom Tags Example:
MINER_ID_0=0xYourFirstWalletAddressHere-GamingPC4090
MINER_ID_1=0xYourSecondWalletAddressHere-GoogleCloudT4
- Adjust GPU Settings: For operations using multiple GPUs, set `num_cuda_devices` in `config.toml` to match the total number of GPUs reflected by the miner IDs in your `.env` file (see the sketch below).
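For illustration, here is a minimal sketch of keeping the two files consistent on a hypothetical 2-GPU machine. The exact layout and key name in `config.toml` are assumptions; verify them against the sample config shipped in `miner-release`.

```
# Two miner IDs in .env ...
cat .env
# MINER_ID_0=0xYourFirstWalletAddressHere-GPU0
# MINER_ID_1=0xYourSecondWalletAddressHere-GPU1

# ... and num_cuda_devices in config.toml set to match.
grep num_cuda_devices config.toml
# num_cuda_devices = 2
```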
- Configuring the Virtual Environment: For SD miners, the `gpu-3-11` conda environment is pre-configured. Ensure you activate this environment and rerun `pip install -r requirements.txt` to install any new dependencies required for authentication. For LLM miners, the initialization script manages this setup automatically.
- Creating New Wallets: Run `python3 ./auth/generator.py` if no identity wallet exists for a miner ID. This script generates a new wallet for each miner ID. (A quick verification sketch follows this list.)
- Existing Wallets on Another Machine: If your identity wallet was created elsewhere, run `python3 ./auth/generator.py` and enter the seed phrase when prompted. Alternatively, manually place the wallet file in the `~/.heurist-keys` directory.
- Sequential Startup: If using both LLM Miner and SD Miners on the same GPU, start the LLM Miner first to prevent load failures, then proceed with starting the SD Miner.
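As a quick sanity check after generating or importing wallets (a sketch; the exact file names under `~/.heurist-keys` may vary):

```
# Generate or import the identity wallet(s), then confirm the key files exist.
python3 ./auth/generator.py
ls -l ~/.heurist-keys
```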
Stable Diffusion Miner Guide (Windows)
- Go to the NVIDIA Driver Downloads page.
- Select your GPU model and OS.
- Download and install the latest driver. Restart your PC if necessary.
- Download the Miniconda Installer.
- Visit the Miniconda Downloads page.
- Get the latest Windows 64-bit version for Python 3.11.
- Open a command prompt (Win + X > "Command Prompt").
- Create the Environment:
  - Type `conda create --name gpu-3-11 python=3.11` (or choose your Python version).
  - Press Enter and wait for the process to finish.
- Activate the Environment:
  - Type `conda activate gpu-3-11`.
- Download and Install CUDA:
- Visit the CUDA Toolkit 12.1 download page.
- Select your OS version.
- Download and install it by following the prompts.
- Go to the PyTorch Install Page.
- Set Your Preferences: Choose PyTorch, Conda, and CUDA 12.1.
- Install PyTorch: Copy the generated command (such as `conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia`), paste it into the Command Prompt, and hit Enter.
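Before moving on, you can confirm that PyTorch detects your GPU. A quick check, run inside the activated `gpu-3-11` environment:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expect something like "2.x.x True"; "False" usually indicates a CUDA/driver mismatch.
```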
- Run `git clone https://github.com/heurist-network/miner-release` in the command prompt, or click "Code -> Download ZIP" in the miner-release GitHub repo to download the miner scripts.
- Open Your Command Prompt: Make sure you're still in your Conda environment. If not, activate it again with `conda activate gpu-3-11`.
- Navigate to the `miner-release` folder: Use the `cd` command to change to the directory where `requirements.txt` is located. For example, `cd C:\Users\YourUsername\Documents\miner-release`.
- Install Dependencies:
- Run `pip install -r requirements.txt`. This tells pip (Python's package installer) to install all the packages listed in `requirements.txt`.
Configure your Miner ID(s): see the Miner ID configuration section at the top of this guide.
- Run `python3 sd-miner-v1.x.x.py` (select the latest version of the file) in the Conda environment command prompt.
- Type `yes` when the program prompts you to download model files. It will take a while to download all models. The program will start processing automatically once downloading completes.
To optimize and customize your mining operations, you can utilize the following command line interface (CLI) options when starting the miner:
`--log-level`: Control the verbosity of the miner's log messages by setting the log level. Available options are `DEBUG`, `INFO` (default), `WARNING`, `ERROR`, and `CRITICAL`.
`--auto-confirm`: Automate the download confirmation process, especially useful in automated setups. Use `yes` to auto-confirm or stick with `no` (default) for manual confirmation.
`--exclude-sdxl`: Exclude SDXL models. Recommended for laptop GPUs, a 3060 or 4060, if you are running the LLM miner alongside the SD miner on the same GPU, or if your available VRAM is less than 10GB. SDXL models consume more resources (and also earn more rewards); excluding them prevents performance issues or crashes on slower GPUs.
Usage Example:
To enable debug-level logging and auto-confirm:
python sd-miner.py --log-level DEBUG --auto-confirm yes
To exclude SDXL models:
python sd-miner.py --exclude-sdxl
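These options can be combined in a single invocation (assuming the script accepts them together, which is typical for CLI flags):

```
python sd-miner.py --log-level DEBUG --auto-confirm yes --exclude-sdxl
```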
Congratulations! 🎉 You're now set to serve image generation requests. You don't need to keep the miner running 24/7. Feel free to close the program whenever you need your GPU for something else, like playing video games or streaming videos.
Stable Diffusion Miner Guide (Linux)
This guide assumes you're familiar with the terminal and basic Linux commands. Most steps are similar to the Windows setup, with adjustments for Linux-specific commands and environments.
- Python Installation: If Python 3.x is already installed, you can skip the Miniconda installation. However, using Miniconda or Conda to manage dependencies is still recommended.
- CUDA: If CUDA is already installed, ensure the PyTorch installation matches your CUDA version.
- Use your Linux distribution's package manager or download drivers directly from the NVIDIA Driver Downloads page. For Ubuntu, you might use commands like `sudo apt update` and `sudo ubuntu-drivers autoinstall`.
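After installing the driver (and rebooting if prompted), verify that it is loaded:

```
nvidia-smi   # should list your GPU and report the driver and CUDA version
```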
- Download the Miniconda installer for Linux from the Miniconda Downloads page.
- Use the command line to run the installer.
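A typical download-and-run sequence looks like this (a sketch; the installer file name depends on the version you download from the Miniconda page):

```
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh   # follow the prompts, then reopen your shell
```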
- Open a terminal.
- Create a new environment with `conda create --name gpu-3-11 python=3.11`.
- Activate the environment using `conda activate gpu-3-11`.
- Install CUDA from the CUDA Toolkit download page appropriate for your Linux distribution. Follow the installation instructions provided on the NVIDIA website.
- Visit the PyTorch installation guide, set preferences for Linux, Conda, and the appropriate CUDA version.
- Use the command provided on the page, such as `conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia`, in your terminal.
- Use Git to clone the miner scripts repository with `git clone https://github.com/heurist-network/miner-release`. Alternatively, download the ZIP from the GitHub page and extract it.
- Ensure you're in the Conda environment (`conda activate gpu-3-11`).
- Navigate to the miner-release directory.
- Install dependencies with `pip install -r requirements.txt`.
Use `.env` in the miner-release folder to set a unique miner_id for each GPU. (See the top of this guide. This is very important for tracking your contribution!)
- Execute the miner script with `python3 sd-miner-v1.x.x.py` (select the latest version) in your terminal. Agree to download model files when prompted.
To optimize and customize your mining operations, you can utilize the following command line interface (CLI) options when starting the miner:
`--log-level`: Control the verbosity of the miner's log messages by setting the log level. Available options are `DEBUG`, `INFO` (default), `WARNING`, `ERROR`, and `CRITICAL`.
`--auto-confirm`: Automate the download confirmation process, especially useful in automated setups. Use `yes` to auto-confirm or stick with `no` (default) for manual confirmation.
`--exclude-sdxl`: Exclude SDXL models. Recommended for laptop GPUs, a 3060 or 4060, if you are running the LLM miner alongside the SD miner on the same GPU, or if your available VRAM is less than 10GB. SDXL models consume more resources (and also earn more rewards); excluding them prevents performance issues or crashes on slower GPUs.
Usage Example:
To enable debug-level logging and auto-confirm:
python sd-miner.py --log-level DEBUG --auto-confirm yes
To exclude SDXL models:
python sd-miner.py --exclude-sdxl
- Use `screen` or `tmux` to keep the miner running in the background, especially when connected via SSH (see the sketch below).
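For example, with `screen` (a minimal sketch):

```
screen -S sd-miner                 # start a named session
conda activate gpu-3-11
python3 sd-miner-v1.x.x.py         # pick the latest script version, as above
# Detach with Ctrl+A then D; the miner keeps running. Reattach later with:
screen -r sd-miner
```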
Large Language Model Miner Guide
We use vLLM, a fast and easy-to-use library for LLM inference and serving. We have only tested the miner program on Linux.
- Make sure you have the CUDA driver installed. We recommend NVIDIA drivers with CUDA version 12.1 or 12.2; other versions will probably work fine. Use the `nvidia-smi` command to check your CUDA version.
- You need enough disk space. You can find model sizes in the heurist-models repo. Use `df -h` to see available disk space.
- You must be able to reach HuggingFace over the internet. (A quick check of all three prerequisites is sketched below.)
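A quick way to check these prerequisites from the terminal (a sketch):

```
nvidia-smi                                     # driver present? CUDA 12.1/12.2 reported?
df -h "$HOME"                                  # enough free space for the model you plan to run?
curl -sI https://huggingface.co | head -n 1    # HuggingFace reachable? Expect an HTTP status line.
```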
LLMs typically consume a large amount of your GPU's VRAM (video memory). Larger models have higher VRAM requirements but also earn higher rewards. Read the Miner Guide Docs to choose a model that fits your hardware.
chmod +x llm-miner-starter.sh
./llm-miner-starter.sh <model_id> --miner-id-index 0 --port 8000 --gpu-ids 0
- `model_id` is mandatory. For example, `openhermes-2.5-mistral-7b-gptq` is the smallest model that we support. It requires 12GB VRAM.
- `--miner-id-index` specifies the index of the miner_id in the `.env` file to use. Default is 0 (using the first address configured).
- `--port` specifies the port used to communicate with the vLLM process. Default is 8000. Change this if the port is occupied.
- `--gpu-ids` specifies the GPU ID to use. Default is 0. Change this if you have multiple GPUs and want to use a different one.
To use default options:
./llm-miner-starter.sh openhermes-2.5-mistral-7b-gptq
To use the second address with a custom port and GPU ID:
./llm-miner-starter.sh openhermes-2.5-mistral-7b-gptq --miner-id-index 1 --port 8001 --gpu-ids 1
The first time the miner program starts up, it will take a long time because it needs to download the model file. You should see progress bars in the command line output. Models are saved in `$HOME/.cache/huggingface` by default. If the download is interrupted or throws an error, press "Ctrl+C" to stop the starter script and retry. If it's still stuck, delete `$HOME/.cache/huggingface` and try again (see below).
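Clearing the cache and retrying looks like this (note that this removes all locally cached HuggingFace models, not just the one that failed):

```
rm -rf "$HOME/.cache/huggingface"
./llm-miner-starter.sh openhermes-2.5-mistral-7b-gptq   # or your chosen model, as above
```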
We have noticed that loading 8x7b, 34b, and 70b models can take a very long time (up to 1 hour) on some devices. If you keep seeing "Model is not ready" and you don't see any errors during downloading, wait a while longer.
| Model ID | VRAM Usage (GB) |
| --- | --- |
| openhermes-2.5-mistral-7b-gptq | 10 |
| mistralai/mistral-7b-instruct-v0.2 | 15 |
| openhermes-2-pro-mistral-7b | 15 |
| (recommended) dolphin-2.9-llama3-8b | 17 |
| mistralai/mixtral-8x7b-instruct-v0.1 | 28 |
| (recommended) openhermes-mixtral-8x7b-gptq | 28 |
| openhermes-2-yi-34b-gptq | 37 |
| meta-llama/llama-2-70b-chat | 41 |
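To compare your GPU against this table, you can query total and free VRAM directly:

```
nvidia-smi --query-gpu=index,name,memory.total,memory.free --format=csv
```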
Q: Can I run LLM miner on Windows?
A: You may run the miner program with WSL, but we haven't tested it yet. There may be unexpected issues.
Q: Why do I see "Model is not ready. Waiting for LLM Server to finish loading the model to start."?
A: It takes some time to download and load model files before the miner starts serving requests. Please confirm that the download progress bar is showing. If it isn't, your internet connection may be having trouble reaching HuggingFace, where the model files are stored. We plan to host the models at a different location soon.
Q: Why do I see "CUDA out of memory error"?
A: Use `nvidia-smi` to see available memory. Check whether any other processes are using the GPU. Confirm that your available GPU memory satisfies the minimum requirement for the model. If you have multiple GPUs, make sure you configure `--gpu-ids` to specify a GPU with sufficient free VRAM.