fix: remove legacy variables (XAI_MODEL, XAI_API_KEY & IMAGE_GEN) #2001

Merged · 3 commits · Jan 8, 2025
4 changes: 0 additions & 4 deletions .env.example
@@ -88,9 +88,6 @@ TWITTER_TARGET_USERS= # Comma separated list of Twitter user names to
TWITTER_RETRY_LIMIT= # Maximum retry attempts for Twitter login
TWITTER_SPACES_ENABLE=false # Enable or disable Twitter Spaces logic

-XAI_API_KEY=
-XAI_MODEL=
-
# Post Interval Settings (in minutes)
POST_INTERVAL_MIN= # Default: 90
POST_INTERVAL_MAX= # Default: 180
@@ -103,7 +100,6 @@ MAX_ACTIONS_PROCESSING=1 # Maximum number of actions (e.g., retweets, likes) to
ACTION_TIMELINE_TYPE=foryou # Type of timeline to interact with. Options: "foryou" or "following". Default: "foryou"

# Feature Flags
-IMAGE_GEN= # Set to TRUE to enable image generation
USE_OPENAI_EMBEDDING= # Set to TRUE for OpenAI/1536, leave blank for local
USE_OLLAMA_EMBEDDING= # Set to TRUE for OLLAMA/1024, leave blank for local

3 changes: 0 additions & 3 deletions README_CN.md
@@ -188,9 +188,6 @@ TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email

-XAI_API_KEY=
-XAI_MODEL=
-

# For asking Claude stuff
ANTHROPIC_API_KEY=
9 changes: 3 additions & 6 deletions README_ES.md
@@ -54,15 +54,15 @@ To avoid conflicts in the core directory, we recommend adding custom actions to

### Run with Llama

-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is configured.

### Run with Grok

-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable and setting "grok" as the provider in your character file.

### Run with OpenAI

-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable and setting "openai" as the provider in your character file.

## Additional Requirements

@@ -99,9 +99,6 @@ TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email

-XAI_API_KEY=
-XAI_MODEL=
-
# For asking Claude stuff
ANTHROPIC_API_KEY=

2 changes: 0 additions & 2 deletions agent/src/index.ts
@@ -282,8 +282,6 @@ export function getTokenForProvider(
settings.LLAMACLOUD_API_KEY ||
character.settings?.secrets?.TOGETHER_API_KEY ||
settings.TOGETHER_API_KEY ||
-character.settings?.secrets?.XAI_API_KEY ||
-settings.XAI_API_KEY ||
character.settings?.secrets?.OPENAI_API_KEY ||
settings.OPENAI_API_KEY
);
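The two deleted lines above sit in `getTokenForProvider`'s fallback chain, where character-level secrets are checked before process-level settings and the first truthy value wins. A minimal sketch of that pattern, with invented names (`resolveToken`, the key lists) standing in for the real eliza implementation:

```typescript
// Illustrative sketch, not the actual eliza code: candidate keys are walked
// in priority order; for each key, character secrets shadow env settings,
// and the first truthy value short-circuits the chain.
type Secrets = Record<string, string | undefined>;

function resolveToken(
  characterSecrets: Secrets,
  envSettings: Secrets,
  keys: string[],
): string | undefined {
  for (const key of keys) {
    const token = characterSecrets[key] || envSettings[key];
    if (token) return token;
  }
  return undefined;
}

// With XAI_API_KEY dropped from the chain, only the remaining keys are consulted:
const token = resolveToken(
  {},                                   // character.settings?.secrets
  { TOGETHER_API_KEY: "tk-env-level" }, // process settings
  ["LLAMACLOUD_API_KEY", "TOGETHER_API_KEY", "OPENAI_API_KEY"],
);
console.log(token); // "tk-env-level"
```

Removing a key from the `keys` list is all the PR does conceptually: any `XAI_API_KEY` still present in a user's environment is simply never consulted.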
14 changes: 4 additions & 10 deletions docs/README.md
@@ -59,15 +59,15 @@ To avoid git clashes in the core directory, we recommend adding custom actions t

### Run with Llama

-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is set.

### Run with Grok

-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable to your Grok API key and setting grok as the model provider in your character file.

### Run with OpenAI

-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable to your OpenAI API key and setting openai as the model provider in your character file.
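As a minimal sketch of the post-change setup described above (the key value is a placeholder, not a real credential):

```bash
# .env — OpenAI path; the Grok path is analogous with GROK_API_KEY
OPENAI_API_KEY=sk-your-openai-key
```

and in the character file set the model provider, e.g. `"modelProvider": "openai"` (or `"grok"`); no `XAI_*` variables are read anymore.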

## Additional Requirements

@@ -103,10 +103,6 @@ TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email

-X_SERVER_URL=
-XAI_API_KEY=
-XAI_MODEL=
-

# For asking Claude stuff
ANTHROPIC_API_KEY=
@@ -143,9 +139,7 @@ Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.

### Running locally

-Add XAI_MODEL and set it to one of the above options from [Run with
-Llama](#run-with-llama) - you can leave X_SERVER_URL and XAI_API_KEY blank, it
-downloads the model from huggingface and queries it locally
+By default, the bot will download and use a local model. You can change this by setting the environment variables for the model you want to use.

# Clients

14 changes: 4 additions & 10 deletions docs/docs/api/index.md
@@ -56,15 +56,15 @@ To avoid git clashes in the core directory, we recommend adding custom actions t

### Run with Llama

-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is set.

### Run with Grok

-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable to your Grok API key

### Run with OpenAI

-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable to your OpenAI API key

## Additional Requirements

@@ -101,10 +101,6 @@ TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email

-X_SERVER_URL=
-XAI_API_KEY=
-XAI_MODEL=
-
# For asking Claude stuff
ANTHROPIC_API_KEY=

@@ -147,9 +143,7 @@ Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.

### Running locally

-Add XAI_MODEL and set it to one of the above options from [Run with
-Llama](#run-with-llama) - you can leave X_SERVER_URL and XAI_API_KEY blank, it
-downloads the model from huggingface and queries it locally
+By default, the bot will download and use a local model. You can change this by setting the environment variables for the model you want to use.

# Clients

8 changes: 0 additions & 8 deletions docs/docs/guides/configuration.md
@@ -25,10 +25,6 @@ Here are the essential environment variables you need to configure:
OPENAI_API_KEY=sk-your-key # Required for OpenAI features
ANTHROPIC_API_KEY=your-key # Required for Claude models
TOGETHER_API_KEY=your-key # Required for Together.ai models

-# Default Settings
-XAI_MODEL=gpt-4o-mini # Default model to use
-X_SERVER_URL= # Optional model API endpoint
```

### Client-Specific Configuration
@@ -74,11 +70,7 @@ HEURIST_API_KEY=

# Livepeer Settings
LIVEPEER_GATEWAY_URL=

-# Local Model Settings
-XAI_MODEL=meta-llama/Llama-3.1-7b-instruct
```

### Image Generation

Configure image generation in your character file:
2 changes: 0 additions & 2 deletions docs/docs/guides/local-development.md
@@ -75,8 +75,6 @@ Configure essential development variables:
```bash
# Minimum required for local development
OPENAI_API_KEY=sk-* # Optional, for OpenAI features
-XAI_API_KEY= # Leave blank for local inference
-XAI_MODEL=meta-llama/Llama-3.1-7b-instruct # Local model
```

### 5. Local Model Setup
8 changes: 3 additions & 5 deletions docs/docs/quickstart.md
@@ -92,9 +92,9 @@ Eliza supports multiple AI models:
- **Heurist**: Set `modelProvider: "heurist"` in your character file. Most models are uncensored.
- LLM: Select available LLMs [here](https://docs.heurist.ai/dev-guide/supported-models#large-language-models-llms) and configure `SMALL_HEURIST_MODEL`,`MEDIUM_HEURIST_MODEL`,`LARGE_HEURIST_MODEL`
- Image Generation: Select available Stable Diffusion or Flux models [here](https://docs.heurist.ai/dev-guide/supported-models#image-generation-models) and configure `HEURIST_IMAGE_MODEL` (default is FLUX.1-dev)
-- **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
-- **Grok**: Set `XAI_MODEL=grok-beta`
-- **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`
+- **Llama**: Set `OLLAMA_MODEL` to your chosen model
+- **Grok**: Set `GROK_API_KEY` to your Grok API key and set `modelProvider: "grok"` in your character file
+- **OpenAI**: Set `OPENAI_API_KEY` to your OpenAI API key and set `modelProvider: "openai"` in your character file
- **Livepeer**: Set `LIVEPEER_IMAGE_MODEL` to your chosen Livepeer image model, available models [here](https://livepeer-eliza.com/)

You set which model to use inside the character JSON file
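As a minimal sketch of such a character file (only the provider-related fields are shown; the name and other values are placeholders):

```json
{
  "name": "MyAgent",
  "modelProvider": "openai",
  "settings": {
    "secrets": {}
  }
}
```

`modelProvider` takes one of the provider names listed above (e.g. `"grok"`, `"openai"`), and per-character API keys can go under `settings.secrets` instead of `.env`.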
@@ -103,8 +103,6 @@

#### For llama_local inference:

-1. Set `XAI_MODEL` to your chosen model
-2. Leave `X_SERVER_URL` and `XAI_API_KEY` blank
3. The system will automatically download the model from Hugging Face
4. `LOCAL_LLAMA_PROVIDER` can be blank
