Replace GPT-3.5 with Node Llama3 #3
lalalune pushed a commit that referenced this issue (Dec 14, 2024): add attachTerms and improve registerIP to use IP metadata spec
pythonberg1997 pushed a commit to pythonberg1997/eliza that referenced this issue (Jan 6, 2025): add faucet action
wtfsayo pushed a commit that referenced this issue (Jan 7, 2025):

- feat: rename plugin from EVM to Arthera and update README
- feat: remove bridge and swap actions from Arthera plugin
- feat: update transfer examples and templates to use AA instead of ETH
- feat: update viem dependency to version 2.21.58 and adjust pnpm-lock.yaml
- feat: remove unused LiFi dependencies and clean up type definitions in Arthera plugin
- feat: remove bridge actions and templates from Arthera plugin
- feat: remove swap actions and templates from Arthera plugin
- feat: update EVM naming to Arthera
- feat: update README and types for Arthera mainnet integration
- feat: update plugin to use Arthera instead of mainnet
- fix: add required devDependencies
- fix: remove switchChain
- fix: update _options type to Record<string, unknown> in transferAction
- fix: correct log message format in transfer action to include wallet client address
- test: enhance transfer tests with additional wallet provider and address validation
- Plugin arthera merge (#3)
  - feat: added arthera to default character and agent
  - feat: renamed EVM_PRIVATE_KEY by ARTHERA_PRIVATE_KEY
  - fix: roll back core package
  - fix: workspace: version
  - Co-authored-by: Arthera Node <[email protected]>
- fix: run transfer test only if private key provided
- fix: add missing newline at end of package.json and tsconfig.json files
shakkernerd pushed a commit that referenced this issue (Jan 22, 2025)
We want to make Ruby run entirely locally and as quickly as possible, using a locally running Llama 3 model.
https://github.com/withcatai/node-llama-cpp
If this is very slow, even with GPU acceleration, then we will use Fireworks.ai or Together.ai.
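The local-first-with-hosted-fallback idea above could be sketched as a small provider-selection helper. This is a minimal, hypothetical sketch: the latency budget, provider names, and function name are illustrative assumptions, not anything specified in this issue or in node-llama-cpp's API.

```javascript
// Hypothetical sketch: pick a model provider based on measured local latency.
// The 2000 ms budget and the provider identifiers are assumptions for illustration.
const LOCAL_LATENCY_BUDGET_MS = 2000;

function chooseProvider(localLatencyMs, fallbacks = ["fireworks.ai", "together.ai"]) {
  // Prefer the local Llama 3 model (e.g. via node-llama-cpp) when it is fast enough.
  if (localLatencyMs <= LOCAL_LATENCY_BUDGET_MS) {
    return "local-llama3";
  }
  // Otherwise fall back to the first configured hosted provider.
  return fallbacks[0];
}

console.log(chooseProvider(800));  // local is fast enough
console.log(chooseProvider(5000)); // too slow, use hosted fallback
```

In practice the measured latency would come from timing a warm-up prompt against the local model, so the decision reflects the actual hardware (CPU vs. GPU acceleration) rather than a static config flag.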