
Ollama Llama3.1 #348

Open
prpanto opened this issue Aug 5, 2024 · 1 comment


prpanto commented Aug 5, 2024

How to use the llama3.1 with Ollama? Do you support it?


pedinil commented Aug 18, 2024

First, check that the model is downloaded in Ollama.

You can list all of your local models with the following command:

```
ollama list

NAME             ID            SIZE    MODIFIED
gemma2:latest    ff02c3702f32  5.4 GB  2 weeks ago
llama3.1:latest  62757c860e01  4.7 GB  2 weeks ago
llama3:latest    365c0bd3c000  4.7 GB  6 weeks ago
```
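If llama3.1 does not appear in the list, you can download it first. A minimal sketch, assuming a standard Ollama install (`ollama pull` is the stock Ollama CLI command for fetching a model):

```shell
# Download the llama3.1 model so it shows up in `ollama list`
ollama pull llama3.1
```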

If llama3.1 is available, you can call the model with the following code:

```ts
const res = await generateText({
  model: ollama
    .ChatTextGenerator({
      model: "llama3.1",
      maxGenerationTokens: 500,
    })
    .withTextPrompt(),
  prompt: prompt,
});
```
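For a complete script, here is a minimal self-contained sketch. It assumes the snippet above uses the ModelFusion library (its `generateText` and `ollama` exports) and that a local Ollama server is running on its default port; the prompt text is a hypothetical example:

```typescript
import { generateText, ollama } from "modelfusion";

// Hypothetical prompt, just for illustration.
const prompt = "Why is the sky blue?";

const res = await generateText({
  model: ollama
    .ChatTextGenerator({
      model: "llama3.1",          // must match a model name shown by `ollama list`
      maxGenerationTokens: 500,   // cap on the number of generated tokens
    })
    .withTextPrompt(),            // accept a plain string prompt
  prompt: prompt,
});

console.log(res);
```

Note that top-level `await` requires an ES-module context (e.g. a `.mts` file or `"type": "module"` in `package.json`).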
