Leverage the power of Scikit-LLM and the security of self-hosted LLMs.
```bash
pip install scikit-ollama
```
You can support the project in the following ways:
- Support the original Scikit-LLM package. New features will be made available downstream slowly but surely.
- Star this repository.
- Provide feedback in the issues section.
- Share this repository with others.
Assuming you have installed and configured Ollama to run on your machine, and pulled a model (e.g. `ollama pull llama3:8b`):
```python
from skllm.datasets import get_classification_dataset
from skollama.models.ollama.classification.zero_shot import ZeroShotOllamaClassifier

# Load a small demo dataset of texts and labels
X, y = get_classification_dataset()

# Zero-shot classification: fit() registers the candidate labels,
# no training on the texts takes place
clf = ZeroShotOllamaClassifier(model="llama3:8b")
clf.fit(X, y)
preds = clf.predict(X)
```
For more information, please refer to the documentation.
Scikit-Ollama lets you use locally run models for several text classification approaches. Running models locally can be beneficial when data privacy and control are paramount. It also makes you less dependent on third-party APIs and gives you more control over when and how changes are introduced.
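Since the estimators follow scikit-learn's familiar `fit`/`predict` interface, plain Python lists of texts and labels are all you need. A minimal sketch (the example texts and labels below are made up for illustration):

```python
from skollama.models.ollama.classification.zero_shot import ZeroShotOllamaClassifier

# Any list-like of texts and labels works; for the zero-shot classifier,
# fit() simply registers the labels as candidate classes for the prompt.
X = [
    "The delivery was fast and the packaging was intact.",
    "The product broke after two days and support never replied.",
]
y = ["positive", "negative"]

clf = ZeroShotOllamaClassifier(model="llama3:8b")
clf.fit(X, y)
print(clf.predict(["Great value for the price!"]))
```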
This project builds heavily on Scikit-LLM and has it as a core dependency. Scikit-LLM provides excellent support for querying a variety of backend families, e.g. OpenAI, Vertex, and GPT4All. With Scikit-LLM you could already query locally run models through the OpenAI-compatible v1 API backend. However, Ollama does not support passing options, such as the context size, to that endpoint.
Scikit-Ollama therefore uses the Ollama Python SDK to allow that level of control.
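To illustrate, here is roughly what that looks like at the SDK level (a minimal sketch using the `ollama` Python package directly, outside of Scikit-Ollama; `num_ctx` is one of Ollama's standard model options):

```python
import ollama

# A direct request to a locally served model via the Ollama Python SDK.
# The options dict is what the OpenAI-compatible v1 endpoint does not
# accept; here num_ctx sets the context window size for this request.
response = ollama.chat(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Classify the sentiment: 'The movie was great.'"}],
    options={"num_ctx": 4096},
)
print(response["message"]["content"])
```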
For a guide to contributing, please follow the steps here.
```bibtex
@software{Scikit-Ollama,
  author = {Andreas Karasenko},
  year = {2024},
  title = {Scikit-Ollama: an extension of Scikit-LLM for Ollama served models},
  url = {https://github.com/AndreasKarasenko/scikit-ollama}
}
```
If you cite this repository, please also consider citing Scikit-LLM.