
LLM-jp DPO (Direct Preference Optimization)

This repository contains the code for training LLM-jp models with DPO (Direct Preference Optimization).
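For background, DPO fine-tunes a policy model directly on preference pairs: it rewards the policy for assigning a higher log-probability (relative to a frozen reference model) to the chosen response than to the rejected one. The sketch below is a minimal, illustrative implementation of the per-pair DPO loss, not code from this repository; the function name and arguments are assumptions, and each argument is the summed token log-probability of a response under the policy or reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Illustrative DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    beta controls how strongly the policy may deviate from the reference.
    """
    # Log-ratio of policy to reference for each response.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # Loss is -log sigmoid(beta * margin); written as softplus(-margin*beta)
    # for numerical stability.
    margin = beta * (chosen_logratio - rejected_logratio)
    return math.log1p(math.exp(-margin))
```

When the policy matches the reference on both responses, the margin is zero and the loss is log 2; as the policy shifts probability mass toward the chosen response, the loss decreases.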

Requirements

See pyproject.toml for the required packages.

Installation

poetry install
poetry shell

Training

The following command trains a model on 8 GPUs, using the DeepSpeed ZeRO-2 configuration in accelerate_configs/zero2.yaml:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch --config_file accelerate_configs/zero2.yaml train.py
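For reference, an accelerate config file for DeepSpeed ZeRO-2 on a single 8-GPU machine typically looks like the sketch below. This is an illustrative example of the format accelerate expects, not the contents of the repository's zero2.yaml; values such as mixed precision and offload settings are assumptions and should be checked against the actual file.

```yaml
# Illustrative accelerate config for DeepSpeed ZeRO-2 (not the repo's actual zero2.yaml)
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 2                      # shard optimizer states and gradients
  offload_optimizer_device: none     # keep optimizer states on GPU
  offload_param_device: none
  gradient_accumulation_steps: 1
mixed_precision: bf16
num_machines: 1
num_processes: 8                     # one process per GPU
```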
