readme updates
KevinMusgrave committed Aug 28, 2024
1 parent d5f726a commit fc93739
Showing 2 changed files with 7 additions and 1 deletion.
1 change: 1 addition & 0 deletions README.md
@@ -15,6 +15,7 @@ This repository contains a variety of Determined examples that are not actively
| [LLM Finetuning](blog/llm-finetuning) | Finetuning TinyLlama-1.1B on Text-to-SQL. |
| [LLM Finetuning 2](blog/llm-finetuning-2) | Finetuning Mistral-7B on Text-to-SQL using LoRA and DeepSpeed. |
| [LLM Finetuning 3](blog/llm-finetuning-3) | Finetuning Gemma-2B using DPO. |
+| [LLM Finetuning 4](blog/llm-finetuning-4) | Experimenting with LoRA parameters. |
| [Python SDK demo](blog/python_sdk_demo) | Example usage of the Determined Python SDK to run and administer experiments. |
| [Tensor Parallelism](blog/tp) | Profiling tensor parallelism in PyTorch. |

7 changes: 6 additions & 1 deletion blog/llm-finetuning-4/README.md
@@ -23,4 +23,9 @@ Change configuration options in `lora.yaml`. Some important options are:
- `per_device_train_batch_size`: the batch size per GPU.


-DeepSpeed configuration files are in the `ds_configs` folder.
+DeepSpeed configuration files are in the `ds_configs` folder.
+
+
+## Contributors
+
+- [Sze Wai Yuen](https://github.com/szewaiyuen6)
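For context on the option referenced in the second hunk: `per_device_train_batch_size` is set in `lora.yaml`. The full file is not part of this diff, so the snippet below is only a hypothetical sketch; every key and value other than the `per_device_train_batch_size` name itself is an assumption.

```yaml
# Hypothetical excerpt of lora.yaml -- only per_device_train_batch_size is
# mentioned in the diff above; the other keys and values are illustrative guesses.
hyperparameters:
  per_device_train_batch_size: 8   # batch size on each GPU
  lora_rank: 8                     # assumed LoRA rank
  lora_alpha: 16                   # assumed LoRA scaling factor
```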
