
[Roadmap] Q3 2024 #1350

Closed
15 of 42 tasks

peterschmidt85 opened this issue Jun 24, 2024 · 0 comments

Comments


peterschmidt85 commented Jun 24, 2024

This issue outlines the major items planned for Q3 2024.

For major bugs, see the major label.

Core features

Supported architectures

Examples

Important: Community help is welcome!

  • AMD
  • NIM
  • GitHub Actions
  • vLLM/TGI
  • Llama 3.2 (multi-modal)
  • FLUX
  • Ray
  • Spark
  • Unsloth
  • Alignment Handbook with Llama 3.1
  • NeMo
  • TensorRT-LLM
  • Triton
  • Function calling
  • LlamaIndex
  • LangChain
  • TPU
  • Multi-node Alignment Handbook
  • Llama 3.1
  • Fine-tuning Llama 3.1
  • Axolotl

Improvements

Research

Important: Research and feedback are required!

  • Metrics: Research whether dstack should collect certain metrics out of the box (at least hardware utilization) or integrate with more enterprise-grade tools. A minimal collection sketch follows this list.
  • Fault-tolerant training: Research how dstack can be used for fault-tolerant training of massive models. A minimal checkpoint/resume sketch follows this list.
@r4victor r4victor closed this as completed Dec 6, 2024