An open-source, modern deep learning training tracking and visualization tool.
Supports both cloud and offline use, integrates with 30+ mainstream frameworks, and plugs easily into your existing experiment code.
🔥SwanLab Online · 📃 Documentation · Report Issues · Feedback · Changelog
👋 Join our WeChat Group
- 🌟 Recent Updates
- 👋🏻 What is SwanLab
- 📃 Online Demo
- 🏁 Quick Start
- 💻 Self-Hosting
- 🚗 Framework Integration
- 🆚 Comparison with Familiar Tools
- 👥 Community
- 📃 License
- 2025.01.22: Added the `sync_tensorboardX` and `sync_tensorboard_torch` features, supporting synchronization of experiment tracking from these two TensorBoard frameworks.
- 2025.01.17: Added the `sync_wandb` feature (docs), supporting synchronization with Weights & Biases experiment tracking; significantly improved log rendering performance.
- 2025.01.11: The cloud version greatly improved project table performance and added drag-and-drop, sorting, and filtering support.
- 2025.01.01: Added persistent smoothing and drag-to-resize for line charts, improving the chart browsing experience.
- 2024.12.22: Completed integration with LLaMA Factory. You can now use SwanLab in LLaMA Factory to track and visualize large model fine-tuning experiments. Usage Guide.
- 2024.12.15: Hardware Monitoring (0.4.0) is now available, supporting system-level recording and monitoring of CPU, NPU (Ascend), and GPU (Nvidia) metrics.
- 2024.12.06: Added integrations with LightGBM and XGBoost; increased the single-line log length limit.
- 2024.11.26: The Environment tab's hardware section now identifies Huawei Ascend NPUs and Kunpeng CPUs; the cloud provider section now identifies QingCloud Jishi Computing.
SwanLab is an open-source, lightweight AI model training tracking and visualization tool, providing a platform for tracking, recording, comparing, and collaborating on experiments.
SwanLab is designed for AI researchers, offering a friendly Python API and a clean UI, with features such as training visualization, automatic logging, hyperparameter recording, experiment comparison, and multi-user collaboration. With SwanLab, researchers can spot training issues in intuitive charts, find research inspiration by comparing multiple experiments, and break down communication barriers through online sharing and organization-wide collaborative training, improving a team's training efficiency.
Here is a list of its core features:
1. 📊 Experiment Metrics and Hyperparameter Tracking: Minimal code integration into your machine learning pipeline to track and record key training metrics.
- Supports cloud usage (similar to Weights & Biases), allowing you to check training progress anytime, anywhere. How to view experiments on mobile.
- Supports hyperparameter recording and table display.
- Supported metadata types: scalar metrics, images, audio, text, ... (a minimal logging sketch follows this list)
- Supported chart types: Line charts, media charts (images, audio, text), ...
- Automatic background logging: Logging, hardware environment, Git repository, Python environment, Python library list, project runtime directory.
2. ⚡️ Comprehensive Framework Integration: PyTorch, 🤗HuggingFace Transformers, PyTorch Lightning, 🦙LLaMA Factory, MMDetection, Ultralytics, PaddleDetection, LightGBM, XGBoost, Keras, TensorBoard, Weights & Biases, OpenAI, Swift, XTuner, Stable Baselines3, Hydra, and more, totaling 30+ frameworks.
3. 💻 Hardware Monitoring: Supports real-time recording and monitoring of system-level hardware metrics for CPU, NPU (Ascend), GPU (Nvidia), and memory.
4. 📦 Experiment Management: Through a centralized dashboard designed for training scenarios, quickly overview and manage multiple projects and experiments.
5. 🆚 Result Comparison: Compare hyperparameters and results of different experiments through online tables and comparison charts to uncover iteration insights.
6. 👥 Online Collaboration: Collaborate with your team on training, supporting real-time synchronization of experiments under a single project. You can view team training records online and provide feedback and suggestions based on results.
7. ✉️ Share Results: Copy and send persistent URLs to share each experiment, easily send to partners, or embed in online notes.
8. 💻 Self-Hosting Support: Supports offline usage, and the self-hosted community edition also allows viewing dashboards and managing experiments.
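To make the metadata types in item 1 concrete, here is a minimal logging sketch. It is a hedged example: the `swanlab.Image` / `swanlab.Text` media classes and the `step` argument follow the SwanLab docs, but check the API reference for your installed version; the project name and values are made up.

```python
import numpy as np
import swanlab

# Start an experiment and record hyperparameters (shown in the experiment's config table).
swanlab.init(
    project="media-logging-demo",  # hypothetical project name
    config={"learning_rate": 1e-3, "epochs": 3},
)

for step in range(3):
    fake_image = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
    swanlab.log(
        {
            # Scalar metrics become line charts.
            "train/loss": 1.0 / (step + 1),
            # Media types (assumed API: swanlab.Image / swanlab.Text) become media charts.
            "samples/image": swanlab.Image(fake_image, caption=f"step {step}"),
            "samples/note": swanlab.Text(f"generated at step {step}"),
        },
        step=step,
    )
```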
> [!IMPORTANT]
> Star the project to receive all release notifications from GitHub without delay. ⭐️
Check out SwanLab's online demos:
| ResNet50 Cat-Dog Classification | Yolov8-COCO128 Object Detection |
| --- | --- |
| Track a simple ResNet50 model training on a cat-dog dataset for image classification. | Use Yolov8 on the COCO128 dataset for object detection, tracking training hyperparameters and metrics. |

| Qwen2 Instruction Fine-Tuning | LSTM Google Stock Prediction |
| --- | --- |
| Track Qwen2 large language model instruction fine-tuning for simple instruction following. | Use a simple LSTM model on the Google stock price dataset to predict future stock prices. |

| ResNeXt101 Audio Classification | Qwen2-VL COCO Dataset Fine-Tuning |
| --- | --- |
| A progressive experimental process from ResNet to ResNeXt on an audio classification task. | Fine-tune the Qwen2-VL multimodal large model on the COCO2014 dataset using LoRA. |
Install SwanLab using pip:

```bash
pip install swanlab
```
- Register for free at SwanLab.
- Log in and copy your API Key from User Settings > API Key.
- Open a terminal and run:

```bash
swanlab login
```

When prompted, paste your API Key and press Enter to complete the login.
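If you prefer to authenticate from code rather than the interactive prompt (for example, in a notebook), the SDK also exposes a login call. A minimal sketch, assuming `swanlab.login()` accepts an `api_key` argument as documented:

```python
import swanlab

# Log in once per machine; the credential is cached locally afterwards.
swanlab.login(api_key="your-api-key")  # replace with the key from User Settings > API Key
```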
Then track your first experiment from a Python script:

```python
import swanlab

# Initialize a new SwanLab experiment
swanlab.init(
    project="my-first-ml",
    config={'learning-rate': 0.003},
)

# Log metrics
for i in range(10):
    swanlab.log({"loss": i, "acc": i})
```
Done! Head over to SwanLab to view your first SwanLab experiment.
The self-hosted community edition supports offline viewing of the SwanLab dashboard.
Set the `logdir` and `mode` parameters in `swanlab.init` to track experiments offline:

```python
...

swanlab.init(
    logdir='./logs',
    mode='local',
)

...
```
- Setting the `mode` parameter to `local` disables syncing experiments to the cloud.
- The `logdir` parameter is optional and specifies where SwanLab log files are saved (the default is the `swanlog` folder).
  - Log files are created and updated during experiment tracking, and the offline dashboard is launched from these log files.
Everything else is identical to cloud usage.
Open the terminal and use the following command to launch a SwanLab dashboard:
```bash
swanlab watch ./logs
```

After it starts, SwanLab prints a local URL (http://127.0.0.1:5092 by default).
Visit this link in your browser to view experiments in the offline dashboard.
Use your favorite frameworks with SwanLab!
Below is a list of the frameworks we have integrated; a sketch of a typical callback-style integration follows these lists. Feel free to submit an Issue to request integration of your preferred framework.
Basic Frameworks
Specialized/Fine-Tuning Frameworks
- PyTorch Lightning
- HuggingFace Transformers
- OpenMind
- LLaMA Factory
- Modelscope Swift
- Sentence Transformers
- Torchtune
- XTuner
- MMEngine
- FastAI
- LightGBM
- XGBoost
Computer Vision
Reinforcement Learning
Other Frameworks:
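To illustrate what a typical integration looks like, here is a hedged sketch for HuggingFace Transformers using a SwanLab callback. The module path `swanlab.integration.transformers` and the `SwanLabCallback` constructor arguments are assumptions based on the pattern used in the SwanLab integration docs; the model, dataset, and project name are placeholders, so consult the integration guide for your installed version.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed module path; older SwanLab versions may expose this under swanlab.integration.huggingface.
from swanlab.integration.transformers import SwanLabCallback

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A tiny slice of IMDB just to have something to train on.
dataset = load_dataset("imdb", split="train[:200]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./output",
        num_train_epochs=1,
        per_device_train_batch_size=8,
        report_to="none",  # let the SwanLab callback handle logging
    ),
    train_dataset=dataset,
    callbacks=[SwanLabCallback(project="transformers-demo")],  # hypothetical project name
)
trainer.train()
```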
- ☁️ Online Support: SwanLab allows convenient cloud-based synchronization and storage of training experiments, enabling remote viewing of training progress, managing historical projects, sharing experiment links, sending real-time notifications, and viewing experiments on multiple devices. TensorBoard, on the other hand, is an offline experiment tracking tool.
- 👥 Multi-User Collaboration: SwanLab facilitates multi-user, cross-team machine learning collaboration by making it easy to manage team training projects, share experiment links, and hold cross-space discussions. TensorBoard is primarily designed for individual use, making multi-user collaboration and experiment sharing difficult.
- 💻 Persistent, Centralized Dashboard: Whether you train on a local machine, a lab cluster, or a public cloud GPU instance, your results are recorded in the same centralized dashboard. TensorBoard requires time-consuming copying and management of TFEvent files from different machines.
- 💪 More Powerful Tables: SwanLab tables let you view, search, and filter results from different experiments, making it easy to review thousands of model versions and identify the best-performing models for different tasks. TensorBoard is not suited to large-scale projects.
- Weights & Biases is a closed-source MLOps platform that requires an internet connection.
- SwanLab not only supports online usage but also offers an open-source, free, self-hosted version. For teams migrating from wandb, see the `sync_wandb` sketch below.
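If you already log with wandb, the `sync_wandb` feature mentioned in the changelog can mirror wandb logging to SwanLab without changing your training code. A minimal sketch, assuming `swanlab.sync_wandb()` is called before `wandb.init()` as described in the docs; the project name and metrics are made up:

```python
import swanlab
import wandb

# Mirror subsequent wandb calls to SwanLab (assumed usage: call before wandb.init()).
swanlab.sync_wandb()

wandb.init(project="wandb-sync-demo", config={"lr": 1e-3})  # hypothetical project name
for step in range(10):
    wandb.log({"loss": 1.0 / (step + 1)})
wandb.finish()
```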
- GitHub Issues: Errors and issues encountered while using SwanLab.
- Email Support: Feedback and questions about using SwanLab.
- WeChat Group: Discuss SwanLab usage and share the latest AI technologies.
If you enjoy using SwanLab in your work, please add the SwanLab badge to your README:
```
[![swanlab](https://img.shields.io/badge/powered%20by-SwanLab-438440)](https://github.com/swanhubx/swanlab)
```
If you find SwanLab helpful in your research journey, please consider citing it in the following format:
```bibtex
@software{Zeyilin_SwanLab_2023,
  author = {Zeyi Lin, Shaohong Chen, Kang Li, Qiushan Jiang, Zirui Cai, Kaifang Ji and {The SwanLab team}},
  doi = {10.5281/zenodo.11100550},
  license = {Apache-2.0},
  title = {{SwanLab}},
  url = {https://github.com/swanhubx/swanlab},
  year = {2023}
}
```
Considering contributing to SwanLab? First, take a moment to read the Contribution Guide.
We also greatly appreciate support through social media, events, and conference sharing. Thank you!
Contributors
This repository is licensed under the Apache 2.0 License.