A-Eval

MedIA 2025 | A-Eval: A Benchmark for Cross-Dataset and Cross-Modality Evaluation of Abdominal Multi-Organ Segmentation

(Overview figure)

🤖 Trained Model

We provide a pre-trained model that has been jointly trained on the A-Eval datasets. You can download it from our Google Drive repository: Google Drive Download Link

📚 Datasets

| Dataset | Modality | # Train | # Test | # Organs | # Organs (Test) | Region |
| --- | --- | --- | --- | --- | --- | --- |
| FLARE22 | CT | 50 labeled + 2000 unlabeled | 50 | 13 | 10 | North America, Europe |
| AMOS CT | CT | 200 | 40 | 15 | 10 | Asia |
| WORD | CT | 100 | 30 | 16 | 10 | Asia |
| TotalSegmentator v2 | CT | 1082 | 89 | 117 | 10 | Europe |
| BTCV | CT | - | 30 | 13 | 10 | North America |
| AMOS MR | MR | 40 | 20 | 15 | 10 | Asia |
| TotalSegmentator MR | MR | 268 | 30 | 56 | 10 | Europe |
| A-Eval Totals | CT & MR | 1432 labeled CT + 2000 unlabeled CT + 308 MR | 239 CT + 50 MR | 10 | 10 | North America, Europe, Asia |

To ensure a meaningful and fair comparison across datasets, we evaluate model performance on a set of ten organ classes shared by all datasets, unifying their labels under a common label system. The code for the label systems and label conversion can be found in the repository: label_systems.py and convert_label.py. A minimal sketch of the conversion step is shown below.
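The sketch below illustrates the label-remapping idea with NumPy and SimpleITK: each source label ID is mapped into the unified label system, and classes outside the shared set fall back to background. The specific label IDs shown are hypothetical placeholders; the actual mappings are defined in label_systems.py and convert_label.py.

```python
# Minimal sketch of label unification; the mapping below is hypothetical and
# only illustrates the idea implemented in label_systems.py / convert_label.py.
import numpy as np
import SimpleITK as sitk

# Hypothetical mapping from one source dataset's label IDs to a unified
# label system (0 = background). Unlisted classes fall back to background.
SOURCE_TO_UNIFIED = {
    0: 0,  # background
    1: 1,  # liver
    2: 2,  # kidney right
    3: 3,  # kidney left
    4: 4,  # spleen
    5: 5,  # pancreas
    # ... remaining shared organ classes
}

def convert_label(in_path, out_path, mapping):
    """Remap a segmentation volume's label IDs into the unified label system."""
    img = sitk.ReadImage(in_path)
    arr = sitk.GetArrayFromImage(img)

    out = np.zeros_like(arr)
    for src_id, dst_id in mapping.items():
        out[arr == src_id] = dst_id

    out_img = sitk.GetImageFromArray(out)
    out_img.CopyInformation(img)  # preserve spacing, origin, and direction
    sitk.WriteImage(out_img, out_path)

# Example (hypothetical file names):
# convert_label("case_0001_word.nii.gz", "case_0001_unified.nii.gz", SOURCE_TO_UNIFIED)
```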

The organ classes annotated across FLARE22, AMOS CT, WORD*, TotalSegmentator v2, AMOS MR, TotalSegmentator MR, and A-Eval (coverage varies by dataset) are: Liver, Kidney Right, Kidney Left, Spleen, Pancreas, Aorta, Inferior Vena Cava, Adrenal Gland Right, Adrenal Gland Left, Gallbladder, Esophagus, Stomach, and Duodenum.

*Note: The WORD dataset has been post-processed to distinguish between left and right adrenal glands.
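As a reference for how such a left/right split can be done, the sketch below assigns each connected component of a merged adrenal-gland label to a left or right class based on its centroid along the left-right axis. The label IDs, the choice of axis, and the orientation convention are assumptions for illustration; the actual post-processing applied to WORD may differ.

```python
# Minimal sketch of splitting a merged "adrenal gland" label into left/right
# components. Label IDs and the left-right axis/orientation are assumptions.
import numpy as np
from scipy import ndimage

ADRENAL_MERGED = 13   # hypothetical merged label ID in the source annotation
ADRENAL_RIGHT = 13    # hypothetical output IDs in the converted annotation
ADRENAL_LEFT = 14

def split_adrenal(seg: np.ndarray) -> np.ndarray:
    """Assign each connected adrenal component to a left/right label by centroid."""
    out = seg.copy()
    components, n = ndimage.label(seg == ADRENAL_MERGED)
    midline = seg.shape[-1] / 2  # assume the last array axis is left-right

    for comp_id in range(1, n + 1):
        comp = components == comp_id
        centroid = ndimage.center_of_mass(comp)[-1]
        # Assumes the patient's right side lies at lower indices on this axis.
        out[comp] = ADRENAL_RIGHT if centroid < midline else ADRENAL_LEFT
    return out
```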

Results

(Results figure)

🎫 License

This project is released under the Apache 2.0 license.

🙏 Acknowledgement

👋 Hiring & Global Collaboration

  • Hiring: We are hiring researchers, engineers, and interns in the General Vision Group at Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
  • Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models and promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
  • Contact: Junjun He ([email protected]), Jin Ye ([email protected]), and Tianbin Li ([email protected]).
