Organizer_Codalab competition YAML definition language
This page describes all the attributes in the Codalab competition definition language, using YAML. This is used to create configuration files in Codalab competition bundles. If in doubt about YAML, consult the reference card.
- title: <String between double quotes>
- description: <String between double quotes>
- image: <File name, .jpg or .png; competition logo> (called "logo" in the editor)
- has_registration: <True/False. If true, the organizer must approve participants>
- force_submission_to_leaderboard: <True/False. If false, participants must submit to the leaderboard manually>
- disallow_leaderboard_modifying: <True/False. If true, submissions cannot be changed>
- enable_detailed_results: <True/False. If true, a detailed-results HTML file is written>
- end_date: <YYYY-MM-DD. No end date if left blank>
- admin_names: <Comma-separated list of Codalab IDs of co-organizers> (called "admins" in the editor)
- html:
    overview: overview.html              # Detailed competition description.
    evaluation: evaluation.html          # Instructions and evaluation metrics.
    terms: terms_and_conditions.html     # Rules and prizes.
    data: data.html                      # Dataset description.
    p1: additionalPage1.html             # Additional pages (optional).
    p2: additionalPage2.html
    p3: additionalPage3.html
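Putting these attributes together, the top of a competition.yaml file might look like the following sketch (all titles, dates, and file names are illustrative, not part of this reference):

```yaml
title: "My First Competition"
description: "A demonstration competition."
image: logo.png
has_registration: False
force_submission_to_leaderboard: True
disallow_leaderboard_modifying: True
enable_detailed_results: True
end_date: 2030-12-31
admin_names: organizer1, organizer2
html:
    overview: overview.html
    evaluation: evaluation.html
    terms: terms_and_conditions.html
    data: data.html
```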
Some properties of competitions cannot be specified in the YAML file. However, they are available from the editor:
- Queue: Queue to which execution jobs are sent. New in v1.5.
- Publicly Available: Checked if the competition is published.
- Enable medical image viewer: No longer in use in v1.5. Only for medical competitions.
- Show datasets from yaml: No longer in use in v1.5. Remains for backward compatibility.
- Reward: Amount of competition prize.
- Allow teams: Checked if profile-declared team name is used.
- Enable per submission metadata: If checked, when submitting an entry, competitors will be asked for Team name, Method name, Method description, etc.
- Allow sharing of public submissions: If checked, participants will be able to share their submissions with others.
- Enable forum: If checked, participants will be able to make posts to the forum.
- Anonymous leaderboard: If checked, participants' names will not appear in the leaderboard.
- Enable Competition level teams: If checked, per competition team registration will be required for participation. New in v1.5.
- Organizers need to approve the new team: If checked, the organizers will need to approve the teams. New in v1.5.
- phasenumber: <Integer. Number of the phase (1, 2, ...)>
- label: <String. Label used to identify the phase.>
- description: <String. Short phase description>
- color: <Tab colors, one of: white, orange, yellow, green, blue, purple>
- start_date: <YYYY-MM-DD, Phase start (ends when new phase begins)>
- max_submissions: <Integer. Maximum number of submissions per participant over the whole phase>
- max_submissions_per_day: <Integer. Maximum number of submissions per participant per day>
- is_scoring_only: <True/False. If true (default) no code submission> (Results Scoring Only is the name in the editor)
- input_data: <Zip file. Data fed to submission. No need if is_scoring_only == True>
- scoring_program: <Zip file. Program that evaluates submissions and writes scores.txt>
- reference_data: <Zip file. Ground truth used by the scoring program>
- public_data: <Zip file. Data downloadable by participants (in Files page)>. New in v1.5.
- starting_kit: <Zip file. Sample code downloadable by participants (in Files page)>. New in v1.5.
- auto_migration: <True/False. If true, the last submission of the previous phase is automatically entered as the first submission of this phase.>
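In a competition bundle these attributes live under a phases key, one numbered entry per phase. A sketch with two phases (labels, dates, and file names are illustrative assumptions):

```yaml
phases:
    1:
        phasenumber: 1
        label: Development
        description: "Practice phase with feedback."
        color: green
        start_date: 2030-01-01
        max_submissions: 100
        max_submissions_per_day: 5
        is_scoring_only: True
        scoring_program: scoring_program.zip
        reference_data: reference_data.zip
        public_data: public_data.zip
        starting_kit: starting_kit.zip
    2:
        phasenumber: 2
        label: Final
        description: "Final test phase."
        color: purple
        start_date: 2030-06-01
        auto_migration: True
```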
Some phase properties cannot be specified in the YAML file. However, they are available from the editor:
- Leaderboard Mode: The "Hide Results" mode makes the leaderboard results invisible to non-organizers ("private leaderboard").
- Phase never ends: If checked, the phase does not end at the beginning of the next one.
- If submission beats old score, put submission on leaderboard: Show best score instead of last score.
- Scoring program docker image: Docker in which the scoring program is run. See list in docker hub.
- Default docker image: Docker in which the participant code is run, if the participants do not supply their own docker.
- Disable custom docker image: Code of participants always run in the default docker.
- Ingestion program organizer dataset: Zip file containing the ingestion program, which processes participant submissions before scoring.
- Ingestion program docker image: We need to clarify that.
This section lets you configure the leaderboard. Don't panic! As a reminder:
Alias indicators:
'&' : Anchor property.
'*' : Alias indicator.
This is a rather generic example of leaderboard configuration:
leaderboards:                  # This indicates the beginning of the leaderboard description.
    Results: &RESULTS          # The leaderboard table anchor is &RESULTS.
        label: RESULTS         # The table is called "RESULTS".
        rank: 1
columns:                       # Now comes the list of columns:
    set1_score:                # This column will display the results of set1_score.
        leaderboard: *RESULTS  # It will show up in the table referred to by &RESULTS.
        label: Set 1           # The column will be named "Set 1" in the leaderboard.
        numeric_format: 4      # There will be 4 decimals displayed.
        rank: 2                # This will be the second column.
    set2_score:
        leaderboard: *RESULTS
        label: Set 2
        numeric_format: 4
        rank: 3
    ave_score:                 # Ranking by average rank.
        leaderboard: *RESULTS
        label: < Rank >
        numeric_format: 4
        rank: 1
        computed:              # This column will show the average rank of all score columns.
            operation: Avg
            fields: set1_score, set2_score
    ExecutionTime:             # This column will show the execution time.
        leaderboard: *RESULTS
        label: ExecutionTime
        numeric_format: 2
        rank: 4
When a scoring program runs, it must return a file scores.txt. For the example above, that file should contain something like:
set1_score: 0.1234
set2_score: -2.4321
ExecutionTime: 0.5
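To produce such a file, a scoring program only needs to write one "key: value" pair per line. A minimal sketch in Python (the write_scores helper and the input/output directory convention are illustrative assumptions, not part of this page):

```python
import os
import sys

def write_scores(scores, output_dir):
    """Write a scores.txt file: one 'key: value' pair per line."""
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, "scores.txt")
    with open(path, "w") as f:
        for key, value in scores.items():
            f.write(f"{key}: {value}\n")
    return path

if __name__ == "__main__" and len(sys.argv) > 2:
    # Codalab typically invokes the scoring program as: program <input_dir> <output_dir>.
    # The score values here are placeholders; a real program computes them
    # from the submission and the reference data.
    write_scores(
        {"set1_score": 0.1234, "set2_score": -2.4321, "ExecutionTime": 0.5},
        sys.argv[2],
    )
```

The keys in the dictionary must match the column names declared in the leaderboard section (set1_score, set2_score, ExecutionTime above) for the values to appear in the right columns.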
The scoring program may optionally also return a file scores.html containing detailed results (in free format), which will be linked from the leaderboard in the column "Detailed Results".