Hi! Thanks for your great work.

There are 21900 samples in `test_challenge.csv` provided in fine-grained-annotations. When I preprocessed them and converted 30 fps to 60 fps, I found that the number of valid samples is not 21900 but 20882. I have figured out why those 21900 - 20882 = 1018 samples are invalid:

In the function `gather_split_annotations()` in this script, 286 samples are ignored because they cannot be found in `jsons_list_poses`. I printed the names of those missing samples: they should have been read from a JSON file named `'nusar-2021_action_both_9025-c12a_9025_user_id_2021-02-18_111731.json'`, but the Google Drive doesn't provide this JSON file.
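To double-check which samples will be dropped before preprocessing, a quick scan like the following can list annotation entries whose pose JSON is absent (a sketch only; `find_missing_pose_samples`, the `video_id` key, and the file-naming convention are my assumptions, not the repo's actual API):

```python
import os

def find_missing_pose_samples(annotations, poses_dir):
    """Return the video ids of annotation entries with no pose JSON on disk.

    Hypothetical helper: assumes each annotation dict has a 'video_id' key
    and that pose files are named '<video_id>.json' inside poses_dir.
    """
    missing = []
    for ann in annotations:
        json_name = ann["video_id"] + ".json"  # assumed naming convention
        if not os.path.exists(os.path.join(poses_dir, json_name)):
            missing.append(ann["video_id"])
    return missing
```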
To convert 30 fps to 60 fps, I wrote the following lines in my preprocessing script:

```python
epic_joints_seg = []  # stores all hand poses frame by frame for the current segment
start_f = max(0, segment['start_frame'] - contex_val)                 # adjust start_frame with context
end_f = min(segment['end_frame'] + contex_val + 1, len(hand_labels))  # adjust end_frame with context
for img_index in range(start_f, end_f, 1):  # this loop never runs when start_f > end_f
    ......
    epic_joints_seg.append(landmarks3d)     # so the epic_joints_seg list stays empty
```

where `hand_labels` is also at 60 fps.

Most samples are handled correctly, but surprisingly, 732 samples get a `start_f` and `end_f` that are out of range: `segment['start_frame'] - contex_val` and `segment['end_frame'] + contex_val + 1` are both bigger than `len(hand_labels)`, resulting in the strange situation `start_f > end_f = len(hand_labels)`. In this case `epic_joints_seg` will be empty, so these 732 samples are also invalid.
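A defensive version of the snippet above could clamp the window and explicitly skip out-of-range segments instead of silently producing an empty list (a sketch reusing the `segment`, `contex_val`, and `hand_labels` names from my script; `collect_segment_poses` is a hypothetical helper, not part of the repo):

```python
def collect_segment_poses(segment, hand_labels, contex_val):
    """Return the hand labels inside the context-padded segment window,
    or None if the whole window falls outside the labels."""
    start_f = max(0, segment['start_frame'] - contex_val)
    end_f = min(segment['end_frame'] + contex_val + 1, len(hand_labels))
    if start_f >= end_f:  # out-of-range segment: no usable frames
        return None       # caller can drop or flag this sample explicitly
    return [hand_labels[i] for i in range(start_f, end_f)]
```

Returning `None` (rather than `[]`) makes the 732 out-of-range samples easy to count and exclude downstream.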
These 1018 invalid test samples (about 5% of the test set) have no pose data, so they are guaranteed to fail prediction. If all 21900 samples in `test_challenge.csv` are scored, this may lower the accuracy reported on your CodaLab Challenge Page. I don't know if this is intended.
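For reference, the back-of-envelope impact if all 1018 samples score zero:

```python
total, invalid = 21900, 1018
print(f"invalid fraction: {invalid / total:.1%}")             # about 4.6%
print(f"accuracy ceiling: {(total - invalid) / total:.1%}")   # about 95.4%
```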
Thanks for your help in advance.
Best,
Necca