FCOS3D train on kitti dataset #865
Comments
Please show your config. Besides, if you are not in a big hurry, please stay tuned for our released KITTI model. It is expected to be done by the end of September.
The configs:
1. fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py: the model settings in model = dict(...), class_names = [...], img_norm_cfg = dict(...), the optimizer in optimizer = dict(...), and the learning policy in lr_config = dict(...).
2. kitti-mono3d.py: dataset_type = 'NuScenesMonoDataset', class_names = [...], input_modality = dict(...) (consistent with the submission format, which requires this information), eval_pipeline = [...] (a pipeline for data and GT loading in the show function, kept consistent with test_pipeline), and data = dict(...).
Above are the config files. And if I set dataset_type = 'KittiMonoDataset', there will be another error:
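For reference, below is a minimal sketch of what such a kitti-mono3d.py dataset base could look like. It is an assumption based on the nus-mono3d.py base and the files generated by create_data.py, not the exact config used in this thread; the annotation file names, image scale, and dataset parameters should be checked against your own setup, and the FCOS3D head's attribute prediction (a nuScenes-only concept) would also need to be disabled in the model config.

# sketch of a kitti-mono3d.py dataset base (file names and values are assumptions)
dataset_type = 'KittiMonoDataset'
data_root = 'data/kitti/'
class_names = ['Pedestrian', 'Cyclist', 'Car']
# monocular, camera-only input
input_modality = dict(use_lidar=False, use_camera=True)
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFileMono3D'),
    dict(
        type='LoadAnnotations3D',
        with_bbox=True,
        with_label=True,
        with_bbox_3d=True,
        with_label_3d=True,
        with_bbox_depth=True),
    dict(type='Resize', img_scale=(1242, 375), keep_ratio=True),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(
        type='Collect3D',
        keys=['img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d',
              'gt_labels_3d', 'centers2d', 'depths']),
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    # val/test entries omitted for brevity; they mirror train with a test pipeline
    train=dict(
        type=dataset_type,
        data_root=data_root,
        # file names assumed to follow what create_data.py generates; verify locally
        ann_file=data_root + 'kitti_infos_train_mono3d.coco.json',
        info_file=data_root + 'kitti_infos_train.pkl',
        img_prefix=data_root,
        classes=class_names,
        pipeline=train_pipeline,
        modality=input_modality,
        box_type_3d='Camera'))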
Please use
Thank you very much for your answer, and I modified it as you suggested. Traceback (most recent call last): The keys=[
The keys are recorded after several data preprocessing steps of the overall training pipeline. Similar to removing the
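One way to check which keys actually come out of the preprocessing is to build the dataset and inspect a sample. This is only a sketch; the config path is the hypothetical kitti-mono3d.py written in this thread:

from mmcv import Config
from mmdet3d.datasets import build_dataset

cfg = Config.fromfile('configs/_base_/datasets/kitti-mono3d.py')
dataset = build_dataset(cfg.data.train)
sample = dataset[0]
# the printed keys should match the `keys` list given to Collect3D
print(list(sample.keys()))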
Thank you for your answer, I have removed the
The operation of
I have set the
Should be
Thank you very much for your help, I have set the
The
I do not know what caused this error. Are there any other KITTI-specific parameters that should be adjusted? To solve this error, I just set the
With the setting above, there is a key error at eval time, so I modified the
I wonder if this is right
The
@xiaofengWang-CCNU have you trained FCOS3D on the Waymo dataset? The Waymo dataset can be converted to KITTI format.
Hi all, thanks for your interest! We have an updated version of FCOS3D (FCOS3D++ or PGD) with #964 and #1014 supported on KITTI. You can refer to that config and implementation for more insights. Some hyperparameters of the baseline (FCOS3D) are basically fine-tuned, but I believe there is still space for better performance. Hope you can make further progress!
We are working on a more extensive study based on FCOS3D and PGD on different datasets. We will just close this issue temporarily and update related information on the homepage if there is any progress. Please stay tuned.
@xiaofengWang-CCNU Could you leave your email for me? I am also using FCOS3D on KITTI and hope to learn from you.
@xiaofengWang-CCNU Could you leave your email for me? I am also using FCOS3D on KITTI, but I can't get a similar result with your config file. My email is [email protected]. Hope to learn from you, thanks a lot!
Your config does not reproduce an AP2D close to 70. We had to train it with a larger batch size:
data = dict(
    samples_per_gpu=12,
    workers_per_gpu=12)
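Note that the effective batch size is samples_per_gpu multiplied by the number of GPUs, so moving away from the 2x8 setup of the original FCOS3D config may also call for retuning the learning rate. The linear scaling rule below is a common heuristic and only an assumption, not a value verified in this thread:

optimizer = dict(
    type='SGD',
    # the nuScenes FCOS3D baseline uses lr=0.002 for a total batch size of 16 (2 x 8 GPUs);
    # linear scaling suggests lr ~= 0.002 * total_batch_size / 16 (here assuming a single
    # GPU with 12 images), but this is only a rule of thumb and likely needs tuning on KITTI
    lr=0.002 * 12 / 16,
    momentum=0.9,
    weight_decay=0.0001)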
Hi @Tai-Wang, PS - I tried the kitti_run_13.py.txt config for FCOS3D on KITTI. The KITTI results are as follows (I could not reproduce the exact FCOS3D results reported in Table 1 of PGD):
Hi @Tai-Wang!
Sorry to bother you.
To train FCOS3D on the KITTI dataset, I did the following steps:
1. Write 'fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py' according to 'fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py'.
2. Write a 'kitti-mono3d.py' in 'configs/_base_/datasets' according to 'nus-mono3d.py'.
3. Run python tools/train.py configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py --work-dir ./ckpt --gpu-ids 6
4. Prepare the data following create_data.py (the documented command is sketched below).
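For reference, the KITTI preparation command documented by MMDetection3D is roughly as follows; depending on the version, it also exports the monocular annotation file (e.g. kitti_infos_train_mono3d.coco.json) that a mono3D dataset config points to:

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti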
But I get an error:
Traceback (most recent call last):
File "tools/train.py", line 223, in
main()
File "tools/train.py", line 219, in main
meta=meta)
File "/mmdetection3d/mmdet3d/apis/train.py", line 34, in train_model
meta=meta)
File "/opt/conda/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
for i, data_batch in enumerate(self.data_loader):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 291, in iter
return _MultiProcessingDataLoaderIter(self)
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 764, in init
self._try_put_index()
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 994, in _try_put_index
index = self._next_index()
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 357, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 208, in iter
for idx in self.sampler:
File "/opt/conda/lib/python3.7/site-packages/mmdet/datasets/samplers/group_sampler.py", line 36, in iter
indices = np.concatenate(indices)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
I cannot find what caused this error. Is anyone else doing this? Please help me, thank you.
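"need at least one array to concatenate" raised from GroupSampler generally means the built dataset contains zero samples, for example because the annotation file path or the classes in kitti-mono3d.py do not match what create_data.py generated. A minimal check (a sketch, using the config path from the training command above):

from mmcv import Config
from mmdet3d.datasets import build_dataset

cfg = Config.fromfile(
    'configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py')
dataset = build_dataset(cfg.data.train)
# if this prints 0, the dataloader's sampler will fail exactly as in the traceback above
print('number of training samples:', len(dataset))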