Custom model for multisensor environments #631
Comments
Related to #133
araffin beat me to it again <.<. For a more direct link: here is an example of how to combine a visual observation with a 1D vector: #133 (comment)
Araffin, Miffly,
As discussed in #133, true multi-modal observations are not currently possible, and you have to resort to this kind of dirty hack for now. However, this is the very next thing on the to-do list after TF2 support, which is slowly getting there but is currently on hiatus due to holidays :)
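To make the workaround concrete, here is a minimal NumPy sketch of the kind of hack referred to above: pack the 1D sensor vector into an extra row appended to the image observation so that a single Box observation space carries both modalities. This is an illustrative assumption, not the exact hack from #133; `pack` and `unpack` are hypothetical helper names.

```python
import numpy as np

def pack(image, vector):
    """Append the 1D vector as one zero-padded extra row on channel 0."""
    h, w, c = image.shape
    assert vector.size <= w, "vector must fit in one image row"
    row = np.zeros((1, w, c), dtype=image.dtype)
    row[0, :vector.size, 0] = vector
    return np.concatenate([image, row], axis=0)  # shape (h + 1, w, c)

def unpack(obs, vec_len):
    """Split the packed observation back into image and vector parts."""
    return obs[:-1], obs[-1, :vec_len, 0]

# Example: a 32x32 RGB camera frame plus a 2-element GPS reading.
img = np.zeros((32, 32, 3), dtype=np.float32)
packed = pack(img, np.array([0.5, -0.5], dtype=np.float32))
image_part, vec_part = unpack(packed, 2)
print(packed.shape)  # (33, 32, 3)
```

The environment then exposes the packed array as its observation, and the custom policy calls `unpack` before feeding each part to the appropriate network branch.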
Closing in favor of #133
Hello everyone,
I would like to create an algorithm to train a multi-sensor agent using your DRL framework.
What I have in mind is concatenating the output of one or more convolutional layers, whose input could be camera or lidar data, with 1D arrays from other sensors (such as GPS).
It looks like I should add an option to inputs.py and a custom model to manage this kind of environment. Would this be enough? Do you have any suggestions?
Thanks,
Simone
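The concatenation described above can be sketched as follows. This is a NumPy-only illustration of the shapes involved, not stable-baselines code; a real custom policy would build the same structure with the library's TF layers, and `conv_features` here is just a placeholder stand-in for a convolutional trunk with made-up weights.

```python
import numpy as np

def conv_features(image, n_features=64, rng=None):
    """Placeholder for a conv trunk: maps an image to a flat feature vector."""
    rng = rng if rng is not None else np.random.default_rng(0)
    flat = image.reshape(-1)
    weights = rng.standard_normal((flat.size, n_features))  # dummy weights
    return np.maximum(flat @ weights, 0.0)  # ReLU activation

def combined_features(image, vector):
    """Concatenate image features with the raw 1D sensor vector."""
    return np.concatenate([conv_features(image), vector])

camera = np.zeros((32, 32, 3), dtype=np.float32)  # e.g. camera or lidar grid
gps = np.array([1.0, 2.0])                        # 1D sensor reading
feats = combined_features(camera, gps)
print(feats.shape)  # (66,)
```

The combined vector would then feed the fully connected layers of the policy and value networks.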