Look at the function get_model in the curiosity repo, in the file
curiosity/models/obj_detector_feedforward.py.
initial strategy:
1) in parallel, randomly draw from parameter space and run the models [do on openmind] (see the random-search sketch below)
2) if there's any traction, we can do hyperopt (look at the hyperopt package; sketch below, after the questions)
--> traction = does anything besides the base model train at all?
is there any range in testing performance? [needs to be comparable across models]
is there any range in consistency to humans?
is there any range in consistency to the base model?
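
As a concrete starting point for step 1, here is a minimal random-search sketch. The parameter space and `train_and_eval` are hypothetical stand-ins (the real entry point would be get_model in curiosity/models/obj_detector_feedforward.py), and on openmind each draw would be submitted as its own batch job rather than fanned out through a local process pool:

```python
import random
from multiprocessing import Pool

# Illustrative parameter space -- the real one would come from whatever
# knobs get_model exposes.
PARAM_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -2),
    "hidden_units":  lambda: random.choice([128, 256, 512, 1024]),
    "num_layers":    lambda: random.randint(2, 6),
}

def sample_params():
    """Draw one random point from the parameter space."""
    return {name: draw() for name, draw in PARAM_SPACE.items()}

def train_and_eval(params):
    """Hypothetical stand-in: build the model from params, train it,
    and return a test loss. Replace with a real call into get_model."""
    return random.random()  # placeholder score so the sketch runs end-to-end

if __name__ == "__main__":
    draws = [sample_params() for _ in range(32)]
    # Locally we can fan out with a process pool; on openmind each draw
    # would instead be its own job submission.
    with Pool(processes=8) as pool:
        losses = pool.map(train_and_eval, draws)
    for params, loss in sorted(zip(draws, losses), key=lambda pair: pair[1]):
        print(f"{loss:.4f}  {params}")
```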
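And a sketch of step 2 with the hyperopt package, under the same assumption that `train_and_eval` is a stand-in for a real training run. `fmin`, `tpe`, and `hp` are hyperopt's actual entry points; the search space itself is illustrative:

```python
import random
from hyperopt import STATUS_OK, fmin, hp, tpe

def train_and_eval(params):
    """Hypothetical stand-in, as in the random-search sketch above."""
    return random.random()

# hp.loguniform takes bounds in log space, so this covers learning
# rates from roughly e^-11 to e^-4.
space = {
    "learning_rate": hp.loguniform("learning_rate", -11, -4),
    "hidden_units":  hp.choice("hidden_units", [128, 256, 512, 1024]),
    "num_layers":    hp.quniform("num_layers", 2, 6, 1),
}

def objective(params):
    loss = train_and_eval(params)
    return {"loss": loss, "status": STATUS_OK}

# TPE uses the losses from earlier trials to guide later draws, which is
# where this should beat plain random search once something trains at all.
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
print(best)
```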