I'd love to try to implement Grad-CAM on this project; however, the pretrained models seem to be packed with the TensorFlow SavedModel format instead of Keras. Do you have the checkpoints available for the evaluation or training code, for finetuning / custom implementations?
Do you necessarily need a Keras-style checkpoint for this? It should be possible to backprop through the TF SavedModel as well and get gradients w.r.t. the input image. You may want to use `model.crop_model` for this analysis; it is the single-person pose estimator at the core of the method and takes a 256x256 cropped image. (See the bottom of https://github.com/isarandi/metrabs/blob/master/docs/INFERENCE_GUIDE.md)
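The gradient-through-SavedModel pattern could look roughly like the sketch below. The tiny inline network is only a stand-in for the actual `model.crop_model` (which would be restored with `tf.saved_model.load` and called on a batch of 256x256 crops); the `tf.GradientTape` usage is the part that carries over.

```python
import tensorflow as tf

# Stand-in for model.crop_model (assumption: the real SavedModel is a
# callable taking a batch of 256x256x3 crops; this toy net only
# illustrates the gradient pattern, not the METRABS architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, activation='relu', input_shape=(256, 256, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),
])

image = tf.random.uniform((1, 256, 256, 3))

with tf.GradientTape() as tape:
    tape.watch(image)            # the input is a Tensor, not a Variable, so watch it explicitly
    output = model(image)
    score = tf.reduce_sum(output)  # scalar to differentiate

grads = tape.gradient(score, image)  # gradients w.r.t. the input image
print(grads.shape)  # (1, 256, 256, 3)
```

For Grad-CAM specifically you would instead tape-watch the activations of an intermediate feature map rather than the input, but the mechanics are the same.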
Young-Yoon pushed a commit to Young-Yoon/metrabs that referenced this issue on Mar 8, 2024 (…sarandi#43):
This PR adds model-conversion functionality inspired by the avelab model conversion pipeline used in face tracking.
The converter takes as input a model folder with .pb files and a variables folder from a training run, and converts it into several ONNX files. If an ave-tracker path is given, the models are additionally converted into ncnn files (optimized and binarized), and a bundle file for use in ave-tracker, containing a settings.json and the ncnn file, is also generated. The settings.json file is created automatically. Furthermore, an info.txt file is created with the MD5 checksums of the relevant input files.