[Enhance] Enhance feature extraction function. #593
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #593      +/-   ##
==========================================
+ Coverage   80.34%   81.31%   +0.97%
==========================================
  Files         115      115
  Lines        6585     6654      +69
  Branches     1125     1142      +17
==========================================
+ Hits         5291     5411     +120
+ Misses       1151     1095      -56
- Partials      143      148       +5
```
- remove `se_cfg` from the docstring of `StackedLinearClsHead`
- add docstrings for some functions and classes
LGTM.
Hello, is all of this code based on the default config? Why is there no information about the trained model and the corresponding dataset? I want to use a model I have trained myself (such as ResNet-101) and matching dataset images (ImageNet) to get the neck output. Can you provide a code demonstration? Thank you very much.
These codes are just simple examples. If you want to use a real dataset, you also need to build the dataset to do some pre-processing. Here is an example:

```python
import mmcv
from mmcv import Config
from mmcls.apis import init_model
from mmcls.datasets import build_dataset

cfg = Config.fromfile('configs/resnet/resnet18_8xb32_in1k.py')
cfg.model.backbone.out_indices = (0, 1, 2, 3)  # Output multi-scale feature maps
model = init_model(cfg, 'your_checkpoint_path', device='cpu')  # Use the API `init_model` to initialize and load checkpoint
dataset = build_dataset(cfg.data.test)  # Use test dataset and test pipeline

img = dataset[0]['img'][None, :]  # Get one input image
outs = model.extract_feat(img, stage='neck')
for out in outs:
    print(out.shape)
# torch.Size([1, 256])
# torch.Size([1, 512])
# torch.Size([1, 1024])
# torch.Size([1, 2048])
```
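The printed shapes are 2-D because the `neck` stage applies `GlobalAveragePooling` to each backbone feature map. As a rough illustration of that pooling, here is a minimal NumPy sketch (the channel counts mirror the shapes printed above; the spatial sizes are typical for a 224×224 input and are assumed here only for illustration):

```python
import numpy as np

# Made-up multi-scale backbone feature maps (N, C, H, W)
feature_maps = [
    np.zeros((1, 256, 56, 56)),
    np.zeros((1, 512, 28, 28)),
    np.zeros((1, 1024, 14, 14)),
    np.zeros((1, 2048, 7, 7)),
]

# GlobalAveragePooling: average over the spatial dims H and W,
# collapsing each (N, C, H, W) map to an (N, C) vector
pooled = [fm.mean(axis=(2, 3)) for fm in feature_maps]
for p in pooled:
    print(p.shape)
# (1, 256)
# (1, 512)
# (1, 1024)
# (1, 2048)
```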
* Fix MobileNet V3 configs
* Refactor to support more powerful feature extraction.
* Add unit tests
* Fix unit test
* Improve according to comments
* Update checkpoints path
* Fix unit tests
* Add docstring of `simple_test`
* Add docstring of `extract_feat`
* Update model zoo
How can I do this feature extraction in the new mmpretrain? I have tried for a long time without success:

```python
from mmpretrain.models import build_classifier

cfg = Config.fromfile('configs/resnet/resnet18_8xb32_in1k.py')
dataset = build_dataset(cfg.data.test)  # Use test dataset and test pipeline
```
@mzr1996
Motivation

Feature extraction is an important function of the backbone. We already have the `extract_feat` method in `ImageClassifier`, but it can only extract the feature map after the neck.

Modification

Here we split the whole network into six parts and enable our API to get all of their outputs.
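As a rough illustration of the six extraction points, here is a minimal NumPy sketch, not the actual mmcls code; the array shapes, the random weights, and the identity pre-logits step are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# ① Backbone output: a spatial feature map, e.g. the last ResNet stage
backbone_out = rng.standard_normal((1, 2048, 7, 7))

# ② Neck output: GlobalAveragePooling over the spatial dimensions
neck_out = backbone_out.mean(axis=(2, 3))  # shape (1, 2048)

# ③ Pre-logits output: any extra head processing before the final linear
#    layer (identity here; heads like StackedLinearClsHead do more)
pre_logits = neck_out

# ④ Head linear output (logits, no softmax); weights are random stand-ins
W = rng.standard_normal((2048, 10)) * 0.01
logits = pre_logits @ W  # shape (1, 10)

# ⑤ Softmax output (sigmoid would be used instead for multi-label tasks)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# ⑥ Post-processing output: convert the tensor into a plain Python list
result = probs.tolist()

print(neck_out.shape, logits.shape, len(result[0]))
```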
Use cases

Middle layer output, with `model.extract_feat`:

① Backbone output
② Neck output
The neck is usually `GlobalAveragePooling`, and some networks don't have a neck.
③ Pre-logits output (without the final linear classifier head)
Some heads have not only a single linear layer classifier but also some other processing, like `VisionTransformerClsHead` and `StackedLinearClsHead`. Now we can extract features before the final linear classifier.

Final layer output, with `model.simple_test`:

④ Head linear output (without softmax)
⑤ Softmax output (without post-processing)
In multi-label tasks, the softmax is changed to sigmoid.
⑥ Post-processing output
In post-processing, we will convert the tensor output into a list. The post-processing doesn't depend on softmax; you can also post-process the logits output.
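To make the ⑤/⑥ distinction concrete, here is a short NumPy sketch (the logits values are made up for illustration) of single-label softmax vs. multi-label sigmoid, and of the post-processing step that turns either tensor into a plain list:

```python
import numpy as np

logits = np.array([[2.0, 0.5, -1.0]])  # made-up head output for 3 classes

# ⑤ single-label: softmax normalizes the scores into one probability
#    distribution (each row sums to 1)
softmax = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# ⑤ multi-label: sigmoid scores each class independently
#    (the row need not sum to 1)
sigmoid = 1.0 / (1.0 + np.exp(-logits))

# ⑥ post-processing: convert the tensor output into a list; this works on
#    the softmax/sigmoid output or directly on the raw logits
as_list = softmax.tolist()

print(softmax.round(3), sigmoid.round(3), as_list)
```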
Checklist
Before PR:
After PR: