The source code is as follows:
def comp_class_vec(output_vec, index=None):
    if not index:
        index = np.argmax(output_vec.cpu().data.numpy())  # int
    else:
        index = np.array(index)
    index = index[np.newaxis, np.newaxis]  # (1, 1) ndarray
    index = torch.from_numpy(index)  # (1, 1) Tensor
    one_hot = torch.zeros(1, 1000).scatter_(1, index, 1)  # one-hot (1, 1000) Tensor: all zeros and a single 1
    one_hot.requires_grad = True
    class_vec = torch.sum(one_hot * output_vec)  # compute the "loss"
    return class_vec
As I understand this Loss computation: take a 5-class example where the highest-probability class in output_vec is at pos=3, i.e. output_vec = [0.1, 0.1, 0.6, 0.1, 0.1], so one_hot = [0, 0, 1, 0, 0] and torch.sum(one_hot * output_vec) = 0.6.
If the probability of the pos=3 class gets higher, the computed torch.sum(one_hot * output_vec) gets larger. But intuitively, the network is now more confident in the correct class, so shouldn't the Loss be lower?
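The arithmetic in the 5-class example above can be checked with a minimal sketch (the values are the illustrative ones from the question, scaled down from the 1000-class original):

```python
import torch

# Illustrative 5-class output and one-hot mask from the example
output_vec = torch.tensor([[0.1, 0.1, 0.6, 0.1, 0.1]])
one_hot = torch.zeros(1, 5)
one_hot[0, 2] = 1.0  # 1 at the argmax position

# Masking with the one-hot vector picks out just the target class score
class_vec = torch.sum(one_hot * output_vec)
print(class_vec.item())  # ≈ 0.6
```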
This is not a loss function here but an activation value: the activation of the target class is what gets backpropagated, so a larger value is expected. See the paper for reference.
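To make the answer concrete: in Grad-CAM-style visualization, `backward()` is called on the raw class score itself, not on a loss, so the gradients measure how strongly each feature influences that class. A hedged sketch, where a tiny linear head stands in for a real CNN (all names and shapes here are illustrative assumptions, not the repository's code):

```python
import torch

# Pretend pooled feature map and classifier weights (illustrative stand-ins)
features = torch.randn(1, 4, requires_grad=True)
weights = torch.randn(5, 4)
scores = features @ weights.t()  # (1, 5) class scores

target = scores[0].argmax()
class_vec = scores[0, target]  # activation of the target class, not a loss
class_vec.backward()           # backpropagate the score itself

# features.grad is d(score_target)/d(features); in Grad-CAM these gradients
# are used to weight the feature maps, so a larger score is fine here.
print(features.grad)
```

Because `scores = features @ weights.t()`, the gradient of the target score with respect to `features` is exactly the target class's weight row, which is what makes the class-discriminative weighting work.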