About "detection adjustment" in lines 339-360 of solver.py #14
Hello, I have a question: if I want to apply your work to another domain, how do I export the detected anomalous values and then filter them out of the original data?
Does anyone know how to solve this problem when the test labels are unknown?
Hello, this adjustment is only used to compute the metrics. If you want to deploy the model, you can simply comment it out. @xiaobiao998
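For readers trying to locate the operation under discussion: the "detection adjustment" follows the point-adjustment convention of Xu et al., 2018, in which a ground-truth anomaly segment counts as detected as soon as any single point inside it is flagged. A minimal sketch of that idea (variable and function names here are illustrative, not the exact code from solver.py):

```python
import numpy as np

def point_adjust(gt, pred):
    """Evaluation-time point adjustment: if any point inside a
    ground-truth anomaly segment is flagged, credit the whole segment."""
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool).copy()
    i, n = 0, len(gt)
    while i < n:
        if gt[i]:
            # Find the end of this ground-truth anomaly segment.
            j = i
            while j < n and gt[j]:
                j += 1
            # One hit anywhere in the segment marks the whole segment.
            if pred[i:j].any():
                pred[i:j] = True
            i = j
        else:
            i += 1
    return pred
```

As the maintainer notes above, this step only affects metric computation; a deployed detector would use the raw `pred` directly, since no ground truth is available at inference time.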
OK, thank you.
The reported performance of this code relies entirely on an adjustment made after looking at the ground truth (GT). In the real world there is no GT, so the realistic performance is what you get after commenting it out. If I were a reviewer, I would strongly insist on removing this adjustment; it is an unrealistic step that exists purely to make the numbers look good. Nor can this be called a fair comparison, because the adjustment may favor the authors' proposed method, and since it does not exist in practice, it is meaningless.
Please read the explanation above carefully. For clarity, we also provide a Chinese version below.
Hi @wuhaixu2016, thank you for sharing your code and making it easy to use and reproduce the results. I'd like to clarify my understanding of the code.

From a theoretical perspective, the following lines introduce information leakage from the training set into the test set. Specifically, the model's predictions (

Practically speaking, since we wouldn't have access to ground-truth labels during deployment, I believe these lines should be omitted. After removing them, here are the results I obtained:
SMD dataset:
SMAP dataset:
PSM dataset:

If my understanding is incorrect, could you please clarify? Alternatively, if you have any suggestions for addressing this issue in a practical way, I would greatly appreciate your input. Thank you!
Since some researchers are confused about the "detection adjustment", we provide some clarification here.
(1) Why use "detection adjustment"?
Firstly, I strongly suggest that researchers read the original paper (Xu et al., 2018), which gives a comprehensive explanation of this operation.
In our paper, we follow this convention for the following reasons:
In summary, you can view the adjustment as an "evaluation protocol", which is to measure the capability of models in "abnormal event detection".
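To make the protocol concrete, here is a toy example (the numbers are illustrative, not from the paper). Point-wise scoring penalizes a detector that flags only part of an anomalous event, whereas the adjusted protocol credits the whole event once any point in it is caught:

```python
def f1(gt, pred):
    """Point-wise F1 over binary label sequences."""
    tp = sum(g and p for g, p in zip(gt, pred))
    fp = sum((not g) and p for g, p in zip(gt, pred))
    fn = sum(g and (not p) for g, p in zip(gt, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gt       = [0, 1, 1, 1, 1, 0]  # one anomalous event spanning 4 points
raw      = [0, 0, 1, 0, 0, 0]  # detector fires on a single point
adjusted = [0, 1, 1, 1, 1, 0]  # after adjustment: whole event credited

print(f1(gt, raw))       # 0.4  (precision 1.0, recall 0.25)
print(f1(gt, adjusted))  # 1.0  (event-level: the event was detected)
```

Under the adjusted protocol, both sequences describe the same outcome, namely that the single anomalous event was detected, which is why the adjustment can be read as an event-level rather than point-level evaluation.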
(2) We have provided a comprehensive and fair comparison in our paper.
If you still have questions about the adjustment, you are welcome to email me to discuss further ([email protected]).