MSC3843: Reporting content over federation #3843
@Yoric says:
At first glance, I fear that this may have the usual problem that it reports content to the homeserver administrator, who typically cannot do much about it.
Let me elaborate.
I agree that we definitely should have mechanisms for reporting content over federation. However, the current mechanism for reporting content is essentially broken in many ways. The most important, in my eyes, is that content reporting is a mechanism to get in touch with the (current) homeserver's administrator, who:
- often cannot actually look at that content;
- often cannot actually redact that content;
- in the P2P future, is the same person as the offender.
While this proposal does address an existing problem, I believe that it does so by extending a broken mechanism.
I believe that an approach comparable to #2938 (disclaimer: I'm the author, so I'm biased), which directs reports towards room moderators, is the way forward.
It would be good to have ways to do both things. Room moderators can take more effective action in that specific room, but only server admins can deactivate the account to prevent it from spamming or abusing other rooms.
It's worth noting that the current spec for `/report` doesn't actually say how you're supposed to handle the report. Synapse just dumps it into a database table, but it has the provision and capability to notify room moderators, server admins, etc. if it wanted to - the federation side is simply to introduce this same generic approach to ensure things route to a sensible place.

It is not expected that 100% of calls to the client-server `/report` endpoint result in a federation call. Instead, the reporting server is expected to be intelligent enough to know when another server can deal with the report rather than the room moderators or the local server. Examples would be reporting spam originating from a server whose admin might want to shut down the account, or reporting illegal content on a server whose admins need to get involved to handle removal.

If a server automatically reported everything to remote servers, it would be in bad taste and would likely be blocked by the receiving servers as effectively report spam.
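To illustrate, a minimal sketch of that routing decision in Python; the field names and category values here are hypothetical illustrations, not anything this MSC defines:

```python
from dataclasses import dataclass


@dataclass
class Report:
    """Hypothetical shape for an incoming report; no field here is from the MSC."""
    event_origin: str  # server the reported content originated from
    category: str      # e.g. "spam" or "illegal" (illustrative values only)
    reason: str


def route_report(report: Report, local_server: str) -> str:
    """Decide who should handle a report, following the reasoning above."""
    # Content originating locally can be handled by local admins/moderators.
    if report.event_origin == local_server:
        return "local"

    # Spam from a remote account, or illegal content hosted remotely, is
    # something only the remote server's admins can fully act on (shutting
    # down the account, removing the media), so forward over federation.
    if report.category in ("spam", "illegal"):
        return "federation"

    # Everything else goes to the room's moderators; blindly forwarding
    # 100% of reports would itself amount to report spam.
    return "room_moderators"
```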
I would suggest that `/report` is not a good example to be followed. At the very least, its current implementation in Synapse feels designed to block it from being useful.
Conduit sends reports into the admin Matrix room, so the server admin can easily see them.
> the reporting server is expected to be intelligent enough to know when another server can deal with the report rather than the room moderators or the local server

Would something like MSC2938 or MSC3215 not allow the server to know whether the report can be handled by a remote server admin?

Also, as for reports without reasons being spam and automatic report forwarding: surely the admin would be viewing the event anyway to see if the concern is actually correct, so they would be able to tell whether the content is against their terms of service. Hence the `reason` shouldn't be required, and automatic report forwarding should be able to happen.
Those MSCs are part of the wider system for reporting, and could be used to help make decisions.
When we're working over federation, it's difficult to reliably assume that the remote server is handling reports safely. For example, they may be automatically banning users they receive too many reports for, or may be using reports to send abuse back to the reporter. Sometimes this is malicious, but often it's unsafe code being run on the receiving side. We can prevent a lot of these cases by discouraging automatic forwarding, and, more on the Foundation side, building tools which exemplify safe reporting practices.
> they may be automatically banning users they receive too many reports for

The problem with this is that even if all compliant homeservers do not automatically forward reports, any bad actor can spam reports anyway, basically making it a lose-lose: not only do you lose the benefits of automatic forwarding, but you also don't gain anything in return, as bad actors can abuse your assumptions regardless. IMO, automatically banning is the type of behavior we should be discouraging instead.

> using reports to send abuse back to the reporter

This issue can still occur with manual reports, where you track down the server admin's contact details and report the content that way.

Automatic report forwarding could have the potential to significantly reduce the friction of reporting bad actors over federation, as once the report is sent you only have to wait for the remote admin to take action. That way the bad actor can be stopped before they do any more damage, rather than also having to wait for your own admin to forward the report first. I really hope you reconsider your stance on this. 🙏
A sending server can choose to use automatic forwarding, but this MSC will not encourage it, due to the safety concerns.
> content's event ID would be used to report that content.
>
> The new endpoint takes very similar body parameters to the client-server API, though with the `score` notably missing and `reason` being explicitly required. For example:
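The concrete example isn't reproduced in this view of the diff; a plausible reconstruction as a Python literal, assuming the body is simply the client-server `/report` body with `score` dropped:

```python
# Reconstructed request body for the proposed federation report endpoint
# (an assumption: the same shape as the client-server body, minus "score"):
federation_report_body = {
    "reason": "Targeted harassment of another user.",  # explicitly required
}
```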
What is the motivation for these changes? Wouldn't it be simpler to mirror the CS API completely?
The `score` isn't used by anything, and wouldn't really provide any useful data anyway. Making the `reason` required is to prevent abuse of the system: reporting over federation without a reason is just spam.
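As an illustration of what "required" could mean for a receiving implementation, a sketch; the choice of `M_MISSING_PARAM` as the error code is my assumption, not something specified here:

```python
def validate_report_body(body: dict) -> None:
    """Reject federation reports without a usable reason, per the intent above."""
    reason = body.get("reason")
    if not isinstance(reason, str) or not reason.strip():
        # Treat a reasonless federation report as spam. Which error code
        # to return is an assumption on my part.
        raise ValueError("400 M_MISSING_PARAM: 'reason' is required")
```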
But now servers can't automatically forward reports to the sender's server, and instead have to make up a reason or require manual action.
With some rate limiting on both sides it should work. I'd really like it if reports about users on my homeserver automatically got forwarded to me so I can look at them.
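For what it's worth, one hypothetical shape a receive-side limiter could take; nothing here is specified by the MSC:

```python
import time
from collections import defaultdict


class PerServerReportLimiter:
    """Fixed-window limiter for incoming federation reports, keyed by origin."""

    def __init__(self, max_reports: int = 10, window_secs: float = 60.0) -> None:
        self.max_reports = max_reports
        self.window_secs = window_secs
        self._seen: dict[str, list[float]] = defaultdict(list)

    def allow(self, origin_server: str) -> bool:
        now = time.monotonic()
        window = self._seen[origin_server]
        # Forget report timestamps that have fallen out of the window.
        window[:] = [t for t in window if now - t < self.window_secs]
        if len(window) >= self.max_reports:
            return False  # throttle: too many reports from this server
        window.append(now)
        return True
```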
I mean, it's a content report, so it's closer to telling the admin that you want to report a user because of the content in the permalinks, but, like you said, without specifying the reason alongside the link. Considering that the admins should be able to access the reported content, they can assess whether a punishment is required and to what extent.

For example, if I get a content report with no reason specified, I can view the content and see if there is anything against my TOS. This holds true even if I'm given a fake reason, and notably doesn't change even when a true reason is given.
Not all content is safe to view, nor is the report always going to be about a piece of content. Something this MSC doesn't (yet) consider is that a new reporting system is on the horizon where different types of things can be reported: events, rooms, servers, policies, decisions, appeals, etc.

I do strongly consider a reason required, in lieu of a more detailed reporting API which covers these cases - particularly an API which may be better suited to classifying the report, giving the receiving admins/moderators information about the report without having to view the content.
> events, rooms, servers, policies, decisions, appeals, etc.

Ah ok, that would make sense in those situations. I'm guessing the client-server API doesn't require a `reason`, then, because this wasn't considered at the time.
Most likely, yes. It's also less important for local reports, but over federation the amount of anonymity increases risk.
Good point, I didn't think about that; with this you wouldn't even be able to tell which user generated the report.
Do we want to mention in the moderation policy lists section that blocking reports from a server over federation might be something server implementations want to do for `m.ban` recommendations applying to servers?
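If so, a receiving implementation might check report origins against its policy lists; a sketch of that check, assuming rules are passed in as raw `m.policy.rule.server` event dicts (`fnmatch` approximates the `*`/`?` globs policy entities allow):

```python
from fnmatch import fnmatch


def report_origin_is_banned(origin_server: str, policy_rules: list[dict]) -> bool:
    """Check a report's origin against m.policy.rule.server state events."""
    for rule in policy_rules:
        content = rule.get("content", {})
        entity = content.get("entity")
        recommendation = content.get("recommendation")
        # Rules with an m.ban recommendation that glob-match the origin
        # suggest the server's reports should be dropped as well.
        if entity and recommendation == "m.ban" and fnmatch(origin_server, entity):
            return True
    return False
```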