Twitter is overhauling the way it handles problematic and abusive tweets reported by its users, aiming to bring a more 'human first' approach to improve the quality of tweets flagged by users for misinformation, hate speech, spam and other issues.
The new approach, which is currently being tested with a small group in the US, will be rolled out globally next year.
"It lifts the burden from the individual to be the one who has to interpret the violation at hand. Instead it asks them what happened," Twitter said in a statement late on Tuesday.
Twitter calls this method 'symptoms-first': rather than asking which rule was broken, it first asks the person what's going on.
"Here's the analogy the team uses: say you're in the midst of an emergency medical situation. If you break your leg, the doctor doesn't say, is your leg broken? They say, where does it hurt? The idea is, first let's try to find out what's happening instead of asking you to diagnose the issue," the company elaborated.
By refocusing on the experience of the person reporting the Tweet, Twitter hopes to improve the quality of the reports it receives.
The platform hopes that this richer pool of information, even when the Tweets in question don't technically violate any rules, will still provide valuable input it can use to improve people's experience on the platform.
"What can be frustrating and complex about reporting is that we enforce based on terms of service violations as defined by the Twitter rules," said Renna Al-Yassini, senior UX manager on the team.
"The vast majority of what people are reporting on fall within a much larger gray spectrum that don't meet the specific criteria of Twitter violations, but they're still reporting what they are experiencing as deeply problematic and highly upsetting," Al-Yassini added.
Twitter said it will use the feedback gathered through this new process to refine it further and help more people.