In the two years since its launch, Discord has become incredibly popular with individual video game communities. You can find Discord servers for subreddits, MMORPG classes, Patreon users, YouTubers — just about anyone or anything with an audience. It gives those audiences a place to gather, chat over text or voice and play games together.
But as with any place people gather, Discord has problems. In February, Kotaku covered some of the issues with the decentralized service, including harassment in the form of hate speech messages, horrifying images and “raids” of aggressive outside users.
As a result, joining a Discord server has become fraught. If servers have lax security, abuse can be common. If they have strict security, joining may mean jumping through a lot of hoops. And if you’re dealing with social anxiety, PTSD or other mental health issues, you might be especially reluctant to put yourself in the middle of that mess.
The Discord team has commented on the problem, but it faces roadblocks in responding. Public social networks like Facebook and Twitter have moderation teams, reporting systems, and automated filters to help them handle abuse (though it remains a problem on both). But generally, semi-private services like Discord let users sort out their own abuse policies for their channels. Discord’s size and profile have made that untenable, but moderating thousands of servers is a challenge. Many users prefer the service specifically because it doesn’t have centralized moderation.
With this week’s newly announced features, Discord may have found a way to balance those preferences with the safety of its users.
Users can now opt to turn off direct messages from people who aren’t on their list of friends. If they still want to receive messages from strangers, images and links in those messages will automatically be scanned and flagged for illegal and hazardous content.
Server administrators also have new optional tools to work with. One is an advanced verification setting requiring new users to associate a verified phone number with their account. Banning a user will then ban their phone number, which is more complicated to get around than an email-based ban.
Admins will be able to turn on explicit content filtering to receive the same automated flagging for images that direct messages get. And while Discord is leaving full moderation in the hands of admins, users on large servers can now report abusive messages and users to Discord’s support team directly.
By adding these safety features, Discord is prioritizing the protection of its individual users while still allowing server administrators the freedom to police their own servers however they choose. It’s an interesting balance, one that will hopefully help to make the Discord experience better for everyone but the abusers.
Being part of a gaming community can be a wonderful thing. With Discord becoming the go-to service for real-time communication in gaming, it’s good to see that its developers take safety seriously. [Discord blog]