Yoel Roth, formerly the head of Trust & Safety at Twitter and now at Match, is sharing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content like child sexual abuse material (CSAM). In a recent interview, Roth worried about the lack of moderation tools available to the fediverse, the open social web that includes apps like Mastodon, Threads, Pixelfed, and others, as well as other open platforms like Bluesky.
He also reminisced about key Trust & Safety moments at Twitter, like its decision to ban President Trump from the platform, the misinformation spread by Russian bot farms, and how Twitter's own users, including CEO Jack Dorsey, fell victim to bots.
On the podcast Revolution.social with @Rabble, Roth pointed out that the efforts to build more democratically run online communities across the open social web are also those with the fewest resources when it comes to moderation tools.
“… Looking at Mastodon, looking at other services based on ActivityPub [the protocol], looking at Bluesky in its earliest days, and then looking at Threads as Meta began to develop it, what we saw was that a lot of the services leaning hardest into community-based control gave their communities the fewest technical tools to be able to administer their policies.”
He also saw a “pretty big backslide” across the open social web when it comes to the transparency and decision legitimacy that Twitter once had. While many at the time arguably disagreed with Twitter's decision to ban Trump, the company explained its rationale for doing so. Now, social media providers are so worried about bad actors gaming their systems that they rarely explain themselves.
Meanwhile, on many open social platforms, users wouldn't even receive a notification that their posts had been banned; the posts would simply vanish, with no indication to others that they had ever existed.
“I don't blame startups for being startups, or new pieces of software for lacking all the bells and whistles, but if the whole point of the project was increasing the democratic legitimacy of governance, and what we've done is take a step backwards on governance, has this actually worked at all?” Roth wonders.
Economics of moderation
He also spoke to the economics of moderation, and how the federated approach has yet to prove sustainable on this front.
For instance, an organization called IFTAS (Independent Federated Trust & Safety) had been working on building moderation tools for the fediverse, including giving the fediverse access to tools to combat CSAM, but it ran out of money and had to shut down many of its projects earlier in 2025.
“We saw it coming two years ago. IFTAS saw it coming. Everyone working in this space has largely been volunteering their time and effort, and that only goes so far, because at some point people have families and bills to pay, and the costs pile up if you need to run ML models to detect certain types of bad content,” he explained. “It all gets expensive, and the economics of this federated approach to trust and safety never added up. And in my opinion, they still don't.”
Bluesky, meanwhile, has chosen to employ moderators and hire for trust and safety, but it limits itself to moderating its own app. In addition, it provides tools that let people customize their own moderation preferences.
“They're doing this work at scale. There's obviously room for improvement. I'd love to see them be a bit more transparent. But fundamentally, they're doing the right stuff,” Roth said. However, as the service decentralizes further, Bluesky will face questions about when its obligation is to protect the individual over the needs of the community, he notes.
With doxxing, for example, it's possible that someone wouldn't see that their personal information was being spread online because of how they had configured their moderation tools. But someone should still bear responsibility for enforcing those protections, even if the user isn't on Bluesky's main app.
Where to draw the line on privacy
Another issue facing the fediverse is that decisions favoring privacy can thwart moderation efforts. While Twitter tried not to store personal data it didn't need, it still collected things like the user's IP address, when they accessed the service, device identifiers, and more. These helped the company when it needed to perform a forensic analysis of something like a Russian troll farm.
Fediverse administrators, meanwhile, may not even collect the necessary logs, or won't review them, if they believe doing so violates user privacy.
But the reality is that without data, it's much harder to determine who is actually a bot.
Roth offered a few examples of this from his Twitter days, noting how users had made a trend of replying “bot” to anyone they disagreed with. He says he initially set up an alert and manually reviewed these posts, examining hundreds of “bot” accusations, and no one was ever right. Even Twitter co-founder and former CEO Jack Dorsey fell victim, retweeting posts from a Russian actor who claimed to be Crystal Johnson, a Black woman from New York.
“The CEO of the company liked this content, amplified it, and had no way of knowing as a user that Crystal Johnson was actually a Russian troll,” Roth said.
The role of AI
One timely topic of discussion was how AI is changing the landscape. Roth referenced recent research from Stanford that found that, in a political context, large language models (LLMs) can be even more convincing than humans when properly tuned.
That means a solution that relies only on analyzing the content itself isn't enough.
Instead, companies need to track other behavioral signals: whether an entity is creating many accounts, using automation to post, or posting at strange times of day that correspond to different time zones, he suggested.
“These are behavioral signals that are latent even in really, really convincing content. And I think that's where you have to start,” Roth said. “If you start with the content, you're in an arms race against leading AI models, and you've already lost.”
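For illustration only, here is a minimal sketch of what scoring the behavioral signals Roth describes might look like in code. Roth names the signals, not an implementation, so the data model, field names, and thresholds below are invented for this example:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import pstdev

@dataclass
class AccountActivity:
    # Hypothetical metadata a platform might retain; names are illustrative.
    account_id: str
    signup_ip: str
    post_times: list[datetime]  # UTC timestamps of the account's posts

def behavioral_flags(account: AccountActivity,
                     accounts_per_ip: dict[str, int]) -> list[str]:
    """Flag latent behavioral signals without looking at post content."""
    flags = []

    # Signal 1: many accounts registered from the same source address.
    if accounts_per_ip.get(account.signup_ip, 0) > 5:
        flags.append("mass-registration")

    if len(account.post_times) >= 10:
        times = sorted(account.post_times)

        # Signal 2: near-constant gaps between posts suggest automation,
        # measured here as a low coefficient of variation in the cadence.
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        mean_gap = sum(gaps) / len(gaps)
        if mean_gap > 0 and pstdev(gaps) / mean_gap < 0.1:
            flags.append("machine-like-cadence")

        # Signal 3: activity spread across nearly every hour of the day,
        # inconsistent with a single human's waking hours and time zone.
        active_hours = {t.hour for t in times}
        if len(active_hours) >= 20:
            flags.append("round-the-clock-posting")

    return flags
```

The point of the sketch is the design choice Roth argues for: none of these checks ever read a post's text, so they remain useful even when LLM-generated content is indistinguishable from a human's.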

















