X, the social media platform formerly known as Twitter, has announced a significant expansion of its content moderation team, with plans to hire 100 full-time employees for a new trust and safety office in Austin, Texas. This move comes as X grapples with mounting criticism over its handling of child sexual exploitation (CSE) content, particularly in light of a Senate Judiciary Committee hearing involving CEO Linda Yaccarino.
The formation of the new team, which will focus primarily on CSE, marks a shift from Elon Musk's earlier approach; he sharply cut the platform's moderation staff after acquiring the company. An updated X blog post on CSE moderation acknowledges the new team, pointing to a broader strategy for addressing online safety concerns.
The decision follows a series of controversies, most recently the widespread dissemination of AI-generated sexually explicit images of singer Taylor Swift. The images attracted over 27 million views and 260,000 likes before the originating account was suspended, and Swift is reportedly exploring legal action against X and the content's creator.
X's response to the incident, blocking all searches for "Taylor Swift" within the app, has raised questions about the effectiveness of its crowd-sourced Community Notes approach to moderation. The episode exposes the limits of relying solely on user-driven content policing and underscores the need for professional moderation teams.
Beyond CSE, the new Austin-based team will handle other enforcement areas, including hate speech, spam, and fraud, and will also provide customer support. The move amounts to a partial admission that X's current "freedom of speech, not reach" policy, which leans heavily on Community Notes, needs reinforcement from dedicated human moderators.
The expansion also responds to broader problems with bots and misinformation, including recent reports of a Russian bot network spreading anti-Ukraine sentiment on the platform. Despite efforts to root out bots through payment verification and a $1 fee for new users to engage with the app, verified bot profiles continue to operate, underscoring the complexity of the challenge.
As X contends with declining advertising revenue and growing regulatory scrutiny, this investment in content moderation marks a critical step toward addressing both operational challenges and public trust. The question now is how X balances these new initiatives with financial sustainability in an increasingly competitive social media landscape.