Online platforms and moderation in 2023
Content moderation has become a pressing issue in the digital landscape, with platforms like Substack, Reddit, Meta, and X (formerly Twitter) facing increasing scrutiny over their policies and practices. These platforms find themselves caught between the need to address harmful and misleading content and the desire to uphold the principles of free speech. So we wanted to explore the complexities of content moderation, examining why platforms may be hesitant to moderate certain types of content while being quick to moderate others.
The challenge of content moderation
Content moderation is not a new concept, but the scale and reach of online platforms have amplified its importance. These platforms serve as virtual public spaces where individuals can freely express their thoughts and ideas. However, this freedom comes with the risk of abuse, misinformation, and hate speech, which can harm individuals and society as a whole. Balancing the need for open dialogue and the responsibility to protect users from harmful content poses a significant challenge for platform operators.
The role of Substack: a platform under scrutiny
Substack, a subscription newsletter platform, has recently faced controversy over its content moderation policies. While it has gained popularity among writers for its ease of use and a revenue-sharing model that lets them publish for free, concerns have been raised about the platform’s approach to addressing harmful content. Gizmodo points out that Substack CEO Chris Best’s refusal to take a stance on overt racism has drawn criticism and raised questions about the platform’s commitment to combating hate speech.
The dilemma of Reddit: navigating the dark corners
Reddit, often referred to as the “front page of the Internet,” has grappled with content moderation challenges for years. Initially known for its hands-off approach, the platform faced backlash for hosting communities that propagated hate speech, conspiracy theories, and misogyny. Over time, Wired reports, Reddit has taken steps to curb these issues, but the balance between free expression and responsible moderation remains delicate. The fact that most community moderators are unpaid volunteers compounds the problem, and recent protests by those moderators have only underscored the weaknesses of this model.
The evolution of Meta: from Facebook to the Metaverse
Meta, formerly known as Facebook, has long been a subject of debate when it comes to content moderation. The social media giant has faced accusations of amplifying misinformation, enabling harmful algorithms, and failing to address hate speech and harassment adequately. As Meta transitions into the metaverse, it faces the challenge of creating a safer and more inclusive digital environment while respecting users’ freedom of expression.
The X paradox: struggles with moderation and censorship
X, a platform synonymous with public discourse, has long grappled with content moderation issues. The social network formerly known as Twitter has sparked debates about censorship and the limits of free speech protections in the United States.
The Electronic Frontier Foundation recently reported that the US Court of Appeals for the Ninth Circuit ruled that Twitter did not violate First Amendment rights by banning a user, Rogan O’Handley, who was flagged for allegedly spreading election misinformation. The court found that as a private entity, Twitter is not subject to the First Amendment restrictions that apply to the government, and its moderation of content is protected by its own First Amendment rights.
The court also determined that Twitter did not cede control of its content moderation process to the government, despite the California Office of Election Cybersecurity flagging O’Handley’s tweet. Although O’Handley had standing to sue the California government for flagging his tweet, the court ruled that the flagging itself did not infringe upon his First Amendment rights.
Navigating legal and ethical boundaries
Content moderation is not a one-size-fits-all solution. Platforms must consider legal obligations, such as potential liability for hosting harmful content, while also navigating ethical considerations surrounding free speech and user privacy. Striking the right balance requires careful thought and a nuanced understanding of the challenges involved.
Section 230 and government regulation
The legal framework surrounding content moderation is shaped by laws like Section 230 of the Communications Decency Act in the US. Section 230 provides platforms with immunity from liability for third-party content, allowing them to moderate content without fear of legal repercussions. However, calls for reform and increased government regulation have intensified as policymakers seek to address concerns about misinformation, hate speech, and the spread of harmful content.
“Government, elites — whatever you want to say — will always blame somebody else before they blame themselves.” – Steve Huffman, CEO of Reddit
Striving for responsible moderation
While legal protections exist, platforms must also grapple with the ethical dimensions of content moderation. Striving for responsible moderation involves striking a balance between protecting users from harm and upholding principles of free speech. Determining what content is acceptable and what crosses the line requires careful consideration of societal norms, community standards, and the potential impact of harmful content on marginalized groups.
Personal control over comment moderation
Amidst the challenges and controversies surrounding content moderation on social media and news forums, it is essential to highlight platforms that give users greater personal control over comment moderation. One option worth considering is WordPress, an open-source platform that offers extensive customization options and robust tools for curating an online space. WordPress emphasizes user empowerment, enabling content creators to decide what content is acceptable on their own sites.
People using self-hosted WordPress (via WordPress.org; WordPress.com is a hosted platform) can choose from a wide range of themes, plugins, and settings to tailor their websites and newsletters to their specific needs. This flexibility extends to comment moderation, allowing users to set moderation rules, filter spam, and disable comments according to their preferences. (Check out EasyWP to learn about some of the other features WordPress has to offer.)
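For site owners who want to go a step further, WordPress also exposes comments through its built-in REST API, which makes it possible to script moderation decisions. The sketch below is illustrative only, not an official workflow: the site URL, the user and application password, and the keyword blocklist are placeholder assumptions, and it presumes a self-hosted site (WordPress 5.6 or later, served over HTTPS) with application passwords enabled. It pulls comments held for moderation and either approves them or marks them as spam based on a simple keyword rule.

```python
# Minimal, illustrative sketch of scripted WordPress comment moderation
# through the core REST API. The site URL, the "editor" user, the
# application password, and the keyword blocklist are placeholder
# assumptions; adapt them to your own site.
import requests

SITE = "https://example.com"               # hypothetical site URL
AUTH = ("editor", "application-password")  # WordPress application password
BLOCKLIST = {"casino", "crypto-giveaway"}  # naive example keyword rule

# Fetch comments currently held for moderation (listing non-approved
# comments requires authentication).
pending = requests.get(
    f"{SITE}/wp-json/wp/v2/comments",
    params={"status": "hold", "per_page": 100},
    auth=AUTH,
    timeout=30,
).json()

for comment in pending:
    text = comment["content"]["rendered"].lower()
    # Send the comment to spam if it matches the blocklist; otherwise approve it.
    new_status = "spam" if any(word in text for word in BLOCKLIST) else "approve"
    requests.post(
        f"{SITE}/wp-json/wp/v2/comments/{comment['id']}",
        json={"status": new_status},
        auth=AUTH,
        timeout=30,
    )
    print(f"Comment {comment['id']} -> {new_status}")
```

In day-to-day use, the same control is available without any code through the Settings → Discussion screen or anti-spam plugins such as Akismet; the point is simply that the moderation rules live with the site owner rather than with a third-party platform.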
WordPress allows website owners to strike a balance between open dialogue and responsible moderation, placing the power in the hands of those who create and curate the content rather than remote AI systems or corporations.
Navigating the path forward
Content moderation remains a complex and evolving challenge for online platforms. While Substack, Reddit, Meta, and X grapple with the delicate balance between free speech and responsible moderation, they retain broad legal latitude to shape their own online spaces. As the digital landscape continues to evolve, it is crucial to strike a balance that respects individual expression while safeguarding against the spread of harmful content.