FUTURES: Moderating Online Speech – The tightrope between a Ministry of Truth and tragedy of the commons
by Sally Chase
As members of Congress considered their next move, the universe splintered into three possible timelines. In one, the prestigious body had opted for wholesale rejection of the Communications Decency Act’s Section 230 liability protections. Social media platforms were no longer protected for either moderating or failing to moderate content, so they went the way of traditional publishers, hiring writers and printing approved pieces.
In timeline two, only the liability shield for moderating content was thrown out. Platforms quickly descended into anarchy, with the loudest, meanest voices, or, as was often the case, the most persistent bot networks, driving the discourse. Offensive, obscene, and exploitative posts abounded, and polite society politely left the chat, quickly followed by advertisers. As platforms turned to hawking data and subscriptions for revenue, their business model darkened, fueled by criminal gangs, drug and arms deals, and human trafficking. Congress had to reconvene on the issue, since the situation was obviously untenable, but the path forward was no longer clear.
Timeline three saw the rise of a state Ministry of Truth. Only the liability protections for failing to moderate content were removed, so naturally an agency responsible for arbitrating between acceptable and unacceptable content was needed. There were squabbles about the appropriate balance of political representation in the agency, but eventually one party won, and minority views were sidelined. Legitimate scientific debate was silenced, as was any discussion of alternative worldviews or divergent principles. Labels of “hate speech” and “misinformation” were applied with abandon. Before too long, criticism of governing authorities was off limits, along with private religion and any advocacy outside the topics currently in vogue. Society survived, but it was no longer society as we know it.
What’s wrong with Section 230, and what have people proposed we do about it?
In hearing after hearing, Congress has reviewed a variety of concerns with the 1996 liability shield that at present protects social media platforms like Facebook, Twitter, and YouTube for “good faith” attempts to moderate online content. If harmful content slips through, they’re not responsible, like a newspaper might be. The platforms are also sheltered from the results of removing “objectionable” content.
The problem, or rather, problems, with Section 230, according to Congress, advocacy organizations, and the general public, are manifold. Big Tech allegedly leans left, and unfairly censors conservatives, while some claim platforms turn users into addicts and conspiracy theorists. Moderation mechanisms alternately let horrific things slip through and banish innocuous content. Algorithms, policies, and design choices perpetuate discrimination, depress children, enable the targeting of vulnerable groups, and spread lies through certain communities. Worst of all, in the eyes of some, Big Tech profits off this mess.
A conservative might want the liability shield for moderating content amended or removed entirely, calling foul on the “good faith” condition and pointing to Big Tech’s demographics and donations. Examples like Twitter’s censoring of the unfavorable New York Post story about the Bidens are representative, they say. The logical extreme of this position, however, is one that seems undesirable to many: an entirely unmoderated morass that might endanger many sectors of society.
Liberals might prefer to alter or do away with the protections that safeguard the platforms that fail to remove all undesirable content. Social media is a hotbed of bigotry, extremism, and fringe views, they say, and the events of January 6th are the natural real-world result. The logical end point of this stance could be the creation of a regulatory body with the power to set the standards of appropriate content. But this world, too, is unappealing to many.
Some advocate for tweaks in place of dramatic overhauls. Facebook CEO Mark Zuckerberg, for example, would like to mandate regular transparency reports and make Section 230 protections conditional on big platforms’ ability to do a generally decent job of moderating content. As long as the percentage of false negatives—harmful posts that slip through—remains relatively low, Big Tech would be off the hook. It’s not clear that this proposal would satisfy either conservative or liberal concerns, since censorship could go on unchecked, and large quantities of damaging posts could still appear in people’s timelines.
Twitter’s Jack Dorsey is pitching an alternative proposal, one grounded in design rather than regulatory reform. The platform’s Bluesky initiative would use open source solutions to tackle problems like moderation and transparency. Issues of power distribution, mob rule, and technological literacy could trouble this plan.
At the most recent Section 230 hearing, Representative Tim Walberg (R-MI) offered yet another path forward. There is a principle in Catholic social teaching popular with small-government conservatives, the EU, and the UN called subsidiarity, which says responsibility should generally lie with the lowest possible organizational level. If an individual can do something, the family shouldn’t. If the family can, the church shouldn’t. If community organizations can, local government shouldn’t. If local government can, federal government shouldn’t. Advocates say this model dignifies participants, and empowers those best positioned to troubleshoot a given problem. Walberg called on households, communities, and centers of education to take up the mantle of civilizing online discourse, though whether these nodes of society are equipped to tackle problems as sinister as child exploitation or as global as disinformation rings is up for debate.
Questions of character, humanity, human interaction, virtue, economics, politics, and unintended consequences are at play and at stake. Who do we want to be? How should good people act online, and allow others to act? What sorts of exchanges do we want to encourage, or discourage? How can businesses persist through these decisions? How will various interests endure? The right reforms have the potential to reinvigorate public discourse and bring wildly divergent parties back to the same table. The wrong moves could lead us down a dimmer path.
How optimistic one is about the trajectory of social media may depend on one’s general view of technology. Technological optimists believe developments generally work for good in society; technological pessimists see bleaker consequences from most innovations, in terms of human liberty and happiness. Technological determinists say that if something can be invented, eventually it will be—and if inventions can be used a certain way, eventually they will be. Those embracing the critical approach think each new technology should be carefully evaluated before adoption.
Will we find our way back to a shared set of facts, rigorous and respectful debate, and regard for one another’s humanity? Silicon Valley breakthroughs and Congressional decisions in the coming years, in addition to the efforts of private citizens and community organizations, could shape the answers to these questions, as well as the digital lives and possibilities of future generations.