Marshall Van Alstyne: Free Speech, Platforms & the Fake News Problem

How should a platform or a society address the problem of fake news? The spread of misinformation is ancient and complex, yet ubiquitous in media coverage of elections, vaccinations, and global climate policy. After examining key attributes of “fake news” and of current solutions, this article presents design tradeoffs for curbing fake news. The challenges are not restricted to truth or to scale alone. Surprisingly, there exist boundary cases in which a just society is better served by a mechanism that allows lies to pass, even as there are alternate boundary cases in which a just society should put friction on truth. Harm reflects an interplay of lies, decision error, scale, and externalities. Using mechanism design, this article then proposes three tiers of solutions: (1) those that are legal and business-model compatible, so firms should adopt them; (2) those that are legal but not business-model compatible, so firms need compulsion to adopt them; and (3) those that require changes to bad law.
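
As a rough illustration of that interplay (my own sketch, not the article's model), expected harm can be thought of as lies times decision error times scale times the harm per affected party; every parameter name and value below is a hypothetical assumption.

```python
# Hypothetical sketch: how lies, decision error, scale, and externalities
# combine into expected harm. All parameters are illustrative assumptions,
# not figures from the article.

def expected_harm(p_false: float,            # probability the claim is a lie
                  p_decision_error: float,   # probability a reader acts on it wrongly
                  reach: int,                # number of people reached (scale)
                  harm_per_error: float,     # direct harm to each person who errs
                  externality_per_error: float) -> float:  # harm imposed on third parties
    """Expected harm = lies x decision error x scale x (private + external harm)."""
    return p_false * p_decision_error * reach * (harm_per_error + externality_per_error)

# A false health story reaching 1,000,000 people with a 2% chance of changing behavior:
print(expected_harm(0.9, 0.02, 1_000_000, 10.0, 40.0))  # 900000.0
```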

The first set of solutions, grounded in choice architecture, seeks to alter the information sets available to those affected by misinformation. By enabling transparency into not simply the content and sources but also the distribution and destinations of messages, the system enables counter-narratives that are infeasible under current transparency practices and proposals.
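
A minimal sketch of what such a transparency record might contain appears below; the schema and field names are my own illustrative assumptions, not drawn from the article or any platform's API.

```python
# Hypothetical transparency record: exposes not only content and source, but
# also how a message was distributed and which audiences it reached, so that
# a counter-narrative can be routed to the same audience. Field names are
# illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyRecord:
    message_id: str
    source: str                                                   # original poster or publisher
    content_hash: str                                             # fingerprint of the content
    distribution_path: List[str] = field(default_factory=list)    # accounts/systems that amplified it
    destination_audiences: List[str] = field(default_factory=list)  # cohorts that ultimately saw it

def counter_narrative_targets(record: TransparencyRecord) -> List[str]:
    """Audiences a correction would need to reach to be effective."""
    return record.destination_audiences

record = TransparencyRecord(
    message_id="m-123",
    source="@example_outlet",
    content_hash="sha256:0f2a",
    distribution_path=["recommendation_feed", "@influencer_a"],
    destination_audiences=["cohort_18_24", "cohort_topic_vaccines"],
)
print(counter_narrative_targets(record))
```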

The second set of solutions, based in externality economics, considers how to protect free speech while updating Section 230. Revisions have faced two main critiques: first, that holding platforms liable for false speech would cause them to take down user speech, and second, that ambiguity in individual messages makes judging false speech infeasible at scale. Whistleblower testimony before Congress emphasized platform amplification of content in pursuit of engagement. A targeted solution, therefore, can separate original speech from amplified speech, generously protecting the former while reverse-amplifying the latter. The posting and even the discovery of false speech remain protected even as amplification is unprotected. The second element uses scale as an advantage. Rather than vet every message, the system takes only statistical samples. The Central Limit Theorem guarantees that establishing the presence of misinformation in amplified speech is feasible to any desired level of accuracy simply by taking larger samples. A doctor testing for cholesterol does not test every drop of blood, only a statistically valid sample.
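
To make the sampling argument concrete (my own illustration, not the article's procedure), the standard margin-of-error formula for a sample proportion shows that the required sample size depends on the desired accuracy, not on the total volume of amplified content; all numbers below are hypothetical.

```python
# Sketch of the statistical-sampling argument: estimate the share of amplified
# messages that are misinformation from a random sample rather than vetting
# every message. Uses the normal approximation to the binomial proportion
# (a consequence of the Central Limit Theorem). All numbers are hypothetical.

import math

def required_sample_size(margin_of_error: float,
                         confidence_z: float = 1.96,
                         p_guess: float = 0.5) -> int:
    """Sample size so the estimated misinformation rate is within
    +/- margin_of_error at the given confidence level (worst case p = 0.5)."""
    n = (confidence_z ** 2) * p_guess * (1 - p_guess) / margin_of_error ** 2
    return math.ceil(n)

# Estimating the misinformation rate in amplified content to within +/- 1%
# at 95% confidence takes roughly this many sampled messages, whether the
# platform amplifies millions or billions of posts:
print(required_sample_size(0.01))  # 9604
```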

The third set of solutions imports insights from antitrust jurisprudence into free speech jurisprudence. The paradox of antitrust before 1978 was that legal decisions, intended to protect consumers and free markets, artificially raised prices by protecting inefficient firms from the consequences of competition. Free speech rulings vigorously protect speakers on the basis of enabling a free market of ideas. Overzealous protection of those with bad ideas, however, prevents the market from clearing itself. No government intervention is required; rather, government simply needs to step aside in cases such as WASHLITE v. Fox News, where numerous false stories claiming that COVID-19 is no worse than the flu and that vaccines do not work have been causally implicated in thousands of unnecessary deaths.[1] The free speech paradox is that legal decisions intended to protect citizens and free idea markets artificially raise harms and protect those with bad ideas from the consequences of acting on those ideas.