Fake News and Real Damages: How Administrative Agencies Can Solve Our Manipulated Media Problem
November 13, 2020
By Gillian Vernick*
American society has long relied on the “marketplace of ideas” theory that more information, rather than censorship, is the surest path to truth.1 As Justice Oliver Wendell Holmes wrote in his famous dissent in Abrams v. United States, the best test of truth is reached through the free trade of ideas in the competitive marketplace.2 Holmes’s words ring earnest and nostalgic in the era of Fake News. Can society still rely on the “marketplace of ideas” to surface truthful content if the marketplace is flooded with fake news and manipulated media?
In March 2018, Emma González, a survivor of the Parkland, Florida shooting and a gun reform advocate, was speaking at the March For Our Lives in Washington, D.C. when Gab, a “free speech social network” popular with ultra-conservatives, tweeted a fake video of González appearing to rip up the U.S. Constitution.3 The tweet amassed 1,500 retweets and 2,900 likes before Gab conceded the video was “obviously a parody/satire.”4 The fake was debunked quickly only because Teen Vogue, the creator of the original video, had already widely circulated the real footage of González ripping a paper gun target.5 In the absence of consistent platform self-regulation or adequate legal remedies, González had limited recourse to stop the video, which only added to the online smear campaign targeting her and her family.6
While lawmakers scramble to solve the manipulated media problem that often fuels viral disinformation, the federal government should authorize agencies experienced with disinformation, such as the FTC, FCC, and FEC, to issue guidelines that shape platform self-regulation and to create the digital infrastructure necessary to uphold democratic discourse online.7
Manipulated Media
Although a Reddit user coined the term ‘deepfake’ in 2017 to describe videos in which a celebrity’s face is overlaid on existing pornography,8 deepfakes now threaten politicians, businesses, and cybersecurity alike.9 Despite Reddit removing the deepfake forum in early 2018, interest in the emerging medium has grown as creation tools reach non-experts through computer apps and service portals.10
While attention has primarily focused on deepfakes, a far broader universe of doctored media is flooding the internet with disinformation.11 Because of the pervasiveness of the internet, a lie posted online has the potential to endure.12 Some attribute the danger of manipulated media to our innate biases being reinforced by “the right kind of algorithmically selected” content.13 This potent concoction of biases and disinformation on social media can cause a moment to “go viral.”14 Recall “Pizzagate,” the conspiracy theory born on Facebook and spread on Twitter out of partisan animus, alleging that 2016 Democratic presidential nominee Hillary Clinton ran a child sex ring out of a restaurant.15 Though the rumor had no factual basis, it still propelled a man armed with an AR-15 semiautomatic rifle, a .38 handgun, and a folding knife to burst into the D.C. restaurant, proving disinformation can have dangerous ramifications.16 Even absent violence, manipulated media has made it harder for people to trust sources of information that historically appeared relatively trustworthy,17 and that threat is alarming government and industry actors alike.18
Current Legal Remedies
Some argue that current legal avenues already remedy offensive manipulated media and that rushing new regulations will create bigger problems.19 The most applicable current legal remedies are defamation, false light, and intentional infliction of emotional distress (IIED).
To prevail on a defamation action, a public figure or official must meet the ‘actual malice’ standard—showing that the defendant made the defamatory falsehood with knowledge that it was false or with reckless disregard of whether it was false or not.20 A private citizen need only show the content was created negligently.21 In practice, however, defamation is often ineffective because malicious posters can easily shield their identities or may be located outside the country, making defendant creators difficult to reach.22
False light, commonly used to address photo manipulation, embellishment, and distortion, allows a private individual to recover damages from a publisher or broadcaster for “a defamatory falsehood . . . upon a showing the publisher or broadcaster knew or should have known the defamatory statement was false.”23 An actionable deepfake would give a false impression damaging the victim’s reputation, embarrassing or offending a reasonable person, and causing the victim mental anguish or suffering.24 However, because false light is recognized as a cause of action in only two-thirds of the states, it is an unreliable remedy for manipulated media.25
IIED could be an option for manipulated media claims.26 However, it might be difficult for a politician who has been through scandals and public scrutiny to prove genuine trauma to the extent necessary to recover.27
Proposals Behind the Changing Landscape
Given the inadequacies of current legal avenues, state and federal lawmakers have proposed banning various categories of deepfakes and manipulated media.
Numerous states have proposed bills imposing civil or criminal liability for manipulated media. Virginia became the first state to criminalize deepfakes through a recent amendment to its revenge porn law.28 However, state and local law enforcement usually lack the resources necessary to combat online abuse.29
Texas and California became the first two states to specifically ban deepfake content meant to harm political elections.30 The Texas law criminalizes creating and distributing a deepfake with the intent to injure a candidate or influence an election.31 California enacted a similar ban on political deepfakes meant to influence or injure.32 However, such laws raise First Amendment concerns because election-related lies are still protected speech.33 Thus, these bans are unlikely, on their own, to solve the problem.34
New York proposed a bill that would establish broad protection for the rights of privacy and publicity, exempting newscasts and artistic works that make edits clear and providing injunctions or other equitable relief for victims.35 Even so, entertainment industry players like Disney and the Motion Picture Association of America say the bill is “simply not permitted” by the First Amendment.36
Federal lawmakers have introduced two deepfake regulations in the last few years. The Malicious Deep Fake Prohibition Act of 2018 would criminalize the malicious creation or distribution of a deepfake, punishable by a fine or up to two years in prison, with a more severe punishment for government or election interference.37 It targets both individual deepfake creators acting with the intent to distribute and platforms that intend to distribute a deepfake with actual knowledge of its fakeness.38 However, the bill places excessive liability on distributors, which could scare platforms into over-censoring content out of fear of liability and inadvertently chill constitutional speech.39
The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act (DEEPFAKES Accountability Act) would criminalize the failure to disclose that a video has been synthetically manipulated.40 If a video has been altered, the creator must include an embedded watermark along with verbal and written disclosures explaining the edit.41 Critics called the bill “too optimistic,” claiming the requirement is like “asking bootleggers to mark their barrels with their contact information” and arguing that those willing to comply are not the creators maliciously interfering in elections or business.42
Challenges Presented by Regulation of Manipulated Media
Lawmakers underestimate the First Amendment and threshold obstacles to regulation.43 Any regulation of manipulated media must tread lightly on First Amendment free speech and expression rights.44 The Supreme Court has made clear that falsity alone does not render speech unprotected, first in the landmark 1964 decision New York Times Co. v. Sullivan and again more recently in United States v. Alvarez in 2012.45 In Alvarez, the Court struck down a law criminalizing false claims regarding certain military honors, so a federal regulation of manipulated media would survive a constitutional challenge only if it targeted a specific subset of otherwise unprotected or legally harmful speech.46
Another challenge stems from where this content is shared—the internet. How does one bring a defamation claim if there is no identifiable person to bring it against? Identifying the creator of a deepfake is difficult because (1) the creator’s identity is lost when insufficient metadata exists to ascertain a video’s origin47 and (2) even an identifiable creator may be located outside the U.S. and thus beyond the reach of U.S. legal action.48 Furthermore, the digital forensic techniques and expert witnesses that such a technically complicated case requires will add to already expensive litigation costs.49
Platform Self-Regulation and the Administrative Agency Framework
While regulatory proposals stall in Congress, platforms are self-regulating manipulated media on their services.50 The challenge becomes squaring inconsistent platform policies.51 Content banned outright on one platform may still be allowed on another with only a warning.52 Many fear that, without accountability through government oversight, platforms will not successfully regulate themselves or will regulate only to protect their own interests.53 Who does society want making decisions about our online world—elected assemblies or unelected boardrooms?
Given platform inconsistency, the government should authorize targeted action from federal agencies to provide guidelines for platforms and create the digital infrastructure necessary to protect democratic discourse online. Administrative agencies previously played a larger role in content regulation, from content restrictions on Saturday morning cartoons to equal pricing for politicians’ election ads on TV.54 The government has been slow to regulate the same content online, allowing the internet to turn into an “unlawful virtual internet society.”55 As cartoons moved to YouTube and political ads moved to Facebook, regulatory protections from agencies such as the Federal Communications Commission (FCC), the Federal Election Commission (FEC), and the Federal Trade Commission (FTC) disappeared.56 While platform self-regulation superficially preserves the independence of cyberspace once romanticized by early internet pioneers,57 it is unrealistic to expect tech companies to successfully police every aspect of their operations; no major industry has been able to do that,58 especially when doing so might hurt its bottom line.59
Therefore, the government should empower the FTC, FCC, and FEC to work with platforms, providing guidelines along with adjudication if necessary, to combat manipulated media online.60 These agencies bring existing expertise to the task and can force accountability from platforms to build a better virtual infrastructure.61
The FTC
The FTC, tasked with protecting Americans from unfair and deceptive business practices in or affecting commerce, already has significant power to police tech companies’ anti-competitive behavior, harmful consumer practices, and privacy policy violations.62 The FTC’s influence over tech policy has been so substantial over the last fifteen years that some scholars describe its enforcement practice as a new common law of privacy.63 Recently, the FTC waded into social media with the first complaint of its kind, alleging fraud against companies that inflate their social media presence with fake followers, subscribers, and likes.64 Given the FTC’s authority to police unfair or deceptive business practices, it is well positioned to regulate manipulated media created by apps or software programs that deceptively use personal data.65
The FCC
The FCC is tasked with regulating broadcast stations in the public interest.66 It already has authority, albeit limited, to prohibit disinformation in broadcasting and to ensure broadcasters serve the public interest.67 The government should extend the FCC’s jurisdiction from the airwaves to cyberspace. Facebook has already agreed to comply with FCC regulations barring broadcasters from censoring political candidates’ speech and has compared its platform to federally regulated broadcast networks.68 Broadcast regulation rests on the idea that broadcasters use a public resource, the broadcast spectrum, for profit.69 Social media platforms similarly monetize their users to fuel their platforms.70 Thus, user data should be thought of as a public resource, one whose use the FCC should regulate for the public good.71
The FEC
The FEC enforces the Federal Election Campaign Act and oversees financial contributions to and public funding of presidential elections.72 The FEC therefore has plausible authority over manipulated media related to elections. The Commission recently expressed a desire to combat media manipulation targeted at election interference and engaged big tech and social media companies to help identify effective tools to curb fraudulent news and propaganda in the 2020 election.73 Once it regains a quorum,74 the FEC should be authorized to take targeted action against election interference via manipulated media, an undeniably vulnerable area for cyber-meddling.75
Objections to a New Agency
Some have called for an entirely new regulatory agency to oversee online markets, similar to a proposal in the United Kingdom, but this approach is misguided.76 Strategically, it makes more sense to spread regulatory authority across several existing agencies.77
First, studies show companies would much rather narrowly lobby a single agency staffed with bureaucrats the company knows well than spread resources across a broader array of regulators.78 The resulting “regulatory capture” causes the agency to act in the best interest of the industry it regulates rather than in the public interest.79
Second, creating a new administrative agency would be a lengthy and contentious process. Democrats and Republicans each believe content moderation disadvantages their side.80 Negotiations over authority could take years, and media manipulation is a problem that needs swift action now. Although the boundaries of FTC, FEC, and FCC authority over manipulated media are not entirely clear, “if we already have an agency that has power, let’s see what it is capable of.”81
Despite the challenges manipulated media presents, the attention now being paid to the issue is cause for cautious optimism. Still, proposed legislation generally falls short of what is necessary to combat this digital warfare,82 and it is naive to think platforms can self-regulate in opposition to their bottom line.83 Platforms need guidance from administrative agencies to handle manipulated media as it evolves.84 Therefore, the government should authorize targeted action from the FTC, FCC, and FEC to provide guidelines for platforms and build the digital infrastructure necessary to preserve our virtual democracy.
*J.D. Candidate, Drexel University Thomas R. Kline School of Law, Class of 2021. Gillian is an associate editor of Drexel Law Review. Gillian would like to thank Professor Hannah Bloch-Wehba for her invaluable knowledge, advice, and encouragement in writing this piece.
1 Abrams v. United States, 250 U.S. 616, 630 (1919).
“But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment.” Id.
2 Id.
3 See Katie Reilly, No, Parkland Student Emma González Did Not Rip Up the U.S. Constitution, TIME (Mar. 26, 2018), https://time.com/5215433/emma-gonzalez-march-for-our-lives-fake-photo/.
4 Id.
5 Id.
6 See infra Current Legal Remedies.
7 See infra Platform Self-Regulation and the Administrative Agency Framework.
8 John Brandon, Terrifying High-Tech Porn: Creepy ‘Deepfake’ Videos Are on the Rise, FOX NEWS (last updated Feb. 20, 2018), https://www.foxnews.com/tech/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-the-rise.
9 See Socialistische Partij Anders, Teken de Klimaatpetitie [Sign the Climate Petition], FACEBOOK, https://www.facebook.com/watch/?v=10155618434657151; see also Drew Harwell, An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used in a Major Theft, WASH. POST (Sept. 4, 2019), https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on; Henry Ajder et al., The State of Deepfakes: Landscape, Threats, and Impact, DEEPTRACE 3 (Sept. 2019), https://storage.googleapis.com/deeptrace-public/Deeptrace-the-State-of-Deepfakes-2019.pdf.
10 Ajder et al., supra note 9, at 4.
11 Disinformation, now popularly branded “Fake News,” comes in the form of legitimate-looking news articles, tweets, Facebook posts, or advertisements and is weaponized through social media to propagandize political differences and undermine democracy. Shelly Banjo, Facebook, Twitter and the Digital Disinformation Mess, WASH. POST (Oct. 2, 2019), https://www.washingtonpost.com/business/facebook-twitter-and-the-digital-disinformation-mess/2019/10/01/53334c08-e4b4-11e9-b0a6-3d03721b85ef_story.html; see, e.g., Liang Wu et al., Misinformation in Social Media: Definition, Manipulation, and Detection, 21 SIGKDD EXPLORATIONS 80, 81 (Dec. 2019) (“Misinformation and disinformation both refer to fake or inaccurate information, and a key distinction between them lies in the intention—whether the information is deliberately created to deceive, and disinformation usually refers to the intentional cases while misinformation the unintentional.”).
12 See DANAH BOYD, IT’S COMPLICATED: THE SOCIAL LIVES OF NETWORKED TEENS 11 (2014) (explaining four affordances of social media: persistence, visibility, spreadability, and searchability, allowing content posted online to endure and reach an audience far larger than traditional communication has the capacity for).
13 Benedict Carey, How Fiction Becomes Fact on Social Media, N.Y. TIMES (Oct. 20, 2017), https://www.nytimes.com/2017/10/20/health/social-media-fake-news.html (“[I]t is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.”).
14 See, e.g., Jonah Berger & Katherine L. Milkman, Social Transmission, Emotion and the Virality of Online Content, J. MARKETING RES. 2 (2011) (analyzing what types of shared content are more likely to “go viral”).
15 Amanda Robb, Anatomy of a Fake News Scandal, ROLLING STONE (Nov. 16, 2017).
16 Id. Though there is “no precise formula” for a moment to turn into a viral phenomenon, experts say the “very absurdity” of a moment could propel it to prominence, regardless of the biases of the users sharing it. Berger & Milkman, supra note 14.
17 Ajder et al., supra note 9, at foreword; see, e.g., Jennifer L. Mnookin, The Image of Truth: Photographic Evidence and the Power of Analogy, 10 YALE J. L. & HUMAN. 1, 2 (1998) (“The photograph, in particular, has long been perceived to have a special power of persuasion, grounded both in the lifelike quality of its depictions and in its claim to mechanical objectivity.”).
18 See Kalev Leetaru, History Tells Us Social Media Regulation Is Inevitable, FORBES (Apr. 22, 2019), https://www.forbes.com/sites/kalevleetaru/2019/04/22/history-tells-us-social-media-regulation-is-inevitable/#119f5cac21be; see also Nina Iacono Brown, Congress Wants to Solve Deepfakes by 2020, SLATE (July 15, 2019), https://slate.com/technology/2019/07/congress-deepfake-regulation-230-2020.html.
19 David Greene, We Don’t Need New Laws for Faked Videos, We Already Have Them, ELECTRONIC FRONTIER FOUND. (Feb. 13, 2018), https://www.eff.org/deeplinks/2018/02/we-dont-need-new-laws-faked-videos-we-already-have-them.
20 N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279 (1964).
21 Gertz v. Robert Welch, Inc., 418 U.S. 323, 325 (1974).
22 See BOYD, supra note 12.
23 Wood v. Hustler Mag., Inc., 736 F.2d 1084, 1091 (5th Cir. 1984).
24 Greene, supra note 19; see also Michael Scott Henderson, Applying Tort Law to Fabricated Digital Content, 5 UTAH L. REV. 1145, 1162-63 (2018).
25 Greene, supra note 19.
26 A plaintiff must show the defendant injured and traumatized him or her in a way exceeding reasonable bounds of decency with knowledge the statement was false or with reckless disregard of its accuracy. Hustler Mag. v. Falwell, 485 U.S. 46, 56 (1988).
27 Id.; see also N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279 (1964); Nicholas Schmidt, Privacy Law and Resolving ‘Deepfakes’ Online, INT’L ASS’N FOR PRIVACY PROFS. (Jan. 30, 2019), https://iapp.org/news/a/privacy-law-and-resolving-deepfakes-online/; Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 CAL. L. REV. 1763, 1794 (2019).
28 VA. CODE ANN. § 18.2-386.2 (2019). The amended revenge porn law now includes “[a]ny person who, with the intent to coerce, harass, or intimidate, maliciously disseminates or sells any videographic or still image created by any means whatsoever, including a falsely created videographic or still image . . . .” Id. Violation of the law is a Class 1 misdemeanor, carrying up to a one-year jail sentence and a fine of up to $2,500, and the statute specifically exempts internet service providers from liability stemming from user misconduct. Id.
29 See Chesney & Citron, supra note 27, at 1794 (finding criminal law enforcement responses to online abuse like cyberstalking have historically been lacking, with inadequate attention, training, and investigative techniques for tackling offenses other than the rare extreme case).
30 CAL. ELEC. CODE ANN. § 20010 (2019); TEX. ELEC. CODE ANN. § 255.004 (2019).
31 TEX. ELEC. CODE ANN. § 255.004 (2019). It defines a “deep fake video” as “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.” Id. The law makes it a Class A misdemeanor “for a person who, with intent to injure a candidate or influence the result of an election, creates a deep fake video and causes the video to be published or distributed within 30 days of an election,” punishable by up to a year in county jail, a fine of up to $4,000, or both. TEX. PENAL CODE ANN. § 12.21 (2019).
32 CAL. ELEC. CODE ANN. § 20010 Legislative Counsel’s Digest (2019). The California law bans the distribution with malice of “materially deceptive audio or visual media of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that the media has been manipulated.” Id.; see also Rich Haridy, California Bans Political Deepfake Videos Ahead of 2020 Elections, NEW ATLAS (Oct. 7, 2019), https://newatlas.com/computers/california-bans-political-deepfake-videos-2020-elections/.
33 The Supreme Court has indicated that regulation of election-related lies is especially prone to government abuse and bias. See Chesney & Citron, supra note 27; Brown v. Hartlage, 456 U.S. 45, 46 (1982). False information is constitutionally protected speech, and a state interest in preventing voters from “mak[ing] an ill-advised choice does not provide the State with a compelling justification for limiting speech.” Id.
34 A bill targeting false election speech must be narrowly tailored to accomplish the government interest and not overbroad, so as to not chill constitutionally protected speech. See Chesney & Citron, supra note 27 (quoting Brown v. Hartlage, 456 U.S. 45, 46 (1982)). For example, the Texas bill’s language banning “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality” may chill satire or political speech. See id.; TEX. ELEC. CODE ANN. § 255.004 (2019).
35 N.Y. Assembly Bill A8155B (2017-2018). The bill
“[e]stablishes the right of privacy and the right of publicity for both living and deceased individuals; provides that an individual's persona is the personal property of the individual and is freely transferable and descendible; provides for the registration with the department of state of such rights of a deceased individual; and that the use of a digital replica for purposes of trade within an expressive work shall be a violation.”
Id. The bill defines a digital replica as “a computer-generated or electronic reproduction of a living or deceased individual’s likeness or voice that realistically depicts the likeness or voice of the individual being portrayed.” Id.
36 Memorandum in Opposition to New York Assembly Bill A.8155B, MOTION PICTURE ASS’N OF AMERICA, INC. (June 2019), https://www.rightofpublicityroadmap.com/sites/default/files/pdfs/mpaa_opposition_to_a8155b.pdf.
37 Malicious Deep Fake Prohibition Act of 2018, S. 3805, 115th Cong. § 2 (2018). Election or government interference is punished with a fine or imprisonment for up to ten years if the deepfake could reasonably be expected to “affect the conduct of any administrative, legislative, or judicial proceeding of a Federal, State, local, or Tribal government agency, including the administration of an election or the conduct of foreign relations; or facilitate violence.” Id.; see also Kaveh Waddell, Lawmakers Plunge into “Deepfake” War, AXIOS (Jan. 31, 2019), https://www.axios.com/deepfake-laws-fb5de200-1bfe-4aaf-9c93-19c0ba16d744.html.
38 Malicious Deep Fake Prohibition Act of 2018, S. 3805, 115th Cong. § 2 (2018).
39 See Waddell, supra note 37.
40 Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019, H.R. 3230, 116th Cong. § 2 (2019). Knowingly failing to make the necessary disclosures, or altering a video to intentionally remove or obscure a required disclosure, will result in a fine of up to $150,000 per record, imprisonment for up to five years, or both. Id.
41 Id.
42 Devin Coldewey, DEEPFAKES Accountability Act Would Impose Unenforceable Rules—But It’s a Start, TECHCRUNCH (June 13, 2019), https://techcrunch.com/2019/06/13/deepfakes-accountability-act-would-impose-unenforceable-rules-but-its-a-start/.
43 Chesney & Citron, supra note 27.
44 See N.Y. Times Co. v. Sullivan, 376 U.S. 254, 264 (1964); United States v. Alvarez, 567 U.S. 709, 736 (2012).
45 Sullivan, 376 U.S. at 264; Alvarez, 567 U.S. at 736.
46 Alvarez, 567 U.S. at 719. It is indisputable that deepfake videos and manipulated media are not inherently dangerous themselves; they become dangerous only when weaponized to maliciously misrepresent, manipulate, and defraud. See Chesney & Citron, supra note 27.
47 Id.
48 Id.
49 Jason Tashea, As Deepfakes Make It Harder to Discern Truth, Lawyers Can Be Gatekeepers, ABA J. (Feb. 26, 2019). It is likely some victims will not have the funds to pursue a suit, and further may not want to bring additional attention to the often embarrassing or traumatizing manipulated video or audio in question. See Chesney & Citron, supra note 27.
50 See Jillian C. York & Corynne McSherry, Content Moderation is Broken. Let Us Count the Ways., ELECTRONIC FRONTIER FOUND. (Apr. 29, 2019), https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways.
51 Twitter announced its manipulated media policy, pledging to label deepfakes and other deceptive media on its platform and to remove any deliberately misleading content and harmful content, including posts that could cause “threats to physical safety, widespread civil unrest, voter suppression or privacy risks.” Reuters, Twitter to Label Deepfakes and Other Deceptive Media, N.Y. TIMES (Feb. 5, 2020), https://www.nytimes.com/reuters/2020/02/05/technology/04reuters-twitter-security.html. Reddit announced it was banning “deceptive misrepresentation,” including deepfakes, entirely from its platform. Updates to Our Policy Around Impersonation, REDDIT (Feb. 2020), https://www.reddit.com/r/redditsecurity/comments/emd7yx/updates_to_our_policy_around_impersonation/. TikTok recently updated its Community Guidelines to ban “misinformation that could cause harm.” Community Guidelines, TIKTOK (Jan. 2020), https://www.tiktok.com/community-guidelines?lang=en. Facebook announced in early 2020 that it would be banning certain misleading manipulated media if it “has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say” and if it “is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” Monika Bickert, Enforcing Against Manipulated Media, FACEBOOK (Jan. 6, 2020), https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/.
52 See York & McSherry, supra note 50.
53 See Brad Smith & Carol Ann Browne, Tech Firms Need More Regulation, THE ATLANTIC (Sept. 9, 2019), https://www.theatlantic.com/ideas/archive/2019/09/please-regulate-us/597613/.
54 U.S. House of Representatives, House Committee on Energy & Commerce, Hearing on “Americans at Risk: Manipulation and Deception in the Digital Age” (Jan. 8, 2020), https://energycommerce.house.gov/committee-activity/hearings/hearing-on-americans-at-risk-manipulation-and-deception-in-the-digital.
55 See id.
56 See id.
57 See John Perry Barlow, A Declaration of the Independence of Cyberspace, ELEC. FRONTIER FOUND. (Feb. 8, 1996), https://www.eff.org/cyberspace-independence (repudiating the Telecommunications Reform Act’s attempt to govern online behavior, rejecting government meddling in an area where it lacks expertise, and denying governments the right to control inherently independent cyberspace).
58 See Smith & Browne, supra note 53.
59 See id. (“The platforms are not doing enough, and it’s because their entire business models are misaligned with solving the problem . . . we’re basically moving from a lawful society to an unlawful virtual internet society, and that is what we have to change.”).
60 See U.S. House of Representatives, House Committee on Energy & Commerce, supra note 54; Chesney & Citron, supra note 27, at 1804–08.
61 See U.S. House of Representatives, House Committee on Energy & Commerce, supra note 54; Chesney & Citron, supra note 27, at 1804–08.
62 Federal Trade Commission Act, 15 U.S.C. §§ 41–58 (2006).
63 See generally Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 COLUM. L. REV. 583 (2014).
64 See Brooke M. Wilner et al., Bad Influence: FTC Settles Two Complaints Alleging Fake Social Media Influence, LEXOLOGY (Feb. 10, 2020), https://www.lexology.com/library/detail.aspx?g=f9925738-7f6d-4e81-a6e5-9fa25867faa2.
65 See Elizabeth Caldera, "Reject the Evidence of Your Eyes and Ears": Deepfakes and the Law of Virtual Replicants, 50 SETON HALL L. REV. 177, 195 (2019); see also Callum Borchers, How the Federal Trade Commission Could (Maybe) Crack Down on Fake News, WASH. POST (Jan. 30, 2017), https://www.washingtonpost.com/news/the-fix/wp/2017/01/30/how-the-federal-trade-commission-could-maybe-crack-down-on-fake-news/ (explaining the FTC and courts could decide “fake news isn't really a form of political discourse but is, instead, a kind of commercial offering” and former FTC Bureau of Consumer Protection director stating regulating Fake News could be “worthwhile” if the FTC has jurisdiction).
66 See Federal Communications Commission, What We Do, FCC.GOV, https://www.fcc.gov/about-fcc/what-we-do; see also Will Fischer, Facebook is Acting Like a Broadcast Station When it Comes to Running Ads From Politicians. What if the FCC Regulated it Like One?, BUS. INSIDER (Oct. 15, 2019), https://www.businessinsider.com/fcc-regulate-facebook-broadcast-station-politicians-ads-2019-10.
67 Philip M. Napoli, What Would Facebook Regulation Look Like? Start With the FCC, WIRED (Oct. 4, 2019), https://www.wired.com/story/what-would-facebook-regulation-look-like-start-with-the-fcc/.
68 Platforms, however, are not required to abide by FCC regulations due to CDA § 230 immunity. Fischer, supra note 66.
69 Napoli, supra note 67.
70 Id.
71 See id.
72 Federal Election Campaign Act, 2 U.S.C. § 437c.
73 Federal Election Commission, Digital Disinformation and the Threat to Democracy: Information Integrity in the 2020 Elections, FEC.GOV (Sept. 17, 2019), https://www.fec.gov/about/leadership-and-structure/ellen-l-weintraub/symposium-digital-disinformation-and-threat-democracy-information-integrity-2020-elections/; see also Nancy Scola, FEC Chair Summons Facebook, Twitter, Google to Disinformation Session, POLITICO (Aug. 29, 2019), https://www.politico.com/story/2019/08/29/fec-chair-facebook-twitter-google-disinformation-1692742.
74 Despite interest in regulating media manipulation online, the FEC is currently without a quorum of commissioners, with just three of the six positions filled. See FEC Remains Open for Business, Despite Lack of Quorum, FED. ELECTION COMMISSION (Sept. 11, 2019), https://www.fec.gov/updates/fec-remains-open-business-despite-lack-quorum/. Because of this, the FEC will lack authority to exercise much of its oversight power for the upcoming election, leaving its role relegated to informing the public on the current law. See Derek B. Johnson, With Voting Nearly Underway for 2020, Who Is in Charge of Policing Disinformation?, FED. COMPUTER WEEK (Feb. 2, 2020), https://fcw.com/articles/2020/02/02/disinfo-election-plan-johnson.aspx?m=1.
75 Johnson, supra note 74.
76 Neil Chilson, Creating a New Federal Agency to Regulate Big Tech Would Be a Disaster, WASH. POST (Oct. 30, 2019), https://www.washingtonpost.com/outlook/2019/10/30/creating-new-federal-agency-regulate-big-tech-would-be-disaster/.
77 Id.
78 Id.
79 Will Kenton, Regulatory Capture, INVESTOPEDIA (Oct. 23, 2019), https://www.investopedia.com/terms/r/regulatory-capture.asp.
80 See Billy Binion, 'Nobody Wants To See a Government Speech Police': Senate Republicans Threaten To Regulate Facebook and Twitter, REASON (Apr. 11, 2019), https://reason.com/2019/04/11/senate-republicans-reinvigorate-calls-to/ (noting how Republicans perceive bias in social media platforms’ content moderation). But see David McCabe & Davey Alba, Facebook Says It Will Ban ‘Deepfakes’, N.Y. TIMES (Jan. 7, 2020), https://www.nytimes.com/2020/01/07/technology/facebook-says-it-will-ban-deepfakes.html (highlighting exasperation from Pelosi and Biden’s staff at tech companies’ refusal to remove manipulated content of the politicians).
81 U.S. House of Representatives, House Committee on Energy & Commerce, supra note 54 (comments by Tristan Harris, Executive Director of the Center for Humane Technology).
82 See supra Current Legal Remedies.
83 See supra Platform Self-Regulation and the Administrative Agency Framework.
84 See id.