After the Big Tech Senate Hearing, Is Section 230 Likely to Change?
Twitter, Facebook, and Google CEOs testified before the Senate Commerce Committee in late October — the latest step in a bipartisan effort to regulate online content moderation without violating the First Amendment. But the hearing was nothing if not political. Twitter came under fire for policing content from President Trump more aggressively than that of other world leaders, and the timing suggests Republicans hoped the hearing would impact the presidential election.
But what actually came out of the hearing? Will Section 230 be revised? If so, what would it take to propose and pass new legislation, and how would such legislation reshape the tech industry? GLG spoke with Tim Sparapani, former Director of Public Policy at Facebook, for insights into these questions. Below are a few select excerpts from our broader discussion.
The Big Tech hearing was technically to review Section 230. What exactly is Section 230, and how does it impact behaviors on Facebook, Twitter, and Google currently?
What we colloquially refer to as Section 230 is a provision of the Communications Decency Act of 1996. It gives a safe harbor to internet companies that publish the content of anyone who uses their service, allowing those companies to serve as a publishing platform without liability.
It does also create some liability for these companies for content that they themselves put up. But because of this broad publisher safe harbor, most companies have been able to publish the thoughts, comments, writings, blogs, YouTube videos, TikTok reels, whatever, of people the world over, without fear of being held liable as companies for the speech of the people whose content they host.
What actions could be taken to provide the transparency Zuckerberg and Dorsey spoke of throughout the Big Tech hearing? Would that look more user-experience focused or back-end-tech-stack focused?
Facebook and other companies already publish routine reports about the content they take down, both under the German NetzDG law and in other transparency reports. So this is a throwaway offer about what they’re doing. It’s a means of saying, “Please don’t regulate us further. We’ll voluntarily disclose what we’re up to.”
My guess is that this is really for PR purposes rather than to improve the user experience or the back-end tech stack. And that’s because it’s unlikely that the companies will actually share any of the details about how they’re undertaking content moderation. Companies would be very fearful that if they talked in a more sophisticated public manner about what they were doing and how content moderation algorithms and screening systems work, many of the base-level systems used to identify impermissible content would be easily reverse-engineered by those who want to push through content that violates the terms of service.
From an industry perspective, how do these platforms think about First Amendment issues when moderating user posts?
The First Amendment is absolutely fundamental to the companies’ conception of who and what they are. This is why they were built in the United States. They rely on the principles of the First Amendment to provide, effectively, a safe haven in which to build their services.
In that role, they rely on the First Amendment’s limitation that the U.S. government cannot dictate what private actors say, with a few narrow exceptions. You can’t incite violence; you can’t defame people. The companies built their platforms and their content moderation systems around free-speech First Amendment values, with those narrow limitations built in, and then projected those systems worldwide. If these companies had been built in another country, they would have begun with limitations, instead of having to build them in over time as world governments insist upon them.
What are the implications of Big Tech companies that uphold the U.S. value of free speech but operate so dominantly in the international market?
Yesterday’s hearing was an exposé of contradictions. When these companies are talking to world leaders or their audiences abroad, they try to distinguish themselves as not being controlled by the U.S. government or a national security agency. But when they’re talking to Congress, they want to let every member of Congress know that they are an American engine for innovation and job creation.
It’s a reminder that if the companies are regulated too heavily in the future — especially around Section 230, which is the bedrock of how they’re able to do business — some of the huge economic value these companies produce might be lost, or the companies might be forced to relocate elsewhere.
Is there a precedent for us to look to in terms of how much of this is just political theater vs. serious allegations that Big Tech is going to have to come up against in the near future here?
There are no hearings that take place this close to an election, in any Congress, that are intended to do anything other than create momentum to rally the most partisan voters to the polls.
What is interesting is that there was broad consensus that there need to be some reforms of the companies’ content moderation. And there probably will still be broad consensus among disparate thinkers from the left all the way to the right, and everybody in between. But nobody agrees yet about what to do about the companies or how to regulate them.
Will the exchanges from yesterday’s hearing have lasting positive or negative impacts on Big Tech brands or user engagement?
Probably not. I don’t think the companies are any more loathed after yesterday’s hearing than they were before it. Rather, I think they were simply easy foils for discussion. Virtually every press outlet ran a story about yesterday’s hearing, even though absolutely no news really came out of it. There was little consensus about what to do about Section 230, if anything.
In terms of scalability of the infrastructure to moderate content, what are these companies up against?
The technology is mostly already built. Those are sunk costs. Upcoming costs are really about human review.
Yesterday’s hearing again laid bare the need for an appeals process. That’s going to take real human beings and it’s going to take time. And those people are going to be expensive and it’s going to be at a massive scale. Those people have to be trained to understand virtually every ethnic conflict, every religious conflict, all sorts of ongoing slurs that people are saying and doing to each other around the world so that they can understand whether something’s in bounds or out of bounds, whether it violates the terms of service or doesn’t. That takes a lot of training and will be an ongoing expense. Is it a huge number? No. But it’s probably going to be 10,000 employees per company.
Will legislators make official moves to revise Section 230? And if so, how would they likely restructure it?
I think there are going to be myriad proposals to reform 230, but they’re all ultimately doomed to fail. Almost everybody hates 230, but it’s difficult either to come up with consensus about what to do about it or to do it in a way that will pass constitutional muster. Several of the proposals already out there are facially invalid or violate the First Amendment because they force companies to engage in speech monitoring that allows the government to determine what speech is good or bad.
What would a timeline look like for a final ruling on Section 230, and if enacted, how long would companies have to become compliant?
Given that these are going to be technical and technological mandates as well as legal ones, and that the big companies will be ready but the small companies will not, my guess is that we’re talking at least 18 months to two years before compliance is required.
The litigation would go up to the U.S. Supreme Court, probably passing through multiple appellate courts on its way. Tip to tail, we’re talking five years of litigation before the U.S. Supreme Court determines the constitutionality of any resulting law.
Will the overall appetite for a revised Section 230 change under a Biden administration?
I don’t think this is a high-priority issue for Democrats in the way that other tech reforms would be, or that other big macro reforms are going to be, like healthcare and climate change.
About Tim Sparapani
Tim Sparapani is the former Director of Public Policy at Facebook, a role he left in 2011. At Facebook, Tim was responsible for developing and implementing the company’s interactions with federal, state, local, and international governments and with opinion and policy makers. Following his role at Facebook, Tim founded SPQR Strategies in Washington, DC. SPQR Strategies advises tech companies and tech advocacy organizations on public policy issues affecting the industry. Tim is also a Tech Contributor to Forbes, where he shares his expertise in privacy, technology, and constitutional law.
This tech industry article is adapted from the October 29, 2020, GLG teleconference “Big Tech Senate Hearing: Impact to Facebook, Twitter, and Google.” If you would like access to this teleconference or would like to speak with Tim Sparapani, or any of our more than 700,000 experts, contact us.