Wednesday, November 27, 2019

Why can’t Internet companies stop awful content? - Ars Technica

Don't abandon the Internet yet!
Eric Goldman is a professor at Santa Clara University School of Law and co-director of the High-Tech Law Institute. Jess Miers is an Internet Law & Policy Foundry fellow and a second-year Tech Edge J.D. student at Santa Clara University School of Law. The opinions expressed here do not necessarily represent those of Ars Technica.

For the first two decades of the commercial Internet, we celebrated the Internet as one of society's greatest inventions. After all, the Internet has led to truly remarkable outcomes: it has helped overthrow repressive political regimes, made economic markets more efficient, created safe spaces for otherwise marginalized communities to find their voices, and led to the most exquisite cat videos ever seen.

But in the last few years, public perceptions of the Internet have plummeted. We've lost trust in the Internet giants, who seem to have too much power and make missteps daily. We are also constantly reminded of all the awful and antisocial ways that people interact with each other over the Internet. We are addicted to the Internet—but we don't really love it anymore.

Many of us are baffled by the degradation of the Internet. We have the ingenuity to put men on the Moon (unfortunately, only men so far), so it defies logic that the most powerful companies on Earth can't fix this. With their wads of cash and their smart engineers, they should nerd harder.

So why does the Internet feel like it's getting worse, not better? And, more importantly, what do we do about it?

It was always thus

Let's start with the feeling that the Internet is getting worse. Perhaps this reflects an overly romantic view of the past. The Internet has always had low-value content. Remember the Hamster Dance or the Turkish "I Kiss You" memes?

More generally, though, this feeling reflects our overly romantic view of the offline world. People are awful to each other, both online and off. So the Internet is a mirror of our society, and as the Internet merges into everyday life, it will reflect the many ways that people are awful to each other. No amount of nerding harder will change this baseline of antisocial behavior.

Furthermore, the Internet reflects the full spectrum of human activity, from great to awful. With the Internet's proliferation—and with its lack of gatekeepers—we will inevitably see more content at the borders of propriety, or content that is OK with some audiences but not with others. We've also seen the rise of weaponized political content, including from state-sponsored entities, designed to propagandize or to pit communities against each other.

There is no magical way to eliminate problematic content or ensure it reaches only people who are OK with it. By definition, this content reflects edge cases where mistakes are most common, and it often requires external context to properly understand. That context won't be available to either the humans or the machines assessing its propriety. The result is those infamous content moderation blunders, such as Facebook's removal of the historic "Napalm Girl" photo or YouTube's misclassification of fighting robot videos as animal abuse. And even if the full amount of necessary context were available, both humans and machines are susceptible to biases that will make their decisions seem wrong to at least one audience segment.

There's a more fundamental reason why Internet companies can never successfully moderate content for a mass audience. Content moderation is a zero-sum game. With every content decision, the Internet companies make winners and losers. The winners get the results they wanted; the losers don't. Hence, there's no way to create win-win content-moderation decisions. Internet companies can—and are trying to—improve their content moderation efforts. But dissatisfaction with that process is inevitable regardless of how good a job the Internet companies do.

So given that Internet companies can never eliminate awful content, what should regulators do?

The downside of “getting tough”

One regulatory impulse is to crack down harder on Internet companies, forcing them to do more to clean up the Internet. Unfortunately, tougher laws are unlikely to achieve the desired outcomes for three reasons.

First, because of its zero-sum nature, it's impossible to make everyone happy with the content moderation process. Worse, if any law enables lawsuits over content moderation decisions, this virtually ensures that every decision will be "litigation bait."

Second, tougher laws tend to favor incumbents. Google and Facebook are OK with virtually any regulatory intervention because these companies mint money and can afford any compliance cost. But the companies that hope to dethrone Google and Facebook may not survive the regulatory gauntlet long enough to compete.

Third, some laws expect Internet companies to essentially eliminate antisocial behavior on their sites. Those laws ignore the baseline level of antisocial behavior in the offline world, which effectively makes Internet companies liable for the human condition.

The logical consequence of "tougher" Internet laws is clear but chilling. Google and Facebook will likely survive the regulatory onslaught, but few other user-generated content services will. Instead, if they are expected to achieve impossible outcomes, they will shut down all user-generated content.

In its place, some of those services will turn to professionally generated content, which has lower legal exposure and is less likely to contain antisocial material. These services will have to pay for professionally generated content, and ad revenue won't be sufficient to cover the licensing costs. As a result, these services will set up paywalls to charge users for access to their databases of professionally licensed content. We will shift from a world where virtually everyone has global publication reach to a world where most readers will pay for access to a much less diverse universe of content.

In other words, the Internet will resemble the cable industry circa the mid-1990s, where cable subscribers paid monthly subscription fees to access a large but limited universe of professionally produced content. All of the other benefits we currently associate with user-generated content will just be fond memories of Gen Xers and millennials.

The way forward is the way back

The irony is that we already have the regulatory solution that will lead to the optimal level of content moderation in our society.

In 1996, Congress enacted 47 USC 230 ("Section 230"). Section 230 says that websites aren't liable for third-party content, with limited exceptions that include intellectual property infringement and federal crimes. This simple concept forms the backbone of the modern Internet. Most of the Internet services we use on an hour-by-hour—or even minute-by-minute—basis exist only because Section 230 reduced their legal exposure. With this legal protection, Internet services can confidently engage in content moderation efforts without fearing that they will be sued by the folks who are inevitably unhappy with their decisions.

Section 230 is sometimes described as a "sword and shield": a sword for actively engaging in moderation and a shield for protecting those moderation decisions. Some say Internet services have no incentive to use the sword, but this is contrary to reality. Internet services rely on the sword to manage their reputation, to retain advertisers, and to maintain their competitive edge in the market. Failing to curb antisocial behavior is almost certainly a doomed business strategy, as "anything goes" services like Yik Yak found out the hard way.

As a result, legitimate Internet services routinely invest substantial resources in protecting their users from harmful content. For example, Pinterest suppresses content that promotes eating disorders; online games geared towards children, like Roblox and Fortnite, deploy proactive chat filtering software to screen abusive content before it's communicated.
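To make "proactive chat filtering" concrete, here is a minimal sketch of the general idea: a message is checked against a blocklist before it is delivered, and matching spans are held or masked. The blocklist, function names, and policy choices below are illustrative assumptions, not the actual systems used by Roblox or Fortnite.

```python
import re

# Toy blocklist; real services use far larger lists plus machine-learned classifiers.
BLOCKED_PATTERNS = [
    re.compile(r"\bidiot\b", re.IGNORECASE),
    re.compile(r"\bhate you\b", re.IGNORECASE),
]

def screen_message(text: str) -> tuple[bool, str]:
    """Return (allowed, text_to_deliver); matched spans are masked with asterisks."""
    flagged = False
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            flagged = True
            text = pattern.sub(lambda m: "*" * len(m.group()), text)
    # A real service might instead hold flagged messages for human review.
    return (not flagged, text)

if __name__ == "__main__":
    allowed, delivered = screen_message("You idiot, nobody likes you")
    print(allowed, delivered)  # False "You *****, nobody likes you"
```

The key design point is that screening happens before delivery rather than after a complaint, which is what distinguishes proactive filtering from reactive takedowns.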

Pro-social

Indeed, Section 230 lets Internet services experiment with new ways of mediating user-to-user conversations—including trying to reduce antisocial behavior below the offline baseline. For example, the local social network Nextdoor redesigned its service to discourage racist reports of "suspicious people" in the neighborhood. Nextdoor can't stop people from thinking racist thoughts, but it can hinder their ability to normalize racism among their neighbors. More recently, Nextdoor launched a "Kindness Reminder" to encourage users to think twice before making offensive or hurtful comments.
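A feature like the Kindness Reminder can be sketched as a simple pre-posting nudge: if a draft comment looks potentially hurtful, prompt the author to reconsider instead of blocking the post outright. The keyword heuristic and prompt wording below are assumptions for illustration only, not Nextdoor's implementation.

```python
# Toy heuristic; a production system would use a trained toxicity model.
HURTFUL_HINTS = {"stupid", "ugly", "loser", "shut up"}

def needs_kindness_reminder(draft: str) -> bool:
    lowered = draft.lower()
    return any(hint in lowered for hint in HURTFUL_HINTS)

def submit_comment(draft: str, post) -> None:
    if needs_kindness_reminder(draft):
        # Nudge the author; in a real UI they could revise or confirm anyway.
        print("Reminder: this comment may come across as hurtful. Edit before posting?")
        return
    post(draft)

if __name__ == "__main__":
    submit_comment("That idea is stupid", post=print)          # triggers the reminder
    submit_comment("Thanks for organizing the cleanup!", post=print)  # posts normally
```

Unlike the chat filter above, this intervention is a nudge rather than a block: the goal is to change behavior before the content exists, which is exactly the kind of experimentation Section 230 leaves room for.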

Internet services increasingly help their users engage in pro-social conduct online that they might not have pursued in the offline world. All of these interventions are protected, and encouraged, by Section 230.

Section 230 won't lead to perfection online. But that's not the right measuring stick. Rather, the question is whether Section 230 helps Internet services beat the offline baseline. There's good reason to believe that Internet services will get there, with Section 230's help.

Increasingly, regulators and politicians have targeted Section 230 as part of the Internet's ailments. They are wrong. Section 230 is part of the solution, not the problem. The sooner we embrace that view, the greater the likelihood we can make meaningful improvements and recover our love for the Internet.
