It’s been 24 years since Internet companies were declared off the hook for the behavior of their users. That may change, and soon.
(Cross-posted from Signal360)
In a sweeping talk at the Association of National Advertisers conference last month, P&G Chief Brand Officer (and ANA Chair) Marc Pritchard laid out a five-step plan to address systemic problems in the marketing and media industries. Each step addresses serious challenges and opportunities — in diversity, inequality, and creative and business practices. But perhaps no step is more challenging — and crucial — than Pritchard’s Step Four: Eliminating all harmful content online.
“There is still too much harmful, hateful, denigrating, and discriminatory content and commentary in too many digital sites, channels, and feeds,” Pritchard said. “There is no place for this type of content.”
While nearly everyone agrees with the idea of eliminating harmful content, key actors across the digital media industry seem paralyzed when it comes to how best to take action on the problem. What’s really going on? To understand, we must dive into the early formation of the Internet industry in the United States, and the role the First Amendment plays — to this day — in shaping an increasingly contentious debate on how to regulate digital speech.
But, First, a Bit of History
When the Internet was in its early stages as a commercial medium more than 25 years ago, a moral panic erupted in the United States following the publication of a Time magazine cover story titled “Cyberporn.” Featuring a terrified child staring aghast into the blue light of a computer monitor, the story claimed — falsely, as it turned out — that the majority of images on the then-novel medium consisted of pornography.
Internet service providers were to be treated like the phone company … not held responsible for the speech of their customers.
Congress quickly took up the cause of cleaning up the Internet and passed the