Facebook, Alphabet CEOs Outline Approaches To Web Regulation

Facebook, Inc. CEO Mark Zuckerberg outlined proposed reforms to a long-standing internet regulation in written testimony ahead of a Thursday (March 25) hearing before a U.S. House of Representatives committee.

Zuckerberg said that Section 230 of the Communications Decency Act would benefit from some modifications. However, he acknowledged that “identifying a way forward is challenging given the chorus of people arguing — sometimes for contradictory reasons — that the law is doing more harm than good.”

“We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it,” Zuckerberg said.

But the executive said that platforms shouldn’t be liable if some material avoids detection, noting “that would be impractical for platforms with billions of posts per day.” However, he said that platforms “should be required to have adequate systems in place to address unlawful content.”

Alphabet CEO Sundar Pichai said in written testimony ahead of the Thursday (March 25) hearing that regulation plays an important part in making sure that “we protect what is great about the open web while addressing harm and improving accountability.”

However, Pichai said that Alphabet is “concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well.”

He said that those actions would “have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”

Pichai instead proposed approaches such as building content policies that are “clear and accessible,” notifying individuals when their material is taken down and giving them avenues to appeal content decisions, and “sharing how systems designed for addressing harmful content are working over time.”