The modern internet is built on two foundational laws: Section 230 of the Communications Decency Act (CDA) and Section 512 of the Digital Millennium Copyright Act (DMCA). These laws allow everyone to post content online, and they give websites and apps intermediary liability protections so those services can remove or moderate inappropriate content.

Without the protections these laws provide, two things would happen: 

  1. Companies could be held legally responsible for everything their users say. 
  2. Companies would face legal risks for enforcing community standards or voluntarily restricting access to or deleting inappropriate or illegal content.

IA member companies use a variety of techniques – including human review and machine learning – to evaluate posts, images, videos, and other content, and to restrict or remove access to content that is illegal or otherwise violates their community standards.
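
At a high level, such moderation systems combine an automated classifier with a human review queue: clear violations are removed automatically, borderline cases are routed to trained reviewers, and everything else is left alone. The sketch below is a minimal, hypothetical illustration of that triage pattern; the scoring function, thresholds, and example phrases are assumptions for illustration, not any particular company's system.

```python
# A minimal, hypothetical sketch of the "machine learning plus human review"
# triage pattern described above. The scoring function, thresholds, and
# example phrases are illustrative stand-ins, not any real platform's policy.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"  # borderline content goes to trained reviewers
    REMOVE = "remove"              # high-confidence policy violations are removed


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Stand-in for a trained classifier: returns a 0..1 policy-violation score."""
    banned_phrases = ("buy opioids", "terror propaganda")  # hypothetical examples
    return 1.0 if any(p in post.text.lower() for p in banned_phrases) else 0.05


def triage(post: Post, remove_at: float = 0.9, review_at: float = 0.5) -> Action:
    """Auto-remove clear violations, queue borderline cases, allow the rest."""
    score = violation_score(post)
    if score >= remove_at:
        return Action.REMOVE
    if score >= review_at:
        return Action.HUMAN_REVIEW
    return Action.ALLOW


if __name__ == "__main__":
    for post in (Post("1", "Look at my vacation photos"),
                 Post("2", "DM me to buy opioids cheap")):
        print(post.post_id, triage(post).value)
```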

Intermediary liability protections found in Section 230 of the CDA are what make this possible.

Intermediary liability protections help internet companies create a safer, more enjoyable experience online by:

  • Fighting the opioid epidemic

    by participating in the U.S. Food and Drug Administration’s annual Online Opioid Summit, at which companies committed to taking stronger action against the availability of opioids online.

  • Combating trafficking online

    by creating technology to identify over 6,000 victims and 2,000 sex traffickers in a single year, and reducing law enforcement investigation time by 60 percent.

  • Protecting child health and safety

    by partnering with non-governmental organizations like the National Center for Missing and Exploited Children (NCMEC), the International Center for Missing and Exploited Children, the International Women’s Forum, and Polaris. 

  • Combating non-consensual intimate imagery (NCII)

    by supporting the ENOUGH Act in both 2017 and 2019, a bill that would establish criminal liability for individuals who share intimate imagery without consent while protecting victims’ privacy. 

  • Improving online safety

    by working closely with parenting groups, child development experts, and third-party advocates, including the Family Online Safety Institute (FOSI) and the Cybersmile Foundation, to develop solutions that make platforms safer and more educational.

  • Addressing the threat of deepfakes

    by partnering with PolitiFact, Factcheck.org, ABC News, the Trust Project, Social Science One, and the Associated Press to monitor popular content and evaluate its accuracy.

  • Combating online extremism

    by partnering with the Global Internet Forum to Counter Terrorism (GIFCT) to organize collaborations in which companies share information, content identifiers, and best practices for removing terrorist content across multiple platforms.
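
The content identifiers mentioned above are typically hashes of known violating media that participating platforms share so each can block re-uploads. Below is a minimal sketch of that matching step, assuming an exact SHA-256 digest as the identifier; real systems such as GIFCT's hash-sharing database use perceptual hashes so that near-duplicate media also match, and the shared-list entries here are hypothetical.

```python
# A minimal sketch of hash-based content-identifier sharing across platforms.
# Real cross-platform efforts use perceptual hashes (so slightly altered copies
# still match); this sketch uses an exact SHA-256 digest purely to keep the
# example dependency-free, and the shared-list contents are hypothetical.
import hashlib


def content_id(data: bytes) -> str:
    """Exact-match identifier for a piece of media (stand-in for a perceptual hash)."""
    return hashlib.sha256(data).hexdigest()


# Identifiers contributed by participating platforms (hypothetical values).
shared_hash_list: set[str] = {
    content_id(b"example known terrorist propaganda video bytes"),
}


def matches_shared_list(uploaded: bytes) -> bool:
    """True if an upload matches a previously shared content identifier."""
    return content_id(uploaded) in shared_hash_list


if __name__ == "__main__":
    print(matches_shared_list(b"example known terrorist propaganda video bytes"))  # True
    print(matches_shared_list(b"an unrelated upload"))                             # False
```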