
Protecting Innovation and Creativity Online:

  • June 21, 2016

In Defense of the DMCA


Congress gets a bad rap these days

We spend a lot of time discussing everything that’s wrong with legislative proposals. Within this pessimistic legislative environment, it’s important to reflect on the places where Congress got it right, in hopes of gaining insight into how we can replicate that success moving forward.

For those of us at the Internet Association, getting it right meant passing laws that laid the foundation for the success of the internet as an economic engine of growth, a beacon of free expression, and a cultivator of creativity.

Let’s start with a basic decision that policy makers made in the 1990s: internet companies can’t, and shouldn’t, police the internet. Policy makers and negotiators behind the DMCA’s Section 512 safe harbors were careful not only to make sure that internet companies weren’t responsible for monitoring and actively filtering content, but also to include a separate section of the law (Section 512(m)) just to hammer home the point that nothing in the law requires monitoring of online content.

This point is critical to protecting lawful online content and speech covered by flexibilities in copyright law like fair use, and to ensuring that the internet is a space for robust startup activity. Had lawmakers decided to go in a different direction, either leaving the law vague or imposing a monitoring requirement, the landscape today would be very different: internet companies probably wouldn’t have gotten the capital necessary to grow, scale, and diversify, leaving the web looking an awful lot like it did twenty years ago. (For those keeping track, that’s also bad news for content owners and creators, who have grown and benefited from the global reach of the internet and the lowered barriers to entry that online platforms provide.)

Worse yet, policies that put internet companies on the hook for determining the nature of content (infringing or legal) would mean that a great deal of user speech and material allowed under copyright law would probably be taken offline, since the parties responsible for it wouldn’t have the information necessary to know what’s legal and what’s not. (If this debate sounds familiar, it’s because in 2012 Congress again rejected the idea that the internet should be policed in a way that endangered legal speech and content.)

The U.S. Copyright Office announced at the end of 2015 that it would be conducting a new study of Section 512 of the DMCA. Public comments (to the tune of 90,000+) were sent to the Office in early spring, followed by two roundtables in New York and San Francisco. The Internet Association participated in both, highlighting the value this law has brought to all stakeholders (read our comments here). Unfortunately, there were plenty of concerning ideas to go around, too, at the roundtables and, most recently, in a slew of articles targeting the DMCA. The worst of these suggestions have some common themes: many of them revolved around forcing internet companies to police the web, which, as noted above, would (1) endanger user speech online and (2) be a huge barrier for creative, growing internet companies trying to introduce new, legal online platforms and functions. Here’s just a sample of how these themes emerge in such short-sighted proposals:

Notice and “Staydown”

The theory of Notice and Staydown is simple: once infringing content is identified, internet companies should keep it down. The reality, however, isn’t so simple.

A requirement to “staydown” would force platforms to monitor the web for reappearing content (that is alleged to be infringing), all without knowing when licensing deals have changed and without regard for fair use or other instances of legal content use. There’s a lot more to say about how terrible this system would be (even if the technology existed), as highlighted here.

Red Flag Knowledge

This is wonky, nerdy, lawyer-ly territory, but suffice it to say when a lot of courts agree on something, it’s probably for a reason. Basically, platforms aren’t protected under the safe harbors if they are aware of infringing activity. The courts have said this part of the law squares with the guarantee that platforms don’t monitor the web since it only blocks the safe harbor when they have specific knowledge of an infringement. 

Attempting to block safe harbor protection when a platform has some kind of “general knowledge” is just as vague as it sounds. And because the legal protections are so important to companies just to exist in the first place, companies would be thrown into a world of uncertainty where they’d be forced to monitor content just in case something might be infringing, and potentially to block it without certainty. That means platforms would be wary of growing and diversifying, lest they need to play internet cop and expose themselves to even more lawsuits, and legal user content could be caught up in the vague requirements to police the web.

Strict Repeat Infringer Policies

No one likes a troublemaker, which is why Congress told internet companies to come up with policies that punish repeat infringers in appropriate circumstances.

This isn’t about vagueness; it’s about flexibility. Some are now suggesting that this should be a strict, inflexible requirement that banishes accused infringers after a certain number of problems. But what about a user whose content is infringing in one instance and licensed in another? What about content that could be legal under fair use or other flexibilities in the law? Blind termination requirements create tunnel vision that ignores huge parts of copyright law, and they create incentives for litigation over legal creativity.

Just to be clear, the law also doesn’t force rights holders to police the internet: it merely provides them an extraordinary and unprecedented tool to rapidly have content removed that they allege infringes their rights. Without such a powerful tool (which exists nowhere else in law), rights holders would be forced to hire a lawyer and go to court each time they wanted to fight infringing content; such a system would be expensive and time-consuming. Instead, Congress knew it had to fight infringement and protect legal content, too: that’s why the DMCA sets up a balanced set of shared responsibilities that gives rights holders access to special tools and requires internet companies to rapidly remove the specific identified content.

Washington is full of talk about fixing what’s broken, but it’s just as important to remember what needs protecting now more than ever. The internet’s scale and diversity mean we need the safe harbors more now than in the internet’s infancy to protect user speech and creativity online.