Users around the world have been outraged by the European Commission's proposal to require websites to enter into Shadow Regulation agreements with copyright holders concerning the automatic filtering of user-generated content. This proposal, which some are calling RoboCopyright and others Europe's #CensorshipMachine, would require many Internet platforms to integrate content scanning software into their websites to alert copyright holders every time it detected their content being uploaded by a user, without any consideration of the context.
When user content is threatened with removal from the Internet, it's unlikely that anyone is going to put up more of a fight than the user who uploaded it. That's what makes it so critically important that the user is informed whenever an Internet intermediary is asked to remove their content from its platform, or decides to do so on its own account.
Unfortunately this doesn't consistently happen. In the case of content taken down for copyright infringement under the DMCA or its foreign equivalents, the law typically requires the user to be informed. But for content that allegedly infringes other laws (such as defamation, privacy, hate speech, or obscenity laws), or content that isn't alleged to be illegal but merely against the intermediary's terms of service, there is often no requirement that the user be informed, and some intermediaries don't make a practice of doing so.
The language in the Trans-Pacific Partnership (TPP) on Internet Service Provider (ISP) liability—which governs the legal liability of Internet intermediaries and platforms for communications of their users—resides in an annex in the trade agreement's Intellectual Property chapter and was one of the most contentious parts of its copyright enforcement rules.
Europe is very close to the finishing line of an extraordinary project: the adoption of the new General Data Protection Regulation (GDPR), a single, comprehensive replacement for the 28 different laws that implement Europe's existing 1995 Data Protection Directive. More than any other instrument, the original Directive has created a high global standard for personal data protection, and led many other countries to follow Europe's approach. Over the years, Europe has grown ever more committed to the idea of data protection as a core value. The Union's Charter of Fundamental Rights, legally binding on all the EU states since 2009, lists the “right to the protection of personal data” as a separate and equal right to privacy. The GDPR is intended to update and maintain that high standard of protection, while modernising and streamlining its enforcement.
As we've noted before, online harassment is a pressing problem—and a problem that, thankfully and finally, many are currently working on together to mitigate and resolve. Part of the long road to creating effective tools and policies to help users combat harassment is drawing attention to just how bad it can be, and using that spotlight to propose fixes that might work for everyone affected.
But not all of the solutions now being considered will work. In fact, some of them will not only fail to fix harassment, but will actually place drastic limitations on the ability of ordinary users to work together, using the Net, to build and agitate for real, collective solutions.
If you operate your own website, be glad that you don't host it in South Korea (or if you do, you might want to rethink that). Whereas in the United States an important law called CDA 230 protects you from liability for comments contributed by users to your website, South Korea has some of the toughest liability rules in the world, which can leave intermediaries such as website owners carrying the can for content they didn't even know about.
The future for online discussion platforms in Europe is looking cloudy following yesterday's ruling of the European Court of Human Rights in the case of Delfi AS v. Estonia. In a disappointing decision, the court affirmed that Estonian courts were entitled to hold an online news portal liable in defamation for comments submitted anonymously by readers. The court was unmoved by the fact that the publisher, Delfi, had set up a system for users to flag and automatically remove comments that they found offensive, or by the fact that the comments at issue had been removed prior to the initial lawsuit being filed.