Tuesday, December 6, 2016

Hateful content: web platforms' poor track record and flaws in the code of conduct – ZDNet France

Are Facebook, YouTube (Google), Twitter and Microsoft really doing anything about hateful content on their platforms? Nothing is less certain. And it is no doubt partly to make people forget this poor record, as well as to avoid regulatory constraints in Europe, that the four giants have announced the creation of a shared database of terrorism-related content.

The announcement came, however, after these online services were criticised by the European Commission for failing to honour their commitments. Under a code of conduct approved by the platforms in May, reported content is supposed to be reviewed and, if necessary, removed within 24 hours.

A serious effort in the next few months, otherwise…

That goal is far from being met. Questioned by the Financial Times, the Commissioner for Justice, Věra Jourová, called out the web giants' lack of responsiveness. According to a report commissioned by the Commission, only 40% of reports are reviewed within 24 hours. The figure only rises above 80% after 48 hours.

Too slow, according to the EU executive, which also finds very large differences between EU member states. In France and Germany, where moderation of such content is required by law, Facebook, Google, Twitter and Microsoft remove more than 50% of reported content within the time limit.

By contrast, as reported by La Tribune, in countries such as Austria and Italy, only 11% and 4% respectively of the content concerned is deleted within 24 hours. “If Facebook, YouTube, Twitter and Microsoft want to convince us, me and the ministers, that the non-legislative approach can work, they will have to act quickly and make a serious effort in the coming months,” Věra Jourová therefore warns.

In plain terms, the soft approach of the code of conduct could be abandoned in favour of a more restrictive one, which the web platforms would rather avoid. Several organisations were also sceptical about the code of conduct, judging it too weak to be effective.

For EDRi and Access Now, it amounts to asking for the removal of content that these social networks should already be legally required to moderate. More importantly, it is the terms and conditions of use that set the moderation rules. For these associations, the US giants enforce their own rules first, and comply with national laws only “if necessary.”

Social networks follow their own rules

And this principle still holds with the creation of a shared database of hateful content. Each platform will apply its own policies and its own definition of terrorist content. Content deleted on one site and recorded in the shared database will not automatically be removed from the other partner services.

Similarly, each company will apply its internal rules to determine which content to share in the common database. This partnership, agreed under pressure, moreover seems already covered by the code of conduct.
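The article does not spell out the mechanics of such a database, but a minimal sketch of the arrangement it describes might look as follows. It assumes a hash-based design (platforms sharing fingerprints of flagged items rather than the items themselves), and every name here (SharedHashDatabase, Platform, own_policy) is a hypothetical illustration, not any platform's actual API:

```python
# A minimal sketch, assuming a hash-based shared database.
# All names are hypothetical; this is not any platform's real implementation.
import hashlib


class SharedHashDatabase:
    """Stores fingerprints of content that member platforms chose to flag."""

    def __init__(self):
        self._hashes: set[str] = set()

    def submit(self, content: bytes) -> None:
        # Each company applies its own internal rules before calling this.
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def contains(self, content: bytes) -> bool:
        return hashlib.sha256(content).hexdigest() in self._hashes


class Platform:
    """Each partner keeps its own policy and decides independently."""

    def __init__(self, name: str, own_policy):
        self.name = name
        self.own_policy = own_policy  # callable: bytes -> bool

    def review(self, content: bytes, shared_db: SharedHashDatabase) -> str:
        if shared_db.contains(content):
            # A match is only a signal: removal is NOT automatic.
            # The platform still applies its own definition of
            # "terrorist content" before acting.
            return "removed" if self.own_policy(content) else "kept"
        return "kept"


# Example: a hit in the shared database does not propagate removal;
# each service decides under its own rules.
db = SharedHashDatabase()
db.submit(b"flagged item")
lenient = Platform("ServiceA", own_policy=lambda c: False)
strict = Platform("ServiceB", own_policy=lambda c: True)
assert lenient.review(b"flagged item", db) == "kept"
assert strict.review(b"flagged item", db) == "removed"
```

The key point the article makes is captured in the review step: a match in the shared database is merely a signal, and each partner service still applies its own policy before removing anything.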

At the time of ratification, the major platforms committed to cooperating more closely with each other and with other social media services and companies to “strengthen the exchange of good practices.”

The future database should finally clear up the “grey areas” identified by the UEJF, SOS Racisme and J'accuse. For these associations, online services commit only to reviewing reports, and say nothing about what happens next.

Yet this is precisely what poses a problem for these civil society representatives, for whom “the reports don’t work.”

“All these improvements could be a good thing if the platforms had a real political will to regulate content when alerts are sent. From experience, we know that the reality is different; the rest is corporate communication and lobbying,” denounced Marc Knobel, president of J'accuse, back in May.

