EU to force Big Tech to protect users

The European Union is close to agreement on new rules to protect internet users. The rules would require big tech companies such as Google and Facebook to step up their efforts to stop the spread of illegal content and hate speech.


EU officials negotiated on Friday over the final details of the legislation, the Digital Services Act, which is part of a sweeping overhaul of the 27-nation bloc's digital rulebook. It underscores the EU's leading role in the global push to rein in the power of online platforms and social media companies.

The rules must still be formally approved by the European Parliament and the European Council, which represents the 27 member states. Even so, the bloc is ahead of the United States and other countries in drafting regulations that would force tech giants to protect users from the harmful content that proliferates online.

Negotiators from the EU's executive Commission, the member states and France, which holds the bloc's rotating presidency, were trying to reach a deal by the end of Friday, just ahead of Sunday's French presidential election.

The new rules are intended to protect internet users' "fundamental rights" online and make tech companies more responsible for content posted on their platforms. Twitter and Facebook would need better tools for flagging and removing hate speech. Amazon and other online marketplaces would also have to take responsibility for dangerous products sold on their sites, such as counterfeit sneakers and unsafe toys.

These reporting and removal systems would be standardized so they work the same way across all online platforms.

That means "any national authority" will be able to request the removal of illegal content, regardless of where in Europe a platform is based, the EU's single market commissioner, Thierry Breton, said on Twitter.

Companies that break the rules could face fines of up to 6% of their annual global revenue, which would be a huge penalty for firms that make billions. Repeat offenders could be banned from the EU market.

Twitter and Google declined to comment. Amazon and Facebook did not respond to our requests for comment.

The Digital Services Act also contains measures to protect children, including a ban on advertising targeted at minors. Ads targeting users based on their gender, ethnicity or sexual orientation would also be banned.

There would also be a ban on so-called dark patterns, deceptive interface designs that steer users into doing things they did not intend to do.

Tech companies would need to carry out regular risk assessments covering illegal content, disinformation and other harms, and then report on how they are addressing those risks.

They would also have to be more transparent with regulators and independent researchers about their content moderation efforts. YouTube, for example, could be required to hand over data on whether its recommendation algorithm has been steering users toward more Russian propaganda than usual.

The European Commission plans to hire more than 200 additional staff to enforce the new rules. To cover the cost, tech companies will pay a "supervisory" fee that could amount to as much as 0.1% of their annual global net income, depending on the outcome of negotiations.

Last month, the EU reached agreement on the Digital Markets Act, a separate piece of legislation aimed at curbing tech giants' market power and giving smaller competitors a fairer chance.

Britain has its own online safety legislation, which includes jail sentences for tech company executives who do not comply.