When the European Commission presented its draft regulation to prevent child sexual abuse (CSAR) in May 2022, a proposal that later became known as Chat Control, it seemed noble at first glance. It was presented by the Commissioner for Home Affairs, Ylva Johansson, and its goal was clear and reasonable: to combat child sexual abuse on the internet.
The need to solve this problem is undisputed. As is so often the case, however, the proposal falters on the way the goal is to be achieved.
The Commission proposes that internet platforms such as WhatsApp or Messenger be required to monitor their users' communications and report any suspected child abuse. This applies in particular to the distribution of inappropriate photos and videos or attempts at so-called grooming, i.e., contacting minors for the purpose of abuse.
Since some applications encrypt their users' communications end to end, the proposal stipulates that every message must be checked before it is encrypted and sent.
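The mechanism described here is generally known as client-side scanning. A minimal sketch of the idea, with all names and functions purely illustrative (real deployments would use perceptual hashes such as PhotoDNA rather than SHA-256, so that re-encoded images still match):

```python
import hashlib

# Hypothetical database of fingerprints of known abusive material.
# SHA-256 is only a stand-in for a perceptual hash.
BLOCKLIST: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual hash of the content.
    return hashlib.sha256(data).hexdigest()

def report(data: bytes) -> None:
    # Placeholder for the mandated report to the authorities.
    print("flagged for review")

def transmit(ciphertext: bytes) -> None:
    # Placeholder for the actual network send.
    pass

def send_message(payload: bytes, encrypt) -> bool:
    """Scan the payload on the client BEFORE encryption.

    Returns True if the message was sent, False if it was
    flagged and reported instead.
    """
    if fingerprint(payload) in BLOCKLIST:
        report(payload)           # the scan sees the plaintext
        return False
    transmit(encrypt(payload))    # encryption only applies afterwards
    return True
```

The key point the sketch makes visible: end-to-end encryption is not technically broken, but it becomes irrelevant, because the inspection happens on the device while the content is still plaintext.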
Unprecedented invasion of privacy
The fundamental problem, however, is that in pursuit of this noble goal the European Commission has endorsed practices that even its counterparts in China, so fond of monitoring their citizens, would not be ashamed of.
The mass scanning of communications opens the door to the misuse of sensitive data. Not only by hackers, for whom databases containing a wealth of sensitive information would be a huge lure, but also by employees or institutions to whom the data could be leaked.
The inspectors appointed by Brussels could also pass on the data to non-governmental organizations or other “interested” third parties if this is in the interests of research and the prevention of sexual abuse in the online sphere.
Let us quote exactly: "The EU center should also contribute to achieving the objectives of this Regulation by serving as a center of knowledge, expertise, and research on issues related to the prevention and combating of child sexual abuse online. In this context, it should cooperate with relevant stakeholders from the Union and third countries and enable Member States to make use of the knowledge and expertise gathered..."
Of course, it is also added that "the processing and storage of certain personal data is necessary" in order to fulfill the tasks of this supervisory authority. According to the proposal, this should only be done "to the extent necessary," but the door is already ajar and can very easily be pushed wide open.
However, this is only the beginning of the list of problems.
Once the Union has created a mechanism to monitor citizens' communications, pressure will grow to use it for other crimes as well. And it will be difficult to argue why messages mentioning bombs, murders, terrorism, and so on should not be reported and reviewed...
Given the enormous amount of data to be searched, it is assumed that artificial intelligence will do most of the screening. However, AI often makes mistakes and cannot yet reliably distinguish humor, or family members exchanging photos of their children, from the communication of real predators.
Although a human reviewer is supposed to stand at the end of the process, much of the public already fears that the police could knock on their door because of a wrongly flagged message.
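The scale of the false-positive problem follows from simple base-rate arithmetic. Every number below is an illustrative assumption, not a figure from the proposal or any real deployment, but the qualitative conclusion holds across a wide range of plausible values:

```python
# Illustrative base-rate arithmetic; all inputs are assumptions.
messages_per_day = 10_000_000_000   # messages scanned daily (assumed)
abuse_rate = 1e-6                   # fraction that is truly abusive (assumed)
sensitivity = 0.99                  # classifier's true-positive rate (assumed)
false_positive_rate = 0.001         # 0.1% of innocent messages misflagged (assumed)

abusive = messages_per_day * abuse_rate
innocent = messages_per_day - abusive

true_flags = abusive * sensitivity
false_flags = innocent * false_positive_rate

# Precision: what share of all flags are genuine hits.
precision = true_flags / (true_flags + false_flags)
print(f"{false_flags:,.0f} innocent messages flagged per day")
print(f"only {precision:.2%} of flags are genuine")
```

With these assumptions, an apparently excellent classifier still flags millions of innocent messages a day, and well under one percent of all flags concern actual abuse. This is exactly the situation in which human review is overwhelmed and innocent people end up in suspect databases.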
Even if this were not the case and the system were 100% reliable, the mere fact that someone (or something) is reading people's private correspondence behind their backs, without their knowledge, and turning them into suspects is outrageously brazen.
Mass surveillance of chats would quite simply mean the end of privacy for Europeans living in the Union. And this despite the fact that we are fighting for it with other regulations such as the GDPR.
Compromise versions
However, emotions seem to have flared up too quickly these days. Although many well-known commentators and influencers are sounding the alarm about Chat Control, the truth is that we are still a long way from the final version of the law that could come into force. This is because the Commission, the European Parliament, and the member states must agree on it.
Many European parliamentarians are aware of the problematic parts of the Commission's proposal. This is evident from the fact that the European Parliament did not support it in its original form.
Instead, at the end of 2023, the Committee on Civil Liberties (LIBE) adopted a position rejecting several parts of it. In that position, scanning should not be blanket but targeted: platforms would have to review the communications only of suspicious users, on the basis of a court order, and provide information about them. The position also explicitly rules out any interference with end-to-end encryption.
The panic these days was triggered primarily by the resumption of discussions on the scanning of digital communications in the Council of the EU, i.e., between the member states of the Union. The main force behind this is Denmark, which has held the presidency since July and is stubbornly pushing ahead with the legislation.
The vote was supposed to take place on October 14, but the balance of power is changing from day to day. Countries were supposed to state their positions by September 12, but so far no qualified majority has been found to pass the law, and the deadlines are being postponed. Currently, according to public statements by government representatives, 14 countries support the draft, while nine are against it and four are undecided. Undecided Germany is leaning toward the blocking minority, as it refuses to remove encryption and other controversial passages.
However, there are rumors that the Council is trying to reach a compromise. A reporter from the Kazakh news agency Qazinform reported that the Danish version of the revised draft should broadly resemble the European Parliament's position.
Scanning should only apply to visual content (photos, videos), not text or audio messages and calls. The obligation to scan and provide data on private communications would also only apply to online services classified as “high-risk.” Any collection order would have to be approved by a court or independent authority, and the data would have to be pseudonymized before human inspection.
Why is this still a problem?
The details published by the Kazakh agency Qazinform have not been confirmed by other sources. However, they suggest that there is indeed a discussion among member states about a version of the regulation that would tone down the most controversial elements of the original draft. After the unsuccessful September 12 meeting, the Danish presidency also communicated something similar.
Even though the European Commission's original draft is unlikely to be accepted by either the Parliament or the member states, caution is warranted with the revised versions as well.
Once platforms have built in tools for monitoring communications, even if they use them only in a targeted manner, they have created a mechanism that can technically be extended to all citizens at any time. Once it exists, it is only a matter of time before a political or ideological mandate for its broader use appears.
The changes also do not solve the problem of the creation of databases containing sensitive data, which are not only vulnerable to targeted attacks, but also to internal leaks and the disclosure of information to various third parties. Even if not all private correspondence were included, there would still be a lot of data that could compromise innocent people.
If part of the process is entrusted to artificial intelligence, there will be quite a few false positives, stored in the databases alongside genuine suspects. And the people concerned need not even know about it.
The risks of all the proposals currently on the table are simply too great.