Brussels has been grappling with the issue of child abuse in the online environment for some time. Mandatory blanket monitoring of Europeans' messages is apparently off the table, as several member states opposed it. What is now on the table, and close to final approval, is "voluntary" vetting by platforms, which the EU plans to encourage with the threat of fines.
In practice, technology companies will be required to assess the risk of child abuse on their platforms and implement mitigating measures. If these are deemed insufficient, they will face heavy fines, which can reach billions (six percent of turnover) for large companies.
The use of artificial intelligence for blanket scanning is therefore the easiest way to demonstrate that they are doing everything in their power to reduce the risk.
Once again, the dilemma arises as to whether protecting children from abuse is worth the loss of privacy for EU citizens and whether artificial intelligence can be relied upon in this regard.
But let's start from the other side.
What are the arguments in favor of screening messages?
European authorities rely on alarming statistics to argue in favor of controversial legislation. Last year, the National Center for Missing and Exploited Children (NCMEC) received approximately 20 million reports from technology companies around the world concerning suspected sexual abuse of minors on the internet.
Although this is a significant decrease from more than 36 million in 2023, part of the drop is due to a change in methodology: since last year, the NCMEC has placed greater emphasis on identifying and tracking unique children and perpetrators. Several platforms have therefore changed the way they report incidents. In simple terms, if the same photo is shared repeatedly, they no longer file a report each time, but only once per case. The NCMEC itself is also working to deduplicate reports of the same material coming from different platforms.
If the 2023 methodology had been continued, there would have been around 29 million suspicious reports last year.
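To illustrate the kind of deduplication described above, here is a minimal sketch in Python. It assumes each report carries a content hash of the file (for example a cryptographic or perceptual hash); the field names, hashes, and data are invented for illustration only and do not describe any platform's actual reporting pipeline.

```python
# Illustrative sketch: collapsing repeated reports of the same image into one case.
# All field names and values are invented; real systems use image hashes
# (e.g. SHA-256 or perceptual hashes) to recognize the same file across shares.
from collections import defaultdict

reports = [
    {"platform": "A", "content_hash": "f3a9...", "shared_at": "2024-03-01"},
    {"platform": "A", "content_hash": "f3a9...", "shared_at": "2024-03-02"},  # same photo shared again
    {"platform": "B", "content_hash": "f3a9...", "shared_at": "2024-03-05"},  # same photo, different platform
    {"platform": "B", "content_hash": "07bc...", "shared_at": "2024-03-07"},
]

# Group all reports of the same file into a single case.
cases = defaultdict(list)
for report in reports:
    cases[report["content_hash"]].append(report)

print(f"{len(reports)} raw reports -> {len(cases)} unique cases")
# Output: 4 raw reports -> 2 unique cases
```

Counting unique cases rather than raw reports is exactly why the headline number can fall sharply even if the underlying amount of abusive material does not.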
There are several ways in which children are abused online. The largest category in the aforementioned statistics is reports relating to the creation and distribution of visual material depicting child sexual abuse, ranging from photos and videos to live streaming.
A new trend in this area is AI-generated child sexual abuse videos. The number of such reports rose from just under 5,000 in 2023 to 67,000 last year. Although this material is synthetic, its free distribution may encourage potential perpetrators to commit acts in the real world.
However, in addition to photos and videos, the NCMEC data also includes reports of cases where someone possesses intimate material of the victim (or claims to) and threatens to publish it, a practice now commonly referred to as "sextortion."
In addition, a report is also filed when an adult chatting with a child is suspected of grooming: trying to gain the child's trust, exchange intimate photos or videos, or even lure the minor into an in-person meeting.
It should be added that most statistics define children as persons under the age of 18.
Millions of data points are controversial
When considering the aforementioned statistics, which are often cited, it is important to take into account what exactly they describe. These are millions of reports of suspicious communications or audiovisual materials. The important words in the previous sentence are "suspicious" and "reports."
Child abuse is treated with far greater sensitivity today than in the past. That is entirely justified and correct, but in this context it must be taken into account that some of the reported suspicions will simply turn out to be false alarms.
A report may capture family members sharing photos of their children (including nude ones), for example from a seaside vacation, or a conversation laced with tasteless humor that, while hardly commendable, is a far cry from "online pedophilia."
In recent years, tech companies such as Meta and Google have been investing heavily in artificial intelligence and advanced algorithms that detect and report suspicions automatically, but these systems are far from accurate.
This is precisely one of the main arguments against the planned European legislation: even if artificial intelligence caught 100 percent of genuinely problematic visual material and of communication between real predators and victims, that would say nothing about false positives, the cases where the system mistakenly flags an artistic photograph, a medical conversation, parental joking, or a poor-quality image as suspicious.
This can then lead to innocent people being investigated by the police, with all the complications and reputational damage that may follow.
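A rough back-of-the-envelope calculation shows why this matters. The numbers below are assumptions chosen purely for illustration, not figures from this article or from any regulator, but the underlying base-rate effect holds whenever abusive content is rare relative to total traffic.

```python
# Back-of-the-envelope illustration of the base-rate problem with blanket scanning.
# All figures below are assumed for illustration only.

messages_scanned = 10_000_000_000    # messages scanned per year (assumed)
prevalence = 1e-6                    # fraction of messages that are actually abusive (assumed)
true_positive_rate = 0.99            # the classifier catches 99% of abusive messages (assumed)
false_positive_rate = 0.001          # it wrongly flags 0.1% of innocent messages (assumed)

abusive = messages_scanned * prevalence
innocent = messages_scanned - abusive

true_flags = abusive * true_positive_rate
false_flags = innocent * false_positive_rate

precision = true_flags / (true_flags + false_flags)

print(f"Correctly flagged abusive messages: {true_flags:,.0f}")     # 9,900
print(f"Innocent messages flagged by mistake: {false_flags:,.0f}")  # 9,999,990
print(f"Share of flags that are actually abusive: {precision:.1%}") # 0.1%
```

Under these assumed numbers, even a detector that misses almost nothing would bury a few thousand genuine hits under roughly ten million flagged innocent messages, each one a potential police file on an innocent person.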
But let's go back to the NCMEC data. Even after thoroughly cleaning out all false reports and duplicate occurrences of the same material, the resulting number still does not tell us how many perpetrators and victims there are, as a single crime may produce multiple photos or videos.
This is precisely what the Internet Watch Foundation (IWF) focuses on, as it attempts to link various materials (not reports) to individual cases of child abuse in the online environment.
Its statistics are therefore much lower [the fact that its reports come from "only" about 50 countries also plays a role, even though it actively searches for problematic material rather than merely accepting reports, editor's note]. On the other hand, they are more meaningful.
According to its own figures, the organization assessed more than 424,000 reports last year and took action in more than 290,000 cases where it verified that the content contained criminal images.
It sorts this visual material into various categories and characteristics. For example, it cites the alarming statistic that by far the largest group of child victims in the photographs and videos were between the ages of 7 and 13.
Among the reported materials, there are therefore very few cases of two chatting adolescents sending each other explicit photos where one happens to be an adult and the other a minor.
The organization confirmed that the most serious material, depicting penetrative sexual activity, numbered around 60,000 items. A similar number of files showed other sexual practices such as masturbation or petting (erotic fondling). The majority of cases (166,000) contained other inappropriate images of minors.
Regardless of the numbers, this is a problem. The solution is not Big Brother's eye
The above comparison shows that there are not as many cases of child abuse on the internet as it may seem. When statistics are presented by the media or politicians, it is always necessary to be clear about what the data actually tells us, because it is often used manipulatively to achieve political or ideological goals more easily.
However, it must be clear to everyone that this is not a reason to neglect the protection of children on the internet or to stop trying to achieve it. Regardless of the number of cases, this is a serious issue that needs to be addressed at the political level, as well as within families and educational institutions.
However, mass screening of all citizens, which carries the risk of sensitive information being passed on or sold to third parties, is not a good solution.
If blanket monitoring of communications becomes the norm and internet platforms have built-in mechanisms for vetting their users, it will only be a matter of time before there is a political or ideological demand to use them in other ways. For example, to obtain information or block and shut down those who do not conform to the system. This is a path to centralization of power, which, as we know from history, is dangerous.
The risks of all the proposals currently on the table in Brussels are therefore still too great. If we want to protect our children from the dangers of the internet, let us instead make sure they only gain access to problematic platforms once they are mature enough, much as Australia is already doing.