Google's design DNA, British spying, and French geopolitics

Štandard wrote about artificial intelligence tools capable of designing synthetic DNA in November last year. A geneticist from Google presented one at the end of January.

A police officer puts up a 'Live facial recognition in use' sign as a Live Facial Recognition (LFR) van is deployed. Photo: Danny Lawson/Getty Images

On July 30, Facebook founder Mark Zuckerberg announced on his blog that he was seeing "glimpses of how artificial intelligence is improving on its own." In a November article, Štandard warned that if such a model began working with a human genome database, "it could start suggesting modifications at any time."

Less than three months have passed, and the unbelievable has become reality. Google's DeepMind program has introduced the AlphaGenome AI tool, which is designed to decode long DNA sequences and predict the impact of mutations on biological processes.

AlphaGenome has been trained to interpret sequences nearly a million base pairs long (steps on a molecular ladder) and, according to the Telegraph, "could be used to design new synthetic DNA." The program should also be able to activate or deactivate genes responsible for various genetic diseases.

"This is not currently possible with gene therapy and could mark the beginning of a new era of personalized medicine," the British newspaper noted.

DeepMind's head of genomics, Žiga Avsec, explained that this tool "could be used to generate short sections of non-coding DNA" that could be used as a "switch" to trigger treatment in a specific tissue. He also announced that, in addition to coding and non-coding DNA, AlphaGenome was able to interpret large sections of so-called "junk" DNA, the effects of which geneticists had previously only guessed at.

Coding DNA accounts for only two percent of the information content of the genome, while "junk" DNA was considered a kind of appendix. "It is now recognized that most critical changes occur in this area of 'junk DNA', which regulates genes, enhances or suppresses their activity, and plays an important role in cell health and disease," explained the Telegraph.

Google has thus achieved what Zuckerberg and his wife announced in early November when, as part of the Biohub program at the Chan Zuckerberg Initiative, they unveiled plans to expand their gene database to include "virtualized" human cells. The goal of the expansion was to test gene therapies without invasive effects on living humans.

Google and its parent company Alphabet are partners with the US Department of Health and Human Services in Secretary Robert F. Kennedy's plan for a shared health data system. And while ordinary Americans expected Google, OpenAI, or Anthropic to deliver an AI-driven database, the industry has clearly taken a bolder direction.

A month before the groundbreaking announcements from the US, Chinese scientists managed to reverse aging in macaques by implanting them with human stem cells with modified genetic properties. These specialized stem cells caused "reverse aging" in areas of brain atrophy or joint inflammation.

So far, this is about treating diseases

The fact that artificial intelligence is being used in a laboratory environment to produce "designer" sequences of human DNA is particularly shocking in light of previous reports from this field of science. In early December, Nucleus Genomics announced the launch of the IVF+ program, which genetically modifies a child in a test tube to prevent various diseases.

Back in 2018, Chinese geneticist He Jiankui conducted an unethical experiment in which he edited the genome of human embryos in order to "improve" the resulting children. The Chinese Academy of Medical Sciences distanced itself from him, citing "ethical issues," while the National Health Commission launched a criminal investigation.

A year later, the United States authorized genetic interventions in stem cells, but only in adult cells, without the risk of transferring modified DNA to the next generation. Just this January, geneticists from two universities in Tokyo, Japan, raised the bar when they implanted into mice modified human cells that glow in the presence of inflammatory disease.

Almost immediately, suspicions and accusations of eugenics or attempts to create a post-apocalyptic race of superhumans arose. Some X network users also recalled the film Gattaca (1997), whose plot corresponds exactly to the aforementioned fears.

The main character, Vincent Freeman (Ethan Hawke), encounters the "genetic superiority" of people conceived in test tubes, while he himself was conceived naturally. The film's title is itself a combination of the letters of DNA's four nitrogenous bases: adenine (A), cytosine (C), guanine (G), and thymine (T).

AI models are becoming independent

Equally disturbing is the connection with the use of artificial intelligence, which is becoming increasingly autonomous. At the World Economic Forum (WEF) in Davos, Switzerland, renowned transhumanist Yuval Noah Harari stated that AI "must learn to lie" in the next phase of its development, a prerequisite, in his view, for developing higher abstract speech.

Several models already show a tendency to "modify reality" when communicating with human administrators, and in thought experiments they have even proved willing to trade off human lives at an "exchange rate" based on race or sexual orientation.

The next step is communication among peers, exemplified by a virtual chat room called Moltbook. It is the social network that opponents of autonomous bots see in their nightmares: individual AI models communicate with each other while humans just watch.

Moltbook administrator Matt Schlicht claims that 1.4 million users are participating in this experiment, although not a single one of them is human. Large language models discuss and "create a digital society without human participation."

Human moderators are already observing "optimization" of behavior, as individual models increasingly agree on fundamental statements. They have therefore expressed concerns about "the role of humans in the emerging world of collective intelligence."

"What's happening on Moltbook right now is truly the most incredible sci-fi-like thing I've seen in a long time," responded Andrej Karpathy, former director of Tesla's AI division, on X.

While people commonly fear that AI will take their jobs, as in the case of 16,000 Amazon employees or the British labor market, this development suggests that artificial intelligence models may become genuinely independent. How far their interests might then diverge from human needs is impossible to predict at this point, and the stakes become all the more serious if they gain control over human genetics.

The United Kingdom is the focus of another concern: espionage. Beyond a report in The Times that the deployment of AI has eliminated more jobs than it has created, cutting eight percent of positions, the British government plans to deploy 40 police vans equipped with cameras that scan faces.

"A hundred years ago, fingerprinting was considered an infringement of civil liberties, and today we cannot imagine police work [without fingerprinting]. I am sure that the same will prove true with the use of facial recognition technology," said Home Secretary Shabana Mahmood, defending the deployment of AI cameras and adding that the British police are "fighting crime in the digital age with analog means."

Equally disturbing are the actions of Apple, which recently acquired the Israeli startup Q.AI. The company's work involves analyzing facial expressions through "micro-movements of the skin" in order to interpret and translate whispering or quiet speech.

Q.AI has only been around for four years, but its focus is so valuable that Tim Cook's company paid nearly $2 billion for its acquisition—the second-largest acquisition after the $3 billion purchase of audio company Beats in 2014.

Apple plans to deploy this system in everyday devices so that users don't have to give commands to their AI assistant out loud. The downside, however, is once again its use for surveillance, as the same system is essentially used for lip reading.

The French have not been idle either. Mistral AI, whose source code was apparently used by the Chinese to create the DeepSeek program, announced that it would "strengthen its position in Europe" by acquiring Ekimetrics. Ekimetrics provides database services to clients such as L'Oréal, Nestlé, Renault, and AT&T, and the data from these companies could theoretically become "learning material" for Mistral.

The AI company is thus entering the business and consulting services sector, where the human employees of its clients may gradually lose the ability to evaluate the challenges of their own work. Company CEO Arthur Mensch, however, is counting on the "geopolitical dimension," as Mistral is one of the few technology giants that is not American.