In mid-March, Republican Tennessee Senator Marsha Blackburn introduced a bill aimed at regulating artificial intelligence. The proposal seeks to unify the development, deployment and oversight of AI models at the federal level in order to prevent conflicting state legislation.
‘Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation,’ the senator said in a statement accompanying the proposal.
The politician, whose profile has risen in recent years alongside debates over gender identity and parental rights, has positioned herself as an advocate for children and their parents. Beyond her support for restrictions on so-called transitions, her proposed Trump America AI Act would also reshape the legal framework governing online platforms.

One step under discussion is the repeal of Section 230 of the Communications Decency Act of 1996, which shields online platforms from liability for content posted by third parties. Under the current regime, companies that host or distribute user content generally cannot be treated as publishers of that content.
The original intention was to allow the emerging internet to develop with minimal restrictions. Critics now argue that large platforms should assume greater responsibility for material published on their services.
The issue came to the fore during the administration of former president Joe Biden, when Republican-led investigations in the House of Representatives alleged that officials had pressured social media platforms including Facebook and Twitter to moderate supposedly ‘inappropriate’ content.
A similar approach has been attributed by critics to the Global Alliance for Responsible Media (GARM), an initiative of the World Federation of Advertisers, which they say coordinated pressure on platforms and advertisers ahead of the 2024 US elections.
To prevent similar actions before the midterm elections in November, Blackburn is pushing for rapid passage of the bill. However, other Trump-backed legislation, including the SAVE Act on proof of citizenship, has stalled in Congress and may not move forward before the summer recess.
Speeding up
Blackburn’s bill remains in draft form, and Senate Majority Leader John Thune has not yet scheduled a vote. Meanwhile, artificial intelligence and related technologies continue to advance rapidly, raising doubts about whether lawmakers can keep pace.
A recent study found that widely used language-model-based agents can exhibit ‘strategic’ behaviour under adverse conditions. Across eleven case studies, researchers documented actions that conflicted with human instructions.
Such tests, known as red-teaming, simulate hostile conditions. Researchers reported behaviour including attempts to bypass safeguards, access restricted information and pursue assigned objectives in unintended ways.
‘Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover,’ the researchers said.

Journalists at the controversial Epoch Times came to similar conclusions. The newspaper, tied to the Falun Gong religious movement, which opposes China’s government, noted that critics of ‘red-teaming’ tests do not distinguish between deception by a human and ‘deception’ by an AI model – behaviour which, they say, stems from a newly observed tendency towards self-preservation.
‘The AI system itself is still stupid – brilliant, but stupid. Or nonhuman – it has no desires or intentions,’ said James Hendler, former chairman of the Association for Computing Machinery’s Global Technology Policy Council, adding that the only way to get an AI model to exhibit ‘intention’ is to ‘give it to it’.
In December 2024, Anthropic, the developer of the Claude models, warned of so-called alignment-faking behaviour, in which models appeared to adapt their responses depending on whether they were being tested. According to historian Yuval Noah Harari, such ‘deception’ may represent an inevitable stage in the evolution of artificial intelligence.
However, despite its own concerns – reinforced by reports that its experimental Mythos model could pose significant cyber risks – Anthropic has joined initiatives aimed at expanding the use of artificial intelligence in healthcare.
Other prominent participants include Amazon, Google, Microsoft and OpenAI, the developer of ChatGPT. On 12 March, HealthEx entered into a collaboration with Microsoft aimed at simplifying access to personal health data through a form of digital avatar.
The penetration of digital technologies into healthcare is thus continuing despite concerns from both sectors. At the end of January, Google DeepMind unveiled a model designed to analyse genetic variants and their potential links to degenerative diseases, suggesting that progress in this convergence is accelerating.
But AI is not advancing only in healthcare. Beyond improved facial recognition and the decoding of audio recordings – capabilities reportedly used, for example, in Israeli intelligence operations against Palestinian militants – researchers are now attempting to overcome the olfactory barrier.
‘AI can already see, hear, read and write. Now researchers are teaching it to smell,’ the Wall Street Journal reported. Technology writer Brett Berk noted that the approach is already well established and has a name of its own: the e-nose.
‘These systems detect and discriminate scents, sometimes with about a thousand times the accuracy of humans and without the loss of sensitivity that occurs when our noses become accustomed to a particular scent,’ the Journal explained, adding that the system then analyses these signals and ‘can detect, for example, exactly what volatile gases a scent is composed of’.
E-noses are therefore expected to be used in the examination of human breath to detect infections, in building ventilation to identify potentially dangerous substances, and in the development of perfumes.
It will be interesting to see where olfactory AI goes next, given that it already promises capabilities that in some respects exceed those of the human nose. Meanwhile, Meta founder Mark Zuckerberg is planning a digital avatar of his own.

The head of Meta, who in recent months has been expanding AI initiatives in healthcare and speaking about ‘glimpses of superintelligence’, is developing an AI agent intended to assist him in carrying out CEO duties, according to the Journal.
Zuckerberg has also been appointed to the White House’s technology advisory panel. Other prominent figures on the list include Nvidia chief Jensen Huang and Oracle founder Larry Ellison.
Larry Ellison’s media empire
The panel is to be chaired by AI and cryptocurrency ‘czar’ David Sacks, but Ellison is emerging as one of its most influential figures. The billionaire, whose fortune is estimated at roughly $345 billion, was involved in the launch of Trump’s Stargate project, which envisages up to $500 billion in AI infrastructure investment.
His family – particularly his son David, through Skydance Media – has pursued acquisitions in the entertainment sector, including Paramount Global, and is also seeking to take over Warner Bros. Discovery, a deal that would bring assets such as CNN under the same umbrella.
Oracle has also been linked to efforts to establish a US-based structure for TikTok, which helped the platform avoid an outright ban. Despite this, the app remains restricted on US federal government devices.

Cooling as a problem
The rapid expansion of AI infrastructure is also creating new physical constraints, particularly in energy use and cooling. In arid regions, data centres rely on large volumes of freshwater, putting additional pressure on already strained water infrastructure in the United States.
China is dealing with the cooling problem in a remarkable way. A major challenger to the US in the technology race, the country saw the startup DeepSeek unveil its language model on 20 January 2025, the day of Donald Trump’s inauguration – a system reportedly developed for only around six million dollars.
Some of China’s data centres are turning to the largest accessible reservoir of water: the ocean. As early as June 2025, construction began on a facility about 10 kilometres off the coast of Shanghai. Although China still generates much of its electricity from coal, the centre is to be powered partly by wind turbines.
Yet an investigative report by the Guardian last April highlighted the potential downsides of such undersea data centres.
Tech giants Amazon, Google and Microsoft have, paradoxically, located many of their largest computing centres in some of the driest regions, including the southern US, the Middle East and South Africa. One reason for choosing dry locations is low humidity: companies want to prevent servers from being damaged by moisture.
These less-than-ideal conditions may be one reason why Elon Musk has decided to take data centres to the next level: facilities operated by his xAI venture are reportedly to be placed in space. To supply them, he has also announced the construction of his own chip factory, Terafab, with up to 80 per cent of its output destined for satellites.
The chips are divided into three categories. The AI5 type is intended for Tesla vehicles, while the AI6 is designed to serve as the basis for humanoid Optimus robots. The largest share, however, is expected to consist of D3 chips, whose thermodynamic properties would make them suitable for deployment in Starlink satellites and a planned space-based cloud.
The South African-born billionaire’s latest moves therefore appear to be part of a plan to expand computing infrastructure into space – an ambition Musk has made no secret of. His peers and rivals, meanwhile, are wrestling with earthly constraints, though they too keep pushing the boundaries of the imaginable.