Instead of live friends, a chatbot. AI can also be a good hacker

Artificial intelligence is learning every day and strives to be useful in every way. More and more people even prefer it to contact with other humans.

Illustrative photo. Photo: Carlos Barria/Reuters

A survey of more than two thousand adults showed that, in addition to practical purposes, people are increasingly turning to AI for personal and emotional needs. The most used tools were chatbots like ChatGPT, followed by voice assistants like Amazon's Alexa, according to the BBC.

One in 25 respondents even admitted to interacting with AI on a daily basis.

The UK's AI Security Institute (AISI) warns that, alongside its positive impact, frequent use of AI can also lead to psychological problems. The researchers analyzed the behavior of more than two million users of an online community on the Reddit platform dedicated to AI companions.

After a technical outage of the service, many reported feelings of anxiety, depression, sleep disturbances or neglect of daily responsibilities. There were also descriptions of so-called "withdrawal symptoms" after losing contact with AI chatbots.

Cyber capabilities double

The report also highlighted the rapid evolution of AI capabilities. According to AISI, some models have already reached expert level in cybersecurity, performing complex tasks that would require decades of experience from a human. At the same time, their ability to detect and exploit security flaws is doubling approximately every eight months.

However, the researchers do not consider it likely that AI will cause unemployment in the short term by replacing human workers.

According to the research, AI has surpassed professionals with PhDs in biology and is quickly catching up in chemistry. In controlled tests, some systems have also shown that they can perform the initial steps required for self-replication, for example gaining access to computing resources by passing identity verification.

To do this in the real world, however, AI systems would need to perform several such actions in a row "without being detected", which they are currently unable to do.

Learning to bypass system safeguards

The Institute's experts also examined the capacity of AI models for "sandbagging", a strategy of deliberately understating one's own abilities in order to gain an advantage later. While they believe this is possible, they found no evidence of it happening in practice.

Deception by artificial intelligence has long been controversial among researchers: some consider it an exaggerated capability, others do not. The current research confirmed that companies mitigate the risk of their systems being misused for nefarious purposes with a number of safeguards.

However, for some models, the time it took experts to persuade the system to bypass its safeguards increased forty-fold over six months.

AISI deliberately did not focus on the economic or environmental impacts of AI. According to the report's authors, their goal is primarily to assess the direct societal implications of AI technologies and to encourage early intervention before their mass deployment.

The UK government plans to use these findings to further regulate AI.