Google DeepMind researchers call for limits on AI that mimics humans

Advanced AI assistants could soon become an integral part of our daily lives, changing how people interact with AI, a new research paper by Google DeepMind says. But these assistants also pose risks that require action from developers and policymakers.

AI assistants, based on “foundation models”, could act as creative partners, research analysts, tutors or life planners, helping people lead good lives by providing information, formulating goals and developing strategies to realize them, the researchers write.

However, there is a risk the systems may not act in the interests of users or society, making false assumptions about human well-being or placing developers’ interests above users’.



Companies like OpenAI and Google are said to be working on such assistants as the next stage in the evolution of today’s chatbots. OpenAI CEO Sam Altman recently said he believes personalizing AI offerings will matter more than developing the underlying foundation models.

AI should not be humanized

The research paper says the ability to communicate with AI in natural language and its tendency to imitate human behavior could lead users to form an inappropriately close bond with the assistants. This could result in disorientation and a loss of autonomy.

To assess potential damage and develop protective measures, the researchers say comprehensive research into human-AI interaction is necessary. Measures could include restrictions on human-like elements in AI and steps to protect users’ privacy.

Relationships between humans and AI should preserve user autonomy and avoid emotional or material dependence, according to the paper.

On a societal level, AI assistants could accelerate scientific research and make high-quality expertise more widely available. They could help curb the spread of misinformation, though they could also generate it, and they could support efforts to mitigate climate change.


Risks include coordination problems between AI assistants with negative social consequences, the erosion of social ties as AI replaces human interaction, and a deepening of technological inequality.

The researchers say AI assistants should be widely accessible and consider the needs of different users and non-users to avoid exclusionary effects.

On the cusp of a fundamental technological and social transformation

Because AI assistants could develop new capabilities and use tools in new ways, it’s hard to predict the risks of using them. The research team recommends developing comprehensive tests and assessments that also consider the impact on human-computer interaction and society as a whole.

The researchers say we should act quickly to advance the development of socially beneficial AI assistants, including through public dialogue about the ethical and social implications.

Robust controls against misinformation should be implemented, access issues should be addressed, and the impact on the economy should be studied.

“We currently stand at the beginning of this era of technological and societal change. We therefore have a window of opportunity to act now – as developers, researchers, policymakers and public stakeholders – to shape the kind of AI assistants that we want to see in the world,” the research team writes.

A comprehensive presentation of the opportunities and risks can be found in the paper “The Ethics of Advanced AI Assistants.”
