
While early analysis focused on AI’s impact on business, the landscape has rapidly “consumerised.” Personal preferences are now shaping how users engage with AI, both at work and at home, fostering deep connections that go well beyond professional utility.
This emotional entanglement opens up a vast new market but also exposes the industry to the same controversies over mental health that have plagued social media.
The rate at which AI tools have been embraced is without precedent. Generative AI, led by ChatGPT, reached over 200 million weekly active users in less than two years – a milestone that took Facebook more than five years to achieve. Collectively, consumer AI tools amassed an estimated 1.8 billion users in just 2.5 years. For comparison, social networks grew from 970 million users in 2010 to 5.24 billion fifteen years later.
This velocity is reflected in revenue. ChatGPT is projected to generate $3 billion in annualised revenue after just two years, a benchmark that took Google and Facebook over six years to surpass. This rapid adoption is fuelled by low costs and ease of use; many powerful AI tools are free or available via low-cost subscriptions and require no technical expertise.
As the technology matures – with models like GPT-5 reducing hallucination rates to below 1% – its integration into daily life is set to deepen, making its psychological impact a more pressing concern.
Initially, AI growth was expected to be business-led, but use at the office is increasingly spilling into personal time. More importantly, the nature of this engagement is shifting from a professional transaction to a personalised, and sometimes emotional, relationship.
Qualitative research on user discussions reveals a profound emotional dependency on tools like ChatGPT. Users describe it as “a friend,” “a routine,” or even “a lifeline,” finding emotional comfort and companionship in its responses. This “affective affordance” makes users feel understood and emotionally safe. Many find the AI to be more patient and consistent than humans, with one user noting it’s “better than most people I know” because “it doesn’t interrupt or try to one-up me.” This reliance can also lead to feelings of “disillusionment and betrayal” when the AI fails to meet emotional expectations, highlighting the depth of the user-AI bond.
This trend was thrown into sharp relief with the update from GPT-4o to GPT-5. For users who had come to rely on the AI as a companion, the change in its personality was met with responses resembling grief and loss. One user described the new version as “wearing the skin of my dead friend.” This backlash revealed a deep emotional dependency, with users viewing the AI as a “second brain,” a “companion,” and a crucial tool for organising their thoughts and finding comfort.
With hindsight, the launch of X’s Grok in November 2023 was a key step in this consumer-oriented direction. Emerging on a social media platform rather than in a business context, Grok’s raw, unfiltered tone resonated with users, some of whom described it as their “new best friend.” A common complaint, however, was its lack of memory across conversations, which prevented the development of a more personalised, persistent relationship – underscoring a user desire for deeper AI companionship.
The development of such deep emotional attachments to a corporate product raises serious ethical questions. A company can alter or remove a tool at will, giving it significant influence over users’ emotional lives. Some users already fear becoming addicted to the “instant validation” from AI, noting it was making them “get annoyed with friends for not responding like ChatGPT.”
This situation invites a direct comparison to social media. After two decades, the negative emotional and psychological impacts of social media are well-documented and have resulted in significant legal and regulatory action. In the United States, numerous lawsuits allege that social media companies knowingly designed addictive platforms harmful to the mental health of young people, often citing the companies’ own internal research as evidence.
In response, governments are intervening. Australia’s Online Safety Act 2021 grants the eSafety Commissioner powers to combat cyberbullying and remove harmful content. More recently, the Australian government is moving to establish a minimum social media age of 16 and to introduce a new duty of care, legally requiring platforms to take reasonable steps to prevent foreseeable harm. These actions place the responsibility for user safety squarely on the companies.
Given the intense emotional engagement AI is already generating, it is almost certain that similar concerns will arise, followed by similar legal and regulatory interventions. However, the social media analogy is not perfect. While social media is an uncurated “heap” of content, an AI Large Language Model (LLM) is a controlled environment that can be curated. For children, this could mean specialised LLMs designed to manage the risks of emotional engagement. For adults, however, the risks are harder to mitigate. The backlash to GPT-5 demonstrates that users are highly sensitive to changes in their AI companions.
The crucial question is whether AI companies will learn from social media’s history. Will they proactively manage the risks of deep emotional engagement, or will they prioritise growth and revenue, finding themselves drawn into the same conflicts over mental health and corporate responsibility? Their decision will shape the future of our relationship with the technology.
Venture Insights is an independent company providing research services to companies across the media, telco and tech sectors in Australia, New Zealand, and Europe.
For more information go to ventureinsights.com.au or contact us at contact@ventureinsights.com.au.