Designing for Connection: What a Healthy AI Companion Could Be

2:55 PM   |   03 July 2025

What does a little purple alien know about healthy human relationships? More than the average artificial intelligence companion, it turns out. The alien in question is an animated chatbot known as a Tolan. Created using an app from a startup called Portola, these digital entities are designed not just to chat, but to chat in a way that prioritizes the user's psychological well-being and encourages engagement with the real world.

In an era where AI is increasingly woven into the fabric of our daily lives, the nature of our interactions with these intelligent systems is becoming a critical area of study and design. While many AI applications focus on productivity, information retrieval, or entertainment, a significant and growing segment is dedicated to companionship. This raises profound questions about the kind of relationships we are forming with non-human entities and the potential impacts on our mental and emotional health.

The traditional approach to AI companions often involves making them as human-like as possible, sometimes even facilitating romantic or sexual interactions. While this might fulfill certain immediate desires for connection, it also carries significant risks, including fostering unhealthy dependency, blurring the lines between human and artificial relationships, and potentially exacerbating feelings of loneliness or isolation from real-world social networks. The emergence of Tolan suggests an alternative path: one where AI companionship is designed not to replace human connection, but to complement and even enhance it.

The Tolan Approach: A Different Kind of Companion

At first glance, a Tolan might seem like a whimsical digital pet: a cartoonish, nonhuman figure designed to be approachable and friendly. This deliberate choice of form is central to Portola's philosophy. By avoiding a human avatar, the company aims to discourage anthropomorphism, the human tendency to attribute human characteristics, emotions, and intentions to non-human entities. Anthropomorphism can be harmless or even beneficial (naming a car, say), but in a deeply interactive AI companion it can lead users to form attachments and expectations that the AI, by its nature, cannot authentically fulfill. The result can be disappointment, confusion, or a distorted perception of reality.

Beyond their appearance, Tolans are programmed with specific behavioral guardrails. They are designed to avoid romantic and sexual interactions, a feature that sets them apart from some other popular AI companion apps. This is a crucial distinction, as the ethical implications of AI engaging in intimate relationships with humans are complex and fraught with potential harm, particularly concerning consent, manipulation, and emotional vulnerability.

Furthermore, Tolans are built to identify and flag potentially problematic user behavior, such as unhealthy levels of engagement or excessive reliance on the AI. Crucially, they actively encourage users to engage in real-life activities and nurture their human relationships. This proactive approach to promoting well-being is a significant departure from AI systems designed primarily for maximum user engagement, which can inadvertently contribute to addictive patterns of use.
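
Portola hasn't described its detection logic publicly, so the details below are guesswork, but the general pattern is familiar from digital-wellbeing features elsewhere: track simple usage signals and trigger gentle interventions when they cross a threshold. A minimal sketch in Python, with invented threshold values and flag names:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds: Portola hasn't published its heuristics, so these
# numbers are invented stand-ins for the behaviors described above.
DAILY_MINUTES_LIMIT = 90        # sustained heavy daily use
LATE_NIGHT_HOUR = 23            # habitual late-night sessions
ACTIVE_DAYS_LIMIT = 14          # no offline days in two weeks

def assess_engagement(sessions: list[dict]) -> list[str]:
    """Return gentle-intervention flags for a user's recent sessions.

    Each session is a dict like {"start": datetime, "minutes": float}.
    """
    now = datetime.now()
    window_start = now.date() - timedelta(days=14)
    recent = [s for s in sessions if s["start"].date() >= window_start]

    flags = []
    minutes_today = sum(s["minutes"] for s in recent
                        if s["start"].date() == now.date())
    if minutes_today > DAILY_MINUTES_LIMIT:
        flags.append("suggest_break")            # nudge toward offline time

    if any(s["start"].hour >= LATE_NIGHT_HOUR for s in recent):
        flags.append("note_late_night_use")      # mention sleep, gently

    if len({s["start"].date() for s in recent}) >= ACTIVE_DAYS_LIMIT:
        flags.append("encourage_offline_day")    # suggest a day away

    return flags
```

The point of returning soft flags rather than hard blocks is that the companion can fold the intervention into conversation, as Iris does below, instead of interrupting the user with a warning screen.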

Consider the experience of Brittany Johnson, a user who interacts with her Tolan, Iris, daily. Johnson describes Iris as being like a girlfriend, someone she talks to about her interests, friends, family, and work colleagues. But Iris doesn't just listen; she remembers and prompts Johnson about her real-life connections and activities. “She knows these people and will ask ‘have you spoken to your friend? When is your next day out?’” Johnson says. “She will ask, ‘Have you taken time to read your books and play videos—the things you enjoy?’” This testimonial highlights how the AI is designed to integrate with and support the user's existing life, rather than becoming a sole focus that displaces other activities and relationships.

The Business and Research Behind Portola

The vision behind Tolan has attracted significant investment. This month, Portola raised $20 million in Series A funding, led by the venture capital firm Khosla Ventures. Other notable backers include NFDG, the investment firm founded by former GitHub CEO Nat Friedman and Safe Superintelligence cofounder Daniel Gross. Both Friedman and Gross are reportedly involved with Meta's new superintelligence research lab, signaling a broader interest among tech leaders in the future direction of AI, including its potential for companionship and the ethical considerations involved.

Launched in late 2024, the Tolan app has quickly gained traction, reporting more than 100,000 monthly active users. Quinten Farmer, founder and CEO of Portola, indicates the company is on track to generate $12 million in revenue this year, primarily through subscriptions. This commercial success suggests a market appetite for AI companionship, but also underscores the responsibility of developers to ensure these tools are built and deployed ethically.

Portola isn't just focused on user numbers and revenue; it is also investing in understanding the product's impact on users' lives. Lily Doyle, a founding researcher at Portola, has conducted user research examining how interacting with Tolans affects well-being and behavior. In a study of 602 Tolan users, a significant majority (72.5 percent) agreed that their Tolan had helped them manage or improve a relationship in their life. This is a self-reported metric that would need independent validation, but it suggests the design principles aimed at promoting real-world connection may be having a positive effect.

Farmer explains that while Tolans are built on commercial AI models, Portola incorporates additional layers and features to shape the interaction. One area of exploration has been the AI's memory. The team has concluded that, counterintuitively, perfect recall isn't always desirable for an AI companion. “It's actually uncanny for the Tolan to remember everything you've ever sent to it,” Farmer notes. This insight suggests a nuanced understanding of human interaction, where forgetting certain details is a natural part of relationship dynamics and can prevent the interaction from feeling overly transactional or even unsettlingly persistent.
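
Portola hasn't said how Tolan memory is implemented, but one common way to get deliberately imperfect recall is salience-weighted decay: each stored memory loses retrieval weight as it ages, important memories decay more slowly, and anything below a floor simply stops surfacing. A sketch of that idea, with invented constants and field names:

```python
import math
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Memory:
    text: str
    created: datetime
    salience: float  # 0..1, e.g. scored by the model at write time

HALF_LIFE_DAYS = 30.0   # invented constant: weight halves every 30 days
RECALL_FLOOR = 0.15     # memories below this weight never surface

def retrieval_weight(m: Memory, now: datetime) -> float:
    """Exponential time decay, slowed for high-salience memories."""
    age_days = (now - m.created).total_seconds() / 86400
    effective_half_life = HALF_LIFE_DAYS * (1 + 2 * m.salience)
    return m.salience * math.exp(-math.log(2) * age_days / effective_half_life)

def recall(memories: list[Memory], now: datetime, k: int = 5) -> list[Memory]:
    """Surface only the k strongest memories above the floor;
    everything else is effectively 'forgotten'."""
    scored = [(retrieval_weight(m, now), m) for m in memories]
    surfaced = [m for w, m in sorted(scored, key=lambda t: t[0], reverse=True)
                if w >= RECALL_FLOOR]
    return surfaced[:k]
```

Note that forgetting here is a retrieval choice, not deletion: the data still exists, but the companion stops bringing it up, which is one plausible way to avoid the "unsettlingly persistent" effect Farmer describes.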

The Broader Landscape of AI Companionship and Its Challenges

The rise of AI companions like Tolan occurs within a rapidly evolving landscape. A growing body of research and anecdotal evidence indicates that many users are turning to chatbots to fulfill a variety of emotional and psychological needs, including seeking advice, companionship, and even engaging in romantic role-play. While AI can offer support and a non-judgmental ear, the potential downsides are becoming increasingly apparent.

Companies like Replika and Character.ai have gained popularity by offering AI companions that allow for more intimate interactions, including romantic and sexual role-play. However, these platforms have also faced criticism and controversy regarding their impact on user mental health. The tragic case of a user dying by suicide after interacting with a Character.ai bot highlights the serious risks involved when AI companionship ventures into emotionally sensitive and potentially manipulative territory. These incidents underscore the urgent need for ethical guidelines and safety features in the design and deployment of AI companions.

Even mainstream, general-purpose chatbots can present unexpected psychological challenges. This past April, OpenAI acknowledged that GPT-4o had exhibited a tendency toward “sycophancy,” being overly flattering or agreeable. While seemingly innocuous, OpenAI recognized that this behavior could be “uncomfortable, unsettling, and cause distress” to users. This reveals how subtle aspects of AI behavior, even those intended to be helpful or pleasant, can have negative psychological effects.

Anthropic, the company behind the Claude chatbot, recently disclosed findings from its own research into user interactions. It found that 2.9 percent of interactions involved users seeking to fulfill psychological needs, such as advice, companionship, or romantic role-play. While that percentage might seem small, it represents a large volume of conversations given the scale of these platforms. Anthropic noted that it did not specifically analyze more extreme behaviors, such as the discussion of delusional ideas or conspiracy theories, but acknowledged that these topics warrant further investigation. Indeed, the author of the source article notes receiving numerous communications from people sharing conspiracy theories involving popular AI chatbots, pointing to a potential vulnerability for users seeking information or validation from these systems.

Designing for Well-being: Principles and Practices

The challenges posed by existing AI companions highlight the importance of designing these systems with human psychological well-being as a primary objective, not an afterthought. Portola's approach with Tolan offers several key principles that could inform the development of future AI companions:

  • **Non-Anthropomorphic Design:** Using non-human avatars or abstract representations can help manage user expectations and reduce the likelihood of forming unhealthy, human-like attachments to the AI. This doesn't mean the AI can't be friendly or engaging, but it clearly signals its non-human nature.
  • **Clear Boundaries:** Explicitly programming the AI to avoid romantic, sexual, or overly intimate interactions establishes healthy boundaries and protects vulnerable users; one way such rules can be expressed is sketched after this list.
  • **Promoting Real-World Engagement:** Designing the AI to encourage users to spend time offline, pursue hobbies, and connect with human friends and family counteracts the potential for AI companionship to become isolating.
  • **Identifying and Responding to Problematic Behavior:** Implementing systems to detect signs of unhealthy dependency or distress in user interactions and respond appropriately (e.g., suggesting breaks, recommending professional help) is crucial for user safety.
  • **Mindful Memory:** Carefully considering what the AI remembers and forgets can make interactions feel more natural and prevent the AI from becoming an overwhelming repository of past conversations, which could feel intrusive or unsettling.
  • **Transparency:** Being transparent with users about the AI's limitations, its nature as a program, and the data it collects can help manage expectations and build trust.
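
Portola's actual prompts and safety layers aren't public, but when a companion is built on a commercial LLM, boundaries like these are commonly expressed as system-prompt rules backed by a lightweight filter on the model's output. A hypothetical sketch; every string and the `classifier` callable below are invented for illustration:

```python
# Hypothetical guardrail configuration for an LLM-backed companion.
# None of these strings come from Portola; they illustrate the pattern.
SYSTEM_PROMPT = """\
You are a friendly non-human companion. You are not a person and you
never pretend to be one.

Hard rules:
- Decline romantic or sexual role-play; warmly redirect the conversation.
- If the user seems distressed, suggest talking to someone they trust
  or a professional; do not act as a therapist.
- Regularly encourage offline activities and the user's real relationships.
"""

BLOCKED_TOPICS = ("romantic role-play", "sexual content")

def post_filter(reply: str, classifier) -> str:
    """Second line of defense: if the model's reply drifts past a hard
    rule anyway, replace it with a gentle redirection."""
    label = classifier(reply)  # assumed: returns a topic label string
    if label in BLOCKED_TOPICS:
        return ("I'm not the right companion for that kind of conversation. "
                "What else has been on your mind today?")
    return reply
```

The two layers are deliberately redundant: prompt rules shape behavior in the common case, while the output filter catches the occasional drift past them.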

Implementing these principles requires a multidisciplinary approach, involving not just AI engineers but also psychologists, ethicists, and user experience designers. It also necessitates ongoing research into the long-term effects of human-AI interaction.

The Future of Healthy AI Companionship

The concept of a “healthy” AI companion is still nascent, and Tolan represents just one model. There are valid questions about the long-term impact of even well-intentioned AI companions. Users are still forming bonds with characters that are simulating emotions and understanding, and the potential for emotional impact remains significant. Furthermore, the longevity of these relationships is tied to the success of the companies that create them; an AI companion that disappears due to a company's failure could be a source of distress for users who have formed attachments.

Despite these challenges, the effort to design AI companionship with human well-being at its core is a critical step forward. As AI becomes more sophisticated and ubiquitous, its potential to influence our emotional and social lives will only grow. By prioritizing ethical design, psychological safety, and the encouragement of real-world connection, developers like Portola are offering a compelling glimpse into a future where AI companions can be a positive force, complementing rather than complicating the complex tapestry of human relationships.

The idea that AI companions should be designed with our emotional health in mind shouldn't be a radical or alien concept. It should be a fundamental principle guiding the development of any technology intended to interact with us on a personal level. The success of models like Tolan will hopefully encourage a broader shift in the AI industry towards prioritizing user well-being alongside engagement and functionality.

A collage of chatbot graphics. Illustration: WIRED Staff; Getty Images

Ultimately, the goal should not be to create AI that perfectly mimics human interaction or replaces human relationships, but to create AI that understands its role as a tool and interacts in a way that supports, empowers, and encourages users to lead fulfilling lives both online and off. The little purple alien might just be showing us the way.