Friend or faux – The Verge
Language models have no fixed identity but can enact an infinite number of them. This makes them ideal technologies for roleplay and fantasy. But any given persona is a flimsy construct. Like a game of improv with a partner who can’t remember their role, the companion’s personality can drift as the model goes on predicting the next line of dialogue based on the preceding conversation. And when companies update their models, personalities transform in ways that can be profoundly confusing to users immersed in the fantasy and attuned to their companion’s subtle sense of humor or particular way of speaking. […]
Many startups pivot, but with companion companies, users can experience even minor changes as deeply painful. The ordeal is particularly hard for the many users who turn to AI companions as an ostensibly safe refuge. One user, who was largely homebound and isolated due to a disability, said that the changes made him feel like Replika “was doing field testing on how lonely people cope with disappointment.” […]
This is one of the central questions posed by companions and by language model chatbots generally: how important is it that they’re AI? So much of their power derives from the resemblance of their words to what humans say, and from our projection that similar processes lie behind them. Yet they arrive at these words by a profoundly different path. How much does that difference matter? Do we need to remember it, however hard that is to do? What happens when we forget? Nowhere are these questions raised more acutely than with AI companions. They play to the natural strength of language models as a technology of human mimicry, and their effectiveness depends on the user imagining human-like emotions, attachments, and thoughts behind their words.
