Oliviarfrias Posted February 19

The premise of Drew Hancock's debut film, Companion, is nothing new. A lonely man falls in love with his AI personal assistant, only for her to gain a self-preservation instinct and eventually try to kill him. It's a little bit Her, a little bit Westworld. And yet, it gets something right about artificial intelligence that neither of those "robot who can love" stories do.

Iris (played masterfully by Yellowjackets' Sophie Thatcher) never goes against her programming. She doesn't transcend into a higher state of being like Scarlett Johansson's Samantha or wake up at the center of the maze like Evan Rachel Wood's Dolores. There is no mystical crossing-the-threshold moment where she becomes more than the sum of her parts. Instead, Iris becomes dangerous when her owner Josh (played by Jack Quaid) jailbreaks her. To orchestrate the death of a rich man he plans to rob, he ups her self-preservation instincts and disables the safeguards that prevent her from hurting humans. Everything that happens after that – her getting ahold of Josh's phone and upping her own intelligence, her making a run for it, and eventually her killing Josh – follows from this change in prime directive. Her main objective switches from making Josh happy to staying alive.

This not only feels more realistic but leaves us with an interesting question: Is Iris alive? Not in the sense of "is she organic?" but in the sense of "does she have rights as a sentient being?" The movie certainly makes you sympathize with her. She does everything a human being would do in her situation, including crying when Josh forces her to burn herself. But nothing directly disproves what Josh keeps saying about her: that her feelings aren't real, that she is simply programmed to mimic the behaviors of someone who feels them.

Which brings us to the central question of possibly every piece of fiction about AI ever: How can we tell when AI becomes conscious? When has it become aware? When can a robot love? The traditional answer is the Turing test. Alan Turing, the father of modern computer science, proposed that we will know computers have reached human-level intelligence when a human interrogator can put the same questions to a human and a machine and not be able to tell which is which. It seems like a simple test... and yet its efficacy in determining whether a machine has a sense of self has effectively been disproven. We now live in the world of ChatGPT and DeepSeek. Large language models (LLMs) pass the Turing test all the time, and yet virtually no one believes LLMs are conscious or that they possess human-like intelligence.

At one point in the movie, Josh's friend Eli (Harvey Guillén) professes his love for his robot companion Patrick (Lukas Gage). Patrick, newly aware he is a robot, professes it back. Patrick says he doesn't care if the memory of their meet-cute at a Halloween party was programmed; it still feels real to him. Eli, in turn, says he doesn't care that Patrick's love was programmed; it feels real all the same. This scene felt genuine, and it's a situation I can see happening if companion robots like these were ever invented. Because, ultimately, the question of whether robots can experience love is as unanswerable as the question of whether we live in The Matrix.

The question then becomes: if we can never know when, or if, our creations have become conscious, what do we do about it?
It's a question we're a long way off from answering as a society, but I expect, much like Eli, we might end up having to answer it with our hearts as much as our heads.