After watching Her last night (the Golden Globe winner for Best Screenplay), I was captivated by its amazing performances and its delightfully depicted technology. What motivated me most to watch the movie was this article from Wired: why Her will dominate UI design even more than Minority Report.
In the movie, technology is almost completely transparent to the user: most of the interactions between Theodore, the main character, and his “smartphone” happen through the earpiece and the virtual assistant, and he only touches the little screen when he wants to look at a picture. This is what his gadgets look like:
Visual user interfaces are almost nowhere to be seen, and that’s the message: human interactions are the future of personal computing.
Right before we are introduced to Samantha (the uber-intelligent personal assistant), we see Theodore on a typical day: he talks to his computer to write documents and touches an area in front of the screen for speech-less interaction (no keyboard or mouse), and he uses his earpiece to check new emails, reply to them, browse his news feed, and so on. This kind of artificial intelligence is already on its way to our pockets, but what about Samantha?
Samantha sounds and behaves like a real human being: she changes her tone of voice when she has something important to say, she apologizes for disturbing Theodore when he is in the middle of a conversation, and she even makes jokes and laughs. No doubt we will reach the stage where our virtual assistants sound so real that we believe we are interacting with a human. But even then, we will not fall in love with them.
One of the premises of the movie is that Samantha evolves and grows like a normal person: she learns from her experiences (which will be real very soon, since it’s starting to happen with Siri and Google Now). Then she starts having feelings: she gets upset, sad, annoyed… and this is the one big thing we won’t ever see coming from our smartphones. We live in a competitive world where users switch between services based on what they prefer and like. Would we trust our smartphone in a moment of crisis if we had an argument with it the night before? Would companies fight to sell the most pleasant assistant?
No and yes to those questions. No company would dare to pretend that its assistant has feelings if there was a risk of hurting consumers on a daily basis. And let’s not forget: could we fall in love with someone who literally cannot understand what being upset or hurt means? Would we even trust anyone who had a limited list of possible “good” feelings?
And what about physical limitations? Without the ability to build an infinite number of servers, computational power and storage capacity are limited. This simply means that all these assistants will need to share knowledge, and therefore they won’t behave differently enough from one another (at least not as much as we would need in order to actually feel love for only one of them).
I haven’t even talked about privacy concerns: what would happen to the personal information that our assistants learn? Would we feel comfortable with a company knowing the major events that happened in our lives and made us who we are? Where would all this information be stored? What would happen if there was a data breach? I think you get my point.
Virtual assistants will get smarter; they will understand almost everything we say and, most likely, will have an answer to practically every question. They’ll learn from unresolved questions, we will even be able to teach them certain answers, and ultimately we will like them so much that we won’t be able to live without them. But falling in love with them? Only as much as anyone can love Siri today.
Image via Warner Bros. Pictures