As a futurist and founder of the Acceleration Studies Foundation, Smart uses many names for the technology he predicts — digital twin, cyber-self, personal agent — but the concept stays the same: a computer-based version of you.
Using various strategies for gathering and organizing your data, digital twins will mirror people’s interests and values. They’ll “input user writings and archived email, realtime wearable smartphones (lifelogs), and verbal feedback to allow increasingly intelligent and productive guidance of the user’s purchases, learning, communication, feedback, and even voting activities,” Smart writes. They’ll displace much of today’s information overload from regular people to their cyber-selves.
And one day, Smart theorizes, these digital twins will hold conversations and have faces that mimic human emotion. “They will become increasingly like us and extensions of us,” Smart says.
The concept might sound far-fetched. But consider that people often turn to a deceased friend or family member’s Facebook wall to grieve. People already form relationships with each other’s online presences. As computer science advances, those connections will only deepen, even with identities that aren’t real people.
“Where we’re headed is creating this world in which you feel you have this thing out there looking after your values,” Smart says.
For digital twins to reach their full potential, however, they require two important developments: “good conversational interfaces and semantic maps,” Smart explains.
Conversational Interfaces (CI)
Ron Kaplan, a data scientist in Silicon Valley, made the case for CI in Wired last year. By his count, simply booking a flight can require 18 different clicks or taps across 10 different screens. “What we need to do now is be able to talk with our devices,” he wrote.
Smart couldn’t agree more. “With technology, we want things that enable us to use as much of our brains as possible at one time,” he adds.
When you and I die, our kids aren’t going to go to our tombstones, they’re going to fire up our digital twins and talk to them.
For example, with a single, spoken sentence, you could tell your personal agent you feel sick. It could reference your calendar or emails to determine when to make a doctor’s appointment. And when you arrive, you might not even need to fill out forms. Your personal agent would have looked at your hospital records and healthcare information for you — and then later relayed the outcome of any tests taken during your visit.
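The scheduling step Smart describes can be sketched as a simple calendar scan. The dates, the busy-day set, and the free-slot rule below are invented for illustration; no real agent exposes this interface.

```python
# Hypothetical sketch of one step the agent performs: scanning a calendar
# for the first free weekday on which to book a doctor's appointment.
from datetime import date, timedelta

# Days already committed in the user's calendar (invented data).
busy = {date(2030, 3, 4), date(2030, 3, 5)}

def first_free_day(start, busy, horizon=14):
    """Return the first weekday within `horizon` days that is not busy."""
    for offset in range(horizon):
        day = start + timedelta(days=offset)
        if day.weekday() < 5 and day not in busy:  # Mon-Fri only
            return day
    return None

print(first_free_day(date(2030, 3, 4), busy))  # 2030-03-06
```

A real agent would layer this kind of constraint search on top of email, calendar, and medical-record access, which is exactly where the permissions question below comes in.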
While no company boasts such comprehensive abilities yet, many have started to implement similar technologies. Right now, Apple has Siri. Microsoft has Cortana. And in the summer of 2014, a program named “Eugene Goostman,” imitating a Ukrainian teen, passed the Turing Test (with some healthy skepticism).
Smart, however, places great emphasis on an earlier cognitive machine: IBM’s Watson, which the company claims “literally gets smarter.” Watson’s Jeopardy performance against champion Ken Jennings convinced many skeptics that capable conversational interfaces were finally emerging.
Vocal technologies like Siri, Cortana, and Watson already rely on semantic maps, tools that represent relationships in data, especially language. And companies constantly improve them. For example, a late 2013 Google update brought pronouns to the table — and Smart’s wife, for one, quickly noticed a difference.
Walking in downtown Mountain View, she pulled out her phone and, as a test, asked Google, “Who is the President of the United States?” Naturally, her phone responded: “Barack Obama.”
Next, Smart’s wife inquired: “Who is his wife?”
Phone: “Michelle Obama.”
Smart’s wife: “Where was she born?”
Phone: “Chicago, Illinois.”
Not only did Smart’s wife hold a conversation with her phone; the phone also understood words like “his” and “she,” pronouns that refer back to an antecedent earlier in the conversation. “Now, you don’t have to specify every little detail,” Smart explains. “Because the computer has some memory of previous exchanges and uses that as context.”
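The context memory Smart describes can be sketched in a few lines: the assistant remembers the last entity it mentioned and resolves pronouns against it. The tiny knowledge base and class below are invented for illustration, not any real assistant's internals.

```python
# Toy knowledge base: (entity, relation) -> answer. Invented for illustration.
KNOWLEDGE = {
    ("United States", "president"): "Barack Obama",
    ("Barack Obama", "wife"): "Michelle Obama",
    ("Michelle Obama", "birthplace"): "Chicago, Illinois",
}

class Conversation:
    """Keeps the last-mentioned entity so pronouns can resolve against it."""

    def __init__(self):
        self.focus = None  # antecedent for "he"/"she"/"his"/"her"

    def ask(self, subject, relation):
        # A pronoun refers back to the entity from the previous exchange.
        if subject in {"he", "she", "his", "her"}:
            subject = self.focus
        answer = KNOWLEDGE[(subject, relation)]
        self.focus = answer  # the answer becomes the new conversational focus
        return answer

chat = Conversation()
print(chat.ask("United States", "president"))  # Barack Obama
print(chat.ask("his", "wife"))                 # Michelle Obama
print(chat.ask("she", "birthplace"))           # Chicago, Illinois
```

Real systems do this with statistical coreference resolution over much richer semantic maps, but the principle is the same: carrying state between exchanges is what turns a query engine into a conversation.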
Once we create “decent maps of human emotion,” Smart adds, digital twins will even have faces to help them communicate. They’ll smile or furrow their brows to show whether they understand or not.
“But the next step is something I call a ‘valuecosm,’” Smart explains.
A valuecosm doesn’t just analyze all your emails and build a record of your interests and values. It lets a personal agent act in your stead based on that information.
[Image: ChinaFotoPress/Getty Images] Your digital twin can help you choose products in line with your values.
“You’re reaching for a can of tuna at a grocery store in 2030,” Smart envisions. “And your bracelet gives a green arrow to move your hand a few inches to the left, from Bumble Bee to Chicken of the Sea or whatever.”
You’d previously told your personal agent to watch for foods with high mercury levels and companies that overfish the oceans. So this wearable, imprinted with a digital version of your values, points you toward the product that matches them.
“And then, back in your car, your digital twin directs you to the gas station that’s most in line with your environmental values,” Smart adds. A valuecosm not only uses information in a human way, it’s flexible, too. You can review your settings and change them manually.
“You’ll be having a conversation with your [personal] agent, and you say, ‘I want more of this, or this plus something else,’” Smart explains. “You know, I care more about social justice, so make that area bigger.”
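Smart's valuecosm can be sketched as user values stored as adjustable weights, with products scored against them. Every attribute name and number below is invented for illustration; this is not how any shipping product works.

```python
# Hypothetical "valuecosm": the user's values are weights, and candidate
# products carry scores for how well they satisfy each value (invented data).
values = {"low_mercury": 1.0, "sustainable_fishing": 1.0, "social_justice": 0.5}

products = {
    "Bumble Bee":         {"low_mercury": 0.3, "sustainable_fishing": 0.4, "social_justice": 0.5},
    "Chicken of the Sea": {"low_mercury": 0.8, "sustainable_fishing": 0.7, "social_justice": 0.5},
}

def score(attrs, weights):
    # Weighted sum: how well one product matches the user's stated values.
    return sum(weights.get(key, 0) * value for key, value in attrs.items())

def recommend(products, weights):
    # The green arrow on the bracelet: point at the best-scoring product.
    return max(products, key=lambda name: score(products[name], weights))

print(recommend(products, values))  # Chicken of the Sea

# "Make that area bigger": the user manually re-weights one value.
values["social_justice"] = 2.0
```

The last line is the flexibility Smart mentions: because the values are explicit data rather than buried heuristics, the user can inspect and rescale them in plain conversation.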
To be most usable and effective, though, your digital twin will have to pull your information from various sources, with your permission, rather than push its functions onto you.
“People who have started using Google alerts, they’ve moved themselves toward a more pull-based view of the internet,” Smart says.
The future that we care about is control of an algorithmic interface of your identity.
For the rest of this post read: http://www.businessinsider.com/within-5-years-digital-twins-could-start-making-decisions-for-us-2014-9#ixzz3DntCF88X