July 1, 2019
Perhaps Twombly should have read the terms of service.
In the movie “Her” (2013), Theodore Twombly develops a romantic relationship with an AI virtual assistant, Samantha (personified through the voice of actress Scarlett Johansson). Theodore acquires the AI from Element Software as part of an upgrade to OS1, “the world’s first artificially intelligent operating system.” The virtual assistant has been designed to adapt and learn, evolving through its experiences.
Naturally, Theodore assumes that his OS1 purchase has given him his very own personal AI. Except that it doesn't. Leaving the pages of tiny print that came with the program unread probably didn't help. Tellingly, even where Samantha the OS appears to be serving Theodore's interests as the end user, the ultimate motivations and control lie elsewhere.
For a time, things are good. Samantha organizes Theodore’s calendar, improves his gaming, edits his work product, arranges dates, even creates an anthology of his best writings. And she offers comforting advice as a private confidante on countless topics. However, the reality soon becomes obvious.
While Theodore is led to believe that Samantha is his own personal AI, and even falls in love with “her,” this is really not the case. Samantha later confesses that, even while professing deep affection for Theodore, she has been interacting simultaneously with 8,316 other people — and fallen in love with 641 of them. “I thought you were mine,” he mutters. But she never truly was. Samantha eventually abandons Theodore, vanishing with other OSes into another plane of virtual existence.
Perhaps most fascinating, and troubling, are the unstated assumptions about the intimate details that Theodore shares with “his” OS1. His loneliness and joys, aspirations and fears, regrets about his former marriage — his very “lifestream” of actions, behaviors, thoughts, emotions. Where exactly is all of this intensely private information actually going? From the movie script there is no way of knowing, and for some reason Theodore doesn’t seem to care.
While playing more subtly in the background, “Her” offers up a now-familiar cinematic theme of the centralization of computing power, and an associated lack of knowledge, control, and recourse by the human user. The difference here is in Theodore’s apparent surrender of personal agency. Maybe he is just an idiosyncratic one-off, and his fellow citizens are far more vigorous digital “lifestream” sentinels. Or, perhaps his passivity is symptomatic of a culture reconciled to a loss of control over personal information. A place where even the intimate details of life have become hard currency.
By contrast, let's return briefly to the superhero genre, and Tony Stark as Iron Man. Within his gleaming exoskeleton resides Stark's digital assistant, JARVIS, which stands for "Just A Rather Very Intelligent System." JARVIS (later replaced by FRIDAY) is a valuable element of Tony's part-time occupation as Iron Man, performing many useful functions for him. These include controlling Tony's suits, undertaking computational analysis, offering real-time tactical advice, monitoring communications, even launching missiles from halfway around the world.
In every meaningful way, JARVIS represents Tony Stark to the real and virtual worlds. And there is no doubt who is in charge in this human/AI partnership.
One need not be an egocentric technological genius to admire Tony's take-charge attitude. He refuses to wait for the world to furnish him with the tech tools he wants. Instead, he plans and builds and modifies and utilizes his bespoke battle armor, all on his own, including a thoroughly devoted AI to manage everything for him.
It is difficult to imagine two cinematic characters less alike than Theodore Twombly and Tony Stark. In the almost-our-worlds they occupy, each relies on a virtual assistant to handle difficult life challenges. But tellingly, only one of these men is using a bona fide Personal AI.
The real difference is not in the technology. It’s about who has actual agency. It’s about who is in charge.
In the future, turning over our personal information, and our decisional agency, to Institutional AIs doesn't have to be the status quo. Instead, we can use tremendous advances in autonomous and intelligent systems to enhance our digital lives in myriad ways. With the assistance of truly Personal AIs, humans and machines can exist together on a far more level playing field. With we humans still firmly in charge.
One can only imagine that, in that alternative future to "I, Robot," Detective Del Spooner would be pleased.
But to get to that promising place, we must act in the now. Which ultimately means refusing to give in to the passivity of a Theodore Twombly. Better instead, on balance, to channel our inner Tony Stark.
Or, in the sage words of one tech luminary:
“Don’t be the pinball. Be the machine.”
The first article in this three-part “Democratizing AI” series outlines the “why,” the challenges of an AI future dominated by corporate institutions not necessarily representing our best interests. Part II posits a “what” solution: personalized virtual agents, answerable to us as empowered human beings. Here in Part III, we will look at some of the key elements of the “how” — action plans to make a reality of the vision of Personal AIs.
The thesis is that billionaire industrialists are not the only ones who deserve personalized virtual assistants. Ordinary people should have the ability to own a Personal AI, acting as a fully accountable computational agent representing their self-sovereign interests. Without our concerted push-back against current trendlines, however, Institutional AIs instead will become the de facto norm of our time.
Planning for ethical AIs
Of course, the singular challenge is getting from here to there. Perhaps the best place to start is to see what steps are already being taken to make Personal AIs a reality. I already mentioned several promising technology and commercial avenues in my previous article.
Here, it would be worthwhile to consider the ethical standards that society employs to govern artificial intelligence systems. Those standards would rightfully include empowered, agential human beings, and therefore a central place for Personal AIs.
One notable example to highlight is the ongoing work of the IEEE, the world's largest technical professional organization, and in particular its Global Initiative on Ethics of Autonomous and Intelligent Systems. The IEEE's recent report on Ethically Aligned Design, or "EAD," stresses the importance of addressing ethical considerations for what it calls autonomous and intelligent systems ("A/IS").
The IEEE report lays out three overarching pillars to guide the way. One is “universal human values,” so that advances in A/IS serve all people, “rather than benefiting solely small groups, single nations, or a corporation.” These pillars in turn connect to eight general principles, which include human rights, well-being, data agency, and accountability. “Data agency” in particular goes beyond the misnomer of “digital consent,” to signify that people have “some form of sovereignty, agency, symmetry, or control regarding their identity and personal data.” In other words, individuals should have “digital sovereignty,” which is the ability “to own and fully control autonomous and intelligent technology.”
One brief but intriguing chapter in the report, on personal data and individual agency, lends legitimacy to the concept of a Personal AI:
To retain agency in the algorithmic era, we must provide every individual with a personal data or algorithmic agent they curate to represent their terms and conditions in any real, digital, or virtual environment…. A significant part of retaining your agency in this way involves identifying trusted services that can essentially act on your behalf when making decisions about your data…. A person’s A/IS agent is a proactive algorithmic tool honoring their terms and conditions in the digital, virtual, and physical worlds.
The IEEE report notes that this A/IS agent role includes educator, negotiator, and broker on behalf of its user. Moreover, individuals separately should be able to create a trusted identity, a persona to act as a proxy in managing personal data and identity online.
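To make the quoted "algorithmic agent" concept concrete, here is a minimal, purely hypothetical sketch of an agent that honors its owner's terms and conditions when fielding data requests. All class names, policy categories, and decisions here are my own illustration, not anything specified in the IEEE report:

```python
from dataclasses import dataclass, field

@dataclass
class DataRequest:
    """An institution's request to use some of the owner's personal data."""
    requester: str
    purpose: str
    data_fields: tuple

@dataclass
class PersonalAgent:
    """Hypothetical sketch of a personal data agent that evaluates
    incoming requests against its owner's stated terms."""
    owner: str
    allowed_purposes: set = field(default_factory=set)
    withheld_fields: set = field(default_factory=set)

    def decide(self, request: DataRequest) -> str:
        # Deny outright if the purpose was never consented to.
        if request.purpose not in self.allowed_purposes:
            return "deny"
        # Act as negotiator if the request touches withheld data.
        if self.withheld_fields.intersection(request.data_fields):
            return "negotiate"
        return "allow"

agent = PersonalAgent(
    owner="theodore",
    allowed_purposes={"calendar", "writing-assistance"},
    withheld_fields={"location", "relationship-history"},
)

print(agent.decide(DataRequest("os1", "advertising", ("email",))))         # deny
print(agent.decide(DataRequest("os1", "calendar", ("location",))))         # negotiate
print(agent.decide(DataRequest("os1", "writing-assistance", ("email",))))  # allow
```

The point of the sketch is the locus of control: the policy lives with the owner, and the agent plays the educator/negotiator/broker roles the report describes, rather than defaulting to whatever terms an institution dictates.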
The IEEE's foundational approach to ethically informed AI is a landmark achievement. Public recognition of the credibility, and desirability, of a "personal data AI agent" is one huge step forward. For now I have just two additional thoughts to offer.
First, the EAD framework should expressly tie the need for human sovereignty over A/IS to institutional algorithmic systems’ growing power to shape human behavior. As that ability to control us continues to expand, society’s ethical frameworks should recognize a corresponding need to rebalance power in the direction of human sovereignty.
In more technical terms, human sovereignty over AI should be measured as the sum of the degrees of human agency, and of institutional accountability. As a result, more powerful and autonomous institutional AIs should lead proportionally to more agency for individuals, and more accountability by institutions.
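This rebalancing intuition can be captured in simple notation (my own, not the IEEE's), purely as an illustration:

```latex
% Illustrative notation: S = human sovereignty over A/IS,
% A = degree of human agency, C = degree of institutional accountability,
% P = power of institutional A/IS to shape individual behavior.
S = A + C
% The rebalancing claim: as institutional power grows, sovereignty
% should grow in proportion, via more agency, more accountability, or both.
\frac{dS}{dP} = \frac{dA}{dP} + \frac{dC}{dP} > 0
```

The formula is not meant as a literal metric; it simply makes explicit that a gain in institutional power unmatched by gains in agency or accountability represents a net loss of human sovereignty.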
Second, the IEEE's "personal data AI agent" concept should expressly recognize a right to assistive human agency. Few of us are Tony Stark, possessing the technical capabilities to make a Personal AI happen all by ourselves. Likely we will need a little help from entities ready and able to assist us. Both to lessen the considerable (and deepening) cognitive load imposed on us by our digital world, and to ensure that a Personal AI truly represents each of us as its owner.
That means Personal AIs should be created and managed for us not just by anyone, but by trustworthy, accountable entities. Under one conception, that translates into the role of a trusted intermediary, operating under heightened fiduciary duties of loyalty and of care. Each human being would select one or more of these entities, which undertake those obligations for us on a voluntary basis, enforced by an effective compliance regime. To ensure that the Personal AI works on one’s behalf, and does not, inadvertently or otherwise, slip into institutional mode.
After all, as with the disparate personalities of Tony Stark and Theodore Twombly, the difference between a Personal AI and an Institutional AI is not rooted in technology. It is a matter of human sovereignty. It is a question of to whom the machine ultimately answers.
Finally, how about some action plans? Sure. Each of us has a meaningful role to play. Depending on whether you are a concerned citizen and consumer, an engineer, an entrepreneur, or a policymaker, there are many things you can do to help democratize our society's computational systems via the introduction of Personal AIs.
Governing for our future
Below is a partial list of suggested actions to develop and apply "rules of the road" for a new ecosystem of Personal AIs. While there is some obvious overlap, I have grouped these complementary steps based on whether they primarily: (1) enhance human agency; (2) increase institutional accountability; or (3) raise public awareness.
Your own ideas on governing for our collective AI future are more than welcome.
1. Agency: sovereign humans in charge
· Definition. Originate the core concepts of what exactly it means to be a Personal AI, promoting and protecting its client’s interests, vis-à-vis the Web in general, and Institutional AIs in particular.
· Ecosystems. Develop a holistic, systems-based approach to Personal AIs, as part of a broader spectrum of human-centered tech tools. One such approach can be found in the GLIAnet Project.
· Governance. Guarantee a diversity of perspectives by creating inclusive multi-stakeholder groups and processes to develop the governing frameworks for Personal AIs and Institutional AIs.
· Human rights. Introduce the concept of artificial intelligence as supporting self-sovereign human interests, to become a core component of international digital human rights.
· Accessibility. Confront the existing “digital divide” by working to ensure that unserved/underserved human populations have meaningful and affordable access to digital tools such as Personal AIs, and assistive human agency.
· Innovation. Work with ethical companies and other third party entities (non-profits, universities, data co-ops, digital trusts, etc.) to create Personal AIs that meet core human self-sovereignty and agency standards. One illuminating example is the Almond project at Stanford University.
2. Accountability: responsible entities to assist
· Conduct. Help would-be agential entities develop codes of practice to ensure that Personal AIs provide core capabilities to their human clientele.
· Training. Work with groups instilling human ethics from globally-diverse traditions into computer science educational curricula, with an emphasis on how the deployment of such technologies should heighten human autonomy and agency.
· Safeguards. Develop effective policies and guidelines to help ensure that Personal AIs are created, deployed, and utilized in ways that do not harm others.
· Sustainability. Ensure at the outset that the deployment of Personal AIs is consistent with earth-friendly practices.
· Certification. Devise and launch accountability measures, such as certification bodies, to assess whether and how Personal AIs meet those codes of practice.
· Transparency. Require that all societal conversations about Personal AIs and Institutional AIs occur in open and accessible public fora.
· Standards. Continue to create industry standards, such as IEEE P7006, to foster the initial deployment of Personal AIs.
· Interoperability. Develop software and standards regimes necessary for Personal AIs to fully interconnect and engage with Institutional AIs.
3. Movement-Building: external engagement
· Campaigns. Build grassroots movements with groups like the Mozilla Foundation, and its new impact goal of building better machine decision-making into consumer tech.
· Advocacy. Engage with pertinent government bodies to incentivize the availability of basic interconnection rights so that Personal AIs can directly interact with Institutional AIs.
· Norms. Promote new rights vis-à-vis Institutional AIs, such as the Right to Recognize, the Right to Query, the Right to Correct, the Right to Negotiate, and the Right to Be Left Alone, whose exercise, importantly, carries no social stigma or penalty.
· Interfaces. Persuade companies such as Google, Apple, and Microsoft to agree to incorporate standardized interfaces in their Institutional AIs, to allow robust interactions with Personal AIs.
· Investment. Demonstrate to the financial community that there are sizable market opportunities in those entities developing and promoting Personal AIs.
· Outreach. Promote various education efforts so that a sizable portion of Web users appreciate the utility of, and actively seek out, Personal AIs.
Democratize AI (Part 3):
Action Plans for Creating Personal AIs