Carol Danvers is having a big problem with her AI.
As a “noble warrior hero” from the Kree homeworld of Hala, Carol has just discovered that much of what she assumes to be true about herself is a lie. And not just any random fib. Seems the “Supreme Intelligence” — the advanced AI governing the Kree civilization — has been blatantly deceiving her. About her true earthly origins, about the source of her mysterious powers, and about the basis for the ongoing war against supposedly evil Skrull terrorists. It seems that subterfuge, and treachery, and galactic genocide, are not beneath a ruling AI composed of the best minds of an advanced, star-hopping civilization. More than a bit distressing. And depressingly familiar to those with a working knowledge of Earth’s own history.
Carol’s daunting challenge as a re-born Captain Marvel is how to take on the all-powerful Supreme Intelligence. After all, the SI essentially controls the Kree’s computational world, and in turn its citizens. And once she is physically captured, Carol’s mental state appears to be at the SI’s mercy.
Throughout the movie, Carol’s sheer grit and determination are demonstrated in flashbacks to tough life situations. Those qualities shine through again in her showdown with the SI. And yet, it turns out that Carol as Captain Marvel does not defeat the SI’s nefarious intentions on the SI’s own computational terms. No “AI versus AI” battle royale ever develops. Instead, she ends up doing the cinematic superhero thing: using her latent Tesseract-derived power to blast her way through the SI’s dubious machinations.
A highly entertaining and successful spectacle — but not exactly a blueprint for others to follow. Precious few of us after all have that kind of raw cosmic energy churning in our fists.
The theme of centralized machine intelligence wielding societal control is not a new one, in modern literature or popular entertainment. E.M. Forster’s short story “The Machine Stops,” first published in 1909, is one literary precursor. And “Captain Marvel” is only the latest entry in that movie genre. From “2001: A Space Odyssey” to “The Terminator” series, plotlines revolve around small bands of independence-minded humans rebelling against the despotic authority of a ruling AI. As in “Captain Marvel,” such challenges rarely operate, let alone succeed, on the same virtual terms as the AI. Instead, resistance typically is mounted through real-world physical actions (unless, as in “The Matrix,” one is fortunate enough to become “The One,” albeit one equipped with kick-ass fight moves). Even in “I, Robot,” VIKI the super AI is done in by a rogue robot injecting nanobots into her operating system core. Greatly entertaining, for sure. But these variations on pulling the proverbial plug on machines do not provide optimal AI governance strategies for our own future.
Putting aside far-flung scenarios of “AIs ruling the world,” our society nonetheless is presented in the here and now with a smaller scale but still-pressing challenge. Namely: how can each of us hope to fully protect and promote our legitimate life interests, in a world increasingly dominated by third party AIs?
As Part I of this three-part series describes, complex and deepening societal challenges stem from what I call Institutional AIs: a mix of online “screens,” bureaucratic “unseens,” and environmental “scenes.” These AIs, owned and controlled by corporate and governmental bodies, amount to consequential decision systems that are embedded in, and shape, aspects of our everyday lives. Importantly, those systems are beholden to the priorities of their institutional master — and not to the rest of us. The resulting loss of human agency threatens to become our new status quo.
To be crystal clear, this concern does not stem from any desire to demonize AI as a technology. Artificial intelligence offers incredible potential for numerous life-enhancing, and life-saving, applications. In the healthcare arena alone, for example, the advances already being reported are truly astounding. As an advanced tool in the hands of trained doctors and researchers, AI can provide immense benefits to humanity. The real issue boils down to the actual motivations and control of those wielding the tech tools.
Below, I will explain that a potentially effective way to challenge the one-sided proliferation of Institutional AIs is the introduction of human-agential artificial intelligence — let’s just call them Personal AIs. These virtual avatars would directly serve each of us as human beings, and our chosen communities of interest — including family, friends, and other social ties. Part III in this series (coming soon) will lay out a proposed action plan — the “how” — to help make these aspirations a reality.
To date, most proposals for democratizing AI — creating greater transparency, more balanced priorities, and less harm to basic human rights — center on changing the practices and behaviors of existing institutions. Some dedicated organizations, from AI Now, to the Future of Humanity Institute, to the Ethics and Governance of AI Initiative, are working diligently to uncover, explore, and propose meaningful fixes to some of the more pernicious flaws in existing AI systems. Others, such as the professional software engineers of IEEE, are promoting new technical standards and practices based on ethical AI. These and many other groups are engaged in highly worthwhile and useful endeavors to bring Institutional AIs closer to core human values.
Notably, these organizations tend to advocate for improvements while operating from outside these third party-controlled algorithmic systems. This means their work often amounts to seeking to curtail, to varying degrees, the fast-evolving activities of those in our society with access to all the tech tools, and all our personal data. It does not mount a frontal challenge to a looming world where only a relative few develop and deploy AI, ostensibly to the benefit of the rest of us. In short, these efforts are profoundly important, but not quite complete.
Nonetheless, there are opportunities to support a complementary and impactful approach in the nascent AI space. Rather than continuing to serve as data donors, and as objects of intrusive algorithmic systems, ordinary human beings should have similar technology on their side. In the particular context of artificial intelligence, this means that people should have their own Personal AIs, answerable only to their own unique interests.
So what does this really entail? In brief, each of us should have a highly-individualized virtual intelligence, there to support us in our daily lives. These computational agents would exist on our personal devices, managed for us by trustworthy and accountable entities that we select. These Personal AIs would serve as our trusted advisors and vigilant advocates, in part by actively engaging with third party Institutional AIs.
This doesn’t mean, for example, having Amazon and Apple completely reprogram their Alexa and Siri applications. Rather, it amounts to arming ordinary humans with their own AIs, which then would interact directly with those Institutional AIs. Indeed, if these entities intend to mediate our lives for us, perhaps the introduction of virtual personal agents could be considered a form of “counter-mediation,” a way of introducing some checks and balances to level the computational playing field. Unlike the current first generation of virtual assistants, this second generation version would be chosen and controlled by the actual human it serves.
Thus, the Personal AI is intended to represent solely the interests of its client, helping to fashion and manage her interactions with the virtual world. In particular, the application could interact constantly and intuitively with the persistent data-streams bombarding us from the online/offline world. This would entail countless simultaneous activities in real-time: from recognizing, analyzing, and researching a specific situation, to recommending and acting upon options, to negotiating with third parties over terms of engagement, and, where necessary, even overriding and blocking harmful third party entreaties. All without necessitating the conscious involvement of the client.
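To make that mediation loop concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not a reference design: the data categories, the client preference scheme, and the three-way allow/negotiate/block policy are all invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    NEGOTIATE = auto()
    BLOCK = auto()

@dataclass
class Request:
    """An incoming entreaty from a third-party Institutional AI (hypothetical fields)."""
    source: str          # e.g. "ad-network.example"
    data_requested: str  # e.g. "location", "shopping_intent"
    purpose: str         # e.g. "personalization", "tracking"

class PersonalAI:
    """Toy mediation loop: recognize a request, analyze it against the
    client's preferences, then act on the client's behalf."""
    def __init__(self, preferences):
        # preferences maps a data category to the client's default stance
        self.preferences = preferences

    def mediate(self, request: Request) -> Action:
        stance = self.preferences.get(request.data_requested, "ask")
        if stance == "never":
            return Action.BLOCK       # override harmful entreaties outright
        if stance == "always":
            return Action.ALLOW       # pre-approved category, no interruption
        return Action.NEGOTIATE       # otherwise negotiate terms of engagement

agent = PersonalAI({"location": "never", "shopping_intent": "always"})
print(agent.mediate(Request("ad-network.example", "location", "tracking")))
```

A real agent would of course learn these stances over time rather than read them from a static table; the point of the sketch is only that the client’s preferences, not the third party’s, drive the decision.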
This virtual interface function could play out in many different scenarios.
A few include having the Personal AIs:
- manage and protect their clients’ online and offline flows of personal data and information, and other digital interactions with third parties;
- ensure that online recommendation engines are serving relevant information, and not harmful content such as “deep fakes” or addictive videos;
- audit financial and healthcare algorithms for bias and other flaws that would harm their clients; and
- prevent environmental devices — smart speakers, facial recognition cameras, biometric sensors — from engaging in needless surveillance.
The last scenario involves bringing online computational power into the offline space. This would allow each of us to deal with the deluge of invisible signals coming from Internet of Things (IoT) devices. No longer need we enter a camera- and sensor-laden physical environment without any knowledge, consent, or recourse. In those instances, the Personal AI could make recommendations or decisions to allow or deny third parties access to personal information — such as our precise physical location, or unique biometric characteristics — based on our stated preferences.
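As a thought experiment, the offline screening just described might look like the following sketch. It assumes, purely hypothetically, that IoT devices broadcast machine-readable beacons listing the data types they want to collect; the device names, data categories, and grant/deny responses are all invented for illustration.

```python
# A client's standing preferences, keyed by data category (hypothetical).
CLIENT_PREFERENCES = {
    "precise_location": False,   # never share
    "face_geometry": False,      # never share
    "ambient_audio": False,
    "foot_traffic_count": True,  # anonymous aggregate counts are fine
}

def screen_beacon(beacon: dict) -> dict:
    """Return a consent response for each data type a device requests.
    Unknown categories default to deny."""
    return {
        data_type: ("grant" if CLIENT_PREFERENCES.get(data_type, False) else "deny")
        for data_type in beacon["requests"]
    }

# A store's entry camera announces what it collects; the Personal AI answers.
store_camera = {"device": "entry-cam-7", "requests": ["face_geometry", "foot_traffic_count"]}
print(screen_beacon(store_camera))
```

The interesting design question such a sketch surfaces is enforcement: a consent response only matters if the surrounding environment is obligated, by standard or by law, to honor it.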
The Personal AI concept has the potential to evolve into an entirely new paradigm for humans interacting with each other using virtual connectivity. By accessing your personal information, and using advanced machine learning tools, your AI over time would truly get to know you, and promote your best interests. As a result, the Personal AI application carries the prospect of becoming the essential trusted agent, constantly tethered to, and fully representing, the individual human user. Practical agency, where the human has both the power of knowledge (transparency) and the ability to act (opportunity), would be enhanced, and not simply assumed away.
So, going back to Part I of this article series, both Detective Del Spooner from “I, Robot,” and you as the autonomous car operator, would be far better situated. Each would have their very own super mediation agent, actively engaging third party AIs across a panoply of “screens, scenes, and unseens.” True personal avatars for our digital lives.
Is this kind of individual empowerment even possible? Absolutely. Personal AIs are becoming viable as a matter of rapidly advancing technology. For example, IEEE Working Group P7006 is developing a new industry standard for personal data AI agents. Organizations such as OpenAI are open-sourcing AI software intended for beneficial, humanity-centric purposes. Further, companies like Silk Labs and Perceptio (both now part of Apple) and Google (as just announced at Google I/O) are building such “on-device, off-cloud” AIs. This means the actual computation and personal data can reside on the end user’s device, rather than being controlled from a distant cloud. No further need, then, for invisible tethers to Institutional AIs.
There is also a proven business model to sustain the technology. Genuine trustworthiness should be considered a commercial premium. After all, fiduciaries, cooperatives, credit unions, social enterprises, and a range of professions are built on providing trusted advice and services to individuals as clients. Those same models should work especially well in the computational intelligence space, where most of us would require an entity to help us set up and manage our Personal AIs.
The real challenge is not a technical or commercial one. The key is for all of us to demand that Personal AIs become a real and viable choice in the near future.
But what can be done now to make it happen? Perhaps the optimal way to ensure that AI-based technologies support the interests of ordinary people is to help create an actual ecosystem for such tech. Individuals and entities alike have the opportunity to step up to facilitate a marketplace of Personal AIs. One such example is the author’s GLIAnet Project, which aims to create a new Web ecosystem based on trustworthy digital agents, including AI “avatars.”
In Part III of this series, we will examine in some detail an action plan to bring this compelling vision of Personal AIs to reality. And perhaps next time around, Captain Marvel’s very own computational “superagent” will prove to be a useful complement to her super cosmic energies.