GLIAnet Part 5: Avatars, cloudlets, identity layers, dApps -- and other potential elements of a GLIAnet ecosystem

November 27, 2018
Last time, I discussed the importance of truly trustworthy and accountable entities -- acting legally and ethically as “countermediaries” -- to help us manage and promote our digital lives. When we are able to select these entities, in a voluntary and consensual manner, to fully represent our personal interests, we can think of them as becoming our Digital TrustMediaries.
A trustworthy and open GLIAnet ecosystem -- one that actually serves the empowered human being -- has other potential components as well. Here are a few.
A second component: the “Avatar.”
Most popular coverage of Artificial Intelligence focuses on the supposed long-term threats to human employment, and even to humanity itself. In the nearer term, however, employing Machine Learning systems in various institutional decisional capacities is already having an impact. Did I get this job? Do I secure this loan? How long is my probation? What is my medical diagnosis? Why did I receive this particular news feed, or video clip, or ad? These and many other decisive life points, large and small, are currently being shaped and influenced by algorithmic systems. As a result, along with deep questions about fairness and bias, observers are rightly raising concerns about a loss of decisional accountability and human autonomy.
Currently, the rise in consumer markets of “virtual assistants,” such as Google’s Assistant, Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana, involves an AI-based agent provided by a corporate entity and intended to address the desires of the actual human user. However, in reality these virtual assistants are part of the virtual fabric of the sponsoring organization, and thus can be seen as primarily serving the interests of those for-profit entities.
Increasingly, this is becoming the case as these entities exert more control over the datastreams emanating from “personal” devices (smartphones and wearables), environmental devices (the Internet of Things), and other machine intelligence-enabled systems. As a result, the Platforms have their virtual corporate agents -- Alexa and Assistant and Siri and Cortana -- perching on our mobile devices, and listening in our living rooms. Vying for our time and space and attention. And our data, and our money.
Given this one-sided situation, it seems only fair that each of us have our very own AI, on our own terms. What about an intelligent personal avatar, there to support you in your daily life, on and around the Internet? This Avatar could serve as a personal advisor, vigilant advocate, trusted agent, and able assistant. Unlike the first generation of virtual assistants, the Avatar would be chosen and controlled by the human it serves. Thus, the Avatar is intended to represent solely the interests of the User, helping to fashion and manage her interactions with the virtual world.
The Avatar concept has the potential to evolve into an entirely new paradigm for humans interacting with each other using virtual connectivity. By accessing your personal data, and using advanced machine learning tools, your Avatar over time would truly get to know you, and promote your best interests. As a result, the Avatar application carries the prospect of becoming the essential trusted agent, constantly tethered to, and fully representing, the individual human user. The Avatar could interact constantly and intuitively with the persistent datastreams bombarding us from the rest of the online/offline world -- sifting and searching, recognizing and recommending, negotiating and dealing, even blocking and protecting.
In essence, individual humans for the first time would have the tools to manage their virtual interactions and relationships in real time, by way of standardized interfaces between the individual’s Avatar(s) and third party virtual assistants. Practical agency, where the human has the power of knowledge (transparency) and the ability to act (opportunity), would be enhanced. This would include dealing in a personalized manner with the deluge of invisible signals coming from IoT devices. No longer need we enter a camera- and sensor-laden physical environment without any knowledge or recourse. In those instances, the Avatar could make recommendations or decisions to allow or deny access to our data, based on personal preferences. On balance, Users would have true personal avatars for their digital Lifestreams.
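As a thought experiment, the Avatar's gatekeeping role might be sketched as a simple preference-driven policy check. Everything here -- the `Avatar` and `DataRequest` names, the example requesters -- is hypothetical illustration, not an existing system or interface:

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    """An incoming signal from an IoT device or third-party virtual assistant."""
    requester: str      # e.g. "store.example/camera-7" (hypothetical)
    data_type: str      # e.g. "face_image", "location"
    purpose: str        # the requester's stated purpose

class Avatar:
    """Minimal sketch of an Avatar applying its User's personal preferences."""
    def __init__(self, preferences):
        # preferences: data_type -> set of purposes the User permits
        self.preferences = preferences

    def decide(self, request: DataRequest) -> str:
        """Allow only requests whose purpose the User has approved for that data type."""
        allowed = self.preferences.get(request.data_type, set())
        return "allow" if request.purpose in allowed else "deny"

# The User permits location sharing for navigation, and nothing else.
avatar = Avatar({"location": {"navigation"}})
print(avatar.decide(DataRequest("car.example", "location", "navigation")))        # allow
print(avatar.decide(DataRequest("store.example/camera-7", "face_image", "analytics")))  # deny
```

A real Avatar would of course learn these preferences over time rather than take them as a static table, but the principle is the same: the default answer to an unapproved request is "deny."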
Over time, as we desire, the Avatar would learn to anticipate and provide for even our unspoken needs and wants. Eventually, for some of us, the Avatar could become a true augmentation to the human body/brain, in what Elon Musk would call a “Neuralink”-type capacity. Playing helpful offense, and vigorous defense. And democratizing AI for all.
A third component: the “Cloudlet.”
The cloud is an intentionally amorphous concept. In reality, it largely resembles a return to “Big Iron,” the centralized computing platforms prevalent in the 1970s and early 1980s. With clouds we have a partially decentralized infrastructure, but still a highly centralized service model. As a result, today our personal data lives on thousands of servers spread around the world. Twenty-four by seven, under someone else’s control. Waiting to be used, or misused, or lost, or -- increasingly -- hacked.
The modern notion of universal clouds has expanded the traditional client-server relationship, so that it consists mostly of lots of servers, with less and less residing on the client side. But that doesn’t have to be so. The cloud can be refashioned as distributed datastreams, protected in encrypted transactions — a Cloudlet. After all, with the advent of edge computing and other decentralized infrastructure, data storage, processing, and computation now can occur practically anywhere.
One can envision a cloud world without any databases at all, where online companies provide the computational tools but not necessarily the centralized data storage. Users could have their data stored not in an external repository, but in an on-premises module, creating true server-to-server, peer-to-peer connectivity. The bits could be shared in a “data-unbase” with any entity, such as a Digital TrustMediary, designated by the individual. That sharing could be instantaneous, limited for a particular purpose, and with the data simply disappearing on the other side of the Cloudlet.
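To make the “data-unbase” idea concrete, here is a minimal sketch, assuming a single-use, purpose-bound access grant: an entity is handed a token for one record and one stated purpose, and the grant disappears the moment it is redeemed. The `Cloudlet` class and its methods are hypothetical illustrations, not an existing API:

```python
import secrets

class Cloudlet:
    """Sketch of an on-premises data module that grants one-time, purpose-bound access."""
    def __init__(self):
        self._records = {}
        self._grants = {}   # token -> (record_key, permitted purpose)

    def store(self, key, value):
        """Keep the data at home, behind the User's own virtual wall."""
        self._records[key] = value

    def grant(self, key, purpose):
        """Issue a single-use token for one record, tied to a stated purpose."""
        token = secrets.token_hex(16)
        self._grants[token] = (key, purpose)
        return token

    def read(self, token, purpose):
        """Redeem a token once; the grant vanishes after use."""
        key, allowed_purpose = self._grants.pop(token, (None, None))
        if key is None or purpose != allowed_purpose:
            raise PermissionError("no valid grant for this purpose")
        return self._records[key]

cloudlet = Cloudlet()
cloudlet.store("credit_history", {"score": 710})
token = cloudlet.grant("credit_history", purpose="loan_check")
report = cloudlet.read(token, purpose="loan_check")   # succeeds exactly once
# A second read with the same token, or any other purpose, raises PermissionError.
```

The design choice worth noticing: the data never leaves the Cloudlet at rest -- only a revocable, purpose-scoped view of it does.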
The Cloudlet model should lead to little to no Web User data being leaked to third parties for their own use, separate from the original context and without an affirmative agreement. This means fewer concerns about data breaches, particularly involving large centralized repositories. Indeed, why did Equifax possess anyone’s data in the first place to be exposed? Why does my social identity need to live constantly within a series of Facebook data centers? With Cloudlets, no more would pieces of my life be scattered around on servers all over the world, vulnerable to countless data breaches and unauthorized uses. Next time these and other entities need my data for some approved purpose, they can come to me. Utilize what they need, run their reports, and then leave the data where it is. Done.
The Cloudlet can live practically anywhere, including in a larger cloud itself. But the crucial distinction is that my data now resides behind a virtual wall of my own choosing.
A fourth component: the “Identity Layer.”
Software-based technologies can be useful tools to help solve social challenges. For example, while trust is fundamental to the GLIAnet concept, for some it can run up against concerns about privacy. In order to determine whether or not to trust a particular person or entity in a particular situation, one typically needs to know certain personal, even sensitive details. This conundrum has led to the notion of introducing self-sovereign or decentralized identity, built on decentralized identifiers (“DIDs”), where a person controls when and how her personal information is revealed to parties in the world. Another term sometimes used for this approach is an Identity Layer, a form of virtual “pseudonymity” that allows me to project to the world those chosen aspects of my self and my Lifestream. From authenticating an online purchase, to hailing a late-night Uber ride.
One potential technical solution (previewing our later “How” discussion) could use zero-knowledge proof (“ZKP”) protocols. These remarkable applications let one party (the verifier) validate the truth of a claim made by the other party (the prover), without learning anything more than the fact of the validation itself. No personal information need be passed along, and no passwords need be exchanged (so they cannot be stolen). Built on strong cryptographic techniques, these protocols make the exchange extremely secure.
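As one concrete illustration, the classic Schnorr identification protocol is a zero-knowledge proof of knowledge: the prover convinces the verifier she knows a secret exponent x behind a public value y, without ever revealing x. A minimal interactive sketch follows, using toy group parameters that are far too small for real use (assumption: a production system would use standardized large primes or elliptic curves):

```python
import secrets

# Toy parameters, for illustration only -- real deployments use much larger groups.
p = 467          # safe prime: p = 2q + 1
q = 233          # prime order of the subgroup
g = 4            # generator of the order-q subgroup mod p

def keygen():
    """Prover's secret x and public key y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    y = pow(g, x, p)
    return x, y

def prove_commit():
    """Prover picks random r, sends commitment t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def prove_respond(r, x, c):
    """Prover answers the verifier's challenge c; s alone reveals nothing about x."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier checks g^s == t * y^c (mod p) -- true iff the prover knows x."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One interactive round: the verifier learns only that the prover knows x.
x, y = keygen()
r, t = prove_commit()          # prover -> verifier: t
c = secrets.randbelow(q)       # verifier -> prover: random challenge c
s = prove_respond(r, x, c)     # prover -> verifier: s
assert verify(y, t, c, s)
```

The verifier ends up with t, c, and s -- none of which, individually or together, exposes the secret x; that is the “zero knowledge” part.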
Technologies like ZKP protocols, embedded within new Identity Layers, can allow individuals to manage their decentralized identity. Among many use cases, this gives members of historically disadvantaged or currently persecuted communities the power to create safe zones around the ways they present themselves to the public and to each other. On the commerce side, businesses too can benefit; by not needing direct access to personal data, they can greatly reduce their exposure under prevailing data protection regulations.
Other potential components of the GLIA Project could include:
Distributed Applications -- tailored to serve my interests, as vetted for me by the Digital TrustMediary.
Modular Devices -- modifiable and repairable -- in other words, fully owned.
Protected Content -- secured and preserved by edge-based Cloudlets.
Personal Access -- one’s own slice of spectrum to connect to the Web.
Taken together, these and other components can be thought of as comprising a “GLIAnet,” a localized overlay network serving the User and linked via software interfaces to the Platforms, and to the Web. Creating a virtual zone of trust and support.