Personal AI (artificial intelligence) assistants are now nearly ubiquitous. Every leading smartphone operating system ships with one, promising to help you with basic cognitive tasks such as searching, planning, and messaging. Using such assistants is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing, claiming that it is dehumanizing, leads to cognitive degeneration, and robs us of our freedom and autonomy.
Classically, following the work of Alan Turing, human-likeness was the operative standard in definitions of AI: a system could be held to be intelligent only if it could think or act like a human with respect to one or more tasks. Definitions of AI are now commonly divided into four categories: thinking like a human, acting like a human, thinking rationally, and acting rationally.
Humans have long outsourced the performance of cognitive tasks to other humans. If such human-to-human outsourcing demands its own ethical framework, then presumably AI outsourcing does too. But what might that ethical framework look like?
In order to think about the ethical significance of such cognitive outsourcing, it helps to draw upon some theoretical models. One thesis, associated with the situated/embodied cognition school of thought, is that cognition is not a purely brain-based phenomenon. We don’t just think inside our heads; cognition is a distributed phenomenon, not a localized one. We use maps to navigate, notebooks to remember, rulers to measure, calculators to calculate, and so on. We can think about these interactions with cognitive artifacts at the system level (i.e. our brains/bodies plus the artifact) and at the personal level (i.e. how we interact with the artifact).
One thing that is often missing from this debate is any discussion of the positive role that AI assistance could play in addressing cognitive deficits induced by resource scarcity. When a resource is scarce, you tend to focus all your cognitive energies on it, crowding out other concerns. Cognitive outsourcing through AI could redress such scarcity-induced imbalances within one’s larger cognitive ecology, which serves as a counterbalance to some of the concerns about degeneration.
Moreover, autonomy and responsibility should also be taken into account in any discussion of the role of AI assistance. It is commonly believed that personal happiness and self-fulfillment are best served when one pursues goals of one’s own choosing, and that the achievement and meaning derived from personal goals depend on one’s being responsible for what one does. If AI assistance threatened autonomy and responsibility, it could have an important knock-on effect on our personal happiness and fulfillment.
There is some worry that AI would gradually ‘nudge’ a person into a set of preferences and beliefs about the world that are not of their own making. There might indeed be something different about the kinds of nudging made possible by AI assistants: they can constantly and dynamically update an individual’s choice architecture to make it as personally appealing as possible, learning from past behavior and preferences, and thereby make it much more likely that the individual will select the choice architect’s preferred option.
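The dynamic-updating mechanism described above can be sketched in a few lines of Python. This is a hypothetical toy model, not any vendor’s actual system: the assistant keeps a running preference score per option, reinforces whatever the user picks, and reorders the menu accordingly.

```python
from collections import defaultdict


class ChoiceArchitect:
    """Toy model of a personalizing assistant: it reorders the
    options it presents based on the user's past selections."""

    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate
        self.weights = defaultdict(float)  # learned preference per option

    def present(self, options):
        # Rank options by learned preference, highest first.
        return sorted(options, key=lambda o: self.weights[o], reverse=True)

    def record_choice(self, chosen, options):
        # Reinforce the chosen option and slightly decay the rest,
        # so the presented ordering drifts toward past behavior.
        for o in options:
            if o == chosen:
                self.weights[o] += self.learning_rate
            else:
                self.weights[o] -= self.learning_rate * 0.1
```

After only a few interactions, the option the user picked most often migrates to the top of the list. This is the ‘dynamic choice architecture’ effect in miniature, and it also shows how easily the same ordering could be biased toward a choice architect’s preferred option rather than the user’s.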
The primary value of some interpersonal actions comes from immediate, conscious engagement in the performance of that action. To the extent that AI assistants replace that immediate, conscious engagement, they should be avoided. Nevertheless, in many other cases, the value of interpersonal actions lies in their content and effect; in these cases, the use of AI assistants may be beneficial, provided they are not used in a deceptive/misleading way. This is, of course, very generic.
The intention is for these principles to be borne in mind by users of the technology as they try to make judicious use of it in their lives. Yet the principles could also be of use to designers: if they wish to avoid negatively affecting their users’ lives, considering the effect of their technologies on cognitive capacity, autonomy, and interpersonal virtue would be important.
More guidance on which types of activity derive their value from immediate conscious engagement, and on which situations or abilities need some resilience, would always be desirable.
Data Driven Investor
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. DDI has only one mission: see what is coming, and do what is important – “NOW”.
Visit us at datadriveninvestor.com.