Looking at state of the art technology, today’s robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach this stage in the near future. Yet, while it seems far-fetched for a robot’s legal status to differ from that of a toaster, there is already a notable difference in how we interact with certain types of robotic objects. This occurs mainly due to our tendencies to project into them cognitive capabilities, emotions, and motivations that do not necessarily exist.
There is something about today’s robots that looks and feels different. We may perceive robots so differently from other objects that one should consider extending some level of legal protection to the former but not the latter. This conclusion is consistent with Hume’s thesis that if “ought” cannot be derived from “is,” then axiological decisions concerning moral value are little more than sentiments based on how we feel about something at a particular time. To give a specific example, violent behavior toward robotic objects feels wrong to many of us, even when we know that the abused object does not experience anything. Consequently, we should try to accommodate and work with, rather than against, current experiences with robots.
There are, however, a number of complications with this approach. First, basing decisions concerning moral standing on individual perceptions and sentiment can be criticized as capricious and inconsistent. Because sentiment is a matter of individual experience, it remains uncertain whose perceptions actually matter or make the difference.
Second, we project our own inherent qualities onto other entities to make them seem more human-like: qualities such as emotions, intelligence, sentience, and so on. Even though these capabilities do not (for now at least) really exist in the mechanism, we project them onto the robot and then perceive them as something that actually belongs to it. What ultimately matters is not what the robot is “in and of itself.” What makes the difference is how the mechanism comes to be perceived. It is, in other words, the way the robot appears to us that determines how it comes to be treated.
Finally, what ultimately matters is how “we” see things. The principal reason we need to consider extending legal rights to others, like robots, is for our own sake. This follows the well-known argument for restricting animal abuse: because our actions toward non-humans reflect our morality, we become inhumane persons if we treat animals in inhumane ways. The same logic extends to the treatment of robotic companions. This way of thinking, however, transforms animals and robot companions into nothing more than instruments of human self-interest. The rights of others, in other words, are not about them; they are all about us.
According to another way of thinking, we are first confronted with a mess of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them. Therefore, moral consideration is no longer seen as being ‘intrinsic’ to the entity; instead it is seen as something that is ‘extrinsic’. In other words, it is attributed to entities within social relations and within a social context. As we encounter and interact with others—whether they be other human persons, an animal, the natural environment, or a robot—this other entity is first and foremost situated in relationship to us. Consequently, the question of social and moral status does not necessarily depend on what the other is in its essence but on how it supervenes before us and how we decide, in “the face of the other” to respond.
Such an ethical framework is not based on “respect for others.” Rather, it is about deciding how to respond to the Other who supervenes before the individual in such a way as to always and already place the assumed rights and privileges of that individual in question.
When one asks “Can or should robots have rights?” the form of the question already makes an assumption, namely that rights are a kind of personal property or possession that an entity can have or should be bestowed with. However, this question can be situated otherwise as well: “What does it take for something—another human person, an animal, a mere object, or a social robot—to supervene and be revealed as Other?” This other question—a question about others that is situated otherwise—comprises a more precise and properly altruistic inquiry. It is a mode of questioning that remains open, endlessly open, to others and other forms of otherness. For this reason, it deliberately interrupts and resists the imposition of power. The gist of the problem with granting or extending rights to others is that it presupposes the existence and the maintenance of a position of power from which to do the granting.
Such a shift in our mindset has the potential to reorient the way we think about robots and the question concerning rights. This means, of course, that we would be obligated to consider all kinds of others as Other, including other human persons, animals, the natural environment, artifacts, technologies, and robots. An “altruism” that tries to limit in advance who can or should be Other would not be, strictly speaking, altruistic.
Data Driven Investor
Visit us at datadriveninvestor.com.