The advent of the digital information revolution has marked a new division of history: the way we now live has been described as hyperhistory. This new way of living is characterized by certain features:
1) Our technologies are now capable of more than mere mediation between us – as users – and other technologies or the world. ICTs (information and communication technologies) are also able to control our other technologies.
To give a specific example, home automation software can order groceries online based on what it determines the individual wants and what is available in the fridge.
2) The narratives of our lives generate vast stores of data, which are consumed by ICTs in ways that we do not actively endorse or control. Often we are not even aware of the processes involved in creating this data.
An example would be the smartphone in one’s pocket capturing its location and proximity to nearby landmarks, or the social graph implicit in personal calendar events. These in turn feed complex engines of analysis and prediction, which fit the individual into semi-anonymized profiles of aggregate behavior and, in turn, generate even more data.
3) Societies that have entered hyperhistory are particularly vulnerable to attacks using ICTs. There are a multitude of vectors of attack, from infrastructure controlled by ICTs, through economic disruption, to disinformation campaigns that damage social cohesion and political institutions.
Coordinated attacks against infrastructure or strategically important programs, ransomware (such as WannaCry, which disrupted parts of the UK’s NHS), and other emerging patterns of attack are well understood by cybersecurity experts.
4) In hyperhistory, individual human beings attempt to define themselves in terms of the ICTs they use. They try to fit into imposed data models and system processes, redefining for themselves what it means to be a human being through their understanding of their role within these structural systems.
Social networks require us to reconsider the types of relationships we have and structure the information we share through the filters of the data models of these systems.
Our relationship with the wider society is further defined by our identities in the systems of governments, corporations, and other institutions. Whether it is government systems defining citizens, welfare beneficiaries, or taxpayers, or corporate networks defining authorized personnel, the overlapping identities imposed by these systems are internalized in how we talk about ourselves with each other in society.
5) The distinction between being connected (“online”) and disconnected (“offline”) becomes meaningless when the human being cannot avoid being tracked and managed, just like any other component within the network. We are always connected, always mediating ourselves via digital technologies; these technologies have become transparent.
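The kind of passive data collection described in point 2 can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s actual pipeline: it aggregates raw location “pings” into the sort of coarse behavioral profile that prediction engines consume.

```python
from collections import Counter

# Hypothetical raw events a phone might emit without the user's active input:
# (hour of day, nearby landmark)
pings = [
    (8, "coffee shop"), (9, "office"), (13, "office"),
    (18, "gym"), (8, "coffee shop"), (9, "office"),
]

def build_profile(events):
    """Aggregate raw pings into a semi-anonymized behavioral profile."""
    by_place = Counter(place for _, place in events)
    total = len(events)
    # Share of observed time at each place -- the kind of derived data
    # that feeds analysis engines and, in turn, generates yet more data.
    return {place: count / total for place, count in by_place.items()}

profile = build_profile(pings)
```

Note that the individual never explicitly provided the resulting profile; it is derived entirely from data captured as a by-product of carrying the device.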
One of the major players in the development of hyperhistory is AI (artificial intelligence), which seems to be a catch-all phrase for a wide-ranging set of technologies, most of which apply statistical learning techniques to find patterns in large sets of data and make predictions based on those patterns. From the critical, like law enforcement, healthcare, and humanitarian aid, to the mundane, like shopping, AI seems to be the answer to all our problems. Yet, while the progress of hyperhistory seems to be beneficial for humankind, it is also worth reflecting on who is driving the regulatory agenda and who benefits from it.
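As a toy illustration of what most of these technologies amount to – fitting a statistical pattern to data and extrapolating from it – here is a minimal least-squares trend fit in plain Python. The data values are invented for the example; real systems differ mainly in scale, not in kind.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data: a household's weekly grocery spend over six weeks.
weeks = [1, 2, 3, 4, 5, 6]
spend = [40, 42, 45, 44, 48, 50]

a, b = fit_line(weeks, spend)
prediction = a * 7 + b  # "predict" next week's spend from the pattern
```

The pattern found (spend rising by roughly two units a week) is then used to anticipate behavior that has not happened yet – which is all most commercial “AI” does, at vastly greater scale.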
This question needs to be answered, because letting industry needs drive the AI agenda presents real risks. With so many of the digital giants of Silicon Valley located in the US, one particular concern is AI’s potential to remake societies in the image of US culture and the preferences of large US companies, even more than is currently the case. These tech companies sit on troves of data that can be turned into the raw material for new AI-based services.
A related concern is how much influence these companies have over AI regulation. In some instances, they are invited to act as co-regulators. Much can be said in favor of such open norm-setting venues, which aim to address AI regulation by developing technical standards, ethical principles, and professional codes of conduct outside of the push and pull of formal regulatory processes. Yet again, the question that needs to be asked is: who benefits? The solutions presented by these initiatives are often framed in terms of ethical frameworks or narrow fixes that promise fair, accountable, and transparent AI. Yet they do not address questions of hard regulation or the Internet’s business model of advertising and attention.
How AI systems function and, by extension, what regulatory problems they raise, is highly context-dependent. A US-based, commercially driven agenda is naturally going to be a poor fit for much of the rest of the world.
One way forward would be to ensure equitable stakeholder representation when regulating AI. Yet we are not sufficiently hearing the concerns of the Global South. Those voices are especially relevant, as their countries are often used as ‘test-beds’ for technology that will later be rolled out across the rest of the world.
Similarly, it is important to go beyond the fairness rhetoric and start to formulate what other fundamental values should be included. In focusing on narrowly defined conceptualizations of fairness, accountability, and transparency, what are we leaving behind? It should be evident that merely ensuring there is a civil society representative or academic in the room for every industry representative is not enough. Even where various stakeholders are invited to join the regulatory process in equal numbers, there is no internal equality: corporations simply have more resources to dedicate to such processes.
To ensure we all benefit from these technologies, we need to guarantee that a diverse set of concerns and values is represented – equitably – when setting the regulatory agenda for AI. As it says in the Qur’an:
“And those who harm believing men and believing women for [something] other than what they have earned have certainly borne upon themselves a slander and manifest sin.” (Qur’an 33:58)
This is not only about technology; it is also about rewriting history, as well as making technology more ethical and human.
Data Driven Investor
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. DDI has only one mission: see what is coming, and do what is important – “NOW”.
Visit us at datadriveninvestor.com.
About the DDI Team
Dr. Justin S P Chan has a passion for clarity and synergy - seeing through the complexity of the intersecting spheres of technology, finance, innovation and social dynamics, to enable game-changing collaborations between entrepreneurs and innovative opportunities. Combining the vision of a true inventor and entrepreneur with his data-driven, evidence-based approach to investment, Justin also co-founded OCIM and serves as Chief Investment Officer for its fund management platform. Within OCIM, he co-manages OC Horizon Fintech, a transformational hedge fund, where he blends real applications, expertise and future-awareness into truly exceptional investment performance. Justin gains inspiration for these projects from his global network of contacts in investment and fintech communities, where he stays on the pulse of fast-moving conversations and trends affecting global markets and emerging technologies.
John DeCleene is a fund manager for OCIM’s fintech fund, and is currently progressing towards becoming a CFA charterholder. He loves to travel for business and pleasure, having visited 38 countries (including North Korea); he represents the new breed of global citizen for the 21st century. Whilst he has spent much of his life in Asia, John DeCleene has lived and studied all over the world - including spells in Hong Kong, Mexico, the U.S., and China. He graduated with a BA in Political Science from Tulane University in 2016.