The viewpoint format of the iuvenal research blog covers ongoing developments in ESG, compliance, and AI ethics. In its first edition, co-founder Dr. Alexander Kriebitz discusses the ethical implications of recent advances at the intersection of machine learning and neuroscience.
In recent weeks, international media have covered advances in machine learning technologies applied to the analysis of human brain activity. The purpose of these so-called semantic decoders is to gain a deeper understanding of the human mind and to associate specific patterns of brain activity with concrete human thoughts. Semantic decoders can even convert brain activity into text with a surprising degree of accuracy.
While well-intentioned, as it aims to support individuals with disabilities, research on semantic decoders is ethically controversial, as it may touch upon the very fundamentals of human rights.
AI and Human Rights: The Quest to Decipher the Human Brain
The phrase "die Gedanken sind frei" ("thoughts are free") not only appears in a popular German folk song of the 19th century but also encapsulates fundamental normative beliefs about human autonomy, freedom, and intimacy. It was composed at a time when classical liberalism articulated the then-revolutionary idea of restricting governmental power through constitutionally guaranteed rights. Two centuries later, the phrase might become popular again.
Quite recently, scientists at the University of Texas at Austin published a paper on their experiments with non-invasive technologies that analyze brain activity and use machine learning to allow individuals who have lost the ability to speak to communicate with the outside world again. While the purpose of these semantic decoders is to improve the lives of individuals with disabilities, the development raises fundamental ethical questions, particularly from a human rights perspective.
Freedom of thought, like mental and physical integrity, is a fundamental right articulated in the wake of the Enlightenment and enshrined in the International Bill of Human Rights, particularly in Article 18 of the Universal Declaration of Human Rights. The idea of others gaining access to an individual's intimate thoughts is worrying for multiple reasons, as it has the potential to reshape power balances. In particular, the use of such insights outside the health context, for instance by law enforcement or in human resources, raises concerns about misuse.
While I am no fan of dystopian visions of AI rule either, I find it difficult to overlook the risks that come with such technologies, particularly in authoritarian settings, with "authoritarian" not necessarily confined to the likes of China, Russia, or North Korea. What "thought-crime" and preventive policing mean in practice can be seen in Xinjiang, but has also been an issue in Guantanamo. Such cases demonstrate the need for an ethical debate, in neuroscience as much as in machine learning, on how to conduct such research and with whom to share such technologies, particularly as they might be used at the expense of certain individuals. The deployment of polygraphs in the American court system, and also by border control institutions such as Frontex, illustrates that the biggest human rights threat posed by semantic decoders lies in the temptation to use them for the "greater good": to identify criminals, but also to prevent crime ex ante. Populists can count on our instincts and emotions when such technologies assist the police in preventing crimes such as sexual offenses or acts of terrorism. The greatest risks attached to the deployment of semantic decoders lie in the interrogation of subjects, as the systems require a sufficient degree of training on the individual exposed to them in order to operate successfully.
Our gut feeling tells us that something is not right here. From a political-ethical perspective, this is about power. The introduction of such technologies in areas beyond health would indeed fundamentally reconfigure our societies: irrespective of how advanced thought-reading technologies may become, they might be biased against particular populations. The ethical debate therefore needs to address the wider question of where to draw the line when insights generated by health research reshape power balances elsewhere, heightening the risks of human error, misuse, and perverse incentives.
In the normative space, we already have some instruments with which to judge, or at least frame, semantic decoders: existing conventions on bioethics and research ethics are highly relevant to the topic. For example, the Universal Declaration on Bioethics and Human Rights, adopted by UNESCO in 2005, acknowledges the need for respect for human dignity and human rights, as well as the principles of non-discrimination and non-stigmatization in the development and application of new technologies. Additionally, the Declaration of Helsinki provides ethical principles for medical research involving human subjects, including informed consent and the protection of privacy and confidentiality. These conventions already give a normative frame to research at the intersection of neuroscience and machine learning, and they support the idea that advances in health or neuroscience should not be used at the expense of individuals. However, most of these conventions are soft law and not internationally binding. Authoritarian governments in particular, but also institutions in the West that are somewhat exempt from legislation protecting fundamental or human rights, are likely to disregard such ethical considerations.
Overall, the scientifically fascinating development of non-invasive thought-reading technologies raises important ethical questions that require careful consideration. This applies particularly to the notion of freedom of thought as an underlying concept of human rights, which becomes increasingly relevant against the backdrop of technologies such as semantic decoders. The discourse on AI ethics therefore needs to take a closer look at the latest developments on this front.
Dr. Alexander Kriebitz is a co-founder of iuvenal research and has lectured on the human rights implications of globalization and digitization in Munich, Vienna, Freiberg, and Moscow. His current research focuses on the human rights implications of AI deployment in public administration, health, and human resources. Please feel free to reach out to him if you are interested in the topic!