This blog post is based on a speech I gave at the launch of Dag Hareide's book "Man and the Techno Power" in September, and it keeps some of that oral form.
Man, brain and technology is a fascinating subject, perhaps frightening, and a little science fiction. The technology offers many opportunities for people to live better lives: serious illnesses can be cured, the blind can see, the deaf can hear and the paralysed can walk. Arild is paralysed, and he is quoted in the book's introduction:
"Every night I dream of a technological breakthrough that will help me walk again." – Arild
We are trying to pursue this dream.
We face the classic dilemma: the positive side of the technology is larger, but the negative side is decidedly more frightening.
We are talking about the "upgraded human", who can control objects and body parts by the power of thought, through brain implants and the connection between brain and computer.
We quickly find ourselves in a debate about super soldiers, super humans and what is usually called cyborgs.
Four rights for the human brain
The question is whether the field of research described in Dag Hareide's book is sufficiently regulated. Are there rules that prevent unethical research and unethical use? And how do we make sure we do not stop a development that could let Arild realise his greatest dream?
27 nanotechnology researchers have proposed four rights for the human brain:
- Right to privacy and consent.
- Right not to be discriminated against.
- Right to one's own identity and freedom of action.
- Right to say no to upgrading.
This is a good starting point, but it is also true that regulation always lags behind technological development. In addition, legislation is always "only" legislation, and the ethics of a field of research or a technology are difficult to regulate. We see this debate in the privacy field, where technology races ahead while the legislation is updated perhaps every twenty years. Take insurance as an example: by mapping people's lifestyles in depth we can prevent a great deal of illness, and we can calculate an insurance premium that corresponds to each individual's risk of getting sick. It is almost impossible to regulate this in detail, so ethics must act as a wrapper around the legislation. And the ethical question that arises here is, of course, how far we can push the individualisation of the premium before we undermine the insurance product itself.
The ethical discussion must be conducted side by side with the discussion of law and rights.
But does the brain need rights? Since the brain is part of a human being, the rights we have as human beings will also apply to our brains. A useful starting point is the European Convention on Human Rights (ECHR). Among other things, it establishes the prohibition of discrimination, the right to freedom of thought and, of course, my favourite: the right to privacy. These articles are clearly relevant to a discussion of brain rights.
Processing of brain data
But the rights in the privacy regulation can also be important. Whether the Regulation applies when we connect a computer to our brain depends on whether the data retrieved from the brain is to be included in a register, or whether the processing of the brain data is wholly or partly automated. Let us leave that question aside for now. What matters here are the principles the Regulation contains:
- Data must be processed in a lawful, fair and transparent way.
- Data must be collected for a specific purpose.
- The collection must be limited to what is necessary to achieve that purpose.
These principles are supplemented by specific provisions in the legislation:
- Processing is lawful only if there is a legal basis, such as consent.
- Consent must be freely given, specific, informed and unambiguous.
- Sensitive data enjoys special protection, and processing it is in principle prohibited. Brain data will obviously be sensitive.
- Citizens have a series of rights, such as the right of access, the right to erasure, the right to receive proper information, and so on.
We should also mention the accountability principle, which imposes a series of duties on companies, such as assessing the privacy consequences of their processing and keeping a good and complete overview of what data is processed and what it is used for.
Regulation of the brain
So are the principles of the privacy regulation applicable to a future regulation of the brain? To some extent, yes. The Regulation is technology-neutral and largely principle-based, so its principles can be applied to new challenges and "live with" technological development.
But a good regulatory model must be based on the particular challenges and opportunities of a specific technology. In this case, a brain connected to a computer.
The brain is something in itself.
It demands a completely different approach from the challenges we are debating today. In addition, there will be significant enforcement challenges, for example for a supervisory authority or a national or international court, especially since it takes a long time to get a decision in a case. If a state is to be sued for breaching the ECHR, many years will pass before the case is concluded.
Self-determination and power
There are also two other challenges: human self-determination and the desire for political power. Can and should we deny people the right to voluntarily take part in an ethically questionable research project, or to become a super soldier who can withstand a chemical attack? Once someone consents for the first time, it can create a pressure on consent whose end point we cannot foresee. And how do we regulate the superpowers? How do we regulate the United States if it wants to develop such a soldier?
I am not sure that today’s regulatory models are suitable for the development described in Dag Hareide’s book.
At the Data Inspectorate, we are now launching a regulatory sandbox with government support. Here companies can try out their technical solutions, such as a facial recognition app or a safe way to manage consents, before releasing them to the market. Perhaps this is a way forward in other areas too, where technology develops at a rapid pace and where technology, ethics and law are closely linked? And perhaps especially where we talk about "dual use", meaning that the technology developed can serve both military and civilian purposes.
I want Arild to be able to walk again, but I don’t want a super soldier to be able to control a swarm of drones just with his mind.