This is definitely one for Safety People who like to think!
Say Something that Makes Sense
When Francis Fukuyama was asked what he thought was the most dangerous idea in the world he responded, ‘transhumanism’. Transhumanism is the quest to liberate humans from their biological constraints (e.g. mortality and fallibility). Transhumanism is projected as an ideal when in fact it is a quest for immortality. The Transhumanist quest is based on a denial of humanness and a naïve faith in technology. The general idea is that technology is the savior of humans from the suffering of mortality. Fukuyama added: ‘transhumanists are just about the last group that I’d like to see live forever’. This is because transhumanism is essentially a very religious quest; the quest for perfection is the quest to be god.
The idea of Transhumanism is at the foundation of zero discourse (discourse is about the nature of power in language). Zero discourse, because of its binary assumptions, cannot reconcile the challenge of mortality/fallibility with a desire for safety. The binary (black and white) assumptions of zero discourse exclude any in-between or grey. So, zero idealists become Transhumanists because of their own binary assumptions and hence take on the trajectory of trans-human perfection.
What Transhumanism leads to is a strange way of speaking about risk and safety. It is because of this inability to reconcile fallibility with safety that the Transhumanists end up speaking crazy gobbledygook. The language is gobbledygook because it doesn’t make sense in the light of human mortality and fallibility. How strange to speak to each other and demand perfection. No amount of wishing or projection about technology can change the fact that humans are limited and that their biological constraints are essential and good for the human condition. Transhumanism is not good for humans because it denies many things that are good in life. The paradox is that any push towards Transhumanism robs humans of the very qualities that make living real. This is why Risk Makes Sense.
Transhumanism has a trajectory that is anti-learning. If a human becomes all-knowing and there is no uncertainty, no risk, then there is nothing to learn. If there is nothing to learn then there is nothing to risk and hence nothing to feel. The human then falls into ‘apathea’: without emotions, without need for the senses, without the building of resilience. The excitement of discovery is gone, the fulfillment of achievement is gone, the exhilaration of choice is gone and the empathy with others in their struggles is gone. The desire to eliminate all suffering, pain, harm and misfortune is also anti-human. If humans transcend suffering, pain and harm, then what is it to be human? What then of the trade-offs and paradoxes associated with living? Transhumanism is not about living but rather an escape from living. Nor is being human about being fatalistic; that just falls back into the binary trap.
So, let’s have a look at some of this Transhumanist discourse.
Example 1. ‘All accidents are preventable’. Of course not, what a silly thing to say. To err is to be human; to deny error is to be Transhuman. But this is not just about error: the rejection of mistakes is the rejection of all that is learned through mistake and misfortune. Hallinan (Why We Make Mistakes) demonstrates just how many good things have transpired for humanity because of mistakes. The only humans who can see with foresight in the same way as hindsight must be the Transhumans.
Example 2. ‘Safety is a Choice You Make’. What a crazy thing to say, as if all of life is a matter of free will. The wish of the Transhuman is to transcend the complexities of life and choice, and have technology eliminate all risk. If there is no risk, then there is no longer any choice, and without uncertainty and choice there is only apathea. The nature of risk is that it makes us fully human; we must learn resilience in the face of much of life that is not free but rather socially, culturally and environmentally determined. The journey of living to wisdom is paved by the many risks, lessons and experiences we have in life. To speak otherwise is to demonize what it is to be human.
Example 3. ‘How many people do you want to harm today?’ What a crazy question; of course the answer is no-one, but this doesn’t mean the answer must be zero. The answer to this question is another question: ‘why do you want to ask such a silly loaded numerical question?’ There are many loaded questions like this that don’t enable discussion about the complexities of living but rather seek to close down dialogue in favour of the one true efficient answer. Yet we know in real life that every circumstance, every element of life, every human will and wish cannot be controlled. To be able to control all things would be Transhuman. Rather, we need to ask questions and speak language that makes sense.
The best way to be fully human is not to speak language that is gobbledygook, and certainly not to rely on technology as the savior of the human condition. For example, the latest thinking with automatic cars is the need to give these cars choice. Surprise, surprise, they want to make the cars human-like to prevent accidents (http://www.canberratimes.com.au/technology/technology-news/humans-are-slamming-into-driverless-cars-and-exposing-a-key-flaw-20151222-gltebr.html). Just like other failed ideas such as Google Glass, humans want to remain human when they see what the technology does to humanness.
The first question the human needs to ask of Transhuman language is ‘where is this taking us?’ If we follow the logic of your gobbledygook, what is the trajectory? What trade-offs are needed to take this trajectory? And what kind of person is needed to fulfill the goal of your language?
Do you have any thoughts? Please share them below.