Digital Morality in the Face of Terror
Updated: May 26, 2019
Nearly a year ago, I decided I would no longer sit on the sidelines talking about AI or digital ethics. I would start creating a solution.
I would work towards protecting people and organizations from AI-based cyber security threats and enabling individuals to personalize morality on their own terms.
I would do something to ensure that even the worst version of ourselves could be contained digitally, in a way that the Christchurch attack was not.
You know what I've learned over the last 12 months?
Creating an autonomous moral agent is hard. Ridiculously hard. The moment you try to paint morality in black and white, you see the colours bleed and separate uncontrollably. You realize you can't predict every iteration of behaviour, that there aren't enough IF-THEN commands to colour the human psyche, and that even deep learning models can't possibly identify or predict what they have never seen before.
While my academic and professional history has enjoyed discrete numbers and equations, I have to humbly accept the beauty of our condition. A human condition that is incredibly flawed, consistently inconsistent and often unpredictable. A condition fueled by ego and fear as much as by love and selflessness. Religion often feels diametrically opposed to science, but nothing has pushed me further into the grey area than considering artificial intelligence. And, in my effort to comprehend our ethics in 1s and 0s, I can't deny the decimals, nor the space between them.
In that space, I will continue to delve.
I hope that recent events in New Zealand have more people talking about the world's digital future. But more than that, I hope we start contributing more time, money and collaborative effort towards creating long-term solutions. Weapon reform is an obvious reaction, but I hope we also pay attention to mental health, and to how AI is currently trained to validate the beliefs and interests you already have, no matter how extreme.
My heart goes out to all who were affected by the events in Christchurch this month. While others find their own ways to contribute to a solution, I will work with my colleagues to quietly build a technology that enables trust and responsibility at the speed of digital. Hopefully, it will form a stronger foundation for things like risk prediction, threat alerts and autonomous safety nets for digital content, and ultimately keep all humans accountable.