
Coding a Moral Code

Updated: Aug 19, 2019

The Moral Code, with Nikolay Kadrileev.


He loves computer vision because you can see the results of the maths with your own eyes. In his world of BMW autonomy - and maybe in all of ours - autonomous driving is the 'next big thing' in applied ML.


At the time of the interview, he was working on technology that monitors the driver to detect when they're distracted. He sees it as a sort of interim machine learning solution, filling the gap between now and driverless, autonomous cars. I bet you my children will think of 'automatic' and 'manual' cars differently to how we do... (whaaaaat, you actually used to drive manual?!?!?). Language's evolution is interesting and seems to go hand in hand with innovation.


"Algorithms are SO complicated, mathematically we're hardly able to explain them... getting more advanced, take FB's face detection - already better than humans!"

The irony is not lost on me that the closer we bring computer vision to our own, the less we understand it.

I ask for his view on the areas of life and business that machine learning will improve, and his answer is simple: decision-making.


"Currently, computers help us make decisions. The clear trajectory is for them to make more decisions by themselves."

I ask: "are we building morality into our code now? Is there a moral code (literally) being written?"


Nikolay pauses, unsure how to process my question.


"If you're all working towards ML making decisions without us, then shouldn't we begin a 'moral code' project or something NOW?!"


After all, this is Human101.


"It's just input and output, though" - I can hear him shrug off the question and I don’t think he expected my response.

"Aren't we?"


"We would have to watch the behaviour and guess."

“We still haven’t definitively found consciousness in humans!” And we’ve been watching those things for a while...


How would we know whether we've created something that's alive if we don't even know how to prove we're conscious?! I've referenced panpsychism before - once dismissed as pseudoscience, it's now gaining traction - which suggests consciousness doesn't suddenly emerge (deus ex machina) but rather exists in even the smallest increment of computation (i.e. the biological computation happening within a sea urchin, or a network of tree roots). This makes me wonder if, in some way, even my phone has a small degree of consciousness. A worrying thought considering we've connected and networked billions of these things together - those 'pictures' of the internet look disturbingly like pictures of the brain. I have heard the argument that general AI could already exist - maybe it's just biding its time... I joke, but the idea has some weight IMO.


Machine learning to some degree simulates the mechanics of the mind, and yet there are at least two fundamental differences in how we process information:

  1. Our senses are different, so our drives are different. Life on earth basically sparked from a need to move and then to reproduce. (A conversation from the film Ex Machina comes to mind: “why give it a gender?”)

  2. Plasticity. Humans restructure their minds over time, which we can simulate in software but which doesn't actually happen at the physical hardware level (IBM has talked about fluid-based neurosynaptic chips). Did you know that in many cases the neurons that encoded an amputated limb actually repurpose themselves, donating their resources to representing other parts of the body?


Whether ML will ever lead to general AI is still shrouded in uncertainty, but the Markov property sounds a heck of a lot like human memory recall (and identity...) to me. Perhaps only when we solve these two problems in computer science (drive and fluid representation) will we create truly capable - and possibly aware - machines.
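
For anyone who hasn't met it, the Markov property just says the next state depends only on the current one, not on everything that came before - which is the bit that reminds me of recall. A throwaway Python sketch (the states and probabilities are invented purely to illustrate the memorylessness, nothing more):

```python
import random

# Toy Markov chain: the next "state of mind" depends only on the current
# state, never on the full history. States and probabilities are made up
# purely to illustrate the memoryless property.
transitions = {
    "focused":     {"focused": 0.7, "distracted": 0.2, "daydreaming": 0.1},
    "distracted":  {"focused": 0.4, "distracted": 0.4, "daydreaming": 0.2},
    "daydreaming": {"focused": 0.3, "distracted": 0.3, "daydreaming": 0.4},
}

def next_state(current):
    """Sample the next state from the current one alone - the chain has
    no memory of how it arrived here."""
    states, probs = zip(*transitions[current].items())
    return random.choices(states, weights=probs, k=1)[0]

state = "focused"
history = [state]
for _ in range(10):
    state = next_state(state)  # only `state` is passed in, never `history`
    history.append(state)

print(" -> ".join(history))
```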


Is it disturbing that the people building ML models aren't even considering a moral code?

From the original post, a good comment from Lex Hager:

"The problem with programming morality is that a normative approach will lead to conflicting ethical rules at some point. An empirical approach would have a car learning morality from human drivers (lol). A realistic approach probably lies somewhere in the middle. Interesting stuff!"


Edit: since this post, I came across a project doing exactly this! http://moralmachine.mit.edu
