
Actioning a Moral Code Project

I wrote an article in the form of a GoFundMe page, more with the aim of making a point than of actually raising funds. But heck, maybe Elon Musk will stumble across it and support me to catalyse a Moral Code project ;) Enjoy the read.



Hi there, I'm a human. 

I'm fairly sure you're also a human. Not certain, just fairly sure.  

In the future, I'll get less sure. (Is that you, Mr Turing?)

But that's ok. Computers have helped us make decisions about almost everything for a decent amount of time. Perhaps it is finally time for them to make more decisions by themselves.

Google demonstrated its AutoML system in December 2017, an early example of machine learning being used to design other machine learning models (taught to fish).

I talk to machine learning experts all day and have been amazed to find that morality barely factors into computer science. I came to machine learning from psychology and a fascination with human decision making, so I have a completely different perspective on it than the individuals 'on the tools'.

In a recent conversation about machine learning with an engineer, he said "the clear trajectory is for computers to make more decisions by themselves."

I asked, given that morality is Human 101, whether any work was being done on a moral code - literally a code that represents what it means to be a moral decision maker, and which would then (ideally) guide any non-human decision maker.

He was unable to process my question. 

I don't know, maybe it's just me: I think we need a think tank dedicated to a pragmatic 'moral code'. Just in case. To see if it's even possible.

Any funds raised will be used to create such a think tank. I see it going a little like this:

1. I will organise the initial think tank, recruiting thought leaders from different fields and coordinating their approach to one question: how do we approach the creation of a moral code?

1.1 Is a moral code possible? If so, what could it look like? If not, what steps should we take to ensure morality in machine decision making?

2. I will run a website that documents the journey: the routes of thought taken, the points of contention, and so on. I will also organise community and media engagement, inviting contributions from the public.

3. Following the agreed approach, I will publish a piece of work, 'The Moral Code', that presents the findings of the group and the community. This work will be made available to the public, possibly for sale.

3.1. Any revenue generated by the think tank will be reinvested, used both a) to build a brand for 'The Moral Code' (15-25%) and b) to fund the ongoing Moral Code project.

3.2. A list of all organisations that have contributed to or helped the think tank will be published, and 'The Moral Code' will create a branded trademark (like the Fairtrade mark) that these organisations can use on their promotional collateral (e.g. "we're working towards a moral future" {logo}). The idea is to use the brand to further stimulate interest and participation in the project, and possibly to prompt corporate donations to it (companies that stand to benefit from machine learning will be encouraged to participate as a matter of due diligence).

4. The future of the project (redundancy or ongoing vision) will be outlined and the next round of funding will be organised. This will fund the perpetual maintenance of The Moral Code project and the ongoing efforts of the think tank.

5. The ideal long-term outcome is literally to have a piece of code that enforces moral decision making, or for the organisation to become an active lobbyist for ensuring morality in machine learning applications.

Finally, I don't know if such a project will succeed. I don't know if there would even be any progress. Heck, maybe it's so unjustified that 'computer says no' will be the final answer. That's the point: nobody knows.

The problem is time - how we perceive time. The amount of time between any two states of progress (the jump from point 1 to point 2, then from point 2 to point 3) is going to shrink so drastically that general AI could go from an impossibility to a pressing global concern overnight.

I hope we never need a moral code, truly; but I also know that if a moral code is going to save us, then logically we must begin it before we know we need it. And it needs to be one we can all agree on.

Let's start the conversation.
