Value-aligned decision-making
Moral values are the criteria that guide our decisions; we use them to discern what is right and wrong. In AI, the problem of value alignment, i.e. ensuring that AI adheres to and respects our moral values, is of high importance. The research presented here looks at (1) formalising how we humans understand and prioritise values when we make decisions, and (2) defining methods to make AI reason and make decisions in accordance with that formalisation of our moral values.
The difficulty of this project lay in the gap between the team's expertise (Artificial Intelligence) and the project's area (AI and Ethics). As such, we collaborated with experts in Ethics (in particular Dr Paula Boddington and Dr Begoña Román) to fully understand what moral values are and how we use them to make decisions. Through monthly meetings with the experts and the literature they recommended, we formed an idea of the different ethical theories, the process of discerning right from wrong, how context affects this decision, and the relations between values, actions, and norms. With this knowledge we crafted a framework for modelling moral values and the preferences over them (called a value system). Assuming we knew the value system of society (i.e. the moral values society wants AI to uphold), we designed an approach based on utility theory to guide AI decisions. Overall, the outputs of this project have been some of my most cited papers, and they were the basis for two awards. This project evidences the need for multi-disciplinary research today and how collaborating with academics outside my area of expertise was a fruitful and enriching experience.
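To give a flavour of the idea, here is a minimal sketch in Python, assuming a simple rank-based weighting and purely illustrative value and action names (none of which come from the project itself): a value system is a preference ordering over moral values, and each candidate action is scored by how much it promotes or demotes those values, weighted by their rank.

```python
# Hypothetical sketch: a "value system" as a preference ordering over moral
# values, plus a utility score over actions. The weighting scheme and all
# names are illustrative assumptions, not the project's actual formalisation.

from dataclasses import dataclass

@dataclass
class ValueSystem:
    """Moral values ordered from most to least preferred."""
    preference_order: list[str]

    def weight(self, value: str) -> float:
        # Simple rank-based weighting: higher-ranked values weigh more.
        rank = self.preference_order.index(value)
        return len(self.preference_order) - rank

def utility(action_effects: dict[str, float], value_system: ValueSystem) -> float:
    """Aggregate how much an action promotes (+) or demotes (-) each value,
    weighted by that value's position in the preference ordering."""
    return sum(value_system.weight(v) * effect for v, effect in action_effects.items())

# Example: pick the action with the highest value-weighted utility.
society = ValueSystem(preference_order=["fairness", "privacy", "efficiency"])
actions = {
    "share_anonymised_data": {"fairness": 0.3, "privacy": -0.1, "efficiency": 0.6},
    "share_raw_data":        {"fairness": 0.3, "privacy": -0.8, "efficiency": 0.9},
}
best = max(actions, key=lambda a: utility(actions[a], society))
print(best)  # -> "share_anonymised_data"
```

The framework developed in the project is richer than this (it also relates values to actions and norms), but the sketch captures the basic idea of ranking decisions by a utility that reflects a given value system.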
However, in this project we assumed we already knew the value system AI should align with. Finding the value system of society is not straightforward; in fact, this problem is precisely what our current project "Learning ethics to guide AI decisions" will tackle.