In developing a governance framework for our ethics strategy, we've tried to find existing models to apply. Sources assessed include: the Berkman Klein Center for Internet and Society (in partnership with the MIT Media Lab); AI Now; AlgorithmWatch; the Open Data Institute; the Oxford Internet Institute; the Centre for the Future of Intelligence; Eticas; the European High-Level Expert Group on AI; the IEEE; and the British Academy and the Royal Society.
Finding a framework
Our assessment of these sources found no concrete models for AI governance at the company level. Most commentary and papers refer to the sorts of principles we have already established. We have therefore developed our framework from first principles, considering: who is accountable, for what, to whom and how.
Who is accountable?
Clearly, Koa Health must be accountable for its own actions and this will ultimately rest with the Executive Team and the Board.
We’re responsible for defining, delivering and iterating our ethics strategy. This applies to all of Koa Health’s work, including with whom we choose to work.
We must also be accountable for the trade-offs that we make in defining and delivering our strategy and we must accept the consequences of not delivering on our commitments.
Most obviously, we are accountable to our users, for whom we develop products. In addition, we are accountable to our staff and to our shareholders as the owners of Koa Health.
We are also accountable to society: both because our principles recognise that we must consider harm to others as we strive to support our customers to have the greatest possible health and happiness, and because companies in general do not operate in a vacuum - society affects them, and they, in turn, affect society.
How can we be held accountable?
Accountability for Koa Health’s ethics strategy is delivered through the framework set out below:
First, we have appointed a Head of Ethics, supported by an Ethics Committee with at least one representative from each team. The Head of Ethics is responsible for a number of tasks, including, but not limited to, the following:
- Developing an ethics strategy and iterating it over time;
- Establishing measures of success for the ethics strategy;
- Developing and implementing a delivery plan for the ethics strategy;
- Creating a process for ensuring any partnerships are aligned with our ethics strategy;
- Making recommendations on where trade-offs are required in delivering the Ethics Strategy;
- Acting as the arbiter (supported by the Ethics Committee) when disagreements arise on how to apply the ethics strategy;
- Investigating failures in delivering the ethics strategy, with a focus on learning what happened and what can be improved; and
- Appointing external auditors.
With regard to our ethical principles, governance and strategy, all proposals must be approved by the Koa Health Executive Team (of which the Head of Ethics is a member), and so operate with its collective authority. Any changes to the ethics strategy or its delivery must be approved by the Executive Team. To ensure alignment with company objectives, the Executive Team will review the ethics strategy and its delivery at least once a quarter.
On day-to-day accountability
Day-to-day responsibility for delivering the ethics strategy rests with the Head of Ethics. Therefore, if progress is poor or a significant problem occurs, he or she will be held immediately responsible, and this would become an issue for performance management, with an ultimate sanction of dismissal. Delivery issues may also reflect wider performance issues within the Executive Team, and it would be the responsibility of the Board to performance-manage these.
Ultimately, there is no legal sanction beyond those laid down in GDPR and human rights legislation. As such, until further legislation is enacted, there is nothing to compel either the Koa Health Executive Team or the Board to deliver this ethics strategy beyond their stated intention to do so. It is for this reason that it is important that we publish our strategy and external audits of our work, providing a degree of external sanction through public and professional opinion that would likely flow back into our position in the market.
We recognise that this is an imperfect form of accountability, but we believe it is the most effective approach at this time. In light of this, it is worth noting that we have decided not to appoint an external ethics committee for now. We are not ruling out introducing such a body in the future, but at present we feel it would not provide sufficient extra benefit over and above external audits.
As noted in the governance framework above, there will be occasions where our principles conflict with one another, and we will need to understand how to best make trade-offs between them.
In the first instance, we should try to ensure that our principles and commitments are clear enough to preclude trade-offs. However, we must accept that some trade-offs are inevitable, both between principles (and commitments) and between principles and our overall fiduciary duties as a company (including generating revenue). For instance:
- How far should we preserve privacy when this could limit our ability to undertake research which might, in turn, help us to improve the services we offer and their efficacy for our customers?
- How much explanation of our algorithms should we include in our apps, given the risk that unnecessary content undermines user engagement?
In such cases, we should first challenge ourselves to see to what extent there really is a trade-off. In the example of explainability above, can we use the explanation as a way to boost engagement?
Where a trade-off remains, we should use the potential impact on user health and happiness to decide on the right balance to strike. For instance, if we can't undertake research, we won't have any impact at all. Similarly, if we ignore engagement, we won't have any user base to help.
While this approach should provide a good means of ensuring we stay tied to our overall mission, it is not without its own challenges. This is due to the complexity of what we mean by health and happiness, for individuals and society, as described previously in this strategy. To provide some guidance on how we will manage this complexity, we must turn now to our underlying philosophy.
Ethical schools of thought that guide our approach
We’re blending three schools of ethical thought in our approach:
- Deontological - where being good is defined as following a predefined set of rules. Here, the journey matters
- Consequentialist/utilitarian - where what is good is what delivers the most of a particular measure of success. Essentially, we focus on the end goal rather than on the journey taken to get there
- Virtue ethics - where being good is defined by acting in the way that a virtuous person would. Here, the person making the journey matters
The reason for not simply choosing one normative framework is that, to grossly over-simplify thousands of years of philosophical debate, each school requires perfect knowledge for its theory to be applied in practice. For instance, consequentialism requires perfect knowledge of the different weightings of our various measures of success, how they change over time, and how they differ between individuals. In the absence of perfect knowledge, we believe our best course of action is to mix these normative approaches, accept that this will require us to make judgements when they conflict, and set up a process for making such judgements.
We will therefore make recommendations using the following framework:
1. We will work to maximise the health status of each of our users
2. Our actions to optimise health status will be bound by the following conditions:
- They must not significantly reduce the pleasure or purpose of the individual [and ideally they would promote it]
- They must not significantly reduce the control that an individual has over their affairs [and ideally they would promote it]
- They must not widen or reinforce any inequalities experienced by our users
3. We will learn and improve how to optimise health status within our boundary conditions
Even with the above framework, it will not be possible to set a precise threshold for trade-offs, and we will therefore need to make such judgements. However, it should help us to provide a rationale to justify the decisions that we make within the scope of our governance framework.