Our Approach to Ethics - Appendix

Consumer views on trust and ethics

People increasingly think about trust in terms of transparency and data privacy, and they consistently say that they want control over their data.

A survey of 4,000 people in the USA, conducted by Rock Health in autumn 2018, asked respondents whom they would be willing to share health data with. The proportion willing to share with each party was:

  • My GP - 72%
  • My health insurer - 49%
  • My pharmacy - 47%
  • Research institution - 35%
  • Pharmaceutical company - 20%
  • Government organisation - 12%
  • Tech company - 11%

It is unsurprising that GPs top the list, given the strong professional framework that underpins their trusted position with patients and the public, something that is true to a lesser extent for pharmacists.

It is somewhat surprising that health insurers come second, but we can speculate that respondents feel their insurer is on their side: the insurer does well when its customers stay healthy.

The fact that tech companies score badly is not surprising, but that they score worse than government organisations is a testament to the impact of recent scandals. Rock Health dug deeper into this category to see which tech companies consumers were more likely to trust:

  • Google - 60%
  • Amazon - 55%
  • Microsoft - 51%
  • Apple - 49%
  • Samsung - 46%
  • Facebook - 40%
  • IBM - 34%

Rock Health themselves express surprise at this result, having expected Apple to score highly given how prominently it promotes privacy as a core value. Indeed, the top and bottom positions on the list are hard to rationalise. Perhaps the most that can be said is that consumers do not see much differentiation between tech companies. This suggests that we, or any company, will have to work hard to truly demonstrate that we can be trusted with health data.

Ethical frameworks we have drawn on

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

The initiative is an extensive effort by the IEEE to develop standards specific to different uses of AI, such as robotics, mixed reality software (VR, AR, etc.), and autonomous weapons systems. The entire effort is guided by a set of general principles:

  • Human Rights - ensuring that AI does not infringe on internationally recognised human rights
  • Prioritising wellbeing - the initiative defines wellbeing as human satisfaction with life and the conditions of life, along with an appropriate balance between positive and negative affect
  • Accountability - designers and operators of AI should be aware of what an AI is doing and why, and be able to take responsibility for its actions
  • Transparency - AI systems should be able to explain why they took an action, both to experts and lay individuals
  • Awareness of potential misuse of technology - with a focus on education of developers, operators and users of AI

The IEEE is also doing substantial work to understand what values should be embedded in AI so that it can do good and be ethical. In doing so it inevitably struggles to produce a set of universal values, instead highlighting that norms must be identified for a particular community. This embrace of moral relativism is, however, constrained by universal human rights.

The European Commission’s High-Level Expert Group on AI

This group takes a rights-based approach, drawing on the Universal Declaration of Human Rights and the EU Charter of Fundamental Rights, with a particular focus on the following: respect for human dignity; freedom of the individual; respect for democracy, justice and the rule of law; equality and non-discrimination; and citizens’ rights. The group then sets out five principles for AI:

  • Beneficence - 'do good' - improve individual and collective wellbeing. This is not defined in great detail
  • Non-maleficence - 'do no harm' - reference is made to various human rights but a full definition of harm is not given
  • Preserve human agency - humans must remain in control of their own actions and decisions and not be undermined by the AI
  • Be fair - ensure that the development, operation and use of AI remains free from bias
  • Operate transparently - be able to explain the operations of AI to people with varying degrees of knowledge. This principle also relates to transparency with respect to business models

Confusingly, having set out five principles, the EC group then lists ten separate requirements of Trustworthy AI: accountability; data governance; design for all; human oversight of AI; non-discrimination; respect for human autonomy; respect for privacy; robustness; safety; and transparency. It is not clear how these requirements map onto the principles.

The Association for Computing Machinery

The ACM published its Statement on Algorithmic Transparency and Accountability in January 2017, setting out seven principles:

  • Awareness - owners, designers, builders, users and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation and use, and the potential harm that biases can cause to individuals and society
  • Access and redress - regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions
  • Accountability - institutions should be held responsible for decisions made by algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results
  • Explanation - systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made
  • Data provenance - a description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides the maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorised individuals
  • Auditability - models, algorithms, data and decisions should be recorded so that they can be audited in cases where harm is suspected
  • Validation and testing - institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess whether the model generates discriminatory harm, and are encouraged to make the results of such tests public (a sketch of one such test follows this list)
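
To make the validation-and-testing principle concrete, the sketch below shows one common form such a routine test can take: comparing a model's positive-decision rates across two groups and flagging the result when the ratio falls below the "four-fifths" rule of thumb. This is a minimal illustration in Python under assumed inputs (binary predictions and a binary protected attribute); the ACM statement does not prescribe any particular metric or threshold.

    # Sketch of a routine fairness test, in the spirit of the ACM
    # "validation and testing" principle. Inputs are assumptions for
    # illustration: binary predictions (1 = positive decision) and a
    # binary protected attribute (group labels 0/1). The 0.8 threshold
    # is the common "four-fifths" rule of thumb, not a figure taken
    # from the ACM statement itself.

    def selection_rate(predictions, groups, value):
        """Fraction of positive decisions within one protected group."""
        in_group = [p for p, g in zip(predictions, groups) if g == value]
        return sum(in_group) / len(in_group) if in_group else 0.0

    def disparate_impact_ratio(predictions, groups):
        """Lower group's selection rate divided by the higher group's."""
        rates = sorted([selection_rate(predictions, groups, 0),
                        selection_rate(predictions, groups, 1)])
        return rates[0] / rates[1] if rates[1] > 0 else 1.0

    # Toy data: the model favours group 0 (80%) over group 1 (40%).
    preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
    groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    ratio = disparate_impact_ratio(preds, groups)
    print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
    if ratio < 0.8:
        print("Potential discriminatory harm - investigate and document.")

Recording the ratio alongside the model version and the test data, in line with the auditability principle above, is what makes a later audit possible when harm is suspected.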

The UK House of Lords Select Committee on AI

The Select Committee has also proposed five principles:

  • AI should be developed for the common good and benefit of humanity
  • AI should operate on principles of intelligibility and fairness
  • AI should not be used to diminish the data rights or privacy of individuals, families, or communities
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI
  • The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI