If ‘Trust is a must’ for AI governance — here are 3 things regulators should do

Hilary Sutcliffe
5 min read · May 21, 2021


Co-authored with Sam Brown, Director, Consequential

“On artificial intelligence, trust is a must, not a nice-to-have,” proclaimed the European Commission’s digital chief, Margrethe Vestager, in a statement launching the long-awaited European Union rules on Artificial Intelligence (AI). She is right. The new rules and actions the EU has proposed will be fundamental to earning citizens’ trust in AI technologies. But it is not only the technologies that need to be seen as trustworthy; the system of governance itself, and the regulators applying it, must also earn public trust.

The increasing weight citizens give to governance as a basis for trust has been evident in attitudes to Covid vaccine governance across the world, where trust in the approvals process mattered as much as trust in the vaccines themselves in combatting vaccine hesitancy.

Its importance in the digital space was reinforced by the UK government’s Centre for Data Ethics and Innovation, which found in its Covid-19 Repository & Public Attitudes 2020 Review that “trust in the rules and regulations governing technology is the single biggest predictor of whether someone believes that digital technology has a role to play in the COVID-19 response. This trust in governance was substantially more predictive than attitudinal variables such as people’s level of concern about the pandemic, or belief that the technology would be effective; and demographic variables such as age and education.”

But what do EU member state regulators of AI need to do to be seen as trustworthy and so earn public trust in their approach? Three factors identified in our recent research into trust and technology governance may help them, and those involved in governance in the UK and worldwide, with this critical task:

1. Ensure effective enforcement

Our research shows that citizens trust governance most when they can see it is working — when governance institutions visibly stand up for the public interest, values are upheld, laws enforced, organisations penalised, breaches published. They are most likely to lose trust where they perceive that regulators are more concerned with smoothing the path of tech development and prioritising financial concerns over ethics, societal values and human rights.

The proposed EU AI laws are genuinely innovative in trying to tread the line between promoting innovation and upholding European values, identifying clear ‘unacceptable’ and ‘high-risk’ areas that require special attention while giving a green light to less contentious ones. So far so good. Further steps in being responsive to the concerns of civil society representatives and citizens about protecting human rights and EU values in how enforcement happens will be necessary to uphold the public interest as the governance and the technology evolve.

The main enforcement mechanism is the imposition of significant financial penalties on tech companies, of up to 6% of global turnover, with assessment largely down to self-regulation. Will this be enough? Historically, even large financial penalties have often been factored in as ‘a cost of doing business’, and self-regulation has left behaviours largely unchanged.

As this new legal framework is implemented, member states may need to consider more innovative and effective approaches than fines alone to ensure this regulation does its job of holding companies to account, and so earns the trust of citizens.

2. Be more human, open, communicative

People feel more confident about regulation when they know more about who is in charge. In the UK, for example, 82% of people felt more protected when they had heard of the regulator, and 67% would like to know more about what regulators do, according to the PA Consulting report Rethinking Regulators.

Under this new legal framework, member states will be required to appoint one or more national competent authorities to enforce the regulation at the national level. It will be important for trust that member states widely publicise who these bodies are and what they are responsible for, and encourage them to open up much more about how they work, what they do and how their approach is taking effect. Though uncomfortable for some, it is particularly helpful when regulator representatives are named and visible, and get out into the community and media to talk about what they do, how it is working, and when it is not.

‘Be less aloof, more open, more human’ is a recurring theme in citizen dialogues exploring trust in regulators. They should start with the website. It is surely no coincidence that the UK’s two most trusted regulators — the Food Standards Agency and the Human Fertilisation and Embryology Authority — have the best websites, which are written in plain language, make clear what they do and give evidence that they are open, accessible and inclusive in their approach to their role.

3. Empower us and develop inclusive relationships with citizens

The underlying assumption in the regulation is that EU citizens will reward trustworthy companies with their cash and attention. Given the confusing nature of AI, and of regulation, we may need a lot more help than is currently available to better discern the trustworthy from the untrustworthy.

Our research shows that citizens would actively like regulators to provide ‘consistent ways to judge companies’, and want accessible information to ‘help us help ourselves and educate us about important issues’.

Furthermore, if trust really is a must, then regulators of AI in the EU and beyond may need to develop a new, more inclusive relationship with citizens, involving them directly or through impartial intermediaries in the complex ethical judgements which will need to be made. An example of this in action may be found in the UK with the Citizens’ Biometrics Council, convened by the Ada Lovelace Institute, which will directly inform policy and governance of facial recognition technologies.

This is not only because, as TIGTech research highlights, citizens “are more likely to trust a decision that has been influenced by ordinary people than one made solely by government or behind closed doors”; nor even because the more diverse the perspectives incorporated into decision-making, the wiser the judgements; but because the involvement of citizens gives the governance design process more democratic legitimacy, and so greater perceived trustworthiness.

Going Forward

These three considerations will go a significant way towards earning citizens’ trust in the governance of AI, and potentially then their trust in its many diverse applications. They may also shift the role of regulators, moving them, as PA Consulting proposes, from being “Watchdogs of Industry, to Champions of the Public”. The Commission’s new rules are paving the way for this role, but they will only be effective if regulators in member states and beyond succeed in upholding the public interest through effective enforcement and meaningful engagement, helping AI fulfil its potential for societal good.

Hilary Sutcliffe, Society Inside, Director of the Trust in Tech Governance initiative www.tigtech.org

Sam Brown, Director, Consequential

