The Four Pillars of Responsible AI

6th April 2024, by Ismael Kherroubi Garcia

Jamillah Knowles / Better Images of AI / Data People / CC-BY 4.0

Our approach to responsible artificial intelligence (AI) is theoretically informed, evidence-based, and academically rigorous. This post describes some of the theory behind the Four Pillars of Responsible AI, which enable us to design and implement organisational mechanisms that ensure responsible approaches to AI.

There are dozens – even hundreds – of frameworks on the ethics of AI technologies.¹ Whilst the number of relevant documents will continue to grow and inform our practices at Kairoi, we promote four pillars of responsible AI, which ensure that our approach starts from organisational practices. Indeed, we prioritise organisations and their values when helping clients design, develop, deploy, adopt and govern AI responsibly and thoughtfully.

By prioritising how our clients operate, we ensure that our interventions are feasible and impactful. We work within the constraints of real practices, collaborating with different business units, ensuring buy-in from across an entire organisation, and, ultimately, driving culture change.

To help organisations adopt responsible AI cultures,² we articulate practices according to the four pillars of responsible AI. In a sense, these are not pillars but categories of responsible AI actions.

The Four Categories of Responsible AI Actions for Organisations

The four categories are implicit in the influential paper, The global landscape of AI ethics guidelines, in which Jobin et al. review 84 documents containing guidelines for AI.³ The authors’ analysis identified eleven moral values that recurred across many of those documents. They also found that the values were not interpreted equally across documents. This is not surprising, as it is difficult to imagine universally valid conceptions of such abstract ideas. Importantly, the authors also identify over thirty different methods for enacting organisational values when working with AI; and it is among these methods that our four categories are uncovered.

The methods organisations may choose to follow for responsible AI are extremely diverse, and often informed by organisations’ size, industry, finances, clients and other contextual factors. Notwithstanding, we need some tool or heuristic for a shared understanding of responsible AI. Luckily, we can find trends when analysing different organisations’ methods. And the organisational mechanisms Jobin et al. identified can be categorised as either:

  • Better Communications,
  • Relevant Technical Standards,
  • Meaningful Public Engagement, or
  • Robust Governance.

These four categories inform all of our work at Kairoi. In 2023, we conducted a mapping exercise, effectively demonstrating how our different projects fit within these different categories.⁴ And, whilst they may have started out as mere categories, the four pillars are now used to bolster real change and support innovative solutions for responsible AI.

So, what sort of change is buttressed by the pillars of responsible AI? Let’s see each in turn.

Better Communications

Better Communications relates to how organisations communicate internally and externally about AI. How organisations communicate about AI can have significant impacts on the wellbeing and AI literacy of staff, the public and other stakeholders. Consider the cultural background that helps us share a language about AI.

Works of fiction commonly depict advanced technologies as humanoid robots; often as frightening machines, such as in The Terminator, and sometimes as machines capable of manipulating humans, such as Ex Machina’s Ava. This humanoid depiction of AI exemplifies a common risk in how we communicate about AI, called anthropomorphism, which distracts from the reality that AI technologies are not like humans, but are computational systems that process data to identify patterns and output predictions.

In the context of a rise in confusion⁵ and anxiety about AI,⁶ organisations have a duty to communicate about AI with their staff, investors, clients, providers and wider stakeholders in ways that are informative and accurate. One excellent resource to draw on when communicating about AI more effectively is the Better Images of AI library,⁷ which hosts images that speak to real challenges and practices related to AI.

Relevant Technical Standards

Relevant technical standards refer to those rules and practices often promoted by standards development organisations (SDOs), such as ISO, ITU and the IEEE. SDOs have been seen as an important source of intelligence for AI-related practices.⁸ This is because they operate through consensus-building to ensure diverse stakeholders and experts create high-quality standards. SDOs may also issue certifications for companies to demonstrate publicly that they meet certain criteria.

This pillar helps contextualise AI as a technical practice. When designing and implementing change so that organisations approach AI more responsibly, we cannot forget that AI encompasses many technical concepts from data science, engineering, mathematics, statistics, computer science, and other disciplines. This ensures that the change we advocate for is grounded in the realities of the AI lifecycle.

For example, a common challenge in responsible AI pertains to how “fair” is conceived of in mathematics as opposed to philosophy and law.⁹ Meanwhile, lawmakers themselves have claimed to struggle to understand and, therefore, legislate on the technology.¹⁰ These complex issues have inspired us at Kairoi to promote interdepartmental discussions in organisations, enabling IT and engineering departments to engage with departments such as marketing, finance and human resources. This helps foster a shared understanding about the technology.

But SDOs’ outputs may not always be “technical” in the above sense. ISO 42001 for “AI management systems,” for instance, refers to practices that must be “integrated with [an] organisation’s processes and overall management structure.”¹¹ Meanwhile, repositories of standards have included organisational practices, such as the Portfolio of AI Assurance Techniques including Kairoi’s “Responsible AI Interview Questions.”¹²

Meaningful Public Engagement

Meaningful public engagement refers to the process of bringing different stakeholders into the AI lifecycle, which generally encompasses design, development, deployment and governance. 

When designing a new AI tool, for instance, it is good practice to ensure potential users are part of the conversation and inform the problem you are trying to solve, as well as the relevance of the suggested solution. At the development stage, which involves data collection, labelling and machine learning, it is valuable to document decisions and discuss them with diverse voices, responding to and integrating feedback that may otherwise be unavailable. When it comes to the deployment and governance of AI, innovators must be able to continue monitoring the impacts of the tool, gathering further feedback and either improving it iteratively or terminating it. Through the process of public engagement, organisations can instil trust in how they approach AI-related decisions, and ensure their AI initiatives entail positive impacts for different communities.

But what does meaningful public engagement look like? Whilst some of the methods used by big tech firms for public engagement have been heavily criticised,¹³ there are both theoretical foundations and practical forms of public engagement that we can build on in the AI space. At a theoretical level, Sherry Arnstein’s 1969 “ladder of citizen participation” remains relevant, suggesting that we may engage with people in ways that hand them more or less power in decision-making processes.¹⁴ At a practical level, lessons can be learned from how the environmental justice movement has influenced environmental legislation,¹⁵ and how public involvement has evolved in the context of healthcare.¹⁶

Robust Governance

Robust governance is both about how organisations manage themselves, and how they relate with policy makers. Regarding internal governance, organisations must comply with relevant legislation. Whilst many new laws have emerged targeting AI in particular,¹⁷ many more legislative frameworks and authorities are relevant to AI. The EU’s General Data Protection Regulation (GDPR), for example, has been cited on various occasions when concerns have been raised about the AI-powered chatbot ChatGPT.¹⁸ Meanwhile, antitrust regulators in the US and the UK have been scrutinising Microsoft’s tie-up with the company OpenAI.¹⁹

Beyond compliance, robust governance can mean developing mechanisms that ensure thoughtful practices relating to AI. Much like with meaningful public engagement, we do not need to reinvent the wheel. Research ethics committees (or institutional review boards) are a common approach to reviewing the ethical implications of biomedical research and, increasingly, other research disciplines in academia. In our own research with the Ada Lovelace Institute, we developed a series of recommendations for a similar model to be implemented in AI-related research.²⁰

Externally, robust governance means informing policymakers with industry- and context-specific perspectives and experiences regarding AI. Kairoi advocates for all our clients to engage with public calls for evidence, and we have done so ourselves, with both governmental²¹ and non-governmental²² bodies.

Concluding

The Four Pillars of Responsible AI allow us to both categorise and devise initiatives for thoughtful approaches to AI that are pragmatic and impactful. The pillars build on sound research and theory to help organisations strive to do better. And they are grounded in real practices, enabling diverse values to be enacted, rather than imposing a worldview.

This is Kairoi’s approach to AI Ethics. If you want to learn how your organisation can adapt to make the most of AI, contact us at hello@kairoi.uk.


References

¹ Dotan, R. (2021) The Proliferation of AI Ethics Principles: What’s Next? Montreal AI Ethics Institute, online [accessed 06 April 2024]

² Gordon, A. (2023) Building a responsible culture of AI Innovation, Torchbox, online [accessed 06 April 2024]

³ Jobin, A. et al. (2019) The global landscape of AI ethics guidelines, Nature Machine Intelligence, 1, pp. 389–399, DOI: 10.1038/s42256-019-0088-2

⁴ Kherroubi Garcia, I. (2023) 2023 in Review, Kairoi Blog, online [accessed 06 April 2024]

⁵ Kherroubi Garcia, I. (2023) From the Director’s Desk: Reflections on the State of AI Ethics, Kairoi Blog, online [accessed 06 April 2024]

⁶ Leffer, L. (2023) ‘AI Anxiety’ Is on the Rise—Here’s How to Manage It, Scientific American, online [accessed 06 April 2024]

⁷ We and AI (n.d.) Images of AI Library, Better Images of AI, online [accessed 06 April 2024]

⁸ Thomas, C. & Florian, O. (2024) Enabling AI governance and innovation through standards, UNESCO, online [accessed 06 April 2024]

⁹ Green, B. (2022) Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness. Philosophy & Technology, 35(90), DOI: 10.1007/s13347-022-00584-6

¹⁰ Kang, C. & Satariano, A. (2023) As A.I. Booms, Lawmakers Struggle to Understand the Technology, The New York Times, online [accessed 06 April 2024]

¹¹ ISO (2023) ISO/IEC 42001:2023: Information Technology: Artificial intelligence: Management system, online [accessed 06 April 2024]

¹² Kherroubi Garcia, I. (2023) Kairoi contributes to CDEI Portfolio of AI Assurance Techniques, Kairoi Blog, online [accessed 06 April 2024]

¹³ Caplan, R. (2022) Networked Governance, Yale Journal of Law and Technology, 24, online [accessed 06 April 2024]

¹⁴ Arnstein, S.R. (1969) A Ladder of Citizen Participation, Journal of the American Institute of Planners, 35(4), pp. 216–224

¹⁵ Gilman, M. (2023) Democratizing AI: Principles for Meaningful Public Participation, Data & Society, online [accessed 06 April 2024]

¹⁶ Grotz, J. et al. (2020) Historical and Conceptual Background of Public Involvement, In: Patient and Public Involvement in Health and Social Care Research, Palgrave Macmillan, Cham, DOI: 10.1007/978-3-030-55289-3_2

¹⁷ Patelli, A. (2023) AI: the world is finally starting to regulate artificial intelligence – what to expect from US, EU and China’s new laws, The Conversation, online [accessed 06 April 2024]

¹⁸ Chan, K. (2024) ChatGPT violated European privacy laws, Italy tells chatbot maker OpenAI, The Associated Press, online [accessed 06 April 2024]

¹⁹ M, M. et al. (2023) Microsoft, OpenAI tie-up comes under antitrust scrutiny, Reuters, online [accessed 06 April 2024]

²⁰ Petermann et al. (2022) Looking before we Leap: Expanding Ethical Review Processes for AI and Data Science Research, Ada Lovelace Institute, online [accessed 06 April 2024]

²¹ Kairoi Ltd. (2023) RE: Consultation on policy proposals for UK’s pro-innovation approach to AI regulation, Kairoi GitHub, online [accessed 06 April 2024]

²² Kairoi Ltd. (2023) RE: AI Skills for Business Framework Feedback, Kairoi GitHub, online [accessed 06 April 2024]

Author


Ismael Kherroubi Garcia, FRSA

Ismael is the founder and CEO of Kairoi.

You can find him on LinkedIn, Mastodon and Twitter.