Shaping AI with the Open Modelling Foundation

15th January 2025, by Ismael Kherroubi Garcia


Since October 2023, we have been supporting the Open Modelling Foundation’s (OMF) vision for “a common suite of ethics, standards, protocols, and best practices that enable modelling scientists to share knowledge and build on one another’s research.”¹ On 10th January 2025, the project culminated in a publication in a prestigious academic journal.

Ten Simple Rules for Good Model-sharing Practices (“the Paper,” hereafter) is freely accessible in PLOS Computational Biology² and captures learnings from a series of workshops led by the OMF in early 2024. The Paper is relevant to computational modelling across scientific domains and modelling practices, including artificial intelligence (AI) and machine learning (ML). The breadth of domains that can learn from the Paper owes much to the great multidisciplinary team that worked on it: 22 authors from across three continents.

For Kairoi, the Paper supports two of the four pillars of responsible AI particularly well: meaningful public engagement and better communications.³

The ten simple rules for good model-sharing practices:

1. Define what you mean by “model”
2. Involve the community in informing and promoting model-sharing practices
3. Acknowledge diverse contributions
4. Provide accessible documentation for the appropriate audience
5. Embrace FAIR principles for sharing models
6. Publicly recognize and reward research software engineers
7. Deploy user-friendly tools for collaborative modelling practices
8. Influence publishers to promote good model-sharing practices
9. Break down silos
10. Don’t wait for perfection when sharing models
Read and download the full paper from https://doi.org/10.1371/journal.pcbi.1012702

Meaningful Public Engagement

The Paper advocates for developing models in ways that can be interpreted by many stakeholders. “Model-sharing” is not just about sharing, but about making what is shared valuable. To this end, the Paper begins (rule 1) by recommending that developers define what they mean by “model,” being explicit about a model’s (i) domain, (ii) type and (iii) purpose. In the context of AI, we may want to be clear that a model is designed (i) for use in, say, medical devices; (ii) on some deep learning architecture;⁴ and (iii) for the purpose of predicting certain medical conditions.

The Paper later (rule 4) introduces certain stakeholders of computational models who benefit from good model documentation; namely, policy-makers, domain experts, archivists, and fellow model developers. When it comes to AI, policy-makers gain from having a clear understanding of AI technologies,⁵ and domain experts can understand the assumptions underpinning an AI model (e.g. medical device developers, to continue with the domain above). Similarly, archivists or librarians can help store AI models in ways that are useful to their colleagues in model development, who need not reinvent the wheel where relevant models are already available.

In the context of responsible AI, we would introduce the need for transparency for the benefit of two more stakeholders: AI tools’ buyers and end users.⁶

  • Buyers are those who make decisions about deploying technologies that impact groups of people. They include banks, schools, hospitals, charities and so on. One benefit of more clearly defined and explained AI models for buyers is that they can conduct more thorough procurement processes. In other words, a hospital that procures a new medical device incorporating some AI features can be clear about what those features are and where they add value. A related benefit is that buyers can be transparent and demonstrate competence when explaining why a certain tool is being deployed. They can also create clear governance mechanisms and training content so that staff use those tools properly.
  • End users are those who use AI-powered tools, knowingly or not. Users who know that AI is present in their software or hardware benefit further from knowing what that actually means. For example, a smartwatch with an AI algorithm for detecting irregularities in the user’s heart rate can be transparent about that algorithm’s limitations. Conversely, users who do not know that a smartwatch’s health notifications are AI-generated would be better informed, and less in the dark about how the device works, if this were disclosed.

Better Communications

AI tools rely on theory (from information sciences, statistics, physics, and so on), compute power (to run the algorithms and calculations), data (which they are trained on), and people (who establish the theory and build the tools). However, far too often, “AI” is reduced to a buzzword used to position a product as innovative and cool. The result in recent times has been quite the opposite: including “AI” in a product’s name may reduce consumers’ purchase intentions.⁷

This is why we, at Kairoi, take a strong stance on understanding “AI” as a product of science (where “science” is a social process, but that’s for another time). When a service or software is marketed as “AI,” the consumer should not be struck with fear or undue excitement. We want consumers to know that “AI” refers to a range of features and practices that emerged from scientific work. In this sense, the Paper is a great reminder that many good AI practices do indeed result from science.

In particular, the Paper speaks of the need to acknowledge and value the role of diverse contributions to a model’s development, and introduces various tools to do so. For example, the Contributor Roles Taxonomy (CRediT) helps describe the work done towards data curation, analytics and software development throughout a model’s lifecycle (rule 3). Importantly, CRediT emerged as a practice for academic publications.⁸

Transparency about who is involved in an AI model’s development aligns with the responsible AI pillar of better communications. By emphasising the great deal of work involved in building AI tools, acknowledging diverse contributions reminds consumers that AI relies on human labour.

Tools like CRediT also help explain that AI, as a scientific product, must rely on many perspectives, including those of domain experts with little to no computer science training (rule 7). This shows that good AI tools must be built not only on mathematical theory and coding skills, but also on a strong understanding of the domains where they are to be implemented, which may be the province of sociologists, philosophers, policy-makers, and so on.

Concluding

We are thrilled to have facilitated the workshops that led to the publication of Ten Simple Rules for Good Model-sharing Practices. Fundamentally, this is a paper about cutting-edge open science practices being applied to computational modelling. But this post shows that the paper’s message can directly inform more thoughtful practices in AI: the very practices we advocate for and advise on at Kairoi.

Contact us

hello@kairoi.uk

References

¹ OMF (2024) Open Modelling Foundation Charter, online [accessed 15 January 2025]

² Kherroubi Garcia, I. et al. (2025) Ten simple rules for good model-sharing practices, PLOS Computational Biology, 21(1): e1012702, DOI: 10.1371/journal.pcbi.1012702

³ Kherroubi Garcia, I. (2024) The Four Pillars of Responsible AI, Kairoi, online [accessed 15 January 2025]

⁴ Madhavan, S. & Jones, M.T. (2024) Deep Learning Architectures, IBM Developer, online [accessed 15 January 2025]

⁵ Kherroubi Garcia, I. (2024) AI Literacy and Governance, Kairoi, online [accessed 12 January 2025]

⁶ Kherroubi Garcia, I. (2023) Another Piece of the AI Ethics Puzzle, Kairoi, online [accessed 15 January 2025]

⁷ Cicek, M. et al. (2024) Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk, Journal of Hospitality Marketing & Management, DOI: 10.1080/19368623.2024.2368040

⁸ Brand, A. et al. (2015) Beyond authorship: attribution, contribution, collaboration, and credit, Learned Publishing, 28: 151-155, DOI: 10.1087/20150211

Author


Ismael Kherroubi Garcia, FRSA

Ismael is the founder and CEO of Kairoi.

You can find him on LinkedIn and Bluesky.