AI, Law and Regulation

Rethinking governance for AI systems that operate in spaces of meaning, not just measurement

As AI systems produce cultural outputs (language, images, decisions) rather than merely numerical predictions, governance faces a fundamental challenge: regulatory frameworks designed for technical systems now confront technologies that operate in spaces of cultural meaning, ambiguity, and context. When AI makes decisions about healthcare, employment, or justice, questions of accountability, transparency, and fairness become matters of interpretation rather than calculation. Yet current governance approaches often treat these concepts as though they had singular, technical definitions, when they are in fact culturally situated and context-dependent.

Your research in this theme could explore how legal and regulatory frameworks must evolve beyond technical compliance to engage with the interpretive complexity of AI governance. Rather than asking only "how does this system work?" (technical explainability), you might investigate "what does this system mean?" in diverse cultural contexts. How can governance frameworks account for the reality that fairness, transparency, and accountability are understood differently across communities? What forms of regulation enhance rather than diminish human agency as we move beyond treating AI as mere assistants toward more complex human-AI relationships? 

Potential directions include examining how current regulatory approaches risk entrenching homogenised conceptions of responsible AI that fail to reflect diverse human values and experiences; developing participatory governance mechanisms that redistribute interpretive authority to affected communities; or exploring how accountability frameworks can be reimagined when AI systems operate across fragmented value chains and cultural contexts. You might also study how the UK's principles-based approach and the EU's comprehensive regulation each encode particular assumptions about what AI governance should achieve, or investigate emerging challenges where generative and autonomous systems resist traditional regulatory categories.

This research requires truly transdisciplinary collaboration, integrating perspectives from human-computer interaction, legal scholarship, policy studies, humanities, and the lived experiences of communities whose cultural worlds AI systems increasingly shape. 

Place-based and Regional Context 

The North East of England offers unique opportunities for examining AI governance in practice. The region is one of the UK's most digitally connected, and local authorities including Newcastle City Council and the North of Tyne Combined Authority are deploying AI systems in public services whilst grappling with transparency and accountability requirements. Regional police forces are implementing algorithmic tools for resource allocation and risk assessment, creating opportunities to study how contestability mechanisms work (or fail) for citizens.

The region's strong public sector presence, combined with areas of significant deprivation, means AI deployment decisions have immediate consequences for vulnerable communities. Your research could work directly with organisations like VONNE (the voluntary sector infrastructure body) to understand how AI systems affect citizens accessing services, or partner with regional health bodies to examine AI governance in healthcare contexts. The presence of DSIT officials and connections to national policymaking through partner organisations create pathways for research to directly inform UK AI regulation as it evolves beyond the current principles-based approach towards potential legislation.

Relevant Partner Organisations 

This theme connects with partners spanning policymaking, regulation, and public services. At the national level, relationships with DSIT, the Cabinet Office, and 10 Downing Street provide insight into how AI governance policy develops and could be improved. Regulatory partners including Ofcom and the National Cyber Security Centre offer perspectives on sector-specific implementation challenges. 

Public sector partners such as Newcastle City Council, DWP, NHS organisations, Northumbria Police, and Cleveland Police deploy AI systems directly affecting citizens, creating research opportunities to examine governance frameworks in practice. These partnerships enable you to study not just policy design but implementation realities, working with organisations facing the daily challenges of making AI governance principles actionable within resource constraints and complex institutional contexts. 

Related Articles and Reading

Participatory AI Governance and Power

  • Corbett, E., Denton, R., & Erete, S. (2023). Power and Public Participation in AI. EAAMO '23. 
  • Delgado, F. et al. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. EAAMO '23. 
  • Sieber, R. et al. (2025). What Is Civic Participation in Artificial Intelligence? Environment and Planning B. 
  • Novak, M. et al. (2025). Artificial Intelligence for Digital Citizen Participation: Design Principles for a Collective Intelligence Architecture. Government Information Quarterly. 

Transparency, Explainability, and Understanding 

  • Liao, Q.V. & Sundar, S.S. (2024). Mind The Gap: Designers and Standards on Algorithmic System Transparency for Users. CHI '24. 
  • Kim, S.S.Y. et al. (2023). Help Me Help the AI: Understanding How Explainability Can Support Human-AI Interaction. CHI '23. 
  • Richardson, R. et al. (2025). Improving Governance Outcomes Through AI Documentation: Bridging Theory and Practice. CHI '25. 
  • Bove, C. et al. (2023). Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users. IUI '23. 

Accountability, Contestability, and Fairness 

  • Karusala, N. et al. (2024). Understanding Contestability on the Margins: Implications for the Design of Algorithmic Decision-making in Public Services. CHI '24. 
  • Yurrita, M. et al. (2023). Disentangling Fairness Perceptions in Algorithmic Decision-Making. CHI '23. 
  • Green, B. & Viljoen, S. (2024). Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm. FAccT '24. 
  • Costanza-Chock, S. et al. (2024). A Framework for Assurance Audits of Algorithmic Systems. FAccT '24. 
  • Helberger, N. & Diakopoulos, N. (2023). Accountability in Artificial Intelligence: What It Is and How It Works. AI & Society. 

Public Sector AI and Citizen Perspectives 

  • Aljuneidi, S. et al. (2024). Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations. CHI '24. 
  • National Audit Office (2024). Use of Artificial Intelligence in Government. 
  • Netherlands Court of Audit (2024). Focus on AI in Central Government. 

UK Policy and Regulatory Framework 

  • Department for Science, Innovation and Technology (2023). A Pro-Innovation Approach to AI Regulation (AI White Paper). 
  • DSIT (2024). A Pro-Innovation Approach to AI Regulation: Government Response. 
  • House of Commons Science and Technology Committee (2024). Governance of AI. 
  • Competition and Markets Authority (2024). CMA AI Strategic Update. 
  • Government Digital Service (2023). Algorithmic Transparency Recording Standard. 
  • UK Government (2023). The Bletchley Declaration. 

EU Policy and Regulatory Framework 

  • European Parliament and Council (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act). 
  • European Commission. AI Act Overview and Implementation Guidance. 
  • European Data Protection Board (2024). Opinion 28/2024 on the Processing of Personal Data and the AI Act. 

Emerging Challenges: Generative and Agentic AI 

  • Lee, M. et al. (2025). Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond. CHI '25. 
  • Weidinger, L. et al. (2024). Visibility into AI Agents. FAccT '24. 
  • Government of the Netherlands (2024). Government-Wide Vision on Generative AI. 

Cross-cutting Resources 

  • Hemment, D., Kommers, C. et al. (2025). Doing AI Differently: Rethinking the Foundations of AI via the Humanities. The Alan Turing Institute. 
  • Chatham House (2023). AI Governance and Human Rights: Resetting the Relationship. 
  • Stanford HAI (2024). The 2024 AI Index Report: Policy and Governance. 
  • Ada Lovelace Institute. AI Now, AI Next: What We've Learned About AI Governance So Far.
