Communities, Democracy and Society

How can AI strengthen democratic life, community participation, and social wellbeing?

AI is increasingly present in areas that shape how we live together — from managing public services and moderating online spaces, to influencing how communities are represented and heard. Yet many AI systems currently prioritise efficiency and scale over nuance, participation, and fairness. This can flatten cultural differences, reduce complex social issues to optimisation problems, and risk weakening rather than supporting democratic processes. 

In this theme, you will explore how citizen-centred approaches can guide the design and deployment of AI so that it supports plural viewpoints, collective decision-making, and community agency. Rather than treating citizens as data sources or passive users, your research will consider them as co-designers, interpreters, and evaluators of AI systems. 

Projects in this theme are likely to combine social, cultural, design and/or technical methods. You might, for example: 

  • Investigate how interpretive and qualitative traditions in the humanities and social sciences can inform new AI models for democratic contexts. 
  • Design human-AI decision-making processes that preserve citizen agency and avoid ‘black box’ governance. 
  • Develop participatory and power-sharing methods that move beyond consultation to co-creation with communities. 
  • Co-design AI systems with communities who are directly affected by data-driven decision-making, particularly those historically under-represented in technology design. 
  • Create new evaluation frameworks for AI that value cultural reasoning, contextual sensitivity, and social wellbeing alongside accuracy or efficiency.

This theme is well-suited to applicants from diverse disciplinary backgrounds — including computing, HCI, design, social sciences, humanities, law, public policy, data science, STS, and cultural studies — who are interested in working collaboratively and reflexively across sectors.

Place-based and Regional Context 

The North East provides a distinctive and timely environment for research on democratic and community-centred AI. Local authorities in the region — including Newcastle City Council, the North of Tyne Combined Authority, and the newly constituted North East Mayoral Combined Authority (NECA) — are actively developing strategies for responsible data use, digital public services, and community participation. These organisations are committed to experimenting with new governance models for AI in public services, making the region a national testbed for democratic innovation. 

The North East AI Growth Zone, supported by NECA and national government, is accelerating the deployment of AI across health, policing, welfare, cultural institutions, and environmental services. This creates rich opportunities to study how AI systems are introduced into communities, how trust is built (or eroded), and how citizens can be meaningfully involved in shaping technology that affects their lives.

The region also contains communities disproportionately impacted by automated decision-making, including those affected by welfare algorithms and predictive policing. Through networks such as VONNE, the Trussell Trust, and Digital Safety CIC, students can work directly with communities who are often excluded from AI design processes — co-creating systems that reflect lived experience, local priorities, and diverse ways of knowing. 

Relevant Partner Organisations 

This theme benefits from partnerships spanning government, civil society, and industry. Your research could investigate transparency, accountability, and citizen participation through collaborations with Newcastle City Council, the North of Tyne Combined Authority, DWP, and local NHS organisations that are actively deploying AI systems in public services. Northumbria Police and Cleveland Police offer contexts for exploring algorithmic policing and community trust. 

Pathways to engage marginalised communities emerge through partnerships with VONNE (supporting the voluntary sector), the Trussell Trust (addressing food poverty), and Digital Safety CIC. These relationships enable participatory research centred on communities most affected by algorithmic decisions. Meanwhile, collaborations with Ofcom, the National Cyber Security Centre, DSIT, and the Cabinet Office position your work to directly influence UK regulatory development. 

Technical expertise and commercial perspectives complement this public interest research through partnerships with Google, Thoughtworks, Nokia Bell Labs, and Yoti, creating opportunities for critical dialogue between civic values and industry practice. 

Related Reading 

Participatory AI and Governance 

  • Lee, M. K., et al. (2019). WeBuildAI: Participatory Framework for Algorithmic Governance. PACMHCI, 3(CSCW). 
  • Delgado, F., et al. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. EAAMO '23. 
  • Birhane, A., et al. (2022). Power to the People? Opportunities and Challenges for Participatory AI. EAAMO '22. 
  • Saxena, D., et al. (2025). Emerging Practices in Participatory AI Design in Public Sector Innovation. CHI EA '25. 
  • Sieber, R., et al. (2025). What is civic participation in artificial intelligence? Environment and Planning B. 

Democracy and Civic Engagement 

  • Jungherr, A., & Schroeder, R. (2023). Artificial Intelligence and Democracy: A Conceptual Framework. Social Media + Society, 9(3). 
  • Arana-Catania, M., et al. (2021). Citizen Participation and Machine Learning for a Better Democracy. Digital Government: Research and Practice, 2(3). 
  • McKinney, S. (2024). Integrating Artificial Intelligence into Citizens' Assemblies: Benefits, Concerns and Future Pathways. Journal of Deliberative Democracy, 20(1). 
  • Helbing, D., et al. (2023). Democracy by Design: Perspectives for Digitally Assisted, Participatory Upgrades of Society. Journal of Computational Science. 
  • McCord, C. W., & Becker, C. (2023). Beyond Transactional Democracy: A Study of Civic Tech in Canada. PACMHCI, 7(CSCW1). 

Algorithmic Transparency and Accountability 

  • Lee, M. K., et al. (2019). Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. PACMHCI, 3(CSCW). 
  • Krafft, P. M., et al. (2021). An Action-Oriented AI Policy Toolkit for Technology Audits by Community Advocates and Activists. FAccT '21. 
  • Aljuneidi, S., et al. (2024). Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations. CHI '24. 

Public Trust and AI Literacy 

  • Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. CHI '20. 
  • Haque, M. R., et al. (2024). Are We Asking the Right Questions?: Designing for Community Stakeholders' Interactions with AI in Policing. CHI '24. 

UK Government Policy and Strategy 

  • DSIT. (2025). AI Opportunities Action Plan. 
  • DSIT. (2021). National AI Strategy. 
  • DSIT. (2023). AI Regulation: A Pro-Innovation Approach (White Paper). 
  • UK Parliament Public Accounts Committee. (2025). Use of AI in Government. 
  • Science, Innovation and Technology Committee. (2024). Governance of Artificial Intelligence. 
  • Government Digital Service. (2021–2025). Algorithmic Transparency Recording Standard. 
  • Centre for Data Ethics and Innovation. (2020). Review into Bias in Algorithmic Decision-Making. 

UK Research Institutes and Public Bodies 

  • Ada Lovelace Institute. (2025). Learn Fast and Build Things: Lessons from Six Years of AI in Public Sector. 
  • Ada Lovelace Institute. (2023). Going Public: Exploring Public Participation in Commercial AI. 
  • Ada Lovelace Institute. (2023). Regulating AI in the UK: Putting Principles into Practice. 
  • Ada Lovelace Institute. (2022). Algorithmic Accountability for the Public Sector. 
  • Alan Turing Institute. (2024–2025). Understanding Public Attitudes to AI. 
  • Leslie, D. (2020). Using AI in the Public Sector: Ethics and Safety Guidance. Alan Turing Institute. 
  • Aitken, M., et al. (2022). Common Regulatory Capacity for AI. Alan Turing Institute. 

European Policy and Regulation 

  • European Commission. (2024). Regulation (EU) 2024/1689: The EU Artificial Intelligence Act. 
  • Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. 
  • European Commission. (2020). White Paper on Artificial Intelligence: A European approach to excellence and trust. 
  • High-Level Expert Group on AI. (2019). Ethics Guidelines for Trustworthy Artificial Intelligence. 
  • European Parliament. (2022). Digital Services Act. 
  • European Commission. (2020). European Democracy Action Plan.
