
Identity, Privacy, Security and Mis(dis)information

This theme recognises that artificial intelligence (AI) can be used both to promote and to reduce online harms. For example, generative AI could lead to an explosion of fake social media accounts; it could be used to create false profiles on dating websites (complete with images), to craft phishing emails fine-tuned to the interests of the recipient and apparently sent by reliable parties (spear phishing), or to create and disseminate fake news. On the other hand, AI can be used to improve the detection of online threats, recognise text associated with radicalisation and/or terrorist activities, reduce the amount of unwanted spam and irrelevant content in our social media feeds, and provide a useful synthesis of news, assessing the reliability of source material.

AI systems rely on vast amounts of data, and issues naturally arise around an individual's right to privacy in such a data-intensive environment. This issue has been raised with regard to mass surveillance, where AI techniques facilitate face recognition on a national scale and can thus be used to track the activities of individual citizens. In a very different context, AI is likely to lead to a much greater level of personalisation in the retail and service industries. This means that something as simple as a shopping experience is much more likely to be tailored to an individual on the basis of data capturing that person's previous behaviour, whether or not they choose this. It is unsurprising, then, that a great deal of work is ongoing in relation to privacy-preserving AI.

Finally, some identities are not properly represented in existing AI systems, which are known to exhibit bias because their underlying data sources do not reflect the diversity that exists in our society. Certain individuals also hide aspects of themselves when interacting online, which again means that digital data is not always a true mirror of society. This bias gives rise to another digital harm, one likely to be exacerbated by AI decision-making systems. In this and other areas, the best gains are made through interdisciplinary research that foregrounds the needs of the citizen and brings together students from backgrounds in psychology, computer science, design, business, criminology, and law.

Experts

  • Prof Pam Briggs
  • Dr Dawn Branley Bell
  • Prof Shaun Lawson
  • Dr Kyle Montague
  • Prof Longzhi Yang
  • Prof Marion Oswald

Related Peak of Research Excellence

  • Computerised Society and Digital Citizens

Suggested Literature

*If you are struggling to access any of the suggested literature, please contact ccai.cdt@northumbria.ac.uk
