Recognising markers for hate speech and cyberbullying

by Richard Clegg, Queen Mary University of London (Electronic Engineering and Computer Science), responding to Digital Security for All

I am interested in exploring

We will look at the structure and content of online social networks, particularly those known to be problematic for hate speech. We intend to combine network structure with linguistic content to see whether the two together can identify communities that foster hate speech and problematic online behaviour. Can we automatically detect communities on social networks whose language and behaviour become more extreme over time? Can we tie this to hate speech? A sketch of the combined approach follows below.
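To make the combined-signal idea concrete, here is a minimal sketch in Python: detect communities from the interaction graph alone, then score each community's language. This is an illustration, not the project's method. The toy graph, the synthetic messages, the FLAG_TERMS word list, and the 0.5 threshold are all invented for demonstration; greedy modularity is just one of many community-detection choices, and a real study would use a validated hate-speech classifier rather than a hand-picked lexicon.

  # Minimal sketch: community detection + per-community language scoring.
  # All data below is synthetic; FLAG_TERMS is an illustrative stand-in
  # for a proper hate-speech classifier.
  import networkx as nx
  from networkx.algorithms.community import greedy_modularity_communities

  # Toy interaction graph: nodes are users, edges are replies/mentions.
  G = nx.Graph()
  G.add_edges_from([
      ("a", "b"), ("b", "c"), ("a", "c"),   # one tightly knit group
      ("d", "e"), ("e", "f"), ("d", "f"),   # another
      ("c", "d"),                           # weak bridge between them
  ])

  # Synthetic per-user messages (stand-ins for scraped posts).
  messages = {
      "a": ["they should all leave"], "b": ["get rid of them"],
      "c": ["nice weather today"],
      "d": ["lovely game last night"], "e": ["great match"],
      "f": ["see you tomorrow"],
  }

  # Hypothetical lexicon, purely for demonstration.
  FLAG_TERMS = {"leave", "rid"}

  def extremity(users):
      """Fraction of a community's messages containing a flagged term."""
      posts = [m for u in users for m in messages.get(u, [])]
      hits = sum(any(t in p.split() for t in FLAG_TERMS) for p in posts)
      return hits / max(len(posts), 1)

  # Step 1: detect communities from network structure alone.
  communities = greedy_modularity_communities(G)

  # Step 2: score each community's language; flag high scorers.
  for comm in communities:
      score = extremity(comm)
      status = "FLAG" if score > 0.5 else "ok"
      print(sorted(comm), f"extremity={score:.2f}", status)

On this toy data the tightly knit {a, b, c} group is flagged because most of its posts contain a flagged term, while the other community is not; the point is only that structural grouping and linguistic scoring can be composed, not that this particular scoring is adequate.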

My motivation is

It is widely recognised that there is a "filter bubble" problem on social networks: people are reconfirmed in their beliefs and behaviours by a self-selected group of contacts. It is also reasonable to worry that social media algorithms exaggerate this effect by showing people content that further reinforces those beliefs. When those beliefs include hate speech and behaviours that could be harmful to society and individuals, it is imperative that we are able to quantify the problem.

Project focus

I want to focus on:

  • Enablement & Radical Trust
  • Accountability & Care
  • Proactive Resilience and Reparation

Collaboration

I am looking for a partner in these sectors:

  • Public Sector
  • Industry / SMEs
  • Third Sector

I am looking to work in these areas:

  • Policing
  • Artificial Intelligence and/or Machine Learning

Sandpit events

I am not attending.

Contact

r.clegg@qmul.ac.uk
