Responsibilities
Examine use cases of online trust and safety technologies, translating research findings into actionable insights that inform content moderation strategies, policy updates, and product design.
Explore innovative data sources and computational tools, and develop models and metrics to detect and assess risks (e.g., coordinated inauthentic behaviour, abuse patterns, bot activity).
Conduct quantitative and mixed-methods research to understand harmful behaviours (e.g., misinformation, hate speech, harassment, manipulation) and how they spread on digital platforms.
Collaborate with data scientists, policy experts, engineers, and product managers to inform and evaluate safety-related decisions.
Stay current on academic and industry trends in trust and safety, online behaviour, and computational social science.
Qualifications
A PhD in Computational Social Science, Societal Computing, Human-Computer Interaction, Sociology, Political Science, Psychology, Economics, or a related field with a strong quantitative component.
Proficiency in data analysis using tools such as Python, R, and SQL.
Experience working with large-scale datasets, social network analysis, or NLP for social media data.
Demonstrated ability to conduct independent research and publish or present findings.
Strong understanding of causal inference, experimental design, and statistical modelling.
Ability to communicate complex findings to non-technical stakeholders.
Prior experience in trust and safety research (preferred).
Strong communication and interpersonal skills for working in a multicultural environment.