Ben Kereopa-Yorke

About Ben  
Ben Kereopa-Yorke works at the nexus of AI and cognitive security governance, dynamical systems theory, and cyber risk quantification, pairing theoretical research with practical security frameworks.

His research in AI security explores the application of chaos theory and game-theoretic approaches to AI risk modeling, while his broader research interests span computational linguistics, sociolinguistics, sociotechnical systems impacts, information warfare, social engineering, and the philosophical implications of human-computer interaction.

A published researcher and Associate Editor of IEEE Transactions on Technology and Society, Ben is a Senior Security Specialist and AI/ML Security SME at Telstra and a core team member of the OWASP Machine Learning Security Top Ten; his work bridges academia and industry in addressing emerging AI and cognitive security challenges. Currently pursuing a second Master's in Cyber Security Operations at UNSW Canberra as well as postgraduate study in Neuroscience at UNE, Ben is passionate about fostering interdisciplinary approaches to security research and about mentoring the next generation of security professionals through industry initiatives.

Ben holds postgraduate qualifications in Terrorism and Security Studies, as well as Cloud Computing and Virtualisation. His current certifications include the AWS Machine Learning Specialty, the Drone Security Operations Certificate from DroneSec, and the Artificial Intelligence Governance Professional (AIGP) from the IAPP.

Looking ahead, he envisions a future where AI security frameworks are deeply integrated with ethical considerations and human values, drawing from diverse fields including philosophy, sociology, and cognitive science. He is committed to developing novel frameworks for quantifying AI risk that can adapt to emerging technological paradigms, while ensuring these advances benefit humanity equitably. Through his research and advocacy, he hopes to contribute to the responsible development of AI systems that enhance rather than diminish human agency and potential.

pip install ai-security: Dependencies You Can't Ignore

Melbourne
Security: Fortifying the Future

When building and productionising AI systems, developers face security challenges that transcend traditional cybersecurity approaches. This session introduces CIPHER, a practical mental model for securing AI systems across their lifecycle, from development to deployment. You'll learn how to:

  • Map and secure AI-specific attack surfaces, including data pipelines, model architectures, and inference endpoints
  • Apply game theory principles to anticipate and counter adversarial attacks on your AI systems (see the first sketch after this list)
  • Implement quantitative risk assessment techniques for AI components (see the second sketch after this list)
  • Build trustworthy AI systems by addressing bias, transparency, and privacy by design

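To make the game-theoretic bullet concrete, here is a minimal sketch of the core idea: treat defence selection as a zero-sum game and pick the strategy that minimises your worst-case loss. The strategy names, payoff values, and the minimax_defence helper are illustrative assumptions for this page, not material from the talk itself.

    # A minimal zero-sum game between a defender and an attacker probing an
    # AI inference endpoint. Keys are (defender strategy, attacker strategy);
    # values are the defender's loss (the attacker's gain).
    # All strategy names and payoff values are hypothetical.
    PAYOFFS = {
        ("rate_limit",   "model_extraction"): 2.0,
        ("rate_limit",   "evasion_attack"):   6.0,
        ("input_filter", "model_extraction"): 7.0,
        ("input_filter", "evasion_attack"):   1.0,
    }
    DEFENDER = ["rate_limit", "input_filter"]
    ATTACKER = ["model_extraction", "evasion_attack"]

    def minimax_defence() -> str:
        """Pick the pure defender strategy with the smallest worst-case loss."""
        best, best_worst = None, float("inf")
        for d in DEFENDER:
            # Assume the attacker best-responds to whatever we deploy.
            worst = max(PAYOFFS[(d, a)] for a in ATTACKER)
            if worst < best_worst:
                best, best_worst = d, worst
        return best

    print(minimax_defence())  # rate_limit: worst-case loss 6.0 beats 7.0

The same minimax reasoning extends to mixed strategies and larger payoff matrices once a linear-programming solver is brought in.
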
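And for the quantitative risk assessment bullet, a common starting point is a small Monte Carlo simulation of annual loss for a single AI component. Again, the attack frequencies, success probability, and loss figures below are placeholder assumptions, not data from the session.

    import random

    # Hypothetical inputs for one AI component (e.g. an inference endpoint).
    ATTACK_ATTEMPTS_PER_YEAR = (20, 60)    # uniform min/max per year
    P_ATTACK_SUCCEEDS = 0.05               # per-attempt success probability
    LOSS_PER_INCIDENT = (10_000, 250_000)  # dollars, uniform min/max

    def simulate_year(rng: random.Random) -> float:
        """One Monte Carlo trial: total loss over one simulated year."""
        attempts = rng.randint(*ATTACK_ATTEMPTS_PER_YEAR)
        return sum(
            rng.uniform(*LOSS_PER_INCIDENT)
            for _ in range(attempts)
            if rng.random() < P_ATTACK_SUCCEEDS
        )

    def annual_loss_summary(trials: int = 100_000, seed: int = 1) -> None:
        rng = random.Random(seed)
        losses = sorted(simulate_year(rng) for _ in range(trials))
        mean = sum(losses) / trials
        p95 = losses[int(trials * 0.95)]  # 95th-percentile annual loss
        print(f"mean annual loss ${mean:,.0f}, 95th percentile ${p95:,.0f}")

    annual_loss_summary()

Swapping the uniform draws for lognormal loss distributions and calibrated frequencies is the usual next step once real incident data is available.
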
Through real-world examples and interactive scenarios, you'll gain a cognitive framework for thinking about AI security that goes beyond checklists and static rules. Whether you're building recommendation engines, natural language processors, or computer vision systems, you'll leave with practical techniques to make your AI implementations more secure and trustworthy.

Perfect for developers working with AI/ML or those looking to better understand the security implications of adding AI to their applications.
