Cyber Security professionals
CPE Points | 2 points
17:30 | Start of program (including presentation, discussion and 3-course dinner)
20:30 | End of formal program, opportunity for informal networking
Language: English
Kasteel Montfoort, Montfoort
Cybersecurity Capability Lead
Bernhardt Fourie is a cybersecurity professional and researcher with over seven years of experience, currently serving as Cybersecurity Capability Lead at ASML and, from May onward, transitioning into the role of AI Security Architect at Nationale Nederlanden. He specializes in incident response and proactive security architecture for both IT and OT environments. Bernhardt has held security-focused roles at organizations including Uber, ABN AMRO Bank, the Organization for the Prohibition of Chemical Weapons, and the International Criminal Court, and holds a BA in International Relations from the University of California and an MA in International Security from Rijksuniversiteit Groningen with Honours in Leadership.
As AI increasingly shapes business decisions, the threat landscape is changing. Attackers no longer need to break systems; they try to influence them. As with human social engineering, the biggest risk comes from seemingly legitimate behavior that quietly leads to wrong decisions, without alerts or incidents, yet with serious business impact.
During this Vision Dinner, Bernhardt Fourie, currently Cybersecurity Capability Lead at ASML and soon to be AI Security Architect at Nationale Nederlanden, will explore how Large Language Models (LLMs) and internal AI chatbots such as Copilot and ChatGPT are vulnerable to manipulation via prompt injection and data poisoning. He will share practical measures organizations can implement to actively safeguard trust and ensure AI becomes a controlled, transparent part of security and governance, rather than a blind spot.
The session will connect concrete AI threats to scalable solutions, translating them into governance, assurance, and measurable value for board and security leadership. It will provide valuable takeaways on strategic assessment of AI behavior, key success factors, and remaining challenges. The presentation serves as a starting point for a conversation with the speaker and fellow cyber security leaders. You are warmly invited to join us!
"*" indicates required fields