Nicole Carignan

SVP, Security & AI Strategy, Field CISO, Darktrace

 

Nicole Carignan is Senior Vice President, Security and AI Strategy, Field CISO at Darktrace. In this role, she provides technical and strategic guidance and expertise in cybersecurity, threat research, AI, and data science to Darktrace's customers, partners, and internal teams. An expert in the safe, secure, and responsible application of AI in cybersecurity, Nicole engages in product innovation and advisory, thought leadership, research, and cybersecurity and AI community engagement to ensure Darktrace delivers solutions that meet customers' evolving needs. Her insights have been cited by global cybersecurity publications, and she is a frequent speaker at industry conferences and professional associations such as Women in Data Science, AUSA Cyberworld, and ISACA Houston.
 
With over 25 years of experience, Nicole has deep expertise in data science, machine learning, cybersecurity, threat intelligence, operations, and network engineering. Prior to Darktrace, she served as Head of North America Operations for CounterCraft Security, a leader in deception technology and threat intelligence solutions. She worked in and supported the US federal government for over 20 years, serving in multiple technical and operational roles for the Intelligence Community and the US Department of Defense, and consulted on multiple large-scale data science efforts.

talks & Q&A

conference | sep 18

Autonomous Agentic AI versus Adversarial Innovation with AI – Who's Winning?

Who is using AI more intelligently: Attackers or Defenders?


Description

Cyber threat actors were quick to start innovating with generative AI, from developing sophisticated social engineering attacks that humans can no longer reliably detect, to jailbreaking models (adversarial machine learning that attacks the models themselves) for malicious code creation and surveillance data collection. Most recently, research has shown that multi-agent systems can be used to discover new vulnerabilities. As academia and industry develop agentic systems for vulnerability discovery in code, we are observing a sharp increase in the exploitation and disclosure of critical Common Vulnerabilities and Exposures (CVEs) affecting edge security devices.
 
Previously, the race between defenders attempting to patch a CVE and threat actors attempting to exploit it began the day the CVE was disclosed. In this talk, we will show how this race now starts long before defenders are even alerted. Using a multi-layered agentic AI system, we have detected the exploitation or attempted exploitation of edge security devices 2 to 17 days prior to the CVE being disclosed. What does this mean? Threat actors are identifying vulnerabilities more effectively, and human SOC teams and traditional security products do not even know that the race has begun.
 
Defenders must adopt autonomous agentic AI systems to protect against what they do not know, specifically to take autonomous action to contain an incident, minimize damage, and buy human defenders time. In this talk, we will break down the different types of agentic systems, how to evaluate them, and how to operationalize them safely and responsibly with data privacy, security, control, interpretability, and trust.


Why we chose this talk:

The new battle between attackers and defenders comes down to who is using AI more intelligently. Agentic AI systems multiply the power of AI in certain use cases, and the equation changes once again.