Hunting malicious ML models: Insights from analyzing 1.5M publicly available ML models

In 2024, the number of open source ML models tripled, while attacks using ML models increased 6.5x, indicating that attackers are exploiting how easy it is to gain remote code execution through malicious ML models. This trend, combined with the high number of vulnerabilities and the lack of awareness and effective automated tooling around ML models, leaves users exposed. JFrog’s Security Research Team analyzed over 1.5 million publicly available models, revealing the latest malicious techniques used by attackers, how current security solutions can be bypassed, and the limitations of the inherent defenses in ML libraries.
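
For readers unfamiliar with how merely loading a model can lead to code execution, here is a minimal, hypothetical Python sketch (illustrative only, not taken from the research itself) of the classic pickle-based technique: a serialized "model" whose __reduce__ method makes the deserializer run an arbitrary command the moment the file is loaded.

    import os
    import pickle

    # A "model" object that abuses pickle's __reduce__ hook: the returned
    # (callable, args) tuple is invoked during deserialization, so simply
    # loading the file executes the attacker's command.
    class MaliciousModel:
        def __reduce__(self):
            return (os.system, ("echo 'code executed on model load'",))

    # "Publishing" the model embeds the payload in the serialized bytes...
    payload = pickle.dumps(MaliciousModel())

    # ...and "loading" it runs the command, with no call into model code needed.
    pickle.loads(payload)

Any framework that deserializes untrusted model files without restrictions is exposed to this pattern, which is why the vetting and sandboxing practices covered below matter.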

In this webinar, we’ll go over what we discovered and what best practices can be gleaned from these findings. Attendees will learn:

  • Common malicious ML model techniques and attack methods
  • How current detection tools can be bypassed
  • Ways ML libraries attempt to prevent attacks and their limitations
  • Best practices for vetting ML models and trusted model types
  • Effective sandboxing and security strategies for deploying ML models

This session is essential for data scientists, DevOps engineers, and security teams looking to strengthen their ML model defenses.
