FrogML SDK: The Gateway to Model Governance

Data-driven decisions are critical. To support high-stakes decision-making – from fraud detection in credit card transactions to demand forecasting in retail – organizations are increasingly relying on complex models. According to McKinsey, 78% of organizations report using AI in at least one business function, highlighting just how embedded AI and ML models have become in operational and strategic decision-making.

However, these models are only as effective as the frameworks that govern them and the ecosystem in which they operate. This is why a unified development lifecycle for both software and machine learning is crucial. Integrating model storage and management into a unified lifecycle supports seamless development, version control, analysis, security, and compliance, ensuring that models are both efficient and secure throughout their entire lifecycle.

The Need for Robust Model Governance

Per the OWASP Machine Learning Security Top 10, machine learning systems face significant security risks, primarily categorized as adversarial attacks like input manipulation, data poisoning, model inversion, membership inference, and model theft. Moreover, models can drift over time, leading to degraded performance, biased outcomes, or decisions misaligned with current market conditions or demographic trends.

Robust model governance is key to mitigating these risks and ensuring the long-term reliability of AI systems. Model governance encompasses an end-to-end process that establishes, implements, and maintains controls surrounding the use of models. Its primary purpose is to ensure that models operate as intended, remain compliant, and consistently deliver trustworthy outcomes.

This is where the FrogML SDK comes into play. By enabling you to seamlessly load your models into JFrog Artifactory directly from your IDE of choice, the FrogML SDK integrates model management into your existing development lifecycle. Artifactory serves as a centralized repository providing a single source of truth for your organization. Whether you’re storing proprietary models or curated models from platforms like Hugging Face, the FrogML SDK ensures that your models are secure, version-controlled, and easily accessible. This streamlines both the deployment and management of machine learning models, empowering organizations to focus on deriving value rather than worrying about logistical intricacies.

Bridging Your Models to Artifactory with FrogML SDK

The FrogML SDK simplifies the process of integrating your machine learning models, whether custom-built or sourced from public registries, directly into your Artifactory repositories. By integrating with JFrog Artifactory, the SDK ensures your models are securely stored, versioned, and readily accessible. It also establishes Artifactory as the system of record for your trained models.

This centralized approach enables comprehensive governance by allowing organizations to track every model’s purpose, ownership, methodology, and lifecycle status. The FrogML SDK supports enhanced risk assessment, facilitates real-time decision-making, and, by storing models alongside their ML-BOM (Machine Learning Bill of Materials) and SBOM (Software Bill of Materials), provides the necessary evidence for auditing and compliance, thereby solidifying a commitment to responsible AI.
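To make the ML-BOM idea concrete, here is a minimal, illustrative sketch of what an ML-BOM entry for a stored model could contain, loosely modeled on the CycloneDX 1.5 format (which defines a `machine-learning-model` component type). The specific field values and properties below are hypothetical examples, not the schema FrogML itself emits:

```python
import json

# Illustrative, simplified ML-BOM entry (loosely modeled on CycloneDX 1.5);
# the values and property names are assumptions for demonstration only.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "credit_risk_model",
            "version": "20250101-120000",
            "properties": [
                {"name": "owner", "value": "risk-team"},
                {"name": "training-dataset", "value": "transactions-2024Q4"},
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```

Storing a record like this alongside the model artifact is what gives auditors a traceable answer to "what is this model, who owns it, and what was it trained on."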

Let’s look at how straightforward it is to manage your machine learning models with the FrogML SDK.

1. Storing Your Custom Models (e.g., a Scikit-learn Pickle File)

For models developed in-house, the FrogML SDK provides a simple and efficient way to upload them. This ensures that your proprietary models are versioned and managed alongside other software components. Not only does this facilitate better organization and tracking, but it also enhances collaboration among your data science and engineering teams.

Here’s how to upload a custom model saved as a .pkl file to Artifactory:

```python
import frogml
from datetime import datetime

frogml.files.log_model(
    source_path="model.pkl",
    repository=JF_REPOSITORY,
    model_name="credit_risk_model",
    version=datetime.now().strftime("%Y%m%d-%H%M%S"),
    properties={"stage": "test"},
)
```
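For context, here is one minimal way the `model.pkl` file referenced above might be produced. In a real project you would pickle a trained estimator (for example, a fitted scikit-learn classifier); the stand-in object here is purely illustrative, since any picklable object is serialized the same way:

```python
import pickle
from datetime import datetime

# Stand-in for a trained model object; in practice this would be a
# fitted estimator such as a scikit-learn classifier.
model = {"weights": [0.42, -1.3, 0.07], "threshold": 0.5}

# Serialize the model to the file that log_model() will upload
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# The same timestamp scheme used as the version in log_model()
version = datetime.now().strftime("%Y%m%d-%H%M%S")
print(version)
```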

2. Incorporating Open-Source Models from Hugging Face

You can leverage JFrog Artifactory’s native integration with Hugging Face to proxy and cache pre-trained models. This capability enables you to centralize your dependencies, thereby ensuring that all models, whether in-house or externally sourced, are managed under a unified governance structure.

By caching Hugging Face models within Artifactory, you benefit from faster access and enhanced control over the models your organization relies on. Furthermore, this integration facilitates future security scanning with JFrog Xray, helping you identify and mitigate vulnerabilities in third-party models before they affect your operations.
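On the client side, routing Hugging Face downloads through Artifactory is typically a matter of pointing the `huggingface_hub` client at the Artifactory repository via the `HF_ENDPOINT` environment variable. The server and repository names below are placeholders; substitute your own:

```shell
# Point the Hugging Face client at an Artifactory remote repository
# (hypothetical server and repository names -- substitute your own)
export HF_ENDPOINT="https://myorg.jfrog.io/artifactory/api/huggingfaceml/huggingface-remote"
```

With this set, standard `transformers` and `huggingface_hub` calls resolve models through Artifactory, so every download is cached and governed centrally.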

Here’s how you can copy a model from Hugging Face to Artifactory:

```python
import frogml
from datetime import datetime

frogml.huggingface.log_model(
    model=get_huggingface_model(),                         # function that returns a Hugging Face model
    tokenizer=get_huggingface_tokenizer(llama4_model_id),  # function that returns a Hugging Face tokenizer
    repository=repository,
    model_name=name,
    version=datetime.now().strftime("%Y%m%d-%H%M%S"),
    properties=properties,
)
```
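One practical property of the `%Y%m%d-%H%M%S` version scheme used in both snippets: because every field is zero-padded, the strings sort lexicographically in chronological order, so a plain string sort or `max()` reliably finds the most recent version. A quick sketch with hypothetical version strings:

```python
# Hypothetical versions produced by datetime.now().strftime("%Y%m%d-%H%M%S")
versions = ["20240131-235959", "20240201-000000", "20231215-083000"]

# Zero-padded timestamps sort chronologically as plain strings,
# so the lexicographic maximum is also the newest version.
latest = max(versions)
print(latest)  # -> 20240201-000000
```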

From Storage to Serving: JFrog ML as the Natural Extension

Once your models are securely stored in Artifactory using the FrogML SDK, the next step is to put them to work. This is where JFrog ML comes in. JFrog ML is an end-to-end MLOps platform that allows you to build, deploy, evaluate, monitor, and govern models throughout their entire lifecycle.

Using JFrog ML, you can:

  • Seamlessly deploy stored models into production environments
  • Track experiments and model iterations
  • Evaluate model performance with real-world data
  • Monitor for drift or bias over time
  • Integrate model scanning with tools like Xray and Curation for runtime security

If you’re already using JFrog as your unified platform for software development, JFrog ML extends those capabilities into the machine learning domain, ensuring governance, traceability, and operational continuity from day one through production.

Conclusion

The FrogML SDK streamlines machine learning workflows by centralizing the management and deployment of both proprietary and curated models. Rather than navigating a gauntlet of security hurdles and governance checklists, the SDK makes enterprise-grade security and governance an inherent part of the development process. This new way of working also allows teams to leverage JFrog’s security tools, such as Xray and Curation, to continuously scan their model repositories for vulnerabilities, building trust into every model from the start. By taking control of the ML lifecycle with FrogML, organizations can achieve a more secure and efficient path to production.