Model Build Overview

JFrog ML Documentation


Learn how to easily build a model on JFrog ML

About Model Builds

A model build in JFrog ML is the process of creating a ready-for-deployment, trained, serialized, and tested version of your machine learning model.

During the build, JFrog ML packages your model’s source code, dependencies, and metadata into a secure, immutable artifact stored in JFrog Artifactory.

The build process can be triggered from the FrogML CLI/SDK or the platform UI, and it runs on JFrog’s scalable infrastructure that supports any workload size, from lightweight pre-trained models to full training pipelines.

Model build lifecycle diagram

Model Build Lifecycle

The remote build process on JFrog ML comprises the following steps:

  1. Creating the model's virtual environment

  2. Executing the build function

  3. Running unit and integration tests

  4. Serializing the model

  5. Building a Docker image

  6. Pushing the Docker image to the JFrog ML model registry

Building a Model with FrogML CLI

The following steps show you how to build your first model on JFrog ML.

Important

Before proceeding, install the FrogML Python SDK.

1. Creating a New Model

Start by creating a new project and model on the JFrog ML platform. Note that the command doesn't generate local output but rather creates a remote project and model.

Your model ID is the model name lowercased with spaces removed; in this example, titanic.

frogml models create "Titanic" --project "example-models"
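The model-ID convention above (lowercase, spaces removed) can be sketched in Python. This helper is illustrative only and is not part of the FrogML SDK:

```python
def model_id_from_name(name: str) -> str:
    """Derive the model ID: lowercase the model name and remove spaces."""
    return name.lower().replace(" ", "")

print(model_id_from_name("Titanic"))  # → titanic
```

Knowing the derived ID up front is useful because later commands (build, logs) take --model-id rather than the display name.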

2. Generating the Model Code

Generate the Titanic example model, which is available in the example templates provided with the FrogML SDK.

This command will create the files needed to build a model on JFrog ML.

frogml models init --example titanic .

The models init command works in the following format:

frogml models init --example <example-name> <local-model-directory>

3. Building Your Model

With the local model code and our new model on JFrog ML, we can initiate a model build. Build names are unique across a project.

Note

The build name parameter is optional.

frogml models build --model-id titanic --name v1 ./titanic_survival_classification

The models build command works in the following format:

frogml models build --model-id <remote-model-id> --name <build-name> <local-model-directory>

Running the above command generates the build ID and a link you can follow to view the live build status:

✅ Fetching model code (0:00:00.22)
✅ Registering frogml build - 100% (0:00:05.77)

Build ID 2cac1883-47eb-44dd-9806-bdd9887dcc16 triggered remotely

########### Follow build logs in the CLI
frogml models builds logs -b 2cac1883-47eb-44dd-9806-bdd9887dcc16 --follow

########### Follow build logs in the platform
https://mydemo.jfrog.io/ui/ml/models/titanic/build/2cac1883-47eb-44dd-9806-bdd9887dcc16

4. Tracking Build Progress

Building a model takes several minutes to complete. To view the build status, open the model's build table.

When the build is complete, you can proceed to deploy your model.

There are two options for viewing the build progress logs:

Option 1: Build Page in JFrog ML UI

Visit your model's page and choose the current build ID. Alternatively, follow the link you received in the CLI.

Option 2: Following Logs in the CLI
frogml models builds logs -b <build-id> --follow
  • <build-id> - The build ID that you received when executing the build remotely.
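From automation, you can shell out to the same documented command. A minimal sketch; the argument list mirrors the CLI syntax above, but the wrapper function itself is ours:

```python
import subprocess

def follow_build_logs(build_id: str, dry_run: bool = False):
    """Assemble (and optionally run) the documented log-follow command."""
    cmd = ["frogml", "models", "builds", "logs", "-b", build_id, "--follow"]
    if dry_run:
        return cmd  # inspect the command without executing it
    subprocess.run(cmd, check=True)  # requires the FrogML CLI on PATH

print(follow_build_logs("2cac1883-47eb-44dd-9806-bdd9887dcc16", dry_run=True))
```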

Building Models with GPUs

JFrog ML GPU Instances provide high-performance computing resources that can significantly accelerate the model build process. Easily customize your build resources to achieve faster training times and better results.

To build a model on a GPU instance, specify the following additional arguments in the model build command:

frogml models build --model-id <model-id> --instance gpu.t4.xl .

Note

Choosing the Correct GPU

Visit the JFrog ML GPU Resources page to select the resources that best fit your use-case.

Each GPU type has its own configuration of pre-defined memory and number of CPUs.

Note

JFrog ML utilizes EC2 Spot instances for cost-effective GPU-based builds. This may result in a slightly extended wait time when initiating GPU Spot instances.

Deploying Models with GPUs

Note

To deploy a model on a GPU, you do not need to build it on a GPU instance. Simply use the --gpu-compatible flag during the model build process.

When deploying a model on a GPU instance, you must verify that the model was built using a GPU-compatible image. Building with a GPU-compatible image installs the additional dependencies and drivers required at runtime.

Create a GPU-compatible image by adding the --gpu-compatible flag:

frogml models build --model-id <model-id> --gpu-compatible .

Running the above command will build your model on a regular CPU instance, but will enable you to later deploy it on a GPU instance.

Tagging Your Model Build

Tags can be attached to specific builds for identification and tracking.

Add model tags from JFrog ML UI manually, or add tags via the FrogML CLI:

frogml models build --model-id <model-id> -T <tag_1> -T <tag_2> <local-model-directory>

Use the model-id of the model to which you want to attach tags.
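When tagging builds from automation, it can help to assemble the command programmatically. A sketch under our own naming; the flags are the documented ones:

```python
def tagged_build_command(model_id: str, directory: str, tags: list[str]) -> list[str]:
    """Assemble a build command with one -T flag per tag."""
    cmd = ["frogml", "models", "build", "--model-id", model_id]
    for tag in tags:
        cmd += ["-T", tag]
    cmd.append(directory)
    return cmd

print(" ".join(tagged_build_command("titanic", ".", ["experiment-1", "catboost"])))
```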

Using Environment Variables in Model Builds

You can pass environment variables to your model build in the CLI using the following command:

frogml models build --model-id <model-id> -E ENV_VAR=VALUE <local-model-directory>

For example, with mock values:

frogml models build --model-id "titanic" -E VERSION_NUMBER=1.2 -E MODEL_NAME=catboost .
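As with tags, environment variables can be appended programmatically when scripting builds. The helper below is our own; only the -E KEY=VALUE syntax comes from the documented CLI:

```python
def build_command_with_env(model_id: str, directory: str, env: dict[str, str]) -> list[str]:
    """Assemble a build command with one -E KEY=VALUE flag per variable."""
    cmd = ["frogml", "models", "build", "--model-id", model_id]
    for key, value in env.items():
        cmd += ["-E", f"{key}={value}"]
    cmd.append(directory)
    return cmd

print(" ".join(build_command_with_env("titanic", ".", {"VERSION_NUMBER": "1.2"})))
```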
