Learn how to adjust and configure the build for advanced use cases.
Using a Custom AWS IAM Role ARN
In some cases, a model needs to access external services during its build process.
If a build needs to access external AWS resources, a custom AWS IAM role can be passed to the FrogML build process.
The IAM role ARN can be passed directly to a build using the --iam-role-arn flag. For example:
frogml models build \
--model-id <model-id> \
--iam-role-arn arn:aws:iam::<account-id>:role/<role-name> \
<uri>
The supplied IAM role ARN can be accessed from the model code using the following code example:
import boto3
session = boto3.Session(profile_name='user-provided-role')
# Example for an S3 client:
s3 = session.client('s3')
For additional information on how to configure an AWS IAM role that allows the JFrog ML runtime to access your resources, please also visit the dedicated AWS Resources access page.
Note
In the provided code snippet, the profile name specified as user-provided-role is automatically associated by JFrog ML with your assumed IAM role. Therefore, there's no need to replace it in the code. This ensures seamless integration and access to AWS services within your model code.
Using GCP Service Account
To reach a GCP client resource, first create a JFrog ML secret containing your JSON service account key. Once the secret is created, pass its name to the build using the --service-account-key-secret-name flag.
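Once the key secret is available to the build, the model code still needs to hand it to Google's client libraries. A minimal sketch, assuming the key JSON is exposed through an environment variable (the GCP_SA_KEY name is purely illustrative, not a documented FrogML contract):

```python
import json
import os
import tempfile

# Hypothetical wiring: assume the service-account key JSON is exposed to the
# model code via an environment variable named GCP_SA_KEY (this name is an
# illustration, not a documented FrogML contract).
sa_key_json = os.environ.get(
    "GCP_SA_KEY",
    '{"type": "service_account", "project_id": "demo-project"}',
)
key = json.loads(sa_key_json)

# Google client libraries discover credentials via this standard variable.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(sa_key_json)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name

print(key["project_id"])
```

With GOOGLE_APPLICATION_CREDENTIALS set, clients such as google-cloud-storage pick up the key automatically.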
Generating Config Files
To generate a config file from a command you already use, specify the --out-conf flag, which prints a ready-to-use config to the console based on the options you specified. For example, assume you run the following command:
frogml models build \
--model-id test_model \
-T tag_1 -T tag_2 \
-E OS=WINDOWS \
-E VERSION=3.9 \
~/model
Here is an example of the output:
YAML
build_env:
docker:
base_image: qwak-base:0.0.18-cpu
build_args: {}
env_vars: ["OS=WINDOWS", "VERSION=3.9"]
no_cache: true
params: []
local:
aws_profile: null
no_push: true
python_env:
conda:
conda_file: conda.yml
git_credentials: null
git_credentials_secret: null
poetry: null
qwak_sdk_extra_index_url: null
virtualenv: null
remote:
is_remote: false
resources:
cpus: 2.0
memory: 4Gi
build_properties:
branch: main
build_id: f139b71a-533a-11ec-a934-367dda8b746f
model_id: test_model
model_uri:
git_branch: master
git_credentials: null
git_credentials_secret: null
main_dir: main
uri: ~/model
tags:
- tag_1
- tag_2
To save the configuration to a file, redirect the output. For example: frogml models build --model-id test_model ~/model --out-conf > config.yml.
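The env_vars entries in the generated config are KEY=VALUE strings. If your own tooling needs them as a mapping, a small helper can parse them; a sketch:

```python
# Sketch: the generated config lists env_vars as "KEY=VALUE" strings, as in
# the YAML above. This helper turns them into a dict, splitting only on the
# first '=' so values may themselves contain '='.
def parse_env_vars(entries):
    result = {}
    for entry in entries:
        key, _, value = entry.partition("=")
        result[key] = value
    return result

print(parse_env_vars(["OS=WINDOWS", "VERSION=3.9"]))
# {'OS': 'WINDOWS', 'VERSION': '3.9'}
```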
Customizing Your Build
Custom Docker Image
You can use a custom Docker image for the build process. The custom base image will also be used for the serving image.
When executing a build, the image should be specified using the --base-image flag:
Shell
frogml models build \
--model-id <model> \
--base-image <image-id> \
<uri>
<model-id> - The model ID associated with this build.
<image-id> - The Docker image ID.
<uri> - The FrogML-based model URI.
The custom Docker image should be based on one of:
public.ecr.aws/w8k8y6b6/qwak-base:0.0.37-cpu
public.ecr.aws/w8k8y6b6/qwak-base:0.0.19-gpu (in case of models requiring a GPU)
The Docker image entry point should not be changed.
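A custom image might therefore extend the CPU base image as in the sketch below (the extra system package is illustrative), leaving the base entry point untouched:

```dockerfile
# Extend the FrogML CPU base image; do not override ENTRYPOINT or CMD.
FROM public.ecr.aws/w8k8y6b6/qwak-base:0.0.37-cpu

# Illustrative: install extra system packages your model needs.
RUN apt-get update && apt-get install -y --no-install-recommends libgomp1 \
    && rm -rf /var/lib/apt/lists/*
```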
Custom Build Instance Type
You can choose either ondemand or spot instances using the --purchase-option flag. By default, the remote build runs on spot instances. For example:
Shell
frogml models build \
--model-id <model> \
--purchase-option ondemand \
<uri>
Environment Variables
Environment variables can be configured for both the build and serving environments. Specify them with the -E flag when executing a build:
frogml models build \
--model-id <model> \
-E <key>=<value> -E <key>=<value> \
<uri>
<model-id> - The model ID which the build relates to.
<key> - Environment variable key.
<value> - Environment variable value.
<uri> - FrogML based model URI.
Note
When you pass environment variables to the build process, their impact goes beyond just the build phase. These variables are not only propagated to the testing phase but are also 'baked' into the resulting image. As a result, they remain available during deployment, providing flexibility to overwrite them as needed. This enables seamless configuration management throughout the entire lifecycle of your application.
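In model code, such variables can be read with standard library calls; a minimal sketch (MODEL_ENV is an illustrative key, not a FrogML-defined one):

```python
import os

# Sketch: variables passed with -E are available during the build, in tests,
# and at serving time. MODEL_ENV is an illustrative key, not a FrogML name;
# the default is only a fallback for running this snippet outside a build.
model_env = os.environ.get("MODEL_ENV", "production")
print(model_env)
```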
Passing Secrets as Environment Variables
JFrog ML allows passing environment variables to model builds which receive values from JFrog ML secrets during the model build process.
While secret values will be accessible as environment variables during the build, they won't be displayed in the UI alongside other passed environment variables.
To implement this, supply the environment variable value in the format <key>=secret.<secret-name>.
For instance, if you have an API token stored under a JFrog ML secret named cloud_token and wish to pass it in the build under the environment variable APP_TOKEN, you would utilize the following command as an example:
Shell
frogml models build --model-id <model> -E APP_TOKEN=secret.cloud_token <uri>
Note
The secrets must exist in the JFrog ML platform before running the above command.
Customizing Main Directory
Given the following FrogML-based model directory structure:
Shell
jfrogml_based_model/
├── main/
├── tests/
You can change the main/ directory’s name. For example, changing its name to iris_classifier/:
Shell
jfrogml_based_model/
├── iris_classifier/
├── tests/
To configure the new main directory name, use a build config as follows:
YAML
build_properties:
model_uri:
main_dir: iris_classifier
Installing Packages from PyPI
During the build process you can download and use packages from private repositories.
Create an environment variable named PIP_EXTRA_INDEX_URL:
Shell
frogml models build --env-vars PIP_EXTRA_INDEX_URL=https://USERNAME:PASSWORD@JFROG_ARTIFACTORY_URL
You can also avoid passing these credentials in clear text by creating local environment variables:
Shell
export JFROG_USERNAME=<USERNAME>
export JFROG_PASSWORD=<PASSWORD>
frogml models build --env-vars PIP_EXTRA_INDEX_URL=https://${JFROG_USERNAME}:${JFROG_PASSWORD}@JFROG_ARTIFACTORY_URL
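If the username or password contains characters such as @ or :, they must be percent-encoded before being embedded in the index URL; a sketch with placeholder credentials and host:

```python
from urllib.parse import quote

# Sketch: credentials embedded in an index URL must be percent-encoded, or
# characters such as '@' and ':' in the password will break URL parsing.
# The username, password, and host below are placeholders.
username = "alice"
password = "p@ss:word"
host = "example.jfrog.io/artifactory/api/pypi/pypi-local/simple"

index_url = (
    f"https://{quote(username, safe='')}:{quote(password, safe='')}@{host}"
)
print(index_url)
```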
Fetching Model Code from a Private Git Repository
You can fetch model code from a private Git repository using either the --git-credentials-secret or --git-secret-ssh flags.
Using Token Credentials
Generate a GitHub Access Token:
Navigate to GitHub > Settings > Developer settings > Personal access tokens > Generate new token.
Select the scopes you need for your project and generate the token.
Create a JFrog ML Secret:
Use the format
USERNAME:ACCESS_TOKEN to create a new JFrog ML secret.
Shell
frogml secrets set --name '<your-jfrogml-secret>' --value "<username>:<access_token>"
Build the Model:
Use the
--git-credentials-secret flag to specify the name of this secret when building your model.
Shell
frogml models build \
git@github.com:<git_user>/model-test.git#models/model/churn \
--git-credentials-secret '<your-jfrogml-secret>' \
--git-branch '<your-branch>'
Note
The #models/model/churn part specifies the folder path where the model is located inside the repository.
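The repository part and the folder path can be recovered by splitting the URI on the first #; a sketch with a placeholder URI:

```python
# Sketch: a model URI of the form <repo>#<path> carries the in-repo folder
# after the '#'. Splitting on the first '#' recovers both parts.
uri = "git@github.com:git_user/model-test.git#models/model/churn"
repo, _, main_path = uri.partition("#")
print(repo)
print(main_path)
```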
Using SSH Authentication
Generate an SSH Key:
If you don't have an SSH key, you can generate one using
ssh-keygen -t rsa -b 4096 -C "your_email@example.com".
Add SSH Key to GitHub:
Navigate to GitHub > Settings > SSH and GPG keys> New SSH key. Paste your public key and save.
Create a JFrog ML Secret:
Copy your private SSH key to your clipboard:
Shell
pbcopy < ~/.ssh/<your-private-key-file>
Create a new JFrog ML secret with the content of the copied private key.
Shell
frogml secrets set --name '<your-jfrogml-secret>' --value "$(pbpaste)"
Build the Model:
Use the
--git-secret-ssh flag to specify the name of this secret when building your model.
Shell
frogml models build \
git@github.com:<git_user>/model-test.git#models/model/churn \
--git-secret-ssh '<your-jfrogml-secret>' \
--git-branch '<your-branch>'