JFrog ML Rest API

After deploying a FrogML-based model, you can use a REST client to request inferences from the model, which is hosted as a real-time endpoint.

Authentication Process

To access the REST client, you first need to generate an access token.

  1. Generate an access token (see the Access Tokens documentation).

  2. Set up your environment: add the generated token to your environment by using the following command:

export TOKEN="<Auth Token>"

Make sure to replace <Auth Token> with the actual token you generated. After this, you will be able to use the REST client with your access token for authentication.

Inference Example

The following example demonstrates how to invoke the model test_model. This model accepts a feature vector containing three fields, and it returns a single output field called "score."

To illustrate this, we will use a curl command as a REST client.

Once a token is generated, invoke the model as follows:

export TOKEN="<Auth Token>"

curl --location --request POST 'https://models.<environment_name>.qwak.ai/v1/test_model/predict' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN" \
    --header 'X-JFrog-Tenant-Id: <TENANT ID>' \
    --data '{"columns":["feature_a","feature_b","feature_c"],"index":[0],"data":[["feature_value",1,0.5]]}'
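The same request can be sketched from Python using only the standard library. This is an illustrative equivalent of the curl command above, not an official client: the URL placeholder, tenant ID placeholder, and `TOKEN` environment variable are carried over from the example, and the `predict` helper name is our own.

```python
import json
import os
import urllib.request

# Placeholder endpoint; replace <environment_name> as in the curl example.
URL = "https://models.<environment_name>.qwak.ai/v1/test_model/predict"

# Same request body as the curl example: column names, a row index,
# and a single row of feature values.
payload = {
    "columns": ["feature_a", "feature_b", "feature_c"],
    "index": [0],
    "data": [["feature_value", 1, 0.5]],
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + os.environ.get("TOKEN", ""),
    "X-JFrog-Tenant-Id": "<TENANT ID>",  # replace with your tenant ID
}

def predict(url: str = URL) -> dict:
    """POST the feature vector and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `predict()` with a valid token and environment name would return the model's output, e.g. a JSON object containing the `score` field.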

Inference for a Specific Variation

When working with variations, you can request an inference from a specific variation (endpoint) by appending the variation name to the URL, as shown below:

curl --location --request POST 'https://models.<environment_name>.qwak.ai/v1/test_model/variation_name/predict' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN" \
    --header 'X-JFrog-Tenant-Id: <TENANT ID>' \
    --data '{"columns":["feature_a","feature_b","feature_c"],"index":[0],"data":[["feature_value",1,0.5]]}'