Log Hugging Face Models

JFrog Artifactory Documentation


Run the following code:

import frogml

repository = "<REPO_NAME>"
name = "<MODEL_NAME>"
version = "<MODEL_VERSION>" # optional
properties = {"<KEY1>": "<VALUE1>"} # optional
dependencies = ["<MODEL_DEPENDENCIES>"] # optional
code_dir = "<CODE_PATH>" # optional
parameters = {"<HYPERPARAM_NAME>": "<VALUE>"} # optional
metrics = {"<METRIC_NAME>": "<VALUE>"} # optional
predict_file = "<PREDICT_PATH>" # optional
model = get_huggingface_model()  # placeholder: load your Hugging Face model
tokenizer = get_huggingface_tokenizer()  # placeholder: load the matching tokenizer

frogml.huggingface.log_model(
    model=model,
    tokenizer=tokenizer,
    repository=repository,
    model_name=name,
    version=version,
    properties=properties,
    dependencies=dependencies,
    code_dir=code_dir,
    parameters=parameters,
    metrics=metrics,
    predict_file=predict_file,
)
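The `get_huggingface_model()` and `get_huggingface_tokenizer()` calls above are placeholders for however you obtain your model. With the `transformers` library, a typical loading step might look like this (the checkpoint name is only an example; substitute your own fine-tuned model):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example checkpoint; replace with your own model's checkpoint
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"

model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```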

Important

The parameters code_dir, dependencies, and predict_file must be provided as a complete set. You must specify all of them or none at all.
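This all-or-none constraint can be verified before calling `log_model`. A minimal sketch (the helper name and sample values are hypothetical):

```python
def check_optional_set(code_dir, dependencies, predict_file):
    """Raise if code_dir, dependencies, and predict_file are not set together."""
    provided = [v for v in (code_dir, dependencies, predict_file) if v is not None]
    if len(provided) not in (0, 3):
        raise ValueError(
            "code_dir, dependencies, and predict_file must be "
            "specified together or not at all"
        )

check_optional_set(None, None, None)                         # OK: none provided
check_optional_set("src/", ["requirements.txt"], "pred.py")  # OK: all provided
```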

Where:

  • <REPO_NAME>: The name of the Artifactory repository where the model is stored

  • <MODEL_NAME>: The unique name of the model you want to upload

  • <MODEL_VERSION> (Optional): The version of the model you want to upload. If no version is specified, the current timestamp is used

  • <KEY1> and <VALUE1> (Optional): Key-value pairs that add searchable metadata to the model. Separate multiple pairs with commas

  • <MODEL_DEPENDENCIES> (Optional): Dependencies required to run the model, in one of the following formats:

    • A list of specific package requirements, for example ["catboost==1.2.5", "scikit-learn==1.3.2"]

    • A path to a single requirements.txt, pyproject.toml, or conda.yml package manager file

    • A two-item list containing paths to the pyproject.toml and poetry.lock files

  • <CODE_PATH> (Optional): The path to the directory containing the model's source code

  • <HYPERPARAM_NAME> and <VALUE> (Optional): Hyperparameters used to train the model

  • <METRIC_NAME> and <VALUE> (Optional): Key-value pairs that add searchable numeric metadata to the model. Separate multiple pairs with commas

  • <PREDICT_PATH> (Optional): The path to a script that defines how to make predictions with the model.
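For illustration, the three accepted dependencies formats described above could be written as follows (all package versions and file paths are hypothetical):

```python
# 1. A list of explicit package pins
dependencies = ["catboost==1.2.5", "scikit-learn==1.3.2"]

# 2. A path to a single package manager file
dependencies = ["requirements.txt"]

# 3. A pyproject.toml together with its poetry.lock
dependencies = ["pyproject.toml", "poetry.lock"]
```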

For example:

import frogml

repository = "ml-local"
name = "sentiment-analyzer"
version = "1.2.0"
properties = {
    "base_model": "distilbert-base-uncased-finetuned-sst-2-english",
    "dataset": "SST-2 (Stanford Sentiment Treebank)", 
    "accuracy": 0.93
}
dependencies = ["environment.yaml"]
code_dir = "src/fine_tuning/"
predict_file = "src/fine_tuning/predict.py"
model = get_huggingface_model()  # placeholder: load your Hugging Face model
tokenizer = get_huggingface_tokenizer()  # placeholder: load the matching tokenizer

frogml.huggingface.log_model(
    model=model,
    tokenizer=tokenizer,
    repository=repository,
    model_name=name,
    version=version,
    properties=properties,
    dependencies=dependencies,
    code_dir=code_dir,
    predict_file=predict_file,
)