How to Deploy a Model for Inference Using SageMaker and Artifactory

Article: How to Use JFrog Artifactory with AWS SageMaker
Author: Melissa McKay
Article number: 000005986
Source: Salesforce
First published: 2024-01-17T12:49:57Z
Last modified: 2024-01-17
Version: 2
The goal of this example is to demonstrate how to deploy and interact with the custom ML model created and stored in the previous section, by performing the following steps:
  1. Prepare an inference script
  2. Build an inference container image
  3. Deploy the ML model with SageMaker Inference
  4. Use the deployed model to make predictions
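For step 1, a custom SageMaker inference container must listen on port 8080 and answer two routes: GET /ping (health check, must return 200) and POST /invocations (inference requests). Below is a minimal, stdlib-only sketch of such a script; the toy predict function and the JSON payload shape ({"features": [...]}) are illustrative assumptions, not the actual model from the previous section.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Placeholder model: replace with loading and invoking your real
    # artifact (e.g. via joblib or your framework's load function).
    return sum(features)


class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # SageMaker health check: /ping must return 200 when healthy.
        if self.path == "/ping":
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # SageMaker forwards inference requests to /invocations.
        if self.path == "/invocations":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            body = json.dumps(
                {"prediction": predict(payload["features"])}
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep container logs quiet; remove to see per-request logging.
        pass


def serve(port=8080):
    # SageMaker expects the container to listen on port 8080.
    # The container entrypoint would call serve() to start handling requests.
    HTTPServer(("0.0.0.0", port), InferenceHandler).serve_forever()
```

Production containers typically put a WSGI server (e.g. gunicorn) behind nginx instead of the stdlib server, but the two routes and the port are the same.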
All of the sample code for inference is available in the inference directory here.
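Steps 3 and 4 can be sketched with boto3. The calls used here (create_model, create_endpoint_config, create_endpoint, invoke_endpoint) are the standard SageMaker APIs, but every name below is a placeholder assumption: the image URI would point at the inference image pushed to or proxied through your Artifactory Docker registry, and the S3 path and IAM role must match your account. Note that create_endpoint is asynchronous; wait for the endpoint to reach InService before invoking it.

```python
import json

# Placeholder values -- substitute your own account's resources.
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest"
MODEL_DATA = "s3://my-bucket/model/model.tar.gz"
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"


def endpoint_config_variants(model_name, instance_type="ml.m5.large"):
    # Pure helper: builds the ProductionVariants payload for
    # create_endpoint_config.
    return [{
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        "InitialInstanceCount": 1,
        "InstanceType": instance_type,
    }]


def deploy(model_name="my-model", endpoint_name="my-endpoint"):
    import boto3  # imported lazily so the module loads without AWS access

    sm = boto3.client("sagemaker")
    # Register the container image and model artifact as a SageMaker model.
    sm.create_model(
        ModelName=model_name,
        PrimaryContainer={"Image": IMAGE_URI, "ModelDataUrl": MODEL_DATA},
        ExecutionRoleArn=ROLE_ARN,
    )
    # Describe how the model should be hosted, then create the endpoint.
    sm.create_endpoint_config(
        EndpointConfigName=endpoint_name + "-config",
        ProductionVariants=endpoint_config_variants(model_name),
    )
    sm.create_endpoint(
        EndpointName=endpoint_name,
        EndpointConfigName=endpoint_name + "-config",
    )


def predict(endpoint_name, features):
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    # Payload shape must match whatever the inference script expects.
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"features": features}),
    )
    return json.loads(response["Body"].read())
```

The same deployment can also be expressed more compactly with the higher-level SageMaker Python SDK (sagemaker.model.Model.deploy); the boto3 form is shown because it maps one-to-one onto the underlying API calls.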