Deploying ML Model with SageMaker Inference

With the custom Docker image, inference scripts, and custom ML model in place, the model can now be deployed to a SageMaker endpoint. Run the deploy-model.py script, which handles configuring, initializing, and deploying the model to the specified endpoint.
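
The deploy-model.py script itself is not reproduced here; the sketch below shows what such a script typically does with the SageMaker Python SDK, assuming placeholder values for the custom image URI in ECR, the model artifact location in S3, the IAM execution role, and the endpoint name. Adjust these to match your own setup before running it.

```python
# Minimal sketch of a model deployment script (deploy-model.py-style),
# using the SageMaker Python SDK. All resource names below are placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

# IAM role that SageMaker assumes to pull the image and read the model artifact.
role = "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>"  # placeholder

# Configure the model: the custom Docker image pushed to ECR and the
# packaged model artifact (model.tar.gz) uploaded to S3.
model = Model(
    image_uri="<account-id>.dkr.ecr.<region>.amazonaws.com/<custom-image>:latest",
    model_data="s3://<bucket>/<path>/model.tar.gz",
    role=role,
    sagemaker_session=session,
)

# Deploy the model to a real-time inference endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",        # choose an instance type suited to your model
    endpoint_name="custom-model-endpoint",  # placeholder endpoint name
)
```

Once the endpoint reports an InService status, it can be tested by sending a request, for example with predictor.predict() or the SageMaker runtime invoke-endpoint API.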