Using the Deployed Model to Make Predictions

Finally, the SageMaker Python SDK's Predictor class can be used to run predictions against the SageMaker endpoint. Note that the script test-inference.py in the sample repo uses the same inference endpoint configured in the deploy script to make predictions against the custom ML model that was deployed earlier.
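The following is a minimal sketch of that pattern, not the repo's test-inference.py verbatim; the endpoint name and the sample payload are assumptions and should be replaced with the values used by your deploy script and expected by your model.

```python
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Must match the endpoint name configured in the deploy script (assumed value).
ENDPOINT_NAME = "my-custom-model-endpoint"

# Attach a Predictor to the already-deployed endpoint.
predictor = Predictor(
    endpoint_name=ENDPOINT_NAME,
    sagemaker_session=sagemaker.Session(),
    serializer=JSONSerializer(),      # send the request body as JSON
    deserializer=JSONDeserializer(),  # parse the JSON response
)

# Example input; the feature layout is a placeholder for whatever your model expects.
sample_input = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}

# Invoke the endpoint and print the model's response.
result = predictor.predict(sample_input)
print("Prediction:", result)
```

Running a script like this requires valid AWS credentials and the sagemaker package installed in the environment, and it will only succeed while the endpoint from the deploy step is still in service.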