Refer to the script inference.py for an example and a full explanation of how to retrieve a custom trained model from Artifactory and handle queries.
This script is responsible for the following:
- Initializing the handler service that is executed by the model server
- Retrieving the necessary credentials and other info from the AWS Secret
- Retrieving the custom Docker image for inference from Artifactory
- Retrieving the custom model for inference from Artifactory
- Processing the input and output for predictions (see the sketch after this list)
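The sketch below illustrates what such a handler can look like. It is a minimal, hypothetical example, not the project's actual inference.py: it assumes a SageMaker-style handler interface (`model_fn`, `input_fn`, `predict_fn`, `output_fn`), a PyTorch model artifact, and placeholder values for the secret name, Artifactory URL, and model path. Pulling the custom Docker image is omitted, since that happens outside the handler itself.

```python
"""Hypothetical sketch of an inference handler; names and URLs are placeholders."""
import json
import os
import tarfile

import boto3
import requests
import torch  # assumption: the trained model is a TorchScript artifact


def _get_artifactory_credentials(secret_name="artifactory/credentials"):
    """Fetch Artifactory credentials from AWS Secrets Manager (placeholder secret name)."""
    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=secret_name)
    return json.loads(secret["SecretString"])


def model_fn(model_dir):
    """Called once by the model server: download the model from Artifactory and load it."""
    creds = _get_artifactory_credentials()
    url = "https://example.jfrog.io/artifactory/models/my-model/model.tar.gz"  # placeholder
    archive_path = os.path.join(model_dir, "model.tar.gz")

    response = requests.get(url, auth=(creds["username"], creds["token"]), stream=True)
    response.raise_for_status()
    with open(archive_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
    with tarfile.open(archive_path) as tar:
        tar.extractall(model_dir)

    model = torch.jit.load(os.path.join(model_dir, "model.pt"), map_location="cpu")
    model.eval()
    return model


def input_fn(request_body, content_type="application/json"):
    """Deserialize the incoming request into model input."""
    payload = json.loads(request_body)
    return torch.tensor(payload["inputs"])


def predict_fn(inputs, model):
    """Run the model on the deserialized input."""
    with torch.no_grad():
        return model(inputs)


def output_fn(prediction, accept="application/json"):
    """Serialize the prediction for the response."""
    return json.dumps({"predictions": prediction.tolist()})
```

The actual inference.py may differ in handler signatures and artifact layout; the key pattern is the same: resolve credentials from the AWS Secret, pull the model artifact from Artifactory at load time, and keep request/response handling in the deserialization and serialization functions.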