An inference is a prediction made in response to a question or task. In the context of Machine Learning (ML) and Artificial Intelligence (AI), the term inference is often contrasted with training. Simply put, inference is the stage where the capabilities learnt during training are applied to new data to "infer" a result. Inference is applied everywhere across industries, from photo tagging to autonomous driving.
Instill Model provides an automated model inference server.
# Inference with Dedicated Model API Endpoint
After importing a model from a supported source, such as GitHub or Hugging Face, and deploying it online, Instill Model dynamically generates a dedicated API endpoint for model inference: `/users/<user-id>/models/<model-id>/trigger`, where `<user-id>` and `<model-id>` correspond to the namespace and ID of the model.

You can send multiple images in popular formats (PNG and JPEG) in a single request to the generated model API endpoint. Check the examples below. The API accepts batched images

- sent by remote URL or Base64, or
- uploaded by multipart.
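As a concrete illustration, the sketch below sends one batched request to the trigger endpoint using the Python `requests` library. The host, the bearer-token auth, and the request-body field names (`task_inputs`, `image_url`, `image_base64`) are illustrative assumptions, not part of this document; consult the API reference for the exact schema.

```python
import base64
import requests

# Hypothetical values -- replace with your own host, namespace, model ID,
# and API token. Bearer-token auth is an assumption for illustration.
API_HOST = "https://api.instill.tech"
USER_ID = "your-user-id"
MODEL_ID = "your-model-id"
API_TOKEN = "your-api-token"

url = f"{API_HOST}/users/{USER_ID}/models/{MODEL_ID}/trigger"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Read a local JPEG and encode it as Base64 for the second input.
with open("dog.jpg", "rb") as f:
    dog_b64 = base64.b64encode(f.read()).decode("utf-8")

# Two images batched into one request: one by remote URL, one by Base64.
# The field names below are assumptions; the real schema may differ.
payload = {
    "task_inputs": [
        {"classification": {"image_url": "https://example.com/cat.png"}},
        {"classification": {"image_base64": dog_b64}},
    ]
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```

For multipart upload, the same idea applies with the `files=` parameter of `requests.post`; again, take the exact multipart endpoint and field names from the API reference rather than this sketch.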
# Using Models in VDP
To build VDP pipelines for your AI workflows with models served on the Instill Model platform, you can utilize the Instill Model AI Connector.
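As a rough sketch of the flow: once a pipeline is wired up with the AI Connector pointing at a deployed model, the pipeline itself can be triggered over HTTP. Everything below (host, pipeline ID, endpoint path, and input fields) is a hypothetical illustration, not the documented VDP API.

```python
import requests

# Hypothetical sketch of triggering a VDP pipeline that routes inputs
# through the Instill Model AI Connector. All names and paths here are
# illustrative assumptions; see the VDP documentation for the real API.
API_HOST = "https://api.instill.tech"
PIPELINE_ID = "my-vision-pipeline"
API_TOKEN = "your-api-token"

response = requests.post(
    f"{API_HOST}/pipelines/{PIPELINE_ID}/trigger",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"inputs": [{"image_url": "https://example.com/cat.png"}]},
)
print(response.json())
```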