
Predict on an Image Over HTTP

Blog post from Roboflow

Post Details

Company: Roboflow
Date Published: -
Author: -
Word Count: 1,799
Language: English
Hacker News Points: -
Summary

The Roboflow Inference Server offers a standardized API for running inference on computer vision models, covering object detection, classification, and segmentation tasks. It supports models trained on Roboflow, with plans to expand compatibility to custom models. Inference can be performed on images sourced from URLs, local files, PIL images, or NumPy arrays.

Setup involves installing the Inference Server via Docker, then configuring and running inference through the routes appropriate to the server version. The server can process batch requests and returns JSON responses with predictions. Roboflow's hosted inference endpoints support V1 routes, while local servers offer V2 routes, which differ in request structure. The guide includes Python code examples for setting up and running inference, detailing required parameters such as project ID, model version, and API key. The server also exposes OpenAPI-powered documentation at designated local routes for referencing available routes and configuration options.
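As a rough illustration of the V1-versus-V2 difference the summary describes, the sketch below builds both request styles with only the standard library. The project ID, model version, local port, route paths, and JSON field names here are assumptions for illustration, not taken from the post; consult the server's own OpenAPI documentation for the exact schema.

```python
import base64
import json
import os
import urllib.parse
import urllib.request

# Hypothetical identifiers -- substitute your own project, version, and key.
PROJECT_ID = "my-project"
MODEL_VERSION = "1"
API_KEY = os.environ.get("ROBOFLOW_API_KEY", "")


def v1_hosted_request(image_bytes: bytes) -> urllib.request.Request:
    """V1-style hosted route: api_key as a query parameter,
    base64-encoded image as the POST body."""
    url = (
        f"https://detect.roboflow.com/{PROJECT_ID}/{MODEL_VERSION}"
        f"?{urllib.parse.urlencode({'api_key': API_KEY})}"
    )
    return urllib.request.Request(
        url,
        data=base64.b64encode(image_bytes),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


def v2_local_request(image_bytes: bytes) -> urllib.request.Request:
    """V2-style local route: model, key, and image travel together in a
    JSON body (field names assumed; check http://localhost:9001/docs)."""
    payload = {
        "model_id": f"{PROJECT_ID}/{MODEL_VERSION}",
        "api_key": API_KEY,
        "image": {
            "type": "base64",
            "value": base64.b64encode(image_bytes).decode("utf-8"),
        },
    }
    return urllib.request.Request(
        "http://localhost:9001/infer/object_detection",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending either request (needs a valid key, and a running server for V2):
# with urllib.request.urlopen(v1_hosted_request(open("dog.jpg", "rb").read())) as r:
#     print(json.loads(r.read())["predictions"])
```

Building the `Request` objects separately from sending them keeps the route difference visible: V1 carries everything in the URL and raw body, while V2 consolidates it into one JSON document.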