The USB Device Service is a microservice created to address the lack of standardization and automation of camera discovery and onboarding. EdgeX Foundry is a flexible microservice-based architecture created to promote the interoperability of multiple device interface combinations at the edge. In an EdgeX deployment, the USB Device Service controls and communicates with USB cameras, while EdgeX Foundry presents a standard interface to application developers. With normalized connectivity protocols and a vendor-neutral architecture, EdgeX, paired with the USB Device Service, simplifies the deployment of edge camera devices.
Specifically, the device service uses the V4L2 API to get camera metadata and the FFmpeg framework to capture video frames and stream them to an RTSP server, which is embedded in the dockerized device service. This allows the video stream to be integrated into the larger architecture.
Use the USB Device Service to streamline and scale your edge camera device deployment.
Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. This means that the published Docker image and Snaps do not include the NATS messaging capability.
The following make commands build the local binary or local Docker image with the NATS messaging capability included:

```shell
make build-nats
make docker-nats
```
The locally built Docker image can then be used in place of the published Docker image in your compose file. See the Compose Builder `nat-bus` option to generate a compose file for NATS and local dev images.
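As a sketch, swapping in the locally built image can be done with a compose override file. The service name and image tag below are assumptions; match them to the names in your generated compose file.

```yaml
# docker-compose.override.yml (sketch; service and image names are assumptions)
services:
  device-usb-camera:
    image: edgexfoundry/device-usb-camera:0.0.0-dev
```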
The figure below illustrates the software flow through the architecture components.
Figure 1: Software Flow
- EdgeX Device Discovery: Camera device microservices probe the network and platform for video devices at a configurable interval. Devices that do not currently exist and that satisfy the Provision Watcher filter criteria are added to Core Metadata.
- Application Device Discovery: The microservices then query Core Metadata for devices and their associated configuration.
- Application Device Configuration: The configuration and triggering of device actions are done through a REST API representing the resources of the video device.
- Pipeline Control: The application initiates the Video Analytics Pipeline through an HTTP POST request.
- Publish Inference Events/Data: Analytics inferences are formatted and passed to the destination message bus specified in the request.
- Export Data: Publish prepared (transformed, enriched, filtered, etc.) and groomed (formatted, compressed, encrypted, etc.) data to external systems (an analytics package, an enterprise or on-premises application, or a cloud system such as Azure IoT, AWS IoT, or Google IoT Core).
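Device actions follow the Core Command URL pattern used throughout this guide. The helper below is a minimal sketch of that pattern; the device name `Camera001` is a hypothetical example, and 59882 is the default Core Command port.

```shell
# Sketch: build a Core Command URL for a device resource.
# "Camera001" is a hypothetical device name; 59882 is the default Core Command port.
command_url() {
  device="$1"
  resource="$2"
  echo "http://localhost:59882/api/v2/device/name/${device}/${resource}"
}

command_url Camera001 CameraStatus
# prints "http://localhost:59882/api/v2/device/name/Camera001/CameraStatus"
```

The printed URL can then be issued as a GET (to read a resource) or PUT (to set one) with curl, as shown in the status query later in this guide.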
To set up your system, follow this guide.
For a full walkthrough on how to use this service and RTSP streaming, follow this guide.
Use the following query to determine the status of the camera. URL parameters:
- DeviceName: The name of the camera.
- InputIndex: The index of the video input. If a camera has only one video source, this must be set to '0'.
```shell
curl -X GET http://localhost:59882/api/v2/device/name/<DeviceName>/CameraStatus?InputIndex=0 | jq -r '"CameraStatus: " + (.event.readings[].value|tostring)'
```

Example output:

```
CameraStatus: 0
```
Response meanings:
| Response | Description |
|---|---|
| 0 | Ready |
| 1 | No Power |
| 2 | No Signal |
| 3 | No Color |
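For scripting, the numeric reading can be translated into its label from the table above. This is a minimal sketch; the function name is an illustration, not part of the service.

```shell
# Sketch: map a CameraStatus code to its meaning (codes from the table above).
status_label() {
  case "$1" in
    0) echo "Ready" ;;
    1) echo "No Power" ;;
    2) echo "No Signal" ;;
    3) echo "No Color" ;;
    *) echo "Unknown status: $1" ;;
  esac
}

status_label 0   # prints "Ready"
```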