EdgeFirst AI

AI for Spatial Perception

EdgeFirst Perception is an open-source suite of libraries and microservices for AI-driven spatial perception on edge devices. It supports cameras, LiDAR, radar, and time-of-flight sensors — enabling real-time object detection, segmentation, sensor fusion, and 3D spatial understanding, optimized for resource-constrained embedded hardware.

Workflow

EdgeFirst Model Zoo Ecosystem: Training, Validation, and Publication Workflow

Every model in the EdgeFirst Model Zoo passes through a validated pipeline. EdgeFirst Studio manages datasets, training, multi-format export (ONNX, TFLite INT8, eIQ Neutron, Kinara DVM, HailoRT HEF, TensorRT), and reference validation. Models are then deployed to our board farm for full-dataset on-target validation on real hardware — measuring both accuracy (mAP) and detailed timing breakdown per device. Results are published here on HuggingFace with per-platform performance tables.

Model Lifecycle

Model Lifecycle: 5 stages from training to publication

On-Target Validation

On-Target Validation Pipeline: full dataset validation on real hardware

Unlike desktop-only benchmarks, EdgeFirst validates every model on real target hardware with the full dataset. Each device produces both accuracy metrics (mAP) and a detailed timing breakdown — load, preprocessing, NPU inference, and decode — so you know exactly how a model performs on your specific platform.
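The per-stage timing breakdown described above can be collected with a simple wall-clock harness. The sketch below is illustrative only: `run_once`, the `model` interface, and the stage names are assumptions for this example, not an EdgeFirst API.

```python
import time
from contextlib import contextmanager

# Accumulates per-stage timings in milliseconds for the most recent run.
timings: dict[str, float] = {}

@contextmanager
def stage(name: str):
    """Record wall-clock time for one pipeline stage, in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = (time.perf_counter() - start) * 1000.0

def run_once(model, image):
    """One end-to-end pass, timing each stage separately.

    `model` is any object exposing preprocess/invoke/decode
    (hypothetical interface for this sketch).
    """
    with stage("preprocess"):
        tensor = model.preprocess(image)   # resize / normalize
    with stage("invoke"):
        raw = model.invoke(tensor)         # NPU or CPU inference
    with stage("decode"):
        result = model.decode(raw)         # box decoding + NMS
    timings["e2e"] = sum(timings[k] for k in ("preprocess", "invoke", "decode"))
    return result
```

In practice each stage would be averaged over the full validation dataset, and model load time measured once at startup.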

Supported Hardware

NXP i.MX 8M Plus · NXP i.MX 95 · NXP Ara240 · RPi 5 + Hailo-8/8L · NVIDIA Jetson

Model Zoo

Pre-trained YOLO models for edge deployment. Each model repo contains all sizes (nano through x-large) in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled variants added as they become available.

Detection

YOLO26

n/s/m/l/x · COCO 80 classes · Nano mAP@0.5: 54.9%

YOLO11

n/s/m/l/x · COCO 80 classes · Nano mAP@0.5: 53.4%

YOLOv8

n/s/m/l/x · COCO 80 classes · Nano mAP@0.5: 50.2%

YOLOv5

n/s/m/l/x · COCO 80 classes · Nano mAP@0.5: 49.6%

Instance Segmentation

YOLO26

n/s/m/l/x · COCO 80 classes · Nano Mask mAP: 37.0%

YOLO11

n/s/m/l/x · COCO 80 classes · Nano Mask mAP: 35.5%

YOLOv8

n/s/m/l/x · COCO 80 classes · Nano Mask mAP: 34.1%

Roadmap

The EdgeFirst Model Zoo is expanding across the full spatial perception stack — from 2D detection through depth estimation, 3D scene understanding, and edge VLMs. All models are validated on real hardware with the same pipeline used for our YOLO models.

| Category | Examples | Platforms | Status |
|---|---|---|---|
| Detection (Apache 2.0) | DETR-class, EfficientDet, mobile-optimized detectors | i.MX, Ara240, Hailo, Jetson | Coming Soon |
| Semantic Segmentation | Lightweight real-time scene parsing | i.MX, Ara240, Hailo, Jetson | Roadmap |
| Instance Segmentation (Apache 2.0) | Non-YOLO mask prediction | Ara240, Jetson | Roadmap |
| SAM-like Segmentation | Prompted, class-agnostic masks | Ara240, Jetson | Roadmap |
| Monocular Depth | Relative and metric depth estimation | i.MX, Ara240, Jetson | Roadmap |
| Stereo Depth | Hardware stereo depth matching | Jetson | Roadmap |
| 3D Detection & Occupancy | Monocular 3D, BEV, occupancy grids | Jetson | Roadmap |
| Edge VLMs | Visual language models for edge inference | Ara240, Jetson | Roadmap |

Roadmap is subject to change. Models are published as validation completes on each target platform.

Naming Convention

Each HuggingFace repo contains one model family for one task, with all size variants inside.

| Component | Pattern | Example |
|---|---|---|
| HF Repo | EdgeFirst/{version}-{task} | EdgeFirst/yolov8-det |
| ONNX Model | {version}{size}-{task}.onnx | yolov8n-det.onnx |
| TFLite Model | {version}{size}-{task}-int8.tflite | yolov8n-det-int8.tflite |
| i.MX 95 TFLite | {version}{size}-{task}.imx95.tflite | yolov8n-det.imx95.tflite |
| i.MX 93 TFLite | {version}{size}-{task}.imx93.tflite | yolov8n-det.imx93.tflite |
| i.MX 943 TFLite | {version}{size}-{task}.imx943.tflite | yolov8n-det.imx943.tflite |
| Hailo HEF | {version}{size}-{task}.hailo{variant}.hef | yolov8n-det.hailo8l.hef |
| Studio Project | {Dataset} {Task} | COCO Detection |
| Studio Experiment | {Version} {Task} | YOLOv8 Detection |
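The filename patterns above are mechanical, so they can be expressed as a small helper. The function names here (`repo_id`, `artifact_name`) are illustrative for this sketch, not part of any EdgeFirst API.

```python
def repo_id(version: str, task: str) -> str:
    """HuggingFace repo id for one model family and task."""
    return f"EdgeFirst/{version}-{task}"

def artifact_name(version: str, size: str, task: str, fmt: str = "onnx") -> str:
    """Build an artifact filename following the zoo's naming convention.

    fmt: "onnx" (FP32), "int8" (quantized TFLite),
         "imx95" / "imx93" / "imx943" (platform TFLite),
         or "hailo8" / "hailo8l" (Hailo HEF).
    """
    base = f"{version}{size}-{task}"
    if fmt == "onnx":
        return f"{base}.onnx"
    if fmt == "int8":
        return f"{base}-int8.tflite"
    if fmt.startswith("imx"):
        return f"{base}.{fmt}.tflite"
    if fmt.startswith("hailo"):
        return f"{base}.{fmt}.hef"
    raise ValueError(f"unknown format: {fmt}")
```

For example, `artifact_name("yolov8", "n", "det", "hailo8l")` yields `yolov8n-det.hailo8l.hef`. Paired with `huggingface_hub.hf_hub_download(repo_id(...), artifact_name(...))`, this resolves a concrete downloadable artifact.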

Validation Pipeline

Models go through two validation stages before publication:

| Stage | What | Where |
|---|---|---|
| Reference | ONNX FP32 and TFLite INT8 mAP on full COCO val2017 (5000 images) | EdgeFirst Studio (cloud) |
| On-Target | Full-dataset mAP + timing breakdown (load, preproc, invoke, decode, e2e) per device | Board farm (real hardware), in progress |
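Running both stages makes it possible to gate publication on how much accuracy a model loses between the cloud reference and the target device. A minimal sketch of such a gate, where the one-point tolerance is purely illustrative and not an EdgeFirst policy:

```python
def passes_validation(reference_map: float, on_target_map: float,
                      max_drop: float = 1.0) -> bool:
    """True if on-target accuracy is within `max_drop` mAP points
    of the cloud reference (hypothetical acceptance criterion)."""
    return (reference_map - on_target_map) <= max_drop
```

For example, `passes_validation(50.2, 49.8)` returns `True` (small quantization drift), while a three-point drop would fail the gate.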

Perception Architecture

| Layer | Description |
|---|---|
| Foundation | Hardware abstraction, video I/O, accelerated inference delegates |
| Zenoh | Modular perception pipeline over Zenoh pub/sub |
| GStreamer | Spatial perception elements for GStreamer / NNStreamer |
| ROS 2 | Native ROS 2 nodes extending Zenoh microservices (roadmap) |

EdgeFirst Studio

EdgeFirst Studio is the MLOps platform that drives the entire model zoo pipeline. A free tier is available.