YOLO26 Segmentation — EdgeFirst Edge AI

NXP i.MX 8M Plus | NXP i.MX 93 | NXP i.MX 95 | NXP Ara240 | RPi5 + Hailo-8/8L | NVIDIA Jetson

YOLO26 Segmentation models optimized for edge AI deployment across multiple hardware platforms: all sizes from Nano to XLarge, in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled models for NPU acceleration.

Trained on COCO 2017 (80 classes). Part of the EdgeFirst Model Zoo.

Training session: View on EdgeFirst Studio — dataset, training config, metrics, and exported artifacts.

Export with end2end=False is required for INT8 quantization. YOLO26 is the fastest architecture in the EdgeFirst Model Zoo.


Size Comparison

All models validated on COCO val2017 (5000 images, 80 classes).

| Size | Params | GFLOPs | ONNX Det mAP@0.5-0.95 | INT8 Det mAP@0.5-0.95 | ONNX Mask mAP@0.5-0.95 | INT8 Mask mAP@0.5-0.95 |
|--------|-------|-------|-------|-------|-------|-------|
| Nano | 2.7M | 7.6 | 29.6% | 26.8% | 37.0% | 34.5% |
| Small | 10.3M | 27.0 | — | — | — | — |
| Medium | 24.5M | 74.4 | — | — | — | — |
| Large | 42.5M | 155.0 | — | — | — | — |
| XLarge | 67.5M | 244.0 | — | — | — | — |

On-Target Performance

Full pipeline timing: pre-processing + inference + post-processing.

| Size | Platform | Pre-proc (ms) | Inference (ms) | Post-proc (ms) | Total (ms) | FPS |
|------|----------|---------------|----------------|----------------|------------|-----|
| — | — | — | — | — | — | — |

Measured with EdgeFirst Perception stack. Timing includes full GStreamer pipeline overhead.


Downloads

ONNX FP32 — any platform with ONNX Runtime.

| Size | File | Status |
|--------|-------------------------|----------|
| Nano | yolo26n-seg-coco.onnx | Download |
| Small | yolo26s-seg-coco.onnx | Download |
| Medium | yolo26m-seg-coco.onnx | Download |
| Large | yolo26l-seg-coco.onnx | Download |
| XLarge | yolo26x-seg-coco.onnx | Download |

TFLite INT8 — CPU or NPU via runtime delegate (i.MX 8M Plus VX Delegate).

| Size | File | Status |
|--------|--------------------------|----------|
| Nano | yolo26n-seg-coco.tflite | Download |
| Small | yolo26s-seg-coco.tflite | Download |
| Medium | yolo26m-seg-coco.tflite | Download |
| Large | yolo26l-seg-coco.tflite | Download |
| XLarge | yolo26x-seg-coco.tflite | Download |

Deploy with EdgeFirst Perception

Copy-paste GStreamer pipeline examples for each platform.

NXP i.MX 8M Plus — Camera to Segmentation with Vivante NPU

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  tensor_filter framework=tensorflow-lite \
    model=yolo26n-seg-coco.tflite \
    custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \
  edgefirstsegdecoder ! edgefirstoverlay ! waylandsink
```

RPi5 + Hailo-8L

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  hailonet hef-path=yolo26n-seg-coco.hailo8l.hef ! \
  hailofilter function-name=yolo26_nms ! \
  hailooverlay ! videoconvert ! autovideosink
```

NVIDIA Jetson (TensorRT)

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  nvinfer config-file-path=yolo26n-seg-coco-config.txt ! \
  edgefirstsegdecoder ! edgefirstoverlay ! nveglglessink
```

Full pipeline documentation: EdgeFirst GStreamer Plugins


Foundation (HAL) Python Integration

```python
from edgefirst.hal import Model, TensorImage

# Load model — metadata (labels, decoder config) is embedded in the file
model = Model("yolo26n-seg-coco.tflite")

# Run inference on an image
image = TensorImage.from_file("image.jpg")
results = model.predict(image)

# Access detections
for det in results.detections:
    print(f"{det.label}: {det.confidence:.2f} at {det.bbox}")
```

EdgeFirst HAL — hardware abstraction layer with accelerated inference delegates.


CameraAdaptor

EdgeFirst CameraAdaptor enables training and inference directly on native sensor formats (GREY, YUYV, etc.), skipping the ISP color conversion pipeline entirely. This reduces latency and power consumption on edge devices.

CameraAdaptor variants are included alongside baseline RGB models:

| Variant | Input Format | Use Case |
|-----------------------------|--------------|--------------------------|
| yolo26n-seg-coco.onnx | RGB (3ch) | Standard camera input |
| yolo26n-seg-coco-grey.onnx | GREY (1ch) | Monochrome / IR sensors |
| yolo26n-seg-coco-yuyv.onnx | YUYV (2ch) | Raw sensor bypass |

Train CameraAdaptor models with EdgeFirst Studio β€” the CameraAdaptor layer is automatically inserted during training.
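For intuition on the 2-channel YUYV layout above, here is a rough NumPy packing of RGB into YUYV using BT.601 full-range coefficients. This is an illustration only; EdgeFirst's actual sensor path and exact coefficients may differ, and the sketch assumes an even image width.

```python
import numpy as np

def rgb_to_yuyv(rgb):
    """Pack an even-width RGB image (H, W, 3) into YUYV (H, W, 2).

    Channel 0 holds per-pixel luma Y; channel 1 alternates U, V across
    columns, so each horizontal pixel pair shares one U and one V sample
    (the classic YUYV / YUY2 layout).
    """
    f = rgb.astype(np.float32)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    out = np.empty((*rgb.shape[:2], 2), dtype=np.uint8)
    out[..., 0] = np.rint(y).clip(0, 255)
    out[:, 0::2, 1] = np.rint(u[:, 0::2]).clip(0, 255)  # U on even columns
    out[:, 1::2, 1] = np.rint(v[:, 0::2]).clip(0, 255)  # V on odd columns
    return out

frame = np.full((480, 640, 3), 128, dtype=np.uint8)  # mid-gray test frame
yuyv = rgb_to_yuyv(frame)
print(yuyv.shape)  # (480, 640, 2)
```

Halving the channel count this way keeps the sensor's native chroma subsampling, which is why the YUYV variant can bypass the ISP.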


Train Your Own with EdgeFirst Studio

Train on your own dataset with EdgeFirst Studio:

  • Free tier includes YOLO training with automatic INT8 quantization and edge deployment
  • Upload datasets via EdgeFirst Recorder or COCO/YOLO format
  • AI-assisted annotation with auto-labeling
  • CameraAdaptor integration for native sensor format training
  • Deploy trained models to edge devices via EdgeFirst Client

See Also

Other models in the EdgeFirst Model Zoo:

| Model | Task | Best Nano Metric | Link |
|---------------------|--------------|--------------------------------|----------------------|
| YOLOv5 Detection | Detection | 49.6% mAP@0.5 (ONNX) | EdgeFirst/yolov5-det |
| YOLOv8 Detection | Detection | 50.2% mAP@0.5 (ONNX) | EdgeFirst/yolov8-det |
| YOLOv8 Segmentation | Segmentation | 34.1% Mask mAP@0.5-0.95 (ONNX) | EdgeFirst/yolov8-seg |
| YOLO11 Detection | Detection | 53.4% mAP@0.5 (ONNX) | EdgeFirst/yolo11-det |
| YOLO11 Segmentation | Segmentation | 35.5% Mask mAP@0.5-0.95 (ONNX) | EdgeFirst/yolo11-seg |
| YOLO26 Detection | Detection | 54.9% mAP@0.5 (ONNX) | EdgeFirst/yolo26-det |

Technical Details

Quantization Pipeline

All TFLite INT8 models are produced by EdgeFirst's custom quantization pipeline (details):

  1. ONNX Export — standard Ultralytics export with simplify=True
  2. TF-Wrapped ONNX — box coordinates normalized to [0,1] inside the DFL decode via tf_wrapper (~1.2% better mAP than post-hoc normalization)
  3. Split Decoder — boxes, scores, and mask coefficients split into separate output tensors for independent INT8 quantization scales
  4. Smart Calibration — 500 images selected via greedy coverage maximization from COCO val2017
  5. Full INT8 — uint8 input (raw pixels), int8 output (per-tensor scales), MLIR quantizer
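Step 4 (smart calibration) can be sketched as a greedy set cover over per-image class annotations. This is an illustrative reimplementation with a toy annotation index, not EdgeFirst's actual selection code:

```python
def greedy_calibration_subset(image_classes, budget):
    """Pick up to `budget` images, greedily maximizing class coverage.

    image_classes: dict mapping image id -> set of class ids present.
    Each round selects the image that adds the most uncovered classes.
    """
    covered, selected = set(), []
    remaining = dict(image_classes)
    while remaining and len(selected) < budget:
        # Image contributing the most classes not yet covered.
        best = max(remaining, key=lambda k: len(remaining[k] - covered))
        covered |= remaining.pop(best)
        selected.append(best)
    return selected, covered

# Toy annotation index: 6 images over 5 classes.
anns = {
    "img0": {0, 1}, "img1": {2}, "img2": {0, 3},
    "img3": {3, 4}, "img4": {1}, "img5": {0},
}
subset, covered = greedy_calibration_subset(anns, budget=3)
print(subset, covered)  # ['img0', 'img3', 'img1'] {0, 1, 2, 3, 4}
```

In the real pipeline the sets would come from COCO val2017 annotations, and a production selector would likely also balance object scales and scene statistics, not just class presence.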

Split Decoder Output Format

Segmentation (e.g., yolo26n-seg):

  • Boxes: (1, 4, 8400) — normalized [0,1] coordinates
  • Scores: (1, 80, 8400) — class probabilities
  • Mask coefficients: (1, 32, 8400) — per-anchor mask coefficients
  • Protos: (1, 160, 160, 32) — prototype masks

Each tensor has independent quantization scale and zero-point. EdgeFirst HAL handles dequantization and reassembly automatically.
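The dequantize-and-reassemble step HAL performs can be sketched in NumPy. The scales, zero points, and tensor contents below are invented for illustration; real models carry their own per-tensor quantization parameters:

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Standard affine dequantization: real = (q - zero_point) * scale
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
# Fake int8 outputs with illustrative per-tensor scales/zero-points.
boxes_q  = rng.integers(-128, 127, (1, 4, 8400), dtype=np.int8)
coefs_q  = rng.integers(-128, 127, (1, 32, 8400), dtype=np.int8)
protos_q = rng.integers(-128, 127, (1, 160, 160, 32), dtype=np.int8)

boxes  = dequantize(boxes_q, scale=1 / 255, zero_point=-128)  # back to [0,1]
coefs  = dequantize(coefs_q, scale=0.05, zero_point=0)
protos = dequantize(protos_q, scale=0.02, zero_point=0)

# Assemble the mask for one surviving anchor: sigmoid(protos . coefficients).
anchor = 123
logits = protos[0] @ coefs[0, :, anchor]   # (160, 160, 32) @ (32,) -> (160, 160)
mask = 1 / (1 + np.exp(-logits)) > 0.5     # binary 160x160 mask
print(mask.shape)  # (160, 160)
```

Because each tensor is dequantized with its own scale, the box, score, and mask-coefficient ranges do not have to share one coarse quantization grid, which is the point of the split-decoder design.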

Metadata

  • TFLite: edgefirst.json, labels.txt, and edgefirst.yaml embedded via ZIP (no tflite-support dependency)
  • ONNX: edgefirst.json embedded via model.metadata_props

No standalone metadata files — models are self-contained.
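Because the metadata rides along as a plain ZIP archive appended to the model bytes, Python's standard zipfile module can read it directly. The sketch below builds a dummy stand-in model to show the round trip; the real files embed edgefirst.json and labels.txt the same way:

```python
import io
import json
import zipfile

# Stand-in .tflite: arbitrary model bytes with a ZIP archive appended.
buf = io.BytesIO(b"\x1c\x00\x00\x00TFL3" + b"\x00" * 64)  # fake flatbuffer bytes
with zipfile.ZipFile(buf, mode="a") as zf:  # "a" appends a new ZIP to non-ZIP data
    zf.writestr("edgefirst.json", json.dumps({"task": "segment", "classes": 80}))
    zf.writestr("labels.txt", "person\nbicycle\ncar\n")

# Reading: zipfile locates the archive from the end of the file,
# so the leading model bytes are simply ignored.
with zipfile.ZipFile(buf) as zf:
    meta = json.loads(zf.read("edgefirst.json"))
    labels = zf.read("labels.txt").decode().splitlines()
print(meta, labels)
```

TFLite interpreters read the flatbuffer from the front and ignore trailing bytes, which is why the appended archive does not interfere with inference.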


Limitations

  • COCO bias — models trained on COCO (80 classes) inherit its biases: Western-centric scenes, specific object distributions, limited weather/lighting diversity
  • INT8 accuracy loss — full-integer quantization typically degrades mAP by 6-12% relative to FP32; the actual loss depends on model architecture and dataset
  • Thermal variation — on-target performance varies with device temperature; sustained inference may throttle on passively cooled devices
  • Input resolution — all models expect 640×640 input; other resolutions require letterboxing and may reduce accuracy
  • CameraAdaptor variants — GREY/YUYV models trade color information for latency; accuracy may differ from the RGB baseline depending on the task
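Where letterboxing is needed, a minimal implementation looks like this. NumPy only, with nearest-neighbor scaling as a stand-in for whatever interpolation the real pre-processing uses:

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Scale img to fit in size x size, preserving aspect ratio, then pad.

    Returns the padded image plus (scale, dx, dy), which are needed to map
    predictions back to the original image coordinates.
    """
    h, w = img.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor resize via index lookup (no cv2/PIL dependency).
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs]
    out = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    dy, dx = (size - nh) // 2, (size - nw) // 2
    out[dy:dy + nh, dx:dx + nw] = resized
    return out, scale, dx, dy

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a 640x480 camera frame
boxed, scale, dx, dy = letterbox(frame)
print(boxed.shape, scale, dx, dy)  # (640, 640, 3) 1.0 0 80
```

A decoded box (x, y) is mapped back to the source image as ((x - dx) / scale, (y - dy) / scale); pad_value 114 is the gray fill conventionally used by YOLO pipelines.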

Citation

```bibtex
@software{edgefirst_yolo26_seg,
  title = {{YOLO26 Segmentation --- EdgeFirst Edge AI}},
  author = {Au-Zone Technologies},
  url = {https://huggingface.co/EdgeFirst/yolo26-seg},
  year = {2026},
  license = {Apache-2.0},
}
```

EdgeFirst Studio · GitHub · Docs · Au-Zone Technologies
Apache 2.0 · © Au-Zone Technologies Inc.


Evaluation results

  • Mask mAP@0.5-0.95 (Nano ONNX FP32) on COCO val2017: 37.0 (self-reported)
  • Mask mAP@0.5-0.95 (Nano TFLite INT8) on COCO val2017: 34.5 (self-reported)