InternImage: Optimized for Qualcomm Devices

InternImage employs DCNv3 as its core operator, equipping the model with the dynamic and effective receptive fields required for downstream tasks such as object detection and segmentation, while enabling adaptive spatial aggregation.
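For reference, the DCNv3 aggregation as given in the InternImage paper can be written as (with G aggregation groups and K sampling points per group):

$$y(p_0) = \sum_{g=1}^{G} \sum_{k=1}^{K} w_g \, m_{gk} \, x_g(p_0 + p_k + \Delta p_{gk})$$

where $w_g$ is the group-wise projection weight, $m_{gk}$ a softmax-normalized modulation scalar, $p_k$ the k-th location of the predefined sampling grid, and $\Delta p_{gk}$ a dynamically predicted offset.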

This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Version | Download |
|---|---|---|---|---|
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8450_proxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8550_proxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs9075 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa7255p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8295p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8775p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_for_galaxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_gen5 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8gen3 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x2_elite | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x_elite | QAIRT 2.43 | Download |

For more device-specific assets and performance metrics, visit InternImage on Qualcomm® AI Hub.
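Once you have downloaded an asset, you can also profile it on a hosted device through Qualcomm AI Hub. The snippet below is a minimal sketch, not an official recipe: the local file name and device name are placeholders, and it assumes the `qai-hub` Python client is installed and configured with your API token.

```python
import qai_hub as hub

# Upload a downloaded pre-exported asset (placeholder file name).
model = hub.upload_model("internimage_qcs8550_proxy.bin")

# Submit a profile job on a hosted device (device name is a placeholder;
# list valid names with `qai-hub list-devices` or hub.get_devices()).
profile_job = hub.submit_profile_job(
    model=model,
    device=hub.Device("QCS8550 (Proxy)"),
)

# Inspect the measured timings and memory usage.
profile = profile_job.download_profile()
print(profile)
```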

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the InternImage model in our Qualcomm AI Hub Models repository on GitHub for usage instructions; a rough sketch of the workflow follows below.
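The sketch below shows what the compile step can look like with the Python API. The module path, input name, and device name are assumptions; the export script in the GitHub repository is the authoritative entry point.

```python
import qai_hub as hub
import torch

# Assumed module path; see the qai_hub_models repo for the exact name.
from qai_hub_models.models.internimage import Model

# Load default weights; from_pretrained() typically accepts a custom
# checkpoint path for fine-tuned weights.
torch_model = Model.from_pretrained()
torch_model.eval()

# Trace with the documented input shape (1, 3, 224, 224).
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(torch_model, example_input)

# Compile for a hosted device (device name is a placeholder).
compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Snapdragon 8 Elite QRD"),
    input_specs={"image": (1, 3, 224, 224)},  # input name is an assumption
)
target_model = compile_job.get_target_model()
```

The repository's per-model export scripts (invoked as `python -m qai_hub_models.models.<model>.export`) wrap these steps, including custom weights and input shapes.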

Model Details

Model Type: Image classification

Model Stats:

  • Model checkpoint: internimage_t_1k_224
  • Input resolution: 1x3x224x224
  • Number of parameters: 30.6M
  • Model size (float): 117 MB
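Given the 1x3x224x224 input above, on-device inference through AI Hub looks roughly like the following sketch. The normalization constants are the usual ImageNet values and, like the input name, are an assumption rather than something taken from this card.

```python
import numpy as np
from PIL import Image
import qai_hub as hub

# Preprocess to the documented 1x3x224x224 float input.
img = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet stats
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = ((x - mean) / std).transpose(2, 0, 1)[None, ...]  # HWC -> NCHW, shape (1, 3, 224, 224)

# Run on a hosted device; `target_model` comes from the compile sketch above.
inference_job = hub.submit_inference_job(
    model=target_model,
    device=hub.Device("Snapdragon 8 Elite QRD"),  # placeholder device name
    inputs={"image": [x]},  # input name is an assumption
)
outputs = inference_job.download_output_data()
```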

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 19.931 | 1 - 10 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® X2 Elite | 20.745 | 66 - 66 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 52.06 | 66 - 66 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 34.983 | 1 - 7 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 50.021 | 0 - 81 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 51.659 | 1 - 4 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.44 | 1 - 7 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 20.488 | 1 - 10 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® X2 Elite | 21.475 | 1 - 1 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 53.296 | 1 - 1 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 36.282 | 1 - 8 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 98.726 | 1 - 9 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 51.086 | 1 - 2 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 52.771 | 3 - 5 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 59.749 | 1 - 10 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.901 | 1 - 10 | NPU |

License

  • The license for the original implementation of InternImage can be found here.
