Add metadata and improve model card
#1
opened by nielsr (HF Staff)
README.md CHANGED
@@ -1,10 +1,40 @@
 ---
+license: apache-2.0
+pipeline_tag: image-segmentation
 tags:
 - model_hub_mixin
 - pytorch_model_hub_mixin
 ---
 
-
-
-
-
+# EdgeCrafter: ECSeg-L
+
+EdgeCrafter is a unified compact Vision Transformer (ViT) framework for efficient dense prediction on edge devices. This model, **ECSeg-L**, is optimized for instance segmentation on resource-constrained hardware. It is part of the work presented in [EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation](https://arxiv.org/abs/2603.18739).
+
+- **Paper:** [EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation](https://arxiv.org/abs/2603.18739)
+- **Repository:** [https://github.com/Intellindust-AI-Lab/EdgeCrafter](https://github.com/Intellindust-AI-Lab/EdgeCrafter)
+- **Project Page:** [https://intellindust-ai-lab.github.io/projects/EdgeCrafter/](https://intellindust-ai-lab.github.io/projects/EdgeCrafter/)
+
+## Model Description
+
+EdgeCrafter addresses the performance gap between compact ViTs and CNN-based architectures such as YOLO on edge devices. By combining task-specialized distillation with an edge-friendly encoder-decoder design, EdgeCrafter models achieve a strong accuracy-efficiency trade-off. ECSeg-L provides a high-performance balance for instance segmentation tasks.
+
+## Usage
+
+To use this model, please refer to the [official GitHub repository](https://github.com/Intellindust-AI-Lab/EdgeCrafter) for installation instructions. You can then run inference with the following command:
+
+```bash
+cd ecdetseg
+# Run PyTorch inference.
+# Replace `path/to/your/image.jpg` with an actual image path and point -r at the ECSeg-L weights.
+python tools/inference/torch_inf.py -c configs/ecseg/ecseg_l.yml -r /path/to/ecseg_l.pth -i path/to/your/image.jpg
+```
+
+To load the model directly from the Hugging Face Hub, see the [hf_models.ipynb](https://github.com/Intellindust-AI-Lab/EdgeCrafter/blob/main/hf_models.ipynb) notebook in the repository.
+
+## Citation
+
+```bibtex
+@article{liu2026edgecrafter,
+  title={EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation},
+  author={Liu, Longfei and Hou, Yongjie and Li, Yang and Wang, Qirui and Sha, Youyang and Yu, Yongjun and Wang, Yinzhi and Ru, Peizhe and Yu, Xuanlong and Shen, Xi},
+  journal={arXiv},
+  year={2026}
+}
+```
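
The new card defers Hub loading to `hf_models.ipynb`. As a rough, non-authoritative sketch of the `pytorch_model_hub_mixin` mechanism named in the metadata tags, the snippet below round-trips a toy module through `save_pretrained` / `from_pretrained`; the `TinySegHead` class and the local directory name are illustrative stand-ins, not the ECSeg-L architecture or its repo layout, and the actual loading code is in the linked notebook.

```python
# Minimal sketch of the PyTorchModelHubMixin mechanics referenced by the model card tags.
# NOTE: TinySegHead is a toy stand-in, NOT the ECSeg-L architecture; see hf_models.ipynb
# in the EdgeCrafter repository for the real loading code.
import torch
from huggingface_hub import PyTorchModelHubMixin


class TinySegHead(torch.nn.Module, PyTorchModelHubMixin):
    """Toy module used only to illustrate save_pretrained / from_pretrained."""

    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.proj = torch.nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


# Round-trip through the mixin API: serialize init config + weights, then reload.
model = TinySegHead(in_channels=3, num_classes=2)
model.save_pretrained("tiny-seg-head")                    # writes config.json and the weights
reloaded = TinySegHead.from_pretrained("tiny-seg-head")   # also accepts a Hub repo id

with torch.no_grad():
    out = reloaded(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 2, 64, 64])
```

The same `from_pretrained` call works with a Hub repo id once weights have been pushed, which is presumably what the notebook demonstrates for the actual ECSeg-L checkpoint.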