UniFormer-S COCO top-down pose - acaua mirror (pure-PyTorch port)
==================================================================

This product includes:

1. PORTED SOURCE CODE: a pure-PyTorch port of the UniFormer-pose
   architecture (located in the acaua repository at
   src/acaua/adapters/uniformer/pose/) is a derivative work of:

   - Sense-X/UniFormer (Apache-2.0)
     https://github.com/Sense-X/UniFormer @ main
     Files derived from:
       pose_estimation/mmpose/models/backbones/uniformer.py
         (the global-attention UniFormer backbone; its parameter tree is
         identical to the detection variant, so we reuse acaua's
         UniFormer2DDense with hybrid=False, windows=False)
       pose_estimation/mmpose/models/keypoint_heads/top_down_simple_head.py
         (TopDownSimpleHead: 3x ConvTranspose2d-stride-2 + BN + ReLU,
         plus a 1x1 final conv)
       pose_estimation/mmpose/core/evaluation/top_down_eval.py
         (keypoints_from_heatmaps with the post_process='default' branch:
         argmax + sign(neighbor_diff) * 0.25 shift)
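To make the two ported pieces above concrete, here is a minimal PyTorch sketch of a 3x-deconv head and the 'default' heatmap decode. The names (SimpleDeconvHead, decode_heatmaps) and the channel sizes are illustrative assumptions for this NOTICE, not the identifiers used in the acaua port.

```python
import torch
import torch.nn as nn

class SimpleDeconvHead(nn.Module):
    """Sketch of a TopDownSimpleHead-style head:
    3x (ConvTranspose2d stride-2 + BN + ReLU), then a 1x1 final conv."""
    def __init__(self, in_channels=512, deconv_channels=256, num_joints=17):
        super().__init__()
        layers = []
        c = in_channels
        for _ in range(3):
            layers += [
                nn.ConvTranspose2d(c, deconv_channels, kernel_size=4,
                                   stride=2, padding=1, bias=False),
                nn.BatchNorm2d(deconv_channels),
                nn.ReLU(inplace=True),
            ]
            c = deconv_channels
        layers.append(nn.Conv2d(c, num_joints, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # each deconv doubles H and W

def decode_heatmaps(heatmaps):
    """'default' post-processing: per-joint argmax, then a 0.25-pixel
    shift toward the larger of the two neighboring heatmap values."""
    n, k, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, k, -1)
    idx = flat.argmax(dim=-1)
    xs = (idx % w).float()
    ys = torch.div(idx, w, rounding_mode="floor").float()
    for i in range(n):
        for j in range(k):
            x, y = int(xs[i, j]), int(ys[i, j])
            hm = heatmaps[i, j]
            if 0 < x < w - 1:
                xs[i, j] += 0.25 * torch.sign(hm[y, x + 1] - hm[y, x - 1])
            if 0 < y < h - 1:
                ys[i, j] += 0.25 * torch.sign(hm[y + 1, x] - hm[y - 1, x])
    return torch.stack([xs, ys], dim=-1)  # (n, k, 2) in heatmap pixels
```

With a 256x192 input the backbone's final stride-32 feature map is 8x6, so three stride-2 deconvs yield 64x48 heatmaps.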

2. CONVERTED WEIGHTS: the model.safetensors file in this mirror is a
   key-remapped conversion of the upstream pretrained checkpoint:

   - upstream repo:     Sense-X/UniFormer (Apache-2.0)
   - upstream weights:  Google Drive file id
                        162R0JuTpf3gpLe1IK6oxRoQK7JSj4ylx
   - upstream filename: top_down_256x192_global_small.pth
   - upstream SHA256:   d77059e3e9322c0e20dc89dc0cf2a583ffe2ced7d3e9b350233738add570bc30
   - upstream paper:    Li et al., "UniFormer: Unifying Convolution
                        and Self-attention for Visual Recognition",
                        arXiv:2201.09450 (ICLR 2022)

   Key remap rule (applied eagerly, at conversion time):
     backbone.*      -> backbone.*  (no change)
     keypoint_head.* -> head.*
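The remap rule above amounts to a single prefix rename over the checkpoint's key space. A minimal sketch (the real logic lives in scripts/convert_uniformer_pose.py; the helper name here is hypothetical):

```python
def remap_keys(state_dict):
    """Rename 'keypoint_head.*' keys to 'head.*'; leave 'backbone.*'
    (and everything else) untouched."""
    out = {}
    for key, value in state_dict.items():
        if key.startswith("keypoint_head."):
            key = "head." + key[len("keypoint_head."):]
        out[key] = value
    return out
```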

   Conversion was performed by scripts/convert_uniformer_pose.py in the
   acaua repository. The adapter loads the safetensors file with
   load_state_dict(strict=True), so any key-naming drift between the
   upstream checkpoint and the port is a hard error at load time.
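What strict=True enforces can be sketched as a plain key-set comparison: the checkpoint's keys must match the module's keys exactly, in both directions. The helper below is a hypothetical illustration, not part of the adapter:

```python
def check_strict(model_keys, ckpt_keys):
    """Mimic load_state_dict(strict=True): raise if the key sets differ.

    model_keys / ckpt_keys: iterables of parameter names.
    """
    missing = sorted(set(model_keys) - set(ckpt_keys))      # in model, not ckpt
    unexpected = sorted(set(ckpt_keys) - set(model_keys))   # in ckpt, not model
    if missing or unexpected:
        raise RuntimeError(
            f"missing keys: {missing}; unexpected keys: {unexpected}")
```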

3. BUNDLED DETECTOR: this mirror's adapter loads
   CondadosAI/rtmdet_t_coco at adapter-init time, so callers get the
   full top-down pipeline (detect persons -> per-person crop -> pose ->
   inverse-warp) from a single predict(image) call. RTMDet-tiny carries
   its own attribution chain; see the NOTICE file in
   CondadosAI/rtmdet_t_coco.
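The final 'inverse-warp' step maps keypoints decoded in heatmap coordinates back into the original image frame. A minimal sketch, assuming a plain axis-aligned crop-and-resize with no rotation (the adapter's actual transform may differ, and the function name is illustrative):

```python
def heatmap_to_image_coords(kpts, bbox, heatmap_size=(48, 64)):
    """Map keypoints from heatmap pixels back to image pixels.

    kpts: list of (x, y) in heatmap coordinates.
    bbox: (x0, y0, x1, y1) person box in the original image.
    heatmap_size: (width, height) of the decoded heatmaps.
    """
    x0, y0, x1, y1 = bbox
    hw, hh = heatmap_size
    sx = (x1 - x0) / hw  # image pixels per heatmap pixel, x
    sy = (y1 - y0) / hh  # image pixels per heatmap pixel, y
    return [(x0 + x * sx, y0 + y * sy) for x, y in kpts]
```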

Mirrored on 2026-04-25 by CondadosAI.

License
-------

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.