Use with the PaddleOCR library

# See https://www.paddleocr.ai/latest/version3.x/pipeline_usage/PaddleOCR-VL.html for installation instructions

from paddleocr import PaddleOCRVL
pipeline = PaddleOCRVL(pipeline_version="mlx-community/PaddleOCR-VL-bfloat16")
output = pipeline.predict("path/to/document_image.png")
for res in output:
	res.print()
	res.save_to_json(save_path="output")
	res.save_to_markdown(save_path="output")

mlx-community/PaddleOCR-VL-bfloat16

This model was converted to MLX format from PaddlePaddle/PaddleOCR-VL using mlx-vlm version 0.3.10. Refer to the original model card for more details on the model.

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/PaddleOCR-VL-bfloat16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
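The CLI command above can also be run programmatically. Below is a minimal sketch assuming the mlx-vlm Python API (`load`, `generate`, and `apply_chat_template`); exact signatures vary between mlx-vlm versions, so check the documentation for the version you have installed. Like the CLI, it requires Apple Silicon hardware.

```python
# Sketch of programmatic use via mlx-vlm (assumed API; verify against
# the mlx-vlm docs for your installed version).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template

# Downloads the model from the Hugging Face Hub on first use.
model, processor = load("mlx-community/PaddleOCR-VL-bfloat16")

# Build a chat-formatted prompt for a single input image.
prompt = apply_chat_template(
    processor, model.config, "Describe this image.", num_images=1
)

# Run generation with the same settings as the CLI example.
output = generate(
    model,
    processor,
    prompt,
    ["path/to/document_image.png"],
    max_tokens=100,
    temperature=0.0,
)
print(output)
```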