# CodeBERT INT8 (ONNX Quantized)
ONNX INT8 quantized version of microsoft/codebert-base for efficient code and natural language embeddings.
## Model Details
| Property | Value |
|---|---|
| Base Model | microsoft/codebert-base |
| Format | ONNX |
| Quantization | INT8 (dynamic quantization) |
| Embedding Dimension | 768 |
| Quantized by | JustEmbed |
## What is this?
This is a quantized ONNX export of CodeBERT, a bimodal pre-trained model for programming and natural language by Microsoft Research. The INT8 quantization reduces model size and improves inference speed while maintaining high accuracy for code-related embeddings.
CodeBERT is trained on both natural language and programming language data (Python, Java, JavaScript, PHP, Ruby, Go).
## Use Cases
- Code search and retrieval
- Code documentation matching
- Programming language embeddings
- Code similarity detection (see the cosine similarity sketch after this list)
- Natural language to code matching
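For the search and similarity use cases, embeddings are typically compared with cosine similarity. A minimal sketch, where `vec_a` and `vec_b` are hypothetical pooled 768-dimensional embedding vectors obtained as in the usage examples below:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# score = cosine_similarity(vec_a, vec_b)  # vec_a, vec_b: 768-dim embeddings
```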
## Files
- `model_quantized.onnx`: INT8 quantized ONNX model
- `tokenizer.json`: Fast tokenizer
- `config.json`: Model configuration
## Usage with JustEmbed
```python
from justembed import Embedder

embedder = Embedder("codebert-int8")
vectors = embedder.embed(["def sort_list(arr): return sorted(arr)"])
```
## Usage with ONNX Runtime
```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Load the tokenizer and the quantized model from this repo
tokenizer = AutoTokenizer.from_pretrained(".")
session = ort.InferenceSession("model_quantized.onnx")

# Tokenize, then run the graph; outputs[0] is the last hidden state
inputs = tokenizer("def sort_list(arr): return sorted(arr)", return_tensors="np")
outputs = session.run(None, dict(inputs))
```
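The session returns token-level hidden states of shape `(batch, seq_len, 768)` rather than one vector per input. A common way to pool them into a sentence-level embedding (this repo does not mandate a particular pooling) is attention-mask-weighted mean pooling:

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask[..., np.newaxis].astype(np.float32)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(axis=1)            # (batch, 768)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)             # guard against empty masks
    return summed / counts

embedding = mean_pool(outputs[0], inputs["attention_mask"])    # shape (1, 768)
```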
## Quantization Details
- Method: Dynamic INT8 quantization via ONNX Runtime
- Source: Original PyTorch weights converted to ONNX, then quantized (see the sketch after this list)
- Speed: ~2-3x faster inference than FP32
- Size: ~4x smaller than FP32
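The exact export settings are not documented in this repo, but a pipeline along these lines can be reproduced with standard tooling. A minimal sketch (the opset version and dynamic axis names are illustrative assumptions):

```python
import torch
from transformers import AutoModel, AutoTokenizer
from onnxruntime.quantization import quantize_dynamic, QuantType

model = AutoModel.from_pretrained("microsoft/codebert-base")
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
sample = tokenizer("def f(): pass", return_tensors="pt")

# Export FP32 weights to ONNX with dynamic batch/sequence axes
torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)

# Dynamic INT8 quantization: weights stored as int8,
# activations quantized on the fly at inference time
quantize_dynamic("model.onnx", "model_quantized.onnx", weight_type=QuantType.QInt8)
```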
## License
This model is a derivative work of microsoft/codebert-base. The original model is released under the MIT License, and this quantized version is distributed under the same license. See the LICENSE file for the full text.
## Citation
```bibtex
@inproceedings{feng2020codebert,
  title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
  author={Feng, Zhangyin and Guo, Daya and Tang, Duyu and Duan, Nan and Feng, Xiaocheng and Gong, Ming and Shou, Linjun and Qin, Bing and Liu, Ting and Jiang, Daxin and Zhou, Ming},
  booktitle={Findings of EMNLP},
  year={2020}
}
```
## Acknowledgments
- Original model by Microsoft Research
- Quantization and packaging by JustEmbed