CodeBERT INT8 - ONNX Quantized

ONNX INT8 quantized version of microsoft/codebert-base for efficient code and natural language embeddings.

Model Details

Property             Value
Base Model           microsoft/codebert-base
Format               ONNX
Quantization         INT8 (dynamic quantization)
Embedding Dimension  768
Quantized by         JustEmbed

What is this?

This is a quantized ONNX export of CodeBERT, a bimodal pre-trained model for programming and natural language by Microsoft Research. The INT8 quantization reduces model size and improves inference speed while maintaining high accuracy for code-related embeddings.

CodeBERT is trained on both natural language and programming language data (Python, Java, JavaScript, PHP, Ruby, Go).

Use Cases

  • Code search and retrieval
  • Code documentation matching
  • Programming language embeddings
  • Code similarity detection
  • Natural language to code matching

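Code similarity detection and code search both reduce to comparing embedding vectors, most commonly by cosine similarity. A minimal NumPy sketch (the vector values below are toy 4-dimensional stand-ins for real 768-dimensional CodeBERT embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for 768-dim CodeBERT embeddings.
v1 = np.array([0.1, 0.3, -0.2, 0.5])
v2 = np.array([0.1, 0.25, -0.15, 0.55])
print(cosine_similarity(v1, v2))  # close to 1.0 for near-duplicate snippets
```

In a retrieval setting, you would embed a query and all candidate snippets once, then rank candidates by this score.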
Files

  • model_quantized.onnx - INT8 quantized ONNX model
  • tokenizer.json - Fast tokenizer
  • config.json - Model configuration

Usage with JustEmbed

from justembed import Embedder

embedder = Embedder("codebert-int8")
vectors = embedder.embed(["def sort_list(arr): return sorted(arr)"])

Usage with ONNX Runtime

import onnxruntime as ort
from transformers import AutoTokenizer

# Load the tokenizer and the quantized model from the local model directory.
tokenizer = AutoTokenizer.from_pretrained(".")
session = ort.InferenceSession("model_quantized.onnx")

inputs = tokenizer("def sort_list(arr): return sorted(arr)", return_tensors="np")
outputs = session.run(None, dict(inputs))
last_hidden_state = outputs[0]  # shape: (batch, seq_len, 768)
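The first output of the session is the per-token last hidden state with shape (batch, seq_len, 768); to get one fixed-size embedding per input, a common approach is mean pooling over the non-padding tokens. A minimal NumPy sketch (mean_pool is a helper name introduced here for illustration, not part of the model files):

```python
import numpy as np

def mean_pool(last_hidden_state: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    last_hidden_state: (batch, seq_len, hidden)
    attention_mask:    (batch, seq_len) array of 0/1 from the tokenizer
    """
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts

# Toy tensors: batch=1, seq_len=3, hidden=2; the last token is padding.
hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(hidden, mask))  # [[2. 3.]]
```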

Quantization Details

  • Method: Dynamic INT8 quantization via ONNX Runtime
  • Source: Original PyTorch weights converted to ONNX, then quantized
  • Speed: ~2-3x faster inference than FP32
  • Size: ~4x smaller than FP32
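For intuition, dynamic INT8 quantization stores each weight tensor as 8-bit integers plus a floating-point scale, reconstructing approximate FP32 values at inference time. A simplified symmetric per-tensor sketch (an illustration of the idea, not the exact ONNX Runtime kernel):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w is approximated by q * scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A random FP32 weight matrix the size of one CodeBERT projection layer.
w = np.random.default_rng(0).normal(size=(768, 768)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 storage uses 1 byte per weight vs 4 bytes for FP32.
print(w.nbytes // q.nbytes)  # 4
```

The rounding error per weight is bounded by half the scale, which is why accuracy typically degrades only slightly while storage shrinks ~4x.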

License

This model is a derivative work of microsoft/codebert-base.

The original model is licensed under MIT License. This quantized version is distributed under the same license. See the LICENSE file for the full text.

Citation

@inproceedings{feng2020codebert,
  title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
  author={Feng, Zhangyin and Guo, Daya and Tang, Duyu and Duan, Nan and Feng, Xiaocheng and Gong, Ming and Shou, Linjun and Qin, Bing and Liu, Ting and Jiang, Daxin and Zhou, Ming},
  booktitle={Findings of EMNLP},
  year={2020}
}
