CodeV
Models of the paper "CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization".
How to use yang-z/CodeV-QW-7B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="yang-z/CodeV-QW-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yang-z/CodeV-QW-7B")
model = AutoModelForCausalLM.from_pretrained("yang-z/CodeV-QW-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use yang-z/CodeV-QW-7B with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "yang-z/CodeV-QW-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "yang-z/CodeV-QW-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
How to use yang-z/CodeV-QW-7B with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "yang-z/CodeV-QW-7B" \
  --host 0.0.0.0 \
  --port 30000

# Or start the server in Docker instead:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "yang-z/CodeV-QW-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "yang-z/CodeV-QW-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

How to use yang-z/CodeV-QW-7B with Docker Model Runner:
```shell
docker model run hf.co/yang-z/CodeV-QW-7B
```
CodeV is a series of open-source, instruction-tuned large language models (LLMs) designed to generate high-quality Verilog code, addressing the shortcomings of existing models in this domain. (This repo is under development.)
| Size | Base Model | CodeV |
|---|---|---|
| 6.7B | deepseek-ai/deepseek-coder-6.7b-base | yang-z/CodeV-DS-6.7B |
| 7B | codellama/CodeLlama-7b-Python-hf | yang-z/CodeV-CL-7B |
| 7B | Qwen/CodeQwen1.5-7B-Chat | yang-z/CodeV-QW-7B |
To evaluate the Verilog generation capability of these (or other) models, install the VerilogEval and RTLLM benchmark environments.
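Benchmarks such as VerilogEval score functional correctness with pass@k. As a minimal sketch (the helper name is illustrative, not from either benchmark's API), the standard unbiased estimator from the Codex paper can be computed as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn without replacement from n generations (c of which pass)
    is correct, i.e. 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failing samples than k: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# 3 passing generations out of 10, single draw:
print(round(pass_at_k(10, 3, 1), 4))  # 0.3
```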
```python
from transformers import pipeline
import torch

prompt = "FILL IN THE QUESTION"
generator = pipeline(
    model="CODEV",  # placeholder: replace with a CodeV model path from the table above
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Greedy decoding via do_sample=False; transformers does not accept
# temperature=0.0 as a sampling temperature.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```
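Chat-tuned models commonly wrap the generated module in a Markdown code fence, so the raw response usually needs post-processing before it can be fed to a simulator. A small helper for that (a sketch, not part of the official repo; it assumes fenced output and falls back to the raw text otherwise):

```python
import re

def extract_verilog(response: str) -> str:
    """Return the first fenced code block in a model response,
    or the stripped response itself if no fence is present."""
    match = re.search(r"```(?:verilog)?\s*\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

# Example response with a fenced Verilog block:
reply = (
    "Here is the module:\n"
    "```verilog\n"
    "module mux2(input a, b, sel, output y);\n"
    "  assign y = sel ? b : a;\n"
    "endmodule\n"
    "```"
)
print(extract_verilog(reply))
```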
arXiv: https://arxiv.org/abs/2407.10424
Please cite the paper if you use the models from CodeV.
```bibtex
@misc{yang-z,
  title={CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization},
  author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Ziyuan Nan and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen},
  year={2024},
  eprint={2407.10424},
  archivePrefix={arXiv},
  primaryClass={cs.PL},
  url={https://arxiv.org/abs/2407.10424},
}
```