Training dataset: flwrlabs/code-alpaca-20k
How to use ethicalabs/Flwr-Qwen2.5-0.5B-Instruct-Coding-PEFT with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the PEFT adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "ethicalabs/Flwr-Qwen2.5-0.5B-Instruct-Coding-PEFT")
```

This PEFT adapter has been trained using Flower, a friendly federated AI framework.
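When prompting the adapted model, inputs should follow the ChatML-style format that Qwen2.5-Instruct models expect. In practice you would call `tokenizer.apply_chat_template(...)` from the model's tokenizer; the helper below is a hypothetical, minimal sketch of the prompt string that template produces, shown here only to illustrate the format.

```python
# Hypothetical helper (not part of the adapter or transformers):
# builds a ChatML-style prompt like the one
# tokenizer.apply_chat_template(..., add_generation_prompt=True)
# would produce for Qwen2.5-Instruct models.
def build_chatml_prompt(messages):
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Write a Python function that reverses a string."},
])
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate(...)`; using the tokenizer's own chat template is preferred, since it stays in sync with the model's training format.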
The adapter and benchmark results have been submitted to the FlowerTune LLM Code Leaderboard.
Please check the following GitHub project for details on how to reproduce training and evaluation steps: