Models trained from our paper "Continual Pre-Training is (not) What You Need in Domain Adaptation."
AI & ML interests
LLM evaluation; domain adaptation; linguistic theories and LLMs
Models (9)
lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-Instruct.gguf • 8B • Updated • 49 downloads
lopentu/Blawstral-Nemo-2407-YZL-LoRA-Instruct • Text Generation • 12B • Updated • 6 downloads
lopentu/Llama-3-8B-Taiwan-Bllawa-YZL-LoRA-Instruct • 8B • Updated • 2 downloads
lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-DPO-Beta-0.01-Instruct • Updated • 4 downloads
lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-ORPO-Beta-0.1-Instruct • 8B • Updated • 2 downloads
lopentu/Llama-3-8B-Taiwan-Llawa-TC-YZL-Instruct • Text Generation • 8B • Updated • 6 downloads
lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-Instruct • Text Generation • 8B • Updated • 5 downloads
lopentu/Llama-3-8B-Taiwan-Llawa • Text Generation • 8B • Updated • 4 downloads
lopentu/Taiwan-Lawstral • Text Generation • 7B • Updated • 14 downloads • 1 like
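
The instruction-tuned checkpoints listed above can be tried with the standard Hugging Face transformers API. Below is a minimal sketch, assuming the TCxYZL-Instruct repository contains standard Llama-3-style weights and a chat template; the dtype, device placement, and example prompt are illustrative assumptions, not settings specified by the project.

```python
# Minimal sketch: loading one of the listed instruction-tuned models with
# transformers. The repository ID is taken from the model list above; dtype,
# device placement, and the example prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-Instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # assumed; pick a dtype your hardware supports
    device_map="auto",            # requires the `accelerate` package
)

# Llama-3-style chat prompting via the tokenizer's chat template
# (assumed to be present in the repository).
messages = [{"role": "user", "content": "請簡要介紹台灣的消費者保護法。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The GGUF repository would instead be used with llama.cpp-compatible runtimes, and the LoRA-Instruct repositories may require merging or loading the adapters with PEFT.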