---
license: mit
datasets:
- sxiong/SWAP
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---

# **Model Card for SWAP_LLM**

**SWAP_LLM** is a suite of fine-tuned models developed for **multi-step reasoning** with large language models (LLMs).
The framework comprises two primary components: a **generator** and a **discriminator**.


## **Model Details**

### **Generator**

* **Base Model:** `meta-llama/Meta-Llama-3-8B-Instruct`
* **LoRA Configuration:**

  * `lora_alpha`: 32
  * `r`: 16
  * `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
  * `bias`: `"none"`

### **Discriminator**

* **Base Model:** `meta-llama/Meta-Llama-3-8B-Instruct`
* **LoRA Configuration:**

  * `lora_alpha`: 32
  * `r`: 16
  * `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
  * `bias`: `"none"`
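
Both adapters share the same LoRA configuration. As a minimal sketch, this is how such an adapter could be set up with the Hugging Face `peft` library; `task_type` and any hyperparameters not listed above (e.g. `lora_dropout`) are assumptions rather than values confirmed by the authors.

```python
# Sketch: wrapping the base model with the LoRA configuration listed above.
# Values for r, lora_alpha, target_modules, and bias come from this card;
# task_type and other settings are peft defaults, not confirmed settings.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",  # assumed; standard for decoder-only LMs
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # LoRA trains a small fraction of weights
```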

For additional information and implementation details, please refer to the [SWAP GitHub repository](https://github.com/xiongsiheng/SWAP).
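
At inference time, a fine-tuned adapter can be attached to the base model with `peft`. The sketch below is illustrative; the adapter path is a placeholder, so substitute the actual checkpoint released via the SWAP repository.

```python
# Hypothetical inference sketch: loading a SWAP adapter onto the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# "path/to/generator_adapter" is a placeholder, not a published repo id.
model = PeftModel.from_pretrained(base, "path/to/generator_adapter")
model.eval()
```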


## Citation
```bibtex
@inproceedings{xiong2025deliberate,
  title={Deliberate reasoning in language models as structure-aware planning with an accurate world model},
  author={Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={31900--31931},
  year={2025}
}
```