Update README.md
README.md CHANGED

@@ -68,6 +68,25 @@ pipe(do_closed_qa(test_article, question), max_new_tokens=128, temperature=0)[0]
# "γ¦γγγΌγ 2ηγ¨γγ³γγγ½γγ―γΉγͺγ©"
```

+### Prompting
+
+We have found that this model is able to work well using a variety of prompts, including the Alpaca style templated prompts:
+
+```python
+
+f"""
+Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+### Instruction:
+{instruction}
+### Input:
+{input}
+### Response:
+"""
+
+```
+
+We have found that having a newline at the end of the prompt can be important for signalling that the model must respond and not continue the inputs.
+

# Training details
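
For completeness, here is one way the template added above can be wired into the generation pipeline that the surrounding README already uses (`pipe(..., max_new_tokens=128, temperature=0)`). This is a minimal illustrative sketch, not part of the commit: the model id is a placeholder and `build_prompt` is a hypothetical helper, but it shows the trailing newline after `### Response:` that the added text says is important.

```python
# Sketch only: "MODEL_ID_HERE" is a placeholder for the checkpoint this README
# describes, and `pipe` stands in for the text-generation pipeline created
# earlier in the README (e.g. the one used for the closed-QA example).
from transformers import pipeline

pipe = pipeline("text-generation", model="MODEL_ID_HERE")  # placeholder model id


def build_prompt(instruction: str, input_text: str) -> str:
    """Assemble the Alpaca-style prompt, ending with a newline so the model
    responds instead of continuing the input."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n"
        "### Instruction:\n"
        f"{instruction}\n"
        "### Input:\n"
        f"{input_text}\n"
        "### Response:\n"
    )


prompt = build_prompt(
    "Summarise the article in one sentence.",  # example instruction
    "Article text goes here.",                 # example input
)
# Mirrors the generation settings shown in the README's closed-QA example.
print(pipe(prompt, max_new_tokens=128, temperature=0)[0]["generated_text"])
```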