Text Classification
Transformers
PyTorch
Chinese
bert
SequenceClassification
Lepton
古文 (ancient prose)
文言文 (Classical Chinese)
ancient
classical
letter
书信标题 (letter titles)
Instructions to use cbdb/ClassicalChineseLetterClassification with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cbdb/ClassicalChineseLetterClassification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="cbdb/ClassicalChineseLetterClassification")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cbdb/ClassicalChineseLetterClassification")
model = AutoModelForSequenceClassification.from_pretrained("cbdb/ClassicalChineseLetterClassification")
```

- Notebooks
- Google Colab
- Kaggle
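After loading the model as above, predictions come out as raw logits. As a minimal sketch of how those logits relate to the per-class probabilities printed in the README (`pred_class_proba`), the snippet below applies a softmax by hand; the logit values and the label order `[not-letter, letter]` here are hypothetical, not taken from the model:

```python
import math

# Hypothetical raw logits for one input from the sequence-classification
# head; the label order [not-letter, letter] is an assumption.
logits = [-3.1, 3.1]
labels = ["not-letter", "letter"]

# Softmax turns logits into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sort classes by descending probability, mirroring the README's output.
pred_class_proba = dict(sorted(zip(labels, probs), key=lambda kv: -kv[1]))
for name, p in pred_class_proba.items():
    print(f'The predicted probability for the {name} class: {p:.4f}')
```

In practice the same probabilities come from `torch.softmax(model(**tokenizer(text, return_tensors="pt")).logits, dim=-1)`; the manual version is only meant to show the arithmetic.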
Commit: fix text color error
README.md CHANGED

````diff
@@ -97,6 +97,7 @@ print(f'The predicted probability for the {list(pred_class_proba.keys())[0]} cla
 print(f'The predicted probability for the {list(pred_class_proba.keys())[1]} class: {list(pred_class_proba.values())[1]}')
 ```
 <font color="IndianRed"> >>> </font> The predicted probability for the not-letter class: 0.002029061783105135
+
 <font color="IndianRed"> >>> </font> The predicted probability for the letter class: 0.9979709386825562

 ```python
````