Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +119 -110
README.md CHANGED
@@ -1,110 +1,119 @@
-
- ---
-
- license: creativeml-openrail-m
- datasets:
- - AI-MO/NuminaMath-CoT
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
- pipeline_tag: text-generation
- library_name: transformers
- tags:
- - Qwen2.5
- - Ollama
- - Neumind
- - Math
- - Instruct
- - safetensors
- - pytorch
- - trl
-
- ---
-
- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
-
-
- # QuantFactory/Neumind-Math-7B-Instruct-GGUF
- This is a quantized version of [prithivMLmods/Neumind-Math-7B-Instruct](https://huggingface.co/prithivMLmods/Neumind-Math-7B-Instruct) created using llama.cpp.
-
- # Original Model Card
-
-
- ### Neumind-Math-7B-Instruct Model Files
-
- The **Neumind-Math-7B-Instruct** is a fine-tuned model based on **Qwen2.5-7B-Instruct**, optimized for mathematical reasoning, step-by-step problem-solving, and instruction-based tasks in the mathematics domain. The model is designed for applications requiring structured reasoning, numerical computations, and mathematical proof generation.
-
- | File Name | Size | Description | Upload Status |
- |------------------------------------|------------|------------------------------------------|----------------|
- | `.gitattributes` | 1.57 kB | Git attributes configuration file | Uploaded |
- | `README.md` | 265 Bytes | README file with basic information | Updated |
- | `added_tokens.json` | 657 Bytes | Additional token definitions | Uploaded |
- | `config.json` | 860 Bytes | Model configuration settings | Uploaded |
- | `generation_config.json` | 281 Bytes | Generation settings | Uploaded |
- | `merges.txt` | 1.82 MB | Tokenizer merge rules | Uploaded |
- | `pytorch_model-00001-of-00004.bin` | 4.88 GB | Model shard 1 of 4 | Uploaded (LFS) |
- | `pytorch_model-00002-of-00004.bin` | 4.93 GB | Model shard 2 of 4 | Uploaded (LFS) |
- | `pytorch_model-00003-of-00004.bin` | 4.33 GB | Model shard 3 of 4 | Uploaded (LFS) |
- | `pytorch_model-00004-of-00004.bin` | 1.09 GB | Model shard 4 of 4 | Uploaded (LFS) |
- | `pytorch_model.bin.index.json` | 28.1 kB | Model index JSON | Uploaded |
- | `special_tokens_map.json` | 644 Bytes | Mapping of special tokens | Uploaded |
- | `tokenizer.json` | 11.4 MB | Tokenizer configuration | Uploaded (LFS) |
- | `tokenizer_config.json` | 7.73 kB | Additional tokenizer settings | Uploaded |
- | `vocab.json` | 2.78 MB | Vocabulary for tokenization | Uploaded |
-
- ---
-
- ### **Key Features:**
-
- 1. **Mathematical Reasoning:**
- Specifically fine-tuned for solving mathematical problems, including arithmetic, algebra, calculus, and geometry.
-
- 2. **Step-by-Step Problem Solving:**
- Provides detailed, logical solutions for complex mathematical tasks and demonstrates problem-solving methodologies.
-
- 3. **Instructional Applications:**
- Tailored for use in educational settings, such as tutoring systems, math content creation, and interactive learning tools.
-
- ---
-
- ### **Training Details:**
- - **Base Model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- - **Dataset:** Trained on **AI-MO/NuminaMath-CoT**, a large dataset of mathematical problems and chain-of-thought (CoT) reasoning. The dataset contains **860k problems** across various difficulty levels, enabling the model to tackle a wide spectrum of mathematical tasks.
-
- ---
-
- ### **Capabilities:**
-
- - **Complex Problem Solving:**
- Solves a wide range of mathematical problems, from basic arithmetic to advanced calculus and algebraic equations.
-
- - **Chain-of-Thought Reasoning:**
- Excels in step-by-step logical reasoning, making it suitable for tasks requiring detailed explanations.
-
- - **Instruction-Based Generation:**
- Ideal for generating educational content, such as worked examples, quizzes, and tutorials.
-
- ---
-
- ### **Usage Instructions:**
-
- 1. **Model Setup:**
- Download all model shards and the associated configuration files. Ensure the files are correctly placed for seamless loading.
-
- 2. **Inference:**
- Load the model using frameworks like PyTorch and Hugging Face Transformers. Ensure the `pytorch_model.bin.index.json` file is in the same directory for shard-based loading.
-
- 3. **Customization:**
- Adjust generation parameters using `generation_config.json` to optimize outputs for your specific application.
- ---
-
- ### **Applications:**
-
- - **Education:**
- Interactive math tutoring, content creation, and step-by-step problem-solving tools.
- - **Research:**
- Automated theorem proving and symbolic mathematics.
- - **General Use:**
- Solving everyday mathematical queries and generating numerical datasets.
- ---

+ ---
+ license: creativeml-openrail-m
+ datasets:
+ - AI-MO/NuminaMath-CoT
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - Qwen2.5
+ - Ollama
+ - Neumind
+ - Math
+ - Instruct
+ - safetensors
+ - pytorch
+ - trl
+ ---
+
+ [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
+
+
+ # QuantFactory/Neumind-Math-7B-Instruct-GGUF
+ This is a quantized version of [prithivMLmods/Neumind-Math-7B-Instruct](https://huggingface.co/prithivMLmods/Neumind-Math-7B-Instruct) created using llama.cpp.
+
+ # Original Model Card
+
+
+ ### Neumind-Math-7B-Instruct Model Files
+
+ The **Neumind-Math-7B-Instruct** is a fine-tuned model based on **Qwen2.5-7B-Instruct**, optimized for mathematical reasoning, step-by-step problem-solving, and instruction-based tasks in the mathematics domain. The model is designed for applications requiring structured reasoning, numerical computations, and mathematical proof generation.
+
+ | File Name | Size | Description | Upload Status |
+ |------------------------------------|------------|------------------------------------------|----------------|
+ | `.gitattributes` | 1.57 kB | Git attributes configuration file | Uploaded |
+ | `README.md` | 265 Bytes | README file with basic information | Updated |
+ | `added_tokens.json` | 657 Bytes | Additional token definitions | Uploaded |
+ | `config.json` | 860 Bytes | Model configuration settings | Uploaded |
+ | `generation_config.json` | 281 Bytes | Generation settings | Uploaded |
+ | `merges.txt` | 1.82 MB | Tokenizer merge rules | Uploaded |
+ | `pytorch_model-00001-of-00004.bin` | 4.88 GB | Model shard 1 of 4 | Uploaded (LFS) |
+ | `pytorch_model-00002-of-00004.bin` | 4.93 GB | Model shard 2 of 4 | Uploaded (LFS) |
+ | `pytorch_model-00003-of-00004.bin` | 4.33 GB | Model shard 3 of 4 | Uploaded (LFS) |
+ | `pytorch_model-00004-of-00004.bin` | 1.09 GB | Model shard 4 of 4 | Uploaded (LFS) |
+ | `pytorch_model.bin.index.json` | 28.1 kB | Model index JSON | Uploaded |
+ | `special_tokens_map.json` | 644 Bytes | Mapping of special tokens | Uploaded |
+ | `tokenizer.json` | 11.4 MB | Tokenizer configuration | Uploaded (LFS) |
+ | `tokenizer_config.json` | 7.73 kB | Additional tokenizer settings | Uploaded |
+ | `vocab.json` | 2.78 MB | Vocabulary for tokenization | Uploaded |
+
+ ---
+
+ ### **Key Features:**
+
+ 1. **Mathematical Reasoning:**
+ Specifically fine-tuned for solving mathematical problems, including arithmetic, algebra, calculus, and geometry.
+
+ 2. **Step-by-Step Problem Solving:**
+ Provides detailed, logical solutions for complex mathematical tasks and demonstrates problem-solving methodologies.
+
+ 3. **Instructional Applications:**
+ Tailored for use in educational settings, such as tutoring systems, math content creation, and interactive learning tools.
+
+ ---
+
+ ### **Training Details:**
+ - **Base Model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
+ - **Dataset:** Trained on **AI-MO/NuminaMath-CoT**, a large dataset of mathematical problems and chain-of-thought (CoT) reasoning. The dataset contains **860k problems** across various difficulty levels, enabling the model to tackle a wide spectrum of mathematical tasks.
+
+ ---
+
+ ### **Capabilities:**
+
+ - **Complex Problem Solving:**
+ Solves a wide range of mathematical problems, from basic arithmetic to advanced calculus and algebraic equations.
+
+ - **Chain-of-Thought Reasoning:**
+ Excels in step-by-step logical reasoning, making it suitable for tasks requiring detailed explanations.
+
+ - **Instruction-Based Generation:**
+ Ideal for generating educational content, such as worked examples, quizzes, and tutorials.
+
+ ---
+
+ ### **Usage Instructions:**
+
+ 1. **Model Setup:**
+ Download all model shards and the associated configuration files. Ensure the files are correctly placed for seamless loading.
+
+ 2. **Inference:**
+ Load the model using frameworks like PyTorch and Hugging Face Transformers. Ensure the `pytorch_model.bin.index.json` file is in the same directory for shard-based loading.
+
+ 3. **Customization:**
+ Adjust generation parameters using `generation_config.json` to optimize outputs for your specific application.
+ ---
+
+ ### **Applications:**
+
+ - **Education:**
+ Interactive math tutoring, content creation, and step-by-step problem-solving tools.
+ - **Research:**
+ Automated theorem proving and symbolic mathematics.
+ - **General Use:**
+ Solving everyday mathematical queries and generating numerical datasets.
+ ---
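
The shard-based loading mentioned in the card's usage instructions relies on `pytorch_model.bin.index.json`, which maps every parameter name to the shard file that stores it. A minimal sketch of how such an index ties parameters to the four `.bin` shards listed above (the parameter names and sizes here are invented for illustration; real index files list thousands of entries, and the `params_per_shard` helper is hypothetical):

```python
from collections import defaultdict

# Toy index in the style of pytorch_model.bin.index.json.
# Real files are generated by save_pretrained(); these entries are illustrative.
index = {
    "metadata": {"total_size": 15_231_233_024},
    "weight_map": {
        "model.embed_tokens.weight": "pytorch_model-00001-of-00004.bin",
        "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
        "model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
        "lm_head.weight": "pytorch_model-00004-of-00004.bin",
    },
}

def params_per_shard(index: dict) -> dict:
    """Group parameter names by the shard file that stores them."""
    shards = defaultdict(list)
    for param, shard_file in index["weight_map"].items():
        shards[shard_file].append(param)
    return dict(shards)

shards = params_per_shard(index)
print(len(shards))  # 2 distinct shard files referenced in this toy index
```

This is why the card says the index JSON must sit in the same directory as the shards: a loader resolves each parameter to its shard file through `weight_map` before reading any weights.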