Abdul1102 committed on
Commit 88f08d0 · verified · 1 Parent(s): cf400f4

Update README.md

Files changed (1):
  README.md +119 -101
README.md CHANGED
@@ -1,199 +1,217 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

- ## Training Details

- ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]

  #### Summary

- ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

  ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

- ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure

- [More Information Needed]
-
  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ tags:
+ - llama-3.2
+ - causal-lm
+ - code
+ - python
+ - peft
+ - qlora
  ---

+ # Model Card for llama32-1b-python-docstrings-qlora

+ A LoRA adapter, trained with QLoRA on top of `meta-llama/Llama-3.2-1B-Instruct`, that generates concise one-line Python docstrings from function bodies.

  ## Model Details

  ### Model Description

+ - **Developed by:** Abdullah Al-Housni
+ - **Model type:** Causal language model with LoRA/QLoRA adapters
+ - **Language(s):** Python code as input, English docstrings as output
+ - **License:** Same as `meta-llama/Llama-3.2-1B-Instruct` (Meta Llama 3.2 Community License)
+ - **Finetuned from model:** `meta-llama/Llama-3.2-1B-Instruct`

+ The model is trained to take a Python function definition and generate a concise, one-line docstring describing what the function does.

  ## Uses

  ### Direct Use

+ - Automatically generate one-line Python docstrings for functions.
+ - Improve or bootstrap documentation in Python codebases.
+ - Educational use for learning how to summarize code behavior.

+ Typical usage pattern (illustrated below):
+ - Input: a Python function body (source code).
+ - Output: a single-sentence English description suitable as a docstring.
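A hypothetical input/output pair (illustrative only, not drawn from the training data):

```python
# Example input: the body of a Python function is passed to the model as-is.
def count_vowels(text):
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Example of the kind of one-line docstring the model is expected to return:
# "Count the number of vowels in the given string."
```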
  ### Out-of-Scope Use

+ - Generating full, multi-paragraph API documentation.
+ - Security auditing or correctness guarantees for code.
+ - Use outside Python (e.g., other programming languages) without additional fine-tuning.
+ - Any safety-critical application where incorrect summaries could cause harm.

  ## Bias, Risks, and Limitations

+ - The model can produce **incorrect or incomplete summaries**, especially for complex or ambiguous functions.
+ - It may imitate noisy or low-quality patterns from the training data (e.g., overly short or cryptic docstrings).
+ - It does **not** understand project-specific context, invariants, or business logic; outputs should be reviewed by a human developer.

  ### Recommendations

+ - Use the model as an **assistive tool**, not an authoritative source.
+ - Always review and edit generated docstrings before committing to production code.
+ - For non-Python or highly domain-specific code, consider additional fine-tuning on in-domain examples.

  ## How to Get Started with the Model

+ Example with 🤗 Transformers and PEFT (LoRA adapter):

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from peft import PeftModel
+
+ base_model_id = "meta-llama/Llama-3.2-1B-Instruct"
+ adapter_id = "YOUR_USERNAME/llama32-1b-python-docstrings-qlora"
+
+ tokenizer = AutoTokenizer.from_pretrained(base_model_id)
+ model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
+ model = PeftModel.from_pretrained(model, adapter_id)
+
+ def make_prompt(code: str) -> str:
+     return (
+         "Write a one-line Python docstring for this function:\n\n"
+         f"{code}\n\n\"\"\""
+     )
+
+ code = "def add(a, b):\n    return a + b"
+ inputs = tokenizer(make_prompt(code), return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
+ text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(text)
+ ```
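Note that `model.generate` returns the prompt tokens followed by the completion, so the decoded `text` above still contains the prompt. A small continuation of the example (reusing the `inputs`, `outputs`, and `tokenizer` variables defined in the block above) that keeps only the newly generated part:

```python
# Slice off the prompt tokens so only the generated docstring remains.
prompt_length = inputs["input_ids"].shape[1]
docstring = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True).strip()
print(docstring)
```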
+ ## Training Details

+ ### Training Data

+ - Dataset: Python subset of CodeSearchNet (`Nan-Do/code-search-net-python`)
+ - Inputs: `code` column (full Python function body)
+ - Targets: first non-empty line of `docstring`
+ - A filtered subset of roughly 1,000–2,000 examples was used for efficient QLoRA fine-tuning (see the preprocessing sketch below)
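A rough sketch of the preprocessing described above. The exact split name, filtering criteria, and subset size used for training are not documented in this card, so those details are assumptions:

```python
from datasets import load_dataset

# Assumed split name; the card does not state which split was used.
ds = load_dataset("Nan-Do/code-search-net-python", split="train")

def to_example(row):
    # Target: the first non-empty line of the reference docstring.
    first_line = next(
        (line.strip() for line in row["docstring"].splitlines() if line.strip()),
        "",
    )
    return {"code": row["code"], "target": first_line}

ds = ds.map(to_example)
ds = ds.filter(lambda row: len(row["target"]) > 0)

# Small subset (~1k-2k examples) with a held-out test split, as described in the card.
splits = ds.shuffle(seed=42).select(range(2_000)).train_test_split(test_size=0.05)
train_ds, test_ds = splits["train"], splits["test"]
```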
+ ### Training Procedure

+ - Objective: causal language modeling (predict the docstring continuation)
+ - Method: QLoRA (4-bit quantized base model with LoRA adapters; see the loading sketch below)
+ - Precision: 4-bit quantized weights, bf16 compute
+ - Epochs: 1
+ - Max sequence length: 256–512 tokens
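A minimal sketch of the 4-bit loading step implied by the bullets above. The NF4 quantization type and the `prepare_model_for_kbit_training` call are standard QLoRA practice but are assumptions here; the card only states 4-bit weights with bf16 compute:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed; the card only says "4-bit"
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 compute, as stated above
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)
```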
  #### Training Hyperparameters

+ - Learning rate: ~2e-4 (adapter weights only)
+ - Epochs: 1
+ - Optimizer: AdamW via the Hugging Face `Trainer` (see the configuration sketch below)
+ - LoRA rank: 16
+ - LoRA alpha: 32
+ - LoRA dropout: 0.05
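A configuration sketch matching the hyperparameters listed above, continuing from the quantized `base` model in the previous sketch. The LoRA target modules and batch size are assumptions; they are not stated in the card:

```python
from peft import LoraConfig, get_peft_model
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; not documented
)
model = get_peft_model(base, lora_config)  # `base` is the 4-bit model prepared above

training_args = TrainingArguments(
    output_dir="llama32-1b-python-docstrings-qlora",
    learning_rate=2e-4,
    num_train_epochs=1,
    per_device_train_batch_size=4,  # assumed; batch size is not documented
    bf16=True,
    logging_steps=10,
)
# A Trainer would then be built with the tokenized dataset and a causal-LM data collator.
```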
+ ---

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

+ A held-out test split from the same CodeSearchNet Python dataset, using the same `code` → one-line docstring mapping.

  #### Factors

+ - Function size and complexity
+ - Variety in docstring writing styles
+ - Presence of short or noisy reference docstrings

  #### Metrics

+ - BLEU (sacreBLEU): strict n-gram overlap, sensitive to paraphrasing
+ - ROUGE (ROUGE-1 / ROUGE-2 / ROUGE-L): better suited to short summaries (a computation sketch follows)
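The evaluation script itself is not included in the card; below is a sketch of how these metrics could be computed with the 🤗 `evaluate` library, using a made-up prediction/reference pair:

```python
import evaluate

bleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

# Hypothetical model output and reference one-line docstring.
predictions = ["Return the sum of a and b."]
references = ["Add two numbers and return the result."]

bleu_result = bleu.compute(predictions=predictions, references=[[r] for r in references])
rouge_result = rouge.compute(predictions=predictions, references=references)

print(f"BLEU: {bleu_result['score']:.1f}")
print(f"ROUGE-1/2/L: {rouge_result['rouge1']:.2f} / {rouge_result['rouge2']:.2f} / {rouge_result['rougeL']:.2f}")
```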
  ### Results

+ Approximate performance on ~50 held-out samples:
+
+ - BLEU: ~12.4
+ - ROUGE-1: ~0.78
+ - ROUGE-2: ~0.74
+ - ROUGE-L: ~0.78

  #### Summary

+ The model frequently reproduces or closely paraphrases the reference docstring. Occasional failure modes include echoing part of the prompt or returning an empty string. This is strong performance for a 1B-parameter model trained briefly on a small dataset.

+ ---

+ ## Model Examination

+ Not applicable.

+ ---
  ## Environmental Impact

+ - Hardware Type: Google Colab GPU (T4/L4)
+ - Hours Used: ~0.5–1 hour total
+ - Cloud Provider: Google Colab
+ - Compute Region: US
+ - Carbon Emitted: not estimated (very low, given the minimal training time)

+ ---

+ ## Technical Specifications

  ### Model Architecture and Objective

+ - Base model: Llama 3.2 1B Instruct
+ - Architecture: decoder-only transformer
+ - Objective: causal language modeling
+ - Parameter-efficient fine-tuning with LoRA (rank 16)

  ### Compute Infrastructure

  #### Hardware

+ A single Google Colab GPU (T4 or L4)

  #### Software

+ - Python
+ - PyTorch
+ - Hugging Face Transformers
+ - PEFT
+ - bitsandbytes
+ - Datasets

+ ---

+ ## Citation

+ Not applicable.

+ ---

+ ## Glossary

+ Not applicable.

+ ---

+ ## More Information

+ See the Hugging Face model page for updates and usage examples.

+ ---

+ ## Model Card Authors

+ Abdullah Al-Housni

+ ---

  ## Model Card Contact

+ Available through the Hugging Face model repository.