cakra84 committed
Commit 198c9ac · verified · 1 Parent(s): 480d57e

Update README.md

Files changed (1): README.md (+32 / -60)
README.md CHANGED
@@ -1,85 +1,57 @@
Removed (previous README.md):

license: apache-2.0
language:
- en
tags:
- text-generation
- fine-tuning
- bangkit
- agrease
metrics:
- loss
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- custom-scraped
model-index:
- name: agrease-mistral-finetune
  results:
  - task:
      type: text-generation
    dataset:
      type: custom-scraped
      name: Agrease Application Data
    metrics:
    - type: loss
      value: 0.11
      name: Fine-Tuning Loss

# Fine-Tuning Mistral for the Agrease Application

- Author: Benito Yvan Deva Putra Arung Dirgantara
- Contact: benitodeva84@gmail.com
- Project: Bangkit Academy 2024 - Machine Learning Path

## 1. Project Overview
This project fine-tunes the Mistral v3 Large Language Model (LLM) to create a specialized model for the "Agrease" application. The primary objective was to adapt the general capabilities of the Mistral LLM to understand and process domain-specific data relevant to Agrease, improving its performance on tasks such as recommendation and data interpretation.

The work was developed as a capstone project during my participation in the Bangkit Academy 2024 Batch 2 program, under the Machine Learning learning path.

## 2. Methodology
The project followed a structured machine learning workflow, from data acquisition to model evaluation.

### 2.1. Data Collection
To build a relevant dataset for fine-tuning, web scraping techniques were employed.

- Tools: BeautifulSoup and Scrapy were used to gather application data from various online marketplaces and sources (see the sketch after this list).
- Process: The scrapers were designed to extract the specific information required to train the model effectively for the Agrease application's context.
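As a rough illustration of this data-collection step, the sketch below shows the kind of Scrapy spider that could gather marketplace listings. The start URL and CSS selectors are placeholders, not the actual sources or scrapers used for Agrease.

```python
import scrapy


class MarketplaceSpider(scrapy.Spider):
    """Hypothetical spider: collects product listings from a marketplace category page."""

    name = "marketplace_products"
    # Placeholder URL; the real sources are not listed in this card.
    start_urls = ["https://example-marketplace.com/category/agriculture"]

    def parse(self, response):
        # One record per product card (the selectors are assumptions).
        for product in response.css("div.product-card"):
            yield {
                "name": product.css("h2.title::text").get(),
                "price": product.css("span.price::text").get(),
                "description": product.css("p.description::text").get(),
            }
        # Follow pagination, if the page exposes a "next" link.
        next_page = response.css("a.next-page::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

A spider like this can be run with `scrapy runspider marketplace_spider.py -o products.json` to dump the records for later cleaning.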
### 2.2. Model Fine-Tuning
The core of this project was the fine-tuning process.

- Base Model: The pre-trained Mistral v3 served as the foundation model.
- Frameworks: The fine-tuning was implemented in Python, with PyTorch and TensorFlow as the primary libraries.
- Objective: Train the model on the custom-scraped dataset, adjusting its weights to specialize its responses and understanding while minimizing the training loss.

## 3. Results
The fine-tuning process yielded significant improvements in the model's performance on domain-specific tasks.

- Fine-Tuned LLM: Achieved a final training loss of 0.11, indicating a successful adaptation of the model to the new data.
- Recommendation Model: As part of the broader Agrease application, a recommendation model was also developed, reaching a loss of 0.10.

These results demonstrate the model's strong capability to serve the specific needs of the Agrease application.

## 4. Technical Stack
- Programming Language: Python
- ML/DL Frameworks: TensorFlow, PyTorch
- Data Scraping: BeautifulSoup, Scrapy
- Base LLM: Mistral v3

## 5. Acknowledgements
I would like to thank Google, GoTo, Traveloka, and the Ministry of Education, Culture, Research, and Technology of the Republic of Indonesia for the opportunity to participate in the Bangkit Academy program. The skills and experience gained were invaluable in the successful completion of this project.
Added (updated README.md):

license: apache-2.0
tags:
- text-generation
- conversational
- mistral
- fine-tuned
- chatbot
- bangkit
widget:
- text: "Hello! I'm looking for recommendations on agricultural products."

# Fine-Tuned Mistral Model for the Agrease Application
This repository contains a fine-tuned version of a Mistral Large Language Model, specifically adapted for the "Agrease" application. The model was developed as part of a capstone project for Bangkit Academy 2024 Batch 2.

The primary goal of this project was to create a specialized conversational AI capable of assisting users within the Agrease application, for example by providing information and recommendations based on data from various marketplaces.

## Model Description
- Base Model: The model is a fine-tuned version of a Mistral v3 Large Language Model.
- Fine-tuning Task: The model was fine-tuned for conversational question answering and recommendations.
- Training Data: The training data was collected by scraping various online marketplaces using Python libraries such as BeautifulSoup and Scrapy.
- Performance: The fine-tuning process achieved a final training loss of 0.11.

## Intended Use
This model is intended to be used as a chatbot or conversational agent within a larger application. It can answer user queries, provide product recommendations, and engage in domain-specific conversations related to the Agrease application's scope.
## How to Use
You can use this model with the transformers library for text generation.

```python
from transformers import pipeline

# Load the text-generation pipeline from the Hugging Face Hub.
# Replace "your-username/model-name" with the actual model path.
generator = pipeline('text-generation', model='your-username/model-name')

# Example prompt
prompt = "What are the best fertilizers for rice paddies in a tropical climate?"

# Generate a response
response = generator(prompt, max_length=150, num_return_sequences=1)

print(response[0]['generated_text'])
```
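Because the earlier card lists mistralai/Mistral-7B-Instruct-v0.3 as the base model, prompts will generally work better when wrapped in the Mistral chat template. The snippet below is a sketch under the assumption that the fine-tuned model keeps that template; the repository path is still a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path; replace with the actual model repository.
model_id = "your-username/model-name"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Hello! I'm looking for recommendations on agricultural products."},
]

# apply_chat_template wraps the conversation in the instruction format the
# base model expects before generation.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```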
## Training Data
The dataset used for fine-tuning was created by scraping publicly available data from various e-commerce and marketplace websites. The scraping was performed using custom Python scripts with BeautifulSoup and Scrapy. The collected data was then processed and formatted into a conversational format suitable for training a large language model.
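The preprocessing pipeline itself is not published. As a rough illustration, the sketch below shows one way scraped listings could be turned into single-turn conversational examples; the record and its field names are invented for the example, not taken from the real dataset.

```python
# Purely illustrative record, not from the actual Agrease dataset.
scraped_records = [
    {"name": "Organic NPK Fertilizer 5kg", "price": "Rp 75.000", "category": "fertilizer"},
]


def to_example(record: dict) -> dict:
    """Turn one scraped listing into a single-turn chat example."""
    return {
        "messages": [
            {"role": "user", "content": f"Tell me about {record['name']}."},
            {
                "role": "assistant",
                "content": f"{record['name']} is a {record['category']} product currently listed at {record['price']}.",
            },
        ]
    }


training_examples = [to_example(r) for r in scraped_records]
```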
## Training Procedure
The fine-tuning was performed using the PyTorch framework on the collected dataset. The training focused on minimizing the cross-entropy loss to improve the model's ability to generate relevant and coherent responses in a conversational context. The final model achieved a training loss of 0.11.
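The training script is not included in this repository. For orientation, the following is a minimal sketch of a causal-LM fine-tuning run with the Hugging Face Trainer on a PyTorch backend; the example text and every hyperparameter shown are placeholders, not the values actually used for Agrease.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Base model taken from the earlier card's metadata.
base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder: stands in for the conversational examples built from the scraped data.
texts = ["[INST] What fertilizer suits rice paddies? [/INST] A balanced NPK fertilizer is a common choice."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="agrease-mistral-finetune",
        num_train_epochs=3,              # placeholder hyperparameters
        per_device_train_batch_size=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False makes the collator shift labels so the Trainer minimizes
    # the standard causal-LM cross-entropy.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```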