## Usage of this model

I'm glad to share my journey of fine-tuning Llama 2 for Named Entity Recognition (NER), particularly on a customer service dataset. NER is a natural language processing task that involves identifying and classifying entities such as names of people, organizations, locations, and other important terms within a given text.
The customer service dataset I used was carefully curated and annotated with a wide range of service-related entities, such as specific types of services, service providers, service locations, and other related terms. The data is diverse and representative of the domain it targets. (I will re-upload the dataset with more samples here: zaursamedov1/customer-service-ner.)
## For a closer look at the model, read this Colab notebook

(Coming soon...)
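Until the notebook is ready, here is a minimal inference sketch. It assumes the adapter has been pushed to the Hugging Face Hub; `BASE_MODEL`, `ADAPTER_ID`, and the prompt format below are illustrative placeholders, not the exact values used in training.

```python
# Minimal sketch: load the base Llama 2 model, attach the LoRA adapter, and run NER-style extraction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"        # assumption: adjust to the base model actually fine-tuned
ADAPTER_ID = "your-username/your-ner-adapter"  # hypothetical: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the fine-tuned LoRA weights
model.eval()

# Illustrative prompt; the real training prompt template may differ.
prompt = (
    "Extract the service-related entities from the following text:\n"
    "My fiber installation in Boston by Acme Telecom was delayed twice."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```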
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training (reproduced as code after the list):

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
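In code, this is what that configuration looks like. A minimal sketch assuming a `transformers` release that exposes the 4-bit `BitsAndBytesConfig` options:

```python
# Sketch of the quantization config above, expressed with transformers' BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_8bit stays False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    # llm_int8_* fields keep their defaults (threshold 6.0, no skip modules, no fp32 CPU offload).
)

# Passed to from_pretrained when loading the base model, e.g.:
# model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config)
```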
### Framework versions

- PEFT 0.5.0.dev0