Tags: Image-Text-to-Text · Transformers · Safetensors · English · mistral · text-generation · vision · conversational · text-generation-inference
Instructions for using LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b")
model = AutoModelForCausalLM.from_pretrained("LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b
```
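The curl request above can also be made from Python. The sketch below uses only the standard library; the endpoint and port assume the default `vllm serve` settings, and `build_payload`/`chat` are hypothetical helper names, not part of any library API:

```python
# Minimal sketch: query a locally running vLLM OpenAI-compatible server.
import json
import urllib.request

MODEL = "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b"

def build_payload(text: str, image_url: str) -> dict:
    """Mirror the curl example: one user turn with a text part and an image_url part."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

def chat(payload: dict, base_url: str = "http://localhost:8000") -> str:
    """POST the payload to /v1/chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_payload(
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
# With the server running: print(chat(payload))
```

The same payload shape works against any OpenAI-compatible endpoint, so it applies to the SGLang server below as well (with `base_url="http://localhost:30000"`).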
- SGLang
How to use LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b with Docker Model Runner:
```shell
docker model run hf.co/LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini3b
```
The repository's safetensors index maps each tensor to the shard file that stores it (the listing below is truncated):

```json
{
  "metadata": {
    "total_size": 8174347264
  },
  "weight_map": {
    "ImageProcessor.decoder.lm_head.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.embed_tokens.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.0.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.1.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.2.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.3.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.4.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.5.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.6.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.decoder.model.norm.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.enc_to_dec_proj.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.enc_to_dec_proj.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.embeddings.cls_token": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.embeddings.patch_embeddings.projection.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.embeddings.patch_embeddings.projection.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.embeddings.position_embeddings": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.0.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.1.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.10.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.11.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.2.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.3.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.4.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.5.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.layernorm_after.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.layernorm_before.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.layernorm_before.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.6.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.key.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.key.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.query.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.query.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.value.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.attention.value.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.output.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.attention.output.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.intermediate.dense.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.intermediate.dense.weight": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.layernorm_after.bias": "model-00002-of-00002.safetensors",
    "ImageProcessor.encoder.encoder.layer.7.layernorm_after.weight": "model-00002-of-00002.safetensors",
```
| "ImageProcessor.encoder.encoder.layer.7.layernorm_before.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.7.layernorm_before.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.7.output.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.7.output.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.key.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.key.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.query.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.query.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.value.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.attention.value.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.output.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.attention.output.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.intermediate.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.intermediate.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.layernorm_after.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.layernorm_after.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.layernorm_before.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.layernorm_before.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.output.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.8.output.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.key.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.key.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.query.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.query.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.value.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.attention.value.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.output.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.attention.output.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.intermediate.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.intermediate.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.layernorm_after.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.layernorm_after.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.layernorm_before.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.layernorm_before.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.output.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.encoder.layer.9.output.dense.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.layernorm.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.layernorm.weight": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.pooler.dense.bias": "model-00002-of-00002.safetensors", | |
| "ImageProcessor.encoder.pooler.dense.weight": "model-00002-of-00002.safetensors", | |
| "lm_head.weight": "model-00002-of-00002.safetensors", | |
| "model.embed_tokens.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.10.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.11.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.mlp.up_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.mlp.up_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.mlp.up_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.mlp.up_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.input_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.mlp.down_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.mlp.up_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", | |
| "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", | |
| "model.norm.weight": "model-00002-of-00002.safetensors" | |
| } | |
| } | |
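The `weight_map` above is the standard safetensors index format: each parameter name maps to the shard file that stores it, which is how loaders know which of the two `model-0000N-of-00002.safetensors` files to open for a given tensor. As a minimal sketch (not part of the model card), such a map can be inverted to see how parameters are distributed across shards; the small `weight_map` dict below is just a few entries copied from the index for illustration:

```python
from collections import defaultdict

# A few entries copied from the weight_map above: parameter name -> shard file.
weight_map = {
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors",
}

# Invert the map: shard file -> list of parameter names stored in it.
shards = defaultdict(list)
for param, shard in weight_map.items():
    shards[shard].append(param)

for shard in sorted(shards):
    print(f"{shard}: {len(shards[shard])} tensor(s)")
```

In a full checkpoint the same inversion would be run over the `weight_map` loaded from `model.safetensors.index.json`, letting a loader fetch each shard once and pull all of its tensors together.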