---
license: gemma
tags:
- coding
- rag
- gemma
- chain-of-thought
base_model: google/functiongemma-270m-it
---
# FunctionGemma Coding Assistant

A production-ready coding assistant with:

- Chain of Recursive Thoughts reasoning
- Web search integration (optional)
- Local knowledge base (SQLite + FAISS)
- Fine-tuned on 20k diverse code samples
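The card does not specify the knowledge base's schema or embedding model, so the following is only a minimal sketch of the SQLite-backed retrieval idea: snippets live in an SQLite table, a toy character-count embedding stands in for a real embedding model, and brute-force cosine similarity substitutes for a FAISS index.

```python
import sqlite3
import numpy as np

# Toy deterministic embedding (illustration only -- the real embedding
# model used by this assistant is not documented in the card).
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Store code snippets in an in-memory SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snippets (id INTEGER PRIMARY KEY, text TEXT)")
docs = [
    "def reverse_string(s): return s[::-1]",
    "def fibonacci(n): return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)",
]
db.executemany("INSERT INTO snippets (text) VALUES (?)", [(d,) for d in docs])

# Retrieve the snippet closest to a query by cosine similarity
# (brute force here; a FAISS index would replace this loop at scale).
rows = db.execute("SELECT id, text FROM snippets").fetchall()
query_vec = embed("reverse a string")
best_score, best_text = max((float(query_vec @ embed(t)), t) for _, t in rows)
print(best_text)
```

The retrieved snippet would then be prepended to the model prompt as context before generation.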
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("shaurya79/functiongemma-coding-assistant")
tokenizer = AutoTokenizer.from_pretrained("shaurya79/functiongemma-coding-assistant")

prompt = "Write a Python function to reverse a string"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Features

- Code generation (Python, JavaScript, etc.)
- Detailed explanations
- Bug-fixing suggestions
- Multi-language support
## Size

~250-300 MB (quantized)