Instructions to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TracNetwork/functiongemma-270m-it-intercomswap-v3")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TracNetwork/functiongemma-270m-it-intercomswap-v3")
model = AutoModelForCausalLM.from_pretrained("TracNetwork/functiongemma-270m-it-intercomswap-v3")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- llama-cpp-python
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TracNetwork/functiongemma-270m-it-intercomswap-v3",
    filename="gguf/functiongemma-v3-f16.gguf",
)
llm.create_chat_completion(
    messages = [
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16

# Run inference directly in the terminal:
llama-cli -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16

# Run inference directly in the terminal:
llama-cli -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16

# Run inference directly in the terminal:
./llama-cli -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Use Docker
docker model run hf.co/TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
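Once llama-server is running, it exposes an OpenAI-compatible API (by default on port 8080). A minimal sketch of calling it from Python, assuming the default host/port and the requests package; the model field is passed through as-is and is not required by llama-server:
# Sketch: call the local llama-server OpenAI-compatible endpoint (default port 8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "TracNetwork/functiongemma-270m-it-intercomswap-v3:F16",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])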
- LM Studio
- Jan
- vLLM
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TracNetwork/functiongemma-270m-it-intercomswap-v3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TracNetwork/functiongemma-270m-it-intercomswap-v3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
- SGLang
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "TracNetwork/functiongemma-270m-it-intercomswap-v3" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TracNetwork/functiongemma-270m-it-intercomswap-v3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "TracNetwork/functiongemma-270m-it-intercomswap-v3" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TracNetwork/functiongemma-270m-it-intercomswap-v3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Ollama
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Ollama:
ollama run hf.co/TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
- Unsloth Studio
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for TracNetwork/functiongemma-270m-it-intercomswap-v3 to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for TracNetwork/functiongemma-270m-it-intercomswap-v3 to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for TracNetwork/functiongemma-270m-it-intercomswap-v3 to start chatting
- Pi
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
    "providers": {
        "llama-cpp": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "TracNetwork/functiongemma-270m-it-intercomswap-v3:F16" }
            ]
        }
    }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Run Hermes
hermes
- Docker Model Runner
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Docker Model Runner:
docker model run hf.co/TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
- Lemonade
How to use TracNetwork/functiongemma-270m-it-intercomswap-v3 with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull TracNetwork/functiongemma-270m-it-intercomswap-v3:F16
Run and chat with the model
lemonade run user.functiongemma-270m-it-intercomswap-v3-F16
List all available models
lemonade list
functiongemma-270m-it-intercomswap-v3
IntercomSwap fine-tuned FunctionGemma model for deterministic tool-calling in BTC Lightning <-> USDT Solana swap workflows.
What Is IntercomSwap
IntercomSwap is a fork of upstream Intercom that keeps the Intercom stack intact and adds a non-custodial swap harness for BTC over Lightning <-> USDT on Solana via a shared escrow program, along with deterministic operator tooling, recovery, and unattended end-to-end tests.
GitHub: https://github.com/TracSystems/intercom-swap
Base model: google/functiongemma-270m-it
Model Purpose
- Convert natural-language operator prompts into validated tool calls (see the sketch after this list).
- Enforce buy/sell direction mapping for swap intents.
- Support repeat/autopost workflows used by IntercomSwap prompt routing.
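As a rough illustration of the first point above, here is a minimal sketch of turning an operator prompt into a tool call with the Transformers chat template. The create_swap_intent tool and its parameters are hypothetical placeholders, not the actual IntercomSwap tool schema, and passing tools= to apply_chat_template assumes the model's chat template supports tool definitions:
# Sketch only: hypothetical tool schema, not the real IntercomSwap tool set.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TracNetwork/functiongemma-270m-it-intercomswap-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical tool definition in OpenAI-style JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "create_swap_intent",
        "description": "Create a BTC Lightning <-> USDT Solana swap intent.",
        "parameters": {
            "type": "object",
            "properties": {
                "direction": {"type": "string", "enum": ["buy", "sell"]},
                "amount_sats": {"type": "integer"},
            },
            "required": ["direction", "amount_sats"],
        },
    },
}]

messages = [{"role": "user", "content": "Sell 250000 sats for USDT."}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=tools,  # requires a chat template with tool support
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# The decoded text should contain the model's tool call; validate it server-side.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))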
Repository Layout
- ./ : merged HF checkpoint (Transformers format)
- ./nvfp4 : NVFP4-quantized checkpoint for TensorRT-LLM serving
- ./gguf : functiongemma-v3-f16.gguf, functiongemma-v3-q8_0.gguf
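The GGUF startup commands below reference these files by local path. As a sketch, assuming the huggingface_hub library, a single file can be fetched like this:
# Sketch: download one GGUF file from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TracNetwork/functiongemma-270m-it-intercomswap-v3",
    filename="gguf/functiongemma-v3-q8_0.gguf",
)
print(path)  # local path to pass to llama-server via -m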
Startup By Flavor
1) Base HF checkpoint (Transformers)
python -m vllm.entrypoints.openai.api_server \
--model TracNetwork/functiongemma-270m-it-intercomswap-v3 \
--host 0.0.0.0 \
--port 8000 \
--dtype auto \
--max-model-len 8192
Lower memory mode example:
python -m vllm.entrypoints.openai.api_server \
--model TracNetwork/functiongemma-270m-it-intercomswap-v3 \
--host 0.0.0.0 \
--port 8000 \
--dtype auto \
--max-model-len 4096 \
--max-num-seqs 8
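Once the vLLM server is up, it can be called through its OpenAI-compatible API. A minimal sketch, assuming the openai Python client and the host/port from the commands above (the api_key value is a placeholder, since no key is configured here):
# Sketch: query the local vLLM server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
resp = client.chat.completions.create(
    model="TracNetwork/functiongemma-270m-it-intercomswap-v3",
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)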
2) NVFP4 checkpoint (./nvfp4)
TensorRT-LLM example with explicit headroom (avoid consuming all VRAM):
trtllm-serve serve ./nvfp4 \
--backend pytorch \
--host 0.0.0.0 \
--port 8012 \
--max_batch_size 8 \
--max_num_tokens 16384 \
--kv_cache_free_gpu_memory_fraction 0.05
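trtllm-serve exposes an OpenAI-compatible HTTP API. A minimal sketch of calling the server started above, assuming it serves the standard /v1/models and /v1/chat/completions routes on port 8012 and that the requests package is available; the registered model name depends on how trtllm-serve names the ./nvfp4 checkpoint, so it is discovered first:
# Sketch: discover the registered model name, then call the chat endpoint.
import requests

base = "http://localhost:8012"
model_name = requests.get(f"{base}/v1/models", timeout=10).json()["data"][0]["id"]
resp = requests.post(
    f"{base}/v1/chat/completions",
    json={
        "model": model_name,
        "messages": [{"role": "user", "content": "Who are you?"}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])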
Memory tuning guidance:
- Decrease --max_num_tokens first.
- Then reduce --max_batch_size.
- Keep --kv_cache_free_gpu_memory_fraction around 0.05 to preserve safety headroom.
3) GGUF checkpoint (./gguf)
Q8_0 (recommended default balance):
llama-server \
-m ./gguf/functiongemma-v3-q8_0.gguf \
--host 0.0.0.0 \
--port 8014 \
--ctx-size 8192 \
--batch-size 256 \
--ubatch-size 64 \
--gpu-layers 12
F16 (higher quality, higher memory):
llama-server \
-m ./gguf/functiongemma-v3-f16.gguf \
--host 0.0.0.0 \
--port 8014 \
--ctx-size 8192 \
--batch-size 256 \
--ubatch-size 64 \
--gpu-layers 12
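Either llama-server command above serves an OpenAI-compatible API on port 8014. A minimal sketch of probing the server and sending one chat request, assuming the requests package; llama-server does not require a model field in the request body:
# Sketch: probe the llama-server instance on port 8014 and send one chat request.
import requests

base = "http://localhost:8014"
print(requests.get(f"{base}/health", timeout=10).status_code)  # 200 once the model is loaded

resp = requests.post(
    f"{base}/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Who are you?"}], "max_tokens": 64},
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])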
Memory tuning guidance:
- Lower --gpu-layers to reduce VRAM usage.
- Lower --ctx-size to reduce RAM+VRAM KV-cache usage.
- Use q8_0 for general deployment, f16 for quality-first offline tests.
Training Snapshot
- Base family: FunctionGemma 270M instruction-tuned.
- Fine-tune objective: IntercomSwap tool-call routing and argument shaping.
- Corpus profile: operations + intent-routing + tool-calling examples.
Evaluation Snapshot
From held-out evaluation for this release line:
- Train examples: 6263
- Eval examples: 755
- Train loss: 0.01348
- Eval loss: 0.02012
Intended Use
- Local or private deployments where tool execution is validated server-side.
- Deterministic operator workflows for swap infra.
Out-of-Scope Use
- Autonomous financial decision-making.
- Direct execution of unvalidated user text as shell/actions.
- Safety-critical usage without host-side policy/validation.
Safety Notes
- Always validate tool name + argument schema server-side (a minimal sketch follows this list).
- Treat network-side payloads as untrusted input.
- Keep wallet secrets and API credentials outside model context.
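As a rough illustration of the first safety note, here is a minimal sketch of host-side validation before any tool call is executed. The allow-list and the create_swap_intent argument checks are hypothetical placeholders, not the actual IntercomSwap tool registry:
# Sketch: validate a model-proposed tool call against an allow-list and a
# per-tool argument check before executing anything. Hypothetical schema.
ALLOWED_TOOLS = {
    "create_swap_intent": {
        "direction": lambda v: v in ("buy", "sell"),
        "amount_sats": lambda v: isinstance(v, int) and v > 0,
    },
}

def validate_tool_call(name: str, args: dict) -> bool:
    """Return True only if the tool is allow-listed and every argument passes its check."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        return False
    if set(args) != set(schema):
        return False
    return all(check(args[key]) for key, check in schema.items())

# Reject anything the model emits that does not match the schema.
print(validate_tool_call("create_swap_intent", {"direction": "sell", "amount_sats": 250000}))  # True
print(validate_tool_call("transfer_funds", {"to": "somewhere"}))  # False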
Provenance
- Derived from: google/functiongemma-270m-it
- Integration target: IntercomSwap prompt-mode tool routing