Instructions for using google/functiongemma-270m-it with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google/functiongemma-270m-it with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="google/functiongemma-270m-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/functiongemma-270m-it")
model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
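FunctionGemma is tuned for function calling, so you will usually also pass tool definitions into the chat template. The snippet below is a sketch rather than the official recipe: it assumes this checkpoint's chat template accepts the tools argument supported by recent Transformers releases, and get_current_weather is a hypothetical example function.

# Hedged sketch: pass a tool schema through the chat template (assumes the
# model's chat template supports the `tools` argument; `get_current_weather`
# is a made-up example function).
from transformers import AutoTokenizer, AutoModelForCausalLM

def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"

tokenizer = AutoTokenizer.from_pretrained("google/functiongemma-270m-it")
model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it")

messages = [{"role": "user", "content": "What is the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_weather],  # schema is generated from the signature and docstring
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))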
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/functiongemma-270m-it with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/functiongemma-270m-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/functiongemma-270m-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
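The same server can also be queried from Python with the OpenAI client instead of curl. A minimal sketch, assuming vllm serve is running on the default port 8000 and the openai package is installed; the API key is a placeholder, since the local server does not check it by default.

# Minimal sketch: query the local vLLM server with the OpenAI Python client.
# Assumes `vllm serve` is running on localhost:8000 (the default).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="google/functiongemma-270m-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)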
Use Docker
docker model run hf.co/google/functiongemma-270m-it
- SGLang
How to use google/functiongemma-270m-it with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "google/functiongemma-270m-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/functiongemma-270m-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
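Since this is a function-calling model, you may also want to pass tool definitions through the OpenAI-compatible API. The sketch below is illustrative only: it assumes the SGLang server started above is listening on port 30000 and that the served chat template maps the tools field onto FunctionGemma's function-calling format (a tool-call parser may need to be configured on the server); get_current_weather is a hypothetical tool.

# Hedged sketch: send an OpenAI-style tool definition to the local SGLang server.
# Assumes the server started above is listening on localhost:30000; whether the
# model returns a structured tool call depends on the chat template in use.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # hypothetical example tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="google/functiongemma-270m-it",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message)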
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "google/functiongemma-270m-it" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/functiongemma-270m-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Docker Model Runner
How to use google/functiongemma-270m-it with Docker Model Runner:
docker model run hf.co/google/functiongemma-270m-it
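Beyond the interactive chat above, Docker Model Runner can expose an OpenAI-compatible API. The following is only a sketch based on assumptions about your setup: it assumes host TCP access has been enabled for Model Runner (commonly on port 12434) and that the model is addressed by the same hf.co/... name; check the Docker Model Runner documentation for the exact host, port, and path on your installation.

# Hedged sketch: call Docker Model Runner's OpenAI-compatible API from the host.
# Assumes TCP host access is enabled on port 12434; adjust the host, port, and
# path to match your Docker Model Runner configuration.
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hf.co/google/functiongemma-270m-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'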
Tool-calling example on Raspberry Pi
I used Ollama to host a FunctionGemma model and create an agent that invokes my Gmail MCP actions using LangChain. It is wonderful!! It runs on a Raspberry Pi and takes just a few seconds.
import asyncio
import time

from langchain.agents import create_agent
from langchain_ollama import ChatOllama
from langchain_mcp_adapters.client import MultiServerMCPClient


async def init():
    client = MultiServerMCPClient(
        {
            "email": {
                "transport": "http",  # HTTP-based remote MCP server
                # Gmail MCP server exposed via n8n
                "url": "https://n8n.samair.me/mcp/gmailv2",
                "headers": {
                    "Authorization": "Bearer "
                },
            },
        }
    )
    tools = await client.get_tools()
    model = ChatOllama(
        model="functiongemma",
        temperature=0,
        base_url="http://pi5.local:11434",
        # other params...
    )
    SYSTEM_PROMPT = """
    You are a helpful personal assistant; your job is to help your user.
    """
    agent = create_agent(model, tools=tools, system_prompt=SYSTEM_PROMPT)

    # Baseline timestamp before streaming starts
    last_step_time = time.perf_counter()
    current_time = time.perf_counter()
    duration = current_time - last_step_time
    print(f"[{duration:.2f}s]")

    async for chunk in agent.astream(
        {"messages": [{"role": "user", "content": "Send a message to sameer@samair.me asking if they are available for a meeting on the weekend"}]},
    ):
        for step, data in chunk.items():
            current_time = time.perf_counter()
            duration = current_time - last_step_time
            last_step_time = current_time  # reset so each step is timed separately
            print(f"[{duration:.2f}s] step: {step}")
            print(f"content: {data['messages'][-1].content_blocks}")


if __name__ == "__main__":
    asyncio.run(init())
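For reference, the example above assumes a model named functiongemma is already registered with the Ollama instance at pi5.local. One way to set that up, assuming you have produced a GGUF conversion of the checkpoint (the filename below is a placeholder), is a Modelfile:

# Hedged sketch: register a local GGUF conversion with Ollama under the name
# used by ChatOllama above. The GGUF filename is a placeholder for whatever
# conversion you produce.
cat > Modelfile <<'EOF'
FROM ./functiongemma-270m-it.gguf
EOF
ollama create functiongemma -f Modelfile
ollama run functiongemma "Who are you?"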
Hi @samairtimer
A huge thank you for your work on this implementation; seeing FunctionGemma integrated so seamlessly with MCP on the edge is fantastic. We truly appreciate the time and effort you've put into showcasing the model's capabilities to the community!