## Configuration Reference

### Agent Configuration
Each agent is configured via Helm values or the Governance UI. Key settings:
```yaml
agents:
  sencho:
    enabled: true
    model: anthropic/claude-sonnet-4-20250514  # LLM model identifier
    apiKey: ""                                 # Or use secretRef for Key Vault
    secretRef:
      name: kantai-llm-keys
      key: sencho-api-key
    tools:                                     # Enabled tools
      - calendar
      - notes
      - web-search
      - email
    channels:                                  # Connected messaging channels
      - telegram
      - discord
    memory:
      longTerm: true                           # Enable persistent memory
      maxTokens: 100000                        # Long-term memory token budget
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
```
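The `secretRef` above points at an ordinary Kubernetes Secret in the release namespace. A sketch of creating it, assuming the `kantai` namespace and the secret/key names from the example (the literal value is a placeholder):

```shell
# Create the Secret referenced by agents.sencho.secretRef.
# Replace the placeholder with your real provider API key.
kubectl create secret generic kantai-llm-keys \
  --namespace kantai \
  --from-literal=sencho-api-key='<your-api-key>'
```

Leaving `apiKey` empty in the values file and using `secretRef` keeps the key out of your Helm release history.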
### Channel Configuration
```yaml
channels:
  telegram:
    enabled: true
    token: ""                  # Bot token from @BotFather
    secretRef:
      name: kantai-channel-keys
      key: telegram-token
    allowedUsers:              # Optional: restrict access
      - "123456789"
  discord:
    enabled: true
    token: ""
    guildId: ""
    channels:                  # Which Discord channels to join
      - "general"
      - "ops"
  signal:
    enabled: false
    number: "+1234567890"
```
## Helm Chart Values Reference

### Core Values
| Parameter | Default | Description |
|---|---|---|
| `gangway.enabled` | `true` | Deploy the Gangway portal |
| `gangway.host` | `""` | Hostname for ingress |
| `gangway.replicas` | `1` | Portal pod replicas |
| `gangway.tls.enabled` | `true` | Enable TLS on ingress |
| `gangway.tls.issuer` | `letsencrypt` | cert-manager issuer |
| `postgresql.enabled` | `true` | Deploy built-in PostgreSQL |
| `postgresql.external.host` | `""` | Use external PostgreSQL instead |
| `redis.enabled` | `true` | Deploy built-in Redis |
| `backup.enabled` | `false` | Enable automated backups |
| `backup.schedule` | `0 */1 * * *` | Backup cron schedule |
| `backup.storage` | `""` | Blob storage connection string |
| `monitoring.enabled` | `false` | Deploy Prometheus ServiceMonitor |
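Several of the values above are typically changed together. A hedged sketch of a `values.yaml` overlay that switches to an external PostgreSQL and turns on backups and monitoring (the hostname and connection string are placeholders, not defaults):

```yaml
postgresql:
  enabled: false              # skip the built-in database
  external:
    host: pg.internal.example.com
backup:
  enabled: true
  schedule: "0 3 * * *"       # daily at 03:00 instead of the hourly default
  storage: "<blob-connection-string>"
monitoring:
  enabled: true               # deploys the Prometheus ServiceMonitor
```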
### Agent Values
| Parameter | Default | Description |
|---|---|---|
| `agents.<name>.enabled` | `false` | Enable this agent |
| `agents.<name>.model` | `""` | LLM model identifier |
| `agents.<name>.apiKey` | `""` | API key (plain text, use `secretRef` instead) |
| `agents.<name>.secretRef` | `{}` | K8s secret reference for API key |
| `agents.<name>.tools` | `[]` | List of enabled tools |
| `agents.<name>.channels` | `[]` | List of connected channels |
| `agents.<name>.tier` | `2` | Permission tier (1-3) |
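Since every `agents.<name>` entry follows the same schema, defining a custom agent is just another key under `agents:`. A minimal sketch, where the agent name, secret key, and tool list are illustrative rather than shipped defaults:

```yaml
agents:
  kairo:                      # illustrative custom agent name
    enabled: true
    model: anthropic/claude-sonnet-4-20250514
    secretRef:
      name: kantai-llm-keys
      key: kairo-api-key      # assumed key inside the existing secret
    tools:
      - notes
      - web-search
    channels:
      - discord
    tier: 2                   # default permission tier
```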
## API Reference

### WebSocket Protocol
Gangway and agents communicate over WebSocket. The protocol is JSON-based.
Connection:

```
wss://gangway.example.com/ws/chat?agent=sencho&session=<session-id>
```
Send a message:

```json
{
  "type": "message",
  "content": "What's on my calendar today?",
  "session": "abc-123",
  "timestamp": "2025-01-15T10:30:00Z"
}
```
Receive a response:

```json
{
  "type": "response",
  "content": "You have 3 meetings today...",
  "agent": "sencho",
  "session": "abc-123",
  "timestamp": "2025-01-15T10:30:02Z",
  "tokens": { "input": 245, "output": 189 }
}
```
Stream (partial responses):

```json
{
  "type": "stream",
  "content": "You have ",
  "done": false
}
```
Events:

```json
{
  "type": "event",
  "event": "agent.tool_call",
  "data": { "tool": "calendar", "action": "list_events" }
}
```
### REST API
| Endpoint | Method | Description |
|---|---|---|
| `/api/agents` | GET | List agents and their status |
| `/api/agents/:name/config` | GET/PUT | Get or update agent configuration |
| `/api/sessions` | GET | List chat sessions |
| `/api/sessions/:id/messages` | GET | Get messages for a session |
| `/api/nava/cards` | GET | Search Nava knowledge cards |
| `/api/governance/policies` | GET/PUT | Manage governance policies |
| `/api/metrics` | GET | Prometheus-format metrics |
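A `GET /api/agents` response can be consumed like any JSON list. A sketch of filtering it in Python — note the response shape here is an assumption for illustration, not a documented schema:

```python
import json

# Hypothetical example payload; the real /api/agents schema may differ.
sample = '[{"name": "sencho", "enabled": true, "status": "running"},' \
         ' {"name": "kairo", "enabled": false, "status": "stopped"}]'


def running_agents(payload: str) -> list[str]:
    """Return the names of agents reported as running."""
    return [a["name"] for a in json.loads(payload) if a.get("status") == "running"]
```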
## Troubleshooting Guide
### Agent won’t start

- Check pod status: `kubectl get pods -n kantai`
- Check logs: `kubectl logs deploy/<agent> -n kantai`
- Common causes: invalid API key, missing secret, insufficient resources
- Verify the secret exists: `kubectl get secret -n kantai`
### Gangway shows “Disconnected”

- Check the Gangway pod: `kubectl logs deploy/gangway -n kantai`
- Verify the agent pod is running
- Check that network policies allow Gangway→Agent communication
- Restart Gangway: `kubectl rollout restart deploy/gangway -n kantai`
### Messages not delivered to Telegram/Discord

- Verify the channel token in secrets
- Check agent logs for API errors from the messaging provider
- Ensure egress network policies allow traffic to `api.telegram.org` or `discord.com`
- Test the bot token independently with curl
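For Telegram, the last step above can be done directly against the Bot API's `getMe` method (substitute your real token):

```shell
# A JSON response with "ok": true confirms the token is valid;
# 401 Unauthorized means the token is wrong or revoked.
curl -s "https://api.telegram.org/bot<YOUR_TOKEN>/getMe"
```

If the token checks out here but messages still fail from inside the cluster, the problem is almost certainly egress network policy rather than the token.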
### High token usage / unexpected costs

- Check Dashboard cost estimates
- Review agent memory settings — a large `maxTokens` means more context per request
- Use Governance to set daily token limits
- Check Tetraban for unexpected agent activity
## FAQ
**Is Kantai free?** Yes. Kantai OSS is MIT-licensed. You pay only for your infrastructure and LLM API usage.

**What LLM providers are supported?** OpenAI, Anthropic, Azure OpenAI, Google Vertex AI, and any OpenAI-compatible API (Ollama, vLLM, LiteLLM). Configure the model identifier and endpoint per agent.

**Can I run Kantai without cloud?** Absolutely. Deploy on bare metal or local k3s with Ollama for fully air-gapped, offline operation. No cloud services required.

**How do I add a custom agent?** Define a new agent in your Helm values with a unique name, role description, model, and tools. The OpenClaw runtime handles the rest.

**Is my data sent anywhere?** Only to the LLM provider you configure. Kantai itself sends no telemetry, no analytics, no phone-home traffic. Your data stays in your cluster.

**Can I use Kantai with my existing Kubernetes tools?** Yes. Kantai is standard Kubernetes — it works with your existing monitoring (Prometheus/Grafana), logging (Loki/ELK), GitOps (Flux/ArgoCD), and service mesh (Istio/Linkerd).
**How do I upgrade?**

```shell
helm repo update
helm upgrade kantai kantai/kantai-oss -n kantai -f values.yaml
```

Agents restart with zero downtime via rolling updates.
Need more help? Open an issue on GitHub or check the Community page.