Prerequisites
Kubernetes Cluster
A running K8s cluster (1.27+). AKS, EKS, GKE, k3s, or kind all work. Minimum 3 nodes recommended for production.
Helm 3
Helm 3.12+ installed on your local machine. Used to deploy the Kantai chart and manage upgrades.
kubectl
kubectl configured with cluster access. You’ll need cluster-admin or equivalent RBAC for initial setup.
Optional: A domain name with TLS cert (or cert-manager) for the Gangway portal. Works on localhost for dev.
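Before moving on, it helps to confirm the tooling is in place. A quick sanity check using standard kubectl, helm, and (optionally) kind commands; the kantai-dev cluster name is just an example:

kubectl version --client                       # client should be compatible with a 1.27+ cluster
helm version --short                           # expect v3.12 or newer
kubectl auth can-i '*' '*' --all-namespaces    # "yes" indicates cluster-admin-level access

# Optional: throwaway local cluster for development
kind create cluster --name kantai-dev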
Step 1 — Add the Helm Repository
helm repo add kantai https://charts.pemos.org
helm repo update
Step 2 — Create a Namespace
kubectl create namespace kantai
Step 3 — Install Kantai OSS
Minimal install with defaults:
helm install kantai kantai/kantai-oss \
  --namespace kantai \
  --set gangway.host=gangway.example.com \
  --set agents.sencho.enabled=true \
  --set agents.takumi.enabled=true
This deploys:
- Gangway — the web portal (Dashboard, Chat, Nava, Learn, Wiki, Governance, Tetraban)
- Sencho — your personal AI captain
- Takumi — your worker/execution agent
- OpenClaw runtime — the agent execution engine
- PostgreSQL — conversation and memory storage
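The same flags can also live in a values file, which is easier to keep in version control. A minimal sketch, assuming the --set paths above map one-to-one onto values keys:

# values.yaml
gangway:
  host: gangway.example.com
agents:
  sencho:
    enabled: true
  takumi:
    enabled: true

helm install kantai kantai/kantai-oss --namespace kantai -f values.yaml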
Step 4 — Verify the Deployment
kubectl get pods -n kantai
All pods should reach Running within 2-3 minutes. If a pod stays in Pending, describe it to find out why (see Troubleshooting below); if a pod is running but misbehaving, check its logs:
kubectl logs -n kantai deploy/gangway
kubectl logs -n kantai deploy/sencho
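If you would rather block until everything is ready instead of polling, kubectl can wait on the Deployments directly (the five-minute timeout is an arbitrary choice):

kubectl wait --for=condition=Available deployment --all -n kantai --timeout=300s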
Step 5 — First Login to Gangway
Open your browser and navigate to your Gangway host (e.g., https://gangway.example.com).
- The initial admin credentials are output during install — check the Helm notes: helm get notes kantai -n kantai
- Log in and change your password immediately
- You’ll land on the Dashboard — showing agent health, fleet status, and quick stats
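If you are running without a domain (the localhost dev setup from the prerequisites), a port-forward is enough to reach the portal. The service name and port below are assumptions; check the actual values first:

kubectl get svc -n kantai                            # find the Gangway service name and port
kubectl port-forward svc/gangway 8080:80 -n kantai
# then open http://localhost:8080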
Step 6 — Configure Your First Agent
From the Gangway portal:
- Navigate to Governance → Agent Config
- Select Sencho and configure:
  - Model: Choose your LLM backend (OpenAI, Anthropic, Azure OpenAI, local Ollama)
  - API Key: Enter your provider key (stored encrypted via K8s secrets)
  - Personality: Adjust tone, verbosity, and behavior
- Click Save & Restart Agent
Or configure via Helm values:
agents:
  sencho:
    model: anthropic/claude-sonnet-4-20250514
    apiKey: sk-...
    tools:
      - calendar
      - notes
      - web-search
helm upgrade kantai kantai/kantai-oss -n kantai -f values.yaml
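If you would rather not keep the provider key in plaintext in values.yaml, you can create it as a Kubernetes Secret yourself. Whether the chart can reference an existing secret (for example via a hypothetical apiKeySecretRef value) depends on the chart, so check its values reference before relying on this:

kubectl create secret generic sencho-llm-key \
  --from-literal=apiKey='<your-provider-key>' \
  -n kantai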
Step 7 — Connect Messaging Channels
Kantai agents can communicate over multiple channels simultaneously. Each channel is configured under the channels block in your Helm values; a combined sketch follows the per-channel steps below.
Telegram
- Create a bot via @BotFather
- Add the bot token to your Helm values under channels.telegram.token
- Start chatting with your bot — Sencho responds
Discord
- Create a Discord application and bot at discord.dev
- Add the bot token and guild ID to channels.discord
- Invite the bot to your server — agents join configured channels
Signal
- Register a Signal number using signal-cli or the linked device flow
- Configure channels.signal with your credentials
- Agents respond to DMs and group messages
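Pulling the three channels together, here is a sketch of the corresponding Helm values. Only channels.telegram.token, channels.discord, and channels.signal are named above; the individual Discord and Signal field names are assumptions, so check the chart's values reference:

channels:
  telegram:
    token: "<telegram-bot-token>"
  discord:
    token: "<discord-bot-token>"     # field name assumed
    guildId: "<discord-guild-id>"    # field name assumed
  signal:
    number: "+15551234567"           # field names assumed; use the number registered via signal-cli

helm upgrade kantai kantai/kantai-oss -n kantai -f values.yaml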
You can also use the Gangway Chat tab for a built-in web interface — no external setup needed.
Troubleshooting
Pods stuck in Pending
Check node resources: kubectl describe pod <name> -n kantai. Common cause: insufficient CPU/memory. Minimum 2 CPU / 4GB per agent pod.
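To see whether a node actually has room for the 2 CPU / 4GB request, compare its Allocatable capacity against what is already allocated:

kubectl get nodes
kubectl describe node <node-name>    # see the "Allocatable" and "Allocated resources" sections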
Agent not responding
Verify the LLM API key is correct: kubectl logs deploy/sencho -n kantai. Look for authentication errors. Check network policies allow egress to your model provider.
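If your cluster enforces NetworkPolicies, the agent pods need egress to the provider's HTTPS endpoint and to DNS. A minimal sketch; the app=sencho pod label is an assumption, so match it to your actual pod labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sencho-allow-egress
  namespace: kantai
spec:
  podSelector:
    matchLabels:
      app: sencho          # label assumed; check your pod labels
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443        # HTTPS to the model provider
    - ports:
        - protocol: UDP
          port: 53         # DNS
        - protocol: TCP
          port: 53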
Gangway 502 errors
The portal pod may still be starting. Wait 60s and retry. Check kubectl logs deploy/gangway -n kantai for port binding issues.
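Two quick checks that usually narrow a 502 down: whether the rollout has finished, and whether the Service has endpoints behind it. The deployment name matches the one used in Step 4:

kubectl rollout status deploy/gangway -n kantai
kubectl get endpoints -n kantai      # the Gangway service should list at least one pod IP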
Database connection failures
Ensure the PostgreSQL pod is running: kubectl get pods -n kantai -l app=postgresql. Check PVC binding if using persistent storage.
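To check PVC binding, list the claims and look at the events of any that are not Bound:

kubectl get pvc -n kantai
kubectl describe pvc <pvc-name> -n kantai    # Events show provisioning or binding errors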
Need more help? Check the Docs for full configuration reference, or visit Community for support.
Want managed hosting? pemos.ca handles infrastructure, upgrades, and support so you can focus on your agents.