# Configuration

Customize your k8s-provisioner cluster.
k8s-provisioner uses two configuration files:
## config.yaml

Main configuration file for Kubernetes and its components.
```yaml
cluster:
  name: "k8s-lab"
  pod_cidr: "10.244.0.0/16"
  service_cidr: "10.96.0.0/12"

versions:
  kubernetes: "1.32"
  crio: "v1.32"
  calico: "3.28.0"
  metallb: "0.14.8"
  istio: "1.28.2"
  karpor: "0.7.6"

network:
  interface: "eth1"
  controlplane_ip: "192.168.56.10"
  metallb_range: "192.168.56.200-192.168.56.250"

storage:
  nfs_server: "storage"
  nfs_path: "/exports/k8s-volumes"
  default_dynamic: true  # nfs-dynamic as default StorageClass

nodes:
  - name: "storage"
    role: "storage"
  - name: "controlplane"
    role: "controlplane"
  - name: "node01"
    role: "worker"
  - name: "node02"
    role: "worker"

components:
  cni: "calico"
  load_balancer: "metallb"
  service_mesh: "istio"
  monitoring: "prometheus-stack"  # Options: prometheus-stack, none
  logging: "loki"                 # Options: loki, none
  karpor: "enabled"               # Options: enabled, none

# Karpor AI configuration (optional)
karpor_ai:
  enabled: true
  backend: "ollama"     # Options: openai, azureopenai, huggingface, ollama
  model: "llama3.2:3b"  # Local model (or minimax-m2.5:cloud for cloud)
  auth_token: ""        # API token (not needed for ollama local)
  base_url: ""          # Custom endpoint (leave empty for default)

# Ollama cloud API key (only for :cloud models)
ollama:
  api_key: ""  # Get from https://ollama.com/settings/keys
```
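Before provisioning, it can be worth sanity-checking that the network values above are mutually consistent. The sketch below is a hypothetical helper (not part of k8s-provisioner), with the values from `config.yaml` inlined and the `192.168.56.0/24` host-only subnet assumed:

```python
import ipaddress

# Values from config.yaml
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
service_cidr = ipaddress.ip_network("10.96.0.0/12")
node_subnet = ipaddress.ip_network("192.168.56.0/24")  # assumed host-only subnet
lb_start = ipaddress.ip_address("192.168.56.200")
lb_end = ipaddress.ip_address("192.168.56.250")
controlplane_ip = ipaddress.ip_address("192.168.56.10")

# Pod and Service CIDRs must not overlap
assert not pod_cidr.overlaps(service_cidr)

# The MetalLB pool must sit inside the node subnet
assert lb_start in node_subnet and lb_end in node_subnet

# The control-plane IP must not fall inside the MetalLB pool
assert not (lb_start <= controlplane_ip <= lb_end)

print("network config OK")
```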
## Configuration Options

### components.karpor

| Value | Description |
|---|---|
| `enabled` | Install Karpor with AI features |
| `none` | Skip Karpor installation |
### karpor_ai.enabled

| Value | Description |
|---|---|
| `true` | Enable AI features with Ollama |
| `false` | Disable AI (Ollama is not installed) |
### karpor_ai.model

Local models (run inside the cluster):

| Model | RAM Required | Quality |
|---|---|---|
| `llama3.2:1b` | ~2 GB | Basic |
| `llama3.2:3b` | ~4 GB | Good (default) |
| `qwen2.5-coder:7b` | ~8 GB | Excellent |
| `llama3.1:8b` | ~10 GB | Excellent |

Cloud models (require an API key):

| Model | Description |
|---|---|
| `minimax-m2.5:cloud` | Top performer |
| `qwen3-coder:480b-cloud` | Excellent for code |
| `glm-4.7:cloud` | Good general purpose |
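Whether a local model fits on a worker node can be estimated from the RAM column above. The helper below is a hypothetical back-of-the-envelope check (the 2 GB headroom figure is an assumption, not a k8s-provisioner value), using the approximate requirements from the table:

```python
# Approximate RAM requirements (GB) from the local-models table above
MODEL_RAM_GB = {
    "llama3.2:1b": 2,
    "llama3.2:3b": 4,
    "qwen2.5-coder:7b": 8,
    "llama3.1:8b": 10,
}

def fits(model: str, node_memory_mb: int, headroom_gb: int = 2) -> bool:
    """Rough check: model RAM plus assumed headroom for kubelet/system pods."""
    need_gb = MODEL_RAM_GB[model] + headroom_gb
    return node_memory_mb / 1024 >= need_gb

# node01 has 8192 MB: enough for the default llama3.2:3b, not for llama3.1:8b
print(fits("llama3.2:3b", 8192))  # True
print(fits("llama3.1:8b", 8192))  # False
```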
### storage.default_dynamic

| Value | Description |
|---|---|
| `true` | `nfs-dynamic` is the default StorageClass |
| `false` | No default StorageClass is set |
## vagrant/settings.yaml

VM configuration for VirtualBox.
```yaml
box_name: "bento/debian-12"

vm:
  - name: "storage"
    ip: "192.168.56.20"
    memory: "1024"
    cpus: "1"
    role: "storage"
  - name: "controlplane"
    ip: "192.168.56.10"
    memory: "6144"  # Extra for monitoring stack
    cpus: "4"
    role: "controlplane"
  - name: "node01"
    ip: "192.168.56.11"
    memory: "8192"  # Extra for AI workloads (Ollama)
    cpus: "2"
    role: "worker"
  - name: "node02"
    ip: "192.168.56.12"
    memory: "4096"
    cpus: "2"
    role: "worker"
```
### VM Resource Allocation

| VM | Memory | CPUs | Purpose |
|---|---|---|---|
| Storage | 1 GB | 1 | NFS server |
| ControlPlane | 6 GB | 4 | K8s control plane + monitoring |
| Node01 | 8 GB | 2 | Worker + AI workloads |
| Node02 | 4 GB | 2 | Worker |
| **Total** | **19 GB** | **9** | |
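The totals in the table can be cross-checked against `vagrant/settings.yaml` — a throwaway sketch with the `vm` entries inlined as (memory MB, CPUs):

```python
# The vm list from vagrant/settings.yaml, inlined as (memory MB, cpus)
vms = {
    "storage":      (1024, 1),
    "controlplane": (6144, 4),
    "node01":       (8192, 2),
    "node02":       (4096, 2),
}

total_mb = sum(mem for mem, _ in vms.values())
total_cpus = sum(cpus for _, cpus in vms.values())
print(f"{total_mb // 1024} GB, {total_cpus} CPUs")  # 19 GB, 9 CPUs
```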
**Note:** Node01 has extra memory for Ollama AI workloads. If you are not using Karpor AI, you can reduce it to 4 GB.
## Minimal Configuration (Without Karpor)

If you have limited resources, disable Karpor and its AI features:
```yaml
components:
  karpor: "none"

karpor_ai:
  enabled: false
```
And reduce VM resources in vagrant/settings.yaml:
```yaml
- name: "controlplane"
  memory: "4096"
  cpus: "2"
- name: "node01"
  memory: "4096"
  cpus: "2"
```