# Kubernetes Cluster Discovery
Tripl-i automatically discovers and maps Kubernetes cluster infrastructure, providing complete visibility into containerized environments alongside your existing server, VM, and cloud inventory.
## What Gets Discovered
The Kubernetes scanner connects to the K8s API server and collects:
| Resource | What We Collect | CI Created |
|---|---|---|
| Cluster | Version, platform, node count, namespace count, API endpoint | Kubernetes Cluster |
| Namespaces | Name, status, labels, environment classification (dev/qa/staging/production) | Kubernetes Namespace |
| Nodes | IP, CPU, memory, pod capacity, kubelet version, container runtime, roles, conditions, taints | Enriches existing Server CI |
| Workloads | Deployments, StatefulSets, DaemonSets — replicas (desired/ready), container images, resource requests/limits, labels, health status | Kubernetes Workload |
| Services | ClusterIP, NodePort, LoadBalancer — ports, selectors, external IPs | Kubernetes Service |
| Ingresses | Hostnames, path rules, TLS config, ingress class, backend services | Kubernetes Ingress |
| Pods | Pod IPs mapped to owning workloads (used for dependency mapping, not stored as CIs) | — |
Pods are not tracked as CIs — they are ephemeral. The Deployment/StatefulSet is the stable identity, equivalent to a "server" in traditional infrastructure.
## Relationships

### Infrastructure Hierarchy

These relationships map how Kubernetes objects are organized:
```
Kubernetes Cluster
└── contains → Kubernetes Namespace
    ├── contains → Kubernetes Workload (Deployment, StatefulSet, DaemonSet)
    ├── contains → Kubernetes Service
    └── contains → Kubernetes Ingress
```
### Traffic Routing
These relationships map how traffic flows from external users to workloads:
```
Kubernetes Ingress ──routes_to──→ Kubernetes Service ──Serves──→ Kubernetes Workload
```
For example: app.example.com (Ingress) routes to frontend-svc (Service) which serves frontend (Deployment).
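The Serves link in this chain comes from Kubernetes label-selector matching: a Service serves a workload when every key/value pair in the Service's selector also appears in the workload's pod labels. A minimal Python sketch of that rule (the service and label names are illustrative, not taken from a real cluster):

```python
def service_serves_workload(service_selector: dict, pod_labels: dict) -> bool:
    """A Service serves a workload when the Service's selector is a
    subset of the workload's pod labels."""
    return all(pod_labels.get(k) == v for k, v in service_selector.items())

# Hypothetical objects mirroring the example above
frontend_svc_selector = {"app": "frontend"}
frontend_pod_labels = {"app": "frontend", "tier": "web"}
backend_pod_labels = {"app": "backend"}

print(service_serves_workload(frontend_svc_selector, frontend_pod_labels))  # True
print(service_serves_workload(frontend_svc_selector, backend_pod_labels))   # False
```

Extra labels on the pods do not matter; only the selector's keys are checked, which is the same subset semantics the Kubernetes API uses for equality-based selectors.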
### Node Mapping
Kubernetes nodes are matched to existing Server CIs discovered by WMI, SSH, or vCenter — no duplicates created:
| Match Method | How It Works |
|---|---|
| System UUID | Node UUID matched to VM serial number (VMware byte-swap handled) |
| Provider ID | AWS instance ID extracted from node's cloud reference |
| IP Address | Node internal IP matched to server IP |
| Hostname | Node name matched to server name (fallback) |
Once matched, the Server CI is enriched with Kubernetes metadata (kubelet version, pod capacity, node role, conditions) and linked to the cluster:
```
Kubernetes Cluster ──contains──→ Server (node)
```
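The "VMware byte-swap" in the match table refers to VMware reporting BIOS serial numbers with the first three UUID fields in little-endian byte order, so a node's system UUID and the VM's serial number differ even though they identify the same machine. A rough Python sketch of normalizing both forms for comparison (a simplification; the scanner's actual matching logic is not shown in this document):

```python
def byteswap_uuid(u: str) -> str:
    """Swap the first three UUID fields (4, 2, and 2 bytes) between
    big- and little-endian byte order, as VMware does for serial numbers."""
    b = bytes.fromhex(u.replace("-", ""))
    swapped = b[0:4][::-1] + b[4:6][::-1] + b[6:8][::-1] + b[8:]
    h = swapped.hex()
    return f"{h[0:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"

def uuids_match(node_uuid: str, vm_serial: str) -> bool:
    """Match a node's system UUID against a VM serial number,
    trying both the direct and the byte-swapped form."""
    a, b = node_uuid.lower(), vm_serial.lower()
    return a == b or byteswap_uuid(a) == b

print(byteswap_uuid("12345678-1234-5678-1234-567812345678"))
# 78563412-3412-7856-1234-567812345678
```

Applying the swap twice returns the original UUID, so the comparison works regardless of which side reported the little-endian form.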
### Workload Dependencies (Cross-Namespace & Cross-Cluster)
When Kubernetes nodes are also scanned via SSH, Tripl-i cross-references the node's network connection table with the Kubernetes pod IP registry. This resolves raw IP connections into named workload relationships:
Before (SSH scan only):

```
Server (prd-k8sm-n110) ──Connected To──→ UnknownDevice (10.42.0.197:2020)
```

After (SSH + Kubernetes scan combined):

```
Workload (cilium-agent) ──Connected To──→ Workload (fluent-bit)               (cross-namespace)
Workload (api-service)  ──Connected To──→ Workload (postgres)                 (cross-namespace)
Workload (api-service)  ──Connected To──→ Server (db-server-01, SQL Server)   (cross-boundary: K8s → traditional)
```
This answers questions like:
- Which workloads communicate with each other across namespaces?
- Which workloads depend on infrastructure outside the cluster (databases, legacy servers)?
- What is the blast radius if a workload or node goes down?
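The resolution step described above amounts to a cascade of lookups: each remote IP in a node's connection table is first tried against the pod IP registry, then against server IPs known from other scanners. A simplified Python sketch (the registry contents and names are made up for illustration):

```python
# Hypothetical registries built from the Kubernetes scan and other scanners
pod_ip_registry = {
    "10.42.0.197": ("fluent-bit", "logging"),   # pod IP -> (workload, namespace)
    "10.42.1.15": ("postgres", "db"),
}
server_ip_registry = {"192.168.1.50": "db-server-01"}  # IP -> Server CI name

def resolve_endpoint(ip: str) -> str:
    """Resolve a raw connection-table IP to a named workload or server.
    Falls back to an UnknownDevice placeholder when nothing matches."""
    if ip in pod_ip_registry:
        workload, ns = pod_ip_registry[ip]
        return f"Workload ({ns}/{workload})"
    if ip in server_ip_registry:
        return f"Server ({server_ip_registry[ip]})"
    return f"UnknownDevice ({ip})"

print(resolve_endpoint("10.42.0.197"))   # Workload (logging/fluent-bit)
print(resolve_endpoint("192.168.1.50"))  # Server (db-server-01)
print(resolve_endpoint("8.8.8.8"))       # UnknownDevice (8.8.8.8)
```

Resolving both the local and the remote endpoint of each connection is what turns a raw IP pair into a Workload-to-Workload or Workload-to-Server relationship.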
### Service-to-Service Communication
When workload-level dependencies are discovered, Tripl-i automatically derives Service-to-Service relationships by walking the chain:

```
Service A ──Serves──→ Workload A ──Connected To──→ Workload B ←──Serves── Service B
```

This creates a direct shortcut:

```
Service A ──Connected To──→ Service B
```
For example: apinizer-portal-service connects to elasticsearch-es-http — derived from the fact that the portal workload communicates with the Elasticsearch workload.
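The chain walk above can be sketched as a join over two edge sets. The names and data shapes here are illustrative, not the actual data model:

```python
# serves: service -> workload it fronts; connected: workload-to-workload edges
serves = {"svc-a": "wl-a", "svc-b": "wl-b"}
connected = {("wl-a", "wl-b")}

# Invert serves so each workload maps back to the services fronting it
served_by: dict[str, list[str]] = {}
for svc, wl in serves.items():
    served_by.setdefault(wl, []).append(svc)

# For every workload edge, emit an edge between each fronting service pair
service_edges = {
    (sa, sb)
    for (wa, wb) in connected
    for sa in served_by.get(wa, [])
    for sb in served_by.get(wb, [])
}
print(service_edges)  # {('svc-a', 'svc-b')}
```

Workloads with no fronting Service simply contribute no derived edges, which is why the shortcut only appears where both ends are served.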
### Namespace-to-Namespace Communication
Tripl-i also derives Namespace-to-Namespace relationships by aggregating cross-namespace workload connections:

```
Namespace A → contains → Workload A → Connected To → Workload B ← contains ← Namespace B
```

This creates a high-level view:

```
Namespace A ──Connected To──→ Namespace B
```
For example: namespace tetragon connects to namespace epys — derived from the fact that tetragon/tetragon workload communicates with epys/worker workload.
This namespace-level view answers:
- Which namespaces communicate with each other?
- What is the blast radius if an entire namespace goes down?
- Which namespaces are isolated vs highly connected?
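The aggregation described above projects each endpoint of a workload edge onto its namespace and keeps only cross-namespace pairs. An illustrative Python sketch (edge contents are made up, echoing the tetragon/epys example):

```python
# Workload edges keyed by (namespace, workload) on each end
workload_edges = {
    (("tetragon", "tetragon"), ("epys", "worker")),
    (("epys", "worker"), ("epys", "api")),  # same namespace: not aggregated
}

# Keep only edges whose endpoints live in different namespaces,
# collapsing them to a namespace pair
namespace_edges = {
    (src_ns, dst_ns)
    for (src_ns, _), (dst_ns, _) in workload_edges
    if src_ns != dst_ns
}
print(namespace_edges)  # {('tetragon', 'epys')}
```

Because the result is a set, many workload-level connections between the same two namespaces collapse into a single namespace edge.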
### Complete Relationship Summary
| Relationship | Source | Target | Data Source |
|---|---|---|---|
| contains | Cluster | Namespace | K8s API |
| contains | Namespace | Workload, Service, Ingress | K8s API |
| contains | Cluster | Server (node) | K8s API + cross-scanner matching |
| Serves | Service | Workload | K8s API (label selector matching) |
| routes_to | Ingress | Service | K8s API (backend config) |
| runs_on | Workload | Server (node) | K8s API (pod scheduling) |
| Connected To | Workload | Workload | SSH scan + pod IP cross-reference |
| Connected To | Service | Service | Derived from Workload connectivity + Serves |
| Connected To | Workload | Server (external) | SSH scan + pod IP cross-reference |
| Connected To | Namespace | Namespace | Derived from cross-namespace Workload connectivity |
## Workload Health Status
Each workload CI includes a health status derived from replica counts:
| Status | Condition | Example |
|---|---|---|
| Healthy | All desired replicas are ready | 3/3 replicas ready |
| Degraded | Some replicas are ready, but fewer than desired | 2/3 replicas ready |
| Failed | No replicas are ready | 0/3 replicas ready |
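The rules in the table reduce to a comparison of ready versus desired replica counts. A direct Python sketch of that derivation:

```python
def workload_health(ready: int, desired: int) -> str:
    """Derive workload health from replica counts, per the table above:
    all ready -> Healthy, some ready -> Degraded, none ready -> Failed."""
    if desired > 0 and ready >= desired:
        return "Healthy"
    if ready > 0:
        return "Degraded"
    return "Failed"

print(workload_health(3, 3))  # Healthy
print(workload_health(2, 3))  # Degraded
print(workload_health(0, 3))  # Failed
```

How a workload scaled to zero desired replicas is classified is not stated in this document; the sketch above reports it as Failed, which may differ from the scanner's behavior.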
## How It Works

### Scanning
- The scanner agent detects port 6443 open on a target IP
- Authenticates via bearer token, kubeconfig file, or in-cluster config (tried in order)
- Collects cluster info, nodes, namespaces, workloads, services, ingresses, and pod IPs via read-only K8s API calls
- Sends scan data to the Tripl-i backend for processing
### Processing
- Cluster CI created with version, platform, API endpoint
- Namespace CIs created with environment auto-detection
- Nodes matched to existing Server CIs (UUID, provider ID, IP, hostname cascade) and enriched with K8s metadata
- Workload CIs created with replica counts, images, health status; matched to services via label selectors
- Service CIs created with type, ports, selectors
- Ingress CIs created with routing rules and TLS config
- Hierarchical relationships built: Cluster, Namespace, Workload, Service, Ingress (contains)
- Traffic relationships built: Ingress to Service to Workload (Serves/routes_to)
- Pod IP registry populated for cross-referencing with SSH network data
### Dependency Cross-Referencing
When a Kubernetes node is also scanned via SSH:
- SSH scan collects the node's full TCP connection table (same as any Linux server)
- Pod IPs in the connection table are resolved to their owning Workload CIs via the pod IP registry
- Both the local and remote pod IPs are resolved — creating Workload-to-Workload relationships
- Connections to IPs outside the cluster are matched to existing Server/Database CIs from WMI/SSH/vCenter scans
Prerequisite: Kubernetes nodes must also be scanned via SSH for dependency mapping to work.
## Authentication Setup
Create a read-only service account with the minimum required permissions:
```bash
# Create service account
kubectl create serviceaccount tripli-scanner -n kube-system

# Create custom ClusterRole (includes cluster-scoped resources)
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tripli-scanner-role
rules:
- apiGroups: [""]
  resources: ["nodes", "namespaces", "pods", "services"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["get", "list"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list"]
EOF

# Bind the role
kubectl create clusterrolebinding tripli-scanner \
  --clusterrole=tripli-scanner-role \
  --serviceaccount=kube-system:tripli-scanner

# Generate a long-lived token (1 year)
kubectl create token tripli-scanner -n kube-system --duration=8760h
```
The built-in `view` ClusterRole only covers namespaced resources. The custom ClusterRole above also includes `nodes` and `namespaces` (cluster-scoped), which are required for full discovery.
Note: The `replicasets` resource is not required in the ClusterRole. The scanner resolves pod ownership via `pod.metadata.ownerReferences` (which is always available), not by querying ReplicaSet objects directly.
## Configuring the Scanner
1. In the Tripl-i Scanner GUI, go to the Credentials tab
2. Select protocol: Kubernetes
3. Set IP Range to your API server IP
4. Paste the Bearer Token from the setup step above
5. Set Server URL (e.g., `https://10.0.1.100:6443`)
6. Uncheck Verify SSL if using self-signed certificates
7. Click Save
## Security
- Read-only — The scanner never creates, modifies, or deletes any Kubernetes resources
- Minimal permissions — Only `get` and `list` on the specific resource types needed
- No secrets accessed — ConfigMaps, Secrets, and pod logs are never read
- Token rotation — Use the `--duration` flag to create time-limited tokens
- Pod IPs only — Pod data collected is limited to IPs and owner references for dependency mapping; no container content is accessed
## Performance
| Cluster Size | Scan Duration |
|---|---|
| Small (fewer than 10 nodes) | 15-30 seconds |
| Medium (10-50 nodes) | 30-60 seconds |
| Large (50+ nodes) | 60-120 seconds |
## Troubleshooting
| Symptom | Resolution |
|---|---|
| Port 6443 not open | Verify network connectivity and firewall rules |
| All credentials failed | Regenerate the service account token |
| Forbidden errors | Verify ClusterRoleBinding is configured correctly |
| Certificate verify failed | Uncheck "Verify SSL" in credential configuration |
| No nodes or namespaces | Use the custom ClusterRole above (not the built-in view role) |
| No dependency relationships | Ensure K8s nodes are also scanned via SSH |