Kubernetes Cluster Discovery

Tripl-i automatically discovers and maps Kubernetes cluster infrastructure, providing complete visibility into containerized environments alongside your existing server, VM, and cloud inventory.

What Gets Discovered

The Kubernetes scanner connects to the K8s API server and collects:

| Resource | What We Collect | CI Created |
|---|---|---|
| Cluster | Version, platform, node count, namespace count, API endpoint | Kubernetes Cluster |
| Namespaces | Name, status, labels, environment classification (dev/qa/staging/production) | Kubernetes Namespace |
| Nodes | IP, CPU, memory, pod capacity, kubelet version, container runtime, roles, conditions, taints | Enriches existing Server CI |
| Workloads | Deployments, StatefulSets, DaemonSets — replicas (desired/ready), container images, resource requests/limits, labels, health status | Kubernetes Workload |
| Services | ClusterIP, NodePort, LoadBalancer — ports, selectors, external IPs | Kubernetes Service |
| Ingresses | Hostnames, path rules, TLS config, ingress class, backend services | Kubernetes Ingress |
| Pods | Pod IPs mapped to owning workloads (used for dependency mapping) | None (not stored as CIs) |

Pods are not tracked as CIs — they are ephemeral. The Deployment/StatefulSet is the stable identity, equivalent to a "server" in traditional infrastructure.

Relationships

Infrastructure Hierarchy

These relationships map how Kubernetes objects are organized:

Kubernetes Cluster
└── contains → Kubernetes Namespace
    ├── contains → Kubernetes Workload (Deployment, StatefulSet, DaemonSet)
    ├── contains → Kubernetes Service
    └── contains → Kubernetes Ingress

Traffic Routing

These relationships map how traffic flows from external users to workloads:

Kubernetes Ingress ──routes_to──→ Kubernetes Service ──Serves──→ Kubernetes Workload

For example: app.example.com (Ingress) routes to frontend-svc (Service) which serves frontend (Deployment).
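This routing walk can be sketched as a simple two-step lookup. The data structures below (plain dicts keyed by hostname and service name) are hypothetical and only illustrate the join, not the product's internal API:

```python
def workloads_behind_host(host, ingress_routes, serves):
    """Resolve an external hostname to the workloads that ultimately
    receive its traffic: hostname --routes_to--> service --Serves--> workloads.

    ingress_routes: hostname -> backend service name (from Ingress rules)
    serves:         service name -> list of workload names (from selectors)
    """
    service = ingress_routes.get(host)
    return serves.get(service, []) if service else []


routes = {"app.example.com": "frontend-svc"}
serving = {"frontend-svc": ["frontend"]}
print(workloads_behind_host("app.example.com", routes, serving))  # ['frontend']
```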

Node Mapping

Kubernetes nodes are matched to existing Server CIs discovered by WMI, SSH, or vCenter — no duplicates created:

| Match Method | How It Works |
|---|---|
| System UUID | Node UUID matched to VM serial number (VMware byte-swap handled) |
| Provider ID | AWS instance ID extracted from the node's cloud provider reference |
| IP Address | Node internal IP matched to server IP |
| Hostname | Node name matched to server name (fallback) |

Once matched, the Server CI is enriched with Kubernetes metadata (kubelet version, pod capacity, node role, conditions) and linked to the cluster:

Kubernetes Cluster ──contains──→ Server (node)
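The VMware byte-swap mentioned above arises because SMBIOS reports the first three UUID fields in little-endian order, so the same machine can present two different UUID strings. A minimal sketch of order-insensitive matching, assuming Python's standard `uuid` module (the function names are illustrative, not the product's API):

```python
import uuid


def uuid_byte_forms(value: str) -> set:
    """Return the raw bytes of a UUID in both big-endian order (.bytes)
    and mixed-endian order with the first three fields swapped (.bytes_le)."""
    u = uuid.UUID(value.strip().lower())
    return {u.bytes, u.bytes_le}


def node_matches_vm(node_uuid: str, vm_serial: str) -> bool:
    """Two identifiers match when any byte ordering coincides,
    which absorbs the VMware byte-swap."""
    return bool(uuid_byte_forms(node_uuid) & uuid_byte_forms(vm_serial))
```

For example, `12345678-1234-5678-...` and its byte-swapped form `78563412-3412-7856-...` match, while two unrelated UUIDs do not.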

Workload Dependencies (Cross-Namespace & Cross-Cluster)

When Kubernetes nodes are also scanned via SSH, Tripl-i cross-references the node's network connection table with the Kubernetes pod IP registry. This resolves raw IP connections into named workload relationships:

Before (SSH scan only):

Server (prd-k8sm-n110) ──Connected To──→ UnknownDevice (10.42.0.197:2020)

After (SSH + Kubernetes scan combined):

Workload (cilium-agent) ──Connected To──→ Workload (fluent-bit)              cross-namespace
Workload (api-service)  ──Connected To──→ Workload (postgres)                cross-namespace
Workload (api-service)  ──Connected To──→ Server (db-server-01, SQL Server)  cross-boundary (K8s → traditional)

This answers questions like:

  • Which workloads communicate with each other across namespaces?
  • Which workloads depend on infrastructure outside the cluster (databases, legacy servers)?
  • What is the blast radius if a workload or node goes down?

Service-to-Service Communication

When workload-level dependencies are discovered, Tripl-i automatically derives Service-to-Service relationships by walking the chain:

Service A ──Serves──→ Workload A ──Connected To──→ Workload B ←──Serves── Service B

This creates a direct shortcut:

Service A ──Connected To──→ Service B

For example: apinizer-portal-service connects to elasticsearch-es-http — derived from the fact that the portal workload communicates with the Elasticsearch workload.
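A sketch of this derivation, assuming simple in-memory edge lists (the structures and function name are illustrative, not the product's implementation): invert the Serves edges to map each workload to its services, then project every workload-to-workload connection onto the service layer.

```python
def derive_service_edges(serves, workload_edges):
    """Derive Service-to-Service 'Connected To' shortcuts.

    serves:         list of (service, workload) Serves relationships
    workload_edges: list of (workload_a, workload_b) Connected To pairs
    """
    # Invert Serves: workload -> set of services selecting it
    by_workload = {}
    for service, workload in serves:
        by_workload.setdefault(workload, set()).add(service)

    edges = set()
    for a, b in workload_edges:
        for sa in by_workload.get(a, ()):
            for sb in by_workload.get(b, ()):
                if sa != sb:  # skip self-loops when one service serves both
                    edges.add((sa, sb))
    return edges
```

With the example above, `[("apinizer-portal-service", "portal"), ("elasticsearch-es-http", "elasticsearch")]` plus the workload edge `("portal", "elasticsearch")` yields the shortcut `("apinizer-portal-service", "elasticsearch-es-http")`.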

Namespace-to-Namespace Communication

Tripl-i also derives Namespace-to-Namespace relationships by aggregating cross-namespace workload connections:

Namespace A ──contains──→ Workload A ──Connected To──→ Workload B ←──contains── Namespace B

This creates a high-level view:

Namespace A ──Connected To──→ Namespace B

For example: namespace tetragon connects to namespace epys — derived from the fact that tetragon/tetragon workload communicates with epys/worker workload.

This namespace-level view answers:

  • Which namespaces communicate with each other?
  • What is the blast radius if an entire namespace goes down?
  • Which namespaces are isolated vs highly connected?
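The aggregation described above can be sketched in a few lines; the dict shapes here are assumptions for illustration, not the product's data model. Each workload edge is lifted to the owning namespaces, and same-namespace traffic is filtered out:

```python
def derive_namespace_edges(workload_namespace, workload_edges):
    """Aggregate cross-namespace workload connections into
    Namespace-to-Namespace 'Connected To' relationships.

    workload_namespace: workload name -> owning namespace (from contains)
    workload_edges:     (workload_a, workload_b) Connected To pairs
    """
    edges = set()
    for a, b in workload_edges:
        ns_a, ns_b = workload_namespace.get(a), workload_namespace.get(b)
        if ns_a and ns_b and ns_a != ns_b:  # intra-namespace traffic is not promoted
            edges.add((ns_a, ns_b))
    return edges
```

With the example above, the `tetragon/tetragon → epys/worker` workload edge yields the namespace edge `("tetragon", "epys")`.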

Complete Relationship Summary

| Relationship | Source | Target | Data Source |
|---|---|---|---|
| contains | Cluster | Namespace | K8s API |
| contains | Namespace | Workload, Service, Ingress | K8s API |
| contains | Cluster | Server (node) | K8s API + cross-scanner matching |
| Serves | Service | Workload | K8s API (label selector matching) |
| routes_to | Ingress | Service | K8s API (backend config) |
| runs_on | Workload | Server (node) | K8s API (pod scheduling) |
| Connected To | Workload | Workload | SSH scan + pod IP cross-reference |
| Connected To | Service | Service | Derived from Workload connectivity + Serves |
| Connected To | Workload | Server (external) | SSH scan + pod IP cross-reference |
| Connected To | Namespace | Namespace | Derived from cross-namespace Workload connectivity |

Workload Health Status

Each workload CI includes a health status derived from replica counts:

| Status | Condition | Example |
|---|---|---|
| Healthy | All desired replicas are ready | 3/3 replicas ready |
| Degraded | Some replicas are ready, but fewer than desired | 2/3 replicas ready |
| Failed | No replicas are ready | 0/3 replicas ready |
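The classification reduces to a comparison of the two replica counts. A minimal sketch (the zero-desired case is not covered by the table, so treating a scaled-down workload as Healthy is an assumption):

```python
def workload_health(desired: int, ready: int) -> str:
    """Classify workload health from replica counts, per the table above."""
    if desired == 0 or ready >= desired:  # zero-desired handling is an assumption
        return "Healthy"
    if ready > 0:
        return "Degraded"
    return "Failed"
```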

How It Works

Scanning

  1. The scanner agent detects port 6443 open on a target IP
  2. Authenticates via bearer token, kubeconfig file, or in-cluster config (tried in order)
  3. Collects cluster info, nodes, namespaces, workloads, services, ingresses, and pod IPs via read-only K8s API calls
  4. Sends scan data to the Tripl-i backend for processing

Processing

  1. Cluster CI created with version, platform, API endpoint
  2. Namespace CIs created with environment auto-detection
  3. Nodes matched to existing Server CIs (UUID, provider ID, IP, hostname cascade) and enriched with K8s metadata
  4. Workload CIs created with replica counts, images, health status; matched to services via label selectors
  5. Service CIs created with type, ports, selectors
  6. Ingress CIs created with routing rules and TLS config
  7. Hierarchical relationships built: Cluster, Namespace, Workload, Service, Ingress (contains)
  8. Traffic relationships built: Ingress to Service to Workload (Serves/routes_to)
  9. Pod IP registry populated for cross-referencing with SSH network data
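The environment auto-detection in step 2 presumably keys off the namespace name. The actual rules are not documented, so the substring heuristic below is purely illustrative:

```python
def classify_environment(namespace: str) -> str:
    """Guess an environment class from a namespace name.
    Hypothetical substring rules, checked from most to least specific."""
    name = namespace.lower()
    for env, hints in [
        ("production", ("prod", "prd")),
        ("staging", ("staging", "stage", "stg")),
        ("qa", ("qa", "test")),
        ("dev", ("dev",)),
    ]:
        if any(h in name for h in hints):
            return env
    return "unclassified"
```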

Dependency Cross-Referencing

When a Kubernetes node is also scanned via SSH:

  1. SSH scan collects the node's full TCP connection table (same as any Linux server)
  2. Pod IPs in the connection table are resolved to their owning Workload CIs via the pod IP registry
  3. Both the local and remote pod IPs are resolved — creating Workload-to-Workload relationships
  4. Connections to IPs outside the cluster are matched to existing Server/Database CIs from WMI/SSH/vCenter scans

Prerequisite: Kubernetes nodes must also be scanned via SSH for dependency mapping to work.
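The resolution steps above can be sketched as a lookup cascade: try the pod IP registry first, fall back to the known-server index, and leave the rest as unknown devices. All structure and field names here are hypothetical:

```python
def resolve_connections(conn_pairs, pod_registry, server_index):
    """Resolve raw (local_ip, remote_ip) pairs from an SSH scan into
    named relationships.

    pod_registry: pod IP -> ("Workload", name), from the K8s scan
    server_index: IP -> ("Server", name), from WMI/SSH/vCenter scans
    """
    edges = []
    for local_ip, remote_ip in conn_pairs:
        src = pod_registry.get(local_ip, ("Server", local_ip))
        dst = (pod_registry.get(remote_ip)
               or server_index.get(remote_ip)
               or ("UnknownDevice", remote_ip))  # outside both inventories
        edges.append((src, "Connected To", dst))
    return edges
```

Given a pod registry containing `10.42.0.197` and a server index containing a database host, the same raw connection table produces the named Workload-to-Workload and Workload-to-Server edges shown earlier.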

Authentication Setup

Create a read-only service account with the minimum required permissions:

```bash
# Create service account
kubectl create serviceaccount tripli-scanner -n kube-system

# Create custom ClusterRole (includes cluster-scoped resources)
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tripli-scanner-role
rules:
- apiGroups: [""]
  resources: ["nodes", "namespaces", "pods", "services"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["get", "list"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list"]
EOF

# Bind the role
kubectl create clusterrolebinding tripli-scanner \
  --clusterrole=tripli-scanner-role \
  --serviceaccount=kube-system:tripli-scanner

# Generate a long-lived token (1 year)
kubectl create token tripli-scanner -n kube-system --duration=8760h
```

The built-in view ClusterRole only covers namespaced resources. The custom ClusterRole above also includes nodes and namespaces (cluster-scoped) which are required for full discovery.

Note: The replicasets resource is not required in the ClusterRole. The scanner resolves pod ownership via pod.metadata.ownerReferences (which is always available), not by querying ReplicaSet objects directly.
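Ownership resolution from `ownerReferences` can be sketched as follows. For Deployment-managed pods the direct controller is a ReplicaSet named `<deployment>-<pod-template-hash>`, so stripping the trailing hash segment recovers the Deployment name; that heuristic, and the function name itself, are assumptions about the scanner's exact behavior:

```python
def owning_workload(pod: dict):
    """Resolve a pod dict (as returned by the K8s API) to its stable
    owning workload via metadata.ownerReferences."""
    refs = pod.get("metadata", {}).get("ownerReferences", [])
    owner = next((r for r in refs if r.get("controller")), None)
    if owner is None:
        return None  # bare pod with no controller
    kind, name = owner["kind"], owner["name"]
    if kind == "ReplicaSet":
        # Heuristic: drop the pod-template-hash to get the Deployment name
        return ("Deployment", name.rsplit("-", 1)[0])
    return (kind, name)  # StatefulSet/DaemonSet pods are owned directly
```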

Configuring the Scanner

  1. In the Tripl-i Scanner GUI, go to the Credentials tab
  2. Select protocol: Kubernetes
  3. Set IP Range to your API server IP
  4. Paste the Bearer Token from the setup step above
  5. Set Server URL (e.g., https://10.0.1.100:6443)
  6. Uncheck Verify SSL if using self-signed certificates
  7. Click Save

Security

  • Read-only — The scanner never creates, modifies, or deletes any Kubernetes resources
  • Minimal permissions — Only get and list on the specific resource types needed
  • No secrets accessed — ConfigMaps, Secrets, and pod logs are never read
  • Token rotation — Use --duration flag to create time-limited tokens
  • Pod IPs only — Pod data collected is limited to IPs and owner references for dependency mapping; no container content is accessed

Performance

| Cluster Size | Scan Duration |
|---|---|
| Small (fewer than 10 nodes) | 15-30 seconds |
| Medium (10-50 nodes) | 30-60 seconds |
| Large (50+ nodes) | 60-120 seconds |

Troubleshooting

| Symptom | Resolution |
|---|---|
| Port 6443 not open | Verify network connectivity and firewall rules |
| All credentials failed | Regenerate the service account token |
| Forbidden errors | Verify the ClusterRoleBinding is configured correctly |
| Certificate verify failed | Uncheck "Verify SSL" in the credential configuration |
| No nodes or namespaces | Use the custom ClusterRole above (not the built-in view role) |
| No dependency relationships | Ensure K8s nodes are also scanned via SSH |