Kubernetes Cluster Discovery Reference
This document provides a comprehensive reference for Kubernetes cluster discovery in Tripl-i. The Kubernetes scanner connects to the K8s API server to discover cluster topology, nodes, workloads, services, and ingress configurations — providing complete visibility into containerized infrastructure.
Overview
The Kubernetes scanner (kubernetes_scanner.py) discovers and maps Kubernetes cluster infrastructure via the standard Kubernetes API. It collects cluster metadata, node inventory, deployment and statefulset workloads, services, and ingress routing rules. The scanner follows the same opt-in pattern as vCenter — it is triggered when port 6443 is detected during a network scan.
Key Benefits
- Complete Cluster Visibility — Automatically discover all nodes, workloads, services, and ingresses across namespaces
- Node-to-Server Correlation — Kubernetes nodes are matched to existing Server CIs discovered via WMI or SSH, enriching them with K8s metadata
- Workload Inventory — Track every Deployment, StatefulSet, and DaemonSet with replica counts, container images, and resource requests
- Service Mapping — Map ClusterIP, NodePort, and LoadBalancer services to their backing workloads
- Ingress Routing — Discover external access paths including hostnames, TLS configuration, and backend routing
- Dependency Mapping — Automatically build relationships between cluster, nodes, workloads, and services
Network Ports and Protocols
Kubernetes API Communication
| Port | Protocol | Purpose |
|---|---|---|
| 6443 | TCP/HTTPS | Kubernetes API server (required) |
Discovery Trigger
The Kubernetes scanner is automatically triggered when:
- Port 6443 is detected as open during network scanning
- Kubernetes credentials are configured for the target IP range
- The `kubernetes` protocol is enabled in the scan configuration
Kubernetes scanning is opt-in (disabled by default). Enable it in the scan configuration when you have K8s clusters to discover. This prevents unnecessary scanning noise on non-Kubernetes hosts.
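The trigger logic above can be sketched as a small predicate. This is an illustrative sketch, not the scanner's actual code; the function and parameter names are assumptions:

```python
def should_scan_kubernetes(open_ports, has_k8s_credentials, enabled_protocols):
    """All three conditions must hold before the scanner runs:
    the API port is open, credentials exist for the target range,
    and the protocol is enabled in the scan configuration."""
    return (
        6443 in open_ports
        and has_k8s_credentials
        and "kubernetes" in enabled_protocols
    )
```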
Authentication Requirements
Credential Options
The scanner supports three authentication methods, tried in priority order:
| Method | When to Use | What to Provide |
|---|---|---|
| Bearer Token | Most common — service account token | Token string + API server URL |
| Kubeconfig File | When you have an existing kubeconfig | Path to kubeconfig file |
| In-Cluster Config | Agent running as a K8s pod | Nothing — automatic detection |
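The fallback order can be sketched as follows. This is illustrative only: the `loaders` callables stand in for the real configuration loaders (such as those in the official `kubernetes` Python package), and all names here are assumptions:

```python
def authenticate(bearer_token=None, kubeconfig_path=None,
                 try_in_cluster=True, loaders=None):
    """Try each authentication method in priority order and return
    the name of the first one that succeeds. `loaders` maps method
    names to callables that raise on failure (placeholders for e.g.
    kubernetes.config.load_kube_config / load_incluster_config)."""
    attempts = []
    if bearer_token:
        attempts.append("bearer_token")
    if kubeconfig_path:
        attempts.append("kubeconfig")
    if try_in_cluster:
        attempts.append("in_cluster")
    for method in attempts:
        try:
            loaders[method]()
            return method
        except Exception:
            continue  # fall through to the next configured method
    raise RuntimeError("All Kubernetes credentials failed")
```

The final error message matches the symptom listed in the Troubleshooting section below.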
Creating a Service Account Token
To create a read-only service account for discovery:
```shell
kubectl create serviceaccount nopesight-scanner -n kube-system
kubectl create clusterrolebinding nopesight-scanner \
  --clusterrole=view \
  --serviceaccount=kube-system:nopesight-scanner
kubectl create token nopesight-scanner -n kube-system --duration=8760h
```
The built-in `view` ClusterRole grants read-only access to the namespaced resources the scanner reads. Note that `view` does not include cluster-scoped resources such as nodes, so if node listing returns "Forbidden", grant an additional read-only ClusterRole covering nodes.
Required API Permissions
The scanner only performs read operations. It never creates, modifies, or deletes any Kubernetes resources. The following API calls are made:
| API Call | Purpose |
|---|---|
| `GET /version` | Cluster version information |
| `GET /api/v1/nodes` | Node inventory |
| `GET /api/v1/namespaces` | Namespace count |
| `GET /api/v1/pods` | Pod counts per node |
| `GET /api/v1/services` | Service discovery |
| `GET /apis/apps/v1/deployments` | Deployment workloads |
| `GET /apis/apps/v1/statefulsets` | StatefulSet workloads |
| `GET /apis/apps/v1/daemonsets` | DaemonSet workloads |
| `GET /apis/networking.k8s.io/v1/ingresses` | Ingress routing rules |
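As an alternative to the broad `view` role, a dedicated ClusterRole can be scoped to exactly the calls above, including the cluster-scoped node and namespace resources. A minimal sketch; the role name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nopesight-readonly   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes", "namespaces", "pods", "services"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["get", "list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list"]
```

The `/version` endpoint needs no extra rule; it is readable by any authenticated identity via the default discovery role.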
Configuring Credentials in the Scanner Agent
In the Nopesight Scanner GUI:
- Navigate to the Credentials tab
- Select protocol: Kubernetes
- Set IP Range to your API server IP (e.g., `10.0.1.100`)
- Fill in one of:
- Bearer Token — paste the service account token
- Kubeconfig — browse to the kubeconfig file path
- Set Server URL (e.g., `https://10.0.1.100:6443`) — optional if using kubeconfig
- Verify SSL — leave unchecked for self-signed certificates
- Click Save
Data Collection Overview
What Gets Discovered
```text
Kubernetes Cluster
|
+--- Cluster Info (version, platform, node/namespace counts)
|
+--- Nodes
|      +--- Addresses (InternalIP, Hostname)
|      +--- Capacity & Allocatable Resources (CPU, memory, pods)
|      +--- Conditions (Ready, MemoryPressure, DiskPressure)
|      +--- Labels & Roles (worker, control-plane)
|      +--- Node Info (kubelet version, OS image, container runtime, systemUUID)
|      +--- Pod Count
|      +--- Provider ID (AWS, Azure, GCP instance reference)
|
+--- Workloads
|      +--- Deployments (replicas, images, resources, strategy)
|      +--- StatefulSets (replicas, images, volume claims)
|      +--- DaemonSets (desired/ready counts)
|
+--- Services
|      +--- ClusterIP, NodePort, LoadBalancer types
|      +--- Ports, selectors, external IPs
|
+--- Ingresses
       +--- Rules (hostnames, paths, backends)
       +--- TLS configuration
       +--- Ingress class
```
Cluster Information
| Data Point | Description |
|---|---|
| Kubernetes version | API server version (e.g., "1.28.4") |
| Platform | Build platform (e.g., "linux/amd64") |
| Node count | Total nodes in cluster |
| Namespace count | Total namespaces |
| API server URL | Cluster endpoint |
| Git version | Full version string |
Node Details
For each node in the cluster:
| Data Point | Description |
|---|---|
| Name | Node hostname |
| Internal IP | Node IP address within the cluster network |
| Hostname | DNS hostname |
| CPU capacity | Total CPU cores available |
| Memory capacity | Total memory (e.g., "32Gi") |
| Pod capacity | Maximum pods schedulable |
| Allocatable resources | Resources available after system reservations |
| Ready condition | Whether the node is healthy |
| Memory/Disk/PID pressure | Resource pressure conditions |
| Roles | control-plane, worker, or custom roles |
| Kubelet version | Node agent version |
| Container runtime | Runtime engine (e.g., "containerd://1.7.8") |
| OS image | Operating system (e.g., "Ubuntu 22.04.3 LTS") |
| Kernel version | Linux kernel version |
| System UUID | Hardware UUID (used for cross-referencing with WMI/vCenter) |
| Provider ID | Cloud provider instance ID (e.g., aws:///eu-central-1a/i-0abc123) |
| Pod count | Number of pods running on this node |
| Labels | All Kubernetes labels (instance type, zone, custom labels) |
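Several fields above use Kubernetes quantity notation (memory like `32Gi`, CPU like `500m` millicores). A minimal sketch of converting these to plain numbers, covering only the suffixes shown in this document (the real quantity grammar also supports decimal suffixes and exponents):

```python
# Binary memory suffixes used in node capacity fields.
_BINARY = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity like '32Gi' to bytes."""
    for suffix, factor in _BINARY.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity like '500m' (millicores) to cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)
```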
Workload Details
For each Deployment, StatefulSet, and DaemonSet:
| Data Point | Description |
|---|---|
| Kind | Deployment, StatefulSet, or DaemonSet |
| Name | Workload name |
| Namespace | Kubernetes namespace |
| Desired replicas | Target replica count |
| Ready replicas | Currently ready replicas |
| Available replicas | Available for traffic |
| Container images | Docker/OCI images used (with tags) |
| Labels | Workload labels (app, tier, version, etc.) |
| Resource requests | CPU and memory requests per container |
| Resource limits | CPU and memory limits per container |
| Update strategy | RollingUpdate, Recreate, or OnDelete |
| Volume claims | Persistent volume claims (StatefulSets) with storage class and size |
| Created at | Workload creation timestamp |
Service Details
For each Kubernetes Service:
| Data Point | Description |
|---|---|
| Name | Service name |
| Namespace | Kubernetes namespace |
| Type | ClusterIP, NodePort, LoadBalancer, or ExternalName |
| Cluster IP | Internal service IP |
| External IP | LoadBalancer IP/hostname (if applicable) |
| Ports | Port mappings (port, targetPort, nodePort, protocol) |
| Selector | Label selector linking to workload pods |
Ingress Details
For each Ingress resource:
| Data Point | Description |
|---|---|
| Name | Ingress name |
| Namespace | Kubernetes namespace |
| Ingress class | Controller class (nginx, traefik, alb, etc.) |
| Rules | Host-based routing rules with path matching |
| Backend services | Service name and port for each path |
| TLS enabled | Whether TLS termination is configured |
| TLS hosts | Hostnames covered by TLS certificates |
| Secret names | TLS certificate secret references |
What Gets Created in Tripl-i
Configuration Items
| CI Type | Created From | Key Fields |
|---|---|---|
| Kubernetes Cluster | Cluster info | version, platform, node count, namespace count, API server URL |
| Kubernetes Workload | Each Deployment/StatefulSet/DaemonSet | kind, namespace, replicas, images, resources, labels |
| Kubernetes Ingress | Each Ingress resource | ingress class, rules, TLS config, hostnames |
| Server (enriched) | Nodes matched to existing CIs | K8s node role, kubelet version, pod capacity, conditions, labels |
Node-to-Server Correlation
One of the most powerful features of Kubernetes discovery is cross-scanner correlation. When the backend processes K8s scan data, it attempts to match each node to an existing Server CI that was previously discovered via WMI, SSH, or vCenter:
| Match Method | What It Compares | Use Case |
|---|---|---|
| System UUID | Node systemUUID vs CI serialNumber | VMware VMs (handles both UUID and BIOS serial formats) |
| Provider ID | AWS instance ID from providerID vs CI customFields.instance_id | AWS EC2 instances |
| Internal IP | Node IP vs CI ipAddress | General IP matching |
| Hostname | Node hostname vs CI name (case-insensitive) | Fallback matching |
When a match is found, the existing Server CI is enriched with Kubernetes metadata — no duplicate CI is created. When no match is found, a new Server CI is created with the K8s node data.
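The priority order above can be sketched as follows. The dict field names mirror the tables in this document, but the function is an illustrative sketch of the matching logic, not the backend's actual implementation:

```python
def match_node_to_ci(node, cis):
    """Return the first existing Server CI matching this K8s node,
    trying the four match methods in priority order."""
    # 1. System UUID vs CI serial number (case-insensitive)
    uuid = (node.get("system_uuid") or "").lower()
    if uuid:
        for ci in cis:
            if str(ci.get("serialNumber") or "").lower() == uuid:
                return ci
    # 2. A providerID such as aws:///eu-central-1a/i-0abc123
    #    ends in the cloud instance ID
    instance_id = (node.get("provider_id") or "").rsplit("/", 1)[-1]
    if instance_id:
        for ci in cis:
            if ci.get("customFields", {}).get("instance_id") == instance_id:
                return ci
    # 3. Internal IP vs CI IP address
    if node.get("internal_ip"):
        for ci in cis:
            if ci.get("ipAddress") == node["internal_ip"]:
                return ci
    # 4. Hostname fallback (case-insensitive)
    hostname = (node.get("hostname") or "").lower()
    if hostname:
        for ci in cis:
            if str(ci.get("name") or "").lower() == hostname:
                return ci
    return None  # no match: a new Server CI would be created
```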
Kubernetes-Specific CI Fields
After processing, CIs gain these additional fields:
Cluster CI (customFields):
- `k8s_version` — Kubernetes version
- `k8s_platform` — Build platform
- `k8s_node_count` — Number of nodes
- `k8s_namespace_count` — Number of namespaces
- `k8s_api_server` — API server URL
Workload CI (customFields):
- `k8s_namespace` — Kubernetes namespace
- `k8s_kind` — Deployment, StatefulSet, or DaemonSet
- `k8s_replicas_desired` / `k8s_replicas_ready` — Replica counts
- `k8s_images` — Container images (JSON array)
- `k8s_labels` — Kubernetes labels
- `k8s_resources` — Resource requests and limits
- `k8s_strategy` — Update strategy
- `k8s_volume_claims` — PVC details (StatefulSets)
- `k8s_service_name` — Matching service name (if any)
- `k8s_service_type` — Service type (ClusterIP, LoadBalancer, etc.)
- `k8s_service_ports` — Service port mappings
- `k8s_external_ip` — External access IP
Ingress CI (customFields):
- `k8s_ingress_class` — Ingress controller class
- `k8s_ingress_rules` — Routing rules (JSON)
- `k8s_tls_enabled` — TLS termination flag
- `k8s_tls_hosts` — TLS hostnames
- `k8s_namespace` — Kubernetes namespace
Server CI (node enrichment, customFields):
- `k8s_node_role` — Node role (worker, control-plane)
- `k8s_kubelet_version` — Kubelet agent version
- `k8s_pod_capacity` — Maximum pods
- `k8s_conditions` — Node health conditions
- `k8s_labels` — Node labels
Relationships Created
| Relationship | Source | Target | Description |
|---|---|---|---|
| contains | Kubernetes Cluster | Server (node) | Cluster contains this node |
| contains | Kubernetes Cluster | Kubernetes Workload | Cluster contains this workload |
| contains | Kubernetes Cluster | Kubernetes Ingress | Cluster contains this ingress |
| runs_on | Kubernetes Workload | Server (node) | Workload pods run on this node |
| routes_to | Kubernetes Ingress | Kubernetes Workload | Ingress routes traffic to workload |
| serves | Service | Kubernetes Workload | Service fronts this workload |
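Deriving these edges from a processed scan result might look like the following sketch; the `scan` dict shape is an assumption for illustration, not the real payload format:

```python
def build_relationships(scan):
    """Emit (type, source, target) tuples for the relationship
    kinds listed in the table above."""
    edges = []
    cluster = scan["cluster_id"]
    for node in scan["node_ids"]:
        edges.append(("contains", cluster, node))
    for wl in scan["workloads"]:
        edges.append(("contains", cluster, wl["id"]))
        for node in wl.get("node_ids", []):
            edges.append(("runs_on", wl["id"], node))
        if wl.get("service"):
            edges.append(("serves", wl["service"], wl["id"]))
    for ing in scan["ingresses"]:
        edges.append(("contains", cluster, ing["id"]))
        for backend in ing.get("backend_workloads", []):
            edges.append(("routes_to", ing["id"], backend))
    return edges
```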
Architecture Decisions
What Becomes a CI vs. What Doesn't
| K8s Object | CI? | Rationale |
|---|---|---|
| Cluster | Yes — Kubernetes Cluster | Top-level infrastructure component |
| Node | Enriches existing Server CI | Nodes are physical/virtual servers — not separate CI type |
| Deployment / StatefulSet / DaemonSet | Yes — Kubernetes Workload | Represents a deployable application unit |
| Ingress | Yes — Kubernetes Ingress | Represents external access configuration |
| Service | Metadata on Workload CI | Services are networking abstractions, not standalone infrastructure |
| Pod | No | Ephemeral — pods come and go constantly |
| Namespace | No | Organizational grouping, not infrastructure |
| ConfigMap / Secret | No | Configuration data, not discoverable infrastructure |
Why Services Are Not Separate CIs
Kubernetes Services are label-based routing abstractions. Rather than creating thousands of Service CIs that have a 1:1 relationship with workloads, service metadata (name, type, ports, external IP) is attached directly to the matching Workload CI as custom fields. This keeps the CMDB focused on meaningful infrastructure components.
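A sketch of how selector-based matching could attach service metadata to a Workload CI. The dict shapes and helper names are illustrative assumptions; the `k8s_service_*` field names follow this document:

```python
def selector_matches(selector, pod_labels):
    """A Service selects a workload when every key/value pair in the
    Service's selector appears in the workload's pod labels."""
    return bool(selector) and all(
        pod_labels.get(k) == v for k, v in selector.items()
    )

def attach_service_metadata(workload_ci, services):
    """Copy matching Service details onto the Workload CI's custom
    fields instead of creating a separate Service CI."""
    for svc in services:
        if selector_matches(svc["selector"], workload_ci["labels"]):
            workload_ci["customFields"].update({
                "k8s_service_name": svc["name"],
                "k8s_service_type": svc["type"],
            })
            break
    return workload_ci
```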
Discovery Logging
Kubernetes scan results appear in Discovery Logs with scan type kubernetes. A successful scan log includes:
- Scan Type: `kubernetes`
- Status: `success`
- Discovery Status: "Kubernetes cluster discovered"
- Result Summary: Node count, workload count, service count, data size
Failed scans show the specific error (authentication failure, API unreachable, missing permissions).
Performance Considerations
| Factor | Typical Value |
|---|---|
| Scan duration (small cluster, fewer than 10 nodes) | 15-30 seconds |
| Scan duration (medium cluster, 10-50 nodes) | 30-60 seconds |
| Scan duration (large cluster, 50+ nodes) | 60-120 seconds |
| Timeout | 120 seconds |
Scan duration depends primarily on the number of pods (for per-node pod counting) and the number of workloads across all namespaces.
Security Considerations
- Read-only access — The scanner never creates, modifies, or deletes K8s resources
- Service account tokens — Use dedicated service accounts with minimal `view` permissions
- Token rotation — Create time-limited tokens with the `--duration` flag
- TLS verification — Can be enabled for production clusters with valid certificates
- No secrets accessed — The scanner does not read ConfigMaps, Secrets, or pod logs
- Namespace isolation — The scanner lists resources across all namespaces but only collects metadata
Troubleshooting
Common Issues
| Symptom | Cause | Resolution |
|---|---|---|
| "Port 6443 not open" | API server not reachable | Verify network connectivity and firewall rules |
| "All Kubernetes credentials failed" | Invalid or expired token | Regenerate the service account token |
| "Forbidden" errors | Insufficient permissions | Verify the ClusterRoleBinding is correctly configured |
| "Certificate verify failed" | Self-signed certificate | Uncheck "Verify SSL" in credential configuration |
| Scanner skips K8s | Protocol not enabled | Enable `kubernetes` in scan protocol settings |
| "kubernetes Python package not installed" | Missing dependency | Run `pip install kubernetes` on the scanner agent |
Verifying Connectivity
Test from the scanner agent machine:
```shell
curl -k https://API_SERVER_IP:6443/version
```
If this returns a JSON version response, the API server is reachable.
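A representative (abridged) `/version` response looks like this; exact values vary by cluster:

```json
{
  "major": "1",
  "minor": "28",
  "gitVersion": "v1.28.4",
  "goVersion": "go1.20.11",
  "platform": "linux/amd64"
}
```

The `gitVersion` and `platform` fields map to the Git version and Platform data points in the Cluster Information table above.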
Next Steps
- Server Scanning — How WMI and SSH scans enrich K8s nodes
- VMware vCenter Scanner — Virtual infrastructure discovery (VM UUIDs used for node matching)
- Network Scanning Overview — Understanding the full scanning pipeline
- Credential Management — Setting up scanning credentials
- Discovery Scheduling — Automating recurring scans