Article Id 394866
Description After setting up Kubernetes compliance using a Helm chart, the error message 'Partial collection available. The node collector has not been configured' appears on the Lacework portal UI page.
This article describes the troubleshooting steps to resolve this error.
Scope FortiCNAPP, Lacework, Kubernetes CSPM.
Solution

The Lacework Cluster Collector retrieves AWS instance metadata for the EKS cluster, which is required to correlate Node Collector and Cluster Collector data and to provide configuration visibility in the Lacework platform.
If the metadata requests from the Cluster Collector are blocked, errors such as 'Partial Collection' can appear in the Lacework Console.

 

The message 'Partial collection available. The node collector has not been configured.' usually means the lacework-agent-cluster pods cannot reach the AWS metadata service. So, first, it is necessary to ensure that the metadata service is accessible to the pods.

 

  • Ensure that the metadata service is accessible to the pods: Configure Access to Tags in AWS.
  • Check the Cluster Collector logs for errors related to the retrieval of AWS instance metadata, for example:

 

kubectl logs lacework-agent-cluster-5ccf8698d6-kctpt -n lacework | grep -A 10 EC2Metadata

 

2025-03-25T16:15:24.025Z INFO discovery/cdiscovery.go:119 AWS init
2025-03-25T16:15:27.188Z INFO discovery/cdiscovery.go:138 AWS:Error in getting instance-id from EC2 metadata EC2MetadataError: failed to make EC2Metadata request status code: 401, request id:
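
To verify reachability directly, query the instance metadata service from inside the Cluster Collector pod. This is a sketch: the pod name is the example from above, and it assumes curl is available in the container image.

# Request an IMDSv2 token (PUT), then use it to read the instance ID;
# 169.254.169.254 is the standard EC2 instance metadata endpoint:
kubectl exec -n lacework lacework-agent-cluster-5ccf8698d6-kctpt -- \
  sh -c 'TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"); curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id'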

 

Note: A 401 status code can be seen if the EC2 instance has been configured with http-put-response-hop-limit set to 1 for security reasons. This AWS metadata option limits the number of network hops that the IMDSv2 PUT response is allowed to make; because the Cluster Collector pod runs on the pod network, its metadata requests need an extra hop and are rejected when the limit is 1.
If this setting is used, the Administrator can set clusterAgent.hostNetworkAccess=True when deploying the cluster agent (see Configuration Parameters):

 

--set clusterAgent.hostNetworkAccess=True
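
Alternatively, if raising the hop limit is acceptable, the current setting can be inspected and adjusted with the AWS CLI (the instance ID below is a placeholder for the affected node's instance):

# Check the current IMDS hop limit on the node's EC2 instance:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].MetadataOptions.HttpPutResponseHopLimit'

# Raise the hop limit to 2 so traffic from the pod network can reach the metadata service:
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-put-response-hop-limit 2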

 

  • Check the name of the EKS cluster in AWS and make sure the same name is used when doing the EKS compliance integration.
    Ensure that the following configuration value is set to the Amazon EKS cluster name (as it appears in AWS):
    Example for Helm: --set laceworkConfig.kubernetesCluster=myEksClusterName
    Example for Terraform: lacework_cluster_name = "myEksClusterName"
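
To confirm the exact cluster name as AWS reports it, list the clusters with the AWS CLI (the region below is an example; adjust as needed):

aws eks list-clusters --region us-east-1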

 

  • Check that the Cluster Collector and Node Collector pods are running in the Kubernetes cluster using the command below:

List all pods in the Lacework namespace for the cluster.

 

kubectl get pods -o wide -n lacework


Example:


NAME                                      READY   STATUS    RESTARTS   AGE
lacework-agent-cluster-5ccf8698d6-kctpt   1/1     Running   0          26h
lacework-agent-hn8qz                      1/1     Running   0          23h
lacework-agent-xftp5                      1/1     Running   0          23h
  • The Cluster Collector pod is named with a prefix of lacework-agent-cluster-*. There should be one per Kubernetes cluster.

  • The Node Collector pods are named with a prefix of lacework-agent-*. There should be one per node in the Kubernetes cluster. A quick count check is shown below.
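
To sanity-check the counts (relying on the pod naming convention above), compare the number of nodes with the number of Node Collector pods; the two should match:

# Number of nodes in the cluster:
kubectl get nodes --no-headers | wc -l

# Number of Node Collector pods (excluding the Cluster Collector pod):
kubectl get pods -n lacework --no-headers | grep -vc lacework-agent-cluster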

 

If the error persists after the checks above, it is possible to open a support ticket, attaching the Cluster Collector and Node Collector logs collected with the commands below:

 

kubectl logs lacework-agent-cluster-5ccf8698d6-kctpt -n lacework

kubectl logs lacework-agent-hn8qz -n lacework

kubectl logs lacework-agent-xftp5 -n lacework
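
As a convenience, a small shell loop (a sketch; pod names vary per deployment) can save the logs of every pod in the lacework namespace to per-pod files for attachment:

# Dump each Lacework pod's logs into a file named after the pod:
for pod in $(kubectl get pods -n lacework -o name); do
  kubectl logs -n lacework "$pod" > "${pod#pod/}.log"
done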