<h1>Integrate Kubernetes policy-as-code solutions into Security Hub</h1>
<p>Using Kubernetes policy-as-code (PaC) solutions, administrators and security professionals can enforce organizational policies on Kubernetes resources. Several PaC solutions are publicly available for Kubernetes, such as Gatekeeper, Polaris, and Kyverno.</p>
<p>PaC solutions usually implement two features:</p>
<ul>
<li>Use <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" target="_blank" rel="noopener">Kubernetes admission controllers</a> to validate or modify objects before they’re created to help enforce configuration best practices for your clusters.</li>
<li>Provide a way for you to scan your resources created before policies were deployed or against new policies being evaluated.</li>
</ul>
<p>This post presents a solution to send policy violations from PaC solutions using <a href="https://github.com/kubernetes-sigs/wg-policy-prototypes/tree/master/policy-report" target="_blank" rel="noopener">Kubernetes policy report</a> format (for example, using Kyverno) or from Gatekeeper’s constraints status directly to <a href="https://aws.amazon.com/security-hub/" target="_blank" rel="noopener">AWS Security Hub</a>. With this solution, you can visualize Kubernetes security misconfigurations across your <a href="https://aws.amazon.com/eks" target="_blank" rel="noopener">Amazon Elastic Kubernetes Service (Amazon EKS)</a> clusters and your organizations in <a href="https://aws.amazon.com/organizations/" target="_blank" rel="noopener">AWS Organizations</a>. This can also help you implement standard security use cases—such as unified security reporting, escalation through a ticketing system, or automated remediation—on top of Security Hub to help improve your overall Kubernetes security posture and reduce manual efforts.</p>
<h2>Solution overview</h2>
<p>The solution uses the approach described in <a href="https://aws.amazon.com/blogs/opensource/a-container-free-way-to-configure-kubernetes-using-aws-lambda/" target="_blank" rel="noopener">A Container-Free Way to Configure Kubernetes Using AWS Lambda</a> to deploy an <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> function that periodically synchronizes the security status of a Kubernetes cluster from a Kubernetes or Gatekeeper policy report with Security Hub. Figure 1 shows the architecture diagram for the solution.</p>
<div id="attachment_33740" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-33740" src="https://infracom.com.sg/wp-content/uploads/2024/04/img1-4.png" alt="Figure 1: Diagram of solution" width="493" height="206" class="size-full wp-image-33740">
<p id="caption-attachment-33740" class="wp-caption-text">Figure 1: Diagram of solution</p>
</div>
<p>This solution works using the following resources and configurations:</p>
<ol>
<li>A scheduled event invokes a Lambda function every 10 minutes.</li>
<li>The Lambda function iterates through each running EKS cluster that you want to integrate and authenticates by using a <a href="https://github.com/kubernetes-client/python" target="_blank" rel="noopener">Kubernetes Python client</a> and the <a href="https://aws.amazon.com/iam/" target="_blank" rel="noopener">AWS Identity and Access Management (IAM)</a> role of the Lambda function.</li>
<li>For each running cluster, the Lambda function retrieves the selected Kubernetes policy reports (or the Gatekeeper constraint status, depending on the policy selected) and sends active violations, if present, to Security Hub. With Gatekeeper, if more violations exist than those reported in the constraint, an additional <span>INFORMATIONAL</span> finding is generated in Security Hub to let security teams know about the missing findings. <p>Optional: EKS cluster administrators can raise the limit of reported policy violations by using the <code>--constraint-violations-limit</code> flag in their Gatekeeper audit operation.</p> </li>
<li>For each running cluster, the Lambda function archives previously raised findings in Security Hub that have since been resolved.</li>
</ol>
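<p>To make the report-to-finding mapping in step 3 concrete, the following is a simplified Python sketch of how a Kubernetes policy report result can be translated into an AWS Security Finding Format (ASFF) finding. The function name and field choices are illustrative, not the repository's actual code; the real integration also handles severity mapping, batching, and archiving.</p>

```python
from datetime import datetime, timezone

def report_results_to_findings(results, cluster_arn, account_id, product_arn):
    """Translate Kubernetes PolicyReport results into ASFF findings.

    Simplified illustration: the actual integration also maps severities,
    batches requests, and archives findings that are no longer reported.
    """
    now = datetime.now(timezone.utc).isoformat()
    findings = []
    for result in results:
        if result.get("result") != "fail":  # only active violations are sent
            continue
        resource = result["resources"][0]
        findings.append({
            "SchemaVersion": "2018-10-08",
            "Id": f"{cluster_arn}/{result['policy']}/{resource['uid']}",
            "ProductArn": product_arn,
            "GeneratorId": result["policy"],
            "AwsAccountId": account_id,
            "Types": ["Software and Configuration Checks/Kubernetes Policies"],
            "CreatedAt": now,
            "UpdatedAt": now,
            "Severity": {"Label": "MEDIUM"},
            "Title": f"{result['policy']}: {result['rule']}",
            "Description": result["message"],
            "Resources": [{
                "Type": "Other",
                "Id": f"{cluster_arn}/{resource['kind']}/{resource['name']}",
            }],
        })
    return findings
```

<p>A list built this way could then be sent with the boto3 call <code>securityhub.batch_import_findings(Findings=findings)</code>, which accepts up to 100 findings per request; re-importing a finding with <code>RecordState</code> set to <code>ARCHIVED</code> is one way a provider can archive violations that are no longer active.</p>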
<p>You can download the solution from this <a href="https://github.com/aws-samples/securityhub-k8s-policy-integration" target="_blank" rel="noopener">GitHub repository</a>.</p>
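<p>The 10-minute schedule described in step 1 can be modeled in CloudFormation roughly as follows. The resource names and the function reference are hypothetical; the template in the GitHub repository is the authoritative definition.</p>

```yaml
# Illustrative CloudFormation fragment for the 10-minute schedule.
SyncSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(10 minutes)
    State: ENABLED
    Targets:
      - Arn: !GetAtt IntegrationFunction.Arn   # the solution's Lambda function
        Id: k8s-policy-integration
```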
<h2>Walkthrough</h2>
<p>In this walkthrough, I show you how to deploy a Kubernetes policy-as-code solution and forward its findings to Security Hub. You'll configure Kyverno and a demo environment in an existing EKS cluster, then send the resulting findings to Security Hub.</p>
<p>The code provided includes an example constraint and noncompliant resource to test against.</p>
<h3>Prerequisites</h3>
<p>An EKS cluster is required to set up this solution within your AWS environments. The cluster should be configured with either the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" target="_blank" rel="noopener">aws-auth ConfigMap</a> or <a href="https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html" target="_blank" rel="noopener">access entries</a>. Optional: You can use <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html" target="_blank" rel="noopener">eksctl</a> to create a cluster.</p>
<p>The following resources need to be installed on your computer:</p>
<h3>Step 1: Set up the environment</h3>
<p>The first step is to install Kyverno on an existing Kubernetes cluster. Then deploy examples of a Kyverno policy and noncompliant resources.</p>
<h4>Deploy Kyverno example and policy</h4>
<ol>
<li>Deploy Kyverno in your Kubernetes cluster according to its <a href="https://kyverno.io/docs/installation/" target="_blank" rel="noopener">installation manual</a> using the Kubernetes CLI.
<div class="hide-language">
<pre class="unlimited-height-code"><code>kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml</code></pre>
</div> </li>
<li>Set up a policy that requires namespaces to use the label <span>thisshouldntexist</span>.
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-text">kubectl create -f - << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ns-labels
spec:
  validationFailureAction: Audit
  background: true
  rules:
  - name: check-for-labels-on-namespace
    match:
      any:
      - resources:
          kinds:
          - Namespace
    validate:
      message: "The label thisshouldntexist is required."
      pattern:
        metadata:
          labels:
            thisshouldntexist: "?*"
EOF</code></pre>
</div> </li>
</ol>
<h4>Deploy a noncompliant resource to test this solution</h4>
<ol>
<li>Create a noncompliant namespace.
<div class="hide-language">
<pre class="unlimited-height-code"><code>kubectl create namespace non-compliant</code></pre>
</div> </li>
<li>Check the Kubernetes policy report status using the following command:
<div class="hide-language">
<pre class="unlimited-height-code"><code>kubectl get clusterpolicyreport -o yaml</code></pre>
</div> </li>
</ol>
<p>You should see output similar to the following:</p>
<div class="hide-language">
<pre class="unlimited-height-code"><code class="lang-text">apiVersion: v1
items:
- apiVersion: wgpolicyk8s.io/v1alpha2
  kind: ClusterPolicyReport
  metadata:
    creationTimestamp: "2024-02-20T14:00:37Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: kyverno
      cpol.kyverno.io/require-ns-labels: "3734083"
    name: cpol-require-ns-labels
    resourceVersion: "3734261"
    uid: 3cfcf1da-bd28-453f-b2f5-512c26065986
  results:
  …
  - message: 'validation error: The label thisshouldntexist is required. rule check-for-labels-on-namespace
      failed at path /metadata/labels/thisshouldntexist/'
    policy: require-ns-labels
    resources:
    - apiVersion: v1
      kind: Namespace
      name: non-compliant
      uid: d62eb1ad-8a0b-476b-848d-ff6542c57840
    result: fail
    rule: check-for-labels-on-namespace
    scored: true
    source: kyverno
    timestamp:
      nanos: 0
      seconds: 1708437615</code></pre>
</div>
<h3>Step 2: Solution code deployment and configuration</h3>
<p>The next step is to clone and deploy the solution that integrates with Security Hub.</p>
<h4>To deploy the solution</h4>
<ol>
<li>Clone the GitHub repository by using your preferred command line terminal.</li>
<li>Open the <code>parameters.json</code> file and configure the following values:
<ul>
<li><strong>Policy</strong> – Name of the product that you want to enable; in this case, <code>policyreport</code>, which is supported by tools such as Kyverno.</li>
<li><strong>ClusterNames</strong> – List of EKS clusters. When <strong>AccessEntryEnabled</strong> is enabled, this solution deploys an access entry for the integration to access your EKS clusters.</li>
<li><strong>SubnetIds</strong> – (Optional) A comma-separated list of your subnets. If you’ve configured the API endpoints of your EKS clusters as private only, you need to configure this parameter. If your EKS clusters have public endpoints enabled, you can remove this parameter.</li>
<li><strong>SecurityGroupId</strong> – (Optional) A security group ID that allows connectivity to the EKS clusters. This parameter is only required if you’re running private API endpoints; otherwise, you can remove it. This security group should be allowed ingress from the security group of the EKS control plane.</li>
<li><strong>AccessEntryEnabled</strong> – (Optional) If you’re using EKS access entries, the solution automatically deploys the access entries with the read-only-group permissions created in the next step. This parameter is <code>True</code> by default.</li>
</ul>
</li>
<li>Save the changes and close the parameters file.</li>
<li>Set up your <code>AWS_REGION</code> (for example, <code>export AWS_REGION=eu-west-1</code>) and make sure that your credentials are configured for the delegated administrator account.</li>
<li>Enter the following command to deploy:</li>
</ol>
<p>You should see the following output:</p>
<h3>Step 3: Set up EKS cluster access</h3>
<p>You need to create the Kubernetes group <code>read-only-group</code> to grant read-only permissions to the IAM role of the Lambda function. If you aren’t using access entries, you also need to modify the <code>aws-auth</code> ConfigMap of the Kubernetes clusters.</p>
<h4>To configure access to EKS clusters</h4>
<ol>
<li>For each cluster that’s running in your account, run the <code>kube-setup.sh</code> script to create the Kubernetes read-only cluster role and cluster role binding.</li>
<li>(Optional) Configure the <code>aws-auth</code> ConfigMap by using eksctl if you aren’t using access entries.</li>
</ol>
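<p>The effect of the <code>kube-setup.sh</code> script can be sketched as RBAC manifests along these lines. The group name <code>read-only-group</code> comes from the text above; the role and binding names and the exact rule set are assumptions, and the script in the repository is authoritative.</p>

```yaml
# Illustrative sketch of the read-only access the setup script configures.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only-role            # assumed name
rules:
- apiGroups: ["wgpolicyk8s.io"]   # Kubernetes policy report resources
  resources: ["policyreports", "clusterpolicyreports"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding         # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-only-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: read-only-group           # the group granted to the Lambda IAM role
```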
<h3>Step 4: Verify AWS service integration</h3>
<p>The next step is to verify that the Lambda integration with Security Hub is running.</p>
<h4>To verify the integration is running</h4>
<ol>
<li>Open the Lambda console, and navigate to the <code>aws-securityhub-k8s-policy-integration-</code> function.</li>
<li>Start a test to import your cluster’s noncompliant findings to Security Hub.</li>
<li>In the Security Hub console, review the recently created findings from Kyverno.</li>
</ol>
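<p>Besides the console, you can inspect the imported findings programmatically. The following sketch uses the real boto3 <code>get_findings</code> API; the helper names are hypothetical, and the filter assumes the integration records the policy name (here, the <code>require-ns-labels</code> policy from this walkthrough) in the finding's <code>GeneratorId</code>.</p>

```python
def kyverno_finding_filters(policy_name):
    """Security Hub filters selecting active findings for one policy.

    policy_name matches the GeneratorId the findings were imported with;
    "require-ns-labels" is the policy used in this walkthrough.
    """
    return {
        "GeneratorId": [{"Value": policy_name, "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }

def print_policy_findings(policy_name):
    # Requires AWS credentials for the account where findings were imported.
    import boto3  # kept local so the filter helper works without boto3

    client = boto3.client("securityhub")
    paginator = client.get_paginator("get_findings")
    for page in paginator.paginate(Filters=kyverno_finding_filters(policy_name)):
        for finding in page["Findings"]:
            print(finding["Title"], "-", finding["Resources"][0]["Id"])
```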
<h3>Step 5: Clean up</h3>
<p>The final step is to clean up the resources that you created for this walkthrough.</p>
<h4>To destroy the stack</h4>
<ol>
<li>Use the command line terminal on your computer to run the following command:</li>
</ol>
<h2>Conclusion</h2>
<p>In this post, you learned how to integrate Kubernetes policy report findings with Security Hub and tested this setup by using the Kyverno policy engine. If you want to test the integration of this solution with Gatekeeper, you can find alternative commands for Step 1 of this post in the GitHub repository’s README file.</p>
<p>Using this integration, you can gain visibility into your Kubernetes security posture across EKS clusters in a centralized view, together with other security findings such as those from AWS Config and Amazon Inspector, across your organization. You can also try this solution with other tools, such as kube-bench or Gatekeeper, and you can extend this setup to notify security teams of critical misconfigurations or to implement automated remediation actions by using Security Hub.</p>
<p>For more information on how to use PaC solutions to secure Kubernetes workloads in the AWS Cloud, see the Amazon Elastic Kubernetes Service (Amazon EKS) workshop, Amazon EKS best practices, Using Gatekeeper as a drop-in Pod Security Policy replacement in Amazon EKS, and Policy-based countermeasures for Kubernetes.</p>
<p>If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.</p>