
TLS-enabled Kubernetes clusters with ACM Private CA and Amazon EKS

In this blog post, we demonstrate how to set up end-to-end encryption on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Certificate Manager Private Certificate Authority (ACM Private CA). In this example of end-to-end encryption, traffic originates from your client and terminates at an Ingress controller server running inside the sample app. By following the instructions in this post, you can set up an NGINX ingress controller on Amazon EKS. As part of the example, we show you how to configure an AWS Network Load Balancer (NLB) with HTTPS using certificates issued via ACM Private CA in front of the ingress controller.

AWS Private CA supports an open-source plugin for cert-manager that offers a more secure certificate authority solution for Kubernetes containers. cert-manager is a widely adopted solution for TLS certificate management in Kubernetes. Customers who use cert-manager for application certificate lifecycle management can now use this solution to improve security over the default cert-manager CA, which stores keys in plaintext in server memory. Customers with regulatory requirements for controlling access to and auditing their CA operations can use this solution to improve auditability and support compliance.

 

Solution components

  • Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
  • Amazon EKS is a managed service that you can use to run Kubernetes on Amazon Web Services (AWS) without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
  • cert-manager is an add-on to Kubernetes that provides TLS certificate management. cert-manager requests certificates, distributes them to Kubernetes containers, and automates certificate renewal. cert-manager ensures certificates are valid and up to date, and attempts to renew certificates at an appropriate time before expiry.
  • ACM Private CA enables the creation of private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA. With ACM Private CA, you can issue certificates for authenticating internal users, computers, applications, services, servers, and other devices, and for signing computer code. The private keys for private CAs are stored in AWS managed hardware security modules (HSMs), which are FIPS 140-2 certified, providing a better security profile compared to the default CAs in Kubernetes. Private certificates help identify and secure communication between connected resources on private networks such as servers, mobile and IoT devices, and applications.
  • AWS Private CA Issuer plugin. Kubernetes containers and applications use digital certificates to provide secure authentication and encryption over TLS. With this plugin, cert-manager requests TLS certificates from ACM Private CA. The integration supports certificate automation for TLS in a range of configurations, including at the ingress, on the pod, and mutual TLS between pods. You can use the AWS Private CA Issuer plugin with Amazon Elastic Kubernetes Service, self-managed Kubernetes on AWS, and Kubernetes on-premises.
  • The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Kubernetes cluster. The controller provisions the following resources (a minimal sketch follows this list):
    • An AWS Application Load Balancer (ALB) when you create a Kubernetes Ingress.
    • An AWS Network Load Balancer (NLB) when you create a Kubernetes Service of type LoadBalancer.
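For illustration, a minimal sketch of a Service that the AWS Load Balancer Controller would satisfy with an NLB could look like the following; the name, selector, and annotation values are assumptions for this example and depend on your controller version:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service            # hypothetical name for illustration
        annotations:
          # With recent AWS Load Balancer Controller versions, "external" hands the
          # Service to the controller; target-type "ip" registers pod IPs directly.
          service.beta.kubernetes.io/aws-load-balancer-type: external
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      spec:
        type: LoadBalancer
        selector:
          app: example
        ports:
          - port: 443
            targetPort: 8080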

Different points for terminating TLS in Kubernetes

How and where you terminate your TLS connection depends on your use case, security policies, and need to comply with regulatory requirements. This section discusses four common use cases for terminating TLS. The use cases are illustrated in Figure 1 and described in the text that follows.

Figure 1: Terminating TLS at different points


  1. At the load balancer: The most common use case for terminating TLS at the load balancer level is to use publicly trusted certificates. This use case is simple to deploy and the certificate is bound to the load balancer itself. For example, you can use ACM to issue a public certificate and bind it with an AWS NLB. You can learn more from How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?
  2. At the ingress: If there is no strict requirement for end-to-end encryption, you can offload this processing to the ingress controller or the NLB. This helps you optimize the performance of your workloads and makes them easier to configure and manage. We examine this use case in this blog post.
  3. On the pod: In Kubernetes, a pod is the smallest deployable unit of computing and it encapsulates one or more applications. End-to-end encryption of the traffic from the client all the way to a Kubernetes pod provides a secure communication model where the TLS is terminated at the pod inside the Kubernetes cluster. This could be useful for meeting certain security requirements. You can learn more from the blog post Setting up end-to-end TLS encryption on Amazon EKS with the new AWS Load Balancer Controller.
  4. Mutual TLS between pods: This use case focuses on encryption in transit for data flowing inside the Kubernetes cluster. For more details on how this can be achieved with cert-manager using an Istio service mesh, please see the Securing Istio workloads with mTLS using cert-manager post. You can use the AWS Private CA Issuer plugin in conjunction with cert-manager to use ACM Private CA to issue certificates for securing communication between the pods.

In this blog post, we use a scenario in which there is a requirement to terminate TLS at the ingress controller level, demonstrating the second example above.

Figure 2 provides an overall picture of the solution described in this blog post. The components and steps illustrated in Figure 2 are described fully in the sections that follow.

Figure 2: Overall solution diagram


Prerequisites

Before you begin, you need the following: an AWS account, the AWS Command Line Interface (AWS CLI), eksctl, kubectl, Helm, and OpenSSL, all of which are used later in this post.

Verify that you have the latest versions of these tools installed before you begin.

Provision an Amazon EKS cluster

If you already have a running Amazon EKS cluster, you can skip this step and move on to installing NGINX Ingress.

You can use the AWS Management Console or AWS CLI, but this example uses eksctl to provision the cluster. eksctl is a tool that makes it easier to deploy and manage an Amazon EKS cluster.

This example uses the us-east-2 Region and the t2 node type. Select the node type and Region that are appropriate for your environment. Cluster provisioning takes approximately 15 minutes.

To provision an Amazon EKS cluster

  1. Run the following eksctl command to create an Amazon EKS cluster in the us-east-2 Region with Kubernetes version 1.19 and two nodes. You can change the Region to the one that best fits your use case.
    eksctl create cluster \
      --name acm-pca-lab \
      --version 1.19 \
      --nodegroup-name acm-pca-nlb-lab-workers \
      --node-type t2.medium \
      --nodes 2 \
      --region us-east-2

  2. Once your cluster has been created, verify that your cluster is running correctly by running the following command:

     

    $ kubectl get pods --all-namespaces
     

    NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
    kube-system   aws-node-t94rp             1/1     Running   0          3m4s
    kube-system   aws-node-w7dm6             1/1     Running   0          3m19s
    kube-system   coredns-56b458df85-6tgjl   1/1     Running   0          10m
    kube-system   coredns-56b458df85-8gp94   1/1     Running   0          10m
    kube-system   kube-proxy-2pjx7           1/1     Running   0          3m19s
    kube-system   kube-proxy-hz8wq           1/1     Running   0          3m4s

 
        You should see output similar to the above, with all pods in a Running state.

 

Install NGINX Ingress

NGINX Ingress is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.

To install NGINX Ingress

    1. Use the following command to install NGINX Ingress:
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/aws/deploy.yaml
           
       
    2. Run the following command to determine the address that AWS has assigned to your NLB:
           $ kubectl get service -n ingress-nginx
      

      NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                                                                      PORT(S)                      AGE
      ingress-nginx-controller             LoadBalancer   10.100.214.10   a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com   80:32598/TCP,443:30624/TCP   14s
      ingress-nginx-controller-admission   ClusterIP      10.100.118.1    <none>                                                                           443/TCP                      14s

       
    3. It can take up to five minutes for the load balancer to be ready. Once the external IP is created, run the following command to verify that traffic is being routed correctly to ingress-nginx:
           curl http://a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com
      

       

      <html>
      <head><title>404 Not Found</title></head>
      <body>
      <center><h1>404 Not Found</h1></center>
      <hr><center>nginx</center>
      </body>
      </html>
       

Note: Even though it's returning an HTTP 404 error code, in this case curl is still reaching the ingress controller and getting the expected HTTP response back.

Configure your DNS records

After your load balancer is provisioned, the next step is to point the application's DNS record to the URL of the NLB.

You can use your DNS provider's console, for example Route 53, and set a CNAME record pointing to your NLB. See CNAME record type for more details on how to set up a CNAME record using Route 53.

This scenario uses the sample domain rsa-2048.example.com.

    rsa-2048.example.com    CNAME    a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com
     

As you go through the scenario, replace rsa-2048.example.com with your registered domain.
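If your domain is hosted in Route 53, a minimal sketch with the AWS CLI could create the record; the hosted zone ID below is a placeholder you would replace with your own:

    # Placeholder hosted zone ID; substitute the zone that hosts your domain.
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "rsa-2048.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com"}]
          }
        }]
      }'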

Install cert-manager

cert-manager is a Kubernetes add-on that you can use to automate the management and issuance of TLS certificates from various issuing sources. It runs within your Kubernetes cluster and ensures that certificates are valid, attempting to renew them at an appropriate time before they expire.

You can use the regular installation on Kubernetes guide to install cert-manager on Amazon EKS.
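As a reference point, the regular installation boils down to applying a single manifest similar to the following; the release version shown is only an example, so check the cert-manager documentation for the current one:

    # Example only: substitute the cert-manager release appropriate for your cluster.
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml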

After you've deployed cert-manager, you can verify the installation by following these instructions. If all of the above steps have completed without error, you're good to go!

Note: If you're planning to use Amazon EKS with Kubernetes pods running on AWS Fargate, please follow the cert-manager Fargate instructions to make sure the cert-manager installation works as expected. AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers.

Install aws-privateca-issuer

The AWS Private CA Issuer plugin acts as an add-on (see external cert configuration) to cert-manager that signs certificate requests using ACM Private CA.

To install aws-privateca-issuer

    1. For installation, use the following helm commands:
           kubectl create namespace aws-pca-issuer
      
      helm repo add awspca https://cert-manager.github.io/aws-privateca-issuer
      helm repo update
      helm install awspca/aws-pca-issuer --generate-name --namespace aws-pca-issuer
           
    2. Verify that the AWS Private CA Issuer is configured correctly by running the following command, and ensure that it is in the READY state with a status of Running:
           $ kubectl get pods --namespace aws-pca-issuer
      NAME                                         READY   STATUS    RESTARTS   AGE
      aws-pca-issuer-1622570742-56474c464b-j6k8s   1/1     Running   0          21s
           
    3. You can check the chart configuration in the default values file.

Create an ACM Private CA

In this scenario, you create a private certificate authority in ACM Private CA with RSA 2048 selected as the key algorithm. You can create a CA using the AWS console, AWS CLI, or AWS CloudFormation.
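As a sketch of the CLI path, a root CA with an RSA 2048 key could be created with a command like the following; the subject common name is a placeholder, and the CA still needs to be activated (by issuing and importing its root certificate) before it can issue certificates:

    # Sketch: create a root CA with an RSA 2048 key. The CommonName is a placeholder.
    aws acm-pca create-certificate-authority \
      --certificate-authority-type ROOT \
      --certificate-authority-configuration '{
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "acm-pca-lab-demo"}
      }' \
      --region us-east-2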

To create an ACM Private CA

Download the CA certificate using the following command. Replace the <CA_ARN> and <Region> values with the values from the CA you created earlier, and save it to a file named cacert.pem:

     aws acm-pca get-certificate-authority-certificate \
       --certificate-authority-arn <CA_ARN> \
       --region <Region> \
       --output text > cacert.pem
     

Once your private CA is active, you can proceed to the next step. Your private CA will look similar to the CA in Figure 3.

Figure 3: Sample ACM Private CA


Set EKS node permission for ACM Private CA

To be able to issue a certificate from ACM Private CA, add the IAM policy from the prerequisites to your EKS NodeInstanceRole. Replace the <CA_ARN> value with the value from the CA you created earlier:


    "Version": "2012-10-17",
    "Statement": [
        "Sid": "awspcaissuerpolicy",
        "Effect": "Allow",
        "Action": [
            "acm-pca:GetCertificate",
            "acm-pca:DescribeCertificateAuthority",
            "acm-pca:IssueCertificate"
        ],
        "Resource": "                                                            "
     ]
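One way to attach this as an inline policy on the node role is with the AWS CLI; the role name below is a placeholder for the NodeInstanceRole that eksctl generated for your cluster, and the policy JSON is assumed to be saved as pca-issuer-policy.json:

    # Placeholder role name; use the NodeInstanceRole created for your cluster.
    aws iam put-role-policy \
      --role-name <NodeInstanceRole> \
      --policy-name awspcaissuerpolicy \
      --policy-document file://pca-issuer-policy.json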
 

Create an Issuer in Amazon EKS

Now that the ACM Private CA is active, you can begin requesting private certificates that can be used by Kubernetes applications. Use the aws-privateca-issuer plugin to create the ClusterIssuer, which is used with ACM Private CA to issue certificates.

Issuers (and ClusterIssuers) represent a certificate authority from which signed x509 certificates can be obtained, such as ACM Private CA. You need at least one ClusterIssuer or Issuer before you can start requesting certificates in your cluster. There are two custom resources that you can use to create an Issuer inside Kubernetes using the aws-privateca-issuer add-on:

  • AWSPCAIssuer is a regular namespaced issuer that can be used as a reference in your Certificate custom resources.
  • AWSPCAClusterIssuer is specified in exactly the same way, but it doesn't belong to a single namespace and can be referenced by certificate resources from multiple different namespaces. A sketch of the namespaced variant follows this list for comparison.
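For comparison, a minimal sketch of the namespaced variant is shown below; the namespace is illustrative, and <CA_ARN> and <Region> are placeholders. This walkthrough uses the cluster-scoped AWSPCAClusterIssuer instead.

    # Hypothetical namespaced issuer; this walkthrough uses AWSPCAClusterIssuer.
    apiVersion: awspca.cert-manager.io/v1beta1
    kind: AWSPCAIssuer
    metadata:
      name: demo-test-root-ca
      namespace: acm-pca-lab-demo
    spec:
      arn: <CA_ARN>
      region: <Region>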

To create an Issuer in Amazon EKS

    1. For this scenario, you create an AWSPCAClusterIssuer. Start by creating a file named cluster-issuer.yaml and save the following text in it, replacing the <CA_ARN> and <Region> information with your own.
           apiVersion: awspca.cert-manager.io/v1beta1
           kind: AWSPCAClusterIssuer
           metadata:
             name: demo-test-root-ca
           spec:
             arn: <CA_ARN>
             region: <Region>
    2. Deploy the AWSPCAClusterIssuer:
           kubectl apply -f cluster-issuer.yaml
           
    3. Verify the installation and ensure that the following command returns a Kubernetes resource of kind AWSPCAClusterIssuer:
           $ kubectl get AWSPCAClusterIssuer
      NAME                AGE
      demo-test-root-ca   51s
           

Request the certificate

Now, you can begin requesting certificates that can be used by Kubernetes applications from the provisioned issuer. For more details on how to specify and request Certificate resources, please check the Certificate Resources guide.

To request the certificate

    1. As a first step, create a new namespace that contains the application and secret:
           $ kubectl create namespace acm-pca-lab-demo
      namespace/acm-pca-lab-demo created
           
    2. Next, create a basic X509 private certificate for your domain.
       Create a file named rsa-2048.yaml and save the following text in it. Replace rsa-2048.example.com with your domain.
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: rsa-cert-2048
  namespace: acm-pca-lab-demo
spec:
  commonName: www.rsa-2048.example.com
  dnsNames:
    - www.rsa-2048.example.com
    - rsa-2048.example.com
  duration: 2160h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: demo-test-root-ca
  renewBefore: 360h0m0s
  secretName: rsa-example-cert-2048
  usages:
    - server auth
    - client auth
  privateKey:
    algorithm: "RSA"
    size: 2048
     
    3. For a certificate with a key algorithm of RSA 2048, create the resource:

     

    kubectl apply -f rsa-2048.yaml -n acm-pca-lab-demo
     
    4. Verify that the certificate is issued and in READY state by running the following command:

     

    $ kubectl get certificate -n acm-pca-lab-demo
     

    NAME            READY   SECRET                  AGE
    rsa-cert-2048   True    rsa-example-cert-2048   12s

    You can run the command kubectl describe certificate -n acm-pca-lab-demo to check the progress of your certificate.
    5. Once the certificate status shows as issued, you can use the following command to check the issued certificate details:
           kubectl get secret rsa-example-cert-2048 -n acm-pca-lab-demo -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text
           
     

Deploy a demo application

For the purpose of this scenario, you can create a new service: a simple "hello world" website that uses echoheaders to respond with the HTTP request headers along with some cluster details.

To deploy a demo application

    1. Create a new file named hello-world.yaml with the following content:
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-world
        namespace: acm-pca-lab-demo
      spec:
        type: ClusterIP
        ports:
          - port: 80
            targetPort: 8080
        selector:
          app: hello-world
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-world
        namespace: acm-pca-lab-demo
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-world
        template:
          metadata:
            labels:
              app: hello-world
          spec:
            containers:
              - name: echoheaders
                image: k8s.gcr.io/echoserver:1.10
                args:
                  - "-text=Hello World"
                imagePullPolicy: IfNotPresent
                resources:
                  requests:
                    cpu: 100m
                    memory: 100Mi
                ports:
                  - containerPort: 8080

     
    2. Create the service using the following command:

     

    $ kubectl apply -f hello-world.yaml
     

Expose and secure your application

Now that you've issued a certificate, you can expose your application using a Kubernetes Ingress resource.

To expose and secure your application

    1. Create a new file called example-ingress.yaml and add the following content:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: acm-pca-demo-ingress
          namespace: acm-pca-lab-demo
          annotations:
            kubernetes.io/ingress.class: "nginx"
        spec:
          tls:
            - hosts:
                - www.rsa-2048.example.com
              secretName: rsa-example-cert-2048
          rules:
            - host: www.rsa-2048.example.com
              http:
                paths:
                  - path: /
                    pathType: Exact
                    backend:
                      service:
                        name: hello-world
                        port:
                          number: 80

    2. Create a new Ingress resource by running the following command:
             kubectl apply -f example-ingress.yaml 
             

Access the application using TLS

After completing the previous step, you should be able to access this service from any computer connected to the internet.

To access the application using TLS

    1. Log in to a terminal window on a machine that has access to the internet, and run the following:
           $ curl https://rsa-2048.example.com --cacert cacert.pem 
           
    2. You should see output similar to the following:
           Hostname: hello-world-d8fbd49c6-9bczb
      
      Pod Information:
          -no pod information available-
      
      Server values:
          server_version=nginx: 1.13.3 - lua: 10008
      
      Request Information:
          client_address=192.162.32.64
          method=GET
          real path=/
          query=
          request_version=1.1
          request_scheme=http
          request_uri=http://rsa-2048.example.com:8080/
      
      Request Headers:
          accept=*/*
          host=rsa-2048.example.com
          user-agent=curl/7.62.0
          x-forwarded-for=52.94.2.2
          x-forwarded-host=rsa-2048.example.com
          x-forwarded-port=443
          x-forwarded-proto=https
          x-real-ip=52.94.2.2
          x-request-id=371b6fc15a45d189430281693cccb76e
          x-scheme=https
      
      Request Body:
          -no body in request-
           

      This response is returned from the service running behind the Kubernetes Ingress controller and demonstrates that a successful TLS handshake happened at port 443 with the https protocol.

    3. You can use the following command to verify that the certificate issued earlier is being used for the SSL handshake:
           echo | openssl s_client -showcerts -servername www.rsa-2048.example.com -connect www.rsa-2048.example.com:443 2>/dev/null | openssl x509 -inform pem -noout -text
           

Cleanup

To avoid incurring future charges on your AWS account, perform the following steps to remove the scenario.

Delete the ACM Private CA

You can delete the ACM Private CA by following the instructions in Deleting your private CA.

Alternatively, you can use the following commands to delete the ACM Private CA, replacing <CA_ARN> and <Region> with your own values:

    • Disable the CA:
           aws acm-pca update-certificate-authority \
             --certificate-authority-arn <CA_ARN> \
             --region <Region> \
             --status DISABLED
           
    • Call the delete-certificate-authority API:
           aws acm-pca delete-certificate-authority \
             --certificate-authority-arn <CA_ARN> \
             --region <Region> \
             --permanent-deletion-time-in-days 7
           

Continue the cleanup

Once the ACM Private CA has been deleted, continue the cleanup by running the following commands.

    • Delete the services:
           kubectl delete -f hello-world.yaml
           
    • Delete the Ingress resource:
           kubectl delete -f example-ingress.yaml
           
    • Delete the IAM NodeInstanceRole, replacing the role name with the EKS node instance role created for the demo:
           aws iam delete-role --role-name eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers-NodeInstanceProfile-XXXXXXX
           
    • Delete the Amazon EKS cluster using the eksctl command:
           eksctl delete cluster acm-pca-lab --region us-east-2
           

You can also clean up from your CloudFormation console by deleting the stacks named eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers and eksctl-acm-pca-lab-cluster.
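If you prefer the CLI for this step as well, a sketch like the following deletes the same stacks; delete the nodegroup stack before the cluster stack:

    aws cloudformation delete-stack --stack-name eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers --region us-east-2
    aws cloudformation delete-stack --stack-name eksctl-acm-pca-lab-cluster --region us-east-2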

Conclusion

In this blog post, we showed you how to set up a Kubernetes Ingress controller with a service running in an Amazon EKS cluster, using the AWS Load Balancer Controller with a Network Load Balancer and setting up HTTPS using private certificates issued by ACM Private CA. If you have questions or want to contribute, join the aws-privateca-issuer add-on project on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.