AWS - How I set up my EKS

Intro

This article records how I set up AWS EKS for the team.

Here are my requirements.

  • I can use SSM and SSH keys to access the EKS workers from a bastion instance.
  • All EKS add-ons can be installed and work.
  • An ALB can be created and accept requests from the internet.
  • Workers are hosted only in private subnets.

Here is my network architecture.

  • Build a two-tier network in a VPC.
  • Build a bastion instance in the public subnet.

Two-tier network

1. Set up authorization

Before setting up EKS, we must prepare the IAM roles and security groups for the services.

IAM

  • EKS Master
  • EKS Worker
  • Vertical Pod Autoscaler
  • Application Load Balancer

According to the AWS docs, VPA and ALB use IRSA, so we need to wait for the EKS creation to finish and then create the OIDC provider. Therefore the IAM roles for VPA and ALB will be created later.

Role for EKS Master

Trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

And attach the policy:

  • AmazonEKSClusterPolicy
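
If you prefer the CLI, here is a minimal sketch for creating this role; the role name eksMasterRole and the file name master-trust.json are my own placeholders, not names from the demo.

aws iam create-role \
    --role-name eksMasterRole \
    --assume-role-policy-document file://master-trust.json

aws iam attach-role-policy \
    --role-name eksMasterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy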

Role for EKS Worker

Trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

And attach the policies:

  • AmazonEBSCSIDriverPolicy, for the EBS CSI add-on
  • AmazonEC2ContainerRegistryReadOnly, for ECR
  • AmazonEKS_CNI_Policy, for the VPC CNI add-on; if you use IPv6, you need to create the policy yourself
  • AmazonEKSWorkerNodePolicy, for the EKS worker
  • AmazonSSMManagedInstanceCore, for Session Manager
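
A similar CLI sketch for the worker role; eksWorkerRole and worker-trust.json are illustrative names.

aws iam create-role \
    --role-name eksWorkerRole \
    --assume-role-policy-document file://worker-trust.json

# Attach the five managed policies listed above.
for policy in AmazonEBSCSIDriverPolicy AmazonEC2ContainerRegistryReadOnly \
              AmazonEKS_CNI_Policy AmazonEKSWorkerNodePolicy AmazonSSMManagedInstanceCore; do
    aws iam attach-role-policy \
        --role-name eksWorkerRole \
        --policy-arn arn:aws:iam::aws:policy/$policy
done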

Security Group

  • EKS Master
  • EKS Worker

EKS will create default security groups for the master and nodes, but I don't want to use the default ones, so I create my own to manage.

Here are the rules.

Security group for EKS Master

  Type       Source/Destination   Protocol      Port
  Inbound    Self                 All traffic   All
  Inbound    Bastion              TCP           22
  Inbound    Bastion              TCP           443
  Outbound   0.0.0.0/0            All traffic   All
  Outbound   ::/0                 All traffic   All

Security group for EKS Worker

  Type       Source/Destination   Protocol      Port
  Inbound    Self                 All traffic   All
  Inbound    EKS Master           TCP           443
  Inbound    Bastion              TCP           22
  Outbound   0.0.0.0/0            All traffic   All
  Outbound   ::/0                 All traffic   All
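
As an illustration, the bastion-to-worker SSH rule could be added with the CLI like this; the group IDs are placeholders:

aws ec2 authorize-security-group-ingress \
    --group-id <WORKER_SG_ID> \
    --protocol tcp \
    --port 22 \
    --source-group <BASTION_SG_ID>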

2. Build up Kubernetes

In this step, I use Terraform to create the EKS cluster, including the OIDC provider for IRSA. You can check the demo at this repository.

Note: the ADOT add-on will fail to install for now; we will add the required settings in the next steps. My demo includes creating the OIDC provider.

After creation, we can run this command to add the cluster's credentials to our kubeconfig.

aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>
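
For reference, here is a minimal Terraform sketch of the cluster plus the IAM OIDC provider that IRSA needs; the resource names, variables, and the demo-eks cluster name are assumptions for illustration, not the exact demo code.

# EKS control plane, using the role and network pieces prepared above.
resource "aws_eks_cluster" "this" {
  name     = "demo-eks"
  role_arn = var.eks_master_role_arn              # EKS Master role from step 1

  vpc_config {
    subnet_ids         = var.private_subnet_ids   # workers stay in private subnets
    security_group_ids = [var.eks_master_sg_id]
  }
}

# IRSA requires an IAM OIDC provider that trusts the cluster's issuer URL.
data "tls_certificate" "oidc" {
  url = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "this" {
  url             = aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}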

3. Add add-ons and services

After the EKS cluster is created, we need to add some components to make our services work better.

Metrics Server

This is a very important component in EKS: both the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler depend on it, so we need to install it first.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get deployment metrics-server -n kube-system
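
If the installation worked, the metrics API should start answering after a minute or so; kubectl top is a quick check:

kubectl top nodes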

Reference

AWS Distro for OpenTelemetry

  1. Create a service account & namespace for ADOT

     kubectl apply -f https://amazon-eks.s3.amazonaws.com/docs/addons-otel-permissions.yaml
     kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
    
  2. Create IRSA for ADOT

     eksctl create iamserviceaccount \
         --name adot-collector \
         --namespace <NAMESPACE> \
         --cluster <CLUSTER_NAME> \
         --attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
         --attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
         --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
         --approve \
         --override-existing-serviceaccounts
    
  3. Install collectors. (The placeholder substitution can also be scripted; see the sketch after this list.)

    • CloudWatch
      • Download YAML: curl -o collector-config-cloudwatch.yaml https://raw.githubusercontent.com/aws-observability/aws-otel-community/master/sample-configs/operator/collector-config-cloudwatch.yaml
      • Replace values
        • serviceAccount: this must match the name field from the previous step, adot-collector
        • value: <YOUR_EKS_CLUSTER_NAME>
        • region: <YOUR_AWS_REGION>
        • name: this must match the name field from the previous step, adot-collector
      • Apply YAML: kubectl apply -f collector-config-cloudwatch.yaml
    • X-Ray
      • Download YAML: curl -o collector-config-xray.yaml https://raw.githubusercontent.com/aws-observability/aws-otel-community/master/sample-configs/operator/collector-config-xray.yaml
      • Replace values
        • serviceAccount: this must match the name field from the previous step, adot-collector
        • value: <YOUR_EKS_CLUSTER_NAME>
        • region: <YOUR_AWS_REGION>
      • Apply YAML: kubectl apply -f collector-config-xray.yaml
    • Prometheus on DaemonSet
      • Download YAML: curl -o collector-config-advanced.yaml https://raw.githubusercontent.com/aws-observability/aws-otel-community/master/sample-configs/operator/collector-config-advanced.yaml
      • Replace values
        • serviceAccount: this must match the name field from the previous step, adot-collector
        • value: <YOUR_EKS_CLUSTER_NAME>
        • region: <YOUR_AWS_REGION>
        • name: this must match the name field from the previous step, adot-collector
      • Apply YAML: kubectl apply -f collector-config-advanced.yaml
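
The substitutions above can be scripted instead of edited by hand; a sketch, assuming the sample file uses these literal placeholders and using my-cluster and us-east-1 as example values:

sed -i \
    -e 's/<YOUR_EKS_CLUSTER_NAME>/my-cluster/g' \
    -e 's/<YOUR_AWS_REGION>/us-east-1/g' \
    collector-config-cloudwatch.yaml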

Reference

Vertical Pod Autoscaler

Requirements

  • Metrics Server is installed.
  • kubectl can communicate with your Amazon EKS cluster.
  • OpenSSL 1.1.1 or later is installed on your device, since the install script uses the -addext option.

Installation

  1. Download the repository: git clone https://github.com/kubernetes/autoscaler.git
  2. Go inside: cd autoscaler/vertical-pod-autoscaler/
  3. Deploy VPA: ./hack/vpa-up.sh. If your OpenSSL does not support the -addext option, this step will fail.

If you installed another OpenSSL version under a different name, you can change the OpenSSL command name in autoscaler/vertical-pod-autoscaler/pkg/admission-controller/gencerts.sh. For example, I installed openssl11 on Amazon Linux, so I changed openssl to openssl11 in gencerts.sh, as in the one-liner below.
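
A one-liner for that substitution, assuming the binary is literally named openssl11 and you are in the vertical-pod-autoscaler directory:

sed -i 's/openssl/openssl11/g' pkg/admission-controller/gencerts.sh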

If you have already deployed another version of the Vertical Pod Autoscaler, remove it with the following command.

./hack/vpa-down.sh
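
Once VPA is running, you can point it at a workload. A minimal sketch; the Deployment name my-app is an assumption:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
    name: my-app-vpa
spec:
    targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
    updatePolicy:
        updateMode: "Auto"   # VPA applies its recommendations automatically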

Reference

Application Load Balancer

Before we install the ALB controller, we need to install ExternalDNS first. ExternalDNS updates Route 53 records for us when we create an Ingress backed by an ALB. If you do not plan to expose your ALB to the internet, you can skip this step.

The ALB controller requires the OIDC provider.

Install external-dns

  1. Create IAM Policy

     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": ["route53:ChangeResourceRecordSets"],
                 "Resource": ["arn:aws:route53:::hostedzone/*"]
             },
             {
                 "Effect": "Allow",
                 "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
                 "Resource": ["*"]
             }
         ]
     }
    
  2. Create IRSA. Replace $POLICY_ARN with the ARN of the IAM policy from step 1 (a CLI sketch for creating it follows this list).

     eksctl create iamserviceaccount \
         --cluster $EKS_CLUSTER_NAME \
         --name "external-dns" \
         --namespace ${EXTERNALDNS_NS:-"default"} \
         --attach-policy-arn $POLICY_ARN \
         --approve
    
  3. Save the external-dns manifest below as externaldns-with-rbac.yaml (the file name used in step 5).

     # comment out sa if it was previously created
     apiVersion: v1
     kind: ServiceAccount
     metadata:
         name: external-dns
         labels:
             app.kubernetes.io/name: external-dns
     ---
     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRole
     metadata:
         name: external-dns
         labels:
             app.kubernetes.io/name: external-dns
     rules:
         - apiGroups: ['']
           resources: ['services', 'endpoints', 'pods', 'nodes']
           verbs: ['get', 'watch', 'list']
         - apiGroups: ['extensions', 'networking.k8s.io']
           resources: ['ingresses']
           verbs: ['get', 'watch', 'list']
     ---
     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRoleBinding
     metadata:
         name: external-dns-viewer
         labels:
             app.kubernetes.io/name: external-dns
     roleRef:
         apiGroup: rbac.authorization.k8s.io
         kind: ClusterRole
         name: external-dns
     subjects:
         - kind: ServiceAccount
           name: external-dns
           namespace: default # change to desired namespace: externaldns, kube-addons
     ---
     apiVersion: apps/v1
     kind: Deployment
     metadata:
         name: external-dns
         labels:
             app.kubernetes.io/name: external-dns
     spec:
         strategy:
             type: Recreate
         selector:
             matchLabels:
                 app.kubernetes.io/name: external-dns
         template:
             metadata:
                 labels:
                     app.kubernetes.io/name: external-dns
             spec:
                 serviceAccountName: external-dns
                 containers:
                     - name: external-dns
                       image: k8s.gcr.io/external-dns/external-dns:v0.11.0
                       args:
                           - --source=service
                           - --source=ingress
                           - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
                           - --provider=aws
                           - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
                           - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
                           - --registry=txt
                           - --txt-owner-id=external-dns
                       env:
                           - name: AWS_DEFAULT_REGION
                             value: us-east-1 # change to region where EKS is installed
                 # # Uncomment below if using static credentials
                 #        - name: AWS_SHARED_CREDENTIALS_FILE
                 #          value: /.aws/credentials
                 #      volumeMounts:
                 #        - name: aws-credentials
                 #          mountPath: /.aws
                 #          readOnly: true
                 #  volumes:
                 #    - name: aws-credentials
                 #      secret:
                 #        secretName: external-dns
    
  4. Replace the values in the YAML file from step 3.

    • Comment out the ServiceAccount section; we already created the service account in step 2.
    • Change --domain-filter to your domain.
    • Change the AWS_DEFAULT_REGION env value to your region.
  5. Deploy external-dns.

     kubectl create --filename externaldns-with-rbac.yaml --namespace ${EXTERNALDNS_NS:-"default"}
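
As referenced in step 2, here is a hedged sketch for registering the step 1 policy; external-dns-policy.json and the policy name are my own placeholders. The command prints the ARN to use as $POLICY_ARN. After step 5, the controller logs are a quick sanity check.

aws iam create-policy \
    --policy-name ExternalDNSRoute53Policy \
    --policy-document file://external-dns-policy.json

kubectl logs --namespace ${EXTERNALDNS_NS:-"default"} deployment/external-dns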
    

Install AWS Load Balancer Controller

Requirements
Installation
  1. Create the IAM policy AWSLoadBalancerControllerIAMPolicy. The --policy-document flag does not accept a URL, so download the JSON first.

     curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.3/docs/install/iam_policy.json
     aws iam create-policy \
         --policy-name AWSLoadBalancerControllerIAMPolicy \
         --policy-document file://iam_policy.json
    
  2. Create IRSA, replacing these values.

    • <IAM_POLICY> with the IAM policy ARN from step 1
    • <EKS_CLUSTER> with your EKS cluster name
    • <ROLE_NAME> with an IAM role name

      eksctl create iamserviceaccount \
        --cluster=<EKS_CLUSTER> \
        --namespace=kube-system \
        --name=aws-load-balancer-controller \
        --role-name <ROLE_NAME> \
        --attach-policy-arn=<IAM_POLICY> \
        --approve \
        --override-existing-serviceaccounts
      
  3. Install the AWS Load Balancer Controller using Helm V3

     helm repo add eks https://aws.github.io/eks-charts
     helm repo update
     helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
         -n kube-system \
         --set clusterName=<EKS_CLUSTER> \
         --set serviceAccount.create=false \
         --set serviceAccount.name=aws-load-balancer-controller
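
To tie everything together, here is a hedged Ingress sketch that asks the controller for an internet-facing ALB and lets ExternalDNS publish the record; demo, demo-svc, and demo.example.com are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: demo
    annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB, per the requirements
        alb.ingress.kubernetes.io/target-type: ip
spec:
    ingressClassName: alb
    rules:
        - host: demo.example.com   # ExternalDNS upserts this record in Route 53
          http:
              paths:
                  - path: /
                    pathType: Prefix
                    backend:
                        service:
                            name: demo-svc
                            port:
                                number: 80

Once the load balancer is provisioned, kubectl get ingress demo should show the ALB's DNS name in the ADDRESS column.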
    

Reference
