
[T&C] Thunder ADC with Thunder Kubernetes Connector (TKC) using CRDs

siddharthaa · Member, Administrator · edited November 2022 in DevOps

In an earlier article, we saw how you can use the Thunder Kubernetes Connector (TKC) to dynamically configure the Thunder ADC for load-balancing traffic to a Kubernetes cluster. In that article, we specified the SLB configuration using annotations in an Ingress resource.


Starting with TKC v1.11, you can use A10's Custom Resource Definitions (CRDs) to create SLB objects as native resources in the Kubernetes API. TKC then configures these SLB objects, along with any custom options you specify, on the Thunder device.

TKC provides CRDs for the following SLB objects:

  • virtual-server
  • virtual-port
  • service-group
  • slb cipher template
  • slb server-ssl template
  • slb client-ssl template
  • health monitor
  • ip nat pool


In this article, we will see how you can specify the SLB configuration using CRDs.


Video Demo

For a demo of the steps below, see the accompanying video.


Setup

The K8s cluster consists of:

  • Master: 172.16.1.12
  • Node: 172.16.1.13
  • Node: 172.16.1.14

 The Pod network is 192.168.0.0/16.



The K8s version is:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}


The current configuration of the Thunder device is:

harmony-controller telemetry
 log-rate 1750
!
ip dns primary 8.8.8.8
!
ip dns secondary 9.9.9.9
!
timezone America/Los_Angeles
!
ntp server time.google.com
!
glm use-mgmt-port
glm enable-requests
!
interface management
 ip address 10.64.4.32 255.255.255.0
 ip control-apps-use-mgmt-port
 ip default-gateway 10.64.4.1
 enable
!
interface ethernet 1
 enable
 ip address 172.16.1.10 255.255.255.0
!
interface ethernet 2
!
!
ip route 0.0.0.0 /0 172.16.1.1
!
sflow setting local-collection
!
sflow collector ip 127.0.0.1 6343
!
!
end


Configuration

Deploy a web app in the K8s cluster

In the K8s cluster, deploy a web service named web01, with Service type as NodePort:

$ cat web01.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webmap01
data:
  index.html: "<html><h1>This is web service web01</h1><html>"
---
apiVersion: v1
kind: Service
metadata:
  name: web01
spec:
  type: NodePort
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: web01
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web01
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web01
  template:
    metadata:
      labels:
        app: web01
    spec:
      volumes:
        - name: volmap
          configMap:
            name: webmap01
      containers:
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
          volumeMounts:
            - name: volmap
              mountPath: /usr/share/nginx/html/


$ kubectl apply -f web01.yaml

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
web01-68c59f6f4c-dcvs2   1/1     Running   0          6d
web01-68c59f6f4c-qrfjd   1/1     Running   0          6d
web01-68c59f6f4c-z865r   1/1     Running   0          6d
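
If needed, you can confirm which NodePort Kubernetes allocated for web01 (in this setup it was 30451, as seen later in the Thunder configuration):

```shell
# Print the cluster-assigned NodePort for the web01 Service.
kubectl get svc web01 -o jsonpath='{.spec.ports[0].nodePort}'
```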



Enable HTTPS management access on the Thunder ADC interface

The Thunder Kubernetes Connector (TKC) will run as a Pod within the K8s cluster and will communicate with Thunder ADC on interface ethernet 1. For this, enable management access on that interface of the Thunder ADC:

enable-management service https
 ethernet 1



Deploy K8s Resources

Create K8s Secret for Thunder device username/password

Create a Secret object containing the base64-encoded default username/password for the Thunder ADC (admin/a10):

$ cat a10-secret.yaml
apiVersion: v1
kind: Secret
metadata:
 name: a10-secret
type: Opaque
data:
 username: YWRtaW4=
 password: YTEw

$ kubectl apply -f a10-secret.yaml
secret/a10-secret created
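
The base64 values used in the Secret can be generated with the standard base64 utility (use echo -n so that no trailing newline gets encoded):

```shell
# Encode the default Thunder credentials for use in the Secret manifest.
echo -n admin | base64   # YWRtaW4=
echo -n a10 | base64     # YTEw
```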


Create ServiceAccount, ClusterRole, and ClusterRoleBinding objects

Deploy ServiceAccount, ClusterRole, and ClusterRoleBinding objects. This ServiceAccount will subsequently be applied to the TKC, thereby granting the TKC the required permissions to access the various resources within the K8s cluster.

$ cat a10-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: a10-ingress
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: a10-ingress
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["tkc.a10networks.com"]
    resources: ["natpools", "healthmonitors", "virtualservers", "servicegroups", "serverssls", "virtualports", "templateciphers", "clientssls", "serverssls/status", "clientssls/status"]
    verbs: ["get", "watch", "list", "patch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: a10-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: a10-ingress
subjects:
  - kind: ServiceAccount
    name: a10-ingress
    namespace: default

$ kubectl apply -f a10-rbac.yaml
serviceaccount/a10-ingress created
clusterrole.rbac.authorization.k8s.io/a10-ingress created
clusterrolebinding.rbac.authorization.k8s.io/a10-ingress created



Deploy Ingress Resource

Deploy an Ingress Resource with ingress class of a10-ext:

$ cat a10-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: a10-ingress01
  namespace: default
  annotations:
    kubernetes.io/ingress.class: a10-ext
spec:
  rules:
  - host: web01.a10tests.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web01
            port:
              number: 8080

$ kubectl apply -f a10-ingress.yaml
ingress.networking.k8s.io/a10-ingress01 created

$ kubectl get ingress
NAME            CLASS    HOSTS                ADDRESS   PORTS   AGE
a10-ingress01   <none>   web01.a10tests.com             80      19s


Deploy TKC

Here we deploy TKC version v1.11.0.0.

$ cat a10-tkc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: a10-thunder-kubernetes-connector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thunder-kubernetes-connector
  template:
    metadata:
      labels:
        app: thunder-kubernetes-connector
    spec:
      serviceAccountName: a10-ingress
      containers:
      - name: thunder-kubernetes-connector
        image: a10networks/a10-kubernetes-connector:1.11.0.0
        imagePullPolicy: IfNotPresent
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: WATCH_NAMESPACE
          value: default
        - name: CONTROLLER_URL
          value: https://172.16.1.10
        - name: ACOS_USERNAME_PASSWORD_SECRETNAME
          value: a10-secret
        - name: PARTITION
          value: shared
        args:
        - --watch-namespace=$(WATCH_NAMESPACE)
        - --use-node-external-ip=true
        - --patch-to-update=true
        - --safe-acos-delete=true
        - --use-ingress-class-only=true
        - --ingress-class=a10-ext

$ kubectl apply -f a10-tkc.yaml
deployment.apps/a10-thunder-kubernetes-connector created
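
Once the Deployment is created, it is worth checking that the TKC Pod is running and reviewing its logs for any connection errors against the Thunder device:

```shell
# Verify the TKC Pod is up, then inspect its logs.
kubectl get pods -l app=thunder-kubernetes-connector
kubectl logs deploy/a10-thunder-kubernetes-connector
```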



Install and Deploy CRDs for SLB objects

Install TKC CRDs

Install A10's TKC CRDs in the Kubernetes cluster:

$ kubectl create -f https://a10networks.github.io/tkc-doc/crd/a10-crd-installation.yaml

$ kubectl api-resources | grep a10
clientssls        a10clissl    tkc.a10networks.com/v1alpha1   true   ClientSsl
healthmonitors    a10hm        tkc.a10networks.com/v1         true   HealthMonitor
natpools          a10natpool   tkc.a10networks.com/v1         true   NatPool
serverssls        a10srvssl    tkc.a10networks.com/v1alpha1   true   ServerSsl
servicegroups     a10sg        tkc.a10networks.com/v1         true   ServiceGroup
templateciphers   a10tc        tkc.a10networks.com/v1alpha1   true   TemplateCipher
virtualports      a10vport     tkc.a10networks.com/v1         true   VirtualPort
virtualservers    a10vip       tkc.a10networks.com/v1alpha1   true   VirtualServer
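
With the CRDs installed, you can inspect the schema of each new resource type before writing manifests against it, for example:

```shell
# Show the documented fields of the HealthMonitor CRD spec.
kubectl explain healthmonitor.spec
```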


HealthMonitor Resource

$ cat crd-hm-http.yaml
apiVersion: "tkc.a10networks.com/v1"
kind: HealthMonitor
metadata:
 name: hm-http
spec:
 name: hm-http
 type: http
 port: 80
 retry: 5
 interval: 10
 timeout: 2
 httpMethod: "GET"
 httpUrl: "/"
 httpCode: 200

$ kubectl apply -f crd-hm-http.yaml
healthmonitor.tkc.a10networks.com/hm-http created

$ kubectl get a10hm
NAME      NAME      STATUS   TYPE   HOST   URL   CODE
hm-http   hm-http   Active   http          /     200


ServiceGroup Resource

Note that the ServiceGroup Resource binds to a specific Service Resource, in this case, web01:

$ cat crd-sg-web01.yaml
apiVersion: "tkc.a10networks.com/v1"
kind: ServiceGroup
metadata:
  name: sg-web01
spec:
  name: "sg-web01"
  service: "web01"
  healthCheck: "hm-http"

$ kubectl apply -f crd-sg-web01.yaml
servicegroup.tkc.a10networks.com/sg-web01 created

$ kubectl get a10sg
NAME       NAME       STATUS   SERVICE   PROTOCOL   METHOD
sg-web01   sg-web01   Active   web01     tcp        round-robin


VirtualServer Resource

$ cat crd-vip-web01.yaml
apiVersion: "tkc.a10networks.com/v1alpha1"
kind: VirtualServer
metadata:
 name: vip-web01
spec:
 name: vip-web01
 ip-address: 172.16.1.251

$ kubectl apply -f crd-vip-web01.yaml
virtualserver.tkc.a10networks.com/vip-web01 created

$ kubectl get a10vip
NAME        NAME        VIP            STATUS
vip-web01   vip-web01   172.16.1.251   Active


VirtualPort Resource

Note that the VirtualPort resource binds to a specific Ingress resource, in this case, a10-ingress01:

$ cat crd-vport-web01-http.yaml
apiVersion: "tkc.a10networks.com/v1"
kind: VirtualPort
metadata:
  name: vport-web01-http
spec:
  virtualServer: "vip-web01"
  port: 80
  protocol: "http"
  ingress: "a10-ingress01"
  snatAuto: 1

$ kubectl apply -f crd-vport-web01-http.yaml
virtualport.tkc.a10networks.com/vport-web01-http created

$ kubectl get a10vport
NAME              PORT  PROTOCOL  STATUS  VIRTUALSERVER  INGRESS        ROUTE
vport-web01-http  80    http      Active  vip-web01      a10-ingress01


Verification

On the Thunder ADC, you will see that TKC has added the following SLB configuration:

health monitor hm-http
  retry 5
  interval 10 timeout 2
  method http port 80 expect response-code 200 url GET /
!
slb server 172.16.1.13 172.16.1.13
  port 30451 tcp
!
slb server 172.16.1.14 172.16.1.14
  port 30451 tcp
!
slb service-group sg-web01 tcp
  health-check hm-http
  member 172.16.1.13 30451
  member 172.16.1.14 30451
!
slb template http default-a10-ingress01-httpTemplate
  url-switching starts-with / service-group sg-web01
!
slb virtual-server vip-web01 172.16.1.251
  port 80 http
    source-nat auto
    service-group sg-web01
    template http default-a10-ingress01-httpTemplate


On the client machine, we have a DNS entry for "web01.a10tests.com" mapped to the VIP on the Thunder ADC. Now we can access the website http://web01.a10tests.com from this client machine:

$ curl http://web01.a10tests.com
<html><h1>This is web service web01</h1><html>
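
If the client machine has no DNS entry for the hostname, curl's --resolve option can pin web01.a10tests.com to the VIP for a quick test:

```shell
# Map web01.a10tests.com:80 to the VIP without touching DNS.
curl --resolve web01.a10tests.com:80:172.16.1.251 http://web01.a10tests.com/
```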

