
How to Manage Kubernetes with Ansible [Tutorial]

What's this blog post about?

Ansible is a powerful tool that can be used to manage Kubernetes clusters. In this guide, we will cover how to use Ansible to deploy your Kubernetes environment and the resources in your Kubernetes cluster. We will also discuss how you can integrate Ansible with CI/CD tools like Jenkins or GitOps tools like Argo CD for a seamless workflow.

First, let's set up our Ansible Control Node. For this guide, we will assume that you have already installed Ansible on your control node and configured it to connect to your target machines (master, worker nodes, and proxy). If not, please refer to the official Ansible documentation for installation and configuration instructions.

Next, let's create an inventory file (kube_inventory) under ~/ansible/inventory/. This file will contain the IP addresses or host names of all your target machines:

[master]
10.x.x.x

[workers]
10.x.x.x
10.x.x.x

[proxy-servers]
10.x.x.x #add your proxy IP or DNS name here

Now, let's create a playbook file named kube_master.yml under ~/ansible/playbooks/. This playbook is responsible for deploying the Kubernetes master node. Replace YOUR_USERPROFILE_NAME with the name of the user profile under /home/ that you are installing Kubernetes with:

- name: Install and Configure Kubernetes Master Node
  hosts: master
  become: yes
  tasks:
    - name: Update System Packages
      apt:
        name: "{{ item }}"
        state: latest
        update_cache: yes
      with_items:
        - software-properties-common
        - python3-pip
        - python3-venv
        - python3-setuptools
        - git

    - name: Install Docker CE
      shell: curl -fsSL https://get.docker.com/ | sudo sh
      args:
        chdir: $HOME

    - name: Add Kubernetes Repository Key
      shell: curl -s https://packages.cloud.google.com/apt/doc/repository.gpg.key | apt-key add -
      args:
        chdir: $HOME

    - name: Add Kubernetes Repository Line to Sources List
      copy:
        content: "deb http://apt.kubernetes.io/ kubernetes-xenial main"
        dest: /etc/apt/sources.list.d/kubernetes.list

    - name: Update System Packages
      apt:
        update_cache: yes

    - name: Install Kubelet, Kubeadm and Kubectl
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - kubelet=1.29.2-00
        - kubeadm=1.29.2-00
        - kubectl=1.29.2-00

    - name: Configure Kubernetes Master Node
      copy:
        content: |
          KUBECONFIG=/etc/kubernetes/admin.conf
          KUBERNETES_PROVIDER=aws
        dest: /home/YOUR_USERPROFILE_NAME/.bashrc
        owner: YOUR_USERPROFILE_NAME

    - name: Source Bash Profile
      shell: source /home/YOUR_USERPROFILE_NAME/.bashrc
      args:
        chdir: $HOME
        executable: /bin/bash

    - name: Initialize Kubernetes Master Node
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> master_node_setup.log
      args:
        chdir: $HOME
        creates: master_node_setup.log

    - name: Copy Admin Conf to User Kube Config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/YOUR_USERPROFILE_NAME/.kube/config
        remote_src: yes
        owner: YOUR_USERPROFILE_NAME

    - name: Install Pod Network
      become: yes
      become_user: YOUR_USERPROFILE_NAME
      shell: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.log
      args:
        chdir: $HOME
        creates: pod_network_setup.log

To run the master playbook, use the following from your Ansible Control Node:

ansible-playbook ~/ansible/playbooks/kube_master.yml -i ~/ansible/inventory/kube_inventory
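Once the playbook finishes, you can spot-check the control plane from the Ansible Control Node with an ad-hoc command before moving on. This is a quick sketch; it reads the admin kubeconfig that kubeadm writes to /etc/kubernetes/admin.conf, so it runs with become:

ansible master -i ~/ansible/inventory/kube_inventory -b -m shell -a "kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes"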
Once we have successfully deployed the master node, we can connect the worker nodes to it. Let's create kube_workers.yml under ~/ansible/playbooks/. Make sure to replace YOUR_MASTER_IP with the IP address of your master node and YOUR_USERPROFILE_NAME with the name of the user profile in your /home/ directory that you are installing Kubernetes under:

- name: Configure Join Command on Master Node
  hosts: master
  become: yes
  tasks:
    - name: Retrieve Join Command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: Set Join Command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- name: Join Worker Nodes
  hosts: workers
  become: yes
  tasks:
    - name: Wait for TCP port 6443 on the master to be reachable from the worker
      wait_for:
        host: YOUR_MASTER_IP
        port: 6443
        timeout: 1

    - name: Join worker to cluster
      shell: "{{ hostvars['YOUR_MASTER_IP'].join_command }} >> node_joined.log"
      args:
        chdir: /home/YOUR_USERPROFILE_NAME
        creates: node_joined.log

To run the worker nodes playbook, use the following from your Ansible Control Node:

ansible-playbook ~/ansible/playbooks/kube_workers.yml -i ~/ansible/inventory/kube_inventory

Once the playbook runs successfully, you can validate that the cluster is working properly by running the commands below from your master node:

kubectl get nodes
kubectl get all -A

We will now copy the kube config of the master node to ~/.kube/config on our proxy. From the master node, run the command below, replacing USERNAME and PROXY_NODE_IP with the user and IP address of your proxy:

sudo scp /etc/kubernetes/admin.conf USERNAME@PROXY_NODE_IP:~/.kube/config

Check ~/.kube/config on your proxy machine to make sure the config is in place, and run the following from the proxy to confirm you can access your cluster:

kubectl get nodes
kubectl get all -A

The next step is to deploy Kubernetes task manifests from our Ansible Control Node. Already, you can see how much time Ansible saves when setting up and configuring a Kubernetes cluster: you can easily add another Ubuntu server to your Ansible inventory file and re-run the playbooks to add another node to your Kubernetes cluster, and you have more control over the state of your Kubernetes nodes.

Before we start, validate that you are able to ping your proxy from your Ansible control node to ensure you have connectivity. Let's make sure our inventory file (~/ansible/inventory/kube_inventory) includes the proxy IP or host name (if you have DNS configured):

[master]
10.x.x.x

[workers]
10.x.x.x
10.x.x.x

[proxy-servers]
10.x.x.x #add your proxy IP or DNS name here

Let's create a simple playbook file named create_namespace.yml in ~/ansible/playbooks/ to create a namespace in your Kubernetes cluster:

- name: Create K8S resource
  hosts: proxy-servers
  tasks:
    - name: Create K8S namespace
      kubernetes.core.k8s:
        name: my-namespace
        api_version: v1
        kind: Namespace
        state: present

Run your Ansible playbook command:

ansible-playbook ~/ansible/playbooks/create_namespace.yml -i ~/ansible/inventory/kube_inventory

Once the playbook run is complete, go to your proxy and validate that you can see the namespace that was created by running the following:

kubectl get namespace

And there you have it: you have just used Ansible to deploy a Kubernetes task manifest to your Kubernetes cluster. Here are some other playbooks (Deployments, Services, and ConfigMaps) you can test running from your Ansible Control Node.
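Note that the kubernetes.core.k8s tasks in these playbooks need the kubernetes.core collection available to Ansible and the Python kubernetes client installed on the proxy (the host that actually executes the tasks). A minimal sketch of installing both, assuming pip3 is available on the proxy:

#On the Ansible Control Node:
ansible-galaxy collection install kubernetes.core

#On the proxy host that runs the kubernetes.core.k8s tasks:
pip3 install kubernetes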
You can use the following application/service deployment task manifest to deploy an nginx application:

- name: Application Deployment
  hosts: proxy-servers
  tasks:
    - name: Create a Deployment
      kubernetes.core.k8s:
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: myapp
            namespace: my-namespace
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: myapp
            template:
              metadata:
                labels:
                  app: myapp
              spec:
                containers:
                  - name: myapp-container
                    image: nginx:latest
                    ports:
                      - containerPort: 80

    - name: Expose Deployment as a Service
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: myapp-service
            namespace: my-namespace
          spec:
            selector:
              app: myapp
            ports:
              - protocol: TCP
                port: 80
                targetPort: 80
            type: LoadBalancer

You can also manage your Kubernetes environment variables with Ansible using a ConfigMap and a Secret:

- name: Manage ConfigMaps and Secrets
  hosts: proxy-servers
  tasks:
    - name: Create ConfigMap
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: app-configmap
            namespace: my-namespace
          data:
            config.json: |
              {
                "key": "value"
              }

    - name: Create Secret
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: myapp-secret
            namespace: my-namespace
          stringData:
            password: mypassword

Nowadays, most of us gravitate towards a managed (PaaS) solution for our Kubernetes cluster, hosted by Azure, AWS, GCP, or others. I want to briefly cover how you would connect an Azure AKS cluster to your Ansible-proxy workflow. The same process applies to Amazon EKS and Google GKE with their dedicated CLI commands.

Let's go to our proxy machine and run the commands below. We will install the Azure CLI tools and use 'az login' to log in to Azure. This validates that we can connect to our AKS cluster from the proxy and that the kube config on the proxy is updated.

#Install Azure CLI (or any other cloud provider CLI tools):
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

#Login to Azure:
az login

#Add the Azure AKS cluster to the proxy's ~/.kube/config:
az aks get-credentials --name name_of_aks_cluster --resource-group name_of_aks_rg

#Test access to the K8S cluster:
kubectl get nodes
kubectl get all -A

Once we validate that we can access our AKS cluster nodes and other resources, we can move to the Ansible Control Node and run some of the previous playbooks against the proxy:

- name: Create K8S resource
  hosts: proxy-servers
  tasks:
    - name: Create K8S namespace
      kubernetes.core.k8s:
        name: my-namespace
        api_version: v1
        kind: Namespace
        state: present

Run your Ansible playbook command:

ansible-playbook ~/ansible/playbooks/create_namespace.yml -i ~/ansible/inventory/kube_inventory

Once the playbook run is complete, go to your proxy and validate that you can see the namespace that was created by running the following:

kubectl get namespace

We have confirmed we can run playbooks against our Azure AKS cluster. One thing to note is that we switched the proxy's ~/.kube/config over to the Azure AKS cluster config. Typically, you will have a multi-cluster environment, so you will need to keep separate config files under ~/.kube/ and point your Ansible playbooks at the correct one. The kubernetes.core.k8s module accepts a kubeconfig parameter (and optionally a context) for this:

- name: Create a resource in a specific cluster
  kubernetes.core.k8s:
    kubeconfig: /path/to/kubeconfig
    name: my-namespace
    api_version: v1
    kind: Namespace
    state: present
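If several tasks in a play target the same cluster, you can also set the kubeconfig once for the whole play instead of repeating it on every task. This is a sketch using module_defaults, with an assumed filename for the AKS kubeconfig:

- name: Manage resources on the AKS cluster
  hosts: proxy-servers
  module_defaults:
    kubernetes.core.k8s:
      kubeconfig: /home/YOUR_USERPROFILE_NAME/.kube/aks-config   # assumed location of the AKS config
  tasks:
    - name: Ensure namespace exists
      kubernetes.core.k8s:
        name: my-namespace
        api_version: v1
        kind: Namespace
        state: present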
Implementing Ansible in your CI/CD workflows comes down to two main methods:

- Use Ansible in a Jenkins pipeline, which allows for direct deployment and configuration of Kubernetes resources from within the pipeline. Jenkins can trigger Ansible playbooks as part of the deployment process and apply the changes directly to the Kubernetes cluster. This is an ideal approach if you are looking for a more hands-on, scriptable method to manage Kubernetes deployments.
- Integrate Ansible with a GitOps tool for Kubernetes, such as Argo CD or Flux. In this setup, Ansible focuses on the pre-processing steps needed to generate the Kubernetes manifests before deployment. Since Argo CD and Flux watch the Git repository for Kubernetes manifest changes, you can add a step to your CI/CD pipeline that triggers an Ansible playbook to dynamically generate or update the manifest files in the repository, based on configurations and environments, through Jinja2 templates (see the sketch after the Jenkins example below). Ansible's strength is that its operations are idempotent, which ensures consistent deployments without unnecessary reconfiguration.

Ansible in a Jenkins CI/CD Pipeline

Here is an example of how you would use Ansible to deploy Kubernetes manifests in a Jenkins CI/CD pipeline.

Jenkinsfile:

pipeline {
    agent any

    environment {
        ANSIBLE_HOST_KEY_CHECKING = "False"
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        // subsequent stages run the Ansible playbooks against the cluster
    }
}
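A deployment stage that runs one of the playbooks from this guide could look like the sketch below. The stage name and the playbook and inventory paths inside the Jenkins workspace are assumptions, not part of the original pipeline:

        stage('Deploy to Kubernetes') {
            steps {
                // Run the namespace playbook from the checked-out repository;
                // the paths assume the repo mirrors the ~/ansible layout used earlier.
                sh 'ansible-playbook playbooks/create_namespace.yml -i inventory/kube_inventory'
            }
        }

For the GitOps approach described above, the manifest-generation step can be a small playbook that renders a Jinja2 template into the repository before the change is committed for Argo CD or Flux to pick up. The template path, output path, and app_image variable below are illustrative assumptions:

- name: Render Kubernetes manifests for GitOps
  hosts: localhost
  connection: local
  vars:
    app_image: nginx:1.25   # assumed value consumed inside the template
  tasks:
    - name: Generate a Deployment manifest from a Jinja2 template
      template:
        src: templates/deployment.yaml.j2   # assumed template in the repo
        dest: manifests/deployment.yaml     # assumed path watched by Argo CD/Flux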

Company
Spacelift

Date published
April 11, 2024

Author(s)
Faisal Hashem

Word count
4362

Hacker News points
None found.

Language
English


By Matt Makai. 2021-2024.