
Automated build of HA k3s Cluster with kube-vip and MetalLB

Fully Automated K3S etcd High Availability Install

This playbook will build an HA Kubernetes cluster with k3s, kube-vip and MetalLB via ansible.

This is based on the work from this fork, which is based on the work from k3s-io/k3s-ansible. It uses kube-vip to create a load balancer for the control plane, and MetalLB for its service LoadBalancer.

If you want more context on how this works, see:

📄 Documentation (including example commands)

📺 Watch the Video

📖 k3s Ansible Playbook

Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install an HA Kubernetes cluster on machines running:

  • Debian (tested on version 11)
  • Ubuntu (tested on version 22.04)
  • Rocky (tested on version 9)

on the following processor architectures:

  • x64
  • arm64
  • armhf

System requirements

  • The control node (the machine from which you run Ansible commands) must have Ansible 2.11+. If you need a quick primer on Ansible, you can check out my docs on setting up Ansible.

  • You will also need to install the collections this playbook uses by running ansible-galaxy collection install -r ./collections/requirements.yml (important).

  • The netaddr package must be available to Ansible. If you installed Ansible via apt, this is already taken care of. If you installed Ansible via pip, make sure to install netaddr into the respective virtual environment.

  • Server and agent nodes should allow passwordless SSH access; if not, you can supply credentials by adding --ask-pass --ask-become-pass to each command.

🚀 Getting Started

🍴 Preparation

First, create a new directory based on the sample directory within the inventory directory:

cp -R inventory/sample inventory/my-cluster

Second, edit inventory/my-cluster/hosts.ini to match the systems in your environment.

For example:

[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node

If multiple hosts are in the master group, the playbook will automatically set up k3s in HA mode with etcd.
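As an illustration of that behavior (this mirrors the idea, not the playbook's actual implementation), the HA decision comes down to how many hosts appear in the [master] group:

```shell
# Hypothetical sketch: count hosts in the [master] section of an INI
# inventory to see whether HA mode with embedded etcd would apply.
cat > /tmp/hosts.ini <<'EOF'
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
EOF

# Print non-empty lines between [master] and the next section header,
# then count them (tr strips any padding from wc's output).
masters=$(awk '/^\[master\]/{f=1;next} /^\[/{f=0} f&&NF' /tmp/hosts.ini | wc -l | tr -d ' ')

if [ "$masters" -gt 1 ]; then
  echo "HA mode with embedded etcd ($masters masters)"
else
  echo "single-master mode"
fi
```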

Finally, copy ansible.example.cfg to ansible.cfg and adapt the inventory path to match the files that you just created.
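For example, the relevant setting in ansible.cfg might look like this (the path assumes the my-cluster inventory created above):

```ini
[defaults]
inventory = inventory/my-cluster/hosts.ini
```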

This requires at least k3s version 1.19.1; however, the version is configurable via the k3s_version variable.

If needed, you can also edit inventory/my-cluster/group_vars/all.yml to match your environment.
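As a minimal sketch of the kinds of values you might adjust there (the variable names k3s_version and apiserver_endpoint appear elsewhere in this README; the values shown here are placeholders, not defaults):

```yaml
---
# Placeholder version; any k3s release >= v1.19.1 should work.
k3s_version: v1.25.12+k3s1

# Virtual IP address for the control plane (served by kube-vip).
apiserver_endpoint: 192.168.30.222
```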

☸️ Create Cluster

Start provisioning of the cluster using the following command:

ansible-playbook site.yml -i inventory/my-cluster/hosts.ini

After deployment, the control plane will be accessible via the virtual IP address defined in inventory/my-cluster/group_vars/all.yml as apiserver_endpoint.

🔥 Remove k3s cluster

ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini

You should also reboot these nodes afterwards, since the VIP is not destroyed by the reset; the included reboot.yml playbook can handle this.

⚙️ Kube Config

To copy your kube config locally so that you can access your Kubernetes cluster run:

scp debian@master_ip:~/.kube/config ~/.kube/config
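To avoid overwriting an existing ~/.kube/config, you can instead copy the file to a separate path and point kubectl at it via the KUBECONFIG environment variable. A sketch (the scp step depends on your cluster, so it is shown as a comment; the file written here is a placeholder to demonstrate the mechanism):

```shell
# On a real cluster you would fetch the file first, e.g.:
#   scp debian@192.168.30.38:~/.kube/config /tmp/k3s-kubeconfig
# Create a placeholder kubeconfig to stand in for that copy:
printf 'apiVersion: v1\nkind: Config\n' > /tmp/k3s-kubeconfig

# kubectl (and most Kubernetes tooling) honors KUBECONFIG:
export KUBECONFIG=/tmp/k3s-kubeconfig
echo "Using kubeconfig at: $KUBECONFIG"
```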

🔨 Testing your cluster

See the commands here.

Troubleshooting

Be sure to see this post on how to troubleshoot common problems.

Testing the playbook using molecule

This playbook includes a molecule-based test setup. It is run automatically in CI, but you can also run the tests locally. This might be helpful for quick feedback in a few cases. You can find more information about it here.

Pre-commit Hooks

This repo uses pre-commit and pre-commit-hooks to lint and fix common style and syntax errors. Be sure to install the Python packages and then run pre-commit install. For more information, see pre-commit.

🌌 Ansible Galaxy

This collection can now be used in larger Ansible projects.

Instructions:

  • create or modify a file collections/requirements.yml in your project
collections:
  - name: ansible.utils
  - name: community.general
  - name: ansible.posix
  - name: kubernetes.core
  - name: https://github.com/techno-tim/k3s-ansible.git
    type: git
    version: master
  • install via ansible-galaxy collection install -r ./collections/requirements.yml
  • every role is now available via the prefix techno_tim.k3s_ansible, e.g. techno_tim.k3s_ansible.lxc
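For example, a playbook in your project could then reference a role from the collection like this (the hosts pattern and play name are illustrative):

```yaml
---
- name: Prepare LXC hosts using the k3s-ansible collection
  hosts: k3s_cluster
  become: true
  roles:
    - techno_tim.k3s_ansible.lxc
```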

Thanks 🤝

This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas: