commit ecb65e5f297b765b45b093a244215772b3ba4fe7
Author: torrella
Date:   Tue Apr 8 09:54:04 2025 +0200

    first commit

diff --git a/buscar_k3s.sh b/buscar_k3s.sh
new file mode 100644
index 0000000..670afc4
--- /dev/null
+++ b/buscar_k3s.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+
+# Define the search directory (change it to suit your needs)
+DIRECTORIO_BUSQUEDA="/ruta/al/directorio"
+
+# Define the string to search for
+CADENA_BUSCADA="k3s"
+
+# Check whether the directory exists
+if [ ! -d "$DIRECTORIO_BUSQUEDA" ]; then
+  echo "The directory $DIRECTORIO_BUSQUEDA does not exist."
+  exit 1
+fi
+
+# Search for the string in every file of the directory tree
+echo "Searching for the string '$CADENA_BUSCADA' in $DIRECTORIO_BUSQUEDA..."
+grep -rn "$CADENA_BUSCADA" "$DIRECTORIO_BUSQUEDA"
+
+# Check whether any matches were found
+if [ $? -eq 0 ]; then
+  echo "Search finished. Matches were found."
+else
+  echo "Search finished. No matches were found."
+fi
diff --git a/doc/1 b/doc/1
new file mode 100644
index 0000000..afc5d62
--- /dev/null
+++ b/doc/1
@@ -0,0 +1,144 @@
+### Exhaustive Documentation of Problems and Solutions
+
+This document details the process we followed to resolve the various problems related to the use of Minikube, Kubernetes, and the Kubernetes Dashboard in your environment. Each interaction and the solutions applied are described below, clearly and in detail.
+
+---
+
+#### **Interaction 1: Connection Problem with the Kubernetes Server**
+
+**Problem:**
+When running commands such as `kubectl get pods`, an error appeared indicating that the Kubernetes server could not be reached.
+
+```bash
+kubectl get pods
+E0310 10:51:36.294414 1140375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.59.100:8443/api?timeout=32s\": dial tcp 192.168.59.100:8443: i/o timeout"
+```
+
+**Diagnosis and Resolution:**
+The error pointed to a connectivity problem with the Kubernetes API server. This could have several causes, such as network issues or an incorrect configuration of Minikube or the K3s service.
+
+1. **Restart Minikube**:
+   We first tried restarting Minikube, which re-establishes the cluster's configuration and connectivity.
+
+   ```bash
+   minikube start
+   ```
+
+2. **Restart the K3s service**:
+   In some cases Kubernetes services such as K3s may not be working correctly, so we restarted the K3s service.
+
+   ```bash
+   sudo systemctl restart k3s
+   ```
+
+**Result:**
+After these steps we restored connectivity to the Kubernetes server and could run `kubectl` commands without problems.
+
+---
+
+#### **Interaction 2: Connectivity Problems with the API Server in Minikube**
+
+**Problem:**
+After connectivity had been restored, a similar error reappeared when performing operations with `kubectl`.
+
+```bash
+kubectl cluster-info
+Kubernetes control plane is running at https://192.168.59.100:8443
+```
+
+Although connectivity was restored for a moment, a persistent network problem with Minikube appeared when interacting with the cluster.
+
+**Diagnosis and Resolution:**
+1. **Check the Cluster Status**:
+   The `kubectl cluster-info` command was used to verify that Kubernetes was running and reachable.
+
+   ```bash
+   kubectl cluster-info
+   ```
+
+2. **Check the Network Configuration**:
+   If connectivity to the API server still did not work correctly after restarting Minikube, a network restart might have been necessary.
+
+**Result:**
+The state of Kubernetes and Minikube appeared stable after running these commands, and the pods and the other cluster resources were displayed correctly.
+
+---
+
+#### **Interaction 3: Kubernetes Dashboard Pods in an Error State**
+
+**Problem:**
+When listing the Kubernetes dashboard pods, we observed that one of them, `kubernetes-dashboard-kong`, was in a `CrashLoopBackOff` state.
+
+```bash
+kubectl get pods -n kubernetes-dashboard
+kubernetes-dashboard kubernetes-dashboard-kong-79867c9c48-v9snj 0/1 CrashLoopBackOff
+```
+
+**Diagnosis and Resolution:**
+1. **Check the Logs of the Failing Pod**:
+   The pod logs were reviewed to identify what was causing the `CrashLoopBackOff`. In this case the error was related to a socket bind attempt:
+
+   ```bash
+   kubectl logs -n kubernetes-dashboard kubernetes-dashboard-kong-79867c9c48-v9snj
+   ```
+
+   The error showed the following:
+   ```
+   nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
+   ```
+
+   This indicated that a process was trying to bind a socket to an address that was already in use, which caused the container to crash repeatedly.
+
+2. **Delete the Failing Pod**:
+   To resolve the socket conflict we decided to delete the failing pod, letting Kubernetes recreate it automatically.
+
+   ```bash
+   kubectl delete pod -n kubernetes-dashboard kubernetes-dashboard-kong-79867c9c48-v9snj
+   ```
+
+3. **Check the State of the Pods**:
+   After deleting the faulty pod, the system recreated a new pod in its place. We verified that the new pod was in a "Running" state and had no problems.
+
+   ```bash
+   kubectl get pods -n kubernetes-dashboard
+   ```
+
+**Result:**
+The pod was recreated correctly and the `CrashLoopBackOff` problem was resolved.
+
+---
+
+#### **Interaction 4: Checking the Network Configuration and the Kubernetes Resources**
+
+**Problem:**
+After the dashboard pod had been recreated, we checked the cluster resources and their connectivity to make sure everything was working correctly.
+
+**Diagnosis and Resolution:**
+1. **Check the Current Pods**:
+   A query was run to obtain information about the pods in the `kubernetes-dashboard` namespace.
+
+   ```bash
+   kubectl get pods -n kubernetes-dashboard
+   ```
+
+2. **Check the Logs of the New Pods**:
+   The logs of the newly created pod were also reviewed to make sure there were no longer any socket-related problems.
+
+   ```bash
+   kubectl logs -n kubernetes-dashboard kubernetes-dashboard-kong-79867c9c48-fk4j2
+   ```
+
+**Result:**
+The check showed that the newly created pod was running without problems and no new errors were reported.
+
+---
+
+### Conclusion
+
+Throughout this process we resolved several problems related to Kubernetes connectivity, the Minikube configuration, and the stability of the dashboard pods. The key steps were:
+
+1. **Re-establishing connectivity with Kubernetes** by reinstalling and restarting Minikube and the K3s service.
+2. **Diagnosing and resolving the errors in the `kubernetes-dashboard-kong` pod**, deleting the faulty pod and letting Kubernetes recreate it.
+3. **Checking and fixing the cluster resources**, making sure the Kubernetes dashboard pods were in a healthy state.
+
+Each of these problems was resolved following a systematic approach, making sure the Kubernetes cluster infrastructure was operational and error free.
diff --git a/doc/1.1 b/doc/1.1
new file mode 100644
index 0000000..62a0672
--- /dev/null
+++ b/doc/1.1
@@ -0,0 +1,124 @@
+Below is detailed documentation of the problem that occurred, the steps that were taken, and the solutions applied to resolve it.
+
+---
+
+# **Problem and Solution Documentation for the K3s Configuration**
+
+## **1. Description of the Problem**
+
+When trying to interact with the K3s cluster through the `kubectl` command, the following error appeared:
+
+```
+error: You must be logged in to the server (the server has asked for the client to provide credentials)
+```
+
+This indicated a problem with client authentication against the Kubernetes server, which could be caused by an untrusted certificate or by problems with the configuration of access to the server.
+
+### **Specific Errors Observed:**
+
+1. **Unknown certificate authority**:
+   ```
+   error: couldn't get current server API group list: tls: failed to verify certificate: x509: certificate signed by unknown authority
+   ```
+
+2. **Authentication and connection problems**:
+   ```
+   error: You must be logged in to the server (the server has asked for the client to provide credentials)
+   ```
+
+3. **Problems when trying to connect to the local server**:
+   ```
+   error: Couldn't get current server API group list: dial tcp [::1]:8080: connect: connection refused
+   ```
+
+## **2. Diagnosis of the Problem**
+
+### **2.1 Main cause**
+The main cause appeared to be that the certificate used by the K3s server was not recognized by the `kubectl` client. This happens when the K3s server certificate is not in the system's list of trusted certificates.
+
+### **2.2 Possible secondary causes**
+- **`kubectl` configuration error:** The configuration file (`k3s.yaml`) may not have been configured correctly or may not have been accessible to the user.
+- **Client authentication problem:** `kubectl` needs a certificate and a key that match the server's in order to authenticate the user. If the `k3s.yaml` file was not configured correctly, the client could not authenticate properly.
+
+## **3. Solutions Applied**
+
+### **3.1 Solution for the untrusted certificate problem**
+
+1. **Obtain the certificate authority from the `k3s.yaml` file:**
+   First, we extracted the certificate from the K3s `k3s.yaml` file. This file contains the certificate-authority data that must be added to the system.
+
+   ```bash
+   sudo cat /etc/rancher/k3s/k3s.yaml
+   ```
+
+   The `certificate-authority-data` field was extracted and decoded:
+
+   ```bash
+   sudo bash -c "echo $(grep 'certificate-authority-data' /etc/rancher/k3s/k3s.yaml | awk '{print $2}') | base64 --decode > /usr/local/share/ca-certificates/k3s.crt && sudo update-ca-certificates"
+   ```
+
+   This installed the certificate authority on the system and registered it in the trusted certificate store.
+
+2. **Result:**
+   The system recognized the certificate authority and added it to the trusted certificate store.
+
+### **3.2 Solution for the authentication problem (user and keys)**
+
+1. **Check the `kubectl` configuration:**
+   Next, we exported the `KUBECONFIG` environment variable so that `kubectl` points at the K3s configuration file:
+
+   ```bash
+   export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+   ```
+
+2. **Check the `k3s.yaml` file:**
+   We verified that the `k3s.yaml` file was correctly configured with the client certificates and the appropriate key, making sure authentication was set up properly.
+
+3. **Check the file permissions:**
+   The `k3s.yaml` file should have suitable permissions so that the user running `kubectl` can access it. When permission problems appeared, the appropriate permissions were granted:
+
+   ```bash
+   sudo chmod 644 /etc/rancher/k3s/k3s.yaml
+   ```
+
+### **3.3 Solution for the connection problem with the server**
+
+The errors related to connecting to `localhost:8080` indicated that the K3s server was not running or was not reachable.
+
+1. **Check the state of the K3s service:**
+   We verified that the K3s service was running:
+
+   ```bash
+   sudo systemctl status k3s
+   ```
+
+   If the service was not active, it was restarted:
+
+   ```bash
+   sudo systemctl restart k3s
+   ```
+
+2. **Reconfigure `kubectl` to point at the correct IP:**
+   Since `kubectl` was trying to connect to `localhost:8080`, we made sure the `k3s.yaml` file contained the correct server URL (for example, `https://127.0.0.1:6443`).
+
+   Where necessary, the `k3s.yaml` file was adjusted to make sure the IP address and the port were configured correctly.
+
+## **4. Final State and Tests**
+
+After carrying out the steps above, the authentication and certificate verification problems were resolved, and the following command ran correctly:
+
+```bash
+kubectl get nodes
+```
+
+The cluster responded correctly, showing the nodes in the K3s cluster.
+
+## **5. Conclusions**
+
+- The main problem was that the `kubectl` client did not trust the K3s server certificate.
+- It was resolved by extracting the certificate authority and adding it to the system's trust store.
+- In addition, the environment variables and the permissions of the `k3s.yaml` configuration file were set up correctly so that `kubectl` could access the connection and authentication information.
+
+---
+
+This document describes the steps taken to resolve the problem.
diff --git a/doc/1.2 b/doc/1.2
new file mode 100644
index 0000000..57da1e0
--- /dev/null
+++ b/doc/1.2
@@ -0,0 +1,93 @@
+Yes, it is possible to configure Kubernetes access for the user `usuari` without using `sudo`. This can be arranged so that the user `usuari` has access to the K3s configuration file (`/etc/rancher/k3s/k3s.yaml`) and to the required certificates without needing superuser permissions.
+
+Here is how to do it:
+
+### 1. **Change the permissions of the configuration file**
+The file `/etc/rancher/k3s/k3s.yaml` is, by default, accessible only by the `root` user, which requires the use of `sudo`. To let the user `usuari` access this file, you need to change its permissions.
+
+#### **Steps:**
+
+1. **Change the permissions of the configuration file**
+   Change the ownership of `/etc/rancher/k3s/k3s.yaml` so that the user `usuari` can read it.
+
+   ```bash
+   sudo chown usuari:usuari /etc/rancher/k3s/k3s.yaml
+   ```
+
+   This gives the user `usuari` read and write access to that specific file.
+
+2. **Allow `usuari` to read the certificates**
+   If you have certificates or private keys stored in directories that require special permissions (such as `/etc/ssl/certs` or `/usr/local/share/ca-certificates`), you will have to change the permissions of those files or directories as well.
+
+   You can do it like this:
+
+   ```bash
+   sudo chmod 644 /usr/local/share/ca-certificates/k3s.crt
+   ```
+
+   This lets the user `usuari` read the certificate file and add it to the system certificate store.
+
+### 2. **Configure `kubectl` to work without `sudo`**
+
+Make sure the `k3s.yaml` file is correctly referenced by the `KUBECONFIG` variable without needing `sudo`.
+
+#### **Steps:**
+
+1. **Set `KUBECONFIG` for the user `usuari`**
+   Make sure the user `usuari` exports the `KUBECONFIG` environment variable in their session, or persistently, so that the file `/etc/rancher/k3s/k3s.yaml` is used, as follows:
+
+   - To set this only for the current session:
+     ```bash
+     export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+     ```
+
+   - To make this configuration persist across future sessions, add the line above to the end of the `.bashrc` or `.zshrc` file, depending on your shell. If you use `bash`, you can do it like this:
+     ```bash
+     echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
+     ```
+
+     Then make sure to reload the configuration file:
+     ```bash
+     source ~/.bashrc
+     ```
+
+2. **Verify that `kubectl` now works correctly without `sudo`:**
+   The user `usuari` should now be able to run `kubectl` without needing `sudo`.
+
+   ```bash
+   kubectl get nodes
+   ```
+
+### 3. **Configure the permissions of the K3s socket**
+K3s exposes its API on a socket that, by default, is usually accessible only by `root`. However, you can configure the Kubernetes socket so that the user `usuari` also has access without `sudo`.
+
+#### **Steps:**
+
+1. **Inspect the K3s socket**
+   Normally the K3s socket is at `/var/run/k3s/k3s.sock` or a similar path.
+
+   Check the location of the K3s socket:
+   ```bash
+   sudo ls -l /var/run/k3s/
+   ```
+
+2. **Change the permissions of the K3s socket**
+   If the socket has restrictive permissions, you can change them so that the user `usuari` can access it.
+
+   ```bash
+   sudo chmod 666 /var/run/k3s/k3s.sock
+   ```
+
+   This allows any user to read from and write to the socket.
+
+3. **Verify that `kubectl` works without `sudo`**
+   Once this is done, the user `usuari` should be able to communicate with the Kubernetes API server without problems.
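To double-check that none of the steps above still requires elevated privileges, the following is a minimal verification sketch (it assumes the kubeconfig and certificate paths used earlier in this document); run it as `usuari` in a fresh shell:

```bash
# Minimal verification sketch -- run as the unprivileged user "usuari".
# Paths are the ones used earlier in this document; adjust them if yours differ.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# The kubeconfig and the CA certificate should be readable without sudo
ls -l /etc/rancher/k3s/k3s.yaml /usr/local/share/ca-certificates/k3s.crt

# These should now succeed without sudo; a permission or TLS error here means
# one of the previous steps (chown/chmod, update-ca-certificates) did not take effect
kubectl get nodes
kubectl get pods -A
```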
+
+### Summary of what was done:
+
+- **Changed the permissions of the `k3s.yaml` file** so that the user `usuari` can access it without `sudo`.
+- **Made sure the certificates** are readable by `usuari` to avoid unknown-authority errors.
+- **Set the `KUBECONFIG` environment variable** to point at the correct file.
+- **Adjusted the permissions of the K3s socket** to allow communication without `sudo`.
+
+With these steps, the user `usuari` will be able to interact with Kubernetes using `kubectl` without having to use `sudo`.
diff --git a/k3s-ansible/.ansible-lint b/k3s-ansible/.ansible-lint
new file mode 100644
index 0000000..94f4c3b
--- /dev/null
+++ b/k3s-ansible/.ansible-lint
@@ -0,0 +1,21 @@
+---
+profile: production
+exclude_paths:
+  # default paths
+  - .cache/
+  - .github/
+  - test/fixtures/formatting-before/
+  - test/fixtures/formatting-prettier/
+
+  # The "converge" and "reset" playbooks use import_playbook in
+  # conjunction with the "env" lookup plugin, which lets the
+  # syntax check of ansible-lint fail.
+  - molecule/**/converge.yml
+  - molecule/**/prepare.yml
+  - molecule/**/reset.yml
+
+  # The file was generated by galaxy ansible - don't mess with it.
+  - galaxy.yml
+
+skip_list:
+  - var-naming[no-role-prefix]
diff --git a/k3s-ansible/.editorconfig b/k3s-ansible/.editorconfig
new file mode 100644
index 0000000..02c5127
--- /dev/null
+++ b/k3s-ansible/.editorconfig
@@ -0,0 +1,13 @@
+root = true
+[*]
+indent_style = space
+indent_size = 2
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+end_of_line = lf
+max_line_length = off
+[Makefile]
+indent_style = tab
+[*.go]
+indent_style = tab
diff --git a/k3s-ansible/.pre-commit-config.yaml b/k3s-ansible/.pre-commit-config.yaml
new file mode 100644
index 0000000..c1e58c2
--- /dev/null
+++ b/k3s-ansible/.pre-commit-config.yaml
@@ -0,0 +1,35 @@
+---
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.5.0
+    hooks:
+      - id: requirements-txt-fixer
+      - id: sort-simple-yaml
+      - id: detect-private-key
+      - id: check-merge-conflict
+      - id: end-of-file-fixer
+      - id: mixed-line-ending
+      - id: trailing-whitespace
+        args: [--markdown-linebreak-ext=md]
+  - repo: https://github.com/adrienverge/yamllint.git
+    rev: v1.33.0
+    hooks:
+      - id: yamllint
+        args: [-c=.yamllint]
+  - repo: https://github.com/ansible-community/ansible-lint.git
+    rev: v6.22.2
+    hooks:
+      - id: ansible-lint
+  - repo: https://github.com/shellcheck-py/shellcheck-py
+    rev: v0.9.0.6
+    hooks:
+      - id: shellcheck
+  - repo: https://github.com/Lucas-C/pre-commit-hooks
+    rev: v1.5.4
+    hooks:
+      - id: remove-crlf
+      - id: remove-tabs
+  - repo: https://github.com/sirosen/texthooks
+    rev: 0.6.4
+    hooks:
+      - id: fix-smartquotes
diff --git a/k3s-ansible/.yamllint b/k3s-ansible/.yamllint
new file mode 100644
index 0000000..12f8331
--- /dev/null
+++ b/k3s-ansible/.yamllint
@@ -0,0 +1,20 @@
+---
+extends: default
+
+rules:
+  comments:
+    min-spaces-from-content: 1
+  comments-indentation: false
+  braces:
+    max-spaces-inside: 1
+  octal-values:
+    forbid-implicit-octal: true
+    forbid-explicit-octal: true
+  line-length:
+    max: 120
+    level: warning
+  truthy:
+    allowed-values: ["true", "false"]
+
+ignore:
+  - galaxy.yml
diff --git a/k3s-ansible/LICENSE b/k3s-ansible/LICENSE
new file mode 100644
index 0000000..4757b96
--- /dev/null
+++ b/k3s-ansible/LICENSE
@@ -0,0 +1,177 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1.
Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
diff --git a/k3s-ansible/README.md b/k3s-ansible/README.md
new file mode 100644
index 0000000..10cbafd
--- /dev/null
+++ b/k3s-ansible/README.md
@@ -0,0 +1,235 @@
+# Automated build of HA k3s Cluster with `kube-vip` and MetalLB
+
+![Fully Automated K3S etcd High Availability Install](https://img.youtube.com/vi/CbkEWcUZ7zM/0.jpg)
+
+This playbook will build an HA Kubernetes cluster with `k3s`, `kube-vip` and MetalLB via `ansible`.
+
+This is based on the work from [this fork](https://github.com/212850a/k3s-ansible) which is based on the work from [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible). It uses [kube-vip](https://kube-vip.io/) to create a load balancer for the control plane, and [metal-lb](https://metallb.universe.tf/installation/) for its service `LoadBalancer`.
+
+If you want more context on how this works, see:
+
+📄 [Documentation](https://technotim.live/posts/k3s-etcd-ansible/) (including example commands)
+
+📺 [Watch the Video](https://www.youtube.com/watch?v=CbkEWcUZ7zM)
+
+## 📖 k3s Ansible Playbook
+
+Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install an HA Kubernetes cluster on machines running:
+
+- [x] Debian (tested on version 11)
+- [x] Ubuntu (tested on version 22.04)
+- [x] Rocky (tested on version 9)
+
+on processor architecture:
+
+- [X] x64
+- [X] arm64
+- [X] armhf
+
+## ✅ System requirements
+
+- Control Node (the machine you are running `ansible` commands on) must have Ansible 2.11+. If you need a quick primer on Ansible, [you can check out my docs on setting up Ansible](https://technotim.live/posts/ansible-automation/).
+
+- You will also need to install the collections that this playbook uses by running `ansible-galaxy collection install -r ./collections/requirements.yml` (important❗)
+
+- The [`netaddr` package](https://pypi.org/project/netaddr/) must be available to Ansible. If you have installed Ansible via apt, this is already taken care of. If you have installed Ansible via `pip`, make sure to install `netaddr` into the respective virtual environment.
+
+- `server` and `agent` nodes should have passwordless SSH access; if not, you can supply arguments to provide credentials (`--ask-pass --ask-become-pass`) to each command.
+
+## 🚀 Getting Started
+
+### 🍴 Preparation
+
+First, create a new directory based on the `sample` directory within the `inventory` directory:
+
+```bash
+cp -R inventory/sample inventory/my-cluster
+```
+
+Second, edit `inventory/my-cluster/hosts.ini` to match the system information gathered above.
+
+For example:
+
+```ini
+[master]
+192.168.30.38
+192.168.30.39
+192.168.30.40
+
+[node]
+192.168.30.41
+192.168.30.42
+
+[k3s_cluster:children]
+master
+node
+```
+
+If multiple hosts are in the master group, the playbook will automatically set up k3s in [HA mode with etcd](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).
+
+Finally, copy `ansible.example.cfg` to `ansible.cfg` and adapt the inventory path to match the files that you just created.
+
+This requires at least k3s version `1.19.1`; however, the version is configurable by using the `k3s_version` variable.
+
+If needed, you can also edit `inventory/my-cluster/group_vars/all.yml` to match your environment.
+
+### ☸️ Create Cluster
+
+Start provisioning of the cluster using the following command:
+
+```bash
+ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
+```
+
+After deployment, the control plane will be accessible via the virtual IP address that is defined in `inventory/group_vars/all.yml` as `apiserver_endpoint`.
+
+### 🔥 Remove k3s cluster
+
+```bash
+ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
+```
+
+> You should also reboot these nodes, because the VIP is not destroyed.
+
+## ⚙️ Kube Config
+
+To copy your `kube config` locally so that you can access your **Kubernetes** cluster, run:
+
+```bash
+scp debian@master_ip:/etc/rancher/k3s/k3s.yaml ~/.kube/config
+```
+
+If you get a file permission denied error, go onto the node and temporarily run:
+
+```bash
+sudo chmod 777 /etc/rancher/k3s/k3s.yaml
+```
+
+Then copy with the scp command and reset the permissions back with:
+
+```bash
+sudo chmod 600 /etc/rancher/k3s/k3s.yaml
+```
+
+You'll then want to modify the config to point to the master IP by running:
+
+```bash
+sudo nano ~/.kube/config
+```
+
+Then change `server: https://127.0.0.1:6443` to match your master IP: `server: https://192.168.1.222:6443`
+
+### 🔨 Testing your cluster
+
+See the commands [here](https://technotim.live/posts/k3s-etcd-ansible/#testing-your-cluster).
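In addition, a quick smoke test from your workstation might look like the sketch below (it assumes the `debian` user and the example master IP from the kube config section above; adjust both for your environment):

```bash
# Sketch: fetch the kubeconfig from the first master and point it at that master's IP
MASTER_IP=192.168.1.222   # example value from the kube config section above

# If this fails with permission denied, see the temporary chmod note in the kube config section
scp debian@${MASTER_IP}:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Replace the default loopback endpoint with the master's address
sed -i "s|https://127.0.0.1:6443|https://${MASTER_IP}:6443|" ~/.kube/config
chmod 600 ~/.kube/config

# Basic checks: all nodes Ready and system pods Running
kubectl get nodes -o wide
kubectl get pods -A
```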
+
+### Variables
+
+| Role(s) | Variable | Type | Default | Required | Description |
+|---|---|---|---|---|---|
+| `download` | `k3s_version` | string | ❌ | Required | K3s binaries version |
+| `k3s_agent`, `k3s_server`, `k3s_server_post` | `apiserver_endpoint` | string | ❌ | Required | Virtual IP address configured on each master |
+| `k3s_agent` | `extra_agent_args` | string | `null` | Not required | Extra arguments for agent nodes |
+| `k3s_agent`, `k3s_server` | `group_name_master` | string | `null` | Not required | Name of the master group |
+| `k3s_agent` | `k3s_token` | string | `null` | Not required | Token used to communicate between masters |
+| `k3s_agent`, `k3s_server` | `proxy_env` | dict | `null` | Not required | Internet proxy configurations |
+| `k3s_agent`, `k3s_server` | `proxy_env.HTTP_PROXY` | string | ❌ | Required | HTTP internet proxy |
+| `k3s_agent`, `k3s_server` | `proxy_env.HTTPS_PROXY` | string | ❌ | Required | HTTPS internet proxy |
+| `k3s_agent`, `k3s_server` | `proxy_env.NO_PROXY` | string | ❌ | Required | Addresses that will not use the proxies |
+| `k3s_agent`, `k3s_server`, `reset` | `systemd_dir` | string | `/etc/systemd/system` | Not required | Path to systemd services |
+| `k3s_custom_registries` | `custom_registries_yaml` | string | ❌ | Required | YAML block defining custom registries. The following is an example that pulls all images used in this playbook through your private registries. It also allows you to pull your own images from your private registry, without having to use imagePullSecrets in your deployments. If all you need is your own images and you don't care about caching the docker/quay/ghcr.io images, you can just remove those from the mirrors: section. |
+| `k3s_server`, `k3s_server_post` | `cilium_bgp` | bool | `~` | Not required | Enable cilium BGP control plane for LB services and pod cidrs. Disables the use of MetalLB. |
+| `k3s_server`, `k3s_server_post` | `cilium_iface` | string | ❌ | Not required | The network interface used when Cilium is enabled |
+| `k3s_server` | `extra_server_args` | string | `""` | Not required | Extra arguments for server nodes |
+| `k3s_server` | `k3s_create_kubectl_symlink` | bool | `false` | Not required | Create the kubectl -> k3s symlink |
+| `k3s_server` | `k3s_create_crictl_symlink` | bool | `true` | Not required | Create the crictl -> k3s symlink |
+| `k3s_server` | `kube_vip_arp` | bool | `true` | Not required | Enables kube-vip ARP broadcasts |
+| `k3s_server` | `kube_vip_bgp` | bool | `false` | Not required | Enables kube-vip BGP peering |
+| `k3s_server` | `kube_vip_bgp_routerid` | string | `"127.0.0.1"` | Not required | Defines the router ID for the kube-vip BGP server |
+| `k3s_server` | `kube_vip_bgp_as` | string | `"64513"` | Not required | Defines the AS for the kube-vip BGP server |
+| `k3s_server` | `kube_vip_bgp_peeraddress` | string | `"192.168.30.1"` | Not required | Defines the address for the kube-vip BGP peer |
+| `k3s_server` | `kube_vip_bgp_peeras` | string | `"64512"` | Not required | Defines the AS for the kube-vip BGP peer |
+| `k3s_server` | `kube_vip_bgp_peers` | list | `[]` | Not required | List of BGP peer ASN & address pairs |
+| `k3s_server` | `kube_vip_bgp_peers_groups` | list | `['k3s_master']` | Not required | Inventory group in which to search for additional `kube_vip_bgp_peers` parameters to merge.
| +| `k3s_server` | `kube_vip_iface` | string | `~` | Not required | Explicitly define an interface that ALL control nodes should use to propagate the VIP, define it here. Otherwise, kube-vip will determine the right interface automatically at runtime. | +| `k3s_server` | `kube_vip_tag_version` | string | `v0.7.2` | Not required | Image tag for kube-vip | +| `k3s_server` | `kube_vip_cloud_provider_tag_version` | string | `main` | Not required | Tag for kube-vip-cloud-provider manifest when enable | +| `k3s_server`, `k3_server_post` | `kube_vip_lb_ip_range` | string | `~` | Not required | IP range for kube-vip load balancer | +| `k3s_server`, `k3s_server_post` | `metal_lb_controller_tag_version` | string | `v0.14.3` | Not required | Image tag for MetalLB | +| `k3s_server` | `metal_lb_speaker_tag_version` | string | `v0.14.3` | Not required | Image tag for MetalLB | +| `k3s_server` | `metal_lb_type` | string | `native` | Not required | Use FRR mode or native. Valid values are `frr` and `native` | +| `k3s_server` | `retry_count` | int | `20` | Not required | Amount of retries when verifying that nodes joined | +| `k3s_server` | `server_init_args` | string | ❌ | Not required | Arguments for server nodes | +| `k3s_server_post` | `bpf_lb_algorithm` | string | `maglev` | Not required | BPF lb algorithm | +| `k3s_server_post` | `bpf_lb_mode` | string | `hybrid` | Not required | BPF lb mode | +| `k3s_server_post` | `calico_blocksize` | int | `26` | Not required | IP pool block size | +| `k3s_server_post` | `calico_ebpf` | bool | `false` | Not required | Use eBPF dataplane instead of iptables | +| `k3s_server_post` | `calico_encapsulation` | string | `VXLANCrossSubnet` | Not required | IP pool encapsulation | +| `k3s_server_post` | `calico_natOutgoing` | string | `Enabled` | Not required | IP pool NAT outgoing | +| `k3s_server_post` | `calico_nodeSelector` | string | `all()` | Not required | IP pool node selector | +| `k3s_server_post` | `calico_iface` | string | `~` | Not required | The network interface used for when Calico is enabled | +| `k3s_server_post` | `calico_tag` | string | `v3.27.2` | Not required | Calico version tag | +| `k3s_server_post` | `cilium_bgp_my_asn` | int | `64513` | Not required | Local ASN for BGP peer | +| `k3s_server_post` | `cilium_bgp_peer_asn` | int | `64512` | Not required | BGP peer ASN | +| `k3s_server_post` | `cilium_bgp_peer_address` | string | `~` | Not required | BGP peer address | +| `k3s_server_post` | `cilium_bgp_neighbors` | list | `[]` | Not required | List of BGP peer ASN & address pairs | +| `k3s_server_post` | `cilium_bgp_neighbors_groups` | list | `['k3s_all']` | Not required | Inventory group in which to search for additional `cilium_bgp_neighbors` parameters to merge. 
| +| `k3s_server_post` | `cilium_bgp_lb_cidr` | string | `192.168.31.0/24` | Not required | BGP load balancer IP range | +| `k3s_server_post` | `cilium_exportPodCIDR` | bool | `true` | Not required | Export pod CIDR | +| `k3s_server_post` | `cilium_hubble` | bool | `true` | Not required | Enable Cilium Hubble | +| `k3s_server_post` | `cilium_hubble` | bool | `true` | Not required | Enable Cilium Hubble | +| `k3s_server_post` | `cilium_mode` | string | `native` | Not required | Inner-node communication mode (choices are `native` and `routed`) | +| `k3s_server_post` | `cluster_cidr` | string | `10.52.0.0/16` | Not required | Inner-cluster IP range | +| `k3s_server_post` | `enable_bpf_masquerade` | bool | `true` | Not required | Use IP masquerading | +| `k3s_server_post` | `kube_proxy_replacement` | bool | `true` | Not required | Replace the native kube-proxy with Cilium | +| `k3s_server_post` | `metal_lb_available_timeout` | string | `240s` | Not required | Wait for MetalLB resources | +| `k3s_server_post` | `metal_lb_ip_range` | string | `192.168.30.80-192.168.30.90` | Not required | MetalLB ip range for load balancer | +| `k3s_server_post` | `metal_lb_controller_tag_version` | string | `v0.14.3` | Not required | Image tag for MetalLB | +| `k3s_server_post` | `metal_lb_mode` | string | `layer2` | Not required | Metallb mode (choices are `bgp` and `layer2`) | +| `k3s_server_post` | `metal_lb_bgp_my_asn` | string | `~` | Not required | BGP ASN configurations | +| `k3s_server_post` | `metal_lb_bgp_peer_asn` | string | `~` | Not required | BGP peer ASN configurations | +| `k3s_server_post` | `metal_lb_bgp_peer_address` | string | `~` | Not required | BGP peer address | +| `lxc` | `custom_reboot_command` | string | `~` | Not required | Command to run on reboot | +| `prereq` | `system_timezone` | string | `null` | Not required | Timezone to be set on all nodes | +| `proxmox_lxc`, `reset_proxmox_lxc` | `proxmox_lxc_ct_ids` | list | ❌ | Required | Proxmox container ID list | +| `raspberrypi` | `state` | string | `present` | Not required | Indicates whether the k3s prerequisites for Raspberry Pi should be set up (possible values are `present` and `absent`) | + + +### Troubleshooting + +Be sure to see [this post](https://github.com/techno-tim/k3s-ansible/discussions/20) on how to troubleshoot common problems + +### Testing the playbook using molecule + +This playbook includes a [molecule](https://molecule.rtfd.io/)-based test setup. +It is run automatically in CI, but you can also run the tests locally. +This might be helpful for quick feedback in a few cases. +You can find more information about it [here](molecule/README.md). + +### Pre-commit Hooks + +This repo uses `pre-commit` and `pre-commit-hooks` to lint and fix common style and syntax errors. Be sure to install python packages and then run `pre-commit install`. For more information, see [pre-commit](https://pre-commit.com/) + +## 🌌 Ansible Galaxy + +This collection can now be used in larger ansible projects. + +Instructions: + +- create or modify a file `collections/requirements.yml` in your project + +```yml +collections: + - name: ansible.utils + - name: community.general + - name: ansible.posix + - name: kubernetes.core + - name: https://github.com/techno-tim/k3s-ansible.git + type: git + version: master +``` + +- install via `ansible-galaxy collection install -r ./collections/requirements.yml` +- every role is now available via the prefix `techno_tim.k3s_ansible.` e.g. 
`techno_tim.k3s_ansible.lxc` + +## Thanks 🤝 + +This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas: + +- [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible) +- [geerlingguy/turing-pi-cluster](https://github.com/geerlingguy/turing-pi-cluster) +- [212850a/k3s-ansible](https://github.com/212850a/k3s-ansible) diff --git a/k3s-ansible/ansible.cfg b/k3s-ansible/ansible.cfg new file mode 100644 index 0000000..b36870b --- /dev/null +++ b/k3s-ansible/ansible.cfg @@ -0,0 +1,2 @@ +[defaults] +inventory = inventory/my-cluster/hosts.ini ; Adapt this to the path to your inventory file diff --git a/k3s-ansible/ansible.example.cfg b/k3s-ansible/ansible.example.cfg new file mode 100644 index 0000000..b36870b --- /dev/null +++ b/k3s-ansible/ansible.example.cfg @@ -0,0 +1,2 @@ +[defaults] +inventory = inventory/my-cluster/hosts.ini ; Adapt this to the path to your inventory file diff --git a/k3s-ansible/collections/requirements.yml b/k3s-ansible/collections/requirements.yml new file mode 100644 index 0000000..0d176b4 --- /dev/null +++ b/k3s-ansible/collections/requirements.yml @@ -0,0 +1,6 @@ +--- +collections: + - name: ansible.utils + - name: community.general + - name: ansible.posix + - name: kubernetes.core diff --git a/k3s-ansible/deploy.sh b/k3s-ansible/deploy.sh new file mode 100755 index 0000000..8f702d6 --- /dev/null +++ b/k3s-ansible/deploy.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +ansible-playbook site.yml diff --git a/k3s-ansible/error b/k3s-ansible/error new file mode 100644 index 0000000..a04c45a --- /dev/null +++ b/k3s-ansible/error @@ -0,0 +1,2339 @@ +mar 12 16:33:27 CASCA k3s[293286]: I0312 16:33:27.059914 293286 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:33:27 CASCA k3s[293286]: I0312 16:33:27.059934 293286 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:33:27 CASCA k3s[293286]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:33:27 CASCA k3s[293286]: time="2025-03-12T16:33:27+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:33:27 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:33:27 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:33:27 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 114527 and the job result is failed. +mar 12 16:33:32 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1191. 
+░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:33:32 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 114621 and the job result is done. +mar 12 16:33:32 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 114621. +mar 12 16:33:32 CASCA sh[293679]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key 
--service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:33:32 CASCA k3s[293686]: W0312 16:33:32.541730 293686 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:33:32 CASCA k3s[293686]: W0312 16:33:32.542188 293686 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:33:32 CASCA k3s[293686]: I0312 16:33:32.542231 293686 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml 
--cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:33:32 CASCA k3s[293686]: I0312 16:33:32.543427 293686 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:33:32 CASCA k3s[293686]: I0312 16:33:32.543446 293686 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:33:32 CASCA k3s[293686]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:33:32 CASCA k3s[293686]: time="2025-03-12T16:33:32+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:33:32 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:33:32 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:33:32 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 114621 and the job result is failed. +mar 12 16:33:37 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1192. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:33:37 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 114715 and the job result is done. +mar 12 16:33:37 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. 
+░░ +░░ The job identifier is 114715. +mar 12 16:33:37 CASCA sh[294117]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:33:37 CASCA k3s[294128]: time="2025-03-12T16:33:37+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false 
--profiling=false --secure-port=10259" +mar 12 16:33:38 CASCA k3s[294128]: W0312 16:33:38.053193 294128 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:33:38 CASCA k3s[294128]: I0312 16:33:38.053734 294128 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:33:38 CASCA k3s[294128]: W0312 16:33:38.054080 294128 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:33:38 CASCA k3s[294128]: time="2025-03-12T16:33:38+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:33:38 CASCA k3s[294128]: I0312 16:33:38.054960 294128 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:33:38 CASCA k3s[294128]: I0312 16:33:38.054974 294128 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:33:38 CASCA k3s[294128]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:33:38 CASCA k3s[294128]: 
time="2025-03-12T16:33:38+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:33:38 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:33:38 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:33:38 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 114715 and the job result is failed. +mar 12 16:33:43 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1193. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:33:43 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 114809 and the job result is done. +mar 12 16:33:43 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 114809. +mar 12 16:33:43 CASCA sh[294592]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:33:43 CASCA k3s[294599]: time="2025-03-12T16:33:43+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:33:43 CASCA k3s[294599]: time="2025-03-12T16:33:43+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:33:43 CASCA k3s[294599]: time="2025-03-12T16:33:43+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
[... The same start-and-crash sequence repeats for the attempts at 16:33:43 (k3s PID 294599), 16:33:48 (294940), 16:33:54 (295365), 16:33:59 (295825) and 16:34:03 (296088): each time k3s brings up the sqlite/Kine datastore, launches kube-apiserver, kube-scheduler, kube-controller-manager and cloud-controller-manager with the same flags, then exits with "Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use", and systemd marks k3s.service as failed and schedules another restart (restart counter climbing from 1194 to 1197). At 16:34:03 the unit is additionally stopped and started again without a scheduled-restart entry, with the same outcome. ...]
+mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:34:09 CASCA k3s[296543]: W0312 16:34:09.307627 296543 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:34:09 CASCA k3s[296543]: W0312 16:34:09.307971 296543 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:34:09 CASCA k3s[296543]: I0312 16:34:09.308180 296543 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:34:09 CASCA k3s[296543]: I0312 16:34:09.309421 296543 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:34:09 CASCA k3s[296543]: I0312 16:34:09.309437 296543 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml" +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=info msg="Run: k3s kubectl" +mar 12 16:34:09 CASCA 
k3s[296543]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:34:09 CASCA k3s[296543]: time="2025-03-12T16:34:09+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:34:09 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:34:09 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:34:09 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 115279 and the job result is failed. +mar 12 16:34:14 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1198. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:34:14 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 115373 and the job result is done. +mar 12 16:34:14 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 115373. +mar 12 16:34:14 CASCA sh[296970]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:34:14 CASCA k3s[296980]: time="2025-03-12T16:34:14+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:34:14 CASCA k3s[296980]: time="2025-03-12T16:34:14+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:34:14 CASCA k3s[296980]: time="2025-03-12T16:34:14+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+[... journal output truncated: the same failure cycle — k3s starts, the embedded kube-apiserver exits with "failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use", k3s.service fails, and systemd schedules another restart — repeats for restart counters 1199 through 1203 (k3s PIDs 296980, 297356, 297740, 298148 and 298557); only the timestamps, PIDs and systemd job identifiers change ...]
+mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)"
+mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
+mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
+mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:34:42 CASCA k3s[298900]: W0312 16:34:42.317495 298900 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:34:42 CASCA k3s[298900]: W0312 16:34:42.317968 298900 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:42 CASCA k3s[298900]: I0312 16:34:42.318096 298900 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:34:42 CASCA k3s[298900]: I0312 16:34:42.319313 298900 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:34:42 CASCA k3s[298900]: I0312 16:34:42.319326 298900 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:34:42 CASCA k3s[298900]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:34:42 CASCA k3s[298900]: time="2025-03-12T16:34:42+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:34:42 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:34:42 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:34:42 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 115843 and the job result is failed. +mar 12 16:34:47 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1204. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:34:47 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 115937 and the job result is done. +mar 12 16:34:47 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 115937. +mar 12 16:34:47 CASCA sh[299237]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:34:47 CASCA k3s[299241]: W0312 16:34:47.794187 299241 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:34:47 CASCA k3s[299241]: W0312 16:34:47.794640 299241 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:47 CASCA k3s[299241]: I0312 16:34:47.794699 299241 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:34:47 CASCA k3s[299241]: I0312 16:34:47.795958 299241 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:34:47 CASCA k3s[299241]: I0312 16:34:47.795975 299241 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:34:47 CASCA k3s[299241]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:34:47 CASCA k3s[299241]: time="2025-03-12T16:34:47+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:34:47 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:34:47 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:34:47 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 115937 and the job result is failed. +mar 12 16:34:53 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1205. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:34:53 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116031 and the job result is done. +mar 12 16:34:53 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116031. +mar 12 16:34:53 CASCA sh[299546]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:34:53 CASCA k3s[299553]: I0312 16:34:53.313071 299553 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:34:53 CASCA k3s[299553]: W0312 16:34:53.313285 299553 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Running kube-controller-manager 
--allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:34:53 CASCA k3s[299553]: W0312 16:34:53.313674 299553 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:34:53 CASCA k3s[299553]: I0312 16:34:53.314512 299553 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:34:53 CASCA k3s[299553]: I0312 16:34:53.314537 299553 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:34:53 CASCA k3s[299553]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:34:53 CASCA k3s[299553]: time="2025-03-12T16:34:53+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:34:53 CASCA systemd[1]: k3s.service: Main process exited, 
code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:34:53 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:34:53 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116031 and the job result is failed. +mar 12 16:34:58 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1206. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:34:58 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116125 and the job result is done. +mar 12 16:34:58 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116125. +mar 12 16:34:58 CASCA sh[299943]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:34:58 CASCA k3s[299952]: W0312 16:34:58.829118 299952 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:34:58 CASCA k3s[299952]: W0312 16:34:58.829576 299952 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:34:58 CASCA k3s[299952]: I0312 16:34:58.829643 299952 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:34:58 CASCA k3s[299952]: I0312 16:34:58.830859 299952 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:34:58 CASCA k3s[299952]: I0312 16:34:58.830875 299952 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:34:58 CASCA k3s[299952]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:34:58 CASCA k3s[299952]: time="2025-03-12T16:34:58+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:34:58 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:34:58 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:34:58 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116125 and the job result is failed. +mar 12 16:35:04 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1207. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:04 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116219 and the job result is done. +mar 12 16:35:04 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116219. +mar 12 16:35:04 CASCA sh[300339]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:04 CASCA k3s[300346]: W0312 16:35:04.287210 300346 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:04 CASCA k3s[300346]: W0312 16:35:04.287660 300346 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:04 CASCA k3s[300346]: I0312 16:35:04.287792 300346 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:04 CASCA k3s[300346]: I0312 16:35:04.289169 300346 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:04 CASCA k3s[300346]: I0312 16:35:04.289196 300346 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:04 CASCA k3s[300346]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:04 CASCA k3s[300346]: time="2025-03-12T16:35:04+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:04 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:04 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:04 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116219 and the job result is failed. +mar 12 16:35:09 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1208. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:09 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116313 and the job result is done. +mar 12 16:35:09 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116313. +mar 12 16:35:09 CASCA sh[300731]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:09 CASCA k3s[300739]: W0312 16:35:09.834674 300739 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:09 CASCA k3s[300739]: W0312 16:35:09.835024 300739 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:09 CASCA k3s[300739]: I0312 16:35:09.835170 300739 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:09 CASCA k3s[300739]: I0312 16:35:09.836389 300739 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:09 CASCA k3s[300739]: I0312 16:35:09.836409 300739 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:09 CASCA k3s[300739]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:09 CASCA k3s[300739]: time="2025-03-12T16:35:09+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:09 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:09 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:09 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116313 and the job result is failed. +mar 12 16:35:15 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1209. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:15 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116407 and the job result is done. +mar 12 16:35:15 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116407. +mar 12 16:35:15 CASCA sh[301001]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:15 CASCA k3s[301005]: W0312 16:35:15.277839 301005 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:15 CASCA k3s[301005]: W0312 16:35:15.278187 301005 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:15 CASCA k3s[301005]: I0312 16:35:15.278304 301005 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:15 CASCA k3s[301005]: I0312 16:35:15.279488 301005 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:15 CASCA k3s[301005]: I0312 16:35:15.279552 301005 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:15 CASCA k3s[301005]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:15 CASCA k3s[301005]: time="2025-03-12T16:35:15+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:15 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:15 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:15 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116407 and the job result is failed. +mar 12 16:35:20 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1210. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:20 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116501 and the job result is done. +mar 12 16:35:20 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116501. +mar 12 16:35:20 CASCA sh[301218]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:20 CASCA k3s[301225]: W0312 16:35:20.847052 301225 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:20 CASCA k3s[301225]: W0312 16:35:20.847398 301225 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:20 CASCA k3s[301225]: I0312 16:35:20.847542 301225 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:20 CASCA k3s[301225]: I0312 16:35:20.848722 301225 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:20 CASCA k3s[301225]: I0312 16:35:20.848739 301225 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:20 CASCA k3s[301225]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:20 CASCA k3s[301225]: time="2025-03-12T16:35:20+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:20 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:20 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:20 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116501 and the job result is failed. +mar 12 16:35:26 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1211. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:26 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116595 and the job result is done. +mar 12 16:35:26 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116595. +mar 12 16:35:26 CASCA sh[301551]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:26 CASCA k3s[301557]: W0312 16:35:26.284751 301557 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:26 CASCA k3s[301557]: W0312 16:35:26.285236 301557 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:26 CASCA k3s[301557]: I0312 16:35:26.285265 301557 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:26 CASCA k3s[301557]: I0312 16:35:26.286462 301557 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:26 CASCA k3s[301557]: I0312 16:35:26.286487 301557 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:26 CASCA k3s[301557]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:26 CASCA k3s[301557]: time="2025-03-12T16:35:26+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:26 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:26 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:26 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116595 and the job result is failed. +mar 12 16:35:31 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1212. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:31 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116689 and the job result is done. +mar 12 16:35:31 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116689. +mar 12 16:35:31 CASCA sh[301968]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:31 CASCA k3s[301978]: W0312 16:35:31.804367 301978 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:31 CASCA k3s[301978]: I0312 16:35:31.804877 301978 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:31 CASCA k3s[301978]: W0312 16:35:31.804884 301978 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:31 CASCA k3s[301978]: I0312 16:35:31.806067 301978 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:31 CASCA k3s[301978]: I0312 16:35:31.806086 301978 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:31 CASCA k3s[301978]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:31 CASCA k3s[301978]: time="2025-03-12T16:35:31+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:31 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:31 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:31 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116689 and the job result is failed. +mar 12 16:35:37 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1213. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:37 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116783 and the job result is done. +mar 12 16:35:37 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116783. +mar 12 16:35:37 CASCA sh[302412]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:37 CASCA k3s[302419]: W0312 16:35:37.306964 302419 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:37 CASCA k3s[302419]: W0312 16:35:37.307299 302419 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:37 CASCA k3s[302419]: I0312 16:35:37.307457 302419 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:37 CASCA k3s[302419]: I0312 16:35:37.308674 302419 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:37 CASCA k3s[302419]: I0312 16:35:37.308687 302419 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:37 CASCA k3s[302419]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:37 CASCA k3s[302419]: time="2025-03-12T16:35:37+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:37 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:37 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:37 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116783 and the job result is failed. +mar 12 16:35:42 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1214. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:42 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116877 and the job result is done. +mar 12 16:35:42 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116877. +mar 12 16:35:42 CASCA sh[302739]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:42 CASCA k3s[302746]: W0312 16:35:42.787970 302746 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:42 CASCA k3s[302746]: W0312 16:35:42.788435 302746 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:42 CASCA k3s[302746]: I0312 16:35:42.788552 302746 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:42 CASCA k3s[302746]: I0312 16:35:42.789773 302746 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:42 CASCA k3s[302746]: I0312 16:35:42.789788 302746 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:42 CASCA k3s[302746]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:42 CASCA k3s[302746]: time="2025-03-12T16:35:42+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:42 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:42 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:42 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116877 and the job result is failed. +mar 12 16:35:48 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1215. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:48 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 116971 and the job result is done. +mar 12 16:35:48 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 116971. +mar 12 16:35:48 CASCA sh[302919]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:35:48 CASCA k3s[302925]: W0312 16:35:48.329161 302925 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:35:48 CASCA k3s[302925]: W0312 16:35:48.329565 302925 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:35:48 CASCA k3s[302925]: I0312 16:35:48.329702 302925 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:35:48 CASCA k3s[302925]: I0312 16:35:48.330938 302925 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:35:48 CASCA k3s[302925]: I0312 16:35:48.330953 302925 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:35:48 CASCA k3s[302925]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:35:48 CASCA k3s[302925]: time="2025-03-12T16:35:48+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:35:48 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:35:48 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:35:48 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 116971 and the job result is failed. +mar 12 16:35:53 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1216. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:35:53 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 117065 and the job result is done. +mar 12 16:35:53 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 117065. +mar 12 16:35:53 CASCA sh[303334]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:35:53 CASCA k3s[303341]: time="2025-03-12T16:35:53+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:35:53 CASCA k3s[303341]: time="2025-03-12T16:35:53+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:35:53 CASCA k3s[303341]: time="2025-03-12T16:35:53+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+[... the same startup and failure sequence repeats with essentially identical output while the systemd restart counter climbs from 1216 to 1220 (k3s PIDs 303341, 303740, 304102, 304427, 304768), each attempt ending with the same error: "failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use"; the excerpt resumes at the failure of attempt 1220 ...]
+mar 12 16:36:15 CASCA 
k3s[304768]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:36:15 CASCA k3s[304768]: time="2025-03-12T16:36:15+01:00" level=error msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:36:15 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:36:15 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:36:15 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 117441 and the job result is failed. +mar 12 16:36:21 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1221. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:36:21 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 117535 and the job result is done. +mar 12 16:36:21 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 117535. +mar 12 16:36:21 CASCA sh[305102]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:36:21 CASCA k3s[305109]: W0312 16:36:21.311329 305109 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:36:21 CASCA k3s[305109]: W0312 16:36:21.311673 305109 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:36:21 CASCA k3s[305109]: I0312 16:36:21.311866 305109 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:36:21 CASCA k3s[305109]: I0312 16:36:21.313086 305109 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:36:21 CASCA k3s[305109]: I0312 16:36:21.313105 305109 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:36:21 CASCA k3s[305109]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:36:21 CASCA k3s[305109]: time="2025-03-12T16:36:21+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:36:21 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:36:21 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:36:21 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 117535 and the job result is failed. +mar 12 16:36:26 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1222. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:36:26 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 117629 and the job result is done. +mar 12 16:36:26 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 117629. +mar 12 16:36:26 CASCA sh[305425]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:36:26 CASCA k3s[305432]: W0312 16:36:26.788551 305432 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:36:26 CASCA k3s[305432]: W0312 16:36:26.789039 305432 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:26 CASCA k3s[305432]: I0312 16:36:26.789065 305432 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:36:26 CASCA k3s[305432]: I0312 16:36:26.790249 305432 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:36:26 CASCA k3s[305432]: I0312 16:36:26.790268 305432 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:36:26 CASCA k3s[305432]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:36:26 CASCA k3s[305432]: time="2025-03-12T16:36:26+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:36:26 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:36:26 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:36:26 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 117629 and the job result is failed. +mar 12 16:36:32 CASCA systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1223. +░░ Subject: Automatic restarting of a unit has been scheduled +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ Automatic restarting of the unit k3s.service has been scheduled, as the result for +░░ the configured Restart= setting for the unit. +mar 12 16:36:32 CASCA systemd[1]: Stopped k3s.service - Lightweight Kubernetes. +░░ Subject: A stop job for unit k3s.service has finished +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A stop job for unit k3s.service has finished. +░░ +░░ The job identifier is 117723 and the job result is done. +mar 12 16:36:32 CASCA systemd[1]: Starting k3s.service - Lightweight Kubernetes... +░░ Subject: A start job for unit k3s.service has begun execution +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has begun execution. +░░ +░░ The job identifier is 117723. +mar 12 16:36:32 CASCA sh[305771]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Starting k3s v1.31.6+k3s1 (6ab750f9)" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
+mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Database tables and indexes are up to date" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Kine available at unix://kine.sock" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Reconciling bootstrap data between datastore and disk" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Waiting for API server to become available" +mar 12 16:36:32 CASCA k3s[305779]: W0312 16:36:32.314610 305779 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig 
--authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +mar 12 16:36:32 CASCA k3s[305779]: W0312 16:36:32.315034 305779 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags +mar 12 16:36:32 CASCA k3s[305779]: I0312 16:36:32.315101 305779 options.go:228] external host was not specified, using 192.168.1.133 +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.133:6443 -t ${SERVER_NODE_TOKEN}" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token" +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.133:6443 -t ${AGENT_NODE_TOKEN}" +mar 12 16:36:32 CASCA k3s[305779]: I0312 16:36:32.316330 305779 server.go:150] Version: v1.31.6+k3s1 +mar 12 16:36:32 CASCA k3s[305779]: I0312 16:36:32.316345 305779 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" +mar 12 16:36:32 CASCA k3s[305779]: Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use +mar 12 16:36:32 CASCA k3s[305779]: time="2025-03-12T16:36:32+01:00" level=error 
msg="apiserver exited: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use" +mar 12 16:36:32 CASCA systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE +░░ Subject: Unit process exited +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ An ExecStart= process belonging to unit k3s.service has exited. +░░ +░░ The process' exit code is 'exited' and its exit status is 1. +mar 12 16:36:32 CASCA systemd[1]: k3s.service: Failed with result 'exit-code'. +░░ Subject: Unit failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ The unit k3s.service has entered the 'failed' state with result 'exit-code'. +mar 12 16:36:32 CASCA systemd[1]: Failed to start k3s.service - Lightweight Kubernetes. +░░ Subject: A start job for unit k3s.service has failed +░░ Defined-By: systemd +░░ Support: https://www.debian.org/support +░░ +░░ A start job for unit k3s.service has finished with a failure. +░░ +░░ The job identifier is 117723 and the job result is failed. diff --git a/k3s-ansible/example/deployment.yml b/k3s-ansible/example/deployment.yml new file mode 100644 index 0000000..ad875ee --- /dev/null +++ b/k3s-ansible/example/deployment.yml @@ -0,0 +1,20 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx +spec: + selector: + matchLabels: + app: nginx + replicas: 3 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:alpine + ports: + - containerPort: 80 diff --git a/k3s-ansible/example/service.yml b/k3s-ansible/example/service.yml new file mode 100644 index 0000000..a309465 --- /dev/null +++ b/k3s-ansible/example/service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx +spec: + ipFamilyPolicy: PreferDualStack + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 + type: LoadBalancer diff --git a/k3s-ansible/fk b/k3s-ansible/fk new file mode 100644 index 0000000..bab8d51 --- /dev/null +++ b/k3s-ansible/fk @@ -0,0 +1 @@ +cont diff --git a/k3s-ansible/galaxy.yml b/k3s-ansible/galaxy.yml new file mode 100644 index 0000000..0f9b196 --- /dev/null +++ b/k3s-ansible/galaxy.yml @@ -0,0 +1,81 @@ +### REQUIRED +# The namespace of the collection. This can be a company/brand/organization or product namespace under which all +# content lives. May only contain alphanumeric lowercase characters and underscores. Namespaces cannot start with +# underscores or numbers and cannot contain consecutive underscores +namespace: techno_tim + +# The name of the collection. Has the same character restrictions as 'namespace' +name: k3s_ansible + +# The version of the collection. Must be compatible with semantic versioning +version: 1.0.0 + +# The path to the Markdown (.md) readme file. This path is relative to the root of the collection +readme: README.md + +# A list of the collection's content authors. Can be just the name or in the format 'Full Name (url) +# @nicks:irc/im.site#channel' +authors: +- your name + + +### OPTIONAL but strongly recommended +# A short summary description of the collection +description: > + The easiest way to bootstrap a self-hosted High Availability Kubernetes + cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, + and more. + +# Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only +# accepts L(SPDX,https://spdx.org/licenses/) licenses. 
This key is mutually exclusive with 'license_file' +license: +- Apache-2.0 + + +# A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character +# requirements as 'namespace' and 'name' +tags: + - etcd + - high-availability + - k8s + - k3s + - k3s-cluster + - kube-vip + - kubernetes + - metallb + - rancher + +# Collections that this collection requires to be installed for it to be usable. The key of the dict is the +# collection label 'namespace.name'. The value is a version range +# L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification). Multiple version +# range specifiers can be set and are separated by ',' +dependencies: + ansible.utils: '*' + ansible.posix: '*' + community.general: '*' + kubernetes.core: '*' + +# The URL of the originating SCM repository +repository: https://github.com/techno-tim/k3s-ansible + +# The URL to any online docs +documentation: https://github.com/techno-tim/k3s-ansible + +# The URL to the homepage of the collection/project +homepage: https://www.youtube.com/watch?v=CbkEWcUZ7zM + +# The URL to the collection issue tracker +issues: https://github.com/techno-tim/k3s-ansible/issues + +# A list of file glob-like patterns used to filter any files or directories that should not be included in the build +# artifact. A pattern is matched from the relative path of the file or directory of the collection directory. This +# uses 'fnmatch' to match the files or directories. Some directories and files like 'galaxy.yml', '*.pyc', '*.retry', +# and '.git' are always filtered. Mutually exclusive with 'manifest' +build_ignore: [] + +# A dict controlling use of manifest directives used in building the collection artifact. The key 'directives' is a +# list of MANIFEST.in style +# L(directives,https://packaging.python.org/en/latest/guides/using-manifest-in/#manifest-in-commands). The key +# 'omit_default_directives' is a boolean that controls whether the default directives are used. 
Mutually exclusive +# with 'build_ignore' +# manifest: null diff --git a/k3s-ansible/inventory/.gitignore b/k3s-ansible/inventory/.gitignore new file mode 100644 index 0000000..ddcc0d1 --- /dev/null +++ b/k3s-ansible/inventory/.gitignore @@ -0,0 +1,3 @@ +/* +!.gitignore +!sample/ diff --git a/k3s-ansible/inventory/sample/group_vars/all.yml b/k3s-ansible/inventory/sample/group_vars/all.yml new file mode 100644 index 0000000..01b1fe9 --- /dev/null +++ b/k3s-ansible/inventory/sample/group_vars/all.yml @@ -0,0 +1,171 @@ +--- +k3s_version: v1.30.2+k3s2 +# this is the user that has ssh access to these machines +ansible_user: ansibleuser +systemd_dir: /etc/systemd/system + +# Set your timezone +system_timezone: "Your/Timezone" + +# interface which will be used for flannel +flannel_iface: "eth0" + +# uncomment calico_iface to use tigera operator/calico cni instead of flannel https://docs.tigera.io/calico/latest/about +# calico_iface: "eth0" +calico_ebpf: false # use eBPF dataplane instead of iptables +calico_tag: "v3.28.0" # calico version tag + +# uncomment cilium_iface to use cilium cni instead of flannel or calico +# ensure v4.19.57, v5.1.16, v5.2.0 or more recent kernel +# cilium_iface: "eth0" +cilium_mode: "native" # native when nodes on same subnet or using bgp, else set routed +cilium_tag: "v1.16.0" # cilium version tag +cilium_hubble: true # enable hubble observability relay and ui + +# if using calico or cilium, you may specify the cluster pod cidr pool +cluster_cidr: "10.52.0.0/16" + +# enable cilium bgp control plane for lb services and pod cidrs. disables metallb. +cilium_bgp: false + +# bgp parameters for cilium cni. only active when cilium_iface is defined and cilium_bgp is true. +cilium_bgp_my_asn: "64513" +cilium_bgp_peer_asn: "64512" +cilium_bgp_peer_address: "192.168.30.1" +cilium_bgp_lb_cidr: "192.168.31.0/24" # cidr for cilium loadbalancer ipam + +# apiserver_endpoint is virtual ip-address which will be configured on each master +apiserver_endpoint: "192.168.30.222" + +# k3s_token is required masters can talk together securely +# this token should be alpha numeric only +k3s_token: "some-SUPER-DEDEUPER-secret-password" + +# The IP on which the node is reachable in the cluster. +# Here, a sensible default is provided, you can still override +# it for each of your hosts, though. 
+k3s_node_ip: "{{ ansible_facts[(cilium_iface | default(calico_iface | default(flannel_iface)))]['ipv4']['address'] }}" + +# Disable the taint manually by setting: k3s_master_taint = false +k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}" + +# these arguments are recommended for servers as well as agents: +extra_args: >- + {{ '--flannel-iface=' + flannel_iface if calico_iface is not defined and cilium_iface is not defined else '' }} + --node-ip={{ k3s_node_ip }} + +# change these to your liking, the only required are: --disable servicelb, --tls-san {{ apiserver_endpoint }} +# the contents of the if block is also required if using calico or cilium +extra_server_args: >- + {{ extra_args }} + {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }} + {% if calico_iface is defined or cilium_iface is defined %} + --flannel-backend=none + --disable-network-policy + --cluster-cidr={{ cluster_cidr | default('10.52.0.0/16') }} + {% endif %} + --tls-san {{ apiserver_endpoint }} + --disable servicelb + --disable traefik + +extra_agent_args: >- + {{ extra_args }} + +# image tag for kube-vip +kube_vip_tag_version: "v0.8.2" + +# tag for kube-vip-cloud-provider manifest +# kube_vip_cloud_provider_tag_version: "main" + +# kube-vip ip range for load balancer +# (uncomment to use kube-vip for services instead of MetalLB) +# kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90" + +# metallb type frr or native +metal_lb_type: "native" + +# metallb mode layer2 or bgp +metal_lb_mode: "layer2" + +# bgp options +# metal_lb_bgp_my_asn: "64513" +# metal_lb_bgp_peer_asn: "64512" +# metal_lb_bgp_peer_address: "192.168.30.1" + +# image tag for metal lb +metal_lb_speaker_tag_version: "v0.14.8" +metal_lb_controller_tag_version: "v0.14.8" + +# metallb ip range for load balancer +metal_lb_ip_range: "192.168.30.80-192.168.30.90" + +# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes +# in your hosts.ini file. +# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this. +# Most notably, your containers must be privileged, and must not have nesting set to true. +# Please note this script disables most of the security of lxc containers, with the trade off being that lxc +# containers are significantly more resource efficient compared to full VMs. +# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this. +# I would only really recommend using this if you have particularly low powered proxmox nodes where the overhead of +# VMs would use a significant portion of your available resources. +proxmox_lxc_configure: false +# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host, +# set this value to some-user +proxmox_lxc_ssh_user: root +# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes +proxmox_lxc_ct_ids: + - 200 + - 201 + - 202 + - 203 + - 204 + +# Only enable this if you have set up your own container registry to act as a mirror / pull-through cache +# (harbor / nexus / docker's official registry / etc). +# Can be beneficial for larger dev/test environments (for example if you're getting rate limited by docker hub), +# or air-gapped environments where your nodes don't have internet access after the initial setup +# (which is still needed for downloading the k3s binary and such). 
+# k3s's documentation about private registries here: https://docs.k3s.io/installation/private-registry +custom_registries: false +# The registries can be authenticated or anonymous, depending on your registry server configuration. +# If they allow anonymous access, simply remove the following bit from custom_registries_yaml +# configs: +# "registry.domain.com": +# auth: +# username: yourusername +# password: yourpassword +# The following is an example that pulls all images used in this playbook through your private registries. +# It also allows you to pull your own images from your private registry, without having to use imagePullSecrets +# in your deployments. +# If all you need is your own images and you don't care about caching the docker/quay/ghcr.io images, +# you can just remove those from the mirrors: section. +custom_registries_yaml: | + mirrors: + docker.io: + endpoint: + - "https://registry.domain.com/v2/dockerhub" + quay.io: + endpoint: + - "https://registry.domain.com/v2/quayio" + ghcr.io: + endpoint: + - "https://registry.domain.com/v2/ghcrio" + registry.domain.com: + endpoint: + - "https://registry.domain.com" + + configs: + "registry.domain.com": + auth: + username: yourusername + password: yourpassword + +# On some distros like Diet Pi, there is no dbus installed. dbus required by the default reboot command. +# Uncomment if you need a custom reboot command +# custom_reboot_command: /usr/sbin/shutdown -r now + +# Only enable and configure these if you access the internet through a proxy +# proxy_env: +# HTTP_PROXY: "http://proxy.domain.local:3128" +# HTTPS_PROXY: "http://proxy.domain.local:3128" +# NO_PROXY: "*.domain.local,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" diff --git a/k3s-ansible/inventory/sample/group_vars/proxmox.yml b/k3s-ansible/inventory/sample/group_vars/proxmox.yml new file mode 100644 index 0000000..ea1759b --- /dev/null +++ b/k3s-ansible/inventory/sample/group_vars/proxmox.yml @@ -0,0 +1,2 @@ +--- +ansible_user: '{{ proxmox_lxc_ssh_user }}' diff --git a/k3s-ansible/inventory/sample/hosts.ini b/k3s-ansible/inventory/sample/hosts.ini new file mode 100644 index 0000000..7045423 --- /dev/null +++ b/k3s-ansible/inventory/sample/hosts.ini @@ -0,0 +1,17 @@ +[master] +192.168.30.38 +192.168.30.39 +192.168.30.40 + +[node] +192.168.30.41 +192.168.30.42 + +# only required if proxmox_lxc_configure: true +# must contain all proxmox instances that have a master or worker node +# [proxmox] +# 192.168.30.43 + +[k3s_cluster:children] +master +node diff --git a/k3s-ansible/k3s.crt b/k3s-ansible/k3s.crt new file mode 100644 index 0000000..e69de29 diff --git a/k3s-ansible/k3s_ca.crt b/k3s-ansible/k3s_ca.crt new file mode 100644 index 0000000..e69de29 diff --git a/k3s-ansible/kubeconfig b/k3s-ansible/kubeconfig new file mode 100644 index 0000000..701d179 --- /dev/null +++ b/k3s-ansible/kubeconfig @@ -0,0 +1,19 @@ +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTkRJek1UZ3hNalF3SGhjTk1qVXdNekU0TVRjeE5USTBXaGNOTXpVd016RTJNVGN4TlRJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTkRJek1UZ3hNalF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRdXgzUzZOdUJ0bXExbzhIaFFkL0pYK3BLdm1UMEpMSkNWdFBqNjNkWFkKR3lmSnlDM3dLazdIZzNGMS90eExnSFRUUHRmUm56b0ZEdGNPZU5xWEpUejFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWJUTnRFL0JUUmpIZ1ljbEJkRm9QCkVhT3JsT2N3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9ObWx5QUxXeklhTkFoZ1BRMlVtb0tmdmF3V3IrNlAKaG5rQkhVTVV2TTcrQWlCLzJsSWJyZzV3TjJwMC9RY0duWVllcEppbzF2ZHRjTHNmYmhVMm5FbndFZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K + server: https://192.168.1.222:6443 + name: default +contexts: +- context: + cluster: default + user: default + name: default +current-context: default +kind: Config +preferences: {} +users: +- name: default + user: + client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJZmlmRjE3UDRVRFV3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOelF5TXpFNE1USTBNQjRYRFRJMU1ETXhPREUzTVRVeU5Gb1hEVEkyTURNeApPREUzTVRVeU5Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJDK3AwNFhSeWNWMzZQZVQKWWJvVU44OFhXemZHVkZGenFBRzlsdi90cGVVNlNFZEI4YzNBamU3STA2UitnY2FNTjlvekVFS096cFVYcktmVgpMWFJEUlRpalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUjZOM3l6Yyt4OFFIcHo2U3F1UkhBdjBlY0lBREFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQXBoRlloN3FERVJSSmlDcWtYS0hDbXMvTDRDMDVMZVhxT0ZoWUZRNGVBN1lDSUU0KzJKZHFwSHhEV1hkQworU2M4VFBmODFwZTU5Q0t4MnBETllDZjdUcFNjCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTkRJek1UZ3hNalF3SGhjTk1qVXdNekU0TVRjeE5USTBXaGNOTXpVd016RTJNVGN4TlRJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTkRJek1UZ3hNalF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRSi9ndlFjbXphVG5XcHd3VlRYaUdNUGVqeWFnaWhtSUl5SU5iUHNtR0MKWWIxTWRqQ1RYZ3V4OUJrUUhJRWVQMEhvY1FuSEhpeUhGY1orb09iWGVPWlFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWVqZDhzM1BzZkVCNmMra3Fya1J3Ckw5SG5DQUF3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnZjJhekc0VEo5c084NXlPWE12NVNrcWczRTdsMFNTM3kKN2g3QzExcVlmSWdDSUJuTnBrR1d6QjFycVBzdHI0dGlSWGdmVE8vc3lnbXM2cm5WZjcwNzlpRncKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUc4NmJjVlJZYTVTQ2NUZ08zK0xQRHRDb1VRVS9VNm1DUEh3akhTN1BYMWtvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTDZuVGhkSEp4WGZvOTVOaHVoUTN6eGRiTjhaVVVYT29BYjJXLysybDVUcElSMEh4emNDTgo3c2pUcEg2QnhvdzMyak1RUW83T2xSZXNwOVV0ZEVORk9BPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo= diff --git a/k3s-ansible/molecule/README.md b/k3s-ansible/molecule/README.md new file mode 100644 index 0000000..aa1845b --- /dev/null +++ b/k3s-ansible/molecule/README.md @@ -0,0 +1,79 @@ +# Test suites for `k3s-ansible` + +This folder contains the [molecule](https://molecule.rtfd.io/)-based test setup for this playbook. + +## Scenarios + +We have these scenarios: + +- **default**: + A 3 control + 2 worker node cluster based very closely on the [sample inventory](../inventory/sample/). 
+- **ipv6**: + A cluster that is externally accessible via IPv6 ([more information](ipv6/README.md)) + To save a bit of test time, this cluster is _not_ highly available; it consists of only one control and one worker node. +- **single_node**: + Very similar to the default scenario, but uses only a single node for all cluster functionality. +- **calico**: + The same as single node, but uses calico cni instead of flannel. +- **cilium**: + The same as single node, but uses cilium cni instead of flannel. +- **kube-vip**: + The same as single node, but uses kube-vip as service loadbalancer instead of MetalLB. + +## How to execute + +To test on your local machine, follow these steps: + +### System requirements + +Make sure that the following software packages are available on your system: + +- [Python 3](https://www.python.org/downloads) +- [Vagrant](https://www.vagrantup.com/downloads) +- [VirtualBox](https://www.virtualbox.org/wiki/Downloads) + +### Set up VirtualBox networking on Linux and macOS + +_You can safely skip this if you are working on Windows._ + +Furthermore, the test cluster uses the `192.168.30.0/24` subnet which is [not set up by VirtualBox automatically](https://www.virtualbox.org/manual/ch06.html#network_hostonly). +To set the subnet up for use with VirtualBox, please make sure that `/etc/vbox/networks.conf` exists and that it contains these lines: + +``` +* 192.168.30.0/24 +* fdad:bad:ba55::/64 +``` + +### Install Python dependencies + +You will get [Molecule, Ansible and a few extra dependencies](../requirements.txt) via [pip](https://pip.pypa.io/). +Usually, it is advisable to work in a [virtual environment](https://docs.python.org/3/tutorial/venv.html) for this: + +```bash +cd /path/to/k3s-ansible + +# Create a virtualenv at ".env". You only need to do this once. +python3 -m venv .env + +# Activate the virtualenv for your current shell session. +# If you start a new session, you will have to repeat this. +source .env/bin/activate + +# Install the required packages into the virtualenv. +# These remain installed across shell sessions. +python3 -m pip install -r requirements.txt +``` + +### Run molecule + +With the virtual environment from the previous step active in your shell session, you can now use molecule to test the playbook. +Interesting commands are: + +- `molecule create`: Create virtual machines for the test cluster nodes. +- `molecule destroy`: Delete the virtual machines for the test cluster nodes. +- `molecule converge`: Run the `site` playbook on the nodes of the test cluster. +- `molecule side_effect`: Run the `reset` playbook on the nodes of the test cluster. +- `molecule verify`: Verify that the cluster works correctly. +- `molecule test`: The "all-in-one" sequence of steps that is executed in CI. + This includes the `create`, `converge`, `verify`, `side_effect` and `destroy` steps. + See [`molecule.yml`](default/molecule.yml) for more details. 
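For orientation, a minimal local run of one scenario might look like the sketch below. This is an illustration rather than part of the upstream README; it assumes the virtualenv from the previous section is active, that Vagrant and VirtualBox are installed, and that the scenario is picked with molecule's `-s`/`--scenario-name` flag:

```bash
# List the scenarios molecule can find (default, ipv6, single_node, calico, cilium, kube-vip).
molecule list

# Run the full CI-style sequence (create, converge, verify, side_effect, destroy)
# for a single scenario, e.g. the single-node one.
molecule test -s single_node

# Or iterate manually while debugging:
molecule create -s single_node     # bring up the Vagrant VMs
molecule converge -s single_node   # apply the site playbook to the nodes
molecule verify -s single_node     # check that the resulting cluster works
molecule destroy -s single_node    # tear the VMs down again
```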
diff --git a/k3s-ansible/molecule/calico/molecule.yml b/k3s-ansible/molecule/calico/molecule.yml new file mode 100644 index 0000000..e4ddb25 --- /dev/null +++ b/k3s-ansible/molecule/calico/molecule.yml @@ -0,0 +1,49 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 4096 + cpus: 4 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.62 +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. + - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/calico/overrides.yml b/k3s-ansible/molecule/calico/overrides.yml new file mode 100644 index 0000000..a63ec44 --- /dev/null +++ b/k3s-ansible/molecule/calico/overrides.yml @@ -0,0 +1,16 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + calico_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 + + # Make sure that our IP ranges do not collide with those of the other scenarios + apiserver_endpoint: 192.168.30.224 + metal_lb_ip_range: 192.168.30.100-192.168.30.109 diff --git a/k3s-ansible/molecule/cilium/molecule.yml b/k3s-ansible/molecule/cilium/molecule.yml new file mode 100644 index 0000000..542b6d5 --- /dev/null +++ b/k3s-ansible/molecule/cilium/molecule.yml @@ -0,0 +1,49 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 4096 + cpus: 4 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.63 +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. 
+ - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/cilium/overrides.yml b/k3s-ansible/molecule/cilium/overrides.yml new file mode 100644 index 0000000..c602a28 --- /dev/null +++ b/k3s-ansible/molecule/cilium/overrides.yml @@ -0,0 +1,16 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + cilium_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 + + # Make sure that our IP ranges do not collide with those of the other scenarios + apiserver_endpoint: 192.168.30.225 + metal_lb_ip_range: 192.168.30.110-192.168.30.119 diff --git a/k3s-ansible/molecule/default/molecule.yml b/k3s-ansible/molecule/default/molecule.yml new file mode 100644 index 0000000..1ad61f4 --- /dev/null +++ b/k3s-ansible/molecule/default/molecule.yml @@ -0,0 +1,99 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.38 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + + - name: control2 + box: generic/debian12 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.39 + + - name: control3 + box: generic/rocky9 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.40 + + - name: node1 + box: generic/ubuntu2204 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - node + interfaces: + - network_name: private_network + ip: 192.168.30.41 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + + - name: node2 + box: generic/rocky9 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - node + interfaces: + - network_name: private_network + ip: 192.168.30.42 + +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. 
+ - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/default/overrides.yml b/k3s-ansible/molecule/default/overrides.yml new file mode 100644 index 0000000..4eea472 --- /dev/null +++ b/k3s-ansible/molecule/default/overrides.yml @@ -0,0 +1,12 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + flannel_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 diff --git a/k3s-ansible/molecule/default/prepare.yml b/k3s-ansible/molecule/default/prepare.yml new file mode 100644 index 0000000..044aa79 --- /dev/null +++ b/k3s-ansible/molecule/default/prepare.yml @@ -0,0 +1,22 @@ +--- +- name: Apply overrides + ansible.builtin.import_playbook: >- + {{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml + +- name: Network setup + hosts: all + tasks: + - name: Disable firewalld + when: ansible_distribution == "Rocky" + # Rocky Linux comes with firewalld enabled. It blocks some of the network + # connections needed for our k3s cluster. For our test setup, we just disable + # it since the VM host's firewall is still active for connections to and from + # the Internet. + # When building your own cluster, please DO NOT blindly copy this. Instead, + # please create a custom firewall configuration that fits your network design + # and security needs. + ansible.builtin.systemd: + name: firewalld + enabled: false + state: stopped + become: true diff --git a/k3s-ansible/molecule/ipv6/README.md b/k3s-ansible/molecule/ipv6/README.md new file mode 100644 index 0000000..eaaeeab --- /dev/null +++ b/k3s-ansible/molecule/ipv6/README.md @@ -0,0 +1,35 @@ +# Sample IPv6 configuration for `k3s-ansible` + +This scenario contains a cluster configuration which is _IPv6 first_, but still supports dual-stack networking with IPv4 for most things. +This means: + +- The API server VIP is an IPv6 address. +- The MetalLB pool consists of both IPv4 and IPv6 addresses. +- Nodes as well as cluster-internal resources (pods and services) are accessible via IPv4 as well as IPv6. + +## Network design + +All IPv6 addresses used in this scenario share a single `/48` prefix: `fdad:bad:ba55`. +The following subnets are used: + +- `fdad:bad:ba55:`**`0`**`::/64` is the subnet which contains the cluster components meant for external access. + That includes: + + - The VIP for the Kubernetes API server: `fdad:bad:ba55::333` + - Services load-balanced by MetalLB: `fdad:bad:ba55::1b:0/112` + - Cluster nodes: `fdad:bad:ba55::de:0/112` + - The host executing Vagrant: `fdad:bad:ba55::1` + + In a home lab setup, this might be your LAN. + +- `fdad:bad:ba55:`**`4200`**`::/56` is used internally by the cluster for pods. + +- `fdad:bad:ba55:`**`4300`**`::/108` is used internally by the cluster for services. + +IPv4 networking is also available: + +- The nodes have addresses inside `192.168.123.0/24`. + MetalLB also has a bit of address space in this range: `192.168.123.80-192.168.123.90` +- For pods and services, the k3s defaults (`10.42.0.0/16` and `10.43.0.0/16`) are used. + +Note that the host running Vagrant is not part of any of these IPv4 networks. 
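As a rough way to inspect this layout once a cluster is up (a sketch only; it assumes `kubectl` is pointed at the kubeconfig produced by the playbook and that the VIP above is reachable from your machine):

```bash
# Talk to the API server over its IPv6 VIP (IPv6 literals must be bracketed in URLs).
kubectl --server "https://[fdad:bad:ba55::333]:6443" get nodes -o wide

# Each pod should report both an IPv4 and an IPv6 address when dual-stack is active.
kubectl get pods -A -o custom-columns=NAME:.metadata.name,IPS:.status.podIPs

# Services show which IP families they were assigned and their cluster IPs.
kubectl get svc -A -o custom-columns=NAME:.metadata.name,FAMILIES:.spec.ipFamilies,IPS:.spec.clusterIPs
```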
diff --git a/k3s-ansible/molecule/ipv6/host_vars/control1.yml b/k3s-ansible/molecule/ipv6/host_vars/control1.yml new file mode 100644 index 0000000..aa675db --- /dev/null +++ b/k3s-ansible/molecule/ipv6/host_vars/control1.yml @@ -0,0 +1,3 @@ +--- +node_ipv4: 192.168.123.11 +node_ipv6: fdad:bad:ba55::de:11 diff --git a/k3s-ansible/molecule/ipv6/host_vars/control2.yml b/k3s-ansible/molecule/ipv6/host_vars/control2.yml new file mode 100644 index 0000000..97fbc81 --- /dev/null +++ b/k3s-ansible/molecule/ipv6/host_vars/control2.yml @@ -0,0 +1,3 @@ +--- +node_ipv4: 192.168.123.12 +node_ipv6: fdad:bad:ba55::de:12 diff --git a/k3s-ansible/molecule/ipv6/host_vars/node1.yml b/k3s-ansible/molecule/ipv6/host_vars/node1.yml new file mode 100644 index 0000000..57ba927 --- /dev/null +++ b/k3s-ansible/molecule/ipv6/host_vars/node1.yml @@ -0,0 +1,3 @@ +--- +node_ipv4: 192.168.123.21 +node_ipv6: fdad:bad:ba55::de:21 diff --git a/k3s-ansible/molecule/ipv6/molecule.yml b/k3s-ansible/molecule/ipv6/molecule.yml new file mode 100644 index 0000000..5c2454e --- /dev/null +++ b/k3s-ansible/molecule/ipv6/molecule.yml @@ -0,0 +1,81 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: fdad:bad:ba55::de:11 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + + - name: control2 + box: generic/ubuntu2204 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: fdad:bad:ba55::de:12 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + + - name: node1 + box: generic/ubuntu2204 + memory: 1024 + cpus: 2 + groups: + - k3s_cluster + - node + interfaces: + - network_name: private_network + ip: fdad:bad:ba55::de:21 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. 
+ - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/ipv6/overrides.yml b/k3s-ansible/molecule/ipv6/overrides.yml new file mode 100644 index 0000000..44bbc07 --- /dev/null +++ b/k3s-ansible/molecule/ipv6/overrides.yml @@ -0,0 +1,51 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables (1/2) + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + flannel_iface: eth1 + + # In this scenario, we have multiple interfaces that the VIP could be + # broadcasted on. Since we have assigned a dedicated private network + # here, let's make sure that it is used. + kube_vip_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 + + # IPv6 configuration + # ###################################################################### + + # The API server will be reachable on IPv6 only + apiserver_endpoint: fdad:bad:ba55::333 + + # We give MetalLB address space for both IPv4 and IPv6 + metal_lb_ip_range: + - fdad:bad:ba55::1b:0/112 + - 192.168.123.80-192.168.123.90 + + # k3s_node_ip is by default set to the IPv4 address of flannel_iface. + # We want IPv6 addresses here of course, so we just specify them + # manually below. + k3s_node_ip: "{{ node_ipv4 }},{{ node_ipv6 }}" + + - name: Override host variables (2/2) + # Since "extra_args" depends on "k3s_node_ip" and "flannel_iface" we have + # to set this AFTER overriding the both of them. + ansible.builtin.set_fact: + # A few extra server args are necessary: + # - the network policy needs to be disabled. + # - we need to manually specify the subnets for services and pods, as + # the default has IPv4 ranges only. + extra_server_args: >- + {{ extra_args }} + --tls-san {{ apiserver_endpoint }} + {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }} + --disable servicelb + --disable traefik + --disable-network-policy + --cluster-cidr=10.42.0.0/16,fdad:bad:ba55:4200::/56 + --service-cidr=10.43.0.0/16,fdad:bad:ba55:4300::/108 diff --git a/k3s-ansible/molecule/ipv6/prepare.yml b/k3s-ansible/molecule/ipv6/prepare.yml new file mode 100644 index 0000000..9763458 --- /dev/null +++ b/k3s-ansible/molecule/ipv6/prepare.yml @@ -0,0 +1,51 @@ +--- +- name: Apply overrides + ansible.builtin.import_playbook: >- + {{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml + +- name: Configure dual-stack networking + hosts: all + become: true + + # Unfortunately, as of 2022-09, Vagrant does not support the configuration + # of both IPv4 and IPv6 addresses for a single network adapter. So we have + # to configure that ourselves. + # Moreover, we have to explicitly enable IPv6 for the loopback interface. + + tasks: + - name: Enable IPv6 for network interfaces + ansible.posix.sysctl: + name: net.ipv6.conf.{{ item }}.disable_ipv6 + value: "0" + with_items: + - all + - default + - lo + + - name: Disable duplicate address detection + # Duplicate address detection did repeatedly fail within the virtual + # network. But since this setup does not use SLAAC anyway, we can safely + # disable it. 
+ ansible.posix.sysctl: + name: net.ipv6.conf.{{ item }}.accept_dad + value: "0" + with_items: + - "{{ flannel_iface }}" + + - name: Write IPv4 configuration + ansible.builtin.template: + src: 55-flannel-ipv4.yaml.j2 + dest: /etc/netplan/55-flannel-ipv4.yaml + owner: root + group: root + mode: "0644" + register: netplan_template + + - name: Apply netplan configuration + # Conceptually, this should be a handler rather than a task. + # However, we are currently not in a role context - creating + # one just for this seemed overkill. + when: netplan_template.changed + ansible.builtin.command: + cmd: netplan apply + changed_when: true diff --git a/k3s-ansible/molecule/ipv6/templates/55-flannel-ipv4.yaml.j2 b/k3s-ansible/molecule/ipv6/templates/55-flannel-ipv4.yaml.j2 new file mode 100644 index 0000000..6f68777 --- /dev/null +++ b/k3s-ansible/molecule/ipv6/templates/55-flannel-ipv4.yaml.j2 @@ -0,0 +1,8 @@ +--- +network: + version: 2 + renderer: networkd + ethernets: + {{ flannel_iface }}: + addresses: + - {{ node_ipv4 }}/24 diff --git a/k3s-ansible/molecule/kube-vip/molecule.yml b/k3s-ansible/molecule/kube-vip/molecule.yml new file mode 100644 index 0000000..e4ddb25 --- /dev/null +++ b/k3s-ansible/molecule/kube-vip/molecule.yml @@ -0,0 +1,49 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 4096 + cpus: 4 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.62 +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. 
+ - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/kube-vip/overrides.yml b/k3s-ansible/molecule/kube-vip/overrides.yml new file mode 100644 index 0000000..4577afc --- /dev/null +++ b/k3s-ansible/molecule/kube-vip/overrides.yml @@ -0,0 +1,17 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + flannel_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 + + # Make sure that our IP ranges do not collide with those of the other scenarios + apiserver_endpoint: 192.168.30.225 + # Use kube-vip instead of MetalLB + kube_vip_lb_ip_range: 192.168.30.110-192.168.30.119 diff --git a/k3s-ansible/molecule/resources/converge.yml b/k3s-ansible/molecule/resources/converge.yml new file mode 100644 index 0000000..c5efc8e --- /dev/null +++ b/k3s-ansible/molecule/resources/converge.yml @@ -0,0 +1,7 @@ +--- +- name: Apply overrides + ansible.builtin.import_playbook: >- + {{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml + +- name: Converge + ansible.builtin.import_playbook: ../../site.yml diff --git a/k3s-ansible/molecule/resources/reset.yml b/k3s-ansible/molecule/resources/reset.yml new file mode 100644 index 0000000..266ce85 --- /dev/null +++ b/k3s-ansible/molecule/resources/reset.yml @@ -0,0 +1,7 @@ +--- +- name: Apply overrides + ansible.builtin.import_playbook: >- + {{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml + +- name: Reset + ansible.builtin.import_playbook: ../../reset.yml diff --git a/k3s-ansible/molecule/resources/verify.yml b/k3s-ansible/molecule/resources/verify.yml new file mode 100644 index 0000000..ef7ea52 --- /dev/null +++ b/k3s-ansible/molecule/resources/verify.yml @@ -0,0 +1,5 @@ +--- +- name: Verify + hosts: all + roles: + - verify_from_outside diff --git a/k3s-ansible/molecule/resources/verify_from_outside/defaults/main.yml b/k3s-ansible/molecule/resources/verify_from_outside/defaults/main.yml new file mode 100644 index 0000000..104fda4 --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/defaults/main.yml @@ -0,0 +1,9 @@ +--- +# A host outside of the cluster from which the checks shall be performed +outside_host: localhost + +# This kubernetes namespace will be used for testing +testing_namespace: molecule-verify-from-outside + +# The directory in which the example manifests reside +example_manifests_path: ../../../example diff --git a/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-cleanup.yml b/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-cleanup.yml new file mode 100644 index 0000000..9645af1 --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-cleanup.yml @@ -0,0 +1,5 @@ +--- +- name: Clean up kubecfg + ansible.builtin.file: + path: "{{ kubecfg.path }}" + state: absent diff --git a/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-fetch.yml b/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-fetch.yml new file mode 100644 index 0000000..d7f498e --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/tasks/kubecfg-fetch.yml @@ -0,0 +1,19 @@ +--- +- name: Create temporary directory for kubecfg + ansible.builtin.tempfile: + state: directory + suffix: kubecfg + register: kubecfg +- name: Gathering facts + 
delegate_to: "{{ groups['master'][0] }}" + ansible.builtin.gather_facts: +- name: Download kubecfg + ansible.builtin.fetch: + src: "{{ ansible_env.HOME }}/.kube/config" + dest: "{{ kubecfg.path }}/" + flat: true + delegate_to: "{{ groups['master'][0] }}" + delegate_facts: true +- name: Store path to kubecfg + ansible.builtin.set_fact: + kubecfg_path: "{{ kubecfg.path }}/config" diff --git a/k3s-ansible/molecule/resources/verify_from_outside/tasks/main.yml b/k3s-ansible/molecule/resources/verify_from_outside/tasks/main.yml new file mode 100644 index 0000000..2f43a27 --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/tasks/main.yml @@ -0,0 +1,14 @@ +--- +- name: Verify + run_once: true + delegate_to: "{{ outside_host }}" + block: + - name: "TEST CASE: Get kube config" + ansible.builtin.import_tasks: kubecfg-fetch.yml + - name: "TEST CASE: Get nodes" + ansible.builtin.include_tasks: test/get-nodes.yml + - name: "TEST CASE: Deploy example" + ansible.builtin.include_tasks: test/deploy-example.yml + always: + - name: "TEST CASE: Cleanup" + ansible.builtin.import_tasks: kubecfg-cleanup.yml diff --git a/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/deploy-example.yml b/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/deploy-example.yml new file mode 100644 index 0000000..13a1c4b --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/deploy-example.yml @@ -0,0 +1,58 @@ +--- +- name: Deploy example + block: + - name: "Create namespace: {{ testing_namespace }}" + kubernetes.core.k8s: + api_version: v1 + kind: Namespace + name: "{{ testing_namespace }}" + state: present + wait: true + kubeconfig: "{{ kubecfg_path }}" + + - name: Apply example manifests + kubernetes.core.k8s: + src: "{{ example_manifests_path }}/{{ item }}" + namespace: "{{ testing_namespace }}" + state: present + wait: true + kubeconfig: "{{ kubecfg_path }}" + with_items: + - deployment.yml + - service.yml + + - name: Get info about nginx service + kubernetes.core.k8s_info: + kind: service + name: nginx + namespace: "{{ testing_namespace }}" + kubeconfig: "{{ kubecfg_path }}" + vars: + metallb_ip: status.loadBalancer.ingress[0].ip + metallb_port: spec.ports[0].port + register: nginx_services + + - name: Assert that the nginx welcome page is available + ansible.builtin.uri: + url: http://{{ ip | ansible.utils.ipwrap }}:{{ port_ }}/ + return_content: true + register: result + failed_when: "'Welcome to nginx!' not in result.content" + vars: + ip: >- + {{ nginx_services.resources[0].status.loadBalancer.ingress[0].ip }} + port_: >- + {{ nginx_services.resources[0].spec.ports[0].port }} + # Deactivated linter rules: + # - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap + # would be undefined. This will not be the case during playbook execution.
+ # noqa jinja[invalid] + + always: + - name: "Remove namespace: {{ testing_namespace }}" + kubernetes.core.k8s: + api_version: v1 + kind: Namespace + name: "{{ testing_namespace }}" + state: absent + kubeconfig: "{{ kubecfg_path }}" diff --git a/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/get-nodes.yml b/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/get-nodes.yml new file mode 100644 index 0000000..99b86a4 --- /dev/null +++ b/k3s-ansible/molecule/resources/verify_from_outside/tasks/test/get-nodes.yml @@ -0,0 +1,28 @@ +--- +- name: Get all nodes in cluster + kubernetes.core.k8s_info: + kind: node + kubeconfig: "{{ kubecfg_path }}" + register: cluster_nodes + +- name: Assert that the cluster contains exactly the expected nodes + ansible.builtin.assert: + that: found_nodes == expected_nodes + success_msg: "Found nodes as expected: {{ found_nodes }}" + fail_msg: Expected nodes {{ expected_nodes }}, but found nodes {{ found_nodes }} + vars: + found_nodes: >- + {{ cluster_nodes | json_query('resources[*].metadata.name') | unique | sort }} + expected_nodes: |- + {{ + ( + ( groups['master'] | default([]) ) + + ( groups['node'] | default([]) ) + ) + | unique + | sort + }} + # Deactivated linter rules: + # - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap + # would be undefined. This will not be the case during playbook execution. + # noqa jinja[invalid] diff --git a/k3s-ansible/molecule/single_node/molecule.yml b/k3s-ansible/molecule/single_node/molecule.yml new file mode 100644 index 0000000..c6d45fc --- /dev/null +++ b/k3s-ansible/molecule/single_node/molecule.yml @@ -0,0 +1,49 @@ +--- +dependency: + name: galaxy +driver: + name: vagrant +platforms: + - name: control1 + box: generic/ubuntu2204 + memory: 4096 + cpus: 4 + config_options: + # We currently can not use public-key based authentication on Ubuntu 22.04, + # see: https://github.com/chef/bento/issues/1405 + ssh.username: vagrant + ssh.password: vagrant + groups: + - k3s_cluster + - master + interfaces: + - network_name: private_network + ip: 192.168.30.50 +provisioner: + name: ansible + env: + ANSIBLE_VERBOSITY: 1 + playbooks: + converge: ../resources/converge.yml + side_effect: ../resources/reset.yml + verify: ../resources/verify.yml + inventory: + links: + group_vars: ../../inventory/sample/group_vars +scenario: + test_sequence: + - dependency + - cleanup + - destroy + - syntax + - create + - prepare + - converge + # idempotence is not possible with the playbook in its current form. + - verify + # We are repurposing side_effect here to test the reset playbook. + # This is why we do not run it before verify (which tests the cluster), + # but after the verify step. 
+ - side_effect + - cleanup + - destroy diff --git a/k3s-ansible/molecule/single_node/overrides.yml b/k3s-ansible/molecule/single_node/overrides.yml new file mode 100644 index 0000000..2cb8ec7 --- /dev/null +++ b/k3s-ansible/molecule/single_node/overrides.yml @@ -0,0 +1,16 @@ +--- +- name: Apply overrides + hosts: all + tasks: + - name: Override host variables + ansible.builtin.set_fact: + # See: + # https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant + flannel_iface: eth1 + + # The test VMs might be a bit slow, so we give them more time to join the cluster: + retry_count: 45 + + # Make sure that our IP ranges do not collide with those of the default scenario + apiserver_endpoint: 192.168.30.223 + metal_lb_ip_range: 192.168.30.91-192.168.30.99 diff --git a/k3s-ansible/reboot.sh b/k3s-ansible/reboot.sh new file mode 100755 index 0000000..95f66a6 --- /dev/null +++ b/k3s-ansible/reboot.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +ansible-playbook reboot.yml diff --git a/k3s-ansible/reboot.yml b/k3s-ansible/reboot.yml new file mode 100644 index 0000000..e0fa8b9 --- /dev/null +++ b/k3s-ansible/reboot.yml @@ -0,0 +1,10 @@ +--- +- name: Reboot k3s_cluster + hosts: k3s_cluster + gather_facts: true + tasks: + - name: Reboot the nodes (and wait up to 5 mins max) + become: true + ansible.builtin.reboot: + reboot_command: "{{ custom_reboot_command | default(omit) }}" + reboot_timeout: 300 diff --git a/k3s-ansible/requirements.in b/k3s-ansible/requirements.in new file mode 100644 index 0000000..e0eac29 --- /dev/null +++ b/k3s-ansible/requirements.in @@ -0,0 +1,10 @@ +ansible-core>=2.16.2 +jmespath>=1.0.1 +jsonpatch>=1.33 +kubernetes>=29.0.0 +molecule-plugins[vagrant] +molecule>=6.0.3 +netaddr>=0.10.1 +pre-commit>=3.6.0 +pre-commit-hooks>=4.5.0 +pyyaml>=6.0.1 diff --git a/k3s-ansible/requirements.txt b/k3s-ansible/requirements.txt new file mode 100644 index 0000000..8370016 --- /dev/null +++ b/k3s-ansible/requirements.txt @@ -0,0 +1,169 @@ +# +# This file is autogenerated by pip-compile with Python 3.11 +# by the following command: +# +# pip-compile requirements.in +# +ansible-compat==4.1.11 + # via molecule +ansible-core==2.18.0 + # via + # -r requirements.in + # ansible-compat + # molecule +attrs==23.2.0 + # via + # jsonschema + # referencing +bracex==2.4 + # via wcmatch +cachetools==5.3.2 + # via google-auth +certifi==2023.11.17 + # via + # kubernetes + # requests +cffi==1.16.0 + # via cryptography +cfgv==3.4.0 + # via pre-commit +charset-normalizer==3.3.2 + # via requests +click==8.1.7 + # via + # click-help-colors + # molecule +click-help-colors==0.9.4 + # via molecule +cryptography==41.0.7 + # via ansible-core +distlib==0.3.8 + # via virtualenv +enrich==1.2.7 + # via molecule +filelock==3.13.1 + # via virtualenv +google-auth==2.26.2 + # via kubernetes +identify==2.5.33 + # via pre-commit +idna==3.6 + # via requests +jinja2==3.1.3 + # via + # ansible-core + # molecule +jmespath==1.0.1 + # via -r requirements.in +jsonpatch==1.33 + # via -r requirements.in +jsonpointer==2.4 + # via jsonpatch +jsonschema==4.21.1 + # via + # ansible-compat + # molecule +jsonschema-specifications==2023.12.1 + # via jsonschema +kubernetes==29.0.0 + # via -r requirements.in +markdown-it-py==3.0.0 + # via rich +markupsafe==2.1.4 + # via jinja2 +mdurl==0.1.2 + # via markdown-it-py +molecule==6.0.3 + # via + # -r requirements.in + # molecule-plugins +molecule-plugins[vagrant]==23.5.3 + # via -r requirements.in +netaddr==0.10.1 + # via -r requirements.in
+nodeenv==1.8.0 + # via pre-commit +oauthlib==3.2.2 + # via + # kubernetes + # requests-oauthlib +packaging==23.2 + # via + # ansible-compat + # ansible-core + # molecule +platformdirs==4.1.0 + # via virtualenv +pluggy==1.3.0 + # via molecule +pre-commit==3.8.0 + # via -r requirements.in +pre-commit-hooks==4.6.0 + # via -r requirements.in +pyasn1==0.5.1 + # via + # pyasn1-modules + # rsa +pyasn1-modules==0.3.0 + # via google-auth +pycparser==2.21 + # via cffi +pygments==2.17.2 + # via rich +python-dateutil==2.8.2 + # via kubernetes +python-vagrant==1.0.0 + # via molecule-plugins +pyyaml==6.0.2 + # via + # -r requirements.in + # ansible-compat + # ansible-core + # kubernetes + # molecule + # pre-commit +referencing==0.32.1 + # via + # jsonschema + # jsonschema-specifications +requests==2.31.0 + # via + # kubernetes + # requests-oauthlib +requests-oauthlib==1.3.1 + # via kubernetes +resolvelib==1.0.1 + # via ansible-core +rich==13.7.0 + # via + # enrich + # molecule +rpds-py==0.17.1 + # via + # jsonschema + # referencing +rsa==4.9 + # via google-auth +ruamel-yaml==0.18.5 + # via pre-commit-hooks +ruamel-yaml-clib==0.2.8 + # via ruamel-yaml +six==1.16.0 + # via + # kubernetes + # python-dateutil +subprocess-tee==0.4.1 + # via ansible-compat +urllib3==2.1.0 + # via + # kubernetes + # requests +virtualenv==20.25.0 + # via pre-commit +wcmatch==8.5 + # via molecule +websocket-client==1.7.0 + # via kubernetes + +# The following packages are considered to be unsafe in a requirements file: +# setuptools diff --git a/k3s-ansible/reset.sh b/k3s-ansible/reset.sh new file mode 100755 index 0000000..bd9dcae --- /dev/null +++ b/k3s-ansible/reset.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +ansible-playbook reset.yml diff --git a/k3s-ansible/reset.yml b/k3s-ansible/reset.yml new file mode 100644 index 0000000..238ce70 --- /dev/null +++ b/k3s-ansible/reset.yml @@ -0,0 +1,25 @@ +--- +- name: Reset k3s cluster + hosts: k3s_cluster + gather_facts: true + roles: + - role: reset + become: true + - role: raspberrypi + become: true + vars: { state: absent } + post_tasks: + - name: Reboot and wait for node to come back up + become: true + ansible.builtin.reboot: + reboot_command: "{{ custom_reboot_command | default(omit) }}" + reboot_timeout: 3600 + +- name: Revert changes to Proxmox cluster + hosts: proxmox + gather_facts: true + become: true + remote_user: "{{ proxmox_lxc_ssh_user }}" + roles: + - role: reset_proxmox_lxc + when: proxmox_lxc_configure diff --git a/k3s-ansible/roles/download/meta/main.yml b/k3s-ansible/roles/download/meta/main.yml new file mode 100644 index 0000000..e7911d5 --- /dev/null +++ b/k3s-ansible/roles/download/meta/main.yml @@ -0,0 +1,8 @@ +--- +argument_specs: + main: + short_description: Manage the downloading of K3S binaries + options: + k3s_version: + description: The desired version of K3S + required: true diff --git a/k3s-ansible/roles/download/tasks/main.yml b/k3s-ansible/roles/download/tasks/main.yml new file mode 100644 index 0000000..51cd35e --- /dev/null +++ b/k3s-ansible/roles/download/tasks/main.yml @@ -0,0 +1,34 @@ +--- +- name: Download k3s binary x64 + ansible.builtin.get_url: + url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s + checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt + dest: /usr/local/bin/k3s + owner: root + group: root + mode: "0755" + when: ansible_facts.architecture == "x86_64" + +- name: Download k3s binary arm64 + ansible.builtin.get_url: + url: 
https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s-arm64 + checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-arm64.txt + dest: /usr/local/bin/k3s + owner: root + group: root + mode: "0755" + when: + - ( ansible_facts.architecture is search("arm") and ansible_facts.userspace_bits == "64" ) + or ansible_facts.architecture is search("aarch64") + +- name: Download k3s binary armhf + ansible.builtin.get_url: + url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s-armhf + checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-arm.txt + dest: /usr/local/bin/k3s + owner: root + group: root + mode: "0755" + when: + - ansible_facts.architecture is search("arm") + - ansible_facts.userspace_bits == "32" diff --git a/k3s-ansible/roles/k3s/node/defaults/main.yml b/k3s-ansible/roles/k3s/node/defaults/main.yml new file mode 100644 index 0000000..a07af66 --- /dev/null +++ b/k3s-ansible/roles/k3s/node/defaults/main.yml @@ -0,0 +1,3 @@ +--- +# Name of the master group +group_name_master: master diff --git a/k3s-ansible/roles/k3s_agent/defaults/main.yml b/k3s-ansible/roles/k3s_agent/defaults/main.yml new file mode 100644 index 0000000..bdf76ae --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/defaults/main.yml @@ -0,0 +1,4 @@ +--- +extra_agent_args: "" +group_name_master: master +systemd_dir: /etc/systemd/system diff --git a/k3s-ansible/roles/k3s_agent/meta/main.yml b/k3s-ansible/roles/k3s_agent/meta/main.yml new file mode 100644 index 0000000..cec4ba0 --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/meta/main.yml @@ -0,0 +1,39 @@ +--- +argument_specs: + main: + short_description: Setup k3s agents + options: + apiserver_endpoint: + description: Virtual ip-address configured on each master + required: true + + extra_agent_args: + description: Extra arguments for agent nodes + + group_name_master: + description: Name of the master group + default: master + + k3s_token: + description: Token used to communicate between masters + + proxy_env: + type: dict + description: + - Internet proxy configurations.
+ - See https://docs.k3s.io/advanced#configuring-an-http-proxy for details + default: ~ + options: + HTTP_PROXY: + description: HTTP internet proxy + required: true + HTTPS_PROXY: + description: HTTPS internet proxy + required: true + NO_PROXY: + description: Addresses that will not use the proxies + required: true + + systemd_dir: + description: Path to systemd services + default: /etc/systemd/system diff --git a/k3s-ansible/roles/k3s_agent/tasks/http_proxy.yml b/k3s-ansible/roles/k3s_agent/tasks/http_proxy.yml new file mode 100644 index 0000000..8b58777 --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/tasks/http_proxy.yml @@ -0,0 +1,18 @@ +--- +- name: Create k3s-node.service.d directory + ansible.builtin.file: + path: "{{ systemd_dir }}/k3s-node.service.d" + state: directory + owner: root + group: root + mode: "0755" + when: proxy_env is defined + +- name: Copy K3s http_proxy conf file + ansible.builtin.template: + src: http_proxy.conf.j2 + dest: "{{ systemd_dir }}/k3s-node.service.d/http_proxy.conf" + owner: root + group: root + mode: "0755" + when: proxy_env is defined diff --git a/k3s-ansible/roles/k3s_agent/tasks/main.yml b/k3s-ansible/roles/k3s_agent/tasks/main.yml new file mode 100644 index 0000000..c522f21 --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/tasks/main.yml @@ -0,0 +1,36 @@ +--- +- name: Check for PXE-booted system + block: + - name: Check if system is PXE-booted + ansible.builtin.command: + cmd: cat /proc/cmdline + register: boot_cmdline + changed_when: false + check_mode: false + + - name: Set fact for PXE-booted system + ansible.builtin.set_fact: + is_pxe_booted: "{{ 'root=/dev/nfs' in boot_cmdline.stdout }}" + when: boot_cmdline.stdout is defined + + - name: Include http_proxy configuration tasks + ansible.builtin.include_tasks: http_proxy.yml + +- name: Deploy K3s http_proxy conf + ansible.builtin.include_tasks: http_proxy.yml + when: proxy_env is defined + +- name: Configure the k3s service + ansible.builtin.template: + src: k3s.service.j2 + dest: "{{ systemd_dir }}/k3s-node.service" + owner: root + group: root + mode: "0755" + +- name: Manage k3s service + ansible.builtin.systemd: + name: k3s-node + daemon_reload: true + state: restarted + enabled: true diff --git a/k3s-ansible/roles/k3s_agent/templates/http_proxy.conf.j2 b/k3s-ansible/roles/k3s_agent/templates/http_proxy.conf.j2 new file mode 100644 index 0000000..6591d45 --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/templates/http_proxy.conf.j2 @@ -0,0 +1,4 @@ +[Service] +Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }} +Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }} +Environment=NO_PROXY={{ proxy_env.NO_PROXY }} diff --git a/k3s-ansible/roles/k3s_agent/templates/k3s.service.j2 b/k3s-ansible/roles/k3s_agent/templates/k3s.service.j2 new file mode 100644 index 0000000..52aa272 --- /dev/null +++ b/k3s-ansible/roles/k3s_agent/templates/k3s.service.j2 @@ -0,0 +1,27 @@ +[Unit] +Description=Lightweight Kubernetes +Documentation=https://k3s.io +After=network-online.target + +[Service] +Type=notify +ExecStartPre=-/sbin/modprobe br_netfilter +ExecStartPre=-/sbin/modprobe overlay +# Conditional snapshotter based on PXE boot status +ExecStart=/usr/local/bin/k3s agent \ + --server https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 \ + {% if is_pxe_booted | default(false) %}--snapshotter native \ + {% endif %}--token {{ hostvars[groups[group_name_master | default('master')][0]]['token'] | default(k3s_token) }} \ + {{ extra_agent_args }} +KillMode=process +Delegate=yes +LimitNOFILE=1048576 
+LimitNPROC=infinity +LimitCORE=infinity +TasksMax=infinity +TimeoutStartSec=0 +Restart=always +RestartSec=5s + +[Install] +WantedBy=multi-user.target diff --git a/k3s-ansible/roles/k3s_custom_registries/meta/main.yml b/k3s-ansible/roles/k3s_custom_registries/meta/main.yml new file mode 100644 index 0000000..3c0878f --- /dev/null +++ b/k3s-ansible/roles/k3s_custom_registries/meta/main.yml @@ -0,0 +1,20 @@ +--- +argument_specs: + main: + short_description: Configure the use of a custom container registry + options: + custom_registries_yaml: + description: + - YAML block defining custom registries. + - > + The following is an example that pulls all images used in + this playbook through your private registries. + - > + It also allows you to pull your own images from your private + registry, without having to use imagePullSecrets in your + deployments. + - > + If all you need is your own images and you don't care about + caching the docker/quay/ghcr.io images, you can just remove + those from the mirrors: section. + required: true diff --git a/k3s-ansible/roles/k3s_custom_registries/tasks/main.yml b/k3s-ansible/roles/k3s_custom_registries/tasks/main.yml new file mode 100644 index 0000000..cfbb1ec --- /dev/null +++ b/k3s-ansible/roles/k3s_custom_registries/tasks/main.yml @@ -0,0 +1,16 @@ +--- +- name: Create directory /etc/rancher/k3s + ansible.builtin.file: + path: /etc/{{ item }} + state: directory + mode: "0755" + loop: + - rancher + - rancher/k3s + +- name: Insert registries into /etc/rancher/k3s/registries.yaml + ansible.builtin.blockinfile: + path: /etc/rancher/k3s/registries.yaml + block: "{{ custom_registries_yaml }}" + mode: "0600" + create: true diff --git a/k3s-ansible/roles/k3s_server/defaults/main.yml b/k3s-ansible/roles/k3s_server/defaults/main.yml new file mode 100644 index 0000000..1d18efd --- /dev/null +++ b/k3s-ansible/roles/k3s_server/defaults/main.yml @@ -0,0 +1,40 @@ +--- +extra_server_args: "" + +k3s_kubectl_binary: k3s kubectl + +group_name_master: master + +kube_vip_arp: true +kube_vip_iface: +kube_vip_cloud_provider_tag_version: main +kube_vip_tag_version: v0.7.2 + +kube_vip_bgp: false +kube_vip_bgp_routerid: 127.0.0.1 +kube_vip_bgp_as: "64513" +kube_vip_bgp_peeraddress: 192.168.30.1 +kube_vip_bgp_peeras: "64512" + +kube_vip_bgp_peers: [] +kube_vip_bgp_peers_groups: ['k3s_master'] + +metal_lb_controller_tag_version: v0.14.3 +metal_lb_speaker_tag_version: v0.14.3 +metal_lb_type: native + +retry_count: 20 + +# yamllint disable rule:line-length +server_init_args: >- + {% if groups[group_name_master | default('master')] | length > 1 %} + {% if ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] %} + --cluster-init + {% else %} + --server https://{{ hostvars[groups[group_name_master | default('master')][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443 + {% endif %} + --token {{ k3s_token }} + {% endif %} + {{ extra_server_args }} + +systemd_dir: /etc/systemd/system diff --git a/k3s-ansible/roles/k3s_server/meta/main.yml b/k3s-ansible/roles/k3s_server/meta/main.yml new file mode 100644 index 0000000..7d9fbfd --- /dev/null +++ b/k3s-ansible/roles/k3s_server/meta/main.yml @@ -0,0 +1,135 @@ +--- +argument_specs: + main: + short_description: Setup k3s servers + options: + apiserver_endpoint: + description: Virtual ip-address configured on each master + required: true + + cilium_bgp: + description: + - Enable cilium BGP control plane for LB services and pod cidrs. + - Disables the use of MetalLB. 
+ type: bool + default: ~ + + cilium_iface: + description: The network interface used for when Cilium is enabled + default: ~ + + extra_server_args: + description: Extra arguments for server nodes + default: "" + + group_name_master: + description: Name of the master group + default: master + + k3s_create_kubectl_symlink: + description: Create the kubectl -> k3s symlink + default: false + type: bool + + k3s_create_crictl_symlink: + description: Create the crictl -> k3s symlink + default: false + type: bool + + kube_vip_arp: + description: Enables kube-vip ARP broadcasts + default: true + type: bool + + kube_vip_bgp: + description: Enables kube-vip BGP peering + default: false + type: bool + + kube_vip_bgp_routerid: + description: Defines the router ID for the kube-vip BGP server + default: "127.0.0.1" + + kube_vip_bgp_as: + description: Defines the AS for the kube-vip BGP server + default: "64513" + + kube_vip_bgp_peeraddress: + description: Defines the address for the kube-vip BGP peer + default: "192.168.30.1" + + kube_vip_bgp_peeras: + description: Defines the AS for the kube-vip BGP peer + default: "64512" + + kube_vip_bgp_peers: + description: List of BGP peer ASN & address pairs + default: [] + + kube_vip_bgp_peers_groups: + description: Inventory group in which to search for additional kube_vip_bgp_peers parameters to merge. + default: ['k3s_master'] + + kube_vip_iface: + description: + - Explicitly define an interface that ALL control nodes + - should use to propagate the VIP, define it here. + - Otherwise, kube-vip will determine the right interface + - automatically at runtime. + default: ~ + + kube_vip_tag_version: + description: Image tag for kube-vip + default: v0.7.2 + + kube_vip_cloud_provider_tag_version: + description: Tag for kube-vip-cloud-provider manifest when enabled + default: main + + kube_vip_lb_ip_range: + description: IP range for kube-vip load balancer + default: ~ + + metal_lb_controller_tag_version: + description: Image tag for MetalLB + default: v0.14.3 + + metal_lb_speaker_tag_version: + description: Image tag for MetalLB + default: v0.14.3 + + metal_lb_type: + choices: + - frr + - native + default: native + description: Use FRR mode or native. Valid values are `frr` and `native` + + proxy_env: + type: dict + description: + - Internet proxy configurations. + - See https://docs.k3s.io/advanced#configuring-an-http-proxy for details + default: ~ + options: + HTTP_PROXY: + description: HTTP internet proxy + required: true + HTTPS_PROXY: + description: HTTPS internet proxy + required: true + NO_PROXY: + description: Addresses that will not use the proxies + required: true + + retry_count: + description: Amount of retries when verifying that nodes joined + type: int + default: 20 + + server_init_args: + description: Arguments for server nodes + + systemd_dir: + description: Path to systemd services + default: /etc/systemd/system diff --git a/k3s-ansible/roles/k3s_server/tasks/fetch_k3s_init_logs.yml b/k3s-ansible/roles/k3s_server/tasks/fetch_k3s_init_logs.yml new file mode 100644 index 0000000..ae6f522 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/fetch_k3s_init_logs.yml @@ -0,0 +1,28 @@ +--- +# Download logs of k3s-init.service from the nodes to localhost. +# Note that log_destination must be set. 
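+# In this playbook, log_destination is supplied by the "Save logs of k3s-init.service" step in roles/k3s_server/tasks/main.yml, which reads it from the ANSIBLE_K3S_LOG_DIR environment variable; when that variable is unset, the log fetch is skipped entirely.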
+ +- name: Fetch k3s-init.service logs + ansible.builtin.command: + cmd: journalctl --all --unit=k3s-init.service + changed_when: false + register: k3s_init_log + +- name: Create {{ log_destination }} + delegate_to: localhost + run_once: true + become: false + ansible.builtin.file: + path: "{{ log_destination }}" + state: directory + mode: "0755" + +- name: Store logs to {{ log_destination }} + delegate_to: localhost + become: false + ansible.builtin.template: + src: content.j2 + dest: "{{ log_destination }}/k3s-init@{{ ansible_hostname }}.log" + mode: "0644" + vars: + content: "{{ k3s_init_log.stdout }}" diff --git a/k3s-ansible/roles/k3s_server/tasks/http_proxy.yml b/k3s-ansible/roles/k3s_server/tasks/http_proxy.yml new file mode 100644 index 0000000..5b7c534 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/http_proxy.yml @@ -0,0 +1,16 @@ +--- +- name: Create k3s.service.d directory + ansible.builtin.file: + path: "{{ systemd_dir }}/k3s.service.d" + state: directory + owner: root + group: root + mode: "0755" + +- name: Copy K3s http_proxy conf file + ansible.builtin.template: + src: http_proxy.conf.j2 + dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf" + owner: root + group: root + mode: "0755" diff --git a/k3s-ansible/roles/k3s_server/tasks/kube-vip.yml b/k3s-ansible/roles/k3s_server/tasks/kube-vip.yml new file mode 100644 index 0000000..f8b53e6 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/kube-vip.yml @@ -0,0 +1,27 @@ +--- +- name: Create manifests directory on first master + ansible.builtin.file: + path: /var/lib/rancher/k3s/server/manifests + state: directory + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: Download vip cloud provider manifest to first master + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/{{ kube_vip_cloud_provider_tag_version | default('main') }}/manifest/kube-vip-cloud-controller.yaml # noqa yaml[line-length] + dest: /var/lib/rancher/k3s/server/manifests/kube-vip-cloud-controller.yaml + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: Copy kubevip configMap manifest to first master + ansible.builtin.template: + src: kubevip.yaml.j2 + dest: /var/lib/rancher/k3s/server/manifests/kubevip.yaml + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] diff --git a/k3s-ansible/roles/k3s_server/tasks/main.yml b/k3s-ansible/roles/k3s_server/tasks/main.yml new file mode 100644 index 0000000..8ebaad7 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/main.yml @@ -0,0 +1,173 @@ +--- +- name: Stop k3s-init + ansible.builtin.systemd: + name: k3s-init + state: stopped + failed_when: false + +# k3s-init won't work if the port is already in use +- name: Stop k3s + ansible.builtin.systemd: + name: k3s + state: stopped + failed_when: false + +- name: Clean previous runs of k3s-init # noqa command-instead-of-module + # The systemd module does not support "reset-failed", so we need to resort to command. 
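+ # systemd-run (used below to initialize the cluster) refuses to create a transient unit whose name still exists, even in a failed state, so any leftover failed k3s-init unit from a previous run has to be cleared first.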
+ ansible.builtin.command: systemctl reset-failed k3s-init + failed_when: false + changed_when: false + +- name: Deploy K3s http_proxy conf + ansible.builtin.include_tasks: http_proxy.yml + when: proxy_env is defined + +- name: Deploy vip manifest + ansible.builtin.include_tasks: vip.yml +- name: Deploy metallb manifest + ansible.builtin.include_tasks: metallb.yml + tags: metallb + when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined) + +- name: Deploy kube-vip manifest + ansible.builtin.include_tasks: kube-vip.yml + tags: kubevip + when: kube_vip_lb_ip_range is defined + +- name: Init cluster inside the transient k3s-init service + ansible.builtin.command: + cmd: systemd-run -p RestartSec=2 -p Restart=on-failure --unit=k3s-init k3s server {{ server_init_args }} + creates: "{{ systemd_dir }}/k3s-init.service" + +- name: Verification + when: not ansible_check_mode + block: + - name: Verify that all nodes actually joined (check k3s-init.service if this fails) + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} get nodes -l 'node-role.kubernetes.io/master=true' -o=jsonpath='{.items[*].metadata.name}'" # yamllint disable-line rule:line-length + register: nodes + until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups[group_name_master | default('master')] | length) # yamllint disable-line rule:line-length + retries: "{{ retry_count | default(20) }}" + delay: 10 + changed_when: false + always: + - name: Save logs of k3s-init.service + ansible.builtin.include_tasks: fetch_k3s_init_logs.yml + when: log_destination + vars: + log_destination: >- + {{ lookup('ansible.builtin.env', 'ANSIBLE_K3S_LOG_DIR', default=False) }} + - name: Kill the temporary service used for initialization + ansible.builtin.systemd: + name: k3s-init + state: stopped + failed_when: false + +- name: Copy K3s service file + register: k3s_service + ansible.builtin.template: + src: k3s.service.j2 + dest: "{{ systemd_dir }}/k3s.service" + owner: root + group: root + mode: "0644" + +- name: Enable and check K3s service + ansible.builtin.systemd: + name: k3s + daemon_reload: true + state: restarted + enabled: true + +- name: Wait for node-token + ansible.builtin.wait_for: + path: /var/lib/rancher/k3s/server/node-token + +- name: Register node-token file access mode + ansible.builtin.stat: + path: /var/lib/rancher/k3s/server + register: p + +- name: Change file access node-token + ansible.builtin.file: + path: /var/lib/rancher/k3s/server + mode: g+rx,o+rx + +- name: Read node-token from master + ansible.builtin.slurp: + src: /var/lib/rancher/k3s/server/node-token + register: node_token + +- name: Store Master node-token + ansible.builtin.set_fact: + token: "{{ node_token.content | b64decode | regex_replace('\n', '') }}" + +- name: Restore node-token file access + ansible.builtin.file: + path: /var/lib/rancher/k3s/server + mode: "{{ p.stat.mode }}" + +- name: Create directory .kube + ansible.builtin.file: + path: "{{ ansible_user_dir }}/.kube" + state: directory + owner: "{{ ansible_user_id }}" + mode: u=rwx,g=rx,o= + +- name: Copy config file to user home directory + ansible.builtin.copy: + src: /etc/rancher/k3s/k3s.yaml + dest: "{{ ansible_user_dir }}/.kube/config" + remote_src: true + owner: "{{ ansible_user_id }}" + mode: u=rw,g=,o= + +- name: Configure kubectl cluster to {{ endpoint_url }} + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} config set-cluster default + --server={{ endpoint_url }} + --kubeconfig {{ 
ansible_user_dir }}/.kube/config + changed_when: true + vars: + endpoint_url: >- + https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 +# Deactivated linter rules: +# - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap +# would be undefined. This will not be the case during playbook execution. +# noqa jinja[invalid] + +- name: Create kubectl symlink + ansible.builtin.file: + src: /usr/local/bin/k3s + dest: /usr/local/bin/kubectl + state: link + when: k3s_create_kubectl_symlink | default(true) | bool + +- name: Create crictl symlink + ansible.builtin.file: + src: /usr/local/bin/k3s + dest: /usr/local/bin/crictl + state: link + when: k3s_create_crictl_symlink | default(true) | bool + +- name: Get contents of manifests folder + ansible.builtin.find: + paths: /var/lib/rancher/k3s/server/manifests + file_type: file + register: k3s_server_manifests + +- name: Get sub dirs of manifests folder + ansible.builtin.find: + paths: /var/lib/rancher/k3s/server/manifests + file_type: directory + register: k3s_server_manifests_directories + +- name: Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start + ansible.builtin.file: + path: "{{ item.path }}" + state: absent + with_items: + - "{{ k3s_server_manifests.files }}" + - "{{ k3s_server_manifests_directories.files }}" + loop_control: + label: "{{ item.path }}" diff --git a/k3s-ansible/roles/k3s_server/tasks/metallb.yml b/k3s-ansible/roles/k3s_server/tasks/metallb.yml new file mode 100644 index 0000000..7624d16 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/metallb.yml @@ -0,0 +1,30 @@ +--- +- name: Create manifests directory on first master + ansible.builtin.file: + path: /var/lib/rancher/k3s/server/manifests + state: directory + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: "Download to first master: manifest for metallb-{{ metal_lb_type }}" + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{ metal_lb_type }}.yaml # noqa yaml[line-length] + dest: /var/lib/rancher/k3s/server/manifests/metallb-crds.yaml + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: Set image versions in manifest for metallb-{{ metal_lb_type }} + ansible.builtin.replace: + path: /var/lib/rancher/k3s/server/manifests/metallb-crds.yaml + regexp: "{{ item.change | ansible.builtin.regex_escape }}" + replace: "{{ item.to }}" + with_items: + - change: metallb/speaker:{{ metal_lb_controller_tag_version }} + to: metallb/speaker:{{ metal_lb_speaker_tag_version }} + loop_control: + label: "{{ item.change }} => {{ item.to }}" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] diff --git a/k3s-ansible/roles/k3s_server/tasks/vip.yml b/k3s-ansible/roles/k3s_server/tasks/vip.yml new file mode 100644 index 0000000..aba5b4f --- /dev/null +++ b/k3s-ansible/roles/k3s_server/tasks/vip.yml @@ -0,0 +1,31 @@ +--- +- name: Set _kube_vip_bgp_peers fact + ansible.builtin.set_fact: + _kube_vip_bgp_peers: "{{ lookup('community.general.merge_variables', '^kube_vip_bgp_peers__.+$', initial_value=kube_vip_bgp_peers, groups=kube_vip_bgp_peers_groups) }}" # yamllint disable-line rule:line-length + +- name: Create manifests directory on first 
master + ansible.builtin.file: + path: /var/lib/rancher/k3s/server/manifests + state: directory + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: Download vip rbac manifest to first master + ansible.builtin.get_url: + url: https://kube-vip.io/manifests/rbac.yaml + dest: /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + +- name: Copy vip manifest to first master + ansible.builtin.template: + src: vip.yaml.j2 + dest: /var/lib/rancher/k3s/server/manifests/vip.yaml + owner: root + group: root + mode: "0644" + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] diff --git a/k3s-ansible/roles/k3s_server/templates/content.j2 b/k3s-ansible/roles/k3s_server/templates/content.j2 new file mode 100644 index 0000000..fe7fd8b --- /dev/null +++ b/k3s-ansible/roles/k3s_server/templates/content.j2 @@ -0,0 +1,5 @@ +{# + This is a really simple template that just outputs the + value of the "content" variable. +#} +{{ content }} diff --git a/k3s-ansible/roles/k3s_server/templates/http_proxy.conf.j2 b/k3s-ansible/roles/k3s_server/templates/http_proxy.conf.j2 new file mode 100644 index 0000000..6591d45 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/templates/http_proxy.conf.j2 @@ -0,0 +1,4 @@ +[Service] +Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }} +Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }} +Environment=NO_PROXY={{ proxy_env.NO_PROXY }} diff --git a/k3s-ansible/roles/k3s_server/templates/k3s.service.j2 b/k3s-ansible/roles/k3s_server/templates/k3s.service.j2 new file mode 100644 index 0000000..ae5cb48 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/templates/k3s.service.j2 @@ -0,0 +1,24 @@ +[Unit] +Description=Lightweight Kubernetes +Documentation=https://k3s.io +After=network-online.target + +[Service] +Type=notify +ExecStartPre=-/sbin/modprobe br_netfilter +ExecStartPre=-/sbin/modprobe overlay +ExecStart=/usr/local/bin/k3s server {{ extra_server_args | default("") }} +KillMode=process +Delegate=yes +# Having non-zero Limit*s causes performance problems due to accounting overhead +# in the kernel. We recommend using cgroups to do container-local accounting. 
+LimitNOFILE=1048576 +LimitNPROC=infinity +LimitCORE=infinity +TasksMax=infinity +TimeoutStartSec=0 +Restart=always +RestartSec=5s + +[Install] +WantedBy=multi-user.target diff --git a/k3s-ansible/roles/k3s_server/templates/kubevip.yaml.j2 b/k3s-ansible/roles/k3s_server/templates/kubevip.yaml.j2 new file mode 100644 index 0000000..40d8b50 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/templates/kubevip.yaml.j2 @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: kubevip + namespace: kube-system +data: +{% if kube_vip_lb_ip_range is string %} +{# kube_vip_lb_ip_range was used in the legacy way: single string instead of a list #} +{# => transform to list with single element #} +{% set kube_vip_lb_ip_range = [kube_vip_lb_ip_range] %} +{% endif %} + range-global: {{ kube_vip_lb_ip_range | join(',') }} diff --git a/k3s-ansible/roles/k3s_server/templates/vip.yaml.j2 b/k3s-ansible/roles/k3s_server/templates/vip.yaml.j2 new file mode 100644 index 0000000..44469a6 --- /dev/null +++ b/k3s-ansible/roles/k3s_server/templates/vip.yaml.j2 @@ -0,0 +1,104 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: kube-vip-ds + namespace: kube-system +spec: + selector: + matchLabels: + name: kube-vip-ds + template: + metadata: + labels: + name: kube-vip-ds + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: node-role.kubernetes.io/master + operator: Exists + - matchExpressions: + - key: node-role.kubernetes.io/control-plane + operator: Exists + containers: + - args: + - manager + env: + - name: vip_arp + value: "{{ 'true' if kube_vip_arp | default(true) | bool else 'false' }}" + - name: bgp_enable + value: "{{ 'true' if kube_vip_bgp | default(false) | bool else 'false' }}" + - name: port + value: "6443" +{% if kube_vip_iface %} + - name: vip_interface + value: {{ kube_vip_iface }} +{% endif %} + - name: vip_cidr + value: "{{ apiserver_endpoint | ansible.utils.ipsubnet | ansible.utils.ipaddr('prefix') }}" + - name: cp_enable + value: "true" + - name: cp_namespace + value: kube-system + - name: vip_ddns + value: "false" + - name: svc_enable + value: "{{ 'true' if kube_vip_lb_ip_range is defined else 'false' }}" + - name: vip_leaderelection + value: "true" + - name: vip_leaseduration + value: "15" + - name: vip_renewdeadline + value: "10" + - name: vip_retryperiod + value: "2" + - name: address + value: {{ apiserver_endpoint }} +{% if kube_vip_bgp | default(false) | bool %} +{% if kube_vip_bgp_routerid is defined %} + - name: bgp_routerid + value: "{{ kube_vip_bgp_routerid }}" +{% endif %} +{% if _kube_vip_bgp_peers | length > 0 %} + - name: bgppeers + value: "{{ _kube_vip_bgp_peers | map(attribute='peer_address') | zip(_kube_vip_bgp_peers| map(attribute='peer_asn')) | map('join', ',') | join(':') }}" # yamllint disable-line rule:line-length +{% else %} +{% if kube_vip_bgp_as is defined %} + - name: bgp_as + value: "{{ kube_vip_bgp_as }}" +{% endif %} +{% if kube_vip_bgp_peeraddress is defined %} + - name: bgp_peeraddress + value: "{{ kube_vip_bgp_peeraddress }}" +{% endif %} +{% if kube_vip_bgp_peeras is defined %} + - name: bgp_peeras + value: "{{ kube_vip_bgp_peeras }}" +{% endif %} +{% endif %} +{% endif %} + image: ghcr.io/kube-vip/kube-vip:{{ kube_vip_tag_version }} + imagePullPolicy: Always + name: kube-vip + resources: {} + securityContext: + capabilities: + add: + - NET_ADMIN + - NET_RAW + - SYS_TIME + hostNetwork: true + serviceAccountName: kube-vip + tolerations: + - effect: NoSchedule + 
operator: Exists + - effect: NoExecute + operator: Exists + updateStrategy: {} +status: + currentNumberScheduled: 0 + desiredNumberScheduled: 0 + numberMisscheduled: 0 + numberReady: 0 diff --git a/k3s-ansible/roles/k3s_server_post/defaults/main.yml b/k3s-ansible/roles/k3s_server_post/defaults/main.yml new file mode 100644 index 0000000..578e557 --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/defaults/main.yml @@ -0,0 +1,32 @@ +--- +k3s_kubectl_binary: k3s kubectl + +bpf_lb_algorithm: maglev +bpf_lb_mode: hybrid + +calico_blockSize: 26 # noqa var-naming +calico_ebpf: false +calico_encapsulation: VXLANCrossSubnet +calico_natOutgoing: Enabled # noqa var-naming +calico_nodeSelector: all() # noqa var-naming +calico_tag: v3.27.2 + +cilium_bgp: false +cilium_exportPodCIDR: true # noqa var-naming +cilium_bgp_my_asn: 64513 +cilium_bgp_peer_asn: 64512 +cilium_bgp_neighbors: [] +cilium_bgp_neighbors_groups: ['k3s_all'] +cilium_bgp_lb_cidr: 192.168.31.0/24 +cilium_hubble: true +cilium_mode: native + +cluster_cidr: 10.52.0.0/16 +enable_bpf_masquerade: true +kube_proxy_replacement: true +group_name_master: master + +metal_lb_mode: layer2 +metal_lb_available_timeout: 240s +metal_lb_controller_tag_version: v0.14.3 +metal_lb_ip_range: 192.168.30.80-192.168.30.90 diff --git a/k3s-ansible/roles/k3s_server_post/meta/main.yml b/k3s-ansible/roles/k3s_server_post/meta/main.yml new file mode 100644 index 0000000..f9fc83d --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/meta/main.yml @@ -0,0 +1,153 @@ +--- +argument_specs: + main: + short_description: Configure k3s cluster + options: + apiserver_endpoint: + description: Virtual ip-address configured on each master + required: true + + bpf_lb_algorithm: + description: BPF lb algorithm + default: maglev + + bpf_lb_mode: + description: BPF lb mode + default: hybrid + + calico_blockSize: + description: IP pool block size + type: int + default: 26 + + calico_ebpf: + description: Use eBPF dataplane instead of iptables + type: bool + default: false + + calico_encapsulation: + description: IP pool encapsulation + default: VXLANCrossSubnet + + calico_natOutgoing: + description: IP pool NAT outgoing + default: Enabled + + calico_nodeSelector: + description: IP pool node selector + default: all() + + calico_iface: + description: The network interface used for when Calico is enabled + default: ~ + + calico_tag: + description: Calico version tag + default: v3.27.2 + + cilium_bgp: + description: + - Enable cilium BGP control plane for LB services and pod cidrs. + - Disables the use of MetalLB. + type: bool + default: false + + cilium_bgp_my_asn: + description: Local ASN for BGP peer + type: int + default: 64513 + + cilium_bgp_peer_asn: + description: BGP peer ASN + type: int + default: 64512 + + cilium_bgp_peer_address: + description: BGP peer address + default: ~ + + cilium_bgp_neighbors: + description: List of BGP peer ASN & address pairs + default: [] + + cilium_bgp_neighbors_groups: + description: Inventory group in which to search for additional cilium_bgp_neighbors parameters to merge. 
+ default: ['k3s_all'] + + cilium_bgp_lb_cidr: + description: BGP load balancer IP range + default: 192.168.31.0/24 + + cilium_exportPodCIDR: + description: Export pod CIDR + type: bool + default: true + + cilium_hubble: + description: Enable Cilium Hubble + type: bool + default: true + + cilium_iface: + description: The network interface used for when Cilium is enabled + default: ~ + + cilium_mode: + description: Inner-node communication mode + default: native + choices: + - native + - routed + + cluster_cidr: + description: Inner-cluster IP range + default: 10.52.0.0/16 + + enable_bpf_masquerade: + description: Use IP masquerading + type: bool + default: true + + group_name_master: + description: Name of the master group + default: master + + kube_proxy_replacement: + description: Replace the native kube-proxy with Cilium + type: bool + default: true + + kube_vip_lb_ip_range: + description: IP range for kube-vip load balancer + default: ~ + + metal_lb_available_timeout: + description: Wait for MetalLB resources + default: 240s + + metal_lb_ip_range: + description: MetalLB ip range for load balancer + default: 192.168.30.80-192.168.30.90 + + metal_lb_controller_tag_version: + description: Image tag for MetalLB + default: v0.14.3 + + metal_lb_mode: + description: Metallb mode + default: layer2 + choices: + - bgp + - layer2 + + metal_lb_bgp_my_asn: + description: BGP ASN configurations + default: ~ + + metal_lb_bgp_peer_asn: + description: BGP peer ASN configurations + default: ~ + + metal_lb_bgp_peer_address: + description: BGP peer address + default: ~ diff --git a/k3s-ansible/roles/k3s_server_post/tasks/calico.yml b/k3s-ansible/roles/k3s_server_post/tasks/calico.yml new file mode 100644 index 0000000..2a9302f --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/tasks/calico.yml @@ -0,0 +1,120 @@ +--- +- name: Deploy Calico to cluster + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + run_once: true + block: + - name: Create manifests directory on first master + ansible.builtin.file: + path: /tmp/k3s + state: directory + owner: root + group: root + mode: "0755" + + - name: "Download to first master: manifest for Tigera Operator and Calico CRDs" + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml + dest: /tmp/k3s/tigera-operator.yaml + owner: root + group: root + mode: "0755" + + - name: Copy Calico custom resources manifest to first master + ansible.builtin.template: + src: calico.crs.j2 + dest: /tmp/k3s/custom-resources.yaml + owner: root + group: root + mode: "0755" + + - name: Deploy or replace Tigera Operator + block: + - name: Deploy Tigera Operator + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} create -f /tmp/k3s/tigera-operator.yaml" + register: create_operator + changed_when: "'created' in create_operator.stdout" + failed_when: "'Error' in create_operator.stderr and 'already exists' not in create_operator.stderr" + rescue: + - name: Replace existing Tigera Operator + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} replace -f /tmp/k3s/tigera-operator.yaml" + register: replace_operator + changed_when: "'replaced' in replace_operator.stdout" + failed_when: "'Error' in replace_operator.stderr" + + - name: Wait for Tigera Operator resources + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }} + 
--namespace='tigera-operator' + --for=condition=Available=True + --timeout=30s + register: tigera_result + changed_when: false + until: tigera_result is succeeded + retries: 7 + delay: 7 + with_items: + - { name: tigera-operator, type: deployment } + loop_control: + label: "{{ item.type }}/{{ item.name }}" + + - name: Deploy Calico custom resources + block: + - name: Deploy custom resources for Calico + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} create -f /tmp/k3s/custom-resources.yaml" + register: create_cr + changed_when: "'created' in create_cr.stdout" + failed_when: "'Error' in create_cr.stderr and 'already exists' not in create_cr.stderr" + rescue: + - name: Apply new Calico custom resource manifest + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/custom-resources.yaml" + register: apply_cr + changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout" + failed_when: "'Error' in apply_cr.stderr" + + - name: Wait for Calico system resources to be available + ansible.builtin.command: >- + {% if item.type == 'daemonset' %} + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait pods + --namespace='{{ item.namespace }}' + --selector={{ item.selector }} + --for=condition=Ready + {% else %} + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }} + --namespace='{{ item.namespace }}' + --for=condition=Available + {% endif %} + --timeout=30s + register: cr_result + changed_when: false + until: cr_result is succeeded + retries: 30 + delay: 7 + with_items: + - { name: calico-typha, type: deployment, namespace: calico-system } + - { name: calico-kube-controllers, type: deployment, namespace: calico-system } + - name: csi-node-driver + type: daemonset + selector: k8s-app=csi-node-driver + namespace: calico-system + - name: calico-node + type: daemonset + selector: k8s-app=calico-node + namespace: calico-system + - { name: calico-apiserver, type: deployment, namespace: calico-apiserver } + loop_control: + label: "{{ item.type }}/{{ item.name }}" + + - name: Patch Felix configuration for eBPF mode + ansible.builtin.command: + cmd: > + {{ k3s_kubectl_binary | default('k3s kubectl') }} patch felixconfiguration default + --type='merge' + --patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}' + register: patch_result + changed_when: "'felixconfiguration.projectcalico.org/default patched' in patch_result.stdout" + failed_when: "'Error' in patch_result.stderr" + when: calico_ebpf diff --git a/k3s-ansible/roles/k3s_server_post/tasks/cilium.yml b/k3s-ansible/roles/k3s_server_post/tasks/cilium.yml new file mode 100644 index 0000000..d7a48b0 --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/tasks/cilium.yml @@ -0,0 +1,256 @@ +--- +- name: Prepare Cilium CLI on first master and deploy CNI + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] + run_once: true + block: + - name: Create tmp directory on first master + ansible.builtin.file: + path: /tmp/k3s + state: directory + owner: root + group: root + mode: "0755" + + - name: Check if Cilium CLI is installed + ansible.builtin.command: cilium version + register: cilium_cli_installed + failed_when: false + changed_when: false + ignore_errors: true + + - name: Check for Cilium CLI version in command output + ansible.builtin.set_fact: + installed_cli_version: >- + {{ + cilium_cli_installed.stdout_lines + | join(' ') + | regex_findall('cilium-cli: 
(v\d+\.\d+\.\d+)') + | first + | default('unknown') + }} + when: cilium_cli_installed.rc == 0 + + - name: Get latest stable Cilium CLI version file + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt + dest: /tmp/k3s/cilium-cli-stable.txt + owner: root + group: root + mode: "0755" + + - name: Read Cilium CLI stable version from file + ansible.builtin.command: cat /tmp/k3s/cilium-cli-stable.txt + register: cli_ver + changed_when: false + + - name: Log installed Cilium CLI version + ansible.builtin.debug: + msg: "Installed Cilium CLI version: {{ installed_cli_version | default('Not installed') }}" + + - name: Log latest stable Cilium CLI version + ansible.builtin.debug: + msg: "Latest Cilium CLI version: {{ cli_ver.stdout }}" + + - name: Determine if Cilium CLI needs installation or update + ansible.builtin.set_fact: + cilium_cli_needs_update: >- + {{ + cilium_cli_installed.rc != 0 or + (cilium_cli_installed.rc == 0 and + installed_cli_version != cli_ver.stdout) + }} + + - name: Install or update Cilium CLI + when: cilium_cli_needs_update + block: + - name: Set architecture variable + ansible.builtin.set_fact: + cli_arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}" + + - name: Download Cilium CLI and checksum + ansible.builtin.get_url: + url: "{{ cilium_base_url }}/cilium-linux-{{ cli_arch }}{{ item }}" + dest: /tmp/k3s/cilium-linux-{{ cli_arch }}{{ item }} + owner: root + group: root + mode: "0755" + loop: + - .tar.gz + - .tar.gz.sha256sum + vars: + cilium_base_url: https://github.com/cilium/cilium-cli/releases/download/{{ cli_ver.stdout }} + + - name: Verify the downloaded tarball + ansible.builtin.shell: | + cd /tmp/k3s && sha256sum --check cilium-linux-{{ cli_arch }}.tar.gz.sha256sum + args: + executable: /bin/bash + changed_when: false + + - name: Extract Cilium CLI to /usr/local/bin + ansible.builtin.unarchive: + src: /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz + dest: /usr/local/bin + remote_src: true + + - name: Remove downloaded tarball and checksum file + ansible.builtin.file: + path: "{{ item }}" + state: absent + loop: + - /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz + - /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz.sha256sum + + - name: Wait for connectivity to kube VIP + ansible.builtin.command: ping -c 1 {{ apiserver_endpoint }} + register: ping_result + until: ping_result.rc == 0 + retries: 21 + delay: 1 + ignore_errors: true + changed_when: false + + - name: Fail if kube VIP not reachable + ansible.builtin.fail: + msg: API endpoint {{ apiserver_endpoint }} is not reachable + when: ping_result.rc != 0 + + - name: Test for existing Cilium install + ansible.builtin.command: | + {{ k3s_kubectl_binary | default('k3s kubectl') }} -n kube-system get daemonsets cilium + register: cilium_installed + failed_when: false + changed_when: false + ignore_errors: true + + - name: Check existing Cilium install + when: cilium_installed.rc == 0 + block: + - name: Check Cilium version + ansible.builtin.command: cilium version + register: cilium_version + failed_when: false + changed_when: false + ignore_errors: true + + - name: Parse installed Cilium version + ansible.builtin.set_fact: + installed_cilium_version: >- + {{ + cilium_version.stdout_lines + | join(' ') + | regex_findall('cilium image.+(\d+\.\d+\.\d+)') + | first + | default('unknown') + }} + + - name: Determine if Cilium needs update + ansible.builtin.set_fact: + cilium_needs_update: >- + {{ 'v' + installed_cilium_version != cilium_tag }} + + - name: Log result + 
ansible.builtin.debug: + msg: > + Installed Cilium version: {{ installed_cilium_version }}, + Target Cilium version: {{ cilium_tag }}, + Update needed: {{ cilium_needs_update }} + + - name: Install Cilium + ansible.builtin.command: >- + {% if cilium_installed.rc != 0 %} + cilium install + {% else %} + cilium upgrade + {% endif %} + --version "{{ cilium_tag }}" + --helm-set operator.replicas="1" + {{ '--helm-set devices=' + cilium_iface if cilium_iface != 'auto' else '' }} + --helm-set ipam.operator.clusterPoolIPv4PodCIDRList={{ cluster_cidr }} + {% if cilium_mode == "native" or (cilium_bgp and cilium_exportPodCIDR != 'false') %} + --helm-set ipv4NativeRoutingCIDR={{ cluster_cidr }} + {% endif %} + --helm-set k8sServiceHost="127.0.0.1" + --helm-set k8sServicePort="6444" + --helm-set routingMode={{ cilium_mode }} + --helm-set autoDirectNodeRoutes={{ "true" if cilium_mode == "native" else "false" }} + --helm-set kubeProxyReplacement={{ kube_proxy_replacement }} + --helm-set bpf.masquerade={{ enable_bpf_masquerade }} + --helm-set bgpControlPlane.enabled={{ cilium_bgp | default("false") }} + --helm-set hubble.enabled={{ "true" if cilium_hubble else "false" }} + --helm-set hubble.relay.enabled={{ "true" if cilium_hubble else "false" }} + --helm-set hubble.ui.enabled={{ "true" if cilium_hubble else "false" }} + {% if kube_proxy_replacement is not false %} + --helm-set bpf.loadBalancer.algorithm={{ bpf_lb_algorithm }} + --helm-set bpf.loadBalancer.mode={{ bpf_lb_mode }} + {% endif %} + environment: + KUBECONFIG: "{{ ansible_user_dir }}/.kube/config" + register: cilium_install_result + changed_when: cilium_install_result.rc == 0 + when: cilium_installed.rc != 0 or cilium_needs_update + + - name: Wait for Cilium resources + ansible.builtin.command: >- + {% if item.type == 'daemonset' %} + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait pods + --namespace=kube-system + --selector='k8s-app=cilium' + --for=condition=Ready + {% else %} + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }} + --namespace=kube-system + --for=condition=Available + {% endif %} + --timeout=30s + register: cr_result + changed_when: false + until: cr_result is succeeded + retries: 30 + delay: 7 + with_items: + - { name: cilium-operator, type: deployment } + - { name: cilium, type: daemonset, selector: k8s-app=cilium } + - { name: hubble-relay, type: deployment, check_hubble: true } + - { name: hubble-ui, type: deployment, check_hubble: true } + loop_control: + label: "{{ item.type }}/{{ item.name }}" + when: >- + not item.check_hubble | default(false) or (item.check_hubble | default(false) and cilium_hubble) + + - name: Configure Cilium BGP + when: cilium_bgp + block: + - name: Set _cilium_bgp_neighbors fact + ansible.builtin.set_fact: + _cilium_bgp_neighbors: "{{ lookup('community.general.merge_variables', '^cilium_bgp_neighbors__.+$', initial_value=cilium_bgp_neighbors, groups=cilium_bgp_neighbors_groups) }}" # yamllint disable-line rule:line-length + + - name: Copy BGP manifests to first master + ansible.builtin.template: + src: cilium.crs.j2 + dest: /tmp/k3s/cilium-bgp.yaml + owner: root + group: root + mode: "0755" + + - name: Apply BGP manifests + ansible.builtin.command: + cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/cilium-bgp.yaml" + register: apply_cr + changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout" + failed_when: "'is invalid' in apply_cr.stderr" + ignore_errors: true + + - name: Print error message if BGP 
manifests application fails + ansible.builtin.debug: + msg: "{{ apply_cr.stderr }}" + when: "'is invalid' in apply_cr.stderr" + + - name: Test for BGP config resources + ansible.builtin.command: "{{ item }}" + loop: + - "{{ k3s_kubectl_binary | default('k3s kubectl') }} get CiliumBGPPeeringPolicy.cilium.io" + - "{{ k3s_kubectl_binary | default('k3s kubectl') }} get CiliumLoadBalancerIPPool.cilium.io" + changed_when: false + loop_control: + label: "{{ item }}" diff --git a/k3s-ansible/roles/k3s_server_post/tasks/main.yml b/k3s-ansible/roles/k3s_server_post/tasks/main.yml new file mode 100644 index 0000000..1a02d8d --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/tasks/main.yml @@ -0,0 +1,20 @@ +--- +- name: Deploy calico + ansible.builtin.include_tasks: calico.yml + tags: calico + when: calico_iface is defined and cilium_iface is not defined + +- name: Deploy cilium + ansible.builtin.include_tasks: cilium.yml + tags: cilium + when: cilium_iface is defined + +- name: Deploy metallb pool + ansible.builtin.include_tasks: metallb.yml + tags: metallb + when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined) + +- name: Remove tmp directory used for manifests + ansible.builtin.file: + path: /tmp/k3s + state: absent diff --git a/k3s-ansible/roles/k3s_server_post/tasks/metallb.yml b/k3s-ansible/roles/k3s_server_post/tasks/metallb.yml new file mode 100644 index 0000000..4a3279c --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/tasks/metallb.yml @@ -0,0 +1,136 @@ +--- +- name: Create manifests directory for temp configuration + ansible.builtin.file: + path: /tmp/k3s + state: directory + owner: "{{ ansible_user_id }}" + mode: "0755" + with_items: "{{ groups[group_name_master | default('master')] }}" + run_once: true + +- name: Delete outdated metallb replicas + ansible.builtin.shell: |- + set -o pipefail + + REPLICAS=$({{ k3s_kubectl_binary | default('k3s kubectl') }} --namespace='metallb-system' get replicasets \ + -l 'component=controller,app=metallb' \ + -o jsonpath='{.items[0].spec.template.spec.containers[0].image}, {.items[0].metadata.name}' 2>/dev/null || true) + REPLICAS_SETS=$(echo ${REPLICAS} | grep -v '{{ metal_lb_controller_tag_version }}' | sed -e "s/^.*\s//g") + if [ -n "${REPLICAS_SETS}" ] ; then + for REPLICAS in "${REPLICAS_SETS}" + do + {{ k3s_kubectl_binary | default('k3s kubectl') }} --namespace='metallb-system' \ + delete rs "${REPLICAS}" + done + fi + args: + executable: /bin/bash + changed_when: false + run_once: true + with_items: "{{ groups[group_name_master | default('master')] }}" + +- name: Copy metallb CRs manifest to first master + ansible.builtin.template: + src: metallb.crs.j2 + dest: /tmp/k3s/metallb-crs.yaml + owner: "{{ ansible_user_id }}" + mode: "0755" + with_items: "{{ groups[group_name_master | default('master')] }}" + run_once: true + +- name: Test metallb-system namespace + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system + changed_when: false + with_items: "{{ groups[group_name_master | default('master')] }}" + run_once: true + +- name: Wait for MetalLB resources + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.resource }} + --namespace='metallb-system' + {% if item.name | default(False) -%}{{ item.name }}{%- endif %} + {% if item.selector | default(False) -%}--selector='{{ item.selector }}'{%- endif %} + {% if item.condition | default(False) -%}{{ item.condition }}{%- endif %} + --timeout='{{ metal_lb_available_timeout 
}}' + changed_when: false + run_once: true + with_items: + - description: controller + resource: deployment + name: controller + condition: --for condition=Available=True + - description: webhook service + resource: pod + selector: component=controller + condition: --for=jsonpath='{.status.phase}'=Running + - description: pods in replica sets + resource: pod + selector: component=controller,app=metallb + condition: --for condition=Ready + - description: ready replicas of controller + resource: replicaset + selector: component=controller,app=metallb + condition: --for=jsonpath='{.status.readyReplicas}'=1 + - description: fully labeled replicas of controller + resource: replicaset + selector: component=controller,app=metallb + condition: --for=jsonpath='{.status.fullyLabeledReplicas}'=1 + - description: available replicas of controller + resource: replicaset + selector: component=controller,app=metallb + condition: --for=jsonpath='{.status.availableReplicas}'=1 + loop_control: + label: "{{ item.description }}" + +- name: Set metallb webhook service name + ansible.builtin.set_fact: + metallb_webhook_service_name: >- + {{ + ( + (metal_lb_controller_tag_version | regex_replace('^v', '')) + is + version('0.14.4', '<', version_type='semver') + ) | ternary( + 'webhook-service', + 'metallb-webhook-service' + ) + }} + +- name: Test metallb-system webhook-service endpoint + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get endpoints {{ metallb_webhook_service_name }} + changed_when: false + with_items: "{{ groups[group_name_master | default('master')] }}" + run_once: true + +- name: Apply metallb CRs + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/metallb-crs.yaml + --timeout='{{ metal_lb_available_timeout }}' + register: this + changed_when: false + run_once: true + until: this.rc == 0 + retries: 5 + +- name: Test metallb-system resources for Layer 2 configuration + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get {{ item }} + changed_when: false + run_once: true + when: metal_lb_mode == "layer2" + with_items: + - IPAddressPool + - L2Advertisement + +- name: Test metallb-system resources for BGP configuration + ansible.builtin.command: >- + {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get {{ item }} + changed_when: false + run_once: true + when: metal_lb_mode == "bgp" + with_items: + - IPAddressPool + - BGPPeer + - BGPAdvertisement diff --git a/k3s-ansible/roles/k3s_server_post/templates/calico.crs.j2 b/k3s-ansible/roles/k3s_server_post/templates/calico.crs.j2 new file mode 100644 index 0000000..351b648 --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/templates/calico.crs.j2 @@ -0,0 +1,41 @@ +# This section includes base Calico installation configuration. +# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation +apiVersion: operator.tigera.io/v1 +kind: Installation +metadata: + name: default +spec: + # Configures Calico networking. + calicoNetwork: + # Note: The ipPools section cannot be modified post-install. 
+ ipPools: + - blockSize: {{ calico_blockSize }} + cidr: {{ cluster_cidr }} + encapsulation: {{ calico_encapsulation }} + natOutgoing: {{ calico_natOutgoing }} + nodeSelector: {{ calico_nodeSelector }} + nodeAddressAutodetectionV4: + interface: {{ calico_iface }} + linuxDataplane: {{ 'BPF' if calico_ebpf else 'Iptables' }} + +--- + +# This section configures the Calico API server. +# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer +apiVersion: operator.tigera.io/v1 +kind: APIServer +metadata: + name: default +spec: {} + +{% if calico_ebpf %} +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: kubernetes-services-endpoint + namespace: tigera-operator +data: + KUBERNETES_SERVICE_HOST: '{{ apiserver_endpoint }}' + KUBERNETES_SERVICE_PORT: '6443' +{% endif %} diff --git a/k3s-ansible/roles/k3s_server_post/templates/cilium.crs.j2 b/k3s-ansible/roles/k3s_server_post/templates/cilium.crs.j2 new file mode 100644 index 0000000..5a9e81c --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/templates/cilium.crs.j2 @@ -0,0 +1,48 @@ +apiVersion: "cilium.io/v2alpha1" +kind: CiliumBGPPeeringPolicy +metadata: + name: 01-bgp-peering-policy +spec: # CiliumBGPPeeringPolicySpec + virtualRouters: # []CiliumBGPVirtualRouter + - localASN: {{ cilium_bgp_my_asn }} + exportPodCIDR: {{ cilium_exportPodCIDR | default('true') }} + neighbors: # []CiliumBGPNeighbor +{% if _cilium_bgp_neighbors | length > 0 %} +{% for item in _cilium_bgp_neighbors %} + - peerAddress: '{{ item.peer_address + "/32"}}' + peerASN: {{ item.peer_asn }} + eBGPMultihopTTL: 10 + connectRetryTimeSeconds: 120 + holdTimeSeconds: 90 + keepAliveTimeSeconds: 30 + gracefulRestart: + enabled: true + restartTimeSeconds: 120 +{% endfor %} +{% else %} + - peerAddress: '{{ cilium_bgp_peer_address + "/32"}}' + peerASN: {{ cilium_bgp_peer_asn }} + eBGPMultihopTTL: 10 + connectRetryTimeSeconds: 120 + holdTimeSeconds: 90 + keepAliveTimeSeconds: 30 + gracefulRestart: + enabled: true + restartTimeSeconds: 120 +{% endif %} + serviceSelector: + matchExpressions: + - {key: somekey, operator: NotIn, values: ['never-used-value']} +--- +apiVersion: "cilium.io/v2alpha1" +kind: CiliumLoadBalancerIPPool +metadata: + name: "01-lb-pool" +spec: + blocks: +{% if "/" in cilium_bgp_lb_cidr %} + - cidr: {{ cilium_bgp_lb_cidr }} +{% else %} + - start: {{ cilium_bgp_lb_cidr.split('-')[0] }} + stop: {{ cilium_bgp_lb_cidr.split('-')[1] }} +{% endif %} diff --git a/k3s-ansible/roles/k3s_server_post/templates/metallb.crs.j2 b/k3s-ansible/roles/k3s_server_post/templates/metallb.crs.j2 new file mode 100644 index 0000000..562f561 --- /dev/null +++ b/k3s-ansible/roles/k3s_server_post/templates/metallb.crs.j2 @@ -0,0 +1,43 @@ +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: +{% if metal_lb_ip_range is string %} +{# metal_lb_ip_range was used in the legacy way: single string instead of a list #} +{# => transform to list with single element #} +{% set metal_lb_ip_range = [metal_lb_ip_range] %} +{% endif %} +{% for range in metal_lb_ip_range %} + - {{ range }} +{% endfor %} + +{% if metal_lb_mode == "layer2" %} +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: default + namespace: metallb-system +{% endif %} +{% if metal_lb_mode == "bgp" %} +--- +apiVersion: metallb.io/v1beta2 +kind: BGPPeer +metadata: + name: default + namespace: metallb-system +spec: + myASN: {{ metal_lb_bgp_my_asn }} + peerASN: {{ 
metal_lb_bgp_peer_asn }} + peerAddress: {{ metal_lb_bgp_peer_address }} + +--- +apiVersion: metallb.io/v1beta1 +kind: BGPAdvertisement +metadata: + name: default + namespace: metallb-system +{% endif %} diff --git a/k3s-ansible/roles/lxc/handlers/main.yml b/k3s-ansible/roles/lxc/handlers/main.yml new file mode 100644 index 0000000..1c0002d --- /dev/null +++ b/k3s-ansible/roles/lxc/handlers/main.yml @@ -0,0 +1,6 @@ +--- +- name: Reboot server + become: true + ansible.builtin.reboot: + reboot_command: "{{ custom_reboot_command | default(omit) }}" + listen: reboot server diff --git a/k3s-ansible/roles/lxc/meta/main.yml b/k3s-ansible/roles/lxc/meta/main.yml new file mode 100644 index 0000000..42847df --- /dev/null +++ b/k3s-ansible/roles/lxc/meta/main.yml @@ -0,0 +1,8 @@ +--- +argument_specs: + main: + short_description: Configure LXC + options: + custom_reboot_command: + default: ~ + description: Command to run on reboot diff --git a/k3s-ansible/roles/lxc/tasks/main.yml b/k3s-ansible/roles/lxc/tasks/main.yml new file mode 100644 index 0000000..3568687 --- /dev/null +++ b/k3s-ansible/roles/lxc/tasks/main.yml @@ -0,0 +1,21 @@ +--- +- name: Check for rc.local file + ansible.builtin.stat: + path: /etc/rc.local + register: rcfile + +- name: Create rc.local if needed + ansible.builtin.lineinfile: + path: /etc/rc.local + line: "#!/bin/sh -e" + create: true + insertbefore: BOF + mode: u=rwx,g=rx,o=rx + when: not rcfile.stat.exists + +- name: Write rc.local file + ansible.builtin.blockinfile: + path: /etc/rc.local + content: "{{ lookup('template', 'templates/rc.local.j2') }}" + state: present + notify: reboot server diff --git a/k3s-ansible/roles/prereq/defaults/main.yml b/k3s-ansible/roles/prereq/defaults/main.yml new file mode 100644 index 0000000..850cbbf --- /dev/null +++ b/k3s-ansible/roles/prereq/defaults/main.yml @@ -0,0 +1,4 @@ +--- +secure_path: + RedHat: /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin + Suse: /usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin diff --git a/k3s-ansible/roles/prereq/meta/main.yml b/k3s-ansible/roles/prereq/meta/main.yml new file mode 100644 index 0000000..939124b --- /dev/null +++ b/k3s-ansible/roles/prereq/meta/main.yml @@ -0,0 +1,7 @@ +--- +argument_specs: + main: + short_description: Prerequisites + options: + system_timezone: + description: Timezone to be set on all nodes diff --git a/k3s-ansible/roles/prereq/tasks/main.yml b/k3s-ansible/roles/prereq/tasks/main.yml new file mode 100644 index 0000000..9161f7b --- /dev/null +++ b/k3s-ansible/roles/prereq/tasks/main.yml @@ -0,0 +1,69 @@ +--- +- name: Set same timezone on every Server + community.general.timezone: + name: "{{ system_timezone }}" + when: (system_timezone is defined) and (system_timezone != "Your/Timezone") + +- name: Set SELinux to disabled state + ansible.posix.selinux: + state: disabled + when: ansible_os_family == "RedHat" + +- name: Enable IPv4 forwarding + ansible.posix.sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + reload: true + tags: sysctl + +- name: Enable IPv6 forwarding + ansible.posix.sysctl: + name: net.ipv6.conf.all.forwarding + value: "1" + state: present + reload: true + tags: sysctl + +- name: Enable IPv6 router advertisements + ansible.posix.sysctl: + name: net.ipv6.conf.all.accept_ra + value: "2" + state: present + reload: true + tags: sysctl + +- name: Add br_netfilter to /etc/modules-load.d/ + ansible.builtin.copy: + content: br_netfilter + dest: /etc/modules-load.d/br_netfilter.conf + mode: u=rw,g=,o= + when: ansible_os_family == "RedHat" + +- name: Load 
br_netfilter + community.general.modprobe: + name: br_netfilter + state: present + when: ansible_os_family == "RedHat" + +- name: Set bridge-nf-call-iptables (just to be sure) + ansible.posix.sysctl: + name: "{{ item }}" + value: "1" + state: present + reload: true + when: ansible_os_family == "RedHat" + loop: + - net.bridge.bridge-nf-call-iptables + - net.bridge.bridge-nf-call-ip6tables + tags: sysctl + +- name: Add /usr/local/bin to sudo secure_path + ansible.builtin.lineinfile: + line: Defaults secure_path = {{ secure_path[ansible_os_family] }} + regexp: Defaults(\s)*secure_path(\s)*= + state: present + insertafter: EOF + path: /etc/sudoers + validate: visudo -cf %s + when: ansible_os_family in [ "RedHat", "Suse" ] diff --git a/k3s-ansible/roles/proxmox_lxc/handlers/main.yml b/k3s-ansible/roles/proxmox_lxc/handlers/main.yml new file mode 100644 index 0000000..89a61e0 --- /dev/null +++ b/k3s-ansible/roles/proxmox_lxc/handlers/main.yml @@ -0,0 +1,13 @@ +--- +- name: Reboot containers + block: + - name: Get container ids from filtered files + ansible.builtin.set_fact: + proxmox_lxc_filtered_ids: >- + {{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }} + listen: reboot containers + - name: Reboot container + ansible.builtin.command: pct reboot {{ item }} + loop: "{{ proxmox_lxc_filtered_ids }}" + changed_when: true + listen: reboot containers diff --git a/k3s-ansible/roles/proxmox_lxc/meta/main.yml b/k3s-ansible/roles/proxmox_lxc/meta/main.yml new file mode 100644 index 0000000..827c956 --- /dev/null +++ b/k3s-ansible/roles/proxmox_lxc/meta/main.yml @@ -0,0 +1,9 @@ +--- +argument_specs: + main: + short_description: Proxmox LXC settings + options: + proxmox_lxc_ct_ids: + description: Proxmox container ID list + type: list + required: true diff --git a/k3s-ansible/roles/proxmox_lxc/tasks/main.yml b/k3s-ansible/roles/proxmox_lxc/tasks/main.yml new file mode 100644 index 0000000..5418cec --- /dev/null +++ b/k3s-ansible/roles/proxmox_lxc/tasks/main.yml @@ -0,0 +1,43 @@ +--- +- name: Check for container files that exist on this host + ansible.builtin.stat: + path: /etc/pve/lxc/{{ item }}.conf + loop: "{{ proxmox_lxc_ct_ids }}" + register: stat_results + +- name: Filter out files that do not exist + ansible.builtin.set_fact: + proxmox_lxc_filtered_files: '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}' # noqa yaml[line-length] + +# https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 +- name: Ensure lxc config has the right apparmor profile + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.apparmor.profile + line: "lxc.apparmor.profile: unconfined" + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Ensure lxc config has the right cgroup + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.cgroup.devices.allow + line: "lxc.cgroup.devices.allow: a" + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Ensure lxc config has the right cap drop + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.cap.drop + line: "lxc.cap.drop: " + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Ensure lxc config has the right mounts + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.mount.auto + line: 'lxc.mount.auto: "proc:rw sys:rw"' + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers diff --git 
a/k3s-ansible/roles/raspberrypi/defaults/main.yml b/k3s-ansible/roles/raspberrypi/defaults/main.yml new file mode 100644 index 0000000..124fb90 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/defaults/main.yml @@ -0,0 +1,6 @@ +--- +# Indicates whether the k3s prerequisites for Raspberry Pi should be set up +# Possible values: +# - present +# - absent +state: present diff --git a/k3s-ansible/roles/raspberrypi/handlers/main.yml b/k3s-ansible/roles/raspberrypi/handlers/main.yml new file mode 100644 index 0000000..c060793 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/handlers/main.yml @@ -0,0 +1,5 @@ +--- +- name: Reboot + ansible.builtin.reboot: + reboot_command: "{{ custom_reboot_command | default(omit) }}" + listen: reboot diff --git a/k3s-ansible/roles/raspberrypi/meta/main.yml b/k3s-ansible/roles/raspberrypi/meta/main.yml new file mode 100644 index 0000000..e2b9bad --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/meta/main.yml @@ -0,0 +1,10 @@ +--- +argument_specs: + main: + short_description: Adjust some Raspberry Pi specific requisites + options: + state: + default: present + description: + - Indicates whether the k3s prerequisites for Raspberry Pi should be + - set up (possible values are `present` and `absent`) diff --git a/k3s-ansible/roles/raspberrypi/tasks/main.yml b/k3s-ansible/roles/raspberrypi/tasks/main.yml new file mode 100644 index 0000000..eb21c9a --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/main.yml @@ -0,0 +1,59 @@ +--- +- name: Test for raspberry pi /proc/cpuinfo + ansible.builtin.command: grep -E "Raspberry Pi|BCM2708|BCM2709|BCM2835|BCM2836" /proc/cpuinfo + register: grep_cpuinfo_raspberrypi + failed_when: false + changed_when: false + +- name: Test for raspberry pi /proc/device-tree/model + ansible.builtin.command: grep -E "Raspberry Pi" /proc/device-tree/model + register: grep_device_tree_model_raspberrypi + failed_when: false + changed_when: false + +- name: Set raspberry_pi fact to true + ansible.builtin.set_fact: + raspberry_pi: true + when: grep_cpuinfo_raspberrypi.rc == 0 or grep_device_tree_model_raspberrypi.rc == 0 + +- name: Set detected_distribution to Raspbian (ARM64 on Raspbian, Debian Buster/Bullseye/Bookworm) + ansible.builtin.set_fact: + detected_distribution: Raspbian + vars: + allowed_descriptions: + - "[Rr]aspbian.*" + - Debian.*buster + - Debian.*bullseye + - Debian.*bookworm + when: + - ansible_facts.architecture is search("aarch64") + - raspberry_pi|default(false) + - ansible_facts.lsb.description|default("") is match(allowed_descriptions | join('|')) + +- name: Set detected_distribution to Raspbian (ARM64 on Debian Bookworm) + ansible.builtin.set_fact: + detected_distribution: Raspbian + when: + - ansible_facts.architecture is search("aarch64") + - raspberry_pi|default(false) + - ansible_facts.lsb.description|default("") is match("Debian.*bookworm") + +- name: Set detected_distribution_major_version + ansible.builtin.set_fact: + detected_distribution_major_version: "{{ ansible_facts.lsb.major_release }}" + when: + - detected_distribution | default("") == "Raspbian" + +- name: Execute OS related tasks on the Raspberry Pi - {{ action_ }} + ansible.builtin.include_tasks: "{{ item }}" + with_first_found: + - "{{ action_ }}/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml" + - "{{ action_ }}/{{ detected_distribution }}.yml" + - "{{ action_ }}/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml" + - "{{ action_ }}/{{ ansible_distribution }}.yml" + - "{{ action_ }}/default.yml" + vars: + 
action_: >- + {% if state == "present" %}setup{% else %}teardown{% endif %} + when: + - raspberry_pi|default(false) diff --git a/k3s-ansible/roles/raspberrypi/tasks/setup/Raspbian.yml b/k3s-ansible/roles/raspberrypi/tasks/setup/Raspbian.yml new file mode 100644 index 0000000..1d0a8cd --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/setup/Raspbian.yml @@ -0,0 +1,49 @@ +--- +- name: Test for cmdline path + ansible.builtin.stat: + path: /boot/firmware/cmdline.txt + register: boot_cmdline_path + failed_when: false + changed_when: false + +- name: Set cmdline path based on Debian version and command result + ansible.builtin.set_fact: + cmdline_path: >- + {{ + ( + boot_cmdline_path.stat.exists and + ansible_facts.lsb.description | default('') is match('Debian.*(?!(bookworm|sid))') + ) | ternary( + '/boot/firmware/cmdline.txt', + '/boot/cmdline.txt' + ) + }} + +- name: Activating cgroup support + ansible.builtin.lineinfile: + path: "{{ cmdline_path }}" + regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$ + line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory + backrefs: true + notify: reboot + +- name: Install iptables + ansible.builtin.apt: + name: iptables + state: present + +- name: Flush iptables before changing to iptables-legacy + ansible.builtin.iptables: + flush: true + +- name: Changing to iptables-legacy + community.general.alternatives: + path: /usr/sbin/iptables-legacy + name: iptables + register: ip4_legacy + +- name: Changing to ip6tables-legacy + community.general.alternatives: + path: /usr/sbin/ip6tables-legacy + name: ip6tables + register: ip6_legacy diff --git a/k3s-ansible/roles/raspberrypi/tasks/setup/Rocky.yml b/k3s-ansible/roles/raspberrypi/tasks/setup/Rocky.yml new file mode 100644 index 0000000..2f756cb --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/setup/Rocky.yml @@ -0,0 +1,9 @@ +--- +- name: Enable cgroup via boot commandline if not already enabled for Rocky + ansible.builtin.lineinfile: + path: /boot/cmdline.txt + backrefs: true + regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$ + line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory + notify: reboot + when: not ansible_check_mode diff --git a/k3s-ansible/roles/raspberrypi/tasks/setup/Ubuntu.yml b/k3s-ansible/roles/raspberrypi/tasks/setup/Ubuntu.yml new file mode 100644 index 0000000..07f20a8 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/setup/Ubuntu.yml @@ -0,0 +1,14 @@ +--- +- name: Enable cgroup via boot commandline if not already enabled for Ubuntu on a Raspberry Pi + ansible.builtin.lineinfile: + path: /boot/firmware/cmdline.txt + backrefs: true + regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$ + line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory + notify: reboot + +- name: Install linux-modules-extra-raspi + ansible.builtin.apt: + name: linux-modules-extra-raspi + state: present + when: ansible_distribution_version is version('24.04', '<') diff --git a/k3s-ansible/roles/raspberrypi/tasks/setup/default.yml b/k3s-ansible/roles/raspberrypi/tasks/setup/default.yml new file mode 100644 index 0000000..ed97d53 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/setup/default.yml @@ -0,0 +1 @@ +--- diff --git a/k3s-ansible/roles/raspberrypi/tasks/teardown/Raspbian.yml b/k3s-ansible/roles/raspberrypi/tasks/teardown/Raspbian.yml new file mode 100644 index 0000000..ed97d53 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/teardown/Raspbian.yml @@ 
-0,0 +1 @@ +--- diff --git a/k3s-ansible/roles/raspberrypi/tasks/teardown/Rocky.yml b/k3s-ansible/roles/raspberrypi/tasks/teardown/Rocky.yml new file mode 100644 index 0000000..ed97d53 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/teardown/Rocky.yml @@ -0,0 +1 @@ +--- diff --git a/k3s-ansible/roles/raspberrypi/tasks/teardown/Ubuntu.yml b/k3s-ansible/roles/raspberrypi/tasks/teardown/Ubuntu.yml new file mode 100644 index 0000000..681068a --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/teardown/Ubuntu.yml @@ -0,0 +1,6 @@ +--- +- name: Remove linux-modules-extra-raspi + ansible.builtin.apt: + name: linux-modules-extra-raspi + state: absent + when: ansible_distribution_version is version('24.04', '<') diff --git a/k3s-ansible/roles/raspberrypi/tasks/teardown/default.yml b/k3s-ansible/roles/raspberrypi/tasks/teardown/default.yml new file mode 100644 index 0000000..ed97d53 --- /dev/null +++ b/k3s-ansible/roles/raspberrypi/tasks/teardown/default.yml @@ -0,0 +1 @@ +--- diff --git a/k3s-ansible/roles/reset/defaults/main.yml b/k3s-ansible/roles/reset/defaults/main.yml new file mode 100644 index 0000000..0b45925 --- /dev/null +++ b/k3s-ansible/roles/reset/defaults/main.yml @@ -0,0 +1,2 @@ +--- +systemd_dir: /etc/systemd/system diff --git a/k3s-ansible/roles/reset/meta/main.yml b/k3s-ansible/roles/reset/meta/main.yml new file mode 100644 index 0000000..830e019 --- /dev/null +++ b/k3s-ansible/roles/reset/meta/main.yml @@ -0,0 +1,8 @@ +--- +argument_specs: + main: + short_description: Reset all nodes + options: + systemd_dir: + description: Path to systemd services + default: /etc/systemd/system diff --git a/k3s-ansible/roles/reset/tasks/main.yml b/k3s-ansible/roles/reset/tasks/main.yml new file mode 100644 index 0000000..6fba44b --- /dev/null +++ b/k3s-ansible/roles/reset/tasks/main.yml @@ -0,0 +1,96 @@ +--- +- name: Disable services + ansible.builtin.systemd: + name: "{{ item }}" + state: stopped + enabled: false + failed_when: false + with_items: + - k3s + - k3s-node + - k3s-init + +- name: RUN pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc" + register: pkill_containerd_shim_runc + ansible.builtin.command: pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc" + changed_when: pkill_containerd_shim_runc.rc == 0 + failed_when: false + +- name: Umount k3s filesystems + ansible.builtin.include_tasks: umount_with_children.yml + with_items: + - /run/k3s + - /var/lib/kubelet + - /run/netns + - /var/lib/rancher/k3s + - /var/lib/kubelet/pods + - /var/lib/kubelet/plugins + - /run/netns/cni- + loop_control: + loop_var: mounted_fs + +- name: Remove service files, binaries and data + ansible.builtin.file: + name: "{{ item }}" + state: absent + with_items: + - /usr/local/bin/k3s + - "{{ systemd_dir }}/k3s.service" + - "{{ systemd_dir }}/k3s-node.service" + - /etc/rancher/k3s + - /run/k3s + - /run/flannel + - /etc/rancher/ + - /var/lib/kubelet + - /var/lib/rancher/k3s + - /var/lib/rancher/ + - /var/lib/cni/ + - /etc/cni/net.d + +- name: Remove K3s http_proxy files + ansible.builtin.file: + name: "{{ item }}" + state: absent + with_items: + - "{{ systemd_dir }}/k3s.service.d/http_proxy.conf" + - "{{ systemd_dir }}/k3s.service.d" + - "{{ systemd_dir }}/k3s-node.service.d/http_proxy.conf" + - "{{ systemd_dir }}/k3s-node.service.d" + when: proxy_env is defined + +- name: Reload daemon_reload + ansible.builtin.systemd: + daemon_reload: true + +- name: Remove tmp directory used for manifests + ansible.builtin.file: + path: /tmp/k3s + state: absent + +- name: Check if rc.local exists + 
ansible.builtin.stat: + path: /etc/rc.local + register: rcfile + +- name: Remove rc.local modifications for proxmox lxc containers + become: true + ansible.builtin.blockinfile: + path: /etc/rc.local + content: "{{ lookup('template', 'templates/rc.local.j2') }}" + create: false + state: absent + when: proxmox_lxc_configure and rcfile.stat.exists + +- name: Check rc.local for cleanup + become: true + ansible.builtin.slurp: + src: /etc/rc.local + register: rcslurp + when: proxmox_lxc_configure and rcfile.stat.exists + +- name: Cleanup rc.local if we only have a Shebang line + become: true + ansible.builtin.file: + path: /etc/rc.local + state: absent + when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1 diff --git a/k3s-ansible/roles/reset/tasks/umount_with_children.yml b/k3s-ansible/roles/reset/tasks/umount_with_children.yml new file mode 100644 index 0000000..e540820 --- /dev/null +++ b/k3s-ansible/roles/reset/tasks/umount_with_children.yml @@ -0,0 +1,15 @@ +--- +- name: Get the list of mounted filesystems + ansible.builtin.shell: set -o pipefail && cat /proc/mounts | awk '{ print $2}' | grep -E "^{{ mounted_fs }}" + register: get_mounted_filesystems + args: + executable: /bin/bash + failed_when: false + changed_when: get_mounted_filesystems.stdout | length > 0 + check_mode: false + +- name: Umount filesystem + ansible.posix.mount: + path: "{{ item }}" + state: unmounted + with_items: "{{ get_mounted_filesystems.stdout_lines | reverse | list }}" diff --git a/k3s-ansible/roles/reset_proxmox_lxc/handlers/main.yml b/k3s-ansible/roles/reset_proxmox_lxc/handlers/main.yml new file mode 120000 index 0000000..7f79c4b --- /dev/null +++ b/k3s-ansible/roles/reset_proxmox_lxc/handlers/main.yml @@ -0,0 +1 @@ +../../proxmox_lxc/handlers/main.yml \ No newline at end of file diff --git a/k3s-ansible/roles/reset_proxmox_lxc/meta/main.yml b/k3s-ansible/roles/reset_proxmox_lxc/meta/main.yml new file mode 100644 index 0000000..827c956 --- /dev/null +++ b/k3s-ansible/roles/reset_proxmox_lxc/meta/main.yml @@ -0,0 +1,9 @@ +--- +argument_specs: + main: + short_description: Proxmox LXC settings + options: + proxmox_lxc_ct_ids: + description: Proxmox container ID list + type: list + required: true diff --git a/k3s-ansible/roles/reset_proxmox_lxc/tasks/main.yml b/k3s-ansible/roles/reset_proxmox_lxc/tasks/main.yml new file mode 100644 index 0000000..78faf5f --- /dev/null +++ b/k3s-ansible/roles/reset_proxmox_lxc/tasks/main.yml @@ -0,0 +1,46 @@ +--- +- name: Check for container files that exist on this host + ansible.builtin.stat: + path: /etc/pve/lxc/{{ item }}.conf + loop: "{{ proxmox_lxc_ct_ids }}" + register: stat_results + +- name: Filter out files that do not exist + ansible.builtin.set_fact: + proxmox_lxc_filtered_files: '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}' # noqa yaml[line-length] + +- name: Remove LXC apparmor profile + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.apparmor.profile + line: "lxc.apparmor.profile: unconfined" + state: absent + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Remove lxc cgroups + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.cgroup.devices.allow + line: "lxc.cgroup.devices.allow: a" + state: absent + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Remove lxc cap drop + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.cap.drop + line: "lxc.cap.drop: " 
+ state: absent + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers + +- name: Remove lxc mounts + ansible.builtin.lineinfile: + dest: "{{ item }}" + regexp: ^lxc.mount.auto + line: 'lxc.mount.auto: "proc:rw sys:rw"' + state: absent + loop: "{{ proxmox_lxc_filtered_files }}" + notify: reboot containers diff --git a/k3s-ansible/site.yml b/k3s-ansible/site.yml new file mode 100644 index 0000000..b656e56 --- /dev/null +++ b/k3s-ansible/site.yml @@ -0,0 +1,68 @@ +--- +- name: Pre tasks + hosts: all + pre_tasks: + - name: Verify Ansible is version 2.11 or above. (If this fails you may need to update Ansible) + ansible.builtin.assert: + that: ansible_version.full is version_compare('2.11', '>=') + msg: > + "Ansible is out of date. See here for more info: https://docs.technotim.live/posts/ansible-automation/" + +- name: Prepare Proxmox cluster + hosts: proxmox + gather_facts: true + become: true + environment: "{{ proxy_env | default({}) }}" + roles: + - role: proxmox_lxc + when: proxmox_lxc_configure + +- name: Prepare k3s nodes + hosts: k3s_cluster + gather_facts: true + environment: "{{ proxy_env | default({}) }}" + roles: + - role: lxc + become: true + when: proxmox_lxc_configure + - role: prereq + become: true + - role: download + become: true + - role: raspberrypi + become: true + - role: k3s_custom_registries + become: true + when: custom_registries + +- name: Setup k3s servers + hosts: master + environment: "{{ proxy_env | default({}) }}" + roles: + - role: k3s_server + become: true + +- name: Setup k3s agents + hosts: node + environment: "{{ proxy_env | default({}) }}" + roles: + - role: k3s_agent + become: true + +- name: Configure k3s cluster + hosts: master + environment: "{{ proxy_env | default({}) }}" + roles: + - role: k3s_server_post + become: true + +- name: Storing kubeconfig in the playbook directory + hosts: master + environment: "{{ proxy_env | default({}) }}" + tasks: + - name: Copying kubeconfig from {{ hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] }} + ansible.builtin.fetch: + src: "{{ ansible_user_dir }}/.kube/config" + dest: ./kubeconfig + flat: true + when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] diff --git a/k3s-ansible/templates/rc.local.j2 b/k3s-ansible/templates/rc.local.j2 new file mode 100644 index 0000000..16ca666 --- /dev/null +++ b/k3s-ansible/templates/rc.local.j2 @@ -0,0 +1,8 @@ +# Kubeadm 1.15 needs /dev/kmsg to be there, but it's not in lxc, but we can just use /dev/console instead +# see: https://github.com/kubernetes-sigs/kind/issues/662 +if [ ! 
-e /dev/kmsg ]; then + ln -s /dev/console /dev/kmsg +fi + +# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c +mount --make-rshared / diff --git a/k3s-ansible/xclip b/k3s-ansible/xclip new file mode 100644 index 0000000..065a9f9 --- /dev/null +++ b/k3s-ansible/xclip @@ -0,0 +1,18 @@ +I0312 17:14:11.285448 1 main.go:211] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true} +W0312 17:14:11.285516 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. +I0312 17:14:11.291436 1 kube.go:139] Waiting 10m0s for node controller to sync +I0312 17:14:11.291467 1 kube.go:469] Starting kube subnet manager +I0312 17:14:11.292837 1 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24] +I0312 17:14:12.291660 1 kube.go:146] Node controller sync successful +I0312 17:14:12.291709 1 main.go:231] Created subnet manager: Kubernetes Subnet Manager - casca +I0312 17:14:12.291718 1 main.go:234] Installing signal handlers +I0312 17:14:12.291835 1 main.go:468] Found network config - Backend type: vxlan +I0312 17:14:12.296646 1 kube.go:669] List of node(casca) annotations: map[string]string{"alpha.kubernetes.io/provided-node-ip":"192.168.1.133", "flannel.alpha.coreos.com/backend-data":"{\"VNI\":1,\"VtepMAC\":\"8e:1b:3c:71:96:01\"}", "flannel.alpha.coreos.com/backend-type":"vxlan", "flannel.alpha.coreos.com/kube-subnet-manager":"true", "flannel.alpha.coreos.com/public-ip":"192.168.1.133", "k3s.io/hostname":"CASCA", "k3s.io/internal-ip":"192.168.1.133", "k3s.io/node-args":"[\"server\",\"--flannel-backend\",\"none\",\"--token\",\"********\"]", "k3s.io/node-config-hash":"EC72RJBT2ODREIIW72ZM7V5VCX6HHTLU3MR635DGCNCGIXLUK2RQ====", "k3s.io/node-env":"{}", "node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"} +I0312 17:14:12.296736 1 match.go:211] Determining IP address of default interface +I0312 17:14:12.297417 1 match.go:264] Using interface with name enp2s0 and address 192.168.1.133 +I0312 17:14:12.297462 1 match.go:286] Defaulting external address to interface address (192.168.1.133) +I0312 17:14:12.297533 1 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false +I0312 17:14:12.301192 1 kube.go:636] List of node(casca) annotations: map[string]string{"alpha.kubernetes.io/provided-node-ip":"192.168.1.133", "flannel.alpha.coreos.com/backend-data":"{\"VNI\":1,\"VtepMAC\":\"8e:1b:3c:71:96:01\"}", "flannel.alpha.coreos.com/backend-type":"vxlan", "flannel.alpha.coreos.com/kube-subnet-manager":"true", "flannel.alpha.coreos.com/public-ip":"192.168.1.133", "k3s.io/hostname":"CASCA", "k3s.io/internal-ip":"192.168.1.133", "k3s.io/node-args":"[\"server\",\"--flannel-backend\",\"none\",\"--token\",\"********\"]", "k3s.io/node-config-hash":"EC72RJBT2ODREIIW72ZM7V5VCX6HHTLU3MR635DGCNCGIXLUK2RQ====", "k3s.io/node-env":"{}", "node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"} 
+I0312 17:14:12.301246 1 vxlan.go:155] Interface flannel.1 mac address set to: 8e:1b:3c:71:96:01 +E0312 17:14:12.301685 1 main.go:359] Error registering network: failed to acquire lease: subnet "10.244.0.0/16" specified in the flannel net config doesn't contain "10.42.0.0/24" PodCIDR of the "casca" node +I0312 17:14:12.301771 1 main.go:448] Stopping shutdownHandler... diff --git a/path.txt b/path.txt new file mode 100644 index 0000000..6cbc2b7 --- /dev/null +++ b/path.txt @@ -0,0 +1 @@ +/home/usuari/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/usuari/.lmstudio/bin diff --git a/scripts/certificat222.sh b/scripts/certificat222.sh new file mode 100644 index 0000000..736fa04 --- /dev/null +++ b/scripts/certificat222.sh @@ -0,0 +1,24 @@ +#!/bin/bash + +# Pas 1: Comprovar la connexió actual amb el servidor de Kubernetes. +echo "Intentant obtindre els nodes sense configurar la variable KUBECONFIG..." +kubectl get nodes +# Aquest pas hauria de donar l'error relacionat amb el certificat desconegut + +# Pas 2: Comprovar els permisos del fitxer de configuració de K3s. +echo "Comprovant els permisos del fitxer de configuració k3s.yaml..." +sudo ls -l /etc/rancher/k3s/k3s.yaml +# Cal verificar que el fitxer existeix i té els permisos correctes + +# Pas 3: Establir la variable d'entorn KUBECONFIG per utilitzar el fitxer k3s.yaml. +echo "Exportant la variable d'entorn KUBECONFIG per utilitzar el fitxer k3s.yaml..." +export KUBECONFIG=/etc/rancher/k3s/k3s.yaml + +# Pas 4: Intentar novament obtenir els nodes de Kubernetes ara amb la configuració correcta. +echo "Intentant obtindre els nodes després d'establir la variable KUBECONFIG..." +kubectl get nodes +# Ara, el comandament hauria de funcionar correctament i mostrar la llista de nodes + +# Explicació addicional: +echo "L'error inicial era causat per l'ús d'un fitxer de configuració incorrecte. " +echo "En exportar la variable KUBECONFIG, indiquem a kubectl que utilitze el fitxer k3s.yaml correcte." diff --git a/scripts/despres-llançament.sh b/scripts/despres-llançament.sh new file mode 100644 index 0000000..6a80df5 --- /dev/null +++ b/scripts/despres-llançament.sh @@ -0,0 +1,32 @@ +#!/bin/bash + +# Esborra la configuració actual de kube +rm -f ~/.kube/config + +# Executa comandes en el servidor remot sense entrar en mode interactiu +ssh usuari@192.168.1.111 << 'EOF' + echo "----------Pods del PRIMER SERVIDOR----------" + kubectl get pods + sudo chown usuari /etc/rancher/k3s/k3s.yaml +EOF + +# Copia la configuració de K3s des del servidor remot +scp usuari@192.168.1.111:/etc/rancher/k3s/k3s.yaml ~/.kube/config + +# Variables +NOVA_IP="192.168.1.222" +RUTA_CONFIGURACIO="$HOME/.kube/config" + +# Reemplaça la IP en ~/.kube/config +sed -i "s|server: https://[0-9.]*:6443|server: https://$NOVA_IP:6443|" "$RUTA_CONFIGURACIO" + +# Mostrar canvis +echo "Actualització completada. 
Verifica l'arxiu:" +echo "- $RUTA_CONFIGURACIO" + +# copia el tal en donaodsmf +sudo cp .kube/config /etc/rancher/k3s/k3s.yaml + +# Mostra els pods des del node local +echo "----------Pods des del Metall----------" +kubectl get pods diff --git a/scripts/error.sh b/scripts/error.sh new file mode 100644 index 0000000..9de9976 --- /dev/null +++ b/scripts/error.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +cd ~/Nextcloud/EC/Documents/fct/k9/k3s-ansible + +# Actualitza ansible + +echo "---------------------------------------------------" +echo "---------------- Actualitza ansible --------------------" +echo "---------------------------------------------------" +ansible --version + +echo $PATH + +# El resultado de echo $PATH muestra que el directorio ~/.local/bin no está en tu variable de entorno PATH. Esto explica por qué pipx sigue advirtiendo que los comandos de Ansible no son accesibles globalmente. +export PATH="$HOME/.local/bin:$PATH" + +which ansible-playbook + +# Deberías ver algo como esto: +# /home/usuari/.local/bin/ansible-playbook + + +#https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#intro-installation-guide +pipx uninstall ansible +pipx install --include-deps ansible --force +pipx install ansible-core --force +#pipx upgrade --include-injected ansible + +pipx install netaddr --force + +ansible --version + +echo "---------------------------------------------------" + +echo "Instal·la netaddr" + + +pipx inject ansible-core netaddr + +#Comprova versió +pipx runpip ansible-core show netaddr + + +####################################################################### + diff --git a/serveis/caddy/font b/serveis/caddy/font new file mode 100644 index 0000000..15c5ac3 --- /dev/null +++ b/serveis/caddy/font @@ -0,0 +1 @@ +https://github.com/caddyserver/ingress diff --git a/serveis/caddy/ingress b/serveis/caddy/ingress new file mode 160000 index 0000000..45c5e7d --- /dev/null +++ b/serveis/caddy/ingress @@ -0,0 +1 @@ +Subproject commit 45c5e7d5ee14c77fcc155701fb68f5d2db34608b diff --git a/serveis/crypad/cryptpad-k8s/cryptpad.yml b/serveis/crypad/cryptpad-k8s/cryptpad.yml new file mode 100644 index 0000000..5385151 --- /dev/null +++ b/serveis/crypad/cryptpad-k8s/cryptpad.yml @@ -0,0 +1,174 @@ +--- +apiVersion: v1 +kind: Namespace +metadata: + name: cryptpad +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: cryptpad + namespace: cryptpad +spec: + selector: + matchLabels: + app: cryptpad + template: + metadata: + labels: + app: cryptpad + spec: + volumes: + - name: config + configMap: + name: config + - name: cryptpad + persistentVolumeClaim: + claimName: cryptpad + containers: + - name: cryptpad + image: quay.io/ffddorf/cryptpad:4.8.0 + resources: + limits: + memory: "512Mi" + cpu: "500m" + ports: + - containerPort: 3000 + volumeMounts: + - name: config + mountPath: /cryptpad/config + - name: cryptpad + mountPath: /cryptpad/data +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: config + namespace: cryptpad +data: + config.js: | + module.exports = { + adminKeys: [ + "[nomaster@pad.freifunk-duesseldorf.de/WUdnwywXbKnT6QsT6OuZXQqJOQCZwiZDz2y3492oGpw=]", + ], + adminEmail: 'kontakt@freifunk-duesseldorf.de', + allowSubscriptions: false, + archivePath: './data/archive', + blobPath: './data/blob', + blobStagingPath: './data/blobstage', + blockPath: './data/block', + filePath: './data/store', + httpAddress: '::', + httpPort: 3000, + httpSafeOrigin: 'https://cryptpad.freifunk-duesseldorf.de/', + httpUnsafeOrigin: 'https://pad.freifunk-duesseldorf.de/', + 
logFeedback: false, + logLevel: 'info', + logToStdout: true, + noSubscriptionButton: true, + pinPath: './data/pins', + removeDonateButton: true, + supportMailboxPublicKey: 'bLZQjf8j/kQnV3LLT64ROORvJjzJzz7FQRLWh1DV6B4=', + taskPath: './data/tasks', + verbose: false, + }; +--- +apiVersion: v1 +kind: Service +metadata: + name: cryptpad + namespace: cryptpad +spec: + selector: + app: cryptpad + ports: + - port: 3000 + targetPort: 3000 +--- +apiVersion: traefik.containo.us/v1alpha1 +kind: Middleware +metadata: + name: security + namespace: cryptpad +spec: + headers: + stsSeconds: 63072000 + customResponseHeaders: + cross-origin-resource-policy: cross-origin + cross-origin-embedder-policy: require-corp +--- +kind: Ingress +apiVersion: networking.k8s.io/v1 +metadata: + name: cryptpad + namespace: cryptpad + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + kubernetes.io/ingress.class: traefik + kubernetes.io/tls-acme: "true" + traefik.ingress.kubernetes.io/router.entrypoints: websecure + traefik.ingress.kubernetes.io/router.tls: "true" + traefik.ingress.kubernetes.io/router.middlewares: cryptpad-security@kubernetescrd +spec: + tls: + - hosts: + - cryptpad.freifunk-duesseldorf.de + - pad.freifunk-duesseldorf.de + secretName: cryptpad-tls-prod + rules: + - host: cryptpad.freifunk-duesseldorf.de + http: + paths: + - pathType: Prefix + path: "/" + backend: + service: + name: cryptpad + port: + number: 3000 + - host: pad.freifunk-duesseldorf.de + http: + paths: + - pathType: Prefix + path: "/" + backend: + service: + name: cryptpad + port: + number: 3000 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cryptpad + namespace: cryptpad +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 3Gi + volumeName: cryptpad +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: cryptpad +spec: + capacity: + storage: 4Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + storageClassName: local-path + local: + path: /data/cryptpad/cryptpad + nodeAffinity: + required: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/hostname + operator: In + values: + - k3s1 diff --git a/serveis/crypad/cryptpad-k8s/pvc-cryptpad.yml b/serveis/crypad/cryptpad-k8s/pvc-cryptpad.yml new file mode 100644 index 0000000..b8675f6 --- /dev/null +++ b/serveis/crypad/cryptpad-k8s/pvc-cryptpad.yml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cryptpad + namespace: cryptpad +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 4Gi + storageClassName: local-path diff --git a/serveis/crypad/cryptpad-k8s/renovate.json b/serveis/crypad/cryptpad-k8s/renovate.json new file mode 100644 index 0000000..ab917d5 --- /dev/null +++ b/serveis/crypad/cryptpad-k8s/renovate.json @@ -0,0 +1,25 @@ +{ + "$schema": "https://docs.renovatebot.com/renovate-schema.json", + "regexManagers": [ + { + "fileMatch": ["^version\\.txt$"], + "matchStrings": ["^(?\\d+\\.\\d+\\.\\d+)"], + "depNameTemplate": "xwiki-labs/cryptpad-docker", + "datasourceTemplate": "github-tags" + }, + { + "fileMatch": ["^cryptpad\\.yml$"], + "matchStrings": [ + "^\\s+image:\\s?quay\\.io/ffddorf/cryptpad:(?\\d+\\.\\d+\\.\\d+)$" + ], + "depNameTemplate": "xwiki-labs/cryptpad-docker", + "datasourceTemplate": "github-tags" + } + ], + "packageRules": [ + { + "matchPackageNames": ["xwiki-labs/cryptpad"], + "groupName": "cryptpad" + } + ] +} diff --git a/serveis/crypad/cryptpad-k8s/version.txt 
b/serveis/crypad/cryptpad-k8s/version.txt new file mode 100644 index 0000000..0062ac9 --- /dev/null +++ b/serveis/crypad/cryptpad-k8s/version.txt @@ -0,0 +1 @@ +5.0.0 diff --git a/serveis/etherpad/estructura b/serveis/etherpad/estructura new file mode 100644 index 0000000..4227936 --- /dev/null +++ b/serveis/etherpad/estructura @@ -0,0 +1,256 @@ +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ tree +. +├── etherpad-lite-k8s +│   ├── configmap.yaml +│   ├── deployment.yaml +│   ├── kustomization.yaml +│   └── service.yaml +├── etherpad-lite-k8s-kubedb-mysql +│   ├── configmap.yaml +│   ├── deployment.yaml +│   ├── kustomization.yaml +│   └── name-prefix-transformer-config.yaml +├── kubedb-mysql-etherpad-lite +│   ├── etherpad-mysql.yaml +│   ├── kustomization.yaml +│   ├── README.md +│   └── transformer-config-kubedb.yaml +├── kubedb-mysql-etherpad-lite-with-init-script +│   ├── etherpad-mysql-init-configmap.yaml +│   ├── etherpad-mysql-with-init-script.yaml +│   └── kustomization.yaml +└── test-etherpad-lite-mysql-with-namePrefix + └── kustomization.yaml + +6 directories, 16 files +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s/configmap.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes" + } +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s/deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + replicas: 1 + selector: + matchLabels: + app: etherpad + template: + metadata: + labels: + app: etherpad + spec: + containers: + - name: etherpad + image: etherpad/etherpad:1.7.5 + ports: + - containerPort: 9001 + name: web + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad/settings.json" + subPath: "settings.json" + volumes: + - name: config + configMap: + name: etherpad +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- configmap.yaml +- deployment.yaml +- service.yaml +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s/service.yaml +apiVersion: v1 +kind: Service +metadata: + name: etherpad +spec: + selector: + app: etherpad + ports: + - name: web + port: 80 + targetPort: web +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s-kubedb-mysql/configmap.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes w/ MySQL", + "dbType": "${ETHERPAD_DB_TYPE:mysql}", + "dbSettings": { + "database": "${ETHERPAD_DB_DATABASE}", + "host": "${ETHERPAD_DB_HOST}", + "password": "${ETHERPAD_DB_PASSWORD}", + "user": "${ETHERPAD_DB_USER}" + } + } +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s-kubedb-mysql/deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + template: + spec: + containers: + - name: etherpad + env: + - name: ETHERPAD_DB_TYPE + value: mysql + - name: ETHERPAD_DB_HOST + value: $(MYSQL_SERVICE) + - name: ETHERPAD_DB_DATABASE + value: etherpad_lite_db + - name: ETHERPAD_DB_USER + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: username + - name: 
ETHERPAD_DB_PASSWORD + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: password + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad-lite/settings.json" + subPath: "settings.json" +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s-kubedb-mysql/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +bases: +- ../kubedb-mysql-etherpad-lite-with-init-script +- ../etherpad-lite-k8s +patchesStrategicMerge: +- configmap.yaml +- deployment.yaml +images: +- name: etherpad/etherpad + # This is required until etherpad-lite 1.8 comes out to be able to use env vars in settings.json + newTag: latest +configurations: +- name-prefix-transformer-config.yaml +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml +namePrefix: +- apiVersion: apps/v1 + kind: Deployment + path: spec/template/spec/containers/env/valueFrom/secretKeyRef/name +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite/etherpad-mysql.yaml +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + version: "5.7.25" + storageType: Durable + terminationPolicy: WipeOut + storage: + storageClassName: "default" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- etherpad-mysql.yaml +vars: +- name: MYSQL_SERVICE + objref: + apiVersion: kubedb.com/v1alpha1 + kind: MySQL + name: etherpad-mysql + fieldref: + fieldpath: metadata.name +configurations: +- transformer-config-kubedb.yaml +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite/README.md +# kubedb-mysql-etherpad-lite + +This is *just* the kubedb MySQL resource for etherpad-lite. Compose it with something like ../etherpad-lite-k8s to get a full setup. 
+usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- etherpad-mysql.yaml +vars: +- name: MYSQL_SERVICE + objref: + apiVersion: kubedb.com/v1alpha1 + kind: MySQL + name: etherpad-mysql + fieldref: + fieldpath: metadata.name +configurations: +- transformer-config-kubedb.yaml +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml +namePrefix: +- apiVersion: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource/configMap/name + +nameReference: +- version: v1 + kind: ConfigMap + fieldSpecs: + - version: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad-mysql-init +data: + init.sql: | + create database `etherpad_lite_db`; + use `etherpad_lite_db`; + + CREATE TABLE `store` ( + `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '', + `value` longtext COLLATE utf8mb4_bin NOT NULL, + PRIMARY KEY (`key`) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin; +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + init: + scriptSource: + configMap: + name: etherpad-mysql-init +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +bases: +- ../kubedb-mysql-etherpad-lite +resources: +- etherpad-mysql-init-configmap.yaml +patchesStrategicMerge: +- etherpad-mysql-with-init-script.yaml +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ cat test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml +bases: +- ../etherpad-lite-k8s-kubedb-mysql +namePrefix: test-namePrefix- +usuari@CASCA:~/Nextcloud/EC/Documents/fct/k9/serveis/etherpad/etherpad-lite/lib$ diff --git a/serveis/etherpad/etherpad-lite/README.md b/serveis/etherpad/etherpad-lite/README.md new file mode 100755 index 0000000..44fd17c --- /dev/null +++ b/serveis/etherpad/etherpad-lite/README.md @@ -0,0 +1,40 @@ +# etherpad-lite + +Configuration for running [Etherpad Lite](https://github.com/ether/etherpad-lite) on kubernetes. + +`./lib/` contains several directories that configure Etherpad Lite in different ways, from the very simplest (but using ephemeral storage) to one that stores ether pads in MySQL provisioned by kubedb. + +## Usage + +Use `kubectl kustomize ` to render kubernetes manifests, e.g. + +``` +kubectl kustomize lib/etherpad-lite-k8s +``` + +or + +``` +kubectl kustomize lib/etherpad-lite-k8s-kubedb-mysql/ +``` + +or (via URL, no git clone required!) 
+ +``` +kubectl kustomize github.com/gobengo/etherpad-lite.git/lib/etherpad-lite-k8s +``` + +## Install on Kubernetes + +Assuming you have created a namespace named `my-etherpad-namespace` with something like `kubectl create ns my-etherpad-namespace` + +``` +kubectl kustomize github.com/gobengo/etherpad-lite.git/lib/etherpad-lite-k8s | kubectl apply -n my-etherpad-namespace -f - +``` + +If you don't have unix pipes: + +``` +kubectl -n my-etherpad-namespace apply -k github.com/gobengo/etherpad-lite.git/lib/etherpad-lite-k8s +``` + diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/configmap.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/configmap.yaml new file mode 100755 index 0000000..b5e525c --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/configmap.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes w/ MySQL", + "dbType": "${ETHERPAD_DB_TYPE:mysql}", + "dbSettings": { + "database": "${ETHERPAD_DB_DATABASE}", + "host": "${ETHERPAD_DB_HOST}", + "password": "${ETHERPAD_DB_PASSWORD}", + "user": "${ETHERPAD_DB_USER}" + } + } diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/deployment.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/deployment.yaml new file mode 100755 index 0000000..a50aa18 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/deployment.yaml @@ -0,0 +1,30 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + template: + spec: + containers: + - name: etherpad + env: + - name: ETHERPAD_DB_TYPE + value: mysql + - name: ETHERPAD_DB_HOST + value: $(MYSQL_SERVICE) + - name: ETHERPAD_DB_DATABASE + value: etherpad_lite_db + - name: ETHERPAD_DB_USER + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: username + - name: ETHERPAD_DB_PASSWORD + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: password + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad-lite/settings.json" + subPath: "settings.json" diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml new file mode 100755 index 0000000..4a0b5ec --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml @@ -0,0 +1,13 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - ../kubedb-mysql-etherpad-lite-with-init-script + - ../etherpad-lite-k8s +patches: + - configmap.yaml + - deployment.yaml +images: + - name: etherpad/etherpad + newTag: latest +configurations: + - name-prefix-transformer-config.yaml diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml new file mode 100755 index 0000000..31022f8 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml @@ -0,0 +1,4 @@ +namePrefix: +- apiVersion: apps/v1 + kind: Deployment + path: spec/template/spec/containers/env/valueFrom/secretKeyRef/name diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/configmap.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/configmap.yaml new file mode 100755 
index 0000000..a166406 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/configmap.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes" + } diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/deployment.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/deployment.yaml new file mode 100755 index 0000000..efbd49b --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/deployment.yaml @@ -0,0 +1,28 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + replicas: 1 + selector: + matchLabels: + app: etherpad + template: + metadata: + labels: + app: etherpad + spec: + containers: + - name: etherpad + image: etherpad/etherpad:1.7.5 + ports: + - containerPort: 9001 + name: web + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad/settings.json" + subPath: "settings.json" + volumes: + - name: config + configMap: + name: etherpad diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/kustomization.yaml new file mode 100755 index 0000000..a6d1ae5 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- configmap.yaml +- deployment.yaml +- service.yaml diff --git a/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/service.yaml b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/service.yaml new file mode 100755 index 0000000..eb0d024 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/etherpad-lite-k8s/service.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: etherpad +spec: + selector: + app: etherpad + ports: + - name: web + port: 80 + targetPort: web diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml new file mode 100755 index 0000000..51f9206 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad-mysql-init +data: + init.sql: | + create database `etherpad_lite_db`; + use `etherpad_lite_db`; + + CREATE TABLE `store` ( + `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '', + `value` longtext COLLATE utf8mb4_bin NOT NULL, + PRIMARY KEY (`key`) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin; diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml new file mode 100755 index 0000000..f0395dd --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml @@ -0,0 +1,9 @@ +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + init: + scriptSource: + configMap: + name: etherpad-mysql-init diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml 
new file mode 100755 index 0000000..2aa311b --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml @@ -0,0 +1,8 @@ +apiVersion: kustomize.config.k8s.io/v1 +kind: Kustomization +bases: +- ../kubedb-mysql-etherpad-lite +resources: +- etherpad-mysql-init-configmap.yaml +patchesStrategicMerge: +- etherpad-mysql-with-init-script.yaml diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/README.md b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/README.md new file mode 100755 index 0000000..a894a2e --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/README.md @@ -0,0 +1,3 @@ +# kubedb-mysql-etherpad-lite + +This is *just* the kubedb MySQL resource for etherpad-lite. Compose it with something like ../etherpad-lite-k8s to get a full setup. diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml new file mode 100755 index 0000000..2fb6236 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + version: "5.7.25" + storageType: Durable + terminationPolicy: WipeOut + storage: + storageClassName: "default" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/kustomization.yaml new file mode 100755 index 0000000..2969e65 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/kustomization.yaml @@ -0,0 +1,14 @@ +apiVersion: kustomize.config.k8s.io/v1 +kind: Kustomization +resources: +- etherpad-mysql.yaml +vars: +- name: MYSQL_SERVICE + objref: + apiVersion: kubedb.com/v1alpha1 + kind: MySQL + name: etherpad-mysql + fieldref: + fieldpath: metadata.name +configurations: +- transformer-config-kubedb.yaml diff --git a/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml new file mode 100755 index 0000000..b206765 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml @@ -0,0 +1,12 @@ +namePrefix: +- apiVersion: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource/configMap/name + +nameReference: +- version: v1 + kind: ConfigMap + fieldSpecs: + - version: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource diff --git a/serveis/etherpad/etherpad-lite/lib/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml new file mode 100644 index 0000000..c1753b9 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml @@ -0,0 +1,3 @@ +bases: +- ../etherpad-lite-k8s-kubedb-mysql +namePrefix: test-namePrefix- diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/configmap.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/configmap.yaml new file mode 100755 index 0000000..b5e525c --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/configmap.yaml @@ -0,0 +1,17 @@ 
+apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes w/ MySQL", + "dbType": "${ETHERPAD_DB_TYPE:mysql}", + "dbSettings": { + "database": "${ETHERPAD_DB_DATABASE}", + "host": "${ETHERPAD_DB_HOST}", + "password": "${ETHERPAD_DB_PASSWORD}", + "user": "${ETHERPAD_DB_USER}" + } + } diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/deployment.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/deployment.yaml new file mode 100755 index 0000000..a50aa18 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/deployment.yaml @@ -0,0 +1,30 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + template: + spec: + containers: + - name: etherpad + env: + - name: ETHERPAD_DB_TYPE + value: mysql + - name: ETHERPAD_DB_HOST + value: $(MYSQL_SERVICE) + - name: ETHERPAD_DB_DATABASE + value: etherpad_lite_db + - name: ETHERPAD_DB_USER + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: username + - name: ETHERPAD_DB_PASSWORD + valueFrom: + secretKeyRef: + name: etherpad-mysql-auth + key: password + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad-lite/settings.json" + subPath: "settings.json" diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml new file mode 100755 index 0000000..cc19af6 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/kustomization.yaml @@ -0,0 +1,14 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +bases: +- ../kubedb-mysql-etherpad-lite-with-init-script +- ../etherpad-lite-k8s +patchesStrategicMerge: +- configmap.yaml +- deployment.yaml +images: +- name: etherpad/etherpad + # This is required until etherpad-lite 1.8 comes out to be able to use env vars in settings.json + newTag: latest +configurations: +- name-prefix-transformer-config.yaml diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml new file mode 100755 index 0000000..31022f8 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s-kubedb-mysql/name-prefix-transformer-config.yaml @@ -0,0 +1,4 @@ +namePrefix: +- apiVersion: apps/v1 + kind: Deployment + path: spec/template/spec/containers/env/valueFrom/secretKeyRef/name diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/configmap.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/configmap.yaml new file mode 100755 index 0000000..a166406 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/configmap.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad +data: + settings.json: | + { + "skinName":"colibris", + "title":"Etherpad on Kubernetes" + } diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/deployment.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/deployment.yaml new file mode 100755 index 0000000..efbd49b --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/deployment.yaml @@ -0,0 +1,28 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: etherpad +spec: + replicas: 1 + selector: + matchLabels: + app: etherpad + template: + metadata: + labels: + 
app: etherpad + spec: + containers: + - name: etherpad + image: etherpad/etherpad:1.7.5 + ports: + - containerPort: 9001 + name: web + volumeMounts: + - name: "config" + mountPath: "/opt/etherpad/settings.json" + subPath: "settings.json" + volumes: + - name: config + configMap: + name: etherpad diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/kustomization.yaml new file mode 100755 index 0000000..a6d1ae5 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- configmap.yaml +- deployment.yaml +- service.yaml diff --git a/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/service.yaml b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/service.yaml new file mode 100755 index 0000000..eb0d024 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/etherpad-lite-k8s/service.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: etherpad +spec: + selector: + app: etherpad + ports: + - name: web + port: 80 + targetPort: web diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml new file mode 100755 index 0000000..51f9206 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-init-configmap.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: etherpad-mysql-init +data: + init.sql: | + create database `etherpad_lite_db`; + use `etherpad_lite_db`; + + CREATE TABLE `store` ( + `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '', + `value` longtext COLLATE utf8mb4_bin NOT NULL, + PRIMARY KEY (`key`) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin; diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml new file mode 100755 index 0000000..f0395dd --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/etherpad-mysql-with-init-script.yaml @@ -0,0 +1,9 @@ +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + init: + scriptSource: + configMap: + name: etherpad-mysql-init diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml new file mode 100755 index 0000000..ba70a17 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite-with-init-script/kustomization.yaml @@ -0,0 +1,8 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +bases: +- ../kubedb-mysql-etherpad-lite +resources: +- etherpad-mysql-init-configmap.yaml +patchesStrategicMerge: +- etherpad-mysql-with-init-script.yaml diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/README.md b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/README.md new file mode 100755 index 0000000..a894a2e --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/README.md @@ -0,0 +1,3 @@ +# 
kubedb-mysql-etherpad-lite + +This is *just* the kubedb MySQL resource for etherpad-lite. Compose it with something like ../etherpad-lite-k8s to get a full setup. diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml new file mode 100755 index 0000000..2fb6236 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/etherpad-mysql.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha1 +kind: MySQL +metadata: + name: etherpad-mysql +spec: + version: "5.7.25" + storageType: Durable + terminationPolicy: WipeOut + storage: + storageClassName: "default" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/kustomization.yaml new file mode 100755 index 0000000..1914ecf --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/kustomization.yaml @@ -0,0 +1,14 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: +- etherpad-mysql.yaml +vars: +- name: MYSQL_SERVICE + objref: + apiVersion: kubedb.com/v1alpha1 + kind: MySQL + name: etherpad-mysql + fieldref: + fieldpath: metadata.name +configurations: +- transformer-config-kubedb.yaml diff --git a/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml new file mode 100755 index 0000000..b206765 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/kubedb-mysql-etherpad-lite/transformer-config-kubedb.yaml @@ -0,0 +1,12 @@ +namePrefix: +- apiVersion: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource/configMap/name + +nameReference: +- version: v1 + kind: ConfigMap + fieldSpecs: + - version: kubedb.com/v1alpha1 + kind: MySQL + path: spec/init/scriptSource diff --git a/serveis/etherpad/etherpad-lite/lib2/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml b/serveis/etherpad/etherpad-lite/lib2/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml new file mode 100644 index 0000000..c1753b9 --- /dev/null +++ b/serveis/etherpad/etherpad-lite/lib2/test-etherpad-lite-mysql-with-namePrefix/kustomization.yaml @@ -0,0 +1,3 @@ +bases: +- ../etherpad-lite-k8s-kubedb-mysql +namePrefix: test-namePrefix- diff --git a/serveis/example/deployment.yml b/serveis/example/deployment.yml new file mode 100644 index 0000000..ad875ee --- /dev/null +++ b/serveis/example/deployment.yml @@ -0,0 +1,20 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx +spec: + selector: + matchLabels: + app: nginx + replicas: 3 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:alpine + ports: + - containerPort: 80 diff --git a/serveis/example/service.yml b/serveis/example/service.yml new file mode 100644 index 0000000..a309465 --- /dev/null +++ b/serveis/example/service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx +spec: + ipFamilyPolicy: PreferDualStack + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 + type: LoadBalancer diff --git a/serveis/nextcloud/nextcloud_cron.yml b/serveis/nextcloud/nextcloud_cron.yml new file mode 100644 index 0000000..c64b963 --- /dev/null +++ b/serveis/nextcloud/nextcloud_cron.yml @@ -0,0 +1,21 @@ +--- 
+apiVersion: batch/v1 +kind: CronJob +metadata: + name: nextcloud-cron + namespace: nextcloud +spec: + schedule: "*/5 * * * *" + jobTemplate: + spec: + template: + spec: + containers: + - name: nextcloud + image: nextcloud:25.0.3-apache + imagePullPolicy: IfNotPresent + command: + - /bin/sh + - -c + - curl https://your.nextcloud.domain/cron.php + restartPolicy: OnFailure diff --git a/serveis/nextcloud/nextcloud_deployment.yml b/serveis/nextcloud/nextcloud_deployment.yml new file mode 100644 index 0000000..9bef235 --- /dev/null +++ b/serveis/nextcloud/nextcloud_deployment.yml @@ -0,0 +1,117 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nextcloud + namespace: nextcloud + labels: + app: nextcloud +spec: + replicas: 1 + selector: + matchLabels: + app: nextcloud + strategy: + rollingUpdate: + maxSurge: 0 + maxUnavailable: 1 + type: RollingUpdate + template: + metadata: + labels: + app: nextcloud + spec: + containers: + - image: nextcloud:25.0.3-apache + name: nextcloud + ports: + - containerPort: 80 + protocol: TCP + env: + - name: REDIS_HOST + value: redis + - name: POSTGRES_HOST + value: postgresql + - name: POSTGRES_DB + valueFrom: + secretKeyRef: + key: POSTGRES_DB + name: nextcloud-secrets + - name: POSTGRES_USER + valueFrom: + secretKeyRef: + key: POSTGRES_USER + name: nextcloud-secrets + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + key: POSTGRES_PASSWORD + name: nextcloud-secrets + - name: NEXTCLOUD_ADMIN_USER + valueFrom: + secretKeyRef: + key: NEXTCLOUD_ADMIN_USER + name: nextcloud-secrets + - name: NEXTCLOUD_ADMIN_PASSWORD + valueFrom: + secretKeyRef: + key: NEXTCLOUD_ADMIN_PASSWORD + name: nextcloud-secrets + - name: NEXTCLOUD_TRUSTED_DOMAINS + value: your.nextcloud.domain + - name: NEXTCLOUD_DATA_DIR + value: /mnt/data + # - name: OBJECTSTORE_S3_HOST + # value: your.s3.host + # - name: OBJECTSTORE_S3_REGION + # value: gso-rack-1 + # - name: OBJECTSTORE_S3_BUCKET + # value: nextcloud + # - name: OBJECTSTORE_S3_PORT + # value: "9000" + # - name: OBJECTSTORE_S3_SSL + # value: "true" + # - name: OBJECTSTORE_S3_USEPATH_STYLE + # value: "true" + # - name: OBJECTSTORE_S3_KEY + # valueFrom: + # secretKeyRef: + # key: OBJECTSTORE_S3_KEY + # name: nextcloud-secrets + # - name: OBJECTSTORE_S3_SECRET + # valueFrom: + # secretKeyRef: + # key: OBJECTSTORE_S3_SECRET + # name: nextcloud-secrets + - name: TRUSTED_PROXIES + value: 192.168.4.0/24 10.0.0.0/16 # This includes my router IP address and the CIDR range of the cluster + - name: APACHE_DISABLE_REWRITE_IP + value: "1" + - name: OVERWRITEHOST + value: your.nextcloud.domain + - name: OVERWRITEPROTOCOL + value: https + - name: OVERWRITECLIURL + value: https://your.nextcloud.domain + - name: OVERWRITEWEBROOT + value: "/" + - name: PHP_MEMORY_LIMIT + value: 4G + - name: PHP_UPLOAD_LIMIT + value: 1G + - name: TZ + value: America/New_York + volumeMounts: + - mountPath: /var/www/html + name: nextcloud-storage + readOnly: false + - mountPath: /mnt/data + name: nextcloud-storage-nfs + readOnly: false + volumes: + - name: nextcloud-storage + persistentVolumeClaim: + claimName: nextcloud-pvc + - name: nextcloud-storage-nfs + persistentVolumeClaim: + claimName: nextcloud-pvc-nfs diff --git a/serveis/nextcloud/nextcloud_headers.yml b/serveis/nextcloud/nextcloud_headers.yml new file mode 100644 index 0000000..a440ef1 --- /dev/null +++ b/serveis/nextcloud/nextcloud_headers.yml @@ -0,0 +1,26 @@ +--- +apiVersion: traefik.containo.us/v1alpha1 +kind: Middleware +metadata: + name: headers + namespace: nextcloud +spec: + headers: + 
frameDeny: true + browserXssFilter: true + customResponseHeaders: + Strict-Transport-Security: "15552000" + X-Frame-Options: SAMEORIGIN +--- +apiVersion: traefik.containo.us/v1alpha1 +kind: Middleware +metadata: + name: redirects + namespace: nextcloud +spec: + redirectScheme: + permanent: true + scheme: https + redirectRegex: + regex: https://(.*)/.well-known/(card|cal)dav + replacement: https://$1/remote.php/dav/ diff --git a/serveis/nextcloud/nextcloud_ingress.yml b/serveis/nextcloud/nextcloud_ingress.yml new file mode 100644 index 0000000..b60eedc --- /dev/null +++ b/serveis/nextcloud/nextcloud_ingress.yml @@ -0,0 +1,26 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: nextcloud-ingress + namespace: nextcloud + annotations: + traefik.ingress.kubernetes.io/router.middlewares: nextcloud-headers@kubernetescrd,nextcloud-redirects@kubernetescrd + traefik.ingress.kubernetes.io/router.entrypoints: web,websecure + cert-manager.io/cluster-issuer: letsencrypt-aws +spec: + rules: + - host: your.nextcloud.domain + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: nextcloud + port: + number: 80 + tls: + - secretName: ssl-cert + hosts: + - your.nextcloud.domain diff --git a/serveis/nextcloud/nextcloud_pvc.yml b/serveis/nextcloud/nextcloud_pvc.yml new file mode 100644 index 0000000..d31a607 --- /dev/null +++ b/serveis/nextcloud/nextcloud_pvc.yml @@ -0,0 +1,26 @@ +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: nextcloud-pvc + namespace: nextcloud +spec: + accessModes: + - ReadWriteOnce + storageClassName: longhorn + resources: + requests: + storage: 5Gi +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: nextcloud-pvc-nfs + namespace: nextcloud +spec: + accessModes: + - ReadWriteOnce + storageClassName: nfs-client + resources: + requests: + storage: 100Gi diff --git a/serveis/nextcloud/nextcloud_service.yml b/serveis/nextcloud/nextcloud_service.yml new file mode 100644 index 0000000..ffc0a8a --- /dev/null +++ b/serveis/nextcloud/nextcloud_service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: nextcloud + namespace: nextcloud + labels: + app: nextcloud +spec: + ports: + - port: 80 + selector: + app: nextcloud diff --git a/serveis/nextcloud/postgresql_deployment.yml b/serveis/nextcloud/postgresql_deployment.yml new file mode 100644 index 0000000..606104d --- /dev/null +++ b/serveis/nextcloud/postgresql_deployment.yml @@ -0,0 +1,50 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: postgresql + namespace: nextcloud + labels: + app: postgresql +spec: + replicas: 1 + selector: + matchLabels: + app: postgresql + template: + metadata: + labels: + app: postgresql + spec: + containers: + - name: postgresql + image: postgres:15 + ports: + - containerPort: 5432 + env: + - name: POSTGRES_DB + valueFrom: + secretKeyRef: + key: POSTGRES_DB + name: nextcloud-secrets + - name: POSTGRES_USER + valueFrom: + secretKeyRef: + key: POSTGRES_USER + name: nextcloud-secrets + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + key: POSTGRES_PASSWORD + name: nextcloud-secrets + - name: PGDATA + value: /var/lib/postgresql/data/pgdata + - name: TZ + value: America/New_York + volumeMounts: + - name: postgresql-data + mountPath: /var/lib/postgresql/data + volumes: + - name: postgresql-data + persistentVolumeClaim: + claimName: postgresql-pvc diff --git a/serveis/nextcloud/postgresql_pvc.yml b/serveis/nextcloud/postgresql_pvc.yml new file mode 100644 index 0000000..5d4131c --- /dev/null +++ 
b/serveis/nextcloud/postgresql_pvc.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: postgresql-pvc + namespace: nextcloud +spec: + accessModes: + - ReadWriteOnce + storageClassName: longhorn + resources: + requests: + storage: 2Gi diff --git a/serveis/nextcloud/postgresql_service.yml b/serveis/nextcloud/postgresql_service.yml new file mode 100644 index 0000000..8673be8 --- /dev/null +++ b/serveis/nextcloud/postgresql_service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: postgresql + namespace: nextcloud + labels: + app: postgresql +spec: + ports: + - port: 5432 + selector: + app: postgresql diff --git a/serveis/nextcloud/redis_deployment.yml b/serveis/nextcloud/redis_deployment.yml new file mode 100644 index 0000000..e65ac55 --- /dev/null +++ b/serveis/nextcloud/redis_deployment.yml @@ -0,0 +1,27 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis + namespace: nextcloud + labels: + app: redis +spec: + selector: + matchLabels: + app: redis + replicas: 1 + template: + metadata: + labels: + app: redis + spec: + containers: + - image: redis:alpine + name: redis + ports: + - containerPort: 6379 + env: + - name: TZ + value: America/New_York + restartPolicy: Always diff --git a/serveis/nextcloud/redis_service.yml b/serveis/nextcloud/redis_service.yml new file mode 100644 index 0000000..1938749 --- /dev/null +++ b/serveis/nextcloud/redis_service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: redis + namespace: nextcloud + labels: + app: redis +spec: + ports: + - port: 6379 + selector: + app: redis diff --git a/serveis/nextcloud/secrets.yml b/serveis/nextcloud/secrets.yml new file mode 100644 index 0000000..2272496 --- /dev/null +++ b/serveis/nextcloud/secrets.yml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Secret +metadata: + name: nextcloud-secrets + namespace: nextcloud +type: Opaque +stringData: + POSTGRES_DB: $DB + POSTGRES_USER: $DB_USER + POSTGRES_PASSWORD: $DB_NEXTCLOUD_PASSWORD + NEXTCLOUD_ADMIN_USER: $NEXTCLOUD_ADMIN_USER + NEXTCLOUD_ADMIN_PASSWORD: $NEXTCLOUD_ADMIN_PASSWORD diff --git a/serveis/nginx/deployment.yml b/serveis/nginx/deployment.yml new file mode 100644 index 0000000..ad875ee --- /dev/null +++ b/serveis/nginx/deployment.yml @@ -0,0 +1,20 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx +spec: + selector: + matchLabels: + app: nginx + replicas: 3 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:alpine + ports: + - containerPort: 80 diff --git a/serveis/nginx/ingress.yaml b/serveis/nginx/ingress.yaml new file mode 100644 index 0000000..2c51d73 --- /dev/null +++ b/serveis/nginx/ingress.yaml @@ -0,0 +1,39 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: clustergv-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +spec: + ingressClassName: nginx # Usa "traefik" si es el controlador por defecto + rules: + - host: radicale.clustergv.fai.st + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: radicale + port: + number: 80 + - host: wordpress.clustergv.fai.st + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: wordpress + port: + number: 80 + - host: nginx.clustergv.fai.st + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: nginx + port: + number: 80 diff --git a/serveis/nginx/service.yml b/serveis/nginx/service.yml new file mode 100644 index 0000000..a309465 --- /dev/null +++ 
b/serveis/nginx/service.yml @@ -0,0 +1,13 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx +spec: + ipFamilyPolicy: PreferDualStack + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 + type: LoadBalancer diff --git a/serveis/wordpress/kustomization.yaml b/serveis/wordpress/kustomization.yaml new file mode 100644 index 0000000..55be286 --- /dev/null +++ b/serveis/wordpress/kustomization.yaml @@ -0,0 +1,8 @@ +secretGenerator: + - name: mysql-pass + literals: + - password=YOUR_PASSWORD + +resources: + - mysql-deployment.yaml + - wordpress-deployment.yaml diff --git a/serveis/wordpress/mysql-deployment.yaml b/serveis/wordpress/mysql-deployment.yaml new file mode 100644 index 0000000..9057448 --- /dev/null +++ b/serveis/wordpress/mysql-deployment.yaml @@ -0,0 +1,74 @@ +apiVersion: v1 +kind: Service +metadata: + name: wordpress-mysql + labels: + app: wordpress +spec: + ports: + - port: 3306 + selector: + app: wordpress + tier: mysql + clusterIP: None +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: mysql-pv-claim + labels: + app: wordpress +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 20Gi +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: wordpress-mysql + labels: + app: wordpress +spec: + selector: + matchLabels: + app: wordpress + tier: mysql + strategy: + type: Recreate + template: + metadata: + labels: + app: wordpress + tier: mysql + spec: + containers: + - image: mysql:5.7 + name: mysql + env: + - name: MYSQL_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mysql-pass + key: password + - name: MYSQL_DATABASE + value: wordpress + - name: MYSQL_USER + value: wordpress + - name: MYSQL_PASSWORD + valueFrom: + secretKeyRef: + name: mysql-pass + key: password + ports: + - containerPort: 3306 + name: mysql + volumeMounts: + - name: mysql-persistent-storage + mountPath: /var/lib/mysql + volumes: + - name: mysql-persistent-storage + persistentVolumeClaim: + claimName: mysql-pv-claim diff --git a/serveis/wordpress/mysql-pv.yaml b/serveis/wordpress/mysql-pv.yaml new file mode 100644 index 0000000..7e7a338 --- /dev/null +++ b/serveis/wordpress/mysql-pv.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: mysql-pv +spec: + capacity: + storage: 20Gi # Ajusta el tamaño si es necesario + volumeMode: Filesystem + accessModes: + - ReadWriteOnce # Permite acceso de lectura/escritura por un solo pod + persistentVolumeReclaimPolicy: Retain # El volumen no se eliminará cuando se libere + storageClassName: local-path # Usamos 'local-path' ya que es un almacenamiento local en el nodo + hostPath: + path: /mnt/data/mysql # Ruta en el nodo donde se almacenarán los datos + type: DirectoryOrCreate # Crea el directorio si no existe diff --git a/serveis/wordpress/wordpress-deployment.yaml b/serveis/wordpress/wordpress-deployment.yaml new file mode 100644 index 0000000..43d9525 --- /dev/null +++ b/serveis/wordpress/wordpress-deployment.yaml @@ -0,0 +1,69 @@ +apiVersion: v1 +kind: Service +metadata: + name: wordpress + labels: + app: wordpress +spec: + ports: + - port: 80 + selector: + app: wordpress + tier: frontend + type: LoadBalancer +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: wp-pv-claim + labels: + app: wordpress +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 20Gi +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: wordpress + labels: + app: wordpress +spec: + selector: + matchLabels: + app: wordpress + tier: frontend + strategy: + 
type: Recreate + template: + metadata: + labels: + app: wordpress + tier: frontend + spec: + containers: + - image: wordpress:6.2.1-apache + name: wordpress + env: + - name: WORDPRESS_DB_HOST + value: wordpress-mysql + - name: WORDPRESS_DB_PASSWORD + valueFrom: + secretKeyRef: + name: mysql-pass + key: password + - name: WORDPRESS_DB_USER + value: wordpress + ports: + - containerPort: 80 + name: wordpress + volumeMounts: + - name: wordpress-persistent-storage + mountPath: /var/www/html + volumes: + - name: wordpress-persistent-storage + persistentVolumeClaim: + claimName: wp-pv-claim diff --git a/serveis/wordpress/wp-pv.yaml b/serveis/wordpress/wp-pv.yaml new file mode 100644 index 0000000..715e959 --- /dev/null +++ b/serveis/wordpress/wp-pv.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: wp-pv +spec: + capacity: + storage: 20Gi # Ajusta el tamaño si es necesario + volumeMode: Filesystem + accessModes: + - ReadWriteOnce # Permite acceso de lectura/escritura por un solo pod + persistentVolumeReclaimPolicy: Retain # El volumen no se eliminará cuando se libere + storageClassName: local-path # Usamos 'local-path' para almacenamiento local en el nodo + hostPath: + path: /mnt/data/wordpress # Ruta en el nodo donde se almacenarán los datos + type: DirectoryOrCreate # Crea el directorio si no existe diff --git a/sub-en.vtt b/sub-en.vtt new file mode 100644 index 0000000..dfd8cf2 --- /dev/null +++ b/sub-en.vtt @@ -0,0 +1,3912 @@ +WEBVTT +Kind: captions +Language: en + +00:00:00.240 --> 00:00:03.830 align:start position:0% + +words<00:00:00.560> to<00:00:00.719> describe<00:00:01.599> setting<00:00:02.000> up<00:00:02.159> k3s + +00:00:03.830 --> 00:00:03.840 align:start position:0% +words to describe setting up k3s + + +00:00:03.840 --> 00:00:05.590 align:start position:0% +words to describe setting up k3s +this<00:00:04.080> is<00:00:04.240> hard + +00:00:05.590 --> 00:00:05.600 align:start position:0% +this is hard + + +00:00:05.600 --> 00:00:08.150 align:start position:0% +this is hard +this<00:00:05.839> is<00:00:06.000> so<00:00:06.319> difficult<00:00:06.799> to<00:00:06.960> set<00:00:07.200> up + +00:00:08.150 --> 00:00:08.160 align:start position:0% +this is so difficult to set up + + +00:00:08.160 --> 00:00:10.390 align:start position:0% +this is so difficult to set up +isn't<00:00:08.400> this<00:00:08.559> overkill + +00:00:10.390 --> 00:00:10.400 align:start position:0% +isn't this overkill + + +00:00:10.400 --> 00:00:13.270 align:start position:0% +isn't this overkill +what<00:00:10.559> is<00:00:10.719> the<00:00:10.800> load<00:00:11.040> balancer<00:00:11.519> again + +00:00:13.270 --> 00:00:13.280 align:start position:0% +what is the load balancer again + + +00:00:13.280 --> 00:00:16.390 align:start position:0% +what is the load balancer again +why<00:00:13.440> do<00:00:13.679> i<00:00:13.759> need<00:00:14.080> two<00:00:14.320> load<00:00:14.559> balancers + +00:00:16.390 --> 00:00:16.400 align:start position:0% +why do i need two load balancers + + +00:00:16.400 --> 00:00:18.230 align:start position:0% +why do i need two load balancers +should<00:00:16.560> i<00:00:16.720> use<00:00:16.880> that<00:00:17.039> cd + +00:00:18.230 --> 00:00:18.240 align:start position:0% +should i use that cd + + +00:00:18.240 --> 00:00:20.870 align:start position:0% +should i use that cd +so<00:00:18.480> wait<00:00:18.880> i<00:00:19.039> need<00:00:19.359> two<00:00:19.600> load<00:00:19.840> balancers<00:00:20.640> and + +00:00:20.870 --> 00:00:20.880 
align:start position:0% +so wait i need two load balancers and + + +00:00:20.880 --> 00:00:22.870 align:start position:0% +so wait i need two load balancers and +keep<00:00:21.039> alive<00:00:21.439> d + +00:00:22.870 --> 00:00:22.880 align:start position:0% +keep alive d + + +00:00:22.880 --> 00:00:25.429 align:start position:0% +keep alive d +what<00:00:23.119> is<00:00:23.279> metal<00:00:23.600> lb<00:00:24.000> again + +00:00:25.429 --> 00:00:25.439 align:start position:0% +what is metal lb again + + +00:00:25.439 --> 00:00:27.670 align:start position:0% +what is metal lb again +have<00:00:25.599> you<00:00:25.680> heard<00:00:25.920> of<00:00:26.000> cubevip + +00:00:27.670 --> 00:00:27.680 align:start position:0% +have you heard of cubevip + + +00:00:27.680 --> 00:00:33.190 align:start position:0% +have you heard of cubevip +um<00:00:28.480> isn't<00:00:28.720> that<00:00:28.880> a<00:00:28.960> single<00:00:29.199> point<00:00:29.439> of<00:00:29.519> failure + +00:00:33.190 --> 00:00:33.200 align:start position:0% + + + +00:00:33.200 --> 00:00:34.389 align:start position:0% + +i<00:00:33.440> know + +00:00:34.389 --> 00:00:34.399 align:start position:0% +i know + + +00:00:34.399 --> 00:00:36.549 align:start position:0% +i know +i'll<00:00:34.719> automate<00:00:35.200> the<00:00:35.280> whole<00:00:35.520> thing + +00:00:36.549 --> 00:00:36.559 align:start position:0% +i'll automate the whole thing + + +00:00:36.559 --> 00:00:39.030 align:start position:0% +i'll automate the whole thing +today<00:00:36.960> we're<00:00:37.200> not<00:00:37.520> only<00:00:37.760> going<00:00:37.920> to<00:00:38.000> set<00:00:38.239> up<00:00:38.399> k3s + +00:00:39.030 --> 00:00:39.040 align:start position:0% +today we're not only going to set up k3s + + +00:00:39.040 --> 00:00:41.990 align:start position:0% +today we're not only going to set up k3s +with<00:00:39.280> ncd<00:00:39.920> and<00:00:40.160> ha<00:00:40.640> installation<00:00:41.440> with<00:00:41.760> cube + +00:00:41.990 --> 00:00:42.000 align:start position:0% +with ncd and ha installation with cube + + +00:00:42.000 --> 00:00:44.950 align:start position:0% +with ncd and ha installation with cube +vip<00:00:42.480> and<00:00:42.640> middle<00:00:42.960> of<00:00:43.120> b<00:00:43.600> but<00:00:43.840> we're<00:00:44.079> also<00:00:44.719> going + +00:00:44.950 --> 00:00:44.960 align:start position:0% +vip and middle of b but we're also going + + +00:00:44.960 --> 00:00:47.110 align:start position:0% +vip and middle of b but we're also going +to<00:00:45.120> automate<00:00:45.680> the<00:00:45.840> whole<00:00:46.079> entire<00:00:46.480> thing<00:00:46.960> so + +00:00:47.110 --> 00:00:47.120 align:start position:0% +to automate the whole entire thing so + + +00:00:47.120 --> 00:00:49.830 align:start position:0% +to automate the whole entire thing so +that<00:00:47.360> we<00:00:47.840> can't<00:00:48.160> really<00:00:48.640> mess<00:00:48.960> this<00:00:49.200> up<00:00:49.600> and<00:00:49.680> so + +00:00:49.830 --> 00:00:49.840 align:start position:0% +that we can't really mess this up and so + + +00:00:49.840 --> 00:00:51.350 align:start position:0% +that we can't really mess this up and so +we're<00:00:50.000> going<00:00:50.079> to<00:00:50.239> fully<00:00:50.640> automate<00:00:51.120> the + +00:00:51.350 --> 00:00:51.360 align:start position:0% +we're going to fully automate the + + +00:00:51.360 --> 00:00:53.990 align:start position:0% +we're going to fully automate the +installation<00:00:51.920> 
of<00:00:52.079> k3s<00:00:52.800> so<00:00:52.960> that<00:00:53.199> it's<00:00:53.360> 100 + +00:00:53.990 --> 00:00:54.000 align:start position:0% +installation of k3s so that it's 100 + + +00:00:54.000 --> 00:00:55.110 align:start position:0% +installation of k3s so that it's 100 +repeatable + +00:00:55.110 --> 00:00:55.120 align:start position:0% +repeatable + + +00:00:55.120 --> 00:00:57.350 align:start position:0% +repeatable +and<00:00:55.280> then<00:00:55.600> we're<00:00:55.760> gonna<00:00:56.239> tear<00:00:56.399> it<00:00:56.559> all<00:00:56.719> down<00:00:57.199> as + +00:00:57.350 --> 00:00:57.360 align:start position:0% +and then we're gonna tear it all down as + + +00:00:57.360 --> 00:00:59.750 align:start position:0% +and then we're gonna tear it all down as +if<00:00:57.520> it<00:00:57.600> never<00:00:57.840> happened<00:00:58.239> but<00:00:58.480> before<00:00:58.800> we<00:00:58.960> do<00:00:59.520> a + +00:00:59.750 --> 00:00:59.760 align:start position:0% +if it never happened but before we do a + + +00:00:59.760 --> 00:01:02.389 align:start position:0% +if it never happened but before we do a +huge<00:01:00.160> thanks<00:01:00.559> to<00:01:00.719> our<00:01:00.879> sponsor<00:01:01.520> microcenter + +00:01:02.389 --> 00:01:02.399 align:start position:0% +huge thanks to our sponsor microcenter + + +00:01:02.399 --> 00:01:04.149 align:start position:0% +huge thanks to our sponsor microcenter +if<00:01:02.559> you're<00:01:02.719> thinking<00:01:03.039> of<00:01:03.120> building<00:01:03.440> a<00:01:03.520> new<00:01:03.760> pc + +00:01:04.149 --> 00:01:04.159 align:start position:0% +if you're thinking of building a new pc + + +00:01:04.159 --> 00:01:05.350 align:start position:0% +if you're thinking of building a new pc +you<00:01:04.400> should<00:01:04.559> look<00:01:04.720> no<00:01:04.879> further<00:01:05.199> than + +00:01:05.350 --> 00:01:05.360 align:start position:0% +you should look no further than + + +00:01:05.360 --> 00:01:06.789 align:start position:0% +you should look no further than +microcenter<00:01:06.080> if<00:01:06.159> you've<00:01:06.320> never<00:01:06.560> been<00:01:06.720> to + +00:01:06.789 --> 00:01:06.799 align:start position:0% +microcenter if you've never been to + + +00:01:06.799 --> 00:01:08.630 align:start position:0% +microcenter if you've never been to +microcenter<00:01:07.520> you're<00:01:07.760> missing<00:01:08.080> out<00:01:08.320> on<00:01:08.400> seeing + +00:01:08.630 --> 00:01:08.640 align:start position:0% +microcenter you're missing out on seeing + + +00:01:08.640 --> 00:01:10.870 align:start position:0% +microcenter you're missing out on seeing +a<00:01:08.720> huge<00:01:09.040> selection<00:01:09.439> of<00:01:09.600> technology<00:01:10.240> in<00:01:10.320> person + +00:01:10.870 --> 00:01:10.880 align:start position:0% +a huge selection of technology in person + + +00:01:10.880 --> 00:01:12.710 align:start position:0% +a huge selection of technology in person +they've<00:01:11.119> got<00:01:11.360> everything<00:01:11.760> for<00:01:11.920> custom<00:01:12.320> pc + +00:01:12.710 --> 00:01:12.720 align:start position:0% +they've got everything for custom pc + + +00:01:12.720 --> 00:01:15.030 align:start position:0% +they've got everything for custom pc +builders<00:01:13.200> from<00:01:13.439> ssds<00:01:14.080> and<00:01:14.159> hard<00:01:14.400> drives<00:01:14.880> to + +00:01:15.030 --> 00:01:15.040 align:start position:0% +builders from ssds and hard drives to + + +00:01:15.040 --> 
00:01:17.830 align:start position:0% +builders from ssds and hard drives to +power<00:01:15.360> supplies<00:01:16.240> to<00:01:16.400> memory<00:01:17.119> to<00:01:17.280> air<00:01:17.680> and + +00:01:17.830 --> 00:01:17.840 align:start position:0% +power supplies to memory to air and + + +00:01:17.840 --> 00:01:20.390 align:start position:0% +power supplies to memory to air and +water<00:01:18.159> cooling<00:01:18.720> to<00:01:18.880> motherboards<00:01:19.920> to<00:01:20.080> video + +00:01:20.390 --> 00:01:20.400 align:start position:0% +water cooling to motherboards to video + + +00:01:20.400 --> 00:01:23.429 align:start position:0% +water cooling to motherboards to video +cards<00:01:21.119> to<00:01:21.280> processors<00:01:22.240> and<00:01:22.479> more<00:01:22.880> microcenter + +00:01:23.429 --> 00:01:23.439 align:start position:0% +cards to processors and more microcenter + + +00:01:23.439 --> 00:01:25.030 align:start position:0% +cards to processors and more microcenter +is<00:01:23.600> your<00:01:23.759> one-stop<00:01:24.240> shop<00:01:24.479> to<00:01:24.640> totally + +00:01:25.030 --> 00:01:25.040 align:start position:0% +is your one-stop shop to totally + + +00:01:25.040 --> 00:01:27.109 align:start position:0% +is your one-stop shop to totally +customize<00:01:25.520> your<00:01:25.680> next<00:01:25.920> pc<00:01:26.320> build<00:01:26.799> and<00:01:26.960> don't + +00:01:27.109 --> 00:01:27.119 align:start position:0% +customize your next pc build and don't + + +00:01:27.119 --> 00:01:28.789 align:start position:0% +customize your next pc build and don't +worry<00:01:27.520> if<00:01:27.759> it's<00:01:27.840> your<00:01:28.000> first<00:01:28.240> time<00:01:28.400> building<00:01:28.720> a + +00:01:28.789 --> 00:01:28.799 align:start position:0% +worry if it's your first time building a + + +00:01:28.799 --> 00:01:30.789 align:start position:0% +worry if it's your first time building a +pc<00:01:29.280> they<00:01:29.520> have<00:01:29.680> lots<00:01:29.920> of<00:01:30.079> helpful<00:01:30.560> and + +00:01:30.789 --> 00:01:30.799 align:start position:0% +pc they have lots of helpful and + + +00:01:30.799 --> 00:01:32.310 align:start position:0% +pc they have lots of helpful and +eligible<00:01:31.200> staff<00:01:31.520> that<00:01:31.680> are<00:01:31.840> there<00:01:32.000> to<00:01:32.159> help + +00:01:32.310 --> 00:01:32.320 align:start position:0% +eligible staff that are there to help + + +00:01:32.320 --> 00:01:33.910 align:start position:0% +eligible staff that are there to help +you<00:01:32.560> out<00:01:32.799> and<00:01:32.960> will<00:01:33.119> point<00:01:33.360> you<00:01:33.520> in<00:01:33.600> the<00:01:33.680> right + +00:01:33.910 --> 00:01:33.920 align:start position:0% +you out and will point you in the right + + +00:01:33.920 --> 00:01:35.749 align:start position:0% +you out and will point you in the right +direction<00:01:34.640> so<00:01:34.799> that<00:01:34.960> you<00:01:35.119> don't<00:01:35.360> attempt<00:01:35.680> to + +00:01:35.749 --> 00:01:35.759 align:start position:0% +direction so that you don't attempt to + + +00:01:35.759 --> 00:01:41.109 align:start position:0% +direction so that you don't attempt to +apply<00:01:36.159> thermal<00:01:36.479> paste<00:01:36.720> like<00:01:36.960> this + +00:01:41.109 --> 00:01:41.119 align:start position:0% + + + +00:01:41.119 --> 00:01:42.870 align:start position:0% + +microcenter<00:01:41.759> has<00:01:41.840> been<00:01:42.000> kind<00:01:42.240> enough<00:01:42.560> 
So how did I get here? Well, as you may or may not know, I've been running k3s in my own environment for quite some time, and I even have a video on setting up k3s with MySQL. Now, there's nothing wrong with the MySQL version of k3s, it runs great, but at the time the etcd version wasn't available, and the etcd version is super interesting because it creates a high availability database on the nodes instead of hosting it outside of the cluster. And right around that time I saw Jeff Geerling create a video on Ansible, and that sent me down a rabbit hole: learning Ansible, creating a video on Ansible, and automating a lot of tasks. Well, you know how that goes.

Anyway, so I found that GitHub repo, I cloned it, created some virtual machines, and then I tried to provision a high availability cluster. But there was just one problem: the Ansible playbook only supported spinning up one etcd node, and that meant only one server node, which isn't HA. I mean, it's configured for HA, but I would have to manually add additional server nodes to make it HA, and that's no fun. So technically it wasn't HA out of the box.
So I decided to dig around in the code and in the branches, and I found a fork where somebody actually fixed that issue, so I could actually create an HA cluster out of the box with Ansible. And I saw they also added support for kube-vip. This was awesome, because this is exactly what I was trying to do. I love open source, so a huge thank you to user 212850a. This gave me a nice starting point to automate the rest. Again, a huge thank you to the open source community, Jeff Geerling, and user 212850a.
So after poking around for a little bit, I found that most of it was working, but it did need some updates and some configuration changes to work with the latest version of kube-vip, along with some other features I wanted to add. So I decided to roll up my sleeves and start hacking away at this fork in my own branch. And before making it public, I wanted to accomplish a few things: I wanted to make sure that anyone using this could start with an unlimited amount of nodes; I wanted to make sure that kube-vip was rock solid, so it would actually create a load balancer that you could use to make k3s fault tolerant; and I also wanted to automate an external load balancer, so that when you expose a service you get an IP address for that service from your cluster, and then anyone can use that IP address to access services within k3s.

So I had a few choices for this step, and a quick clarification on these two load balancers. The first load balancer you typically need in k3s is a load balancer for your Kubernetes API. This is the load balancer for the control plane, and it should be fault tolerant, so that if you issue k3s commands you can still get a response back. The other load balancer is a service load balancer, in Kubernetes, for you to expose services on. In most cloud environments they supply a cloud load balancer for you to expose services on, and the service load balancer that I'm talking about is for non-cloud environments, for self-hosted environments. Since we don't have a cloud load balancer to give us IPs to expose our services outside, we need to use something that can emulate a cloud load balancer, something that Kubernetes can ask for an IP address from, so our services can be exposed.
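To make that concrete, here is a rough sketch of what a service load balancer buys you; the deployment name and the addresses below are placeholders made up for the example:

```bash
# Expose a (hypothetical) nginx deployment as a LoadBalancer service.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Without a service load balancer the EXTERNAL-IP column stays <pending>;
# with MetalLB (or kube-vip in services mode) it gets an address from the pool.
kubectl get svc nginx
# NAME    TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)
# nginx   LoadBalancer   10.43.12.34   192.168.30.81   80:31234/TCP
```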
So I had choices to make for the load balancers. kube-vip can actually do both: it can be a service load balancer, or a load balancer for your control plane, for your Kubernetes etcd nodes. This sounded like a great solution, because then I didn't have to use MetalLB. And I love MetalLB, but taking on one less dependency sounded like a good idea, especially when it comes to breaking changes; it's just less to manage. Then of course the other option for exposing my services was just to use MetalLB. And honestly, after hours and hours of trying to get the kube-vip service load balancer to work with my services, I decided to fall back on good old trusty MetalLB. MetalLB just works, and I could use my existing configuration for it, so it really wasn't a loss at all.
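For reference, the MetalLB configuration that carries over is just a small layer 2 address pool. This is a hedged sketch using MetalLB's older ConfigMap format, with a made-up address range:

```bash
# Sketch only: MetalLB layer 2 pool in the legacy ConfigMap format.
# The namespace and name are MetalLB's defaults; the address range is an example.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.30.80-192.168.30.120
EOF
```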
So at this point I had my architecture pretty much decided: kube-vip for my Kubernetes control plane, and MetalLB for my service load balancer. And once I solved creating multiple server nodes, configuring kube-vip, and configuring MetalLB, it was time to do some testing. For my test I created five nodes, and these are standard Ubuntu cloud image nodes; I just recently created a video on provisioning new Ubuntu machines using cloud image and cloud-init. They're the perfect minimal Ubuntu server for k3s, so really, check it out. So once I had these five servers up and running and made note of their IP addresses, it was time to configure my Ansible playbook.
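As a starting point, the playbook needs an inventory that says which machines are servers and which are agents. Assuming the usual k3s-ansible style layout (the file path, group names, and IPs here are placeholders), five nodes might be split three and two like this:

```bash
# Illustrative inventory for five nodes: three server (master) nodes, two agents.
# The file path and addresses are placeholders.
cat > inventory/my-cluster/hosts.ini <<'EOF'
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
EOF
```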
So here in the group_vars file is where all of my variables are set for Ansible. First you can specify the k3s version, and then you can specify an Ansible user, and this is the user that Ansible will run as. And another quick tip: if you need to set up Ansible, I've got a really quick video on the bare minimum stuff you need to do in order to set up Ansible; it's a great primer for this too. Next is setting a systemd directory, and you won't really need to touch this. Next is setting a flannel interface of eth0. Flannel is responsible for networking in k3s, and it's pretty dense, but if you want to know more about it you should totally check out their GitHub repo. As I understand it, it's responsible for layer 3 communication between nodes in a cluster, and here I set eth0 because that's the Ethernet interface on these virtual machines. Next I'm setting a server endpoint, and this is the IP address of the VIP that will get created for the Kubernetes control plane. This VIP gets created instead of you having to create external load balancers along with keepalived; it creates a VIP that is highly available, that's exposed through the Kubernetes cluster, that we can communicate with, and Kubernetes can too. So it's pretty awesome: that takes care of two to three additional virtual machines that you don't have to maintain anymore.
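Once the cluster is up, a quick way to convince yourself the VIP is doing its job is to hit the API through it; the address below is a placeholder:

```bash
# Example check: 192.168.30.222 stands in for the control-plane VIP.
# The API should keep answering on the VIP even if one server node is down.
ping -c 3 192.168.30.222

# The kubeconfig the playbook hands back should point its server: field at
# https://<VIP>:6443, so a normal query goes through the VIP as well.
kubectl get nodes -o wide
```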
Next I set my k3s token, and this should be a secret that you should obviously keep secret, but it's your password, or your token, for k3s, and you'll only need this in the beginning, or if you join additional nodes later.
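Put together, the group_vars described so far end up looking roughly like this. The variable names follow common k3s-ansible conventions and every value is a placeholder, so treat it as a sketch rather than the repo's actual defaults:

```bash
# Rough sketch of the group_vars walked through above (all values are placeholders).
cat > inventory/my-cluster/group_vars/all.yml <<'EOF'
k3s_version: v1.21.3+k3s1           # k3s release to install
ansible_user: ubuntu                # user Ansible connects and runs as
systemd_dir: /etc/systemd/system    # rarely needs changing
flannel_iface: eth0                 # NIC flannel should use on each node
apiserver_endpoint: 192.168.30.222  # VIP kube-vip creates for the control plane
k3s_token: change-me-long-random    # shared cluster secret; keep it secret
EOF
```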
and you'll only need this in the beginning, or if you join additional nodes later. I then added some additional arguments to my server and to my agents. As far as the server goes, I disabled the service load balancer; we'll want to do that if we're running MetalLB or another service load balancer, which we are. I'm also telling it not to deploy Traefik. This is up to you: if you want k3s to deploy Traefik, you can just delete that arg, but I'm keeping it disabled so I can install Traefik on my own later with Helm. If you wanted k3s to install it, you could just delete this argument. The next argument is just setting permissions on the kubeconfig, and this is really just for convenience so I don't have to run sudo when I'm remoted into a node to run kubectl. It's probably a good idea not to do this, but I got so tired of typing sudo every time, across the thousand times I spun this up while testing, that I just changed the permissions of that file. Feel free to remove that argument if you want. The next string of arguments is quite long, and I'll leave them in the documentation, but to summarize: the rest of these args, as well as the agent args you see here, are ones I found I needed to make k3s a little more responsive.
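For reference, here is a minimal sketch of what those first few server arguments boil down to, written out as plain k3s server flags (in the Ansible setup they are supplied through a template variable rather than typed by hand):

```bash
# Sketch only: these are standard k3s options, but check them against the actual template.
#   --disable servicelb          -> skip the built-in service load balancer (MetalLB replaces it)
#   --disable traefik            -> skip the bundled Traefik so it can be installed later with Helm
#   --write-kubeconfig-mode 644  -> make /etc/rancher/k3s/k3s.yaml readable without sudo
k3s server --disable servicelb --disable traefik --write-kubeconfig-mode 644
```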
What do I mean by that? One of the defaults for k3s is that if a node is not ready, it won't schedule additional pods on it until that node becomes ready again, and the timeout is something like five minutes, which is a long time. I mean, it's not a long time if you're running multiple replicas of a pod and running your pods in HA; you would almost not notice at all, especially in larger installations. But in smaller installations like home labs, I found that five minutes is a really long time, especially if you're running a replica count of one: that means your service is down for at least five minutes. So I scraped the internet, found a lot of these arguments, and I've been using them in my production home lab for about a year now, and they seem to work pretty well, but you might need to do some tweaking depending on your services, your hardware, and what works best for you.
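The exact flags live in the documentation, but as a hedged illustration, the kind of thing being tuned looks roughly like this when expressed as k3s pass-through arguments (the values here are examples, not necessarily the ones in the repo):

```bash
# Example only: shorter node-status and eviction timings so a dead node is noticed,
# and its pods rescheduled, in well under the default ~5 minutes.
k3s server \
  --kube-controller-manager-arg node-monitor-period=20s \
  --kube-controller-manager-arg node-monitor-grace-period=20s \
  --kube-apiserver-arg default-not-ready-toleration-seconds=30 \
  --kube-apiserver-arg default-unreachable-toleration-seconds=30 \
  --kubelet-arg node-status-update-frequency=5s
```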
And again, k3s will work without any of those arguments I just mentioned, so maybe you should try it that way first. Next I set the tag version for kube-vip; this is just the container image tag, and the current version is v0.4.2, so that's what I'm specifying here. I did similar things for MetalLB too: for MetalLB there's a speaker container, where the latest version is 0.12.1, and then there's a controller tag as well, which I also set to 0.12.1. These should be in lockstep on the same version, but I made it configurable in my template just in case they're not, so I didn't have to figure that out in the future. Next I chose an IP range for MetalLB. This is the range of IPs that your services will be exposed on, and that you can communicate with them on; I'll show you some examples in a little bit. I set a range from 192.168.30.80 all the way up to .90, so I get 11 IPs. Typically I only need one or two, but I set the range from 80 to 90 just in case.
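Pulled together, the values mentioned so far would sit in the playbook's group variables along these lines. The variable names and file path are assumptions modelled on the usual k3s-ansible layout; only the values come from this walkthrough:

```bash
# Append the assumed variables to the inventory's group_vars file.
cat >> group_vars/all.yml <<'EOF'
kube_vip_tag_version: "v0.4.2"                    # kube-vip container image tag
metal_lb_speaker_tag_version: "v0.12.1"           # MetalLB speaker image tag
metal_lb_controller_tag_version: "v0.12.1"        # MetalLB controller tag, kept in lockstep
metal_lb_ip_range: "192.168.30.80-192.168.30.90"  # 11 addresses for LoadBalancer services
EOF
```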
After that I checked my hosts.ini to make sure I had all of the IP addresses in there. The three virtual machines I'm going to use for my masters, also referred to as your server nodes, are .38, .39, and .40, and then my worker nodes, or agents, are going to be .41 and .42. This means three servers running the Kubernetes control plane and etcd, making it highly available, and two worker nodes to run my user workloads. If I had more virtual machines, I would just add them below.
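The inventory described here would look something like the following; the group names follow the common k3s-ansible convention, and the 192.168.30.0/24 network is inferred from the MetalLB range, so treat the exact layout as an assumption:

```bash
cat > hosts.ini <<'EOF'
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
EOF
```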
So with all of this configured, I ran the site playbook and pointed it at my hosts.ini. But before I did that, I started pinging my VIP; obviously it's not there yet, but as soon as it comes up it should respond. So I ran the playbook, and it installed and configured k3s on one of the server nodes. Shortly after that the VIP started responding, which means kube-vip is installed on that machine and the VIP is up, and then the playbook started joining the other machines to the cluster.
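The run itself is a single playbook invocation; the file names below match the usual k3s-ansible layout and may differ in your copy:

```bash
# Run the site playbook against the inventory above. In a second terminal you can
# ping your chosen VIP address; it should start answering once kube-vip comes up
# on the first server node.
ansible-playbook site.yml -i hosts.ini
```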
And then shortly after that, I had a highly available Kubernetes cluster on k3s: an HA cluster with etcd, a load balancer that's also HA for my control plane, and HA load balancing for all of my services. But we need to verify. Hopefully you trust me, but let's also verify. We can SSH into one of our server nodes, and once we're there we can run sudo kubectl get nodes, and we can see we have five nodes and they're all online.
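A quick way to check once you are on a server node (the output shape is illustrative):

```bash
# Lists every node; with this layout you should see three nodes carrying the
# control-plane,etcd,master roles and two agent nodes, all in Ready status.
sudo kubectl get nodes
```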
You can see I have three control-plane etcd masters, and two workers, or agents, ready for workloads. Super, super awesome. Now, instead of SSHing into this server every time, let's actually copy our kubeconfig locally so we can run the rest of the commands from our own machine. Let's exit out of here. You'll want to make a directory for your kubeconfig file if you've never done this before, or back up your existing kubeconfig file if it's there, and then we'll just scp, or secure copy, that file from one of the servers back to our local machine.
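A sketch of that copy step, assuming the first server node from the inventory and a user account that can read the kubeconfig (which the 644 mode set earlier allows; the user name and address are placeholders for your own):

```bash
mkdir -p ~/.kube
# Back up any existing kubeconfig before overwriting it.
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.bak
# Copy the k3s kubeconfig from a server node; adjust user and address to your setup.
scp ubuntu@192.168.30.38:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# If the copied file points at 127.0.0.1, edit the server: line to your VIP or a node IP.
```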
After it transfers, we can run kubectl get nodes and see the same thing. Awesome. So now we have kubectl running on this machine. Next I created a super simple nginx deployment for Kubernetes; it deploys an Alpine version of nginx and sets the replicas to three. I deployed it by running kubectl apply -f with the path to the deployment manifest, and Kubernetes told me that the deployment was created.
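A minimal version of that deployment might look like this; the names and labels are illustrative, not taken from the repo:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine     # Alpine variant of the official nginx image
        ports:
        - containerPort: 80
EOF
```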
Then I wanted to check how this deployment was doing, so I ran kubectl describe deployment nginx, and you can see it is deployed: the desired state is three, with three updated, three total, three available, and zero unavailable. So all three of my nginx pods are up and running. But this doesn't give me access to these pods from outside of Kubernetes.
This is where a service and a load balancer come in, the exact reason why we installed MetalLB. So I created a super simple service file. It's just a service pointing at the nginx app from the deployment we just created, and we tell the service to expose it on port 80, with the target port for the container also being port 80. And here's where the magic takes place: we set the type to LoadBalancer. This tells Kubernetes to ask our cloud load balancer to give us an IP, and our cloud load balancer right now is MetalLB, so MetalLB should hand us an IP address from the range we specified. If all of that happens, we should be able to get to our service. So I ran kubectl apply -f with the path to the service file, and Kubernetes told me it created the service.
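A matching service, sketched under the same assumed labels, would be roughly:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer    # ask the cluster's load balancer (MetalLB here) for an address
  selector:
    app: nginx          # route to the pods from the deployment above
  ports:
  - port: 80            # port exposed by the service
    targetPort: 80      # port the nginx container listens on
EOF
```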
Then I wanted to verify that, so I ran kubectl describe service nginx, and we can see that it exposed a LoadBalancer ingress on one of the IP addresses we specified for MetalLB. This means my nginx deployment of three pods is now exposed on a load balancer at 192.168.30.80, and if we go to that IP address we can see the welcome page from nginx. This is so awesome.
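Two quick checks along those lines:

```bash
# The describe output includes a "LoadBalancer Ingress" line with the assigned address.
kubectl describe service nginx | grep -i ingress
# Fetch the default nginx page from the address MetalLB handed out in this run.
curl http://192.168.30.80
```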
So this proves, all the way through, that MetalLB is working. But we never really tested the HA side of kube-vip. We know we can issue Kubernetes commands right now with kubectl, but we didn't take any of those nodes down, so let's do that too. I started pinging the VIP, and while doing that I remoted into my first master node, the server node that's running the control plane; it's also one of the nodes running kube-vip and supplying this VIP. I decided to shut it down, and as you can see on the right I'm still getting responses, while on the left I'm no longer getting a response from that machine. So this means we have an HA VIP. Now, I can't shut down a second node: an HA cluster of only three nodes can only lose one machine.
won't have access to +kubernetes<00:16:15.920> but<00:16:16.079> i<00:16:16.240> will<00:16:16.480> still<00:16:16.720> have<00:16:16.959> access + +00:16:17.350 --> 00:16:17.360 align:start position:0% +kubernetes but i will still have access + + +00:16:17.360 --> 00:16:19.030 align:start position:0% +kubernetes but i will still have access +to<00:16:17.600> all<00:16:17.759> of<00:16:17.839> my<00:16:18.000> workloads<00:16:18.480> that<00:16:18.639> are<00:16:18.720> running + +00:16:19.030 --> 00:16:19.040 align:start position:0% +to all of my workloads that are running + + +00:16:19.040 --> 00:16:20.550 align:start position:0% +to all of my workloads that are running +it's<00:16:19.199> just<00:16:19.440> that<00:16:19.600> i<00:16:19.759> can't<00:16:20.000> change<00:16:20.240> the<00:16:20.399> state + +00:16:20.550 --> 00:16:20.560 align:start position:0% +it's just that i can't change the state + + +00:16:20.560 --> 00:16:23.030 align:start position:0% +it's just that i can't change the state +of<00:16:20.639> kubernetes<00:16:21.360> nor<00:16:21.839> access<00:16:22.320> it<00:16:22.480> over<00:16:22.800> coupe + +00:16:23.030 --> 00:16:23.040 align:start position:0% +of kubernetes nor access it over coupe + + +00:16:23.040 --> 00:16:24.389 align:start position:0% +of kubernetes nor access it over coupe +control<00:16:23.600> so + +00:16:24.389 --> 00:16:24.399 align:start position:0% +control so + + +00:16:24.399 --> 00:16:26.710 align:start position:0% +control so +this<00:16:24.639> is<00:16:25.120> so<00:16:25.360> awesome<00:16:25.839> so<00:16:26.079> i<00:16:26.240> started<00:16:26.560> that + +00:16:26.710 --> 00:16:26.720 align:start position:0% +this is so awesome so i started that + + +00:16:26.720 --> 00:16:28.710 align:start position:0% +this is so awesome so i started that +other<00:16:26.959> node<00:16:27.199> back<00:16:27.519> up<00:16:27.759> and<00:16:27.920> it's<00:16:28.079> responding + +00:16:28.710 --> 00:16:28.720 align:start position:0% +other node back up and it's responding + + +00:16:28.720 --> 00:16:31.269 align:start position:0% +other node back up and it's responding +and<00:16:28.959> obviously<00:16:29.680> qbip<00:16:30.160> is<00:16:30.240> still<00:16:30.480> responding + +00:16:31.269 --> 00:16:31.279 align:start position:0% +and obviously qbip is still responding + + +00:16:31.279 --> 00:16:33.269 align:start position:0% +and obviously qbip is still responding +so<00:16:31.600> what<00:16:31.759> does<00:16:31.920> one<00:16:32.160> do<00:16:32.399> after<00:16:32.639> we<00:16:32.800> build<00:16:33.040> the + +00:16:33.269 --> 00:16:33.279 align:start position:0% +so what does one do after we build the + + +00:16:33.279 --> 00:16:35.670 align:start position:0% +so what does one do after we build the +perfect<00:16:33.680> k3s<00:16:34.240> cluster + +00:16:35.670 --> 00:16:35.680 align:start position:0% +perfect k3s cluster + + +00:16:35.680 --> 00:16:37.990 align:start position:0% +perfect k3s cluster +we<00:16:35.839> burn<00:16:36.160> it<00:16:36.240> down<00:16:36.480> of<00:16:36.639> course<00:16:37.360> there's<00:16:37.600> also<00:16:37.920> a + +00:16:37.990 --> 00:16:38.000 align:start position:0% +we burn it down of course there's also a + + +00:16:38.000 --> 00:16:40.710 align:start position:0% +we burn it down of course there's also a +playbook<00:16:38.480> to<00:16:38.800> totally<00:16:39.279> reset<00:16:39.680> k3s<00:16:40.320> back<00:16:40.560> to + +00:16:40.710 --> 00:16:40.720 align:start position:0% 
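To make that failover test concrete: with three server nodes, etcd needs a quorum of two (floor(3/2) + 1), so exactly one server can be lost before the API stops answering. Below is a minimal sketch of the kind of check you could run while powering a control node off; the VIP address is a placeholder, not a value from the video, and the kubeconfig is assumed to already point at the VIP.

```bash
#!/bin/bash
# Minimal failover check (placeholder values, adjust to your environment).
VIP="192.168.30.50"            # the virtual IP advertised by kube-vip

while true; do
  # ICMP check against the VIP itself
  ping -c1 -W1 "$VIP" >/dev/null && icmp="up" || icmp="down"
  # API check through the VIP (kubeconfig is assumed to target https://$VIP:6443)
  kubectl get nodes >/dev/null 2>&1 && api="up" || api="down"
  echo "$(date +%T)  vip=$icmp  kube-api=$api"
  sleep 2
done
```

With two of the three servers still up the API keeps answering; take a second server down and the workloads themselves keep running, but the API loses quorum and stops responding, which is exactly the behaviour described above.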
So what does one do after we build the perfect k3s cluster? We burn it down, of course. There's also a playbook to totally reset k3s back to its initial state, so running this playbook and pointing it at the same hosts will totally clean it up: it'll clean up all nodes, remove all containers, and reset everything back to the state it was in before we ran the playbook. This was super handy as I was testing my changes; I must have run this at least a thousand times, and after it's done we're back to a good state. One note: you might want to actually reboot the machines afterwards. I've noticed that the VIP stays up and will still respond, so I have a playbook to reboot all of these machines, and this playbook will actually wait for them to respond before it reports success. Just like that.
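As a rough illustration, the reset and reboot steps described above are just playbook runs against the same inventory. The file names and inventory path below follow the usual k3s-ansible layout and are assumptions; check the repository linked in the description for the exact files.

```bash
# Tear k3s down and return the hosts to their pre-install state
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini

# Reboot every node and wait for each one to come back before reporting success
ansible-playbook reboot.yml -i inventory/my-cluster/hosts.ini
```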
And so this is everything that everyone struggles with when setting up k3s: no more using MySQL and making that HA if you don't want to, no more spinning up additional load balancers and keepalived and making those HA if you don't want to, no more configuring MetalLB or installing it with Helm if you don't want to. Just one simple playbook that spins up all of that in one shot, and then you can burn it down if you want to, too.
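That "one playbook" boils down to a single provisioning run. Again, the playbook name and inventory path are assumptions based on the k3s-ansible layout rather than values confirmed in the video.

```bash
# Provision the whole HA cluster (etcd, kube-vip, service load balancer) in one shot
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini

# Sanity check once it finishes
kubectl get nodes -o wide
```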
So again, a huge thanks to the k3s community who made the original playbook, along with Jeff Geerling, thank you so much, and also thank you to GitHub user 212850a, thank you so much. I'll have links in the description below to all of the code. So what do you think of spinning up a truly HA version of k3s using Ansible? Is there anything I should contribute to the script to make it easier for you? Let me know in the comments section below, and remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching. Fix the lights! If you weren't here last week, a small episode with the lights: I couldn't figure out what was going on with my bottom lights. My bottom lights ended up having a small issue, and it took me a long time to figure out; it ended up being a firewall rule. So if it's not DNS, it is a firewall rule. All right, changing the lights as soon as I mention them. If it's not DNS, it's a firewall rule. Now you're really testing me.
All right. It's gonna happen, it's gonna happen. So...

diff --git a/sub-es.vtt b/sub-es.vtt
new file mode 100644
index 0000000..56771bb
--- /dev/null
+++ b/sub-es.vtt
WEBVTT
Kind: captions
Language: es

Words to describe setting up K3S: it's hard. It's really hard to set up. Isn't this overkill? What is the load balancer? Why do I need two load balancers? Should I use etcd? I need two load balancers and keepalived. What is MetalLB? Have you heard of kube-vip?
Isn't that a single point of failure? I know: I'm automating everything today. Not only are we going to set up K3S with etcd and an HA install with kube-vip and MetalLB, we're also going to automate everything so that we can't really screw it up. So we're going to fully automate the K3S install so that it's 100% repeatable, and then we'll tear it all down as if it never happened. But before we do, a big thanks to our sponsor, Microcenter. If you're thinking about building a new PC, look no further than Microcenter. If you've never been to Microcenter, you're missing out on seeing a huge selection of tech in person. They have everything for your custom PC build: SSDs and hard drives, power supplies, memory, air and water cooling, motherboards, video cards, processors, and more. Microcenter is your one-stop shop to fully customize your next PC build, and don't worry if it's your first time building a PC: they have plenty of helpful, knowledgeable staff who are there to help you and will point you in the right direction, so you don't try to apply thermal paste like this.
Microcenter has been kind enough to give all new customers a free SSD, available in store only, so check the link in the description for more details. So, how did I get here? Well, as you may or may not know, I've been running K3S in my own environment for quite a while, and I even have a video on setting up K3S with MySQL. Now, there's nothing wrong with the MySQL version of K3S, it works great, but at the time the etcd version wasn't available, and the etcd version is super interesting because it creates a highly available database on the nodes themselves instead of hosting it outside the cluster. And right around that time I saw that Jeff Geerling had created a video about Ansible, and that sent me down a rabbit hole: learning Ansible, creating a video about Ansible, and automating lots of tasks. Well, you know how that goes.
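For context on the embedded etcd mode mentioned above, this is roughly what gets automated: k3s can initialise its own etcd datastore on the first server and let additional servers join it, instead of pointing every server at an external MySQL database. A minimal manual sketch, where the IP address and token are placeholders:

```bash
# First server: initialise the embedded etcd datastore
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --token SECRET_TOKEN

# Additional servers: join the existing etcd cluster
curl -sfL https://get.k3s.io | \
  sh -s - server --server https://192.168.30.11:6443 --token SECRET_TOKEN
```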
Anyway, so I found that GitHub repository, cloned it, spun up some virtual machines, and then tried to provision a highly available cluster, but there was just one problem: the Ansible playbook only supported spinning up one etcd node, and that meant only one server node, which isn't HA. I mean, it's configured for HA, but I would have to manually add additional server nodes to make it HA, and that's no fun, so technically it wasn't HA out of the box. So I decided to dig into the code and the branches, and I found a fork where someone had actually fixed that problem so it could create a highly available cluster out of the box with Ansible, and I saw they had also added support for kube-vip. This was awesome, because it's exactly what I was trying to do. I love open source, so a huge thanks to user 212850a; this gave me a good starting point for automating the rest. Again, many thanks to the open-source community, Jeff Geerling, and user 212850a. After digging in a bit, I found that most of it worked, but it needed some updates and some configuration changes to work with the latest version of kube-vip, along with some other features I wanted to add. So I decided to roll up my sleeves and start working on this fork in my own branch, and before making it public I wanted to accomplish a few things. I wanted to make sure that anyone who used this could start with an unlimited number of nodes. I wanted to make sure kube-vip was rock solid, since it creates the load balancer I would use to make K3s fault tolerant. I also wanted to automate an external service load balancer, so that when you expose a service you get an IP address for that service from your cluster, and then anyone can use that IP address to access the services inside K3s.
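To make the "external service load balancer" idea concrete, this is the effect it has from the Kubernetes side: a Service of type LoadBalancer gets an external IP handed out from a local address pool instead of from a cloud provider. The deployment name and addresses below are made up for illustration:

```bash
# Expose a hypothetical deployment; the LoadBalancer IP comes from the
# address pool managed by the service load balancer (e.g. MetalLB)
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# EXTERNAL-IP should show an address from the configured pool, e.g. 192.168.30.80
kubectl get svc nginx
```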
El primer balanceador de carga que +normalmente <00:04:25.330>necesita <00:04:25.620>en <00:04:25.910>K3s <00:04:26.200>es <00:04:26.490>un <00:04:26.780>balanceador <00:04:27.070>de <00:04:27.360>carga + +00:04:28.150 --> 00:04:28.160 align:start position:0% +normalmente necesita en K3s es un balanceador de carga + + +00:04:28.160 --> 00:04:30.790 align:start position:0% +normalmente necesita en K3s es un balanceador de carga +para <00:04:28.502>su <00:04:28.844>API <00:04:29.186>de <00:04:29.528>Kubernetes: <00:04:29.870>este <00:04:30.212>es <00:04:30.554>el + +00:04:30.790 --> 00:04:30.800 align:start position:0% +para su API de Kubernetes: este es el + + +00:04:30.800 --> 00:04:32.550 align:start position:0% +para su API de Kubernetes: este es el +balanceador <00:04:31.009>de <00:04:31.218>carga <00:04:31.427>para <00:04:31.636>el <00:04:31.845>plano <00:04:32.054>de <00:04:32.263>control <00:04:32.472>y + +00:04:32.550 --> 00:04:32.560 align:start position:0% +balanceador de carga para el plano de control y + + +00:04:32.560 --> 00:04:34.469 align:start position:0% +balanceador de carga para el plano de control y +debe <00:04:32.788>ser <00:04:33.016>tolerante <00:04:33.244>a <00:04:33.472>fallas <00:04:33.700>para <00:04:33.928>que, <00:04:34.156>si + +00:04:34.469 --> 00:04:34.479 align:start position:0% +debe ser tolerante a fallas para que, si + + +00:04:34.479 --> 00:04:37.030 align:start position:0% +debe ser tolerante a fallas para que, si +emite <00:04:34.879>comandos <00:04:35.279>k3s, <00:04:35.679>aún <00:04:36.079>pueda <00:04:36.479>obtener <00:04:36.879>una + +00:04:37.030 --> 00:04:37.040 align:start position:0% +emite comandos k3s, aún pueda obtener una + + +00:04:37.040 --> 00:04:38.550 align:start position:0% +emite comandos k3s, aún pueda obtener una +respuesta <00:04:37.466>y <00:04:37.892>el <00:04:38.318>otro + +00:04:38.550 --> 00:04:38.560 align:start position:0% +respuesta y el otro + + +00:04:38.560 --> 00:04:41.270 align:start position:0% +respuesta y el otro +balanceador <00:04:38.808>de <00:04:39.056>carga <00:04:39.304>es <00:04:39.552>un <00:04:39.800>balanceador <00:04:40.048>de <00:04:40.296>carga <00:04:40.544>de <00:04:40.792>servicio <00:04:41.040>o + +00:04:41.270 --> 00:04:41.280 align:start position:0% +balanceador de carga es un balanceador de carga de servicio o + + +00:04:41.280 --> 00:04:43.990 align:start position:0% +balanceador de carga es un balanceador de carga de servicio o +Kubernetes <00:04:41.775>para <00:04:42.270>que <00:04:42.765>exponga <00:04:43.260>los <00:04:43.755>servicios + +00:04:43.990 --> 00:04:44.000 align:start position:0% +Kubernetes para que exponga los servicios + + +00:04:44.000 --> 00:04:46.469 align:start position:0% +Kubernetes para que exponga los servicios +en <00:04:44.248>la <00:04:44.496>mayoría <00:04:44.744>de <00:04:44.992>los <00:04:45.240>entornos <00:04:45.488>de <00:04:45.736>nube, <00:04:45.984>proporcionan <00:04:46.232>un + +00:04:46.469 --> 00:04:46.479 align:start position:0% +en la mayoría de los entornos de nube, proporcionan un + + +00:04:46.479 --> 00:04:48.710 align:start position:0% +en la mayoría de los entornos de nube, proporcionan un +balanceador <00:04:46.683>de <00:04:46.887>carga <00:04:47.091>en <00:04:47.295>la <00:04:47.499>nube <00:04:47.703>para <00:04:47.907>que <00:04:48.111>exponga <00:04:48.315>los + +00:04:48.710 --> 00:04:48.720 align:start position:0% +balanceador de carga en la nube para que exponga los + + +00:04:48.720 --> 00:04:50.629 align:start position:0% 
+balanceador de carga en la nube para que exponga los +servicios <00:04:48.930>y <00:04:49.140>este <00:04:49.350>balanceador <00:04:49.560>de <00:04:49.770>carga <00:04:49.980>de <00:04:50.190>servicio <00:04:50.400>del + +00:04:50.629 --> 00:04:50.639 align:start position:0% +servicios y este balanceador de carga de servicio del + + +00:04:50.639 --> 00:04:52.629 align:start position:0% +servicios y este balanceador de carga de servicio del +que <00:04:51.059>estoy <00:04:51.479>hablando <00:04:51.899>es <00:04:52.319>para + +00:04:52.629 --> 00:04:52.639 align:start position:0% +que estoy hablando es para + + +00:04:52.639 --> 00:04:55.430 align:start position:0% +que estoy hablando es para +entornos <00:04:52.972>que <00:04:53.305>no <00:04:53.638>son <00:04:53.971>de <00:04:54.304>nube <00:04:54.637>en + +00:04:55.430 --> 00:04:55.440 align:start position:0% +entornos que no son de nube en + + +00:04:55.440 --> 00:04:57.430 align:start position:0% +entornos que no son de nube en +entornos <00:04:55.691>autoalojados <00:04:55.942>y, <00:04:56.193>dado <00:04:56.444>que <00:04:56.695>no <00:04:56.946>tenemos <00:04:57.197>un + +00:04:57.430 --> 00:04:57.440 align:start position:0% +entornos autoalojados y, dado que no tenemos un + + +00:04:57.440 --> 00:05:00.390 align:start position:0% +entornos autoalojados y, dado que no tenemos un +balanceador <00:04:57.687>de <00:04:57.934>carga <00:04:58.181>en <00:04:58.428>la <00:04:58.675>nube <00:04:58.922>que <00:04:59.169>nos <00:04:59.416>brinde <00:04:59.663>direcciones <00:04:59.910>IP <00:05:00.157>para + +00:05:00.390 --> 00:05:00.400 align:start position:0% +balanceador de carga en la nube que nos brinde direcciones IP para + + +00:05:00.400 --> 00:05:02.950 align:start position:0% +balanceador de carga en la nube que nos brinde direcciones IP para +exponer <00:05:00.813>nuestros <00:05:01.226>servicios <00:05:01.639>en <00:05:02.052>el <00:05:02.465>exterior, <00:05:02.878>necesitamos + +00:05:02.950 --> 00:05:02.960 align:start position:0% +exponer nuestros servicios en el exterior, necesitamos + + +00:05:02.960 --> 00:05:05.189 align:start position:0% +exponer nuestros servicios en el exterior, necesitamos +usar <00:05:03.344>algo <00:05:03.728>que <00:05:04.112>pueda <00:05:04.496>emular <00:05:04.880>un + +00:05:05.189 --> 00:05:05.199 align:start position:0% +usar algo que pueda emular un + + +00:05:05.199 --> 00:05:07.670 align:start position:0% +usar algo que pueda emular un +balanceador <00:05:05.423>de <00:05:05.647>carga <00:05:05.871>en <00:05:06.095>la <00:05:06.319>nube <00:05:06.543>al <00:05:06.767>que <00:05:06.991>Kubernetes <00:05:07.215>pueda <00:05:07.439>solicitar + +00:05:07.670 --> 00:05:07.680 align:start position:0% +balanceador de carga en la nube al que Kubernetes pueda solicitar + + +00:05:07.680 --> 00:05:10.310 align:start position:0% +balanceador de carga en la nube al que Kubernetes pueda solicitar +una <00:05:08.013>dirección <00:05:08.346>IP <00:05:08.679>para <00:05:09.012>que <00:05:09.345>nuestros <00:05:09.678>servicios + +00:05:10.310 --> 00:05:10.320 align:start position:0% +una dirección IP para que nuestros servicios + + +00:05:10.320 --> 00:05:12.629 align:start position:0% +una dirección IP para que nuestros servicios +puedan <00:05:10.617>exponerse, <00:05:10.914>así <00:05:11.211>que <00:05:11.508>tuve <00:05:11.805>que <00:05:12.102>elegir <00:05:12.399>entre + +00:05:12.629 --> 00:05:12.639 align:start position:0% +puedan exponerse, así que tuve que elegir entre + + +00:05:12.639 --> 
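To make the distinction concrete, here is a minimal sketch of what the service load balancer side looks like from the user's point of view; the nginx deployment is only an illustration, not part of the setup described here.

```bash
# Illustration only: expose a deployment as a LoadBalancer service.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# On a bare-metal cluster with no (emulated) cloud load balancer the
# external IP never arrives and the service sits pending:
#   NAME    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
#   nginx   LoadBalancer   10.43.x.x     <pending>     80:3xxxx/TCP
kubectl get svc nginx
```

Once something like MetalLB (or kube-vip in services mode) is installed and given an address pool, that same pending service receives an address from the pool.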
So I had to choose between load balancers for this step. kube-vip can actually do both: it can be a service load balancer, or a load balancer for your control plane, for your Kubernetes etcd nodes. That sounded like a great solution, because then I wouldn't have to use MetalLB (and I love MetalLB), but taking on one less dependency sounded like a good idea, especially when it comes to breaking changes; it's one less thing to manage. Of course, the other option for exposing my services was to just use MetalLB, and honestly, after hours and hours of trying to get the kube-vip service load balancer to work with my services, I decided to fall back to good old reliable MetalLB. MetalLB just works, and I could use my existing configuration for it, so it really wasn't a loss at all.

At this point I had my architecture pretty much decided: kube-vip for my Kubernetes control plane and MetalLB for my service load balancer. Once I had worked out creating multiple server nodes, setting up kube-vip, and setting up MetalLB, it was time to do some testing. For my test I created five nodes; these are standard Ubuntu cloud image nodes, and I recently made a video about provisioning new Ubuntu machines using the cloud image and cloud-init. They are the perfect minimal Ubuntu server for K3s, so check it out. Once I had these five servers up and running and had noted their IP addresses, it was time to configure my Ansible playbook.
Here in the group_vars file is where all of my variables for Ansible are configured. First you can specify the k3s version, and then you can specify an Ansible user; this is the user that Ansible will run as. Another quick tip: if you need to set up Ansible, I have a very quick video on the minimum things you need to do to get Ansible configured, and it's also a great introduction to it. Next is a systemd directory, and you really won't need to touch this. After that comes a flannel interface of eth0. Flannel is responsible for networking in k3s, and it's pretty dense, but if you want to know more about it you should check out its GitHub repository; as I understand it, it's responsible for layer 3 communication between nodes in a cluster. Here I set it to eth0 because that's the ethernet interface on these virtual machines.

Next I configure a server endpoint, and this is the IP address of the VIP that will be created for the Kubernetes control plane. Instead of having to create external load balancers along with keepalived, this creates a VIP that is highly available and exposed through the Kubernetes cluster, one that we can talk to and that Kubernetes can talk to as well.
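As a quick sanity check, assuming a hypothetical VIP of 192.168.30.222 (use whatever address you set as the server endpoint), the control plane should keep answering on that address even while the server node currently holding the VIP is down:

```bash
# Hypothetical VIP; substitute the address you configured as the server endpoint.
VIP=192.168.30.222

# The Kubernetes API listens on 6443; kube-vip moves the VIP to a healthy
# server node, so these should keep working if one control-plane node dies.
curl -ks https://$VIP:6443/version
kubectl --server "https://$VIP:6443" get nodes
```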
It's pretty awesome because it takes care of two or three additional virtual machines that you no longer have to maintain. Next I set my K3s token, and this should be a secret that you obviously keep secret; it's your password, or your token, for K3s, and you'll only need it at the beginning or if you join additional nodes later.
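Pulled together, the variables covered so far might look roughly like this in group_vars. The variable names and values below are placeholders based on the description above, not the playbook's exact contents, and the token is obviously a fake one.

```bash
# Sketch only: assumed variable names and example values.
cat > group_vars/all.yml <<'EOF'
k3s_version: v1.23.4+k3s1          # K3s release to install (example value)
ansible_user: ansibleuser          # the user Ansible connects and runs as
systemd_dir: /etc/systemd/system   # where the k3s unit files live; rarely touched
flannel_iface: eth0                # ethernet interface flannel should use
apiserver_endpoint: 192.168.30.222 # VIP for the control plane (hypothetical address)
k3s_token: changeme-super-secret   # shared cluster secret; keep it secret
EOF
```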
Then I added some extra arguments to my server and to my agents. As far as the server goes, I disabled the service load balancer; we'll want to do that if we're running MetalLB or another service load balancer, which is what we're doing. I'm also telling it not to deploy Traefik. That one is up to you: if you want K3s to deploy Traefik you can remove that argument, but I keep it disabled because I like to install Traefik on my own later with Helm; if you'd rather have it installed for you, you can simply remove this argument. The next argument just sets permissions on the kubeconfig, and this is really only for convenience so that I don't have to run sudo when I'm remoted into a node to run kubectl. It's probably a good idea not to do this, but I got so tired of typing sudo every single time while testing this the thousand times it took me to get it working that I just changed the permissions on that file; feel free to remove that argument if you like.
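For reference, the three server options just described correspond to standard K3s server flags; here they are shown as a plain k3s server invocation rather than the playbook variable that carries them.

```bash
# The flags discussed above:
#   --disable servicelb           turn off the built-in Klipper load balancer so
#                                 MetalLB can own LoadBalancer services
#   --disable traefik             skip the bundled Traefik so it can be installed
#                                 later with Helm
#   --write-kubeconfig-mode 644   make /etc/rancher/k3s/k3s.yaml readable without
#                                 sudo (convenient, but less secure)
k3s server \
  --disable servicelb \
  --disable traefik \
  --write-kubeconfig-mode 644
```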
The next string of arguments is quite a few, so I'll leave them in the documentation, but to summarize the rest of them, as well as the agent arguments you see here: I found that I needed most of these arguments to make k3s a little more responsive. What do I mean by that? One of the defaults for k3s is that if a node isn't ready, it won't schedule pods again until that node is ready, and the timeout is about five minutes, which is a long time. I mean, it isn't a long time if you're running several replicas of a pod and running your pods highly available; you'd hardly notice it at all, especially in larger installations. But in smaller installations like home labs, I found that five minutes is a really long time, especially if you're running a replica of one, which means your service is down for at least five minutes. So I searched around the internet, found a lot of these arguments, and I've been using them in my production home lab for about a year; they seem to work pretty well, but you may need to make some adjustments depending on your services, your hardware, and what works best for you.
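The exact list lives in the documentation as mentioned. As an illustration of the kind of tuning involved: the five-minute wait described above comes from Kubernetes' default 300-second not-ready and unreachable tolerations, and K3s lets you pass tighter values through to the embedded components. The flags below are real pass-through options, but the values are examples, not a recommendation.

```bash
# Illustration only: shorten how long Kubernetes waits before reacting to a
# node that has gone not-ready. Tune the values to your own hardware.
k3s server \
  --kubelet-arg node-status-update-frequency=5s \
  --kube-controller-manager-arg node-monitor-period=5s \
  --kube-controller-manager-arg node-monitor-grace-period=20s \
  --kube-apiserver-arg default-not-ready-toleration-seconds=30 \
  --kube-apiserver-arg default-unreachable-toleration-seconds=30
```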
position:0% +servicios, tu hardware y lo que funcione + + +00:09:57.279 --> 00:09:59.350 align:start position:0% +servicios, tu hardware y lo que funcione +mejor <00:09:57.599>para <00:09:57.919>ti <00:09:58.239>y, <00:09:58.559>nuevamente, <00:09:58.879>k3s <00:09:59.199>funcionará + +00:09:59.350 --> 00:09:59.360 align:start position:0% +mejor para ti y, nuevamente, k3s funcionará + + +00:09:59.360 --> 00:10:01.110 align:start position:0% +mejor para ti y, nuevamente, k3s funcionará +sin <00:09:59.577>ninguno <00:09:59.794>de <00:10:00.011>esos <00:10:00.228>argumentos <00:10:00.445>que <00:10:00.662>acabo <00:10:00.879>de + +00:10:01.110 --> 00:10:01.120 align:start position:0% +sin ninguno de esos argumentos que acabo de + + +00:10:01.120 --> 00:10:02.949 align:start position:0% +sin ninguno de esos argumentos que acabo de +mencionar <00:10:01.413>y <00:10:01.706>tal <00:10:01.999>vez <00:10:02.292>deberías <00:10:02.585>probarlo <00:10:02.878>de + +00:10:02.949 --> 00:10:02.959 align:start position:0% +mencionar y tal vez deberías probarlo de + + +00:10:02.959 --> 00:10:05.030 align:start position:0% +mencionar y tal vez deberías probarlo de +esa <00:10:03.239>manera <00:10:03.519>primero, <00:10:03.799>a <00:10:04.079>continuación, <00:10:04.359>configuro <00:10:04.639>la + +00:10:05.030 --> 00:10:05.040 align:start position:0% +esa manera primero, a continuación, configuro la + + +00:10:05.040 --> 00:10:07.030 align:start position:0% +esa manera primero, a continuación, configuro la +versión <00:10:05.235>de <00:10:05.430>etiqueta <00:10:05.625>para <00:10:05.820>cube <00:10:06.015>vib <00:10:06.210>y <00:10:06.405>esta <00:10:06.600>es <00:10:06.795>solo + +00:10:07.030 --> 00:10:07.040 align:start position:0% +versión de etiqueta para cube vib y esta es solo + + +00:10:07.040 --> 00:10:08.710 align:start position:0% +versión de etiqueta para cube vib y esta es solo +la <00:10:07.266>etiqueta <00:10:07.492>de <00:10:07.718>imagen <00:10:07.944>del <00:10:08.170>contenedor, <00:10:08.396>la + +00:10:08.710 --> 00:10:08.720 align:start position:0% +la etiqueta de imagen del contenedor, la + + +00:10:08.720 --> 00:10:10.110 align:start position:0% +la etiqueta de imagen del contenedor, la +versión <00:10:08.920>actual <00:10:09.120>es + +00:10:10.110 --> 00:10:10.120 align:start position:0% +versión actual es + + +00:10:10.120 --> 00:10:13.030 align:start position:0% +versión actual es +v0.4.2 <00:10:10.445>y <00:10:10.770>eso <00:10:11.095>es <00:10:11.420>lo <00:10:11.745>que <00:10:12.070>estoy <00:10:12.395>especificando + +00:10:13.030 --> 00:10:13.040 align:start position:0% +v0.4.2 y eso es lo que estoy especificando + + +00:10:13.040 --> 00:10:15.590 align:start position:0% +v0.4.2 y eso es lo que estoy especificando +aquí <00:10:13.359>e <00:10:13.678>hice <00:10:13.997>cosas <00:10:14.316>similares <00:10:14.635>también <00:10:14.954>para <00:10:15.273>metal + +00:10:15.590 --> 00:10:15.600 align:start position:0% +aquí e hice cosas similares también para metal + + +00:10:15.600 --> 00:10:18.790 align:start position:0% +aquí e hice cosas similares también para metal +lb, <00:10:15.988>así <00:10:16.376>que <00:10:16.764>para <00:10:17.152>metal <00:10:17.540>lb <00:10:17.928>hay <00:10:18.316>un + +00:10:18.790 --> 00:10:18.800 align:start position:0% +lb, así que para metal lb hay un + + +00:10:18.800 --> 00:10:21.230 align:start position:0% +lb, así que para metal lb hay un +contenedor <00:10:19.093>de <00:10:19.386>altavoz <00:10:19.679>cuya <00:10:19.972>última 
<00:10:20.265>versión <00:10:20.558>es + +00:10:21.230 --> 00:10:21.240 align:start position:0% +contenedor de altavoz cuya última versión es + + +00:10:21.240 --> 00:10:24.069 align:start position:0% +contenedor de altavoz cuya última versión es +0.12.1 <00:10:21.555>y <00:10:21.870>luego <00:10:22.185>también <00:10:22.500>hay <00:10:22.815>una <00:10:23.130>etiqueta <00:10:23.445>de <00:10:23.760>controlador + +00:10:24.069 --> 00:10:24.079 align:start position:0% +0.12.1 y luego también hay una etiqueta de controlador + + +00:10:24.079 --> 00:10:27.509 align:start position:0% +0.12.1 y luego también hay una etiqueta de controlador +que <00:10:24.719>también <00:10:25.359>configuré <00:10:25.999>en <00:10:26.639>0.12.1 <00:10:27.279>ahora + +00:10:27.509 --> 00:10:27.519 align:start position:0% +que también configuré en 0.12.1 ahora + + +00:10:27.519 --> 00:10:29.350 align:start position:0% +que también configuré en 0.12.1 ahora +estos <00:10:27.747>deberían <00:10:27.975>estar <00:10:28.203>en <00:10:28.431>sincronía <00:10:28.659>en <00:10:28.887>el <00:10:29.115>mismo + +00:10:29.350 --> 00:10:29.360 align:start position:0% +estos deberían estar en sincronía en el mismo + + +00:10:29.360 --> 00:10:31.430 align:start position:0% +estos deberían estar en sincronía en el mismo +versión, <00:10:29.666>pero <00:10:29.972>lo <00:10:30.278>hice <00:10:30.584>configurable <00:10:30.890>en <00:10:31.196>mi + +00:10:31.430 --> 00:10:31.440 align:start position:0% +versión, pero lo hice configurable en mi + + +00:10:31.440 --> 00:10:33.430 align:start position:0% +versión, pero lo hice configurable en mi +plantilla <00:10:31.607>en <00:10:31.774>caso <00:10:31.941>de <00:10:32.108>que <00:10:32.275>no <00:10:32.442>lo <00:10:32.609>sean <00:10:32.776>para <00:10:32.943>no <00:10:33.110>tener <00:10:33.277>que + +00:10:33.430 --> 00:10:33.440 align:start position:0% +plantilla en caso de que no lo sean para no tener que + + +00:10:33.440 --> 00:10:35.030 align:start position:0% +plantilla en caso de que no lo sean para no tener que +averiguarlo <00:10:34.959>en + +00:10:35.030 --> 00:10:35.040 align:start position:0% +averiguarlo en + + +00:10:35.040 --> 00:10:37.430 align:start position:0% +averiguarlo en +el <00:10:35.300>futuro <00:10:35.560>y <00:10:35.820>luego <00:10:36.080>elegí <00:10:36.340>un <00:10:36.600>rango <00:10:36.860>de <00:10:37.120>IP + +00:10:37.430 --> 00:10:37.440 align:start position:0% +el futuro y luego elegí un rango de IP + + +00:10:37.440 --> 00:10:40.470 align:start position:0% +el futuro y luego elegí un rango de IP +para <00:10:37.665>metal <00:10:37.890>lb, <00:10:38.115>por <00:10:38.340>lo <00:10:38.565>que <00:10:38.790>este <00:10:39.015>es <00:10:39.240>el <00:10:39.465>rango <00:10:39.690>de <00:10:39.915>IP + +00:10:40.470 --> 00:10:40.480 align:start position:0% +para metal lb, por lo que este es el rango de IP + + +00:10:40.480 --> 00:10:43.030 align:start position:0% +para metal lb, por lo que este es el rango de IP +que <00:10:41.000>cuando <00:10:41.520>exponga <00:10:42.040>servicios, <00:10:42.560>se + +00:10:43.030 --> 00:10:43.040 align:start position:0% +que cuando exponga servicios, se + + +00:10:43.040 --> 00:10:45.110 align:start position:0% +que cuando exponga servicios, se +expondrán <00:10:43.500>y <00:10:43.960>podrá <00:10:44.420>comunicarse <00:10:44.880>con + +00:10:45.110 --> 00:10:45.120 align:start position:0% +expondrán y podrá comunicarse con + + +00:10:45.120 --> 00:10:46.870 align:start position:0% +expondrán y podrá comunicarse 
I'll show you some examples here in a moment, but I set a range of 192.168.30.80 through 192.168.30.90, so I get 11 IPs. Normally I only need one or two, but I set the range from 80 to 90 just in case.
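For reference, a minimal sketch of what those settings might look like in the playbook's group vars; the file path and variable names here are assumptions that depend on your copy of the playbook, so check yours before editing:

```bash
# Sketch only: append the MetalLB settings described above to the group vars.
# Path and variable names are assumptions based on the k3s-ansible layout.
cat >> inventory/my-cluster/group_vars/all.yml <<'EOF'
# Keep the speaker and controller on the same MetalLB release
metal_lb_speaker_tag_version: "v0.12.1"
metal_lb_controller_tag_version: "v0.12.1"

# Pool of addresses MetalLB may hand out to LoadBalancer services
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
EOF
```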
After that I checked my hosts.ini to make sure all of the IP addresses were in there. The three virtual machines I'm going to use for my masters are .38, .39, and .40; these are also known as your server nodes. My worker nodes, or agents, will be .41 and .42. That means three servers running the Kubernetes control plane and etcd, which makes it highly available, plus two worker nodes to run my user workloads. If I had more virtual machines, I would simply add them here.
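A sketch of an inventory matching that layout; the group names follow the upstream k3s-ansible convention and the path is an assumption:

```bash
# Three masters (server nodes) and two agents, as described above.
cat > inventory/my-cluster/hosts.ini <<'EOF'
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
EOF
```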
With all of that configured, I ran the site playbook and pointed it at my hosts.ini. But before doing that, I started pinging my VIP. Obviously it isn't there yet; as soon as it comes up it should start responding. So I ran the playbook, and it installed and configured k3s on one of the server nodes. Shortly after that the VIP started responding, which means kube-vip is installed on that machine and the VIP is up, and then it started joining the other machines to the cluster.
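The run itself is roughly the following; site.yml is the playbook name used by the upstream k3s-ansible project, and the VIP address is a placeholder since it is configured elsewhere in the group vars:

```bash
# In one terminal, watch the virtual IP come up (placeholder address).
ping <k3s-vip-address>

# In another, run the site playbook against the inventory; it installs and
# configures k3s, kube-vip, and MetalLB on every host listed in hosts.ini.
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```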
Shortly after that, I had a highly available Kubernetes cluster on k3s: an HA cluster with etcd, a load balancer for my control plane that is also HA, and HA load balancers for all of my services. But we need to verify it. I hope you trust me, but let's verify anyway: we can SSH into one of our server nodes and, once we're there, run sudo kubectl get nodes.
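That check looks something like this (the username is a placeholder); k3s ships its own kubectl on the node, which is why running it there with sudo works directly:

```bash
# List the cluster nodes from the first server node.
ssh user@192.168.30.38 'sudo kubectl get nodes -o wide'
```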
We can see that we have five nodes and all of them are online: three control plane / etcd masters and two workers, or agents, ready for workloads. Super, super cool. Instead of SSHing into this server every time, let's copy our kube config locally so we can run the rest of the commands from our own machine, so let's exit out of here. You'll want to create a directory for your kube config file if you've never done this before, or back up your existing kube config if one is already there. Then we simply secure copy, or scp, that file from one of the servers to our local machine. After it transfers, we can run kubectl get nodes and see the same thing. Great, so now we have kubectl running on this machine.
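That copy boils down to something like this, assuming the standard k3s kubeconfig path on the server and that the file is readable by your SSH user (by default k3s writes it root-only, so you may need to copy it with sudo on the node first); the VIP address is a placeholder:

```bash
# Create the local config directory (back up any existing ~/.kube/config first).
mkdir -p ~/.kube

# Pull the kubeconfig that k3s writes on the server nodes.
scp user@192.168.30.38:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# The copied file points at 127.0.0.1; point it at the cluster VIP instead,
# then verify we see the same five nodes from our own machine.
sed -i 's/127.0.0.1/<k3s-vip-address>/' ~/.kube/config
kubectl get nodes
```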
Next, I created a super simple nginx deployment for Kubernetes. It deploys an alpine version of nginx and sets the replicas to three. I applied it by running kubectl apply -f with the path to the deployment manifest, and Kubernetes told me the deployment was created.
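A minimal version of that deployment, applied straight from a heredoc rather than a separate file; the exact names and labels in the video's manifest may differ:

```bash
# Three replicas of an alpine-based nginx image, as described above.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
EOF
```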
Then I wanted to check how the deployment was doing, so I ran kubectl describe deployment nginx, and you can see it is deployed: the desired state is three, with three updated, three total, three available, and zero unavailable. So my three nginx pods are up and running, but that alone doesn't give me access to them from outside Kubernetes. This is where a service and a load balancer come in, which is the exact reason we installed MetalLB, so I created a super simple service file.
This service file is just a service that points at the nginx app from the deployment we just created. We tell the service to expose it on port 80, and that the target port on the container is also port 80. And here is where the magic happens: we set the type to LoadBalancer. That tells Kubernetes to ask our cloud load balancer for an IP, and our cloud load balancer right now is MetalLB, so MetalLB should hand us an address from the range we specified. If all of that happens, we should be able to reach our service. So I ran kubectl apply -f with the path to the service file, Kubernetes told me it created the service, and then I wanted to verify it, so I ran kubectl describe service nginx.
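A sketch of that service, again applied from a heredoc; the selector has to match whatever labels your deployment actually uses:

```bash
# LoadBalancer service in front of the nginx pods: port 80 -> targetPort 80.
# MetalLB fulfils the LoadBalancer request and assigns an IP from its pool.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF

# The "LoadBalancer Ingress" field shows the address MetalLB handed out.
kubectl describe service nginx
```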
We could see that it exposed a LoadBalancer ingress on one of the IP addresses we specified for MetalLB. That means my three-pod nginx deployment is now exposed behind a load balancer at 192.168.30.80, and if we go to that IP address we can see the hello world page from nginx. This is awesome: it proves end to end that MetalLB is working.
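The same check from a terminal instead of a browser:

```bash
# Should return the page served by one of the nginx pods behind the MetalLB IP.
curl http://192.168.30.80
```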
But we never really tested the HA side of kube-vip. We know we can issue Kubernetes commands right now with kubectl, but we never took any of those nodes down, so let's do that too. I started pinging that VIP and, while that was running, I remoted into my first master node: the server node that runs the control plane and is also one of the nodes running kube-vip, the one currently serving this VIP.
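The failover test is essentially two terminals; the VIP, node address, and username below are placeholders:

```bash
# Terminal 1: keep pinging the virtual IP the whole time.
ping <k3s-vip-address>

# Terminal 2: shut down the master currently holding the VIP. kube-vip on one
# of the remaining masters should take the address over, so the ping above
# keeps getting replies.
ssh user@192.168.30.38 'sudo shutdown -h now'
```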
So I decided to shut it down, and as you can see on the right I'm still getting replies, while on the left you can see I'm no longer getting a response from that machine. That means we have an HA VIP. Now, I can't shut down a second node: an HA cluster of only three nodes can only lose one machine. If I shut down another machine I won't have access to Kubernetes, but I will still have access to all of the workloads that are running; I just won't be able to change Kubernetes state or reach it through kubectl. So this is awesome. I started that other node back up, it is responding, and obviously kube-vip is still responding. So, what do you do after building the perfect k3s cluster?
We burn it down, of course. There is also a playbook to completely reset k3s to its initial state. Running that playbook against the same hosts will wipe everything: it cleans all of the nodes, removes all of the containers, and puts them back in the state they were in before we ran the site playbook. This was really useful while I was testing my changes; I must have run it at least a thousand times.
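The teardown is the same kind of invocation pointed at the reset playbook; reset.yml is the name used by the upstream k3s-ansible project, so adjust if your copy differs:

```bash
# Wipe k3s from every node in the inventory and return them to a clean state.
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
```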
After it finishes, we're back to a clean state. One note: you may want to reboot the machines afterwards. I've noticed the VIP stays up and keeps responding, so I have a playbook to reboot all of these machines, and it actually waits for them to come back before reporting success. Just like that.
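A minimal reboot playbook along those lines might use Ansible's built-in reboot module, which only marks the task successful once each host is reachable again; the file name and exact contents here are assumptions, not the playbook from the repo:

```bash
# Sketch of a reboot playbook: ansible.builtin.reboot waits for each host
# to come back up before the task is reported as successful.
cat > reboot.yml <<'EOF'
---
- hosts: k3s_cluster
  become: true
  tasks:
    - name: Reboot and wait for the node to return
      ansible.builtin.reboot:
        reboot_timeout: 600
EOF

ansible-playbook reboot.yml -i inventory/my-cluster/hosts.ini
```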
<00:17:23.200>Si <00:17:23.560>no <00:17:23.920>lo <00:17:24.280>desea, <00:17:24.640>no + +00:17:24.789 --> 00:17:24.799 align:start position:0% +hacer eso. Si no lo desea, no + + +00:17:24.799 --> 00:17:26.470 align:start position:0% +hacer eso. Si no lo desea, no +más <00:17:25.071>girar <00:17:25.343>balanceadores <00:17:25.615>de <00:17:25.887>carga <00:17:26.159>adicionales + +00:17:26.470 --> 00:17:26.480 align:start position:0% +más girar balanceadores de carga adicionales + + +00:17:26.480 --> 00:17:28.950 align:start position:0% +más girar balanceadores de carga adicionales +y <00:17:26.788>mantener <00:17:27.096>un <00:17:27.404>d <00:17:27.712>en <00:17:28.020>vivo <00:17:28.328>y <00:17:28.636>hacer + +00:17:28.950 --> 00:17:28.960 align:start position:0% +y mantener un d en vivo y hacer + + +00:17:28.960 --> 00:17:31.669 align:start position:0% +y mantener un d en vivo y hacer +esos <00:17:29.302>aj <00:17:29.644>si <00:17:29.986>no <00:17:30.328>lo <00:17:30.670>desea. <00:17:31.012>No <00:17:31.354>más + +00:17:31.669 --> 00:17:31.679 align:start position:0% +esos aj si no lo desea. No más + + +00:17:31.679 --> 00:17:33.990 align:start position:0% +esos aj si no lo desea. No más +configurar <00:17:32.095>metal <00:17:32.511>lb <00:17:32.927>o <00:17:33.343>instalar <00:17:33.759>con + +00:17:33.990 --> 00:17:34.000 align:start position:0% +configurar metal lb o instalar con + + +00:17:34.000 --> 00:17:36.470 align:start position:0% +configurar metal lb o instalar con +helm <00:17:34.360>si <00:17:34.720>no <00:17:35.080>lo <00:17:35.440>desea. <00:17:35.800>Solo <00:17:36.160>un + +00:17:36.470 --> 00:17:36.480 align:start position:0% +helm si no lo desea. Solo un + + +00:17:36.480 --> 00:17:38.470 align:start position:0% +helm si no lo desea. Solo un +libro <00:17:36.742>de <00:17:37.004>jugadas <00:17:37.266>simple <00:17:37.528>que <00:17:37.790>hace <00:17:38.052>girar <00:17:38.314>todo + +00:17:38.470 --> 00:17:38.480 align:start position:0% +libro de jugadas simple que hace girar todo + + +00:17:38.480 --> 00:17:40.950 align:start position:0% +libro de jugadas simple que hace girar todo +eso <00:17:38.853>de <00:17:39.226>una <00:17:39.599>vez <00:17:39.972>y <00:17:40.345>luego <00:17:40.718>puede + +00:17:40.950 --> 00:17:40.960 align:start position:0% +eso de una vez y luego puede + + +00:17:40.960 --> 00:17:43.029 align:start position:0% +eso de una vez y luego puede +quemarlo <00:17:41.280>si <00:17:41.600>también <00:17:41.920>lo <00:17:42.240>desea. <00:17:42.560>Nuevamente, <00:17:42.880>un + +00:17:43.029 --> 00:17:43.039 align:start position:0% +quemarlo si también lo desea. Nuevamente, un + + +00:17:43.039 --> 00:17:45.270 align:start position:0% +quemarlo si también lo desea. Nuevamente, un +gran <00:17:43.385>agradecimiento <00:17:43.731>a <00:17:44.077>la <00:17:44.423>comunidad <00:17:44.769>k3s <00:17:45.115>que + +00:17:45.270 --> 00:17:45.280 align:start position:0% +gran agradecimiento a la comunidad k3s que + + +00:17:45.280 --> 00:17:47.270 align:start position:0% +gran agradecimiento a la comunidad k3s que +hizo <00:17:45.531>este <00:17:45.782>libro <00:17:46.033>de <00:17:46.284>jugadas <00:17:46.535>original <00:17:46.786>junto <00:17:47.037>con + +00:17:47.270 --> 00:17:47.280 align:start position:0% +hizo este libro de jugadas original junto con + + +00:17:47.280 --> 00:17:49.830 align:start position:0% +hizo este libro de jugadas original junto con +jeff <00:17:47.712>gearling. 
Many thanks, and thanks as well to GitHub user 212850a. Thank you very much. I'll have links to all of the code down in the description below. So, what do you think about spinning up a truly HA version of k3s using Ansible? Is there anything I should contribute to the script to make it easier for you?
Let me know in the comments section below, and remember: if you found anything useful in this video, don't forget to like and subscribe. Thanks for watching. Fix the lights... if you weren't here last week, we had a little episode with the lights: I couldn't figure out what was going on with my bottom lights.
My bottom lights ended up having a small problem, and it took me a long time to realize it ended up being a firewall rule. So if it's not DNS, it's a firewall rule. Okay, the light changes as soon as I mention them... if it's not DNS, it's a firewall rule. Now you're really testing mine. Okay, it's going to happen, it's going to happen.