# Setup on Ubuntu

```
apt install ansible ansible-mitogen
```

# Required collections

```
ansible-galaxy install -r roles/requirements.yml
```

# Privileged data

Privileged data is stored in Bitwarden. To use roles that fetch privileged data,
the following utilities must be available:

* [bw](https://bitwarden.com/help/cli/)

Once installed, log in and unlock the vault:

```
bw login # or, `bw unlock`
export BW_SESSION=xxxx
bw sync -f
```

# Running playbooks

```
ansible-playbook -i hosts [-l SUBSET] site.yml
```

## Skip slow tasks

`ansible-playbook --skip-tags slow`

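For example, to run everything except the slow tasks against a single host (the hostname below is only an illustration):

```
ansible-playbook -i hosts -l ci-host-example.internal.efficios.com --skip-tags slow site.yml
```
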
# Bootstrapping hosts

## CI host

### Debian

1. Boot the host with PXE
2. Select the option `Debian Bookworm amd64 (CI-host)` or equivalent
3. Post-preseed verifications:
  * Check that `start-stop-daemon` is available in `$PATH`. If not: `touch /sbin/start-stop-daemon; chmod +x /sbin/start-stop-daemon; apt-get install --reinstall dpkg`
  * Verify that the ZFS pool `tank` exists on the target host. If not, create it, e.g. `zpool create -f tank mirror dev1 dev2`
4. Add the host to the ansible inventory in the `hosts` group and in the appropriate cluster group (see the sketch after this list)
5. For LXD hosts, add the host to the `lxd` group
6. Follow the appropriate LXD or Incus cluster steps

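As a sketch of step 4, assuming an INI-style inventory and a hypothetical host name (the exact cluster group name depends on the inventory layout):

```
# hosts (hypothetical excerpt)
[hosts]
ci-host-example.internal.efficios.com

[lxd]
ci-host-example.internal.efficios.com
```
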
### Windows

1. Configure either an SSH or a WinRM connection: see https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html
2. For arm64 hosts:
  * Install the necessary optional features (eg. OpenSSH, Hyper-V) since Windows RSAT isn't available on Arm64 yet

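When using SSH, the per-host connection variables might look like the following sketch (the host name and file path are hypothetical; adjust to the actual inventory):

```
# host_vars/ci-win-example.internal.efficios.com.yml (hypothetical)
ansible_connection: ssh
ansible_shell_type: powershell
```
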
## CI 'rootnode'

1. Add the new ansible node to the `node_standalone` group in the inventory
2. Add an entry to the `vms` variable in the host vars for the libvirt host (see the sketch after this list)
  * See the defaults and details in `roles/libvirt/vars/main.yml` and `roles/libvirt/tasks/main.yml`
  * Make sure to set the `cdrom` key to the path of the installer ISO
3. Run the playbook, eg. `ansible-playbook -i hosts -l cloud07.internal.efficios.com site.yml`
  * The VM should be created and started
4. Once the VM is installed, take a snapshot so that Jenkins may revert to the original state
  * `ansible-playbook playbooks/snapshot-rootnode.yml -e '{"revert_before": false}' -l new-rootnode`

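A minimal sketch of such a `vms` entry, with a hypothetical VM name and ISO path; the authoritative key names and defaults are in `roles/libvirt/vars/main.yml`:

```
vms:
  - name: new-rootnode                            # hypothetical VM name
    cdrom: /var/lib/libvirt/images/installer.iso  # path to the installer ISO
```
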
### Ubuntu auto-installer

1. Note your IP address
2. Switch to the directory with the user-data files: `cd roles/libvirt/files`
3. Write out the instance-specific metadata, eg.

```
cat > meta-data <<EOF
instance-id: iid-XXX
hostname: XXX.internal.efficios.com
EOF
```
  * The instance-id is used to determine if re-installation is necessary.
4. Start a python web server: `python3 -m http.server 3003`
5. Connect to the VM using a remote viewer on the address given by `virsh --connect qemu+ssh://root@host/system domdisplay`
6. Edit the grub boot options for the installer and append the following as arguments for the kernel: `autoinstall 'ds=nocloud-net;s=http://IPADDRESS:3003/'`, then boot the installer
  * Note that the trailing `/` and the quoting are important
  * This will load the `user-data`, `meta-data`, and `vendor-data` files from the directory served by the python web server
7. After the installation is complete, the system will reboot and run cloud-init for the final portion of the initial setup. Once completed, ansible can be run against it using the ubuntu user and becoming root, eg. `ansible-playbook -i hosts -u ubuntu -b ...` (see the example after this list)

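For example (the hostname below is only an illustration):

```
ansible-playbook -i hosts -u ubuntu -b -l new-rootnode.internal.efficios.com site.yml
```
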
# LXD Cluster

## Start a new cluster

1. For the initial member of the cluster, set the `lxd_cluster` variable in the host variables to something similar to:

```
lxd_cluster:
  server_name: cluster-member-name
  enabled: true
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```

2. Run the `site.yml` playbook on the node
3. Verify that the storage pool is configured:

```
$ lxc storage list
| name    | driver | state   |
| default | zfs    | created |
```

  * If not present, create it on the necessary targets:

```
$ lxc storage create default zfs source=tank/lxd --target=cluster-member-name
# Repeat for any other members
# Then, on the member itself
$ lxc storage create default zfs
# The storage listed should not be in the 'pending' state
```

4. Create a metrics certificate pair for the cluster, or use an existing one

```
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 -keyout metrics.key -nodes -out metrics.crt -days 3650 -subj "/CN=metrics.local"
lxc config trust add metrics.crt --type=metrics
```

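To confirm that the certificate was added to the trust store:

```
lxc config trust list
```
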
## Adding a new host

1. On the existing host or cluster, generate a token for the new member: `lxc cluster add member-host-name`
2. In the member's host_vars file set the following keys:
  * `lxd_cluster_ip`: The IP address on which the server will listen
  * `lxd_cluster`: In a fashion similar to the following entry
```
lxd_cluster:
  enabled: true
  # Same as the name from the token created above
  server_name: 'member-host-name'
  # This should match `lxd_cluster_ip`
  server_address: 172.18.0.192
  cluster_token: 'xxx'
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```
  * The `cluster_token` does not need to be kept in git after the playbook's first run
3. Assuming the member is in the `hosts` group of the inventory, run the `site.yml` playbook (as shown below).

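For example, limiting the run to the new member (hostname illustrative):

```
ansible-playbook -i hosts -l member-host-name site.yml
```
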
## Managing instances

Local requirements:

  * python3, python3-dnspython, python3-jenkins, samba-tool, kinit

To automatically provision instances, perform certain operations, and update DNS entries:

1. Update `vars/ci-instances.yml`
2. Open a Kerberos ticket with `kinit`
3. Run the playbook, eg. `ansible-playbook playbooks/ci-instances.yml`

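A minimal sketch of steps 2 and 3 (the principal is only an illustration; use the site's actual Kerberos realm):

```
kinit user@EXAMPLE.REALM
ansible-playbook playbooks/ci-instances.yml
```
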
# Incus cluster

## Migration from LXD

1. Run the `site.yml` playbook on the hosts to install `incus` and `incus-tools`
2. On one cluster member, start the `lxd-to-incus` script, and follow the prompts
3. On each other cluster member, start `lxd-to-incus --cluster-member`
4. When prompted on each cluster member, uninstall `lxd`.
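
As a sketch of steps 2 and 3:

```
# On the first cluster member
lxd-to-incus
# On every other cluster member
lxd-to-incus --cluster-member
```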