# Setup on Ubuntu

```
apt install ansible ansible-mitogen
```

# Required collections

```
ansible-galaxy install -r roles/requirements.yml
```

# Privileged data

Privileged data is stored in Bitwarden. To use roles that fetch privileged data,
the following utilities must be available:

* [bw](https://bitwarden.com/help/cli/)

Once installed, log in and unlock the vault:

```
bw login # or, `bw unlock`
export BW_SESSION=xxxx
bw sync -f
```

# Running playbooks

```
ansible-playbook -i hosts [-l SUBSET] site.yml
```
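
For example, to limit a run to a single host or group (the host name here is illustrative):

```
ansible-playbook -i hosts -l ci-rootnode-example.internal.efficios.com site.yml
```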

# Bootstrapping hosts

## Windows

1. Configure either an SSH or WinRM connection: see https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html. A sketch of the connection variables follows this list.
2. For arm64 hosts:
   * Install the necessary optional features (e.g. OpenSSH, Hyper-V), since Windows RSAT isn't available on Arm64 yet
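
As a minimal sketch (not this repository's actual configuration), a WinRM connection can be described with standard Ansible connection variables in the host's host_vars file; the user, port, and transport below are assumptions to adapt:

```
# host_vars/windows-host.yml -- hypothetical file name and values.
# For an SSH connection, ansible_connection: ssh with
# ansible_shell_type: powershell would be used instead.
ansible_connection: winrm
ansible_user: Administrator
ansible_port: 5986
ansible_winrm_transport: ntlm
# Lab setups with self-signed certificates often skip validation;
# adjust to the environment's certificate setup.
ansible_winrm_server_cert_validation: ignore
```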

## CI 'rootnode'

1. Add the new ansible node to the `node_standalone` group in the inventory
2. Add an entry to the `vms` variable in the host vars for the libvirt host (a sketch follows this list)
   * See the defaults and details in `roles/libvirt/vars/main.yml` and `roles/libvirt/tasks/main.yml`
   * Make sure to set the `cdrom` key to the path of the installer ISO
3. Run the playbook, e.g. `ansible-playbook -i hosts -l cloud07.internal.efficios.com site.yml`
   * The VM should be created and started
4. Once the VM is installed, take a snapshot so that Jenkins may revert to the original state
   * `ansible-playbook playbooks/snapshot-rootnode.yml -e '{"revert_before": false}' -l new-rootnode`
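
As a rough sketch, with a hypothetical VM name and ISO path (the authoritative key list lives in `roles/libvirt/vars/main.yml`), the `vms` entry might look like:

```
vms:
  # Hypothetical entry; unspecified keys take the role's defaults
  - name: new-rootnode
    cdrom: /var/lib/libvirt/images/ubuntu-22.04-live-server-amd64.iso
```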

### Ubuntu auto-installer

1. Note your IP address
2. Switch to the directory with the user-data files: `cd roles/libvirt/files`
3. Write out the instance-specific metadata, e.g.

```
cat > meta-data <<EOF
instance-id: iid-XXX
hostname: XXX.internal.efficios.com
EOF
```
   * The instance-id is used to determine if re-installation is necessary.
4. Start a python web server: `python3 -m http.server 3003`
5. Connect to the VM using a remote viewer on the address given by `virsh --connect qemu+ssh://root@host/system domdisplay`
6. Edit the grub boot options for the installer, append the following as arguments for the kernel: `autoinstall 'ds=nocloud-net;s=http://IPADDRESS:3003/'`, and boot the installer
   * Note that the trailing `/` and quoting are important
   * This will load the `user-data`, `meta-data`, and `vendor-data` files in the directory served by the python web server (a trimmed `user-data` example follows this list)
7. After the installation is complete, the system will reboot and run cloud-init for the final portion of the initial setup. Once completed, ansible can be run against it using the `ubuntu` user and becoming root, e.g. `ansible-playbook -i hosts -u ubuntu -b ...`
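
For reference, a trimmed, hypothetical `user-data` in the Ubuntu autoinstall format looks like the following; the actual files in `roles/libvirt/files` are authoritative:

```
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: XXX.internal.efficios.com
    username: ubuntu
    # Crypted password hash, e.g. generated with `mkpasswd -m sha-512`
    password: "$6$..."
```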

# LXD Cluster

## Start a new cluster

1. For the initial member of the cluster, set the `lxd_cluster` variable in the host variables to something similar to:

```
lxd_cluster:
  server_name: cluster-member-name
  enabled: true
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```

2. Run the `site.yml` playbook on the node
3. Verify that the storage pool is configured:

```
$ lxc storage list
| name    | driver | state   |
| default | zfs    | created |
```

   * If not present, create it on the necessary targets:

```
$ lxc storage create default zfs source=tank/lxd --target=cluster-member-name
# Repeat for any other members
# Then, on the member itself
$ lxc storage create default zfs
# The storage listed should no longer be in the 'pending' state
```

4. Create a metrics certificate pair for the cluster, or use an existing one

```
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 -keyout metrics.key -nodes -out metrics.crt -days 3650 -subj "/CN=metrics.local"
lxc config trust add metrics.crt --type=metrics
```

## Adding a new host

1. Generate a token for the new member: `lxc cluster add member-host-name`
2. In the member's host_vars file, set the following keys:
   * `lxd_cluster_ip`: The IP address on which the server will listen
   * `lxd_cluster`: In a fashion similar to the following entry

```
lxd_cluster:
  enabled: true
  server_address: 172.18.0.192
  cluster_token: 'xxx'
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```

   * The `cluster_token` does not need to be kept in git after the playbook's first run
3. Assuming the member is in the appropriate group of the inventory, run the `site.yml` playbook.

## Managing instances

Local requirements:

 * python3, python3-dnspython, samba-tool, kinit

To automatically provision instances, perform certain operations, and update DNS entries:

1. Update `vars/ci-instances.yml`
2. Open a kerberos ticket with `kinit`
3. Run the playbook, e.g. `ansible-playbook playbooks/ci-instances.yml`, as sketched below
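
A minimal sketch of the sequence, with a hypothetical Kerberos principal:

```
# Obtain a ticket for an account allowed to update the DNS entries;
# the principal is illustrative.
kinit someuser@INTERNAL.EFFICIOS.COM

# Provision the instances described in vars/ci-instances.yml
ansible-playbook playbooks/ci-instances.yml
```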