# Setup on Ubuntu

```
apt install ansible ansible-mitogen
```

# Required collections

```
ansible-galaxy install -r roles/requirements.yml
```

# Privileged data

Privileged data is stored in Bitwarden. To use roles that fetch privileged data,
the following utilities must be available:

* [bw](https://bitwarden.com/help/cli/)

Once installed, log in and unlock the vault:

```
bw login # or, `bw unlock`
export BW_SESSION=xxxx
bw sync -f
```

# Running playbooks

```
ansible-playbook -i hosts [-l SUBSET] site.yaml
```

# Bootstrapping hosts

## Windows

1. Configure either an SSH or a WinRM connection: see https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html
2. For arm64 hosts:
   * Install the necessary optional features (eg. OpenSSH, Hyper-V) since Windows RSAT isn't available on arm64 yet

## CI 'rootnode'

1. Add an entry to the `vms` variable in the host vars for a libvirt host
   * See the defaults and details in `roles/libvirt/vars/main.yml` and `roles/libvirt/tasks/main.yml`
   * Make sure to set the `cdrom` key to the path of the installer ISO
2. Run the playbook, eg. `ansible-playbook -i hosts -l cloud07.internal.efficios.com site.yml`
   * The VM should be created and started
3. Once the VM is installed, take a snapshot so that Jenkins may revert to the original state
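
A minimal `vms` entry might look like the following sketch. The `name` key and the ISO path are hypothetical placeholders for illustration; the real key names and defaults live in `roles/libvirt/vars/main.yml`:

```
vms:
  # 'name' is a hypothetical key for illustration; check roles/libvirt/vars/main.yml
  - name: ci-rootnode-example
    # 'cdrom' must point at the installer ISO (placeholder path shown)
    cdrom: /path/to/installer.iso
```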

### Ubuntu auto-installer

1. Note your IP address
2. Switch to the directory with the user-data files: `cd roles/libvirt/files`
3. Write out the instance-specific metadata, eg.

```
cat > meta-data <<EOF
instance-id: iid-XXX
hostname: XXX.internal.efficios.com
EOF
```
   * The instance-id is used to determine if re-installation is necessary.
4. Start a python web server: `python3 -m http.server 3003`
5. Connect to the VM using a remote viewer on the address given by `virsh --connect qemu+ssh://root@host/system domdisplay`
6. Edit the grub boot options for the installer, append the following as arguments for the kernel: `autoinstall 'ds=nocloud-net;s=http://IPADDRESS:3003/'`, and boot the installer
   * Note that the trailing `/` and the quoting are important
   * This will load the `user-data`, `meta-data`, and `vendor-data` files from the directory served by the python web server
7. After the installation is complete, the system will reboot and run cloud-init for the final portion of the initial setup. Once completed, Ansible can be run against it using the `ubuntu` user and becoming root, eg. `ansible-playbook -i hosts -u ubuntu -b ...`

# LXD Cluster

## Start a new cluster

1. For the initial member of the cluster, set the `lxd_cluster` variable in the host variables to something similar to:

```
lxd_cluster:
  server_name: cluster-member-name
  enabled: true
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```

2. Run the `site.yml` playbook on the node
3. Verify that the storage pool is configured:

```
$ lxc storage list
| name    | driver | state   |
| default | zfs    | created |
```

   * If not present, create it on the necessary targets:

```
$ lxc storage create default zfs source=tank/lxd --target=cluster-member-name
# Repeat for any other members
# Then, on the member itself
$ lxc storage create default zfs
# The storage pool listed should not be in the 'pending' state
```

4. Create a metrics certificate pair for the cluster, or use an existing one

```
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 -keyout metrics.key -nodes -out metrics.crt -days 3650 -subj "/CN=metrics.local"
lxc config trust add metrics.crt --type=metrics
```
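
The generated certificate can be sanity-checked before it is added to the trust store. A sketch using a temporary directory (the paths are illustrative; the `openssl` invocation is the one from above):

```
# Generate the pair into a temp dir, then inspect the certificate's subject.
workdir=$(mktemp -d)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 \
  -keyout "$workdir/metrics.key" -nodes -out "$workdir/metrics.crt" \
  -days 3650 -subj "/CN=metrics.local"
# The subject should show the CN given above.
openssl x509 -in "$workdir/metrics.crt" -noout -subject
```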

## Adding a new host

1. Generate a token for the new member: `lxc cluster add member-host-name`
2. In the member's host_vars file, set the following keys:
   * `lxd_cluster_ip`: The IP address on which the server will listen
   * `lxd_cluster`: In a fashion similar to the following entry

```
lxd_cluster:
  enabled: true
  server_address: 172.18.0.192
  cluster_token: 'xxx'
  member_config:
    - entity: storage-pool
      name: default
      key: source
      value: tank/lxd
```

   * The `cluster_token` does not need to be kept in git after the playbook's first run
3. Assuming the member is in the host group of the inventory, run the `site.yml` playbook.

## Managing instances

Local requirements:

* python3, python3-dnspython, samba-tool, kinit
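
A quick way to check the requirements above before running the playbook (a sketch; `samba-tool` and `kinit` come from the Samba and Kerberos client packages):

```
# Report any of the listed tools that are missing from PATH.
for tool in python3 samba-tool kinit; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
# dnspython is a python module rather than a binary, so probe it via an import.
python3 -c 'import dns.resolver' 2>/dev/null || echo "missing: python3-dnspython"
```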

To automatically provision instances, perform certain operations, and update DNS entries:

1. Update `vars/ci-instances.yml`
2. Open a Kerberos ticket with `kinit`
3. Run the playbook, eg. `ansible-playbook playbooks/ci-instances.yml`