
Commit 3d91816

Merge pull request #500 from stackhpc/feat/compute-script-sb

EXPERIMENTAL: add machinery to allow compute nodes to rejoin cluster on reimage

2 parents 8059d24 + a0ba5f1

21 files changed (+705 -12 lines)

ansible/.gitignore

Lines changed: 2 additions & 0 deletions

@@ -58,6 +58,8 @@ roles/*
 !roles/squid/**
 !roles/tuned/
 !roles/tuned/**
+!roles/compute_init/
+!roles/compute_init/**
 !roles/k3s/
 !roles/k3s/**
 !roles/k9s/

ansible/extras.yml

Lines changed: 12 additions & 0 deletions

@@ -38,6 +38,18 @@
     - import_role:
         name: persist_hostkeys

+
+- name: Setup NFS export for compute node configuration
+  hosts: compute_init:!builder
+  # NB: has to be after eeesi and os-manila-mount
+  tags: compute_init
+  become: yes
+  name: Export hostvars
+  tasks:
+    - include_role:
+        name: compute_init
+        tasks_from: export.yml
+
 - name: Install k9s
   become: yes
   hosts: k9s

ansible/fatimage.yml

Lines changed: 10 additions & 0 deletions

@@ -73,6 +73,16 @@

 - import_playbook: extras.yml

+# TODO: is this the right place?
+- name: Install compute_init script
+  hosts: compute_init
+  tags: compute_init # tagged to allow running on cluster instances for dev
+  become: yes
+  tasks:
+    - include_role:
+        name: compute_init
+        tasks_from: install.yml
+
 - hosts: builder
   become: yes
   gather_facts: yes

ansible/roles/cluster_infra/templates/resources.tf.j2

Lines changed: 2 additions & 2 deletions

@@ -399,7 +399,7 @@ resource "openstack_compute_instance_v2" "login" {
     ansible_init_coll_{{ loop.index0 }}_source = "{{ collection.source }}"
 {% endif %}
 {% endfor %}
-    k3s_server = openstack_compute_instance_v2.control.network[0].fixed_ip_v4
+    control_address = openstack_compute_instance_v2.control.network[0].fixed_ip_v4
     k3s_token = "{{ k3s_token }}"
   }
 }
@@ -565,7 +565,7 @@ resource "openstack_compute_instance_v2" "{{ partition.name }}" {
     ansible_init_coll_{{ loop.index0 }}_source = "{{ collection.source }}"
 {% endif %}
 {% endfor %}
-    k3s_server = openstack_compute_instance_v2.control.network[0].fixed_ip_v4
+    control_address = openstack_compute_instance_v2.control.network[0].fixed_ip_v4
     k3s_token = "{{ k3s_token }}"
   }
 }

ansible/roles/compute_init/README.md

Lines changed: 130 additions & 0 deletions

# EXPERIMENTAL: compute-init

Experimental / in-progress functionality to allow compute nodes to rejoin the
cluster after a reboot.

To enable this, add compute nodes (or a subset of them) to the `compute_init`
group.

This works as follows:
1. During image build, an ansible-init playbook and supporting files
   (e.g. templates, filters, etc.) are installed.
2. Cluster instances are created as usual; the above compute-init playbook does
   not run.
3. The `site.yml` playbook is run as usual to configure all the instances into
   a cluster. In addition, with `compute-init` enabled, a `/exports/cluster` NFS
   share is created on the control node containing:
   - an /etc/hosts file for the cluster
   - hostvars for each compute node
4. On reboot of a compute node, ansible-init runs the compute-init playbook
   which:
   a. Checks whether the `enable_compute` metadata flag is set, and exits if
      not.
   b. Tries to mount the above `/exports/cluster` NFS share from the control
      node, and exits if it cannot.
   c. Configures itself using the exported hostvars, depending on the
      `enable_*` flags set in metadata.
   d. Issues an `scontrol` command to resume the node (because Slurm will
      consider it to have been "unexpectedly rebooted").

The check in 4b. above is what prevents the compute-init script from trying
to configure the node before the services on the control node are available
(which requires running the site.yml playbook).
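
For orientation, the boot-time flow above can be sketched as a single play of
the kind ansible-init would run. This is a condensed, hypothetical illustration
only, not the actual role contents: the module choices, the `/mnt/cluster`
mount point and reading `control_address` from instance metadata are all
assumptions here.

```
# Hypothetical sketch of steps 4a-4d above; not the real compute-init playbook.
# Assumes the ansible.posix collection and the usual OpenStack metadata service.
- hosts: localhost
  become: yes
  tasks:
    - name: Fetch instance metadata
      uri:
        url: http://169.254.169.254/openstack/latest/meta_data.json
        return_content: yes
      register: _meta_data

    - name: Exit if the enable_compute metadata flag is not set (step 4a)
      meta: end_play
      when: not (_meta_data.json.meta.enable_compute | default(false) | bool)

    - name: Try to mount the cluster share exported by the control node (step 4b)
      ansible.posix.mount:
        src: "{{ _meta_data.json.meta.control_address }}:/exports/cluster"
        path: /mnt/cluster
        fstype: nfs
        state: mounted
      register: _mount
      ignore_errors: yes

    - name: Exit if the control node is not serving the export yet
      meta: end_play
      when: _mount is failed

    # Step 4c would go here: load this node's exported hostvars and apply
    # whatever functionality the enable_* metadata flags select.

    - name: Tell Slurm the node is back in service (step 4d)
      command: scontrol update nodename={{ ansible_hostname }} state=resume
```
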
The following roles/groups are currently fully functional:
- `resolv_conf`: all functionality
- `etc_hosts`: all functionality
- `nfs`: client functionality only
- `manila`: all functionality
- `basic_users`: all functionality, assumes home directory already exists on
  shared storage
- `eessi`: all functionality, assumes `cvmfs_config` is the same on the control
  node and all compute nodes
- `openhpc`: all functionality

# Development/debugging

To develop/debug this without actually having to build an image:

1. Deploy a cluster using tofu and ansible/site.yml as normal. This will
   additionally configure the control node to export compute hostvars over NFS.
   Check the cluster is up.

2. Reimage the compute nodes:

       ansible-playbook --limit compute ansible/adhoc/rebuild.yml

3. Add metadata to a compute node, e.g. via Horizon (or scripted as in the
   sketch after these steps), to turn on compute-init playbook functionality.

4. Fake an image build to deploy the compute-init playbook:

       ansible-playbook ansible/fatimage.yml --tags compute_init

   NB: This will also re-export the compute hostvars, as the nodes are not
   in the builder group, which conveniently means any changes made to that
   play also get picked up.

5. Fake a reimage of compute to run ansible-init and the compute-init playbook.

   On the compute node where the metadata was added:

       [root@rl9-compute-0 rocky]# rm -f /var/lib/ansible-init.done && systemctl restart ansible-init
       [root@rl9-compute-0 rocky]# systemctl status ansible-init

   Use `systemctl status ansible-init` to view stdout/stderr from Ansible.

Steps 4/5 can be repeated with changes to the compute script. If required,
reimage the compute node(s) first as in step 2 and/or add additional metadata
as in step 3.
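
The metadata in step 3 can also be set from the command line rather than via
Horizon. The following is a hypothetical helper using the openstack.cloud
collection; it assumes clouds.yaml / OS_CLOUD authentication is configured,
and the instance name and any flag names other than `enable_compute` are
placeholders following the `enable_*` pattern described above.

```
# Hypothetical helper for step 3: set compute-init metadata flags on an instance.
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Set enable_* metadata flags on a compute node
      openstack.cloud.server_metadata:
        server: rl9-compute-0            # placeholder instance name
        metadata:
          enable_compute: "true"
          enable_resolv_conf: "true"     # assumed flag name
```
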
# Design notes
- Duplicating code from roles into the `compute-init` script is unfortunate, but
  it does allow developing this functionality without wider changes to the
  appliance.

- In general, we don't want to rely on the NFS export, so the compute-init
  script should e.g. copy files from this mount ASAP. TODO.

- There are a couple of approaches to supporting existing roles using `compute-init`:

  1. The control node copies files resulting from the role into the cluster export,
     and compute-init copies them to local disk. This only works if the files are
     not host-specific. Examples: etc_hosts, eessi config?

  2. Re-implement the role. This works if the role vars are not too complicated
     (else they all need to be duplicated in compute-init). It could also only
     support certain subsets of role functionality or variables.
     Examples: resolv_conf, stackhpc.openhpc
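
  As a concrete (hypothetical) illustration of approach 1: the control node has
  already written a cluster-wide hosts file into the export, so compute-init
  only needs a copy task such as the one below (the `/mnt/cluster/hosts` path
  is an assumption).

  ```
  - name: Copy the cluster /etc/hosts from the mounted export (approach 1)
    copy:
      src: /mnt/cluster/hosts     # assumed filename within the export
      dest: /etc/hosts
      remote_src: true            # the source is already on the node via the NFS mount
      owner: root
      group: root
      mode: "0644"
  ```
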
- Some variables are defined using hostvars from other nodes, which aren't
  available via the current approach:

  ```
  [root@rl9-compute-0 rocky]# grep hostvars /mnt/cluster/hostvars/rl9-compute-0/hostvars.yml
      "grafana_address": "{{ hostvars[groups['grafana'].0].api_address }}",
      "grafana_api_address": "{{ hostvars[groups['grafana'].0].internal_address }}",
      "mysql_host": "{{ hostvars[groups['mysql'] | first].api_address }}",
      "nfs_server_default": "{{ hostvars[groups['control'] | first ].internal_address }}",
      "openhpc_slurm_control_host": "{{ hostvars[groups['control'].0].api_address }}",
      "openondemand_address": "{{ hostvars[groups['openondemand'].0].api_address if groups['openondemand'] | count > 0 else '' }}",
      "openondemand_node_proxy_directives": "{{ _opeonondemand_unset_auth if (openondemand_auth == 'basic_pam' and 'openondemand_host_regex' and groups['grafana'] | length > 0 and hostvars[ groups['grafana'] | first]._grafana_auth_is_anonymous) else '' }}",
      "openondemand_servername": "{{ hostvars[ groups['openondemand'] | first].ansible_host }}",
      "prometheus_address": "{{ hostvars[groups['prometheus'].0].api_address }}",
      "{{ hostvars[groups['freeipa_server'].0].ansible_host }}"
  ```

  More generally, there is nothing to stop any group var depending on a
  "{{ hostvars[] }}" interpolation ...

  Only `nfs_server_default` and `openhpc_slurm_control_host` are of concern
  for compute nodes - both of these indirect via `api_address` to
  `inventory_hostname`. This has been worked around by replacing this with
  "{{ groups['control'] | first }}", which does result in the control node
  inventory hostname when templating.

  Note that although `groups` is defined in the templated hostvars, when
  the hostvars are loaded using `include_vars:` it is ignored, as it is a
  "magic variable" determined by Ansible itself and cannot be set.
