# EXPERIMENTAL: compute-init

Experimental / in-progress functionality to allow compute nodes to rejoin the
cluster after a reboot.

To enable this, add compute nodes (or a subset of them) to the `compute_init`
group.
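
For example, a minimal sketch of adding all compute nodes via an INI-style
inventory groups file (the file location and the choice of child group are
assumptions for illustration; adjust to your environment):

```
[compute_init:children]
compute
```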

This works as follows:
1. During image build, an ansible-init playbook and supporting files
   (e.g. templates, filters, etc.) are installed.
2. Cluster instances are created as usual; the above compute-init playbook does
   not run.
3. The `site.yml` playbook is run as usual to configure all the instances into
   a cluster. In addition, with `compute-init` enabled, a `/exports/cluster` NFS
   share is created on the control node containing:
    - an `/etc/hosts` file for the cluster
    - hostvars for each compute node
4. On reboot of a compute node, ansible-init runs the compute-init playbook,
   which:
    a. Checks whether the `enable_compute` metadata flag is set, and exits if
       not.
    b. Tries to mount the above `/exports/cluster` NFS share from the control
       node, and exits if it cannot.
    c. Configures itself using the exported hostvars, depending on the
       `enable_*` flags set in metadata.
    d. Issues an `scontrol` command to resume the node (because Slurm will
       consider it as "unexpectedly rebooted").

The check in 4b. above is what prevents the compute-init script from trying
to configure the node before the services on the control node are available
(which requires running the `site.yml` playbook).
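
For orientation, steps 4b and 4d boil down to roughly the following commands.
This is only a sketch: the `/mnt/cluster` mount point matches the paths shown
later in this document, the control node address is a placeholder, and the
exact invocations used by the playbook may differ:

    mount -t nfs <control-node>:/exports/cluster /mnt/cluster
    scontrol update nodename=$(hostname -s) state=resume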

The following roles/groups are currently functional:
- `resolv_conf`: all functionality
- `etc_hosts`: all functionality
- `nfs`: client functionality only
- `manila`: all functionality
- `basic_users`: all functionality; assumes home directories already exist on
  shared storage
- `eessi`: all functionality; assumes `cvmfs_config` is the same on the control
  node and all compute nodes
- `openhpc`: all functionality

# Development/debugging

To develop/debug this without actually having to build an image:

1. Deploy a cluster using tofu and `ansible/site.yml` as normal. This will
   additionally configure the control node to export compute hostvars over NFS.
   Check the cluster is up.

2. Reimage the compute nodes:

    ansible-playbook --limit compute ansible/adhoc/rebuild.yml

3. Add metadata to a compute node, e.g. via Horizon, to turn on compute-init
   playbook functionality; an example using the OpenStack CLI is sketched below.
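
   This is a sketch only: the server name matches the node used in the
   examples below, and the metadata key follows the `enable_compute` flag
   described above (add further `enable_*` properties for other roles as
   required):

    openstack server set --property enable_compute=true rl9-compute-0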

4. Fake an image build to deploy the compute-init playbook:

    ansible-playbook ansible/fatimage.yml --tags compute_init

   NB: This will also re-export the compute hostvars, as the nodes are not
   in the builder group, which conveniently means any changes made to that
   play also get picked up.

5. Fake a reimage of compute to run ansible-init and the compute-init playbook:

   On the compute node where metadata was added:

    [root@rl9-compute-0 rocky]# rm -f /var/lib/ansible-init.done && systemctl restart ansible-init
    [root@rl9-compute-0 rocky]# systemctl status ansible-init

   Use `systemctl status ansible-init` to view stdout/stderr from Ansible.

Steps 4/5 can be repeated with changes to the compute script. If required,
reimage the compute node(s) first as in step 2 and/or add additional metadata
as in step 3.

# Design notes

- Duplicating code from roles into the `compute-init` script is unfortunate, but
  does allow developing this functionality without wider changes to the
  appliance.

- In general, we don't want to rely on the NFS export at runtime, so the
  compute-init script should e.g. copy files off this mount as soon as
  possible (TODO); one possible sketch is shown below.
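
  A minimal sketch of that copy, assuming the `/mnt/cluster` hostvars layout
  shown further down; the destination path and use of `ansible_hostname` are
  illustrative only:

  ```
  - name: Copy exported hostvars off the NFS mount as early as possible
    ansible.builtin.copy:
      src: "/mnt/cluster/hostvars/{{ ansible_hostname }}/hostvars.yml"
      dest: /var/lib/compute-init-hostvars.yml  # illustrative local path
      remote_src: true
  ```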

- There are a couple of approaches to supporting existing roles using `compute-init`:

  1. The control node copies files resulting from the role into the cluster
     export, and compute-init copies them to local disk. This only works if the
     files are not host-specific.
     Examples: etc_hosts, eessi config?

  2. Re-implement the role. This works if the role vars are not too complicated
     (else they all need to be duplicated in compute-init). It could also only
     support certain subsets of role functionality or variables.
     Examples: resolv_conf, stackhpc.openhpc

- Some variables are defined using hostvars from other nodes, which aren't
  available via the current approach:

  ```
  [root@rl9-compute-0 rocky]# grep hostvars /mnt/cluster/hostvars/rl9-compute-0/hostvars.yml
      "grafana_address": "{{ hostvars[groups['grafana'].0].api_address }}",
      "grafana_api_address": "{{ hostvars[groups['grafana'].0].internal_address }}",
      "mysql_host": "{{ hostvars[groups['mysql'] | first].api_address }}",
      "nfs_server_default": "{{ hostvars[groups['control'] | first ].internal_address }}",
      "openhpc_slurm_control_host": "{{ hostvars[groups['control'].0].api_address }}",
      "openondemand_address": "{{ hostvars[groups['openondemand'].0].api_address if groups['openondemand'] | count > 0 else '' }}",
      "openondemand_node_proxy_directives": "{{ _opeonondemand_unset_auth if (openondemand_auth == 'basic_pam' and 'openondemand_host_regex' and groups['grafana'] | length > 0 and hostvars[ groups['grafana'] | first]._grafana_auth_is_anonymous) else '' }}",
      "openondemand_servername": "{{ hostvars[ groups['openondemand'] | first].ansible_host }}",
      "prometheus_address": "{{ hostvars[groups['prometheus'].0].api_address }}",
      "{{ hostvars[groups['freeipa_server'].0].ansible_host }}"
  ```

  More generally, there is nothing to stop any group var depending on a
  "{{ hostvars[] }}" interpolation ...

  Only `nfs_server_default` and `openhpc_slurm_control_host` are of concern
  for compute nodes - both of these indirect via `api_address` to
  `inventory_hostname`. This has been worked around by replacing this with
  "{{ groups['control'] | first }}", which does resolve to the control node's
  inventory hostname when templating.
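
  For example, a sketch of that workaround for one of the affected variables
  (the variable and original definition are taken from the hostvars shown
  above):

  ```
  # before: needs the control node's hostvars, which aren't resolvable on the compute node
  openhpc_slurm_control_host: "{{ hostvars[groups['control'].0].api_address }}"

  # workaround: templates directly to the control node's inventory hostname
  openhpc_slurm_control_host: "{{ groups['control'] | first }}"
  ```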

  Note that although `groups` is defined in the templated hostvars, when
  the hostvars are loaded using `include_vars:` it is ignored as it is a
  "magic variable" determined by ansible itself and cannot be set.
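
  As an illustrative sketch, loading the exported hostvars on a compute node
  might look like the following (the path follows the layout shown above; the
  task name and use of `ansible_hostname` are assumptions):

  ```
  - name: Load hostvars exported by the control node
    ansible.builtin.include_vars:
      file: "/mnt/cluster/hostvars/{{ ansible_hostname }}/hostvars.yml"
  ```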