`openhpc_nodegroups`: Optional, default `[]`. List of mappings, each defining a
unique set of homogenous nodes:
  * `name`: Required. Name of node group.
  * `ram_mb`: Optional. The physical RAM available in each node of this group
    ([slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `RealMemory`)
    in MiB. This is set using ansible facts if not defined, equivalent to
    `free --mebi` total * `openhpc_ram_multiplier`.
  * `ram_multiplier`: Optional. An override for the top-level definition
    `openhpc_ram_multiplier`. Has no effect if `ram_mb` is set.
  * `gres`: Optional. List of dicts defining [generic resources](https://slurm.schedmd.com/gres.html). Each dict must define:
    - `conf`: A string with the [resource specification](https://slurm.schedmd.com/slurm.conf.html#OPT_Gres_1) but requiring the format `<name>:<type>:<number>`, e.g. `gpu:A100:2`. Note the `type` is an arbitrary string.
    - `file`: A string with the [File](https://slurm.schedmd.com/gres.conf.html#OPT_File) (path to device(s)) for this resource, e.g. `/dev/nvidia[0-1]` for the above example.

    Note [GresTypes](https://slurm.schedmd.com/slurm.conf.html#OPT_GresTypes) must be set in `openhpc_config` if this is used.
  * `features`: Optional. List of [Features](https://slurm.schedmd.com/slurm.conf.html#OPT_Features) strings.
  * `node_params`: Optional. Mapping of additional parameters and values for
    [node configuration](https://slurm.schedmd.com/slurm.conf.html#lbAE).

`openhpc_partitions`: Optional, default `[]`. List of mappings, each defining a partition:
  * `name`: Required. Name of the partition.
  * `nodegroups`: Optional. List of node group names from `openhpc_nodegroups` included in this partition.
  * `default`: Optional. A boolean flag for whether this partition is the default. Valid settings are `YES` and `NO`.
  * `maxtime`: Optional. A partition-specific time limit following the format of the [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `MaxTime`. The default value is given by `openhpc_job_maxtime`. The value should be quoted to avoid Ansible conversions.
  * `partition_params`: Optional. Mapping of additional parameters and values for [partition configuration](https://slurm.schedmd.com/slurm.conf.html#SECTION_PARTITION-CONFIGURATION).

For each node group, any nodes in the ansible inventory group `<cluster_name>_<group_name>` will be added to that group. Note that:
- Nodes may have arbitrary hostnames, but these should be lowercase to avoid a mismatch between inventory and actual hostname.
- Nodes in a group are assumed to be homogenous in terms of processor and memory.
- An inventory group may be empty or missing, but if it is not then the play must contain at least one node from it (used to set processor information).

`openhpc_job_maxtime`: Maximum job time limit, default `'60-0'` (60 days). See [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `MaxTime` for format. The value should be quoted to avoid Ansible conversions.
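As an illustration only (the group name and all values below are hypothetical, not defaults), a node group using several of these options and a partition referencing it might look like:

```yaml
openhpc_nodegroups:
  - name: bigmem            # hypothetical node group; nodes come from inventory group '<cluster_name>_bigmem'
    ram_mb: 512000          # override the ansible-fact-derived RealMemory value
    features:
      - highmem             # arbitrary feature string usable with job constraints
    node_params:
      CoreSpecCount: 2      # any additional slurm.conf node parameters

openhpc_partitions:
  - name: bigmem
    nodegroups:
      - bigmem
    maxtime: '1-0'          # 1 day; quoted to avoid Ansible type conversion
    default: 'NO'
```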
To deploy, create an inventory and a playbook which look like the following:
```ini
# inventory/hosts:
...
[hpc_control]
cluster-control
```

```yaml
#playbook.yml
---
- hosts: all
  become: yes
  tasks:
    - import_role:
        name: stackhpc.openhpc
      vars:
        openhpc_cluster_name: hpc
        openhpc_enable:
          control: "{{ inventory_hostname in groups['cluster_control'] }}"
          batch: "{{ inventory_hostname in groups['cluster_compute'] }}"
          runtime: true
        openhpc_slurm_control_host: "{{ groups['cluster_control'] | first }}"
        openhpc_nodegroups:
          - name: compute
        openhpc_partitions:
          - name: compute
```

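Assuming the inventory file above is at `inventory/hosts`, the role can then be applied with a standard Ansible invocation, for example:

```bash
ansible-playbook -i inventory/hosts playbook.yml
```
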
### Multiple nodegroups

This example shows how partitions can span multiple types of compute node.

The example inventory below describes three types of compute node (login and
control nodes are omitted for brevity):

```ini
# inventory/hosts:
...
[hpc_general]
# standard compute nodes
cluster-general-0
cluster-general-1

[hpc_large]
# large memory nodes
cluster-largemem-0
cluster-largemem-1

[hpc_gpu]
# GPU nodes
cluster-a100-0
cluster-a100-1
...
```

Firstly, `openhpc_nodegroups` is set to capture these inventory groups and
apply any node-level parameters - in this case the `large` nodegroup has two
cores per node reserved (`CoreSpecCount`), and GRES is configured for the GPU
nodes:

```yaml
openhpc_cluster_name: hpc
openhpc_nodegroups:
  - name: general
  - name: large
    node_params:
      CoreSpecCount: 2
  - name: gpu
    gres:
      - conf: gpu:A100:2
        file: /dev/nvidia[0-1]
```

Now two partitions can be configured - a default one with a short time limit
and no large memory nodes for testing jobs, and another with all hardware and
a longer job runtime for "production" jobs:

```yaml
openhpc_partitions:
  - name: test
    nodegroups:
      - general
      - gpu
    maxtime: '1:0:0'  # 1 hour
    default: 'YES'
  - name: general
    nodegroups:
      - general
      - large
      - gpu
    maxtime: '2-0'  # 2 days
    default: 'NO'
```

Users can select the partition using the `--partition` argument and request
nodes with appropriate memory or GPUs using the `--mem`, `--gres` or `--gpus*`
options for `sbatch` or `srun`.

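For example, a job submission against the configuration above might look like this (the job script name and resource values are illustrative only):

```bash
# one task on a GPU node in the "general" partition, with one A100 and 64 GiB of memory
sbatch --partition=general --gres=gpu:A100:1 --mem=64G myjob.sh
```
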
Finally, some additional configuration must be provided for GRES:

```yaml
openhpc_config:
  GresTypes:
    - gpu
```
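
For reference, the intent of the above is that the generated Slurm configuration contains GRES entries along the following lines (illustrative only; the exact output depends on the role's templates):

```ini
# slurm.conf (extract)
GresTypes=gpu
NodeName=cluster-a100-[0-1] ... Gres=gpu:A100:2

# gres.conf (extract)
NodeName=cluster-a100-[0-1] Name=gpu Type=A100 File=/dev/nvidia[0-1]
```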
<b id="slurm_ver_footnote">1</b> Slurm 20.11 removed `accounting_storage/filetxt` as an option. This version of Slurm was introduced in OpenHPC v2.1 but the OpenHPC repos are common to all OpenHPC v2.x releases. [↩](#accounting_storage)