Generate an SSH keypair. The public key will be registered in OpenStack as a
keypair and authorised by the instances deployed by Terraform. The private and
public keys will be transferred to the Ansible control host to allow it to
connect to the other hosts. Note that password-protected keys are not currently
supported.

.. code-block:: console
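
   # A sketch only: generate an unencrypted keypair (password-protected keys
   # are not supported); adjust the key path and name to suit your environment.
   ssh-keygen -f ~/.ssh/id_rsa -N ''
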
Or you can source the provided `init.sh` script, which shall initialise
Terraform and prompt for your cloud credentials:

.. code-block:: console

   OpenStack Cloud Name: sms-lab
   Password:

You must ensure that you have `Ansible installed <https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html>`_ on your local machine.
If the deployed instances are behind an SSH bastion you must ensure that your SSH config is set up appropriately with a proxy jump.

.. code-block::

   Host lab-bastion
       HostName BastionIPAddr
       User username
       IdentityFile ~/.ssh/key

   Host 10.*
       ProxyJump=lab-bastion
       ForwardAgent no
       IdentityFile ~/.ssh/key
       UserKnownHostsFile /dev/null
       StrictHostKeyChecking no

Configure Terraform variables
=============================

Populate Terraform variables in `terraform.tfvars`. Examples are provided in
files named `*.tfvars.example`. The available variables are defined in
`variables.tf` along with their type, description, and optional default.

You will need to set the `multinode_keypair`, `prefix`, and `ssh_public_key`.
By default, Rocky Linux 9 will be used but Ubuntu Jammy is also supported by
changing `multinode_image` to `overcloud-ubuntu-jammy-<release>-<datetime>` and
`ssh_user` to `ubuntu`.

The `multinode_flavor` will change the flavor used for controller and compute
nodes. Both virtual machines and baremetal are supported, but the `*_disk_size`
variables must be set to 0 when using baremetal hosts. This will stop a block
device being allocated. When any baremetal hosts are deployed, the
`multinode_vm_network` and `multinode_vm_subnet` should also be changed to
a VLAN network and associated subnet.

If `deploy_wazuh` is set to true, an infrastructure VM will be created that
hosts the Wazuh manager. The Wazuh deployment playbooks will also be triggered
automatically to deploy Wazuh agents to the overcloud hosts.

If `add_ansible_control_fip` is set to `true`, a floating IP will be created
and attached to the Ansible control host. In that case
`ansible_control_fip_pool` should be set to the name of the pool (network) from
which to allocate the floating IP, and the floating IP will be used for SSH
access to the control host.
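
As an illustrative sketch, a minimal `terraform.tfvars` might look like the
following. All values below are placeholders; consult `variables.tf` and the
`*.tfvars.example` files for the authoritative variable names and defaults.

.. code-block::

   # Placeholder values for illustration only.
   multinode_keypair = "<keypair-name>"
   prefix            = "<your-prefix>"
   ssh_public_key    = "~/.ssh/id_rsa.pub"

   # Optional: use Ubuntu Jammy instead of the default Rocky Linux 9.
   # multinode_image = "overcloud-ubuntu-jammy-<release>-<datetime>"
   # ssh_user        = "ubuntu"

   deploy_wazuh            = false
   add_ansible_control_fip = false
   # ansible_control_fip_pool = "<external-network-name>"
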

Configure Ansible variables
===========================

Review the vars defined within `ansible/vars/defaults.yml`. Here you can customise the versions of kayobe, kayobe-config or openstack-config.
Make sure to define `ssh_key_path` to point to the location of the SSH key in use by the nodes, and also `vxlan_vni`, which should be a unique value between 1 and 100,000.
The VNI should be much smaller than the officially supported limit of 16,777,215, as we encounter errors when attempting to bring up interfaces that use a high VNI.
You must set `vault_password_path`; this should be the path to a file containing the Ansible vault password.
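
As a sketch, the relevant overrides in `ansible/vars/defaults.yml` might look
like the following; the paths and VNI shown are placeholder values.

.. code-block:: yaml

   # Placeholder values; adjust to your environment.
   ssh_key_path: ~/.ssh/id_rsa
   vxlan_vni: 1234                        # unique value between 1 and 100,000
   vault_password_path: ~/vault.password
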

Deployment
==========

Generate a plan:

.. code-block:: console
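
   # Standard Terraform planning step, run against the variables defined above.
   terraform plan
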
Apply the changes:

.. code-block:: console

   terraform apply -auto-approve

You should have requested a number of resources to be spawned on OpenStack.

Configure Ansible control host
==============================

Run the `configure-hosts.yml` playbook to configure the Ansible control host.

.. code-block:: console
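
   # Illustrative invocation; the inventory and playbook paths are assumptions
   # and may differ in your checkout.
   ansible-playbook -i ansible/inventory.yml ansible/configure-hosts.yml
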
This playbook sequentially executes 2 other playbooks:

#. ``grow-control-host.yml`` - Applies LVM configuration to the control host to ensure it has enough space to continue with the rest of the deployment. Tag: ``lvm``
#. ``deploy-openstack-config.yml`` - Prepares the Ansible control host as a Kayobe control host, cloning the Kayobe configuration and installing virtual environments. Tag: ``deploy``

These playbooks are tagged so that they can be invoked or skipped using `tags` or `--skip-tags` as required.
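For example, to skip the LVM step on a control host that has already been grown, something like the following could be used (a sketch; the inventory path is an assumption):

.. code-block:: console

   ansible-playbook -i ansible/inventory.yml ansible/configure-hosts.yml --skip-tags lvm
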
Deploy OpenStack
================

Once the Ansible control host has been configured with a Kayobe/OpenStack configuration you can then begin the process of deploying OpenStack.
This can be achieved either by manually running the various commands to configure the hosts and deploy the services, or by using the generated `deploy-openstack.sh` script.
`deploy-openstack.sh` should be available within the home directory on your Ansible control host provided you ran `deploy-openstack-config.yml` earlier.
This script will go through the process of performing the following tasks:

* kayobe control host bootstrap
* kayobe seed host configure
* kayobe overcloud host configure
* cephadm deployment
* kayobe overcloud service deploy
* openstack configuration
* tempest testing

Tempest test results will be written to `~/tempest-artifacts`.

If you choose to opt for the automated method you must first SSH into your Ansible control host.
Start a `tmux` session to avoid halting the deployment if you are disconnected.

.. code-block:: console

   tmux
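
Then, inside the `tmux` session, run the generated script from your home
directory (an illustrative invocation):

.. code-block:: console

   ~/deploy-openstack.sh
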
Accessing OpenStack
===================

After a successful deployment of OpenStack you may access the OpenStack API and Horizon by proxying your connection via the seed node, as it has an interface on the public network (192.168.39.X).
Using software such as sshuttle will allow for easy access.
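
For example, a minimal `sshuttle` invocation might look like the following; the
user, seed address and subnet are placeholders, and `--dns` will also forward
your DNS requests through the tunnel.

.. code-block:: console

   sshuttle -r <user>@<seed-ip> 192.168.39.0/24 --dns
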
After you are finished with the multinode environment, please destroy the nodes to free up resources for others.
This can be accomplished by using the provided `scripts/tear-down.sh`, which will destroy your controllers, compute, seed and storage nodes whilst leaving your Ansible control host and keypair intact.

If you would like to delete your Ansible control host as well, you can pass the `-a` flag; if you would also like to remove your keypair, pass `-a -k`.
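
For example (a sketch; run from the root of this repository):

.. code-block:: console

   ./scripts/tear-down.sh -a -k
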
Issues & Fixes
==============

Sometimes a compute instance fails to be provisioned by Terraform or fails on boot for any reason.
If this happens, the solution is to mark the resource as tainted and perform `terraform apply` again, which shall destroy and rebuild the failed instance.
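
For example, to rebuild a single failed instance (the resource address shown is
illustrative; use `terraform state list` to find the real one):

.. code-block:: console

   terraform taint 'openstack_compute_instance_v2.compute[0]'
   terraform apply -auto-approve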