   ls testdir

If it shows the test file then the share is working correctly.

Magnum
======

The Multinode environment has Magnum enabled by default. To test it, you will
need to create a Kubernetes cluster. It is recommended that you use the Fedora
CoreOS 35 image specified below, as other images may not work. Download the
image locally, then extract it and upload it to Glance:

.. code-block:: bash

   wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/35.20220410.3.1/x86_64/fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2.xz
   unxz fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2.xz
   openstack image create --container-format bare --disk-format qcow2 --property os_distro='fedora-coreos' --property os_version='35' --file fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2 fedora-coreos-35 --progress
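
As a quick sanity check (optional; this assumes the image name
``fedora-coreos-35`` used above), confirm that the image is active and carries
the expected properties:

.. code-block:: bash

   # Expect status 'active' and the os_distro/os_version properties set above
   openstack image show fedora-coreos-35 -c status -c properties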

Create a keypair:

.. code-block:: bash

   openstack keypair create --private-key ~/.ssh/id_rsa id_rsa

Install the Magnum, Heat, and Octavia clients:

.. code-block:: bash

   pip install python-magnumclient
   pip install python-heatclient
   pip install python-octaviaclient

Create a cluster template:

.. code-block:: bash

   openstack coe cluster template create test-template --image fedora-coreos-35 --external-network external --labels etcd_volume_size=8,boot_volume_size=50,cloud_provider_enabled=true,heat_container_agent_tag=wallaby-stable-1,kube_tag=v1.23.6,cloud_provider_tag=v1.23.1,monitoring_enabled=true,auto_scaling_enabled=true,auto_healing_enabled=true,auto_healing_controller=magnum-auto-healer,magnum_auto_healer_tag=v1.23.0.1-shpc,etcd_tag=v3.5.4,master_lb_floating_ip_enabled=true,cinder_csi_enabled=true,container_infra_prefix=ghcr.io/stackhpc/,min_node_count=1,max_node_count=50,octavia_lb_algorithm=SOURCE_IP_PORT,octavia_provider=ovn --dns-nameserver 8.8.8.8 --flavor m1.medium --master-flavor m1.medium --network-driver calico --volume-driver cinder --docker-storage-driver overlay2 --floating-ip-enabled --master-lb-enabled --coe kubernetes

Create a cluster:

.. code-block:: bash

   openstack coe cluster create --keypair id_rsa --master-count 1 --node-count 1 --floating-ip-enabled test-cluster

This command will take a while to complete. You can monitor the progress with
the following command:

.. code-block:: bash

   watch "openstack --insecure coe cluster list; openstack --insecure stack list; openstack --insecure server list"

Once the cluster is created, you can SSH into the master node and check that
there are no failed containers:

.. code-block:: bash

   ssh core@{master-ip}
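
The ``{master-ip}`` placeholder is the floating IP of the master node. One way
to find it (a sketch, assuming the cluster is named ``test-cluster`` as above)
is to read the cluster's ``master_addresses`` field or simply list the servers:

.. code-block:: bash

   # Master address(es) as reported by Magnum
   openstack coe cluster show test-cluster -f value -c master_addresses
   # Or find the master instance directly
   openstack server list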

List the Podman and Docker containers:

.. code-block:: bash

   sudo docker ps
   sudo podman ps

If there are any failed containers, you can check the logs with the following
commands:

.. code-block:: bash

   sudo docker logs {container-id}
   sudo podman logs {container-id}

Or look at the logs under ``/var/log``. In particular, pay close attention to
``/var/log/heat-config`` on the master and
``/var/log/kolla/{magnum,heat,neutron}/*`` on the controllers.
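
If the cluster creation fails or appears stuck, the Heat stack behind it is
often the quickest place to look. A sketch (the stack name comes from
``openstack stack list``):

.. code-block:: bash

   # Show failed resources and the reason for each failure
   openstack stack failures list {stack-name}
   # List failed resources across nested stacks
   openstack stack resource list --nested-depth 5 --filter status=FAILED {stack-name}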

Otherwise, the ``status`` of the cluster should eventually become
``CREATE_COMPLETE`` and the ``health_status`` should be ``HEALTHY``.
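
You can check both fields directly, for example (assuming the cluster name
``test-cluster`` used above):

.. code-block:: bash

   openstack coe cluster show test-cluster -c status -c health_status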

You can interact with the cluster using ``kubectl``. The instructions for
installing ``kubectl`` are available `here
<https://kubernetes.io/docs/tasks/tools/install-kubectl/>`_. You can then
configure ``kubectl`` to use the cluster, and check that the pods are all
running:

.. code-block:: bash

   openstack coe cluster config test-cluster --dir $PWD
   export KUBECONFIG=$PWD/config
   kubectl get pods -A
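
As a further quick check (a sketch), confirm that all nodes have registered
and report ``Ready``:

.. code-block:: bash

   # All nodes should show STATUS Ready
   kubectl get nodes -o wide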

Finally, you can optionally use Sonobuoy to run a complete set of Kubernetes
conformance tests.

Find the latest release of Sonobuoy on the `GitHub releases page
<https://github.com/vmware-tanzu/sonobuoy/releases>`_. Then download it with wget, e.g.:
300
+
301
+ .. code-block :: bash
302
+
303
+ wget https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.16/sonobuoy_0.56.16_linux_amd64.tar.gz
304
+
305
+ Extract it with tar:

.. code-block:: bash

   tar -xvf sonobuoy_0.56.16_linux_amd64.tar.gz

And run it:

.. code-block:: bash

   ./sonobuoy run --wait

This will take a while to complete. Once it is done you can check the results
with:

.. code-block:: bash

   results=$(./sonobuoy retrieve)
   ./sonobuoy results $results

There are various other options for Sonobuoy; see the `documentation
<https://sonobuoy.io/docs/>`_ for more details.
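
For example (a sketch of a lighter run), you can run only the quick smoke test
and clean up afterwards:

.. code-block:: bash

   # Run a single quick end-to-end test instead of the full suite
   ./sonobuoy run --mode quick --wait
   # Remove the Sonobuoy namespace and resources when finished
   ./sonobuoy delete --wait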