Commit 314a696

Merge stackhpc/xena into stackhpc/yoga
2 parents af53d6a + 583b7af

6 files changed: +139 −6 lines changed


doc/source/contributor/environments/ci-multinode.rst

Lines changed: 121 additions & 0 deletions
@@ -203,3 +203,124 @@ instance:
      ls testdir

If it shows the test file then the share is working correctly.

Magnum
======

The Multinode environment has Magnum enabled by default. To test it, you will
need to create a Kubernetes cluster. It is recommended that you use the Fedora
CoreOS 35 image specified below, as other images may not work. Download the
image locally, then extract it and upload it to Glance:

.. code-block:: bash

   wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/35.20220410.3.1/x86_64/fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2.xz
   unxz fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2.xz
   openstack image create \
       --container-format bare \
       --disk-format qcow2 \
       --property os_distro='fedora-coreos' \
       --property os_version='35' \
       --file fedora-coreos-35.20220410.3.1-openstack.x86_64.qcow2 \
       --progress \
       fedora-coreos-35

Create a keypair:

.. code-block:: bash

   openstack keypair create --private-key ~/.ssh/id_rsa id_rsa

Install the Magnum, Heat, and Octavia clients:

.. code-block:: bash

   pip install python-magnumclient python-heatclient python-octaviaclient

Create a cluster template:

.. code-block:: bash

   openstack coe cluster template create test-template \
       --image fedora-coreos-35 \
       --external-network external \
       --labels etcd_volume_size=8,boot_volume_size=50,cloud_provider_enabled=true,heat_container_agent_tag=wallaby-stable-1,kube_tag=v1.23.6,cloud_provider_tag=v1.23.1,monitoring_enabled=true,auto_scaling_enabled=true,auto_healing_enabled=true,auto_healing_controller=magnum-auto-healer,magnum_auto_healer_tag=v1.23.0.1-shpc,etcd_tag=v3.5.4,master_lb_floating_ip_enabled=true,cinder_csi_enabled=true,container_infra_prefix=ghcr.io/stackhpc/,min_node_count=1,max_node_count=50,octavia_lb_algorithm=SOURCE_IP_PORT,octavia_provider=ovn \
       --dns-nameserver 8.8.8.8 \
       --flavor m1.medium \
       --master-flavor m1.medium \
       --network-driver calico \
       --volume-driver cinder \
       --docker-storage-driver overlay2 \
       --floating-ip-enabled \
       --master-lb-enabled \
       --coe kubernetes

Create a cluster:

.. code-block:: bash

   openstack coe cluster create --keypair id_rsa --master-count 1 --node-count 1 --floating-ip-enabled test-cluster

This command will take a while to complete. You can monitor its progress with
the following command:

.. code-block:: bash

   watch "openstack --insecure coe cluster list ; openstack --insecure stack list ; openstack --insecure server list"

Once the cluster is created, you can SSH into the master node and check that
there are no failed containers:

.. code-block:: bash

   ssh core@{master-ip}

List the Podman and Docker containers:

.. code-block:: bash

   sudo docker ps
   sudo podman ps

If there are any failed containers, you can check their logs with the
following commands:

.. code-block:: bash

   sudo docker logs {container-id}
   sudo podman logs {container-id}

Alternatively, look at the logs under ``/var/log``. In particular, pay close
attention to ``/var/log/heat-config`` on the master and
``/var/log/kolla/{magnum,heat,neutron}/*`` on the controllers.
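For example, a quick way to skim the Heat agent output on the master node (the
exact log file names under ``/var/log/heat-config`` vary between image builds,
so the wildcard path below is an assumption; adjust it to what you find):

.. code-block:: bash

   # Inspect the most recent heat-config script output on the master node.
   sudo tail -n 100 /var/log/heat-config/heat-config-script/*.log
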
Otherwise, the ``status`` of the cluster should eventually become
``CREATE_COMPLETE`` and the ``health_status`` should become ``HEALTHY``.
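Both fields can be checked for a single cluster without watching the full
list, e.g. using the Magnum client's ``cluster show`` command with column
selection:

.. code-block:: bash

   # Show only the status and health of the test cluster.
   openstack coe cluster show test-cluster -c status -c health_status
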
You can interact with the cluster using ``kubectl``. The instructions for
installing ``kubectl`` are available `here
<https://kubernetes.io/docs/tasks/tools/install-kubectl/>`_. You can then
configure ``kubectl`` to use the cluster, and check that the pods are all
running:

.. code-block:: bash

   openstack coe cluster config test-cluster --dir $PWD
   export KUBECONFIG=$PWD/config
   kubectl get pods -A

Finally, you can optionally use Sonobuoy to run a complete set of Kubernetes
conformance tests.

Find the latest release of Sonobuoy on its `GitHub releases page
<https://github.com/vmware-tanzu/sonobuoy/releases>`_, then download it with
``wget``, e.g.:

.. code-block:: bash

   wget https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.16/sonobuoy_0.56.16_linux_amd64.tar.gz

Extract it with ``tar``:

.. code-block:: bash

   tar -xvf sonobuoy_0.56.16_linux_amd64.tar.gz

And run it:

.. code-block:: bash

   ./sonobuoy run --wait

This will take a while to complete. Once it is done, you can check the
results with:

.. code-block:: bash

   results=$(./sonobuoy retrieve)
   ./sonobuoy results $results

There are various other options for Sonobuoy; see the `documentation
<https://sonobuoy.io/docs/>`_ for more details.
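For example, Sonobuoy can run a much faster smoke-test subset instead of the
full conformance suite, and it should be cleaned up once you are finished
(flags as of the v0.56 series; check ``./sonobuoy --help`` on your version):

.. code-block:: bash

   # Run only a single quick end-to-end test as a smoke check.
   ./sonobuoy run --mode quick --wait

   # Remove the resources Sonobuoy created in the cluster.
   ./sonobuoy delete --wait
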

etc/kayobe/ansible/cephadm-commands-post.yml

Lines changed: 1 addition & 1 deletion
.. code-block:: diff

   @@ -10,4 +10,4 @@
      - import_role:
          name: stackhpc.cephadm.commands
        vars:
   -      cephadm_commands: "{{ cephadm_commands_post }}"
   +      cephadm_commands: "{{ cephadm_commands_post | default([]) }}"

etc/kayobe/ansible/cephadm-commands-pre.yml

Lines changed: 1 addition & 1 deletion
.. code-block:: diff

   @@ -10,4 +10,4 @@
      - import_role:
          name: stackhpc.cephadm.commands
        vars:
   -      cephadm_commands: "{{ cephadm_commands_pre }}"
   +      cephadm_commands: "{{ cephadm_commands_pre | default([]) }}"
Lines changed: 9 additions & 0 deletions
.. code-block:: diff

   @@ -1,3 +1,12 @@
    ---
    kolla_neutron_ml2_network_vlan_ranges:
      - physical_network: "physnet1"
   +
   +kolla_neutron_ml2_type_drivers:
   +  - flat
   +  - vlan
   +  - "{{ 'geneve' if kolla_enable_ovn | bool else 'vxlan' }}"
   +
   +kolla_neutron_ml2_tenant_network_types:
   +  - vlan
   +  - "{{ 'geneve' if kolla_enable_ovn | bool else 'vxlan' }}"

etc/kayobe/environments/ci-multinode/seed.yml

Lines changed: 2 additions & 4 deletions
.. code-block:: diff

   @@ -3,9 +3,7 @@ seed_bootstrap_user: "{{ os_distribution if os_distribution == 'ubuntu' else 'cl
    seed_lvm_groups:
      - "{{ stackhpc_lvm_group_rootvg }}"

   -seed_extra_network_interfaces: >
   -  "{{ seed_extra_network_interfaces_external +
   -  (seed_extra_network_interfaces_manila if (kolla_enable_manila | bool and kolla_enable_manila_backend_cephfs_native | bool) else []) }}"
   +seed_extra_network_interfaces: "{{ seed_extra_network_interfaces_external + seed_extra_network_interfaces_manila if (kolla_enable_manila | bool and kolla_enable_manila_backend_cephfs_native | bool) else [] }}"

    # Seed has been provided an external interface
    # for tempest tests and SSH access to machines.

   @@ -26,6 +24,6 @@ snat_rules_default:
        source_ip: "{{ ansible_facts.default_ipv4.address }}"
    snat_rules_manila:
      - interface: "{{ storage_interface }}"
   -    source_ip: "{{ ansible_facts[storage_interface].ipv4.address }}"
   +    source_ip: "{{ ansible_facts[storage_interface].ipv4.address | default }}"
    # Only add the storage snat rule if we are using manila-cephfs.
    snat_rules: "{{ snat_rules_default + snat_rules_manila if (kolla_enable_manila | bool and kolla_enable_manila_backend_cephfs_native | bool) else snat_rules_default }}"
Lines changed: 5 additions & 0 deletions
.. code-block:: diff

   @@ -0,0 +1,5 @@
   +---
   +features:
   +  - |
   +    Updated the documentation for the ci-multinode environment to include
   +    instructions on how to set up and test Magnum.
