=================================
Using ports with resource request
=================================

Starting from microversion 2.72, nova supports creating servers with neutron
ports that have a resource request, visible as the admin-only port attribute
``resource_request``. For example, a neutron port has a resource request if it
has a QoS minimum bandwidth rule attached.
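
As a concrete illustration (all names here are hypothetical, and the commands
require a deployment configured as described in the neutron guide referenced
below), such a port can be created by attaching a QoS policy with a minimum
bandwidth rule:

.. code-block:: console

   # Create a QoS policy with an egress minimum bandwidth rule (~1 Gbps).
   $ openstack network qos policy create qos-min-bw
   $ openstack network qos rule create qos-min-bw \
       --type minimum-bandwidth --min-kbps 1000000 --egress

   # A port using this policy carries a resource request.
   $ openstack port create --network my-network --qos-policy qos-min-bw \
       my-bw-port

   # As admin, inspect the admin-only resource_request attribute.
   $ openstack port show my-bw-port -c resource_request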

The :neutron-doc:`Quality of Service (QoS): Guaranteed Bandwidth <admin/config-qos-min-bw.html>`
document describes how to configure neutron to use this feature.

Resource allocation
~~~~~~~~~~~~~~~~~~~

Nova collects and combines the resource requests from each port in a boot
request and sends a single allocation candidate request to placement during
scheduling, so placement will make sure that the resource requests of all the
ports are fulfilled. At the end of scheduling, nova allocates one candidate in
placement. Therefore the resources requested for each port in a single boot
request are allocated under the server's allocation in placement.
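
To make this mapping concrete, here is a small illustrative sketch (not nova's
actual code; group suffixes are simplified to plain numbers, and all names in
the example call are hypothetical) of how flavor resources and per-port
resource requests could be combined into one ``GET /allocation_candidates``
query using placement's granular request-group syntax:

```python
# Illustrative sketch only -- not nova's real implementation. It shows how
# flavor resources plus per-port resource requests map onto the granular
# request-group syntax of placement's GET /allocation_candidates API.

def build_allocation_candidates_query(flavor_resources, port_requests,
                                      group_policy="none"):
    """Build the query string for one allocation candidate request.

    flavor_resources: dict mapping resource class -> amount; these go into
        the unsuffixed request group.
    port_requests: dict mapping port id -> {resource class: amount}; each
        port becomes its own granular group so group_policy can apply.
    """
    # The unsuffixed group carries the flavor's own resources.
    params = ["resources=" + ",".join(
        f"{rc}:{amount}" for rc, amount in sorted(flavor_resources.items()))]
    # One suffixed group per port with a resource request.
    for suffix, (_port_id, resources) in enumerate(
            sorted(port_requests.items()), start=1):
        params.append(f"resources{suffix}=" + ",".join(
            f"{rc}:{amount}" for rc, amount in sorted(resources.items())))
    # Placement requires group_policy once more than one group is present.
    if len(port_requests) > 1:
        params.append(f"group_policy={group_policy}")
    return "&".join(params)
```

With two bandwidth-requesting ports this yields one ``resources`` group from
the flavor, two suffixed groups, and a ``group_policy`` parameter.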


Resource group policy
~~~~~~~~~~~~~~~~~~~~~

Nova represents the resource request of each neutron port as a separate
:placement-doc:`Granular Resource Request group <usage/provider-tree.html#granular-resource-requests>`
when querying placement for allocation candidates. When a server create request
includes more than one port with resource requests, more than one group will be
used in the allocation candidate query. In this case placement requires the
``group_policy`` to be defined. Today this is only possible via the
``group_policy`` key of the :nova-doc:`flavor extra_spec <user/flavors.html>`.
The possible values are ``isolate`` and ``none``.
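
For example (flavor name hypothetical), the policy can be set as a flavor
extra spec:

.. code-block:: console

   $ openstack flavor set my-bw-flavor --property group_policy=isolate
   # or, to allow request groups to share resource providers:
   $ openstack flavor set my-bw-flavor --property group_policy=none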

When the policy is set to ``isolate``, each request group, and therefore the
resource request of each neutron port, will be fulfilled from a separate
resource provider. In the case of neutron ports with ``vnic_type=direct`` or
``vnic_type=macvtap`` this means that each port will use a virtual function
from a different physical function.

When the policy is set to ``none``, the resource requests of the neutron ports
can be fulfilled from overlapping resource providers. In the case of neutron
ports with ``vnic_type=direct`` or ``vnic_type=macvtap`` this means the ports
may use virtual functions from the same physical function.

For neutron ports with ``vnic_type=normal`` the group policy defines the
collocation policy at the OVS bridge level, so ``group_policy=none`` is a
reasonable default value in this case.

If ``group_policy`` is missing from the flavor, the server create request will
fail with 'No valid host was found' and a warning describing the missing
policy will be logged.
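
Putting the pieces together (flavor, image, port, and server names are
hypothetical), a server using such a port is created with microversion 2.72 or
later:

.. code-block:: console

   $ openstack --os-compute-api-version 2.72 server create \
       --flavor my-bw-flavor --image my-image \
       --nic port-id=my-bw-port my-server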

Virt driver support
~~~~~~~~~~~~~~~~~~~

Supporting neutron ports with ``vnic_type=direct`` or ``vnic_type=macvtap``
depends on the capability of the virt driver. For the supported virt drivers
see the :nova-doc:`Support matrix <user/support-matrix.html#operation_port_with_resource_request>`.

If the virt driver on the compute host does not support the needed capability
then the PCI claim will fail on the host and a re-schedule will be triggered.
It is suggested not to configure bandwidth inventory in the neutron agents on
these compute hosts to avoid unnecessary reschedules.
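
For example, with the ML2/OVS agent this means leaving the bandwidth resource
provider options unset in the agent configuration (file path and bridge name
below are illustrative):

.. code-block:: ini

   # /etc/neutron/plugins/ml2/openvswitch_agent.ini
   [ovs]
   # Leaving resource_provider_bandwidths unset means this host reports no
   # bandwidth inventory, so bandwidth-requesting servers will not be
   # scheduled (or re-scheduled) here.
   # resource_provider_bandwidths = br-physnet0:10000000:10000000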