
Commit d4edcd6

Author: Balazs Gibizer
Store old_flavor already on source host during resize
During resize, on the source host, resize_instance() updates instance.host and instance.node to point to the destination host. This tells the source host's resource tracker that the allocation of this instance should no longer be tracked as an instance but as an outbound migration instead. However, to do that the source host's resource tracker needs instance.old_flavor. Unfortunately, instance.old_flavor is only set during finish_resize() on the destination host (resize_instance casts to finish_resize).

So it is possible that a running resize_instance() sets instance.host to point to the destination and then, before finish_resize() can set old_flavor, an update_available_resource periodic runs on the source host. As a result the allocation of this instance is tracked neither as an instance (instance.host points to the destination) nor as a migration (instance.old_flavor is not yet set), so the allocation on the source host is simply dropped by the periodic job.

When such a migration is later confirmed, confirm_resize() tries to drop the same resource allocation again but fails as the pinned CPUs of the instance are already freed. When such a migration is reverted instead, the revert succeeds but the source host's resource accounting will not contain the allocation of the instance until the next update_available_resource periodic runs and corrects it.

This does not affect resources tracked exclusively in placement (e.g. VCPU, MEMORY_MB, DISK_GB) but it does affect NUMA-related resources that are still tracked in the resource tracker (e.g. huge pages, pinned CPUs).

This patch moves the instance.old_flavor assignment to the source node, into the same transaction that sets instance.host to point to the destination host, hence closing the race condition.

Change-Id: Ic0d6c59147abe5e094e8f13e0d3523b178daeba9
Closes-Bug: #1944759
(cherry picked from commit b841e55)
1 parent e6c6880 commit d4edcd6
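
The race hinges on how the source host classifies the instance's usage. Below is a minimal, self-contained sketch of that decision, not nova's actual resource tracker code; the Instance dataclass and source_host_pinned_cpus() are hypothetical simplifications used only to illustrate why a missing old_flavor makes the periodic drop the allocation.

# Hypothetical simplification of the accounting decision made by the source
# host's update_available_resource periodic; not nova's real code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Instance:
    host: str                                # compute that currently "owns" the instance
    flavor_pcpus: int                        # pinned CPUs of the (new) flavor
    old_flavor_pcpus: Optional[int] = None   # pinned CPUs of the old flavor, if known


def source_host_pinned_cpus(this_host: str, instance: Instance) -> int:
    """Return how many pinned CPUs the source host keeps accounted."""
    if instance.host == this_host:
        # Instance still belongs to this host: track it as an instance.
        return instance.flavor_pcpus
    if instance.old_flavor_pcpus is not None:
        # Instance moved away but old_flavor is known: track the outbound
        # migration's share of the source host resources.
        return instance.old_flavor_pcpus
    # Neither an instance nor a recognizable migration: usage is dropped.
    return 0


# Before the fix: resize_instance() has already flipped instance.host to the
# destination, but old_flavor is only set later, in finish_resize().
racy = Instance(host="dest", flavor_pcpus=2, old_flavor_pcpus=None)
print(source_host_pinned_cpus("source", racy))   # 0 -> allocation lost

# After the fix: host/node and old_flavor are saved together on the source,
# so any periodic that sees the new host also sees old_flavor.
fixed = Instance(host="dest", flavor_pcpus=2, old_flavor_pcpus=2)
print(source_host_pinned_cpus("source", fixed))  # 2 -> migration tracked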

2 files changed (+25 -27 lines):

nova/compute/manager.py
nova/tests/functional/libvirt/test_numa_servers.py

nova/compute/manager.py

Lines changed: 12 additions & 0 deletions
@@ -5635,6 +5635,14 @@ def _resize_instance(
 
         instance.host = migration.dest_compute
         instance.node = migration.dest_node
+        # NOTE(gibi): as the instance is now tracked on the destination we
+        # have to make sure that the source compute resource tracker can
+        # track this instance as a migration. For that the resource tracker
+        # needs to see the old_flavor set on the instance. The old_flavor
+        # setting used to be done on the destination host in finish_resize
+        # but that is racy with a source host update_available_resource
+        # periodic run.
+        instance.old_flavor = instance.flavor
         instance.task_state = task_states.RESIZE_MIGRATED
         instance.save(expected_task_state=task_states.RESIZE_MIGRATING)
 

@@ -5748,6 +5756,10 @@ def _finish_resize(self, context, instance, migration, disk_info,
         # to ACTIVE for backwards compatibility
         old_vm_state = instance.system_metadata.get('old_vm_state',
                                                     vm_states.ACTIVE)
+        # NOTE(gibi): this is already set by resize_instance on the source
+        # node before it calls finish_resize on the destination, but during
+        # an upgrade the source node might not yet have the fix for bug
+        # 1944759. This assignment can be removed in the Z release.
         instance.old_flavor = old_flavor
 
         if old_instance_type_id != new_instance_type_id:
nova/tests/functional/libvirt/test_numa_servers.py

Lines changed: 13 additions & 27 deletions
@@ -877,10 +877,10 @@ def inject_periodic_to_finish_resize(*args, **kwargs):
 
         dst_host = server['OS-EXT-SRV-ATTR:host']
 
-        # This is a resource accounting bug, we should have 2 cpus pinned on
-        # both computes. The source should have it due to the outbound
-        # migration and the destination due to the instance running there
-        self._assert_pinned_cpus(src_host, 0)
+        # we have 2 cpus pinned on both computes. The source should have it
+        # due to the outbound migration and the destination due to the
+        # instance running there
+        self._assert_pinned_cpus(src_host, 2)
         self._assert_pinned_cpus(dst_host, 2)
 
         return server, src_host, dst_host
@@ -892,30 +892,17 @@ def test_resize_confirm_bug_1944759(self):
         # Now confirm the resize
         post = {'confirmResize': None}
 
-        # FIXME(gibi): This is bug 1944759 where during resize, on the source
-        # node the resize_instance() call at the point of calling finish_resize
-        # overlaps with a update_available_resources() periodic job. This
-        # causes that the periodic job will not track the migration nor the
-        # instance and therefore freeing the resource allocation. Then when
-        # later the resize is confirmed the confirm_resize on the source
-        # compute also wants to free up the resources, the pinned CPUs, and it
-        # fails as they are already freed.
-        exc = self.assertRaises(
-            client.OpenStackApiException,
-            self.api.post_server_action, server['id'], post
-        )
-        self.assertEqual(500, exc.response.status_code)
-        self.assertIn('CPUUnpinningInvalid', str(exc))
+        self.api.post_server_action(server['id'], post)
+        self._wait_for_state_change(server, 'ACTIVE')
 
-        # confirm failed above but the resource allocation reflects that the
-        # VM is running on the dest node
+        # the resource allocation reflects that the VM is running on the dest
+        # node
         self._assert_pinned_cpus(src_host, 0)
         self._assert_pinned_cpus(dst_host, 2)
 
+        # and running periodics does not break it either
         self._run_periodics()
 
-        # and such allocation situation is stable so as a recovery the VM
-        # can be reset-state to ACTIVE without problem.
         self._assert_pinned_cpus(src_host, 0)
         self._assert_pinned_cpus(dst_host, 2)

@@ -931,15 +918,14 @@ def test_resize_revert_bug_1944759(self):
         self.api.post_server_action(server['id'], post)
         self._wait_for_state_change(server, 'ACTIVE')
 
-        # This is a resource accounting bug. After the revert the source host
-        # should have 2 cpus pinned due to the instance.
-        self._assert_pinned_cpus(src_host, 0)
+        # After the revert the source host should have 2 cpus pinned due to
+        # the instance.
+        self._assert_pinned_cpus(src_host, 2)
         self._assert_pinned_cpus(dst_host, 0)
 
-        # running the periodic job will fix the resource accounting
+        # running the periodic job will not break it either
         self._run_periodics()
 
-        # this is now correct
         self._assert_pinned_cpus(src_host, 2)
         self._assert_pinned_cpus(dst_host, 0)
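
For reference, the pinned-CPU accounting the updated tests now expect at each stage can be summarized as a small self-check. The EXPECTED table and check() helper below are illustrative only; in the real tests the counts come from _assert_pinned_cpus() against each compute host.

# Expected pinned-CPU accounting per stage for a 2-pCPU instance, matching the
# assertions in the updated functional tests (stage/host labels are made up).
EXPECTED = {
    "during resize (before confirm/revert)": {"src": 2, "dst": 2},
    "after confirmResize": {"src": 0, "dst": 2},
    "after revertResize": {"src": 2, "dst": 0},
}


def check(stage, pinned_by_host):
    assert pinned_by_host == EXPECTED[stage], (stage, pinned_by_host)


# Example: what the confirm test asserts before and after running periodics.
check("after confirmResize", {"src": 0, "dst": 2})
check("after confirmResize", {"src": 0, "dst": 2})  # periodics must not change it
print("accounting matches the expectations in the tests")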
