forked from openstack/nova
Merge upstream rocky #3
Merged
Conversation
When disconnecting an encrypted volume the Libvirt driver uses the presence of a Libvirt secret associated with the volume to determine if the new style native QEMU LUKS decryption or the original decryption method using os-brick encryptors is used. While this works well in most deployments, some issues have been observed in Kolla based environments where the Libvirt secrets are not fully persisted between host reboots or container upgrades. This can lead to _detach_encryptor attempting to build an encryptor, which will fail if the associated connection_info for the volume does not contain a device_path, such as in the case of encrypted rbd volumes. This change adds a simple conditional to _detach_encryptor to ensure we return when device_path is not present in connection_info and native QEMU LUKS decryption is available. This handles the specific use case where we are certain that the encrypted volume was never decrypted using the os-brick encryptors, as these require a local block device on the compute host and have thus never supported rbd. It is still safe to build an encryptor and call detach_volume when a device_path is present, however, as change I9f52f89b8466d036 made such calls idempotent within os-brick. Change-Id: Id670f13a7f197e71c77dc91276fc2fba2fc5f314 Closes-bug: #1821696 (cherry picked from commit 56ca4d3) (cherry picked from commit c6432ac)
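As an illustration of the guard described above, here is a minimal self-contained sketch; the FakeLibvirtDriver class and its helper methods are hypothetical stand-ins, not the actual driver code.

```python
# Minimal sketch of the guard described in the commit message above. The
# method names follow the commit text; the surrounding class is hypothetical.
class FakeLibvirtDriver(object):
    def __init__(self, qemu_native_luks_available=True):
        self._qemu_native_luks = qemu_native_luks_available

    def _allow_native_luks(self, encryption):
        # Assume native QEMU LUKS decryption can handle this volume.
        return self._qemu_native_luks and encryption.get('provider') == 'luks'

    def _detach_encryptor(self, connection_info, encryption):
        # New conditional: an encrypted rbd volume never had an os-brick
        # encryptor attached (there is no local block device), so there is
        # nothing to detach when device_path is absent and native LUKS is
        # available.
        if ('device_path' not in connection_info.get('data', {})
                and self._allow_native_luks(encryption)):
            return
        self._build_and_detach_os_brick_encryptor(connection_info, encryption)

    def _build_and_detach_os_brick_encryptor(self, connection_info, encryption):
        print('detaching via os-brick encryptor')


driver = FakeLibvirtDriver()
# Encrypted rbd volume: no device_path, so the method returns early.
driver._detach_encryptor({'data': {}}, {'provider': 'luks'})
```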
When an admin creates a snapshot of another project owner's instance, either via the createImage API directly, or via the shelve or createBackup APIs, the admin project is the owner of the image and the owner of the instance (in another project) cannot "see" the image. This is a problem, for example, if an admin shelves a tenant user's server and then the user tries to unshelve the server, because the user will not have access to get the shelved snapshot image. This change fixes the problem by leveraging the sharing feature [1] in the v2 image API. When a snapshot is created where the request context project_id does not match the owner of the instance project_id, the instance owner project_id is granted sharing access to the image. By default, this means the instance owner (tenant user) can get the image directly via the image ID if they know it, but otherwise the image is not listed for the user to avoid spamming their image listing. In the case of unshelve, the end user does not need to know the image ID since it is stored in the instance system_metadata. Regardless, the user could accept the pending image membership if they want to see the snapshot show up when listing available images. Note that while the non-admin project has access to the snapshot image, they cannot delete it. For example, if the user tries to delete or unshelve a shelved offloaded server, nova will try to delete the snapshot image, which will fail and log a warning since the user does not own the image (the admin does). However, the delete/unshelve operations will not fail because the image cannot be deleted, which is an acceptable trade-off. Due to some very old legacy virt driver code which started in the libvirt driver and was copied to several other drivers, several virt drivers had to be modified to not overwrite the "visibility=shared" image property by passing "is_public=False" when uploading the image data. There was no point in the virt drivers setting is_public=False since the API already controls that. It does mean, however, that the bug fix is not really in effect until both the API and compute service code has this fix. A functional test is added which depends on tracking the owner/member values in the _FakeImageService fixture. Impacted unit tests are updated accordingly. [1] https://developer.openstack.org/api-ref/image/v2/index.html#sharing Conflicts: nova/compute/api.py nova/compute/utils.py NOTE(seyeongkim): The conflict is due to not having change 7e229ba in Rocky. nova/tests/functional/test_images.py NOTE(seyeongkim): The conflict is due to not having the correct uuidsentinel position. Change-Id: If53bc8fa8ab4a8a9072061af7afed53fc12c97a5 Closes-Bug: #1675791 (cherry picked from commit 35cc0f5)
Prior to this patch, if the openssl command returned a zero exit code and wrote details to stderr, nova would raise a RuntimeError exception. This patch changes the behavior to only raise a RuntimeError exception when openssl returns a non-zero exit code. Regardless of the exit code, a warning will always be logged with stderr details if stderr is not None. Note that processutils.execute will now raise a processutils.ProcessExecutionError exception for any non-zero exit code since we are passing check_exit_code=True, which we convert to a RuntimeError. Thanks to Dimitri John Ledkov <[email protected]> and Eric Fried <[email protected]> for helping with this patch. Conflicts: nova/virt/xenapi/agent.py NOTE(coreycb): The conflict is due to Ibe2f478288db42f8168b52dfc14d85ab92ace74b not being in stable/rocky. Change-Id: I212ac2b5ccd93e00adb7b9fe102fcb70857c6073 Partial-Bug: #1771506 (cherry picked from commit 1da71fa) (cherry picked from commit 64793cf)
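A sketch of the resulting error handling, using the standard library's subprocess module as a stand-in for oslo.concurrency's processutils.execute; the run_openssl helper is hypothetical.

```python
import logging
import subprocess

LOG = logging.getLogger(__name__)


def run_openssl(*args):
    """Only treat a non-zero exit code as fatal; stderr alone is a warning.

    Sketch of the behaviour described above, using the stdlib rather than
    the processutils wrapper the real patch relies on.
    """
    proc = subprocess.run(['openssl'] + list(args),
                          capture_output=True, text=True)
    if proc.stderr:
        # openssl frequently writes informational noise to stderr even on
        # success, so this is no longer treated as an error by itself.
        LOG.warning('openssl wrote to stderr: %s', proc.stderr.strip())
    if proc.returncode != 0:
        # Equivalent to converting ProcessExecutionError into RuntimeError.
        raise RuntimeError('openssl failed with exit code %d'
                           % proc.returncode)
    return proc.stdout
```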
If for some reason we don't have a valid endpoint to provide the Ironic client, we should at the very least provide the information it needs to make an informed decision. Conflicts: nova/tests/unit/virt/ironic/test_client_wrapper.py The conflict here is twofold:
- Each release has a different value for the latest/previous os_ironic_api_version kwarg. This fix doesn't change that value, so it needs to be "left alone" in the tests.
- The 'ironic_url' kwarg was renamed to 'endpoint' in the stein release [1]; it needs to be left as 'ironic_url' in rocky and prior.
[1] I1b3ce1955622c40b780c0b15ec7e09be3e8ace72 Change-Id: I31fa1c6fb0b224fbb02f9ebf68abc6a3728e9389 Partial-Bug: #1825583 (cherry picked from commit 6eaa6db) (cherry picked from commit 222f462)
In the Rocky cycle, 'GET /allocation_candidates' started to be aware of nested providers from microversion 1.29; namely, it can have multiple allocations from multiple resource providers in the same tree in the allocation requests. To keep the behavior of microversions before 1.29, it added a filter to exclude nested providers for callers unaware of the nested architecture. However, that function, "_exclude_nested_providers()", is very heavy and is executed even if there is no nested provider in the environment when microversion < 1.29. This patch changes it to skip that step if there is no nested provider. Since _exclude_nested_providers() should be done before limiting the candidates, this patch also moves it from the handler file to the deeper layer. This is manually backported from the placement repository: commit 727fb88dccfe8461cc40ae53ca2d4e40fd2a9c3c Change-Id: I4efdc65395e69a6d33fba927018d003cce26fa68 Closes-Bug: #1828937 (cherry picked from commit 3b17dd7615ab85b751fc875f2391fcc2c34eeee6)
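A rough sketch of the short-circuit described above, assuming a simplified data shape (plain dicts with 'uuid', 'parent_provider_uuid' and 'providers' keys) rather than placement's real objects; all names here are illustrative only.

```python
def _has_nested_providers(summaries):
    # A provider is nested if it has a parent; checking this is cheap
    # compared to rebuilding the whole candidate list.
    return any(s.get('parent_provider_uuid') for s in summaries)


def _exclude_nested_providers(alloc_requests, summaries):
    # Stand-in for the heavyweight filtering done by placement: keep only
    # root providers and the allocation requests that use them exclusively.
    roots = [s for s in summaries if not s.get('parent_provider_uuid')]
    root_uuids = {s['uuid'] for s in roots}
    kept = [a for a in alloc_requests
            if set(a['providers']) <= root_uuids]
    return kept, roots


def filter_candidates(alloc_requests, summaries, microversion_allows_nested):
    # The short-circuit: only run the expensive exclusion when the client
    # cannot handle nested providers *and* nested providers actually exist.
    if not microversion_allows_nested and _has_nested_providers(summaries):
        alloc_requests, summaries = _exclude_nested_providers(
            alloc_requests, summaries)
    return alloc_requests, summaries
```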
_instance_update modifies its 'values' argument. Consequently, if it is retried due to an update conflict, the second invocation has the wrong arguments. A specific issue this causes is that if we called it with expected_task_state, a concurrent modification to task_state will cause us to fail and retry. However, expected_task_state will have been popped from values on the first invocation and will not be present for the second. Consequently the second invocation will fail to perform the task_state check and therefore succeed, resulting in a race. We rewrite the old race unit test, which wasn't testing the correct thing for 2 reasons:
1. Due to the bug fixed in this patch, although we were calling update_on_match() twice, the second call didn't check the task state.
2. side_effect=iterable returns function items without executing them, but we weren't hitting this due to the bug fixed in this patch.
Closes-Bug: #1821373 Change-Id: I01c63e685113bf30e687ccb14a4d18e344b306f6 (cherry picked from commit aae5c7a) (cherry picked from commit 61fef49)
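A minimal sketch of the fix, assuming a heavily simplified _instance_update signature: copying 'values' before popping from it keeps a retry's arguments intact.

```python
import copy


def _instance_update(context, instance_uuid, values, expected=None):
    # Work on a copy so a retry after an update conflict sees the original
    # arguments, including expected_task_state, intact.
    values = copy.copy(values)
    expected_task_state = values.pop('expected_task_state', None)
    # ... perform the compare-and-swap update using expected_task_state ...
    return expected_task_state


values = {'task_state': None, 'expected_task_state': 'powering-off'}
# Both the first attempt and a simulated retry still see expected_task_state.
assert _instance_update(None, 'fake-uuid', values) == 'powering-off'
assert _instance_update(None, 'fake-uuid', values) == 'powering-off'
```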
During snapshot of a volume-backed instance, we attempt to quiesce the instance before doing the snapshot. If quiesce is not supported or the qemu guest agent is not enabled, we will skip the quiesce and move on to the snapshot. Because quiesce is a call to nova-compute over RPC, when the libvirt driver raises QemuGuestAgentNotEnabled, oslo.messaging will append the full traceback to the exception message [1] for the remote caller. So, a LOG.info(..., exp) log of the exception object will result in a log of the full traceback. Logging of the full traceback causes confusion for those debugging CI failures. We would rather not log the full traceback in this case where we are catching the exception and emitting an INFO message, so we should use exp.format_message() instead of oslo.messaging's __str__ override. [1] https://github.com/openstack/oslo.messaging/blob/40c25c2/oslo_messaging/_drivers/common.py#L212 Related-Bug: #1824315 Change-Id: Ibfedcb8814437c53081f5a2993ab84b25d73e557 (cherry picked from commit 6607041) (cherry picked from commit 2f7b103)
When an instance has VERIFY_RESIZE status, the instance disk on the source compute host has moved to <instance_path>/<instance_uuid>_resize folder, which leads to disk not found errors if the update available resource periodic task on the source compute runs before resize is actually confirmed. Icec2769bf42455853cbe686fb30fda73df791b25 almost fixed this issue but it will only set reraise to False when task_state is not None, that isn't the case when an instance is resized but resize is not yet confirmed. This patch adds a condition based on vm_state to ensure we don't reraise DiskNotFound exceptions while resize is not confirmed. Closes-Bug: 1774249 Co-Authored-By: Vladyslav Drok <[email protected]> Change-Id: Id687e11e235fd6c2f99bb647184310dfdce9a08d (cherry picked from commit 9661927) (cherry picked from commit f1280ab)
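The shape of the new condition, sketched with stand-in state constants instead of nova's real vm_states/task_states modules.

```python
class DiskNotFound(Exception):
    pass


# Simplified stand-ins for the real vm_states/task_states constants.
RESIZED = 'resized'
RESIZE_STATES = ('resize_prep', 'resize_migrating',
                 'resize_migrated', 'resize_finish')


def _collect_disk_info(instance):
    # Pretend the disk has already moved to <instance_uuid>_resize.
    raise DiskNotFound()


def get_instance_disk_info(instance, block_device_info=None):
    try:
        return _collect_disk_info(instance)
    except DiskNotFound:
        # Do not reraise while a resize is in flight *or* while the resize
        # awaits confirmation (task_state is None but vm_state is RESIZED),
        # since the disk has legitimately moved to the _resize directory.
        reraise = not (instance['task_state'] in RESIZE_STATES
                       or instance['vm_state'] == RESIZED)
        if reraise:
            raise
        return []


# VERIFY_RESIZE case: task_state is None, vm_state is RESIZED -> no reraise.
print(get_instance_disk_info({'task_state': None, 'vm_state': RESIZED}))
```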
Building on I0390c9ff51f49b063f736ca6ef868a4fa782ede5 we can now restore the original connection_info for a volume bdm when rolling back after a live migration failure. NOTE(lyarwood): Conflict as I8da38aec0fe4808273b8587ace3df9dbbc3ab576 is not present in stable/rocky. Conflicts: nova/compute/manager.py NOTE(lyarwood): Conflict as Ib61913d9d6ef6148170963463bb71c13f4272c5d is not present in stable/stein. Conflicts: nova/compute/manager.py Co-Authored-By: Lee Yarwood <[email protected]> Closes-Bug: #1780973 Change-Id: Ic4bbc075127823abfc559693a0db79e5e23f8209 (cherry picked from commit 1c480a7) (cherry picked from commit 4ff46ff)
The backport 37ac54a to fix bug 1821594 did not account for how the _delete_allocation_after_move method before Stein is tightly coupled to the migration status being set to "confirmed" which is what the _confirm_resize method does after self.driver.confirm_migration returns. However, if self.driver.confirm_migration raises an exception we still want to cleanup the allocations held on the source node and for that we call _delete_allocation_after_move. But because of that tight coupling before Stein, we need to temporarily mutate the migration status to "confirmed" to get the cleanup method to do what we want. This isn't a problem starting in Stein because change I0851e2d54a1fdc82fe3291fb7e286e790f121e92 removed that tight coupling on the migration status, so this is a stable branch only change. Note that we don't call self.reportclient.delete_allocation_for_instance directly since before Stein we still need to account for a migration that does not move the source node allocations to the migration record, and that logic is in _delete_allocation_after_move. A simple unit test assertion is added here but the functional test added in change I9d6478f492351b58aa87b8f56e907ee633d6d1c6 will assert the bug is fixed properly before Stein. Change-Id: I933687891abef4878de09481937d576ce5899511 Closes-Bug: #1821594
This test checks if allocations have been successfully cleaned up upon the driver failing during "confirm_migration". This backport is not clean due to change If6aa37d9b6b48791e070799ab026c816fda4441c, which refactored the testing framework. Within the refactor, new assertion methods were added and method "assertFlavorMatchesAllocation" was modified. This backport needed to be adapted in order to be compatible with the testing framework prior to If6aa37d9b6b48791e070799ab026c816fda4441c. Change-Id: I9d6478f492351b58aa87b8f56e907ee633d6d1c6 Related-bug: #1821594 (cherry picked from commit 873ac49) (cherry picked from commit d7d7f11)
This adds more information to the release note to make it clear that the nova-consoleauth service is deprecated and should not be deployed, except in cases of a live/rolling upgrade. Change-Id: I28fc8fa00a8402d0cbc738729fb43758524aeb80 (cherry picked from commit ea71592)
…s (part 2)" into stable/rocky
get_instance_diagnostics expected all interfaces to have a <target> element with a "dev" attribute in the instance XML. This is not the case for VFIO interfaces (<interface type="hostdev">). This caused an IndexError when looping over the interfaces. This patch fixes this issue by retrieving interfaces data directly from the guest XML and adding nics appropriately to the diagnostics object. The new functional test has been left out of this cherry-pick, since a lot of the test code that supports the test is missing and would have to be back-ported just for that one test, including a ramification of other commit dependencies. The functional code change itself is rather simple, and not having this functional test present in Rocky is considered to be low risk. Change-Id: I8ef852d449e9e637d45e4ac92ffc5d1abd8d31c5 Closes-Bug: #1821798 (cherry picked from commit 1d4f64b)
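A self-contained sketch of the approach, parsing a toy guest XML with the standard library rather than the driver's real helpers; the nics_from_guest_xml function is hypothetical.

```python
import xml.etree.ElementTree as ET

GUEST_XML = """
<domain>
  <devices>
    <interface type='bridge'>
      <mac address='52:54:00:aa:bb:cc'/>
      <target dev='tap0'/>
    </interface>
    <interface type='hostdev'>
      <mac address='52:54:00:dd:ee:ff'/>
      <!-- VFIO interfaces carry no <target dev='...'/> element -->
    </interface>
  </devices>
</domain>
"""


def nics_from_guest_xml(xml):
    # Walk the interfaces in the guest XML instead of assuming every one of
    # them exposes a <target dev=...> attribute.
    nics = []
    for iface in ET.fromstring(xml).findall('./devices/interface'):
        mac = iface.find('mac').get('address')
        target = iface.find('target')
        dev = target.get('dev') if target is not None else None
        nics.append({'mac_address': mac, 'dev': dev})
    return nics


print(nics_from_guest_xml(GUEST_XML))
```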
When block live-migration is run on an instance with a deleted glance image, image.cache() is called without specifying the instance disk size parameter, preventing the resize of the disk on the target host. Change-Id: Id0f05bb1275cc816d98b662820e02eae25dc57a3 Closes-Bug: #1829000 (cherry picked from commit c1782ba) (cherry picked from commit b45f47c)
If we're swapping from a multiattach volume that has more than one read/write attachment, another server on the secondary attachment could be writing to the volume which is not getting copied into the volume to which we're swapping, so we could have data loss during the swap. This change does volume read/write attachment counting for the volume we're swapping from and if there is more than one read/write attachment on the volume, the swap volume operation fails with a 400 BadRequest error. Conflicts: nova/tests/unit/compute/test_compute_api.py NOTE(mriedem): The conflict is due to Stein change I7d5bddc0aa1833cda5f4bcebe5e03bdd447f641a changing the decorators on the _test_snapshot_and_backup method. Depends-On: https://review.openstack.org/573025/ Closes-Bug: #1775418 Change-Id: Icd7fcb87a09c35a13e4e14235feb30a289d22778 (cherry picked from commit 5a1d159) (cherry picked from commit 9b21d10)
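A sketch of the attachment counting, assuming a simplified volume dict shaped like a Cinder API response; the check_swap_allowed helper is hypothetical, while the real check lives in the compute API.

```python
class BadRequest(Exception):
    pass


def check_swap_allowed(volume):
    # Count read/write attachments on the multiattach volume being swapped
    # from; a second writer could lose data during the copy.
    rw_count = sum(1 for att in volume.get('attachments', [])
                   if att.get('attach_mode', 'rw') != 'ro')
    if volume.get('multiattach') and rw_count > 1:
        raise BadRequest('Swapping a multi-attach volume with more than one '
                         'read/write attachment is not supported.')


# Allowed: only one read/write attachment exists.
check_swap_allowed({
    'multiattach': True,
    'attachments': [{'attach_mode': 'rw'}, {'attach_mode': 'ro'}],
})
```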
Before change I4244f7dd8fe74565180f73684678027067b4506e in Stein, when a cold migration would reschedule to conductor it would not send the RequestSpec, only the filter_properties. The filter_properties contain a primitive version of the instance group information from the RequestSpec for things like the group members, hosts and policies, but not the uuid. When conductor is trying to reschedule the cold migration without a RequestSpec, it builds a RequestSpec from the components it has, like the filter_properties. This results in a RequestSpec with an instance_group field set but with no uuid field in the RequestSpec.instance_group. That RequestSpec gets persisted and then because of change Ie70c77db753711e1449e99534d3b83669871943f, later attempts to load the RequestSpec from the database will fail because of the missing RequestSpec.instance_group.uuid. The test added here recreates the pre-Stein scenario which could still be a problem (on master) for any corrupted RequestSpecs for older instances. NOTE(mriedem): The ComputeTaskAPI.resize_instance stub is removed in this backport because it is not needed before Stein. Also, the PlacementFixture is in-tree before Stein so that is updated here. Change-Id: I05700c97f756edb7470be7273d5c9c3d76d63e29 Related-Bug: #1830747 (cherry picked from commit c96c7c5) (cherry picked from commit 8478a75)
This closes a bug concerning multi-registry configurations for Quobyte volumes due to no longer using the is_mounted() method that failed in that case. Besides, this adds exception handling for the unmount call that is issued on trying to mount an already mounted volume. NOTE: The original commit also added a new feature (fs type based validation) which is omitted in this backport. Closes-Bug: #1737131 Change-Id: Ia5a23ce1123a68608ee2ec6f2ac5dca02da67c59 (cherry picked from commit 05a73c0) (cherry picked from commit 656aa1c)
It's clear that we could have a RequestSpec.instance_group without a uuid field if the InstanceGroup is set from the _populate_group_info method which should only be used for legacy translation of request specs using legacy filter properties dicts. To workaround the issue, we look for the group scheduler hint to get the group uuid before loading it from the DB. The related functional regression recreate test is updated to show this solves the issue. Change-Id: I20981c987549eec40ad9762e74b0db16e54f4e63 Closes-Bug: #1830747 (cherry picked from commit da453c2) (cherry picked from commit 8569eb9)
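A sketch of the workaround, assuming a plain-dict RequestSpec; the get_group_uuid helper is illustrative only.

```python
def get_group_uuid(request_spec):
    # Prefer the uuid already present on the instance group; fall back to
    # the 'group' scheduler hint written by the API when the group object
    # was rebuilt from legacy filter_properties and lost its uuid.
    group = request_spec.get('instance_group')
    if group and group.get('uuid'):
        return group['uuid']
    hints = request_spec.get('scheduler_hints') or {}
    return hints.get('group')


spec = {'instance_group': {'policies': ['anti-affinity']},
        'scheduler_hints': {'group': 'b1c0a2de-f2f7-4b4e-8d0a-000000000000'}}
print(get_group_uuid(spec))
```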
The 'latest' URL reference is not valid; redirect to the valid nova rocky index. Change-Id: I9af223c5046dd8f0b85e2bc8502916a04065d469
In order to fix Bug #1809095, it is required to update PCI related VIFs with the original PCI address on the source host to allow the virt driver to properly unplug the VIF from the hypervisor, e.g. allow the proper VF representor to be unplugged from the integration bridge in case of a hardware offloaded OVS. To do so, some preliminary work is needed to allow code-sharing between nova.network.neutronv2 and nova.compute.manager. This change:
- Moves common logic to retrieve the PCI mapping between the source and destination node from nova.network.neutronv2 to objects.migration_context.
- Makes code adjustments to methods in nova.network.neutronv2 to accommodate the former.
Partial-Bug: #1809095 Conflicts: nova/network/neutronv2/api.py Change-Id: I9a5118373548c525b2b1c2271e7d210cc92e4f4c (cherry picked from commit 84bb00a)
Update PCI related VIFs with the original PCI address on the source host to allow the virt driver to properly unplug VIFs from the hypervisor, e.g. allow the proper VF representor to be unplugged from the integration bridge in case of a hardware offloaded OVS. While other approaches are possible for solving the issue, the approach proposed in this series allows the fix to be safely backported. Closes-Bug: #1809095 Conflicts in unit tests were trivial to solve; there are no changes in test logic. Conflicts: nova/tests/unit/compute/test_compute.py nova/tests/unit/compute/test_compute_mgr.py Change-Id: Id3c4d839fb1a6da47cfb366b65c0904d281a218f (cherry picked from commit 77a339c)
On hosts that provide Python 3.7, like Fedora 29, anything that doesn't set an explicit Python 3 version is currently failing because zVMCloudConnector explicitly states it doesn't support anything newer than Python 3.6 (who knows why):
$ tox -e docs
...
Collecting zVMCloudConnector===1.2.2 (from -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky (line 145))
Downloading https://files.pythonhosted.org/packages/a1/da/f5a4432ebcbb630ce9a25d0ec81599fff7e473b5df7f73595783e39dba2d/zVMCloudConnector-1.2.2.tar.gz (183kB)
ERROR: Complete output from command python setup.py egg_info:
ERROR: Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-dci83kk9/zVMCloudConnector/setup.py", line 18, in <module>
from zvmsdk import version as sdkversion
File "/tmp/pip-install-dci83kk9/zVMCloudConnector/zvmsdk/version.py", line 29, in <module>
raise RuntimeError('On Python 3, zvm sdk supports to Python 3.6')
RuntimeError: On Python 3, zvm sdk supports to Python 3.6
Explicitly request a basepython of python3.5, which is what we supported on stable/rocky, to avoid this. This is stable-only since the problem doesn't happen on master, where a newer zVMCloudConnector package is available. Change-Id: I99c8d115be695215a798a0fb2990279237ddf4d3 Signed-off-by: Stephen Finucane <[email protected]> Stable-Only
If ComputeNode.create() fails, the update_available_resource periodic will not try to create it again because it will be mapped in the compute_nodes dict and _init_compute_node will return early but trying to save changes to that ComputeNode object later will fail because there is no id on the object, since we failed to create it in the DB. This simply reverses the logic such that we only map the compute node if we successfully created it. Some existing _init_compute_node testing had to be changed since it relied on the order of when the ComputeNode object is created and put into the compute_nodes dict in order to pass the object along to some much lower-level PCI tracker code, which was arguably mocking too deep for a unit test. That is changed to avoid the low-level mocking and assertions and just assert that _setup_pci_tracker is called as expected. Conflicts: nova/compute/resource_tracker.py nova/tests/unit/compute/test_resource_tracker.py NOTE(mriedem): The resource_tracker.py conflict is due to not having I14a310b20bd9892e7b34464e6baad49bf5928ece in Rocky. The test conflicts are due to not having change I37e8ad5b14262d801702411c2c87e73550adda70 in Rocky. Change-Id: I9fa1d509a3de405d6246fb8670612c65c10cc93b Closes-Bug: #1839674 (cherry picked from commit f578146) (cherry picked from commit 648770b)
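A sketch of the reversed ordering, with a toy resource tracker standing in for the real one: the node is cached only after the create succeeds.

```python
class FakeResourceTracker(object):
    def __init__(self):
        self.compute_nodes = {}

    def _init_compute_node(self, nodename, resources):
        if nodename in self.compute_nodes:
            return
        cn = self._build_compute_node(nodename, resources)
        # Create first; only cache the node once the DB row exists, so a
        # failed create is retried on the next periodic run instead of
        # leaving an id-less object in compute_nodes.
        cn_id = self._create_in_db(cn)
        cn['id'] = cn_id
        self.compute_nodes[nodename] = cn

    def _build_compute_node(self, nodename, resources):
        return {'hypervisor_hostname': nodename, 'resources': resources}

    def _create_in_db(self, cn):
        # Stand-in for ComputeNode.create(); may raise on DB errors, in
        # which case the node is *not* added to self.compute_nodes.
        return 1
```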
The nova.objects.base.obj_equal_prims returns True or False. It does not assert anything. So the return value should be asserted in tests. Add assertTrue where the nova.objects.base.obj_equal_prims is called. Change-Id: I49460ec3b572ee14b32229e771a5499ff91e8722 Closes-Bug: #1839853 (cherry picked from commit 5c1d9dc) (cherry picked from commit a8e19af)
This adds a functional test which recreates bug 1839560 where the driver reports a node, then no longer reports it so the compute manager deletes it, and then the driver reports it again later (this can be common with ironic nodes as they undergo maintenance). The issue is that since Ia69fabce8e7fd7de101e291fe133c6f5f5f7056a in Rocky, the ironic node uuid is re-used for the compute node uuid but there is a unique constraint on the compute node uuid so when trying to create the compute node once the ironic node is available again, the compute node create fails with a duplicate entry error due to the duplicate uuid. To recreate this in the functional test, a new fake virt driver is added which provides a predictable uuid per node like the ironic driver. The test also shows that archiving the database is a way to workaround the bug until it's properly fixed. NOTE(mriedem): Since change Idaed39629095f86d24a54334c699a26c218c6593 is not in Rocky the PlacementFixture still comes from nova_fixtures. Change-Id: If822509e906d5094f13a8700b2b9ed3c40580431 Related-Bug: #1839560 (cherry picked from commit 89dd74a) (cherry picked from commit e7109d4)
There is a unique index on the compute_nodes.uuid column which means we can't have more than one compute_nodes record in the same DB with the same UUID even if one is soft deleted because the deleted column is not part of that unique index constraint. This is a problem with ironic nodes where the node is 1:1 with the compute node record, and when a node is undergoing maintenance the driver doesn't return it from get_available_nodes() so the ComputeManager.update_available_resource periodic task (soft) deletes the compute node record, but when the node is no longer under maintenance in ironic and the driver reports it, the ResourceTracker._init_compute_node code will fail to create the ComputeNode record again because of the duplicate uuid. This change handles the DBDuplicateEntry error in compute_node_create by finding the soft-deleted compute node with the same uuid and simply updating it to no longer be (soft) deleted. Closes-Bug: #1839560 Change-Id: Iafba419fe86446ffe636721f523fb619f8f787b3 (cherry picked from commit 8b00726) (cherry picked from commit 1b02166)
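A sketch of the duplicate handling with an in-memory dict standing in for the database; the real change catches oslo.db's DBDuplicateEntry around the SQL insert.

```python
class DBDuplicateEntry(Exception):
    pass


def compute_node_create(db, values):
    """Sketch of the duplicate-uuid handling, using a dict as the 'DB'."""
    uuid = values['uuid']
    row = db.get(uuid)
    if row is None:
        db[uuid] = dict(values, deleted=False)
        return db[uuid]
    if not row['deleted']:
        # A live row with this uuid already exists: genuine duplicate.
        raise DBDuplicateEntry(uuid)
    # The unique index also covers soft-deleted rows, so instead of failing
    # we resurrect the soft-deleted record and update it in place.
    row.update(values, deleted=False)
    return row


db = {'node-1': {'uuid': 'node-1', 'deleted': True}}
print(compute_node_create(db, {'uuid': 'node-1',
                               'hypervisor_hostname': 'ironic-1'}))
```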
There is no called_once_with() method on Mock objects. Use assert_called_once_with() or assert_has_calls() instead. NOTE(takashin): Additional changes are also applied to nova/tests/unit/virt/libvirt/test_vif.py because the following change has not been applied in stable/rocky: I047856982251fddc631679fb2dbcea0f3b0db097 Change-Id: I9f73fcbe7c3dfd64e75ac8224c13934b03443cd5 Closes-Bug: #1544522 (cherry picked from commit cf7d28e) (cherry picked from commit d901860)
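For reference, a short example of the correct Mock assertions next to the buggy call; on the mock versions shipped with Rocky-era Python interpreters the misspelled call simply creates a child mock attribute and "passes" without asserting anything.

```python
from unittest import mock

m = mock.Mock()
m('a', retry=True)
m('b')

# Correct multi-call assertion (raises AssertionError on mismatch):
m.assert_has_calls([mock.call('a', retry=True), mock.call('b')])

# Correct single-call assertion on a mock that was called exactly once:
m2 = mock.Mock()
m2('a', retry=True)
m2.assert_called_once_with('a', retry=True)

# The buggy pattern: this creates a child mock named 'called_once_with'
# and silently succeeds, checking nothing at all.
m.called_once_with('a', retry=True)
```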
openSUSE 42.3 is dead, remove the experimental job so that openSUSE 42.3 can be removed completely from CI. Note that on master this job was replaced with a newer job, but I don't want to add a test with newer OS here. Change-Id: I12c274c6bf88e25d9ebd4f11500b934a0ec41157
The 'has_calls' method does not exist among mock's assertion methods. Replace the 'has_calls' method with an 'assert_has_calls' method or an 'assert_called_once_with' method, and add an 'assertEqual' check before each 'assert_has_calls' method. Conflicts: nova/tests/unit/compute/test_compute_mgr.py NOTE(takashin): The conflict is due to not having the following change in stable/rocky: Ic062446e5c620c89aec3065b34bcdc6bf5966275 Change-Id: I4b606fce473d064b9bb00213696c075cea020aaf Closes-Bug: #1840200 (cherry picked from commit ad482e5) (cherry picked from commit fa59033)
Change Icddbe4760eaff30e4e13c1e8d3d5d3f489dac3c4 was intended for the API service to check all cells for the minimum nova-compute service version when [upgrade_levels]/compute=auto. That worked in the gate with devstack because we don't configure nova-compute with access to the database and run nova-compute with a separate nova-cpu.conf so even if nova-compute is on the same host as the nova-api service, they aren't using the same config file (nova-api runs with nova.conf which has access to the API DB obviously). The problem is when nova-compute is configured with [upgrade_levels]/compute=auto and an [api_database]/connection, there are flows that can try to hit the API database directly because of the _determine_version_cap method. For example, the _sync_power_states periodic task trying to stop an instance, or even simple inter-compute communication over RPC like during a resize. This change simply catches the DBNotAllowed exception, logs a more useful error message, and re-raises the exception. In addition, the config help for the [api_database] group and "configuration" option specifically are updated to mention they should not be set on the nova-compute service. Change-Id: Iac2911a7a305a9d14bc6dadb364998f3ecb9ce42 Related-Bug: #1807044 Closes-Bug: #1839360 (cherry picked from commit 7d7d585) (cherry picked from commit bd03723)
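A sketch of the new handling, with a stand-in DBNotAllowed exception and hypothetical helper names; the point is only the catch, log, and re-raise pattern.

```python
import logging

LOG = logging.getLogger(__name__)


class DBNotAllowed(Exception):
    """Stand-in for the exception nova raises when a service that must not
    touch the database attempts a direct DB call."""


def _lookup_minimum_service_version(binary):
    # Stand-in for the DB query that nova-compute is not allowed to make.
    raise DBNotAllowed(binary)


def _determine_version_cap(target):
    try:
        return _lookup_minimum_service_version('nova-compute')
    except DBNotAllowed:
        # Re-raise with a hint pointing the operator at the real
        # misconfiguration instead of a bare DB access traceback.
        LOG.error('This service is configured for access to the API '
                  'database but is not allowed to directly access the '
                  'database. Remove the [api_database]/connection option '
                  'from this service\'s configuration.')
        raise
```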
Nova allows rebuild of instance when vm_state is ERROR. [1] The vm_state is restored to ACTIVE only after a successful build. This means rebuilding a baremetal instance using the Ironic driver is impossible because wait_for_active fails if vm_state=ERROR is found. This is a regression introduced in a previous change which added the ability to delete an instance in spawning state. [2] This present change will skip the abort installation logic if task_state is REBUILD_SPAWNING while preserving the previous logic. [1] https://bugs.launchpad.net/nova/+bug/1183946 [2] https://bugs.launchpad.net/nova/+bug/1455000 Change-Id: I857ad7264f1a7ef1263d8a9d4eca491d6c8dce0f Closes-bug: #1735009 (cherry picked from commit 1819718) (cherry picked from commit c21cbf2)
_detect_nbd_devices uses the filter builtin internally to filter valid devices. In python 2, filter returns a list. In python 3, filter returns an iterable or generator function. This change eagerly converts the result of calling filter to a list to preserve the python 2 behaviour under python 3. Closes-Bug: #1840068 Change-Id: I25616c5761ea625a15d725777ae58175651558f8 (cherry picked from commit fc9fb38) (cherry picked from commit e135afe)
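The Python 3 behaviour and the fix in miniature; the device list here is made up.

```python
def _detect_nbd_devices():
    # filter() returns a list on Python 2 but a lazy iterator on Python 3;
    # materialise it with list() so the result can be indexed and iterated
    # over more than once.
    devices = ['nbd0', 'loop0', 'nbd1', 'sda']
    return list(filter(lambda d: d.startswith('nbd'), devices))


nbd = _detect_nbd_devices()
print(len(nbd))   # indexing and len() now behave the same on Python 2 and 3
print(nbd[0])
```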
This is a partial revert of commit 9606c80 which added the 'path' query parameter to work with noVNC v1.1.0. This broke all other console types using websockify server (serial, spice) because the websockify server itself doesn't know how to handle the 'path' query parameter. It is the noVNC vnc_lite.html file which parses the 'path' variable and uses it as the url to the websockify server. So, all other console types should *not* be generating a console access url with a 'path' query parameter, only noVNC. Closes-Bug: #1845243 TODO(melwitt): Figure out how to test serial and/or spice console in the gate Conflicts: nova/tests/unit/console/test_websocketproxy.py NOTE(melwitt): The conflict is because change I7f5f08691ca3f73073c66c29dddb996fb2c2b266 is not in Rocky. Change-Id: I9521f21a685edc44121d75bdf534c201fa87c2d7 (cherry picked from commit 54125a7) (cherry picked from commit e736bab) (cherry picked from commit ff29f70)
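A sketch of the resulting URL generation, with a hypothetical get_console_url helper; only the noVNC case carries the 'path' query parameter.

```python
try:
    from urllib.parse import urlencode          # Python 3
except ImportError:
    from urllib import urlencode                # Python 2


def get_console_url(base_url, console_type, token):
    # Only noVNC's vnc_lite.html understands the 'path' query parameter;
    # serial and SPICE consoles talk to websockify directly, so they keep
    # the plain token-only URL.
    if console_type == 'novnc':
        qparams = {'path': '?token=%s' % token}
    else:
        qparams = {'token': token}
    return '%s?%s' % (base_url, urlencode(qparams))


print(get_console_url('http://host:6080/vnc_lite.html', 'novnc', 'abc'))
print(get_console_url('http://host:6083/', 'serial', 'abc'))
```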
…llowed" into stable/rocky
When we try to use either virtio1.0-block or virtio1.0-net, it is correctly rejected by libvirt. We get these returned from libosinfo for newer operating systems that support virtio1.0. As we want to support libvirt versions older than 5.2.0, it's best we just request "virtio"; please see: https://libvirt.org/formatdomain.html#elementsVirtioTransitional You can see virtio1.0-net and virtio-block being added here: https://gitlab.com/libosinfo/osinfo-db/blob/master/data/os/fedoraproject.org/fedora-23.xml.in#L31 Change-Id: I633faae47ad5a33b27f5e2eef6e0107f60335146 Closes-Bug: #1835400 (cherry picked from commit 6be668e) (cherry picked from commit a06922d)
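A sketch of the idea as a simple translation table; the mapping and function name are hypothetical, while the real change adjusts how nova interprets the device names reported by libosinfo.

```python
# Hypothetical translation table: collapse the transitional/1.0-only device
# names reported by libosinfo back to the plain "virtio" model name that
# every supported libvirt version accepts.
_LIBOSINFO_TO_LIBVIRT_MODEL = {
    'virtio1.0-block': 'virtio',
    'virtio1.0-net': 'virtio',
    'virtio-block': 'virtio',
    'virtio-net': 'virtio',
}


def libvirt_model_for(osinfo_device_name):
    return _LIBOSINFO_TO_LIBVIRT_MODEL.get(osinfo_device_name,
                                           osinfo_device_name)


assert libvirt_model_for('virtio1.0-net') == 'virtio'
assert libvirt_model_for('e1000') == 'e1000'
```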
markgoddard approved these changes on Oct 23, 2019