Age | Commit message | Author
28 hours | Switch devstack jobs to Xenial (HEAD, master) | Ian Wienand

Change I8749ed24d5f451d29f767ebb2761abd743b7d306 modified the devstack based jobs to run on Bionic. Unfortunately we're not quite ready for that; one issue is that our devstack dependencies require zypper for opensuse, which is not on bionic [1]. diskimage-builder excludes zypper on bionic in bindep, but we don't have a mechanism to use that (yet [2]).

For now, switch them back to Xenial to retain the status quo. We can then take a more controlled approach to work on modernising them.

[1] https://bugs.launchpad.net/ubuntu/+source/zypper/+bug/1808230
[2] https://review.openstack.org/624852

Depends-On: https://review.openstack.org/625596
Change-Id: I56646e49264dd844f5818a84e04965863542f572

Notes (review):
  Code-Review+2: Jens Harbott (frickler) <j.harbott@x-ion.de>
  Code-Review+2: Monty Taylor <mordred@inaugust.com>
  Workflow+1: Monty Taylor <mordred@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Mon, 17 Dec 2018 17:44:13 +0000
  Reviewed-on: https://review.openstack.org/624855
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
7 days | Merge "Fix race in test_handler_poll_session_expired" | Zuul

12 days | Include host_id for openstack provider | Paul Belanger

When we create a server using the openstack provider, in certain conditions it is helpful to collect the host_id value. As an example, Zuul will pass this information through into a job inventory file, which will allow an operator to better profile where jobs run within a cloud. This can be helpful when trying to debug random jobs timing out within a cloud.

Change-Id: If29397e67a470462561f24c746375b8291ac43ab
Signed-off-by: Paul Belanger <pabelanger@redhat.com>

Notes (review):
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 07 Dec 2018 14:10:48 +0000
  Reviewed-on: https://review.openstack.org/623107
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
12 days | Fix race in test_handler_poll_session_expired | David Shrewsbury

Just waiting for a node request state change does not guarantee that we have actually entered the poll() portion of request handling. This was causing the mock.call_count assert to fail on occasion. Change this to simply wait on the call_count to increment, eliminating that race.

There was also another potential race in checking that the request handler was removed from the request_handlers structure. It would be possible for the same request to re-enter active handling before we had a chance to check that it was removed when the session exception was thrown. We handle that race by setting the request state to FAILED on the first exception.

Change-Id: I646b82243eb7f8c4e83d6678c2c0d265d99e51e0

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Workflow+1: Clark Boylan <cboylan@sapwetik.org>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Tue, 11 Dec 2018 19:00:53 +0000
  Reviewed-on: https://review.openstack.org/623269
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
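The fix described above — waiting on the mock's call_count itself rather than on a related state change — can be sketched as follows. This is a minimal illustration, not nodepool's actual test code; `wait_for_call_count` is a hypothetical helper:

```python
import time
from unittest import mock

def wait_for_call_count(m, expected, timeout=5.0, interval=0.01):
    """Poll a mock until its call_count reaches the expected value.

    Waiting on the counter itself, rather than on some related state
    change, closes the race where the state flips before the mocked
    method has actually been entered.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if m.call_count >= expected:
            return True
        time.sleep(interval)
    return False

poll = mock.Mock()
poll()  # simulate the handler entering poll()
assert wait_for_call_count(poll, 1)
```

Polling with a deadline keeps the assertion deterministic without a fixed sleep that would either slow the test or still race.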
13 days | Add an upgrade release note for schema change | David Shrewsbury

The node deletion race fix (b3053779a676b2deb23eaf2df6832d3491932bf8) alters the ZooKeeper schema, which requires a total launcher restart. A release note was not included there.

Change-Id: I346beb49ecbd31cdaf51ada1433600886f88000e

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+1: Melvin Hillsman <mrhillsman@gmail.com>
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Wed, 05 Dec 2018 21:17:34 +0000
  Reviewed-on: https://review.openstack.org/623046
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
13 days | Merge "Make estimatedNodepoolQuotaUsed more resilient" | Zuul

13 days | Merge "Set type for error'ed instances" | Zuul

13 days | Make estimatedNodepoolQuotaUsed more resilient | Tobias Henkel

We had the case that we stored znodes without pool or type. At least znodes without type break the quota calculation and can lead to wedged providers. So make that more resilient and log exceptions per node instead of failing the complete calculation. This way we don't wedge in case we have bogus data in zk, while still being able to debug what's wrong with certain znodes.

Change-Id: I4a33ffbbf3684dc3830913ed8dc7b158f2426602

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Wed, 05 Dec 2018 16:14:50 +0000
  Reviewed-on: https://review.openstack.org/622906
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
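The per-node guard this change describes can be sketched roughly as below. The data shapes and function name are made up for illustration; this is not nodepool's actual estimatedNodepoolQuotaUsed:

```python
import logging

log = logging.getLogger("quota")

def estimated_quota_used(nodes, pools):
    """Sum per-node resource usage, skipping (and logging) bogus znodes.

    A node missing its pool or type no longer aborts the whole
    calculation; it is logged and excluded, so one bad record cannot
    wedge the provider.
    """
    used = 0
    for node in nodes:
        try:
            pool = pools[node["pool"]]
            used += pool[node["type"]]
        except Exception:
            log.exception("Ignoring bogus node record: %s", node)
    return used

pools = {"main": {"small": 1, "large": 4}}
nodes = [{"pool": "main", "type": "small"},
         {},  # bogus znode with no pool/type: logged, not fatal
         {"pool": "main", "type": "large"}]
assert estimated_quota_used(nodes, pools) == 5
```

The key design point is the try/except inside the loop rather than around it: each bad record costs one log line instead of the whole calculation.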
13 days | Set type for error'ed instances | Tristan Cacqueray

When a server creation fails but has an external id, we create a new znode to offload the deletion of that node. This currently misses the node type, which will trigger an exception during node launch [1]. This wedges the provider until the node deleter kicks in and deletes that node successfully. Fix this by storing the node type in this znode.

[1] Exception:
  Traceback (most recent call last):
    File "nodepool/driver/__init__.py", line 639, in run
      self._runHandler()
    File "nodepool/driver/__init__.py", line 563, in _runHandler
      self._waitForNodeSet()
    File "nodepool/driver/__init__.py", line 463, in _waitForNodeSet
      if not self.hasRemainingQuota(ntype):
    File "nodepool/driver/openstack/handler.py", line 314, in hasRemainingQuota
      self.manager.estimatedNodepoolQuotaUsed())
    File "nodepool/driver/openstack/provider.py", line 164, in estimatedNodepoolQuotaUsed
      if node.type[0] not in provider_pool.labels:
  IndexError: list index out of range

Change-Id: I67b269069dddb8349959802d7b1ee049a826d0c5
Co-authored-by: Tobias Henkel <tobias.henkel@bmw.de>

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Wed, 05 Dec 2018 16:14:50 +0000
  Reviewed-on: https://review.openstack.org/622101
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
14 days | Add cleanup routine to delete empty nodes | David Shrewsbury

We've discovered that our node deletion process has the possibility to leave empty (i.e., no data) node znodes in ZooKeeper. Although a fix for this has been merged, we need a way to remove this extraneous data.

Change-Id: I6596060f5026088ce987e5d0d7c18b00a6b77c5a

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Wed, 05 Dec 2018 05:47:39 +0000
  Reviewed-on: https://review.openstack.org/622616
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
14 days | Merge "Fix race when deleting Node znodes" | Zuul

14 days | Fix race when deleting Node znodes | James E. Blair

There is a race where:

  Thread A                     Thread B
  locks the node
  deletes the instance
  deletes the lock
                               locks the node
  deletes the node data
                               loads empty data from the node

Thread A only deletes the node data because during its recursive delete, a new child node (the lock from B) has appeared. Thread B proceeds with invalid data and errors out.

We can detect this condition because the node has no data. It may be theoretically possible to lock the node and load the data before the node data are deleted as well, so to protect against this case, we set a new node state, "deleted", before we start deleting anything. If thread B encounters either of those two conditions (no data, or a "deleted" state), we know we've hit this race and can safely attempt to recursively delete the node again.

Change-Id: Iea5558f9eb471cf1096120b06c098f8f41ab59d9

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Tue, 04 Dec 2018 19:20:05 +0000
  Reviewed-on: https://review.openstack.org/622403
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
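The detection logic described in this message can be sketched as a small stand-in, with ZooKeeper replaced by plain dicts and a callback; all names here are illustrative, not nodepool's actual code:

```python
DELETED = "deleted"

def handle_locked_node(node_data, delete_recursively):
    """After locking a node, decide whether we hit the delete race.

    Empty data, or a state of "deleted" (which is set before any
    deletion work begins), means another actor was mid-delete:
    retry the recursive delete instead of proceeding with invalid
    data.
    """
    if not node_data or node_data.get("state") == DELETED:
        delete_recursively()
        return "re-deleted"
    return "proceed"

calls = []
assert handle_locked_node({}, lambda: calls.append(1)) == "re-deleted"
assert handle_locked_node({"state": DELETED}, lambda: calls.append(1)) == "re-deleted"
assert handle_locked_node({"state": "ready"}, lambda: calls.append(1)) == "proceed"
assert calls == [1, 1]
```

Writing the "deleted" marker first makes the race window observable: even if thread B wins the lock and reads data before it vanishes, the marker tells it the data is not to be trusted.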
2018-12-04 | Merge "Make launcher debug slightly less chatty" | Zuul

2018-12-03 | Set pool for error'ed instances | David Shrewsbury

Change-Id: Icbad2b01c694fcf487a0d2661a762c3fd76035b5

Notes (review):
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Tue, 04 Dec 2018 03:42:07 +0000
  Reviewed-on: https://review.openstack.org/621681
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-12-03 | Make launcher debug slightly less chatty | James E. Blair

This reduces most of the new launcher debug messages now that the relative_priority behavior has been verified, though it retains a few additions.

Change-Id: I5408105f92aab5baf78ec2ea80f8c4427a2a695b

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Tue, 04 Dec 2018 17:34:22 +0000
  Reviewed-on: https://review.openstack.org/621675
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-12-01 | Merge "Fix print-zk tool for python3" | Zuul

2018-12-01 | Merge "Add relative priority to request list" | Zuul

2018-12-01 | Add more debug lines to request handler | James E. Blair

So that we can see the new request processing flow better.

Change-Id: I2c2d40b53a93cbf7632c657919d70ce1876e6dea

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: Monty Taylor <mordred@inaugust.com>
  Workflow+1: Monty Taylor <mordred@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Sat, 01 Dec 2018 14:20:19 +0000
  Reviewed-on: https://review.openstack.org/621321
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-12-01 | Merge "Don't update caches with empty zNodes" | Zuul

2018-12-01 | Merge "Log exceptions deleting ZK nodes" | Zuul

2018-12-01 | Merge "Ensure that completed handlers are removed frequently" | Zuul

2018-11-30 | Merge "Remove updating stats debug log" | Zuul

2018-11-30 | Add relative priority to request list | James E. Blair

Change-Id: Ief6dfc800097aa239d07a50c34bc72dcb328d4c5

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: Monty Taylor <mordred@inaugust.com>
  Workflow+1: Monty Taylor <mordred@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Sat, 01 Dec 2018 14:20:20 +0000
  Reviewed-on: https://review.openstack.org/621314
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Don't update caches with empty zNodes | Tobias Henkel

We found out that we leak some empty znodes, which were simply ignored by nodepool before the caching changes. Now that we know these exist, ignore them here as well so we don't get spammed by exceptions.

Change-Id: I00a0ad2c7f645a2d03bd1674bf5d050c38b1dd50

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: Paul Belanger <pabelanger@redhat.com>
  Workflow+1: Paul Belanger <pabelanger@redhat.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Sat, 01 Dec 2018 04:14:46 +0000
  Reviewed-on: https://review.openstack.org/621305
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Log exceptions deleting ZK nodes | James E. Blair

Change-Id: I58836dafbf0846a4d6beb4a2ae5b8dfde3d5aec8

Notes (review):
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Workflow+1: Clark Boylan <cboylan@sapwetik.org>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Sat, 01 Dec 2018 03:51:02 +0000
  Reviewed-on: https://review.openstack.org/621301
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Merge "Log exceptions in cache listener events" | Zuul

2018-11-30 | Log exceptions in cache listener events | James E. Blair

Change-Id: Ife566e09c23b644d8d777c0f59f1effb6be3ec6c

Notes (review):
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Workflow+1: Clark Boylan <cboylan@sapwetik.org>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 22:17:38 +0000
  Reviewed-on: https://review.openstack.org/621292
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Fix leak detection in unmanaged quota calculation | Tobias Henkel

The negation in the unmanaged quota calculation is wrong. This leads to nodepool thinking that all of its nodes are leaked.

Change-Id: I60b48a80dc597afa2ceb0a3faddd4c73ffa48c6f

Notes (review):
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 21:34:24 +0000
  Reviewed-on: https://review.openstack.org/621286
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Remove updating stats debug log | Tobias Henkel

This seems to be a bit too chatty in the logs.

Change-Id: I6f8f42d87c524b7a9b07091adeff296f6a4ce9d1

Notes (review):
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Code-Review+2: Paul Belanger <pabelanger@redhat.com>
  Workflow+1: Paul Belanger <pabelanger@redhat.com>
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Workflow+1: Clark Boylan <cboylan@sapwetik.org>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 23:28:07 +0000
  Reviewed-on: https://review.openstack.org/621283
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Block 0.20.0 of openstacksdk | Monty Taylor

There is a bug in Rackspace Public Cloud that causes keystoneauth to fail doing discovery, which 0.20.0 of openstacksdk exposes because it starts using keystoneauth discovery directly. Until the keystoneauth fix lands and is released, running nodepool with 0.20.0 of openstacksdk will fail when attempting to use Rackspace Public Cloud.

Just for the record, this is due to the fact that Rackspace Public Cloud:

- has invalid integer project ids
- still senselessly keeps them in the compute service URL
- blocks access to the compute discovery document

The keystone team are kindly accepting a workaround fix to keystoneauth even though it is a workaround for what is a completely invalid setup.

Change-Id: I72ec16ecb7770d97aa5703bdcfd3e8b188c89f19

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 20:08:57 +0000
  Reviewed-on: https://review.openstack.org/621272
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-30 | Merge "OpenStack: store ZK records for launch error nodes" | Zuul

2018-11-30 | Merge "OpenStack: count leaked nodes in unmanaged quota" | Zuul

2018-11-30 | Ensure that completed handlers are removed frequently | Tobias Henkel

On a busy system it can happen that assignHandlers takes quite some time (we saw occurrences of more than 10 minutes). Within this time no node request is marked as fulfilled, even if the nodes are there. A possible solution is to return from assignHandlers frequently during the iteration so we can remove completed handlers and then proceed with assigning handlers.

Change-Id: I10f40504c81d532e6953d7af63c5c58fd5283573

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Sat, 01 Dec 2018 01:16:55 +0000
  Reviewed-on: https://review.openstack.org/610029
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
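The batching idea — returning from the assignment loop after a bounded amount of work so the main loop can reap completed handlers — can be sketched like this. The names and batch mechanics are hypothetical, not nodepool's actual assignHandlers:

```python
def assign_handlers(requests, assign, batch_size=10):
    """Process at most batch_size requests, then return.

    Returning frequently lets the caller's main loop interleave
    removal of completed handlers instead of stalling on one long
    iteration over every outstanding request.
    """
    for count, request in enumerate(requests):
        if count >= batch_size:
            return False  # more work remains; caller loops back soon
        assign(request)
    return True  # all requests were handled this pass

handled = []
done = assign_handlers(range(25), handled.append, batch_size=10)
assert done is False and len(handled) == 10
```

The trade-off is between per-pass latency and fairness: a smaller batch means completed handlers are cleaned up sooner, at the cost of more passes over the request listing.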
2018-11-30 | Merge "Support relative priority of node requests" | Zuul

2018-11-29 | OpenStack: store ZK records for launch error nodes | James E. Blair

If we get an error on create server, we currently leak the instance because we don't store the external id of the instance in ZK. It should eventually be deleted since it's a leaked instance, but we try to keep track of as much as possible. OpenStackSDK can often return the external id to us in these cases, so handle that case and store the external id on a ZK record so that the instance is correctly accounted for.

Change-Id: I7ec448e9a7cf6cd01903bf7b5bf4b07a1c143fb8

Notes (review):
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Code-Review+2: Paul Belanger <pabelanger@redhat.com>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 14:29:51 +0000
  Reviewed-on: https://review.openstack.org/621043
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-29 | OpenStack: count leaked nodes in unmanaged quota | James E. Blair

If a node has leaked, it won't be counted against quota at all. Because it is still recognized as belonging to the nodepool provider, it doesn't count against unmanaged quota; however, there is no zk record for it, so it also isn't counted against managed quota. This throws quota calculations off for as long as the leaked instances exist in nova. To correct this, count leaked nodes against unmanaged quota.

Change-Id: I5a658649b881ed80b777096ec48cb6207f2a9cc6

Notes (review):
  Code-Review+2: Paul Belanger <pabelanger@redhat.com>
  Code-Review+2: Clark Boylan <cboylan@sapwetik.org>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Workflow+1: Tobias Henkel <tobias.henkel@bmw.de>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 14:29:50 +0000
  Reviewed-on: https://review.openstack.org/621040
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
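The accounting rule can be illustrated with a toy calculation; the data shapes here are made up and far simpler than nodepool's real provider/ZK records:

```python
def quota_usage(instance_ids, zk_external_ids):
    """Split provider instances into managed vs unmanaged usage.

    An instance with a matching ZK record counts against managed
    quota; one without (including leaked instances awaiting cleanup)
    counts against unmanaged quota, so nothing falls into the gap
    between the two.
    """
    managed = unmanaged = 0
    for inst in instance_ids:
        if inst in zk_external_ids:
            managed += 1
        else:
            unmanaged += 1  # leaked or genuinely unmanaged
    return managed, unmanaged

# Three instances in the cloud, only one tracked in ZK: the other
# two (e.g. leaked) now count against unmanaged quota.
assert quota_usage(["a", "b", "c"], {"a"}) == (1, 2)
```

Before the fix, a leaked instance matched neither bucket, so the available-quota estimate was too high by exactly the number of leaked instances.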
2018-11-29 | Support relative priority of node requests | James E. Blair

The launcher now processes node requests in relative priority order. This relies on the new node request cache, because the relative priority field may be updated at any time by the requestor.

Needed-By: https://review.openstack.org/615356
Change-Id: If893c34c6652b9649bfb6f1d9f7b942c549c98b4

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Workflow+1: James E. Blair <corvus@inaugust.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Fri, 30 Nov 2018 01:58:40 +0000
  Reviewed-on: https://review.openstack.org/620954
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
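The ordering itself is simple to sketch; field names and the tiebreak on request id are assumptions for illustration, not nodepool's exact sort key:

```python
def order_requests(requests):
    """Order node requests by relative priority, then by request id.

    relative_priority may be rewritten by the requestor at any time,
    which is why the launcher reads it from a live cache rather than
    from a one-shot listing.
    """
    return sorted(requests,
                  key=lambda r: (r["relative_priority"], r["id"]))

reqs = [{"id": "300-2", "relative_priority": 1},
        {"id": "300-1", "relative_priority": 0},
        {"id": "300-3", "relative_priority": 0}]
ordered = order_requests(reqs)
assert [r["id"] for r in ordered] == ["300-1", "300-3", "300-2"]
```

Because `sorted` is stable and the key is recomputed each pass, a requestor lowering a request's relative_priority moves it forward on the launcher's very next iteration.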
2018-11-29 | Merge "Asynchronously update node statistics" | Zuul

2018-11-29 | Merge "Add arbitrary node attributes config option" | Zuul

2018-11-29 | Asynchronously update node statistics | Tobias Henkel

We currently update the node statistics on every node launch or delete. This cannot use caching at the moment, because when the statistics are updated we might end up pushing slightly outdated data. If there is then no further update for a longer time, we end up with broken gauges. We already get update events from the node cache, so we can use that to centrally trigger node statistics updates. This is combined with leader election so there is only a single launcher that keeps the statistics up to date. This will ensure that the statistics are not cluttered by several launchers pushing their own slightly different view into the stats.

As a side effect this reduces the runtime of a test that creates 200 nodes from 100s to 70s on my local machine.

Change-Id: I77c6edc1db45b5b45be1812cf19eea66fdfab014

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Thu, 29 Nov 2018 21:00:14 +0000
  Reviewed-on: https://review.openstack.org/619589
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-29 | Add arbitrary node attributes config option | David Shrewsbury

This config option, available under each provider pool section, can contain static key-value pairs that will be stored in ZooKeeper on each Node znode. This will allow us to pass along arbitrary data from nodepool to any user of nodepool (specifically, zuul). Initially, this will be used to pass along zone information to zuul executors.

Change-Id: I126d37a8c0a4f44dca59c11f76a583b9181ab653

Notes (review):
  Code-Review+2: Monty Taylor <mordred@inaugust.com>
  Code-Review+2: Tobias Henkel <tobias.henkel@bmw.de>
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: Paul Belanger <pabelanger@redhat.com>
  Workflow+1: Paul Belanger <pabelanger@redhat.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Thu, 29 Nov 2018 20:14:31 +0000
  Reviewed-on: https://review.openstack.org/620691
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
2018-11-29 | Merge "Only setup zNode caches in launcher" | Zuul

2018-11-29 | Merge "Add second level cache to node requests" | Zuul

2018-11-29 | Merge "Add second level cache of nodes" | Zuul

2018-11-29 | Merge "Update node request during locking" | Zuul

2018-11-28 | Merge "Cache node request zNodes" | Zuul

2018-11-28 | Merge "Fix test race in test_hold_expiration_higher_than_default" | Zuul

2018-11-28 | Fix test race in test_hold_expiration_higher_than_default | Tobias Henkel

Since introducing znode caching, the test test_hold_expiration_higher_than_default fails sometimes because the last assertion could get slightly outdated data. Fix the race by leveraging iterate_timeout.

Change-Id: Idf76e62b87c29fa827e2fbacef57dbc60e4f3b7b

Notes (review):
  Code-Review+2: Monty Taylor <mordred@inaugust.com>
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Wed, 28 Nov 2018 18:30:45 +0000
  Reviewed-on: https://review.openstack.org/620222
  Project: openstack-infra/nodepool
  Branch: refs/heads/master
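The iterate_timeout pattern looks roughly like this; the helper's real signature in nodepool may differ, so treat this as a sketch of the idea rather than the actual utility:

```python
import time

def iterate_timeout(max_seconds, purpose, interval=0.01):
    """Yield repeatedly until the deadline, then raise.

    Asserting inside such a loop retries until cached data catches
    up, instead of failing once on a single slightly stale read.
    """
    deadline = time.monotonic() + max_seconds
    count = 0
    while time.monotonic() < deadline:
        count += 1
        yield count
        time.sleep(interval)
    raise TimeoutError("Timed out waiting for %s" % purpose)

value = 0
for _ in iterate_timeout(5, "cached value to settle"):
    value += 1  # stand-in for re-reading the cached znode
    if value >= 3:
        break
assert value == 3
```

A test written this way converts "the cache was momentarily behind" from a hard failure into a brief retry, while still failing loudly if the condition never becomes true.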
2018-11-27 | Merge "Update devstack test to Fedora 28" | Zuul

2018-11-26 | Only setup zNode caches in launcher | Tobias Henkel

We currently only need to set up the zNode caches in the launcher. Within the commandline client and the builders this is just unnecessary work.

Change-Id: I03aa2a11b75cab3932e4b45c5e964811a7e0b3d4

Notes (review):
  Code-Review+2: James E. Blair <corvus@inaugust.com>
  Code-Review+2: David Shrewsbury <shrewsbury.dave@gmail.com>
  Workflow+1: David Shrewsbury <shrewsbury.dave@gmail.com>
  Verified+2: Zuul
  Submitted-by: Zuul
  Submitted-at: Thu, 29 Nov 2018 09:59:41 +0000
  Reviewed-on: https://review.openstack.org/619440
  Project: openstack-infra/nodepool
  Branch: refs/heads/master