Commit Graph

68 Commits

James E. Blair be3edd3e17 Convert openstack driver to statemachine
This updates the OpenStack driver to use the statemachine framework.

The goal is to revise all remaining drivers to use the statemachine
framework for two reasons:

1) We can dramatically reduce the number of threads in Nodepool which
is our biggest scaling bottleneck.  The OpenStack driver already
includes some work in that direction, but in a way that is unique
to it and not easily shared by other drivers.  The statemachine
framework is an extension of that idea implemented so that every driver
can use it.  This change further reduces the number of threads needed
even for the openstack driver.

2) By unifying all the drivers with a simple interface, we can prepare
to move them into Zuul.

There are a few updates to the statemachine framework to accommodate some
features that, to date, only the OpenStack driver has used.

A number of tests need slight alteration since the openstack driver is
the basis of the "fake" driver used for tests.
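
As a rough illustration of the kind of interface the statemachine
framework asks of a driver (class and method names here are illustrative,
not the exact Nodepool API), a driver supplies small state machines that a
shared runner thread advances one non-blocking step at a time:

    class StateMachine:
        START = 'start'
        COMPLETE = 'complete'

        def __init__(self):
            self.state = self.START
            self.complete = False

        def advance(self):
            # One non-blocking step; a single shared runner thread calls
            # this repeatedly for every outstanding machine, so there is no
            # thread-per-node.
            raise NotImplementedError


    class CreateServerStateMachine(StateMachine):
        CREATING = 'creating server'

        def __init__(self, adapter, hostname, label):
            super().__init__()
            self.adapter = adapter      # hypothetical cloud adapter object
            self.hostname = hostname
            self.label = label
            self.external_id = None

        def advance(self):
            if self.state == self.START:
                self.external_id = self.adapter.createServer(
                    self.hostname, self.label)
                self.state = self.CREATING
            elif self.state == self.CREATING:
                if self.adapter.serverIsReady(self.external_id):
                    self.state = self.COMPLETE
                    self.complete = True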

Change-Id: Ie59a4e9f09990622b192ad840d9c948db717cce2
2023-01-10 10:30:14 -08:00
James E. Blair f7ed1eb1ea Fix openstack image deletion with newer sdk
Openstacksdk version 0.103.0 removed an informal API we were using
which accepted an abbreviated dictionary as input to the delete_image
method.

It now requires either a complete image object or just a name_or_id,
so we now pass in the id.

The sdk min version is increased since older versions have not been
tested with this.
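
A minimal sketch of the call after this change, assuming an openstacksdk
cloud connection and a stored upload id ('mycloud' and the id value are
placeholders):

    import openstack

    conn = openstack.connect(cloud='mycloud')       # placeholder cloud name
    external_image_id = 'stored-glance-image-id'    # placeholder upload id

    # Newer openstacksdk wants a full image object or a name_or_id; the
    # abbreviated dict we used before no longer works, so pass the id.
    conn.delete_image(external_image_id)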

Change-Id: I7df276ab76e9b8fc17612853b474fec414dae977
2022-12-13 15:48:16 -08:00
Clark Boylan 2a231a08c9 Add idle state to driver providers
This change adds an idle state to driver providers which is used to
indicate that the provider should stop performing actions that are not
safe to perform while we bootstrap a second newer version of the
provider to handle a config update.

This is particularly interesting for the static driver because it is
managing all of its state internally to nodepool and not relying on
external cloud systems to track resources. This means it is important
for the static provider to not have an old provider object update
zookeeper at the same time as a new provider object. This was previously
possible and created situations where the resources in zookeeper did
not reflect our local config.

Since all other drivers rely on external state the primary update here
is to the static driver. We simply stop performing config
synchronization if the idle flag is set on a static provider. This will
allow the new provider to take over reflecting the new config
consistently.

Note, we don't take other approaches and essentially create a system
specific to the static driver because we're trying to avoid modifying
the nodepool runtime significantly to fix a problem that is specific to
the static driver.
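
An illustrative sketch of the idle flag as the static provider might use
it (simplified; not the exact driver code):

    class StaticNodeProvider:
        def __init__(self):
            self._idle = False

        def idle(self):
            # Called on the outgoing provider before its replacement starts.
            self._idle = True

        def _syncConfigToZooKeeper(self):
            if self._idle:
                # Stop updating ZooKeeper; the replacement provider now owns
                # the config-to-ZooKeeper synchronization.
                return
            # ... normal registration/deregistration of static nodes here ...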

Change-Id: I93519d0c6f4ddf8a417d837f6ae12a30a55870bb
2022-10-24 15:30:31 -07:00
James E. Blair 10df93540f Use Zuul-style ZooKeeper connections
We have made many improvements to connection handling in Zuul.
Bring those back to Nodepool by copying over the zuul/zk directory
which has our base ZK connection classes.

This will enable us to bring other Zuul classes over, such as the
component registry.

The existing connection-related code is removed and the remaining
model-style code is moved to nodepool.zk.zookeeper.  Almost every
file imported the model as nodepool.zk, so import adjustments are
made to compensate while keeping the code more or less as-is.
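
A sketch of what that import adjustment looks like at a call site (module
path per this change; the alias keeps existing references such as zk.Node
working unchanged):

    # Before: the model classes were imported directly from nodepool.zk.
    # After: they live in nodepool.zk.zookeeper, imported under the old alias.
    from nodepool.zk import zookeeper as zk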

Change-Id: I9f793d7bbad573cb881dfcfdf11e3013e0f8e4a3
2022-05-23 07:40:20 -07:00
Zuul ec2f1879de Merge "Fix flavor handling for openstacksdk 1.0" 2022-04-28 04:32:42 +00:00
Dr. Jens Harbott ac3dc8d9fe Fix flavor handling for openstacksdk 1.0
The newest openstacksdk release returns an object of type
Flavor instead of a dict, which does have an id field, but that might
not correspond to an existing flavor, so we cannot find it in our cache.
Check for the presence of the attributes that we really need and as a
last resort skip quota calculation instead of failing.
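
A simplified sketch of that defensive check (not the exact Nodepool quota
code; the returned dict keys are illustrative):

    def quota_from_flavor(flavor):
        # The Flavor object from newer openstacksdk may carry an id that
        # isn't in our cache; only trust the attributes we actually need.
        ram = getattr(flavor, 'ram', None)
        vcpus = getattr(flavor, 'vcpus', None)
        if ram is None or vcpus is None:
            return None     # caller skips quota calculation for this server
        return {'ram': ram, 'cores': vcpus, 'instances': 1}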

Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I09b03916598ff147d4be210a27a59799c23a2041
2022-03-17 18:08:03 +01:00
James E. Blair 86631344e3 Add provider image config to statemachine image upload
This adds an extra argument to the provider image upload method
so that it can have access to the provider image configuration
which it may need in order to obtain extra information such as
the architecture.

It also adds the upload number to the image name format so that
we may name image uploads by their number like we do instances.

Change-Id: I0f47b4443d86f021641f315af4b69da26c4713a6
2022-02-22 13:33:31 -08:00
Tobias Henkel df4739c7dd Allow empty string in leaked port cleanup
Currently nodepool cleans up DOWN ports where the device owner is None
or starts with 'compute:'. We've found a lot of leaked ports in our
environment that haven't been cleaned up by nodepool. Debugging
revealed that those had an empty string as device owner and thus were
ignored by nodepool. Since None and an empty string both mean that
there is no device owner, include empty-string owners in the cleanup as
well.
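
A sketch of the widened selection, simplified from the cleanup loop:

    def is_leak_candidate(port):
        device_owner = port.get('device_owner')
        # None and '' both mean "no device owner"; either way the port may
        # have leaked and is eligible for cleanup once it has been DOWN
        # long enough.
        return port.get('status') == 'DOWN' and (
            not device_owner or device_owner.startswith('compute:'))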

Change-Id: I85d641ae27d2529be86279f6d6e4899844dba88f
2021-07-01 14:29:45 +02:00
Zuul ee22b88ab5 Merge "Support threadless deletes" 2021-06-11 14:19:00 +00:00
Ian Wienand 9517be8ca6 Remove statsd args to OpenStack API client call
Change If21a10c56f43a121d30aa802f2c89d31df97f121 modified nodepool to
not use the inbuilt TaskManager but to use openstacksdk's task handling
instead.

The statsd arguments added here don't actually do anything and are
ignored; an openstack.Connection() object doesn't set up the stats
configuration.  Things are somewhat working because of the
STATSD_<HOST|PORT> environment variables -- openstacksdk notices these
and turns on stats reporting.  However, it uses the default prefix
('openstack.api') which is a regression over the previous behaviour of
logging operations on a per-cloud basis.

I have proposed the dependent-change that will allow setting the
prefix for stats in the "metric" section of each cloud in the
openstacksdk config file.  This will allow users to return to the
previous behaviour by setting each cloud with an individual prefix in
the cloud configuration (or, indeed keep the current behaviour by not
setting that).  So along with removing the ineffective arguments, I've
updated the relevant documentation and added a release note detailing
this.

Depends-On: https://review.opendev.org/c/openstack/openstacksdk/+/786814

Change-Id: I30e57084489d822dd6152d3e5712e3cd201372ae
2021-04-20 10:19:37 +10:00
James E. Blair 63f38dfd6c Support threadless deletes
The launcher implements deletes using threads, and unlike with
launches, does not give drivers an opportunity to override that
and handle them without threads (as we want to do in the state
machine driver).

To correct this, we move the NodeDeleter class from the launcher
to driver utils, and add a new driver Provider method that returns
the NodeDeleter thread.  This is added in the base Provider class
so all drivers get this behavior by default.

In the state machine driver, we override the method so that instead
of returning a thread, we start a state machine and add it to a list
of state machines that our internal state machine runner thread
should drive.
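
An illustrative sketch of the shape of that hook (names and signatures
simplified, not the exact Nodepool interface):

    import threading

    class NodeDeleter(threading.Thread):
        # Stand-in for the real class now living in driver utils.
        def __init__(self, zk, provider, node):
            super().__init__()
            self.zk, self.provider, self.node = zk, provider, node

        def run(self):
            pass  # the real thread waits for the instance to go, then cleans up ZK

    class Provider:
        def startNodeCleanup(self, zk, node):
            # Default for all drivers: one thread per delete, as before.
            t = NodeDeleter(zk, self, node)
            t.start()
            return t

    class StateMachineProvider(Provider):
        def __init__(self):
            self._delete_machines = []

        def startNodeCleanup(self, zk, node):
            # Override: no thread; enqueue a delete state machine for the
            # shared runner thread to drive.
            sm = self._makeDeleteStateMachine(node)   # hypothetical helper
            self._delete_machines.append(sm)
            return sm

        def _makeDeleteStateMachine(self, node):
            return object()   # placeholder for a real delete state machine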

Change-Id: Iddb7ed23c741824b5727fe2d89c9ddbfc01cd7d7
2021-03-21 14:39:01 -07:00
Tobias Henkel b51a8be1ba Optimize list_servers call in cleanupLeakedInstances
By default list_servers gets the server list and then attaches the
network information to each server object. This involves a call to the
floatingips endpoint per instance. This is unneeded during
cleanupLeakedInstances since we're only interested in the id and
metadata in order to trigger node deletion. In larger clouds this can
trigger hundreds or thousands of unnecessary API calls to the
cloud. Optimize this by getting the server list with the bare flag to
avoid that.
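
A minimal sketch of the cloud-layer call ('mycloud' is a placeholder):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    # bare=True skips the per-server network/floating-ip expansion; id and
    # metadata are all the leak cleanup needs.
    for server in conn.list_servers(bare=True):
        print(server['id'], server.get('metadata', {}))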

Change-Id: Ie48e647eefc2aa3169e943fdf6854d82219b645b
2021-02-16 15:38:03 +01:00
Tobias Henkel 2e59f7b0b3 Offload waiting for server creation/deletion
Currently nodepool has one thread per server creation or
deletion. Each of those waits for the cloud by regularly getting the
server list and checking if their instance is active or gone. On a
busy nodepool this leads to severe thread contention when the server
list gets large and/or there are many parallel creations/deletions in
progress.

This can be improved by offloading the waiting to a single thread that
regularly retrieves the server list and compares that to the list of
waiting server creates/deletes. The calling threads then wait until the
central thread wakes them up to proceed with their task. The waiting
threads wait for the event outside of the GIL and thus no longer
contribute to the thread contention problem.

An alternative approach would be to redesign the threading model to use
fewer threads, but that would be a much more complex change. Thus this
change keeps the many-threads approach but makes the waiting much more
lightweight, which shows a substantial improvement during load testing
in a test environment.
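
An illustrative sketch of the offloading idea (not the exact Nodepool
code): creators register an event keyed by server id, and a single thread
that polls the server list wakes the matching waiters.

    import threading

    class ServerWatcher:
        def __init__(self, list_servers):
            self._list_servers = list_servers   # callable returning the server list
            self._waiters = {}                  # server id -> threading.Event
            self._lock = threading.Lock()

        def wait_for_active(self, server_id, timeout=None):
            # Called from the many create threads; blocks outside the GIL.
            event = threading.Event()
            with self._lock:
                self._waiters[server_id] = event
            return event.wait(timeout)

        def poll_once(self):
            # Called periodically by the single watcher thread.
            active = {s['id'] for s in self._list_servers()
                      if s['status'] == 'ACTIVE'}
            with self._lock:
                for server_id in list(self._waiters):
                    if server_id in active:
                        self._waiters.pop(server_id).set()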

Change-Id: I5525f2558a4f08a455f72e6b5479f27684471dc7
2021-02-16 15:37:57 +01:00
James E. Blair 9e9a5b9bfd Improve max-servers handling for GCE
The quota handling for the simple driver (used by GCE) only handles
max-servers, and even so, it didn't take currently building servers
into consideration.  If a number of simultaneous requests
were received, it would try to build them all and eventually return
node failures for the ones that the cloud refused to build.

The OpenStack driver has a lot of nice quota handling methods which
do take currently building nodes into account.  This change moves
some of those methods into a new Provider mixin class for quota
support.  This class implements some handy methods which perform
the calculations and provides some abstract methods which providers
will need to implement in order to supply information.

The simple driver is updated to use this system, though it still
only supports max-servers for the moment.
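
A rough sketch of the split (class and method names approximate): the
mixin holds the generic arithmetic, and a driver only answers the
provider-specific questions.

    class QuotaSupport:
        # Generic bookkeeping shared by drivers.
        def hasRemainingQuota(self, needed_instances=1):
            used = self.countBuildingAndReadyNodes()  # includes building nodes
            return used + needed_instances <= self.maxServers()

        # Provider-specific questions a driver must answer:
        def maxServers(self):
            raise NotImplementedError

        def countBuildingAndReadyNodes(self):
            raise NotImplementedError


    class SimpleDriverProvider(QuotaSupport):
        # The GCE ("simple") driver still only knows about max-servers.
        def __init__(self, max_servers):
            self._max_servers = max_servers
            self._active = 0    # ready + building, tracked by the driver

        def maxServers(self):
            return self._max_servers

        def countBuildingAndReadyNodes(self):
            return self._active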

Change-Id: I0ce742452914301552f4af5e92a3e36304a7e291
2020-06-21 06:38:50 -07:00
Zuul dd4a993e38 Merge "Logs stats for nodepool automated cleanup" 2020-05-07 23:52:39 +00:00
Clark Boylan 257e26b0a4 Set pool info on leaked instances
We need to set pool info on leaked instances so that they are properly
accounted for against quota. When a znode has provider details but not
pool details, we can't count it against used quota, but we also don't
count it as unmanaged quota, so it ends up in limbo.

To fix this we set pool info metadata so that when an instance leaks we
can recover the pool info and set it on the phony instance znode records
used to delete those instances.
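
A sketch of the metadata written at boot time (key names illustrative);
the leak cleanup reads it back to fill in the pool on the phony deletion
record:

    def boot_metadata(provider_name, pool_name, node_id):
        return {
            'nodepool_provider_name': provider_name,
            'nodepool_pool_name': pool_name,
            'nodepool_node_id': node_id,
        }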

Change-Id: Iba51655f7bf86987f9f88bb45059464f9f211ee9
2020-04-21 10:41:40 -07:00
Ian Wienand ce00f347a4 Logs stats for nodepool automated cleanup
As a follow-on to I81b57d6f6142e64dd0ebf31531ca6489d6c46583, bring
consistency to the resource leakage cleanup statistics provided by
nodepool.

New stats for cleanup of leaked instances and floating ips are added
and documented.  For consistency, the downPorts stat is renamed to
leaked.ports.

The documentation is re-organised slightly to group common stats
together.  The nodepool.task.<provider>.<task> stat is removed because
it is covered by the section on API stats below.

Change-Id: I9773181a81db245c5d1819fc7621b5182fbe5f59
2020-04-15 14:48:36 +02:00
Tobias Urdin e4ce77466a Filter active images for OpenStack provider
The OpenStack provider doesn't filter on status,
so when we upload a new image and deactivate
the old one it throws an SDKException because it
finds multiple images with the same name.

This adds a filter so that we only look up
Glance images with an `active` status via the
openstacksdk, which is the only valid state in
which we can use the image [1].

[1] https://docs.openstack.org/glance/latest/user/statuses.html

Change-Id: I480b4e222232da1f1aa86b1a08117e605ef08eb4
2020-03-17 16:26:50 +01:00
Tobias Henkel 376adbc933 Delete images by id
Nodepool currently deletes managed images by name. This leads to
problems when an image is uploaded twice (e.g. because of multiple
formats). In this case there can be more than one image with the same
name, which breaks the deletion. This can be fixed by deleting the
images by id instead.

Change-Id: I74fc6219ef7f2c496f36defb0703137ec4d7d30e
2019-11-25 12:57:51 +01:00
Monty Taylor 5fae5f5e8c Handle newer nova microversions
openstacksdk is requesting a newer nova microversion for server
records to pull new information that's only returned that way.
One of the results is that, on clouds that support that microversion,
nova no longer returns flavor id in the server record (since a flavor
could be deleted by the cloud while the server stays around) but
instead embeds the details about the flavor (ram, vcpus, etc)
in the server.flavor entry. This is neat - since it gives us the
info we need without the extra call. The downside is that we can't
count on the id field existing. SDK could add one - but it would
be None on newer clouds, so we'd still need to check for existence.

Long story short - handle both sides of the behavior.
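
A simplified sketch of handling both shapes of server.flavor:

    def flavor_for_server(server, find_flavor_by_id):
        flavor = server['flavor']
        if flavor.get('id'):
            # Older microversions: only an id is returned; look it up.
            return find_flavor_by_id(flavor['id'])
        # Newer microversions: ram, vcpus, etc. are embedded directly.
        return flavor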

Change-Id: I1f7b592265ac612ea6ca1b2f977e1507c6251da3
2019-10-24 17:19:25 +09:00
Clark Boylan 4b6afe403b Handle case where nova server is in DELETED state
The nova API can return instance records for an instance that has been
deleted. When it does this, the status should be "DELETED". This means we
should check either that the instance no longer has a record or, if the
record is present, that its status is DELETED.
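
A minimal sketch of the widened check:

    def server_is_gone(server):
        # 'server' is the record from the server list, or None if absent.
        return server is None or server['status'] == 'DELETED'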

Change-Id: I7ad753a3c73f3d2cd78f4a380f78279af9206ada
2019-10-11 11:01:09 -07:00
Jan Gutter 6789c4b618 Add port-cleanup-interval config option
There are two edge cases where the port cleanup logic is too
aggressive. This change attempts to address both of them in one commit:

* Some providers might spawn instances very slowly. In the past this was
  handled by hardcoding the timeout to 10 minutes. This allows a user to
  tweak the timeout in config.
* In the esoteric combination of using Ironic without the Ironic Neutron
  agent, it's normal for ports to remain DOWN indefinitely. Setting the
  timeout to 0 will work around that edge case.

Change-Id: I120d79c4b5f209bb1bd9907db172f94f29b9cb5d
2019-10-09 17:06:48 +02:00
Zuul 84dbbeed6c Merge "Fix node failures when at volume quota" 2019-09-10 23:29:15 +00:00
Tobias Henkel 8678b34398 Fix node failures when at volume quota
When creating instances with boot-from-volume we don't get quota-related
information in the exception raised by wait_for_server, and the fault
information is also missing from the returned server munch. This causes
node failures when we run into the volume quota. This can be fixed by
explicitly fetching the server, if we got one, and inspecting its fault
information, which says more about the fault reason [1].

[1] Example fault reason:
Build of instance 4628f079-26a9-4a1d-aaa0-881ba4c7b9cb aborted:
VolumeSizeExceedsAvailableQuota: Requested volume or snapshot exceeds
allowed gigabytes quota. Requested 500G, quota is 10240G and 10050G
has been consumed.
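
A simplified sketch of the extra inspection (field names per the Nova
fault shown above; the caller decides how to report a quota fault):

    def quota_fault_message(server):
        # The exception from wait_for_server lacks quota details, but the
        # refreshed server record carries them in its 'fault' field.
        fault = (server or {}).get('fault') or {}
        message = fault.get('message', '')
        if 'quota' in message.lower():
            return message      # caller can raise a quota-specific error
        return None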

Change-Id: I6d832d4dbe348646cd4fb49ee7cb5f6a6ad343cf
2019-09-06 15:15:34 -04:00
Tristan Cacqueray ce58b3e73e openstack: handle safely invalid network name
This change indicates which network cannot be found and
prevents this exception from occurring:

  File "nodepool/driver/utils.py", line 70, in run
    self.launch()
  File "nodepool/driver/openstack/handler.py", line 249, in launch
    self._launchNode()
  File "nodepool/driver/openstack/handler.py", line 145, in _launchNode
    userdata=self.label.userdata)
  File "nodepool/driver/openstack/provider.py", line 316, in createServer
    net_id = self.findNetwork(network)['id']
  TypeError: 'NoneType' object is not subscriptable
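
A sketch of the guarded lookup (simplified from createServer):

    def resolve_network_id(find_network, network_name):
        net = find_network(network_name)
        if net is None:
            # Name the missing network instead of dying on net['id'] with a
            # bare TypeError.
            raise RuntimeError("Couldn't find network %r" % network_name)
        return net['id']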

Change-Id: Ic5fbce7c0c7ea2fc35c866f1e5dbec22b4cc0ef6
2019-08-22 02:00:36 +00:00
James E. Blair 75adc01f0a Increase port cleanup interval
If we set the interval too short (currently 3m), we may delete
a port which simply hasn't been attached to an instance yet if
instance creation is proceeding slowly.

Change-Id: I372e45f2442003369ab9057e1e5d468249e23dad
2019-06-21 07:40:21 -07:00
Monty Taylor 816921e4e2 Explicitly set use_direct_get to False
It is essential for nodepool that we use list and local filtering
for openstack servers for scale reasons. For users of openstacksdk
operating at scales other than nodepool's, doing lists all the time is
less efficient. There is a flag to control the behavior
of get_server. Set it explicitly to request the behavior nodepool
is looking for so that openstacksdk can change the default to
better serve non-nodepool consumers.

Change-Id: I7cf03285f9a04f3eef403c67d75e149605207eb1
2019-06-05 10:48:33 -05:00
Monty Taylor 7618b714e2 Remove unused use_taskmanager flag
Now that there is no more TaskManager class, nor anything using
one, the use_taskmanager flag is vestigial. Clean it up so that we
don't have to pass it around to things anymore.

Change-Id: I7c1f766f948ad965ee5f07321743fbaebb54288a
2019-04-02 12:11:07 +00:00
Monty Taylor 34aae137fa Remove TaskManager and just use keystoneauth
Support for concurrency and rate limiting has been added to keystoneauth,
which is the library openstacksdk uses to talk to OpenStack. Instead
of managing concurrency in nodepool using the TaskManager and pool of
worker threads, let keystoneauth take over. This also means we no longer
have a hook into the request process, so we defer statsd reporting to
the openstacksdk layer as well.

Change-Id: If21a10c56f43a121d30aa802f2c89d31df97f121
2019-04-02 09:36:13 +00:00
Zuul 78d2476769 Merge "Revert "Revert "Cleanup down ports""" 2019-01-24 16:15:33 +00:00
Sagi Shnaidman d5027ff6a9 Support userdata for instances in openstack
Use "userdata" from Nova API to pass cloud-init config to nova
instances in openstack.
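
A minimal sketch of the cloud-layer call (all names are placeholders):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    cloud_init = "#cloud-config\npackage_update: true\n"
    server = conn.create_server(
        name='example-node',
        image='example-image',
        flavor='example-flavor',
        userdata=cloud_init,    # passed through to Nova as user data
        wait=True)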

Change-Id: I1c6a1cbc5377d268901210631a376ca26f4887d8
2019-01-22 19:14:52 +02:00
Ian Wienand 0cf8144e8c Revert "Revert "Cleanup down ports""
This reverts commit 7e1b8a7261.

openstacksdk >=0.19.0 fixes the filtering problems leading to all
ports being deleted. However openstacksdk <0.21.0 has problems with
dogpile.cache so use 0.21.0 as a minimum.

Change-Id: Id642d074cbb645ced5342dda4a1c89987c91a8fc
2019-01-18 15:03:55 +01:00
Tobias Henkel 41c968e3ac Make estimatedNodepoolQuotaUsed more resilient
We have seen cases where znodes were stored without a pool or type. At least
znodes without type break the quota calculation and can lead to wedged
providers. So make that more resilient and log exceptions per node
instead of failing the complete calculation. This way we don't wedge
in case we have bogus data in zk while still being able to debug
what's wrong with certain znodes.
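
An illustrative sketch of the per-node error handling (simplified; the
quota keys and logger name are placeholders):

    import logging

    log = logging.getLogger("nodepool.example")

    def estimated_quota_used(nodes, quota_for_node):
        total = {'instances': 0, 'cores': 0, 'ram': 0}
        for node in nodes:
            try:
                needed = quota_for_node(node)  # may blow up on missing type/pool
            except Exception:
                # Bogus data in ZK shouldn't wedge the provider; log and go on.
                log.exception("Couldn't consider node %s for quota", node)
                continue
            for key in total:
                total[key] += needed.get(key, 0)
        return total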

Change-Id: I4a33ffbbf3684dc3830913ed8dc7b158f2426602
2018-12-05 10:30:54 +01:00
Tobias Henkel f8d20d603c Fix leak detection in unmanaged quota calculation
The negation in the unmanaged quota calculation is wrong. This leads
nodepool to think that all of its nodes are leaked.

Change-Id: I60b48a80dc597afa2ceb0a3faddd4c73ffa48c6f
2018-11-30 21:50:42 +01:00
James E. Blair 56164c886a OpenStack: count leaked nodes in unmanaged quota
If a node has leaked, it won't be counted against quota because
it's still recognized as belonging to the nodepool provider so it
doesn't count against unmanaged quota; however, there is no zk
record for it, so it also isn't counted against managed quota.
This throws quota calculations off for as long as the leaked
instances exist in nova.

To correct this, count leaked nodes against unmanaged quota.

Change-Id: I5a658649b881ed80b777096ec48cb6207f2a9cc6
2018-11-29 15:19:46 -08:00
Tobias Henkel 7e1b8a7261 Revert "Cleanup down ports"
The port filter for DOWN ports seems to have no effect. It actually
deleted *all* ports in the tenant.

This reverts commit cdd60504ec.

Change-Id: I48c1430bb768903af467cace1a720e45ecc8e98f
2018-10-30 13:13:43 +01:00
David Shrewsbury cdd60504ec Cleanup down ports
Cleanup will be periodic (every 3 minutes by default, not yet
configurable) and will be logged and reported via statsd.

Change-Id: I81b57d6f6142e64dd0ebf31531ca6489d6c46583
2018-10-29 13:36:43 -04:00
Ian Wienand 7015bd9af4 Add instance boot properties
This allows us to set parameters for server boot on various images.
This is the equivalent of the "--property" flag when using "openstack
server create".  Various tools on the booted servers can then query
the config-drive metadata to get this value.

Needed-By: https://review.openstack.org/604193/

Change-Id: I99c1980f089aa2971ba728b77adfc6f4200e0b77
2018-09-21 16:29:16 +10:00
David Shrewsbury 47233f434f Move OpenStack leak code into driver
We have some OpenStack specific code for leaked instances in the
common launcher code. Let's move it inside the driver.

Change-Id: Ibc49836a5b27b6991e002393546e2cafef5e32ea
2018-09-18 12:07:41 -04:00
David Shrewsbury 4d71c45da6 Use zk connection passed to OpenStack driver
We are passing the zk connection to the estimatedNodepoolQuotaUsed()
method of the OpenStack provider when we already have it passed into
the provider's start() method, but not saving it there. Let's save
that connection in start() and use it instead.

In a later change to the provider, we will make further use of it.

Change-Id: I013a28d6c46046497d8b04867c51a23f6fa49d39
2018-09-18 11:45:09 -04:00
Zuul b3e1890e2a Merge "Invalidate az cache on bad request" 2018-07-23 13:25:35 +00:00
Tobias Henkel 934b1eed9c Invalidate az cache on bad request
When getting error 400 we not only need to clear the image and flavor
cache but the az cache as well. Otherwise we will constantly get node
failures for any node request where nodepool chose that az
[1]. Currently the only way to recover from this situation is to
restart nodepool. Invalidating the cache doesn't fix the request that
failed due to this error but at least ensures that nodepool will
recover from this situation automatically for all further node
requests.

[1] Trace:
018-07-05 09:09:08,477 ERROR nodepool.NodeLauncher-0000123378: Launch attempt 2/3 failed for node 0000123378:
Traceback (most recent call last):
  File "/opt/nodepool-source/nodepool/driver/openstack/handler.py", line 221, in launch
    self._launchNode()
  File "/opt/nodepool-source/nodepool/driver/openstack/handler.py", line 134, in _launchNode
    volume_size=self.label.volume_size)
  File "/opt/nodepool-source/nodepool/driver/openstack/provider.py", line 378, in createServer
    return self._client.create_server(wait=False, **create_args)
  File "<decorator-gen-106>", line 2, in create_server
  File "/usr/lib/python3.5/site-packages/shade/_utils.py", line 410, in func_wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python3.5/site-packages/shade/openstackcloud.py", line 6909, in create_server
    endpoint, json=server_json)
  File "/usr/lib/python3.5/site-packages/keystoneauth1/adapter.py", line 334, in post
    return self.request(url, 'POST', **kwargs)
  File "/usr/lib/python3.5/site-packages/shade/_adapter.py", line 158, in request
    return self._munch_response(response, error_message=error_message)
  File "/usr/lib/python3.5/site-packages/shade/_adapter.py", line 114, in _munch_response
    exc.raise_from_response(response, error_message=error_message)
  File "/usr/lib/python3.5/site-packages/shade/exc.py", line 171, in raise_from_response
    raise OpenStackCloudBadRequest(msg, response=response)
shade.exc.OpenStackCloudBadRequest: (400) Client Error for url: (...) The requested availability zone is not available
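
An illustrative sketch of the wider invalidation (attribute names
approximate, not the exact provider code):

    class ProviderCaches:
        def __init__(self):
            self.images = {}
            self.flavors = {}
            self.azs = None

        def invalidate_on_bad_request(self):
            # On a 400 from Nova, forget images, flavors *and* AZs so later
            # requests don't keep picking an unavailable AZ; no restart needed.
            self.images = {}
            self.flavors = {}
            self.azs = None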

Change-Id: I5f653f159b08cf086d20c2398a9345bd4caa4d1e
2018-07-23 14:04:08 +02:00
David Shrewsbury bade82d446 Fix plugin and examples for using openstacksdk
These appear to have been missed in: https://review.openstack.org/572829

Change-Id: I5c008c369b3789c3ae79ce89726194ab715767a9
2018-07-17 15:02:59 -04:00
Artem Goncharov fc1f80b6d1 Replace shade and os-client-config with openstacksdk.
os-client-config is now just a wrapper around openstacksdk. The shade
code has been imported into openstacksdk. To reduce complexity, just use
openstacksdk directly.

openstacksdk's TaskManager has had to grow some features to deal with
SwiftService. Making nodepool's TaskManager a subclass of openstacksdk's
TaskManager ensures that we get the thread pool set up properly.

Change-Id: I3a01eb18ae31cc3b61509984f3817378db832b47
2018-07-14 08:44:03 -05:00
David Shrewsbury 87bbe26ab5 Remove OpenStack driver waitForImage call
This doesn't appear to be used.

Change-Id: I8568d59b1f54f3ddd826080bf4502adeada7d01f
2018-07-14 08:44:00 -05:00
Zuul eb52394c8c Merge "Fix for referencing cloud image by ID" 2018-07-04 00:11:05 +00:00
David Shrewsbury d39cc6d7ce Fix for referencing cloud image by ID
For pre-existing cloud images (not managed by nodepool), referencing
them by ID was failing since they could not be found with this data,
only by name.

Current code expects the shade get_image() call to accept a dict with
an 'id' key, which will return that same dict without any provider API
calls. This dict can then be used in createServer() to bypass looking
up the image to get the image ID. However, shade does not accept a dict
for this purpose, but an object with an 'id' attribute. This is
possibly a bug in shade to not accept a dict. But since nodepool knows
whether or not it has an ID (image-id) vs. an image name (image-name),
it can bypass shade altogether when image-id is used in the config.

Note: There is currently no image ID validation done before image
creation when an image-id value is supplied. Not even shade validated
the image ID with a passed in object. Server creation will fail with
an easily identifiable message about this, though.
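
A simplified sketch of the bypass: when the config supplies image-id we
already have the value Nova needs, so no shade lookup is made (function
name and parameters are illustrative):

    def resolve_image_id(conn, image_name=None, image_id=None):
        # image-id from the config is used as-is (no validation here; a bad
        # id surfaces as a clear error at server-creation time).
        if image_id:
            return image_id
        image = conn.get_image(image_name)   # a name still needs a lookup
        return image.id if image else None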

Change-Id: I732026d1a305c71af53917285f4ebb2beaf3341d
Story: 2002013
Task: 19653
2018-07-03 15:26:33 -04:00
James E. Blair d610cb1c35 Handle node no longer in pool error
When the configuration changes such that the labels of existing
nodes are no longer in a pool, quota calculation fails because
we assume we can't have values in zk which don't match the config.
Handle that case explicitly so that we don't throw an exception.

Change-Id: Ib934cd56ae423d7ecff7edf0d13d33fc05bc757b
2018-06-29 16:12:34 -07:00
Zuul 4b7f348b76 Merge "Pass zk connection to ProviderManager.start()" 2018-06-21 19:00:24 +00:00
David Shrewsbury a418aabb7a Pass zk connection to ProviderManager.start()
In order to support static node pre-registration, we need to give
the provider manager the opportunity to register/deregister any
nodes in its configuration file when it starts (on startup or when
the config changes). It will need a ZooKeeper connection to do this.
The OpenStack driver will ignore this parameter.
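
A sketch of the signature change (simplified):

    class ProviderManager:
        def start(self, zk_conn):
            # Static driver: use zk_conn to register/deregister the nodes
            # from its config file.  OpenStack driver: ignore it.
            pass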

Change-Id: Idd00286b2577921b3fe5b55e8f13a27f2fbde5d6
2018-06-12 12:04:16 -04:00