In the metric name we use the builder's FQDN as a key, but in the test
we used the hostname, so the test fails on systems where the two are
not the same.
Change-Id: If286f19371d1fd70dc9bee4b7af814d13396357b
The cleanup routine for leaked image uploads based its detection
on upload ids, but they are not unique except in the context of
a provider and build. This meant that, for example, as long as
there was an upload with id 0000000001 for any image build for
the provider (very likely!) we would skip cleaning up any leaked
uploads with id 0000000001.
Correct this by using a key generated on build+upload (provider
is implied because we only consider uploads for our current
provider).
Update the tests relevant to this code to exercise this condition.
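The keying change can be sketched as follows (function and variable names here are hypothetical, not the actual builder code):

```python
def upload_key(build_id, upload_id):
    """Key a leaked upload on build + upload; the provider is implied
    because we only consider uploads for the current provider."""
    return f"{build_id}-{upload_id}"


def find_leaked(known_keys, candidates):
    """Return cloud uploads not present in our records.

    candidates: iterable of (build_id, upload_id) tuples found in the
    provider.  Keying on upload id alone would wrongly skip any leaked
    upload whose id matches some other build's upload.
    """
    return [c for c in candidates if upload_key(*c) not in known_keys]
```

With this keying, an upload id of 0000000001 belonging to a different build no longer shadows a leaked upload with the same id.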
Change-Id: Ic68932b735d7439ca39e2fbfbe1f73c7942152d6
This allows operators to delete large diskimage files after uploads
are complete, in order to save space.
A setting is also provided to keep certain formats, so that if
operators would like to delete large formats such as "raw" while
retaining a qcow2 copy (which, in an emergency, could be used to
inspect the image, or manually converted and uploaded for use),
that is possible.
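An illustrative diskimage configuration for this feature (treat the option names as an approximation; consult the documentation for the exact syntax):

```yaml
diskimages:
  - name: ubuntu-jammy
    formats:
      - raw
      - qcow2
    # Delete the large local image files once uploads complete...
    delete-after-upload: true
    # ...but retain a qcow2 copy for emergency inspection or
    # manual conversion and re-upload.
    keep-formats:
      - qcow2
```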
Change-Id: I97ca3422044174f956d6c5c3c35c2dbba9b4cadf
We have observed GCE returning bad machine type data which we
then cache. If that happens, clear the cache to avoid getting
stuck with the bad data.
Change-Id: I32fac2a92d4f9d400fe2db41fffd8d189d097542
On startup, the launcher waits up to 5 seconds until it has seen
its own registry entry because it uses the registry to decide if
other components are able to handle a request and, if not, fails
the request.
In the case of a ZK disconnection, we will lose all information
about registered components as well as the tree caches. Upon
reconnection, we will repopulate the tree caches and re-register
our component.
If the tree cache repopulation happens first, our component
registration may be in line behind several thousand ZK events. It
may take more than 5 seconds to repopulate and it would be better
for the launcher to wait until the component registry is up to date
before it resumes processing.
To fix this, instead of only waiting on the initial registration,
we check each time through the launcher's main loop that the registry
is up-to-date before we start processing. This should include
disconnections because we expect the main loop to abort with an
error and restart in those cases.
This operates only on local cached data, so it doesn't generate any
extra ZK traffic.
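The per-loop check can be sketched like this (class and method names are hypothetical; the real logic lives in the launcher's main loop):

```python
import time


class Launcher:
    def __init__(self, registry, component_id):
        self.registry = registry          # local cache of the ZK component tree
        self.component_id = component_id
        self.running = True

    def _registryReady(self):
        # Operates only on locally cached data; no extra ZK traffic.
        return self.component_id in self.registry.cachedComponentIds()

    def processRequests(self):
        pass  # placeholder for the real request handling

    def runMainLoop(self):
        while self.running:
            if not self._registryReady():
                # The cache may be replaying thousands of events after
                # a reconnect; wait rather than mis-judge whether other
                # components can handle a request.
                time.sleep(0.1)
                continue
            self.processRequests()
```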
Change-Id: I1949ec56610fe810d9e088b00666053f2cc37a9a
As is done for several other metadata attributes, copy the `cloud`
attribute from the backing node to the metastatic node.
Change-Id: Id83b3e09147baaab8a85ace4d5beba77d1eb87bd
gp3 is better in almost every way (cheaper, faster, more configurable).
It seems difficult to find a situation where gp2 would be a better
choice, so update the default when creating images to use gp3.
There are two locations where we can specify volume-type: image creation
(where the volume type becomes the default type for the image) and
instance creation (where we can override what the image specifies).
This change updates only the first (image creation), but not the second,
which has no default (meaning whatever the image specifies is used).
https://aws.amazon.com/ebs/general-purpose/
Change-Id: Ibfc5dfd3958e5b7dbd73c26584d6a5b8d3a1b4eb
This adds some stats keys that may be useful when monitoring
the operation of individual nodepool builders.
Change-Id: Iffdeccd39b3a157a997cf37062064100c17b1cb3
If a long-running backing node used by the metastatic driver develops
problems, performing a host-key-check each time we allocate a new
metastatic node may detect these problems. If that happens, mark
the backing node as failed so that no more nodes are allocated to
it and it is eventually removed.
Change-Id: Ib1763cf8c6e694a4957cb158b3b6afa53d20e606
Some drivers were missing docs and/or validation for options that
they actually support. This change:
* adds launch-timeout to:
  * metastatic docs and validation
  * aws validation
  * gce docs and validation
* adds post-upload-hook to:
  * aws validation
* adds boot-timeout to:
  * metastatic docs and validation
* adds launch-retries to:
  * metastatic docs and validation
Change-Id: Id3f4bb687c1b2c39a1feb926a50c46b23ae9df9a
This change adds the ability to use the k8s (and friends) drivers
to create pods with custom specs. This will allow nodepool admins
to define labels that create pods with options not otherwise supported
by Nodepool, as well as pods with multiple containers.
This can be used to implement the versatile sidecar pattern, which is
useful for running jobs that depend on a background system process
(such as a database server or container runtime) on platforms where
backgrounding such a process is difficult.
It is still the case that a single resource is returned to Zuul, so
a single pod will be added to the inventory. Therefore, the expectation
that it should be possible to shell into the first container in the
pod is documented.
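An illustrative label definition using a custom pod spec for the sidecar case (the exact configuration key and its placement are assumptions based on this description):

```yaml
labels:
  - name: job-with-db
    type: pod
    spec:
      containers:
        # Zuul shells into the first container, so the job
        # container comes first.
        - name: job
          image: ubuntu:jammy
          command: ["sleep", "infinity"]
        # Sidecar running a database server alongside the job.
        - name: db
          image: mariadb:10
```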
Change-Id: I4a24a953a61239a8a52c9e7a2b68a7ec779f7a3d
In I93400cc156d09ea1add4fc753846df923242c0e6 we refactored the
launcher config loading to use the last modified timestamps of the
config files to detect if a reload is necessary.
In the builder the situation is even worse as we reload and compare the
config much more often, e.g. in the build worker when checking for manual
or scheduled image updates.
With a larger config (2-3MB range) this is a significant performance
problem that can lead to builders being busy with config loading instead
of building images.
Yappi profile (performed with the optimization proposed in
I786daa20ca428039a44d14b1e389d4d3fd62a735, which doesn't fully solve the
problem):
name ncall tsub ttot tavg
..py:880 AwsProviderDiskImage.__eq__ 812.. 17346.57 27435.41 0.000034
..odepool/config.py:281 Label.__eq__ 155.. 1.189220 27403.11 0.176285
..643 BuildWorker._checkConfigRecent 58 0.000000 27031.40 466.0586
..depool/config.py:118 Config.__eq__ 58 0.000000 26733.50 460.9225
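The mtime-based detection can be sketched as follows (names hypothetical):

```python
import os


class ConfigWatcher:
    """Reload the config only when a file's mtime changes, avoiding
    repeated deep __eq__ comparisons of multi-megabyte configs."""

    def __init__(self, paths):
        self.paths = paths
        self.mtimes = {}

    def needsReload(self):
        changed = False
        for path in self.paths:
            mtime = os.stat(path).st_mtime
            if self.mtimes.get(path) != mtime:
                self.mtimes[path] = mtime
                changed = True
        return changed
```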
Change-Id: I929bdb757eb9e077012b530f6f872bea96ec8bbc
We use latest/stable by default, which very recently updated to
1.29/stable. Unfortunately, it appears there are issues [0] with this
version on Debian Bookworm, which also happens to be the platform we
test on. Our jobs have been consistently failing in a manner that
appears related to this issue. Update the job to collect logs so that
we can better confirm this is the case, and roll back to 1.28, which
should be working.
Also update the AWS tests to handle a recent moto release which
requires us to use mock_aws rather than individual mock_* classes.
[0] https://github.com/canonical/microk8s/issues/4361
Change-Id: I72310521bdabfc3e34a9f2e87ff80f6d7c27c180
Co-Authored-By: James E. Blair <jim@acmegating.com>
Co-Authored-By: Jeremy Stanley <fungi@yuggoth.org>
This is an authenticated http metadata service which is typically
available by default, but a more secure setup is to enforce its
usage.
This change adds the ability to do that for both instances and
AMIs.
Change-Id: Ia8554ff0baec260289da0574b92932b37ffe5f04
In an attempt to make the nodescan process as quick as possible,
we start the connection in the provider statemachine thread before
handing the remaining work off to the nodescan statemachine thread.
However, if the nodescan worker is near the end of its request list
when the provider adds the request, then it may end up performing
the initial connection nearly simultaneously with the provider
thread. They may both create a socket and attempt to register
the FD. If the race results in them registering the same FD,
the following exception occurs:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 253, in runStateMachine
keys = self.nodescan_request.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1295, in result
raise self.exception
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1147, in addRequest
self._advance(request, False)
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1187, in _advance
request.advance(socket_ready)
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1379, in advance
self._connect()
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1340, in _connect
self.worker.registerDescriptor(self.sock)
File "/usr/local/lib/python3.11/site-packages/nodepool/driver/statemachine.py", line 1173, in registerDescriptor
self.poll.register(
FileExistsError: [Errno 17] File exists
To address this, rather than attempting to coordinate work between
these two threads, let's just let the nodescan worker handle it.
To try to keep the process responsive, we'll wake the nodescan worker
if it's sleeping.
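The wake-up can be sketched with the classic self-pipe pattern (hypothetical names; assumes Linux epoll):

```python
import os
import select


class Waker:
    """Wake a worker blocked in epoll.poll() so a newly added request
    is picked up promptly by that single thread, avoiding FD races
    between two threads registering descriptors."""

    def __init__(self):
        self.rfd, self.wfd = os.pipe()
        os.set_blocking(self.rfd, False)
        self.poll = select.epoll()
        self.poll.register(self.rfd, select.EPOLLIN)

    def wake(self):
        # Called from another thread (e.g. the provider statemachine).
        os.write(self.wfd, b"\x00")

    def wait(self, timeout):
        events = self.poll.poll(timeout)
        for fd, _ in events:
            if fd == self.rfd:
                os.read(self.rfd, 4096)  # drain the wake-up bytes
        return events
```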
Change-Id: I5ceda68b856c09bf7606e62ac72ca5c5c76d2661
We want to handle the "InsufficientInstanceCapacity" error differently
from other "error.unknown" errors in our monitoring/alerting system.
With this change, it produces an "error.capacity" metric instead of
"error.unknown".
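A minimal sketch of the mapping (the surrounding statsd plumbing is omitted; the helper name is hypothetical):

```python
def error_stat_key(error_message):
    """Map an AWS launch failure message to a statsd key suffix so
    capacity shortages can be alerted on separately from unknown
    errors."""
    if "InsufficientInstanceCapacity" in error_message:
        return "error.capacity"
    return "error.unknown"
```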
Change-Id: Id3a49d4b2d4b4733f801e65df69b505e913985a7
The node list (web and cli) displays the connection port for the
node, but the k8s drivers use that to send service account
credential info to zuul.
To avoid exposing this to users if operators have chosen to make
the nodepool-launcher webserver accessible, redact the connection
port if it is not an integer.
This also affects the command-line nodepool-list in the same way.
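The redaction rule can be sketched as follows (the exact placeholder string is an assumption):

```python
def redact_port(connection_port):
    """Show the connection port in node listings only when it is a
    plain integer; the k8s drivers store service-account credential
    info in this field, which must not be exposed to users."""
    if isinstance(connection_port, int):
        return connection_port
    return "REDACTED"
```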
Change-Id: I7a309f95417d47612e40d983b3a2ec6ee4d0183a
In config validation, the gpu parameter type was specified as str
rather than float. This is corrected.
This was not discovered in testing because the only tests which use
the gpu parameter for the other k8s drivers are not present in the
openshiftpods driver. This change also adds the missing tests for
the default resource and resource limits feature which exercises the
gpu limits.
Change-Id: Ife932acaeb5a90ebc94ad36c3b4615a4469f0c40
To support the use case where one has multiple pools providing
metastatic backing nodes, those pools are in different regions,
and a user wishes to use Zuul executor zones to communicate with
whatever metastatic nodes are eventually produced from those regions,
this change updates the launcher and metastatic driver to use
the backing node's attributes (where Zuul executor zone names are
specified) as default values for the metastatic node's attributes.
This lets users configure Nodepool with Zuul executor zones only on
the backing pools.
Change-Id: Ie6bdad190f8f0d61dab0fec37642d7a078ab52b3
Co-Authored-By: Benedikt Loeffler <benedikt.loeffler@bmw.de>
The metastatic driver was ignoring the 3 standard pool configuration
options (max-servers, priority, and node-attributes) due to a missing
superclass method call. Correct that and update tests to validate.
Further, the node-attributes option was undocumented for the metastatic
driver, so add it to the docs.
Change-Id: I6a65ea5b8ddb319bc131f87e0793f3626379e15f
Co-Authored-By: Benedikt Loeffler <benedikt.loeffler@bmw.de>
The state transition log messages for the Nodescan statemachine can be
quite excessive. While they might be useful for debugging, it's not
always needed to have all the log messages available.
To provide an easier way to filter these messages, use a dedicated log
package in the NodescanRequest class.
Change-Id: I2b1a625f5e5e375317951e410a27ff4243d4a0ef
Nodepool was declining node requests when other unrelated instance types
of a provider were unavailable:
Declining node request <NodeRequest {... 'node_types': ['ubuntu'],
... }> due to ['node type(s) [ubuntu-invalid] not available']
To fix this, we check the error labels against the requested labels
before including them in the list of invalid node types.
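The filtering step amounts to a set intersection (hypothetical helper name):

```python
def relevant_error_labels(error_labels, requested_labels):
    """Only unavailable labels that were actually requested should
    cause a node request to be declined; unrelated unavailable
    instance types are ignored."""
    return sorted(set(error_labels) & set(requested_labels))
```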
Change-Id: I7bbb3b813ca82baf80821a9e84cc10385ea95a01
* Change the state change logging level to debug -- it's chatty
* Don't allow individual connection attempts to take > 10 seconds
This is a behavior that is in the old nodescan method that
wasn't ported over but should be. As a port comes online as
part of the boot process, early connection attempts may hang
while later ones may succeed. We want to continually try new
connections whether they return an error or hang.
* Fall through to the complete state even if the last key is
ignored
Previously, if the last key we scanned was not compatible, the
state machine would need to go through one extra state
transition in order to set the complete flag, due to an early
return call. We now rearrange that state transition so that we
fall through to completion regardless of whether the last key
was added.
Change-Id: Ic6fd1551c3ef1bbd8eaf3b733e9ecc2609bce47f
We set the AWS external id to the hostname when building, but that
causes problems if we need to retry the build -- we won't delete
the instance we're trying to abort because we don't have the actual
external id (InstanceId).
Instead, delay setting it just a little longer until we get the real
InstanceId back from AWS.
Change-Id: Ibc7ab55ccd54c22ad006c13a0af3e9598056f7a4
We currently use a threadpool executor to scan up to 10 nodes at
a time for ssh keys. If they are slow to respond, that can create
a bottleneck. To alleviate this, use a state machine model for
managing the process, and drive each state machine from a single
thread.
We use select.epoll() to handle the potentially large number of
connections that could be happening simultaneously.
Note: the paramiko/ssh portion of this process spawns its own
thread in the background (and always has). Since we are now allowing
more keyscan processes in parallel, we could end up with an
unbounded set of paramiko threads in the background. If this is
a concern we may need to cap the number of requests handled
simultaneously. Even if we do that, this will still result in
far fewer threads than simply increasing the cap on the threadpool
executor.
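A single-threaded, epoll-driven connect loop can be sketched like this (a simplification of the real state machine; assumes Linux epoll and plain TCP, without the paramiko key exchange):

```python
import errno
import select
import socket


def scan_ready(addresses, timeout=5.0):
    """Drive many TCP connection attempts from one thread using
    epoll, rather than one blocking connect per threadpool worker."""
    poller = select.epoll()
    socks = {}
    for host, port in addresses:
        s = socket.socket()
        s.setblocking(False)
        rc = s.connect_ex((host, port))
        if rc not in (0, errno.EINPROGRESS):
            s.close()
            continue
        # A completed connect() is reported as the FD becoming writable.
        poller.register(s.fileno(), select.EPOLLOUT)
        socks[s.fileno()] = s
    ready = []
    for fd, _ in poller.poll(timeout):
        s = socks.pop(fd)
        poller.unregister(fd)
        if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
            ready.append(s.getpeername())
        s.close()
    for s in socks.values():  # attempts that never completed
        s.close()
    poller.close()
    return ready
```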
Change-Id: I42b76f4c923fd9441fb705e7bffd6bc9ea7240b1
The AWS API call to get the service quota has its own rate limit
that is separate from EC2. It is not documented, but the defaults
appear to be very small; experimentally it appears to be something
like a bucket size of 30 tokens and a refill rate somewhere
between 3 and 10 tokens per minute.
This change moves the quota lookup calls to their own rate limiter
so they are accounted for separately from other calls.
We should configure that rate limiter with these new, very low values;
however, that would significantly slow startup, since we need to issue
several calls at once when we start (after that we are not sensitive
to a delay). The API can handle a burst at startup (with a bucket
size of 30), but our rate limiter doesn't have a burst option. Instead
of configuring it properly, we will just configure it with the rate
limit we use for normal operations (so that we at least have some
delay), but otherwise rely on caching so that we know that we won't
actually exceed the rate limit.
This change therefore also adds a Lazy Executor TTL cache to the
operations with a timeout of 5 minutes. This means that we will issue
bursts of requests every 5 minutes, and as long as the number of
requests is less than the token replacement rate, we'll be fine.
Because this cache is on the adapter, multiple pool workers will use
the same cache. This will cause a reduction in API calls since
currently there is only pool-worker level caching of nodepool quota
information objects. When the 5 minute cache on the nodepool quota
info object expires, we will now hit the adapter cache (with its own
5 minute timeout) rather than go directly to the API repeatedly for
each pool worker. This does mean that quota changes may take between
5 and 10 minutes to appear in nodepool.
The current code only looks up quota information for instance and
volume types actually used. If that number is low, all is well, but
if it is high, then we could potentially approach or exceed the token
replacement rate. To make this more predictable, we will switch the
API call to list all quotas instead of fetching only the ones we need.
Due to pagination, this results in a total of 8 API calls as of writing;
5 for ec2 quotas and 3 for ebs. These are likely to grow over time,
but very slowly.
Taken all together, these changes mean that a single launcher should
issue at most 8 quota service api requests every 5 minutes, which is
below the lowest observed token replacement rate.
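The adapter-level cache can be sketched as a simple TTL cache (hypothetical; the change describes a "Lazy Executor TTL cache", and this shows only the TTL idea shared across pool workers):

```python
import time


class TTLCache:
    """Cache quota lookups for a few minutes so bursts of API calls
    stay below the service-quota token replacement rate."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self.data = {}  # key -> (timestamp, value)

    def get(self, key, fetch, now=None):
        now = time.monotonic() if now is None else now
        entry = self.data.get(key)
        if entry is None or now - entry[0] >= self.ttl:
            # Expired or missing: hit the API and refresh the entry.
            entry = (now, fetch())
            self.data[key] = entry
        return entry[1]
```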
Change-Id: Idb3fb114f5b8cda8a7b6d5edc9c011cb7261be9f