Commit Graph

74 Commits

Author SHA1 Message Date
James E. Blair 3e4caaac4b Produce consistent merge commit shas
Use a fixed timestamp and merge message so that zuul mergers
produce the exact same commit sha each time they perform a merge
for a queue item.  This can help correlate git repo states for
different jobs in the same change as well as across different
changes in the case of a dependent change series.

The timestamp used is the "configuration time" of the queue item
(ie, the time the buildset was created or reset).  This means
that it will change on gate resets (which could be useful for
distinguishing one run of a build from another).

Change-Id: I3379b19d77badbe2a2ec8347ddacc50a2551e505
2024-02-26 16:32:46 -08:00
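The idea behind this commit can be illustrated with a sketch: a git commit SHA is a hash over the commit object (tree, parents, author/committer lines including their timestamps, and the message), so fixing the timestamp and message makes the result reproducible across mergers. This is a simplified model, not Zuul's actual merge code, and the payload format below only approximates git's real object encoding.

```python
import hashlib

def fake_commit_sha(tree, parents, timestamp, message):
    # A commit sha is a SHA-1 over the commit object: the tree, the
    # parents, the author/committer lines (which embed timestamps),
    # and the message.  Illustrative only -- real git hashes the raw
    # object bytes with a header.
    payload = "\n".join([
        f"tree {tree}",
        *[f"parent {p}" for p in parents],
        f"author zuul <zuul@example.org> {timestamp} +0000",
        f"committer zuul <zuul@example.org> {timestamp} +0000",
        "",
        message,
    ])
    return hashlib.sha1(payload.encode()).hexdigest()

# With a fixed timestamp and merge message, two independent mergers
# hashing the same tree and parents produce the exact same sha:
a = fake_commit_sha("t" * 40, ["p" * 40], 1708993966, "Zuul merge")
b = fake_commit_sha("t" * 40, ["p" * 40], 1708993966, "Zuul merge")
```

Conversely, a merge performed with the wall-clock time would change the committer line and therefore the SHA on every run, which is what made repo states hard to correlate before this change.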
James E. Blair 8dd4011aa0 Monitor and report executor inode usage
This adds inodes to the hdd executor sensor and reports usage
to statsd as well.

Change-Id: Ifd9a63cfc7682f6679322e39809be69abca6827e
2024-02-19 11:20:57 -08:00
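A minimal sketch of what an inode sensor can measure, assuming a POSIX system: `os.statvfs` exposes the inode totals (`f_files`) and free inodes (`f_ffree`), from which a usage percentage can be derived and reported. The function name and shape are illustrative, not Zuul's actual sensor API.

```python
import os

def inode_usage_percent(path="/"):
    # f_files is the total inode count, f_ffree the free count; the
    # used percentage is what a governor would compare against its
    # limit and also emit to statsd.
    st = os.statvfs(path)
    if st.f_files == 0:  # some filesystems report no inode data
        return 0.0
    return 100.0 * (st.f_files - st.f_ffree) / st.f_files

pct = inode_usage_percent("/")
```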
James E. Blair 922a6b53ed Make executor sensors slightly more efficient
Rather than checking all of the sensors to see if they are okay,
then collecting all the data again for stats purposes, do both
at the same time.

Change-Id: Ia974a7d013057880171fd1695a1d17169d093410
2024-02-19 09:04:41 -08:00
James E. Blair 5a8e373c3b Replace Ansible 6 with Ansible 9
Ansible 6 is EOL and Ansible 9 is available.  Remove 6 and add 9.

This is usually done in two changes, but this time it's in one
since we can just rotate the 6 around to make it a 9.

command.py has been updated for ansible 9.

Change-Id: I537667f66ba321d057b6637aa4885e48c8b96f04
2024-02-15 16:20:45 -08:00
James E. Blair 1f026bd49c Finish circular dependency refactor
This change completes the circular dependency refactor.

The principal change is that queue items may now include
more than one change simultaneously in the case of circular
dependencies.

In dependent pipelines, the two-phase reporting process is
simplified because it happens during processing of a single
item.

In independent pipelines, non-live items are still used for
linear dependencies, but multi-change items are used for
circular dependencies.

Previously changes were enqueued recursively and then
bundles were made out of the resulting items.  Since we now
need to enqueue entire cycles in one queue item, the
dependency graph generation is performed at the start of
enqueuing the first change in a cycle.

Some tests exercise situations where Zuul is processing
events for old patchsets of changes.  The new change query
sequence mentioned in the previous paragraph necessitates
more accurate information about out-of-date patchsets than
the previous sequence, therefore the Gerrit driver has been
updated to query and return more data about non-current
patchsets.

This change is not backwards compatible with the existing
ZK schema, and will require Zuul systems delete all pipeline
states during the upgrade.  A later change will implement
a helper command for this.

All backwards compatibility handling for the last several
model_api versions which were added to prepare for this
upgrade have been removed.  In general, all model data
structures involving frozen jobs are now indexed by the
frozen job's uuid and no longer include the job name since
a job name no longer uniquely identifies a job in a buildset
(either the uuid or the (job name, change) tuple must be
used to identify it).

Job deduplication is simplified and now only needs to
consider jobs within the same buildset.

The fake github driver had a bug (fakegithub.py line 694) where
it did not correctly increment the check run counter, so our
tests that verified that we closed out obsolete check runs
when re-enqueuing were not valid.  This has been corrected, and
in doing so, has necessitated some changes around quiet dequeuing
when we re-enqueue a change.

The reporting in several drivers has been updated to support
reporting information about multiple changes in a queue item.

Change-Id: I0b9e4d3f9936b1e66a08142fc36866269dc287f1
Depends-On: https://review.opendev.org/907627
2024-02-09 07:39:40 -08:00
James E. Blair 9201f9ee28 Store builds on buildset by uuid
This is part of the circular dependency refactor.

This updates the buildset object in memory (and zk) to store builds
indexed by frozen job uuid rather than job name.  This also updates
several related fields and also temporary dictionaries to do the same.

This will allow us, in the future, to have more than one job/build
in a buildset with the same name (for different changes/refs).

Change-Id: I70865ec8d70fb9105633f0d03ba7c7e3e6cd147d
2023-12-12 11:58:21 -08:00
James E. Blair 033470e8b3 Fix repo state restore for zuul role tag override
When a repo that is being used for a zuul role has override-checkout
set to a tag, the job would fail because we did not reconstruct the
tag in our zuul-role checkout; we only did that for branches.

This fixes the repo state restore for any type of ref.

There is an untested code path where a zuul role repo is checked
out to a tag using override-checkout.  Add a test for that (and
also the same for a branch, for good measure).

Change-Id: I36f142cd3c4e7d0b930318dddd7276f3635cc3a2
2023-11-30 10:06:03 -08:00
Zuul c339a97e4d Merge "Add test for reporting of transient build errors" 2023-08-15 09:53:10 +00:00
James E. Blair 60a8dfd451 Add Ansible 8
This is the currently supported version of Ansible.  Since 7 is out
of support, let's skip it.

Change-Id: I1d13c23189dce7fd9db291ee03a452089b92a421
2023-07-19 15:46:48 -07:00
Simon Westphahl 708c1e7025
Add test for reporting of transient build errors
This change adds a test for the bug fixed in
I05be093f80f015463f727f55154e16202821e961.

Change-Id: I63021d396eedb1c99e2d7dd2f33209c563f38d82
2023-05-23 14:22:13 +02:00
James E. Blair f9eb499870 Remove Ansible 5
Change-Id: Icd8c33dfe1c8ffd21a717a1a94f1783c244a6b82
2022-10-11 17:03:57 -07:00
James E. Blair 2d6b5c19ba Remove support for Ansible 2
Versions 2.8 and 2.9 are no longer supported by the Ansible project.

Change-Id: I888ddcbecadd56ced83a27ae5a6e70377dc3bf8c
2022-09-14 17:14:10 -07:00
James E. Blair 7949efd255 Add Ansible 6
Change-Id: I0d450d9385b9aaab22d2d87fb47798bf56525f50
2022-09-02 10:12:55 -07:00
James E. Blair 725b2b3b87 Fix Ansible version testing
Several of our tests which validate Ansible behavior with Zuul are
not versioned so that they test all supported versions of Ansible.
For those cases, add versioned tests and fix any discrepancies that
have been uncovered by the additional tests (fortunately all are
minor test syntax issues and do not affect real-world usage).

One of our largest versioned Ansible tests was not actually testing
multiple Ansible versions -- we just ran it 3 times on the default
version.  Correct that and add validation that the version ran was
the expected version.

Change-Id: I26213f69fe844776408fce24322749a197e07551
2022-09-02 10:12:52 -07:00
Simon Westphahl 21f5bd9f11 Load job from pipeline state on executors
Instead of sending the required job variables via the build request we
will give the executor the job path so it can load the frozen job from
the pipeline state.

Change-Id: Ie1b7ea0a2bc5dfc2d44bcafbc9eb8c227bbe7de2
2021-11-23 15:16:32 -08:00
Tobias Henkel d3a2c33171 Increase load_multiplier in tests
We're seeing many test failures due to executors unregistering during
high system load and thus causing timeouts in test cases. During tests
we expect the system to be busy so increase the load_multiplier in
tests.

Change-Id: I54a05adc9e7cb9efaf20b70e59a59cefd44e21e9
2021-10-29 17:20:40 -07:00
Felix Edel c6ce4ae2bb Don't use executor.builds when processing build result events
The executor client still holds a list of local builds objects which is
used in various places. One use case is to look up necessary
information of the original build when a build result event is handled.

Using such a local list won't work with multiple schedulers in place. As
a first step we will avoid using this list for handling build result
events and instead provide all necessary information to the build result
itself and look up the remaining information from the pipeline directly.

This change also improves the log output when processing build result
events in the scheduler.

Change-Id: I9c4e573de2ce63259ec6cfb7d69c2f5be48f33ef
2021-09-24 16:25:25 -07:00
James E. Blair 6fcde31c9e Try harder to unlock failed build requests
An OpenDev executor lost the ZK connection while trying to start
a build, specifically at the stage of reading the params from ZK.
In this case, it was also unable to unlock the build request
after the initial exception was raised.  The ZK connection
was resumed without losing the lock, which means that the build
request stayed in running+locked, so the cleanup method leaves
it alone.  There is no recovery path from this situation.

To correct this, we will try indefinitely to unlock a build request
after we are no longer working on it.  Further, we will also try
indefinitely to report the result to Zuul.  There is still a narrow
race condition noted inline, but this change should be a substantial
improvement until we can address that.

Also, fix a race that could run merge jobs twice and break their result

There is a race condition in the merger run loop that allows a merge job
to be run twice whereby the second run breaks the result because the job
parameters where deleted during the first run.

This can occur because the merger run loop is operating on cached data.
It could be that a merge request is taken into account because it's
unlocked but was already completed in a previous run.

To avoid running the request a second time, the lock() method now
updates the local request object with the current data from ZooKeeper
and the merger checks the request's state again after locking it.

This change also fixes the executor run loop as this one is using the
same methods. Although we've never seen this issue there it might be
hidden by some other circumstances as the executor API differs in some
aspects from the merger API (e.g. dealing with node requests and node
locking, no synchronous results).

Change-Id: I167c0ceb757e50403532ece88a534c4412d11365
Co-Authored-By: Felix Edel <felix.edel@bmw.de>
2021-09-07 09:34:44 -07:00
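The "try indefinitely to unlock" behavior can be sketched as a retry loop; this is an illustration of the recovery strategy described above, not Zuul's actual code, and `unlock_with_retries` is a hypothetical helper name.

```python
import time

def unlock_with_retries(unlock, interval=0.0, max_attempts=None):
    # If a build request is left running+locked after a ZK hiccup, the
    # periodic cleanup skips it and there is no recovery path -- so keep
    # retrying the unlock rather than giving up on the first error.
    attempt = 0
    while True:
        try:
            return unlock()
        except Exception:
            attempt += 1
            if max_attempts is not None and attempt >= max_attempts:
                raise
            time.sleep(interval)

# Simulate an unlock that fails twice before the connection recovers:
attempts = {"n": 0}
def flaky_unlock():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("suspended ZooKeeper session")

unlock_with_retries(flaky_unlock, interval=0.0)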
James E. Blair 03e98df9da Use the nodeset build parameter instead of hosts/groups
The serialized nodeset is now supplied as a build parameter,
which makes the synthetic hosts and groups parameters which are
derived from it redundant.

Update the executor to rely entirely on the deserialized nodeset.

We also rename the method which creates the parameters since they
are not used for gearman any more.

A subsequent change can remove the hosts and nodes parameters.

Change-Id: Ied7f78c332485e5c66b5721c1007c25660d4238e
2021-07-20 11:04:24 -07:00
Felix Edel fee46c25bc Lock/unlock nodes on executor server
Currently, the nodes are locked in the scheduler/pipeline manager before
the actual build is created in the executor client. When the nodes are
locked, the corresponding NodeRequest is also deleted.

With this change, the executor will lock the nodes directly before
starting the build and unlock them when the build is completed.

To keep the order of events intact, the nodepool.acceptNodes() method is
split up into two:
    1. nodepool.acceptNodeRequest() does most of the old acceptNodes()
       method except for locking the nodes and deleting the node
       request. It is called on the scheduler side when the
       NodesProvisionedEvent is handled (which is also where
       acceptNodes() was previously called).
    2. nodepool.acceptNodes() is now called on the executor side when
       the job is started. It locks the nodes and deletes the node
       request in ZooKeeper.

Finally, it's also necessary to move the autohold processing to the
executor, as this requires a lock on the node. To allow processing of
autoholds, the executor now also determines the build attempts and sets
the RETRY_LIMIT result if necessary.

Change-Id: I7392ce47e84dcfb8079c16e34e0ed2062ebf4136
2021-07-01 05:46:02 +00:00
James E. Blair 118d45b1f2 Shard BuildRequest parameters
It's possible for the build request parameters to get quite large,
so this uses the sharding API to split them across multiple znodes.

Since the executor is going to attempt to start processing a build
request as soon as it exists, and the natural place for the sharded
parameters is underneath the build request, if we created the request
first and then added the sequence znodes for the parameters, the
executor may try to start processing the build request before the
parameters are written.

To resolve this, the sharding API is updated so that it can accept
not only a zk client, but also a zk transaction (which behaves like
a client with some restrictions).  This means we can create the build
request and all of the sharded parameter nodes in one atomic
transaction.

One of the restrictions is that we can't pass 'makepath' to the create
call.  That has an impact on how the sharded nodes are created; it
seems we can't create "foo/0000000000", instead, we must always have
a prefix for the sequence, so we will end up with nodes like
"foo/seq00000000000".  That works well enough for this case.

Change-Id: I4e0e8ec579b291f4d410bcd95ac01f195b3007c1
2021-06-29 14:37:15 -07:00
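The chunking side of sharding can be sketched independently of ZooKeeper: a payload larger than the znode size limit is split into pieces that become the sequence children ("foo/seq0000000000", "foo/seq0000000001", ...) and concatenated back on read. This is a schematic of the splitting logic only, assuming a roughly 1 MiB znode limit; the real API also handles the transaction plumbing described above.

```python
MAX_ZNODE = 1024 * 1024  # ZooKeeper's default znode size limit is ~1 MiB

def shard(data: bytes, limit: int = MAX_ZNODE):
    # Split a large payload into chunks, each destined for one
    # sequence znode under the build request.
    return [data[i:i + limit] for i in range(0, len(data), limit)]

def unshard(chunks):
    # Reading is just concatenating the sequence children in order.
    return b"".join(chunks)

params = b"x" * 305
chunks = shard(params, limit=100)  # -> 100, 100, 100, 5 bytes
```

Creating the request node and all of its parameter chunks in one transaction is what guarantees the executor never observes a request whose parameters are only partially written.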
Felix Edel 6ac14615a0 Execute builds via ZooKeeper
This is the second part of I5de26afdf6774944b35472e2054b93d12fe21793.
It uses the executor api.

Three tests are disabled until the next change.

Change-Id: Ie08fa9dfb4bb3adb9a02e0a2e8b11309e1ec27cd
2021-06-29 14:37:15 -07:00
James E. Blair be50a6ca42 Freeze job variables at start of build
Freeze Zuul job variables when starting a build so that jinja
templates cannot be used to expose secrets.  The values will be
frozen by running a playbook with set_fact, and that playbook
will run without access to secrets.  After the playbook
completes, the frozen variables are read from and then removed
from the fact cache.  They are then supplied as normal inventory
variables for any trusted playbooks or playbooks with secrets.

The regular un-frozen variables are used for all other untrusted
playbooks.

Extra-vars are now only used to establish precedence among all
Zuul job variables.  They are no longer passed to Ansible with
the "-e" command line option, as that level of precedence could
also be used to obtain secrets.

Much of this work is accomplished by "squashing" all of the Zuul
job, host, group, and extra variables into a flat structure for
each host in the inventory.  This means that much of the variable
precedence is now handled by Zuul, which then gives Ansible
variables as host vars.  The actual inventory files will be much
more verbose now, since each host will have a copy of every "all"
value.  But this allows the freezing process to be much simpler.

When writing the inventory for the setup playbook, we now use the
!unsafe YAML tag which is understood by Ansible to indicate that
it should not perform jinja templating on variables.  This may
help to avoid any mischief with templated variables since they
have not yet been frozen.

Also, be more strict about what characters are allowed in ansible
variable names.  We already checked job variables, but we didn't
verify that secret names/aliases met the ansible variable
requirements.  A check is added for that (and a unit test that
relied on the erroneous behavior is updated).

Story: 2008664
Story: 2008682
Change-Id: I04d8b822fda6628e87a4a57dc368f20d84ae5ea9
2021-06-24 06:24:23 -07:00
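The "squashing" of variables into a flat per-host structure amounts to merging the sources in precedence order, with later sources winning; Zuul resolves the precedence itself and then hands Ansible a single host-vars dict per host. The exact ordering below is illustrative, not a statement of Zuul's full precedence rules.

```python
def squash(job_vars, group_vars, host_vars, extra_vars):
    # Merge all Zuul variable sources into one flat dict for a host;
    # later sources override earlier ones.  Extra-vars only establish
    # precedence among Zuul variables here -- they are no longer
    # passed to Ansible with -e.
    flat = {}
    for source in (job_vars, group_vars, host_vars, extra_vars):
        flat.update(source)
    return flat
```

This is why the inventory files become more verbose: every host carries its own copy of each "all" value, but freezing then only has to snapshot one flat dict per host.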
Clark Boylan d296098c05 Cleanup Zuul's stdout/stderr output
This is primarily an issue in the unittests, but we also cleanup a
problem with output in ansible package installation verification.

There are two types of issue we address here. The first is unittests
calling print(). We replace those with log.info() calls to keep this
information from adding noise to the stestr output. The second is
subprocess.run() not capturing output so it ends up on stdout/stderr. In
this case we update use of subprocess.run() to capture the output, then
log/error appropriately if the return code is not 0.

Change-Id: I22650bf9495d3fe71bdf4a2dec5d9b3f30116188
2021-06-04 11:42:02 -07:00
Albin Vass 85c7dc1665 Use shell-type config from nodepool
Ansible needs to know which shell type the node uses to operate
correctly, especially for ssh connections for windows nodes because
otherwise ansible defaults to trying bash. Nodepool now allows this
setting in most driver configurations and this change makes Zuul
utilize that setting in the inventory file.

Change-Id: I55389ae8fa30be70c3939737f8c67282aad0ae47
2021-03-08 22:16:23 +01:00
Matthieu Huin 5431c029a8 gerrit: fix invalid ref computation from change
Gerrit's refs are left-padded with zeroes if the change's number is
below 10, for example 9,1 -> refs/changes/09/9/1.

Fix an error in computing the change's ref when the change's number is
below 10.
Modify the test framework to emulate Gerrit ref naming convention more
faithfully in tests.

Change-Id: I54a3c3dcaa9a08cff97bfd701e28b6f240fdb77d
2021-01-05 15:54:37 +01:00
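The corrected ref computation can be sketched directly from the example in the message: Gerrit shards change refs by the last two digits of the change number, zero-padded, so change 9 patchset 1 lives at refs/changes/09/9/1 (not refs/changes/9/9/1). The function name is illustrative.

```python
def change_ref(number: int, patchset: int) -> str:
    # The first path segment is the change number modulo 100,
    # left-padded with a zero when below 10.
    return f"refs/changes/{number % 100:02d}/{number}/{patchset}"
```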
James E. Blair 5804c4c293 Sequence builds in test_executor
These assertions assume the builds are in a specific order.  To
ensure that, wait for each build to pause before starting the next.

Change-Id: I2e62a0197b833e36522aac14dc8f4d4f386eccf5
2020-07-28 13:40:14 -07:00
Monty Taylor 67e87e28e0 Skip host key checking if host keys are missing
If nodepool doesn't send any host keys, we need to not attempt
to check them on the zuul side. Nodepool can omit them if
host key checking is disabled, which is necessary in some
circumstances.

Change-Id: Ib35a9d4c9911fe13afabf089707efcc761fffc74
2020-07-08 08:56:24 -05:00
Albin Vass 518cf7fe5e Enables whitelisting and configuring callbacks
Change-Id: Ida7b84795d922b85ec9cc6161ab1203fb82da825
2020-05-12 19:01:51 +02:00
Albin Vass f9a1e1a958 Validate ansible extra packages
Currently when validating the ansible installation, zuul only checks
if ansible is installed and not any packages that would have been
installed with ANSIBLE_EXTRA_PACKAGES. Since the executor image has
ansible pre-installed, the ANSIBLE_EXTRA_PACKAGES environment variable
has no effect unless ansible is removed.

This adds a check to make sure packages specified with
ANSIBLE_EXTRA_PACKAGES are installed as well.

Change-Id: I7ee4125d6716db718bb355b837e90dbcfce9b857
2020-05-08 09:03:10 +02:00
vass d919666778 Filter secret ZUUL_ env variables from ansible env
Change-Id: I4c8df21399240fe32760f8af1d183ba3a237eede
2020-04-15 23:09:48 +02:00
Jan Kubovy a770be9b83 Scheduler test app manager
As a preparation for scale-out-scheduler the scheduler in tests
were extracted in order to start multiple instances in a previous
change.

This change continues on by introducing a manager to create
additional scheduler instances and the ability to call certain
methods on some or all of those instances.

This change only touches tests.

Change-Id: Ia05a7221f19bad97de1176239c075b8fc9dab7e5
Story: 2007192
2020-04-03 14:49:59 +02:00
Jan Kubovy 1bb26d7b37 Make test setup_config more pure
Setup config will not set the base test config object rather return a new one.
This allows to setup multiple config objects which is needed in order to
instantiate multiple schedulers enabling them to have different configs, e.g.
command socket.

Change-Id: Icc7ccc82f7ca766b0b56c38e706e6e3215342efa
Story: 2007192
2020-02-28 11:50:22 +01:00
Felix Edel 7ec23904d7
Fix evaluation of range file_comments
While implementing the file comment functionality for Github, I stumbled
over a bug in the line mapping calculation for range comments.
It looks like this never worked before as it tries to access a
non-existing key in the file_comments dictionary, always resulting in a
"KeyError: 'rng'".

Change-Id: I9920cdd75b8b3e4a856317a66cd476c8d57f2b9b
2020-02-17 10:43:23 +01:00
James E. Blair 96991ac179
Don't set ansible_python_interpreter if in vars
Zuul always sets ansible_python_interpreter as a host var.  However
a user may want to set that as a regular var (to apply to the all
group) or a group var.  If that happens, disable Zuul's own setting
of the value. Note that users can still override the all-var or
group-var with a host-var of their own.

Change-Id: Id130ec1718efa25b260b39ea0587ec5794e8e2cf
2019-12-13 11:48:41 +01:00
James E. Blair fdb1a5ce50
Fix deletion of stale build dirs on startup
This code had a bug -- it didn't build the full path.
This code was not tested.

These two things are related.

Change-Id: I7881fb30017cedc12435e0fcbfda321bdf20d611
2019-11-22 17:06:18 +01:00
Jan Kubovy 255de7646f Update heuristic of parallel starting builds.
An executor is accepting up to twice as many starting builds as defined
by the load_multiplier option. This is limited to 4 CPU/vCPU count.
After that the executor is accepting only up to as many starting builds
as defined by the load_multiplier (also up to half as many).

Change-Id: I8cf395c41191647605ec47d1f5681dc46675546d
2019-08-27 20:07:50 +00:00
Jan Kubovy 35e0bc9b6e Overriding max. starting builds.
An executor is accepting up to twice as many starting builds as defined
by the load_multiplier option. On system with high CPU/vCPU count an
executor may accept too many starting builds. This can be overwritten
using a new max_starting_builds option.

Change-Id: Ic7c121e795e4e3cecec25b2b06dd1a26aa798439
2019-08-22 10:51:44 +02:00
Jean-Philippe Evrard 6a9cf31889 Expose date time as facts
As the ansible facts on the executor's (localhost) are limited, this
change gets all the usual facts of ansible_date_time and makes them
available as date_time.

The timezone is always set to UTC so that the location of the executor
is irrelevant.

Co-Authored-By: Joshua Hesketh <josh@nitrotech.org>
Change-Id: I50c6ed8bb6c0402bb40d96d12abc26dcf61ee630
2019-06-18 12:19:18 +00:00
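A minimal sketch of building such a fact structure in UTC, in the spirit of Ansible's `ansible_date_time`: the field names below are a small illustrative subset, not the exact set this commit exposes.

```python
from datetime import datetime, timezone

def date_time_facts():
    # Pin the clock to UTC so the executor's physical location is
    # irrelevant to the values jobs see.
    now = datetime.now(timezone.utc)
    return {
        "year": now.strftime("%Y"),
        "month": now.strftime("%m"),
        "day": now.strftime("%d"),
        "hour": now.strftime("%H"),
        "epoch": str(int(now.timestamp())),
        "iso8601": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

facts = date_time_facts()
```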
Tobias Henkel 2c4f9ec6da
Mock system load in executor governor tests
We've seen occasional test failures of test_slow_start [1]. This fails
because the executor unregisters due to high system load on the test
node. However we want to test isolated reasons so mock the system load
in those test cases. The test_hdd_governor and test_pause_governor
have the same issue.

[1] Trace:
zuul.ExecutorServer              INFO     Unregistering due to high system load 20.21 > 20.0
Traceback (most recent call last):
  File "/home/zuul/src/git.openstack.org/openstack-infra/zuul/tests/unit/test_executor.py", line 616, in test_slow_start
    self.assertTrue(self.executor_server.accepting_work)
  File "/home/zuul/src/git.openstack.org/openstack-infra/zuul/.tox/py35/lib/python3.5/site-packages/unittest2/case.py", line 702, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true

Change-Id: Ib6cd3c894c51e03ea76b6d18282e8bd88b335538
2019-03-23 15:07:20 +01:00
Tobias Henkel cac0a6c9ef
Fix test race in test_periodic_override
The test case test_periodic_override checks that project override in
timer-triggered pipelines works. For this it enables the timer trigger
and then creates the branch stable/havana. After some time it checks
that all in-flight builds checked out the correct revision.
revision. However the branch creation itself first creates a branch
and then a commit. If the timer kicks in between those two steps zuul
will use the same commit for both branches and the test fails.

Fix this by creating the branch before enabling the timer.

Change-Id: I21866978f4ffaaad4484b2dc93c6a6afa0745f9d
2019-03-23 13:03:19 +01:00
Tobias Henkel 5c2b61e638
Make ansible version configurable
Currently the default ansible version is selected by the version of
zuul itself. However we want to make this configurable per deployment
(zuul.conf), tenant and job.

Change-Id: Iccbb124ac7f7a8260c730fbc109ccfc1dec09f8b
2019-03-15 09:09:16 +01:00
Tobias Henkel 160cd6468b
Fix test_load_governor on large machines
The test_load_governor fakes the load to be 100 and expects the
executor to deregister. However when running the tests on a really big
machine (72 cores) the test fails because the executor permits a load
of 180 before deregistering. This can be fixed by faking the load
relative to the cpu count.

Change-Id: Ia177605356e90fc33848097f9443fc2859ac61e2
2019-01-11 12:39:23 +01:00
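The arithmetic behind the fix can be sketched simply: the governor's threshold scales with core count, so a hard-coded fake load of 100 never trips it on a 72-core machine (2.5 x 72 = 180, matching the limit the message cites, assuming Zuul's default multiplier of 2.5). Faking the load *relative* to the computed limit keeps the test deterministic on any machine.

```python
def load_limit(load_multiplier: float, cpu_count: int) -> float:
    # The executor deregisters when the 1-minute load average
    # exceeds load_multiplier * cpu_count.
    return load_multiplier * cpu_count

# On a 72-core machine the permitted load is 180, so a fake load of
# 100 does not trigger deregistration; limit + 1 always does.
limit = load_limit(2.5, 72)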
Tobias Henkel 145e62b568
Add cgroup support to ram sensor
When running within k8s the system memory statistics are useless as
soon there are configured limits (which is strongly advised). In this
case we additionally need to check the cgroups.

Change-Id: Idebe5d7e60dc862e89d012594ab362a19f18708d
2018-12-18 22:25:27 +01:00
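The reasoning can be sketched as taking the more constrained of the two views: inside a container, system-wide memory statistics reflect the host, so when a cgroup limit is configured the sensor must also consider usage against that limit. The function signature is illustrative, not Zuul's sensor API.

```python
def ram_usage_percent(system_used, system_total,
                      cgroup_used=None, cgroup_limit=None):
    # System-wide stats describe the host; when running under k8s with
    # memory limits, the cgroup view may be far more constrained, so
    # report whichever usage percentage is higher.
    pct = 100.0 * system_used / system_total
    if cgroup_limit:
        pct = max(pct, 100.0 * cgroup_used / cgroup_limit)
    return pct
```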
James E. Blair dbe1306b36 Provide per-project ssh key to executor
If a job is run in a post-review pipeline, add the per-project
ssh key of the triggering project to the executor.

This also contains a minor refactor to avoid repeatedly json-parsing
the gearman job arguments, and a fix to TestAnsibleJob which was
using the wrong kind of 'Job'.

Change-Id: I585010366ad87f6d6292e8d4e0855f70e23669f5
2018-09-04 15:42:42 -07:00
James E. Blair 4e70bebafb Map file comment line numbers
After a build finishes, if it returned file comments, the executor
will use the repo in the workspace (if it exists) to map the
supplied line numbers to the original lines in the change (in case
an intervening change has altered the files).

A new facility for reporting warning messages is added, and if the
executor is unable to perform the mapping, or the file comment syntax
is incorrect, a warning is reported.

Change-Id: Iad48168d41df034f575b66976744dbe94ec289bc
2018-08-15 14:38:03 -07:00
James E. Blair a48c9101c6 Cache branches in connections/sources
The current attempt to cache branches is ineffective -- we
query the list of branches during every tenant reconfiguration.

The list of branches for a project is really global information;
we might cache it on the Abide, however, drivers may need to filter
that list based on tenant configuration (eg, github protected
branches).  To accommodate that, just allow/expect the drivers to
perform their own caching of branches, and to generally keep
the list up to date (or at least invalidate their caches) by
observing branch create/delete events.

A full reconfiguration instructs the connections to clear their
caches so that we perform a full query.  That way, an operator
can correct from a situation where the cache is invalid.

Change-Id: I3bd0cda5875dd21368e384e3704a61ebb5dcedfa
2018-08-09 16:02:02 -07:00
Fabien Boucher bc20de95e5 Remove unnecessary shebang and exec bit
Change-Id: I54de68b11f055a9269ca5efb8a57f81d57f9d55f
2018-07-26 07:12:24 +00:00
Paul Belanger 608b22f577
Add min_avail_hdd governor for zuul-executor
Using the zuul.executor.state_dir setting from zuul.conf, we can
create a new governor to track the amount of space a zuul-executor is
using. If we go above the min_avail_hdd space (default 5.0%), we'll
stop accepting jobs until space has been reclaimed by the executor.

Change-Id: Ieb446397135ee5b138829cd2440b8c86abbb7d56
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
2018-06-27 14:49:22 -04:00
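The governor's check can be sketched with the standard library, assuming the state_dir path from zuul.conf: compare the free-space percentage of the filesystem against the configured minimum (default 5.0%). Names here are illustrative, not the governor's actual interface.

```python
import shutil

def accepting_work(state_dir: str, min_avail_pct: float = 5.0) -> bool:
    # Stop accepting jobs when the free-space percentage of the
    # executor's state_dir filesystem drops below min_avail_hdd.
    usage = shutil.disk_usage(state_dir)
    avail_pct = 100.0 * usage.free / usage.total
    return avail_pct >= min_avail_pct
```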
Tobias Henkel d1372f8f98
Add pause function to executor
The pause function in the executor would facilitate rolling updates in
environments like OpenShift that normally restart services. By
utilizing the governor sensor mechanism it is now easy to add the
pause function.

Change-Id: I2c1f48392514ae9e72f2587a88ef66200cbfdcf8
2018-06-25 16:25:32 +02:00