As a first step towards supporting multiple ansible versions we need
tooling to manage ansible installations. This moves the installation
of ansible from requirements.txt into zuul itself: a setup hook
installs the ansible versions into <prefix>/lib/zuul/ansible.
This tooling also encapsulates the knowledge the executor needs in
order to run the correct version of ansible.
The actual usage of multiple ansible versions will be done in
follow-ups.
For better maintainability the ansible plugins live in
zuul/ansible/base, where plugins can be kept in multiple versions if
necessary. For each supported ansible version there is a dedicated
folder that symlinks to the appropriate plugins.
Change-Id: I5ce1385245c76818777aa34230786a9dbaf723e5
Depends-On: https://review.openstack.org/623927
Currently, this information is missing completely, although it's very
useful when somebody wants to analyze the Ansible run based on the
JSON log.
After also proposing this patch to Ansible, I learned that this
info is already visible in the original Ansible json callback:
https://github.com/ansible/ansible/pull/50853
So, I've just added this missing part to the zuul_json callback.
Change-Id: I1ee043fc1be95ec3260d3fe427653ffe8c09b8f7
Instead of holding the old log data in RAM for the entire run, just read
it in right before writing the new data out.
Change-Id: I9785475b8c876f2cf8e61c5926e6c9d43a432deb
Having the role path is sometimes needed for debugging in case there are
multiple roles with the same name.
Change-Id: Icbd682a6cd84f2bcbaf07625f601316ffd2ea4fc
Currently we can leak secrets if we encounter unreachable nodes
combined with a task using with_items and no_log. In this case the
item variables are written to both the job-output.json and
job-output.txt. Upstream Ansible has the same issue [1].
The text log can be fixed by defining the v2_runner_on_unreachable
callback the same as v2_runner_on_failed.
The json log can be fixed the same way as the upstream Ansible issue.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1588855
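The text-log fix can be sketched as aliasing the unreachable callback to the failure handler. The class below is a hypothetical stand-in for the zuul_stream callback plugin (real Ansible results are objects, not dicts); only the v2_* method names follow Ansible's callback API.

```python
class LogCallback:
    # Hypothetical stand-in for the zuul_stream callback plugin.
    def __init__(self):
        self.lines = []

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # The failure path already hides item variables when no_log is
        # set, so routing unreachable results through the same code
        # keeps secrets out of the text log too.
        if result.get('_ansible_no_log'):
            self.lines.append('output hidden due to no_log')
        else:
            self.lines.append(str(result))

    # The fix: define the unreachable callback the same as the
    # failure callback.
    v2_runner_on_unreachable = v2_runner_on_failed
```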
Change-Id: Ie5dd2a6b11e8e276da65fe470f364107f3dd07ef
Our custom command.py Ansible module is updated to match the
version from 2.5, plus our additions.
strip_internal_keys() is moved within Ansible yet again.
Change-Id: Iab951c11b23a24757cf5334b36bc8f7d12e19db0
Depends-On: https://review.openstack.org/567007
In comparison to other callback plugins, like the default (stdout)
plugin, the role information for an executed task is missing from the
json output. To be consistent with the provided task output, which
contains the name and uuid fields, we've defined a similar data
structure containing the role information. It is only added to the
result set if the task carries the necessary role information.
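The resulting structure can be sketched with a hypothetical helper (field names mirror the description above; the real callback builds this from Ansible task objects):

```python
def build_task_entry(task_name, task_uuid, role_name=None, role_uuid=None):
    # The role block uses the same name/uuid fields as the task block
    # and is only present when the task actually ran inside a role.
    entry = {'task': {'name': task_name, 'uuid': task_uuid}}
    if role_name:
        entry['role'] = {'name': role_name, 'uuid': role_uuid}
    return entry
```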
Change-Id: I8d94ba077e0bc90b5cf6510804bbd57c38184a9d
Currently, the timestamp information is only provided directly by a few
Ansible modules (e.g. the command module, which shows the runtime of a
command per host result).
This change adds overall timing information to all executed tasks. The
delta between the two timestamps shows how long it took a task to
finish across all hosts/nodes.
Update: This information is now also available for plays.
This patch is also proposed for ansible and can be found here:
https://github.com/ansible/ansible/pull/39277
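The mechanism can be sketched as follows (a simplified, hypothetical callback; the real plugin hooks Ansible's v2_* callback methods and its field names may differ):

```python
from datetime import datetime, timezone

class TimingCallback:
    # Record a start timestamp when a task begins and an end timestamp
    # once all hosts have finished; the delta between the two is the
    # task's overall runtime across all hosts/nodes.
    def on_task_start(self, name):
        self.current = {
            'name': name,
            'start': datetime.now(timezone.utc).isoformat(),
        }

    def on_task_end(self):
        self.current['end'] = datetime.now(timezone.utc).isoformat()
        return self.current
```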
Change-Id: I6294d5d60236905d58c738613e71fcfb1202b45a
The combination of with_items, register, and no_log meant that we
were modifying a list which was a shared reference between the
results object used by ansible and that which we log to json.
Make a deep copy of the results object before we modify it so that
we don't modify the "original".
Also, correct a comment about the location of an import.
This adds a test which fails without the fix.
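The fix can be sketched like this (illustrative dict shapes; Ansible's real result objects differ):

```python
import copy

def censor_for_log(result):
    # 'result' is a shared reference to the object Ansible still
    # holds; a shallow copy would still share the nested 'results'
    # list. Deep-copy first, then modify only our copy.
    safe = copy.deepcopy(result)
    for item in safe.get('results', []):
        if item.get('_ansible_no_log'):
            item['stdout'] = 'output hidden'
    return safe
```

Mutating the deep copy leaves the "original" untouched, so Ansible's own view of the results is unaffected.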
Change-Id: Iaab94f4ac8a0f58089912e464f6dfcf2e5f8ce71
When using 'with_items' in Ansible, the results of the individual
iterations are contained in a list under the 'results' key.
This can cause secrets to be leaked when they are used in a loop.
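The shape of such a result, and the corresponding censoring, can be sketched as follows (illustrative values; the real result objects carry more fields):

```python
# Each loop iteration lands in the 'results' list, carrying its
# (possibly secret) 'item' variable.
loop_result = {
    'changed': True,
    'results': [
        {'item': {'user': 'app', 'password': 'hunter2'},
         '_ansible_no_log': True, 'rc': 0},
    ],
}

def hide_loop_items(result):
    # Replace each no_log iteration's variables before logging.
    for res in result.get('results', []):
        if res.get('_ansible_no_log'):
            res['item'] = 'output hidden due to no_log'
    return result
```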
Change-Id: I9e8d08f75207b362ca23457c44cc2f38ff43ac23
We need to pass a working logging config to zuul_stream and ara so that
alembic migrations don't step on pre-playbook output.
Write a logging config using json, then pass its location in env vars so that
zuul_stream and ara can pick it up and pass it to dictConfig.
In support of this, create a LoggingConfig class so that we don't have
to copy key names and logic between executor.server, zuul_stream and
zuul_json. Since we have one, go ahead and use it for the server logging
config too, providing them with a slightly richer default logging
config for folks who don't provide a logging config file of their own.
The log config processing has to go into zuul.ansible because it's
needed in the bubblewrap and we don't have it in the python path
otherwise.
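The handoff can be sketched as follows; the env var name and the config contents here are illustrative assumptions, not necessarily Zuul's actual keys:

```python
import json
import logging.config
import os
import tempfile

# Executor side: write the logging config as JSON and export its path.
config = {
    'version': 1,
    'formatters': {'plain': {'format': '%(message)s'}},
    'handlers': {'console': {'class': 'logging.StreamHandler',
                             'formatter': 'plain'}},
    'root': {'handlers': ['console'], 'level': 'INFO'},
}
fd, path = tempfile.mkstemp(suffix='.json')
with os.fdopen(fd, 'w') as f:
    json.dump(config, f)
os.environ['ZUUL_JOB_LOG_CONFIG'] = path  # hypothetical variable name

# Callback side (zuul_stream / ara): pick it up and apply it.
with open(os.environ['ZUUL_JOB_LOG_CONFIG']) as f:
    logging.config.dictConfig(json.load(f))
```

Sharing one LoggingConfig class on both sides avoids duplicating the key names and defaults between executor.server, zuul_stream and zuul_json.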
Change-Id: I3d7ac797fd2ee2c53f5fbd79d3ee048be6ca9366
Pass in and use info from the JobDirPlaybook rather than trying to strip
path elements from the playbook name.
Change-Id: Ifcd6f05e27c987d40db23b3dcec344c2eb786d7c
The json output wasn't doing its read/append/write cycle properly,
leading to:
{
  "plays": {
    "plays": {
      "plays": {
        "plays": {
          "plays": {
            "plays":
Which, while amusing, isn't really what we wanted.
Add a wrapping layer that contains playbook info. The original format
has a list of play results but only one stats section, so it's not
actually suitable for appending a list of plays anyway.
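The corrected cycle can be sketched like this (illustrative field names): the file holds a top-level list of per-playbook wrappers, each with its own plays and stats, so appending on subsequent runs can never nest.

```python
import json
import os

def append_playbook(path, plays, stats, playbook_name):
    # Read whatever earlier playbooks wrote, append one wrapper
    # object for this playbook, and write the whole list back.
    data = []
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    data.append({'playbook': playbook_name,
                 'plays': plays,
                 'stats': stats})
    with open(path, 'w') as f:
        json.dump(data, f)
```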
Change-Id: I49394e9faa8027a21e5ef6919c0f75a4473f51a9
Tried first with the upstream callback plugin, but it is a stdout
plugin, so it needs to take over stdout to work, and we need stdout for
executor communication. Then tried subclassing, but the magical ansible
module plugin loading fun happened again. Just copy it in and modify it
slightly for now.
We add playbook, phase and index information. We also read the previous
file back in and append to it on subsequent runs. This may be a memory
issue. However, the current construction will hold all of an individual
play in memory anyway. Most of our content size concerns are around
devstack jobs, where the bulk of the content will be in a single
playbook anyway; so although RAM pressure may be a real thing, we may
need to solve it at the single-playbook level regardless. But for now,
this should get us the data.
Change-Id: Ic1becaf2f3ab345da22fa62314f1296d76777fec