Some log upload tasks were missing no_log instructions and might
write out credentials to the job-output.json file. Update these
tasks to include no_log.
Change-Id: I1f18cec117d9205945644ce19d5584f5d676e8d8
Newer ansible-lint finds "when" or "become" statements that are at the
end of blocks. Ordering these before the block seems like a very
logical thing to do: we read from top to bottom, so it's good to see
up front whether the block will execute or not.
This is a no-op that just reorders the statements the newer linter flagged.
Change-Id: If4d1dc4343ea2575c64510e1829c3fe02d6c273f
This is preparation for a later version of ansible-lint, which finds
missing names on blocks. This seems a reasonable rule, and the
Ansible manual says [1]
Names for blocks have been available since Ansible 2.3. We recommend
using names in all tasks, within blocks or elsewhere, for better
visibility into the tasks being executed when you run the playbook.
This simply adds a name to blocks that are missing one. This
should have no operational change, but allows us to update the linter
in a follow-on change.
[1] https://docs.ansible.com/ansible/latest/user_guide/playbooks_blocks.html
Change-Id: I92ed4616775650aced352bc9088a07e919f1a25f
This reverts commit 862ae3f5d6.
We did not consider the effect on the quick-download link that
is generated in opendev:
http://paste.openstack.org/show/802839/
Change-Id: I9702f8f1c0155ee3b13c74baaf2c09db72e690fd
Add a zuul_log_storage_proxy_address variable whose value, when set,
replaces the storage endpoint address.
The use case is a storage proxy positioned in front of the storage
endpoint.
Change-Id: I353cd50b76683212b0319a1e28f34901267c08e4
As a first step towards minimizing code duplication between the
various upload-logs roles, move the upload modules into a common role,
upload-logs-base. For easier review, common code will be consolidated
in a follow-up change.
The google and s3 variants missed the unicode fix that the swift log
upload received. Add it to make the test cases work with the same
fixtures.
Change-Id: I2d4474ae1023c3f3e03faaa5b888e705ee2ed0bc
We are facing some issues where the log upload to swift fails, but the
role is always succeeding. To get some more information about the
upload failures, we let the upload() method return those to the Ansible
module and provide them in the module's JSON result.
The equivalent change in the test-upload-logs-swift [1] role is
validated in [2].
[1] https://review.opendev.org/#/c/735503/1
[2] https://review.opendev.org/#/c/737441/
Change-Id: Ie0d4ea2f3365600eae0e572e4c0790b131d3b13e
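The pattern described above can be sketched roughly as follows; `upload`,
`put_object`, and the result keys are illustrative names, not the role's
actual API:

```python
def upload(files, put_object):
    """Try to upload each file; collect failures instead of hiding them.

    `put_object` stands in for the real swift upload call
    (hypothetical name).
    """
    failures = []
    for path in files:
        try:
            put_object(path)
        except Exception as e:
            failures.append({'file': path, 'error': str(e)})
    return failures


def module_result(files, put_object):
    # Surface the failures in the Ansible module's JSON result so
    # callers can see why an upload went wrong instead of always
    # reporting success.
    failures = upload(files, put_object)
    return {'changed': True, 'upload_failures': failures}
```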
This reverts commit acde44818d and
the testing part of b3f417a6e6.
We'd like to obtain more consensus on the download script before
we commit to this. In particular, the new zuul manifest file may
make it possible to do this without adding the feature to the
log upload roles.
Change-Id: I959c44b4dac6cad6d1b3d82ba6bc0949c9c759ff
Everything's better with some unicode sprinkled in. Add a unicode
filename to keep unit testing on its toes.
Note this is duplicated across the test role too.
Change-Id: Iaefe9bea2c1a10d440ef75df3acd71fdd9a4157e
After we have determined the root URL, create a download script from
a given jinja2 template. This is added to the file list at the root
and uploaded with the other files.
Generated index files are given a new flag so they can be
differentiated.
This is an implementation of
Iea8abd4cd71ece26b51335642f73bd2e544c42dd for the swift-upload role.
Change-Id: I98c80f657f38c5e1ed5f28e5d36988a3429ad1f8
These aren't all getting cleaned up, which winds up breaking
the second runs. Instead of doing addCleanup with a method that
does the loop again, which can fail in the middle and not
clean up subsequent files, add an individual cleanup when we
add the symlink. This results in all of the symlinks consistently
being cleaned.
Change-Id: Id5a5b09c830ad2ad3bb0b77fb9dbdc494c629824
When retrieving gzipped files out of swift with gzip encoding set,
some swift implementations return a decompressed version of the file
if your client cannot accept gzip encoding. This causes problems when
the file you want is actually compressed, like a .tar.gz. Instead, we
avoid setting the encoding type on these files, forcing swift to give
them back as-is.
This change should only be approved after confirming its parent is
tested and working.
Change-Id: Ibcf478b572ba84273732e0ede17bf92bddd8c36f
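A minimal sketch of the approach, assuming the header logic lives in a
helper like this (the function name and exact extension list are
illustrative, not the module's real code):

```python
import mimetypes


def get_headers(filename):
    """Build upload headers, skipping content-encoding for files that
    are compressed in their own right (e.g. .tar.gz), so swift returns
    them byte-for-byte instead of transparently decompressing them."""
    content_type, encoding = mimetypes.guess_type(filename)
    headers = {}
    if content_type:
        headers['content-type'] = content_type
    # Only advertise an encoding for files swift may safely decode;
    # leave archives alone.
    if encoding and not filename.endswith(('.tar.gz', '.tgz')):
        headers['content-encoding'] = encoding
    return headers
```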
By doing this, we're not constrained about where to run the uploader
while still providing some useful testing in dry-run mode.
Change-Id: Ie4888606a8ca4ffe2eb99ddbbcd9d5cee8ceec44
We've discovered that rackspace swift seems to always want to gzip
encode files when clients request their contents. When our files are
deflate encoded this results in files that are first deflate encoded
then gzip encoded. Not all browsers or layer 7 firewalls can handle this
(despite being perfectly valid according to the HTTP RFCs). We'll use
gzip to see if that causes rackspace to not double encode the files.
To do this memory-efficiently, we vendor a tool from pypi called
gzip-stream which allows us to read chunks of the compressed data at a
time without loading the entire file into memory or writing multiple
gzip headers in a single file.
Change-Id: I9483cfdbd8e7d0683eeb24d28dd6d8b0c0e772fa
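The idea behind the vendored tool can be sketched with the stdlib alone
(this is not gzip-stream's actual API, just an illustration of streaming
compression with a single gzip header):

```python
import zlib


def gzip_chunks(fileobj, chunk_size=64 * 1024):
    """Yield gzip-compressed chunks of fileobj without loading it all.

    wbits=31 selects the gzip container format, so the whole stream
    gets exactly one gzip header rather than one per chunk.
    """
    compressor = zlib.compressobj(wbits=31)
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        data = compressor.compress(chunk)
        if data:
            yield data
    # Flush the remaining buffered data and the gzip trailer.
    yield compressor.flush()
```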
The href url paths need to have quoted filenames to handle cases where
filenames have special characters like : in them.
Change-Id: I0bc0de8d27c6e45c4a6b8841985b8265f0219df2
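The quoting amounts to something like this (helper name is illustrative):

```python
import urllib.parse


def make_href(filename):
    """Quote a filename for use in an href; characters such as ':'
    would otherwise produce invalid or misinterpreted links."""
    # quote() leaves '/' unescaped by default, preserving path structure.
    return urllib.parse.quote(filename)
```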
We have been getting HTTP 401 unauthorized errors at the rate of about
once a day when trying to get containers in the swift logs role.
Manually getting and posting objects to the same container after the
jobs fail seems to work so this appears to be a transient failure.
Attempt to workaround this by retrying the container get calls several
times.
Change-Id: Ia7395ffa0b120fbbecde0c9bb6e8583078167143
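A sketch of the workaround, assuming a client object with a
`get_container` call (names and retry count are illustrative):

```python
import time

GET_ATTEMPTS = 3  # illustrative retry count


def get_container(cloud, name, delay=1.0):
    """Retry a container GET a few times to ride out transient 401s."""
    for attempt in range(1, GET_ATTEMPTS + 1):
        try:
            return cloud.get_container(name)
        except Exception:
            if attempt == GET_ATTEMPTS:
                raise  # give up after the final attempt
            time.sleep(delay)
```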
The old code will log swift upload tracebacks via logging.exception()
which doesn't seem to bubble back up into ansible's logging. We address
this by using traceback.format_exc() to format an exception traceback
string which we pass to ansible's module.fail_json().
Change-Id: I524bd0d5a9529011cffb6d09866b22b2c97fab7d
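The pattern looks roughly like this (`run_upload` and `do_upload` are
illustrative stand-ins for the module's real entry points):

```python
import traceback


def run_upload(module, do_upload):
    """Report upload tracebacks through fail_json.

    logging.exception() output never makes it back into ansible's
    result, so format the traceback ourselves and hand it to the
    module.
    """
    try:
        do_upload()
    except Exception:
        module.fail_json(msg=traceback.format_exc())
```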
The upload logs roles can make use of the build logs sharding via their
calls into set-zuul-log-path-fact. Document this.
Change-Id: Ia57fc6a47227657f9fac70074e453cf8d4c16c26
Some Swift API implementations (ceph) require globally unique container
names. Document how zuul users can deal with this by setting a unique
value for the zuul_log_container var when using the upload-logs-swift
role.
Change-Id: Ib9c72cf4a08412615c8a45f3eab8d3eb37c61138
There may be broken symlinks within the log directories; those fail
with an error when os.stat() is executed on them. The if/else is
replaced with try/except, with TypeError also caught for the case
where self.full_path is None.
Change-Id: Iffee97760a39fa4f7760bd67fb63c5f0905064bd
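The try/except shape is roughly this (helper name is illustrative, and
"size 0" stands in for whatever fallback the real code uses):

```python
import os


def safe_size(full_path):
    """Return st_size, tolerating dangling symlinks and None paths.

    os.stat() raises OSError on a broken symlink and TypeError when
    full_path is None; treat both as "no size available".
    """
    try:
        return os.stat(full_path).st_size
    except (OSError, TypeError):
        return 0
```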
Import keystoneauth1.exceptions to access the exceptions. HttpError
also lives under "exceptions.http.", so update that reference.
Change-Id: I4afe8c9fc8239a31d62a2a1d09794211b5066472
This adds a generic retry handler that takes a callable, and uses it
not only for the existing retries around the POST call, but also for
the actions taken when creating the container.
Change-Id: I910b8e886f107d4fe38a9334ba836f010f92557c
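A minimal sketch of such a handler (name, attempt count, and delay are
illustrative, not the module's actual values):

```python
import time


def retry_function(func, attempts=3, delay=0.0):
    """Call func(), retrying on any exception up to `attempts` times.

    A generic handler like this lets the POST call and each container
    creation action share a single retry policy.
    """
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts; propagate the last error
            time.sleep(delay)
```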
If we get an unexpected exception, it's helpful to know the
cloud and region involved. It's also nice to return that information
for better readability.
Change-Id: I1c589744103512d981e64e1a3f9506d40e1bf4cf
With the arrival of ansible-lint 4, Jinja2 variable expansions must
include spaces before and after the variable name inside the
brackets.
Adjust the new violations accordingly and remove the rule
206 exclusion.
Change-Id: Ib3ff7b0233a5d5cf99772f9c2adc81861cf34ffa
The CDN setup that rackspace has requires that the
Access-Control-Allow-Origin header be added to each object uploaded.
Change-Id: I0f6af613e2ebcf9cfe835dc8018a73922f9f0ed5
When creating containers, add a CORS header which allows access
from any host, so that the zuul web app can fetch.
Change-Id: I013265643a8fcb4cff001375136c2c37958fd97a
Rackspace only provides unauthenticated access to object storage
via CDN in a non-openstack-standard way, so we need to do some
extra work to support that.
This also adds a helper script for testing which deletes a container
(since in order to do so, you may need to delete all the contents
first).
Some commented-out debug configuration lines are added for the
convenience of future developers.
Change-Id: I3d1fce824fb40136048f0988939d22f755236a59
Our company-internal Swift setup apparently has trouble with
user-provided index files. Given that Swift can be configured to create
indexes on its own, we can skip creating these files altogether and save
some negligible storage space in the process.
One can enable Swift's native indexes by something like:
openstack container set $container --property web-listings=true
This feature does not play well with containers auto-created by Zuul
because Zuul doesn't know (and cannot know) how to configure them. But
given that the code already supports this feature and that it's just a
matter of propagating an Ansible option, and because it fixes a real
issue in my Zuul deployment, I think that it makes sense to support
this.
Change-Id: I952cb2d4a263b07396bc5af60a9753394af3e42b
This converts the Indexer class from something that strictly generates
index.html files for folders to a more generic class that will be able
to hold other types of transformations we might like to do on the
total collections of uploaded files.
The index.html specific arguments are moved into make_indexes() and
the two helper functions that should not be called externally are
renamed private.
Change-Id: I388042ffb6a74c3200d92fb3a084369fcf2cf3a9
Add a function to the FileList context manager to get a temporary
directory; keep track and remove these on exit. Use this in the index
creation.
Change-Id: I9d9220ad70ce191af02ae0331c98eafe487d96d4
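The temp-directory bookkeeping can be sketched like this (the class is
trimmed down to just that concern; method names are illustrative):

```python
import shutil
import tempfile


class FileList:
    """Context manager that hands out temporary directories and
    removes them all on exit."""

    def __init__(self):
        self.tempdirs = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Clean up every tempdir we handed out, even on error.
        for d in self.tempdirs:
            shutil.rmtree(d, ignore_errors=True)

    def get_tempdir(self):
        d = tempfile.mkdtemp()
        self.tempdirs.append(d)
        return d
```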
We currently try to skip size formatting for folders. However, we
compare with a bogus mimetype, so the check is false in every case.
Furthermore, folders typically have a size of 512 bytes or 4k; in
neither case do we really need to skip the size formatting. So instead
of fixing the check, remove it and do the size formatting
unconditionally.
Change-Id: I7ef021381bb56acf4b22551cc5d5613470fd6d08
We install zuul via test-requirements, so the zuul files should be in
a directory under the site-packages of the virtualenv that tox
installed it into. Update the path to point correctly to that
location.
Remove the ansible-lint skip tags which should now work because the
library path should be pointing to a location that actually holds the
content.
Change-Id: If2d4b39267c4b9a3102a951143b568f8447af8d9
This is so jobs like tox-docs are properly able to append information
with success-url.
Change-Id: Iabd967d8956d18727890823526064fb80f1b12ab
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Moving this reference to the file_list into the class encapsulates
things better when the Indexer class becomes more a collection of
tools to modify a FileList before upload.
Change-Id: I2bedee35ce178df40c15d5867edf560a62232c57
The FileList is dynamic and needs to be able to keep track of things
to cleanup. For example, it has index files added to it from
temporary files which should be removed when we're finished with
the list. In a future change we propose a similar addition of a
download script for logs which should also be managed.
Turn the FileList into a context manager. Modify the index generation
to not create a new FileList, but just replace the internal list. Use
this for the life-span of the upload by wrapping the relevant parts in
a "with:" statement.
Change-Id: I7135bf5a55d133ce146e9aa84f00041fc8125cbc
Add a testenv:py27 environment that overrides basepython to 2.7
Unfortunately implicit namespace packages are a Python3 thing [1] so
we have to scatter a few __init__.py's around for the test loader
under python2 to be able to find the unit test directories.
Update documentation to mention this.
Needed-By: https://review.openstack.org/592768
[1] https://www.python.org/dev/peps/pep-0420/
Change-Id: I9a653666e8a083fb7f3fbb92589fe0467a41e6e6
With URLs that may include any number of directory levels before
even the prefix of our upload, it's difficult (though not impossible)
to upload the icons to a fixed location in the container and
reference that location. A more self-contained approach is just
to embed the icon data directly into the HTML. That is what is
done here.
Change-Id: I12342aa479bac41eb3b401d1e92689a56b3c2a2b
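The embedding boils down to a base64 data URI (helper name is
illustrative, and the real code presumably inlines this into the
index template):

```python
import base64


def data_uri(image_bytes, mime='image/png'):
    """Embed icon bytes directly into HTML as a base64 data URI, so no
    fixed container location is needed for the images."""
    encoded = base64.b64encode(image_bytes).decode('ascii')
    return 'data:%s;base64,%s' % (mime, encoded)
```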