All of the Ansible OpenStack modules are driven by a clouds.yaml config
file which is processed by os-client-config. Expose the data returned by
that library to enable playbooks to iterate over the available clouds.
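For illustration, a minimal sketch of reading that same data directly
with os-client-config (the attribute names here are assumptions based
on my reading of the library, not guaranteed API):

    import os_client_config

    # Read clouds.yaml the same way the modules do.
    config = os_client_config.OpenStackConfig()
    for cloud in config.get_all_clouds():
        # Each entry carries the details a playbook could loop over.
        print(cloud.name, cloud.region)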
Since the YAML data model maps cleanly onto JSON, it is trivial to
convert the former to the latter. This means that we can use YAML
templates to build CloudFormation stacks, as long as we translate them
before passing them to the AWS API. I figure this could be quite
popular in the Ansible world, since we already use so much YAML for
our playbooks.
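A minimal sketch of that translation step, assuming PyYAML is
available (the file name is illustrative):

    import json
    import yaml

    # Load the YAML template and emit the JSON body that the
    # CloudFormation API expects.
    with open("stack-template.yml") as f:
        template = yaml.safe_load(f)
    template_body = json.dumps(template)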
When an SVN repository has svn:externals properties set, files will be
reported with the X attribute, and lines will be appended at the end to
list the externals' statuses, looking like
"Performing status on external item at ....".
The regex counted such lines as a local modification, so the module
reported a change even though there was none.
To get a clean (and parsable) "svn status" output, it is recommended
to use the --quiet option. With it, externals only appear if they have
been modified, so it is even safer to consider that there are local
modifications whenever "svn status" outputs anything.
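A minimal sketch of that check; the subprocess call and function name
are illustrative, not the module's actual code:

    import subprocess

    def has_local_mods(path):
        # --quiet suppresses the "Performing status on external item"
        # noise, so any remaining output means a local modification.
        out = subprocess.check_output(["svn", "status", "--quiet", path])
        return bool(out.strip())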
boto can throw SSLError when timeouts occur (among other SSL errors).
Catch these so that proper JSON can be returned, and add the ability to
retry the operation.
There's an open issue in boto for this: https://github.com/boto/boto/issues/2409
Here's a sample stacktrace that inspired me to work on this. I'm on
1.7, but there are no meaningful differences in the 1.8 release that
would affect this. I've added line breaks to the trace for readability.
failed to parse: Traceback (most recent call last):
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 2031, in <module>
    main()
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 353, in main
    download_s3file(module, s3, bucket, obj, dest)
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 234, in download_s3file
    key.get_contents_to_filename(dest)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1665, in get_contents_to_filename
    response_headers=response_headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1603, in get_contents_to_file
    response_headers=response_headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1435, in get_file
    query_args=None)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1488, in _get_file_internal
    for bytes in self:
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 368, in next
    data = self.resp.read(self.BufferSize)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 416, in read
    return httplib.HTTPResponse.read(self, amt)
  File "/usr/lib/python2.7/httplib.py", line 567, in read
    s = self.fp.read(amt)
  File "/usr/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
  File "/usr/lib/python2.7/ssl.py", line 341, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 260, in read
    return self._sslobj.read(len)
ssl.SSLError: The read operation timed out
* If a db user belonged to a role which had a privilege, the user would
not have the privilege added, since the role gave the appearance that
the user already had it. Fixed to always check the privileges specific
to the user.
* Make fewer db queries to determine whether privileges need to be
changed, and to change them (previously four queries per privilege; now
two per object whose set of privileges changes).
Use `has_table_privileges` and `has_database_privileges`
to test whether a user already has a privilege before
granting it, or whether a user doesn't have a privilege
before revoking it.
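A minimal sketch of such a check using PostgreSQL's built-in
has_table_privilege() function and a psycopg2 cursor; the helper and
the example names are illustrative, not the module's actual code:

    def user_has_table_priv(cursor, user, table, priv):
        # Ask PostgreSQL directly whether the user holds the privilege.
        cursor.execute(
            "SELECT has_table_privilege(%s, %s, %s)",
            (user, table, priv),
        )
        return cursor.fetchone()[0]

    # Only issue a GRANT when the privilege is actually missing.
    if not user_has_table_priv(cursor, "alice", "orders", "SELECT"):
        cursor.execute('GRANT SELECT ON orders TO "alice"')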
By default docker-py uses the latest version of the Docker API. This is
not always desirable, and this patch adds an option to specify the
version that should be used.
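A minimal sketch of pinning the API version when constructing the
docker-py client; the version string is illustrative:

    import docker

    # Pin the client to a specific Docker API version rather than
    # docker-py's default of the latest version.
    client = docker.Client(version="1.14")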
This prevents errors when the login_user does not have 'ALL'
permissions, and the 'priv' value contains fewer permissions than are
held by an existing user. This is particularly an issue when using an
Amazon Web Services RDS instance, as there is no (accessible) user with
'ALL' permissions on *.*.
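A minimal sketch of the underlying idea, granting only the missing
privileges instead of re-applying everything; the function and values
are illustrative, not the module's actual code:

    def privileges_to_grant(current, wanted):
        # Grant only what's missing; re-granting privileges that the
        # login_user itself lacks (e.g. 'ALL' on RDS) would fail.
        return sorted(set(wanted) - set(current))

    privileges_to_grant({"SELECT", "INSERT"}, {"SELECT", "UPDATE"})
    # -> ['UPDATE']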