Now that we don't need to worry about python-2.4 and 2.5, we can make
some improvements to the way AnsiballZ handles modules.
* Change AnsiballZ wrapper to use import to invoke the module
We need the module to think of itself as a script because it could be
coded as:
main()
or as:
if __name__ == '__main__':
main()
Or even as:
if __name__ == '__main__':
random_function_name()
A script will invoke all of those. Prior to this change, we invoked
a second Python interpreter on the module so that it really was
a script. However, this means that we have to run Python twice (once
for the AnsiballZ wrapper and once for the module). This change makes
the module think that it is a script (because __name__ in the module ==
'__main__') but it's actually being invoked by us importing the module
code.
There are three ways we've come up with to do this.
* The most elegant is to use zipimporter and tell the import mechanism
that the module being loaded is __main__:
* 5959f11c9d/lib/ansible/executor/module_common.py (L175)
* zipimporter is nice because we do not have to extract the module from
the zip file and save it to the disk when we do that. The import
machinery does it all for us.
* The drawback is that modules do not have a __file__ which points
to a real file when they do this. Modules could be using __file__
for a variety of reasons; most of those probably have
replacements (the most common one is finding a writable directory
for temporary files; AnsibleModule.tmpdir should be used instead).
We could monkeypatch __file__ in from AnsibleModule initialization,
but that's kind of gross. There's no way I can see to do this
from the wrapper.
* Next, there's imp.load_module():
* https://github.com/abadger/ansible/blob/340edf7489/lib/ansible/executor/module_common.py#L151
* imp has the nice property of allowing us to set __name__ to
__main__ without changing the name of the file itself
* We also don't have to do anything special to set __file__ for
backwards compatibility (although the reason for that is the
drawback):
* Its drawback is that it requires the file to exist on disk so we
have to explicitly extract it from the zipfile and save it to
a temporary file
* The last choice is to use exec to execute the module:
* https://github.com/abadger/ansible/blob/f47a4ccc76/lib/ansible/executor/module_common.py#L175
* The code we would have to maintain for this looks pretty clean.
In the wrapper we create a ModuleType, set __file__ on it, read
the module's contents in from the zip file and then exec it.
* Drawbacks: We still have to explicitly extract the file's contents
from the zip archive instead of letting python's import mechanism
handle it.
* Exec also has hidden performance issues and breaks certain
assumptions that modules could be making about their own code:
http://lucumr.pocoo.org/2011/2/1/exec-in-python/
Our plan is to use imp.load_module() for now, deprecate the use of
__file__ in modules, and switch to zipimport once the deprecation
period for __file__ is over (without monkeypatching a fake __file__ in
via AnsibleModule).
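As a rough illustration of the chosen approach, here is a minimal
sketch, not the actual wrapper code: imp.load_source() is used as
shorthand for the imp machinery, and the path argument is hypothetical.
```
# Minimal illustrative sketch, not the actual wrapper code.  The
# extracted_path argument is hypothetical; imp.load_source() stands in
# for the fuller imp.load_module() form.
import imp

def run_as_main(extracted_path):
    # Loading the file under the name '__main__' makes the module's
    # "if __name__ == '__main__':" guard fire, so it behaves as
    # a script even though it is imported.  __file__ points at the
    # extracted temporary file, preserving backwards compatibility.
    imp.load_source('__main__', extracted_path)
```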
* Rename the AnsiballZ wrapped module
This makes it obvious that the wrapped module isn't the module file that
we distribute. It's part of trying to mitigate the fact that the module
is now named __main__.py in tracebacks.
* Shield all wrapper symbols inside of a function
With the new import code, all symbols in the wrapper become visible in
the module. To mitigate the chance of collisions, move most symbols
into a toplevel function. The only symbols left in the global namespace
are now _ANSIBALLZ_WRAPPER and _ansiballz_main.
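Roughly, the wrapper's global namespace then reduces to this shape
(function body elided; the flag value is illustrative):
```
# Rough shape of the shielded wrapper; only these two symbols stay global.
_ANSIBALLZ_WRAPPER = True  # lets code detect it is running under the wrapper

def _ansiballz_main():
    # All imports and helper logic live in here so the wrapper's working
    # symbols do not leak into the module once it is imported as __main__.
    pass

if __name__ == '__main__':
    _ansiballz_main()
```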
revised porting guide entry
Integrate code coverage collection into AnsiballZ.
ci_coverage
ci_complete
* uri: Avoid exception in common scenario
So I was confused by the fact that the **uri** module, when not
returning an acceptable HTTP status code, returns:
The full traceback is:
File "/tmp/ansible_UQwiI4/ansible_module_uri.py", line 471, in main
uresp['location'] = absolute_location(url, uresp['location'])
While the actual error was:
Status code was 400 and not [201]: HTTP Error 400:
I also wonder why that message ends abruptly. I would have expected
`HTTP Error 400: Bad Request` which would be more useful.
* uri: Avoid false positive tracebacks in fail_json() on PY2
One of the earlier implementations of unified temp for 2.4 passed the
temp directory to the remote side using this environment variable. We
later changed it to be passed via a module parameter but forgot to
remove the environment variable.
* fix fedora version dnf fact, default pkg_mgr detection per distro family
* loop over possible dnf/yum paths in case there are multiple canonical sources later in life
Signed-off-by: Adam Miller <admiller@redhat.com>
* Support multi-doc yaml in the from_yaml filter
* Most automatic method of handling multidoc
* Only use safe_load_all
* Implement separate filter
* Update plugin docs and changelog
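The core of the separate filter is tiny; a sketch, assuming it is
exposed as from_yaml_all:
```
import yaml

def from_yaml_all(data):
    # safe_load_all yields one Python object per YAML document in the
    # stream; only the safe loader is used, per the decision above.
    return yaml.safe_load_all(data)
```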
Allow specifying the source and destination files' encodings in the template module
* Added output_encoding to the template module, default to utf-8
* Added documentation for the new variables
* Leveraged the encoding argument on to_text() and to_bytes() to keep the implementation as simple as possible
* Added integration tests with files in utf-8 and windows-1252 encodings, testing all combinations
* fix bad smell test by excluding windows-1252 files from the utf8 checks
* fix bad smell test by excluding valid files from the smart quote test
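Conceptually the change is a decode/encode pair around rendering;
a simplified sketch with hypothetical names:
```
from ansible.module_utils._text import to_bytes, to_text

def render_with_encodings(template_bytes, render, src_encoding='utf-8',
                          output_encoding='utf-8'):
    # Decode the on-disk template with the source encoding, render it,
    # then encode the result with the requested output encoding.
    template_text = to_text(template_bytes, encoding=src_encoding)
    return to_bytes(render(template_text), encoding=output_encoding)
```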
* Only add exception/traceback on Python 3
On Python 2 the traceback could be from any exception in the stack frame
and is likely unrelated to the fail_json call.
On Python 3 the traceback is cleared outside any exception frame, so the
call always returns the innermost traceback (if any), and therefore is
most likely related to the fail_json call.
* Add uncertainty to traceback on Python 2
On Python 2 the last exception in the stack frame is returned;
this could be unrelated to the actual error, especially if fail_json()
is called outside an except: block.
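A hedged sketch of the resulting behavior (the warning text is
illustrative):
```
import sys
import traceback

def exception_for_fail_json():
    # On Python 3 the exception state is cleared once an except block
    # exits, so format_exc() called outside of one cannot return
    # a stale, unrelated traceback.
    if sys.version_info[0] >= 3:
        return traceback.format_exc()
    # On Python 2 the last exception lingers in the stack frame and may
    # be unrelated, so flag the traceback as a best guess only.
    return ('WARNING: The below traceback may *not* be related to the '
            'actual failure.\n' + traceback.format_exc())
```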
* Add parameter to keep elb rules
Does not purge ELB rules. This is useful when running the elb_application_lb
role and there is a desire to keep existing rules.
* Change variable name keep_rules to purge_rules
The descriptor purge has been used in the past.
* Changed default for purge_rules
The default is to purge rules, which is how the module has functioned previously. This change maintains
the previous behavior.
* Add integration test for purge_rules flag
* Change wording of test task
* Fix merge conflict
* Changed default for purge_rules
The default is to purge rules, which is how the module has functioned previously. This change maintains
the previous behavior.
* merge conflict
* Change wording of test task
* Add purge_rules option to test
* Change test description wording
* Expand purge_rules documentation
* Clarifies documentation for purge_rules option
* Documentation change for resizefs
Changed documentation to match the default value of resizefs set in the code.
Added a note on resizefs use to the example utilizing it.
* Remove test now that it validates fine
This provides a more convenient way for testing (async) jobs.
When used with a non-async job it will report a warning so the user is
aware that they may be doing something incorrect.
Since the 'finished' result value is an integer (!), the test turns
this into a proper boolean.
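A minimal sketch of such a test plugin, assuming it is exposed as
finished:
```
def finished(result):
    # Async job status reports 'finished' as an integer (0 or 1); coerce
    # it to a proper boolean so the test composes cleanly in conditions.
    return bool(result.get('finished', 0))
```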
* cobbler_system: New module to manage Cobbler systems
This module is useful to provision new systems using Cobbler and Ansible.
* cobbler_system: warn on invalid properties
This fix checks that dirname is not equal to '' before proceeding
to create the actual directory with that name.
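In essence (a simplified sketch; dest is hypothetical):
```
import os

def ensure_parent_dir(dest):
    dirname = os.path.dirname(dest)
    # dirname is '' for a bare filename, and os.makedirs('') raises,
    # so only create the directory when the path actually has one.
    if dirname != '' and not os.path.exists(dirname):
        os.makedirs(dirname)
```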
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* The JSONDecodeError exception only exists in Python 3.
* Without a properly parsed JSON response there is no more error
processing to be done, no matter the HTTP response code.
Relates to #38178
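Catching ValueError is the portable spelling, since JSONDecodeError
subclasses it; a sketch:
```
import json

def parse_response(content):
    try:
        return json.loads(content)
    except ValueError:
        # json.JSONDecodeError only exists on Python 3, where it is
        # a subclass of ValueError, so this works on Python 2 and 3.
        # With no parsed body there is no further error processing to
        # do, regardless of the HTTP response code.
        return None
```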
* Detect failed sysvinit module
- This checks the stderr instead of the rc to detect whether the
sysvinit module was successful or not, as even when failing, the
rc would be 0.
- It immediately became obvious that the debug info when failing
was far too little to properly debug the role. To improve this,
I also added the rc, stderr and stdout to the debug output.
* Revert stderr check to rc check, rename out->stdout, err->stderr
* win_user: use different method to validate credentials that does not rely on SMB/RPC
* Use Add-Type, as SetLastError via .NET reflection does not work on 2012 R2
* win_chocolatey: refactor module to fix bugs and add new features
* Fix some typos and only emit install warning not in check mode
* Fixes when testing out installing chocolatey from a server
* Added changelog fragment
* Enable check_mode in command module
This only works when supplying creates or removes, since it needs
something to base the heuristic on. If neither is supplied it will just
skip as usual.
Fixes #15828
* Add documentation for new check_mode behavior
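The heuristic reduces to roughly this sketch (names and return shape
are hypothetical):
```
def predict_command_result(creates, removes, path_exists):
    if creates is not None:
        # The command would be skipped if its product already exists.
        return {'changed': not path_exists(creates)}
    if removes is not None:
        # The command only runs while its target still exists.
        return {'changed': path_exists(removes)}
    # Nothing to base a prediction on: skip as usual in check mode.
    return {'skipped': True, 'msg': 'skipped, running in check mode'}
```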
* ec2.py:
* source_dest_check default value is now None, updated docs
* Refactor restart_instances and startstop_instances -> Two new functions to prevent repetition: check_source_dest_attr and check_termination_protection
* Properly handle default package manager vs apt
For distros where apt might be installed but is not the default
package manager for the distro, properly identify the default distro
package manager during fact finding, re-using fact finding from
DistributionFactCollector instead of reimplementing small
portions of it in PkgMgrFactCollector.
Add unit test to always check the apt + Fedora combination to test
the new code.
Fixes#34014
Signed-off-by: Adam Miller <admiller@redhat.com>
* remove q debugging output I accidentally left behind
Signed-off-by: Adam Miller <admiller@redhat.com>
* add os_family to the conditional so we're only hitting that code path when needed
Signed-off-by: Adam Miller <admiller@redhat.com>
* setup for a _check* pattern for general os_family group pkg_mgr checking
Signed-off-by: Adam Miller <admiller@redhat.com>
* use Mock.patch decorator for os.path.exists in TestPkgMgrFactsAptFedora
Signed-off-by: Adam Miller <admiller@redhat.com>
Allows patching of custom Kubernetes resources that
don't support strategic merge patching
Check that openshift module supports content_type param
(requires version newer than 0.6.0)
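A hedged sketch of such a guard, using pkg_resources rather than any
particular client attribute:
```
import pkg_resources

def supports_content_type():
    # content_type support needs an openshift client newer than 0.6.0.
    installed = pkg_resources.get_distribution('openshift').version
    return (pkg_resources.parse_version(installed) >
            pkg_resources.parse_version('0.6.0'))
```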
* Update dnsimple-python minimum version to 1.0.0 as it supports API v2 and API v1 is deprecated.
* Update examples.
* Update documentation.
Fixes: #42495
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
This commit introduces a new module called vr_startup_script_facts.
This module aims to return the list of startup scripts available
in Vultr.
Sample available here:
```
"vultr_startup_script_facts": [
{
"date_created": "2018-07-19 08:52:55",
"date_modified": "2018-07-19 08:52:55",
"id": 327140,
"name": "myteststartupscript",
"script": "#!/bin/bash\necho Hello World > /root/hello",
"type": "boot"
}
]
```
* Add src parameter to elasticsearch_plugin
Previously specifying a URL or a file name (which is supported by the
Elasticsearch plugin tooling) would not work correctly with Ansible, because the
detection of the current installation state did not handle this well.
This commit adds a new "src" parameter for the module, which can be specified in
addition to the plugin name. It will be used to retrieve the plugin from
a custom location while keeping the final plugin name available to determine if
it is already present or not.
The url parameter remains for ES 1.x compatibility.
* Fix sanity test errors
* Add version_added for src option
* Increase first added version to 2.7
* Update nclu.py
Stop module from running `net` on empty commands.
* Update nclu.py
Updated the copyright date
* Update nclu.py
Returned metadata version to 1.1
* Update nclu.py
Fix indentation to be a multiple of 4.
* Create changelog fragment
linked_clone requires the snapshot_src parameter. This fix makes them required_together
and updates the documentation. Also, a testcase is added.
Fixes: #42349
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* Support setting persistent command timeout per task basis
Fixes #42200
* Add variable `ansible_command_timeout` to `persistent_command_timeout`
option for `network_cli` and `netconf` connection plugin so that the
command_timeout can be set on a per-task basis while using `connection=network_cli`
or `connection=netconf`
e.g.:
```
- name: run copy command
ios_command:
commands:
- show version
vars:
ansible_command_timeout: 40
```
* Modify `ansible-connection` to read command_timeout value from
connection plugin options.
* Add `ansible_command_timeout` to `persistent_command_timeout`
option in `persistent` to support `connection=local` so that
it is backwards compatible
* To support `connection=local` pass the timeout value as variables
from persistent connection to `ansible-connection` instead of sending
it in playcontext
* Fix CI failure
* Fix review comment
* Check get_option method works with inventory plugins
This use case is already tested by some cloud inventory plugins, but
these tests are slow and aren't always executed, hence this new quick
test.
* AnsiblePlugin: Fix typo in docstring
* change infoblox_client to infoblox-client
* ios_user module - add sshkey support
* ios_user - Add version_added to sshkey option
* ios_user - pep8 indentation fixes in unit tests
* ios_user - use b64decode method that works on python 2 and 3
* Only report change when home directory is different
Add tests with home: parameter
Have to skip macOS for now since there is a bug when specifying the home directory path for an existing user that results in a module failure. That needs to be fixed in a separate PR.
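Conceptually the check is just (a sketch; the real module reads the
current value from the passwd database):
```
import pwd

def home_needs_change(name, requested_home):
    # Only report a change when the requested home differs from the
    # user's current home directory.
    current_home = pwd.getpwnam(name).pw_dir
    return requested_home is not None and requested_home != current_home
```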
Ensure that FieldLevelEncryptionId is properly handled - passing it if
set, and keeping it if returned by GetDistribution
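Conceptually (a simplified sketch; the key name follows the CloudFront
API):
```
def merge_field_level_encryption_id(desired_config, existing_config):
    # Pass FieldLevelEncryptionId through when the caller set it, and
    # keep the value GetDistribution returned when they did not, so an
    # update does not silently strip it from the distribution.
    if 'FieldLevelEncryptionId' not in desired_config:
        desired_config['FieldLevelEncryptionId'] = existing_config.get(
            'FieldLevelEncryptionId', '')
    return desired_config
```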
Update cloudfront_distribution tests to remove references to
test_identifier so test suite actually works
Fixes #40724