87 Commits
2.1.0 ... 2.4.1

Author SHA1 Message Date
Sagi Shnaidman
1ce7dd8b5f Release 2.4.1 version
Change-Id: I5d5532865c8700030925a1dcd6e262c53f37c0a9
2025-01-20 17:58:47 +02:00
Zuul
438bbea34b Merge "Fix missed client_cert in OpenStackModule" 2025-01-18 20:37:18 +00:00
Anna Arhipova
e065818024 Update tags when changing server
Tags are not included in the metadata and are not supported by PUT
/servers/{server_id}, requiring a specific mechanism for modification.

With this fix, existing tags on the server will be completely replaced
by the new set of tags provided during the update.

Also optimized the tags tests a bit.

Change-Id: I2673ed57d848654427dcb7ed7aceba811a3dd314
2025-01-17 20:49:50 +01:00
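A minimal usage sketch of the replace-all-tags behaviour described above (the cloud name and tag values are illustrative, and the `tags` parameter name is assumed from the release notes):

```yaml
- name: Replace all tags on an existing server
  openstack.cloud.server:
    cloud: mycloud          # illustrative cloud name
    name: my-server
    state: present
    tags:                   # replaces the server's existing tag set entirely
      - web
      - production
```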
Victor Chembaev
0764e671a9 Fix missed client_cert in OpenStackModule
Change-Id: Ia6bc31c18f0707047c62e8824f7e9e489284bdab
2025-01-17 14:41:32 +02:00
Sagi Shnaidman
5f4db3583e Release 2.4.0 version
Change-Id: I7e09a849d089962aaec90d30d9e6ab8a3891aa41
2025-01-15 18:18:47 +02:00
Anna Arhipova
6262474c94 Allow create instance with tags
Change-Id: I98a04c18ac0a815841c6c474b39af5e9ed4d1c0d
2025-01-14 17:24:45 +00:00
Freerk-Ole Zakfeld
f9fcd35018 Add Traits Module
Change-Id: I7db759f716c1154cb0dd30e240af3a0420efab8f
Signed-off-by: Freerk-Ole Zakfeld <fzakfeld@scaleuptech.com>
2025-01-14 10:34:50 +01:00
Zuul
5dbf47cb49 Merge "Add loadbalancer quota options" 2025-01-09 18:43:07 +00:00
Tom Clark
57c63e7918 Add loadbalancer quota options
Enable configuration of loadbalancer options within the quota module so that the current collection can be used to address Octavia quotas.

Change-Id: I1dd593f5387929ed522d54954fc63dd535df7c7c
2025-01-09 13:45:19 +00:00
Sagi Shnaidman
ed5829d462 Release 2.3.3 version
Change-Id: I9d78fdc98fdb986c926f196077d9b401de573218
2024-12-22 16:57:48 +02:00
Zuul
b025e7c356 Merge "Fix deprecated ANSIBLE_COLLECTIONS_PATHS variable" 2024-12-22 14:38:13 +00:00
Sagi Shnaidman
782340833e Fix deprecated ANSIBLE_COLLECTIONS_PATHS variable
Change it to ANSIBLE_COLLECTIONS_PATH

Change-Id: I6100e1da0e578c26dd90c05b7bd5eeddcaec983b
2024-12-22 14:54:52 +02:00
Sagi Shnaidman
73aab9e80c Add test to only_ipv4 in inventory
Change-Id: I70cbfef05af4853947fac0040c4a3a9cf6c2f1fe
2024-12-22 13:55:33 +02:00
Zuul
ae7e8260a3 Merge "add an option to use only IPv4 only for ansible_host and ansible_ssh_host" 2024-12-22 11:51:13 +00:00
Sagi Shnaidman
030df96dc0 Release 2.3.2 version
Change-Id: I0234ad386309685066ede85286a5d3938c9d6133
2024-12-20 11:46:16 +02:00
Kevin Honka
c5d0d3ec82 add an option to use only IPv4 only for ansible_host and ansible_ssh_host
By default the openstack inventory fetches the first fixed IP
address for "ansible_host" and "ansible_ssh_host".
Due to the random sorting of addresses by OpenStack, this can be either
an IPv4 or an IPv6 address, which is not desirable in some legacy setups
where Ansible runs in an IPv4-only environment
but OpenStack is already IPv6-enabled.

To prevent this, a new option called "only_ipv4" was added,
which forces the inventory plugin to use only fixed ipv4 addresses.

Closes-Bug: #2051249
Change-Id: I3aa9868c9299705d4b0dcbf9b9cb561c0855c11c
2024-12-20 06:36:04 +00:00
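A sketch of an inventory plugin configuration using the new option (the `only_ipv4` name comes from the commit; surrounding keys are illustrative):

```yaml
# openstack.yml -- inventory plugin configuration (illustrative)
plugin: openstack.cloud.openstack
only_ipv4: true   # force ansible_host/ansible_ssh_host to a fixed IPv4 address
```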
Zuul
0cff7eb3a2 Merge "Fix openstack.cloud.port module failure in check mode" 2024-12-19 23:07:20 +00:00
Hirano Yuki
29e3f3dac8 Fix openstack.cloud.port module failure in check mode
The openstack.cloud.port module always fails in check mode because it
calls PortModule._will_change() with the wrong number of arguments.

Change-Id: I7e8a4473df8bb27d888366b444a54d3f7b1c2fa8
2024-12-19 19:01:22 +00:00
Takashi Kajinami
d18ea87091 Drop compat implementations for tests
Python < 3.4.4 has already reached its EOL.

Change-Id: Ieb7cfd796991645db44dafc0e5ba82855c08853a
2024-12-19 19:00:27 +00:00
Sagi Shnaidman
529c1e8dcc Fix release job
Set up Ansible in virtualenv

Change-Id: I0ee92fe621423f9f199b2baab6a95fba377b65a1
2024-12-19 12:52:07 +02:00
Sagi Shnaidman
99b7af529c Release 2.3.1 version
Change-Id: I8fa2c36d08e387c93f61630b55892ff69b4eff6a
2024-12-18 13:27:39 +02:00
Sagi Shnaidman
2ed3ffe1d0 Change token for new Ansible Galaxy server
Change it with:

zuul-client encrypt  \
  --tenant openstack \
  --project openstack/ansible-collections-openstack
(zuul URL is https://zuul.opendev.org)

Change-Id: I9623f14453fd1fd9631195cc3794141edfce0cf7
2024-12-16 21:37:33 +02:00
Victor Chembaev
ae5dbf0fc0 Add ability to pass client tls certificate
Add ability to pass a client TLS certificate
to make an mTLS connection to the OpenStack provider.
Closes-Bug: #2090953

Change-Id: I33ef38c830309cf4f9fae11c8403fb4e616cf315
2024-12-14 21:42:50 +00:00
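A usage sketch of the mTLS options. The `client_cert` parameter name is confirmed by the follow-up fix above; `client_key` and the paths are assumptions for illustration:

```yaml
- name: Use mTLS when talking to the OpenStack API
  openstack.cloud.server_info:
    cloud: mycloud
    client_cert: /etc/pki/tls/certs/client.crt   # path is illustrative
    client_key: /etc/pki/tls/private/client.key  # assumed companion option
```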
Sagi Shnaidman
4074db1bd0 Replace 2.16 jobs by 2.18 Ansible jobs
Change-Id: I4cae065e961a06d44792bef2e6479c30fe242911
2024-12-13 21:41:53 +00:00
Sagi Shnaidman
3248ba9960 CI: wait until server is shelved offloaded
Wait for condition when shelving is finished.

Change-Id: I7945989461c0b80adbb629da2eef1464e1ce2645
2024-12-13 23:39:48 +02:00
Sagi Shnaidman
2d5ca42629 Fix CI and add 2.16 Ansible job
Change-Id: Idcb5d2db4a92084239b80703be6a80de3e9f1116
2024-12-10 17:00:47 +02:00
Sagi Shnaidman
83456005fc Release 2.3.0 version
Change-Id: Ibe054979b707645722f9dd1133b40db95f1aba4e
2024-11-28 19:19:51 +02:00
Zuul
5b61019a34 Merge "Add module to filter available volume services." 2024-11-07 20:04:50 +00:00
Zuul
630dac3787 Merge "Add inactive state for the images" 2024-11-05 12:02:07 +00:00
Zuul
54c2257376 Merge "Allow to specify multiple allocation pools when creating a subnet" 2024-11-05 12:02:06 +00:00
Zuul
c843feb338 Merge "Add target_all_project option" 2024-11-05 11:56:22 +00:00
Simon Hensel
28541ee158 Allow to specify multiple allocation pools when creating a subnet
With this change, multiple allocation pools may be specified when creating
a subnet. Allocation pools are defined as a list of dictionaries.

For example:

openstack.cloud.subnet:
  name: sub1
  network: network1
  cidr: 192.168.0.0/24
  ip_version: 4
  allocation_pools:
    - start: 192.168.0.10
      end: 192.168.0.50
    - start: 192.168.0.100
      end: 192.168.0.150

Change-Id: I77a06990de082466dc6265a14c379b8bbaf789e8
2024-11-05 10:13:45 +01:00
Amir Nikpour
c09029ada0 Add target_all_project option
Adds the target_all_project option to the neutron_rbac_policy
module, for specifying all projects as target projects
explicitly.

Change-Id: I1393463a79fc83bcda7aa5642f5d3ed27fb195b5
2024-11-05 10:32:08 +03:30
Gaël THEROND (Fl1nt)
903eeb7db7 Add module to filter available volume services.
Add a way to filter which volume service is running on a host or list
which hosts run available volume services.

Closes-Bug: #2010490
Change-Id: Icb17f6019a61d9346472d83ddcd2ad29c340ea05
2024-11-04 13:54:15 +00:00
Christian Berendt
03ebbaed93 Fix typo in openstack.cloud.lb_pool
Change-Id: Ic265ce384d9d4392984bc400e6a286831396d472
2024-11-03 12:15:11 +00:00
Dmitriy Rabotyagov
8d9b8b9a35 Add inactive state for the images
Glance images can be deactivated and reactivated with the corresponding
API calls. It might be useful for operators to be able to control
these states through Ansible modules as well. Instead of introducing
a new parameter, we add a new `inactive` state for the image.

Change-Id: I0738ff564f81a31690872450a4731340ed6bbeb1
2024-11-03 12:12:34 +00:00
Zuul
fef5e127d4 Merge "Allow wait: false when auto_ip is false" 2024-10-31 15:53:55 +00:00
Zuul
01d3011895 Merge "Enable glance-direct interop image import" 2024-10-31 13:51:29 +00:00
James Denton
d4f25d2282 Allow wait: false when auto_ip is false
There is an issue with the logic that results in a failure
to create a server when auto_ip is false. This patch tests
for the bool value of auto_ip and the two lists rather than
None.

Closes-Bug: #2049046
Change-Id: I2664c087c4bde83c4033ab3eb9d3e97dafb9e5cb
Signed-off-by: James Denton <james.denton@rackspace.com>
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
2024-10-30 17:44:06 -05:00
Sagi Shnaidman
3df5c38ca6 Fix test for server shelve
Change-Id: I6adb15e82817a4e08ed102a722d9fb44ebe80333
2024-10-29 17:03:36 +02:00
Jay Jahns
474b3804eb Enable glance-direct interop image import
Adds the use_import parameter to enable interop import, so that
images that need transformation by Glance, such as format
conversion, can get it.

Closes-Bug: 2084481
Change-Id: I39d1e94ff8ab9f0e0b99c1cef9a814eef0b1f060
2024-10-15 00:29:42 -05:00
Zuul
5a8ad4cdf0 Merge "Fix typo in parameter description" 2024-10-10 21:20:14 +00:00
Zuul
e742016861 Merge "Ensure coe_cluster_template compare labels properly" 2024-10-10 17:03:16 +00:00
Dmitriy Rabotyagov
40a384214c Ensure coe_cluster_template compare labels properly
SDK does return all keys and values of the template labels as
strings. At the same time user can define some labels as integers or
booleans, which will break comparison of labels and lead to module
failure on consecutive runs.

Change-Id: I7ab624428c8bb06030a2b28888f5cb89bb249f08
2024-10-09 20:56:10 +00:00
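A sketch of the normalization this fix implies (a hypothetical helper, not the module's actual code): user-supplied label values are stringified before comparing against what the SDK returns.

```python
def labels_equal(user_labels, sdk_labels):
    """Compare user-defined labels against SDK labels, which are all strings."""
    return {k: str(v) for k, v in user_labels.items()} == sdk_labels

# Without normalization, True != "True" would report a spurious change
# on every consecutive run of the module.
print(labels_equal({"auto_healing_enabled": True, "count": 3},
                   {"auto_healing_enabled": "True", "count": "3"}))  # True
```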
Pierre Riteau
1188af2486 Fix typo in parameter description
Change-Id: Ia9d4c5b2c8bb27c6088de71c6e55e160d0427ff7
2024-10-09 20:54:02 +00:00
Tobias Urdin
e42e21f621 Add Neutron trunk module
This adds an Ansible module for managing a
Neutron trunk and the sub-ports associated
with the trunk.

Change-Id: I0e1c6798b6cc30062c881d1f92fdd4d630d31106
2024-10-09 14:35:52 +00:00
Zuul
e6379e3038 Merge "Fix exception when creating object from file" 2024-10-09 09:08:35 +00:00
Vahid Mohsseni
e0dc4776bb Fix exception when updating container with metadata
A ValueError is raised when running the object_container module with the
`metadata` param against a container with existing metadata.

When the module attempts to enumerate the existing container metadata, a
ValueError exception is raised, because the code is iterating over the
metadata keys, instead of `dict_items`.
Compare to the iteration through another dict `metadata` on the next
line:

    new_metadata = dict((k, v) for k, v in metadata.items())

This change adds a call to `items()` on the dictionary.
Note that this is added outside the parentheses so that the behaviour of the
`or` statement is not affected, and that another exception isn't caused
if `container.metadata` is not a dict.

Closes-Bug: #2071934
Change-Id: Ie5e1f275839e38340a75ab18c3b9ec9bc7745d68
2024-10-09 08:57:18 +11:00
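The bug described above can be reproduced in a few lines (container names and keys are illustrative):

```python
container_metadata = {"color": "blue", "owner": "ops"}

# Iterating the dict directly yields keys (strings); unpacking a
# multi-character string into (k, v) raises ValueError.
try:
    dict((k, v) for k, v in container_metadata)
except ValueError as exc:
    print("broken:", exc)

# Calling .items() yields (key, value) pairs, as the fix does.
merged = dict((k, v) for k, v in container_metadata.items())
print(merged)  # {'color': 'blue', 'owner': 'ops'}
```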
Ben Formosa
22941e86d1 Fix exception when creating object from file
When creating a new object from a file, an AttributeError is raised.

This happens because the SDK does not return anything when creating an
object from a file.

With this change, the `_create` function will always return an object.

Closes-Bug: #2061604
Change-Id: I34cefd1bb10c6eef784e37d26122e5ed2c72488d
2024-10-09 08:53:28 +11:00
Zuul
b089d56136 Merge "Fix regression in quota module" 2024-10-08 15:43:40 +00:00
Zuul
536fe731f7 Merge "Add application_credential module" 2024-10-08 14:27:36 +00:00
Ben Formosa
4a8214b196 Fix regression in quota module
I suspect that the change to `update_quota_set` in openstacksdk commit
[9145dce64](https://opendev.org/openstack/openstacksdk/commit/9145dcec64)
has caused a regression in the quota module, making it not work correctly
for volume and compute quotas.

This change updates the calls to `update_quota_set` with the new
signatures.

Closes-Bug: #2068568
Change-Id: I604a8ffb08a76c20397f43c0ed3b23ddb11e53eb
2024-10-08 16:24:04 +03:00
Sagi Shnaidman
d1b77a3546 Remove 2.9 jobs from Zuul config
It's an old and obsolete version; we don't need to run on it.
Change-Id: I1ca3bb1a4c513af1227c08ace4e9df69dca1a4d3
2024-10-08 13:47:14 +03:00
Clark Boylan
8aefe5dc3a Run functional testing regardless of pep8/linter results
The parent change was initially pushed up with a very minor pep8
under-indentation issue. The Zuul config at the time prevented functional
testing from running, so we got no indication of whether or not the
proposed fix actually fixed the issue.

Update the Zuul config to run pep8 and functional tests concurrently.
Formatting errors are independent of functionality, and getting quick
feedback on both gives contributors and reviewers as much information as
possible when making decisions about the next step in the code review
process.

The fewer round trips we force everyone to make, the less likely we are
to forget about a change or ignore it, or otherwise extend the time it
takes to get code merged.

Change-Id: Ib92b3b80f2873327161e23b0ce6bfc6c34850538
2024-10-08 08:11:10 +00:00
Sagi Shnaidman
217e965357 Fix CI in collection
Temporarily disable the quota tests

Change-Id: I7a064649276c1fe0fde025c85817682b0429086a
Signed-off-by: Sagi Shnaidman <sshnaidm@redhat.com>
2024-10-08 00:14:27 +03:00
Steve Baker
94afde008b Add application_credential module
Create or delete a Keystone application credential.  When the secret
parameter is not set a secret will be generated and returned in the
response. Existing credentials cannot be modified so running this module
against an existing credential will result in it being deleted and
recreated. This needs to be taken into account when the secret is
generated, as the secret will change on each run of the module.

The returned result also includes a usable cloud config which allows
playbooks to easily run openstack tasks using the credential created by
this module.

Change-Id: I0ed86dc8785b0e9d10cc89cd9137a11d02d03945
2024-06-26 01:57:55 +00:00
Zuul
032a5222c1 Merge "Add vlan_tranparency for creation networks" 2024-06-25 23:01:13 +00:00
Zuul
fa6eb547ab Merge "Add insecure_registry property to coe_cluster_templates" 2024-06-25 19:49:25 +00:00
Artem Goncharov
23290a568b Allow munch results in server_info module
Certain branches of the openstacksdk explicitly convert
`Resource` objects to munch objects to add additional virtual
properties. This means the module may receive a `Resource` or a
`Munch` object. Add a small check.

Change-Id: I413877128d1e2b68d7f39420d19e2560d3d9a99e
2024-06-25 09:27:32 +00:00
Artem Goncharov
69801f268f Wait for deleted server to disappear from results
When we delete a server, wait for it to completely disappear from the
results (Nova returns it for some time with the 'DELETED' state), since
tests (and in practice also users) are not able to cope with a server
that has not yet fully gone.

Change-Id: Ie2dde98ae47dd7108d554495d5025df175647d5c
2024-06-24 17:56:27 +02:00
Fiorella Yanac
0c6aee5378 Add vlan_tranparency for creation networks
vlan_tranparency can be enabled when creating networks.

Change-Id: If1a874d28507ac89eed38426e97f1a58db020965
2024-06-19 13:43:24 +01:00
Dmitriy Rabotyagov
fb258ababa Add insecure_registry property to coe_cluster_templates
This property was missing from API documentation while being supported
for quite some time.

Change-Id: Idc1e1d78cb33f3183af00ee19446ccfd1f00f266
2024-05-02 16:59:12 +02:00
Slawek Kaplonski
4c186a2ae6 Add support for creation of the default external networks
In Neutron, an external network can be marked as 'default', and such a
network will be used by the auto-allocated network functionality [1].
This patch adds support for creating such a default network with the
Ansible OpenStack module.

[1] https://docs.openstack.org/neutron/latest/admin/config-auto-allocation.html

Change-Id: I1aeb91f8142cdc506c3343871e95dcad13f44da0
2024-05-02 10:21:38 +02:00
Zuul
598cc2d743 Merge "fix: subnet module: allow cidr option with subnet_pool" 2024-04-08 14:40:10 +00:00
George Shuklin
e0139fe940 fix: subnet module: allow cidr option with subnet_pool
Specifying a CIDR during creation of a subnet from a subnet pool is a
valid operation. Moreover, when using a subnet pool with multiple
subnets, cidr is a mandatory parameter for creating a subnet.

The following code should be valid:

    - name: Create subnet
      openstack.cloud.subnet:
        name: "subnet_name"
        network: "some_network"
        gateway_ip: "192.168.0.1"
        allocation_pool_start: "192.168.0.2"
        allocation_pool_end: "192.168.0.254"
        cidr: "192.168.0.0/24"
        ip_version: 4
        subnet_pool: "192.168.0.0/24"

This scenario is added as a subnet-pool.yaml test in the test role.

Change-Id: I1163ba34ac3079f76dd0b7477a80a2135985a650
2024-04-05 12:30:22 +03:00
Zuul
bc65d42f52 Merge "Disable auto-discovery for setuptools" 2024-03-20 15:45:59 +00:00
Zuul
ac4d105813 Merge "router: Allow specifying external network name in a different project" 2024-03-19 17:33:20 +00:00
Joel Capitao
747c4d23bc Disable auto-discovery for setuptools
Since setuptools release 61.0.0, ansible-collections-openstack's
package build command (python3 setup.py sdist bdist_wheel)
automatically finds multiple top-level packages in a flat layout.

This issue is mentioned in setuptools bug 3197 [1], and the suggested
workaround is to disable auto-discovery by adding 'py_modules=[]' in
setup.py.

[1] https://github.com/pypa/setuptools/issues/3197

Change-Id: I4aef1fd59375c4a3bc9e362e7949fa153e4cbcb0
2024-03-14 15:50:28 +01:00
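The workaround from the message, sketched as a minimal setup.py fragment (the surrounding metadata handling is elided; this repo's actual setup.py may differ):

```python
from setuptools import setup

setup(
    # ...metadata as before (carried by setup.cfg in this repo)...
    py_modules=[],  # disable setuptools >= 61 flat-layout auto-discovery
)
```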
Steve Baker
93d51498e9 CI: Don't create port with binding profile
Creating a port with a binding profile now requires a user with the
service role. This fixes CI by removing the tasks which create a port
with a binding profile. The new policy implies that only other openstack
services should be doing this. The capability can remain in the module,
but it is unlikely to be used unless with a custom or deprecated policy.

Change-Id: I89306d35670503d2fc8e76c030d88f64c20eca08
2024-03-13 14:21:14 +13:00
Mark Goddard
cfd2d1f773 router: Allow specifying external network name in a different project
If a router is created in a specific project, the router module
tries to find its external network in the same project. This fails
with 'No Network found for <network>' if the external network is in a
different project. This behaviour changed, most likely in [1], when
project scoping was added to the find_network function call.

This change modifies the network query to first check the project, then
fall back to a global search if the network is not found. This ensures
that if there are multiple networks with the name we will choose one in
the project first, while allowing use of a network in a different
project.

A regression test has been added to cover this case.

[1] 3fdbd56a58

Closes-Bug: #2049658
Change-Id: Iddc0c63a2ce3c500d7be2f8802f718a22f2895ae
2024-01-24 09:24:32 +00:00
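The lookup order this fix describes, sketched as a plain function over dicts (hypothetical, not the module's actual code): check the project first, then fall back to a global search.

```python
def find_network(networks, name, project_id=None):
    """Prefer a network matching name within project_id, else fall back globally."""
    if project_id is not None:
        for net in networks:
            if net["name"] == name and net["project_id"] == project_id:
                return net
    for net in networks:
        if net["name"] == name:
            return net
    return None

nets = [
    {"name": "ext-net", "project_id": "admin"},
    {"name": "ext-net", "project_id": "tenant-a"},
]
# A scoped match wins; a network in another project is still found as fallback.
print(find_network(nets, "ext-net", project_id="tenant-a")["project_id"])  # tenant-a
print(find_network(nets, "ext-net", project_id="tenant-b")["project_id"])  # admin
```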
Mark Goddard
e009f80ffc CI: Fix linters-devel and devstack tests
The linters-devel job fails with:

  ansible-test sanity: error: argument --skip-test: invalid choice:
  'metaclass-boilerplate' (choose from 'action-plugin-docs', ...)

The functional test fails with:

  The conditional check 'info1.volumes | selectattr("id", "equalto", "{{
  info.volumes.0.id }}") | list | length == 1' failed. The error was:
  Conditional is marked as unsafe, and cannot be evaluated.

This is due to a change in Ansible 2.17 preventing embedded templates
from referencing unsafe data [1].

[1] https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_9.html#playbook

Change-Id: I2f8411cac1403568afb13c2b96ba452c4c81f126
2024-01-19 17:42:23 +00:00
Dmitry Tantsur
08c93cf9b1 Migrate Bifrost jobs to Ubuntu Jammy
Change-Id: I8971e760457a499129d06bb472598f25ee168a7f
2023-12-19 12:53:18 +01:00
gtema
fff978d273 Prepare release 2.2.0
Prepare data for the v2.2.0 release with a few new modules and bugfixes.

Change-Id: Id593b623b389cedb140fb05e8063f48ef7eacc36
2023-12-01 17:29:42 +01:00
Zuul
9fb544d94a Merge "Fix port_security_enabled key for port module" 2023-10-17 09:28:52 +00:00
Simon Hensel
94ed95c8b6 Fix port_security_enabled key for port module
Changes to the port_security_enabled parameter are not applied due to
mismatching key names.
In the port module, the input parameter is called `port_security_enabled`,
while the OpenStackSDK uses a field called `is_port_security_enabled`.

When updating an existing port, the port module compares the dictionary
keys of the Ansible module parameters with those of the port object
returned by the OpenStackSDK.
Since these keys differ, they do not match and changes to
port security are not applied.

Story: 2010687
Task: 47789
Change-Id: I838e9d6ebf1a281269add91724eac240abe35fd4
2023-10-17 08:56:21 +02:00
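The key mismatch can be sketched with a hypothetical translation table between module parameter names and SDK field names (not the module's actual code):

```python
# Hypothetical mapping of module parameter names to the SDK's field names.
PARAM_TO_SDK = {"port_security_enabled": "is_port_security_enabled"}

def needs_update(module_params, sdk_port):
    """Compare module params to an SDK port dict, translating key names."""
    for key, value in module_params.items():
        sdk_key = PARAM_TO_SDK.get(key, key)
        if sdk_port.get(sdk_key) != value:
            return True
    return False

port = {"is_port_security_enabled": True, "name": "p1"}
print(needs_update({"port_security_enabled": False}, port))  # True: change needed
print(needs_update({"port_security_enabled": True}, port))   # False: no change
```

Without the translation, comparing `port_security_enabled` against a dict that only contains `is_port_security_enabled` would always look like "no match", so the change would never be applied.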
Zuul
4ab054790c Merge "Prevent routers to be always updated if no shared public network" 2023-10-16 15:31:30 +00:00
Zuul
2c68080758 Merge "Added module for volume type encription" 2023-10-16 14:42:15 +00:00
Zuul
6e680d594b Merge "Add volume_type related plugins/modules" 2023-10-16 14:29:57 +00:00
Dmitriy Rabotyagov
b25e93dbdd Prevent routers to be always updated if no shared public network
The current logic assumes that external_fixed_ips should always be
defined; otherwise `req_fip_map` is an empty sequence, which makes
_needs_update return True.
However, not having external_fixed_ips is a valid case whenever the
deployment does not have a shared public network. This is usually
the case when the public network is not passed to computes and is
used only for routers and floating IPs.

The patch changes the logic by adding `is not None` support, so the
external_fip configuration is only compared when the user explicitly
passed something (passing an empty dict is equal to requesting an
"empty" configuration).

Co-Authored-by: Artem Goncharov
Change-Id: Id0f69fe4c985c4c38b493577250cad4e589b9d24
2023-10-16 12:04:50 +02:00
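The `is not None` distinction the patch describes, as a standalone sketch (hypothetical helper, not the module's code):

```python
def fips_need_update(requested, current):
    """Only compare external_fixed_ips when the user explicitly passed a value.

    None means "not specified, leave as-is"; an empty list/dict is an
    explicit request for an empty configuration.
    """
    if requested is None:
        return False
    return requested != current

current = [{"subnet_id": "s1", "ip_address": "203.0.113.5"}]
print(fips_need_update(None, current))     # False: option omitted, no update
print(fips_need_update([], current))       # True: explicit empty config requested
print(fips_need_update(current, current))  # False: already matches
```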
Will Szumski
0aedc268f1 Adds stateful parameter to security groups
This is a missing option.

Change-Id: Ic7b43093d9c35de8962978e9ee108cf7b5379fcd
2023-09-01 17:53:32 +00:00
Dmitriy Rabotyagov
9b47cb4b59 Fix usage of subnet_id key for router
At the moment `subnet` is an alias of `subnet_id`. The way aliases
work in Ansible modules is that Ansible adds the intended key to the
params when an alias is used; when the original key is used, aliases
are not populated.

Right now, if a user defines `subnet_id` instead of its alias `subnet`,
the module fails with a KeyError.

Change-Id: I5ce547352097ea821be4c9bbc18147575986c740
2023-09-01 07:23:05 +00:00
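The failure mode can be sketched with a hypothetical alias-safe lookup (not the module's actual code):

```python
def get_subnet_id(params):
    """Read the subnet regardless of which of the aliased keys was populated."""
    # Indexing params["subnet"] raises KeyError when the caller passed
    # subnet_id directly, because aliases are only filled in when the
    # alias itself is used.
    return params.get("subnet") or params.get("subnet_id")

print(get_subnet_id({"subnet": "net-a"}))     # net-a
print(get_subnet_id({"subnet_id": "net-b"}))  # net-b
```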
Zuul
8612171af3 Merge "Image filters should be dict not set" 2023-08-28 11:04:30 +00:00
arddennis
8f321eaeb2 Added module for volume type encription
A new module to manipulate volume type encryption, including a simple CI
task to verify functionality.

Change-Id: I7380a5d258c3df1f9bd512aa4295868294391e31
2023-08-21 08:43:23 +02:00
Denys Mishchenko
147ad6c452 Add volume_type related plugins/modules
Added 2 new modules to manipulate volume types in OpenStack
* volume_type is used to create, delete and modify volume type
* volume_type_info is used to show volume_type details, including
  encryption information

CI tests were extended with an additional role to test basic module
behaviour.

It is currently impossible to update the is_public volume type attribute,
as it is renamed to "os-volume-type-access:is_public", which the API does
not expect; the API expects just "is_public"
(https://docs.openstack.org/api-ref/block-storage/v3/?expanded=update-a-volume-type-detail#update-a-volume-type).
This results in an "'os-volume-type-access:is_public' was unexpected"
reply. The change is probably required in openstacksdk or on the API side.

Change-Id: Idc26a5240b5f3314c8384c7326d8a82dcc8c6171
2023-08-16 16:35:51 +02:00
Dmitriy Rabotyagov
407369da6e Fix linters for mocking
Right now the linters test fails due to a trivial issue in the mock
loader. This aims to fix CI for the repo.

Change-Id: Ib58e70d3a54b75ca4cb9ad86b761db1ded157143
2023-08-15 20:16:31 +02:00
Dmitriy Rabotyagov
0a371445eb Image filters should be dict not set
At the moment we generate a set as the filter for image checksums, which
leads to an AttributeError in the SDK:
'set' object has no attribute 'keys'

With this change, supplying a checksum no longer causes a
module crash.

Change-Id: I490f51950592f62c9ad81806593340779bf6dbdb
2023-07-25 12:49:28 +02:00
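The set-vs-dict slip is a one-character Python bug worth seeing in isolation (the checksum value is illustrative):

```python
checksum = "e3b0c44298fc1c149afbf4c8996fb924"

# Bug: braces with a comma build a set, which the SDK rejects
# ('set' object has no attribute 'keys').
wrong_filters = {"checksum", checksum}
# Fix: a colon builds the dict of filters the SDK expects.
right_filters = {"checksum": checksum}

print(type(wrong_filters).__name__)  # set
print(list(right_filters.keys()))    # ['checksum']
```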
Joker 234
2808d1c155 fix(inventory): bug when using clouds_yaml_path
Before this fix, the implementation in combination with the most
recent openstacksdk (1.2.0) resulted in a list containing the default
values and, inside it, another list containing the value of
clouds_yaml_path. The clouds_yaml_path value is now added directly to
the list, and only if it was set.

Change-Id: I3c3b6f59393928d098e9b80c55b87fc6ee1e9912
2023-06-01 19:41:20 +02:00
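The nesting bug can be sketched with plain lists (the paths are illustrative, not the plugin's real defaults):

```python
default_paths = ["~/.config/openstack", "/etc/openstack"]  # illustrative defaults
clouds_yaml_path = ["/opt/inventory/clouds.yaml"]

# Bug: appending the list itself nests it inside the defaults.
nested = default_paths + [clouds_yaml_path]
# Fix: concatenate (or extend) so the entries are added directly,
# and only if the option was set.
flat = clouds_yaml_path + default_paths if clouds_yaml_path else default_paths

print(nested[-1])  # ['/opt/inventory/clouds.yaml'] -- a list inside the list
print(flat[0])     # /opt/inventory/clouds.yaml
```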
79 changed files with 3366 additions and 421 deletions


@@ -47,6 +47,7 @@
devstack_services:
designate: true
neutron-dns: true
+neutron-trunk: true
zuul_copy_output:
'{{ devstack_log_dir }}/test_output.log': 'logs'
extensions_to_txt:
@@ -162,45 +163,18 @@
tox_constraints_file: '{{ ansible_user_dir }}/{{ zuul.project.src_dir }}/tests/constraints-openstacksdk-1.x.x.txt'
tox_install_siblings: false
# Job with Ansible 2.9 for checking backward compatibility
- job:
-name: ansible-collections-openstack-functional-devstack-ansible-2.9
+name: ansible-collections-openstack-functional-devstack-ansible-2.18
parent: ansible-collections-openstack-functional-devstack-base
branches: master
description: |
Run openstack collections functional tests against a master devstack
-using master of openstacksdk and stable 2.9 branch of ansible
+using master of openstacksdk and stable 2.16 branch of ansible
required-projects:
- name: github.com/ansible/ansible
-override-checkout: stable-2.9
+override-checkout: stable-2.18
vars:
-tox_envlist: ansible_2_9
-- job:
-name: ansible-collections-openstack-functional-devstack-ansible-2.11
-parent: ansible-collections-openstack-functional-devstack-base
-branches: master
-description: |
-Run openstack collections functional tests against a master devstack
-using master of openstacksdk and stable 2.12 branch of ansible
-required-projects:
-- name: github.com/ansible/ansible
-override-checkout: stable-2.11
-vars:
-tox_envlist: ansible_2_11
-- job:
-name: ansible-collections-openstack-functional-devstack-ansible-2.12
-parent: ansible-collections-openstack-functional-devstack-base
-branches: master
-description: |
-Run openstack collections functional tests against a master devstack
-using master of openstacksdk and stable 2.12 branch of ansible
-required-projects:
-- name: github.com/ansible/ansible
-override-checkout: stable-2.12
-vars:
-tox_envlist: ansible_2_12
+tox_envlist: ansible_2_18
- job:
name: ansible-collections-openstack-functional-devstack-ansible-devel
@@ -244,24 +218,22 @@
bindep_profile: test py310
- job:
-name: openstack-tox-linters-ansible-2.12
+name: openstack-tox-linters-ansible-2.18
parent: openstack-tox-linters-ansible
nodeset: ubuntu-focal
description: |
-Run openstack collections linter tests using the 2.12 branch of ansible
+Run openstack collections linter tests using the 2.18 branch of ansible
required-projects:
- name: github.com/ansible/ansible
-override-checkout: stable-2.12
+override-checkout: stable-2.18
vars:
ensure_tox_version: '<4'
-tox_envlist: linters_2_12
-python_version: 3.8
-bindep_profile: test py38
+tox_envlist: linters_2_18
+python_version: "3.12"
+bindep_profile: test py312
# Cross-checks with other projects
- job:
name: bifrost-collections-src
-parent: bifrost-integration-tinyipa-ubuntu-focal
+parent: bifrost-integration-tinyipa-ubuntu-jammy
required-projects:
- openstack/ansible-collections-openstack
- # always use master branch when collecting parent job variants, refer to git blame for rationale.
@@ -272,7 +244,7 @@
override-checkout: master
- job:
name: bifrost-keystone-collections-src
-parent: bifrost-integration-tinyipa-keystone-ubuntu-focal
+parent: bifrost-integration-tinyipa-keystone-ubuntu-jammy
required-projects:
- openstack/ansible-collections-openstack
- # always use master branch when collecting parent job variants, refer to git blame for rationale.
@@ -284,7 +256,7 @@
- job:
name: ansible-collections-openstack-release
-parent: base
+parent: openstack-tox-linters-ansible
run: ci/publish/publish_collection.yml
secrets:
- ansible_galaxy_info
@@ -294,79 +266,58 @@
data:
url: https://galaxy.ansible.com
token: !encrypted/pkcs1-oaep
- lZFzfoCbuwqV1k6qRfl/VS7E+knUW7+zpg7BptrenK4n0g7UY0HtdVkYq0pV0Tj/LbhzG
jHD0mehcV1iS6B7ORKg4criJkdDfEx09BD8z8yv0EleiIMmhlrCoMcY593OZMBtVbGi0D
CwQtNO98QIsfZogChfLfvRNiBmUV98mEb/p6p3EtGx8J7qcAsqfWxc/CzB8GCleLAHHHT
FuikMM03ZnV0ew7E+TPkHbzzPhBZOqS5HYF0HtgttHwIXdfIWp/XdTuEEk7uRRgYZ2Iao
ifWRzoKaOQmhM++e1ydCqw9D4y9dZEFNMQLwSqcrvtb8cNwT1kl7SCFqYNE2lbutj4ne6
PTBQRsKegMB4Y3ena14fNF6tCynvJLPhF/cjPH2Jhs+B19XQhWkL3TgiOY02W24YHwRcP
+LdkM8inAvyVi3DEbEqdjBPO9OFJcBOKPlCdkGvuwdNCuEpEwctWs0gV3voflG2CDKzmJ
wu9JJOAWnq/0l1WpuDqWreKeQ/BUGZC2Gb4xRAqofulgvhs4WuYoEccjH4EJFIZ90S1EP
R/ZLadqZaEhmjwGM5sMWbBbjT23XsRgg0Tzt9m8DENYMuYDqkMdRbt2jYZa+32p4hyxVe
Y6H/pqYq5b9uOzumnShaK4WlmkQyXcNPkoSlMC1h4OGvqX/WUixpI38jyMA5Tc=
- QJ3c5LfmM4YmqwwLKv4wK5lroWDLGeMyPkmHXhvf0ry3vGjKZvZxVpbIhFXJHXevHov/r
nvlqwmG8D5msynQKZDFg2ZwSMIQWRKfSbsSLe7A6NWI2wC+QtZSPiRiBcBcHY1QbNNW21
84cssYa1oHOA0WXpomBz1qXuPV48aKLjMnWysgFhNSx3Oog+ZOSCczyyVVuXP1lIWIO26
AtRTrEcr37K3JY9usE2PCbZKFOq/+IDPz9fbS7PtBOv7iXOHOf3AfBiJiaJe3q/ecoaaq
ejk2WTKWfvq/3rY4pU1976kUcxgcd+jj9ReFyw8edCsc1ecL0qmZFbdHmC03jEcVo4p8I
WJQ0D5wk4/u2Fu9texNuBvb62Yu3Y028Zhm5rz8Zl/ISsdaA3losn5S7C7iAH/yKlGQEI
N/1X4M0tVPaMtsIhZyyz+JMbeNyVR9ZarqbtpzRtVhjxL7KOiAQbEzAmZcBbCJ2Z5iI+P
bTp03f9Y/tZNtkohARvx1TKhv8CvsmyGkMm+r5Y8aWz3SNy8LL6bSwtGun/ifbnadHmw/
TD5/UUXHHjBGkeAu9HTtwUZ5Qdkfg92PnPgruAAuOkF1Y4RyRS9qvwhtqyHO8TwU0INRY
5MHEzeOQWemoQb/qdENp+J/Q9oMEbpFYv9TkrWkxVoKop6Str8e3FF5sxmN/SE=
- project:
check:
jobs:
- tox-pep8
- openstack-tox-linters-ansible-devel
- openstack-tox-linters-ansible-2.12
- ansible-collections-openstack-functional-devstack:
dependencies: &deps_unit_lint
- tox-pep8
- openstack-tox-linters-ansible-2.12
- ansible-collections-openstack-functional-devstack-releases:
dependencies: *deps_unit_lint
- ansible-collections-openstack-functional-devstack-ansible-2.9:
dependencies: *deps_unit_lint
- ansible-collections-openstack-functional-devstack-ansible-2.12:
dependencies: *deps_unit_lint
- ansible-collections-openstack-functional-devstack-ansible-devel:
dependencies: *deps_unit_lint
- ansible-collections-openstack-functional-devstack-magnum:
dependencies: *deps_unit_lint
- ansible-collections-openstack-functional-devstack-octavia:
dependencies: *deps_unit_lint
- openstack-tox-linters-ansible-2.18
- ansible-collections-openstack-functional-devstack
- ansible-collections-openstack-functional-devstack-releases
- ansible-collections-openstack-functional-devstack-ansible-2.18
- ansible-collections-openstack-functional-devstack-ansible-devel
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-octavia
- bifrost-collections-src:
voting: false
dependencies: *deps_unit_lint
irrelevant-files: *ignore_files
- bifrost-keystone-collections-src:
voting: false
dependencies: *deps_unit_lint
irrelevant-files: *ignore_files
gate:
jobs:
- tox-pep8
- openstack-tox-linters-ansible-2.12
# - ansible-collections-openstack-functional-devstack
- openstack-tox-linters-ansible-2.18
- ansible-collections-openstack-functional-devstack-releases
# - ansible-collections-openstack-functional-devstack-ansible-2.9
# - ansible-collections-openstack-functional-devstack-ansible-2.12
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-octavia
periodic:
jobs:
- openstack-tox-linters-ansible-devel
-- openstack-tox-linters-ansible-2.12
+- openstack-tox-linters-ansible-2.18
- ansible-collections-openstack-functional-devstack
- ansible-collections-openstack-functional-devstack-releases
-- ansible-collections-openstack-functional-devstack-ansible-2.9
-- ansible-collections-openstack-functional-devstack-ansible-2.12
+- ansible-collections-openstack-functional-devstack-ansible-2.18
- ansible-collections-openstack-functional-devstack-ansible-devel
- bifrost-collections-src
- bifrost-keystone-collections-src
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-octavia
experimental:
jobs:
- ansible-collections-openstack-functional-devstack-ansible-2.11
tag:
jobs:
- ansible-collections-openstack-release


@@ -5,6 +5,175 @@ Ansible OpenStack Collection Release Notes
.. contents:: Topics
v2.4.1
======
Release Summary
---------------
Bugfixes and minor changes
Minor Changes
-------------
- Update tags when changing server
Bugfixes
--------
- Fix missed client_cert in OpenStackModule
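The tag change above replaces a server's tags wholesale: tags are not part of the instance metadata and cannot be updated via PUT /servers/{server_id}, so the module writes the full new set. A minimal sketch of the behavior, assuming a hypothetical clouds.yaml entry `devstack` and an existing server `web1`:

```yaml
- name: Replace all tags on an existing server
  openstack.cloud.server:
    cloud: devstack   # hypothetical clouds.yaml entry
    name: web1        # hypothetical existing server
    state: present
    tags:             # any existing tags are fully replaced by this list
      - yellow
  register: server

- name: Verify the new tag set
  ansible.builtin.assert:
    that:
      - server.server.tags == ['yellow']
```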
v2.4.0
======
Release Summary
---------------
New trait module and minor changes
Major Changes
-------------
- Add trait module
Minor Changes
-------------
- Add loadbalancer quota options
- Allow creating instances with tags
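The loadbalancer quota options let the quota module manage Octavia limits alongside the compute, network, and volume quotas. A sketch using the option names exercised by the collection's quota tests, with a hypothetical cloud entry and project name:

```yaml
- name: Set load balancer (Octavia) quotas for a project
  openstack.cloud.quota:
    cloud: devstack      # hypothetical clouds.yaml entry
    name: demo_project   # hypothetical project
    load_balancers: 5
    listeners: 5
    pools: 5
    members: 5
    health_monitors: 5
```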
New Modules
-----------
- openstack.cloud.trait - Add or Delete a trait from OpenStack
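A minimal usage sketch for the new trait module, mirroring the collection's own integration test; the cloud name is a hypothetical clouds.yaml entry, and custom Placement traits must carry the CUSTOM_ prefix:

```yaml
- name: Ensure a custom Placement trait exists
  openstack.cloud.trait:
    cloud: devstack            # hypothetical clouds.yaml entry
    state: present
    id: CUSTOM_ANSIBLE_TRAIT   # custom traits require the CUSTOM_ prefix
  register: trait_result
```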
v2.3.3
======
Release Summary
---------------
Bugfixes and minor changes
Minor Changes
-------------
- Add test for only_ipv4 in inventory
- Add an option to use only IPv4 for ansible_host and ansible_ssh_host
Bugfixes
--------
- CI - Fix deprecated ANSIBLE_COLLECTIONS_PATHS variable
v2.3.2
======
Release Summary
---------------
Bugfixes and minor changes
Minor Changes
-------------
- Drop compat implementations for tests
Bugfixes
--------
- Fix openstack.cloud.port module failure in check mode
v2.3.1
======
Release Summary
---------------
Client TLS certificate support
Minor Changes
-------------
- Add ability to pass client tls certificate
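The client TLS certificate can be passed to any module in the collection; a hedged sketch, with the certificate and key paths as hypothetical placeholders:

```yaml
- name: List servers, authenticating with a client TLS certificate
  openstack.cloud.server_info:
    cloud: devstack                        # hypothetical clouds.yaml entry
    client_cert: /etc/ssl/client/cert.pem  # hypothetical paths
    client_key: /etc/ssl/client/key.pem
  register: servers
```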
v2.3.0
======
Release Summary
---------------
Bugfixes and new modules
Major Changes
-------------
- Add Neutron trunk module
- Add application_credential module
- Add module to filter available volume services
Minor Changes
-------------
- Add inactive state for the images
- Add insecure_registry property to coe_cluster_templates
- Add support for creation of the default external networks
- Add target_all_project option
- Add vlan_transparency when creating networks
- Allow munch results in server_info module
- Allow to specify multiple allocation pools when creating a subnet
- CI - Disable auto-discovery for setuptools
- CI - Don't create port with binding profile
- CI - Fix CI in collection
- CI - Fix linters-devel and devstack tests
- CI - Fix regression in quota module
- CI - Fix test for server shelve
- CI - Migrate Bifrost jobs to Ubuntu Jammy
- CI - Remove 2.9 jobs from Zuul config
- CI - Run functional testing regardless of pep8/linter results
- Enable glance-direct interop image import
- Ensure coe_cluster_template compare labels properly
- Wait for deleted server to disappear from results
- router - Allow specifying external network name in a different project
Bugfixes
--------
- Allow wait false when auto_ip is false
- Fix exception when creating object from file
- Fix exception when updating container with metadata
- Fix typo in openstack.cloud.lb_pool
- Fix typo in parameter description
- fix subnet module - allow cidr option with subnet_pool
New Modules
-----------
- openstack.cloud.application_credential - Manage OpenStack Identity (Keystone) application credentials
- openstack.cloud.trunk - Add or delete trunks from an OpenStack cloud
- openstack.cloud.volume_service_info - Fetch OpenStack Volume (Cinder) services
v2.2.0
======
Release Summary
---------------
New module for volume_type and bugfixes
Minor Changes
-------------
- Add volume_encryption_type modules
- Add volume_type modules
Bugfixes
--------
- Fix image module filter
- Fix port module idempotency
- Fix router module idempotency
v2.1.0
======


@@ -515,3 +515,104 @@ releases:
- Highlight our mode of operation more prominently
release_summary: New module for Ironic and bugfixes
release_date: '2023-04-19'
2.2.0:
changes:
bugfixes:
- Fix image module filter
- Fix port module idempotency
- Fix router module idempotency
minor_changes:
- Add volume_encryption_type modules
- Add volume_type modules
release_summary: New module for volume_type and bugfixes
release_date: '2023-12-01'
2.3.0:
changes:
bugfixes:
- Allow wait false when auto_ip is false
- Fix exception when creating object from file
- Fix exception when updating container with metadata
- Fix typo in openstack.cloud.lb_pool
- Fix typo in parameter description
- fix subnet module - allow cidr option with subnet_pool
major_changes:
- Add Neutron trunk module
- Add application_credential module
- Add module to filter available volume services
minor_changes:
- Add inactive state for the images
- Add insecure_registry property to coe_cluster_templates
- Add support for creation of the default external networks
- Add target_all_project option
- Add vlan_transparency when creating networks
- Allow munch results in server_info module
- Allow to specify multiple allocation pools when creating a subnet
- CI - Disable auto-discovery for setuptools
- CI - Don't create port with binding profile
- CI - Fix CI in collection
- CI - Fix linters-devel and devstack tests
- CI - Fix regression in quota module
- CI - Fix test for server shelve
- CI - Migrate Bifrost jobs to Ubuntu Jammy
- CI - Remove 2.9 jobs from Zuul config
- CI - Run functional testing regardless of pep8/linter results
- Enable glance-direct interop image import
- Ensure coe_cluster_template compare labels properly
- Wait for deleted server to disappear from results
- router - Allow specifying external network name in a different project
release_summary: Bugfixes and new modules
modules:
- description: Manage OpenStack Identity (Keystone) application credentials
name: application_credential
namespace: ''
- description: Add or delete trunks from an OpenStack cloud
name: trunk
namespace: ''
- description: Fetch OpenStack Volume (Cinder) services
name: volume_service_info
namespace: ''
release_date: '2024-11-28'
2.3.1:
changes:
minor_changes:
- Add ability to pass client tls certificate
release_summary: Client TLS certificate support
release_date: '2024-12-18'
2.3.2:
changes:
bugfixes:
- Fix openstack.cloud.port module failure in check mode
minor_changes:
- Drop compat implementations for tests
release_summary: Bugfixes and minor changes
release_date: '2024-12-20'
2.3.3:
changes:
bugfixes:
- CI - Fix deprecated ANSIBLE_COLLECTIONS_PATHS variable
minor_changes:
- Add test for only_ipv4 in inventory
- Add an option to use only IPv4 for ansible_host and ansible_ssh_host
release_summary: Bugfixes and minor changes
release_date: '2024-12-22'
2.4.0:
changes:
major_changes:
- Add trait module
minor_changes:
- Add loadbalancer quota options
- Allow creating instances with tags
release_summary: New trait module and minor changes
modules:
- description: Add or Delete a trait from OpenStack
name: trait
namespace: ''
release_date: '2025-01-15'
2.4.1:
changes:
bugfixes:
- Fix missed client_cert in OpenStackModule
minor_changes:
- Update tags when changing server
release_summary: Bugfixes and minor changes
release_date: '2025-01-20'


@@ -3,7 +3,8 @@
vars:
collection_path: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}"
build_collection_path: /tmp/collection_built/
ansible_galaxy_path: "~/.local/bin/ansible-galaxy"
ansible_virtualenv_path: /tmp/ansible_venv
ansible_galaxy_path: "{{ ansible_virtualenv_path }}/bin/ansible-galaxy"
tasks:
@@ -11,9 +12,15 @@
include_role:
name: ensure-pip
- name: Install ansible
- name: Install Ansible in virtualenv
pip:
name: ansible-core<2.12
name: ansible-core<2.19
virtualenv: "{{ ansible_virtualenv_path }}"
virtualenv_command: "{{ ensure_pip_virtualenv_command }}"
- name: Detect ansible version
command: "{{ ansible_virtualenv_path }}/bin/ansible --version"
register: ansible_version
- name: Discover tag version
set_fact:


@@ -0,0 +1,9 @@
expected_fields:
- description
- expires_at
- id
- name
- project_id
- roles
- secret
- unrestricted


@@ -0,0 +1,61 @@
---
- name: Create application credentials
openstack.cloud.application_credential:
cloud: "{{ cloud }}"
state: present
name: ansible_creds
description: dummy description
register: appcred
- name: Assert return values of application_credential module
assert:
that:
- appcred is changed
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(appcred.application_credential.keys())|length == 0
- name: Create the application credential again
openstack.cloud.application_credential:
cloud: "{{ cloud }}"
state: present
name: ansible_creds
description: dummy description
register: appcred
- name: Assert return values of application_credential module
assert:
that:
# credentials are immutable so creating twice will cause delete and create
- appcred is changed
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(appcred.application_credential.keys())|length == 0
- name: Update the application credential again
openstack.cloud.application_credential:
cloud: "{{ cloud }}"
state: present
name: ansible_creds
description: new description
register: appcred
- name: Assert application credential changed
assert:
that:
- appcred is changed
- appcred.application_credential.description == 'new description'
- name: Get list of all keypairs using application credential
openstack.cloud.keypair_info:
cloud: "{{ appcred.cloud }}"
- name: Delete application credential
openstack.cloud.application_credential:
cloud: "{{ cloud }}"
state: absent
name: ansible_creds
register: appcred
- name: Assert application credential changed
assert:
that: appcred is changed


@@ -26,6 +26,9 @@
keypair_id: '{{ keypair.keypair.id }}'
name: k8s
state: present
labels:
docker_volume_size: 10
cloud_provider_tag: v1.23.1
register: coe_cluster_template
- name: Assert return values of coe_cluster_template module
@@ -43,6 +46,9 @@
keypair_id: '{{ keypair.keypair.id }}'
name: k8s
state: present
labels:
docker_volume_size: 10
cloud_provider_tag: v1.23.1
register: coe_cluster_template
- name: Assert return values of coe_cluster_template module


@@ -24,6 +24,13 @@
path: '{{ tmp_file.path }}'
size: 1M
- name: Calculating file checksum
ansible.builtin.stat:
path: "{{ tmp_file.path }}"
checksum_algorithm: sha512
get_checksum: true
register: image_details
- name: Ensure mock kernel and ramdisk images (defaults)
openstack.cloud.image:
cloud: "{{ cloud }}"
@@ -42,6 +49,7 @@
name: ansible_image
filename: "{{ tmp_file.path }}"
is_protected: true
checksum: "{{ image_details.stat.checksum }}"
disk_format: raw
tags:
- test
@@ -168,6 +176,34 @@
- image is changed
- image.image.name == 'ansible_image-changed'
- name: Deactivate raw image
openstack.cloud.image:
cloud: "{{ cloud }}"
state: inactive
id: "{{ image.image.id }}"
name: 'ansible_image-changed'
register: image
- name: Assert changed
assert:
that:
- image is changed
- image.image.status == 'deactivated'
- name: Reactivate raw image
openstack.cloud.image:
cloud: "{{ cloud }}"
state: present
id: "{{ image.image.id }}"
name: 'ansible_image-changed'
register: image
- name: Assert changed
assert:
that:
- image is changed
- image.image.status == 'active'
- name: Rename back raw image (defaults)
openstack.cloud.image:
cloud: "{{ cloud }}"


@@ -303,6 +303,25 @@
that:
- inventory.all.children.RegionOne.hosts.keys() | sort == ['ansible_server1', 'ansible_server2'] | sort
- name: List servers with inventory plugin with IPv4 only
ansible.builtin.command:
cmd: ansible-inventory --list --yaml --extra-vars only_ipv4=true --inventory-file openstack.yaml
chdir: "{{ tmp_dir.path }}"
environment:
ANSIBLE_INVENTORY_CACHE: "True"
ANSIBLE_INVENTORY_CACHE_PLUGIN: "jsonfile"
ANSIBLE_CACHE_PLUGIN_CONNECTION: "{{ tmp_dir.path }}/.cache/"
register: inventory
- name: Read YAML output from inventory plugin again
ansible.builtin.set_fact:
inventory: "{{ inventory.stdout | from_yaml }}"
- name: Check YAML output from inventory plugin again
assert:
that:
- inventory.all.children.RegionOne.hosts.keys() | sort == ['ansible_server1', 'ansible_server2'] | sort
- name: Delete server 2
openstack.cloud.resource:
service: compute


@@ -7,3 +7,4 @@ expected_fields:
- project_id
- target_project_id
- tenant_id
all_project_symbol: '*'


@@ -69,6 +69,29 @@
id: "{{ rbac_policy.rbac_policy.id }}"
state: absent
- name: Create a new network RBAC policy by targeting all projects
openstack.cloud.neutron_rbac_policy:
cloud: "{{ cloud }}"
object_id: "{{ network.network.id }}"
object_type: 'network'
action: 'access_as_shared'
target_all_project: true
project_id: "{{ source_project.project.id }}"
register: rbac_policy
- name: Assert return values of neutron_rbac_policy module
assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(rbac_policy.rbac_policy.keys())|length == 0
- rbac_policy.rbac_policy.target_project_id == all_project_symbol
- name: Delete RBAC policy
openstack.cloud.neutron_rbac_policy:
cloud: "{{ cloud }}"
id: "{{ rbac_policy.rbac_policy.id }}"
state: absent
- name: Get all rbac policies for {{ source_project.project.name }} - after deletion
openstack.cloud.neutron_rbac_policies_info:
cloud: "{{ cloud }}"


@@ -5,7 +5,7 @@
state: present
name: ansible_container
- name: Create object
- name: Create object from data
openstack.cloud.object:
cloud: "{{ cloud }}"
state: present
@@ -28,6 +28,47 @@
name: ansible_object
container: ansible_container
- name: Create object from file
block:
- name: Create temporary data file
ansible.builtin.tempfile:
register: tmp_file
- name: Populate data file
ansible.builtin.copy:
content: "this is a test"
dest: "{{ tmp_file.path }}"
- name: Create object from data file
openstack.cloud.object:
cloud: "{{ cloud }}"
state: present
name: ansible_object
filename: "{{ tmp_file.path }}"
container: ansible_container
register: object
always:
- name: Remove temporary data file
ansible.builtin.file:
path: "{{ tmp_file.path }}"
state: absent
when: tmp_file is defined and 'path' in tmp_file
- name: Assert return values of object module
assert:
that:
- object.object.id == "ansible_object"
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(object.object.keys())|length == 0
- name: Delete object
openstack.cloud.object:
cloud: "{{ cloud }}"
state: absent
name: ansible_object
container: ansible_container
- name: Delete container
openstack.cloud.object_container:
cloud: "{{ cloud }}"


@@ -31,6 +31,21 @@
- ('cache-control' in container.container.metadata.keys()|map('lower'))
- container.container.metadata['foo'] == 'bar'
- name: Update container metadata
openstack.cloud.object_container:
cloud: "{{ cloud }}"
name: ansible_container
metadata:
'foo': 'baz'
register: container
- name: Verify container metadata was updated
assert:
that:
- container is changed
- ('cache-control' in container.container.metadata.keys()|map('lower'))
- container.container.metadata['foo'] == 'baz'
- name: Update a container
openstack.cloud.object_container:
cloud: "{{ cloud }}"
@@ -45,7 +60,7 @@
that:
- container is changed
- ('cache-control' not in container.container.metadata.keys()|map('lower'))
- "container.container.metadata == {'foo': 'bar'}"
- "container.container.metadata == {'foo': 'baz'}"
- container.container.read_ACL is none or container.container.read_ACL == ""
- name: Delete container


@@ -1,6 +1,3 @@
binding_profile:
"pci_slot": "0000:03:11.1"
"physical_network": "provider"
expected_fields:
- allowed_address_pairs
- binding_host_id


@@ -256,27 +256,6 @@
state: absent
name: ansible_security_group
- name: Create port (with binding profile)
openstack.cloud.port:
cloud: "{{ cloud }}"
state: present
name: "{{ port_name }}"
network: "{{ network_name }}"
binding_profile: "{{ binding_profile }}"
register: port
- name: Assert binding_profile exists in created port
assert:
that: "port.port['binding_profile']"
- debug: var=port
- name: Delete port (with binding profile)
openstack.cloud.port:
cloud: "{{ cloud }}"
state: absent
name: "{{ port_name }}"
- name: Delete subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"


@@ -28,3 +28,9 @@ test_compute_quota:
ram: 5
server_group_members: 5
server_groups: 5
test_load_balancer_quota:
load_balancers: 5
health_monitors: 5
listeners: 5
pools: 5
members: 5


@@ -0,0 +1,158 @@
---
- module_defaults:
group/openstack.cloud.openstack:
cloud: "{{ cloud }}"
name: "{{ test_project }}"
# Backward compatibility with Ansible 2.9
openstack.cloud.project:
cloud: "{{ cloud }}"
name: "{{ test_project }}"
openstack.cloud.quota:
cloud: "{{ cloud }}"
name: "{{ test_project }}"
block:
- name: Create test project
openstack.cloud.project:
state: present
- name: Clear quotas before tests
openstack.cloud.quota:
state: absent
register: default_quotas
- name: Set network quota
openstack.cloud.quota: "{{ test_network_quota }}"
register: quotas
- name: Assert changed
assert:
that: quotas is changed
- name: Assert field values
assert:
that: quotas.quotas.network[item.key] == item.value
loop: "{{ test_network_quota | dict2items }}"
- name: Set network quota again
openstack.cloud.quota: "{{ test_network_quota }}"
register: quotas
- name: Assert not changed
assert:
that: quotas is not changed
- name: Set volume quotas
openstack.cloud.quota: "{{ test_volume_quota }}"
register: quotas
- name: Assert changed
assert:
that: quotas is changed
- name: Assert field values
assert:
that: quotas.quotas.volume[item.key] == item.value
loop: "{{ test_volume_quota | dict2items }}"
- name: Set volume quotas again
openstack.cloud.quota: "{{ test_volume_quota }}"
register: quotas
- name: Assert not changed
assert:
that: quotas is not changed
- name: Set compute quotas
openstack.cloud.quota: "{{ test_compute_quota }}"
register: quotas
- name: Assert changed
assert:
that: quotas is changed
- name: Assert field values
assert:
that: quotas.quotas.compute[item.key] == item.value
loop: "{{ test_compute_quota | dict2items }}"
- name: Set compute quotas again
openstack.cloud.quota: "{{ test_compute_quota }}"
register: quotas
- name: Assert not changed
assert:
that: quotas is not changed
- name: Set load_balancer quotas
openstack.cloud.quota: "{{ test_load_balancer_quota }}"
register: quotas
- name: Assert changed
assert:
that: quotas is changed
- name: Assert field values
assert:
that: quotas.quotas.load_balancer[item.key] == item.value
loop: "{{ test_load_balancer_quota | dict2items }}"
- name: Set load_balancer quotas again
openstack.cloud.quota: "{{ test_load_balancer_quota }}"
register: quotas
- name: Assert not changed
assert:
that: quotas is not changed
- name: Unset all quotas
openstack.cloud.quota:
state: absent
register: quotas
- name: Assert defaults restore
assert:
that: quotas.quotas == default_quotas.quotas
- name: Set all quotas at once
openstack.cloud.quota:
"{{ [test_network_quota, test_volume_quota, test_compute_quota, test_load_balancer_quota] | combine }}"
register: quotas
- name: Assert changed
assert:
that: quotas is changed
- name: Assert volume values
assert:
that: quotas.quotas.volume[item.key] == item.value
loop: "{{ test_volume_quota | dict2items }}"
- name: Assert network values
assert:
that: quotas.quotas.network[item.key] == item.value
loop: "{{ test_network_quota | dict2items }}"
- name: Assert compute values
assert:
that: quotas.quotas.compute[item.key] == item.value
loop: "{{ test_compute_quota | dict2items }}"
- name: Assert load_balancer values
assert:
that: quotas.quotas.load_balancer[item.key] == item.value
loop: "{{ test_load_balancer_quota | dict2items }}"
- name: Set all quotas at once again
openstack.cloud.quota:
"{{ [test_network_quota, test_volume_quota, test_compute_quota, test_load_balancer_quota] | combine }}"
register: quotas
- name: Assert not changed
assert:
that: quotas is not changed
- name: Unset all quotas
openstack.cloud.quota:
state: absent
register: quotas
- name: Delete test project
openstack.cloud.project:
state: absent


@@ -128,4 +128,9 @@
- name: Delete test project
openstack.cloud.project:
state: absent
- import_tasks: loadbalancer.yml
tags:
- loadbalancer


@@ -384,7 +384,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.100
- name: Gather routers info
@@ -412,7 +412,7 @@
external_gateway_info:
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.100
- subnet: shade_subnet5
ip: 10.6.6.101
@@ -426,7 +426,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.100
- subnet: shade_subnet5
ip: 10.6.6.101
@@ -461,7 +461,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.101
- name: Update router (remove external fixed ips) again
@@ -473,7 +473,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.101
register: router
@@ -506,7 +506,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.101
- name: Gather routers info
@@ -533,7 +533,7 @@
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet: shade_subnet5
- subnet_id: shade_subnet5
ip: 10.6.6.101
register: router
@@ -720,3 +720,5 @@
name: "{{ external_network_name }}"
- include_tasks: shared_network.yml
- include_tasks: shared_ext_network.yml


@@ -0,0 +1,99 @@
---
# Test the case where we have a shared external network in one project used as
# the gateway on a router in a second project.
# See https://bugs.launchpad.net/ansible-collections-openstack/+bug/2049658
- name: Create the first project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: present
name: "shared_ext_net_test_1"
description: "Project that contains the external network to be shared"
domain: default
is_enabled: True
register: project_1
- name: Create the external network to be shared
openstack.cloud.network:
cloud: "{{ cloud }}"
state: present
name: "{{ external_network_name }}"
project: "shared_ext_net_test_1"
external: true
shared: true
register: shared_ext_network
- name: Create subnet on external network
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: present
network_name: "{{ shared_ext_network.id }}"
name: "shared_ext_subnet"
project: "shared_ext_net_test_1"
cidr: "10.6.6.0/24"
register: shared_subnet
- name: Create the second project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: present
name: "shared_ext_net_test_2"
description: "Project that contains the subnet to be shared"
domain: default
is_enabled: True
register: project_2
- name: Create router with gateway on shared external network
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: "shared_ext_net_test2_router"
project: "shared_ext_net_test_2"
network: "{{ external_network_name }}"
register: router
- name: Gather routers info
openstack.cloud.routers_info:
cloud: "{{ cloud }}"
name: "shared_ext_net_test2_router"
register: routers
- name: Verify routers info
assert:
that:
- routers.routers.0.id == router.router.id
- routers.routers.0.external_gateway_info.external_fixed_ips|length == 1
- name: Delete router
openstack.cloud.router:
cloud: "{{ cloud }}"
state: absent
name: "shared_ext_net_test2_router"
project: "shared_ext_net_test_2"
- name: Delete subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: absent
network_name: "{{ shared_ext_network.id }}"
name: "shared_ext_subnet"
project: "shared_ext_net_test_1"
- name: Delete network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: absent
name: "{{ external_network_name }}"
project: "shared_ext_net_test_1"
- name: Delete project 2
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: "shared_ext_net_test_2"
- name: Delete project 1
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: "shared_ext_net_test_1"


@@ -72,4 +72,25 @@
name: ansible_security_group
state: absent
- name: Create stateless security group
openstack.cloud.security_group:
cloud: "{{ cloud }}"
name: ansible_security_group_stateless
stateful: false
state: present
description: 'Created from Ansible playbook'
register: security_group_stateless
- name: Assert return values of security_group module
assert:
that:
- security_group_stateless.security_group.name == 'ansible_security_group_stateless'
- security_group_stateless.security_group.stateful == False
- name: Delete stateless security group
openstack.cloud.security_group:
cloud: "{{ cloud }}"
name: ansible_security_group_stateless
state: absent
- include_tasks: rules.yml


@@ -399,6 +399,9 @@
- port-id: "{{ port.port.id }}"
reuse_ips: false
state: present
tags:
- first
- second
wait: true
register: server
@@ -413,6 +416,7 @@
|selectattr('OS-EXT-IPS:type', 'equalto', 'floating')
|map(attribute='addr')
|list|length == 0
- server.server.tags == ["first", "second"]
- name: Find all floating ips for debugging
openstack.cloud.floating_ip_info:
@@ -454,6 +458,8 @@
- '{{ server_security_group }}'
- '{{ server_alt_security_group }}'
state: present
tags:
- yellow
wait: true
register: server_updated
@@ -475,6 +481,7 @@
- server_updated.server.addresses[server_network]|length == 2
- port.port.fixed_ips[0].ip_address in
server_updated.server.addresses[server_network]|map(attribute='addr')
- server_updated.server.tags == ['yellow']
# TODO: Verify networks once openstacksdk's issue #2010352 has been solved
# Ref.: https://storyboard.openstack.org/#!/story/2010352
#- server_updated.server.addresses.public|length > 0
@@ -509,6 +516,8 @@
- '{{ server_security_group }}'
- '{{ server_alt_security_group }}'
state: present
tags:
- yellow
wait: true
register: server_updated_again
@@ -517,6 +526,7 @@
that:
- server.server.id == server_updated_again.server.id
- server_updated_again is not changed
- server_updated_again.server.tags == ['yellow']
# TODO: Drop failure test once openstacksdk's issue #2010352 has been solved
# Ref.: https://storyboard.openstack.org/#!/story/2010352


@@ -460,19 +460,14 @@
register: server
ignore_errors: true
- name: Assert shelve offload server
assert:
that:
- ((server is success)
or (server is not success
and "Cannot 'shelveOffload' instance" in server.msg
and "while it is in vm_state shelved_offloaded" in server.msg))
- name: Get info about server
openstack.cloud.server_info:
cloud: "{{ cloud }}"
server: ansible_server
register: servers
until: servers.servers.0.task_state == none
retries: 30
delay: 10
- name: Ensure status for server is SHELVED_OFFLOADED
# no change if server has been offloaded automatically after first shelve command


@@ -150,3 +150,6 @@
- name: Subnet Allocation
include_tasks: subnet-allocation.yml
- name: Subnet Allocations from Subnet Pool
include_tasks: subnet-pool.yaml


@@ -68,6 +68,80 @@
name: "{{ subnet_name }}"
state: absent
- name: Create subnet {{ subnet_name }} with multiple allocation pools on network {{ network_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
gateway_ip: 192.168.0.1
allocation_pools:
- start: 192.168.0.2
end: 192.168.0.4
- start: 192.168.0.10
end: 192.168.0.12
- name: Create subnet {{ subnet_name }} on network {{ network_name }} again
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
gateway_ip: 192.168.0.1
allocation_pools:
- start: 192.168.0.2
end: 192.168.0.4
- start: 192.168.0.10
end: 192.168.0.12
register: idem2
- name: Update subnet {{ subnet_name }} allocation pools
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
gateway_ip: 192.168.0.1
allocation_pools:
- start: 192.168.0.2
end: 192.168.0.8
- start: 192.168.0.10
end: 192.168.0.16
- name: Get Subnet Info
openstack.cloud.subnets_info:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.16')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.16')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
state: absent
- name: Delete network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"


@@ -0,0 +1,167 @@
---
# This test covers the case where a subnet pool is
# built with multiple prefixes and the Neutron API
# requires the CIDR parameter to be passed together
# with the subnet pool.
- name: Create network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"
name: "{{ network_name }}"
state: present
- name: Create address_scope
openstack.cloud.address_scope:
cloud: "{{ cloud }}"
name: "{{ address_scope_name }}"
shared: false
ip_version: "4"
register: create_address_scope
- name: Create subnet pool
openstack.cloud.subnet_pool:
cloud: "{{ cloud }}"
name: "{{ subnet_pool_name }}"
is_shared: false
address_scope: "{{ address_scope_name }}"
prefixes:
- 192.168.0.0/24
- 192.168.42.0/24
register: subnet_pool
- name: Create subnet {{ subnet_name }} on network {{ network_name }} from subnet pool {{ subnet_pool_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.42.0/24 # we want specific cidr from subnet pool
ip_version: 4
subnet_pool: "{{ subnet_pool_name }}"
gateway_ip: 192.168.42.1
allocation_pool_start: 192.168.42.2
allocation_pool_end: 192.168.42.4
- name: Create subnet {{ subnet_name }} on network {{ network_name }} from subnet pool {{ subnet_pool_name }} again
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.42.0/24
ip_version: 4
subnet_pool: "{{ subnet_pool_name }}"
gateway_ip: 192.168.42.1
allocation_pool_start: 192.168.42.2
allocation_pool_end: 192.168.42.4
register: idem1
- name: Get Subnet Info
openstack.cloud.subnets_info:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem1 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 1
- name: Verify Subnet Allocation Pools
assert:
that:
- subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2'
- subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4'
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
state: absent
- name: Create subnet {{ subnet_name }} with multiple allocation pools on network {{ network_name }} from subnet pool {{ subnet_pool_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.42.0/24 # we want specific cidr from subnet pool
ip_version: 4
subnet_pool: "{{ subnet_pool_name }}"
gateway_ip: 192.168.42.1
allocation_pools:
- start: 192.168.42.2
end: 192.168.42.4
- start: 192.168.42.6
end: 192.168.42.8
- name: Create subnet {{ subnet_name }} on network {{ network_name }} from subnet pool {{ subnet_pool_name }} again
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
enable_dhcp: "{{ enable_subnet_dhcp }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.42.0/24
ip_version: 4
subnet_pool: "{{ subnet_pool_name }}"
gateway_ip: 192.168.42.1
allocation_pools:
- start: 192.168.42.2
end: 192.168.42.4
- start: 192.168.42.6
end: 192.168.42.8
register: idem2
- name: Get Subnet Info
openstack.cloud.subnets_info:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.8')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.8')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
state: absent
- name: Delete created subnet pool
openstack.cloud.subnet_pool:
cloud: "{{ cloud }}"
name: "{{ subnet_pool_name }}"
state: absent
- name: Delete created address scope
openstack.cloud.address_scope:
cloud: "{{ cloud }}"
name: "{{ address_scope_name }}"
state: absent
- name: Delete network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"
name: "{{ network_name }}"
state: absent
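The paired or-joined assertions above exist because Neutron does not guarantee the order in which it returns allocation pools. The same check can be expressed as a single order-insensitive comparison; a minimal sketch in plain Python (the helper name and sample data are illustrative, not part of the collection):

```python
def same_pools(actual, expected):
    # Compare allocation pools regardless of the order Neutron returns
    # them in, which is what the two or-joined assertions approximate.
    key = lambda p: (p["start"], p["end"])
    return sorted(actual, key=key) == sorted(expected, key=key)

# Neutron may return the pools in either order.
actual = [
    {"start": "192.168.42.6", "end": "192.168.42.8"},
    {"start": "192.168.42.2", "end": "192.168.42.4"},
]
expected = [
    {"start": "192.168.42.2", "end": "192.168.42.4"},
    {"start": "192.168.42.6", "end": "192.168.42.8"},
]
print(same_pools(actual, expected))  # True
```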

View File

@@ -0,0 +1 @@
trait_name: CUSTOM_ANSIBLE_TRAIT

View File

@@ -0,0 +1,23 @@
---
- openstack.cloud.trait:
cloud: "{{ cloud }}"
state: present
id: "{{ trait_name }}"
delegate_to: localhost
register: item
- assert:
that:
- "'name' in item.trait"
- "item.trait.id == trait_name"
- openstack.cloud.trait:
cloud: "{{ cloud }}"
state: absent
id: "{{ trait_name }}"
delegate_to: localhost
register: item
- assert:
that:
- "'trait' not in item"

View File

@@ -0,0 +1,21 @@
expected_fields:
- created_at
- description
- id
- is_admin_state_up
- name
- port_id
- project_id
- revision_number
- status
- sub_ports
- tags
- tenant_id
- updated_at
trunk_name: ansible_trunk
parent_network_name: ansible_parent_port_network
parent_subnet_name: ansible_parent_port_subnet
parent_port_name: ansible_parent_port
subport_network_name: ansible_subport_network
subport_subnet_name: ansible_subport_subnet
subport_name: ansible_subport

View File

@@ -0,0 +1,131 @@
---
- name: Create parent network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: present
name: "{{ parent_network_name }}"
external: true
register: parent_network
- name: Create parent subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: present
name: "{{ parent_subnet_name }}"
network_name: "{{ parent_network_name }}"
cidr: 10.5.5.0/24
register: parent_subnet
- name: Create parent port
openstack.cloud.port:
cloud: "{{ cloud }}"
state: present
name: "{{ parent_port_name }}"
network: "{{ parent_network_name }}"
fixed_ips:
- ip_address: 10.5.5.69
register: parent_port
- name: Create subport network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: present
name: "{{ subport_network_name }}"
external: true
register: subport_network
- name: Create subport subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: present
name: "{{ subport_subnet_name }}"
network_name: "{{ subport_network_name }}"
cidr: 10.5.6.0/24
register: subport_subnet
- name: Create subport
openstack.cloud.port:
cloud: "{{ cloud }}"
state: present
name: "{{ subport_name }}"
network: "{{ subport_network_name }}"
fixed_ips:
- ip_address: 10.5.6.55
register: subport
- name: Create trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
register: trunk
- debug: var=trunk
- name: assert return values of trunk module
assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(trunk.trunk.keys())|length == 0
- name: Add subport to trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
sub_ports:
- port: "{{ subport_name }}"
segmentation_type: vlan
segmentation_id: 123
- name: Update subport from trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
sub_ports: []
- name: Delete trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: absent
name: "{{ trunk_name }}"
- name: Delete subport
openstack.cloud.port:
cloud: "{{ cloud }}"
state: absent
name: "{{ subport_name }}"
- name: Delete subport subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: absent
name: "{{ subport_subnet_name }}"
- name: Delete subport network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: absent
name: "{{ subport_network_name }}"
- name: Delete parent port
openstack.cloud.port:
cloud: "{{ cloud }}"
state: absent
name: "{{ parent_port_name }}"
- name: Delete parent subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: absent
name: "{{ parent_subnet_name }}"
- name: Delete parent network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: absent
name: "{{ parent_network_name }}"

View File

@@ -37,7 +37,7 @@
- name: Check info
assert:
that:
- info1.volumes | selectattr("id", "equalto", "{{ info.volumes.0.id }}") | list | length == 1
- info1.volumes | selectattr("id", "equalto", info.volumes.0.id) | list | length == 1
- info1.volumes.0.name == 'ansible_test'
- info1.volumes.0.status == None

View File

@@ -0,0 +1,9 @@
expected_fields:
- availability_zone
- binary
- disabled_reason
- host
- name
- state
- status
- updated_at

View File

@@ -0,0 +1,23 @@
---
- name: Fetch volume services
openstack.cloud.volume_service_info:
cloud: "{{ cloud }}"
register: volume_services
- name: Assert return values of volume_service_info module
assert:
that:
- volume_services.volume_services | length > 0
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(volume_services.volume_services[0].keys())|length == 0
- name: Fetch volume services with filters
openstack.cloud.volume_service_info:
cloud: "{{ cloud }}"
binary: "cinder-volume"
register: volume_services
- name: Assert return values of volume_service_info module
assert:
that:
- volume_services.volume_services | length > 0
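The `expected_fields | difference(...) | length == 0` assertion used throughout these role tasks is plain set subtraction: fields we expect minus fields actually returned must be empty, so the API may gain fields but never lose them. A standalone sketch of the same check (the sample payload is illustrative):

```python
expected_fields = ["binary", "host", "name", "state", "status"]

# A response shaped like one entry of volume_services; extra fields
# are tolerated, missing known fields are not.
returned = {
    "binary": "cinder-volume",
    "host": "node1",
    "name": "cinder-volume",
    "state": "up",
    "status": "enabled",
    "brand_new_field": "ok",  # newly introduced field: allowed
}

# Jinja2's difference filter behaves like Python set subtraction.
missing = set(expected_fields).difference(returned.keys())
assert len(missing) == 0, f"fields removed from API response: {missing}"
```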

View File

@@ -0,0 +1,10 @@
---
volume_backend_name: LVM_iSCSI
volume_type_name: test_type
volume_type_description: Test volume type
enc_provider_name: nova.volume.encryptors.luks.LuksEncryptor
enc_cipher: aes-xts-plain64
enc_control_location: front-end
enc_control_alt_location: back-end
enc_key_size: 256

View File

@@ -0,0 +1,85 @@
---
- name: Create volume type
openstack.cloud.volume_type:
name: "{{ volume_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
volume_backend_name: "{{ volume_backend_name }}"
description: "{{ volume_type_description }}"
is_public: true
register: the_result
- name: Check created volume type
vars:
the_volume: "{{ the_result.volume_type }}"
ansible.builtin.assert:
that:
- "'id' in the_result.volume_type"
- the_volume.description == volume_type_description
- the_volume.is_public == True
- the_volume.name == volume_type_name
- the_volume.extra_specs['volume_backend_name'] == volume_backend_name
success_msg: >-
Created volume: {{ the_result.volume_type.id }},
Name: {{ the_result.volume_type.name }},
Description: {{ the_result.volume_type.description }}
- name: Test, check idempotency
openstack.cloud.volume_type:
name: "{{ volume_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
volume_backend_name: "{{ volume_backend_name }}"
description: "{{ volume_type_description }}"
is_public: true
register: the_result
- name: Check result.changed is false
ansible.builtin.assert:
that:
- the_result.changed == false
success_msg: "Request with the same details lead to no changes"
- name: Add extra spec
openstack.cloud.volume_type:
cloud: "{{ cloud }}"
name: "{{ volume_type_name }}"
state: present
extra_specs:
volume_backend_name: "{{ volume_backend_name }}"
some_spec: fake_spec
description: "{{ volume_type_description }}"
is_public: true
register: the_result
- name: Check volume type extra spec
ansible.builtin.assert:
that:
- "'some_spec' in the_result.volume_type.extra_specs"
- the_result.volume_type.extra_specs["some_spec"] == "fake_spec"
success_msg: >-
New extra specs: {{ the_result.volume_type.extra_specs }}
# is_public update attempts using openstacksdk result in an unexpected
# attribute error... TODO: Find solution
#
# - name: Make volume type private
# openstack.cloud.volume_type:
# cloud: "{{ cloud }}"
# name: "{{ volume_type_alt_name }}"
# state: present
# extra_specs:
# volume_backend_name: "{{ volume_backend_name }}"
# # some_other_spec: test
# description: Changed 3rd time test volume type
# is_public: true
# register: the_result
- name: Volume encryption tests
ansible.builtin.include_tasks: volume_encryption.yml
- name: Delete volume type
openstack.cloud.volume_type:
cloud: "{{ cloud }}"
name: "{{ volume_type_name }}"
state: absent
register: the_result

View File

@@ -0,0 +1,67 @@
---
- name: Test, Volume type has no encryption
openstack.cloud.volume_type_info:
cloud: "{{ cloud }}"
name: "{{ volume_type_name }}"
register: the_result
- name: Check volume type has no encryption
ansible.builtin.assert:
that:
- the_result.encryption.id == None
success_msg: >-
Success: Volume type has no encryption at the moment
- name: Test, create volume type encryption
openstack.cloud.volume_type_encryption:
cloud: "{{ cloud }}"
volume_type: "{{ volume_type_name }}"
state: present
encryption_provider: "{{ enc_provider_name }}"
encryption_cipher: "{{ enc_cipher }}"
encryption_control_location: "{{ enc_control_location }}"
encryption_key_size: "{{ enc_key_size }}"
register: the_result
- name: Check volume type encryption
ansible.builtin.assert:
that:
- the_result.encryption.cipher == enc_cipher
- the_result.encryption.control_location == enc_control_location
- the_result.encryption.key_size == enc_key_size
- the_result.encryption.provider == enc_provider_name
success_msg: >-
Success: {{ the_result.encryption.encryption_id }}
- name: Test, update volume type encryption
openstack.cloud.volume_type_encryption:
cloud: "{{ cloud }}"
volume_type: "{{ volume_type_name }}"
state: present
encryption_provider: "{{ enc_provider_name }}"
encryption_cipher: "{{ enc_cipher }}"
encryption_control_location: "{{ enc_control_alt_location }}"
encryption_key_size: "{{ enc_key_size }}"
register: the_result
- name: Check volume type encryption change
ansible.builtin.assert:
that:
- the_result.encryption.control_location == enc_control_alt_location
success_msg: >-
New location: {{ the_result.encryption.control_location }}
- name: Test, delete volume type encryption
openstack.cloud.volume_type_encryption:
cloud: "{{ cloud }}"
volume_type: "{{ volume_type_name }}"
state: absent
register: the_result
- name: Get volume type details
openstack.cloud.volume_type_info:
cloud: "{{ cloud }}"
name: "{{ volume_type_name }}"
register: the_result
- name: Check volume type has no encryption
ansible.builtin.assert:
that:
- the_result.encryption.id == None
success_msg: >-
Success: Volume type has no encryption

View File

@@ -75,10 +75,10 @@ ansible-galaxy collection install --requirements-file ci/requirements.yml
if [ -z "$PIP_INSTALL" ]; then
tox -ebuild
ansible-galaxy collection install "$(find build_artifact/ -maxdepth 1 -name 'openstack-cloud-*')" --force
TEST_COLLECTIONS_PATHS=${HOME}/.ansible/collections:$ANSIBLE_COLLECTIONS_PATHS
TEST_COLLECTIONS_PATHS=${HOME}/.ansible/collections:$ANSIBLE_COLLECTIONS_PATH
else
pip freeze | grep ansible-collections-openstack
TEST_COLLECTIONS_PATHS=$VIRTUAL_ENV/share/ansible/collections:$ANSIBLE_COLLECTIONS_PATHS
TEST_COLLECTIONS_PATHS=$VIRTUAL_ENV/share/ansible/collections:$ANSIBLE_COLLECTIONS_PATH
fi
# We need to source the current tox environment so that Ansible will
@@ -129,7 +129,7 @@ cd ci/
# Run tests
set -o pipefail
# shellcheck disable=SC2086
ANSIBLE_COLLECTIONS_PATHS=$TEST_COLLECTIONS_PATHS ansible-playbook \
ANSIBLE_COLLECTIONS_PATH=$TEST_COLLECTIONS_PATHS ansible-playbook \
-vvv ./run-collection.yml \
-e "sdk_version=${SDK_VER} cloud=${CLOUD} cloud_alt=${CLOUD_ALT} ${ANSIBLE_VARS}" \
${tag_opt} 2>&1 | sudo tee /opt/stack/logs/test_output.log

View File

@@ -5,6 +5,7 @@
roles:
- { role: address_scope, tags: address_scope }
- { role: application_credential, tags: application_credential }
- { role: auth, tags: auth }
- { role: catalog_service, tags: catalog_service }
- { role: coe_cluster, tags: coe_cluster }
@@ -35,6 +36,8 @@
- { role: object, tags: object }
- { role: object_container, tags: object_container }
- { role: port, tags: port }
- { role: trait, tags: trait }
- { role: trunk, tags: trunk }
- { role: project, tags: project }
- { role: quota, tags: quota }
- { role: recordset, tags: recordset }
@@ -53,6 +56,8 @@
- { role: subnet, tags: subnet }
- { role: subnet_pool, tags: subnet_pool }
- { role: volume, tags: volume }
- { role: volume_type, tags: volume_type }
- { role: volume_backup, tags: volume_backup }
- { role: volume_service, tags: volume_service }
- { role: volume_snapshot, tags: volume_snapshot }
- { role: volume_type_access, tags: volume_type_access }

View File

@@ -32,4 +32,4 @@ build_ignore:
- .vscode
- ansible_collections_openstack.egg-info
- changelogs
version: 2.1.0
version: 2.4.1

View File

@@ -2,6 +2,7 @@ requires_ansible: ">=2.8"
action_groups:
openstack:
- address_scope
- application_credential
- auth
- baremetal_deploy_template
- baremetal_inspect
@@ -81,10 +82,12 @@ action_groups:
- subnet
- subnet_pool
- subnets_info
- trunk
- volume
- volume_backup
- volume_backup_info
- volume_info
- volume_service_info
- volume_snapshot
- volume_snapshot_info
- volume_type_access

View File

@@ -96,6 +96,12 @@ options:
only.
type: bool
default: false
only_ipv4:
description:
- Use only IPv4 addresses for ansible_host and ansible_ssh_host.
- Using I(only_ipv4) helps when running Ansible in an IPv4-only setup.
type: bool
default: false
show_all:
description:
- Whether all servers should be listed or not.
@@ -271,9 +277,9 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
if not attempt_to_read_cache or cache_needs_update:
self.display.vvvv('Retrieving servers from Openstack clouds')
clouds_yaml_path = self.get_option('clouds_yaml_path')
config_files = (
openstack.config.loader.CONFIG_FILES
+ ([clouds_yaml_path] if clouds_yaml_path else []))
config_files = openstack.config.loader.CONFIG_FILES
if clouds_yaml_path:
config_files += clouds_yaml_path
config = openstack.config.loader.OpenStackConfig(
config_files=config_files)
@@ -384,10 +390,17 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
if address['OS-EXT-IPS:type'] == 'floating'),
None)
fixed_ip = next(
(address['addr'] for address in addresses
if address['OS-EXT-IPS:type'] == 'fixed'),
None)
if self.get_option('only_ipv4'):
fixed_ip = next(
(address['addr'] for address in addresses
if (address['OS-EXT-IPS:type'] == 'fixed' and address['version'] == 4)),
None)
else:
fixed_ip = next(
(address['addr'] for address in addresses
if address['OS-EXT-IPS:type'] == 'fixed'),
None)
ip = floating_ip if floating_ip is not None and not self.get_option('private') else fixed_ip
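The `only_ipv4` branch above keeps the plugin's existing `next()`-over-generator pattern: the first matching address wins, `None` if nothing matches. A self-contained sketch of that selection logic (the sample address list is illustrative):

```python
addresses = [
    {"addr": "2001:db8::5", "OS-EXT-IPS:type": "fixed", "version": 6},
    {"addr": "10.0.0.5", "OS-EXT-IPS:type": "fixed", "version": 4},
    {"addr": "203.0.113.7", "OS-EXT-IPS:type": "floating", "version": 4},
]

def pick_fixed_ip(addresses, only_ipv4):
    # Same pattern as the inventory plugin: first matching address
    # wins, None if no address matches.
    if only_ipv4:
        return next((a["addr"] for a in addresses
                     if a["OS-EXT-IPS:type"] == "fixed"
                     and a["version"] == 4),
                    None)
    return next((a["addr"] for a in addresses
                 if a["OS-EXT-IPS:type"] == "fixed"), None)

print(pick_fixed_ip(addresses, only_ipv4=True))   # 10.0.0.5
print(pick_fixed_ip(addresses, only_ipv4=False))  # 2001:db8::5
```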

View File

@@ -183,7 +183,7 @@ def openstack_cloud_from_module(module, min_version=None, max_version=None):
" excluded.")
for param in (
'auth', 'region_name', 'validate_certs',
'ca_cert', 'client_key', 'api_timeout', 'auth_type'):
'ca_cert', 'client_cert', 'client_key', 'api_timeout', 'auth_type'):
if module.params[param] is not None:
module.fail_json(msg=fail_message.format(param=param))
# For 'interface' parameter, fail if we receive a non-default value
@@ -199,6 +199,7 @@ def openstack_cloud_from_module(module, min_version=None, max_version=None):
verify=module.params['validate_certs'],
cacert=module.params['ca_cert'],
key=module.params['client_key'],
cert=module.params['client_cert'],
api_timeout=module.params['api_timeout'],
interface=module.params['interface'],
)
@@ -358,7 +359,7 @@ class OpenStackModule:
" excluded.")
for param in (
'auth', 'region_name', 'validate_certs',
'ca_cert', 'client_key', 'api_timeout', 'auth_type'):
'ca_cert', 'client_cert', 'client_key', 'api_timeout', 'auth_type'):
if self.params[param] is not None:
self.fail_json(msg=fail_message.format(param=param))
# For 'interface' parameter, fail if we receive a non-default value
@@ -373,6 +374,7 @@ class OpenStackModule:
verify=self.params['validate_certs'],
cacert=self.params['ca_cert'],
key=self.params['client_key'],
cert=self.params['client_cert'],
api_timeout=self.params['api_timeout'],
interface=self.params['interface'],
)

View File

@@ -0,0 +1,332 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2024 Red Hat, Inc.
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: application_credential
short_description: Manage OpenStack Identity (Keystone) application credentials
author: OpenStack Ansible SIG
description:
- Create or delete an OpenStack Identity (Keystone) application credential.
- When the secret parameter is not set, a secret will be generated and returned in the response.
- Existing credentials cannot be modified, so running this module against an existing credential will result in it being deleted and recreated.
- This needs to be taken into account when the secret is generated, as the secret will change on each run of the module.
options:
name:
description:
- Name of the application credential.
required: true
type: str
description:
description:
- Application credential description.
type: str
secret:
description:
- Secret to use for authentication (if not provided, one will be generated).
type: str
roles:
description:
- Roles to authorize (name or ID).
type: list
elements: dict
suboptions:
name:
description: Name of role
type: str
id:
description: ID of role
type: str
domain_id:
description: Domain ID
type: str
expires_at:
description:
- Sets an expiration date for the application credential, in the format YYYY-mm-ddTHH:MM:SS (if not provided, the application credential will not expire).
type: str
unrestricted:
description:
- Enable the application credential to create and delete other application credentials and trusts (this is potentially dangerous behavior and is disabled by default).
default: false
type: bool
access_rules:
description:
- List of access rules, each containing a request method, path, and service.
type: list
elements: dict
suboptions:
service:
description: Name of service endpoint
type: str
required: true
path:
description: Path portion of access URL
type: str
required: true
method:
description: HTTP method
type: str
required: true
state:
description:
- Should the resource be present or absent.
- Application credentials are immutable, so running with C(state=present) against an existing credential will result in it being deleted and recreated.
choices: [present, absent]
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: Create application credential
openstack.cloud.application_credential:
cloud: mycloud
description: demodescription
name: democreds
state: present
- name: Create application credential with expiration, access rules and roles
openstack.cloud.application_credential:
cloud: mycloud
description: demodescription
name: democreds
access_rules:
- service: "compute"
path: "/v2.1/servers"
method: "GET"
expires_at: "2024-02-29T09:29:59"
roles:
- name: Member
state: present
- name: Delete application credential
openstack.cloud.application_credential:
cloud: mycloud
name: democreds
state: absent
"""
RETURN = r"""
application_credential:
description: Dictionary describing the application credential.
returned: On success when I(state) is C(present).
type: dict
contains:
id:
description: The ID of the application credential.
type: str
sample: "2e73d1b4f0cb473f920bd54dfce3c26d"
name:
description: The name of the application credential.
type: str
sample: "appcreds"
secret:
description: Secret to use for authentication
(if not provided, returns the generated value).
type: str
sample: "JxE7LajLY75NZgDH1hfu0N_6xS9hQ-Af40W3"
description:
description: A description of the application credential's purpose.
type: str
sample: "App credential"
expires_at:
description: The expiration time of the application credential in UTC,
if one was specified.
type: str
sample: "2024-02-29T09:29:59.000000"
project_id:
description: The ID of the project the application credential was created
for and that authentication requests using this application
credential will be scoped to.
type: str
sample: "4b633c451ac74233be3721a3635275e5"
roles:
description: A list of one or more roles that this application credential
has associated with its project. A token using this application
credential will have these same roles.
type: list
elements: dict
sample: [{"name": "Member"}]
access_rules:
description: A list of access_rules objects
type: list
elements: dict
sample:
- id: "edecb6c791d541a3b458199858470d20"
service: "compute"
path: "/v2.1/servers"
method: "GET"
unrestricted:
description: A flag indicating whether the application credential may be
used for creation or destruction of other application credentials
or trusts.
type: bool
cloud:
description: The current cloud config with the username and password replaced
with the ID and secret of the application credential. This
can be passed to the cloud parameter of other tasks, or written
to an openstack cloud config file.
returned: On success when I(state) is C(present).
type: dict
sample:
auth_type: "v3applicationcredential"
auth:
auth_url: "https://192.0.2.1/identity"
application_credential_secret: "JxE7LajLY75NZgDH1hfu0N_6xS9hQ-Af40W3"
application_credential_id: "3e73d1b4f0cb473f920bd54dfce3c26d"
"""
import copy
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
try:
import openstack.config
except ImportError:
pass
class IdentityApplicationCredentialModule(OpenStackModule):
argument_spec = dict(
name=dict(required=True),
description=dict(),
secret=dict(no_log=True),
roles=dict(
type="list",
elements="dict",
options=dict(name=dict(), id=dict(), domain_id=dict()),
),
expires_at=dict(),
unrestricted=dict(type="bool", default=False),
access_rules=dict(
type="list",
elements="dict",
options=dict(
service=dict(required=True),
path=dict(required=True),
method=dict(required=True),
),
),
state=dict(default="present", choices=["absent", "present"]),
)
module_kwargs = dict()
cloud = None
def openstack_cloud_from_module(self):
# Fetch cloud param before it is popped
self.cloud = self.params["cloud"]
return OpenStackModule.openstack_cloud_from_module(self)
def run(self):
state = self.params["state"]
creds = self._find()
if state == "present" and not creds:
# Create creds
creds = self._create().to_dict(computed=False)
cloud_config = self._get_cloud_config(creds)
self.exit_json(
changed=True, application_credential=creds, cloud=cloud_config
)
elif state == "present" and creds:
# Recreate immutable creds
self._delete(creds)
creds = self._create().to_dict(computed=False)
cloud_config = self._get_cloud_config(creds)
self.exit_json(
changed=True, application_credential=creds, cloud=cloud_config
)
elif state == "absent" and creds:
# Delete creds
self._delete(creds)
self.exit_json(changed=True)
elif state == "absent" and not creds:
# Do nothing
self.exit_json(changed=False)
def _get_user_id(self):
return self.conn.session.get_user_id()
def _create(self):
kwargs = dict(
(k, self.params[k])
for k in [
"name",
"description",
"secret",
"expires_at",
"unrestricted",
"access_rules",
]
if self.params[k] is not None
)
roles = self.params["roles"]
if roles:
kwroles = []
for role in roles:
kwroles.append(
dict(
(k, role[k])
for k in ["name", "id", "domain_id"]
if role[k] is not None
)
)
kwargs["roles"] = kwroles
kwargs["user"] = self._get_user_id()
creds = self.conn.identity.create_application_credential(**kwargs)
return creds
def _get_cloud_config(self, creds):
cloud_region = openstack.config.OpenStackConfig().get_one(self.cloud)
conf = cloud_region.config
cloud_config = copy.deepcopy(conf)
cloud_config["auth_type"] = "v3applicationcredential"
cloud_config["auth"] = {
"application_credential_id": creds["id"],
"application_credential_secret": creds["secret"],
"auth_url": conf["auth"]["auth_url"],
}
return cloud_config
def _delete(self, creds):
user = self._get_user_id()
self.conn.identity.delete_application_credential(user, creds.id)
def _find(self):
name = self.params["name"]
user = self._get_user_id()
return self.conn.identity.find_application_credential(
user=user, name_or_id=name
)
def main():
module = IdentityApplicationCredentialModule()
module()
if __name__ == "__main__":
main()
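The `_get_cloud_config` method above rewrites a clouds.yaml-style config so it authenticates with the freshly created application credential. The transformation itself is a small dict rewrite; a minimal standalone sketch (the sample config and credential values are illustrative):

```python
import copy

def to_app_credential_cloud(cloud_config, creds):
    """Mirror of _get_cloud_config: swap username/password auth for
    application-credential auth while keeping the rest of the config."""
    out = copy.deepcopy(cloud_config)
    out["auth_type"] = "v3applicationcredential"
    # The auth section is replaced wholesale, so username/password
    # never leak into the returned config.
    out["auth"] = {
        "application_credential_id": creds["id"],
        "application_credential_secret": creds["secret"],
        "auth_url": cloud_config["auth"]["auth_url"],
    }
    return out

conf = {"auth": {"auth_url": "https://192.0.2.1/identity",
                 "username": "demo", "password": "secret"}}
creds = {"id": "3e73d1b4f0cb473f920bd54dfce3c26d",
         "secret": "JxE7LajLY75NZgDH1hfu0N_6xS9hQ-Af40W3"}
new_cloud = to_app_credential_cloud(conf, creds)
print(new_cloud["auth_type"])  # v3applicationcredential
```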

View File

@@ -80,6 +80,10 @@ options:
- Magnum's default value for I(is_registry_enabled) is C(false).
type: bool
aliases: ['registry_enabled']
insecure_registry:
description:
- The URL pointing to the user's own private insecure docker registry.
type: str
is_tls_disabled:
description:
- Indicates whether the TLS should be disabled.
@@ -342,6 +346,7 @@ class COEClusterTemplateModule(OpenStackModule):
keypair_id=dict(),
labels=dict(type='raw'),
master_flavor_id=dict(),
insecure_registry=dict(),
is_master_lb_enabled=dict(type='bool', default=False,
aliases=['master_lb_enabled']),
is_public=dict(type='bool', aliases=['public']),
@@ -412,6 +417,7 @@ class COEClusterTemplateModule(OpenStackModule):
'fixed_subnet', 'flavor_id',
'http_proxy', 'https_proxy',
'image_id',
'insecure_registry',
'is_floating_ip_enabled',
'is_master_lb_enabled',
'is_public', 'is_registry_enabled',
@@ -427,6 +433,9 @@ class COEClusterTemplateModule(OpenStackModule):
if isinstance(labels, str):
labels = dict([tuple(kv.split(":"))
for kv in labels.split(",")])
elif isinstance(labels, dict):
labels = dict({str(k): str(v)
for k, v in labels.items()})
if labels != cluster_template['labels']:
non_updateable_keys.append('labels')
@@ -458,7 +467,7 @@ class COEClusterTemplateModule(OpenStackModule):
'external_network_id', 'fixed_network',
'fixed_subnet', 'flavor_id', 'http_proxy',
'https_proxy', 'image_id',
'is_floating_ip_enabled',
'insecure_registry', 'is_floating_ip_enabled',
'is_master_lb_enabled', 'is_public',
'is_registry_enabled', 'is_tls_disabled',
'keypair_id', 'master_flavor_id', 'name',

View File

@@ -56,7 +56,7 @@ options:
description:
- When I(update_password) is C(always), then the password will always be
updated.
- When I(update_password) is C(on_create), the the password is only set
- When I(update_password) is C(on_create), then the password is only set
when creating a user.
type: str
extends_documentation_fragment:

View File

@@ -100,8 +100,8 @@ options:
type: str
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
- Should the resource be present, absent or inactive.
choices: [present, absent, inactive]
default: present
type: str
tags:
@@ -122,6 +122,12 @@ options:
- I(volume) has been deprecated. Use module M(openstack.cloud.volume)
instead.
type: str
use_import:
description:
- Use the 'glance-direct' method of the interoperable image import mechanism.
- Should only be used when needed, such as when the user needs the cloud to transform the image format.
type: bool
extends_documentation_fragment:
- openstack.cloud.openstack
'''
@@ -147,7 +153,7 @@ EXAMPLES = r'''
RETURN = r'''
image:
description: Dictionary describing the Glance image.
returned: On success when I(state) is C(present).
returned: On success when I(state) is C(present) or C(inactive).
type: dict
contains:
id:
@@ -388,10 +394,11 @@ class ImageModule(OpenStackModule):
owner_domain=dict(aliases=['project_domain']),
properties=dict(type='dict', default={}),
ramdisk=dict(),
state=dict(default='present', choices=['absent', 'present']),
state=dict(default='present', choices=['absent', 'present', 'inactive']),
tags=dict(type='list', default=[], elements='str'),
visibility=dict(choices=['public', 'private', 'shared', 'community']),
volume=dict(),
use_import=dict(type='bool'),
)
module_kwargs = dict(
@@ -404,7 +411,8 @@ class ImageModule(OpenStackModule):
# resource attributes obtainable directly from params
attr_params = ('id', 'name', 'filename', 'disk_format',
'container_format', 'wait', 'timeout', 'is_public',
'is_protected', 'min_disk', 'min_ram', 'volume', 'tags')
'is_protected', 'min_disk', 'min_ram', 'volume', 'tags',
'use_import')
def _resolve_visibility(self):
"""resolve a visibility value to be compatible with older versions"""
@@ -485,7 +493,7 @@ class ImageModule(OpenStackModule):
if image_name_or_id:
image = self.conn.get_image(
image_name_or_id,
filters={(k, self.params[k])
filters={k: self.params[k]
for k in ['checksum'] if self.params[k] is not None})
changed = False
@@ -502,6 +510,10 @@ class ImageModule(OpenStackModule):
self.exit_json(changed=changed,
image=self._return_value(image.id))
if image['status'] == 'deactivated':
self.conn.image.reactivate_image(image)
changed = True
update_payload = self._build_update(image)
if update_payload:
@@ -517,6 +529,20 @@ class ImageModule(OpenStackModule):
wait=self.params['wait'],
timeout=self.params['timeout'])
changed = True
elif self.params['state'] == 'inactive' and image is not None:
if image['status'] == 'active':
self.conn.image.deactivate_image(image)
changed = True
update_payload = self._build_update(image)
if update_payload:
self.conn.image.update_image(image.id, **update_payload)
changed = True
self.exit_json(changed=changed, image=self._return_value(image.id))
self.exit_json(changed=changed)
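The new C(inactive) state above adds two transitions: a deactivated image is reactivated when C(state=present), and an active image is deactivated when C(state=inactive). The decision table can be sketched as a small helper (the function is illustrative, not part of the module):

```python
def image_transition(state, status):
    """Which Glance image call, if any, moves an image from its
    current status toward the requested state. Mirrors the module's
    reactivate/deactivate branches."""
    if state == "present" and status == "deactivated":
        return "reactivate_image"
    if state == "inactive" and status == "active":
        return "deactivate_image"
    return None  # already in the requested state

print(image_transition("inactive", "active"))    # deactivate_image
print(image_transition("present", "deactivated"))  # reactivate_image
print(image_transition("present", "active"))     # None
```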

View File

@@ -142,7 +142,7 @@ pool:
'''
EXAMPLES = r'''
- name: Create a load-balander pool
- name: Create a load-balancer pool
openstack.cloud.lb_pool:
cloud: mycloud
lb_algorithm: ROUND_ROBIN
@@ -151,7 +151,7 @@ EXAMPLES = r'''
protocol: HTTP
state: present
- name: Delete a load-balander pool
- name: Delete a load-balancer pool
openstack.cloud.lb_pool:
cloud: mycloud
name: test-pool

View File

@@ -30,6 +30,15 @@ options:
description:
- Whether this network is externally accessible.
type: bool
is_default:
description:
- Whether this network is the default network or not. This is only effective
with external networks.
type: bool
is_vlan_transparent:
description:
- Whether this network is vlan_transparent or not.
type: bool
state:
description:
- Indicate desired state of the resource.
@@ -190,6 +199,8 @@ class NetworkModule(OpenStackModule):
shared=dict(type='bool'),
admin_state_up=dict(type='bool'),
external=dict(type='bool'),
is_default=dict(type='bool'),
is_vlan_transparent=dict(type='bool'),
provider_physical_network=dict(),
provider_network_type=dict(),
provider_segmentation_id=dict(type='int'),
@@ -207,6 +218,8 @@ class NetworkModule(OpenStackModule):
shared = self.params['shared']
admin_state_up = self.params['admin_state_up']
external = self.params['external']
is_default = self.params['is_default']
is_vlan_transparent = self.params['is_vlan_transparent']
provider_physical_network = self.params['provider_physical_network']
provider_network_type = self.params['provider_network_type']
provider_segmentation_id = self.params['provider_segmentation_id']
@@ -244,6 +257,10 @@ class NetworkModule(OpenStackModule):
kwargs["admin_state_up"] = admin_state_up
if external is not None:
kwargs["is_router_external"] = external
if is_default is not None:
kwargs["is_default"] = is_default
if is_vlan_transparent is not None:
kwargs["is_vlan_transparent"] = is_vlan_transparent
if not net:
net = self.conn.network.create_network(name=name, **kwargs)

View File

@@ -65,6 +65,12 @@ options:
- Required when creating or updating a RBAC policy rule, ignored when
deleting a policy.
type: str
target_all_project:
description:
- Whether all projects are targeted for access.
- If this option is set to true, C(target_project_id) is ignored.
type: bool
default: 'false'
state:
description:
- Whether the RBAC rule should be C(present) or C(absent).
@@ -145,6 +151,8 @@ from ansible_collections.openstack.cloud.plugins.module_utils.openstack import O
class NeutronRBACPolicy(OpenStackModule):
all_project_symbol = '*'
argument_spec = dict(
action=dict(choices=['access_as_external', 'access_as_shared']),
id=dict(aliases=['policy_id']),
@@ -153,17 +161,22 @@ class NeutronRBACPolicy(OpenStackModule):
project_id=dict(),
state=dict(default='present', choices=['absent', 'present']),
target_project_id=dict(),
target_all_project=dict(type='bool', default=False),
)
module_kwargs = dict(
required_if=[
('state', 'present', ('target_project_id',)),
('state', 'present', ('target_project_id', 'target_all_project',), True),
('state', 'absent', ('id',)),
],
supports_check_mode=True,
)
def run(self):
target_all_project = self.params.get('target_all_project')
if target_all_project:
self.params['target_project_id'] = self.all_project_symbol
state = self.params['state']
policy = self._find()
@@ -262,7 +275,7 @@ class NeutronRBACPolicy(OpenStackModule):
return [p for p in policies
if any(p[k] == self.params[k]
for k in ['object_id', 'target_project_id'])]
for k in ['object_id'])]
def _update(self, policy, update):
attributes = update.get('attributes')

View File

@@ -295,8 +295,11 @@ class ObjectModule(OpenStackModule):
for k in ['data', 'filename']
if self.params[k] is not None)
return self.conn.object_store.create_object(container_name, name,
**kwargs)
object = self.conn.object_store.create_object(container_name, name,
**kwargs)
if not object:
object = self._find()
return object
def _delete(self, object):
container_name = self.params['container']

View File

@@ -269,7 +269,7 @@ class ContainerModule(OpenStackModule):
if metadata is not None:
# Swift metadata keys must be treated as case-insensitive
old_metadata = dict((k.lower(), v)
for k, v in (container.metadata or {}))
for k, v in (container.metadata or {}).items())
new_metadata = dict((k, v) for k, v in metadata.items()
if k.lower() not in old_metadata
or v != old_metadata[k.lower()])
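The hunk above adds the missing `.items()` call so that Swift metadata is compared case-insensitively. A minimal standalone sketch of that comparison (the metadata values here are illustrative, not from the module):

```python
def metadata_changes(old, new):
    # Swift normalizes metadata header names, so keys must be lowered
    # before comparing; only genuinely new or changed entries survive.
    old_lower = {k.lower(): v for k, v in (old or {}).items()}
    return {k: v for k, v in new.items()
            if k.lower() not in old_lower or v != old_lower[k.lower()]}

# 'Web-Index' and 'web-index' name the same header, so only the
# genuinely new 'web-error' entry counts as a change.
metadata_changes({'Web-Index': 'index.html'},
                 {'web-index': 'index.html', 'web-error': 'err.html'})
```

Without the lowering step, a key that merely differs in case would be re-sent on every run and the module would never converge to "unchanged".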

View File

@@ -138,10 +138,11 @@ options:
of I(no_security_groups): C(true)."
type: bool
default: 'false'
port_security_enabled:
is_port_security_enabled:
description:
- Whether to enable or disable the port security on the network.
type: bool
aliases: ['port_security_enabled']
security_groups:
description:
- Security group(s) ID(s) or name(s) associated with the port.
@@ -479,7 +480,7 @@ class PortModule(OpenStackModule):
name=dict(required=True),
network=dict(),
no_security_groups=dict(default=False, type='bool'),
port_security_enabled=dict(type='bool'),
is_port_security_enabled=dict(type='bool', aliases=['port_security_enabled']),
security_groups=dict(type='list', elements='str'),
state=dict(default='present', choices=['absent', 'present']),
)
@@ -510,7 +511,7 @@ class PortModule(OpenStackModule):
**(dict(network_id=network.id) if network else dict()))
if self.ansible.check_mode:
self.exit_json(changed=self._will_change(network, port, state))
self.exit_json(changed=self._will_change(port, state))
if state == 'present' and not port:
# create port
@@ -655,7 +656,7 @@ class PortModule(OpenStackModule):
'extra_dhcp_opts',
'is_admin_state_up',
'mac_address',
'port_security_enabled',
'is_port_security_enabled',
'fixed_ips',
'name']:
if self.params[k] is not None:

View File

@@ -38,6 +38,9 @@ options:
groups:
description: Number of groups that are allowed for the project
type: int
health_monitors:
description: Maximum number of health monitors that can be created.
type: int
injected_file_content_bytes:
description:
- Maximum file size in bytes.
@@ -61,6 +64,12 @@ options:
key_pairs:
description: Number of key pairs to allow.
type: int
l7_policies:
description: The maximum number of L7 policies you can create.
type: int
listeners:
description: The maximum number of listeners you can create.
type: int
load_balancers:
description: The maximum number of load balancers you can create.
type: int
@@ -68,6 +77,9 @@ options:
metadata_items:
description: Number of metadata items allowed per instance.
type: int
members:
description: Number of members allowed per load balancer.
type: int
name:
description: Name of the OpenStack Project to manage.
required: true
@@ -227,6 +239,33 @@ quotas:
server_groups:
description: Number of server groups to allow.
type: int
load_balancer:
description: Load balancer service quotas.
type: dict
contains:
health_monitors:
description: Maximum number of health monitors that can be
created.
type: int
l7_policies:
description: The maximum number of L7 policies you can
create.
type: int
listeners:
description: The maximum number of listeners you can create
type: int
load_balancers:
description: The maximum number of load balancers one can
create.
type: int
members:
description: The maximum number of members per
load balancer.
type: int
pools:
description: The maximum number of pools one can create.
type: int
network:
description: Network service quotas
type: dict
@@ -234,16 +273,9 @@ quotas:
floating_ips:
description: Number of floating IP's to allow.
type: int
load_balancers:
description: The maximum amount of load balancers one can
create
type: int
networks:
description: Number of networks to allow.
type: int
pools:
description: The maximum amount of pools one can create.
type: int
ports:
description: Number of network ports to allow; this needs
to be greater than the instances limit.
@@ -312,9 +344,7 @@ quotas:
server_groups: 10,
network:
floating_ips: 50,
load_balancers: 10,
networks: 10,
pools: 10,
ports: 160,
rbac_policies: 10,
routers: 10,
@@ -330,6 +360,13 @@ quotas:
per_volume_gigabytes: -1,
snapshots: 10,
volumes: 10,
load_balancer:
health_monitors: 10,
load_balancers: 10,
l7_policies: 10,
listeners: 10,
pools: 5,
members: 5,
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
@@ -337,9 +374,8 @@ from collections import defaultdict
class QuotaModule(OpenStackModule):
# TODO: Add missing network quota options 'check_limit', 'health_monitors',
# 'l7_policies', 'listeners' to argument_spec, DOCUMENTATION and
# RETURN docstrings
# TODO: Add missing network quota options 'check_limit'
# to argument_spec, DOCUMENTATION and RETURN docstrings
argument_spec = dict(
backup_gigabytes=dict(type='int'),
backups=dict(type='int'),
@@ -350,6 +386,7 @@ class QuotaModule(OpenStackModule):
'network_floating_ips']),
gigabytes=dict(type='int'),
groups=dict(type='int'),
health_monitors=dict(type='int'),
injected_file_content_bytes=dict(type='int',
aliases=['injected_file_size']),
injected_file_path_bytes=dict(type='int',
@@ -357,8 +394,11 @@ class QuotaModule(OpenStackModule):
injected_files=dict(type='int'),
instances=dict(type='int'),
key_pairs=dict(type='int', no_log=False),
l7_policies=dict(type='int'),
listeners=dict(type='int'),
load_balancers=dict(type='int', aliases=['loadbalancer']),
metadata_items=dict(type='int'),
members=dict(type='int'),
name=dict(required=True),
networks=dict(type='int', aliases=['network']),
per_volume_gigabytes=dict(type='int'),
@@ -382,9 +422,9 @@ class QuotaModule(OpenStackModule):
supports_check_mode=True
)
# Some attributes in quota resources don't exist in the api anymore, mostly
# compute quotas that were simply network proxies. This map allows marking
# them to be skipped.
# Some attributes in quota resources don't exist in the api anymore, e.g.
# compute quotas that were simply network proxies, and pre-Octavia network
# quotas. This map allows marking them to be skipped.
exclusion_map = {
'compute': {
# 'fixed_ips', # Available until Nova API version 2.35
@@ -397,24 +437,39 @@ class QuotaModule(OpenStackModule):
# 'injected_file_path_bytes', # Nova API
# 'injected_files', # version 2.56
},
'network': {'name'},
'load_balancer': {'name'},
'network': {
'name',
'l7_policies',
'load_balancers',
'loadbalancer',
'health_monitors',
'pools',
'listeners',
},
'volume': {'name'},
}
def _get_quotas(self, project):
quota = {}
if self.conn.has_service('block-storage'):
quota['volume'] = self.conn.block_storage.get_quota_set(project)
quota['volume'] = self.conn.block_storage.get_quota_set(project.id)
else:
self.warn('Block storage service aka volume service is not'
' supported by your cloud. Ignoring volume quotas.')
if self.conn.has_service('load-balancer'):
quota['load_balancer'] = self.conn.load_balancer.get_quota(
project.id)
else:
self.warn('Loadbalancer service is not supported by your'
' cloud. Ignoring loadbalancer quotas.')
if self.conn.has_service('network'):
quota['network'] = self.conn.network.get_quota(project.id)
else:
self.warn('Network service is not supported by your cloud.'
' Ignoring network quotas.')
quota['compute'] = self.conn.compute.get_quota_set(project.id)
return quota
@@ -452,7 +507,6 @@ class QuotaModule(OpenStackModule):
# Get current quota values
quotas = self._get_quotas(project)
changed = False
if self.ansible.check_mode:
@@ -468,6 +522,8 @@ class QuotaModule(OpenStackModule):
self.conn.network.delete_quota(project.id)
if 'volume' in quotas:
self.conn.block_storage.revert_quota_set(project)
if 'load_balancer' in quotas:
self.conn.load_balancer.delete_quota(project.id)
# Necessary since we can't tell what the default quotas are
quotas = self._get_quotas(project)
@@ -477,14 +533,18 @@ class QuotaModule(OpenStackModule):
if changes:
if 'volume' in changes:
self.conn.block_storage.update_quota_set(
quotas['volume'], **changes['volume'])
quotas['volume'] = self.conn.block_storage.update_quota_set(
project.id, **changes['volume'])
if 'compute' in changes:
self.conn.compute.update_quota_set(
quotas['compute'], **changes['compute'])
quotas['compute'] = self.conn.compute.update_quota_set(
project.id, **changes['compute'])
if 'network' in changes:
quotas['network'] = self.conn.network.update_quota(
project.id, **changes['network'])
if 'load_balancer' in changes:
quotas['load_balancer'] = \
self.conn.load_balancer.update_quota(
project.id, **changes['load_balancer'])
changed = True
quotas = {k: v.to_dict(computed=False) for k, v in quotas.items()}
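The quota hunks above route load-balancer options to Octavia while the `exclusion_map` keeps the pre-Octavia network quota keys from being resent to Neutron. A simplified sketch of that grouping logic (the map contents and parameter names here are illustrative, not the module's exact code):

```python
from collections import defaultdict

# Hypothetical exclusion map: keys that must never be written for a
# given service, mirroring the idea in the hunk above.
EXCLUSIONS = {'network': {'l7_policies', 'load_balancers'},
              'load_balancer': set()}

def build_changes(params, current):
    """Group requested quota values by the service that owns them,
    skipping excluded keys and unset (None) parameters."""
    changes = defaultdict(dict)
    for service, quota in current.items():
        excluded = EXCLUSIONS.get(service, set())
        for key, value in params.items():
            if value is None or key in excluded:
                continue
            if key in quota and quota[key] != value:
                changes[service][key] = value
    return dict(changes)

current = {'network': {'ports': 160, 'load_balancers': 10},
           'load_balancer': {'load_balancers': 10, 'pools': 5}}
params = {'ports': 200, 'load_balancers': 20, 'pools': None}
# 'load_balancers' is excluded for the network service but applied
# to the load_balancer (Octavia) service.
build_changes(params, current)
```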

View File

@@ -366,38 +366,36 @@ class RouterModule(OpenStackModule):
if 'ip_address' in p:
cur_fip_map[p['subnet_id']].add(p['ip_address'])
req_fip_map = defaultdict(set)
for p in external_fixed_ips:
if 'ip_address' in p:
req_fip_map[p['subnet_id']].add(p['ip_address'])
if external_fixed_ips is not None:
# User passed expected external_fixed_ips configuration.
# Build map of requested ips/subnets.
for p in external_fixed_ips:
if 'ip_address' in p:
req_fip_map[p['subnet_id']].add(p['ip_address'])
# Check if external ip addresses need to be added
for fip in external_fixed_ips:
subnet = fip['subnet_id']
ip = fip.get('ip_address', None)
if subnet in cur_fip_map:
if ip is not None and ip not in cur_fip_map[subnet]:
# mismatching ip for subnet
# Check if external ip addresses need to be added
for fip in external_fixed_ips:
subnet = fip['subnet_id']
ip = fip.get('ip_address', None)
if subnet in cur_fip_map:
if ip is not None and ip not in cur_fip_map[subnet]:
# mismatching ip for subnet
return True
else:
# adding ext ip with subnet 'subnet'
return True
else:
# adding ext ip with subnet 'subnet'
return True
# Check if external ip addresses need to be removed
for fip in cur_ext_fips:
subnet = fip['subnet_id']
ip = fip['ip_address']
if subnet in req_fip_map:
if ip not in req_fip_map[subnet]:
# removing ext ip with subnet (ip clash)
# Check if external ip addresses need to be removed.
for fip in cur_ext_fips:
subnet = fip['subnet_id']
ip = fip['ip_address']
if subnet in req_fip_map:
if ip not in req_fip_map[subnet]:
# removing ext ip with subnet (ip clash)
return True
else:
# removing ext ip with subnet
return True
else:
# removing ext ip with subnet
return True
if not external_fixed_ips and len(cur_ext_fips) > 1:
# No external fixed ips requested but
# router has several external fixed ips
return True
# Check if internal interfaces need update
if to_add or to_remove or missing_port_ids:
@@ -448,7 +446,8 @@ class RouterModule(OpenStackModule):
return kwargs
def _build_router_interface_config(self, filters):
external_fixed_ips = []
# Leave external_fixed_ips undefined so that it can be unset
external_fixed_ips = None
internal_ports_missing = []
internal_ifaces = []
@@ -459,9 +458,11 @@ class RouterModule(OpenStackModule):
.get('external_fixed_ips')
ext_fixed_ips = ext_fixed_ips or self.params['external_fixed_ips']
if ext_fixed_ips:
# User passed external_fixed_ips configuration. Initialize ips list
external_fixed_ips = []
for iface in ext_fixed_ips:
subnet = self.conn.network.find_subnet(
iface['subnet'], ignore_missing=False, **filters)
iface['subnet_id'], ignore_missing=False, **filters)
fip = dict(subnet_id=subnet.id)
if 'ip_address' in iface:
fip['ip_address'] = iface['ip_address']
@@ -615,9 +616,13 @@ class RouterModule(OpenStackModule):
router = self.conn.network.find_router(name, **query_filters)
network = None
if network_name_or_id:
# First try to find a network in the specified project.
network = self.conn.network.find_network(network_name_or_id,
ignore_missing=False,
**query_filters)
if not network:
# Fall back to a global search for the network.
network = self.conn.network.find_network(network_name_or_id,
ignore_missing=False)
# Validate and cache the subnet IDs so we can avoid duplicate checks
# and expensive API calls.
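The router hunks above rework the external-fixed-IP drift check so that an undefined `external_fixed_ips` (user passed nothing) is distinguished from an empty list (user wants them cleared). A condensed sketch of that comparison, assuming the same per-subnet map idea; subnet IDs and addresses below are illustrative:

```python
from collections import defaultdict

def needs_update(current, requested):
    """Return True if the router's external fixed IPs drift from the
    requested configuration. `requested` is None when the user did not
    pass external_fixed_ips at all."""
    if requested is None:
        # Nothing requested: only flag routers with several external IPs.
        return len(current) > 1
    req = defaultdict(set)
    for p in requested:
        if 'ip_address' in p:
            req[p['subnet_id']].add(p['ip_address'])
    cur = defaultdict(set)
    for p in current:
        if 'ip_address' in p:
            cur[p['subnet_id']].add(p['ip_address'])
    for p in requested:  # IPs that would need to be added
        ip = p.get('ip_address')
        if p['subnet_id'] not in cur or (ip and ip not in cur[p['subnet_id']]):
            return True
    for p in current:  # IPs that would need to be removed
        if p['subnet_id'] not in req or p['ip_address'] not in req[p['subnet_id']]:
            return True
    return False
```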

View File

@@ -113,6 +113,10 @@ options:
choices: [present, absent]
default: present
type: str
stateful:
description:
- Should the resource be stateful or stateless.
type: bool
extends_documentation_fragment:
- openstack.cloud.openstack
'''
@@ -201,6 +205,14 @@ EXAMPLES = r'''
name: foo
description: security group for foo servers
- name: Create a stateless security group
openstack.cloud.security_group:
cloud: mordred
state: present
stateful: false
name: foo
description: stateless security group for foo servers
- name: Update the existing 'foo' security group description
openstack.cloud.security_group:
cloud: mordred
@@ -260,6 +272,7 @@ class SecurityGroupModule(OpenStackModule):
),
),
state=dict(default='present', choices=['absent', 'present']),
stateful=dict(type="bool"),
)
module_kwargs = dict(
@@ -405,7 +418,7 @@ class SecurityGroupModule(OpenStackModule):
def _create(self):
kwargs = dict((k, self.params[k])
for k in ['description', 'name']
for k in ['description', 'name', 'stateful']
if self.params[k] is not None)
project_name_or_id = self.params['project']

View File

@@ -205,6 +205,12 @@ options:
choices: [present, absent]
default: present
type: str
tags:
description:
- A list of tags to be added to the instance.
type: list
elements: str
default: []
terminate_volume:
description:
- If C(true), delete volume when deleting the instance and if it has
@@ -756,6 +762,7 @@ server:
description: A list of associated tags.
returned: success
type: list
elements: str
task_state:
description: The task state of this server.
returned: success
@@ -825,6 +832,7 @@ class ServerModule(OpenStackModule):
scheduler_hints=dict(type='dict'),
security_groups=dict(default=[], type='list', elements='str'),
state=dict(default='present', choices=['absent', 'present']),
tags=dict(type='list', default=[], elements='str'),
terminate_volume=dict(default=False, type='bool'),
userdata=dict(),
volume_size=dict(type='int'),
@@ -890,7 +898,8 @@ class ServerModule(OpenStackModule):
return {
**self._build_update_ips(server),
**self._build_update_security_groups(server),
**self._build_update_server(server)}
**self._build_update_server(server),
**self._build_update_tags(server)}
def _build_update_ips(self, server):
auto_ip = self.params['auto_ip']
@@ -1030,9 +1039,16 @@ class ServerModule(OpenStackModule):
return update
def _build_update_tags(self, server):
required_tags = self.params.get('tags')
if set(server["tags"]) == set(required_tags):
return {}
update = dict(tags=required_tags)
return update
def _create(self):
for k in ['auto_ip', 'floating_ips', 'floating_ip_pools']:
if self.params[k] is not None \
if self.params[k] \
and self.params['wait'] is False:
# floating ip addresses will only be added if
# we wait until the server has been created
@@ -1072,7 +1088,7 @@ class ServerModule(OpenStackModule):
for k in ['auto_ip', 'availability_zone', 'boot_from_volume',
'boot_volume', 'config_drive', 'description', 'key_name',
'name', 'network', 'reuse_ips', 'scheduler_hints',
'security_groups', 'terminate_volume', 'timeout',
'security_groups', 'tags', 'terminate_volume', 'timeout',
'userdata', 'volume_size', 'volumes', 'wait']:
if self.params[k] is not None:
args[k] = self.params[k]
@@ -1091,10 +1107,20 @@ class ServerModule(OpenStackModule):
server.id,
**dict((k, self.params[k])
for k in ['wait', 'timeout', 'delete_ips']))
# Nova returns server for some time with the "DELETED" state. Our tests
# are not able to handle this, so wait for server to really disappear.
if self.params['wait']:
for count in self.sdk.utils.iterate_timeout(
timeout=self.params['timeout'],
message="Timeout waiting for server to be absent"
):
if self.conn.compute.find_server(server.id) is None:
break
def _update(self, server, update):
server = self._update_ips(server, update)
server = self._update_security_groups(server, update)
server = self._update_tags(server, update)
server = self._update_server(server, update)
# Refresh server attributes after security groups etc. have changed
#
@@ -1167,6 +1193,16 @@ class ServerModule(OpenStackModule):
# be postponed until all updates have been applied.
return server
def _update_tags(self, server, update):
tags = update.get('tags')
self.conn.compute.put(
"/servers/{server_id}/tags".format(server_id=server['id']),
json={"tags": tags},
microversion="2.26"
)
return server
def _parse_metadata(self, metadata):
if not metadata:
return {}
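The tag handling added above compares tags as sets, so ordering differences do not trigger an update, but any real difference replaces the whole tag set (via PUT on the tags sub-resource, per the commit message). A minimal sketch of the idempotency check:

```python
def build_update_tags(server_tags, required_tags):
    # Order-insensitive comparison; on mismatch the full required set
    # replaces whatever the server currently has.
    if set(server_tags) == set(required_tags):
        return {}
    return {'tags': required_tags}

build_update_tags(['web', 'prod'], ['prod', 'web'])  # reordered: no change
build_update_tags(['web'], ['web', 'db'])            # full replacement
```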

View File

@@ -377,7 +377,9 @@ class ServerInfoModule(OpenStackModule):
kwargs['name_or_id'] = self.params['name']
self.exit(changed=False,
servers=[server.to_dict(computed=False) for server in
servers=[server.to_dict(computed=False)
if hasattr(server, "to_dict") else server
for server in
self.conn.search_servers(**kwargs)])

View File

@@ -28,6 +28,12 @@ options:
- From the subnet pool the last IP that should be assigned to the
virtual machines.
type: str
allocation_pools:
description:
- List of allocation pools to assign to the subnet. Each element
consists of a 'start' and 'end' value.
type: list
elements: dict
cidr:
description:
- The CIDR representation of the subnet that should be assigned to
@@ -299,6 +305,7 @@ class SubnetModule(OpenStackModule):
dns_nameservers=dict(type='list', elements='str'),
allocation_pool_start=dict(),
allocation_pool_end=dict(),
allocation_pools=dict(type='list', elements='dict'),
host_routes=dict(type='list', elements='dict'),
ipv6_ra_mode=dict(choices=ipv6_mode_choices),
ipv6_address_mode=dict(choices=ipv6_mode_choices),
@@ -321,7 +328,9 @@ class SubnetModule(OpenStackModule):
('cidr', 'use_default_subnet_pool', 'subnet_pool'), True),
],
mutually_exclusive=[
('cidr', 'use_default_subnet_pool', 'subnet_pool')
('use_default_subnet_pool', 'subnet_pool'),
('allocation_pool_start', 'allocation_pools'),
('allocation_pool_end', 'allocation_pools')
]
)
@@ -367,7 +376,10 @@ class SubnetModule(OpenStackModule):
params['project_id'] = project.id
if subnet_pool:
params['subnet_pool_id'] = subnet_pool.id
params['allocation_pools'] = self._build_pool()
if self.params['allocation_pool_start']:
params['allocation_pools'] = self._build_pool()
else:
params['allocation_pools'] = self.params['allocation_pools']
params = self._add_extra_attrs(params)
params = {k: v for k, v in params.items() if v is not None}
return params
@@ -382,6 +394,10 @@ class SubnetModule(OpenStackModule):
params['host_routes'].sort(key=lambda r: sorted(r.items()))
subnet['host_routes'].sort(key=lambda r: sorted(r.items()))
if 'allocation_pools' in params:
params['allocation_pools'].sort(key=lambda r: sorted(r.items()))
subnet['allocation_pools'].sort(key=lambda r: sorted(r.items()))
updates = {k: params[k] for k in params if params[k] != subnet[k]}
if self.params['disable_gateway_ip'] and subnet.gateway_ip:
updates['gateway_ip'] = None
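The subnet hunk above sorts both sides of the `allocation_pools` comparison by their sorted `(key, value)` pairs, so neither pool order nor dict key order produces a spurious diff. A standalone sketch of that comparison (pool addresses are illustrative):

```python
def pools_differ(requested, current):
    # Sort each list of pool dicts by a canonical key so the comparison
    # is insensitive to ordering on either side.
    key = lambda pool: sorted(pool.items())
    return sorted(requested, key=key) != sorted(current, key=key)

# Same pool, different dict key order: no difference detected.
pools_differ([{'start': '10.0.0.10', 'end': '10.0.0.50'}],
             [{'end': '10.0.0.50', 'start': '10.0.0.10'}])
```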

plugins/modules/trait.py Normal file
View File

@@ -0,0 +1,110 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025, ScaleUp Technologies GmbH & Co. KG
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = '''
---
module: trait
short_description: Add/Delete a trait from OpenStack
author: OpenStack Ansible SIG
description:
- Add or delete a trait from OpenStack.
options:
id:
description:
- ID/Name of this trait
required: true
type: str
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = '''
# Creates a trait with the ID CUSTOM_WINDOWS_SPLA
- openstack.cloud.trait:
cloud: openstack
state: present
id: CUSTOM_WINDOWS_SPLA
'''
RETURN = '''
trait:
description: Dictionary describing the trait.
returned: On success when I(state) is 'present'
type: dict
contains:
id:
description: ID of the trait.
returned: success
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule)
class TraitModule(OpenStackModule):
argument_spec = dict(
id=dict(required=True),
state=dict(default='present',
choices=['absent', 'present']),
)
module_kwargs = dict(
supports_check_mode=True,
)
def _system_state_change(self, trait):
state = self.params['state']
if state == 'present' and not trait:
return True
if state == 'absent' and trait:
return True
return False
def run(self):
state = self.params['state']
id = self.params['id']
try:
trait = self.conn.placement.get_trait(id)
except self.sdk.exceptions.NotFoundException:
trait = None
if self.ansible.check_mode:
self.exit_json(changed=self._system_state_change(trait), trait=trait)
changed = False
if state == 'present':
if not trait:
trait = self.conn.placement.create_trait(id)
changed = True
self.exit_json(
changed=changed, trait=trait.to_dict(computed=False))
elif state == 'absent':
if trait:
self.conn.placement.delete_trait(id, ignore_missing=False)
self.exit_json(changed=True)
self.exit_json(changed=False)
def main():
module = TraitModule()
module()
if __name__ == '__main__':
main()

plugins/modules/trunk.py Normal file
View File

@@ -0,0 +1,306 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2024 Binero AB
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = '''
---
module: trunk
short_description: Add or delete trunks from an OpenStack cloud.
author: OpenStack Ansible SIG
description:
- Add or delete trunks from an OpenStack cloud.
options:
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
default: present
type: str
name:
description:
- Name that has to be given to the trunk.
- This port attribute cannot be updated.
type: str
required: true
port:
description:
- The name or ID of the port for the trunk.
type: str
required: false
sub_ports:
description:
- The sub ports on the trunk.
type: list
required: false
elements: dict
suboptions:
port:
description: The ID or name of the port.
type: str
segmentation_type:
description: The segmentation type to use.
type: str
segmentation_id:
description: The segmentation ID to use.
type: int
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = '''
# Create a trunk
- openstack.cloud.trunk:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: trunk1
port: port1
# Create a trunk with a subport
- openstack.cloud.trunk:
state: present
cloud: my-cloud
name: trunk1
port: port1
sub_ports:
- port: subport1
segmentation_type: vlan
segmentation_id: 123
# Remove a trunk
- openstack.cloud.trunk:
state: absent
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: trunk1
'''
RETURN = '''
trunk:
description: Dictionary describing the trunk.
type: dict
returned: On success when I(state) is C(present).
contains:
created_at:
description: Timestamp when the trunk was created.
returned: success
type: str
sample: "2022-02-03T13:28:25Z"
description:
description: The trunk description.
returned: success
type: str
id:
description: The trunk ID.
returned: success
type: str
sample: "3ec25c97-7052-4ab8-a8ba-92faf84148de"
is_admin_state_up:
description: |
The administrative state of the trunk, which is up C(True) or
down C(False).
returned: success
type: bool
sample: true
name:
description: The trunk name.
returned: success
type: str
sample: "trunk_name"
port_id:
description: The ID of the port for the trunk
returned: success
type: str
sample: "5ec25c97-7052-4ab8-a8ba-92faf84148df"
project_id:
description: The ID of the project that owns the trunk.
returned: success
type: str
sample: "aa1ede4f-3952-4131-aab6-3b8902268c7d"
revision_number:
description: The revision number of the resource.
returned: success
type: int
sample: 0
status:
description: The trunk status. Value is C(ACTIVE) or C(DOWN).
returned: success
type: str
sample: "ACTIVE"
sub_ports:
description: List of sub ports on the trunk.
returned: success
type: list
sample: []
tags:
description: The list of tags on the resource.
returned: success
type: list
sample: []
tenant_id:
description: Same as I(project_id). Deprecated.
returned: success
type: str
sample: "51fce036d7984ba6af4f6c849f65ef00"
updated_at:
description: Timestamp when the trunk was last updated.
returned: success
type: str
sample: "2022-02-03T13:28:25Z"
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class TrunkModule(OpenStackModule):
argument_spec = dict(
state=dict(default='present', choices=['absent', 'present']),
name=dict(required=True),
port=dict(),
sub_ports=dict(type='list', elements='dict'),
)
module_kwargs = dict(
required_if=[
('state', 'present', ('port',)),
],
supports_check_mode=True
)
def run(self):
port_name_or_id = self.params['port']
name_or_id = self.params['name']
state = self.params['state']
port = None
if port_name_or_id:
port = self.conn.network.find_port(
port_name_or_id, ignore_missing=False)
trunk = self.conn.network.find_trunk(name_or_id)
sub_ports = []
psp = self.params['sub_ports'] or []
for sp in psp:
subport = self.conn.network.find_port(
sp['port'], ignore_missing=False)
sub_ports.append(subport)
if self.ansible.check_mode:
self.exit_json(changed=self._will_change(state, trunk, sub_ports))
if state == 'present' and not trunk:
# create trunk
trunk = self._create(name_or_id, port)
self.exit_json(changed=True,
trunk=trunk.to_dict(computed=False))
elif state == 'present' and trunk:
# update trunk
update = self._build_update(trunk, sub_ports)
if update:
trunk = self._update(trunk, update)
self.exit_json(changed=bool(update),
trunk=trunk.to_dict(computed=False))
elif state == 'absent' and trunk:
# delete trunk
self._delete(trunk)
self.exit_json(changed=True)
elif state == 'absent' and not trunk:
# do nothing
self.exit_json(changed=False)
def _build_update(self, trunk, sub_ports):
add_sub_ports = []
del_sub_ports = []
for sp in sub_ports:
found = False
for tsp in trunk['sub_ports']:
if tsp['port_id'] == sp['id']:
found = True
break
if found is False:
psp = self.params['sub_ports'] or []
for k in psp:
if sp['name'] == k['port']:
spobj = {
'port_id': sp['id'],
'segmentation_type': k['segmentation_type'],
'segmentation_id': k['segmentation_id'],
}
add_sub_ports.append(spobj)
break
for tsp in trunk['sub_ports']:
found = False
for sp in sub_ports:
if sp['id'] == tsp['port_id']:
found = True
break
if found is False:
del_sub_ports.append({'port_id': tsp['port_id']})
update = {}
if len(add_sub_ports) > 0:
update['add_sub_ports'] = add_sub_ports
if len(del_sub_ports) > 0:
update['del_sub_ports'] = del_sub_ports
return update
def _create(self, name, port):
args = {}
args['name'] = name
args['port_id'] = port.id
return self.conn.network.create_trunk(**args)
def _delete(self, trunk):
sub_ports = []
for sp in trunk['sub_ports']:
sub_ports.append({'port_id': sp['port_id']})
self.conn.network.delete_trunk_subports(trunk.id, sub_ports)
self.conn.network.delete_trunk(trunk.id)
def _update(self, trunk, update):
if update.get('add_sub_ports', None):
self.conn.network.add_trunk_subports(
trunk, update['add_sub_ports'])
if update.get('del_sub_ports', None):
self.conn.network.delete_trunk_subports(
trunk, update['del_sub_ports'])
return self.conn.network.find_trunk(trunk.id)
def _will_change(self, state, trunk, sub_ports):
if state == 'present' and not trunk:
return True
elif state == 'present' and trunk:
return bool(self._build_update(trunk, sub_ports))
elif state == 'absent' and trunk:
return True
else:
return False
def main():
module = TrunkModule()
module()
if __name__ == '__main__':
main()
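The trunk module's `_build_update` above reconciles sub-ports in two passes: requested ports missing from the trunk are queued for `add_sub_ports`, and trunk ports absent from the request go to `del_sub_ports`. A simplified sketch using bare port IDs (the real module also carries segmentation type/ID for additions):

```python
def build_subport_update(trunk_subports, wanted_ids):
    """Diff the trunk's current sub-ports against the wanted port IDs.
    Returns only the keys that actually need changing."""
    current = {sp['port_id'] for sp in trunk_subports}
    wanted = set(wanted_ids)
    update = {}
    if wanted - current:
        update['add_sub_ports'] = sorted(wanted - current)
    if current - wanted:
        update['del_sub_ports'] = sorted(current - wanted)
    return update

build_subport_update([{'port_id': 'p1'}], ['p1', 'p2'])  # add only p2
build_subport_update([{'port_id': 'p1'}], [])            # remove p1
```

An empty result means the trunk already matches the request, which is what lets check mode report `changed=False`.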

View File

@@ -0,0 +1,103 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2023 Bitswalk, inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: volume_service_info
short_description: Fetch OpenStack Volume (Cinder) services
author: OpenStack Ansible SIG
description:
- Fetch OpenStack Volume (Cinder) services.
options:
binary:
description:
- Filter the service list result by binary name of the service.
type: str
host:
description:
- Filter the service list result by the host name.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Fetch all OpenStack Volume (Cinder) services
openstack.cloud.volume_service_info:
cloud: awesomecloud
- name: Fetch a subset of OpenStack Volume (Cinder) services
openstack.cloud.volume_service_info:
cloud: awesomecloud
binary: "cinder-volume"
host: "localhost"
'''
RETURN = r'''
volume_services:
description: List of dictionaries describing Volume (Cinder) services.
returned: always
type: list
elements: dict
contains:
availability_zone:
description: The availability zone name.
type: str
binary:
description: The binary name of the service.
type: str
disabled_reason:
description: The reason why the service is disabled
type: str
host:
description: The name of the host.
type: str
name:
description: Service name
type: str
state:
description: The state of the service. One of up or down.
type: str
status:
description: The status of the service. One of enabled or disabled.
type: str
updated_at:
description: The date and time when the resource was updated
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class VolumeServiceInfoModule(OpenStackModule):
argument_spec = dict(
binary=dict(),
host=dict(),
)
module_kwargs = dict(
supports_check_mode=True
)
def run(self):
kwargs = {k: self.params[k]
for k in ['binary', 'host']
if self.params[k] is not None}
volume_services = self.conn.block_storage.services(**kwargs)
self.exit_json(changed=False,
volume_services=[s.to_dict(computed=False)
for s in volume_services])
def main():
module = VolumeServiceInfoModule()
module()
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,241 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2023 Cleura AB
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: volume_type
short_description: Manage OpenStack volume type
author: OpenStack Ansible SIG
description:
- Add, remove or update volume types in OpenStack.
options:
name:
description:
- Volume type name or id.
required: true
type: str
description:
description:
- Description of the volume type.
type: str
extra_specs:
description:
- Dictionary of volume type properties (extra specs).
type: dict
is_public:
description:
- Make volume type accessible to the public.
- Can only be set during creation.
type: bool
state:
description:
- Indicate desired state of the resource.
- When I(state) is C(present), then I(is_public) is required.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Delete volume type by name
openstack.cloud.volume_type:
name: test_type
state: absent
- name: Delete volume type by id
openstack.cloud.volume_type:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
state: absent
- name: Create volume type
openstack.cloud.volume_type:
name: unencrypted_volume_type
state: present
extra_specs:
volume_backend_name: LVM_iSCSI
description: Unencrypted volume type
is_public: True
'''
RETURN = '''
volume_type:
description: Dictionary describing volume type
returned: On success when I(state) is 'present'
type: dict
contains:
name:
description: volume type name
returned: success
type: str
sample: test_type
extra_specs:
description: volume type extra parameters
returned: success
type: dict
sample: null
is_public:
description: whether the volume type is public
returned: success
type: bool
sample: True
description:
description: volume type description
returned: success
type: str
sample: Unencrypted volume type
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class VolumeTypeModule(OpenStackModule):
argument_spec = dict(
name=dict(type='str', required=True),
description=dict(type='str', required=False),
extra_specs=dict(type='dict', required=False),
is_public=dict(type='bool'),
state=dict(
type='str', default='present', choices=['absent', 'present']),
)
module_kwargs = dict(
required_if=[('state', 'present', ['is_public'])],
supports_check_mode=True,
)
@staticmethod
def _extract_result(details):
if details is not None:
return details.to_dict(computed=False)
return {}
def run(self):
state = self.params['state']
name_or_id = self.params['name']
volume_type = self.conn.block_storage.find_type(name_or_id)
if self.ansible.check_mode:
self.exit_json(
changed=self._will_change(state, volume_type))
if state == 'present' and not volume_type:
# Create type
create_result = self._create()
volume_type = self._extract_result(create_result)
self.exit_json(changed=True, volume_type=volume_type)
elif state == 'present' and volume_type:
# Update type
update = self._build_update(volume_type)
update_result = self._update(volume_type, update)
volume_type = self._extract_result(update_result)
self.exit_json(changed=bool(update), volume_type=volume_type)
elif state == 'absent' and volume_type:
# Delete type
self._delete(volume_type)
self.exit_json(changed=True)
def _build_update(self, volume_type):
return {
**self._build_update_extra_specs(volume_type),
**self._build_update_volume_type(volume_type)}
def _build_update_extra_specs(self, volume_type):
update = {}
old_extra_specs = volume_type['extra_specs']
new_extra_specs = self.params['extra_specs'] or {}
delete_extra_specs_keys = \
set(old_extra_specs.keys()) - set(new_extra_specs.keys())
if delete_extra_specs_keys:
update['delete_extra_specs_keys'] = delete_extra_specs_keys
stringified = {k: str(v) for k, v in new_extra_specs.items()}
if old_extra_specs != stringified:
update['create_extra_specs'] = new_extra_specs
return update
def _build_update_volume_type(self, volume_type):
update = {}
allowed_attributes = [
'is_public', 'description', 'name']
type_attributes = {
k: self.params[k]
for k in allowed_attributes
if k in self.params and self.params.get(k) is not None
and self.params.get(k) != volume_type.get(k)}
if type_attributes:
update['type_attributes'] = type_attributes
return update
def _create(self):
kwargs = {k: self.params[k]
for k in ['name', 'is_public', 'description', 'extra_specs']
if self.params.get(k) is not None}
volume_type = self.conn.block_storage.create_type(**kwargs)
return volume_type
def _delete(self, volume_type):
self.conn.block_storage.delete_type(volume_type.id)
def _update(self, volume_type, update):
if not update:
return volume_type
volume_type = self._update_volume_type(volume_type, update)
volume_type = self._update_extra_specs(volume_type, update)
return volume_type
def _update_extra_specs(self, volume_type, update):
delete_extra_specs_keys = update.get('delete_extra_specs_keys')
if delete_extra_specs_keys:
self.conn.block_storage.delete_type_extra_specs(
volume_type, delete_extra_specs_keys)
# refresh volume_type information
volume_type = self.conn.block_storage.find_type(volume_type.id)
create_extra_specs = update.get('create_extra_specs')
if create_extra_specs:
self.conn.block_storage.update_type_extra_specs(
volume_type, **create_extra_specs)
# refresh volume_type information
volume_type = self.conn.block_storage.find_type(volume_type.id)
return volume_type
def _update_volume_type(self, volume_type, update):
type_attributes = update.get('type_attributes')
if type_attributes:
updated_type = self.conn.block_storage.update_type(
volume_type, **type_attributes)
return updated_type
return volume_type
def _will_change(self, state, volume_type):
if state == 'present' and not volume_type:
return True
if state == 'present' and volume_type:
return bool(self._build_update(volume_type))
if state == 'absent' and volume_type:
return True
return False
def main():
module = VolumeTypeModule()
module()
if __name__ == '__main__':
main()
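
The extra-specs comparison in `_build_update_extra_specs` above stringifies new values before comparing, because the Block Storage API stores extra specs as strings. A self-contained sketch of that diffing logic (the standalone function name is hypothetical):

```python
def diff_extra_specs(old, new):
    """Compute the extra-specs update, mirroring the module's logic:
    keys missing from the new set are scheduled for deletion, and new
    values are stringified before comparison because the Block Storage
    API stores extra specs as strings."""
    update = {}
    delete_keys = set(old) - set(new)
    if delete_keys:
        update['delete_extra_specs_keys'] = delete_keys
    if old != {k: str(v) for k, v in new.items()}:
        update['create_extra_specs'] = new
    return update

# Equal after stringification and no removed keys -> empty update,
# which is what makes the module idempotent.
no_change = diff_extra_specs({'multiattach': 'True'}, {'multiattach': True})
```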


@@ -0,0 +1,233 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2023 Cleura AB
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: volume_type_encryption
short_description: Manage OpenStack volume type encryption
author: OpenStack Ansible SIG
description:
- Add, remove or update volume type encryption in OpenStack.
options:
volume_type:
description:
- Volume type name or id.
required: true
type: str
state:
description:
- Indicate desired state of the resource.
- When I(state) is C(present), all encryption options are required.
choices: ['present', 'absent']
default: present
type: str
encryption_provider:
description:
- class that provides encryption support for the volume type
- admin only
type: str
encryption_cipher:
description:
- encryption algorithm or mode
- admin only
type: str
encryption_control_location:
description:
- Set the notional service where the encryption is performed
- admin only
choices: ['front-end', 'back-end']
type: str
encryption_key_size:
description:
- Set the size of the encryption key of this volume type
- admin only
choices: [128, 256, 512]
type: int
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Create volume type encryption
openstack.cloud.volume_type_encryption:
volume_type: test_type
state: present
encryption_provider: nova.volume.encryptors.luks.LuksEncryptor
encryption_cipher: aes-xts-plain64
encryption_control_location: front-end
encryption_key_size: 256
- name: Delete volume type encryption
openstack.cloud.volume_type_encryption:
volume_type: test_type
state: absent
register: the_result
'''
RETURN = '''
encryption:
description: Dictionary describing volume type encryption
returned: On success when I(state) is 'present'
type: dict
contains:
cipher:
description: encryption cipher
returned: success
type: str
sample: aes-xts-plain64
control_location:
description: encryption location
returned: success
type: str
sample: front-end
created_at:
description: Resource creation date and time
returned: success
type: str
sample: "2023-08-04T10:23:03.000000"
deleted:
description: Whether the resource was deleted
returned: success
type: bool
sample: false
deleted_at:
description: Resource delete date and time
returned: success
type: str
sample: null
encryption_id:
description: UUID of the volume type encryption
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
id:
description: Alias to encryption_id
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
key_size:
description: Size of the encryption key
returned: success
type: int
sample: 256
provider:
description: Encryption provider
returned: success
type: str
sample: "nova.volume.encryptors.luks.LuksEncryptor"
updated_at:
description: Resource last update date and time
returned: success
type: str
sample: null
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class VolumeTypeModule(OpenStackModule):
argument_spec = dict(
volume_type=dict(type='str', required=True),
state=dict(
type='str', default='present', choices=['absent', 'present']),
encryption_provider=dict(type='str', required=False),
encryption_cipher=dict(type='str', required=False),
encryption_control_location=dict(
type='str', choices=['front-end', 'back-end'], required=False),
encryption_key_size=dict(
type='int', choices=[128, 256, 512], required=False),
)
module_kwargs = dict(
required_if=[('state', 'present', [
'encryption_provider', 'encryption_cipher',
'encryption_control_location', 'encryption_key_size'])],
supports_check_mode=True,
)
@staticmethod
def _extract_result(details):
if details is not None:
return details.to_dict(computed=False)
return {}
def run(self):
state = self.params['state']
name = self.params['volume_type']
volume_type = self.conn.block_storage.find_type(name)
# TODO: Add get type_encryption by id
type_encryption = self.conn.block_storage.get_type_encryption(
volume_type.id)
encryption_id = type_encryption.get('encryption_id')
if self.ansible.check_mode:
self.exit_json(
changed=self._will_change(state, type_encryption))
if state == 'present':
update = self._build_update_type_encryption(type_encryption)
if not update:
# No change is required
self.exit_json(changed=False)
if not encryption_id: # Create new type encryption
result = self.conn.block_storage.create_type_encryption(
volume_type, **update)
else: # Update existing type encryption
result = self.conn.block_storage.update_type_encryption(
encryption=type_encryption, **update)
encryption = self._extract_result(result)
self.exit_json(changed=bool(update), encryption=encryption)
elif encryption_id is not None:
# absent state requires type encryption delete
self.conn.block_storage.delete_type_encryption(type_encryption)
self.exit_json(changed=True)
def _build_update_type_encryption(self, type_encryption):
attributes_map = {
'encryption_provider': 'provider',
'encryption_cipher': 'cipher',
'encryption_key_size': 'key_size',
'encryption_control_location': 'control_location'}
encryption_attributes = {
attributes_map[k]: self.params[k]
for k in self.params
if k in attributes_map.keys() and self.params.get(k) is not None
and self.params.get(k) != type_encryption.get(attributes_map[k])}
return encryption_attributes
def _update_type_encryption(self, type_encryption, update):
if update:
updated_type = self.conn.block_storage.update_type_encryption(
encryption=type_encryption,
**update)
return updated_type
return {}
def _will_change(self, state, type_encryption):
encryption_id = type_encryption.get('encryption_id')
if state == 'present' and not encryption_id:
return True
if state == 'present' and encryption_id is not None:
return bool(self._build_update_type_encryption(type_encryption))
if state == 'absent' and encryption_id is not None:
return True
return False
def main():
module = VolumeTypeModule()
module()
if __name__ == '__main__':
main()
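
`_build_update_type_encryption` above maps module option names (e.g. `encryption_cipher`) to the API field names (e.g. `cipher`) while keeping only values that are set and differ from the current resource. A minimal sketch of that pattern (the standalone function name is hypothetical):

```python
def changed_attributes(params, current, attr_map):
    """Map module option names to API field names, keeping only values
    that are set and differ from the current resource state.

    Returning an empty dict means no API call is needed, which is what
    drives the module's idempotency and its check-mode answer."""
    return {
        api_key: params[opt]
        for opt, api_key in attr_map.items()
        if params.get(opt) is not None and params[opt] != current.get(api_key)
    }
```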


@@ -0,0 +1,175 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2023 Cleura AB
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: volume_type_info
short_description: Get OpenStack volume type details
author: OpenStack Ansible SIG
description:
- Get volume type details in OpenStack.
- Get volume type encryption details in OpenStack.
options:
name:
description:
- Volume type name or id.
required: true
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Get volume type details
openstack.cloud.volume_type_info:
name: test_type
- name: Get volume type details by id
openstack.cloud.volume_type_info:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
'''
RETURN = '''
access_project_ids:
description:
- List of project IDs allowed to access volume type
- Public volume types return a null value, as access lists do not apply to them.
returned: always
type: list
elements: str
volume_type:
description: Dictionary describing volume type
returned: always
type: dict
contains:
id:
description: volume_type uuid
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
name:
description: volume type name
returned: success
type: str
sample: test_type
extra_specs:
description: volume type extra parameters
returned: success
type: dict
sample: null
is_public:
description: whether the volume type is public
returned: success
type: bool
sample: True
description:
description: volume type description
returned: success
type: str
sample: Unencrypted volume type
encryption:
description: Dictionary describing volume type encryption
returned: always
type: dict
contains:
cipher:
description: encryption cipher
returned: success
type: str
sample: aes-xts-plain64
control_location:
description: encryption location
returned: success
type: str
sample: front-end
created_at:
description: Resource creation date and time
returned: success
type: str
sample: "2023-08-04T10:23:03.000000"
deleted:
description: Whether the resource was deleted
returned: success
type: bool
sample: false
deleted_at:
description: Resource delete date and time
returned: success
type: str
sample: null
encryption_id:
description: UUID of the volume type encryption
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
id:
description: Alias to encryption_id
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
key_size:
description: Size of the encryption key
returned: success
type: int
sample: 256
provider:
description: Encryption provider
returned: success
type: str
sample: "nova.volume.encryptors.luks.LuksEncryptor"
updated_at:
description: Resource last update date and time
returned: success
type: str
sample: null
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class VolumeTypeModule(OpenStackModule):
argument_spec = dict(
name=dict(type='str', required=True)
)
module_kwargs = dict(
supports_check_mode=True,
)
@staticmethod
def _extract_result(details):
if details is not None:
return details.to_dict(computed=False)
return {}
def run(self):
name_or_id = self.params['name']
volume_type = self.conn.block_storage.find_type(name_or_id)
type_encryption = self.conn.block_storage.get_type_encryption(
volume_type.id)
if volume_type.is_public:
type_access = None
else:
type_access = [
proj['project_id']
for proj in self.conn.block_storage.get_type_access(
volume_type.id)]
self.exit_json(
changed=False,
volume_type=self._extract_result(volume_type),
encryption=self._extract_result(type_encryption),
access_project_ids=type_access)
def main():
module = VolumeTypeModule()
module()
if __name__ == '__main__':
main()
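
As documented above, `access_project_ids` is null for public volume types, since access lists only apply to private types. A sketch of that extraction step (the standalone function name is hypothetical):

```python
def access_project_ids(is_public, access_entries):
    """Return the list of project IDs allowed to use a private volume
    type, or None for a public type, where access lists do not apply.

    `access_entries` mirrors the dicts yielded by the SDK's
    get_type_access() call, each carrying a 'project_id' key."""
    if is_public:
        return None
    return [entry['project_id'] for entry in access_entries]
```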


@@ -4,5 +4,6 @@
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)
setup_requires=['pbr', 'setuptools'],
pbr=True,
py_modules=[])


@@ -0,0 +1,12 @@
ansible-core>=2.16.0,<2.17.0
flake8
galaxy-importer
openstacksdk
pycodestyle
pylint
rstcheck
ruamel.yaml
tox
voluptuous
yamllint
setuptools


@@ -0,0 +1,12 @@
ansible-core>=2.18.0,<2.19.0
flake8
galaxy-importer
openstacksdk
pycodestyle
pylint
rstcheck
ruamel.yaml
tox
voluptuous
yamllint
setuptools


@@ -1,31 +0,0 @@
# (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
#
# Compat for python2.7
#
# One unittest needs to import builtins via __import__() so we need to have
# the string that represents it
try:
import __builtin__ # noqa
except ImportError:
BUILTINS = 'builtins'
else:
BUILTINS = '__builtin__'


@@ -1,120 +0,0 @@
# (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
'''
Compat module for Python3.x's unittest.mock module
'''
import sys
# Python 2.7
# Note: Could use the pypi mock library on python3.x as well as python2.x. It
# is the same as the python3 stdlib mock library
try:
# Allow wildcard import because we really do want to import all of mock's
# symbols into this compat shim
# pylint: disable=wildcard-import,unused-wildcard-import
from unittest.mock import * # noqa
except ImportError:
# Python 2
# pylint: disable=wildcard-import,unused-wildcard-import
try:
from mock import * # noqa
except ImportError:
print('You need the mock library installed on python2.x to run tests')
# Prior to 3.4.4, mock_open cannot handle binary read_data
if sys.version_info >= (3,) and sys.version_info < (3, 4, 4):
file_spec = None
def _iterate_read_data(read_data):
# Helper for mock_open:
# Retrieve lines from read_data via a generator so that separate calls to
# readline, read, and readlines are properly interleaved
sep = b'\n' if isinstance(read_data, bytes) else '\n'
data_as_list = [li + sep for li in read_data.split(sep)]
if data_as_list[-1] == sep:
# If the last line ended in a newline, the list comprehension will have an
# extra entry that's just a newline. Remove this.
data_as_list = data_as_list[:-1]
else:
# If there wasn't an extra newline by itself, then the file being
# emulated doesn't have a newline to end the last line, so remove the
# newline that our naive format() added
data_as_list[-1] = data_as_list[-1][:-1]
for line in data_as_list:
yield line
def mock_open(mock=None, read_data=''):
"""
A helper function to create a mock to replace the use of `open`. It works
for `open` called directly or used as a context manager.
The `mock` argument is the mock object to configure. If `None` (the
default) then a `MagicMock` will be created for you, with the API limited
to methods or attributes available on standard file handles.
`read_data` is a string for the `read`, `readline`, and `readlines` methods
of the file handle to return. This is an empty string by default.
"""
def _readlines_side_effect(*args, **kwargs):
if handle.readlines.return_value is not None:
return handle.readlines.return_value
return list(_data)
def _read_side_effect(*args, **kwargs):
if handle.read.return_value is not None:
return handle.read.return_value
return type(read_data)().join(_data)
def _readline_side_effect():
if handle.readline.return_value is not None:
while True:
yield handle.readline.return_value
for line in _data:
yield line
global file_spec
if file_spec is None:
import _io # noqa
file_spec = list(set(dir(_io.TextIOWrapper)).union(set(dir(_io.BytesIO))))
if mock is None:
mock = MagicMock(name='open', spec=open) # noqa
handle = MagicMock(spec=file_spec) # noqa
handle.__enter__.return_value = handle
_data = _iterate_read_data(read_data)
handle.write.return_value = None
handle.read.return_value = None
handle.readline.return_value = None
handle.readlines.return_value = None
handle.read.side_effect = _read_side_effect
handle.readline.side_effect = _readline_side_effect()
handle.readlines.side_effect = _readlines_side_effect
mock.return_value = handle
return mock


@@ -1,36 +0,0 @@
# (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
'''
Compat module for Python2.7's unittest module
'''
import sys
# Allow wildcard import because we really do want to import all of
# unittests's symbols into this compat shim
# pylint: disable=wildcard-import,unused-wildcard-import
if sys.version_info < (2, 7):
try:
# Need unittest2 on python2.6
from unittest2 import * # noqa
except ImportError:
print('You need unittest2 installed on python2.6.x to run tests')
else:
from unittest import * # noqa


@@ -28,7 +28,7 @@ class DictDataLoader(DataLoader):
def __init__(self, file_mapping=None):
file_mapping = {} if file_mapping is None else file_mapping
assert type(file_mapping) == dict
assert isinstance(file_mapping, dict)
super(DictDataLoader, self).__init__()


@@ -1,4 +1,5 @@
from ansible_collections.openstack.cloud.tests.unit.compat.mock import MagicMock
from unittest.mock import MagicMock
from ansible.utils.path import unfrackpath


@@ -20,10 +20,10 @@
import sys
import json
import unittest
from contextlib import contextmanager
from io import BytesIO, StringIO
from ansible_collections.openstack.cloud.tests.unit.compat import unittest
from ansible.module_utils.six import PY3
from ansible.module_utils._text import to_bytes


@@ -1,7 +1,7 @@
import collections
import inspect
import mock
import pytest
from unittest import mock
import yaml
from ansible.module_utils.six import string_types
@@ -220,3 +220,45 @@ class TestCreateServer(object):
os_server._create_server(self.module, self.cloud)
assert 'missing_network' in self.module.fail_json.call_args[1]['msg']
def test_create_server_auto_ip_wait(self):
'''
- openstack.cloud.server:
image: cirros
auto_ip: true
wait: false
nics:
- net-name: network1
'''
with pytest.raises(AnsibleFail):
os_server._create_server(self.module, self.cloud)
assert 'auto_ip' in self.module.fail_json.call_args[1]['msg']
def test_create_server_floating_ips_wait(self):
'''
- openstack.cloud.server:
image: cirros
floating_ips: ['0.0.0.0']
wait: false
nics:
- net-name: network1
'''
with pytest.raises(AnsibleFail):
os_server._create_server(self.module, self.cloud)
assert 'floating_ips' in self.module.fail_json.call_args[1]['msg']
def test_create_server_floating_ip_pools_wait(self):
'''
- openstack.cloud.server:
image: cirros
floating_ip_pools: ['name-of-pool']
wait: false
nics:
- net-name: network1
'''
with pytest.raises(AnsibleFail):
os_server._create_server(self.module, self.cloud)
assert 'floating_ip_pools' in self.module.fail_json.call_args[1]['msg']


@@ -1,7 +1,7 @@
import json
import unittest
from unittest.mock import patch
from ansible_collections.openstack.cloud.tests.unit.compat import unittest
from ansible_collections.openstack.cloud.tests.unit.compat.mock import patch
from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes


@@ -28,9 +28,19 @@ cp -a ${TOXDIR}/{plugins,meta,tests,docs} ${ANSIBLE_COLLECTIONS_PATH}/ansible_co
cd ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud/
echo "Running ansible-test with version:"
ansible --version
# Ansible-core 2.17 dropped support for the metaclass-boilerplate and future-import-boilerplate tests.
# TODO(mgoddard): Drop this workaround when ansible-core 2.16 is EOL.
ANSIBLE_VER=$(python3 -m pip show ansible-core | awk '$1 == "Version:" { print $2 }')
ANSIBLE_MAJOR_VER=$(echo "$ANSIBLE_VER" | sed 's/^\([0-9]\)\..*/\1/g')
SKIP_TESTS=""
if [[ $ANSIBLE_MAJOR_VER -eq 2 ]]; then
ANSIBLE_MINOR_VER=$(echo "$ANSIBLE_VER" | sed 's/^2\.\([^\.]*\)\..*/\1/g')
if [[ $ANSIBLE_MINOR_VER -le 16 ]]; then
SKIP_TESTS="--skip-test metaclass-boilerplate --skip-test future-import-boilerplate"
fi
fi
ansible-test sanity -v \
--venv \
--python ${PY_VER} \
--skip-test metaclass-boilerplate \
--skip-test future-import-boilerplate \
$SKIP_TESTS \
plugins/ docs/ meta/
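
The sed/awk shell logic above decides whether the boilerplate sanity tests must still be skipped, i.e. for ansible-core 2.16 and earlier. The same version check, sketched in Python (function name hypothetical):

```python
def skip_boilerplate_tests(version):
    """Return True when the metaclass/future-import boilerplate sanity
    tests must be skipped, i.e. for ansible-core 2.16 and earlier.
    Mirrors the sed-based major/minor comparison in the shell script."""
    major, minor = (int(part) for part in version.split('.')[:2])
    return (major, minor) <= (2, 16)
```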


@@ -36,13 +36,14 @@ deps =
galaxy-importer
pbr
ruamel.yaml
setuptools
commands =
python {toxinidir}/tools/build.py
ansible --version
ansible-galaxy collection build --force {toxinidir} --output-path {toxinidir}/build_artifact
bash {toxinidir}/tools/check-import.sh {toxinidir}
[testenv:linters_{2_9,2_11,2_12,latest}]
[testenv:linters_{2_9,2_11,2_12,2_16,2_18,latest}]
allowlist_externals = bash
commands =
{[testenv:build]commands}
@@ -56,6 +57,8 @@ deps =
linters_2_9: -r{toxinidir}/tests/requirements-ansible-2.9.txt
linters_2_11: -r{toxinidir}/tests/requirements-ansible-2.11.txt
linters_2_12: -r{toxinidir}/tests/requirements-ansible-2.12.txt
linters_2_16: -r{toxinidir}/tests/requirements-ansible-2.16.txt
linters_2_18: -r{toxinidir}/tests/requirements-ansible-2.18.txt
passenv = *
[flake8]
@@ -69,7 +72,7 @@ ignore = W503,H4,E501,E402,H301
show-source = True
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,ansible_collections
[testenv:ansible_{2_9,2_11,2_12,latest}]
[testenv:ansible_{2_9,2_11,2_12,2_16,2_18,latest}]
allowlist_externals = bash
commands =
bash {toxinidir}/ci/run-ansible-tests-collection.sh -e {envdir} {posargs}
@@ -79,6 +82,8 @@ deps =
ansible_2_9: -r{toxinidir}/tests/requirements-ansible-2.9.txt
ansible_2_11: -r{toxinidir}/tests/requirements-ansible-2.11.txt
ansible_2_12: -r{toxinidir}/tests/requirements-ansible-2.12.txt
ansible_2_16: -r{toxinidir}/tests/requirements-ansible-2.16.txt
ansible_2_18: -r{toxinidir}/tests/requirements-ansible-2.18.txt
# Need to pass some env vars for the Ansible playbooks
passenv =
HOME