Update CI - Continue work from #195 (#202)

* Upgrade Ansible and OKD versions for CI

* Use ubi9 and fix sanity

* Use correct pip install

* Try using quotes

* Ensure python3.9

* Upgrade ansible and molecule versions

* Remove DeploymentConfig

DeploymentConfigs are deprecated and now appear to cause idempotence
problems. Replacing them with Deployments fixes this.
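For reference, a minimal sketch of the kind of replacement involved — an `apps/v1` Deployment standing in for a deprecated DeploymentConfig (resource names, labels, and the container command here are illustrative, not taken from this repo):

```yaml
# Hypothetical example: the selector becomes a required matchLabels block,
# and DeploymentConfig-only fields (triggers, DeploymentConfig strategy,
# automatic image-change hooks) are dropped or re-expressed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: python:3.9-alpine
          command: ["/bin/sh", "-c", "while true; do date; sleep 15; done"]
```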

* Attempt to fix ldap integration tests

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Move sanity and unit tests to GH actions

* First round of sanity fixes

* Add kubernetes.core collection as sanity requirement

* Add ignore-2.16.txt

* Attempt to fix units

* Add ignore-2.17

* Attempt to fix unit tests

* Add pytest-ansible to test-requirements.txt

* Add changelog fragment

* Add workflow for ansible-lint

* Apply black

* Fix linters

* Add # fmt: skip

* Yet another round of linting

* Yet another round of linting

* Remove setup.cfg

* Revert #fmt

* Use ansible-core 2.14

* Cleanup ansible-lint ignores

* Try using service instead of pod IP

* Fix typo

* Actually use the correct port

* See if NetworkPolicy is preventing connection

* Use Pod internal IP

* Fix `adm prune auth` roles syntax

* Add retry steps

* Fix the openshift_builds target

* Add the --force-with-deps flag when building the downstream collection

* Remove yamllint from the tox linters, bump the minimum supported Python version to 3.9, and drop support for ansible-core < 2.14
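The new ansible-core floor is declared in collection metadata; as the `meta/runtime.yml` hunk further down in this commit shows, it is expressed as:

```yaml
# meta/runtime.yml — installing or running the collection against an
# ansible-core older than 2.14 now fails this requirement check.
requires_ansible: '>=2.14.0'
```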

---------

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>
Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Alina Buzachis <abuzachis@redhat.com>
Bikouo Aubin
2023-11-15 18:00:38 +01:00
committed by GitHub
parent cb796e1298
commit a63e5b7b36
76 changed files with 4364 additions and 3510 deletions

.config/ansible-lint.yml (new file)

@@ -0,0 +1,5 @@
---
profile: production
exclude_paths:
  - molecule
  - tests/sanity

.github/patchback.yml (new file)

@@ -0,0 +1,4 @@
---
backport_branch_prefix: patchback/backports/
backport_label_prefix: backport-
target_branch_prefix: stable-

.github/settings.yml (new file)

@@ -0,0 +1,6 @@
---
# DO NOT MODIFY
# Settings: https://probot.github.io/apps/settings/
# Pull settings from https://github.com/ansible-collections/.github/blob/master/.github/settings.yml
_extends: ".github"

.github/stale.yml (new file)

@@ -0,0 +1,60 @@
---
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 90
# Number of days of inactivity before an Issue or Pull Request with the stale
# label is closed. Set to false to disable. If disabled, issues still need to be
# closed manually, but will remain marked as stale.
daysUntilClose: 30
# Only issues or pull requests with all of these labels are check if stale.
# Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set
# to `[]` to disable
exemptLabels:
  - security
  - planned
  - priority/critical
  - lifecycle/frozen
  - verified
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: true
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: false
# Label to use when marking as stale
staleLabel: lifecycle/stale
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
pulls:
  markComment: |-
    PRs go stale after 90 days of inactivity.
    If there is no further activity, the PR will be closed in another 30 days.
  unmarkComment: >-
    This pull request is no longer stale.
  closeComment: >-
    This pull request has been closed due to inactivity.
issues:
  markComment: |-
    Issues go stale after 90 days of inactivity.
    If there is no further activity, the issue will be closed in another 30 days.
  unmarkComment: >-
    This issue is no longer stale.
  closeComment: >-
    This issue has been closed due to inactivity.

.github/workflows/changelog.yml (new file)

@@ -0,0 +1,23 @@
---
name: Changelog
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
on:
  pull_request:
    types:
      - opened
      - reopened
      - labeled
      - unlabeled
      - synchronize
    branches:
      - main
      - stable-*
    tags:
      - '*'
jobs:
  changelog:
    uses: ansible-network/github_actions/.github/workflows/changelog.yml@main

.github/workflows/linters.yml (new file)

@@ -0,0 +1,29 @@
---
name: Linters
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
on:
  pull_request:
    types:
      - opened
      - reopened
      - labeled
      - unlabeled
      - synchronize
    branches:
      - main
      - stable-*
    tags:
      - '*'
jobs:
  linters:
    uses: ansible-network/github_actions/.github/workflows/tox-linters.yml@main
  ansible-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: ansible-network/github_actions/.github/actions/checkout_dependency@main
      - name: Run ansible-lint
        uses: ansible/ansible-lint@v6.21.0

.github/workflows/sanity-tests.yml (new file)

@@ -0,0 +1,23 @@
---
name: Sanity tests
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize
    branches:
      - main
      - stable-*
    tags:
      - '*'
jobs:
  sanity:
    uses: ansible-network/github_actions/.github/workflows/sanity.yml@main
    with:
      collection_pre_install: '-r source/tests/sanity/requirements.yml'

.github/workflows/unit-tests.yml (new file)

@@ -0,0 +1,21 @@
---
name: Unit tests
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize
    branches:
      - main
      - stable-*
    tags:
      - '*'
jobs:
  unit-source:
    uses: ansible-network/github_actions/.github/workflows/unit_source.yml@main


@@ -1,33 +1,15 @@
 ---
+# Based on ansible-lint config
+extends: default
 rules:
-  braces:
-    max-spaces-inside: 1
-    level: error
-  brackets:
-    max-spaces-inside: 1
-    level: error
-  colons:
-    max-spaces-after: -1
-    level: error
-  commas:
-    max-spaces-after: -1
-    level: error
-  comments: disable
-  comments-indentation: disable
-  document-start: disable
-  empty-lines:
-    max: 3
-    level: error
-  hyphens:
-    level: error
-  indentation: disable
-  key-duplicates: enable
-  line-length: disable
-  new-line-at-end-of-file: disable
-  new-lines:
-    type: unix
-  trailing-spaces: disable
-  truthy: disable
+  indentation:
+    ignore: &default_ignores |
+      # automatically generated, we can't control it
+      changelogs/changelog.yaml
+      # Will be gone when we release and automatically reformatted
+      changelogs/fragments/*
+  document-start:
+    ignore: *default_ignores
+  line-length:
+    ignore: *default_ignores
+    max: 160
+
+ignore-from-file: .gitignore


@@ -16,7 +16,7 @@ build: clean
 	ansible-galaxy collection build
 install: build
-	ansible-galaxy collection install -p ansible_collections community-okd-$(VERSION).tar.gz
+	ansible-galaxy collection install --force -p ansible_collections community-okd-$(VERSION).tar.gz
 sanity: install
 	cd ansible_collections/community/okd && ansible-test sanity -v --python $(PYTHON_VERSION) $(SANITY_TEST_ARGS)


@@ -10,13 +10,8 @@ The collection includes a variety of Ansible content to help automate the manage
 <!--start requires_ansible-->
 ## Ansible version compatibility
-This collection has been tested against following Ansible versions: **>=2.9.17**.
+This collection has been tested against following Ansible versions: **>=2.14.0**.
-For collections that support Ansible 2.9, please ensure you update your `network_os` to use the
-fully qualified collection name (for example, `cisco.ios.ios`).
-Plugins and modules within a collection may be tested with only specific Ansible versions.
-A collection may contain metadata that identifies these versions.
-PEP440 is the schema used to describe the versions of Ansible.
 <!--end requires_ansible-->
 ## Python Support


@@ -1,201 +1,202 @@
+---
 ancestor: null
 releases:
   0.1.0:
     changes:
       major_changes:
         - Add custom k8s module, integrate better Molecule tests (https://github.com/ansible-collections/community.okd/pull/7).
         - Add downstream build scripts to build redhat.openshift (https://github.com/ansible-collections/community.okd/pull/20).
         - Add openshift connection plugin, update inventory plugin to use it (https://github.com/ansible-collections/community.okd/pull/18).
         - Initial content migration from community.kubernetes (https://github.com/ansible-collections/community.okd/pull/3).
       minor_changes:
         - Add incluster Makefile target for CI (https://github.com/ansible-collections/community.okd/pull/13).
         - Add tests for inventory plugin (https://github.com/ansible-collections/community.okd/pull/16).
         - CI Documentation for working with Prow (https://github.com/ansible-collections/community.okd/pull/15).
         - Docker container can run as an arbitrary user (https://github.com/ansible-collections/community.okd/pull/12).
         - Dockerfile now is properly set up to run tests in a rootless container (https://github.com/ansible-collections/community.okd/pull/11).
         - Integrate stale bot for issue queue maintenance (https://github.com/ansible-collections/community.okd/pull/14).
     fragments:
       - 1-initial-content.yml
       - 11-dockerfile-tests.yml
       - 12-dockerfile-tests.yml
       - 13-makefile-tests.yml
       - 15-ci-documentation.yml
       - 16-inventory-plugin-tests.yml
       - 18-openshift-connection-plugin.yml
       - 20-downstream-build-scripts.yml
       - 7-molecule-tests.yml
       - 8-stale-bot.yml
     release_date: '2020-09-04'
   0.2.0:
     changes:
       major_changes:
         - openshift_auth - new module (migrated from k8s_auth in community.kubernetes)
           (https://github.com/ansible-collections/community.okd/pull/33).
       minor_changes:
         - Add a contribution guide (https://github.com/ansible-collections/community.okd/pull/37).
         - Use the API Group APIVersion for the `Route` object (https://github.com/ansible-collections/community.okd/pull/27).
     fragments:
       - 27-route-api-group.yml
       - 33-add-k8s_auth.yml
       - 36-contribution-guide.yml
     modules:
       - description: Authenticate to OpenShift clusters which require an explicit login
           step
         name: openshift_auth
         namespace: ''
     release_date: '2020-09-24'
   0.3.0:
     changes:
       major_changes:
         - Add openshift_process module for template rendering and optional application
           of rendered resources (https://github.com/ansible-collections/community.okd/pull/44).
         - Add openshift_route module for creating routes from services (https://github.com/ansible-collections/community.okd/pull/40).
     fragments:
       - 40-openshift_route.yml
       - 44-openshift_process.yml
     modules:
       - description: Process an OpenShift template.openshift.io/v1 Template
         name: openshift_process
         namespace: ''
       - description: Expose a Service as an OpenShift Route.
         name: openshift_route
         namespace: ''
     release_date: '2020-10-12'
   1.0.0:
     changes:
       minor_changes:
         - Released version 1 to Automation Hub as redhat.openshift (https://github.com/ansible-collections/community.okd/issues/51).
     fragments:
       - 51-redhat-openshift-ah-release.yml
     release_date: '2020-11-12'
   1.0.1:
     changes:
       bugfixes:
         - Generate downstream redhat.openshift documentation (https://github.com/ansible-collections/community.okd/pull/59).
     fragments:
       - 59-downstream-docs.yml
     release_date: '2020-11-17'
   1.0.2:
     changes:
       minor_changes:
         - restrict the version of kubernetes.core dependency (https://github.com/ansible-collections/community.okd/pull/66).
     fragments:
       - 66-restrict-kubernetes-core-version.yaml
     release_date: '2021-02-19'
   1.1.0:
     changes:
       minor_changes:
         - increase the kubernetes.core dependency version number (https://github.com/ansible-collections/community.okd/pull/71).
     fragments:
       - 71-bump-kubernetes-core-version.yaml
     release_date: '2021-02-23'
   1.1.1:
     changes:
       bugfixes:
         - add missing requirements.txt file needed for execution environments (https://github.com/ansible-collections/community.okd/pull/78).
         - openshift_route - default to ``no_log=False`` for the ``key`` parameter in
           TLS configuration to fix sanity failures (https://github.com/ansible-collections/community.okd/pull/77).
         - restrict molecule version to <3.3.0 to address breaking change (https://github.com/ansible-collections/community.okd/pull/77).
         - update CI to work with ansible 2.11 (https://github.com/ansible-collections/community.okd/pull/80).
     fragments:
       - 77-fix-ci-failure.yaml
       - 78-add-requirements-file.yaml
       - 80-update-ci.yaml
     release_date: '2021-04-06'
   1.1.2:
     changes:
       bugfixes:
         - include requirements.txt in downstream build process (https://github.com/ansible-collections/community.okd/pull/81).
     fragments:
       - 81-include-requirements.yaml
     release_date: '2021-04-08'
   2.0.0:
     changes:
       breaking_changes:
         - drop python 2 support (https://github.com/openshift/community.okd/pull/93).
       bugfixes:
         - fixes test suite to use correct versions of python and dependencies (https://github.com/ansible-collections/community.okd/pull/89).
         - openshift_process - fix module execution when template does not include a
           message (https://github.com/ansible-collections/community.okd/pull/87).
       major_changes:
         - update to use kubernetes.core 2.0 (https://github.com/openshift/community.okd/pull/93).
       minor_changes:
         - Added documentation for the ``community.okd`` collection.
         - openshift - inventory plugin supports FQCN ``redhat.openshift``.
     fragments:
       - 87-openshift_process-fix-template-without-message.yaml
       - 89-clean-up-ci.yaml
       - 93-update-to-k8s-2.yaml
       - add_docs.yml
       - fqcn_inventory.yml
     release_date: '2021-06-22'
   2.0.1:
     changes:
       minor_changes:
         - increase kubernetes.core dependency version (https://github.com/openshift/community.okd/pull/97).
     fragments:
       - 97-bump-k8s-version.yaml
     release_date: '2021-06-24'
   2.1.0:
     changes:
       bugfixes:
         - fix broken links in Automation Hub for redhat.openshift (https://github.com/openshift/community.okd/issues/100).
       minor_changes:
         - add support for turbo mode (https://github.com/openshift/community.okd/pull/102).
         - openshift_route - Add support for Route annotations (https://github.com/ansible-collections/community.okd/pull/99).
     fragments:
       - 0-copy_ignore_txt.yml
       - 100-fix-broken-links.yml
       - 102-support-turbo-mode.yaml
       - 99-openshift_route-add-support-for-annotations.yml
     release_date: '2021-10-20'
   2.2.0:
     changes:
       bugfixes:
         - fix ocp auth failing against cluster api url with trailing slash (https://github.com/openshift/community.okd/issues/139)
       minor_changes:
         - add action groups to runtime.yml (https://github.com/openshift/community.okd/issues/41).
     fragments:
       - 152-add-action-groups.yml
       - auth-against-api-with-trailing-slash.yaml
     modules:
       - description: Update TemplateInstances to point to the latest group-version-kinds
         name: openshift_adm_migrate_template_instances
         namespace: ''
       - description: Removes references to the specified roles, clusterroles, users,
           and groups
         name: openshift_adm_prune_auth
         namespace: ''
       - description: Remove old completed and failed deployment configs
         name: openshift_adm_prune_deployments
         namespace: ''
       - description: Remove unreferenced images
         name: openshift_adm_prune_images
         namespace: ''
       - description: Import the latest image information from a tag in a container image
           registry.
         name: openshift_import_image
         namespace: ''
       - description: Display information about the integrated registry.
         name: openshift_registry_info
         namespace: ''
     release_date: '2022-05-05'
   2.3.0:
     changes:
       bugfixes:
         - openshift_adm_groups_sync - initialize OpenshiftGroupSync attributes early
           to avoid Attribute error (https://github.com/openshift/community.okd/issues/155).
         - openshift_auth - Review the way the discard process is working, add openshift
           algorithm to convert token to resource object name (https://github.com/openshift/community.okd/issues/176).
     fragments:
       - 165-initialize-attributes-early.yml
       - 178-openshift_auth-fix-revoke-token.yml
       - 180-default-values-doc.yml
     modules:
       - description: Prune old completed and failed builds
         name: openshift_adm_prune_builds
         namespace: ''
       - description: Start a new build or Cancel running, pending, or new builds.
         name: openshift_build
         namespace: ''
     release_date: '2023-02-03'


@@ -10,21 +10,21 @@ notesdir: fragments
 prelude_section_name: release_summary
 prelude_section_title: Release Summary
 sections:
   - - major_changes
     - Major Changes
   - - minor_changes
     - Minor Changes
   - - breaking_changes
     - Breaking Changes / Porting Guide
   - - deprecated_features
     - Deprecated Features
   - - removed_features
     - Removed Features (previously deprecated)
   - - security_fixes
     - Security Fixes
   - - bugfixes
     - Bugfixes
   - - known_issues
     - Known Issues
 title: OKD Collection
 trivial_section_name: trivial


@@ -1,2 +1,4 @@
+---
 deprecated_features:
-  - openshift - the ``openshift`` inventory plugin has been deprecated and will be removed in release 4.0.0 (https://github.com/ansible-collections/kubernetes.core/issues/31).
+  - openshift - the ``openshift`` inventory plugin has been deprecated and will be removed in release 4.0.0
+    (https://github.com/ansible-collections/kubernetes.core/issues/31).


@@ -0,0 +1,6 @@
---
trivial:
  - "Move unit and sanity tests from zuul to GitHub Actions (https://github.com/openshift/community.okd/pull/202)."
breaking_changes:
  - "Remove support for ansible-core < 2.14 (https://github.com/openshift/community.okd/pull/202)."
  - "Bump minimum Python supported version to 3.9 (https://github.com/openshift/community.okd/pull/202)."


@@ -1,4 +1,4 @@
-FROM registry.access.redhat.com/ubi8/ubi
+FROM registry.access.redhat.com/ubi9/ubi
 ENV OPERATOR=/usr/local/bin/ansible-operator \
     USER_UID=1001 \
@@ -11,20 +11,20 @@ RUN yum install -y \
     glibc-langpack-en \
     git \
     make \
-    python39 \
-    python39-devel \
-    python39-pip \
-    python39-setuptools \
+    python3 \
+    python3-devel \
+    python3-pip \
+    python3-setuptools \
     gcc \
     openldap-devel \
-    && pip3 install --no-cache-dir --upgrade setuptools pip \
-    && pip3 install --no-cache-dir \
+    && python3.9 -m pip install --no-cache-dir --upgrade setuptools pip \
+    && python3.9 -m pip install --no-cache-dir \
     kubernetes \
-    ansible==2.9.* \
-    "molecule<3.3.0" \
+    "ansible-core" \
+    "molecule" \
     && yum clean all \
     && rm -rf $HOME/.cache \
-    && curl -L https://github.com/openshift/okd/releases/download/4.5.0-0.okd-2020-08-12-020541/openshift-client-linux-4.5.0-0.okd-2020-08-12-020541.tar.gz | tar -xz -C /usr/local/bin
+    && curl -L https://github.com/openshift/okd/releases/download/4.12.0-0.okd-2023-04-16-041331/openshift-client-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz | tar -xz -C /usr/local/bin
 # TODO: Is there a better way to install this client in ubi8?
 COPY . /opt/ansible


@@ -47,7 +47,7 @@ f_text_sub()
     sed -i.bak "s/Kubernetes/OpenShift/g" "${_build_dir}/galaxy.yml"
     sed -i.bak "s/^version\:.*$/version: ${DOWNSTREAM_VERSION}/" "${_build_dir}/galaxy.yml"
     sed -i.bak "/STARTREMOVE/,/ENDREMOVE/d" "${_build_dir}/README.md"
-    sed -i.bak "s/[[:space:]]okd:$/ openshift:/" ${_build_dir}/meta/runtime.yml
+    sed -i.bak "s/[[:space:]]okd:$/ openshift:/" "${_build_dir}/meta/runtime.yml"
     find "${_build_dir}" -type f ! -name galaxy.yml -exec sed -i.bak "s/community\.okd/redhat\.openshift/g" {} \;
     find "${_build_dir}" -type f -name "*.bak" -delete
@@ -67,7 +67,6 @@ f_prep()
         LICENSE
         README.md
         Makefile
-        setup.cfg
         .yamllint
         requirements.txt
         requirements.yml
@@ -76,6 +75,7 @@ f_prep()
     # Directories to recursively copy downstream (relative repo root dir path)
     _dir_manifest=(
+        .config
         changelogs
         ci
         meta
@@ -156,7 +156,7 @@ f_handle_doc_fragments_workaround()
     # Build the collection, export docs, render them, stitch it all back together
     pushd "${_build_dir}" || return
     ansible-galaxy collection build
-    ansible-galaxy collection install -p "${install_collections_dir}" ./*.tar.gz
+    ansible-galaxy collection install --force-with-deps -p "${install_collections_dir}" ./*.tar.gz
     rm ./*.tar.gz
     for doc_fragment_mod in "${_doc_fragment_modules[@]}"
     do


@@ -1,5 +1,5 @@
 ---
-requires_ansible: '>=2.9.17'
+requires_ansible: '>=2.14.0'
 action_groups:
   okd:
     - k8s


@@ -21,16 +21,13 @@
     debug:
       var: output
-- name: Create deployment config
+- name: Create deployment
   community.okd.k8s:
     state: present
     name: hello-world
     namespace: testing
     definition: '{{ okd_dc_template }}'
     wait: yes
-    wait_condition:
-      type: Available
-      status: True
   vars:
     k8s_pod_name: hello-world
     k8s_pod_image: python
@@ -71,19 +68,12 @@
     namespace: '{{ namespace }}'
     definition: '{{ okd_imagestream_template }}'
-- name: Create DeploymentConfig to reference ImageStream
-  community.okd.k8s:
-    name: '{{ k8s_pod_name }}'
-    namespace: '{{ namespace }}'
-    definition: '{{ okd_dc_template }}'
-  vars:
-    k8s_pod_name: is-idempotent-dc
 - name: Create Deployment to reference ImageStream
   community.okd.k8s:
     name: '{{ k8s_pod_name }}'
     namespace: '{{ namespace }}'
     definition: '{{ k8s_deployment_template | combine(metadata) }}'
+    wait: true
   vars:
     k8s_pod_annotations:
       "alpha.image.policy.openshift.io/resolve-names": "*"


@@ -10,13 +10,13 @@ objects:
       name: "Pod-${{ NAME }}"
     spec:
       containers:
         - args:
            - /bin/sh
            - -c
            - while true; do echo $(date); sleep 15; done
          image: python:3.7-alpine
          imagePullPolicy: Always
          name: python
 parameters:
   - name: NAME
     description: trailing name of the pod


@@ -13,22 +13,22 @@ metadata:
     tags: quickstart,examples
   name: simple-example
 objects:
   - apiVersion: v1
     kind: ConfigMap
     metadata:
       annotations:
         description: Big example
       name: ${NAME}
     data:
       content: "${CONTENT}"
 parameters:
   - description: The name assigned to the ConfigMap
     displayName: Name
     name: NAME
     required: true
     value: example
   - description: The value for the content key of the configmap
     displayName: Content
     name: CONTENT
     required: true
     value: ''


@@ -4,7 +4,7 @@ dependency:
   options:
     requirements-file: requirements.yml
 driver:
-  name: delegated
+  name: default
 platforms:
   - name: cluster
     groups:
@@ -17,9 +17,6 @@ provisioner:
   config_options:
     inventory:
       enable_plugins: community.okd.openshift
-  lint: |
-    set -e
-    ansible-lint
   inventory:
     hosts:
       plugin: community.okd.openshift
@@ -34,14 +31,10 @@ provisioner:
     ANSIBLE_COLLECTIONS_PATHS: ${OVERRIDE_COLLECTION_PATH:-$MOLECULE_PROJECT_DIRECTORY}
 verifier:
   name: ansible
-  lint: |
-    set -e
-    ansible-lint
 scenario:
   name: default
   test_sequence:
     - dependency
-    - lint
     - syntax
     - prepare
     - converge
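The `lint` blocks and the `lint` step removed from molecule here are superseded by a standalone ansible-lint run in CI (the PR adds a GitHub Actions workflow for it). A minimal sketch of such a workflow; the triggers, action versions, and file layout below are assumptions, not taken from this PR:

```yaml
# Hypothetical workflow sketch; names and versions are assumptions.
name: ansible-lint
on:
  pull_request:
  push:
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install ansible-lint
        run: pip install ansible-lint
      - name: Run ansible-lint
        run: ansible-lint
```

Running the linter once per repository in CI avoids invoking it redundantly from every molecule scenario.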


@@ -37,12 +37,12 @@
         name: cluster
       spec:
         identityProviders:
           - name: htpasswd_provider
             mappingMethod: claim
             type: HTPasswd
             htpasswd:
               fileData:
                 name: htpass-secret

 - name: Create ClusterRoleBinding for test user
   community.okd.k8s:


@@ -89,6 +89,7 @@ def execute():
         ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
     connection = ldap.initialize(module.params['server_uri'])
+    connection.set_option(ldap.OPT_REFERRALS, 0)
     try:
         connection.simple_bind_s(module.params['bind_dn'], module.params['bind_pw'])
     except ldap.LDAPError as e:


@@ -1,227 +1,227 @@
---
- block:
    - name: Get LDAP definition
      set_fact:
        ldap_entries: "{{ lookup('template', 'ad/definition.j2') | from_yaml }}"

    - name: Delete openshift groups if existing
      community.okd.k8s:
        state: absent
        kind: Group
        version: "user.openshift.io/v1"
        name: "{{ item }}"
      with_items:
        - admins
        - developers

    - name: Delete existing LDAP Entries
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        state: absent
      with_items: "{{ ldap_entries.users + ldap_entries.units | reverse | list }}"

    - name: Create LDAP Entries
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        attributes: "{{ item.attr }}"
        objectClass: "{{ item.class }}"
      with_items: "{{ ldap_entries.units + ldap_entries.users }}"

    - name: Load test configurations
      set_fact:
        sync_config: "{{ lookup('template', 'ad/sync-config.j2') | from_yaml }}"

    - name: Synchronize Groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
      check_mode: yes
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - admins_group
          - devs_group
          - '"jane.smith@ansible.org" in {{ admins_group.users }}'
          - '"jim.adams@ansible.org" in {{ admins_group.users }}'
          - '"jordanbulls@ansible.org" in {{ devs_group.users }}'
          - admins_group.users | length == 2
          - devs_group.users | length == 1
      vars:
        admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'admins') | first }}"
        devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"

    - name: Synchronize Groups (Remove check_mode)
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed

    - name: Read admins group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: Validate group was created
      assert:
        that:
          - result.resources | length == 1
          - '"jane.smith@ansible.org" in {{ result.resources.0.users }}'
          - '"jim.adams@ansible.org" in {{ result.resources.0.users }}'

    - name: Read developers group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: developers
      register: result

    - name: Validate group was created
      assert:
        that:
          - result.resources | length == 1
          - '"jordanbulls@ansible.org" in {{ result.resources.0.users }}'

    - name: Define user dn to delete
      set_fact:
        user_to_delete: "cn=Jane,ou=engineers,ou=activeD,{{ ldap_root }}"

    - name: Delete 1 admin user
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ user_to_delete }}"
        state: absent

    - name: Synchronize Openshift groups using allow_groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        allow_groups:
          - developers
        type: openshift
      register: openshift_sync

    - name: Validate that only developers group was sync
      assert:
        that:
          - openshift_sync is changed
          - openshift_sync.groups | length == 1
          - openshift_sync.groups.0.metadata.name == "developers"

    - name: Read admins group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: Validate admins group content has not changed
      assert:
        that:
          - result.resources | length == 1
          - '"jane.smith@ansible.org" in {{ result.resources.0.users }}'
          - '"jim.adams@ansible.org" in {{ result.resources.0.users }}'

    - name: Synchronize Openshift groups using deny_groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        deny_groups:
          - developers
        type: openshift
      register: openshift_sync

    - name: Validate that only admins group was sync
      assert:
        that:
          - openshift_sync is changed
          - openshift_sync.groups | length == 1
          - openshift_sync.groups.0.metadata.name == "admins"

    - name: Read admins group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: Validate admins group contains only 1 user now
      assert:
        that:
          - result.resources | length == 1
          - result.resources.0.users == ["jim.adams@ansible.org"]

    - name: Set users to delete (delete all developers users)
      set_fact:
        user_to_delete: "cn=Jordan,ou=engineers,ou=activeD,{{ ldap_root }}"

    - name: Delete 1 admin user
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ user_to_delete }}"
        state: absent

    - name: Prune groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        state: absent
      register: result

    - name: Validate result is changed (only developers group be deleted)
      assert:
        that:
          - result is changed
          - result.groups | length == 1

    - name: Get developers group info
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: developers
      register: result

    - name: assert group was deleted
      assert:
        that:
          - result.resources | length == 0

    - name: Get admins group info
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: assert group was not deleted
      assert:
        that:
          - result.resources | length == 1

    - name: Prune groups once again (idempotency)
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        state: absent
      register: result

    - name: Assert nothing was changed
      assert:
        that:
          - result is not changed
  always:
    - name: Delete openshift groups if existing
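The `allow_groups`/`deny_groups` behaviour these tasks assert reduces to name-based filtering of the synced group list: `allow_groups` keeps only the listed groups, `deny_groups` drops them. A minimal Python sketch of those semantics (illustrative only, mirroring what the assertions observe, not the module's actual implementation):

```python
# Sketch of allow/deny filtering over synced group objects, assuming the
# shape returned by the module: {"metadata": {"name": ...}, "users": [...]}.

def filter_groups(groups, allow=None, deny=None):
    """Keep only groups named in `allow`; then drop any named in `deny`."""
    def name_of(group):
        return group["metadata"]["name"]
    if allow is not None:
        groups = [g for g in groups if name_of(g) in allow]
    if deny is not None:
        groups = [g for g in groups if name_of(g) not in deny]
    return groups

synced = [
    {"metadata": {"name": "admins"},
     "users": ["jane.smith@ansible.org", "jim.adams@ansible.org"]},
    {"metadata": {"name": "developers"},
     "users": ["jordanbulls@ansible.org"]},
]

print([g["metadata"]["name"] for g in filter_groups(synced, allow=["developers"])])  # ['developers']
print([g["metadata"]["name"] for g in filter_groups(synced, deny=["developers"])])   # ['admins']
```

This matches the test expectations above: syncing with `allow_groups: [developers]` touches only `developers`, while `deny_groups: [developers]` touches only `admins`.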


@@ -1,166 +1,165 @@
---
- block:
    - name: Get LDAP definition
      set_fact:
        ldap_entries: "{{ lookup('template', 'augmented-ad/definition.j2') | from_yaml }}"

    - name: Delete openshift groups if existing
      community.okd.k8s:
        state: absent
        kind: Group
        version: "user.openshift.io/v1"
        name: "{{ item }}"
      with_items:
        - banking
        - insurance

    - name: Delete existing LDAP entries
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        state: absent
      with_items: "{{ ldap_entries.users + ldap_entries.groups + ldap_entries.units | reverse | list }}"

    - name: Create LDAP Entries
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        attributes: "{{ item.attr }}"
        objectClass: "{{ item.class }}"
      with_items: "{{ ldap_entries.units + ldap_entries.groups + ldap_entries.users }}"

    - name: Load test configurations
      set_fact:
        sync_config: "{{ lookup('template', 'augmented-ad/sync-config.j2') | from_yaml }}"

    - name: Synchronize Groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
      check_mode: yes
      register: result

    - name: Validate that 'banking' and 'insurance' groups were created
      assert:
        that:
          - result is changed
          - banking_group
          - insurance_group
          - '"james-allan@ansible.org" in {{ banking_group.users }}'
          - '"gordon-kane@ansible.org" in {{ banking_group.users }}'
          - '"alice-courtney@ansible.org" in {{ insurance_group.users }}'
          - banking_group.users | length == 2
          - insurance_group.users | length == 1
      vars:
        banking_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'banking') | first }}"
        insurance_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'insurance') | first }}"

    - name: Synchronize Groups (Remove check_mode)
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed

    - name: Define facts for group to create
      set_fact:
        ldap_groups:
          - name: banking
            users:
              - "james-allan@ansible.org"
              - "gordon-kane@ansible.org"
          - name: insurance
            users:
              - "alice-courtney@ansible.org"

    - name: Read 'banking' openshift group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: banking
      register: result

    - name: Validate group info
      assert:
        that:
          - result.resources | length == 1
          - '"james-allan@ansible.org" in {{ result.resources.0.users }}'
          - '"gordon-kane@ansible.org" in {{ result.resources.0.users }}'

    - name: Read 'insurance' openshift group
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: insurance
      register: result

    - name: Validate group info
      assert:
        that:
          - result.resources | length == 1
          - 'result.resources.0.users == ["alice-courtney@ansible.org"]'

    - name: Delete employee from 'insurance' group
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "cn=Alice,ou=employee,ou=augmentedAD,{{ ldap_root }}"
        state: absent

    - name: Prune groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        state: absent
      register: result

    - name: Validate result is changed (only insurance group be deleted)
      assert:
        that:
          - result is changed
          - result.groups | length == 1

    - name: Get 'insurance' openshift group info
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: insurance
      register: result

    - name: assert group was deleted
      assert:
        that:
          - result.resources | length == 0

    - name: Get 'banking' openshift group info
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: banking
      register: result

    - name: assert group was not deleted
      assert:
        that:
          - result.resources | length == 1

    - name: Prune groups once again (idempotency)
      community.okd.openshift_adm_groups_sync:
        config: "{{ sync_config }}"
        state: absent
      register: result

    - name: Assert no change was made
      assert:
        that:
          - result is not changed
  always:
    - name: Delete openshift groups if existing


@@ -16,30 +16,29 @@
           app: ldap
       spec:
         containers:
           - name: ldap
             image: bitnami/openldap
             env:
               - name: LDAP_ADMIN_USERNAME
                 value: "{{ ldap_admin_user }}"
               - name: LDAP_ADMIN_PASSWORD
                 value: "{{ ldap_admin_password }}"
               - name: LDAP_USERS
                 value: "ansible"
               - name: LDAP_PASSWORDS
                 value: "ansible123"
               - name: LDAP_ROOT
                 value: "{{ ldap_root }}"
             ports:
               - containerPort: 1389
-                name: ldap-server
   register: pod_info

-- name: Set Pod Internal IP
-  set_fact:
-    podIp: "{{ pod_info.result.status.podIP }}"
-
 - name: Set LDAP Common facts
   set_fact:
-    ldap_server_uri: "ldap://{{ podIp }}:1389"
+    # we can use the Pod IP directly because the integration are running inside a Pod in the
+    # same openshift cluster
+    ldap_server_uri: "ldap://{{ pod_info.result.status.podIP }}:1389"
     ldap_bind_dn: "cn={{ ldap_admin_user }},{{ ldap_root }}"
     ldap_bind_pw: "{{ ldap_admin_password }}"
@@ -53,8 +52,10 @@
     bind_pw: "{{ ldap_bind_pw }}"
     dn: "ou=users,{{ ldap_root }}"
     server_uri: "{{ ldap_server_uri }}"
-  # ignore_errors: true
-  # register: ping_ldap
+  register: test_ldap
+  retries: 10
+  delay: 5
+  until: test_ldap is not failed

 - include_tasks: "tasks/python-ldap-not-installed.yml"
 - include_tasks: "tasks/rfc2307.yml"
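The `retries`/`delay`/`until` combination added here is the standard retry-until-success loop, used because the freshly created LDAP pod may not accept binds immediately. In plain Python the same logic looks like this (delay shortened for illustration; `flaky_bind` is a stand-in for the LDAP probe, not code from this PR):

```python
import time

def retry(task, retries=10, delay=0.01):
    """Call task() until it succeeds or the attempts are exhausted."""
    last_error = None
    for _ in range(retries):
        try:
            return task()
        except Exception as exc:  # Ansible's equivalent: `until: result is not failed`
            last_error = exc
            time.sleep(delay)
    raise last_error

attempts = {"n": 0}

def flaky_bind():
    # Fails twice before succeeding, like an LDAP pod that is still starting.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("ldap not ready")
    return "bound"

outcome = retry(flaky_bind)
print(outcome)  # bound
```

Compared with the commented-out `ignore_errors` approach it replaces, the loop fails the play only after all attempts are exhausted, instead of silently tolerating a dead server.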


@@ -1,3 +1,4 @@
+---
 - block:
     - name: Create temp directory
       tempfile:


@@ -1,459 +1,460 @@
---
- block:
    - name: Get LDAP definition
      set_fact:
        ldap_resources: "{{ lookup('template', 'rfc2307/definition.j2') | from_yaml }}"

    - name: Delete openshift groups if existing
      community.okd.k8s:
        state: absent
        kind: Group
        version: "user.openshift.io/v1"
        name: "{{ item }}"
      with_items:
        - admins
        - engineers
        - developers

    - name: Delete existing LDAP entries
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        state: absent
      with_items: "{{ ldap_resources.users + ldap_resources.groups + ldap_resources.units | reverse | list }}"

    - name: Create LDAP units
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        attributes: "{{ item.attr }}"
        objectClass: "{{ item.class }}"
      with_items: "{{ ldap_resources.units }}"

    - name: Create LDAP Groups
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        attributes: "{{ item.attr }}"
        objectClass: "{{ item.class }}"
      with_items: "{{ ldap_resources.groups }}"

    - name: Create LDAP users
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item.dn }}"
        attributes: "{{ item.attr }}"
        objectClass: "{{ item.class }}"
      with_items: "{{ ldap_resources.users }}"

    - name: Load test configurations
      set_fact:
        configs: "{{ lookup('template', 'rfc2307/sync-config.j2') | from_yaml }}"

    - name: Synchronize Groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.simple }}"
      check_mode: yes
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - admins_group
          - devs_group
          - '"jane.smith@ansible.org" in {{ admins_group.users }}'
          - '"jim.adams@ansible.org" in {{ devs_group.users }}'
          - '"jordanbulls@ansible.org" in {{ devs_group.users }}'
          - admins_group.users | length == 1
          - devs_group.users | length == 2
      vars:
        admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'admins') | first }}"
        devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"

    - name: Synchronize Groups - User defined mapping
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.user_defined }}"
      check_mode: yes
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - admins_group
          - devs_group
          - '"jane.smith@ansible.org" in {{ admins_group.users }}'
          - '"jim.adams@ansible.org" in {{ devs_group.users }}'
          - '"jordanbulls@ansible.org" in {{ devs_group.users }}'
          - admins_group.users | length == 1
          - devs_group.users | length == 2
      vars:
        admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-admins') | first }}"
        devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-devs') | first }}"

    - name: Synchronize Groups - Using dn for every query
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.dn_everywhere }}"
      check_mode: yes
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - admins_group
          - devs_group
          - '"cn=Jane,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ admins_group.users }}'
          - '"cn=Jim,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ devs_group.users }}'
          - '"cn=Jordan,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ devs_group.users }}'
          - admins_group.users | length == 1
          - devs_group.users | length == 2
      vars:
        admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'cn=admins,ou=groups,ou=rfc2307,' + ldap_root ) | first }}"
        devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'cn=developers,ou=groups,ou=rfc2307,' + ldap_root ) | first }}"

    - name: Synchronize Groups - Partially user defined mapping
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.partially_user_defined }}"
      check_mode: yes
      register: result

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - admins_group
          - devs_group
          - '"jane.smith@ansible.org" in {{ admins_group.users }}'
          - '"jim.adams@ansible.org" in {{ devs_group.users }}'
          - '"jordanbulls@ansible.org" in {{ devs_group.users }}'
          - admins_group.users | length == 1
          - devs_group.users | length == 2
      vars:
        admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-admins') | first }}"
        devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"

    - name: Delete Group 'engineers' if created before
      community.okd.k8s:
        state: absent
        kind: Group
        version: "user.openshift.io/v1"
        name: 'engineers'
        wait: yes
      ignore_errors: yes

    - name: Synchronize Groups - Partially user defined mapping
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.out_scope }}"
      check_mode: yes
      register: result
      ignore_errors: yes

    - name: Assert group sync failed due to non-existent member
      assert:
        that:
          - result is failed
          - result.msg.startswith("Entry not found for base='cn=Matthew,ou=people,ou=outrfc2307,{{ ldap_root }}'")

    - name: Define sync configuration with tolerateMemberNotFoundErrors
      set_fact:
        config_out_of_scope_tolerate_not_found: "{{ configs.out_scope | combine({'rfc2307': merge_rfc2307 })}}"
      vars:
        merge_rfc2307: "{{ configs.out_scope.rfc2307 | combine({'tolerateMemberNotFoundErrors': 'true'}) }}"

    - name: Synchronize Groups - Partially user defined mapping (tolerateMemberNotFoundErrors=true)
      community.okd.openshift_adm_groups_sync:
        config: "{{ config_out_of_scope_tolerate_not_found }}"
      check_mode: yes
      register: result

    - name: Assert group sync did not fail (tolerateMemberNotFoundErrors=true)
      assert:
        that:
          - result is changed
          - result.groups | length == 1
          - result.groups.0.metadata.name == 'engineers'
          - result.groups.0.users == ['Abraham']

    - name: Create Group 'engineers'
      community.okd.k8s:
        state: present
        wait: yes
        definition:
          kind: Group
          apiVersion: "user.openshift.io/v1"
          metadata:
            name: engineers
          users: []

    - name: Try to sync LDAP group with Openshift existing group not created using sync should failed
      community.okd.openshift_adm_groups_sync:
        config: "{{ config_out_of_scope_tolerate_not_found }}"
      check_mode: yes
      register: result
      ignore_errors: yes

    - name: Validate group sync failed
      assert:
        that:
          - result is failed
          - '"openshift.io/ldap.host label did not match sync host" in result.msg'

    - name: Define allow_groups and deny_groups groups
      set_fact:
        allow_groups:
          - "cn=developers,ou=groups,ou=rfc2307,{{ ldap_root }}"
        deny_groups:
          - "cn=admins,ou=groups,ou=rfc2307,{{ ldap_root }}"

    - name: Synchronize Groups using allow_groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.simple }}"
        allow_groups: "{{ allow_groups }}"
      register: result
      check_mode: yes

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - result.groups | length == 1
          - result.groups.0.metadata.name == "developers"

    - name: Synchronize Groups using deny_groups
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.simple }}"
        deny_groups: "{{ deny_groups }}"
      register: result
      check_mode: yes

    - name: Validate Group going to be created
      assert:
        that:
          - result is changed
          - result.groups | length == 1
          - result.groups.0.metadata.name == "developers"

    - name: Synchronize groups, remove check_mode
      community.okd.openshift_adm_groups_sync:
        config: "{{ configs.simple }}"
      register: result

    - name: Validate result is changed
      assert:
        that:
          - result is changed

    - name: Read Groups
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: Validate group was created
      assert:
        that:
          - result.resources | length == 1
          - '"jane.smith@ansible.org" in {{ result.resources.0.users }}'

    - name: Read Groups
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: developers
      register: result

    - name: Validate group was created
      assert:
        that:
          - result.resources | length == 1
          - '"jim.adams@ansible.org" in {{ result.resources.0.users }}'
          - '"jordanbulls@ansible.org" in {{ result.resources.0.users }}'

    - name: Set users to delete (no admins users anymore and only 1 developer kept)
      set_fact:
        users_to_delete:
          - "cn=Jane,ou=people,ou=rfc2307,{{ ldap_root }}"
          - "cn=Jim,ou=people,ou=rfc2307,{{ ldap_root }}"

    - name: Delete users from LDAP servers
      openshift_ldap_entry:
        bind_dn: "{{ ldap_bind_dn }}"
        bind_pw: "{{ ldap_bind_pw }}"
        server_uri: "{{ ldap_server_uri }}"
        dn: "{{ item }}"
        state: absent
      with_items: "{{ users_to_delete }}"

    - name: Define sync configuration with tolerateMemberNotFoundErrors
      set_fact:
        config_simple_tolerate_not_found: "{{ configs.simple | combine({'rfc2307': merge_rfc2307 })}}"
      vars:
        merge_rfc2307: "{{ configs.simple.rfc2307 | combine({'tolerateMemberNotFoundErrors': 'true'}) }}"

    - name: Synchronize groups once again after users deletion
      community.okd.openshift_adm_groups_sync:
        config: "{{ config_simple_tolerate_not_found }}"
      register: result

    - name: Validate result is changed
      assert:
        that:
          - result is changed

    - name: Read Groups
      kubernetes.core.k8s_info:
        kind: Group
        version: "user.openshift.io/v1"
        name: admins
      register: result

    - name: Validate admins group does not contains users anymore
      assert:
        that:
          - result.resources | length == 1
          - result.resources.0.users == []

    - name: Read Groups
      kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Validate group was created
assert:
that:
- result.resources | length == 1
- '"jordanbulls@ansible.org" in {{ result.resources.0.users }}'
- name: Set group to delete
set_fact:
groups_to_delete:
- "cn=developers,ou=groups,ou=rfc2307,{{ ldap_root }}"
- name: Delete Group from LDAP servers
openshift_ldap_entry:
bind_dn: "{{ ldap_bind_dn }}"
bind_pw: "{{ ldap_bind_pw }}"
server_uri: "{{ ldap_server_uri }}"
dn: "{{ item }}"
state: absent
with_items: "{{ groups_to_delete }}"
- name: Prune groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
state: absent
register: result
check_mode: yes
- name: Validate that only developers group is candidate for Prune
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == "developers"
- name: Read Group (validate that check_mode did not performed update in the cluster)
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Assert group was found
assert:
that:
- result.resources | length == 1
- name: Prune using allow_groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
allow_groups:
- developers - developers
state: absent
register: result
check_mode: yes
- name: assert developers group was candidate for prune - name: Delete existing LDAP entries
assert: openshift_ldap_entry:
that: bind_dn: "{{ ldap_bind_dn }}"
- result is changed bind_pw: "{{ ldap_bind_pw }}"
- result.groups | length == 1 server_uri: "{{ ldap_server_uri }}"
- result.groups.0.metadata.name == "developers" dn: "{{ item.dn }}"
state: absent
with_items: "{{ ldap_resources.users + ldap_resources.groups + ldap_resources.units | reverse | list }}"
- name: Prune using deny_groups - name: Create LDAP units
community.okd.openshift_adm_groups_sync: openshift_ldap_entry:
config: "{{ config_simple_tolerate_not_found }}" bind_dn: "{{ ldap_bind_dn }}"
deny_groups: bind_pw: "{{ ldap_bind_pw }}"
- developers server_uri: "{{ ldap_server_uri }}"
state: absent dn: "{{ item.dn }}"
register: result attributes: "{{ item.attr }}"
check_mode: yes objectClass: "{{ item.class }}"
with_items: "{{ ldap_resources.units }}"
- name: assert nothing found candidate for prune - name: Create LDAP Groups
assert: openshift_ldap_entry:
that: bind_dn: "{{ ldap_bind_dn }}"
- result is not changed bind_pw: "{{ ldap_bind_pw }}"
- result.groups | length == 0 server_uri: "{{ ldap_server_uri }}"
dn: "{{ item.dn }}"
attributes: "{{ item.attr }}"
objectClass: "{{ item.class }}"
with_items: "{{ ldap_resources.groups }}"
- name: Prune groups - name: Create LDAP users
community.okd.openshift_adm_groups_sync: openshift_ldap_entry:
config: "{{ config_simple_tolerate_not_found }}" bind_dn: "{{ ldap_bind_dn }}"
state: absent bind_pw: "{{ ldap_bind_pw }}"
register: result server_uri: "{{ ldap_server_uri }}"
dn: "{{ item.dn }}"
attributes: "{{ item.attr }}"
objectClass: "{{ item.class }}"
with_items: "{{ ldap_resources.users }}"
- name: Validate result is changed - name: Load test configurations
assert: set_fact:
that: configs: "{{ lookup('template', 'rfc2307/sync-config.j2') | from_yaml }}"
- result is changed
- result.groups | length == 1
- name: Get developers group info - name: Synchronize Groups
kubernetes.core.k8s_info: community.okd.openshift_adm_groups_sync:
kind: Group config: "{{ configs.simple }}"
version: "user.openshift.io/v1" check_mode: yes
name: developers register: result
register: result
- name: assert group was deleted - name: Validate Group going to be created
assert: assert:
that: that:
- result.resources | length == 0 - result is changed
- admins_group
- devs_group
- '"jane.smith@ansible.org" in {{ admins_group.users }}'
- '"jim.adams@ansible.org" in {{ devs_group.users }}'
- '"jordanbulls@ansible.org" in {{ devs_group.users }}'
- admins_group.users | length == 1
- devs_group.users | length == 2
vars:
admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'admins') | first }}"
devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"
- name: Get admins group info - name: Synchronize Groups - User defined mapping
kubernetes.core.k8s_info: community.okd.openshift_adm_groups_sync:
kind: Group config: "{{ configs.user_defined }}"
version: "user.openshift.io/v1" check_mode: yes
name: admins register: result
register: result
- name: assert group was not deleted - name: Validate Group going to be created
assert: assert:
that: that:
- result.resources | length == 1 - result is changed
- admins_group
- devs_group
- '"jane.smith@ansible.org" in {{ admins_group.users }}'
- '"jim.adams@ansible.org" in {{ devs_group.users }}'
- '"jordanbulls@ansible.org" in {{ devs_group.users }}'
- admins_group.users | length == 1
- devs_group.users | length == 2
vars:
admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-admins') | first }}"
devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-devs') | first }}"
- name: Prune groups once again (idempotency) - name: Synchronize Groups - Using dn for every query
community.okd.openshift_adm_groups_sync: community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}" config: "{{ configs.dn_everywhere }}"
state: absent check_mode: yes
register: result register: result
- name: Assert nothing changed - name: Validate Group going to be created
assert: assert:
that: that:
- result is not changed - result is changed
- result.groups | length == 0 - admins_group
- devs_group
- '"cn=Jane,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ admins_group.users }}'
- '"cn=Jim,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ devs_group.users }}'
- '"cn=Jordan,ou=people,ou=rfc2307,{{ ldap_root }}" in {{ devs_group.users }}'
- admins_group.users | length == 1
- devs_group.users | length == 2
vars:
admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'cn=admins,ou=groups,ou=rfc2307,' + ldap_root ) | first }}"
devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'cn=developers,ou=groups,ou=rfc2307,' + ldap_root ) | first }}"
- name: Synchronize Groups - Partially user defined mapping
community.okd.openshift_adm_groups_sync:
config: "{{ configs.partially_user_defined }}"
check_mode: yes
register: result
- name: Validate Group going to be created
assert:
that:
- result is changed
- admins_group
- devs_group
- '"jane.smith@ansible.org" in {{ admins_group.users }}'
- '"jim.adams@ansible.org" in {{ devs_group.users }}'
- '"jordanbulls@ansible.org" in {{ devs_group.users }}'
- admins_group.users | length == 1
- devs_group.users | length == 2
vars:
admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'ansible-admins') | first }}"
devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"
- name: Delete Group 'engineers' if created before
community.okd.k8s:
state: absent
kind: Group
version: "user.openshift.io/v1"
name: 'engineers'
wait: yes
ignore_errors: yes
- name: Synchronize Groups - Out of scope member (should fail)
community.okd.openshift_adm_groups_sync:
config: "{{ configs.out_scope }}"
check_mode: yes
register: result
ignore_errors: yes
- name: Assert group sync failed due to non-existent member
assert:
that:
- result is failed
- result.msg.startswith("Entry not found for base='cn=Matthew,ou=people,ou=outrfc2307,{{ ldap_root }}'")
- name: Define sync configuration with tolerateMemberNotFoundErrors
set_fact:
config_out_of_scope_tolerate_not_found: "{{ configs.out_scope | combine({'rfc2307': merge_rfc2307 })}}"
vars:
merge_rfc2307: "{{ configs.out_scope.rfc2307 | combine({'tolerateMemberNotFoundErrors': 'true'}) }}"
- name: Synchronize Groups - Partially user defined mapping (tolerateMemberNotFoundErrors=true)
community.okd.openshift_adm_groups_sync:
config: "{{ config_out_of_scope_tolerate_not_found }}"
check_mode: yes
register: result
- name: Assert group sync did not fail (tolerateMemberNotFoundErrors=true)
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == 'engineers'
- result.groups.0.users == ['Abraham']
- name: Create Group 'engineers'
community.okd.k8s:
state: present
wait: yes
definition:
kind: Group
apiVersion: "user.openshift.io/v1"
metadata:
name: engineers
users: []
- name: Try to sync LDAP group with an existing OpenShift group not created by sync (should fail)
community.okd.openshift_adm_groups_sync:
config: "{{ config_out_of_scope_tolerate_not_found }}"
check_mode: yes
register: result
ignore_errors: yes
- name: Validate group sync failed
assert:
that:
- result is failed
- '"openshift.io/ldap.host label did not match sync host" in result.msg'
- name: Define allow_groups and deny_groups groups
set_fact:
allow_groups:
- "cn=developers,ou=groups,ou=rfc2307,{{ ldap_root }}"
deny_groups:
- "cn=admins,ou=groups,ou=rfc2307,{{ ldap_root }}"
- name: Synchronize Groups using allow_groups
community.okd.openshift_adm_groups_sync:
config: "{{ configs.simple }}"
allow_groups: "{{ allow_groups }}"
register: result
check_mode: yes
- name: Validate Group going to be created
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == "developers"
- name: Synchronize Groups using deny_groups
community.okd.openshift_adm_groups_sync:
config: "{{ configs.simple }}"
deny_groups: "{{ deny_groups }}"
register: result
check_mode: yes
- name: Validate Group going to be created
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == "developers"
- name: Synchronize groups (without check_mode)
community.okd.openshift_adm_groups_sync:
config: "{{ configs.simple }}"
register: result
- name: Validate result is changed
assert:
that:
- result is changed
- name: Read Groups
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: admins
register: result
- name: Validate group was created
assert:
that:
- result.resources | length == 1
- '"jane.smith@ansible.org" in {{ result.resources.0.users }}'
- name: Read Groups
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Validate group was created
assert:
that:
- result.resources | length == 1
- '"jim.adams@ansible.org" in {{ result.resources.0.users }}'
- '"jordanbulls@ansible.org" in {{ result.resources.0.users }}'
- name: Set users to delete (no admin users remain and only 1 developer is kept)
set_fact:
users_to_delete:
- "cn=Jane,ou=people,ou=rfc2307,{{ ldap_root }}"
- "cn=Jim,ou=people,ou=rfc2307,{{ ldap_root }}"
- name: Delete users from LDAP servers
openshift_ldap_entry:
bind_dn: "{{ ldap_bind_dn }}"
bind_pw: "{{ ldap_bind_pw }}"
server_uri: "{{ ldap_server_uri }}"
dn: "{{ item }}"
state: absent
with_items: "{{ users_to_delete }}"
- name: Define sync configuration with tolerateMemberNotFoundErrors
set_fact:
config_simple_tolerate_not_found: "{{ configs.simple | combine({'rfc2307': merge_rfc2307 })}}"
vars:
merge_rfc2307: "{{ configs.simple.rfc2307 | combine({'tolerateMemberNotFoundErrors': 'true'}) }}"
- name: Synchronize groups once again after users deletion
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
register: result
- name: Validate result is changed
assert:
that:
- result is changed
- name: Read Groups
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: admins
register: result
- name: Validate admins group no longer contains any users
assert:
that:
- result.resources | length == 1
- result.resources.0.users == []
- name: Read Groups
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Validate group was created
assert:
that:
- result.resources | length == 1
- '"jordanbulls@ansible.org" in {{ result.resources.0.users }}'
- name: Set group to delete
set_fact:
groups_to_delete:
- "cn=developers,ou=groups,ou=rfc2307,{{ ldap_root }}"
- name: Delete Group from LDAP servers
openshift_ldap_entry:
bind_dn: "{{ ldap_bind_dn }}"
bind_pw: "{{ ldap_bind_pw }}"
server_uri: "{{ ldap_server_uri }}"
dn: "{{ item }}"
state: absent
with_items: "{{ groups_to_delete }}"
- name: Prune groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
state: absent
register: result
check_mode: yes
- name: Validate that only developers group is candidate for Prune
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == "developers"
- name: Read Group (validate that check_mode did not perform updates in the cluster)
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Assert group was found
assert:
that:
- result.resources | length == 1
- name: Prune using allow_groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
allow_groups:
- developers
state: absent
register: result
check_mode: yes
- name: Assert developers group was a candidate for prune
assert:
that:
- result is changed
- result.groups | length == 1
- result.groups.0.metadata.name == "developers"
- name: Prune using deny_groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
deny_groups:
- developers
state: absent
register: result
check_mode: yes
- name: Assert no group was found as candidate for prune
assert:
that:
- result is not changed
- result.groups | length == 0
- name: Prune groups
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
state: absent
register: result
- name: Validate result is changed
assert:
that:
- result is changed
- result.groups | length == 1
- name: Get developers group info
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: developers
register: result
- name: Assert group was deleted
assert:
that:
- result.resources | length == 0
- name: Get admins group info
kubernetes.core.k8s_info:
kind: Group
version: "user.openshift.io/v1"
name: admins
register: result
- name: Assert group was not deleted
assert:
that:
- result.resources | length == 1
- name: Prune groups once again (idempotency)
community.okd.openshift_adm_groups_sync:
config: "{{ config_simple_tolerate_not_found }}"
state: absent
register: result
- name: Assert nothing changed
assert:
that:
- result is not changed
- result.groups | length == 0
always:
- name: Delete openshift groups if existing


@@ -1,293 +1,294 @@
---
- block:
- set_fact:
test_sa: "clusterrole-sa"
test_ns: "clusterrole-ns"
- name: Ensure namespace
kubernetes.core.k8s:
kind: Namespace
name: "{{ test_ns }}"
- name: Get cluster information
kubernetes.core.k8s_cluster_info:
register: cluster_info
no_log: true
- set_fact:
cluster_host: "{{ cluster_info['connection']['host'] }}"
- name: Create Service account
kubernetes.core.k8s:
definition:
apiVersion: v1
kind: ServiceAccount
metadata:
name: "{{ test_sa }}"
namespace: "{{ test_ns }}"
- name: Read Service Account
kubernetes.core.k8s_info:
kind: ServiceAccount
namespace: "{{ test_ns }}"
name: "{{ test_sa }}"
register: result
- set_fact:
secret_token: "{{ result.resources[0]['secrets'][0]['name'] }}"
- name: Get secret details
kubernetes.core.k8s_info:
kind: Secret
namespace: '{{ test_ns }}'
name: '{{ secret_token }}'
register: _secret
retries: 10
delay: 10
until:
- ("'openshift.io/token-secret.value' in _secret.resources[0]['metadata']['annotations']") or ("'token' in _secret.resources[0]['data']")
- set_fact:
api_token: "{{ _secret.resources[0]['metadata']['annotations']['openshift.io/token-secret.value'] }}"
when: "'openshift.io/token-secret.value' in _secret.resources[0]['metadata']['annotations']"
- set_fact:
api_token: "{{ _secret.resources[0]['data']['token'] | b64decode }}"
when: "'token' in _secret.resources[0]['data']"
- name: list Node should failed (forbidden user)
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Node
register: error
ignore_errors: true
- assert:
that:
- '"nodes is forbidden: User" in error.msg'
- name: list Pod for all namespace should failed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
register: error
ignore_errors: true
- assert:
that:
- '"pods is forbidden: User" in error.msg'
- name: list Pod for test namespace should failed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
namespace: "{{ test_ns }}"
register: error
ignore_errors: true
- assert:
that:
- '"pods is forbidden: User" in error.msg'
- set_fact:
test_labels:
phase: dev
cluster_roles:
- name: pod-manager
resources:
- pods
verbs:
- list
api_version_binding: "authorization.openshift.io/v1"
- name: node-manager
resources:
- nodes
verbs:
- list
api_version_binding: "rbac.authorization.k8s.io/v1"
- name: Create cluster roles
kubernetes.core.k8s:
definition:
kind: ClusterRole
apiVersion: "rbac.authorization.k8s.io/v1"
metadata:
name: "{{ item.name }}"
labels: "{{ test_labels }}"
rules:
- apiGroups: [""]
resources: "{{ item.resources }}"
verbs: "{{ item.verbs }}"
with_items: '{{ cluster_roles }}'
- name: Create Role Binding (namespaced)
kubernetes.core.k8s:
definition:
kind: RoleBinding
apiVersion: "rbac.authorization.k8s.io/v1"
metadata:
name: "{{ cluster_roles[0].name }}-binding"
namespace: "{{ test_ns }}"
labels: "{{ test_labels }}"
subjects:
- kind: ServiceAccount
name: "{{ test_sa }}"
namespace: "{{ test_ns }}"
apiGroup: ""
roleRef:
kind: ClusterRole
name: "{{ cluster_roles[0].name }}"
apiGroup: ""
- name: list Pod for all namespace should failed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
register: error
ignore_errors: true
- assert:
that:
- '"pods is forbidden: User" in error.msg'
- name: list Pod for test namespace should succeed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
namespace: "{{ test_ns }}"
no_log: true
- name: Create Cluster role Binding
kubernetes.core.k8s:
definition:
kind: ClusterRoleBinding
apiVersion: "{{ item.api_version_binding }}"
metadata:
name: "{{ item.name }}-binding"
labels: "{{ test_labels }}"
subjects:
- kind: ServiceAccount
name: "{{ test_sa }}"
namespace: "{{ test_ns }}"
apiGroup: ""
roleRef:
kind: ClusterRole
name: "{{ item.name }}"
apiGroup: ""
with_items: "{{ cluster_roles }}"
- name: list Pod for all namespace should succeed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
no_log: true
- name: list Pod for test namespace should succeed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
namespace: "{{ test_ns }}"
no_log: true
- name: list Node using ServiceAccount
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Node
namespace: "{{ test_ns }}"
no_log: true
- name: Prune clusterroles (check mode)
community.okd.openshift_adm_prune_auth:
resource: clusterroles
label_selectors:
- phase=dev
register: check
check_mode: true
- name: validate clusterrole binding candidates for prune
assert:
that:
- '"{{ item.name }}-binding" in check.cluster_role_binding'
- '"{{ test_ns }}/{{ cluster_roles[0].name }}-binding" in check.role_binding'
with_items: "{{ cluster_roles }}"
- name: Prune Cluster Role for managing Pod
community.okd.openshift_adm_prune_auth:
resource: clusterroles
name: "{{ cluster_roles[0].name }}"
- name: list Pod for all namespace should failed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
register: error
no_log: true
ignore_errors: true
- assert:
that:
- '"pods is forbidden: User" in error.msg'
- name: list Pod for test namespace should failed
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Pod
namespace: "{{ test_ns }}"
register: error
no_log: true
ignore_errors: true
- assert:
that:
- '"pods is forbidden: User" in error.msg'
- name: list Node using ServiceAccount
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Node
namespace: "{{ test_ns }}"
no_log: true
- name: Prune clusterroles (remaining)
community.okd.openshift_adm_prune_auth:
resource: clusterroles
label_selectors:
- phase=dev
- name: list Node using ServiceAccount should fail
kubernetes.core.k8s_info:
api_key: "{{ api_token }}"
host: "{{ cluster_host }}"
validate_certs: no
kind: Node
namespace: "{{ test_ns }}"
register: error
ignore_errors: true
- assert:
that:
- '"nodes is forbidden: User" in error.msg'
always:
- name: Ensure namespace is deleted


@@ -1,335 +1,336 @@
---
- block: - block:
- set_fact: - set_fact:
test_ns: "prune-roles" test_ns: "prune-roles"
sa_name: "roles-sa" sa_name: "roles-sa"
pod_name: "pod-prune" pod_name: "pod-prune"
role_definition: role_definition:
- name: pod-list - name: pod-list
labels: labels:
action: list action: list
verbs: verbs:
- list - list
role_binding: role_binding:
api_version: rbac.authorization.k8s.io/v1 api_version: rbac.authorization.k8s.io/v1
- name: pod-create - name: pod-create
labels: labels:
action: create action: create
verbs: verbs:
- create - create
- get - get
role_binding: role_binding:
api_version: authorization.openshift.io/v1 api_version: authorization.openshift.io/v1
- name: pod-delete - name: pod-delete
labels: labels:
action: delete action: delete
verbs: verbs:
- delete - delete
role_binding: role_binding:
api_version: rbac.authorization.k8s.io/v1 api_version: rbac.authorization.k8s.io/v1
- name: Ensure namespace - name: Ensure namespace
kubernetes.core.k8s: kubernetes.core.k8s:
kind: Namespace kind: Namespace
name: '{{ test_ns }}' name: '{{ test_ns }}'
- name: Get cluster information - name: Get cluster information
kubernetes.core.k8s_cluster_info: kubernetes.core.k8s_cluster_info:
register: cluster_info register: cluster_info
no_log: true no_log: true
- set_fact: - set_fact:
cluster_host: "{{ cluster_info['connection']['host'] }}" cluster_host: "{{ cluster_info['connection']['host'] }}"
- name: Create Service account - name: Create Service account
kubernetes.core.k8s: kubernetes.core.k8s:
definition: definition:
apiVersion: v1 apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: '{{ sa_name }}'
            namespace: '{{ test_ns }}'
    - name: Read Service Account
      kubernetes.core.k8s_info:
        kind: ServiceAccount
        namespace: '{{ test_ns }}'
        name: '{{ sa_name }}'
      register: sa_out
    - set_fact:
        secret_token: "{{ sa_out.resources[0]['secrets'][0]['name'] }}"
    - name: Get secret details
      kubernetes.core.k8s_info:
        kind: Secret
        namespace: '{{ test_ns }}'
        name: '{{ secret_token }}'
      register: r_secret
      retries: 10
      delay: 10
      until:
        - ("'openshift.io/token-secret.value' in r_secret.resources[0]['metadata']['annotations']") or ("'token' in r_secret.resources[0]['data']")
    - set_fact:
        api_token: "{{ r_secret.resources[0]['metadata']['annotations']['openshift.io/token-secret.value'] }}"
      when: "'openshift.io/token-secret.value' in r_secret.resources[0]['metadata']['annotations']"
    - set_fact:
        api_token: "{{ r_secret.resources[0]['data']['token'] | b64decode }}"
      when: "'token' in r_secret.resources[0]['data']"
    - name: list resources using service account
      kubernetes.core.k8s_info:
        api_key: '{{ api_token }}'
        host: '{{ cluster_host }}'
        validate_certs: no
        kind: Pod
        namespace: '{{ test_ns }}'
      register: error
      ignore_errors: true
    - assert:
        that:
          - '"pods is forbidden: User" in error.msg'
    - name: Create a role to manage Pod from namespace "{{ test_ns }}"
      kubernetes.core.k8s:
        definition:
          kind: Role
          apiVersion: rbac.authorization.k8s.io/v1
          metadata:
            namespace: "{{ test_ns }}"
            name: "{{ item.name }}"
            labels: "{{ item.labels }}"
          rules:
            - apiGroups: [""]
              resources: ["pods"]
              verbs: "{{ item.verbs }}"
      with_items: "{{ role_definition }}"
    - name: Create Role Binding
      kubernetes.core.k8s:
        definition:
          kind: RoleBinding
          apiVersion: "{{ item.role_binding.api_version }}"
          metadata:
            name: "{{ item.name }}-bind"
            namespace: "{{ test_ns }}"
          subjects:
            - kind: ServiceAccount
              name: "{{ sa_name }}"
              namespace: "{{ test_ns }}"
              apiGroup: ""
          roleRef:
            kind: Role
            name: "{{ item.name }}"
            namespace: "{{ test_ns }}"
            apiGroup: ""
      with_items: "{{ role_definition }}"
    - name: Create Pod should succeed
      kubernetes.core.k8s:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        definition:
          kind: Pod
          metadata:
            name: "{{ pod_name }}"
          spec:
            containers:
              - name: python
                image: python:3.7-alpine
                command:
                  - /bin/sh
                  - -c
                  - while true; do echo $(date); sleep 15; done
                imagePullPolicy: IfNotPresent
      register: result
    - name: assert pod creation succeeded
      assert:
        that:
          - result is successful
    - name: List Pod
      kubernetes.core.k8s_info:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        kind: Pod
      register: result
    - name: assert user is still authorized to list pods
      assert:
        that:
          - result is successful
    - name: Prune auth roles (check mode)
      community.okd.openshift_adm_prune_auth:
        resource: roles
        namespace: "{{ test_ns }}"
      register: check
      check_mode: true
    - name: validate that listed role bindings are candidates for prune
      assert:
        that: '"{{ test_ns }}/{{ item.name }}-bind" in check.role_binding'
      with_items: "{{ role_definition }}"
    - name: Prune resource using label_selectors option
      community.okd.openshift_adm_prune_auth:
        resource: roles
        namespace: "{{ test_ns }}"
        label_selectors:
          - action=delete
      register: prune
    - name: assert that role binding 'delete' was pruned
      assert:
        that:
          - prune is changed
          - '"{{ test_ns }}/{{ role_definition[2].name }}-bind" in check.role_binding'
    - name: assert that user could not delete pod anymore
      kubernetes.core.k8s:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        state: absent
        namespace: "{{ test_ns }}"
        kind: Pod
        name: "{{ pod_name }}"
      register: result
      ignore_errors: true
    - name: assert pod deletion failed due to forbidden user
      assert:
        that:
          - '"forbidden: User" in result.msg'
    - name: List Pod
      kubernetes.core.k8s_info:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        kind: Pod
      register: result
    - name: assert user is still able to list pods
      assert:
        that:
          - result is successful
    - name: Create Pod should succeed
      kubernetes.core.k8s:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        definition:
          kind: Pod
          metadata:
            name: "{{ pod_name }}-1"
          spec:
            containers:
              - name: python
                image: python:3.7-alpine
                command:
                  - /bin/sh
                  - -c
                  - while true; do echo $(date); sleep 15; done
                imagePullPolicy: IfNotPresent
      register: result
    - name: assert user is still authorized to create pods
      assert:
        that:
          - result is successful
    - name: Prune role using name
      community.okd.openshift_adm_prune_auth:
        resource: roles
        namespace: "{{ test_ns }}"
        name: "{{ role_definition[1].name }}"
      register: prune
    - name: assert that role binding 'create' was pruned
      assert:
        that:
          - prune is changed
          - '"{{ test_ns }}/{{ role_definition[1].name }}-bind" in check.role_binding'
    - name: Create Pod (should fail)
      kubernetes.core.k8s:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        definition:
          kind: Pod
          metadata:
            name: "{{ pod_name }}-2"
          spec:
            containers:
              - name: python
                image: python:3.7-alpine
                command:
                  - /bin/sh
                  - -c
                  - while true; do echo $(date); sleep 15; done
                imagePullPolicy: IfNotPresent
      register: result
      ignore_errors: true
    - name: assert user is not authorized to create pods anymore
      assert:
        that:
          - '"forbidden: User" in result.msg'
    - name: List Pod
      kubernetes.core.k8s_info:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        kind: Pod
      register: result
    - name: assert user is still able to list pods
      assert:
        that:
          - result is successful
    - name: Prune all roles for namespace (neither name nor label_selectors are specified)
      community.okd.openshift_adm_prune_auth:
        resource: roles
        namespace: "{{ test_ns }}"
      register: prune
    - name: assert that role binding 'list' was pruned
      assert:
        that:
          - prune is changed
          - '"{{ test_ns }}/{{ role_definition[0].name }}-bind" in check.role_binding'
    - name: List Pod
      kubernetes.core.k8s_info:
        api_key: "{{ api_token }}"
        host: "{{ cluster_host }}"
        validate_certs: no
        namespace: "{{ test_ns }}"
        kind: Pod
      register: result
      ignore_errors: true
    - name: assert user is not authorized to list pods anymore
      assert:
        that:
          - '"forbidden: User" in result.msg'
  always:
    - name: Ensure namespace is deleted
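A note on the token-retrieval logic earlier in this file: depending on the cluster version, the service-account token is exposed either as the `openshift.io/token-secret.value` annotation or as a base64-encoded `token` key in the secret data, which is why two conditional `set_fact` tasks are needed. A minimal Python sketch of the same fallback, assuming dict-shaped secrets like those returned by `k8s_info` (the function name is illustrative, not part of the collection):

```python
import base64

def extract_api_token(secret):
    """Prefer the OpenShift token annotation; fall back to the
    base64-encoded 'token' data key, mirroring the two set_fact tasks."""
    annotations = secret.get("metadata", {}).get("annotations", {})
    if "openshift.io/token-secret.value" in annotations:
        return annotations["openshift.io/token-secret.value"]
    data = secret.get("data", {})
    if "token" in data:
        # Same transformation as the b64decode Jinja filter
        return base64.b64decode(data["token"]).decode()
    raise KeyError("no token found in secret")

# Example with a standard token secret (no OpenShift annotation)
secret = {"metadata": {"annotations": {}},
          "data": {"token": base64.b64encode(b"sha256~abc123").decode()}}
print(extract_api_token(secret))  # sha256~abc123
```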

---
- name: Prune deployments
  block:
    - set_fact:
        dc_name: "hello"
        deployment_ns: "prune-deployments"
        deployment_ns_2: "prune-deployments-2"
    - name: Ensure namespace
      community.okd.k8s:
        kind: Namespace
        name: '{{ deployment_ns }}'
    - name: Create deployment config
      community.okd.k8s:
        namespace: '{{ deployment_ns }}'
        definition:
          kind: DeploymentConfig
          apiVersion: apps.openshift.io/v1
          metadata:
            name: '{{ dc_name }}'
          spec:
            replicas: 1
            selector:
              name: '{{ dc_name }}'
            template:
              metadata:
                labels:
                  name: '{{ dc_name }}'
              spec:
                containers:
                  - name: hello-openshift
                    imagePullPolicy: IfNotPresent
                    image: python:3.7-alpine
                    command: ["/bin/sh", "-c", "while true;do date;sleep 2s; done"]
        wait: yes
    - name: prune deployments (no candidate DeploymentConfig)
      community.okd.openshift_adm_prune_deployments:
        namespace: "{{ deployment_ns }}"
      register: test_prune
    - assert:
        that:
          - test_prune is not changed
          - test_prune.replication_controllers | length == 0
    - name: Update DeploymentConfig - set replicas to 0
      community.okd.k8s:
        namespace: "{{ deployment_ns }}"
        definition:
          kind: DeploymentConfig
          apiVersion: "apps.openshift.io/v1"
          metadata:
            name: "{{ dc_name }}"
          spec:
            replicas: 0
            selector:
              name: "{{ dc_name }}"
            template:
              metadata:
                labels:
                  name: "{{ dc_name }}"
              spec:
                containers:
                  - name: hello-openshift
                    imagePullPolicy: IfNotPresent
                    image: python:3.7-alpine
                    command: ["/bin/sh", "-c", "while true;do date;sleep 2s; done"]
        wait: yes
    - name: Wait for ReplicationController candidate for pruning
      kubernetes.core.k8s_info:
        kind: ReplicationController
        namespace: "{{ deployment_ns }}"
      register: result
      retries: 10
      delay: 30
      until:
        - result.resources.0.metadata.annotations["openshift.io/deployment.phase"] in ("Failed", "Complete")
    - name: Prune deployments - should delete 1 ReplicationController
      community.okd.openshift_adm_prune_deployments:
        namespace: "{{ deployment_ns }}"
      check_mode: yes
      register: test_prune
    - name: Read ReplicationController
      kubernetes.core.k8s_info:
        kind: ReplicationController
        namespace: "{{ deployment_ns }}"
      register: replications
    - name: Assert that Replication controller was not deleted
      assert:
        that:
          - replications.resources | length == 1
          - 'replications.resources.0.metadata.name is match("{{ dc_name }}-*")'
    - name: Assure that candidate ReplicationController was found for pruning
      assert:
        that:
          - test_prune is changed
          - test_prune.replication_controllers | length == 1
          - test_prune.replication_controllers.0.metadata.name == replications.resources.0.metadata.name
          - test_prune.replication_controllers.0.metadata.namespace == replications.resources.0.metadata.namespace
    - name: Prune deployments - keep younger than 45min (check_mode)
      community.okd.openshift_adm_prune_deployments:
        keep_younger_than: 45
        namespace: "{{ deployment_ns }}"
      check_mode: true
      register: keep_younger
    - name: assert no candidate was found
      assert:
        that:
          - keep_younger is not changed
          - keep_younger.replication_controllers == []
    - name: Ensure second namespace is created
      community.okd.k8s:
        kind: Namespace
        name: '{{ deployment_ns_2 }}'
    - name: Create deployment config from 2nd namespace
      community.okd.k8s:
        namespace: '{{ deployment_ns_2 }}'
        definition:
          kind: DeploymentConfig
          apiVersion: apps.openshift.io/v1
          metadata:
            name: '{{ dc_name }}2'
          spec:
            replicas: 1
            selector:
              name: '{{ dc_name }}2'
            template:
              metadata:
                labels:
                  name: '{{ dc_name }}2'
              spec:
                containers:
                  - name: hello-openshift
                    imagePullPolicy: IfNotPresent
                    image: python:3.7-alpine
                    command: ["/bin/sh", "-c", "while true;do date;sleep 2s; done"]
        wait: yes
    - name: Stop deployment config - replicas = 0
      community.okd.k8s:
        namespace: '{{ deployment_ns_2 }}'
        definition:
          kind: DeploymentConfig
          apiVersion: apps.openshift.io/v1
          metadata:
            name: '{{ dc_name }}2'
          spec:
            replicas: 0
            selector:
              name: '{{ dc_name }}2'
            template:
              metadata:
                labels:
                  name: '{{ dc_name }}2'
              spec:
                containers:
                  - name: hello-openshift
                    imagePullPolicy: IfNotPresent
                    image: python:3.7-alpine
                    command: ["/bin/sh", "-c", "while true;do date;sleep 2s; done"]
        wait: yes
    - name: Wait for ReplicationController candidate for pruning
      kubernetes.core.k8s_info:
        kind: ReplicationController
        namespace: "{{ deployment_ns_2 }}"
      register: result
      retries: 10
      delay: 30
      until:
        - result.resources.0.metadata.annotations["openshift.io/deployment.phase"] in ("Failed", "Complete")
    # Prune from one namespace should not have any effect on other namespaces
    - name: Prune deployments from 2nd namespace
      community.okd.openshift_adm_prune_deployments:
        namespace: "{{ deployment_ns_2 }}"
      check_mode: yes
      register: test_prune
    - name: Assure that candidate ReplicationController was found for pruning
      assert:
        that:
          - test_prune is changed
          - test_prune.replication_controllers | length == 1
          - "test_prune.replication_controllers.0.metadata.namespace == deployment_ns_2"
    # Prune without namespace option
    - name: Prune from all namespaces should update more deployments
      community.okd.openshift_adm_prune_deployments:
      check_mode: yes
      register: no_namespace_prune
    - name: Assure multiple ReplicationControllers were found for pruning
      assert:
        that:
          - no_namespace_prune is changed
          - no_namespace_prune.replication_controllers | length == 2
    # Execute Prune from 2nd namespace
    - name: Read ReplicationController before Prune operation
      kubernetes.core.k8s_info:
        kind: ReplicationController
        namespace: "{{ deployment_ns_2 }}"
      register: replications
    - assert:
        that:
          - replications.resources | length == 1
    - name: Prune DeploymentConfig from 2nd namespace
      community.okd.openshift_adm_prune_deployments:
        namespace: "{{ deployment_ns_2 }}"
      register: _prune
    - name: Assert DeploymentConfig was deleted
      assert:
        that:
          - _prune is changed
          - _prune.replication_controllers | length == 1
          - _prune.replication_controllers.0.details.name == replications.resources.0.metadata.name
    # Execute Prune without namespace option
    - name: Read ReplicationController before Prune operation
      kubernetes.core.k8s_info:
        kind: ReplicationController
        namespace: "{{ deployment_ns }}"
      register: replications
    - assert:
        that:
          - replications.resources | length == 1
    - name: Prune from all namespaces should update more deployments
      community.okd.openshift_adm_prune_deployments:
      register: _prune
    - name: Assure multiple ReplicationControllers were found for pruning
      assert:
        that:
          - _prune is changed
          - _prune.replication_controllers | length > 0
  always:
    - name: Delete 1st namespace

---
- block:
    - set_fact:
        build_ns: "builds"
        build_config: "start-build"
        is_name: "ruby"
        prune_build: "prune-build"
    - name: Ensure namespace
      kubernetes.core.k8s:
        kind: Namespace
        name: "{{ build_ns }}"
    - name: Create ImageStream
      community.okd.k8s:
        namespace: "{{ build_ns }}"
        definition:
          apiVersion: image.openshift.io/v1
          kind: ImageStream
          metadata:
            name: "{{ is_name }}"
          spec:
            lookupPolicy:
              local: false
            tags: []
    - name: Create build configuration
      community.okd.k8s:
        namespace: "{{ build_ns }}"
        definition:
          kind: BuildConfig
          apiVersion: build.openshift.io/v1
          metadata:
            name: "{{ build_config }}"
          spec:
            source:
              dockerfile: |
                FROM openshift/ruby-22-centos7
                RUN sleep 60s
                USER ansible
            strategy:
              type: Docker
            output:
              to:
                kind: "ImageStreamTag"
                name: "{{ is_name }}:latest"
    - name: Start Build from Build configuration
      community.okd.openshift_build:
        namespace: "{{ build_ns }}"
        build_config_name: "{{ build_config }}"
      register: new_build
    - name: Assert that a build has been created
      assert:
        that:
          - new_build is changed
          - new_build.builds.0.metadata.name == "{{ build_config }}-1"
    - name: Start a new Build from previous Build
      community.okd.openshift_build:
        namespace: "{{ build_ns }}"
        build_name: "{{ new_build.builds.0.metadata.name }}"
      register: rerun_build
    - name: Assert that another build has been created
      assert:
        that:
          - rerun_build is changed
          - rerun_build.builds.0.metadata.name == "{{ build_config }}-2"
    - name: Cancel first build created
      community.okd.openshift_build:
        namespace: "{{ build_ns }}"
        build_name: "{{ build_config }}-1"
        state: cancelled
        wait: yes
      register: cancel
    - name: Assert that the Build was cancelled
      assert:
        that:
          - cancel is changed
          - cancel.builds | length == 1
          - cancel.builds.0.metadata.name == "{{ build_config }}-1"
          - cancel.builds.0.metadata.namespace == "{{ build_ns }}"
          - '"cancelled" in cancel.builds.0.status'
          - cancel.builds.0.status.cancelled
    - name: Get info for 1st Build
      kubernetes.core.k8s_info:
        version: build.openshift.io/v1
        kind: Build
        namespace: "{{ build_ns }}"
        name: "{{ cancel.builds.0.metadata.name }}"
      register: build
    - name: Assert that build phase is cancelled
      assert:
        that:
          - build.resources | length == 1
          - '"cancelled" in build.resources.0.status'
          - build.resources.0.status.cancelled
          - build.resources.0.status.phase == 'Cancelled'
    - name: Cancel and restart Build using build config name
      community.okd.openshift_build:
        namespace: "{{ build_ns }}"
        build_config_name: "{{ build_config }}"
        state: restarted
        build_phases:
          - Pending
          - Running
          - New
      register: restart
    - name: assert that new build was created
      assert:
        that:
          - restart is changed
          - restart.builds | length == 1
          - 'restart.builds.0.metadata.name == "{{ build_config }}-3"'
    - name: Get info for 2nd Build
      kubernetes.core.k8s_info:
        version: build.openshift.io/v1
        kind: Build
        namespace: "{{ build_ns }}"
        name: "{{ build_config }}-2"
      register: build
    - name: Assert that build phase is cancelled
      assert:
        that:
          - build.resources | length == 1
          - '"cancelled" in build.resources.0.status'
          - build.resources.0.status.cancelled
          - build.resources.0.status.phase == 'Cancelled'
    - name: Get info for 3rd build
      kubernetes.core.k8s_info:
        version: build.openshift.io/v1
        kind: Build
        namespace: "{{ build_ns }}"
        name: "{{ build_config }}-3"
      register: build
    - name: Assert that Build is not cancelled
      assert:
        that:
          - build.resources | length == 1
          - '"cancelled" not in build.resources.0.status'
          - "build.resources.0.status.phase in ('New', 'Pending', 'Running')"
    - name: Prune Builds keep younger than 30min
      community.okd.openshift_adm_prune_builds:
        keep_younger_than: 30
        namespace: "{{ build_ns }}"
      register: prune
      check_mode: yes
    - name: Assert that no Builds were found
      assert:
        that:
          - not prune.changed
          - prune.builds | length == 0
    - name: Prune Builds without namespace
      community.okd.openshift_adm_prune_builds:
      register: prune_without_ns
      check_mode: yes
    - name: Assert that completed builds are candidates for prune
      assert:
        that:
          - prune_without_ns is changed
          - prune_without_ns.builds | length > 0
          - '"{{ build_config }}-1" in build_names'
          - '"{{ build_config }}-2" in build_names'
      vars:
        build_names: '{{ prune_without_ns.builds | map(attribute="metadata") | flatten | map(attribute="name") | list }}'
    - name: Prune Builds using namespace
      community.okd.openshift_adm_prune_builds:
        namespace: "{{ build_ns }}"
      register: prune_with_ns
      check_mode: yes
    - name: Assert that prune operation found the completed build
      assert:
        that:
          - prune_with_ns is changed
          - prune_with_ns.builds | length == 2
    - name: Check Build before prune
      kubernetes.core.k8s_info:
        kind: Build
        api_version: build.openshift.io/v1
        name: "{{ build_config }}-1"
        namespace: "{{ build_ns }}"
      register: resource
    - name: Validate that previous build operations executed with check_mode did not delete the build
      assert:
        that:
          - resource.resources | length == 1
    - name: Execute prune operation
      community.okd.openshift_adm_prune_builds:
        namespace: "{{ build_ns }}"
      register: prune
    - name: assert prune is changed
      assert:
        that:
          - prune is changed
    - name: Check Build
      kubernetes.core.k8s_info:
        kind: Build
        api_version: build.openshift.io/v1
name: "{{ build_config }}-1" name: "{{ build_config }}-1"
namespace: "{{ build_ns }}" namespace: "{{ build_ns }}"
register: resource register: resource
- name: Assert that the Build does not exist anymore - name: Assert that the Build does not exist anymore
assert: assert:
that: that:
- resource.resources | length == 0 - resource.resources | length == 0
- name: Check Build - name: Check Build
kubernetes.core.k8s_info: kubernetes.core.k8s_info:
kind: Build kind: Build
api_version: build.openshift.io/v1 api_version: build.openshift.io/v1
name: "{{ build_config }}-2" name: "{{ build_config }}-2"
namespace: "{{ build_ns }}" namespace: "{{ build_ns }}"
register: resource register: resource
- name: Assert that the Build does not exist anymore - name: Assert that the Build does not exist anymore
assert: assert:
that: that:
- resource.resources | length == 0 - resource.resources | length == 0
always: always:
- name: Ensure namespace is deleted - name: Ensure namespace is deleted
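The candidate selection these tasks exercise can be sketched in plain Python. This is a simplified model of the observed behaviour (completed builds older than `keep_younger_than` minutes are prunable, in-progress ones never are); `PRUNABLE_PHASES` and `prune_candidates` are illustrative assumptions, not code from the module.

```python
from datetime import datetime, timedelta, timezone

# Terminal build phases that are eligible for pruning (assumption based on
# the test expectations above, not taken from the module source).
PRUNABLE_PHASES = {"Complete", "Failed", "Error", "Cancelled"}


def prune_candidates(builds, keep_younger_than, now=None):
    """Return names of builds in a terminal phase created before the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=keep_younger_than)
    return [
        b["name"]
        for b in builds
        if b["phase"] in PRUNABLE_PHASES and b["created"] < cutoff
    ]


now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
builds = [
    {"name": "bc-1", "phase": "Complete", "created": now - timedelta(hours=2)},
    {"name": "bc-3", "phase": "Running", "created": now - timedelta(hours=2)},
    {"name": "bc-4", "phase": "Complete", "created": now - timedelta(minutes=5)},
]
print(prune_candidates(builds, keep_younger_than=30, now=now))  # → ['bc-1']
```

With `keep_younger_than: 30`, only the two-hour-old completed build qualifies, matching the check-mode assertions above.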


@@ -1,174 +1,175 @@
---
- name: Openshift import image testing
  block:
    - set_fact:
        test_ns: "import-images"

    - name: Ensure namespace
      community.okd.k8s:
        kind: Namespace
        name: '{{ test_ns }}'

    - name: Import image using tag (should import latest tag only)
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name: "ansible/awx"
      check_mode: yes
      register: import_tag

    - name: Assert only latest was imported
      assert:
        that:
          - import_tag is changed
          - import_tag.result | length == 1
          - import_tag.result.0.spec.import
          - import_tag.result.0.spec.images.0.from.kind == "DockerImage"
          - import_tag.result.0.spec.images.0.from.name == "ansible/awx"

    - name: check image stream
      kubernetes.core.k8s_info:
        kind: ImageStream
        namespace: "{{ test_ns }}"
        name: awx
      register: resource

    - name: assert that image stream is not created when using check_mode=yes
      assert:
        that:
          - resource.resources == []

    - name: Import image using tag (should import latest tag only)
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name: "ansible/awx"
      register: import_tag

    - name: Assert only latest was imported
      assert:
        that:
          - import_tag is changed

    - name: check image stream
      kubernetes.core.k8s_info:
        kind: ImageStream
        namespace: "{{ test_ns }}"
        name: awx
      register: resource

    - name: assert that image stream contains only tag latest
      assert:
        that:
          - resource.resources | length == 1
          - resource.resources.0.status.tags.0.tag == 'latest'

    - name: Import once again the latest tag
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name: "ansible/awx"
      register: import_tag

    - name: assert change was performed
      assert:
        that:
          - import_tag is changed

    - name: check image stream
      kubernetes.core.k8s_info:
        kind: ImageStream
        version: image.openshift.io/v1
        namespace: "{{ test_ns }}"
        name: awx
      register: resource

    - name: assert that image stream still contains unique tag
      assert:
        that:
          - resource.resources | length == 1
          - resource.resources.0.status.tags.0.tag == 'latest'

    - name: Import another tag
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name: "ansible/awx:17.1.0"
      register: import_another_tag
      ignore_errors: yes

    - name: assert that importing a missing tag failed
      assert:
        that:
          - import_another_tag is failed
          - '"the tag 17.1.0 does not exist on the image stream" in import_another_tag.msg'

    - name: Create simple ImageStream (without docker external container)
      community.okd.k8s:
        namespace: "{{ test_ns }}"
        name: "local-is"
        definition:
          apiVersion: image.openshift.io/v1
          kind: ImageStream
          spec:
            lookupPolicy:
              local: false
            tags: []

    - name: Import all tags for an image stream not pointing at an external container image should fail
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name: "local-is"
        all: true
      register: error_tag
      ignore_errors: true
      check_mode: yes

    - name: Assert module cannot import from non-existing tag from ImageStream
      assert:
        that:
          - error_tag is failed
          - 'error_tag.msg == "image stream {{ test_ns }}/local-is does not have tags pointing to external container images"'

    - name: import all tags for container image ibmcom/pause and specific tag for redhat/ubi8-micro
      community.okd.openshift_import_image:
        namespace: "{{ test_ns }}"
        name:
          - "ibmcom/pause"
          - "redhat/ubi8-micro:8.5-437"
        all: true
      register: multiple_import

    - name: Assert that import succeeded
      assert:
        that:
          - multiple_import is changed
          - multiple_import.result | length == 2

    - name: Read ibmcom/pause ImageStream
      kubernetes.core.k8s_info:
        version: image.openshift.io/v1
        kind: ImageStream
        namespace: "{{ test_ns }}"
        name: pause
      register: pause

    - name: assert that ibmcom/pause has multiple tags
      assert:
        that:
          - pause.resources | length == 1
          - pause.resources.0.status.tags | length > 1

    - name: Read redhat/ubi8-micro ImageStream
      kubernetes.core.k8s_info:
        version: image.openshift.io/v1
        kind: ImageStream
        namespace: "{{ test_ns }}"
        name: ubi8-micro
      register: resource

    - name: assert that redhat/ubi8-micro has only one tag
      assert:
        that:
          - resource.resources | length == 1
          - resource.resources.0.status.tags | length == 1
          - 'resource.resources.0.status.tags.0.tag == "8.5-437"'

  always:
    - name: Delete testing namespace
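The tasks above rely on `name` meaning "repository, optionally with a tag, defaulting to latest". That splitting rule can be modelled in a few lines; `split_image_name` is an illustrative helper that mirrors the observed behaviour, not the module's actual code.

```python
def split_image_name(name, default_tag="latest"):
    """Split 'repo[:tag]' into (repo, tag), defaulting the tag to 'latest'."""
    repo, sep, tag = name.rpartition(":")
    # rpartition splits on the last ':'; a '/' after it means that ':'
    # belonged to a registry port (e.g. registry:5000/repo), not a tag.
    if not sep or "/" in tag:
        return name, default_tag
    return repo, tag


print(split_image_name("ansible/awx"))         # → ('ansible/awx', 'latest')
print(split_image_name("ansible/awx:17.1.0"))  # → ('ansible/awx', '17.1.0')
```

This is why the first import only creates the `latest` tag, while `ansible/awx:17.1.0` targets a specific (here non-existent) tag and fails.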


@@ -38,12 +38,12 @@
            name: "{{ pod_name }}"
          spec:
            containers:
              - name: test-container
                image: "{{ prune_registry }}/{{ prune_ns }}/{{ container.name }}:latest"
                command:
                  - /bin/sh
                  - -c
                  - while true;do date;sleep 5; done

    - name: Create limit range for images size
      community.okd.k8s:


@@ -19,10 +19,10 @@
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: docker.io/openshift/hello-openshift
          ports:
            - containerPort: 8080

- name: Create Service
  community.okd.k8s:
@@ -35,8 +35,8 @@
      namespace: default
    spec:
      ports:
        - port: 80
          targetPort: 8080
      selector:
        app: hello-kubernetes


@@ -64,14 +64,16 @@ okd_dc_triggers:
okd_dc_spec:
  template: '{{ k8s_pod_template }}'
-  triggers: '{{ okd_dc_triggers }}'
+  selector:
+    matchLabels:
+      app: "{{ k8s_pod_name }}"
  replicas: 1
  strategy:
    type: Recreate

okd_dc_template:
-  apiVersion: v1
-  kind: DeploymentConfig
+  apiVersion: apps/v1
+  kind: Deployment
  spec: '{{ okd_dc_spec }}'

okd_imagestream_template:
@@ -83,12 +85,12 @@ okd_imagestream_template:
  lookupPolicy:
    local: true
  tags:
    - annotations: null
      from:
        kind: DockerImage
        name: '{{ image }}'
      name: '{{ image_tag }}'
      referencePolicy:
        type: Source

image_tag: latest
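The diff above swaps the deprecated `DeploymentConfig` for a `Deployment`, which is why a `selector` block appears: unlike DeploymentConfig, an `apps/v1` Deployment requires `spec.selector` to match the pod template labels. A sketch of how the vars compose, with plain dicts standing in for the Jinja2 templating (variable names mirror the vars file; the literal values are placeholders):

```python
# Placeholder pod name/template standing in for the templated vars.
k8s_pod_name = "test-pod"
k8s_pod_template = {"metadata": {"labels": {"app": k8s_pod_name}}}

okd_dc_spec = {
    "template": k8s_pod_template,
    "selector": {"matchLabels": {"app": k8s_pod_name}},
    "replicas": 1,
    "strategy": {"type": "Recreate"},
}

okd_dc_template = {
    "apiVersion": "apps/v1",  # was apiVersion: v1 / kind: DeploymentConfig
    "kind": "Deployment",
    "spec": okd_dc_spec,
}

# apps/v1 Deployments are rejected unless the selector matches the
# pod template labels; the added selector block satisfies that.
assert (
    okd_dc_template["spec"]["selector"]["matchLabels"]
    == okd_dc_template["spec"]["template"]["metadata"]["labels"]
)
print(okd_dc_template["kind"])
```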


@@ -17,10 +17,11 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

__metaclass__ = type

-DOCUMENTATION = '''
+DOCUMENTATION = """
author:
  - xuxinkun (@xuxinkun)
@@ -145,29 +146,32 @@ DOCUMENTATION = '''
    env:
      - name: K8S_AUTH_VERIFY_SSL
    aliases: [ oc_verify_ssl ]
-'''
+"""

-from ansible_collections.kubernetes.core.plugins.connection.kubectl import Connection as KubectlConnection
+from ansible_collections.kubernetes.core.plugins.connection.kubectl import (
+    Connection as KubectlConnection,
+)

-CONNECTION_TRANSPORT = 'oc'
+CONNECTION_TRANSPORT = "oc"

-CONNECTION_OPTIONS = {
-    'oc_container': '-c',
-    'oc_namespace': '-n',
-    'oc_kubeconfig': '--kubeconfig',
-    'oc_context': '--context',
-    'oc_host': '--server',
-    'client_cert': '--client-certificate',
-    'client_key': '--client-key',
-    'ca_cert': '--certificate-authority',
-    'validate_certs': '--insecure-skip-tls-verify',
-    'oc_token': '--token'
-}
+CONNECTION_OPTIONS = {
+    "oc_container": "-c",
+    "oc_namespace": "-n",
+    "oc_kubeconfig": "--kubeconfig",
+    "oc_context": "--context",
+    "oc_host": "--server",
+    "client_cert": "--client-certificate",
+    "client_key": "--client-key",
+    "ca_cert": "--certificate-authority",
+    "validate_certs": "--insecure-skip-tls-verify",
+    "oc_token": "--token",
+}


class Connection(KubectlConnection):
-    ''' Local oc based connections '''
+    """Local oc based connections"""

    transport = CONNECTION_TRANSPORT
    connection_options = CONNECTION_OPTIONS
    documentation = DOCUMENTATION
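The `CONNECTION_OPTIONS` map is consumed by the inherited kubectl connection machinery, which turns each set option into a CLI flag on the `oc` command line. As a rough illustration of that mapping (`build_cmd` is a hypothetical helper, not part of the plugin's API):

```python
# Same option-to-flag map as the oc connection plugin above.
CONNECTION_OPTIONS = {
    "oc_container": "-c",
    "oc_namespace": "-n",
    "oc_kubeconfig": "--kubeconfig",
    "oc_context": "--context",
    "oc_host": "--server",
    "client_cert": "--client-certificate",
    "client_key": "--client-key",
    "ca_cert": "--certificate-authority",
    "validate_certs": "--insecure-skip-tls-verify",
    "oc_token": "--token",
}


def build_cmd(binary, option_values):
    """Append a flag/value pair for every option that has a value set."""
    cmd = [binary]
    for opt, flag in CONNECTION_OPTIONS.items():
        value = option_values.get(opt)
        if value:
            cmd.extend([flag, str(value)])
    return cmd


print(build_cmd("oc", {"oc_namespace": "testing", "oc_token": "xyz"}))
# → ['oc', '-n', 'testing', '--token', 'xyz']
```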


@@ -1,11 +1,11 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

__metaclass__ = type

-DOCUMENTATION = '''
+DOCUMENTATION = """
name: openshift
author:
  - Chris Houseknecht (@chouseknecht)
@@ -94,34 +94,41 @@ DOCUMENTATION = '''
  - "python >= 3.6"
  - "kubernetes >= 12.0.0"
  - "PyYAML >= 3.11"
-'''
+"""

-EXAMPLES = '''
+EXAMPLES = """
# File must be named openshift.yaml or openshift.yml

-# Authenticate with token, and return all pods and services for all namespaces
-plugin: community.okd.openshift
-connections:
-  - host: https://192.168.64.4:8443
-    api_key: xxxxxxxxxxxxxxxx
-    verify_ssl: false
+- name: Authenticate with token, and return all pods and services for all namespaces
+  plugin: community.okd.openshift
+  connections:
+    - host: https://192.168.64.4:8443
+      api_key: xxxxxxxxxxxxxxxx
+      verify_ssl: false

-# Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
-plugin: community.okd.openshift
-connections:
-  - namespaces:
-      - testing
+- name: Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
+  plugin: community.okd.openshift
+  connections:
+    - namespaces:
+        - testing

-# Use a custom config file, and a specific context.
-plugin: community.okd.openshift
-connections:
-  - kubeconfig: /path/to/config
-    context: 'awx/192-168-64-4:8443/developer'
-'''
+- name: Use a custom config file, and a specific context.
+  plugin: community.okd.openshift
+  connections:
+    - kubeconfig: /path/to/config
+      context: 'awx/192-168-64-4:8443/developer'
+"""

try:
-    from ansible_collections.kubernetes.core.plugins.inventory.k8s import K8sInventoryException, InventoryModule as K8sInventoryModule, format_dynamic_api_exc
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import get_api_client
+    from ansible_collections.kubernetes.core.plugins.inventory.k8s import (
+        K8sInventoryException,
+        InventoryModule as K8sInventoryModule,
+        format_dynamic_api_exc,
+    )
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import (
+        get_api_client,
+    )

    HAS_KUBERNETES_COLLECTION = True
except ImportError as e:
    HAS_KUBERNETES_COLLECTION = False
@@ -134,22 +141,26 @@ except ImportError:


class InventoryModule(K8sInventoryModule):
-    NAME = 'community.okd.openshift'
+    NAME = "community.okd.openshift"

-    connection_plugin = 'community.okd.oc'
-    transport = 'oc'
+    connection_plugin = "community.okd.oc"
+    transport = "oc"

    def check_kubernetes_collection(self):
        if not HAS_KUBERNETES_COLLECTION:
-            K8sInventoryException("The kubernetes.core collection must be installed")
+            raise K8sInventoryException(
+                "The kubernetes.core collection must be installed"
+            )

    def fetch_objects(self, connections):
        self.check_kubernetes_collection()
        super(InventoryModule, self).fetch_objects(connections)

-        self.display.deprecated("The 'openshift' inventory plugin has been deprecated and will be removed in release 4.0.0",
-                                version='4.0.0', collection_name='community.okd')
+        self.display.deprecated(
+            "The 'openshift' inventory plugin has been deprecated and will be removed in release 4.0.0",
+            version="4.0.0",
+            collection_name="community.okd",
+        )

        if connections:
            if not isinstance(connections, list):
@@ -157,9 +168,11 @@ class InventoryModule(K8sInventoryModule):

            for connection in connections:
                client = get_api_client(**connection)
-                name = connection.get('name', self.get_default_host_name(client.configuration.host))
-                if connection.get('namespaces'):
-                    namespaces = connection['namespaces']
+                name = connection.get(
+                    "name", self.get_default_host_name(client.configuration.host)
+                )
+                if connection.get("namespaces"):
+                    namespaces = connection["namespaces"]
                else:
                    namespaces = self.get_available_namespaces(client)
                for namespace in namespaces:
@@ -173,15 +186,19 @@ class InventoryModule(K8sInventoryModule):

    def get_routes_for_namespace(self, client, name, namespace):
        self.check_kubernetes_collection()
-        v1_route = client.resources.get(api_version='route.openshift.io/v1', kind='Route')
+        v1_route = client.resources.get(
+            api_version="route.openshift.io/v1", kind="Route"
+        )
        try:
            obj = v1_route.get(namespace=namespace)
        except DynamicApiError as exc:
            self.display.debug(exc)
-            raise K8sInventoryException('Error fetching Routes list: %s' % format_dynamic_api_exc(exc))
+            raise K8sInventoryException(
+                "Error fetching Routes list: %s" % format_dynamic_api_exc(exc)
+            )

-        namespace_group = 'namespace_{0}'.format(namespace)
-        namespace_routes_group = '{0}_routes'.format(namespace_group)
+        namespace_group = "namespace_{0}".format(namespace)
+        namespace_routes_group = "{0}_routes".format(namespace_group)

        self.inventory.add_group(name)
        self.inventory.add_group(namespace_group)
@@ -190,14 +207,18 @@ class InventoryModule(K8sInventoryModule):
            self.inventory.add_child(namespace_group, namespace_routes_group)
        for route in obj.items:
            route_name = route.metadata.name
-            route_annotations = {} if not route.metadata.annotations else dict(route.metadata.annotations)
+            route_annotations = (
+                {}
+                if not route.metadata.annotations
+                else dict(route.metadata.annotations)
+            )
            self.inventory.add_host(route_name)
            if route.metadata.labels:
                # create a group for each label_value
                for key, value in route.metadata.labels:
-                    group_name = 'label_{0}_{1}'.format(key, value)
+                    group_name = "label_{0}_{1}".format(key, value)
                    self.inventory.add_group(group_name)
                    self.inventory.add_child(group_name, route_name)
                route_labels = dict(route.metadata.labels)
@@ -207,19 +228,25 @@ class InventoryModule(K8sInventoryModule):
            self.inventory.add_child(namespace_routes_group, route_name)
            # add hostvars
-            self.inventory.set_variable(route_name, 'labels', route_labels)
-            self.inventory.set_variable(route_name, 'annotations', route_annotations)
-            self.inventory.set_variable(route_name, 'cluster_name', route.metadata.clusterName)
-            self.inventory.set_variable(route_name, 'object_type', 'route')
-            self.inventory.set_variable(route_name, 'self_link', route.metadata.selfLink)
-            self.inventory.set_variable(route_name, 'resource_version', route.metadata.resourceVersion)
-            self.inventory.set_variable(route_name, 'uid', route.metadata.uid)
+            self.inventory.set_variable(route_name, "labels", route_labels)
+            self.inventory.set_variable(route_name, "annotations", route_annotations)
+            self.inventory.set_variable(
+                route_name, "cluster_name", route.metadata.clusterName
+            )
+            self.inventory.set_variable(route_name, "object_type", "route")
+            self.inventory.set_variable(
+                route_name, "self_link", route.metadata.selfLink
+            )
+            self.inventory.set_variable(
+                route_name, "resource_version", route.metadata.resourceVersion
+            )
+            self.inventory.set_variable(route_name, "uid", route.metadata.uid)
            if route.spec.host:
-                self.inventory.set_variable(route_name, 'host', route.spec.host)
+                self.inventory.set_variable(route_name, "host", route.spec.host)
            if route.spec.path:
-                self.inventory.set_variable(route_name, 'path', route.spec.path)
+                self.inventory.set_variable(route_name, "path", route.spec.path)
-            if hasattr(route.spec.port, 'targetPort') and route.spec.port.targetPort:
-                self.inventory.set_variable(route_name, 'port', dict(route.spec.port))
+            if hasattr(route.spec.port, "targetPort") and route.spec.port.targetPort:
+                self.inventory.set_variable(route_name, "port", dict(route.spec.port))


@@ -1,35 +1,46 @@
#!/usr/bin/env python

-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

__metaclass__ = type

import re
import operator
from functools import reduce

-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)

try:
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.resource import create_definitions
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import CoreException
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.resource import (
+        create_definitions,
+    )
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import (
+        CoreException,
+    )
except ImportError:
    pass

from ansible.module_utils._text import to_native

try:
-    from kubernetes.dynamic.exceptions import DynamicApiError, NotFoundError, ForbiddenError
+    from kubernetes.dynamic.exceptions import (
+        DynamicApiError,
+        NotFoundError,
+        ForbiddenError,
+    )
except ImportError as e:
    pass

-TRIGGER_ANNOTATION = 'image.openshift.io/triggers'
-TRIGGER_CONTAINER = re.compile(r"(?P<path>.*)\[((?P<index>[0-9]+)|\?\(@\.name==[\"'\\]*(?P<name>[a-z0-9]([-a-z0-9]*[a-z0-9])?))")
+TRIGGER_ANNOTATION = "image.openshift.io/triggers"
+TRIGGER_CONTAINER = re.compile(
    r"(?P<path>.*)\[((?P<index>[0-9]+)|\?\(@\.name==[\"'\\]*(?P<name>[a-z0-9]([-a-z0-9]*[a-z0-9])?))"
+)
class OKDRawModule(AnsibleOpenshiftModule):
    def __init__(self, **kwargs):
        super(OKDRawModule, self).__init__(**kwargs)

    @property
@@ -50,36 +61,60 @@ class OKDRawModule(AnsibleOpenshiftModule):
            result = {"changed": False, "result": {}}
            warnings = []

-            if self.params.get("state") != 'absent':
+            if self.params.get("state") != "absent":
                existing = None
                name = definition.get("metadata", {}).get("name")
                namespace = definition.get("metadata", {}).get("namespace")
-                if definition.get("kind") in ['Project', 'ProjectRequest']:
+                if definition.get("kind") in ["Project", "ProjectRequest"]:
                    try:
-                        resource = self.svc.find_resource(kind=definition.get("kind"), api_version=definition.get("apiVersion", "v1"))
-                        existing = resource.get(name=name, namespace=namespace).to_dict()
+                        resource = self.svc.find_resource(
+                            kind=definition.get("kind"),
+                            api_version=definition.get("apiVersion", "v1"),
+                        )
+                        existing = resource.get(
+                            name=name, namespace=namespace
+                        ).to_dict()
                    except (NotFoundError, ForbiddenError):
                        result = self.create_project_request(definition)
                        changed |= result["changed"]
                        results.append(result)
                        continue
                    except DynamicApiError as exc:
-                        self.fail_json(msg='Failed to retrieve requested object: {0}'.format(exc.body),
-                                       error=exc.status, status=exc.status, reason=exc.reason)
+                        self.fail_json(
+                            msg="Failed to retrieve requested object: {0}".format(
+                                exc.body
+                            ),
+                            error=exc.status,
+                            status=exc.status,
+                            reason=exc.reason,
+                        )

-                if definition.get("kind") not in ['Project', 'ProjectRequest']:
+                if definition.get("kind") not in ["Project", "ProjectRequest"]:
                    try:
-                        resource = self.svc.find_resource(kind=definition.get("kind"), api_version=definition.get("apiVersion", "v1"))
-                        existing = resource.get(name=name, namespace=namespace).to_dict()
+                        resource = self.svc.find_resource(
+                            kind=definition.get("kind"),
+                            api_version=definition.get("apiVersion", "v1"),
+                        )
+                        existing = resource.get(
+                            name=name, namespace=namespace
+                        ).to_dict()
                    except Exception:
                        existing = None

                if existing:
-                    if resource.kind == 'DeploymentConfig':
-                        if definition.get('spec', {}).get('triggers'):
-                            definition = self.resolve_imagestream_triggers(existing, definition)
-                    elif existing['metadata'].get('annotations', {}).get(TRIGGER_ANNOTATION):
-                        definition = self.resolve_imagestream_trigger_annotation(existing, definition)
+                    if resource.kind == "DeploymentConfig":
+                        if definition.get("spec", {}).get("triggers"):
+                            definition = self.resolve_imagestream_triggers(
+                                existing, definition
+                            )
+                    elif (
+                        existing["metadata"]
+                        .get("annotations", {})
+                        .get(TRIGGER_ANNOTATION)
+                    ):
+                        definition = self.resolve_imagestream_trigger_annotation(
+                            existing, definition
+                        )

            if self.params.get("validate") is not None:
                warnings = self.validate(definition)
@@ -116,13 +151,15 @@ class OKDRawModule(AnsibleOpenshiftModule):

    @staticmethod
    def get_index(desired, objects, keys):
-        """ Iterates over keys, returns the first object from objects where the value of the key
+        """Iterates over keys, returns the first object from objects where the value of the key
        matches the value in desired
        """
        # pylint: disable=use-a-generator
        # Use a generator instead 'all(desired.get(key, True) == item.get(key, False) for key in keys)'
        for i, item in enumerate(objects):
-            if item and all([desired.get(key, True) == item.get(key, False) for key in keys]):
+            if item and all(
+                [desired.get(key, True) == item.get(key, False) for key in keys]
+            ):
                return i
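`get_index` reduced to a standalone function with the same semantics, so its matching behaviour can be seen in isolation (it returns the first matching index, or `None` implicitly when nothing matches):

```python
def get_index(desired, objects, keys):
    """Return the index of the first item whose values match `desired`
    on every key in `keys`; falls through to None when nothing matches."""
    for i, item in enumerate(objects):
        if item and all(
            desired.get(key, True) == item.get(key, False) for key in keys
        ):
            return i


containers = [{"name": "app", "image": "a"}, {"name": "sidecar", "image": "b"}]
print(get_index({"name": "sidecar"}, containers, ["name"]))  # → 1
```

This is how the trigger-resolution code locates a container by name in both the existing and the desired pod template.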
def resolve_imagestream_trigger_annotation(self, existing, definition): def resolve_imagestream_trigger_annotation(self, existing, definition):
@@ -137,84 +174,148 @@ class OKDRawModule(AnsibleOpenshiftModule):
        def set_from_fields(d, fields, value):
            get_from_fields(d, fields[:-1])[fields[-1]] = value

        if TRIGGER_ANNOTATION in definition["metadata"].get("annotations", {}).keys():
            triggers = yaml.safe_load(
                definition["metadata"]["annotations"][TRIGGER_ANNOTATION] or "[]"
            )
        else:
            triggers = yaml.safe_load(
                existing["metadata"]
                .get("annotations", "{}")
                .get(TRIGGER_ANNOTATION, "[]")
            )

        if not isinstance(triggers, list):
            return definition

        for trigger in triggers:
            if trigger.get("fieldPath"):
                parsed = self.parse_trigger_fieldpath(trigger["fieldPath"])
                path = parsed.get("path", "").split(".")
                if path:
                    existing_containers = get_from_fields(existing, path)
                    new_containers = get_from_fields(definition, path)
                    if parsed.get("name"):
                        existing_index = self.get_index(
                            {"name": parsed["name"]}, existing_containers, ["name"]
                        )
                        new_index = self.get_index(
                            {"name": parsed["name"]}, new_containers, ["name"]
                        )
                    elif parsed.get("index") is not None:
                        existing_index = new_index = int(parsed["index"])
                    else:
                        existing_index = new_index = None

                    if existing_index is not None and new_index is not None:
                        if existing_index < len(
                            existing_containers
                        ) and new_index < len(new_containers):
                            set_from_fields(
                                definition,
                                path + [new_index, "image"],
                                get_from_fields(
                                    existing, path + [existing_index, "image"]
                                ),
                            )
        return definition
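The method above assumes `TRIGGER_ANNOTATION` holds a serialized list of trigger objects on the resource's metadata. A minimal sketch of that parsing step, using `image.openshift.io/triggers` (the annotation key OpenShift uses for image triggers) and hypothetical sample data; the module parses with `yaml.safe_load`, which accepts this JSON payload unchanged, so `json` is used here to stay stdlib-only:

```python
import json

TRIGGER_ANNOTATION = "image.openshift.io/triggers"  # annotation key used by OpenShift

# Hypothetical Deployment fragment carrying an image trigger annotation.
definition = {
    "metadata": {
        "annotations": {
            TRIGGER_ANNOTATION: json.dumps(
                [
                    {
                        "from": {"kind": "ImageStreamTag", "name": "app:latest"},
                        "fieldPath": 'spec.template.spec.containers[?(@.name=="app")].image',
                    }
                ]
            )
        }
    }
}

# Same guard as the module: default to "[]" and ignore anything that is
# not a list of triggers.
raw = definition["metadata"].get("annotations", {}).get(TRIGGER_ANNOTATION) or "[]"
triggers = json.loads(raw)
assert isinstance(triggers, list)
print(triggers[0]["fieldPath"])  # → spec.template.spec.containers[?(@.name=="app")].image
```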
    def resolve_imagestream_triggers(self, existing, definition):
        existing_triggers = existing.get("spec", {}).get("triggers")
        new_triggers = definition["spec"]["triggers"]
        existing_containers = (
            existing.get("spec", {})
            .get("template", {})
            .get("spec", {})
            .get("containers", [])
        )
        new_containers = (
            definition.get("spec", {})
            .get("template", {})
            .get("spec", {})
            .get("containers", [])
        )
        for i, trigger in enumerate(new_triggers):
            if trigger.get("type") == "ImageChange" and trigger.get(
                "imageChangeParams"
            ):
                names = trigger["imageChangeParams"].get("containerNames", [])
                for name in names:
                    old_container_index = self.get_index(
                        {"name": name}, existing_containers, ["name"]
                    )
                    new_container_index = self.get_index(
                        {"name": name}, new_containers, ["name"]
                    )
                    if (
                        old_container_index is not None
                        and new_container_index is not None
                    ):
                        image = existing["spec"]["template"]["spec"]["containers"][
                            old_container_index
                        ]["image"]
                        definition["spec"]["template"]["spec"]["containers"][
                            new_container_index
                        ]["image"] = image

                existing_index = self.get_index(
                    trigger["imageChangeParams"],
                    [x.get("imageChangeParams") for x in existing_triggers],
                    ["containerNames"],
                )
                if existing_index is not None:
                    existing_image = (
                        existing_triggers[existing_index]
                        .get("imageChangeParams", {})
                        .get("lastTriggeredImage")
                    )
                    if existing_image:
                        definition["spec"]["triggers"][i]["imageChangeParams"][
                            "lastTriggeredImage"
                        ] = existing_image

                    existing_from = (
                        existing_triggers[existing_index]
                        .get("imageChangeParams", {})
                        .get("from", {})
                    )
                    new_from = trigger["imageChangeParams"].get("from", {})
                    existing_namespace = existing_from.get("namespace")
                    existing_name = existing_from.get("name", False)
                    new_name = new_from.get("name", True)
                    add_namespace = (
                        existing_namespace
                        and "namespace" not in new_from.keys()
                        and existing_name == new_name
                    )
                    if add_namespace:
                        definition["spec"]["triggers"][i]["imageChangeParams"][
                            "from"
                        ]["namespace"] = existing_from["namespace"]
        return definition
    def parse_trigger_fieldpath(self, expression):
        parsed = TRIGGER_CONTAINER.search(expression).groupdict()
        if parsed.get("index"):
            parsed["index"] = int(parsed["index"])
        return parsed
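`parse_trigger_fieldpath` relies on a module-level `TRIGGER_CONTAINER` regex defined outside this hunk. A hypothetical pattern with the same `path`/`name`/`index` groups illustrates the two `fieldPath` shapes the method handles (JSONPath name filter vs. numeric index); this regex is an illustrative stand-in, not the module's actual one:

```python
import re

# Hypothetical stand-in for the module's TRIGGER_CONTAINER regex: capture the
# container-list path plus either a numeric index or a JSONPath name filter.
TRIGGER_CONTAINER = re.compile(
    r"(?P<path>.*)\[(?:(?P<index>\d+)|\?\(@\.name==[\"'](?P<name>[^\"']+)[\"']\))\]"
)

by_name = TRIGGER_CONTAINER.search(
    'spec.template.spec.containers[?(@.name=="app")].image'
).groupdict()
by_index = TRIGGER_CONTAINER.search(
    "spec.template.spec.containers[0].image"
).groupdict()

print(by_name["path"], by_name["name"])  # → spec.template.spec.containers app
print(by_index["index"])                 # → 0
```

The `if parsed.get("index")` branch above then converts the captured index string to an `int` before it is used to subscript the container list.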
    def create_project_request(self, definition):
        definition["kind"] = "ProjectRequest"
        result = {"changed": False, "result": {}}
        resource = self.svc.find_resource(
            kind="ProjectRequest", api_version=definition["apiVersion"], fail=True
        )
        if not self.check_mode:
            try:
                k8s_obj = resource.create(definition)
                result["result"] = k8s_obj.to_dict()
            except DynamicApiError as exc:
                self.fail_json(
                    msg="Failed to create object: {0}".format(exc.body),
                    error=exc.status,
                    status=exc.status,
                    reason=exc.reason,
                )
        result["changed"] = True
        result["method"] = "create"
        return result


@@ -1,11 +1,14 @@
#!/usr/bin/env python

from __future__ import absolute_import, division, print_function

__metaclass__ = type

from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)

try:
    from kubernetes import client
@@ -18,31 +21,36 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
    def __init__(self, **kwargs):
        super(OpenShiftAdmPruneAuth, self).__init__(**kwargs)
    def prune_resource_binding(
        self, kind, api_version, ref_kind, ref_namespace_names, propagation_policy=None
    ):
        resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
        candidates = []
        for ref_namespace, ref_name in ref_namespace_names:
            try:
                result = resource.get(name=None, namespace=ref_namespace)
                result = result.to_dict()
                result = result.get("items") if "items" in result else [result]
                for obj in result:
                    namespace = obj["metadata"].get("namespace", None)
                    name = obj["metadata"].get("name")
                    if ref_kind and obj["roleRef"]["kind"] != ref_kind:
                        # skip this binding as the roleRef.kind does not match
                        continue
                    if obj["roleRef"]["name"] == ref_name:
                        # select this binding as the roleRef.name matches
                        candidates.append((namespace, name))
            except NotFoundError:
                continue
            except DynamicApiError as exc:
                msg = "Failed to get {kind} resource due to: {msg}".format(
                    kind=kind, msg=exc.body
                )
                self.fail_json(msg=msg)
            except Exception as e:
                msg = "Failed to get {kind} due to: {msg}".format(
                    kind=kind, msg=to_native(e)
                )
                self.fail_json(msg=msg)
        if len(candidates) == 0 or self.check_mode:
@@ -54,24 +62,29 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
        for namespace, name in candidates:
            try:
                result = resource.delete(
                    name=name, namespace=namespace, body=delete_options
                )
            except DynamicApiError as exc:
                msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(
                    kind=kind, namespace=namespace, name=name, msg=exc.body
                )
                self.fail_json(msg=msg)
            except Exception as e:
                msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(
                    kind=kind, namespace=namespace, name=name, msg=to_native(e)
                )
                self.fail_json(msg=msg)
        return [y if x is None else x + "/" + y for x, y in candidates]
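The final comprehension renders each `(namespace, name)` pair as `namespace/name`, falling back to the bare name for cluster-scoped bindings whose namespace is `None`. Pulled out as a hypothetical helper:

```python
def format_candidates(candidates):
    # Cluster-scoped bindings carry namespace None and keep the bare name;
    # namespaced bindings are rendered as "namespace/name".
    return [y if x is None else x + "/" + y for x, y in candidates]


print(format_candidates([(None, "admin-binding"), ("dev", "edit-binding")]))
# → ['admin-binding', 'dev/edit-binding']
```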
    def update_resource_binding(self, ref_kind, ref_names, namespaced=False):
        kind = "ClusterRoleBinding"
        api_version = "rbac.authorization.k8s.io/v1"
        if namespaced:
            kind = "RoleBinding"
        resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
        result = resource.get(name=None, namespace=None).to_dict()
        result = result.get("items") if "items" in result else [result]
        if len(result) == 0:
            return [], False
@@ -79,29 +92,40 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
        def _update_user_group(binding_namespace, subjects):
            users, groups = [], []
            for x in subjects:
                if x["kind"] == "User":
                    users.append(x["name"])
                elif x["kind"] == "Group":
                    groups.append(x["name"])
                elif x["kind"] == "ServiceAccount":
                    namespace = binding_namespace
                    if x.get("namespace") is not None:
                        namespace = x.get("namespace")
                    if namespace is not None:
                        users.append(
                            "system:serviceaccount:%s:%s" % (namespace, x["name"])
                        )
            return users, groups
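To make the User/Group/ServiceAccount split concrete, here is the same logic copied out as a free function and exercised against hypothetical subjects:

```python
def update_user_group(binding_namespace, subjects):
    # User -> users, Group -> groups; a ServiceAccount becomes a
    # "system:serviceaccount:<namespace>:<name>" user entry, preferring the
    # subject's own namespace over the binding's namespace.
    users, groups = [], []
    for x in subjects:
        if x["kind"] == "User":
            users.append(x["name"])
        elif x["kind"] == "Group":
            groups.append(x["name"])
        elif x["kind"] == "ServiceAccount":
            namespace = binding_namespace
            if x.get("namespace") is not None:
                namespace = x.get("namespace")
            if namespace is not None:
                users.append("system:serviceaccount:%s:%s" % (namespace, x["name"]))
    return users, groups


users, groups = update_user_group(
    "dev",
    [
        {"kind": "User", "name": "alice"},
        {"kind": "Group", "name": "ops"},
        {"kind": "ServiceAccount", "name": "builder"},
    ],
)
print(users)   # → ['alice', 'system:serviceaccount:dev:builder']
print(groups)  # → ['ops']
```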
        candidates = []
        changed = False
        for item in result:
            subjects = item.get("subjects", [])
            retainedSubjects = [
                x for x in subjects if x["kind"] == ref_kind and x["name"] in ref_names
            ]
            if len(subjects) != len(retainedSubjects):
                updated_binding = item
                updated_binding["subjects"] = retainedSubjects
                binding_namespace = item["metadata"].get("namespace", None)
                (
                    updated_binding["userNames"],
                    updated_binding["groupNames"],
                ) = _update_user_group(binding_namespace, retainedSubjects)
                candidates.append(
                    binding_namespace + "/" + item["metadata"]["name"]
                    if binding_namespace
                    else item["metadata"]["name"]
                )
                changed = True
                if not self.check_mode:
                    try:
@@ -112,20 +136,25 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
        return candidates, changed
    def update_security_context(self, ref_names, key):
        params = {
            "kind": "SecurityContextConstraints",
            "api_version": "security.openshift.io/v1",
        }
        sccs = self.kubernetes_facts(**params)
        if not sccs["api_found"]:
            self.fail_json(msg=sccs["msg"])
        sccs = sccs.get("resources")

        candidates = []
        changed = False
        resource = self.find_resource(
            kind="SecurityContextConstraints", api_version="security.openshift.io/v1"
        )
        for item in sccs:
            subjects = item.get(key, [])
            retainedSubjects = [x for x in subjects if x not in ref_names]
            if len(subjects) != len(retainedSubjects):
                candidates.append(item["metadata"]["name"])
                changed = True
                if not self.check_mode:
                    upd_sec_ctx = item
@@ -138,94 +167,116 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
        return candidates, changed
    def auth_prune_roles(self):
        params = {
            "kind": "Role",
            "api_version": "rbac.authorization.k8s.io/v1",
            "namespace": self.params.get("namespace"),
        }
        for attr in ("name", "label_selectors"):
            if self.params.get(attr):
                params[attr] = self.params.get(attr)

        result = self.kubernetes_facts(**params)
        if not result["api_found"]:
            self.fail_json(msg=result["msg"])

        roles = result.get("resources")
        if len(roles) == 0:
            self.exit_json(
                changed=False,
                msg="No candidate rolebinding to prune from namespace %s."
                % self.params.get("namespace"),
            )

        ref_roles = [(x["metadata"]["namespace"], x["metadata"]["name"]) for x in roles]
        candidates = self.prune_resource_binding(
            kind="RoleBinding",
            api_version="rbac.authorization.k8s.io/v1",
            ref_kind="Role",
            ref_namespace_names=ref_roles,
            propagation_policy="Foreground",
        )
        if len(candidates) == 0:
            self.exit_json(changed=False, role_binding=candidates)

        self.exit_json(changed=True, role_binding=candidates)
    def auth_prune_clusterroles(self):
        params = {"kind": "ClusterRole", "api_version": "rbac.authorization.k8s.io/v1"}
        for attr in ("name", "label_selectors"):
            if self.params.get(attr):
                params[attr] = self.params.get(attr)

        result = self.kubernetes_facts(**params)
        if not result["api_found"]:
            self.fail_json(msg=result["msg"])

        clusterroles = result.get("resources")
        if len(clusterroles) == 0:
            self.exit_json(
                changed=False, msg="No clusterroles found matching input criteria."
            )

        ref_clusterroles = [(None, x["metadata"]["name"]) for x in clusterroles]

        # Prune ClusterRoleBinding
        candidates_cluster_binding = self.prune_resource_binding(
            kind="ClusterRoleBinding",
            api_version="rbac.authorization.k8s.io/v1",
            ref_kind=None,
            ref_namespace_names=ref_clusterroles,
        )
        # Prune RoleBinding
        candidates_namespaced_binding = self.prune_resource_binding(
            kind="RoleBinding",
            api_version="rbac.authorization.k8s.io/v1",
            ref_kind="ClusterRole",
            ref_namespace_names=ref_clusterroles,
        )

        self.exit_json(
            changed=True,
            cluster_role_binding=candidates_cluster_binding,
            role_binding=candidates_namespaced_binding,
        )
    def list_groups(self, params=None):
        options = {"kind": "Group", "api_version": "user.openshift.io/v1"}
        if params:
            for attr in ("name", "label_selectors"):
                if params.get(attr):
                    options[attr] = params.get(attr)
        return self.kubernetes_facts(**options)
    def auth_prune_users(self):
        params = {"kind": "User", "api_version": "user.openshift.io/v1"}
        for attr in ("name", "label_selectors"):
            if self.params.get(attr):
                params[attr] = self.params.get(attr)

        users = self.kubernetes_facts(**params)
        if len(users) == 0:
            self.exit_json(
                changed=False,
                msg="No resource type 'User' found matching input criteria.",
            )

        names = [x["metadata"]["name"] for x in users]
        changed = False

        # Remove the user role binding
        rolebinding, changed_role = self.update_resource_binding(
            ref_kind="User", ref_names=names, namespaced=True
        )
        changed = changed or changed_role

        # Remove the user cluster role binding
        clusterrolesbinding, changed_cr = self.update_resource_binding(
            ref_kind="User", ref_names=names
        )
        changed = changed or changed_cr

        # Remove the user from security context constraints
        sccs, changed_sccs = self.update_security_context(names, "users")
        changed = changed or changed_sccs

        # Remove the user from groups
@@ -233,14 +284,14 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
        deleted_groups = []
        resource = self.find_resource(kind="Group", api_version="user.openshift.io/v1")
        for grp in groups:
            subjects = grp.get("users", [])
            retainedSubjects = [x for x in subjects if x not in names]
            if len(subjects) != len(retainedSubjects):
                deleted_groups.append(grp["metadata"]["name"])
                changed = True
                if not self.check_mode:
                    upd_group = grp
                    upd_group.update({"users": retainedSubjects})
                    try:
                        resource.apply(upd_group, namespace=None)
                    except DynamicApiError as exc:
@@ -248,62 +299,82 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
                        self.fail_json(msg=msg)
        # Remove the user's OAuthClientAuthorizations
        oauth = self.kubernetes_facts(
            kind="OAuthClientAuthorization", api_version="oauth.openshift.io/v1"
        )
        deleted_auths = []
        resource = self.find_resource(
            kind="OAuthClientAuthorization", api_version="oauth.openshift.io/v1"
        )
        for authorization in oauth:
            if authorization.get("userName", None) in names:
                auth_name = authorization["metadata"]["name"]
                deleted_auths.append(auth_name)
                changed = True
                if not self.check_mode:
                    try:
                        resource.delete(
                            name=auth_name,
                            namespace=None,
                            body=client.V1DeleteOptions(),
                        )
                    except DynamicApiError as exc:
                        msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(
                            name=auth_name, msg=exc.body
                        )
                        self.fail_json(msg=msg)
                    except Exception as e:
                        msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(
                            name=auth_name, msg=to_native(e)
                        )
                        self.fail_json(msg=msg)

        self.exit_json(
            changed=changed,
            cluster_role_binding=clusterrolesbinding,
            role_binding=rolebinding,
            security_context_constraints=sccs,
            authorization=deleted_auths,
            group=deleted_groups,
        )
    def auth_prune_groups(self):
        groups = self.list_groups(params=self.params)
        if len(groups) == 0:
            self.exit_json(
                changed=False,
                result="No resource type 'Group' found matching input criteria.",
            )

        names = [x["metadata"]["name"] for x in groups]
        changed = False

        # Remove the groups role binding
        rolebinding, changed_role = self.update_resource_binding(
            ref_kind="Group", ref_names=names, namespaced=True
        )
        changed = changed or changed_role

        # Remove the groups cluster role binding
        clusterrolesbinding, changed_cr = self.update_resource_binding(
            ref_kind="Group", ref_names=names
        )
        changed = changed or changed_cr

        # Remove the groups security context constraints
        sccs, changed_sccs = self.update_security_context(names, "groups")
        changed = changed or changed_sccs

        self.exit_json(
            changed=changed,
            cluster_role_binding=clusterrolesbinding,
            role_binding=rolebinding,
            security_context_constraints=sccs,
        )
    def execute_module(self):
        auth_prune = {
            "roles": self.auth_prune_roles,
            "clusterroles": self.auth_prune_clusterroles,
            "users": self.auth_prune_users,
            "groups": self.auth_prune_groups,
        }
        auth_prune[self.params.get("resource")]()
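`execute_module` dispatches on the module's `resource` parameter through a plain dict of bound methods. The same pattern in miniature, with a hypothetical class standing in for the module:

```python
class Pruner:
    # Minimal sketch of the dispatch-by-parameter pattern used in
    # execute_module: map each allowed `resource` value to a bound method
    # and call the selected one.
    def __init__(self, resource):
        self.params = {"resource": resource}

    def prune_roles(self):
        return "pruned roles"

    def prune_users(self):
        return "pruned users"

    def execute(self):
        dispatch = {"roles": self.prune_roles, "users": self.prune_users}
        return dispatch[self.params.get("resource")]()


print(Pruner("users").execute())  # → pruned users
```

An unknown `resource` value raises `KeyError` here; in the real module the value is constrained by the argument spec before dispatch.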


@@ -1,14 +1,16 @@
#!/usr/bin/env python

from __future__ import absolute_import, division, print_function

__metaclass__ = type

from datetime import datetime, timezone
import traceback

from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)

try:
    from kubernetes import client
@@ -23,7 +25,9 @@ def get_deploymentconfig_for_replicationcontroller(replica_controller):
    # This is set on replication controller pod template by deployer controller.
    DeploymentConfigAnnotation = "openshift.io/deployment-config.name"
    try:
        deploymentconfig_name = replica_controller["metadata"]["annotations"].get(
            DeploymentConfigAnnotation
        )
        if deploymentconfig_name is None or deploymentconfig_name == "":
            return None
        return deploymentconfig_name
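A quick check of the annotation lookup above with hypothetical ReplicationController data; `deploymentconfig_for` is a standalone replica of `get_deploymentconfig_for_replicationcontroller`, where an empty string, a missing annotation, or missing metadata all yield `None`:

```python
DC_ANNOTATION = "openshift.io/deployment-config.name"


def deploymentconfig_for(replica_controller):
    # Read the deployer-controller annotation; treat "" and a missing
    # key (or missing annotations dict entirely) as "no owning config".
    try:
        name = replica_controller["metadata"]["annotations"].get(DC_ANNOTATION)
        return name or None
    except (KeyError, TypeError):
        return None


owned = {"metadata": {"annotations": {DC_ANNOTATION: "frontend"}}}
orphaned = {"metadata": {}}
print(deploymentconfig_for(owned))     # → frontend
print(deploymentconfig_for(orphaned))  # → None
```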
@@ -32,7 +36,6 @@ def get_deploymentconfig_for_replicationcontroller(replica_controller):
class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
    def __init__(self, **kwargs):
        super(OpenShiftAdmPruneDeployment, self).__init__(**kwargs)
@@ -41,27 +44,33 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
            return get_deploymentconfig_for_replicationcontroller(obj) is not None

        def _zeroReplicaSize(obj):
            return obj["spec"]["replicas"] == 0 and obj["status"]["replicas"] == 0

        def _complete_failed(obj):
            DeploymentStatusAnnotation = "openshift.io/deployment.phase"
            try:
                # validate that replication controller status is either 'Complete' or 'Failed'
                deployment_phase = obj["metadata"]["annotations"].get(
                    DeploymentStatusAnnotation
                )
                return deployment_phase in ("Failed", "Complete")
            except Exception:
                return False

        def _younger(obj):
            creation_timestamp = datetime.strptime(
                obj["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
            )
            now = datetime.now(timezone.utc).replace(tzinfo=None)
            age = (now - creation_timestamp).seconds / 60
            return age > self.params["keep_younger_than"]
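The `_younger` predicate parses the Kubernetes `creationTimestamp` format and compares the age in minutes against `keep_younger_than`. A standalone sketch of that computation, with a hypothetical fixed `now` so the result is deterministic; note that `timedelta.seconds`, as used above, only counts the sub-day remainder (`total_seconds()` would also count full days):

```python
from datetime import datetime, timezone


def age_minutes(creation_timestamp_str, now=None):
    # Same parsing as _younger: Kubernetes serializes creationTimestamp
    # as an RFC 3339 UTC string such as "2024-01-01T12:00:00Z".
    created = datetime.strptime(creation_timestamp_str, "%Y-%m-%dT%H:%M:%SZ")
    if now is None:
        now = datetime.now(timezone.utc).replace(tzinfo=None)
    # .seconds mirrors the module's arithmetic: the sub-day remainder only.
    return (now - created).seconds / 60


print(age_minutes("2024-01-01T12:00:00Z", now=datetime(2024, 1, 1, 12, 45, 0)))  # → 45.0
```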
        def _orphan(obj):
            try:
                # verify that the deploymentconfig associated with the replication controller still exists
                deploymentconfig_name = get_deploymentconfig_for_replicationcontroller(
                    obj
                )
                params = dict(
                    kind="DeploymentConfig",
                    api_version="apps.openshift.io/v1",
@@ -69,14 +78,14 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
                    namespace=obj["metadata"]["name"],
                )
                exists = self.kubernetes_facts(**params)
                return not (exists.get("api_found") and len(exists["resources"]) > 0)
            except Exception:
                return False
        predicates = [_deployment, _zeroReplicaSize, _complete_failed]
        if self.params["orphans"]:
            predicates.append(_orphan)
        if self.params["keep_younger_than"]:
            predicates.append(_younger)
        results = replicacontrollers.copy()
@@ -86,8 +95,8 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
def execute_module(self): def execute_module(self):
# list replicationcontroller candidate for pruning # list replicationcontroller candidate for pruning
kind = 'ReplicationController' kind = "ReplicationController"
api_version = 'v1' api_version = "v1"
resource = self.find_resource(kind=kind, api_version=api_version, fail=True) resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
# Get ReplicationController # Get ReplicationController
@@ -103,7 +112,7 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
self.exit_json(changed=False, replication_controllers=[]) self.exit_json(changed=False, replication_controllers=[])
changed = True changed = True
delete_options = client.V1DeleteOptions(propagation_policy='Background') delete_options = client.V1DeleteOptions(propagation_policy="Background")
replication_controllers = [] replication_controllers = []
for replica in candidates: for replica in candidates:
try: try:
@@ -111,12 +120,18 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
if not self.check_mode: if not self.check_mode:
name = replica["metadata"]["name"] name = replica["metadata"]["name"]
namespace = replica["metadata"]["namespace"] namespace = replica["metadata"]["namespace"]
result = resource.delete(name=name, namespace=namespace, body=delete_options).to_dict() result = resource.delete(
name=name, namespace=namespace, body=delete_options
).to_dict()
replication_controllers.append(result) replication_controllers.append(result)
except DynamicApiError as exc: except DynamicApiError as exc:
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(namespace=namespace, name=name, msg=exc.body) msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(
namespace=namespace, name=name, msg=exc.body
)
self.fail_json(msg=msg) self.fail_json(msg=msg)
except Exception as e: except Exception as e:
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(namespace=namespace, name=name, msg=to_native(e)) msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(
namespace=namespace, name=name, msg=to_native(e)
)
self.fail_json(msg=msg) self.fail_json(msg=msg)
self.exit_json(changed=changed, replication_controllers=replication_controllers) self.exit_json(changed=changed, replication_controllers=replication_controllers)

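The hunks above filter ReplicationControllers through a list of predicates, including an age check built from the Kubernetes `creationTimestamp`. A minimal standalone sketch of that filtering pattern (simplified objects and a hypothetical `keep_younger_than` threshold in minutes — not the module's actual API) looks like:

```python
from datetime import datetime, timezone


def younger_than(obj, minutes):
    # Kubernetes creationTimestamp is always UTC with a "Z" suffix
    created = datetime.strptime(
        obj["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
    )
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    # total_seconds() avoids the day-wrapping pitfall of timedelta.seconds
    return (now - created).total_seconds() / 60 < minutes


def complete_or_failed(obj):
    return obj.get("status", {}).get("phase") in ("Failed", "Complete")


def prune_candidates(controllers, keep_younger_than=None):
    predicates = [complete_or_failed]
    if keep_younger_than:
        # keep (i.e. do NOT prune) objects younger than the threshold
        predicates.append(lambda o: not younger_than(o, keep_younger_than))
    return [c for c in controllers if all(p(c) for p in predicates)]
```

Note that the diff itself uses `timedelta.seconds`, which wraps every 24 hours; `total_seconds()` is the safer choice for age comparisons spanning more than a day.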
View File

@@ -1,17 +1,19 @@
 #!/usr/bin/env python

-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 from datetime import datetime, timezone, timedelta
 import traceback
 import copy

 from ansible.module_utils._text import to_native
 from ansible.module_utils.parsing.convert_bool import boolean
 from ansible.module_utils.six import iteritems

-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)

 from ansible_collections.community.okd.plugins.module_utils.openshift_images_common import (
     OpenShiftAnalyzeImageStream,
@@ -30,7 +32,7 @@ try:
     from kubernetes.dynamic.exceptions import (
         DynamicApiError,
         NotFoundError,
-        ApiException
+        ApiException,
     )
 except ImportError:
     pass
@@ -67,18 +69,20 @@ def determine_host_registry(module, images, image_streams):
     managed_images = list(filter(_f_managed_images, images))

     # Be sure to pick up the newest managed image which should have an up to date information
-    sorted_images = sorted(managed_images,
-                           key=lambda x: x["metadata"]["creationTimestamp"],
-                           reverse=True)
+    sorted_images = sorted(
+        managed_images, key=lambda x: x["metadata"]["creationTimestamp"], reverse=True
+    )
     docker_image_ref = ""
     if len(sorted_images) > 0:
         docker_image_ref = sorted_images[0].get("dockerImageReference", "")
     else:
         # 2nd try to get the pull spec from any image stream
         # Sorting by creation timestamp may not get us up to date info. Modification time would be much
-        sorted_image_streams = sorted(image_streams,
-                                      key=lambda x: x["metadata"]["creationTimestamp"],
-                                      reverse=True)
+        sorted_image_streams = sorted(
+            image_streams,
+            key=lambda x: x["metadata"]["creationTimestamp"],
+            reverse=True,
+        )
         for i_stream in sorted_image_streams:
             docker_image_ref = i_stream["status"].get("dockerImageRepository", "")
             if len(docker_image_ref) > 0:
@@ -88,7 +92,7 @@ def determine_host_registry(module, images, image_streams):
         module.exit_json(changed=False, result="no managed image found")

     result, error = parse_docker_image_ref(docker_image_ref, module)
-    return result['hostname']
+    return result["hostname"]

 class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
@@ -97,7 +101,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
         self.max_creation_timestamp = self.get_max_creation_timestamp()
         self._rest_client = None
-        self.registryhost = self.params.get('registry_url')
+        self.registryhost = self.params.get("registry_url")
         self.changed = False

     def list_objects(self):
@@ -107,9 +111,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             if self.params.get("namespace") and kind.lower() == "imagestream":
                 namespace = self.params.get("namespace")
             try:
-                result[kind] = self.kubernetes_facts(kind=kind,
-                                                     api_version=version,
-                                                     namespace=namespace).get('resources')
+                result[kind] = self.kubernetes_facts(
+                    kind=kind, api_version=version, namespace=namespace
+                ).get("resources")
             except DynamicApiError as e:
                 self.fail_json(
                     msg="An error occurred while trying to list objects.",
@@ -119,7 +123,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             except Exception as e:
                 self.fail_json(
                     msg="An error occurred while trying to list objects.",
-                    error=to_native(e)
+                    error=to_native(e),
                 )
         return result
@@ -134,8 +138,8 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
     def rest_client(self):
         if not self._rest_client:
             configuration = copy.deepcopy(self.client.configuration)
-            validate_certs = self.params.get('registry_validate_certs')
-            ssl_ca_cert = self.params.get('registry_ca_cert')
+            validate_certs = self.params.get("registry_validate_certs")
+            ssl_ca_cert = self.params.get("registry_ca_cert")
             if validate_certs is not None:
                 configuration.verify_ssl = validate_certs
             if ssl_ca_cert is not None:
@@ -146,7 +150,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):

     def delete_from_registry(self, url):
         try:
-            response = self.rest_client.DELETE(url=url, headers=self.client.configuration.api_key)
+            response = self.rest_client.DELETE(
+                url=url, headers=self.client.configuration.api_key
+            )
             if response.status == 404:
                 # Unable to delete layer
                 return None
@@ -156,8 +162,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             if response.status != 202 and response.status != 204:
                 self.fail_json(
                     msg="Delete URL {0}: Unexpected status code in response: {1}".format(
-                        response.status, url),
-                    reason=response.reason
+                        response.status, url
+                    ),
+                    reason=response.reason,
                 )
             return None
         except ApiException as e:
@@ -204,9 +211,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             result = self.request(
                 "PUT",
                 "/apis/{api_version}/namespaces/{namespace}/imagestreams/{name}/status".format(
-                    api_version=api_version,
-                    namespace=namespace,
-                    name=name
+                    api_version=api_version, namespace=namespace, name=name
                 ),
                 body=definition,
                 content_type="application/json",
@@ -237,11 +242,10 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
                 pass
             except DynamicApiError as exc:
                 self.fail_json(
-                    msg="Failed to delete object %s/%s due to: %s" % (
-                        kind, name, exc.body
-                    ),
+                    msg="Failed to delete object %s/%s due to: %s"
+                    % (kind, name, exc.body),
                     reason=exc.reason,
-                    status=exc.status
+                    status=exc.status,
                 )
         else:
             existing = resource.get(name=name)
@@ -285,9 +289,11 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
                     continue
                 if idx == 0:
-                    istag = "%s/%s:%s" % (stream_namespace,
-                                          stream_name,
-                                          tag_event_list["tag"])
+                    istag = "%s/%s:%s" % (
+                        stream_namespace,
+                        stream_name,
+                        tag_event_list["tag"],
+                    )
                     if istag in self.used_tags:
                         # keeping because tag is used
                         filtered_items.append(item)
@@ -302,20 +308,20 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             image = self.image_mapping[item["image"]]

             # check prune over limit size
-            if prune_over_size_limit and not self.exceeds_limits(stream_namespace, image):
+            if prune_over_size_limit and not self.exceeds_limits(
+                stream_namespace, image
+            ):
                 filtered_items.append(item)
                 continue

-            image_ref = "%s/%s@%s" % (stream_namespace,
-                                      stream_name,
-                                      item["image"])
+            image_ref = "%s/%s@%s" % (stream_namespace, stream_name, item["image"])
             if image_ref in self.used_images:
                 # keeping because tag is used
                 filtered_items.append(item)
                 continue

             images_to_delete.append(item["image"])
-            if self.params.get('prune_registry'):
+            if self.params.get("prune_registry"):
                 manifests_to_delete.append(image["metadata"]["name"])
                 path = stream_namespace + "/" + stream_name
                 image_blobs, err = get_image_blobs(image)
@@ -325,21 +331,25 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
         return filtered_items, manifests_to_delete, images_to_delete

     def prune_image_streams(self, stream):
-        name = stream['metadata']['namespace'] + "/" + stream['metadata']['name']
+        name = stream["metadata"]["namespace"] + "/" + stream["metadata"]["name"]
         if is_too_young_object(stream, self.max_creation_timestamp):
             # keeping all images because of image stream too young
             return None, []
-        facts = self.kubernetes_facts(kind="ImageStream",
-                                      api_version=ApiConfiguration.get("ImageStream"),
-                                      name=stream["metadata"]["name"],
-                                      namespace=stream["metadata"]["namespace"])
-        image_stream = facts.get('resources')
+        facts = self.kubernetes_facts(
+            kind="ImageStream",
+            api_version=ApiConfiguration.get("ImageStream"),
+            name=stream["metadata"]["name"],
+            namespace=stream["metadata"]["namespace"],
+        )
+        image_stream = facts.get("resources")
         if len(image_stream) != 1:
             # skipping because it does not exist anymore
             return None, []

         stream = image_stream[0]
         namespace = self.params.get("namespace")
-        stream_to_update = not namespace or (stream["metadata"]["namespace"] == namespace)
+        stream_to_update = not namespace or (
+            stream["metadata"]["namespace"] == namespace
+        )

         manifests_to_delete, images_to_delete = [], []
         deleted_items = False
@@ -351,9 +361,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
             (
                 filtered_tag_event,
                 tag_manifests_to_delete,
-                tag_images_to_delete
+                tag_images_to_delete,
             ) = self.prune_image_stream_tag(stream, tag_event_list)
-            stream['status']['tags'][idx]['items'] = filtered_tag_event
+            stream["status"]["tags"][idx]["items"] = filtered_tag_event
             manifests_to_delete += tag_manifests_to_delete
             images_to_delete += tag_images_to_delete
             deleted_items = deleted_items or (len(tag_images_to_delete) > 0)
@@ -361,11 +371,11 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
         # Deleting tags without items
         tags = []
         for tag in stream["status"].get("tags", []):
-            if tag['items'] is None or len(tag['items']) == 0:
+            if tag["items"] is None or len(tag["items"]) == 0:
                 continue
             tags.append(tag)
-        stream['status']['tags'] = tags
+        stream["status"]["tags"] = tags

         result = None
         # Update ImageStream
         if stream_to_update:
@@ -402,19 +412,23 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
     def execute_module(self):
         resources = self.list_objects()

-        if not self.check_mode and self.params.get('prune_registry'):
+        if not self.check_mode and self.params.get("prune_registry"):
             if not self.registryhost:
-                self.registryhost = determine_host_registry(self.module, resources['Image'], resources['ImageStream'])
+                self.registryhost = determine_host_registry(
+                    self.module, resources["Image"], resources["ImageStream"]
+                )
             # validate that host has a scheme
             if "://" not in self.registryhost:
                 self.registryhost = "https://" + self.registryhost

         # Analyze Image Streams
         analyze_ref = OpenShiftAnalyzeImageStream(
-            ignore_invalid_refs=self.params.get('ignore_invalid_refs'),
+            ignore_invalid_refs=self.params.get("ignore_invalid_refs"),
             max_creation_timestamp=self.max_creation_timestamp,
-            module=self.module
+            module=self.module,
         )
-        self.used_tags, self.used_images, error = analyze_ref.analyze_image_stream(resources)
+        self.used_tags, self.used_images, error = analyze_ref.analyze_image_stream(
+            resources
+        )
         if error:
             self.fail_json(msg=error)
@@ -435,16 +449,20 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
         updated_image_streams = []
         deleted_tags_images = []
         updated_is_mapping = {}
-        for stream in resources['ImageStream']:
+        for stream in resources["ImageStream"]:
             result, images_to_delete = self.prune_image_streams(stream)
             if result:
-                updated_is_mapping[result["metadata"]["namespace"] + "/" + result["metadata"]["name"]] = result
+                updated_is_mapping[
+                    result["metadata"]["namespace"] + "/" + result["metadata"]["name"]
+                ] = result
                 updated_image_streams.append(result)
             deleted_tags_images += images_to_delete

         # Create a list with images referenced on image stream
         self.referenced_images = []
-        for item in self.kubernetes_facts(kind="ImageStream", api_version="image.openshift.io/v1")["resources"]:
+        for item in self.kubernetes_facts(
+            kind="ImageStream", api_version="image.openshift.io/v1"
+        )["resources"]:
             name = "%s/%s" % (item["metadata"]["namespace"], item["metadata"]["name"])
             if name in updated_is_mapping:
                 item = updated_is_mapping[name]
@@ -453,7 +471,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):

         # Stage 2: delete images
         images = []
-        images_to_delete = [x["metadata"]["name"] for x in resources['Image']]
+        images_to_delete = [x["metadata"]["name"] for x in resources["Image"]]
         if self.params.get("namespace") is not None:
             # When namespace is defined, prune only images that were referenced by ImageStream
             # from the corresponding namespace

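The tag-pruning loop above keeps a tag item when its `namespace/name:tag` spec or `namespace/name@digest` reference is still in use. That bookkeeping can be sketched independently of the module (simplified structures and a hypothetical `filter_tag_items` helper — not the collection's actual API):

```python
def filter_tag_items(items, stream_namespace, stream_name, tag, used_tags, used_images):
    """Return the items to keep and the image digests that may be pruned."""
    kept, prunable = [], []
    for idx, item in enumerate(items):
        # Only the newest (first) item is addressable as an istag, e.g. "ns/app:latest"
        if idx == 0:
            istag = "%s/%s:%s" % (stream_namespace, stream_name, tag)
            if istag in used_tags:
                kept.append(item)
                continue
        # Every item is addressable by digest, e.g. "ns/app@sha256:..."
        image_ref = "%s/%s@%s" % (stream_namespace, stream_name, item["image"])
        if image_ref in used_images:
            kept.append(item)
            continue
        prunable.append(item["image"])
    return kept, prunable
```

The design point mirrored from the diff: a reference by tag can only protect the most recent item, while a reference by digest protects any item in the tag's history.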
View File

@@ -1,15 +1,17 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import (absolute_import, division, print_function) from __future__ import absolute_import, division, print_function
__metaclass__ = type __metaclass__ = type
from datetime import datetime, timezone, timedelta from datetime import datetime, timezone, timedelta
import traceback
import time import time
from ansible.module_utils._text import to_native from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try: try:
from kubernetes.dynamic.exceptions import DynamicApiError from kubernetes.dynamic.exceptions import DynamicApiError
@@ -36,8 +38,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.request( result = self.request(
method="POST", method="POST",
path="/apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone".format( path="/apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone".format(
namespace=namespace, namespace=namespace, name=name
name=name
), ),
body=request, body=request,
content_type="application/json", content_type="application/json",
@@ -47,7 +48,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
msg = "Failed to clone Build %s/%s due to: %s" % (namespace, name, exc.body) msg = "Failed to clone Build %s/%s due to: %s" % (namespace, name, exc.body)
self.fail_json(msg=msg, status=exc.status, reason=exc.reason) self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
except Exception as e: except Exception as e:
msg = "Failed to clone Build %s/%s due to: %s" % (namespace, name, to_native(e)) msg = "Failed to clone Build %s/%s due to: %s" % (
namespace,
name,
to_native(e),
)
self.fail_json(msg=msg, error=to_native(e), exception=e) self.fail_json(msg=msg, error=to_native(e), exception=e)
def instantiate_build_config(self, name, namespace, request): def instantiate_build_config(self, name, namespace, request):
@@ -55,22 +60,28 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.request( result = self.request(
method="POST", method="POST",
path="/apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate".format( path="/apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate".format(
namespace=namespace, namespace=namespace, name=name
name=name
), ),
body=request, body=request,
content_type="application/json", content_type="application/json",
) )
return result.to_dict() return result.to_dict()
except DynamicApiError as exc: except DynamicApiError as exc:
msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (namespace, name, exc.body) msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (
namespace,
name,
exc.body,
)
self.fail_json(msg=msg, status=exc.status, reason=exc.reason) self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
except Exception as e: except Exception as e:
msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (namespace, name, to_native(e)) msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (
namespace,
name,
to_native(e),
)
self.fail_json(msg=msg, error=to_native(e), exception=e) self.fail_json(msg=msg, error=to_native(e), exception=e)
def start_build(self): def start_build(self):
result = None result = None
name = self.params.get("build_config_name") name = self.params.get("build_config_name")
if not name: if not name:
@@ -79,32 +90,20 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
build_request = { build_request = {
"kind": "BuildRequest", "kind": "BuildRequest",
"apiVersion": "build.openshift.io/v1", "apiVersion": "build.openshift.io/v1",
"metadata": { "metadata": {"name": name},
"name": name "triggeredBy": [{"message": "Manually triggered"}],
},
"triggeredBy": [
{"message": "Manually triggered"}
],
} }
# Overrides incremental # Overrides incremental
incremental = self.params.get("incremental") incremental = self.params.get("incremental")
if incremental is not None: if incremental is not None:
build_request.update( build_request.update(
{ {"sourceStrategyOptions": {"incremental": incremental}}
"sourceStrategyOptions": {
"incremental": incremental
}
}
) )
# Environment variable # Environment variable
if self.params.get("env_vars"): if self.params.get("env_vars"):
build_request.update( build_request.update({"env": self.params.get("env_vars")})
{
"env": self.params.get("env_vars")
}
)
# Docker strategy option # Docker strategy option
if self.params.get("build_args"): if self.params.get("build_args"):
@@ -121,22 +120,14 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if no_cache is not None: if no_cache is not None:
build_request.update( build_request.update(
{ {
"dockerStrategyOptions": { "dockerStrategyOptions": {"noCache": no_cache},
"noCache": no_cache
},
} }
) )
# commit # commit
if self.params.get("commit"): if self.params.get("commit"):
build_request.update( build_request.update(
{ {"revision": {"git": {"commit": self.params.get("commit")}}}
"revision": {
"git": {
"commit": self.params.get("commit")
}
}
}
) )
if self.params.get("build_config_name"): if self.params.get("build_config_name"):
@@ -144,7 +135,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.instantiate_build_config( result = self.instantiate_build_config(
name=self.params.get("build_config_name"), name=self.params.get("build_config_name"),
namespace=self.params.get("namespace"), namespace=self.params.get("namespace"),
request=build_request request=build_request,
) )
else: else:
@@ -152,7 +143,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.clone_build( result = self.clone_build(
name=self.params.get("build_name"), name=self.params.get("build_name"),
namespace=self.params.get("namespace"), namespace=self.params.get("namespace"),
request=build_request request=build_request,
) )
if result and self.params.get("wait"): if result and self.params.get("wait"):
@@ -179,10 +170,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
break break
elif last_status_phase in ("Cancelled", "Error", "Failed"): elif last_status_phase in ("Cancelled", "Error", "Failed"):
self.fail_json( self.fail_json(
msg="Unexpected status for Build %s/%s: %s" % ( msg="Unexpected status for Build %s/%s: %s"
% (
result["metadata"]["name"], result["metadata"]["name"],
result["metadata"]["namespace"], result["metadata"]["namespace"],
last_status_phase last_status_phase,
) )
) )
time.sleep(wait_sleep) time.sleep(wait_sleep)
@@ -190,8 +182,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if last_status_phase != "Complete": if last_status_phase != "Complete":
name = result["metadata"]["name"] name = result["metadata"]["name"]
namespace = result["metadata"]["namespace"] namespace = result["metadata"]["namespace"]
msg = "Build %s/%s has not complete after %d second(s)," \ msg = (
"current status is %s" % (namespace, name, wait_timeout, last_status_phase) "Build %s/%s has not complete after %d second(s),"
"current status is %s"
% (namespace, name, wait_timeout, last_status_phase)
)
self.fail_json(msg=msg) self.fail_json(msg=msg)
@@ -199,9 +194,8 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
self.exit_json(changed=True, builds=result) self.exit_json(changed=True, builds=result)
def cancel_build(self, restart): def cancel_build(self, restart):
kind = "Build"
kind = 'Build' api_version = "build.openshift.io/v1"
api_version = 'build.openshift.io/v1'
namespace = self.params.get("namespace") namespace = self.params.get("namespace")
phases = ["new", "pending", "running"] phases = ["new", "pending", "running"]
@@ -215,16 +209,18 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
else: else:
build_config = self.params.get("build_config_name") build_config = self.params.get("build_config_name")
# list all builds from namespace # list all builds from namespace
params = dict( params = dict(kind=kind, api_version=api_version, namespace=namespace)
kind=kind,
api_version=api_version,
namespace=namespace
)
resources = self.kubernetes_facts(**params).get("resources", []) resources = self.kubernetes_facts(**params).get("resources", [])
def _filter_builds(build): def _filter_builds(build):
config = build["metadata"].get("labels", {}).get("openshift.io/build-config.name") config = (
return build_config is None or (build_config is not None and config in build_config) build["metadata"]
.get("labels", {})
.get("openshift.io/build-config.name")
)
return build_config is None or (
build_config is not None and config in build_config
)
for item in list(filter(_filter_builds, resources)): for item in list(filter(_filter_builds, resources)):
name = item["metadata"]["name"] name = item["metadata"]["name"]
@@ -232,16 +228,15 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
names.append(name) names.append(name)
if len(names) == 0: if len(names) == 0:
self.exit_json(changed=False, msg="No Build found from namespace %s" % namespace) self.exit_json(
changed=False, msg="No Build found from namespace %s" % namespace
)
warning = [] warning = []
builds_to_cancel = [] builds_to_cancel = []
for name in names: for name in names:
params = dict( params = dict(
kind=kind, kind=kind, api_version=api_version, name=name, namespace=namespace
api_version=api_version,
name=name,
namespace=namespace
) )
resource = self.kubernetes_facts(**params).get("resources", []) resource = self.kubernetes_facts(**params).get("resources", [])
@@ -256,7 +251,10 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if phase in phases: if phase in phases:
builds_to_cancel.append(resource) builds_to_cancel.append(resource)
else: else:
warning.append("build %s/%s is not in expected phase, found %s" % (namespace, name, phase)) warning.append(
"build %s/%s is not in expected phase, found %s"
% (namespace, name, phase)
)
changed = False changed = False
result = [] result = []
@@ -278,9 +276,10 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result.append(cancelled_build) result.append(cancelled_build)
except DynamicApiError as exc: except DynamicApiError as exc:
self.fail_json( self.fail_json(
msg="Failed to cancel Build %s/%s due to: %s" % (namespace, name, exc), msg="Failed to cancel Build %s/%s due to: %s"
% (namespace, name, exc),
reason=exc.reason, reason=exc.reason,
status=exc.status status=exc.status,
) )
except Exception as e: except Exception as e:
self.fail_json( self.fail_json(
@@ -294,10 +293,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
name = build["metadata"]["name"] name = build["metadata"]["name"]
while (datetime.now() - start).seconds < wait_timeout: while (datetime.now() - start).seconds < wait_timeout:
params = dict( params = dict(
kind=kind, kind=kind, api_version=api_version, name=name, namespace=namespace
api_version=api_version,
name=name,
namespace=namespace
) )
resource = self.kubernetes_facts(**params).get("resources", []) resource = self.kubernetes_facts(**params).get("resources", [])
if len(resource) == 0: if len(resource) == 0:
@@ -307,7 +303,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if last_phase == "Cancelled": if last_phase == "Cancelled":
return resource, None return resource, None
time.sleep(wait_sleep) time.sleep(wait_sleep)
return None, "Build %s/%s is not cancelled as expected, current state is %s" % (namespace, name, last_phase) return (
None,
"Build %s/%s is not cancelled as expected, current state is %s"
% (namespace, name, last_phase),
)
         if result and self.params.get("wait"):
             wait_timeout = self.params.get("wait_timeout")
@@ -341,8 +341,8 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
     def execute_module(self):
         # list replicationcontroller candidate for pruning
-        kind = 'Build'
-        api_version = 'build.openshift.io/v1'
+        kind = "Build"
+        api_version = "build.openshift.io/v1"
         resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
         self.max_creation_timestamp = None
@@ -352,7 +352,12 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
             self.max_creation_timestamp = now - timedelta(minutes=keep_younger_than)

         def _prunable_build(build):
-            return build["status"]["phase"] in ("Complete", "Failed", "Error", "Cancelled")
+            return build["status"]["phase"] in (
+                "Complete",
+                "Failed",
+                "Error",
+                "Cancelled",
+            )

         def _orphan_build(build):
             if not _prunable_build(build):
@@ -367,7 +372,9 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
         def _younger_build(build):
             if not self.max_creation_timestamp:
                 return False
-            creation_timestamp = datetime.strptime(build['metadata']['creationTimestamp'], '%Y-%m-%dT%H:%M:%SZ')
+            creation_timestamp = datetime.strptime(
+                build["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
+            )
             return creation_timestamp < self.max_creation_timestamp

         predicates = [
@@ -401,9 +408,17 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
                 namespace = build["metadata"]["namespace"]
                 resource.delete(name=name, namespace=namespace, body={})
             except DynamicApiError as exc:
-                msg = "Failed to delete Build %s/%s due to: %s" % (namespace, name, exc.body)
+                msg = "Failed to delete Build %s/%s due to: %s" % (
+                    namespace,
+                    name,
+                    exc.body,
+                )
                 self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
             except Exception as e:
-                msg = "Failed to delete Build %s/%s due to: %s" % (namespace, name, to_native(e))
+                msg = "Failed to delete Build %s/%s due to: %s" % (
+                    namespace,
+                    name,
+                    to_native(e),
+                )
                 self.fail_json(msg=msg, error=to_native(e), exception=e)
         self.exit_json(changed=changed, builds=candidates)
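The pruning hunks above combine two checks: a build must have reached a terminal phase, and its `creationTimestamp` must fall before the `keep_younger_than` cutoff. A minimal standalone sketch of that selection logic (the dicts below only mimic the shape of an OpenShift `Build` resource; `prunable` is an illustrative name, not the module's API):

```python
from datetime import datetime, timedelta

# Terminal phases considered for pruning, as in the diff above.
PRUNABLE_PHASES = ("Complete", "Failed", "Error", "Cancelled")


def prunable(build, max_creation_timestamp):
    """A build is a candidate when it is terminal and older than the cutoff."""
    if build["status"]["phase"] not in PRUNABLE_PHASES:
        return False
    created = datetime.strptime(
        build["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
    )
    return created < max_creation_timestamp


now = datetime(2024, 1, 1, 12, 0, 0)
cutoff = now - timedelta(minutes=30)  # keep_younger_than = 30 minutes
builds = [
    {"metadata": {"creationTimestamp": "2024-01-01T10:00:00Z"},
     "status": {"phase": "Complete"}},
    {"metadata": {"creationTimestamp": "2024-01-01T11:45:00Z"},
     "status": {"phase": "Complete"}},   # too young to prune
    {"metadata": {"creationTimestamp": "2024-01-01T10:00:00Z"},
     "status": {"phase": "Running"}},    # not terminal
]
candidates = [b for b in builds if prunable(b, cutoff)]
print(len(candidates))  # 1 — only the old, completed build
```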


@@ -1,6 +1,7 @@
 #!/usr/bin/env python
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 import traceback
@@ -9,8 +10,12 @@ from abc import abstractmethod
 from ansible.module_utils._text import to_native

 try:
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import get_api_client
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.core import AnsibleK8SModule
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import (
+        get_api_client,
+    )
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.core import (
+        AnsibleK8SModule,
+    )
     from ansible_collections.kubernetes.core.plugins.module_utils.k8s.service import (
         K8sService,
         diff_objects,
@@ -24,7 +29,10 @@ try:
         merge_params,
         flatten_list_kind,
     )
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import CoreException
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import (
+        CoreException,
+    )

     HAS_KUBERNETES_COLLECTION = True
     k8s_collection_import_exception = None
     K8S_COLLECTION_ERROR = None
@@ -35,7 +43,6 @@ except ImportError as e:
 class AnsibleOpenshiftModule(AnsibleK8SModule):
     def __init__(self, **kwargs):
         super(AnsibleOpenshiftModule, self).__init__(**kwargs)
@@ -86,7 +93,6 @@ class AnsibleOpenshiftModule(AnsibleK8SModule):
         return diff_objects(existing, new)

     def run_module(self):
         try:
             self.execute_module()
         except CoreException as e:


@@ -1,6 +1,7 @@
 #!/usr/bin/env python
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 import re
@@ -23,62 +24,68 @@ def convert_storage_to_bytes(value):
 def is_valid_digest(digest):
     digest_algorithm_size = dict(
-        sha256=64, sha384=96, sha512=128,
+        sha256=64,
+        sha384=96,
+        sha512=128,
     )
-    m = re.match(r'[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+', digest)
+    m = re.match(r"[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+", digest)
     if not m:
         return "Docker digest does not match expected format %s" % digest
-    idx = digest.find(':')
+    idx = digest.find(":")
     # case: "sha256:" with no hex.
     if idx < 0 or idx == (len(digest) - 1):
         return "Invalid docker digest %s, no hex value define" % digest
     algorithm = digest[:idx]
     if algorithm not in digest_algorithm_size:
-        return "Unsupported digest algorithm value %s for digest %s" % (algorithm, digest)
+        return "Unsupported digest algorithm value %s for digest %s" % (
+            algorithm,
+            digest,
+        )
-    hex_value = digest[idx + 1:]
+    hex_value = digest[idx + 1:]  # fmt: skip
     if len(hex_value) != digest_algorithm_size.get(algorithm):
         return "Invalid length for digest hex expected %d found %d (digest is %s)" % (
-            digest_algorithm_size.get(algorithm), len(hex_value), digest
+            digest_algorithm_size.get(algorithm),
+            len(hex_value),
+            digest,
         )
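The digest check reformatted above accepts `<algorithm>:<hex>` where the hex length is fixed per algorithm. A minimal standalone sketch of the same rules (self-contained re-implementation for illustration, not an import from the collection):

```python
import re

# Expected hex length per supported digest algorithm, as in the diff above.
DIGEST_HEX_LEN = {"sha256": 64, "sha384": 96, "sha512": 128}


def is_valid_digest(digest):
    """Return an error string for a malformed docker digest, or None if valid."""
    if not re.match(r"[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+", digest):
        return "Docker digest does not match expected format %s" % digest
    algorithm, _, hex_value = digest.partition(":")
    if algorithm not in DIGEST_HEX_LEN:
        return "Unsupported digest algorithm value %s" % algorithm
    if len(hex_value) != DIGEST_HEX_LEN[algorithm]:
        return "Invalid length for digest hex expected %d found %d" % (
            DIGEST_HEX_LEN[algorithm],
            len(hex_value),
        )
    return None


print(is_valid_digest("sha256:" + "a" * 64))  # None — well-formed
print(is_valid_digest("sha256:abc"))          # length error string
```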
 def parse_docker_image_ref(image_ref, module=None):
     """
     Docker Grammar Reference
     Reference => name [ ":" tag ] [ "@" digest ]
     name => [hostname '/'] component ['/' component]*
     hostname => hostcomponent ['.' hostcomponent]* [':' port-number]
     hostcomponent => /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
     port-number => /[0-9]+/
     component => alpha-numeric [separator alpha-numeric]*
     alpha-numeric => /[a-z0-9]+/
     separator => /[_.]|__|[-]*/
     """
     idx = image_ref.find("/")

     def _contains_any(src, values):
         return any(x in src for x in values)

-    result = {
-        "tag": None, "digest": None
-    }
+    result = {"tag": None, "digest": None}
     default_domain = "docker.io"
-    if idx < 0 or (not _contains_any(image_ref[:idx], ":.") and image_ref[:idx] != "localhost"):
+    if idx < 0 or (
+        not _contains_any(image_ref[:idx], ":.") and image_ref[:idx] != "localhost"
+    ):
         result["hostname"], remainder = default_domain, image_ref
     else:
-        result["hostname"], remainder = image_ref[:idx], image_ref[idx + 1:]
+        result["hostname"], remainder = image_ref[:idx], image_ref[idx + 1:]  # fmt: skip

     # Parse remainder information
     idx = remainder.find("@")
     if idx > 0 and len(remainder) > (idx + 1):
         # docker image reference with digest
-        component, result["digest"] = remainder[:idx], remainder[idx + 1:]
+        component, result["digest"] = remainder[:idx], remainder[idx + 1:]  # fmt: skip
         err = is_valid_digest(result["digest"])
         if err:
             if module:
@@ -88,7 +95,7 @@ def parse_docker_image_ref(image_ref, module=None):
     idx = remainder.find(":")
     if idx > 0 and len(remainder) > (idx + 1):
         # docker image reference with tag
-        component, result["tag"] = remainder[:idx], remainder[idx + 1:]
+        component, result["tag"] = remainder[:idx], remainder[idx + 1:]  # fmt: skip
     else:
         # name only
         component = remainder
@@ -96,8 +103,6 @@ def parse_docker_image_ref(image_ref, module=None):
     namespace = None
     if len(v) > 1:
         namespace = v[0]
-    result.update({
-        "namespace": namespace, "name": v[-1]
-    })
+    result.update({"namespace": namespace, "name": v[-1]})
     return result, None
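The grammar in the docstring above resolves a reference into hostname, namespace, name, and an optional tag or digest, defaulting the hostname to `docker.io` when the first path segment contains no `.` or `:` and is not `localhost`. An illustrative, simplified re-implementation of those rules (`parse_ref` is a hypothetical name; it skips digest validation and edge cases the module handles):

```python
def parse_ref(image_ref):
    """Split a docker image reference into its parts, per the grammar above."""
    result = {"hostname": None, "namespace": None, "name": None,
              "tag": None, "digest": None}
    idx = image_ref.find("/")
    first = image_ref[:idx] if idx >= 0 else ""
    # First segment is a registry only if it looks like a host (has "." or ":")
    # or is "localhost"; otherwise the default registry applies.
    if idx < 0 or (not any(c in first for c in ":.") and first != "localhost"):
        result["hostname"], remainder = "docker.io", image_ref
    else:
        result["hostname"], remainder = first, image_ref[idx + 1:]
    if "@" in remainder:
        component, result["digest"] = remainder.split("@", 1)
    elif ":" in remainder:
        component, result["tag"] = remainder.split(":", 1)
    else:
        component = remainder
    parts = component.split("/")
    if len(parts) > 1:
        result["namespace"] = parts[0]
    result["name"] = parts[-1]
    return result


print(parse_ref("quay.io/openshift/origin-cli:v4.0"))
```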


@@ -3,11 +3,11 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function

 __metaclass__ = type

-import traceback

 from datetime import datetime

 from ansible.module_utils.parsing.convert_bool import boolean
@@ -19,18 +19,21 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_ldap impor
     ldap_split_host_port,
     OpenshiftLDAPRFC2307,
     OpenshiftLDAPActiveDirectory,
-    OpenshiftLDAPAugmentedActiveDirectory
+    OpenshiftLDAPAugmentedActiveDirectory,
 )

 try:
     import ldap

     HAS_PYTHON_LDAP = True
     PYTHON_LDAP_ERROR = None
 except ImportError as e:
     HAS_PYTHON_LDAP = False
     PYTHON_LDAP_ERROR = e

-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)

 try:
     from kubernetes.dynamic.exceptions import DynamicApiError
@@ -44,7 +47,9 @@ LDAP_OPENSHIFT_UID_ANNOTATION = "openshift.io/ldap.uid"
 LDAP_OPENSHIFT_SYNCTIME_ANNOTATION = "openshift.io/ldap.sync-time"

-def connect_to_ldap(module, server_uri, bind_dn=None, bind_pw=None, insecure=True, ca_file=None):
+def connect_to_ldap(
+    module, server_uri, bind_dn=None, bind_pw=None, insecure=True, ca_file=None
+):
     if insecure:
         ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
     elif ca_file:
@@ -56,27 +61,36 @@ def connect_to_ldap(module, server_uri, bind_dn=None, bind_pw=None, insecure=Tru
         connection.simple_bind_s(bind_dn, bind_pw)
         return connection
     except ldap.LDAPError as e:
-        module.fail_json(msg="Cannot bind to the LDAP server '{0}' due to: {1}".format(server_uri, e))
+        module.fail_json(
+            msg="Cannot bind to the LDAP server '{0}' due to: {1}".format(server_uri, e)
+        )

 def validate_group_annotation(definition, host_ip):
-    name = definition['metadata']['name']
+    name = definition["metadata"]["name"]
     # Validate LDAP URL Annotation
-    annotate_url = definition['metadata'].get('annotations', {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+    annotate_url = (
+        definition["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+    )
     if host_ip:
         if not annotate_url:
-            return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(name, LDAP_OPENSHIFT_URL_ANNOTATION)
+            return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(
+                name, LDAP_OPENSHIFT_URL_ANNOTATION
+            )
         elif annotate_url != host_ip:
             return "group '{0}' was not synchronized from: '{1}'".format(name, host_ip)
     # Validate LDAP UID Annotation
-    annotate_uid = definition['metadata']['annotations'].get(LDAP_OPENSHIFT_UID_ANNOTATION)
+    annotate_uid = definition["metadata"]["annotations"].get(
+        LDAP_OPENSHIFT_UID_ANNOTATION
+    )
     if not annotate_uid:
-        return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(name, LDAP_OPENSHIFT_UID_ANNOTATION)
+        return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(
+            name, LDAP_OPENSHIFT_UID_ANNOTATION
+        )
     return None

 class OpenshiftLDAPGroups(object):
     kind = "Group"
     version = "user.openshift.io/v1"
@@ -88,11 +102,7 @@ class OpenshiftLDAPGroups(object):
     @property
     def k8s_group_api(self):
         if not self.__group_api:
-            params = dict(
-                kind=self.kind,
-                api_version=self.version,
-                fail=True
-            )
+            params = dict(kind=self.kind, api_version=self.version, fail=True)
             self.__group_api = self.module.find_resource(**params)
         return self.__group_api
@@ -139,16 +149,26 @@ class OpenshiftLDAPGroups(object):
             if missing:
                 self.module.fail_json(
-                    msg="The following groups were not found: %s" % ''.join(missing)
+                    msg="The following groups were not found: %s" % "".join(missing)
                 )
         else:
             label_selector = "%s=%s" % (LDAP_OPENSHIFT_HOST_LABEL, host)
-            resources = self.get_group_info(label_selectors=[label_selector], return_list=True)
+            resources = self.get_group_info(
+                label_selectors=[label_selector], return_list=True
+            )
             if not resources:
-                return None, "Unable to find Group matching label selector '%s'" % label_selector
+                return (
+                    None,
+                    "Unable to find Group matching label selector '%s'"
+                    % label_selector,
+                )
             groups = resources
             if deny_groups:
-                groups = [item for item in groups if item["metadata"]["name"] not in deny_groups]
+                groups = [
+                    item
+                    for item in groups
+                    if item["metadata"]["name"] not in deny_groups
+                ]

         uids = []
         for grp in groups:
@@ -156,7 +176,9 @@ class OpenshiftLDAPGroups(object):
             if err and allow_groups:
                 # We raise an error for group part of the allow_group not matching LDAP sync criteria
                 return None, err
-            group_uid = grp['metadata']['annotations'].get(LDAP_OPENSHIFT_UID_ANNOTATION)
+            group_uid = grp["metadata"]["annotations"].get(
+                LDAP_OPENSHIFT_UID_ANNOTATION
+            )
             self.cache[group_uid] = grp
             uids.append(group_uid)
         return uids, None
@@ -174,38 +196,65 @@ class OpenshiftLDAPGroups(object):
                 "kind": "Group",
                 "metadata": {
                     "name": group_name,
-                    "labels": {
-                        LDAP_OPENSHIFT_HOST_LABEL: self.module.host
-                    },
+                    "labels": {LDAP_OPENSHIFT_HOST_LABEL: self.module.host},
                     "annotations": {
                         LDAP_OPENSHIFT_URL_ANNOTATION: self.module.netlocation,
                         LDAP_OPENSHIFT_UID_ANNOTATION: group_uid,
-                    }
-                }
+                    },
+                },
             }

         # Make sure we aren't taking over an OpenShift group that is already related to a different LDAP group
-        ldaphost_label = group["metadata"].get("labels", {}).get(LDAP_OPENSHIFT_HOST_LABEL)
+        ldaphost_label = (
+            group["metadata"].get("labels", {}).get(LDAP_OPENSHIFT_HOST_LABEL)
+        )
         if not ldaphost_label or ldaphost_label != self.module.host:
-            return None, "Group %s: %s label did not match sync host: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_HOST_LABEL, self.module.host, ldaphost_label
+            return (
+                None,
+                "Group %s: %s label did not match sync host: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_HOST_LABEL,
+                    self.module.host,
+                    ldaphost_label,
+                ),
             )

-        ldapurl_annotation = group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+        ldapurl_annotation = (
+            group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+        )
         if not ldapurl_annotation or ldapurl_annotation != self.module.netlocation:
-            return None, "Group %s: %s annotation did not match sync host: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_URL_ANNOTATION, self.module.netlocation, ldapurl_annotation
+            return (
+                None,
+                "Group %s: %s annotation did not match sync host: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_URL_ANNOTATION,
+                    self.module.netlocation,
+                    ldapurl_annotation,
+                ),
            )

-        ldapuid_annotation = group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_UID_ANNOTATION)
+        ldapuid_annotation = (
+            group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_UID_ANNOTATION)
+        )
         if not ldapuid_annotation or ldapuid_annotation != group_uid:
-            return None, "Group %s: %s annotation did not match LDAP UID: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_UID_ANNOTATION, group_uid, ldapuid_annotation
+            return (
+                None,
+                "Group %s: %s annotation did not match LDAP UID: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_UID_ANNOTATION,
+                    group_uid,
+                    ldapuid_annotation,
+                ),
            )

         # Overwrite Group Users data
         group["users"] = usernames
-        group["metadata"]["annotations"][LDAP_OPENSHIFT_SYNCTIME_ANNOTATION] = datetime.now().isoformat()
+        group["metadata"]["annotations"][
+            LDAP_OPENSHIFT_SYNCTIME_ANNOTATION
+        ] = datetime.now().isoformat()
         return group, None

     def create_openshift_groups(self, groups: list):
@@ -223,9 +272,15 @@ class OpenshiftLDAPGroups(object):
             else:
                 definition = self.k8s_group_api.create(definition).to_dict()
         except DynamicApiError as exc:
-            self.module.fail_json(msg="Failed to %s Group '%s' due to: %s" % (method, name, exc.body))
+            self.module.fail_json(
+                msg="Failed to %s Group '%s' due to: %s"
+                % (method, name, exc.body)
+            )
         except Exception as exc:
-            self.module.fail_json(msg="Failed to %s Group '%s' due to: %s" % (method, name, to_native(exc)))
+            self.module.fail_json(
+                msg="Failed to %s Group '%s' due to: %s"
+                % (method, name, to_native(exc))
+            )
         equals = False
         if existing:
             equals, diff = self.module.diff_objects(existing, definition)
@@ -235,27 +290,27 @@ class OpenshiftLDAPGroups(object):
         return results, diffs, changed

     def delete_openshift_group(self, name: str):
-        result = dict(
-            kind=self.kind,
-            apiVersion=self.version,
-            metadata=dict(
-                name=name
-            )
-        )
+        result = dict(kind=self.kind, apiVersion=self.version, metadata=dict(name=name))
         if not self.module.check_mode:
             try:
                 result = self.k8s_group_api.delete(name=name).to_dict()
             except DynamicApiError as exc:
-                self.module.fail_json(msg="Failed to delete Group '{0}' due to: {1}".format(name, exc.body))
+                self.module.fail_json(
+                    msg="Failed to delete Group '{0}' due to: {1}".format(
+                        name, exc.body
+                    )
+                )
             except Exception as exc:
-                self.module.fail_json(msg="Failed to delete Group '{0}' due to: {1}".format(name, to_native(exc)))
+                self.module.fail_json(
+                    msg="Failed to delete Group '{0}' due to: {1}".format(
+                        name, to_native(exc)
+                    )
+                )
         return result

 class OpenshiftGroupsSync(AnsibleOpenshiftModule):
     def __init__(self, **kwargs):
         super(OpenshiftGroupsSync, self).__init__(**kwargs)
         self.__k8s_group_api = None
         self.__ldap_connection = None
@@ -267,17 +322,14 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
         if not HAS_PYTHON_LDAP:
             self.fail_json(
-                msg=missing_required_lib('python-ldap'), error=to_native(PYTHON_LDAP_ERROR)
+                msg=missing_required_lib("python-ldap"),
+                error=to_native(PYTHON_LDAP_ERROR),
             )

     @property
     def k8s_group_api(self):
         if not self.__k8s_group_api:
-            params = dict(
-                kind="Group",
-                api_version="user.openshift.io/v1",
-                fail=True
-            )
+            params = dict(kind="Group", api_version="user.openshift.io/v1", fail=True)
             self.__k8s_group_api = self.find_resource(**params)
         return self.__k8s_group_api
@@ -291,11 +343,11 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
         # Create connection object
         params = dict(
             module=self,
-            server_uri=self.config.get('url'),
-            bind_dn=self.config.get('bindDN'),
-            bind_pw=self.config.get('bindPassword'),
-            insecure=boolean(self.config.get('insecure')),
-            ca_file=self.config.get('ca')
+            server_uri=self.config.get("url"),
+            bind_dn=self.config.get("bindDN"),
+            bind_pw=self.config.get("bindPassword"),
+            insecure=boolean(self.config.get("insecure")),
+            ca_file=self.config.get("ca"),
         )
         self.__ldap_connection = connect_to_ldap(**params)
         return self.__ldap_connection
@@ -327,7 +379,6 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
         return syncer

     def synchronize(self):
         sync_group_type = self.module.params.get("type")
         groups_uids = []
@@ -365,7 +416,8 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
                 name, err = syncer.get_username_for_entry(entry)
                 if err:
                     self.exit_json(
-                        msg="Unable to determine username for entry %s: %s" % (entry, err)
+                        msg="Unable to determine username for entry %s: %s"
+                        % (entry, err)
                     )
                 if isinstance(name, list):
                     usernames.extend(name)
@@ -380,13 +432,17 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
                 self.exit_json(msg=err)

             # Make Openshift group
-            group, err = ldap_openshift_group.make_openshift_group(uid, group_name, usernames)
+            group, err = ldap_openshift_group.make_openshift_group(
+                uid, group_name, usernames
+            )
             if err:
                 self.fail_json(msg=err)
             openshift_groups.append(group)

         # Create Openshift Groups
-        results, diffs, changed = ldap_openshift_group.create_openshift_groups(openshift_groups)
+        results, diffs, changed = ldap_openshift_group.create_openshift_groups(
+            openshift_groups
+        )
         self.module.exit_json(changed=True, groups=results)

     def prune(self):
@@ -404,7 +460,10 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
             # Check if LDAP group exist
             exists, err = syncer.is_ldapgroup_exists(uid)
             if err:
-                msg = "Error determining LDAP group existence for group %s: %s" % (uid, err)
+                msg = "Error determining LDAP group existence for group %s: %s" % (
+                    uid,
+                    err,
+                )
                 self.module.fail_json(msg=msg)

             if exists:
@@ -429,14 +488,22 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
             self.fail_json(msg="Invalid LDAP Sync config: %s" % error)

         # Split host/port
-        if self.config.get('url'):
-            result, error = ldap_split_host_port(self.config.get('url'))
+        if self.config.get("url"):
+            result, error = ldap_split_host_port(self.config.get("url"))
             if error:
-                self.fail_json(msg="Failed to parse url='{0}': {1}".format(self.config.get('url'), error))
-            self.netlocation, self.host, self.port = result["netlocation"], result["host"], result["port"]
+                self.fail_json(
+                    msg="Failed to parse url='{0}': {1}".format(
+                        self.config.get("url"), error
+                    )
+                )
+            self.netlocation, self.host, self.port = (
+                result["netlocation"],
+                result["host"],
+                result["port"],
+            )
             self.scheme = result["scheme"]

-        if self.params.get('state') == 'present':
+        if self.params.get("state") == "present":
             self.synchronize()
         else:
             self.prune()
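Before syncing or pruning, the module splits the configured LDAP URL into `netlocation`, `host`, `port`, and `scheme` via the `ldap_split_host_port` helper from the collection's module_utils. A rough standalone approximation using `urllib` (only the result keys are taken from the code above; the parsing shown here is an assumption, not the helper's actual implementation):

```python
from urllib.parse import urlparse


def split_ldap_url(url):
    """Split an LDAP URL into the pieces the sync module works with."""
    parsed = urlparse(url)
    return {
        "scheme": parsed.scheme,        # e.g. "ldap" or "ldaps"
        "netlocation": parsed.netloc,   # host:port as one string
        "host": parsed.hostname,
        "port": parsed.port,
    }


print(split_ldap_url("ldap://ldap.example.com:389"))
```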


@@ -1,6 +1,7 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import (absolute_import, division, print_function) from __future__ import absolute_import, division, print_function
__metaclass__ = type __metaclass__ = type
from datetime import datetime from datetime import datetime
@@ -17,9 +18,9 @@ def get_image_blobs(image):
return blobs, "failed to read metadata for image %s" % image["metadata"]["name"] return blobs, "failed to read metadata for image %s" % image["metadata"]["name"]
media_type_manifest = ( media_type_manifest = (
"application/vnd.docker.distribution.manifest.v2+json", "application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json" "application/vnd.oci.image.manifest.v1+json",
) )
media_type_has_config = image['dockerImageManifestMediaType'] in media_type_manifest media_type_has_config = image["dockerImageManifestMediaType"] in media_type_manifest
docker_image_id = docker_image_metadata.get("Id") docker_image_id = docker_image_metadata.get("Id")
if media_type_has_config and docker_image_id and len(docker_image_id) > 0: if media_type_has_config and docker_image_id and len(docker_image_id) > 0:
blobs.append(docker_image_id) blobs.append(docker_image_id)
@@ -29,19 +30,18 @@ def get_image_blobs(image):
def is_created_after(creation_timestamp, max_creation_timestamp): def is_created_after(creation_timestamp, max_creation_timestamp):
if not max_creation_timestamp: if not max_creation_timestamp:
return False return False
creationTimestamp = datetime.strptime(creation_timestamp, '%Y-%m-%dT%H:%M:%SZ') creationTimestamp = datetime.strptime(creation_timestamp, "%Y-%m-%dT%H:%M:%SZ")
return creationTimestamp > max_creation_timestamp return creationTimestamp > max_creation_timestamp
def is_too_young_object(obj, max_creation_timestamp): def is_too_young_object(obj, max_creation_timestamp):
return is_created_after(obj['metadata']['creationTimestamp'], return is_created_after(
max_creation_timestamp) obj["metadata"]["creationTimestamp"], max_creation_timestamp
)
class OpenShiftAnalyzeImageStream(object): class OpenShiftAnalyzeImageStream(object):
def __init__(self, ignore_invalid_refs, max_creation_timestamp, module): def __init__(self, ignore_invalid_refs, max_creation_timestamp, module):
self.max_creationTimestamp = max_creation_timestamp self.max_creationTimestamp = max_creation_timestamp
self.used_tags = {} self.used_tags = {}
self.used_images = {} self.used_images = {}
@@ -53,32 +53,34 @@ class OpenShiftAnalyzeImageStream(object):
if error: if error:
return error return error
if not result['hostname'] or not result['namespace']: if not result["hostname"] or not result["namespace"]:
# image reference does not match hostname/namespace/name pattern - skipping # image reference does not match hostname/namespace/name pattern - skipping
return None return None
if not result['digest']: if not result["digest"]:
# Attempt to dereference istag. Since we cannot be sure whether the reference refers to the # Attempt to dereference istag. Since we cannot be sure whether the reference refers to the
# integrated registry or not, we ignore the host part completely. As a consequence, we may keep # integrated registry or not, we ignore the host part completely. As a consequence, we may keep
# image otherwise sentenced for a removal just because its pull spec accidentally matches one of # image otherwise sentenced for a removal just because its pull spec accidentally matches one of
# our imagestreamtags. # our imagestreamtags.
# set the tag if empty # set the tag if empty
if result['tag'] == "": if result["tag"] == "":
result['tag'] = 'latest' result["tag"] = "latest"
key = "%s/%s:%s" % (result['namespace'], result['name'], result['tag']) key = "%s/%s:%s" % (result["namespace"], result["name"], result["tag"])
if key not in self.used_tags: if key not in self.used_tags:
self.used_tags[key] = [] self.used_tags[key] = []
self.used_tags[key].append(referrer) self.used_tags[key].append(referrer)
else: else:
key = "%s/%s@%s" % (result['namespace'], result['name'], result['digest']) key = "%s/%s@%s" % (result["namespace"], result["name"], result["digest"])
if key not in self.used_images: if key not in self.used_images:
self.used_images[key] = [] self.used_images[key] = []
self.used_images[key].append(referrer) self.used_images[key].append(referrer)
def analyze_refs_from_pod_spec(self, podSpec, referrer): def analyze_refs_from_pod_spec(self, podSpec, referrer):
for container in podSpec.get('initContainers', []) + podSpec.get('containers', []): for container in podSpec.get("initContainers", []) + podSpec.get(
image = container.get('image') "containers", []
):
image = container.get("image")
if len(image.strip()) == 0: if len(image.strip()) == 0:
# Ignoring container because it has no reference to image # Ignoring container because it has no reference to image
continue continue
@@ -93,29 +95,35 @@ class OpenShiftAnalyzeImageStream(object):
# pending or running. Additionally, it has to be at least as old as the minimum # pending or running. Additionally, it has to be at least as old as the minimum
# age threshold defined by the algorithm. # age threshold defined by the algorithm.
too_young = is_too_young_object(pod, self.max_creationTimestamp) too_young = is_too_young_object(pod, self.max_creationTimestamp)
if pod['status']['phase'] not in ("Running", "Pending") and too_young: if pod["status"]["phase"] not in ("Running", "Pending") and too_young:
continue continue
referrer = { referrer = {
"kind": pod["kind"], "kind": pod["kind"],
"namespace": pod["metadata"]["namespace"], "namespace": pod["metadata"]["namespace"],
"name": pod["metadata"]["name"], "name": pod["metadata"]["name"],
} }
err = self.analyze_refs_from_pod_spec(pod['spec'], referrer) err = self.analyze_refs_from_pod_spec(pod["spec"], referrer)
if err: if err:
return err return err
return None return None
def analyze_refs_pod_creators(self, resources): def analyze_refs_pod_creators(self, resources):
keys = ( keys = (
"ReplicationController", "DeploymentConfig", "DaemonSet", "ReplicationController",
"Deployment", "ReplicaSet", "StatefulSet", "Job", "CronJob" "DeploymentConfig",
"DaemonSet",
"Deployment",
"ReplicaSet",
"StatefulSet",
"Job",
"CronJob",
) )
for k, objects in iteritems(resources): for k, objects in iteritems(resources):
if k not in keys: if k not in keys:
continue continue
for obj in objects: for obj in objects:
if k == 'CronJob': if k == "CronJob":
spec = obj["spec"]["jobTemplate"]["spec"]["template"]["spec"] spec = obj["spec"]["jobTemplate"]["spec"]["template"]["spec"]
else: else:
spec = obj["spec"]["template"]["spec"] spec = obj["spec"]["template"]["spec"]
@@ -132,64 +140,84 @@ class OpenShiftAnalyzeImageStream(object):
    def analyze_refs_from_strategy(self, build_strategy, namespace, referrer):
        # Determine 'from' reference
        def _determine_source_strategy():
            for src in ("sourceStrategy", "dockerStrategy", "customStrategy"):
                strategy = build_strategy.get(src)
                if strategy:
                    return strategy.get("from")
            return None

        def _parse_image_stream_image_name(name):
            v = name.split("@")
            if len(v) != 2:
                return (
                    None,
                    None,
                    "expected exactly one @ in the isimage name %s" % name,
                )
            name = v[0]
            tag = v[1]
            if len(name) == 0 or len(tag) == 0:
                return (
                    None,
                    None,
                    "image stream image name %s must have a name and ID" % name,
                )
            return name, tag, None

        def _parse_image_stream_tag_name(name):
            if "@" in name:
                return (
                    None,
                    None,
                    "%s is an image stream image, not an image stream tag" % name,
                )
            v = name.split(":")
            if len(v) != 2:
                return (
                    None,
                    None,
                    "expected exactly one : delimiter in the istag %s" % name,
                )
            name = v[0]
            tag = v[1]
            if len(name) == 0 or len(tag) == 0:
                return (
                    None,
                    None,
                    "image stream tag name %s must have a name and a tag" % name,
                )
            return name, tag, None

        from_strategy = _determine_source_strategy()
        if from_strategy:
            if from_strategy.get("kind") == "DockerImage":
                docker_image_ref = from_strategy.get("name").strip()
                if len(docker_image_ref) > 0:
                    err = self.analyze_reference_image(docker_image_ref, referrer)
            elif from_strategy.get("kind") == "ImageStreamImage":
                name, tag, error = _parse_image_stream_image_name(
                    from_strategy.get("name")
                )
                if error:
                    if not self.ignore_invalid_refs:
                        return error
                else:
                    namespace = from_strategy.get("namespace") or namespace
                    self.used_images.append(
                        {"namespace": namespace, "name": name, "tag": tag}
                    )
            elif from_strategy.get("kind") == "ImageStreamTag":
                name, tag, error = _parse_image_stream_tag_name(
                    from_strategy.get("name")
                )
                if error:
                    if not self.ignore_invalid_refs:
                        return error
                else:
                    namespace = from_strategy.get("namespace") or namespace
                    self.used_tags.append(
                        {"namespace": namespace, "name": name, "tag": tag}
                    )

    def analyze_refs_from_build_strategy(self, resources):
        # Json Path is always spec.strategy
@@ -203,16 +231,20 @@ class OpenShiftAnalyzeImageStream(object):
"namespace": obj["metadata"]["namespace"], "namespace": obj["metadata"]["namespace"],
"name": obj["metadata"]["name"], "name": obj["metadata"]["name"],
} }
error = self.analyze_refs_from_strategy(obj['spec']['strategy'], error = self.analyze_refs_from_strategy(
obj['metadata']['namespace'], obj["spec"]["strategy"], obj["metadata"]["namespace"], referrer
referrer) )
if error is not None: if error is not None:
return "%s/%s/%s: %s" % (referrer["kind"], referrer["namespace"], referrer["name"], error) return "%s/%s/%s: %s" % (
referrer["kind"],
referrer["namespace"],
referrer["name"],
error,
)
def analyze_image_stream(self, resources): def analyze_image_stream(self, resources):
# Analyze image reference from Pods # Analyze image reference from Pods
error = self.analyze_refs_from_pods(resources['Pod']) error = self.analyze_refs_from_pods(resources["Pod"])
if error: if error:
return None, None, error return None, None, error
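As a standalone illustration of the istag parsing rules that `_parse_image_stream_tag_name` above enforces, a minimal sketch (hypothetical free function, not part of the module's API) behaves like this:

```python
def parse_image_stream_tag_name(name):
    """Split an istag reference "name:tag" into (name, tag, error).

    Mirrors the validation in _parse_image_stream_tag_name: rejects isimage
    references (containing "@"), requires exactly one ":" delimiter, and
    requires both halves to be non-empty.
    """
    if "@" in name:
        return None, None, "%s is an image stream image, not an image stream tag" % name
    parts = name.split(":")
    if len(parts) != 2:
        return None, None, "expected exactly one : delimiter in the istag %s" % name
    name, tag = parts
    if len(name) == 0 or len(tag) == 0:
        return None, None, "image stream tag name %s must have a name and a tag" % name
    return name, tag, None
```

For example, `"ruby:2.7"` splits cleanly, while `"ruby"` (no tag) and `"ruby@sha256:..."` (an isimage, not an istag) both produce error messages.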


@@ -1,16 +1,17 @@
#!/usr/bin/env python

from __future__ import absolute_import, division, print_function

__metaclass__ = type

import copy

from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import string_types
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)

try:
    from kubernetes.dynamic.exceptions import DynamicApiError
@@ -44,10 +45,17 @@ def follow_imagestream_tag_reference(stream, tag):
        return name, tag, len(parts) == 2

    content = []
    err_cross_stream_ref = (
        "tag %s points to an imagestreamtag from another ImageStream" % tag
    )
    while True:
        if tag in content:
            return (
                tag,
                None,
                multiple,
                "tag %s on the image stream is a reference to same tag" % tag,
            )
        content.append(tag)
        tag_ref = _imagestream_has_tag()
        if not tag_ref:
@@ -56,7 +64,10 @@ def follow_imagestream_tag_reference(stream, tag):
if not tag_ref.get("from") or tag_ref["from"]["kind"] != "ImageStreamTag": if not tag_ref.get("from") or tag_ref["from"]["kind"] != "ImageStreamTag":
return tag, tag_ref, multiple, None return tag, tag_ref, multiple, None
if tag_ref["from"]["namespace"] != "" and tag_ref["from"]["namespace"] != stream["metadata"]["namespace"]: if (
tag_ref["from"]["namespace"] != ""
and tag_ref["from"]["namespace"] != stream["metadata"]["namespace"]
):
return tag, None, multiple, err_cross_stream_ref return tag, None, multiple, err_cross_stream_ref
# The reference needs to be followed with two format patterns: # The reference needs to be followed with two format patterns:
@@ -64,7 +75,12 @@ def follow_imagestream_tag_reference(stream, tag):
if ":" in tag_ref["from"]["name"]: if ":" in tag_ref["from"]["name"]:
name, tagref, result = _imagestream_split_tag(tag_ref["from"]["name"]) name, tagref, result = _imagestream_split_tag(tag_ref["from"]["name"])
if not result: if not result:
return tag, None, multiple, "tag %s points to an invalid imagestreamtag" % tag return (
tag,
None,
multiple,
"tag %s points to an invalid imagestreamtag" % tag,
)
if name != stream["metadata"]["namespace"]: if name != stream["metadata"]["namespace"]:
# anotheris:sometag - this should not happen. # anotheris:sometag - this should not happen.
return tag, None, multiple, err_cross_stream_ref return tag, None, multiple, err_cross_stream_ref
@@ -80,7 +96,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
        super(OpenShiftImportImage, self).__init__(**kwargs)
        self._rest_client = None
        self.registryhost = self.params.get("registry_url")
        self.changed = False

        ref_policy = self.params.get("reference_policy")
@@ -90,9 +106,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
elif ref_policy == "local": elif ref_policy == "local":
ref_policy_type = "Local" ref_policy_type = "Local"
self.ref_policy = { self.ref_policy = {"type": ref_policy_type}
"type": ref_policy_type
}
self.validate_certs = self.params.get("validate_registry_certs") self.validate_certs = self.params.get("validate_registry_certs")
self.cluster_resources = {} self.cluster_resources = {}
@@ -104,15 +118,15 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
"metadata": { "metadata": {
"name": stream["metadata"]["name"], "name": stream["metadata"]["name"],
"namespace": stream["metadata"]["namespace"], "namespace": stream["metadata"]["namespace"],
"resourceVersion": stream["metadata"].get("resourceVersion") "resourceVersion": stream["metadata"].get("resourceVersion"),
}, },
"spec": { "spec": {"import": True},
"import": True
}
} }
annotations = stream.get("annotations", {}) annotations = stream.get("annotations", {})
insecure = boolean(annotations.get("openshift.io/image.insecureRepository", True)) insecure = boolean(
annotations.get("openshift.io/image.insecureRepository", True)
)
if self.validate_certs is not None: if self.validate_certs is not None:
insecure = not self.validate_certs insecure = not self.validate_certs
return isi, insecure return isi, insecure
@@ -126,7 +140,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
            },
            "importPolicy": {
                "insecure": insecure,
                "scheduled": self.params.get("scheduled"),
            },
            "referencePolicy": self.ref_policy,
        }
@@ -149,26 +163,23 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
            scheduled = scheduled or old_tag["importPolicy"].get("scheduled")

        images = isi["spec"].get("images", [])
        images.append(
            {
                "from": {
                    "kind": "DockerImage",
                    "name": tags.get(k),
                },
                "to": {"name": k},
                "importPolicy": {"insecure": insecure, "scheduled": scheduled},
                "referencePolicy": self.ref_policy,
            }
        )
        isi["spec"]["images"] = images
        return isi

    def create_image_stream(self, ref):
        """
        Create new ImageStream and accompanying ImageStreamImport
        """
        source = self.params.get("source")
        if not source:
@@ -183,27 +194,20 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
            ),
        )

        if self.params.get("all") and not ref["tag"]:
            spec = dict(dockerImageRepository=source)
            isi = self.create_image_stream_import_all(stream, source)
        else:
            spec = dict(
                tags=[
                    {
                        "from": {"kind": "DockerImage", "name": source},
                        "referencePolicy": self.ref_policy,
                    }
                ]
            )
            tags = {ref["tag"]: source}
            isi = self.create_image_stream_import_tags(stream, tags)
        stream.update(dict(spec=spec))

        return stream, isi

    def import_all(self, istream):
@@ -220,8 +224,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
if t.get("from") and t["from"].get("kind") == "DockerImage": if t.get("from") and t["from"].get("kind") == "DockerImage":
tags[t.get("name")] = t["from"].get("name") tags[t.get("name")] = t["from"].get("name")
if tags == {}: if tags == {}:
msg = "image stream %s/%s does not have tags pointing to external container images" % ( msg = (
stream["metadata"]["namespace"], stream["metadata"]["name"] "image stream %s/%s does not have tags pointing to external container images"
% (stream["metadata"]["namespace"], stream["metadata"]["name"])
) )
self.fail_json(msg=msg) self.fail_json(msg=msg)
isi = self.create_image_stream_import_tags(stream, tags) isi = self.create_image_stream_import_tags(stream, tags)
@@ -236,7 +241,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
source = self.params.get("source") source = self.params.get("source")
# Follow any referential tags to the destination # Follow any referential tags to the destination
final_tag, existing, multiple, err = follow_imagestream_tag_reference(stream, tag) final_tag, existing, multiple, err = follow_imagestream_tag_reference(
stream, tag
)
if err: if err:
if err == err_stream_not_found_ref: if err == err_stream_not_found_ref:
# Create a new tag # Create a new tag
@@ -245,7 +252,10 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
                # if the from is still empty, this means there's no such tag defined
                # nor can we create one from .spec.dockerImageRepository
                if not source:
                    msg = (
                        "the tag %s does not exist on the image stream - choose an existing tag to import"
                        % tag
                    )
                    self.fail_json(msg=msg)
                existing = {
                    "from": {
@@ -257,13 +267,21 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
                self.fail_json(msg=err)
        else:
            # Disallow re-importing anything other than DockerImage
            if (
                existing.get("from", {})
                and existing["from"].get("kind") != "DockerImage"
            ):
                msg = "tag {tag} points to existing {kind}/={name}, it cannot be re-imported.".format(
                    tag=tag,
                    kind=existing["from"]["kind"],
                    name=existing["from"]["name"],
                )
            # disallow changing an existing tag
            if not existing.get("from", {}):
                msg = (
                    "tag %s already exists - you cannot change the source using this module."
                    % tag
                )
                self.fail_json(msg=msg)
            if source and source != existing["from"]["name"]:
                if multiple:
@@ -271,7 +289,10 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
                        tag, final_tag, existing["from"]["name"]
                    )
                else:
                    msg = (
                        "the tag %s points to %s you cannot change the source using this module."
                        % (tag, final_tag)
                    )
                self.fail_json(msg=msg)

        # Set the target item to import
@@ -309,13 +330,13 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
            kind=kind,
            api_version=api_version,
            name=ref.get("name"),
            namespace=self.params.get("namespace"),
        )
        result = self.kubernetes_facts(**params)
        if not result["api_found"]:
            msg = 'Failed to find API for resource with apiVersion "{0}" and kind "{1}"'.format(
                api_version, kind
            )
            self.fail_json(msg=msg)
        imagestream = None
        if len(result["resources"]) > 0:
@@ -335,7 +356,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
    def parse_image_reference(self, image_ref):
        result, err = parse_docker_image_ref(image_ref, self.module)
        if result.get("digest"):
            self.fail_json(
                msg="Cannot import by ID, error with definition: %s" % image_ref
            )
        tag = result.get("tag") or None
        if not self.params.get("all") and not tag:
            tag = "latest"
@@ -345,7 +368,6 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
        return dict(name=result.get("name"), tag=tag, source=image_ref)

    def execute_module(self):
        names = []
        name = self.params.get("name")
        if isinstance(name, string_types):


@@ -3,7 +3,8 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function

__metaclass__ = type
@@ -24,109 +25,119 @@ LDAP_SEARCH_OUT_OF_SCOPE_ERROR = "trying to search by DN for an entry that exist
def validate_ldap_sync_config(config):
    # Validate url
    url = config.get("url")
    if not url:
        return "url should be non empty attribute."

    # Make sure bindDN and bindPassword are both set, or both unset
    bind_dn = config.get("bindDN", "")
    bind_password = config.get("bindPassword", "")
    if (len(bind_dn) == 0) != (len(bind_password) == 0):
        return "bindDN and bindPassword must both be specified, or both be empty."

    insecure = boolean(config.get("insecure"))
    ca_file = config.get("ca")
    if insecure:
        if url.startswith("ldaps://"):
            return "Cannot use ldaps scheme with insecure=true."
        if ca_file:
            return "Cannot specify a ca with insecure=true."
    elif ca_file and not os.path.isfile(ca_file):
        return "could not read ca file: {0}.".format(ca_file)

    nameMapping = config.get("groupUIDNameMapping", {})
    for k, v in iteritems(nameMapping):
        if len(k) == 0 or len(v) == 0:
            return "groupUIDNameMapping has empty key or value"

    schemas = []
    schema_list = ("rfc2307", "activeDirectory", "augmentedActiveDirectory")
    for schema in schema_list:
        if schema in config:
            schemas.append(schema)
    if len(schemas) == 0:
        return (
            "No schema-specific config was provided, should be one of %s"
            % ", ".join(schema_list)
        )
    if len(schemas) > 1:
        return "Exactly one schema-specific config is required; found (%d) %s" % (
            len(schemas),
            ",".join(schemas),
        )

    if schemas[0] == "rfc2307":
        return validate_RFC2307(config.get("rfc2307"))
    elif schemas[0] == "activeDirectory":
        return validate_ActiveDirectory(config.get("activeDirectory"))
    elif schemas[0] == "augmentedActiveDirectory":
        return validate_AugmentedActiveDirectory(config.get("augmentedActiveDirectory"))
def validate_ldap_query(qry, isDNOnly=False):
    # validate query scope
    scope = qry.get("scope")
    if scope and scope not in ("", "sub", "one", "base"):
        return "invalid scope %s" % scope

    # validate deref aliases
    derefAlias = qry.get("derefAliases")
    if derefAlias and derefAlias not in ("never", "search", "base", "always"):
        return "not a valid LDAP alias dereferencing behavior: %s" % derefAlias

    # validate timeout
    timeout = qry.get("timeout")
    if timeout and float(timeout) < 0:
        return "timeout must be equal to or greater than zero"

    # Validate DN only
    qry_filter = qry.get("filter", "")
    if isDNOnly:
        if len(qry_filter) > 0:
            return 'cannot specify a filter when using "dn" as the UID attribute'
    else:
        # validate filter
        if len(qry_filter) == 0 or qry_filter[0] != "(":
            return "filter does not start with an '('"
    return None
def validate_RFC2307(config):
    qry = config.get("groupsQuery")
    if not qry or not isinstance(qry, dict):
        return "RFC2307: groupsQuery requires a dictionary"
    error = validate_ldap_query(qry)
    if error:
        return error
    for field in (
        "groupUIDAttribute",
        "groupNameAttributes",
        "groupMembershipAttributes",
        "userUIDAttribute",
        "userNameAttributes",
    ):
        value = config.get(field)
        if not value:
            return "RFC2307: {0} is required.".format(field)
    users_qry = config.get("usersQuery")
    if not users_qry or not isinstance(users_qry, dict):
        return "RFC2307: usersQuery requires a dictionary"
    isUserDNOnly = config.get("userUIDAttribute").strip() == "dn"
    return validate_ldap_query(users_qry, isDNOnly=isUserDNOnly)
def validate_ActiveDirectory(config, label="ActiveDirectory"):
    users_qry = config.get("usersQuery")
    if not users_qry or not isinstance(users_qry, dict):
        return "{0}: usersQuery requires a dictionary".format(label)
    error = validate_ldap_query(users_qry)
    if error:
        return error
    for field in ("userNameAttributes", "groupMembershipAttributes"):
        value = config.get(field)
        if not value:
            return "{0}: {1} is required.".format(label, field)
@@ -138,24 +149,24 @@ def validate_AugmentedActiveDirectory(config):
    error = validate_ActiveDirectory(config, label="AugmentedActiveDirectory")
    if error:
        return error
    for field in ("groupUIDAttribute", "groupNameAttributes"):
        value = config.get(field)
        if not value:
            return "AugmentedActiveDirectory: {0} is required".format(field)
    groups_qry = config.get("groupsQuery")
    if not groups_qry or not isinstance(groups_qry, dict):
        return "AugmentedActiveDirectory: groupsQuery requires a dictionary."
    isGroupDNOnly = config.get("groupUIDAttribute").strip() == "dn"
    return validate_ldap_query(groups_qry, isDNOnly=isGroupDNOnly)
def determine_ldap_scope(scope):
    if scope in ("", "sub"):
        return ldap.SCOPE_SUBTREE
    elif scope == "base":
        return ldap.SCOPE_BASE
    elif scope == "one":
        return ldap.SCOPE_ONELEVEL
    return None
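The scope mapping above can be exercised standalone; the sketch below substitutes plain integers for python-ldap's `ldap.SCOPE_*` constants (the values shown match python-ldap's, but the constants themselves are the assumption here) so it runs without the `ldap` dependency:

```python
# python-ldap defines SCOPE_BASE = 0, SCOPE_ONELEVEL = 1, SCOPE_SUBTREE = 2.
SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE = 0, 1, 2


def determine_ldap_scope(scope):
    """Map a sync-config scope string to an LDAP search scope.

    An empty string defaults to subtree search; unknown values return None,
    which the caller treats as "leave the scope unset".
    """
    if scope in ("", "sub"):
        return SCOPE_SUBTREE
    if scope == "base":
        return SCOPE_BASE
    if scope == "one":
        return SCOPE_ONELEVEL
    return None
```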
@@ -175,28 +186,28 @@ def determine_deref_aliases(derefAlias):
def openshift_ldap_build_base_query(config):
    qry = {}
    if config.get("baseDN"):
        qry["base"] = config.get("baseDN")

    scope = determine_ldap_scope(config.get("scope"))
    if scope:
        qry["scope"] = scope

    pageSize = config.get("pageSize")
    if pageSize and int(pageSize) > 0:
        qry["sizelimit"] = int(pageSize)

    timeout = config.get("timeout")
    if timeout and int(timeout) > 0:
        qry["timeout"] = int(timeout)

    filter = config.get("filter")
    if filter:
        qry["filterstr"] = filter

    derefAlias = determine_deref_aliases(config.get("derefAliases"))
    if derefAlias:
        qry["derefAlias"] = derefAlias

    return qry
@@ -205,32 +216,30 @@ def openshift_ldap_get_attribute_for_entry(entry, attribute):
    if isinstance(attribute, list):
        attributes = attribute
    for k in attributes:
        if k.lower() == "dn":
            return entry[0]
        v = entry[1].get(k, None)
        if v:
            if isinstance(v, list):
                result = []
                for x in v:
                    if hasattr(x, "decode"):
                        result.append(x.decode("utf-8"))
                    else:
                        result.append(x)
                return result
            else:
                return v.decode("utf-8") if hasattr(v, "decode") else v
    return ""
def ldap_split_host_port(hostport):
    """
    ldap_split_host_port splits a network address of the form "host:port",
    "host%zone:port", "[host]:port" or "[host%zone]:port" into host or
    host%zone and port.
    """
    result = dict(scheme=None, netlocation=None, host=None, port=None)
    if not hostport:
        return result, None
@@ -240,10 +249,10 @@ def ldap_split_host_port(hostport):
if "://" in hostport: if "://" in hostport:
idx = hostport.find(scheme_l) idx = hostport.find(scheme_l)
result["scheme"] = hostport[:idx] result["scheme"] = hostport[:idx]
netlocation = hostport[idx + len(scheme_l):] netlocation = hostport[idx + len(scheme_l):] # fmt: skip
result["netlocation"] = netlocation result["netlocation"] = netlocation
if netlocation[-1] == ']': if netlocation[-1] == "]":
# ipv6 literal (with no port) # ipv6 literal (with no port)
result["host"] = netlocation result["host"] = netlocation
@@ -259,21 +268,32 @@ def ldap_split_host_port(hostport):
def openshift_ldap_query_for_entries(connection, qry, unique_entry=True):
    # set deref alias (TODO: need to set a default value to reset for each transaction)
    derefAlias = qry.pop("derefAlias", None)
    if derefAlias:
        ldap.set_option(ldap.OPT_DEREF, derefAlias)
    try:
        result = connection.search_ext_s(**qry)
        if not result or len(result) == 0:
            return None, "Entry not found for base='{0}' and filter='{1}'".format(
                qry["base"], qry["filterstr"]
            )
        if len(result) > 1 and unique_entry:
            if qry.get("scope") == ldap.SCOPE_BASE:
                return None, "multiple entries found matching dn={0}: {1}".format(
                    qry["base"], result
                )
            else:
                return None, "multiple entries found matching filter {0}: {1}".format(
                    qry["filterstr"], result
                )
        return result, None
    except ldap.NO_SUCH_OBJECT:
        return (
            None,
            "search for entry with base dn='{0}' refers to a non-existent entry".format(
                qry["base"]
            ),
        )
def openshift_equal_dn_objects(dn_obj, other_dn_obj):
@@ -303,7 +323,9 @@ def openshift_ancestorof_dn(dn, other):
    if len(dn_obj) >= len(other_dn_obj):
        return False
    # Take the last attribute from the other DN to compare against
    return openshift_equal_dn_objects(
        dn_obj, other_dn_obj[len(other_dn_obj) - len(dn_obj):]  # fmt: skip
    )
class OpenshiftLDAPQueryOnAttribute(object):
@@ -324,33 +346,38 @@ class OpenshiftLDAPQueryOnAttribute(object):
        output = []
        hex_string = "0123456789abcdef"
        for c in buffer:
            if ord(c) > 0x7F or c in ("(", ")", "\\", "*") or c == 0:
                first = ord(c) >> 4
                second = ord(c) & 0xF
                output += ["\\", hex_string[first], hex_string[second]]
            else:
                output.append(c)
        return "".join(output)
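The escaping routine above can be exercised in isolation. This standalone sketch mirrors its logic (hex-escaping the RFC 4515 special characters); `escape_filter_value` is an invented name for illustration only.

```python
# Standalone sketch of the hex escaping above: '(', ')', '\', '*' and
# non-ASCII characters would change the meaning of an LDAP search filter,
# so each is replaced by a backslash and two hex digits (RFC 4515 style).

def escape_filter_value(value):
    hex_string = "0123456789abcdef"
    output = []
    for c in value:
        if ord(c) > 0x7F or c in ("(", ")", "\\", "*"):
            output += ["\\", hex_string[ord(c) >> 4], hex_string[ord(c) & 0xF]]
        else:
            output.append(c)
    return "".join(output)
```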
    def build_request(self, ldapuid, attributes):
        params = copy.deepcopy(self.qry)
        if self.query_attribute.lower() == "dn":
            if ldapuid:
                if not openshift_equal_dn(
                    ldapuid, params["base"]
                ) and not openshift_ancestorof_dn(params["base"], ldapuid):
                    return None, LDAP_SEARCH_OUT_OF_SCOPE_ERROR
                params["base"] = ldapuid
            params["scope"] = ldap.SCOPE_BASE
            # filter that returns all values
            params["filterstr"] = "(objectClass=*)"
            params["attrlist"] = attributes
        else:
            # Builds the query containing a filter that conjoins the common filter given
            # in the configuration with the specific attribute filter for which the attribute value is given
            specificFilter = "%s=%s" % (
                self.escape_filter(self.query_attribute),
                self.escape_filter(ldapuid),
            )
            qry_filter = params.get("filterstr", None)
            if qry_filter:
                params["filterstr"] = "(&%s(%s))" % (qry_filter, specificFilter)
            params["attrlist"] = attributes
        return params, None
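As a side note, the filter conjunction step in `build_request` can be sketched as a tiny invented helper (not part of the module) that shows LDAP's AND syntax, `(&<base>(<attr>=<value>))`:

```python
# Invented helper illustrating how a configured base filter is conjoined
# with the attribute-specific filter using LDAP's AND operator.

def conjoin_filter(base_filter, attribute, value):
    specific = "%s=%s" % (attribute, value)
    if base_filter:
        return "(&%s(%s))" % (base_filter, specific)
    return "(%s)" % specific
```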
    def ldap_search(self, connection, ldapuid, required_attributes, unique_entry=True):
@@ -358,21 +385,29 @@ class OpenshiftLDAPQueryOnAttribute(object):
        if error:
            return None, error
        # set deref alias (TODO: need to set a default value to reset for each transaction)
        derefAlias = query.pop("derefAlias", None)
        if derefAlias:
            ldap.set_option(ldap.OPT_DEREF, derefAlias)
        try:
            result = connection.search_ext_s(**query)
            if not result or len(result) == 0:
                return None, "Entry not found for base='{0}' and filter='{1}'".format(
                    query["base"], query["filterstr"]
                )
            if unique_entry:
                if len(result) > 1:
                    return (
                        None,
                        "Multiple Entries found matching search criteria: %s (%s)"
                        % (query, result),
                    )
                result = result[0]
            return result, None
        except ldap.NO_SUCH_OBJECT:
            return None, "Entry not found for base='{0}' and filter='{1}'".format(
                query["base"], query["filterstr"]
            )
        except Exception as err:
            return None, "Request %s failed due to: %s" % (query, err)
@@ -384,30 +419,43 @@ class OpenshiftLDAPQuery(object):
    def build_request(self, attributes):
        params = copy.deepcopy(self.qry)
        params["attrlist"] = attributes
        return params

    def ldap_search(self, connection, required_attributes):
        query = self.build_request(required_attributes)
        # set deref alias (TODO: need to set a default value to reset for each transaction)
        derefAlias = query.pop("derefAlias", None)
        if derefAlias:
            ldap.set_option(ldap.OPT_DEREF, derefAlias)
        try:
            result = connection.search_ext_s(**query)
            if not result or len(result) == 0:
                return None, "Entry not found for base='{0}' and filter='{1}'".format(
                    query["base"], query["filterstr"]
                )
            return result, None
        except ldap.NO_SUCH_OBJECT:
            return (
                None,
                "search for entry with base dn='{0}' refers to a non-existent entry".format(
                    query["base"]
                ),
            )


class OpenshiftLDAPInterface(object):
    def __init__(
        self,
        connection,
        groupQuery,
        groupNameAttributes,
        groupMembershipAttributes,
        userQuery,
        userNameAttributes,
        config,
    ):
        self.connection = connection
        self.groupQuery = copy.deepcopy(groupQuery)
        self.groupNameAttributes = groupNameAttributes
@@ -416,8 +464,12 @@ class OpenshiftLDAPInterface(object):
        self.userNameAttributes = userNameAttributes
        self.config = config

        self.tolerate_not_found = boolean(
            config.get("tolerateMemberNotFoundErrors", False)
        )
        self.tolerate_out_of_scope = boolean(
            config.get("tolerateMemberOutOfScopeErrors", False)
        )

        self.required_group_attributes = [self.groupQuery.query_attribute]
        for x in self.groupNameAttributes + self.groupMembershipAttributes:
@@ -434,13 +486,15 @@ class OpenshiftLDAPInterface(object):
    def get_group_entry(self, uid):
        """
        get_group_entry returns an LDAP group entry for the given group UID by searching the internal cache
        of the LDAPInterface first, then sending an LDAP query if the cache did not contain the entry.
        """
        if uid in self.cached_groups:
            return self.cached_groups.get(uid), None

        group, err = self.groupQuery.ldap_search(
            self.connection, uid, self.required_group_attributes
        )
        if err:
            return None, err
        self.cached_groups[uid] = group
@@ -448,13 +502,15 @@ class OpenshiftLDAPInterface(object):
    def get_user_entry(self, uid):
        """
        get_user_entry returns an LDAP group entry for the given user UID by searching the internal cache
        of the LDAPInterface first, then sending an LDAP query if the cache did not contain the entry.
        """
        if uid in self.cached_users:
            return self.cached_users.get(uid), None

        entry, err = self.userQuery.ldap_search(
            self.connection, uid, self.required_user_attributes
        )
        if err:
            return None, err
        self.cached_users[uid] = entry
@@ -466,19 +522,19 @@ class OpenshiftLDAPInterface(object):
    def list_groups(self):
        group_qry = copy.deepcopy(self.groupQuery.qry)
        group_qry["attrlist"] = self.required_group_attributes

        groups, err = openshift_ldap_query_for_entries(
            connection=self.connection, qry=group_qry, unique_entry=False
        )
        if err:
            return None, err

        group_uids = []
        for entry in groups:
            uid = openshift_ldap_get_attribute_for_entry(
                entry, self.groupQuery.query_attribute
            )
            if not uid:
                return None, "Unable to find LDAP group uid for entry %s" % entry
            self.cached_groups[uid] = entry
@@ -487,7 +543,7 @@ class OpenshiftLDAPInterface(object):
    def extract_members(self, uid):
        """
        returns the LDAP member entries for a group specified with a ldapGroupUID
        """
        # Get group entry from LDAP
        group, err = self.get_group_entry(uid)
@@ -514,39 +570,46 @@ class OpenshiftLDAPInterface(object):
class OpenshiftLDAPRFC2307(object):
    def __init__(self, config, ldap_connection):
        self.config = config
        self.ldap_interface = self.create_ldap_interface(ldap_connection)

    def create_ldap_interface(self, connection):
        segment = self.config.get("rfc2307")
        groups_base_qry = openshift_ldap_build_base_query(segment["groupsQuery"])
        users_base_qry = openshift_ldap_build_base_query(segment["usersQuery"])

        groups_query = OpenshiftLDAPQueryOnAttribute(
            groups_base_qry, segment["groupUIDAttribute"]
        )
        users_query = OpenshiftLDAPQueryOnAttribute(
            users_base_qry, segment["userUIDAttribute"]
        )

        params = dict(
            connection=connection,
            groupQuery=groups_query,
            groupNameAttributes=segment["groupNameAttributes"],
            groupMembershipAttributes=segment["groupMembershipAttributes"],
            userQuery=users_query,
            userNameAttributes=segment["userNameAttributes"],
            config=segment,
        )
        return OpenshiftLDAPInterface(**params)

    def get_username_for_entry(self, entry):
        username = openshift_ldap_get_attribute_for_entry(
            entry, self.ldap_interface.userNameAttributes
        )
        if not username:
            return (
                None,
                "The user entry (%s) does not map to a OpenShift User name with the given mapping"
                % entry,
            )
        return username, None
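For orientation, the "first non-empty name attribute wins" lookup behind `get_username_for_entry` can be sketched standalone. The function name here is invented, and the entry shape mimics python-ldap search results: a `(dn, {attribute: [values]})` tuple.

```python
# Illustrative sketch: walk the configured attribute list and return the
# first attribute that has a value, decoding python-ldap's bytes values.

def first_attribute_value(entry, attributes):
    _dn, attrs = entry
    for attribute in attributes:
        values = attrs.get(attribute)
        if values:
            value = values[0]
            return value.decode() if isinstance(value, bytes) else value
    return None
```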
    def get_group_name_for_uid(self, uid):
        # Get name from User defined mapping
        groupuid_name_mapping = self.config.get("groupUIDNameMapping")
        if groupuid_name_mapping and uid in groupuid_name_mapping:
@@ -555,10 +618,13 @@ class OpenshiftLDAPRFC2307(object):
        group, err = self.ldap_interface.get_group_entry(uid)
        if err:
            return None, err
        group_name = openshift_ldap_get_attribute_for_entry(
            group, self.ldap_interface.groupNameAttributes
        )
        if not group_name:
            error = (
                "The group entry (%s) does not map to an OpenShift Group name with the given name attribute (%s)"
                % (group, self.ldap_interface.groupNameAttributes)
            )
            return None, error
        if isinstance(group_name, list):
@@ -570,7 +636,11 @@ class OpenshiftLDAPRFC2307(object):
    def is_ldapgroup_exists(self, uid):
        group, err = self.ldap_interface.get_group_entry(uid)
        if err:
            if (
                err == LDAP_SEARCH_OUT_OF_SCOPE_ERROR
                or err.startswith("Entry not found")
                or "non-existent entry" in err
            ):
                return False, None
            return False, err
        if group:
@@ -585,7 +655,6 @@ class OpenshiftLDAPRFC2307(object):
class OpenshiftLDAP_ADInterface(object):
    def __init__(self, connection, user_query, group_member_attr, user_name_attr):
        self.connection = connection
        self.userQuery = user_query
@@ -609,7 +678,9 @@ class OpenshiftLDAP_ADInterface(object):
    def populate_cache(self):
        if not self.cache_populated:
            self.cache_populated = True
            entries, err = self.userQuery.ldap_search(
                self.connection, self.required_user_attributes
            )
            if err:
                return err
@@ -645,7 +716,9 @@ class OpenshiftLDAP_ADInterface(object):
        users_in_group = []
        for attr in self.groupMembershipAttributes:
            query_on_attribute = OpenshiftLDAPQueryOnAttribute(self.userQuery.qry, attr)
            entries, error = query_on_attribute.ldap_search(
                self.connection, uid, self.required_user_attributes, unique_entry=False
            )
            if error and "not found" not in error:
                return None, error
            if not entries:
@@ -660,15 +733,13 @@ class OpenshiftLDAP_ADInterface(object):
class OpenshiftLDAPActiveDirectory(object):
    def __init__(self, config, ldap_connection):
        self.config = config
        self.ldap_interface = self.create_ldap_interface(ldap_connection)

    def create_ldap_interface(self, connection):
        segment = self.config.get("activeDirectory")
        base_query = openshift_ldap_build_base_query(segment["usersQuery"])
        user_query = OpenshiftLDAPQuery(base_query)

        return OpenshiftLDAP_ADInterface(
@@ -679,9 +750,15 @@ class OpenshiftLDAPActiveDirectory(object):
        )

    def get_username_for_entry(self, entry):
        username = openshift_ldap_get_attribute_for_entry(
            entry, self.ldap_interface.userNameAttributes
        )
        if not username:
            return (
                None,
                "The user entry (%s) does not map to a OpenShift User name with the given mapping"
                % entry,
            )
        return username, None

    def get_group_name_for_uid(self, uid):
@@ -702,8 +779,15 @@ class OpenshiftLDAPActiveDirectory(object):
class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
    def __init__(
        self,
        connection,
        user_query,
        group_member_attr,
        user_name_attr,
        group_qry,
        group_name_attr,
    ):
        super(OpenshiftLDAP_AugmentedADInterface, self).__init__(
            connection, user_query, group_member_attr, user_name_attr
        )
@@ -719,13 +803,15 @@ class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
    def get_group_entry(self, uid):
        """
        get_group_entry returns an LDAP group entry for the given group UID by searching the internal cache
        of the LDAPInterface first, then sending an LDAP query if the cache did not contain the entry.
        """
        if uid in self.cached_groups:
            return self.cached_groups.get(uid), None

        group, err = self.groupQuery.ldap_search(
            self.connection, uid, self.required_group_attributes
        )
        if err:
            return None, err
        self.cached_groups[uid] = group
@@ -750,19 +836,19 @@ class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
class OpenshiftLDAPAugmentedActiveDirectory(OpenshiftLDAPRFC2307):
    def __init__(self, config, ldap_connection):
        self.config = config
        self.ldap_interface = self.create_ldap_interface(ldap_connection)

    def create_ldap_interface(self, connection):
        segment = self.config.get("augmentedActiveDirectory")
        user_base_query = openshift_ldap_build_base_query(segment["usersQuery"])
        groups_base_qry = openshift_ldap_build_base_query(segment["groupsQuery"])

        user_query = OpenshiftLDAPQuery(user_base_query)
        groups_query = OpenshiftLDAPQueryOnAttribute(
            groups_base_qry, segment["groupUIDAttribute"]
        )

        return OpenshiftLDAP_AugmentedADInterface(
            connection=connection,
@@ -770,7 +856,7 @@ class OpenshiftLDAPAugmentedActiveDirectory(OpenshiftLDAPRFC2307):
            group_member_attr=segment["groupMembershipAttributes"],
            user_name_attr=segment["userNameAttributes"],
            group_qry=groups_query,
            group_name_attr=segment["groupNameAttributes"],
        )

    def is_ldapgroup_exists(self, uid):

View File

@@ -1,15 +1,16 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function

__metaclass__ = type

import os

from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)

try:
    from kubernetes.dynamic.exceptions import DynamicApiError
@@ -124,7 +125,6 @@ class OpenShiftProcess(AnsibleOpenshiftModule):
        self.exit_json(**result)

    def create_resources(self, definitions):
        params = {"namespace": self.params.get("namespace_target")}
        self.params["apply"] = False
@@ -139,9 +139,7 @@ class OpenShiftProcess(AnsibleOpenshiftModule):
                continue
            kind = definition.get("kind")
            if kind and kind.endswith("List"):
                flattened_definitions.extend(self.flatten_list_kind(definition, params))
            else:
                flattened_definitions.append(self.merge_params(definition, params))
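The `*List` flattening idea above can be sketched in isolation. This is a hedged illustration, not the module's implementation (the real `flatten_list_kind` lives in the module with its own signature): a Kubernetes "List"-style object carries child objects under `items`, and each child becomes a standalone definition with shared params such as the namespace merged in.

```python
import copy

# Illustrative flattening: deep-copy each child so the original List object
# is left untouched, then fill in the shared namespace when the child has none.

def flatten_list_kind(definition, params):
    flattened = []
    for item in definition.get("items", []):
        merged = copy.deepcopy(item)
        merged.setdefault("metadata", {}).setdefault(
            "namespace", params.get("namespace")
        )
        flattened.append(merged)
    return flattened
```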

View File

@@ -1,12 +1,15 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function

__metaclass__ = type

import traceback
from urllib.parse import urlparse

from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)
from ansible_collections.community.okd.plugins.module_utils.openshift_docker_image import (
    parse_docker_image_ref,
@@ -15,6 +18,7 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_docker_ima
try:
    from requests import request
    from requests.auth import HTTPBasicAuth

    HAS_REQUESTS_MODULE = True
    requests_import_exception = None
except ImportError as e:
@@ -32,11 +36,7 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
        kind = "ImageStream"
        api_version = "image.openshift.io/v1"

        params = dict(kind=kind, api_version=api_version, namespace=namespace)
        result = self.kubernetes_facts(**params)
        imagestream = []
        if len(result["resources"]) > 0:
@@ -44,7 +44,6 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
        return imagestream

    def find_registry_info(self):
        def _determine_registry(image_stream):
            public, internal = None, None
            docker_repo = image_stream["status"].get("publicDockerImageRepository")
@@ -72,39 +71,46 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
                self.fail_json(msg="The integrated registry has not been configured")
            return internal, public

        self.fail_json(
            msg="No Image Streams could be located to retrieve registry info."
        )

    def execute_module(self):
        result = {}
        (
            result["internal_hostname"],
            result["public_hostname"],
        ) = self.find_registry_info()
        if self.check:
            public_registry = result["public_hostname"]
            if not public_registry:
                result["check"] = dict(
                    reached=False, msg="Registry does not have a public hostname."
                )
            else:
                headers = {"Content-Type": "application/json"}
                params = {"method": "GET", "verify": False}
                if self.client.configuration.api_key:
                    headers.update(self.client.configuration.api_key)
                elif (
                    self.client.configuration.username
                    and self.client.configuration.password
                ):
                    if not HAS_REQUESTS_MODULE:
                        result["check"] = dict(
                            reached=False,
                            msg="The requests python package is missing, try `pip install requests`",
                            error=requests_import_exception,
                        )
                        self.exit_json(**result)
                    params.update(
                        dict(
                            auth=HTTPBasicAuth(
                                self.client.configuration.username,
                                self.client.configuration.password,
                            )
                        )
                    )

                # verify ssl
@@ -112,23 +118,20 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
                if len(host.scheme) == 0:
                    registry_url = "https://" + public_registry

                if (
                    registry_url.startswith("https://")
                    and self.client.configuration.ssl_ca_cert
                ):
                    params.update(dict(verify=self.client.configuration.ssl_ca_cert))
                params.update(dict(headers=headers))

                last_bad_status, last_bad_reason = None, None
                for path in ("/", "/healthz"):
                    params.update(dict(url=registry_url + path))
                    response = request(**params)
                    if response.status_code == 200:
                        result["check"] = dict(
                            reached=True,
                            msg="The local client can contact the integrated registry.",
                        )
                        self.exit_json(**result)
                    last_bad_reason = response.reason
@@ -136,9 +139,8 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
                result["check"] = dict(
                    reached=False,
                    msg="Unable to contact the integrated registry using local client. Status=%d, Reason=%s"
                    % (last_bad_status, last_bad_reason),
                )
        self.exit_json(**result)
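The probe sequence above is small enough to sketch as a pure function: default to `https://` when the registry hostname carries no scheme, then try `/` followed by `/healthz`. `registry_probe_urls` is an invented name for illustration only.

```python
from urllib.parse import urlparse

# Sketch of the URL construction behind the health check: prepend https
# when no scheme is present, then build the two probe paths in order.

def registry_probe_urls(public_registry):
    registry_url = public_registry
    if len(urlparse(registry_url).scheme) == 0:
        registry_url = "https://" + public_registry
    return [registry_url + path for path in ("/", "/healthz")]
```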

View File

@@ -10,7 +10,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type

# STARTREMOVE (downstream)
DOCUMENTATION = r"""

module: k8s
@@ -142,9 +142,9 @@ requirements:
  - "python >= 3.6"
  - "kubernetes >= 12.0.0"
  - "PyYAML >= 3.11"
"""

EXAMPLES = r"""
- name: Create a k8s namespace
  community.okd.k8s:
    name: testing
@@ -169,10 +169,10 @@ EXAMPLES = r'''
        app: galaxy
        service: web
      ports:
        - protocol: TCP
          targetPort: 8000
          name: port-8000-tcp
          port: 8000

- name: Remove an existing Service object
  community.okd.k8s:
@@ -206,18 +206,18 @@ EXAMPLES = r'''
    state: present
    definition: "{{ lookup('template', '/testing/deployment.yml') | from_yaml }}"
    validate:
      fail_on_error: true

- name: warn on validation errors, check for unexpected properties
  community.okd.k8s:
    state: present
    definition: "{{ lookup('template', '/testing/deployment.yml') | from_yaml }}"
    validate:
      fail_on_error: false
      strict: true
"""

RETURN = r"""
result:
  description:
    - The created, patched, or otherwise present object. Will be empty in the case of a deletion.
@@ -254,22 +254,26 @@ result:
      type: int
      sample: 48
  error:
    description: Error while trying to create/delete the object.
    returned: error
    type: complex
"""
# ENDREMOVE (downstream)

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    NAME_ARG_SPEC,
    RESOURCE_ARG_SPEC,
    AUTH_ARG_SPEC,
    WAIT_ARG_SPEC,
    DELETE_OPTS_ARG_SPEC,
)


def validate_spec():
    return dict(
        fail_on_error=dict(type="bool"),
        version=dict(),
        strict=dict(type="bool", default=True),
    )
@@ -279,30 +283,41 @@ def argspec():
argument_spec.update(RESOURCE_ARG_SPEC) argument_spec.update(RESOURCE_ARG_SPEC)
argument_spec.update(AUTH_ARG_SPEC) argument_spec.update(AUTH_ARG_SPEC)
argument_spec.update(WAIT_ARG_SPEC) argument_spec.update(WAIT_ARG_SPEC)
argument_spec['merge_type'] = dict(type='list', elements='str', choices=['json', 'merge', 'strategic-merge']) argument_spec["merge_type"] = dict(
argument_spec['validate'] = dict(type='dict', default=None, options=validate_spec()) type="list", elements="str", choices=["json", "merge", "strategic-merge"]
argument_spec['append_hash'] = dict(type='bool', default=False) )
argument_spec['apply'] = dict(type='bool', default=False) argument_spec["validate"] = dict(type="dict", default=None, options=validate_spec())
argument_spec['template'] = dict(type='raw', default=None) argument_spec["append_hash"] = dict(type="bool", default=False)
argument_spec['delete_options'] = dict(type='dict', default=None, options=DELETE_OPTS_ARG_SPEC) argument_spec["apply"] = dict(type="bool", default=False)
argument_spec['continue_on_error'] = dict(type='bool', default=False) argument_spec["template"] = dict(type="raw", default=None)
argument_spec['state'] = dict(default='present', choices=['present', 'absent', 'patched']) argument_spec["delete_options"] = dict(
argument_spec['force'] = dict(type='bool', default=False) type="dict", default=None, options=DELETE_OPTS_ARG_SPEC
)
argument_spec["continue_on_error"] = dict(type="bool", default=False)
argument_spec["state"] = dict(
default="present", choices=["present", "absent", "patched"]
)
argument_spec["force"] = dict(type="bool", default=False)
return argument_spec return argument_spec
def main(): def main():
mutually_exclusive = [ mutually_exclusive = [
('resource_definition', 'src'), ("resource_definition", "src"),
('merge_type', 'apply'), ("merge_type", "apply"),
('template', 'resource_definition'), ("template", "resource_definition"),
('template', 'src'), ("template", "src"),
] ]
from ansible_collections.community.okd.plugins.module_utils.k8s import OKDRawModule from ansible_collections.community.okd.plugins.module_utils.k8s import OKDRawModule
module = OKDRawModule(argument_spec=argspec(), supports_check_mode=True, mutually_exclusive=mutually_exclusive)
module = OKDRawModule(
argument_spec=argspec(),
supports_check_mode=True,
mutually_exclusive=mutually_exclusive,
)
module.run_module() module.run_module()
if __name__ == '__main__': if __name__ == "__main__":
main() main()
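The argspec() function above composes the module's argument spec by layering shared spec dictionaries from kubernetes.core's args_common and then adding module-specific options on top. A minimal sketch of that composition pattern — the two spec constants here are illustrative stand-ins, not the real contents:

```python
import copy

# Illustrative stand-ins for the shared spec constants that the real
# module imports from kubernetes.core's args_common.
AUTH_ARG_SPEC = {"host": {"type": "str"}, "api_key": {"type": "str", "no_log": True}}
WAIT_ARG_SPEC = {"wait": {"type": "bool", "default": False}}


def argspec():
    # Copy first so the shared constants are never mutated between modules.
    argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
    argument_spec.update(WAIT_ARG_SPEC)
    # Module-specific options are layered on top of the shared ones.
    argument_spec["state"] = {
        "default": "present",
        "choices": ["present", "absent", "patched"],
    }
    return argument_spec


spec = argspec()
```

Because each module copies before updating, several modules can safely extend the same shared constants with different options.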


@@ -96,31 +96,31 @@ EXAMPLES = r"""
 - name: Sync all groups from an LDAP server
   openshift_adm_groups_sync:
     src:
       kind: LDAPSyncConfig
      apiVersion: v1
       url: ldap://localhost:1390
       insecure: true
       bindDN: cn=admin,dc=example,dc=org
       bindPassword: adminpassword
       rfc2307:
         groupsQuery:
           baseDN: "cn=admins,ou=groups,dc=example,dc=org"
           scope: sub
           derefAliases: never
           filter: (objectClass=*)
           pageSize: 0
         groupUIDAttribute: dn
-        groupNameAttributes: [ cn ]
-        groupMembershipAttributes: [ member ]
+        groupNameAttributes: [cn]
+        groupMembershipAttributes: [member]
         usersQuery:
           baseDN: "ou=users,dc=example,dc=org"
           scope: sub
           derefAliases: never
           pageSize: 0
         userUIDAttribute: dn
-        userNameAttributes: [ mail ]
+        userNameAttributes: [mail]
         tolerateMemberNotFoundErrors: true
         tolerateMemberOutOfScopeErrors: true

 # Sync all groups except the ones from the deny_groups from an LDAP server
 - name: Sync all groups from an LDAP server using deny_groups
@@ -192,20 +192,21 @@ builds:
 # ENDREMOVE (downstream)

 import copy
-import traceback

-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
+from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
+    AUTH_ARG_SPEC,
+)

 def argument_spec():
     args = copy.deepcopy(AUTH_ARG_SPEC)
     args.update(
         dict(
-            state=dict(type='str', choices=['absent', 'present'], default='present'),
-            type=dict(type='str', choices=['ldap', 'openshift'], default='ldap'),
-            sync_config=dict(type='dict', aliases=['config', 'src'], required=True),
-            deny_groups=dict(type='list', elements='str', default=[]),
-            allow_groups=dict(type='list', elements='str', default=[]),
+            state=dict(type="str", choices=["absent", "present"], default="present"),
+            type=dict(type="str", choices=["ldap", "openshift"], default="ldap"),
+            sync_config=dict(type="dict", aliases=["config", "src"], required=True),
+            deny_groups=dict(type="list", elements="str", default=[]),
+            allow_groups=dict(type="list", elements="str", default=[]),
         )
     )
     return args
@@ -213,12 +214,14 @@ def argument_spec():
 def main():
     from ansible_collections.community.okd.plugins.module_utils.openshift_groups import (
-        OpenshiftGroupsSync
+        OpenshiftGroupsSync,
     )

-    module = OpenshiftGroupsSync(argument_spec=argument_spec(), supports_check_mode=True)
+    module = OpenshiftGroupsSync(
+        argument_spec=argument_spec(), supports_check_mode=True
+    )
     module.run_module()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()


@@ -31,14 +31,14 @@ requirements:
 """

 EXAMPLES = r"""
 - name: Migrate TemplateInstances in namespace=test
   community.okd.openshift_adm_migrate_template_instances:
     namespace: test
   register: _result

 - name: Migrate TemplateInstances in all namespaces
   community.okd.openshift_adm_migrate_template_instances:
   register: _result
 """

 RETURN = r"""
@@ -235,7 +235,9 @@ result:
 from ansible.module_utils._text import to_native

-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)

 try:
     from kubernetes.dynamic.exceptions import DynamicApiError
@@ -339,9 +341,7 @@ class OpenShiftMigrateTemplateInstances(AnsibleOpenshiftModule):
         if ti_to_be_migrated:
             if self.check_mode:
-                self.exit_json(
-                    **{"changed": True, "result": ti_to_be_migrated}
-                )
+                self.exit_json(**{"changed": True, "result": ti_to_be_migrated})
             else:
                 for ti_elem in ti_to_be_migrated:
                     results["result"].append(
@@ -363,7 +363,9 @@ def argspec():
 def main():
     argument_spec = argspec()
-    module = OpenShiftMigrateTemplateInstances(argument_spec=argument_spec, supports_check_mode=True)
+    module = OpenShiftMigrateTemplateInstances(
+        argument_spec=argument_spec, supports_check_mode=True
+    )
     module.run_module()
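The check_mode branch above exits with the would-be migration list without touching the cluster. A generic sketch of that report-only pattern, with hypothetical names (run, migrate) rather than the module's actual API:

```python
def run(check_mode, candidates, migrate):
    """Report-only in check mode; otherwise apply `migrate` to each candidate."""
    if check_mode:
        # Nothing is changed yet, but report what would be.
        return {"changed": bool(candidates), "result": list(candidates)}
    results = [migrate(c) for c in candidates]
    return {"changed": bool(results), "result": results}
```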


@@ -5,10 +5,11 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 # STARTREMOVE (downstream)
-DOCUMENTATION = r'''
+DOCUMENTATION = r"""

 module: openshift_adm_prune_auth
@@ -58,9 +59,9 @@ options:
 requirements:
   - python >= 3.6
   - kubernetes >= 12.0.0
-'''
+"""

-EXAMPLES = r'''
+EXAMPLES = r"""
 - name: Prune all roles from default namespace
   openshift_adm_prune_auth:
     resource: roles
@@ -72,10 +73,10 @@ EXAMPLES = r'''
     namespace: testing
     label_selectors:
       - phase=production
-'''
+"""

-RETURN = r'''
+RETURN = r"""
 cluster_role_binding:
   type: list
   description: list of cluster role binding deleted.
@@ -96,37 +97,45 @@ group:
   type: list
   description: list of Security Context Constraints deleted.
   returned: I(resource=users)
-'''
+"""
 # ENDREMOVE (downstream)

 import copy

-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
+from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
+    AUTH_ARG_SPEC,
+)

 def argument_spec():
     args = copy.deepcopy(AUTH_ARG_SPEC)
     args.update(
         dict(
-            resource=dict(type='str', required=True, choices=['roles', 'clusterroles', 'users', 'groups']),
-            namespace=dict(type='str'),
-            name=dict(type='str'),
-            label_selectors=dict(type='list', elements='str'),
+            resource=dict(
+                type="str",
+                required=True,
+                choices=["roles", "clusterroles", "users", "groups"],
+            ),
+            namespace=dict(type="str"),
+            name=dict(type="str"),
+            label_selectors=dict(type="list", elements="str"),
         )
     )
     return args

 def main():
     from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_auth import (
-        OpenShiftAdmPruneAuth)
+        OpenShiftAdmPruneAuth,
+    )

-    module = OpenShiftAdmPruneAuth(argument_spec=argument_spec(),
-                                   mutually_exclusive=[("name", "label_selectors")],
-                                   supports_check_mode=True)
+    module = OpenShiftAdmPruneAuth(
+        argument_spec=argument_spec(),
+        mutually_exclusive=[("name", "label_selectors")],
+        supports_check_mode=True,
+    )
     module.run_module()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()


@@ -5,10 +5,11 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 # STARTREMOVE (downstream)
-DOCUMENTATION = r'''
+DOCUMENTATION = r"""

 module: openshift_adm_prune_builds
@@ -45,14 +46,14 @@ options:
 requirements:
   - python >= 3.6
   - kubernetes >= 12.0.0
-'''
+"""

-EXAMPLES = r'''
+EXAMPLES = r"""
 # Run deleting older completed and failed builds and also including
 # all builds whose associated BuildConfig no longer exists
 - name: Run delete orphan Builds
   community.okd.openshift_adm_prune_builds:
-    orphans: True
+    orphans: true

 # Run deleting older completed and failed builds keep younger than 2hours
 - name: Run delete builds, keep younger than 2h
@@ -63,9 +64,9 @@ EXAMPLES = r'''
 - name: Run delete builds from namespace
   community.okd.openshift_adm_prune_builds:
     namespace: testing_namespace
-'''
+"""

-RETURN = r'''
+RETURN = r"""
 builds:
   description:
   - The builds that were deleted
@@ -92,33 +93,38 @@ builds:
       description: Current status details for the object.
       returned: success
       type: dict
-'''
+"""
 # ENDREMOVE (downstream)

 import copy

-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
+from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
+    AUTH_ARG_SPEC,
+)

 def argument_spec():
     args = copy.deepcopy(AUTH_ARG_SPEC)
     args.update(
         dict(
-            namespace=dict(type='str'),
-            keep_younger_than=dict(type='int'),
-            orphans=dict(type='bool', default=False),
+            namespace=dict(type="str"),
+            keep_younger_than=dict(type="int"),
+            orphans=dict(type="bool", default=False),
         )
     )
     return args

 def main():
-    from ansible_collections.community.okd.plugins.module_utils.openshift_builds import OpenShiftPruneBuilds
+    from ansible_collections.community.okd.plugins.module_utils.openshift_builds import (
+        OpenShiftPruneBuilds,
+    )

-    module = OpenShiftPruneBuilds(argument_spec=argument_spec(), supports_check_mode=True)
+    module = OpenShiftPruneBuilds(
+        argument_spec=argument_spec(), supports_check_mode=True
+    )
     module.run_module()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
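The keep_younger_than option above protects recent builds from pruning. A hypothetical helper (not the module's actual implementation) sketching that age filter against Kubernetes-style metadata.creationTimestamp values, assuming the option is expressed in minutes as in the module documentation:

```python
from datetime import datetime, timedelta, timezone


def select_prunable(objects, keep_younger_than=None):
    """Return the objects older than `keep_younger_than` minutes."""
    if keep_younger_than is None:
        return list(objects)
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=keep_younger_than)
    prunable = []
    for obj in objects:
        # Kubernetes serializes creationTimestamp as RFC 3339, e.g. 2023-01-01T00:00:00Z
        created = datetime.fromisoformat(
            obj["metadata"]["creationTimestamp"].replace("Z", "+00:00")
        )
        if created < cutoff:
            prunable.append(obj)
    return prunable
```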


@@ -5,10 +5,11 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 # STARTREMOVE (downstream)
-DOCUMENTATION = r'''
+DOCUMENTATION = r"""

 module: openshift_adm_prune_deployments
@@ -45,32 +46,34 @@ options:
 requirements:
   - python >= 3.6
   - kubernetes >= 12.0.0
-'''
+"""

-EXAMPLES = r'''
+EXAMPLES = r"""
 - name: Prune Deployments from testing namespace
   community.okd.openshift_adm_prune_deployments:
     namespace: testing

 - name: Prune orphans deployments, keep younger than 2hours
   community.okd.openshift_adm_prune_deployments:
-    orphans: True
+    orphans: true
     keep_younger_than: 120
-'''
+"""

-RETURN = r'''
+RETURN = r"""
 replication_controllers:
   type: list
   description: list of replication controllers candidate for pruning.
   returned: always
-'''
+"""
 # ENDREMOVE (downstream)

 import copy

 try:
-    from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
+    from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
+        AUTH_ARG_SPEC,
+    )
 except ImportError as e:
     pass
@@ -79,22 +82,28 @@ def argument_spec():
     args = copy.deepcopy(AUTH_ARG_SPEC)
     args.update(
         dict(
-            namespace=dict(type='str',),
-            keep_younger_than=dict(type='int',),
-            orphans=dict(type='bool', default=False),
+            namespace=dict(
+                type="str",
+            ),
+            keep_younger_than=dict(
+                type="int",
+            ),
+            orphans=dict(type="bool", default=False),
         )
     )
     return args

 def main():
     from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_deployments import (
-        OpenShiftAdmPruneDeployment)
+        OpenShiftAdmPruneDeployment,
+    )

-    module = OpenShiftAdmPruneDeployment(argument_spec=argument_spec(), supports_check_mode=True)
+    module = OpenShiftAdmPruneDeployment(
+        argument_spec=argument_spec(), supports_check_mode=True
+    )
     module.run_module()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()


@@ -5,10 +5,11 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function

 __metaclass__ = type

 # STARTREMOVE (downstream)
-DOCUMENTATION = r'''
+DOCUMENTATION = r"""

 module: openshift_adm_prune_images
@@ -84,9 +85,9 @@ requirements:
   - python >= 3.6
   - kubernetes >= 12.0.0
   - docker-image-py
-'''
+"""

-EXAMPLES = r'''
+EXAMPLES = r"""
 # Prune if only images and their referrers were more than an hour old
 - name: Prune image with referrer been more than an hour old
   community.okd.openshift_adm_prune_images:
@@ -102,10 +103,10 @@ EXAMPLES = r'''
   community.okd.openshift_adm_prune_images:
     registry_url: http://registry.example.org
     registry_validate_certs: false
-'''
+"""

-RETURN = r'''
+RETURN = r"""
 updated_image_streams:
   description:
   - The images streams updated.
@@ -275,41 +276,44 @@ deleted_images:
     },
     ...
 ]
-'''
+"""
 # ENDREMOVE (downstream)

 import copy

-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
+from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
+    AUTH_ARG_SPEC,
+)

 def argument_spec():
     args = copy.deepcopy(AUTH_ARG_SPEC)
     args.update(
         dict(
-            namespace=dict(type='str'),
-            all_images=dict(type='bool', default=True),
-            keep_younger_than=dict(type='int'),
-            prune_over_size_limit=dict(type='bool', default=False),
-            registry_url=dict(type='str'),
-            registry_validate_certs=dict(type='bool'),
-            registry_ca_cert=dict(type='path'),
-            prune_registry=dict(type='bool', default=True),
-            ignore_invalid_refs=dict(type='bool', default=False),
+            namespace=dict(type="str"),
+            all_images=dict(type="bool", default=True),
+            keep_younger_than=dict(type="int"),
+            prune_over_size_limit=dict(type="bool", default=False),
+            registry_url=dict(type="str"),
+            registry_validate_certs=dict(type="bool"),
+            registry_ca_cert=dict(type="path"),
+            prune_registry=dict(type="bool", default=True),
+            ignore_invalid_refs=dict(type="bool", default=False),
         )
     )
     return args

 def main():
     from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_images import (
-        OpenShiftAdmPruneImages
+        OpenShiftAdmPruneImages,
     )

-    module = OpenShiftAdmPruneImages(argument_spec=argument_spec(), supports_check_mode=True)
+    module = OpenShiftAdmPruneImages(
+        argument_spec=argument_spec(), supports_check_mode=True
+    )
     module.run_module()

-if __name__ == '__main__':
+if __name__ == "__main__":
     main()


@@ -5,9 +5,10 @@
 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function

 __metaclass__ = type

-DOCUMENTATION = r'''
+DOCUMENTATION = r"""

 module: openshift_auth
@@ -74,46 +75,49 @@ requirements:
   - urllib3
   - requests
   - requests-oauthlib
-'''
+"""

-EXAMPLES = r'''
-- hosts: localhost
+EXAMPLES = r"""
+- name: Example Playbook
+  hosts: localhost
   module_defaults:
     group/community.okd.okd:
       host: https://k8s.example.com/
       ca_cert: ca.pem
   tasks:
-  - block:
-    # It's good practice to store login credentials in a secure vault and not
-    # directly in playbooks.
-    - include_vars: openshift_passwords.yml
+    - name: Authenticate to OpenShift cluster and get a list of all pods from any namespace
+      block:
+        # It's good practice to store login credentials in a secure vault and not
+        # directly in playbooks.
+        - name: Include 'openshift_passwords.yml'
+          ansible.builtin.include_vars: openshift_passwords.yml

         - name: Log in (obtain access token)
           community.okd.openshift_auth:
             username: admin
             password: "{{ openshift_admin_password }}"
           register: openshift_auth_results

         # Previous task provides the token/api_key, while all other parameters
         # are taken from module_defaults
         - name: Get a list of all pods from any namespace
           kubernetes.core.k8s_info:
             api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
             kind: Pod
           register: pod_list

       always:
         - name: If login succeeded, try to log out (revoke access token)
           when: openshift_auth_results.openshift_auth.api_key is defined
           community.okd.openshift_auth:
             state: absent
             api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
-'''
+"""

 # Returned value names need to match k8s modules parameter names, to make it
 # easy to pass returned values of openshift_auth to other k8s modules.
 # Discussion: https://github.com/ansible/ansible/pull/50807#discussion_r248827899
-RETURN = r'''
+RETURN = r"""
 openshift_auth:
   description: OpenShift authentication facts.
   returned: success
@@ -164,7 +168,7 @@ k8s_auth:
     description: Username for authenticating with the API server.
     returned: success
     type: str
-'''
+"""
 import traceback

@@ -179,52 +183,52 @@ import hashlib
 # 3rd party imports
 try:
     import requests

     HAS_REQUESTS = True
 except ImportError:
     HAS_REQUESTS = False

 try:
     from requests_oauthlib import OAuth2Session

     HAS_REQUESTS_OAUTH = True
 except ImportError:
     HAS_REQUESTS_OAUTH = False

 try:
     from urllib3.util import make_headers

     HAS_URLLIB3 = True
 except ImportError:
     HAS_URLLIB3 = False

 K8S_AUTH_ARG_SPEC = {
-    'state': {
-        'default': 'present',
-        'choices': ['present', 'absent'],
+    "state": {
+        "default": "present",
+        "choices": ["present", "absent"],
     },
-    'host': {'required': True},
-    'username': {},
-    'password': {'no_log': True},
-    'ca_cert': {'type': 'path', 'aliases': ['ssl_ca_cert']},
-    'validate_certs': {
-        'type': 'bool',
-        'default': True,
-        'aliases': ['verify_ssl']
-    },
-    'api_key': {'no_log': True},
+    "host": {"required": True},
+    "username": {},
+    "password": {"no_log": True},
+    "ca_cert": {"type": "path", "aliases": ["ssl_ca_cert"]},
+    "validate_certs": {"type": "bool", "default": True, "aliases": ["verify_ssl"]},
+    "api_key": {"no_log": True},
 }

 def get_oauthaccesstoken_objectname_from_token(token_name):
     """
     openshift convert the access token to an OAuthAccessToken resource name using the algorithm
     https://github.com/openshift/console/blob/9f352ba49f82ad693a72d0d35709961428b43b93/pkg/server/server.go#L609-L613
     """
     sha256Prefix = "sha256~"
     content = token_name.strip(sha256Prefix)

-    b64encoded = urlsafe_b64encode(hashlib.sha256(content.encode()).digest()).rstrip(b'=')
+    b64encoded = urlsafe_b64encode(hashlib.sha256(content.encode()).digest()).rstrip(
+        b"="
+    )
     return sha256Prefix + b64encoded.decode("utf-8")
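The derivation above can be exercised standalone. This sketch hashes the token content after the sha256~ prefix and strips the base64 padding, exactly as in the function above, except that slicing is used to drop the prefix (the function itself uses str.strip, which removes characters rather than a leading prefix):

```python
import hashlib
from base64 import urlsafe_b64encode


def token_to_object_name(token):
    """Derive the OAuthAccessToken resource name for a sha256~ token."""
    prefix = "sha256~"
    content = token[len(prefix):] if token.startswith(prefix) else token
    # SHA-256 the content, URL-safe base64 encode it, and strip '=' padding.
    digest = urlsafe_b64encode(hashlib.sha256(content.encode()).digest()).rstrip(b"=")
    return prefix + digest.decode("utf-8")
```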
@@ -234,42 +238,48 @@ class OpenShiftAuthModule(AnsibleModule):
             self,
             argument_spec=K8S_AUTH_ARG_SPEC,
             required_if=[
-                ('state', 'present', ['username', 'password']),
-                ('state', 'absent', ['api_key']),
-            ]
+                ("state", "present", ["username", "password"]),
+                ("state", "absent", ["api_key"]),
+            ],
         )

         if not HAS_REQUESTS:
-            self.fail("This module requires the python 'requests' package. Try `pip install requests`.")
+            self.fail(
+                "This module requires the python 'requests' package. Try `pip install requests`."
+            )

         if not HAS_REQUESTS_OAUTH:
-            self.fail("This module requires the python 'requests-oauthlib' package. Try `pip install requests-oauthlib`.")
+            self.fail(
+                "This module requires the python 'requests-oauthlib' package. Try `pip install requests-oauthlib`."
+            )

         if not HAS_URLLIB3:
-            self.fail("This module requires the python 'urllib3' package. Try `pip install urllib3`.")
+            self.fail(
+                "This module requires the python 'urllib3' package. Try `pip install urllib3`."
+            )

     def execute_module(self):
-        state = self.params.get('state')
-        verify_ssl = self.params.get('validate_certs')
-        ssl_ca_cert = self.params.get('ca_cert')
+        state = self.params.get("state")
+        verify_ssl = self.params.get("validate_certs")
+        ssl_ca_cert = self.params.get("ca_cert")

-        self.auth_username = self.params.get('username')
-        self.auth_password = self.params.get('password')
-        self.auth_api_key = self.params.get('api_key')
-        self.con_host = self.params.get('host')
+        self.auth_username = self.params.get("username")
+        self.auth_password = self.params.get("password")
+        self.auth_api_key = self.params.get("api_key")
+        self.con_host = self.params.get("host")

         # python-requests takes either a bool or a path to a ca file as the 'verify' param
         if verify_ssl and ssl_ca_cert:
             self.con_verify_ca = ssl_ca_cert  # path
         else:
             self.con_verify_ca = verify_ssl  # bool

         # Get needed info to access authorization APIs
         self.openshift_discover()

         changed = False
         result = dict()
-        if state == 'present':
+        if state == "present":
             new_api_key = self.openshift_login()
             result = dict(
                 host=self.con_host,
@@ -285,87 +295,114 @@ class OpenShiftAuthModule(AnsibleModule):
         self.exit_json(changed=changed, openshift_auth=result, k8s_auth=result)
     def openshift_discover(self):
-        url = urljoin(self.con_host, '.well-known/oauth-authorization-server')
+        url = urljoin(self.con_host, ".well-known/oauth-authorization-server")
         ret = requests.get(url, verify=self.con_verify_ca)

         if ret.status_code != 200:
-            self.fail_request("Couldn't find OpenShift's OAuth API", method='GET', url=url,
-                              reason=ret.reason, status_code=ret.status_code)
+            self.fail_request(
+                "Couldn't find OpenShift's OAuth API",
+                method="GET",
+                url=url,
+                reason=ret.reason,
+                status_code=ret.status_code,
+            )

         try:
             oauth_info = ret.json()

-            self.openshift_auth_endpoint = oauth_info['authorization_endpoint']
-            self.openshift_token_endpoint = oauth_info['token_endpoint']
+            self.openshift_auth_endpoint = oauth_info["authorization_endpoint"]
+            self.openshift_token_endpoint = oauth_info["token_endpoint"]
         except Exception:
-            self.fail_json(msg="Something went wrong discovering OpenShift OAuth details.",
-                           exception=traceback.format_exc())
+            self.fail_json(
+                msg="Something went wrong discovering OpenShift OAuth details.",
+                exception=traceback.format_exc(),
+            )

     def openshift_login(self):
-        os_oauth = OAuth2Session(client_id='openshift-challenging-client')
-        authorization_url, state = os_oauth.authorization_url(self.openshift_auth_endpoint,
-                                                              state="1", code_challenge_method='S256')
-        auth_headers = make_headers(basic_auth='{0}:{1}'.format(self.auth_username, self.auth_password))
+        os_oauth = OAuth2Session(client_id="openshift-challenging-client")
+        authorization_url, state = os_oauth.authorization_url(
+            self.openshift_auth_endpoint, state="1", code_challenge_method="S256"
+        )
+        auth_headers = make_headers(
+            basic_auth="{0}:{1}".format(self.auth_username, self.auth_password)
+        )

         # Request authorization code using basic auth credentials
         ret = os_oauth.get(
             authorization_url,
-            headers={'X-Csrf-Token': state, 'authorization': auth_headers.get('authorization')},
+            headers={
+                "X-Csrf-Token": state,
+                "authorization": auth_headers.get("authorization"),
+            },
             verify=self.con_verify_ca,
-            allow_redirects=False
+            allow_redirects=False,
         )

         if ret.status_code != 302:
-            self.fail_request("Authorization failed.", method='GET', url=authorization_url,
-                              reason=ret.reason, status_code=ret.status_code)
+            self.fail_request(
+                "Authorization failed.",
+                method="GET",
+                url=authorization_url,
+                reason=ret.reason,
+                status_code=ret.status_code,
+            )

         # In here we have `code` and `state`, I think `code` is the important one
         qwargs = {}
-        for k, v in parse_qs(urlparse(ret.headers['Location']).query).items():
+        for k, v in parse_qs(urlparse(ret.headers["Location"]).query).items():
             qwargs[k] = v[0]
-        qwargs['grant_type'] = 'authorization_code'
+        qwargs["grant_type"] = "authorization_code"

         # Using authorization code given to us in the Location header of the previous request, request a token
         ret = os_oauth.post(
             self.openshift_token_endpoint,
             headers={
-                'Accept': 'application/json',
-                'Content-Type': 'application/x-www-form-urlencoded',
+                "Accept": "application/json",
+                "Content-Type": "application/x-www-form-urlencoded",
                 # This is just base64 encoded 'openshift-challenging-client:'
-                'Authorization': 'Basic b3BlbnNoaWZ0LWNoYWxsZW5naW5nLWNsaWVudDo='
+                "Authorization": "Basic b3BlbnNoaWZ0LWNoYWxsZW5naW5nLWNsaWVudDo=",
             },
             data=urlencode(qwargs),
-            verify=self.con_verify_ca
+            verify=self.con_verify_ca,
         )

         if ret.status_code != 200:
-            self.fail_request("Failed to obtain an authorization token.", method='POST',
-                              url=self.openshift_token_endpoint,
-                              reason=ret.reason, status_code=ret.status_code)
+            self.fail_request(
+                "Failed to obtain an authorization token.",
+                method="POST",
+                url=self.openshift_token_endpoint,
+                reason=ret.reason,
+                status_code=ret.status_code,
+            )

-        return ret.json()['access_token']
+        return ret.json()["access_token"]
def openshift_logout(self): def openshift_logout(self):
name = get_oauthaccesstoken_objectname_from_token(self.auth_api_key) name = get_oauthaccesstoken_objectname_from_token(self.auth_api_key)
headers = { headers = {
'Accept': 'application/json', "Accept": "application/json",
'Content-Type': 'application/json', "Content-Type": "application/json",
'Authorization': "Bearer {0}".format(self.auth_api_key) "Authorization": "Bearer {0}".format(self.auth_api_key),
} }
url = "{0}/apis/oauth.openshift.io/v1/useroauthaccesstokens/{1}".format(self.con_host, name) url = "{0}/apis/oauth.openshift.io/v1/useroauthaccesstokens/{1}".format(
self.con_host, name
)
json = { json = {
"apiVersion": "oauth.openshift.io/v1", "apiVersion": "oauth.openshift.io/v1",
"kind": "DeleteOptions", "kind": "DeleteOptions",
"gracePeriodSeconds": 0 "gracePeriodSeconds": 0,
} }
ret = requests.delete(url, json=json, verify=self.con_verify_ca, headers=headers) ret = requests.delete(
url, json=json, verify=self.con_verify_ca, headers=headers
)
if ret.status_code != 200: if ret.status_code != 200:
self.fail_json( self.fail_json(
msg="Couldn't delete user oauth access token '{0}' due to: {1}".format(name, ret.json().get("message")), msg="Couldn't delete user oauth access token '{0}' due to: {1}".format(
status_code=ret.status_code name, ret.json().get("message")
),
status_code=ret.status_code,
) )
return True return True
@@ -376,7 +413,7 @@ class OpenShiftAuthModule(AnsibleModule):
def fail_request(self, msg, **kwargs): def fail_request(self, msg, **kwargs):
req_info = {} req_info = {}
for k, v in kwargs.items(): for k, v in kwargs.items():
req_info['req_' + k] = v req_info["req_" + k] = v
self.fail_json(msg=msg, **req_info) self.fail_json(msg=msg, **req_info)
@@ -388,5 +425,5 @@ def main():
module.fail_json(msg=str(e), exception=traceback.format_exc()) module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__': if __name__ == "__main__":
main() main()
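The login flow above leans on two small details worth seeing in isolation: the hardcoded `Authorization` header is nothing more than the base64 encoding of `openshift-challenging-client:` (a client id with an empty secret, as the inline comment notes), and the authorization code is extracted from the 302 redirect's `Location` header with `urlparse`/`parse_qs`. A minimal sketch (the `Location` URL below is a made-up example):

```python
import base64
from urllib.parse import urlparse, parse_qs

# The literal header used by the module is just "client_id:" base64-encoded,
# with an empty client secret after the colon.
encoded = base64.b64encode(b"openshift-challenging-client:").decode()
print("Basic " + encoded)

# Extracting the authorization code from a redirect's Location header,
# the same way the module builds `qwargs` (the URL here is hypothetical).
location = "https://api.example:6443/oauth/token/implicit?code=abc123&state=1"
qwargs = {k: v[0] for k, v in parse_qs(urlparse(location).query).items()}
qwargs["grant_type"] = "authorization_code"
print(qwargs)
```

`parse_qs` returns lists for each key, which is why the module takes `v[0]` for every parameter before posting the form-encoded token request.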

View File

@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function

__metaclass__ = type

# STARTREMOVE (downstream)
DOCUMENTATION = r"""

module: openshift_build

@@ -134,9 +135,9 @@ options:
requirements:
  - python >= 3.6
  - kubernetes >= 12.0.0
"""

EXAMPLES = r"""
# Starts build from build config default/hello-world
- name: Starts build from build config
  community.okd.openshift_build:
@@ -171,9 +172,9 @@ EXAMPLES = r'''
    build_phases:
      - New
    state: cancelled
"""

RETURN = r"""
builds:
  description:
    - The builds that were started/cancelled.
@@ -200,37 +201,47 @@ builds:
      description: Current status details for the object.
      returned: success
      type: dict
"""
# ENDREMOVE (downstream)

import copy

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    AUTH_ARG_SPEC,
)


def argument_spec():
    args = copy.deepcopy(AUTH_ARG_SPEC)
    args_options = dict(
        name=dict(type="str", required=True), value=dict(type="str", required=True)
    )
    args.update(
        dict(
            state=dict(
                type="str",
                choices=["started", "cancelled", "restarted"],
                default="started",
            ),
            build_args=dict(type="list", elements="dict", options=args_options),
            commit=dict(type="str"),
            env_vars=dict(type="list", elements="dict", options=args_options),
            build_name=dict(type="str"),
            build_config_name=dict(type="str"),
            namespace=dict(type="str", required=True),
            incremental=dict(type="bool"),
            no_cache=dict(type="bool"),
            wait=dict(type="bool", default=False),
            wait_sleep=dict(type="int", default=5),
            wait_timeout=dict(type="int", default=120),
            build_phases=dict(
                type="list",
                elements="str",
                default=[],
                choices=["New", "Pending", "Running"],
            ),
        )
    )
    return args

@@ -238,23 +249,24 @@ def argument_spec():
def main():
    mutually_exclusive = [
        ("build_name", "build_config_name"),
    ]
    from ansible_collections.community.okd.plugins.module_utils.openshift_builds import (
        OpenShiftBuilds,
    )

    module = OpenShiftBuilds(
        argument_spec=argument_spec(),
        mutually_exclusive=mutually_exclusive,
        required_one_of=[
            [
                "build_name",
                "build_config_name",
            ]
        ],
    )
    module.run_module()


if __name__ == "__main__":
    main()
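Listing `build_name`/`build_config_name` under both `mutually_exclusive` and `required_one_of` means the module accepts exactly one of the two. A standalone sketch of that combined rule (a hypothetical helper, not part of `AnsibleModule`):

```python
def exactly_one_of(params, group=("build_name", "build_config_name")):
    # Hypothetical check mirroring what mutually_exclusive plus
    # required_one_of enforce together: exactly one key in the group is set.
    provided = [k for k in group if params.get(k) is not None]
    if len(provided) == 0:
        raise ValueError("one of {0} is required".format(" or ".join(group)))
    if len(provided) > 1:
        raise ValueError(
            "parameters are mutually exclusive: {0}".format(", ".join(group))
        )
    return provided[0]
```

In the real module, `AnsibleModule` performs this validation itself and calls `fail_json` instead of raising.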

View File

@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function

__metaclass__ = type

# STARTREMOVE (downstream)
DOCUMENTATION = r"""

module: openshift_import_image

@@ -75,9 +76,9 @@ requirements:
  - python >= 3.6
  - kubernetes >= 12.0.0
  - docker-image-py
"""

EXAMPLES = r"""
# Import tag latest into a new image stream.
- name: Import tag latest into new image stream
  community.okd.openshift_import_image:
@@ -122,10 +123,10 @@ EXAMPLES = r'''
      - mystream3
    source: registry.io/repo/image:latest
    all: true
"""

RETURN = r"""
result:
  description:
    - List with all ImageStreamImport that have been created.
@@ -153,42 +154,44 @@ result:
      description: Current status details for the object.
      returned: success
      type: dict
"""
# ENDREMOVE (downstream)

import copy

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    AUTH_ARG_SPEC,
)


def argument_spec():
    args = copy.deepcopy(AUTH_ARG_SPEC)
    args.update(
        dict(
            namespace=dict(type="str", required=True),
            name=dict(type="raw", required=True),
            all=dict(type="bool", default=False),
            validate_registry_certs=dict(type="bool"),
            reference_policy=dict(
                type="str", choices=["source", "local"], default="source"
            ),
            scheduled=dict(type="bool", default=False),
            source=dict(type="str"),
        )
    )
    return args


def main():
    from ansible_collections.community.okd.plugins.module_utils.openshift_import_image import (
        OpenShiftImportImage,
    )

    module = OpenShiftImportImage(
        argument_spec=argument_spec(), supports_check_mode=True
    )
    module.run_module()


if __name__ == "__main__":
    main()

View File

@@ -2,13 +2,14 @@
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function

__metaclass__ = type

# Copyright (c) 2020-2021, Red Hat
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_process

short_description: Process an OpenShift template.openshift.io/v1 Template

@@ -49,6 +50,7 @@ options:
    description:
      - The namespace that resources should be created, updated, or deleted in.
      - Only used when I(state) is present or absent.
    type: str
  parameters:
    description:
      - 'A set of key: value pairs that will be used to set/override values in the Template.'
@@ -70,9 +72,9 @@ options:
    type: str
    default: rendered
    choices: [ absent, present, rendered ]
"""

EXAMPLES = r"""
- name: Process a template in the cluster
  community.okd.openshift_process:
    name: nginx-example
@@ -87,8 +89,8 @@ EXAMPLES = r'''
  community.okd.k8s:
    namespace: default
    definition: '{{ item }}'
    wait: true
    apply: true
  loop: '{{ result.resources }}'

- name: Process a template with parameters from an env file and create the resources
@@ -98,7 +100,7 @@ EXAMPLES = r'''
    namespace_target: default
    parameter_file: 'files/nginx.env'
    state: present
    wait: true

- name: Process a local template and create the resources
  community.okd.openshift_process:
@@ -113,10 +115,10 @@ EXAMPLES = r'''
    parameter_file: files/example.env
    namespace_target: default
    state: absent
    wait: true
"""

RETURN = r"""
result:
  description:
    - The created, patched, or otherwise present object. Will be empty in the case of a deletion.
@@ -200,11 +202,13 @@ resources:
    conditions:
      type: complex
      description: Array of status conditions for the object. Not guaranteed to be present
"""
# ENDREMOVE (downstream)

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    AUTH_ARG_SPEC,
    RESOURCE_ARG_SPEC,
    WAIT_ARG_SPEC,
)

@@ -213,24 +217,26 @@ def argspec():
    argument_spec.update(AUTH_ARG_SPEC)
    argument_spec.update(WAIT_ARG_SPEC)
    argument_spec.update(RESOURCE_ARG_SPEC)
    argument_spec["state"] = dict(
        type="str", default="rendered", choices=["present", "absent", "rendered"]
    )
    argument_spec["namespace"] = dict(type="str")
    argument_spec["namespace_target"] = dict(type="str")
    argument_spec["parameters"] = dict(type="dict")
    argument_spec["name"] = dict(type="str")
    argument_spec["parameter_file"] = dict(type="str")

    return argument_spec


def main():
    from ansible_collections.community.okd.plugins.module_utils.openshift_process import (
        OpenShiftProcess,
    )

    module = OpenShiftProcess(argument_spec=argspec(), supports_check_mode=True)
    module.run_module()


if __name__ == "__main__":
    main()

View File

@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function

__metaclass__ = type

# STARTREMOVE (downstream)
DOCUMENTATION = r"""

module: openshift_registry_info

@@ -40,9 +41,9 @@ requirements:
  - python >= 3.6
  - kubernetes >= 12.0.0
  - docker-image-py
"""

EXAMPLES = r"""
# Get registry information
- name: Read integrated registry information
  community.okd.openshift_registry_info:
@@ -50,11 +51,11 @@ EXAMPLES = r'''
# Read registry integrated information and attempt to contact using local client.
- name: Attempt to contact integrated registry using local client
  community.okd.openshift_registry_info:
    check: true
"""

RETURN = r"""
internal_hostname:
  description:
    - The internal registry hostname.
@@ -79,36 +80,30 @@ check:
    description: message describing the ping operation.
    returned: always
    type: str
"""
# ENDREMOVE (downstream)

import copy

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    AUTH_ARG_SPEC,
)


def argument_spec():
    args = copy.deepcopy(AUTH_ARG_SPEC)
    args.update(dict(check=dict(type="bool", default=False)))
    return args


def main():
    from ansible_collections.community.okd.plugins.module_utils.openshift_registry import (
        OpenShiftRegistry,
    )

    module = OpenShiftRegistry(argument_spec=argument_spec(), supports_check_mode=True)
    module.run_module()


if __name__ == "__main__":
    main()

View File

@@ -9,7 +9,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type

# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_route

short_description: Expose a Service as an OpenShift Route.

@@ -133,9 +133,9 @@ options:
      - insecure
    default: insecure
    type: str
"""

EXAMPLES = r"""
- name: Create hello-world deployment
  community.okd.k8s:
    definition:
@@ -155,10 +155,10 @@ EXAMPLES = r'''
          app: hello-kubernetes
      spec:
        containers:
          - name: hello-kubernetes
            image: paulbouwer/hello-kubernetes:1.8
            ports:
              - containerPort: 8080

- name: Create Service for the hello-world deployment
  community.okd.k8s:
@@ -170,8 +170,8 @@ EXAMPLES = r'''
    namespace: default
    spec:
      ports:
        - port: 80
          targetPort: 8080
      selector:
        app: hello-kubernetes

@@ -183,9 +183,9 @@ EXAMPLES = r'''
    annotations:
      haproxy.router.openshift.io/balance: roundrobin
  register: route
"""

RETURN = r"""
result:
  description:
    - The Route object that was created or updated. Will be empty in the case of deletion.
@@ -303,20 +303,28 @@ duration:
  returned: when C(wait) is true
  type: int
  sample: 48
"""
# ENDREMOVE (downstream)

import copy

from ansible.module_utils._text import to_native

from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
    AnsibleOpenshiftModule,
)

try:
    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.runner import (
        perform_action,
    )
    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.waiter import (
        Waiter,
    )
    from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
        AUTH_ARG_SPEC,
        WAIT_ARG_SPEC,
        COMMON_ARG_SPEC,
    )
except ImportError as e:
    pass

@@ -329,7 +337,6 @@ except ImportError:
class OpenShiftRoute(AnsibleOpenshiftModule):
    def __init__(self):
        super(OpenShiftRoute, self).__init__(
            argument_spec=self.argspec,
@@ -339,7 +346,7 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
        self.append_hash = False
        self.apply = False
        self.warnings = []
        self.params["merge_type"] = None

    @property
    def argspec(self):
@@ -347,80 +354,95 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
        spec.update(copy.deepcopy(WAIT_ARG_SPEC))
        spec.update(copy.deepcopy(COMMON_ARG_SPEC))

        spec["service"] = dict(type="str", aliases=["svc"])
        spec["namespace"] = dict(required=True, type="str")
        spec["labels"] = dict(type="dict")
        spec["name"] = dict(type="str")
        spec["hostname"] = dict(type="str")
        spec["path"] = dict(type="str")
        spec["wildcard_policy"] = dict(choices=["Subdomain"], type="str")
        spec["port"] = dict(type="str")
        spec["tls"] = dict(
            type="dict",
            options=dict(
                ca_certificate=dict(type="str"),
                certificate=dict(type="str"),
                destination_ca_certificate=dict(type="str"),
                key=dict(type="str", no_log=False),
                insecure_policy=dict(
                    type="str",
                    choices=["allow", "redirect", "disallow"],
                    default="disallow",
                ),
            ),
        )
        spec["termination"] = dict(
            choices=["edge", "passthrough", "reencrypt", "insecure"], default="insecure"
        )
        spec["annotations"] = dict(type="dict")

        return spec

    def execute_module(self):
        service_name = self.params.get("service")
        namespace = self.params["namespace"]
        termination_type = self.params.get("termination")
        if termination_type == "insecure":
            termination_type = None
        state = self.params.get("state")

        if state != "absent" and not service_name:
            self.fail_json("If 'state' is not 'absent' then 'service' must be provided")

        # We need to do something a little wonky to wait if the user doesn't supply a custom condition
        custom_wait = (
            self.params.get("wait")
            and not self.params.get("wait_condition")
            and state != "absent"
        )
        if custom_wait:
            # Don't use default wait logic in perform_action
            self.params["wait"] = False

        route_name = self.params.get("name") or service_name
        labels = self.params.get("labels")
        hostname = self.params.get("hostname")
        path = self.params.get("path")
        wildcard_policy = self.params.get("wildcard_policy")
        port = self.params.get("port")
        annotations = self.params.get("annotations")

        if termination_type and self.params.get("tls"):
            tls_ca_cert = self.params["tls"].get("ca_certificate")
            tls_cert = self.params["tls"].get("certificate")
            tls_dest_ca_cert = self.params["tls"].get("destination_ca_certificate")
            tls_key = self.params["tls"].get("key")
            tls_insecure_policy = self.params["tls"].get("insecure_policy")
            if tls_insecure_policy == "disallow":
                tls_insecure_policy = None
        else:
            tls_ca_cert = (
                tls_cert
            ) = tls_dest_ca_cert = tls_key = tls_insecure_policy = None

        route = {
            "apiVersion": "route.openshift.io/v1",
            "kind": "Route",
            "metadata": {
                "name": route_name,
                "namespace": namespace,
                "labels": labels,
            },
            "spec": {},
        }

        if annotations:
            route["metadata"]["annotations"] = annotations

        if state != "absent":
            route["spec"] = self.build_route_spec(
                service_name,
                namespace,
                port=port,
                wildcard_policy=wildcard_policy,
                hostname=hostname,
@@ -434,79 +456,120 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
) )
result = perform_action(self.svc, route, self.params) result = perform_action(self.svc, route, self.params)
timeout = self.params.get('wait_timeout') timeout = self.params.get("wait_timeout")
sleep = self.params.get('wait_sleep') sleep = self.params.get("wait_sleep")
if custom_wait: if custom_wait:
v1_routes = self.find_resource('Route', 'route.openshift.io/v1', fail=True) v1_routes = self.find_resource("Route", "route.openshift.io/v1", fail=True)
waiter = Waiter(self.client, v1_routes, wait_predicate) waiter = Waiter(self.client, v1_routes, wait_predicate)
success, result['result'], result['duration'] = waiter.wait(timeout=timeout, sleep=sleep, name=route_name, namespace=namespace) success, result["result"], result["duration"] = waiter.wait(
timeout=timeout, sleep=sleep, name=route_name, namespace=namespace
)
self.exit_json(**result) self.exit_json(**result)
def build_route_spec(self, service_name, namespace, port=None, wildcard_policy=None, hostname=None, path=None, termination_type=None, def build_route_spec(
tls_insecure_policy=None, tls_ca_cert=None, tls_cert=None, tls_key=None, tls_dest_ca_cert=None): self,
v1_services = self.find_resource('Service', 'v1', fail=True) service_name,
namespace,
port=None,
wildcard_policy=None,
hostname=None,
path=None,
termination_type=None,
tls_insecure_policy=None,
tls_ca_cert=None,
tls_cert=None,
tls_key=None,
tls_dest_ca_cert=None,
):
v1_services = self.find_resource("Service", "v1", fail=True)
try: try:
target_service = v1_services.get(name=service_name, namespace=namespace) target_service = v1_services.get(name=service_name, namespace=namespace)
except NotFoundError: except NotFoundError:
if not port: if not port:
self.fail_json(msg="You need to provide the 'port' argument when exposing a non-existent service") self.fail_json(
msg="You need to provide the 'port' argument when exposing a non-existent service"
)
target_service = None target_service = None
except DynamicApiError as exc: except DynamicApiError as exc:
self.fail_json(msg='Failed to retrieve service to be exposed: {0}'.format(exc.body), self.fail_json(
error=exc.status, status=exc.status, reason=exc.reason) msg="Failed to retrieve service to be exposed: {0}".format(exc.body),
error=exc.status,
status=exc.status,
reason=exc.reason,
)
except Exception as exc: except Exception as exc:
self.fail_json(msg='Failed to retrieve service to be exposed: {0}'.format(to_native(exc)), self.fail_json(
error='', status='', reason='') msg="Failed to retrieve service to be exposed: {0}".format(
to_native(exc)
),
error="",
status="",
reason="",
)
        route_spec = {
            "tls": {},
            "to": {
                "kind": "Service",
                "name": service_name,
            },
            "port": {
                "targetPort": self.set_port(target_service, port),
            },
            "wildcardPolicy": wildcard_policy,
        }

        # Want to conditionally add these so we don't overwrite what is automatically added when nothing is provided
        if termination_type:
            route_spec["tls"] = dict(termination=termination_type.capitalize())
            if tls_insecure_policy:
                if termination_type == "edge":
                    route_spec["tls"][
                        "insecureEdgeTerminationPolicy"
                    ] = tls_insecure_policy.capitalize()
                elif termination_type == "passthrough":
                    if tls_insecure_policy != "redirect":
                        self.fail_json(
                            "'redirect' is the only supported insecureEdgeTerminationPolicy for passthrough routes"
                        )
                    route_spec["tls"][
                        "insecureEdgeTerminationPolicy"
                    ] = tls_insecure_policy.capitalize()
                elif termination_type == "reencrypt":
                    self.fail_json(
                        "'tls.insecure_policy' is not supported with reencrypt routes"
                    )
            else:
                route_spec["tls"]["insecureEdgeTerminationPolicy"] = None
            if tls_ca_cert:
                if termination_type == "passthrough":
                    self.fail_json(
                        "'tls.ca_certificate' is not supported with passthrough routes"
                    )
                route_spec["tls"]["caCertificate"] = tls_ca_cert
            if tls_cert:
                if termination_type == "passthrough":
                    self.fail_json(
                        "'tls.certificate' is not supported with passthrough routes"
                    )
                route_spec["tls"]["certificate"] = tls_cert
            if tls_key:
                if termination_type == "passthrough":
                    self.fail_json("'tls.key' is not supported with passthrough routes")
                route_spec["tls"]["key"] = tls_key
            if tls_dest_ca_cert:
                if termination_type != "reencrypt":
                    self.fail_json(
                        "'destination_certificate' is only valid for reencrypt routes"
                    )
                route_spec["tls"]["destinationCACertificate"] = tls_dest_ca_cert
        else:
            route_spec["tls"] = None

        if hostname:
            route_spec["host"] = hostname
        if path:
            route_spec["path"] = path

        return route_spec
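The TLS branch above encodes three termination modes with different constraints: edge routes accept any insecure policy, passthrough routes only accept `redirect`, and reencrypt routes accept none. A minimal standalone sketch of those rules (`build_tls_spec` is a hypothetical helper, not part of the module, raising `ValueError` where the module calls `self.fail_json`):

```python
def build_tls_spec(termination_type, insecure_policy=None):
    """Sketch of the module's tls validation rules over plain values."""
    if not termination_type:
        return None
    tls = {"termination": termination_type.capitalize()}
    if insecure_policy:
        if termination_type == "edge":
            tls["insecureEdgeTerminationPolicy"] = insecure_policy.capitalize()
        elif termination_type == "passthrough":
            if insecure_policy != "redirect":
                raise ValueError(
                    "'redirect' is the only supported insecureEdgeTerminationPolicy "
                    "for passthrough routes"
                )
            tls["insecureEdgeTerminationPolicy"] = insecure_policy.capitalize()
        elif termination_type == "reencrypt":
            raise ValueError(
                "'tls.insecure_policy' is not supported with reencrypt routes"
            )
    else:
        # No policy given: leave the field explicitly unset.
        tls["insecureEdgeTerminationPolicy"] = None
    return tls
```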
@@ -514,7 +577,7 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
        if port_arg:
            return port_arg
        for p in service.spec.ports:
            if p.protocol == "TCP":
                if p.name is not None:
                    return p.name
                return p.targetPort
@@ -525,7 +588,7 @@ def wait_predicate(route):
    if not (route.status and route.status.ingress):
        return False
    for ingress in route.status.ingress:
        match = [x for x in ingress.conditions if x.type == "Admitted"]
        if not match:
            return False
        match = match[0]
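The `wait_predicate` hunk above stops at the `Admitted` lookup. As a hedged illustration, the same check over plain dicts might look like this (the assumption that the matched condition must carry `status == "True"` is not shown in the truncated hunk):

```python
def all_ingress_admitted(status):
    """Return True only when every ingress has an Admitted=True condition."""
    ingress_list = status.get("ingress") or []
    if not ingress_list:
        return False
    for ingress in ingress_list:
        admitted = [
            c for c in ingress.get("conditions", []) if c.get("type") == "Admitted"
        ]
        if not admitted:
            return False
        if admitted[0].get("status") != "True":
            return False
    return True
```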
@@ -538,5 +601,5 @@ def main():
    OpenShiftRoute().run_module()


if __name__ == "__main__":
    main()


@@ -1,3 +1,4 @@
---
collections:
  - name: kubernetes.core
    version: '>=2.4.0'


@@ -1,3 +0,0 @@
[flake8]
max-line-length = 160
ignore = W503,E402


@@ -2,3 +2,4 @@ coverage==4.5.4
pytest
pytest-xdist
pytest-forked
pytest-ansible


@@ -1,2 +1,3 @@
---
modules:
  python_requires: ">=3.9"


@@ -0,0 +1,3 @@
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s.py validate-modules:return-syntax-error
plugins/modules/openshift_process.py validate-modules:parameter-type-not-in-doc


@@ -0,0 +1,3 @@
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s.py validate-modules:return-syntax-error
plugins/modules/openshift_process.py validate-modules:parameter-type-not-in-doc


@@ -0,0 +1,5 @@
---
collections:
- name: https://github.com/ansible-collections/kubernetes.core.git
type: git
version: main


@@ -5,28 +5,44 @@ __metaclass__ = type
from ansible_collections.community.okd.plugins.module_utils.openshift_ldap import (
    openshift_equal_dn,
    openshift_ancestorof_dn,
)

import pytest

try:
    import ldap  # pylint: disable=unused-import
except ImportError:
    pytestmark = pytest.mark.skip("This test requires the python-ldap library")


def test_equal_dn():
    assert openshift_equal_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=com"
    )
    assert not openshift_equal_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=units,ou=users,dc=ansible,dc=com"
    )
    assert not openshift_equal_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=user,dc=ansible,dc=com"
    )
    assert not openshift_equal_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=org"
    )


def test_ancestor_of_dn():
    assert not openshift_ancestorof_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=com"
    )
    assert not openshift_ancestorof_dn(
        "cn=unit,ou=users,dc=ansible,dc=com", "cn=units,ou=users,dc=ansible,dc=com"
    )
    assert openshift_ancestorof_dn(
        "ou=users,dc=ansible,dc=com", "cn=john,ou=users,dc=ansible,dc=com"
    )
    assert openshift_ancestorof_dn(
        "ou=users,dc=ansible,dc=com", "cn=mathew,ou=users,dc=ansible,dc=com"
    )
    assert not openshift_ancestorof_dn(
        "ou=users,dc=ansible,dc=com", "cn=mathew,ou=users,dc=ansible,dc=org"
    )
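These tests rely on python-ldap for real DN parsing (hence the skip marker). As a rough, assumption-laden sketch of the semantics they pin down, using naive comma splitting with no handling of escaped commas:

```python
def split_dn(dn):
    """Split a DN into normalized RDN components (DNs compare case-insensitively)."""
    return [c.strip().lower() for c in dn.split(",")]


def equal_dn(a, b):
    return split_dn(a) == split_dn(b)


def ancestor_of_dn(ancestor, child):
    """An ancestor's components form a strict suffix of the child's components."""
    a, c = split_dn(ancestor), split_dn(child)
    return len(a) < len(c) and c[len(c) - len(a):] == a
```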


@@ -9,28 +9,26 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_ldap impor
def test_missing_url():
    config = dict(kind="LDAPSyncConfig", apiVersion="v1", insecure=True)
    err = validate_ldap_sync_config(config)
    assert err == "url should be non empty attribute."


def test_binddn_and_bindpwd_linked():
    """
    one of bind_dn and bind_pwd cannot be set alone
    """
    config = dict(
        kind="LDAPSyncConfig",
        apiVersion="v1",
        url="ldap://LDAP_SERVICE_IP:389",
        insecure=True,
        bindDN="cn=admin,dc=example,dc=org",
    )

    credentials_error = (
        "bindDN and bindPassword must both be specified, or both be empty."
    )

    assert validate_ldap_sync_config(config) == credentials_error
@@ -39,7 +37,7 @@ def test_binddn_and_bindpwd_linked():
        apiVersion="v1",
        url="ldap://LDAP_SERVICE_IP:389",
        insecure=True,
        bindPassword="testing1223",
    )

    assert validate_ldap_sync_config(config) == credentials_error
@@ -53,11 +51,13 @@ def test_insecure_connection():
        insecure=True,
    )

    assert (
        validate_ldap_sync_config(config)
        == "Cannot use ldaps scheme with insecure=true."
    )

    config.update(dict(url="ldap://LDAP_SERVICE_IP:389", ca="path/to/ca/file"))
    assert (
        validate_ldap_sync_config(config) == "Cannot specify a ca with insecure=true."
    )


@@ -11,7 +11,6 @@ import pytest
def test_convert_storage_to_bytes():
    data = [
        ("1000", 1000),
        ("1000Ki", 1000 * 1024),
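The `(value, expected)` pairs above suggest binary suffix handling ("1000" stays 1000, "1000Ki" becomes 1000 * 1024). A sketch under the assumption that only plain integers and decimal/binary K/M/G suffixes matter:

```python
def storage_to_bytes(value):
    """Convert a Kubernetes-style quantity string to bytes (simplified)."""
    units = {
        "Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3,  # binary suffixes
        "K": 1000, "M": 1000 ** 2, "G": 1000 ** 3,     # decimal suffixes
    }
    # Binary suffixes are checked first so "Ki" is not mistaken for "K" + "i".
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)
```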
@@ -54,46 +53,48 @@ def validate_docker_response(resp, **kwargs):
def test_parse_docker_image_ref_valid_image_with_digest():
    image = "registry.access.redhat.com/ubi8/dotnet-21@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        hostname="registry.access.redhat.com",
        namespace="ubi8",
        name="dotnet-21",
        digest="sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669",
    )


def test_parse_docker_image_ref_valid_image_with_tag_latest():
    image = "registry.access.redhat.com/ubi8/dotnet-21:latest"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        hostname="registry.access.redhat.com",
        namespace="ubi8",
        name="dotnet-21",
        tag="latest",
    )


def test_parse_docker_image_ref_valid_image_with_tag_int():
    image = "registry.access.redhat.com/ubi8/dotnet-21:0.0.1"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        hostname="registry.access.redhat.com",
        namespace="ubi8",
        name="dotnet-21",
        tag="0.0.1",
    )


def test_parse_docker_image_ref_invalid_image():
    # The hex value of the sha256 is not valid
    image = "registry.access.redhat.com/dotnet-21@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522"
    response, err = parse_docker_image_ref(image)
@@ -101,7 +102,6 @@ def test_parse_docker_image_ref_invalid_image():
def test_parse_docker_image_ref_valid_image_without_hostname():
    image = "ansible:2.10.0"
    response, err = parse_docker_image_ref(image)
    assert err is None
@@ -110,16 +110,18 @@ def test_parse_docker_image_ref_valid_image_without_hostname():
def test_parse_docker_image_ref_valid_image_without_hostname_and_with_digest():
    image = "ansible@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        name="ansible",
        digest="sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669",
    )


def test_parse_docker_image_ref_valid_image_with_name_only():
    image = "ansible"
    response, err = parse_docker_image_ref(image)
    assert err is None
@@ -128,25 +130,27 @@ def test_parse_docker_image_ref_valid_image_with_name_only():
def test_parse_docker_image_ref_valid_image_without_hostname_with_namespace_and_name():
    image = "ibmcom/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        name="pause",
        namespace="ibmcom",
        digest="sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d",
    )


def test_parse_docker_image_ref_valid_image_with_complex_namespace_name():
    image = "registry.redhat.io/jboss-webserver-5/webserver54-openjdk11-tomcat9-openshift-rhel7:1.0"
    response, err = parse_docker_image_ref(image)
    assert err is None

    validate_docker_response(
        response,
        hostname="registry.redhat.io",
        name="webserver54-openjdk11-tomcat9-openshift-rhel7",
        namespace="jboss-webserver-5",
        tag="1.0",
    )
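The cases above outline the reference grammar `[hostname/][namespace/]name[:tag][@digest]`. A simplified parser sketch (detecting a hostname by a dot in the first path component is an assumption drawn from the test data; registry ports are not handled):

```python
def parse_image_ref(image):
    """Split a container image reference into its parts (simplified)."""
    digest = tag = hostname = namespace = None
    if "@" in image:
        image, digest = image.split("@", 1)
    elif ":" in image.rsplit("/", 1)[-1]:
        # Only a colon in the last path component can be a tag separator.
        image, tag = image.rsplit(":", 1)
    parts = image.split("/")
    if len(parts) > 1 and "." in parts[0]:
        hostname = parts.pop(0)
    if len(parts) > 1:
        namespace = "/".join(parts[:-1])
    return {
        "hostname": hostname,
        "namespace": namespace,
        "name": parts[-1],
        "tag": tag,
        "digest": digest,
    }
```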

tox.ini (new file)

@@ -0,0 +1,37 @@
[tox]
skipsdist = True
[testenv]
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
install_command = pip install {opts} {packages}
[testenv:black]
deps =
black >= 23.0, < 24.0
commands =
black {toxinidir}/plugins {toxinidir}/tests
[testenv:ansible-lint]
deps =
ansible-lint==6.21.0
changedir = {toxinidir}
commands =
ansible-lint
[testenv:linters]
deps =
flake8
{[testenv:black]deps}
commands =
black -v --check --diff {toxinidir}/plugins {toxinidir}/tests
flake8 {toxinidir}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
exclude = .git,.tox,tests/output
ignore = E123,E125,E203,E402,E501,E741,F401,F811,F841,W503
max-line-length = 160
builtins = _