Update CI - Continue work from #195 (#202)

* Upgrade Ansible and OKD versions for CI

* Use ubi9 and fix sanity

* Use correct pip install

* Try using quotes

* Ensure python3.9

* Upgrade ansible and molecule versions

* Remove DeploymentConfig

DeploymentConfigs are deprecated and now appear to cause idempotence
problems. Replacing them with Deployments fixes this.
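
The swap described above can be sketched as a stock `apps/v1` Deployment. The name, namespace, and image below are illustrative, following the test template conventions rather than quoting a manifest from this PR:

```yaml
# DeploymentConfig (apps.openshift.io/v1) replaced by a stock Deployment (apps/v1).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world        # illustrative name
  namespace: testing
spec:
  replicas: 1
  selector:                # required by Deployment; DeploymentConfig used a bare map
    matchLabels:
      app: hello-world
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: python    # illustrative image
```

Unlike a DeploymentConfig, a Deployment requires an explicit `spec.selector.matchLabels` matching the pod template labels, which is why the test vars in this PR gain a `selector` block.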

* Attempt to fix ldap integration tests

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Move sanity and unit tests to GH actions

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* First round of sanity fixes

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add kubernetes.core collection as sanity requirement

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add ignore-2.16.txt

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Attempt to fix units

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add ignore-2.17

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Attempt to fix unit tests

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add pytest-ansible to test-requirements.txt

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add changelog fragment

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add workflow for ansible-lint

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Apply black

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix linters

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add # fmt: skip

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Yet another round of linting

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Yet another round of linting

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Remove setup.cfg

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Revert #fmt

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Use ansible-core 2.14

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Cleanup ansible-lint ignores

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Try using service instead of pod IP

* Fix typo

* Actually use the correct port

* See if NetworkPolicy is preventing connection

* Use Pod internal IP

* Fix adm prune auth roles syntax

* Add some retry steps

* Fix openshift_builds target

* Add --force-with-deps flag when building downstream collection

* Remove yamllint from tox linters, bump minimum supported Python version to 3.9, remove support for ansible-core < 2.14

---------

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>
Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Alina Buzachis <abuzachis@redhat.com>
Bikouo Aubin
2023-11-15 18:00:38 +01:00
committed by GitHub
parent cb796e1298
commit a63e5b7b36
76 changed files with 4364 additions and 3510 deletions

5
.config/ansible-lint.yml Normal file

@@ -0,0 +1,5 @@
---
profile: production
exclude_paths:
- molecule
- tests/sanity

4
.github/patchback.yml vendored Normal file

@@ -0,0 +1,4 @@
---
backport_branch_prefix: patchback/backports/
backport_label_prefix: backport-
target_branch_prefix: stable-

6
.github/settings.yml vendored Normal file

@@ -0,0 +1,6 @@
---
# DO NOT MODIFY
# Settings: https://probot.github.io/apps/settings/
# Pull settings from https://github.com/ansible-collections/.github/blob/master/.github/settings.yml
_extends: ".github"

60
.github/stale.yml vendored Normal file

@@ -0,0 +1,60 @@
---
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 90
# Number of days of inactivity before an Issue or Pull Request with the stale
# label is closed. Set to false to disable. If disabled, issues still need to be
# closed manually, but will remain marked as stale.
daysUntilClose: 30
# Only issues or pull requests with all of these labels are checked for staleness.
# Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set
# to `[]` to disable
exemptLabels:
- security
- planned
- priority/critical
- lifecycle/frozen
- verified
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: true
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: false
# Label to use when marking as stale
staleLabel: lifecycle/stale
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
pulls:
markComment: |-
PRs go stale after 90 days of inactivity.
If there is no further activity, the PR will be closed in another 30 days.
unmarkComment: >-
This pull request is no longer stale.
closeComment: >-
This pull request has been closed due to inactivity.
issues:
markComment: |-
Issues go stale after 90 days of inactivity.
If there is no further activity, the issue will be closed in another 30 days.
unmarkComment: >-
This issue is no longer stale.
closeComment: >-
This issue has been closed due to inactivity.

23
.github/workflows/changelog.yml vendored Normal file

@@ -0,0 +1,23 @@
---
name: Changelog
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request:
types:
- opened
- reopened
- labeled
- unlabeled
- synchronize
branches:
- main
- stable-*
tags:
- '*'
jobs:
changelog:
uses: ansible-network/github_actions/.github/workflows/changelog.yml@main

29
.github/workflows/linters.yml vendored Normal file

@@ -0,0 +1,29 @@
---
name: Linters
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request:
types:
- opened
- reopened
- labeled
- unlabeled
- synchronize
branches:
- main
- stable-*
tags:
- '*'
jobs:
linters:
uses: ansible-network/github_actions/.github/workflows/tox-linters.yml@main
ansible-lint:
runs-on: ubuntu-latest
steps:
- uses: ansible-network/github_actions/.github/actions/checkout_dependency@main
- name: Run ansible-lint
uses: ansible/ansible-lint@v6.21.0

23
.github/workflows/sanity-tests.yml vendored Normal file

@@ -0,0 +1,23 @@
---
name: Sanity tests
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request:
types:
- opened
- reopened
- synchronize
branches:
- main
- stable-*
tags:
- '*'
jobs:
sanity:
uses: ansible-network/github_actions/.github/workflows/sanity.yml@main
with:
collection_pre_install: '-r source/tests/sanity/requirements.yml'

21
.github/workflows/unit-tests.yml vendored Normal file

@@ -0,0 +1,21 @@
---
name: Unit tests
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request:
types:
- opened
- reopened
- synchronize
branches:
- main
- stable-*
tags:
- '*'
jobs:
unit-source:
uses: ansible-network/github_actions/.github/workflows/unit_source.yml@main


@@ -1,33 +1,15 @@
---
# Based on ansible-lint config
extends: default
rules:
braces:
max-spaces-inside: 1
level: error
brackets:
max-spaces-inside: 1
level: error
colons:
max-spaces-after: -1
level: error
commas:
max-spaces-after: -1
level: error
comments: disable
comments-indentation: disable
document-start: disable
empty-lines:
max: 3
level: error
hyphens:
level: error
indentation: disable
key-duplicates: enable
line-length: disable
new-line-at-end-of-file: disable
new-lines:
type: unix
trailing-spaces: disable
truthy: disable
indentation:
ignore: &default_ignores |
# automatically generated, we can't control it
changelogs/changelog.yaml
# Will be gone when we release and automatically reformatted
changelogs/fragments/*
document-start:
ignore: *default_ignores
line-length:
ignore: *default_ignores
max: 160
ignore-from-file: .gitignore


@@ -16,7 +16,7 @@ build: clean
ansible-galaxy collection build
install: build
ansible-galaxy collection install -p ansible_collections community-okd-$(VERSION).tar.gz
ansible-galaxy collection install --force -p ansible_collections community-okd-$(VERSION).tar.gz
sanity: install
cd ansible_collections/community/okd && ansible-test sanity -v --python $(PYTHON_VERSION) $(SANITY_TEST_ARGS)


@@ -10,13 +10,8 @@ The collection includes a variety of Ansible content to help automate the manage
<!--start requires_ansible-->
## Ansible version compatibility
This collection has been tested against following Ansible versions: **>=2.9.17**.
This collection has been tested against following Ansible versions: **>=2.14.0**.
For collections that support Ansible 2.9, please ensure you update your `network_os` to use the
fully qualified collection name (for example, `cisco.ios.ios`).
Plugins and modules within a collection may be tested with only specific Ansible versions.
A collection may contain metadata that identifies these versions.
PEP440 is the schema used to describe the versions of Ansible.
<!--end requires_ansible-->
## Python Support


@@ -1,3 +1,4 @@
---
ancestor: null
releases:
0.1.0:


@@ -10,21 +10,21 @@ notesdir: fragments
prelude_section_name: release_summary
prelude_section_title: Release Summary
sections:
- - major_changes
- - major_changes
- Major Changes
- - minor_changes
- - minor_changes
- Minor Changes
- - breaking_changes
- - breaking_changes
- Breaking Changes / Porting Guide
- - deprecated_features
- - deprecated_features
- Deprecated Features
- - removed_features
- - removed_features
- Removed Features (previously deprecated)
- - security_fixes
- - security_fixes
- Security Fixes
- - bugfixes
- - bugfixes
- Bugfixes
- - known_issues
- - known_issues
- Known Issues
title: OKD Collection
trivial_section_name: trivial


@@ -1,2 +1,4 @@
---
deprecated_features:
- openshift - the ``openshift`` inventory plugin has been deprecated and will be removed in release 4.0.0 (https://github.com/ansible-collections/kubernetes.core/issues/31).
- openshift - the ``openshift`` inventory plugin has been deprecated and will be removed in release 4.0.0
(https://github.com/ansible-collections/kubernetes.core/issues/31).


@@ -0,0 +1,6 @@
---
trivial:
- "Move unit and sanity tests from zuul to GitHub Actions (https://github.com/openshift/community.okd/pull/202)."
breaking_changes:
- "Remove support for ansible-core < 2.14 (https://github.com/openshift/community.okd/pull/202)."
- "Bump minimum Python supported version to 3.9 (https://github.com/openshift/community.okd/pull/202)."


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi
FROM registry.access.redhat.com/ubi9/ubi
ENV OPERATOR=/usr/local/bin/ansible-operator \
USER_UID=1001 \
@@ -11,20 +11,20 @@ RUN yum install -y \
glibc-langpack-en \
git \
make \
python39 \
python39-devel \
python39-pip \
python39-setuptools \
python3 \
python3-devel \
python3-pip \
python3-setuptools \
gcc \
openldap-devel \
&& pip3 install --no-cache-dir --upgrade setuptools pip \
&& pip3 install --no-cache-dir \
&& python3.9 -m pip install --no-cache-dir --upgrade setuptools pip \
&& python3.9 -m pip install --no-cache-dir \
kubernetes \
ansible==2.9.* \
"molecule<3.3.0" \
"ansible-core" \
"molecule" \
&& yum clean all \
&& rm -rf $HOME/.cache \
&& curl -L https://github.com/openshift/okd/releases/download/4.5.0-0.okd-2020-08-12-020541/openshift-client-linux-4.5.0-0.okd-2020-08-12-020541.tar.gz | tar -xz -C /usr/local/bin
&& curl -L https://github.com/openshift/okd/releases/download/4.12.0-0.okd-2023-04-16-041331/openshift-client-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz | tar -xz -C /usr/local/bin
# TODO: Is there a better way to install this client in ubi9?
COPY . /opt/ansible


@@ -47,7 +47,7 @@ f_text_sub()
sed -i.bak "s/Kubernetes/OpenShift/g" "${_build_dir}/galaxy.yml"
sed -i.bak "s/^version\:.*$/version: ${DOWNSTREAM_VERSION}/" "${_build_dir}/galaxy.yml"
sed -i.bak "/STARTREMOVE/,/ENDREMOVE/d" "${_build_dir}/README.md"
sed -i.bak "s/[[:space:]]okd:$/ openshift:/" ${_build_dir}/meta/runtime.yml
sed -i.bak "s/[[:space:]]okd:$/ openshift:/" "${_build_dir}/meta/runtime.yml"
find "${_build_dir}" -type f ! -name galaxy.yml -exec sed -i.bak "s/community\.okd/redhat\.openshift/g" {} \;
find "${_build_dir}" -type f -name "*.bak" -delete
@@ -67,7 +67,6 @@ f_prep()
LICENSE
README.md
Makefile
setup.cfg
.yamllint
requirements.txt
requirements.yml
@@ -76,6 +75,7 @@ f_prep()
# Directories to recursively copy downstream (relative repo root dir path)
_dir_manifest=(
.config
changelogs
ci
meta
@@ -156,7 +156,7 @@ f_handle_doc_fragments_workaround()
# Build the collection, export docs, render them, stitch it all back together
pushd "${_build_dir}" || return
ansible-galaxy collection build
ansible-galaxy collection install -p "${install_collections_dir}" ./*.tar.gz
ansible-galaxy collection install --force-with-deps -p "${install_collections_dir}" ./*.tar.gz
rm ./*.tar.gz
for doc_fragment_mod in "${_doc_fragment_modules[@]}"
do


@@ -1,5 +1,5 @@
---
requires_ansible: '>=2.9.17'
requires_ansible: '>=2.14.0'
action_groups:
okd:
- k8s


@@ -21,16 +21,13 @@
debug:
var: output
- name: Create deployment config
- name: Create deployment
community.okd.k8s:
state: present
name: hello-world
namespace: testing
definition: '{{ okd_dc_template }}'
wait: yes
wait_condition:
type: Available
status: True
vars:
k8s_pod_name: hello-world
k8s_pod_image: python
@@ -71,19 +68,12 @@
namespace: '{{ namespace }}'
definition: '{{ okd_imagestream_template }}'
- name: Create DeploymentConfig to reference ImageStream
community.okd.k8s:
name: '{{ k8s_pod_name }}'
namespace: '{{ namespace }}'
definition: '{{ okd_dc_template }}'
vars:
k8s_pod_name: is-idempotent-dc
- name: Create Deployment to reference ImageStream
community.okd.k8s:
name: '{{ k8s_pod_name }}'
namespace: '{{ namespace }}'
definition: '{{ k8s_deployment_template | combine(metadata) }}'
wait: true
vars:
k8s_pod_annotations:
"alpha.image.policy.openshift.io/resolve-names": "*"


@@ -13,7 +13,7 @@ metadata:
tags: quickstart,examples
name: simple-example
objects:
- apiVersion: v1
- apiVersion: v1
kind: ConfigMap
metadata:
annotations:
@@ -22,12 +22,12 @@ objects:
data:
content: "${CONTENT}"
parameters:
- description: The name assigned to the ConfigMap
- description: The name assigned to the ConfigMap
displayName: Name
name: NAME
required: true
value: example
- description: The value for the content key of the configmap
- description: The value for the content key of the configmap
displayName: Content
name: CONTENT
required: true


@@ -4,7 +4,7 @@ dependency:
options:
requirements-file: requirements.yml
driver:
name: delegated
name: default
platforms:
- name: cluster
groups:
@@ -17,9 +17,6 @@ provisioner:
config_options:
inventory:
enable_plugins: community.okd.openshift
lint: |
set -e
ansible-lint
inventory:
hosts:
plugin: community.okd.openshift
@@ -34,14 +31,10 @@ provisioner:
ANSIBLE_COLLECTIONS_PATHS: ${OVERRIDE_COLLECTION_PATH:-$MOLECULE_PROJECT_DIRECTORY}
verifier:
name: ansible
lint: |
set -e
ansible-lint
scenario:
name: default
test_sequence:
- dependency
- lint
- syntax
- prepare
- converge


@@ -89,6 +89,7 @@ def execute():
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
connection = ldap.initialize(module.params['server_uri'])
connection.set_option(ldap.OPT_REFERRALS, 0)
try:
connection.simple_bind_s(module.params['bind_dn'], module.params['bind_pw'])
except ldap.LDAPError as e:


@@ -1,3 +1,4 @@
---
- block:
- name: Get LDAP definition
set_fact:
@@ -57,7 +58,6 @@
admins_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'admins') | first }}"
devs_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'developers') | first }}"
- name: Synchronize Groups (Remove check_mode)
community.okd.openshift_adm_groups_sync:
config: "{{ sync_config }}"


@@ -1,3 +1,4 @@
---
- block:
- name: Get LDAP definition
set_fact:
@@ -57,7 +58,6 @@
banking_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'banking') | first }}"
insurance_group: "{{ result.groups | selectattr('metadata.name', 'equalto', 'insurance') | first }}"
- name: Synchronize Groups (Remove check_mode)
community.okd.openshift_adm_groups_sync:
config: "{{ sync_config }}"
@@ -79,7 +79,6 @@
users:
- "alice-courtney@ansible.org"
- name: Read 'banking' openshift group
kubernetes.core.k8s_info:
kind: Group


@@ -31,15 +31,14 @@
value: "{{ ldap_root }}"
ports:
- containerPort: 1389
name: ldap-server
register: pod_info
- name: Set Pod Internal IP
set_fact:
podIp: "{{ pod_info.result.status.podIP }}"
- name: Set LDAP Common facts
set_fact:
ldap_server_uri: "ldap://{{ podIp }}:1389"
# we can use the Pod IP directly because the integration tests run inside a Pod in the
# same OpenShift cluster
ldap_server_uri: "ldap://{{ pod_info.result.status.podIP }}:1389"
ldap_bind_dn: "cn={{ ldap_admin_user }},{{ ldap_root }}"
ldap_bind_pw: "{{ ldap_admin_password }}"
@@ -53,8 +52,10 @@
bind_pw: "{{ ldap_bind_pw }}"
dn: "ou=users,{{ ldap_root }}"
server_uri: "{{ ldap_server_uri }}"
# ignore_errors: true
# register: ping_ldap
register: test_ldap
retries: 10
delay: 5
until: test_ldap is not failed
- include_tasks: "tasks/python-ldap-not-installed.yml"
- include_tasks: "tasks/rfc2307.yml"


@@ -1,3 +1,4 @@
---
- block:
- name: Create temp directory
tempfile:


@@ -1,3 +1,4 @@
---
- block:
- name: Get LDAP definition
set_fact:


@@ -1,3 +1,4 @@
---
- block:
- set_fact:
test_sa: "clusterrole-sa"


@@ -1,3 +1,4 @@
---
- block:
- set_fact:
test_ns: "prune-roles"


@@ -1,3 +1,4 @@
---
- name: Prune deployments
block:
- set_fact:
@@ -5,7 +6,6 @@
deployment_ns: "prune-deployments"
deployment_ns_2: "prune-deployments-2"
- name: Ensure namespace
community.okd.k8s:
kind: Namespace


@@ -1,3 +1,4 @@
---
- block:
- set_fact:
build_ns: "builds"
@@ -83,9 +84,10 @@
- cancel.builds | length == 1
- cancel.builds.0.metadata.name == "{{ build_config }}-1"
- cancel.builds.0.metadata.namespace == "{{ build_ns }}"
- '"cancelled" in cancel.builds.0.status'
- cancel.builds.0.status.cancelled
- name: Get Build info
- name: Get info for 1st Build
kubernetes.core.k8s_info:
version: build.openshift.io/v1
kind: Build
@@ -97,6 +99,7 @@
assert:
that:
- build.resources | length == 1
- '"cancelled" in build.resources.0.status'
- build.resources.0.status.cancelled
- build.resources.0.status.phase == 'Cancelled'
@@ -106,6 +109,7 @@
build_config_name: "{{ build_config }}"
state: restarted
build_phases:
- Pending
- Running
- New
register: restart
@@ -117,7 +121,7 @@
- restart.builds | length == 1
- 'restart.builds.0.metadata.name == "{{ build_config }}-3"'
- name: Get Build 2 info
- name: Get info for 2nd Build
kubernetes.core.k8s_info:
version: build.openshift.io/v1
kind: Build
@@ -129,10 +133,11 @@
assert:
that:
- build.resources | length == 1
- '"cancelled" in build.resources.0.status'
- build.resources.0.status.cancelled
- build.resources.0.status.phase == 'Cancelled'
- name: Get Build info
- name: Get info for 3rd build
kubernetes.core.k8s_info:
version: build.openshift.io/v1
kind: Build


@@ -1,3 +1,4 @@
---
- name: Openshift import image testing
block:


@@ -64,14 +64,16 @@ okd_dc_triggers:
okd_dc_spec:
template: '{{ k8s_pod_template }}'
triggers: '{{ okd_dc_triggers }}'
selector:
matchLabels:
app: "{{ k8s_pod_name }}"
replicas: 1
strategy:
type: Recreate
okd_dc_template:
apiVersion: v1
kind: DeploymentConfig
apiVersion: apps/v1
kind: Deployment
spec: '{{ okd_dc_spec }}'
okd_imagestream_template:


@@ -17,10 +17,11 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = """
author:
- xuxinkun (@xuxinkun)
@@ -145,29 +146,32 @@ DOCUMENTATION = '''
env:
- name: K8S_AUTH_VERIFY_SSL
aliases: [ oc_verify_ssl ]
'''
"""
from ansible_collections.kubernetes.core.plugins.connection.kubectl import Connection as KubectlConnection
from ansible_collections.kubernetes.core.plugins.connection.kubectl import (
Connection as KubectlConnection,
)
CONNECTION_TRANSPORT = 'oc'
CONNECTION_TRANSPORT = "oc"
CONNECTION_OPTIONS = {
'oc_container': '-c',
'oc_namespace': '-n',
'oc_kubeconfig': '--kubeconfig',
'oc_context': '--context',
'oc_host': '--server',
'client_cert': '--client-certificate',
'client_key': '--client-key',
'ca_cert': '--certificate-authority',
'validate_certs': '--insecure-skip-tls-verify',
'oc_token': '--token'
"oc_container": "-c",
"oc_namespace": "-n",
"oc_kubeconfig": "--kubeconfig",
"oc_context": "--context",
"oc_host": "--server",
"client_cert": "--client-certificate",
"client_key": "--client-key",
"ca_cert": "--certificate-authority",
"validate_certs": "--insecure-skip-tls-verify",
"oc_token": "--token",
}
class Connection(KubectlConnection):
''' Local oc based connections '''
"""Local oc based connections"""
transport = CONNECTION_TRANSPORT
connection_options = CONNECTION_OPTIONS
documentation = DOCUMENTATION


@@ -1,11 +1,11 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = """
name: openshift
author:
- Chris Houseknecht (@chouseknecht)
@@ -94,34 +94,41 @@ DOCUMENTATION = '''
- "python >= 3.6"
- "kubernetes >= 12.0.0"
- "PyYAML >= 3.11"
'''
"""
EXAMPLES = '''
EXAMPLES = """
# File must be named openshift.yaml or openshift.yml
# Authenticate with token, and return all pods and services for all namespaces
plugin: community.okd.openshift
connections:
- name: Authenticate with token, and return all pods and services for all namespaces
plugin: community.okd.openshift
connections:
- host: https://192.168.64.4:8443
api_key: xxxxxxxxxxxxxxxx
verify_ssl: false
# Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
plugin: community.okd.openshift
connections:
- name: Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
plugin: community.okd.openshift
connections:
- namespaces:
- testing
# Use a custom config file, and a specific context.
plugin: community.okd.openshift
connections:
- name: Use a custom config file, and a specific context.
plugin: community.okd.openshift
connections:
- kubeconfig: /path/to/config
context: 'awx/192-168-64-4:8443/developer'
'''
"""
try:
from ansible_collections.kubernetes.core.plugins.inventory.k8s import K8sInventoryException, InventoryModule as K8sInventoryModule, format_dynamic_api_exc
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import get_api_client
from ansible_collections.kubernetes.core.plugins.inventory.k8s import (
K8sInventoryException,
InventoryModule as K8sInventoryModule,
format_dynamic_api_exc,
)
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import (
get_api_client,
)
HAS_KUBERNETES_COLLECTION = True
except ImportError as e:
HAS_KUBERNETES_COLLECTION = False
@@ -134,22 +141,26 @@ except ImportError:
class InventoryModule(K8sInventoryModule):
NAME = 'community.okd.openshift'
NAME = "community.okd.openshift"
connection_plugin = 'community.okd.oc'
transport = 'oc'
connection_plugin = "community.okd.oc"
transport = "oc"
def check_kubernetes_collection(self):
if not HAS_KUBERNETES_COLLECTION:
K8sInventoryException("The kubernetes.core collection must be installed")
raise K8sInventoryException(
"The kubernetes.core collection must be installed"
)
def fetch_objects(self, connections):
self.check_kubernetes_collection()
super(InventoryModule, self).fetch_objects(connections)
self.display.deprecated("The 'openshift' inventory plugin has been deprecated and will be removed in release 4.0.0",
version='4.0.0', collection_name='community.okd')
self.display.deprecated(
"The 'openshift' inventory plugin has been deprecated and will be removed in release 4.0.0",
version="4.0.0",
collection_name="community.okd",
)
if connections:
if not isinstance(connections, list):
@@ -157,9 +168,11 @@ class InventoryModule(K8sInventoryModule):
for connection in connections:
client = get_api_client(**connection)
name = connection.get('name', self.get_default_host_name(client.configuration.host))
if connection.get('namespaces'):
namespaces = connection['namespaces']
name = connection.get(
"name", self.get_default_host_name(client.configuration.host)
)
if connection.get("namespaces"):
namespaces = connection["namespaces"]
else:
namespaces = self.get_available_namespaces(client)
for namespace in namespaces:
@@ -173,15 +186,19 @@ class InventoryModule(K8sInventoryModule):
def get_routes_for_namespace(self, client, name, namespace):
self.check_kubernetes_collection()
v1_route = client.resources.get(api_version='route.openshift.io/v1', kind='Route')
v1_route = client.resources.get(
api_version="route.openshift.io/v1", kind="Route"
)
try:
obj = v1_route.get(namespace=namespace)
except DynamicApiError as exc:
self.display.debug(exc)
raise K8sInventoryException('Error fetching Routes list: %s' % format_dynamic_api_exc(exc))
raise K8sInventoryException(
"Error fetching Routes list: %s" % format_dynamic_api_exc(exc)
)
namespace_group = 'namespace_{0}'.format(namespace)
namespace_routes_group = '{0}_routes'.format(namespace_group)
namespace_group = "namespace_{0}".format(namespace)
namespace_routes_group = "{0}_routes".format(namespace_group)
self.inventory.add_group(name)
self.inventory.add_group(namespace_group)
@@ -190,14 +207,18 @@ class InventoryModule(K8sInventoryModule):
self.inventory.add_child(namespace_group, namespace_routes_group)
for route in obj.items:
route_name = route.metadata.name
route_annotations = {} if not route.metadata.annotations else dict(route.metadata.annotations)
route_annotations = (
{}
if not route.metadata.annotations
else dict(route.metadata.annotations)
)
self.inventory.add_host(route_name)
if route.metadata.labels:
# create a group for each label_value
for key, value in route.metadata.labels:
group_name = 'label_{0}_{1}'.format(key, value)
group_name = "label_{0}_{1}".format(key, value)
self.inventory.add_group(group_name)
self.inventory.add_child(group_name, route_name)
route_labels = dict(route.metadata.labels)
@@ -207,19 +228,25 @@ class InventoryModule(K8sInventoryModule):
self.inventory.add_child(namespace_routes_group, route_name)
# add hostvars
self.inventory.set_variable(route_name, 'labels', route_labels)
self.inventory.set_variable(route_name, 'annotations', route_annotations)
self.inventory.set_variable(route_name, 'cluster_name', route.metadata.clusterName)
self.inventory.set_variable(route_name, 'object_type', 'route')
self.inventory.set_variable(route_name, 'self_link', route.metadata.selfLink)
self.inventory.set_variable(route_name, 'resource_version', route.metadata.resourceVersion)
self.inventory.set_variable(route_name, 'uid', route.metadata.uid)
self.inventory.set_variable(route_name, "labels", route_labels)
self.inventory.set_variable(route_name, "annotations", route_annotations)
self.inventory.set_variable(
route_name, "cluster_name", route.metadata.clusterName
)
self.inventory.set_variable(route_name, "object_type", "route")
self.inventory.set_variable(
route_name, "self_link", route.metadata.selfLink
)
self.inventory.set_variable(
route_name, "resource_version", route.metadata.resourceVersion
)
self.inventory.set_variable(route_name, "uid", route.metadata.uid)
if route.spec.host:
self.inventory.set_variable(route_name, 'host', route.spec.host)
self.inventory.set_variable(route_name, "host", route.spec.host)
if route.spec.path:
self.inventory.set_variable(route_name, 'path', route.spec.path)
self.inventory.set_variable(route_name, "path", route.spec.path)
if hasattr(route.spec.port, 'targetPort') and route.spec.port.targetPort:
self.inventory.set_variable(route_name, 'port', dict(route.spec.port))
if hasattr(route.spec.port, "targetPort") and route.spec.port.targetPort:
self.inventory.set_variable(route_name, "port", dict(route.spec.port))


@@ -1,35 +1,46 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import re
import operator
from functools import reduce
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.resource import create_definitions
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import CoreException
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.resource import (
create_definitions,
)
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import (
CoreException,
)
except ImportError:
pass
from ansible.module_utils._text import to_native
try:
from kubernetes.dynamic.exceptions import DynamicApiError, NotFoundError, ForbiddenError
from kubernetes.dynamic.exceptions import (
DynamicApiError,
NotFoundError,
ForbiddenError,
)
except ImportError as e:
pass
TRIGGER_ANNOTATION = 'image.openshift.io/triggers'
TRIGGER_CONTAINER = re.compile(r"(?P<path>.*)\[((?P<index>[0-9]+)|\?\(@\.name==[\"'\\]*(?P<name>[a-z0-9]([-a-z0-9]*[a-z0-9])?))")
TRIGGER_ANNOTATION = "image.openshift.io/triggers"
TRIGGER_CONTAINER = re.compile(
r"(?P<path>.*)\[((?P<index>[0-9]+)|\?\(@\.name==[\"'\\]*(?P<name>[a-z0-9]([-a-z0-9]*[a-z0-9])?))"
)
class OKDRawModule(AnsibleOpenshiftModule):
def __init__(self, **kwargs):
super(OKDRawModule, self).__init__(**kwargs)
@property
@@ -50,36 +61,60 @@ class OKDRawModule(AnsibleOpenshiftModule):
result = {"changed": False, "result": {}}
warnings = []
if self.params.get("state") != 'absent':
if self.params.get("state") != "absent":
existing = None
name = definition.get("metadata", {}).get("name")
namespace = definition.get("metadata", {}).get("namespace")
if definition.get("kind") in ['Project', 'ProjectRequest']:
if definition.get("kind") in ["Project", "ProjectRequest"]:
try:
resource = self.svc.find_resource(kind=definition.get("kind"), api_version=definition.get("apiVersion", "v1"))
existing = resource.get(name=name, namespace=namespace).to_dict()
resource = self.svc.find_resource(
kind=definition.get("kind"),
api_version=definition.get("apiVersion", "v1"),
)
existing = resource.get(
name=name, namespace=namespace
).to_dict()
except (NotFoundError, ForbiddenError):
result = self.create_project_request(definition)
changed |= result["changed"]
results.append(result)
continue
except DynamicApiError as exc:
self.fail_json(msg='Failed to retrieve requested object: {0}'.format(exc.body),
error=exc.status, status=exc.status, reason=exc.reason)
self.fail_json(
msg="Failed to retrieve requested object: {0}".format(
exc.body
),
error=exc.status,
status=exc.status,
reason=exc.reason,
)
if definition.get("kind") not in ['Project', 'ProjectRequest']:
if definition.get("kind") not in ["Project", "ProjectRequest"]:
try:
resource = self.svc.find_resource(kind=definition.get("kind"), api_version=definition.get("apiVersion", "v1"))
existing = resource.get(name=name, namespace=namespace).to_dict()
resource = self.svc.find_resource(
kind=definition.get("kind"),
api_version=definition.get("apiVersion", "v1"),
)
existing = resource.get(
name=name, namespace=namespace
).to_dict()
except Exception:
existing = None
if existing:
if resource.kind == 'DeploymentConfig':
if definition.get('spec', {}).get('triggers'):
definition = self.resolve_imagestream_triggers(existing, definition)
elif existing['metadata'].get('annotations', {}).get(TRIGGER_ANNOTATION):
definition = self.resolve_imagestream_trigger_annotation(existing, definition)
if resource.kind == "DeploymentConfig":
if definition.get("spec", {}).get("triggers"):
definition = self.resolve_imagestream_triggers(
existing, definition
)
elif (
existing["metadata"]
.get("annotations", {})
.get(TRIGGER_ANNOTATION)
):
definition = self.resolve_imagestream_trigger_annotation(
existing, definition
)
if self.params.get("validate") is not None:
warnings = self.validate(definition)
@@ -116,13 +151,15 @@ class OKDRawModule(AnsibleOpenshiftModule):
@staticmethod
def get_index(desired, objects, keys):
""" Iterates over keys, returns the first object from objects where the value of the key
"""Iterates over keys, returns the first object from objects where the value of the key
matches the value in desired
"""
# pylint: disable=use-a-generator
# pylint prefers the generator form: all(desired.get(key, True) == item.get(key, False) for key in keys)
for i, item in enumerate(objects):
if item and all([desired.get(key, True) == item.get(key, False) for key in keys]):
if item and all(
[desired.get(key, True) == item.get(key, False) for key in keys]
):
return i
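Pulled out of the class, the helper above can be exercised directly; a minimal standalone sketch:

```python
# Standalone sketch of get_index() above: return the position of the first
# object whose value matches `desired` on every key in `keys`, else None.
def get_index(desired, objects, keys):
    for i, item in enumerate(objects):
        if item and all(
            [desired.get(key, True) == item.get(key, False) for key in keys]
        ):
            return i


containers = [{"name": "db"}, {"name": "web"}]
index = get_index({"name": "web"}, containers, ["name"])  # index == 1
```

The asymmetric defaults (`True` for `desired`, `False` for `item`) mean a key missing from both sides never counts as a match.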
def resolve_imagestream_trigger_annotation(self, existing, definition):
@@ -137,84 +174,148 @@ class OKDRawModule(AnsibleOpenshiftModule):
def set_from_fields(d, fields, value):
get_from_fields(d, fields[:-1])[fields[-1]] = value
if TRIGGER_ANNOTATION in definition['metadata'].get('annotations', {}).keys():
triggers = yaml.safe_load(definition['metadata']['annotations'][TRIGGER_ANNOTATION] or '[]')
if TRIGGER_ANNOTATION in definition["metadata"].get("annotations", {}).keys():
triggers = yaml.safe_load(
definition["metadata"]["annotations"][TRIGGER_ANNOTATION] or "[]"
)
else:
triggers = yaml.safe_load(existing['metadata'].get('annotations', '{}').get(TRIGGER_ANNOTATION, '[]'))
triggers = yaml.safe_load(
existing["metadata"]
.get("annotations", "{}")
.get(TRIGGER_ANNOTATION, "[]")
)
if not isinstance(triggers, list):
return definition
for trigger in triggers:
if trigger.get('fieldPath'):
parsed = self.parse_trigger_fieldpath(trigger['fieldPath'])
path = parsed.get('path', '').split('.')
if trigger.get("fieldPath"):
parsed = self.parse_trigger_fieldpath(trigger["fieldPath"])
path = parsed.get("path", "").split(".")
if path:
existing_containers = get_from_fields(existing, path)
new_containers = get_from_fields(definition, path)
if parsed.get('name'):
existing_index = self.get_index({'name': parsed['name']}, existing_containers, ['name'])
new_index = self.get_index({'name': parsed['name']}, new_containers, ['name'])
elif parsed.get('index') is not None:
existing_index = new_index = int(parsed['index'])
if parsed.get("name"):
existing_index = self.get_index(
{"name": parsed["name"]}, existing_containers, ["name"]
)
new_index = self.get_index(
{"name": parsed["name"]}, new_containers, ["name"]
)
elif parsed.get("index") is not None:
existing_index = new_index = int(parsed["index"])
else:
existing_index = new_index = None
if existing_index is not None and new_index is not None:
if existing_index < len(existing_containers) and new_index < len(new_containers):
set_from_fields(definition, path + [new_index, 'image'], get_from_fields(existing, path + [existing_index, 'image']))
if existing_index < len(
existing_containers
) and new_index < len(new_containers):
set_from_fields(
definition,
path + [new_index, "image"],
get_from_fields(
existing, path + [existing_index, "image"]
),
)
return definition
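The annotation the method above consumes is a serialized list of trigger objects; an illustrative (not normative) example of its shape:

```python
import json

# Illustrative only: the rough shape of the image.openshift.io/triggers
# annotation that resolve_imagestream_trigger_annotation() reads back from
# the cluster. The annotation body is a JSON list, which is also valid YAML,
# hence the module's yaml.safe_load call works on it unchanged.
annotation = json.dumps(
    [
        {
            "from": {"kind": "ImageStreamTag", "name": "example:latest"},
            "fieldPath": 'spec.template.spec.containers[?(@.name=="web")].image',
        }
    ]
)
triggers = json.loads(annotation)
```

Each trigger's `fieldPath` tells the module which container image field the image stream controls, so that field is carried over from the existing object instead of being reverted.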
def resolve_imagestream_triggers(self, existing, definition):
existing_triggers = existing.get('spec', {}).get('triggers')
new_triggers = definition['spec']['triggers']
existing_containers = existing.get('spec', {}).get('template', {}).get('spec', {}).get('containers', [])
new_containers = definition.get('spec', {}).get('template', {}).get('spec', {}).get('containers', [])
existing_triggers = existing.get("spec", {}).get("triggers")
new_triggers = definition["spec"]["triggers"]
existing_containers = (
existing.get("spec", {})
.get("template", {})
.get("spec", {})
.get("containers", [])
)
new_containers = (
definition.get("spec", {})
.get("template", {})
.get("spec", {})
.get("containers", [])
)
for i, trigger in enumerate(new_triggers):
if trigger.get('type') == 'ImageChange' and trigger.get('imageChangeParams'):
names = trigger['imageChangeParams'].get('containerNames', [])
if trigger.get("type") == "ImageChange" and trigger.get(
"imageChangeParams"
):
names = trigger["imageChangeParams"].get("containerNames", [])
for name in names:
old_container_index = self.get_index({'name': name}, existing_containers, ['name'])
new_container_index = self.get_index({'name': name}, new_containers, ['name'])
if old_container_index is not None and new_container_index is not None:
image = existing['spec']['template']['spec']['containers'][old_container_index]['image']
definition['spec']['template']['spec']['containers'][new_container_index]['image'] = image
old_container_index = self.get_index(
{"name": name}, existing_containers, ["name"]
)
new_container_index = self.get_index(
{"name": name}, new_containers, ["name"]
)
if (
old_container_index is not None
and new_container_index is not None
):
image = existing["spec"]["template"]["spec"]["containers"][
old_container_index
]["image"]
definition["spec"]["template"]["spec"]["containers"][
new_container_index
]["image"] = image
existing_index = self.get_index(trigger['imageChangeParams'],
[x.get('imageChangeParams') for x in existing_triggers],
['containerNames'])
existing_index = self.get_index(
trigger["imageChangeParams"],
[x.get("imageChangeParams") for x in existing_triggers],
["containerNames"],
)
if existing_index is not None:
existing_image = existing_triggers[existing_index].get('imageChangeParams', {}).get('lastTriggeredImage')
existing_image = (
existing_triggers[existing_index]
.get("imageChangeParams", {})
.get("lastTriggeredImage")
)
if existing_image:
definition['spec']['triggers'][i]['imageChangeParams']['lastTriggeredImage'] = existing_image
existing_from = existing_triggers[existing_index].get('imageChangeParams', {}).get('from', {})
new_from = trigger['imageChangeParams'].get('from', {})
existing_namespace = existing_from.get('namespace')
existing_name = existing_from.get('name', False)
new_name = new_from.get('name', True)
add_namespace = existing_namespace and 'namespace' not in new_from.keys() and existing_name == new_name
definition["spec"]["triggers"][i]["imageChangeParams"][
"lastTriggeredImage"
] = existing_image
existing_from = (
existing_triggers[existing_index]
.get("imageChangeParams", {})
.get("from", {})
)
new_from = trigger["imageChangeParams"].get("from", {})
existing_namespace = existing_from.get("namespace")
existing_name = existing_from.get("name", False)
new_name = new_from.get("name", True)
add_namespace = (
existing_namespace
and "namespace" not in new_from.keys()
and existing_name == new_name
)
if add_namespace:
definition['spec']['triggers'][i]['imageChangeParams']['from']['namespace'] = existing_from['namespace']
definition["spec"]["triggers"][i]["imageChangeParams"][
"from"
]["namespace"] = existing_from["namespace"]
return definition
def parse_trigger_fieldpath(self, expression):
parsed = TRIGGER_CONTAINER.search(expression).groupdict()
if parsed.get('index'):
parsed['index'] = int(parsed['index'])
if parsed.get("index"):
parsed["index"] = int(parsed["index"])
return parsed
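`TRIGGER_CONTAINER` is defined elsewhere in the module; a plausible re-creation (the collection's actual pattern may differ) that handles both indexed and name-matched fieldPaths:

```python
import re

# Hypothetical stand-in for the module's TRIGGER_CONTAINER pattern; it accepts
# both indexed paths (containers[0].image) and JSONPath-style name matches
# (containers[?(@.name=="web")].image).
TRIGGER_CONTAINER = re.compile(
    r"(?P<path>.*)\[((?P<index>[0-9]+)|\?\(@\.name==[\"'\\]*(?P<name>[a-z0-9]([-a-z0-9]*[a-z0-9])?))"
)


def parse_trigger_fieldpath(expression):
    parsed = TRIGGER_CONTAINER.search(expression).groupdict()
    if parsed.get("index"):
        parsed["index"] = int(parsed["index"])
    return parsed


parsed = parse_trigger_fieldpath("spec.template.spec.containers[0].image")
# parsed["path"] == "spec.template.spec.containers", parsed["index"] == 0
```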
def create_project_request(self, definition):
definition['kind'] = 'ProjectRequest'
result = {'changed': False, 'result': {}}
resource = self.svc.find_resource(kind='ProjectRequest', api_version=definition['apiVersion'], fail=True)
definition["kind"] = "ProjectRequest"
result = {"changed": False, "result": {}}
resource = self.svc.find_resource(
kind="ProjectRequest", api_version=definition["apiVersion"], fail=True
)
if not self.check_mode:
try:
k8s_obj = resource.create(definition)
result['result'] = k8s_obj.to_dict()
result["result"] = k8s_obj.to_dict()
except DynamicApiError as exc:
self.fail_json(msg="Failed to create object: {0}".format(exc.body),
error=exc.status, status=exc.status, reason=exc.reason)
result['changed'] = True
result['method'] = 'create'
self.fail_json(
msg="Failed to create object: {0}".format(exc.body),
error=exc.status,
status=exc.status,
reason=exc.reason,
)
result["changed"] = True
result["method"] = "create"
return result


@@ -1,11 +1,14 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from kubernetes import client
@@ -18,31 +21,36 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
def __init__(self, **kwargs):
super(OpenShiftAdmPruneAuth, self).__init__(**kwargs)
def prune_resource_binding(self, kind, api_version, ref_kind, ref_namespace_names, propagation_policy=None):
def prune_resource_binding(
self, kind, api_version, ref_kind, ref_namespace_names, propagation_policy=None
):
resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
candidates = []
for ref_namespace, ref_name in ref_namespace_names:
try:
result = resource.get(name=None, namespace=ref_namespace)
result = result.to_dict()
result = result.get('items') if 'items' in result else [result]
result = result.get("items") if "items" in result else [result]
for obj in result:
namespace = obj['metadata'].get('namespace', None)
name = obj['metadata'].get('name')
if ref_kind and obj['roleRef']['kind'] != ref_kind:
namespace = obj["metadata"].get("namespace", None)
name = obj["metadata"].get("name")
if ref_kind and obj["roleRef"]["kind"] != ref_kind:
# skip this binding as the roleRef.kind does not match
continue
if obj['roleRef']['name'] == ref_name:
if obj["roleRef"]["name"] == ref_name:
# select this binding as the roleRef.name matches
candidates.append((namespace, name))
except NotFoundError:
continue
except DynamicApiError as exc:
msg = "Failed to get {kind} resource due to: {msg}".format(kind=kind, msg=exc.body)
msg = "Failed to get {kind} resource due to: {msg}".format(
kind=kind, msg=exc.body
)
self.fail_json(msg=msg)
except Exception as e:
msg = "Failed to get {kind} due to: {msg}".format(kind=kind, msg=to_native(e))
msg = "Failed to get {kind} due to: {msg}".format(
kind=kind, msg=to_native(e)
)
self.fail_json(msg=msg)
if len(candidates) == 0 or self.check_mode:
@@ -54,24 +62,29 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
for namespace, name in candidates:
try:
result = resource.delete(name=name, namespace=namespace, body=delete_options)
result = resource.delete(
name=name, namespace=namespace, body=delete_options
)
except DynamicApiError as exc:
msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(kind=kind, namespace=namespace, name=name, msg=exc.body)
msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(
kind=kind, namespace=namespace, name=name, msg=exc.body
)
self.fail_json(msg=msg)
except Exception as e:
msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(kind=kind, namespace=namespace, name=name, msg=to_native(e))
msg = "Failed to delete {kind} {namespace}/{name} due to: {msg}".format(
kind=kind, namespace=namespace, name=name, msg=to_native(e)
)
self.fail_json(msg=msg)
return [y if x is None else x + "/" + y for x, y in candidates]
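The final list comprehension above is worth reading on its own; a small sketch of how candidates are rendered:

```python
# How prune_resource_binding() renders its (namespace, name) candidate pairs:
# namespaced bindings become "namespace/name", while cluster-scoped ones
# (where the namespace is None) stay as the bare name.
candidates = [("apps", "edit-binding"), (None, "cluster-admin-binding")]
rendered = [y if x is None else x + "/" + y for x, y in candidates]
# rendered == ["apps/edit-binding", "cluster-admin-binding"]
```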
def update_resource_binding(self, ref_kind, ref_names, namespaced=False):
kind = 'ClusterRoleBinding'
api_version = "rbac.authorization.k8s.io/v1",
kind = "ClusterRoleBinding"
api_version = "rbac.authorization.k8s.io/v1"
if namespaced:
kind = "RoleBinding"
resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
result = resource.get(name=None, namespace=None).to_dict()
result = result.get('items') if 'items' in result else [result]
result = result.get("items") if "items" in result else [result]
if len(result) == 0:
return [], False
@@ -79,29 +92,40 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
def _update_user_group(binding_namespace, subjects):
users, groups = [], []
for x in subjects:
if x['kind'] == 'User':
users.append(x['name'])
elif x['kind'] == 'Group':
groups.append(x['name'])
elif x['kind'] == 'ServiceAccount':
if x["kind"] == "User":
users.append(x["name"])
elif x["kind"] == "Group":
groups.append(x["name"])
elif x["kind"] == "ServiceAccount":
namespace = binding_namespace
if x.get('namespace') is not None:
namespace = x.get('namespace')
if x.get("namespace") is not None:
namespace = x.get("namespace")
if namespace is not None:
users.append("system:serviceaccount:%s:%s" % (namespace, x['name']))
users.append(
"system:serviceaccount:%s:%s" % (namespace, x["name"])
)
return users, groups
candidates = []
changed = False
for item in result:
subjects = item.get('subjects', [])
retainedSubjects = [x for x in subjects if x['kind'] == ref_kind and x['name'] in ref_names]
subjects = item.get("subjects", [])
retainedSubjects = [
x for x in subjects if x["kind"] == ref_kind and x["name"] in ref_names
]
if len(subjects) != len(retainedSubjects):
updated_binding = item
updated_binding['subjects'] = retainedSubjects
binding_namespace = item['metadata'].get('namespace', None)
updated_binding['userNames'], updated_binding['groupNames'] = _update_user_group(binding_namespace, retainedSubjects)
candidates.append(binding_namespace + "/" + item['metadata']['name'] if binding_namespace else item['metadata']['name'])
updated_binding["subjects"] = retainedSubjects
binding_namespace = item["metadata"].get("namespace", None)
(
updated_binding["userNames"],
updated_binding["groupNames"],
) = _update_user_group(binding_namespace, retainedSubjects)
candidates.append(
binding_namespace + "/" + item["metadata"]["name"]
if binding_namespace
else item["metadata"]["name"]
)
changed = True
if not self.check_mode:
try:
@@ -112,20 +136,25 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
return candidates, changed
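The nested `_update_user_group` helper above can be sketched standalone. OpenShift records ServiceAccount subjects on role bindings as fully qualified user names of the form `system:serviceaccount:<namespace>:<name>`:

```python
# Standalone sketch of the nested _update_user_group helper above: split a
# binding's retained subjects into the userNames and groupNames lists.
def split_subjects(binding_namespace, subjects):
    users, groups = [], []
    for x in subjects:
        if x["kind"] == "User":
            users.append(x["name"])
        elif x["kind"] == "Group":
            groups.append(x["name"])
        elif x["kind"] == "ServiceAccount":
            # prefer the subject's own namespace, fall back to the binding's
            namespace = binding_namespace
            if x.get("namespace") is not None:
                namespace = x["namespace"]
            if namespace is not None:
                users.append("system:serviceaccount:%s:%s" % (namespace, x["name"]))
    return users, groups


users, groups = split_subjects(
    "apps",
    [{"kind": "ServiceAccount", "name": "builder"}, {"kind": "Group", "name": "devs"}],
)
# users == ["system:serviceaccount:apps:builder"], groups == ["devs"]
```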
def update_security_context(self, ref_names, key):
params = {'kind': 'SecurityContextConstraints', 'api_version': 'security.openshift.io/v1'}
params = {
"kind": "SecurityContextConstraints",
"api_version": "security.openshift.io/v1",
}
sccs = self.kubernetes_facts(**params)
if not sccs['api_found']:
self.fail_json(msg=sccs['msg'])
sccs = sccs.get('resources')
if not sccs["api_found"]:
self.fail_json(msg=sccs["msg"])
sccs = sccs.get("resources")
candidates = []
changed = False
resource = self.find_resource(kind="SecurityContextConstraints", api_version="security.openshift.io/v1")
resource = self.find_resource(
kind="SecurityContextConstraints", api_version="security.openshift.io/v1"
)
for item in sccs:
subjects = item.get(key, [])
retainedSubjects = [x for x in subjects if x not in ref_names]
if len(subjects) != len(retainedSubjects):
candidates.append(item['metadata']['name'])
candidates.append(item["metadata"]["name"])
changed = True
if not self.check_mode:
upd_sec_ctx = item
@@ -138,94 +167,116 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
return candidates, changed
def auth_prune_roles(self):
params = {'kind': 'Role', 'api_version': 'rbac.authorization.k8s.io/v1', 'namespace': self.params.get('namespace')}
for attr in ('name', 'label_selectors'):
params = {
"kind": "Role",
"api_version": "rbac.authorization.k8s.io/v1",
"namespace": self.params.get("namespace"),
}
for attr in ("name", "label_selectors"):
if self.params.get(attr):
params[attr] = self.params.get(attr)
result = self.kubernetes_facts(**params)
if not result['api_found']:
self.fail_json(msg=result['msg'])
if not result["api_found"]:
self.fail_json(msg=result["msg"])
roles = result.get('resources')
roles = result.get("resources")
if len(roles) == 0:
self.exit_json(changed=False, msg="No candidate rolebinding to prune from namespace %s." % self.params.get('namespace'))
self.exit_json(
changed=False,
msg="No candidate rolebinding to prune from namespace %s."
% self.params.get("namespace"),
)
ref_roles = [(x['metadata']['namespace'], x['metadata']['name']) for x in roles]
candidates = self.prune_resource_binding(kind="RoleBinding",
ref_roles = [(x["metadata"]["namespace"], x["metadata"]["name"]) for x in roles]
candidates = self.prune_resource_binding(
kind="RoleBinding",
api_version="rbac.authorization.k8s.io/v1",
ref_kind="Role",
ref_namespace_names=ref_roles,
propagation_policy='Foreground')
propagation_policy="Foreground",
)
if len(candidates) == 0:
self.exit_json(changed=False, role_binding=candidates)
self.exit_json(changed=True, role_binding=candidates)
def auth_prune_clusterroles(self):
params = {'kind': 'ClusterRole', 'api_version': 'rbac.authorization.k8s.io/v1'}
for attr in ('name', 'label_selectors'):
params = {"kind": "ClusterRole", "api_version": "rbac.authorization.k8s.io/v1"}
for attr in ("name", "label_selectors"):
if self.params.get(attr):
params[attr] = self.params.get(attr)
result = self.kubernetes_facts(**params)
if not result['api_found']:
self.fail_json(msg=result['msg'])
if not result["api_found"]:
self.fail_json(msg=result["msg"])
clusterroles = result.get('resources')
clusterroles = result.get("resources")
if len(clusterroles) == 0:
self.exit_json(changed=False, msg="No clusterroles found matching input criteria.")
self.exit_json(
changed=False, msg="No clusterroles found matching input criteria."
)
ref_clusterroles = [(None, x['metadata']['name']) for x in clusterroles]
ref_clusterroles = [(None, x["metadata"]["name"]) for x in clusterroles]
# Prune ClusterRoleBinding
candidates_cluster_binding = self.prune_resource_binding(kind="ClusterRoleBinding",
candidates_cluster_binding = self.prune_resource_binding(
kind="ClusterRoleBinding",
api_version="rbac.authorization.k8s.io/v1",
ref_kind=None,
ref_namespace_names=ref_clusterroles)
ref_namespace_names=ref_clusterroles,
)
# Prune Role Binding
candidates_namespaced_binding = self.prune_resource_binding(kind="RoleBinding",
candidates_namespaced_binding = self.prune_resource_binding(
kind="RoleBinding",
api_version="rbac.authorization.k8s.io/v1",
ref_kind='ClusterRole',
ref_namespace_names=ref_clusterroles)
ref_kind="ClusterRole",
ref_namespace_names=ref_clusterroles,
)
self.exit_json(changed=True,
self.exit_json(
changed=True,
cluster_role_binding=candidates_cluster_binding,
role_binding=candidates_namespaced_binding)
role_binding=candidates_namespaced_binding,
)
def list_groups(self, params=None):
options = {'kind': 'Group', 'api_version': 'user.openshift.io/v1'}
options = {"kind": "Group", "api_version": "user.openshift.io/v1"}
if params:
for attr in ('name', 'label_selectors'):
for attr in ("name", "label_selectors"):
if params.get(attr):
options[attr] = params.get(attr)
return self.kubernetes_facts(**options)
def auth_prune_users(self):
params = {'kind': 'User', 'api_version': 'user.openshift.io/v1'}
for attr in ('name', 'label_selectors'):
params = {"kind": "User", "api_version": "user.openshift.io/v1"}
for attr in ("name", "label_selectors"):
if self.params.get(attr):
params[attr] = self.params.get(attr)
users = self.kubernetes_facts(**params)
if len(users) == 0:
self.exit_json(changed=False, msg="No resource type 'User' found matching input criteria.")
self.exit_json(
changed=False,
msg="No resource type 'User' found matching input criteria.",
)
names = [x['metadata']['name'] for x in users]
names = [x["metadata"]["name"] for x in users]
changed = False
# Remove the user role binding
rolebinding, changed_role = self.update_resource_binding(ref_kind="User",
ref_names=names,
namespaced=True)
rolebinding, changed_role = self.update_resource_binding(
ref_kind="User", ref_names=names, namespaced=True
)
changed = changed or changed_role
# Remove the user cluster role binding
clusterrolesbinding, changed_cr = self.update_resource_binding(ref_kind="User",
ref_names=names)
clusterrolesbinding, changed_cr = self.update_resource_binding(
ref_kind="User", ref_names=names
)
changed = changed or changed_cr
# Remove the user from security context constraints
sccs, changed_sccs = self.update_security_context(names, 'users')
sccs, changed_sccs = self.update_security_context(names, "users")
changed = changed or changed_sccs
# Remove the user from groups
@@ -233,14 +284,14 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
deleted_groups = []
resource = self.find_resource(kind="Group", api_version="user.openshift.io/v1")
for grp in groups:
subjects = grp.get('users', [])
subjects = grp.get("users", [])
retainedSubjects = [x for x in subjects if x not in names]
if len(subjects) != len(retainedSubjects):
deleted_groups.append(grp['metadata']['name'])
deleted_groups.append(grp["metadata"]["name"])
changed = True
if not self.check_mode:
upd_group = grp
upd_group.update({'users': retainedSubjects})
upd_group.update({"users": retainedSubjects})
try:
resource.apply(upd_group, namespace=None)
except DynamicApiError as exc:
@@ -248,62 +299,82 @@ class OpenShiftAdmPruneAuth(AnsibleOpenshiftModule):
self.fail_json(msg=msg)
# Remove the user's OAuthClientAuthorizations
oauth = self.kubernetes_facts(kind='OAuthClientAuthorization', api_version='oauth.openshift.io/v1')
oauth = self.kubernetes_facts(
kind="OAuthClientAuthorization", api_version="oauth.openshift.io/v1"
)
deleted_auths = []
resource = self.find_resource(kind="OAuthClientAuthorization", api_version="oauth.openshift.io/v1")
resource = self.find_resource(
kind="OAuthClientAuthorization", api_version="oauth.openshift.io/v1"
)
for authorization in oauth:
if authorization.get('userName', None) in names:
auth_name = authorization['metadata']['name']
if authorization.get("userName", None) in names:
auth_name = authorization["metadata"]["name"]
deleted_auths.append(auth_name)
changed = True
if not self.check_mode:
try:
resource.delete(name=auth_name, namespace=None, body=client.V1DeleteOptions())
resource.delete(
name=auth_name,
namespace=None,
body=client.V1DeleteOptions(),
)
except DynamicApiError as exc:
msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(name=auth_name, msg=exc.body)
msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(
name=auth_name, msg=exc.body
)
self.fail_json(msg=msg)
except Exception as e:
msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(name=auth_name, msg=to_native(e))
msg = "Failed to delete OAuthClientAuthorization {name} due to: {msg}".format(
name=auth_name, msg=to_native(e)
)
self.fail_json(msg=msg)
self.exit_json(changed=changed,
self.exit_json(
changed=changed,
cluster_role_binding=clusterrolesbinding,
role_binding=rolebinding,
security_context_constraints=sccs,
authorization=deleted_auths,
group=deleted_groups)
group=deleted_groups,
)
def auth_prune_groups(self):
groups = self.list_groups(params=self.params)
if len(groups) == 0:
self.exit_json(changed=False, result="No resource type 'Group' found matching input criteria.")
self.exit_json(
changed=False,
result="No resource type 'Group' found matching input criteria.",
)
names = [x['metadata']['name'] for x in groups]
names = [x["metadata"]["name"] for x in groups]
changed = False
# Remove the groups role binding
rolebinding, changed_role = self.update_resource_binding(ref_kind="Group",
ref_names=names,
namespaced=True)
rolebinding, changed_role = self.update_resource_binding(
ref_kind="Group", ref_names=names, namespaced=True
)
changed = changed or changed_role
# Remove the groups cluster role binding
clusterrolesbinding, changed_cr = self.update_resource_binding(ref_kind="Group",
ref_names=names)
clusterrolesbinding, changed_cr = self.update_resource_binding(
ref_kind="Group", ref_names=names
)
changed = changed or changed_cr
# Remove the groups security context constraints
sccs, changed_sccs = self.update_security_context(names, 'groups')
sccs, changed_sccs = self.update_security_context(names, "groups")
changed = changed or changed_sccs
self.exit_json(changed=changed,
self.exit_json(
changed=changed,
cluster_role_binding=clusterrolesbinding,
role_binding=rolebinding,
security_context_constraints=sccs)
security_context_constraints=sccs,
)
def execute_module(self):
auth_prune = {
'roles': self.auth_prune_roles,
'clusterroles': self.auth_prune_clusterroles,
'users': self.auth_prune_users,
'groups': self.auth_prune_groups,
"roles": self.auth_prune_roles,
"clusterroles": self.auth_prune_clusterroles,
"users": self.auth_prune_users,
"groups": self.auth_prune_groups,
}
auth_prune[self.params.get('resource')]()
auth_prune[self.params.get("resource")]()


@@ -1,14 +1,16 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from datetime import datetime, timezone
import traceback
from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from kubernetes import client
@@ -23,7 +25,9 @@ def get_deploymentconfig_for_replicationcontroller(replica_controller):
# This annotation is set on the replication controller's pod template by the deployer controller.
DeploymentConfigAnnotation = "openshift.io/deployment-config.name"
try:
deploymentconfig_name = replica_controller['metadata']['annotations'].get(DeploymentConfigAnnotation)
deploymentconfig_name = replica_controller["metadata"]["annotations"].get(
DeploymentConfigAnnotation
)
if deploymentconfig_name is None or deploymentconfig_name == "":
return None
return deploymentconfig_name
@@ -32,7 +36,6 @@ def get_deploymentconfig_for_replicationcontroller(replica_controller):
class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
def __init__(self, **kwargs):
super(OpenShiftAdmPruneDeployment, self).__init__(**kwargs)
@@ -41,27 +44,33 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
return get_deploymentconfig_for_replicationcontroller(obj) is not None
def _zeroReplicaSize(obj):
return obj['spec']['replicas'] == 0 and obj['status']['replicas'] == 0
return obj["spec"]["replicas"] == 0 and obj["status"]["replicas"] == 0
def _complete_failed(obj):
DeploymentStatusAnnotation = "openshift.io/deployment.phase"
try:
# validate that replication controller status is either 'Complete' or 'Failed'
deployment_phase = obj['metadata']['annotations'].get(DeploymentStatusAnnotation)
return deployment_phase in ('Failed', 'Complete')
deployment_phase = obj["metadata"]["annotations"].get(
DeploymentStatusAnnotation
)
return deployment_phase in ("Failed", "Complete")
except Exception:
return False
def _younger(obj):
creation_timestamp = datetime.strptime(obj['metadata']['creationTimestamp'], '%Y-%m-%dT%H:%M:%SZ')
creation_timestamp = datetime.strptime(
obj["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
)
now = datetime.now(timezone.utc).replace(tzinfo=None)
age = (now - creation_timestamp).seconds / 60
return age > self.params['keep_younger_than']
return age > self.params["keep_younger_than"]
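The `_younger` predicate above computes controller age from `creationTimestamp`; a standalone sketch, using `total_seconds()` since `timedelta.seconds` silently drops whole days (the module's `.seconds` usage looks like a latent bug for controllers older than 24 hours):

```python
from datetime import datetime, timezone

# Standalone sketch of the age check in _younger() above. timedelta.seconds
# only carries the sub-day remainder; total_seconds() is the safer call.
def age_in_minutes(creation_timestamp):
    created = datetime.strptime(creation_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    return (now - created).total_seconds() / 60
```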
def _orphan(obj):
try:
# verify whether the DeploymentConfig associated with the replication controller still exists
deploymentconfig_name = get_deploymentconfig_for_replicationcontroller(obj)
deploymentconfig_name = get_deploymentconfig_for_replicationcontroller(
obj
)
params = dict(
kind="DeploymentConfig",
api_version="apps.openshift.io/v1",
@@ -69,14 +78,14 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
namespace=obj["metadata"]["namespace"],
)
exists = self.kubernetes_facts(**params)
return not (exists.get['api_found'] and len(exists['resources']) > 0)
return not (exists["api_found"] and len(exists["resources"]) > 0)
except Exception:
return False
predicates = [_deployment, _zeroReplicaSize, _complete_failed]
if self.params['orphans']:
if self.params["orphans"]:
predicates.append(_orphan)
if self.params['keep_younger_than']:
if self.params["keep_younger_than"]:
predicates.append(_younger)
results = replicacontrollers.copy()
@@ -86,8 +95,8 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
def execute_module(self):
# list ReplicationController candidates for pruning
kind = 'ReplicationController'
api_version = 'v1'
kind = "ReplicationController"
api_version = "v1"
resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
# Get ReplicationController
@@ -103,7 +112,7 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
self.exit_json(changed=False, replication_controllers=[])
changed = True
delete_options = client.V1DeleteOptions(propagation_policy='Background')
delete_options = client.V1DeleteOptions(propagation_policy="Background")
replication_controllers = []
for replica in candidates:
try:
@@ -111,12 +120,18 @@ class OpenShiftAdmPruneDeployment(AnsibleOpenshiftModule):
if not self.check_mode:
name = replica["metadata"]["name"]
namespace = replica["metadata"]["namespace"]
result = resource.delete(name=name, namespace=namespace, body=delete_options).to_dict()
result = resource.delete(
name=name, namespace=namespace, body=delete_options
).to_dict()
replication_controllers.append(result)
except DynamicApiError as exc:
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(namespace=namespace, name=name, msg=exc.body)
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(
namespace=namespace, name=name, msg=exc.body
)
self.fail_json(msg=msg)
except Exception as e:
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(namespace=namespace, name=name, msg=to_native(e))
msg = "Failed to delete ReplicationController {namespace}/{name} due to: {msg}".format(
namespace=namespace, name=name, msg=to_native(e)
)
self.fail_json(msg=msg)
self.exit_json(changed=changed, replication_controllers=replication_controllers)


@@ -1,17 +1,19 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from datetime import datetime, timezone, timedelta
import traceback
import copy
from ansible.module_utils._text import to_native
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
from ansible_collections.community.okd.plugins.module_utils.openshift_images_common import (
OpenShiftAnalyzeImageStream,
@@ -30,7 +32,7 @@ try:
from kubernetes.dynamic.exceptions import (
DynamicApiError,
NotFoundError,
ApiException
ApiException,
)
except ImportError:
pass
@@ -67,18 +69,20 @@ def determine_host_registry(module, images, image_streams):
managed_images = list(filter(_f_managed_images, images))
# Be sure to pick up the newest managed image, which should have up-to-date information
sorted_images = sorted(managed_images,
key=lambda x: x["metadata"]["creationTimestamp"],
reverse=True)
sorted_images = sorted(
managed_images, key=lambda x: x["metadata"]["creationTimestamp"], reverse=True
)
docker_image_ref = ""
if len(sorted_images) > 0:
docker_image_ref = sorted_images[0].get("dockerImageReference", "")
else:
# 2nd try to get the pull spec from any image stream
# Sorting by creation timestamp may not get us up to date info. Modification time would be much better.
sorted_image_streams = sorted(image_streams,
sorted_image_streams = sorted(
image_streams,
key=lambda x: x["metadata"]["creationTimestamp"],
reverse=True)
reverse=True,
)
for i_stream in sorted_image_streams:
docker_image_ref = i_stream["status"].get("dockerImageRepository", "")
if len(docker_image_ref) > 0:
@@ -88,7 +92,7 @@ def determine_host_registry(module, images, image_streams):
module.exit_json(changed=False, result="no managed image found")
result, error = parse_docker_image_ref(docker_image_ref, module)
-    return result['hostname']
+    return result["hostname"]
class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
@@ -97,7 +101,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
self.max_creation_timestamp = self.get_max_creation_timestamp()
self._rest_client = None
-        self.registryhost = self.params.get('registry_url')
+        self.registryhost = self.params.get("registry_url")
self.changed = False
def list_objects(self):
@@ -107,9 +111,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
if self.params.get("namespace") and kind.lower() == "imagestream":
namespace = self.params.get("namespace")
try:
-                result[kind] = self.kubernetes_facts(kind=kind,
-                                                     api_version=version,
-                                                     namespace=namespace).get('resources')
+                result[kind] = self.kubernetes_facts(
+                    kind=kind, api_version=version, namespace=namespace
+                ).get("resources")
except DynamicApiError as e:
self.fail_json(
msg="An error occurred while trying to list objects.",
@@ -119,7 +123,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
except Exception as e:
self.fail_json(
msg="An error occurred while trying to list objects.",
-                error=to_native(e)
+                error=to_native(e),
)
return result
@@ -134,8 +138,8 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
def rest_client(self):
if not self._rest_client:
configuration = copy.deepcopy(self.client.configuration)
-            validate_certs = self.params.get('registry_validate_certs')
-            ssl_ca_cert = self.params.get('registry_ca_cert')
+            validate_certs = self.params.get("registry_validate_certs")
+            ssl_ca_cert = self.params.get("registry_ca_cert")
if validate_certs is not None:
configuration.verify_ssl = validate_certs
if ssl_ca_cert is not None:
@@ -146,7 +150,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
def delete_from_registry(self, url):
try:
-            response = self.rest_client.DELETE(url=url, headers=self.client.configuration.api_key)
+            response = self.rest_client.DELETE(
+                url=url, headers=self.client.configuration.api_key
+            )
if response.status == 404:
# Unable to delete layer
return None
@@ -156,8 +162,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
if response.status != 202 and response.status != 204:
self.fail_json(
msg="Delete URL {0}: Unexpected status code in response: {1}".format(
-                        response.status, url),
-                    reason=response.reason
+                        response.status, url
+                    ),
+                    reason=response.reason,
)
return None
except ApiException as e:
@@ -204,9 +211,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
result = self.request(
"PUT",
"/apis/{api_version}/namespaces/{namespace}/imagestreams/{name}/status".format(
-                    api_version=api_version,
-                    namespace=namespace,
-                    name=name
+                    api_version=api_version, namespace=namespace, name=name
),
body=definition,
content_type="application/json",
@@ -237,11 +242,10 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
pass
except DynamicApiError as exc:
self.fail_json(
-                    msg="Failed to delete object %s/%s due to: %s" % (
-                        kind, name, exc.body
-                    ),
+                    msg="Failed to delete object %s/%s due to: %s"
+                    % (kind, name, exc.body),
reason=exc.reason,
-                    status=exc.status
+                    status=exc.status,
)
else:
existing = resource.get(name=name)
@@ -285,9 +289,11 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
continue
if idx == 0:
-                istag = "%s/%s:%s" % (stream_namespace,
-                                      stream_name,
-                                      tag_event_list["tag"])
+                istag = "%s/%s:%s" % (
+                    stream_namespace,
+                    stream_name,
+                    tag_event_list["tag"],
+                )
if istag in self.used_tags:
# keeping because tag is used
filtered_items.append(item)
@@ -302,20 +308,20 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
image = self.image_mapping[item["image"]]
# check prune over limit size
-            if prune_over_size_limit and not self.exceeds_limits(stream_namespace, image):
+            if prune_over_size_limit and not self.exceeds_limits(
+                stream_namespace, image
+            ):
filtered_items.append(item)
continue
-            image_ref = "%s/%s@%s" % (stream_namespace,
-                                      stream_name,
-                                      item["image"])
+            image_ref = "%s/%s@%s" % (stream_namespace, stream_name, item["image"])
if image_ref in self.used_images:
# keeping because tag is used
filtered_items.append(item)
continue
images_to_delete.append(item["image"])
-            if self.params.get('prune_registry'):
+            if self.params.get("prune_registry"):
manifests_to_delete.append(image["metadata"]["name"])
path = stream_namespace + "/" + stream_name
image_blobs, err = get_image_blobs(image)
@@ -325,21 +331,25 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
return filtered_items, manifests_to_delete, images_to_delete
def prune_image_streams(self, stream):
-        name = stream['metadata']['namespace'] + "/" + stream['metadata']['name']
+        name = stream["metadata"]["namespace"] + "/" + stream["metadata"]["name"]
if is_too_young_object(stream, self.max_creation_timestamp):
# keeping all images because of image stream too young
return None, []
-        facts = self.kubernetes_facts(kind="ImageStream",
-                                      api_version=ApiConfiguration.get("ImageStream"),
-                                      name=stream["metadata"]["name"],
-                                      namespace=stream["metadata"]["namespace"])
-        image_stream = facts.get('resources')
+        facts = self.kubernetes_facts(
+            kind="ImageStream",
+            api_version=ApiConfiguration.get("ImageStream"),
+            name=stream["metadata"]["name"],
+            namespace=stream["metadata"]["namespace"],
+        )
+        image_stream = facts.get("resources")
if len(image_stream) != 1:
# skipping because it does not exist anymore
return None, []
stream = image_stream[0]
namespace = self.params.get("namespace")
-        stream_to_update = not namespace or (stream["metadata"]["namespace"] == namespace)
+        stream_to_update = not namespace or (
+            stream["metadata"]["namespace"] == namespace
+        )
manifests_to_delete, images_to_delete = [], []
deleted_items = False
@@ -351,9 +361,9 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
(
filtered_tag_event,
tag_manifests_to_delete,
-                tag_images_to_delete
+                tag_images_to_delete,
) = self.prune_image_stream_tag(stream, tag_event_list)
-            stream['status']['tags'][idx]['items'] = filtered_tag_event
+            stream["status"]["tags"][idx]["items"] = filtered_tag_event
manifests_to_delete += tag_manifests_to_delete
images_to_delete += tag_images_to_delete
deleted_items = deleted_items or (len(tag_images_to_delete) > 0)
@@ -361,11 +371,11 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
# Deleting tags without items
tags = []
for tag in stream["status"].get("tags", []):
-            if tag['items'] is None or len(tag['items']) == 0:
+            if tag["items"] is None or len(tag["items"]) == 0:
continue
tags.append(tag)
-        stream['status']['tags'] = tags
+        stream["status"]["tags"] = tags
result = None
# Update ImageStream
if stream_to_update:
@@ -402,19 +412,23 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
def execute_module(self):
resources = self.list_objects()
-        if not self.check_mode and self.params.get('prune_registry'):
+        if not self.check_mode and self.params.get("prune_registry"):
if not self.registryhost:
-                self.registryhost = determine_host_registry(self.module, resources['Image'], resources['ImageStream'])
+                self.registryhost = determine_host_registry(
+                    self.module, resources["Image"], resources["ImageStream"]
+                )
# validate that host has a scheme
if "://" not in self.registryhost:
self.registryhost = "https://" + self.registryhost
# Analyze Image Streams
analyze_ref = OpenShiftAnalyzeImageStream(
-            ignore_invalid_refs=self.params.get('ignore_invalid_refs'),
+            ignore_invalid_refs=self.params.get("ignore_invalid_refs"),
max_creation_timestamp=self.max_creation_timestamp,
-            module=self.module
+            module=self.module,
)
-        self.used_tags, self.used_images, error = analyze_ref.analyze_image_stream(resources)
+        self.used_tags, self.used_images, error = analyze_ref.analyze_image_stream(
+            resources
+        )
if error:
self.fail_json(msg=error)
@@ -435,16 +449,20 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
updated_image_streams = []
deleted_tags_images = []
updated_is_mapping = {}
-        for stream in resources['ImageStream']:
+        for stream in resources["ImageStream"]:
result, images_to_delete = self.prune_image_streams(stream)
if result:
-                updated_is_mapping[result["metadata"]["namespace"] + "/" + result["metadata"]["name"]] = result
+                updated_is_mapping[
+                    result["metadata"]["namespace"] + "/" + result["metadata"]["name"]
+                ] = result
updated_image_streams.append(result)
deleted_tags_images += images_to_delete
# Create a list with images referenced on image stream
self.referenced_images = []
-        for item in self.kubernetes_facts(kind="ImageStream", api_version="image.openshift.io/v1")["resources"]:
+        for item in self.kubernetes_facts(
+            kind="ImageStream", api_version="image.openshift.io/v1"
+        )["resources"]:
name = "%s/%s" % (item["metadata"]["namespace"], item["metadata"]["name"])
if name in updated_is_mapping:
item = updated_is_mapping[name]
@@ -453,7 +471,7 @@ class OpenShiftAdmPruneImages(AnsibleOpenshiftModule):
# Stage 2: delete images
images = []
-        images_to_delete = [x["metadata"]["name"] for x in resources['Image']]
+        images_to_delete = [x["metadata"]["name"] for x in resources["Image"]]
if self.params.get("namespace") is not None:
# When namespace is defined, prune only images that were referenced by ImageStream
# from the corresponding namespace
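Reviewer note: the Black-wrapped `sorted()` calls in this file keep the original "newest managed image first" ordering. A minimal, hypothetical sketch of that behavior, with illustrative registry names and timestamps that are not taken from the PR:

```python
# ISO-8601 creationTimestamp strings compare correctly as plain strings,
# so reverse lexicographic order puts the newest image first.
images = [
    {"metadata": {"creationTimestamp": "2023-01-01T00:00:00Z"},
     "dockerImageReference": "registry.example/a@sha256:aa"},
    {"metadata": {"creationTimestamp": "2023-06-01T00:00:00Z"},
     "dockerImageReference": "registry.example/b@sha256:bb"},
]
sorted_images = sorted(
    images, key=lambda x: x["metadata"]["creationTimestamp"], reverse=True
)
print(sorted_images[0]["dockerImageReference"])
```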

View File

@@ -1,15 +1,17 @@
#!/usr/bin/env python
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function
__metaclass__ = type
from datetime import datetime, timezone, timedelta
import traceback
import time
from ansible.module_utils._text import to_native
-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)
try:
from kubernetes.dynamic.exceptions import DynamicApiError
@@ -36,8 +38,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.request(
method="POST",
path="/apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone".format(
-                    namespace=namespace,
-                    name=name
+                    namespace=namespace, name=name
),
body=request,
content_type="application/json",
@@ -47,7 +48,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
msg = "Failed to clone Build %s/%s due to: %s" % (namespace, name, exc.body)
self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
except Exception as e:
-            msg = "Failed to clone Build %s/%s due to: %s" % (namespace, name, to_native(e))
+            msg = "Failed to clone Build %s/%s due to: %s" % (
+                namespace,
+                name,
+                to_native(e),
+            )
self.fail_json(msg=msg, error=to_native(e), exception=e)
def instantiate_build_config(self, name, namespace, request):
@@ -55,22 +60,28 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.request(
method="POST",
path="/apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate".format(
-                    namespace=namespace,
-                    name=name
+                    namespace=namespace, name=name
),
body=request,
content_type="application/json",
)
return result.to_dict()
except DynamicApiError as exc:
-            msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (namespace, name, exc.body)
+            msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (
+                namespace,
+                name,
+                exc.body,
+            )
self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
except Exception as e:
-            msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (namespace, name, to_native(e))
+            msg = "Failed to instantiate BuildConfig %s/%s due to: %s" % (
+                namespace,
+                name,
+                to_native(e),
+            )
self.fail_json(msg=msg, error=to_native(e), exception=e)
def start_build(self):
result = None
name = self.params.get("build_config_name")
if not name:
@@ -79,32 +90,20 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
build_request = {
"kind": "BuildRequest",
"apiVersion": "build.openshift.io/v1",
-            "metadata": {
-                "name": name
-            },
-            "triggeredBy": [
-                {"message": "Manually triggered"}
-            ],
+            "metadata": {"name": name},
+            "triggeredBy": [{"message": "Manually triggered"}],
}
# Overrides incremental
incremental = self.params.get("incremental")
if incremental is not None:
build_request.update(
-                {
-                    "sourceStrategyOptions": {
-                        "incremental": incremental
-                    }
-                }
+                {"sourceStrategyOptions": {"incremental": incremental}}
)
# Environment variable
if self.params.get("env_vars"):
-            build_request.update(
-                {
-                    "env": self.params.get("env_vars")
-                }
-            )
+            build_request.update({"env": self.params.get("env_vars")})
# Docker strategy option
if self.params.get("build_args"):
@@ -121,22 +120,14 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if no_cache is not None:
build_request.update(
{
-                    "dockerStrategyOptions": {
-                        "noCache": no_cache
-                    },
+                    "dockerStrategyOptions": {"noCache": no_cache},
}
)
# commit
if self.params.get("commit"):
build_request.update(
-                {
-                    "revision": {
-                        "git": {
-                            "commit": self.params.get("commit")
-                        }
-                    }
-                }
+                {"revision": {"git": {"commit": self.params.get("commit")}}}
)
if self.params.get("build_config_name"):
@@ -144,7 +135,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.instantiate_build_config(
name=self.params.get("build_config_name"),
namespace=self.params.get("namespace"),
-                request=build_request
+                request=build_request,
)
else:
@@ -152,7 +143,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result = self.clone_build(
name=self.params.get("build_name"),
namespace=self.params.get("namespace"),
-                request=build_request
+                request=build_request,
)
if result and self.params.get("wait"):
@@ -179,10 +170,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
break
elif last_status_phase in ("Cancelled", "Error", "Failed"):
self.fail_json(
-                        msg="Unexpected status for Build %s/%s: %s" % (
-                            result["metadata"]["name"],
-                            result["metadata"]["namespace"],
-                            last_status_phase
-                        )
+                        msg="Unexpected status for Build %s/%s: %s"
+                        % (
+                            result["metadata"]["name"],
+                            result["metadata"]["namespace"],
+                            last_status_phase,
+                        )
)
)
time.sleep(wait_sleep)
@@ -190,8 +182,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if last_status_phase != "Complete":
name = result["metadata"]["name"]
namespace = result["metadata"]["namespace"]
-                msg = "Build %s/%s has not complete after %d second(s)," \
-                      "current status is %s" % (namespace, name, wait_timeout, last_status_phase)
+                msg = (
+                    "Build %s/%s has not complete after %d second(s),"
+                    "current status is %s"
+                    % (namespace, name, wait_timeout, last_status_phase)
+                )
self.fail_json(msg=msg)
@@ -199,9 +194,8 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
self.exit_json(changed=True, builds=result)
def cancel_build(self, restart):
-        kind = 'Build'
-        api_version = 'build.openshift.io/v1'
+        kind = "Build"
+        api_version = "build.openshift.io/v1"
namespace = self.params.get("namespace")
phases = ["new", "pending", "running"]
@@ -215,16 +209,18 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
else:
build_config = self.params.get("build_config_name")
# list all builds from namespace
-            params = dict(
-                kind=kind,
-                api_version=api_version,
-                namespace=namespace
-            )
+            params = dict(kind=kind, api_version=api_version, namespace=namespace)
resources = self.kubernetes_facts(**params).get("resources", [])
def _filter_builds(build):
-                config = build["metadata"].get("labels", {}).get("openshift.io/build-config.name")
-                return build_config is None or (build_config is not None and config in build_config)
+                config = (
+                    build["metadata"]
+                    .get("labels", {})
+                    .get("openshift.io/build-config.name")
+                )
+                return build_config is None or (
+                    build_config is not None and config in build_config
+                )
for item in list(filter(_filter_builds, resources)):
name = item["metadata"]["name"]
@@ -232,16 +228,15 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
names.append(name)
if len(names) == 0:
-                self.exit_json(changed=False, msg="No Build found from namespace %s" % namespace)
+                self.exit_json(
+                    changed=False, msg="No Build found from namespace %s" % namespace
+                )
warning = []
builds_to_cancel = []
for name in names:
params = dict(
-                    kind=kind,
-                    api_version=api_version,
-                    name=name,
-                    namespace=namespace
+                    kind=kind, api_version=api_version, name=name, namespace=namespace
)
resource = self.kubernetes_facts(**params).get("resources", [])
@@ -256,7 +251,10 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if phase in phases:
builds_to_cancel.append(resource)
else:
-                    warning.append("build %s/%s is not in expected phase, found %s" % (namespace, name, phase))
+                    warning.append(
+                        "build %s/%s is not in expected phase, found %s"
+                        % (namespace, name, phase)
+                    )
changed = False
result = []
@@ -278,9 +276,10 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
result.append(cancelled_build)
except DynamicApiError as exc:
self.fail_json(
-                        msg="Failed to cancel Build %s/%s due to: %s" % (namespace, name, exc),
+                        msg="Failed to cancel Build %s/%s due to: %s"
+                        % (namespace, name, exc),
reason=exc.reason,
-                        status=exc.status
+                        status=exc.status,
)
except Exception as e:
self.fail_json(
@@ -294,10 +293,7 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
name = build["metadata"]["name"]
while (datetime.now() - start).seconds < wait_timeout:
params = dict(
-                    kind=kind,
-                    api_version=api_version,
-                    name=name,
-                    namespace=namespace
+                    kind=kind, api_version=api_version, name=name, namespace=namespace
)
resource = self.kubernetes_facts(**params).get("resources", [])
if len(resource) == 0:
@@ -307,7 +303,11 @@ class OpenShiftBuilds(AnsibleOpenshiftModule):
if last_phase == "Cancelled":
return resource, None
time.sleep(wait_sleep)
-            return None, "Build %s/%s is not cancelled as expected, current state is %s" % (namespace, name, last_phase)
+            return (
+                None,
+                "Build %s/%s is not cancelled as expected, current state is %s"
+                % (namespace, name, last_phase),
+            )
if result and self.params.get("wait"):
wait_timeout = self.params.get("wait_timeout")
@@ -341,8 +341,8 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
def execute_module(self):
        # list Build candidates for pruning
-        kind = 'Build'
-        api_version = 'build.openshift.io/v1'
+        kind = "Build"
+        api_version = "build.openshift.io/v1"
resource = self.find_resource(kind=kind, api_version=api_version, fail=True)
self.max_creation_timestamp = None
@@ -352,7 +352,12 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
self.max_creation_timestamp = now - timedelta(minutes=keep_younger_than)
def _prunable_build(build):
-            return build["status"]["phase"] in ("Complete", "Failed", "Error", "Cancelled")
+            return build["status"]["phase"] in (
+                "Complete",
+                "Failed",
+                "Error",
+                "Cancelled",
+            )
def _orphan_build(build):
if not _prunable_build(build):
@@ -367,7 +372,9 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
def _younger_build(build):
if not self.max_creation_timestamp:
return False
-            creation_timestamp = datetime.strptime(build['metadata']['creationTimestamp'], '%Y-%m-%dT%H:%M:%SZ')
+            creation_timestamp = datetime.strptime(
+                build["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"
+            )
return creation_timestamp < self.max_creation_timestamp
predicates = [
@@ -401,9 +408,17 @@ class OpenShiftPruneBuilds(OpenShiftBuilds):
namespace = build["metadata"]["namespace"]
resource.delete(name=name, namespace=namespace, body={})
except DynamicApiError as exc:
-                    msg = "Failed to delete Build %s/%s due to: %s" % (namespace, name, exc.body)
+                    msg = "Failed to delete Build %s/%s due to: %s" % (
+                        namespace,
+                        name,
+                        exc.body,
+                    )
self.fail_json(msg=msg, status=exc.status, reason=exc.reason)
except Exception as e:
-                    msg = "Failed to delete Build %s/%s due to: %s" % (namespace, name, to_native(e))
+                    msg = "Failed to delete Build %s/%s due to: %s" % (
+                        namespace,
+                        name,
+                        to_native(e),
+                    )
self.fail_json(msg=msg, error=to_native(e), exception=e)
self.exit_json(changed=changed, builds=candidates)
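Reviewer note: the reformatted `_prunable_build` predicate above is behavior-preserving: a build becomes a pruning candidate once its phase is terminal. A small self-contained sketch of that check, with illustrative sample phases:

```python
def prunable(build):
    # Mirrors the terminal-phase membership test in _prunable_build above.
    return build["status"]["phase"] in ("Complete", "Failed", "Error", "Cancelled")

builds = [{"status": {"phase": "Running"}}, {"status": {"phase": "Failed"}}]
print([b["status"]["phase"] for b in builds if prunable(b)])
```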

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
@@ -9,8 +10,12 @@ from abc import abstractmethod
from ansible.module_utils._text import to_native
try:
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import get_api_client
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.core import AnsibleK8SModule
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import (
+        get_api_client,
+    )
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.core import (
+        AnsibleK8SModule,
+    )
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.service import (
K8sService,
diff_objects,
@@ -24,7 +29,10 @@ try:
merge_params,
flatten_list_kind,
)
-    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import CoreException
+    from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions import (
+        CoreException,
+    )
HAS_KUBERNETES_COLLECTION = True
k8s_collection_import_exception = None
K8S_COLLECTION_ERROR = None
@@ -35,7 +43,6 @@ except ImportError as e:
class AnsibleOpenshiftModule(AnsibleK8SModule):
def __init__(self, **kwargs):
super(AnsibleOpenshiftModule, self).__init__(**kwargs)
@@ -86,7 +93,6 @@ class AnsibleOpenshiftModule(AnsibleK8SModule):
return diff_objects(existing, new)
def run_module(self):
try:
self.execute_module()
except CoreException as e:

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function
__metaclass__ = type
import re
@@ -23,28 +24,34 @@ def convert_storage_to_bytes(value):
def is_valid_digest(digest):
digest_algorithm_size = dict(
-        sha256=64, sha384=96, sha512=128,
+        sha256=64,
+        sha384=96,
+        sha512=128,
)
-    m = re.match(r'[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+', digest)
+    m = re.match(r"[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+", digest)
if not m:
return "Docker digest does not match expected format %s" % digest
-    idx = digest.find(':')
+    idx = digest.find(":")
# case: "sha256:" with no hex.
if idx < 0 or idx == (len(digest) - 1):
return "Invalid docker digest %s, no hex value define" % digest
algorithm = digest[:idx]
if algorithm not in digest_algorithm_size:
-        return "Unsupported digest algorithm value %s for digest %s" % (algorithm, digest)
+        return "Unsupported digest algorithm value %s for digest %s" % (
+            algorithm,
+            digest,
+        )
-    hex_value = digest[idx + 1:]
+    hex_value = digest[idx + 1:]  # fmt: skip
if len(hex_value) != digest_algorithm_size.get(algorithm):
return "Invalid length for digest hex expected %d found %d (digest is %s)" % (
-            digest_algorithm_size.get(algorithm), len(hex_value), digest
+            digest_algorithm_size.get(algorithm),
+            len(hex_value),
+            digest,
)
@@ -65,20 +72,20 @@ def parse_docker_image_ref(image_ref, module=None):
def _contains_any(src, values):
return any(x in src for x in values)
-    result = {
-        "tag": None, "digest": None
-    }
+    result = {"tag": None, "digest": None}
default_domain = "docker.io"
-    if idx < 0 or (not _contains_any(image_ref[:idx], ":.") and image_ref[:idx] != "localhost"):
+    if idx < 0 or (
+        not _contains_any(image_ref[:idx], ":.") and image_ref[:idx] != "localhost"
+    ):
result["hostname"], remainder = default_domain, image_ref
else:
-        result["hostname"], remainder = image_ref[:idx], image_ref[idx + 1:]
+        result["hostname"], remainder = image_ref[:idx], image_ref[idx + 1:]  # fmt: skip
# Parse remainder information
idx = remainder.find("@")
if idx > 0 and len(remainder) > (idx + 1):
# docker image reference with digest
-        component, result["digest"] = remainder[:idx], remainder[idx + 1:]
+        component, result["digest"] = remainder[:idx], remainder[idx + 1:]  # fmt: skip
err = is_valid_digest(result["digest"])
if err:
if module:
@@ -88,7 +95,7 @@ def parse_docker_image_ref(image_ref, module=None):
idx = remainder.find(":")
if idx > 0 and len(remainder) > (idx + 1):
# docker image reference with tag
-            component, result["tag"] = remainder[:idx], remainder[idx + 1:]
+            component, result["tag"] = remainder[:idx], remainder[idx + 1:]  # fmt: skip
else:
# name only
component = remainder
@@ -96,8 +103,6 @@ def parse_docker_image_ref(image_ref, module=None):
namespace = None
if len(v) > 1:
namespace = v[0]
-    result.update({
-        "namespace": namespace, "name": v[-1]
-    })
+    result.update({"namespace": namespace, "name": v[-1]})
return result, None
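Reviewer note on the `# fmt: skip` markers above: Black would otherwise rewrite a slice such as `digest[idx + 1:]` with extra spacing (`digest[idx + 1 :]`), so the markers keep the existing form. A minimal sketch of the algorithm/hex split these helpers perform, using a hypothetical well-formed digest:

```python
digest = "sha256:" + "a" * 64  # illustrative digest, not from the PR
idx = digest.find(":")
# Split into algorithm and hex value, as is_valid_digest does above.
algorithm, hex_value = digest[:idx], digest[idx + 1:]  # fmt: skip
print(algorithm, len(hex_value))
```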

View File

@@ -3,11 +3,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-from __future__ import (absolute_import, division, print_function)
+from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
from datetime import datetime
from ansible.module_utils.parsing.convert_bool import boolean
@@ -19,18 +19,21 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_ldap impor
ldap_split_host_port,
OpenshiftLDAPRFC2307,
OpenshiftLDAPActiveDirectory,
-    OpenshiftLDAPAugmentedActiveDirectory
+    OpenshiftLDAPAugmentedActiveDirectory,
)
try:
import ldap
HAS_PYTHON_LDAP = True
PYTHON_LDAP_ERROR = None
except ImportError as e:
HAS_PYTHON_LDAP = False
PYTHON_LDAP_ERROR = e
-from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
+from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
+    AnsibleOpenshiftModule,
+)
try:
from kubernetes.dynamic.exceptions import DynamicApiError
@@ -44,7 +47,9 @@ LDAP_OPENSHIFT_UID_ANNOTATION = "openshift.io/ldap.uid"
LDAP_OPENSHIFT_SYNCTIME_ANNOTATION = "openshift.io/ldap.sync-time"
-def connect_to_ldap(module, server_uri, bind_dn=None, bind_pw=None, insecure=True, ca_file=None):
+def connect_to_ldap(
+    module, server_uri, bind_dn=None, bind_pw=None, insecure=True, ca_file=None
+):
if insecure:
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
elif ca_file:
@@ -56,27 +61,36 @@ def connect_to_ldap(module, server_uri, bind_dn=None, bind_pw=None, insecure=Tru
connection.simple_bind_s(bind_dn, bind_pw)
return connection
except ldap.LDAPError as e:
-        module.fail_json(msg="Cannot bind to the LDAP server '{0}' due to: {1}".format(server_uri, e))
+        module.fail_json(
+            msg="Cannot bind to the LDAP server '{0}' due to: {1}".format(server_uri, e)
+        )
def validate_group_annotation(definition, host_ip):
-    name = definition['metadata']['name']
+    name = definition["metadata"]["name"]
# Validate LDAP URL Annotation
-    annotate_url = definition['metadata'].get('annotations', {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+    annotate_url = (
+        definition["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+    )
if host_ip:
if not annotate_url:
-            return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(name, LDAP_OPENSHIFT_URL_ANNOTATION)
+            return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(
+                name, LDAP_OPENSHIFT_URL_ANNOTATION
+            )
elif annotate_url != host_ip:
return "group '{0}' was not synchronized from: '{1}'".format(name, host_ip)
# Validate LDAP UID Annotation
-    annotate_uid = definition['metadata']['annotations'].get(LDAP_OPENSHIFT_UID_ANNOTATION)
+    annotate_uid = definition["metadata"]["annotations"].get(
+        LDAP_OPENSHIFT_UID_ANNOTATION
+    )
if not annotate_uid:
-        return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(name, LDAP_OPENSHIFT_UID_ANNOTATION)
+        return "group '{0}' marked as having been synced did not have an '{1}' annotation".format(
+            name, LDAP_OPENSHIFT_UID_ANNOTATION
+        )
return None
class OpenshiftLDAPGroups(object):
kind = "Group"
version = "user.openshift.io/v1"
@@ -88,11 +102,7 @@ class OpenshiftLDAPGroups(object):
@property
def k8s_group_api(self):
if not self.__group_api:
-            params = dict(
-                kind=self.kind,
-                api_version=self.version,
-                fail=True
-            )
+            params = dict(kind=self.kind, api_version=self.version, fail=True)
self.__group_api = self.module.find_resource(**params)
return self.__group_api
@@ -139,16 +149,26 @@ class OpenshiftLDAPGroups(object):
if missing:
self.module.fail_json(
-                msg="The following groups were not found: %s" % ''.join(missing)
+                msg="The following groups were not found: %s" % "".join(missing)
)
else:
label_selector = "%s=%s" % (LDAP_OPENSHIFT_HOST_LABEL, host)
-            resources = self.get_group_info(label_selectors=[label_selector], return_list=True)
+            resources = self.get_group_info(
+                label_selectors=[label_selector], return_list=True
+            )
if not resources:
-                return None, "Unable to find Group matching label selector '%s'" % label_selector
+                return (
+                    None,
+                    "Unable to find Group matching label selector '%s'"
+                    % label_selector,
+                )
groups = resources
if deny_groups:
-                groups = [item for item in groups if item["metadata"]["name"] not in deny_groups]
+                groups = [
+                    item
+                    for item in groups
+                    if item["metadata"]["name"] not in deny_groups
+                ]
uids = []
for grp in groups:
@@ -156,7 +176,9 @@ class OpenshiftLDAPGroups(object):
if err and allow_groups:
# We raise an error for group part of the allow_group not matching LDAP sync criteria
return None, err
-            group_uid = grp['metadata']['annotations'].get(LDAP_OPENSHIFT_UID_ANNOTATION)
+            group_uid = grp["metadata"]["annotations"].get(
+                LDAP_OPENSHIFT_UID_ANNOTATION
+            )
self.cache[group_uid] = grp
uids.append(group_uid)
return uids, None
@@ -174,38 +196,65 @@ class OpenshiftLDAPGroups(object):
"kind": "Group",
"metadata": {
"name": group_name,
-                    "labels": {
-                        LDAP_OPENSHIFT_HOST_LABEL: self.module.host
-                    },
+                    "labels": {LDAP_OPENSHIFT_HOST_LABEL: self.module.host},
"annotations": {
LDAP_OPENSHIFT_URL_ANNOTATION: self.module.netlocation,
LDAP_OPENSHIFT_UID_ANNOTATION: group_uid,
-                    }
-                }
+                    },
+                },
}
# Make sure we aren't taking over an OpenShift group that is already related to a different LDAP group
-        ldaphost_label = group["metadata"].get("labels", {}).get(LDAP_OPENSHIFT_HOST_LABEL)
+        ldaphost_label = (
+            group["metadata"].get("labels", {}).get(LDAP_OPENSHIFT_HOST_LABEL)
+        )
if not ldaphost_label or ldaphost_label != self.module.host:
-            return None, "Group %s: %s label did not match sync host: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_HOST_LABEL, self.module.host, ldaphost_label
+            return (
+                None,
+                "Group %s: %s label did not match sync host: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_HOST_LABEL,
+                    self.module.host,
+                    ldaphost_label,
+                ),
)
-        ldapurl_annotation = group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+        ldapurl_annotation = (
+            group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_URL_ANNOTATION)
+        )
if not ldapurl_annotation or ldapurl_annotation != self.module.netlocation:
-            return None, "Group %s: %s annotation did not match sync host: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_URL_ANNOTATION, self.module.netlocation, ldapurl_annotation
+            return (
+                None,
+                "Group %s: %s annotation did not match sync host: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_URL_ANNOTATION,
+                    self.module.netlocation,
+                    ldapurl_annotation,
+                ),
)
-        ldapuid_annotation = group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_UID_ANNOTATION)
+        ldapuid_annotation = (
+            group["metadata"].get("annotations", {}).get(LDAP_OPENSHIFT_UID_ANNOTATION)
+        )
if not ldapuid_annotation or ldapuid_annotation != group_uid:
-            return None, "Group %s: %s annotation did not match LDAP UID: wanted %s, got %s" % (
-                group_name, LDAP_OPENSHIFT_UID_ANNOTATION, group_uid, ldapuid_annotation
+            return (
+                None,
+                "Group %s: %s annotation did not match LDAP UID: wanted %s, got %s"
+                % (
+                    group_name,
+                    LDAP_OPENSHIFT_UID_ANNOTATION,
+                    group_uid,
+                    ldapuid_annotation,
+                ),
)
# Overwrite Group Users data
group["users"] = usernames
-        group["metadata"]["annotations"][LDAP_OPENSHIFT_SYNCTIME_ANNOTATION] = datetime.now().isoformat()
+        group["metadata"]["annotations"][
+            LDAP_OPENSHIFT_SYNCTIME_ANNOTATION
+        ] = datetime.now().isoformat()
return group, None
def create_openshift_groups(self, groups: list):
@@ -223,9 +272,15 @@ class OpenshiftLDAPGroups(object):
else:
definition = self.k8s_group_api.create(definition).to_dict()
except DynamicApiError as exc:
-                self.module.fail_json(msg="Failed to %s Group '%s' due to: %s" % (method, name, exc.body))
+                self.module.fail_json(
+                    msg="Failed to %s Group '%s' due to: %s"
+                    % (method, name, exc.body)
+                )
except Exception as exc:
-                self.module.fail_json(msg="Failed to %s Group '%s' due to: %s" % (method, name, to_native(exc)))
+                self.module.fail_json(
+                    msg="Failed to %s Group '%s' due to: %s"
+                    % (method, name, to_native(exc))
+                )
equals = False
if existing:
equals, diff = self.module.diff_objects(existing, definition)
@@ -235,27 +290,27 @@ class OpenshiftLDAPGroups(object):
return results, diffs, changed
def delete_openshift_group(self, name: str):
-        result = dict(
-            kind=self.kind,
-            apiVersion=self.version,
-            metadata=dict(
-                name=name
-            )
-        )
+        result = dict(kind=self.kind, apiVersion=self.version, metadata=dict(name=name))
if not self.module.check_mode:
try:
result = self.k8s_group_api.delete(name=name).to_dict()
except DynamicApiError as exc:
self.module.fail_json(
msg="Failed to delete Group '{0}' due to: {1}".format(
name, exc.body
)
)
except Exception as exc:
self.module.fail_json(
msg="Failed to delete Group '{0}' due to: {1}".format(
name, to_native(exc)
)
)
return result
class OpenshiftGroupsSync(AnsibleOpenshiftModule):
def __init__(self, **kwargs):
super(OpenshiftGroupsSync, self).__init__(**kwargs)
self.__k8s_group_api = None
self.__ldap_connection = None
@@ -267,17 +322,14 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
if not HAS_PYTHON_LDAP:
self.fail_json(
msg=missing_required_lib("python-ldap"),
error=to_native(PYTHON_LDAP_ERROR),
)
@property
def k8s_group_api(self):
if not self.__k8s_group_api:
params = dict(kind="Group", api_version="user.openshift.io/v1", fail=True)
self.__k8s_group_api = self.find_resource(**params)
return self.__k8s_group_api
@@ -291,11 +343,11 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
# Create connection object
params = dict(
module=self,
server_uri=self.config.get("url"),
bind_dn=self.config.get("bindDN"),
bind_pw=self.config.get("bindPassword"),
insecure=boolean(self.config.get("insecure")),
ca_file=self.config.get("ca"),
)
self.__ldap_connection = connect_to_ldap(**params)
return self.__ldap_connection
@@ -327,7 +379,6 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
return syncer
def synchronize(self):
sync_group_type = self.module.params.get("type")
groups_uids = []
@@ -365,7 +416,8 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
name, err = syncer.get_username_for_entry(entry)
if err:
self.exit_json(
msg="Unable to determine username for entry %s: %s"
% (entry, err)
)
if isinstance(name, list):
usernames.extend(name)
@@ -380,13 +432,17 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
self.exit_json(msg=err)
# Make Openshift group
group, err = ldap_openshift_group.make_openshift_group(
uid, group_name, usernames
)
if err:
self.fail_json(msg=err)
openshift_groups.append(group)
# Create Openshift Groups
results, diffs, changed = ldap_openshift_group.create_openshift_groups(
openshift_groups
)
self.module.exit_json(changed=True, groups=results)
def prune(self):
@@ -404,7 +460,10 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
# Check if LDAP group exist
exists, err = syncer.is_ldapgroup_exists(uid)
if err:
msg = "Error determining LDAP group existence for group %s: %s" % (
uid,
err,
)
self.module.fail_json(msg=msg)
if exists:
@@ -429,14 +488,22 @@ class OpenshiftGroupsSync(AnsibleOpenshiftModule):
self.fail_json(msg="Invalid LDAP Sync config: %s" % error)
# Split host/port
if self.config.get("url"):
result, error = ldap_split_host_port(self.config.get("url"))
if error:
self.fail_json(
msg="Failed to parse url='{0}': {1}".format(
self.config.get("url"), error
)
)
self.netlocation, self.host, self.port = (
result["netlocation"],
result["host"],
result["port"],
)
self.scheme = result["scheme"]
if self.params.get("state") == "present":
self.synchronize()
else:
self.prune()


@@ -1,6 +1,7 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from datetime import datetime
@@ -17,9 +18,9 @@ def get_image_blobs(image):
return blobs, "failed to read metadata for image %s" % image["metadata"]["name"]
media_type_manifest = (
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json",
)
media_type_has_config = image["dockerImageManifestMediaType"] in media_type_manifest
docker_image_id = docker_image_metadata.get("Id")
if media_type_has_config and docker_image_id and len(docker_image_id) > 0:
blobs.append(docker_image_id)
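Only manifest formats that carry a separate config object contribute the image ID as an extra blob; the membership check above can be restated as a tiny helper (names are illustrative, not the module's API):

```python
# The two manifest media types recognized in the diff above.
MANIFEST_TYPES_WITH_CONFIG = (
    "application/vnd.docker.distribution.manifest.v2+json",
    "application/vnd.oci.image.manifest.v1+json",
)

def config_blob(media_type, docker_image_id):
    """Return the config blob digest to track, or None if the manifest
    format has no standalone config object (illustrative sketch)."""
    if media_type in MANIFEST_TYPES_WITH_CONFIG and docker_image_id:
        return docker_image_id
    return None
```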
@@ -29,19 +30,18 @@ def get_image_blobs(image):
def is_created_after(creation_timestamp, max_creation_timestamp):
if not max_creation_timestamp:
return False
creationTimestamp = datetime.strptime(creation_timestamp, "%Y-%m-%dT%H:%M:%SZ")
return creationTimestamp > max_creation_timestamp
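The age gate parses the Kubernetes creationTimestamp with a fixed UTC layout and compares it against the cutoff; a minimal self-contained sketch of the same logic:

```python
from datetime import datetime

def is_created_after(creation_timestamp, max_creation_timestamp):
    # No cutoff configured: nothing is "too young".
    if not max_creation_timestamp:
        return False
    # Kubernetes timestamps use this fixed RFC 3339 / UTC layout.
    created = datetime.strptime(creation_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    return created > max_creation_timestamp

cutoff = datetime(2024, 1, 1)
```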
def is_too_young_object(obj, max_creation_timestamp):
return is_created_after(
obj["metadata"]["creationTimestamp"], max_creation_timestamp
)
class OpenShiftAnalyzeImageStream(object):
def __init__(self, ignore_invalid_refs, max_creation_timestamp, module):
self.max_creationTimestamp = max_creation_timestamp
self.used_tags = {}
self.used_images = {}
@@ -53,32 +53,34 @@ class OpenShiftAnalyzeImageStream(object):
if error:
return error
if not result["hostname"] or not result["namespace"]:
# image reference does not match hostname/namespace/name pattern - skipping
return None
if not result["digest"]:
# Attempt to dereference istag. Since we cannot be sure whether the reference refers to the
# integrated registry or not, we ignore the host part completely. As a consequence, we may keep
# image otherwise sentenced for a removal just because its pull spec accidentally matches one of
# our imagestreamtags.
# set the tag if empty
if result["tag"] == "":
result["tag"] = "latest"
key = "%s/%s:%s" % (result["namespace"], result["name"], result["tag"])
if key not in self.used_tags:
self.used_tags[key] = []
self.used_tags[key].append(referrer)
else:
key = "%s/%s@%s" % (result["namespace"], result["name"], result["digest"])
if key not in self.used_images:
self.used_images[key] = []
self.used_images[key].append(referrer)
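The two lookup-key shapes above distinguish tag references (`namespace/name:tag`, with an empty tag defaulting to `latest`) from digest references (`namespace/name@digest`); sketched as a helper (illustrative, not part of the module):

```python
def image_key(namespace, name, tag=None, digest=None):
    """Build the lookup key used for used_tags / used_images (sketch)."""
    if digest:
        # Digest references pin an exact image.
        return "%s/%s@%s" % (namespace, name, digest)
    # Tag references default to "latest" when the tag is empty.
    return "%s/%s:%s" % (namespace, name, tag or "latest")
```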
def analyze_refs_from_pod_spec(self, podSpec, referrer):
for container in podSpec.get("initContainers", []) + podSpec.get(
"containers", []
):
image = container.get("image")
if len(image.strip()) == 0:
# Ignoring container because it has no reference to image
continue
@@ -93,29 +95,35 @@ class OpenShiftAnalyzeImageStream(object):
# pending or running. Additionally, it has to be at least as old as the minimum
# age threshold defined by the algorithm.
too_young = is_too_young_object(pod, self.max_creationTimestamp)
if pod["status"]["phase"] not in ("Running", "Pending") and too_young:
continue
referrer = {
"kind": pod["kind"],
"namespace": pod["metadata"]["namespace"],
"name": pod["metadata"]["name"],
}
err = self.analyze_refs_from_pod_spec(pod["spec"], referrer)
if err:
return err
return None
def analyze_refs_pod_creators(self, resources):
keys = (
"ReplicationController",
"DeploymentConfig",
"DaemonSet",
"Deployment",
"ReplicaSet",
"StatefulSet",
"Job",
"CronJob",
)
for k, objects in iteritems(resources):
if k not in keys:
continue
for obj in objects:
if k == "CronJob":
spec = obj["spec"]["jobTemplate"]["spec"]["template"]["spec"]
else:
spec = obj["spec"]["template"]["spec"]
@@ -132,64 +140,84 @@ class OpenShiftAnalyzeImageStream(object):
def analyze_refs_from_strategy(self, build_strategy, namespace, referrer):
# Determine 'from' reference
def _determine_source_strategy():
for src in ("sourceStrategy", "dockerStrategy", "customStrategy"):
strategy = build_strategy.get(src)
if strategy:
return strategy.get("from")
return None
def _parse_image_stream_image_name(name):
v = name.split("@")
if len(v) != 2:
return (
None,
None,
"expected exactly one @ in the isimage name %s" % name,
)
name = v[0]
tag = v[1]
if len(name) == 0 or len(tag) == 0:
return (
None,
None,
"image stream image name %s must have a name and ID" % name,
)
return name, tag, None
def _parse_image_stream_tag_name(name):
if "@" in name:
return (
None,
None,
"%s is an image stream image, not an image stream tag" % name,
)
v = name.split(":")
if len(v) != 2:
return (
None,
None,
"expected exactly one : delimiter in the istag %s" % name,
)
name = v[0]
tag = v[1]
if len(name) == 0 or len(tag) == 0:
return (
None,
None,
"image stream tag name %s must have a name and a tag" % name,
)
return name, tag, None
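The istag name rules above (a `@` means it is an isimage, not an istag; exactly one `:`; both halves non-empty) can be sketched as a standalone function mirroring the nested helper:

```python
def parse_image_stream_tag_name(name):
    """Return (name, tag, error), following the validation in the diff."""
    if "@" in name:
        return None, None, "%s is an image stream image, not an image stream tag" % name
    parts = name.split(":")
    if len(parts) != 2:
        return None, None, "expected exactly one : delimiter in the istag %s" % name
    if not parts[0] or not parts[1]:
        return None, None, "image stream tag name %s must have a name and a tag" % name
    return parts[0], parts[1], None
```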
from_strategy = _determine_source_strategy()
if from_strategy:
if from_strategy.get("kind") == "DockerImage":
docker_image_ref = from_strategy.get("name").strip()
if len(docker_image_ref) > 0:
err = self.analyze_reference_image(docker_image_ref, referrer)
elif from_strategy.get("kind") == "ImageStreamImage":
name, tag, error = _parse_image_stream_image_name(
from_strategy.get("name")
)
if error:
if not self.ignore_invalid_refs:
return error
else:
namespace = from_strategy.get("namespace") or namespace
self.used_images.append(
{"namespace": namespace, "name": name, "tag": tag}
)
elif from_strategy.get("kind") == "ImageStreamTag":
name, tag, error = _parse_image_stream_tag_name(
from_strategy.get("name")
)
if error:
if not self.ignore_invalid_refs:
return error
else:
namespace = from_strategy.get("namespace") or namespace
self.used_tags.append(
{"namespace": namespace, "name": name, "tag": tag}
)
def analyze_refs_from_build_strategy(self, resources):
# Json Path is always spec.strategy
@@ -203,16 +231,20 @@ class OpenShiftAnalyzeImageStream(object):
"namespace": obj["metadata"]["namespace"],
"name": obj["metadata"]["name"],
}
error = self.analyze_refs_from_strategy(
obj["spec"]["strategy"], obj["metadata"]["namespace"], referrer
)
if error is not None:
return "%s/%s/%s: %s" % (
referrer["kind"],
referrer["namespace"],
referrer["name"],
error,
)
def analyze_image_stream(self, resources):
# Analyze image reference from Pods
error = self.analyze_refs_from_pods(resources["Pod"])
if error:
return None, None, error


@@ -1,16 +1,17 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
import copy
from ansible.module_utils._text import to_native
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import string_types
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from kubernetes.dynamic.exceptions import DynamicApiError
@@ -44,10 +45,17 @@ def follow_imagestream_tag_reference(stream, tag):
return name, tag, len(parts) == 2
content = []
err_cross_stream_ref = (
"tag %s points to an imagestreamtag from another ImageStream" % tag
)
while True:
if tag in content:
return (
tag,
None,
multiple,
"tag %s on the image stream is a reference to same tag" % tag,
)
content.append(tag)
tag_ref = _imagestream_has_tag()
if not tag_ref:
@@ -56,7 +64,10 @@ def follow_imagestream_tag_reference(stream, tag):
if not tag_ref.get("from") or tag_ref["from"]["kind"] != "ImageStreamTag":
return tag, tag_ref, multiple, None
if (
tag_ref["from"]["namespace"] != ""
and tag_ref["from"]["namespace"] != stream["metadata"]["namespace"]
):
return tag, None, multiple, err_cross_stream_ref
# The reference needs to be followed with two format patterns:
@@ -64,7 +75,12 @@ def follow_imagestream_tag_reference(stream, tag):
if ":" in tag_ref["from"]["name"]:
name, tagref, result = _imagestream_split_tag(tag_ref["from"]["name"])
if not result:
return (
tag,
None,
multiple,
"tag %s points to an invalid imagestreamtag" % tag,
)
if name != stream["metadata"]["namespace"]:
# anotheris:sometag - this should not happen.
return tag, None, multiple, err_cross_stream_ref
@@ -80,7 +96,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
super(OpenShiftImportImage, self).__init__(**kwargs)
self._rest_client = None
self.registryhost = self.params.get("registry_url")
self.changed = False
ref_policy = self.params.get("reference_policy")
@@ -90,9 +106,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
elif ref_policy == "local":
ref_policy_type = "Local"
self.ref_policy = {"type": ref_policy_type}
self.validate_certs = self.params.get("validate_registry_certs")
self.cluster_resources = {}
@@ -104,15 +118,15 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
"metadata": {
"name": stream["metadata"]["name"],
"namespace": stream["metadata"]["namespace"],
"resourceVersion": stream["metadata"].get("resourceVersion"),
},
"spec": {"import": True},
}
annotations = stream.get("annotations", {})
insecure = boolean(
annotations.get("openshift.io/image.insecureRepository", True)
)
if self.validate_certs is not None:
insecure = not self.validate_certs
return isi, insecure
@@ -126,7 +140,7 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
},
"importPolicy": {
"insecure": insecure,
"scheduled": self.params.get("scheduled"),
},
"referencePolicy": self.ref_policy,
}
@@ -149,20 +163,17 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
scheduled = scheduled or old_tag["importPolicy"].get("scheduled")
images = isi["spec"].get("images", [])
images.append(
{
"from": {
"kind": "DockerImage",
"name": tags.get(k),
},
"to": {"name": k},
"importPolicy": {"insecure": insecure, "scheduled": scheduled},
"referencePolicy": self.ref_policy,
}
)
isi["spec"]["images"] = images
return isi
@@ -183,27 +194,20 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
),
)
if self.params.get("all") and not ref["tag"]:
spec = dict(dockerImageRepository=source)
isi = self.create_image_stream_import_all(stream, source)
else:
spec = dict(
tags=[
{
"from": {"kind": "DockerImage", "name": source},
"referencePolicy": self.ref_policy,
}
]
)
tags = {ref["tag"]: source}
isi = self.create_image_stream_import_tags(stream, tags)
stream.update(dict(spec=spec))
return stream, isi
def import_all(self, istream):
@@ -220,8 +224,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
if t.get("from") and t["from"].get("kind") == "DockerImage":
tags[t.get("name")] = t["from"].get("name")
if tags == {}:
msg = (
"image stream %s/%s does not have tags pointing to external container images"
% (stream["metadata"]["namespace"], stream["metadata"]["name"])
)
self.fail_json(msg=msg)
isi = self.create_image_stream_import_tags(stream, tags)
@@ -236,7 +241,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
source = self.params.get("source")
# Follow any referential tags to the destination
final_tag, existing, multiple, err = follow_imagestream_tag_reference(
stream, tag
)
if err:
if err == err_stream_not_found_ref:
# Create a new tag
@@ -245,7 +252,10 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
# if the from is still empty this means there's no such tag defined
# nor we can't create any from .spec.dockerImageRepository
if not source:
msg = (
"the tag %s does not exist on the image stream - choose an existing tag to import"
% tag
)
self.fail_json(msg=msg)
existing = {
"from": {
@@ -257,13 +267,21 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
self.fail_json(msg=err)
else:
# Disallow re-importing anything other than DockerImage
if (
existing.get("from", {})
and existing["from"].get("kind") != "DockerImage"
):
msg = "tag {tag} points to existing {kind}/{name}, it cannot be re-imported.".format(
tag=tag,
kind=existing["from"]["kind"],
name=existing["from"]["name"],
)
# disallow changing an existing tag
if not existing.get("from", {}):
msg = (
"tag %s already exists - you cannot change the source using this module."
% tag
)
self.fail_json(msg=msg)
if source and source != existing["from"]["name"]:
if multiple:
@@ -271,7 +289,10 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
tag, final_tag, existing["from"]["name"]
)
else:
msg = (
"the tag %s points to %s, you cannot change the source using this module."
% (tag, final_tag)
)
self.fail_json(msg=msg)
# Set the target item to import
@@ -309,13 +330,13 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
kind=kind,
api_version=api_version,
name=ref.get("name"),
namespace=self.params.get("namespace"),
)
result = self.kubernetes_facts(**params)
if not result["api_found"]:
msg = 'Failed to find API for resource with apiVersion "{0}" and kind "{1}"'.format(
api_version, kind
)
self.fail_json(msg=msg)
imagestream = None
if len(result["resources"]) > 0:
@@ -335,7 +356,9 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
def parse_image_reference(self, image_ref):
result, err = parse_docker_image_ref(image_ref, self.module)
if result.get("digest"):
self.fail_json(
msg="Cannot import by ID, error with definition: %s" % image_ref
)
tag = result.get("tag") or None
if not self.params.get("all") and not tag:
tag = "latest"
@@ -345,7 +368,6 @@ class OpenShiftImportImage(AnsibleOpenshiftModule):
return dict(name=result.get("name"), tag=tag, source=image_ref)
def execute_module(self):
names = []
name = self.params.get("name")
if isinstance(name, string_types):


@@ -3,7 +3,8 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
@@ -24,109 +25,119 @@ LDAP_SEARCH_OUT_OF_SCOPE_ERROR = "trying to search by DN for an entry that exist
def validate_ldap_sync_config(config):
# Validate url
url = config.get("url")
if not url:
return "url should be non empty attribute."
# Make sure bindDN and bindPassword are both set, or both unset
bind_dn = config.get("bindDN", "")
bind_password = config.get("bindPassword", "")
if (len(bind_dn) == 0) != (len(bind_password) == 0):
return "bindDN and bindPassword must both be specified, or both be empty."
insecure = boolean(config.get("insecure"))
ca_file = config.get("ca")
if insecure:
if url.startswith("ldaps://"):
return "Cannot use ldaps scheme with insecure=true."
if ca_file:
return "Cannot specify a ca with insecure=true."
elif ca_file and not os.path.isfile(ca_file):
return "could not read ca file: {0}.".format(ca_file)
nameMapping = config.get("groupUIDNameMapping", {})
for k, v in iteritems(nameMapping):
if len(k) == 0 or len(v) == 0:
return "groupUIDNameMapping has empty key or value"
schemas = []
schema_list = ("rfc2307", "activeDirectory", "augmentedActiveDirectory")
for schema in schema_list:
if schema in config:
schemas.append(schema)
if len(schemas) == 0:
return (
"No schema-specific config was provided, should be one of %s"
% ", ".join(schema_list)
)
if len(schemas) > 1:
return "Exactly one schema-specific config is required; found (%d) %s" % (
len(schemas),
",".join(schemas),
)
if schemas[0] == "rfc2307":
return validate_RFC2307(config.get("rfc2307"))
elif schemas[0] == "activeDirectory":
return validate_ActiveDirectory(config.get("activeDirectory"))
elif schemas[0] == "augmentedActiveDirectory":
return validate_AugmentedActiveDirectory(config.get("augmentedActiveDirectory"))
def validate_ldap_query(qry, isDNOnly=False):
# validate query scope
scope = qry.get("scope")
if scope and scope not in ("", "sub", "one", "base"):
return "invalid scope %s" % scope
# validate deref aliases
derefAlias = qry.get("derefAliases")
if derefAlias and derefAlias not in ("never", "search", "base", "always"):
return "not a valid LDAP alias dereferencing behavior: %s" % derefAlias
# validate timeout
timeout = qry.get("timeout")
if timeout and float(timeout) < 0:
return "timeout must be equal to or greater than zero"
# Validate DN only
qry_filter = qry.get("filter", "")
if isDNOnly:
if len(qry_filter) > 0:
return 'cannot specify a filter when using "dn" as the UID attribute'
else:
# validate filter
if len(qry_filter) == 0 or qry_filter[0] != "(":
return "filter does not start with an '('"
return None
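The filter rule above is easy to restate compactly: when the UID attribute is `dn`, no filter is allowed; otherwise the filter must be non-empty and start with `(`. A standalone sketch of just that rule (names illustrative):

```python
def validate_filter(qry_filter, is_dn_only=False):
    """Return an error string or None, mirroring validate_ldap_query's
    filter rules from the diff above (illustrative sketch)."""
    if is_dn_only:
        if qry_filter:
            return 'cannot specify a filter when using "dn" as the UID attribute'
        return None
    if not qry_filter or qry_filter[0] != "(":
        return "filter does not start with an '('"
    return None
```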
def validate_RFC2307(config):
qry = config.get("groupsQuery")
if not qry or not isinstance(qry, dict):
return "RFC2307: groupsQuery requires a dictionary"
error = validate_ldap_query(qry)
if error:
return error
for field in (
"groupUIDAttribute",
"groupNameAttributes",
"groupMembershipAttributes",
"userUIDAttribute",
"userNameAttributes",
):
value = config.get(field)
if not value:
return "RFC2307: {0} is required.".format(field)
users_qry = config.get("usersQuery")
if not users_qry or not isinstance(users_qry, dict):
return "RFC2307: usersQuery requires a dictionary"
isUserDNOnly = config.get("userUIDAttribute").strip() == "dn"
return validate_ldap_query(users_qry, isDNOnly=isUserDNOnly)
def validate_ActiveDirectory(config, label="ActiveDirectory"):
users_qry = config.get("usersQuery")
if not users_qry or not isinstance(users_qry, dict):
return "{0}: usersQuery requires a dictionary".format(label)
error = validate_ldap_query(users_qry)
if error:
return error
for field in ("userNameAttributes", "groupMembershipAttributes"):
value = config.get(field)
if not value:
return "{0}: {1} is required.".format(label, field)
@@ -138,24 +149,24 @@ def validate_AugmentedActiveDirectory(config):
error = validate_ActiveDirectory(config, label="AugmentedActiveDirectory")
if error:
return error
for field in ("groupUIDAttribute", "groupNameAttributes"):
value = config.get(field)
if not value:
return "AugmentedActiveDirectory: {0} is required".format(field)
groups_qry = config.get("groupsQuery")
if not groups_qry or not isinstance(groups_qry, dict):
return "AugmentedActiveDirectory: groupsQuery requires a dictionary."
isGroupDNOnly = config.get("groupUIDAttribute").strip() == "dn"
return validate_ldap_query(groups_qry, isDNOnly=isGroupDNOnly)
def determine_ldap_scope(scope):
if scope in ("", "sub"):
return ldap.SCOPE_SUBTREE
elif scope == "base":
return ldap.SCOPE_BASE
elif scope == "one":
return ldap.SCOPE_ONELEVEL
return None
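`determine_ldap_scope` maps the config string onto python-ldap's scope constants; a self-contained sketch using stand-in integer values (python-ldap inherits `SCOPE_BASE=0`, `SCOPE_ONELEVEL=1`, `SCOPE_SUBTREE=2` from ldap.h):

```python
# Stand-in values matching the ldap.h scope constants.
SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE = 0, 1, 2

def determine_ldap_scope(scope):
    # An empty string and "sub" both mean a subtree search.
    if scope in ("", "sub"):
        return SCOPE_SUBTREE
    if scope == "base":
        return SCOPE_BASE
    if scope == "one":
        return SCOPE_ONELEVEL
    return None  # the caller treats None as an invalid scope
```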
@@ -175,28 +186,28 @@ def determine_deref_aliases(derefAlias):
def openshift_ldap_build_base_query(config):
qry = {}
if config.get("baseDN"):
qry["base"] = config.get("baseDN")
scope = determine_ldap_scope(config.get("scope"))
if scope:
qry["scope"] = scope
pageSize = config.get("pageSize")
if pageSize and int(pageSize) > 0:
qry["sizelimit"] = int(pageSize)
timeout = config.get("timeout")
if timeout and int(timeout) > 0:
qry["timeout"] = int(timeout)
filter = config.get("filter")
if filter:
qry["filterstr"] = filter
derefAlias = determine_deref_aliases(config.get("derefAliases"))
if derefAlias:
qry["derefAlias"] = derefAlias
return qry
@@ -205,20 +216,20 @@ def openshift_ldap_get_attribute_for_entry(entry, attribute):
if isinstance(attribute, list):
attributes = attribute
for k in attributes:
if k.lower() == "dn":
return entry[0]
v = entry[1].get(k, None)
if v:
if isinstance(v, list):
result = []
for x in v:
if hasattr(x, "decode"):
result.append(x.decode("utf-8"))
else:
result.append(x)
return result
else:
return v.decode("utf-8") if hasattr(v, "decode") else v
return ""
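python-ldap returns each entry as a `(dn, {attribute: [bytes, ...]})` tuple; the decode logic above can be sketched in a simplified standalone form (always returning the decoded list; the real helper also unwraps non-list values):

```python
def get_attribute(entry, attribute):
    """Return a decoded attribute value from a (dn, attrs) entry tuple
    (simplified sketch of openshift_ldap_get_attribute_for_entry)."""
    if attribute.lower() == "dn":
        return entry[0]
    values = entry[1].get(attribute)
    if values is None:
        return ""
    # python-ldap hands back bytes; decode each value to str.
    return [v.decode("utf-8") if hasattr(v, "decode") else v for v in values]

entry = ("cn=devs,dc=example,dc=com", {"cn": [b"devs"], "member": [b"uid=a", b"uid=b"]})
```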
@@ -228,9 +239,7 @@ def ldap_split_host_port(hostport):
"host%zone:port", "[host]:port" or "[host%zone]:port" into host or
host%zone and port.
"""
result = dict(scheme=None, netlocation=None, host=None, port=None)
if not hostport:
return result, None
@@ -240,10 +249,10 @@ def ldap_split_host_port(hostport):
if "://" in hostport:
idx = hostport.find(scheme_l)
result["scheme"] = hostport[:idx]
netlocation = hostport[idx + len(scheme_l):] # fmt: skip
result["netlocation"] = netlocation
if netlocation[-1] == "]":
# ipv6 literal (with no port)
result["host"] = netlocation
@@ -259,21 +268,32 @@ def ldap_split_host_port(hostport):
def openshift_ldap_query_for_entries(connection, qry, unique_entry=True):
# set deref alias (TODO: need to set a default value to reset for each transaction)
derefAlias = qry.pop("derefAlias", None)
if derefAlias:
ldap.set_option(ldap.OPT_DEREF, derefAlias)
try:
result = connection.search_ext_s(**qry)
if not result or len(result) == 0:
return None, "Entry not found for base='{0}' and filter='{1}'".format(
qry["base"], qry["filterstr"]
)
if len(result) > 1 and unique_entry:
if qry.get("scope") == ldap.SCOPE_BASE:
return None, "multiple entries found matching dn={0}: {1}".format(
qry["base"], result
)
else:
return None, "multiple entries found matching filter {0}: {1}".format(
qry["filterstr"], result
)
return result, None
except ldap.NO_SUCH_OBJECT:
return (
None,
"search for entry with base dn='{0}' refers to a non-existent entry".format(
qry["base"]
),
)
def openshift_equal_dn_objects(dn_obj, other_dn_obj):
@@ -303,7 +323,9 @@ def openshift_ancestorof_dn(dn, other):
if len(dn_obj) >= len(other_dn_obj):
return False
# Take the last attribute from the other DN to compare against
return openshift_equal_dn_objects(
dn_obj, other_dn_obj[len(other_dn_obj) - len(dn_obj):] # fmt: skip
)
class OpenshiftLDAPQueryOnAttribute(object):
@@ -324,33 +346,38 @@ class OpenshiftLDAPQueryOnAttribute(object):
output = []
hex_string = "0123456789abcdef"
for c in buffer:
if ord(c) > 0x7f or c in ('(', ')', '\\', '*') or c == 0:
if ord(c) > 0x7F or c in ("(", ")", "\\", "*") or c == 0:
first = ord(c) >> 4
second = ord(c) & 0xf
output += ['\\', hex_string[first], hex_string[second]]
second = ord(c) & 0xF
output += ["\\", hex_string[first], hex_string[second]]
else:
output.append(c)
return ''.join(output)
return "".join(output)
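The `escape_filter` method above hex-escapes characters that are special in LDAP search filters (RFC 4515). The same logic as a standalone sketch (illustrative name, not from the collection; like the original, it assumes code points fit in two hex digits):

```python
def escape_ldap_filter_value(value):
    """Hex-escape LDAP filter metacharacters, per RFC 4515.

    Mirrors the escape_filter logic above; assumes code points
    no greater than 0xFF, i.e. two hex digits per character.
    """
    hex_digits = "0123456789abcdef"
    out = []
    for c in value:
        code = ord(c)
        if code > 0x7F or c in ("(", ")", "\\", "*") or code == 0:
            # emit \XX where XX is the character's hex value
            out += ["\\", hex_digits[code >> 4], hex_digits[code & 0xF]]
        else:
            out.append(c)
    return "".join(out)
```

For example, escaping `(uid=*)` yields `\28uid=\2a\29`, which is safe to embed inside a composed filter string.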
def build_request(self, ldapuid, attributes):
params = copy.deepcopy(self.qry)
if self.query_attribute.lower() == 'dn':
if self.query_attribute.lower() == "dn":
if ldapuid:
if not openshift_equal_dn(ldapuid, params['base']) and not openshift_ancestorof_dn(params['base'], ldapuid):
if not openshift_equal_dn(
ldapuid, params["base"]
) and not openshift_ancestorof_dn(params["base"], ldapuid):
return None, LDAP_SEARCH_OUT_OF_SCOPE_ERROR
params['base'] = ldapuid
params['scope'] = ldap.SCOPE_BASE
params["base"] = ldapuid
params["scope"] = ldap.SCOPE_BASE
# filter that returns all values
params['filterstr'] = "(objectClass=*)"
params['attrlist'] = attributes
params["filterstr"] = "(objectClass=*)"
params["attrlist"] = attributes
else:
# Builds the query containing a filter that conjoins the common filter given
# in the configuration with the specific attribute filter for which the attribute value is given
specificFilter = "%s=%s" % (self.escape_filter(self.query_attribute), self.escape_filter(ldapuid))
qry_filter = params.get('filterstr', None)
specificFilter = "%s=%s" % (
self.escape_filter(self.query_attribute),
self.escape_filter(ldapuid),
)
qry_filter = params.get("filterstr", None)
if qry_filter:
params['filterstr'] = "(&%s(%s))" % (qry_filter, specificFilter)
params['attrlist'] = attributes
params["filterstr"] = "(&%s(%s))" % (qry_filter, specificFilter)
params["attrlist"] = attributes
return params, None
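When the query attribute is not `dn`, `build_request` conjoins the configured base filter with an attribute-specific clause. The composition itself is plain string formatting; a sketch with illustrative names (in the module, the attribute and value are passed through the escape helper first):

```python
def conjoin_filter(base_filter, attribute, value):
    # wrap the attribute clause and AND it with the base filter,
    # e.g. "(objectClass=person)" plus uid=alice becomes
    # "(&(objectClass=person)(uid=alice))"
    specific = "%s=%s" % (attribute, value)
    if base_filter:
        return "(&%s(%s))" % (base_filter, specific)
    return "(%s)" % specific
```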
def ldap_search(self, connection, ldapuid, required_attributes, unique_entry=True):
@@ -358,21 +385,29 @@ class OpenshiftLDAPQueryOnAttribute(object):
if error:
return None, error
# set deref alias (TODO: need to set a default value to reset for each transaction)
derefAlias = query.pop('derefAlias', None)
derefAlias = query.pop("derefAlias", None)
if derefAlias:
ldap.set_option(ldap.OPT_DEREF, derefAlias)
try:
result = connection.search_ext_s(**query)
if not result or len(result) == 0:
return None, "Entry not found for base='{0}' and filter='{1}'".format(query['base'], query['filterstr'])
return None, "Entry not found for base='{0}' and filter='{1}'".format(
query["base"], query["filterstr"]
)
if unique_entry:
if len(result) > 1:
return None, "Multiple Entries found matching search criteria: %s (%s)" % (query, result)
return (
None,
"Multiple Entries found matching search criteria: %s (%s)"
% (query, result),
)
result = result[0]
return result, None
except ldap.NO_SUCH_OBJECT:
return None, "Entry not found for base='{0}' and filter='{1}'".format(query['base'], query['filterstr'])
return None, "Entry not found for base='{0}' and filter='{1}'".format(
query["base"], query["filterstr"]
)
except Exception as err:
return None, "Request %s failed due to: %s" % (query, err)
@@ -384,30 +419,43 @@ class OpenshiftLDAPQuery(object):
def build_request(self, attributes):
params = copy.deepcopy(self.qry)
params['attrlist'] = attributes
params["attrlist"] = attributes
return params
def ldap_search(self, connection, required_attributes):
query = self.build_request(required_attributes)
# set deref alias (TODO: need to set a default value to reset for each transaction)
derefAlias = query.pop('derefAlias', None)
derefAlias = query.pop("derefAlias", None)
if derefAlias:
ldap.set_option(ldap.OPT_DEREF, derefAlias)
try:
result = connection.search_ext_s(**query)
if not result or len(result) == 0:
return None, "Entry not found for base='{0}' and filter='{1}'".format(query['base'], query['filterstr'])
return None, "Entry not found for base='{0}' and filter='{1}'".format(
query["base"], query["filterstr"]
)
return result, None
except ldap.NO_SUCH_OBJECT:
return None, "search for entry with base dn='{0}' refers to a non-existent entry".format(query['base'])
return (
None,
"search for entry with base dn='{0}' refers to a non-existent entry".format(
query["base"]
),
)
class OpenshiftLDAPInterface(object):
def __init__(self, connection, groupQuery, groupNameAttributes, groupMembershipAttributes,
userQuery, userNameAttributes, config):
def __init__(
self,
connection,
groupQuery,
groupNameAttributes,
groupMembershipAttributes,
userQuery,
userNameAttributes,
config,
):
self.connection = connection
self.groupQuery = copy.deepcopy(groupQuery)
self.groupNameAttributes = groupNameAttributes
@@ -416,8 +464,12 @@ class OpenshiftLDAPInterface(object):
self.userNameAttributes = userNameAttributes
self.config = config
self.tolerate_not_found = boolean(config.get('tolerateMemberNotFoundErrors', False))
self.tolerate_out_of_scope = boolean(config.get('tolerateMemberOutOfScopeErrors', False))
self.tolerate_not_found = boolean(
config.get("tolerateMemberNotFoundErrors", False)
)
self.tolerate_out_of_scope = boolean(
config.get("tolerateMemberOutOfScopeErrors", False)
)
self.required_group_attributes = [self.groupQuery.query_attribute]
for x in self.groupNameAttributes + self.groupMembershipAttributes:
@@ -440,7 +492,9 @@ class OpenshiftLDAPInterface(object):
if uid in self.cached_groups:
return self.cached_groups.get(uid), None
group, err = self.groupQuery.ldap_search(self.connection, uid, self.required_group_attributes)
group, err = self.groupQuery.ldap_search(
self.connection, uid, self.required_group_attributes
)
if err:
return None, err
self.cached_groups[uid] = group
@@ -454,7 +508,9 @@ class OpenshiftLDAPInterface(object):
if uid in self.cached_users:
return self.cached_users.get(uid), None
entry, err = self.userQuery.ldap_search(self.connection, uid, self.required_user_attributes)
entry, err = self.userQuery.ldap_search(
self.connection, uid, self.required_user_attributes
)
if err:
return None, err
self.cached_users[uid] = entry
@@ -466,19 +522,19 @@ class OpenshiftLDAPInterface(object):
def list_groups(self):
group_qry = copy.deepcopy(self.groupQuery.qry)
group_qry['attrlist'] = self.required_group_attributes
group_qry["attrlist"] = self.required_group_attributes
groups, err = openshift_ldap_query_for_entries(
connection=self.connection,
qry=group_qry,
unique_entry=False
connection=self.connection, qry=group_qry, unique_entry=False
)
if err:
return None, err
group_uids = []
for entry in groups:
uid = openshift_ldap_get_attribute_for_entry(entry, self.groupQuery.query_attribute)
uid = openshift_ldap_get_attribute_for_entry(
entry, self.groupQuery.query_attribute
)
if not uid:
return None, "Unable to find LDAP group uid for entry %s" % entry
self.cached_groups[uid] = entry
@@ -514,39 +570,46 @@ class OpenshiftLDAPInterface(object):
class OpenshiftLDAPRFC2307(object):
def __init__(self, config, ldap_connection):
self.config = config
self.ldap_interface = self.create_ldap_interface(ldap_connection)
def create_ldap_interface(self, connection):
segment = self.config.get("rfc2307")
groups_base_qry = openshift_ldap_build_base_query(segment['groupsQuery'])
users_base_qry = openshift_ldap_build_base_query(segment['usersQuery'])
groups_base_qry = openshift_ldap_build_base_query(segment["groupsQuery"])
users_base_qry = openshift_ldap_build_base_query(segment["usersQuery"])
groups_query = OpenshiftLDAPQueryOnAttribute(groups_base_qry, segment['groupUIDAttribute'])
users_query = OpenshiftLDAPQueryOnAttribute(users_base_qry, segment['userUIDAttribute'])
groups_query = OpenshiftLDAPQueryOnAttribute(
groups_base_qry, segment["groupUIDAttribute"]
)
users_query = OpenshiftLDAPQueryOnAttribute(
users_base_qry, segment["userUIDAttribute"]
)
params = dict(
connection=connection,
groupQuery=groups_query,
groupNameAttributes=segment['groupNameAttributes'],
groupMembershipAttributes=segment['groupMembershipAttributes'],
groupNameAttributes=segment["groupNameAttributes"],
groupMembershipAttributes=segment["groupMembershipAttributes"],
userQuery=users_query,
userNameAttributes=segment['userNameAttributes'],
config=segment
userNameAttributes=segment["userNameAttributes"],
config=segment,
)
return OpenshiftLDAPInterface(**params)
def get_username_for_entry(self, entry):
username = openshift_ldap_get_attribute_for_entry(entry, self.ldap_interface.userNameAttributes)
username = openshift_ldap_get_attribute_for_entry(
entry, self.ldap_interface.userNameAttributes
)
if not username:
return None, "The user entry (%s) does not map to a OpenShift User name with the given mapping" % entry
return (
None,
"The user entry (%s) does not map to a OpenShift User name with the given mapping"
% entry,
)
return username, None
def get_group_name_for_uid(self, uid):
# Get name from User defined mapping
groupuid_name_mapping = self.config.get("groupUIDNameMapping")
if groupuid_name_mapping and uid in groupuid_name_mapping:
@@ -555,11 +618,14 @@ class OpenshiftLDAPRFC2307(object):
group, err = self.ldap_interface.get_group_entry(uid)
if err:
return None, err
group_name = openshift_ldap_get_attribute_for_entry(group, self.ldap_interface.groupNameAttributes)
if not group_name:
error = "The group entry (%s) does not map to an OpenShift Group name with the given name attribute (%s)" % (
group_name = openshift_ldap_get_attribute_for_entry(
group, self.ldap_interface.groupNameAttributes
)
if not group_name:
error = (
"The group entry (%s) does not map to an OpenShift Group name with the given name attribute (%s)"
% (group, self.ldap_interface.groupNameAttributes)
)
return None, error
if isinstance(group_name, list):
group_name = group_name[0]
@@ -570,7 +636,11 @@ class OpenshiftLDAPRFC2307(object):
def is_ldapgroup_exists(self, uid):
group, err = self.ldap_interface.get_group_entry(uid)
if err:
if err == LDAP_SEARCH_OUT_OF_SCOPE_ERROR or err.startswith("Entry not found") or "non-existent entry" in err:
if (
err == LDAP_SEARCH_OUT_OF_SCOPE_ERROR
or err.startswith("Entry not found")
or "non-existent entry" in err
):
return False, None
return False, err
if group:
@@ -585,7 +655,6 @@ class OpenshiftLDAPRFC2307(object):
class OpenshiftLDAP_ADInterface(object):
def __init__(self, connection, user_query, group_member_attr, user_name_attr):
self.connection = connection
self.userQuery = user_query
@@ -609,7 +678,9 @@ class OpenshiftLDAP_ADInterface(object):
def populate_cache(self):
if not self.cache_populated:
self.cache_populated = True
entries, err = self.userQuery.ldap_search(self.connection, self.required_user_attributes)
entries, err = self.userQuery.ldap_search(
self.connection, self.required_user_attributes
)
if err:
return err
@@ -645,7 +716,9 @@ class OpenshiftLDAP_ADInterface(object):
users_in_group = []
for attr in self.groupMembershipAttributes:
query_on_attribute = OpenshiftLDAPQueryOnAttribute(self.userQuery.qry, attr)
entries, error = query_on_attribute.ldap_search(self.connection, uid, self.required_user_attributes, unique_entry=False)
entries, error = query_on_attribute.ldap_search(
self.connection, uid, self.required_user_attributes, unique_entry=False
)
if error and "not found" not in error:
return None, error
if not entries:
@@ -660,15 +733,13 @@ class OpenshiftLDAP_ADInterface(object):
class OpenshiftLDAPActiveDirectory(object):
def __init__(self, config, ldap_connection):
self.config = config
self.ldap_interface = self.create_ldap_interface(ldap_connection)
def create_ldap_interface(self, connection):
segment = self.config.get("activeDirectory")
base_query = openshift_ldap_build_base_query(segment['usersQuery'])
base_query = openshift_ldap_build_base_query(segment["usersQuery"])
user_query = OpenshiftLDAPQuery(base_query)
return OpenshiftLDAP_ADInterface(
@@ -679,9 +750,15 @@ class OpenshiftLDAPActiveDirectory(object):
)
def get_username_for_entry(self, entry):
username = openshift_ldap_get_attribute_for_entry(entry, self.ldap_interface.userNameAttributes)
username = openshift_ldap_get_attribute_for_entry(
entry, self.ldap_interface.userNameAttributes
)
if not username:
return None, "The user entry (%s) does not map to a OpenShift User name with the given mapping" % entry
return (
None,
"The user entry (%s) does not map to a OpenShift User name with the given mapping"
% entry,
)
return username, None
def get_group_name_for_uid(self, uid):
@@ -702,8 +779,15 @@ class OpenshiftLDAPActiveDirectory(object):
class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
def __init__(self, connection, user_query, group_member_attr, user_name_attr, group_qry, group_name_attr):
def __init__(
self,
connection,
user_query,
group_member_attr,
user_name_attr,
group_qry,
group_name_attr,
):
super(OpenshiftLDAP_AugmentedADInterface, self).__init__(
connection, user_query, group_member_attr, user_name_attr
)
@@ -725,7 +809,9 @@ class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
if uid in self.cached_groups:
return self.cached_groups.get(uid), None
group, err = self.groupQuery.ldap_search(self.connection, uid, self.required_group_attributes)
group, err = self.groupQuery.ldap_search(
self.connection, uid, self.required_group_attributes
)
if err:
return None, err
self.cached_groups[uid] = group
@@ -750,19 +836,19 @@ class OpenshiftLDAP_AugmentedADInterface(OpenshiftLDAP_ADInterface):
class OpenshiftLDAPAugmentedActiveDirectory(OpenshiftLDAPRFC2307):
def __init__(self, config, ldap_connection):
self.config = config
self.ldap_interface = self.create_ldap_interface(ldap_connection)
def create_ldap_interface(self, connection):
segment = self.config.get("augmentedActiveDirectory")
user_base_query = openshift_ldap_build_base_query(segment['usersQuery'])
groups_base_qry = openshift_ldap_build_base_query(segment['groupsQuery'])
user_base_query = openshift_ldap_build_base_query(segment["usersQuery"])
groups_base_qry = openshift_ldap_build_base_query(segment["groupsQuery"])
user_query = OpenshiftLDAPQuery(user_base_query)
groups_query = OpenshiftLDAPQueryOnAttribute(groups_base_qry, segment['groupUIDAttribute'])
groups_query = OpenshiftLDAPQueryOnAttribute(
groups_base_qry, segment["groupUIDAttribute"]
)
return OpenshiftLDAP_AugmentedADInterface(
connection=connection,
@@ -770,7 +856,7 @@ class OpenshiftLDAPAugmentedActiveDirectory(OpenshiftLDAPRFC2307):
group_member_attr=segment["groupMembershipAttributes"],
user_name_attr=segment["userNameAttributes"],
group_qry=groups_query,
group_name_attr=segment["groupNameAttributes"]
group_name_attr=segment["groupNameAttributes"],
)
def is_ldapgroup_exists(self, uid):


@@ -1,15 +1,16 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import traceback
from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from kubernetes.dynamic.exceptions import DynamicApiError
@@ -124,7 +125,6 @@ class OpenShiftProcess(AnsibleOpenshiftModule):
self.exit_json(**result)
def create_resources(self, definitions):
params = {"namespace": self.params.get("namespace_target")}
self.params["apply"] = False
@@ -139,9 +139,7 @@ class OpenShiftProcess(AnsibleOpenshiftModule):
continue
kind = definition.get("kind")
if kind and kind.endswith("List"):
flattened_definitions.extend(
self.flatten_list_kind(definition, params)
)
flattened_definitions.extend(self.flatten_list_kind(definition, params))
else:
flattened_definitions.append(self.merge_params(definition, params))


@@ -1,12 +1,15 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
from urllib.parse import urlparse
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
from ansible_collections.community.okd.plugins.module_utils.openshift_docker_image import (
parse_docker_image_ref,
@@ -15,6 +18,7 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_docker_ima
try:
from requests import request
from requests.auth import HTTPBasicAuth
HAS_REQUESTS_MODULE = True
requests_import_exception = None
except ImportError as e:
@@ -32,11 +36,7 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
kind = "ImageStream"
api_version = "image.openshift.io/v1"
params = dict(
kind=kind,
api_version=api_version,
namespace=namespace
)
params = dict(kind=kind, api_version=api_version, namespace=namespace)
result = self.kubernetes_facts(**params)
imagestream = []
if len(result["resources"]) > 0:
@@ -44,7 +44,6 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
return imagestream
def find_registry_info(self):
def _determine_registry(image_stream):
public, internal = None, None
docker_repo = image_stream["status"].get("publicDockerImageRepository")
@@ -72,39 +71,46 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
self.fail_json(msg="The integrated registry has not been configured")
return internal, public
self.fail_json(msg="No Image Streams could be located to retrieve registry info.")
self.fail_json(
msg="No Image Streams could be located to retrieve registry info."
)
def execute_module(self):
result = {}
result["internal_hostname"], result["public_hostname"] = self.find_registry_info()
(
result["internal_hostname"],
result["public_hostname"],
) = self.find_registry_info()
if self.check:
public_registry = result["public_hostname"]
if not public_registry:
result["check"] = dict(
reached=False,
msg="Registry does not have a public hostname."
reached=False, msg="Registry does not have a public hostname."
)
else:
headers = {
'Content-Type': 'application/json'
}
params = {
'method': 'GET',
'verify': False
}
headers = {"Content-Type": "application/json"}
params = {"method": "GET", "verify": False}
if self.client.configuration.api_key:
headers.update(self.client.configuration.api_key)
elif self.client.configuration.username and self.client.configuration.password:
elif (
self.client.configuration.username
and self.client.configuration.password
):
if not HAS_REQUESTS_MODULE:
result["check"] = dict(
reached=False,
msg="The requests python package is missing, try `pip install requests`",
error=requests_import_exception
error=requests_import_exception,
)
self.exit_json(**result)
params.update(
dict(auth=HTTPBasicAuth(self.client.configuration.username, self.client.configuration.password))
dict(
auth=HTTPBasicAuth(
self.client.configuration.username,
self.client.configuration.password,
)
)
)
# verify ssl
@@ -112,23 +118,20 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
if len(host.scheme) == 0:
registry_url = "https://" + public_registry
if registry_url.startswith("https://") and self.client.configuration.ssl_ca_cert:
params.update(
dict(verify=self.client.configuration.ssl_ca_cert)
)
params.update(
dict(headers=headers)
)
if (
registry_url.startswith("https://")
and self.client.configuration.ssl_ca_cert
):
params.update(dict(verify=self.client.configuration.ssl_ca_cert))
params.update(dict(headers=headers))
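The scheme check above defaults to `https://` when the configured registry hostname carries no scheme, before probing `/` and `/healthz`. In isolation (helper name is illustrative; note that `urlparse` treats anything before a colon as a scheme, so bare `host:port` strings can behave surprisingly):

```python
from urllib.parse import urlparse


def normalize_registry_url(hostname):
    # default to https:// when no scheme is present, as the
    # registry reachability check does before probing paths
    if not urlparse(hostname).scheme:
        return "https://" + hostname
    return hostname
```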
last_bad_status, last_bad_reason = None, None
for path in ("/", "/healthz"):
params.update(
dict(url=registry_url + path)
)
params.update(dict(url=registry_url + path))
response = request(**params)
if response.status_code == 200:
result["check"] = dict(
reached=True,
msg="The local client can contact the integrated registry."
msg="The local client can contact the integrated registry.",
)
self.exit_json(**result)
last_bad_reason = response.reason
@@ -136,9 +139,8 @@ class OpenShiftRegistry(AnsibleOpenshiftModule):
result["check"] = dict(
reached=False,
msg="Unable to contact the integrated registry using local client. Status=%d, Reason=%s" % (
last_bad_status, last_bad_reason
)
msg="Unable to contact the integrated registry using local client. Status=%d, Reason=%s"
% (last_bad_status, last_bad_reason),
)
self.exit_json(**result)


@@ -10,7 +10,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r'''
DOCUMENTATION = r"""
module: k8s
@@ -142,9 +142,9 @@ requirements:
- "python >= 3.6"
- "kubernetes >= 12.0.0"
- "PyYAML >= 3.11"
'''
"""
EXAMPLES = r'''
EXAMPLES = r"""
- name: Create a k8s namespace
community.okd.k8s:
name: testing
@@ -206,18 +206,18 @@ EXAMPLES = r'''
state: present
definition: "{{ lookup('template', '/testing/deployment.yml') | from_yaml }}"
validate:
fail_on_error: yes
fail_on_error: true
- name: warn on validation errors, check for unexpected properties
community.okd.k8s:
state: present
definition: "{{ lookup('template', '/testing/deployment.yml') | from_yaml }}"
validate:
fail_on_error: no
strict: yes
'''
fail_on_error: false
strict: true
"""
RETURN = r'''
RETURN = r"""
result:
description:
- The created, patched, or otherwise present object. Will be empty in the case of a deletion.
@@ -254,22 +254,26 @@ result:
type: int
sample: 48
error:
description: error while trying to create/delete the object.
description: Error while trying to create/delete the object.
returned: error
type: complex
'''
"""
# ENDREMOVE (downstream)
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
NAME_ARG_SPEC, RESOURCE_ARG_SPEC, AUTH_ARG_SPEC, WAIT_ARG_SPEC, DELETE_OPTS_ARG_SPEC
NAME_ARG_SPEC,
RESOURCE_ARG_SPEC,
AUTH_ARG_SPEC,
WAIT_ARG_SPEC,
DELETE_OPTS_ARG_SPEC,
)
def validate_spec():
return dict(
fail_on_error=dict(type='bool'),
fail_on_error=dict(type="bool"),
version=dict(),
strict=dict(type='bool', default=True)
strict=dict(type="bool", default=True),
)
@@ -279,30 +283,41 @@ def argspec():
argument_spec.update(RESOURCE_ARG_SPEC)
argument_spec.update(AUTH_ARG_SPEC)
argument_spec.update(WAIT_ARG_SPEC)
argument_spec['merge_type'] = dict(type='list', elements='str', choices=['json', 'merge', 'strategic-merge'])
argument_spec['validate'] = dict(type='dict', default=None, options=validate_spec())
argument_spec['append_hash'] = dict(type='bool', default=False)
argument_spec['apply'] = dict(type='bool', default=False)
argument_spec['template'] = dict(type='raw', default=None)
argument_spec['delete_options'] = dict(type='dict', default=None, options=DELETE_OPTS_ARG_SPEC)
argument_spec['continue_on_error'] = dict(type='bool', default=False)
argument_spec['state'] = dict(default='present', choices=['present', 'absent', 'patched'])
argument_spec['force'] = dict(type='bool', default=False)
argument_spec["merge_type"] = dict(
type="list", elements="str", choices=["json", "merge", "strategic-merge"]
)
argument_spec["validate"] = dict(type="dict", default=None, options=validate_spec())
argument_spec["append_hash"] = dict(type="bool", default=False)
argument_spec["apply"] = dict(type="bool", default=False)
argument_spec["template"] = dict(type="raw", default=None)
argument_spec["delete_options"] = dict(
type="dict", default=None, options=DELETE_OPTS_ARG_SPEC
)
argument_spec["continue_on_error"] = dict(type="bool", default=False)
argument_spec["state"] = dict(
default="present", choices=["present", "absent", "patched"]
)
argument_spec["force"] = dict(type="bool", default=False)
return argument_spec
def main():
mutually_exclusive = [
('resource_definition', 'src'),
('merge_type', 'apply'),
('template', 'resource_definition'),
('template', 'src'),
("resource_definition", "src"),
("merge_type", "apply"),
("template", "resource_definition"),
("template", "src"),
]
from ansible_collections.community.okd.plugins.module_utils.k8s import OKDRawModule
module = OKDRawModule(argument_spec=argspec(), supports_check_mode=True, mutually_exclusive=mutually_exclusive)
module = OKDRawModule(
argument_spec=argspec(),
supports_check_mode=True,
mutually_exclusive=mutually_exclusive,
)
module.run_module()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -110,15 +110,15 @@ EXAMPLES = r"""
filter: (objectClass=*)
pageSize: 0
groupUIDAttribute: dn
groupNameAttributes: [ cn ]
groupMembershipAttributes: [ member ]
groupNameAttributes: [cn]
groupMembershipAttributes: [member]
usersQuery:
baseDN: "ou=users,dc=example,dc=org"
scope: sub
derefAliases: never
pageSize: 0
userUIDAttribute: dn
userNameAttributes: [ mail ]
userNameAttributes: [mail]
tolerateMemberNotFoundErrors: true
tolerateMemberOutOfScopeErrors: true
@@ -192,20 +192,21 @@ builds:
# ENDREMOVE (downstream)
import copy
import traceback
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
state=dict(type='str', choices=['absent', 'present'], default='present'),
type=dict(type='str', choices=['ldap', 'openshift'], default='ldap'),
sync_config=dict(type='dict', aliases=['config', 'src'], required=True),
deny_groups=dict(type='list', elements='str', default=[]),
allow_groups=dict(type='list', elements='str', default=[]),
state=dict(type="str", choices=["absent", "present"], default="present"),
type=dict(type="str", choices=["ldap", "openshift"], default="ldap"),
sync_config=dict(type="dict", aliases=["config", "src"], required=True),
deny_groups=dict(type="list", elements="str", default=[]),
allow_groups=dict(type="list", elements="str", default=[]),
)
)
return args
@@ -213,12 +214,14 @@ def argument_spec():
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_groups import (
OpenshiftGroupsSync
OpenshiftGroupsSync,
)
module = OpenshiftGroupsSync(argument_spec=argument_spec(), supports_check_mode=True)
module = OpenshiftGroupsSync(
argument_spec=argument_spec(), supports_check_mode=True
)
module.run_module()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -31,12 +31,12 @@ requirements:
"""
EXAMPLES = r"""
- name: Migrate TemplateInstances in namespace=test
- name: Migrate TemplateInstances in namespace=test
community.okd.openshift_adm_migrate_template_instances:
namespace: test
register: _result
- name: Migrate TemplateInstances in all namespaces
- name: Migrate TemplateInstances in all namespaces
community.okd.openshift_adm_migrate_template_instances:
register: _result
"""
@@ -235,7 +235,9 @@ result:
from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import AnsibleOpenshiftModule
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from kubernetes.dynamic.exceptions import DynamicApiError
@@ -339,9 +341,7 @@ class OpenShiftMigrateTemplateInstances(AnsibleOpenshiftModule):
if ti_to_be_migrated:
if self.check_mode:
self.exit_json(
**{"changed": True, "result": ti_to_be_migrated}
)
self.exit_json(**{"changed": True, "result": ti_to_be_migrated})
else:
for ti_elem in ti_to_be_migrated:
results["result"].append(
@@ -363,7 +363,9 @@ def argspec():
def main():
argument_spec = argspec()
module = OpenShiftMigrateTemplateInstances(argument_spec=argument_spec, supports_check_mode=True)
module = OpenShiftMigrateTemplateInstances(
argument_spec=argument_spec, supports_check_mode=True
)
module.run_module()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r'''
DOCUMENTATION = r"""
module: openshift_adm_prune_auth
@@ -58,9 +59,9 @@ options:
requirements:
- python >= 3.6
- kubernetes >= 12.0.0
'''
"""
EXAMPLES = r'''
EXAMPLES = r"""
- name: Prune all roles from default namespace
openshift_adm_prune_auth:
resource: roles
@@ -72,10 +73,10 @@ EXAMPLES = r'''
namespace: testing
label_selectors:
- phase=production
'''
"""
RETURN = r'''
RETURN = r"""
cluster_role_binding:
type: list
description: list of cluster role binding deleted.
@@ -96,37 +97,45 @@ group:
type: list
description: list of Security Context Constraints deleted.
returned: I(resource=users)
'''
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
resource=dict(type='str', required=True, choices=['roles', 'clusterroles', 'users', 'groups']),
namespace=dict(type='str'),
name=dict(type='str'),
label_selectors=dict(type='list', elements='str'),
resource=dict(
type="str",
required=True,
choices=["roles", "clusterroles", "users", "groups"],
),
namespace=dict(type="str"),
name=dict(type="str"),
label_selectors=dict(type="list", elements="str"),
)
)
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_auth import (
OpenShiftAdmPruneAuth)
OpenShiftAdmPruneAuth,
)
module = OpenShiftAdmPruneAuth(argument_spec=argument_spec(),
module = OpenShiftAdmPruneAuth(
argument_spec=argument_spec(),
mutually_exclusive=[("name", "label_selectors")],
supports_check_mode=True)
supports_check_mode=True,
)
module.run_module()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r'''
DOCUMENTATION = r"""
module: openshift_adm_prune_builds
@@ -45,14 +46,14 @@ options:
requirements:
- python >= 3.6
- kubernetes >= 12.0.0
'''
"""
EXAMPLES = r'''
EXAMPLES = r"""
# Run deleting older completed and failed builds and also including
# all builds whose associated BuildConfig no longer exists
- name: Run delete orphan Builds
community.okd.openshift_adm_prune_builds:
orphans: True
orphans: true
# Run deleting older completed and failed builds keep younger than 2hours
- name: Run delete builds, keep younger than 2h
@@ -63,9 +64,9 @@ EXAMPLES = r'''
- name: Run delete builds from namespace
community.okd.openshift_adm_prune_builds:
namespace: testing_namespace
'''
"""
RETURN = r'''
RETURN = r"""
builds:
description:
- The builds that were deleted
@@ -92,33 +93,38 @@ builds:
description: Current status details for the object.
returned: success
type: dict
'''
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import AUTH_ARG_SPEC
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
namespace=dict(type='str'),
keep_younger_than=dict(type='int'),
orphans=dict(type='bool', default=False),
namespace=dict(type="str"),
keep_younger_than=dict(type="int"),
orphans=dict(type="bool", default=False),
)
)
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_builds import (
OpenShiftPruneBuilds,
)
from ansible_collections.community.okd.plugins.module_utils.openshift_builds import OpenShiftPruneBuilds
module = OpenShiftPruneBuilds(argument_spec=argument_spec(), supports_check_mode=True)
module = OpenShiftPruneBuilds(
argument_spec=argument_spec(), supports_check_mode=True
)
module.run_module()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r'''
DOCUMENTATION = r"""
module: openshift_adm_prune_deployments
@@ -45,32 +46,34 @@ options:
requirements:
- python >= 3.6
- kubernetes >= 12.0.0
'''
"""
EXAMPLES = r"""
- name: Prune Deployments from testing namespace
community.okd.openshift_adm_prune_deployments:
namespace: testing
- name: Prune orphaned deployments, keep younger than 2 hours
community.okd.openshift_adm_prune_deployments:
orphans: true
keep_younger_than: 120
"""
RETURN = r"""
replication_controllers:
type: list
description: list of replication controllers candidate for pruning.
returned: always
"""
# ENDREMOVE (downstream)
import copy
try:
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
except ImportError as e:
pass
@@ -79,22 +82,28 @@ def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
namespace=dict(
type="str",
),
keep_younger_than=dict(
type="int",
),
orphans=dict(type="bool", default=False),
)
)
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_deployments import (
OpenShiftAdmPruneDeployment,
)
module = OpenShiftAdmPruneDeployment(
argument_spec=argument_spec(), supports_check_mode=True
)
module.run_module()
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_adm_prune_images
@@ -84,9 +85,9 @@ requirements:
- python >= 3.6
- kubernetes >= 12.0.0
- docker-image-py
"""
EXAMPLES = r"""
# Prune only images and their referrers that are more than an hour old
- name: Prune images with referrers more than an hour old
community.okd.openshift_adm_prune_images:
@@ -102,10 +103,10 @@ EXAMPLES = r'''
community.okd.openshift_adm_prune_images:
registry_url: http://registry.example.org
registry_validate_certs: false
"""
RETURN = r"""
updated_image_streams:
description:
- The image streams updated.
@@ -275,41 +276,44 @@ deleted_images:
},
...
]
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
namespace=dict(type="str"),
all_images=dict(type="bool", default=True),
keep_younger_than=dict(type="int"),
prune_over_size_limit=dict(type="bool", default=False),
registry_url=dict(type="str"),
registry_validate_certs=dict(type="bool"),
registry_ca_cert=dict(type="path"),
prune_registry=dict(type="bool", default=True),
ignore_invalid_refs=dict(type="bool", default=False),
)
)
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_adm_prune_images import (
OpenShiftAdmPruneImages,
)
module = OpenShiftAdmPruneImages(
argument_spec=argument_spec(), supports_check_mode=True
)
module.run_module()
if __name__ == "__main__":
main()


@@ -5,9 +5,10 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: openshift_auth
@@ -74,19 +75,22 @@ requirements:
- urllib3
- requests
- requests-oauthlib
"""
EXAMPLES = r"""
- name: Example Playbook
hosts: localhost
module_defaults:
group/community.okd.okd:
host: https://k8s.example.com/
ca_cert: ca.pem
tasks:
- name: Authenticate to OpenShift cluster and get a list of all pods from any namespace
block:
# It's good practice to store login credentials in a secure vault and not
# directly in playbooks.
- name: Include 'openshift_passwords.yml'
ansible.builtin.include_vars: openshift_passwords.yml
- name: Log in (obtain access token)
community.okd.openshift_auth:
@@ -108,12 +112,12 @@ EXAMPLES = r'''
community.okd.openshift_auth:
state: absent
api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
"""
# Returned value names need to match k8s modules parameter names, to make it
# easy to pass returned values of openshift_auth to other k8s modules.
# Discussion: https://github.com/ansible/ansible/pull/50807#discussion_r248827899
RETURN = r"""
openshift_auth:
description: OpenShift authentication facts.
returned: success
@@ -164,7 +168,7 @@ k8s_auth:
description: Username for authenticating with the API server.
returned: success
type: str
"""
import traceback
@@ -179,43 +183,41 @@ import hashlib
# 3rd party imports
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
try:
from requests_oauthlib import OAuth2Session
HAS_REQUESTS_OAUTH = True
except ImportError:
HAS_REQUESTS_OAUTH = False
try:
from urllib3.util import make_headers
HAS_URLLIB3 = True
except ImportError:
HAS_URLLIB3 = False
K8S_AUTH_ARG_SPEC = {
"state": {
"default": "present",
"choices": ["present", "absent"],
},
"host": {"required": True},
"username": {},
"password": {"no_log": True},
"ca_cert": {"type": "path", "aliases": ["ssl_ca_cert"]},
"validate_certs": {"type": "bool", "default": True, "aliases": ["verify_ssl"]},
"api_key": {"no_log": True},
}
def get_oauthaccesstoken_objectname_from_token(token_name):
"""
OpenShift converts the access token to an OAuthAccessToken resource name using the algorithm
https://github.com/openshift/console/blob/9f352ba49f82ad693a72d0d35709961428b43b93/pkg/server/server.go#L609-L613
@@ -224,7 +226,9 @@ def get_oauthaccesstoken_objectname_from_token(token_name):
sha256Prefix = "sha256~"
content = token_name.strip(sha256Prefix)
b64encoded = urlsafe_b64encode(hashlib.sha256(content.encode()).digest()).rstrip(
b"="
)
return sha256Prefix + b64encoded.decode("utf-8")
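The helper above mirrors the console's algorithm: hash the token body with SHA-256, urlsafe-base64 the digest, strip the padding, and re-attach the `sha256~` prefix. A standalone sketch of that derivation (an approximation of the code above — it removes the prefix by slicing, whereas `str.strip()` strips a character set; the function name is hypothetical):

```python
import hashlib
from base64 import urlsafe_b64encode


def token_to_object_name(token: str) -> str:
    # OAuthAccessToken objects are named "sha256~" + urlsafe-base64 of the
    # SHA-256 digest of the token body, with base64 padding ("=") stripped.
    prefix = "sha256~"
    body = token[len(prefix):] if token.startswith(prefix) else token
    digest = urlsafe_b64encode(hashlib.sha256(body.encode()).digest()).rstrip(b"=")
    return prefix + digest.decode("utf-8")
```

Hashing means the cluster never stores the raw token, yet the module can still address the object to delete on logout.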
@@ -234,29 +238,35 @@ class OpenShiftAuthModule(AnsibleModule):
self,
argument_spec=K8S_AUTH_ARG_SPEC,
required_if=[
("state", "present", ["username", "password"]),
("state", "absent", ["api_key"]),
],
)
if not HAS_REQUESTS:
self.fail(
"This module requires the python 'requests' package. Try `pip install requests`."
)
if not HAS_REQUESTS_OAUTH:
self.fail(
"This module requires the python 'requests-oauthlib' package. Try `pip install requests-oauthlib`."
)
if not HAS_URLLIB3:
self.fail(
"This module requires the python 'urllib3' package. Try `pip install urllib3`."
)
def execute_module(self):
state = self.params.get("state")
verify_ssl = self.params.get("validate_certs")
ssl_ca_cert = self.params.get("ca_cert")
self.auth_username = self.params.get("username")
self.auth_password = self.params.get("password")
self.auth_api_key = self.params.get("api_key")
self.con_host = self.params.get("host")
# python-requests takes either a bool or a path to a ca file as the 'verify' param
if verify_ssl and ssl_ca_cert:
@@ -269,7 +279,7 @@ class OpenShiftAuthModule(AnsibleModule):
changed = False
result = dict()
if state == "present":
new_api_key = self.openshift_login()
result = dict(
host=self.con_host,
@@ -285,87 +295,114 @@ class OpenShiftAuthModule(AnsibleModule):
self.exit_json(changed=changed, openshift_auth=result, k8s_auth=result)
def openshift_discover(self):
url = urljoin(self.con_host, ".well-known/oauth-authorization-server")
ret = requests.get(url, verify=self.con_verify_ca)
if ret.status_code != 200:
self.fail_request(
"Couldn't find OpenShift's OAuth API",
method="GET",
url=url,
reason=ret.reason,
status_code=ret.status_code,
)
try:
oauth_info = ret.json()
self.openshift_auth_endpoint = oauth_info["authorization_endpoint"]
self.openshift_token_endpoint = oauth_info["token_endpoint"]
except Exception:
self.fail_json(
msg="Something went wrong discovering OpenShift OAuth details.",
exception=traceback.format_exc(),
)
def openshift_login(self):
os_oauth = OAuth2Session(client_id="openshift-challenging-client")
authorization_url, state = os_oauth.authorization_url(
self.openshift_auth_endpoint, state="1", code_challenge_method="S256"
)
auth_headers = make_headers(
basic_auth="{0}:{1}".format(self.auth_username, self.auth_password)
)
# Request authorization code using basic auth credentials
ret = os_oauth.get(
authorization_url,
headers={
"X-Csrf-Token": state,
"authorization": auth_headers.get("authorization"),
},
verify=self.con_verify_ca,
allow_redirects=False,
)
if ret.status_code != 302:
self.fail_request(
"Authorization failed.",
method="GET",
url=authorization_url,
reason=ret.reason,
status_code=ret.status_code,
)
# At this point we have `code` and `state`; `code` is what the token request needs
qwargs = {}
for k, v in parse_qs(urlparse(ret.headers["Location"]).query).items():
qwargs[k] = v[0]
qwargs["grant_type"] = "authorization_code"
# Using authorization code given to us in the Location header of the previous request, request a token
ret = os_oauth.post(
self.openshift_token_endpoint,
headers={
"Accept": "application/json",
"Content-Type": "application/x-www-form-urlencoded",
# This is just base64 encoded 'openshift-challenging-client:'
"Authorization": "Basic b3BlbnNoaWZ0LWNoYWxsZW5naW5nLWNsaWVudDo=",
},
data=urlencode(qwargs),
verify=self.con_verify_ca,
)
if ret.status_code != 200:
self.fail_request(
"Failed to obtain an authorization token.",
method="POST",
url=self.openshift_token_endpoint,
reason=ret.reason,
status_code=ret.status_code,
)
return ret.json()["access_token"]
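The login flow above hinges on one step: the authorization endpoint answers with a 302 whose Location query string carries the one-time code, and every query parameter from that redirect is copied into the token request body. A sketch of just that transformation (`token_request_body` is a hypothetical helper name, not part of the module):

```python
from urllib.parse import parse_qs, urlparse, urlencode


def token_request_body(location_header: str) -> str:
    # Copy every query parameter from the redirect (code, state, ...) and
    # add the grant type expected by the token endpoint.
    qwargs = {k: v[0] for k, v in parse_qs(urlparse(location_header).query).items()}
    qwargs["grant_type"] = "authorization_code"
    return urlencode(qwargs)
```

Forwarding all parameters rather than just `code` keeps the exchange robust if the server echoes extra state back.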
def openshift_logout(self):
name = get_oauthaccesstoken_objectname_from_token(self.auth_api_key)
headers = {
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": "Bearer {0}".format(self.auth_api_key),
}
url = "{0}/apis/oauth.openshift.io/v1/useroauthaccesstokens/{1}".format(
self.con_host, name
)
json = {
"apiVersion": "oauth.openshift.io/v1",
"kind": "DeleteOptions",
"gracePeriodSeconds": 0,
}
ret = requests.delete(
url, json=json, verify=self.con_verify_ca, headers=headers
)
if ret.status_code != 200:
self.fail_json(
msg="Couldn't delete user oauth access token '{0}' due to: {1}".format(
name, ret.json().get("message")
),
status_code=ret.status_code,
)
return True
@@ -376,7 +413,7 @@ class OpenShiftAuthModule(AnsibleModule):
def fail_request(self, msg, **kwargs):
req_info = {}
for k, v in kwargs.items():
req_info["req_" + k] = v
self.fail_json(msg=msg, **req_info)
@@ -388,5 +425,5 @@ def main():
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_build
@@ -134,9 +135,9 @@ options:
requirements:
- python >= 3.6
- kubernetes >= 12.0.0
"""
EXAMPLES = r"""
# Starts build from build config default/hello-world
- name: Starts build from build config
community.okd.openshift_build:
@@ -171,9 +172,9 @@ EXAMPLES = r'''
build_phases:
- New
state: cancelled
"""
RETURN = r"""
builds:
description:
- The builds that were started/cancelled.
@@ -200,37 +201,47 @@ builds:
description: Current status details for the object.
returned: success
type: dict
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args_options = dict(
name=dict(type="str", required=True), value=dict(type="str", required=True)
)
args.update(
dict(
state=dict(
type="str",
choices=["started", "cancelled", "restarted"],
default="started",
),
build_args=dict(type="list", elements="dict", options=args_options),
commit=dict(type="str"),
env_vars=dict(type="list", elements="dict", options=args_options),
build_name=dict(type="str"),
build_config_name=dict(type="str"),
namespace=dict(type="str", required=True),
incremental=dict(type="bool"),
no_cache=dict(type="bool"),
wait=dict(type="bool", default=False),
wait_sleep=dict(type="int", default=5),
wait_timeout=dict(type="int", default=120),
build_phases=dict(
type="list",
elements="str",
default=[],
choices=["New", "Pending", "Running"],
),
)
)
return args
@@ -238,23 +249,24 @@ def argument_spec():
def main():
mutually_exclusive = [
("build_name", "build_config_name"),
]
from ansible_collections.community.okd.plugins.module_utils.openshift_builds import (
OpenShiftBuilds,
)
module = OpenShiftBuilds(
argument_spec=argument_spec(),
mutually_exclusive=mutually_exclusive,
required_one_of=[
[
"build_name",
"build_config_name",
]
],
)
module.run_module()
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_import_image
@@ -75,9 +76,9 @@ requirements:
- python >= 3.6
- kubernetes >= 12.0.0
- docker-image-py
"""
EXAMPLES = r"""
# Import tag latest into a new image stream.
- name: Import tag latest into new image stream
community.okd.openshift_import_image:
@@ -122,10 +123,10 @@ EXAMPLES = r'''
- mystream3
source: registry.io/repo/image:latest
all: true
"""
RETURN = r"""
result:
description:
- List with all ImageStreamImport that have been created.
@@ -153,42 +154,44 @@ result:
description: Current status details for the object.
returned: success
type: dict
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(
dict(
namespace=dict(type="str", required=True),
name=dict(type="raw", required=True),
all=dict(type="bool", default=False),
validate_registry_certs=dict(type="bool"),
reference_policy=dict(
type="str", choices=["source", "local"], default="source"
),
scheduled=dict(type="bool", default=False),
source=dict(type="str"),
)
)
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_import_image import (
OpenShiftImportImage,
)
module = OpenShiftImportImage(
argument_spec=argument_spec(), supports_check_mode=True
)
module.run_module()
if __name__ == "__main__":
main()


@@ -2,13 +2,14 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# Copyright (c) 2020-2021, Red Hat
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_process
short_description: Process an OpenShift template.openshift.io/v1 Template
@@ -49,6 +50,7 @@ options:
description:
- The namespace that resources should be created, updated, or deleted in.
- Only used when I(state) is present or absent.
type: str
parameters:
description:
- 'A set of key: value pairs that will be used to set/override values in the Template.'
@@ -70,9 +72,9 @@ options:
type: str
default: rendered
choices: [ absent, present, rendered ]
"""
EXAMPLES = r"""
- name: Process a template in the cluster
community.okd.openshift_process:
name: nginx-example
@@ -87,8 +89,8 @@ EXAMPLES = r'''
community.okd.k8s:
namespace: default
definition: '{{ item }}'
wait: true
apply: true
loop: '{{ result.resources }}'
- name: Process a template with parameters from an env file and create the resources
@@ -98,7 +100,7 @@ EXAMPLES = r'''
namespace_target: default
parameter_file: 'files/nginx.env'
state: present
wait: true
- name: Process a local template and create the resources
community.okd.openshift_process:
@@ -113,10 +115,10 @@ EXAMPLES = r'''
parameter_file: files/example.env
namespace_target: default
state: absent
wait: true
"""
RETURN = r"""
result:
description:
- The created, patched, or otherwise present object. Will be empty in the case of a deletion.
@@ -200,11 +202,13 @@ resources:
conditions:
type: complex
description: Array of status conditions for the object. Not guaranteed to be present
"""
# ENDREMOVE (downstream)
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
RESOURCE_ARG_SPEC,
WAIT_ARG_SPEC,
)
@@ -213,24 +217,26 @@ def argspec():
argument_spec.update(AUTH_ARG_SPEC)
argument_spec.update(WAIT_ARG_SPEC)
argument_spec.update(RESOURCE_ARG_SPEC)
argument_spec["state"] = dict(
type="str", default="rendered", choices=["present", "absent", "rendered"]
)
argument_spec["namespace"] = dict(type="str")
argument_spec["namespace_target"] = dict(type="str")
argument_spec["parameters"] = dict(type="dict")
argument_spec["name"] = dict(type="str")
argument_spec["parameter_file"] = dict(type="str")
return argument_spec
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_process import (
OpenShiftProcess,
)
module = OpenShiftProcess(argument_spec=argspec(), supports_check_mode=True)
module.run_module()
if __name__ == "__main__":
main()


@@ -5,10 +5,11 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_registry_info
@@ -40,9 +41,9 @@ requirements:
- python >= 3.6
- kubernetes >= 12.0.0
- docker-image-py
"""
EXAMPLES = r"""
# Get registry information
- name: Read integrated registry information
community.okd.openshift_registry_info:
@@ -50,11 +51,11 @@ EXAMPLES = r'''
# Read registry integrated information and attempt to contact using local client.
- name: Attempt to contact integrated registry using local client
community.okd.openshift_registry_info:
check: true
"""
RETURN = r"""
internal_hostname:
description:
- The internal registry hostname.
@@ -79,36 +80,30 @@ check:
description: message describing the ping operation.
returned: always
type: str
"""
# ENDREMOVE (downstream)
import copy
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
)
def argument_spec():
args = copy.deepcopy(AUTH_ARG_SPEC)
args.update(dict(check=dict(type="bool", default=False)))
return args
def main():
from ansible_collections.community.okd.plugins.module_utils.openshift_registry import (
OpenShiftRegistry,
)
module = OpenShiftRegistry(argument_spec=argument_spec(), supports_check_mode=True)
module.run_module()
if __name__ == "__main__":
main()


@@ -9,7 +9,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
# STARTREMOVE (downstream)
DOCUMENTATION = r"""
module: openshift_route
short_description: Expose a Service as an OpenShift Route.
@@ -133,9 +133,9 @@ options:
- insecure
default: insecure
type: str
"""
EXAMPLES = r"""
- name: Create hello-world deployment
community.okd.k8s:
definition:
@@ -183,9 +183,9 @@ EXAMPLES = r'''
annotations:
haproxy.router.openshift.io/balance: roundrobin
register: route
"""
RETURN = r"""
result:
description:
- The Route object that was created or updated. Will be empty in the case of deletion.
@@ -303,20 +303,28 @@ duration:
returned: when C(wait) is true
type: int
sample: 48
"""
# ENDREMOVE (downstream)
import copy
from ansible.module_utils._text import to_native
from ansible_collections.community.okd.plugins.module_utils.openshift_common import (
AnsibleOpenshiftModule,
)
try:
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.runner import (
perform_action,
)
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.waiter import (
Waiter,
)
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
AUTH_ARG_SPEC,
WAIT_ARG_SPEC,
COMMON_ARG_SPEC,
)
except ImportError as e:
pass
@@ -329,7 +337,6 @@ except ImportError:
class OpenShiftRoute(AnsibleOpenshiftModule):
def __init__(self):
super(OpenShiftRoute, self).__init__(
argument_spec=self.argspec,
@@ -339,7 +346,7 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
self.append_hash = False
self.apply = False
self.warnings = []
self.params["merge_type"] = None
@property
def argspec(self):
@@ -347,80 +354,95 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
spec.update(copy.deepcopy(WAIT_ARG_SPEC))
spec.update(copy.deepcopy(COMMON_ARG_SPEC))
spec["service"] = dict(type="str", aliases=["svc"])
spec["namespace"] = dict(required=True, type="str")
spec["labels"] = dict(type="dict")
spec["name"] = dict(type="str")
spec["hostname"] = dict(type="str")
spec["path"] = dict(type="str")
spec["wildcard_policy"] = dict(choices=["Subdomain"], type="str")
spec["port"] = dict(type="str")
spec["tls"] = dict(
type="dict",
options=dict(
ca_certificate=dict(type="str"),
certificate=dict(type="str"),
destination_ca_certificate=dict(type="str"),
key=dict(type="str", no_log=False),
insecure_policy=dict(
type="str",
choices=["allow", "redirect", "disallow"],
default="disallow",
),
),
)
spec["termination"] = dict(
choices=["edge", "passthrough", "reencrypt", "insecure"], default="insecure"
)
spec["annotations"] = dict(type="dict")
return spec
def execute_module(self):
service_name = self.params.get("service")
namespace = self.params["namespace"]
termination_type = self.params.get("termination")
if termination_type == "insecure":
termination_type = None
state = self.params.get("state")
if state != "absent" and not service_name:
self.fail_json("If 'state' is not 'absent' then 'service' must be provided")
# We need to do something a little wonky to wait if the user doesn't supply a custom condition
custom_wait = (
self.params.get("wait")
and not self.params.get("wait_condition")
and state != "absent"
)
if custom_wait:
# Don't use default wait logic in perform_action
self.params["wait"] = False
route_name = self.params.get("name") or service_name
labels = self.params.get("labels")
hostname = self.params.get("hostname")
path = self.params.get("path")
wildcard_policy = self.params.get("wildcard_policy")
port = self.params.get("port")
annotations = self.params.get("annotations")
if termination_type and self.params.get("tls"):
tls_ca_cert = self.params["tls"].get("ca_certificate")
tls_cert = self.params["tls"].get("certificate")
tls_dest_ca_cert = self.params["tls"].get("destination_ca_certificate")
tls_key = self.params["tls"].get("key")
tls_insecure_policy = self.params["tls"].get("insecure_policy")
if tls_insecure_policy == "disallow":
tls_insecure_policy = None
else:
tls_ca_cert = (
tls_cert
) = tls_dest_ca_cert = tls_key = tls_insecure_policy = None
route = {
"apiVersion": "route.openshift.io/v1",
"kind": "Route",
"metadata": {
"name": route_name,
"namespace": namespace,
"labels": labels,
},
"spec": {},
}
if annotations:
route["metadata"]["annotations"] = annotations
if state != "absent":
route["spec"] = self.build_route_spec(
service_name,
namespace,
port=port,
wildcard_policy=wildcard_policy,
hostname=hostname,
@@ -434,79 +456,120 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
)
result = perform_action(self.svc, route, self.params)
timeout = self.params.get("wait_timeout")
sleep = self.params.get("wait_sleep")
if custom_wait:
v1_routes = self.find_resource("Route", "route.openshift.io/v1", fail=True)
waiter = Waiter(self.client, v1_routes, wait_predicate)
success, result["result"], result["duration"] = waiter.wait(
timeout=timeout, sleep=sleep, name=route_name, namespace=namespace
)
self.exit_json(**result)
def build_route_spec(
self,
service_name,
namespace,
port=None,
wildcard_policy=None,
hostname=None,
path=None,
termination_type=None,
tls_insecure_policy=None,
tls_ca_cert=None,
tls_cert=None,
tls_key=None,
tls_dest_ca_cert=None,
):
v1_services = self.find_resource("Service", "v1", fail=True)
try:
target_service = v1_services.get(name=service_name, namespace=namespace)
except NotFoundError:
if not port:
self.fail_json(
msg="You need to provide the 'port' argument when exposing a non-existent service"
)
target_service = None
except DynamicApiError as exc:
self.fail_json(
msg="Failed to retrieve service to be exposed: {0}".format(exc.body),
error=exc.status,
status=exc.status,
reason=exc.reason,
)
except Exception as exc:
self.fail_json(
msg="Failed to retrieve service to be exposed: {0}".format(
to_native(exc)
),
error="",
status="",
reason="",
)
route_spec = {
"tls": {},
"to": {
"kind": "Service",
"name": service_name,
},
"port": {
"targetPort": self.set_port(target_service, port),
},
"wildcardPolicy": wildcard_policy,
}
# Want to conditionally add these so we don't overwrite what is automatically added when nothing is provided
if termination_type:
route_spec["tls"] = dict(termination=termination_type.capitalize())
if tls_insecure_policy:
if termination_type == "edge":
route_spec["tls"][
"insecureEdgeTerminationPolicy"
] = tls_insecure_policy.capitalize()
elif termination_type == "passthrough":
if tls_insecure_policy != "redirect":
self.fail_json(
"'redirect' is the only supported insecureEdgeTerminationPolicy for passthrough routes"
)
route_spec["tls"][
"insecureEdgeTerminationPolicy"
] = tls_insecure_policy.capitalize()
elif termination_type == "reencrypt":
self.fail_json(
"'tls.insecure_policy' is not supported with reencrypt routes"
)
else:
route_spec["tls"]["insecureEdgeTerminationPolicy"] = None
if tls_ca_cert:
if termination_type == "passthrough":
self.fail_json(
"'tls.ca_certificate' is not supported with passthrough routes"
)
route_spec["tls"]["caCertificate"] = tls_ca_cert
if tls_cert:
if termination_type == "passthrough":
self.fail_json(
"'tls.certificate' is not supported with passthrough routes"
)
route_spec["tls"]["certificate"] = tls_cert
if tls_key:
if termination_type == "passthrough":
self.fail_json("'tls.key' is not supported with passthrough routes")
route_spec["tls"]["key"] = tls_key
if tls_dest_ca_cert:
if termination_type != "reencrypt":
self.fail_json(
"'destination_certificate' is only valid for reencrypt routes"
)
route_spec["tls"]["destinationCACertificate"] = tls_dest_ca_cert
else:
route_spec["tls"] = None
if hostname:
route_spec["host"] = hostname
if path:
route_spec["path"] = path
return route_spec
@@ -514,7 +577,7 @@ class OpenShiftRoute(AnsibleOpenshiftModule):
if port_arg:
return port_arg
for p in service.spec.ports:
if p.protocol == "TCP":
if p.name is not None:
return p.name
return p.targetPort
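
The port fallback above (an explicit `port` argument wins; otherwise the first TCP port's name, else its `targetPort`) can be sketched standalone. This is a minimal illustration, not the module's code; `SimpleNamespace` stands in for the dynamic-client service object:

```python
from types import SimpleNamespace

def pick_port(service, port_arg=None):
    # An explicit port argument always wins, mirroring set_port().
    if port_arg:
        return port_arg
    # Otherwise use the first TCP port, preferring its name over targetPort.
    for p in service.spec.ports:
        if p.protocol == "TCP":
            if p.name is not None:
                return p.name
            return p.targetPort
    return None

svc = SimpleNamespace(
    spec=SimpleNamespace(
        ports=[SimpleNamespace(protocol="TCP", name="https", targetPort=8443)]
    )
)
print(pick_port(svc))  # https
```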
@@ -525,7 +588,7 @@ def wait_predicate(route):
if not (route.status and route.status.ingress):
return False
for ingress in route.status.ingress:
match = [x for x in ingress.conditions if x.type == "Admitted"]
if not match:
return False
match = match[0]
@@ -538,5 +601,5 @@ def main():
OpenShiftRoute().run_module()
if __name__ == "__main__":
main()


@@ -1,3 +1,4 @@
---
collections:
- name: kubernetes.core
version: '>=2.4.0'


@@ -1,3 +0,0 @@
[flake8]
max-line-length = 160
ignore = W503,E402


@@ -2,3 +2,4 @@ coverage==4.5.4
pytest
pytest-xdist
pytest-forked
pytest-ansible


@@ -1,2 +1,3 @@
---
modules:
python_requires: ">=3.6"
python_requires: ">=3.9"


@@ -0,0 +1,3 @@
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s.py validate-modules:return-syntax-error
plugins/modules/openshift_process.py validate-modules:parameter-type-not-in-doc


@@ -0,0 +1,3 @@
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s.py validate-modules:return-syntax-error
plugins/modules/openshift_process.py validate-modules:parameter-type-not-in-doc


@@ -0,0 +1,5 @@
---
collections:
- name: https://github.com/ansible-collections/kubernetes.core.git
type: git
version: main


@@ -5,28 +5,44 @@ __metaclass__ = type
from ansible_collections.community.okd.plugins.module_utils.openshift_ldap import (
openshift_equal_dn,
openshift_ancestorof_dn,
)
import pytest
try:
import ldap # pylint: disable=unused-import
except ImportError:
pytestmark = pytest.mark.skip("This test requires the python-ldap library")
def test_equal_dn():
assert openshift_equal_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=com"
)
assert not openshift_equal_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=units,ou=users,dc=ansible,dc=com"
)
assert not openshift_equal_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=user,dc=ansible,dc=com"
)
assert not openshift_equal_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=org"
)
def test_ancestor_of_dn():
assert not openshift_ancestorof_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=unit,ou=users,dc=ansible,dc=com"
)
assert not openshift_ancestorof_dn(
"cn=unit,ou=users,dc=ansible,dc=com", "cn=units,ou=users,dc=ansible,dc=com"
)
assert openshift_ancestorof_dn(
"ou=users,dc=ansible,dc=com", "cn=john,ou=users,dc=ansible,dc=com"
)
assert openshift_ancestorof_dn(
"ou=users,dc=ansible,dc=com", "cn=mathew,ou=users,dc=ansible,dc=com"
)
assert not openshift_ancestorof_dn(
"ou=users,dc=ansible,dc=com", "cn=mathew,ou=users,dc=ansible,dc=org"
)
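
The behavior these tests pin down can be approximated in pure Python: two DNs are equal when their normalized RDN sequences match, and one DN is a strict ancestor of another when its RDNs form a proper suffix of the other's. This is a hedged sketch, not the collection's implementation, which goes through python-ldap's DN parsing (`split_dn` here is a naive stand-in for `ldap.dn.str2dn`):

```python
def split_dn(dn):
    # Naive split into (attr, value) RDNs; real code should use ldap.dn.str2dn,
    # which handles escaped commas and multi-valued RDNs correctly.
    rdns = []
    for part in dn.split(","):
        attr, _, value = part.strip().partition("=")
        rdns.append((attr.lower(), value))
    return rdns

def equal_dn(a, b):
    return split_dn(a) == split_dn(b)

def ancestor_of_dn(ancestor, dn):
    anc, sub = split_dn(ancestor), split_dn(dn)
    # A DN is a strict ancestor when its RDNs are a proper suffix of the other DN.
    return len(anc) < len(sub) and sub[len(sub) - len(anc):] == anc
```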


@@ -9,11 +9,7 @@ from ansible_collections.community.okd.plugins.module_utils.openshift_ldap impor
def test_missing_url():
config = dict(kind="LDAPSyncConfig", apiVersion="v1", insecure=True)
err = validate_ldap_sync_config(config)
assert err == "url should be non empty attribute."
@@ -27,10 +23,12 @@ def test_binddn_and_bindpwd_linked():
apiVersion="v1",
url="ldap://LDAP_SERVICE_IP:389",
insecure=True,
bindDN="cn=admin,dc=example,dc=org",
)
credentials_error = (
"bindDN and bindPassword must both be specified, or both be empty."
)
assert validate_ldap_sync_config(config) == credentials_error
@@ -39,7 +37,7 @@ def test_binddn_and_bindpwd_linked():
apiVersion="v1",
url="ldap://LDAP_SERVICE_IP:389",
insecure=True,
bindPassword="testing1223",
)
assert validate_ldap_sync_config(config) == credentials_error
@@ -53,11 +51,13 @@ def test_insecure_connection():
insecure=True,
)
assert (
validate_ldap_sync_config(config)
== "Cannot use ldaps scheme with insecure=true."
)
config.update(dict(url="ldap://LDAP_SERVICE_IP:389", ca="path/to/ca/file"))
assert (
validate_ldap_sync_config(config) == "Cannot specify a ca with insecure=true."
)
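
The validation rules these tests exercise (non-empty `url`, paired `bindDN`/`bindPassword`, no `ldaps` scheme or `ca` with `insecure=true`) condense into a small standalone validator. A sketch only, not the module's `validate_ldap_sync_config`, though the returned messages match the ones asserted above:

```python
from urllib.parse import urlparse

def check_sync_config(config):
    # Return the first violated rule's message, or None when the config passes.
    url = config.get("url")
    if not url:
        return "url should be non empty attribute."
    if bool(config.get("bindDN")) != bool(config.get("bindPassword")):
        return "bindDN and bindPassword must both be specified, or both be empty."
    if config.get("insecure"):
        if urlparse(url).scheme == "ldaps":
            return "Cannot use ldaps scheme with insecure=true."
        if config.get("ca"):
            return "Cannot specify a ca with insecure=true."
    return None
```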


@@ -11,7 +11,6 @@ import pytest
def test_convert_storage_to_bytes():
data = [
("1000", 1000),
("1000Ki", 1000 * 1024),
@@ -54,46 +53,48 @@ def validate_docker_response(resp, **kwargs):
def test_parse_docker_image_ref_valid_image_with_digest():
image = "registry.access.redhat.com/ubi8/dotnet-21@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
hostname="registry.access.redhat.com",
namespace="ubi8",
name="dotnet-21",
digest="sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669",
)
def test_parse_docker_image_ref_valid_image_with_tag_latest():
image = "registry.access.redhat.com/ubi8/dotnet-21:latest"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
hostname="registry.access.redhat.com",
namespace="ubi8",
name="dotnet-21",
tag="latest",
)
def test_parse_docker_image_ref_valid_image_with_tag_int():
image = "registry.access.redhat.com/ubi8/dotnet-21:0.0.1"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
hostname="registry.access.redhat.com",
namespace="ubi8",
name="dotnet-21",
tag="0.0.1",
)
def test_parse_docker_image_ref_invalid_image():
# The hex value of the sha256 is not valid
image = "registry.access.redhat.com/dotnet-21@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522"
response, err = parse_docker_image_ref(image)
@@ -101,7 +102,6 @@ def test_parse_docker_image_ref_invalid_image():
def test_parse_docker_image_ref_valid_image_without_hostname():
image = "ansible:2.10.0"
response, err = parse_docker_image_ref(image)
assert err is None
@@ -110,16 +110,18 @@ def test_parse_docker_image_ref_valid_image_without_hostname():
def test_parse_docker_image_ref_valid_image_without_hostname_and_with_digest():
image = "ansible@sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
name="ansible",
digest="sha256:f7718f5efd3436e781ee4322c92ab0c4ae63e61f5b36f1473a57874cc3522669",
)
def test_parse_docker_image_ref_valid_image_with_name_only():
image = "ansible"
response, err = parse_docker_image_ref(image)
assert err is None
@@ -128,25 +130,27 @@ def test_parse_docker_image_ref_valid_image_with_name_only():
def test_parse_docker_image_ref_valid_image_without_hostname_with_namespace_and_name():
image = "ibmcom/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
name="pause",
namespace="ibmcom",
digest="sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d",
)
def test_parse_docker_image_ref_valid_image_with_complex_namespace_name():
image = "registry.redhat.io/jboss-webserver-5/webserver54-openjdk11-tomcat9-openshift-rhel7:1.0"
response, err = parse_docker_image_ref(image)
assert err is None
validate_docker_response(
response,
hostname="registry.redhat.io",
name="webserver54-openjdk11-tomcat9-openshift-rhel7",
namespace="jboss-webserver-5",
tag="1.0",
)
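
The reference forms these tests cover (`host/ns/name:tag`, `name@digest`, bare `name`, `ns/name@digest`) follow the docker reference grammar: the first path segment is a registry hostname only when it contains a dot or colon or is `localhost`. A simplified sketch of that parsing, not the collection's `parse_docker_image_ref`:

```python
def parse_image_ref(image):
    # Split off digest first, then tag, then hostname/namespace/name.
    result = {"hostname": None, "namespace": None, "name": None,
              "tag": None, "digest": None}
    rest = image
    if "@" in rest:
        rest, result["digest"] = rest.split("@", 1)
    # A colon in the last path segment is a tag (not a registry port).
    if ":" in rest.rsplit("/", 1)[-1]:
        rest, _, result["tag"] = rest.rpartition(":")
    parts = rest.split("/")
    # First component is a registry hostname only if it looks like one.
    if len(parts) > 1 and ("." in parts[0] or ":" in parts[0] or parts[0] == "localhost"):
        result["hostname"] = parts.pop(0)
    result["name"] = parts.pop()
    if parts:
        result["namespace"] = "/".join(parts)
    return result
```

For example, `parse_image_ref("registry.access.redhat.com/ubi8/dotnet-21:latest")` yields hostname `registry.access.redhat.com`, namespace `ubi8`, name `dotnet-21`, tag `latest`; note the real module also validates digest hex length, which this sketch skips.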

tox.ini (new file)

@@ -0,0 +1,37 @@
[tox]
skipsdist = True
[testenv]
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
install_command = pip install {opts} {packages}
[testenv:black]
deps =
black >= 23.0, < 24.0
commands =
black {toxinidir}/plugins {toxinidir}/tests
[testenv:ansible-lint]
deps =
ansible-lint==6.21.0
changedir = {toxinidir}
commands =
ansible-lint
[testenv:linters]
deps =
flake8
{[testenv:black]deps}
commands =
black -v --check --diff {toxinidir}/plugins {toxinidir}/tests
flake8 {toxinidir}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
exclude = .git,.tox,tests/output
ignore = E123,E125,E203,E402,E501,E741,F401,F811,F841,W503
max-line-length = 160
builtins = _