# Development

## Overview
Three main components are used in each pipeline:

- GitHub action workflows
- Driver scripts
- Ansible playbooks from the `upstream-community` branch (`upstream` directory)
## Process
- GitHub action workflows are stored as Ansible templates. During the Upgrade process, the workflows are resolved using config files and applied to the running project.
- The testing and release pipelines are triggered by these workflows, which then run the Ansible playbook with various parameters.
- These parameters are controlled by the `opp.sh` script; the most relevant part is the `ExecParameters()` function.
- These parameters are passed to the Ansible playbook `local.yml`.
- The playbook is executed inside a container produced from the `quay.io/operator_testing/operator-test-playbooks:latest` image. A rough sketch of such an invocation is shown below.
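
In other words, each run boils down to an `ansible-playbook` invocation against `upstream/local.yml` executed inside the playbook image. The following is a minimal sketch of what reproducing such a run by hand could look like; the mount paths, working directory, and extra vars are assumptions, not the exact values that `ExecParameters()` assembles.

```bash
# Minimal sketch, assuming a local checkout containing upstream/local.yml and
# podman; paths and extra vars are illustrative placeholders.
podman run --rm -it \
  -v "$PWD":/workspace:Z \
  -w /workspace \
  quay.io/operator_testing/operator-test-playbooks:latest \
  ansible-playbook -i localhost, -e ansible_connection=local \
    upstream/local.yml \
    --tags reset_tools \
    -e run_upstream=true
```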
## Production vs. Development
The Codebase and development table shows the various branches used for production and development. Let's summarize them in the table below.

| Type | GitHub action workflows and scripts | Ansible playbooks | Ansible playbook image |
|---|---|---|---|
| Production | `ci/latest` | `upstream-community` | `quay.io/operator_testing/operator-test-playbooks:latest` |
| Development or staging | `ci/dev` | `upstream-community-dev` | `quay.io/operator_testing/operator-test-playbooks:dev` |
### GitHub action workflows and scripts
The `ci/latest` and `ci/dev` branches can be used in both production and development projects by specifying the branch name in the Upgrade workflow, for example as sketched below.
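
When triggering the Upgrade workflow from the GitHub CLI, the branch could be selected roughly like this; the workflow file name (`upgrade.yml`) and the input name (`branch`) are assumptions and may differ in the actual project.

```bash
# Hypothetical trigger of the Upgrade workflow pointing at the dev branch;
# "upgrade.yml" and the "branch" input name are placeholders.
gh workflow run upgrade.yml -f branch=ci/dev
```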
### Ansible playbook images
The `latest` and `dev` tag values are used in a project as stated in its configuration file (shown in Production operator repositories) via the `pipeline.image` key.
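
A project configuration could therefore reference one of the two tags roughly as in the commented excerpt below; the file path and exact key layout are assumptions, so check the Production operator repositories for the real structure.

```bash
# Hypothetical excerpt of a project configuration file (layout assumed):
#   pipeline:
#     image: "quay.io/operator_testing/operator-test-playbooks:latest"
# Quick way to see which playbook image tag a checked-out project references:
grep -rn "operator-test-playbooks" . --include="*.yaml" --include="*.yml"
```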
- The `latest` tag is produced automatically by pushing changes to https://github.com/redhat-openshift-ecosystem/operator-test-playbooks in the `upstream-community` branch.
- The `dev` tag is produced by manually triggering the GitHub Action "Build playbook image" and choosing a branch that starts with `upstream-community-*`. More info is in the script here. The developer should choose a branch name starting with `upstream-community-*` before starting development (see the example after this list).
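
For instance, development on the playbooks could start with a branch named to match that pattern (the branch name below is only an example):

```bash
# Example only: create and push a branch matching the required
# "upstream-community-*" pattern, then manually trigger the
# "Build playbook image" GitHub Action against it to get a :dev image.
git clone https://github.com/redhat-openshift-ecosystem/operator-test-playbooks
cd operator-test-playbooks
git checkout -b upstream-community-my-feature
git push -u origin upstream-community-my-feature
```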
## Operator tools versions and upgrade
Tools like `operator-sdk`, `opm`, and others used in the pipeline process are installed via Ansible playbooks. These tools are either installed automatically in the `quay.io/operator_testing/operator-test-playbooks` image or installed on the fly via the `ansible-playbook` command. The tool versions are configured in the main `local.yml`. The relevant part is shown below and is self-explanatory (an example of overriding one of these versions follows the listing):

    kind_version: v0.12.0
    kind_kube_version: v1.21.1
    operator_sdk_version: v1.25.2
    operator_courier_version: 2.1.11
    olm_version: 0.20.0
    opm_version: v1.26.2
    k8s_community_bundle_validator_version: v0.1.0
    oc_version: 4.3.5
    go_version: 1.13.7
    jq_version: 1.6
    yq_version: 2.2.1
    umoci_version: v0.4.5
    iib_version: v6.3.0
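
Because these are ordinary Ansible variables, a single tool version can in principle be overridden per run via `-e` extra vars, which take precedence over the defaults in `local.yml`; the example below is a sketch and the chosen version is only illustrative.

```bash
# Sketch: override one tool version for a single local run.
# Extra vars (-e) take precedence over the defaults defined in local.yml.
ansible-playbook -i localhost, -e ansible_connection=local \
  upstream/local.yml \
  --tags reset_tools \
  -e run_upstream=true \
  -e opm_version=v1.26.2
```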
## Ansible playbook example
Ansible playbook parameters are controlled by the `opp.sh` script via `ExecParameters()`. The following example shows how `operator_info` is created by the Operator release pipeline:

    ansible-playbook -i localhost, -e ansible_connection=local \
    upstream/local.yml \
    --tags reset_tools,operator_info \
    -e run_upstream=true \
    -e run_prepare_catalog_repo_upstream=true \
    -e catalog_repo=https://github.com/redhat-openshift-ecosystem/community-operators-pipeline -e catalog_repo_branch=main \
    -e operator_base_dir=/tmp/community-operators-for-catalog/operators \
    -e operators=aqua,cockroachdb \
    -e cluster_type=ocp \
    -e strict_cluster_version_labels=true \
    -e production_registry_namespace=quay.io/community-operators-pipeline

| Name | Description |
|---|---|
| `ansible-playbook -i localhost, -e ansible_connection=local` | Ansible command used in a local run |
| `upstream/local.yml` | path to the main playbook `local.yml` in the `upstream` directory |
| `--tags reset_tools,operator_info` | the `reset_tools` tag installs tools and `operator_info` produces a file containing info about operators |
| `-e run_upstream=true` | flag that the upstream version is used |
| `-e run_prepare_catalog_repo_upstream=true` | flag that the project and branch (see row below) will be cloned to `/tmp/community-operators-for-catalog` |
| `-e catalog_repo=... -e catalog_repo_branch=...` | repo and branch where operators are stored |
| `-e operator_base_dir=/tmp/community-operators-for-catalog/operators` | path where operators are taken from |
| `-e operators=aqua,cockroachdb` | list of operators to be processed |
| `-e cluster_type=ocp` | cluster type to be used (`k8s` or `ocp`) |
| `-e strict_cluster_version_labels=true` | flag that the cluster version is checked |
| `-e production_registry_namespace=quay.io/community-operators-pipeline` | registry/namespace with already published bundles to be combined into the index |
## Switch K8S and OCP

One can switch between a Kubernetes (`k8s`) and an OpenShift (`ocp`) setup by setting the value in the Upgrade workflow. This is heavily used in `development` or `staging` projects to test both scenarios before putting changes into production; a sketch of a `k8s` run is shown below.
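
When reproducing a run locally, the same switch maps to the `cluster_type` extra variable used in the release-pipeline example above; a sketch of a Kubernetes run (with illustrative operator names) might look like this:

```bash
# Same style of invocation as the release-pipeline example, but targeting
# a Kubernetes (k8s) cluster instead of OpenShift (ocp).
ansible-playbook -i localhost, -e ansible_connection=local \
  upstream/local.yml \
  --tags reset_tools,operator_info \
  -e run_upstream=true \
  -e operator_base_dir=/tmp/community-operators-for-catalog/operators \
  -e operators=aqua,cockroachdb \
  -e cluster_type=k8s
```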