Red Hat OpenShift Container Platform (OCP)

The following part relates to OpenShift only.

Production bundle and catalog locations

| Type | Image | Description |
| --- | --- | --- |
| Bundles | `quay.io/openshift-community-operators/<operator-name>:v<operator-version>` | Example: `quay.io/openshift-community-operators/etcd:v0.9.4` |
| Temporary index (tags) | `quay.io/openshift-community-operators/catalog_tmp:v<ocp-version>` | Index contains packages whose version matches the bundle tag name. Example for OCP v4.11: `quay.io/openshift-community-operators/catalog_tmp:v4.11` |
| Temporary index (shas) | `quay.io/openshift-community-operators/catalog_tmp:v<ocp-version>s` | Index contains packages whose version is the bundle SHA (used for production). Example for OCP v4.11: `quay.io/openshift-community-operators/catalog_tmp:v4.11s` |
| Pre-Production index | `quay.io/openshift-community-operators/catalog:v<ocp-version>` | Multiarch index image used in an OCP cluster. Example for OCP v4.11: `quay.io/openshift-community-operators/catalog:v4.11` |
| Production index | `quay.io/openshift-community-operators/catalog:v<ocp-version>` | Multiarch production index image used in an OCP cluster. Example for OCP v4.11: `quay.io/openshift-community-operators/catalog:v4.11` |
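The index images above can be inspected locally. A minimal sketch, assuming `opm` is installed (the v4.11 tag is the example from the table above):

```shell
# Inspect the production index for OCP v4.11 (example tag from the table above).
# `opm render` prints the catalog content (packages, channels, bundles) as JSON.
opm render quay.io/openshift-community-operators/catalog:v4.11 | head -n 40
```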

OCP max OpenShift version

In the bundle format, one can set the supported OCP version range by adding `com.redhat.openshift.versions: "v4.8-v4.11"` to the `metadata/annotations.yaml` file (see below).

```yaml
annotations:
  # Core bundle annotations.
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  ...
  com.redhat.openshift.versions: "v4.8-v4.11"
```
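To verify what range a published bundle actually declares, the annotations file can be extracted from the bundle image. A sketch, assuming `podman` is available (the etcd tag is the example from the table above):

```shell
# Bundle images are plain OCI images; the file sits at /metadata/annotations.yaml.
ctr=$(podman create quay.io/openshift-community-operators/etcd:v0.9.4)
podman cp "$ctr":/metadata/annotations.yaml ./annotations.yaml
podman rm "$ctr" > /dev/null
# Show the declared OCP version range, if any
grep com.redhat.openshift.versions annotations.yaml || echo "no OCP range declared"
```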

For the package manifest format this is not possible, but there is an option to set the maximum OCP version via the CSV, in the metadata.annotations key. One can add the following: olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.8"}]' (see below).

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    # Setting olm.maxOpenShiftVersion automatically
    # This property was added by an automatic process because it was possible to identify that this
    # distribution uses API(s) which will be removed in k8s version 1.22 and OpenShift version 4.9.
    # It prevents OCP users from upgrading their cluster to 4.9 before they have installed, in their
    # current clusters, a version of your operator that is compatible with it. Please ensure that your
    # project no longer uses these API(s) and that you start to distribute solutions which are
    # compatible with OpenShift 4.9.
    # For further information, check the README of this repository.
    olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.8"}]'
...
```
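On a live cluster, the effective value can be read back from the installed CSV. A hedged sketch, assuming `oc` and `jq` are available; `<csv-name>` and `<namespace>` are placeholders:

```shell
# Read olm.properties from an installed CSV and extract the
# olm.maxOpenShiftVersion value. The annotation value is itself a JSON string,
# so it is parsed in a second jq pass.
oc get csv <csv-name> -n <namespace> -o json \
  | jq -r '.metadata.annotations["olm.properties"]' \
  | jq -r '.[] | select(.type == "olm.maxOpenShiftVersion") | .value'
```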

How OCP installation is tested

Prow is an external OpenShift release tooling framework that runs the installation test in the community pipeline.

How to edit prow building block configuration

Prow is configured in the openshift repository. Open a PR and get an LGTM approval from a colleague to trigger an automatic merge.

In case you are creating a new project, make sure openshift-ci-robot is added as a collaborator to the project with Admin rights.

Overview

The prow job is automatically triggered for every OCP PR, provided the GH Action did not fail at the beginning. See the structure below.

```mermaid
graph TD
    openshift-deploy.sh --> openshift-deploy-core.sh --> waiting("wait for hash label on Quay") --> deploy_olm_operator_openshift_upstream
    openshift-deploy-core.sh --> prepare_test_index
    prepare_test_index -.-> Quay
    Quay -.-> deploy_olm_operator_openshift_upstream

    style Quay stroke-dasharray: 5 5

    subgraph prow
        openshift-deploy.sh & openshift-deploy-core.sh & waiting
        subgraph Ansible role
            deploy_olm_operator_openshift_upstream
        end
    end

    subgraph GH Action job
        prepare_test_index & Quay
    end
```

The OpenShift robot triggers cluster setup for every supported OCP version. When the cluster is ready, openshift-deploy.sh is executed. That script calls another script, openshift-deploy-core.sh, which triggers the GH Action prepare_test_index. During the action run, it pushes the index and a bundle to Quay, tagged by a commit hash. Once the images are pushed, the playbook role deploy_olm_operator_openshift_upstream is triggered, which pulls the images and installs the operator.

Where to edit the main openshift script

To edit openshift-deploy.sh located in ci/prow of the project, first edit openshift-deploy.sh in the CI repository. Then upgrade the project by running Upgrade CI. The same applies to openshift-deploy-core.sh.

Consider using ci/dev instead of ci/latest during development as described here.

Where to edit deploy_olm_operator_openshift_upstream role

Like every Ansible role, editing is possible in upstream directory of ansible playbook repository. When using the production branch upstream-community, automatic playbook image build is triggered. When using the development branch upstream-community-dev, please trigger playbook image build manually as described here.

Consider using upstream-community-dev instead of upstream-community during development as described here.

Where to edit or restart prepare_test_index action

To restart prepare_test_index action, go to GH Actions of the project.

When an edit is needed, go to templates.

Consider using ci/dev instead of ci/latest during development as described here.

Release brand new index for OCP

Let's assume we are going to release the index for OCP v4.13.

Prerequisites

Before running the automatic GH action that creates the indexes itself, there are some prerequisites an administrator should prepare, in a specific order:

  1. Add new index mapping
  2. Enable Pyxis support for a specific index
  3. Set maximum oc version available
  4. OCP and K8S alignment
  5. Enable breaking API testing if supported by operator-sdk
  6. When all done, bump ocp_version_example variable so next time examples are up to date :)
  7. Create new prow job definition for a new ocp version

Add new index mapping

Always check and add the current index version (e.g. v4.13) to

Also add the new OCP version to bvf_supported_cluster_versions, k8s2ocp and ocp2k8s in https://github.com/redhat-openshift-ecosystem/operator-test-playbooks/blob/upstream-community/upstream/roles/bundle_validation_filter/defaults/main.yml.
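A quick sanity check after editing (path is relative to a local checkout of the playbook repository; the version is the one being released):

```shell
# Confirm the new OCP version (e.g. 4.13) appears in the defaults file;
# grep exits non-zero if it is missing from the file entirely.
grep -nE "v?4\.13" upstream/roles/bundle_validation_filter/defaults/main.yml
```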

Enable Pyxis support for a new index

To enable Pyxis support for a specific index, clone the issue and update the index number (e.g. v4.13) in the description.

Bootstrap the new index

Create the tag for the new OCP version in the external index by copying the previous index:

```shell
skopeo copy --all docker://quay.io/redhat/redhat----community-operator-index:v4.12 docker://quay.io/redhat/redhat----community-operator-index:v4.13
```
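After the copy, it is worth confirming that the new tag resolves to the same multiarch manifest list as the source. A sketch, assuming a reasonably recent skopeo:

```shell
# Compare the manifest digests of the source tag and the freshly created tag;
# --all was used for the copy, so both should report the same digest.
skopeo inspect --format '{{.Digest}}' docker://quay.io/redhat/redhat----community-operator-index:v4.12
skopeo inspect --format '{{.Digest}}' docker://quay.io/redhat/redhat----community-operator-index:v4.13
```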

Set maximum oc version available

Edit oc_version_max in playbook defaults only if 4.x (e.g. v4.13) is available at https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-4.x/openshift-client-linux.tar.gz

(e.g. https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-4.13/openshift-client-linux.tar.gz)
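The availability check can be scripted before bumping the variable. A sketch; OCP_MINOR is the version being released:

```shell
# Check whether the oc client for the given minor version is published on the
# mirror; -I fetches headers only and -f makes curl fail on HTTP errors.
OCP_MINOR="4.13"
URL="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-${OCP_MINOR}/openshift-client-linux.tar.gz"
curl -sfI "$URL" > /dev/null && echo "oc ${OCP_MINOR} client is available"
```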

OCP and K8S alignment

Despite this documentation being focused on OCP, alignment with k8s is needed in the k8s-operatorhub community-operators repository.

First, set kind_version to the latest kind release according to https://github.com/kubernetes-sigs/kind/releases (e.g. v0.17.0). The same page also lists the semver version of each specific k8s node image; for 1.25 that is v1.25.3. Set this value as kind_kube_version.
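The alignment can be smoke-tested locally. A sketch, assuming kind v0.17.0 and kubectl are installed; the cluster name is arbitrary:

```shell
# Spin up a throwaway cluster with the node image matching kind_kube_version,
# confirm the server version, then tear it down.
KIND_KUBE_VERSION="v1.25.3"
kind create cluster --name align-check --image "kindest/node:${KIND_KUBE_VERSION}"
kubectl version -o json   # serverVersion.gitVersion should report v1.25.3
kind delete cluster --name align-check
```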

Enable breaking API testing if supported by operator-sdk

If there is some breaking API change in a new index (e.g. v4.13), please edit the bundle_validation_filter role defaults to enable testing whether an API is broken in a specific operator.

Release process

First, the index must be defined in the pipeline-config-ocp.yaml file. There are old entries like v4.10-db, where -db means the index is in SQLite format; this is just for information and not important here. A new entry can be one of the following:

- v4.13-maintenance - the release of this specific index will not be executed; kiwi, lemon and orange tests are always green; a failed Prow job does not block the merge action
- v4.13-rc - the release of this specific index will be executed; kiwi, lemon and orange tests are always green; a failed Prow job does not block the merge action
- v4.13 - full production setup; all tests need to be green before the merge action

Admins are asked to provide a new OpenShift index a couple of months before a new OpenShift version is GA. There are two ways of releasing a new index.

The very first step is to have an entry in pipeline-config-ocp.yaml like in the example: v4.13-maintenance. This acts as a label for the target index when releasing a new index.

Release from a previous index

This is the recommended way: much faster and easier to execute. Everything is managed by the automatic workflow called CI Upgrade. Fill in the fields as shown below. The most important field is From index, which should point directly to the previous _tmp image. Use a path like quay.io/openshift-community-operators/catalog_tmp:v4.12 if you would like to release v4.13.
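For reference, workflow_dispatch workflows like this can also be triggered from the command line. A heavily hedged sketch using the GitHub CLI; the workflow file name and input names below are assumptions, so check the actual workflow definition before using them:

```shell
# Trigger the CI Upgrade workflow with the previous _tmp image as source.
# "ci-upgrade.yml" and the input names are hypothetical placeholders.
gh workflow run ci-upgrade.yml \
  -f from_index="quay.io/openshift-community-operators/catalog_tmp:v4.12" \
  -f target="v4.13"
```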

When the workflow is finished, see the list of operators to fix in the new index. The list is located on the GH workflow output page as Upgrade summary.

PR

The example Upgrade pipeline is located here. The Create local changes step in the upgrade job does the whole process. The log is located here.

Then you need to fix the operators by running Operator release manual. Set the values as in the example below. The most important field is List of operators ... - this is where you put the output from the previous workflow under Upgrade summary. The list is already space-delimited.

If a manual release fails with a list of operators that are not targeting a new version, re-trigger the job and include any operator that targets the new version. This should fix the issue from the previous run.

PR

The example Manual release pipeline schema is located here and the example output with steps [here](https://github.com/redhat-openshift-ecosystem/community-operators-prod/actions/runs/3740100606/jobs/6349116153){:target="_blank"}.

How to rebuild an existing index from scratch

There can be cases where the differences between the current and the new index are huge. In that case, it makes sense to fill the new index from scratch. You only need Operator release manual. Be ready for a day or more of work and multiple manual triggers of the same workflow type with different sets of operators.

This time, List of operators... is the list of all operators in the GitHub repository, divided into chunks that can each be processed in 6 hours or less, which is the GH Actions job limit. The best practice is to use one fifth of the full operator list, delimited by spaces.
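The chunking itself can be scripted. A sketch using GNU coreutils; the operator names below are hypothetical placeholders for the real repository listing:

```shell
# Split a space-delimited operator list into 5 chunks, each small enough to
# finish within the 6-hour GH Actions job limit, then re-join each chunk into
# the space-delimited form the workflow input expects.
ops="operator-a operator-b operator-c operator-d operator-e operator-f operator-g operator-h operator-i operator-j"
echo "$ops" | tr ' ' '\n' > /tmp/ops.txt
split -n l/5 -d /tmp/ops.txt /tmp/ops_chunk_   # 5 chunks, whole lines only
for f in /tmp/ops_chunk_*; do paste -sd' ' "$f"; done
```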

The release process is long this way, so use it as a last resort. It can be partially optimized by processing operators sorted by the number of versions inside a package, which helps the parallel process finish the smaller operators sooner.

Do not enable Push final index to production until all operators are processed. Alternatively, you can always leave the value at 0, and the next automatic merge will also push your changes to production.

The release process is expected to fail at the end because the index is not fully synchronized until all operators are processed. This is OK.