
Development Guide | opct report

This document describes development details about the report.

The report is a core component of the review process. It extracts all the data needed for the review, transforms it according to the business logic, aggregates common data, and loads it into the final report data structure, which is then used to build the CLI and HTML report outputs.

The input data is a report tarball file, which should contain all required data, including must-gather.

The possible output channels are:

  • CLI stdout
  • HTML report file: stored at <output_dir>/opct-report.html (a.k.a. the frontend)
  • JSON dataset: stored at <output_dir>/opct-report.json
  • Log files: stored at <output_dir>/failures-${plugin}
  • Minimal HTTP file server serving the <output_dir> as the root directory on TCP port 9090
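
The CLI, HTML, and JSON outputs all land in the same directory that the built-in server exposes. A minimal sketch of that layout, using the paths listed above; the plugin name in the failures file is a made-up stand-in:

```shell
# Stage an example <output_dir> layout; file names follow the list above.
OUT=/tmp/opct-demo-out
mkdir -p "$OUT"
: > "$OUT/opct-report.html"        # HTML report (frontend)
: > "$OUT/opct-report.json"        # JSON data set
: > "$OUT/failures-example-plugin" # per-plugin failure logs (example name)
ls "$OUT"
# A plain file server over the same directory is equivalent to the
# built-in one on :9090:
#   python3 -m http.server 9090 -d "$OUT"
```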

Overview of the flow:

%%{init: {"flowchart": {"useMaxWidth": false}}}%%
sequenceDiagram
  autonumber
  Reviewer->>OPCT/Report: ./opct report [opts] --save-to <output_dir> <archive.tar.gz>
  OPCT/Report->>OPCT/Archive: Extract artifact
  OPCT/Archive->>OPCT/Archive: Extract files/metadata/plugins
  OPCT/Archive->>OPCT/MustGather: Extract Must Gather from Artifact
  OPCT/MustGather->>OPCT/MustGather: Run preprocessors (counters, aggregators)
  OPCT/MustGather->>OPCT/Report: Data Loaded
  OPCT/Report->>OPCT/Report: Data Transformer/Processor/Aggregator/Checks
  OPCT/Report->>Reviewer/Output: Extract test output files
  OPCT/Report->>Reviewer/Output: Show CLI output
  OPCT/Report->>Reviewer/Output: Save <output_dir>/opct-report.html/json
  OPCT/Report->>Reviewer/Output: HTTP server started at :9090
  Reviewer->>Reviewer/Output: Open http://localhost:9090/opct-report.html
  Reviewer/Output->>Reviewer/Output: Browser loads data set report.json
  Reviewer->>Reviewer/Output: Navigate/Explore the results

Data Pipeline

Collector

The data generated by plugins is collected in two ways:

  • Conformance plugins with aggregated results (JUnit files processed natively by Sonobuoy)
  • Artifacts collector: receives raw e2e metadata from the pipeline, collects must-gather and other relevant information specific to the OpenShift environment from the cluster, then packs all the results into a single file and sends it back to the aggregator server.

Once the aggregator server has collected all results from the workflow, the archive file is available for download via opct retrieve.
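
The retrieved archive is a plain tar.gz, so its contents can be listed before running the report. A sketch using a tiny synthetic archive as a stand-in for a real one (a real archive also carries metadata and must-gather):

```shell
# Build a synthetic stand-in archive with a plugins/ tree, then list it
# the same way you would inspect a real retrieved artifact.
mkdir -p /tmp/opct-archive-demo/plugins
echo demo > /tmp/opct-archive-demo/plugins/placeholder.txt
tar czf /tmp/demo-archive.tar.gz -C /tmp/opct-archive-demo plugins
tar tzf /tmp/demo-archive.tar.gz
```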

ELT (Extract/Load/Transform)

When the client downloads the archive file, the data is extracted from the archive.

The entry point for the data collector, extractor, and processor is the function extractAndLoadData.

Viewer

There are two viewers consuming the data sources:

  • CLI Report
  • HTML Report (served by HTTP server)

Both viewers consume data from ReportData, the data aggregator that holds the state of the JSON file opct-report.json used by the Web UI to render the HTML components.
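
Since opct-report.json is the single data set shared by both viewers, it can also be inspected directly with jq. A sketch; the schema below is a made-up stand-in, the real keys in opct-report.json differ:

```shell
# Write a tiny stand-in for opct-report.json (hypothetical schema),
# then list its top-level sections the way you would for the real file.
cat > /tmp/opct-report-sample.json <<'EOF'
{"summary": {"passed": 10, "failed": 2},
 "plugins": ["10-openshift-kube-conformance", "20-openshift-conformance-validated"]}
EOF
jq 'keys' /tmp/opct-report-sample.json
```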

View Frontend

The viewer consumes the data from opct-report.json, building the report for the CLI and the Web UI (when enabled).

CLI

The CLI is the summarized report shown when opct report is called. It provides a quick snapshot of the results; to explore the results in depth, use the Web UI.

The CLI is rendered by showReportCLI.

Web UI

The Web UI is a set of HTML files that use the Vue framework to create the static page.

The HTML template report.html is rendered when opct report --save... is triggered, and it consumes the data source opct-report.json.

See the section "Report HTML app".

Explore the data

Process many results (batch)

  • Generate the HTML reports locally, skipping the web server, then explore all results later with a file server:
export RESULTS=( ocp414rc2_202309291531 ocp414rc2_202309300444 );

for RES in "${RESULTS[@]}"; do
  echo "CREATING $RES";
  mkdir -pv /tmp/results-shared/$RES ;
  ~/opct report --server-skip --save-to /tmp/results-shared/$RES $RES;
done
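
When processing many results this way, a small index page makes the file server easier to navigate. A sketch that links each per-result report; the demo directory is a made-up stand-in for a real result directory:

```shell
# Build an index.html linking each <result>/opct-report.html under the
# shared directory; paths follow the batch loop above.
SHARED=/tmp/results-shared
mkdir -p "$SHARED/demo_run"
{
  echo "<ul>"
  for d in "$SHARED"/*/; do
    name=$(basename "$d")
    echo "  <li><a href=\"$name/opct-report.html\">$name</a></li>"
  done
  echo "</ul>"
} > "$SHARED/index.html"
```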

Explore the static HTML pages:

python3 -m http.server -d /tmp/results-shared

Metrics

ARTIFACT_NAME=ocp414rc0_AWS_None_202309222127_47efe9ef-06e4-48f3-a190-4e3523ff1ae0.tar.gz

# check if the metrics have been collected
tar tf $ARTIFACT_NAME | grep artifacts_must-gather-metrics.tar.xz

# extract the metrics data
tar xf $ARTIFACT_NAME plugins/99-openshift-artifacts-collector/results/global/artifacts_must-gather-metrics.tar.xz

mkdir metrics
tar xJf plugins/99-openshift-artifacts-collector/results/global/artifacts_must-gather-metrics.tar.xz -C metrics/

# check if the etcd disk fsync metrics have been collected by the server
zcat metrics/monitoring/prometheus/metrics/query_range-etcd-disk-fsync-db-duration-p99.json.gz | jq .data.result[].metric.instance

# install the utility asciigraph, then plot the metrics
METRIC=etcd-disk-fsync-db-duration-p99;
DATA=${PWD}/metrics/monitoring/prometheus/metrics/query_range-${METRIC}.json.gz;
for INSTANCE in $(zcat $DATA | jq -r .data.result[].metric.instance);
do
    zcat $DATA \
        | jq -r ".data.result[] | select(.metric.instance==\"$INSTANCE\").values[]|@tsv" \
        | awk '{print$2}' \
        | asciigraph -h 10 -w 100 -c "$METRIC - $INSTANCE" ;
done
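
Besides plotting, the same query_range data can be summarized numerically with jq. A sketch; the JSON below is a tiny synthetic sample in the same overall shape (data.result[].metric/values) as the real query_range-*.json.gz files:

```shell
# Synthetic query_range-style sample: one etcd instance, three samples.
cat > /tmp/fsync-sample.json <<'EOF'
{"data":{"result":[{"metric":{"instance":"etcd-0"},
 "values":[[1,"0.010"],[2,"0.012"],[3,"0.008"]]}]}}
EOF
# Print max and average fsync duration per instance.
jq -r '.data.result[]
       | .metric.instance as $i
       | [.values[][1] | tonumber]
       | "\($i) max=\(max) avg=\(add/length)"' /tmp/fsync-sample.json
```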

Report HTML Web UI

The Report HTML Web UI is built on top of the Vue framework and leverages native browser capabilities.

The page is reactive, using the opct-report.json as the data source.

The opct-report.json is generated by the report command when processing the results.

References:

  • https://vuejs.org/guide/extras/ways-of-using-vue.html
  • https://markus.oberlehner.net/blog/goodbye-webpack-building-vue-applications-without-webpack/