This repo contains code to calculate metrics about the performance of CI systems based on Prow.
Merge queue: list of Pull Requests that are ready to be merged at any given date. To be ready to be merged they must:

- have the `lgtm` label
- have the `approved` label
- not have any `do-not-merge/*` label, i.e. `do-not-merge/hold`, `do-not-merge/work-in-progress`, etc.
- not have any `needs-*` label, i.e. `needs-rebase`, `needs-ok-to-test`, etc.
- have all `/test` and `/retest` comments issued after the last code push

This status is updated every 3 hours. The average values are calculated with data from the 7 days preceding the execution time.
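The label criteria above can be sketched as a small shell check. This is an illustrative helper, not code from this repo: `check_merge_ready` and its inputs are assumptions, and the last criterion (the timing of `/test` and `/retest` comments relative to the last push) is omitted for brevity.

```shell
# check_merge_ready: sketch of the merge-queue label criteria listed above.
# Takes a PR's labels as arguments; hypothetical helper, not part of the tool.
check_merge_ready() {
  local has_lgtm=0 has_approved=0 label
  for label in "$@"; do
    case "$label" in
      lgtm) has_lgtm=1 ;;
      approved) has_approved=1 ;;
      do-not-merge/*|needs-*) return 1 ;;  # any blocking label disqualifies the PR
    esac
  done
  [ "$has_lgtm" -eq 1 ] && [ "$has_approved" -eq 1 ]
}

check_merge_ready lgtm approved && echo "merge-ready"
check_merge_ready lgtm approved do-not-merge/hold || echo "blocked"
```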
These badges display the number of failures per SIG against merged PRs from the last 7 days.
Each of these failures contributes to the number of retests that occur in CI and delays the time to merge for PRs.
These badges display the number of tests that are currently quarantined per SIG.
More details on these tests can be found here
Top failed lanes:
The links to each of these failed jobs can be found in the latest execution data under the `SIGRetests` section.
These plots will be updated every week.
kubevirt/kubevirt merge queue length:
Data available here.
kubevirt/kubevirt time to merge:
Data available here.
kubevirt/kubevirt retests to merge:
Data available here.
kubevirt/kubevirt merged PRs per day:
Data available here.
kubevirt/kubevirt quarantined tests over time (by SIG):
Legend: Red (Total) | Blue (Compute) | Green (Storage) | Orange (Network) | Purple (Monitoring)
Data available here.
The tool has two different commands:

- `stats`: gathers the latest data and generates badge data and files.
- `batch`: gathers data for a range of dates and generates plots from it.
You can execute the tool locally to grab the stats of a specific repo that uses Prow. These are the requirements:

- a GitHub token with the `public_repo` permission; it is required because the tool queries GitHub's API.

A generic stats command execution from the repo's root looks like:
$ go run ./cmd/stats --gh-token /path/to/token --source <org/repo> --path /path/to/output/dir --data-days <days-to-query>
where:

- `--gh-token`: should contain the path of the file where you saved your GitHub token.
- `--source`: is the organization and repo to query information from.
- `--path`: is the path to store output data.
- `--data-days`: is the number of days to query.

You can check all the available options with:
$ go run ./cmd/stats --help
So, for instance, if you have stored the path of your GitHub token file in a `GITHUB_TOKEN` environment variable, a query for the last 7 days of kubevirt/kubevirt can look like:
$ go run ./cmd/stats --gh-token ${GITHUB_TOKEN} --source kubevirt/kubevirt --path /tmp/ci-health --data-days 7
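Note that `GITHUB_TOKEN` here holds the *path* to the token file, not the token itself. A minimal setup sketch, in which the file location and the `example-token` placeholder are illustrative:

```shell
# Sketch: save your token to a file and point GITHUB_TOKEN at that path.
# The location and the "example-token" placeholder are illustrative.
token_file="$HOME/.config/ci-health/github-token"
mkdir -p "$(dirname "$token_file")"
printf '%s' "example-token" > "$token_file"   # replace with your real token
chmod 600 "$token_file"                       # keep the token private
export GITHUB_TOKEN="$token_file"
```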
`batch` executions are done in two modes:

- `fetch`: gathers the data.
- `plot`: generates a png file with the data previously fetched.

A generic fetch batch command execution from the repo's root looks like:
$ go run ./cmd/batch --gh-token /path/to/token --source <org/repo> --path $(pwd)/output --mode fetch --target-metric merged-prs --start-date 2020-05-19
where:

- `--gh-token`: should contain the path of the file where you saved your GitHub token.
- `--source`: is the organization and repo to query information from.
- `--path`: is the path to store output data.
- `--target-metric`: is the metric to query.
- `--start-date`: is the oldest date from which the data will be queried, until today.

You can check all the available options with:
$ go run ./cmd/batch --help
To generate plots you should execute:
$ go run ./cmd/batch --gh-token /path/to/token --source <org/repo> --path $(pwd)/output --mode plot --target-metric merged-prs --start-date 2020-05-19
Plot mode requires data previously generated by fetch mode.
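The fetch-then-plot sequence can be wrapped in a small script. The sketch below is a dry run that only prints the two commands (drop the `echo` to execute them); the token path, metric, and start date are the illustrative values from the examples above.

```shell
# Dry-run sketch of the two-step batch workflow: fetch first, then plot
# over the same output path and metric. Drop the `echo` to actually run.
run_batch() {
  echo go run ./cmd/batch \
    --gh-token /path/to/token \
    --source kubevirt/kubevirt \
    --path "$(pwd)/output" \
    --mode "$1" \
    --target-metric merged-prs \
    --start-date 2020-05-19
}

run_batch fetch   # gather the data
run_batch plot    # render the png from the previously fetched data
```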
A command that generates HTML reports of test case failures per SIG. To create a report for SIG Compute failures:
$ go run ./cmd/html-report --sig compute --results-path ./output/kubevirt/kubevirt/results.json --path /tmp/
This should create an HTML report called sig-compute-failure-report.html under /tmp/.
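A hypothetical loop built from the single-SIG command above can generate one report per SIG; the SIG names are taken from the plot legend earlier in this README. It is printed as a dry run (drop the `echo` to actually generate the reports), and the paths are the illustrative ones from the example.

```shell
# Dry-run sketch: print one html-report command per SIG (names from the
# plot legend above). Drop the `echo` to actually generate the reports.
report_commands() {
  for sig in compute storage network monitoring; do
    echo go run ./cmd/html-report \
      --sig "$sig" \
      --results-path ./output/kubevirt/kubevirt/results.json \
      --path /tmp/
  done
}

report_commands
```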