Based on:
- https://github.com/GoogleCloudPlatform/fourkeys
- https://cloud.google.com/blog/products/devops-sre/using-the-four-keys-to-measure-your-devops-performance
- Continuous Delivery Events, aka CDEvents
- CloudEvents aka CEs
This project consumes CloudEvents from multiple sources and allows you to track the DORA "Four Keys" metrics via a Kubernetes architecture that is natively cloud-agnostic.
- **CloudEvents Endpoint**: Endpoint to send all CloudEvents to; these CloudEvents are stored in the SQL database in the `cloudevents_raw` table.
- **CloudEvents Router**: Router, with a routing table, which routes events to be transformed into CDEvents. This mechanism allows the same event type to be transformed into multiple CDEvents, if needed. This component reads from the `cloudevents_raw` table and processes events, and is triggered at a configurable fixed interval.
- **CDEvents Transformers**: These functions receive events from the CloudEvents Router and transform the CloudEvents into CDEvents. The result is sent to the CDEvents Endpoint.
- **CDEvents Endpoint**: Endpoint to send CDEvents to; these CloudEvents are stored in the SQL database in the `cdevents_raw` table, as they do not need any transformation. This endpoint validates that the CloudEvent received is a CD CloudEvent.
- **Metrics Functions**: These functions are in charge of calculating the different metrics and storing them in dedicated tables, probably one table per metric. To calculate these metrics, the functions read from `cdevents_raw`. An example of how to calculate the Deployment Frequency metric is explained below.
- **(Optional) Metrics Endpoint**: Endpoint that allows you to query the metrics by name and apply some filters. This is an optional component, as you can build a dashboard from the metrics tables without using these endpoints.
This project was created to consume any available CloudEvent and store it in a SQL database for further processing. Once the CloudEvents are ingested, a function-based approach can translate them into CDEvents, which are later used to calculate the "Four Keys" metrics.
We will install the following components in an existing Kubernetes Cluster (you can use KinD):
- Create a KinD Cluster:

```shell
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31080 # expose port 31080 of the node to port 80 on the host, later used by the Kourier or Contour ingress
    listenAddress: 127.0.0.1
    hostPort: 80
EOF
```
- Install Knative Serving:

```shell
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-core.yaml
```

Install the networking layer:

```shell
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.8.0/kourier.yaml
```

Patch your `configmap/config-network`:

```shell
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```

Configure domain mapping to easily access functions inside the cluster:

```shell
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-default-domain.yaml
```

Because we are working on KinD, we need to make sure that we can route traffic from our laptop to the cluster; this is not needed if you are running a real Kubernetes cluster:

```shell
kubectl patch configmap -n knative-serving config-domain -p "{\"data\": {\"127.0.0.1.sslip.io\": \"\"}}"
```
Apply the Kourier Service:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kourier-ingress
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  type: NodePort
  selector:
    app: 3scale-kourier-gateway
  ports:
    - name: http2
      nodePort: 31080
      port: 80
      targetPort: 8080
EOF
```
- Install Knative Eventing:

```shell
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.8.1/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.8.1/eventing-core.yaml
```
- Create your "Four Keys" namespace:

```shell
kubectl create ns four-keys
```
- Install PostgreSQL:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install postgresql bitnami/postgresql --namespace four-keys
```

In a separate terminal:

```shell
kubectl port-forward --namespace four-keys svc/postgresql 5432:5432
```

In another terminal:

```shell
export POSTGRES_PASSWORD=$(kubectl get secret --namespace four-keys postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
```

To connect from outside the cluster:

```shell
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
```

Create the tables (on the default database `postgres`):

```sql
CREATE TABLE IF NOT EXISTS cloudevents_raw (
  event_id serial NOT NULL PRIMARY KEY,
  content json NOT NULL,
  event_timestamp TIMESTAMP NOT NULL
);

CREATE TABLE IF NOT EXISTS cdevents_raw (
  cd_source varchar(255) NOT NULL,
  cd_id varchar(255) NOT NULL,
  cd_timestamp TIMESTAMP NOT NULL,
  cd_type varchar(255) NOT NULL,
  cd_subject_id varchar(255) NOT NULL,
  cd_subject_source varchar(255),
  content json NOT NULL,
  PRIMARY KEY (cd_source, cd_id)
);

CREATE TABLE IF NOT EXISTS deployments (
  deploy_id varchar(255) NOT NULL,
  time_created TIMESTAMP NOT NULL,
  deploy_name varchar(255) NOT NULL,
  PRIMARY KEY (deploy_id, time_created, deploy_name)
);
```
- Install Sockeye:

```shell
kubectl apply -f https://github.com/n3wscott/sockeye/releases/download/v0.7.0/release.yaml
```

- Add CloudEvent sources. Using the Kubernetes API Server Source file already in the root directory, apply the APIServerSource resource with:

```shell
kubectl apply -f api-serversource-deployments.yaml
```
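For reference, an APIServerSource that emits CloudEvents for Deployment resources might look roughly like the sketch below; the file in the repository is authoritative, and the source name, service account, and sink service name here are assumptions.

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: deployments-source          # assumed name
  namespace: four-keys
spec:
  serviceAccountName: api-server-source-sa  # needs RBAC permission to watch Deployments
  mode: Resource                            # send the full resource in the event payload
  resources:
    - apiVersion: apps/v1
      kind: Deployment
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudevents-endpoint            # assumed name of the CloudEvents Endpoint service
```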
Deploy the `four-keys` components using `ko` for development:

```shell
cd four-keys/
ko apply -f config/
```

Create a new Deployment in the `default` namespace to test that your configuration is working:

```shell
kubectl apply -f ../test/example-deployment.yaml
```
If the Deployment Frequency functions (transformation and calculation) are installed, you should be able to query the deployment frequency endpoint and see the metric:

```shell
curl https://fourkeys-frequency-endpoint.four-keys.127.0.0.1.sslip.io/deploy-frequency/day
```

And see something like this:

```json
[{"DeployName":"nginx-deployment-3","Deployments":1,"Time":"2022-11-28T00:00:00Z"}]
```
Try modifying the deployment or creating new ones.
From https://github.com/GoogleCloudPlatform/fourkeys/blob/main/METRICS.md
We look for new or updated Deployment resources. This is done using the APIServerSource that we configured earlier.
The flow should look like:
```mermaid
graph TD
    A[API Server Source] --> |writes to `cloudevents_raw` table| B[CloudEvent Endpoint]
    B --> |read from `cloudevents_raw` table| C[CloudEvents Router]
    C --> D(CDEvent Transformation Function)
    D --> |writes to `cdevents_raw` table| E[CDEvents Endpoint]
    E --> F(Deployment Frequency Function)
    F --> |writes to `deployments` table| G[Deployments Table]
    G --> |read from `deployments` table| H[Metrics Endpoint]
```
Calculate buckets: Daily, Weekly, Monthly, Yearly.
This counts the number of deployments per day:
```sql
SELECT
  distinct deploy_name AS NAME,
  DATE_TRUNC('day', time_created) AS day,
  COUNT(distinct deploy_id) AS deployments
FROM
  deployments
GROUP BY deploy_name, day;
```
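The other buckets only change the `DATE_TRUNC` granularity (`'week'`, `'month'`, `'year'`); for example, a weekly variant of the same query might look like:

```sql
SELECT
  distinct deploy_name AS NAME,
  DATE_TRUNC('week', time_created) AS week,
  COUNT(distinct deploy_id) AS deployments
FROM
  deployments
GROUP BY deploy_name, week;
```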
- Add a processed-events mechanism for the `cloudevents_raw` and `cdevents_raw` tables. This should prevent the CloudEvents Router and the Metrics Calculation Functions from recalculating already-processed events. This can be achieved by having a table that keeps track of the last processed event and then making sure that the CloudEvents Router and the Metrics Calculation Functions join against it.
- Add queries that calculate the Weekly, Monthly, and Yearly buckets for the Deployment Frequency metric to `deployment-frequency-endpoint.go`. Check this blog post on calculating frequency rather than volume: https://codefresh.io/learn/software-deployment/dora-metrics-4-key-metrics-for-improving-devops-performance/
- Create a Helm Chart for the generic components (CloudEvents Endpoint, CDEvents Endpoint, CloudEvents Router)
- Automate table creation for PostgreSQL helm chart (https://stackoverflow.com/questions/66333474/postgresql-helm-chart-with-initdbscripts)
- Create functions for the Lead Time for Changes metric
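The processed-events mechanism mentioned above could be sketched with a hypothetical tracking table; the table name, `consumer` column, and consumer identifier below are made up for illustration.

```sql
-- Hypothetical bookkeeping table: one row per consumer,
-- recording the last event it has processed.
CREATE TABLE IF NOT EXISTS processed_events (
  consumer varchar(255) NOT NULL,
  last_event_id int NOT NULL,
  PRIMARY KEY (consumer)
);

-- The CloudEvents Router would then read only unprocessed events:
SELECT ce.event_id, ce.content
FROM cloudevents_raw ce
JOIN processed_events pe ON pe.consumer = 'cloudevents-router'
WHERE ce.event_id > pe.last_event_id
ORDER BY ce.event_id;
```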
- Tekton dashboard:

```shell
kubectl port-forward svc/tekton-dashboard 9097:9097 -n tekton-pipelines
```

- Cloud Events Controller:

```shell
kubectl apply -f https://storage.cloud.google.com/tekton-releases-nightly/cloudevents/latest/release.yaml
```

- ConfigMap: `config-defaults` for
- GitHub Source: https://github.com/knative/docs/tree/main/code-samples/eventing/github-source