How to build a runtime using buildpack


Introduction

The goal of this project is to test/validate different approaches to build a runtime using buildpacks: the Quarkus buildpack extension, the pack client, Tekton, Shipwright, and RHTAP.

Project Testing Matrix

The following table details the versions of the different tools/images used within the GitHub e2e workflows or examples of this project.

This project currently verifies the scenarios described in the sections below.

The test case covering Tekton PipelineAsCode using RHTAP is not implemented as a GitHub workflow!

| Tool/Image | Version | Tested | Note |
|---|---|---|---|
| Lifecycle | 0.19.3 | Yes | - |
| Platform | 0.13.0 | Yes | - |
| Pack cli | v0.33.2 | Yes | - |
| Java buildpack library | https://github.com/snowdrop/java-buildpack-client | Yes (indirectly using Quarkus container build) | Supports platform 0.4! |
| Paketo Builder Tiny (jammy) builder (DEPRECATED but still needed for Quarkus <= 3.9) | 0.0.203 | Yes | Packages lifecycle 0.17.2 |
| Paketo community UBI builder | 0.0.54 | Yes | Includes the Node.js and Quarkus Java buildpacks. |
| Paketo Node.js Extension for ubi | 0.3.1 | Yes | - |
| Paketo Java Extension for ubi | 0.1.1 | Yes | - |

0. Prerequisites

  • Docker desktop (or podman) installed and running

  • kind CLI installed (>= 0.20)

  • Tekton CLI (>= 0.33)

  • Have a kubernetes cluster (kind, minikube, etc)

ℹ️

For local tests, we suggest creating a kubernetes kind cluster and an unsecured container registry using the following bash scripts:

curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/registry.sh" | bash -s install --registry-name kind-registry.local

NOTE: Use the command ... | bash -s -h to see the usage
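
Before creating the Quarkus project, you can check that the local registry is reachable; a quick sanity check, assuming the registry installed above listens on kind-registry.local:5000:

# List the repositories stored in the registry using the Docker Registry v2 API
curl http:https://kind-registry.local:5000/v2/_catalog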

1. Quarkus Buildpacks

First, create a Quarkus Hello example using the following maven command executed in a terminal.

mvn io.quarkus.platform:quarkus-maven-plugin:3.9.3:create \
  -DprojectGroupId=dev.snowdrop \
  -DprojectArtifactId=quarkus-hello \
  -DprojectVersion=1.0 \
  -DplatformVersion=3.9.3 \
  -Dextensions='resteasy,kubernetes,buildpack'

Test the project:

cd quarkus-hello
mvn compile quarkus:dev

In a separate terminal, curl the HTTP endpoint

curl http:https://localhost:8080/hello

To build the container image, use paketobuildpacks/builder-jammy-tiny:0.0.203 as the builder image and pass BP_*** env variables in order to configure the Quarkus buildpack build process properly:

mvn package \
 -Dquarkus.container-image.image=kind-registry.local:5000/quarkus-hello:1.0 \
 -Dquarkus.buildpack.jvm-builder-image=paketobuildpacks/builder-jammy-tiny:0.0.203 \
 -Dquarkus.buildpack.builder-env.BP_NATIVE_IMAGE="false" \
 -Dquarkus.container-image.build=true \
 -Dquarkus.container-image.push=true
⚠️

Quarkus releases <= 3.9 can only build images for platform spec 0.4 and lifecycle 0.17 like this one: https://github.com/paketo-buildpacks/builder-jammy-tiny/releases/tag/v0.0.203. The extension feature is only available since platform 0.10 and lifecycle 0.18. See work in progress to support the new platform specs here.

Next, start the container and curl the endpoint

docker run -i --rm -p 8080:8080 kind-registry.local:5000/quarkus-hello:1.0
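
From a separate terminal, you can hit the same endpoint as in dev mode to verify the image works:

# Call the HTTP endpoint served by the running container
curl http:https://localhost:8080/hello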

2. Pack client

To validate this scenario on top of the existing quarkus-hello project, we will use the pack client.

REGISTRY_HOST="kind-registry.local:5000"
docker rmi ${REGISTRY_HOST}/quarkus-hello:1.0
pack build ${REGISTRY_HOST}/quarkus-hello:1.0 \
     --builder paketocommunity/builder-ubi-base:0.0.54 \
     --volume $HOME/.m2:/home/cnb/.m2:rw

Trick: you can discover the available builder images using the command pack builder suggest ;-)
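
For example, a short sketch combining the suggested command with pack builder inspect (also part of the pack CLI) to see what a builder ships, using the builder from this scenario:

# List the builder images suggested by the pack CLI
pack builder suggest
# Show the buildpacks and lifecycle bundled in a given builder
pack builder inspect paketocommunity/builder-ubi-base:0.0.54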

Next, start the container and curl the endpoint (curl http:https://localhost:8080/hello):

docker run -i --rm -p 8080:8080 kind-registry.local:5000/quarkus-hello:1.0

3. Tekton

See the project documentation for more information about how to install and use it.

To use Tekton, you need a k8s cluster (>= 1.24) and a local docker registry:

curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/registry.sh" | bash -s install --registry-name kind-registry.local
⚠️

Append the suffix .local to the local registry name, otherwise the buildpacks lifecycle will report the following error during the analyze phase: failed to get previous image: connect to repo store 'kind-registry:5000/buildpack/app': Get "https://kind-registry:5000/v2/": http: server gave HTTP response to HTTPS client

Next, install the latest official release (or a specific release)

kubectl apply -f https://github.com/tektoncd/pipeline/releases/download/v0.56.4/release.yaml

and optionally, you can also install the dashboard

kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
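
Before exposing the dashboard, you can verify that the Tekton components started; a quick check, assuming the default tekton-pipelines namespace:

# The controller, webhook and dashboard pods should be Running
kubectl get pods -n tekton-pipelines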

Expose the dashboard service externally using an ingress route and open the URL in your browser: tekton-ui.127.0.0.1.nip.io

VM_IP=127.0.0.1
kubectl create ingress tekton-ui -n tekton-pipelines --class=nginx --rule="tekton-ui.$VM_IP.nip.io/*=tekton-dashboard:9097"

When the platform is ready, you can install the needed Tasks:

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml
⚠️

Don’t install the buildpacks-phases version 0.2 task from the catalog, as it is outdated and does not work with lifecycle >= 0.17 supporting the extension mechanism

kubectl apply -f https://raw.githubusercontent.com/redhat-buildpacks/catalog/main/tekton/task/buildpacks-phases/01/buildpacks-phases.yaml
kubectl apply -f https://raw.githubusercontent.com/redhat-buildpacks/catalog/main/tekton/task/buildpacks-extension-phases/01/buildpacks-extension-phases.yaml

Set the following variables:

IMAGE_NAME=<CONTAINER_REGISTRY>/<ORG>/quarkus-hello
BUILDER_IMAGE=<PAKETO_BUILDER_IMAGE_OR_YOUR_OWN_BUILDER_IMAGE>

It is time to create a PipelineRun to build the Quarkus application:

IMAGE_NAME=kind-registry.local:5000/quarkus-hello

BUILDER_IMAGE=paketocommunity/builder-ubi-base:0.0.54
CNB_BUILD_IMAGE=paketobuildpacks/build-jammy-tiny:latest
CNB_RUN_IMAGE=paketobuildpacks/run-jammy-tiny:latest

kubectl delete PipelineRun/buildpacks-phases
kubectl delete pvc/ws-pvc
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ws-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-phases
  labels:
    app.kubernetes.io/description: "Buildpacks-PipelineRun"
spec:
  params:
    - name: gitRepo
      value: https://github.com/quarkusio/quarkus-quickstarts.git
    - name: sourceSubPath
      value: getting-started
    - name: AppImage
      value: ${IMAGE_NAME}
    - name: cnbBuilderImage
      value: ${BUILDER_IMAGE}
    - name: cnbBuildImage
      value: ${CNB_BUILD_IMAGE}
    - name: cnbRunImage
      value: ${CNB_RUN_IMAGE}
    - name: cnbBuildEnvVars
      value:
        - "BP_NATIVE_IMAGE=false"
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: https://github.com/redhat-buildpacks/catalog.git
      - name: revision
        value: main
      - name: pathInRepo
        value: tekton/pipeline/buildpacks/01/buildpacks.yaml
  workspaces:
    - name: source-ws
      subPath: source
      persistentVolumeClaim:
        claimName: ws-pvc
    - name: cache-ws
      subPath: cache
      persistentVolumeClaim:
        claimName: ws-pvc
EOF

Follow the execution of the pipeline using the dashboard: http:https://tekton-ui.127.0.0.1.nip.io/namespaces/default/taskruns, http:https://tekton-ui.127.0.0.1.nip.io/namespaces/default/pipelineruns or using the client: tkn pipelinerun logs -f
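
For example, to stream the logs of the PipelineRun created above with the Tekton CLI (buildpacks-phases is the name set in the PipelineRun metadata; the namespace is assumed to be default):

# Follow the logs of the buildpacks-phases PipelineRun
tkn pipelinerun logs buildpacks-phases -f -n default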

When the pipeline is finished and no error is reported, then launch the container

docker run -i --rm -p 8080:8080 kind-registry.local:5000/quarkus-hello
ℹ️

You can test different pipelineRuns using our bash script: ./scripts/play-with-tekton ;-)

4. Shipwright

See the project documentation for more information: https://github.com/shipwright-io/build

To use Shipwright, you need a k8s cluster, a local docker registry, and Tekton (>= v0.48) installed:

curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/registry.sh" | bash -s install --registry-name kind-registry.local
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.56.4/release.yaml

Next, deploy the release 0.12 of Shipwright:

kubectl create -f https://github.com/shipwright-io/build/releases/download/v0.12.0/release.yaml

Apply the following hack to create a self-signed certificate on the cluster, otherwise the Shipwright webhook will fail to start:

curl --silent --location https://raw.githubusercontent.com/shipwright-io/build/v0.12.0/hack/setup-webhook-cert.sh | bash
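
You can check that Shipwright started correctly before continuing; a quick verification, assuming the release manifest deploys into the default shipwright-build namespace:

# The build controller and webhook pods should be Running
kubectl get pods -n shipwright-build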

Next, install the Buildpacks BuildStrategy using the following command:

kubectl delete -f k8s/shipwright/unsecured/v1beta1/clusterbuildstrategy.yml
kubectl apply -f k8s/shipwright/unsecured/v1beta1/clusterbuildstrategy.yml

Create the Build CR using as source the Quarkus Getting started repository:

kubectl delete -f k8s/shipwright/unsecured/v1beta1/build.yml
kubectl apply -f k8s/shipwright/unsecured/v1beta1/build.yml

To view the Build which you just created:

kubectl get build
NAME                      REGISTERED   REASON      BUILDSTRATEGYKIND      BUILDSTRATEGYNAME   CREATIONTIME
buildpack-quarkus-build   True         Succeeded   ClusterBuildStrategy   buildpacks          6s

Trigger a BuildRun:

kubectl delete buildrun -lbuild.shipwright.io/name=buildpack-quarkus-build
kubectl delete -f k8s/shipwright/unsecured/v1beta1/pvc.yml

kubectl create -f k8s/shipwright/unsecured/v1beta1/pvc.yml
kubectl create -f k8s/shipwright/unsecured/v1beta1/buildrun.yml

Wait until your BuildRun is completed, and then you can view it as follows:

kubectl get buildruns
NAME                              SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
buildpack-quarkus-buildrun-vp2gb   True        Succeeded   2m22s       9s
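
To avoid polling manually, a small sketch using kubectl's watch flag:

# Watch the BuildRun until the SUCCEEDED column switches to True
kubectl get buildruns -w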

When the task is finished and no error is reported, then launch the container

docker run -i --rm -p 8080:8080 kind-registry.local:5000/quarkus-hello

Secured container registry (NOT MAINTAINED ANYMORE)

If you prefer to use a secured registry, then some additional steps are needed, such as:

Install a secured container registry

curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/registry.sh" | bash -s install --registry-name kind-registry.local --secure-registry
ℹ️
To install a secured (HTTPS and authentication) docker registry, pass the parameter: --secure-registry

Generate a docker-registry secret

ℹ️
This secret will be used by the serviceAccount of the build’s pod to access the container registry
REGISTRY_HOST="kind-registry.local:5000" REGISTRY_USER=admin REGISTRY_PASSWORD=snowdrop
kubectl create ns demo
kubectl create secret docker-registry registry-creds \
  --docker-server="${REGISTRY_HOST}" \
  --docker-username="${REGISTRY_USER}" \
  --docker-password="${REGISTRY_PASSWORD}"
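
To double-check the generated secret, you can decode its .dockerconfigjson entry; a quick verification sketch:

# Print the docker config stored in the registry-creds secret
kubectl get secret registry-creds -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d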

Create a serviceAccount that the platform will use to perform the build and that can authenticate with the registry using the secret’s credentials:

kubectl delete -f k8s/shipwright/secured/sa.yml
kubectl apply -f k8s/shipwright/secured/sa.yml

Add the self-signed certificate to a configMap. It will be mounted as a volume to set the env var SSL_CERT_DIR, used by the go-containerregistry lib (of lifecycle) to access the registry using the HTTPS/TLS protocol.

kubectl delete configmap certificate-registry
kubectl create configmap certificate-registry \
  --from-file=kind-registry.crt=$HOME/.registry/certs/kind-registry.local/client.crt

Deploy the ClusterBuildStrategy file from the secured folder, as it includes a new volume to mount the certificate:

apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildpacks
spec:
  volumes:
    - name: certificate-registry
      configMap:
        name: certificate-registry
...
parameters:
  - name: certificate-path
    description: Path to self signed certificate(s)
...
- name: export
  image: $(params.CNB_LIFECYCLE_IMAGE)
  imagePullPolicy: Always
  ...
  volumeMounts:
    - mountPath: /selfsigned-certificates
      name: certificate-registry
      readOnly: true

All steps

Setup first the kind cluster and docker registry

curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/registry.sh" | bash -s install
ℹ️
To install a secured (HTTPS and authentication) docker registry, pass the parameter: --secure-registry

Next, install Tekton and Shipwright

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.56.4/release.yaml
kubectl apply -f https://github.com/shipwright-io/build/releases/download/v0.12.0/release.yaml

Apply the following hack to create a self signed certificate on the cluster, otherwise the shipwright webhook will fail to start

curl --silent --location https://raw.githubusercontent.com/shipwright-io/build/v0.12.0/hack/setup-webhook-cert.sh | bash

And finally, deploy the resources using either an unsecured or secured container registry

  1. Unsecured

Deploy the needed resources

DIR="unsecured"
kubectl delete buildrun -lbuild.shipwright.io/name=buildpack-quarkus-build
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/build.yml
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/clusterbuildstrategy.yml
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/pvc.yml

kubectl create -f k8s/shipwright/${DIR}/v1beta1/pvc.yml
kubectl apply  -f k8s/shipwright/${DIR}/v1beta1/clusterbuildstrategy.yml
kubectl apply  -f k8s/shipwright/${DIR}/v1beta1/build.yml
kubectl create -f k8s/shipwright/${DIR}/v1beta1/buildrun.yml
  2. Secured (TO BE REVIEWED!!)

Deploy the needed resources

DIR="secured"
kubectl create configmap certificate-registry \
  --from-file=kind-registry.crt=./k8s/shipwright/${DIR}/binding/ca-certificates/kind-registry.local.crt

REGISTRY_HOST="kind-registry.local:5000" REGISTRY_USER=admin REGISTRY_PASSWORD=snowdrop
kubectl create secret docker-registry registry-creds \
  --docker-server="${REGISTRY_HOST}" \
  --docker-username="${REGISTRY_USER}" \
  --docker-password="${REGISTRY_PASSWORD}"

kubectl apply  -f k8s/shipwright/${DIR}/v1beta1/sa.yml
kubectl apply  -f k8s/shipwright/${DIR}/v1beta1/clusterbuildstrategy.yml
kubectl apply  -f k8s/shipwright/${DIR}/v1beta1/build.yml
kubectl create -f k8s/shipwright/${DIR}/v1beta1/buildrun.yml

To clean up

DIR="unsecured"
kubectl delete secret registry-creds
kubectl delete buildrun -lbuild.shipwright.io/name=buildpack-quarkus-build
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/build.yml
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/clusterbuildstrategy.yml
kubectl delete -f k8s/shipwright/${DIR}/v1beta1/pvc.yml

5. RHTAP

This section is not maintained anymore.

Prerequisites

  • Have access to RHTAP - https://console.redhat.com/preview/hac/

  • Have kubectl (or oc client) installed on your machine

  • Add the kubernetes context of AppStudio to your local ~/.kube/config file and authenticate using oidc login

  • Add the AppStudio GitHub application to your GitHub Org and select it to be used for all the repositories. More information is available here.

  • (optional) Install the Tekton client

Env variables

To execute the commands defined hereafter, you need to define some env variables. Feel free to change them according to your GitHub organisation, tenant namespace, etc.

GITHUB_ORG_NAME=halkyonio
GITHUB_REPO_TEMPLATE=https://github.com/redhat-buildpacks/catalog.git
GITHUB_REPO_DEMO_NAME=rhtap-buildpack-demo-1
GITHUB_REPO_DEMO_TITLE="RHTAP Buildpack Demo 1"
BRANCH=main

APPLICATION_NAME=$GITHUB_REPO_DEMO_NAME
COMPONENT_NAME="quarkus-hello"
# Quarkus devfile sample
DEVFILE_URL=https://raw.githubusercontent.com/devfile-samples/devfile-sample-code-with-quarkus/main/devfile.yaml

PAC_NAME=$COMPONENT_NAME
PAC_YAML_FILE=".tekton/$GITHUB_REPO_DEMO_NAME-push.yaml"
PAC_EVENT_TYPE="push" # Possible values: "push", "pull_request"

TENANT_NAMESPACE="<YOUR_TENANT_NAMESPACE>"
REGISTRY_URL=quay.io/redhat-user-workloads/$TENANT_NAMESPACE/$GITHUB_REPO_DEMO_NAME/$COMPONENT_NAME
BUILD_ID=0 # ID used to generate the following kubernetes label's value: test-01 for rhtap.snowdrop.deb/build

# Quarkus runtime
SOURCE_SUB_PATH="."
CNB_LOG_LEVEL="debug"
CNB_BUILDER_IMAGE="paketocommunity/builder-ubi-base:0.0.54"
CNB_BUILD_IMAGE="paketobuildpacks/build-jammy-tiny:latest"
CNB_RUN_IMAGE="paketobuildpacks/run-jammy-tiny:latest"

CNB_ENV_VARS='
"BP_NATIVE_IMAGE=false",
"BP_MAVEN_BUILT_ARTIFACT=target/quarkus-app/lib/ target/quarkus-app/*.jar target/quarkus-app/app/ target/quarkus-app/quarkus/",
"BP_MAVEN_BUILD_ARGUMENTS=package -DskipTests=true -Dmaven.javadoc.skip=true -Dquarkus.package.type=fast-jar"
'

HowTo

To create a new GitHub repository and import the needed files, perform the following actions:

  • Authenticate with GitHub: gh auth login --with-token <YOUR_GITHUB_TOKEN>

  • Create a GitHub repository

gh repo delete $GITHUB_ORG_NAME/$GITHUB_REPO_DEMO_NAME --yes
gh repo create \
  --clone $GITHUB_ORG_NAME/$GITHUB_REPO_DEMO_NAME \
  --public

cd $GITHUB_REPO_DEMO_NAME
  • Get the RHTAP pipelineRun template, rename it and set the different parameters

mkdir .tekton
curl -sOL https://raw.githubusercontent.com/redhat-buildpacks/catalog/main/tekton/pipelinerun/rhtap/pipelinerun-buildpacks-template.yaml
mv pipelinerun-buildpacks-template.yaml .tekton/$GITHUB_REPO_DEMO_NAME-push.yaml

sed -i.bak "s/#GITHUB_ORG_NAME#/$GITHUB_ORG_NAME/g" $PAC_YAML_FILE
sed -i.bak "s/#GITHUB_REPO_NAME#/$GITHUB_REPO_DEMO_NAME/g" $PAC_YAML_FILE
sed -i.bak "s/#APPLICATION_NAME#/$APPLICATION_NAME/g" $PAC_YAML_FILE
sed -i.bak "s/#COMPONENT_NAME#/$COMPONENT_NAME/g" $PAC_YAML_FILE
sed -i.bak "s/#PAC_NAME#/$PAC_NAME/g" $PAC_YAML_FILE
sed -i.bak "s/#TENANT_NAMESPACE#/$TENANT_NAMESPACE/g" $PAC_YAML_FILE
sed -i.bak "s|#REGISTRY_URL#|$REGISTRY_URL|g" $PAC_YAML_FILE
sed -i.bak "s|#BUILD_ID#|$BUILD_ID|g" $PAC_YAML_FILE
sed -i.bak "s|#EVENT_TYPE#|$PAC_EVENT_TYPE|g" $PAC_YAML_FILE

sed -i.bak "s|#SOURCE_SUB_PATH#|$SOURCE_SUB_PATH|g" $PAC_YAML_FILE
sed -i.bak "s|#CNB_LOG_LEVEL#|$CNB_LOG_LEVEL|g" $PAC_YAML_FILE
sed -i.bak "s|#CNB_BUILDER_IMAGE#|$CNB_BUILDER_IMAGE|g" $PAC_YAML_FILE
sed -i.bak "s|#CNB_BUILD_IMAGE#|$CNB_BUILD_IMAGE|g" $PAC_YAML_FILE
sed -i.bak "s|#CNB_RUN_IMAGE#|$CNB_RUN_IMAGE|g" $PAC_YAML_FILE

#
PAC_FILE_NAME="$GITHUB_REPO_DEMO_NAME-push"
yq -o=json '.' .tekton/$PAC_FILE_NAME.yaml > .tekton/$PAC_FILE_NAME.json
jq --argjson array "[$CNB_ENV_VARS]" '(.spec.params[] | select(.name=="cnbBuildEnvVars")).value |= $array' .tekton/$PAC_FILE_NAME.json > temp.json
cat temp.json | yq -P > .tekton/$PAC_FILE_NAME.yaml

rm {temp.json,.tekton/$PAC_FILE_NAME.json}
rm $PAC_YAML_FILE.bak
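
To confirm that the build env vars were injected as expected, you can query the generated file; a verification sketch using the same yq binary as above:

# Print the cnbBuildEnvVars parameter of the generated PipelineRun
yq '.spec.params[] | select(.name == "cnbBuildEnvVars")' $PAC_YAML_FILE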
  • Create a Quarkus Hello project locally

mvn io.quarkus.platform:quarkus-maven-plugin:3.3.2:create \
-DprojectGroupId=dev.snowdrop \
-DprojectArtifactId=hello \
-DprojectVersion=1.0 \
-Dextensions='resteasy-reactive,kubernetes,buildpack'
  • Commit the project to your GitHub org

mv ./hello/* ./
mv ./hello/{.dockerignore,.gitignore} ./
mv ./hello/.mvn ./
rm -rf ./hello
SSH_REPO_NAME=$(gh repo view https://github.com/$GITHUB_ORG_NAME/$GITHUB_REPO_DEMO_NAME --json sshUrl --jq .sshUrl)
git remote set-url origin $SSH_REPO_NAME

echo ".idea/" >> .gitignore
git add .
git commit -asm "Quarkus and RHTAP Tekton project"
git push -u origin main
  • Create the following Application and Component CRs

cat <<EOF | kubectl apply -n $TENANT_NAMESPACE -f -
---
apiVersion: appstudio.redhat.com/v1alpha1
kind: Application
metadata:
  name: $GITHUB_REPO_DEMO_NAME
spec:
  appModelRepository:
    url: ""
  displayName: $GITHUB_REPO_DEMO_NAME
  gitOpsRepository:
    url: ""
---
apiVersion: appstudio.redhat.com/v1alpha1
kind: Component
metadata:
  annotations:
    appstudio.openshift.io/pac-provision: request
    image.redhat.com/generate: '{"visibility":"public"}'
  name: $COMPONENT_NAME
spec:
  application: $GITHUB_REPO_DEMO_NAME
  componentName: $COMPONENT_NAME
  replicas: 1
  resources:
    requests:
      cpu: 10m
      memory: 100Mi
  source:
    git:
      context: ./
      devfileUrl: $DEVFILE_URL
      revision: main
      url: https://github.com/halkyonio/$GITHUB_REPO_DEMO_NAME.git
  targetPort: 8080
EOF
  • Check the resources created

for entity in pods deployments routes services taskruns pipelineruns applications components snapshotenvironmentbinding.appstudio.redhat.com componentdetectionquery.appstudio.redhat.com; do
  count=$(kubectl -n $TENANT_NAMESPACE get "$entity" -o name | wc -l)
  echo "$count $entity"
done | sort -n
ℹ️
Use one of the RHTAP bash scripts aiming to automate the whole process described: ./scripts/rhtap-demo{1,2,3}
  • Cleaning

kubectl delete application/$GITHUB_REPO_DEMO_NAME

Todo

  • Try to make a test using our own quay.io credentials and repository using REGISTRY_URL=quay.io/$GITHUB_ORG_NAME

Issue

Full image path not supported

The lifecycle component, and most probably the google go-containerregistry library (used by lifecycle to access the registry), do not support this advanced feature: https://kubernetes.io/docs/concepts/containers/images/#kubelet-credential-provider. The consequence is that, if several secrets are attached to the appstudio-pipeline service account (and subsequently to the pod running lifecycle), then lifecycle will raise an issue during the analysis step if the first entry of the auths: config file (built from the mounted secrets) is not the full image path matching the image name declared as the output image.

To work around the issue of the full image path not being supported by lifecycle (and go-containerregistry), patch the secret:

CFG=$(cat <<EOF
{"auths":{"quay.io":{"auth":"cmVkaG...aRkFGNTQ="}}}
EOF
)

SECRET_NAME=$COMPONENT_NAME
TENANT_NAMESPACE="cmoullia-tenant"

kubectl get secret $SECRET_NAME -n $TENANT_NAMESPACE -o json | \
  jq --arg new_val "$(echo -n $CFG | base64)" '.data[".dockerconfigjson"]=$new_val' | \
  kubectl apply -f -
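
To confirm the patch was applied, decode the secret again; a verification sketch reusing the variables defined above:

# The auths entry should now start with the quay.io key
kubectl get secret $SECRET_NAME -n $TENANT_NAMESPACE -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d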