Toolforge Build Service

In the effort to set up a viable buildpacks workflow, this repo houses the manifests, code, and values needed to deploy the actual build service itself. It is meant to aid collaboration until GitLab or Gerrit hosting is definitively viable.

Some components necessary to the healthy functioning of the service, such as custom webhooks, will likely not be included in this repository.

See also https://wikitech.wikimedia.org/wiki/Wikimedia_Cloud_Services_team/EnhancementProposals/Toolforge_Buildpack_Implementation

Set up a dev environment

Requirements

You will need to install:

  • docker -> as the container engine for everything.
  • docker-compose -> for harbor (the docker registry).
  • minikube -> as the dev kubernetes installation.
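
A quick way to sanity-check the installs (plain version commands, nothing project-specific):

  • docker version
  • docker-compose version
  • minikube version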

Run harbor

You can install it with the helper script:

  • utils/get_harbor.sh

After that you can use docker-compose to run the whole harbor system (see the output of the script for the exact command).
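If the script unpacked harbor under .harbor/ (the path used in the cleanup section below), the command will look something like:

  • docker-compose -f .harbor/harbor/docker-compose.yml up -d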

Once harbor is running, you will need to make sure that it's set up (user project created and such), so run:

  • utils/setup_harbor.py

Set up minikube

We will need to get a specific k8s version (1.20.11 is the current toolforge version at the time of writing; you might want to double-check):

  • minikube start --kubernetes-version=v1.20.11
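
Once it's up, you can confirm the cluster is running the expected version with plain kubectl (the VERSION column should match the version passed above):

  • kubectl get nodes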

Set up the admission controller (optional)

If you want to do a full stack test, you'll need to deploy the buildpack admission controller too; for that, follow the instructions here. NOTE: it might be faster to build the buildpack admission controller image locally instead of pulling it.
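
One way to do the local build (a sketch; it assumes the admission controller repo is checked out next to this one and ships a Dockerfile at its root) is to point your docker client at minikube's daemon, so the image is visible to the cluster without pushing to a registry:

  • eval $(minikube docker-env)
  • docker build -t buildpack-admission-controller:dev ../buildpack-admission-controller

The image tag and path here are illustrative; adjust them to whatever the admission controller's manifests expect.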

Deploying

If you want to check first what would be deployed, you can run:

  • kubectl kustomize deploy/base-tekton | vim -
  • kubectl kustomize deploy/devel | vim -

Deploying this system can be done with:

  • kubectl apply -k deploy/base-tekton -> creates the CRDs and tekton-related objects
  • kubectl apply -k deploy/devel -> uses the CRDs defined above
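
To verify that the CRDs landed, you can list the tekton API resources (standard kubectl, nothing repo-specific):

  • kubectl api-resources --api-group=tekton.dev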

Run a pipeline

  • kubectl create -f examples/pipeline.yaml
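
Once created, you can check that a pipelinerun got kicked off (image-build is the namespace used throughout the debugging examples below):

  • kubectl get -n image-build pipelineruns.tekton.dev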

Debugging

At this point I recommend installing the tekton cli, which makes it easier to inspect things (otherwise you have a bunch of JSON to parse).
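
For example, if you use Homebrew (the tekton docs list other install methods):

  • brew install tektoncd-cli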

Getting the taskruns:

  • tkn -n image-build taskruns list

Showing the details for them:

  • tkn -n image-build taskruns describe

Following the logs of a specific run live:

  • tkn -n image-build taskruns logs -f minikube-user-buildpacks-pipelinerun-n8mbj-build-from-git-6r2hf

Of course, you can get all that info with kubectl directly too; the output is quite a bit more raw, though that might help when debugging tricky issues:

  • kubectl describe -n image-build taskruns.tekton.dev minikube-user-buildpacks-pipelinerun-n8mbj-build-from-git-6r2hf

Cleanup

If you want to remove everything you did to start from scratch, you can just:

  • minikube delete

NOTE: this will delete the whole k8s cluster. If you are playing with other things in that same cluster, you might want to delete each namespace/resource one by one instead.

If you installed harbor with this guide

Deleting the cluster will not remove the volumes on the harbor side; to do so, you'll have to stop harbor:

  • docker-compose -f .harbor/harbor/docker-compose.yml down -v --remove-orphans

And you'll need to delete the data directory (sudo is needed due to files created/modified inside the containers):

  • sudo rm -rf .harbor/harbor/data

NOTE: For production, TBD
