Merge pull request #2 from mazgi/support-aws-s3-backend
Support AWS S3 backend
mazgi committed Sep 11, 2022
2 parents a16b5bd + 38391fd commit f934db6
Showing 17 changed files with 459 additions and 251 deletions.
103 changes: 83 additions & 20 deletions .github/workflows/default.yml
@@ -2,33 +2,96 @@ name: default

on:
push:
workflow_dispatch:

jobs:
provisioning:
timeout-minutes: 10
runs-on: ubuntu-latest
strategy:
matrix:
backend:
- azurerm
- gcs
- s3
steps:
- uses: actions/checkout@v2
- run: |
- uses: actions/checkout@v3
- name: Set up environment variables
run: |
cat<<EOE > .env
AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID
AWS_DEFAULT_REGION=us-east-1
CLOUDSDK_CORE_PROJECT=YOUR_GCP_PROJECT_ID
PROJECT_UNIQUE_ID=YOUR_GCP_PROJECT_UNIQUE_ID
# Todo https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses
TF_VAR_allowed_ipaddr_list=["0.0.0.0/0"]
EOE
- run: |
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
echo "DOCKER_GID=$(getent group docker | cut -d : -f 3)" >> .env
# - run: |
# echo "AWS_ACCESS_KEY_ID=${PROVISIONING_AWS_ACCESS_KEY_ID}" >> .env
# echo "AWS_SECRET_ACCESS_KEY=${PROVISIONING_AWS_SECRET_ACCESS_KEY}" >> .env
# env:
# PROVISIONING_AWS_ACCESS_KEY_ID: ${{ secrets.PROVISIONING_AWS_ACCESS_KEY_ID }}
# PROVISIONING_AWS_SECRET_ACCESS_KEY: ${{ secrets.PROVISIONING_AWS_SECRET_ACCESS_KEY }}
# PROVISIONING_GOOGLE_SA_KEY: ${{ secrets.PROVISIONING_GOOGLE_SA_KEY }}
# - run: docker-compose build
# - run: docker-compose up
# - run: docker-compose run provisioning terraform fmt -check
# - run: docker-compose run provisioning terraform plan
# - run: docker-compose run provisioning terraform apply -auto-approve
# if: github.ref == 'refs/heads/main'
- name: Export credentials
run: |
echo "PROJECT_UNIQUE_ID=${PROJECT_UNIQUE_ID}" >> .env
echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" >> .env
echo "AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID}" >> .env
echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" >> .env
echo "ARM_CLIENT_ID=${ARM_CLIENT_ID}" >> .env
echo "ARM_CLIENT_SECRET=${ARM_CLIENT_SECRET}" >> .env
echo "ARM_SUBSCRIPTION_ID=${ARM_SUBSCRIPTION_ID}" >> .env
echo "ARM_TENANT_ID=${ARM_TENANT_ID}" >> .env
echo "CLOUDSDK_CORE_PROJECT=${CLOUDSDK_CORE_PROJECT}" >> .env
env:
PROJECT_UNIQUE_ID: ${{ secrets.PROJECT_UNIQUE_ID }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
CLOUDSDK_CORE_PROJECT: ${{ secrets.CLOUDSDK_CORE_PROJECT }}
- name: Export GOOGLE_SA_KEY
run: |
echo ${GOOGLE_SA_KEY} > config/credentials/google-cloud-keyfile.provisioning-owner.json
jq -e '. | select(.type=="service_account")' config/credentials/google-cloud-keyfile.provisioning-owner.json > /dev/null
env:
GOOGLE_SA_KEY: ${{ secrets.GOOGLE_SA_KEY }}
- name: (debug) Check services
run: |
docker compose --profile=${{ matrix.backend }} config
- name: Build containers
timeout-minutes: 4
run: |
docker compose --profile=${{ matrix.backend }} build
- name: Start the service
timeout-minutes: 4
run: |
docker compose --profile=${{ matrix.backend }} up --detach
while :
do
docker compose --profile=${{ matrix.backend }} ps --format=json provisioning-${{ matrix.backend }}-backend\
| jq -e '.[] | select(.Health=="healthy")' 2> /dev/null\
&& break
sleep 1
done
- name: Show service logs
timeout-minutes: 1
run: |
docker compose --profile=${{ matrix.backend }} logs
- name: Exec Terraform - check the format for each tf file
run: |
docker compose --profile=${{ matrix.backend }} exec provisioning-${{ matrix.backend }}-backend terraform fmt -check
- name: Exec Terraform - validate
run: |
docker compose --profile=${{ matrix.backend }} exec provisioning-${{ matrix.backend }}-backend terraform validate
- name: Exec Terraform - dry-run
timeout-minutes: 1
run: |
docker compose --profile=${{ matrix.backend }} exec provisioning-${{ matrix.backend }}-backend terraform plan
- name: Exec Terraform - apply
timeout-minutes: 1
if: github.ref == 'refs/heads/main'
run: |
docker compose --profile=${{ matrix.backend }} exec provisioning-${{ matrix.backend }}-backend terraform apply -auto-approve
- name: Stop the service
timeout-minutes: 1
run: |
rm -rf tmp/run/container-*/
sleep 2
docker compose down --remove-orphans
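The "Start the service" step above polls `docker compose ps --format=json` and pipes it through `jq -e '.[] | select(.Health=="healthy")'` until a container reports healthy. The parsing half of that loop can be sketched standalone in Python; the sample payloads are illustrative, and note that newer Compose releases emit newline-delimited JSON objects instead of a single array, which this sketch also accepts:

```python
import json

def healthy_services(payload: str) -> list:
    """Return the names of services reported as healthy.

    Accepts both output shapes of `docker compose ps --format=json`:
    a single JSON array and newline-delimited JSON objects.
    """
    text = payload.strip()
    if not text:
        return []
    try:
        data = json.loads(text)
        entries = data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        # Fall back to one JSON object per line (NDJSON).
        entries = [json.loads(line) for line in text.splitlines() if line.strip()]
    return [e.get("Service", e.get("Name", "")) for e in entries
            if e.get("Health") == "healthy"]

if __name__ == "__main__":
    array_form = '[{"Name": "provisioning-s3-backend", "Health": "healthy"}]'
    ndjson_form = '{"Name": "a", "Health": "starting"}\n{"Name": "b", "Health": "healthy"}'
    print(healthy_services(array_form))   # → ['provisioning-s3-backend']
    print(healthy_services(ndjson_form))  # → ['b']
```

The workflow's shell loop breaks as soon as this returns a non-empty result; the `sleep 1` retry and the outer `timeout-minutes` bound the wait.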
167 changes: 98 additions & 69 deletions README.md
@@ -1,100 +1,129 @@
# template.dockerized-provisioning-project

[![default](https://github.com/mazgi/template.dockerized-provisioning-project/workflows/default/badge.svg)](https://github.com/mazgi/template.dockerized-provisioning-project/actions?query=workflow%3Adefault)

## How to set up

You need one AWS account and one GCP project, each of which you can fully manage.
You also need to obtain credentials after setting up the system accounts for provisioning as described below.
[![default](https://github.com/mazgi/template.dockerized-provisioning-project/actions/workflows/default.yml/badge.svg)](https://github.com/mazgi/template.dockerized-provisioning-project/actions/workflows/default.yml)

This repository is a template for provisioning your Cloud and Local environment using [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/).

## How to Use

<u>Docker and [Docker Compose](https://docs.docker.com/compose/)</u> are needed. If you want to provision only local environments, that's all.

However, if you want to provision a cloud environment, you need administrative permission on at least one cloud: [AWS](https://aws.amazon.com/), [Azure](https://azure.microsoft.com/), or [Google Cloud](https://cloud.google.com/).
You also need to set up the repository with the following steps.

### Step 1. Write out your IDs and credentials in the .env file.

Write your account IDs and credentials, depending on which clouds you need (AWS, Azure, and/or Google Cloud), into the `.env` file as follows.

```.env
PROJECT_UNIQUE_ID=my-unique-b78e
_TERRAFORM_BACKEND_TYPE=azurerm
TF_VAR_allowed_ipaddr_list=["203.0.113.0/24"]
#
# <AWS>
AWS_ACCESS_KEY_ID=AKXXXXXXXX
AWS_ACCOUNT_ID=123456789012
# AWS_DEFAULT_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=AWxxxxxxxx00000000
# </AWS>
#
# <Azure>
# AZURE_DEFAULT_LOCATION=centralus
ARM_CLIENT_ID=xxxxxxxx-0000-0000-0000-xxxxxxxxxxxx
ARM_CLIENT_SECRET=ARxxxxxxxx00000000
ARM_SUBSCRIPTION_ID=yyyyyyyy-0000-0000-0000-yyyyyyyyyyyy
ARM_TENANT_ID=zzzzzzzz-0000-0000-0000-zzzzzzzzzzzz
# </Azure>
#
# <Google>
# GCP_DEFAULT_REGION=us-central1
CLOUDSDK_CORE_PROJECT=my-proj-b78e
# </Google>
```
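Before running Compose, the `.env` file can be sanity-checked for the two variables this README marks as always required. A minimal sketch (the helper names are illustrative, not part of this repository):

```python
# Sanity-check a .env file for the variables marked as always required
# (PROJECT_UNIQUE_ID and _TERRAFORM_BACKEND_TYPE).

def parse_dotenv(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

REQUIRED = ("PROJECT_UNIQUE_ID", "_TERRAFORM_BACKEND_TYPE")

def missing_keys(text: str) -> list:
    """Return required variable names that are absent or empty."""
    env = parse_dotenv(text)
    return [k for k in REQUIRED if not env.get(k)]

if __name__ == "__main__":
    sample = "PROJECT_UNIQUE_ID=my-unique-b78e\n_TERRAFORM_BACKEND_TYPE=s3\n"
    print(missing_keys(sample))  # → []
```

This only checks presence, not validity; the per-cloud variables in the tables below are additionally required depending on the chosen backend.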

### How to set up your AWS IAM user
In addition, if you use Google Cloud, you should place the [key file for Google Cloud Service Account](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) as `config/credentials/google-cloud-keyfile.provisioning-owner.json`.
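The CI workflow validates this key file with `jq -e '. | select(.type=="service_account")'`. An equivalent standalone check, sketched in Python (the function name is illustrative):

```python
import json

def looks_like_service_account(raw: str) -> bool:
    """Equivalent of jq's: . | select(.type == "service_account")"""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(doc, dict) and doc.get("type") == "service_account"

if __name__ == "__main__":
    print(looks_like_service_account('{"type": "service_account"}'))   # → True
    print(looks_like_service_account('{"type": "authorized_user"}'))   # → False
```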

You should create an AWS IAM user named `provisioning-admin` with the following permissions attached.
#### Environment Variable Names

- `AdministratorAccess`
Environment variable names and their uses are as follows.

### How to set up your Azure service principal
<details>
<summary>Common</summary>
| Name | Required with Terraform | Value |
| -------------------------- | ----------------------- | --------------------------------------------------------------------------------------------------------------- |
| PROJECT_UNIQUE_ID | **Yes** | An ID to indicate your environment.<br/>The value is used for an Object Storage bucket or Storage Account name. |
| \_TERRAFORM_BACKEND_TYPE | **Yes** | Acceptable values are `azurerm`, `gcs`, and `s3`. |
| TF_VAR_allowed_ipaddr_list | no | IP address ranges you want access to your cloud environment. |

You should create an Azure service principal named `provisioning-owner` with the following roles assigned.
</details>
<details>
<summary>AWS</summary>

- `Owner`
| Name | Required with AWS | Value |
| --------------------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| AWS_ACCOUNT_ID | **Yes** | A 12-digit AWS Account ID you want to provision.<br/>The S3 bucket is created in this account to store the tfstate file if you choose the S3 backend. |
| AWS_ACCESS_KEY_ID | **Yes** | An AWS Access Key for the IAM user that is used to create the S3 bucket to store tfstate file and apply all in your AWS environment. |
| AWS_SECRET_ACCESS_KEY | **Yes** | |
| AWS_DEFAULT_REGION | no | |

### How to set up your GCP service account
</details>
<details>
<summary>Azure</summary>

You should create a GCP service account named `provisioning-owner` with the following roles assigned.
| Name | Required with Azure | Value |
| ---------------------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ARM_TENANT_ID | **Yes** | A UUID to indicate Azure Tenant. |
| ARM_SUBSCRIPTION_ID | **Yes** | A UUID to indicate Azure Subscription you want to provision.<br/>The Resource Group, Storage Account, and Blob Container are created in this subscription to store the tfstate file if you choose the AzureRM backend. |
| ARM_CLIENT_ID | **Yes** | |
| ARM_CLIENT_SECRET | **Yes** | |
| AZURE_DEFAULT_LOCATION | no | |

- `Project Owner`
- `Storage Admin`
</details>
<details>
<summary>Google Cloud</summary>

### How to set up your local environment
| Name | Required with Google Cloud | Value |
| --------------------- | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CLOUDSDK_CORE_PROJECT | **Yes** | A string Project ID identifying the Google Cloud project you want to provision (not the Project name or Project number).<br/>The Cloud Storage bucket is created in this project to store the tfstate file if you choose the GCS backend.<br/>See also https://cloud.google.com/resource-manager/docs/creating-managing-projects |
| GCP_DEFAULT_REGION | no | |

You need to create the `.env` file as follows.
</details>

```shellsession
rm -f .env
test "$(uname -s)" = 'Linux' && printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" >> .env
echo "DOCKER_GID=$(getent group docker | cut -d : -f 3)" >> .env
cat<<EOE >> .env
PROJECT_UNIQUE_ID=YOUR_PROJECT_UNIQUE_ID
EOE
```

```console
echo TF_VAR_allowed_ipaddr_list='["'$(curl -sL ifconfig.io)'/32"]' >> .env
```
### Step 2. Define your service in the `docker-compose.yml`

Place your credentials into `config/credentials/` directory.
If you are using [1Password command-line tool](https://1password.com/downloads/command-line/), you can get credentials as follows from your 1Password vault.
Uncomment the `provisioning` service in the [`docker-compose.yml`](docker-compose.yml) as follows, or define your own service.

```shellsession
eval $(op signin my)
source .env
op get document arn:aws:iam::${AWS_ACCOUNT_ID}:user/provisioning-admin > config/credentials/new_user_credentials.csv
op get document azure-service-principal.json > config/credentials/azure-service-principal.json
op get document provisioning-owner@${CLOUDSDK_CORE_PROJECT}.iam.gserviceaccount.com > config/credentials/google-cloud-keyfile.json
```
```yaml
services:
# provisioning:
# <<: *provisioning-base
```

### AWS
:arrow_down:

You need to update the `.env` file as follows.

```shellsession
source .env
echo "AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID" >> .env
echo "AWS_DEFAULT_REGION=us-east-1" >> .env
echo "AWS_ACCESS_KEY_ID=$(tail -1 config/credentials/new_user_credentials.csv | cut -d, -f3)" >> .env
echo "AWS_SECRET_ACCESS_KEY=$(tail -1 config/credentials/new_user_credentials.csv | cut -d, -f4)" >> .env
```
```yaml
services:
provisioning:
<<: *provisioning-base
```

### Azure
Now, you are able to provision your environment as follows. :tada:

```shellsession
source .env
echo "ARM_SUBSCRIPTION_ID=YOUR_SUBSCRIPTION" >> .env
echo "ARM_CLIENT_ID=$(jq -r .appId config/credentials/azure-service-principal.json)" >> .env
echo "ARM_CLIENT_SECRET=$(jq -r .password config/credentials/azure-service-principal.json)" >> .env
echo "ARM_TENANT_ID=$(jq -r .tenant config/credentials/azure-service-principal.json)" >> .env
```
```console
docker compose up
```

### Google Cloud

```shellsession
source .env
echo "CLOUDSDK_CORE_PROJECT=YOUR_GCP_PROJECT_ID" >> .env
```
```console
docker compose exec provisioning terraform apply
```

## How to run
### Step 3. Set secrets for GitHub Actions

Now you can run provisioning as follows.
The [gh command](https://cli.github.com/) helps set secrets.

```shellsession
docker-compose up
docker-compose run provisioning terraform plan
```
```console
gh secret set --app actions --env-file .env
```

## How to get credentials for GitHub Actions

```shellsession
docker-compose run provisioning terraform output github-actions-admin-credentials
docker-compose run provisioning terraform output github-actions-owner-credentials-json
```
```console
cat config/credentials/google-cloud-keyfile.provisioning-owner.json\
| gh secret set GOOGLE_SA_KEY --app=actions
```
4 changes: 0 additions & 4 deletions config/production/.gitignore

This file was deleted.

