gcp(gke): add GKE + Kubernetes Hello World
metral committed Feb 8, 2019
1 parent 4a3f91e commit e8527bd
Showing 6 changed files with 416 additions and 0 deletions.
12 changes: 12 additions & 0 deletions gcp-ts-gke-hello-world/Pulumi.yaml
@@ -0,0 +1,12 @@
name: gcp-ts-gke-hello-world
description: A Google Kubernetes Engine (GKE) + Kubernetes Hello World example
runtime: nodejs
template:
  config:
    gcp:project:
      description: The Google Cloud project to deploy into
    gcp:zone:
      description: The Google Cloud zone
    gcp:credentials:
      description: Your GCP Service Account key contents for GKE Administration
      secret: true
223 changes: 223 additions & 0 deletions gcp-ts-gke-hello-world/README.md
@@ -0,0 +1,223 @@
[![Deploy](https://get.pulumi.com/new/button.svg)](https://app.pulumi.com/new)

# Google Kubernetes Engine (GKE) Cluster

This example deploys a Google Cloud Platform (GCP) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) cluster, and deploys a Kubernetes Namespace and Deployment of NGINX into it.

## Deploying the App

To deploy your infrastructure, follow the steps below.

### Prerequisites

1. [Install Pulumi](https://pulumi.io/install)
1. [Install Node.js 8.11.3](https://nodejs.org/en/download/)
1. [Install Google Cloud SDK (`gcloud`)](https://cloud.google.com/sdk/docs/downloads-interactive)
1. [Configure GCP Service Account Key & Download Credentials](https://pulumi.io/install/gcp.html)
* **Note**: The Service Account key credentials used must have the
role `Kubernetes Engine Admin` / `container.admin`

### Steps

After cloning this repo, from this working directory, run these commands:

1. Install the required Node.js packages:

```bash
$ npm install
```

1. Create a new stack, which is an isolated deployment target for this example:

```bash
$ pulumi stack init
```

1. Set the required GCP configuration variables:

Note: `credentials.json` is the GCP Service Account key downloaded from the [GCP
Credentials](https://console.cloud.google.com/apis/credentials) page.

```bash
$ cat credentials.json | pulumi config set gcp:credentials --secret
$ pulumi config set gcp:project <your-gcp-project-here>
$ pulumi config set gcp:zone us-west1-a # or any valid GCP zone
```

By default, your cluster will have 2 nodes of type `n1-standard-1`.
This is configurable; for instance, to use
3 nodes of type `n1-standard-2` instead, we can run these commands:

```bash
$ pulumi config set nodeCount 3
$ pulumi config set nodeMachineType n1-standard-2
```

This shows how stacks can be configurable in useful ways. You can even change these after provisioning.
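
The `config.ts` added in this commit reads these keys with fallback defaults (e.g. `config.getNumber("nodeCount") || 2`). As a plain-TypeScript sketch of that fallback pattern (no Pulumi required; `nodeCountOrDefault` is an illustrative helper, not part of the example):

```typescript
// Sketch of the `config.getNumber("nodeCount") || 2` fallback pattern from
// config.ts. Pulumi's getNumber returns undefined for an unset key, and
// `|| 2` then supplies the default. Caveat: `||` also maps 0 back to 2,
// which is acceptable here since a zero-node cluster is not useful.
function nodeCountOrDefault(raw: number | undefined): number {
    return raw || 2;
}

console.log(nodeCountOrDefault(undefined)); // 2 (key unset)
console.log(nodeCountOrDefault(3));         // 3 (explicitly configured)
```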

1. Stand up the GKE cluster:

To preview and deploy changes, run `pulumi update` and select "yes."

The `update` sub-command shows a preview of the resources that will be created
and prompts on whether to proceed with the deployment. Note that the stack
itself is counted as a resource, though it does not correspond
to a physical cloud resource.

```bash
$ pulumi update
Previewing update (gke-demo):

Type Name Plan
+ pulumi:pulumi:Stack gcp-ts-gke-hello-world-gke-demo create
+ ├─ gcp:container:Cluster helloworld create
+ ├─ pulumi:providers:kubernetes helloworld create
+ ├─ kubernetes:core:Namespace helloworld create
+ ├─ kubernetes:apps:Deployment helloworld create
+ └─ kubernetes:core:Service helloworld create

Resources:
+ 6 to create

Updating (gke-demo):

Type Name Status
+ pulumi:pulumi:Stack gcp-ts-gke-hello-world-gke-demo created
+ ├─ gcp:container:Cluster helloworld created
+ ├─ pulumi:providers:kubernetes helloworld created
+ ├─ kubernetes:core:Namespace helloworld created
+ ├─ kubernetes:apps:Deployment helloworld created
+ └─ kubernetes:core:Service helloworld created

Outputs:
clusterName : "helloworld-e1557dc"
deploymentName : "helloworld-tlsr4sg5"
kubeconfig : "<KUBECONFIG_CONTENTS>"
namespaceName : "helloworld-pz4u5kyq"
serviceName : "helloworld-l61b5dby"
servicePublicIP: "35.236.26.151"

Resources:
+ 6 created

Duration: 3m51s
```

1. After 3-5 minutes, your cluster will be ready, and the kubeconfig JSON you'll use to connect to the cluster will
be available as an output.

As part of the update, you'll see some new objects in the output: a
`Namespace` in Kubernetes to deploy into, a `Deployment` resource for
the NGINX app, and a LoadBalancer `Service` to publicly access NGINX.

Pulumi understands which changes to a given cloud resource can be made
in-place, and which require replacement, and computes
the minimally disruptive change to achieve the desired state.

**Note:** Pulumi auto-generates a suffix for all objects. Pulumi's object model does
create-before-delete replacements by default on updates, but this will only work if
you are using name auto-generation so that the newly created resource is
guaranteed to have a differing, non-conflicting name. Doing this
allows a new resource to be created, and dependencies to be updated to
point to the new resource, before the old resource is deleted.
This is generally quite useful.

```
...
deploymentName : "helloworld-tlsr4sg5"
...
namespaceName : "helloworld-pz4u5kyq"
serviceName : "helloworld-l61b5dby"
servicePublicIP: "35.236.26.151"
```
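
The auto-naming behavior above can be sketched in plain TypeScript (a hypothetical `autoName` helper, not Pulumi's actual implementation): the logical name gets a short random suffix, so the replacement resource's physical name can never collide with the old one:

```typescript
import * as crypto from "crypto";

// Hypothetical sketch of Pulumi-style auto-naming: append a short random
// suffix to the logical name so that a create-before-delete replacement
// gets a fresh, non-conflicting physical name.
function autoName(logicalName: string): string {
    const suffix = crypto.randomBytes(4).toString("hex").slice(0, 7);
    return `${logicalName}-${suffix}`;
}

const oldName = autoName("helloworld"); // e.g. "helloworld-1a2b3c4"
const newName = autoName("helloworld"); // fresh suffix: no collision with oldName
```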

If you visit the IP address listed in `servicePublicIP`, you should land on the
NGINX welcome page. Note that it may take a minute or so for the
LoadBalancer to become active on GCP.

1. Access the Kubernetes Cluster using `kubectl`

To access your new Kubernetes cluster using `kubectl`, we need to set up the
`kubeconfig` file and download `kubectl`. We can leverage the Pulumi
stack output in the CLI, as Pulumi facilitates exporting these objects for us.

```bash
$ pulumi stack output kubeconfig > kubeconfig
$ export KUBECONFIG=$PWD/kubeconfig
$ export KUBERNETES_VERSION=1.11.6 && sudo curl -s -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl && sudo chmod +x /usr/local/bin/kubectl

$ kubectl version
$ kubectl cluster-info
$ kubectl get nodes
```

We can also use the stack output to query the cluster for our newly created Deployment:

```bash
$ kubectl get deployment $(pulumi stack output deploymentName) --namespace=$(pulumi stack output namespaceName)
$ kubectl get service $(pulumi stack output serviceName) --namespace=$(pulumi stack output namespaceName)
```

We can also create another NGINX Deployment in the `default` namespace using
`kubectl` directly:

```bash
$ kubectl create deployment my-nginx --image=nginx
$ kubectl get pods
$ kubectl delete deployment my-nginx
```

1. Experimentation

From here on, feel free to experiment. Simply making edits and running `pulumi up` afterwards will incrementally update your stack.

For example, if you wish to pull existing Kubernetes YAML manifests into
Pulumi to aid in your transition, append the following code block to the existing
`index.ts` file and run `pulumi up`.

This is an example of how to create the standard Kubernetes Guestbook manifests in
Pulumi using the Guestbook YAML manifests. We take the additional steps of transforming
its properties to use the same Namespace and metadata labels that
the NGINX stack uses, and also make its frontend service use a
LoadBalancer typed Service to expose it publicly.

```typescript
// Create resources for the Kubernetes Guestbook from its YAML manifests
const guestbook = new k8s.yaml.ConfigFile("guestbook",
    {
        file: "https://raw.githubusercontent.com/pulumi/pulumi-kubernetes/master/examples/yaml-guestbook/yaml/guestbook.yaml",
        transformations: [
            (obj: any) => {
                // Do transformations on the YAML to use the same namespace and
                // labels as the NGINX stack above
                if (obj.metadata.labels) {
                    obj.metadata.labels['appClass'] = namespaceName;
                } else {
                    obj.metadata.labels = appLabels;
                }

                // Make the 'frontend' Service public by setting it to be of type
                // LoadBalancer
                if (obj.kind == "Service" && obj.metadata.name == "frontend") {
                    if (obj.spec) {
                        obj.spec.type = "LoadBalancer";
                    }
                }
            },
        ],
    },
    {
        providers: { "kubernetes": clusterProvider },
    },
);

// Export the Guestbook public LoadBalancer endpoint
export const guestbookPublicIP = guestbook.getResourceProperty("v1/Service", "frontend", "status").apply(s => s.loadBalancer.ingress[0].ip);
```

1. Once you've finished experimenting, tear down your stack's resources by destroying and removing it:

```bash
$ pulumi destroy --yes
$ pulumi stack rm --yes
```
20 changes: 20 additions & 0 deletions gcp-ts-gke-hello-world/config.ts
@@ -0,0 +1,20 @@
// Copyright 2016-2019, Pulumi Corporation. All rights reserved.
import * as pulumi from "@pulumi/pulumi";
import { Config } from "@pulumi/pulumi";

const config = new Config();

export const clusterConfig = {
    // nodeCount is the number of cluster nodes to provision. Defaults to 2 if unspecified.
    nodeCount: config.getNumber("nodeCount") || 2,

    // nodeMachineType is the machine type to use for cluster nodes. Defaults to n1-standard-1 if unspecified.
    // See https://cloud.google.com/compute/docs/machine-types for more details on available machine types.
    nodeMachineType: config.get("nodeMachineType") || "n1-standard-1",

    // minMasterVersion is the minimum master version used in the cluster. Defaults to 'latest' if unspecified.
    minMasterVersion: config.get("minMasterVersion") || "latest",

    // nodeVersion is the node version used in the cluster. Defaults to 'latest' if unspecified.
    nodeVersion: config.get("nodeVersion") || "latest",
};
128 changes: 128 additions & 0 deletions gcp-ts-gke-hello-world/index.ts
@@ -0,0 +1,128 @@
// Copyright 2016-2019, Pulumi Corporation. All rights reserved.
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import { clusterConfig } from "./config";

const name = "helloworld";

// Create a GKE cluster
const cluster = new gcp.container.Cluster(name, {
    initialNodeCount: clusterConfig.nodeCount,
    minMasterVersion: clusterConfig.minMasterVersion,
    nodeVersion: clusterConfig.nodeVersion,
    nodeConfig: {
        machineType: clusterConfig.nodeMachineType,
        oauthScopes: [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    },
});

// Export the Cluster name
export const clusterName = cluster.name;

// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi.
    all([ cluster.name, cluster.endpoint, cluster.masterAuth ]).
    apply(([ name, endpoint, masterAuth ]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

// Create a Kubernetes provider instance that uses our cluster from above.
const clusterProvider = new k8s.Provider(name, {
    kubeconfig: kubeconfig,
});

// Create a Kubernetes Namespace
const ns = new k8s.core.v1.Namespace(name, {}, { provider: clusterProvider });

// Export the Namespace name
export const namespaceName = ns.metadata.apply(m => m.name);

// Create a NGINX Deployment
const appLabels = { appClass: name };
const deployment = new k8s.apps.v1.Deployment(name,
    {
        metadata: {
            namespace: namespaceName,
            labels: appLabels,
        },
        spec: {
            replicas: 1,
            selector: { matchLabels: appLabels },
            template: {
                metadata: {
                    labels: appLabels,
                },
                spec: {
                    containers: [
                        {
                            name: name,
                            image: "nginx:latest",
                            ports: [{ name: "http", containerPort: 80 }],
                        },
                    ],
                },
            },
        },
    },
    {
        provider: clusterProvider,
    },
);

// Export the Deployment name
export const deploymentName = deployment.metadata.apply(m => m.name);

// Create a LoadBalancer Service for the NGINX Deployment
const service = new k8s.core.v1.Service(name,
    {
        metadata: {
            labels: appLabels,
            namespace: namespaceName,
        },
        spec: {
            type: "LoadBalancer",
            ports: [{ port: 80, targetPort: "http" }],
            selector: appLabels,
        },
    },
    {
        provider: clusterProvider,
    },
);

// Export the Service name and public LoadBalancer endpoint
export const serviceName = service.metadata.apply(m => m.name);
export const servicePublicIP = service.status.apply(s => s.loadBalancer.ingress[0].ip);
9 changes: 9 additions & 0 deletions gcp-ts-gke-hello-world/package.json
@@ -0,0 +1,9 @@
{
    "name": "gcp-ts-gke-hello-world",
    "dependencies": {
        "@types/node": "latest",
        "@pulumi/gcp": "latest",
        "@pulumi/kubernetes": "latest",
        "@pulumi/pulumi": "latest"
    }
}