Multi-cloud Kubernetes example (pulumi#359)
lblackstone committed Aug 7, 2019
1 parent fb6398e commit 816efc0
Showing 13 changed files with 574 additions and 0 deletions.
3 changes: 3 additions & 0 deletions kubernetes-ts-multicloud/.gitignore
@@ -0,0 +1,3 @@
/bin/
/node_modules/
/.pulumi/
16 changes: 16 additions & 0 deletions kubernetes-ts-multicloud/Pulumi.yaml
@@ -0,0 +1,16 @@
name: kubernetes-ts-multicloud
description: Single application deployed to multiple Kubernetes clusters
runtime: nodejs
template:
config:
aws:region:
description: The AWS region to deploy into
default: us-west-2
azure:location:
description: The Azure location to deploy into
default: westus2
gcp:project:
description: The GCP project to deploy into
gcp:zone:
description: The GCP zone to deploy into
default: us-west1-a
79 changes: 79 additions & 0 deletions kubernetes-ts-multicloud/README.md
@@ -0,0 +1,79 @@
[![Deploy](https://get.pulumi.com/new/button.svg)](https://app.pulumi.com/new)

# Kubernetes Application Deployed To Multiple Clusters

This example creates managed Kubernetes clusters using AKS, EKS, and GKE, and deploys the application
on each cluster.

## Deploying the App

To deploy your infrastructure, follow the steps below.

### Prerequisites

1. [Install Pulumi](https://www.pulumi.com/docs/reference/install/)
2. [Install Node.js 8.11.3](https://nodejs.org/en/download/)
3. (Optional) [Configure AWS Credentials](https://www.pulumi.com/docs/reference/clouds/aws/setup/)
4. (Optional) [Configure Azure Credentials](https://www.pulumi.com/docs/reference/clouds/azure/setup/)
5. (Optional) [Configure GCP Credentials](https://www.pulumi.com/docs/reference/clouds/gcp/setup/)
6. (Optional) [Configure local access to a Kubernetes cluster](https://kubernetes.io/docs/setup/)

### Steps

After cloning this repo, from this working directory, run these commands:

1. Install the required Node.js packages:

```bash
$ npm install
```

2. Create a new stack, which is an isolated deployment target for this example:

```bash
$ pulumi stack init
```

3. Set the required configuration variables for this program:

```bash
$ pulumi config set aws:region us-west-2 # Any valid AWS region here.
$ pulumi config set azure:location westus2 # Any valid Azure location here.
$ pulumi config set gcp:project [your-gcp-project-here]
$ pulumi config set gcp:zone us-west1-a # Any valid GCP zone here.
```

Note that you can choose different regions here.

We recommend using `us-west-2` to host your EKS cluster, as other regions (notably `us-east-1`) may have capacity
issues that prevent EKS clusters from being created.

4. (Optional) Disable any clusters you do not want to deploy by commenting out the corresponding lines in
the `index.ts` file. All clusters are enabled by default.
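The `index.ts` file itself is not shown in this diff excerpt, so the following is only a hypothetical sketch of the toggle pattern step 4 describes. Each key stands in for one cluster component (`AksCluster`, `EksCluster`, and a GKE counterpart); commenting out a line, or setting it to `false`, skips that cloud:

```typescript
// Hypothetical sketch -- names and shape are illustrative, not taken from
// the actual index.ts in this commit.
const clusters: Record<string, boolean> = {
    aks: true,
    eks: true,
    gke: true,
    // local: true,  // a local kubeconfig context (prerequisite 6) could be enabled too
};

// Only the enabled clusters would be constructed and have the app deployed.
const enabled = Object.keys(clusters).filter(name => clusters[name]);
console.log(enabled);
```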

5. Bring up the stack, which will create the selected managed Kubernetes clusters, and deploy an application to each of
them.

```bash
$ pulumi up
```

Here's what it should look like once it completes:
![appUrls](images/appUrls.png)

6. You can connect to the example app (kuard) on each cluster using the exported URLs.
![kuard](images/kuard.png)

Important: This application is exposed publicly over HTTP and can be used to view sensitive details about the
node. Do not run this application on production clusters!

7. Once you've finished experimenting, tear down your stack's resources by destroying and removing it:

```bash
$ pulumi destroy --yes
$ pulumi stack rm --yes
```

Note: The static IP workaround required for the AKS Service can cause a destroy failure if the IP has not
finished detaching from the LoadBalancer. If you encounter this error, rerun `pulumi destroy --yes`, and it
should succeed.
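The rerun advice above can be automated with a small retry loop. This is a minimal sketch, not part of the example; the `retry` helper is illustrative:

```shell
# Retry a command a few times with a short pause between attempts.
# Intended for transient failures such as `pulumi destroy` racing the
# AKS static IP detaching from the LoadBalancer.
retry() {
  attempts="$1"; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep 1   # back off briefly before retrying
  done
}

# Intended usage:
# retry 3 pulumi destroy --yes
```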
123 changes: 123 additions & 0 deletions kubernetes-ts-multicloud/aks.ts
@@ -0,0 +1,123 @@
// Copyright 2016-2019, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import * as azure from "@pulumi/azure";
import * as azuread from "@pulumi/azuread";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as random from "@pulumi/random";
import * as tls from "@pulumi/tls";

export class AksCluster extends pulumi.ComponentResource {
public cluster: azure.containerservice.KubernetesCluster;
public provider: k8s.Provider;
public staticAppIP: pulumi.Output<string>;

constructor(name: string,
opts: pulumi.ComponentResourceOptions = {}) {
super("examples:kubernetes-ts-multicloud:AksCluster", name, {}, opts);

// Generate a strong password for the Service Principal.
const password = pulumi.secret(new random.RandomString("password", {
length: 20,
special: true,
}, {parent: this}).result);

// Create an SSH public key that will be used by the Kubernetes cluster.
// Note: We create one here to simplify the demo, but a production deployment would probably pass
// an existing key in as a variable.
const sshPublicKey = new tls.PrivateKey("sshKey", {
algorithm: "RSA",
rsaBits: 4096,
}, {parent: this}).publicKeyOpenssh;

// Create the AD service principal for the K8s cluster.
const adApp = new azuread.Application("aks", undefined, {parent: this});
const adSp = new azuread.ServicePrincipal("aksSp", {
applicationId: adApp.applicationId
}, {parent: this});
const adSpPassword = new azuread.ServicePrincipalPassword("aksSpPassword", {
servicePrincipalId: adSp.id,
value: password,
endDate: "2099-01-01T00:00:00Z",
}, {parent: this});

const resourceGroup = new azure.core.ResourceGroup("multicloud", {}, {parent: this});

// Grant the resource group the "Network Contributor" role so that it can link the static IP to a
// Service LoadBalancer.
const rgNetworkRole = new azure.role.Assignment("spRole", {
principalId: adSp.id,
scope: resourceGroup.id,
roleDefinitionName: "Network Contributor"
}, {parent: this});

// Create a Virtual Network for the cluster
const vnet = new azure.network.VirtualNetwork("multicloud", {
resourceGroupName: resourceGroup.name,
addressSpaces: ["10.2.0.0/16"],
}, {parent: this});

// Create a Subnet for the cluster
const subnet = new azure.network.Subnet("multicloud", {
resourceGroupName: resourceGroup.name,
virtualNetworkName: vnet.name,
addressPrefix: "10.2.1.0/24",
}, {parent: this});

// Now allocate an AKS cluster.
this.cluster = new azure.containerservice.KubernetesCluster("aksCluster", {
resourceGroupName: resourceGroup.name,
agentPoolProfiles: [{
name: "aksagentpool",
count: 2,
vmSize: "Standard_B2s",
osType: "Linux",
osDiskSizeGb: 30,
vnetSubnetId: subnet.id,
}],
dnsPrefix: name,
linuxProfile: {
adminUsername: "aksuser",
sshKey: {
keyData: sshPublicKey,
},
},
servicePrincipal: {
clientId: adApp.applicationId,
clientSecret: adSpPassword.value,
},
kubernetesVersion: "1.13.5",
roleBasedAccessControl: {enabled: true},
networkProfile: {
networkPlugin: "azure",
dnsServiceIp: "10.2.2.254",
serviceCidr: "10.2.2.0/24",
dockerBridgeCidr: "172.17.0.1/16",
},
}, {parent: this});

// Expose a K8s provider instance using our custom cluster instance.
this.provider = new k8s.Provider("aks", {
kubeconfig: this.cluster.kubeConfigRaw,
}, {parent: this});

this.staticAppIP = new azure.network.PublicIp("staticAppIP", {
resourceGroupName: this.cluster.nodeResourceGroup,
allocationMethod: "Static"
}, {parent: this}).ipAddress;

this.registerOutputs();
}
}
88 changes: 88 additions & 0 deletions kubernetes-ts-multicloud/app.ts
@@ -0,0 +1,88 @@
// Copyright 2016-2019, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Arguments for the demo app.
export interface DemoAppArgs {
provider: k8s.Provider // Provider resource for the target Kubernetes cluster.
imageTag: string // Tag for the kuard image to deploy.
staticAppIP?: pulumi.Input<string> // Optional static IP to use for the service. (Required for AKS).
}

export class DemoApp extends pulumi.ComponentResource {
public appUrl: pulumi.Output<string>;

constructor(name: string,
args: DemoAppArgs,
opts: pulumi.ComponentResourceOptions = {}) {
super("examples:kubernetes-ts-multicloud:demo-app", name, args, opts);

// Create the kuard Deployment.
const appLabels = {app: "kuard"};
const deployment = new k8s.apps.v1.Deployment(`${name}-demo-app`, {
spec: {
selector: {matchLabels: appLabels},
replicas: 1,
template: {
metadata: {labels: appLabels},
spec: {
containers: [
{
name: "kuard",
image: `gcr.io/kuar-demo/kuard-amd64:${args.imageTag}`,
ports: [{containerPort: 8080, name: "http"}],
livenessProbe: {
httpGet: {path: "/healthy", port: "http"},
initialDelaySeconds: 5,
timeoutSeconds: 1,
periodSeconds: 10,
failureThreshold: 3
},
readinessProbe: {
httpGet: {path: "/ready", port: "http"},
initialDelaySeconds: 5,
timeoutSeconds: 1,
periodSeconds: 10,
failureThreshold: 3
}
}
],
},
},
}
}, {provider: args.provider, parent: this});

// Create a LoadBalancer Service to expose the kuard Deployment.
const service = new k8s.core.v1.Service(`${name}-demo-app`, {
spec: {
loadBalancerIP: args.staticAppIP, // Required for AKS - automatic LoadBalancer still in preview.
selector: appLabels,
ports: [{port: 80, targetPort: 8080}],
type: "LoadBalancer"
}
}, {provider: args.provider, parent: this});

// The address appears in different places depending on the Kubernetes service provider.
let address = service.status.loadBalancer.ingress[0].hostname;
if (name === "gke" || name === "aks") {
address = service.status.loadBalancer.ingress[0].ip;
}

this.appUrl = pulumi.interpolate`http://${address}:${service.spec.ports[0].port}`;

this.registerOutputs();
}
}
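The provider-dependent branch above keys off the cluster name. An alternative, shown here only as an illustrative sketch (the `IngressEntry` shape and `externalAddress` helper are not part of this example), is to inspect the ingress entry itself, since cloud LoadBalancers report either an IP (GKE, AKS) or a hostname (EKS/ELB):

```typescript
// Minimal shape of a LoadBalancer ingress entry for illustration.
interface IngressEntry {
    ip?: string;
    hostname?: string;
}

// Prefer the IP when present, fall back to the hostname, and fail loudly
// if the LoadBalancer has published neither yet.
function externalAddress(entry: IngressEntry): string {
    const address = entry.ip ?? entry.hostname;
    if (!address) {
        throw new Error("LoadBalancer ingress reported neither ip nor hostname");
    }
    return address;
}

externalAddress({ip: "203.0.113.10"});                 // returns "203.0.113.10"
externalAddress({hostname: "abc.elb.amazonaws.com"});  // returns "abc.elb.amazonaws.com"
```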
46 changes: 46 additions & 0 deletions kubernetes-ts-multicloud/eks.ts
@@ -0,0 +1,46 @@
// Copyright 2016-2019, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

export class EksCluster extends pulumi.ComponentResource {
public cluster: eks.Cluster;
public provider: k8s.Provider;

constructor(name: string,
opts: pulumi.ComponentResourceOptions = {}) {
super("examples:kubernetes-ts-multicloud:EksCluster", name, {}, opts);

// Create a VPC for our cluster.
const vpc = new awsx.ec2.Vpc("vpc", {}, {parent: this});

// Create the EKS cluster itself, including a "gp2"-backed StorageClass.
// The Kubernetes dashboard deployment is explicitly disabled.
this.cluster = new eks.Cluster("cluster", {
vpcId: vpc.id,
subnetIds: vpc.getSubnetIds("public"),
instanceType: "t2.medium",
desiredCapacity: 2,
minSize: 1,
maxSize: 2,
storageClasses: "gp2",
deployDashboard: false,
}, {parent: this});

this.provider = this.cluster.provider;

this.registerOutputs();
}
}
