Example of AKS + KEDA + Azure Functions (pulumi#356)
* Example of AKS + KEDA + Azure Functions

* Add a test

* Review comments

* Remove redundant gitignore
mikhailshilkov authored and stack72 committed Aug 6, 2019
1 parent a5795ca commit 9005d24
Showing 15 changed files with 764 additions and 0 deletions.
1 change: 1 addition & 0 deletions azure-ts-aks-helm/README.md
@@ -19,6 +19,7 @@ will need to [install the Helm CLI](https://docs.helm.sh/using_helm/#installing-
```bash
$ helm init --client-only
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
```

# Running the Example
8 changes: 8 additions & 0 deletions azure-ts-aks-keda/Pulumi.yaml
@@ -0,0 +1,8 @@
name: azure-ts-aks-keda
runtime: nodejs
description: Create an Azure Kubernetes Service (AKS) cluster and deploy an Azure Function App with KEDA (Kubernetes-based Event Driven Autoscaling)
template:
config:
azure:environment:
description: The Azure environment to use (`public`, `usgovernment`, `german`, `china`)
default: public
94 changes: 94 additions & 0 deletions azure-ts-aks-keda/README.md
@@ -0,0 +1,94 @@
[![Deploy](https://get.pulumi.com/new/button.svg)](https://app.pulumi.com/new)

# Azure Kubernetes Service (AKS) cluster and Azure Functions with KEDA

This example demonstrates creating an Azure Kubernetes Service (AKS) cluster and deploying an Azure Function App with Kubernetes-based Event Driven Autoscaling (KEDA) into it, all in one Pulumi program. See https://docs.microsoft.com/en-us/azure/aks/ for more information about AKS, and https://docs.microsoft.com/en-us/azure/azure-functions/functions-kubernetes-keda for more information about KEDA.
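
KEDA's queue-based autoscaling works, roughly, by dividing the current queue length by a per-replica message target and clamping the result to configured replica bounds. The sketch below illustrates that decision; the function and parameter names are illustrative, not KEDA's actual configuration keys:

```typescript
// Illustrative sketch of KEDA-style queue-length scaling: the desired
// replica count is the queue length divided by the per-replica message
// target (rounded up), clamped to the configured min/max replica counts.
function desiredReplicas(
    queueLength: number,
    messagesPerReplica: number,
    minReplicas: number,
    maxReplicas: number,
): number {
    const raw = Math.ceil(queueLength / messagesPerReplica);
    return Math.min(maxReplicas, Math.max(minReplicas, raw));
}

console.log(desiredReplicas(0, 5, 0, 10));  // empty queue scales to zero
console.log(desiredReplicas(12, 5, 0, 10)); // 12 messages / 5 per pod -> 3
```

This is why the `queue-handler` deployment in this example sits at `0/0` replicas while the queue is empty and scales up once messages arrive.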

# Prerequisites

Ensure you have [downloaded and installed the Pulumi CLI](https://pulumi.io/install).

We will be deploying to Azure, so you will need an Azure account. If you don't have an account,
[sign up for free here](https://azure.microsoft.com/en-us/free/).
[Follow the instructions here](https://pulumi.io/install/azure.html) to connect Pulumi to your Azure account.

This example deploys a Helm chart from the kedacore Helm chart repository, so you
will need to [install the Helm CLI](https://docs.helm.sh/using_helm/#installing-helm) and configure it:

```bash
$ helm init --client-only
$ helm repo add kedacore https://kedacore.azureedge.net/helm
$ helm repo update
```

# Running the Example

After cloning this repo, `cd` into it and run these commands.

1. Create a new stack, which is an isolated deployment target for this example:

```bash
$ pulumi stack init
```

2. Set the Azure region to deploy to:

```bash
$ pulumi config set azure:location <value>
```

3. Deploy everything with the `pulumi up` command. This provisions all the necessary Azure resources, including an Active Directory service principal and an AKS cluster, and then deploys the KEDA Helm chart and an Azure Function scaled by KEDA, all in a single operation:

```bash
$ pulumi up
```

4. After a couple of minutes, your cluster and Azure Function app will be ready. Four output variables will be printed, reflecting your cluster name (`clusterName`), Kubernetes config (`kubeConfig`), Storage Account name (`storageAccountName`), and storage queue name (`queueName`).

Using these output variables, you can configure your `kubectl` client with the exported `kubeConfig`:

```bash
$ pulumi stack output kubeConfig > kubeconfig.yaml
$ KUBECONFIG=./kubeconfig.yaml kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
keda-edge 1/1 1 1 9m
queue-handler 0/0 0 0 2m
```

Now, go ahead and enqueue a new message to the storage queue. You may use a tool like [Microsoft Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/) to navigate to the queue and add a new message.

Wait for a minute and then query the deployments again:

```bash
$ KUBECONFIG=./kubeconfig.yaml kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
keda-edge 1/1 1 1 14m
queue-handler 1/1 1 1 7m
```

Note that the `queue-handler` deployment now has 1 instance ready. Looking at the pods:

```bash
$ KUBECONFIG=./kubeconfig.yaml kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
keda-edge-97664558c-q2mkd      1/1     Running   0          15m
queue-handler-c496dcfc-mb6tx   1/1     Running   0          2m3s
```

There's now a pod processing queue messages. The message should be gone from the storage queue at this point. Query the logs of the pod:
```bash
$ KUBECONFIG=./kubeconfig.yaml kubectl logs queue-handler-c496dcfc-mb6tx
...
C# Queue trigger function processed: Test Message
Executed 'queue' (Succeeded, Id=ecd9433a-c6b7-468e-b6c6-6e7909bafce7)
...
```
5. At this point, you have a running cluster. Feel free to modify your program, and run `pulumi up` to redeploy changes. The Pulumi CLI automatically detects what has changed and makes the minimal edits necessary to accomplish these changes. This could be altering the existing chart, adding new Azure or Kubernetes resources, or anything, really.
6. Once you are done, you can destroy all of the resources, and the stack:
```bash
$ pulumi destroy
$ pulumi stack rm
```
95 changes: 95 additions & 0 deletions azure-ts-aks-keda/cluster.ts
@@ -0,0 +1,95 @@
// Copyright 2016-2019, Pulumi Corporation. All rights reserved.

import * as azure from "@pulumi/azure";
import * as azuread from "@pulumi/azuread";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as random from "@pulumi/random";
import * as tls from "@pulumi/tls";

// Arguments for an AKS cluster. We use almost all defaults for this example, but the
// interface could be extended with e.g. agent pool settings.
export interface AksClusterArgs {
    resourceGroup: azure.core.ResourceGroup;
}

export class AksCluster extends pulumi.ComponentResource {
    public cluster: azure.containerservice.KubernetesCluster;
    public provider: k8s.Provider;

    constructor(name: string,
                args: AksClusterArgs,
                opts: pulumi.ComponentResourceOptions = {}) {
        super("examples:keda:AksCluster", name, args, opts);

        const password = new random.RandomString("password", {
            length: 20,
            special: true,
        }).result;
        const sshPublicKey = new tls.PrivateKey("keda", {
            algorithm: "RSA",
            rsaBits: 4096,
        }).publicKeyOpenssh;

        // Create the AD service principal for the K8s cluster.
        const adApp = new azuread.Application("aks", undefined, { parent: this });
        const adSp = new azuread.ServicePrincipal("aksSp", { applicationId: adApp.applicationId }, { parent: this });
        const adSpPassword = new azuread.ServicePrincipalPassword("aksSpPassword", {
            servicePrincipalId: adSp.id,
            value: password,
            endDate: "2099-01-01T00:00:00Z",
        }, { parent: this });

        // Create a Virtual Network for the cluster.
        const vnet = new azure.network.VirtualNetwork("keda", {
            resourceGroupName: args.resourceGroup.name,
            addressSpaces: ["10.2.0.0/16"],
        }, { parent: this });

        // Create a Subnet for the cluster.
        const subnet = new azure.network.Subnet("keda", {
            resourceGroupName: args.resourceGroup.name,
            virtualNetworkName: vnet.name,
            addressPrefix: "10.2.1.0/24",
        }, { parent: this });

        // Now allocate an AKS cluster.
        this.cluster = new azure.containerservice.KubernetesCluster("aksCluster", {
            resourceGroupName: args.resourceGroup.name,
            agentPoolProfiles: [{
                name: "aksagentpool",
                count: 3,
                vmSize: "Standard_B2s",
                osType: "Linux",
                osDiskSizeGb: 30,
                vnetSubnetId: subnet.id,
            }],
            dnsPrefix: name,
            linuxProfile: {
                adminUsername: "aksuser",
                sshKey: {
                    keyData: sshPublicKey,
                },
            },
            servicePrincipal: {
                clientId: adApp.applicationId,
                clientSecret: adSpPassword.value,
            },
            kubernetesVersion: "1.13.5",
            roleBasedAccessControl: { enabled: true },
            networkProfile: {
                networkPlugin: "azure",
                dnsServiceIp: "10.2.2.254",
                serviceCidr: "10.2.2.0/24",
                dockerBridgeCidr: "172.17.0.1/16",
            },
        }, { parent: this });

        // Expose a K8s provider instance using our custom cluster instance.
        this.provider = new k8s.Provider("aksK8s", {
            kubeconfig: this.cluster.kubeConfigRaw,
        }, { parent: this });

        this.registerOutputs();
    }
}
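
To show how this component fits together with the rest of the program, here is a hypothetical consuming `index.ts` (not shown in this diff); the resource names, chart version, and values are illustrative, and this is infrastructure configuration that only runs under the Pulumi engine:

```typescript
// Hypothetical index.ts wiring the AksCluster component to a KEDA Helm
// chart deployment. Names and chart settings here are illustrative.
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";
import { AksCluster } from "./cluster";

// A resource group to hold everything in this example.
const resourceGroup = new azure.core.ResourceGroup("keda-rg");

// Provision the AKS cluster defined in cluster.ts.
const aks = new AksCluster("keda-cluster", { resourceGroup });

// Deploy the KEDA Helm chart into the new cluster, targeting it via the
// k8s provider exposed by the component.
const keda = new k8s.helm.v2.Chart("keda-edge", {
    repo: "kedacore",
    chart: "keda-edge",
}, { providers: { kubernetes: aks.provider } });

export const clusterName = aks.cluster.name;
```

The key design point is that `aks.provider` is passed as the `kubernetes` provider for the chart, so the Kubernetes resources deploy into the cluster created in the same program rather than whatever cluster the ambient kubeconfig points at.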
1 change: 1 addition & 0 deletions azure-ts-aks-keda/functionapp/.dockerignore
@@ -0,0 +1 @@
local.settings.json