Release 2019-10-28 #1297

Merged 1 commit on Nov 4, 2019
# Azure Kubernetes Service Changelog

## Release 2019-10-28

**This release is rolling out to all regions**

### Service Updates
* With the official 2019-11-04 Azure CLI release, AKS will default new cluster
creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM
Availability Sets and Basic Load Balancers (VMAS/BLB).
* From 2019-10-14, the AKS Portal defaults new cluster
creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM
Availability Sets and Basic Load Balancers (VMAS/BLB). Users can still explicitly
choose VMAS and BLB.
* From 2019-11-04, the CLI extension will have a new parameter `--zones` to replace `--node-zones`, which specifies the zones to be used by the cluster nodes.
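Under the defaults described above, a minimal cluster create would pick up VMSS and a Standard Load Balancer without extra flags; a hedged sketch (the resource group and cluster names are illustrative, and `--zones` here is the new CLI extension parameter):

```shell
# Illustrative only: with the 2019-11-04 CLI defaults, VMSS/SLB is implied,
# so no explicit VM set or load balancer flags are needed.
az group create --name myResourceGroup --location eastus2

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --zones 1 2 3    # replaces the older --node-zones parameter
```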

### Release Notes

* New Features
* Multiple Nodepools backed AKS clusters are now Generally Available (GA)
https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools
* Cluster Autoscaler is now Generally Available (GA)
https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler
* Availability Zones are now Generally Available (GA)
https://docs.microsoft.com/en-us/azure/aks/availability-zones
* AKS API server Authorized IP Ranges is now Generally Available (GA)
https://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges
* Kubernetes versions 1.15.5, 1.14.8 and 1.13.12 have been added.
* These versions include new API call logic that reduces throttling for users with many AKS clusters in the same subscription.
* These versions include security fixes for [CVE-2019-11253](https://github.com/Azure/AKS/issues/1262).
* The minimum `--max-pods` value has changed from **30 per node to 30 per Nodepool**. Each node still has a hard **minimum of 10 pods**
that the user can specify, but a per-node value below 30 is only allowed if the total pods across all nodes in the nodepool add up to 30 or more.
* Bug Fixes
* Added additional validation to nodepool operations to check for sufficient address space. If there is no address space left for a scale or upgrade operation,
the operation will not start and a descriptive error message is returned.
* Fixed a bug so that `az aks update-credentials` correctly reflects the agent pool state during nodepool operations.
* Nodepool operations that use an incorrect SKU now fail with a more descriptive error.
* Added validation to block `az aks update-credentials` while a nodepool is not ready, to avoid conflicts.
* Node count on the nodepool is ignored when the user has autoscaling enabled. (Manual scale with the autoscaler enabled is not allowed.)
* Fixed a bug where some clusters would still receive an older Moby version (3.0.6); the current version is 3.0.7.
* Preview Features
* Windows docker runtime updated to 19.03.2
* Component updates
* Moby has been updated to v3.0.7
* AKS-Engine has been updated to v0.41.5
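Two of the numeric rules in the release notes above — the 30-per-nodepool `--max-pods` floor and the address-space check — can be sketched with shell arithmetic. This is an illustrative sketch, not the actual AKS validation logic; the subnet sizing assumes the Azure CNI convention of one IP per node plus one IP per potential pod, and all concrete numbers are placeholders:

```shell
#!/bin/sh
# Illustrative check only; not the real AKS validation code.
MAX_PODS=10      # per-node value requested (hard minimum is 10)
NODES=3
SUBNET_IPS=251   # usable IPs in a /24 after Azure reserves 5 addresses

# --max-pods floor: total pods across the nodepool must reach 30+
TOTAL_PODS=$(( MAX_PODS * NODES ))
[ "$TOTAL_PODS" -ge 30 ] && echo "max-pods ok: $TOTAL_PODS total"

# address-space check: each node consumes 1 node IP plus MAX_PODS pod IPs
NEEDED=$(( NODES * (MAX_PODS + 1) ))
[ "$NEEDED" -le "$SUBNET_IPS" ] && echo "subnet ok: need $NEEDED of $SUBNET_IPS IPs"
```

With these placeholder numbers, 10 pods x 3 nodes meets the 30-pod floor, and the 33 required IPs fit comfortably in the /24.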

## Release 2019-10-14

**This release is rolling out to all regions**

## Preview features

* [(GA) Availability Zones](#zones)
* [Windows Worker Nodes](#windows)
* [(GA) Multiple Node Pools](#nodepools)
* [(GA) Secure access to the API server using authorized IP address ranges](#apideny)
* [(GA) Cluster Autoscaler](#ca)
* [Kubernetes Pod Security Policies](#psp)
* [Azure Policy Add-On](#azpolicy)
* [(GA) Kubernetes Audit Log](#noauditforu)
* [(GA) Standard Load Balancers](#slb)
* [(GA) Virtual Machine Scale Sets](#vmss)


### Windows worker nodes <a name="windows"></a>

This preview feature allows customers to add Windows Server nodes to their AKS clusters.

After registering the required feature flag, refresh your registration of the AKS resource provider:

```
az provider register -n Microsoft.ContainerService
```

## Kubernetes Pod Security Policies <a name="psp"></a>

To improve the security of your AKS cluster, you can limit what pods can be
scheduled. Pods that request resources you don't allow can't run in the AKS
cluster. You define this access using [pod security policies][9].

First, register the feature flag:

```
az feature register --name PodSecurityPolicyPreview --namespace Microsoft.ContainerService
```

Then refresh your registration of the AKS resource provider:

```
az provider register -n Microsoft.ContainerService
```
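Feature-flag registration can take a few minutes to propagate. One way to check progress, using the same flag name as above, is `az feature show` (the state should read `Registered` before you create clusters with the feature):

```shell
az feature show \
  --namespace Microsoft.ContainerService \
  --name PodSecurityPolicyPreview \
  --query properties.state \
  --output tsv
```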

### Azure Policy Add-On <a name="azpolicy"></a>

Azure Policy integrates with the Azure Kubernetes Service (AKS) to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. By extending use of GateKeeper, an admission controller webhook for Open Policy Agent (OPA), Azure Policy makes it possible to manage and report on the compliance state of your Azure resources and AKS clusters from one place.

This feature is in preview and requires enablement by the Azure team to use. Read more here: https://docs.microsoft.com/en-us/azure/governance/policy/concepts/rego-for-aks?toc=/azure/aks/toc.json

### (GA) Availability zones <a name="zones"></a>

This feature enables customers to distribute their AKS clusters across
availability zones providing a higher level of availability.

Getting started:
* [AKS availability zones documentation](https://aka.ms/aks/zones)
* [About availability zones on Azure](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview)
* [Example Swagger reference (2019-06-01)](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/containerservice/resource-manager/Microsoft.ContainerService/stable/2019-06-01/managedClusters.json#L1399)

### (GA) Multiple Node Pools <a name="nodepools"></a>

Multiple Node Pool support is now generally available (GA). See the
[documentation][nodepool] for instructions.
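As a sketch of the now-GA commands (all resource names are hypothetical), a second pool can be added and inspected with the `az aks nodepool` command group:

```shell
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name secondpool \
  --node-count 2

az aks nodepool list \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --output table
```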

### (GA) Secure access to the API server using authorized IP address ranges <a name="apideny"></a>

This feature allows users to restrict what IP addresses have access to the
Kubernetes API endpoint for clusters. Please see details in the [documentation][api server].
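A hedged sketch of restricting API server access on an existing cluster (the cluster name and CIDR values are placeholders, using documentation-reserved address ranges):

```shell
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.10/32
```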

### (GA) Cluster Autoscaler <a name="ca"></a>

The [cluster autoscaler][5] enables automatic creation of new nodes to back your AKS cluster when more compute resources are needed. The scaling rules are based on the queue of pending pods, as described in the [open source project](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) the feature uses. Use of the cluster autoscaler requires [VMSS clusters](#vmss).

Learn more about this feature here: https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler#create-an-aks-cluster-and-enable-the-cluster-autoscaler
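For an existing VMSS-backed cluster, enabling the autoscaler is a single update call; a sketch with illustrative names and node-count bounds:

```shell
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```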

### (GA) Kubernetes Audit Log <a name="noauditforu"></a>
