What’s New?
Kubernetes 1.16 Upgrades & Changes
With the recent availability of Kubernetes 1.16 on EKS, we’ve been upgrading clusters from Kubernetes 1.14 to Kubernetes 1.16. Because EKS only supports upgrading one minor version at a time, the upgrade passes through 1.15 along the way, so the highlights of both releases are summarized below.
Kubernetes 1.15
“kubectl get” and “kubectl describe” can now provide custom, per-resource output for CustomResourceDefinitions (CRDs). CRDs are the extension resource types behind things like the Certificates and Challenges that cert-manager uses when issuing certificates from Let’s Encrypt. These changes make inspecting CRD resources like Certificates much easier.
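For example, checking on cert-manager Certificates now shows useful status columns right in the terminal. A quick sketch, assuming a hypothetical namespace and certificate name (the exact columns come from the printer columns each CRD defines):

```
# List cert-manager Certificates with their status columns (hypothetical names)
kubectl get certificates -n my-app
# NAME          READY   SECRET            AGE
# my-app-cert   True    my-app-cert-tls   42d

# Drill into a single Certificate to see its conditions and recent events
kubectl describe certificate my-app-cert -n my-app
```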
Creation of AWS Network Load Balancers (NLBs) is now supported through the Kubernetes API. Although this does not directly affect your stack (we create NLBs using Terraform to work around missing features in the Kubernetes APIs), the improving support for this resource type will make it possible to manage these load balancers using kubectl and Helm in the future.
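For reference, this is roughly what requesting an NLB through the Kubernetes API looks like - a minimal sketch with hypothetical names and ports, where a single annotation on a LoadBalancer Service asks the AWS cloud provider for an NLB instead of a Classic ELB:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Ask the AWS cloud provider for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```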
As part of the upgrade to Kubernetes 1.15, we will enable EKS managed node groups for your cluster. This further simplifies EKS cluster management by letting AWS upgrade and manage your EKS worker nodes for you. Routine admin tasks like upgrading the AMI and rotating out workers running old Kubernetes versions no longer require human intervention.
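As a rough illustration of what “managed” means in practice (the cluster and node group names below are hypothetical), node groups can be listed and rolled onto a newer AMI with a couple of AWS CLI calls, and AWS handles draining and replacing the workers:

```
# List the managed node groups attached to the cluster (hypothetical names)
aws eks list-nodegroups --cluster-name my-cluster

# Roll the node group onto the latest AMI release for its Kubernetes version;
# AWS cordons, drains, and replaces the worker nodes for you.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name default-workers
```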
You can read more about changes in 1.15 here.
Kubernetes 1.16
Several Kubernetes API groups have moved from Beta to General Availability. In particular, Deployments need to have their API version changed from apps/v1beta1, apps/v1beta2, or extensions/v1beta1 to apps/v1 as part of upgrading. (This only comes into play when creating new deployments - your existing deployments will be updated to the new API versions for you as part of the upgrade.)
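For new deployments, a manifest targeting the GA API group looks like the sketch below (the names and image are hypothetical); note that spec.selector is mandatory under apps/v1:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1        # previously apps/v1beta1, apps/v1beta2, or extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:                # required under apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0
          ports:
            - containerPort: 8080
EOF
```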
Creation of AWS NLBs through the Kubernetes API now supports using statically-allocated IP addresses. Although these API updates do not directly affect your stack (we currently provision these load balancers with Terraform), they are excellent improvements to Kubernetes’ support for this load balancer type.
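Our understanding is that static IPs are requested with one extra annotation on the same kind of Service shown in the 1.15 section; the annotation name and Elastic IP allocation IDs below are assumptions worth verifying against the 1.16 release notes (one allocation is needed per public subnet):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Assumed annotation: pin the NLB to pre-allocated Elastic IPs (one per public subnet)
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0abc123,eipalloc-0def456"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
EOF
```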
PersistentVolumes can now be resized without restarting pods on AWS EKS clusters. This makes it even easier to grow the EBS disks attached to your containers without disrupting a running application.
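As a quick sketch (the PVC name and sizes are hypothetical, and the StorageClass must have allowVolumeExpansion enabled), growing an EBS-backed volume is now a one-line patch with no pod restart:

```
# Request a larger size on the PersistentVolumeClaim; the EBS volume and filesystem
# are expanded online, without restarting the pod that mounts it.
kubectl patch pvc data-my-app-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'

# Follow the resize via the PVC's conditions and events
kubectl describe pvc data-my-app-0
```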
You can read more about these changes here.
Helm 2 To Helm 3 Upgrades
All of our Terraform modules now support Helm 3. This lets us upgrade your cluster to use Helm 3 for all software we deploy via Helm going forward (Grafana, Prometheus, cert-manager, nginx-ingress, etc.). Helm 3 brings several major new features:
No more Tiller or server-side Helm upgrades. Tiller was the server-side component that needed to be upgraded in lockstep with the Helm CLI tool.
Helm 3 uses Kubernetes Secrets as the default storage backend for release data (Helm 2 used ConfigMaps). This means that secrets required to configure software managed by Helm are now encrypted at rest on your Kubernetes cluster.
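You can see this for yourself after a Helm 3 install: release records show up as Secrets in the release’s namespace (the namespace and release name below are hypothetical; the label and naming scheme reflect our understanding of Helm 3’s default storage backend):

```
# Helm 3 stores each release revision as a Secret labelled owner=helm
kubectl get secrets -n monitoring -l owner=helm
# NAME                            TYPE                 DATA   AGE
# sh.helm.release.v1.grafana.v1   helm.sh/release.v1   1      3d
```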
Better support for Kubernetes CRDs. Although CRD support itself isn’t all that exciting, it means that upgrades and installation of software that uses CRDs can now be done by Terraform, letting you use Terraform Cloud’s “remote runs” feature.
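As a sketch of what this enables (chart version pinning and the helm provider configuration are omitted, and the values shown are assumptions), a Helm 3 release for a CRD-shipping chart like cert-manager can be declared in Terraform and then planned and applied through Terraform Cloud:

```
# Hypothetical example: manage a Helm 3 release from Terraform
cat > cert_manager.tf <<'EOF'
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"   # assumed chart option that installs the CRDs with the release
    value = "true"
  }
}
EOF

terraform plan
```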
More intuitive behaviour in general. Many Helm commands have become more user-friendly: for instance “helm delete” now works like “helm delete --purge” in Helm 2 (actually deletes the release and its associated resources), and releases now require a name on creation (no more Helm deployments with random animal names!).
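A couple of concrete before/after examples (the release and chart names are hypothetical):

```
# Helm 2: the release name was optional (you'd get something like "wandering-wombat"),
# and "helm delete" kept release history around unless you added --purge.
helm install stable/grafana
helm delete wandering-wombat --purge

# Helm 3: the release name comes first and is required (or pass --generate-name),
# and uninstalling actually removes the release and its resources.
helm install grafana stable/grafana
helm uninstall grafana
```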
Click here for more information on the differences between Helm 2 and Helm 3.
Terraform Cloud Remote Runs and VCS integration
We have spent significant time removing any remaining dependencies on local execution of tools like kubectl during Terraform runs. This allows using Terraform Cloud’s “remote runs” feature to apply changes instead of requiring that administrators run Terraform locally on their machines. This is particularly useful for collaboration and working remotely during COVID-19.
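As a minimal sketch (the organization and workspace names are hypothetical), pointing a working directory at a Terraform Cloud workspace is a small backend change; after that, plans and applies execute remotely and the CLI simply streams the output:

```
# Configure the "remote" backend so runs execute in Terraform Cloud
# instead of on an administrator's laptop (hypothetical org/workspace names).
cat > backend.tf <<'EOF'
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "production-cluster"
    }
  }
}
EOF

terraform login   # obtain an API token for app.terraform.io
terraform init    # migrate state into the remote workspace
terraform plan    # runs remotely and prints a URL you can share for review
```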
You can send a link to a remote “terraform plan” to a coworker for review. This is significantly nicer than needing to copy/paste everything into pull requests/Slack or force reviewers to re-run the “terraform plan” themselves.
Remote runs allow you to set finer-grained permissions on who is permitted to run Terraform on different environments (requires upgrading to the Team plan on Terraform Cloud).
Remote runs allow centralized management of privileged AWS keys and other cloud provider credentials. Instead of requiring every Terraform user to have admin-level IAM permissions, you can delegate the AWS IAM permissions to a dedicated user for Terraform Cloud. Updating or rotating these credentials can now be done in a single spot, and revoking access to infrastructure provisioning for a user can be done on Terraform Cloud (you don’t need to worry about accidentally messing up other IAM permissions for that user).
Switching to remote runs also allows setting up version control integration for Terraform Cloud. You can link a Github/Bitbucket/etc. repository to a Terraform Cloud workspace.
Once linked to a VCS repo, Terraform Cloud will run “terraform plan” on pushes and pull requests - the planned infrastructure changes are attached to the PR automatically, making reviewing infrastructure changes a breeze.
A Terraform Cloud workspace linked to a VCS repo will disallow any changes not committed to Git, ensuring that the default branch of your Git repo remains the “single source of truth” for your infrastructure configuration.
You can read more about Terraform Cloud remote runs and VCS integration here.
Fun & Useful
Kubernetes: Up and Running
This free ebook works well both as an introduction to Kubernetes and as a study guide.