GCP GKE Cluster

GKE is a managed Kubernetes service in GCP that implements the full Kubernetes API, 4-way autoscaling, release channels, and multi-cluster support.


Operator Guide for gcp-gke-cluster

Kubernetes is the industry-standard open-source software for creating and managing containers. Given the widespread use of open-source software and containers, deploying Kubernetes effectively is vital.

Use Cases

The Kubernetes Control Plane is a completely managed service. When you set up a Google Kubernetes Engine (GKE) cluster, you can specify the node pools and instance types and deploy your workloads quickly without worrying about the Control Plane.

Web applications

Serve your web application out of Kubernetes, and leverage the high availability of running across availability zones and the ease of autoscaling your servers with web traffic.

Microservices

Build large complex systems out of many small microservices, increasing your overall resiliency by isolating failure domains.

Workflows

Gain the power of the open-source community by using services like Kubeflow and Argo Workflows for ETL (extract, transform, load) or machine-learning capabilities.

Cloud agnostic

If your application can run on Kubernetes, you can run it on any cluster, whether it is Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS), or even your own on-premises cluster.

Configuration Presets

Development

For development, we use a small, cost-optimized machine type with a low maximum number of nodes. Note that this configuration is not intended for a production environment.

Production

The production guided configuration comes with two node pools. One is meant for general workloads and features a more powerful machine type. The other uses high-memory machine types for workloads that are memory intensive. Use this preset for production environments.

Design

This bundle provisions a GKE cluster with one or many node pools and can optionally deploy cert-manager, external-dns, and nginx-ingress. The cluster is ready for production but configurable so that your development clusters are more cost-effective. Key capabilities of GKE include the following:

Control-Plane Management

GKE manages the entire control plane for you, freeing you to focus just on the workers that run the containers.

Container Orchestration

Within GKE, you can run any dockerized application. GKE gives you control over deployment strategies, VM type, and even stateful workloads, which you can manage with load balancing, high availability, and autoscaling. You can assign resources to each container to fit your needs precisely and isolate them with GKE Sandbox for extra security.
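For example, one way to assign CPU and memory resources to a running workload is with kubectl (the deployment name web-app is a placeholder):

# set resource requests and limits on an existing deployment
kubectl set resources deployment web-app \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi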

Autoscaling

GKE can autoscale at both the pod and cluster level so that you can dynamically align your workloads with actual demand. Scale up during peak load times, and scale down when demand is low.
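As a quick sketch, you can create a pod-level autoscaler with a single kubectl command (the deployment name web-app is a placeholder):

# create a Horizontal Pod Autoscaler targeting ~70% average CPU utilization
kubectl autoscale deployment web-app --min=2 --max=10 --cpu-percent=70

# check the autoscaler's current status
kubectl get hpa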

Best Practices

Core Services

Enable nginx-ingress, cert-manager, and external-dns at the click of a button, allowing you to run secure, web-accessible workloads without a second thought.
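Once enabled, a quick way to confirm the core services are healthy is to check their pods (the namespace names below are common defaults and may differ in your cluster):

# check that the core services' pods are running
kubectl get pods --namespace ingress-nginx
kubectl get pods --namespace cert-manager
kubectl get pods --namespace external-dns

# find the external IP of the nginx ingress load balancer
kubectl get service --namespace ingress-nginx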

Regional Clusters

We allow you to provision only regional GKE clusters, not zonal. With regional clusters, the pool nodes and the control plane are spread across zones in a region to mitigate the risk of zonal failure. As a result, you will have multiple control plane nodes, which ensures that the Kubernetes API stays up in the event of a zonal failure or cluster upgrade.
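You can confirm which zones a regional cluster's nodes are spread across with gcloud (the cluster and region names are placeholders):

# list the zones the cluster's nodes run in
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="value(locations)"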

No Default Node Pool

The default node pool will be deleted after the cluster comes up, and all nodes are managed via independent node-pool resources. This allows for easier configuration and upgrades of the cluster (both control-plane and worker nodes).

Security

Workload Identity

The bundle enables Workload Identity to allow pods to authenticate via Google Cloud Platform (GCP) service accounts instead of via static credentials or node service account permissions.
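As a sketch of the typical Workload Identity binding flow (all project, namespace, and account names below are placeholders):

# allow the Kubernetes service account my-ksa in namespace my-ns to
# impersonate the GCP service account my-gsa
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-ns/my-ksa]"

# annotate the Kubernetes service account so GKE maps it to the GCP one
kubectl annotate serviceaccount my-ksa --namespace my-ns \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com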

Node Service Account

The bundle creates a custom service account for GKE nodes. The account has limited permissions for node-specific actions (such as pulling container images or sending logs to Stackdriver).

Private Cluster

GCP offers a “private cluster” option, which covers two distinct settings. First, you can enable private nodes, which allocates private IPs for your node pools and prevents unwanted access from the public internet. Second, you can enable a private control-plane endpoint, which makes the Kubernetes API inaccessible from outside your GCP project. We enable private nodes but not the private endpoint.
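You can verify how your cluster is configured with gcloud (cluster and region names are placeholders):

# show the private cluster settings, including enablePrivateNodes and
# enablePrivateEndpoint
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="yaml(privateClusterConfig)"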

Auditing

All Kubernetes-powered clusters have Kubernetes audit logging, which keeps a chronological record of calls that have been made to the Kubernetes API server. Kubernetes audit-log entries are useful for investigating suspicious API requests, for collecting statistics, and for creating monitoring alerts for unwanted API calls.
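These entries land in Cloud Logging, so one way to inspect them is with gcloud (this is a sketch; adjust the filter to your needs):

# read the five most recent Kubernetes API audit entries
gcloud logging read \
  'resource.type="k8s_cluster" AND logName:"cloudaudit.googleapis.com"' \
  --limit 5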

Observability

By default, logging and monitoring are enabled on the cluster. Massdriver will monitor your workloads and send alerts when metrics are out of tolerance.

Connecting

After you have deployed a Kubernetes cluster through Massdriver, you may want to interact with the cluster using the powerful kubectl command line tool.

Install Kubectl

You will first need to install kubectl to interact with the Kubernetes cluster. Installation instructions for Windows, macOS, and Linux can be found in the official Kubernetes documentation at https://kubernetes.io/docs/tasks/tools/.

Note: While kubectl generally has forwards and backwards compatibility of core capabilities, it is best to match your kubectl client version to your Kubernetes cluster version. This ensures the best stability and compatibility for your client.
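You can check both versions at once, since the client and server versions are printed together:

# print the kubectl client version and the cluster's server version
kubectl version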


Download the Kubeconfig File

The standard way to manage connection and authentication details for Kubernetes clusters is through a configuration file called a kubeconfig file. The kubernetes-cluster artifact that is created when you make a Kubernetes cluster in Massdriver contains the basic information needed to create a kubeconfig file, so Massdriver makes it easy to download a kubeconfig file that lets you use kubectl to query and administer your cluster.

To download a kubeconfig file for your cluster, navigate to the project and target where the Kubernetes cluster is deployed and hover the mouse over the artifact connection port. A window will open that allows you to download the artifact as raw JSON or as a kubeconfig YAML file. Select “Kube Config” from the drop-down and click the button to download the kubeconfig for the Kubernetes cluster to your local system.


Use the Kubeconfig File

Once the kubeconfig file is downloaded, you can move it to your desired location. By default, kubectl will look for a file named config located in the $HOME/.kube directory. If you would like this to be your default configuration, you can rename and move the file to $HOME/.kube/config.
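For example, assuming the file was saved to your downloads folder as kubeconfig.yaml (the filename is a placeholder), you could make it your default configuration like this:

# create the default kubectl config directory if it doesn't exist
mkdir -p $HOME/.kube

# move the downloaded file into place as the default kubeconfig
mv $HOME/Downloads/kubeconfig.yaml $HOME/.kube/config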

A single kubeconfig file can hold multiple cluster configurations, and you can select your desired cluster through the use of contexts. Alternatively, you can have multiple kubeconfig files and select your desired file through the KUBECONFIG environment variable or the --kubeconfig flag in kubectl.
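For example (the context and file names below are placeholders):

# list all contexts in the active kubeconfig
kubectl config get-contexts

# switch to a different context
kubectl config use-context my-context

# point kubectl at a non-default kubeconfig file for this shell session
export KUBECONFIG=$HOME/kubeconfigs/dev.yaml

# or pass the file explicitly for a single command
kubectl get nodes --kubeconfig $HOME/kubeconfigs/prod.yaml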

Once you’ve configured your environment properly, you should be able to run kubectl commands. Here are some commands to try:

# get a list of all pods in the current namespace
kubectl get pods

# get a list of all pods in the kube-system namespace
kubectl get pods --namespace kube-system

# get a list of all the namespaces
kubectl get namespaces

# view the logs of a running pod in the default namespace
kubectl logs <pod name> --namespace default

# describe the status of a deployment in the foo namespace
kubectl describe deployment <deployment name> --namespace foo

# get a list of all the resources the kubernetes cluster can manage
kubectl api-resources
Variables

cluster_networking.cluster_ipv4_cidr_block (string)
CIDR block to use for Kubernetes pods. Set to /netmask (e.g. /16) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.

cluster_networking.master_ipv4_cidr_block (string)
CIDR block to use for the Kubernetes control plane. The mask must be exactly /28, the block must come from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), and it should not conflict with other ranges in use. It is recommended to use consecutive /28 blocks from the 172.16.0.0/16 range for all your GKE clusters (172.16.0.0/28 for the first cluster, 172.16.0.16/28 for the second, etc.).

cluster_networking.services_ipv4_cidr_block (string)
CIDR block to use for Kubernetes services. Set to /netmask (e.g. /20) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.

core_services.cloud_dns_managed_zones[] (array(string))
No description.

core_services.enable_ingress (boolean)
Enabling this will create an nginx ingress controller in the cluster, allowing internet traffic to flow into web-accessible services within the cluster.

node_groups[].is_spot (boolean)
Spot instances are more affordable, but can be preempted at any time.

node_groups[].machine_type (string)
Machine type to use in the node group.

node_groups[].max_size (number)
Maximum number of instances in the node group.

node_groups[].min_size (number)
Minimum number of instances in the node group.

node_groups[].name (string)
The name of the node group.
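If you want to confirm the CIDR ranges a running cluster actually uses, gcloud can report them (cluster and region names are placeholders):

# show the pod/service CIDRs and the control-plane CIDR
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="yaml(ipAllocationPolicy,privateClusterConfig.masterIpv4CidrBlock)"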