
Deploying Spring Boot Microservices to Google Kubernetes Engine Cluster on Google Cloud Platform

CODING Z2M

Updated: May 10, 2023

Tools: JDK 17, Spring Tool Suite, Git, Docker, Kubernetes (GKE), gcloud CLI, Google Cloud account

What is Docker?


Docker is an open-source containerization platform that enables developers to package applications into containers: standardized executable components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment. More on Docker
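As a hedged sketch of how such an image gets onto Docker Hub (assuming the Spring Boot project already contains a Dockerfile and you are logged in to a Docker Hub account; the repository and tag below match the image used later in this post):

# Build the image from the project's Dockerfile (assumed to be in the project root)
docker build -t codingz2m/trending-movies-service:0.0.1-SNAPSHOT .

# Push it to Docker Hub so the GKE cluster can pull it later
docker login
docker push codingz2m/trending-movies-service:0.0.1-SNAPSHOT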


What is Kubernetes? - Production-Grade Container Orchestration. Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications, including updating applications with zero downtime.


What's Google Kubernetes Engine? - A managed service for running Kubernetes. It provides advanced cluster management features such as easy cluster creation, load balancing, auto scaling, auto upgrades, and logging & monitoring. GKE: Watch on YouTube




What is Kubernetes Cluster and Nodes?

A Kubernetes cluster is a set of node machines for running containerized applications; the cluster is responsible for scheduling and running containers across that group of nodes.

- Cluster – the combination of worker nodes and the master (control plane) node

- Nodes – the virtual servers that run your workloads


Getting Started with Kubernetes and Google Kubernetes Engine (GKE) :

- Enabling GKE API and creating Kubernetes cluster on GCP

Choose the location, node type, and number of nodes (use the defaults).

- GKE Overview in GCP – Clusters, Workloads, Services & Ingress, Applications, Configuration, Storage.

NOTE: While creating a Kubernetes cluster, choose 'Standard Cluster'.

Kubernetes on Cloud: GKE, Amazon EKS (Elastic Kubernetes Service), AKS (Azure Kubernetes Service)
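If you prefer the command line to the Cloud Console, a Standard cluster can also be created with gcloud. A minimal sketch; the cluster name, zone, node count, and machine type below are only illustrative:

# Create a 3-node Standard GKE cluster (names and sizes are illustrative)
gcloud container clusters create codingz2m-cluster --zone us-central1-a --num-nodes=3 --machine-type=e2-medium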


Review Kubernetes Cluster

- Examine the cluster and node information.
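From the shell, the same information can be inspected with gcloud (cluster name and zone as used elsewhere in this post):

# List all clusters in the project and show details for one of them
gcloud container clusters list
gcloud container clusters describe codingz2m-cluster --zone us-central1-a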


Deploying your first Spring Boot Microservice to Kubernetes Cluster using Docker Image

- Go inside your cluster and activate Google Cloud Shell.

- Change the Cloud Shell settings if you want, and open Cloud Shell in a new window.

- Click the “CONNECT” button in the cluster window, copy the “configure kubectl” command, and run it in Cloud Shell to connect Cloud Shell to the cluster.
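The “configure kubectl” command shown by the CONNECT dialog is a gcloud get-credentials call along these lines (the cluster name, zone, and <Project ID> placeholder are illustrative; use the values from your own dialog):

# Fetch cluster credentials and point kubectl at the cluster
gcloud container clusters get-credentials codingz2m-cluster --zone us-central1-a --project <Project ID>

# Verify the connection
kubectl get nodes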


Deploying the application to the Kubernetes cluster and running commands against the cluster using ‘kubectl’ (the Kubernetes command-line tool), such as deploying apps, increasing the number of instances of an app, and so on.

- kubectl version


kubectl create deployment trending-movies-service --image=codingz2m/trending-movies-service:0.0.1-SNAPSHOT


Docker Hub Repository: https://hub.docker.com/repository/docker/codingz2m/trending-movies-service



Exposing the deployment (service) to the world

kubectl expose deployment trending-movies-service --type=LoadBalancer --port=8080


Note: check the “Services & Ingress” section in the ‘Kubernetes Engine’ overview page for the endpoints of your service.
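The external endpoint can also be looked up from Cloud Shell. A sketch, assuming the Service name from the expose command above; the request path is hypothetical and depends on your application's controller mappings:

# Wait for EXTERNAL-IP to be assigned, then call the service
kubectl get services
curl http://<EXTERNAL-IP>:8080/movies/trending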


Saving your Free GCP Credits:

In the cloud, you can delete resources when you are not using them. If you do not want to delete the cluster and create a new one every time, you can reduce the cluster size to zero.


After finishing your sessions for the day, you can reduce the cluster node size to zero with the following command.


gcloud container clusters resize --zone us-central1-a codingz2m-cluster --num-nodes=0 --project=<Project ID>


Example:

gcloud container clusters resize --zone us-central1-a codingz2m-cluster --num-nodes=0 --project=causal-bongo-318909


When you are ready to start again, increase the number of nodes:

gcloud container clusters resize --zone us-central1-a codingz2m-cluster --num-nodes=3 --project=causal-bongo-318909


Deleting a cluster:

gcloud container clusters delete [CLUSTER_NAME]


gcloud container clusters delete codingz2m-cluster --zone us-central1-a


How Does Kubernetes Cluster relate to Nodes, Pods, ReplicaSets, Service, Deployment?

$ kubectl get events

$ kubectl get pods/pod

$ kubectl get replicaset

$ kubectl get deployment

$ kubectl get service


When we use the “kubectl create deployment” command, Kubernetes creates a Deployment, along with a ReplicaSet and a Pod.

When we use the “kubectl expose deployment” command, Kubernetes creates a Service.
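To inspect what these commands produce, you can render the Deployment as YAML without applying it, and list everything the earlier commands created in the current namespace (same image as above):

# Print the Deployment manifest without creating anything on the cluster
$ kubectl create deployment trending-movies-service --image=codingz2m/trending-movies-service:0.0.1-SNAPSHOT --dry-run=client -o yaml

# List the Deployment, ReplicaSet, Pods and Service created earlier
$ kubectl get all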


Understanding Pods in Kubernetes

A Pod is a collection of containers that can run on a host (a single node). This resource is created by clients and scheduled onto hosts. Our containers run inside Pods.


So a Kubernetes node can have multiple Pods, and each of these Pods can have multiple containers. The Pod provides the shared execution environment for its containers. Each Pod has an IP address, and the load is balanced between whichever Pods are available at that particular point in time.


Pod – can have multiple containers; all the containers share resources, and within the Pod the containers can talk to each other.
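To list the containers inside a specific Pod, one option is a jsonpath query (a sketch; substitute a real Pod name from “kubectl get pods”):

$ kubectl get pod <name-of-pod> -o jsonpath='{.spec.containers[*].name}'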


The following shows how many containers are in each Pod and how many of them are ready.

$ kubectl get pods -o wide

$ kubectl explain pods


Seeing the details of the Pod:

$ kubectl get pods

$ kubectl describe pod <name of pod>

Note: use Ctrl + Shift + Page Up/Page Down or Fn + Shift + Up/Down to scroll through the Cloud Shell output.


ReplicaSets in Kubernetes:

A ReplicaSet ensures that a specific number of Pods is running at all times. It keeps monitoring the Pods and creates new ones if fewer Pods are running than desired. You can tell the ReplicaSet to maintain a higher number of Pods. A new ReplicaSet is created at application deployment time, and new Pods are created along with it.


DESCRIPTION: A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. When you scale up a Deployment, the Deployment updates its ReplicaSet, which then automatically scales up the Pods.


$ kubectl explain replicaset

$ kubectl get replicasets/replicaset/rs

$ kubectl get pods -o wide (more details about the Pods)


Deleting a Pod:

$ kubectl delete pods <POD-ID>


When you delete a pod, another Pod will be started up. Check it out by running the following command.

$ kubectl get pods -o wide


Maintaining a higher number of Pods – updating the ReplicaSet

$ kubectl scale deployment <deployed-service-name> --replicas=3


NOTE: If you are running more than one instance (Pod) of the application, the load will be distributed among them.


$ kubectl get pods

Now get the replicaset and see the output.


$ kubectl get replicaset

Decreasing Pods: $ kubectl scale deployment <deployed-service-name> --replicas=1


You can see the latest events that happen in the background:

$ kubectl get events


Deployment in Kubernetes:

When we update the application from version 1 to version 2, we need zero downtime.


For example, suppose we have 4 instances of V1 and want to update to V2. The rolling update strategy updates one Pod at a time: it launches a new Pod for V2, and once that Pod is up and running, it reduces the number of V1 Pods, then launches the next V2 Pod, and so on, until no V1 Pods remain.


Note: a ReplicaSet is tied to a specific release version. For example, the V1 ReplicaSet maintains the specified number of instances of the V1 release.
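A rolling update can be watched, inspected, and rolled back with the rollout subcommands. A sketch using the deployment name created earlier in this post:

# Watch the rolling update progress
$ kubectl rollout status deployment/trending-movies-service

# Show the revision history, and roll back to the previous revision if needed
$ kubectl rollout history deployment/trending-movies-service
$ kubectl rollout undo deployment/trending-movies-service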


$ kubectl get rs

Displaying each ReplicaSet with its Pods, containers, and images:

$ kubectl get rs -o wide


Deploying the new version of trending-movies-service. How can we do that?

$ kubectl set image deployment/<name-of-deployment> <name-of-container>=codingz2m/trending-movies-service:0.0.2-SNAPSHOT


Note: The new version of the application will be running. Check it out.

$ kubectl get rs -o wide

$ kubectl get pods


A new ReplicaSet was created with new Pods when we updated our application; it now ensures that the specified number of Pods is running at all times.

$ kubectl get rs


Services in Kubernetes:

NOTE: the Service is created with a load balancer, and it balances the load between whichever Pods are available at that particular point in time.

$ kubectl get pods -o wide


Each of the Pods has a unique IP address, and the load is distributed among these Pods.

$ kubectl delete pod <id-of-pod>


Note: we can also use the delete command to delete a ReplicaSet or a Deployment. When you delete a Pod, a new Pod is created with a new IP address.


Irrespective of all the changes happening to the Pods, we don't want the consumer side of things to be affected; that is, we don't want the users of the application to have to use a different URL. This is where the Service in Kubernetes comes into play.


The role of a Service in Kubernetes is to provide an always-available external interface to the applications running inside the Pods. It allows the application to receive traffic through a permanent, long-lived IP address. The Service was created with such an IP address when we deployed our application.


Note: Click the ‘endpoint’ of your service in the ‘Services & Ingress” section and see the service Pods which are serving the request for a specific service.
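The same information is available from the shell: the Endpoints object lists the Pod IPs currently backing the Service (Service name as created earlier in this post):

$ kubectl get endpoints trending-movies-service
$ kubectl describe service trending-movies-service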


Load Balancing:

Search for ‘Load Balancing’ in the GCP console. You can see the specific load balancer created for this Service (deployed-service-name).

Obtaining Logs:

kubectl logs pod/user-service-75dc5655db-djn7m -f
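To stream logs from all Pods of a deployment rather than a single Pod, a label selector can be used. “kubectl create deployment” labels its Pods with app=<deployment-name>, so for the deployment in this post that would be (a sketch, assuming the default labels were not changed):

kubectl logs -l app=trending-movies-service -f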

