Introduction
In this post you are going to learn the core principles of Kubernetes and how to create a k8s cluster on Google Cloud.
Project Creation and Configuration
The first thing you need to do on GCP in order to create a Kubernetes cluster is to create a project, so you can later add services to it. So, go to your Google Cloud Console and create a new project now if you haven't done so yet.
With our project created, we need to enable the Kubernetes Engine API for it so we can start using the gcloud client to configure and create our cluster.
These are the steps we will need to cover after creating our project:
1. Enable Kubernetes Engine API in the Google Cloud Console
2. Install gcloud client
3. Install kubectl client
4. Logging in to GCP
5. Setting a Project ID
6. Creating a Cluster
7. Creating a Pod
8. Creating a Service
9. Increasing the replica count
1. Enable Kubernetes Engine API in the Google Cloud Console
Go to the Kubernetes Engine menu in the Google Cloud Console to enable the API.
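Alternatively, once the gcloud client from step 2 is installed, the same API can be enabled from the terminal. A minimal sketch of that route:

gcloud services enable container.googleapis.com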
2. Install gcloud client
Install the gcloud client so you can use it to create your cluster and manage your resources on GCP:
You can follow this great installation tutorial from Google.
3. Install kubectl client
Next, install the kubectl client on your dev machine so you can use it to list your cluster objects, like nodes, pods and services.
gcloud components install kubectl
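To confirm the installation worked, you can ask kubectl for its client version (the exact version string will of course vary with your setup):

kubectl version --client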
4. Logging in to GCP
Now that we have everything set up, we are ready to log in to GCP. From your terminal, type the following to authenticate to GCP:
gcloud auth login
This will open a browser window to authenticate your user. Log in to your Google user account and allow access to the Google Cloud SDK.
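If you are working on a remote machine without a browser, gcloud can instead print a URL you open elsewhere and paste the resulting code back. A minimal variant using a flag that exists in current gcloud releases:

gcloud auth login --no-launch-browser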
Next, we need to set the project ID in which our new cluster will be created:
gcloud config set project PROJECT_ID
5. Setting a Project ID
Get the project ID from the project list in the Google Cloud Console and replace PROJECT_ID with its value. In our case:
gcloud config set project k8s-nosqljs
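To double-check which project gcloud is now pointing at, you can read the setting back:

gcloud config get-value project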
6. Creating a Cluster
Run the "gcloud container clusters create" command to create a cluster with the desired name and number of nodes:gcloud container clusters create nosqljs --num-nodes 3
As we can see from the Google Cloud Console, our cluster has just been created with three worker nodes.
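By default gcloud places the cluster in whatever zone your configuration points at. As a sketch, this is how you could pin the same command to a specific zone (us-central1-a is just an example) and then fetch the cluster credentials so kubectl can talk to it:

gcloud container clusters create nosqljs --num-nodes 3 --zone us-central1-a
gcloud container clusters get-credentials nosqljs --zone us-central1-a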
In Kubernetes you don't deal with containers directly. To run a container we first need to create a Pod on a worker node. A Pod is a group of one or more containers. You can read more here in the docs. Also from the docs: "Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as replication."
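To make the Pod concept more concrete, here is a minimal sketch of what an equivalent Pod manifest could look like for the same image. The name nosqljs-manual is just an illustrative placeholder, and we won't use this manifest in the rest of the post, since we will let kubectl create the Pod for us in a moment:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nosqljs-manual     # placeholder name for this example
spec:
  containers:
  - name: nosqljs
    image: cjafet/nosqljs  # public image on Docker Hub
    ports:
    - containerPort: 7700  # the port our app listens on
EOF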
Let's run the Kubernetes command-line tool, kubectl, to list the created nodes:
kubectl get nodes
You should now get output similar to this, showing our newly created Kubernetes cluster with its three worker nodes:
NAME                                     STATUS   ROLES    AGE   VERSION
gke-nosqljs-default-pool-f7f90d91-c9tg   Ready    <none>   87m   v1.19.9-gke.1900
gke-nosqljs-default-pool-f7f90d91-fw20   Ready    <none>   87m   v1.19.9-gke.1900
gke-nosqljs-default-pool-f7f90d91-j1j7   Ready    <none>   87m   v1.19.9-gke.1900
Each node has three main components: the kubelet, a container runtime, and the kube-proxy. You can read more about them here in the docs.
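If you want to see these components on a live node, kubectl describe prints a System Info section that includes the kubelet, container runtime and kube-proxy versions. The node name below is the one from our output, so substitute one of yours:

kubectl describe node gke-nosqljs-default-pool-f7f90d91-c9tg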
Now what? We have our Kubernetes cluster created and we have our worker nodes, but how do we ask Kubernetes to run a container for us? Once again we will use the kubectl client to do it for us.
7. Creating a Pod
kubectl run nosqljs --image=cjafet/nosqljs --port=7700 --generator=run/v1
replicationcontroller/nosqljs created
As we can see, we need a public Docker image, like one on Docker Hub, so that when the kubelet tells the container runtime to run our container, it can find and pull the image from the registry.
The kubectl run command creates and runs a particular image in a Pod for us. As you can see, we don't even need to create the Pod itself; we just ask Kubernetes to do it for us.
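One caveat: the --generator flag was deprecated and later removed from kubectl, so on recent versions the command above will fail. There, kubectl run creates a bare Pod rather than a ReplicationController, and the usual way to get replicated pods is a Deployment, roughly along these lines:

kubectl create deployment nosqljs --image=cjafet/nosqljs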
Now would be a good time to find out if we have any Pod created for us by Kubernetes! We can do that by running the following kubectl command:
kubectl get pods
This will give us the following result:
NAME            READY   STATUS    RESTARTS   AGE
nosqljs-b2zdf   1/1     Running   0          3m50s
That's awesome, isn't it? Let's improve this output by also making it return information about the node the Pod is running on, using the -o wide option, like this:
kubectl get pods -o wide
That would give us the following result:
NAME            READY   STATUS    RESTARTS   AGE     IP          NODE
nosqljs-b2zdf   1/1     Running   0          5m17s   10.32.1.5   gke-nosqljs-default-pool-f7f90d91-c9tg
We can now inspect our Pod by using the describe command to see if we can find our container running inside of it:
kubectl describe pod nosqljs-b2zdf
Name:         nosqljs-b2zdf
Namespace:    default
Priority:     0
Node:         gke-nosqljs-default-pool-f7f90d91-c9tg/10.128.0.3
Start Time:   Sun, 13 Jun 2021 16:56:22 -0300
Labels:       run=nosqljs
Annotations:  <none>
Status:       Running
IP:           10.32.1.5
IPs:
  IP:  10.32.1.5
Controlled By:  ReplicationController/nosqljs
Containers:
  nosqljs:
    Container ID:   containerd://aaeff88568513aef023757c76772434cb31e4b205771eeed0d07dd0827c4a22a
    Image:          cjafet/nosqljs
    Image ID:       docker.io/cjafet/nosqljs@sha256:703ea0986b501ebadf782f42743e00b36ef5aff527274dd9c0f03a52151f4b5b
    Port:           7700/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 13 Jun 2021 16:56:26 -0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6qqw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-r6qqw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r6qqw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                             Message
  ----    ------     ----   ----                                             -------
  Normal  Scheduled  6m5s   default-scheduler                                Successfully assigned default/nosqljs-b2zdf to gke-nosqljs-default-pool-f7f90d91-c9tg
  Normal  Pulling    6m4s   kubelet, gke-nosqljs-default-pool-f7f90d91-c9tg  Pulling image "cjafet/nosqljs"
  Normal  Pulled     6m2s   kubelet, gke-nosqljs-default-pool-f7f90d91-c9tg  Successfully pulled image "cjafet/nosqljs" in 2.666870546s
  Normal  Created    6m1s   kubelet, gke-nosqljs-default-pool-f7f90d91-c9tg  Created container nosqljs
  Normal  Started    6m1s   kubelet, gke-nosqljs-default-pool-f7f90d91-c9tg  Started container nosqljs
You can now see that we have our image running in a container with the Ready status equal to True! We can also find information about the container port, uptime, number of restarts, and even which replication controller is responsible for it in the "Controlled By" field. We will need this information to expose our service to the web.
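Another quick sanity check at this point is to look at the container's logs; the Pod name is the one from our cluster, so replace it with yours (what the app prints on startup will of course depend on the image):

kubectl logs nosqljs-b2zdf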
8. Creating a Service
To be able to access our container application, there is one more step we need to take here. We need to tell Kubernetes to expose the resource, the replication controller, that it created from the kubectl run command we used before. This is how we ask Kubernetes to expose our replication controller for us, by creating a service of type LoadBalancer:
kubectl expose rc nosqljs --type=LoadBalancer --name=nosqljs-http
service/nosqljs-http exposed
We can now check if our service was created. At first we will get a pending status while Kubernetes is creating our load balancer. When it is all done, it will show the external IP address on which our service will respond to requests.
kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.35.240.1     <none>        443/TCP          2d22h
nosqljs-http   LoadBalancer   10.35.243.185   <pending>     7700:30015/TCP   16s
kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
kubernetes     ClusterIP      10.35.240.1     <none>           443/TCP          2d22h
nosqljs-http   LoadBalancer   10.35.243.185   35.223.229.250   7700:30015/TCP   93s
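With the external IP assigned, we can hit the service from outside the cluster. A minimal check with curl, assuming the app answers on its root path (the IP below is the one our load balancer received, so use yours):

curl http://35.223.229.250:7700/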
We can also get the name of our replication controller like this:
kubectl get rc
NAME      DESIRED   CURRENT   READY   AGE
nosqljs   1         1         1       137m
You may be wondering by now why we need a service and what a replication controller is. Kubernetes Pods are ephemeral. Every time a Pod is recreated it gets assigned a new IP address, so we need a way to expose all pods through a single IP address that never changes. A service is this layer of abstraction that defines a logical set of Pods. A ReplicationController is responsible for making sure the right number of pods is running. As we will see in a later post, a ReplicaSet is now the recommended way to set up replication.
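You can actually see this "logical set of Pods" behind a service: Kubernetes tracks it as an Endpoints object, and listing it shows the Pod IPs the service forwards traffic to. A quick way to inspect it for our service:

kubectl get endpoints nosqljs-http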
You can read more about services here and about replication controllers here.
9. Increasing the replica count
There is one important thing to notice here. Even though we have three worker nodes, the DESIRED number of pods in our rc is one, and as a result we are only using the node ending in f7f90d91-c9tg, as we can see from the kubectl get pods -o wide command. So, let's change that by setting the desired number of replicas in our replication controller to three instead of one:
kubectl scale rc nosqljs --replicas=3
replicationcontroller/nosqljs scaled
If we run the get rc command again we should now have three instead of one for the desired number of pods.
kubectl get rc
NAME      DESIRED   CURRENT   READY   AGE
nosqljs   3         3         3       177m
We can also check the pods again by running the command with the -o wide option, and this time we will notice that each Pod is running on a separate node, so all three nodes are being used:
$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP          NODE
nosqljs-b2zdf   1/1     Running   0          3h3m    10.32.1.5   gke-nosqljs-default-pool-f7f90d91-c9tg
nosqljs-sbrwh   1/1     Running   0          6m40s   10.32.0.6   gke-nosqljs-default-pool-f7f90d91-fw20
nosqljs-zq47k   1/1     Running   0          6m40s   10.32.2.8   gke-nosqljs-default-pool-f7f90d91-j1j7
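As a final experiment, the replication controller's job becomes very visible if you delete one of the pods and list them again: the rc notices the desired count dropped below three and spins up a replacement. The Pod name below is from our output, so use one of yours:

kubectl delete pod nosqljs-sbrwh
kubectl get pods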
Conclusion
In the next posts we'll go even further by covering more advanced Kubernetes concepts like Deployments and ConfigMaps. We will also need to change the way our in-memory database records are saved: because they currently live inside the container, we get different results back from our service depending on which container handles each request. We will learn how to fix that next!
All commands used in this post
gcloud components install kubectl
gcloud auth login
gcloud config set project PROJECT_ID
gcloud container clusters create nosqljs --num-nodes 3
kubectl get nodes
kubectl run nosqljs --image=cjafet/nosqljs --port=7700 --generator=run/v1
kubectl get pods -o wide
kubectl describe pod nosqljs-b2zdf
kubectl expose rc nosqljs --type=LoadBalancer --name=nosqljs-http
kubectl get services
kubectl get rc
kubectl scale rc nosqljs --replicas=3