Docker and Kubernetes: A Guide for Beginners

Part 1 – Introduction to Kubernetes

In the past five years, most new software projects have adopted Kubernetes. This was also the case at Encora, where we watched Kubernetes become the de facto orchestration tool for new software applications.

We created this series of articles to help engineers step into the Kubernetes world more easily, using clear how-to instructions and simple examples.
Links to the Kubernetes documentation are included for those who want to dive further into the details.

This first article focuses on setting up Docker and Kubernetes locally and creating the Kubernetes resources needed to run a microservice:
– deployment,
– service,
– ingress.

With these resources in place, we can deploy and access a basic microservice.

The source code can be found at https://github.com/marianteodorescu/workshops/tree/main/k8s-part1

Prerequisites

● Linux or Windows – Docker Desktop installed
● Test that Docker is correctly installed by running: docker --version and docker-compose --version. These should show valid versions.
● Enable Kubernetes in Docker Desktop.
● Test that you can run kubectl version. You should get a valid output with the version. If the command is not found, install kubectl (https://kubernetes.io/docs/tasks/tools/) and make sure it is on your PATH.
● Install VS Code
● Clone the source code from https://github.com/marianteodorescu/workshops/tree/main/k8s-part1
● Open a terminal and cd into k8s-part1 (the directory where this Readme is found)

What is Kubernetes – basic concepts

Kubernetes is an open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.

Kubernetes cluster overview

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application workload. Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers. Containers are created using container images.

Kubernetes Workload Resources

Deployment – declares how to deploy an application: which images to run and how many replicas.
Pod – the smallest deployable unit (with 1 or more containers).
Service – an abstract way to expose an application running on a set of Pods as a network service.
ConfigMap – stores non-confidential data in key-value pairs.
Secret – contains a small amount of sensitive data such as a password, a token, or a key.
Ingress – manages external access to the services in a cluster, typically HTTP.

Creating and applying a deployment file in Kubernetes

Docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

nginx-deployment.yaml – sample deployment file
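For reference, a minimal deployment file looks roughly like the sketch below (the exact contents of nginx-deployment.yaml are in the repository; the image version and replica count here are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3            # number of pod instances (assumed)
  selector:
    matchLabels:
      app: nginx         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2   # version assumed; updated later in this exercise
        ports:
        - containerPort: 80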
● Create the deployment: kubectl apply -f nginx-deployment.yaml
● Check if deployment was created: kubectl get deployments
● Check rollout status: kubectl rollout status deployment/nginx-deployment
● Check again if deployment was created: kubectl get deployments
● Get ReplicaSet: kubectl get rs
● Get the pods: kubectl get pods --show-labels
● Update the image in the deployment: kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1, or edit the version of the image in the yaml file and apply it again
● Get the ReplicaSets again: kubectl get rs. The previous ReplicaSet now has 0 pods. We can also check the pods to see their status
● Get the details of the deployment: kubectl describe deployments nginx-deployment
● Delete a deployment: kubectl delete deployment nginx-deployment

Creating and exposing a service for a deployment

Docs: https://kubernetes.io/docs/concepts/services-networking/service/

The deployment creates the pods with the container(s), but you won't be able to access them since they are not exposed. A service exposes them; the available ServiceTypes are described in the Service docs linked above.
nginx-service-loadBalancer.yaml – sample service file
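A LoadBalancer service along these lines would expose the deployment (a sketch; the service name is an assumption, the actual file is in the repository):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service    # name assumed
spec:
  type: LoadBalancer     # exposes the service externally
  selector:
    app: nginx           # selects the pods created by the deployment above
  ports:
  - protocol: TCP
    port: 80             # port exposed by the service
    targetPort: 80       # port the container listens on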
● Create the service: kubectl apply -f nginx-service-loadBalancer.yaml
● Access http://localhost/ – you should see the nginx home page

Creating an ingress to expose services

Docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
An ingress manages external access to services in the cluster, typically HTTP. It can handle hosts, SSL, and load balancing.

Prerequisite – Install controller

● Before creating an ingress, set up an ingress controller in the cluster: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
● Check that the controller pods were started: kubectl get pods --namespace=ingress-nginx
● Creating the controller is a one-time operation. You don't have to do it every time you want to define an ingress.
● Check what resources were created for the controller: kubectl get all --namespace=ingress-nginx

Create an ingress for the service(s)
● Modify the service to use type=ClusterIP (exposed only in the cluster): kubectl apply -f nginx-service-clusterIP.yaml
● Create the ingress for the service: kubectl apply -f nginx-ingress.yaml (a sketch of this file follows this list)
● Get the ingress: kubectl get ingress nginx-ingress
● Access http://localhost – you should see the nginx home page
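For reference, nginx-service-clusterIP.yaml is the same service as before with type: ClusterIP instead of LoadBalancer, and nginx-ingress.yaml might look roughly like this (a sketch; the names are assumptions, the actual file is in the repository):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx    # handled by the ingress-nginx controller installed above
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service   # the ClusterIP service (name assumed)
            port:
              number: 80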

References
● Overview: https://kubernetes.io/docs/concepts/overview/
● Components: https://kubernetes.io/docs/concepts/overview/components/
● Concepts: https://kubernetes.io/docs/concepts/

Conclusions

Following the resources above you can set up Docker and Kubernetes. Hopefully you have also learned basic concepts about Kubernetes and how to create Kubernetes resources using kubectl and YAML files. Once the YAML files are created, you can easily create, update, and delete resources.

The deployment describes what to deploy and in how many instances. The service provides access to all the deployment's pods through a single name, while the ingress allows external access to the service.

Part 2 – Kubernetes for Production

In part 2 of this exercise, reading this section and running the examples will help you understand and use the following:
– automatic scaling,
– jobs and cronjobs,
– checking logs and errors,
– connecting to a pod,
– contexts and namespaces.

The source code can be found at https://github.com/marianteodorescu/workshops/tree/main/k8s-part2

Prerequisites
● See Prerequisites section from part 1.
● Clone the source code from https://github.com/marianteodorescu/workshops/tree/main/k8s-part2
● Open a terminal and cd into k8s-part2 (the directory where this Readme is found).

Vertical scaling – Pod/container resources

Docs: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
There are two types of resource configuration:
● request: the minimum amount reserved for the container (used when scheduling the Pod)
● limit: the maximum amount the container is allowed to use

Most important resources:
● memory – measured in bytes. Use suffixes (E, P, T, G, M, k, or the power-of-two forms Ei, Pi, Ti, Gi, Mi, Ki) for larger quantities: 400M, 1.5G, 128Mi.
● CPU – measured in CPU units (1 unit = 1 core). You can specify fractional values: 0.5, or 0.1, which is equivalent to 100m (100 millicpu).

We can specify a request and a limit for each container at the following paths in the yaml files:
● spec.containers[].resources.limits.cpu
● spec.containers[].resources.limits.memory
● spec.containers[].resources.requests.cpu
● spec.containers[].resources.requests.memory
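In a deployment or pod manifest, these fields sit inside each container entry; a generic fragment (the values are only illustrative):

spec:
  containers:
  - name: app
    image: nginx:1.14.2
    resources:
      requests:
        memory: "64Mi"    # reserved for the container
        cpu: "250m"
      limits:
        memory: "128Mi"   # container is killed if it exceeds this (OOMKilled)
        cpu: "500m"       # container is throttled above this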

Errors caused by insufficient resources

● FailedScheduling – Kubernetes cannot find a node where the Pod can fit.
○ Create the deployment: kubectl apply -f high_cpu_deployment.yaml
○ Get the pods: kubectl get pods
○ Check the pod: kubectl describe pod <pod-name>. Notice the Events section and the message. Here you can also check how many resources the pod requested. Maybe we had a typo and asked for too much CPU :).
○ We can also check the nodes to see the used resources. Get the nodes: kubectl get nodes; describe a node: kubectl describe node <node-name>.
○ Change the previous deployment to a reasonable value for CPU (100m), deploy it, then check the pod status and the node status.

● Container terminated – the container tried to use more resources than its limits allow; for memory you will see OOMKilled in the Events section.

Horizontal Pod Autoscaling

Prerequisite

For metrics to work locally you need to enable the metrics server in Kubernetes (see https://github.com/kubernetes-sigs/metrics-server). Run the command below to install it.

● Run kubectl apply -f metrics-server.yaml
● Run kubectl top node and kubectl top pod -A to see resource usage

Create Deployment and HPA

Docs: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
The HorizontalPodAutoscaler automatically updates the workload resources to match demand: when load increases, it creates more Pods.

● Create the deployment and service: kubectl apply -f low_mem_deployment.yaml
● Create the hpa: kubectl apply -f low_mem_hpa.yaml (a sketch of this file follows this list). See the docs above for more details and for other metrics to scale on.
● Run kubectl get hpa low-mem --watch to see how the scaler is working and how the number of pods is increased based on the load.
● Run in another terminal kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://low-mem-service; done" to generate load on our service.
● Stop the load generator and the scaler should drop the number of pods (it will take a while).
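A possible sketch of low_mem_hpa.yaml (the target deployment name, replica bounds, and the scaling metric are assumptions; the actual file may scale on memory instead of CPU):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: low-mem
spec:
  scaleTargetRef:              # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: low-mem              # deployment name assumed
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu                # metric assumed; memory is also possible
      target:
        type: Utilization
        averageUtilization: 50   # scale up above 50% average usage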

Create and apply a cronjob

Docs: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
CronJobs perform scheduled actions (backups, reports, emails, etc.).

● Run kubectl apply -f cronjob.yaml (a sketch of this file follows this list)
● Get the list of pods with kubectl get pods and check the logs of the pod created by the cronjob: kubectl logs <pod-name>.
● Run kubectl get pods again to check that the cronjob created another pod after a minute.
● Run kubectl delete cronjob hello to clean up.
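A sketch of what cronjob.yaml could contain (the image and the printed message are assumptions; the name hello and the one-minute schedule follow from the steps above):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"          # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28    # image assumed
            command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure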

Connect to a pod and check logs

● Run kubectl apply -f low_mem_deployment.yaml
● Get the list of pods
● Connect to a pod: kubectl exec -it <pod-name> -- bash. You can now run commands in that pod: list env variables (env), run a script, etc.
● Run a command directly: kubectl exec -it <pod-name> -- /bin/sh -c "echo test"

Jobs

Docs: https://kubernetes.io/docs/concepts/workloads/controllers/job/
Jobs are used for one-off tasks. A Job creates one or more Pods and will continue to retry their execution until a specified number of them successfully terminates. Jobs are also useful for running specific commands which need more resources than a normal pod created by your deployment.

● Run kubectl apply -f job.yaml (a sketch of this file follows this list). It has a sleep 3600 command, which allows us to connect to the pod and run what we need inside it.
● Connect to the pod created by the job (see the previous section).
● (!) Always clean up after you have run jobs: kubectl delete job low-mem-job
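A sketch of job.yaml consistent with the description above (the image is an assumption; the name low-mem-job and the sleep 3600 command follow from the steps above):

apiVersion: batch/v1
kind: Job
metadata:
  name: low-mem-job
spec:
  backoffLimit: 4              # retry failed pods up to 4 times
  template:
    spec:
      containers:
      - name: low-mem-job
        image: busybox:1.28    # image assumed
        command: ["/bin/sh", "-c", "sleep 3600"]   # keeps the pod alive so we can exec into it
      restartPolicy: Never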

Using labels for grouping and interrogating resources

Docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Labels are used for grouping, identifying, and querying related resources in an organization.

● Run kubectl apply -f low_mem_deployment.yaml
● Get the list of pods/services/deployments etc for a specific label: kubectl get pods -l app=low-mem
● Labels are also used for linking resources, for example the way a service is attached to pods using the selector in low_mem_deployment.yaml (see the sketch below).
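A sketch of that linkage (fragments only; the app=low-mem label follows from the -l query above):

# In the Deployment's pod template:
  template:
    metadata:
      labels:
        app: low-mem        # label attached to each pod
---
# In the Service:
spec:
  selector:
    app: low-mem            # the service routes traffic to pods with this label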

Contexts

Docs: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

● Contexts are used to store the connection details for a Kubernetes cluster.
● View current context: kubectl config current-context
● View all contexts: kubectl config get-contexts
● Switch context: kubectl config use-context docker-desktop
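Contexts live in your kubeconfig file (by default ~/.kube/config); a simplified excerpt for the Docker Desktop cluster looks like this:

apiVersion: v1
kind: Config
current-context: docker-desktop
contexts:
- name: docker-desktop
  context:
    cluster: docker-desktop   # which cluster to talk to
    user: docker-desktop      # which credentials to use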

Namespaces

Docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
Namespaces are intended for use in clusters with many users spread across multiple teams or projects. Use namespaces to isolate resources in the cluster. Cluster admins create namespaces and give users access to them.

● Create the namespace: kubectl apply -f test_namespace.yaml (a sketch of this file is shown at the end of this section)

● View namespaces: kubectl get namespace

● Set the namespace for a request:
○ Create a deployment in the namespace: kubectl apply -f low_mem_deployment.yaml --namespace=test. Note: passing the namespace as a flag is bad practice for production; when creating resources, we should specify the namespace in the metadata of the resource.
○ View all the pods in the namespace: kubectl get pods --namespace=test

● Set the namespace preference (for all requests in the current context): kubectl config set-context --current --namespace=test
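For reference, test_namespace.yaml can be as simple as this (the namespace name test follows from the commands above):

apiVersion: v1
kind: Namespace
metadata:
  name: test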

Context helpers
https://github.com/jonmosco/kube-ps1 – for bash/zsh – display the k8s context and namespace.
https://github.com/ahmetb/kubectx – manage contexts and namespaces easier.

ConfigMaps: https://kubernetes.io/docs/concepts/configuration/configmap/

Secrets: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/

Cheat sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Conclusions

With the resources above you can now deploy and run a complex application. While a deployment and a service allow us to run a server application, jobs and cron jobs allow us to run processes on demand or on a schedule.

Automatic scaling is useful when the application receives more requests: Kubernetes can automatically increase or decrease the number of instances based on resource usage.
