Introduction to Kubernetes Concepts and Architecture

Basic Concepts of Kubernetes


Managing large software systems composed of multiple services is a tedious task for a DevOps engineer. Microservices come to the rescue by breaking these complicated deployments apart: each microservice in the system has its own responsibility for a specific task, and each of these micro-tasks can be deployed as a service unit inside a container. If you are not familiar with containers, read this article to learn more about Docker, the most popular and widely used container technology for deploying microservices.


As described earlier, we can use a single container to deploy a single service, with the container holding all the required configuration and dependencies. A single service, however, still suffers from the common problem of being a single point of failure. To avoid this, we need to run additional replicas of the service so that, if one replica fails, the next available one takes over the load and continues serving. Another reason to run multiple containers for the same service is to distribute the load between them, which can be achieved by placing the replicas behind a load balancer. Maintaining many containers across many services, with replication, is not easy to manage manually. Kubernetes handles all of this complexity for you: it provides several features that make it easy to manage multiple containers, a discipline known as container orchestration.


What Does Kubernetes Do?





When high availability is required, we must scale the system so that each service has replicas, each running on a different node (a separate physical or virtual machine).

Here, a load balancer distributes the load among the nodes. In this setup, a single point of failure is avoided by routing traffic to another node when one of the nodes goes down.
In an on-premises system with many nodes and services, hardware usage may be inefficient because each service requires a different hardware configuration; as a result, pinning services to specific nodes is not efficient either. Kubernetes solves these resource-utilization problems elegantly by orchestrating containerized services across multiple nodes.

The Kubernetes cluster is managed by the master, which handles scheduling applications, maintaining their desired state, scaling them, and rolling out new updates. A node is a virtual or physical machine that serves as a worker machine in a Kubernetes cluster. The nodes and the master communicate with each other via the Kubernetes API. A Kubernetes pod is a group of containers that are deployed together on the same host.

Pod

A pod is a set of containers and the unit of deployment in a Kubernetes cluster. Each pod has its own IP address; every container in the same pod shares that IP address and can reach the other containers in the pod via localhost.
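As a sketch, a minimal Pod manifest might look like the following (the names and image are illustrative, not part of the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod        # illustrative pod name
  labels:
    app: web           # label used later to select this pod
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image would do here
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` creates a single pod; in practice pods are usually created indirectly via a Deployment.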

Services

Because pods change dynamically, it is hard to reference an individual pod. Services provide an abstraction over pods and give you a stable, addressable way to communicate with them.
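A minimal Service manifest, assuming pods labeled `app: web` as in the earlier illustration, could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service   # illustrative name
spec:
  selector:
    app: web          # routes traffic to pods carrying this label
  ports:
    - port: 80        # port the Service exposes inside the cluster
      targetPort: 80  # port on the pod's container
```

Other pods in the cluster can now reach the backing pods at the stable name `web-service`, regardless of which individual pods are alive.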

Ingress

Most of the time, pods and services are encapsulated inside the Kubernetes cluster, so external clients cannot reach them directly. An Ingress is a set of rules that allows incoming connections to reach cluster services.
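A sketch of an Ingress that exposes the illustrative `web-service` from above might look like this (the hostname is an assumption for the example; an Ingress controller must be installed in the cluster for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com            # illustrative external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # cluster Service to expose
                port:
                  number: 80
```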

Docker

A Docker daemon runs on each node to pull images from the Docker registry and run them.

Kubelet

Kubelet is the node agent that runs periodically to check the health of the containers in pods. The API server sends the instructions needed to run the containers, and kubelet ensures that the containers are in the desired state.

Kube-proxy

Kube-proxy distributes load across the pods. Load distribution is based on iptables rules or a round-robin method.

Deployment

A Deployment is what you use to describe your desired state to Kubernetes.
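Continuing the illustrative example, a Deployment that keeps three replicas of the web pod running could be sketched as follows:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment   # illustrative name
spec:
  replicas: 3            # desired number of pod replicas
  selector:
    matchLabels:
      app: web           # manage pods carrying this label
  template:              # pod template used to create replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Kubernetes continuously reconciles the actual state toward this desired state: if a pod dies, a replacement is created automatically.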

Characteristics of Kubernetes

Kubernetes provides several features that let the application deployer easily deploy and manage the entire system.

Replication control

This component manages the number of replicated pods to keep running in the Kubernetes cluster.
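In current clusters this role is typically filled by a ReplicaSet (usually created for you by a Deployment, as above); a minimal standalone sketch, reusing the illustrative labels, might look like this:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-replicaset   # illustrative name
spec:
  replicas: 3            # number of pod copies to maintain
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```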

Resource monitoring

The health and performance of the cluster can be measured using add-ons such as Heapster, which collects cluster metrics and saves the statistics in InfluxDB. The data can then be visualized using Grafana, a user interface well suited to analyzing this data.

Horizontal autoscaling

Heapster data is also useful for scaling the system when it is heavily loaded. The number of pods can be increased or decreased depending on the system's load.
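This behavior can be declared with a HorizontalPodAutoscaler. A sketch targeting the illustrative Deployment from earlier (the target name and thresholds are assumptions, and a cluster metrics source must be available):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment     # illustrative Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```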
