In a Kubernetes cluster, each pod has its own IP address. However, pods are ephemeral; they are created and destroyed frequently, and whenever a pod is recreated, it gets a new IP address, which makes it hard for clients to reach that pod reliably. Services solve this by providing a stable IP address that persists even when a pod dies. Moreover, Services load balance traffic across pod replicas, which helps distribute requests evenly. Services also enable communication both within and outside the cluster.
Here are the five types of services:
ClusterIP Service
NodePort Service
LoadBalancer Service
ExternalName Service
Headless Service
ClusterIP service
The ClusterIP Service is the default and most commonly used Service type in Kubernetes. When a Service is created, it is assigned an internal IP address, which enables access to the pods from within the cluster. This type of Service is suitable for internal communication between pods and does not expose them externally, providing security and privacy.
In the above illustration, we assume that our cluster has two pods, one running SQLite and one running Python. SQLite runs on port 80, and Python runs on port 30. However, we know that their IP addresses will keep changing. To address this, we assign a stable IP to the Service (the ClusterIP), which fronts the SQLite and Python pods. Now, if either of them needs to communicate with the other, it reaches it through the ClusterIP and the port specified for the pod during deployment. That's how they are able to communicate with each other.
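As a rough sketch, a ClusterIP Service for the SQLite pods in this example might look like the following manifest. The Service name and the app: sqlite label are assumptions for illustration; port 80 matches the SQLite port mentioned above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sqlite-service      # hypothetical name
spec:
  type: ClusterIP           # default type; could be omitted
  selector:
    app: sqlite             # assumed label on the SQLite pods
  ports:
    - port: 80              # port the Service exposes inside the cluster
      targetPort: 80        # container port on the SQLite pods
```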
NodePort service
To enable external access to the application, we use the NodePort Service type. By specifying the nodePort property, we designate the port through which the Service accepts incoming connections from outside the Kubernetes cluster.
In the above illustration, we assume that our cluster has two pods, one running SQLite and one running Python. Let's say SQLite runs on port 80 with nodePort: 30001, and Python runs on port 30 with nodePort: 30002. If we want to access Python from outside the cluster, we combine the IP address assigned to the node with the nodePort, i.e., <node-IP>:30002.
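A minimal NodePort Service sketch for the Python pods could look like the following. The Service name and the app: python label are assumptions; nodePort: 30002 is the value from the example above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python-service      # hypothetical name
spec:
  type: NodePort
  selector:
    app: python             # assumed label on the Python pods
  ports:
    - port: 30              # Service port inside the cluster
      targetPort: 30        # container port on the Python pods
      nodePort: 30002       # port opened on every node (default range 30000-32767)
```

With this in place, the application would be reachable at <node-IP>:30002 from outside the cluster.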
LoadBalancer service
The LoadBalancer Service exposes the Service to the outside world using a load balancer. This Service type provisions a load balancer that distributes traffic to the pods associated with the Service. To access the Service, we open a web browser and navigate to the IP address of the load balancer.
In the above illustration, the load balancer listens on port 80 for incoming traffic. When it receives traffic on port 80, it forwards that traffic to the pods in the Service on port 30080.
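A minimal LoadBalancer Service sketch matching this setup might look as follows. The Service name and the app: web label are assumptions; port 80 and targetPort 30080 come from the illustration above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service         # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web                # assumed label on the backend pods
  ports:
    - port: 80              # port the load balancer listens on
      targetPort: 30080     # port the pods listen on
```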
ExternalName service
This Service type maps a Service to a DNS name. The ExternalName Service is useful for referencing resources that live outside the cluster, such as an external database or a web service, through a stable in-cluster name.
In this example, the Service is mapped to the DNS name of an external database. This means the database can be accessed from inside the cluster using the Service name, which resolves to that DNS name.
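As a sketch, an ExternalName Service could be declared like this. The Service name and the external DNS name db.example.com are placeholders, since the original example does not name them.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database   # hypothetical name used inside the cluster
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder DNS name of the external database
```

Pods in the cluster can then address the external database as external-database; the cluster DNS returns a CNAME record pointing to db.example.com.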
Headless service
The Headless Service in Kubernetes differs from the traditional ClusterIP Service by not assigning a single virtual IP address to all pods. Instead, it exposes the unique IP addresses of the individual pods directly. Think of it as a directory where each pod is like a file with its own distinct IP address. The Headless Service acts as the directory, enabling pod-to-pod communication by referencing the Service name and desired port, similar to accessing a file by its path and name.
In the above illustration, we have multiple instances of a database, such as SQLite, running within a Kubernetes cluster. In a Headless Service configuration, each SQLite pod keeps its own individually addressable IP. When other components, like Python pods, need to establish communication with these SQLite pods, they do so by referencing the individual pod's IP address directly. This direct pod-to-pod communication is beneficial in scenarios where maintaining the identity and addressing of each pod is crucial.
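A minimal headless Service sketch for the SQLite pods might look like the following. Setting clusterIP: None tells Kubernetes not to allocate a virtual IP; a DNS lookup of the Service name returns the individual pod IPs instead. The Service name and the app: sqlite label are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sqlite-headless     # hypothetical name
spec:
  clusterIP: None           # marks the Service as headless
  selector:
    app: sqlite             # assumed label on the SQLite pods
  ports:
    - port: 80
      targetPort: 80
```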
In summary, Services play a crucial role in Kubernetes by providing a stable endpoint for accessing pods and by facilitating both internal and external communication.
Unlock your potential: Kubernetes Essentials series, all in one place!
To deepen your understanding of Kubernetes, explore our series of Answers below:
What is Kubernetes?
Get an introduction to Kubernetes, the powerful container orchestration platform that automates deployment, scaling, and management of containerized applications.
What is Kubernetes Event-Driven Autoscaling (KEDA)?
Learn how KEDA enables event-driven scaling, allowing Kubernetes workloads to automatically scale based on external metrics such as message queues, databases, and cloud events.
Why do we use Kubernetes?
Understand the core benefits of Kubernetes, including automated deployment, scaling, and management of containerized applications across distributed environments.
What are Kubernetes namespaces?
Discover how Kubernetes namespaces help organize and isolate workloads within a cluster, enhancing security and resource allocation.
What are the different types of services in Kubernetes?
Explore the various Kubernetes service types—ClusterIP, NodePort, LoadBalancer, and ExternalName—and their roles in facilitating communication between applications.
ReplicationController in Kubernetes
Learn about the ReplicationController, its role in maintaining pod availability, and how it ensures that a specified number of pod replicas are always running.
ExternalDNS in Kubernetes
Understand how ExternalDNS simplifies service discovery by dynamically managing DNS records for Kubernetes services, making external access seamless.
What are taints and tolerations in Kubernetes?
Gain insights into taints and tolerations and how they control pod scheduling by preventing or allowing specific workloads to run on designated nodes.
Introduction to Node Affinity in Kubernetes
Discover how Node Affinity works in Kubernetes to influence pod scheduling by specifying node selection preferences and ensuring efficient workload distribution.