In our How-to-Kube series, we began by covering the pod basics. Like services, volumes, and namespaces, a pod is a basic Kubernetes object. A pod is a set of one or more containers scheduled onto the same physical or virtual machine and treated as a single unit. When you write a pod declaration, you can define any number of containers that live inside the pod.
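For illustration, a minimal manifest for a pod with two containers might look like the sketch below (the pod name and the second container are hypothetical, added only to show the multi-container structure):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web               # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
  - name: sidecar         # hypothetical second container in the same pod
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
```

Both containers are scheduled together, started together, and treated as one unit by the scheduler.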
Containers in a pod also share a network: a private network namespace is shared across all the containers inside the pod whenever it is scheduled, so the containers can reach one another over localhost. They can also share filesystem volumes. Much as Docker does with its --volumes-from flag, Kubernetes lets multiple containers running inside a pod share ephemeral or copy-on-write style storage from within the pod.
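As a sketch of that shared-storage idea, two containers can mount the same emptyDir volume, Kubernetes' ephemeral, pod-scoped storage (the pod and container names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo   # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # ephemeral volume that lives as long as the pod
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep infinity"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Anything the writer container puts in /data is immediately visible to the reader container; the volume is deleted when the pod goes away.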
Typically you won’t create a pod directly — instead you’ll create a higher-level object, such as a Deployment or StatefulSet, that includes a pod specification (see below).
A Deployment is an abstraction over the pod. It gives you extra functionality and control on top of the pod: you can declare how many instances of a pod should run across nodes, and define a rolling update strategy (for example, roll only one pod at a time and wait 30 seconds between each). This lets you control your deployments based on your unique requirements and achieve zero downtime as you bring up new processes and retire old ones.
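As a sketch, the replica count and rolling update behavior described above map onto real Deployment fields (the Deployment name and values here are illustrative; maxSurge, maxUnavailable, and minReadySeconds are the standard apps/v1 knobs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # illustrative name
spec:
  replicas: 3               # how many pod instances to run
  minReadySeconds: 30       # wait 30s after a pod is ready before continuing
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # bring up at most one extra pod at a time
      maxUnavailable: 0     # never take down a pod before its replacement is ready
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

With maxUnavailable set to 0, the old pods stay up until each replacement passes readiness, which is how the zero-downtime rollout described above is achieved.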
Deployments offer the following functionality:

- Declarative rollouts and rollbacks of changes to the pod template
- Scaling the number of pod replicas up or down
- Configurable rolling update strategies for zero-downtime releases
- Automatic replacement of failed pods via the underlying ReplicaSet
In this post, you’ll learn how to create a pod in Kubernetes using the nginx image, view the YAML that describes the pod, and then delete the pod that you created. We’ll be using the Minikube tool, which enables you to run a single-node Kubernetes cluster on your laptop or computer.
For more help getting started with Kubernetes, read our series intended for engineers new to Kubernetes and GKE. It provides a basic overview of Kubernetes, architecture basics and definitions, and a quick start for building a Kubernetes cluster and your first multi-tier webapp.
To begin, you need to launch a Kubernetes cluster (in GKE). Once you’re in the Kubernetes sandbox environment, make sure you’re connected to the Kubernetes cluster by executing kubectl get nodes in the command line to see the cluster's nodes in the terminal. If that worked, you’re ready to create and run a pod.
To create a pod using the nginx image, run the command kubectl run nginx --image=nginx --restart=Never. This creates a pod named nginx, running the nginx image from Docker Hub. By setting the flag --restart=Never, we tell Kubernetes to create a single pod rather than a Deployment.
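As a rough equivalent, the same pod can be declared in a manifest and created with kubectl apply -f pod.yaml; this sketch mirrors what the kubectl run command above produces:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  restartPolicy: Never    # matches the --restart=Never flag
  containers:
  - name: nginx
    image: nginx          # pulled from Docker Hub by default
```

The declarative form is what you would normally check into version control, while kubectl run is handy for quick experiments like this one.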
Once you hit enter, the pod will be created. You should see the message pod/nginx created displayed in the terminal.
You can now run the command kubectl get pods to see the status of your pod. To view the entire configuration of the pod, just run kubectl describe pod nginx in your terminal.
The terminal will now display the full description of the pod, starting with its name (nginx), its location on the Minikube node, its start time, and its current status. You'll also see in-depth information about the nginx container, including the container ID and where the image lives.
If you scroll all the way to the bottom of the output, you can see the events that have occurred in the pod. In the case of this tutorial, you’ll see that the pod was assigned to the Minikube node, the nginx image was pulled successfully, and the container was created and started.
Deleting the pod is simple. To delete the pod you created, just run kubectl delete pod nginx. Be sure to confirm the name of the pod you want to delete before pressing Enter. If the deletion succeeded, the message pod "nginx" deleted will display in the terminal.
Pods are an important unit for understanding the Kubernetes object model, as they represent the processes within an application. In most cases, pods serve as an indirect way to manage containers within Kubernetes. In more complex use cases, pods may encompass multiple containers that need to share resources, serving as the central location for container management.
One big area of concern for Kubernetes is a lack of visibility and inconsistent policy enforcement across multiple clusters and dev teams. As you begin your Kubernetes journey, consider putting Kubernetes guardrails in place to help ensure your team is using Kubernetes safely. Doing so early can help you avoid introducing configuration drift when there are no established internal standards for Kubernetes configurations. As you experiment, check out some Kubernetes security considerations:
Just as guardrails are important for ensuring consistent K8s deployments that align with your internal policies, Kubernetes best practices are also important for optimal performance. By learning best practices as you learn Kubernetes, you’ll be well positioned to evaluate the technology and scale it effectively.
Polaris is an open source project that runs a variety of Kubernetes best practice checks to ensure that pods and controllers are configured properly. Using this open source project, you can evaluate Kubernetes and avoid problems in the future.
Another challenge that often comes up for those just starting with Kubernetes is how to size your applications. Goldilocks is another open source project that helps you identify a starting point for resource requests and limits, enabling you to rightsize your applications and get your requests and limits “just-right.”
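To make the sizing idea concrete, requests and limits are set per container in the pod spec. The values below are illustrative starting points only, not recommendations; tools like Goldilocks exist precisely to help you find numbers that fit your actual workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:           # what the scheduler reserves for the container
        cpu: 100m         # illustrative: one tenth of a CPU core
        memory: 128Mi
      limits:             # hard ceiling enforced at runtime
        cpu: 250m
        memory: 256Mi
```

Requests drive scheduling decisions, while limits cap what the container can consume, so getting both "just right" matters for density and stability alike.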
Originally posted November 13, 2019; Updated April 30, 2024