·       About Docker

·       Namespaces

·       Control Groups

·       Container Image

·       Container Runtimes

·       Container Orchestration

·       Kubernetes Cluster

·       Kubernetes Main Resources (Pods, Services, Replication Controllers, Persistent Volume, Persistent Volume Claims)

·       Docker Commands

·       Docker Client Verbs

·       Managing Containers

Docker uses a client-server architecture, described below:    


                     The Docker client: the command-line tool (docker) is responsible for communicating with the server using a RESTful API to request operations.


                     The Docker daemon: this service, which runs as a daemon on an operating system, does the heavy lifting of building, running, and downloading container images.

       The daemon can run either on the same system as the docker client or remotely.    


Docker depends on three major elements:    


                     Images are read-only templates that contain a runtime environment, including the application and its libraries. Images are used to create containers, and can be created, updated, or downloaded for immediate consumption.


                     Registries store images for public or private use. The best-known public registry is Docker Hub, which stores multiple images developed by the community, but private registries can be created to support internal image development at a company's discretion.


                     Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.        

Note

         In a RHEL environment, the registry is represented by a systemd unit called docker-registry.service.    


Containers created by Docker, from Docker-formatted container images, are isolated from each other by several standard features of the Linux kernel. These include:    


Namespaces

The kernel can place specific system resources that are normally visible to all processes into a namespace. Inside a namespace, only processes that are members of that namespace can see those resources. Resources that can be placed into a namespace include network interfaces, the process ID list, mount points, IPC resources, and the system's own hostname information. As an example, two processes in two different mount namespaces have different views of what the mounted root file system is. Each container is added to a specific set of namespaces, which are only used by that container.
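The namespace mechanism can be observed directly on any Linux host with /proc mounted; this is a quick way to see what Docker builds on:

```shell
# Each process's namespace memberships appear as symlinks under /proc/<pid>/ns.
# Two processes in the same namespace point at the same namespace inode.
ls -l /proc/self/ns

# Show just the UTS (hostname) namespace of the current shell:
readlink /proc/self/ns/uts
```

Inside a container, the same commands print different namespace inodes than on the host, which is exactly the isolation described above.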

                    Control groups (cgroups)               

                     Control groups partition sets of processes and their children into groups in order to manage and limit the resources they consume. Control groups place restrictions on the amount of system resources the processes belonging to a specific container might use.  This keeps one container from using too many resources on the container host.                


SELinux

SELinux is a mandatory access control system that is used to protect containers from each other and to protect the container host from its own running containers. Standard SELinux type enforcement is used to protect the host system from running containers. Container processes run as a confined SELinux type that has limited access to host system resources. In addition, sVirt uses SELinux Multi-Category Security (MCS) to protect containers from each other. Each container's processes are placed in a unique category to isolate them from each other.

A container image is a blueprint from which a container is created.

Namespaces and Control groups (cgroups) are the two kernel components that Docker uses to create and manage the runtime environment for any container.


An existing image of a WordPress blog was updated on a developer's machine to include new homemade extensions. Which is the best approach to create a new image with those updates provided by the developer?

The updates made to the developer's custom WordPress should be assembled as a new image using a Dockerfile to rebuild the container image.


Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the Kubernetes cluster.

Operating-system-level virtualization allows us to run multiple isolated user-space instances in parallel. These user-space instances include the application source code, required libraries, and the required runtime to run the application without any external dependencies. These user-space instances are referred to as containers.

In the container world, this box containing our application source code and all its dependencies and libraries is referred to as an image. A running instance of this box is referred to as a container. We can run multiple containers from the same image.

 When a container is created from an image, it runs as a process on the host's kernel. It is the host kernel's job to isolate the container process and to provide resources to each container.

Container Runtimes

Namespaces and cgroups have existed in the Linux kernel for quite some time, but consuming them to create containers was not easy.  Docker hid all the complexities in the background and came up with an easy workflow to share and manage both images and containers.


Docker achieved this level of simplicity through a collection of tools that interact with a container runtime on behalf of the user. The container runtime ensures container portability, offering a consistent environment for containers to run, regardless of the infrastructure. Some of the container runtimes are provided below:



runC is the CLI tool for spawning and running containers.




containerd is an OCI-compliant container runtime with an emphasis on simplicity, robustness, and portability. It runs as a daemon and manages the entire lifecycle of containers. It is available on Linux and Windows. Docker, which is a containerization platform, uses containerd as a container runtime to manage runC containers.



rkt (pronounced "rock-it") is an open source, Apache 2.0-licensed project from CoreOS.



CRI-O is an OCI-compatible runtime, which is an implementation of the Kubernetes Container Runtime Interface (CRI). It is a lightweight alternative to using Docker as the runtime for Kubernetes.

Micro OSes for containers

A micro OS eliminates all the packages and services of the host operating system (OS) that are not essential for running containers. They are specialized OSes. Examples:


·    Alpine Linux

·    Atomic Host 

·    Fedora CoreOS (successor to CoreOS Container Linux)

·    RancherOS 

·    Ubuntu Core

·    VMware Photon


Container orchestration is an umbrella term that encompasses container scheduling and cluster management. Container scheduling allows us to decide on which host a container or a group of containers should be deployed. With cluster management orchestrators, we can manage the resources of cluster nodes, as well as add or delete nodes from the cluster. Some of the available solutions for container orchestration are:

·    Docker Swarm

·    Kubernetes

·    Mesos Marathon

·    Nomad

·    Amazon ECS

A Kubernetes cluster is a set of node servers that run containers and are centrally managed by a set of master servers.





Master

A server that manages the workload and communications in a Kubernetes cluster.


Node

A server that performs work in a Kubernetes cluster.


Label

A key/value pair that can be assigned to any Kubernetes resource. A selector uses labels to filter eligible resources for scheduling and other operations.


The five main Kubernetes resource types:


Pods

Pods represent a collection of containers that share resources, such as an IP address and persistent storage volumes. The pod is the basic unit of work for Kubernetes: a set of containers managed by Kubernetes as a single unit.

Each pod, not each container, gets its own IP address.
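A minimal Pod manifest sketch; the name and image are illustrative, not taken from these notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # hypothetical pod name
  labels:
    app: my-app             # label used later by selectors
spec:
  containers:
  - name: web
    image: nginx:1.25       # any container image works here
    ports:
    - containerPort: 80
```

The IP address is assigned to the Pod as a whole; all containers in a multi-container Pod share it.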


Services

Services define a single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.

Containers inside Kubernetes pods are not supposed to connect to each other's dynamic IP address directly. It is recommended that they connect to the more stable IP addresses assigned to services, and thus benefit from scalability and fault tolerance.

A pod can be fronted by a Service with a permanent IP address. The lifecycles of a Pod and its Service are not connected: if a Pod dies and another Pod comes up, the Service keeps the same IP address.
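A Service manifest sketch; it assumes Pods carrying the label app: my-app exist (all names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc          # hypothetical service name
spec:
  selector:
    app: my-app             # round-robins across all Pods with this label
  ports:
  - port: 80                # the Service's stable port
    targetPort: 80          # the containerPort inside the Pods
```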

Ingress: For your application to talk with the outside world, you create a website link with HTTPS and no port attached to it, e.g. https://my-app.com. The request then does not go straight to the Service; it goes to the Ingress first, which forwards it to the Service.
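An Ingress manifest sketch for the https://my-app.com example; the resource and service names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress      # hypothetical name
spec:
  rules:
  - host: my-app.com        # the external hostname from the example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc   # the Service the Ingress forwards to
            port:
              number: 80
```

The https:// part is handled by adding a spec.tls section pointing at a certificate stored in a Secret.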

Say you have two Pods: one with the application and another with the DB. These Pods run inside a Node (a virtual machine, for example), and inside the Pods are your containers. Usually each Pod contains one application with all its libraries; other Pods hold the DB, and maybe another Pod runs monitoring tools like Datadog. Each Pod gets a permanent IP address provided by a component called a Service: if a Pod dies and another one is launched, the Service keeps the same address. The Service is also a load balancer, so it rotates traffic, and if a Pod dies it redirects the user to a working Pod. This permanent address is used between Pods in the same cluster. For public users to access your application, it is better to use a domain such as https://my-app.com without specifying a port; the request first hits a component called Ingress, which then passes it to the Service.

The ConfigMap component is used to configure the endpoint/domain of each Pod for communication purposes. If an endpoint used by one Pod changes, you just need to change it in the ConfigMap: because it is attached to the Pods, the app picks up the update. Otherwise you would need to change it in the application and then rebuild the whole app as a new version. For configuration of passwords, usernames, credentials, etc., you use a component called Secret: like a ConfigMap, but intended for sensitive data (values are stored base64-encoded and can be encrypted at rest).
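A sketch of the two components; all names are illustrative, and note that the Secret value is base64-encoded rather than encrypted by default:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config       # hypothetical name
data:
  DB_HOST: mongodb-service  # the endpoint the app reads instead of hardcoding it
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret       # hypothetical name
type: Opaque
data:
  DB_PASSWORD: bXlwYTU1     # base64 of "mypa55"; base64 is encoding, not encryption
```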

For permanent storage of your data you need Volumes: a hard disk inside the Node/virtual machine, storage outside the K8s cluster, or even cloud storage attached to the pod. So if your Pod with the DB is destroyed or restarted, your data persists in the Volume storage. K8s does not manage data persistence! The admin is responsible for storing and keeping the data safe.

For high availability you would have a replica of your app running in another node. You define this in the Pod's blueprint, which is another K8s component called a Deployment. You don't work with Pods directly; you use the Deployment to scale the number of replicas (Pods) up or down. To create replicas of DBs you wouldn't use a Deployment, because of the problem of data inconsistency; in that case you use another component called a StatefulSet, for apps like MySQL, MongoDB, or Elasticsearch. The StatefulSet gives each replica a stable identity and its own storage so database reads and writes stay consistent.
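A Deployment manifest sketch with two replicas; names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2               # scale up/down by changing this number
  selector:
    matchLabels:
      app: my-app
  template:                 # the Pod blueprint that gets replicated
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any stateless application image
```

Running kubectl scale deployment my-app --replicas=5 adjusts the count without editing the file.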

Nodes are known as Worker Nodes in K8s, each Node has multiple Pods on it. Three processes must be installed in every Node:

1 - Container runtime - if you use Docker containers, it needs Docker installed

2 - Kubelet

3 - Kube Proxy, which forwards traffic to the corresponding place

                                 Replication Controllers                          

A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes. It is responsible for increasing or decreasing the number of pods of a particular application.
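A ReplicationController manifest sketch; it embeds the pod definition to be replicated (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-rc           # hypothetical name
spec:
  replicas: 3               # desired number of pod copies
  selector:
    app: my-app             # pods counted toward the replica total
  template:                 # the pod definition being replicated
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
```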

                                 Persistent Volumes (PV)                          

Provision persistent networked storage to pods that can be mounted inside a container to store data.

                                 Persistent Volume Claims (PVC)                          

Represent a request for storage by a pod to Kubernetes.
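A PersistentVolumeClaim sketch; the claim name and size are illustrative. A pod mounts it by referencing the claim name in its volumes section:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data         # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce           # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi          # amount of storage requested from a PV
```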

Some Docker Commands:


docker pull image_name:version > pull an image from a repository (public, private, or local). If you don't specify the version, the latest is pulled.


docker run -it -h cent --name mycontainer centos > create a container and run it. In this example it creates and runs a CentOS container with -it (interactive terminal), -h (hostname), and --name (container name). If the image is not available in the local Docker daemon cache, the docker run command tries to pull the image as if a docker pull command had been used.


docker images > list the images you have locally. One image can have many tags: latest, 5.5, etc.


docker ps > list running containers. Add -a to show all containers, running and/or stopped. You can also see what is going on inside a container, such as the command it was started with and when it started running.


docker search centos > Search for container images. The search uses the Docker Hub registry and also any other version 1-compatible registries configured in the local Docker daemon.


Running the docker command requires special privileges. On a local PC, add yourself to the docker group. To look for the group in Ubuntu:

cat /etc/group | grep docker


Note: For a production environment, docker command access should be given via the sudo command, because the docker group is vulnerable to privilege-escalation attacks.


Many container images require parameters to be started, such as the official MySQL image. They should be provided using the -e option of the docker command, and are seen as environment variables by the processes inside the container. Example for the official mysql image (the root password line and the image name are required to complete the command; the values here are illustrative):

docker run --name mysql-custom \
-e MYSQL_USER=redhat -e MYSQL_PASSWORD=r3dh4t \
-e MYSQL_ROOT_PASSWORD=r00tpa55 \
-d mysql



docker rm container_name > delete a container


docker rmi my_image > delete an image from your local repository. Before deleting an image, any containers using that image have to be stopped and removed.


Start a container from the Docker Hub MySQL image.

The backslashes (\) in the following command denote Linux shell line continuations.

docker run --name mysql-basic \
-e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
-e MYSQL_ROOT_PASSWORD=r00tpa55 \
-d mysql:5.6


Check if the container was started correctly. Run the following command:

[docker@minishift ~]$ docker ps | grep mysql




13568029202d mysql:5.6 "docker-entrypoint.sh" 6 seconds ago

Up 4 seconds 3306/tcp mysql-basic


Access the container sandbox by running the following command:

[docker@minishift ~]$ docker exec -it mysql-basic bash

The command starts a Bash shell, running as root inside the MySQL container; the shell prompt displays the container ID.


Log in to MySQL as the database administrator user (root).

# mysql -pr00tpa55


Stop the running container by running the following command:

docker stop mysql-basic


If for some reason you can't stop the container, try from inside the container:

ps -ef (if ps is not available, the Linux package to install is 'procps')

kill -9 PID

If it still doesn't work, run at your host terminal:

docker top CONTAINER_ID

then get the real PID and kill -9 it.

Remove the data from the stopped container by running the following command:

docker rm mysql-basic

The docker ps output shows, for each container:

  1. Each container, when created, gets a container ID, which is a hexadecimal number and looks like an image ID, but is actually unrelated. 
  2. Container image that was used to start the container. 
  3. Command that was executed when the container started. 
  4. Time the container was started. 
  5. Total container uptime, if still running, or time since terminated. 
  6. Ports that were exposed by the container or the port forwards, if configured. 
  7. The container name.
docker inspect: This command lists metadata about a running or stopped container and produces JSON output:

$ docker inspect mycontainer

The -f option takes a Go template to extract a single attribute, for example the container's hostname:

$ docker inspect -f '{{ .Config.Hostname }}' mycontainer

Work in Progress...