According to industry analyst firm 451 Research, "Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability on where the application can run, whether on premises, public cloud, private cloud, bare metal, etc."[4]
Overview
Docker can use different interfaces to access virtualization features of the Linux kernel.[5] Docker implements a high-level API to provide lightweight containers that run processes in isolation.[6] It uses resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.[7]
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system.[4] Instead, it relies on the kernel's functionality and uses resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features either directly, using the libcontainer library (the default execution environment since Docker 0.9), or indirectly via libvirt, LXC (Linux Containers) or systemd-nspawn.[5][8]
By using containers, resources can be isolated, services restricted, and processes provisioned to have an almost completely private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
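As a concrete sketch of these per-container resource constraints (an illustrative example, not taken from the cited sources; the image name, container name and limit values are placeholders), the docker run flags --memory and --cpu-shares apply such limits to a single container:

    # Start a container limited to 512 MB of RAM and a reduced CPU share
    # relative to other containers on the same host.
    docker run -d --name limited-app --memory=512m --cpu-shares=256 ubuntu sleep 3600

    # Show the configuration of the running container, including the applied limits.
    docker inspect limited-app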
Using Docker to create and manage containers may simplify the creation of highly distributed systems, by allowing multiple applications, worker tasks and other processes to run autonomously on a single physical machine or across multiple virtual machines. This allows the deployment of nodes to be performed as resources become available or when more nodes are needed, allowing a platform as a service (PaaS)-style of deployment and scaling for systems like Apache Cassandra, MongoDB or Riak. Docker also simplifies the creation and operation of task or workload queues and other distributed systems.[9][10]
History
Solomon Hykes started Docker as a Python-based internal project within the platform-as-a-service company dotCloud.[11] Other initial contributors included fellow dotCloud engineers Andrea Luzzardi and Francois-Xavier Bourlet, with Jeff Lindsay also serving as an independent collaborator. Docker represents an evolution of dotCloud's proprietary container orchestration technology, which itself was built upon earlier open-source projects such as Cloudlets.
Docker was released as open source in March 2013.[6] On March 13, 2014, with the release of version 0.9, Docker dropped LXC as the default execution environment and replaced it with its own libcontainer library, written in the Go programming language.[12][8] Since its release, Docker has grown to be a major contender in the market for container orchestration APIs. As of April 13, 2015, the Docker project had accumulated over 20,700 GitHub stars (making it the 20th most-starred GitHub project), over 4,700 forks, and nearly 900 contributors.[13]
On July 23, 2013, dotCloud, Inc., the commercial entity behind Docker, announced that former Gluster and Plaxo CEO Ben Golub had joined the company, citing Docker as the primary focus of the company going forward.[15]
On July 23, 2014, Docker acquired Orchard, makers of Fig.[18]
On September 16, 2014, Docker announced that it had completed a $40 million Series C funding round, led by Sequoia Capital.[19]
On October 15, 2014, Microsoft announced integration of the Docker engine into the next (2016) Windows Server release, and native support for the Docker client role in Windows.[20][21]
On December 4, 2014, IBM announced a strategic partnership with Docker aimed at enabling enterprises to build and run the next generation of applications on the IBM Cloud more efficiently, quickly and cost-effectively.[22]
On June 22, 2015, it was announced that Docker and numerous other companies were working on a new vendor- and operating-system-independent standard for software containers.[23][24]
The GearD project aims to integrate Docker into Red Hat's OpenShift Origin PaaS.[40]
Usage and Implementation
There are two options for creating Docker images: Docker's command-line interface, or GUI tools such as Kitematic.
When using the Docker command-line interface, images can be created either by entering individual commands on the command line or by using dockerfiles. Dockerfiles are similar to shell scripts in that they contain the individual commands that are executed to generate the final image.[41] As is also the case when entering individual docker commands on the command line, each line in a dockerfile results in a newly committed image. The result of iterating over all the lines in the file is the incorporation of the deltas into the final image. This is similar to how version control systems such as Git function: each command is an individual commit, with the final image being the sum of all the preceding commits. Commonly used dockerfile instructions include the following (an illustrative dockerfile appears after the list):
CMD: Specifies which command will be triggered on the start of a container.
ADD: Copies files from a source to a destination and commits the result.
RUN: Executes a given command on the image and commits the result.
FROM: Specifies the base image to be used.
MAINTAINER: Specifies the name and email address of the author of the given dockerfile.
USER: Specifies which user to use when running the image.
ENV: Sets environment variables.
VOLUME: Creates a mount point for externally mounted volumes.
EXPOSE: Specifies which ports will be exposed.
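As a sketch of how these instructions combine, the hypothetical dockerfile below builds a small web-server image; the base image, package, file names and port are illustrative placeholders rather than anything taken from the cited sources:

    # Each instruction commits a new image layer on top of the previous one.
    FROM ubuntu:14.04
    MAINTAINER Jane Doe <jane@example.com>
    RUN apt-get update && apt-get install -y nginx
    ADD index.html /usr/share/nginx/html/index.html
    ENV NGINX_PORT 80
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

The resulting image would then typically be built and started with commands such as docker build -t example/web . and docker run -d -p 8080:80 example/web.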
Supporting Technologies
Scheduling
Scheduling becomes relevant when multiple applications are to be run across multiple hosts. A scheduler frees the user from the burden of figuring out on which host an application will run: schedulers coordinate with the init system of each host to manage services and applications according to the available capacity and the availability of the various hosts that form a cluster. Some of the popular schedulers available today include:
Swarm: Docker Swarm[43] is a native clustering tool for Docker which simplifies the deployment of multi-container applications in a distributed environment. A cluster of Docker hosts is converted into a single virtual host. If a node in the cluster fails, all the containers running on that node are rescheduled onto a different node belonging to the same cluster. Three scheduling strategies help decide which containers run on which nodes: random, spread and binpack.[44]
Kubernetes: Kubernetes[45] is a platform started by Google to manage containerized applications running on the various nodes of a cluster. It works in conjunction with Docker and takes charge of orchestrating the containers, going beyond lifecycle management to monitoring and managing them. Kubernetes is supported on Google Compute Engine, Rackspace, Microsoft Azure and vSphere environments.
Docker Compose: Docker Compose is Docker's offering for running applications that span more than one container. Compose provides the user with commands to start or stop a service, view the status of a service and view the log output of running services. All of these tasks operate within the scope of a single host, while the functionality overlaps with what Kubernetes provides. As Compose is produced by Docker itself, its commands are very similar to Docker's CLI commands, with the difference that they apply to a group of containers rather than an individual container.[46]
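As a rough sketch of how Compose describes a multi-container application (the service names and images below are hypothetical placeholders, and the layout shown is the early single-level docker-compose.yml format), two linked services might be declared like this:

    # docker-compose.yml: a web front end linked to a database container.
    web:
      image: nginx
      ports:
        - "8080:80"
      links:
        - db
    db:
      image: postgres

The pair of services would then be managed as a unit with commands such as docker-compose up -d, docker-compose ps, docker-compose logs and docker-compose stop.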
Cluster Management
Cluster management, as the name suggests, is the process of monitoring how a collection of hosts behaves when functioning in conjunction with one another. One of the most basic tasks carried out is the addition and removal of hosts from a cluster. Cluster management also involves starting and stopping processes and gathering information about the current state of a cluster's containers and hosts. Some of the popular cluster management platforms include:
Mesos: Apache Mesos is a cluster management platform developed at the University of California, Berkeley. Mesos is a distributed systems kernel that binds numerous different machines into one logical machine, making it possible to build a single computing cluster from a pool of available physical resources. Mesos proves beneficial when there are existing specialized workloads (Hadoop, Apache Kafka) because it provides an efficient framework to interleave these workloads. It scales a cluster to tens of thousands of nodes and provides support for Docker containers.[47]
Bosh: Cloud Foundry BOSH is an open-source packaging, lifecycle management and deployment tool. It supports quick cluster deployment with zero to minimal downtime, and rolling updates can be installed on the nodes of a cluster without affecting the data on each of those nodes.[48]
Competition
Docker is no longer the only container virtualization solution on the market; a few alternatives have launched in this domain, including the following:
Rocket: Rocket[49] is a container virtualization technology developed by CoreOS. Rocket has been developed to improve the composability, security and speed of container technology. It is a command-line container runtime consisting of two elements, actool and rkt: actool handles the task of building containers, while rkt fetches and runs container images.
LXC: LXC (Linux Containers) is a userspace interface for supporting lightweight virtualized operating-system environments. It is a system container technology which can provide its users with a working environment similar to that of a virtual machine. LXC provides capabilities for managing containers, advanced networking and storage support.[50]
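As a rough sketch of LXC's container management tools (the container name and the distribution, release and architecture values are illustrative placeholders), a system container might be created and entered as follows:

    # Create an Ubuntu 14.04 system container from the "download" template,
    # start it in the background, and attach a shell inside it.
    lxc-create -n demo -t download -- -d ubuntu -r trusty -a amd64
    lxc-start -n demo -d
    lxc-attach -n demo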
LXD: The LXD project was founded and is currently led by Canonical Ltd and Ubuntu. LXD was announced in early November 2014 and is still under development. It offers a command-line tool to manage containers through a REST API, allowing users to create new containers and move containers that are already running. It is an image-based technology and does not support distribution templates.[51]
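Since LXD was still under development at the time of writing, the following is only an illustrative sketch of its client (also named lxc, distinct from the lxc-* tools above); the image alias, container name and remote host are hypothetical placeholders:

    # Launch a container from an Ubuntu image, list containers,
    # and move a container to another LXD host over the REST API.
    lxc launch ubuntu:14.04 demo
    lxc list
    lxc move demo otherhost:demo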
Vision and Scope
As the technology matures, Docker is expected to move toward a stronger focus on orchestration, providing an integrated solution with both PaaS and SaaS capabilities. This shift in focus is highlighted by Docker's acquisition of four companies whose services are integral to Docker's ability to provide such a solution. In 2014, Docker purchased Orchard Laboratories[52] and Koality,[53] quickly followed by the acquisitions of Kitematic and SocketPlane[54] in 2015. The addition of these four companies has provided Docker with the technology and infrastructure needed to position itself as a leader in the market for an integrated solution offering container orchestration as well as platform services.