What is Docker networking?


Docker networking allows Docker containers, services, and other components to connect to one another, as well as to external components and workloads, while abstracting away the platform they run on.

Read on for an overview of Docker networking: its components, how it works, and how to use it.

Docker network drivers

The networking functionality that is part of Docker is built with extensibility in mind and supports the ability to load different drivers depending on the exact network functionality that is required.

These plugins are called drivers, and six of them are available as part of the standard Docker installation. The standard drivers currently available are:

  • Host shares the same network stack that the host operating system is using. This is the easiest approach, but provides no isolation and makes things like port conflicts more likely.
  • Bridge is the default driver and is commonly used because it creates isolated networks that individual containers (or swarms of containers) can share. This allows the swarms of containers to interconnect while providing isolation from the networks of other groups of containers on the same host.
  • Overlay allows multiple Docker daemons to interconnect, which provides container-to-container networking since it removes the reliance on the host operating system to do any routing.
  • IPvlan provides direct control over the container's IP addressing and, in L2 mode, over which VLAN the container traffic is tagged with.
  • Macvlan goes a step further than IPvlan and makes the container appear with its own unique MAC address on the physical network.
  • None disables networking entirely, for containers that require no network access.
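On a standard Docker installation, you can see which driver backs each of the built-in networks with docker network ls; the output below is a typical sketch (network IDs will differ):

```shell
# List the networks Docker creates by default; the DRIVER column
# shows which driver backs each one. Note that the "none" network
# is reported with the driver name "null".
docker network ls

# Typical output on a fresh installation:
# NETWORK ID     NAME      DRIVER    SCOPE
# 9f6b2d3a1c4e   bridge    bridge    local
# 1a2b3c4d5e6f   host      host      local
# 7e8f9a0b1c2d   none      null      local
```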

Publishing container ports to the host

The fastest way to allow a host to connect to a Docker container running on it is to publish a port to the host machine. This can be done by using the -p (or --publish) flag, which is available as part of the docker run command (which anyone using Docker will get extremely familiar with). Note that the similarly named --expose flag only documents a port as available to linked containers; it does not bind anything to the host.

docker run <...> -p host_port:container_port

For example:

$ docker run --name mynginx -p 80:80 -d nginx
  • -p 80:80: binds port 80 inside the container to port 80 on the host.
  • --name mynginx: names the container mynginx.
  • -d: runs the container detached (in the background); nginx is the image to use.
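With the port published, you can verify the mapping and reach nginx from the host. Assuming the container started successfully, this sketch shows both checks:

```shell
# Show which host ports are mapped to the container's ports.
docker port mynginx
# 80/tcp -> 0.0.0.0:80

# The host can now reach nginx on its own port 80.
curl -s http://localhost:80 | head -n 4
```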

Creating a bridged network

Since bridged networking is the default network driver, we’ll show you how to create a user-defined bridged network and also how to have containers use it when they are running.

It is good to note that a bridge can only connect containers running on the same host, which is a very common scenario for development environments. There is a default bridge that will be set up and available for use by containers if no network is identified when starting the container.

This is great for simple use cases, but in many cases, having everything on the same bridge can cause conflicts and other anomalies. Therefore, it can be useful to know how to create user-defined bridged networks.

To create a user-defined network, use the following command, which will create a bridged network named abc:

$ docker network create abc

Next, launch a container specifying the exact network that we want to use:

$ docker run --name mynginx --network abc -d nginx
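One advantage of a user-defined bridge over the default bridge is built-in DNS: containers on the same network can reach each other by container name. As a sketch (the curlimages/curl image is used here purely for illustration):

```shell
# Launch a throwaway container on the same user-defined bridge
# and fetch the nginx welcome page by container name; Docker's
# embedded DNS resolves "mynginx" to the container's IP.
docker run --rm --network abc curlimages/curl -s http://mynginx
```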

Creating overlay networks

As we mentioned in the section on bridged networks, a bridge is limited to single-host scenarios. Overlay networks are where multi-host scenarios start to really add value: once an overlay network is running, traffic can flow between containers on different Docker hosts without any host-level configuration such as routing.

You will see overlay networks often in enterprise and cloud deployments, where container orchestration solutions like Kubernetes are in play.

From a user point of view, creating and using an overlay network is not much different than working with a bridged network. This really shows the value of the network driver model, as it abstracts most of the complexity away and gives the user a consistent experience.

First, we create an overlay network called def which allows standalone containers to connect to it:

$ docker network create -d overlay --attachable def

If it is a Linux host, you can also have the Docker overlay network driver encrypt traffic automatically:

$ docker network create -d overlay --attachable --opt encrypted def

Next is the same run command specifying the network that we want to use:

$ docker run --name mynginx --network def -d nginx

That same run command could be executed on every host in the cluster and all of the nginx instances would be able to interact on the network as soon as they are running.

Note: If you are using an overlay network across multiple hosts with Docker, then it is best to create a swarm across the hosts first, then run the create network command on the swarm manager.
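Putting the note above together with the earlier commands, a minimal multi-host sketch looks like the following (the IP address and the worker token are placeholders you would substitute with your own values):

```shell
# On the first host: initialize the swarm. This prints a join
# token for workers.
docker swarm init --advertise-addr 192.0.2.10

# On each additional host: join the swarm using that token.
docker swarm join --token <worker-token> 192.0.2.10:2377

# Back on the swarm manager: create the attachable overlay network.
docker network create -d overlay --attachable def

# On any host in the swarm, containers can now join the overlay
# and reach each other across hosts.
docker run --name mynginx --network def -d nginx
```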

Using host networking with containers

There are scenarios where networking capabilities are needed and it makes sense to share the network stack with the Docker host. This could be to avoid any kind of NAT, which happens with overlay or bridged networks, or because port conflicts are not a concern when the container only makes outbound connections.

Another scenario would be for management processes that run outside of a Swarm or Kubernetes cluster that actually manage the networks they use.

Using the host’s networking is basically the same concept we discussed earlier, except that we need to explicitly instruct the container to use the host for its type of network.

$ docker run --name mynginx --net=host -d nginx

Note: Publishing ports with the -p (or --publish) option does nothing when you're using host networking. Since the container's ports are already open on the host's network stack, the flag is simply ignored.
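To confirm that a host-networked container is listening directly on the host's stack, you can check from the host itself (ss is part of iproute2 on most Linux distributions):

```shell
# From the host: nginx (running in the container) is listening
# directly on the host's network stack, with no port mapping.
ss -tln | grep ':80'

# It is reachable on localhost without publishing any ports.
curl -s http://localhost:80 >/dev/null && echo "nginx reachable"
```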


As we discussed in this article, Docker provides extensive capabilities for managing the networking layer for containers on a host, and even for spanning hosts using overlay networks, a breadth of built-in options that few other container runtimes match out of the box.

The official network documentation is available on Docker’s documentation site, and it includes more architectural and related technical content as well as links to third-party network drivers that can provide additional functionality.