
CNM Driver Interfaces

The Container Networking Model provides two pluggable and open interfaces that can be used by users, the community, and vendors to leverage additional functionality, visibility, or control in the network.

Categories of Network Drivers

Docker Built-In Network Drivers

The Docker built-in network drivers are part of Docker Engine and don’t require any extra modules. They are invoked and used through standard docker network commands. The following built-in network drivers exist: bridge, host, overlay, macvlan, and none.
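
As a minimal sketch (the network and container names below are placeholders), a network backed by a built-in driver is created, inspected, and consumed entirely through the standard docker network and docker run commands:

# Create a network using the built-in bridge driver
docker network create -d bridge my-bridge-net

# Inspect the network and attach a container to it
docker network inspect my-bridge-net
docker run -d --name web --network my-bridge-net nginx

# Detach the container and remove the network when it is no longer needed
docker network disconnect my-bridge-net web
docker network rm my-bridge-net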

Default Docker Networks

By default, a none, host, and bridge network exist on every Docker host. These networks cannot be removed. When a Swarm is instantiated, two additional networks are automatically created to facilitate cluster networking: a bridge network named docker_gwbridge and an overlay network named ingress.

The docker network ls command shows these default Docker networks for a Docker Swarm:

NETWORK ID          NAME                DRIVER              SCOPE
1475f03fbecb        bridge              bridge              local
e2d8a4bd86cb        docker_gwbridge     bridge              local
407c477060e7        host                host                local
f4zr3zrswlyg        ingress             overlay             swarm
c97909a4b198        none                null                local

In addition to these default networks, user-defined networks can also be created. These are discussed later in this document.
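
As a brief preview (the subnet, gateway, and names below are arbitrary examples), a user-defined bridge network with an explicit subnet and gateway can be created with the --subnet and --gateway options:

# Create a user-defined bridge network with an explicit subnet and gateway
docker network create -d bridge \
  --subnet 172.28.0.0/16 \
  --gateway 172.28.0.1 \
  my-app-net

# Containers started on this network receive addresses from 172.28.0.0/16
docker run -d --name app --network my-app-net alpine sleep 3600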

Network Scope

As seen in the docker network ls output, Docker network drivers have a concept of scope. The network scope is the domain of the driver, which can be either local or swarm. Local scope drivers provide connectivity and network services (such as DNS or IPAM) within the scope of a single host. Swarm scope drivers provide connectivity and network services across an entire swarm cluster. Swarm scope networks have the same network ID across the entire cluster, while local scope networks have a unique network ID on each host.
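
As an illustration (assuming the daemon is already a swarm manager, and using placeholder network names), creating an overlay network produces a swarm scope network, while a bridge network remains local in scope:

# On a swarm manager, create a swarm scope overlay network
docker network create -d overlay --attachable my-overlay-net

# Create a local scope bridge network for comparison
docker network create -d bridge my-local-net

# The SCOPE column shows swarm for the overlay network and local for the bridge network
docker network ls --filter name=my-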

Docker Plug-In Network Drivers

The following community- and vendor-created plug-in network drivers are compatible with CNM. Each provides unique capabilities and network services for containers.

contiv: An open source network plugin led by Cisco Systems to provide infrastructure and security policies for multi-tenant microservices deployments. Contiv also provides integration for non-container workloads and with physical networks, such as ACI. Contiv implements plug-in network and IPAM drivers.

weave: A network plugin that creates a virtual network that connects Docker containers across multiple hosts or clouds. Weave provides automatic discovery of applications, can operate on partially connected networks, does not require an external cluster store, and is operations friendly.

calico: An open source solution for virtual networking in cloud datacenters. It targets datacenters where most of the workloads (VMs, containers, or bare metal servers) only require IP connectivity. Calico provides this connectivity using standard IP routing. Isolation between workloads, whether according to tenant ownership or any finer-grained policy, is achieved via iptables programming on the servers hosting the source and destination workloads.

kuryr: A network plugin developed as part of the OpenStack Kuryr project. It implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. Kuryr includes an IPAM driver as well.
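
Once a plug-in driver is installed on a host, it is consumed through the same docker network commands as the built-in drivers. The sketch below assumes the Weave Net plugin is already installed and registered under the driver name weave; the network and container names are placeholders:

# Create a network backed by an installed plug-in driver
docker network create -d weave my-weave-net

# Containers attach to it in the same way as to any other network
docker run -d --name api --network my-weave-net nginx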

Docker Plug-In IPAM Drivers

Community- and vendor-created IPAM drivers can also be used to provide integrations with existing systems or special capabilities.

infoblox: An open source IPAM plugin that provides integration with existing Infoblox tools.
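
A plug-in IPAM driver is selected per network with the --ipam-driver option of docker network create (optionally combined with --ipam-opt key/value settings). The example below is only a sketch; the driver name infoblox and any required options depend on how the plugin was installed and configured:

# Delegate IP address management for this network to an external IPAM plugin
docker network create -d bridge --ipam-driver infoblox my-infoblox-net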

Many Docker plugins exist, and more are being created all the time. Docker maintains a list of the most common plugins.

Next: Linux Network Fundamentals