CDN in a Box

“CDN in a Box” is a name given to the time-honored tradition of new Traffic Control developers/potential users setting up their own, miniature CDN just to see how it all fits together. Historically, this has been a nightmare of digging through leftover virsh scripts, manually configuring some pretty hefty networking changes (don’t even get me started on DNS), and just generally having a bad time. For a few years, different people had brought the project to various stages of running in Docker for ease of networking, but certain constraints hampered progress - until now. The project has finally reached a working state, and getting a mock/test CDN running can now be a very simple (albeit rather time-consuming) task.

Getting Started

Because it runs in Docker, the only true prerequisites are:

  • Docker version >= 17.05.0-ce
  • Docker Compose[1] version >= 1.9.0


The CDN in a Box directory is found within the Traffic Control repository at infrastructure/cdn-in-a-box/. CDN in a Box relies on the presence of pre-built component.rpm files for the following Traffic Control components:

  • Traffic Monitor - at infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.rpm
  • Traffic Ops - at infrastructure/cdn-in-a-box/traffic_ops/traffic_ops.rpm
  • Traffic Portal - at infrastructure/cdn-in-a-box/traffic_portal/traffic_portal.rpm
  • Traffic Router - at infrastructure/cdn-in-a-box/traffic_router/traffic_router.rpm - also requires an Apache Tomcat RPM at infrastructure/cdn-in-a-box/traffic_router/tomcat.rpm


These can also be specified via the RPM variable to a direct Docker build of the component - with the exception of Traffic Router, which instead accepts JDK8_RPM to specify a Java Development Kit RPM, TRAFFIC_ROUTER_RPM to specify a Traffic Router RPM, and TOMCAT_RPM to specify an Apache Tomcat RPM.

These can all be supplied manually via the steps in Building Traffic Control (for Traffic Control component RPMs) or via some external source. Alternatively, the infrastructure/cdn-in-a-box/Makefile file contains recipes to build all of these - simply run make(1)[2] from the infrastructure/cdn-in-a-box/ directory. Once all RPM dependencies have been satisfied, run docker-compose build from the infrastructure/cdn-in-a-box/ directory to construct the images needed to run CDN in a Box.
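Assuming the RPMs are being built from source, the full sequence described above might look like the following (a sketch of one valid workflow, not the only one):

```shell
# From the root of the Traffic Control repository
cd infrastructure/cdn-in-a-box

# Build all required component RPMs; add -j to build in parallel
# if the machine can handle it
make

# Construct the Docker images for every CDN in a Box service
docker-compose build
```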


In a typical scenario, if the steps in Building have been followed, all that’s required to start the CDN in a Box is to run docker-compose up - optionally with the -d flag to run without binding to the terminal - from the infrastructure/cdn-in-a-box/ directory. This will start up the entire stack and should take care of any needed initial configuration. The services within the environment are not exposed to the host by default. To expose them, instead run docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up. The ports are configured within the infrastructure/cdn-in-a-box/docker-compose.expose-ports.yml file, and the default ports are shown in Service Info. Some services have associated credentials, which are fully configurable in variables.env.
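For example, to bring the stack up detached with its ports exposed to the host, combining the files and flags just described:

```shell
# From the infrastructure/cdn-in-a-box directory
# Start all services in the background, with each service's ports
# mapped to the host as defined in docker-compose.expose-ports.yml
docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up -d
```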

Table 40 Service Info
Service Ports exposed and their usage Username Password
DNS DNS name resolution on 9353 N/A N/A
Edge Tier Cache Apache Trafficserver HTTP caching reverse proxy on port 9000 N/A N/A
Mid Tier Cache Apache Trafficserver HTTP caching forward proxy on port 9100 N/A N/A
Mock Origin Server Example web page served on port 9200 N/A N/A
Traffic Monitor Web interface and API on port 80 N/A N/A
Traffic Ops Main API endpoints on port 6443, with a direct route to the Perl API on port 60443[3] TO_ADMIN_USER in variables.env TO_ADMIN_PASSWORD in variables.env
Traffic Ops PostgreSQL Database PostgreSQL connections accepted on port 5432 (database name: DB_NAME in variables.env) DB_USER in variables.env DB_USER_PASS in variables.env
Traffic Portal Web interface on 443 (JavaScript required) TO_ADMIN_USER in variables.env TO_ADMIN_PASSWORD in variables.env
Traffic Router Web interfaces on ports 3080 (HTTP) and 3443 (HTTPS), with a DNS service on 53 and an API on 3333 (HTTP) and 2222 (HTTPS) N/A N/A
Traffic Vault Riak key-value store on port 8010 TV_ADMIN_USER in variables.env TV_ADMIN_PASSWORD in variables.env
Traffic Stats N/A N/A N/A
Traffic Stats InfluxDB InfluxDB connections accepted on port 8086 (database names: cache_stats, daily_stats and deliveryservice_stats) INFLUXDB_ADMIN_USER in variables.env INFLUXDB_ADMIN_PASSWORD in variables.env

While the host may interact with the components using these ports, the true operation of the CDN can only be seen from within the Docker network. To see the CDN in action, connect to a container within the CDN in a Box project and use cURL to request the URL http://video.demo1.mycdn.ciab.test, which will be resolved by the DNS container to the IP of the Traffic Router, which will in turn provide a 302 Found response pointing to the Edge-Tier cache. A typical choice of container for this is the “enroller” service, which has a very nuanced purpose not discussed here but already has the curl(1) command line tool installed. For a more user-friendly interface into the CDN network, see VNC Server.

#60 Example Command to See the CDN in Action
sudo docker-compose exec enroller /usr/bin/curl -L "http://video.demo1.mycdn.ciab.test"

When the CDN is to be shut down, it is often best to do so using sudo docker-compose down -v due to the use of shared volumes in the system which might interfere with a proper initialization upon the next run.
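That is, a clean shutdown might look like:

```shell
# From the infrastructure/cdn-in-a-box directory
# Stop the stack and remove its containers, networks, and shared
# volumes so the next startup re-initializes cleanly
sudo docker-compose down -v
```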




While these port settings can be changed without hampering the function of the CDN in a Box system, note that changing a port without also changing the matching port-mapping in infrastructure/cdn-in-a-box/docker-compose.yml for the affected service will make it unreachable from the host.

[1]It is perfectly possible to build and run all containers without Docker Compose, but it’s not recommended and not covered in this guide.
[2]Consider make -j to build quickly, if your computer can handle multiple builds at once.
[3]Please do NOT use the Perl endpoints directly. The CDN will only work properly if everything hits the Go API, which will proxy to the Perl endpoints as needed.

X.509 SSL/TLS Certificates

All components in Apache Traffic Control utilize SSL/TLS secure communications by default. For SSL/TLS connections to properly validate within the “CDN in a Box” container network, a shared self-signed X.509 Root CA is generated at first startup. An X.509 Intermediate CA is also generated and signed by the Root CA. Additional “wildcard” certificates are generated/signed by the Intermediate CA for each container service and for the demo1, demo2, and demo3 Delivery Services. All certificates and keys are stored in the ca host volume, which is located at infrastructure/cdn-in-a-box/traffic_ops/ca[4].

Table 41 Self-Signed X.509 Certificate List
Filename Description X.509 CN/SAN
CIAB-CA-root.crt Shared Root CA Certificate N/A
CIAB-CA-intr.crt Shared Intermediate CA Certificate N/A
CIAB-CA-fullchain.crt Shared CA Certificate Chain Bundle[5] N/A
infra.ciab.test.crt Infrastructure Certificate prefix.infra.ciab.test
demo1.mycdn.ciab.test.crt Demo1 Delivery Service Certificate prefix.demo1.mycdn.ciab.test
demo2.mycdn.ciab.test.crt Demo2 Delivery Service Certificate prefix.demo2.mycdn.ciab.test
demo3.mycdn.ciab.test.crt Demo3 Delivery Service Certificate prefix.demo3.mycdn.ciab.test
[4]The ca volume is not purged with normal docker volume commands. This feature is by design to allow the existing shared SSL certificate to be trusted at the system level across restarts. To re-generate all SSL certificates and keys, remove the infrastructure/cdn-in-a-box/traffic_ops/ca directory before startup.
[5]The full chain bundle is a file that contains both the Root and Intermediate CA certificates.

Trusting the Certificate Authority

For developer and testing use-cases, it may be necessary to have full X.509 CA validation by HTTPS clients[6][7]. For X.509 validation to work properly, the self-signed X.509 CA certificate must be trusted either at the system level or by the client application itself.


HTTP Client applications such as Google Chrome, Firefox, curl(1), and wget(1) can also be individually configured to trust the CA certificate. Review each program’s respective documentation for instructions.

Importing the CA Certificate on OSX

  1. Copy the CIAB root and intermediate CA certificates from infrastructure/cdn-in-a-box/traffic_ops/ca/ to the Mac.
  2. Double-click the CIAB root CA certificate to open it in Keychain Access.
  3. The CIAB root CA certificate appears in login.
  4. Copy the CIAB root CA certificate to System.
  5. Open the CIAB root CA certificate, expand Trust, select Use System Defaults, and save your changes.
  6. Reopen the CIAB root CA certificate, expand Trust, select Always Trust, and save your changes.
  7. Delete the CIAB root CA certificate from login.
  8. Repeat the previous steps with the intermediate CA certificate to import it as well.
  9. Restart all HTTPS clients (browsers, etc).

Importing the CA certificate on Windows

  1. Copy the CIAB root CA and intermediate CA certificates from infrastructure/cdn-in-a-box/traffic_ops/ca/ to Windows filesystem.
  2. As Administrator, start the Microsoft Management Console.
  3. Add the Certificates snap-in for the computer account and manage certificates for the local computer.
  4. Import the CIAB root CA certificate into Trusted Root Certification Authorities ‣ Certificates.
  5. Import the CIAB intermediate CA certificate into Trusted Root Certification Authorities ‣ Certificates.
  6. Restart all HTTPS clients (browsers, etc).

Importing the CA certificate on Linux/CentOS 7

  1. Copy the CIAB full chain CA certificate bundle from infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt to path /etc/pki/ca-trust/source/anchors/.
  2. Run update-ca-trust extract as the root user or with sudo(8).
  3. Restart all HTTPS clients (browsers, etc).
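The steps above can be condensed into a short shell session, run from the top level of the Traffic Control repository:

```shell
# Install the CIAB CA bundle into the CentOS 7 system trust store
sudo cp infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
```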

Importing the CA certificate on Linux/Ubuntu

  1. Copy the CIAB full chain CA certificate bundle from infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt to path /usr/local/share/ca-certificates/.
  2. Run update-ca-certificates as the root user or with sudo(8).
  3. Restart all HTTPS clients (browsers, etc).
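Equivalently, from the top level of the Traffic Control repository:

```shell
# Install the CIAB CA bundle into the Ubuntu system trust store
sudo cp infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```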
[6]All containers within CDN-in-a-Box start up with the self-signed CA already trusted.
[7]The ‘demo1’ Delivery Service X509 certificate is automatically imported into Traffic Vault on startup.

Advanced Usage

This section will be amended as functionality is added to the CDN in a Box project.

The Enroller

The “enroller” began as an efficient way for Traffic Ops to be populated with data as CDN in a Box starts up. It connects to Traffic Ops as the “admin” user and processes files placed in the Docker volume shared between the containers. The enroller watches each directory within the /shared/enroller directory for new filename.json files to be created there. These files must follow the format outlined in the API guide for the POST method for each data type (e.g. for a region, follow the guidelines for POST api/1.4/regions). Of note, the enroller does not require fields that reference database IDs for other objects within the database.

--dir directory

Base directory to watch for data. Mutually exclusive with --http.

--http port

Act as an HTTP server for POST requests on this port. Mutually exclusive with --dir.

--started filename

The name of a file which will be created in the --dir directory when given, indicating the service has started (default: “enroller-started”).

The enroller runs within CDN in a Box using --dir which provides the above behavior. It can also be run using --http to instead have it listen on the indicated port. In this case, it accepts only POST requests with the JSON provided in the request payload, e.g. curl -X POST https://enroller/api/1.4/regions -d @newregion.json. CDN in a Box does not currently use this method, but may be modified in the future to avoid using the shared volume approach.
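As a sketch of the --dir mode, enrolling a new region could look like the following; the directory layout mirrors the watched /shared/enroller volume, and the region’s field values are invented for the example:

```shell
# Simulate dropping an enrollment file into the watched directory.
# In a running CDN in a Box the base directory is /shared/enroller;
# a temporary directory stands in for it here.
ENROLLER_DIR="$(mktemp -d)"

# The enroller watches per-type subdirectories for new *.json files
mkdir -p "${ENROLLER_DIR}/regions"
cat > "${ENROLLER_DIR}/regions/myregion.json" <<'EOF'
{
  "name": "myregion",
  "division": "mydivision"
}
EOF
```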

Auto Snapshot/Queue-Updates

An automatic Snapshot of the current Traffic Ops CDN configuration/topology will be performed once the “enroller” has finished loading all of the data and a minimum number of servers have been enrolled. To enable this feature, set the boolean AUTO_SNAPQUEUE_ENABLED to true [8]. The Snapshot and Queue Updates actions will not be performed until all servers in AUTO_SNAPQUEUE_SERVERS (comma-delimited string) have been enrolled. The current enrolled servers will be polled every AUTO_SNAPQUEUE_POLL_INTERVAL seconds, and each action (Snapshot and Queue Updates) will be delayed AUTO_SNAPQUEUE_ACTION_WAIT seconds [9].

[8]Automatic Snapshot/Queue Updates is enabled by default in variables.env.
[9]Server poll interval and delay action wait are defaulted to a value of 2 seconds.
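Putting the variables together, a variables.env fragment enabling the feature might read as follows (the server list is illustrative, not the shipped default):

```shell
# Enable the automatic Snapshot/Queue Updates once enrollment finishes
AUTO_SNAPQUEUE_ENABLED=true
# Wait for all of these servers to be enrolled first (example list)
AUTO_SNAPQUEUE_SERVERS=trafficops,trafficmonitor,trafficrouter,edge,mid
# Poll enrolled servers every 2 seconds; delay each action by 2 seconds
AUTO_SNAPQUEUE_POLL_INTERVAL=2
AUTO_SNAPQUEUE_ACTION_WAIT=2
```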

Mock Origin Service

The default “origin” service container provides a basic static file HTTP server as the central repository for content. Additional files can be added to the origin root content directory located at infrastructure/cdn-in-a-box/origin/content. Content can also be requested directly from the origin, bypassing the CDN entirely.
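For example, from a container inside the CDN in a Box network (the origin FQDN here is an assumption based on the project’s infra.ciab.test naming scheme):

```shell
# Fetch the example page straight from the origin server,
# bypassing Traffic Router and both caching tiers
curl http://origin.infra.ciab.test/index.html
```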

Optional Containers

All optional containers that are not part of the core CDN-in-a-Box stack are located in the infrastructure/cdn-in-a-box/optional directory. Each optional container NAME provides the following files:

  • infrastructure/cdn-in-a-box/optional/docker-compose.NAME.yml
  • infrastructure/cdn-in-a-box/optional/NAME/Dockerfile

Multiple optional containers may be combined by using a shell alias:

#61 Starting Optional Containers with an Alias
# From the infrastructure/cdn-in-a-box directory
# (Assuming the names of the optional services are stored in the `NAME1` and `NAME2` environment variables)
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.$NAME1.yml -f  $PWD/optional/docker-compose.$NAME2.yml"
docker volume prune -f
mydc build
mydc up

VNC Server

The TightVNC optional container provides a basic lightweight window manager (fluxbox), Firefox browser, xterm, and a few other utilities within the CDN-In-A-Box “tcnet” bridge network. This can be very helpful for quick demonstrations of CDN-in-a-Box that require the use of real container network FQDNs and full X.509 validation.

  1. Download and install a VNC client. TightVNC client is preferred as it supports window resizing, host-to-vnc copy/pasting, and optimized frame buffer compression.

  2. Set your VNC console password by adding the VNC_PASSWD environment variable to infrastructure/cdn-in-a-box/variables.env. The password needs to be at least six characters long. The default password is randomized for security.

  3. Start up the CDN-in-a-Box stack. It is recommended that this be done using a custom bash alias.

    #62 CIAB Startup Using Bash Alias
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.vnc.yml"
    docker volume prune -f
    mydc build
    mydc kill
    mydc rm -fv
    mydc up
  4. Connect with a VNC client to localhost port 9080.

  5. When Traffic Portal becomes available, Firefox will automatically be started within the VNC instance.

  6. An xterm with bash shell is also automatically spawned and minimized for convenience.

Socks Proxy

The Dante SOCKS proxy is an optional container that can be used to provide browsers and other clients the ability to resolve DNS queries and establish network connectivity directly on the tcnet bridged interface. This is very helpful when running the CDN-In-A-Box stack on an OSX/Windows Docker host that lacks network bridge/IP-forwarding support. Below is the basic procedure to enable SOCKS proxy support for CDN-in-a-Box:

  1. Start the CDN-in-a-Box stack at least once so that the X.509 self-signed CA is created.

  2. On the host, import and trust the CA for your target operating system. See Trusting the Certificate Authority.

  3. On the host, using either Firefox or Chrome, download the FoxyProxy browser plugin, which enables dynamic proxy support via URL regular expressions.

  4. Once FoxyProxy is installed, click the fox icon in the upper right-hand corner of the browser window and select Options.

  5. In the Options dialog, click Add New Proxy and navigate to the General tab.

  6. Fill in the General tab according to the table.

    Table 42 General Tab Values
    Name Value
    Proxy Name CIAB
    Color Green
  7. Fill in the Proxy Details tab according to the table

    Table 43 Proxy Details Tab Values
    Name Value
    Manual Proxy Configuration CIAB
    Host or IP Address localhost
    Port 9080
    Socks Proxy checked
    Socks V5 selected
  8. Go to the URL Patterns tab, click Add New Pattern, and fill out the form according to the table.

    Table 44 URL Patterns Tab Values
    Name Value
    Pattern Name CIAB Pattern
    URL Pattern *.test/*
    Whitelist selected
    Wildcards selected
  9. Enable dynamic ‘pre-defined and patterns’ mode by clicking the fox icon in the upper right of the browser. This mode only forwards URLs that match the wildcard *.test/* to the Socks V5 proxy.

  10. On the docker host start up CDN-in-a-Box stack. It is recommended that this be done using a custom bash alias

    #63 CIAB Startup Using Bash Alias
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.socksproxy.yml"
    docker volume prune -f
    mydc build
    mydc kill
    mydc rm -fv
    mydc up
  11. Once the CDN-in-a-Box stack has started, use the aforementioned browser to access Traffic Portal via the SOCKS proxy on the Docker host.
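Command-line clients can use the proxy as well; for example, with curl(1) (the Traffic Portal FQDN here is an assumption following the infra.ciab.test naming scheme):

```shell
# Route the request and its DNS resolution through the Dante SOCKS
# proxy listening on the Docker host
curl --socks5-hostname localhost:9080 -skL https://trafficportal.infra.ciab.test/
```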

See also

See the official Docker Compose CLI reference documentation for complete instructions on how to pass service definition files to the docker-compose executable.

Static Subnet

Since docker-compose creates a subnet at random, there is a chance it will conflict with your network environment; in that case, using a static subnet is a good choice.

#64 CIAB Startup with Static Subnet
# From the infrastructure/cdn-in-a-box directory
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.static-subnet.yml"
docker volume prune -f
mydc build
mydc up

VPN Server

This container provides an OpenVPN service. Its primary use is to allow users and developers to easily access the CIAB network.

How to use it

  1. It is recommended that this be done using a custom bash alias.

    #65 CIAB Startup with VPN
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/docker-compose.expose-ports.yml -f $PWD/optional/docker-compose.vpn.yml -f $PWD/optional/docker-compose.vpn.expose-ports.yml"
    mydc down -v
    mydc build
    mydc up
  2. All certificates, keys, and client configuration are stored at infrastructure/cdn-in-a-box/optional/vpn/vpnca. Simply change REALHOSTIP and REALPORT in client.ovpn to fit your environment, and then you can use it to connect to this OpenVPN server.

The proposed VPN client

On Linux, we suggest openvpn. On most Linux distributions, this will also be the name of the package that provides it.

#66 Install openvpn on ubuntu/debian
apt-get update && apt-get install -y openvpn

On OSX, only the brew-installed openvpn client works, not the OpenVPN GUI client.

#67 Install openvpn on OSX
brew install openvpn

If you want a GUI version of VPN client, we recommend Tunnelblick.
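Once client.ovpn has been adjusted as described above, connecting with the command-line client might look like the following (the path assumes the repository layout named earlier):

```shell
# Bring up the tunnel to the CDN in a Box OpenVPN server;
# root privileges are needed to create the tun device
sudo openvpn --config infrastructure/cdn-in-a-box/optional/vpn/vpnca/client.ovpn
```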

Private Subnet for Routing

Since docker-compose randomly creates a subnet, this container prepares two default private subnets for routing:


The subnet that will be used is determined automatically based on the subnet prefix: if the prefix of the subnet which docker-compose selected is 192. or 10., this container selects one of the prepared private subnets for its routing subnet; otherwise, it selects the other.

Of course, you can decide which routing subnet to use by supplying the environment variables PRIVATE_NETWORK and PRIVATE_NETMASK.
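For example (the subnet values here are illustrative, not project defaults):

```shell
# Choose the routing subnet explicitly instead of letting the
# container pick one of its prepared defaults
export PRIVATE_NETWORK=10.20.30.0
export PRIVATE_NETMASK=255.255.255.0
```

With these exported, start the stack as usual and the VPN container will push routes for the chosen subnet.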

Pushed Settings

Pushed settings are shown as follows:

  • DNS
  • A routing rule for the CIAB subnet


It will not change your default gateway. That means apart from CDN in a Box traffic and DNS requests, all other traffic will use the standard interface bound to the default gateway.


Grafana

This container provides a Grafana service, an open platform for analytics and monitoring. The container comes with the necessary datasources and scripted dashboards already prepared. Please refer to Configuring Grafana for detailed settings.

How to start it

It is recommended that this be done using a custom bash alias.

#68 CIAB Startup with Grafana
# From infrastructure/cdn-in-a-box
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.grafana.yml -f $PWD/optional/docker-compose.grafana.expose-ports.yml"
mydc down -v
mydc build
mydc up

Apart from starting Grafana, the above commands also expose port 3000 for it.

Check the charts

There are scripted dashboards that can display charts. You can display different charts by passing in different query strings:

  • https://<grafanaHost>/dashboard/script/traffic_ops_cachegroup.js?which=. The query parameter which in this particular URL should be the Cache Group name. Taking CIAB as an example, it can be filled in with CDN_in_a_Box_Edge or CDN_in_a_Box_Mid.
  • https://<grafanaHost>/dashboard/script/traffic_ops_deliveryservice.js?which=. The query parameter which in this particular URL should be the xml_id of the desired Delivery Service.
  • https://<grafanaHost>/dashboard/script/traffic_ops_server.js?which=. The query parameter which in this particular URL should be the hostname (not FQDN). It can be filled in with edge or mid in CIAB.