CDN in a Box

“CDN in a Box” is the name given to the time-honored tradition of new Traffic Control developers (and potential users) setting up their own miniature CDN just to see how it all fits together. Historically, this has been a nightmare of digging through leftover virsh scripts, manually configuring some rather hefty networking changes (don’t even get me started on DNS), and just generally having a bad time. For a few years, various people made it to different stages of porting the project into Docker for ease of networking, but certain constraints hampered progress - until now. The project has finally reached a working state, and getting a mock/test CDN running is now a very simple task (albeit a rather time-consuming one).

Getting Started

Because it runs in Docker, the only true prerequisites are:

  • Docker version >= 17.05.0-ce

  • Docker Compose [1] version >= 1.9.0

Building

The CDN in a Box directory is found within the Traffic Control repository at infrastructure/cdn-in-a-box/. CDN in a Box relies on the presence of pre-built component.rpm files for the following Traffic Control components:

  • Traffic Monitor - at infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.rpm

  • Traffic Ops - at infrastructure/cdn-in-a-box/traffic_ops/traffic_ops.rpm

  • Traffic Portal - at infrastructure/cdn-in-a-box/traffic_portal/traffic_portal.rpm

  • Traffic Router - at infrastructure/cdn-in-a-box/traffic_router/traffic_router.rpm - also requires an Apache Tomcat RPM at infrastructure/cdn-in-a-box/traffic_router/tomcat.rpm

Note

These can also be specified via the RPM variable to a direct Docker build of the component - with the exception of Traffic Router, which instead accepts TRAFFIC_ROUTER_RPM to specify a Traffic Router RPM and TOMCAT_RPM to specify an Apache Tomcat RPM.

These can all be supplied manually via the steps in Building Traffic Control (for Traffic Control component RPMs) or via some external source. Alternatively, the infrastructure/cdn-in-a-box/Makefile file contains recipes to build all of these - simply run make(1) from the infrastructure/cdn-in-a-box/ directory. Once all RPM dependencies have been satisfied, run docker-compose build --parallel from the infrastructure/cdn-in-a-box/ directory to construct the images needed to run CDN in a Box.
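
For example, a typical build from the top of the Traffic Control repository looks like this:

cd infrastructure/cdn-in-a-box
make # Build the component RPMs
docker-compose build --parallel # Build the CDN in a Box images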

Tip

If you have gone through the steps to Build the RPMs Natively, you can run make native instead of make to build the RPMs quickly. Another option is running make -j4 to build 4 components at once, if your computer can handle it.

Tip

When updating CDN-in-a-Box, there is no need to remove old images before building new ones. Docker detects which files are updated and only reuses cached layers that have not changed.

By default, CDN in a Box will be based on Rocky Linux 8. To base CDN in a Box on CentOS 7, set the BASE_IMAGE environment variable to centos and set the RHEL_VERSION environment variable to 7 (for CDN in a Box, BASE_IMAGE defaults to rockylinux and RHEL_VERSION defaults to 8):

#37 Building CDN in a Box to run CentOS 7 instead of Rocky Linux 8
export BASE_IMAGE=centos RHEL_VERSION=7
make # Builds RPMs for CentOS 7
docker-compose build --parallel # Builds CentOS 7 CDN in a Box images

Usage

In a typical scenario, if the steps in Building have been followed, all that’s required to start the CDN in a Box is to run docker-compose up - optionally with the -d flag to run without binding to the terminal - from the infrastructure/cdn-in-a-box/ directory. This will start the entire stack and take care of any needed initial configuration. By default, the services within the environment are not exposed to the host. If exposing them is desired, instead run docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up. The port mappings are configured within the infrastructure/cdn-in-a-box/docker-compose.expose-ports.yml file; the default ports are shown in Service Info. Some services have credentials associated with them, all of which are configurable in variables.env.
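
For example, from the infrastructure/cdn-in-a-box/ directory:

docker-compose up -d # Start the stack without binding to the terminal
# Or, to also expose the service ports to the host:
docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up -d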

Table 52 Service Info

Service | Ports exposed and their usage | Username | Password
DNS | DNS name resolution on 9353 | N/A | N/A
Edge Tier Cache | Apache Trafficserver 8.1 HTTP caching reverse proxy on port 9000 | N/A | N/A
Mid Tier Cache | Apache Trafficserver 8.1 HTTP caching forward proxy on port 9100 | N/A | N/A
Second Mid-Tier Cache (parent of the first Mid-Tier Cache) | Apache Trafficserver 8.1 HTTP caching forward proxy on port 9100 | N/A | N/A
Mock Origin Server | Example web page served on port 9200 | N/A | N/A
SMTP Server | Passwordless, cleartext SMTP server on port 25 (no relay); web interface exposed on port 4443 (port 443 in the container) | N/A | N/A
Traffic Monitor | Web interface and API on port 80 | N/A | N/A
Traffic Ops | API endpoints on port 6443 | TO_ADMIN_USER in variables.env | TO_ADMIN_PASSWORD in variables.env
Traffic Ops PostgreSQL Database | PostgreSQL connections accepted on port 5432 (database name: DB_NAME in variables.env) | DB_USER in variables.env | DB_USER_PASS in variables.env
Traffic Portal | Web interface on 443 (JavaScript required) | TO_ADMIN_USER in variables.env | TO_ADMIN_PASSWORD in variables.env
Traffic Router | Web interfaces on ports 3080 (HTTP) and 3443 (HTTPS), with a DNS service on 53 and an API on 3333 (HTTP) and 2222 (HTTPS) | N/A | N/A
Traffic Vault | Riak key-value store on port 8010 | TV_ADMIN_USER in variables.env | TV_ADMIN_PASSWORD in variables.env
Traffic Stats | N/A | N/A | N/A
Traffic Stats InfluxDB | InfluxDB connections accepted on port 8086 (database names: cache_stats, daily_stats, and deliveryservice_stats) | INFLUXDB_ADMIN_USER in variables.env | INFLUXDB_ADMIN_PASSWORD in variables.env

While the components may be interacted with by the host using these ports, the true operation of the CDN can only be seen from within the Docker network. To see the CDN in action, connect to a container within the CDN in a Box project and use cURL to request the URL http://video.demo1.mycdn.ciab.test, which will be resolved by the DNS container to the IP of the Traffic Router, which will in turn provide a 302 Found response pointing to the Edge-Tier cache. A typical choice of container for this is the “enroller” service, which has a rather nuanced purpose not discussed here but already has the curl(1) command line tool installed. For a more user-friendly interface into the CDN network, see VNC Server.

To test the demo1 Delivery Service:

#38 Example Command to See the CDN in Action
sudo docker-compose exec enroller curl -L "http://video.demo1.mycdn.ciab.test"

To test the foo.kabletown.net. Federation:

#39 Query the Federation CNAME using the Delivery Service hostname
sudo docker-compose exec trafficrouter dig +short @trafficrouter.infra.ciab.test -t CNAME video.demo2.mycdn.ciab.test

# Expected response:
foo.kabletown.net.

When the CDN is to be shut down, it is often best to do so using sudo docker-compose down -v, because the shared volumes used by the system might otherwise interfere with proper initialization on the next run.

Readiness Check

In order to check the “readiness” of your CDN, you can optionally start the Readiness Container, which will continually request each Delivery Service in your CDN with curl(1) and exit successfully once they all return successful responses.

#40 Example Command to Run the Readiness Container
sudo docker-compose -f docker-compose.readiness.yml up

Integration Tests

Containers also exist for the Traffic Portal (TP) and Traffic Ops (TO) integration tests. Both of these containers assume that CDN in a Box is already running on the local system.

#41 Running TP Integration Tests
sudo docker-compose -f docker-compose.traffic-portal-test.yml up
#42 Running TO Integration Tests
sudo docker-compose -f docker-compose.traffic-ops-test.yml up

Note

If all CDN in a Box containers are started at once (example: docker-compose -f docker-compose.yml -f docker-compose.traffic-ops-test.yml up -d edge enroller dns db smtp trafficops trafficvault integration), the Enroller initial data load is skipped to prevent data conflicts with the Traffic Ops API tests fixtures.

variables.env

TV_AES_KEY_LOCATION=/opt/traffic_ops/app/conf/aes.key
# Unset TV_BACKEND to use riak as the traffic_vault backend and run the traffic_vault image from the optional directory
TV_BACKEND=postgres
TLD_DOMAIN=ciab.test
INFRA_SUBDOMAIN=infra
CDN_NAME=CDN-in-a-Box
CDN_SUBDOMAIN=mycdn
DS_HOSTS='demo1 demo2 demo3'
FEDERATION_CNAME=foo.kabletown.net.
X509_CA_NAME=CIAB-CA
X509_CA_COUNTRY=US
X509_CA_STATE=Colorado
X509_CA_CITY=Denver
X509_CA_COMPANY=Kabletown
X509_CA_ORG=CDN-in-a-Box
X509_CA_ORGUNIT=CDN-in-a-Box
X509_CA_EMAIL=no-reply@infra.ciab.test
X509_CA_DIGEST=sha256
X509_CA_DURATION_DAYS=365
X509_CA_KEYTYPE=rsa
X509_CA_KEYSIZE=4096
X509_CA_UMASK=0000
X509_CA_DIR=/shared/ssl
X509_CA_PERSIST_DIR=/ca
X509_CA_PERSIST_ENV_FILE=/ca/environment
X509_CA_ENV_FILE=/shared/ssl/environment
DB_NAME=traffic_ops
TV_DB_NAME=traffic_vault
DB_PORT=5432
DB_SERVER=db
TV_DB_SERVER=db
DB_USER_PASS=twelve
DB_USER=traffic_ops
TV_DB_USER=traffic_vault
TV_DB_USER_PASS=twelve
TV_DB_PORT=5432
DNS_SERVER=dns
DBIC_TRACE=0
ENROLLER_HOST=enroller
PGPASSWORD=twelve
POSTGRES_PASSWORD=twelve
EDGE_HOST=edge
INFLUXDB_HOST=influxdb
INFLUXDB_PORT=8086
INFLUXDB_ADMIN_USER=influxadmin
INFLUXDB_ADMIN_PASSWORD=influxadminpassword
GRAFANA_ADMIN_USER=grafanaadmin
GRAFANA_ADMIN_PASSWORD=grafanaadminpassword
GRAFANA_PORT=443
MID_01_HOST=mid-01
MID_02_HOST=mid-02
ORIGIN_HOST=origin
SMTP_HOST=smtp
SMTP_PORT=25
TM_HOST=trafficmonitor
TM_PORT=80
TM_EMAIL=tmonitor@cdn.example.com
TM_PASSWORD=jhdslvhdfsuklvfhsuvlhs
TM_USER=tmon
TM_LOG_ACCESS=stdout
TM_LOG_EVENT=stdout
TM_LOG_ERROR=stdout
TM_LOG_WARNING=stdout
TM_LOG_INFO=stdout
TM_LOG_DEBUG=stdout
TO_ADMIN_PASSWORD=twelve12
TO_ADMIN_USER=admin
TO_ADMIN_FULL_NAME='James Cole'
# Set ENROLLER_DEBUG_ENABLE to true to debug the enroller with Delve
ENROLLER_DEBUG_ENABLE=false
# To debug a t3c component with Delve in the `edge` container, set T3C_DEBUG_COMPONENT_EDGE to t3c-apply, t3c-check, t3c-check-refs, t3c-check-reload, t3c-diff, t3c-generate, t3c-request, or t3c-update
T3C_DEBUG_COMPONENT_EDGE=none
# To debug a t3c component with Delve in the `mid-01` container, set T3C_DEBUG_COMPONENT_MID_01 to t3c-apply, t3c-check, t3c-check-refs, t3c-check-reload, t3c-diff, t3c-generate, t3c-request, or t3c-update
T3C_DEBUG_COMPONENT_MID_01=none
# To debug a t3c component with Delve in the `mid-02` container, set T3C_DEBUG_COMPONENT_MID_02 to t3c-apply, t3c-diff, t3c-generate, t3c-request, t3c-update, or t3c-verify
T3C_DEBUG_COMPONENT_MID_02=none
# Set TM_DEBUG_ENABLE to true to debug Traffic Monitor with Delve
TM_DEBUG_ENABLE=false
# Set TO_DEBUG_ENABLE to true to debug Traffic Ops with Delve
TO_DEBUG_ENABLE=false
# Set TR_DEBUG_ENABLE to true to debug Traffic Router with JPDA
TR_DEBUG_ENABLE=false
# Set TS_DEBUG_ENABLE to true to debug Traffic Stats with Delve
TS_DEBUG_ENABLE=false
TO_EMAIL=cdnadmin@example.com
TO_HOST=trafficops
TO_PORT=443
TO_LOG_ERROR=/var/log/traffic_ops/error.log
TO_LOG_WARNING=/var/log/traffic_ops/warning.log
TO_LOG_INFO=/var/log/traffic_ops/info.log
#TO_LOG_DEBUG=/var/log/traffic_ops/debug.log
TO_LOG_DEBUG=/dev/null
TO_LOG_EVENT=/var/log/traffic_ops/event.log
TP_HOST=trafficportal
TP_EMAIL=tp@cdn.example.com
TR_HOST=trafficrouter
TR_DNS_PORT=53
TR_HTTP_PORT=80
TR_HTTPS_PORT=443
TR_API_PORT=3333
TP_PORT=443
TS_EMAIL=tstats@cdn.example.com
TS_HOST=trafficstats
TS_PASSWORD=trafficstatspassword
TS_USER=tstats
TV_HOST=trafficvault
TV_USER=tvault
TV_PASSWORD=mwL5GP6Ghu_uJpkfjfiBmii3l9vfgLl0
TV_EMAIL=tvault@cdn.example.com
TV_ADMIN_USER=admin
TV_ADMIN_PASSWORD=riakAdmin
TV_RIAK_USER=riakuser
TV_RIAK_PASSWORD=riakPassword
TV_INT_PORT=8087
TV_HTTP_PORT=8098
TV_HTTPS_PORT=8088
ENROLLER_DIR=/shared/enroller
AUTO_SNAPQUEUE_ENABLED=true
AUTO_SNAPQUEUE_SERVERS=trafficops,trafficmonitor,trafficrouter,edge,mid-01,mid-02
AUTO_SNAPQUEUE_POLL_INTERVAL=2
AUTO_SNAPQUEUE_ACTION_WAIT=2

Note

While these port settings can be changed without hampering the function of the CDN in a Box system, note that changing a port without also changing the matching port-mapping in infrastructure/cdn-in-a-box/docker-compose.yml for the affected service will make it unreachable from the host.

[1] It is perfectly possible to build and run all containers without Docker Compose, but it’s not recommended and not covered in this guide.

X.509 SSL/TLS Certificates

All components in Apache Traffic Control utilize SSL/TLS secure communications by default. For SSL/TLS connections to validate properly within the “CDN in a Box” container network, a shared self-signed X.509 Root CA is generated on initial startup. An X.509 Intermediate CA is also generated and signed by the Root CA. Additional “wildcard” certificates are generated and signed by the Intermediate CA for each container service and for the demo1, demo2, and demo3 Delivery Services. All certificates and keys are stored in the ca host volume, which is located at infrastructure/cdn-in-a-box/traffic_ops/ca [4].
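
To force all certificates and keys to be regenerated on the next startup, remove the persisted ca directory first (see footnote [4] below); sudo(8) may be required if the volume contents are root-owned:

sudo rm -rf infrastructure/cdn-in-a-box/traffic_ops/ca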

Table 53 Self-Signed X.509 Certificate List

Filename | Description | X.509 CN/SAN
CIAB-CA-root.crt | Shared Root CA Certificate | N/A
CIAB-CA-intr.crt | Shared Intermediate CA Certificate | N/A
CIAB-CA-fullchain.crt | Shared CA Certificate Chain Bundle [5] | N/A
infra.ciab.test.crt | Infrastructure Certificate | prefix.infra.ciab.test
demo1.mycdn.ciab.test.crt | Demo1 Delivery Service Certificate | prefix.demo1.mycdn.ciab.test
demo2.mycdn.ciab.test.crt | Demo2 Delivery Service Certificate | prefix.demo2.mycdn.ciab.test
demo3.mycdn.ciab.test.crt | Demo3 Delivery Service Certificate | prefix.demo3.mycdn.ciab.test

[4] The ca volume is not purged with normal docker volume commands. This feature is by design, to allow the existing shared SSL certificate to be trusted at the system level across restarts. To re-generate all SSL certificates and keys, remove the infrastructure/cdn-in-a-box/traffic_ops/ca directory before startup.

[5] The full chain bundle is a file that contains both the Root and Intermediate CA certificates.

Trusting the Certificate Authority

For developer and testing use-cases, it may be necessary to have full X.509 CA validation by HTTPS clients [6][7]. For X.509 validation to work properly, the self-signed X.509 CA certificate must be trusted either at the system level or by the client application itself.

Note

HTTP Client applications such as Chromium, Firefox, curl(1), and wget(1) can also be individually configured to trust the CA certificate. Review each program’s respective documentation for instructions.

Importing the CA Certificate on OSX

  1. Copy the CIAB root and intermediate CA certificates from infrastructure/cdn-in-a-box/traffic_ops/ca/ to the Mac.

  2. Double-click the CIAB root CA certificate to open it in Keychain Access.

  3. The CIAB root CA certificate appears in login.

  4. Copy the CIAB root CA certificate to System.

  5. Open the CIAB root CA certificate, expand Trust, select Use System Defaults, and save your changes.

  6. Reopen the CIAB root CA certificate, expand Trust, select Always Trust, and save your changes.

  7. Delete the CIAB root CA certificate from login.

  8. Repeat the previous steps with the Intermediate CA certificate to import it as well.

  9. Restart all HTTPS clients (browsers, etc).
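
Alternatively, the same trust can be established from a terminal using the security(1) tool - a sketch, assuming the certificates were copied to the current working directory:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain CIAB-CA-root.crt
sudo security add-trusted-cert -d -r trustAsRoot -k /Library/Keychains/System.keychain CIAB-CA-intr.crt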

Importing the CA certificate on Windows

  1. Copy the CIAB root CA and intermediate CA certificates from infrastructure/cdn-in-a-box/traffic_ops/ca/ to the Windows filesystem.

  2. As Administrator, start the Microsoft Management Console.

  3. Add the Certificates snap-in for the computer account and manage certificates for the local computer.

  4. Import the CIAB root CA certificate into Trusted Root Certification Authorities ‣ Certificates.

  5. Import the CIAB intermediate CA certificate into Trusted Root Certification Authorities ‣ Certificates.

  6. Restart all HTTPS clients (browsers, etc).
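
Alternatively, certutil can import both certificates from an elevated command prompt - a sketch, assuming the certificates were copied to the current directory (both are placed in the Trusted Root store, matching the steps above):

certutil -addstore Root CIAB-CA-root.crt
certutil -addstore Root CIAB-CA-intr.crt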

Importing the CA certificate on Rocky Linux 8 (Linux)

  1. Copy the CIAB full chain CA certificate bundle from infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt to path /etc/pki/ca-trust/source/anchors/.

  2. Run update-ca-trust extract as the root user or with sudo(8) (steps 1 and 2 are combined in a sketch after this list).

  3. Restart all HTTPS clients (browsers, etc).
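
Steps 1 and 2 combined, as a sketch (run from the top of the Traffic Control repository):

sudo cp infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract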

Importing the CA certificate on Ubuntu (Linux)

  1. Copy the CIAB full chain CA certificate bundle from infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt to path /usr/local/share/ca-certificates/.

  2. Run update-ca-certificates as the root user or with sudo(8) (steps 1 and 2 are combined in a sketch after this list).

  3. Restart all HTTPS clients (browsers, etc).
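
Steps 1 and 2 combined, as a sketch (run from the top of the Traffic Control repository; update-ca-certificates only picks up files with a .crt extension):

sudo cp infrastructure/cdn-in-a-box/traffic_ops/ca/CIAB-CA-fullchain.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates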

[6] All containers within CDN-in-a-Box start up with the self-signed CA already trusted.

[7] The ‘demo1’ Delivery Service X.509 certificate is automatically imported into Traffic Vault on startup.

Advanced Usage

This section will be amended as functionality is added to the CDN in a Box project.

The Enroller

The “enroller” began as an efficient way to populate Traffic Ops with data as CDN in a Box starts up. It connects to Traffic Ops as the “admin” user and processes files placed in the Docker volume shared between the containers. The enroller watches each directory within the /shared/enroller directory for new filename.json files to be created there. These files must follow the format outlined in the API guide for the POST method for each data type (e.g. for a region, follow the guidelines for POST /regions). Of note, the enroller does not require fields that reference database IDs for other objects within the database.
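
For example, a new Region could be enrolled from inside a container attached to the shared volume by dropping a JSON file into the watched directory - a sketch, where the regions subdirectory and the field values are purely illustrative (the Division is referenced by name rather than by database ID):

cat > /shared/enroller/regions/myregion.json <<'EOF'
{
    "name": "My-Region",
    "divisionName": "My-Division"
}
EOF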

--dir directory

Base directory to watch for data. Mutually exclusive with --http.

--http port

Act as an HTTP server for POST requests on this port. Mutually exclusive with --dir.

--started filename

The name of a file which, when this flag is given, will be created in the --dir directory to indicate the service was started (default: “enroller-started”).

The enroller runs within CDN in a Box using --dir, which provides the above behavior. It can also be run using --http to instead have it listen on the indicated port. In this case, it accepts only POST requests with the JSON provided in the request payload, e.g. curl -X POST https://enroller/api/4.0/regions -d @newregion.json. CDN in a Box does not currently use this method, but may be modified in the future to avoid using the shared volume approach.

Auto Snapshot/Queue-Updates

An automatic Snapshot of the current Traffic Ops CDN configuration/topology will be performed once the “enroller” has finished loading all of the data and a minimum number of servers have been enrolled. To enable this feature, set the boolean AUTO_SNAPQUEUE_ENABLED to true [8]. The Snapshot and Queue Updates actions will not be performed until all servers in AUTO_SNAPQUEUE_SERVERS (a comma-delimited string) have been enrolled. The currently enrolled servers will be polled every AUTO_SNAPQUEUE_POLL_INTERVAL seconds, and each action (Snapshot and Queue Updates) will be delayed AUTO_SNAPQUEUE_ACTION_WAIT seconds [9].

[8] Automatic Snapshot/Queue Updates is enabled by default in variables.env.

[9] The server poll interval and the action wait each default to 2 seconds.

Mock Origin Service

The default “origin” service container provides a basic static file HTTP server as the central repository for content. Additional files can be added to the origin root content directory, located at infrastructure/cdn-in-a-box/origin/content. To request content directly from the origin and bypass the CDN, make the request from within the Docker network.
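
For example - a sketch, assuming the origin serves HTTP on its default port within the Docker network and that an index.html exists in the content directory:

sudo docker-compose exec enroller curl "http://origin.infra.ciab.test/index.html"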

Optional Containers

All optional containers that are not part of the core CDN-in-a-Box stack are located in the infrastructure/cdn-in-a-box/optional directory.

  • infrastructure/cdn-in-a-box/optional/docker-compose.NAME.yml

  • infrastructure/cdn-in-a-box/optional/NAME/Dockerfile

Multiple optional containers may be combined by using a shell alias:

#43 Starting Optional Containers with an Alias
# From the infrastructure/cdn-in-a-box directory
# (Assuming the names of the optional services are stored in the `NAME1` and `NAME2` environment variables)
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.$NAME1.yml -f $PWD/optional/docker-compose.$NAME2.yml"
docker volume prune -f
mydc build
mydc up

VNC Server

The TightVNC optional container provides a basic lightweight window manager (fluxbox), Firefox browser, xterm, and a few other utilities within the CDN-In-A-Box “tcnet” bridge network. This can be very helpful for quick demonstrations of CDN-in-a-Box that require the use of real container network FQDNs and full X.509 validation.

  1. Download and install a VNC client. TightVNC client is preferred as it supports window resizing, host-to-vnc copy/pasting, and optimized frame buffer compression.

  2. Set your VNC console password by adding the VNC_PASSWD environment variable to infrastructure/cdn-in-a-box/variables.env. The password needs to be at least six characters long. The default password is randomized for security.

  3. Start up the CDN-in-a-Box stack. It is recommended that this be done using a custom bash alias

    #44 CIAB Startup Using Bash Alias
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose \
        -f $PWD/docker-compose.yml \
        -f $PWD/docker-compose.expose-ports.yml \
        -f $PWD/optional/docker-compose.vnc.yml \
        -f $PWD/optional/docker-compose.vnc.expose-ports.yml"
    docker volume prune -f
    mydc build
    mydc kill
    mydc rm -fv
    mydc up
    
  4. Connect with a VNC client to localhost port 5909 (see the sketch after these steps).

  5. Once Traffic Portal becomes available, Firefox will be started within the VNC instance.

  6. An xterm with bash shell is also automatically spawned and minimized for convenience.
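
For step 4, any TightVNC-compatible viewer works - a one-line sketch, assuming a Unix-style vncviewer(1) binary on the host:

vncviewer localhost::5909 # the double colon selects TCP port 5909 directly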

Socks Proxy

The Dante SOCKS proxy is an optional container that can be used to provide browsers and other clients the ability to resolve DNS queries and establish network connectivity directly on the tcnet bridged interface. This is very helpful when running the CDN-In-A-Box stack on an OSX or Windows Docker host, which lacks network bridge/IP-forward support. Below is the basic procedure to enable SOCKS proxy support for CDN-in-a-Box:

  1. Start the CDN-in-a-Box stack at least once so that the x.509 self-signed CA is created.

  2. On the host, import and trust the CA for your target Operating System. See Trusting the Certificate Authority.

  3. On the host, using either Firefox or Chromium, download the FoxyProxy browser plugin, which enables dynamic proxy support via URL regular expressions

  4. Once FoxyProxy is installed, click the fox icon in the upper right-hand corner of the browser window and select Options

  5. Once in the Options dialog, click Add New Proxy and navigate to the General tab:

  6. Fill in the General tab according to the table

    Table 54 General Tab Values

    Name | Value
    Proxy Name | CIAB
    Color | Green

  7. Fill in the Proxy Details tab according to the table

    Table 55 Proxy Details Tab Values

    Name | Value
    Manual Proxy Configuration | CIAB
    Host or IP Address | localhost
    Port | 9080
    Socks Proxy | checked
    Socks V5 | selected

  8. Go to the URL Patterns tab, click Add New Pattern, and fill out the form according to the table

    Table 56 URL Patterns Tab Values

    Name | Value
    Pattern Name | CIAB Pattern
    URL Pattern | *.test/*
    Whitelist | selected
    Wildcards | selected

  9. Enable dynamic ‘pre-defined and patterns’ mode by clicking the fox icon in the upper right of the browser. This mode only forwards URLs that match the wildcard *.test/* to the Socks V5 proxy.

  10. On the Docker host, start up the CDN-in-a-Box stack. It is recommended that this be done using a custom bash alias

    #45 CIAB Startup Using Bash Alias
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.socksproxy.yml"
    docker volume prune -f
    mydc build
    mydc kill
    mydc rm -fv
    mydc up
    
  11. Once the CDN-in-a-Box stack has started, use the aforementioned browser to access Traffic Portal via the socks proxy on the Docker host.

See also

The official Docker Compose CLI reference, for complete instructions on how to pass service definition files to the docker-compose executable.

Static Subnet

Since docker-compose creates a random subnet, there is a chance it will conflict with your network environment; in that case, using a static subnet is a good choice.

#46 CIAB Startup with Static Subnet
# From the infrastructure/cdn-in-a-box directory
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.static-subnet.yml"
docker volume prune -f
mydc build
mydc up

VPN Server

This container provides an OpenVPN service. Its primary use is to allow users and developers to easily access the CIAB network.

How to use it

  1. It is recommended that this be done using a custom bash alias.

    #47 CIAB Startup with VPN
    # From infrastructure/cdn-in-a-box
    alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/docker-compose.expose-ports.yml -f $PWD/optional/docker-compose.vpn.yml -f $PWD/optional/docker-compose.vpn.expose-ports.yml"
    mydc down -v
    mydc build
    mydc up
    
  2. All certificates, keys, and client configuration are stored at infrastructure/cdn-in-a-box/optional/vpn/vpnca. Simply change REALHOSTIP and REALPORT in client.ovpn to fit your environment, and then use it to connect to this OpenVPN server (see the sketch below).
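
A sketch of step 2 (the host IP and port are purely illustrative; GNU sed(1) is assumed):

sed -i 's/REALHOSTIP/203.0.113.7/; s/REALPORT/1194/' infrastructure/cdn-in-a-box/optional/vpn/vpnca/client.ovpn
sudo openvpn --config infrastructure/cdn-in-a-box/optional/vpn/vpnca/client.ovpn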

The proposed VPN client

On Linux, we suggest openvpn. On most Linux distributions, this will also be the name of the package that provides it.

#48 Install openvpn on ubuntu/debian
apt-get update && apt-get install -y openvpn

On OSX, only the openvpn client installed via brew works; the OpenVPN GUI client does not.

#49 Install openvpn on OSX
brew install openvpn

If you want a GUI VPN client, we recommend Tunnelblick.

Private Subnet for Routing

Since docker-compose randomly creates a subnet, this container prepares 2 default private subnets for routing:

  • 172.16.127.0/255.255.240.0

  • 10.16.127.0/255.255.240.0

The subnet that will be used is determined automatically based on the subnet prefix. If the subnet prefix which docker-compose selected is 192. or 10., this container will select 172.16.127.0/255.255.240.0 for its routing subnet. Otherwise, it selects 10.16.127.0/255.255.240.0.

Of course, you can decide which routing subnet to use by supplying the environment variables PRIVATE_NETWORK and PRIVATE_NETMASK.
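
For example, to force the second routing subnet regardless of what docker-compose chose - a sketch, assuming the variables are supplied via variables.env, which the containers read at startup:

echo 'PRIVATE_NETWORK=10.16.127.0' >> variables.env
echo 'PRIVATE_NETMASK=255.255.240.0' >> variables.env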

Pushed Settings

The following settings are pushed to the client:

  • DNS

  • A routing rule for the CIAB subnet

Note

It will not change your default gateway. That means apart from CDN in a Box traffic and DNS requests, all other traffic will use the standard interface bound to the default gateway.

Grafana

This container provides a Grafana service - an open platform for analytics and monitoring. The container comes with the necessary data sources and scripted dashboards already prepared. Please refer to Configuring Grafana for detailed settings.

How to start it

It is recommended that this be done using a custom bash alias.

#50 CIAB Startup with Grafana
# From infrastructure/cdn-in-a-box
alias mydc="docker-compose -f $PWD/docker-compose.yml -f $PWD/optional/docker-compose.grafana.yml -f $PWD/optional/docker-compose.grafana.expose-ports.yml"
mydc down -v
mydc build
mydc up

Apart from starting Grafana, the above commands also expose port 3000 for it.

Check the charts

There are some “scripted dashboards” that can show easily comprehended charts. The data displayed on different charts is controlled using query string parameters.

  • https://Grafana Host/dashboard/script/traffic_ops_cachegroup.js?which=Cache Group name. The query string parameter which in this particular URL should be the name of a Cache Group. With default CiaB data, it can be filled in with CDN_in_a_Box_Edge or CDN_in_a_Box_Mid.

  • https://Grafana Host/dashboard/script/traffic_ops_deliveryservice.js?which=XML ID. The query string parameter which in this particular URL should be the xml_id of the desired Delivery Service.

  • https://Grafana Host/dashboard/script/traffic_ops_server.js?which=hostname. The query string parameter which in this particular URL should be the hostname (not FQDN). With default CiaB data, it can be filled in with edge or mid.

Debugging

See Debugging inside CDN-in-a-Box.