Hi! For some time now, I have been working on an idea that I would like to share. It’s about organising the testing/staging/remote development environments for web projects in a way that makes it possible to instantiate each branch or tag of a given project simultaneously.
What does that mean? Imagine that you are working on a project — a web project (the term web here meaning that the project uses the HTTP protocol in some way) — and that its repository looks like this:
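For illustration, the branch and tag layout could look like this (a sketch — these are the names used throughout the article):
$ git branch -a
* main
  develop
  task-001-init
$ git tag
v0.1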
The question is whether there is a way to provide links (for other developers/testers/stakeholders) to a running instance of each project branch, like:
- main.dev.project.com
- develop.dev.project.com
- task-001-init.dev.project.com
- v0.1.dev.project.com
And to have all these instances accessible simultaneously, on the same host? 🤔
Why?
- to make it possible to check how a new feature looks before it is merged
- to verify that the project builds without issues (before merging the new feature into the develop/main branch)
- to be able to run regression tests (before merging)
Interested? Let’s check how it could work! 😃
Agenda
As an example project, I will use the application from the repository https://github.com/lbacik/per-branch-poc — it is this app’s repository that is shown in the image above ☝️
The purpose is to prepare (step-by-step) a remote environment and to compare three scenarios:
- One instance at a time — in this scenario, we can build each branch or tag of the project, but only one instance can be hosted at a time; hosting multiple instances simultaneously is not possible.
- Multiple instances — different ports — in this case, multiple instances can be hosted simultaneously, but each instance must listen on a different port.
- Multiple instances — the per-branch approach (as I call it) — each instance is served simultaneously on its own subdomain, like main.dev.project.com, develop.dev.project.com, task-001-init.dev.project.com and (for tags) v0.1.dev.project.com.
Preparation
To start, we will need two things:
- An application — our example app is https://github.com/lbacik/per-branch-poc
- A server on which we will build the remote environment
Server
The server can be either a virtual machine on your laptop or one of the VPS servers in the cloud. The most important thing is that the server provides SSH access to the GNU/Linux operating system installed on it. Additionally, the server should be accessible from the internet, which may be challenging when using a local virtual machine. In such a case, we may have to sacrifice some functionality related to the DNS system and HTTPS protocol. More details are provided below.
Regardless of which kind of server you decide to use, it is always good to automate the provisioning process. I’m going to use the VM on Azure Cloud, and here you can find my Terraform script to set it up.
The terraform.tfvars file I used for this test:
prefix = "perbarnch"
vm_disk_size = 30
admin_password = "SECRET"
The script’s job is to set up a VM with a plain Debian GNU/Linux OS and SSH access (via key) configured and ready to use — as its last step, it outputs the public IP address of that VM.
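Applying it follows the standard Terraform flow (the terraform.tfvars file above is picked up automatically):
$ terraform init
$ terraform apply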
In my case, the output looked like this (Azure adds some domain, but I will use my own instead):
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
domain = "http://perbarnch.westeurope.cloudapp.azure.com"
public_ip_address = "1.2.3.4"
Docker engine
The next step is to install the required software; for that purpose, I use Ansible.
Configuration (Ansible’s hosts file):
[docker]
example ansible_host=1.2.3.4 ansible_user=user01
The IP address is the one output by the Terraform script, and the user is the default user from the Terraform configuration.
Next, I used two playbooks:
$ ansible-playbook playbooks/docker/install-dockerd.yml
$ ansible-playbook playbooks/docker/user-mod.yml
As a result — we have the docker engine installed on the remote VM (that’s it).
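If you prefer not to use my playbooks, a minimal equivalent of the two steps above could look roughly like this (just a sketch, assuming a Debian host and the distribution’s docker.io package — the original playbooks may do more):
# hypothetical minimal playbook: install Docker and add the user to the docker group
- hosts: docker
  become: true
  tasks:
    - name: Install the Docker engine
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true
    - name: Add the remote user to the docker group
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true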
Docker remote access
This step is optional — if you use CI/CD pipelines, you may not need it at all 😉 — I just like working this way from my local machine. To connect your local Docker client to the remote Docker engine, you have to configure a Docker context:
$ docker context create remote-docker --docker "host=ssh://user01@1.2.3.4"
The user should be able to authenticate with SSH keys and has to be a member of the docker group on the remote host (see the Docker documentation).
As a result, after switching to the remote context, you can work with containers in the remote environment as if they were running locally:
$ docker context use remote-docker
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Domain
Ideally, you will have one. Although the core functionality will still work without a dedicated domain, the environment is much easier to manage and maintain when it has one assigned.
Assuming that we have purchased the project.com domain and our VM has the IP address 1.2.3.4, we need to delegate the dev.project.com subdomain (in the domain registrar’s panel) to a nameserver running on the environment host:
record type | host | data
-------------------------------------------------
A | ns.project.com. | 1.2.3.4
NS | dev.project.com. | ns.project.com
This sets up dev.project.com as the domain of our environment — we will come back to it in the DNS paragraph below.
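Once the registrar publishes these records, you can sanity-check them with standard DNS lookups — the first answer comes from the parent zone, while the second will only resolve once the DNS service described later is running:
$ dig +short A ns.project.com
1.2.3.4
$ dig +short A main.dev.project.com
1.2.3.4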
Docker Images
The docker build command allows you to build directly from a remote repository, so you don’t have to download/clone the project to your local machine — a great feature (in case you didn’t know it)! So, you can do it like this:
$ docker build -t per-branch-poc:develop \
"https://github.com/lbacik/per-branch-poc.git#develop"
However, I decided to build all the required images (for the arm and amd architectures) and push them to Docker Hub, so if you want to run some tests on your own with my test app, you can use the lbacik/per-branch-poc images.
You can check one out locally (remember to switch the Docker context back to the local one first):
$ docker run --rm -d -p 8080:3000 lbacik/per-branch-poc:main
Scenario 1 — one instance at a time
Assuming the docker client context is set to our remote environment, we can execute the following command:
$ docker run --rm -d -p 80:3000 lbacik/project:main
It is almost the same command I used to run the app locally. The only difference is that I used the standard HTTP port (80).
The problem is that when we try to bind another instance (built from a different branch) to the same port, an error occurs:
$ docker run --rm -d -p 80:3000 per-branch-poc:develop
9e930eea12743f20cd7905fbf50bb29ad9763ae31d7c4de36b871c48004f0088
docker: Error response from daemon:
driver failed programming external connectivity on endpoint elegant_feistel (...):
Bind for 0.0.0.0:80 failed: port is already allocated.
So, in such an approach — only one instance can be run at a time.
Scenario 2 — multi instances, different ports
To work around the problem from scenario 1, we can run the other app instances on different ports, like:
$ docker run --rm -d -p 8080:3000 lbacik/project:develop
$ docker run --rm -d -p 8081:3000 lbacik/project:task-001-init
Yep, this can work just fine! However, using different domains instead of different ports seems much more convenient to me 😄
Scenario 3: per-branch environment
This is my (almost) ideal solution, but unfortunately, it is not as easy to configure (in fact, the previous scenarios didn’t require any additional setup at all). In this scenario, there are some requirements to fulfil before we can use it. What we need is:
- Reverse proxy
- DNS
The idea is that users can type into their browsers the address of the particular project branch they want to access, like:
main.dev.project.com
develop.dev.project.com
task-001-init.dev.project.com
To achieve this, in the big picture, we need DNS to resolve each of the above names (domains) to the same IP address (the address of our environment’s host) — so, in this example, to 1.2.3.4.
In the second stage, when the request hits the environment’s host, it has to be routed to the proper container according to the Host header of the HTTP request.
Please note that the HTTPS port (443) used in the diagram above can be replaced with 80 (HTTP) — in this scenario, the two protocols work interchangeably. For now, assume port 80 (HTTP) is used; I will write more about SSL in the HTTPS paragraph below.
Reverse proxy
The predefined proxy container can be found here. The repository (its docker-compose.yml file) also contains the configuration of a second container — acme — but we will discuss that one later (together with HTTPS). For now, only the proxy container matters❗️
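For context: nginx-proxy is a single container that watches the Docker socket and generates its nginx configuration from the VIRTUAL_HOST variables of the other containers. On its own it boils down to roughly the following (the compose file in the repository wraps a similar setup, parametrised by the .env file):
$ docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    nginxproxy/nginx-proxy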
One of the ways to run it is to clone the repository to your local machine, copy and edit the .env file, and start the container!
$ git clone https://bitbucket.org/lbacik/proxy-ssl.git
$ cp env-example .env
The .env file contains the environment variables:
# check the https://github.com/nginx-proxy/nginx-proxy for more details
# NETWORK indicates the docker network used by the backend (containers behind
# the proxy). All the containers that should be accessible through the proxy
# MUST utilize that particular network!
NETWORK=main
# Host IP (the IP address accessible from the intranet/internet) and ports
# to which the proxy container should be bound.
HOST_IP=0.0.0.0
PROXY_HTTP_PORT=80
PROXY_HTTPS_PORT=443
# proxy (and acme) MUST be able to access the docker socket to "read"
# the environment variables from the backend containers.
DOCKER_SOCKET=/var/run/docker.sock
# containers name prefix
COMPOSE_PROJECT_NAME=proxy
# EMAIL is an environment variable passed to the acme container
# description can be found at https://github.com/nginx-proxy/acme-companion
EMAIL=
Now it is time to run it (please remove any container bound to port 80 first — if any exists)!
$ docker compose up -d proxy
# and check if it is running...
$ docker compose ps
If everything is okay, let's test it! Start the application container:
$ docker run --rm -d --net main \
-e VIRTUAL_HOST=main.dev.project.com \
-e VIRTUAL_PORT=3000 \
lbacik/project:main
Notice the VIRTUAL_HOST and VIRTUAL_PORT environment variables — they do the magic here!
Note II: the container has to be connected to the same network that is configured as the NETWORK environment variable in the proxy’s .env file (here: main).
Now, the second stage of the proxy test.
Theory: the proxy needs to know the hostname of the container the user wants to access (this name is sent, among the other HTTP headers, as the Host header — at the TCP level there is only the IP address). So, to test the whole configuration, it should be enough to prepare an appropriate HTTP request (with the Host header set accordingly)! This means that even though we rely on domain names, DNS is unnecessary for this particular test.
I will use the httpie tool for that test:
$ http 1.2.3.4 Host:main.dev.project.com
HTTP/1.1 200 OK
...
Hello World!
It works!
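If you don’t have httpie installed, plain curl can send the same request (the -H flag sets the Host header explicitly):
$ curl -H "Host: main.dev.project.com" http://1.2.3.4
Hello World!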
Start the next container:
$ docker run --rm -d --net main \
-e VIRTUAL_HOST=develop.dev.project.com \
-e VIRTUAL_PORT=3000 \
lbacik/project:develop
Let’s test if we can access it:
$ http 1.2.3.4 Host:develop.dev.project.com
HTTP/1.1 200 OK
...
branch: develop
The 1.2.3.4 address is the address of the remote host on which we set up our Docker engine, but it can also be replaced with 127.0.0.1 (the localhost address) if you run these tests locally.
Suppose you want to run a test from your browser. Because we haven’t configured DNS yet, you need to tell your browser somehow how to resolve the given domains to the appropriate IP address — in such cases, the /etc/hosts file can be the solution. To make the above examples work, add this line to your /etc/hosts:
1.2.3.4 main.dev.project.com develop.dev.project.com
And point your browser to one of these domains!
The /etc/hosts file can be helpful during testing, but it isn’t a good general solution — the DNS system is a much better one 😄 One possible DNS configuration is described in the next paragraph.
One last note about the proxy container — I have used nginx-proxy here, but it is not your only option. Traefik could also work; however, I haven’t tested it yet.
DNS
The role of DNS in this scenario is to resolve all the (so-called) branch and tag domains (like main.dev…, develop.dev… or v0.1.dev…) to the same IP address — the IP of the environment’s host.
It can be achieved as follows:
- At the domain registrar, we have to delegate all queries for subdomains of our per-branch environment domain (dev.project.com) to a dedicated NS (here: ns.project.com).
- At the per-branch environment, we must provide a DNS service (on port 53/UDP) that responds to all queries for dev.project.com subdomains. In the diagram below, I described this service as a wildcard (all subdomains are marked as *.dev.project.com); however, such a wildcard is not a typical (formal) DNS feature (as far as I know) — fortunately, that is not a big problem!
⚠️ Warning: there is one downside to the wildcard approach — with such a configuration, every subdomain of the parent domain resolves to the given IP address, even domains like foobarfoobar.dev.project.com — that is, domains that have no counterpart in any branch or tag name. A stricter way of resolving would probably be better, but I haven’t figured out how to configure one yet — any ideas are welcome!
Okay, now the configuration — we add a DNS container; my setup can be found here: https://bitbucket.org/lbacik/dns
Clone the repository and edit the configuration:
$ git clone https://bitbucket.org/lbacik/dns.git
$ cd dns
$ cp env-example .env
My configuration (the .env file) looks like this:
NETWORK=main
DOMAIN=docker.local
HOST_IP=10.0.0.4
NETWORK — here, the same network as for the proxy container, although it could be any other network; DOMAIN is used only internally and has no meaning in our case. The only important setting here is HOST_IP — it can’t be 0.0.0.0 (because it would then collide with the Docker engine’s internal DNS service); it has to be set explicitly to the host’s IP address.
I put 10.0.0.4 there because Azure assigned that IP to my VM — in your case, it can be a different IP, or even the public IP address of the environment, i.e. 1.2.3.4.
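If you are not sure which address to use, you can check the VM’s interface addresses directly on the host, for example:
$ hostname -I
10.0.0.4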
And the Dnsmasq configuration file: images/dnsmasq/conf/dnsmasq.conf — where we need only one line (all others can be removed/commented out):
address=/dev.project.com/1.2.3.4
Now, let’s start the container:
$ docker compose up -d
$ docker compose ps
CONTAINER ID IMAGE ... PORTS NAMES
80e0c60f3246 dns-ns ... 10.0.0.4:53->53/tcp, 10.0.0.4:53->53/udp dns-ns-1
Our NS container (in fact, a Dnsmasq one) should be bound to port 53/UDP of the host machine, and it should be accessible from the internet!
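You can also query the container directly (bypassing the public delegation) to confirm that Dnsmasq answers as expected:
$ dig @1.2.3.4 +short main.dev.project.com
1.2.3.4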
If it is — we can start testing!
$ host foobarfoobar.dev.project.com
foobarfoobar.dev.project.com has address 1.2.3.4
$ host main.dev.project.com
main.dev.project.com has address 1.2.3.4
From this point on, the particular branches should be accessible without any HTTP-header or resolver tricks. Try it out (both with httpie and in the browser):
$ http main.dev.project.com
HTTP/1.1 200 OK
...
Hello World!
$ http develop.dev.project.com
HTTP/1.1 200 OK
...
branch: develop
HTTPS
When our environment is accessible from the internet and the hosted domains are resolvable via the global DNS, we have fulfilled all the requirements to use free Let’s Encrypt SSL certificates! All we have to do now is start the acme container from the proxy-ssl project (the project we have already cloned, configured and used).
Go back to the proxy-ssl project directory, and type:
$ docker compose up -d acme
Once it has started, we have all the core containers of the per-branch environment up and running 😆:
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
a7a40526ec53 nginxproxy/acme-companion "/bin/bash /app/entr…" proxy-acme
80e0c60f3246 dns-ns "dnsmasq -k --log-qu…" 10.0.0.4:53->53/tcp, 10.0.0.4:53->53/udp dns-ns-1
8b0618e14489 nginxproxy/nginx-proxy "/app/docker-entrypo…" 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp proxy-proxy
Similarly to nginx-proxy’s VIRTUAL_HOST variable, the acme container has a very similar one of its own: LETSENCRYPT_HOST — it indicates the domain for which the certificate should be generated, and certificates are generated only for containers that define this variable. To add it, we have to recreate the example app containers:
docker run --rm -d --net main \
-e VIRTUAL_HOST=develop.dev.project.com \
-e VIRTUAL_PORT=3000 \
-e LETSENCRYPT_HOST=develop.dev.project.com \
lbacik/project:develop
docker run --rm -d --net main \
-e VIRTUAL_HOST=main.dev.project.com \
-e VIRTUAL_PORT=3000 \
-e LETSENCRYPT_HOST=main.dev.project.com \
lbacik/project:main
Check the acme container logs to see whether the certificate has been generated — it usually takes a while (one or two minutes). If everything goes okay with the certificate, you can verify that HTTP traffic is (by default) redirected to the HTTPS port (443) for containers that define the LETSENCRYPT_HOST variable.
$ http http://main.dev.project.com
HTTP/1.1 301 Moved Permanently
...
Location: https://main.dev.project.com/
And those subdomains should be secured by a trusted certificate issued by Let’s Encrypt!
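You can also inspect the issued certificate from the command line with standard openssl tooling (shown here against one of the example subdomains from this article):
$ echo | openssl s_client -connect main.dev.project.com:443 \
    -servername main.dev.project.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates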
I can’t generate the certificate for the dev.project.com domain as, unfortunately, it is only an example — as you have probably guessed, I don’t own the project.com domain 😉 But I use precisely the same method described above to generate the certificates for all of my sites — so to check that it works, you can examine, e.g., the certificate for https://fortune.luka.sh 😆
TA DAM! (curtain)
Afterword
The story is far from over. Although I have suggested a way of setting up the environment’s skeleton, I have not yet addressed the complexities of working with applications more intricate than a single container. There are many aspects to consider, such as how to connect the application to a database, whether each project instance can have its own database, or whether a shared one should be used. There is still much to discuss, so stay tuned for more! 😃