# Deploying mytoken with docker swarm
## About
In the following we describe how to set up a full mytoken server deployment with docker swarm.
In this "tutorial" we will run everything on a single machine, but extending it to multiple nodes should be doable 
for people experienced with docker swarm. Probably the most crucial part there will be creating the networks between the nodes.
If you successfully set this up across multiple nodes, we would be happy if you share your experience with us:
m-contact@lists.kit.edu
The services we'll run are:

- `load_balancer` - Uses HAProxy
    - Takes all requests and forwards them to an available mytoken node (round robin)
    - The configuration binds the needed ports to the host ports
    - For http(s) the ports 443 and 80 are needed and should be publicly reachable
    - For the ssh grant type port 2222 is used and should be publicly reachable
    - Stats are available at port 8888 (default credentials are `mytoken:mytoken`); this port should not be publicly reachable
- `mytoken` - The actual mytoken server application
    - We start 3 instances
- `mytoken_migrate` - A tool for preparing the database on the first run and updating the schema between versions
    - Runs and exits when finished. Rerun on update.
- `db` (galera node) - The mariadb database
    - We start 3 nodes
- `db-bootstrap` (galera bootstrap node) - When starting the database cluster, we have to bootstrap the cluster
    - We use a dedicated container for this purpose only
    - After the other galera nodes are up and functional, this container can be stopped
## Preparations
### Setting up Docker
We need a rather up-to-date docker installation (version 20.10.7 is enough) and also docker-compose.
Enable docker swarm:
docker swarm init
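You can verify that the node is now an active swarm manager:

```bash
# Should print "active" once the swarm is initialized
docker info --format '{{.Swarm.LocalNodeState}}'
```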
### Mytoken Preparations
We must do a bit of preparation, mainly creating secrets and setting up the config.
#### Config files
- Clone the mytoken repo: `git clone https://github.com/oidc-mytoken/server`
- Inside the `docker` dir, make a copy of all `example`-prefixed files, removing this prefix. These are now the needed config files. (Don't forget the one inside the `haproxy` directory; a possible shell one-liner for this step is sketched below.)
- If needed, adapt the mytoken server's config file (`docker-config.yaml`); also register OIDC clients for all supported issuers and add them to the config file.
- We have the following config files:
    - `docker-config.yaml` - The mytoken server's config file
    - `haproxy/haproxy.cfg` - The config file for the `haproxy` load balancer
    - `.env` - Environment variables for mount points for various containers
    - `db.env` - Environment variables for the db containers
You can look at all the files, but no changes are needed other than the ones we detail below (checking the paths in `.env`).
There is also the `docker-swarm.yaml` file that defines the containers, but for this tutorial no changes should be
needed there.
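For the copy step above, a minimal sketch (assuming the files are literally prefixed with `example-`; adjust the pattern if the repo layout differs):

```bash
# Run inside the cloned repo's docker directory.
# Copies every example-prefixed file to the same name without the prefix.
shopt -s nullglob
for f in example-* haproxy/example-*; do
    cp "$f" "${f/example-/}"
done
```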
#### Setups
We now have to do some setup, like creating keys and passwords.
The `mytoken-setup` tool helps with some of the steps; it uses the same config file as the server.
We will use the `mytoken-setup` docker container, but it could also be done with the regular binary (however, the config is a bit different then).
We also provide a `setup-helper.sh` script located in the `docker` directory. This should do the whole setup,
but please have a look at the script before running it.
Important
You also want to set the `$MYTOKEN_DOCKER_DIR` and `$MYTOKEN_DOCKER_CONFIG` env vars first.
The first one is the location where we will store most of the data that we set up and that should persist. The second one
is the full path to the `docker-config.yaml` file.
After setting the env vars, run the setup script. It requires sudo for `chown`, so that the db mount points can be
correctly used inside the container.
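For example (the paths below are placeholders; pick any location that should persist, and run from inside the `docker` directory):

```bash
# Placeholder paths - choose your own persistent location.
export MYTOKEN_DOCKER_DIR=/srv/mytoken
export MYTOKEN_DOCKER_CONFIG="$PWD/docker-config.yaml"

# sudo -E keeps the exported variables visible to the script;
# if the script calls sudo internally, run it without sudo instead.
sudo -E ./setup-helper.sh
```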
#### Update .env file
As mentioned above, the other config files should not need modifications.
The config files reference docker 'secrets' that are mapped in the container, as well as normal mounts.
These mounts use env vars, so the files themselves do not need modification, but we must ensure that the env vars are
correctly set. These are specified in the `.env` file.
Check that everything is correct there (you will need to change things here); this is mainly about setting the base path.
Note
The cert secret env var needs not only the base path adapted, but also the filename ('localhost' instead of 'mytoken').
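As an illustration only (the variable names below are made up; check the shipped `.env` file for the real ones):

```bash
# Hypothetical .env excerpt - the actual variable names are in the shipped file.
MYTOKEN_DOCKER_DIR=/srv/mytoken
# Note the 'localhost' filename instead of 'mytoken':
CERT_FILE=/srv/mytoken/localhost.crt
```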
## First start
Unfortunately, the just-updated `.env` file is not used if we deploy the docker swarm stack directly (`docker stack deploy` does not read it).
So we create the final compose file from the `.env` file and the `docker-swarm.yaml` file using docker-compose.
Run `docker-compose -f docker-swarm.yaml config > $MYTOKEN_DOCKER_DIR/docker-compose.yaml` from inside the
`docker` dir. Then the created compose file can be passed to docker when deploying the stack (`-c /path/to/file`).
Alternatively, everything can be done in a single command:
docker stack deploy -c <(docker-compose -f docker-swarm.yaml config) mytoken
For testing this is probably a bit easier, but for production use it is recommended to first create the final compose file and then deploy the stack with:
docker stack deploy -c $MYTOKEN_DOCKER_DIR/docker-compose.yaml mytoken
If everything goes smoothly, mytoken should be available after some time at https://localhost.
You can check the load_balancer stats page (even before mytoken is ready) at
http://localhost:8888 (credentials are `mytoken:mytoken`).
Note
The first start-up can take some time (several minutes). It is expected that the mytoken server containers exit at least once with a non-zero status code; however, they should eventually come up.
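To watch the services come up, the standard swarm status commands can be used:

```bash
# Overview of the stack's services and their replica counts
docker stack services mytoken

# Per-task state, including restarts of the mytoken containers
docker stack ps mytoken --no-trunc
```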
After everything is up, the db-bootstrap container can be stopped:
docker stop mytoken_db-bootstrap.... # Use completion for the full name
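If shell completion is not available, the full name can also be looked up with a filter:

```bash
# Find the running bootstrap container by name prefix and stop it
docker stop "$(docker ps --filter name=mytoken_db-bootstrap --format '{{.Names}}')"
```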
## Helpful commands
- Completely shut everything down:
  docker stack rm mytoken
- List the stack's tasks:
  docker stack ps mytoken
- Scaling:
  docker service scale mytoken_mytoken=X
- Logs for a container:
  docker logs [-f] mytoken_<service>.... # Use completion for the full name
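Logs can also be fetched per service rather than per container, which avoids needing the full container name:

```bash
# Aggregated logs from all replicas of the mytoken service
docker service logs -f mytoken_mytoken
```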
## Background information
Mytoken nodes can fail and recover at will. It is also possible that all nodes are gone (you can test this by scaling to 0) and are simply started again later; of course, the service won't be available while no mytoken server is running.
For the database, nodes can fail and recover as long as there is always at least one healthy node. If all nodes fail, the cluster has to be bootstrapped again when started. This requires a bit of manual preparation. Our naive way is the following:
- (Only required if you stopped the bootstrap node and don't want to lose data:) Copy the data from the node with the most recent data (e.g. if you shut down everything with `stack rm`, just use any one):
  sudo cp -rp $MYTOKEN_DOCKER_DIR/db/1/data/ $MYTOKEN_DOCKER_DIR/db/b1/data/
- Change the entry `safe_to_bootstrap` in `$MYTOKEN_DOCKER_DIR/db/b1/data/grastate.dat` from `0` to `1`
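The flag can be flipped in an editor or, as a sketch, with sed (assuming the standard Galera `grastate.dat` format):

```bash
# Mark the bootstrap node's state as safe to bootstrap from
sudo sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' \
    "$MYTOKEN_DOCKER_DIR/db/b1/data/grastate.dat"
```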