Docker is a tool that makes the lives of system admins easier. It helps with application and data management in three specific ways:
1. Applications only get installed "once", but can be deployed many times.
You can think of a Dockerised application as one that has been installed and is ready to run for the "first time". After the installation has completed, the install point is "frozen" at that point in time, so that when you "unfreeze" it, you continue on from where you left off.
So each deployment of a Dockerised application is ready to go, as if you had just completed the installation.
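As a minimal sketch of "install once, deploy many times" (the image name `myapp` and its Dockerfile are hypothetical stand-ins for your own application):

```bash
# "Install" the application once by building an image from its Dockerfile
docker build -t myapp:1.0 .

# Every deployment then starts from that same frozen install point
docker run -d --name myapp-prod myapp:1.0
docker run -d --name myapp-test myapp:1.0
```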
2. It optimises the upgrade and back-out paths, and provides a quick recovery capability for any failed deployments. It also enables you to move applications to different hosts and/or different paths. (In fact, applications moving between hosts in a Docker cluster is perfectly normal.)
So upgrades are really easy - you can consider an upgrade as:
1. Stop the application
1. Back up the application data (a super quick way to do that is a filesystem or disk snapshot)
1. Deploy the new version of the application
1. The new version of the application reads the data and, where appropriate, updates it so that it works with that new version
Then, if the upgrade completes normally, you can remove the snapshot (or archive it). And if you archive it along with the container image that was being used, you have a recoverable application as at the point in time the snapshot was created.
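As a rough sketch of that flow (assuming a hypothetical `myapp` container with its persistent data on a ZFS dataset - substitute whatever snapshot tooling your storage provides):

```bash
# 1. Stop the application
docker stop myapp

# 2. Back up the application data with a filesystem snapshot
zfs snapshot tank/myapp-data@pre-upgrade

# 3. Deploy the new version against the same persistent storage
docker rm myapp
docker run -d --name myapp -v /tank/myapp-data:/data myapp:2.0

# 4. On startup, the new version reads /data and updates it as required
```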
If something does go pear-shaped and the upgrade fails, your back-out plan is:
1. Stop the new version of the application
1. Recover the application data from the latest recovery point (or revert the snapshot)
1. Deploy the original version of the application - it should carry on as if nothing happened (other than having been shut down and restarted).
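Again as a sketch, using the same hypothetical names as the upgrade example above:

```bash
# 1. Stop (and remove) the failed new version
docker stop myapp && docker rm myapp

# 2. Revert the application data to the pre-upgrade recovery point
zfs rollback tank/myapp-data@pre-upgrade

# 3. Re-deploy the original version against the reverted data
docker run -d --name myapp -v /tank/myapp-data:/data myapp:1.0
```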
3. It separates the files relating to any application into three specific categories:
1. Application related files - in Docker, these are put inside the Docker image when the image is built.
2. Application data files - this is the data that personalises the application, and it is often tied to the specific application version that created it. In general terms, this data has business value: it could be your email database, your RDBMS database files, etc.
3. Temporary data files - these are files that your application may create as a result of running (eg: temp files, etc). They have no long-term value, but are needed by the application to run; if they are lost, there is no concern.
So, for any application, you just need to keep the Application related files (1) and the Application data files (2).
By design, containers should be considered ephemeral, ie: they can be destroyed and created many times. As long as the Application data files are connected to the same version of the Application related files when a container is started, the application will live on until it is retired.
In Docker, your Application related files are in a Docker Image, your Application data files are on "Persistent Storage", and the Temporary data files are often created inside the container and discarded when the container is destroyed.
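A minimal `docker run` sketch of that split (the names and paths are hypothetical):

```bash
# Application related files: baked into the image (myapp:1.0) at build time
# Application data files:    on persistent storage, mounted into the container
# Temporary data files:      inside the container (or a tmpfs), discarded with it
docker run -d --name myapp \
  --mount type=bind,source=/srv/myapp-data,target=/data \
  --tmpfs /tmp \
  myapp:1.0
```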
# Docker and Spectrum Protect
## Spectrum Protect Server
So from a Spectrum Protect point of view, if you are going to run a Spectrum Protect server, containerising it enables you to leverage all three points above:
1. Regardless of whether you'll have one SP server or 100, you install it "once", store it in your Docker Image library (aka a Docker Registry), and you can deploy it many times, almost instantly.
2. When you upgrade your Spectrum Protect server, you have an easy upgrade and back-out plan, and in fact you can even "test" the upgrade before you do it live.
3. When you deploy it, your "Application data files" will be your Spectrum Protect configuration (ie: the database) and your Spectrum Protect data (ie: your disk storage pools).
You can place those anywhere you like, following your organisation's data deployment policies, but when referenced inside the container they are always found at the same paths that SP was set up to use when you "installed" it.
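As a purely illustrative sketch (the image name, paths and port below are assumptions about how you might have built and laid out your own SP server image, not an official IBM image):

```bash
# Run an SP server container with its database and storage pools
# kept on persistent storage outside the container
docker run -d --name spserver \
  --mount type=bind,source=/sp/instance,target=/tsminst1 \
  --mount type=bind,source=/sp/db,target=/tsmdb \
  --mount type=bind,source=/sp/stgpools,target=/tsmstg \
  -p 1500:1500 \
  spserver:8.1
```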
If you are interested in Dockerising Spectrum Protect, this might help you get [started](server).
## Spectrum Protect Client
From a client point of view - ie: protecting your business application and enabling you to recover it - Dockerising your application makes your Spectrum Protect management even easier. Again, looking at the same three points above:
1. Application related files - ie: the Docker Images - don't really need to be protected (daily, at least). You'll have a copy of your Docker Image on the host(s) where your application is deployed and a copy in your Docker registry.
If your host ever dies, you'll set up a new host, ```docker pull``` the application, recover your data and start your application. (If anything, you could protect your "built" application by protecting the Docker Registry that it is stored in.)
2. Application data files - these represent your application's configuration and data. You do want to protect these, and in many cases you can protect them from "outside" the container.
There is no silver bullet for protecting this data: your approach will depend on the application, and on whether it provides an API or a "dump" capability. In very simple terms, you could "pause" a Docker container, snapshot the devices (disk or filesystem) where the persistent storage is held, and you have the equivalent of a hardware-based snapshot backup (see the sketch after this list).
3. Temporary data files - these can be ignored. Generally this data lives "inside" the container and is discarded if the container is ever "destroyed and recreated".
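Here is a minimal sketch of the "pause and snapshot" approach mentioned in point 2 (the container name and the use of ZFS are assumptions - any filesystem or disk snapshot mechanism will do):

```bash
# Freeze the container's processes so the persistent data stops changing
docker pause myapp

# Snapshot the storage backing the container's persistent data
zfs snapshot tank/myapp-data@$(date +%Y%m%d-%H%M)

# Resume the application - the pause only needs to last a few seconds
docker unpause myapp

# The snapshot can now be backed up with the Spectrum Protect client
# without impacting the running application
```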
If you are interested in Dockerising Spectrum Protect Clients, this might help you get [started](client).