I think it's common (and important!) to build your "production" image once and deploy it into a dev/staging environment first. This means there are no re-builds between deploys to each environment.
However, I think it's less common to run automated testing within the same Docker image you build and deploy to these environments.
This is challenging because you probably don't want to include any build/test related packages in your production image (or layer) but you still want some level of confidence that your CI and Prod environments are the same.
I have often seen builds pass automated testing but fail after deployment because our production Dockerfile/layer was missing packages or shared libraries that were present in the CI environment.
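One way to close that gap is a multi-stage Dockerfile where the test stage builds `FROM` the production stage, so tests run against the exact production filesystem with only test tooling layered on top. A hedged sketch (the stage names, base image, and gem groups are illustrative, not from the thread):

```dockerfile
# Production stage: runtime dependencies only.
FROM ruby:3.2-slim AS prod
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test
COPY . .
CMD ["bundle", "exec", "puma"]

# Test stage: starts from the exact prod image and layers test tooling on top.
# If prod is missing a package or shared library, tests fail here too.
FROM prod AS test
RUN bundle install --with test
CMD ["bundle", "exec", "rspec"]
```

CI builds and runs the `test` target (`docker build --target test …`), while the deployable artifact is the `prod` target, whose layers are exactly what the tests ran on.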
Why wouldn't you want to include build/test related packages in your production image? For exactly the reason you mention: if you can't test the Docker image without those tools in it, you cannot guarantee the production image works as you expect. Precisely because of that, I think any packages required to run the tests should be part of the production image.
> However, I think it's less common to run automated testing within the same Docker image you build and deploy to these environments.
When you say "automated tests" do you mean unit or integration tests, or both?
Because unit should generally be in your compilation process, not part of any image, and integration shouldn't require anything different in the image compared to any other environment.
Build it, which includes unit tests, and then deploy and integration tests should be run against your stable API.
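That sequence might look like the following in a generic CI config (a hedged sketch; the job names, image tag, and the `deploy.sh` / `run-integration-tests.sh` scripts are hypothetical placeholders):

```yaml
stages: [build, deploy-staging, integration-test]

build:
  script:
    # Unit tests run inside the image build (e.g. a RUN step),
    # so a unit-test failure fails the build itself.
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker push myapp:$CI_COMMIT_SHA

deploy-staging:
  script:
    - ./deploy.sh staging myapp:$CI_COMMIT_SHA

integration-test:
  script:
    # Integration tests run from outside the image, against the stable API.
    - ./run-integration-tests.sh --base-url https://staging.example.com
```

Nothing test-specific ever enters the image; the same tag moves through every environment.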
Great question! In my experience with Ruby on Rails, the application runtime is so dynamic that even unit tests may not be reliable if run outside the production image.
I think normally this isn't much of an issue because other automated/manual integration testing in the staging environments will catch any major problems, to your point.
Another example would be browser testing via chromedriver. I've usually seen this implemented alongside unit tests (i.e., prior to the build phase), but since it generally serves as an integration-level test for many applications, this has led to issues due to the testing and production environments being out of sync.
I think multi-layer Docker images are a compelling solution to this, but it's not usually how I've seen it implemented.
Instead, I've typically seen test and prod environments maintained manually: sometimes as two separate Dockerfiles, sometimes as a shared CI environment managed by different teams.
> Another example would be for browser testing via chromedriver.
That shouldn't be in your master image. Put your application in a container, and either just run your browser tests and point it at your dev/test/prod deployment, or build a second image with Chrome driver and the test scripts and point it at your deployed application.
Doesn't need to be multi-layer, and it doesn't need to be complex. You have essentially two apps here.
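The two-app setup could be sketched with compose (hedged; the service names, ports, and `TARGET_URL` variable are illustrative, and the `browser-tests` image is assumed to bundle Chrome, chromedriver, and the test scripts):

```yaml
services:
  app:
    image: myapp:latest          # the unmodified production image, no test tooling
    ports:
      - "3000:3000"

  browser-tests:
    build: ./browser-tests       # separate image: browser + driver + specs
    depends_on:
      - app
    environment:
      TARGET_URL: http://app:3000   # tests point at the deployed app over the network
```

The same `browser-tests` image can be pointed at dev, staging, or prod by changing `TARGET_URL`, while `app` stays identical everywhere.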