The slow, slow enterprise

Most developers know about the benefits of containers and usually have some experience using them, especially in cloud circles. In the enterprise, however, adoption isn’t nearly as high, and many people don’t know what to do with Docker and the like. So how and why should enterprise-sized companies work with containers?

IT projects in large organizations generally have long development and release cycles – it’s not uncommon to hear of applications with annual release days that are planned and prepared months ahead. This is partly due to the complexity of dependencies in large code bases, but also to the organizational overhead of distributed teams working on different parts of the system. In fact, for many companies this wasn’t a problem until recently: the processes that solve their business case have usually been around for decades, and there was no rush to deploy changes to them. In recent years, however, these requirements have changed drastically. Digitalisation is dramatically reshaping the landscape in almost every end-user facing industry and is slowly gaining ground in B2B environments as well. Suddenly, time to market has become an important KPI for IT departments – and the infrastructure that grew up around slow release cycles is starting to slow developers down.

Some of this pain was alleviated by virtual machines and automated provisioning of infrastructure. Previously, deploying a new application meant installing new systems to host it. Between the corporate purchasing process, production and delivery times, it could take months from the initial request to fully installed servers. With virtual machines, the same tasks could be done in days – with fully automated environments and configuration management systems, maybe even less.

Containerized benefits

Still, days or hours may be too slow, especially when manual provisioning work is involved. No developer wants to wait a week for a minor version update of the Node.js server. And the situation only gets worse when one system hosts more than a single application. This is where containers come in and save the day for the enterprise. So what is a container? It’s easiest to think of containers as very lightweight virtual machines that contain everything an application needs to run, from the operating system to the application dependencies to the configuration of the environment. Basically, everything you would otherwise put on a server to host your app goes into the container. The interesting thing here is that containers are built declaratively: you write a build file that describes all the steps necessary to create a container. When using Docker as the container system, such a build file could look as follows:

# Start from the preexisting ‘alpine’ base image
FROM alpine
# Install curl using Alpine’s package manager
RUN apk add curl
# When a container is started from this image, perform an HTTP request
CMD curl http://httpbin.org/get

You would add these lines to a so-called Dockerfile and run ‘docker build’ on it, which creates a container image based on the preexisting ‘alpine’ image (a very lightweight Linux distribution), installs curl into the image and tells the system to perform an HTTP request whenever a container is started from the image. Docker builds the image as a series of ‘layers’, one for each line of the Dockerfile. Layers are shared among images, so if several images are built on top of the ‘alpine’ base, the host does not need to download and store multiple copies of the base image.

With the image built and published to a container registry, any Docker host can instantiate any number of these containers almost instantly. After the initial download, spinning up one of these containers takes a couple of seconds. Installing a new Node.js version now becomes the task of changing a single line in the Dockerfile and pushing the new image to the registry. Running this new image has no impact on the other containers on the same Docker host. A developer can even modify the Dockerfile on her local machine and verify that the app will run in production just the same way it does locally. No more ‘but it works on my machine’! And since everything is built in a standardized, declarative way, infrastructure definitions in the form of Dockerfiles can be included in version control and in continuous integration and deployment pipelines. A single ‘git push’ can provision new containers – with new application dependencies and even new operating system versions – and a process that used to take months can be handled in seconds.
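
In practice, the whole cycle boils down to a handful of standard Docker commands. Here is a minimal sketch – the image name ‘my-app’ and the registry address ‘registry.example.com’ are placeholders rather than part of any real setup:

# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Start a container from the image locally; the CMD from the Dockerfile runs
docker run --rm my-app

# Tag the image for a registry and push it so any Docker host can pull it
docker tag my-app registry.example.com/my-app:1.0
docker push registry.example.com/my-app:1.0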

Where are the shadows?

Up to now, everything sounds bright and peachy, but of course this is only half of the story. Especially in enterprise environments, there are a number of other factors to account for that generally won’t be mentioned in container marketing material. First off, running a single container works pretty much as described above, but many containers that need to talk to each other? That’s a whole different story. Suddenly you need a container orchestrator, service discovery, distributed logging and tracing, and a host of other things.

Another point that is often overlooked is security and compliance. Defining and deploying your own infrastructure may sound like a developer’s dream, but it’s a compliance officer’s nightmare. When anybody can deploy their own little machines, how is anyone supposed to verify that all systems are patched and up to date, and that no critical vulnerabilities make it into production? Fortunately, the layered and structured nature of container images makes it easy to inspect them even before they run in production, and hooks in the host system can prevent containers with critical vulnerabilities from running at all.
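
To make this concrete, such a check can be wired into the build pipeline. Here is a minimal sketch using the open-source Trivy scanner as one example of such a tool – the image name is again a placeholder:

# Scan the image and fail with exit code 1 if critical vulnerabilities are found,
# so a CI job can stop the image from ever reaching production
trivy image --exit-code 1 --severity CRITICAL registry.example.com/my-app:1.0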

Getting started in your org

For all of these necessities there are excellent open source and commercial products that make life easier, but there is a cost involved that may not be immediately obvious. Running Docker in a production environment in the enterprise requires a lot of know-how that may not yet exist within the organization. And containers differ from other tools in that containerizing your system changes the entire way developers interact with operations. The best way to get started is to find people from both the Ops and Dev departments who are interested in changing the status quo and let them build up know-how together. A project set up on the new infrastructure will attract attention within your company, and once the benefits of the new environment become apparent, more people will want to be part of the change.

Written by REWE
Sebastian Pleschko
Head of Operations
RIAG.digital department

Take a look into a developer’s life within REWE Group and see how REWE connects food retail with digitalization and software development in the video below:
