Why Should We Care About Containers for Development

4 Jun 2018

More than once in your development career, you've probably spent a few hours troubleshooting an issue only to find out it was a dependency or versioning problem, right?

Environments that vary from one to the next, outdated components, and setting up development machines are frustrations we can all do without.

Some of these issues we've solved with VMs, but managing an entire machine for each environment, and underutilizing it, is costly. This is where containers have come to solve many challenges.

Why Containers

There's no doubt you have heard the buzz about containers over the last year or more. If not containers themselves, then some technology, framework, or tooling associated with them: Docker, Kubernetes, and microservices are just a few. What are the benefits from a developer's perspective?

Works on My Machine

Each environment your code or application is deployed to can be different. The number of servers and the configuration of CPU, RAM, etc. may vary across Development, QA, and Production, and there is no guarantee the app will perform the same. Containers let you define these parameters at startup, so each image runs with the same requirements regardless of where it is deployed.

docker run --cpus=2 --memory=1.5g myapp 
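The same limits can also be declared in a compose file rather than passed on the command line each time. A minimal sketch, assuming the service name myapp (the `cpus` and `mem_limit` keys shown here are from compose file format 2.x):

```yaml
version: '2.4'
services:
  myapp:
    image: myapp
    # same resource constraints as the docker run flags above
    cpus: 2
    mem_limit: 1.5g
```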


Applications composed of many services may not all be developed in the same language or framework. If you are developing a service in .NET Core, for instance, that calls a downstream service written in Node.js or Go, the development team responsible for that service can build the container image and make it available in your private registry.

Using a docker-compose file, you can then bring in that container image without needing Node.js or Go installed on your machine and test your calls against the development version.


version: '3.0'
services:
  nodeservice:
    image: nodeservice:dev
    environment:
      - NODE_ENV=Development
  namesweb:
    depends_on:
      - nodeservice
    image: namesweb
    build:
      context: .
      dockerfile: namesweb/Dockerfile
    environment:
      - NODE_SERVICE_ENDPOINT=NodeService-Dev
    ports:
      - "57270:80"

Test Data

Few applications involve no data, and maintaining an instance of your production system locally can be a chore. Installing the tools, platform, server, etc., as well as keeping all of these items up to date, is more than you probably want to worry about as a developer.

In this docker-compose file, an instance of SQL Server on Linux is started; a script file creates the test database, and finally test data is loaded using the bulk copy (bcp) tool.

version: '3.0'
services:
  mssql:
    image: microsoft/mssql-server-linux:latest
    container_name: mssql
    ports:
      - 1433:1433
    volumes:
      - /var/opt/mssql
      # we copy our scripts onto the container
      - ./sql:/usr/src/app
    # bash will be executed from that path, our scripts folder
    working_dir: /usr/src/app
    # run the script that imports the data AND start sqlservr
    # (the script name entrypoint.sh is illustrative)
    command: sh -c 'chmod +x ./entrypoint.sh; ./entrypoint.sh & /opt/mssql/bin/sqlservr;'
    environment:
      ACCEPT_EULA: 'Y'
      SA_PASSWORD: P@$$w0rd


#!/bin/bash
# give SQL Server time to start before running the scripts
# (the 90s value is illustrative; adjust as needed)
wait_time=90s
echo creating resources in $wait_time
sleep $wait_time
echo starting...

echo 'creating Names DB'
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -i ./init.sql

echo 'creating Names Table'
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -i ./data/NameTable.sql

echo 'importing data...'
# Bulk load data from a csv file
/opt/mssql-tools/bin/bcp Names in data/Names.csv -S localhost -U sa -P $SA_PASSWORD -d Names -c -t ','

echo 'add uniqueid column'
/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "ALTER TABLE Names ADD ID UniqueIdentifier DEFAULT newid() NOT NULL;"

echo 'checking data'
/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "SELECT TOP 5 ID,FullName FROM Names;"

init.sql & NameTable.sql

-- init.sql
CREATE DATABASE Names;

-- NameTable.sql
USE Names;
CREATE TABLE Names (
  LastName nvarchar(max),
  FirstName nvarchar(max),
  FullName nvarchar(max)
);

Using this test database image, a subset of the production data is loaded and available for the application to use. The container_name, mssql, serves as the server name in the connection string, so pointing the application at this server requires nothing more than a change to that setting.

If you are using docker-compose for all of the containers, DNS name resolution is handled automatically. To use the database container individually, get its IP address with:

docker inspect -f "{{ .NetworkSettings.IPAddress }}" <containerId>

"ConnectionStrings": {
    "NamesConnection": "Server=mssql;Initial Catalog=Names;User=sa;Password=P@$$w0rd;"
}


Deployment

Deploying applications is never easy, right? Rolling back on failure is an even harder story, and at times is almost a deployment in itself.

Whether you build your containers with a full CI/CD system, command-line tooling via docker build, or the Visual Studio Docker Tools, all of the dependencies are isolated in the image. When it is deployed to the host machine from a registry, it is still that same container.

Things do go wrong, and should there be problems with a deployment, rolling back a version can be as simple as changing the image tag from 1.1 to 1.0, and your app is reset.
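As a sketch, that rollback can be nothing more than editing the image tag in your compose file and re-running docker-compose up (the registry and service names below are illustrative):

```yaml
services:
  namesweb:
    # was: myregistry/namesweb:1.1 -- pin back to the known-good tag
    image: myregistry/namesweb:1.0
```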


Summary

Containers provide a packaging and deployment mechanism for our application and its dependencies. The container registry is a powerful concept that helps with the deployment and distribution of our application.

Containers also improve the "inner loop" of our development experience when working locally, particularly as we trend towards building microservices over monolithic applications. They provide greater parity across local and remote environments including the cloud and help our infrastructure to become more immutable.

The vibrant ecosystem of tooling around containers also helps us consume cloud-native platforms and development methodologies. Whether we are using serverless containers, a PaaS platform that supports containers, or an orchestrator like Kubernetes, we can focus on our application instead of thinking about and managing the individual host or hosts we deploy it to.



This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Shayne P Boyer
United States United States
I work on Azure, ASP.NET Core content and Open Source, speak at national and community events while helping teams architect web and cloud applications.

Article Copyright 2018 by Shayne P Boyer