Or at least, a better way!
Testing environments, the traditional way
Your team probably has a small number of environments that everyone shares during your development sprints. Depending on budget, team size and release frequency, the details vary, but the picture roughly looks like this:
- Dev – Allows developers to test features beyond their local environment, before handing work over to the QA team.
- Test – Where the QA team performs its functional testing, usually within the scope of a release.
- UAT – An environment that mimics (or at least tries to mimic) production. Usually used to try to reproduce non-functional issues seen in production, or for performance testing.
The Dev and Test environments, at least, are quite different from production from an infrastructure perspective, as they're only intended for functional testing.
Why is it done this way?
Historically, computing resources were expensive, but even now, in the cloud era, a number of factors still encourage this approach. Let's dive into what the drivers could be.
Environment setup and deployment pain
Environment creation can be divided into two broad areas:
- Infrastructure setup:
- On premise – Hardware needs to be bought, the host OS set up, VMs created, etc.
- Cloud – Compute instances need to be created, IaaS security specifics configured, etc.
- Both – Ensure only the team can access the environments, by configuring SSH, firewalls, DNS, proxies, VPNs, etc.
- Application setup (this applies to both the on-premise and cloud variants):
- Install and configure third-party services: DBs, caches, middleware, etc.
- Build your application artifacts, usually using a CI tool
- Deploy your application artifacts, using scripts or a CM tool (Puppet/Chef/Ansible/etc)
Doing all of the above manually is a daunting task. If you automate it, you still need to write, maintain and evolve whatever automation solution you choose.
Accessibility
All team members need to be able to access the environment at any time and from anywhere (or at least during working hours and from the workplace), without any extra effort on their side.
Ease of management
Ideally, the fewer resources dedicated to test environment management, the better. This is usually a task handled by the operations team.
Some fresh stats
I recently found a report with some very interesting insights on the topic: World Quality Report 2016-17 (pages 45-50) by Capgemini.
Here’s my take on these results.
From the first figure we can see that 28% of testing still happens the traditional way, as described at the beginning of this post. The rest happens on temporary test environments, whether cloud-based, virtualized or non-cloud. Maybe this à-la-carte tendency is budget related, maybe it comes from an agility perspective; either way, it reflects a move towards creating testing environments more dynamically. It would be interesting to see what the environment creation process looks like, and the overhead it adds in terms of infrastructure and application setup, but these feel like steps in the right direction.
The second figure is the one that lets me draw the more interesting conclusions, though. All sections in the graph except the 4th one (starting from the top) suggest that almost 50% of the people surveyed saw the handling of testing environments as an issue, in one way or another: maintenance of multiple versions, ability to book or manage, lack of visibility of availability, availability of the right environment at the right time, and inability to manage excess needs. As I see it, this implies that the transition to on-demand environments isn't being done properly, probably with infrastructure and/or budget restrictions affecting it too. I would also infer that there's still a heavy reliance on the operations team to create the testing environments, and that the test team's deadlines are probably affected by these limitations. In any case, there seems to be quite some room for improvement.
Environments, the docker way
One of the key benefits of Docker is the reproducibility of environments. This means that all environments created with the same configuration will behave the same way, no matter where they are created, no matter how many times they are recreated.
With Docker Compose, you can combine the creation of the different services that make up your environment. These services can be application services (your business logic) or third-party ones (DBs, caches, middleware, etc.). This way we avoid the manual setup and configuration step, and the need for error-prone deployment scripts.
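As a sketch of what this looks like in practice (the service names, images, ports and credentials below are illustrative assumptions, not taken from any specific project), a minimal docker-compose.yml could define an application service alongside the third-party services it depends on:

```yaml
# Hypothetical docker-compose.yml: one application service plus a DB and a cache.
# Image names, ports and credentials are illustrative assumptions.
version: "3"
services:
  app:
    build: .                  # build the application image from the local Dockerfile
    ports:
      - "8080:8080"           # expose the app to the host
    environment:
      DB_URL: postgres://app:secret@db:5432/app
      CACHE_URL: redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:9.6       # third-party service: database
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
  cache:
    image: redis:3            # third-party service: cache
```

With a file like this in place, `docker-compose up` creates the whole environment in one command, and `docker-compose down` tears it down again, identically every time.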
But we still need to be able to bundle our application artifacts into our Docker images. This implies checking out our project from our Git (hopefully!) repository, building the artifact, and creating an image that bundles said artifact.
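For illustration, the build-and-bundle step can itself be captured in a multi-stage Dockerfile, so that the artifact is built and packaged reproducibly from the checked-out sources (the base images, paths and artifact name here are assumptions for a hypothetical Java project, not a prescription):

```dockerfile
# Hypothetical multi-stage Dockerfile for a Java service.
# Base images, paths and the jar name are illustrative assumptions.

# Stage 1: build the artifact from the checked-out sources
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package -DskipTests

# Stage 2: bundle only the built artifact into a slim runtime image
FROM openjdk:8-jre-alpine
COPY --from=build /src/target/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

The same `docker build` on the same commit then produces the same image, wherever it is run.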
Unfortunately, not every team member has the skills or expertise to work with Git, Docker and Docker Compose, not to mention whatever the project's specific technologies are (needed to build the application artifacts bundled with Docker).
How do we solve this?
- Removing the Docker learning curve.
- Reducing the Git skills required.
- Providing the ability to build the Docker images for your application services, and to launch the third-party services your application depends on.
- Ensuring environment management is simple, and can be done anywhere, by anyone.
This is what Sandbox can do for you! You get an easy-to-use UI providing:
- Full Git integration – Launch an environment from a specific branch/commit with the click of a button! In fact, launch multiple environments simultaneously, allowing you to compare the behaviour of different branches side by side*.
- Comprehensive environment management – Lifecycle (start/stop/restart) actions for your environments and/or the services that compose them, and access to your services' logs.
- Point and click editor – A simple way to define the different services that make up your application.
After many years in software development, you get acquainted with the issues around testing environments. Even though as a developer you might not be involved in the testing process directly, it's always part of the release process, so you get affected by these issues too, especially when fixing bugs or working under tight deadlines. I'm sure I'm not the only one who has ever thought about ways of improving this area!
How is your team or company dealing with issues around testing environments? Does the discussion in this post sound familiar? I’d be interested to know your experiences and thoughts on the topic!
I'd like to make it clear that I'm not presenting this approach as a silver bullet, as it obviously has its limitations. For the type of testing you would do on a UAT environment, be it replicating infrastructure-related bugs, testing NFRs or doing performance testing, this clearly would not work, whether you are using Sandbox or any other solution that relies on a local environment. On the other hand, I think it is at least valid for functional testing, which in my experience makes up a big part of the overall testing effort in a release.
It may also not be valid if your production deployments are cloud-based and you use some kind of provider service that you cannot reach unless you're running from its infrastructure, or if you have network-related limitations (e.g. relying on an external service that only allows access from a specific network you cannot reach).
Finally, it will obviously depend on the type of application you're testing. A massive application with lots of services (application or third-party) will probably struggle to run on an average box, but every project I've been involved in definitely ran on my laptop during the development phase, so for those, this would have been a perfectly valid approach for the QA team to follow.