Demoing your new software solution, batteries included

Scenario: You’ve created a new tool/language/framework and want to share it – either online, with as many people as possible, or in a live demo or workshop. You need people to start working with it and learning it as fast as possible, while spending as little time as possible getting it set up.

One of the biggest challenges when creating a new development tool is getting it into the hands of users. Chances are your target audience will first have to set up the necessary environment and dependencies.

Languages like Go, Node.js, and Ruby offer simplified installers, and in the case of Go and Ruby, in-browser code sandboxes that let newcomers get a taste of the language and possibly follow a tutorial online.

But to get a real sense of the tool, they need it on their machines. So you either sacrifice the flexibility of working locally, or the immediacy of starting to code right away. That is, unless you take the hassle out of setting up an environment altogether – that’s where Sandbox comes in.

With Sandbox, batteries are included

Let’s try a Ruby application. No, you don’t need Ruby installed – it will run inside a container. No, you don’t need Docker installed either – it comes with Sandbox, which includes everything you need to get going, right there in the Git repo. Automatically, no hassle.

Go ahead and clone this Ruby sample repo. Simply run:

git clone
cd sbox-ruby
./sbox run server

And your app is running! Run ./sbox status to find out the IP Sandbox is running at, and your app will be at [Sandbox IP]:31001! The app will update as you change the code in /src, so feel free to experiment with it.

What just happened?

The magic comes from the Sandbox binaries included in the Git repo. They are tiny – less than 200k – but they install everything needed to run Docker and Kubernetes. A workflow file determines what should run where, including caching of installation steps and live reloading of changed files, all in easy-to-read YAML:

  - run:
      name: Install dependencies
      image: 'ruby:2.4.2-alpine'
      cache: true
      dependencies:
          - Gemfile
      script: |-
        gem install foreman
        gem install bundler
        cd app
        bundle install
  - service:
      name: Run Application
      step: Install dependencies
      script: |-
        foreman start -d /app
      watch:
          - Gemfile
          - Gemfile.lock
          - src
      volumes:
        - mountPath: /app/src
          hostPath: ./src
      ports:
        - container: 5000
          external: 31001

You can read more about how Sandbox does this here.

As a user, you don’t need to go through the hassle of installing a tool just to find out if it’s right for you – it just works. And since the workflow files are clear and simple to read, users can get a sense of what it took to make the application run just by glancing at them.

As a developer, your tool becomes that easy to share, and that easy to get running on someone else’s machine, with no issues and very little time spent. That means more time and user patience left for trying out your creation, and a lower barrier to entry overall!

Making development life easier isn’t easy

With Sandbox finally out, having gone through a number of iterations and pivots (more on that in a later post, perhaps?), here is a small and personal account of what got me into this project, and what I see in Sandbox.

When I decided to join the StackFoundation project as a co-founder, I did so mainly out of admiration for both the vision of the project, and the ability to achieve it, knowing from previous experiences the capabilities of this team. The goal was clear, and a path to that goal appeared evident, at the time – to make development life easier.

Writing it now, it sounds awfully generic and bland – which tooling/testing/workflow developing company doesn’t claim this as their goal? Yet while the path to achieving it turned out to be a lot messier and blurrier than we thought it would be, the goal itself was as clear then as it is today. So what does this generic-sounding goal mean to me then?

In a world with hundreds of solutions for building, testing, deploying, and creating and managing environments, everything seems to pull in its own direction. If a tool pleases a developer, it might not be robust or complete enough for a systems manager. If it pleases a systems manager, it will be too complex for a QA to use or adjust. There is no consistency between roles, machines or targets, and no easy communication between them either.

Streamlining processes should start at the personal level

We were attracted from the beginning to the ideas behind containerisation, as they seemed to be a good answer to the issues we wanted to address – guaranteed interoperability and consistency across systems, reproducible testing/production environments, and the safety of sandboxed testing and production operations. The problem was that Docker is not an accessible tool (not to mention Kubernetes). If we were looking for a solution that facilitates development, that meant facilitating the work of every role in a project, from QA to DevOps, without requiring deep knowledge of new technologies from all parties involved.

This was particularly important to me, as my background as a programmer was definitely less backend- and systems-heavy than the rest of the team. To me, in particular, the project needed to tick a few boxes:

Make environments simple to set up

As I said, Docker and Kubernetes aren’t easy. They’re complex tools with a learning curve to overcome – a relatively small one if things have been set up for you, and a deeper understanding of the processes if you are the one in charge of setting them up.

One of our first mistakes, in an earlier iteration of Sandbox, was trying to remove this complexity altogether. This led to an exclusively UI-based system, which brought with it a slew of issues.

Make environments easy to use for everyone

To me, this was the core of our project. As I said before, there are plenty of build/CI/containerisation tools out there, including Docker and Kubernetes, on top of which Sandbox is built. The missing piece is making those tools available and useful to the DevOps, the developer, the QA.

This meant sharing a workflow should “just work”: workflows are committed to Git, and running them installs and runs all necessary components on the user’s machine. There is no need to manually install dependencies – running a single command is all it takes.

Scalable up and scalable down

This actually stems from the previous two objectives, and sums them up nicely – a lot of the tools we use for deploying and managing environments, mainly due to their complexity and weight, tend to be used only in the context of big shared servers – on the testing machine(s), in production, in a cloud CI service, and so on. “Containerising localhost”, while appealing for many reasons, tends to be very hard to implement in reality – port issues, setup issues, and simply adding another point of breakage to the pipeline can make it more of a nuisance than a solution.

Yet for a lot of use cases (like the aforementioned QAs) it would be the ideal solution, were it simple and hassle-free. Our tool aims to do exactly that, which means using the same toolset for everything from running the app on a developer’s machine to producing a production-ready build, with all cases in between covered.

Using Google Optimize with Angular

Recently, we’ve started doing A/B testing on the StackFoundation website, to better understand our visitors and to test out changes in design and messaging. It allows us to experiment with different messaging strategies (even ones we would normally dismiss outright), and get real data on their effectiveness, as opposed to relying on our biased views (our biases are a good topic for an entirely different discussion).

For that we are using Google Optimize – an A/B testing tool that ties into Google Analytics, and is both free and easy to set up.

How Optimize works

Google Optimize works by injecting/replacing content on the page for a certain percentage of users. The exact changes are usually set up through a Chrome plugin and triggered on page load. In the case of single-page websites like ours, changes are triggered by an event.
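Under the hood, that event is just a push onto the global dataLayer array that the Analytics snippet creates on the page. A minimal sketch of the mechanism, assuming the standard snippet is installed (the activateOptimize helper name is ours, not part of any API):

```typescript
// A minimal sketch of how an SPA re-triggers Optimize after a
// client-side view change: Optimize listens on the global `dataLayer`
// array, and pushing an 'optimize.activate' event tells it to re-apply
// experiment changes to the new DOM.
export function activateOptimize(win: any = (globalThis as any).window): void {
    // Guard against the Analytics/Optimize snippet not having loaded
    // (e.g. blocked by the user, or still in flight).
    if (win && win.dataLayer) {
        win.dataLayer.push({ event: 'optimize.activate' });
    }
}
```

Calling this after every client-side navigation or DOM update is all it takes to have Optimize re-check the page.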

Using Google Optimize with Angular was, as it turns out, a very simple process. Angular has a pretty robust system of lifecycle hooks, and using the right one is all we need.

Setting a catch-all hook to trigger experiment changes

For nearly all use cases, a catch-all is all we need. Our worry here was that it might mean a lot of traffic going back and forth between the user’s machine and Google if many changes occur. As it turns out, reactivating Optimize doesn’t trigger new requests, so this is quite simple.

On the root component of your app, simply:

import { Component, AfterViewChecked } from '@angular/core';

@Component({
    selector: 'my-app',
    template: `
        <div class="content">
            ...
        </div>
    `
})
export class Root implements AfterViewChecked {
    ngAfterViewChecked() {
        if (window['dataLayer']) {
            window['dataLayer'].push({ 'event': 'optimize.activate' });
        }
    }
}

This ensures any change to the HTML is followed by a check-and-replace pass by the Optimize snippet, and since it happens in ngAfterViewChecked, there is no risk of it being overwritten or colliding with Angular’s own processes.

This has the added benefit of avoiding any flicker due to content changes, except when the page first loads.

Avoiding content flicker on initial page load

Google provides a page-hiding snippet precisely because of this flicker. Ours being a single-page app, we preferred to avoid any page content/style changes that fall beyond our control. In this particular case, that was a mistake – the snippet does its work very well, and unless very specific, fine-grained control is necessary, it is definitely the way to go.
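For reference, here is a sketch of what that snippet does, restructured as a named function for readability (Google ships it as a minified IIFE; GTM-XXXXXXX stands in for your own container ID, and the companion CSS rule .async-hide { opacity: 0 !important } does the actual hiding):

```typescript
// Hedged reconstruction of Google's page-hiding snippet: it tags <html>
// with an 'async-hide' class (which the companion CSS uses to hide the
// page), and removes the class either when Optimize activates or when
// a failsafe timeout expires, whichever comes first.
export function installPageHide(
    win: any,                          // window-like object (holds dataLayer)
    docEl: any,                        // document.documentElement-like object
    hideClass = 'async-hide',
    layerName = 'dataLayer',
    timeoutMs = 4000,
    containers: any = { 'GTM-XXXXXXX': true } // your Optimize container ID
): void {
    const h: any = containers;
    docEl.className += ' ' + hideClass;
    h.start = Date.now();
    const end = () => {
        docEl.className = docEl.className.replace(RegExp(' ?' + hideClass), '');
    };
    h.end = end;                       // Optimize calls this once it activates
    (win[layerName] = win[layerName] || []).hide = h;
    setTimeout(() => { end(); h.end = null; }, timeoutMs); // failsafe unhide
    h.timeout = timeoutMs;
}
```
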

That being said, we explored how to avoid this flicker using only Angular, and here is the result. I cannot stress this enough – what follows ended up not being used, as we deemed it better to use Google’s snippet.

Analysing the snippet provided by Google, we see that an object containing data for the specific Optimize account, plus a callback, should be added under the hide property of the dataLayer object Google uses. We also noted that if dataLayer has already been created, the callback won’t run – but in that case, there is no risk of flicker. Using APP_INITIALIZER:

{
    provide: APP_INITIALIZER,
    useFactory: (/* ... */) => {
        return () => {
            return new Promise<boolean>(resolve => {
                if (!window['dataLayer']) {
                    let _valObj: any = { 'GTM-XXXXXXX': true };
                    _valObj.start = new Date().getTime();
                    _valObj.end = () => resolve(true);
                    window['dataLayer'] = [];
                    window['dataLayer'].hide = _valObj;
                    setTimeout(function () {
                        _valObj.end = null;
                        resolve(true);
                    }, 4000);
                    _valObj.timeout = 4000;
                } else {
                    resolve(true);
                }
            });
        };
    },
    multi: true
}