Going serverless with Lambdas

This is the first post of a series where we’ll be writing about the motivation and decision making process behind our cloud strategy for Sandbox.

Setting the stage

Our first assumption was that, given that Sandbox follows a client-server architecture and that most of its core functionality is on the client side, our cloud requirements wouldn’t be as high as if we were a pure SaaS product.

When we started architecting our cloud components for Sandbox, we came up with the following list of requirements:

  1. It had to allow us a short time-to-market. Rewriting certain parts later was a trade-off we were willing to accept as a side effect of this.
  2. It had to be cost efficient. We didn’t anticipate much traffic in the early days after our initial go-live, so it felt natural to look for a ‘Pay as you go’ model, rather than having resources sitting idle until we ramped up our user base.
  3. It needed to allow a simple go-live strategy, both in terms of process and of tooling. It obviously also needed to leverage our previous knowledge of cloud providers and their products/services.

So, what did we decide to go with, and most importantly, why?

First, choosing the cloud provider: we had a certain degree of experience using AWS, from our professional life as consultants, but especially from our days working at StackFoundation, so this was the easy choice!

The AWS offering is huge and constantly evolving, so deciding which services to use wasn’t that straightforward. From a computing perspective, there seemed to be two clear choices: using ECS to handle our EC2 instances, or going serverless with Lambdas. Our experience leaned more towards EC2, but when we revisited our cloud architecture requirements, we concluded Lambdas were a better fit, as they ticked all the boxes:

  1. In terms of actual application development, choosing EC2 or Lambdas didn’t seem to make much of a difference. Once you decide whether to go with plain Lambdas or to use a framework that makes Lambda development closer to standard API development, feature development was pretty straightforward, with a low learning curve. We initially went with JRestless, although we later crafted our own solution, the lambda-api module of the Datamill project – more on this in future posts. In the EC2 world, this would have meant micro-services running on containers, so pretty much the same picture.
  2. The term ‘Pay as you go’ feels like it was coined for Lambdas, at least in terms of compute resources. You get billed based on your memory allocation and actual execution time (rounded up to the nearest 100ms). With EC2 instances, on the other hand, you have to keep your instances live 24×7. So we either had to go with low-powered T2 family instances to keep the cost low and struggle if we got unexpected usage spikes, or go with M3 family instances and potentially waste money on them during periods where they were under-used – which would probably be the case during our early preview period.
  3. In terms of overall architecture, configuring ECS clusters and all the components around them is more complicated than simply worrying about actual application code, which is one of the selling points of Lambdas (and the serverless movement in general). This may be an overstatement, as even with Lambdas there is additional configuration – for example, you still need to take care of permissions (namely Lambda access to other AWS resources like SES, S3, DDB, etc.) – but it is at least an order of magnitude simpler than doing the same on the ECS front. In addition, even though the AWS console is the natural place to start until you get a grasp of how the service works and the configuration options available, you will eventually want to move to some kind of tooling to take care of deployment, both for your lower environments and for production. We came across Serverless, a NodeJS-based framework that helps you manage your Lambda infrastructure (or the function-based offerings of other providers). The alternative would probably have been Terraform, an infrastructure-as-code framework. Our call here was motivated both by project needs and by the learning-curve overhead of the tooling. If we went with Lambdas, they would be practically the only AWS resource we needed to manage, so Serverless offered all we needed in this sense; a general-purpose framework like Terraform would have meant dealing with concepts and features we wouldn’t need for our use case.

So, now that we’re live, how do we feel about our assumptions/decisions after the fact?

In terms of application development, the only caveat we found was around E2E testing. We obviously have a suite of unit and integration tests for our cloud components, but during development we realised that some bugs arose from the integration with our cloud frontend that couldn’t be covered by our integration tests. This was especially true when services like API Gateway were involved, as we haven’t found a way of simulating them in our local environment. We came across localstack late in the day and gave it a quick try, but it didn’t seem stable enough. We couldn’t spend longer on it at the time, so we decided to cover these corner scenarios with manual testing. We might revisit this decision going forward, though.

In terms of our experience working with Lambdas, the only downside we came across was the problem of cold starts. AWS keeps a Lambda warm for an undocumented period, after which the next invocation requires the Lambda infrastructure to be recreated before it can serve requests. We knew about this before going down the Lambda path, so we can’t say it came as a surprise. We had agreed that in the worst case we would go with a solution proposed on several sites: a scheduled Lambda that keeps our API Lambdas “awake” permanently. To this end, we added a health check endpoint to every function, which is called by the scheduled Lambda. In future releases we will also use the health check response to take further actions, for example to notify us of downtime.
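For illustration, here is a minimal sketch of what such a warming Lambda could look like as a Node.js/TypeScript handler triggered by a scheduled event – the endpoint URLs are made up, and our actual implementation differs in the details:

import * as https from "https";

// Hypothetical health check endpoints exposed by the API Lambdas through API Gateway.
const HEALTH_ENDPOINTS: string[] = [
    "https://api.example.com/users/health",
    "https://api.example.com/projects/health"
];

function ping(url: string): Promise<number> {
    return new Promise((resolve, reject) => {
        https.get(url, res => resolve(res.statusCode || 0))
            .on("error", reject);
    });
}

// Triggered by a CloudWatch Events schedule; keeps the API Lambdas warm.
export const handler = async (): Promise<void> => {
    const statuses = await Promise.all(HEALTH_ENDPOINTS.map(ping));
    statuses.forEach((status, i) => {
        // In future releases the response could also be used to trigger alerts.
        console.log(HEALTH_ENDPOINTS[i] + " -> " + status);
    });
};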

Overall, we’re quite happy with the results. We may change certain aspects of our general architecture, or even move to an ECS-based solution if we go SaaS, but for the time being this one seems like a perfect fit for our requirements. Further down the line, we’re planning a follow-up post on the concepts discussed here, with some stats and further insights into how this works both in production and as part of our SDLC.

Of course, there is no rule of thumb here, and what works for us may not work for other teams/projects, even if they share our list of requirements. In any case, if you’re going down the serverless path, we’d love to hear about your experiences!

Understanding NgZone

Angular 2 does a lot of things differently from Angular 1, and one of its greatest changes is in change detection. Understanding how it works has been essential for us, particularly when using Protractor for E2E testing. This post explores how to work with zones for testing and performance.

A live example of the mentioned code is here.

The biggest change in how Angular 2 handles change detection, as far as a user is concerned, is that it now happens transparently through zone.js.

This is very different from Angular 1, where you have to specifically tell it to synchronise – even though both the built-in services and the template bindings do this internally. What this means is that while $http or $timeout do trigger change detection, if you use a third party script, your Angular 1 app won’t know anything happened until you call $apply().

Angular 2, on the other hand, does this entirely implicitly – all code run within the app’s Components, Services or Pipes exists inside that app’s zone, and just works.

So, what is a zone?

zone.js’s zones are actually a pretty complicated concept to get your head around. At the risk of over-simplifying, they can be described as managed code calling contexts – closed environments that let you monitor, control, and react to all events, from asynchronous tasks to thrown errors.

The reason this works is that, inside these zones, zone.js overrides and augments the native methods – Promises, timeouts, and so on – meaning your code doesn’t need to know about zones to be monitored by them. Every time you call setTimeout, for instance, you unknowingly call an augmented version, which zone.js uses to keep tabs on things.
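As a rough sketch of that mechanism (using zone.js directly, outside of Angular), you can fork a zone and be notified whenever a task scheduled inside it – a setTimeout, say – is invoked:

import "zone.js";

// Fork a child zone that observes every async task scheduled within it.
const watchedZone = Zone.current.fork({
    name: "watched",
    onInvokeTask: (delegate, current, target, task, applyThis, applyArgs) => {
        console.log("task invoked: " + task.source); // e.g. "setTimeout"
        return delegate.invokeTask(target, task, applyThis, applyArgs);
    }
});

watchedZone.run(() => {
    // This setTimeout is the patched version installed by zone.js, so the
    // zone knows about it without the code doing anything special.
    setTimeout(() => console.log("done"), 100);
});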

What Angular does is use zone.js to create its own zone and listen to it, and what this means for us as Angular users is this: all code run inside an Angular app is automatically listened to, with no work on our part.

Most of the time, this works just fine – all change detection “just works”, and Protractor will wait for any asynchronous code you might have. But what if you don’t want it to? There are a few cases where you might want to tell Angular not to wait for / listen to some tasks:

  • An interval to loop an animation
  • A long-polling http request / socket to receive regular updates from a backend
  • A header component that listens to changes in the Router and updates accordingly

These are cases where you don’t want Angular to wait on the asynchronous task, nor change detection to run every time it fires.

Control where your code runs with NgZone

NgZone gives you back control of your code’s execution. There are two relevant methods in NgZone – run and runOutsideAngular:

  • runOutsideAngular runs a given function outside the Angular zone, meaning its code won’t trigger change detection.
  • run runs a given function inside the Angular zone. It is meant to be called inside a block created by runOutsideAngular, to jump back in and tell Angular to start listening again.

So, the first snippet below will have problems being tested, as the app will be constantly unstable; the second wraps the subscription in runOutsideAngular and jumps back into the Angular zone only to update the bound property:

// Problematic: the timer keeps the zone permanently busy, so the app never stabilises
this._sub = Observable.timer(1000, 1000)
    .subscribe(i => {
        this.content = "Loaded! " + i;
    });

// Fixed: run the timer outside the Angular zone, jumping back in only to update the binding
this.ngZone.runOutsideAngular(() => {
    this._sub = Observable.timer(1000, 1000)
        .subscribe(i => this.ngZone.run(() => {
            this.content = "Loaded! " + i;
        }));
});

Simplifying usage

After understanding how NgZone works, we can simplify its usage, so that we don’t need to sprinkle NgZone.runOutsideAngular and NgZone.run all over the place.

We can create a SafeNgZone service to do exactly that, as the most common use case for this is to:

  • Subscribe to an Observable outside the Angular zone
  • Return to the Angular zone when reacting to that Observable.

import { Injectable, NgZone } from "@angular/core";
import { Observable } from "rxjs/Observable";
import { Subscription } from "rxjs/Subscription";
import { PartialObserver } from "rxjs/Observer";

@Injectable()
export class SafeNgZone {

    constructor(private ngZone: NgZone) {}

    // Subscribes outside the Angular zone, but runs the callbacks back inside
    // it, so change detection still picks up the updates.
    safeSubscribe<T>(
        observable: Observable<T>,
        observerOrNext?: PartialObserver<T> | ((value: T) => void),
        error?: (error: any) => void,
        complete?: () => void): Subscription {
        return this.ngZone.runOutsideAngular(() =>
            observable.subscribe(
                this.callbackSubscriber(observerOrNext),
                error,
                complete));
    }

    private callbackSubscriber<T>(obs?: PartialObserver<T> |
        ((value: T) => void)) {
        if (typeof obs === "object") {
            let observer: PartialObserver<T> = {
                next: (value: T) => {
                    obs['next'] &&
                        this.ngZone.run(() => obs['next'](value));
                },
                error: (err: any) => {
                    obs['error'] &&
                        this.ngZone.run(() => obs['error'](err));
                },
                complete: () => {
                    obs['complete'] &&
                        this.ngZone.run(() => obs['complete']());
                }
            };

            return observer;
        } else if (typeof obs === "function") {
            return (value: T) => {
                this.ngZone.run(() => obs(value));
            };
        }
    }
}

With this, the previous code gets simplified quite a bit:

// The following:
this.ngZone.runOutsideAngular(() => {
    this._sub = Observable.timer(1000, 1000)
        .subscribe(i => this.ngZone.run(() => {
            this.content = "Loaded! " + i;
        }));
});

// Becomes:
this._sub = this.safeNgZone.safeSubscribe(
    Observable.timer(1000, 1000),
    i => this.content = "Loaded! " + i);

 

Backup and replication of your DynamoDB tables

We are using Amazon’s DynamoDB (DDB) as part of our platform. As stated in the FAQ section, AWS itself replicates the data across three facilities (Availability Zones, AZs) within a given region, to automatically cope with an eventual outage of any of them. This is a relief, and useful as an out-of-the-box solution, but you’ll probably want to go beyond this setup, depending on what your high availability and disaster recovery requirements are.

I have recently done some research and POCs on how best to achieve a solution in line with our current setup. We needed it to:

  • be as cost effective as possible, while covering our needs
  • introduce the least possible complexity in terms of deployment and management
  • satisfy our current data backup needs and be in line with allowing us to handle high availability in the near future.

There’s definitely some good literature on the topic online (1), besides the related AWS resources, but I have decided to write a series of posts which will hopefully provide a more practical view of the problem and the range of possible solutions.

In terms of high availability, probably your safest bet is to go with cross-region replication of your DDB tables. In a nutshell, this allows you to create replica tables of your master ones in a different AWS region. Luckily, AWS Labs provides an implementation of this, open-sourced and hosted on GitHub. If you take a close look at the project’s README, you’ll notice it is implemented using the Kinesis Client Library (KCL). It works by consuming DDB Streams, so for this to work, streaming needs to be enabled on the DDB tables you want to replicate, at least on the master ones (replicas don’t need it).
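For illustration, assuming the AWS SDK for JavaScript and a made-up table name, enabling the stream on a master table boils down to something like this (the same can of course be done from the console or your infrastructure templates):

import * as AWS from "aws-sdk";

const dynamodb = new AWS.DynamoDB({ region: "eu-west-1" });

// Enable streaming on the (hypothetical) master table so the replication
// workers can consume its change events.
dynamodb.updateTable({
    TableName: "sandbox-users",
    StreamSpecification: {
        StreamEnabled: true,
        // NEW_AND_OLD_IMAGES gives the replication process full item images.
        StreamViewType: "NEW_AND_OLD_IMAGES"
    }
}, (err, data) => {
    if (err) { console.error(err); return; }
    console.log("Stream ARN:", data.TableDescription && data.TableDescription.LatestStreamArn);
});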

From what I’ve seen, there are several ways of accomplishing our data replication needs:

Using a CloudFormation template

Using a CloudFormation (CF) template to take care of setting up all the infrastructure you need to run the cross-region replication implementation mentioned above. If you’re not very familiar with CF, AWS describes it as:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Creating a stack with it is quite straightforward, and the wizard allows you to configure the options in the following screenshot, besides some more advanced ones on the next screen, for which you can use the defaults in a basic setup.

(Screenshot: CloudFormation stack configuration options)

Using this template takes care of creating everything, from your IAM roles and Security Groups to launching the EC2 instances that perform the job. One of those instances takes care of coordinating replication and the other(s) take care of the actual replication process (i.e. running the KCL worker processes). The worker instances are implicitly defined as part of an autoscaling group, so as to guarantee that they are always running, to prevent events from the DDB stream going unprocessed, which would lead to data loss.

I couldn’t fully test this method, as after CF finished setting up everything, I couldn’t use the ReplicationConsoleURL to configure master/replica tables due to the AWS error below. In any case, I wanted more fine-grained control of the process, so I looked into the next option.

(Screenshot: AWS error returned by the ReplicationConsoleURL)

Manually creating your AWS resources and running the replication process

This basically implies performing most of what CF does on your behalf. So it means quite a bit more work in terms of infrastructure configuration, be it through the AWS console or as part of your standard environment deployment process.

I believe this is a valid scenario if you want to use your existing AWS resources to run the worker processes. You’ll need to weigh your cost restrictions and computing resource needs before considering this a valid approach. In our case, it would help us with both, so I decided to explore it further.

Given that we already have EC2 resources set up as part of our deployment process, I decided to create a simple bash script that kicks off the replication process as part of our deployment. It basically takes care of installing the required OS dependencies, cloning the git repo and building it, and then executing the process. It requires four arguments to be provided (source region/table and target region/table). Obviously, it doesn’t perform any setup on your behalf, so the argument tables need to exist in the specified regions, and the source table must have streaming enabled.

This proved to be a simple enough approach, and it worked as expected. The only downside is that, even though it runs within our existing EC2 fleet, we still needed to figure out a mechanism for monitoring the worker process, in order to restart it if it dies for any reason and avoid the data loss mentioned above. Definitely an approach we might end up using in the near future.

Using lambda to process the DDB stream events

This method uses the same approach as the above, in that it relies on events from your DDB table streams, but it removes the need to take care of the AWS compute resources used to process them. You still need to handle some infrastructure and write the Lambda function that performs the actual replication, but it definitely helps with the cost and simplicity requirements mentioned in the introduction.
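As a teaser, here is a heavily simplified sketch of such a function (not our actual implementation, and the table and region names are made up): it receives batches of stream records and applies them to the replica table.

import * as AWS from "aws-sdk";

// The replica table lives in a different region than the master.
const replica = new AWS.DynamoDB({ region: "us-east-1" });
const REPLICA_TABLE = "sandbox-users-replica"; // hypothetical

// Handler wired to the master table's DynamoDB stream.
export const handler = async (event: any): Promise<void> => {
    for (const record of event.Records) {
        if (record.eventName === "REMOVE") {
            await replica.deleteItem({
                TableName: REPLICA_TABLE,
                Key: record.dynamodb.Keys
            }).promise();
        } else {
            // INSERT and MODIFY both carry the new item image.
            await replica.putItem({
                TableName: REPLICA_TABLE,
                Item: record.dynamodb.NewImage
            }).promise();
        }
    }
};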

I will leave the details of this approach for the last post of this series, though, as it is quite a broad topic that I will cover there in detail.

In the upcoming posts I will discuss the overall solution we ended up going with, but before getting to that, in my next post I will discuss how to back up your DDB tables to S3.

Stay tuned!

Dependency resolution with Eclipse Aether

Most Java developers deal with dependency resolution only at build time – for example, they will declare dependencies in their Maven POM and Maven will download all the required dependencies during a build, caching them in a local repository so that it won’t have to download them again the next time. But what if you need to do this dependency resolution at run-time? How do you do that? It turns out to be rather straightforward, and it’s done using the same library that Maven uses internally (at least Maven 3.x).

Transitive Dependency Resolution at Run-time

That library is Aether (which was contributed to Eclipse by Sonatype). Doing basic transitive dependency resolution requires you to set up some Aether components – the pieces are readily available within three Aether Wiki pages:

  • Getting Aether (you don’t need all the dependencies listed there if you’re just doing basic resolution)
  • Setting Aether up (the code in the newRepositorySystem() method) – IMPORTANT: For a custom TransporterFactory like the one described below to work properly, you will have to add the TransporterFactory before the BasicRepositoryConnectorFactory, unlike in the Wiki
  • Creating a Repository System Session (the code in the newSession(...) method)

Now that you have the repositorySystem and a session, you can use the following code to get the full set of transitive dependencies for a particular artifact, given by its Maven coordinates:

private CollectRequest createCollectRequest(String groupId, String artifactId, String version, String extension) {
    Artifact targetArtifact = new DefaultArtifact(groupId, artifactId, extension, version);
    RemoteRepository centralRepository = new RemoteRepository.Builder("central", "default", "http://repo1.maven.org/maven2/").build();

    CollectRequest collectRequest = new CollectRequest();
    collectRequest.setRoot(new Dependency(targetArtifact, "compile"));
    collectRequest.addRepository(centralRepository);

    return collectRequest;
}

private List<Artifact> extractArtifactsFromResults(DependencyResult resolutionResult) {
    List<ArtifactResult> results = resolutionResult.getArtifactResults();
    ArrayList<Artifact> artifacts = new ArrayList<>(results.size());

    for (ArtifactResult result : results) {
        artifacts.add(result.getArtifact());
    }

    return artifacts;
}

public List<Artifact> resolve(String groupId, String artifactId, String version, String extension) throws DependencyResolutionException {
    CollectRequest collectRequest = createCollectRequest(groupId, artifactId, version, extension);

    DependencyResult resolutionResult = repositorySystem.resolveDependencies(session,
    new DependencyRequest(collectRequest, null));

    return extractArtifactsFromResults(resolutionResult);
}

That gets you the full set of transitive dependencies.

Customizing the Resolution

What we have done so far is get Aether to grab artifacts from Maven Central (you will have noticed how we configured the CollectRequest with centralRepository as the only repository to consult). Adding other remote repositories would be done in the same way as adding central. But let’s say we wanted a more direct say in how artifacts are retrieved. For example, maybe we want to get artifacts from an AWS S3 bucket, or perhaps we want to generate artifact content at run-time. In order to do that, we need to create a new Aether Transporter and hook it into the repository system we set up above.

Let’s consider a basic implementation:

public class CustomTransporter extends AbstractTransporter {
 private static final Exception NOT_FOUND_EXCEPTION = new Exception("Not Found");
 private static final byte[] pomContent =
  ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
   "<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n" + " xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n" + " xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n" +
   " <modelVersion>4.0.0</modelVersion>\n" +
   " <groupId>custom.group</groupId>\n" +
   " <artifactId>custom-artifact</artifactId>\n" +
   " <version>1.0</version>\n" +
   " <packaging>jar</packaging>\n" +
   "</project>\n").getBytes();

 public CustomTransporter() {}

 @Override
 public int classify(Throwable error) {
  if (error == NOT_FOUND_EXCEPTION) {
   return ERROR_NOT_FOUND;
  }

  return ERROR_OTHER;
 }

 @Override
 protected void implClose() {}

 @Override
 protected void implGet(GetTask task) throws Exception {
  if (task.getLocation().toString().contains("custom/group/custom-artifact/1.0") &&
   task.getLocation().getPath().endsWith(".pom")) {
   utilGet(task, new ByteArrayInputStream(pomContent), true, -1, false);
   return;
  }

  throw NOT_FOUND_EXCEPTION;
 }

 @Override
 protected void implPeek(PeekTask task) throws Exception {
  if (task.getLocation().toString().contains("custom/group/custom-artifact/1.0") &&
   task.getLocation().getPath().endsWith(".pom")) {
   return;
  }

  throw NOT_FOUND_EXCEPTION;
 }

 @Override
 protected void implPut(PutTask task) throws Exception {
  throw new UnsupportedOperationException();
 }
}

Your transporter is going to be invoked with get, peek and put tasks by the Aether code. The main ones to worry about here are the get & peek requests. The peek task is designed to check if an artifact exists, and the get task is used to retrieve artifact content. The peek task should return without an exception if the artifact identified by the task exists, or throw an exception if the artifact doesn’t exist. The get task should return the artifact content (here we show how that’s done using the utilGet method) if the artifact exists, and throw an exception otherwise.

Note how the classify method is used to actually determine if the exception you throw in the other methods indicates that an artifact is non-existent. If you classify an exception thrown by the other methods as an ERROR_NOT_FOUND, Aether will consider that artifact as non-existent, while ERROR_OTHER will be treated as some other error.

Now that we have a transporter, hooking it up requires us to first create a TransporterFactory corresponding to it:

public class CustomTransporterFactory implements TransporterFactory, Service {
 private float priority = 5;

 public void initService(ServiceLocator locator) {}

 public float getPriority() {
  return priority;
 }

 public CustomTransporterFactory setPriority(float priority) {
  this.priority = priority;
  return this;
 }

 public Transporter newInstance(RepositorySystemSession session, RemoteRepository repository)
 throws NoTransporterException {
  return new CustomTransporter();
 }
}

Nothing much to say about that – it’s pretty much boilerplate. If you need to pass some information about its context to your transporter, do it when you construct the transporter in the newInstance method here.

Finally, hook up the custom TransporterFactory the same way all the others are hooked up – that is, add it when we construct RepositorySystem:

locator.addService(TransporterFactory.class, CustomTransporterFactory.class);

IMPORTANT: Note again the TransporterFactory needs to be added before the BasicRepositoryConnectorFactory.

That’s it – from now on, our transporter will also get to be involved in resolving all artifacts!

Using TemplateRef to create a tooltip/popover directive in Angular 2

This post is about a Component created in the context of our application development. There is a demo here, and you can find the full source code here.

Lately, the need arose to create a tooltip directive. This brought up a lot of questions we hadn’t had to face before, such as how to create markup wrapping around rendered content – or rather, “what is Angular 2’s transclude?”

Turns out, using TemplateRef is very useful for this, but the road to understanding it wasn’t easy. After seeing it used in a similar fashion by Ben Nadel, I decided to take a stab at it.

TemplateRef comes into play when using <template> elements, or perhaps most commonly when using *-prefixed directives such as NgFor or NgIf. For *-prefixed directives (or directives on <template> elements), the TemplateRef can be injected straight into the constructor of the class. For other components, however, it can be queried via something like the ContentChild decorator.
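As a contrived sketch of the first case, a *-style structural directive gets the TemplateRef handed straight to its constructor and can stamp it out through its ViewContainerRef (the directive itself is made up):

import { Directive, Input, TemplateRef, ViewContainerRef } from "@angular/core";

// Used as <div *showIf="condition">...</div> – the * sugar wraps the host
// element in a <template>, so Angular can inject its TemplateRef here.
@Directive({ selector: "[showIf]" })
export class ShowIfDirective {
    constructor(
        private templateRef: TemplateRef<any>,
        private viewContainer: ViewContainerRef) {}

    @Input() set showIf(condition: boolean) {
        this.viewContainer.clear();
        if (condition) {
            // "Transclude" the captured template into the view container.
            this.viewContainer.createEmbeddedView(this.templateRef);
        }
    }
}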

Initially, I had thought to create two directives: a TooltipDirective to be placed on the parent element, plus a TooltipTemplate directive to be placed in a template, which would then inject itself into the parent. It proved too complex, though, and after finding what could be done with the ContentChild query, the implementation became much simpler.

The end result looks like this (simplified for clarity):

@Directive({
    selector: "[tooltip]"
})
export class TooltipDirective implements OnInit {
    @Input("tooltip") private tooltipOptions: any;
    @ContentChild("tooltipTemplate") private tooltipTemplate: TemplateRef<Object>;

    private tooltip: ComponentRef<Tooltip>;
    private tooltipId: string;

    constructor(
        private viewContainer: ViewContainerRef,
        public elementRef: ElementRef,
        private componentResolver: ComponentResolver,
        private position: PositionService) {
        this.tooltipId = _.uniqueId("tooltip");
    }

    ngOnInit() {
        // Attach relevant events
    }

    private showTooltip() {
        if (this.tooltipTemplate) {
            this.componentResolver.resolveComponent(Tooltip)
                .then(factory => {
                    this.tooltip = this.viewContainer.createComponent(factory);
                    this.tooltip.instance["content"] = this.tooltipTemplate;
                    this.tooltip.instance["parentEl"] = this.elementRef;
                    this.tooltip.instance["tooltipOptions"] = this.options;
                });
        }
    }

    private hideTooltip() {
        this.tooltip.destroy();
        this.tooltip = undefined;
    }

    private get options(): TooltipOptions {
        return _.defaults({}, this.tooltipOptions || {}, defaultTooltipOptions);
    }
}

@Component({
    selector: "tooltip",
    template: `<div class="inner">
<template [ngTemplateOutlet]="content"></template>
</div>
<div class="arrow"></div>`
})
class Tooltip implements AfterViewInit {
    @Input() private content: TemplateRef<Object>;
    @Input() private parentEl: ElementRef;
    @Input() private tooltipOptions: TooltipOptions;

    constructor(
        private positionService: PositionService,
        public elementRef: ElementRef) {}

    private position() {
        // top and left calculated and set
    }

    ngAfterViewInit(): void {
        this.position();
    }
}

The TooltipDirective requires a <template #tooltipTemplate> element, which gets rendered through a Tooltip component, created and injected with the TemplateRef containing our content – essentially, “transcluding” it. The Tooltip component’s role is only to wrap the content with some light markup, and to position itself when inserted into the page.

A lot of the actual positioning (not shown here, but in the source code) is done directly on the rendered elements, though – I faced some issues when using the host properties object that I believe were reintroduced in the latest RC.

All in all, it was a great learning experience, and Angular 2’s <template> surely beats AngularJS’s transclude. Slowly but surely Angular 2 gets more and more demystified to me, but it is hard work getting there.

A breadcrumb component for @ngrx/router

@ngrx/router is, at the moment, one of the best choices for a router in Angular 2, and Vladivostok, the third iteration of Angular 2’s official router, will take very heavy inspiration from it. We are currently using it to handle our routes, and the need arose to create a breadcrumb component for certain parts of the application.

You can see a working example here.

@ngrx/router’s API is very light and sparse, but not necessarily incomplete – a lot of the actual implementation is deliberately left in the hands of the user. This is powered by the use of Observables for Route, RouteParams, QueryParams, etc. A great example of this is the fact that, instead of a set of interfaces like CanReuse/CanActivate/CanDeactivate, the router only ever activates components when they change after a route change – any changes in parameters are handled manually by default. More work, but also a clearer picture of what one can and cannot do with the tool, and a lot more control.

The first thing we found was that routes have a property – options – that serves the express purpose of holding custom data. A simple usage is this:

export const routes: Route[] = [{
    path: '/',
    component: Home,
    options: {
        breadcrumb: 'Home Sweet Home'
    },
    children: [{
            path: '/component-a',
            component: ComponentA,
            options: {
                breadcrumb: 'A Component'
            },
            children: [{
                    path: '/one',
                    component: One,
                    options: {
                        breadcrumb: 'The One'
                    }
                },
                [...]
            ]
        },
        [...]
    ]
}, ];

And the breadcrumb component looks like this:

@Component({
    selector: 'breadcrumbs',
    directives: [NgFor],
    template: `<span>
<span *ngFor="let breadcrumb of breadcrumbs; let isLast = last">
<a [linkTo]="breadcrumb.path">{{breadcrumb.name}}</a>
<span *ngIf="!isLast"> &gt; </span>
</span>
</span>`
})
export class Breadcrumbs {
    private breadcrumbs: any[];

    constructor(private routerInstruction: RouterInstruction) {
        this.routerInstruction
            .subscribe(
                match => {
                    this.breadcrumbs = this.getBreadcrumbs(match);
                }
            );
    }

    private getBreadcrumbs(match: Match) {
        let breadcrumbs = [];
        let path = '';

        for (let i = 0; i < match.routes.length; i++) {
            path = path[path.length - 1] === '/' ? path.substr(0, path.length - 1) : path;
            path += match.routes[i].path ? this.makePath(match.routes[i], match) : '';
            if ((match.routes[i].options || {}).breadcrumb) {
                breadcrumbs.push({
                    name: match.routes[i].options.breadcrumb,
                    path: path
                });
            }
        }

        return breadcrumbs;
    }

    private makePath(route: Route, match: Match) {
        return pathToRegexp.compile(route.path)(match.routeParams);
    }
}

RouterInstruction is an Observable that gives us all the information we need. By watching it, we get a Match object containing the array of matched routes. All that was left was to build the URLs, as @ngrx/router uses only URL strings (as opposed to the array notation you’ll find in @angular/router, for instance) – but since @ngrx/router uses path-to-regexp to parse URLs, it was only a matter of using it to compile the parsed data back into URLs.
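In isolation, that last step looks roughly like this (a tiny sketch assuming path-to-regexp’s compile API and a made-up route):

import pathToRegexp = require('path-to-regexp');

// Turn a route pattern plus the matched params back into a concrete URL.
const toPath = pathToRegexp.compile('/component-a/:id');
console.log(toPath({ id: '42' })); // "/component-a/42"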

All in all, a very simple solution. Omitted are the use of translations and of asynchronously loaded data (like a profile name) in the breadcrumb – the former is trivial and fairly unrelated, and for the latter we are using stores, which is perhaps a good topic for another post.

3 Docker tips & tricks

Over the past few months, we’ve done a lot of development with Docker. There are a few things that we end up using over and over. I wanted to share three of these with other developers working with Docker:

  1. Remove all containers – Inevitably, during development you’re going to pile up a bunch of stale containers that are just lying around – or maybe you have a bunch of running containers that you don’t use. We end up needing to wipe out all the containers to start fresh all the time. Here’s how we do it:

    docker ps -a -q | awk '{print $1}' | xargs --no-run-if-empty docker rm -f


    It’s pretty self-explanatory – it lists all the containers, and then removes each one by its ID. There are several incarnations of this, but this one has the advantage that it can also be used on Windows if you install UNIX command line tools (you could do that by grabbing MinGW, for example). Alternatively, on Windows, you can use: FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm -f %i
  2. Mount the Docker Unix socket as a volume – OK, the way we use Docker is a bit more advanced than the standard use cases, but it’s crazy how often we end up using this one. That’s because we always end up having to create Docker containers from within a Docker container. And the best way to do this is to mount the Docker daemon’s Unix socket on the host machine as a volume at the same location within the container. That means you add the following when performing a docker run: -v /var/run/docker.sock:/var/run/docker.sock. Now, within the container, if you have a Docker client (whether that’s the command line one, or a Java one for example) connect to that Unix socket, it actually talks to the Docker daemon on the host. That means that if you create a container from within the container with the volume, the new container is created using the daemon running on the host (meaning it will be a sibling of the container with the volume)! Very useful – see the sketch after this list.
  3. Consider Terraform as an alternative to compose – Terraform is for setting up infrastructure really easily, and it’s great for that. For us, infrastructure means AWS when running in the cloud, and Docker when running locally. We have several containers that we have to run for our application – during development, we run all the containers locally, and in the cloud, we run the containers across various EC2 instances, each instance getting one or more containers. This is perfect for Terraform. We can use the Docker provider alone to configure resources for our local setup, and we can use it together with the AWS provider for our cloud setup. Note again that Terraform is for infrastructure setup, so you are doing things at a very high level – you may find that you need to do some prep using other tools to be able to work with Terraform. For example, you can’t use Dockerfiles – you will have to build your custom images prior to using them with Terraform.
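To make tip 2 a bit more concrete, here is a small sketch – using the third-party dockerode client purely as an example, any Docker client that can talk to a Unix socket will do – of code running inside a container that has the socket mounted, creating a sibling container through the host daemon:

import Docker = require("dockerode");

// Inside the container, /var/run/docker.sock is the *host's* daemon socket,
// mounted via: docker run -v /var/run/docker.sock:/var/run/docker.sock ...
const docker = new Docker({ socketPath: "/var/run/docker.sock" });

// The container created here ends up as a sibling of the current container,
// because it is the host daemon doing the work.
docker.createContainer({ Image: "alpine", Cmd: ["echo", "hello from a sibling"] })
    .then(container => container.start())
    .catch(err => console.error(err));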

 

Using Class inheritance to hook to Angular 2 component lifecycle

I was thinking of a way to use class inheritance to hook into certain Component lifecycle hooks, without needing to worry about them in the extending class (no knowledge needed, no super() calls to forget about). This does mean “straying off the path” a little bit, and there may be better ways to do this.

Observables in Angular 2 are a powerful thing. Unlike the Angular 1 hero, Promises, they represent streams of asynchronous data, not just single events. This means that a subscription to an Observable doesn’t necessarily have an end.

Using ngrx/router, I found myself using them a lot, but precisely because they are streams, they need careful cleanup, or we risk leaving a subscription running after a Component has been destroyed.

A typical way we can do this is using ngOnDestroy:

export class Component implements OnDestroy {
    private subscription: Subscription;
    private count: number;

    constructor(private pingService: PingService) {
        this.subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );
    }

    ngOnDestroy() {
        this.subscription.unsubscribe();
    }
}

Simple enough when on its own, but something that is sure to add a lot of code repetition and complexity to a complex class with more than one subscription. We can automate this, and the best way I found was to extend a base class:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    ngOnDestroy() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }
}

This makes the previous class simpler:

export class Component extends SafeUnsubscriber {
    private count: number;

    constructor(private pingService: PingService) {
        let subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );

        this.safeSubscription(subscription);
    }
}

Which is great, but what if the extending class needs its own ngOnDestroy? Conventional inheritance allows us to call super.ngOnDestroy(), but in this particular case I don’t want to leave that as a possibility – I want to always unsubscribe on destroy, regardless of whether ngOnDestroy was overridden.

So in this case, a little hack is acceptable, in my opinion – we can make sure the unsubscriber code always runs on ngOnDestroy, and both prevent mistakes by omission and keep the extending code cleaner:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    constructor() {
        let f = this.ngOnDestroy;

        this.ngOnDestroy = () => {
            f();
            this.unsubscribeAll();
        };
    }

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    private unsubscribeAll() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }

    ngOnDestroy() {
        // no-op
    }
}

Now, even if ngOnDestroy gets overridden, the private method unsubscribeAll still runs, as the constructor (which always runs, as TypeScript requires it) makes sure this happens. ngOnDestroy on the base class, on the other hand, only exists as a no-op function, to ensure the code runs regardless of whether or not one was defined in the extending component.

How does this work, then? JavaScript (and TypeScript, by extension) uses prototypal inheritance, which means that super is the prototype – this is the reason why TypeScript makes it mandatory to call super() in the extending class’s constructor before any references to this, so that class inheritance expectations are guaranteed. By assigning this.ngOnDestroy in the base class constructor, we are adding a property to the instance, shadowing the version on the prototype – and that property happens to be a call to the prototype’s version followed by our own cleanup.
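A stripped-down illustration of that mechanism, outside of Angular entirely (the class and method names are made up):

class Base {
    constructor() {
        const proto = this.doWork;        // resolves to the subclass override
        this.doWork = () => {             // instance property shadows it
            proto.call(this);
            console.log("cleanup that must always run");
        };
    }
    doWork() { /* no-op, only here so there is always something to wrap */ }
}

class Child extends Base {
    doWork() { console.log("child work"); }
}

new Child().doWork();
// "child work"
// "cleanup that must always run"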

Pretty dangerous stuff, but pretty useful as well.

SVG’s FuncIRI, angular2, and the base tag

(Image: a broken mask link, visualized on a cog icon)

I tried to make this title as descriptive and, let’s face it, clickbait-y as possible, because this was hard enough for me to discover. I somehow had never had to deal with this issue until a few days ago – SVGs do not play well with single page apps when location-based routing is mixed with a set <base> tag.

Specifically, what doesn’t work is anything that uses FuncIRI, or CSS-style url() references. That means <use> elements and the clip-path and filter attributes, among others. When trying to fix this I came up with a roundabout solution – before discovering that I didn’t need it – very similar to this solution for AngularJS, most likely made before this was fixed there (in around version 1.3).

In my case, I didn’t need the <base> tag at all – it was basically set as <base href="/">, most likely out of habit and from all the examples and starter apps one uses to get their hands dirty with Angular. All I needed to know about was APP_BASE_HREF. If you remove the <base> tag, Angular rightfully complains that it needs a base for its LocationStrategy to work, but APP_BASE_HREF lets us set it from the bootstrap step:

import {
    APP_BASE_HREF
} from '@angular/common';

bootstrap(App, [
    // ...other providers
    {
        provide: APP_BASE_HREF,
        useValue: '/'
    }
]);

This works even for cases where the base isn’t ‘/’, so it should be pretty much universal. Of course, if there are other reasons why you might need the <base> tag to stay in the page, the only solution is to update the relevant attributes so that their URLs match the current one. I feel this should be avoided if at all possible, seeing as it isn’t the cleanest or most efficient method – not to mention that in our case, it would mean messing directly with the DOM on top of what an SVG animation library is already doing.

Nevertheless, here is an example of how that might look:

import {
    Directive,
    ElementRef,
    OnDestroy
} from '@angular/core';
import {
    Location
} from '@angular/common';

import $ = require('jquery');

@Directive({
    selector: '[update-clip-path]'
})
export class UpdateClipPath implements OnDestroy {
    private sub: any;

    constructor(private location: Location, private elementRef: ElementRef) {
        this.sub = this.location.subscribe(
            next => this.updateClipPath()
        );

        this.updateClipPath();
    }

    private updateClipPath() {
        if (this.elementRef.nativeElement) {
            $(this.elementRef.nativeElement)
                .find('[clip-path]')
                .each((index, el) => {
                    let clipPath = el.getAttribute('clip-path');
                    el.setAttribute(
                        'clip-path',
                        'url(' + this.location.path() + clipPath.substr(clipPath.indexOf('#')));
                });
        }
    }

    ngOnDestroy() {
        if (this.sub && this.sub.unsubscribe) {
            this.sub.unsubscribe();
        }
    }
}

Learning Javascript in a post-Reactive landscape

I recently re-watched a talk by Thomas Figg – Programming is Terrible. In the Q&A portion of the talk there is a (perhaps surprisingly) positive tone in one of his answers – that learning to code is, contrary to what some might choose to believe, more accessible than ever. He then mentions JavaScript, as it is as simple as it is ubiquitous, and arguably the most easily shareable code in the world – everything from a TV to a phone will run it.

I completely agree with this statement, as JavaScript is at its core an incredibly simple language, in both theory and practice – both easy to reason about, and to get something running. But increasingly complex abstractions have become an integral part of any application development in JavaScript, making the entry barrier for a frontend developer higher and higher.

On Promises

Having worked as an AngularJS developer since its 0.x releases, I have more than gotten used to its $q library, modelled very closely after the Q library. Promises made sense to me, and any seasoned developer will most likely agree that they made asynchronous programming much easier to deal with.

Yet it wasn’t until joining a full stack team and getting tasked with tutoring my backend-heavy colleagues and QAs on Promises that I noticed just how big of a stretch they can be if you’re facing them for the first time. They are not trivial, especially when you have to stray from the typical examples and delve into more complex usages.

On Reactive Programming

Reactive programming takes the concept of asynchronous programming further. Compared to Promises, it is a step down the abstraction scale, making it easier to scale and to handle complex situations and concurrency. Unfortunately, it is also much more complex conceptually – thus harder to get into and harder to reason about.

Angular 2 fully supports and depends on RxJS, and although it is an “opt-in” kind of thing (just call .toPromise() on any Observable and it magically becomes just that), it is ubiquitous in the Angular 2 community. Go to any chatroom or forum and you’ll see that you are expected to be comfortable with it.
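For example, with the patch-style imports of the RxJS 5 builds Angular 2 ships with, opting back into the Promise world looks roughly like this:

import { Observable } from "rxjs/Observable";
import "rxjs/add/observable/of";
import "rxjs/add/operator/toPromise";

// An Observable, reduced back to the single-value world of Promises.
Observable.of(42)
    .toPromise()
    .then(value => console.log(value)); // 42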

A world of abstractions

AngularJS had a big problem – it looked easy, and it felt easy, until you tried doing anything complex with it. Angular 2 doesn’t make that mistake, showing its hand from the get-go. What this might mean for the community I don’t know – hopefully better code?

With Promises becoming part of the ES6 standard, we are moving into a future where they are commonplace – jQuery 3 now complies with Promises/A+, for instance. The barrier to entry for developers keeps being pushed higher at all levels.

As a teacher, you learn to avoid abstractions when teaching programming for the first time. An object-oriented language is not a good first language, for obvious reasons. I wonder if at some point, JavaScript will stop being one as well?