Dependency resolution with Eclipse Aether

Most Java developers deal with dependency resolution only at build time – for example, they declare dependencies in their Maven POM and Maven downloads all the required dependencies during a build, caching them in a local repository so that it won’t have to download them again the next time. But what if you need to do this dependency resolution at run-time? How do you do that? It turns out to be rather straightforward, and it’s done using the same library that Maven uses internally (at least Maven 3.x).

Transitive Dependency Resolution at Run-time

That library is Aether (which was contributed to Eclipse by Sonatype). Doing basic transitive dependency resolution requires you to set up a few Aether components – the pieces are readily available on three Aether Wiki pages:

  • Getting Aether (you don’t need all the dependencies listed there if you’re just doing basic resolution)
  • Setting Aether up (the code in the newRepositorySystem() method) – IMPORTANT: for the custom TransporterFactory described in the Wiki page to work properly, you will have to add the TransporterFactory instances before the BasicRepositoryConnectorFactory, unlike what the Wiki shows (a sketch of this setup follows after this list)
  • Creating a Repository System Session (the code in the newSession(...) method)
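
For reference, here is a minimal sketch of those two methods, assuming the standard org.eclipse.aether artifacts (the basic connector plus the file and HTTP transporters) – the full code is on the Wiki pages above, but note the ordering of the addService calls:

import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.connector.basic.BasicRepositoryConnectorFactory;
import org.eclipse.aether.impl.DefaultServiceLocator;
import org.eclipse.aether.repository.LocalRepository;
import org.eclipse.aether.spi.connector.RepositoryConnectorFactory;
import org.eclipse.aether.spi.connector.transport.TransporterFactory;
import org.eclipse.aether.transport.file.FileTransporterFactory;
import org.eclipse.aether.transport.http.HttpTransporterFactory;

private RepositorySystem newRepositorySystem() {
    DefaultServiceLocator locator = MavenRepositorySystemUtils.newServiceLocator();

    // Register the TransporterFactory entries BEFORE the BasicRepositoryConnectorFactory --
    // in our experience a custom TransporterFactory is not picked up otherwise
    locator.addService(TransporterFactory.class, FileTransporterFactory.class);
    locator.addService(TransporterFactory.class, HttpTransporterFactory.class);
    locator.addService(RepositoryConnectorFactory.class, BasicRepositoryConnectorFactory.class);

    return locator.getService(RepositorySystem.class);
}

private DefaultRepositorySystemSession newSession(RepositorySystem repositorySystem) {
    DefaultRepositorySystemSession session = MavenRepositorySystemUtils.newSession();

    // The local repository acts as the download cache ("local-repo" is just an example path)
    LocalRepository localRepository = new LocalRepository("local-repo");
    session.setLocalRepositoryManager(repositorySystem.newLocalRepositoryManager(session, localRepository));

    return session;
}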

Now that you have the repositorySystem and a session, you can use the following code to get the full set of transitive dependencies for a particular artifact, given by its Maven coordinates:

private CollectRequest createCollectRequest(String groupId, String artifactId, String version, String extension) {
    Artifact targetArtifact = new DefaultArtifact(groupId, artifactId, extension, version);
    RemoteRepository centralRepository = new RemoteRepository.Builder("central", "default", "http://repo1.maven.org/maven2/").build();

    CollectRequest collectRequest = new CollectRequest();
    collectRequest.setRoot(new Dependency(targetArtifact, "compile"));
    collectRequest.addRepository(centralRepository);

    return collectRequest;
}

private List<Artifact> extractArtifactsFromResults(DependencyResult resolutionResult) {
    List<ArtifactResult> results = resolutionResult.getArtifactResults();
    ArrayList<Artifact> artifacts = new ArrayList<>(results.size());

    for (ArtifactResult result : results) {
        artifacts.add(result.getArtifact());
    }

    return artifacts;
}

public List<Artifact> resolve(String groupId, String artifactId, String version, String extension) throws DependencyResolutionException {
    CollectRequest collectRequest = createCollectRequest(groupId, artifactId, version, extension);

    DependencyResult resolutionResult = repositorySystem.resolveDependencies(session,
            new DependencyRequest(collectRequest, null));

    return extractArtifactsFromResults(resolutionResult);
}

That gets you the full set of transitive dependencies.

Customizing the Resolution

What we’ve done so far is get Aether to grab artifacts from Maven Central (you will have noticed that we configured the CollectRequest with centralRepository as the only repository to consult; adding other remote repositories works the same way as adding central). But let’s say we want a more direct say in how artifacts are retrieved. For example, maybe we want to get artifacts from an AWS S3 bucket, or perhaps we want to generate artifact content at run-time. To do that, we need to create a new Aether Transporter and hook it into the repository system we set up above.

Let’s consider a basic implementation:

public class CustomTransporter extends AbstractTransporter {
 private static final Exception NOT_FOUND_EXCEPTION = new Exception("Not Found");
 private static final byte[] pomContent =
  ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
   "<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n" +
   " xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n" +
   " xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n" +
   " <modelVersion>4.0.0</modelVersion>\n" +
   " <groupId>custom.group</groupId>\n" +
   " <artifactId>custom-artifact</artifactId>\n" +
   " <version>1.0</version>\n" +
   " <packaging>jar</packaging>\n" +
   "</project>\n").getBytes();

 public CustomTransporter() {}

 @Override
 public int classify(Throwable error) {
  if (error == NOT_FOUND_EXCEPTION) {
   return ERROR_NOT_FOUND;
  }

  return ERROR_OTHER;
 }

 @Override
 protected void implClose() {}

 @Override
 protected void implGet(GetTask task) throws Exception {
  if (task.getLocation().toString().contains("custom/group/custom-artifact/1.0") &&
   task.getLocation().getPath().endsWith(".pom")) {
   utilGet(task, new ByteArrayInputStream(pomContent), true, -1, false);
   return;
  }

  throw NOT_FOUND_EXCEPTION;
 }

 @Override
 protected void implPeek(PeekTask task) throws Exception {
  if (task.getLocation().toString().contains("custom/group/custom-artifact/1.0") &&
   task.getLocation().getPath().endsWith(".pom")) {
   return;
  }

  throw NOT_FOUND_EXCEPTION;
 }

 @Override
 protected void implPut(PutTask task) throws Exception {
  throw new UnsupportedOperationException();
 }
}

Your transporter is going to be invoked with get, peek and put tasks by the Aether code. The main ones to worry about here are the get and peek requests. The peek task is designed to check whether an artifact exists, and the get task is used to retrieve artifact content. The peek task should return without an exception if the artifact identified by the task exists, and throw an exception if it doesn’t. The get task should return the artifact content (here we show how that’s done using the utilGet method) if the artifact exists, and throw an exception otherwise.

Note that the classify method is what determines whether an exception thrown by the other methods indicates a non-existent artifact: if you classify an exception as ERROR_NOT_FOUND, Aether will consider the artifact non-existent, while ERROR_OTHER is treated as a general failure.

Now that we have a transporter, hooking it up requires us to first create a TransporterFactory corresponding to it:

public class CustomTransporterFactory implements TransporterFactory, Service {
 private float priority = 5;

 public void initService(ServiceLocator locator) {}

 public float getPriority() {
  return priority;
 }

 public CustomTransporterFactory setPriority(float priority) {
  this.priority = priority;
  return this;
 }

 public Transporter newInstance(RepositorySystemSession session, RemoteRepository repository)
 throws NoTransporterException {
  return new CustomTransporter();
 }
}

Nothing to say about that – it’s pretty boilerplate. If you need to pass in some information to your transporter about its context, do it when you construct the transporter in the newInstance method here.

Finally, hook up the custom TransporterFactory the same way all the others are hooked up – that is, add it when we construct RepositorySystem:

locator.addService(TransporterFactory.class, CustomTransporterFactory.class);

IMPORTANT: Note again the TransporterFactory needs to be added before the BasicRepositoryConnectorFactory.

That’s it – our transporter now gets to be involved in resolving every artifact!

Using TemplateRef to create a tooltip/popover directive in Angular 2

This post is about a Component created in the context of our application development. There is a demo here, and you can find the full source code here.

Lately, the need arose to create a tooltip directive. This brought up a lot of questions we hadn’t had to face before, such as how to create markup wrapping around rendered content, or rather “what is Angular 2’s transclude?”

Turns out, using TemplateRef is very useful for this, but the road to understanding it wasn’t easy. After seeing it used in a similar fashion by Ben Nadel, I decided to take a stab at it.

TemplateRef comes into play when using <template> elements, or perhaps most commonly when using *-prefixed directives such as NgFor or NgIf. For *-prefixed directives (or directives on <template> elements), the TemplateRef can be injected straight into the constructor of the class. For other components, however, it can be queried via something like the @ContentChild decorator.
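
As a rough illustration (the names myIf and MyHostComponent here are hypothetical, not from the post), a structural directive receives the TemplateRef in its constructor, while a regular component queries for a template projected into its content:

import { Component, ContentChild, Directive, Input, TemplateRef, ViewContainerRef } from "@angular/core";

// Structural directive: used as *myIf="condition", so Angular wraps the host element
// in a <template> and hands us its TemplateRef via constructor injection.
@Directive({ selector: "[myIf]" })
export class MyIfDirective {
    constructor(private templateRef: TemplateRef<Object>,
                private viewContainer: ViewContainerRef) {}

    @Input() set myIf(condition: boolean) {
        this.viewContainer.clear();
        if (condition) {
            this.viewContainer.createEmbeddedView(this.templateRef);
        }
    }
}

// Regular component: grabs a <template #someTemplate> placed inside its content.
@Component({ selector: "my-host", template: `<ng-content></ng-content>` })
export class MyHostComponent {
    @ContentChild("someTemplate") someTemplate: TemplateRef<Object>;
}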

Initially, I had thought to create two directives: a TooltipDirective to be placed on the parent element, plus a TooltipTemplate directive to be placed in a template, that would then inject itself into the parent. It proved too complex, though, and after finding what could be done with the ContentChild query the implementation became much simpler.

The end result looks like this (simplified for clarity):

@Directive({
    selector: "[tooltip]"
})
export class TooltipDirective implements OnInit {
    @Input("tooltip") private tooltipOptions: any;
    @ContentChild("tooltipTemplate") private tooltipTemplate: TemplateRef < Object > ;

    private tooltip: ComponentRef < Tooltip > ;
    private tooltipId: string;

    constructor(
        private viewContainer: ViewContainerRef,
        public elementRef: ElementRef,
        private componentResolver: ComponentResolver,
        private position: PositionService) {
        this.tooltipId = _.uniqueId("tooltip");
    }

    ngOnInit() {
        // Attach relevant events
    }

    private showTooltip() {
        if (this.tooltipTemplate) {
            this.componentResolver.resolveComponent(Tooltip)
                .then(factory => {
                    this.tooltip = this.viewContainer.createComponent(factory);
                    this.tooltip.instance["content"] = this.tooltipTemplate;
                    this.tooltip.instance["parentEl"] = this.elementRef;
                    this.tooltip.instance["tooltipOptions"] = this.options;
                });
        }
    }

    private hideTooltip() {
        this.tooltip.destroy();
        this.tooltip = undefined;
    }

    private get options(): TooltipOptions {
        return _.defaults({}, this.tooltipOptions || {}, defaultTooltipOptions);
    }
}

@Component({
    selector: "tooltip",
    template: `<div class="inner">
<template [ngTemplateOutlet]="content"></template>
</div>
<div class="arrow"></div>`
})
class Tooltip implements AfterViewInit {
    @Input() private content: TemplateRef<Object>;
    @Input() private parentEl: ElementRef;
    @Input() private tooltipOptions: TooltipOptions;

    constructor(
        private positionService: PositionService,
        public elementRef: ElementRef) {}

    private position() {
        // top and left calculated and set
    }

    ngAfterViewInit(): void {
        this.position();
    }
}

The TooltipDirective requires a <template #tooltipTemplate> element, which gets rendered through a Tooltip component that is created and injected with the TemplateRef containing our content – essentially “transcluding” it. The Tooltip component’s role is only to wrap the content in some light markup and to position itself when inserted into the page.
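
For completeness, usage in a host template would then look something like this (the options object and the import path are just examples):

import { Component } from "@angular/core";
import { TooltipDirective } from "./tooltip.directive"; // hypothetical path

@Component({
    selector: "tooltip-demo",
    directives: [TooltipDirective],
    template: `
        <button [tooltip]="{ position: 'top' }">
            Hover me
            <template #tooltipTemplate>
                <strong>Rich</strong> tooltip content goes here
            </template>
        </button>`
})
export class TooltipDemo {}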

A lot of the actual positioning (not shown here, but in the source code) is done directly on the rendered elements, though – I faced some issues when using the host properties object, which I believe were reintroduced in the latest RC.

All in all, it was a great learning experience, and Angular 2’s <template> surely beats Angular.js’ transclude. Slowly but surely Angular 2 gets more and more demystified to me, but it is hard work getting there.

A breadcrumb component for @ngrx/router

@ngrx/router is, at the moment, one of the best choices for a router in Angular 2, and Vladivostok, the third iteration of Angular 2’s official router, will take very heavy inspiration from it. We are currently using it to handle our routes, and the need arose to create a breadcrumb component for certain parts of the application.

You can see a working example here.

@ngrx/router‘s API is very light and sparse, but not necessarily incomplete – a lot of the actual implementation is assumed to be left in the hands of the user. This is powered by the use of Observables for Route, RouteParams, QueryParams, etc. A great example of this assumption is the fact that instead of a set of interfaces like CanReuse/CanActivate/CanDeactivate, the router only ever activates components when they change after a route change – any changes in parameters are handled manually by default. More work, but also a clearer image of what one can and cannot do with the tool, and a lot more control.

The first thing we found was that routes have a property – options – that serves the express purpose of holding custom data. A simple usage is this:

export const routes: Route[] = [{
    path: '/',
    component: Home,
    options: {
        breadcrumb: 'Home Sweet Home'
    },
    children: [{
            path: '/component-a',
            component: ComponentA,
            options: {
                breadcrumb: 'A Component'
            },
            children: [{
                    path: '/one',
                    component: One,
                    options: {
                        breadcrumb: 'The One'
                    }
                },
                [...]
            ]
        },
        [...]
    ]
}, ];

And the breadcrumb component is as such:

@Component({
    selector: 'breadcrumbs',
    directives: [NgFor],
    template: `<span>
<span *ngFor="let breadcrumb of breadcrumbs; let isLast = last">
<a [linkTo]="breadcrumb.path">{{breadcrumb.name}}</a>
<span *ngIf="!isLast"> &gt; </span>
</span>
</span>`
})
export class Breadcrumbs {
    private breadcrumbs: any[];

    constructor(private routerInstruction: RouterInstruction) {
        this.routerInstruction
            .subscribe(
                match => {
                    this.breadcrumbs = this.getBreadcrumbs(match);
                }
            );
    }

    private getBreadcrumbs(match: Match) {
        let breadcrumbs = [];
        let path = '';

        for (let i = 0; i < match.routes.length; i++) {
            path = path[path.length - 1] === '/' ? path.substr(0, path.length - 1) : path;
            path += match.routes[i].path ? this.makePath(match.routes[i], match) : '';
            if ((match.routes[i].options || {}).breadcrumb) {
                breadcrumbs.push({
                    name: match.routes[i].options.breadcrumb,
                    path: path
                });
            }
        }

        return breadcrumbs;
    }

    private makePath(route: Route, match: Match) {
        return pathToRegexp.compile(route.path)(match.routeParams);
    }
}

RouterInstruction is an Observable that gives us all the information we need: subscribing to it yields a Match object containing the array of matched routes. All that was left was to build the URLs, as @ngrx/router uses plain URL strings (as opposed to the array notation you’ll find in @angular/router, for instance) – but since @ngrx/router uses path-to-regexp to parse URLs, it was only a matter of using the same library to compile the parsed data back into URLs.
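
As a quick illustration of that last step, compiling a route pattern back into a concrete URL with path-to-regexp looks roughly like this:

import * as pathToRegexp from 'path-to-regexp';

// Turn a route pattern plus the matched parameters back into a URL segment
const toPath = pathToRegexp.compile('/user/:id');
const url = toPath({ id: '42' }); // '/user/42'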

All in all, a very simple solution. Omitted are the use of translations and of asynchronously loaded data (like a profile name) in the breadcrumbs – the former is trivial and unrelated, and for the latter we use stores, which is perhaps a good topic for another post.

3 Docker tips & tricks

Over the past few months, we’ve done a lot of development with Docker. There are a few things that we end up using over and over. I wanted to share three of these with other developers working with Docker:

  1. Remove all containers – Inevitably, during development you’re going to pile up a bunch of stale containers that are just lying around – or maybe you have a bunch of running containers that you don’t use. We end up needing to wipe out all the containers to start fresh all the time. Here’s how we do it:

    docker ps -a -q | awk '{print $1}' | xargs --no-run-if-empty docker rm -f


    It’s pretty self-explanatory – it lists all the containers, and then removes each one by its ID. There are several incarnations of this, but this one has the advantage that it can be used on Windows as well if you install UNIX command line tools (you could do that by grabbing MinGW, for example). Alternatively, on Windows, you can use:

    FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm -f %i
  2. Mount the Docker Unix socket as a volume – OK, the way we use Docker is a bit more advanced than the standard use cases, but it’s crazy how often we end up using this one. That’s because we always end up having to create Docker containers from within a Docker container. And the best way to do this is to mount the Docker daemon’s Unix socket on the host machine as a volume at the same location within the container. That is, you add the following when performing a docker run: -v /var/run/docker.sock:/var/run/docker.sock. Now, within the container, if you have a Docker client (whether that’s the command line one, or a Java one for example) connect to that Unix socket, it actually talks to the Docker daemon on the host. That means if you create a container from within the container with the volume, the new container is created using the daemon running on the host (meaning it will be a sibling of the container with the volume)! Very useful! (See the example right after this list.)
  3. Consider Terraform as an alternative to Compose – Terraform is for setting up infrastructure really easily, and it’s great for that. For us, infrastructure means AWS when running in the cloud, and Docker when running locally. We have several containers that we have to run for our application – during development, we run all the containers locally, and in the cloud, we run the containers across various EC2 instances, each instance getting one or more containers. This is perfect for Terraform. We can use the Docker provider alone to configure resources for our local setup, and we can use it together with the AWS provider for our cloud setup. Note again that Terraform is for infrastructure setup, so you are working at a very high level – you may find that you need to do some prep using other tools to be able to work with Terraform. For example, you can’t use Dockerfiles – you will have to build your custom images prior to using them with Terraform.
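
Here is tip 2 in practice, as promised above (the image and commands are just examples):

# Start a container that can talk to the host's Docker daemon
docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker:latest sh

# Inside that container, this creates a *sibling* container using the host's daemon
docker run --rm hello-world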


Using Class inheritance to hook to Angular 2 component lifecycle

I was thinking of a way to use class inheritance to hook into certain Component lifecycle hooks, without needing to worry about them in the extending class (no knowledge needed, no super() calls to forget about). This does mean “straying off the path” a little bit, and there may be better ways to do this.

Observables in Angular 2 are a powerful thing. Unlike Promises, the hero of Angular 1, they represent streams of asynchronous data, not just single events. This means that a subscription to an observable doesn’t necessarily have an end.

Using ngrx/router, I found myself using them a lot, but precisely because they are streams, they need careful cleanup, or we risk leaving a subscription running after a Component has been destroyed.

A typical way we can do this is using ngOnDestroy:

export class Component implements OnDestroy {
    private subscription: Subscription;
    private count: number;

    constructor(private pingService: PingService) {
        this.subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );
    }

    ngOnDestroy() {
        this.subscription.unsubscribe();
    }
}

Simple enough on its own, but something that is sure to add a lot of repetition and complexity to a class with more than one subscription. We can automate this, and the best way I found was to extend a base class:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    ngOnDestroy() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }
}

This makes the previous class simpler:

export class Component extends SafeUnsubscriber {
    private count: number;

    constructor(private pingService: PingService) {
        super();

        let subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );

        this.safeSubscription(subscription);
    }
}

Which is great, but what if the extending class needs its own ngOnDestroy? Conventional inheritance would let it call super.ngOnDestroy(), but in this particular case I don’t want to leave that to chance – I want to always unsubscribe on destroy, regardless of whether or not ngOnDestroy was overridden.

So in this case, a little hack is acceptable, in my opinion – we can make sure the unsubscriber code always runs on ngOnDestroy, which both prevents mistakes by omission and makes the code cleaner for the user:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    constructor() {
        let f = this.ngOnDestroy;

        this.ngOnDestroy = () => {
            f.apply(this);
            this.unsubscribeAll();
        };
    }

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    private unsubscribeAll() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }

    ngOnDestroy() {
        // no-op
    }
}

Now, even if ngOnDestroy gets overridden, the private method unsubscribeAll still runs, as the constructor (which always runs, since TypeScript requires it) makes sure this happens. ngOnDestroy, on the other hand, only exists as a no-op, to ensure the code runs regardless of whether or not one was defined in the extending component.

How does this work, then? JavaScript (and TypeScript, by extension) uses prototypal inheritance, which means that super is the prototype – this is the reason why TypeScript makes it mandatory to call super() in the extending class constructor before any references to this, so that class inheritance expectations are guaranteed. By assigning this.ngOnDestroy in the base class constructor, we are adding a property directly to the instance, shadowing the version on the prototype – and that new property happens to call the prototype’s version followed by our own cleanup.
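
To make that concrete, here is a rough sketch (reusing the PingService from the earlier snippets) of an extending component that defines its own ngOnDestroy and still gets the automatic cleanup:

export class PingComponent extends SafeUnsubscriber {
    private count: number;

    constructor(private pingService: PingService) {
        super(); // required by TypeScript, and installs the wrapping ngOnDestroy

        this.safeSubscription(
            this.pingService.ping.subscribe(ping => this.count = ping)
        );
    }

    // This override still runs, and so does unsubscribeAll: the instance property
    // installed in the base constructor wraps whatever this.ngOnDestroy resolved to.
    ngOnDestroy() {
        console.log('component-specific cleanup');
    }
}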

Pretty dangerous stuff, but pretty useful as well.

SVG’s FuncIRI, angular2, and the base tag

[Image: Broken mask link – a visualization]

I tried to make this title as descriptive and, let’s face it, clickbait-y as I could, because this was hard enough for me to discover. I somehow had never had to deal with this issue until a few days ago – SVGs do not play well with single-page apps when HTML5-style location routing is mixed with a set <base> tag.

Specifically, what doesn’t work is anything that uses FuncIRI references – CSS-style url() values. That means <use> elements and clip-path and filter attributes, among others. While trying to fix this I came up with a roundabout solution – very similar to this solution for AngularJS, most likely written before this was fixed there (around version 1.3) – before discovering that I didn’t need it at all.

In my case, I didn’t need the <base> tag at all – it was basically set as <base href="/">, most likely out of habit and from all the examples and starter apps one uses to get their hands dirty with Angular. All I needed to know about was APP_BASE_HREF. If you remove the <base> tag, Angular rightfully complains that it needs a base for its LocationStrategy to work, but APP_BASE_HREF lets us set it from the bootstrap step:

import {
    APP_BASE_HREF
} from '@angular/common';

bootstrap(App, [
    // ...other providers
    { provide: APP_BASE_HREF, useValue: '/' }
]);

This works even for cases where the base isn’t ‘/’, so it should be pretty much universal. Of course, if there are other reasons why you might need the base tag to stay in the page, the only solution is to update the relevant properties so that their URLs match the current one. I feel this should be avoided if at all possible, seeing as it isn’t the cleanest or most efficient method – not to mention that in our case, it would mean messing directly with the DOM on top of what an SVG animation library is already doing.

Nevertheless, here is an example of how that might look:

import {
    Directive,
    ElementRef,
    OnDestroy
} from '@angular/core';
import {
    Location
} from '@angular/common';

import $ = require('jquery');

@Directive({
    selector: '[update-clip-path]'
})
export class UpdateClipPath implements OnDestroy {
    private sub: any;

    constructor(private location: Location, private elementRef: ElementRef) {
        this.sub = this.location.subscribe(
            next => this.updateClipPath()
        );

        this.updateClipPath();
    }

    private updateClipPath() {
        if (this.elementRef.nativeElement) {
            $(this.elementRef.nativeElement)
                .find('[clip-path]')
                .each((index, el) => {
                    let clipPath = el.getAttribute('clip-path');
                    el.setAttribute(
                        'clip-path',
                        'url(' + this.location.path() + clipPath.substr(clipPath.indexOf('#')));
                });
        }
    }

    ngOnDestroy() {
        if (this.sub && this.sub.unsubscribe) {
            this.sub.unsubscribe();
        }
    }
}

Learning Javascript in a post-Reactive landscape

I recently re-watched a talk by Thomas Figg – Programming is Terrible. In the Q&A portion of the talk there is a (perhaps surprisingly) positive tone in one of his answers – that learning to code is, contrary to what some might choose to believe, more accessible than ever. He then mentions JavaScript, as it is as simple as it is ubiquitous, and arguably the most easily shareable code in the world – everything from a TV to a phone will run it.

I completely agree with this statement, as JavaScript is at its core an incredibly simple language, in both theory and practice – both easy to reason about, and to get something running. But increasingly complex abstractions have become an integral part of any application development in JavaScript, making the entry barrier for a frontend developer higher and higher.

On Promises

Having worked as an AngularJS developer since its 0.x releases, I have more than gotten used to its $q library, modelled very closely after the Q library. Promises made sense to me, and any seasoned developer will most likely agree that they made asynchronous programming much easier to deal with.

Yet it wasn’t until joining a full-stack team and getting tasked with tutoring my backend-heavy colleagues and QAs on Promises that I noticed just how big of a stretch they can be if you’re facing them for the first time. They are not trivial, especially when you have to stray from the typical examples and delve into more complex usages.

On Reactive Programming

Reactive programming takes the concept of asynchronous programming further. Compared to Promises, it is another step up the abstraction scale, making it easier to scale and to handle complex situations and concurrency. Unfortunately, that also makes it much more complex conceptually – and thus harder to get into and harder to reason about.

Angular 2 fully supports and depends on RxJS, and although it is an “opt-in” kind of thing (call .toPromise() on any Observable and it magically becomes just that), it is ubiquitous in the Angular 2 community. Go to any chatroom or forum and you will see that you are expected to be comfortable with it.
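
For instance, with the relevant RxJS operator imported, an Http call can be flattened back into a Promise (the endpoint here is made up):

import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';

export class UserService {
    constructor(private http: Http) {}

    // Observable world, opting back into Promises
    getUser(id: string): Promise<any> {
        return this.http.get('/api/users/' + id)
            .toPromise()
            .then(response => response.json());
    }
}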

A world of abstractions

AngularJS had a big problem – it looked easy, and it felt easy, until you tried doing anything complex with it. Angular 2 doesn’t make that mistake, showing its hand from the get-go. What this might mean for the community I don’t know – hopefully better code?

With Promises becoming part of the ES6 standard, we are moving into a future where they are commonplace – jQuery 3 is Promises/A+ compliant, for instance. The barrier to entry for developers inevitably gets higher at all levels.

As a teacher, you learn to avoid abstractions when teaching programming for the first time. An object-oriented language is not a good first language, for obvious reasons. I wonder if at some point, JavaScript will stop being one as well?

Angular2 router’s bumpy ride – a user’s perspective

Edit: ui-router is not mentioned at all in this article and should have been. This is only because I haven’t worked with it enough to comment on it, and also because it is still in alpha and not yet a popular ng2 router alternative in the community. That being said, it is the “only” router for me when in ng1-land, so I’m eager to give it a try, or to see how much of its architecture influences Vladivostok.

Angular 2 is now in a release candidate state, after several beta releases, and while the core of this new iteration is an extremely solid one, many of its components are still under heavy development, which makes using them quite a bumpy ride.

The router component is perhaps the most notorious among them, with two iterations deprecated in the space of a few short months – one officially so, and one never really seeing the light of day – and a third one on the way.

Now, it needs to be said that creating something like a router is far from trivial, particularly so if you are setting out to “revolutionise”, meaning solve all the known problems of previous routers. In the case of routing these are lazy loading, handling complex route structures, and enough flexibility to account for all use cases (with more or less legwork required).

Also, the reason the Angular team has gone through so many iterations has a lot to do with how closely they work with the community of users – the current iteration was thrown out after a mere couple of months, so quick was the community to spot its shortcomings.

So, how do all of these routers differ, and where are they headed?

Enter @angular/router-deprecated

Angular 2’s first stab at a router relied heavily on the component paradigm, as does Angular 2 in general. Components may have @RouteConfig annotations with route lists defined, and if they do, these get parsed and the relevant components loaded into an outlet node in their template.

Most lifecycle hooks and checks could then live in the component itself, keeping things neat and clean. This approach had a couple of problems:

  • As Routes were defined in the class file, deep linking to unloaded classes was impossible.
  • @CanActivate, which determines whether or not a certain route could be activated, had to be an annotation as it ran before the Component itself was instantiated.
  • Routes followed the same isolation pattern that Components did, but this meant not having access to the full route tree at any point, and having to hack your way around everything.

Enter @angular/router(-deprecated?)

The first attempt to solve these issues was promising:

  • It solved the deep linking problem by having routes be directly inferable from the url.
  • It intended to replace @CanActivate with CanActivateChild – it is now the parent’s task to determine if the route activation process can continue.
  • Access to the whole route tree was given in any of the hooks

Unfortunately, it perpetuated some of the issues, like routes still being defined as a Component annotation, and its development didn’t get very far before it was scrapped – first unofficially and now officially so.

Enter @ngrx/router, and the “new new new Router”

If “new new new Router” seems like an atrocious expression, it’s because it is – but it’s been a recurrent one in places like Gitter or GitHub issues. It is Vladivostok, and its approach is very similar to @ngrx/router (as its devs have been collaborating closely with the Angular team).

@ngrx/router takes a cleaner, leaner and more low-level approach to routing:

  • Routes are defined as objects in their own right and injected into the app directly. Their loading becomes completely independent from the Components themselves.
  • A route has Guards that run whenever the route tree passes through it, again completely independent of which Component is actually being loaded.
  • Changes in URL that do not actually change routes, but only parameters (like going from /user/1 to /user/2, for instance), do nothing by default – it is the user’s responsibility to listen to these changes and trigger behaviour (see the sketch after this list).
  • Routes, RouteParams, QueryParams, RouteData… all of these are Observables that any Component can listen to – this makes it both more flexible and simpler, especially when creating something like a breadcrumb component, or anything more specific or unique.
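
As a rough sketch of that last point – assuming a /user/:id route, and RouteParams injected as the Observable @ngrx/router exposes – a component reacts to parameter-only changes itself:

import { Component } from '@angular/core';
import { RouteParams } from '@ngrx/router';

@Component({
    selector: 'user-detail',
    template: `<h2>User {{userId}}</h2>`
})
export class UserDetail {
    private userId: string;

    constructor(routeParams: RouteParams) {
        // Fires on every parameter change (e.g. /user/1 -> /user/2), even though
        // the component itself is not re-instantiated; remember to unsubscribe on destroy.
        routeParams.subscribe(params => {
            this.userId = params['id'];
        });
    }
}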

A conclusion of sorts

Angular 2 is heading in a really good direction, despite (or perhaps because of) all the growing pains it is going through. The downside is that, while still in its betas and RCs, it can’t live up to the extremely high expectations for everything from power to speed to ease of use.

The best way to get ready for the new router is to delve into @ngrx/router, which coincidentally is a pretty powerful tool in its own right. The documentation is sparse but its developers and users are quick to answer in their Gitter channel, and it is flexible enough to handle almost anything you’ll want to throw at it.

I’ll be throwing a couple of things at it myself, and will write about that next.

Abusing Cucumber, for a good cause

In several Java houses I worked with in the past, we used Cucumber to do Behavior Driven Development. No, hang on a sec – that’s definitely an exaggeration. I think it’s more accurate to say we used Cucumber as a way to write acceptance tests. Wait, that’s still an exaggeration. We used Cucumber to write a mix of integration tests, and what may generously be called functional tests (and very occasionally bordering on acceptance tests). Yeah, that’s about right. We used it as a tool to write tests in plain English. But you know what? I think that’s OK.

Cucumberistas, BDDers and DDDers will tell you it’s about everyone – business, QA and development – coming together to come up with executable specifications. It’s about everyone speaking a universal language – a language that the business analysts can share with the testers and the developers. A language about the business problems an application is designed to solve. And a language for automated acceptance tests. Well maybe, just maybe, you are in an organization where that’s true. Where your Cucumber tests describe your user stories or specifications in the domain language of your application. If you are, good for you. You’re doing it “right”.

But for everyone else, I want to talk about some work we did to support your Cucumber test-writing efforts in the “wrong” way. And we don’t want to scold you, or admonish you for doing it “wrong”. No, in fact, we want to support you in your efforts to just write tests for HTTP services in English.

What I am talking about is best illustrated with an example – here’s how we use Cucumber to write tests for our application:

Background:
    Given the user stores http://localhost:9080 as apiRoot

Scenario: Successful registration flow
    Given a random alphanumeric string is stored as testUserName
    And a user makes a POST call to "{apiRoot}/users" with payload:
    """
    {
        "email": "{testUserName}@gmail.com",
        "password": "pass",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London"
    }
    """
    Then the user should get a 200 response and JSON matching:
    """ 
    {
        "email": "{testUserName}@gmail.com",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London",
        "id": "*"
    }
    """
    And the email containing subject Activate your account for {testUserName}@gmail.com is stored as activationEmail
    And the first link in stored HTML activationEmail is stored as activationLink
    And the regex activations/(\w+) is used on stored value activationLink to capture activationToken
    When a user makes a POST call to "{apiRoot}/tokens/activation/{activationToken}"
    Then the user should get a 200 response
    Given the user "{testUserName}@gmail.com"
    is logged in with password "pass"
    on "{apiRoot}"
    When a user makes a GET call to "{apiRoot}/widgets/{testUserName}"
    Then the user should get a 200 response and JSON matching:
    """ 
    []
    """

Yes, what we have here is a functional test for one of our stories. But all the steps are essentially an English version of what an HTTP client would do when hitting the service. A business analyst probably wouldn’t want to read that, but that’s really OK for us – business analysts in our experience don’t read the tests. Developers and testers read our tests, and it’s a great English-language description of what the test does. I don’t need to click through the code behind the step definitions to know what’s going on. As a developer, I can understand right away what is being done.

So if you are OK with writing tests this way, check out the cucumber module we created as part of datamill. It has all the step definitions you see in the example above. If you are writing HTTP services, especially those that serve JSON, and are backed by a relational database, you will find it useful. Oh, and we threw in some useful step definitions for dealing with emails too because we needed them.

I want to end by admitting the following about this approach: yes, sometimes it can get repetitive, with a lot of copy-pasting. So I will leave you with a last example of a custom step definition we created that combines the utility ones above:

import cucumber.api.java.en.Given;
import foundation.stack.datamill.cucumber.DatabaseSteps;
import foundation.stack.datamill.cucumber.HttpSteps;
import foundation.stack.datamill.cucumber.PropertySteps;
import foundation.stack.datamill.http.Method;

public class UserSteps {
    private final DatabaseSteps databaseSteps;
    private final HttpSteps httpSteps;
    private final PropertySteps propertySteps;

    public UserSteps(PropertySteps propertySteps, DatabaseSteps databaseSteps, HttpSteps httpSteps) {
        this.propertySteps = propertySteps;
        this.databaseSteps = databaseSteps;
        this.httpSteps = httpSteps;
    }

    @Given("^the user \"(.+)\" is logged in with password \"(.+)\" on \"(.+)\"$")
    public void loginAsUser(String email, String password, String apiRoot) {
        httpSteps.userMakesCallWithProvidedPayload(Method.POST, apiRoot + "/tokens", "{" +
                "\"email\": \"" + email + "\"," +
                "\"password\": \"" + password + "\"" +
                "}");
        httpSteps.assertStatusAndNonEmptyResponse(200);
        httpSteps.storeResponse("JWT");
        httpSteps.addValueToHeader("Authorization", "{JWT}");
    }
}

Check out datamill, and the cucumber module!

Your own identity on the Internet

If you ever thought about having users log in to your site, you’ve probably considered adding Facebook Login, or OAuth2 and OpenID Connect. And for good reason – they’re widely used.

An identity you own, to sign your content

But what if you wanted to allow users to own their identity? What would that look like? For a lot of technical folks, establishing identity usually means using a private key – which also has the advantage that the user owns their own identity.

Let’s say that you establish your identity using your own private key. Any content you create can then be signed by you using your private key. Anyone can verify that it was you who created the content if they have your public key.

How does someone looking at a signed piece of content know what key was used to sign it? Well, you can publish your public key somewhere, and put a URL to that key next to the signature on the content you create. The URL would allow the reader to download the public key they need to verify the signature.
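
As a rough sketch of the signing and verification themselves (key publishing and distribution aside), plain JDK crypto is enough – this is standard RSA signing, not any particular library from this post:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ContentSigning {
    public static void main(String[] args) throws Exception {
        // The author's identity: an RSA key pair they generate and keep to themselves
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair identity = generator.generateKeyPair();

        byte[] content = "My signed blog post".getBytes(StandardCharsets.UTF_8);

        // Sign the content with the private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(identity.getPrivate());
        signer.update(content);
        byte[] signature = signer.sign();

        // A reader who fetched the public key from the URL published next to the
        // signature can verify who created the content
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(identity.getPublic());
        verifier.update(content);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}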

Mirrors

But what if the URL to the public key goes down? Well, we can set up mirrors for public keys (you might use alternatives such as key servers here). Users should be able to notify mirrors of a new public key that they’ve published. Sites hosting content can also serve cached versions of public keys (that they picked up from the original source, or from a mirror) alongside the content.

Claims, and verified claims

So far, we only have the ability to establish that some piece of content was created by someone owning a certain key. But we have not established who the person behind the key is as of yet. How can we do that? Well, let’s say that with every key, you can have a set of claims – metadata attributes associated with them. So for example, we can say some key key1 belongs to some user claiming that their fullName is Joe Blogs, and that their facebookProfile is http://facebook.com/joeblogs (fullName and facebookProfile are claims here). Great, so now we can say that wherever we see content signed with key key1, it belongs to Joe Blogs, whose Facebook profile is at http://facebook.com/joeblogs.

Of course, the obvious problem with this is that anyone can publish their key, and associate it with a bogus set of claims. What we need is a way to have verified claims. For example, we would especially want to verify that someone who claims to own a particular Facebook profile actually owns that profile. How do we do that? Well we can have a service that provides verified facebookProfile claims. That is, a service that uses Facebook Login to allow the owner of a key to login to their Facebook account to prove ownership, and only then confirm that the owner of that key owns a Facebook account.

Here is how that flow might work:

  1. The owner of the key signs a facebookProfile claim with their private key – let’s call the signature they produce here claimSignature
  2. They provide claimSignature to the Facebook verification service, which should first check that the provided claimSignature is correct and was produced by the owner of the key
  3. It should then have them login to the Facebook profile they claim to own using Facebook Login
  4. Once the service has verified that they own the Facebook account, the service would then sign claimSignature with its own private key to create a verifiedClaimSignature

Now, if we were given the claimSignature and the verifiedClaimSignature, together with the facebookProfile claim, we can trust that association a bit more. We would need to decide whether to trust that the Facebook verification service we used is trustworthy in evaluating facebookProfile claims. If we do, all we need is the public key for that service to verify the verifiedClaimSignature and confirm that the facebookProfile provided can be trusted.

Decentralized identity

What does this allow at the end of the day? Suppose you wrote a blog post, or posted a comment somewhere on the web. You can now sign that content, and someone reading the content would be able to know that it was you who wrote it. And they would be able to know that based on the identity you own – your personal private key. Everyone can own their own identity.