A breadcrumb component for @ngrx/router

@ngrx/router is, at the moment, one of the best choices for a router component in Angular 2, and Vladivostok, the third iteration of Angular 2’s official router, will take very heavy inspiration from it. We are currently using it to handle our routes, and the need arose to create a breadcrumb component for certain parts of the application.

You can see a working example here.

@ngrx/router’s API is very light and sparse, but not necessarily incomplete – a lot of the actual implementation is assumed to be left in the hands of the user. This is powered by the use of Observables for Route, RouteParams, QueryParams, etc. A great example of this assumption is the fact that instead of a set of interfaces like CanReuse/CanActivate/CanDeactivate, the router only re-instantiates components when the matched component itself changes after a route change – any changes in parameters have to be handled manually by default. More work, but also a clearer picture of what one can and cannot do with the tool, and a lot more control.
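To make that concrete: handling a parameter-only change (say, /user/1 to /user/2) means subscribing to the parameters yourself. Here is a minimal sketch, assuming RouteParams is injected as an Observable in the same way RouterInstruction is in the breadcrumb component below (the route, the parameter shape and the UserDetail component are purely illustrative):

import { Component } from '@angular/core';
import { RouteParams } from '@ngrx/router';

@Component({
    selector: 'user-detail',
    template: `<h2>User {{userId}}</h2>`
})
export class UserDetail {
    private userId: string;

    constructor(routeParams$: RouteParams) {
        // The component is NOT re-instantiated when only a parameter changes,
        // so we react to every emission of the params stream ourselves.
        routeParams$.subscribe(params => {
            this.userId = params['id']; // assuming params is a simple map of route parameters
            // ...reload whatever depends on the parameter here
        });
    }
}

(And as a later post here discusses, subscriptions like this one need to be cleaned up when the component is destroyed.)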

The first thing we found was that routes have a property – options – that serves the express purpose of holding custom data. A simple usage is this:

export const routes: Route[] = [{
    path: '/',
    component: Home,
    options: {
        breadcrumb: 'Home Sweet Home'
    },
    children: [{
            path: '/component-a',
            component: ComponentA,
            options: {
                breadcrumb: 'A Component'
            },
            children: [{
                    path: '/one',
                    component: One,
                    options: {
                        breadcrumb: 'The One'
                    }
                },
                [...]
            ]
        },
        [...]
    ]
}];

And the breadcrumb component is as such:

@Component({
    selector: 'breadcrumbs',
    directives: [NgFor, NgIf],
    template: `<span>
<span *ngFor="let breadcrumb of breadcrumbs; let isLast = last">
<a [linkTo]="breadcrumb.path">{{breadcrumb.name}}</a>
<span *ngIf="!isLast"> &gt; </span>
</span>
</span>`
})
export class Breadcrumbs {
    private breadcrumbs: any[];

    constructor(private routerInstruction: RouterInstruction) {
        this.routerInstruction
            .subscribe(
                match => {
                    this.breadcrumbs = this.getBreadcrumbs(match);
                }
            );
    }

    private getBreadcrumbs(match: Match) {
        let breadcrumbs = [];
        let path = '';

        for (let i = 0; i < match.routes.length; i++) {
            path = path[path.length - 1] === '/' ? path.substr(0, path.length - 1) : path;
            path += match.routes[i].path ? this.makePath(match.routes[i], match) : '';
            if ((match.routes[i].options || {}).breadcrumb) {
                breadcrumbs.push({
                    name: match.routes[i].options.breadcrumb,
                    path: path
                });
            }
        }

        return breadcrumbs;
    }

    private makePath(route: Route, match: Match) {
        return pathToRegexp.compile(route.path)(match.routeParams);
    }
}

RouterInstruction is an Observable that gives us all the information we need: each emission is a Match object containing the array of matched routes. All that was left was to build the urls, as @ngrx/router uses only url strings (as opposed to the array notation you’ll find in @angular/router, for instance) – but since @ngrx/router uses path-to-regexp to parse urls, it was only a matter of using the same library to compile the parsed data back into urls.
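As a standalone illustration of that compile step (the route pattern here is made up, not taken from the config above):

import pathToRegexp = require('path-to-regexp');

// Compile a route's path pattern into a function that fills in parameters...
const toPath = pathToRegexp.compile('/component-a/:id');

// ...then feed it the matched routeParams to get a concrete url for the breadcrumb link.
console.log(toPath({ id: '42' })); // '/component-a/42'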

All in all, a very simple solution. Omitted are the use of translations and of asynchronously loaded data (like a profile name) in the breadcrumbs – the former is trivial and largely unrelated, and for the latter we use stores, which is perhaps a good topic for another post.

3 Docker tips & tricks

Over the past few months, we’ve done a lot of development with Docker. There are a few things that we end up using over and over. I wanted to share three of these with other developers working with Docker:

  1. Remove all containers – Inevitably, during development you’re going to pile up a bunch of stale containers that are just lying around – or maybe you have a bunch of running containers that you don’t use. We end up needing to wipe out all the containers to start fresh all the time. Here’s how we do it:

    docker ps -a -q | awk '{print $1}' | xargs --no-run-if-empty docker rm -f


    It’s pretty self-explanatory – it lists all the containers, and then removes each one by its ID. There are several incarnations of this, but this one has the advantage that it can be used on Windows as well if you install UNIX command line tools (you could do that by grabbing MinGW, for example). Alternatively, on Windows, you can use:

    FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm -f %i
  2. Mount the Docker Unix socket as a volume – OK, the way we use Docker is a bit more advanced than the standard use cases, but it’s crazy how often we end up using this one. That’s because we always end up having to create Docker containers from within a Docker container. And the best way to do this is to mount the Docker daemon’s Unix socket on the host machine as a volume at the same location within the container. That means you add the following when performing a docker run: -v /var/run/docker.sock:/var/run/docker.sock. Now, within the container, if you have a Docker client (whether that’s the command line one, or a Java one for example) connect to that Unix socket, it actually talks to the Docker daemon on the host. That means if you create a container from within the container with the volume, the new container is created using the daemon running on the host (meaning it will be a sibling of the container with the volume)! Very useful! There’s a small client sketch after this list.
  3. Consider Terraform as an alternative to compose – Terraform is for setting up infrastructure really easily, and it’s great for that. For us, infrastructure means AWS when running in the cloud, and Docker when running locally. We have several containers that we have to run for our application – during development, we run all the containers locally, and in the cloud, we run the containers across various EC2 instances, each instance getting one or more containers. This is perfect for Terraform. We can use the Docker provider alone to configure resources to run our local setup, and we can use it together with the AWS provider to run our cloud setup. Note again that Terraform is for infrastructure setup, so you are doing things at a very high level – you may find that you need to do some prep using other tools to be able to work with Terraform. For example, you can’t use Dockerfiles – you will have to build your custom images prior to using them with Terraform.
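Here is the client sketch mentioned in tip 2. It is just an illustration – assuming the dockerode Node client is installed, which is not part of the tips above – of how a client pointed at the mounted socket ends up talking to the host’s daemon:

import Docker = require('dockerode'); // npm install dockerode

// Inside a container started with
//   docker run -v /var/run/docker.sock:/var/run/docker.sock ...
// a client pointed at that socket talks to the *host's* Docker daemon.
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

docker.listContainers((err, containers) => {
    if (err) { throw err; }
    // These are the containers running on the host, including this one.
    containers.forEach(c => console.log(c.Id, c.Image));
});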


Using Class inheritance to hook to Angular 2 component lifecycle

I was thinking of a way to use class inheritance to hook into certain Component lifecycle hooks, without needing to worry about them in the extending class (no knowledge needed, no super() calls to forget about). This does mean “straying off the path” a little bit, and there may be better ways to do this.

Observables in angular2 are a powerful thing. Unlike the Angular 1 hero, the Promise, they represent streams of asynchronous data rather than single events. This means that a subscription to an observable doesn’t necessarily ever end.

Using @ngrx/router, I found myself relying on them a lot, but precisely because they are streams, they need careful cleanup, or we risk leaving a subscription running after a Component has been destroyed.

A typical way we can do this is using ngOnDestroy:

export class Component implements OnDestroy {
    private subscription: Subscription;
    private count: number;

    constructor(private pingService: PingService) {
        this.subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );
    }

    ngOnDestroy() {
        this.subscription.unsubscribe();
    }
}

Simple enough on its own, but something that is sure to add a lot of repetition and complexity to a class with more than one subscription. We can automate this, and the best way I found was to extend a base class:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    ngOnDestroy() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }
}

This makes the previous class simpler:

export class Component extends SafeUnsubscriber {
    private count: number;

    constructor(private pingService: PingService) {
        let subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );

        this.safeSubscription(subscription);
    }
}

Which is great, but what if the extending component needs its own ngOnDestroy? Conventional inheritance would have us call super.ngOnDestroy(), but in this particular case I don’t want to leave that as a mere possibility – I want to always unsubscribe on destroy, regardless of whether or not ngOnDestroy was overridden.

So in this case a little hack is acceptable, in my opinion – we can make sure the unsubscriber code always runs on ngOnDestroy, which both prevents mistakes by omission and makes the code cleaner for the consumer:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    constructor() {
        let f = this.ngOnDestroy;

        this.ngOnDestroy = () => {
            f();
            this.unsubscribeAll();
        };
    }

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    private unsubscribeAll() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }

    ngOnDestroy() {
        // no-op
    }
}

Now, even if ngOnDestroy gets overridden, the private method unsubscribeAll still runs, as the constructor (which always runs, since typescript requires it) makes sure of it. The base class’s ngOnDestroy, on the other hand, only exists as a no-op, to ensure the code runs regardless of whether or not the extending component defined one of its own.

How does this work, then? Javascript (and typescript, by extension) uses prototypal inheritance, which means that super is the prototype – this is why typescript makes it mandatory to call super() in the extending class’s constructor before any references to this, so that class inheritance expectations are guaranteed. By assigning to this.ngOnDestroy in the base class constructor, we are adding a property to the instance, shadowing the prototype’s version – and that property happens to be a function that calls the prototype’s version followed by our own cleanup.
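Stripped of the Angular specifics, the same trick looks like this (the names are purely illustrative):

class Base {
    constructor() {
        const original = this.greet; // resolves through the prototype chain, picking up any override
        this.greet = () => {         // own property on the instance, shadowing the prototype's method
            original.call(this);
            console.log('base cleanup');
        };
    }

    greet() {
        console.log('base');
    }
}

class Child extends Base {
    greet() {
        console.log('child');
    }
}

new Child().greet(); // logs 'child', then 'base cleanup'

This is exactly what the SafeUnsubscriber constructor above does with ngOnDestroy.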

Pretty dangerous stuff, but pretty useful as well.

SVG’s FuncIRI, angular2, and the base tag

(Image: broken mask link – a visualization)

I tried to make this title as descriptive and, let’s face it, clickbait-y as I could, because this was hard enough for me to discover. I somehow had never had to deal with this issue until a few days ago – SVGs do not play well with single page apps when HTML5-style location routing is mixed with a set <base> tag.

Specifically, what doesn’t work is anything that uses FuncIRI, i.e. CSS-style url() references. That means <use> elements and the clip-path and filter properties, among others. When trying to fix this I came up with a roundabout solution, before discovering that I didn’t need to, very similar to this solution for angularJS – most likely written before the issue was fixed there (around version 1.3).

In my case, I didn’t need the <base> tag at all – it was basically set as <base href="/">, most likely out of habit and from all the examples and starter apps one uses to get their hands dirty with angular. All I needed to know about was APP_BASE_HREF. If you remove the <base> tag, angular rightfully complains that it needs a base for its LocationStrategy to work, but APP_BASE_HREF enables us to set it from the bootstrap step:

import {
    APP_BASE_HREF
} from '@angular/common';

bootstrap(App, [
    // ...other providers
    { provide: APP_BASE_HREF, useValue: '/' }
]);

This works even for cases where the base isn’t '/', so it should be pretty much universal. Of course, if there are other reasons why the base tag needs to stay in the page, the only solution is to update the relevant attributes so that their urls match the current one. I feel this should be avoided if at all possible, seeing as it isn’t the cleanest or most efficient method – not to mention that in our case it would mean messing directly with the DOM on top of what an SVG animation library is already doing.

Nevertheless, here is an example of how that might look:

import {
    Directive,
    ElementRef,
    OnDestroy
} from '@angular/core';
import {
    Location
} from '@angular/common';

import $ = require('jquery');

@Directive({
    selector: '[update-clip-path]'
})
export class UpdateClipPath implements OnDestroy {
    private sub: any;

    constructor(private location: Location, private elementRef: ElementRef) {
        this.sub = this.location.subscribe(
            next => this.updateClipPath()
        );

        this.updateClipPath();
    }

    private updateClipPath() {
        if (this.elementRef.nativeElement) {
            $(this.elementRef.nativeElement)
                .find('[clip-path]')
                .each((index, el) => {
                    let clipPath = el.getAttribute('clip-path');
                    el.setAttribute(
                        'clip-path',
                        'url(' + this.location.path() + clipPath.substr(clipPath.indexOf('#')));
                });
        }
    }

    ngOnDestroy() {
        if (this.sub && this.sub.unsubscribe) {
            this.sub.unsubscribe();
        }
    }
}

Learning Javascript in a post-Reactive landscape

I recently re-watched a talk by Thomas Figg – Programming is terrible. In the Q&A portion of the talk there is a (perhaps surprisingly) positive tone in one of his answers – that learning to code is, contrary to what some might choose to believe, more accessible than ever. He then mentions JavaScript, as it is as simple as it is ubiquitous, and arguably the most easily shareable code in the world – everything from a TV to a phone will run it.

I completely agree with this statement, as JavaScript is at its core an incredibly simple language, in both theory and practice – both easy to reason about, and to get something running. But increasingly complex abstractions have become an integral part of any application development in JavaScript, making the entry barrier for a frontend developer higher and higher.

On Promises

Having worked as an AngularJS developer since its 0.x releases, I have more than gotten used to its $q library, modelled very closely after the Q library. Promises made sense to me, and any seasoned developer will most likely agree that they made asynchronous programming much easier to deal with.

Yet it wasn’t until joining a full stack team and getting tasked with tutoring my backend-heavy colleagues and QAs on Promises that I noticed just how big of a stretch they can be if you’re facing them for the first time. They are not trivial, especially when you have to stray from the typical examples and delve into more complex usages.

On Reactive Programming

Reactive programming takes the concept of asynchronous programming further. Compared to promises, it is another step along the abstraction scale, making it easier to scale and to handle complex situations and concurrency. Unfortunately, it is also much more complex conceptually – and thus harder to get into and harder to reason about.

Angular2 fully supports and depends on RxJS, and although it is an “opt-in” kind of thing (just call .toPromise() on any Observable and it magically becomes just that), it is ubiquitous in the angular2 community. Go to any chatroom or forum and you see that you are expected to be comfortable with it.
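To see both sides in one place, here is a standalone RxJS 5 snippet (using the patch-style imports common at the time):

import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/take';
import 'rxjs/add/operator/toPromise';

// An Observable is a stream: this one emits 0, 1, 2 over three seconds.
Observable.interval(1000)
    .take(3)
    .subscribe(n => console.log('tick', n));

// toPromise() collapses the stream into a single future value: the promise
// resolves with the last emission (2) once the stream completes.
Observable.interval(1000)
    .take(3)
    .toPromise()
    .then(n => console.log('done', n));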

A world of abstractions

AngularJS had a big problem – it looked easy, and it felt easy, until you tried doing anything complex with it. Angular 2 doesn’t make that mistake, showing its hand from the get-go. What this might mean for the community I don’t know – hopefully better code?

With Promises becoming part of the ES6 standard, we are moving into a future where they are commonplace – jQuery 3 is Promises/A+ compliant, for instance. The barrier to entry for developers is pushed higher at all levels.

As a teacher, you learn to avoid abstractions when teaching programming for the first time. An object-oriented language is not a good first language, for obvious reasons. I wonder if at some point, JavaScript will stop being one as well?

Angular2 router’s bumpy ride – a user’s perspective

Edit: ui-router is not mentioned at all in this article and should have been. This is only due to the fact that I haven’t worked with it enough to comment on it, and also because it is still in alpha, and not yet a popular ng2 router alternative in the community. That being said, it is the “only” router for me when in ng1-land, so I’m eager to give it a try, or to see how much of its architecture influences Vladivostok.

Angular 2 is now in a release candidate state, after several beta releases, and while the core of this new iteration is an extremely solid one, many of its components are still under heavy development, which makes using them quite a bumpy ride.

The router component is perhaps the most notorious among them, with two iterations deprecated in the space of a few short months – one officially so, and one never really seeing the light of day – and a third one on the way.

Now, it needs to be said that creating something like a router is far from trivial, particularly so if you are setting out to “revolutionise”, meaning solve all the known problems of previous routers. In the case of routing these are lazy loading, handling complex route structures, and enough flexibility to account for all use cases (with more or less legwork required).

Also, the reason why the angular team has gone through so many iterations has to do with how closely they are working with the community of users – the current iteration took a mere couple of months to get thrown out, so quickly did the community spot its shortcomings.

So, how do all of these routers differ, and where are they headed?

Enter @angular/router-deprecated

Angular2’s first stab at a router relied heavily on the component paradigm, as does angular2 in general. Components may have @RouteConfig annotations with route lists defined, and if they do, these get parsed and the relevant components loaded into a node in their templates.

Most lifecycle hooks and checks could then live in the component itself, keeping things neat and clean. This approach had a couple of problems:

  • As Routes were defined in the class file, deep linking to unloaded classes was impossible.
  • @CanActivate, which determines whether or not a certain route could be activated, had to be an annotation as it ran before the Component itself was instantiated.
  • Routes followed the same isolation pattern that Components did, but this meant not having access to the full route tree at any point, and having to hack your way around everything.

Enter @angular/router(-deprecated?)

The first attempt to solve these issues was promising:

  • It solved the deep linking problem by having routes be directly inferable from the url.
  • It intended to replace @CanActivate with CanActivateChild – it is now the parent’s task to determine if the route activation process can continue.
  • Access to the whole tree was given in any of the hooks.

Unfortunately, it perpetuated some of the issues, like routes still being defined as a Component’s annotation, and its development didn’t get very far before it got scrapped – first unofficially and now officially so.

Enter @ngrx/router, and the “new new new Router”

If “new new new Router” seems like an atrocious expression, it’s because it is – but it’s been a recurrent one in places like Gitter or Github issues. It is Vladivostok, and its approach is very similar to @ngrx/router (as its devs have been collaborating closely with the angular team).

@ngrx/router takes a cleaner, leaner and lower-level approach to routing:

  • Routes are defined as objects in their own right and injected into the app directly. Their loading becomes completely independent from the Components themselves.
  • A route has Guards that run whenever the route tree passes through it – these, again, completely independent of which Component is actually being loaded.
  • Changes in url that do not actually change routes, but only parameters (like changing from /user/1 to /user/2, for instance) do nothing by default – it is the user’s responsibility to listen to these changes and trigger behaviour.
  • Routes, RouteParams, QueryParams, RouteData… All these are Observables that any Component can listen to – this makes it both more flexible and simpler, especially when creating something like a breadcrumb component, or anything more specific or unique.

A conclusion of sorts

Angular2 is heading in a really good direction, despite (or perhaps because of) all the growing pains it is going through. The downside is that, while in its betas and RCs, it can’t live up to the extremely high expectations for everything from power to speed to ease of use.

The best way to get ready for the new router is to delve into @ngrx/router, which coincidentally is a pretty powerful tool in its own right. The documentation is sparse but its developers and users are quick to answer in their Gitter channel, and it is flexible enough to handle almost anything you’ll want to throw at it.

I’ll be throwing a couple of things at it myself, and writing about that next.

Abusing Cucumber, for a good cause

In several Java houses I worked with in the past, we used Cucumber to do Behavior Driven Development. No, hang on a sec – that’s definitely an exaggeration. I think it’s more accurate to say we used Cucumber as a way to write acceptance tests. Wait, that’s still an exaggeration. We used Cucumber to write a mix of integration tests, and what may generously be called functional tests (and very occasionally bordering on acceptance tests). Yeah, that’s about right. We used it as a tool to write tests in plain English. But you know what? I think that’s OK.

Cucumberistas, BDDers and DDDers will tell you it’s about everyone – business, QA and development – coming together to come up with executable specifications.  It’s about everyone speaking in a universal language – a language that the business analysts can share with the testers, and the developers. A language about the business problems an application is designed to solve. And a language for automated acceptance tests. Well maybe, just maybe, you are in an organization where that’s true. Where your Cucumber tests describe your user stories or specifications in the domain language for your application. If you are, good for you. You’re doing it “right”.

But for everyone else, I want to talk about some work we did to support your Cucumber test-writing efforts in the “wrong” way. And we don’t want to scold you, or admonish you for doing it “wrong”. No, in fact, we want to support you in your efforts to just write tests for HTTP services in English.

What I am talking about is best illustrated with an example – here’s how we use Cucumber to write tests for our application:

Background:
    Given the user stores http://localhost:9080 as apiRoot

Scenario: Successful registration flow
    Given a random alphanumeric string is stored as testUserName
    And a user makes a POST call to "{apiRoot}/users" with payload:
    """
    {
        "email": "{testUserName}@gmail.com",
        "password": "pass",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London"
    }
    """
    Then the user should get a 200 response and JSON matching:
    """
    {
        "email": "{testUserName}@gmail.com",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London",
        "id": "*"
    }
    """
    And the email containing subject Activate your account for {testUserName}@gmail.com is stored as activationEmail
    And the first link in stored HTML activationEmail is stored as activationLink
    And the regex activations/(\w+) is used on stored value activationLink to capture activationToken
    When a user makes a POST call to "{apiRoot}/tokens/activation/{activationToken}"
    Then the user should get a 200 response
    Given the user "{testUserName}@gmail.com" is logged in with password "pass" on "{apiRoot}"
    When a user makes a GET call to "{apiRoot}/widgets/{testUserName}"
    Then the user should get a 200 response and JSON matching:
    """
    []
    """

Yes, what we have here is a functional test for one of our stories. But all the steps are essentially an English version of what an HTTP client would do when hitting the service. A business analyst probably wouldn’t want to read that, but that’s really OK for us – business analysts in our experience don’t read the tests. Developers and testers read our tests, and it’s a great English-language description of what the test does. I don’t need to click through the code behind the step definitions to know what’s going on. As a developer, I can understand right away what is being done.

So if you are OK with writing tests this way, check out the cucumber module we created as part of datamill. It has all the step definitions you see in the example above. If you are writing HTTP services, especially those that serve JSON, and are backed by a relational database, you will find it useful. Oh, and we threw in some useful step definitions for dealing with emails too because we needed them.

I want to end by admitting the following about this approach: yes, sometimes this can get repetitive and involve a lot of copy-pasting. So I will leave you with a last example of a custom step definition we created that combines the utility ones above:

import cucumber.api.java.en.Given;
import foundation.stack.datamill.cucumber.DatabaseSteps;
import foundation.stack.datamill.cucumber.HttpSteps;
import foundation.stack.datamill.cucumber.PropertySteps;
import foundation.stack.datamill.http.Method;

public class UserSteps {
    private final DatabaseSteps databaseSteps;
    private final HttpSteps httpSteps;
    private final PropertySteps propertySteps;

    public UserSteps(PropertySteps propertySteps, DatabaseSteps databaseSteps, HttpSteps httpSteps) {
        this.propertySteps = propertySteps;
        this.databaseSteps = databaseSteps;
        this.httpSteps = httpSteps;
    }

    @Given("^the user \"(.+)\" is logged in with password \"(.+)\" on \"(.+)\"$")
    public void loginAsUser(String email, String password, String apiRoot) {
        httpSteps.userMakesCallWithProvidedPayload(Method.POST, apiRoot + "/tokens", "{" +
                "\"email\": \"" + email + "\"," +
                "\"password\": \"" + password + "\"" +
                "}");
        httpSteps.assertStatusAndNonEmptyResponse(200);
        httpSteps.storeResponse("JWT");
        httpSteps.addValueToHeader("Authorization", "{JWT}");
    }
}

Check out datamill, and the cucumber module!

Your own identity on the Internet

If you ever thought about having users login to your site, you’ve probably considered adding Facebook Login, or OAuth2 and OpenID Connect. And for good reason – they’re widely used.

An identity you own, to sign your content

But what if you wanted to allow users to own their identity? What would that look like? For a lot of technical folks, establishing identity usually means using a private key – which has the added advantage that the user owns their own identity.

Let’s say that you establish your identity using your own private key. Any content you create can then be signed by you using your private key. Anyone can verify that it was you who created the content if they have your public key.

How does someone looking at a signed piece of content know what key was used to sign it? Well, you can publish your public key somewhere, and put a URL to that key next to the signature on the content you create. The URL would allow the reader to download the public key they need to verify the signature.
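As a rough sketch of that flow, using Node’s built-in crypto module (the SignedContent shape, the key variables and the URL field are illustrative, not a defined format):

import * as crypto from 'crypto';

// Hypothetical shape of a signed piece of content: the URL tells readers where
// to fetch the public key they need to verify the signature.
interface SignedContent {
    content: string;
    signature: string;    // base64 signature over the content
    publicKeyUrl: string; // e.g. 'https://example.com/keys/joe.pub' (illustrative)
}

function signContent(content: string, privateKeyPem: string, publicKeyUrl: string): SignedContent {
    const signature = crypto.createSign('RSA-SHA256')
        .update(content)
        .sign(privateKeyPem, 'base64');
    return { content, signature, publicKeyUrl };
}

// A reader downloads the public key from signed.publicKeyUrl and checks the signature.
function verifyContent(signed: SignedContent, publicKeyPem: string): boolean {
    return crypto.createVerify('RSA-SHA256')
        .update(signed.content)
        .verify(publicKeyPem, signed.signature, 'base64');
}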

Mirrors

But what if the URL to the public key goes down? Well, we can set up mirrors for public keys (you might use alternatives such as key servers here). Users should be able to notify mirrors of a new public key that they’ve published. Sites hosting content can also serve cached versions of public keys (that they picked up from the original source, or from a mirror) along with the content.

Claims, and verified claims

So far, we only have the ability to establish that some piece of content was created by someone owning a certain key. But we have not established who the person behind the key is as of yet. How can we do that? Well, let’s say that with every key, you can have a set of claims – metadata attributes associated with them. So for example, we can say some key key1 belongs to some user claiming that their fullName is Joe Blogs, and that their facebookProfile is http://facebook.com/joeblogs (fullName and facebookProfile are claims here). Great, so now we can say that wherever we see content signed with key key1, it belongs to Joe Blogs, whose Facebook profile is at http://facebook.com/joeblogs.

Of course, the obvious problem with this is that anyone can publish their key, and associate it with a bogus set of claims. What we need is a way to have verified claims. For example, we would especially want to verify that someone who claims to own a particular Facebook profile actually owns that profile. How do we do that? Well, we can have a service that provides verified facebookProfile claims. That is, a service that uses Facebook Login to allow the owner of a key to log in to their Facebook account to prove ownership, and only then confirm that the owner of that key owns that Facebook account.

Here is how that flow might work:

  1. The owner of the key signs a facebookProfile claim with their private key – let’s call the signature they produce here claimSignature
  2. They provide claimSignature to the Facebook verification service, which should first check that the provided claimSignature is correct and was produced by the owner of the key
  3. It should then have them login to the Facebook profile they claim to own using Facebook Login
  4. Once the service has verified that they own the Facebook account, the service would then sign claimSignature with its own private key to create a verifiedClaimSignature

Now, if we are given the claimSignature and the verifiedClaimSignature together with the facebookProfile claim, we can trust that association a bit more. We would need to decide that the Facebook verification service we used is trustworthy in evaluating facebookProfile claims. If we do, all we need is that service’s public key to verify the verifiedClaimSignature and confirm that the facebookProfile provided can be trusted.
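Continuing the sketch from earlier (the claim shape is again entirely illustrative), checking a verified claim boils down to two signature verifications:

import * as crypto from 'crypto';

// A claim, the key owner's signature over it, and the verification service's
// signature over that signature.
interface VerifiedClaim {
    claim: { [name: string]: string }; // e.g. { facebookProfile: 'http://facebook.com/joeblogs' }
    claimSignature: string;            // signed by the owner of the key
    verifiedClaimSignature: string;    // signed by the verification service
}

function trustClaim(vc: VerifiedClaim, ownerPublicKeyPem: string, servicePublicKeyPem: string): boolean {
    // 1. The claim really was made by the owner of the key...
    const madeByOwner = crypto.createVerify('RSA-SHA256')
        .update(JSON.stringify(vc.claim))
        .verify(ownerPublicKeyPem, vc.claimSignature, 'base64');

    // 2. ...and a verification service we trust vouched for it by signing claimSignature.
    const vouchedFor = crypto.createVerify('RSA-SHA256')
        .update(vc.claimSignature)
        .verify(servicePublicKeyPem, vc.verifiedClaimSignature, 'base64');

    return madeByOwner && vouchedFor;
}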

Decentralized identity

What does this allow at the end of the day? Suppose you wrote a blog post, or posted a comment somewhere on the web. You can now sign that content, and someone reading the content would be able to know that it was you who wrote it. And they would be able to know that based on the identity you own – your personal private key. Everyone can own their own identity.

A functional reactive alternative to Spring

Modern-day Spring allows you to be pretty concise. You can get an elaborate web service up and running using very little code. But when you write idiomatic Spring, you find yourself strewing your code with lots of magic annotations, whose function and behavior are hidden within complex framework code and documentation. When you want to stray away slightly from what the magic annotations allow, you suddenly hit a wall: you start debugging through hundreds of lines of framework code to figure out what it’s doing, and how you can convince the framework to do what you want instead.

datamill is a Java web framework that is a reaction to that approach. Unlike other modern Java frameworks, it makes the flow and manipulation of data through your application highly visible. How does it do that? It uses a functional reactive style built on RxJava. This allows you to be explicit about how data flows through your application, and how to modify that data as it does. At the same time, if you use Java 8 lambdas (datamill and RxJava are intended to be used with lambdas), you can still keep your code concise and simple.

Let’s take a look at some datamill code to illustrate the difference:

public static void main(String[] args) {
    OutlineBuilder outlineBuilder = new OutlineBuilder();

    Server server = new Server(
        rb -> rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))
            .elseIfMatchesBeanMethod(outlineBuilder.wrap(new TokenController()))
            .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))
            .orElse(r -> r.respond(b -> b.notFound())),
        (request, throwable) -> handleException(throwable));

    server.listen(8081);
}


A few important things to note:

  • datamill applications are primarily intended to be started as standalone Java applications – you explicitly create the HTTP server, specify how requests are handled, and have the server start listening on a port. Unlike traditional JEE deployments where you have to worry about configuring a servlet container or an application server, you have control of when the server itself is started. This also makes creating a Docker container for your server dead simple. Package up an executable JAR using Maven and stick it in a standard Java container.
  • When an HTTP request arrives at your server, it is obvious how it flows through your application. The line

    rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))

    says that the server should first check if the request is an HTTP GET request for the URI /status, and if it is, return an HTTP OK response.

  • The next two lines show how you can organize your request handlers while still maintaining an understanding of what happens to the request. For example, the line

    .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))

    says that we will see if the request matches a handler method on the UserController instance we passed in. To understand how this matching works, take a look at the UserController class, and one of the request handling methods:

    @Path("/users")
    public class UserController {
     ...
     @GET
     @Path("/{userName}")
     public Observable < Response > getUser(ServerRequest request) {
       return userRepository.getByUserName(request.uriParameter("userName").asString())
        .map(u -> new JsonObject()
         .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
         .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
         .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
        .flatMap(json -> request.respond(b -> b.ok(json.asString())))
        .switchIfEmpty(request.respond(b -> b.notFound()));
      }
      ...
    }

    You can see that we use @Path and @GET annotations to mark request handlers. But the difference is that you can pin-point where the attempt to match the HTTP request to an annotated method was made. It was within your application code – you did not have to go digging through hundreds of lines of framework code to figure out how the framework is routing requests to your code.

  • Finally, in the code from the UserController, notice how the response is created – and how explicit the composition of the JSON is within datamill:
    .map(u -> new JsonObject()
    .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
    .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
    .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
    .flatMap(json -> request.respond(b -> b.ok(json.asString())))

    You have full control of what goes into the JSON. For those who have ever tried to customize the JSON output by Jackson to omit properties, or for the poor souls who have tried to customize responses when using Spring Data REST, you will appreciate the clarity and simplicity.

Just one more example from an application using datamill – consider the way we perform a basic select query:

public class UserRepository extends Repository<User> {
    ...
    public Observable<User> getByUserName(String userName) {
        return executeQuery(
            (client, outline) ->
                client.selectAllIn(outline)
                    .from(outline)
                    .where().eq(outline.member(m -> m.getUserName()), userName)
                    .execute()
                    .map(r -> outline.wrap(new User())
                        .set(m -> m.getId(), r.column(outline.member(m -> m.getId())))
                        .set(m -> m.getUserName(), r.column(outline.member(m -> m.getUserName())))
                        .set(m -> m.getEmail(), r.column(outline.member(m -> m.getEmail())))
                        .set(m -> m.getPassword(), r.column(outline.member(m -> m.getPassword())))
                        .unwrap()));
    }
    ...
}

A few things to note in this example:

  • Notice the visibility into the exact SQL query that is composed. For those of you who have ever tried to customize the queries generated by annotations, you will again appreciate the clarity. While in any single application, a very small percentage of the queries need to be customized outside of what a JPA implementation allows, almost all applications will have at least one of these queries. And this is usually when you get the sinking feeling before delving into framework code.
  • Take note of the visibility into how data is extracted from the result and placed into entity beans.
  • Finally, take note of how concise the code remains, with the use of lambdas and RxJava Observable operators.

Hopefully that gives you a taste of what datamill offers. What we wanted to highlight was the clarity you get on how requests and data flow through your application, and on how that data is transformed.

datamill is still in an early stage of development but we’ve used it to build several large web applications. We find it a joy to work with.

We hope you’ll give it a try – we are looking for feedback. Go check it out.

Weave social into the web

Disclaimer: This is the second post in a series where we are exploring a decentralized Facebook (here’s the first). It’s written by software engineers, and is mostly about imagining a contrived (for now) technical architecture.

How do you weave elements of Facebook into the web? Start by allowing them to identify themselves and all their content:

  • Establishing a user’s identity can be done rather straightforwardly by creating a unique public-private key pair for a user and allowing them to digitally sign things using their private key
  • Users can then digitally sign content they create anywhere on the internet – they can sign articles they publish, blog posts, comments, photos, likes and +1’s, anything really

Now that they’ve started to identify their content, it’s time to make everyone aware of it:

  • Notifications about the content users generate need to be broadcast in real time to a stream of events about the user
  • Notifications can be published to the stream by the browser, or a browser plug-in, or by the third-party application on which the content was generated
  • Before being accepted into a user’s stream, notifications need to be verified as being about the user and their content by the presence of a digital signature
  • Other parties interested in following a user can subscribe to a user’s feed

But that’s all in the public eye. To have a social network, you really need to allow for some privacy:

  • Encrypt data, and allow it to be decrypted selectively – this may include partial content – for example, it’d be useful to have a comment on an otherwise unencrypted site encrypted, only accessible by a select few consumers
  • Allow encrypted content to be sent over plain HTTP over TCP (not TLS) – this way the encrypted payload can be mirrored, which also allows for consumer privacy (if the consumer can access encrypted data from a mirror, they can do so privately, without the knowledge of the publisher)
  • Encryption is performed with a unique key for every piece of content
  • Decryption is selective in that the decryption key is given out selectively by the publisher (based on authorization checks they perform) – a rough sketch of how this might look follows below
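Here is the rough sketch referenced above – one possible (purely illustrative) way, again using Node’s crypto module, to encrypt each piece of content with its own key and hand that key out selectively:

import * as crypto from 'crypto';

// Each piece of content gets its own random symmetric key.
function encryptContent(content: string) {
    const contentKey = crypto.randomBytes(32); // unique key per piece of content
    const iv = crypto.randomBytes(16);
    const cipher = crypto.createCipheriv('aes-256-cbc', contentKey, iv);
    const encrypted = Buffer.concat([cipher.update(content, 'utf8'), cipher.final()]);
    return { encrypted, iv, contentKey };
}

// The publisher decides who gets access: the content key is wrapped with each
// authorized consumer's public key, so only they can unwrap it.
function grantAccess(contentKey: Buffer, consumerPublicKeyPem: string): Buffer {
    return crypto.publicEncrypt(consumerPublicKeyPem, contentKey);
}

// An authorized consumer unwraps the content key with their private key and decrypts.
function decryptContent(encrypted: Buffer, iv: Buffer, wrappedKey: Buffer, consumerPrivateKeyPem: string): string {
    const contentKey = crypto.privateDecrypt(consumerPrivateKeyPem, wrappedKey);
    const decipher = crypto.createDecipheriv('aes-256-cbc', contentKey, iv);
    return Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
}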