Using class inheritance to hook into the Angular 2 component lifecycle

I was thinking of a way to use class inheritance to hook into certain component lifecycle hooks, without needing to worry about them in the extending class (no knowledge needed, no super() calls to forget about). This does mean “straying off the path” a little bit, and there may be better ways to do this.

Observables in Angular 2 are a powerful thing. Unlike the Angular 1 hero, Promises, they represent streams of asynchronous data, not just single events. This means that a subscription to an observable doesn’t necessarily have an end.

With ngrx/router I found myself using them a lot, but precisely because they are streams, they need careful cleanup, or we risk leaving a subscription running after a component has been destroyed.

A typical way we can do this is using ngOnDestroy:

export class Component implements OnDestroy {
    private subscription: Subscription;
    private count: number;

    constructor(private pingService: PingService) {
        this.subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );
    }

    ngOnDestroy() {
        this.subscription.unsubscribe();
    }
}

Simple enough on its own, but sure to add a lot of repetition and complexity to a class with more than one subscription. We can automate this, and the best way I found was to extend a base class:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    ngOnDestroy() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }
}

This makes the previous class simpler:

export class Component extends SafeUnsubscriber {
    private count: number;

    constructor(private pingService: PingService) {
        super();

        let subscription = this.pingService.ping
            .subscribe(
                ping => {
                    this.count = ping;
                }
            );

        this.safeSubscription(subscription);
    }
}

Which is great, but what if we also need ngOnDestroy in the extending component? Conventional inheritance allows us to override it and call super.ngOnDestroy(), but in this particular case I don’t want to leave that as a possibility – I want to always unsubscribe on destroy, regardless of whether or not ngOnDestroy was overridden.

So in this case, a little hack is acceptable, in my opinion – we can make sure the unsubscriber code always runs on ngOnDestroy, preventing mistakes by omission and keeping the extending class cleaner:

export class SafeUnsubscriber implements OnDestroy {
    private subscriptions: Subscription[] = [];

    constructor() {
        // Capture whatever ngOnDestroy resolves to through the prototype
        // chain (possibly a subclass override)...
        let f = this.ngOnDestroy;

        // ...and shadow it with an instance-level version that calls the
        // original and then always unsubscribes.
        this.ngOnDestroy = () => {
            f.call(this);
            this.unsubscribeAll();
        };
    }

    protected safeSubscription(sub: Subscription): Subscription {
        this.subscriptions.push(sub);
        return sub;
    }

    private unsubscribeAll() {
        this.subscriptions.forEach(element => {
            !element.isUnsubscribed && element.unsubscribe();
        });
    }

    ngOnDestroy() {
        // no-op
    }
}

Now, even if ngOnDestroy gets overridden, the private method unsubscribeAll still runs: the constructor (which always runs, since TypeScript requires extending classes to call super()) makes sure of it. ngOnDestroy, on the other hand, only exists as a no-op, to ensure the code runs regardless of whether or not one was defined in the extending component.

How does this work, then? JavaScript (and TypeScript, by extension) uses prototypal inheritance, which means that super is the prototype – this is the reason TypeScript makes it mandatory to call super() in the extending class’s constructor before any references to this, so that class inheritance expectations are guaranteed. By assigning to this.ngOnDestroy in the base class constructor, we are adding a property to the instance that shadows the prototype’s method – in this case, a function that calls the prototype’s version and then our own cleanup.
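To make the mechanism concrete, here is a minimal sketch of it on its own – hypothetical Base and Child classes outside Angular, mirroring what SafeUnsubscriber does with ngOnDestroy:

class Base {
    constructor() {
        // Whatever the prototype chain resolves for cleanUp at this point
        // (a subclass override, or the base no-op below)...
        let original = this.cleanUp;

        // ...gets shadowed by an instance property that wraps it.
        this.cleanUp = () => {
            original.call(this);
            console.log('base cleanup always runs');
        };
    }

    cleanUp() {
        // no-op, may be overridden by a subclass
    }
}

class Child extends Base {
    cleanUp() {
        console.log('child cleanup');
    }
}

new Child().cleanUp();
// logs "child cleanup", then "base cleanup always runs"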

Pretty dangerous stuff, but pretty useful as well.

Learning JavaScript in a post-Reactive landscape

I recently re-watched a talk by Thomas Figg – Programming is terrible. In the Q&A portion of the talk there is a (perhaps surprisingly) positive tone in one of his answers – that learning to code is, contrary to what some might choose to believe, more accessible than ever. He then mentions JavaScript, as it is as simple as it is ubiquitous, and arguably the most easily shareable code in the world – everything from a TV to a phone will run it.

I completely agree with this statement, as JavaScript is at its core an incredibly simple language, in both theory and practice – easy both to reason about and to get something running with. But increasingly complex abstractions have become an integral part of application development in JavaScript, raising the entry barrier for frontend developers higher and higher.

On Promises

Having worked as an AngularJS developer since its 0.x releases, I have more than gotten used to its $q library, modelled very closely after the Q library. Promises made sense to me, and any seasoned developer will most likely agree that they made asynchronous programming much easier to deal with.

Yet it wasn’t until joining a full-stack team and being tasked with tutoring my backend-heavy colleagues and QAs on Promises that I noticed just how big of a stretch they can be if you’re facing them for the first time. They are not trivial, especially when you have to stray from the typical examples and delve into more complex usages.
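Even a minimal chain – a generic sketch with a hypothetical fetchUser helper, not taken from any real codebase – already bundles several ideas at once: asynchrony, value transformation, and error propagation:

// Hypothetical async lookup that resolves after a short delay.
function fetchUser(id: number): Promise<{ id: number; name: string }> {
    return new Promise(resolve =>
        setTimeout(() => resolve({ id, name: 'Ada' }), 100));
}

fetchUser(1)
    .then(user => user.name.toUpperCase()) // transform the resolved value
    .then(name => console.log(name))       // logs "ADA"
    .catch(err => console.error(err));     // errors from any step above land here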

On Reactive Programming

Reactive programming takes the concept of asynchronous programming further. Compared to Promises, it is a step further up the abstraction scale, making it easier to handle complex situations and concurrency. Unfortunately, it is also much more complex conceptually – and thus harder to get into and harder to reason about.

Angular 2 fully supports and depends on RxJS, and although it is an “opt-in” kind of thing (just call .toPromise() on any Observable and it magically becomes just that), it is ubiquitous in the Angular 2 community. Go to any chatroom or forum and you’ll see that you are expected to be comfortable with it.
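As a rough sketch of that opt-out with Angular 2’s Http service (RxJS 5 era; the service name, URL and response handling are made up for illustration, and the toPromise operator has to be imported first):

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';

@Injectable()
export class StatusService {
    constructor(private http: Http) {}

    // The Observable returned by Http is collapsed into a one-off Promise.
    check(): Promise<boolean> {
        return this.http.get('/api/status')
            .toPromise()
            .then(response => response.ok);
    }
}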

A world of abstractions

AngularJS had a big problem – it looked easy, and it felt easy, until you tried doing anything complex with it. Angular 2 doesn’t make that mistake, showing its hand from the get-go. What this might mean for the community I don’t know – hopefully better code?

With Promises now part of the ES6 standard, we are moving into a future where they are commonplace – jQuery 3’s Deferreds are Promises/A+ compliant, for instance. The barrier to entry for developers is pushed higher at every level.

As a teacher, you learn to avoid abstractions when introducing programming to someone for the first time. An object-oriented language is not a good first language, for obvious reasons. I wonder if, at some point, JavaScript will stop being a good first language as well.

A functional reactive alternative to Spring

Modern-day Spring allows you to be pretty concise. You can get an elaborate web service up and running using very little code. But when you write idiomatic Spring, you find yourself strewing your code with lots of magic annotations, whose function and behavior are hidden within complex framework code and documentation. When you want to stray away slightly from what the magic annotations allow, you suddenly hit a wall: you start debugging through hundreds of lines of framework code to figure out what it’s doing, and how you can convince the framework to do what you want instead.

datamill is a Java web framework that is a reaction to that approach. Unlike other modern Java frameworks, it makes the flow and manipulation of data through your application highly visible. How does it do that? It uses a functional reactive style built on RxJava. This allows you to be explicit about how data flows through your application, and how to modify that data as it does. At the same time, if you use Java 8 lambdas (datamill and RxJava are intended to be used with lambdas), you can still keep your code concise and simple.

Let’s take a look at some datamill code to illustrate the difference:

public static void main(String[] args) {
    OutlineBuilder outlineBuilder = new OutlineBuilder();

    Server server = new Server(
        rb -> rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))
            .elseIfMatchesBeanMethod(outlineBuilder.wrap(new TokenController()))
            .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))
            .orElse(r -> r.respond(b -> b.notFound())),
        (request, throwable) -> handleException(throwable));

    server.listen(8081);
}

A few important things to note:

  • datamill applications are primarily intended to be started as standalone Java applications – you explicitly create the HTTP server, specify how requests are handled, and have the server start listening on a port. Unlike traditional JEE deployments where you have to worry about configuring a servlet container or an application server, you have control of when the server itself is started. This also makes creating a Docker container for your server dead simple. Package up an executable JAR using Maven and stick it in a standard Java container.
  • When an HTTP request arrives at your server, it is obvious how it flows through your application. The line

    rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))

    says that the server should first check if the request is an HTTP GET request for the URI /status, and if it is, return an HTTP OK response.

  • The next two lines show how you can organize your request handlers while still maintaining an understanding of what happens to the request. For example, the line

    .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))

    says that we will see if the request matches a handler method on the UserController instance we passed in. To understand how this matching works, take a look at the UserController class, and one of the request handling methods:

    @Path("/users")
    public class UserController {
     ...
     @GET
     @Path("/{userName}")
     public Observable < Response > getUser(ServerRequest request) {
       return userRepository.getByUserName(request.uriParameter("userName").asString())
        .map(u -> new JsonObject()
         .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
         .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
         .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
        .flatMap(json -> request.respond(b -> b.ok(json.asString())))
        .switchIfEmpty(request.respond(b -> b.notFound()));
      }
      ...
    }

    You can see that we use @Path and @GET annotations to mark request handlers. But the difference is that you can pinpoint where the attempt to match the HTTP request to an annotated method was made. It was within your application code – you did not have to go digging through hundreds of lines of framework code to figure out how the framework is routing requests to your code.

  • Finally, in the code from the UserController, notice how the response is created – and how explicit the composition of the JSON is within datamill:
    .map(u -> new JsonObject()
        .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
        .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
        .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
    .flatMap(json -> request.respond(b -> b.ok(json.asString())))

    You have full control of what goes into the JSON. For those who have ever tried to customize the JSON output by Jackson to omit properties, or for the poor souls who have tried to customize responses when using Spring Data REST, you will appreciate the clarity and simplicity.

Just one more example from an application using datamill – consider the way we perform a basic select query:

public class UserRepository extends Repository<User> {
    ...
    public Observable<User> getByUserName(String userName) {
        return executeQuery(
            (client, outline) ->
                client.selectAllIn(outline)
                    .from(outline)
                    .where().eq(outline.member(m -> m.getUserName()), userName)
                    .execute()
                    .map(r -> outline.wrap(new User())
                        .set(m -> m.getId(), r.column(outline.member(m -> m.getId())))
                        .set(m -> m.getUserName(), r.column(outline.member(m -> m.getUserName())))
                        .set(m -> m.getEmail(), r.column(outline.member(m -> m.getEmail())))
                        .set(m -> m.getPassword(), r.column(outline.member(m -> m.getPassword())))
                        .unwrap()));
    }
    ...
}

A few things to note in this example:

  • Notice the visibility into the exact SQL query that is composed. For those of you who have ever tried to customize the queries generated by annotations, you will again appreciate the clarity. While in any single application, a very small percentage of the queries need to be customized outside of what a JPA implementation allows, almost all applications will have at least one of these queries. And this is usually when you get the sinking feeling before delving into framework code.
  • Take note of the visibility into how data is extracted from the result and placed into entity beans.
  • Finally, take note of how concise the code remains, with the use of lambdas and RxJava Observable operators.

Hopefully that gives you a taste of what datamill offers. What we wanted to highlight was the clarity you get about how requests and data flow through your application, and about how that data is transformed along the way.

datamill is still in an early stage of development but we’ve used it to build several large web applications. We find it a joy to work with.

We hope you’ll give it a try – we are looking for feedback. Go check it out.