Angular2 router’s bumpy ride – a user’s perspective

Edit: ui-router is not mentioned at all in this article and should have been. This is only due to the fact that I haven’t worked with it enough to comment on it, and also because it is still in alpha, and not a popular ng2 router alternative in the community yet. That being said, it is the “only” router for me when in ng1-land, so I’m eager to give it a try, or see how much of its architecture influences Vladivostok.

Angular 2 is now in a release candidate state, after several beta releases, and while the core of this new iteration is an extremely solid one, many of its components are still under heavy development, which makes using them quite a bumpy ride.

The router component is perhaps the most notorious among them, with two iterations deprecated in the space of a few short months – one officially so, and one never really seeing the light of day – and a third one on the way.

Now, it needs to be said that creating something like a router is far from trivial, particularly so if you are setting out to “revolutionise”, meaning solve all the known problems of previous routers. In the case of routing, these are lazy loading, handling complex route structures, and enough flexibility to account for all use cases (with more or less legwork required).

Also, the reason why the Angular team has gone through so many iterations has to do with how closely they are working with the community of users – the current iteration took a mere couple of months to get thrown out, so quickly did the community spot its shortcomings.

So, how do all of these routers differ, and where are they headed?

Enter @angular/router-deprecated

Angular2’s first stab at a router relied heavily on the component paradigm, as does Angular 2 in general. Components could have @RouteConfig annotations with route lists defined, and if they did, these were parsed and the relevant components loaded into a node in the component’s template.

Most lifecycle hooks and checks could then live in the component itself, keeping things neat and clean. This approach had a couple of problems:

  • As Routes were defined in the class file, deep linking to components that had not yet been loaded was impossible.
  • @CanActivate, which determined whether or not a certain route could be activated, had to be an annotation, as it ran before the Component itself was instantiated.
  • Routes followed the same isolation pattern that Components did, but this meant not having access to the full route tree at any point, and having to hack your way around everything.

Enter @angular/router(-deprecated?)

The first attempt to solve these issues was promising:

  • It solved the deep linking problem by having routes be directly inferable from the url.
  • It intended to replace @CanActivate with CanActivateChild – it is now the parent’s task to determine if the route activation process can continue.
  • Access to the whole route tree was available in all hooks

Unfortunately, it perpetuated some of the issues, like routes still being defined as a Component’s annotation, and its development didn’t get very far before it got scrapped – first unofficially and now officially so.

Enter @ngrx/router, and the “new new new Router”

If “new new new Router” seems like an atrocious expression it’s because it is – but it’s been a recurrent one in places like Gitter or GitHub issues. It is Vladivostok, and its approach is very similar to @ngrx/router (as its devs have been collaborating closely with the Angular team).

@ngrx/router takes a cleaner, leaner and more low-level approach to routing:

  • Routes are defined as objects in their own right and injected into the app directly. Their loading becomes completely independent from the Components themselves.
  • A route has Guards that run whenever the route tree passes through it; these are again completely independent of which Component is actually being loaded.
  • Changes in URL that do not actually change routes, but only parameters (like changing from /user/1 to /user/2, for instance), do nothing by default – it is the user’s responsibility to listen to these changes and trigger behaviour.
  • Routes, RouteParams, QueryParams, RouteData… All of these are Observables that any Component can listen to – this makes it both more flexible and simpler, especially when creating something like a breadcrumb component, or anything more specific or unique.

A conclusion of sorts

Angular2 is heading in a really good direction, despite (or perhaps because of) all the growing pains it is going through. The downside of this is that it can’t live up to the extremely high expectations for everything from power to speed to ease of use, while in its betas and RCs.

The best way to get ready for the new router is to delve into @ngrx/router, which coincidentally is a pretty powerful tool in its own right. The documentation is sparse but its developers and users are quick to answer in their Gitter channel, and it is flexible enough to handle almost anything you’ll want to throw at it.

I’ll be throwing a couple of things at it myself, and will write about that next.

Abusing Cucumber, for a good cause

In several Java houses I worked with in the past, we used Cucumber to do Behavior Driven Development. No, hang on a sec – that’s definitely an exaggeration. I think it’s more accurate to say we used Cucumber as a way to write acceptance tests. Wait, that’s still an exaggeration. We used Cucumber to write a mix of integration tests, and what may generously be called functional tests (and very occasionally bordering on acceptance tests). Yeah, that’s about right. We used it as a tool to write tests in plain English. But you know what? I think that’s OK.

Cucumberistas, BDDers and DDDers will tell you it’s about everyone – business, QA and development – coming together to come up with executable specifications.  It’s about everyone speaking in a universal language – a language that the business analysts can share with the testers, and the developers. A language about the business problems an application is designed to solve. And a language for automated acceptance tests. Well maybe, just maybe, you are in an organization where that’s true. Where your Cucumber tests describe your user stories or specifications in the domain language for your application. If you are, good for you. You’re doing it “right”.

But for everyone else, I want to talk about some work we did to support your Cucumber test-writing efforts in the “wrong” way. And we don’t want to scold you, or admonish you for doing it “wrong”. No, in fact, we want to support you in your efforts to just write tests for HTTP services in English.

What I am talking about is best illustrated with an example – here’s how we use Cucumber to write tests for our application:

Background:
    Given the user stores http://localhost:9080 as apiRoot

Scenario: Successful registration flow
    Given a random alphanumeric string is stored as testUserName
    And a user makes a POST call to "{apiRoot}/users" with payload:
    """
    {
        "email": "{testUserName}@gmail.com",
        "password": "pass",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London"
    }
    """
    Then the user should get a 200 response and JSON matching:
    """ 
    {
        "email": "{testUserName}@gmail.com",
        "userName": "{testUserName}",
        "name": "Test User",
        "location": "London",
        "id": "*"
    }
    """
    And the email containing subject Activate your account for {testUserName}@gmail.com is stored as activationEmail
    And the first link in stored HTML activationEmail is stored as activationLink
    And the regex activations/(\w+) is used on stored value activationLink to capture activationToken
    When a user makes a POST call to "{apiRoot}/tokens/activation/{activationToken}"
    Then the user should get a 200 response
    Given the user "{testUserName}@gmail.com"
    is logged in with password "pass"
    on "{apiRoot}"
    When a user makes a GET call to "{apiRoot}/widgets/{testUserName}"
    Then the user should get a 200 response and JSON matching:
    """ 
    []
    """

Yes, what we have here is a functional test for one of our stories. But all the steps are essentially an English version of what an HTTP client would do when hitting the service. A business analyst probably wouldn’t want to read that, but that’s really OK for us – business analysts in our experience don’t read the tests. Developers and testers read our tests, and it’s a great English-language description of what the test does. I don’t need to click through the code behind the step definitions to know what’s going on. As a developer, I can understand right away what is being done.

So if you are OK with writing tests this way, check out the cucumber module we created as part of datamill. It has all the step definitions you see in the example above. If you are writing HTTP services, especially those that serve JSON, and are backed by a relational database, you will find it useful. Oh, and we threw in some useful step definitions for dealing with emails too because we needed them.

I want to end by admitting the following about this approach: yes, sometimes this can get repetitive, with a lot of copy-pasting. So, I will leave you with a last example of a custom step definition we created that combines the utility ones above:

import cucumber.api.java.en.Given;
import foundation.stack.datamill.cucumber.DatabaseSteps;
import foundation.stack.datamill.cucumber.HttpSteps;
import foundation.stack.datamill.cucumber.PropertySteps;
import foundation.stack.datamill.http.Method;

public class UserSteps {
    private final DatabaseSteps databaseSteps;
    private final HttpSteps httpSteps;
    private final PropertySteps propertySteps;

    public UserSteps(PropertySteps propertySteps, DatabaseSteps databaseSteps, HttpSteps httpSteps) {
        this.propertySteps = propertySteps;
        this.databaseSteps = databaseSteps;
        this.httpSteps = httpSteps;
    }

    @Given("^the user \"(.+)\" is logged in with password \"(.+)\" on \"(.+)\"$")
    public void loginAsUser(String email, String password, String apiRoot) {
        // Log in by POSTing the credentials to the tokens endpoint, then use the
        // stored JWT as the Authorization header for subsequent calls
        httpSteps.userMakesCallWithProvidedPayload(Method.POST, apiRoot + "/tokens", "{" +
                "\"email\": \"" + email + "\"," +
                "\"password\": \"" + password + "\"" +
                "}");
        httpSteps.assertStatusAndNonEmptyResponse(200);
        httpSteps.storeResponse("JWT");
        httpSteps.addValueToHeader("Authorization", "{JWT}");
    }
}

Check out datamill, and the cucumber module!

Your own identity on the Internet

If you ever thought about having users log in to your site, you’ve probably considered adding Facebook Login, or OAuth2 and OpenID Connect. And for good reason – they’re widely used.

An identity you own, to sign your content

But what if you wanted to allow users to own their identity? What would that look like? For a lot of technical folks, establishing identity usually means using a private key. Establishing identity using a private key also has the advantage that the user owns their own identity.

Let’s say that you establish your identity using your own private key. Any content you create can then be signed by you using your private key. Anyone can verify that it was you who created the content if they have your public key.

How does someone looking at a signed piece of content know what key was used to sign it? Well, you can publish your public key somewhere, and put a URL to that key next to the signature on the content you create. The URL would allow the reader to download the public key they need to verify the signature.
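To make that concrete, here is a minimal Java sketch of the sign-then-verify round trip using the standard java.security APIs – the key pair, content and class name are made up for illustration, and a real scheme would still have to settle on signature algorithms and key distribution formats:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ContentSigning {
    public static void main(String[] args) throws Exception {
        // A key pair representing the author's identity (normally generated once and kept safe)
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair authorKeys = generator.generateKeyPair();

        byte[] content = "My latest blog post".getBytes(StandardCharsets.UTF_8);

        // The author signs the content with their private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(authorKeys.getPrivate());
        signer.update(content);
        byte[] contentSignature = signer.sign();

        // A reader verifies the signature with the public key downloaded from
        // the URL published next to the content
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(authorKeys.getPublic());
        verifier.update(content);
        System.out.println("Signed by the key's owner: " + verifier.verify(contentSignature));
    }
}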

Mirrors

But what if the URL to the public key goes down? Well, we can set up mirrors for public keys (you might use alternatives such as key servers here). Users should be able to notify mirrors of a new public key that they’ve published. Sites hosting content can also serve cached copies of public keys (that they picked up from the original source, or from a mirror) along with the content.

Claims, and verified claims

So far, we only have the ability to establish that some piece of content was created by someone owning a certain key. But we have not established who the person behind the key is as of yet. How can we do that? Well, let’s say that with every key, you can have a set of claims – metadata attributes associated with them. So for example, we can say some key key1 belongs to some user claiming that their fullName is Joe Blogs, and that their facebookProfile is http://facebook.com/joeblogs (fullName and facebookProfile are claims here). Great, so now we can say that wherever we see content signed with key key1, it belongs to Joe Blogs, whose Facebook profile is at http://facebook.com/joeblogs.

Of course, the obvious problem with this is that anyone can publish their key, and associate it with a bogus set of claims. What we need is a way to have verified claims. For example, we would especially want to verify that someone who claims to own a particular Facebook profile actually owns that profile. How do we do that? Well, we can have a service that provides verified facebookProfile claims. That is, a service that uses Facebook Login to allow the owner of a key to log in to their Facebook account to prove ownership, and only then confirm that the owner of that key owns that Facebook profile.

Here is how that flow might work:

  1. The owner of the key signs a facebookProfile claim with their private key – let’s call the signature they produce here claimSignature
  2. They provide claimSignature to the Facebook verification service, which should first check that the provided claimSignature is correct and was produced by the owner of the key
  3. It should then have them login to the Facebook profile they claim to own using Facebook Login
  4. Once the service has verified that they own the Facebook account, the service would then sign claimSignature with its own private key to create a verifiedClaimSignature

Now, if we were given the claimSignature, and the verifiedClaimSignature, together with the facebookProfile claim, we can trust that association a bit more. We would need to decide that the Facebook verification service we used is trustworthy when it comes to evaluating facebookProfile claims. If we do, all we need is the public key for that service to verify the verifiedClaimSignature and confirm that the facebookProfile provided can be trusted.
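Here is a rough Java sketch of how a consumer might check that chain of signatures – the method and parameter names are purely illustrative, not part of any existing library, and the claim is assumed to be serialized to bytes in some agreed-upon way:

import java.security.PublicKey;
import java.security.Signature;

public class ClaimVerification {
    // Returns true only if the claim was signed by the user's key AND the
    // claimSignature was counter-signed by the verification service we trust
    public static boolean isClaimTrusted(byte[] claim, byte[] claimSignature,
                                         byte[] verifiedClaimSignature,
                                         PublicKey userKey, PublicKey serviceKey)
            throws Exception {
        // Step 1: the facebookProfile claim must have been signed by the owner of the key
        Signature userCheck = Signature.getInstance("SHA256withRSA");
        userCheck.initVerify(userKey);
        userCheck.update(claim);
        if (!userCheck.verify(claimSignature)) {
            return false;
        }

        // Step 2: the claimSignature must have been signed by the verification service
        Signature serviceCheck = Signature.getInstance("SHA256withRSA");
        serviceCheck.initVerify(serviceKey);
        serviceCheck.update(claimSignature);
        return serviceCheck.verify(verifiedClaimSignature);
    }
}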

Decentralized identity

What does this allow at the end of the day? Suppose you wrote a blog post, or posted a comment somewhere on the web. You can now sign that content, and someone reading the content would be able to know that it was you who wrote it. And they would be able to know that based on the identity you own – your personal private key. Everyone can own their own identity.

A functional reactive alternative to Spring

Modern-day Spring allows you to be pretty concise. You can get an elaborate web service up and running using very little code. But when you write idiomatic Spring, you find yourself strewing your code with lots of magic annotations, whose function and behavior are hidden within complex framework code and documentation. When you want to stray away slightly from what the magic annotations allow, you suddenly hit a wall: you start debugging through hundreds of lines of framework code to figure out what it’s doing, and how you can convince the framework to do what you want instead.

datamill is a Java web framework that is a reaction to that approach. Unlike other modern Java frameworks, it makes the flow and manipulation of data through your application highly visible. How does it do that? It uses a functional reactive style built on RxJava. This allows you to be explicit about how data flows through your application, and how to modify that data as it does. At the same time, if you use Java 8 lambdas (datamill and RxJava are intended to be used with lambdas), you can still keep your code concise and simple.

Let’s take a look at some datamill code to illustrate the difference:

public static void main(String[] args) {
    OutlineBuilder outlineBuilder = new OutlineBuilder();

    Server server = new Server(
            rb -> rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))
                    .elseIfMatchesBeanMethod(outlineBuilder.wrap(new TokenController()))
                    .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))
                    .orElse(r -> r.respond(b -> b.notFound())),
            (request, throwable) -> handleException(throwable));

    server.listen(8081);
}

A few important things to note:

  • datamill applications are primarily intended to be started as standalone Java applications – you explicitly create the HTTP server, specify how requests are handled, and have the server start listening on a port. Unlike traditional JEE deployments where you have to worry about configuring a servlet container or an application server, you have control of when the server itself is started. This also makes creating a Docker container for your server dead simple. Package up an executable JAR using Maven and stick it in a standard Java container.
  • When an HTTP request arrives at your server, it is obvious how it flows through your application. The line

    rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))

    says that the server should first check if the request is an HTTP GET request for the URI /status, and if it is, return an HTTP OK response.

  • The next two lines show how you can organize your request handlers while still maintaining an understanding of what happens to the request. For example, the line

    .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))

    says that we will see if the request matches a handler method on the UserController instance we passed in. To understand how this matching works, take a look at the UserController class, and one of the request handling methods:

    @Path("/users")
    public class UserController {
     ...
     @GET
     @Path("/{userName}")
     public Observable < Response > getUser(ServerRequest request) {
       return userRepository.getByUserName(request.uriParameter("userName").asString())
        .map(u -> new JsonObject()
         .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
         .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
         .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
        .flatMap(json -> request.respond(b -> b.ok(json.asString())))
        .switchIfEmpty(request.respond(b -> b.notFound()));
      }
      ...
    }

    You can see that we use @Path and @GET annotations to mark request handlers. But the difference is that you can pin-point where the attempt to match the HTTP request to an annotated method was made. It was within your application code – you did not have to go digging through hundreds of lines of framework code to figure out how the framework is routing requests to your code.

  • Finally, in the code from the UserController, notice how the response is created – and how explicit the composition of the JSON is within datamill:
    .map(u -> new JsonObject()
        .put(userOutlineCamelCased.member(m -> m.getId()), u.getId())
        .put(userOutlineCamelCased.member(m -> m.getEmail()), u.getEmail())
        .put(userOutlineCamelCased.member(m -> m.getUserName()), u.getUserName()))
    .flatMap(json -> request.respond(b -> b.ok(json.asString())))

    You have full control of what goes into the JSON. For those who have ever tried to customize the JSON output by Jackson to omit properties, or for the poor souls who have tried to customize responses when using Spring Data REST, you will appreciate the clarity and simplicity.

Just one more example from an application using datamill – consider the way we perform a basic select query:

public class UserRepository extends Repository<User> {
    ...
    public Observable<User> getByUserName(String userName) {
        return executeQuery(
            (client, outline) ->
                client.selectAllIn(outline)
                    .from(outline)
                    .where().eq(outline.member(m -> m.getUserName()), userName)
                    .execute()
                    .map(r -> outline.wrap(new User())
                        .set(m -> m.getId(), r.column(outline.member(m -> m.getId())))
                        .set(m -> m.getUserName(), r.column(outline.member(m -> m.getUserName())))
                        .set(m -> m.getEmail(), r.column(outline.member(m -> m.getEmail())))
                        .set(m -> m.getPassword(), r.column(outline.member(m -> m.getPassword())))
                        .unwrap()));
    }
    ...
}

A few things to note in this example:

  • Notice the visibility into the exact SQL query that is composed. For those of you who have ever tried to customize the queries generated by annotations, you will again appreciate the clarity. While in any single application, a very small percentage of the queries need to be customized outside of what a JPA implementation allows, almost all applications will have at least one of these queries. And this is usually when you get the sinking feeling before delving into framework code.
  • Take note of the visibility into how data is extracted from the result and placed into entity beans.
  • Finally, take note of how concise the code remains, with the use of lambdas and RxJava Observable operators.

Hopefully that gives you a taste of what datamill offers. What we wanted to highlight was the clarity you get about how requests and data flow through your application, and about how data is transformed along the way.

datamill is still in an early stage of development but we’ve used it to build several large web applications. We find it a joy to work with.

We hope you’ll give it a try – we are looking for feedback. Go check it out.

Weave social into the web

Disclaimer: This is the second post in a series where we are exploring a decentralized Facebook (here’s the first). It’s written by software engineers, and is mostly about imagining a contrived (for now) technical architecture.

How do you weave elements of Facebook into the web? Start by allowing users to identify themselves and all their content:

  • Establishing a user’s identity can be done rather straightforwardly by creating a unique public-private key pair for a user and allowing them to digitally sign things using their private key
  • Users can then digitally sign content they create anywhere on the internet – they can sign articles they publish, blog posts, comments, photos, likes and +1’s, anything really

Now that they’ve started to identify their content, it’s time to make everyone aware of it:

  • Notifications about content users generate need to be broadcast in real-time to a stream of events about the user
  • Notifications can be published to the stream by the browser, or a browser plug-in, or by the third-party application on which the content was generated
  • Before being accepted into a user’s stream, notifications need to be verified as being about the user and their content by the presence of a digital signature
  • Other parties interested in following a user can subscribe to a user’s feed

But that’s all in the public eye. To have a social network, you really need to allow for some privacy:

  • Encrypt data, and allow it to be decrypted selectively – this may include partial content – for example, it’d be useful to have a comment on an otherwise unencrypted site encrypted, only accessible by a select few consumers
  • Allow encrypted content to be sent over plain HTTP over TCP (not TLS) – this way the encrypted payload can be mirrored, and allow consumer privacy (if the consumer can access encrypted data from a mirror, it can do so privately, without the knowledge of the publisher)
  • Encryption is performed with a unique key for every piece of content
  • Decryption is selective in that the decryption key is given out selectively by the publisher (based on authorization checks they perform) – a rough sketch of the publisher side follows this list
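As promised, here is a rough sketch of what that publisher side could look like in Java – the class, method names and in-memory key store are all made up for illustration, and a real implementation would also need to persist keys and ship the IV alongside the ciphertext:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContentPublisher {
    // Content keys held by the publisher, handed out per content ID only after
    // an authorization check
    private final Map<String, SecretKey> contentKeys = new ConcurrentHashMap<>();

    public byte[] publish(String contentId, String content) throws Exception {
        // A fresh symmetric key for every piece of content
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(128);
        SecretKey contentKey = generator.generateKey();
        contentKeys.put(contentId, contentKey);

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, contentKey, new GCMParameterSpec(128, iv));

        // The encrypted payload can now travel over plain HTTP and be mirrored freely
        return cipher.doFinal(content.getBytes(StandardCharsets.UTF_8));
    }

    public SecretKey requestKey(String contentId, String requester) {
        // Selective decryption: the key is only released if the requester is authorized
        return isAuthorized(requester, contentId) ? contentKeys.get(contentId) : null;
    }

    private boolean isAuthorized(String requester, String contentId) {
        return true; // placeholder for whatever authorization check the publisher performs
    }
}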

Deface, a decentralized Facebook

A disclaimer: we are a bunch of software engineers, so what follows is a wild technical thought experiment. Bring your imagination and your architectural chops.

What would a decentralized Facebook look like? Well, users should be able to:

  • Create a basic profile
  • Maintain one or more lists of friends
  • Share content with everyone on one or more of these lists
  • Have shared content only accessible by people on the list it was shared with
  • View content from all of their connections in one chronological “timeline”
  • View content from another user without the other user knowing how many times they’ve viewed it (consider how important it is that you can see someone’s photo on Facebook without them knowing, surreptitious as it sounds)

How would it work? Let’s start with user profiles and content:

  • Users can host their own profiles and content, or sign up with a service provider that hosts several users
  • Users can create a basic profile, which includes their name, date of birth, and other basic biographical data
  • When they publish content, it is added to their personal timeline, and an event is shared with their connections notifying them of the new content

How do user connections and sharing work?

  • Each user maintains one or more lists of connections – for example, they may have a “friends” list, and a separate “colleagues” list
  • When they share content to a particular list, an event notification is shared with all the members on that list
  • Sharing of events can use a polling model where users poll for new events from their connections
  • Alternatively, sharing can use a publish/subscribe model – in this case, users can subscribe to one of their connection’s events so that events get published to them

How do users protect their content?

  • When a user publishes content, it is given a unique ID, and is encrypted with a unique key for that piece of content
  • The event notifications sent out for that content have a reference to the content’s unique ID
  • The consuming application uses the content ID to ask the publisher for the symmetric key it can use to decrypt the content
  • Once it has the symmetric key, the consuming user can access the content – the consumer side is sketched after this list
  • The publishing user may subsequently refuse to give out the key for a particular piece of content (revoking access)
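To complete the picture from the consumer’s side, here is a hedged Java sketch of decrypting a piece of content once the publisher has released its key – the EncryptedContent holder and the AES/GCM format mirror the earlier publisher-side sketch and are assumptions, not an existing API:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class ContentConsumer {
    // Hypothetical shape of what a publisher or mirror hands back for a content ID
    public static class EncryptedContent {
        final byte[] iv;
        final byte[] ciphertext;

        public EncryptedContent(byte[] iv, byte[] ciphertext) {
            this.iv = iv;
            this.ciphertext = ciphertext;
        }
    }

    // The ciphertext can come from the publisher or any mirror; the key bytes
    // have to come from the publisher, who may refuse them (revoking access)
    public String readContent(EncryptedContent encrypted, byte[] contentKeyBytes) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(contentKeyBytes, "AES"),
                new GCMParameterSpec(128, encrypted.iv));
        return new String(cipher.doFinal(encrypted.ciphertext), StandardCharsets.UTF_8);
    }
}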

What all gets protected?

  • We protect the user’s profile information (portions of this are given unique IDs), as well as any content the users generate – this may include status updates, longform text, links, photos, location updates, etc.
  • Users may opt to make any of their content accessible publicly – in this case, it does not get encrypted

Content mirroring, not racking up a view count

  • The encrypted pieces of content, identified by unique IDs, can be mirrored by public or private mirrors – since the data is encrypted, only those who obtain the proper symmetric key can decrypt the content
  • Consumers can choose to access content directly from a publisher, or through a public mirror
  • Public mirrors would be expected to not make view counts available on pieces of content

What are the potential weaknesses and exploits? Leave your thoughts as a comment

Cloud pricing is unfair

Is it fair to round the CPU usage of a virtual machine to the nearest hour when charging customers for cloud computing? We were curious about this so we thought we would ask the Internet. Of course, we wanted to get people’s opinions on cloud pricing overall so we asked about more than just the rounding of CPU use. We are not statisticians so the approach we took was rather simple, and took the form of an online survey. Our audience was a broad group of people involved in software, and included many independent developers, as well as those working as part of an organization.

[Survey chart: poll-providers]

When making the decision to go with a particular platform, by far the most important factors were the cost and quality of service. Surprisingly, brand name and trust was only somewhat important for many developers, especially those who were independent. The importance of brand name and trust was higher for those making the decision for teams and organizations.

[Survey chart: poll-factors]

The question we were most interested in was which pricing model was most appealing to users. The results showed that customers preferred to be charged a flat fee per month for a virtual machine – the Digital Ocean model. A similar model of paying a flat fee per month for a cloud application was also deemed fair. The most prevalent model, used by AWS, Azure and many other providers, of charging per unit of resource used was not particularly appealing when compared to the flat-fee approaches. Interestingly, those surveyed said that when their cloud applications exceeded a certain cost (when being charged per resource usage), they actually preferred to be switched automatically to a flat-fee model for the remainder of the billing period instead of having their applications suspended. This seems to indicate that when it comes to pricing, users find being charged per unit of resource consumed complex and unpredictable. They strongly favor a pricing model that gives them a predictable cost per month.

Finally, to answer the original question: is it fair to round to the nearest hour when charging users for CPU use? A most definite no.

While the results seem to indicate some solid opinions, I do want to point out that the survey is still open and if you have experience with cloud platforms and want to opine – follow the link below to our survey:

Opinions on cloud pricing