Breaking the monolith: scalable native apps

Building one-off apps with a single focus (for example games, picture viewers or “ToDo” apps) is relatively easy because those apps can be designed, tested and implemented in a holistic manner. In other words, the product teams that build them have a very clear picture of what the end product will be. Banking or telecommunications self-serve apps, on the other hand, are often less clearly defined or constantly evolving. These apps are generally produced by large organizations with competing or conflicting internal objectives, timelines and budgets. As a result, when built as monoliths, these apps are delivered late to market because each feature owned by one sub-division of the company has to wait on one or more major pieces provided by other sub-divisions.

How can these apps get to market faster?

Simplify, decompose into reusable units and break dependencies – in other words, large apps can get to market faster by going back to the basics of software design. Let’s use a simple example of a restaurant app. Let’s (fictitiously) call it CookR: an app that allows a customer to submit an order that gets prepared by a crowd-sourced chef. (Before I go any further, I would like to shamelessly ask that if any reader wishes to build this app, please credit back to this site for inspiration :-D, please and thank you!!!) The app must be available on Android and iOS and must launch on the same date.

Implementing this is relatively straightforward: build the app, ship it. How we build this app, however, will determine how quickly we can iterate on its features in the future. The typical approach here would be a single project with all the logic embedded, perhaps sorted by folder structure or package name.


This approach works, but we can definitely break the app into discrete units.

Step 1: Create a core component that does the basic functionality

A good example is an application scaffold that provides the UI layout and interaction design. This scaffold will reference the components defined in step 2 by library version and also contain any visual integration points that are required.

Step 2: Identify the discrete functions of the app

In CookR, this could include taking an order, sourcing a chef, providing status updates via push notification or processing payments. These units would then be built as components/libraries that can be uniquely versioned, expanded upon or completely re-written at any given time without affecting the progress or delivery targets of the overall application. This also means that each of these components can have its flows defined and implemented in total isolation. In a large organization, this translates into separate sub-divisions being able to quickly meet their own objectives and deliverables without affecting other teams – unless, of course, they have a hard dependency on a specific version of another component.

Step 3: Integrate each component

With steps one and two out of the way, integrating the components is a breeze. The only requirement here is that the core application scaffold defined in step 1 creates visual entry points (buttons, links etc.) that trigger calls into the components. Each component can then take over and deliver the experience defined within it.
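To make the wiring concrete, here is a minimal sketch of the scaffold/component relationship. This is illustrative JavaScript, not the native library code an actual Android or iOS build would use, and the component and method names are my own invention:

```javascript
// Sketch: the core scaffold knows nothing about a component's internals.
// It only exposes visual entry points that hand control over to the component.

// Each feature team ships a component exposing a uniform entry point.
const chefSourcing = {
  name: "ChefSourcing",
  version: "2.4.0",
  // start() takes over the screen and runs the component's own flow.
  start(context) {
    return `sourcing a chef for order ${context.orderId}`;
  },
};

// The scaffold maps visual entry points (buttons, links) to components.
class AppScaffold {
  constructor() {
    this.components = new Map();
  }
  register(component) {
    this.components.set(component.name, component);
  }
  // Invoked when the user taps the button wired to this component.
  onEntryPointTapped(name, context) {
    const component = this.components.get(name);
    if (!component) throw new Error(`No component registered for ${name}`);
    return component.start(context);
  }
}

const app = new AppScaffold();
app.register(chefSourcing);
console.log(app.onEntryPointTapped("ChefSourcing", { orderId: 42 }));
// → "sourcing a chef for order 42"
```

The key design point is that the scaffold only depends on the uniform entry-point contract, never on a component's internals.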

Our decomposed application now looks like this:


As you can see above, the overall application is now less brittle to change, because specific iterations can reference stable versions of the components (libraries). Departments are then free to re-use, re-create or otherwise augment the functionality provided by these libraries at will. In our example, suppose the organization decides that the application needs to go to market on March 9 with Chef Sourcing functionality that has Facebook integration. This structure allows the team to build that entire flow in Chef Sourcing 2.4.1 without affecting any existing code that depends on 2.4.0. The improvements can be unit and functional tested independently, without a single merge request until required (let’s say March 7, for example). In practice the dates may not be so close, but the key thing is that on March 7, depending on the UI design for the Facebook integration, this feature could be delivered to market with zero code changes to the application, since all code changes were made in the Chef Sourcing library that the core application consumes! Neat, right?
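The versioning idea can be sketched the same way. In a real project this would be your dependency manager (e.g. Gradle or CocoaPods) resolving a pinned library version; the toy registry below just makes the mechanism visible, and all names are hypothetical:

```javascript
// Sketch: the app pins an exact component version, so a team can build
// Chef Sourcing 2.4.1 (with the new Facebook flow) alongside the stable
// 2.4.0 without touching the shipping application.
const registry = new Map(); // "name@version" -> component

function publish(component) {
  registry.set(`${component.name}@${component.version}`, component);
}

function resolve(name, version) {
  const component = registry.get(`${name}@${version}`);
  if (!component) throw new Error(`${name}@${version} is not published`);
  return component;
}

publish({ name: "ChefSourcing", version: "2.4.0", facebook: false });
publish({ name: "ChefSourcing", version: "2.4.1", facebook: true });

// The shipping app keeps consuming 2.4.0...
console.log(resolve("ChefSourcing", "2.4.0").facebook); // → false
// ...until the one-line version bump on launch day.
console.log(resolve("ChefSourcing", "2.4.1").facebook); // → true
```

Shipping the feature then really is a one-line change: bump the pinned version in the core application.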

Of course, this doesn’t account for crazy scheduling or other factors, but this approach to mobile application structure opens up a world of possibilities for cross-functional team collaboration and faster application delivery.

This is the high level concept… Next week I will use the example to illustrate how this can be done on both iOS and Android…

#UntilThenCodeTight #HappyMonday

-Martello Jones


Hybrid Application Development – Part II

Let’s continue to de-mystify hybrid application development by tackling some common misconceptions. I will also explain two common strategies and break them down into pros and cons. If you are new to the concept of hybrid apps, Part 1 sets the foundation and explains the key concepts that we will build on from here onwards. Before proceeding, I recommend taking a look at Apache Cordova if you are unfamiliar with it. It is not a requirement, but since Cordova is out of scope for this series and is used within the project sources, it may be valuable to play around with it.

Sources for Part 2:

App Strategies

There are two approaches employed when creating hybrid apps. The first involves a native wrapper consuming remote content and data (let’s reference this as N->RD). The other uses a native wrapper with local content and remote data (let’s reference this as NR->D). The following sections visualize and explain the difference between the approaches.

N : Native Wrapper (Java / Objective C WebView implementation)

R : Resources (HTML/JavaScript/Images/Audio/Video etc)

D : Data (REST or other API layer)

-> : Separator clarifying where resources originate. Items on the left are local to the device, whereas those on the right are remote (require network download).


In the N->RD strategy, a native application wrapper is used to load content and data from a remote server. In the illustration above the application flow would be:

  1. The WebView inside the native wrapper loads the HTML pages from the remote server.
    1. All resources (JavaScript, CSS, images) referenced by the HTML files are then downloaded into the WebView on the client device. A single page application (SPA) will do this once, but note that a traditional multi-page web app will repeat this process each time the user navigates to a new page if the resource is not cacheable.
  2. The WebView may then issue subsequent API calls via ajax or sockets to get data – for example, performing login or retrieving customer profile data.
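Step 2 above is the same in both strategies, so it is worth a quick sketch. The endpoint path and the injected transport below are stand-ins of my own; in a real app this would be fetch/XHR against your own API:

```javascript
// Sketch: once the page assets are loaded, only data crosses the network.
// The transport is injected so the sketch runs without a real server.
async function loadCustomerProfile(transport, customerId) {
  const response = await transport(`/api/customers/${customerId}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

// Fake transport standing in for fetch/XHR.
const fakeTransport = async (url) => ({
  ok: true,
  status: 200,
  json: async () => ({ url, name: "Jane Doe" }),
});

loadCustomerProfile(fakeTransport, 7).then((profile) => {
  console.log(profile.name); // → "Jane Doe"
});
```

The important point is that these calls move small JSON payloads, not the heavy HTML/CSS/JS assets – which is exactly why where the assets come from dominates perceived performance.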


Slow Hybrid Apps

Whenever I hear this sentiment, I am almost always inclined to think that the application was created using the N->RD model. Within reasonable context, the statement is not incorrect. The reason is quite simple: it takes time to download resources over a network! The more resources, the more data must leave some server in the cloud and travel over (sometimes spotty) wireless networks onto the mobile device. This leaves the end user with one of two problems each time they launch the app:

  1. Unreasonably long wait times staring at a blank white screen (VERY BAD design), or
  2. Unreasonably long wait times staring at a loading screen (less bad, but still BAD design).

Let’s take a look at what this means with an application we can relate to: tudA. In the video below, I have taken the www/ folder of the tudA project and served it from an HTTP server running on my development machine (instructions to run this can be found in the file). To illustrate the experience of someone using this app on a mobile device over a wireless network, I throttled the network bandwidth to regular 3G (750 kb/s).

As you can see, the application took approximately 45 seconds to fully load, and it was not even usable before the 12 second mark. Mobile users have very short attention spans. They expect to get what they want out of a mobile app in short order; otherwise, they simply close it and move on to another app. If the app is crucial to their life, it results in app-hate, which will be strongly reflected in application ratings and comments. I would “guess-timate” that this scenario accounts for about 80% of all the “Hybrid Apps Are Slow” statements you will encounter.
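The arithmetic behind that 45 seconds is worth spelling out. The payload size below is an assumption I chose to match the observed load time at a 750 kb/s throttle, not a measured figure from the tudA project:

```javascript
// Back-of-envelope estimate of initial load time for an N->RD app.
function loadTimeSeconds(payloadBytes, bandwidthKilobitsPerSec) {
  // 750 kb/s (kilobits!) is only ~94 KB per second of actual payload.
  const bytesPerSecond = (bandwidthKilobitsPerSec * 1000) / 8;
  return payloadBytes / bytesPerSecond;
}

// ~4.2 MB of HTML/JS/CSS/images over regular 3G:
console.log(loadTimeSeconds(4.2e6, 750).toFixed(1)); // → "44.8" seconds
```

Note that this is a lower bound: it ignores DNS lookups, TLS handshakes and per-request latency, all of which hit multi-page apps hardest.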

Why use this strategy?

Well, it usually boils down to cost or time to market. With the N->RD strategy, the development effort required to deploy a mobile application for an organization or business unit is usually relatively low. This is particularly true if there is already a website representing the business; if that website is responsive, the effort to get it ready for hybrid deployment is even lower. Creating apps in this manner also allows a smaller package download from the app store – an important consideration, because some stores may not allow the download if the user is not on WiFi. Another reason this strategy gets used is that many developers and organizations believe it is the only way to deploy hot-fixes without facing the long delays associated with app stores. We will briefly touch on this later when I present a way to get this feature.

How can it be improved?

This is a difficult question to answer because it depends on the content and size of the resources on the server side. It is also subjective to what the organization believes is an acceptable user experience. That being said, there are a few steps that can be taken to improve an N->RD app:

  1. Never give the user a blank screen while content is being downloaded. The entry point to your application should, as its first priority, establish a splash screen with some sort of indicator that lets the user know the app is not ready yet. This does not fix the problem, but it reduces frustration and removes the unprofessional “just a website in a box” feeling that comes with a blank screen.
  2. Use mobile-first design for your websites! Make them responsive, because you should assume that more traffic will eventually come from smartphones and tablets. It also makes porting to mobile much easier if you are in a pinch to deliver a timely solution.
  3. Remember that your app is NOT your website, and your (existing) website is not your app. You may want to re-use the website as a starting point, but design as though YOU, the developer, were paying for your users’ bandwidth consumption and wanted the smallest possible download bill.
  4. Use profiling tools to scale bandwidth back to real-life speeds (on the lower end). Chrome, Safari and Firefox all have excellent bandwidth-throttling capabilities.
  5. If you deploy an application in this manner, consider using the AppCache HTML5 feature (until its replacement spec arrives). Done right, this makes the biggest difference in user experience: users suffer the long download only once (the initial download), after which resources are referenced in a manner similar to the NR->D strategy. However, please be aware of some gotchas associated with AppCache.
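For point 5, a cache manifest is just a plain-text file referenced from the page’s html element (e.g. `<html manifest="app.appcache">`). The file names below are placeholders for your own assets:

```
CACHE MANIFEST
# v1.0.0 -- bump this comment to force clients to re-download assets

CACHE:
index.html
css/app.css
js/app.js

NETWORK:
# API calls must always go to the network
*
```

One of the classic gotchas: the page that references the manifest is itself cached, and clients only re-fetch anything when the manifest file changes byte-for-byte, hence the version comment.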



In the NR->D strategy, a native application wrapper is used to load resources from the local file system of the client device – only dynamic data is fetched from the server. In the illustration above the application flow would be:

  1. The WebView in the native wrapper loads index.html from the file system (e.g. file:///assets/www/index.html).
  2. The WebView may then issue subsequent API calls via ajax or sockets to get data – for example, performing login or retrieving customer profile data.
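In a Cordova project, that local entry point is declared in config.xml, and the build copies www/ into the native package so the WebView serves it straight off the device. The widget id, name and API host below are placeholders, not values from the tudA project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<widget id="com.example.tuda" version="1.0.0"
        xmlns="http://www.w3.org/ns/widgets">
  <name>tudA</name>
  <!-- Loaded from the packaged www/ folder, not from a server. -->
  <content src="index.html" />
  <!-- Only data calls leave the device; whitelist the API host. -->
  <access origin="https://api.example.com" />
</widget>
```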

Apache Cordova provides an excellent framework for deploying hybrid applications in this manner. It has been such a great resource to the hybrid application landscape that frameworks like Ionic, Meteor, Intel XDK and Telerik have included it in their standard workflows to enable an easy path to deploying to multiple platforms. In the first few years of Cordova, this method of deploying a hybrid app was thought to hinder organizations that rely on being able to quickly deploy hot-fixes or updates, because app updates required a resubmission to the various app stores. To some, it seemed pointless, since in their circumstances going hybrid was a way to bypass the ghastly wait times involved in store submissions. Personally, using this as an excuse puzzled me because the solution was as clear as daylight: implement a native module that can fetch your app updates and patch them in when ready. Thankfully, there is an excellent plugin called ContentSync that makes deploying updates to your packaged app a simple affair.

Which approach should you use?

At a very high level, both approaches do the same thing – get some resources to display, then make API calls to send or receive data. However, the origin of those resources dramatically changes the (perceived) behaviour of the application if care is not taken in its design. It calls for a subjective answer – one that I will not be able to provide. In business, it always boils down to cost: where organization A might be willing to spend the extra money and time to take one approach, another organization may see it as cost prohibitive. That being said, I believe each approach has its own merits and can be used effectively if no shortcuts are taken. The development team charged with delivering the solution MUST understand the objective as well as any drawbacks associated with a particular approach. There should be strong engagement with any UX experts assigned to the project, and open, bi-directional dialog between those designers and the implementors. This is often the root cause of the “Hybrid Apps are Slow” statements: not necessarily a problem with the strategy employed, but a gross misunderstanding of those strategies and their limitations, which results in poorly executed application flows.

Closing bits

Those are the two main approaches to developing and deploying a hybrid application. Whichever approach you take, weigh all the pros and cons and perform due diligence in managing any deficiencies that the approach might have. The final blog on this topic will be a little more technical: I will dissect the typical concerns of a hybrid application and offer a few ways to tackle them.

– Martello Jones