Serverless Architectures using ReactJS

October 08, 2018

by Chris Carreck

As we adopt newer technologies within the agency, we get the chance to revisit not just the way we code something, or the underlying application architecture, but also the entire delivery system underneath it: improving speeds, lowering costs, reducing maintenance time, reducing points of failure in our applications, and ultimately delivering a more cost-effective, performant system for our clients.

One way we did this recently was by taking some of the newer technologies we have introduced on our projects, such as React and Docker, and using them to rethink the traditional ELB, EC2 and MongoDB architecture that we use on AWS.

We started with our React front ends and the traditional server setup: a load balancer with a couple of virtual servers underneath, processing requests and serving content back to the user. As our React front ends are 100% JavaScript and CSS, they don't require any server-side processing power; all the content can be rendered and run by the client's browser. We just need a way to deliver it there effectively.
To serve this we made use of AWS's own CDN, CloudFront: a hugely powerful delivery network offering low latency and high speeds, and extremely cost effective too. It also has DDoS protection built in for added security, and it works well with other AWS products. So we set about creating distributions, using Amazon's S3 storage (again, very cheap) as the back-end store for our JS, HTML and CSS files, and before we knew it we had a fully scalable, globally distributed website without creating a single server or configuring a single load balancer.
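One detail worth getting right with this setup is caching. A rough sketch of the idea in TypeScript (the file names and max-age values here are illustrative, not our exact production config): hashed React build artifacts can be cached aggressively by CloudFront and browsers, while index.html should always be revalidated so a new deployment is picked up straight away.

```typescript
// Sketch: choose a Cache-Control header per file when syncing a React
// build to S3. Hashed bundles (e.g. main.abc123.js) never change content
// under the same name, so they can be cached for a year; index.html must
// be revalidated on every request so a fresh deploy takes effect.
export function cacheControlFor(file: string): string {
  if (/\.(js|css)$/.test(file)) {
    // create-react-app style builds embed a content hash in the file name
    return "public, max-age=31536000, immutable";
  }
  if (file.endsWith(".html")) {
    return "no-cache"; // always revalidate against the origin
  }
  return "public, max-age=86400"; // images, fonts etc: cache for a day
}
```

These headers would be set as S3 object metadata at upload time, and CloudFront simply honours them.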

So that was our front ends sorted. Content and front-end JavaScript is nicely distributed around the globe and accessed by customers from edge locations already provided by AWS. There were some tweaks, but nothing that couldn't easily be handled by Lambda@Edge intercepting calls before they reach the origin. But what about our back ends? Our sites still needed to make calls to APIs to handle domain logic and process and respond to requests.
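To give a flavour of what those Lambda@Edge tweaks look like, here is a minimal sketch of a viewer-response handler (illustrative, not our exact production code): CloudFront hands the function the response it is about to serve, and the function can modify it at the edge, for example to add security headers to every static file coming out of S3.

```typescript
// Minimal shape of the CloudFront event a Lambda@Edge response
// handler receives (only the fields used below).
type CloudFrontHeaders = Record<string, { key: string; value: string }[]>;
interface EdgeEvent {
  Records: { cf: { response: { status: string; headers: CloudFrontHeaders } } }[];
}

// Viewer-response trigger: runs at the edge location, after CloudFront
// has the response but before it is returned to the browser.
export const handler = async (event: EdgeEvent) => {
  const response = event.Records[0].cf.response;
  response.headers["strict-transport-security"] = [
    { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
  ];
  response.headers["x-content-type-options"] = [
    { key: "X-Content-Type-Options", value: "nosniff" },
  ];
  return response; // CloudFront forwards the modified response to the viewer
};
```

The same mechanism covers other common static-hosting tweaks, such as rewriting directory URLs to index.html for single-page-app routing.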

Here we started to Dockerise our Node.js back ends. On newer sites we have built out APIs using TypeScript and Node.js, so we were easily able to package these as Docker images and spin up some instances with ECS. However, this still needed some EC2 configuration, so we switched to AWS Fargate, using fully managed serverless resources from AWS to run Docker containers that can scale with our apps. Combine this with a hosted MongoDB instance, and we've now got an end-to-end application with no compute infrastructure of our own to manage: no need to configure operating systems, nginx configs and so on.
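The services themselves stay very simple. As a sketch (route and port are illustrative, not our actual API), here is the kind of minimal Node.js service that goes into one of these Docker images: the routing logic is kept as a pure function so it is easy to test outside a container, and a cheap /health route gives the ECS health check something to probe.

```typescript
import { createServer } from "node:http";

// Pure routing logic, separated from the HTTP server so it can be
// unit-tested without binding a port.
export function route(method: string, url: string): { status: number; body: string } {
  if (method === "GET" && url === "/health") {
    return { status: 200, body: JSON.stringify({ ok: true }) };
  }
  return { status: 404, body: JSON.stringify({ error: "not found" }) };
}

// The container's ENTRYPOINT would call startServer(), listening on the
// port the Fargate task definition exposes. Fargate injects nothing
// special at runtime: the container just listens and serves.
export function startServer(port = Number(process.env.PORT ?? 3000)) {
  return createServer((req, res) => {
    const { status, body } = route(req.method ?? "GET", req.url ?? "/");
    res.writeHead(status, { "Content-Type": "application/json" });
    res.end(body);
  }).listen(port);
}
```

The Dockerfile is then just a Node base image, a COPY of the compiled output, and a CMD that starts the server.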

The next step for us is to put Amazon's API Gateway in front of our Docker services to serve our APIs directly from edge locations and take advantage of the caching and performance gains there. We will be using this approach next time and will be sure to post something on how it goes for us.

Even without that step, however, we've managed to create a modern, serverless stack that's not just saved our clients money on hosting costs, but is also highly scalable, extremely performant, and delivers great end results for the user.

Be sure to follow us on Twitter @Cre8iveLicence, and if this kind of stuff interests you, then you'll love working with us.