React and SEO in eCommerce

September 24, 2018

by Chris Carreck

At Creative Licence Digital, one of our main focus points is SEO. We do a huge amount of work for our eCommerce partners, ensuring we grow inbound traffic year on year through effective strategies and execution. We also strive to produce rich and immersive experiences across web and mobile.

As React and React Native are generally our front-end libraries of choice, one of the challenges we have found is this: how do we keep our SEO performing as well as, if not better than, before, making sure search bots can still crawl our sites effectively, while using the latest technology to produce rich, performant end-user experiences?

For those that may not know, React sites aren't structured like a normal web page with HTML and CSS. Ever viewed the source of a React site? It's just a bundle of JavaScript. This has huge consequences for SEO experts, and you really need to understand the technology before deciding to go down a route that may have such a big impact. Some bots can't fully render JavaScript and may not see your site the same way the end user does; JavaScript affects the crawlability of your site. Getting this right can be hard, and there are a lot of good articles out there explaining the intricacies of making sure JS is performant, non-blocking, and renders correctly. I don't aim to cover all of those here, but I will outline some of the things we do to ensure we can continue to give our customers optimal value and SEO.
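To make that concrete, here's a rough sketch of a typical client-rendered React entry point (file names and paths are illustrative, not taken from any of our projects). The HTML a bot actually receives is little more than an empty root element and a script tag; everything visible is built by JavaScript in the browser.

```jsx
// src/index.js - a typical client-rendered entry point (illustrative only).
// The index.html the server sends is roughly:
//   <div id="root"></div>
//   <script src="/static/js/bundle.js"></script>
// A bot that can't execute this bundle sees an essentially empty page.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));
```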

Luckily, Google announced several years ago the ability to crawl JavaScript, and the chances are they can see and render almost everything on a React site. But that's not enough. We need to make sure they can, and not just them; we want to make sure other engines can also see our sites. And what about testing? Tools such as Moz and Screaming Frog are essential for us to constantly analyse, suggest, implement, repeat and improve our SEO capabilities.

The first step is extensive testing. It's widely thought that Googlebot is based on Chrome 41, so we start by downloading Chrome v41, loading the site, and using the developer tools to check for any JavaScript rendering errors. Any errors caught here are likely to block Googlebot from effectively parsing the JS and therefore rendering the page correctly. Polyfills and graceful degradation are good techniques here: we want the best experience on the latest browsers, but we also need optimal results on older ones. The Fetch as Google tool is then very useful for making sure Google is effectively parsing the JavaScript and seeing the site correctly. However, it's not foolproof, and not all bots can render JavaScript, especially some of the SEO monitoring tools we use, primarily Moz.
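As a rough sketch of what we mean by polyfills and graceful degradation (the package names and the lazy-loading example are illustrative, not a prescription): load polyfills before the app boots, and feature-detect anything that can't be polyfilled, so older browsers and Chrome 41-era bots still get the full content.

```js
// Entry point sketch: polyfill first, then degrade gracefully (illustrative only).
import 'core-js/stable';              // assumes core-js v3 is installed; patches Promise, Array.from, etc.
import 'regenerator-runtime/runtime'; // needed if the bundle uses async/await

// Example of graceful degradation: lazy-load images where IntersectionObserver
// exists, but load everything immediately where it doesn't, so nothing is
// hidden from older browsers or bots.
if ('IntersectionObserver' in window) {
  const io = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        io.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
} else {
  document.querySelectorAll('img[data-src]').forEach((img) => {
    img.src = img.dataset.src;
  });
}
```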

Other bots, such as Bingbot and rogerbot (the Moz bot), can't render JavaScript. To get around this, we've used a number of techniques across different projects here at Creative Licence:

Using a service such as pre-render.io

This will scan the site, render the JS, and then, when it detects an incoming bot, supply it with the fully rendered version. We have used this effectively on smaller-scale sites to provide excellent visibility across all major search engines, and it allows us to continually test and monitor sites using our primary SEO and marketing tool, Moz. This has been highly effective for us on small to mid-range sites. However, as an agency, as the number of sites we maintain and manage across clients grows, we have found it doesn't always scale that well for us.
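As an illustration of how this wiring typically looks (the token and paths below are placeholders, and this is a sketch rather than one of our production setups), the prerender-node middleware inspects the user agent and, for known bots, returns the pre-rendered HTML from the service instead of the empty JavaScript shell:

```js
// server.js - serving a React build, with a prerender service in front of bots (sketch).
const express = require('express');
const prerender = require('prerender-node');

const app = express();

// For recognised bot user agents, fetch the fully rendered HTML from the
// prerender service (the token is a placeholder) and return that instead.
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

// Everyone else gets the normal client-rendered bundle.
app.use(express.static('build'));
app.get('*', (req, res) => res.sendFile(`${__dirname}/build/index.html`));

app.listen(3000, () => console.log('Listening on port 3000'));
```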

Server-side rendering

Isomorphic or universal JavaScript, using a framework such as next.js, allows us to render the HTML on the server and serve a fully rendered version of the page. As with a service like pre-render, we can serve the fully rendered page so a bot can crawl it. This also has the added advantage of serving a fully rendered page to the user while, in the background, we load the JS to perform any DOM manipulation. There is no waiting for the JS to load when a user hits the home page. This has worked well for us on mid to larger-scale products; it offers very fast client-side performance and works very well across all search engines and analysis tools. It is, however, much more complex to implement, and really needs to be planned from the start of a project rather than as an afterthought.
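For a flavour of what this looks like in next.js (the page, API endpoint and field names below are placeholders, not taken from a client project), a page can fetch its data in getInitialProps, which runs on the server for the first request, so both users and bots receive complete HTML:

```jsx
// pages/product.js - a server-rendered product page in next.js (sketch only).
import React from 'react';
import fetch from 'isomorphic-unfetch';

const Product = ({ product }) => (
  <div>
    <h1>{product.name}</h1>
    <p>{product.description}</p>
  </div>
);

// Runs on the server for the initial request, so the response is fully
// rendered HTML; on client-side navigation it runs in the browser instead.
Product.getInitialProps = async ({ query }) => {
  const res = await fetch(`https://api.example.com/products/${query.id}`);
  const product = await res.json();
  return { product };
};

export default Product;
```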

Snapshotting and static site rendering

There are also frameworks such as react-snap and GatsbyJS that use different approaches to compile the site and output a "snapshot" of compiled HTML and CSS. They let you provide output similar to a service such as pre-render.io, but with greater control of the process. Gatsby also offers server-side rendering at build time, GraphQL and a whole host of other functions (and is useful if you're looking at implementing a serverless, JAMstack architecture). These tools tend to scan the site during a build, which is a different technique to SSR: they output static HTML that is served to bots when they crawl the site.
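With react-snap, for example, the usual pattern (sketched below; the exact setup depends on your build) is to add a post-build step that crawls the built site, then hydrate the snapshot on the client rather than rendering from scratch:

```js
// src/index.js - hydrating a react-snap snapshot (sketch; assumes a
// "postbuild": "react-snap" script in package.json runs the crawl after each build).
import React from 'react';
import { hydrate, render } from 'react-dom';
import App from './App';

const rootElement = document.getElementById('root');

// If react-snap has already filled the root with static HTML, hydrate it;
// otherwise fall back to a normal client-side render (e.g. in development).
if (rootElement.hasChildNodes()) {
  hydrate(<App />, rootElement);
} else {
  render(<App />, rootElement);
}
```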

So far we've had great success with each of these techniques. We've used universal JavaScript for larger projects, making sure we've decided on an implementation path right from the start. For our smaller sites, where we feel the overhead of planning SSR is not necessary, we've used a combination of snapshotting and services like pre-render. However, we are finding we're starting to move away from services such as pre-render, simply due to the number of sites and clients that we manage; keeping the snapshotting as part of our own process allows us to scale across multiple sites more effectively.

Be sure to follow us on Twitter @Cre8iveLicence, and if this kind of stuff interests you, then you'll love working with us.