How I spent my Christmas enabling SSR

Written by tabu_craig | Published 2017/12/24
Tech Story Tags: react | server-side-rendering | redux | asynchronous | performance


In recent years the idea of the PWA (progressive web app) has grown massively. Several of the major JavaScript boilerplates have made changes to be more compliant (create-react-app, for example).

The idea is that the user should have a first-class experience anywhere, including on a mobile device or over a bad internet connection. Both situations noticeably hurt websites that rely on the client for all JavaScript downloading and processing and are therefore render-blocking.

My company has a sizeable chunk of traffic from mobile/tablet (around the 40% mark), and many of those users have a poor experience of our website. So we set about using this Christmas period to change our 100% client-side app to render 100% on the server and progressively enhance on the client.

Our web application uses react/redux-saga, so that is what I will focus on here. Note that this was a collaborative team effort and this article is the output of our work.

tl;dr

  1. Use componentWillMount for dispatches
  2. Handle async sagas via END channel
  3. Use channels for dependent async requests
  4. Utilise lazy-loading
  5. Manage components rendered on the server
  6. Defer the bundle
  7. Profiling
  8. Results

1. Use componentWillMount for dispatches

We moved all component action dispatches into the componentWillMount lifecycle method. It is called on both the server and the client, and fires immediately before mounting and rendering occur.

For us this was a matter of swapping our custom component initialiser:

function init(x) {
  return dispatch(someAction(x));
}

Component.init = init;

for:

componentWillMount() {
  this.props.someAction(this.props.x);
}

In addition to the above, we also made use of the componentWillReceiveProps lifecycle method for client-side navigation (it triggers the same action), so further page loads handled by the browser fetch the correct data.
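A minimal sketch of that pattern, reusing the someAction/x names from the snippet above (the prop-comparison detail is an assumption, not our exact code):

componentWillReceiveProps(nextProps) {
  // on client-side navigation the component stays mounted, so re-dispatch
  // the same action whenever the relevant prop changes
  if (nextProps.x !== this.props.x) {
    this.props.someAction(nextProps.x);
  }
}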

2. Handle async sagas via END channel

Using redux-saga’s real-world example we can tell our app to:

  • run all sagas (root yields all watchers),
  • render to seed store (kick off the initial actions),
  • wait for all sagas (including async sagas) to finish,
  • render the app again with the correct data in the store.

/server.js

store.runSaga(rootSaga).done.then(() => {
  // all sagas have finished: render again with the correct data in the store
  res.status(200).send(
    layout(
      renderToString(rootComp),
      JSON.stringify(store.getState())
    )
  );
});

renderToString(rootComp); // first render to seed the store (kick off the initial actions)
store.close(); // dispatch END so the sagas can terminate once their work is done

/store/configureStore.prod.js

store.runSaga = sagaMiddleware.run;
store.close = () => store.dispatch(END);
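For context, here is a rough sketch of how those two helpers hang off the store, assuming the usual createSagaMiddleware setup (the function signature and rootReducer argument are illustrative, not our exact file):

import { createStore, applyMiddleware } from 'redux';
import createSagaMiddleware, { END } from 'redux-saga';

export default function configureStore(rootReducer, initialState) {
  const sagaMiddleware = createSagaMiddleware();
  const store = createStore(rootReducer, initialState, applyMiddleware(sagaMiddleware));

  store.runSaga = sagaMiddleware.run; // used by server.js to start the root saga
  store.close = () => store.dispatch(END); // END lets the watchers terminate after in-flight work
  return store;
}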

3. Use channels for dependent async requests

As the END approach only waits for the actions fired initially, an async request which depends on another (e.g. to get a user's cart you must first get the user's ID) will not work with the SSR setup so far.

You can use a channel factory to achieve this. The main bulk of it is something like:

/something-saga.js

function* fetchSomethingSaga() {
  const ourChannel = yield call(channel); // create a channel to queue incoming requests

  yield fork(somethingElseHandler, ourChannel); // fork a worker saga and hand it the channel

  try {
    while (true) { // loop so the watcher works more than once
      yield take('FETCH_SOMETHING'); // watch for the action
      const items = yield call(fetchItems); // 1st async call

      yield put(ourChannel, fetchSomethingElseAction(items));
      // 2nd async call via action, with the payload sent into the channel
    }
  } finally { // END triggered
    ourChannel.close(); // close + unsubscribe from the channel
  }
}

/something-else-saga.js

export function* somethingElseHandler(channel) {
  while (true) {
    const action = yield take(channel); // observe the handed channel
    yield call(fetchSomethingElse, action); // make the async request
  }
}

For further details, the example and comments can be found in the redux-saga documentation and are expanded on in the related GitHub issue.

Very useful for multiple dependent async requests.
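For completeness, the watcher above still has to be wired into the root saga that the server runs; something along these lines (the file and saga names are assumptions based on the snippets above):

import { all, fork } from 'redux-saga/effects';
import { fetchSomethingSaga } from './something-saga';

export default function* rootSaga() {
  // every watcher forked here is covered by the END handling above:
  // dispatching END ends their take loops, which triggers the finally blocks
  yield all([
    fork(fetchSomethingSaga),
  ]);
}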

4. Utilise lazy-loading

Imagine the following page:

  • Main page content
  • Extra content relating to the main content

Imagine both of the above are SSR-only, via the check below: don't dispatch the action if the data is already in the store, as there is no point in the client fetching what the server has already fetched.

/main-component.js

componentWillMount() {
  if (!this.props.someValueFromStore) { // via mapStateToProps
    this.props.loadValue(this.props.x); // via mapDispatchToProps
  }
}

Yet you know the “extra” content sits below the fold, so it would be better for the user if it were lazy-loaded. We must therefore decouple the two requests from each other. See the saga:

/main-content-saga.js

function* fetchMainContent({ payload }) {
  const { itemId } = payload;

  yield put(loadMainContent(itemId));
  const mainData = yield call(fetchContent, itemId);
  yield put(loadMainContentSuccess(itemId));

  // load extra content
  yield put(loadExtraContent(mainData.extraId));
  const extraData = yield call(fetchExtraValue, mainData.extraId);
  yield put(loadExtraContentSuccess(mainData.extraId));
}

I thought it best to change the above so that, instead of following a flow of events to determine which extra data to fetch, it uses the data already available in the store.
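Pulled out on its own, the extra-content saga might look roughly like this (the action type, payload shape and creator names are assumptions based on the snippets above, not our exact code):

/extra-content-saga.js (illustrative)

import { call, put, takeEvery } from 'redux-saga/effects';
// fetchExtraValue and loadExtraContentSuccess come from the existing api/action modules

export function* fetchExtraContent({ payload: extraId }) {
  const extraData = yield call(fetchExtraValue, extraId);
  yield put(loadExtraContentSuccess(extraId, extraData));
}

export function* watchExtraContent() {
  // the component's loadExtraValue() dispatches this action type
  yield takeEvery('LOAD_EXTRA_VALUE', fetchExtraContent);
}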

It is just a matter of moving the // load extra content block into its own action and saga (as sketched above), and lastly a small change to our component below:

/main-component.js

componentWillMount() {
  if (!this.props.someValueFromStore) {
    this.props.loadValue(this.props.x);
  } else {
    this.props.loadExtraValue(this.props.someValueFromStore);
  }
}

Now we have a logic branch based on whether the main data is in the store or not, which effectively acts as a server-vs-client split but in a more concise manner.

As we will defer the bundle execution (see section 6), the page will render first and only then will loadExtraValue be dispatched.

We will also need both actions added to componentWillReceiveProps so that all client-side navigations trigger both.

5. Manage components rendered on the server

Now that all your components are rendered on the server, you may encounter some difficulties with regard to Sass/styling/third-party libraries etc.

For example, we were using react-media (https://www.npmjs.com/package/react-media), a CSS media query component. However, its documentation mentions:

If you render a <Media> component on the server, it always matches.

This poses a problem now that our rendering happens on the server.

The solution we used was to conditionally render in the browser only (example below).

{ (process.browser) &&
  <Media query={ X }>...</Media>
}
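One caveat: process.browser is a bundler convention rather than a browser global, so the client build has to define it. With webpack that is typically done via DefinePlugin (a sketch, not necessarily how our build is configured):

// webpack.client.config.js (sketch)
const webpack = require('webpack');

module.exports = {
  // ...the rest of the client config
  plugins: [
    // the server bundle omits this (or sets it to false), so the
    // <Media> branch above only renders in the browser bundle
    new webpack.DefinePlugin({ 'process.browser': true }),
  ],
};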

6. Defer the bundle

You could use the defer attribute or the async attribute, as both download the script in parallel with HTML parsing.

However, async will block HTML parsing in order to execute as soon as the script has downloaded, whereas defer won't execute until parsing has finished.

For us it was a matter of updating:

<script key={ X } src={ X } />

to (adding the defer attribute):

<script defer={ 'defer' } key={ X } src={ X } />

Now the browser will fully render the page before executing our JavaScript, producing a much faster “first meaningful paint”.

7. Profiling

7.1 Manually:

I used Google Chrome to manually profile the results. First I had to decide on the metrics to use and the method of obtaining them.

I decided on the following metrics:

  • TTI: how long until the user can interact with the page
  • OnLoad: how long until the entire page is completely ready

Then, using Chrome's Performance dev-tools tab and the industry-standard "average Android hardware" settings (as suggested by Addy Osmani):

  • Network: Fast 3G
  • CPU: 4x slowdown

I applied the following performance timing API calculations:

TTI = performance.timing.domInteractive - performance.timing.navigationStart

OnLoad = performance.timing.loadEventEnd - performance.timing.navigationStart
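Both numbers can be pulled straight from the dev-tools console once the load event has completed (loadEventEnd stays 0 until then):

// run in the console after the page has finished loading
const t = performance.timing;
console.log('TTI (ms):', t.domInteractive - t.navigationStart);
console.log('OnLoad (ms):', t.loadEventEnd - t.navigationStart);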

The results showed a ~500% improvement in TTI and a ~200% improvement in OnLoad. For a large portion of users this difference would be noticeable.

7.2 WebPageTest:

The https://www.webpagetest.org/ site is fantastic for all kinds of performance testing. It supports FMP (first meaningful paint) and TTI (time to interactive), and there is even basic-auth support if you would like to profile secured areas.

The optimal setup is to repeat each test 3 times. I ran the following test scenarios:

  1. Chrome desktop with fast local internet
  2. Android Moto G4 with average 3G
  3. iPhone SE with average 3G

The results table is very easy to read.

The results are kept indefinitely under unique IDs, so it's very easy to refer back to them in the future to compare improvements.

7.3 Google Chrome Lighthouse:

Chrome dev-tools now offers its own tool to audit your website (under the Audits tab). It runs the same “average Android hardware” conditions I used for my manual profile.

You are given an overall score out of 100 which encapsulates your application's current performance, along with the opportunities to improve it.

The most useful metric I found here was the FMP, which is very accurate and comes with a nice timeline so you can see exactly how it was worked out.

It is very good for breaking down visually what the metric means.

For us, FMP had improved by ~600% (a factor of 6) and our overall Chrome score doubled.

8. Results:

Needless to say it was a success (a minimum ~200% improvement across the metrics) and we plan to push the changes to production.

Next we are addressing our hefty bundle size by using code-splitting, tree-shaking and Service Workers for caching, but SSR was a good place to start.

I hope this article has been useful to anyone else looking at implementing SSR in a redux-saga/react application.

If there is anything I have missed, or anything you think I could have improved on, please let me know. I would really appreciate any comments/feedback ❤️

