Redux Patterns: Rethinking `byId` and `byHash` Structures

Written by justintulk | Published 2017/11/30
Tech Story Tags: javascript | redux | react | es6


Iterate with Object.keys() and drop byId

In a previous post (one of my most-read posts ever), I described a pattern to manage data in Redux stores where lists of objects were hashed using their ids as keys, and then an array of those ids was maintained alongside the hash.

Redux Patterns: Add/Edit/Remove Objects in an Array_I wrote a post the other day about how to implement a Redux-flavored store and reducer in a single React component…_hackernoon.com

The structure looked like this:

const reduxStore = {
  data: {
    byId: ['a', 'b'],
    byHash: {
      a: { someKey: "someValue", id: "a" },
      b: { someKey: "someOtherValue", id: "b" }
    }
  }
}

Any action that mutated this data structure would get handled twice: once to add/remove any keys in the byId array, and again to add/remove/update the associated data stored in the byHash hash. However, now that I've been using this structure for months, I've found that I typically dispense with the byId array altogether. So my structure is:

const reduxStore = {
  data: {
    a: { someKey: "someValue", id: "a" },
    b: { someKey: "someOtherValue", id: "b" }
  }
}
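To make the "handled once instead of twice" point concrete, here is a minimal reducer sketch over the flattened shape. The action types and payload shape are hypothetical, not from the original post; the point is that each case touches only the single hash, with no parallel byId array to keep in sync.

```javascript
const initialState = {}

function dataReducer(state = initialState, action) {
  switch (action.type) {
    case 'ADD_ITEM':
      // spread the old hash and add/overwrite one key
      return { ...state, [action.payload.id]: action.payload }
    case 'REMOVE_ITEM': {
      // copy the hash while dropping one key, immutably
      const { [action.payload.id]: removed, ...rest } = state
      return rest
    }
    default:
      return state
  }
}
```

With the byId/byHash shape, both cases would need a second step to splice the id in or out of the array.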

Why am I dropping byId?

Pros:

  • Handling actions is simpler: I only have to update one data structure in response to most common actions.
  • Redux store objects have less nesting.
  • Iterating is easily achieved with Object.keys(data).forEach, or more typically in a React application, Object.keys(data).map.
  • Length is readily available as Object.keys(data).length.
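The iteration and length idioms from the list above, sketched against the sample store shape from earlier in the post:

```javascript
const data = {
  a: { someKey: "someValue", id: "a" },
  b: { someKey: "someOtherValue", id: "b" }
}

// iterate over every stored object
Object.keys(data).forEach(key => {
  console.log(data[key].someKey)
})

// map to an array of values (the typical React case)
const values = Object.keys(data).map(key => data[key].someKey)

// length without maintaining a parallel array
const count = Object.keys(data).length // 2
```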

Expected Cons:

  • None? (What am I missing?)

Edge-case Cons:

  • Should my hash get unexpectedly huge, calculating length repeatedly instead of reading it directly from the array might be expensive. In practice my hashes typically hold fewer than a few dozen objects, so maintaining the byId property costs more time than it saves (and makes me write more tests).

Pattern I Use All Over The Place as a Result

The most typical use case is that I frequently need to iterate over this data structure (forEach) or to do some kind of functional-like operation (map/filter/reduce). This is easily achieved:

  1. Rendering React Component Lists

{Object.keys(this.props.data).map(key => {
  // operate on the full value since `key` is just the key
  const renderData = this.props.data[key]

  return <div key={key}>{renderData.someValue}</div>
})}

2. Filtering Based on Some Value in each Object

{Object.keys(this.props.data)
  .filter(key => {
    // again, operate on the full value, not the key
    return this.props.data[key].value === condition
  })
  .map( .... )}

You can reuse that basic pattern over and over. Want to sort by some value in each object? Object.keys(data).sort((a, b) => {}). Want to avoid displaying some data if the list is zero-length? Object.keys(data).length === 0 && <Component />. All of these problems are solved by the same basic bit of code.
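The sort variant deserves one concrete line, since the comparator has to reach back into the hash. A small sketch with hypothetical data (the rank field is invented for illustration):

```javascript
const data = {
  a: { rank: 2, text: 'second' },
  b: { rank: 1, text: 'first' }
}

// sort the keys by a value inside each object, then map over them
const sortedKeys = Object.keys(data).sort(
  (a, b) => data[a].rank - data[b].rank
)

const sortedTexts = sortedKeys.map(key => data[key].text)
// sortedTexts is ['first', 'second']
```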

Sure, it’s a little redundant, but it’s not really any worse than:

{data.byId.map(id => {
  const renderData = data.byHash[id]
  return <div key={id}>{renderData.someValue}</div>
})}

Performance Considerations

Thanks to Mark Erikson for a great comment about some performance implications of moving filtering logic into the render cycle.

While there’s no strict rule about what data manipulation should be done in a mapState function vs inside a component’s render method, my general suggestion is that mapState should be responsible for shaping the data that the component actually needs. This ties into the fact that connecting more components generally leads to better performance, and minimizing the amount of data a given connected component needs from the store will mean it will re-render less often. So, my approach would be to usually apply filtering and sorting-type behavior at the mapState level, so that the component is only getting the data it actually needs to render.

In addition, if mapState is returning the exact same values from call to call, then connect will skip re-rendering the plain component, which is usually a perf improvement.

I’m going to dig into memoization and update this at some point in the future, but for now I’ll be making sure to handle as much of this in my Redux selectors as I can. A quick example:

render() {
  return (
    <div>
      {Object.keys(this.props.data).map(key => {
        const val = this.props.data[key]
        return (<span key={key}>{val.text}</span>)
      })}
    </div>
  )
}

const mapStateToProps = state => ({ data: state.myData })

// could easily become this to clean up logic:

render() {
  return (
    <div>
      {this.props.data.map(val => (<span key={val.id}>{val.text}</span>))}
    </div>
  )
}

const mapStateToProps = state => ({
  data: Object.keys(state.myData).map(key => state.myData[key])
})

You could even recreate the byId and byHash pattern in your selector to get its benefits without having to maintain a parallel array in your store.

const mapStateToProps = state => {
  const myKeys = Object.keys(state.myData)

  return {
    byId: myKeys,
    byHash: state.myData
  }
}
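Since the post defers memoization to a future update, here is a hand-rolled sketch of the idea (not the Reselect API; the selector name and cache variables are invented for illustration): rebuild the derived array only when the underlying hash reference changes, so connect receives a stable value and can skip re-renders.

```javascript
// naive single-slot cache for the last input and result
let lastInput
let lastResult

const selectDataList = state => {
  if (state.myData !== lastInput) {
    // the hash changed, so rebuild the derived array
    lastInput = state.myData
    lastResult = Object.keys(state.myData).map(key => state.myData[key])
  }
  // same hash reference in, same array reference out
  return lastResult
}

const mapStateToProps = state => ({ data: selectDataList(state) })
```

Reselect generalizes this pattern to multiple inputs and composable selectors.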

As always, please leave a comment if you disagree, or if you see something I overlooked.

Reselect-style Memoization in 3 Functions_I’ve been reading through Reselect’s source code (only 107 lines unminified) and thought it might be worth unpacking…_medium.com


Published by HackerNoon on 2017/11/30