Twisted Tales

Making Redux Saga scalable with TDD, SOLID principles and the Narrator Design Pattern

There and Back Again...

Asynchronous code is hard. Talking to a server is one of the most critical parts of a modern web application, and being able to write tests around this communication layer is imperative.

The JavaScript ecosystem has some excellent state management solutions, with Redux being probably the most popular flavor. This simple and intuitive library provides the best solution we've seen to a longstanding problem, and is built with testability in mind. However, we've found that the asynchronous libraries commonly used with Redux can be hard to test in the context of actual application code.

This is where our story begins: with the goal of increasing confidence in our code base so we can continue to ship fast and without bugs.

About two years ago we began building Procore's new Budgeting Tool. Iterating on client feedback, we quickly added features and extended functionality. As the complexity of the code grew, so did the time to deliver each new feature, largely because we had used the publish-subscribe pattern implemented by Redux-Saga without any guiding structure. We also lacked a concise language for talking about these order-dependent actions.

We decided something needed to be done and we set out to find a way to deliver new features faster and with greater confidence. This blog post will cover how we got there, what we found and some reflections on the process.

So Our Journey Begins...

Once we decided a refactor was in order, we came up with high-level goals for it:

  1. Conceptual Integrity (shared understanding)
  2. Testability
  3. Performance

As we started cleaning up our code, we realized how easy it is to jump to the conclusion that a new technology is the answer to your issues.

After some back and forth, we saw that what we needed wasn't a new library, but rather a way to categorize our existing code into a structure that made it easier to understand and test, and that in the process would help expose performance bottlenecks.

So we turned to the wealth of existing knowledge around the principles of Clean Code by Uncle Bob, SOLID design principles, Test-Driven Development, and design patterns (for design patterns see the excellent Source Making resource) and realized these were the foundations we were missing all along. We owe a lot to people like Martin Fowler, Kent Beck, and Sandi Metz for their crucial work in this area.

The Forming of the Fellowship

To understand our solution, it might be helpful to understand our architecture:

Our new Budgeting tool has a standard architectural layering: server-side APIs with a JavaScript client on the front-end. The backend mainly serves up data through RESTful, link-based APIs. The server side also renders ERB templates where we mount a few different React components. The front-end, in addition to React, uses Redux to manage application state and Redux-Saga to deal with asynchronous actions, primarily data fetching. We use mocha, chai and enzyme for front-end testing.

As we mentioned, our existing asynchronous code, using the Publish-subscribe pattern as implemented in Redux-Saga, was proving hard to understand and extend, and was therefore full of surprises.

At this point we knew we needed to refactor, so we built a tool to help us visualize our Sagas and Redux Actions and gain a clear understanding of the unseen architecture that had taken shape. Here is what we got:

A visualization of the Publishers and Subscribers in the Budget

This exercise confirmed the complexity of our code and gave us a visual cue as to where to focus our attention.

We started looking at how we could organize our Sagas in a way that would be easy to reason about and easy to test. Out of these conversations emerged a design pattern that we eventually named The Narrator Pattern.

The Narrator Pattern


The Narrator Pattern is intended to help organize asynchronous code in a manner that is easy to reason about, much like telling a story.

We've found that the pattern provides two primary and two secondary benefits:

Primary Benefits

1. Conceptual Integrity: By introducing a vernacular and structure for our data flow, we facilitate a shared team understanding.
2. Testability: The pattern strives towards small, testable components.

Secondary Benefits

1. Stability: By gaining Conceptual Integrity and Testability we gain Stability.
2. Performance: Improved Conceptual Integrity, Testability and Stability open up opportunities for performance work. With a clear understanding of our system we can make performance improvements with confidence.

These benefits lead to the primary intent of refactoring: Ship new features faster with higher confidence.

Core Concepts

Example Structure

The Narrator pattern consists of three main concepts: User Actions, Narrators and Storylines.

This simple structure has helped us realize that not all Redux Actions and Redux Sagas are created equal; at its simplest, the pattern provides a way to categorize them into User Actions, Narrators and Storylines.

User Actions

These are actual user interactions with the UI, such as selecting something in a picker or loading a page. A User Action is the entry point and the start of the workflow.
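
For illustration, a User Action is just a plain Redux action dispatched from the UI. Here is a minimal sketch, reusing the setActiveTemplate action that appears later in this post (the action creator's exact shape and the ViewPicker component are assumptions):

// Hypothetical action creator for picking a Budget View.
export const setActiveTemplate = ({ _links }) => ({
  type: sessionConstants.setActiveTemplate,
  payload: { _links },
});

// Dispatched from the UI, e.g.:
// <ViewPicker onChange={(view) => dispatch(setActiveTemplate(view))} />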

Narrators


A Narrator is a coarse-grained piece of reusable logic that handles a User Action. It is a type of Saga that bundles other, finer-grained actions in the form of Storylines. We view the Narrator as the high-level functionality kicked off by a User Action.

Example: A user chooses a Budget View in the view picker. This kicks off the renderTable Narrator, which is responsible for rendering the table. Rendering the table is composed of multiple smaller logical tasks, such as fetching rows and columns. As mentioned, this means that a Narrator is composed of multiple Storylines.

The Narrator is responsible for:

  1. Organizing the Storylines. The Narrator kicks off a number of independently running Cause Action -> Saga -> Effect Action Storylines.
  2. Handling any tasks that are order dependent. Certain tasks need to resolve in order; the Narrator is the place for those (see the sketch after this list).
  3. Selecting from state.
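
To make responsibility 2 concrete, here is a minimal sketch of a Narrator that sequences order-dependent Storylines. It is not taken from our code base; the renderReport name and the fetchColumnsSuccess constant are illustrative:

export function* renderReport({ payload: { _links } }) {
  // Independent Storylines are kicked off immediately and run concurrently.
  yield put(actions.fetchColumns({ _links }));

  // Order-dependent work: wait for the columns Effect Action
  // before starting the Storyline that depends on it.
  yield take(constants.fetchColumnsSuccess);
  yield put(actions.fetchTableRows({ _links }));
}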

Storylines

A Storyline is a Saga that:

  1. Listens to the Cause Action
  2. Performs an asynchronous task
  3. Dispatches the Effect Action.

Cause Action -> Saga -> Effect Action

Storylines can be used in different Narrators and should guarantee that they can run concurrently. That means a Storyline needs to be completely self-contained and encapsulated, depending on nothing except what is passed into the function.

We found that a lot of complexity came from order-dependent state selectors spread throughout our code base, which were very hard to fully understand and therefore to test. We believe that a Storyline being dependent on the Redux state is a code smell.

One main source of complexity was how easy it is to abuse the Publish-subscribe pattern implemented by Redux-Saga. Without some simple guidelines on how to use Sagas, it quickly degenerated into an order-dependent tangle of obscurely connected listeners; this was the main reason for our refactor in the first place. It was hard to reason about which parts of our code depended on which others, since the strength of the pub-sub pattern can become a liability without some structure. With some fairly straightforward naming conventions and an agreed-upon folder structure we have been able to start uncrossing the wires, and to ensure we don't add to the mess with each new feature, since we now have a specific location for each type of file.

A few suggested best practices for Storylines

  • A Cause Action should only be listened to by one Saga.
  • A Saga should only dispatch the Effect Action - nothing else (no other actions, and no spawning of other Sagas).
  • A Saga should only make one asynchronous call.
  • No Saga should listen to the resulting Effect Action; this should be the end of the line.
  • The API caller should be injected into the Saga for ease of mocking API calls.
  • These Storylines should all have deterministic unit test coverage.
  • A Storyline should have no dependencies on another Storyline or a Narrator, but should be completely encapsulated. This means no state selectors should exist in the Storylines - those belong in the Narrators. Anything a Storyline needs should be passed in.

Folder Structure

The core concepts translate to the following folder structure:

narrators
    __tests__
        renderTable-test.js
    renderTable.js
storylines
    __tests__
        fetchTableRows-test.js
        fetchColumns-test.js
        fetchSourceColumns-test.js
    fetchTableRows.js
    fetchColumns.js
    fetchSourceColumns.js

This showcases the central idea of the Narrator Pattern: although Narrators and Storylines are technically the same thing (they are both Sagas), they have unique responsibilities and should be categorized accordingly.

We are still working out where to locate User Actions, Cause Actions and Effect Actions. While important, the folder location for these is not as crucial as the ability to categorize the Narrator and Storyline Sagas.

Testability

How the Narrator Pattern helped us with testability.

We focused on two areas to test: Narrators and Storylines. First we'll describe how we test the Storylines, since these contain the actual tests around asynchronous code. Then we'll go over how the Narrators are tested; since these are composed of Storylines, the tests operate at a higher level so we don't test the same abstraction level twice.

Testing Storylines

The testing of the Storylines can be divided into three separate concerns:

  1. Test that the Cause Action is being listened to by the Saga
  2. Test the actual asynchronous code in the Saga itself
  3. Test that the Effect Action is updating the application state correctly

Example: fetchSourceColumns Storyline Test

First we'll show the code for one of our Storylines: fetchSourceColumns.js

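// Imports are omitted in the original listing. With the redux-saga version we
// used at the time, they would look roughly like this (module paths are app-specific):
// import { put } from 'redux-saga/effects';
// import { takeLatest } from 'redux-saga';
// import * as actions from '../actions';          // hypothetical path
// import * as sagaConstants from '../constants';  // hypothetical path
// import { fetchWithUrl } from '../api';          // hypothetical path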
export const fetchSourceColumns = (fetch = fetchWithUrl) => {
  return function* ({ payload: _links }) {
    try {
      const data = yield fetch(_links.source_columns);
      yield put(actions.fetchSuccess({ data }));
    } catch (e) {
      yield put(actions.fetchError(e));
    }
  };
};

export default function* fetchSourceColumnsListener(fetch = fetchWithUrl) {
  yield* takeLatest(
    [
      sagaConstants.fetchSourceColumns,
    ],
    fetchSourceColumns(fetch)
  );
}

This Saga consists of a listener function and the actual asynchronous handling. We structure all of our Storylines like this.

We also Dependency Inject the function that will make the actual API call, defaulting to a function that will make a real request. This provides an easy way to mock API calls in our tests by injecting a mock fetch function.

This Saga performs one API call, adhering to the Single Responsibility Principle, making it straightforward to test.

The Storyline Saga ends up being a simple function that makes one API call and then dispatches the fetchSuccess Effect Action with the payload. In case of a fetch error we catch the error and dispatch the fetchError action.

The test for the fetchSourceColumns Saga looks like this:

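// Test imports are omitted in the original listing; roughly:
// import SagaTester from 'redux-saga-tester';
// import { fromJS } from 'immutable';
// plus the Saga under test, its actions and constants, and a small delay helper.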
describe('fetchSourceColumns Saga', () => {
  const fetchResponse = [{
    id: 1,
  }];

  const links = {
    forecast_columns: 'http://forecast_columns',
    formula_columns: 'http://formula_columns',
    source_columns: 'http://source_columns',
    standard_columns: 'http://standard_columns',
  };

  const initialState = fromJS({
    column: {
      _loading: false,
      data: [],
      filterable: [],
    },
  });

  const setupSagaTester = (generator) => {
    const sagaTester = new SagaTester({ initialState });
    sagaTester.start(generator);
    return sagaTester;
  };

  const expectedUrl = links.source_columns;

  const fetch = (url) => {
    return new Promise((resolve, reject) => {
      if (url === expectedUrl) {
        resolve(fetchResponse);
      } else {
        reject('Wrong URL passed in');
      }
    });
  };

  describe('fetchSourceColumns', () => {
    describe('success', () => {
      function* successGenerator() {
        yield fetchSourceColumns(fetch)({ payload: links });
      }

      it('should retrieve data from the server and send a fetchSuccess action', async () => {
        const sagaTester = setupSagaTester(successGenerator);
        await sagaTester.waitFor(constants.fetchSuccess);
        const expectedData = fetchResponse;
        const expected = actions.fetchSuccess({ data: expectedData });

        expect(sagaTester.getLatestCalledAction()).to.deep.equal(expected);
      });
    });

    describe('error', () => {
      const fetchError = (url) => {
        return new Promise((resolve, reject) => {
          reject('API Call Failed');
        });
      };

      function* errorGenerator() {
        yield fetchSourceColumns(fetchError)({ payload: links });
      }

      it('should handle fetch error gracefully', async () => {
        const sagaTester = setupSagaTester(errorGenerator);
        await sagaTester.waitFor(constants.fetchError);
        const expected = actions.fetchError('API Call Failed');

        expect(sagaTester.getLatestCalledAction()).to.deep.equal(expected);
      });
    });
  });

  describe('listeners', () => {
    const finishSaga = async () => {
      return delay(0);
    };
    describe('fetchSourceColumnsListener', () => {
      function* fetchSourceColumnsListenerGenerator() {
        yield fetchSourceColumnsListener(fetch);
      }

      describe('should listen to the following actions', () => {
        const expectedData = fetchResponse;

        it('sagasActions.fetchSourceColumns', async () => {
          const sagaTester = setupSagaTester(fetchSourceColumnsListenerGenerator);
          sagaTester.dispatch(sagasActions.fetchSourceColumns(links));
          await finishSaga();

          const expected = actions.fetchSuccess({ data: expectedData });
          expect(sagaTester.getLatestCalledAction()).to.deep.equal(expected);
        });
      });

      it('should not listen to the following action', async () => {
        const sagaTester = setupSagaTester(fetchSourceColumnsListenerGenerator);
        sagaTester.dispatch({ type: 'foo' });
        await finishSaga();
        expect(sagaTester.getLatestCalledAction()).to.deep.equal({ type: 'foo' });
      });
    });
  });
});

Test the Cause Action: The listeners describe block tests that the Saga is listening correctly to the Cause Action.

Test the asynchronous code: The fetchSourceColumns Saga describe block tests the actual Saga. The test is made fast and deterministic by injecting a mock API Promise that returns a payload. The test also makes sure that the correct link is used inside the Saga to make the API call. This is where some of the benefits of a link-based API show up - no URL building takes place.

Testing the listeners means dispatching the fetchSourceColumns Cause Action and testing that the Saga picks it up. To avoid false positives we also test with an action the Saga should not listen to.

The approach recommended in the Redux-Saga docs did not seem like an optimal way to test Sagas. Instead we found a small testing utility called Redux-Saga-Tester that advocates for a different and, in our opinion, much better approach, summed up by:

Redux-Saga is a great library that provides an easy way to test your Sagas step-by-step, but it's tightly coupled to the Saga implementation. Try a non-breaking reorder of the internal yields, and the tests will fail.

This tester library provides a full redux environment to run your Sagas in, taking a black-box approach to testing. You can dispatch actions, observe the state of the store at any time, retrieve a history of actions and listen for specific actions to occur.

Test the Effect Action: Once the listener and the asynchronous code are tested, what remains is to test the Effect Action. Testing this action means testing the reducer, which is described in the Redux docs.
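
As a rough illustration, and not code from our application, a reducer test for the fetchSuccess Effect Action might look like the sketch below. The columnReducer name and the exact state shape are assumptions based on the initial state used in the Storyline test above:

describe('column reducer', () => {
  it('stores the fetched columns on fetchSuccess', () => {
    const initialState = fromJS({ _loading: false, data: [], filterable: [] });
    const data = [{ id: 1 }];

    const nextState = columnReducer(initialState, actions.fetchSuccess({ data }));

    // Assumes the reducer keeps its slice as an Immutable structure.
    expect(nextState.get('data').toJS()).to.deep.equal(data);
  });
});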

Testing Narrators

Now let's move on to the Narrator itself. This is where we stitch the story together using our Storylines.

The testing of the Narrators can be divided into two separate concerns:

  1. Test that the User Actions are being listened to by the Saga.
  2. Test that the Narrator is dispatching the Storyline Cause actions.

Below is the renderTable Narrator with three Storylines. We have simplified the code here to showcase the pattern without distracting noise.

renderTable.js

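// As with the Storylines, imports are omitted here; the Narrator uses put and
// takeLatest from redux-saga along with the app-specific actions and constants.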
export function* renderTable({ payload: { _links, params } }) {
  yield put(actions.fetchTableRows({ _links, params }));
  yield put(actions.fetchColumns({ _links, params }));
  yield put(actions.fetchSourceColumns(_links));
}

export default function* renderTableListener() {
  yield [
    takeLatest(
      [
        sessionConstants.loadPage,
        sessionConstants.setActiveTemplate,
      ],
      renderTable,
    ),
  ];
}

Like the Storylines, the Narrator Saga consists of a listener function and a function containing the actual Narrator code, which dispatches the Storyline Cause Actions. We structure all of our Narrators like this.

This code reads like a set of clear sequential instructions for something that is going to run concurrently. It is easy to reason about, and easy to compose. In addition to the clarity, we also gain concurrency by splitting up our logic into small, single purpose Storylines.

The test for the renderTable Narrator looks like this:

describe('renderTable Narrator', () => {
  describe('renderTable', () => {
    function* createRenderTableGenerator() {
      yield renderTable({ payload: { _links } });
    }

    it('should call the following actions', async () => {
      const sagaTester = setupSagaTester(createRenderTableGenerator, reducers);
      await sagaTester.waitFor(narratorConstants.renderTableStoryLinesDispatched);

      expect(sagaTester.numCalled(summaryConstants.fetchTableRows)).to.equal(1);
      expect(sagaTester.numCalled(sagasConstants.fetchColumns)).to.equal(1);
      expect(sagaTester.numCalled(sagasConstants.fetchSourceColumns)).to.equal(1);
      expect(sagaTester.numCalled(narratorConstants.renderTableStart)).to.equal(1);
      expect(sagaTester.numCalled(narratorConstants.renderTableStoryLinesDispatched)).to.equal(1);
    });
  });

  describe('listeners', () => {
    const finishSaga = async () => {
      return delay(0);
    };

    function* renderTableListenerGenerator() {
      yield renderTableListener();
    }

    describe('should listen to the following actions', () => {
      it('sessionConstants.loadPage', async () => {
        const sagaTester = setupSagaTester(renderTableListenerGenerator, reducers);
        sagaTester.dispatch(sessionActions.loadPage(initViewProperties));

        await sagaTester.waitFor(narratorConstants.renderTableStart);
        expect(sagaTester.numCalled(narratorConstants.renderTableStart)).to.equal(1);
      });
      it('sessionConstants.setActiveTemplate', async () => {
        const sagaTester = setupSagaTester(renderTableListenerGenerator);
        sagaTester.dispatch(sessionActions.setActiveTemplate({ _links: _links }));

        await sagaTester.waitFor(narratorConstants.renderTableStart);
        expect(sagaTester.numCalled(narratorConstants.renderTableStart)).to.equal(1);
      });

    });

    it('should not listen to the following action', async () => {
      const sagaTester = setupSagaTester(renderTableListenerGenerator, reducers);
      sagaTester.dispatch({ type: 'foo' });
      await finishSaga();
      expect(sagaTester.getLatestCalledAction()).to.deep.equal({ type: 'foo' });
    });
  });
});

We test what actions are called and that the Saga is listening to the correct actions - that's it. The Storyline Sagas have already tested the asynchronous code. Since we don't actually perform any API calls inside the Narrator we don't need to mock these, making testing Narrators much easier.

Conclusion

Moving our code in this direction is gradually increasing our confidence as we continue to grow and change. Since starting this process we've bumped our total test coverage by about 10%, and it is encouraging to know that we now have test coverage for the crucial asynchronous part of the application.

By making our code more testable, and adding some patterns to give us a common vernacular, our code feels safer and more predictable.

In light of this, there are some additional findings and plans for the future we would like to share with those who plan to embark on a similar journey.

Risk On Our Terms - A Leap of Faith

There was a point where we found we were getting diminishing returns on developing in a completely TDD manner. The concept of cleaning up code in small, safe, incremental steps has been called Test-Driven Refactoring.

And while we are supporters of that approach, we found ourselves coming to a point of no return; we needed to take a leap of faith in our refactor. When code is not written in a TDD fashion to begin with it is often hard to test and requires significant change to make it testable. Thus, it is at times worth taking a calculated risk in order to move away from a situation that isn't sustainable:

The red-green-refactor cycle may come to a halt when you find yourself in a situation where you don’t know how to write a test for some piece of code, or you do, but it feels like a lot of hard work. Pain in testing often reveals a problem in code design, or simply that you’ve come across a piece of code that was not written with the TDD approach.

We were able to minimize the risk as we took this leap by doing a more thorough level of manual testing. This was a way to pay the price once, up front, rather than every single time we shipped a new feature.

Our advice here is this: address risk on your terms, instead of having a bug surface when you least expect it. Staying put is not risk-free when a tool is under active development; taking steps toward covering the core functionality with tests provides the safety net to go forth boldly.

Drive Adoption through Examples

There will likely be only a few different types of tests you will need to write. Provide scaffolding for the rest of the team to follow and iterate upon.

This means writing example tests around as many of those different situations as you can. We found that having live examples for these cases was a huge driver of adoption, which helped us gain a larger, shared understanding of our code.

We've likened this to building a Bookcase and having the rest of the team put the books in place.

What we are going to do - The Future

This initiative is far from wrapped up. We have things moving in the right direction but we still have a long way to go. We wanted to write this blog post in an attempt to share what we have learned so far about:

  1. How to structure the asynchronous part of a larger React/Redux application using the Narrator Pattern
  2. How to refactor existing code to get to the new pattern using principles such as TDD, Clean Code, and SOLID.

Gaining team support and understanding is one of the most critical parts of a refactor of this scope. Restructuring the code base should be a team effort, and finding a way to actively engage all team members is paramount. This involves a fair amount of negotiation around priorities and feature delivery, because at the end of the day refactoring is about feature delivery as well...but that is a topic for another time.

Until then, thanks for reading!

Written by:

Michael Hinrichs, Software Engineer
Magnus Palm, Senior Manager, Engineering

References

React
Redux
Redux-Saga
Redux-Saga-Tester

Sources and Concepts

JavaScript Design Patterns
Code is read many more times than it is written
Software that fits in your head