
A Self-Service Model for Load Testing of Web Services at Scale

Nov 30, 2021

Abstract


In a Service-Oriented Architecture (SOA) environment, the entire system's performance comprises the performance of the individual components (services). It is, therefore, imperative that each service is performance-tested individually, early and often. Standardizing a common Load Test Model that defines tests as data enables development teams to implement and own their load tests autonomously, on a self-service basis. Furthermore, streamlining test invocation via a single point of entry lowers the barrier to integration with automated systems like CI/CD.

Objective

To meet Procore’s business and development needs, we must develop and deploy a sustainable and scalable self-service model for load testing. This model should standardize and streamline load test development and load generation for all projects:

  • increase velocity towards self-service load testing,
  • provide a single point of integration (for example, with CI/CD), and
  • avoid duplicating code across repositories.

Self-Service Model

At Procore, we have a large and continuously growing number of web services. Individual white-glove treatment, where a centralized Performance Engineering team develops and maintains load tests for each service, simply does not scale. An alternative is a self-service model, in which the team owning the web service also owns the load test development. However, owning the test development should not imply owning the tooling and implementation. That approach would raise the barrier to entry and the cost of ownership, resulting in a highly fragmented implementation landscape.

Procore's Performance Engineering team developed a Load Test Module with these features:

  • It standardizes code-free test development, implementing a test-as-data concept.
  • It implements a one-to-many relationship: one module executes load tests for many projects/services.
  • It handles OAuth2 authentication (see the sketch after this list).
  • It offers a single point of entry, or interface, for all test execution scenarios.
  • It supports automation and integration with third-party systems, such as CI/CD.
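
As an illustration of the OAuth2 handling above, here is a minimal sketch of how such a module might obtain a token, assuming a client-credentials grant and Python with the requests library; the token endpoint and credential names are placeholders, not Procore's actual configuration.

import requests


def fetch_access_token(token_url, client_id, client_secret):
    # Exchange client credentials for a bearer token (OAuth2 client-credentials grant).
    response = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

Every request the load generator issues can then carry the token in an Authorization: Bearer header, so service teams never have to write authentication code themselves.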

The following sections detail Procore’s Performance Engineering Load Test Module (LTM) and sample invocations.

Components of a Common Load Test Module

The Common Load Test Module is a key component of Procore’s Performance Engineering Load Test Model. It consists of two artifacts: a load generator and a shell script.

The load generator abstracts the test implementation, as data, away from actual code. In its purest form, web API testing consists of request/response pairs that can be described as structured data, both for the request (request type, URL path, parameters, etc.) and the response (response code, message, payload). This load test data, together with configuration data, serves as input to the load generator, which also handles common tasks such as authentication and setting headers.
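
To make the test-as-data concept concrete, here is a minimal sketch of the data-driven core of such a generator, assuming the definition format shown in the Test Data section below and Python with the requests library. Concurrency and pacing (which the configuration data controls) are omitted, and the dotted-path interpretation of the validation paths is an assumption, not Procore's actual implementation.

import json

import requests


def json_path_exists(payload, dotted_path):
    # Return True if a dotted path such as "data.items.total" resolves in the payload.
    node = payload
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True


def execute_definition(base_url, definition_path, token=None, timeout=30):
    # Common concerns such as auth headers live here, not in the test data.
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    with open(definition_path) as handle:
        definition = json.load(handle)

    failures = []
    for spec in definition:
        req, expected = spec["request"], spec["response"]
        body = None
        if req["type"].upper() != "GET":
            # GraphQL-style requests carry the query and optional variables in the body.
            body = {"query": req.get("query"), "variables": req.get("parameters")}
        response = requests.request(
            req["type"],
            base_url + req["urlPath"],
            json=body,
            headers=headers,
            timeout=timeout,
        )
        if response.status_code != int(expected["status"]):
            failures.append((req["urlPath"], "unexpected status", response.status_code))
            continue
        for path in expected.get("JSON.validationPaths", []):
            if not json_path_exists(response.json(), path):
                failures.append((req["urlPath"], "missing path", path))
    return failures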

A shell script serves as a single point of entry, or interface. It streamlines the invocation of load generation by accepting parameters that control different aspects of the generated load: for example, sensitive data such as tokens passed as input parameters, the base URL, where the load is generated, and what to do with the test results.

Combined, the load generator and shell script offer a powerful, project-agnostic, and scalable approach to implementing and accelerating code-less load testing.

Test Data

Structured test data is expressed as JSON via two input types: the load test definition and the load test configuration.

The load test definition’s JSON format consists of 1...n request/response objects grouped in an array:

[
  {
    "request": {
      "type": "{GET|POST|...}",
      "urlPath": "/url/path",
      "query": "<query string of post body (graphql)>",
      "parameters": "<optional, for parameterized graphql>"
    },
    "response": {
      "status": "<expected http response status code>",
      "JSON.validationPaths": [
        "JSON.validation.path.1",
        "JSON.validation.path.2",
        "JSON.validation.path.n"
      ]
    }
  },
  ...
]

The load test configuration’s JSON format defines load size, duration, ramp up/tear down behavior, timeouts, tags, behavioral options like auto-follow redirects, and more.

Load test data can be expressed in a generalized format and adapted to your load testing tool of choice with a thin translation layer. Configuration data, on the other hand, is more closely tied to the load testing tool in use and its supported configuration options, so its content and format are informed by the actual tooling for which this load testing model is implemented. At Procore, we use a JSON format that is native to our particular load test tooling.
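
As an example of such a thin translation layer, the sketch below adapts the generalized definition to Locust. The file name my-load-test.json and the field names mirror the definition format shown above; everything else (pacing, request naming, GraphQL body handling) is an illustrative assumption rather than Procore's actual tooling.

import json

from locust import HttpUser, constant


def make_task(spec):
    # Turn one request/response object from the definition into a Locust task.
    request = spec["request"]
    expected_status = int(spec["response"]["status"])

    def run(user):
        body = None
        if request["type"].upper() != "GET":
            body = {"query": request.get("query"), "variables": request.get("parameters")}
        with user.client.request(
            request["type"],
            request["urlPath"],
            json=body,
            name=request["urlPath"],
            catch_response=True,
        ) as response:
            if response.status_code != expected_status:
                response.failure(f"expected {expected_status}, got {response.status_code}")
            else:
                response.success()

    return run


with open("my-load-test.json") as handle:
    DEFINITION = json.load(handle)


class ApiUser(HttpUser):
    # Pacing and the target host would normally come from the configuration data
    # and the --url parameter; a constant is used here to keep the sketch self-contained.
    wait_time = constant(1)
    tasks = [make_task(spec) for spec in DEFINITION]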

Executing Load Tests

With test definition and configuration data abstracted from the code, and a shell script providing a single point of entry for test execution, load tests can be invoked using a standardized set of arguments to tailor test execution for test run-specific parameters. For example:

  • the base URL will be different between services, environments, etc.
  • test definition and configuration will be different between services, environments, etc.
  • authentication: a service-under-test may or may not require authentication, and credentials will be different between services, environments, etc.
  • load generation may be delegated to the cloud or occur on the same/local machine.

The above is best visualized with a couple of examples. They assume the shell script is named run.sh.

run.sh --url http://my.base-url.com --test-definition my-load-test.json --test-configuration my-load-config.json

Executing on the local machine, generate the load defined in my-load-config.json by executing the requests defined in my-load-test.json, against the base URL http://my.base-url.com.

run.sh --url http://my.base-url.com --test-definition my-load-test.json --test-configuration my-load-config.json --user-name foo --password bar

Executing on the local machine, generate the load defined in my-load-config.json by executing the requests defined in my-load-test.json, against the base URL http://my.base-url.com, with each load thread logging in to http://my.base-url.com with user name foo, password bar.

run.sh --url http://my.base-url.com --test-definition my-load-test.json --test-configuration my-load-config.json --user-name foo --password bar --load-env cloud

Executing in the cloud, generate the load defined in my-load-config.json by executing the requests defined in my-load-test.json, against the base URL http://my.base-url.com, with each load thread logging in to http://my.base-url.com with user name foo, password bar.

Several commercial vendors offer cloud-based load generation for common open-source load testing tools, such as JMeter or locust.io. Alternatively, custom cloud-based load generation can be set up via your company’s cloud service provider. Some commercial load testing tools may also offer their own cloud for load generation.

Automation and Integration

Automation and integration go hand in hand. The foundational requirement for automation is a well-defined and stable interface. In the case of the Load Test Module discussed in this article, that interface is provided by the shell script and its well-defined list of named parameters.

Integration adds considerations like the dynamic provisioning of sensitive data. With all such data provided at execution time as parameters, no credentials or other secrets become part of the test data or end up persisted in version control systems or elsewhere.

In addition, meaningful return codes from the shell script offer not just pass/fail status but also insight into the potential cause of a failure. This gives integrating systems the information necessary to make informed decisions about next steps.
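
A hedged sketch of how an integrating system, such as a CI/CD step, might drive the module through this interface is shown below; the exit-code meanings and environment variable names are hypothetical, while the run.sh parameters are the ones shown earlier.

import os
import subprocess
import sys

result = subprocess.run(
    [
        "./run.sh",
        "--url", os.environ["TARGET_BASE_URL"],
        "--test-definition", "my-load-test.json",
        "--test-configuration", "my-load-config.json",
        "--user-name", os.environ["LOAD_TEST_USER"],      # provided at execution time,
        "--password", os.environ["LOAD_TEST_PASSWORD"],   # never persisted with test data
    ],
    check=False,
)

if result.returncode == 0:
    print("load test passed")
elif result.returncode == 2:   # hypothetical code: performance thresholds exceeded
    print("performance regression detected")
    sys.exit(1)
else:                          # hypothetical code: setup or authentication failure
    print("load test could not be executed")
    sys.exit(result.returncode)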

Outlook

Load testing must be viewed in the larger context of the purpose it serves. Test results data, client- and server-side, must be tracked, monitored, and analyzed to assess performance over time and to inform potential changes. The associated aspect of Observability is out of scope for this article and will be covered in a future installment.

If this kind of challenge excites you, then maybe you should come work for us!
