Using Pa11y CI and Drone as accessibility testing gatekeepers

Written by dfrase | Published 2018/07/08
Tech Story Tags: technology | accessibility | a11y | software-development | inclusive-design


Pa11y and Drone working together

This article serves as a ‘how-to’ guide for setting up Pa11y CI to run against any webpage, and also how to integrate it within a Drone continuous integration pipeline, with an example repo.

You think accessibility is important. You may even follow methodologies such as Inclusive Design. You’re aware of the WCAG 2.0 or WAI-ARIA guidelines. You’ve experimented with manual testing tools (examples are listed under Resources). You even make sure to user test your product with real users to gain feedback. But still some issues creep in with each new iteration, and you’re frustrated that, despite all the time invested, users still sometimes flag simple oversights, missed through working in such a fast-paced environment.

Or maybe you started to get lost after my first sentence, but know you would like some help getting started.

This is where Pa11y can come in, built on the belief that:

Here at Pa11y, we think making the web more accessible improves it for everyone.

Pa11y provides a range of automated tools that can help to spot common accessibility issues detectable within HTML. Set up once, it can then be run at will, helpful for determining the current state of your product and for making sure no issues creep in as it develops further. It provides a great baseline, and its descriptive error messages help you quickly fix issues that would otherwise build up over time.

Pa11y CI is one of the tools offered, described as:

a CI-centric accessibility test runner, built using Pa11y.

It works through a descriptive JSON config file, where multiple URLs to be tested can be specified, and has full support for Pa11y’s Actions to mock user interaction with a page. It is designed to be usable within a continuous integration pipeline, and is very quick to get up and running.

Setting up Pa11y CI

Let’s walk through an example installing Pa11y CI, setting up a config file, and running some tests both locally and in Drone. It’s assumed you already have a package.json, if not run npm init and follow the prompts.

All code shown here is in an example repo on GitHub to make it easy to run yourself.

Installing Pa11y CI

First we need to install Pa11y CI, adding it to our devDependencies as we will not be using it in production. We do this using:

npm install -D pa11y-ci

We should now see this in our package.json, and the next step is to add a script to run the Pa11y checks. In our example [package.json](https://github.com/dominicfraser/Pa11yCIExamples/blob/master/package.json#L6) we use:

"test:accessibility": "pa11y-ci --config .pa11yci.json"

This specifies that when we run npm run test:accessibility in the console it will run Pa11y CI against the config found in the root directory, named .pa11yci.json. If you wish to store the config file elsewhere, simply alter the command to --config ./path-to/.pa11yci.json relative to the package.json.
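For context, the relevant parts of package.json would then look roughly like this (the version number shown is illustrative; yours will be whatever npm installed):

```json
{
  "scripts": {
    "test:accessibility": "pa11y-ci --config .pa11yci.json"
  },
  "devDependencies": {
    "pa11y-ci": "^1.0.0"
  }
}
```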

Setting up the test config

.pa11yci.json specifies the full configuration of how Pa11y CI will run, from the URLs to test, the standard to test against, rules to ignore, and user actions to mock.

As it is based on top of Pa11y it follows the same configuration pattern, with the important addition of testing against multiple URLs, and having a default configuration that can apply across all being tested.

Its syntax is straightforward to read; let’s look at some important bits.

"standard": "WCAG2AAA",
"level": "error",
"defaults": {
  "timeout": 20000,
  "wait": 2000,
  "ignore": []
},

Standard

Different agencies set accessibility standards, here we use the highest of the three W3C WCAG standards, but Pa11y does allow for Section508 to also be used. WCAG2AA is the default.

Level

The level is set to error, meaning that a warning or notice will not count as a failure.

Default, Timeout, and Wait

The defaults object can contain any configuration you want to apply to all URLs being tested. Here we have set the overall test timeout to 20 seconds, and the wait to 2 seconds. This gives the URL under test time to load before running Pa11y, and as the timeout applies to the entire test run (including for all actions to be completed) it is set much higher. For a lightweight site such as pa11y.org this is not really necessary, and is used here simply to illustrate it may be required for more complex applications.

Ignore

ignore is left blank here, but is an array that can contain any rule from the standard chosen, with naming conventions specified by the HTML CodeSniffer Pa11y is utilising.

It’s important to note that in Pa11y these rule names are output verbatim to the console when an error occurs, but in Pa11y CI only the error message is displayed. This can make things slightly more complex, as the developer must then look up the description to determine the error code.

The ignore array may then look something like this:

"ignore": [
  "WCAG2AA.Principle2.Guideline2_4.2_4_2.H25.1.NoTitleEl",
  "WCAG2AA.Principle3.Guideline3_1.3_1_1.H57.2",
  "WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail"
]

Note the third rule, which has .Fail appended. This is not shown in the documentation, and is a reason for Pa11y CI to also log the rule name on an error. Most others however, do simply use the name as specified.

URLs, Viewport, and Actions

"urls": [
  {
    "url": "http://pa11y.org/",
    "viewport": { "width": 320, "height": 480 },
    "actions": [
      "wait for element .site-brand to be visible",
      "screen capture screenshots-output/mobile-main-view.png",
      "click element .site-nav__item button",
      "screen capture screenshots-output/mobile-expand-menu.png"
    ]
  }
],

The urls array takes one object per URL to be tested. These all inherit the configuration from defaults, and have a default concurrency of 2, so that tests run in parallel.

In the example config the aim is to test with both mobile and desktop viewport sizes, specified inside the viewport object.
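Since each entry in urls takes its own configuration, a desktop-sized test of the same page can sit alongside the mobile one simply by adding a second object with a larger viewport; a sketch (the 1024×768 dimensions here are just an example, not taken from the repo):

```json
{
  "url": "http://pa11y.org/",
  "viewport": { "width": 1024, "height": 768 }
}
```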

The [actions](https://github.com/pa11y/pa11y#actions) are an incredibly powerful part of Pa11y, allowing user interaction to be mocked, using a natural language syntax and allowing multiple screenshots to be taken, helping the developer to understand what is happening if the test is not behaving as expected.

A tip here is that some pages may need the header object set in order for screenshots to work, for example:

"headers": {"Accept": "text/html"}

Output

If there are no errors you can expect a result like this:

Both URLs passing successfully

And if there are errors, the output would look like this:

The error displayed in detail on a failed test

You then have the option of either fixing the issue that causes the error (the error messages are very helpful in suggesting how), or adding the rule to the ignore array in the config if it relates to something that does not require fixing. An example of this is testing a transparent component, in which case the background contrast will likely fail, compared against the default white. Not until the component is used should the background contrast be checked, so any such errors can safely be ignored when testing the component in isolation.

Using Pa11y CI during a Continuous Integration pipeline

Pa11y CI is designed with Continuous Integration in mind. We will look at using it with Drone, but there are also examples of using it with Travis CI.
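For comparison, a minimal Travis CI setup could look something like the sketch below (the Node version is illustrative; see the Pa11y CI documentation for a maintained example):

```yaml
language: node_js
node_js:
  - "8"
script:
  - npm run test:accessibility
```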

Drone is an open source continuous delivery platform that executes build and testing steps within a container based pipeline. If you do not use containers for your application you can ignore references to Docker images, as the contents of these will likely be specified inside a different configuration system.

Pa11y CI is designed to work within such a pipeline. If it produces an error response when running against a URL it will fail the pipeline step it is being executed in, blocking the deployment pipeline until it is resolved.

Drone pipelines are written in a drone.yml file. The syntax is a superset of the popular docker-compose yaml specification. Each step has a Docker image specified that the commands in the step are executed within.

An example of a Pa11y CI step would involve:

Adding a chromeLaunchConfig to the defaults object of our .pa11yci.json, otherwise Pa11y will not run in a container.

"chromeLaunchConfig": {
  "args": ["--no-sandbox"]
},

Running the app/ component and waiting for it to be ready (as some apps may take a few seconds to start up). For this we can npm install -D wait-on. We will assume a command already exists for running the app/ component.

We can then add commands such as the below to our package.json:

"wait-on": "wait-on http-get://localhost:3000",
"component:ci": "npm run component --hotReloading=false --watch=false"

We then need to specify our pipeline step:

accessibility_test:
  image: # a Docker image that contains headless Chrome is required
  group: tests
  commands:
    - npm run component:ci &
    - npm run wait-on
    - npm run test:accessibility

This will help to find accessibility issues detectable in code before they go live. This is where the comparison of Pa11y CI to a gatekeeper comes from, helping to flag things you may have missed.

As noted, Pa11y CI runs in headless Chrome, and so this must be available in the image specified. If it does not exist within your application image, the nature of Drone means that for this single step a custom Dockerfile could be written, based on top of your app Dockerfile, with the addition of headless Chrome. This image would not need to be deployed, and can be used just for testing purposes.
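As a sketch, such a testing-only Dockerfile might simply extend your application image and install a browser (my-app here is a hypothetical image name, and the exact package name varies by base distribution):

```dockerfile
# Hypothetical test-only image: the app image plus a browser for Pa11y CI
FROM my-app:latest

# Install Chromium and clean up apt caches (Debian-based base image assumed)
RUN apt-get update \
    && apt-get install -y --no-install-recommends chromium \
    && rm -rf /var/lib/apt/lists/*
```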

Final Thoughts

Pa11y CI is a powerful open source tool that is actively maintained and updated, helping to make products more inclusive for all users.

Using Pa11y CI does not of course guarantee a fully accessible site, which requires deeper manual testing and design considerations, but it helps catch common issues. It should be used alongside user testing, not instead of it.

Thanks for reading, and I hope this helps illustrate how painless it is to integrate a level of automated accessibility testing early on 😀


Resources

