First and foremost I want to be clear that any healthy approach to testing for accessibility will include all of these in some capacity. The aim of this post is to discuss how to make good use of all of them. Ultimately, each of them has value to offer a development team.
What is manual testing?
Manual testing as a concept is pretty simple: a human user tries to accomplish tasks on the website. As it relates to accessibility, manual testing means using assistive technologies to accomplish tasks on your site.
Can I do that?
Developers often think of screen readers when we think of assistive tech, but there are many different kinds. Testing with a given assistive technology requires a certain amount of knowledge and expertise with it. For that reason, manual testing can feel daunting to do well.
For developers on a budget, I encourage you to learn and understand these tools as best you can. You'll never achieve the level of expertise that a daily user has, but a basic understanding will inform your approach to developing accessible content. If possible, reach out to the web accessibility community around you; there are people there who want you to succeed and can help you. Doing this testing well requires a great deal of expertise, and an experienced tester is worth the money.
If money is no object, consider hiring a company that specializes in this testing and getting an external audit done.
Automated testing is a good development practice and can have a huge impact on the quality and stability of your code. The same is true for accessibility. Out in the wild there are many static code analysis tools geared toward accessibility. I'd be the first to caution against depending solely on automated testing, but as the old adage says, "where there's smoke, there's fire." Automated checkers like WebAIM's WAVE can alert you to some basic structural issues in your code. You can leverage this information to make system-wide corrections to common problems.
For example, my company encountered a lot of failures around the labeling of input elements and form controls. This drove us to create reusable components that addressed focus management, labeling, and visibility in a more systematic way. Now, if you want to make sure an input field is compliant, you simply use the shared component. You can follow this same pattern to eliminate a whole class of accessibility errors from your application very quickly.
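To make the pattern concrete, here is a minimal, framework-agnostic sketch of such a shared component. The helper name `labeledInput` and its options are hypothetical illustrations, not the actual component from any real library; the point is that the label/input association is baked in so individual developers can't forget it:

```javascript
// Hypothetical sketch: a shared helper that always emits an input paired
// with a programmatically associated <label>. Names are illustrative only.
let nextId = 0;

function labeledInput({ label, type = "text", required = false }) {
  const id = `field-${++nextId}`; // unique id ties the label to the input
  const requiredAttrs = required ? ' required aria-required="true"' : "";
  return (
    `<label for="${id}">${label}</label>\n` +
    `<input id="${id}" type="${type}"${requiredAttrs}>`
  );
}

console.log(labeledInput({ label: "Email address", type: "email", required: true }));
```

Because every call site goes through the helper, fixing a labeling bug once fixes it everywhere, which is exactly how a whole class of errors disappears at once.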
This had the added advantage of making our code more consistent and easier to maintain. It also let our developers stop worrying about these problems and focus on more complex accessibility issues.
Linting is just another kind of automated testing. It runs on tooling very similar to our other automated tests, but unlike the tests described above it only checks the file a developer is currently working on. This gives your developers instant feedback on what they are coding. It's common to use tools like eslint or sass-lint in your everyday flows. We discovered eslint-plugin-jsx-a11y and have introduced it into our workflow as well. This instant feedback helps avoid introducing new issues into the code base. Additionally, I've found that it slowly teaches developers to stop making these errors and mentally frees them up to address more challenging engineering problems.
What can I do if my boss doesn't care?
This is where things get difficult. My advice here may not be for everyone, but it is nevertheless what I recommend. I am lucky enough to work at an organization that has bought into accessibility practices from day one, but these ideas can help bring stakeholders on board more easily.
Do it anyway.
- It's very low cost to introduce linting into your application, and the practice is nearly invisible to people who aren't coders. This small step will net you big gains in the short term. We used linting as a first step and, over the course of a month, eliminated nearly 400 issues from our code base. The improvement in code reliability will make it easier to sell the effort.
- Introduce a static code analysis tool into your testing workflow. I use pa11y dashboard (pa11y.org). We run these checks in our continuous integration process on every push. We fail builds that don't meet our standards, but you don't have to. Visibility is the first step toward awareness: keeping track of your issues gives you a way to correct them and prevents you from introducing new ones.
- A bit of research on the opportunity costs to your business goes a long way. Take a look at Microsoft's commitment to accessibility, both in their approach to coding and in company policy. That effort pays off with a community that often feels underserved and neglected.
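For the static code analysis step above, pa11y's companion runner pa11y-ci can drive the checks from a `.pa11yci` config file. This is a sketch with placeholder URLs; your own URL list and standard would differ:

```json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 10000
  },
  "urls": [
    "http://localhost:8080/",
    "http://localhost:8080/signup"
  ]
}
```

Pointing a config like this at the pages you care about, and wiring the runner into every push, is what turns accessibility from an occasional audit into an ongoing signal.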