Quality engineering for a shared codebase

Abigael Ombaso

Senior Tester, BBC Design + Engineering

The 'You might have missed' section showing featured content at the bottom of the BBC homepage

The BBC is developing a shared platform for building its digital products, with the aim of reducing development complexity and duplication as much as possible. This should enable quicker and more efficient software development, and in turn quicker delivery of digital content to our audiences. Read more about the technology changes.

A key aspect of this project has been a shared repository for the presentation layer code, with different teams working on the platform at the same time. This blog shares our experiences so far through the lens of quality engineering, by answering three questions that commonly come up before, during and after product development: who is going to use the product, how will we ensure quality, and what have we learned so far?

Who will be using the product?

The main users of the product are the engineers across different teams working directly on the platform, and the consumers of our digital products. We want to keep making great digital products (in quality, usability and design), even as we change technology platforms, while minimising bugs in our software as much as possible.

To reduce the bugs and issues raised, testing is integrated into the development workflow, and team members across disciplines share ownership of product quality. A consistent approach to testing features in the platform, together with a quick feedback loop for spotting and fixing defects early, helps to minimise the risks and their impact across different teams. This is an ongoing process, fine-tuned based on feedback from the development teams.

Solution: The users' needs (in our case, the users of our digital products and the engineering teams building on the shared platform) help to define the product requirements that influence the test process.

How do we do the testing?

One of the key things test engineers and other project stakeholders consider is risk. The impact of code merges and changes cascading to different teams was one such risk in the shared repo. Having a shared platform also meant sharing infrastructure beyond the GitHub repo, such as deployment pipelines, communication channels in Slack and documentation.

A consequence of this is that deployments are now visible to multiple teams and stakeholders, with notifications in our Slack channels flagging failing builds. Bugs get flagged quickly, and when needed, members of different development teams can 'swarm' (even while working remotely) to collaboratively debug and resolve issues. This has led to more frequent and better communication across teams; getting more people talking and working together more often has, on its own, made this feel like a worthwhile project.
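As a rough sketch of how a pipeline step might surface a failing build in Slack (the environment variable names and message text here are hypothetical, not our actual setup), a small Node script posting to a Slack incoming webhook could look something like this:

// notify-slack.js - hypothetical post-build step that posts a failure
// message to a Slack incoming webhook. Assumes Node 18+ (built-in fetch)
// and a SLACK_WEBHOOK_URL secret provided by the CI environment.
const webhookUrl = process.env.SLACK_WEBHOOK_URL;

async function notifyFailure(buildUrl) {
  // Incoming webhooks accept a simple JSON payload with a "text" field.
  const response = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `Build failed: ${buildUrl}` }),
  });
  if (!response.ok) {
    console.error(`Slack notification failed with status ${response.status}`);
  }
}

notifyFailure(process.env.BUILD_URL || 'unknown build');

The CI system would run a step like this only when the build fails, so the channel stays quiet unless something needs attention.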

Consistency has been planned into the project as a whole from the start, for example with the . Similarly, it was important to have consistency across teams when testing the features developed in the platform, as we worked simultaneously in this shared code space, in order to minimise bugs, regressions and other product risks. An overarching test strategy, covering approaches to manual and automated testing (guided by the and ), has informed our testing.

Automated tests run as part of pull request checks and again before deployment to Live. We use a consistent set of automated test tools, so engineers across teams know what the expectations for testing are. This is by no means a finished endeavour but a continuous work in progress, so forums like the Test Guild and team knowledge sharing help with communication, continuous learning and further improvements.
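For illustration, one of the checks that runs on a pull request might be a small Jest test like the sketch below; the component and expected text are hypothetical stand-ins rather than code from our repository, but the shape is typical of the unit tests that must pass before a change can be merged:

// promo.test.jsx - hypothetical Jest + React Testing Library check that
// could run as a pull request check, verifying a component still renders
// its heading after a change.
import React from 'react';
import { render, screen } from '@testing-library/react';
import '@testing-library/jest-dom';

// Stand-in component; in the real repository this would be imported
// from the shared presentation layer.
const Promo = ({ title }) => <h3>{title}</h3>;

test('renders the promo heading', () => {
  render(<Promo title="You might have missed" />);
  expect(screen.getByText('You might have missed')).toBeInTheDocument();
});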

Because of the scale of the project, we rely on automated test tooling for regression testing. We also began using fairly new tools for visual regression testing, such as Storybook and Chromatic (alternatives were Percy, Nightwatch.js and BrowserStack). For other types of automated tests we use Puppeteer, having formerly used Cypress. We had communication channels with the test tool makers to feed back issues we encountered and to request new features as we scaled and grappled with the different tools.
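As a minimal sketch of the kind of browser-level check Puppeteer makes possible (the URL and selector below are placeholders, not our real pages or markup), a test might load a page and fail if a key section never renders:

// homepage.e2e.js - hypothetical Puppeteer smoke test; the URL and
// data-testid selector are placeholders for the page and section under test.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Load the page under test and wait until network activity settles.
  await page.goto('https://example.test/homepage', { waitUntil: 'networkidle0' });

  // waitForSelector throws if the section never appears, failing the check.
  await page.waitForSelector('[data-testid="featured-content"]', { timeout: 5000 });

  await browser.close();
})();

Checks like this run headlessly in the pipeline, so a regression in the rendered page is caught without anyone having to open a browser manually.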

Solution: Have a test strategy and plan early, to mitigate identified project risks by building quick and early feedback into the development process.

A diagram showing factors influencing the quality engineering cycle in the project: strategy and planning, product users, communication, technology and continuous learning.

What have we learned?

One of the benefits of a brand-new project is that there is no legacy code or technical debt at the start (this changes pretty quickly though!). Mature products, by contrast, have been through their growing pains: there are known unknowns, and workarounds for known problems and pain points, which the development teams (and test engineers in particular) come to understand fairly well.

The challenge with new projects, however, is that there are lots of unknowns in the new technology stack. As the project has grown we have also been dealing with, and learning from, scaling challenges: pipeline issues from multiple deployments taking place at the same time, improving monitoring of traffic and website status errors, and optimising our stack's performance as more product features have been built. Being able to identify such issues early on has been important, and manual testing by different team members helps to catch issues that may not initially be covered by the automated processes.

Solution: Continuously learn and iterate as issues are identified and fixed.

Conclusion

Building quality engineering into a shared repository requires similar considerations to single-team projects, but on a bigger scale and with a wider focus: who is the product being made for, and by whom; what are the product risks; and what is the test approach or plan to reduce the impact of those risks. The aim is to provide quick feedback and to monitor for regressions during the software development process, and test automation and tooling are important for facilitating this. Continuously learning about our product (including from our product users) through regular communication, exploration and collaborative working has helped us iterate on our quality processes and has been important for our quality engineering.