Robotic monkey helping to test our quality control iPad app

By Benjamin Chute

Well, maybe not exactly. But we are VERY serious about quality control at TruQC, both for our app and for our own internal refinement. In fact, one way we maintain our high standards is through our testing process.

At TruQC, we’re continuously testing while in development to catch issues before a beta release. We use a ticketing system called Pivotal Tracker to log, organize and verify issues. Issues go through a strict promotion process and ride along with a build as it moves up the chain from Development to Beta to Production. We have written test cases in QA Complete that cover every feature, function, report type and user type in TruQC. We also adhere to best practices by maintaining a Beta test environment that exactly matches our Production environment. This means that when we’re testing a Beta release, it’s the exact same software on the exact same hardware configuration it will run on when it goes to Production and to our customers.

You don’t like surprises and neither do we

One additional quality testing tool we’ve started using is MonkeyTalk. MonkeyTalk is open-source software that enables automated testing of mobile apps. What we like about MonkeyTalk is that its iOS agent lives alongside our code when we compile a build for testing. This agent code doesn’t make its way into the Production release, but it allows us to write test cases for Beta testing that can be run against our user interface at the push of a button.
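To give a flavor of what those test cases look like (the component IDs below are invented for illustration, not TruQC’s actual ones), a MonkeyTalk script is just a plain-text list of commands, one UI interaction per line, in a ComponentType / ID / Action format:

    # login.mt -- hypothetical component IDs, for illustration only
    Input username EnterText inspector@example.com
    Input password EnterText secret123
    Button LOGIN Tap
    # wait up to 5 seconds for the home screen to confirm the login worked
    Label welcomeLabel Verify "Welcome" %timeout=5000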

So easy a robotic monkey could do it, right?

There will never be a replacement for a detail-oriented human tester, but automation is meant to support the repetitive monkey work that can be a mind-numbing yet absolutely critical part of regression testing. Regression testing means re-testing all of the “old” code whenever new code is added, to make sure nothing that used to work has quietly broken.

For example, if we implement a new field in the Job Admin area of TruQC, we regression test the entire Job Admin area, not just the new field. That way, if changing X inadvertently broke Y, we find out about it and fix it before you, the industrial painter, ever see it. Yes, we like to avoid corrective actions just like you, but we all know they’re an inevitable part of almost any job. We keep our eyes wide open and work proactively to minimize them. That’s why we’ve implemented automated testing.
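As a rough sketch of how that plays out in MonkeyTalk (the script names here are invented for illustration), a regression pass over an area is simply a suite that re-runs every existing test for that area alongside the one covering the new field:

    # jobadmin_regression.mts -- hypothetical suite for the Job Admin area
    Test jobadmin_existing_fields.mt Run
    Test jobadmin_user_roles.mt Run
    # the test for the newly added field rides along with the old ones
    Test jobadmin_new_field.mt Run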

So far, as of this writing in late May

We’ve set up MonkeyTalk tests for every single field of every single report type. At the push of a button, 99% of the fields are filled in and verified as saved. A human then validates that the PDF renders correctly for every field, essentially doing a second check on the monkey’s work. This dramatically reduces the human’s workload: instead of keying in entries and validating every single field in the editing interface, the tester only needs to focus on the PDF. MonkeyTalk can’t yet validate the rendered PDF for us, so that still requires a human. Another thing MonkeyTalk can’t quite do yet is gesture recognition, which is why the signature field is another spot where we still need a good old human being to tap, sign and verify every signature. Photos and document annotations are two other examples where we still need a person to test the functionality.
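For a single field, that monkey work boils down to a couple of commands: enter a value, save, and verify it stuck. A minimal sketch, with an invented field ID and value:

    # one field’s worth of the repetitive work, multiplied across every report field
    Input surfaceTemp EnterText 72.5
    Button SAVE Tap
    # confirm the value persisted after the save
    Input surfaceTemp Verify 72.5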

Another way automated testing should help us is in our development phase

For example, if a programmer adds a field to the Observed Defects section of the Daily Inspection Report, he can fire up MonkeyTalk and click Run on the test for that section of that report. If it succeeds for every field except his newly added one, then he knows he hasn’t broken any of the surrounding functionality, and he can then add the new field to the MonkeyTalk test. That means fewer issue tickets, which means less back-and-forth, and it gives us higher confidence in a smoother testing cycle once the build reaches Beta.
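Sketching that workflow with hypothetical field IDs, extending the test usually means appending a few commands for the new field to the existing script:

    # observed_defects.mt -- existing commands exercise the old fields...
    Input defectType EnterText Pinhole
    Input defectLocation EnterText "North wall, section 3"
    # ...and the programmer appends the newly added field to the same script
    Input defectSeverity EnterText Minor
    Button SAVE Tap
    Input defectSeverity Verify Minor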

Pasted below is a link to a six-minute video of the Daily Inspection Report test suite in MonkeyTalk. It goes pretty fast in most parts, but that’s the point! This test suite strings together the following tests:

  •  Log in with a bad password and make sure it fails
  •  Log in with a good password and make sure it succeeds
  •  Tap through every item in the nav bar at the bottom
  •  Add a Daily Inspection Report
  •  Enter data for every field of every section and verify that the data saved and is ready to send to the server on the next sync
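Strung together as a MonkeyTalk suite, those steps might look something like this (all script names are hypothetical):

    # daily_inspection.mts -- names invented to mirror the video’s steps
    # bad password: verify the login error message appears
    Test login_bad_password.mt Run
    # good password: verify the home screen loads
    Test login_good_password.mt Run
    # tap through every item in the bottom nav bar
    Test navbar_walkthrough.mt Run
    # create a new Daily Inspection Report
    Test add_daily_inspection.mt Run
    # fill and verify every field of every section
    Test daily_inspection_fields.mt Run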

What it all means

We’re lucky to be in a position where our reports aren’t changing much, since they’re already working hard as they are (though a few tweaks are bound to happen). While we’re focused on adding new features, we expect MonkeyTalk to save us significant time in the testing and development cycles, especially around regression.

Getting bogged down in the details isn’t necessarily glorious work, but what makes us feel good about the effort we put into testing is knowing that our app is a reliable, high-quality piece of quality control software for customers who depend on it daily in mission-critical ways.
