
Sally’s Thoughts: Why humans are needed to conduct quality reviews and why automated systems on their own don’t cut it

We recently had feedback from a client who had used an automated tool to complete ‘external validation’. I want to highlight some of the issues that arose from this review and our responses to it.

It’s important in our industry to get our terminology right (we are in compliance, after all). Time and time again, we see the word ‘validation’ used incorrectly. As per the Standards, validation is a process that occurs after assessment has been completed and involves determining whether your assessment tools are consistently producing valid assessment judgments. In practice, though, the terms ‘quality review’ (or ‘quality assurance’) and ‘validation’ are often used interchangeably.

New clients often ask whether we can provide ‘validation reports’ along with our learning and assessment packs, as some resource providers offer these. We use this as an educative opportunity with our RTO clients to clarify the concepts and the confusion that often surrounds them. A validation report provided by a resource provider is really not worth the paper it is written on: the tools have been quality reviewed in house (or externally), but the process of validation has not occurred. It is simply the resource provider’s way of saying, ‘We have reviewed our work and believe it to be of a quality standard’. That is a standard process that should be happening in any industry whenever a product is provided to a client, and if it is not, larger issues are at stake.

Now, back to these automated validation or quality review tools. I want to make it clear that these systems look for keywords. They can be useful in picking up missing content, but they may not be accurate if an assessment avoids using unit language (that is, where the unit language has been unpacked and rewritten in a way that makes sense to industry and to the future workers in that industry).

A bigger concern is that these systems cannot consider how well the assessment works from a practical implementation perspective. Has it been contextualised to the student cohort? Are the instructions detailed and clear? Do the benchmarks give assessors enough guidance to make consistent judgments? Are the benchmarks actually correct? Where simulations are provided, do the step-by-step instructions cover all the requirements of the unit, and does the simulation itself give the student the opportunity to demonstrate them?

If an assessment has been well written and covers all of the points above but avoids unit language to create a better assessment experience, an automated tool will flag it as ‘not compliant’. Yet a poorly constructed assessment that hits all of the keywords might pass through as ‘compliant’. While I believe in automation, AI and using technology to make processes more efficient, these automated quality review systems need to be used in conjunction with a person who is experienced and qualified to judge whether the system has done what it was intended to do. You wouldn’t write a piece of content with ChatGPT and send it out without fact checking it and rewriting certain sections to give it a more human tone, so quality review tools should not be solely relied on to make a judgment that an intelligent human should be making.

Automated tools are the way of the future – as long as an intelligent human is in the driving seat.