Thursday, 17 December 2020

From my paper: Auditing as a tool of public policy

This is excerpted from: BG Charlton and P Andras. Auditing as a tool of public policy - The misuse of quality assurance techniques in the UK university expansion. European Political Science 2002; 2: 24-35.  

It comes from the time before I was a Christian, when I was pro-modernisation, and a kind of centre-right, mainstream-libertarian, but-actually-leftist. 

Nonetheless, the excerpted section on the nature of auditing, and of 'quality assurance' in particular, may be of some continued interest - considering that this technology has spread to include just about every institution of any size in the West.

As such QA auditing has been a primary tool of global bureaucratic takeover.

 

Over the past decade and a half, UK universities have been required to teach many more students than before, but for broadly the same cost to the government (and universities have so far been prevented from raising significantly more money by capped fees). Naturally, this will tend substantially to reduce educational standards relative to the preceding ‘boom’ era of the 1960s and 70s, because poorly-resourced systems will typically produce a lower quality output than well-resourced systems. 

But the primary cause of reduced standards is the truly enormous expansion in intake. Since the second world war, the percentage of the age group participating in degree-level higher education in England has climbed from less than 5 percent in the 1940s, to c.17 percent in 1987, and to c.32 percent in 1995, with the trend having continued since (Trow, 1991; Smith and Webster, 1997).

In itself, such expansion will inevitably inflate the degree certificate - that is, reduce its value, which depends on its relative rarity. Furthermore, a mass system contains a majority of students who are less able and less motivated than those in a highly selective elite educational system, and this too would contribute to a lower average level of examination attainment. For all these reasons it can be seen that the UK university expansion entails a significant reduction in degree standards - in the sense that the average university graduate will have a lower educational attainment after expansion than before. 

The role of a national teaching inspectorate 

Accepting that the academic standard of an average university degree was intended to fall, the rationale of a national system of university inspection would be to monitor and control this reduction in standards. The function of a national inspectorate can be understood in terms of controlling degree inflation. 

Given the inevitability of inflation of the degree (ie. a progressive reduction in its career-enhancing value to the individual - or a drop in the 'purchasing power' of a degree qualification) an effective system of inspection might aim to prevent this inevitable inflation from proceeding to a 'hyper-inflation', or total collapse of academic standards. 

The proper aim of a national teaching inspectorate was therefore to prevent a situation in which a university would admit almost anyone, teach them almost nothing, then give them a degree. This problem of degree hyper-inflation would be most likely to occur in those parts of the system where per capita funding and selectivity was lowest prior to expansion - in other words the ex-polytechnics that were from 1992 re-named universities. 

The proper main function of a national system of teaching inspection in the context of a deliberate reduction in degree standards would then be to guarantee a minimum standard of teaching - especially ensuring that the low-funded, low-selectivity institutions did not make their degrees too easy to attain. 

In practice, the proper function of a national teaching inspectorate such as the QAA was to guarantee a minimum necessary degree of selectivity (e.g. in admissions and in examination standards) and a minimum level of educational provision (e.g. supervision in the form of lectures, seminars, practicals, coursework etc.).

Auditing universities 

The choice of quality assurance technologies as a basis for inspection of university teaching was rational to the extent that university teaching is done by an explicit and objective system. If QAA had constructed its audits on the basis of ensuring that each teaching unit was 'delivering' an educational system of minimum acceptable standard, then it would probably have succeeded, and its audits would have been simple, swift and cheap. 

The nature of quality assurance auditing is shown most clearly by examining the origins of the practice (Power, 1997). Auditing was originally financial auditing, and the principal purpose of financial auditing is to detect and deter error and fraud in the handling of money within a closed system. 

A closed system is necessary because only within a closed system may it be expected that all money flows will balance. Indeed, financial audit defines the units of closed finance, the units of 'accountability'. There are legal requirements for certain individuals and organisations to be auditable, and this requirement enforces the monetarily-closed nature of such systems. 

Within a closed system, audit detects errors and fraud through sampling information and cross-checking it for inconsistencies when compared with established organisational and practice criteria (Flint, 1988). Independent sources of information should be consistent with each other when checked every-which-way. 

Since a complex organisation has so many strands making up a web of cash flows, the number of potential cross-checks is almost infinite. Anyone wishing to ‘cook the books’ has a great deal to fake if they are to ensure that every possible inconsistency between independent sources has been ironed-out. 

Financial auditing (usually) works in its job of deterring and detecting fraud because it is (usually) easier, cheaper and more efficient to be honest than to prepare internally-consistent fake accounts which can stand up to skilled cross-checking. True accounts automatically balance when cross-checked because they are a reflection of reality, while it takes a great deal of work to create audit-proof false accounts. 
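As a toy illustration of this cross-checking logic - a sketch of my own in Python, with the record names and figures invented for the example rather than taken from any real audit - the same transactions can be compared as they appear in several independent records, and any disagreement flagged for investigation:

```python
# Toy illustration of audit-style cross-checking within a closed system.
# The record names and figures are invented for the example.

ledger = {"INV-001": 1200.00, "INV-002": 350.50, "INV-003": 89.99}
bank_statement = {"INV-001": 1200.00, "INV-002": 350.50, "INV-003": 89.99}
invoices_issued = {"INV-001": 1200.00, "INV-002": 355.50, "INV-003": 89.99}

def cross_check(*records):
    """Return transactions on which independent records disagree (or are missing)."""
    problems = []
    all_ids = set().union(*(r.keys() for r in records))
    for tid in sorted(all_ids):
        amounts = [r.get(tid) for r in records]
        if len(set(amounts)) > 1:  # the independent records do not agree
            problems.append((tid, amounts))
    return problems

print(cross_check(ledger, bank_statement, invoices_issued))
# [('INV-002', [350.5, 350.5, 355.5])] - an inconsistency for the auditor to investigate
```

A real audit samples far more strands than this, but the principle is the same: consistency across independent records comes automatically to the honest, and only with great labour to the fraudulent.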

Managerial audit 

The relationship between auditing and the management function derives from a formal similarity in the information processing involved. 

Auditing involves setting up ‘second-order’ system-referential systems, in a manner closely analogous to the function of management - ie. both audit and management sample information from organisations, model the activity of organisations, and make predictions on the basis of these models which can be checked against further samples (Luhmann, 1995). 

Of course, auditing has traditionally been done by external accountancy firms, while management has been done by sub-systems of the organisation being managed - but these conventions are not formally necessary. In principle, management could be out-sourced, while auditing is increasingly an internal activity of subsystems ('quality units') within organisations. 

Given this similarity, the potential for using audit-generated information for modelling and controlling the organisation was obvious. This led to the development of Quality Assurance auditing as a generic managerial technology (Stebbing, 1993; Mills, 1993). 

QA auditing has many analogies with financial auditing. But instead of monitoring money flows in a closed system to detect financial fraud, quality assurance auditing samples information in order to monitor compliance with a system (Power, 1997). So an organisation explicitly defines the system by which it is supposed to be operating, and quality assurance auditing monitors whether that system is, in fact, being complied with. 

In this context, the word quality has come to mean something like 'reliability of outcome'. A 'quality' system has the operational meaning of a system that predictably delivers a pre-specified outcome (Power, 1997). 

For example, the quality assurance management systems of a fast-food franchise are designed to achieve a consistent product - so long as the system is complied-with, you get the same standard of hamburger (within pre-defined limits) every day and everywhere in the world. The outcome is therefore a product of the system, and so long as the system is functional then the outcome is predictable. 

In other words, the quality of the product may be 'assured' simply by checking that the system is indeed functioning - in a sense the actual hamburger need not be sampled or tasted. This is what it means to say that quality auditing monitors systems and processes, rather than outcomes (Feigenbaum, 1991). 

A quality audit operates by sampling what is happening in different strands of the system, and checking their mutual consistency and their compatibility with the system blueprint (which is usually provided in the form of a flow chart). Given the validity of its core assumption - that a given system results in a given outcome (an assumption which needs to be empirically tested) - quality assurance auditing has, in a variety of competitive economic contexts, proved itself capable of delivering consistent outcomes with relatively low transaction costs. 
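To make the contrast with financial auditing concrete, here is a minimal sketch (in Python, with the process steps invented purely for illustration) of compliance checking against a documented 'blueprint': the check is of the process, not of the product, and the verdict is binary.

```python
# Minimal sketch of quality-assurance-style compliance checking.
# The 'blueprint' is the organisation's own documented process;
# sampled runs are checked for conformity with it. All names are invented.

blueprint = ["take_order", "prepare", "quality_check", "serve"]

sampled_runs = [
    ["take_order", "prepare", "quality_check", "serve"],
    ["take_order", "prepare", "serve"],  # the quality check was skipped
]

def complies(run, blueprint):
    """A run complies if it performs every blueprint step, in the specified order."""
    remaining = iter(run)
    return all(step in remaining for step in blueprint)

for run in sampled_runs:
    print(run, "->", "pass" if complies(run, blueprint) else "fail")
```

Nothing here tastes the hamburger: the audit only asks whether the documented system was followed.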

Quality assurance in universities 

Properly speaking, when QA is applied to university education it would require prior validation in terms of outcomes, to answer the question 'does this system lead to a reliable and satisfactory outcome?' Just as you need first to taste the hamburgers before concluding that your quality assured system is reliably producing good ones, so you need to test that students coming out of a quality assured education system are indeed reaching a minimal educational standard. Only after the system has been empirically proven to produce predictably tasty burgers (or skilled students) can you neglect this empirical check. 

But with QAA there was no attempt made to test the assumption that any specific teaching system led to any specific outcome. Instead it was simply assumed that the existence of an explicit and self-consistent system of teaching was synonymous with excellence. By this omission, university teaching quality assurance lost any meaningful link to educational outcomes. 

By ignoring the connection between processes and outcomes, QAA implicitly chose the criterion of pure, abstract 'auditability' as its benchmark. 'High quality' teaching was defined as that which was comprehensively and self-consistently documented in a closed system. 

This meant that the QAA's definition of high quality teaching was an explicit system characterised by Mission Statements, aims and objectives, flow-charts, monitoring, feedback and formal procedures for all imaginable contingencies. 

By itself, this definition of quality is neutral in evaluative terms - however the public relations 'spin' of the QAA equated this technical definition of teaching quality with the everyday usage of 'high quality', which has to do with excellent outcome measures, not system properties (Charlton, 2002). 

Failure of QAA 

The root of QAA failure can be traced back to a very early stage in the policy implementation. Failure can be blamed upon the way that the legitimate goals of university inspection were first subverted and finally defeated by the public relations aspects of the policy. 

In other words, the political expediency and media spin concerning the advertised role of the QAA pushed the QAA into outright misrepresentation of its function and dishonesty about what it was doing. In the end, the QAA was using a system of quality assurance auditing to try to perform a function which was alien to the capabilities of the technology. 

1. Minimum standards versus continuous improvement 

Quality assurance is really about enforcing minimum standards and predictable outcomes, and certainly this was what was required by the UK university system in a time of rapid degree inflation. The function of a national quality assurance scheme should have been to oversee this reduction in standards, and to ensure that inflation did not go further than was necessary to achieve the objective of much higher rates of university graduation. 

But the QAA advertised its role as increasing academic standards, explicitly by improving teaching, and this meant that there was a fundamental dishonesty involved in the QAA mission. 

One lie usually leads to more lies, and the claim to be improving standards could only be made plausible by further dishonesties such as the claim that QA auditable systems of teaching were intrinsically superior to non-auditable teaching methods, hence that the post-QAA teaching was by its own definition superior. 

A further problem was obfuscation. The sheer complexity of procedures and measures, and their non-comparability between institutions, meant that it became impossible to understand what was going-on in the university system. 

Instead of measuring and publishing simple, clear-cut and comprehensible proxy measures of selectivity and provision, such as average A-level grades and staff-student ratios, the QAA published numerical scores derived from the aggregation of multiple non-transparent (and non-rational) variables (QAA, 1998). This effectively obscured the bald facts of degree inflation and diminished per capita educational provision, and contributed to the prevailing atmosphere of dishonest evasiveness. 

2. Pass-fail versus league table 

Like financial auditing, quality assurance auditing (when properly used) classifies systems in a binary fashion as pass or fail, satisfactory or not. By contrast, QAA used auditing to generate grades on a scale - this is evidence of a fundamental misunderstanding of the nature of auditing. 

It would seem both strange and suspicious if a financial audit were to award an institution a grade on a scale such as excellent, good, average or poor. Such an audit would be regarded as failing to achieve its objective of checking financial probity. A completed financial audit will either be satisfactory ('pass' - within acceptable levels of tolerance for the system) or unsatisfactory ('fail'). Either the institution is using a proper accounting system and the books balance - or not. 

It was a methodological error to use audit technologies to award grades to British universities. The fault presumably arose from the initial dishonesty of announcing that quality assurance would be used to raise standards, which implies a quantitative system of grading. A proper quality assurance system would maintain minimum consistent standards, but it is not of itself a system for continually cranking-up standards. 

3. Objective versus subjective measures 

Auditing works most straightforwardly when the information sampled is stable, objective and quantifiable and the system being audited is simple. Indeed, objectivity of information and evaluation is a core requirement of auditing (Boynton et al, 2001). By contrast, the QAA tried to measure variables that were inflating, subjective and qualitative; in systems that were highly complex. 

Money (for example) is usually a highly suitable informational measure for auditing, since it is objectively quantifiable and stable throughout the period being audited (but even money becomes un-auditable in periods of hyper-inflation). And financial audit works best when the system is relatively simple - very complex money flows may become virtually un-auditable (as seems to have happened with Enron). 

In principle, it would be possible to construct an auditable system of university teaching by sampling only information that reached a high standard of objective quantifiability. Trow (1993) has remarked that teaching cannot really be assessed in the short term, but that not teaching can. 

For example, there might be a national standard for a degree which stated minimum criteria in relation to factors such as entry qualifications, staff-student ratio, contact hours, class size, number and type of examinations, distribution of degree classification - and so on. A quality assurance audit could then ensure that all such criteria were being met. 
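As a sketch of what such an objective, pass/fail audit might look like (the field names and threshold values below are hypothetical, chosen purely for illustration and not taken from any actual national standard):

```python
# Hypothetical sketch of an objective, binary (pass/fail) teaching audit.
# The criteria and thresholds are invented for illustration only.

minimum_criteria = {
    "mean_entry_points": 240,        # e.g. average A-level points of the intake
    "staff_student_ratio": 1 / 25,   # at least one member of staff per 25 students
    "contact_hours_per_week": 12,
}

def audit(institution):
    """Pass only if every minimum criterion is met; otherwise fail, naming the shortfalls."""
    failures = [criterion for criterion, minimum in minimum_criteria.items()
                if institution.get(criterion, 0) < minimum]
    return ("pass", []) if not failures else ("fail", failures)

example = {
    "mean_entry_points": 260,
    "staff_student_ratio": 1 / 30,   # only one member of staff per 30 students
    "contact_hours_per_week": 14,
}
print(audit(example))  # ('fail', ['staff_student_ratio'])
```

Because every input is an objective, published number, the verdict could be checked by anyone - which is precisely what the QAA's actual scoring did not allow.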

(Indeed, exactly this kind of objective, user-orientated and comparative information is freely available for the US higher education system - e.g. http://www.usnews.com/usnews/edu/eduhome.htm) 

Instead, the QAA measured all kinds of intangible and subjective factors. Marks were awarded in relation to six categories of activity, each on a four-point scale (QAA, 1998). Most marks related to the completeness and consistency of an un-checkably vast amount of paper documentation (for instance, there were 17 headings and 64 separate documentation demands relating just to student assessment; QAA, 2000), some marks were awarded for an evaluation of non-randomly selected and pre-warned demonstrations of classroom teaching, some marks were awarded following interviews with non-randomly selected groups of graduates, and so on. All these variables were weighted and combined in an unvalidated fashion. 

The outcome was that the QAA grades were non-transparent and non-objective. 

Dependence upon inspectorial subjectivity also contributed to the strikingly intimidating and humiliating nature of QAA visitations (Charlton, 1999). Many auditees felt that they were being evaluated more for demonstrating a suitably subservient attitude than for the objective facts concerning their educational selectivity and provision. 

This contrasts sharply with the realities of an objective financial audit, which may be hard work for the auditee - but is a process from which the honest and competent organisation has nothing to fear. 

Expediency versus strategy 

The QAA forms a fascinating case study of how an apparently straightforward and readily-attainable policy of maintaining minimum standards while expanding the University system became muddied and eventually defeated by dishonesty and short-termism. The failure of QAA may be interpreted as an example of the way in which political expediency may unintentionally damage long-term strategy. The unwillingness of the UK government to acknowledge the downside of university expansion, and to explain and argue the case that the overall benefits of their policies would outweigh their specific disadvantages, has led to policies built upon reassuring lies (Andras & Charlton, 2002)...

References

Andras P, Charlton B. (2002). Hype and spin in the universities. Oxford Magazine. 202: 5-6.

Andras P, Charlton BG. (2002a). Democratic deficit and communication hyper-inflation in health care systems. Journal of Evaluation in Clinical Practice. 8: 291-297.

Baty P. (2001). Russell elite go for jugular of ailing QAA. Times Higher Education Supplement. 21 September.

Boynton WC, Johnson RN, Kell WG. (2001). Modern auditing, 7th edition. John Wiley & Sons: New York.

Cagan P. (1956). The monetary dynamics of hyperinflation. In (Ed.) Friedman M. Studies in the quantity theory of money. University of Chicago Press: Chicago. Pp 25-117.

Charlton B. (1999). QAA: why we should not collaborate. Oxford Magazine. 182: 1-3.

Charlton BG. (2002). Audit, accountability, quality and all that: the growth of managerial technologies in UK universities. In (Eds.) Prickett S, Erskine-Hill P. Education! Education! Education!: Managerial ethics and the law of unintended consequences. Imprint Academic: Thorverton, UK.

Feigenbaum AV. (1991). Total quality control, 3rd edition revised. McGraw-Hill: New York.

Flint D. (1988). Philosophy and principles of auditing. Macmillan: London.

Gellner E. (1983). Nations and nationalism. Blackwell: Oxford.

Gellner E. (1988). Plough, sword and book: the structure of human history. Collins Harvill: London.

Gellner E. (1994). Conditions of liberty: civil society and its rivals. Hamish Hamilton: London.

Habermas J. (1989). The structural transformation of the public sphere: an enquiry into a category of bourgeois society. Polity Press: Cambridge.

Kindler J, Kiss I. (Eds.) (1969). Systems theory (in Hungarian). Kozgazdasagi es Jogi Konyvkiado: Budapest.

Luhmann N. (1995). Social systems. Harvard University Press: Cambridge, MA, USA.

Luhmann N. The reality of the mass media. Polity Press: Cambridge, UK.

Maturana HM, Varela FJ. (1980). Autopoiesis and cognition. Reidel: Dordrecht, Netherlands.

Mills D. (1993). Quality auditing. Chapman & Hall: London.

Pokol B. (1991). The theory of professional institution systems (in Hungarian). Felsooktatasi Koordinacios Iroda: Budapest.

Power M. (1997). The audit society. Oxford University Press: Oxford.

QAA. (1998). Annual report 97-98. QAA: Gloucester.

QAA. (2000). Code of practice for the assurance of academic quality and standards in higher education. Section 6: Assessment of students. QAA: Gloucester.

Sargent TJ. (1982). The ends of four big inflations. In (Ed.) Hall RE. Inflation: causes and effects. University of Chicago Press: Chicago. Pp 41-97.

Siedentop L. (2000). Democracy in Europe. Allen Lane, Penguin: London.

Smith A, Webster F. (1997). The postmodern university? Open University Press: Buckingham, UK.

Stebbing L. (1993). Quality assurance, 3rd edition. Ellis Horwood: Chichester, UK.

THES Leader. (2001). There is quality assurance, then there is the QAA. Times Higher Education Supplement. 15 August.

Trow M. (1991). The exceptionalism of American higher education. In (Eds.) Trow M, Nybom T. University and society. Jessica Kingsley: London.

Trow M. (1993). Managerialism and the academic profession: the case of England.

Weber M. (1978). Economy and society: an outline of interpretative sociology. University of California Press: Berkeley.

Williams R. (1997). Quality assurance and diversity. In (Eds.) Brennan J, de Vries P, Williams R. Standards and quality in higher education. Jessica Kingsley: London.

Wright R. (2000). Nonzero: the logic of human destiny. Pantheon: New York.

 

1 comment:

  1. Would seem to twin well with this earlier post:

    https://charltonteaching.blogspot.com/2013/01/how-does-more-money-make-organization.html

    An example in my neck of the woods is St Peter's Hospice. Nominally a local charity but receiving a large subsidy from the NHS last I checked. Enough to make it dependent on whatever auditing process is used (Care Quality Commission?) So not really a local charity in substance, though many locals assume it is and continue to donate on that basis.

    I'm astonished at the amount of documentation involved in these audits: the sheer volume of creative/fashionable/dishonest verbiage required to comply with it all, and all the meetings required to sort out who's going to do what. At universities, take the REF Environment Statement. It ties up hundreds of hours of senior academics' time. Just look at this blurb about some of the indicators/criteria involved:

    https://www.ref.ac.uk/media/1019/guidance-on-environment-indicators.pdf

    What a nightmare!

    As far as pre-university education goes, a home ed family must be far more efficient than a school, despite lacking the economy of scale and lacking outside 'support'. In no small part due to lack of auditing. I'm not going to waste time trying to measure it, but one can get in a decent lesson in less time and for less cost than the average school run, i.e. before children have even entered the school gates.

