Assessment in the AMES Certificate in Spoken & Written English

(c) Thor May, June 1993; all rights reserved

Note 1: This paper, and another entitled Observations on the AMES Certificate in Spoken and Written English, were circulated as memoranda in the Adult Migrant Education Service teaching centres, Victoria, in early 1993. Teaching staff at a large meeting in the Myer House headquarters of AMES at that time almost universally endorsed the sentiments expressed. However, the trenchant critique of CBT (competency-based training) as applied to language teaching was seen as a political threat by AMES management, and my contract was not renewed.

In my own assessment of Stage 3 students, an "achieved" status for a competency does not correspond to the "achieved" status the Certificate document claims to define. It cannot, because the Certificate definitions are incoherent. Teachers who claim that they are assessing according to the curriculum guidelines are either confused or lying. I am perfectly happy to lie about this (we all need to eat), so long as it is clearly understood that I am lying.

What my assessment really indicates is that a student has passed one test, relating to a given competency task, at better than the 50% level relative to the global language competency expected of a student at ASLPR 1+/2. It has not been uncommon for the same student to fail a later test of equivalent difficulty on a different task in the same competency.
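
To put some toy numbers on that instability, here is a minimal sketch of the decision actually being made: one noisy sample of a borderline student's ability, cut at the 50% mark. The scores, noise level and threshold are hypothetical illustrations; nothing in the sketch comes from AMES procedure.

```python
import random

PASS_MARK = 0.5  # "better than the 50% level" on one test of one task

def test_score(true_ability: float, noise: float = 0.15) -> float:
    """One test of one task: true ability blurred by task/occasion noise."""
    return max(0.0, min(1.0, true_ability + random.uniform(-noise, noise)))

random.seed(1)
student_ability = 0.55  # hypothetical student near the ASLPR 1+/2 borderline

first = test_score(student_ability)   # the test that earns the "achieved" tick
later = test_score(student_ability)   # an equivalent task on another occasion

for label, score in (("first test", first), ("later test", later)):
    verdict = "achieved" if score > PASS_MARK else "not achieved"
    print(f"{label}: {score:.2f} -> {verdict}")
# For a borderline student the two verdicts routinely disagree, which is
# exactly the pass-then-fail pattern described above.
```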

A "competent" student in the Certificate's terms would actually pass assessment at "100% of the task level expected of a stage 3 student". What is a "Stage 3 task level" when different tasks are set at different times of the year (i.e. at different points in a student's cognitive language development)? What exactly is 100% of, say, "Stage 3 discourse cohesion in report writing", or "100% of Stage 3 phonological accuracy in casual conversation"? The Certificate document does not give even a 50% coherent native-speaker-level answer to these questions. It begs the issue with a liberal sprinkling of subjective terms like "appropriate".

The Certificate guidelines cannot give genuine criteria for task levels because the Certificate specifications have gross internal inconsistencies. Consider the issue of test validity. Readers will recall that "validity means that an assessment task should assess what it claims to assess", and ... "content validity [is achieved when] an assessment task measures what has been taught in a course" (see the AMES assessment guidelines). Content validity in assessment is fine as an internal teaching evaluation procedure. It is worse than irrelevant as a measure of global language ability. That is, the practice effects reflected in a competency assessment are a measure of (among other things) classroom teaching on that task, and so have apparent content validity; but precisely those practice effects render competency assessment an invalid measure of global language ability, which is what students and the community actually need to know.
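
The logic of the objection can be reduced to one line. Suppose, as a toy decomposition (my assumption for illustration, not anything stated in the Certificate), that an observed task score is global ability plus a task-specific practice effect:

```python
# Toy decomposition, for illustration only:
#   observed task score = global ability + practice effect from drilling the task
def observed_score(global_ability: float, practice_effect: float) -> float:
    return min(1.0, global_ability + practice_effect)

# Two hypothetical students with invented numbers:
drilled   = observed_score(global_ability=0.45, practice_effect=0.30)  # -> 0.75
undrilled = observed_score(global_ability=0.60, practice_effect=0.00)  # -> 0.60

print(f"drilled student: {drilled:.2f}, undrilled student: {undrilled:.2f}")
# The heavily drilled student outscores the globally stronger one on the task.
# The score faithfully measures what was taught (apparent content validity),
# and for precisely that reason it misreports global language ability.
```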

Global language ability is the unrehearsed language ability of an individual in random language encounters. Teachers using the ASLPR are really making a stab at assessing this kind of global ability. They demonstrate considerable variation at the margins but, with moderation sessions, have been able to achieve sufficient consensus to articulate effective language programs for a number of years. The actual weights of the constituents that go into a judgement of this kind are too complex for the teachers themselves to define. They have to call it "intuition" or "experience".

Most significant areas of human behaviour are in fact directed by these kinds of intuitive judgements (think of the awesome mathematics that go into computing your decision to slip through a gap in fast-moving traffic, or the wonderfully subtle mix of judgements that inform your decision on whether to trust a stranger). Scientists are devoted to reducing complexities of this kind to objective rules. In language their success has been very modest indeed, but this has never prevented some of their half-tutored progeny from making large claims for miracle solutions (for the reader's amusement, a short article on some claims for computer translation is appended). The competencies curriculum falls into this class of miracle solutions, using a few variables from half-understood complexities. Since large amounts of money and many careers are at stake in this instance, the outcome is a classic thalidomide-type syndrome.

AMES is committed to competencies assessment. I am directed to make decisions in those terms. Of course, I must obey directives. However, will my decisions in fact have content validity? This, after all, is the final line of defence, the rock on which judgements are made. Well, it is like this. A particular competency assessment task has content validity only for the student's language at the time of the assessment. Six months later, the student's performance on that task is likely to have shifted back towards his or her global level of language ability. So how useful is the task assessment to anyone but a teacher? Do we care?
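
Giving the earlier toy decomposition a time axis shows what the six-month worry looks like. The half-life figure below is invented purely for illustration; the only point is the direction of the drift.

```python
# Continuing the toy model: the practice effect decays once teaching stops.
HALF_LIFE_MONTHS = 3.0  # invented figure, illustrative only

def task_score(global_ability: float, practice_effect: float,
               months: float) -> float:
    decay = 0.5 ** (months / HALF_LIFE_MONTHS)
    return min(1.0, global_ability + practice_effect * decay)

for months in (0, 3, 6):
    print(f"after {months} months: {task_score(0.45, 0.30, months):.2f}")
# The score drifts from 0.75 back towards the global level of 0.45:
# the assessed "competency" converges on the student's global ability,
# so the task result dates rapidly.
```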

The goal of covering a checklist of tasks in a variety of genres during the teaching program is fine. The idea of issuing an Intermediate Certificate of English for general attainment at the end of the AMES process has much to recommend it. The claim to assess specific language competencies that have validity outside the classroom is fraudulent.

The observations above are neither original, nor particularly my own. The just-completed Curriculum Day resulted in a good deal of spontaneous comment, most of it along the lines of this memorandum. It was striking that senior teachers and advisers found themselves in the invidious position of defending a program with which they have clearly become uncomfortable. The point was made repeatedly that the Certificate has been "imposed on us", and that the imperatives are financial, not pedagogical. It is certainly true that curriculums are rarely dropped because they are pedagogically weak. They are replaced when a politically attractive alternative is made available. However, it is disturbing that the creators of the Certificate are divorced from its consequences, that they are not part of the process of trying to make it work, and that they are not truly answerable on a day-to-day basis for resolving problems as they arise.

