Communicative Language Testing

The notion of communicative competence is broad and needs to be fully understood before being adopted as the basis for the research testing regime. As previously indicated, assessment can be viewed in terms of two distinct paradigms:

1. The Psychometric-Structuralist era: testing is based on discrete linguistic points related to the four language skill areas of reading, writing, speaking and listening.
2. The Psycholinguistic-Sociolinguistic era: integrative tests were conceived in response to the language proficiency limitations associated with discrete-point testing. According to Oller (in Weir, 1988), integrative testing could measure the ability to integrate disparate language skills in ways that more closely resembled the actual process of language use.

The communicative paradigm is founded on the notion of competence. According to Morrow (in Weir, 1988, p. 8), communicative language testing should be concerned with: 1) what the learner knows about the form of the language and how to use it appropriately in context (competence); and 2) the extent to which the learner is able to demonstrate this knowledge in a meaningful situation (performance), i.e. what he can do with the language. Performance testing should therefore be representative of a real-life situation in which an integration of communicative skills is required, and the performance test criteria should relate closely to the effective communication of ideas in that context. Weir emphasises the importance of context and related tasks as an important dimension of communicative (performance) language assessment (ibid., p. 11). In conclusion, a variety of different tests is required for a range of different purposes, and the associated instruments are no longer uniform in content or method.

Recognising the broad definition of communication, Carroll (Testing Communicative Performance, 1980) adopts a rationalist approach to the definition of test requirements. The basis of the methodology is therefore a detailed analysis that includes the identification of the events and activities (communication functions) that drive the communicative need. Having been identified, the test requirements are divided between the principal communicative domains of speaking, listening, writing and reading. This approach is no doubt reminiscent of the requirements definition associated with English for Specific Purposes (ESP), i.e. functional language appropriate for tourists, students, lawyers, etc. However, this strategy (and its associated methodology) would seem inappropriate in the given research context for the following salient reasons:

1. It is not practical to undertake a meaningful needs analysis for all participants
2. The entire process is far too complex and labour intensive
3. ESP is not aimed at marginalised communities or children

Sabria and Samer (other students) have pointed me in the direction of the Cambridge TOEFL exams (conformant with the Common European Framework of Reference for Languages) as a potential basis for communicative testing. The tests are divided into the four principal language dimensions (speaking, listening, writing and reading) and provide tests and marking criteria at all levels of competency, including that appropriate to the research context (Young Learners English – YLE Starters).

TARF

In preparation for my review panel, which is very much overdue, I have created a presentation for the TARF. I don't know what the acronym means, but it's an opportunity to present your research to the other PhD students. My presentation is in essence a summary of the Literature Review, with an emphasis on the test criteria, as that is the area I am having the most difficulty with. Most of the PhD students and all of the TARF regulars are linguists, which should be helpful when it comes to the critique of my Second Language Acquisition framework.

Whilst I had only prepared 11 slides, the presentation ultimately required two sessions and three hours to complete. The group appeared to be interested in the research topic and I very much enjoyed the experience; however, the panel itself is only 15 minutes in duration, so I'm going to have to spend a little time cutting down on material. The first half of the presentation is contextual and provides the political, social and economic background required to appreciate education provision in Ghana. The second half describes the specifics of the research in relation to the methodology and the theory underpinning the potential list of assessment tools.

According to Keevers (International Education Handbook), educational research addresses three learning areas: psycho-motor, affective and cognitive. The psycho-motor area relates to the development of infants and young children and is therefore not considered appropriate. The cognitive area is focused on assessment that tests comprehension as defined by Bloom's Taxonomy. Whilst there is no definitive learning theory associated with the SOLE, Distributed Cognition theory and Self-Regulated Learning were mentioned as related research topics worthy of investigation. To this cognitive area I have attached Second Language Acquisition, in view of its importance in the curriculum and its particular relevance to the SOLE. The intention is to test communicative competence, as the SOLE will provide an immersion environment based on implicit rather than explicit learning. Whilst recognising the complications of testing and the potential validity problems, the TARF accepted that this approach was more relevant to the Ghanaian context than a psycho-linguistic assessment. Finally, I addressed the affective area, but as no one in the room had experience in this domain, the Willingness to Communicate model went unquestioned.

In conclusion, there were no major issues highlighted by the TARF that were likely to upset my existing research framework. This positive response was supplemented by a meeting with the Prof. in advance of the TARF, who indicated that he was happy with current progress and believed that the research was eminently practical and doable. He is currently reviewing the latest copy of the Literature Review, so we shall see.

SOLE structure

Outcomes from the latest SOLE meeting with Sugata

a) Use a pilot study as a means of assessing the students' relationship with the SOLE in the Ghanaian context: emerging meaning, time to acquire knowledge, etc.
b) Structure the SOLE using previous exams, i.e. there is no need to develop a bridge between the formal linguistic environment and communicative competence.
c) Use simple questions to promote student interaction and positive engagement with the computer and the group.
d) Use a translator, at least initially, in order to describe SOLE practice and roles. Ensure that the positive sentiment is not lost in translation, i.e. no teacher-style coercion.
e) Possible extension of the test criteria to include computer literacy; an icon association inventory is recommended.
f) Communicative competence could include the description of a picture (à la TOEFL) or even the description of a game.
g) Data to be collected over a single academic year.
h) Children should be between 9 and 12 years of age.
i) One 90-minute session per day is sufficient.
j) Praise and support are crucial. Children need a positive relationship with the observer, even to the extent that work is completed in order to make him happy.
k) A session policeman is identified (on a rotating basis) to ensure that the children are doing something (whatever it is) and to report on child actions and behaviour, i.e. why student x is doing that action.
l) A weekly debrief is recommended in order to supplement data related to academic and behavioural outcomes (a sketch of how this record keeping might be structured follows this list).
m) Review the report completed by a Gateshead teacher making practical suggestions to enhance the SOLE environment after a year of implementation.
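To make points k) and l) a little more concrete, the following is a minimal sketch, in Python, of how session observations and weekly debrief notes might be recorded. The class names, fields and example values are my own hypothetical placeholders rather than anything agreed at the meeting, and they would need revisiting once the pilot study design is settled.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record of one observation made during a 90-minute SOLE session.
@dataclass
class SessionObservation:
    session_date: date
    observer: str          # the rotating "session policeman"
    child_id: str          # anonymised identifier for a child aged 9-12
    activity: str          # what the child was doing at the computer / in the group
    behaviour_note: str    # why student x was doing that action
    praised: bool = False  # whether praise/support was given during the session

# Hypothetical weekly debrief supplementing academic and behavioural data.
@dataclass
class WeeklyDebrief:
    week_ending: date
    academic_outcomes: str
    behavioural_outcomes: str
    observations: List[SessionObservation] = field(default_factory=list)

# Example usage with placeholder values: one observation folded into a debrief.
if __name__ == "__main__":
    obs = SessionObservation(
        session_date=date(2014, 9, 15),
        observer="Observer A",
        child_id="S01",
        activity="Describing a picture to the group",
        behaviour_note="Switched to L1 when unsure of vocabulary",
        praised=True,
    )
    debrief = WeeklyDebrief(
        week_ending=date(2014, 9, 19),
        academic_outcomes="Early picture-description vocabulary emerging",
        behavioural_outcomes="Positive engagement with the computer and the group",
        observations=[obs],
    )
    print(debrief)
```

The point of the sketch is simply that each session observation and each weekly debrief could be captured in a consistent structure, so that behavioural notes and academic outcomes can be related over the academic year of data collection.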