
COMPUTER ASSISTED SECOND LANGUAGE VOCABULARY ACQUISITION


ABSTRACT

During the initial stages of instructed L2 acquisition students learn a couple of thousand, mainly high frequency words. Functional language proficiency, however, requires mastery of a considerably larger number of words. It is therefore necessary at the intermediate and advanced stages of language acquisition to learn a large vocabulary in a short period of time. There is not enough time to copy the natural (largely incidental) L1 word acquisition process. Incidental acquisition of the words is only possible up to a point, because, on account of their low frequency, they do not occur often enough in the L2 learning material. Acquisition of new words from authentic L2 reading texts by means of strategies such as contextual deduction is also not a solution for a number of reasons. There appears to be no alternative to intentional learning of a great many new words in a relatively short period of time. The words to be learned may be presented in isolation or in context. Presentation in bilingual word lists seems an attractive shortcut because it takes less time than contextual presentation and yields excellent short term results. Long term retention, however, is often disappointing, so contextual presentation seems advisable. Any suggestion as to how to implement this in pedagogic contexts should be based on a systematic analysis of the two most important aspects of the L2 word learning problem, that is to say, selecting the relevant vocabulary (which and how many words) and creating optimal conditions for the acquisition process. This article sets out to describe a computer assisted word acquisition programme (CAVOCA) which tries to do precisely this: the programme operationalises current theoretical thinking about word acquisition, and its contents are based on a systematic inventory of the vocabulary relevant for the target group. To establish its efficiency, the programme was contrasted in a number of experimental settings with a paired associates method of learning new words. The experimental results suggest that an approach combining the two methods is most advisable.

INTRODUCTION

The naive view that the vocabulary of a language should be seen as a "set of basic irregularities" impervious to systematic study, and its acquisition as a haphazard process of learning largely unrelated elements is long outdated. Furthermore, the language teaching profession has come to realise that in foreign language teaching, a grammar-oriented approach is not, to understate the case, the most efficient way to achieve communicative competence. An integrated approach combining systematic attention to the acquisition of both grammar and vocabulary is considered much more effective. This fuller appreciation of the importance of vocabulary teaching gives rise to a number of questions concerning the way in which it should be selected and presented for learning. These questions will be addressed below.

In the early stages of instructed foreign language acquisition1 students learn a few thousand mainly high frequency words. Such words occur so frequently in the teaching materials to which they are exposed that many are easily acquired. However, a vocabulary of that size, say 2,000 words, is not sufficient for functional language proficiency. To take reading as an example, estimates of the number of words required for understanding non-specialised texts vary (depending, among other things, on what is meant by "words" and "adequate comprehension"), but there is general consensus that 5,000 base words is a minimal requirement (Laufer, 1997; Nation, 1990), while for non-specialised, academic reading a considerably larger vocabulary is needed (Groot, 1994; Hazenberg & Hulstijn, 1996). It is therefore necessary that a large number of words be learned in a short period of time at the intermediate and advanced stages of language acquisition. Incidental acquisition of these words is only possible up to a point, because they do not occur often enough in the foreign language learning material. Learning new words from authentic L2 reading texts by means of strategies such as contextual deduction is not the answer either, for reasons to be given later. Although there is evidence that retention is better with L1 glosses than without (Hulstijn, Hollander, & Greidanus, 1996; Watanabe, 1997), isolated presentation of the numerous words to be learned in bilingual word lists results in long-term retention that is widely felt to be disappointing. Since the time available for the learning of the large number of new words is limited, it is essential to tackle this problem systematically, both in selecting the relevant vocabulary and in creating optimal conditions for the acquisition process. This article sets out to describe a computer assisted word acquisition programme which intends to do precisely this: the programme tries to systematically operationalise current theoretical thinking about word acquisition, and its contents are based on a systematic inventory of the vocabulary relevant for the target group. The programme (called CAVOCA, an acronym for Computer Assisted VOCabulary Acquisition) was developed over a trial period of several years. Its present database was constructed with the help of a government grant and contains some 500 words specially selected for their difficulty and relevance to the academic reading needs of Dutch university students. In the following paragraphs the theoretical and practical considerations involved in the construction of the programme will be dealt with.

HOW MANY WORDS?

Obviously, a detailed answer to this general question is impossible without a detailed description of the language activity and level intended. Therefore I shall confine myself to a specific example, namely, the vocabulary required for an adequate comprehension of academic reading texts of the type used in the foreign language reading comprehension tests annually constructed by the CITO (the Dutch central educational testing body) for the final exams of Dutch "vwo," an upper level secondary type of school preparing for university studies. These tests comprise a selection of authentic, argumentative and/or popular-scientific L2 texts on a variety of non-specialist topics. They specifically measure L2 reading skills, and comprehension does not depend primarily on textual features such as conceptual or structural complexity, or on reader characteristics such as familiarity with the topic.

To the extent that reading comprehension is dependent on word knowledge, there is empirical evidence (Groot, 1994) that for an adequate understanding of academic texts of this kind, a vocabulary of at least 7,000 words is required (Hazenberg & Hulstijn, 1996 mention an even higher number--10,000). Nation (1993) and Laufer (1997) suggest a target vocabulary of 5,000 as the minimum lexical requirement for understanding general, non-specialised texts. The rationale for these numbers is that only a vocabulary this size will result in a sufficiently dense lexical coverage of texts of this kind. Various studies (Groot, 1994; Hazenberg & Hulstijn, 1996; Hirsh & Nation, 1992; Laufer, 1989) have demonstrated that for adequate comprehension of texts at this level, readers must be familiar with more than 90% of the words used. With such a dense lexical coverage of a text, the percentage of unknown words is so low that, generally speaking, they will either not be essential for an understanding of the text or their meaning may be deduced from the context.
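To make the notion of lexical coverage concrete, the sketch below (a hypothetical illustration, not part of the CAVOCA programme) computes the proportion of running words in a text that a reader already knows; in the studies cited above, adequate comprehension requires this proportion to exceed roughly 90%. For simplicity the sketch matches raw word forms, whereas the coverage figures in the literature are based on base words or word families.

    # Hypothetical sketch: estimate the lexical coverage of a text, i.e., the
    # share of running words the reader already knows. Real coverage counts are
    # based on base words/word families; this toy version matches raw forms.
    import re

    def lexical_coverage(text, known_words):
        """Fraction of word tokens in `text` that occur in `known_words`."""
        tokens = re.findall(r"[a-z]+", text.lower())
        if not tokens:
            return 0.0
        return sum(1 for t in tokens if t in known_words) / len(tokens)

    known = {"the", "reader", "must", "know", "most", "of", "these", "words", "for"}
    sample = "The reader must know most of these words for adequate comprehension."
    print(f"coverage: {lexical_coverage(sample, known):.0%}")  # aim for more than 90%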

WHICH WORDS?

Apart from the most frequently used 2,000 words there are a further 3,000 words that should be learned. It is not possible to indicate accurately which words, partly because beyond the first 1,200 words, the frequency of words rapidly decreases and depends greatly on the corpus. Additional selection criteria such as usefulness and valency do not solve the problem either. Every selection will therefore contain a certain degree of arbitrariness as far as inclusion or omission of certain words is concerned. A partial solution to this problem may be to compile a much longer list of words of which only a portion must be mastered (Groot, 1994). The advantage of a list of this length is that difficult choices as to whether or not to include a particular word can be largely avoided. The feasibility of this idea has been studied in relation to English (de Jong, 1998). For this purpose, the subdivision into six frequency levels of the head words listed in the Collins Cobuild Dictionary (1995) was used. Allocation of a particular frequency level to a word is based on an analysis of the 200 million-word corpus "The Bank of English." Level 1 includes the 700 most frequent words, level 2 the next 1,200 words, level 3 the next 1,500 words, level 4 the next 3,200, level 5 the next 8,100 and level 6 all remaining words. It turned out that the application of a number of qualitative criteria such as relevance and difficulty, and quantitative criteria such as frequency, resulted in a list of approximately 8,000 words drawn from levels 3, 4 and 5. Familiarity with any 3,000 words from this list added to the first 2,000 would result in a lexical repertoire of 5,000 words, considered sufficient for general reading, while command of any 5,000, again in addition to the first 2,000, would suffice for academic reading. Complementary to this approach, various other word lists relevant to reading at this level may also be used in the compilation of such a list (Nation, 1990).
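As a purely illustrative sketch of the selection idea just described (the criteria flags below are invented and do not reproduce the actual criteria used in de Jong, 1998), candidate words can be pictured as being drawn from COBUILD frequency levels 3 to 5 and then filtered by qualitative judgements such as relevance and difficulty:

    # Hypothetical sketch of the selection procedure described above: keep
    # words from frequency levels 3-5 that also pass qualitative criteria.
    from dataclasses import dataclass

    @dataclass
    class HeadWord:
        word: str
        level: int       # COBUILD frequency level, 1 (most frequent) to 6
        relevant: bool   # judged relevant for academic reading (invented flag)
        difficult: bool  # judged difficult for the target group (invented flag)

    def select_candidates(headwords):
        return [h.word for h in headwords
                if h.level in (3, 4, 5) and h.relevant and h.difficult]

    lexicon = [
        HeadWord("the", 1, False, False),
        HeadWord("abrasive", 4, True, True),
        HeadWord("tentative", 3, True, True),
    ]
    print(select_candidates(lexicon))  # ['abrasive', 'tentative']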

HOW TO TEACH/LEARN THE WORDS?

In connection with word learning, a distinction is commonly drawn between incidental and intentional learning. Unless one narrowly defines incidental learning as excluding any conscious attention to the words being learned (cf. Singleton 1999, p. 274), the two learning modes are not always easy to differentiate and show a considerable overlap, not unlike the acquisition/learning dichotomy suggested by Krashen. In this paper, intentional learning will be used to refer to any learning activity the learner undertakes with the intention of gaining new knowledge. As such it differs from incidental learning where there is no such intention (Anderson, 1990). From a pedagogic perspective, however, the distinction is still useful in a discussion on the optimal way of presenting new L2 words in instructional contexts.

Most words in first language acquisition are learned incidentally in an incremental way because the language learner comes across them frequently in a wide range of contexts (De Bot, Paribakht, & Wesche, 1997; Nagy & Herman, 1987). In a short space of time, a large number of words are thus learned and this lexical repertoire then forms the basis for learning other new words. In the case of foreign language acquisition in instructional contexts, this process is virtually impossible to simulate. The exposure to new words is considerably less intensive and varied.2 Undoubtedly, a limited number of high frequency words can be learned incidentally but that will certainly not be possible for the much larger number of less frequent words that must subsequently be learned if one wishes to speak of functional proficiency.

To solve this problem it has been suggested that learners be exposed to authentic L2 material and trained in communicative strategies such as contextual deduction of the meaning of new words so that incidental acquisition can take place, thus partially copying the L1 acquisition process (Krashen, 1989). Attractive though this idea may seem, it is not very realistic. Authentic language material is generally not produced with the intention of illustrating to learners the meaning or usage of certain words but rather to convey information to other native speakers who are already familiar with these words. More often than not, it is therefore largely unsuitable for the learning of new words for a number of reasons.

First, because of their relatively low frequency, the words to be learned will occur rarely in the inevitably small authentic L2 input. This means there is not enough repetition for an incremental learning process in which the various features of the words are picked up from the contexts, resulting in a solid embedding in the mental lexicon, as in L1 acquisition.

Second, in authentic use of language, it is frequently not the immediate context of an unknown word that contains the clues to its meaning but wider contexts that cumulatively illustrate its semantic properties. In most instructed L2 learning situations, however, the learner is only exposed to selected passages, which in themselves may not aptly illustrate meaning and use of the particular word at all.

But probably the most important reason why authentic L2 language is inadequate for incidental acquisition (except at highly advanced levels) is that it contains too many other unknown words. Of course, some of these may not be essential for understanding the context. Function words are generally less relevant for comprehension than content words and the same goes for adjectives compared to nouns. But others will be essential and not knowing them will make contextual deduction of the word to be learned problematic. Contextual deduction and, in its wake, incidental acquisition of an unknown word is only possible if the context is well understood and clearly illustrates its meaning. One might say that in such cases, for a proficient reader, the new word is redundant; in other words, it might as well have been left out (as, indeed, it is in cloze tests to measure comprehension of the context). But to the extent that the context contains other unknown words for the learner, there arises what one might call a cumulative reduction of the redundancy of the word in question. The number of possible meanings of the unknown word increases proportionally to the number of other unknown words in the context; the new word may mean "x" if another unknown word means "y," but if this is not the case, "x" must have a different meaning and this puzzle of semantic permutations gets more and more complex with each additional unknown word. The learner must form ever more hypotheses as to the possible meaning and systematically utilise previous and subsequent information to corroborate or refute these. This process will take so much attention and working memory capacity that higher reading processes, which are essential for understanding the context (such as recognition of suprasentential links and discourse markers), are seriously impeded.
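One simplified way to picture this puzzle of semantic permutations: if each unknown word in a context has several plausible candidate meanings, the number of joint interpretations the reader has to weigh multiplies with every additional unknown word. The figures in the toy calculation below are invented purely for illustration.

    # Toy illustration (invented figures): joint meaning hypotheses multiply
    # with the number of unknown words in a context.
    candidates_per_word = 3  # plausible meanings assumed per unknown word
    for unknown_words in range(1, 6):
        hypotheses = candidates_per_word ** unknown_words
        print(f"{unknown_words} unknown word(s): {hypotheses} joint hypotheses")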

The above line of reasoning may be summarised as follows. A thorough understanding of the context is essential for deducing the meaning of an unknown word. For any context to be well understood a dense coverage is required. This means the reader must have "foreknowledge" of most other words in the particular context, which in turn presupposes a large vocabulary. There is a certain irony to this phenomenon (sometimes referred to as the Matthew effect) in the sense that a learner can only pick up new words from authentic contexts if s/he already has a large vocabulary (Horst, Cobb, & Meara, 1998). The above arguments may serve to illustrate the principle that in the limited time available in an L2 teaching context such a large vocabulary cannot be incidentally acquired by dint of sheer exposure to authentic L2 material.

If in instructional L2 situations incidental acquisition of a large vocabulary of lower frequency words through exposure to authentic L2 texts is hardly possible, it follows that efficient acquisition of new vocabulary requires a conscious effort from the learner (Prince, 1996; Sternberg, 1987). There seems to be no viable alternative to intentional learning of a large number of words with the help of authentic L2 material that has been selected (or edited) specifically for this purpose. The limited time available for this huge learning effort makes it imperative that the acquisition process be, as it were, accelerated. This requires a careful analysis of what should be learned and how it should be learned or, in other words, which words should be selected for learning (cf. "Contents of the Programme") and how they should be presented (cf. "Theoretical Background"). A computer assisted word learning programme which intends to do this is described below.

CAVOCA

Theoretical Background

CAVOCA (Computer Assisted VOCabulary Acquisition) is a computer programme for vocabulary acquisition in a foreign language. It has been designed on the basis of generally accepted theories about the way the mental lexicon is structured and operates. Allowing for certain differences between the various theories on how words are learned, stored in, and retrieved from the internal lexicon (cf. Aitchison, 1995), there is general agreement that in a natural (L1) word acquisition process several stages may be recognised. They cannot always be clearly distinguished because learning a word is an incremental process that gradually develops with repeated exposure and because there is constant interaction between the various stages. However, for clarity’s sake, they will be briefly described as if they were separate stages independent of one another.

1. Notice of the various properties of the new word: morphological and phonological, syntactic, semantic, stylistic, collocational, and so forth.

2. Storage in the internal lexicon in networks of relationships that correspond to the properties described in (1).

3. Consolidation of the storage described in (2) by means of further exposure to the word in a variety of contexts which illustrate its various properties. This results in a firmer embedding in the memory needed for long term retention.

Adequate implementation of the stages described above will result in a solid embedding of the word in the mental lexicon, which is necessary for efficient receptive and productive use. If one of the stages is neglected, the word will not properly fix itself in the internal lexicon and will be stored only superficially, without the many associations and links with other words needed for efficient lexical retrieval. The learner will barely, if at all, recognise the word in a reading or listening text and will certainly be unable to use it in speaking or writing. These ideas about the importance of an intensive processing of the new word were first presented in a systematic fashion in Craik and Lockhart's (1972) "levels of processing" theory. It postulated that "rates of forgetting are a function of the type and depth of encoding" of information and distinguished between various levels of processing. Thus, in their view, processing semantic properties of a word represented a deeper level than just processing its phonological features. Certain aspects of their theory have been criticised (especially its inability to clearly define the differences between levels in operational terms), but it has since led to a general consensus among researchers that there is a stringent relationship between retention and the intensity or elaborateness (Anderson, 1990) of processing lexical information about a new word (i.e., paying close attention to its various features such as spelling, pronunciation, semantic and syntactic attributes, relationships with other words, etc.). Important elements in this intensive processing are the variability (Anderson, 1990) and specificity (Tulving & Thomson, 1973) of the encoding activity. This theoretical position appears to have several important pedagogic implications for the teaching/learning of new words.

The first is that exposure to words in context is preferable to exposure to words in isolation. Only contexts will fully demonstrate the semantic, syntactic, and collocational features of a word the learner has to process in order to establish the numerous links and associations with other words necessary for easy accessibility and retrieval (see also Nation, 1990, and Singleton, 1999, for a summary of the arguments and evidence supporting this position).

Another implication, although more controversial than the first, appears to be that having learners infer the meaning of new words from the context is a better way to safeguard elaborate, intensive processing than giving the meaning because of the greater cognitive effort required.

Mondria (1996) presents evidence that seems to refute this theoretical stance. He interprets his finding that vocabulary test scores for the two conditions (given vs. inferred meaning) did not differ as indicating that there is no difference in long term retention effects between the two presentation methods, and that, in teaching new words, giving the meaning is a more efficient method than having learners contextually infer it, because it takes less time. His conclusions, however, are based on scores of tests of receptive knowledge only (a multiple choice and an open ended test) in which subjects were asked to recognise the target words. Whether tests of productive use (in which subjects have to recall the word themselves) would have yielded the same results leading to the same conclusions is doubtful (cf. the first remark in "Discussion").

The natural word acquisition process (as this occurs in first language acquisition) consists of gradual acquisition of the various properties of a word through repeated exposures in a wide range of authentic contexts illustrative of its various features. Bearing this in mind, we are faced with a dilemma in an instructed L2 learning situation. On the one hand, there is not enough time for exposure to new words of the same intensity as in L1 acquisition. On the other hand, superficial exposure leads to shallow processing which fails to establish enough associations and links with other words for solid storage and efficient retrieval. Obviously, there is no easy solution to this dilemma. The most realistic approach seems to be to create an environment that is maximally conducive to learning new words by striking a balance between the two contradictory demands. The CAVOCA programme intends to do just that by speeding up the acquisition process; it takes the learners systematically through the various stages by exposing them to carefully selected L2 material which illustrates the salient features of the new L2 word and/or the differences between the L2 word and its nearest L1 equivalent or counterpart.

The Programme

The stages of the vocabulary acquisition process described above are operationalised in the various sections of the CAVOCA programme. The programme takes the learner systematically through the sequence of mental operations which make up the acquisition process. The word to be acquired is presented in contexts selected in such a way as to ensure an efficient and, as it were, condensed acquisition process. To secure learner involvement, the programme is interactive: at certain points the learner has to make choices ("What do you think the word means?" "Is the word correct/appropriate in this context?" "What is the word that is missing in this context?") and is given feedback by the computer. The current CAVOCA programme presents the words in modules, each consisting of 25 words and taking about 50 minutes to complete. The programme covers each word in four sections which embody the various stages of the word acquisition process.
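The organisation just described (modules of 25 words, each word taken through four sections) could be modelled roughly as follows. This is a hypothetical sketch, not the actual CAVOCA implementation; only the first section name, "Deduction", is taken from the article, and the remaining section names are placeholders.

    # Hypothetical sketch of the module organisation described above: 25 words
    # per module, each word worked through four sections in a fixed order.
    # Only "Deduction" is named in the article; the other names are placeholders.
    SECTIONS = ["Deduction", "Section 2", "Section 3", "Section 4"]

    class Module:
        def __init__(self, words):
            assert len(words) == 25, "a module consists of 25 words"
            self.words = words

        def run(self, present):
            # `present` is a callback that shows one word in one section
            for word in self.words:
                for section in SECTIONS:
                    present(word, section)

    demo = Module(["word%d" % i for i in range(25)])
    demo.run(lambda word, section: None)  # a real programme would render each step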

The first two stages of the vocabulary acquisition process, learning the word's various properties (among which, most importantly in an L2 acquisition context, are its semantic properties, see Singleton, 1999, p. 189) and storing the word in the memory, are operationalised in the first section of the programme, called "Deduction." The word to be learned appears on the screen for a few seconds. Next, it is used in three sentences, presented in order of contextual richness. The first sentence contains only a few clues as to the meaning of the word and mainly serves to draw the learner's attention to its morphological composition, spelling, syntactic function, and so forth. The second sentence contains more clues as to the meaning, and the third is so contextually rich that the meaning becomes entirely clear. Every sentence is followed by a multiple choice question, with four options as to the possible meaning, to be answered by the learner; the correct alternative is a (near) synonym. After each sentence the learner is given immediate feedback (whether the meaning s/he inferred was right or wrong) to prevent a wrong meaning from being retained. After the third presentation of the word, the key to the multiple choice item is given as final feedback for the learner. To a certain extent, this way of presenting new words may seem unnatural, since in natural word acquisition first contexts need not, but may very well, contain clues to the meaning of an unknown word. It was nevertheless opted for in order to make learners process the word intensively by forcing them to form and test hypotheses as to its meaning. The word is presented three times in sentences containing ever more semantic clues, and the learner has to deduce the meaning in stages. This method of presenting the new word is meant to trigger off a cognitive process of what might be called "graded contextual disambiguation"; step by step the learner reduces the uncertainty about the meaning of the word by making use of the contextual clues increasingly present in the three consecutive sentences. It should yield better long term retention results than simply giving the meaning because it enforces a deeper level of processing (Mondria & Wit-de Boer, 1991). Here is an example. The word to be learned is "abrasive."
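The interaction flow of the "Deduction" section might be sketched roughly as below. This is a hypothetical illustration, not the actual CAVOCA code, and the three "abrasive" sentences and answer options are invented stand-ins for the programme's real materials (which are not reproduced here).

    # Hypothetical sketch of the "Deduction" flow (not the actual CAVOCA code):
    # three sentences of increasing contextual richness, each followed by a
    # four-option multiple choice question with immediate feedback.
    def deduction_section(word, sentences, options, correct_index):
        print(f"New word: {word}")
        for sentence in sentences:  # ordered from contextually lean to rich
            print(sentence)
            for number, option in enumerate(options, start=1):
                print(f"  {number}. {option}")
            choice = int(input("What do you think the word means? ")) - 1
            # immediate feedback prevents a wrong meaning from being retained
            print("Correct." if choice == correct_index
                  else "No, look for more clues in the next sentence.")
        print(f"Key: '{word}' here means '{options[correct_index]}'.")

    # Example call; the sentences and options below are invented illustrations.
    deduction_section(
        "abrasive",
        ["She found his manner rather abrasive.",
         "His abrasive remarks offended nearly everyone at the meeting.",
         "The cleaning powder was so abrasive that it left scratches all over the glass."],
        ["soothing", "harsh", "colourful", "hesitant"],
        correct_index=1,
    )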
