Month: Jun 2021

Learning to Listen to Lectures: How representative are EAP coursebooks?

I recently had the pleasure of joining the Norwegian Forum for English for Academic Purposes (one small benefit of the coronavirus pandemic was that this conference took place online this year!) and listening to Katrien Deroey’s talk on “Setting the stage for lecture listening: how representative are EAP coursebooks?”

She has presented and published on this topic before and I think it’s very interesting for all EAP instructors and materials writers. So, this post is a summary of what I see as the key points from her talk and what I took away from it regarding what we could do better in our EAP lecture listening instruction and materials in future.

The main finding of Katrien’s corpus-linguistic research is that EAP coursebooks on listening and note-taking in lectures often do not reflect the reality of the language used by lecturers – particularly regarding the metadiscourse and lexico-grammatical discourse markers used to highlight important content points in lectures.

In her research on corpora of lectures given in English – namely the British Academic Spoken English (BASE) corpus and the Corpus of English as a Lingua Franca in Academic Settings (ELFA) – Katrien looked at the word classes and patterns of phrases used to fulfil this function, such as metanouns (e.g. idea, point), verb phrases (e.g. remember that), adjectives (e.g. central idea), and adverbs (e.g. importantly). She also categorised two interactive orientations of such lexicogrammatical devices for highlighting importance: one focusing on the participants, using phrases like “Now listen” (addressing the audience) or “I want to emphasise” (expressing intention), and the other focusing on the content, saying things like “A key point is”.

Overall, her research comparing lecture transcripts in the BASE and ELFA corpora showed that the frequency with which importance is explicitly marked was roughly equivalent between L1 and L2/EMI lecturers. Content-focused markers were the most common, though a variety of word classes and grammatical patterns were found in both corpora.

She found that EMI lecturers (often L2 speakers in non-English-speaking countries) were more likely to use a content focus, whereas L1 lecturers used content-focused and audience-focused phrases with roughly equal frequency when highlighting the importance of points in their lectures.

Another slight difference was that L1 lecturers used metanouns more often than EMI/L2 lecturers. On the other hand, EMI/L2 lecturers often used adjectives (e.g. the main idea) and deictic verb phrases such as “That’s the main point”, which were often anaphoric/backward-referring (the students would have to think back to whatever “that” refers to and then note it down). Apparently, L1 lecturers were more likely to use verb phrases, particularly imperatives like “Remember” or “Notice” (as distinct from directives with second-person pronouns), which are also often cataphoric/forward-referring.

Overall, the most commonly used phrases in authentic lectures recorded in these corpora are:

  • Remember/Notice xyz
  • The point/question is xyz
  • I want to emphasise/stress xyz
  • The key/important/essential xyz is xyz
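As a side note for anyone who wants to poke around a lecture transcript themselves: the gist of scanning for such importance markers can be sketched in a few lines of Python. To be clear, the patterns and the mini-transcript below are my own invented illustrations, not Katrien’s actual methodology, which involved careful manual categorisation of corpus data.

```python
import re

# A few importance-marking patterns loosely based on the phrases above
# (invented regexes for illustration -- not an exhaustive or validated list).
MARKER_PATTERNS = [
    r"\b(remember|notice)\b",                       # imperative verb phrases
    r"\bthe (point|question) is\b",                 # content-focused metanouns
    r"\bi want to (emphasise|emphasize|stress)\b",  # expressing speaker intention
    r"\bthe (key|important|essential) \w+\b",       # evaluative adjectives
]

def find_importance_markers(transcript: str) -> list:
    """Return every sentence of the transcript containing a marker."""
    sentences = re.split(r"(?<=[.?!])\s+", transcript)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        if any(re.search(pattern, lowered) for pattern in MARKER_PATTERNS):
            hits.append(sentence)
    return hits

# Invented mini-transcript for demonstration:
sample = ("Today we look at photosynthesis. Remember that light is essential. "
          "The key point is the role of chlorophyll. Plants also need water.")
for hit in find_importance_markers(sample):
    print(hit)
```

Even a toy version like this makes the pedagogical point: the markers are short, formulaic and machine-findable, so there is no good reason for coursebook phrase lists to diverge from what lecturers actually say.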

Katrien then analysed coursebooks that aim to teach lecture-listening skills to EAP students. She found that they often do not teach the phrases most commonly used in lectures to fulfil the function of marking importance. Indeed, many coursebooks include tasks where students are asked to identify the key ideas from a lecture excerpt, but do not necessarily give good training on the language that might help them to do so, such as listening out for metadiscourse and discourse markers. Some books include lists of ‘useful phrases’ here, but Katrien noticed a preference for explicit markers and listing words, directives with second-person pronouns (e.g. you need to remember) and other non-imperative verb phrases – so not entirely aligned with what the corpora show about phrases commonly used in real lectures.

Katrien suggests four sets of people who are possibly at least partly responsible for this disparity between EAP materials and authentic lectures, based on Gilmore (2015). These are: the researchers in applied linguistics who are not always good at making their research findings accessible; language teachers who rely on coursebooks and don’t (have time to) think beyond what the books present to them; materials writers who may use their intuition and creativity rather than research to inform their materials; and publishers who may not want to deal with having to source and get copyright for authentic lecture recordings, or who may not even see the value in doing so. [Note my use of defining relative clauses here – I absolutely do not want to imply putting blame on all researchers, teachers, writers, etc.!]

Katrien’s main recommendation for training EAP students to understand and take notes on the most important content points in lectures is that EAP instructors should critically reflect on their materials and on the appropriateness/relevance of the language presented for their students/context, and adapt or extend them as necessary. Supplementary materials should use language from authentic lecture transcripts, such as those found in databases and corpora like BASE or MICASE, and/or representative input materials for the context – e.g. collaborating with local lecturers and using their recordings/videos.

I agree with Katrien and would add that:

  • Materials writers need to make an effort to access the relevant linguistic (and SLA) research, corpora and word/phrase lists, etc., and use them to inform the language they include in their materials. I feel that writers and instructors in EAP in particular are often in a better position to access these publications and resources than those in other contexts, due to their typical affiliation with a university (and its library, databases, etc.) and the academic world in general.
  • Giving a list of useful phrases is not enough – students need active training, for example in decoding these phrases in fast connected speech, where processes like linking, assimilation or elision are likely to happen and may be a barrier to understanding, and where prosody helps determine the phrases’ meaning; or training in understanding how exactly these phrases are used and how they derive their signalling power from the context and co-text. These phrases are likely to be helpful to students giving their own oral presentations, too, so materials teaching these discourse markers could span and combine both skills.
  • Lecturers could benefit from training, too – not all lecturers (in some contexts, not very many at all!) have received training in this kind of teaching presentation, and many may not be aware of the linguistic side of things that can affect how well (especially L2) students understand the content of a lecture. So perhaps more EAP materials and users’ guides need to be targeted at teachers and lecturers as well as ‘just’ the students.
  • And finally, I’ve said it before and I’ll say it again: we, EAP instructors and materials writers, need to provide numerous opportunities for students to deliberately engage with suitably selected, context-embedded discourse markers and academic vocabulary, to help them internalise this language and use it to succeed in their academic studies.

Analysing my Feedback Language

TL;DR SUMMARY

I ran a feedback text I’d written on a student’s work through some online text-analysis tools to check the CEFR levels of my language. I was surprised to find I was using some vocabulary above my students’ level. After considering whether I can nonetheless expect them to understand my comments, I propose the following tips:

  • Check the language of feedback comments before returning work, and modify vocabulary if necessary.
  • Check the vocabulary frequently used in feedback comments, and plan to teach these explicitly.
  • Get students to reflect on and respond to feedback to check understanding.

A couple of colleagues I follow on blogs and social media have recently posted about online text-analysis tools such as Text Inspector, Lex Tutor and so on (see, for example, Julie Moore’s post here and Pete Clements’ post here). That prompted me to explore these tools in more detail for my own work – both using them to judge the input in my teaching materials or assessments, and using them with students to review their academic essay writing.

Once I got into playing around with different online tools (beyond my go-to, Vocab Kitchen), I wanted to try some out on my own texts. The thing I’ve been writing most recently, though, is feedback on my students’ essays and summaries. I’m a bit of a feedback nerd, so I was quite excited when the idea struck me: I could use these tools to analyse the language of the feedback I write to help my students improve their texts. A little action research, if you will.

Now, I obviously can’t share the student’s work here for privacy and copyright reasons, but one recent assessment task was to write a 200–250-word compare/contrast paragraph to answer this question:

How similar are the two main characters in the last film you watched?

(Don’t focus on their appearance).

These students are at B2+ level (CEFR), working towards C1, in my essay-writing class. They need to demonstrate C1-level language in order to pass the class assessments. One student did not pass this assessment because her text included too many language mistakes that impeded comprehension, because overall the language did not reach C1 level, and because she didn’t employ the structural elements we had trained in class.

Here’s the feedback I gave on that piece of work, which I ran through a couple of text checkers. (Note: I usually only write this much if there are a lot of points that need improving!)

The language of this text demonstrates a B2 level of competence. Some of the phrasing is rather too colloquial for written academic language, e.g. starting sentences with ‘but’, and including contracted forms. You need to aim for more sophisticated vocabulary and more lexical diversity. More connectors, signposting and transitions are needed to highlight the genre and the comp/cont relationships between the pieces of information. The language slips lead to meaning not always being emphasised or even made clear (especially towards the end). Aim to write more concisely and precisely, otherwise your text sounds too much like a superficial, subjective summary.

Apart from the personal phrase at the beginning, the TS does an OK job at answering the question of ‘how similar’, and naming the features to be discussed. However, you need to make sure you name the items – i.e. the characters – and the film. In fact, the characters are not named anywhere in the text! The paragraph body does include some points that seem relevant, but the ordering would be more logical if you used signposting and the MEEE technique. For example, you first mention their goals but don’t yet explain what they are, instead first mentioning a difference between them – but not in enough detail to make sense to a reader who maybe doesn’t know the series. Also, you need to discuss the features/points in the order you introduce them in the TS – ‘ambition’ is not discussed here. The information in the last couple of sentences is not really relevant to this question, and does not function as a conclusion to summarise your overall message (i.e. that they are more similar than they think). In future, aim for more detailed explanations of content and use the MEEE technique within one of the structures we covered in class. And remember: do not start new lines within one paragraph – it should be one chunk of text.

I was quite surprised by this ‘scorecard’ summarising Text Inspector’s analysis of the lexis in my feedback – C2 CEFR level, 14% of words on the AWL, and an overall score of 72%, “with 100% indicating a high level native speaker academic text” (Text Inspector). Oops! I didn’t think I was using that high a level of academic lexis. The student can clearly be forgiven if she’s not able to improve further based on feedback that might be over her head!

(From Text Inspector)

In their analyses, both Text Inspector and Vocab Kitchen categorise the words in a text by CEFR level. In my case, there were some ‘off-list’ words, too. These include abbreviations, most of which I expect my students to know, such as e.g., and acronyms we’ve been using in class, such as MEEE (= Message, Explanation, Examples, Evaluation). Some other words are ‘off list’ because of my British English spelling with -ise (emphasise, summarise – B2 and C1 respectively). And some words aren’t included on the word lists used by these tools, presumably because they are highly infrequent and thus categorised as ‘beyond’ C2 level. I did check the CEFR levels that the other ‘off-list’ words are given in learners’ dictionaries, but only found rankings for these words:

Chunk – C1

Genre – B2

Signposting – C1

(From Vocab Kitchen)
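Out of curiosity about what such tools do under the hood: the core idea can be sketched quite simply – tokenise the text, look each word up in a CEFR-tagged word list, and tally the levels, with unknown words counted as ‘off list’. The tiny lookup table below is invented for illustration; real tools draw on large curated lists (e.g. the English Vocabulary Profile), not a hand-made dictionary like this.

```python
import re
from collections import Counter

# Invented toy CEFR lookup for illustration only -- real tools use
# large curated word lists, not a hand-made dictionary like this.
CEFR_LEVELS = {
    "language": "A1", "text": "A2", "written": "B1",
    "academic": "B2", "vocabulary": "B2",
    "competence": "C1", "diversity": "C1",
    "superficial": "C2", "transitions": "C2",
}

def cefr_profile(text: str) -> Counter:
    """Tally tokens by CEFR level; unknown tokens count as 'off-list'."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(CEFR_LEVELS.get(token, "off-list") for token in tokens)

feedback = "The language of this text demonstrates a B2 level of competence."
print(cefr_profile(feedback))
```

Of course, the commercial tools also handle inflected forms, multi-word units and frequency data, which is exactly why a simple lookup like this over-reports ‘off-list’ words.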

Logically, the question I asked myself at this point was whether I can reasonably expect my students to understand vocabulary above their current language level when I use it in feedback comments. This applies particularly to the words typically categorised as C2 – on both platforms, contracted, superficial and transitions – and perhaps also to competence, diversity and subjective, which are marked as C1 level. And, of course, to the other ‘off-list’ words: colloquial, concisely, connectors, lexical, and phrasing.

Now, competence, diversity, lexical and subjective shouldn’t pose too much of a problem for my students, as these words are very similar in German (Kompetenz, Diversität, lexikalisch, subjektiv), which all of my students speak, most of them as an L1. We have also already discussed contracted forms, signposting and transitions on the course, so I can assume my students understand those. That leaves colloquial, concisely, connectors, phrasing and superficial as potentially non-understandable words in my feedback.

Of course, this feedback is given in written form, so you could argue that students can look up any unknown vocabulary in order to understand my comments and know what to do differently in future. But I worry that not all students would actually bother to do so – they would continue to not fully understand my feedback, making it rather a waste of my time to have written it for them.

Overall, I’d say that helpful feedback comments for my EAP students need to strike a balance. They should mainly use level-appropriate language, in terms of vocabulary and phrasing, so that the students can comprehend what they need to keep doing or work on improving. But they should probably also use some academic terms, to model them for the students and to make matching the feedback to the grading matrices more explicit. Perhaps the potentially non-understandable words in my feedback can be classified as working towards this second aim.

Indeed, writing in a formal register to avoid colloquialisms, and aiming for depth and detail to avoid superficiality, are key considerations in academic writing – as are writing in concise phrases and connecting them logically. Thus, I’m fairly sure I have used these potentially non-understandable words in my teaching on this course. But so far we haven’t done any vocabulary training specifically focused on these terms. If I need to use them in my feedback, though, then the students do need to understand them in some way.

So, what can I do? I think there are a couple of options for me going forward which can help me to provide constructive feedback in a manner which models academic language but is nonetheless accessible to the students at the level they are working at. These are ideas that I can apply to my own practice,  but that other teachers might also like to try out:

  • Check the language of feedback comments before returning work (with feedback) to students; modify vocabulary if necessary.
  • Check the vocabulary items and metalanguage I want/need to use in feedback comments, and in grading matrices (if provided to students), and plan to teach these words if they’re beyond students’ general level.
  • Use the same kinds of vocabulary in feedback comments as in oral explanations of models and in teaching, to increase students’ familiarity with it. 
  • Give examples (or highlight them in the student’s work) of what exactly I mean by certain words.
  • Get students to reflect on the feedback they receive and make an ‘action plan’ or list of points to keep in mind in future – which will show they have understood and been able to digest the feedback.

If you have further suggestions, please do share them in the comments section below!

As a brief closing comment, I just want to point out that it is of course not only the vocabulary of a text or feedback comment that determines how understandable it is for readers at a given level. It’s a start, perhaps, but other readability measures need to be taken into account, too. I’ll aim to explore these in a separate blog post.