Category: Research

Learning to Listen to Lectures: How representative are EAP coursebooks?

I recently had the pleasure of joining the Norwegian Forum for English for Academic Purposes (one small benefit of the coronavirus pandemic was that this conference took place online this year!) and listening to Katrien Deroey’s talk on “Setting the stage for lecture listening: how representative are EAP coursebooks?”

She has presented and published on this topic before and I think it’s very interesting for all EAP instructors and materials writers. So, this post is a summary of what I see as the key points from her talk and what I took away from it regarding what we could do better in our EAP lecture listening instruction and materials in future.

The main finding of Katrien’s corpus-linguistic research is that EAP coursebooks on listening and note-taking in lectures often do not reflect the reality of the language used by lecturers – particularly regarding the metadiscourse and lexico-grammatical discourse markers that lecturers use to highlight important content points.

In her research on corpora of lectures given in English – namely the British Academic Spoken English (BASE) corpus and the Corpus of English as a Lingua Franca in Academic Settings (ELFA) – Katrien looked at the word classes and patterns of phrases used to fulfil this function, such as metanouns (e.g. idea, point), verb phrases (remember that), adjectives (central idea), and adverbs (importantly). She also distinguished two interactive orientations of such importance-marking lexicogrammatical devices: one focusing on the participants, using phrases like “Now listen” (addressing the audience) or “I want to emphasise” (expressing intention), and the other focusing on the content, with phrases like “A key point is”.
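(A side note for the corpus-curious: the basic logic of this kind of search is easy to sketch in code. Below is a minimal, illustrative Python example of counting candidate importance markers in a plain-text transcript. The phrase list is my own invention for illustration – not Katrien’s actual search items – and real corpus work would of course involve concordancing and manually checking every hit.)

```python
import re
from collections import Counter

# Illustrative importance markers, grouped by word class / pattern.
# NB: these are invented examples, not Deroey's actual search list.
MARKERS = {
    "metanoun":    [r"\bthe (key|main|important) (point|idea|question) is\b"],
    "verb phrase": [r"\b(remember|notice|note) that\b",
                    r"\bi want to (emphasise|stress)\b"],
    "adverb":      [r"\bimportantly\b", r"\bcrucially\b"],
}

def count_markers(transcript: str) -> Counter:
    """Count hits per marker category in one lecture transcript."""
    text = transcript.lower()
    counts = Counter()
    for category, patterns in MARKERS.items():
        for pattern in patterns:
            counts[category] += len(re.findall(pattern, text))
    return counts

# Usage (file name hypothetical):
# with open("base_lecture_001.txt") as f:
#     print(count_markers(f.read()))
```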

Her comparison of lecture transcripts in the BASE and ELFA corpora showed that the frequency with which importance is explicitly marked was roughly equivalent between L1 and L2/EMI instructors. In both corpora, content-focused markers were most common, though a variety of word classes and grammatical patterns were found.

She found that EMI lecturers (often L2 speakers in non-English-speaking countries) were more likely to use a content focus, whereas L1 lecturers used content-focused and audience-focused phrases with roughly equal frequency when highlighting the importance of points in their lectures.

Another slight difference was that L1 lecturers used metanouns more often than EMI/L2 lecturers. On the other hand, EMI/L2 lecturers often used adjectives (e.g. the main idea) and deictic verb phrases such as “That’s the main point”, which were often anaphoric/backward-referring (the students would have to think back to whatever “that” refers to and then note it down). L1 lecturers were more likely to use verb phrases, particularly imperatives like “Remember” or “Notice” (as distinct from directives with second-person pronouns), which are also often cataphoric/forward-referring.

Overall, the most commonly used phrases in authentic lectures recorded in these corpora are:

 Remember/Notice xyz

 The point/question is xyz

 I want to emphasise/stress xyz

 The key/important/essential xyz is xyz

Katrien then analysed coursebooks that aim to teach lecture-listening skills to EAP students. She found that they often do not teach the phrases most commonly used in lectures to fulfil the function of marking importance. Indeed, many coursebooks include tasks where students are asked to identify the key ideas from a lecture, but do not necessarily give good training in the language that might help them to do so, such as listening out for metadiscourse and discourse markers. Some books include lists of ‘useful phrases’ here, but Katrien noticed a preference for explicit markers and listing words, directives with second-person pronouns (e.g. you need to remember) and other non-imperative verb phrases – so not entirely aligned with what the corpora show about phrases commonly used in real lectures.

Katrien suggests four sets of people who are possibly at least partly responsible for this disparity between EAP materials and authentic lectures, based on Gilmore (2015). These are: the researchers in applied linguistics who are not always good at making their research findings accessible; language teachers who rely on coursebooks and don’t (have time to) think beyond what the books present to them; materials writers who may use their intuition and creativity rather than research to inform their materials; and publishers who may not want to deal with having to source and get copyright for authentic lecture recordings, or who may not even see the value in doing so. [Note my use of defining relative clauses here – I absolutely do not want to imply putting blame on all researchers, teachers, writers, etc.!]

Katrien’s main recommendation for training EAP students to understand and take notes on the most important content points in lectures is that EAP instructors should critically reflect on their materials and on the appropriateness/relevance of the language presented for their students/context, and adapt or extend them as necessary. Supplementary materials should use language from authentic lecture transcripts, such as those found in databases and corpora like BASE or MICASE, and/or representative input materials for the context – e.g. collaborate with local lecturers and use their recordings/videos.

I agree with Katrien and would add that:

  • Materials writers need to make an effort to access the relevant linguistic (and SLA) research, corpora, word/phrase lists, etc. and use them to inform the language they include in their materials. I feel that writers and instructors in EAP in particular are often in a better position to access these publications and resources than those in other contexts, due to their typical affiliation with a university (and its library, databases, etc.) and the academic world in general.
  • Giving a list of useful phrases is not enough – students need active training, for example in decoding these phrases in fast connected speech, where processes like linking, assimilation or elision may be a barrier to understanding and prosody helps determine a phrase’s meaning, or in understanding how exactly these phrases are used and how they derive their signalling power from the context and co-text. These phrases are likely to be helpful to students giving their own oral presentations, too, so materials teaching these discourse markers could span and combine both skills.
  • Lecturers could benefit from training, too – not all lecturers (in some contexts, very few at all!) have received training in this kind of teaching presentation, and many may not be aware of the linguistic factors that can affect how well (especially L2) students understand the content of a lecture. So perhaps more EAP materials and users’ guides need to be targeted at teachers and lecturers as well as ‘just’ students.
  • And finally, I’ve said it before and I’ll say it again: We, EAP instructors and materials writers, need to provide numerous opportunities for students to deliberately engage with suitably selected, context-embedded discourse markers and academic vocabulary, to help them internalise these items and use them to succeed in their academic studies.

Analysing my Feedback Language

TL;DR SUMMARY

I ran a feedback text I’d written on a student’s work through some online text analysis tools to check the CEFR levels of my language. I was surprised that I was using some vocabulary above my students’ level. After considering whether I can nonetheless expect them to understand my comments, I propose the following tips:

  • Check the language of feedback comments before returning work, and modify vocabulary if necessary.
  • Check the vocabulary frequently used in feedback comments, and plan to teach these explicitly.
  • Get students to reflect on and respond to feedback to check understanding.

A couple of colleagues I follow on blogs and social media have recently posted about online text analysis tools such as Text Inspector, Lex Tutor and so on (see, for example, Julie Moore’s post here and Pete Clements’ post here). That prompted me to explore these tools in more detail for my own work – both using them to judge the input in my teaching materials or assessments, and using them with students to review their academic essay writing.

Once I got into playing around with different online tools (beyond my go-to Vocab Kitchen), I wanted to try some out on my own texts. The thing I’ve been writing most recently, though, is feedback on my students’ essays and summaries. But I’m a bit of a feedback nerd, so I was quite excited when the idea struck me: I could use these tools to analyse the language of the feedback I write to help my students improve their texts. A little action research, if you will.
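(For the technically curious: the core idea behind such profilers is to look each word up in a levelled word list. Here’s a minimal Python sketch of that idea – the word-list file and its format are hypothetical stand-ins, and the real tools also lemmatise and handle multi-word units, which this toy version doesn’t.)

```python
import csv
import re
from collections import Counter

def load_cefr_list(path: str) -> dict:
    """Load a levelled word list (hypothetical format: 'word,level' per line,
    e.g. 'emphasise,B2')."""
    with open(path, newline="") as f:
        return {word.lower(): level for word, level in csv.reader(f)}

def cefr_profile(text: str, cefr: dict) -> Counter:
    """Tally the tokens in a text by CEFR level; unknown words are 'off list'.
    (Real tools lemmatise first, so 'summarising' would match 'summarise'.)"""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(cefr.get(token, "off list") for token in tokens)

# Usage (file names hypothetical):
# levels = load_cefr_list("cefr_wordlist.csv")
# print(cefr_profile(open("my_feedback.txt").read(), levels))
```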

Now, I obviously can’t share the student’s work here for privacy and copyright reasons, but one recent assessment task was to write a 200-250-word compare/contrast paragraph to answer this question:

How similar are the two main characters in the last film you watched?

(Don’t focus on their appearance).

These students are at B2+ level (CEFR) working towards C1 in my essay writing class. They need to demonstrate C1-level language in order to pass the class assessments. One student did not pass this assessment because her text included too many language mistakes that impeded comprehension, because overall the language level did not reach C1, and because she didn’t employ the structural elements we had trained in class.

Here’s the feedback I gave on the piece of work and which I ran through a couple of text checkers. (Note: I usually only write this much if there are a lot of points that need improving!)

The language of this text demonstrates a B2 level of competence. Some of the phrasing is rather too colloquial for written academic language, e.g. starting sentences with ‘but’, and including contracted forms. You need to aim for more sophisticated vocabulary and more lexical diversity. More connectors, signposting and transitions are needed to highlight the genre and the comp/cont relationships between the pieces of information. The language slips lead to meaning not always being emphasised or even made clear (especially towards the end). Aim to write more concisely and precisely, otherwise your text sounds too much like a superficial, subjective summary.

Apart from the personal phrase at the beginning, the TS does an OK job at answering the question of ‘how similar’, and naming the features to be discussed. However, you need to make sure you name the items – i.e. the characters – and the film. In fact, the characters are not named anywhere in the text! The paragraph body does include some points that seem relevant, but the ordering would be more logical if you used signposting and the MEEE technique. For example, you first mention their goals but don’t yet explain what they are, instead first mentioning a difference between them – but not in enough detail to make sense to a reader who maybe doesn’t know the series. Also, you need to discuss the features/points in the order you introduce them in the TS – ‘ambition’ is not discussed here. The information in the last couple of sentences is not really relevant to this question, and does not function as a conclusion to summarise your overall message (i.e. that they are more similar than they think). In future, aim for more detailed explanations of content and use the MEEE technique within one of the structures we covered in class. And remember: do not start new lines within one paragraph – it should be one chunk of text.

I was quite surprised by this ‘scorecard’ summarising the analysis of the lexis in my feedback on Text Inspector – C2 CEFR level, 14% of words on the AWL, and an overall score of 72% “with 100% indicating a high level native speaker academic text.” (Text Inspector). Oops! I didn’t think I was using that high a level of academic lexis. The student can clearly be forgiven if she’s not able to improve further based on this feedback that might be over her head! 

(From Text Inspector)

In their analyses, both Text Inspector and Vocab Kitchen categorise words in the text by CEFR level. In my case, there were some ‘off list’ words, too. These include abbreviations, most of which I expect my students to know, such as e.g., and acronyms we’ve been using in class, such as MEEE (= Message, Explanation, Examples, Evaluation). Some other words are ‘off list’ because of my British English spelling with -ise (emphasise, summarise – B2 and C1 respectively). And some words aren’t included on the word lists these tools use, presumably because they are highly infrequent and thus categorised as ‘beyond’ C2 level. I checked learners’ dictionaries for the CEFR levels of the other ‘off list’ words, but only found rankings for these three:

Chunk – C1

Genre – B2

Signposting – C1

(From Vocab Kitchen)

Naturally, the question I asked myself at this point was whether I can reasonably expect my students to understand vocabulary which is above their current language level when I use it in feedback comments. This particularly applies to the words typically categorised as C2 – which on both platforms were contracted, superficial and transitions – and perhaps also to competence, diversity and subjective, which are marked as C1 level. And, of course, to the other ‘off list’ words: colloquial, concisely, connectors, lexical, and phrasing.

Now, competence, diversity, lexical and subjective shouldn’t pose too much of a problem for my students, as those words are very similar in German (Kompetenz, Diversität, lexikalisch, subjektiv), a language all of my students speak, most of them as an L1. We have also already discussed contracted forms, signposting and transitions on the course, so I have to assume my students understand those. That leaves colloquial, concisely, connectors, phrasing and superficial as potentially non-understandable words in my feedback.

Of course, this feedback is given in written form, so you could argue that students will be able to look up any unknown vocabulary in order to understand my comments and know what to maybe do differently in future. But I worry that not all students would actually bother to do so – they would continue to not fully understand my feedback, making it rather a waste of my time to have written it for them.

Overall, I’d say that formulations of helpful feedback comments for my EAP students need to strike a balance. They should mainly use level-appropriate language in terms of vocabulary and phrasing, so that the students can comprehend what they need to keep doing or work on improving. But they should probably also use some academic terms, to model them for the students and to make matching the feedback to the grading matrices more explicit. Perhaps the potentially non-understandable words in my feedback can be classified as working towards this second aim.

Indeed, writing in a formal register to avoid colloquialisms, and aiming for depth and detail to avoid superficiality, are key considerations in academic writing. As are writing in concise phrases and connecting them logically. Thus, I’m fairly sure I have used these potentially non-understandable words in my teaching on this course. But so far we haven’t done any vocabulary training specifically focused on these terms. If I need to use them in my feedback, though, then the students do need to understand them in some way.

So, what can I do? I think there are a couple of options for me going forward which can help me to provide constructive feedback in a manner which models academic language but is nonetheless accessible to the students at the level they are working at. These are ideas that I can apply to my own practice,  but that other teachers might also like to try out:

  • Check the language of feedback comments before returning work (with feedback) to students; modify vocabulary if necessary.
  • Check the vocabulary items and metalanguage I want/need to use in feedback comments, and in grading matrices (if provided to students), and plan to teach these words if they’re beyond students’ general level.
  • Use the same kinds of vocabulary in feedback comments as in oral explanations of models and in teaching, to increase students’ familiarity with it. 
  • Give examples (or highlight them in the student’s work) of what exactly I mean with certain words.
  • Get students to reflect on the feedback they receive and make an ‘action plan’ or list of points to keep in mind in future – which will show they have understood and been able to digest the feedback.

If you have further suggestions, please do share them in the comments section below!

As a brief closing comment, I just want to point out that it is of course not only the vocabulary of a text or feedback comment that determines how understandable it is for readers at a given level. It’s a start, perhaps, but other readability measures need to be taken into account, too. I’ll aim to explore these in a separate blog post.
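(As a sneak preview: one widely used measure is the Flesch Reading Ease score, which weighs average sentence length and syllables per word. The Python package textstat computes it and several related measures – a quick sketch:)

```python
import textstat  # pip install textstat

feedback = "The language of this text demonstrates a B2 level of competence."
print(textstat.flesch_reading_ease(feedback))   # higher = easier to read
print(textstat.flesch_kincaid_grade(feedback))  # approximate US school grade
```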

Perceiving Prominence – Part #1 (the “What?”)

How would you say these example sentences?
a. I think that’s the right answer, but I’m not sure. 

b. I think that’s the right answer, no matter what you say!

I’m guessing that “I” and “think” sounded different when you said these two sentences to yourself – and it is this difference that is interesting and helpful to understand, particularly for English learners.

My friend and colleague is doing post-doctoral research on the different functions of the chunk “I think”. Depending on whether “I” or “think” is pronounced as prominent (i.e. stressed), the chunk can either be a hedger or a booster, as we saw in the example sentences above. Her research is looking into whether these two functions of “I think” align with its function as part of a matrix clause or comment clause respectively. If that’s too hard-core linguistic-y for you, do not worry! This post is about how prominence on “I” or “think” is realised and perceived, and Part 2 will then look at how we can help English language learners to understand and use “I think” effectively.

To set the scene… My colleague and I were phonetically coding audio recordings of English speakers reading example sentences that include “I think” – she needs the coding for her research. In some cases, we noticed that we had coded the prominences (i.e. stressed syllables) in some sentences differently. And that got us thinking: On a phonological and rather subjective level, when do we perceive “I” or “think” as prominent? And on a phonetic and objective level, how are “I” and “think” produced to be prominent? That is, what makes a stressed syllable sound stressed?

For those who are less interested in the empirical details of the study, skip this paragraph! 😉 We had 23 English speakers read 6 sentences including “I think” three times each. There were also other ‘distractor’ sentences that they read, and they were not informed that our research focuses on “I think”. The ratio of distractors to experimental sentences was 2:1. The findings I’m reporting here come from our coding of 115 recordings of “I think” sentences – only one instance of each sentence per speaker.


As a first step in answering our questions about the perception and production of prominence, we each coded the recordings, using P or p to show strong or weak prominence, in the Praat software – see the image below for an example. In 96 of the 115 cases, we both agreed on which syllable was the most prominent one in the recording – an overlap of about 83%. In 19 of these cases, we didn’t overlap exactly on whether the prominence was strong or weak in comparison to the rest of the sentence stresses, but both identified the same syllable in “I think” as prominent; and in the 19 remaining cases, one of us had not marked any prominence in “I think” and the other one had. Of these 19 where we disagreed, there were 13 cases where my colleague coded “think” as prominent, and I perceived no prominence in “I think” compared to the rest of the sentence.

[Image: example of our prominence coding in Praat]
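(For anyone wanting to quantify inter-coder agreement like this themselves: raw percentage agreement is simple arithmetic, and a chance-corrected measure like Cohen’s kappa is standard practice. A minimal Python sketch – the label lists are hypothetical stand-ins for our actual coding:)

```python
def percentage_agreement(coder_a, coder_b):
    """Proportion of items where two coders assigned the same label."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(coder_a)
    p_observed = percentage_agreement(coder_a, coder_b)
    labels = set(coder_a) | set(coder_b)
    p_chance = sum((coder_a.count(l) / n) * (coder_b.count(l) / n)
                   for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Each label records which syllable of "I think" was coded prominent, if any:
# a = ["I", "think", "none", ...]   # coder 1's labels (hypothetical)
# b = ["I", "think", "think", ...]  # coder 2's labels (hypothetical)
# print(percentage_agreement(a, b), cohens_kappa(a, b))
```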

And so we moved on to looking at what was different in the productions of “I” and “think” in these sentences that could help explain our differences in perceiving prominence. Generally, syllable stress, or prominence, is said to be a combination of pitch accent (i.e. higher pitch or pitch movement), longer duration, and higher intensity (i.e. volume/loudness). Some linguists assume pitch to be the most important factor, whereas others claim intensity to be the best predictor of a syllable being perceived as prominent. Since the literature presents some conflicting ideas here, we were motivated to look at all three aspects and see whether they could explain our differing perceptions of prominence. We coded the recordings phonetically for pitch, duration and intensity (also using the Praat software), and came up with the findings below. The image shows what the coding looked like – the yellow line shows intensity and the blue one pitch.

[Image: phonetic coding in Praat, with intensity (yellow) and pitch (blue)]
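(If you’d like to try these measurements yourself: Praat’s analyses can be scripted from Python via the praat-parselmouth library. Below is a minimal sketch of extracting the three cues for one word, given its start and end times, e.g. from a TextGrid. The file names and times are hypothetical, and this is not the exact procedure we used.)

```python
import numpy as np
import parselmouth  # pip install praat-parselmouth

def prominence_cues(wav_path, start, end):
    """Mean F0, mean intensity and duration for one word in a recording."""
    snd = parselmouth.Sound(wav_path)
    word = snd.extract_part(from_time=start, to_time=end)
    # Pitch: Praat returns 0 Hz for unvoiced frames, so mask those out
    f0 = word.to_pitch().selected_array['frequency']
    mean_f0 = float(np.nanmean(np.where(f0 == 0, np.nan, f0)))
    # Intensity in dB (needs a long enough stretch of signal to analyse)
    mean_db = float(word.to_intensity().values.mean())
    return {"pitch_hz": mean_f0, "intensity_db": mean_db,
            "duration_s": end - start}

# Compare the two syllables within one recording (times hypothetical):
# print(prominence_cues("sentence01.wav", 0.32, 0.41))  # "I"
# print(prominence_cues("sentence01.wav", 0.41, 0.68))  # "think"
```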

Is there always a pitch accent on the syllable we perceived as prominent in “I think”? Basically, yes. There was only one example among our 115 recordings where there was no pitch accent on an “I” that we both perceived as prominent. Interestingly, prominence on “I” was often associated with pitch movement, rather than a peak. We also noticed that, even when I had not perceived either syllable in “I think” as particularly prominent in the sentence, there was usually a pitch peak on “think” – which may explain the 13 cases where my colleague perceived “think” as prominent and I had coded no prominence. So it seems that pitch is a fairly reliable predictor of when my colleague will perceive prominence. But what about me?

Next, we looked at intensity. Here, there was a less systematic correlation. We found that the more prominent syllable in “I think” is most often produced with more intensity. However, in 13 of the cases where we agreed on which syllable was prominent, the syllable we had marked as prominent was not the louder one – this was especially true for recordings of American speakers (11 out of 13 instances). Regarding the cases where we had coded prominence differently, intensity explained 47% of my colleague’s prominent syllables, but only 21% of mine. [My colleague was a little frustrated with me at this point – What IS it that makes syllables sound prominent to you???!!!]

The final phonetic aspect we looked at was duration. I should tell you that this is hard to measure, and phoneticians disagree on the best way to measure it, but that is a discussion for another time! In the cases where we agreed on the prominence of “I” in “I think”, it was always the speaker’s longest pronunciation of “I” in the sentence. Still, my colleague’s perceptions of prominence seem to be somewhat immune to the duration factor – even the long duration of “I” did not make her perceive it as prominent if pitch and/or intensity pointed to “think” as the prominent syllable. In 5 cases, though, this longer duration led me to code “I” as the prominent syllable, despite the pitch accent and/or higher intensity of “think”. So, it looked as if we were getting to the bottom of which factors create prominence the way I perceive it! Still, there was one case where the pitch peak was on “think”, an intensity peak on “I”, and it was the speaker’s longest “I” – and I still did not code either “I” or “think” as prominent!

From all this, we concluded that my colleague may have been influenced by all her research into the chunk “I think” and thus perceives prominence there more often than I do, as I focus on the whole sentence and compare any peaks in pitch, intensity or duration to all other words, rather than just within the chunk “I think”.


Still, overall, the order in which I have discussed the factors above seems to be their order of importance for creating prominence in our little study: pitch, intensity, duration. And for some conversation partners, like me, a combination of all three factors in one syllable may be necessary for it to be perceived as prominent.

 

So what does this mean for English language learners and teachers? The length of this blog post was getting a bit out of hand, so I’ve divided it up – come back for Part 2 to find out about the practical applications of these insights!

 

How to access ELT-relevant research

A while back, I summarised an article for ELT Research Bites exploring the reasons why language teaching professionals rarely access primary research reports. The main findings were that practitioners may have negative perceptions of research as irrelevant, they may face practical constraints such as expensive paywalls and a lack of time to find and read articles, and they may not be able to understand the articles’ content due to excessive use of academic jargon.

In this post, then, I want to share how we can access research related to language teaching in ways that do not cost a lot of money or time. 

  1. The website I mentioned above – ELT Research Bites – provides interesting language and education research in an easily digestible format. The summaries present the content of published articles in a shorter, simpler format, and also explore practical implications of articles’ findings for language teaching/learning.
  2. Musicuentos Black Box is similar to ELT Research Bites, but summarises research articles in videos and podcasts. (Thanks to Lindsay Marean for sharing this with me!) 
  3. The organisation TESOL Academic provides free or affordable access to research articles on linguistics, TESOL and education in general. This is done mainly via videoed talks on YouTube, but you can also follow them on Facebook, Twitter and LinkedIn.
  4. The University of Oregon has a free, customisable email digest you can subscribe to here. It is aimed at language teachers and sends you a feature summary based on primary research articles. (Thanks to Lindsay Marean for sharing this with me!)
  5. IATEFL has a number of ‘Special Interest Groups’ and I’d like to highlight two in particular that can help us to access research. IATEFL ReSIG, the Research Special Interest Group, promotes and supports ELT and teacher research, in an attempt to close the gap between researchers and teachers or materials writers. You can find them on Facebook, Yahoo and Twitter. IATEFL MaWSIG, the Materials Writing Special Interest Group, has an open-access blog as well as a presence on Facebook, Instagram and Twitter. In the last year, there have been several posts summarising research findings and drawing out what the conclusions mean for English teaching materials and practice – including “And what about the research?” by Penny Ur, and “ELT materials writing: More on emerging principles” by Kath Bilsborough.
  6. Of course there are also search engines, such as Google Scholar, that you can use. You might find it helpful to look out for ‘state of the art’ articles or meta-studies that synthesise research findings from several reports and save you from having to read them all! If the paywall is your main problem, some journals also offer a sample article from each issue as open access; at ELT Journal, for example, these are the “Editor’s Choice” articles.

To make engaging with research more worthwhile, I’d suggest you should reflect on what you’re reading / hearing: Think about the validity of the findings based on the content and the method of the study, the relevance of the findings to your pedagogy, and, perhaps most importantly, the practicality of the findings for your own work. Be aware of trends and fashions, and use the conclusions you draw to inform your materials and teaching.

What Postgraduates Appreciate in Online Feedback on Academic Writing #researchbites

In this article, Northcott, Gillies and Caulton explore their students’ perceptions of how effective online formative feedback was for improving their postgraduate academic writing, and aim to highlight best practices for online writing feedback.

Northcott, J., Gillies, P. & Caulton, D. (2016). ‘What Postgraduates Appreciate in Online Tutor Feedback on Academic Writing’. Journal of Academic Writing, 6(1), 145–161.

Background

The focus of the study was on helping international master’s-level students at a UK university, for whom English is not their first/main language. The study’s central aim was to investigate these students’ satisfaction with the formative feedback provided online by language tutors on short-term, non-credit-bearing ESAP writing courses. These courses, run in collaboration with subject departments, are a new provision at the university, in response to previous surveys showing dissatisfaction among students with the feedback provided on written coursework for master’s-level courses. Participation is encouraged, but voluntary. The courses consist of five self-study units (with tasks and answer keys), as well as weekly essay assignments marked by a tutor.

The essays are submitted electronically, and feedback is provided using either GradeMark (part of Turnitin) or ‘track changes’ in Microsoft Word. The feedback covers both language correction and aspects of academic writing. These assignments are effectively draft versions of sections of the coursework assignments students are required to write for their master’s programmes.

Research

The EAP tutors involved marked a total of 458 assignments, written by students in the first month of their master’s degrees in either Medicine or Politics. Only 53 students completed all five units of the writing course, though 94 Medicine and 81 Politics students completed the first unit’s assignment.

Alongside the writing samples, data was also collected by surveying students at three points during the writing course, plus an end-of-course evaluation form. For detailed analysis, the researchers focused on students who had completed the whole writing course, matching their survey responses with the writing samples which had received feedback, as well as with the final coursework assignments they submitted for credit in their master’s programmes.

Findings

Analysing the feedback given by tutors, the researchers found both direct and indirect corrective feedback on language, as well as feedback on subject- or genre-specific writing conventions and the academic skills related to writing. Tutors’ comments mostly referred to specific text passages, rather than being unfocused or general feedback.

Student engagement with feedback was evidenced by analysing writing samples and final coursework: only one case was found where ‘there was clear evidence that a student had not acted on the feedback provided’ (p. 155). However, the researchers admit that, as participation in the course is voluntary, the students who complete it are likely to be those who are in general appreciative of feedback, so this finding may not be generalisable to other contexts.

In the surveys, most students reported feeling that the feedback had helped them to improve their writing. They acknowledged how useful the corrections provided were, and how the feedback could be applied in future. Moreover, comments demonstrated an appreciation of the motivational character of the feedback provided.

Summing up these findings, the researchers report:

It appeared to be the combination of principled corrective feedback with a focus on developing confidence by providing positive, personalised feedback on academic conventions and practices as well as language which obtained the most positive response from the students we investigated. (p. 154)

The students’ comments generally show that they responded well to this electronic mode of feedback delivery, and also felt a connection to their tutor, despite not meeting in person to discuss their work. As the researchers put it, students came to see ‘written feedback as a response to the person writing the text, not simply a response to a writing task’ (p. 156).

Take Away

The findings from this study highlight that simply using electronic modes of feedback delivery does not alone increase student satisfaction and engagement with feedback on their written work. Instead, the content and manner of the feedback given is key.

From the article, then, we can take away some tips for what kind of feedback to give, and how, to make electronic feedback most effective, at least for postgraduate students.

  • Start with a friendly greeting and refer to the student by name.
  • Establish an online persona as a sympathetic critical friend, ready to engage in dialogue.
  • Don’t only focus on corrective feedback, but aim to guide the student to be able to edit and correct their work autonomously, e.g. provide links to further helpful resources.
  • Be specific about the text passage the feedback refers to.
  • Tailor the feedback to the student’s needs, in terms of subject area, etc.
  • Give praise to develop the student’s confidence.
  • Take account of the student’s L1 and background.
  • Encourage the student to respond to the feedback, especially if anything is unclear or they find it difficult to apply.

This post is part of ELT Research Bites 2017 Summer of Research (Bites) Blog Carnival! Join in here.

Exhibiting CLIL: Developing student skills through project-based learning

Dr Jenny Skipp

My dear colleague Jenny has just given her first ever presentation at an IATEFL conference!

It was a very well-delivered talk, with a perfect balance of theory and practical ideas teachers can adapt for their own teaching. It’ll probably be of most interest to those working with young adult learners, and to teachers looking for ways to stretch their advanced learners. Want to know what she talked about? Look no further – here’s a summary:

Jenny presented a CLIL project she ran with a postgraduate British Cultural Studies class at Trier University (Germany). Cultural Studies classes in this context are for advanced EFL learners and thus have two aims – language learning and learning about content, in this case particular British cultural topics – making them good examples of CLIL.

Based on Coyle et al.’s conceptualisation of CLIL as encompassing four Cs – content, cognition, communication, and culture – Jenny and I devised project-based British Cultural Studies classes, which she then took as the basis for investigating the opportunities these classes afforded for developing language and academic skills.

The project involved setting up an exhibition on the topic of the course, which would be open to all staff and students at the university. The students in the course are working at a C1-C2 language level. How do you test C2-level language? Jenny thinks an exhibition might be one way.

Previous Cultural Studies courses had required students to give an in-class presentation and write a final essay. We hoped this project would prevent students from seeing their presentation or essay topics as isolated from what their peers were doing – something we believe had limited students’ language acquisition and practice – as they worked on making the exhibition a collective whole.

Over the course of the term, students had round-table discussions in lesson time, gave ‘work in progress’ oral reports on their exhibits in pairs to prompt discussion, and collaboratively wrote a concept paper to present the content and flow of the exhibition. They thus used the language of teamwork and of exhibit design, and were given oral feedback on it. On the exhibition day, we also monitored their interaction with visitors as they explained their exhibit topics to non-expert peers and staff from various academic departments. After the exhibition, at the end of the course, students wrote short individual essays.

So, what opportunities were really provided for language acquisition and practice?

Here, Jenny assessed this through the lens of the language triptych described by Coyle et al. She explained, very convincingly, how students developed…

Language Of Learning – general subject language, which is easily learnt or already known; in this case, some concrete terms stuck out to the surveyed students: “popular vs mass culture”, “identity”, “economic/economical”.

Language For Learning – in this category, Jenny saw feedback language used when evaluating others’ work in progress, language for data collection such as creating interview or survey questions, linguistic analyses, and differing registers, synonyms and expressions for describing the exhibition to different visitors.

Language Through Learning – figurative and idiomatic language, new words and how to use them naturally, academic register, and colloquial expressions were all mentioned by students. And it was not just specific words: it was also evident that students developed new ways of talking about concepts and their topics.

75% of the students surveyed after the end of the course perceived good opportunities for topic-specific language learning during the term-long preparation, and 82% during the exhibition. In their essays, they demonstrated a noticeable improvement in this area and in general language naturalness.

Jenny was really pleased to see students talking to exhibition visitors about the exhibits – they were seen paraphrasing for non-expert audiences, such as lower-level undergraduates, or using a more formal register with better-informed lecturers. This ability to adapt – to play around with and scale their language up or down to explain their understanding of complex topics to different people – would seem to be one way of demonstrating C2-level language competence!

Academic skills were trained by this project, too – higher-order thinking skills (HOTs) that fit into the ‘cognition’ C, with students analysing data from many sources, then evaluating and synthesising it to create their exhibition. Jenny found she could tick all the boxes, as it were, of Coonan’s taxonomy. Students also noticed these opportunities for criticality.

Overall, then, it seems that linguistic and conceptual techniques and communicative competences were practised and developed through this CLIL project, as well as cognitive abilities and transferable skills such as collaboration, organisation and teamwork. Students perceived this, and demonstrated it in both their exhibition and their essays. The final C was also addressed in this project, with students demonstrating expanded cultural sensitivity and a more international perspective.

This research, and Jenny’s compelling plug for CLIL, shows that a project as a collaborative event facilitates the use and practice of language, and feedback on it, as well as key skills! Try it yourself!

Slides and materials available from:

Skipp@uni-trier.de

Read more: Jenny Skipp & Clare Maas, ‘Content and Language Integrated Learning: In Theory and In Practice’, Modern English Teacher, April 2017.


ELT Research Bites

Followers of my blog will know that I believe we, as language teachers, all need to understand the pedagogical underpinnings of what we do in our language classrooms. That’s why I aim in my blog posts to provide information on theoretical backgrounds, along with lesson materials which apply them practically. I would also love for more teachers to read the research and background articles for themselves. But I know that teachers are all busy people, who may not have access to – or time to read – publications on the latest developments and findings from language education research.

ELT Research Bites is here to help!

As the founder, Anthony Schmidt, explains: ELT Research Bites is a collaborative, multi-author website that publishes summaries of published, peer-reviewed research in a short, accessible and informative way. 

The core contributors are Anthony Schmidt, Mura Nava, Stephen Bruce, and me!

 

Anthony describes the problem that inspired ELT Research Bites: There’s a lot of great research out there, ranging from empirically tested teaching activities to experiments that seek to understand the underlying mechanics of learning. The problem, though, is that this research doesn’t stand out like the latest headlines – you have to know where to look and what to look for, as well as sift through a number of other articles. In addition, many of these articles are behind extremely expensive paywalls that only universities can afford. If you don’t have access to a university database, you are effectively cut off from a great deal of research. Even if you do find the research you want to read, you have to pore through pages and pages of what can be dense prose just to get to the most useful parts. Reading the abstract and jumping to the conclusion is often not enough. You have to look at the background information, the study design, the data, and the discussion, too. In other words, reading research takes precious resources and time – things teachers and students often lack.

And so ELT Research Bites was born!  


The purpose of ELT Research Bites is to present interesting and relevant language and education research in an easily digestible format.

Anthony again:  By creating a site on which multiple authors are reading and writing about a range of articles, we hope to create for the teaching community a resource in which we share practical, peer-reviewed ideas in a way that fits their needs.

ELT Research Bites provides readers with the content and context of research articles, at a readable length, and with some ideas for practical implications. With these bite-size summaries of applied linguistics and pedagogy research, we hope to give all (language) teachers access to the insights gained from published empirical work – insights they can adapt and apply in their own practice – without taking too much of their time away from where it is needed most: the classroom.

CHECK OUT ELT Research Bites here:

FOLLOW ON TWITTER: @ResearchBites

 

British Council Teaching for Success – My Webinar

Here are the slides (inc. references) from my talk yesterday as part of the British Council’s “Teaching for Success” online conference. This talk takes research into feedback practices & translates it into practical ideas for classroom application!

Click here for Slides.

Link to the recorded talk: http://britishcouncil.adobeconnect.com/p424b8xlubb/

Abstract: Providing meticulous correction of errors and hand-written summaries on each student’s text can be time-consuming, and often seems less effective than desired. However, many teachers cannot access relevant publications discussing alternative feedback strategies, and remain unsure about which more time-efficient procedures might be applicable in their context. For this reason, this talk aims to discuss various strategies for assessing and giving feedback on EFL learners’ written work, which I have collected from recent publications, have applied and evaluated in my own teaching, and would like to share with fellow ELT practitioners.

This talk will demonstrate practicable strategies including ways of marking learners’ errors (underlining, correction codes, margin comments), as well as conducting successful peer review, delivering feedback with technology, and making the student-teacher feedback dialogue more constructive and efficient. For each strategy demonstrated, I will summarise recently published relevant research on its employment in various contexts, and briefly present discussions from the literature on the mechanisms underpinning its efficacy, with the main aim of aiding teachers in making informed choices pertaining to their specific learners and contexts. These factors include learner autonomy, motivation, learning styles, receptivity, learner-centredness and individualism.

The talk therefore encourages CPD within the British Council’s professional practices rubric of ‘Assessing Learning’ – a topic of interest and relevance to a broad audience – provides practical ideas which can be immediately trialled in a wide range of teaching contexts, and encourages open discussion on feedback practices among participants.


Learner-Driven Feedback in Essay Writing

A recent focus of work into feedback in ELT looks at ways of increasing students’ openness to teachers’ feedback and how students can be stimulated to engage more thoroughly with the feedback they receive. Learner-Driven Feedback (LDF) seems to be a promising practice here, and below is a summary of some research done in this area.

LDF is usually taken to mean responding to learners’ individual queries to make the feedback process more dialogic in nature, particularly in English for Academic Purposes (EAP) settings. For example, Bloxham and Campbell’s (2010) study used ‘Interactive Coversheets’, which require students to pose questions about their work when submitting essay drafts. Overall, they report good levels of uptake of the feedback provided, demonstrating that these Interactive Coversheets prompted their students to evaluate their writing in more detail, and that students responded positively to receiving this individualised feedback. Tutors in their study also found it quicker to give feedback based on the Interactive Coversheets, as students’ individual questions helped them focus their thoughts. However, Bloxham and Campbell noted that, if students had not put much effort into the draft or Interactive Coversheet – for example, if they only submitted an outline or a scribbled paragraph instead of a properly formulated draft – they were less able to make use of the formative feedback. This suggests that better organised or more autonomous students may be more likely to receive and engage with formative feedback, and Bloxham and Campbell thus note the limitation that this feedback procedure may merely help better students to perform even better.

Working within a similar framework, Campbell and Schumm-Fauster (2013) devised ‘Learner-Centred Feedback’, which also required their students to pose questions to direct tutors to give feedback on certain aspects of their writing when reading drafts, here in footnotes or as comments in the margins to their essays. They were interested in how students react to being asked/allowed to ‘drive’ the feedback they receive. Their survey showed that students were open to the dialogical feedback and reported finding it motivating and personal, and particularly helpful in working on their individual essay-writing weaknesses.

Studies on feedback on essay writing have also begun to explore various delivery modes for feedback, and have shown that these, too, may deepen students’ engagement with the feedback and may increase uptake. Technology-based modes can be used to deliver feedback on essays digitally, for example as in-text changes, as comments added to a document, as a feedback email, or as an audio recording. The focus here is on computer-mediated teacher feedback, i.e. not automated feedback.

In the field of language teaching, Cloete (2014) investigated the new multifaceted options for feedback afforded by EAP students submitting their work through online platforms. His study focused on the Turnitin platform, which his team of tutors used to give feedback by inserting comments into the text’s margins or in separate columns, highlighting text in different colours, and recording audio feedback. Based on teachers’ evaluations of using Turnitin in this way, he notes that the time-efficiency of delivering feedback in electronic modes depends on tutors’ typing speed and general comfort with using the feedback functions of the software, but that the added value of such electronic modes stems from the scope and amount of multimodal feedback that can be given, and the option to provide feedback in various modes simultaneously. Students in his study also showed heightened engagement with the feedback they received.

My own, very recent, study (Fielder, 2016) focused on an LDF procedure I devised which combines and adapts these previously published ideas and allows learners to determine the feedback they receive. In my LDF, the feedback is given by the teacher, but learners ‘drive’ how and on what they receive feedback: they can choose between various formats (e.g. hand-written, email, audio recording), and are required to pose questions about their work to which the teacher responds (e.g. on grammar, vocabulary/register, referencing, organisation). The study is an initial exploration of students’ receptivity towards Learner-Driven Feedback in EAP. The findings from the detailed survey data highlight a high level of student receptivity towards the procedure, and show that students perceive it as a useful tool for improving their general language accuracy and the study skills related to essay writing. However, it seems from the survey responses that the specific skills which can be significantly improved by my LDF may depend on which skills have already been trained through students’ previous academic experience. Nonetheless, this and the studies described above demonstrate compelling reasons for piloting LDF on EAP writing courses, many of which may also justify trialling the approach in other ELT classrooms.

 

References

  • Bloxham, S. & Campbell, L. (2010). ‘Generating dialogue in assessment feedback: exploring the use of interactive cover sheets’. Assessment and Evaluation in Higher Education, 35, 291–300.
  • Campbell, N. & Schumm-Fauster, J. (2013). ‘Learner-centred Feedback on Writing: Feedback as Dialogue’. In M. Reitbauer, N. Campbell, S. Mercer, J. Schumm & R. Vaupetitsch (eds), Feedback Matters (pp. 55–68). Frankfurt: Peter Lang.
  • Cloete, R. (2014). ‘Blending offline and online feedback on EAP writing’. The Journal of Teaching English for Specific and Academic Purposes, 2(4), 559-
  • Fielder, C. (2016). ‘Receptivity to Learner-Driven Feedback in EAP’. ELT Journal [advance access 2016; print issue published as Maas, C., 2017].

Marking Writing: Feedback Strategies to Challenge the Red Pen’s Reign – IATEFL 2016

By popular demand…

My handout from my presentation held at IATEFL 2016 in Birmingham, with the above title.


Abstract:

This talk provides teachers with time-efficient alternatives to traditional ‘red-pen correction’, by demonstrating and evaluating several effective feedback strategies that are applicable to giving feedback on writing in diverse contexts, and presenting summaries of published research which explores their efficacy. Issues including learner autonomy, motivation, and the role of technology are also briefly discussed to underpin the practical ideas presented.

Handout can be downloaded here: IATEFL 2016 conference Clare Fielder Works Cited handout.
