Category: Methods & Approaches

Analysing my Feedback Language

TL;DR SUMMARY

I ran a feedback text I’d written on a student’s work through some online text analysis tools to check the CEFR levels of my language. I was surprised that I was using some vocabulary above my students’ level. After considering whether I can nonetheless expect them to understand my comments, I propose the following tips:

  • Check the language of feedback comments before returning work and modify vocabulary if necessary.
  • Check the vocabulary frequently used in feedback comments, and plan to teach these explicitly.
  • Get students to reflect on and respond to feedback to check understanding.

A couple of colleagues I follow on blogs and social media have recently posted about online text analysis tools such as Text Inspector, Lex Tutor and so on (see, for example, Julie Moore’s post here and Pete Clements’ post here). That prompted me to explore these tools in more detail for my own work – both to judge the input in my teaching materials and assessments, and to use with students when reviewing their academic essay writing.

Once I got into playing around with different online tools (beyond my go-to Vocab Kitchen), I wanted to try some out on my own texts. The thing I’ve been writing most recently, though, is feedback on my students’ essays and summaries. But I’m a bit of a feedback nerd, so I was quite excited when the idea struck me: I could use these tools to analyse the language of the feedback I write to help my students improve their texts. A little action research, if you will.

Now I obviously can’t share the student’s work here for privacy and copyright reasons, but one recent assessment task was to write a 200-250 word compare/contrast paragraph to answer this question:

How similar are the two main characters in the last film you watched?

(Don’t focus on their appearance).

These students are at B2+ level (CEFR) working towards C1 in my essay writing class. They need to demonstrate C1-level language in order to pass the class assessments. One student did not pass this assessment because her text included too many language mistakes that impeded comprehension, because overall the language level did not reach C1, and because she didn’t employ the structural elements we had trained in class.

Here’s the feedback I gave on the piece of work and which I ran through a couple of text checkers. (Note: I usually only write this much if there are a lot of points that need improving!)

The language of this text demonstrates a B2 level of competence. Some of the phrasing is rather too colloquial for written academic language, e.g. starting sentences with ‘but’, and including contracted forms. You need to aim for more sophisticated vocabulary and more lexical diversity. More connectors, signposting and transitions are needed to highlight the genre and the comp/cont relationships between the pieces of information. The language slips lead to meaning not always being emphasised or even made clear (especially towards the end). Aim to write more concisely and precisely, otherwise your text sounds too much like a superficial, subjective summary.

Apart from the personal phrase at the beginning, the TS does an OK job at answering the question of ‘how similar’, and naming the features to be discussed. However, you need to make sure you name the items – i.e. the characters – and the film. In fact, the characters are not named anywhere in the text! The paragraph body does include some points that seem relevant, but the ordering would be more logical if you used signposting and the MEEE technique. For example, you first mention their goals but don’t yet explain what they are, instead first mentioning a difference between them – but not in enough detail to make sense to a reader who maybe doesn’t know the series. Also, you need to discuss the features/points in the order you introduce them in the TS – ‘ambition’ is not discussed here. The information in the last couple of sentences is not really relevant to this question, and does not function as a conclusion to summarise your overall message (i.e. that they are more similar than they think). In future, aim for more detailed explanations of content and use the MEEE technique within one of the structures we covered in class. And remember: do not start new lines within one paragraph – it should be one chunk of text.

I was quite surprised by this ‘scorecard’ summarising the analysis of the lexis in my feedback on Text Inspector – C2 CEFR level, 14% of words on the AWL, and an overall score of 72% “with 100% indicating a high level native speaker academic text.” (Text Inspector). Oops! I didn’t think I was using that high a level of academic lexis. The student can clearly be forgiven if she’s not able to improve further based on this feedback that might be over her head! 

(From Text Inspector)

In their analyses, both Text Inspector and Vocab Kitchen categorise words in the text by CEFR level. In my case, there were some ‘off list’ words, too. These include abbreviations, most of which I expect my students to know, such as e.g., and acronyms we’ve been using in class, such as MEEE (= Message, Explanation, Examples, Evaluation). Some other words are ‘off list’ because of my British English spelling with -ise (emphasise, summarise – B2 and C1 respectively). And some words aren’t included on the word lists used by these tools, presumably because they are highly infrequent and thus categorised as ‘beyond’ C2 level. I did check learners’ dictionaries for the CEFR levels of the other ‘off list’ words, but only found rankings for these three:

Chunk – C1

Genre – B2

Signposting – C1

(From Vocab Kitchen)
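For anyone curious what these tools are doing under the hood, the core logic – tagging each word against a levelled word list and tallying the proportions per level – can be sketched in a few lines. This is only an illustration: the CEFR_LEVELS dictionary below is a tiny made-up sample, not the actual lists used by Text Inspector or Vocab Kitchen.

```python
# Minimal sketch of a CEFR vocabulary profiler, in the spirit of
# Text Inspector / Vocab Kitchen. The CEFR_LEVELS dictionary is a tiny
# illustrative sample, not a real levelled word list.
import re
from collections import Counter

CEFR_LEVELS = {
    "language": "A1", "text": "A2", "academic": "B1",
    "emphasise": "B2", "genre": "B2",
    "chunk": "C1", "signposting": "C1", "competence": "C1",
    "superficial": "C2", "transitions": "C2",
}

def profile(text):
    """Tag each word with its CEFR level; unknown words count as 'off list'.

    Returns the percentage of words at each level, rounded to one decimal.
    """
    words = re.findall(r"[a-z]+", text.lower())
    levels = Counter(CEFR_LEVELS.get(w, "off list") for w in words)
    total = len(words)
    return {level: round(100 * n / total, 1) for level, n in levels.items()}

print(profile("Academic signposting and transitions highlight the genre."))
```

A real profiler would, of course, need a full levelled word list and some lemmatisation (so that emphasising matches emphasise), but the percentages-by-level ‘scorecard’ works just like this.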

Logically, the question I asked myself at this point is whether I can reasonably expect my students to understand the vocabulary which is above their current language level when I use it in feedback comments. This particularly applies to the words that are typically categorised as C2, which on both platforms were contracted, superficial and transitions, and perhaps also to competence, diversity and subjective which are marked as C1 level. And, of course, to the other ‘off list’ words: colloquial, concisely, connectors, lexical, and phrasing.

Now competence, diversity, lexical and subjective shouldn’t pose too much of a problem for my students, as those words are very similar in German (Kompetenz, Diversität, lexikalisch, subjektiv) which all of my students speak, most of them as an L1. We have also already discussed contracted forms, signposting and transitions on the course, so I have to assume my students understand those. Thus, I’m left with colloquial, concisely, connectors, phrasing and superficial as potentially non-understandable words in my feedback. 

Of course, this feedback is given in written form, so you could argue that students will be able to look up any unknown vocabulary in order to understand my comments and know what they might do differently in future. But I worry that not all students would actually bother to do so – so they would continue to not fully understand my feedback, making it rather a waste of my time to have written it for them.

Overall, I’d say that helpful feedback comments for my EAP students need to strike a balance. They should mainly use level-appropriate language in terms of vocabulary and phrasing, so that the students can comprehend what they need to keep doing or work on improving. But they should probably also use some academic terms, to model them for the students and to make matching the feedback to the grading matrices more explicit. Perhaps the potentially non-understandable words in my feedback can be classified as working towards the second of these aims.

Indeed, writing in a formal register to avoid colloquialisms, and aiming for depth and detail to avoid superficiality, are key considerations in academic writing. As are writing in concise phrases and connecting them logically. Thus, I’m fairly sure I have used these potentially non-understandable words in my teaching on this course. But so far we haven’t done any vocabulary training specifically focused on these terms. If I need to use them in my feedback, though, then the students do need to understand them in some way.

So, what can I do? I think there are a couple of options for me going forward which can help me to provide constructive feedback in a manner which models academic language but is nonetheless accessible to the students at the level they are working at. These are ideas that I can apply to my own practice,  but that other teachers might also like to try out:

  • Check the language of feedback comments before returning work (with feedback) to students; modify vocabulary if necessary.
  • Check the vocabulary items and metalanguage I want/need to use in feedback comments, and in grading matrices (if provided to students), and plan to teach these words if they’re beyond students’ general level.
  • Use the same kinds of vocabulary in feedback comments as in oral explanations of models and in teaching, to increase students’ familiarity with it. 
  • Give examples (or highlight them in the student’s work) of what exactly I mean with certain words.
  • Get students to reflect on the feedback they receive and make an ‘action plan’ or list of points to keep in mind in future – which will show they have understood and been able to digest the feedback.

If you have further suggestions, please do share them in the comments section below!

As a brief closing comment, I just want to point out that it is of course not only the vocabulary of a text or feedback comment that determines how understandable it is for readers at different levels. It’s a start, perhaps, but other readability measures need to be taken into account, too. I’ll aim to explore these in a separate blog post.

Reflections on my lesson: Is this TBLT?

OK, I’ll admit it. I’m a bit confused. I think my classroom practice and teaching materials reflect a Communicative Approach to language teaching. Prompted by some debates on Twitter, though, I’ve been trying to read up on TBLT and picture exactly what it would look like in the classroom, how TBLT-type lessons and courses would be sequenced and structured, and whether my lessons are actually TBLT. I’ve just read that “[g]enerally, [ELT] methods are quite distinctive at the early, beginning stages of a language course, and rather indistinguishable from each other at a later stage” (Brown, 1997, p. 3, in Richards & Rodgers, 2001, p. 249), and “[t]here are no convincing video ‘demonstrations’ with intermediate or advanced learners, perhaps because…at that level there is nothing distinctive to demonstrate” (Richards & Rodgers, 2001, p. 250), so maybe that’s why I’m finding it so hard to see whether the lesson and materials for B2-C1 learners I’ve created are actually TBLT or not.

Still, I think a lot of my lessons fit with what Willis (1996, in Richards & Rodgers, 2001, pp. 239-40) recommends as a sequence of activities in TBLT, even though I didn’t particularly plan them to be that way. Here’s an example; see what you think, I’m genuinely interested in opinions on this!

Pretask: Introduces topic & task

My lesson: T writes “food sharing” on the board and Sts brainstorm what they know about it. Any useful vocab sts use, especially if it’s new to other sts, is noted on the board. Sts are told that the overall goal for the lesson is to write a short statement showing their opinion on a food-sharing initiative.

Planning for task: Gives input on topic necessary for task

My lesson:

Stage 1 – Sts listen to a podcast on the topic, which discusses different ‘types’ of food sharing (e.g. food-sharing platforms, meal sharing, also food salvaging) and a couple of potential problems/legal issues. The two speakers basically have different views – one is very enthusiastic about food sharing and the other is wary. This is a real podcast, but I just use an excerpt so that it’s manageable within the lesson (Does this make it less authentic? And therefore not suitable for TBL?)

Sts answer some listening comprehension questions and take notes on what they learn about different sharing initiatives. Sts compare notes (e.g. in pairs) to check anything they aren’t sure they understood properly. T answers sts’ questions about any vocab or phrases in the podcast.

Stage 2 – Sts read two example comments that were left on the podcast website: again, one is in favour, the other is sceptical. They both state their opinion and explain a couple of reasons for it. (I just selected two which were well-written i.t.o. structure, with no typos/language slips, and where I thought the language used would be understandable to B2 learners – again, I wonder if this is authentic enough?) Sts answer comprehension Qs: which one is for/against food sharing, and how they know (which words/phrases show the opinion). They highlight the statement of opinion and the supporting points/reasons in different colours.

Sts think about which comment they agree with most and find a partner with a similar view.

Task – Completing the task/goal of the lesson 

My lesson: In pairs (with the partner they just found), sts write a comment showing their opinion to add to the podcast website. They are told to state their opinion clearly and include supporting points/reasons.

The comments are displayed around the classroom and sts read each other’s texts. They then decide which one they think makes the best argument and why. Individual sts report back to tell the class about which comment they find most convincing and what they think makes it so good.

Language Focus – analysis and practice

My lesson: Sts look back at what they highlighted in the comments and what they wrote themselves. They are directed to find words/phrases that introduce opinion (e.g. I honestly believe, the way I see it, I’m afraid I have to disagree); these are written on the board. Sts look at their notes from the podcast and see if they can remember any other phrases – they can listen again if they wish. Sts can also be asked to discuss equivalents in their L1 (is that OK in TBLT?)

Sts discuss in small groups other things that can be shared / other sharing initiatives they’ve heard about and their opinions of them (also in comparison to food sharing) – whether they see any issues or whether they’d like to try them. I display pictures (e.g. of books, cars, couch-surfing, office space) to give them ideas, but the language they mined from the input texts remains displayed on the board.

Posttask – reporting and consolidating

Finally, Sts reflect on their use of the words/phrases for showing opinion and edit their written comments on the podcast if they wish. They tell each other what they changed and why, and evaluate each other’s edited comments.

If sts wish, they can post their comments on the real podcast website.

 

From what I’ve been reading, a lot of what makes TBLT TBLT is the priority or focus given to meaning over “language points” – if I had, for example, done the language analysis (here, the guided discovery of phrases to introduce an opinion/supporting reasons) before the actual task (here, the writing of comments), then this would perhaps not have been so in keeping with what TBLT recommends, right? Then I would be “back to” the Communicative Approach, wouldn’t I? Comments welcome!

Don’t get me wrong, this blog post is not trying to weigh different methods up against each other (that’s a discussion for another time and place), but I’m trying to get my head around some criticisms of teaching and materials that claim TBLT would be better – and that got me wondering if it’s not TBLT I’m doing anyway…

 

References

Brown, H.D., “English language teaching in the ‘post-method’ era: Toward better diagnosis, treatment and assessment,” PASAA, 27, 1997, pp. 1-10.

Richards, J.C. & T.S. Rodgers, Approaches and Methods in Language Teaching (CUP, 2001)

Willis, J., “A flexible framework for task-based learning”, in J. Willis and D. Willis (eds), Challenge and Change in Language Teaching (Heinemann, 1996), pp. 52-62.

 

What Postgraduates Appreciate in Online Feedback on Academic Writing #researchbites

In this article, Northcott, Gillies and Caulton explore their students’ perceptions of how effective online formative feedback was for improving their postgraduate academic writing, and aim to highlight best practices for online writing feedback.

Northcott, J., P. Gillies & D. Caulton (2016), ‘What Postgraduates Appreciate in Online Tutor Feedback on Academic Writing’, Journal of Academic Writing, Vol. 6/1, pp. 145-161.

Background

The focus of the study was on helping international master’s-level students at a UK university, for whom English is not their first/main language. The study’s central aim was to investigate these students’ satisfaction with the formative feedback provided online by language tutors on short-term, non-credit-bearing ESAP writing courses. These courses, run in collaboration with subject departments, are a new provision at the university, in response to previous surveys showing dissatisfaction among students with feedback provided on written coursework for master’s-level courses. Participation is encouraged, but voluntary. The courses consist of five self-study units (with tasks and answer keys), as well as weekly essay assignments marked by a tutor.

The essays are submitted electronically, and feedback is provided using either GradeMark (part of Turnitin) or ‘track changes’ in Microsoft Word. The feedback covers both language correction and feedback on aspects of academic writing. These assignments are effectively draft versions of sections of coursework assignments students are required to write for the master’s programmes.

Research

The EAP tutors involved marked a total of 458 assignments, written by students in the first month of their master’s degrees in either Medicine or Politics. Only 53 students completed all five units of the writing course, though 94 Medicine and 81 Politics students completed the first unit’s assignment.

Alongside the writing samples, data was also collected by surveying students at three points during the writing course, plus an end-of-course evaluation form. Focussing on students who had completed the whole writing course, the researchers matched these students’ survey responses with the writing samples which had received feedback, as well as with the final coursework assignment submitted for credit in their master’s programme, for detailed analysis.

Findings

Analysing the feedback given by tutors, the researchers found both direct and indirect corrective feedback on language, as well as on subject-specific or genre-specific writing conventions and the academic skills related to writing. Tutors’ comments mostly referred to specific text passages, rather than being unfocused or general feedback.

Student engagement with feedback was evidenced by analysing writing samples and final coursework: only one case was found where ‘there was clear evidence that a student had not acted on the feedback provided’ (p. 155). However, the researchers admit that, as participation in the course is voluntary, the students who complete it are likely to be those who are in general appreciative of feedback, thus this finding may not be generalisable to other contexts.

In the surveys, most students reported feeling that the feedback had helped them to improve their writing. They acknowledged how useful the corrections provided were, and how the feedback could be applied in future. Moreover, comments demonstrated an appreciation of the motivational character of the feedback provided.

Summing up these findings, the researchers report:

It appeared to be the combination of principled corrective feedback with a focus on developing confidence by providing positive, personalised feedback on academic conventions and practices as well as language which obtained the most positive response from the students we investigated. (p. 154)

The students’ comments generally show that they responded well to this electronic mode of feedback delivery, and also felt a connection to their tutor, despite not meeting in person to discuss their work. As the researchers put it, students came to see ‘written feedback as a response to the person writing the text, not simply a response to a writing task’ (p. 156).

Take Away

The findings from this study highlight that simply using electronic modes of feedback delivery does not alone increase student satisfaction and engagement with feedback on their written work. Instead, the content and manner of the feedback given is key.

From the article, then, we can take away some tips for what kind of feedback to give, and how, to make electronic feedback most effective, at least for postgraduate students.

  • Start with a friendly greeting and refer to the student by name.
  • Establish an online persona as a sympathetic critical friend, ready to engage in dialogue.
  • Don’t only focus on corrective feedback, but aim to guide the student to be able to edit and correct their work autonomously, e.g. provide links to further helpful resources.
  • Be specific about the text passage the feedback refers to.
  • Tailor the feedback to the student’s needs, in terms of subject area, etc.
  • Give praise to develop the student’s confidence.
  • Take account of the student’s L1 and background.
  • Encourage the student to respond to the feedback, especially if anything is unclear or they find it difficult to apply.

This post is part of ELT Research Bites 2017 Summer of Research (Bites) Blog Carnival! Join in here.

Competency-based planning and assessing

Earlier this week, I attended a workshop on competency-based (or competency-oriented) planning and assessing held by Dr Stefan Brall at Trier University, and would like to share some of the insights here.

The workshop was aimed at university-level teachers from various subject areas, and so concentrated generally on Competency-Based Education (CBE). According to Richards and Rodgers (2001), the principles of CBE can be applied to the teaching of foreign languages (-> CBLT: Competency-Based Language Teaching), making the topic of interest to ELT professionals.

What is a competency?

In everyday language, we talk of people being ‘competent’ when they have the knowledge, qualification(s), or capacity to fulfil the expectations of a particular situation. They have the ability to apply the relevant skills appropriately and effectively. In the area of education, then, these skills are the individual competencies that students need to acquire and develop. An important distinction here is between declarative knowledge, the theoretical understanding of something, and procedural knowledge, the ability to actually do it. In language teaching, I would argue, our focus is necessarily on the procedural side of things, on getting students to be able to actually communicate in the target language. The overarching goal of CBLT is for learners to be able to apply and transfer this procedural knowledge in various settings, appropriately and effectively.

Literature on CBE explains how the approach can enhance learning, by

  • Focusing on the key competencies needed for success in the field
  • Providing standards for measuring performance and capabilities
  • Providing frameworks for identifying learners’ needs
  • Providing standards for measuring what learning has occurred

What are key competencies?

In the realm of tertiary education, a useful study to look at here is the Tuning Project. This is an EU-wide study which explored the most important competencies that students should develop at university. Although the specific ranking of the competencies may be debated, some of the capabilities that came out as very important include: the application of theory, problem solving, the adaptation of procedural knowledge to new situations, analytical thinking, synthesising information, and creativity (Gonzalez & Wagenaar, 2003). These kinds of skills are those often found at the top ends of taxonomies of learning. Compare, for example, with Bloom’s taxonomy:

(Image: Bloom’s taxonomy)

Other taxonomies of learning use comparable sequential units to describe cognitive learning. For example, the SOLO model (Structure of Observed Learning Outcome, see Biggs & Tang, 2007) includes a quantitative phase of uni-structural and multi-structural learning (e.g. identifying, describing, combining), and then a qualitative phase of relational (e.g. comparing, analysing causes, applying) and extended abstract learning (e.g. generalising, hypothesising). Seeing these important skills in a hierarchically organised scheme highlights how they build upon each other, and are themselves the products of mastering many sub-skills or competencies.

In language teaching, people have long spoken of “the four skills”, i.e. skills covering the oral, aural, reading and writing domains. To these we might also add learning competencies. In CBLT, language is taught as a function of communicating about concrete tasks; learners are taught the language forms/skills they will need to use in the various situations in which they will need to function. Scales such as the Common European Framework of Reference for Languages (CEFR) help to break down these skills into distinct competences, whereby learners move up through the levels of mastery in each skill area, from elementary performance in a competency to proficient performance.

(Image: the CEFR levels)

Competency-based Learning Outcomes

If we take scales of learning as the foundation for our planning, then, formulating statements of learning outcomes becomes quite a straightforward process. We will of course need to know the current level and needs of our students, especially in terms of competencies still to be learnt and competencies requiring further development. Associated with such learning taxonomies, we can easily find lists of action verbs which denote the skills associated with each developmental level of thinking skills. Based on the SOLO model, for example, we might find the following verbs:

  • Uni-structural learning (knowledge of one aspect): count, define, find, identify, imitate, name, recognise, repeat, replicate
  • Multi-structural learning (knowledge of several, unconnected aspects): calculate, classify, describe, illustrate, order, outline, summarise, translate
  • Relational learning (knowledge of aspects is integrated and connected): analyse, apply, compare, contrast, discuss, evaluate, examine, explain, integrate, organise, paraphrase, predict
  • Extended abstract learning (knowledge transferred to new situations): argue, compose, construct, create, deduce, design, generalise, hypothesise, imagine, invent, produce, prove, reflect, synthesise
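As a small illustration of how such verb lists can be put to work, the sketch below maps a draft learning-outcome statement to the SOLO level(s) its action verbs target – a quick sanity check that an outcome actually aims at the intended level of thinking. The verb sets are abbreviated from the list above; this is my own illustrative tool, not an official SOLO resource.

```python
# Sketch: map the action verbs from the SOLO-based list above to their
# learning level, then flag which level(s) a draft outcome statement targets.
# Verb sets are abbreviated for illustration; extend them as needed.
SOLO_VERBS = {
    "uni-structural": {"count", "define", "find", "identify", "name"},
    "multi-structural": {"calculate", "classify", "describe", "outline", "summarise"},
    "relational": {"analyse", "apply", "compare", "contrast", "explain"},
    "extended abstract": {"argue", "create", "generalise", "hypothesise", "reflect"},
}

def outcome_levels(outcome):
    """Return the SOLO levels targeted by the verbs in an outcome statement."""
    words = set(outcome.lower().replace(",", " ").split())
    return sorted(level for level, verbs in SOLO_VERBS.items() if verbs & words)

print(outcome_levels("Students should be able to compare and contrast two essay structures"))
```

An outcome like the one in the example comes back as relational, confirming it sits in the qualitative phase of the model; an outcome whose only verbs are ‘define’ and ‘name’ would be flagged as merely uni-structural.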

Based on our understanding of students’ current learning levels, students’ needs, and the general framework within which our lessons/courses are taking place (in terms of contact time, resources, etc), and with these action verbs, we can then formulate realistic learning goals. In most cases, there will be a primary learning outcome we hope to reach, which may consist of several sub-goals – this should be made clear.

For example, an academic writing course aimed at C1-level students (on the CEFR) might set the main learning outcome as:

By the end of this course, students should be able to produce a coherent analytical essay following the Anglo-American conventions for the genre.

A couple of the sub-goals might include:

  • Students should be familiar with Anglo-American essay-writing conventions and able to apply these to their own compositions.
  • Students should understand various cohesive devices and employ these appropriately within their writing.
  • Students should understand the functions of Topic Sentences and Thesis Statements and be able to formulate these suitably in their own writing. 

Formulating clear learning outcomes in this way, and making them public, helps students to reflect on their own progress and may motivate them. It also helps teachers to choose activities and materials with a clear focus, and to devise assessment tasks and grading rubrics.

Competency-based Assessment

Of course, most teachers will need to aim for economical assessment, in terms of time and resources. As far as possible, CBE advocates on-going assessment, so that students continue to work on the competency until they achieve the desired level of mastery. Competency-based assessment may thus require more effort and organisation on the part of the assessor – but it is able to provide a more accurate picture of students’ current stage of learning and performance.

Take multiple-choice tasks, for example; they can be marked very economically, but in reality they tend only to test the lower-level thinking skills, which may not have been the desired learning outcome. To test competency-based learning, we need to base our assessment tasks on the learning outcomes we have set, perhaps using the same action verbs in the task instructions. The focus is shifted to learners’ ability to demonstrate, not simply talk theoretically about, the behaviours noted in the learning outcomes. Still, especially in the realm of language teaching, there are some tasks we can easily set in written assignments which will also allow us to assess the higher levels of competencies more economically than oral presentations or practical assignments. If our learning outcome is the ability to apply a theory, for example, we could set a question such as ‘Describe a situation that illustrates the principles of xyz‘. Or, if we want to assess whether learners can discuss and evaluate, we might set a task like ‘Explain whether and why you agree or disagree with the following statement.‘ These kinds of tasks require learners to apply their acquired or developed competencies on a more qualitative level.

To enable objective assessments of students’ learning, we will need to devise a matrix based on the various levels of mastery of the competencies detailed in the learning outcomes. As a basis, we might start with something like this:

  • A – An outstanding performance.
  • B – A performance considerably better than the average standard.
  • C – A performance that reaches the average standard.
  • D – Despite shortcomings, the performance just about reaches the minimum standard required.
  • E – Because of considerable shortcomings, the performance does not reach the minimum standard required.

For each sub-skill of the competencies we are aiming for students to achieve, we will need to state specifically, for instance, which ‘shortcomings’ are ‘considerable’, e.g. if the students cannot demonstrate the desired level of mastery even with the tutor’s assistance. Also, it is important in CBE and CBLT that students’ performance is measured against their peers, especially to ascertain the ‘average standard’, and not against the mastery of the tutor.

To return to the essay-writing example, a student’s composition might receive a B grade on the sub-competence of using cohesive devices if they employ several techniques to create cohesion in their work, but occasionally use one technique where another might be more effective. A student’s essay might receive a D grade on this competency if they repeatedly use the same cohesive device, or employ the techniques indiscriminately and inappropriately. An E grade might mean that the student has not tried to employ any cohesive devices. In this manner, the primary learning outcome is broken down into sub-skills, on which students’ performance can be objectively measured using a detailed grading matrix.

In a nutshell, then, CBE and CBLT aim for ‘Yes we can!’ rather than ‘We know’. Competency-based teaching and learning have become a staple in further education and language instruction in many places around the world. If you would like to implement the approach in your own classrooms, I hope this post has given you some useful insights on how to do so!

References

Biggs, J. & C. Tang, Teaching for Quality Learning at University (Maidenhead: Open University, 2007).

Brall, S., “Kompetenzorientiert planen und prüfen”, Workshop at Trier University, 21.2.17.

Gonzalez, J. & R. Wagenaar, Tuning Educational Structures in Europe: Final Report Phase One (Bilbao, 2003).

Richards, J.C. & T.S. Rodgers, Approaches and Methods in Language Teaching (Cambridge: CUP, 2001).

“What is the CEFR?”, English Profile, Cambridge University Press, http://www.englishprofile.org/the-cefr, accessed 24.2.17

ELT Research Bites


Followers of my blog will know that I believe we, as language teachers, all need to understand the pedagogical underpinnings of what we do in our language classrooms. That’s why I aim in my blog posts to provide information on theoretical backgrounds and lesson materials which apply them practically. I would also love for more teachers to read the research and background articles for themselves. But I know that teachers are all busy people, who may not have access to or time to access publications on the latest developments and findings from language education research.

ELT Research Bites is here to help!

As the founder, Anthony Schmidt, explains: ELT Research Bites is a collaborative, multi-author website that publishes summaries of published, peer-reviewed research in a short, accessible and informative way. 

The core contributors are Anthony Schmidt, Mura Nava, Stephen Bruce, and me!

 

Anthony describes the problem that inspired ELT Research Bites: There’s a lot of great research out there: It ranges from empirically tested teaching activities to experiments that seek to understand the underlying mechanics of learning. The problem, though, is that this research doesn’t stand out like the latest headlines – you have to know where to look and what to look for, as well as sift through a number of other articles. In addition, many of these articles are behind extremely expensive paywalls that only universities can afford. If you don’t have access to a university database, you are effectively cut off from a great deal of research. Even if you do find the research you want to read, you have to pore over pages and pages of what can be dense prose just to get to the most useful parts. Reading the abstract and jumping to the conclusion is often not enough. You have to look at the background information, the study design, the data, and the discussion, too. In other words, reading research takes precious resources and time, things teachers and students often lack.

And so ELT Research Bites was born!  


The purpose of ELT Research Bites is to present interesting and relevant language and education research in an easily digestible format.

Anthony again:  By creating a site on which multiple authors are reading and writing about a range of articles, we hope to create for the teaching community a resource in which we share practical, peer-reviewed ideas in a way that fits their needs.

ELT Research Bites provides readers with the content and context of research articles, at a readable length, and with some ideas for practical implications. We hope, with these bite-size summaries of applied linguistics and pedagogy research, to allow all (language) teachers access to the insights gained through empirical published work, which teachers can adapt and apply in their own practice, whilst not taking too much of their time away from where it is needed most – the classroom.

CHECK OUT ELT Research Bites here:

FOLLOW ON TWITTER: @ResearchBites

 

Peer Presentation Feedback


I teach an EAP module which focusses on language and study skills. It’s aimed at first-semester students starting an English Studies degree where English is a foreign language for almost all students. They’re at the B2+ level.

In a 15-week semester, we spend the first five weeks or so looking at what makes a good academic presentation in English. We cover topics such as narrowing down a topic to make a point, logically building up an argument, linking pieces of information, maintaining the audience’s attention, formal language and appropriate use of register, body language and eye contact, volume and pacing, and using sources effectively, plus lots of sub-skills and language features that are relevant for presentations. In the remaining two-thirds of the semester, students give presentations (in groups of 3) on a topic of their choice related to the English-speaking world, and we discuss feedback together so that the others can learn from what was good or could be improved in the presentation they have watched.

This blog post describes my journey through trialling different ways of getting the best feedback to fulfil our overall learning aim. 

(Note: Don’t worry, we also use class time to practise other study skills pertaining to listening and speaking!)

1. ‘Who would like to give some feedback?’

I have experimented with various ways of getting audience members to give feedback. When I first started teaching on this module, I used to ask after the presentation ‘Who would like to give some feedback?’, which was usually qualified by saying something like ‘Remember the points we’ve covered on what makes a presentation good.’ Usually, only a few people commented, and they focussed mainly on the good things. Don’t get me wrong, I think it is important to highlight what students have done well! But the overall goal of having students give presentations was that we could constructively critique all aspects of these presentations. I had hoped that we could use these ‘real’ examples to review what we had learnt about good academic presentations. So this approach wasn’t as effective as I had hoped.

2. Feedback questions

It seemed that requiring students to keep in mind all of the features of a good academic presentation was asking a bit too much. And so, together with a colleague, I drew up a list of questions students could ask themselves about the presentation. Example questions include: Was all of the information relevant? Was the speech loud and clear, and easy to understand? Students were given the list before the first presentation and instructed to bring it each week to help them to give presentation feedback. Most people brought them most of the time. Still, students were pretty selective about which questions they wanted to answer, and (tactfully?) avoided the points where it was clear that the presentation group needed to improve. So we still weren’t getting the full range of constructive feedback that I was hoping for.

3. Feedback sandwich

It was clear to me that students wanted to be nice to each other. We were giving feedback in plenum, and no one wanted to be the ‘bad guy’. This is a good thing per se, but it meant that they were slightly hindered in giving constructive criticism and thus achieving the learning aims I had set for the course. So, before the first presentation, I set up an activity looking at how to give feedback politely and without offending the individual presenters. We explored the psychological and linguistic concepts behind ‘face saving’ and how people may become defensive if they feel their ‘face’ is attacked, and then psychologically ‘block out’ any criticism – so the feedback doesn’t help them improve their presentation; nor does it make for good student-student relationships! I explained the idea of a ‘feedback sandwich’, in which the positive comments form the bread and the negative comments are the filling. This idea is said to ease any feelings of ‘attack’, thus making the feedback more effective. Students embraced this idea, and did their best to ‘sandwich’ their feedback. Overall, this was a helpful step in moving the class feedback towards what I thought would be most effective for the learning aims.

4. Feedback tickets

Since I noticed we still weren’t always getting feedback on all aspects of the presentation, a colleague and I decided to make ‘feedback tickets’, each with one question from the list we had previously prepared. The tickets were handed out before a presentation, and each student was then responsible for giving feedback on that point. Combined with the ‘sandwich’ approach, this worked pretty well overall. The minor drawbacks were that sometimes the presenters had really done a good job on a certain aspect and there wasn’t much ‘filling’ to go with the ‘bread’; at other times, the ‘filling’ was important, but students seemed to counteract their constructive criticisms by emphasizing their lack of importance, especially compared to the positive comments. For me, though, the major downside to using these tickets was the time factor. Running through a set of ~15 feedback tickets (and feedback sandwiches!) after each presentation was productive for students’ presentation skills, but ate into the class time that should have been used for practising other oral/aural skills. In extreme cases, with two 30-minute presentations plus Q&A in a 90-minute lesson, we simply ran out of time for feedback! Those poor presenters got no feedback on their presentations, and we as a class were not able to learn anything from the example they had delivered.

5. Google forms


Actually, I first used Google Forms to collect feedback after one of these lessons where our time was up before we’d got through the plenary feedback round. I copied all of the feedback questions into a Google form (using the ‘quiz’ template) and emailed the link to the students. I was positively surprised by the results! Perhaps aided by the anonymity of the form, students used the ‘sandwich’ idea very effectively – suitably praising good aspects of the presentation, and taking time to explain their criticisms carefully and specifically. Wow – helpful feedback! I printed out the feedback to give to the presenters, along with my own written feedback, and also picked out a couple of pertinent comments to discuss in plenum in the next lesson. Right from the off, this way of collecting and giving feedback seemed very effective, both in terms of time taken and achieving learning aims. It seemed presenters had some time to reflect on their own performance and were able to join in the feedback discussions more openly, and focussing on just a couple of key aspects meant it was time-efficient, too. I immediately decided to use the Google form for the next couple of weeks, and have continued to find it extremely useful. Sadly, we’re at the end of our semester now, so these are just very short-term observations. Still, I’m encouraged to use the online form in future semesters.

Just goes to show how important reflecting on our classroom practices can be!

I wonder if anyone else has had similar experiences, or can share other inspirational ways of collecting feedback on presentations? I’d love to hear from you!

My Webinar: “Assessing and Marking Writing: Feedback Strategies to Involve the Learners”

Earlier this month, I had the pleasure of hosting a webinar (my first ever!) for IATEFL’s TEA SIG. For those who weren’t able to join in, here’s a rundown and a link to the video!

This talk provides teachers with time-efficient strategies for giving feedback on EFL learners’ writing which actively involve the learners. I present and evaluate several learner-centred feedback strategies that are applicable to giving feedback on written work in diverse contexts, by presenting summaries of published research which explores their efficacy. I also explain the mechanisms underpinning the strategies’ effectiveness, in order to further aid teachers in making informed choices pertaining to their specific class groups.

Watch the webinar recording here.

The webinar was followed by a live Facebook discussion. Check it out here.

Here are a couple of excerpts from the Facebook discussion: 

Clare Fielder: Someone asked: Have you ever tried developing an online digital dialogue around feedback points? Not exactly sure what you mean here, I’m afraid. I’ve used Google Docs to get peer review going – is that something in the direction you’re asking about?

Sharon Hartle: I use Wikispaces and learners can comment on specific points and then develop a dialogue. Here’s a quick clip of what I mean.

Clare Fielder: This sounds like something similar to Google Docs with the comments function. I used that last year with my students; most of them liked it, but lost energy and motivation for it by the end of term… Maybe because it’s just one more platform that they have to remember to check?

Sharon Hartle: Once again, I think it is a question of guidance and structuring, as you said. If you limit it to asking them to comment on two posts, for instance, and then reintegrate it all into class, it works well. It also remains for later reference, like now. 🙂

Clare Fielder: Our VLP doesn’t have this kind of function (well, it doesn’t work well and is hard to use), which is why I opted for Google. I definitely like the idea, because then students get feedback from various peers, not just the one who was given their work in class on peer review day! Also, you can include LDF in that – students can pose their questions on their work when they post it there, and then all the group members can help answer them! That’s a great idea!

Sharon Hartle: I’ve also experimented with iAnnotate for iPad and Schoology, which is also good.

Clare Fielder: Yes, I agree, limiting it to two comments or so does help, but doesn’t encourage them to really engage in discussion and dialogue. But, as with everything in ELT, it depends! It depends on the students, context, goals, etc. Where I am it’s all pretty low-tech. I still have chalkboards in my classrooms! 😀

Sharon Hartle: Well, tech is only as good as tech does, isn’t it, and there was a time when the blackboard was considered high tech 🙂

Clare Fielder: Only if you had coloured chalk! 😉

Sharon Hartle: Thanks Clare, for staying around and developing this discussion, which is very interesting 🙂

 

For more information about IATEFL’s TEA SIG – Teaching, Evaluation & Assessment Special Interest Group – you can find their website here.


 

Marking Writing: Feedback Strategies to Challenge the Red Pen’s Reign – IATEFL 2016

By popular demand…

My handout from my presentation held at IATEFL 2016 in Birmingham, with the above title.


Abstract:

This talk provides teachers with time-efficient alternatives to traditional ‘red-pen correction’, by demonstrating and evaluating several effective feedback strategies that are applicable to giving feedback on writing in diverse contexts, and presenting summaries of published research which explores their efficacy. Issues including learner autonomy, motivation, and the role of technology are also briefly discussed to underpin the practical ideas presented.

Handout can be downloaded here: IATEFL 2016 conference Clare Fielder Works Cited handout.


5 Highly Effective Teaching Practices

Reading this, I was prompted to think about my teacher beliefs about what exactly it is that makes teaching effective; what is it that I’m aiming for, that I hold as best practice? Expressing this in one sentence has actually been quite an inspiring moment for me, motivating me and giving me new energy to approach my planning for next term.
Anyway, here’s my spontaneously-constructed sentence (which I also posted in the comments section on the blog post):

**Teachers have to be passionate about teaching and about what they’re teaching, and they need to know their students and how to motivate them to get active.**

So now I’m interested in your thoughts: What is it that makes teaching most effective?
I’m not looking (necessarily) for Hattie-style lists, but try to summarise your teacher beliefs into one sentence, about what is at the heart of good teaching, for you.

Please post them in the comments below! I’m really excited about hearing from you!!
Clare

teflgeek

Earlier this year, a piece from the Edutopia website was doing the rounds under the title “5 highly effective teaching practices”.  I automatically question pieces like this as I doubt somewhat whether the purpose of the piece is actually to raise standards in the profession and develop teachers – or whether it is simply to get a bit more traffic to the website.  But perhaps I am being unnecessarily cynical?

To be fair, the practices the article suggests are generally quite effective:

  1. State the goals of the lesson and model tasks, so that students know what they are going to do and how to do it.
  2. Allow classroom discussion to encourage peer teaching
  3. Provide a variety of feedback, both on an individual and a group basis.  Allow students to feedback to the teacher.
  4. Use formative assessments (tests the students can learn from) as well as summative assessments (tests that evaluate…


#BridgingTheGapChallenge: The role of pedagogical tasks and form-focused instruction

Guest post by Don Watson

Based on 

de la Fuente, M. J. (2006). Classroom L2 vocabulary acquisition: investigating the role of pedagogical tasks and form-focused instruction. Language Teaching Research 10, 3. pp. 263–295. Retrieved from: http://www.lrc.cornell.edu/events/past/2006-2007/fuentes.pdf

I assume anyone reading this blog has at least heard of Task Based Language Teaching (TBLT). But as with any approach/method etc. the thing we all, as teachers, want to know is: Does it work and how do I use it best? The study Classroom L2 vocabulary acquisition: investigating the role of pedagogical tasks and form-focused instruction attempts to answer this question when using TBLT to teach vocabulary.

Interestingly, this study also addresses another age-old ELT question: (when) is it OK to talk about the language? It’s pretty well agreed that classroom interaction should be predominantly communicative in nature, i.e. we use the language we are trying to teach in order to communicate, but when is it OK to explicitly discuss things like grammar or vocab? The study calls this a “focus on form” and cites Swain to argue that if learners notice certain aspects of the language they are exposed to and then compare these with their own language production, language acquisition is more likely.

Ok, great. Let’s focus on form. But, as always, there is a but. This being a journal article, however, there is actually a however (see Lockman & Swales, 2010). And here it is: “Skehan (1998), however, remarks that it is not advisable to intervene during tasks.” He suggests that it is preferable to “intervene” after the task is complete as then it is more likely that “form–meaning relationships and pattern identification are not transitory… but are still available for attention and so more likely to be integrated into a growing interlanguage system”.

So now we have an idea of what to do and when to do it; how does this study help? The author describes the study as a “classroom-based, quasi-experimental study”, focusing on second language “oral productive vocabulary acquisition of word meanings and forms”. As it’s an experiment, there are a control group and an experimental group. In this case, the “control group” is a traditional PPP (that’s Presentation, Practice and Production, just in case you don’t know) lesson. So I guess in this case the PPP stands for PPPlacebo. No, that’s mean; let’s stick with “control”. The study compares a traditional PPP lesson with two versions of a task-based lesson. The first task was “a one-way, role-play, information-gap task with a planned focus on form and meaning. The task required students to use the target lexical forms while keeping attention to meaning, in order to achieve the goal of ordering food from a restaurant’s menu”. The second task-based lesson had the same first two stages as the first, but instead of a task repetition, “a teacher generated, explicit focus-on-forms stage was incorporated”. The “focus-on-forms” stage was designed “to explicitly clarify morphological, phonological and spelling issues.”

The study then tested the students’ ability to “retrieve” the target vocabulary immediately after the lesson and again one week later. No statistically significant difference was found for the immediate retrieval of words (although the task-based lessons were better, just not better enough); however, after one week, the task-based lessons did produce significantly better results. The author suggests that this is “due to the fewer opportunities for targeted output production and retrieval that PPP lessons offer, and to its inability to effectively focus students’ attention on targeted forms”.

And as we know, learning vocabulary is much more than simply learning the definition of a word. This is where the real advantage of this task + focus-on-forms idea lies: it results in “not only acquisition of the words’ basic meaning, but also of important formal/morphological aspects of words.”

So the takeaway from all this is: if you’re doing tasks, and I guess most of us are, don’t interrupt the task, and be sure to explicitly clarify the target language after the task is complete.

References

de la Fuente, M. J. (2006). Classroom L2 vocabulary acquisition: investigating the role of pedagogical tasks and form-focused instruction. Language Teaching Research 10, 3. pp. 263–295. Retrieved from: http://www.lrc.cornell.edu/events/past/2006-2007/fuentes.pdf

Lockman & Swales (2010). Sentence Connector Frequencies in Academic Writing (and Academic Speech).  Retrieved from: http://www.readbag.com/micusp-elicorpora-files-0000-0253-sentence-connector-kibbitzer-1

Skehan, P. (1998). A cognitive approach to language learning. Oxford University Press.

Swain, M. (1985). Communicative competence: some roles of comprehensible input and comprehensible output in its development. In Gass, S. and Madden, C., editors, Input in second language acquisition. Newbury House, 235–53.