Tag: Feedback

Analysing my Feedback Language

TL;DR SUMMARY

I ran a feedback text I’d written on a student’s work through some online text analysis tools to check the CEFR levels of my language. I was surprised that I was using some vocabulary above my students’ level. After considering whether I can nonetheless expect them to understand my comments, I propose the following tips:

  • Check the language of feedback comments before returning work and modify vocabulary if necessary.
  • Check the vocabulary frequently used in feedback comments, and plan to teach these explicitly.
  • Get students to reflect on and respond to feedback to check understanding.

A couple of colleagues I follow on blogs and social media have recently posted about online text analysis tools such as Text Inspector, Lex Tutor and so on (see, for example, Julie Moore’s post here and Pete Clements’ post here). That prompted me to explore uses of those tools in more detail for my own work – both using them to judge the input in my teaching materials or assessments, and also using them with students to review their academic essay writing.

Once I got into playing around with different online tools (beyond my go-to Vocab Kitchen), I wanted to try some out on my own texts. The thing I’ve been writing most recently, though, is feedback on my students’ essays and summaries. I’m a bit of a feedback nerd, so I was quite excited when the idea struck me: I could use these tools to analyse the language of the feedback I write to help my students improve their texts. A little action research, if you will.

Now I obviously can’t share the students’ work here for privacy and copyright reasons, but one recent assessment task was to write a 200-250 word compare/contrast paragraph to answer this question:

How similar are the two main characters in the last film you watched?

(Don’t focus on their appearance).

These students are at B2+ level (CEFR) working towards C1 in my essay writing class. They need to demonstrate C1-level language in order to pass the class assessments. One student did not pass this assessment because her text included too many language mistakes that impeded comprehension, because overall the language level did not reach C1, and because she didn’t employ the structural elements we had practised in class.

Here’s the feedback I gave on the piece of work and which I ran through a couple of text checkers. (Note: I usually only write this much if there are a lot of points that need improving!)

The language of this text demonstrates a B2 level of competence. Some of the phrasing is rather too colloquial for written academic language, e.g. starting sentences with ‘but’, and including contracted forms. You need to aim for more sophisticated vocabulary and more lexical diversity. More connectors, signposting and transitions are needed to highlight the genre and the comp/cont relationships between the pieces of information. The language slips lead to meaning not always being emphasised or even made clear (especially towards the end). Aim to write more concisely and precisely, otherwise your text sounds too much like a superficial, subjective summary.

Apart from the personal phrase at the beginning, the TS does an OK job at answering the question of ‘how similar’, and naming the features to be discussed. However, you need to make sure you name the items – i.e. the characters – and the film. In fact, the characters are not named anywhere in the text! The paragraph body does include some points that seem relevant, but the ordering would be more logical if you used signposting and the MEEE technique. For example, you first mention their goals but don’t yet explain what they are, instead first mentioning a difference between them – but not in enough detail to make sense to a reader who maybe doesn’t know the series. Also, you need to discuss the features/points in the order you introduce them in the TS – ‘ambition’ is not discussed here. The information in the last couple of sentences is not really relevant to this question, and does not function as a conclusion to summarise your overall message (i.e. that they are more similar than they think). In future, aim for more detailed explanations of content and use the MEEE technique within one of the structures we covered in class. And remember: do not start new lines within one paragraph – it should be one chunk of text.

I was quite surprised by this ‘scorecard’ summarising the analysis of the lexis in my feedback on Text Inspector – C2 CEFR level, 14% of words on the AWL, and an overall score of 72% “with 100% indicating a high level native speaker academic text.” (Text Inspector). Oops! I didn’t think I was using that high a level of academic lexis. The student can clearly be forgiven if she’s not able to improve further based on this feedback that might be over her head! 

(From Text Inspector)

In their analyses, both Text Inspector and Vocab Kitchen categorise words in the text by CEFR level. In my case, there were some ‘off list’ words, too. These include abbreviations, most of which I expect my students to know, such as e.g., and acronyms we’ve been using in class, such as MEEE (= Message, Explanation, Examples, Evaluation). Some other words are ‘off list’ because of my British English spelling with -ise (emphasise, summarise – B2 and C1 respectively). And some words aren’t included on the word lists used by these tools, presumably because they are highly infrequent and thus categorised as ‘beyond’ C2 level. I did check learners’ dictionaries for the CEFR levels of the other ‘off list’ words, but only found rankings for these:

Chunk – C1

Genre – B2

Signposting – C1

(From Vocab Kitchen)

Logically, the question I asked myself at this point is whether I can reasonably expect my students to understand the vocabulary which is above their current language level when I use it in feedback comments. This particularly applies to the words that are typically categorised as C2, which on both platforms were contracted, superficial and transitions, and perhaps also to competence, diversity and subjective which are marked as C1 level. And, of course, to the other ‘off list’ words: colloquial, concisely, connectors, lexical, and phrasing.
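Incidentally, the kind of CEFR bucketing these tools perform on a text can be sketched in a few lines of Python. The mini word-to-level lookup below is invented purely for illustration (real tools draw on large, research-based lists such as the English Vocabulary Profile), but it shows the basic idea of sorting a feedback comment’s vocabulary into levels, with anything unknown landing ‘off list’:

```python
import re
from collections import defaultdict

# Hypothetical mini lookup for illustration only -- real analysers use
# large, research-based word lists, not a hand-made dictionary like this.
CEFR_LEVELS = {
    "competence": "C1", "diversity": "C1", "subjective": "C1",
    "superficial": "C2", "transitions": "C2", "contracted": "C2",
}

def bucket_by_cefr(text):
    """Group the words of a text by their looked-up CEFR level."""
    buckets = defaultdict(set)
    for word in re.findall(r"[a-z']+", text.lower()):
        buckets[CEFR_LEVELS.get(word, "off list")].add(word)
    return {level: sorted(words) for level, words in buckets.items()}

comment = "Aim for lexical diversity; superficial transitions weaken competence."
print(bucket_by_cefr(comment))
```

Running this on a feedback comment immediately surfaces which words sit at C1/C2 – exactly the words worth double-checking against students’ current level.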

Now competence, diversity, lexical and subjective shouldn’t pose too much of a problem for my students, as those words are very similar in German (Kompetenz, Diversität, lexikalisch, subjektiv) which all of my students speak, most of them as an L1. We have also already discussed contracted forms, signposting and transitions on the course, so I have to assume my students understand those. Thus, I’m left with colloquial, concisely, connectors, phrasing and superficial as potentially non-understandable words in my feedback. 

Of course, this feedback is given in written form, so you could argue that students will be able to look up any unknown vocabulary in order to understand my comments and know what they might do differently in future. But I worry that not all students would actually bother to do so – so they would continue to not fully understand my feedback, making it rather a waste of my time to have written it for them.

Overall, I’d say that formulations of helpful feedback comments for my EAP students need to strike a balance. They should mainly use level-appropriate language in terms of vocabulary and phrasing so that the students can comprehend what they need to keep doing or work on improving. Also, they should probably use some academic terms to model them for the students and make matching the feedback to the grading matrices more explicit. Perhaps the potentially non-understandable words in my feedback can be classified as working towards the second of these aims. 

Indeed, writing in a formal register to avoid colloquialisms, and aiming for depth and detail to avoid superficiality, are key considerations in academic writing. As are writing in concise phrases and connecting them logically. Thus, I’m fairly sure I have used these potentially non-understandable words in my teaching on this course. But so far we haven’t done any vocabulary training specifically focused on these terms. If I need to use them in my feedback, though, then the students do need to understand them in some way.

So, what can I do? I think there are a couple of options for me going forward which can help me to provide constructive feedback in a manner which models academic language but is nonetheless accessible to the students at the level they are working at. These are ideas that I can apply to my own practice, but that other teachers might also like to try out:

  • Check the language of feedback comments before returning work (with feedback) to students; modify vocabulary if necessary.
  • Check the vocabulary items and metalanguage I want/need to use in feedback comments, and in grading matrices (if provided to students), and plan to teach these words if they’re beyond students’ general level.
  • Use the same kinds of vocabulary in feedback comments as in oral explanations of models and in teaching, to increase students’ familiarity with it. 
  • Give examples (or highlight them in the student’s work) of what exactly I mean with certain words.
  • Get students to reflect on the feedback they receive and make an ‘action plan’ or list of points to keep in mind in future – which will show they have understood and been able to digest the feedback.

If you have further suggestions, please do share them in the comments section below!

As a brief closing comment, I just want to point out here that it is of course not only the vocabulary of any text or feedback comment that determines how understandable it is for readers at different levels. Vocabulary is a start, perhaps, but other readability scores need to be taken into account, too. I’ll aim to explore these in a separate blog post.
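For a taste of what such readability scores involve: the classic Flesch Reading Ease formula combines average sentence length with average syllables per word. Here is a minimal Python sketch – the syllable counter is a crude vowel-group heuristic for illustration, not the algorithm any particular tool actually uses:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (incl. y).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text (roughly 0-100)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("Aim to write concisely. Avoid colloquial phrasing.")
```

Even this rough version shows why my Latinate feedback vocabulary (colloquial, superficiality…) drags the score down: those long words pile up the syllable count.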

My LTSIG Talk: Using Multimodal Learner-Driven Feedback to Provide Sustainable Feedback on L2 Writing

Time for a little advertising! 😉

On Thursday 5th October at 4.25pm UK time, I’ll be giving an online talk as part of the LTSIG/OllREN online conference and would be delighted to see you there!

LTSIG Presentation Clare Maas

Exploring efficient ways to give sustainable feedback on L2 writing is important because providing meticulous correction of language errors and hand-written summaries can be time-consuming and often seems less effective than desired. For feedback to be sustainable (i.e. effective long-term), it should be formative, interactive and have an impact on students’ future work (Carless et al. 2011). Thus, traditional hand-written feedback practices may be inefficient at achieving sustainability. Integrating technology into feedback delivery has been shown to have potential to alleviate the situation, by stimulating students to engage with the feedback they receive and enabling dialogues about their work.

Combining research into feedback on L2 writing with ideas promoted in higher education, I devised the Learner-Driven Feedback (LDF) procedure, where feedback is given by the teacher, but learners ‘drive’ how and on what they receive feedback: they can choose between various digital delivery modes and are required to pose questions about their work to which the tutor responds.

In this talk, I will summarise some recent literature which supports both the use of technologies such as email, audio recording, and text-editing software features, and responses to students’ individual queries in feedback procedures, before practically demonstrating LDF. I will refer to my own recently published article on LDF in EAP, and discuss my evaluation of its application in my teaching, providing compelling reasons and practical suggestions for its employment in various language teaching contexts. These discussions will also explore potential mechanisms underpinning the efficacy of multimodal approaches to making feedback more sustainable, in order to further aid teachers in making informed choices pertaining to their specific class groups. This includes topics such as learner autonomy, motivation, receptivity, learner-centredness and individualisation.

The talk is thus a combination of practical demonstration and theoretical background, of interest and relevance to a broad audience.

 

Reference: Carless, D., Salter, D., Yang, M. & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36, 395–407.

Writing an ebook with students

My students have written an ebook!

You can read it for free here.

From an ELT perspective, this ebook is the result of a semester-long CLIL class, with project-based learning and a real and motivating outcome! If you want to find out how we did it, this post is for you!

Context

Our class was on British cultural studies, aimed at master’s-level students of English Studies. This class aims to promote language learning and learning about content, in this case a particular British cultural topic. Usually, students are expected to do one oral presentation and one piece of written work as the assessment for this class. Only the other class members see the presentations, and the individual teacher is the only one who reads the essays, in order to grade them. I’d say this is a pretty standard set-up.

Background

Last summer, a colleague and I revamped our British cultural studies classes to move towards project-based learning. In 2016, our students hosted an exhibition open to staff and students at the University, which you can read about here. It was pretty successful, though the students involved found it a shame that all their hard work was only seen by a limited audience. Of course, the audience was a lot less limited than usual, but that’s what they said anyway…!

And so I came up with the idea of producing an ebook this year, which could then be made available publicly. I had seen other organisations use Smashwords, and read about how easy it could be to publish a book through that site, so that’s what I thought we should do. I chose the umbrella topic of Britain in the Nineties for our focus, and 23 students signed up. I provided an outline for the class, which included a general module description, assessment requirements for the module, a provisional schedule for the ebook (to be sent for publication in the last week of semester!), and a selected bibliography of recommended reading on the topic.

Our semester is 14 weeks long, with one 90-minute lesson of this class each week. So how did we manage to produce an ebook in this time?

Weeks 1-3

In the first three lessons, I provided a video documentary, an academic article and a film for students to watch/read as a broad introduction to the topic. In lessons, we collected the main themes from this input (key words here: politics, music, social change), and discussed how they were interlinked. Each week, a different student was responsible for taking notes on our discussions and sharing these on our VLP for future reference. In week three, we rephrased our notes into potential research questions on key topic areas.

At the end of each lesson, we spent some time talking about the ebook in general. As the semester progressed, the time we spent on this increased and resembled business-like meetings.

Week 4

By this point, students had chosen topics / research questions to write about and discussed their choices in plenary to ensure that the ebook would present a wide-ranging selection of topics on Britain in the Nineties. The students decided (with my guidance!) to write chapters for the ebook in pairs, and that each chapter should be around 2000 words, to fulfil the written assessment criteria of the class. Writing in pairs meant that they automatically had someone to peer review their work. To fulfil the oral assessment criteria, I required each writing team to hold a ‘work in progress’ presentation on the specific topic of their chapter. I had wanted to include these presentations to make sure I could tick the ‘oral assessment’ box, and because having to present on what they were writing would hopefully mean they got on with their research and writing sooner rather than later!

Weeks 5 & 7

The lessons in these two weeks were dedicated to writing workshops and peer review. We started both lessons by discussing what makes for good peer review, and I gave them some strategies for using colours for comments on different aspects of a text, as well as tables they could use to structure their feedback comments. These tables are available here. Regarding language, these are post-grad students at C1 level, so they’re in a pretty good position to help each other with language accuracy. I told them to underline in pencil anything that sounded odd or wrong to them, whether they were sure or not. If they were sure, they could pencil in a suggestion to improve the sentence/phrase, and if not then the underlining could later serve the authors as a note to check their language at that point.

In week 5, we looked at different genres of essay (cause/effect, compare/contrast, argument, etc.), and how to formulate effective thesis statements for each of them. This focussed practice was followed by peer review of the introductions students had drafted so far. By this point, the students had decided that their chapters could be grouped thematically into sections within the ebook, and so they did peer review on the work of the students whose chapters were going to be in the same section as their own.

In week 7, we reviewed summaries and conclusions, and also hedging language. Again, this was followed by peer review in their ‘section’ groupings, this time on students’ closing paragraphs.

Weeks 6-11

Almost half way through the semester, all writing teams were working on their chapters. In the lessons, we had a couple of ‘work in progress’ presentations each week. Beyond my expectations, the presentations did an excellent job of promoting discussion, and particularly prompted students to find connections between their specific topics – so much so that they decided to use hyperlinks within the ebook to show the readers these connections. Some students also used their presentations to ask for advice with specific problems they had encountered while researching/writing (e.g. lack of resources, overlaps with other chapters), and these were discussed in plenary to help each writing team as best we could. The discussions after the presentations were used to make any decisions that affected the whole book, for example which citation style we should use or whether to include images.

Week 12

In week 12, all writing teams submitted their texts to me. This was mainly because I needed to give them a grade for their work, but I also took the opportunity to give detailed feedback on their text and the content so they could edit it before it was published. I was also able to give some pointers on potential links to other chapters, since I had read them all. I felt much more like an editor, I have to say, than a teacher!

In the lesson, we had a discussion about pricing our ebook and marketing it. To avoid tax issues, we decided to make the ebook available for free. One student suggested asking for donations to charity instead of charging people to buy the book.

This idea was energetically approved, and students set about looking into charities we could support. In the end, the education charity SHINE won the vote (organised by the students themselves!). I dutifully set up a page for us on justgiving.com: if you’d like to donate, it can be found here.

 

At this point, we also discussed a cover for the book. One student suggested writing ‘the Nineties’ in the Beatles’ style, to emphasise the links to the 1960s that some chapters mentioned. We also thought about including pencil sketches of some of the key people mentioned in the book, but were unable to source any that all students approved of. Instead, students used the advanced settings on Google image search to find images that were copyright-free. A small group of students volunteered to finalise the cover design, and I have to say, I think they did a great job!

Week 13

During the lesson in this week, the ebook really came together. Some of the students were receiving more credit points than others for the class, based on their degree programme, and so it was decided that those students should be in charge of formatting the text according to Smashwords’ guidelines, and also of collating an annotated bibliography. I organised a document on Google Docs, where all students noted some bullet points appraising one source they had used for their chapter, and the few who were getting extra points wrote this up and formatted it into a bibliography.

Formatting the text for publication on smashwords.com was apparently not too difficult, as the Smashwords guidelines explain everything step by step, and you do not need to be a computer whizz to follow their explanations!

Week 14 and beyond

This week was the deadline I had set for sending the ebook for publication. After the formatting team had finished, I read through the ebook as a full document for the first time! I corrected any language errors that hadn’t been caught previously, and wrote the introduction for the book. This took me about two evenings.

Then I set myself up a (free) account at smashwords.com and uploaded the ebook text and cover design. Luckily, the students had done a great job following the formatting rules, and the book was immediately accepted for the premium catalogue! (*very proud*)

 

Another small group of students volunteered to draw up some posters for advertising, and to share these with all class members so we could publicise the ebook on social media, on the Department’s webpage, and in the University’s newsletter.

Et voilà! We had successfully published our ebook in just 14 weeks!

Evaluation

I’m so glad that I ran this project with my students! It honestly did not take more of my time than teaching the class as ‘usual’ – though usually the marking falls after the end of term, and it was quite pressured getting it done so we could publish in the last week! In future, I might move the publication date to later after the end of semester to ease some of the stress, though I do worry that students might lose momentum once we’re not meeting each week.

The students involved were very motivated by the idea that the general public would be able to read their work! I really felt that they made an extra effort to write the best texts they could (rather than perhaps just aiming to pass the class). This project was something entirely new for them, and they were pleased about their involvement for many reasons, ranging from being able to put it on their CV, to seeing themselves as ‘real’ writers. They have even nominated me for a teaching prize for doing this project with them!

Sadly, one student plagiarised. Knowingly. She said that she was so worried her writing wouldn’t be good enough that she ‘borrowed’ large chunks of text from an MA dissertation which is available online. Her writing partner didn’t catch it, and was very upset that their chapter would (discreetly!) not be included in the ebook. He was very apologetic to me, and probably also quite angry at her. If the reason she gave was true, it obviously rings alarm bells that I was expecting too much from the students or didn’t support them enough. I will aim to remedy this in future. It could, of course, just have been an excuse.

Also, some other students reported feeling that this project demanded more work from them than they would normally have to put into a class where the grade doesn’t count. Maybe this is because writing in a pair can take more time and negotiation, or maybe they also felt stressed by having to write their text during term time, rather than in the semester break when they would normally do their written assessments. Overall, though, the complaints were limited and often seemed to be clearly outweighed by the pride and enjoyment of being involved in such a great project!

I’m really pleased with how this project panned out, and would recommend other teachers give it a go! I’m very happy to answer any questions in the comments below, and for now, I wish you inspiration and happy ebook-project-planning! 🙂

 

 

Academic Writing Skills & “Just in Time” Teaching

I’ve been looking back over my notes from IATEFL 2017 to find inspiration for another blog post. I’m a bit late now to just summarise talks, but I’d like to come back to one of the questions that was posed at a talk I attended. It was “Building bridges: the disciplines, the normative and the transformative” by Catherine Mitsaki. 

Catherine’s talk looked at the EAP/Genre-based and Academic Literacies models of academic writing instruction and assessed the pedagogical potential of the different approaches, whilst sharing her experience of teaching UK and international students. As I said, I don’t want to summarise her whole talk here, just one key question she raised. Students from her classes gave feedback suggesting they would prefer to have been taught the specific academic writing skills required for their assignments (within subject classes) right at the time they were working on those assignments. Catherine calls this “just in time” teaching, and she asked us what we thought.

I have to say, I can’t embrace the “just in time” teaching concept fully when it comes to academic writing. There is just too much that students need to know. It might be more appropriate if students entered university programmes with a strong foundation of writing skills, which could then be honed by focussing on the relevant points and skills for each assignment. But this is usually not the case, at least not where I work. I always feel I’m squashing a huge amount of input and practice into our essay-writing modules, and they run for 14 weeks! With all of the competencies that are involved in producing good academic writing, I find it is much better to give students the chance to digest the input and practise applying the skills to their work over a period of time so that everything can really ‘sink in’. They need time to practise actually transferring the transferable skills we’re teaching them, especially at undergraduate level!

Also, as Catherine pointed out, “just in time” teaching would seem to contradict Academic Literacies models which aim to promote criticality towards established norms as a productive way of growing academically. As she puts it in personal correspondence, “There is no room for questioning well established models if one is struggling to deal with the norms as they are.”

So I’m not convinced that doing what students want or think is best (easiest?) for them is the best approach here. Perhaps a better option is explaining the rationale for our writing courses to the students, in an attempt to increase their receptivity to the classes we teach.

What do you think? Could “just in time” teaching work where you are?

Making Marking Colourful

Anyone who’s been following my blog and conference presentations for a while will know that I have a healthy obsession with marking and giving feedback on L2 students’ essays! This is partly due to the huge numbers of essays I have to mark each term, and the number of new marking techniques this allows me to try out!

Having just finished (phew!) marking a class load of B2+ level discursive essays, I’ve got time to share some ideas on using different colours for marking and giving feedback, which may serve to make it more effective, and, if not exactly fun, at least somewhat visually pleasing!

You might have seen or heard about my talk on ‘Marking Writing: Feedback Strategies to Challenge the Red Pen’s Reign‘ where I discussed a variety of ways to make marked work seem less, well, red. Red is the colour of aggression and warnings, so I’m not sure why it has come to be the typical colour for giving feedback on students’ work. Looking back at some work I’ve marked before, I just see a sea of red, used for everything – even ticks for good aspects of writing! This time around, then, I decided to use different coloured pens to show different kinds of comments. Language errors were corrected in red, good aspects were ticked or commented on in green, and other advice or comments (e.g. on content, structure or referencing) were written in blue. Even just at first glance, these papers look a lot more balanced in terms of the feedback given, and can hopefully avoid that sinking feeling when students get their work back. Some have even told me that this kind of visual distinction of comments helps them to engage with the feedback, as they can go at it aspect by aspect. So, for an easy way to make marking more colourful and potentially helpful for students, just add two new colours to your usual stationery repertoire, and off you go!

If you have more colours to hand, or are marking work electronically, another colour-coding approach I’ve used before is a bit more specific. Here, I use different colours to mark different categories of language mistake. You can also do it with highlighters (or the highlighting function in your word-processing programme). For example, pink is incorrect vocabulary, blue is incorrect verb form, green is for other grammar problems, and orange is for punctuation mistakes. You can vary your colours and categories as relevant to your learners and their writing. I suspect that this kind of colour code makes it even easier for students to work through the feedback they receive, and also serves to highlight the most common problem areas in their work – which will be useful for you and them! Definitely worth a try, if your pencil-case allows!

(Image: example of colour-coded marking, provided by Sandy Millin)

A final idea I’d like to share is one I’ve borrowed from Sandy Millin. This colourful approach focuses on priority areas for review and improvement. After marking all of the language errors in a student’s text, pick three areas of language that you feel need the most work, e.g. prepositions, vocabulary, and word forms. Then pick one colour of highlighter for each of these three areas – highlight all of that category of errors in the student’s text, and highlight the words/phrases in your feedback telling the student what to work on. I’ve included excerpts of images Sandy provided to show what this would look like in practice.

(Image: example of Sandy Millin’s priority-area highlighting)

I’ve recently used this kind of colour-coded feedback with advanced-level students to highlight why I’ve made the suggestions I’ve added to their work. For example, I might suggest more formal vocabulary items or add in hedging phrases. I then write in my feedback comments something like ‘Try to use more hedging to avoid overgeneralisations’ – I highlight the word ‘hedging’ in yellow, and then highlight all of my hedging suggestions in yellow throughout the student’s text. Students have told me they liked this because it made them realise that they hadn’t necessarily made a mistake or done something wrong when I added a suggestion on their text, but could see why I’d added it and how it might improve their writing. And so, if your stationery budget is not yet exhausted, I’d recommend investing in some highlighters and trying out Sandy’s approach, too!

So what have we learnt? Well, marking doesn’t need to be dull, and it definitely doesn’t need to be a red-pen-only affair! These ways of including colour in marking students’ work can alter how students perceive the feedback they’re given, and may in the long run make it more effective – and thus more worth our valuable time! 🙂

What Postgraduates Appreciate in Online Feedback on Academic Writing #researchbites


In this article, Northcott, Gillies and Caulton explore their students’ perceptions of how effective online formative feedback was for improving their postgraduate academic writing, and aim to highlight best practices for online writing feedback.

Northcott, J., P. Gillies & D. Caulton (2016), ‘What Postgraduates Appreciate in Online Tutor Feedback on Academic Writing’, Journal of Academic Writing, Vol. 6/1, pp. 145–161.

Background

The focus of the study was on helping international master’s-level students at a UK university, for whom English is not their first/main language. The study’s central aim was to investigate these students’ satisfaction with the formative feedback provided online by language tutors on short-term, non-credit-bearing ESAP writing courses. These courses, run in collaboration with subject departments, are a new provision at the university, in response to previous surveys showing dissatisfaction among students with feedback provided on written coursework for master’s-level courses. Participation is encouraged, but voluntary. The courses consist of five self-study units (with tasks and answer keys), as well as weekly essay assignments marked by a tutor.

The essays are submitted electronically, and feedback is provided using either Grademark (part of Turnitin) or ‘track changes’ in Microsoft Word. The feedback covers both language correction and feedback on aspects of academic writing. These assignments are effectively draft versions of sections of coursework assignments students are required to write for the master’s programmes.

Research

The EAP tutors involved marked a total of 458 assignments, written by students in the first month of their master’s degrees in either Medicine or Politics. Only 53 students completed all five units of the writing course, although 94 Medicine and 81 Politics students completed the first unit’s assignment.

Alongside the writing samples, data was also collected by surveying students at three points during the writing course, plus an end-of-course evaluation form. Focussing on students who had completed the whole writing course, the researchers matched these students’ survey responses with the writing samples that had received feedback, as well as with the final coursework assignment submitted for credit in their master’s programme, for detailed analysis.

Findings

Analysing the feedback given by tutors, the researchers found both direct and indirect corrective feedback on language, as well as on subject-specific or genre-specific writing conventions and the academic skills related to writing. Tutors’ comments mostly referred to specific text passages, rather than being unfocused or general feedback.

Student engagement with feedback was evidenced by analysing writing samples and final coursework: only one case was found where ‘there was clear evidence that a student had not acted on the feedback provided’ (p. 155). However, the researchers admit that, as participation in the course is voluntary, the students who complete it are likely to be those who are in general appreciative of feedback, thus this finding may not be generalisable to other contexts.

In the surveys, most students reported feeling that the feedback had helped them to improve their writing. They acknowledged how useful the corrections provided were, and how the feedback could be applied in future. Moreover, comments demonstrated an appreciation of the motivational character of the feedback provided.

Summing up these findings, the researchers report:

It appeared to be the combination of principled corrective feedback with a focus on developing confidence by providing positive, personalised feedback on academic conventions and practices as well as language which obtained the most positive response from the students we investigated. (p. 154)

The students’ comments generally show that they responded well to this electronic mode of feedback delivery, and also felt a connection to their tutor, despite not meeting in person to discuss their work. As the researchers put it, students came to see ‘written feedback as a response to the person writing the text, not simply a response to a writing task’ (p. 156).

Take Away

The findings from this study highlight that simply using electronic modes of feedback delivery does not alone increase student satisfaction and engagement with feedback on their written work. Instead, the content and manner of the feedback given is key.

From the article, then, we can take away some tips for what kind of feedback to give, and how, to make electronic feedback most effective, at least for postgraduate students.

  • Start with a friendly greeting and refer to the student by name.
  • Establish an online persona as a sympathetic critical friend, ready to engage in dialogue.
  • Don’t only focus on corrective feedback, but aim to guide the student to be able to edit and correct their work autonomously, e.g. provide links to further helpful resources.
  • Be specific about the text passage the feedback refers to.
  • Tailor the feedback to the student’s needs, in terms of subject area, etc.
  • Give praise to develop the student’s confidence.
  • Take account of the student’s L1 and background.
  • Encourage the student to respond to the feedback, especially if anything is unclear or they find it difficult to apply.

This post is part of ELT Research Bites 2017 Summer of Research (Bites) Blog Carnival! Join in here.

Phonology in ELT – A Manifesto


“Achieving Phonology’s Potential in the ELT Classroom”

   – A very inspiring talk by Adam Scott on 5th April at IATEFL 2017 in Glasgow. 

In his talk, Adam presented his manifesto, a call to arms, to bring about a shift towards higher awareness of the importance of phonology in ELT. He’s convinced that we will experience ‘learning by doing’ and gain new insights into phonology and techniques for teaching it, if we just start teaching it! Here’s what he said:

More phonology – Why?

It can motivate students to understand phonology and the ‘mysterious’ relationship between spelling and pronunciation.

Discussing pronunciation as a group can help make teachers more responsive to students’ needs.

Having students tackle misunderstandings due to pronunciation can make classroom interaction more authentic and closer to real-world conversations.

It trains processing and noticing, and allows a focus on what causes communication to break down (rather than focussing on an idealised accent).

Adding feedback on pronunciation etc. can generate more learning at any stage of a lesson.

Chunking grammar as connected speech phrases can aid recall; it is more efficient for memory as the sound shapes and grammatical patterns will be stored together.

More phonology – How?

Have a pronunciation sub-aim which fits in with the other aims of the lesson/tasks, on either receptive or productive skills.

Include plenty of well-contextualised examples of the use of spoken language in lessons.

Approach phonology in a way that promotes collaboration with and between students.

Stop being the interpreter for students! Encourage them to work with and in the language together, e.g. get them to ask each other if they don’t understand something someone has said.

During discussions, etc., identify the pronunciation issues students find most difficult and that most hinder comprehension, to work on these in specific pronunciation practice tasks.

Give specific feedback, not only on the pronunciation of individual words, but also on other phonological features of connected speech such as linking, stress, etc. Immediate feedback can also help other students to learn from one person’s difficulty.

Help students to forge the link between visual and audio representations of words; they should Look (at the written word), Listen and Repeat (model pronunciation).

Help students to process new sound patterns not found in their L1, by mapping the sounds onto the complex English spelling system, e.g. with the IPA or phonics.

Pairwork requires mutual intelligibility – and the teacher can monitor both task progress and phonological features that allow mutual comprehension.

Recycle tasks that were used for another purpose by creating a pronunciation/phonological focus, e.g. on contrastive stress, phrasal verbs vs verbs + prepositions.

Hot tip: Put the IPA transcription of new words above / in front of the written form of the word, so that it gets students’ main attention.

Hot tip: Use underlining to show which letters together make one sound in a word, e.g. s-a-nd-w-i-ch-e-s

Conclusion

These tips show that it is easy to fit more phonology into our current teaching practice; it means minimal extra work for teachers, but could lead to great pay-offs! Adam is advocating the need for innovation in L2 pronunciation teaching, and after this talk, I’m very much inclined to agree!

Adam’s slides are available here from his highly recommended website: teachadam.com


Introducing #tleap


TLEAP: Teaching & Learning in EAP

Issues in EAP Discussion Group

tleap

#tleap is an active online community of EAP professionals who discuss issues and share ideas regarding English for academic purposes. The members are EAP teachers and others who are interested in this area of language teaching, from adjunct tutors to full-time lecturers, and even materials writers and policy makers. The purpose of the #tleap community is to discuss relevant pedagogical, logistical, and research-based issues with others, and to give those involved in EAP a voice that may otherwise go unheard.

#tleap evolved from the #EAPchat Twitter hashtag set up by Tyson Seburn, Adam Simpson, and Sharon Turner, and has now spread across a variety of social media platforms, also thanks to Kate Finegan, to enable and encourage wider participation. You can join in for free here:

Twitter: #tleap

Facebook: https://www.facebook.com/groups/tleap/

Google+: https://plus.google.com/communities/114679086713772400315


#tleap hosts twice-monthly discussions on Facebook: a focussed discussion point is posted on the 1st and 15th of every month. Please feel free to add your ideas to it and share widely. If there’s something you’d like to discuss, please add it to this list: http://bit.ly/1OnYoWM.

#tleap also hosts bimonthly discussion chats on Twitter – look out for the next one!

The chat and discussion archives are freely available, along with more information on the #tleap community, here: http://tiny.cc/tleap

#tleap thrives on the contributions of members! You can start a new post on any of the platforms any time you have a question or wish to share something relevant for the group. Comments are always welcome on all posts. With any blog, research article, or question, you can also always add the #tleap hashtag to your tweets to get everyone in our community to notice and engage.

We would love to welcome new members to the #tleap community, so please join in and share #tleap with your colleagues!

We look forward to hearing from you!

Competency-based planning and assessing


Earlier this week, I attended a workshop on competency-based (or competency-oriented) planning and assessing held by Dr Stefan Brall at Trier University, and would like to share some of the insights here.

The workshop was aimed at university-level teachers from various subject areas, and so concentrated generally on Competency-Based Education (CBE). According to Richards and Rodgers (2001), the principles of CBE can be applied to the teaching of foreign languages (-> CBLT: Competency-Based Language Teaching), making the topic of interest to ELT professionals.

What is a competency?

In everyday language, we talk of people being ‘competent’ when they have the knowledge, qualification(s), or capacity to fulfil the expectations of a particular situation. They have the ability to apply the relevant skills appropriately and effectively. In the area of education, then, these skills are the individual competencies that students need to acquire and develop. Another important distinction here is between declarative knowledge, the theoretical understanding of something, and procedural knowledge, the ability to actually do it. In language teaching, I would argue, our focus is necessarily on the procedural side of things, on getting students to be able to actually communicate in the target language. The overarching goal of CBLT is for learners to be able to apply and transfer this procedural knowledge in various settings, appropriately and effectively.

Literature on CBE explains how the approach can enhance learning, by

  • Focusing on the key competencies needed for success in the field
  • Providing standards for measuring performance and capabilities
  • Providing frameworks for identifying learners’ needs
  • Providing standards for measuring what learning has occurred

What are key competencies?

In the realm of tertiary education, a useful study to look at here is the Tuning Project. This is an EU-wide study which explored the most important competencies that students should develop at university. Although the specific ranking of the competencies may be debated, some of the capabilities that came out as very important include: the application of theory, problem solving, the adaptation of procedural knowledge to new situations, analytical thinking, synthesising information, and creativity (Gonzalez & Wagenaar, 2003). These kinds of skills are those often found at the top ends of taxonomies of learning. Compare, for example, with Bloom’s taxonomy:

bloom

Other taxonomies of learning use comparable sequential units to describe cognitive learning. For example, the SOLO model (Structure of Observed Learning Outcome, see Biggs & Tang, 2007) includes a quantitative phase of uni-structural and multi-structural learning (e.g. identifying, describing, combining), and then a qualitative phase of relational (e.g. comparing, analysing causes, applying) and extended abstract learning (e.g. generalising, hypothesising). Seeing these important skills in a hierarchically organised scheme highlights how they build upon each other, and are themselves the products of mastering many sub-skills or competencies.

In language teaching, people have long spoken of “the four skills”, i.e. skills covering the oral, aural, reading and writing domains. To this we might also add learning competencies. In CBLT, language is taught as a function of communicating about concrete tasks; learners are taught the language forms/skills they will need to use in the various situations in which they will need to function. Scales such as the Common European Framework of Reference for Languages help to break down these skills into distinct competences, whereby learners move up through the levels of mastery in each skill area, from elementary performance in a competency to proficient performance.

cefr

Competency-based Learning Outcomes

If we take scales of learning as the foundation for our planning, then, formulating statements of learning outcomes becomes quite a straightforward process. We will of course need to know the current level and needs of our students, especially in terms of competencies still to be learnt and competencies requiring further development. Associated with such learning taxonomies, we can easily find lists of action verbs denoting the skills associated with each developmental level of thinking. Based on the SOLO model, for example, we might find the following verbs:

  • Uni-structural learning (knowledge of one aspect): count, define, find, identify, imitate, name, recognize, repeat, replicate
  • Multi-structural learning (knowledge of several, unconnected aspects): calculate, classify, describe, illustrate, order, outline, summarise, translate
  • Relational learning (knowledge of aspects is integrated and connected): analyse, apply, compare, contrast, discuss, evaluate, examine, explain, integrate, organise, paraphrase, predict
  • Extended abstract learning (knowledge transferred to new situations): argue, compose, construct, create, deduce, design, generalize, hypothesise, imagine, invent, produce, prove, reflect, synthesise
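When drafting learning outcomes, these verb lists can serve as a quick check of which SOLO level an outcome targets. Here is a minimal Python sketch of such a lookup; the function name and data structure are my own illustration, with the verb lists copied from the table above:

```python
# SOLO levels mapped to their characteristic action verbs.
SOLO_VERBS = {
    "uni-structural": {"count", "define", "find", "identify", "imitate",
                       "name", "recognize", "repeat", "replicate"},
    "multi-structural": {"calculate", "classify", "describe", "illustrate",
                         "order", "outline", "summarise", "translate"},
    "relational": {"analyse", "apply", "compare", "contrast", "discuss",
                   "evaluate", "examine", "explain", "integrate",
                   "organise", "paraphrase", "predict"},
    "extended abstract": {"argue", "compose", "construct", "create",
                          "deduce", "design", "generalize", "hypothesise",
                          "imagine", "invent", "produce", "prove",
                          "reflect", "synthesise"},
}

def solo_level(verb):
    """Return the SOLO level an action verb belongs to, or None."""
    for level, verbs in SOLO_VERBS.items():
        if verb.lower() in verbs:
            return level
    return None

print(solo_level("compare"))  # → relational
```

So an outcome built around ‘compare’ targets relational learning, while one built around ‘define’ only reaches the uni-structural level.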

Based on our understanding of students’ current learning levels, students’ needs, and the general framework within which our lessons/courses are taking place (in terms of contact time, resources, etc.), and with these action verbs, we can then formulate realistic learning goals. In most cases, there will be a primary learning outcome we hope to reach, which may consist of several sub-goals – this should be made clear.

For example, an academic writing course aimed at C1-level students (on the CEFR) might set the main learning outcome as:

By the end of this course, students should be able to produce a coherent analytical essay following the Anglo-American conventions for the genre.

A couple of the sub-goals might include:

  • Students should be familiar with Anglo-American essay-writing conventions and able to apply these to their own compositions.
  • Students should understand various cohesive devices and employ these appropriately within their writing.
  • Students should understand the functions of Topic Sentences and Thesis Statements and be able to formulate these suitably in their own writing. 

Formulating clear learning outcomes in this way, and making them public, helps students to reflect on their own progress (and may be motivating for them), and helps teachers to choose activities and materials with a clear focus, as well as to devise assessment tasks and grading rubrics.

Competency-based Assessment

Of course, most teachers will need to aim for economical assessment, in terms of time and resources. As far as possible, CBE advocates on-going assessment, so that students continue to work on the competency until they achieve the desired level of mastery. Competency-based assessment may thus require more effort and organisation on the part of the assessor – but it is able to provide a more accurate picture of students’ current stage of learning and performance.

Take multiple-choice tasks, for example; they can be marked very economically, but in reality they tend only to test the lower-level thinking skills, which may not have been the desired learning outcome. To test competency-based learning, we need to base our assessment tasks on the learning outcomes we have set, perhaps using the same action verbs in the task instructions. The focus is shifted to learners’ ability to demonstrate, not simply talk theoretically about, the behaviours noted in the learning outcomes. Still, especially in the realm of language teaching, there are some tasks we can easily set in written assignments which will also allow us to assess the higher levels of competencies more economically than oral presentations or practical assignments. If our learning outcome is the ability to apply a theory, for example, we could set a question such as ‘Describe a situation that illustrates the principles of xyz’. Or, if we want to assess whether learners can discuss and evaluate, we might set a task like ‘Explain whether and why you agree or disagree with the following statement.’ These kinds of tasks require learners to apply their acquired or developed competencies on a more qualitative level.

To enable objective assessments of students’ learning, we will need to devise a matrix based on the various levels of mastery of the competencies detailed in the learning outcomes. As a basis, we might start with something like this:

  • A: An outstanding performance.
  • B: A performance considerably better than the average standard.
  • C: A performance that reaches the average standard.
  • D: Despite shortcomings, the performance just about reaches the minimum standard required.
  • E: Because of considerable shortcomings, the performance does not reach the minimum standard required.

For each sub-skill of the competencies we are aiming for students to achieve, we will need to state specifically, for instance, which ‘shortcomings’ are ‘considerable’, e.g. if the students cannot demonstrate the desired level of mastery even with the tutor’s assistance. Also, it is important in CBE and CBLT that students’ performance is measured against their peers, especially to ascertain the ‘average standard’, and not against the mastery of the tutor.

To return to the essay-writing example, a student’s composition might receive a B grade on the sub-competence of using cohesive devices if they employ several techniques to create cohesion in their work, but occasionally use one technique where another might be more effective. A student’s essay might receive a D grade on this competency if they repeatedly use the same cohesive device, or employ the techniques indiscriminately and inappropriately. An E grade might mean that the student has not tried to employ any cohesive devices. In this manner, the primary learning outcome is broken down into sub-skills, on which students’ performance can be objectively measured using a detailed grading matrix.
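As a rough illustration, this part of the grading matrix could even be expressed as a simple decision rule. The following Python sketch is a hypothetical simplification: the function name, parameters, and thresholds are my own, mirroring the B/D/E descriptions of the cohesive-devices sub-competency above (A and C would need their own descriptors):

```python
# Hypothetical mapping of simple observations about a student's use of
# cohesive devices onto the A-E grading matrix described in the text.
def cohesion_grade(devices_used, varied, appropriate):
    """Return a grade for the cohesive-devices sub-competency."""
    if devices_used == 0:
        return "E"  # no attempt to employ cohesive devices
    if not varied or not appropriate:
        return "D"  # same device repeated, or devices used indiscriminately
    return "B"      # several techniques used, mostly appropriately
```

Real marking is of course more nuanced, but writing the descriptors out this explicitly is a good test of whether the matrix can actually be applied objectively.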

In a nutshell, then, CBE and CBLT aim for ‘Yes we can!’ rather than ‘We know’. Competency-based teaching and learning have become a staple in further education and language instruction in many places around the world. If you would like to implement the approach in your own classrooms, I hope this post has given you some useful insights on how to do so!

References

Biggs, J. & C. Tang, Teaching for Quality Learning at University (Maidenhead: Open University, 2007).

Brall, S., “Kompetenzorientiert planen und prüfen”, Workshop at Trier University, 21.2.17.

Gonzalez, J. & R. Wagenaar, Tuning Educational Structures in Europe: Final Report Phase One (Bilbao, 2003).

Richards, J.C. & T.S. Rodgers, Approaches and Methods in Language Teaching (Cambridge: CUP, 2001).

“What is the CEFR?”, English Profile, Cambridge University Press, http://www.englishprofile.org/the-cefr, accessed 24.2.17

Peer Presentation Feedback


I teach an EAP module which focusses on language and study skills. It’s aimed at first-semester students starting an English Studies degree where English is a foreign language for almost all students. They’re at the B2+ level.

In a 15-week semester, we spend the first five weeks or so looking at what makes a good academic presentation in English. We cover topics such as narrowing down a topic to make a point, logically building up an argument, linking pieces of information, maintaining the audience’s attention, formal language and appropriate use of register, body language and eye contact, volume and pacing, using sources effectively, and lots of sub-skills and language features that are relevant for presentations. In the second two-thirds of the semester, students give presentations (in groups of 3) on a topic of their choice related to the English-speaking world, and we discuss feedback all together so that the others can learn from what was good or could be improved in the presentation they have watched.

This blog post describes my journey through trialling different ways of getting the best feedback to fulfil our overall learning aim. 

(Note: Don’t worry, we also use class time to practise other study skills pertaining to listening and speaking!)

1. ‘Who would like to give some feedback?’

I have experimented with various ways of getting audience members to give feedback. When I first started teaching on this module, I used to ask after the presentation ‘Who would like to give some feedback?’, which was usually qualified by saying something like ‘Remember the points we’ve covered on what makes a presentation good.’ Usually, only a few people commented, and they focussed mainly on the good things. Don’t get me wrong, I think it is important to highlight what students have done well! But the overall goal of having students give presentations was that we could constructively critique all aspects of these presentations. I had hoped that we could use these ‘real’ examples to review what we had learnt about good academic presentations. So this approach wasn’t as effective as I had hoped.

2. Feedback questions

It seemed that requiring students to keep in mind all of the features of a good academic presentation was asking a bit too much. And so, together with a colleague, I drew up a list of questions students could ask themselves about the presentation. Example questions include: Was all of the information relevant? Was the speech loud and clear, and easy to understand? Students were given the list before the first presentation and instructed to bring it each week to help them to give presentation feedback. Most people brought them most of the time. Still, students were pretty selective about which questions they wanted to answer, and (tactfully?) avoided the points where it was clear that the presentation group needed to improve. So we still weren’t getting the full range of constructive feedback that I was hoping for.

3. Feedback sandwich

It was clear to me that students wanted to be nice to each other. We were giving feedback in plenum, and no one wanted to be the ‘bad guy’. This is a good thing per se, but it meant that they were slightly hindered in giving constructive criticism and thus achieving the learning aims I had set for the course. So, before the first presentation, I set up an activity looking at how to give feedback politely and without offending the individual presenters. We explored the psychological and linguistic concepts behind ‘face saving’ and how people may become defensive if they feel their ‘face’ is attacked, and then psychologically ‘block out’ any criticism – so the feedback doesn’t help them improve their presentation; nor does it make for good student-student relationships! I explained the idea of a ‘feedback sandwich’ in which the positive comments form the bread, and the negative comments are the filling. This idea is said to ease any feelings of ‘attack’, thus making the feedback more effective. Students embraced this idea, and did their best to ‘sandwich’ their feedback. Overall, this was a helpful step in moving the class feedback towards what I thought would be most effective for the learning aims.

4. Feedback tickets

Since I noticed we still weren’t always getting feedback on all aspects of the presentation, a colleague and I decided to make ‘feedback tickets’, each with one question from the list we had previously prepared. The tickets were handed out before a presentation, and each student was then responsible for giving feedback on that point. Combined with the ‘sandwich’ approach, this overall worked pretty well. The minor drawbacks were that sometimes the presenters had really done a good job on a certain aspect and there wasn’t much ‘filling’ to go with the ‘bread’; however, sometimes the ‘filling’ was important, but students seemed to counteract their constructive criticisms by emphasising their lack of importance, especially compared to the positive comments. For me, though, the major downside to using these tickets was the time factor. Running through a set of ~15 feedback tickets (and feedback sandwiches!) after each presentation was productive for students’ presentation skills, but ate into the time in class that should have been used for practising other oral/aural skills. In extreme cases, with two 30-minute presentations plus Q&A in a 90-minute lesson, we simply ran out of time for feedback! Those poor presenters got no feedback on their presentations, and we as a class were not able to learn anything from the example they had delivered.

5. Google forms

google form.JPG

Actually, I first used Google Forms to collect feedback after one of these lessons where our time was up before we’d got through the plenary feedback round. I copied all of the feedback questions into a Google form (using the ‘quiz’ template) and emailed the link to the students. I was positively surprised by the results! Perhaps aided by the anonymity of the form, students used the ‘sandwich’ idea very effectively – suitably praising good aspects of the presentation, and taking time to explain their criticisms carefully and specifically. Wow – helpful feedback! I printed out the feedback to give to the presenters, along with my own written feedback, and also picked out a couple of pertinent comments to discuss in plenum in the next lesson. Right from the off, this way of collecting and giving feedback seemed very effective, both in terms of time taken and achieving learning aims. It seemed presenters had some time to reflect on their own performance and were able to join in the feedback discussions more openly, and focussing on just a couple of key aspects meant it was time-efficient, too. I immediately decided to use the Google form for the next couple of weeks, and have continued to find it extremely useful. Sadly, we’re at the end of our semester now, so these are just very short-term observations. Still, I’m encouraged to use the online form in future semesters.

Just goes to show how important reflecting on our classroom practices can be!

I wonder if anyone else has had similar experiences, or can share other inspirational ways of collecting feedback on presentations? I’d love to hear from you!