Analysing my Feedback Language

TL;DR SUMMARY

I ran a feedback text I’d written on a student’s work through some online text analysis tools to check the CEFR levels of my language. I was surprised that I was using some vocabulary above my students’ level. After considering whether I can nonetheless expect them to understand my comments, I propose the following tips:

  • Check the language of feedback comments before returning work and modify vocabulary if necessary.
  • Check the vocabulary frequently used in feedback comments, and plan to teach these explicitly.
  • Get students to reflect on and respond to feedback to check understanding.

A couple of colleagues I follow on blogs and social media have recently posted about online text analysis tools such as Text Inspector, Lex Tutor and so on (see, for example, Julie Moore’s post here and Pete Clements’ post here). That prompted me to explore uses of those tools in more detail for my own work – both using them to judge the input in my teaching materials or assessments, and also using them with students to review their academic essay writing.

Once I got into playing around with different online tools (beyond my go-to Vocab Kitchen), I wanted to try some out on my own texts. The thing I’ve been writing most recently, though, is feedback on my students’ essays and summaries. But I’m a bit of a feedback nerd, so I was quite excited when the idea struck me: I could use these tools to analyse my language in the feedback I write to help my students improve their texts. A little action research, if you will.

Now I obviously can’t share the students’ work here for privacy and copyright reasons, but one recent assessment task was to write a 200-250 word compare/contrast paragraph to answer this question:

How similar are the two main characters in the last film you watched?

(Don’t focus on their appearance).

These students are at B2+ level (CEFR) working towards C1 in my essay writing class. They need to demonstrate C1-level language in order to pass the class assessments. One student did not pass this assessment because her text included too many language mistakes that impeded comprehension, because overall the language level did not reach C1, and because she didn’t employ the structural elements we had trained in class.

Here’s the feedback I gave on the piece of work and which I ran through a couple of text checkers. (Note: I usually only write this much if there are a lot of points that need improving!)

The language of this text demonstrates a B2 level of competence. Some of the phrasing is rather too colloquial for written academic language, e.g. starting sentences with ‘but’, and including contracted forms. You need to aim for more sophisticated vocabulary and more lexical diversity. More connectors, signposting and transitions are needed to highlight the genre and the comp/cont relationships between the pieces of information. The language slips lead to meaning not always being emphasised or even made clear (especially towards the end). Aim to write more concisely and precisely, otherwise your text sounds too much like a superficial, subjective summary.

Apart from the personal phrase at the beginning, the TS does an OK job at answering the question of ‘how similar’, and naming the features to be discussed. However, you need to make sure you name the items – i.e. the characters – and the film. In fact, the characters are not named anywhere in the text! The paragraph body does include some points that seem relevant, but the ordering would be more logical if you used signposting and the MEEE technique. For example, you first mention their goals but don’t yet explain what they are, instead first mentioning a difference between them – but not in enough detail to make sense to a reader who maybe doesn’t know the series. Also, you need to discuss the features/points in the order you introduce them in the TS – ‘ambition’ is not discussed here. The information in the last couple of sentences is not really relevant to this question, and does not function as a conclusion to summarise your overall message (i.e. that they are more similar than they think). In future, aim for more detailed explanations of content and use the MEEE technique within one of the structures we covered in class. And remember: do not start new lines within one paragraph – it should be one chunk of text.

I was quite surprised by this ‘scorecard’ summarising the analysis of the lexis in my feedback on Text Inspector – C2 CEFR level, 14% of words on the AWL, and an overall score of 72% “with 100% indicating a high level native speaker academic text.” (Text Inspector). Oops! I didn’t think I was using that high a level of academic lexis. The student can clearly be forgiven if she’s not able to improve further based on this feedback that might be over her head! 

(From Text Inspector)

In their analyses, both Text Inspector and Vocab Kitchen categorise words in the text by CEFR level. In my case, there were some ‘off list’ words, too. These include abbreviations, most of which I expect my students to know, such as e.g., and acronyms we’ve been using in class, such as MEEE (=Message, Explanation, Examples, Evaluation). Some other words are ‘off list’ because of my British English spelling with -ise (emphasise, summarise – B2 and C1 respectively). And some words aren’t included on the word lists used by these tools, presumably due to being highly infrequent and thus categorised as ‘beyond’ C2 level. I checked learners’ dictionaries for the CEFR levels of the other ‘off list’ words, but only found rankings for these:

  • Chunk – C1
  • Genre – B2
  • Signposting – C1

(From Vocab Kitchen)
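The core check these tools perform can be sketched in a few lines of Python. This is a toy illustration only, not how Text Inspector or Vocab Kitchen actually work: the mini word list below is invented for the example, and a real check would draw on a full resource such as the English Vocabulary Profile.

```python
# Toy sketch of a CEFR vocabulary check, in the spirit of Text Inspector /
# Vocab Kitchen. The word list is invented for illustration only.
import re

CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]

# Hypothetical mini word list (word -> CEFR level)
WORD_LEVELS = {
    "language": "A2", "text": "A2", "need": "A1", "written": "B1",
    "academic": "B2", "vocabulary": "B1", "diversity": "C1",
    "competence": "C1", "superficial": "C2", "transitions": "C2",
    "subjective": "C1", "emphasised": "B2", "summarise": "C1",
}

def flag_above_level(text, student_level="B2"):
    """Return (above-level words, off-list words) for a feedback text."""
    cutoff = CEFR_ORDER.index(student_level)
    words = set(re.findall(r"[a-z']+", text.lower()))
    above, off_list = [], []
    for word in words:
        level = WORD_LEVELS.get(word)
        if level is None:
            off_list.append(word)            # not on the word list at all
        elif CEFR_ORDER.index(level) > cutoff:
            above.append((word, level))      # ranked above the student's level
    return sorted(above), sorted(off_list)

feedback = "You need more lexical diversity; the text sounds superficial."
above, off_list = flag_above_level(feedback, "B2")
print(above)     # words ranked above B2 in the toy list
print(off_list)  # words missing from the toy list
```

With a list this small, most words come out ‘off list’ – which is exactly why the real tools’ coverage (and its limits, as discussed below for -ise spellings and abbreviations) matters so much to the results.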

Logically, the question I asked myself at this point is whether I can reasonably expect my students to understand the vocabulary which is above their current language level when I use it in feedback comments. This particularly applies to the words that are typically categorised as C2, which on both platforms were contracted, superficial and transitions, and perhaps also to competence, diversity and subjective which are marked as C1 level. And, of course, to the other ‘off list’ words: colloquial, concisely, connectors, lexical, and phrasing.

Now competence, diversity, lexical and subjective shouldn’t pose too much of a problem for my students, as those words are very similar in German (Kompetenz, Diversität, lexikalisch, subjektiv) which all of my students speak, most of them as an L1. We have also already discussed contracted forms, signposting and transitions on the course, so I have to assume my students understand those. Thus, I’m left with colloquial, concisely, connectors, phrasing and superficial as potentially non-understandable words in my feedback. 

Of course, this feedback is given in written form, so you could argue that students will be able to look up any unknown vocabulary in order to understand my comments and know what they might do differently in future. But I worry that not all students would actually bother to do so – so they would continue to not fully understand my feedback, which would make writing it for them rather a waste of my time.

Overall, I’d say that formulations of helpful feedback comments for my EAP students need to strike a balance. They should mainly use level-appropriate language in terms of vocabulary and phrasing so that the students can comprehend what they need to keep doing or work on improving. Also, they should probably use some academic terms to model them for the students and make matching the feedback to the grading matrices more explicit. Perhaps the potentially non-understandable words in my feedback can be classified as working towards the second of these aims. 

Indeed, writing in a formal register to avoid colloquialisms, and aiming for depth and detail to avoid superficiality, are key considerations in academic writing. As are writing in concise phrases and connecting them logically. Thus, I’m fairly sure I have used these potentially non-understandable words in my teaching on this course. But so far we haven’t done any vocabulary training specifically focused on these terms. If I need to use them in my feedback, though, then the students do need to understand them in some way.

So, what can I do? I think there are a couple of options for me going forward which can help me to provide constructive feedback in a manner which models academic language but is nonetheless accessible to the students at the level they are working at. These are ideas that I can apply to my own practice, but that other teachers might also like to try out:

  • Check the language of feedback comments before returning work (with feedback) to students; modify vocabulary if necessary.
  • Check the vocabulary items and metalanguage I want/need to use in feedback comments, and in grading matrices (if provided to students), and plan to teach these words if they’re beyond students’ general level.
  • Use the same kinds of vocabulary in feedback comments as in oral explanations of models and in teaching, to increase students’ familiarity with it. 
  • Give examples (or highlight them in the student’s work) of what exactly I mean with certain words.
  • Get students to reflect on the feedback they receive and make an ‘action plan’ or list of points to keep in mind in future – which will show they have understood and been able to digest the feedback.

If you have further suggestions, please do share them in the comments section below!

As a brief closing comment, I just want to point out here that it is of course not only the vocabulary of a text or feedback comment that determines how understandable it is to readers at different levels. Vocabulary is a start, perhaps, but other readability measures need to be taken into account, too. I’ll aim to explore these in a separate blog post.

2 thoughts on “Analysing my Feedback Language”

  1. Love this idea! As you say, it obviously doesn’t give you a complete picture, but it’s a really interesting way of getting you reflecting on how comprehensible your feedback might be.

    A couple of points to consider about the text checkers in this context:
    1. If you’re using EVP level labels, remember they rank vocab primarily by productive level. So your ‘above level’ words may not be within your students’ productive vocabulary, but they may recognize them receptively without too much trouble.
    2. CEFR levels for vocab knowledge are generally more reliable for lower levels than higher levels, esp. post B2. It’s much easier to predict the vocab students are likely to know at lower levels just because their vocab range will be smaller and more likely restricted to high frequency words and common themes. They’re also more likely to learn the bulk of their vocab within the fairly controlled and predictable context of the ELT classroom. Post-B2, when students already have an established core vocab, what they learn next is more likely to diverge for all kinds of reasons. They’re at a point where they can more easily access authentic language from various non-classroom sources, so they’ll be picking up different vocab depending on their interests, access, inclinations, aims, etc. So, some students might pick up on the more formal language terminology you use in the classroom, while others will be more into the slang and idiomatic language they come across on social media or wherever. The Oxford 3000/5000 labels actually only go up as far as C1. EVP includes C2, but I’d take the differences between C1, C2 and off-list with quite a pinch of salt.

    Sorry, that was almost a whole blog post in a comment … ha ha, great topic!

  2. Thanks for reading and for your comment, Julie!
    The receptive/productive thing is a good point that I didn’t mention in my post, but you’re right I guess that some learners may understand words that according to these checkers are ‘above level’. But, as you say, it’s hard to tell which students will understand which words, because they have such different exposure. I hadn’t really ever thought about it in terms of the level-categories therefore being less reliable at the top levels, but it totally makes sense. In any case, I think some (more?) of my problem is my tendency to write rather long sentences with complicated structures using subordinate and relative clauses, etc. in which the main point may not be made explicit to the student reading it. I’m hoping to get time to explore that and post about it sometime soon…!
    CM
