Month: February 2017

Competency-based planning and assessing

Earlier this week, I attended a workshop on competency-based (or competency-oriented) planning and assessing held by Dr Stefan Brall at Trier University, and would like to share some of the insights here.

The workshop was aimed at university-level teachers from various subject areas, and so concentrated generally on Competency-Based Education (CBE). According to Richards and Rodgers (2001), the principles of CBE can be applied to the teaching of foreign languages (-> CBLT: Competency-Based Language Teaching), making the topic of interest to ELT professionals.

What is a competency?

In everyday language, we talk of people being ‘competent’ when they have the knowledge, qualification(s), or capacity to fulfil the expectations of a particular situation. They have the ability to apply the relevant skills appropriately and effectively. In the area of education, then, these skills are the individual competencies that students need to acquire and develop. Another important distinction here is between declarative knowledge, the theoretical understanding of something, and procedural knowledge, the ability to actually do it. In language teaching, I would argue, our focus is necessarily on the procedural side of things, on getting students to be able to actually communicate in the target language. The overarching goal of CBLT is for learners to be able to apply and transfer this procedural knowledge in various settings, appropriately and effectively.

Literature on CBE explains how the approach can enhance learning, by

  • Focusing on the key competencies needed for success in the field
  • Providing standards for measuring performance and capabilities
  • Providing frameworks for identifying learners’ needs
  • Providing standards for measuring what learning has occurred

What are key competencies?

In the realm of tertiary education, a useful study to look at here is the Tuning Project. This is an EU-wide study which explored the most important competencies that students should develop at university. Although the specific ranking of the competencies may be debated, some of the capabilities that came out as very important include: the application of theory, problem solving, the adaptation of procedural knowledge to new situations, analytical thinking, synthesising information, and creativity (Gonzalez & Wagenaar, 2003). These kinds of skills are those often found at the top ends of taxonomies of learning. Compare, for example, with Bloom’s taxonomy:

[Figure: Bloom’s taxonomy of learning]

Other taxonomies of learning use comparable sequential units to describe cognitive learning. For example, the SOLO model (Structure of Observed Learning Outcome, see Biggs & Tang, 2007) includes a quantitative phase of uni-structural and multi-structural learning (e.g. identifying, describing, combining), and then a qualitative phase of relational (e.g. comparing, analysing causes, applying) and extended abstract learning (e.g. generalising, hypothesising). Seeing these important skills in a hierarchically organised scheme highlights how they build upon each other, and how they are themselves the products of mastering many sub-skills or competencies.

In language teaching, people have long spoken of “the four skills”, i.e. skills covering the oral, aural, reading and writing domains. To this we might also add learning competencies. In CBLT, language is taught as a function of communicating about concrete tasks; learners are taught the language forms/skills they will need to use in the various situations in which they will need to function. Scales such as the Common European Framework of Reference for Languages (CEFR) help to break down these skills into distinct competences, whereby learners move up through the levels of mastery in each skill area, from elementary performance in a competency to proficient performance.

[Figure: the CEFR levels]

Competency-based Learning Outcomes

If we take scales of learning as the foundation for our planning, then, formulating statements of learning outcomes becomes quite a straightforward process. We will of course need to know the current level and needs of our students, especially in terms of competencies still to be learnt and competencies requiring further development. Associated with such learning taxonomies, we can easily find lists of action verbs which denote the skills associated with each developmental level of thinking skills. Based on the SOLO model, for example, we might find the following verbs:

  • Uni-structural learning (knowledge of one aspect): count, define, find, identify, imitate, name, recognize, repeat, replicate
  • Multi-structural learning (knowledge of several, unconnected aspects): calculate, classify, describe, illustrate, order, outline, summarise, translate
  • Relational learning (knowledge of aspects is integrated and connected): analyse, apply, compare, contrast, discuss, evaluate, examine, explain, integrate, organise, paraphrase, predict
  • Extended abstract learning (knowledge transferred to new situations): argue, compose, construct, create, deduce, design, generalize, hypothesise, imagine, invent, produce, prove, reflect, synthesise

Based on our understanding of students’ current learning levels, students’ needs, and the general framework within which our lessons/courses are taking place (in terms of contact time, resources, etc), and with these action verbs, we can then formulate realistic learning goals. In most cases, there will be a primary learning outcome we hope to reach, which may consist of several sub-goals – this should be made clear.

For example, an academic writing course aimed at C1-level students (on the CEFR) might set the main learning outcome as:

By the end of this course, students should be able to produce a coherent analytical essay following the Anglo-American conventions for the genre.

A couple of the sub-goals might include:

  • Students should be familiar with Anglo-American essay-writing conventions and able to apply these to their own compositions.
  • Students should understand various cohesive devices and employ these appropriately within their writing.
  • Students should understand the functions of Topic Sentences and Thesis Statements and be able to formulate these suitably in their own writing. 

Formulating clear learning outcomes in this way, and making them public, helps students to reflect on their own progress and can be motivating for them; it also helps teachers to choose activities and materials with a clear focus, and to devise assessment tasks and grading rubrics.

Competency-based Assessment

Of course, most teachers will need to aim for economical assessment, in terms of time and resources. As far as possible, CBE advocates on-going assessment, so that students continue to work on the competency until they achieve the desired level of mastery. Competency-based assessment may thus require more effort and organisation on the part of the assessor – but it is able to provide a more accurate picture of students’ current stage of learning and performance.

Take multiple-choice tasks, for example; they can be marked very economically, but in reality they tend only to test the lower-level thinking skills, which may not have been the desired learning outcome. To test competency-based learning, we need to base our assessment tasks on the learning outcomes we have set, perhaps using the same action verbs in the task instructions. The focus is shifted to learners’ ability to demonstrate, not simply talk theoretically about, the behaviours noted in the learning outcomes. Still, especially in the realm of language teaching, there are some tasks we can easily set in written assignments which will also allow us to assess the higher levels of competencies more economically than oral presentations or practical assignments. If our learning outcome is the ability to apply a theory, for example, we could set a question such as ‘Describe a situation that illustrates the principles of xyz‘. Or, if we want to assess whether learners can discuss and evaluate, we might set a task like ‘Explain whether and why you agree or disagree with the following statement.‘ These kinds of tasks require learners to apply their acquired or developed competencies on a more qualitative level.

To enable objective assessments of students’ learning, we will need to devise a matrix based on the various levels of mastery of the competencies detailed in the learning outcomes. As a basis, we might start with something like this:

  • A: An outstanding performance.
  • B: A performance considerably better than the average standard.
  • C: A performance that reaches the average standard.
  • D: Despite shortcomings, the performance just about reaches the minimum standard required.
  • E: Because of considerable shortcomings, the performance does not reach the minimum standard required.

For each sub-skill of the competencies we are aiming for students to achieve, we will need to state specifically, for instance, which ‘shortcomings’ are ‘considerable’, e.g. if the students cannot demonstrate the desired level of mastery even with the tutor’s assistance. Also, it is important in CBE and CBLT that students’ performance is measured against their peers, especially to ascertain the ‘average standard’, and not against the mastery of the tutor.

To return to the essay-writing example, a student’s composition might receive a B grade on the sub-competence of using cohesive devices if they employ several techniques to create cohesion in their work, but occasionally use one technique where another might be more effective. A student’s essay might receive a D grade on this competency if they repeatedly use the same cohesive device, or employ the techniques indiscriminately and inappropriately. An E grade might mean that the student has not tried to employ any cohesive devices. In this manner, the primary learning outcome is broken down into sub-skills, on which students’ performance can be objectively measured using a detailed grading matrix.

In a nutshell, then, CBE and CBLT aim for ‘Yes we can!’ rather than ‘We know’. Competency-based teaching and learning have become a staple in further education and language instruction in many places around the world. If you would like to implement the approach in your own classrooms, I hope this post has given you some useful insights on how to do so!

References

Biggs, J. & C. Tang, Teaching for Quality Learning at University (Maidenhead: Open University, 2007).

Brall, S., “Kompetenzorientiert planen und prüfen”, Workshop at Trier University, 21.2.17.

Gonzalez, J. & R. Wagenaar, Tuning Educational Structures in Europe: Final Report Phase One (Bilbao, 2003).

Richards, J.C. & T.S. Rodgers, Approaches and Methods in Language Teaching (Cambridge: CUP, 2001).

“What is the CEFR?”, English Profile, Cambridge University Press, http://www.englishprofile.org/the-cefr, accessed 24.2.17

ELT Research Bites

Followers of my blog will know that I believe we, as language teachers, all need to understand the pedagogical underpinnings of what we do in our language classrooms. That’s why I aim in my blog posts to provide information on theoretical backgrounds and lesson materials which apply them practically. I would also love for more teachers to read the research and background articles for themselves. But I know that teachers are all busy people, who may not have the time, or the access, to read publications on the latest developments and findings from language education research.

ELT Research Bites is here to help!

As the founder, Anthony Schmidt, explains: ‘ELT Research Bites is a collaborative, multi-author website that publishes summaries of published, peer-reviewed research in a short, accessible and informative way.’

The core contributors are Anthony Schmidt, Mura Nava, Stephen Bruce, and me!

 

Anthony describes the problem that inspired ELT Research Bites: ‘There’s a lot of great research out there: It ranges from empirically tested teaching activities to experiments that seek to understand the underlying mechanics of learning. The problem is, though, that this research doesn’t stand out like the latest headlines – you have to know where to look and what to look for as well as sift through a number of other articles. In addition, many of these articles are behind extremely expensive paywalls that only universities can afford. If you don’t have access to a university database, you are effectively cut off from a great deal of research. Even if you do find the research you want to read, you have to pore over pages and pages of what can be dense prose just to get to the most useful parts. Reading the abstract and jumping to the conclusion is often not enough. You have to look at the background information, the study design, the data, and the discussion, too. In other words, reading research takes precious resources and time, things teachers and students often lack.’

And so ELT Research Bites was born!  

[Image: screenshot of the ELT Research Bites site]

The purpose of ELT Research Bites is to present interesting and relevant language and education research in an easily digestible format.

Anthony again: ‘By creating a site on which multiple authors are reading and writing about a range of articles, we hope to create for the teaching community a resource in which we share practical, peer-reviewed ideas in a way that fits their needs.’

ELT Research Bites provides readers with the content and context of research articles, at a readable length, and with some ideas for practical implications. We hope, with these bite-size summaries of applied linguistics and pedagogy research, to give all (language) teachers access to the insights gained through published empirical work, which they can adapt and apply in their own practice, whilst not taking too much of their time away from where it is needed most – the classroom.

CHECK OUT ELT Research Bites here:

FOLLOW ON TWITTER: @ResearchBites

 

Peer Presentation Feedback

I teach an EAP module which focusses on language and study skills. It’s aimed at first-semester students starting an English Studies degree where English is a foreign language for almost all students. They’re at the B2+ level.

In a 15-week semester, we spend the first five weeks or so looking at what makes a good academic presentation in English. We cover topics such as narrowing down a topic to make a point, logically building up an argument, linking pieces of information, maintaining the audience’s attention, formal language and appropriate use of register, body language and eye contact, volume and pacing, using sources effectively, and lots of sub-skills and language features that are relevant for presentations. In the second two-thirds of the semester, students give presentations (in groups of three) on a topic of their choice related to the English-speaking world, and we discuss feedback together as a class so that the others can learn from what was good or could be improved in the presentation they have watched.

This blog post describes my journey through trialling different ways of getting the best feedback to fulfil our overall learning aim. 

(Note: Don’t worry, we also use class time to practise other study skills pertaining to listening and speaking!)

1. ‘Who would like to give some feedback?’

I have experimented with various ways of getting audience members to give feedback. When I first started teaching on this module, I used to ask after the presentation ‘Who would like to give some feedback?’, which was usually qualified by saying something like ‘Remember the points we’ve covered on what makes a presentation good.’ Usually, only a few people commented, and they focussed mainly on the good things. Don’t get me wrong, I think it is important to highlight what students have done well! But the overall goal of having students give presentations was that we could constructively critique all aspects of these presentations. I had hoped that we could use these ‘real’ examples to review what we had learnt about good academic presentations. So this approach wasn’t as effective as I had hoped.

2. Feedback questions

It seemed that requiring students to keep in mind all of the features of a good academic presentation was asking a bit too much. And so, together with a colleague, I drew up a list of questions students could ask themselves about the presentation. Example questions include: Was all of the information relevant? Was the speech loud and clear, and easy to understand? Students were given the list before the first presentation and instructed to bring it each week to help them to give presentation feedback. Most people brought them most of the time. Still, students were pretty selective about which questions they wanted to answer, and (tactfully?) avoided the points where it was clear that the presentation group needed to improve. So we still weren’t getting the full range of constructive feedback that I was hoping for.

3. Feedback sandwich

It was clear to me that students wanted to be nice to each other. We were giving feedback in plenum, and no one wanted to be the ‘bad guy’. This is a good thing per se, but it meant that they were slightly hindered in giving constructive criticism and thus achieving the learning aims I had set for the course. So, before the first presentation, I set up an activity looking at how to give feedback politely and without offending the individual presenters. We explored the psychological and linguistic concepts behind ‘face saving’ and how people may become defensive if they feel their ‘face’ is attacked, and then psychologically ‘block out’ any criticism – so the feedback doesn’t help them improve their presentation; nor does it make for good student-student relationships! I explained the idea of a ‘feedback sandwich’ in which the positive comments form the bread, and the negative comments are the filling. This idea is said to ease any feelings of ‘attack’, thus making the feedback more effective. Students embraced this idea, and did their best to ‘sandwich’ their feedback. Overall, this was a helpful step in moving the class feedback towards what I thought would be most effective for the learning aims.

4. Feedback tickets

Since I noticed we still weren’t always getting feedback on all aspects of the presentation, a colleague and I decided to make ‘feedback tickets’, each with one question from the list we had previously prepared. The tickets were handed out before a presentation, and each student was then responsible for giving feedback on that point. Combined with the ‘sandwich’ approach, this overall worked pretty well. The minor drawbacks were that sometimes the presenters had really done a good job on a certain aspect and there wasn’t much ‘filling’ to go with the ‘bread’; however, sometimes the ‘filling’ was important, but students seemed to counteract their constructive criticisms by emphasizing their lack of importance, especially compared to the positive comments. For me, though, the major downside to using these tickets was the time factor. Running through a set of ~15 feedback tickets (and feedback sandwiches!) after each presentation was productive for students’ presentation skills, but ate into the time in class that should have been used for practising other oral/aural skills. In extreme cases, with two 30-minute presentations plus Q&A in a 90-minute lesson, we simply ran out of time for feedback! Those poor presenters got no feedback on their presentations, and we as a class were not able to learn anything from the example they had delivered.

5. Google forms

[Image: screenshot of the Google feedback form]

Actually, I first used Google Forms to collect feedback after one of these lessons where our time was up before we’d got through the plenary feedback round. I copied all of the feedback questions into a Google form (using the ‘quiz’ template) and emailed the link to the students. I was positively surprised by the results! Perhaps aided by the anonymity of the form, students used the ‘sandwich’ idea very effectively – suitably praising good aspects of the presentation, and taking time to explain their criticisms carefully and specifically. Wow – helpful feedback! I printed out the feedback to give to the presenters, along with my own written feedback, and also picked out a couple of poignant comments to discuss in plenum in the next lesson. Right from the off, this way of collecting and giving feedback seemed very effective, both in terms of time taken and achieving learning aims. It seemed presenters had some time to reflect on their own performance and were able to join in the feedback discussions more openly, and focussing on just a couple of key aspects meant it was time-efficient, too. I immediately decided to use the Google form for the next couple of weeks, and have continued to find it extremely useful. Sadly, we’re at the end of our semester now, so these are just very short-term observations. Still, I’m encouraged to use the online form in future semesters.
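(A side note for the technically inclined: Google Forms lets you download all responses as a CSV file, and a short script can then group the answers by question before you print them out for the presenters. The sketch below is just an illustration of that idea, not part of my actual workflow – the file name and the ‘Timestamp’ column heading are assumptions based on a typical Forms export.)

```python
# A minimal sketch: collate Google Form feedback responses exported as CSV,
# grouping the free-text answers under each feedback question.
# The column names and file name here are hypothetical examples.
import csv
from collections import defaultdict

def summarise_feedback(csv_path):
    """Return a dict mapping each feedback question to the list of answers given."""
    by_question = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for question, answer in row.items():
                # Skip the export's timestamp column and any blank answers.
                if question != "Timestamp" and answer.strip():
                    by_question[question].append(answer.strip())
    return by_question

def print_summary(by_question):
    """Print a simple per-question summary, ready to hand to the presenters."""
    for question, answers in by_question.items():
        print(question)
        for answer in answers:
            print(f"  - {answer}")
        print()

if __name__ == "__main__":
    print_summary(summarise_feedback("responses.csv"))
```

This keeps the anonymity of the form intact while saving the step of reading through the raw response spreadsheet row by row.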

Just goes to show how important reflecting on our classroom practices can be!

I wonder if anyone else has had similar experiences, or can share other inspirational ways of collecting feedback on presentations? I’d love to hear from you!