Category Archives: Assessment

Undergrad Day

Today is undergraduate day. I’m teaching a module on SEN in the secondary school to my undergrad PE and Secondary Ed students in May, and I want to be well prepared. Earlier in the year I read an interesting study by Benjamin Bloom (1984) about ‘Mastery’ and his attempts to solve the ‘2 sigma problem’, i.e. the 2 standard deviations of increased attainment that he found between pupils taught in ‘conventional classrooms’ and those who were instead ‘tutored’ one to one or in very small groups. I think that aspects of this study can help me with my students.

This study seems to be one of the original studies that informed the current vogue for ‘mastery’ approaches in teaching and assessment. The recommendations are for iterative cycles of formative testing which allow a student to reach the desired ‘mastery’ level of attainment. I’ll not go into that now (perhaps I’ll plant a seed in the ‘post-garden’ and come back to it later). Suffice it to say that I think Bloom underestimates the time cost, and fails to spell out what he really means by mastery (80% in a test score is the usual level – which, we can see, means pretty much nothing).

What grabbed me more is an idea that Bloom develops from Leyton (1983) of techniques that “enhance the students’ initial cognitive entry pre-requisites” (who said that educational research can’t be easily understood?!). Broadly, this means ‘making sure they know and can do the things they’ll need to be able to do before they start to learn the new things that you have to teach them’.

Today I’ll be reading through my course materials, looking at the development activities I want them to do during the 10 weeks of the module, and working out a list of these ‘prerequisites’.  I’ll then scrap the first week’s sessions and turn them into a ‘prerequisites’ week.  I might have to think of a snappier title… any suggestions?

Where will this lead? I’m hoping to make a ‘knowledge organiser’ which the students themselves have to complete, and which I’ll then check over formatively. I’m sceptical that an organiser on its own will do anything (I need to make sure they read it and commit the ideas to memory, for a start), but I’m hoping that if they have a first go at coming up with the ideas, which I then correct, this will give me an idea of where they’re coming from, and give them a couple of chances to understand the material they need to know. I’m hoping that my prerequisites audit will also help inform decisions about the way I structure the workshops and seminars that follow, and the content of the weekly lecture, as well as giving me some clear hooks and points to attach to ongoing quizzing. I’ll let you know how it goes.

Bloom, B.S. (1984) ‘The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring’, Educational Researcher, 13(6), pp. 4–16.

Teacher Dashboard and Google Classroom #28daysofwriting

I used to think that ICT would ‘transform’ education, and that it could also ‘transform’ society. Well, perhaps it will, but it hasn’t yet. As I get more experienced it seems to me that ICT, like any tool, has its benefits and its downsides. It also seems to me that one of the big problems with the use of ICT in learning is that students quickly learn to game whatever system they have been asked to work with, and that this works directly against my main aim as a teacher. I want students to slow down, to get caught up, to be forced to think again. They want a high score, or to get to the end, or simply to be finished, on to the next thing. Even if they get beyond this, often they want what they’re doing to be ‘good’ (or sometimes ‘good enough’). ICT can make all of this far too easy.

That’s why I tend to use less ICT directly in the classroom than I used to, and when I do I always try to ask myself ‘why am I taking the extra time to do this using ICT?’ or ‘why are we learning this in the ICT suite instead of our normal classroom?’. Sometimes I can’t find a decent answer, and then we go back to the classroom and to books and pens and pencils.

In the past I have used classroom blogs a great deal, and know colleagues using them to great effect – Alan Kydd’s www.heathenhistory.co.uk for instance. However, sometimes I don’t want a public blog for my class. I want to know who is reading it, and I don’t want to worry about the administration of usernames and privileges. What I do want is a quick way of getting information, links and assignments to students. Previous experience with various VLEs has taught me that this can be an enormous pain in the bum, and that the difficulties these things represent can quickly sap the energy from efforts to use ICT to help teacher/student communication.

Recently I’ve been looking at Google Classroom, which does seem to offer some quicker and easier solutions to the problems I have run into whilst using classroom blogs. Classroom isn’t transforming my practice, but I am finding it useful for the usual things like homework reminders and answering queries from students. What I like it for best, though, is fleshing out those throwaway remarks, or passing conversations we have with students who are interested in topics not directly related to our syllabus: links to extra reading, radio or TV shows, catch-up notes and historical novels that we have discussed.

Teacher Dashboard is a set of apps with even more potential, which I’m still experimenting with. This service from Hapara gives you the ability to create a folder in your students’ Google Drive (not their personal drive, but one connected to their institution), and to send them Google documents and other resources. Using the dashboard I can then tell which students have amended their documents, and when they did so. I can also give them feedback on their work as they progress. I’ve been using this with some year 10 GCSE students. Their assessment in 2016 will be on paper, so I’m reluctant to spend a great deal of time yet asking them to type answers into Google Docs. What I have been using it for is revision presentations.

I have been asking students to go home after each lesson and make two or three slides to record what they learned. In this way I’m hoping to encourage them to see that revision shouldn’t be something that happens at the end of a course, or just when an important assessed test is coming up. Trying to use something I learned from Make It Stick – that effort expended in trying to remember something will help later recall – I ask the students to first draft their slides without looking at their notes. Only when the first draft is done should they turn to the notes. We have a short formative assessment every month or five weeks, and they hand in a printed version of their revision presentation as the test starts.

I can’t honestly say that this has yet had a huge impact on grades, but I have noticed that their retention and use of important information has improved. What it is doing is setting up a routine and an expectation that revision is ongoing. I also get an example of what they do when they revise, and I’m going to use this to help them revise better as the course goes on.

So, Classroom and Teacher Dashboard aren’t revolutionary ICT, but they are genuinely helping me in my task of enabling students to learn.

#28daysofwriting – Feedforward

Going to be very brief this evening, but I did want to record how I got on with feedforward questions. This idea, which has been doing the rounds on Twitter, involves teachers making comments on work and asking for improvements, or asking supplemental questions designed to move students’ understanding on.

I tried it with my year 9s, which worked well. I think that this was because my feedback was based on the work we had done on generalisations over several classes, and the work they had to do was improve a paragraph that they had written last lesson, and which I had marked in the meantime.

With my year 8 boys I tried writing questions that I hoped would enable them to extend their understanding, or improve their responses to questions they answered last lesson. This didn’t go as well – perhaps because I had a much less concrete set of learning objectives for the work. This led me to give out a much more diverse set of feedback instructions, and didn’t allow the students to use their experiences from the previous lessons in the same way my year 9s could when interpreting my advice. Also, they were year 8 boys, it was the last lesson on Friday afternoon, and I was asking for independent thinking in response to written comments.

Why I love: Marking

I’m trying to work on a “do more marking than planning” basis this week – partially as an experiment to see if I can, but also because I wonder if I spend too long thinking of cool things to do, and not enough time finding out whether those cool things have actually had an impact.

Yesterday I posted about the ‘Target Notes’ idea that I half-inched from Paul Ginnis. Today I thought I’d share what I did with the marking of them. I flicked through, looking for really good examples of the kind of generalisation we were after in the middle ring of our notes, and compared these with sentences which were much weaker generalisations. I did leave feedback, and I was able to note which students had really got what I meant, which were still unsure, and which found the whole thing mystifying. The marking also gave me a really good pointer about who to question in class, to check whether things had been going in.

So, armed with my understanding of the progress made during the last lesson, I made a PowerPoint called Making Explanations and Generalisations which I hope enabled the students to compare these stronger and weaker generalisations, and which we then used to come up with some criteria for assessing generalisations. This only took 10 minutes at the start of the lesson, but it gave the rest of the lesson good momentum. This was mainly because I told the students to be ready to write a really strong generalisation at the end of the lesson – based on the (have to admit it) rather dull note-taking exercise that they would be completing in the meantime.

In short, the marking planned my lesson, and gave me an excuse to set them up with a plenary that would allow me to see if they’ve moved on.

Why I love: Target Notes

When I read Make It Stick: The Science of Successful Learning (£), one of the things that really resonated with me was the difference that the authors draw between ‘rule learners’ and ‘example learners’ – between those who can see the wood for the trees and those who can only see the individual trees. At first I thought this was not an easy concept to fit over my experience of history teaching, there being no ‘rules’ in history to learn. But the more I thought about it in the light of my own teaching, the more important this idea seemed to be. There are lots of students who know a great deal of things, but they seem to have difficulty saying anything about all these things that they know.

My year 9s are doing some work on slavery. We’ve been borrowing some ‘big picture’ slavery lessons from Dan Nuttall at Ilkley Grammar, and interspersing them with detailed lessons on slavery during the 18th and 19th centuries. I’ve been wanting them to be able to make generalisations about slavery, and to support these with evidence, and we’ve had some success. To help others over the line, I borrowed an idea from the Teacher’s Toolkit: Raise Classroom Achievement with Strategies for Every Learner (£), called ‘target notes’: I encouraged the students to write explaining sentences about the impact or effects of the three ‘topics’ in the middle circle, and examples to support them in the outer circle. Most of them did well, but a crucial number didn’t – and it’s these I can work on to help them understand the difference between the ‘rule’ or the ‘generalisation’ and the ‘evidence’.

New GCSE History – what do the boards have to offer?

I hope you don’t think I’m flogging a dead horse here, but I’ve been puzzling again over the GCSE requirements published last week, and in trying to understand them I drew a few diagrams. (Don’t worry, I’m not about to out myself as a visual learner.)

There are now three types of study – Depth, Period and Thematic.

One important change here is that there must be two depth studies. One of the depth studies may overlap in the period covered with the Period Study. Thematic studies look like development-over-time studies to me, and to other people too.

We’re also informed that the GCSE should include history from all three of the ‘eras’ defined in the guidance.

Finally, there are rules about the ‘location’ of the material presented.  Each specification must cover localities, British and ‘Wider World’ histories.

There’s no coursework and no controlled assessment permitted by the guidance, which leads me to think that the exam boards have been set quite a complicated task in making all this work. Obviously we won’t know until we have sight of draft specifications, and I have no inside information on this at all, but I wonder if we can make some speculations about the construction of the new specs.

  • Three papers overall? It would be difficult to see how students could sit four papers for a GCSE history, which means that one of the depth studies will perhaps be examined with the period study – you know, the way it is with Modern World papers.
  • This implies a specification that is focused on one era, through the period study and related depth study, with the others being bolted on.
  • In turn this could imply two different approaches to the relation between a period and depth study: a ‘British Paper’ in which the depth and period studies overlap is the most obvious, as it gives the board a clear way of reaching the 40% minimum requirement for British history; or will there be a European / wider world paper with period and depth studies, with a separate British depth paper?
  • If the second is preferred, then will there have to be a substantial element on British history in the thematic studies to help edge British content up to 40%?
  • Thematic papers will be one way in which the other eras of the course get studied – if the Period and Depth studies overlap, and if the ‘other’ depth study is from a second period, the thematic paper will be the way to catch the other era.
  • I bet Medieval history will get covered in thematic papers, and that it will be unusual to find a medieval depth study.
  • Localities?!  Where do these fit in?  I would guess that this will be in the British depth study.


What is data for? Whose data is it, anyway?

If you’re a history teacher and especially if you’re a head of department then you really should be thinking about how you report to parents and to the school about the progress of your students.  If you’re thinking about that, then you really should be reading Alex Ford’s excellent blog at http://www.andallthat.co.uk/blog.html.

Alex’s extremely thought-provoking pieces on what it means to get better at history, why this improvement is never linear, and how to explain it to parents and school leaders make several excellent points. History teachers need a way of thinking about the attainment and progress of their students, and Alex’s approach is to provide reporting points on specific pieces of work, and to give descriptive feedback about progress.

This makes a lot of sense to me. A concrete piece of data that SLT can use for reporting, and a clear description of what it looks like to make good, or poor, progress in history must be more valuable than a number and sub-level which no-one really understands, and which is highly open to accusations of grade inflation and inaccuracy. Coupled with good feedback in class which helps students understand how to progress, this makes a solid model for history departments to adapt for their circumstances.

Whilst Alex’s model does an excellent job of helping us communicate with parents and school about progress and attainment, I don’t think that it is all we need as teachers to think about improving our practice. I’m not sure that one system can do both.  I think we need to gather feedback in other ways, and produce data that will help us focus on what matters in our classrooms.

One of the big problems with the NC levels was (still is in many cases) that one (blunt and conceptually confused) indicator was used for many purposes.  Most of these purposes were incompatible or difficult to align with the original purpose of NC levels as a final description of what a pupil had achieved at the end of the whole key stage.  So, the same data was used to give formative feedback, to report on progress, to predict GCSE performance, to assess teacher effectiveness, to hold departments to account, and as part of the decision that Ofsted inspectors made in grading a school. It seems obvious to me that such flimsy data could not hold the weight of so much responsibility.

This situation also led to teachers becoming alienated from data. Data has become something that is done to teachers. Even when we have reported our own judgements on pupil attainment and progress, these judgements are taken in such a way that we cannot use them, either in our teaching or in thinking about how we can improve our practice. Data has become something to fear, and even to resent, as it is quoted back to us in performance management meetings, appropriated in mock Ofsteds or, even worse, rendered useless by a rising tide of scepticism amongst our colleagues.

An interesting project by the Bill and Melinda Gates Foundation on identifying and measuring successful teaching recommends a blend of approaches using student test scores, student feedback surveys and observation of teachers.  You can read an overview of the project here and their final report about how these three methods could work together here.  It’s the second method that interests me at the moment, as I think that it is something that individual teachers or departments could think about using in the short term.

The MET project’s first paper (here) sets out seven indicators of good classroom teaching which can be used by teachers, especially over time, as a way in to thinking about which aspects of their practice they should improve. I know some great teachers who will take general feedback, say, at the end of a GCSE course, or when their A level students are about to go off on study leave, but by then it is late in the day for those students, and often such feedback is bland, unfocused and results in very polite comments. A system which periodically takes feedback quickly using a Likert scale whilst a course is going on might offer much more information. Adding questions about student confidence in particular topic areas would offer even more feedback about what a teacher needs to do next.

The 7 Cs which form the focus of the student feedback give clear indications of the areas of practice which need attention – much more purposeful indications than an NC level, test scores or final grades can give about what to do to help students learn better in your classroom. Better still, this data belongs to the teacher: it’s our data, collected about our teaching, and it can be used to inform our practice as well as give direction to our own professional development.

New History subject content and assessment objectives – first impressions.

The new GCSE objectives and content have arrived in our Twitter feeds, and a first glance might leave you thinking ‘what was all the fuss about?’. Look closely, though, and there are some interesting changes, some of which are welcome, others of which I’m reserving judgement on until it is clear how the exam boards react to them.

1. Scope of Study.

  • More British history;
  • no overlapping;
  • ‘SHP’ thematic study lives on; and
  • “Modern World” given a life-raft.

Many people will turn to the headline changes in content, and it is here that the changes are most obvious. British history is the big winner, as is a focus on studying different time periods. Under the old ‘subject criteria’ from 2007 (see link at the bottom) British history made up a minimum of 25% of the content of any history GCSE. Under the new requirements this rises to a minimum of 40%. The study of Britain must include ‘at least one’ depth study, described as ‘different aspects of an historical situation across a period of 25 and 50 years’, chosen from the Medieval, Early Modern or Modern periods, and which can also include studies of local sites, museums or galleries.

This strikes me as an exciting opportunity. There is lots of great British history that could be covered, and anything which encourages exam boards to cover interesting periods such as the Norman invasion, the Reformation or the Industrial Revolution cannot be anything other than a good thing. I am particularly pleased (yup) to see the Medieval period given as running from 500 to 1500 – such a lot of interesting things in there that could be covered (not just learning the word Heptarchy!).

My only reservation is the rule that there cannot be overlapping periods of time between the British and the wider world depth studies. I do not agree that this increases the level of challenge in the GCSE. Setting British history in the context of world history allows us to understand both at a deeper level. In practice, the sense of time and conceptual understanding that overlap affords more than outweigh the negligible double coverage in terms of raw ‘knowledge’ or events. However, this has been ruled out.

Someone much cleverer than me once said that the question with history curricula is not really ‘what do you put in?’ but ‘what do you leave out?’. With the new emphasis on British history naturally comes less emphasis on world history. The wider world study, which must also contain one depth study of between 25 and 50 years, from one of the periods mentioned earlier, will therefore count for a minimum of 25% of the final GCSE grade. It is clear that this study could focus ‘on different aspects of the history of one nation or group or on international relations’. I would imagine that studies about the causes of the Second World War, or the 1920s boom in America, could fit into either of those categories. So, studying the ‘modern world’ will survive, but perhaps in a narrower way.

Some light at the end of the tunnel for modern world teachers might actually come from the ‘thematic’ study, where ‘some overlap is likely’, with the stipulation that ‘people issues and events’ must differ significantly.  The same periods could be chosen for the thematic as one or other of the British or wider world studies.  So, we might imagine a thematic course which covered ‘the growth of democracy’, and which studied the development of the franchise in Great Britain, ending perhaps with Suffragism, or perhaps a thematic study of the development of the welfare state in Britain, whilst at the same time covering a ‘modern’ wider world study which covered the causes of the First World War, the failure of the League of Nations, or the causes of the Second World War.  I don’t think that these combinations would be against the spirit of the new requirements.

Since my PGCE (which, by the way was excellent and provided me with a very firm foundation on which to start my profession) when I taught a year 10 SHP course on ‘medicine through time’, I have really wanted to teach a thematic GCSE course.  I’m pleased that some of the ‘cowboys and indians’ rhetoric that has recently emerged has not meant the end of an approach which could have an ‘SHP’ thematic character.

2. Aims and outcomes.

  • knowledge;
  • judgement; and
  • history and citizenship.

I am also happy with the aims and outcomes put forward for history GCSE. As we have already seen, there is an upgrade for ‘knowledge’ about Britain, which leads to the aim that students should be ‘learning more about the history of Britain and the wider world’ – and it seems that this is where the requirement that periods should not overlap comes from.

However, the idea of ‘knowledge’ is widely drawn and wisely characterised, as students ‘deepening their understanding’ and ‘enabling them to think critically, weigh evidence, sift arguments, make informed decisions and develop perspective and judgement’. I find myself nodding in agreement as I read it, and I applaud the focus on forming judgement. I may be splitting hairs, but this is, I think, an improvement on the old requirement that students should reach ‘reasoned conclusions’. Judgement might not be a final decision – it could be contingent, which is closer to the way human knowledge is constructed than ‘coming to a conclusion’ suggests. The word judgement also suggests maturation, development and an interplay between history and the person studying it. Sometimes I feel that ‘concluding’ becomes ‘finishing off’ in the minds of students – forming a judgement has a different tone.

Finally, I think that the relationship between history and citizenship is well framed in this section of the document. History in school absolutely should ‘prepare [students] for a role as informed, thoughtful and active citizens’. I am glad to see, however, that the document is clear that this enabling happens through students being taught to ‘think critically, weigh evidence, sift arguments etc.’, and not through the transmission of any narrative of ‘Britishness’.

Other than that, the focus on interpretations remains, students are expected to develop the ability to ask questions of the past, and there is an emphasis on extending students’ knowledge of the ‘wide diversity of human experience’. There is really quite a lot to be pleased about here.

3. Assessment objectives.

  • knowledge; and
  • sources

The greater emphasis on recall and communication of knowledge is also reflected in an increase in the weighting this is given in AO1. Under the ‘old’ document AO1 counted for between 25 and 35% of the GCSE grade. The new requirement is that AO1 (the wording of which is only subtly changed – students will now show ‘understanding’ and not ‘their understanding’ of history, though much could be read into that change!) should count for between 30 and 40% of the marks.

The wording of AO2 has changed in the same way – students now have to ‘demonstrate understanding of the past through explanation and analysis of key concepts’, where previously they had to ‘demonstrate their understanding’. The new AO2 sees ‘similarity and difference’ added as key historical concepts, and the weighting for ‘understanding’ also increases from 25–35% in the previous document to 30–40%.

So, the real changes are at AO3, and these changes are also, mostly, welcome. The old wording of AO3 assessed the extent to which students could ‘understand, analyse and evaluate’ a ‘range of source material’ and ‘how aspects of the past have been interpreted’. The new wording has it that students should understand, analyse and make valid historical claims from ‘a range of source materials, including written historical sources whose precise provenance is given’ and ‘a range of representations and interpretations […] as part of a historical enquiry’.

Though the wording ‘as part of a historical enquiry’ was in both documents, I welcome the new focus on using sources to make historical claims. The old wording could lead to evaluation being ‘bolted on’, and obscure the point of sources being in an exam. Students (and some teachers!) might be heard referring to ‘doing sources’ when in fact they should be ‘doing history’.

I’m more puzzled by the reference to ‘precise provenance is given’, and perhaps a little worried. Teaching practice has come a long way from the ‘parlour game’ of sources A–E; students (and, again, some teachers!) have been steadily moved away from stock evaluation depending solely on provenance to looking at questions of context, tone, typicality, relevance and focus. I hope that ‘precise provenance’ does not lead us back in the wrong direction. As I type, however, I can think of a number of source questions in recent exams where more precise provenance would have been very useful for students. I’m thinking in particular of an OCR MW exam which used a picture of a family of sharecroppers, where the question was ‘why was this picture taken?’. More precise provenance in this question would perhaps have stopped it from being nothing more than an informed guessing game.

4. Controlled assessment is gone.

I know a lot of people will not be sad to see it go. Worries about the validity of coursework, and about cheating, have led to a great administrative burden being placed on schools, departments and teachers through controlled assessment. Still, I regret that students will not have the chance to research, edit and write a longer piece in the way that happens in the real world. Coursework also showed up the strengths of some students who struggled in exams. To that extent I wish that a way could be found to allow history GCSE to be partly assessed on coursework of some kind.

In conclusion… drawing together a judgement.

All in all, in terms of content I think this is a reasonable approach – there is lots to be excited about, in fact, and I await the exam boards’ responses eagerly. I’d be really interested to hear what others feel – it’d be great to hear what your dream GCSE specification would be!

Sources(!)

New ‘GCSE subject content and assessment objectives’ for history (2013)

Old ‘GCSE subject criteria for history’ (2007)

Assessing learning under the new National Curriculum

http://www.education.gov.uk/schools/teachingandlearning/curriculum/nationalcurriculum2014/a00225864/assessing-without-levels

Ofsted’s inspections will be informed by whatever pupil tracking data schools choose to keep. Schools will continue to benchmark their performance through statutory end of key stage assessments, including national curriculum tests. In the consultation on primary assessment and accountability, the department will consult on core principles for a school’s curriculum and assessment system.

I’ve never been a fan of levels; they are arbitrary and confusing. However, I really worry that the changes announced here will mean high-stakes testing in the style of Massachusetts. I could be wrong.