
Sunday, September 15, 2013

MANSW 2013 Presentation

A quick post for those wanting to see a copy of my presentation this morning at MANSW. Thanks to all who attended and gave such enthusiastic support - all the more so given it was 9AM on Sunday morning after a very late night conference dinner!

Three tools: The ABQuiz, the Tracking Sheet, the Feedback Form
September 15, 2013
MANSW 2013 Conference, Terrigal.

Google docs - free download:
https://docs.google.com/file/d/0ByVkChxwrC4DTUJfWnpEU21Hb00/edit?usp=sharing

Links to the Feedback Form tools:
Feedback Form template (Word doc)
Feedback Form analysis (Excel spreadsheet)


Saturday, October 20, 2012

The blue shark of full mastery

"Sir, does mastery count for more than the test mark?", asked one of my Year 7 students this week. I beamed back - "YES!"  Slowly but surely, I'm weaning this class off their scores ("You got 95%! I got 98%!" ... yes - it's a high achieving class :-) ) and focusing them on mastery.  Recently I have been making little mini-report cards which I staple onto the end-of-topic test paper:



My classes now have a symbolic language for achievement levels: the red dolphin stamp is for 'Not Demonstrated' and 'Starting Out', the orange seahorse is for 'Progressing', and the orange killer whale is for 'Mastery'. If you get Mastery for all the standards, you also get the blue shark.  I find the visual imagery helps focus on achievement of the standards.  And it doesn't just work for Year 7 - even my Year 12 students like the blue shark.

My goal with these mini report cards is to make the standards and the student's achievement of those standards prominent - the topic test score is there, but it doesn't dominate the feedback.  Why? Because even in this high achieving class, a score of 90% means there is something students can improve on - and I want to focus on that specific item. I try to write a helpful comment, focusing on the standards that need work and some ideas on how the student can advance that standard. While the students are looking at their test, I walk around the class and try to chat to every student about their achievement in terms of the standards and what we can do to raise them (that can be hard with 28 students in 30 minutes!).

These little report cards though reveal a deeper change in my approach to Standards Based Grading....

SBG: Where I'm at now
Time pressures have taken their toll on my loftier goals of a high-precision SBG implementation - and I have found I'm migrating closer to what Frank Noschese calls "Keep It Simple Standards Based Grading". Now that I have simplified the system, I find it is also clearer and more approachable for students.

Fewer standards per topic - especially for junior classes. My lists are still too long for senior classes - mainly because I am trying to cover all the syllabus points (there are a lot!).

Achievement levels: I'm happy with the language of my achievement levels "Not Demonstrated/Starting Out/Progressing/Mastery" - I believe they give clear and honest feedback without being discouraging - they don't say 'you failed' - they say 'you're not there yet'. I'm not comfortable with a simple Yes/No binary decision because I want the levels to support my goals for student motivation and engagement - to reinforce they are on a learning path - I want to recognise their 'progress so far'. A sheet full of 'No' results isn't going to encourage lower achieving students.

The role of quizzes: I have effectively stopped using quizzes for grades.  Woah - that's a big departure from the SBG ethos! Why? Because I believe that meeting a standard once in a quiz isn't enough: the student has to retain the standard. So for me, the end-of-topic test does matter. If a student could demonstrate a standard in quizzes during the topic, but can't demonstrate it at the end of the topic, I think there is a problem.  But I haven't abandoned quizzes - on the contrary, they are a key part of my formative assessment strategy. I still give regular quizzes and use the dolphins, seahorses, killer whales and blue sharks to give feedback during the topic. I do record the quiz results to help direct my teaching of the topic. But the difference is that quizzes taken during the teaching of a topic don't count toward grades. I save that for the end of the topic. If a during-topic quiz shows me a few students need help on a specific standard, I give them specific support. If I see many students need help on a specific standard, then I alter the teaching the next day and put this standard in the next quiz for the whole class. So I don't do repeat attempts on quizzes, and I don't try to juggle grades based on quizzes and quiz retries.

The role of topic tests: I use the topic test to decide the level of achievement for each standard and report this to students with their topic test mark. I do this by grouping test questions against standards - either explicitly in the test design, or by working backwards from a pre-existing test. This does mean marking takes longer, but it gives much more useful feedback than a single test score. The results should not be a big surprise, because the quizzes have been giving the student feedback along the way.  Retry attempts happen after the topic test: I give students the chance to improve their topic grade by taking quizzes or alternate tests for specific standards. That's how they can change their topic grade. In my grade book I have the topic test result (which stays constant), and an array of standards achievements which can be updated by retries.

Topic test result is recorded, along with initial end-of-topic achievement of standards.
I use red-orange-green traffic light indicators to quickly spot areas of concern.
Students can improve their standards results after the topic test by taking quizzes.
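The traffic-light view of the grade book is easy to mock up. Here's a minimal sketch; the mapping of achievement levels to colours and the standard names are my own assumptions for illustration, not my actual grade book:

```python
# Sketch of a traffic-light grade book row. The colour mapping and
# the example standards below are illustrative assumptions.
TRAFFIC_LIGHT = {
    "Not Demonstrated": "red",
    "Starting Out": "red",
    "Progressing": "orange",
    "Mastery": "green",
}

def concerns(gradebook_row):
    """Return the standards flagged red, for quick scanning."""
    return [std for std, level in gradebook_row.items()
            if TRAFFIC_LIGHT[level] == "red"]

row = {"Add fractions": "Mastery",
       "Simplify fractions": "Starting Out",
       "Order fractions": "Progressing"}
# concerns(row) flags "Simplify fractions" as the area of concern
```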
The final grade: I blend the topic test (a snapshot-in-time result) with the standards achievement levels (which students can change through post-test quizzes), giving more weighting to the standards indicators than to the topic test result. Why? Because I want students to have the opportunity to raise their grade through further effort. This reduces test anxiety and redirects the learning focus to mastery.
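As a sketch of that blending: the 30/70 weighting and the numeric values assigned to each achievement level below are my illustrative assumptions, not the actual numbers I use - the point is simply that a successful retry raises the final grade while the test score stays fixed.

```python
# Illustrative sketch of blending a fixed topic-test mark with
# updatable standards levels. Weights and level values are assumed.
LEVEL_SCORES = {
    "Not Demonstrated": 0.0,
    "Starting Out": 0.4,
    "Progressing": 0.7,
    "Mastery": 1.0,
}

def final_grade(test_percent, standards_levels, test_weight=0.3):
    """Weighted blend: the standards levels count more than the test."""
    avg = sum(LEVEL_SCORES[s] for s in standards_levels) / len(standards_levels)
    return test_weight * test_percent + (1 - test_weight) * avg * 100

# A retry lifting one standard from Progressing to Mastery raises
# the final grade; the topic test mark (80) never changes.
before = final_grade(80, ["Mastery", "Progressing", "Progressing"])
after = final_grade(80, ["Mastery", "Mastery", "Progressing"])
```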

And back to the blue shark .... stamps are fun - kids (and teachers!) love them. And when it comes to assessment, having a discussion about whether you got a seahorse, a killer whale or a blue shark - well it just takes some of the sting out of assessment and helps everyone realise the symbol or the grade isn't what's important: it's working towards mastery that counts.
Woot! A blue shark!
I get my animal stamps from www.allyoucanstamp.com 
A note on my constraints: I am the only teacher in my faculty using SBG - so I have to maintain the topic test results to allow for comparison across classes. With middle level classes, my grading system has to be consistent with other teachers' (since we rank across the cohort), so my grades have to come exclusively from the tests. Perhaps one day I might be able to convince my colleagues to allow retries for grading in these classes. For the senior years there are statutory regulations on assessment policy which sadly mandate high-stakes, single-attempt assessments. So for the higher level classes, I can only use SBG to guide my formative assessment.  My hope is that this translates into the summative assessment results.

Your thoughts? Have I oversimplified SBG? How could I improve this approach?

Thursday, January 26, 2012

SBG - A view from a rookie, one year later

It's almost a year now since I started using Standards Based Grading ideas in my teaching. Actually it's almost a year since I started teaching! So how did SBG work out for this rookie teacher?

Happy Birthday : 1 year using Standards Based Grading
Photo: Theresa Thomas (Creative Commons)
http://www.flickr.com/photos/theresasthompson/2311733808/

SBG in your first year of teaching
Hmm .. it's a bit ambitious. There are so many things to come to terms with as a new teacher, adding SBG to the mix might be pushing it - especially if you are the first teacher in your faculty to try it out. I took the plunge - probably a little foolishly - SBG seemed just so right for so many reasons. I did at least have the sense to try a full implementation with only one year group.  

Preparing SBG resources (quizzes, grading sheets) and the extra marking will take more time, something in very short supply in your first year, but it will help you develop some critical teaching skills. Creating and using a list of standards will ensure you know the content, help you focus on what is important to teach, and provide clear direction for your lesson planning. Using quizzes on a regular basis will give you continuous feedback on how your teaching efforts are translating into actual, measured student learning.  It's your first year - and you most definitely need the feedback - the sooner the better to help develop good habits. As a new teacher, you are likely still learning the basics of providing and receiving student feedback, so SBG will provide you a scaffold for these essential teaching skills. If you aren't using SBG, you will likely not find out until the end-of-topic test - too late to discover you were teaching it the wrong way.

Would I recommend SBG for new teachers in their first year? Cautiously - and only with support from your head teacher. Try it with only one class, carefully choosing a class that will not be overly testing your still nascent classroom management skills or content knowledge. If you don't have head teacher support or a class you feel confident with, then perhaps consider what I'm calling SBG-Lite for the first year. 

SBG-Lite
While faculty and legislative constraints prevented me using a full SBG implementation with all but two of my classes, I found that even in classes where I was not allowed to use SBG for end of semester grades, I used many SBG tools while teaching each topic. Even if you can't use SBG data to determine official school grades, the information gained about a student's learning aligned to your standards provides rich data to include in written comments in school reports and in conversations with students and their parents.

So what does SBG-Lite look like? You still have standards, you still have quizzes, you still receive and give regular feedback on student mastery of the standards. The only real difference is it doesn't directly translate into end of semester grades. It's up to you to convince students their efforts to achieve mastery of standards will ultimately translate into better grades and that tracking standards and repeat attempts on quizzes are worth the effort. It's a harder sell - but you believe it - don't you? While SBG-Lite most likely doesn't help improve motivation and engagement to the same degree as a full SBG implementation (because it lacks the element of direct student control over their marks), it remains a helpful addition to the learning environment.

Of course I would like to be able to use a full SBG implementation with more of my classes, but sometimes you just have to adapt with your current system - especially when you are the new kid on the block. Pushing too hard will just trigger a response to shut down SBG in those classes where you can use it.

SBG and the challenge of "Working Mathematically"
A serious problem in most mathematics assessment practices, and consequently in most mathematics teaching practices, is an overemphasis on procedural skills development, often at the cost of understanding, reasoning and problem solving. There is a risk that using SBG amplifies this focus on skills fluency - reducing learning mathematics to a list of skills to be evaluated. But it doesn't have to be that way - it depends on the standards you use. My solution was to label my standards with the "Working Mathematically" proficiencies (Understanding, Skills, Problem Solving and Reasoning). This helped me see the balance (or lack of it) in my learning outcomes as well as offering guidance as to the different types of  learning strategies which help students master the different types of standards.

SBG: What worked ...
Can I categorically say SBG improved the results of my class? No. But I can say the class did well in their final results - most students mastered most of the standards, as shown in end of year tests, and I do know we had a very positive class environment - so I stand by a 'did no harm' claim while suggesting SBG may have led to better outcomes.

Students responded well to SBG - in their feedback they indicated they appreciated having more control over their grades. Many students and even some parents remarked at the end of the year how student attitudes to mathematics had changed from one of fear and anxiety to one of feeling able to do the work - and even enjoy it! SBG definitely helped me develop my teaching skills, providing an anchor in the maelstrom that is the first year of teaching.

.. and what didn't work
Teaching for my first year was challenging - and that's an understatement. As the year progressed I found it harder to be consistent in my SBG implementation. In the rush to complete my teaching program, SBG sometimes fell by the wayside. I'm hoping with more experience this won't happen in 2012. 

In the middle of the year, I tried using SBG with my youngest class, fresh from primary (elementary) school, but I couldn't get the class in a 'learning about learning' space and I had to give up. I'm not sure if this was because SBG is too demanding for students of this age group, or if I need more experience working with younger students - possibly the latter. I'll find out this year because I'm trying SBG again with the same year level.

One of the central ideas of SBG - being able to reassess individual standards - started to slip, most likely because I didn't sufficiently promote the link between quizzes and the end of year mark - so as the year progressed, students stopped retaking quizzes. One solution would have been to use a web-based tool like ActiveGrade - unfortunately several factors conspired against this, not least an inflexible attitude from my school district network administrators who deemed the application unsafe for schools (go figure).

Three ideas for 2012:
  • SBG for the struggling, disengaged class: Like most mathematics faculties, we use so-called 'ability streaming' to allocate students to classes. As a consequence, we end up with entire classrooms of students with a well-established pattern of low mathematics achievement and high levels of disengagement, factors which typically get worse as the students progress through the school system.   Previously I was concerned SBG would be too hard to manage for this type of class, but now I realise we have nothing to lose by trying something different. I'm going to try SBG-Lite with one of these classes (unfortunately full SBG isn't an option due to faculty grading policy for this year level).
  • Provide better visibility of student standards achievement: At the close of each topic, I will provide each student and their parents with a view of their grade book, making it clear to them which quizzes they might wish to retake in order to improve their topic result.
  • Leverage SBG through student summary books. I have designed workbooks made of one or two standards per page, with lots of blank space under each standard where students can write their own summaries of what the standard means to them.
The verdict after the first year? SBG was a valuable and helpful tool - it's a core part of my teaching practice. I'm ramping it up for Year Two!

Wednesday, November 16, 2011

SBAR with light bulbs and spanners

So how has my experiment with Standards Based Assessment and Reporting been going? Here is the first in a series of reflections.

Half way into my first year using outcomes sheets with my students as the basis for Standards Based Assessment and Reporting, I realised the outcome lists I was making every few weeks were really just a list of skills I expected students to master. When the penny finally dropped that skills were only one part of Working Mathematically, I realised my outcomes sheets had to change.

Here is my first attempt to be different - which I've been doing now with my outcomes sheets for a few months:


The key idea is to separate outcomes into the categories of the Working Mathematically proficiencies:

  • A light bulb icon indicates an outcome that requires some new understanding of an important idea.
  • A spanner icon indicates a skill to be acquired - 'fluency' in the Australian Curriculum description.
  • A balance scale indicates that a reasoning process is being used.
How has using light bulbs and spanners changed teaching and learning in my classroom?
  • Clearly showing the understanding and reasoning outcomes forces me to focus on these important elements. If I find my outcomes sheet for a new topic is full of spanners and no light bulbs or balance scales, I know I've made an unbalanced teaching sequence.
  • It sends a clear message to the students that understanding and reasoning are important - it's not enough to just be able to mechanically follow a process to get an answer to an exercise. I will be expecting them to be able to explain and reason.
  • Any time during a lesson when I'm about to introduce or consolidate the development of an understanding outcome, I stop for a moment, and point to it on the outcomes sheet, making it very clear to students that we are working on a "light bulb" outcome. I emphasise this means it's a time for quality intellectual engagement: thinking, listening and asking questions. While we can get the idea behind a skill outcome by reading a text book, or perhaps watching Salman Khan do it on YouTube, the understanding outcomes are much better learnt interacting with peers and the teacher.
Where is Problem Solving?
The big challenge I'm facing now is how to integrate the problem solving proficiency into a set of outcomes related to a content heavy topic.  What this really reflects is the fact that real problem solving (beyond just "harder skills questions") isn't yet integrated into my content heavy program. For now, I'm experimenting with specific Problem Solving lessons which stand outside the regular content sequence - and that's something I'm going to work on in 2012.

You may notice the outcome sheet above doesn't make provision for recording quiz results - which you would normally see on my sheets. That's because for this course I'm actually not permitted to use SBG, but have to follow a statewide assessment method and schedule. But this doesn't stop me using the idea of standards, or using them for formative assessment. More on this in the next few posts.

Monday, July 11, 2011

SBG: Taking the blinders off your horse!

Got too much content to get through in your course? It's a race you know! A race to complete the prescribed material. And how do you keep a horse running the race? How do you prevent it from being distracted by unpleasant things happening to other horses? Easy: put on some blinkers - or better yet - some real full scale blinders:


A teacher using the traditional "end of the topic test" assessment method can, if they choose, run as fast as the program says without too much distraction. And the students won't be too distracted either - they get their test results maybe once a month or two - perhaps an unpleasant day that reaffirms what they can't do - but no fear - we keep on racing to the end. 

Enter Standards Based Grading - or indeed any form of continuous assessment - now both the teacher and the students are running that race without blinders. It is impossible for the teacher to avoid seeing if the race is falling apart - if a significant number of students are falling further and further behind. And for those students struggling - if you choose to maintain a pace too fast for them - they are getting constant feedback that they are not keeping up the pace.  The blinders are off - everyone can see what is happening, all the time. To make it more interesting, the SBG version of continuous assessment encourages turning your horse back to rerun the part of the track you couldn't handle. So with SBG, you just don't have the choice to keep running ahead - certainly not with junior classes, where students have yet to fully develop the learning skills and confidence to take full control of their academic progress in their own time outside class.

And that's the problem and the joy of SBG. It will disrupt your teaching program. If the race is going too fast (and it seems it always is - just too much content in our programs), SBG will stop you in your tracks - forcing you and your students to stay with the standards being worked on until you are happy that a satisfactory level of mastery has been reached by enough students.  It's going to get even messier when you find some in the class have mastered the current set of standards and are ready to keep moving, but another part of the class has only just left the starting block.  Or some haven't even entered this race - because they never mastered the material from last year ... or even the year before that! So SBG will not only delay your program, it will also force you to work out how you are going to cope with the spread: how can you differentiate so your strongest students are able to keep running, while providing support so others don't give up the race?

Call me naive, but I'm of the view it is better to get through half or three-quarters of the program with students fully mastering the content they did cover, rather than ticking a box to say the program was completed on time, and ... oh .. too bad the class average test result was 60% (we won't ask about the spread!) and that many of them reinforced their negative views on mathematics and low self esteem in the process. Of course this approach is not possible with some courses. In senior courses for example, with a sequence and pacing strictly prescribed by state education authorities, you just have to stay on the schedule - the race will go on regardless. For courses where the teacher has more flexibility to adapt to the class needs, the question of how many students in difficulty constitutes a significant enough number to justify changing pace, and what they should be expected to do in their own time is a professional judgement. And that's a hard one for a new teacher!

Blinders help with compliance!

While searching for images of horse blinders, I was amazed to discover an article about a horse called "In Compliance" and how putting blinders on the horse helped it win races! No kidding. Mind you - even this horse met its limits - blinders or no blinders - it just couldn't jump the 3m hurdle.

Tuesday, June 21, 2011

Standards Based Grading: the parent dividend

Tonight was parent-teacher night at my school for Year 8 - the year group I'm trying Standards Based Grading with. On a hunch, I brought some copies of the quizzes we do and the little SBG Grid I hand out to all my students to stick in their exercise books when we start each new topic:

My SBG Outcomes grid for a topic. Students mark when they are present in class for the main lesson covering each outcome ("I was here"), text book references are given ("Book"), a column to mark completion of exercises, and columns Q1, Q2, Q3 to record grades in each quiz for each outcome. Each quiz normally tests two or three outcomes. The Toblerone bars alert students we will be having a very special lesson on exploring volume of prisms - I'm expecting zero absence that week.

Each time I showed this grid during our parent-teacher interviews,  the benefits of this system for parents became clearer and clearer to me.

Here are some classic parent concerns - and how an outcomes tracking system helps address them:

  • "How can I monitor how my child is doing?"
    It's easy! Open your child's exercise book, find the page with the grid - you can see exactly where we are up to, what marks they are getting for each outcome. 
  • "How can I provide more help to my child?"
    Find their outcome grid, see where they are having difficulty - look at the book reference, ask for more resources specifically relating to that topic.
  • "How can I get more homework?"
    Look at the book reference to find more exercises for that outcome, or ask for extra quiz sheets.
  • "How can I help my child revise for the end of year test?"
    Find the grid for each topic, now you have a list of all the outcomes for that topic.

One parent greeted me with the words: "I hear you have been doing continuous assessment" - I nearly fell off my chair in surprise at the jargon before I realised this parent was also a teacher! We laughed - and I was so grateful for her explaining to me in such simple terms what I was really doing with my system. Standards Based or not - it almost (*) doesn't matter what assessment approach I'm taking - the most vital aspect is that it's continuous feedback. Not waiting until the end of the topic or the end of the term to find out what is going on. Having students regularly update their outcomes grid while the topic is being learned gives their parents a direct window into their child's learning and specific information on where and how they can help.

Looking back on it, I never really thought about parents when doing SBG (apart from being concerned how they might react to the idea) - I always saw it in terms of the value for students and teachers. But now I see how SBG can help parent-child and parent-teacher learning conversations. The benefits for parents are so strong, I'm accelerating my push to do SBG with another year group - to have the system in place before it's time to meet their parents!


* Using SBG actually does matter - because with SBG, when a parent sees their child did not reach an outcome, something can be done about it: the parent can work with or encourage the child to master the outcome and then retake the quiz, and thus improve the grade mark for that outcome. With SBG, outcomes can be re-attempted many times, and each better attempt upgrades the recorded score.
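The retry rule in this footnote amounts to "the recorded grade only ever moves up". A tiny sketch; the grade symbols and their ordering below are my assumptions for illustration:

```python
# Sketch of the re-attempt rule: keep the best grade across attempts,
# so a weaker retry never lowers a recorded outcome. The grade ladder
# below is an assumption for illustration.
GRADE_ORDER = ["C", "B", "B+", "A"]

def record_attempt(current, new):
    """Return whichever grade ranks higher on the ladder."""
    return max(current, new, key=GRADE_ORDER.index)

grade = "C"
grade = record_attempt(grade, "B+")  # retry improves the recorded grade
grade = record_attempt(grade, "B")   # a weaker attempt changes nothing
```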

Sunday, June 5, 2011

But Sir, the person next to me has a different quiz!

There's a special moment that happens just once each year with each of my classes. It's the first time we are doing a written quiz on the first week's work. I settle the class down, and innocently hand out the quiz paper.  After a few minutes of working quietly, when students are in theory working on their own, someone always calls out: "But Sir, the person next to me has a different quiz!". One or two students gasp - oops! - as they frantically check their neighbour's paper just that little more closely.  I chuckle, "Yes - they are different, because I'm keen to see what you know."  Usually someone else says "But don't you trust us?", to which I chuckle a little more, and say "of course I do" - and leave it at that. On more than one occasion, the students have cheered and clapped - I think they are impressed that I took the effort to make different quizzes, and that I'm not an idiot.  

What the students don't realise (the first time) is that there are actually only two versions - I stacked the papers and handed them out so alternate students received alternate papers. The differences are minor - I'm careful to ensure they have the same level of cognitive and mathematical difficulty. If this sounds like too much hard work for the teacher, see the Practicalities section below on how to do this without too much extra pain!

Here is what my quizzes look like:


and this is what the (B) version looked like:


Once the dust has settled down, and the class gets used to the idea, we can then do some great things with having multiple versions of the quiz:
  • Pair Marking: Students swap their papers, and then do a whole class activity working through the answers. Each student marks their partner's paper, assigning a grade for each section. They have to analyse and discuss with their partner if any errors were due to minor mistakes, carelessness, or if the student really had no idea. Because there are two versions of the quiz, each student ends up doing the quiz twice and they end up discussing the answers with each other.  They love "playing the teacher", giving each other grades and writing "teacher style" comments.  Can I trust the students to do the right thing? So far I haven't seen anyone do the wrong thing - they are so keen to mark. And if they cheat - well - it means they had to explain it to each other, and it will show up in the next quiz when I don't do pair marking.  After they have swapped the papers back, I give them five minutes and then collect the papers for myself to check the marking was done correctly.
    Note: This Pair Marking activity can take a whole 50 minute lesson - so I don't do it for every quiz.
  • Take home version: You can give the alternate quiz to each student as a practice paper or for homework.
  • Second attempts: If you use a system like Standards Based Grading where students are encouraged to re-attempt assessments, you have a spare unused quiz ready to use! In fact, I usually make four versions of each quiz - I hand out the (A) and (B) in class, and have (C) and (D) in reserve for reattempts.
  • Topic Revision: I make the full set of the quiz (all the versions), with solutions for one version, available on the class online Edmodo group. This makes it available for topic revision prior to the end of topic test.
Some more benefits of having multiple versions of the quizzes:
  • I don't have to impose super-strict test conditions on the class while doing the quiz as the opportunity for cheating is greatly reduced. Since we are doing quizzes regularly, I don't want the class to feel like they are constantly being tested. 
  • If I sense students are experiencing difficulty with the quiz, I actually let some of the pairs help each other - especially if one pair member is much stronger than the other. I know they will be learning and explaining - they won't be just copying because that's not possible.
  • I encourage students to write "I don't know" if they can't do a question, or to write "I'm not sure", "I think this is the answer".
For those of you interested in Standards Based Grading:
  • A simple grading rubric applies to each section - as opposed to the whole quiz. The quiz above tested three outcomes, hence the three circles for recording each grade.
  • A student experiencing difficulty can still end up with a quiz result that has a "B+" or an "A" for one section, and a "C" for another. Usually I design the quiz so that the first outcome tested is one tested in the last quiz - increasing the chance for success.
  • This system immediately gives me a set of quizzes for reattempts. I allow students to re-test using the online Edmodo version and bring me their completed paper.
  • Pair Marking increases student understanding of the grading method. Unfortunately this can take a whole lesson, so I don't do Pair Marking all the time.
  • When I hand back the quizzes, I ask students to record their marks in their outcomes sheet (this sheet is glued in their exercise book and tracks all their attempts and outcomes).
  • I encourage students to retake the quiz until they get an "A" for all outcomes. If they end up with the odd "B+" on their second or third attempt, I reassure them that's fine for now (else they burn out). If they get an "A" on the topic test for that outcome, I bump up the corresponding quiz grade.
  • If an outcome needs more work, I will re-test it in the next quiz. Students usually greatly improve on this whole-class second attempt.
Practicalities
Isn't this all extra work? Surprisingly, it turns out to be much much easier than you might think. 
  • Making extra quizzes: If you have time to make one quiz, making the extra versions isn't much extra effort. So long as your quiz is made in a digital tool, it's as simple as copy-and-paste and changing a few numbers and letters. Well - it is for middle school mathematics :-) It is a little more work for senior school mathematics, and I imagine harder for other subjects.
  • Stacking the papers: Print off your (A) and (B) paper. Put the two sheets into the photocopier at the same time, set the copy count to half the number of students in your class plus some extras. You now have a pile of alternating (A) and (B) papers ready to hand out. Just make sure you hand them out pair-by-pair.
  • Marking: When you collect the quizzes back, sort them into (A) and (B) piles. Find the best paper in each pile and use it as your marking guide. Because the quizzes are short and sweet, the marking is fast.
  • Yikes - so many more resources to make: Yep - that part is tough. As a new teacher, I'm gradually developing quiz sets for topics and classes. It is too much to do for all classes for all topics at once. But I know that in a year or two I will have a full set of powerful tools!

Important update June 10, 2011:  Some recent anonymous class feedback on the quizzes shows very strong support among the class for the use of regular quizzes and pair-marking. However, one student did express concern about other students seeing their work - so some sensitivity is required, and options for these students may be necessary. It is probably also good to explain to students why you are doing pair-marking and to set some guidelines on respecting each other's learning and privacy.

Wednesday, April 13, 2011

Explaining Standards Based Grading to Students and Parents

A little something to give back to the wonderful SBG community on the web:  my version of a letter explaining "the how" and more importantly, "the why" of my SBG implementation.
  
I've adapted sentences from similar letters shared by other teachers using SBG (apologies - I mixed up the material to the point I can't identify individual authors any more!), and then extended for my own context. I also felt I should add some FAQ style paragraphs at the end of the document to help address any concerns. When I wrote this letter I very carefully avoided using the SBG terminology or any sense this was something to make a fuss about. I think it's important to reassure people this is a normal way to work with a class - the only thing that is different is that the final topic test result is no longer the main form of grading and that students can do repeat attempts to improve their grades.  Amazing how such a small change in the way of thinking about assessment can have such a massive impact on teaching and learning!

SBG Explained

Monday, April 11, 2011

Standards Based Grading: 11 weeks later

It's been eleven weeks now since I started using Standards Based Grading with my Year 8 class. It was tough going at times, not because of SBG, but rather because I'm a new teacher, still working out how to manage five classes, teaching topics for the first time while at the same time experimenting with SBG and trying not to confuse my Year 8 students. In hindsight I probably bit off more than I could chew!

Did SBG work as hoped for? I'm very happy with the results so far. SBG certainly 'did no harm' to student test results in the most recently completed topic. The mean topic test score was 80%, with a standard deviation of 13%. Based on student feedback and my observations, I believe SBG helped students gain these results - although I can't actually prove it.  Looking at the actual SBG grading I can say that 12 out of 24 students achieved full mastery on all eight outcomes for the topic, and 22 of 24 obtained at least a B+ for every outcome, with almost every student who did not get all As on a quiz coming back to do repeat attempts. Students regularly told me how much they liked doing assessment this way and often told me how they "finally got it" after doing the quiz a second or even a third time.

Test anxiety appeared to be greatly reduced. I was able to constantly reassure the more anxious students that they had control over 80% of their mark by doing repeats on the quizzes, and that if they did that, the test - worth only 20% - would take care of itself. When I did give test results back, we were able to discuss specific strengths and weaknesses in their test results in terms of the outcome grid.

SBG helped me with my teaching: I received continuous feedback throughout the topic as to what students could and could not do, and which of my lessons were actually working. In one case I realised the last two lessons I had taught had failed to help students meet an outcome - so I knew I was teaching it the wrong way for them. I tried a different approach and suddenly the quiz results for that outcome soared. Without SBG I could have had a nasty surprise at topic test time.

The key SBG components I used were: an outcomes grid; a sequence of diagnostic quizzes made in multiple versions (slight variations allowing quick production of multiple tests for repeat attempts); student pair-marking and self-tracking to emphasise student control and focus on outcomes; and linking the traditional topic test to the outcomes grid. However, I did fall behind one week in the program because of my inexperience, and I've learnt I need to explicitly plan the SBG components ahead of time and fit them into the program - hopefully in a way that allows the SBG activities to double up as learning activities. I think it will be smoother next topic.

Was there a discrepancy between end of topic test results and the quiz marks? In most cases, student quiz marks matched their performance in the corresponding outcomes in the topic test. However, for some students there were differences. As I marked the tests, I compared each student's quiz outcomes to how they did in the test on that outcome - this was time consuming, but I think worth it. An idea that came about while marking the test, which may not be in the spirit of SBG, was to modify the quiz marks where I saw a major difference from the corresponding test outcome. If the student did better in the test for an outcome, I raised their quiz result to match what they did in the test - since they had clearly mastered the outcome, and I think it's fair to view the test as another quiz. If the student did worse in the test for an outcome and clearly demonstrated they no longer met the outcome, I reduced their quiz mark to a B or a C. This didn't happen for many students, and in the rare cases I made these adjustments, I later explained to them what I had done and offered them the chance to repeat the corresponding quiz - thus giving them the chance to get back a B+ or an A for the outcome - and revise the skill.
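
For those who like the logic spelled out, the reconciliation rule boils down to a few lines of code. This is a sketch only - the grade ladder matches my rubric, but the function and its names are purely for illustration, not something I actually run:

```python
# Rough sketch of reconciling a quiz grade with the matching topic-test
# outcome grade. The grade ladder is from my rubric; everything else here
# is illustrative.
GRADE_ORDER = ["C", "B", "B+", "A"]  # lowest to highest

def reconcile(quiz_grade, test_grade):
    """Return the adjusted quiz grade for one outcome."""
    q = GRADE_ORDER.index(quiz_grade)
    t = GRADE_ORDER.index(test_grade)
    if t > q:
        # Better in the test: treat the test as just another quiz attempt
        return test_grade
    if test_grade == "C":
        # Outcome clearly no longer demonstrated: pull the grade back,
        # and offer a repeat attempt to earn it back
        return min(quiz_grade, "B", key=GRADE_ORDER.index)
    return quiz_grade  # otherwise leave the quiz grade alone

print(reconcile("B", "A"))   # raised to "A" to match the test
print(reconcile("A", "C"))   # pulled back to "B" pending a repeat
```

Real life is messier than four rules, of course - the point is that the adjustment is mechanical, and easy to explain to the student afterwards.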

Will I do SBG next term with Year 8? Absolutely!  Will I try it with another year group? Not just yet as I'm working through other first-year-teacher challenges.


More details for those really interested in the nitty-gritty:

How did I implement SBG?
  • Use of an Outcomes Grid:  I gave each student a sheet like this to glue in their exercise books:

  • Use of a letter based grading scheme.
    • A - Fully mastered - perfect answers
    • B+ - Almost fully mastered - minor improvements or refinement needed
    • B - Developing - at least some skill/understanding demonstrated
    • C - Not demonstrated - doesn't look like this was understood.
      Note well: I never told a student they couldn't do, or didn't know the outcome - I just said they hadn't demonstrated it. A subtle difference, but a huge difference for self-esteem.
  • Use of a weekly diagnostic quiz in two versions. Most weeks I prepared a short quiz that tested two to four of the outcomes. I prepared two versions of essentially the same quiz, (A) and (B).  On the first attempt for each quiz I handed out papers (A) and (B) to alternate students. This helped ensure I was assessing the student's skills and not their neighbour's, and made sure I always had a second quiz available.  Students were encouraged to just write "IDK - I don't know" if they really didn't know how to do a question and move on to the next question. The goal was to get through the quiz without too much stress or time wasted. Here is how the quiz looked for the first four outcomes:
  • Percent Diag 01
  • Student pair-marking of papers: Unless I was really strapped for time, I asked the students to swap papers and mark the quiz. The specific instruction was "look at the colour pen your neighbour used - now please use a different colour pen to mark them". We then worked through the quiz, with me asking random students to help solve the (A) version of the problem on the board.  If I found a student who could not do the (A) version of problem, we worked through it, then I asked them to work out the (B) version answer for the class. The students graded their neighbour's paper, and wrote letter grades for each outcome at the top of the paper.
  • Quiz results guided my lesson planning: I collected the papers, looked for student misconceptions, checked the student-allocated marks and recorded the results. Based on the overall class results I then made a call as to whether we could move on to the next outcomes, or whether more whole-class work was required. Depending on that call, I might make (C) and (D) versions of the quiz for a later lesson, or I made these papers available for any students wanting to repeat the quiz.
  • Repeating Quizzes: I didn't impose any strict procedure for doing repeat attempts - I played it by ear. I had a stack of (A) and (B) papers available at any time - I gave the student the version they hadn't done for their second try. If need be I made (C) and (D).
  • Student continuous self-monitoring of outcomes: I handed back the quizzes after checking and gave the students (mostly) clear instructions on how to update their outcomes sheet with their marks. The idea was to encourage them to track their own learning.
  • Follow up material made available: I put copies of the (A) and (B) versions of the quizzes with worked solutions on the class Edmodo, and sometimes prepared specific revision material for those students needing more support.
  • Rinse and Repeat: We repeated the process until we completed the topic.
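
For the programmers out there, the "move on or re-teach?" call after marking a quiz could be sketched something like this. A sketch only - the 80% threshold is illustrative; in practice it's a judgement call, not a formula:

```python
# Rough sketch of the "move on or re-teach?" decision after a quiz.
# The 80% threshold is purely illustrative.
MASTERY = {"B+", "A"}  # grades I'd count as (near-)mastery

def move_on(class_grades, threshold=0.8):
    """True if enough of the class is at B+ or better for the outcome."""
    mastered = sum(1 for grade in class_grades if grade in MASTERY)
    return mastered / len(class_grades) >= threshold

grades = ["A", "B+", "A", "B", "A", "B+", "C", "A", "B+", "A"]
print(move_on(grades))  # 8 of 10 at B+ or better -> True
```
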
Sheesh - I'm exhausted just writing this out ... but it's harder to explain than to do .. mostly.

Lessons learned
  • It's hard work to implement SBG the first time you are teaching the topic - there is a lot of material to prepare for SBG. Probably a bit crazy to do this in the first year of teaching, but hopefully most of the resources built can be reused the next year.
  • Next term I will set a regular period for doing the quizzes, and a regular weekly recess period for doing repeat attempts.
  • For this last topic I used a premade test which didn't sufficiently test one item on my outcome grid, so next time I will take more care to ensure full coverage.

Saturday, March 5, 2011

Is SBG possible in your school system?

It's been a while between posts - the life of a first-year teacher is certainly proving to be hard work, and the effects of sleep deprivation are beginning to kick in. Unfortunately, I think SBG is making it even more work - but I'm hoping it will pay off for my students, and that it will get easier when I teach the same topics next year. Before I write about my initial experience of SBG, I want to discuss the first challenge I faced: would it be possible, or even permitted, for me to try out SBG?

So, is SBG possible in my school system? Unfortunately the short answer is SBG options are limited in the NSW (Australia) secondary school system. Two key factors act as brakes on implementing SBG: state mandated assessment protocols and linked assessment across classes. I suspect teachers across many different school systems will be facing similar challenges.

State mandated assessment protocols block alternative approaches
The assessment process for senior years (Stage 6 Prelim/HSC courses) is strictly defined by the NSW Board of Studies, mandating a fixed number of summative assessments over a two year period.

While there is flexibility in the nature of the assessment, this specification precludes an SBG-style "continuous assessment with retry attempts permitted" approach. Similarly, for the Year 10 School Certificate course, assessment procedures are clearly defined. So for Year 10, 11 and 12 classes in my state jurisdiction, SBG really isn't an option if you want SBG to actually play a part in determining student grades (which is the whole point!). Fight the battles you can win - and this isn't one of them!

Linked assessment across classes makes change harder
What about junior secondary school years?  Despite the intent of the syllabus designers, every school I have seen has implemented the NSW Board of Studies mathematics syllabus "three pathways design" as three separate ability-graded "Standard, Intermediate, Advanced" streams.  By Year 9, when the pathways design really affects lesson programming, mathematics faculties align their classes to a pathway and then quite reasonably use a common assessment protocol for each pathway.


The net effect is to create groups of assessment-linked classes - ruling out a change to assessment procedure unless it is done for all classes in the same group. So that rules out going solo on SBG with my Year 9 class, because it would be impossible to equate my grades with those of other Year 9 classes doing the same syllabus pathway.

Room at the bottom!
Which leaves Year 7 and 8. Fortunately at my school, each Year 7 and Year 8 class is assessed separately. While common tests are used, student grades are not linked - students are not ranked across the year group - so each teacher can implement their own assessment process. Which means I have the option to try out SBG with these classes. Since I'm new and green - the sane approach is to try with only one class - and with the support of my Head Teacher, Year 8 was my choice for SBG.

Should you abandon testing?
So can I forget about tests and do SBG as I wish with my Year 8? No - not really. These students will still have to do a grading test at the end of the year to determine their Year 9 syllabus pathway - and it would be irresponsible of me not to prepare them for it. Also - since I am new, and SBG is new - I do want a barometer to tell me how we are travelling - so I've decided to keep the topic tests for Year 8. My compromise is to reduce the test grade value compared to the SBG grade value. My students will hopefully get the benefits of SBG, while the adults who want test results can look at those. I'm kind of trying to have my cake and eat it too: have the benefits of SBG while still conditioning the class to tests and generating data so I can measure the effectiveness of the SBG approach.
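
In code terms, the compromise is just a weighted average. A sketch only - the 80/20 split here is illustrative (it's the weighting I use with Year 8), and the names are made up:

```python
# Sketch of the "have my cake and eat it too" compromise: SBG quiz grades
# carry most of the topic mark, the traditional test still counts.
# The 80/20 weighting and all numbers are illustrative.
def final_mark(sbg_component, test_component, sbg_weight=0.8):
    """Weighted topic mark: the SBG component dominates."""
    return round(sbg_weight * sbg_component
                 + (1 - sbg_weight) * test_component, 1)

print(final_mark(90, 70))  # 0.8 * 90 + 0.2 * 70 = 86.0
```
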

It's interesting to observe how a few key syllabus and assessment rules at the top of the system - even just in the final years - percolate all the way down to the early years. And where do those rules come from? That's another story...

What barriers have you encountered as you consider changing assessment methods for your class? Were you able to find a way to make changes? In my next post I will describe the initial SBG experiences in my Year 8 class, and then consider some of the politics of introducing SBG.

Sunday, January 30, 2011

Standards Based Grading meets Marzano

Robert Marzano has been reviewing education research for decades and has summarised his findings in a practical and concise teacher guide, "The Art and Science of Teaching" (Marzano, 2007). So let's put our Standards Based Grading [SBG] hat on, and consider the key ideas from Marzano's first chapter, which asks the question:  "What will I do to establish and communicate learning goals, track student progress and celebrate success?"




The following ideas read almost straight out of the SBG credo:

Distinguish between learning goals and learning activities: It's important not to confuse learning goals with learning activities. A learning goal typically is stated in the form "Students will understand ____________ and be able to ____________".   Following Marzano, it is best practice to build our SBG standards on goals - not activities.  (Action Step#1, p. 17)

Write a rubric or scale for each learning goal: Marzano recommends a finely grained scale be defined for each learning goal. While he says a simple one is sufficient, which appears to be the way most SBG teachers are working, a finely grained scale provides more value. I'm not sure though if this is feasible when there are many standards for a topic - it may well overload the student and the teacher. (Action Step#2, p.19)

Assess students using a formative approach: Don't wait until the end of the unit - assess as you go. Yep - that's SBG. (Action Step#4, p.24)

Have students chart their progress on each learning goal: Well if this doesn't just scream SBG at you, I don't know what does. Marzano emphasises the importance of students evaluating their progress on each goal, as well as setting achievement targets and strategies. An implied caution for teachers using automated software for their SBG tracking is to ensure students engage with the data in a meaningful way - there is a risk the student could just look at a pre-generated graph and move on.  Marzano provides a proforma showing how a student can extend their chart into an active learning plan. (Action Step#5, p25)

Recognise and celebrate growth: Use the feedback about progress in SBG goals to show student growth - not just absolute achievements. (Action Step#6)

Marzano also offers a suggestion which can be used to extend SBG:

Have students identify their own learning goals: I love this idea! Extend the standards to allow each student to set an additional personal learning goal for what interests them, or what they would like to achieve for the current topic. While tracking student progress for their unique goal may add complexity, it seems like a valuable idea. A suggestion for those using Active Grade or some other automated system (Excel anyone?): make a standard called "Student Selected Standard" which can be marked off for all students - then a separate table somewhere to record what those selections were and how the student and teacher agreed it would be assessed. What a powerful meta-cognitive strategy! (Action Step#3, p.23)

All up - SBG as currently formulated matches 5 out of 6 of Marzano's recommendations in his first chapter; indeed, Action Step#5 is arguably a prescription for SBG. Marzano is highly influential in the US education scene (and beyond) - so if people are asking you "Why are you doing SBG? Where is your evidence?" point them at Marzano's work and make the links to SBG. (Bonus: Amazon has the book on sale for $10.11!)

In concluding this tour of four different approaches to effective teaching, it is remarkable to observe all four approaches recommend similar, or at least complementary, strategies - many of which are at the core of the SBG idea. The Circle of Courage shows how SBG can contribute to psychosocial growth through mastery and independence, while providing opportunities to develop a sense of belonging and to practice generosity. Andrew Martin's work shows us how SBG can help build academic resilience and combat fear of failure by demonstrating to students how effort, strategy and attitude are the basis for improved performance and mastery. John Hattie's Visible Learning shows how the feedback provided by SBG can be used as a powerful tool for improving teaching practice - that SBG is as much a tool for transforming teaching as it is for transforming learning. And finally, Marzano reaffirms we are on the right track, and suggests extending our standards to include student selected standards.

A closing thought from Marzano about evidence:
It is certainly true that research provides us with a guidance as to the nature of effective teaching, and yet I strongly believe that there is not (nor will there ever be) a formula for effective teaching. [...] The best research can do is tell us which strategies have a good chance (i.e. high probability) of working well with students. Individual classroom teachers must determine which strategies to employ with the right students at the right time. In effect, a good part of effective teaching is an art. (pp. 4-5)


Where is the evidence for SBG? All around us!

Wednesday, January 26, 2011

Standard Based Grading : Show me the evidence!

One of my university tutors, Dr Nigel Goodwin, would regularly have us recite a catechism whenever we discussed ideas in education:
Show me the evidence!
And what is our currency for measuring the evidence? [yell it!]... 
Improved student outcomes! 
Forget the talk, the opinions, the fads: show me the evidence of improved student outcomes. Careful though -  this does not mean just using standardised test scores based on the 3R's! We mean peer-reviewed evidence that we can have confidence in.

In looking for evidence relating to SBG as currently formulated and being trialled by science and maths teachers, part of the challenge is the newness of this formulation. So as a proxy for evidence about SBG, I'm looking at evidence of effective teaching practice and seeing if there are connections to SBG.  And when it comes to evidence, there is a special place in evidence heaven for John Hattie.


Hattie is famous for his meta-analyses - a technique that allowed him to review student outcome data for 80 million students, as reported in 50,000 peer-reviewed studies. Using the concept of effect size, Hattie produced a list to answer the question ‘What has the greatest influence on student learning?'. Even more helpful than just presenting data, in his book Visible Learning (Hattie, 2009) he synthesises these results into an explanatory theory.

John Hattie's Visible Learning in a nutshell:
Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, when there are active, passionate and engaging people (teacher, student, peers) participating in the act of learning. It is teachers seeing learning through the eyes of students, and students seeing teaching as the key to their ongoing learning. The remarkable feature of the evidence is that the biggest effects on student learning occur when teachers become learners of their own teaching, and when students become their own teachers. When students become their own teachers they exhibit the self-regulatory attributes that seem most desirable for learning (self-monitoring, self-evaluation, self-assessment, self-teaching). (Hattie, 2009, p22)
How does this relate to SBG? And how might it suggest we extend our thinking about SBG? Some of the more obvious connections are worth stating:

Explicit goals: SBG is all about explicit goals.

Appropriately challenging goals: SBG allows teachers and students to decide - based on previous results and current intentions - what an appropriate goal for each standard is. They aren't pie-in-the-sky goals, they don't relate to abstract numerical grades - they are focused: "I would like to get Proficient in 'Can factorise a quadratic'" - and they allow us to agree that perhaps this student should aim for Expert, not Proficient.

Deliberate practice ... : SBG allows us to make the link between focused effort and mastery. A caution we need to bear in mind with SBG is that by allowing students to sense "I'm done - I've mastered that skill" - we may undermine deliberate practice. Is this any riskier than traditional grading practices? Probably not - but it's something to consider. How can we use our new SBG tool to encourage ongoing, deliberate practice?

... focused on mastery learning :  Yep - that's SBG!

Feedback is given and sought : The major strength of SBG. As we implement SBG, it's important we encourage students to actively engage with the feedback provided by SBG. (Effect size: 0.73 - 1,287 studies).  And even more importantly, that we respond to the feedback. Hattie writes
 "it is only when I discovered that feedback was most powerful when it is from student to teacher that I started to understand it better. When teachers seek ... feedback from students as to what students know, what they understand, where they make errors ... then teaching and learning can be synchronised and powerful. Feedback to teachers makes learning visible" (p.173).
So the most powerful feature of SBG may well relate to Hattie's observation that the biggest effects on student learning occur "when teachers become learners of their own teaching" (p.22). In other words - if we have the perspective that SBG is a tool that allows us to continuously evaluate our teaching - we transform SBG from just being another grading system into a powerful tool for monitoring and adapting our teaching.

So rather than "Hmm.. Johnny still hasn't reached proficiency on 'solving right triangles for the hypotenuse' - what can he do?" we say "Hmm.. I taught this concept to Johnny for three lessons - what can I change?". And that's where the money comes in!

With apologies to "Jerry Maguire"
Next in this series: looking at the work of Robert Marzano. The evidence considered in this post relates to student outcomes - for an SBG perspective on student motivation and engagement see the earlier post looking at the work of Andrew Martin.   SBG is also considered in relation to a developmental framework called The Circle of Courage.

Saturday, January 22, 2011

Standard Based Grading : helping eliminate academic fear and failure

Following on from the discussion of SBG and the Circle of Courage, I would like to consider SBG in relation to the work of Andrew Martin - a leading education psychology researcher on student motivation and engagement. Here's my interpretation in stick figures (with apologies!!) of Andrew Martin's most recent book Building Classroom Success: Eliminating Academic Fear and Failure. While the book doesn't have stick figures, it's extremely well written and a great asset for teachers.

Students build a view of themselves that works like this:

As a consequence, students build many clever but often maladaptive behaviours to protect their self-esteem. So for example: I only got 20% because last night I was playing games on the internet instead of studying - but really I could do it if I wanted to - it's not about my competence. In fact, the student is terrified they will fail, and so to protect their self-esteem, they make sure they play that game all night. We need to help students break the tight linkage they make between performance, competence and self-esteem:

and instead help students see the role that effort, strategy and attitude play:


Observe also that knowledge and skills are identified as distinct from competence. They are developed through effort, strategy and attitude - and these are all factors the student can control.  We then extend the view to show that failure is not a direct link to competence; that competence can be developed; and that self-esteem can be built on more than just competence:


Andrew Martin goes on to encourage us to help students focus on mastery of the subject, rather than dwelling on performance compared to other students, and to show how effort, strategy and attitude will help gain mastery. Most importantly: to show students that these factors are actually under their control.

So where does Standards Based Grading [SBG] fit in this model?  Let me count the ways ... here are just a few:

Most importantly, SBG smashes the simplistic correlation of performance to competence. Instead of providing a single figure "you passed, you failed" and therefore "you are smart, you are not smart", it highlights individual elements of knowledge and skills the student has mastered and has not mastered.  So long as we have the discussion carefully, we never impugn the student's competence: we have a discussion about what they know, not how smart they are.  And note this works for the advanced student as well as the less successful student: they don't get "87% - great!", they get "you understood this, now let's focus on these...".

SBG helps show students that performance depends on prior mastery : Academic success doesn't just happen because you are smart - it's built on successfully mastering earlier work. SBG makes this very clear - because mastery of previous knowledge is explicitly recorded and tracked.

SBG helps turn failure into feedback for future growth : When students encounter failure in assessment, big or small, they get specific information on where and how they might improve.

SBG provides a mechanism for having the discussion about effort, strategy and attitude : A concept that has come through strongly in the SBG discussion is that before students can ask to be reassessed on a standard, they have to actually do something to help master it, and demonstrate this effort, before we offer them the choice for reassessment.

SBG allows us to customise appropriate success goals for each student: For each student we can help them define mastery goals to extend them from where they currently are. We can also help them track improvement and personal bests, building their self-esteem in the process.

The take home message? SBG as it's being discussed in the blogosphere seems well aligned to best practices recommended by leading ed psych research for maximising academic success. Andrew Martin's work suggests we need to extend the SBG process to look carefully at the "what happens next?" question: once we have determined where the student is at for each skill on the SBG chart, what sort of discussion do we have with the student? What can we do to support increased effort, appropriate strategy and positive attitude as the student prepares to either reassess a standard, or move to the next set of standards?  It seems we need to think carefully with the student about which of these elements will benefit from further attention - and then help them find ways to progress.

Monday, January 17, 2011

Standards Based Grading and the Circle of Courage

There's a movement growing in the edu blogosphere called Standards Based Grading [SBG, twitter #SBAR]. If you're not familiar with it yet, Think Thank Thunk's posts are a great starting place. In this post, and the next few to follow, I would like to show connections between SBG and some of the educational ideas that have inspired and influenced me. But first - what is SBG?

SBG in a nutshell: Instead of doing a test at the end of each topic and giving a student a mark for the whole test, dissect the topic into individual concepts and skills ("standards"), and then, over time - during the teaching of the topic - assess competence in each discrete standard using a simple scale such as "developing, developed, mastered". Make the process visible to students, and give them opportunities to be re-evaluated on standards - with the proviso they can show they did something toward getting a better result. At the end of the topic, the overall student grade is derived by aggregating their current level of achievement on each standard. The goal is to have meaningful formative assessment that supports an ongoing learning conversation with students, providing visible and timely feedback to students and teachers.  (I think I got it all?)
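
If you like to think in code, the aggregation step at the end might be sketched like this. A sketch only - the three-level scale comes from the nutshell above, but the 0-100 scoring formula and the example standards are purely illustrative:

```python
# Rough sketch of deriving an overall topic grade from the student's
# *current* level on each standard. The "developing/developed/mastered"
# scale is from the description above; the scoring is illustrative.
LEVELS = {"developing": 1, "developed": 2, "mastered": 3}

def topic_grade(standards):
    """Average current level across all standards, as a 0-100 score."""
    points = [LEVELS[level] for level in standards.values()]
    return round(100 * sum(points) / (3 * len(points)), 1)

student = {
    "order of operations": "mastered",
    "fraction of a quantity": "developed",
    "percentage increase": "mastered",
}
print(topic_grade(student))  # (3 + 2 + 3) / 9 * 100 -> 88.9
```

The nice part is that the final grade is just a view over the per-standard record - re-assess one standard and the grade updates itself.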

SBG resonates with a model of student psycho-social needs which I find insightful and practical: the Circle of Courage, developed by Dr. Larry Brendtro, Dr. Martin Brokenleg, and Dr. Steve Van Bockern, derived from Native American concepts. They argue that young people, indeed all people, have needs in four key areas:  Belonging, Mastery, Independence and Generosity.

 Circle of Courage medicine wheel by Lakota artist George Blue Bird
via Reclaiming Youth International - poster available for purchase

So where does SBG fit in with the Circle of Courage?
  • Belonging: We have the opportunity to present SBG as a class journey through the topic - a development of the whole class - to which we all belong. By allowing students to be reassessed, we are telling students that even if they did not achieve mastery of a standard, they will get another go - they are still in the class, still part of the effort - they haven't been left behind!
  • Mastery: This is the factor SBG most obviously caters for: we emphasise student mastery of skills and concepts. It's not a number or a score we are aiming for - it's mastery of something specific.
  • Independence: I love SBG for this: we offer students choices - the choice to try again, the choice of which skills to try again, and the choice of when they are ready for reassessment (within reason!).
  • Generosity: SBG offers a unique opportunity for generosity. Once you have identified which students have mastered a standard, make them the teachers for that standard - give them the chance to be generous by helping other students learn how to master it. Form students into small groups, assigning an 'expert' to each group, and have them lead the group in an activity. You will be staggered at the student response - from both the experts and the developing students - and the research evidence is very clear on the benefits of peer learning, for both the teacher and the learner. Students really do seem to learn so much better from their peers. And unexpected things happen too: once, one of my expert students - who was normally talkative to the point of disruption - started telling off his students for talking too much and not listening. Priceless.
In the next few posts, I'll be exploring how SBG fits in with Andrew Martin's work on student motivation and engagement and John Hattie's Visible Learning.

Update Jan 22, 2011: This series continues with Standards Based Grading: helping eliminate academic fear and failure