It's been eleven weeks now since I started using Standards Based Grading with my Year 8 class. It was tough going at times, not because of SBG, but rather because I'm a new teacher, still working out how to manage five classes, teaching topics for the first time while at the same time experimenting with SBG and trying not to confuse my Year 8 students. In hindsight I probably bit off more than I could chew!
Did SBG work as hoped? I'm very happy with the results so far. SBG certainly 'did no harm' to student test results in the most recently completed topic. The mean topic test score was 80%, with a standard deviation of 13%. Based on student feedback and my observations, I believe SBG helped students achieve these results - although I can't actually prove it. Looking at the actual SBG grading, I can say that 12 out of 24 students achieved full mastery on all eight outcomes for the topic, and 22 of 24 obtained at least a B+ for every outcome, with almost every student who did not get all As on a quiz coming back to do repeat attempts. Students regularly told me how much they liked doing assessment this way, and often said they "finally got it" after doing the quiz a second or even a third time.
Test anxiety appeared to be greatly reduced. I was able to constantly reassure the more anxious students that they had control over 80% of their mark through repeat attempts on the quizzes, and that if they did those, the test, which was only worth 20%, would take care of itself. When I handed test results back, we were able to discuss each student's specific strengths and weaknesses in terms of the outcome grid.
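To make that weighting concrete (the conversion of quiz grades to a percentage here is just my own illustration, not a fixed SBG rule): suppose a student's repeated quizzes leave them sitting at the equivalent of 90% across the outcomes, and they then score 70% on the topic test. Their overall mark works out to 0.8 × 90 + 0.2 × 70 = 72 + 14 = 86% - so the part of the mark they control through quiz repeats really does do most of the work.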
SBG helped me with my teaching: I received continuous feedback throughout the topic as to what students could and could not do, and which of my lessons were actually working. In one case I realised the last two lessons I had taught had failed to help students meet an outcome - so I knew I was teaching it the wrong way for them. I tried a different approach and suddenly the quiz results for that outcome soared. Without SBG I could have had a nasty surprise at topic test time.
The key SBG components I used were: an outcomes grid; a sequence of diagnostic quizzes made in multiple versions (slight variations allowing quick production of multiple papers for repeat attempts); student pair-marking and self-tracking to emphasise student control and focus on outcomes; and linking the traditional topic test to the outcomes grid. However, I did fall behind one week in the program because of my inexperience, and I've learnt I need to explicitly plan the SBG components ahead of time and fit them into the program - hopefully in a way that allows the SBG activities to double up as learning activities. I think it will be smoother next topic.
Was there a discrepancy between end-of-topic test results and the quiz marks? In most cases, student quiz marks matched their performance on the corresponding outcomes in the topic test. For some students, however, there were differences. As I marked the tests, I compared each student's quiz outcomes to how they did on that outcome in the test - time consuming, but I think worth it. An idea that came about while marking the test, which may not be in the spirit of SBG, was to modify the quiz marks where I saw a major difference from the corresponding test outcome. If the student did better in the test for an outcome, I raised their quiz result to match what they did in the test - since they had clearly mastered the outcome, and I think it's fair to view the test as just another quiz. If the student did worse in the test for an outcome and clearly demonstrated they no longer met it, I reduced their quiz mark to a B or a C. This didn't happen for many students, and in the rare cases where I made these adjustments, I later explained to the student what I had done and offered them the chance to repeat the corresponding quiz - giving them the chance to get back to a B+ or an A for the outcome - and revise the skill.
Will I do SBG next term with Year 8? Absolutely! Will I try it with another year group? Not just yet, as I'm still working through other first-year-teacher challenges.
More details for those really interested in the nitty-gritty:
How did I implement SBG?
- Use of an Outcomes Grid: I gave each student a sheet like this to glue in their exercise books:
- Use of a letter based grading scheme.
- A - Fully mastered - perfect answers
- B+ - Almost fully mastered - minor improvements or refinement needed
- B - Developing - at least some skill/understanding demonstrated
- C - Not demonstrated - doesn't look like this was understood.
Note well: I never told a student they couldn't do, or didn't know, an outcome - I just said they hadn't demonstrated it. A subtle difference, but a huge one for self-esteem.
- Use of a weekly diagnostic quiz in two versions: Most weeks I prepared a short quiz that tested two to four of the outcomes. I prepared two versions of essentially the same quiz, (A) and (B). On the first attempt for each quiz I handed out papers (A) and (B) to alternate students. This helped ensure I was assessing the student's skills and not their neighbour's, and meant I always had a second quiz available. Students were encouraged to just write "IDK" (I don't know) if they really didn't know how to do a question and move on to the next one. The goal was to get through the quiz without too much stress or time wasted. Here is how the quiz looked for the first four outcomes:
[Image: Percent Diag 01 - sample diagnostic quiz]
- Student pair-marking of papers: Unless I was really strapped for time, I asked the students to swap papers and mark the quiz. The specific instruction was "look at the colour pen your neighbour used - now please use a different colour pen to mark them". We then worked through the quiz, with me asking random students to help solve the (A) version of each problem on the board. If I found a student who could not do the (A) version of a problem, we worked through it together, then I asked them to work out the (B) version answer for the class. The students graded their neighbour's paper, writing letter grades for each outcome at the top of the paper.
- Quiz results guided my lesson planning: I collected the papers, looked for student misconceptions, checked the student-allocated marks and recorded the results. Based on the overall class results I then made a call on whether we could move on to the next outcomes, or whether more whole-class work was required. Depending on that call, I might make (C) and (D) versions of the quiz for a later lesson, or make those papers available for any students wanting to repeat the quiz.
- Repeating Quizzes: I didn't impose any strict procedure for repeat attempts - I played it by ear. I had a stack of (A) and (B) papers available at any time, and gave each student the version they hadn't already done for their second try. If need be, I made (C) and (D) versions.
- Student continuous self-monitoring of outcomes: I handed back the quizzes after checking them and gave the students (mostly) clear instructions on how to update their outcomes sheet with their marks. The idea was to encourage them to track their own learning.
- Follow-up material made available: I put copies of the (A) and (B) versions of the quizzes with worked solutions on the class Edmodo, and sometimes prepared specific revision material for those students needing more support.
- Rinse and Repeat: We repeated the process until we completed the topic.
Lessons learned
- It's hard work to implement SBG the first time you are teaching a topic - there is a lot of material to prepare. Probably a bit crazy to do this in my first year of teaching, but hopefully most of the resources built can be reused next year.
- Next term I will set a regular period for doing the quizzes, and a regular weekly recess period for doing repeat attempts.
- For this last topic I used a premade test which didn't sufficiently test one item on my outcome grid, so next time I will take more care to ensure full coverage.