Tuesday 7 February 2017

what students don't know can hurt them

This term, four of my colleagues and I have formed a teaching circle to discuss the scholarship of teaching and learning (SoTL). We are interested in understanding how to engage in SoTL research and also in using the SoTL literature to improve our own teaching praxis. For this term (Winter 2017) we decided to work through How Learning Works by Ambrose et al. (2010). Last week we met to discuss the first chapter, which considers how students' prior knowledge can affect their learning. The chapter makes a clear distinction between declarative and procedural knowledge, but we noted that there are other types of knowledge, such as knowledge of application and context: knowing when, or in which context, to apply one's declarative or procedural knowledge. However, at times in this first chapter it seemed that Ambrose et al. were implying that procedural knowledge encompassed contextual knowledge. I think that greyness in my own understanding is apparent below.

People who can do but not explain how or why have procedural but not declarative knowledge, and they run the risk of being unable to apply a procedure in a new context or to explain to someone else how to do the task. Many instructors are like this about their teaching: able to teach, but not to articulate why their teaching is effective, nor to teach as effectively in another context. Conversely, students may know facts (grammar, structures, species' names) but not know how to solve problems with that knowledge. These students have declarative but not procedural knowledge. This is what I am trying to move my second-year molecular cell biology students toward - being able to use their molecular cell biology knowledge to solve problems. The issue I have is that I do not know how to teach procedural knowledge effectively other than to have students practice and to model problem-solving myself.

Ambrose et al. note that if students' prior knowledge is fragmentary or incorrect, it will interfere with their subsequent learning. In addition, even if their prior knowledge is sound, students are not always able to activate it in order to integrate it with new learning. Instructors need to be able to assess whether students' prior knowledge is sound and active for effective learning to occur. Ambrose et al. also note that students need to activate knowledge appropriate to the learning context - and this does not always happen. Thus, instructors need to monitor whether students are drawing on suitable knowledge and make the relevant connections for the context explicit. Students need to learn contextual knowledge. For procedural knowledge, I think this simply requires numerous examples and opportunities for practice.

This first chapter suggests that a good way to correct inaccurate knowledge is to give students an example or problem that exposes misconceptions and sets up cognitive dissonance in their thinking. Ambrose et al. suggest using available concept inventories to probe students' inaccurate knowledge. This is what my physics colleague, Ian Blokland, is doing in his classes, which employ iClickers, and what I am attempting to do with 4S apps in my classes, which are taught using team-based learning. This is a great idea, but it is time-consuming work to produce plausible wrong answers/distractors. I have found that most textbook test banks do not do this well.

Something the authors suggest, and that I attempt to do in my courses, is to help students make connections in their learning: to material from earlier in the same course, from previous prerequisite courses, and from supporting courses I know they are taking at the same time. On end-of-term student evaluations of teaching, students have commented that they appreciate this. In the language of Ambrose et al., I am activating students' prior knowledge and acknowledging to students that this is an appropriate context in which to integrate that prior knowledge. Students are not always capable of doing this themselves.

Something else this first chapter suggests for activating prior knowledge, and that I attempt to do in my courses, is to have students consider their own everyday experience of how things work. The classic example I often use is that the increased frequency of washroom trips after drinking alcohol is a direct result of alcohol inhibiting the release of ADH from the posterior pituitary. I consciously try to offer everyday examples of the applicability of students' new knowledge.

One example of being explicit about context is the style of writing expected. In the biological sciences, concise, clear writing is expected, as opposed to the more narrative style possible in English - although a good science paper still tells a good story...

Brian Rempel, my organic chemistry colleague, highlighted a paper at our teaching circle (Hawker et al. 2016) that investigated the Dunning-Kruger effect in a first-year general chemistry class. Generally speaking, students are poor at assessing how well they performed on an exam after they have written it. As others have shown, better-performing students are generally better at assessing their performance. I think this is a case of "you don't know what you don't know": if students do not know the material, then they are unable to assess whether or not they knew the answers on the exam - they thought they knew!

In the context of the first chapter of How Learning Works, it seems to me that students' prior knowledge affects their ability to assess their performance on an exam. Is there a link? I guess the point of this chapter is that students do not always know what they don't know, and this limits both their ability to integrate new knowledge and their ability to judge how to apply that knowledge.

What is interesting in the Hawker et al. (2016) study is that there is a significant improvement in postdiction accuracy between the first and second exams, but not between subsequent exams (there were five exams in the study, the fifth being the comprehensive final). The authors suggest that the first exam is an abrupt corrective to students' expectations of what a university exam demands (this is a first-term general chemistry course). Thus it may be that the effect is not specific to chemistry but is simply a result of students' transition to university. However, their analysis of second-semester students found the same significant difference between the first two exams for those taking university chemistry for the first time, but not for students who had already completed the first chemistry course - which argues against a purely transitional effect. So there may be something about general chemistry in particular, which is presumably what prompted the authors to study it in the first place: there is some suggestion in the SoTL literature that chemistry is different in terms of students' ability to monitor their performance, and the authors suggest this may be due to the difficult nature of chemistry. Other STEM disciplines have reported similar results (Ainscough et al. 2016; Lindsey & Nagel 2015).
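For concreteness, a postdiction here is a student's estimate of their own score made immediately after writing the exam, and one simple way to summarise postdiction accuracy is the gap between that estimate and the actual score. The sketch below only illustrates that idea with invented numbers; it is not Hawker et al.'s actual analysis, which used their own course data and statistical comparisons.

```python
# Illustrative sketch only: invented data, not the Hawker et al. (2016) dataset.
# Each record is (exam number, postdicted score, actual score), both out of 100.
records = [
    (1, 85, 62), (1, 90, 71), (1, 70, 68),
    (2, 75, 70), (2, 80, 78), (2, 65, 66),
]

def mean_postdiction_error(records, exam):
    """Average |postdicted - actual| for one exam; smaller means more accurate."""
    gaps = [abs(post - actual) for (e, post, actual) in records if e == exam]
    return sum(gaps) / len(gaps)

for exam in (1, 2):
    print(f"Exam {exam}: mean postdiction error = "
          f"{mean_postdiction_error(records, exam):.1f} points")
```

With numbers like these, the drop in mean error from exam 1 to exam 2 is the kind of improvement in postdiction accuracy the study reports.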

Resources

Ainscough, L., Foulis, E., Colthorpe, K., Zimbardi, K., Robertson-Dean, M., Chunduri, P., & Lluka, L. (2016). Changes in biology self-efficacy during a first-year university course. CBE-Life Sciences Education, 15(2), ar19.

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How does students’ prior knowledge affect their learning? In How learning works: Seven research-based principles for smart teaching (pp. 10–39). San Francisco, CA: Jossey-Bass, an imprint of Wiley.

Hawker, M. J., Dysleski, L., & Rickey, D. (2016). Investigating general chemistry students’ metacognitive monitoring of their exam performance by measuring postdiction accuracies over time. Journal of Chemical Education, 93(5), 832–840.

Lindsey, B. A., & Nagel, M. L. (2015). Do students know what they know? Exploring the accuracy of students' self-assessments. Physical Review Special Topics - Physics Education Research, 11(2), 020103.