Saturday, 18 February 2017

learning is impacted by how we mentally organize our knowledge

Last week our SoTL Teaching Circle met to discuss the second chapter of How Learning Works. This chapter discusses how the way students organize their knowledge impacts their learning. Basically, it considers the differences between how experts and novices organize their knowledge. The bottom line is that expertise is accompanied by a denser network of mental connections among nodes of knowledge. In contrast, novices at best have a linear chain linking their different points of knowledge. Most students, however, have islands of knowledge with no connections between their different courses, even when those courses are within the same major.

A few years ago Kimberly Tanner was the keynote speaker for a series of workshops at the UofA for the AIBA annual conference. The title of the conference was Mind the Gap, which was meant to highlight the difference between thinking like an expert and thinking like a novice. She explained that one of the things that makes it difficult for experts to teach novices is that much of our expertise is unarticulated, even to ourselves. Experts (e.g. holders of PhDs) are often unaware of how the organization of their knowledge makes them experts. This makes it difficult to help novices transition to expert thinking, because the experts do not know what the novice needs to change in order to become an expert. I know I have this difficulty when teaching many of my courses. Something that is obvious to me, and thus seemingly not worth mentioning to my students, ends up being critical for students to be made aware of in order to progress in the discipline. This is particularly true for those of us who suffer from academic fraud (impostor) syndrome - the nagging thought that, really, I am not that smart and someone is going to realize their mistake and revoke my PhD. Thus, university and college instructors may tend to keep some aspects of their expert thinking to themselves, because articulating it might reveal that what the expert thinks is worth teaching is actually common knowledge and inappropriate for discussion in the classrooms of higher education.

But as we tell many of our students, if you have a question, it is likely that many in the classroom have the same question. This is what makes teaching difficult - having the courage to be intellectually humble in the midst of both our peers and our students.

On the other hand, I think the work being done to identify threshold concepts in different disciplines is a good step toward understanding those key points that we ourselves grasped on our way to developing expertise. As instructors in higher education, we need to understand what the stumbling blocks were for us and our colleagues when we were developing our expertise. Once they are identified, we can ensure that our own students know where to concentrate their attention in order to understand the depth and breadth of the discipline. And I think this can be readily facilitated by helping students make links within their own knowledge structure so that their mental models of our world become robust.

This is one of the reasons why I advocate for students to develop an e-portfolio: it provides a platform for them to reflect on their education in a way that cuts across disciplinary boundaries, and even the boundaries between the courses within their major. Students need to understand that knowledge is a whole rather than a series of separate islands. We want our students to understand the world, not just what is currently in front of their noses.


Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How does the way students organize knowledge affect their learning? In How Learning Works: Seven Research-Based Principles for Smart Teaching (pp. 40–65). San Francisco, CA: Jossey-Bass Publishers.

Haave, N. (2016). E-portfolios rescue biology students from a poorer final exam result: Promoting student metacognition. Bioscene: Journal of College Biology Teaching, 42(1), 8–15.

Krieter, F. E., Julius, R. W., Tanner, K. D., Bush, S. D., & Scott, G. E. (2016). Thinking like a chemist: Development of a chemistry card-sorting task to probe conceptual expertise. Journal of Chemical Education, 93(5), 811–820.

Loertscher, J., Green, D., Lewis, J. E., Lin, S., & Minderhout, V. (2014). Identification of threshold concepts for biochemistry. CBE-Life Sciences Education, 13(3), 516–528.

Smith, J. I., Combs, E. D., Nagami, P. H., Alto, V. M., Goh, H. G., Gourdet, M. A. A., … Tanner, K. D. (2013). Development of the biology card sorting task to measure conceptual expertise in biology. CBE-Life Sciences Education, 12(4), 628–644.

Tuesday, 7 February 2017

what students don't know can hurt them

This term four of my colleagues and I have formed a teaching circle to discuss SoTL. We are interested in understanding how to engage in SoTL research and also in using the SoTL literature to improve our own teaching praxis. For this term (Winter 2017) we decided to work through How Learning Works by Ambrose et al (2010). Last week we met to discuss the first chapter, which considers how students' prior knowledge can affect their learning. The chapter makes a clear distinction between declarative and procedural knowledge, but we noted that there are other types of knowledge, such as knowledge of application and context: the ability to know when, or in what context, to apply one's declarative or procedural knowledge. However, at times in this first chapter it seemed that Ambrose et al were implying that procedural knowledge encompasses contextual knowledge. I think that greyness in my own understanding is apparent below.

People who can do but cannot explain how or why have procedural but not declarative knowledge, and run the risk of being unable to apply a procedure in a new context or explain to someone else how to do the task. Many instructors are like this about their teaching: able to teach, but unable to articulate why their teaching is effective or to teach as effectively in another context. Similarly, students may know facts (grammar, structures, species' names) but not know how to solve problems with that knowledge. These students have declarative but not procedural knowledge. This is what I am trying to move my second-year molecular cell biology students toward - being able to use their molecular cell biology knowledge to solve problems. The issue I have is that I do not know how to effectively teach procedural knowledge other than to have students practice and to model problem-solving myself. Ambrose et al note that if students' prior knowledge is fragmentary or incorrect, it will interfere with their subsequent learning. In addition, even if their prior knowledge is sound, students are not always able to activate it in order to integrate it with new learning. Instructors need to be able to assess whether students' prior knowledge is sound and active for effective learning to occur. Ambrose et al also note that students need to be able to activate knowledge appropriate to the learning context - and this is not always the case. Thus, instructors need to monitor the appropriateness of students' knowledge and make clear the appropriate connections/knowledge for the context. Students need to learn contextual knowledge. For procedural knowledge, I think this simply requires numerous examples and opportunities for practice.

This first chapter suggests that a good way to correct inaccurate knowledge is to give students an example or problem that exposes misconceptions and sets up cognitive dissonance in students' thinking. Ambrose et al suggest using available concept inventories to probe students' inaccurate knowledge. This is what my physics colleague, Ian Blokland, is doing in his classes, which employ iClickers, and what I am attempting to do with 4S Apps in my classes, which are taught using team-based learning. This is a great idea, but it is time-consuming work to produce plausible wrong answers/distractors. I have found that most textbook test banks do not do this well.

Something the authors suggest, and I attempt to do in my courses, is to help students make connections in their learning: from earlier in the same course, from previous prerequisite courses, and from supporting courses I know they are taking at the same time. Students have commented on the end-of-term student evaluations of teaching that they appreciate this. In the language of Ambrose et al, I am activating students' prior knowledge and signalling to students that this is an appropriate context in which to integrate that prior knowledge. Students are not always capable of doing this themselves.

Something else this first chapter suggests for activating prior knowledge, which I attempt to do in my courses, is to have students consider their own everyday context for how things work. The classic example that I often use is that the increased frequency of washroom trips after drinking alcohol is a direct result of alcohol inhibiting the release of ADH from the posterior pituitary. I consciously try to offer everyday examples of the applicability of students' new knowledge.

One example of being explicit about context is the style of writing expected. In the biological sciences, concise, clear writing is necessary, as opposed to the more narrative style possible in English - although a good science paper tells a good story....

Brian Rempel, my organic chemistry colleague, highlighted a paper (Hawker et al 2016) at our teaching circle which investigated the Dunning-Kruger effect in a first-year general chemistry class. Generally speaking, students are poor at assessing how well they performed on an exam after the exam has been written. As others have shown, better-performing students are generally better at assessing their performance. I think this is a case of you don't know what you don't know. If students do not know the material, then they are unable to assess whether or not they knew the answers on the exam - they thought they knew!
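Hawker et al (2016) operationalize this as postdiction accuracy: the gap between the score a student estimates immediately after writing the exam and the score they actually earned. A minimal sketch of the general idea (the student numbers below are hypothetical, not taken from the paper):

```python
def postdiction_errors(predicted, actual):
    """Signed postdiction error per student (percentage points):
    positive = overconfidence, negative = underconfidence."""
    return [p - a for p, a in zip(predicted, actual)]

# Hypothetical exam scores: weaker students tend to overestimate
predicted = [85, 70, 90, 60]
actual = [62, 71, 88, 45]
print(postdiction_errors(predicted, actual))  # [23, -1, 2, 15]
```

In a sketch like this, the Dunning-Kruger pattern shows up as the largest positive errors belonging to the lowest actual scores, and an improvement in postdiction accuracy would appear as these errors shrinking toward zero on later exams.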

In the context of the first chapter of How Learning Works, it seems to me that students' prior knowledge impacts their ability to assess their performance on their exam. Is there a link? I guess the point in this chapter is that students do not always know what they don't know and this impacts their ability to integrate new knowledge and to assess how to apply that knowledge. 

What is interesting in the Hawker et al (2016) study is that there is a significant improvement in postdiction accuracy between the first and second exams, but not between subsequent exams (in this study there were five, the fifth being the comprehensive final). The authors suggest that the first exam is an abrupt corrective to students' expectations of what is expected of them on a university exam (this was a first-term general chemistry course). Thus, the effect may not be specific to chemistry but may simply be a result of students' transition to university. Their analysis of first-time chemistry students in the second semester found the same significant difference between the first two exams for university chemistry neophytes, but not for students who had completed the first chemistry course. So there may be something about general chemistry in particular, which is presumably what prompted the authors to study this in the first place: there is some suggestion in the SoTL literature that chemistry is different in terms of students' ability to monitor their performance. They suggest this may be due to the difficult nature of chemistry. Other STEM disciplines have reported similar results (Ainscough et al 2016; Lindsey & Nagel 2015).


Ainscough, L., Foulis, E., Colthorpe, K., Zimbardi, K., Robertson-Dean, M., Chunduri, P., & Lluka, L. (2016). Changes in Biology Self-Efficacy during a First-Year University Course. CBE-Life Sciences Education, 15(2), ar19.

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How does students’ prior knowledge affect their learning? In How learning works: Seven research-based principles for smart teaching (pp. 10–39). San Francisco, CA: Jossey-Bass, an imprint of Wiley.

Hawker, M. J., Dysleski, L., & Rickey, D. (2016). Investigating general chemistry students’ metacognitive monitoring of their exam performance by measuring postdiction accuracies over time. Journal of Chemical Education, 93(5), 832–840.

Lindsey, B. A., & Nagel, M. L. (2015). Do students know what they know? Exploring the accuracy of students’ self-assessments. Physical Review Special Topics - Physics Education Research, 11(2), 020103.

Saturday, 28 January 2017

the advantages of stable teams

Two recently published articles (Walker et al 2017; Zhang et al 2017) provide evidence that the Team-Based Learning (TBL) practice of keeping learning teams stable throughout the course produces better student learning outcomes than forming teams ad hoc each time group work occurs during class.

The study by Walker et al (2017) is from Kentucky. I like that this study does not spend time establishing whether cooperative learning works but simply cites the existing evidence. Instead, it focuses solely on the impact of stable vs shifting teams on the efficacy of cooperative learning. Their results suggest that stable teams produce better learning outcomes. The population studied was a freshman undergraduate sociology course. What is interesting is that it is not only the stability of the team that produces improved student learning outcomes but also the time on task, similar to what the Mazur group's study (below) found to explain why females had greater gains than males. In the Walker et al (2017) study, there were no differences between stable and shifting teams in the first term. There was, however, a significant difference in the second semester, when the time spent discussing material in teams was increased (the amount of time spent viewing a film was reduced). Note that although this study involved large-enrollment classes (150-175 students), it was conducted in tutorials (recitation sessions) that were smaller subsets of the class led by teaching assistants (TAs) rather than faculty. Also note that one TA chose to shift teams due to the pedagogical belief that all students deserved a chance to work in a high-functioning team. In contrast, the other TA created stable teams based on their reading of the TBL literature, which suggests that stability develops stronger relationships that enhance the learning environment. I have some issues with the introduction of the Walker et al (2017) paper: they make many blanket statements about the typical university student experience (large classes, relatively unengaged) without citing any evidence that this is, in fact, the case. I am sure it is the case, but in a peer-reviewed publication I expect the evidence to be cited.

The Zhang et al (2017) paper, from Mazur's group, studied the effect of peer instruction (PI) on science students' beliefs about physics and attitudes toward learning physics. The effect of a stable team environment in the PI groups was also investigated. Students' attitudes were measured using the Colorado Learning Attitudes about Science Survey. The students were at a university in China. The results indicate that PI improved students' attitudes and that this improvement was greater when the PI teams were stable throughout the term. It seems to me that the study was undertaken to determine why many studies indicate that students' attitudes toward physics deteriorate during undergraduate physics courses, becoming more novice-like. This is similar to what I have seen in my 1st-year biology course with the Learning Environment Preferences survey (paper in preparation) when students are not assigned the task of developing their learning philosophy. I found these results interesting because, in both of my courses, PI was occurring in the form of TBL. In the class in which a learning philosophy was not assigned, students' cognitive complexity index decreased (becoming more novice-like). Zhang et al (2017) also found a gender effect in which females seemed to make greater gains than males in the PI courses. Further study suggested that this may be a result of females discussing the instructor's in-class questions to a greater extent during team discussions. The only criticism of the study that I have is that, in the variable-team PI group, the researchers assume that students formed teams randomly during each class. However, it is possible that students sat in the same place in class from day to day and sat with their friends. Thus, although team stability was not enforced, neither was team variability.

Regardless, these are interesting results and provide evidence for what many TBL practitioners have observed in their courses: Over time, stable learning teams become more effective at learning.


Michaelsen, L. K., Watson, W. E., & Black, R. H. (1989). A realistic test of individual versus group consensus decision making. Journal of Applied Psychology, 74(5), 834–839.

Sibley, J. (2016). Using teams properly. LearnTBL.

Walker, A., Bush, A., Sanchagrin, K., & Holland, J. (2017). “We’ve Got to Keep Meeting Like This”: A Pilot Study Comparing Academic Performance in Shifting-Membership Cooperative Groups Versus Stable-Membership Cooperative Groups in an Introductory-Level Lab. College Teaching, 65(1), 9–16.

Zhang, P., Ding, L., & Mazur, E. (2017). Peer Instruction in introductory physics: A method to bring about positive changes in students’ attitudes and beliefs. Physical Review Physics Education Research, 13(1), 010104.

Friday, 20 January 2017

pre-testing and expectations in team-based learning

Last Friday I gave my 2nd-year biochemistry class its first readiness assurance test, or RAT in team-based learning (TBL) terminology. It was a mix of new material (amino acids) and material on pH and buffers that students should have learned in their prerequisite chemistry courses. Typically, TBL RATs aim to be reading quizzes that encourage students to prepare, so that in subsequent classes they can practice using their newly learned knowledge in what TBL terms Apps (for applications). RATs are designed to produce a class average of 70%. My average last week was 49%.

So, what happened? Interestingly, when I analyzed the marks, it appears that students did better on the new material: recognizing amino acid structures and calculating the pI of amino acids. What students had the most difficulty with was their prior learning on pH and buffers. This surprised me, so I checked with my chemistry colleagues. They suggested that my questions on pH and buffers were more conceptual, whereas first-year chemistry courses tend to focus on calculating pH. In addition, my chemistry colleagues reminded me that our students prefer to plug and play (calculate) rather than think. I don't think our campus is unusual in this regard - thinking is difficult work!
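The plug-and-play calculations my colleagues had in mind really do reduce to one-liners, which may be why they exercise so little conceptual understanding. A minimal sketch (glycine's pKa values are the usual textbook figures; the buffer numbers are my own illustration):

```python
import math

def buffer_ph(pka, base_conc, acid_conc):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pka + math.log10(base_conc / acid_conc)

def isoelectric_point(pka1, pka2):
    """pI of an amino acid without an ionizable side chain:
    the average of the two pKa values flanking the neutral species."""
    return (pka1 + pka2) / 2

# Glycine: pKa(carboxyl) = 2.34, pKa(amino) = 9.60
print(isoelectric_point(2.34, 9.60))   # glycine pI ≈ 5.97
# Acetate buffer (pKa 4.76) with equal acid and conjugate base
print(buffer_ph(4.76, 0.10, 0.10))     # pH = pKa when the ratio is 1
```

The conceptual questions that tripped my students up are exactly what these formulas hide: why equal concentrations give pH = pKa, or which way the pH moves when acid is added to a buffer.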

But it does raise an interesting issue for the implementation of TBL as a teaching and learning strategy. My understanding of TBL practice suggests that RATs should focus more on conceptual than on calculation-style questions, as this should promote discussion of the questions during the team phase of the two-stage testing that is inherent in RATs. In readiness assurance tests, students first complete the test (10 MCQs in my classes) individually and then repeat the same test as a team using IF-AT (Immediate Feedback Assessment Technique) cards, so that students receive immediate feedback about their understanding. It is great at immediately revealing misconceptions. I have been using this technique since 2011 and can attest that it works well.

However, there does seem to be a tension in the Readiness Assurance Process (RAP) of Team-Based Learning. The RAP is what makes TBL a flipped-classroom teaching strategy. In the RAP, students are assigned pages to read from the textbook (or an article, podcast, or whatever students need to initially prepare themselves to learn in class), and then during the first class of the course module/topic students write a RAT in the two-stage testing style I described above. The RAP is intended to encourage students to do their pre-class preparation and to hold them accountable for it. It is not intended to be the end of teaching and learning for the particular course module; rather, it marks the beginning. Thus, the RAT should be considered, in essence, a reading quiz. The TBL literature suggests that a typical RAT could be constructed from the topic headings and subheadings of the assigned textbook chapter. However, the TBL literature also suggests that the questions should generate discussion and debate during the team portion of the RAT. The difficulty I have in implementing TBL in my classes is that there seems to be a tension between producing a RAT that is a reading quiz and producing a RAT that generates discussion and debate. A reading quiz is typically pitched fairly low on Bloom's taxonomy (mostly questions testing recall). In contrast, questions that foster debate and discussion need to move beyond simple right/wrong answers. Hence the tension inherent in the design of RATs: they should be reading quizzes that are able to foster debate.

I have a hard time constructing these sorts of tests, and I believe that is what produced the poor class average on my first RAT of the term in my biochemistry class last week. What I thought were simple recall questions based on what students had learned in prior courses ended up exposing some fundamental misconceptions in their learning. I guess that is what RATs are supposed to do. I was just surprised by how many misconceptions students had about pH and buffers, given they have been learning this since high school. On the other hand, if you don't use it, you lose it. And I suspect that many of my biochemistry students have not had to consider pH and buffers for a year or two.

The way I handled the situation was to mark the RAT out of 9 instead of 10 (there was one question that no one answered correctly - a couple of teams got it right on their second attempt), and I have also informed students that I will drop their lowest RAT result when I calculate their final grade for the course. Hopefully, that is sufficient to press the reset button so that students do not feel like they are stumbling right out of the gate in the first week of classes.
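For what it's worth, the drop-the-lowest adjustment is mechanical enough to script into a gradebook. A minimal sketch (the scores and the drop-one policy are illustrative, not my actual marks):

```python
def rat_average(scores, drop_lowest=1):
    """Average RAT mark after dropping the lowest result(s)."""
    kept = sorted(scores)[drop_lowest:]
    return sum(kept) / len(kept)

# Four hypothetical RATs; the weak first result (4.4) is dropped
print(rat_average([4.4, 7.5, 8.0, 6.8]))  # averages 7.5, 8.0, 6.8
```

One design point worth noting: dropping the lowest score rewards recovery over the term without rescoring any individual test, which keeps the RAT's in-class feedback role intact.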


Dihoff, R., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), 207–231.

Haidet, P., Kubitz, K., & McCormack, W. T. (2014). Analysis of the team-based learning literature: TBL comes of age. Journal on Excellence in College Teaching, 25(3&4), 303–333.

Metoyer, S. K., Miller, S. T., Mount, J., & Westmoreland, S. L. (2014). Examples from the trenches: Improving student learning in the sciences using team-based learning. Journal of College Science Teaching, 43(5), 40–47.

Thursday, 12 January 2017

the influence of TBL in my teaching

Last term was a gong show for me. Not that things didn't go well - they did. I simply chose to implement or tweak too many things in my courses. Hence so few posts (two!?) last term. In the Fall term, I taught three courses: a 4th-year course (History & Theory of Biology), a 3rd-year course (Biochemistry: Intermediary Metabolism), and a 2nd-year course (Molecular Cell Biology).

The history and theory course I have been teaching since the late 1990s, and it chugs along just fine. I have always taught this course with the students taking an active role in the teaching. I hadn't realized when I began teaching it in 1998 that I was trying to implement active learning. In this course, students are assigned journal articles from the history and philosophy of biology and are required to write a two-page, double-spaced response to the day's article in preparation for class. In addition, one student is designated the seminar leader and leads the initial portion of the class in considering the implications of the article in light of what has been discussed earlier in the course, and also in terms of their own experience with biology in the previous three years of our biology program. The remaining half of each class consists of me mopping up the discussion and ensuring that what I consider to be the salient connections are discussed by the entire class.

This worked well for a few years, until the class began to grow from an initial enrollment in the 1990s of five or six students to the current 18-22. One thing I found was that the student-led seminars became really boring for the class, because seminar leaders were simply presenting what students had already read. So in the mid-2000s I began asking seminar leaders to direct a class conversation rather than give a formal presentation. This worked until the class became larger than 15 students. At that point, it became difficult for students to manage the class conversation.

A few years ago, I began implementing Team-Based Learning in my courses, and this experience influenced the structure of my history and theory course. What I learned from implementing TBL in other courses is that student conversations work well in groups of 4-7. Smaller or larger than that and the conversation suffers: students are either too shy or there are too many voices. So, in the 2010s I began splitting my classes into groups for the student-led seminars. After a couple of iterations, I realized that it is most effective if the teams are stable throughout the term. This is such a simple tweak, with its effectiveness established in the TBL literature, and I really don't understand why I didn't start doing it sooner. It made a huge difference in the quality of the student-led conversations, both because students were more comfortable with their team-mates and because of the peer pressure to produce a good seminar for team-mates. In addition, the stress of leading a seminar diminished because it was a presentation to the team rather than to the entire class.

I have not completely implemented the TBL structure in this course: it has no RATs or formal Apps. But it follows the spirit of how a TBL course is delivered. The teams are constructed randomly by me, transparently with the students, on the first day of class. Although there are no RATs, students are held accountable for their pre-class preparation through the required written responses to the assigned reading. And although there are no formal Apps in the TBL sense, I do have students consider my questions after the student-led seminars to ensure that what I consider to be the salient points are raised before the end of class.

My friend and colleague Paula Marentette, who also uses TBL in her classes, was one of the people who suggested that I try implementing TBL in my own courses. She explained to me a few years ago that implementing TBL transformed her approach to teaching, such that even now, when she teaches a course without TBL, she finds that she still uses elements of it in all of her classes. I find the same happening with me. For many people, TBL is too constraining. For me, it has been a great structure in which to begin implementing active learning and learner-centered teaching in my courses. As these approaches to teaching and learning have soaked into my being, I am finding that I may no longer need to formally implement TBL in my courses, and can instead pick and choose elements to use when the need arises for my students' learning.


Haave, N. (2014). Team-based learning: A high-impact educational strategy. National Teaching and Learning Forum, 23(4), 1–5.

Farland, M. Z., Sicat, B. L., Franks, A. S., Pater, K. S., Medina, M. S., & Persky, A. M. (2013). Best Practices for Implementing Team-Based Learning in Pharmacy Education. American Journal of Pharmaceutical Education, 77(8), 177.

Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8319–20.

Weimer, M. (2013). Learner-centered teaching: Roots and origins. In Learner-Centered Teaching: Five Key Changes to Practice (2nd ed., pp. 3–27). San Francisco, CA: Jossey-Bass, a Wiley imprint.

Saturday, 29 October 2016

ACUBE 2016, Milwaukee, WI, Oct 21-22

Dr. Annie Prud’homme-Généreux was the keynote speaker for ACUBE 2016 and is the reason I received the TLEF PD grant from the UofA. She is one of the founding faculty of Quest University. I wanted to hear what she had to say about teaching biochemistry and molecular biology in the Quest block system, because Augustana is beginning its hybrid program in 2017: the first three weeks of the term consist of one course (one course taken by students, one course taught by faculty), with the subsequent 11 weeks being a little more typical, in which students enrol in four courses and faculty teach two or three (depending on whether they taught a course during the initial three-week block). I was also interested to hear how Annie used Team-Based Learning to teach her biochemistry and molecular biology courses, something I have implemented in my own courses. I learned a couple of things from her:

  1. Biochemistry and molecular biology are best taught with TBL using case studies during the App phase. She ran a workshop at ACUBE where we experienced a case study using a genetics problem (a mother finds out her son has a haplotype incompatible with hers, yet she remembers giving birth to him!). I can really see how this works well, and it is what my Apps are attempting to be. But the ones she showed us tell a much more engaging story! She suggested I review what is available in the National Center for Case Study Teaching in Science. I'll have to investigate it. It seemed to me that the cases were geared more toward first and maybe second year. But that could work well for AUBIO 111 - Integrative Biology I and maybe AUBIO 230 - Molecular Cell Biology.
  2. Annie suggested that teaching a course in a three-week block is a great way to get students to dive into a topic in depth, but that it is not very good at giving students breadth. She opined that survey courses do not fit well in a three-week block. Thus, I am not sure that any of my courses will really work in the three-week block; they are all survey courses to some extent. The only ones that I could see working, as I suspected, are AUBIO/AUCHE 388 - Biochemistry Laboratory and AUBIO/AUCHE 485 - Selected Topics in Biochemistry. I think, and Annie confirmed, that the three-week intensive immersion blocks are great for learning applied skills or developing a project. So a lab where students are doing a lab project once a week, or a Selected Topics course where they are presenting seminars on their research project, could work very well. But the more typical surveys of biology, molecular cell biology, biochemistry, or histology do not lend themselves well to the three-week block format.

Saturday, 16 July 2016

is the criticism of the lecture a result of poor oratorical skills?

A recent article posted on The Atlantic website revisits the issue of lecturing vs active learning. Maryellen Weimer, Lolita Paff, Carl Lovitt and I discussed this at the Sunday plenary of the 2016 Teaching Professor Conference this past June in Washington, DC. Similar to Christine Gross-Loh, we suggested that good teaching requires a mix of active learning and lecturing, dependent upon the needs of the student. A good teacher doesn't simply leave their students to forage for themselves. On the other hand, a good teacher also guides students to construct their own knowledge structure. As I wrote in my editorial for the 2016 volume of CELT, teaching is similar to tuning the dial of an analog radio along the continuum between lecturing (teaching by telling) and active learning (learning by doing), dependent upon the intellectual level of the students. Actually, rather than the intellectual level of students, it might be better to say it depends on how developed students are as independent learners. The primary task of higher education is to teach students how to learn. The best result of a bachelor's degree is students' ability to research the answers to their own questions. As I have said elsewhere, the ultimate independent learner is a researcher: when the knowledge needed to answer a question is unavailable, a good researcher will collect the data and produce the knowledge required to answer it.

But I digress....

What Christine Gross-Loh suggests is that the problem with lectures stems from a lack of training of higher education professors in the skill of public speaking. This is a skill that was once taught and developed in colleges and universities but declined during the 20th century. If graduate students were taught to speak publicly in an engaging manner (and this does not mean continuous exposition, but rather a mix of speaking, discussing, thinking, active learning, and telling), then perhaps the lecture (broadly defined) would not be so maligned. Perhaps the wealth of data indicating that lecturing (continuous exposition) is hazardous to students' grades is a result of the decline in training the professoriate to properly lecture/teach. Christine Gross-Loh is not the first to suggest that this may be a result of an increased emphasis on research at the expense of teaching.


Arum R, Roksa J. (2011). Academically Adrift: Limited Learning on College Campuses. Chicago, IL: University of Chicago Press.
Bart M. 2016. Lecture vs. Active Learning: Reframing the Conversation [internet]. Faculty Focus, June 24.
Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H, Wenderoth MP. 2014. Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8410–5.
Grow GO. 1991. Teaching learners to be self-directed. Adult Education Quarterly, 41(3), 125–149.
Gross-Loh C. 2016. Should Colleges Really Eliminate the College Lecture? [internet] The Atlantic, July 14.
Haave NC. 2016. Practical tuning - achievable harmony. Collected Essays on Learning and Teaching, 9: iii-x.
Pocklington T, Tupper A. 2002. No Place To Learn: Why Universities Aren’t Working. Vancouver, BC: UBC Press.