A number of my colleagues recently returned from the International Conference of the Learning Sciences. Through the wonders of the internet (mostly Twitter) I was able to follow along with a number of presenters and sessions. Special thanks to everyone who tweeted out session ideas and info. Those of us who couldn’t make the trip really appreciate your sharing! Anyhow, one of the tweets that caught my attention announced the recipients of the JLS Article of the Year award.
Best paper award for @JLearnSciences … will be open access this summer. #ICLS2018 pic.twitter.com/JRZd8J0cCX
— Ilana Horn (@ilana_horn) June 27, 2018
Anyone who has spent any time in the Learning Research and Development Center (LRDC) at the University of Pittsburgh, as I did in graduate school, knows Miki Chi and her work. (Let’s be honest, her work is known and respected well beyond the LRDC community!) While LRDC has a number of respected alumni, Miki’s work and reputation are right at the top of the list. Seeing her name and the article title immediately piqued my interest. You see, over the last couple of weeks I have been doing a lot of thinking about the role of video in online and blended instruction. As such, I immediately found and read the article (located here). It did not disappoint!
Miki and her colleagues (Seokmin Kang & David L. Yaghmourian) explored whether college-age students learned more from watching dialogue-videos, in which a tutor was recorded tutoring a tutee, or from monologue-videos, in which a tutor simply provided a lecture-style presentation of the content. Most important, from my perspective, the paper took the work a step further and attempted to answer why the results occurred. The methodology is very good and you should go read it. Seriously, go read this paper!
For the purposes of this post I have bulleted some of the key findings I am interested in. The paper has more ideas/results and you should go read the paper!
- Students who watch a tutoring video show learning gains similar to those of the tutees in the dialogue-videos (confirming previous research; Chi et al., 2008)
- Observers learn more from watching tutorial dialogue-videos than from lecture-style monologue-videos.
- In fact, the monologue-observers showed no significant pre-post gains on transfer-type questions.
- Tutees in the videos can serve as a model of learning for the observing students – a “zone of representational match”
- In dialogue-videos the tutees tend to make errors and struggle, which is followed by feedback from the tutor. The authors define these moments as Conflict Episodes
These results have me considering a number of important implications and asking some questions (mostly related to my current setting):
- It seems like a good idea to consider connecting videos of tutoring sessions with worked examples in our courses. We have long known and appreciated the value of worked examples (Atkinson et al., 2000), and many of our courses include this work in their design. Including tutor dialogue-videos, like those described in the article, could provide a really powerful resource for our residential and global students.
- How, if at all, can we transform residential pedagogical approaches to consider dyad interactions that are based on recorded videos as an active learning approach during class time? Might we also want to consider this approach for recitation work?
- How many Conflict Episodes, and at what frequency, are necessary to support students’ knowledge development within a single video? Is this even the right question? Maybe it isn’t the frequency but some other features of the event that matter?
- If we better understand Conflict Episodes, can we “script” tutor–tutee videos to streamline the learning process for students watching the video?
- What scaffolds (worksheets, questions, prompts) might be beneficial for students watching the videos to draw attention to important aspects of the video? How, if at all, might this help improve student use of the videos?
- What, if any, scaffolds can be built into systems (learning platforms) to support student exploration of tutoring videos as if they were interacting with another student to construct ideas/knowledge? Might we look to some of the Intelligent Tutoring Systems literature for answers?
Thinking about the use of video in instruction reminded me of some other work presented by my friend and colleague Josh Rosenberg and his collaborators (You-kyung Lee, Kristy A. Robinson, John Ranellucci, Cary J Roseth, Lisa Linnenbrink-Garcia).
(Yes, Josh keeps appearing in this blog. Yes, we talk often. No, I don’t have any idea why. Just kidding Josh!)
The project, presented by this group at AERA 2018, explored patterns of engagement in a flipped classroom approach for a large (272-person) undergraduate anatomy course. In the typical flipped classroom approach the students were assigned to watch videos (mostly monologue-style lecture videos), with in-class activity focused more on small group work. The preliminary results are interesting and can be found on Josh’s site.
I found these two results really interesting.
- “A strong negative relationship existed between students increasing video watching just prior to the exam” (“cramming”) and achievement on the exam (p. 22).
- “The corollary of this finding is that the achievement of students who do not increase their rate of viewing is higher, suggesting that a more consistent pattern of viewing has benefits to students’ achievement” (p. 22).
Josh and his colleagues go on to say, “Our results also highlight the importance of studying engagement using growth curve modeling. Specifically, we observed that it was the pattern of watching video lectures over time that predicted students’ learning outcomes. In this way, these findings also highlight the benefit of a growth modeling approach to understanding the antecedents and outcomes of students’ achievement” (p. 23). I like the idea of exploring engagement with videos over time and wonder if we might use that as an output to explore design and pedagogical decisions. Connecting these ideas with assessment outcomes is an added bonus; however, I wonder about the best types of assessments. Might problems associated with course materials or problem sets be a helpful middle step between videos and exams?
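For readers curious what a growth-curve approach looks like in practice, here is a minimal sketch: a mixed-effects model gives each student a random intercept and slope for weekly video views, and each student’s fitted slope summarizes their viewing trajectory (a steep late-course slope flags the “cramming” pattern). The data, column names, and numbers below are entirely invented for illustration; this is not the authors’ analysis or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated long-format data: weekly video views for 50 students over 10 weeks.
# Half watch steadily; half watch little until a late spike ("cramming").
rows = []
for student in range(50):
    steady = student < 25
    for week in range(1, 11):
        if steady:
            views = rng.poisson(5)
        else:
            views = rng.poisson(1 if week < 9 else 20)
        rows.append({"student": student, "week": week, "views": views})
df = pd.DataFrame(rows)

# Growth curve: views as a function of week, with a random intercept
# and random slope per student (re_formula="~week").
model = smf.mixedlm("views ~ week", df, groups=df["student"], re_formula="~week")
fit = model.fit()

# Each student's fitted slope (fixed effect + their random effect)
# describes their viewing trajectory over time.
slopes = {s: fit.fe_params["week"] + re["week"]
          for s, re in fit.random_effects.items()}
```

In this toy setup the “crammers” come out with much larger slopes than the steady viewers, which is the kind of per-student trajectory one could then relate to exam performance.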
One final note about videos for instructional purposes that crossed my mind comes from Michael Henderson and Michael Phillips at Monash University, Australia. We often think of video as a way to convey or demonstrate new(er) content. This is the main premise of the previous two articles I have discussed. The group from Monash took a very different approach to the use of video. Their article, “Video-based feedback on student assessment: scarily personal” (located a number of places including here), discusses results from a study in which course instructors provided five-minute video feedback on final written assignments (papers) in place of traditional written or typed comments/feedback. This approach to feedback builds on a number of literature reviews on feedback and makes a novel and important contribution to the literature base. The study points out the overwhelmingly positive response from students (51 of 52 responses being positive) through solicited and unsolicited emails. Further, 91% of survey respondents (n=74) indicated that the instructors should continue using video feedback in future versions of the course.
Ok, some final thoughts and questions I am kicking around.
- Is it possible that different engagement patterns with video for instructional purposes in blended (flipped) settings can be supported or improved through in-course pedagogical moves? What type of moves will work best? Josh and his colleagues have given the research community a really interesting methodology for exploring the impact of such moves.
- Similarly, what type of design decisions, tied to pedagogy or not, might impact student video watching patterns?
- If we include the ideas of video feedback for assignments, how, if at all, does that support student engagement with other videos?
- Can peer-video feedback serve as a productive form of engagement in larger MOOCs?
- Might we be able to connect ideas of modeling expert problem solving or writing in our video feedback and would this make the feedback more valuable?
That’s it for now. I am interested to hear people’s thoughts. Please feel free to reach out via Twitter or email!
References:
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-214.
Chi, M. T. H., Roy, M., & Hausmann, R. G. H. (2008). Observing tutorial dialogues collaboratively: Insights about human tutoring effectiveness from vicarious learning. Cognitive Science, 32(2), 301–341.
Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66.