New Publication: Exploring How Teachers Support Students’ Mathematical Learning in Computer-Directed Learning Environments

The Publication: https://www.emerald.com/insight/content/doi/10.1108/ILS-07-2019-0075/full/html 

My recently published paper (with Melissa Boston and Mary Kay Stein) in Information and Learning Sciences argues that those implementing computer-directed learning environments in formal education settings should consider the role of teachers in that work. In a blog post published on MIT’s Open Learning site, I discuss this and other important lessons learned in our research.

Please give the article and the blog a read and email me with any questions, thoughts, or suggestions. Looking forward to having this work out in the world.


Learning Engineering: Some Thoughts Headed Into IEEE ICICLE

Background:

Professionals in higher education and educational technology, along with the Institute of Electrical and Electronics Engineers (IEEE), have recently begun an effort to advance the field of learning engineering. While the profession of learning engineering might be new to some, the idea itself has a long history, dating back to the Nobel Prize-winning economist and cognitive psychologist Herbert Simon.

In 1967, Simon wrote, “The learning engineers would have several responsibilities. The most important is that, working in collaboration with members of the faculty…they design and redesign learning experiences in particular disciplines.”

Fast forward to April 2016, when the argument for learning engineering was taken up in the Online Education Policy report at MIT authored by Karen Willcox and Sanjay Sarma. In the report, which I highly recommend everyone read, the work of learning engineers is described as “integrating their knowledge of a discipline with broad understanding of advanced principles from across the fields of education” to “create new learning experiences from scratch and to integrate new technologies and approaches into existing experiences.” Further, the report argues for “expanded use of learning engineers and greater support for this emerging profession” as one potential catalyst for transforming teaching and learning in higher education.

It should be noted that around this same time, many other organizations were working to advance the application of the learning sciences and human-computer interaction through the creation of programs like Carnegie Mellon University’s (CMU) Master of Educational Technology and Applied Learning Science (METALS) program. Within the last year, the program has officially taken on the goal of training “graduate students to become learning engineers and LX (learning experience) designers” (https://metals.hcii.cmu.edu/).

More recently, Chris Dede, John Richards, Bror Saxberg, and others (including the previously mentioned Sanjay Sarma) published the book Learning Engineering for Online Education: Theoretical Contexts and Design-Based Examples. The book pushes for an approach to the work of learning engineers that leverages design-based work grounded in the use of data, especially the kinds of complex big data that have become associated with online learning platforms.

(Disclaimer: I am aware that others have been thinking and writing about the field of learning engineering, and I do not want readers to think this is an exhaustive account of how the field has developed and is developing! I am simply attempting to provide some background. That said, I would be thrilled to hear from others about the work they have done in this area. Please email me at kesslera@mit.edu with your thoughts.)

My Developing Ideas:

Right around the time that Willcox and Sarma were writing their report on learning engineering at MIT, I was a graduate student at the University of Pittsburgh. I was working with my mentor, Jennifer Cartier, and fellow graduate student Danielle Ross (Danielle is now at Northern Arizona University) when we started using the term instructional engineers to describe the preservice science teachers we were teaching and training. The term, which was partially inspired by the work of Simon, was born out of a need to better describe a mindset we hoped to develop in our preservice teachers. At the center of this mindset was the belief that teachers have the professional authority to creatively design and implement educational experiences, and that this work mirrors the kind of complex problem solving engineers are often tasked with.

I spent a lot of time thinking about the metaphor of teachers as instructional engineers and included some of that thinking in my dissertation, where I wrote, “Acting as an instructional engineer requires teachers to engage in work that is very different from the traditional ideas associated with teaching…thinking about the role of teacher and instruction in this way has interesting consequences for how we as a field think about teachers and teaching.” I was attempting to make an argument for why and how teachers, and science teachers specifically, needed to move beyond simply implementing curricular materials as they were initially constructed and instead adapt or build on those materials to engineer learning experiences that encourage deeper student engagement with the science content.

New Setting and New Perspective:

So, what does my previous thinking about instructional engineering have to do with learning engineering? About a year ago I transitioned to my current position and began working with the amazingly accomplished group within Open Learning. As part of my onboarding I read the Willcox and Sarma piece and realized that one of the main differences in my current work would be dealing with issues of scale and context. In my initial thinking about instructional engineers, I had been focused on the work of a single person, the teacher, and the context in which they were situated, their classroom. I still believe much of my thinking about instructional engineering was important (I will probably have to write a different blog post on that). However, what I had failed to realize at the time was that other teaching contexts (e.g., online teaching and learning) might include larger and more distributed groups of learners. This structure presents different types of challenges associated with designing learning experiences, collecting evidence of learning, and iteratively improving instruction. Thus, enter the field of learning engineering.

One example of learning engineers comes from within the Office of Open Learning at MIT. Through a partnership between Open Learning and academic departments throughout the university, a set of digital learning scientists and digital learning fellows collectively make up the Digital Learning Lab (DLL). Each DLL member holds a PhD or equivalent in their academic discipline. This structure was a way to operationalize the vision for changing higher education laid out by Willcox and Sarma. One of the most interesting parts of this work for me is that the members of the DLL community are tasked with developing, designing, implementing, and iteratively improving instruction facilitated through the MITx platform, a local instance of the Open edX platform that is housed and run within Open Learning. This focus on using a single platform, which itself can be partially edited and developed by members of the DLL community, to transform instruction has resulted in a number of residential innovations. The Digital Learning Lab makes up a unique community of practice and can, in my mind, act as an exemplar for how learning engineering teams can exist in higher education. If you are interested in their work, I encourage you to check out their webpage, send me an email, or attend the ICICLE Conference on Learning Engineering at George Mason University on May 20-23. I will have the pleasure of discussing the work I have been doing with this group and MIT’s approach to learning engineering more broadly.

IEEE ICICLE:

With a number of prominent institutions and thought leaders bringing the work of learning engineering to the forefront, the time seems right for some decisions to be made about what constitutes the work of learning engineers and how they are trained. The ICICLE conference is a place where steps toward establishing those decisions will be taken.

I am personally excited that the conference will be an opportunity to share the work of the Design for Learning SIG. Having been a part of the SIG for the last couple of months, I look forward to sharing the ideas we have been developing about the competencies and dispositions of learning engineers. I also look forward to learning more about how other contexts (mainly those in the educational technology industry) envision the work of learning engineers, and I am excited to discuss what all of these ideas mean for how we support the learning of a diverse and ever-changing student population.

 

Note: Thanks to Sheryl Barnes for her feedback on this post.


AERA 2019 Roundtable


The research team aimed to develop and implement a single course intervention that would move teachers toward encouraging student engagement through or within games, beyond testing memorization of content. To achieve this, the Phase 2 intervention needed three key affordances:

  1. The intervention needed to allow all of the teachers to have a shared experience and a common language so they could share ideas with one another and recognize the value of games for collaborative learning. It seemed necessary to provide an intervention that would appeal to all teachers, no matter what grade or subject they taught.
  2. The intervention needed to allow the teachers to consider how collaborative learning with games could be integrated into their own lesson plans. Educators often have a hard time imagining what instruction they have not themselves experienced might look like, so experiential learning activities are often used in teacher training (Kolb, 2014). The intervention was intended to be a model of what collaboration could look like beyond the typical “answer a Kahoot question and talk with a partner” type of instruction demonstrated in many of the Phase 1 lesson plans.
  3. The activity needed to provide an opportunity to reflect on why collaboration within the context of the game might be important for developing students’ shared understanding of content. Further, the intervention needed to allow the teachers to begin developing a way of thinking about the connections between the parts of the game that supported interactions, the parts that did not afford purposeful interactions, and the work the teachers would need to do to foster collaboration in support of students’ learning of specific content.


The Phase 2 results showed one statistically significant change from Phase 1: a 31.5% gain in the use of technology that went beyond simple memorization tasks. This suggests that teachers in the Phase 2 group included game-based technologies that allowed their students to engage with the content of the lesson in ways that went beyond simple recall or memorization. This was a primary goal of the second intervention, and a statistically significant result suggests the instructional intervention achieved at least some of the research team’s vision for transforming teachers’ work.

The Phase 2 results also included a number of gains that, while not statistically significant, indicated movement in the positive direction for the desired outcomes: an 18.8% gain in teachers using the game-based technology for communication or collaboration, a 17.8% gain in teachers planning for collaboration facilitated through the technology, and a 19.6% gain in teachers explicitly building in scaffolds for collaboration during the lesson.

While these positive gains suggest movement toward teachers using game-based collaborative instructional approaches that were better aligned with course goals, the results also suggest room for further improvement. While the statistically significant improvement on code 4b is positive, nearly half of the teachers were still using technology for simple memorization or review-based tasks. This high percentage, combined with limited to no movement in teachers preplanning for collaboration or considering collaboration as part of their SMART goal formation, suggests that the impact of the intervention on teachers’ overall ability to implement collaborative learning experiences using game-based technologies was limited.
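For readers curious about what sits behind a claim like the code 4b result, here is a minimal sketch of how a between-phase gain in a coded proportion could be checked with a two-proportion z-test in Python. The counts below are hypothetical placeholders, not the study’s data, and the actual analysis may well have used a different test.

```python
# Minimal sketch of a two-proportion z-test (hypothetical counts, not the study's data).
from statsmodels.stats.proportion import proportions_ztest

# Number of teachers whose lessons were coded as using game-based technology
# beyond memorization, out of all teachers coded in each phase (made-up numbers).
phase1_hits, phase1_n = 6, 30
phase2_hits, phase2_n = 16, 31

# Compare the Phase 1 and Phase 2 proportions.
z_stat, p_value = proportions_ztest(
    count=[phase1_hits, phase2_hits],
    nobs=[phase1_n, phase2_n],
)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p < .05 would support a significant gain
```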


Research Is Like A Marathon

In a tweet yesterday I wrote:

At the time I was excited to have just submitted an IRB application for a new project. Writing the document and getting the required signatures took several hours of work spread over a week or two. That work came after developing the project idea with my colleague and spending a couple of hours on prep work. The thought and feeling I had at the time, which resulted in the tweet, was how happy I was that this part of the work was done. Those of us who conduct educational research know how ridiculous that is, because I haven’t actually started the “research.” (You can’t until you get IRB approval!) (Note: I do believe that if you take the IRB process seriously, you have done a lot of the work necessary to set a research project on a path to success, and I believe we have done that in this case!)

Anyhow, I was thinking about how long a process research can be and how tedious the various steps in that process feel. When you finally overcome one of them, you feel like you have accomplished something. Then you look at the whole roadmap of a project and realize, “Man, I have a long, LONG way to go!”

Thus, my tweet comparing submitting my IRB forms for approval to completing the first uphill portion of a marathon. Upon reflecting on this metaphor, I realized I had missed something important. Educational researchers almost never have just one project underway at a time. The best of us, which I am not (although I hope to one day be), have multiple projects in various stages of development or implementation at any given time. The best I could come up with for the marathon metaphor would be to suggest that I just finished the first uphill portion of the run while also dictating a novel (writing papers), trying to capture the run with a GoPro attached to a selfie stick (collecting other data), tweeting my split times so my “community” can keep up with my progress (cough, cough, writing this blog), and providing coaching to the people running next to me (teaching).

I realize multitasking is a required part of our jobs; it just makes me wonder how successful we can truly be at running the marathon.


Undergraduate Teaching Assistants

My educational research life has officially come full circle. Please indulge me this story before I get into some of the ideas I have been exploring around Undergraduate Teaching Assistants (UTAs).

In the fall of my freshman year (a long, long time ago…in a lecture hall far, far away), I was struggling academically. My chemistry professor at the time, Dr. Wagner, offered a set of Science Orientation Workshops that were proctored by undergraduate students. The purpose of the workshops was to support the development of skills needed to be successful in college-level science (and college in general). Struggling as I was, I attended these workshops regularly. The undergraduate proctor at the time, Kelly Whitaker, was kind, supportive, and had a genuine interest in helping her fellow undergraduate students improve.

Ultimately, these workshops helped me transform my study habits and turn my academic work around. Seriously, if it were not for Kelly and those workshops, I am not sure I would have stuck it out as a science major. Fast-forward to the end of my freshman year, when Kelly recommended me to Dr. Wagner for one of the workshop proctor positions for the following year. It was through those workshops and working with Dr. Wagner, Kelly, and many other great undergraduate students that I got my first taste of being an educator and conducting educational research. By my senior year, we had collected enough data to submit a paper on the work to an international science teaching conference. That presentation, focused on the work of undergraduates teaching undergraduates, was my first peer-reviewed conference paper and is ultimately what sparked my interest in pursuing education research later in my career.

Fast forward to last week, when I was asked to be part of a project that involves the use of undergraduate teaching assistants as mentors. One of the stated goals of the work is to “support the development of fundamental skills…including study habits.” Hard to believe, but at least a portion of the goals of this current work will mirror the work we did when I was a senior undergraduate chemistry major.

This brings me to some of the things that need to be considered when thinking about the use of UTAs. The literature on the subject has grown since my initial work. These are some of the key factors (and citations) I have found in my most recent exploration of the topic.

  • First and foremost, while some differences in the work conducted by UTAs and graduate teaching assistants (GTAs) have been documented (Weidert et al., 2012), the use of UTAs in STEM courses has been shown to be as effective as the use of GTAs across a number of measures, including student perceptions and final course grades (Crowe et al., 2014; Chapin et al., 2014; McCreary et al., 2006).
  • Key factors associated with the perceived effectiveness of UTAs in the selection and training process include helpfulness in answering students’ questions, accessibility to students, and qualifications (Filz & Gurung, 2013).
  • Although there is limited evidence that requiring high grades (an A or B) is related to successful instruction, many programs require UTAs to have earned an A or B in the course for which they wish to teach. Despite this trend, research from Philipp and colleagues points out that “content knowledge alone, while certainly necessary, isn’t always sufficient to be a good instructor” (2016, p. 81).
  • UTAs, much like GTAs, need support to develop the skills necessary to teach effectively. Specifically, “UTAs need both content knowledge support and pedagogical training” in order to develop skills around concepts like “metacognition, formative assessment, and questioning” (Philipp et al., 2016).
    1. Training can take a number of different forms, including pre-term training, continuous training throughout the term, and peer feedback and support structures during and after training (Filz & Gurung, 2013; Hogan et al., 2007). Further, training can be specific to the content or more general across content areas (Verleger & Velasquez, 2007).
    2. At the center of many of these training programs is an organizing learning theory that drives instructional choices, along with specific examples of that approach connected to the content the UTA will teach (Hogan et al., 2007; Philipp et al., 2016; Wheeler et al., 2015).
  • Beyond teaching the assigned recitation section, many programs require UTAs to attend the full class or lecture and a weekly meeting with the lead professor and other UTAs (Weidert et al., 2012) in an attempt to keep recitation instruction consistent across sections.

I would really like to hear other people’s thoughts on the topic. Feel free to send me an email or reach out on Twitter.

References:

Chapin, H. C., Wiggins, B. L., & Martin-Morris, L. E. (2014). Undergraduate science learners show comparable outcomes whether taught by undergraduate or graduate teaching assistants. Journal of College Science Teaching, 44(2), 90-99.

Crowe, J., Ceresola, R., & Silva, T. (2014). Enhancing student learning of research methods through the use of undergraduate teaching assistants. Assessment & Evaluation in Higher Education, 39(6), 759-775.

Hogan, T., Norcross, J., Cannon, T., & Karpiak, C. (2007). Working with and training undergraduates as teaching assistants. Teaching of Psychology, 34, 187–190.

McCreary, C. L., Golde, M. F., & Koeske, R. (2006). Peer instruction in the general chemistry laboratory: Assessment of student learning. Journal of Chemical Education, 83(5).

Philipp, S. B., Tretter, T. R., & Rich, C. V. (2016). Development of undergraduate teaching assistants as effective instructors in STEM courses. Journal of College Science Teaching, 45(3), 74.

Verleger, M., & Velasquez, J. (2007, October). An engineering teaching assistant orientation program: Guidelines, reactions, and lessons learned from a one day intensive training program. In 37th Annual Frontiers in Education Conference - Global Engineering: Knowledge Without Borders, Opportunities Without Passports (FIE '07) (pp. S4G-3). IEEE.

Weidert, J. M., Wendorf, A. R., Gurung, R. A., & Filz, T. (2012). A survey of graduate and undergraduate teaching assistants. College Teaching, 60(3), 95-103.

Wheeler, L. B., Maeng, J. L., & Whitworth, B. A. (2015). Teaching assistants' perceptions of a training to support an inquiry-based general chemistry laboratory course. Chemistry Education Research and Practice, 16(4), 824-842.

 


Unplugging While On Vacation


I just returned from a wonderful vacation with my family. (I am happy to share more pictures and stories if you ask.) The point of this post is to share some of my thoughts on vacation and taking opportunities to “get away”.

My vacation was amazing in that I did not use my phone for seven days. I purposefully turned my phone to airplane mode (I still wanted to be able to use it to take pictures) and did my very best to enjoy my family and our wonderful setting(s). The first day or two felt odd; I wasn’t checking my email or social media every 10 minutes. About halfway through day two, I really stopped thinking about it. I was able to just enjoy the beautiful setting, my wife, my kids (well, as much as you can enjoy traveling with kids!), and extended family.

Besides checking my phone, you know what else I didn’t do? Work!

This was the big vacation we had been planning for almost two years. After the stress of the move to Boston and getting ramped up in my new job, I took full advantage of being offline and away. Upon my return to the “real” world, I opened Twitter and saw a post from an associate professor on a beach vacation talking about how they had just finished a conference proposal, read some things for a class they plan on teaching this fall, and done some writing. While this might be relaxing and enjoyable for that person, I worry that the expectation for many in the academic community is that our jobs require 24-hour, unwavering commitment. I am unconvinced this is healthy or productive.

So, I write to tell you that I scheduled and took a vacation. I didn’t respond to a single email. I didn’t read a single academic paper. I didn’t write a single word of a paper or conference proposal.

I played and went swimming with my kids. I spent time talking with my wife about things unrelated to our jobs. I had wonderful adventures with my family, took lots of pictures, and ate too much food.

Today I return to work refreshed. Yes, I have a lot of emails that need responses. Yes, I have a bunch of reading and writing to do. You know what? I am going to do all of those things knowing I made the most of my time away and am lucky to have been able to get away. We all need to remember to engage in as much self-care as possible. I am certain that looks different for each of us. Just remember, it is OK to unplug if you need and want to. In my opinion, the reset and relief that come from time away are wonderfully powerful compared to the stress of trying to do everything all the time!

Am I a better learning scientist than when I left? I am not sure. I do know I am a happier person, and that usually results in good things for my work.


Video for Instruction and Feedback

A number of my colleagues recently returned from the International Conference of the Learning Sciences. Through the wonders of the internet (mostly Twitter), I was able to follow along with a number of presenters and sessions. Special thanks to everyone who tweeted out session ideas and info; those of us who couldn’t make the trip really appreciate your sharing! Anyhow, one of the tweets that got my attention announced the recipients of the JLS article of the year award.

Anyone who has spent any time in the Learning Research and Development Center (LRDC) at the University of Pittsburgh, as I did in graduate school, knows Miki Chi and her work. (Let’s be honest, her work is known and respected well beyond the LRDC community!) While LRDC has a number of respected alumni, Miki’s work and reputation are right at the top of the list. Seeing her name and the article title immediately piqued my interest. You see, over the last couple of weeks I have been doing a lot of thinking about the role of video in online and blended instruction. As such, I immediately found and read the article (located here). It did not disappoint!

Miki and her colleagues (Seokmin Kang and David L. Yaghmourian) explored whether college-age students learned more from watching dialogue-videos, in which a tutor was recorded tutoring a tutee, or from monologue-videos, in which a tutor simply provided a lecture-style presentation of the content. Most importantly, from my perspective, the paper took the work a step further and attempted to answer why the results occurred. The methodology is very good and you should go read it. Seriously, go read this paper!

For the purposes of this post, I have bulleted some of the key findings I am interested in. The paper has more ideas and results, so again, go read it!

  • Students who watch a tutoring video show learning gains similar to those of the tutees in the dialogue-videos (a finding that confirms previous research; Chi et al., 2008).
  • Observers learn more when watching tutorial dialogue-videos than when watching lecture-style monologue-videos.
    • In fact, the monologue-observers showed no significant pre-post gains on transfer-type questions.
  • Tutees in the videos can serve as a model of learning for the observing students (a “zone of representational match”).
  • In dialogue-videos, the tutees tend to make errors and struggle, which is followed by feedback from the tutor. These exchanges are defined as Conflict Episodes.

These results have me considering a number of important implications and asking some questions (mostly related to my current setting):

  • It seems like a good idea to consider connecting videos of tutoring sessions with worked examples in our courses. We have long known about and appreciated the idea of worked examples (Atkinson et al., 2000), and many of our courses include them in their design. Including tutor dialogue-videos like those described in the article could provide a really powerful resource for our residential and global students.
  • How, if at all, can we transform residential pedagogical approaches to consider dyad interactions that are based on recorded videos as an active learning approach during class time? Might we also want to consider this approach for recitation work?
  • How many Conflict Episodes, and at what frequency, are necessary to support students’ knowledge development within a single video? Is this even the right question? Maybe it isn’t the frequency but some other feature of the event that matters.
  • If we better understand Conflict Episodes, can we “script” tutor-tutee videos to streamline the learning process for students watching the video?
  • What scaffolds (worksheets, questions, prompts) might be beneficial for drawing students’ attention to important aspects of the videos? How, if at all, might this help improve student use of the videos?
  • What, if any, scaffolds can be built into systems (learning platforms) to support students in exploring tutoring videos as if they were interacting with another student to construct ideas and knowledge? Might we look to some of the Intelligent Tutoring Systems literature for answers?

Thinking about the use of video in instruction reminded me of some other work presented by my friend and colleague Josh Rosenberg and his collaborators (You-kyung Lee, Kristy A. Robinson, John Ranellucci, Cary J. Roseth, and Lisa Linnenbrink-Garcia).

(Yes, Josh keeps appearing in this blog. Yes, we talk often. No, I don’t have any idea why. Just kidding, Josh!)

The project, presented by this group at AERA 2018, explored patterns of engagement in a flipped-classroom approach for a large (272-person) undergraduate anatomy course. In the typical flipped-classroom approach, the students were assigned to watch videos (mostly monologue-style lecture videos), with in-class activity focused more on small-group work. The preliminary results are interesting and can be found on Josh’s site.

I found these two results really interesting:

  • “A strong negative relationship existed between students increasing video watching just prior to the exam” (“cramming”) and achievement on the exam (page 22).
  • “The corollary of this finding is that the achievement of students who do not increase their rate of viewing is higher, suggesting that a more consistent pattern of viewing has benefits to students’ achievement” (page 22).

Josh and his colleagues go on to say, “Our results also highlight the importance of studying engagement using growth curve modeling. Specifically, we observed that it was the pattern of watching video lectures over time that predicted students’ learning outcomes. In this way, these findings also highlight the benefit of a growth modeling approach to understanding the antecedents and outcomes of students’ achievement” (page 23). I like the idea of exploring engagement with videos over time and wonder if we might use it as an outcome for evaluating design and pedagogical decisions. Connecting these ideas with assessment outcomes is an added bonus; however, I wonder what the best types of assessments would be. Might problems associated with course materials or problem sets be a helpful middle step between videos and exams?
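To make the growth-modeling idea a bit more concrete, here is a minimal sketch of a simple growth curve model (a linear mixed-effects model with per-student random slopes) fit to weekly video-watching counts, using Python and statsmodels. The data frame and variable names are hypothetical, and the actual analysis by Josh and his colleagues may differ in model form and software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data (hypothetical): one row per student per week with
# the number of lecture videos watched that week.
data = pd.DataFrame({
    "student_id":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "week":           [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    "videos_watched": [3, 4, 5, 6, 2, 2, 1, 1, 5, 6, 8, 9, 4, 4, 5, 5],
})

# Growth curve model: a fixed effect of time (week) plus a random intercept
# and random slope for each student, so every student gets their own
# estimated viewing trajectory.
model = smf.mixedlm(
    "videos_watched ~ week",
    data,
    groups=data["student_id"],
    re_formula="~week",
)
result = model.fit()
print(result.summary())

# The per-student slopes (individual growth rates) could then be related to
# exam scores to ask whether viewing patterns over time predict achievement.
```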

One final note about video for instructional purposes comes from Michael Henderson and Michael Phillips at Monash University, Australia. We often think of video as a way to convey or demonstrate new(er) content; that is the main premise of the two articles discussed above. The group from Monash took a very different approach to the use of video. Their article, “Video-based feedback on student assessment: scarily personal” (located a number of places, including here), discusses results from a study in which course instructors provided five-minute video feedback on final written assignments (papers) in place of traditional written or typed comments. This approach to feedback builds on a number of literature reviews on feedback and makes a novel and important contribution to the literature base. The study reports an overwhelmingly positive response from students (51 of 52 responses were positive) through solicited and unsolicited emails. Further, 91% of survey respondents (n=74) indicated that the instructors should continue using video feedback in future versions of the course.

OK, some final thoughts and questions I am kicking around:

  • Is it possible that engagement patterns with instructional video in blended (flipped) settings can be supported or improved through in-course pedagogical moves? What types of moves will work best? Josh and his colleagues have given the research community a really interesting methodology for exploring the impact of such moves.
  • Similarly, what types of design decisions, tied to pedagogy or not, might impact students’ video-watching patterns?
  • If we include video feedback on assignments, how, if at all, does that support student engagement with other videos?
  • Can peer video feedback serve as a productive form of engagement in larger MOOCs?
  • Might we be able to connect ideas of modeling expert problem solving or writing in our video feedback, and would this make the feedback more valuable?

That’s it for now. I am interested to hear people’s thoughts. Please feel free to reach out via Twitter or email!

 

References:

Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-214.

Chi, M. T. H., Roy, M., & Hausmann, R. G. H. (2008). Observing tutorial dialogues collaboratively: Insights about human tutoring effectiveness from vicarious learning. Cognitive Science, 32(2), 301–341.

Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66.

 

 


A Plan

I was recently having a conversation with a couple of colleagues when I realized the ideas we were bouncing back and forth might be of immediate interest to people beyond our group. These ideas weren’t fully developed and were in no way ready for a conference proposal or presentation. However, I really felt they were worth sharing, if for no other reason than to be told we were wrong and have someone provide a reference. After recently posting my first blog post (The Move to Boston) in a very long time, I realized that my personal site was probably the place to share such ideas and try to interact with other educators, academics, and anyone willing to read what I post.

Dating back to my doctoral program days in Pittsburgh, I have found it incredibly frustrating that we (the collective academic community) don’t often get to see how work evolves over time. It’s not until we write a conference proposal or paper, at which point we are well past being able to remember all the ideas, steps, and missteps that got us to our final papers, that the larger academic community gets a first look at our work. So, my new goal is to use this space as a place to compile my thoughts as they unfold. In no way am I claiming to have all the answers with each post. I am simply trying to engage with people who might find the work and ideas interesting. My hope is that some of my posts will provide a little insight into my own developing ideas and that others might also be interested in engaging with those thoughts. Who knows, maybe sharing my developing ideas will help people implement some of this work in meaningful ways. At a minimum, I am hopeful this will speed up the pace of my own implementations and research.

A couple of quick disclaimers. First, the ideas I share on this site are my own and should not be considered endorsed by my full-time employer. If you are interested in those types of things, you should check out the official site(s) they use to disseminate formal announcements, results, and ideas. Second, the idea of sharing early stages of work is not a new one. I remember having discussions about similar processes with grad school friends, especially Jolene Zywica during her time with Working Examples. Some newly minted PhD recipients have engaged in this kind of early sharing of dissertation ideas and data (examples include Joshua Rosenberg). The point is, I realize the notion of sharing early-stage work and ideas is not a particularly new one. Finally, this is meant to be a collegial and productive space for learning and growth. Feel free to comment, email, or tweet me your ideas about the posts. I am interested to learn your thoughts on how my ideas can and should be improved.


The Move to Boston

The time has come for me to start actually using this site. Let’s see how this goes!

As some of you may or may not know, I started a new position (Senior Learning Scientist at MIT) in May of 2018. The job has been everything I hoped it would be, and I am ecstatic about the opportunities ahead. The only downside has been the need to leave my family behind in Chicago while I started this work. (Selling a house, finding a new place to live in a new city, and relocating kids, an entire household of things, and a dog take time and planning!)

The academic community spends a lot of time talking about the stresses of the job. I often find these conversations focus on the professional challenges or the general lack of time for family. I don’t often hear about the challenges faced by our families, especially as our loved ones are usually the people stuck dealing with our late nights, email obsessions, poor health, and generally testy dispositions (these might just be descriptors of me!). Add being left to manage a household, children, and general life to the list of things my wife has dealt with in the last two months.

Now might be a good time to mention that my wife is amazing! I am fully aware that people are supposed to say things like that. I ACTUALLY MEAN IT! I will save all of her personal and professional accolades for another post (or her own blog someday). Needless to say, she is exceptional at her very important job, a great mom, and somehow manages to find enough energy to keep encouraging and supporting me to pursue my dreams. It wasn’t easy for her to say yes to this move. She knew it would mean staying in Chicago and being responsible for much of our collective lives while working full time, with me living a 16-hour car ride away. She said yes to this opportunity because she knew it was a once-in-a-lifetime chance for me. She said yes because we had a lot of hard, complicated, and uncomfortable conversations about what this opportunity meant for me, us, and our family. She said yes because I am lucky enough to have a partner in life who cares about me and my well-being as much as her own. Words on this virtual page can never fully encapsulate how much I love my wife!

Anyhow, back to the point of this post: after a lot of stress and effort, my family is finally making the big move to Boston. It took every bit of effort we collectively had to get the house packed. The two kids even got into the spirit. Well, the 5-year-old did. I am not really sure how much could be expected of the youngest at 22 months.

After my wife and the kids flew out East (YES, she flew with both kids! Alone!), I spent hours packing up the last 2% of our things. If you haven’t ever moved a long distance, the last 2% is the absolute worst. It is usually made up of all the stuff you really need to live day-to-day and your most cherished possessions. This makes packing it hard, stressful, and time-consuming. Way more time-consuming than anyone can or should ever really imagine. For me, this packing was followed by 5 hours of sleep, 3 hours of yard work, 7 hours of making sure the movers didn’t break anything, 3.5 hours of cleaning, and 6 hours of driving with the dog. I arrived at my mom’s house in Ohio at 3 a.m. and slept for 6 hours (more like passed out). Finally, after another 9 hours in the car, I made it to Boston and my new home and was reunited with my family.

I am not going to glorify this process. My wife and I have had some tense exchanges. No one is enjoying sleeping on air mattresses while our worldly belongings, including our comfy beds, are shipped halfway across the country. The stress of making sure the kids have proper care (they begin daycare for the summer months tomorrow morning) took a pretty heavy toll on my mental well-being over the last couple of weeks. I won’t even get into the details of turning utilities on and off, changing billing information, registering the car, and trying to find new doctors.

With all these challenges, this move would not have been possible without a lot of support. First and foremost, a huge thanks goes to my wife and kids. They have been wonderful throughout this process. Next, my family, especially my brother and soon-to-be sister-in-law. Without their generosity (I have been sleeping in their spare bed for two months), this move would have been an even bigger financial and logistical challenge. Finally, my new employer, colleagues, and boss have been amazingly accommodating and understanding of the challenges I have faced in making this move. The kindness they have shown me is a pretty strong indicator that I made the correct decision in accepting this job.

Fingers crossed all our stuff arrives soon and unharmed. While unpacking and settling into our new routines will present new challenges, at least I finally have my wonderful family around. They really do make everything much better!

 

 
