What's AI's impact on asynchronous online learning?
The vast majority of engagement and interaction with online learning experiences is asynchronous. Students engage in activities, complete tasks and interact not at a set time for a set duration, but across time, when they want to or are able to do so.
Whilst myriad types of activity, interaction and content can form part of asynchronous online learning, over the years a core menu of common activity types and associated online technologies has been established.
The most dominant activity types are quizzes and online discussion, along with other written tasks ranging from reflective journals to longer form essays.
In terms of communication, explanations and introductions from educators, text and video are still the main vehicles for conveying information to learners, although audio is becoming more popular.
There are of course many other activity types and associated technologies. These include digital canvases or boards like Padlet, Miro, Jamboard and Mural, annotation tools like Hypothes.is, and collaboration tools like Google Docs and Microsoft OneNote, as well as many others.
Inevitably, AI is permeating and will continue to permeate technologies used to support online learning activities. I recently explored how AI is impacting synchronous online learning and now I want to explore the impact on asynchronous activities and associated technologies.
Given the breadth of activities and technologies, I’m going to focus on three key areas: online quizzes, online discussion and video.
How is AI impacting online quizzes?
Whether you love them or feel that they trivialise education, quizzes are a mainstay of online learning. They’re used for summative and formative assessment, as a check for understanding or knowledge, and for retrieval practice, amongst other things.
A long-standing affordance is that they allow an educator to present a question and script feedback on learners’ responses that’s delivered automatically. But that affordance also comes with constraints, because quizzes are largely selected response activities, with the most common format being multiple choice questions (MCQs).
Whilst MCQs aren’t without value, writing effective questions and distractors isn’t a simple task, and online learning is littered with poorly designed questions and answers.
Whilst a growing number of technologies and add-ons use AI to help generate quizzes, this arguably misses a more valuable AI-driven evolution of online quizzes.
A common criticism of MCQs and other selected response formats is that they encourage guesswork. There’s a strong argument that they aren't as cognitively challenging or beneficial for learning compared to constructing your own response to a question. Also, constructed response activities arguably provide educators with more robust and rounded evidence of understanding.
However, AI-driven technologies are enabling opportunities for online quizzes that can offer feedback on constructed response or open input questions. Wildfire Learning is an example of a company that has used AI to semantically interpret learners’ open responses to online questions and provide feedback.
But the possibility of open input questions with automated feedback is one aspect of developments here, as AI assistants also offer the possibility of engaging learners in feedback dialogue.
Teresa Guasch et al., in their paper “Effects of feedback on collaborative writing in an online learning environment”, categorised feedback as corrective, suggestive and epistemic. One of the paper’s authors, Paul Kirschner, has expanded on this, referring to three categories:
Corrective feedback: feedback that directly addresses a wrong answer and communicates to the student what the right answer is and no more than that.
Directive feedback: feedback that gives direction on how things can be improved or a better way of doing things in the future.
Epistemic feedback: feedback that isn’t corrective or directive but aims to nudge learners to improve upon something or do it differently. Paul Kirschner and Mirjam Neelen, in their article on feedback, give these examples of epistemic feedback: “In this step, you seem to have made a mistake; considering X, what could you do differently?” or “Why did you choose to do it this way? Are there other approaches to doing it which might give a different or better answer?”
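To make the distinction between these feedback types concrete, here is a toy sketch that templates each category for a single quiz step. The wording and function name are my own illustration, not drawn from any particular product:

```python
def give_feedback(style: str, correct_answer: str, learner_step: str) -> str:
    """Return feedback in one of three styles: corrective, directive or epistemic."""
    if style == "corrective":
        # States the right answer and nothing more.
        return f"That's not quite right - the correct answer is {correct_answer}."
    if style == "directive":
        # Gives direction on how to do better next time.
        return f"Next time, check {learner_step} against the definition before answering."
    if style == "epistemic":
        # Nudges the learner to reconsider, without giving the answer away.
        return f"In {learner_step}, you seem to have made a mistake - what could you do differently?"
    raise ValueError(f"Unknown feedback style: {style}")

print(give_feedback("epistemic", "mitosis", "step two"))
```

Note that only the corrective template reveals the answer; the epistemic one deliberately withholds it.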
What AI brings to the mix is not just constructed response quizzes with automated feedback, but potentially a greater variety of feedback types, like epistemic feedback, that can enrich online quizzing.
In many ways, this is looking at AI-driven evolution through the lens of existing paradigms. Developments in AI mean that learners now have tools they can use to generate their own practice quizzes, and AI assistants that can do this too. However, I’m not sure these developments will entirely usurp educators designing questioning activities themselves.
How is AI impacting online discussion?
Discussion forums are a major component of many online courses and programmes, and have been for years. Although their often formulaic use in online learning has drawn criticism, they’ve been a bedrock of online interaction, collaboration and community.
At the moment, we’re at risk of thinking that late 2022 or some time in 2023 was year zero for AI in edtech and online learning. This is far from the case; online discussions and forums are one area in which AI-powered edtech has already been used.
Packback is a company that developed an AI-driven product a number of years ago called Packback Questions. The product supports online discussions and one of its features is to steer students to ask open-ended questions that help develop discussion. The technology also prompts students to write in more depth and encourages them to back up their responses with sources.
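As a crude sketch of this kind of nudging, here is a heuristic that flags draft questions likely to invite a closed, yes/no answer. This is my own simplified illustration of the idea, not Packback’s actual approach:

```python
# Question openers that typically invite a closed, yes/no answer.
CLOSED_OPENERS = {"is", "are", "was", "were", "do", "does", "did",
                  "can", "could", "will", "would", "has", "have"}

def question_nudge(draft: str) -> str:
    """Return a writing prompt based on how a draft question opens."""
    first_word = draft.strip().lower().split()[0]
    if first_word in CLOSED_OPENERS:
        return "This may invite a yes/no answer - try rephrasing with 'why' or 'how'."
    return "Looks open-ended - consider citing a source to back up your point."

print(question_nudge("Is online learning effective?"))
```

A real product would analyse the whole question, not just its first word, but even this toy version shows how a tool can intervene while a response is still a draft.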
AI’s ability to evaluate a draft response and provide cues and prompts that foster responses which further a discussion, as well as leading learners to reflect on, revise and strengthen their responses, certainly stands some chance of elevating many online discussions.
More broadly this speaks to the potential of AI to help support scaffolding of online discussion and it’s also possible to envisage how this might extend to things like regulation scripts that are used in peer assessment.
Now, inevitably, given the current climate, one might say that the existence of generative AI makes text-based online discussion too risky: students can now readily get AI to originate responses for them.
There is certainly something in that, but I think we need to focus the limited time and energy we have in education on crafting learning experiences that strongly lead students towards learning goals, and then convincingly articulate the value of those experiences in helping students reach them.
I’m unconvinced that we have any robust means of eliminating the opportunity for people to use technologies to create responses that they didn’t formulate themselves - and whilst I don’t want to sound fatalistic, I do think some need to wake up and smell the coffee.
Too often the time spent collectively fretting whether people will cheat is time wasted. The danger is that activity design is driven by fear and suspicion. This can draw our focus away from striving to design and discover activities that provide the best route towards learning goals.
So far I’ve talked about online discussions exclusively in relation to learning-focussed activities, but they also serve broader purposes in online learning, such as forums for asking questions of a more routine, administrative nature.
This is another area that AI assistants are already supporting and have been for a number of years now. The best known example is the virtual teaching assistant Jill Watson that was developed at the Georgia Institute of Technology back in 2015.
The Georgia Tech example is an interesting one, because it isn’t the run-of-the-mill thing that happens in universities. There isn’t a broad swathe of universities or faculties out there working away at developing their own AI chatbots; developments like the integration of AI assistants in higher education tend to proceed at a steadier pace.
However, given that established virtual learning environment providers are integrating AI assistants into their products, universities will have easier access to this technology. Recently, Instructure, the company behind the Canvas VLE, partnered with Khan Academy to use its AI-powered student tutor and teaching assistant, Khanmigo. D2L has also added a Brightspace Virtual Assistant to its Brightspace VLE.
As universities gain greater access to AI assistants and tutors, it will be interesting to see how they’re utilised and what effect they have on online learning in particular. There is potential for the positive effects seen in the Jill Watson example to be more widely experienced, with online learners getting more rapid responses and some of the barriers that exist around help-seeking being broken down.
How is AI impacting online video?
In some people’s minds, video is synonymous with online learning. Start developing a course and it’s not long before people start thinking and talking about videos. That speaks both to an over-emphasis on video as a communication medium in online learning and to its importance.
Whilst it’s obvious that AI is impacting video creation, with generative AI being used to ideate, create and edit scripts for example, I think from a learning perspective AI’s ability to help learners get the most out of video is more interesting.
AI tools that support video localisation, that is translating video content into a language you are more familiar with, have significant potential benefits for online learning given its geographical reach.
Similarly, AI-powered search tools that let you more readily locate the parts of an online video you want to revisit can help learners with their online study and mitigate one of the issues with video as a medium.
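In essence, this kind of in-video search is a query over a timestamped transcript. A minimal sketch of the idea (the segment format and function name are hypothetical; real tools work over speech-recognition output and use semantic rather than exact matching):

```python
def search_transcript(segments: list, query: str) -> list:
    """Return the start times (in seconds) of segments mentioning the query."""
    query = query.lower()
    return [start for start, text in segments if query in text.lower()]

# Each segment is (start_time_seconds, spoken_text).
transcript = [
    (0.0, "Welcome to this lecture on cell biology."),
    (42.5, "Mitosis is the process by which a cell divides."),
    (95.0, "Let's now compare mitosis and meiosis."),
]

print(search_transcript(transcript, "mitosis"))  # → [42.5, 95.0]
```

A learner searching for a term can then jump straight to those timestamps instead of scrubbing through the whole recording.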
Lastly, the ability to not simply access video transcripts but for them to be used to more actively support study is a promising avenue. Quizzing has long been used either within video or alongside it to support learners to get the most out of the content, but AI potentially extends this.
AI tools that can use transcripts to create quizzes, flashcards, summaries, explanations and links to further related materials can further elevate video.
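As a toy illustration of transcript-driven study aids, here is a sketch that turns transcript sentences into cloze-deletion flashcards by blanking the longest word in each sentence. A real AI tool would use a language model to pick genuinely important terms; this heuristic and the function name are my own simplification.

```python
import re

def make_cloze_cards(transcript: str) -> list:
    """Split a transcript into sentences and blank the longest word in each."""
    cards = []
    for sentence in re.split(r"(?<=[.!?])\s+", transcript.strip()):
        words = re.findall(r"[A-Za-z]+", sentence)
        if not words:
            continue
        answer = max(words, key=len)  # crude stand-in for "key term"
        prompt = sentence.replace(answer, "_____", 1)
        cards.append((prompt, answer))
    return cards

transcript = "Photosynthesis converts light into chemical energy. Chlorophyll absorbs the light."
for prompt, answer in make_cloze_cards(transcript):
    print(prompt, "->", answer)
```

Even this naive version shows the shape of the idea: the transcript becomes raw material for retrieval practice rather than something learners only read.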
In their paper “Fostering generative learning from video lessons: Benefits of instructor-generated drawings and learner-generated explanations”, Logan Fiorella et al. talk about “aligning instructional methods with appropriate learning strategies”.
It is this idea that I think holds some of the greatest potential for AI and video: the technology can help take the instructional method of video and create pathways to appropriate learning strategies that learners might not ordinarily follow themselves.
The evolving landscape of asynchronous online learning
The realm of asynchronous online learning is large, and both the types of activities and the technologies that support and shape them have grown over the years.
AI-driven developments are further shaping this area and will affect some of the traditional mainstays of online learning, such as quizzes, discussions and videos.
One of the challenges with digital technologies in education is how they are utilised and, most importantly, what informs their use. Education is replete with bad and ill-informed ideas; if AI is built upon these, or implemented in sympathy with them, we’ll fail our learners.
We’ll also fail to grasp some of the opportunities AI can offer online learners in particular: more cognitively challenging and beneficial activities, scaffolds for interaction, and pathways to more effective study strategies.