Unlike last year, I am aiming for quality over quantity when it comes to my instructional design goal setting. I am also choosing a new format. Rather than A-to-Z goals, I am splitting my goals into categories: Research, Design, Development, and Community. In each category, I will keep my goals to a minimum with the hope that this will give me the time to fully succeed in these areas.
Research
Lifelong learning is not just the basis of my career, it is also my passion. I hope to get my own learning back on track this year with some solid research.
Last year
I struggled to keep up with the blogs of my most trusted sources. This year I will
get back to using my blog aggregator (if I can remember my password!) and check
out the new articles at least every other day.
For the
entirety of 2019 the alumni library at the university where I got my most
recent Master’s degree has been under construction. I had hoped to keep up with
evidence-based theories and strategies in learning. If the library doesn't open within the first quarter of 2020, I'll look into getting my research elsewhere.
Design
I have many great ideas for ways to improve the instructional design at my job. Unfortunately, these ideas aren’t always implemented. However, this year I hope to have a greater influence in the design process. My goal is to have a greater focus in promising areas of design that I have been prototyping:
Inductive learning (aka concept attainment)
Microlearning
Simulations
Development
This year I whittled
down my list of tools to just three:
Blender
Illustrator (the only Creative Cloud product that strikes fear into my heart)
After Effects (I can already do quite a bit, but I'm looking to reach a more advanced level)
Community
I believe that a
strong professional learning community can help a learning experience designer
flourish. Last year I laid the groundwork and this year I hope to build upon
that.
I
re-invented my Twitter account last year. This year I hope to check it at least
every other day and to contribute at least once per week.
I’ve been
mostly ignoring the ID community (particularly the Articulate community),
except when I need to solve a problem. That is just terrible in my eyes. This
year I'd like to be more active in helping others solve their problems. I will
also aim to write a new blog post at least twice per month.
Conclusion
The year 2020 seems
promising to me, and I’m hoping it will be my greatest year yet when it comes
to achieving my L&D goals. Good luck to all of you with your own goals.
At the start of the
year I wrote myself a list of instructional design goals for 2019. I was
feeling particularly ambitious and I created a goal for each letter of the
alphabet.
This year has been
filled with many ups and downs and I haven’t been able to spend as much time
achieving my goals as I would have liked. Hopefully next year I can take the
approach that there’s no such thing as “I don’t have enough time” because you
can always make time for your goals.
But as for this year, it’s time to write my report card to evaluate my success in achieving my 2019 goals. I’ll split the list into three sections: exciting successes, mediocre attempts, and embarrassing failures.
Exciting successes
D – Design (visual design)
F – Feedback (develop highly personalized feedback strategies)
J – Join ID communities
M – Meetings (attend 2-3, including online summits with the eLearning Guild)
T – Talk (present a talk at a conference)
I'll focus only on the most important items here: joining instructional design communities (at meetings, on Twitter, and on LinkedIn) and presenting at a conference. This
second one was only a Show and Share at the Canadian eLearning Conference, but
I still say it counts. It has also given me the courage to apply to present
full presentations at conferences this year.
Mediocre attempts
B – Blogging
G – Gamification (learn more about it and see if it's for me)
H – Heroes (continue doing eLearning Heroes challenges)
N – Negative space (make better use of it)
O – Objectives (improve construction of objectives)
P – Photoshop
Q – Quality over quantity (despite manager's desires)
S – SME wrangling (develop skills to get the most out of my SMEs)
U – UI/UX
V – Variety (stretch my capabilities of design and development)
Z – Zapworks
The most important
items here are my attempts at blogging (hopefully I will be more consistent
next year) and my return to working on eLearning Heroes Challenges. I have yet
to actually submit one, but even doing them for myself brings me great joy.
Embarrassing failures
A – Adapt (rapid eLearning tool)
C – Character Animator
E – Evolve (rapid eLearning tool)
I – Illustrator
K – Keynote summaries (post summaries of keynotes from conferences)
L – Lectora
R – Research (continue to research in the field of ID)
W – Writing (improve specific aspects – for example using contractions)
X – Xerox ("steal" from other professions, e.g., marketing – AIDA, WIIFM, etc.)
Y – YouTube (update my channel)
The most important
item here is my lack of work on my YouTube channel. I would have loved to call this one a success since I reached 500,000 views this summer, but unfortunately I haven't spent any time updating old videos. Another significant disappointment is that I didn't continue to research scientific papers in the field of instructional design. However, in my defense, the alumni library has been under construction for the entire year(!).
Conclusion
While I have had some
exciting successes this year, overall I haven’t achieved as much as I set out
to do. This coming year I will publicly post my goals again (with the hope that
this will encourage me to meet these goals), but I will try to be less
ambitious. I’d hate to set myself up for failure.
Happy New Year to all
of you, and good luck crafting and achieving your own goals for 2020.
A couple weeks ago I was fortunate enough
to be interviewed for a magazine article (for OHS Canada). I had a wonderful
conversation with Marcel Vander Wier about education and training in
occupational health and safety.
Although we discussed many topics, the most
noteworthy topic was inductive learning (also known as concept attainment). I
have recently realized, through conversations with many learning and development
professionals, that this is still a relatively uncommon instructional design
strategy despite its many benefits.
This post scratches the surface of
inductive learning. Hopefully, I will delve more deeply into the topic in a
future post.
What is inductive learning?
Inductive learning involves the learner
using their prior knowledge to discover new ideas, skills, concepts, or
information.
How do inductive learning strategies differ from traditional learning strategies?
Courses that use traditional learning
strategies simply present ideas to passive learners, often without any attempt
to bridge the gap between their prior knowledge and the new information. This
traditional design strategy typically involves forcing content onto the learner and then assessing 'learning' (even though the learner will promptly forget the information presented).
Inductive learning stands in stark contrast
to traditional learning design. Inductive learning flips learning design on its
head. The basis of this strategy involves encouraging learners to activate
their prior knowledge. Learners then use this to interpret the new information that
the course was designed to deliver. If the learner struggles to successfully
acquire the new information, support is given. In some cases, this support may fall
in line with a more traditional learning design strategy; however, this is a
last resort.
Why is inductive learning effective?
Learning doesn’t happen in isolation. To incorporate
new information into our long-term memory, we must be able to link this new
information to memories that already exist.
Traditional learning design often misses this
experience. It requires the learner to make their own connections, which they wouldn’t
know to do without prompting. Ultimately, the new information doesn’t root
deeply into the learner’s memory.
Inductive learning, on the other hand,
requires activation of prior knowledge as a prerequisite for learning. Learners
who experience inductive learning activities retain a significantly higher
percentage of information long after the course concludes.
This occurs for three reasons. First,
learners can more easily link the new information to their prior knowledge,
thereby integrating the new information into long-term memory. Second, by
having to discover the new information on their own, they are more likely to
engage in metacognition. Third, the continuous feedback afforded by inductive
learning promotes competence and confidence.
When is inductive learning appropriate?
As with any instructional design strategy,
inductive learning is not a ‘one size fits all’ strategy. It works exceptionally
well in some cases and is ineffective in others. Typically, inductive learning
is an appropriate strategy under any of the following conditions:
When learners have a significant amount of prior knowledge
When the new information has significant similarities to the learners' prior knowledge
When relatively safe skills are being developed (e.g., I wouldn't recommend using inductive learning to teach someone how to use a forklift for the first time, unless the learning is occurring on a simulator)
When continuous feedback is required
When learners need an opportunity to fail in order to learn
What is an example of an inductive learning activity?
Compare the experience of learning through
traditional learning and inductive learning with the two examples below. They
teach similar concepts but with vastly different styles.
It is worth noting that the course built using
inductive learning resulted in a 3-fold higher post-activity assessment score compared
to the traditional course. Furthermore, long-term learning retention was 91% in
the inductive learning course, compared to only 33% for the traditional
learning course.
Traditional learning course
Inductive learning course
Conclusion
Inductive learning is wildly successful
when used in the right conditions. It promotes long-term memory integration,
metacognition, and active learning. Gone are the days when traditional learning
was the singular strategy in the learning and development toolbox. Inductive
learning is the new kid in town, and it is taking instructional design by
storm.
A few weeks ago I wrote a post about three types of assessment common in the Ontario school system: Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). That post described all three and explained their importance. If you haven’t read it yet, I suggest you start there.
After that post, I have spent a few weeks on this topic, each week breaking down one of the different types of assessment, as it pertains to instructional design and learning experience design. First, I began with AoL, and then AfL. Now I’ll move on to AaL.
But before I delve into AaL as it is used
in instructional design and learning experience design, let’s review where the
term came from as it relates to teaching in the primary and secondary school
system.
Description of AaL from the school system
Just like Assessment for Learning, AaL has
a strong focus on the process of learning. AaL is not marked, but instead
provides the learner with information about how they are progressing. This
allows the learner to make their own personal evaluation about their knowledge
and whether they are ready to move on to new content, or if they need to
revisit what they have just learned. Additionally, AaL is where the
metacognition piece comes in.
AaL in instructional design and learning experience design
Unfortunately, AaL is rarely used when it comes to instructional design. But this doesn't need to be the case: interactive eLearning activities and reflection questions can be effective methods for AaL.
Let’s look at some considerations for AaL in instructional design.
1. Assessment must be based on the learning outcomes
As with all other posts about assessment, I will begin with the most important piece: it is essential to create assessments that are strongly based on the learning outcomes. At the start of the course design process, you should have written some learning outcomes for the course. These outcomes, the instruction, and the assessment should be closely aligned. For more information on this topic, check out the AoL post.
Although AaL doesn’t give a score, if the learners aren’t being steered in the correct direction for learning, they will suffer during AfL and AoL. Learners often pick up on the types of questions they are being asked and will focus their efforts in those areas. Therefore, your AaL should be based on the learning outcomes.
2. Clear purpose and plan
Ensure that your learners know why you are
doing the AaL. Let them know that there won’t be a score for the activity, but
that it will help guide them in their learning. Additionally, let them know how
they will be receiving feedback and what to do if they want to review certain
course material or if they want to continue on with the rest of the course.
3. Styles of questions
For instructor-led training, reflection is
one of the classic forms of AaL. When done properly, learners not only reflect
on what they know, but also how they know those things and how those things relate to other things they know. These last two questions can help drive the
metacognition piece of AaL.
One consideration for reflections is the
terminology. Many people cringe when they hear the word ‘reflection’ (myself
included). If you can disguise the reflection as some other form of activity, learners may be more invested in answering the questions.
For example, I recently created a 3-day training course that contained 24 chapters. At the conclusion of each chapter was a reflection activity. However, these were called ‘reviews’. Most people are familiar with the concept of review questions at the conclusion of a chapter, so there is less push-back from the learners. However, the questions were framed in such a way that they prompted reflection and metacognition.
Other strategies include scenarios and group teaching activities such as jigsaws.
In an eLearning course, reflection questions are also possible, but there is a wide range of other options available. I like to use super simple scenario questions that have immediate and personalized feedback. These are then followed up with questions about how the learner arrived at the answer they chose.
For example, I would create a question
where two people are discussing an issue and the learner must choose who they
agree with. They then receive feedback about whether or not they were correct
(this would be AfL). To then push this into the AaL realm, follow-up questions
should be included on why the learner chose their answer and/or how this
scenario relates to other real-life events.
4. Feedback
Similar to AfL, AaL is all about feedback. I recommend checking out the AfL post for more information on providing effective feedback.
5. Frequency
AaL works best when done frequently – ideally after each topic or even sub-topic. This encourages the learners to
continually think about the answers they are choosing and why they have picked
those answers. Furthermore, it helps pinpoint essential information that they
can expect to see on the AfL and AoL.
Conclusion
That’s it for the final post on Assessment as Learning, Assessment for Learning, and Assessment of Learning. As I mentioned at the start of the first of these 4 posts, even though I am no longer a teacher, I really love this model of assessment. It takes formative/summative assessments one step further and incorporates reflection and metacognition.
How would you use AaL, AfL, and AoL in your instructional design or learning experience design environment?
A couple weeks ago I wrote a post about three types of assessment common in the Ontario school system: Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). That post described all three and explained their importance. If you haven’t read it yet, I suggest you start there.
Over three weeks, I will break down each of the different types of assessment, as it pertains to instructional design and learning experience design. Last week I began with AoL. Now I’ll move on to AfL. Before I delve into AfL as it is used in instructional design and learning experience design, let’s review where the term came from as it relates to teaching in the primary and secondary school system.
Description of AfL from the school system
AfL is an informal assessment of what a
learner has learned from an individual topic or task. This type of assessment
is either unscored or a mark is given simply for the learner's own knowledge (but it does not count toward the final grade). In either case, the emphasis is
on descriptive feedback.
The focus of AfL is on the process of
learning. The AfL step helps learners identify their strengths and weaknesses,
and it ultimately helps the learners focus on what they need to know to
succeed.
AfL in instructional design
When it comes to the instructional design
environment, quizzes, in-class activities, and interactive eLearning activities
are commonly used for AfL. Let’s look at some considerations for AfL in
instructional design.
1. Assessment must be based on the learning outcomes
It is essential to create assessments that are strongly based on the learning outcomes. At the start of the course design process, you should have written some learning outcomes for the course. These outcomes, the instruction, and the assessment should be closely aligned. For more information on this topic, check out the AoL post from last week.
2. Feedback
AfL is all about feedback. In some cases,
the assessment may be marked, but whether it is or not, the most informative
feedback comes in the form of comments. In an instructor-led course, this may
mean debriefing after each activity to try to understand why the learners made
certain decisions. In an eLearning course, this may mean providing written
feedback – ideally feedback that is both immediate and personalized.
Consider the two examples below. Try
getting the answer right and wrong for each question. Which example made you
feel more satisfied with your learning? [note that the try again button is
there so you can try both a correct and an incorrect response, but it would be removed
in an actual assessment, particularly in example 2]
As you can see from the second example, the
feedback is in fact part of the learning process. It is particularly helpful
when the feedback is specific to the choice made. In this example, there were 3
different 'incorrect' feedback layers, each based on a different incorrect answer.
You may wonder about the learners who
decide not to read the feedback. They see the checkmark or X and move on to the
next question. Although those learners do indeed exist, not all learners will
skip reading the feedback. Therefore, by providing feedback to everyone, we are
allowing learners to make their own choice.
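For designers who build their own activities in code, the choice-specific feedback described above boils down to a lookup from each answer to its own feedback message. Here is a minimal sketch in Python (the question, choices, and feedback text are all invented for illustration; nothing here is tied to any particular authoring tool):

```python
# Sketch of choice-specific AfL feedback for one question.
# The question, choices, and feedback wording are hypothetical examples.
QUESTION = "Which ladder should the worker use to reach the 4 m shelf?"

# Each choice maps to a (status, comment) pair, so incorrect answers
# each get their own targeted feedback rather than a generic "wrong".
FEEDBACK = {
    "a": ("incorrect", "A step stool is far too short for 4 m; consider reach height."),
    "b": ("correct", "Right: an extension ladder rated for the load and height."),
    "c": ("incorrect", "A damaged ladder must be tagged out, never used."),
    "d": ("incorrect", "Standing on the top rung is unsafe at any height."),
}

def give_feedback(choice: str) -> str:
    """Return immediate, personalized feedback for the learner's choice."""
    status, comment = FEEDBACK[choice.lower()]
    return f"[{status}] {comment}"

print(give_feedback("c"))
```

The point of the design is simply that every distractor carries its own comment, mirroring the three separate 'incorrect' feedback layers described above.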
3. Gamification
In the post on AoL, I skipped gamification
because it really works best in a formative assessment capacity. Any time that
there are countdown timers, score cards, leaderboards, etc. the stress levels
of the learners can increase. Additionally, the learners may become distracted
with these extra features and spend less time pondering each question. This is
not a conducive environment for summative assessments. However, when learners
are still in the process of learning and the assessment is AfL, games can be a
nice, light, fun way to perform assessment.
I recently completed a workshop by Clint
Clarkson, who is a master of game thinking in learning experiences. One of his
takeaways was that almost everything can be turned into some form of a game
(although as an ID or LXD, some discretion is necessary depending on the subject
matter).
My personal preference for gamification is
all-encompassing games with a storyline and, most importantly, a purpose. Let
me explain my preference with an example of what doesn’t work well, and then
compare that to what I would do to fix it.
When workshopping ideas for a learning-based
virtual escape room, someone suggested that learners should answer multiple
choice questions about the subject matter each time they tried escaping the
room.
Now, in my opinion, this makes the learning
and the game very disjointed. I don’t see the value of the game itself in this
instance. The decisions that the learners make related to how to escape are not
at all related to the decisions they need to make regarding the subject matter.
You may as well just give them the multiple-choice questions and skip the game.
My idea for learning-based virtual escape rooms is that the learners must use their content-based knowledge to actually escape the room. For example, in an escape room based on ladder safety, the learners would have to select appropriate ladders, inspect them, safely carry them and use them properly in order to get out of the escape room (for example, through a high window). The learning content is 100% related to the escape room game. In these types of instances, the escape room is essentially a massive scenario-based assessment.
Regardless of the format of the game, without
a direct relationship to the content, gamification can seem superfluous or inauthentic.
Remember that the point of the game is to assess learning – which means that
the content should be front and centre.
4. Scenarios
In my post on AoL, I discussed the value of scenarios for summative assessment. Almost every point there would apply to formative (AfL) as well. So, I will avoid repeating myself here (although check out this post to see what I had to say about scenarios in AoL) and simply bring in one additional point: consider how you will provide the feedback.
Scenarios can be built in a variety of ways.
One way involves setting up a decision point, which allows learners to branch out
to make their choice, but then after giving feedback, the scenario contracts
back and continues in the same way, regardless of the choice made by the
learners.
This strategy is simpler and faster to
create, and also provides excellent opportunities for immediate and
personalized feedback.
However, in some cases you may want the scenario to be more immersive, in which case each decision leads the learners on a different path, with dozens (or more) of possible pathways through the scenario.
While I generally prefer these types of
scenarios, they make it much more difficult to provide immediate feedback
unless each node of the branching scenario starts with a very obvious
indication of whether the last choice was correct. In these instances, it’s
important to remember that feedback is one of the essential components of AfL.
Therefore, a thorough debrief should be presented at the conclusion of the
scenario, based on the actual choices made by the learner.
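For those who prototype branching scenarios outside an authoring tool, the structure above (nodes, choices, and a closing debrief built from the learner's actual path) can be sketched in a few lines. A hypothetical illustration in Python, with invented node names, prompts, and feedback text:

```python
# Minimal branching-scenario sketch: each node offers choices, the walk
# records every choice made, and the debrief replays feedback for the
# learner's actual path. All content here is hypothetical.
SCENARIO = {
    "start": {"prompt": "A spill is blocking the exit. What do you do?",
              "choices": {"report": "supervisor", "ignore": "walk_by"}},
    "supervisor": {"prompt": "Your supervisor asks you to cordon the area.",
                   "choices": {"cordon": "end"}},
    "walk_by": {"prompt": "A colleague slips on the spill. Now what?",
                "choices": {"first_aid": "end"}},
}

FEEDBACK = {
    "report": "Good call: reporting hazards promptly prevents injuries.",
    "ignore": "Ignoring a spill put a colleague at risk.",
    "cordon": "Cordoning off the hazard protects others until cleanup.",
    "first_aid": "Helping is right, but the incident was preventable.",
}

def run(path):
    """Walk the scenario along a list of choices; return the debrief."""
    node, log = "start", []
    for choice in path:
        log.append(choice)
        node = SCENARIO[node]["choices"][choice]
        if node == "end":
            break
    # The debrief is based on the choices the learner actually made.
    return [FEEDBACK[c] for c in log]

for line in run(["ignore", "first_aid"]):
    print(line)
```

Because the debrief is assembled from the recorded path, every route through the scenario still ends with feedback specific to the learner's own decisions.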
5. Tracking results
One final consideration for instructional
designers is whether or not the AfL will be tracked. Obviously AoL would need
to be tracked to determine if learners have achieved the learning outcomes.
However, AfL does not have this same requirement.
While tracking AfL results does add a layer of complexity in both the in-class and eLearning environments, it can be
beneficial to know if learners are struggling long before they reach the AoL.
This could provide opportunities for support and coaching. Additionally, when
it comes to evaluating the success of the program, it can be useful to know
which outcomes the learners struggled with throughout the course (not just on
the AoL).
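When AfL results are tracked programmatically, they are often sent to a Learning Record Store as xAPI statements. A minimal sketch of building one such statement (the learner email, activity URL, and the 0.8 success threshold are invented for illustration; the actor/verb/object/result fields and the 'answered' verb IRI follow the xAPI convention):

```python
import json

def afl_statement(learner_email, activity_id, scaled_score):
    """Build a minimal xAPI-style statement for an AfL attempt.

    The 0.8 pass threshold is an arbitrary example, not a standard value.
    """
    return {
        "actor": {"mbox": f"mailto:{learner_email}", "objectType": "Agent"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
                 "display": {"en-US": "answered"}},
        "object": {"id": activity_id, "objectType": "Activity"},
        "result": {"score": {"scaled": scaled_score},
                   "success": scaled_score >= 0.8},
    }

# Hypothetical learner and activity, purely for illustration.
stmt = afl_statement("learner@example.com",
                     "https://example.com/afl/ladder-q1", 0.75)
print(json.dumps(stmt, indent=2))
```

Statements like this make it possible to see which outcomes learners struggled with during the course, not just on the AoL.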
AfL in learning experience design
Learning experience design takes
instructional design one step further by considering the learner experience. Since
AfL occurs within the learning experience, let's look at a few ways that we can ensure the learner is having a positive experience on the AfL.
1. Clear purpose and plan
Ensure that your learners know why you are
doing the AfL. Let them know what will be assessed and how the assessment will
work – particularly for gamified or scenario-based assessments. Additionally, let
them know how they will be receiving feedback. Also, and importantly, ensure
that they know if and how they are being marked.
2. Clear instructions
There is an almost infinite number of ways to do AfL. Your only limit is your imagination. However, what seems clear and
straightforward to you, might not seem so clear to your learner. Let them know
how to complete the AfL and what your expectations are. This is true for both instructor-led
and eLearning courses. Again, this is particularly important for gamified and
scenario-based assessments.
3. Next steps
Ensure that your learners understand what
their options are after completing the AfL. Considering that AfL is not a final assessment, but rather an internal assessment to verify that the learners are
on track, it is worth having a variety of options for your learners.
For example, learners that successfully
complete the AfL may continue on with the course. Learners who just barely miss
the passing mark might be able to return to certain sections of the course for
review. Learners who are far from passing might need to review everything
before moving on.
Ensure that the learners know what the
options are and how to access the different options. Additionally, for learners
who decide to complete some review before moving on, make a clear statement about
whether and how the AfL will be redone.
4. Accessibility
The final point related to learning
experience design is to ensure that the course is accessible (particularly with
eLearning courses). Since the sky is the limit in terms of AfL choice
(gamification, scenario, drag and drop, multiple choice, etc.), you might want
to use accessibility standards as a helpful tool for deciding which methods to
use.
Of particular concern are any type of drag
and drop, image matching, and any style of AfL that incorporates audio or
video.
Consider your learner analysis and/or
learner personas, as well as any company- or government-mandated accessibility
standards. All of these can help you pick the best AfL for your learners from
the start (and reduce time spent redesigning non-accessible courses).
Conclusion
Well, like last week, this post also turned
into a long one! If there is one thing to take away from this novel, it’s that
AfL is all about the feedback – make sure you know how and when you will provide
learners with feedback, and do your best to make it personalized.
Stay tuned next week for the upcoming post on Assessment as Learning (AaL).
After a long hiatus from eLearning Heroes Challenges, this is my second one in a row. I’d forgotten how much I love doing these. For this challenge, I thought I’d do a project on Bianca Andreescu – I am Canadian after all!
This portfolio piece was created for the eLearning Heroes Challenge #249 (Creating light and dark versions of eLearning templates), which required the creation of light and dark versions of a set of slides. Instead of having two separate series of slides, I decided to add a slider so each slide could be compared directly, light and dark.
I found this very helpful for identifying the types of slides that look good dark and those that look good light. For example, slides with lots of writing look good on a light background because the text is easier to read, while quotes or short bits of writing look good on a dark background, which adds a bit of emphasis.
p.s. These are not actual photos of Bianca, they are just stock images.