A few weeks ago I wrote a post about three types of assessment common in the Ontario school system: Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). That post described all three and explained their importance. If you haven’t read it yet, I suggest you start there.
Since that post, I have spent a few weeks breaking down each type of assessment as it pertains to instructional design and learning experience design. First I covered AoL, then AfL. Now I'll move on to AaL.
But before I delve into AaL as it is used
in instructional design and learning experience design, let’s review where the
term came from as it relates to teaching in the primary and secondary school
system.
Description of AaL from the school system
Just like Assessment for Learning, AaL has
a strong focus on the process of learning. AaL is not marked, but instead
provides the learner with information about how they are progressing. This
allows the learner to make their own personal evaluation about their knowledge
and whether they are ready to move on to new content, or if they need to
revisit what they have just learned. Additionally, AaL is where the
metacognition piece comes in.
AaL in instructional design and learning experience design
Unfortunately, AaL is rarely used when it
comes to instructional design. But this doesn't need to be the case: interactive
eLearning activities and reflection questions can be effective methods for AaL.
Let’s look at some considerations for AaL in instructional design.
1. Assessment must be based on the learning outcomes
As with all other posts about assessment, I will begin with the most important piece: it is essential to create assessments that are strongly based on the learning outcomes. At the start of the course design process, you should have written some learning outcomes for the course. These outcomes, the instruction, and the assessment should be closely aligned. For more information on this topic, check out the AoL post.
Although AaL doesn’t give a score, if the learners aren’t being steered in the correct direction for learning, they will suffer during AfL and AoL. Learners often pick up on the types of questions they are being asked and will focus their efforts in those areas. Therefore, your AaL should be based on the learning outcomes.
2. Clear purpose and plan
Ensure that your learners know why you are
doing the AaL. Let them know that there won’t be a score for the activity, but
that it will help guide them in their learning. Additionally, let them know how
they will be receiving feedback and what to do if they want to review certain
course material or if they want to continue on with the rest of the course.
3. Styles of questions
For instructor-led training, reflection is
one of the classic forms of AaL. When done properly, learners not only reflect
on what they know, but also how they know those things and how those things
relate to other things they know. These last two questions can help drive the
metacognition piece of AaL.
One consideration for reflections is the
terminology. Many people cringe when they hear the word ‘reflection’ (myself
included). If you can disguise the reflection as some other form of activity, learners
may be more invested in answering the questions.
For example, I recently created a 3-day training course that contained 24 chapters. At the conclusion of each chapter was a reflection activity. However, these were called ‘reviews’. Most people are familiar with the concept of review questions at the conclusion of a chapter, so there is less push-back from the learners. However, the questions were framed in such a way that they prompted reflection and metacognition.
Other strategies include scenarios and group teaching activities such as jigsaws.
In an eLearning course, reflection questions are also possible, but there is a wide range of other options available. I like to use super simple scenario questions that have immediate and personalized feedback. These are then followed up with questions about how the learner arrived at the answer they chose.
For example, I would create a question
where two people are discussing an issue and the learner must choose who they
agree with. They then receive feedback about whether or not they were correct
(this would be AfL). To then push this into the AaL realm, follow-up questions
should be included on why the learner chose their answer and/or how this
scenario relates to other real-life events.
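A two-part item like this can be sketched as a simple data structure: an AfL layer (choice-specific feedback) followed by an unscored AaL layer (reflection prompts about how the learner decided). Everything below is illustrative; the question text, names, and field layout are invented, not taken from any authoring tool:

```python
# Hypothetical sketch of a two-stage question: AfL feedback on the choice,
# then AaL follow-up prompts that target metacognition. All content invented.

AGREE_QUESTION = {
    "prompt": "Two colleagues disagree about how to store a ladder. Who do you agree with?",
    "choices": {
        "a": {"text": "Jordan: lay it flat on the floor",
              "feedback": "Not quite: a ladder left on the floor is a trip hazard."},
        "b": {"text": "Sam: hang it on the wall rack",
              "feedback": "Correct: racked ladders are out of the way and easy to inspect."},
    },
    # AaL layer: unscored prompts about *how* the learner decided.
    "reflection_prompts": [
        "Why did you choose that answer?",
        "Where have you seen a similar situation in your own workplace?",
    ],
}

def respond(question, choice_key):
    """Return the AfL feedback for a choice plus the unscored AaL prompts."""
    choice = question["choices"][choice_key]
    return choice["feedback"], question["reflection_prompts"]
```

The point of the structure is that the reflection prompts are attached to the question but never marked; they exist only to push the learner into the metacognitive territory described above.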
4. Feedback
Similar to AfL, AaL is all about feedback. I recommend checking out the AfL post for more information on providing effective feedback.
5. Frequency
AaL works best when done frequently, ideally after each topic or even sub-topic. This encourages the learners to
continually think about the answers they are choosing and why they have picked
those answers. Furthermore, it helps pinpoint essential information that they
can expect to see on the AfL and AoL.
Conclusion
That’s it for the final post on Assessment as Learning, Assessment for Learning, and Assessment of Learning. As I mentioned at the start of the first of these 4 posts, even though I am no longer a teacher, I really love this model of assessment. It takes formative/summative assessments one step further and incorporates reflection and metacognition.
How would you use AaL, AfL, and AoL in your instructional design or learning experience design environment?
A couple weeks ago I wrote a post about three types of assessment common in the Ontario school system: Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). That post described all three and explained their importance. If you haven’t read it yet, I suggest you start there.
Over three weeks, I will break down each of the different types of assessment, as they pertain to instructional design and learning experience design. Last week I began with AoL. Now I'll move on to AfL. Before I delve into AfL as it is used in instructional design and learning experience design, let's review where the term came from as it relates to teaching in the primary and secondary school system.
Description of AfL from the school system
AfL is an informal assessment of what a
learner has learned from an individual topic or task. This type of assessment
is either unscored or a mark is given simply for the learner’s own knowledge (but
it does not count toward a final grade score). In either case, the emphasis is
on descriptive feedback.
The focus of AfL is on the process of
learning. The AfL step helps learners identify their strengths and weaknesses,
and it ultimately helps the learners focus on what they need to know to
succeed.
AfL in instructional design
When it comes to the instructional design
environment, quizzes, in-class activities, and interactive eLearning activities
are commonly used for AfL. Let’s look at some considerations for AfL in
instructional design.
1. Assessment must be based on the learning outcomes
It is essential to create assessments that are strongly based on the learning outcomes. At the start of the course design process, you should have written some learning outcomes for the course. These outcomes, the instruction, and the assessment should be closely aligned. For more information on this topic, check out the AoL post from last week.
2. Feedback
AfL is all about feedback. In some cases,
the assessment may be marked, but whether it is or not, the most informative
feedback comes in the form of comments. In an instructor-led course, this may
mean debriefing after each activity to try to understand why the learners made
certain decisions. In an eLearning course, this may mean providing written
feedback – ideally feedback that is both immediate and personalized.
Consider the two examples below. Try
getting the answer right and wrong for each question. Which example made you
feel more satisfied with your learning? [note that the try again button is
there so you can try both a correct and an incorrect response, but it would be removed
in an actual assessment, particularly in example 2]
As you can see from the second example, the
feedback is in fact part of the learning process. It is particularly helpful
when the feedback is specific to the choice made. In this example, there were three
different 'incorrect' feedback layers, each based on a different incorrect answer.
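In an authoring tool this is usually built with feedback layers; conceptually it boils down to a mapping from each answer choice to its own comment, rather than a single generic "Incorrect". A minimal sketch, with invented question content:

```python
# Sketch of choice-specific feedback: one comment per distractor, so each
# wrong answer gets a targeted explanation. Question content is invented.

FEEDBACK = {
    "A": "Correct! Descriptive feedback is the emphasis of AfL.",
    "B": "Not quite. A final grade is the hallmark of AoL, not AfL.",
    "C": "Not quite. Leaderboards are a gamification feature, not a requirement of AfL.",
    "D": "Not quite. AfL can be marked, as long as the mark doesn't count toward a final grade.",
}

def feedback_for(choice):
    """Return the feedback layer for the selected choice."""
    return FEEDBACK.get(choice, "Please select one of the options.")
```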
You may wonder about the learners who
decide not to read the feedback. They see the checkmark or X and move on to the
next question. Although those learners do indeed exist, not all learners will
skip reading the feedback. Therefore, by providing feedback to everyone, we are
allowing learners to make their own choice.
3. Gamification
In the post on AoL, I skipped gamification
because it really works best in a formative assessment capacity. Any time that
there are countdown timers, score cards, leaderboards, etc. the stress levels
of the learners can increase. Additionally, the learners may become distracted
with these extra features and spend less time pondering each question. This is
not a conducive environment for summative assessments. However, when learners
are still in the process of learning and the assessment is AfL, games can be a
nice, light, fun way to perform assessment.
I recently completed a workshop by Clint
Clarkson, who is a master of game thinking in learning experiences. One of his
takeaways was that almost everything can be turned into some form of a game
(although as an ID or LXD, some discretion is necessary depending on the subject
matter).
My personal preference for gamification is
all-encompassing games with a storyline and, most importantly, a purpose. Let
me explain my preference with an example of what doesn’t work well, and then
compare that to what I would do to fix it.
When workshopping ideas for a learning-based
virtual escape room, someone suggested that learners should answer multiple
choice questions about the subject matter each time they tried escaping the
room.
Now, in my opinion, this makes the learning
and the game very disjointed. I don’t see the value of the game itself in this
instance. The decisions that the learners make related to how to escape are not
at all related to the decisions they need to make regarding the subject matter.
You may as well just give them the multiple-choice questions and skip the game.
My idea for learning-based virtual escape rooms is that the learners must use their content-based knowledge to actually escape the room. For example, in an escape room based on ladder safety, the learners would have to select appropriate ladders, inspect them, safely carry them and use them properly in order to get out of the escape room (for example, through a high window). The learning content is 100% related to the escape room game. In these types of instances, the escape room is essentially a massive scenario-based assessment.
Regardless of the format of the game, without
a direct relationship to the content, gamification can seem superfluous or inauthentic.
Remember that the point of the game is to assess learning – which means that
the content should be front and centre.
4. Scenarios
In my post on AoL, I discussed the value of scenarios for summative assessment. Almost every point there would apply to formative (AfL) as well. So, I will avoid repeating myself here (although check out this post to see what I had to say about scenarios in AoL) and simply bring in one additional point: consider how you will provide the feedback.
Scenarios can be built in a variety of ways.
One way involves setting up a decision point, which allows learners to branch out
to make their choice, but then after giving feedback, the scenario contracts
back and continues in the same way, regardless of the choice made by the
learners.
This strategy is simpler and faster to
create, and also provides excellent opportunities for immediate and
personalized feedback.
However, in some cases you may want the
scenario to be more immersive, in which case each decision leads the learners
on a different path, with dozens (or more) of possible pathways through the
scenario.
While I generally prefer these types of
scenarios, they make it much more difficult to provide immediate feedback
unless each node of the branching scenario starts with a very obvious
indication of whether the last choice was correct. In these instances, it’s
important to remember that feedback is one of the essential components of AfL.
Therefore, a thorough debrief should be presented at the conclusion of the
scenario, based on the actual choices made by the learner.
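One way to support that debrief is to record the learner's actual choices as they move through the branches, then build the closing feedback from the recorded path. A minimal sketch of the idea, with an invented two-node scenario (not from any authoring tool):

```python
# Sketch of a branching scenario that records each choice so a debrief can
# be built from the learner's actual path. Scenario content is invented.

SCENARIO = {
    "start": {"prompt": "A coworker asks to borrow a damaged ladder.",
              "choices": {"lend": "end_bad", "refuse": "end_good"}},
    "end_bad": {"debrief": "Lending damaged equipment puts your coworker at risk."},
    "end_good": {"debrief": "Tagging out damaged equipment protects everyone."},
}

def run(scenario, decisions):
    """Walk the scenario with a list of choice keys; return the path taken
    and the debrief for the ending the learner reached."""
    node_key, path = "start", []
    for choice in decisions:
        path.append(choice)
        node_key = scenario[node_key]["choices"][choice]
    return path, scenario[node_key]["debrief"]
```

A real scenario would have many more nodes, but the principle is the same: the path list is what lets the closing debrief speak to the choices this learner actually made.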
5. Tracking results
One final consideration for instructional
designers is whether or not the AfL will be tracked. Obviously AoL would need
to be tracked to determine if learners have achieved the learning outcomes.
However, AfL does not have this same requirement.
While tracking AfL results does add a
layer of complexity in both the in-class and eLearning environments, it can be
beneficial to know if learners are struggling long before they reach the AoL.
This could provide opportunities for support and coaching. Additionally, when
it comes to evaluating the success of the program, it can be useful to know
which outcomes the learners struggled with throughout the course (not just on
the AoL).
AfL in learning experience design
Learning experience design takes
instructional design one step further by considering the learner experience. Since
AfL occurs within the learning experience, let's look at a few ways that we can
ensure the learner is having a positive experience on the AfL.
1. Clear purpose and plan
Ensure that your learners know why you are
doing the AfL. Let them know what will be assessed and how the assessment will
work – particularly for gamified or scenario-based assessments. Additionally, let
them know how they will be receiving feedback. Also, and importantly, ensure
that they know if and how they are being marked.
2. Clear instructions
There is an almost infinite number of ways
to do AfL. Your only limit is your imagination. However, what seems clear and
straightforward to you, might not seem so clear to your learner. Let them know
how to complete the AfL and what your expectations are. This is true for both instructor-led
and eLearning courses. Again, this is particularly important for gamified and
scenario-based assessments.
3. Next steps
Ensure that your learners understand what
their options are after completing the AfL. Considering that AfL is not a final
assessment, but rather an internal assessment to verify that the learners are
on track, it is worth having a variety of options for your learners.
For example, learners that successfully
complete the AfL may continue on with the course. Learners who just barely miss
the passing mark might be able to return to certain sections of the course for
review. Learners who are far from passing might need to review everything
before moving on.
Ensure that the learners know what the
options are and how to access the different options. Additionally, for learners
who decide to complete some review before moving on, make a clear statement about
whether and how the AfL will be redone.
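These routing options can be expressed as a simple threshold check on the AfL result. The passing and near-miss values below are invented for illustration, not prescribed:

```python
# Illustrative post-AfL routing: continue, targeted review, or full review.
# Thresholds are assumptions for the sketch, not recommended values.

def next_step(score, passing=0.8, near_miss=0.6):
    """Suggest a next step based on an AfL score between 0 and 1."""
    if score >= passing:
        return "continue"            # carry on with the course
    if score >= near_miss:
        return "review_weak_topics"  # revisit targeted sections
    return "review_all"              # full review before moving on
```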
4. Accessibility
The final point related to learning
experience design is to ensure that the course is accessible (particularly with
eLearning courses). Since the sky is the limit in terms of AfL choice
(gamification, scenario, drag and drop, multiple choice, etc.), you might want
to use accessibility standards as a helpful tool for deciding which methods to
use.
Of particular concern are drag-and-drop activities, image matching, and any
style of AfL that incorporates audio or video.
Consider your learner analysis and/or
learner personas, as well as any company- or government-mandated accessibility
standards. All of these can help you pick the best AfL for your learners from
the start (and reduce time spent redesigning non-accessible courses).
Conclusion
Well, like last week, this post also turned
into a long one! If there is one thing to take away from this novel, it’s that
AfL is all about the feedback – make sure you know how and when you will provide
learners with feedback, and do your best to make it personalized.
Stay tuned next week for the upcoming post on Assessment as Learning (AaL).
Last week I wrote a post about three types of assessment common in the Ontario school system: Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). That post described all three and explained their importance. If you haven’t read it yet, I suggest you start there.
Over three weeks, I will break down each of the different types of assessment, as they pertain to instructional design and learning experience design. I will start with Assessment of Learning (AoL). Even though this type of assessment comes at the very end of learning, it’s the simplest to understand, and the most common form of assessment.
Before I delve into AoL as it is used in instructional
design and learning experience design, let’s review where the term came from as
it relates to teaching in the primary and secondary school system.
Description of AoL from the school system
AoL is a formal assessment of what a
learner has learned by the conclusion of the chapter or course. This is often
in the form of a test or exam, although in some cases it could also be some
sort of project or presentation. In the school system, AoL is also known as summative
assessment or evaluation (not the same as evaluation in the instructional
design world).
AoL in instructional design
When it comes to the instructional design
environment, AoL may look a little different. Quizzes, tests, and exams are
common but projects and presentations are rare outside of the school system.
Authentic assessment is rare outside of the school system; when it does appear,
it typically takes the form of workplace experience simulations.
Let’s look at some considerations for AoL
in instructional design.
1. Assessment must be based on the learning outcomes
The single most important aspect of
creating good assessment, particularly AoL, is creating assessments that are
strongly based on the learning outcomes. At the start of the course design
process, you should have written some learning outcomes for the course. These
outcomes, the instruction, and the assessment should be closely aligned. In
fact, if a learner ever had the desire, for each test question they should be
able to point to the associated learning outcome and point to the relevant learning
content.
Writing effective learning outcomes is a blog
topic of its own, but I’d be remiss not to include one valuable piece of
information here: choose your action verbs wisely! If you stated that a learner
will be able to know, understand, appreciate, or learn something, you’ll never
be able to assess this.
Instead, your learning outcomes should
begin with a measurable action verb. Whether you choose to pick a verb from
Bloom’s Taxonomy, Marzano’s Taxonomy, or elsewhere, it should be a verb that
can actually be assessed.
Often when creating eLearning,
instructional designers feel a bit boxed in. They often feel that they can only
use verbs that assess lower order thinking skills. This may be true when using
a standard multiple choice test, but if you have the opportunity to include
scenarios, you can assess higher order thinking skills as well. I’ll get to
scenario-based assessment shortly, but for the time being, I don’t think I can
overstate how valuable scenarios can be for AoL. They allow you to assess critical
thinking and problem solving. They also are one of the best ways to assess
learners on relatively open-ended questions.
If you’d like more information on constructing learning outcomes, I’d recommend this post from Mohawk College.
2. Writing multiple choice tests
Although they’re not my favourite, I’d like to mention multiple choice tests here, because they are so prevalent. Patti Shank and Trina Rimmer each have great articles on multiple choice tests, but here are some brief notes on creating effective multiple choice tests.
Ensure that the stem of the question is clear and brief. Unless the information is absolutely essential, overly long stems can confuse learners.
Ensure that the answer options are also brief and direct. Keep them all approximately the same length and use similar language and style. The answer shouldn’t be apparent from the way the options are worded.
Avoid double negatives. Learners who know the answer may get it wrong simply because they are confused by the wording.
Try not to connect questions. If a learner answers one question incorrectly, it should not affect their ability to answer any other question.
For online tests in particular, consider randomizing the questions and shuffling the answer options to reduce cheating attempts. However, one exception is when the answer options to a question are numbers – in this situation, have the numbers go in order from smallest to largest (or vice versa).
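The shuffling rule above (shuffle answer options to deter cheating, except keep numeric options in order) can be sketched as a small helper. This is only an illustration of the rule, not any particular tool's behaviour:

```python
import random

# Sketch: shuffle answer options, unless they are all numeric, in which
# case present them in ascending order as suggested above.

def prepare_options(options, rng=random):
    """Return the options shuffled, or sorted if every option is a number."""
    if all(isinstance(o, (int, float)) for o in options):
        return sorted(options)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled
```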
3. Creating scenarios
You may be tempted to skip this section if
you create compliance training – I urge you not to. Many instructional
designers complain that compliance training is only about giving rules and
yelling at learners. With that attitude, it makes sense that they only feel
that a simple multiple choice test of lower order thinking skills is
appropriate for the AoL.
However, I strongly disagree with these
thoughts. I make compliance training for a living. Every single course I create
is about rules, policies, and regulations. However, I think that by focusing on
why those rules, policies, and regulations exist, you can get to the goal
of the instruction. With that goal in mind, it’s easy to build effective
training scenarios that assess how learners will use the rule, policy, or
regulation in the real world. And that is what will actually make a difference
in the life of the learner.
For example, if the training requirements
were to teach learners to do XYZ, don’t simply ask the learners ‘What is XYZ?’
or ‘What are the steps to XYZ?’. This really doesn’t assess whether they
understand it, or will ever use it. Instead, ask yourself (or your SME), ‘Why
do they want the learners to do XYZ?’. Based on this answer, you can build a
scenario.
While I was working on my Master’s of Instructional Design and Technology, I dutifully learned all the theories, principles, and best practices that our instructors told us to learn. But it wasn’t until I came across Cathy Moore’s scenario-based learning design process called Action Mapping that I became truly excited by the idea of designing learning.
Action Mapping teaches instructional designers how to build a course (or assessment piece) through scenarios. Here is a brief overview. The first step is to identify what the problem is, and how you’ll know it’s been solved. Then, assuming the problem can be solved through training, the process first focuses on what they need to be able to do to solve the problem. The information taught to the learners flows from there (not the other way around). Finally, scenario-based activities are developed that focus on what the learner needs to be able to do, and they allow the learner to ‘pull’ information, as needed, to solve the scenario problems. [for more information on pull learning, check out this article by Tom Kuhlmann]
Using Action Mapping, or a similar strategy, can help you design AoL that assesses higher order thinking skills and gives you information that more accurately represents whether the learners understood the course content. Even though Action Mapping is based on how learners attain the content in the first place, the same principles and strategies can be used for AoL.
4. Gamification
I have included gamification as a heading here to let you know that I haven’t forgotten about it, and am not purposefully ignoring it. I will delve into gamification in the Assessment for Learning (AfL) post.
This is mainly because, in my opinion, gamification is an excellent way to learn content, but at the end of the day AoL should focus on how well the learner knows the content without the distractions of points, scores, timers, and leaderboards.
5. Tracking assessments
My final thought related to the
instructional design of AoL is that before writing your assessment, you should
consider how the scores will be tracked. Are you working within an LMS? For an
eLearning module, will the scores be collected by the SCORM or xAPI package, or
will the questions be LMS-based? Make sure you know the answers to these
questions before beginning the design of the assessment, because these considerations
could have a major impact on what you can and can’t do when designing the AoL.
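As an illustration of what tracking involves, here is a hedged sketch of a minimal xAPI statement reporting an AoL score. The top-level field names (actor, verb, object, result) follow the xAPI specification, but the learner, activity ID, and pass mark are invented for the example:

```python
# Hedged sketch of a minimal xAPI statement for a scored AoL result.
# Field names follow the xAPI spec; actor, activity ID, and the 0.8
# pass mark are assumptions for illustration.

def build_result_statement(email, activity_id, raw, max_score):
    scaled = raw / max_score  # xAPI requires scaled to be between -1 and 1
    return {
        "actor": {"mbox": f"mailto:{email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": activity_id},
        "result": {
            "score": {"raw": raw, "max": max_score, "scaled": scaled},
            "success": scaled >= 0.8,  # assumed pass mark
        },
    }
```

A SCORM package would report the equivalent information through the runtime API instead, which is exactly why the choice of standard needs to be made before the assessment is designed.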
AoL in learning experience design
Learning experience design takes
instructional design one step further by considering the learner experience. Since
the learner experience doesn’t stop at the end of the course but continues on
through the AoL, let’s look at a few ways that we can ensure the learner is
having a positive experience on the AoL.
1. Navigation
If the AoL is not within the eLearning
module (or even if it is, but it is significantly different than the rest of
the module), you may want to provide the learners with some navigation help
before they begin the assessment. You could include a short video, images, or
simply text describing how to navigate the assessment.
Unfortunately, I’ve learned this the hard
way. While creating my second-ever course during my Master’s degree, I
developed a final assessment within the Moodle LMS. Since the assessment environment
was significantly different than the course environment, many learners
struggled to understand how to navigate through the assessment. This was the
number one complaint I received. Since then, I’ve tried my best to ensure that I
don’t add to the stress of my learners by familiarizing them with the
assessment environment.
2. Clarity and visual design
Clarity of an AoL primarily comes down to clear instructions and appropriate vocabulary (that is consistent with the vocabulary used within the course content). Before releasing an AoL to your learners, have another person read through the instructions and check that they understand them perfectly.
Additionally, it helps to include comments at the start of the assessment related to the required passing grade, the number of questions in the assessment, and an example rubric if rubrics will be used.
Visual design is mostly of concern for AoL
completed within an eLearning module. All of the visual design considerations
that exist for the content part of the eLearning module continue to matter on
the AoL. Distracting or confusing designs can significantly impact a learner’s
ability to complete an AoL.
3. Accessibility
Review your learner analysis when preparing your AoL. Although you may consider variety in questions to be important, adding images, audio, or video could be troublesome if you have learners with visual or auditory impairments.
Additionally, any learners who will complete the AoL on a mobile device and will be in a public place may not be able to unmute the course, so they may not be able to answer questions that rely on audio or video.
Continuing to consider
the mobile learners, try out your AoL on a mobile device prior to releasing it
to the learners. You may find that the radio buttons or checkboxes used to
answer questions are too small for your fingers to select.
4. Reduce stressors
Many learners experience test anxiety to
some degree. Don’t make things worse by adding in a timer or trick questions.
Feedback for AoL
As with gamification, I have included feedback as a heading here to let you know that I haven’t forgotten about it, and am not purposefully ignoring it. However, I won’t make it a focus of this post because many learners don’t look beyond their final test score.
Ideally, all learners would carefully read the feedback and review any concepts they answered incorrectly even though they won’t be re-assessed, but this rarely happens in real life. This means that the feedback given while the learners are learning (AaL) and (AfL), which has the possibility of altering learner behaviour, needs to be filled with meaningful comments. So be sure to visit the Assessment for Learning post to read more about this topic.
Conclusion
When I first decided to write this post, I
never expected it to be this long! But with learning outcomes, scenarios, navigation,
accessibility, and more, there’s lots to consider when creating AoL.
I’ve only skimmed the surface on some
topics, but it should be enough to help you refine your online searching. In
the future I might explore some of these topics in more depth. Hopefully, you’ve
read at least one thing here that you’ve never considered before.
You’ll find the same kinds of information
(on slightly different topics) in the upcoming posts on Assessment for Learning
(AfL) and Assessment as Learning (AaL). Stay tuned for them.
Before beginning my journey as a learning
experience designer, I was a teacher for many years. Shortly before I left the
profession, the big, exciting idea surrounding assessment and evaluation was the
triple threat of Assessment as Learning (AaL), Assessment for Learning (AfL),
and Assessment of Learning (AoL).
Most teachers, especially those who had
been working with the former formative/summative assessment model, didn’t really
like the AaL/AfL/AoL model. I, on the other hand, loved the model. Not only
does it include the concepts of formative and summative assessment, but it also
brings in the idea of metacognition. Research has shown that learners with
strong metacognition skills (i.e., they are very aware of their own personal methods
and preferences of learning) are more successful in school.
My love for this model is so strong that
even as a learning experience designer, I continue to use it to craft my
assessments. So, how does it work? Well, it’s based on three different types of
assessment, each with its own purpose.
Let’s start with the one that is the most
common, and therefore the easiest to understand.
Assessment of Learning (AoL)
AoL is a formal assessment of what a
learner has learned by the conclusion of the chapter, module, or course, etc. This
is often in the form of a test or exam. Behavioural-based courses may also use
a marked branching scenario for the AoL. At the primary or secondary school level,
it could also be some sort of project or presentation.
In the teaching world, AoL is also known as evaluation (note this is different than the term evaluation in the instructional design world). It is also known as summative assessment in the former formative/summative assessment model.
Assessment for Learning (AfL)
AfL is a less formal (or entirely informal)
assessment of what a learner has learned from an individual topic or task. This
type of assessment is either unmarked (contains only feedback), or a mark is
given simply for the learner’s own knowledge – but it does not count toward a
final grade score. This is often in the form of a quiz or game.
As you might have noticed from the name, the
focus of AfL is on the process of learning. Assessment is (or at least should
be) strongly tied in with the learning process. The AfL step helps learners
identify their strengths and weaknesses, and it ultimately helps the learners
focus on what they need to know to succeed.
AfL is also known as formative assessment
in the former formative/summative assessment model. In the instructional design
world, this is sometimes known as a knowledge check.
Assessment as Learning (AaL)
AaL is where the metacognition piece comes
in. Just like the AfL, AaL has a strong focus on the process of learning. These
types of assessments are not marked, but instead provide the learner with
information about how they are progressing. This allows the learner to make
their own personal evaluation about their knowledge and whether they are ready
to move on to new content, or if they need to revisit what they have just
learned.
In this case, feedback is crucial – in
particular, immediate and personalized feedback. This is because the learner is
basing their personal evaluation of their knowledge on the feedback provided to
them. Having to wait to finish an entire quiz before finding out if they were
correct on question number one is not effective. Similarly, receiving general
feedback that doesn’t address their own issues is not effective.
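To make that idea concrete, here is a minimal sketch in plain JavaScript – not tied to any particular authoring tool, and with a made-up question and wording purely for illustration – of a question where each answer option carries its own targeted feedback, shown immediately and never recorded as a mark:

```javascript
// Hypothetical AaL-style question: every option has feedback that addresses
// the specific misconception behind that choice, and nothing is graded.
const question = {
  prompt: "Which assessment type is unmarked and builds metacognition?",
  options: {
    aol: "Not quite – AoL is the formal, marked assessment at the end of a module.",
    afl: "Close – AfL also focuses on the process, but AaL adds the metacognitive piece.",
    aal: "Correct – AaL is unmarked and asks learners how they know what they know.",
  },
  correct: "aal",
};

function respond(question, choice) {
  // Feedback is returned immediately for this one question, rather than
  // after the whole quiz, and it is personalized to the chosen option.
  return {
    correct: choice === question.correct,
    feedback: question.options[choice],
  };
}
```

The key design point is that the feedback lives with each option, so a wrong answer tells the learner *which* misconception they likely hold instead of a generic “incorrect.”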
In addition to immediate and personalized
feedback, an important aspect of AaL is the frequency of use. AaL is most
effective when it is used regularly – for example, following each topic. This
helps the learner build competency and confidence when they are progressing
well. Additionally, it helps learners zero in quickly on their weaknesses
when they are getting stuck.
A final important aspect of AaL is the ‘how’.
Without asking learners how they know what they know, you are missing the
valuable metacognition piece. Ideally, AaL will engage learners in the learning
process by helping them understand where their thoughts and assumptions come
from. When done well, it can also help learners understand how they learn best
(and no, I’m not talking about the debunked theory of learning styles, I’m
referring to things such as how they favour piecing together information, what
type of context building works best for them, how they effectively retrieve
information from their long-term memory, etc.).
Conclusion
In my opinion, compared to other assessment models, such as the formative/summative model, the AaL/AfL/AoL model is the complete package. When used properly, both the learner and the teacher, facilitator, or content developer are well aware of how the learner is progressing. They are also aware of what steps need to be taken for each specific learner to be successful because they have already zeroed in on the exact stumbling block of the learner.
Keep your eyes peeled over the next few weeks as I delve into each of these assessment types more thoroughly, including how to incorporate them in an instructional design environment and how to adapt them to fulfill effective learning experience design.
It’s now the end of August and I thought it
would be a good time to look back at my goals for the year. Not to brag about
how much I’ve accomplished, but rather to motivate myself to continue my
personal and professional growth, because I think I’ve been slacking on meeting
my goals.
At the start of January (although I didn’t actually publish the post until mid-April), I made myself some goals for 2019. One goal for each letter of the alphabet. While I knew that was an ambitious endeavour, I think that nurturing my own learning is important to helping me become the best learning experience designer possible.
I’ve been extremely busy with other tasks this
spring and summer, so I feel like I’ve fallen off the path for my own learning.
Last week I posted about getting back to developing my skills – in that case it
was by playing around with Photoshop. I remember that being one of my goals; however,
I actually struggle to remember what else I wanted to accomplish this year.
So, I thought this would be a good time to revisit
these goals – and to do it in a public forum, so that I would feel more
motivation to follow through.
My list
A = Adapt (rapid eLearning tool)
So far I have only looked at what is possible with this tool, but haven’t done much else.
B = Blogging
This one I could consider a relative success… I set up my blog, and this is now my 9th post in 5 months.
C = Character Animator
Beyond my practice with this tool in grad school, I haven’t had the opportunity to practice my skills.
D = Design (visual design)
I actually attended two great talks on visual design at the Canadian eLearning Conference 2019, by the amazing Bianca Woods (http://biancawoods.weebly.com/) and the talented Sarah Dewar, and I’ve been trying to implement their tips ever since.
E = Evolve (rapid eLearning tool)
As with Adapt, so far I have only looked at what is possible with this tool, but haven’t done much else.
F = Feedback
This is something I am currently pushing hard at my job; in fact, I have somewhat surreptitiously started adding immediate, personalized feedback to all questions in our courses, without really mentioning to management that I am doing it… I think if I just slip it in, they can’t question it, right?
G = Gamification
I attended a wonderful full-day workshop by Clint Clarkson from eLearning Alchemy (https://elearningalchemy.com/) at the Canadian eLearning Conference 2019 – he spoke about how easy it is to add in game thinking (even if not full-on gamification) to eLearning courses, and I left full of ideas for ways to implement these strategies.
H = Heroes (continue doing eLearning Heroes challenges)
Here I definitely feel I’ve fallen short – the last eLearning Heroes challenge that I completed was on May 5, 2018… time to get back on the horse with that one!
I = Illustrator
While I haven’t created anything unique in Illustrator
(my least favourite Adobe product) since January, I have done quite a bit of
editing of vector diagrams in Illustrator at work – I’m going to call this one
a win since I struggle so profoundly with Illustrator!
J = Join ID communities
To be honest, I’d forgotten this was on my list, although I do think it is incredibly important – so even though I haven’t put much effort into it, I did meet a ton of amazing people at the Canadian eLearning Conference 2019, including (but not limited to) Connie Malamed (very briefly – but with such a celebrity, I say it counts! http://theelearningcoach.com/about/), the hilarious Simon Blair (https://www.simonblairtraining.com/), the insightful Tracy Parish (https://www.tracyparish.ca/), the loveable Cindy Plunkett, the game-loving Clint Clarkson (https://elearningalchemy.com/), and the award-winning Meagan Underwood. Plus, I have started using Twitter again (@PascaleSwanson).
K = Keynote summaries (post summaries of keynotes from conferences)
I’ve dropped the ball on this one – so, to make up for it… keep your eyes peeled for a summary of an upcoming keynote later this year (perhaps from DevLearn, or maybe an eLearning Guild summit).
L = Lectora
I think I might have to let this one slide, unless I can find the money to splurge on this program.
M = Meetings (attend 2-3, including online summits with the eLearning Guild)
So far in 2019 I have attended the Canadian eLearning Conference 2019 as well as a webinar from the eLearning Guild (Dynamic Video Interactions for Increased Engagement), and I plan to attend DevLearn 2019 and the webinars ‘The Business of Learning’ and ‘Microlearning Design’.
N = Negative space (make better use of it)
Another goal I had forgotten about, yet I must have subconsciously been considering, because I do feel that I’ve done a better job with whitespace recently – not to mention attending a session by the lovely Bianca Baumann (https://www.biancabaumann.com/) at the Canadian eLearning Conference 2019.
O = Objectives (improve construction of objectives)
Definitely an area I’d like to continue working on – I do have a book, “Preparing Instructional Objectives” by Robert F. Mager, that I purchased a while ago, and it has just rocketed to the top of my reading list.
P = Photoshop
Well, as of last week, I can now add this to the ‘on track’ list of goals – although I had been ignoring my personal development in Photoshop skills due to a lack of time, I’ve just made this a priority again.
Q = Quality over quantity (despite manager’s desires)
I continue to struggle in this area – my manager and boss are mostly concerned about money, yet I continue to push for what is best for the learner… this obviously results in some unpleasant clashes. To help me along this path, I plan to attend the upcoming eLearning Guild online summit called ‘The Business of Learning’.
R = Research (continue to research in the field of ID)
After Walden University updated their Alumni Library, I am no longer able to access high quality journals – I plan to reach out to the library (and perhaps OISE-UT as well) to see if there is any way I can access the journals to keep my knowledge up to date.
S = SME wrangling (develop skills to get the most out of my SMEs)
I had actually planned to attend an eLearning Guild webinar on Leveraging SMEs but had to miss it due to work deadlines – I hope to watch a video copy of the webinar soon to learn about this important skill.
T = Talk (present a talk at a conference)
I actually had an amazing experience
presenting a project at the Show and Share portion of the Canadian eLearning
Conference 2019 – not only was it a thrill to present my work to talented
professionals, but I even won a nice little award (People’s Choice Award) for
my work!
U = UI/UX
This is something that has been at the top of my mind recently, particularly after an informative talk by Bianca Baumann (https://www.biancabaumann.com/) at the Canadian eLearning Conference 2019.
V = Variety (stretch my capabilities of design and development)
Hmmm… this is a tough one… I am limited in what I can do at my job, yet I really have tried my hardest to stretch my capabilities of design and development… does that count? I guess I have also tried some different approaches in my own personal time, and that might count too… I’ll give this one a C+.
W = Writing (improve specific aspects – for example, using contractions)
Another goal that I had forgotten about,
but I suppose this timing is appropriate as I am about to send off some scripts
for narration – time to review them to ensure I have people teaching the course,
not robots!
X = Xerox (“steal” from other professions, e.g., marketing – AIDA, WIIFM, etc.)
I think this will be my next big challenge in the second half of the year.
Y = YouTube (update my channel)
Ooooops…. I should probably do this!
Z = Zapworks
I have played around with Zapworks, and even pitched an idea at work (it got turned down), but I still intend to continue finding ways to incorporate XR into my courses.
Summary
All-in-all I’m actually doing better than I thought. I think moving forward into the second half of the year, my focus will be particularly on blogging, joining ID communities, quality, variety, and xerox-ing from other professions. I also might look into co-authoring a professional paper with my colleague, if I can get access to the Walden or OISE libraries to perform the literature reviews.
Based on the cute image I found for the header of this post, I think my main downfall has been step 2, aka lack of planning. Something I will look into starting in September.
Have you met your goals this year? Where do you plan on focusing your efforts in the second half of the year? Do you have plans for achieving your goals?
I’m here… don’t send out a search party! I’m not dead, just haven’t had time to write any posts recently.
Since I’ve been so busy, I’ve also noticed that I haven’t really had much time for my own learning. L&D professionals know better than anyone else how important learning is, so I’ve felt like I’ve been a bit short-sighted by getting involved in other projects at the expense of my own learning. So, I figured today I should carve out some time for myself, to learn something new.
When I graduated from my Master’s program back in December, I had a whole plan laid out over two years into the future of what I planned to learn, week-by-week. All of that has fallen by the wayside. Today I picked up where I left off – with some fun Photoshop skills.
These days when I use Photoshop, I tend to use the same skills over and over. I never really try out anything new. This means my courses might be missing out on great graphics that I simply don’t even know I can make. Everything I use in my courses works, but that doesn’t mean it couldn’t be even better.
To ease back into my own learning plan, I decided to play around with something simple (and fun) – Photoshop filters. Filters can dramatically change the look and feel of a photo. They can also draw attention to certain areas or aspects of the photo.
Despite being something a beginner could use, in my odd, round-about way of learning Photoshop, I always skipped over them because they seemed a bit too childish or cheesy. Well, as it turns out, some of the filters are neither childish nor cheesy. And you know what, even some of the ones that are can still be useful when applied in a fun, upbeat way.
After spending some time playing around with the filters, I thought I should create something to remind myself of what I’d learned… enter Super Rodney!
I wanted to create a comic (or at least a very small portion of one). Unfortunately, I didn’t have any story in mind. So, I just made up something silly. In the end, it’s the skills that I’m learning that matter, right?
I hope you enjoy it! (because I have a feeling that you’ll be seeing more of Super Rodney in the future)