I’m intrigued by new research about the ways people learn! What about you?

I feel the need to step carefully here. This blog is frequented by experts in organizational change and learning, and I’m about to share some things that may challenge your expertise. I know, because I have come across some ideas that have challenged mine.

What if there were new data about the way people learn that would turn much of what we thought we knew about learning on its ear? In the spirit of learning, would you listen and consider it seriously?

Over the last month or so, I have been engaged in research and discussions about the neurobiology of learning. Along the way, I have uncovered some “hidden truths” about how we learn. Some of them are intuitive, some are not, and most shine a light on practices that are as common as they are ineffective.

Here’s what I found:

  1. A tailored, one-to-one model of instruction speeds mastery.
    Corporate training budgets demand the efficiencies of a common training agenda rolled out to the largest number of people possible. I get that. But research shows that students in individualized tutoring environments can exceed the performance of students taught in a conventional classroom by up to two standard deviations (Bloom 1984).
  2. Retention increases with repetition.
    This one may strike you as more intuitive: The more one practices remembering something, the stronger the connections in the brain. Practice and time-on-task are two of the most important components of learning (Bell & Kozlowski 2002; Brown 2001). But maybe it’s not so intuitive; today, many corporate training solutions are one-time events that fail to offer appropriate repetition.
  3. Retention also increases with control.
    Students who were given the most control over their learning environment reported the least frustration and spent twice as much time on task as students who had little control (Kort, Reilly & Picard 2002).
  4. We learn best over time.
    Like me, you probably discovered in college that “cramming” before the big test does not promote retention. Studies in the cognitive sciences validate this. The “spacing effect” helps in the formation of long-term memory by using timed intervals between the presentation and re-presentation of material being studied (Bjork 1994; Mizuno 1997; Russo & Mammarella 2002). Today there is a lot of data available about the “spacing” of those learning events. I know of very few learning practitioners who are leveraging that important idea.
  5. The best way to encourage retention is with questions.
    Ah, but not just any questions. It has to be the right kind of question, delivered in the right way. The “no penalty” question has proven effective at increasing time on task for all students: the questions asked and answers given during the learning phase have no negative effect on the final grade (Garver 1998). Extensive research shows that learners who use questions to study dramatically outperform those who merely re-read information or use any other study technique (Brothen & Wamback 2000; Krank & Moon 2001; Thalheimer January 2003).
  6. The best types of questions to use are fill-in-the-blank and/or short-answer format.
    This effect likely occurs because having to recall the answer forces the brain to reformulate the memory and reactivate the complete memory trace. These question types produce much greater recall than multiple-choice questions (Glover 1989; Renquist 1983). But here’s the clincher: a multiple-choice question requires a student only to recognize the correct answer, which can impede learning… and may even promote unwanted retention of the wrong answers! I’ve shared this with colleagues who were shocked by the finding.
  7. Let the student evaluate their own answers to study questions after being shown the correct answer.
    Self-evaluation gives the student more control over their learning environment, lowers frustration, and eliminates any hint of a penalty (Lewolt 2008). Now think back to how many programs have kept the scoring process invisible to you, only revealing the outcome after the conclusion of the training (if at all). Another missed opportunity.
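To make the spacing idea in #4 concrete, here is a toy sketch of an expanding-interval review scheduler. This is purely illustrative (it is not an algorithm from any of the studies cited, and the interval values are my own assumptions): each successful recall roughly doubles the gap before the next review, while a failed recall resets the item to a short gap, with no penalty recorded anywhere.

```python
from datetime import date, timedelta

# A minimal expanding-interval review scheduler (hypothetical values).
# Successful recall roughly doubles the gap before the next review;
# failed recall resets the item to a short interval. The "score"
# never leaves this function -- there is no penalty to the learner.

def next_review(interval_days: int, recalled: bool) -> int:
    """Return the number of days until the item should be shown again."""
    if not recalled:
        return 1                      # start over with a short gap
    return max(2, interval_days * 2)  # expand the gap after success

def schedule(start: date, outcomes: list[bool]) -> list[date]:
    """Walk a sequence of recall outcomes and list the review dates."""
    interval, when, dates = 1, start, []
    for recalled in outcomes:
        interval = next_review(interval, recalled)
        when = when + timedelta(days=interval)
        dates.append(when)
    return dates
```

For example, starting January 1 with the outcomes recalled, recalled, forgotten, recalled, the reviews land on January 3, 7, 8, and 10: the gaps expand until the lapse, then the item comes back quickly.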

At Blueline Simulations, we’ve been putting some of these principles to work. They are showing up in our learning interventions in ways both subtle and dramatic. In the process, they’ve challenged some of our client partners’ mental models. And they’ve had some dramatic, positive consequences.

In my next blog, I’ll give you a peek at learning design for maximum impact. But before I do, I want to hear from you!

Which of these “hidden truths” challenge your own mental models and practices? Do you find any of these as intriguing in their implications as I do?

Let me know what you’re thinking. Then, in my next blog I’ll continue this conversation.

Reader Interactions


  1. Michael Gardner says

    This is great stuff. To quote Homer Simpson “[we] gotta get outta this rut, and into a groove!” I bet you have, but if you haven’t – you should check out Brain Rules (http://brainrules.net/about-brain-rules) – a very approachable read on how our brains are wired to learn.

    Two thoughts – your #7 reminds me of a study I once heard about (can’t recall where or when) that spoke to the value of studying worked examples. I think it was done on high school math students, and showed that when learning new problem-solving techniques, the learners gained more from studying solved problems, with the work visible, than by (in some cases) struggling to solve the problems themselves. Your cognitive resources are split between solving the problem and internalizing the rules, whereas with worked examples all your attention is devoted to the latter. That’s always stuck with me as an underused strategy in corporate training. And it kinda needles at me with the balance and use of standard “show me vs. try me” simulations. We tend to think of the latter as a richer learning tool, but is that really so?

    And to your #6 – what are your thoughts on the implications of that for the design of simulated interactions with people, where you (typically) choose your action or response by selecting from a list of options? Short of full-blown artificial intelligence to evaluate a free response, I’m not sure a better option exists… but I struggle with that because I too feel that I could often pick the best answer from a list well before I could generate the same answer myself.

    Thought provoking post!

  2. David Milliken says

    Thanks for adding so well to the conversation!

    I plan to go in search of the study you referenced related to #7. I should have known… the worked-examples technique got me through several years of math at Princeton. If it proves valuable to all learners, it would seem that a little tweak to the way we challenge students to work through case studies would let us accomplish this quite easily. That could add immeasurably to the design of our already very popular Blueline Blueprints.

    As for #6, I think you are spot on: artificial intelligence is a simulation designer’s holy grail. In the meantime, we have been exploring solutions that combine a gaming engine (with rules, probabilities, and randomization dictating which nodes occur), voice recognition, and thousands of nodes to fool the learner into thinking that the simulation they are interacting with is artificially intelligent. In fact, we expect to launch a coaching simulation like the one I have described in the first quarter. Imagine an experience so robust that you could practice interactions for weeks and never have the same interaction twice.
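    To make the randomized-node idea above concrete, here is a toy sketch of a probability-driven dialogue graph. All node names and weights are invented for illustration (the real thing uses thousands of nodes, as noted above): each node lists weighted follow-ups, so two runs through the same simulation rarely produce the same conversation.

```python
import random

# Toy probability-driven dialogue graph: each node maps to a list of
# (next_node, weight) pairs. Node names and weights are invented.
GRAPH = {
    "greeting":  [("smalltalk", 0.5), ("complaint", 0.3), ("question", 0.2)],
    "smalltalk": [("question", 0.7), ("complaint", 0.3)],
    "complaint": [("question", 1.0)],
    "question":  [],  # terminal node
}

def walk(start: str, rng: random.Random) -> list[str]:
    """Follow weighted random edges from `start` until a terminal node."""
    path, node = [start], start
    while GRAPH[node]:
        nodes, weights = zip(*GRAPH[node])
        node = rng.choices(nodes, weights=weights, k=1)[0]
        path.append(node)
    return path
```

    Run `walk("greeting", random.Random())` a few times and the path through the conversation differs from run to run, even though every node was authored in advance.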
