Writing Process from Brain Dump to Storyboard

How do you get from a SME brain dump to a well-organized storyboard? Here’s my instructional design writing process for organizing content.

Last week, an ID asked me about my writing process. Specifically, she wanted to know how I get from content like a SME “brain dump” to a finalized storyboard that’s ready for elearning development. Prior to our call, she had reviewed a number of other resources and books. Most sources she found jump straight from “you meet with the SME and gather information” to “you write the storyboard” without explaining the thought process in between. She wanted to know: if you get good content from a SME, but it isn’t organized, how do you figure out what goes where? How do you determine what to include and what to cut? What’s the process for creating a storyboard? It was a great exercise for me to reflect on and break down a process that is fairly ingrained and automatic at this stage of my career.

[Image: the writing process from brain dump to storyboard, shown as a tangled ball on the left connecting to a symmetrical spiral on the right]

Read content, note questions and ideas

In my writing process, the first step is usually to read and take notes. Clients typically send me something to read and review: PowerPoint presentations, policy guidelines, technical documentation, etc. Sometimes, a SME writes some sort of “brain dump” of what they know and think is important.

I generally read and review everything possible prior to a kickoff call. I find that doing at least a preliminary read-through allows me to ask more specific questions during a call. It also helps me build credibility with the SMEs and stakeholders by showing that I take their content seriously and am putting in the effort to understand their needs.

My notes at this stage are usually extremely rough. In my initial review, I identify questions to ask in the kickoff call. I also note ideas for activities and scenarios. For example, here are some of the initial notes from a past project (with the identifiable details removed):

  • Who is the audience?
  • What do they need to DO?
  • What are the biggest problems currently?
  • How much do they need to memorize, and how much should we focus on where to find the information so they can look it up as needed?
  • How much jargon do you want for this audience? [With a note on a specific example]
  • Slide 11: is this information important?
  • Lots of text here – need to think about what needs to be included and how
  • Looks like lots of changing to plain language
  • Make a list of all terms used: need to create a glossary for terms and acronyms
  • Note picture of [something from the course]. Are there more pictures of problems?
  • [Name of] checklist – how much do they need to do? Should we practice using the checklist?
  • Question: does everyone need to know all of these topics? Could we start with some questions about their job to focus on the topics they need?

During this initial note-taking, I include my comments on the big ideas and topics from the content. I summarize the key points and desired behaviors as I understand them, at least at this point in the process.

Kickoff call, SME interview

In the kickoff call for a project, I start by asking about the goals and objectives. I often use some variation on this “lightning-fast needs assessment.”

If you follow the action mapping model, this is when that analysis would happen. If you’re doing action mapping or broader performance consulting, the rest of this process applies to the parts of the solution that end up as elearning courses.

Sometimes, the solution is going to be an elearning course regardless of any needs assessment. The example in this post was compliance training related to a legal requirement. The kickoff call helped me identify what to include in that training, but there was never any question about whether or not we were going to create elearning.

Once you know the objectives, you can drill down with more specific questions. This is where the questions from my earlier notes come in. During the call, I ask about scenarios and practice activities. I find out what the biggest issues are and what we need to focus on. I also try to identify what’s less important and could be cut. On many projects, this conversation can be part of the kickoff call, but sometimes it needs to be a separate interview with the SME.

Continuing the example from above, these are some of my notes from the kickoff call.

  • Biggest issues are [two specific problems]
  • Need to recognize what the problems look like, not the technical language to describe them
  • Include some scenarios in 2 parts:
    1. Identify what’s wrong
    2. How do you fix it?

In this example, “recognize” and “identify” are the appropriate level verbs for the objectives. In this course, the primary goal was for participants to recognize some specific issues they might encounter in their work. They needed to identify which situations were problems that needed to be addressed or reported, and which situations were fine and could be ignored.

Design document

After the kickoff call and any additional SME interviews, I start drafting the design document. I use different templates for design documents depending on the client needs, but usually this includes a summary of the course, audience, objectives, and scenarios, plus an outline.

My design documents usually include more detail about the practice activities, assessments, and scenarios, but less detail about the content pieces. At this point, I want to know the order of the content, but I mostly need just a high level outline of what information to include. I’m more focused on the activities.

Organizing content

In our discussion, the other ID asked specifically about how I organize content and come up with that outline. If you get a disorganized “brain dump” from a SME, how do you figure out how to group and chunk content? Even if you get something organized as source content, you still might need to reorganize it.

For the example in this post, I received a PowerPoint presentation used in their current training. The presentation had a lot of useful photos showing different situations (a big plus!). Most of the presentation was a long, uncategorized list of examples of problems and best practices. Our memories just don’t do well with lists of 20+ items, so I knew I needed to reorganize those examples.

Task-based and content-based organization

But how do you organize content? Sometimes, there’s a logical order based on the task or procedure itself. In the example course in this post, I used some procedural organization.

  1. Identify if something is an issue or not.
  2. Take action to fix it or report it.

But that structure was a small unit of organization that applied to lots of different examples, situations, and best practices in the course. I needed a larger organizational structure to chunk all of the various issues and solutions.

Order Out of Chaos: Patterns of Organization for Writing on the Job by Richard Rabil, Jr. is one of the best resources I’ve found on these organizational structures. While his article is about technical writing rather than writing for elearning, I find that the patterns are very applicable to our work. He differentiates between “task-based order,” where you follow the steps of a procedure, and “content-based order.”

I think where IDs often get stuck is working with content-based organization, because the pattern isn’t as obvious. That’s where reviewing the standard patterns in the article can be really helpful:

  • General to specific, familiar to new
  • Chronological
  • Spatial
  • Comparison-contrast
  • Cause-effect
  • Order of importance
  • Group related material
  • Parallel structure for parallel sections

Example content-based organization

In my example, I grouped related topics together. Then, I ordered each group by how common and important it was. This structure made it much easier for learners to see the relationships between the different situations and best practices, plus it emphasized the most important content. This course included lots of short practice questions rather than one large scenario, so the practice was embedded throughout.

  1. Intro
  2. Most common & most important type of problem
    • Example
    • Example
    • Practice
  3. Second most common type of problem
    • Examples
    • 2-part practice
    • Examples
    • Practice
  4. Third most common type of problem
    • Examples
    • Practice
    • Examples
    • 2-part practice
  5. Reporting and record keeping
  6. Summary

(As a side note, after the SME saw this organization for the elearning, she revised her PowerPoint slides. She now uses this structure when she delivers this training in person.)

Outline to storyboard writing

Once I have a high-level outline, I flesh it out. Depending on the client’s expectations, I include different levels of detail in the design document. My design documents often include just a high-level outline, but it’s enough for stakeholders to see the overall structure. Sometimes, I create detailed outlines where each slide is planned out prior to storyboarding. If I’m doing a lot of research and pulling from multiple sources, I spend more time outlining and planning so I can identify any gaps earlier.

When you have a well-organized outline, it’s much easier to storyboard. At this point, I do just start writing. I use the content from the SME as a starting point, but I edit and paraphrase it. I focus on a conversational tone and shorter sentences that work better for voice over.

Your writing process?

That’s my process! How would you explain your writing process to someone else? Do you have any organizational patterns that you have found useful that weren’t mentioned here?

8 thoughts on “Writing Process from Brain Dump to Storyboard”

  1. Good to know that it’s the content organization that’s less familiar to some folks. I know I always gravitate straight to task instead of content. But sometimes you do have to cover the broader information and background before you go down the path of tasks.

    1. Task is my default as well, and it works for many training situations. After all, we’re doing workplace training, and presumably that relates to some sort of task?

      But, that approach isn’t always going to work. I’ve done a fair amount of professional development elearning over the past few years. Those aren’t as tightly tied to tasks. Often, professional development by an association has to appeal to members who work in very different organizations. That means I often focus more on the foundations, theory, and how to apply it in different situations rather than a straightforward task-based approach.

      Compliance training can be challenging this way too. If there are legal or regulatory requirements to cover certain topics, then you often end up with a bit of a checklist of content. Ideally, you can put those topics into context with a task-based approach. But sometimes, there just is a list of “these are things you might run into” that has to be organized somehow.

  2. Awesome post Christy, and a lot of things to take away from this. Extracting the right info from the SMEs, organizing the relevant info, and finally translating it into a storyboard is indeed a crucial aspect of many course design and development requirements.
    We recently launched a tool called ID-Assist as a Google Doc add-on. The idea of ID-Assist is to build custom features and workflows on top of GPT-3 (the tech that powers ChatGPT), tailored specifically for Instructional Designers. The tool is still in beta and we have many exciting features coming soon. One of the key use cases of our tool is to extract, organize, and transform raw information (often provided by SMEs) into usable content pieces that Instructional Designers can use within storyboards. Some of the use cases are listed in our recent blog –
    https://id-assist.co/blog/f/chatgpt-for-instructional-design-and-elearning-development

    We hope our tool can help save IDs a lot of time and create effective learning outcomes.

    Give it a try and let us know your feedback –
    https://id-assist.co/

    1. ID Assist is an interesting tool. I can see the value of this for things like summaries and organization, especially for generic content where there’s a lot of source material to start with (like your climate change example).

      Looking at the climate change example, it’s clear this tool is only something that can be used for idea generation and first drafts.

      • The Bloom’s taxonomy example doesn’t really apply the taxonomy in any useful way.
      • The Mager objectives don’t follow his model (no criteria), and they aren’t really about performance of skills. They’re OK for education, I guess, but not workplace training (although perhaps that’s because your content is also basic education rather than workplace training).
      • The quiz is very basic, just comprehension level. Even though you prompted it for an application question, none of the questions are above the level of understanding.
      • The scenario is terrible. It’s a fake scenario that is just a wrapper around explaining, not application. Again, some of that is because your content doesn’t really lead to application, so maybe it would be possible to get better results.
      • In the “Edit content” prompt, the GPT model created a nonsense phrase of “multi-vast impact,” which is cute, but not useful.
      • The semantic search incorrectly pulls a result about temperature change when you asked about sea levels.

      It’s an interesting start, and I see how it might be useful for grouping content together. I think tools like this will likely improve in the next few years. Since I assume the example is basically as good as you can get with the prompts, and it still has a lot of problems, it seems like it’s still a ways from being useful on a regular basis, especially for anything beyond maybe basic high school or undergrad curriculum.

      1. Thank you for having a look at the tool output and providing detailed feedback. We completely agree: the tool and the underlying technology are still in a nascent phase and evolving, and that is evident in the limitations of the current version. Such technologies tend to evolve exponentially, and we are confident the use cases will expand pretty soon.
        The climate change example and the existing prompts are generic and simple, to give users an idea of the spectrum of use cases. You are right, the quality and intent of the prompts drive the output. You can find a lot of specific prompt examples in our prompt library –
        https://sites.google.com/asterial.in/id-assist-prompts/home
        Overall we agree: while AI and NLP can be powerful technologies for instructional design, it is definitely important to carefully consider their limitations and potential drawbacks, and to use them in a way that complements and enhances, rather than replaces, human instruction. ID-Assist is intended to be used as an assist tool, as reflected in the name.

  3. Awesome post! I appreciate your reflecting on a process that is also instinctual and automatic for me as an instructional designer. I find that with general to specific, spatial, and maybe some of the others, I need to include an advance organizer so participants can reorient themselves and build that scaffolding in their heads. Really appreciate your posts.
