Scoping: Comparing Apples to Apples?


I recently met someone from the local learning industry, and surprisingly, we had not yet crossed paths, probably because she comes from the entertainment side of the street. As we got deeper into our conversation about what we do, how we do it, who is involved, the tools we use, how long it takes, etc., we realized we needed to align our “fruits”: terminology, approaches, concepts, etc., to be able to understand each other, to talk “apples to apples”.

Squirrel note: Building learning materials, like elearning, requires writing text to be read and scripts to be listened to. Writing is considered a standard, required skill, but not everyone writes well, especially for self-paced learning. There seem to be different approaches to writing in elearning production, which affects “who” you use to write the text: instructional designers design the elearning and then write it up themselves; they design the learning and pass it on to someone with a communication/journalism background; or they design the learning and pass it on to a scriptwriter trained for the film industry. Many different people to consider, different worlds, different expectations to deal with…   🙂

When scoping a job, we usually start by looking at past experiences to figure out what needs to be done and for how much, right? And you’ve probably heard the expression “we need to compare apples to apples” or “you can’t compare apples to oranges” when you realize that you’re looking at different examples. And come to think of it, there are many kinds of apples: color, texture, taste, etc.

Red or green?

So we need to get to common ground: talk the same language, use the same reference points. Especially when you are talking with clients. Learning materials (elearning, instructor-led training (ILT), etc.) can be distinguished from many different perspectives, but generally we end up discussing the “look and feel”, the interactivity, the media elements and, more importantly, the engagement factor.

Side note: Of course human nature wants it all, but sooner or later you need to talk about how much it costs and how long it takes to build.

For now I’ll set aside the “look and feel” and the engagement factor, as the former is pretty easy to tackle and the latter is very subjective (at least I think so). That leaves interactivity and the media elements, which leads the discussion to “levels” of complexity, to relate it back to effort, cost and time.

For elearning, we’ve all heard of levels 1, 2 and 3. For ILT? I actually don’t know if there is such a classification. There should be. Maybe some of you can join in and point to some… But for now, I’ll focus on elearning.

So for many years now, we’ve discussed elearning levels, usually 1-2-3, which should probably be extended, as the possibilities keep growing. The biggest challenge I find is integrating interactivity and media. A few years ago Amit Gard posted an interesting perspective on a study from The Chapman Alliance about the effort required to develop various levels of custom elearning. I think The Chapman Alliance model is way too generic and encompassing. Maybe the people who came up with it never had to personally scope a project.  🙂
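To make the level-based estimating idea concrete, here is a minimal sketch of the “seat time times a per-level ratio” calculation such models boil down to. The ratios below are hypothetical placeholders for illustration only, not the Chapman Alliance figures:

```python
# Hypothetical development-hour ratios per "level" of elearning
# (placeholders, not published benchmarks).
HOURS_PER_FINISHED_HOUR = {1: 50, 2: 150, 3: 400}  # dev hours per hour of seat time

def estimate_dev_hours(seat_time_hours: float, level: int) -> float:
    """Estimate total development hours from seat time and elearning level."""
    return seat_time_hours * HOURS_PER_FINISHED_HOUR[level]

# Two seat-time hours at "level 2" under these placeholder ratios:
print(estimate_dev_hours(2, 2))  # 300
```

The whole debate in this post is, in effect, about whether a single `level` key can meaningfully stand in for both instructional design and media complexity.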

I like Amit’s extended model. I’m not sure about his breakdown of “course types”, as I would add at least one more type between “presentation” and “scenarios” to account for designing activities that are not scenario-based (I guess it depends on how he derived his range of instructional design effort). His model better addresses a situation where you need a highly interactive, scenario-based design with non-linear branching, in a very simple interface, and no media: the instructional design effort would be very high, while media design and production would be very low. But I’d like to see separate curves to distinguish the course type (instructional design) from the multimedia parts. Maybe two separate graphs need to be created, which you then overlay to get the “real” picture?
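One way to picture the “two separate graphs” idea is to score instructional design and media on independent scales instead of one combined level. A sketch, with made-up scale definitions:

```python
from dataclasses import dataclass

@dataclass
class CourseProfile:
    """Rate instructional design and media on separate 1-4 scales
    instead of collapsing both into a single elearning 'level'.
    The scale anchors here are invented for illustration."""
    id_complexity: int     # 1 = linear page-turner ... 4 = non-linear branching scenarios
    media_complexity: int  # 1 = text and stock images ... 4 = custom video/animation

    def label(self) -> str:
        """A compact label that keeps both dimensions visible."""
        return f"ID{self.id_complexity}/M{self.media_complexity}"

# The branching-scenario-in-a-plain-interface case from the text:
course = CourseProfile(id_complexity=4, media_complexity=1)
print(course.label())  # ID4/M1
```

A single “level 2” label hides exactly the difference between an `ID4/M1` course and an `ID1/M4` one, even though the teams and budgets they need are completely different.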

Another example: Shift Disruptive elearning (which in turn points to EduTech’s wiki page on “interactivity”) presents it from the interactivity side with a four-level classification: passive, limited, moderate and simulation. But there again, they mix media design and development into the classification.


There are many different views out there, which makes it difficult to compare: tomatoes to apples… And then we need to look at what it takes to create and produce the apple we agree is needed. Ultimately, the right model is the one you feel most comfortable with, the one you can “easily” explain and relate to the person you’re talking to. Of course, you also need to consider the group you are part of: your colleagues, the ones who need to give the same spiel in front of clients.

Makes sense?

I’m currently working on a model of my own, built from past experience, considering the major blocks of activities relating to elearning design and development: lead ID, ID, authoring (integration, programming), Lx/Ux*, media design and production, QC and, of course, PM. If some of you are interested in discussing this further, from either the service provider or the buyer side, please let me know.

*Lx = Learner Experience and Ux = User Experience
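As a rough illustration of how such a model could roll up, here is a sketch that splits a total estimate across those activity blocks. The percentage split is invented for the example, not measured data:

```python
# Hypothetical effort split across the activity blocks named above
# (placeholder percentages, not benchmarks; they sum to 1.0).
EFFORT_SHARE = {
    "lead ID": 0.05,
    "ID": 0.25,
    "authoring": 0.25,   # integration, programming
    "Lx/Ux": 0.10,
    "media": 0.20,       # design and production
    "QC": 0.05,
    "PM": 0.10,
}

def breakdown(total_hours: float) -> dict:
    """Split a total development estimate into per-activity hours."""
    return {activity: round(total_hours * share, 1)
            for activity, share in EFFORT_SHARE.items()}

print(breakdown(300))
```

The real work of the model, of course, is making those shares vary with the ID and media complexity of the course rather than staying fixed.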


4 thoughts on “Scoping: Comparing Apples to Apples?”

  1. This is a great topic, Benoit. I’ve often seen clients want to jump to IMI level definitions very early in the game because it seems to provide definition and certainty. You just multiply hours of seat time by the ratio for the appropriate level.

    As you suggest, these Interactive Media Instruction levels refer to the sophistication of the media, but design gets mixed in. I often find myself in the trap of realizing that the best solution is a relatively low IMI that is highly designed…but we have no terminology for this. A series of multichoice q’s with rich remedial and augmenting feedback can give someone the experience of troubleshooting a problem. The ISD costs will be high (and of course, you need a dedicated SME), but the media costs will be very low. Then there is the issue of efficiency…in some cases, the higher the design sophistication, the faster the learner can acquire the knowledge or skill…which destroys the idea of seat-time ratios. We really shouldn’t be selling competency by the hour anyway.

    Contrast all this to an IMI4 simulation that really needs little ISD (in the case of a system simulation at least), but needs tonnes of high fidelity data, programming etc. to make it work. I think we need some measure of the instructional design required and another of the media aspects to be able to estimate work and put together the right team to do it. Tricky stuff!


  2. Thanks Ian! The funny thing is that what you’re looking to name, or classify (“…the best solution is a relatively low IMI that is highly designed…”), is simply good design. We hear it frequently: “man, I could have done that!” Really? It is frustrating indeed. And inescapable, as there will always be people who don’t understand the design process. And often, they are the people paying for the job. The trick is conveying the value of what they’re getting. That being said, I am working on a model, which I’ve expanded recently, to look at the levels of elearning, but separated into multiple layers of considerations such as the ones you highlighted here. And I can say that it was stimulated even more by the Model of Learning Objectives developed by Rex Heer at Iowa State University, which I’m looking at using as a base… Maybe we can chat about this?

