Tag Archives: scope

Scoping: Capitalise on your experience to minimise guessing and… risk.


“How long will it take to produce this?”

“Well… who’s going to do it?”

“Who’s available? Paul would be great, but he takes longer…”

“Marie too, but someone would need to review it because…”

Does that sound familiar?

Scoping is risky: if you get it wrong, you may lose money, or annoy your client when you ask for more money and time to complete the job. Nobody wants that, right?

Worse: you may end up forcing people to produce something without enough time to do it properly.

Get better at scoping to hit the target!


A little while ago I posted Training Projects: How much does it cost? How long will it take? and finished by suggesting that you keep track of the effort spent during your projects, in the same way you estimated it in the first place. This way you collect data on which you can base future estimates.

Create your own ratios for tasks, roles or deliverables.

First, break down your project like a story: sequence what needs to be done and identify who will do it. You end up with your lists of deliverables and roles that need to be filled.

About the roles… Different roles dictate rates, especially when some roles require different levels of expertise. You may use the same person to fill different roles, but if that implies tapping into different levels of expertise, it raises the question of different rates. You may want to charge a single blended rate to lessen the accounting, but that raises questions about the value your client is getting for the rate you’re charging… One could argue that “higher-paid” resources are faster at “lesser” tasks, but that is a [very] contentious question… Separate role-based rates make it easier to track data and create your own ratios, even if it requires more “accounting”.

So let’s look at a very simplified example of a completed tracking table, after a project…

Task                           Hrs (estimate)  Hrs (actual)  Role         $
Analysis                       (sum)           (sum)
  Review materials                             12            Senior ID
  Meetings and discussion                      10            Senior ID
  Write-up of Design Document                  36            Senior ID
Storyboarding                  (sum)           (sum)
  Module 1                     (sub-sum)       (sub-sum)
    Lead                                       7             Senior ID
    Support                                    28            ID
    QC Content                                 4             Copy Editor
    Edits after client review                  8             ID
  Module 2                     (sub-sum)       (sub-sum)
    Lead                                       3             Senior ID

Tracking [more] detailed data allows you to determine your:

  • Effort/cost per deliverable
  • Effort/cost per role
  • Effort/cost per unit of learning, like an hour of elearning

With this data you can compare the estimate to the actuals, figure out why there are differences, identify those that you can control, and take notes for the next scoping exercise.   🙂
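As a minimal sketch of that roll-up, here is how tracked rows could be aggregated into per-deliverable and per-role totals. The rows loosely mirror the sample table above, but the estimate figures and hourly rates are invented for illustration only.

```python
# Minimal sketch: rolling tracked hours up into the ratios described above.
# Rows loosely mirror the sample tracking table; estimates and rates are
# illustrative assumptions, not real project data.
from collections import defaultdict

# (deliverable, task, role, est_hours, actual_hours)
rows = [
    ("Analysis", "Review materials", "Senior ID", 10, 12),
    ("Analysis", "Meetings and discussion", "Senior ID", 10, 10),
    ("Analysis", "Write-up of Design Document", "Senior ID", 30, 36),
    ("Storyboarding M1", "Lead", "Senior ID", 8, 7),
    ("Storyboarding M1", "Support", "ID", 24, 28),
    ("Storyboarding M1", "QC Content", "Copy Editor", 4, 4),
    ("Storyboarding M1", "Edits after client review", "ID", 6, 8),
]
rates = {"Senior ID": 95, "ID": 70, "Copy Editor": 55}  # $/h, made up

per_deliverable = defaultdict(lambda: [0, 0, 0.0])  # [est, actual, cost]
per_role = defaultdict(lambda: [0, 0, 0.0])
for deliv, task, role, est, actual in rows:
    for bucket in (per_deliverable[deliv], per_role[role]):
        bucket[0] += est
        bucket[1] += actual
        bucket[2] += actual * rates[role]

for deliv, (est, actual, cost) in per_deliverable.items():
    print(f"{deliv}: est {est}h, actual {actual}h "
          f"(delta {actual - est:+}h), ${cost:,.0f}")
```

Once the roll-up exists, comparing the estimate column to the actuals (and asking “why?” for each delta) is exactly the next-scoping-exercise input described above.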

The level of detail is up to you. The temptation to dive into detail is great, but be careful: too much detail becomes counter-productive. The key is to first define what you want to know, then identify the data you need.

For sure this requires more “accounting”. And to be viable, you likely need a timesheet system that supports it, more [PM] time to enter the estimates, AND the discipline of ALL resources to enter their data properly.

Do you think it’s worth it?       🙂

Scoping: Apples to apples, or aren’t we really talking about “produce” baskets?


The truth is, it is almost impossible to find comparable apples to compare, because most projects are somewhat unique: it’s never just “apples”… there is always some other fruit, or even some vegetables in the mix…   😉

The notion of elearning levels has been around for many, many years. Their purpose is grand: use a common language to define expectations, and give budget ballparks, right?

But each discussion about elearning levels starts with a clarification of what those levels mean, worked through thoroughly with your client. Eventually it requires some examples of what you’ve done to support the budgets you’re asking for… so that in the end, you propose a “produce” basket that will be different from the other bidders’.

So it’s like you’re bidding to become this client’s personal chef. You’ll need to understand his likes and dislikes, his preferred dishes and the ones he’s willing to eat in a pinch, and of course, how long he’s willing to wait for it, and how much he’s willing to pay.

But come to think of it, talking ingredients may not be the right approach…

Should we talk “cooking”? Ingredients are just ingredients… even the best ones cannot be fully appreciated if they aren’t prepared properly, or prepared to their full potential.

Should we talk “Restaurant”? The food itself can be great, but if the plating, service or dining room aren’t right, the expectations are in jeopardy.

Isn’t it about the overall experience?  In our case, the learning experience?

Of course elearning levels are part of the overall experience. But they do not address all of it.

The learning experience is created by the successful combination of several things:

  • Learning strategies  (tell-show-try-me, activities, storytelling, concept-based, scenario-based, serious gaming, etc.)
  • Engagement strategies  (look & feel, concept-based, storytelling, gamification, etc.)
  • Delivery strategies  (synchronous-in person-virtual, asynchronous-self-pace, coaching, mentoring, technology-based, etc.)
  • Learning materials  (writing, media producing, assembling-integrating, authoring-programming, etc.)
  • Overall quality  (look & feel, writing style-grammar-typos, clarity, consistency, precision, bugginess, etc.)

In terms of elearning levels, the one thing I find is not addressed properly, if at all, is the engagement part. You might say that it’s part of the learning strategy… to which I’d respond it’s time to look at it separately. Don’t you think?

Now another question comes to mind… should we consider learning experience levels?    🙂

Scoping: Comparing Apples to Apples?


I recently met someone from the local learning industry, and surprisingly, we had not yet crossed paths. Probably because she comes from the entertainment side of the street. As we got deeper into our conversation about what we do, how we do it, who is involved, the tools we use, how long it takes, etc., we realized we needed to align our “fruits” (terminology, approaches, concepts, etc.) to be able to understand each other, to talk “apples to apples”.

Squirrel note: Building learning materials, like elearning, requires writing text to be read and scripts to be listened to. Writing is considered a standard, required skill. But not everyone writes well, especially for self-paced learning. There seem to be different approaches used for writing in elearning production, which impacts “who” you use to write the text: instructional designers design the elearning and then write it up themselves; or they design the learning and pass it on to someone with a communication/journalism background; or they pass it on to a scriptwriter, trained for the film industry. Many different people to consider, different worlds, different expectations to deal with…   🙂

When scoping a job, we usually start by looking for past experiences to figure out what needs to be done for how much, right? And you probably heard the expression… “we need to compare apples to apples” or “you can’t compare apples to oranges” when you realize that you’re looking at different examples. And come to think of it, there are many kinds of apples: color, texture, taste, etc.

Red or green?

So we need to get to a common ground, talk the same language, use the same reference points. Especially when you are talking with clients. Learning materials (elearning, instructor-led training (ILT), etc.) can be distinguished from many different perspectives, but generally we end up discussing the “look and feel”, the interactivity, the media elements, and more importantly, the engagement factor.

Side note: Of course human nature wants it all: but sooner or later you need to talk about how much it costs and how long it takes to build it.

For now I’ll set aside the “look and feel” and engagement factor, as the former is pretty easy to tackle and the latter is very subjective (at least I think so). So that leaves the interactivity and the media elements, which leads the discussion to “levels” of complexity, to relate it back to effort, cost and time.

For elearning, we’ve all heard of levels 1, 2 and 3. For ILT? I actually don’t know if there is such a classification. There should be. Maybe some of you can join in and point to some… But for now, I’ll focus on elearning.

So for many years now, we’ve discussed elearning levels, usually 1-2-3, which should probably be extended, as the possibilities keep growing. The biggest challenge I find is integrating interactivity and media. A few years ago Amit Gard posted an interesting perspective, compared with a study from The Chapman Alliance, on the effort required to develop various levels of custom eLearning. I think The Chapman Alliance model is way too generic and encompassing. Maybe the people who came up with it never had to personally scope a project.  🙂

I like Amit’s extended model. Not sure about his breakdown of “course-types”, as I would see at least another type between “presentation” and “scenarios”, to account for designing activities that are not scenario-based (I guess it depends on how he came up with his range of instructional design effort).  His model better addresses a situation where you need a highly interactive design, scenario-based with non-linear branching, in a very simple interface, and no media: instructional design effort would be very high, while media design and production very low. But I’d like to see separate curves to distinguish the course-type (instructional design) and the multimedia parts. Maybe two separate graphs need to be created… that you overlay to get the “real” picture?
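The two-curves idea can be sketched in code. To be clear: every multiplier below is invented purely for illustration; the numbers come from neither Chapman, nor Amit’s model, nor any study.

```python
# Hypothetical sketch: keeping instructional-design effort and media effort
# as two separate curves per elearning "level", which you then overlay.
# ALL multipliers are invented for illustration -- not from any study.
id_hours_per_finished_hour = {1: 20, 2: 45, 3: 90}      # design/writing effort
media_hours_per_finished_hour = {1: 10, 2: 40, 3: 120}  # media design/production

def estimate(level: int, finished_hours: float) -> dict:
    """Return the two effort components and their total, in person-hours."""
    id_h = id_hours_per_finished_hour[level] * finished_hours
    media_h = media_hours_per_finished_hour[level] * finished_hours
    return {"id": id_h, "media": media_h, "total": id_h + media_h}

# A highly interactive but media-light design (the branching-scenario,
# simple-interface case above) can then mix components across levels:
custom = estimate(3, 1.0)["id"] + estimate(1, 1.0)["media"]
```

The point of splitting the tables is exactly the overlay: a single blended level would force the branching-scenario/no-media case into the wrong bucket, while two curves let each dimension be scored on its own.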

Another example: Shift Disruptive elearning (which in turn points to EduTech’s wiki page on “interactivity”) presents it from the interactive side with a 4-level classification: passive, limited, moderate and simulation. But there again, they include media design and development into the mix.


There are many different views out there, which makes it difficult to compare – from tomatoes to apples…  And then, we need to look at what it takes to create and produce the apple we agree is needed. Ultimately, the right one is the one that you feel most comfortable with, that you can “easily” explain and relate to the person you’re talking to. Of course you also need to consider the group you are part of, your colleagues, the ones that need to have the same “spiel” in front of clients.

Makes sense?

I’m currently working on a model of my own, working from past experiences, considering the major blocks of activities relating to elearning design and development: lead ID, ID, authoring (integration, programming), Lx/Ux*, media design and production, QC and of course PM. If some of you are interested in discussing this further, either from the service provider or buyer sides, please let me know.

*Lx = Learner Experience and Ux = User Experience

Training Projects: How much does it cost? How long will it take?

How much - How long

“How much will this cost?”

“How long will it take?”

Don’t you love these questions? Especially when you’re asked before you’ve had a chance to gather enough information to provide a proper answer.

So how do you answer these? …and be comfortable with your answer, as you may very well have to deliver on it!

I see a few ways to answer these:

  • “Not sure… I’ll get back to you.” Safe answer, but be sure to get back with an answer, whatever it is, even if it is a referral to someone else.
  • Give a ballpark. Make sure your range is wide enough to cover your butt, but note that they’ll specifically remember the lower end.
  • Give a “wag”. If your ballpark’s range is too big, then it’s a wag (“wild ass guess”). Make sure you use that term, and say why you’re using it.
  • Give a “researched” answer based on information obtained through research.
  • Give an “experienced” answer based on direct experience (yours or your team’s). This is the best one, as it takes into account people, methodologies, tools, etc. that you know.

What you need to do is get the conversation going, to get the information you need to properly answer the questions. You’ll need details about this, that and the other thing. The less experience or knowledge you have at doing the work you have to estimate, the more detailed breakdown you should do. If you can’t break it down, then you need to add assumptions on which your estimate is based. The problem is that the more assumptions you add, the more restrictive your proposal may become.
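One common way to turn a ballpark into a defensible range is a three-point, PERT-style estimate per task. The post doesn’t name this technique, so treat it as a swapped-in example; the task numbers are illustrative only.

```python
# One common technique (not named in the post) for turning a ballpark into a
# defensible range: a three-point, PERT-style estimate per task.
def pert(optimistic: float, most_likely: float, pessimistic: float) -> tuple:
    """Return (expected hours, rough standard deviation) for one task."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Illustrative tasks: (optimistic, most likely, pessimistic) hours
tasks = [(8, 12, 20), (20, 36, 60), (4, 6, 12)]
expected = sum(pert(*t)[0] for t in tasks)
spread = sum(pert(*t)[1] for t in tasks)
print(f"Quote roughly {expected - spread:.0f} to {expected + spread:.0f} hours")
```

The wider the optimistic/pessimistic gap, the wider the quoted range — which is exactly the honest signal behind saying “ballpark” or “wag” in the first place.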

You will get the details you need through a fairly elaborate line of questioning that covers a wide range of areas, such as: business goals, executive support, performance goals, subject matter, availability of existing usable materials, target audience(s), learners’ location, level of detail, learning outcomes, learning strategies, expected level of interactivity and engagement, expected/required length of the training, timeframe, availability of SMEs, number of review cycles, etc. …and, of course, the client’s values.

As there are many factors to consider, I usually recommend a two-part approach: do a first [smaller] project to scope the second [much bigger] project. I’ll dig more into scoping in follow-up posts…

So here’s a takeaway point: keep track of the effort spent during your projects, in the same way you estimated it in the first place! This way you’ll start collecting data on which you’ll be able to base your future estimates!

Makes sense?