“Design is a process of illumination”
- @xpconcept on Twitter
We recently rewrote all of our operating procedures for digital performance and training solutions. This was a big undertaking, and one that’s not yet finished. One of the areas where we’re not there yet is a consistent model for making intervention selection less arbitrary. I think you may be able to help us out, and vice versa.
In this first part of a multi-part post, I’ll walk through the map we currently use for intervention selection and break down the decision points. The process looks a bit like a funnel, gradually narrowing the field of available options as you move through it. This isn’t a waterfall-type activity. There’s a fair bit of approximation and validation cycling, depending on how much time you have to narrow the selection and estimation and how expensive the problem is. The more expensive the problem, the more time you’ll probably want to spend ensuring the solution is the best possible match to the factors exposed in your analysis.
Notice I didn’t mention the expense of the solution. Here’s a tip: anytime your solution costs more than your problem, pick another solution. Go by the cost and risks of the problem(s) first.
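To make that rule concrete, here’s a minimal sketch in Python. The dollar figures, the option names, and the `pick_solution` helper are all hypothetical, invented for illustration only:

```python
# Illustrative sketch of the "problem cost first" rule. All figures and
# names below are made-up placeholders, not from any real analysis.

def pick_solution(problem_annual_cost, candidate_solutions):
    """Return the affordable candidate solutions, cheapest first.

    candidate_solutions: list of (name, cost) pairs.
    Any solution that costs more than the problem is rejected outright.
    """
    affordable = [(name, cost) for name, cost in candidate_solutions
                  if cost <= problem_annual_cost]
    return sorted(affordable, key=lambda pair: pair[1])

# A performance problem costing $40k/year rules out a $150k custom course.
options = [("custom e-learning course", 150_000),
           ("job aid + coaching", 12_000),
           ("checklist job aid", 3_000)]
print(pick_solution(40_000, options))
```

The expensive course drops out of consideration before anyone debates its merits; the filter runs on the problem’s cost, not the solution’s appeal.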
The First Stage: Solution Category Recommendation
We’re a Front End Analysis shop and we complete many performance analyses every year using the Harless methodology. This framework helps analysts consistently produce recommendations supported by data. For example, the process will point out if you have a skill and knowledge gap, or if you simply need to address policy, environment, personnel selection, equipment, or any one of a number of problems that training simply won’t solve. Love it or hate it, this process consistently results in recommendations grounded in the scientific method.
The FEA process identifies:
- Job accomplishments, tasks and, in some cases, steps
- Factors of performance influence (hint, “not enough training” is almost never the whole answer)
- Probable root causes of performance problems
- Probable solution categories based on performance influence factors
This is a good system of tools, provided folks understand that it’s one lens of MANY that can be used to illuminate and diagnose problems. It’s a model that comes with a cohesive set of job aids and forms. Models and job aids can produce less-than-stellar outputs when used without skill or the right level of intensity (a cookie-cutter or lazy approach to analysis). I’m working up another post that extends these thoughts.
Harless loves job aids (so do I), and three of the recommendations that the DIF (Difficulty, Importance, Frequency) algorithm outputs involve job aids. In the graphic above, this equates to the solution category recommendation. This is the first stage in the funnel. In this system, three potential recommendations can be assigned to skill and knowledge problems:
- Job Aid
- Job Aid with Introductory Training
- Job Aid with Extensive Training
If you wanted to count a fourth option:
- Solution would cost more than problem. Ignore or monitor.
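The shape of this first-stage filter can be sketched as a small decision function. To be clear, the thresholds and the mapping below are my own illustrative assumptions, not Harless’s actual DIF decision table; only the category names come from the list above:

```python
# Hypothetical sketch of a DIF-style first-stage filter. The 1-5 rating
# scale, the thresholds, and the branch order are illustrative
# assumptions, NOT the actual Harless decision table.

def recommend(difficulty, importance, frequency):
    """Map 1-5 DIF ratings for a task to a solution category."""
    if importance <= 1:
        # Low stakes: the solution would cost more than the problem.
        return "Ignore or monitor"
    if difficulty <= 2:
        # Easy tasks rarely justify training on top of support.
        return "Job Aid"
    if frequency >= 4:
        # Performed often enough that fluency matters.
        return "Job Aid with Extensive Training"
    return "Job Aid with Introductory Training"

for task, dif in {"reset breaker": (1, 3, 5),
                  "calibrate sensor": (4, 4, 4),
                  "annual audit prep": (4, 3, 1)}.items():
    print(task, "->", recommend(*dif))
```

Whatever the real table looks like, the point stands: each task’s profile maps mechanically to a broad category, and everything downstream depends on a human applying context to that output.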
One of the problems I can see with this level of categorization is the potential for misinterpretation of the designated solution category. It could be perceived as an either/or situation. If training is indicated for a large group of tasks, yet no performance support is indicated, folks might completely gloss over a great opportunity to build a baseline of task support. That isn’t the intent of the tool, but folks tend to miss opportunities presented by their tools.
That’s the first stage of the funnel/filter. We get a broad recommendation based on an algorithm that profiles each task and matches it to a solution classification or category that makes sense in a generic context. The tool doesn’t know anything about your resource constraints or any of the tens or hundreds of factors you might need to consider. The system assumes you’re applying insight to the output.
The first stage does not identify or illuminate:
- The concept for the solution
- Specific methods for intervention (resident, non-resident, digital, etc.)
- Detailed requirements definition
- Specific media requirements
The Second Stage: Intervention Selection
The second stage of our process is determined in the pre-design analysis. This analysis activity focuses on identifying and illuminating profiles, personas and patterns at the task level, specific to the category recommended by the performance analysis. In this stage we’ll also detail the requirements and patterns for general packaging and implementation to sketch the concept for the solution. Here’s a hierarchical map of our intervention selections when skills and knowledge gaps are indicated in the performance analysis:
Some of the acronyms used in the map may not make much sense to folks outside of our organization. For the sake of explanation:
- A School: a vocational apprenticeship training program
- C School: a specialized training program within a vocation
- EOCT: End of Course Test
- PQS: Performance Qualification Standards (sign-offs)
- SWE: Service Wide Exam (a competitive exam for advancement)
The biggest gap here is the process for framing rationale for method selection. How do we determine when one intervention is more appropriate than another (resident vs. digital)? We have some rough frameworks for how to make this happen but it still seems arbitrary and subject to “what we’ve done before” biases. This is one of the areas we’d like to improve and narrow.
There are two things we’ve recently started to develop that could prove to be really helpful:
1. An estimation algorithm for per-task design and development based on a task profile. Different output profiles can vary this output to give us a “less arbitrary” estimation. Estimation is a rough science (if you can call it that); we’re just trying to reduce the arbitrariness by feeding models with data. I’ll share this estimation algorithm in another post.
2. Development of baseline and extended task profiles. I think this is the great connector between the nature of the problem and the patterns that align with it. I referred to this process in another recent post.
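The estimation idea in item 1 can be sketched in a few lines. Every base figure, multiplier, and profile attribute below is a made-up placeholder (the real algorithm is for a later post); the point is the shape, where profile attributes feed the estimate instead of gut feel:

```python
# Illustrative sketch of profile-driven per-task estimation. The base
# hours, attribute names, and multipliers are hypothetical placeholders,
# not our actual estimation model.

BASE_HOURS = {"job_aid": 8, "intro_training": 24, "extensive_training": 60}

def estimate_hours(profile):
    """Estimate design + development hours for one task from its profile."""
    hours = BASE_HOURS[profile["output"]]
    if profile.get("complexity") == "high":
        hours *= 1.5   # harder content takes longer to design
    if profile.get("media") == "interactive":
        hours *= 2.0   # interactive treatments roughly double the build
    return hours

task = {"output": "intro_training", "complexity": "high", "media": "interactive"}
print(estimate_hours(task))  # 24 * 1.5 * 2.0 = 72.0
```

Two analysts running this on the same task profile get the same number. The number may still be wrong, but it’s wrong consistently, and consistent models can be tuned against actuals over time.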
Though the intervention selection may broadly define packaging concepts, this stage does not identify specific per-goal, per-objective or per-activity media treatments. To me, specific media selection is a design-specific problem-solving task. Familiarity with benefits, affordances and trade-offs isn’t something to be taken lightly. Knowing when a specific type of media will be beneficial, and when it won’t matter, is at the heart of the task.
The Third Stage: Media Selection
I’ll deal with this in part 2 of the post. We’ve received a few really helpful resources in this area, and I’d like to highlight them for other folks. The idea here is similar to the process used for the other stages: I’m thinking of further extending the task profiles if necessary, and offering another level of aggregate profile that paints a picture of the entire solution lined up against resource/environmental constraints, personas and other use-case-specific data.
How can you help?
The model above is based on my experience in and out of government. I think it’s leaning in the right direction but I could be wrong. How does this look to you? How would you do it differently? How have you done selection and estimation?
Leave me a note in the comments below. Stay tuned for part 2 where I’ll link, reference, and attribute a few different media pattern models and resources.