Monday, May 19, 2014

Why Kirkpatrick Matters





Donald L. Kirkpatrick’s passing last week at age 90 caused a moment of reflection for me. I never met the man, but his ideas have been an ongoing subject of discussion and debate in my professional life.

I first heard of Kirkpatrick in a dark, smoke-filled office at a place called Chemical Bank, near Wall Street in Manhattan. In my first months as a “training” person, coming off an unsuccessful stint as a high school English teacher, I was taking instructions from a senior member of the training staff, who expected me to design the evaluation of a new-hire training program. The man (whose name is long forgotten) explained to me the four levels of the Kirkpatrick Model:
  • Level 1: Reaction. Did people like the course?
  • Level 2: Learning. Did they learn the stuff they were supposed to learn?
  • Level 3: Behavior. Were they able to perform the skills on the job?
  • Level 4: Results. Did the application of the skills lead to improved business results?

The beautiful order and simplicity of the system immediately appealed to me and produced a lasting moment of clarity. It was then that I first understood the difference between what we do as school teachers (teach people content, for no apparent reason other than that it is the stuff all educated people know) and what we do in corporate training functions (teach people skills, for the purpose of doing a job).

Over the years I participated in a number of task teams charged with implementing a Kirkpatrick-esque system of evaluation. Typically, the task team would spend 6-12 months creating a meticulously crafted standard Level 1 evaluation instrument (a smile sheet, if you will) before running out of steam and disbanding. This very tendency points to the peril of Kirkpatrick: people get so focused on following the levels one at a time, starting with Level 1, that they lose sight of why they were pursuing them in the first place. My experience is supported by copious research showing that almost all organizations have Level 1 instruments in place, while almost none have Level 4 systems. This only reinforces the instinct among training types to expend too much energy thinking first about how to make their classes or eLearnings aesthetically appealing or, even better, fun!

My conception of why we train people has evolved considerably in the intervening 20 years. Rather than a sequence, I prefer to think of the Kirkpatrick levels as a taxonomy. All evaluative activities fall into two categories: measures that training people care about and measures that business people care about. While there is value in the former, we should emphasize the latter.

I know that Kirkpatrick himself understood this critique and tried to get out in front of it. Today, his family’s company, Kirkpatrick Partners, has consciously clarified and updated the presentation of the original concept in the form of the New World Kirkpatrick Model.

Being misunderstood and oversimplified is a fate that Donald Kirkpatrick shares with Winston Royce, the “inventor” of the waterfall method of managing large software development projects. In 1970, Royce wrote a highly influential paper that is often cited as the basis of the Software Development Life Cycle (SDLC), which today is routinely massacred by agile development enthusiasts. His paper explicitly denounces the end-to-end, one-way street that is waterfall planning, yet it spawned a multi-million-dollar cottage industry. Royce and Kirkpatrick both gave the world ideas that were almost too elegant for their own good, so much so that many of those espousing the methods engaged with the underlying ideas only superficially.

By any measure, Donald Kirkpatrick was a giant in the field of learning and development. He first presented his evaluation concept as his PhD dissertation topic in 1959, and he refined it in many articles and books over the ensuing 50 years. To those of us steadfastly pursuing the holy grail of correlating training to business outcomes, he has been a constant inspiration.

Wednesday, May 7, 2014

One Man, Two Conferences


I’m fairly certain that I’m the only person who attended both THE Performance Improvement Conference #ISPI14 and the Global Scrum Gathering #SGNOLA this spring. I hold professional designations from both the International Society for Performance Improvement (CPT, Certified Performance Technologist) and the ScrumAlliance (CSPO, Certified Scrum Product Owner). As one who works as an education person among technologists, I’m interested in the distinctions between the two conference crowds and the ethos of the attendees.

The conferences themselves were comparable in size, agenda structure, and cost. Both were more educational event than trade show. Both had far more good than bad, and both filled my little head with a boatload of information and ideas.

Below are two lists that show how they are distinct. I would love to hear your comments.

Conference versus Conference


  • Typical profession
      ScrumAlliance: External Agile Scrum consultant
      ISPI: Internal performance consultant
  • Touted credentials
      ScrumAlliance: ScrumAlliance certifications
      ISPI: Academic degrees
  • Gender mix
      ScrumAlliance: 80% male, 20% female
      ISPI: 50% male, 50% female
  • Conference locations
      ScrumAlliance: World-class cities (New Orleans, Paris, Berlin)
      ISPI: 2nd-tier cities (Indianapolis, San Antonio, Reno)
  • Book you need to have read or pretend to have read before attending
  • Venue-based metaphor
      ScrumAlliance: Music (New Orleans)
      ISPI: Auto racing (Indianapolis)
  • Conference groove
      ScrumAlliance: Networking
      ISPI: Forming bonds
  • Most popular tweet
      ScrumAlliance: A slide
      ISPI: A selfie
  • Awesome keynote speaker who had everyone buzzing
      ScrumAlliance: Kenny Rubin
      ISPI: David Maxfield
  • Book giveaway to support the awesome keynote presentation
  • Gadget giveaway
      ScrumAlliance: Portable smart phone charger
      ISPI: Thumb drive
  • Laughing and crying
      ScrumAlliance: Almost none
      ISPI: Almost constant
  • Typical model
      ScrumAlliance: Circular and repeating
      ISPI: Horizontals and verticals
  • Involvement of forefathers of the profession at the conference
      ScrumAlliance: None of the original Agile Manifesto signatories present
      ISPI: Almost every living performance improvement guru was present


Ethos versus Ethos


  • Foundation
      ScrumAlliance: Experience
      ISPI: Research
  • Holy grail
      ScrumAlliance: Delivering value
      ISPI: Measuring value
  • Common reference
      ScrumAlliance: Failed software projects
      ISPI: Skinnerian behavioral science
  • Before becoming a consultant, I spent years as a(n)…
      ScrumAlliance: Software engineer
      ISPI: Instructional designer
  • It all starts with articulation of…
      ScrumAlliance: An idea
      ISPI: An outcome
  • People are…
      ScrumAlliance: Resources
      ISPI: The most important thing
  • Management are…
      ScrumAlliance: Adversaries who don’t understand us and how we want to work
      ISPI: Leaders who don’t yet realize how much we can help them
  • Love/hate relationships
      ScrumAlliance: Hate for project managers; disdain for HR
      ISPI: Love everyone
  • About measurement
      ScrumAlliance: A lot of talk about things that should not be measured (defects, release cadence)
      ISPI: A lot of talk about what can and should be measured
  • On training
      ScrumAlliance: One capable person can pollinate specific skills within a specific team
      ISPI: Scalable solutions and support structures need to be put in place
  • Don’t forget to…
      ScrumAlliance: Follow applicable Scrum rules
      ISPI: Measure
  • People love to talk about how nobody talks about…
      ScrumAlliance: Continuous integration
      ISPI: Root cause analysis
  • View of the future
      ScrumAlliance: #noestimates
      ISPI: Predictive evaluation
  • They are snarky about…
      ScrumAlliance: Manual testing processes
      ISPI: Training as a standalone solution
  • Language of success
      ScrumAlliance: Defect-free, minimum viable, value flow
      ISPI: Greater discretionary effort, improved performance, employee satisfaction
  • Fancy word to describe most problems
      ScrumAlliance: Recursive
      ISPI: Unaligned
  • Fancy word to describe solutions
      ScrumAlliance: Automated
      ISPI: Holistic
  • What needs to be scaled
      ScrumAlliance: Coaching
      ISPI: Performance support
  • What attendees would learn if they attended the opposite conference
      ScrumAlliance: An appreciation for the human component of success
      ISPI: An appreciation for the great learning agility that exists within work teams
  • And they would also learn…
      ScrumAlliance: In order to quantify value, you need more sophisticated measurement techniques
      ISPI: People on teams don’t care about corporate interventions unless they have immediate prima facie value