Wednesday, April 16, 2014

What Does Test-Driven Development Mean to Education?

I work at CA Technologies, a large software development company that is seeking to maintain a competitive edge through “innovation, execution, and speed.” As a company we subscribe to agile as a principle-based approach, and we play the planning game with Scrum. But the enablers of higher-quality software within faster release cycles are the engineering practices that go beyond the planning methodology itself.


One of the most powerful practices used in software development is Test-Driven Development (TDD), where the test is written and run (and fails) before the code is written. The goal of writing the code is to pass the test (and nothing more). In an agile environment, this is done in increasingly small increments. Ultimately, tests are run every time a piece of source code is checked back into its repository home. With small increments, bugs are isolated and fixed immediately. The full fruit of this strategy is Continuous Integration, where the software always works and always improves. (Massive disclaimer: The preceding paragraph was written by a corporate education person living in a software development world. Stay with me!)
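For readers who, like me, don’t live in code all day, here is a minimal sketch of that rhythm in Python. The function name and numbers are invented for illustration; this is not production code.

```python
import unittest

# Hypothetical example: in TDD this test is written and run FIRST,
# and it fails, because apply_discount() does not exist yet.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

# Only after watching the test fail do we write the code,
# and only enough of it to make the test pass.
def apply_discount(price, rate):
    return price * (1 - rate)

if __name__ == "__main__":
    unittest.main()
```

The discipline is in the order of operations: test first, failure observed, then just enough code to go green.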


So what does this have to do with anything in the realm of human performance improvement? How often are we presented with requests to create learning assets with no reference to the business problem being solved? (Answer: For me, several times in the past week.) How often do we create things without an evaluation component, and then someone later asks if it is having an impact? (Answer: Too often.) If we can start with the outcome, if we can write the test that will be satisfied before we design the intervention, then we significantly improve the odds that our solution solves the business problem. Or, at a minimum, we know whether or not the solution answers the test question.


From a performance consulting standpoint, this is related to our practice of front-end analysis. When encountering an “order-taking” situation, there are simple questions we ask to reframe the request, such as: What is the desired outcome? What will change as a result of us implementing something? What will happen if we do nothing? What is the root cause of the performance problem?


Do not be surprised when a frustrating, circular conversation ensues with your client, in which everyone must acknowledge that the answer to these questions is only rarely related to training. After all, if it were a pure training need, as the original request assumed, then training in isolation would be able to develop employee capabilities aligned with business objectives. No surprises here: in the vast majority of cases (around 85% of the time; people have studied this), training is not the solution to human performance problems. Our client requestors will often perceive this state of affairs as unfortunate, because they were hoping we could help them tick the box by developing and rolling out a training “solution.”


Now, as we continue along this line of discussion (often root cause analysis), we have to be the bearers of bad news: You have a problem, but it’s not related to training. Maybe people don’t understand the expectations? Maybe the process is broken? Maybe the interface on your application is not intuitive? Maybe people are not motivated or incented to do what you want them to do? In these cases, training is not the answer. In fact, if you look at the possible causes of the issues that inspire training requests, it is difficult to imagine a case where training by itself is the answer. (That’s a topic for another blog post…for more information, check out Thomas F. Gilbert’s Behavioral Engineering Model.)


Back in the real world: Do not let the client walk away; we can still help them! Yes, even the Education people can help them with their business problem.


If we responded to every training request with diligent and dutiful fulfillment, as I did very well for many years, we would be very busy creating things that add no real business value. On the other hand, if we restricted our efforts to projects that met some sort of academic definition of training/education, we would be doing almost nothing. Both of those are bad places to be. The sweet spot is doing things that help our clients succeed, regardless of whether or not the instructional designer in us thinks it meets our definition of training/education.


Almost any work we do in response to this type of analysis is OK as long as it’s adding value. Adding value means being able to demonstrate, after the thing is implemented, that you’ve added value. And your chances of adding value improve dramatically if you can come to an agreement with your client, before design and development start, on what adding value means. That is: if what we do works, what test will be satisfied?
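To make that concrete, the agreed-upon criterion can be written down quite literally as a software test. Here is a minimal sketch; the metric, cohort names, and numbers are all invented for illustration.

```python
# Hypothetical "acceptance test," agreed with the client before
# any design work begins.
def average_time_to_productivity(cohort):
    # Stub: in practice this would come from HR or performance data;
    # hard-coded here so the sketch runs.
    return {"baseline": 90, "post-intervention": 55}[cohort]

def test_intervention_adds_value():
    target = 60  # days; the agreed-upon definition of "adding value"
    measured = average_time_to_productivity("post-intervention")
    assert measured <= target, f"Target missed: {measured} days"

if __name__ == "__main__":
    test_intervention_adds_value()
    print("Test satisfied: the intervention met the agreed target.")
```

The point is not the code; it is that the target number and the measurement method exist, in writing, before anyone designs anything.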

Now that we are in the business of adding value, rather than the business of creating useless training, we have to think differently about how we create tests. Many of us are familiar with creating training tests in the realm of the classic Kirkpatrick model, such as knowledge reviews and behavior observations. But applying TDD to Education (ETDD?!?) means inventing different types of tests that answer different types of questions: How do you measure awareness? Buy-in? Teamwork? Innovation? Strategic thinking? These questions may be out of our comfort zone, and indeed out of scope for what we usually think of as Education, but that doesn’t mean we shouldn’t work with our clients to write tests for them. It will take persistence, creativity, and innovation, because every business problem is unique. Our clients are often out of their comfort zone with these questions as well. That is why they called us in the first place.
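One possible shape for such a test, sketched here with invented survey data, treats “awareness” as a pre/post survey delta. This is only one way to operationalize a soft outcome, not the definitive one.

```python
# Hypothetical ETDD-style "awareness test." All data are invented.
pre_survey = [2, 3, 2, 4, 3]    # self-reported awareness (1-5), before
post_survey = [4, 4, 5, 4, 3]   # same population, after the intervention

def mean(scores):
    return sum(scores) / len(scores)

# The test agreed on up front: mean awareness rises by at least one point.
assert mean(post_survey) - mean(pre_survey) >= 1.0, "Awareness target missed"
print("Awareness test satisfied.")
```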
