Managers and leaders in analytics and AI need to understand the differences between continuous improvement, innovation, and disruption in their field, and how each applies to their projects and to their organization.
Continuous improvement is about encouraging, capturing, and using the knowledge of the people actively involved in current processes to improve those processes. Good examples of continuous improvement applied to complex engineering problems include: W. Edwards Deming improving the quality of automobile manufacturing in Japan (the Kaizen process); Bill Smith at Motorola reducing defects in the manufacturing of computer chips (leading to Six Sigma); and Admiral Hyman G. Rickover improving the safety of nuclear reactors on nuclear submarines [1].
Innovation is about developing new processes, products, methodologies, and technologies. It is usually done by those not directly involved in the day-to-day work. There is often a challenge transitioning innovations from the lab into a product or a deployed production process. These days, innovation is claimed more often than it is produced. True innovations are usually recognized by experts relatively quickly, but by others only over a longer period, due to clutter in the market [2, Chapter 3]. Innovative technology can also take a while for companies to deploy, for a variety of reasons, including the agility of the company, lock-in to current vendors, and the sometimes complex motivations and incentives of decision makers [2, Chapter 4].
Disruption occurs when a new technology fundamentally alters the price-benefit structure in an industry or market segment [3]. An example from AI is the combination of deep learning frameworks, such as TensorFlow and PyTorch, with transfer learning from models pre-trained on large datasets such as ImageNet (for example, Inception and ResNet). This allows individual scientists with modest computational resources to build deep learning models, without the large computational infrastructure and very large datasets that would otherwise be required.
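To make this concrete, here is a minimal sketch of this kind of transfer learning in PyTorch, assuming a recent version of torchvision; the target task and its number of classes are hypothetical placeholders.

```python
# A minimal sketch of transfer learning from an ImageNet-pre-trained model.
# Assumes torchvision >= 0.13; the 10-class target task is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 with weights pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for our task.
num_classes = 10  # hypothetical number of target classes
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters need to be optimized, which is why
# modest hardware is enough: the expensive pre-training is already done.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```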
Some differences
Continuous improvement is about improving something that exists. Innovation is about creating something that doesn’t exist. Innovation can take months or years, while continuous improvement can often be done in days or weeks. See Figure 1 for some more differences.
Best practices in analytics
Best practices for continuous improvements in analytics include:
- A champion-challenger methodology, where you use a formal process to frequently build new models (challengers) and compare them, using agreed-upon metrics, to the current model in production (the champion); a minimal sketch of the comparison step appears after this list.
- Weekly model reviews, where all the stakeholders meet each week to review the model's performance, what additional data could be added to the model, the actions associated with the model, and the business value generated from those actions, and to discuss how each of these can be improved. Weekly model reviews are part of the Model Deployment Review Framework that I cover in my upcoming book, The Strategy and Practice of Analytics.
- Model deployment frameworks, so that models can be deployed into production quickly. This might involve PMML or PFA, a DevOps approach to model deployment, or one of the providers of specialized software in this area.
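Here is a minimal sketch of the champion-challenger comparison step mentioned above, assuming a scikit-learn-style model interface, a held-out evaluation set, and AUC as the agreed-upon metric; the function name and threshold are hypothetical.

```python
# A minimal sketch of a champion-challenger comparison on a holdout set.
# Assumes both models expose a scikit-learn-style predict_proba method.
from sklearn.metrics import roc_auc_score

def select_champion(champion, challenger, X_holdout, y_holdout, min_lift=0.005):
    """Promote the challenger only if it beats the champion by min_lift AUC."""
    champion_auc = roc_auc_score(
        y_holdout, champion.predict_proba(X_holdout)[:, 1])
    challenger_auc = roc_auc_score(
        y_holdout, challenger.predict_proba(X_holdout)[:, 1])
    if challenger_auc >= champion_auc + min_lift:
        return challenger, challenger_auc
    return champion, champion_auc
```

The min_lift threshold guards against promoting a challenger on noise; in practice, the metric and the threshold should be agreed upon by the stakeholders, for example in the weekly model review.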
Best practices for supporting the development of innovation in analytics include:
- Setting up a structure to develop innovative projects. This can be a separate group (an R&D lab, a futures group, or an innovation center) or regular time set aside for innovation (such as Google's 20% time). For example, in our Center we set aside 1-3 days per month for the entire team to work on selected projects that have been proposed.
- Setting up a process to select and support meritorious projects. Innovation takes time and requires sustained support; it cannot be done in a single brainstorming session.
- Setting up and fine-tuning a process to move useful innovations from the lab into practice. It is all too common for innovation in large organizations never to leave the lab. A number of large organizations have over time developed good processes for transitioning innovation into new products, services, and processes. IBM is quite good at this [4]; a recent example is the investment it made in bringing homomorphic encryption into practice, which took sustained effort over more than a decade. Over time, this will have an important impact on analytics and AI.
The Power of Simple Process Improvements
It is worth emphasizing the tremendous power of simple process improvements, such as moving from letting the data scientists who build models decide when and how to deploy them, to weekly model reviews involving all stakeholders, including the business owners. In these weekly meetings, the model is reviewed end-to-end, including the data available, the performance of new models (the challengers in the champion-challenger methodology), and potential new actions associated with the model (see the post on Scores, Actions and Measures (SAM)).
Here is another simple example of the power of continuous improvement, one not related to analytics. For many years, I took notes using Emacs in outline mode. Recently, after reading about the Zettelkasten method, I switched to using Emacs in Org mode and adopted a few of the ideas used in digital Zettelkasten. This small change has made it much easier for me to find the technical information I need. You can find a nice introduction to Zettelkasten on LessWrong.
References
[1] Dave Oliver, Against the Tide: Rickover’s Leadership Principles and the Rise of the Nuclear Navy. Naval Institute Press, 2014.
[2] Robert L. Grossman, The structure of digital computing: from mainframes to big data, Open Data Press, 2012. See Chapter 3, Technical Innovation vs. Market Clutter and Chapter 4, Technology Adoption Cycles. Also available from Amazon.
[3] Clayton M. Christensen, The innovator’s dilemma: when new technologies cause great firms to fail, Harvard Business Review Press, 2013.
[4] National Research Council, Research Restructuring and Assessment: Can We Apply the Corporate Experience to Government Agencies?, Washington, DC: The National Academies Press, 1995. https://doi.org/10.17226/9205. See https://www.nap.edu/read/9205/chapter/6.