
Do You Need a Grand Strategy in Analytics?

September 10, 2020 by Robert Grossman

Figure 1: From lean to grand analytic strategies.

In foreign affairs and national defense, especially among academics, it has become more common to talk about grand strategies. There is a very popular course at Yale University by John Lewis Gaddis called On Grand Strategy, and in 2018 he published a book worth reading with the same name.

An emerging definition of grand strategy for a state is “something that has the characteristics of being long-term in scope, related to the state’s highest priorities, and concerned with all spheres of statecraft (military, diplomatic, and economic)” [Silove 2018]. Of course, the problem is that a strategy in general is concerned with the long-term decisions that an organization makes to further its priorities, so this definition doesn’t help as much as you might hope. See the Appendix for a definition of analytic strategy.

There are, though, some common themes that emerge if you review some of the recent articles on grand strategies [Biddle 2015; Silove 2018; Gaddis 2019]:

  • Grand strategies are longer term than typical organizational strategies, for example, 20 or more years. Fifty- and hundred-year strategies are not unusual in China.
  • Grand strategies cover more domains than typical organizational strategies. For example, grand strategies for states typically cover military, diplomatic and economic strategies, and not just one of these.

At the Other Extreme – A Lean Analytic Strategy

Before discussing grand analytic strategies, it is probably helpful to start at the other extreme (see Figure 1) and briefly mention lean analytic strategies. In an earlier post, I discussed developing a lean analytic strategy and introduced a lean analytic canvas modeled after the business model canvas for lean start-ups. For start-ups, smaller companies, and smaller units in larger organizations, the focus should be on developing an end-to-end system with analytics that delivers some value as soon as possible, and iteratively improving it to increase the business value that it delivers. A lean analytic strategy is a good way to do this. You can find more information in Chapter 10 of my book, Developing an AI Strategy: A Primer, and a definition of a lean analytic strategy in the Appendix of this post.

Five Questions for a Grand Analytic Strategy

At the other extreme, for larger organizations with multiple divisions and planning that extends out five years or more, it may be appropriate to consider developing a grand strategy for analytics, which includes answering questions like the following:

  1. In the long run, how much of the IT, data and analytic ecosystem do we buy vs build? What new technologies should we develop to advance our strategy in analytics?
  2. What are our long-term alliances and partnerships in analytics?
  3. How can we develop, promote and influence standards in analytics to support our analytic strategy?
  4. How can we best leverage lobbying and influence legislation to support our long-term strategy in analytics?
  5. How can we educate our users in particular and the public more generally so that they understand and support how we use data and analytics in our products and services, while balancing privacy with improved functionality?

An Example – Google’s Grand Strategy in Analytics

Alphabet’s revenues for 2019 were over $161 billion. Alphabet leverages its analytics and AI to drive revenue across its various subsidiaries and the divisions of Google, building on advances that resulted from years of investment in fundamental computing and analytics. The second paragraph of Alphabet’s fourth quarter 2019 earnings release [Alphabet 2020] reads:

Our investments in deep computer science, including artificial intelligence, ambient computing and cloud computing, provide a strong base for continued growth and new opportunities across Alphabet.

Source: Alphabet 2020.

A recent report from CBInsights [CBInsights 2020] notes:

[Google] is also seeking out new streams of revenue in sectors with large addressable markets, namely on the enterprise side with cloud computing and services. Furthermore, it’s looking at industries ripe for disruption, such as transportation, logistics, and healthcare. Unifying Alphabet’s approach across initiatives is its expertise in AI and machine learning, which the company believes will help it become an all-encompassing service for both consumers and enterprises.

Source: CBInsights, Google Strategy Teardown, 2020.

From Lean to Grand Analytic Strategies

To summarize, as Figure 1 shows, there is a spectrum of analytic strategies as the complexity of the organization grows and as the time frame of interest lengthens. As you move from left to right in the figure, the scope of the strategy becomes broader and broader.

A lean analytic strategy is a shorter term strategy for an analytic start-up or a smaller unit within a large organization, and is concerned with the core of any analytic strategy: how data is collected or generated; how data is transformed using analytics to produce scores or other outputs; how the outputs are used to create something that can be monetized or something that otherwise brings value to the business; and how this whole chain can be protected from a competitive standpoint.

An analytic strategy specifies the long-term decisions an organization makes about how it uses its data to take actions that satisfy its organizational vision and mission; specifically, the selection of analytic opportunities by an organization and the integration of its analytic operations, analytic infrastructure, and analytic models to achieve its mission and vision.

A corporate analytic strategy is an analytic strategy for two or more strategic business units, and it includes a plan for allocating resources across the business units.

A grand analytic strategy is longer term in scope than a typical analytic strategy and is designed for large complex organizations with various subsidiaries, divisions, or strategic business units. A grand analytic strategy is concerned with all spheres and interactions of the organization with analytics, both internal and external, including the broader technological landscape, regulatory and legal landscape, public perceptions, societal trends, etc. around analytics and its applications.

For more information, see: Developing an AI Strategy: A Primer.

References

[Alphabet 2020] Alphabet Announces Fourth Quarter and Fiscal Year 2019 Results, retrieved from https://abc.xyz/investor/static/pdf/2019Q4_alphabet_earnings_release.pdf, on November 1, 2020.

[Biddle 2015] Tami Davis Biddle, Strategy and grand strategy: What students and practitioners need to know. Army War College-Strategic Studies Institute, Carlisle, United States; 2015 Dec 1.

[CBInsights 2020] CBInsights, Google Strategy Teardown, 2020.

[Gaddis 2019] John Lewis Gaddis, On Grand Strategy, Penguin Books, 2019.

[Silove 2018] Nina Silove, Beyond the Buzzword: The Three Meanings of “Grand Strategy”, Security Studies, 27:1, 27-57, 2018, DOI: 10.1080/09636412.2017.1360073

Notes About Links

There are no affiliate links in this post and I get no revenue from the Amazon links. I do get a royalty from the sale of the book Developing an AI Strategy: A Primer.


Continuous Improvement, Innovation and Disruption in AI

July 6, 2020 by Robert Grossman

Figure 1. Some of the differences between continuous improvement and innovation in analytics and AI.

It’s important for managers and leaders in analytics and AI to know the differences between continuous improvement, innovation and disruption in their field, and to know how they apply to their projects and to their organization.

Continuous improvement is about encouraging, capturing and using the knowledge that the people actively involved in current processes have about how to improve them. Good examples of continuous improvement applied to complex engineering problems include: W. Edwards Deming improving the quality of automobile manufacturing in Japan (the Kaizen process); Bill Smith at Motorola reducing defects in the manufacturing of computer chips (leading to Six Sigma); and Admiral Hyman G. Rickover improving the safety of nuclear reactors on nuclear submarines [1].

Innovation is about developing new processes, products, methodologies, and technologies. It is usually done by those not directly involved in the day-to-day work. Often there is a challenge transitioning innovations from the lab to a product or into a deployed process in production. These days, innovation is claimed more often than it is produced. True innovations are usually recognized by experts relatively quickly, but by others only over a longer period of time due to clutter in the market [2, Chapter 3]. Also, innovative technology can take a while for companies to deploy for a variety of reasons, including the agility of the company, the lock-in of current vendors, and the sometimes complex motivations and incentives of decision makers [2, Chapter 4].

Disruption occurs when a new technology fundamentally alters the price-benefit structure in an industry or market segment [3]. An example from AI is the use of deep learning software frameworks, such as TensorFlow and PyTorch, along with transfer learning from models pre-trained on large datasets such as ImageNet (for example, Inception and ResNet), which allows individual scientists using modest computational resources to build deep learning models, without the large computational infrastructure and very large datasets that would otherwise be required.
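To make this concrete, here is a minimal sketch of transfer learning, assuming PyTorch and torchvision (with their circa-2020 APIs) and a hypothetical five-class task: a ResNet-18 pre-trained on ImageNet is frozen, and only a small new classification head is trained. The number of classes, learning rate, and optimizer choice are illustrative placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of classes in the new task

# Load a ResNet-18 pre-trained on ImageNet; the backbone already encodes
# general-purpose visual features learned from roughly 1.2M labeled images.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained backbone so its weights are not updated; this is
# what makes training feasible with modest data and compute.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head's parameters are given to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Training then proceeds as usual on the new dataset; because only the small head is updated, a single GPU, or even a CPU, is often sufficient.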

Some differences

Continuous improvement is about improving something that exists. Innovation is about creating something that doesn’t exist. Innovation can take months or years, while continuous improvement can often be done in days or weeks. See Figure 1 for some more differences.

Best practices in analytics

Best practices for continuous improvements in analytics include:

  • A champion-challenger methodology, where you use a formal methodology to frequently build new models (the challengers) and compare them, using agreed-upon metrics, to the current model in production (the champion); a minimal sketch of this comparison appears after this list.
  • Weekly model reviews, where all the stakeholders meet each week to review the model’s performance, what additional data could be added to the model, the actions associated with the model, and the business value generated from those actions, and to discuss how these can be improved. Weekly model reviews are part of the Model Deployment Review Framework that I cover in my upcoming book, The Strategy and Practice of Analytics.
  • Model deployment frameworks. A third best practice is to use a model deployment framework so that models can be deployed into production quickly. This might involve PMML or PFA, a DevOps approach to model deployment, or one of the providers of specialized software in this area.
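Here is the promised sketch of the champion-challenger comparison, using scikit-learn. The models, the metric (AUC), and the promotion threshold are all assumptions chosen for illustration; in practice, the metric and threshold are whatever the stakeholders have agreed on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the production modeling dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Champion: the model currently in production.
champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Challenger: a newly built candidate model.
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score both on the same holdout set with the agreed-upon metric.
champion_auc = roc_auc_score(y_holdout, champion.predict_proba(X_holdout)[:, 1])
challenger_auc = roc_auc_score(y_holdout, challenger.predict_proba(X_holdout)[:, 1])

MIN_IMPROVEMENT = 0.01  # assumed promotion threshold
if challenger_auc > champion_auc + MIN_IMPROVEMENT:
    print(f"Promote challenger (AUC {challenger_auc:.3f} vs {champion_auc:.3f})")
else:
    print(f"Keep champion (AUC {champion_auc:.3f} vs {challenger_auc:.3f})")
```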

Best practices for supporting the development of innovation in analytics include:

  • Setting up a structure to develop innovative projects. This can be a separate group (an R&D lab, a futures group, or an innovation center) or regular time devoted to innovation (such as Google’s 20% time). For example, in our Center we set aside 1-3 days per month for the entire team to work on selected projects that have been proposed.
  • Setting up a process to select and support meritorious projects. Innovation takes time and requires support. It cannot be done in a simple brainstorming session.
  • Setting up and fine-tuning a process to move useful innovations from the lab into practice. Finally, it is all too common for innovation in large organizations never to leave the lab. A number of large organizations have over time developed good processes for transitioning innovations into new products, services and processes. IBM is quite good at this [3]; a recent example is the sustained investment, over more than a decade, that it put into bringing homomorphic encryption into practice. Over time, this will have an important impact on analytics and AI.

The Power of Simple Process Improvements

It is worth emphasizing the tremendous power of simple process improvements, such as moving from letting the data scientists who build models decide when and how to deploy them, to weekly model reviews involving all stakeholders, including the business owners. In these weekly meetings, the model is reviewed end-to-end, including the data available, the performance of new models (the challengers in the champion-challenger methodology), and potential new actions associated with the model (see the post on Scores, Actions and Measures (SAM)).
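To illustrate the scores-to-actions pattern, here is a minimal sketch of how a model’s score might be mapped to actions within a business process. The fraud setting, thresholds, and action names are hypothetical placeholders; tuning them against measured outcomes is exactly what the weekly review is for.

```python
def action_for_score(score: float) -> str:
    """Map a model score (here, an assumed probability of fraud) to an action.

    The thresholds and actions are illustrative; stakeholders adjust them
    in weekly model reviews based on the measured business value.
    """
    if score >= 0.90:
        return "block_transaction"
    elif score >= 0.60:
        return "route_to_manual_review"
    return "approve"

# Example usage with a few hypothetical scores from a deployed model.
for s in (0.95, 0.72, 0.10):
    print(s, "->", action_for_score(s))
```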

Here is another simple example of the power of continuous improvement that is not related to analytics. For many years, I took notes using Emacs in Outline mode. Recently, after reading about the Zettelkasten method, I switched to using Emacs in Org mode and adopted a few of the ideas used in digital Zettelkasten. This small change has made it much easier for me to find the technical information that I need. You can find a nice introduction to Zettelkasten on LessWrong.

References

[1] Dave Oliver, Against the Tide: Rickover’s Leadership Principles and the Rise of the Nuclear Navy. Naval Institute Press, 2014.

[2] Robert L. Grossman, The structure of digital computing: from mainframes to big data, Open Data Press, 2012. See Chapter 3, Technical Innovation vs. Market Clutter and Chapter 4, Technology Adoption Cycles. Also available from Amazon.

[3] Clayton M. Christensen, The innovator’s dilemma: when new technologies cause great firms to fail, Harvard Business Review Press, 2013.

[4] National Research Council. 1995. Research Restructuring and Assessment: Can We Apply the Corporate Experience to Government Agencies?. Washington, DC: The National Academies Press. https://doi.org/10.17226/9205. See https://www.nap.edu/read/9205/chapter/6.


Five Things Every Senior Executive Should Know About AI and ML (2020 Edition)

January 6, 2020 by Robert Grossman

Some of the key differences between AI, machine learning, and deep learning.

It is clear that artificial intelligence (AI) and machine learning (ML) are important, but with all the reports and all the self-proclaimed pundits, it is easy to lose track of what is going on and what is essential. In this short overview, we go over five things that every senior executive should know about AI and ML.

The first two points discussed below may seem to be contradictory at first, but in fact are not. We will discuss both together, and, as we do, it may be helpful to slightly modify F. Scott Fitzgerald’s remark about holding two opposing ideas in mind as follows: “The test of a first-rate intelligence is the ability to hold two opposing ideas about an issue in your mind at the same time, and still retain the ability to make reasonable judgements about it.”

Here are the first two points:

Point 1. AI and ML are overhyped, and a fair amount of what is described as AI and ML today is older technology that has been remarketed as AI.

Point 2. Over the past several years there have been some important advances in AI and ML and there is an argument to be made that “Data is the new oil and AI is the factory.”

Because of Point 2, it is important to take a new look at how AI and ML can benefit your organization, if you haven’t done so recently. Because of Point 1, doing so can be challenging, given the hype and misinformation that are so rampant.

AI and ML have seen some important advances over the past few years. There are many reasons for this, but perhaps the most important are the following.

Three macro factors behind some of the recent advances in deep learning

  1. There is a lot more data available for machine learning and a lot more of it is labeled with the type of labels that many machine learning algorithms require.  
  2. The underlying computing infrastructure (graphics processing units, or GPUs) used by games turned out to be incredibly useful for machine learning, and even more specialized computing infrastructure for machine learning has been developed (for example, tensor processing units, or TPUs).
  3. Over the past few years, there have been some nice algorithmic advances that leverage 1 and 2. These include an ML technique called transfer learning, which takes an ML model built for one problem and uses it as a component of an ML model for another problem.

On the other hand, it is just as important to keep in mind that AI is being seriously overhyped. It is relatively easy to raise venture funding in AI, which creates many companies that not only will not be around in a few years when their venture funding dries up, but that are not producing much value in the near term and are only adding to the market clutter in the space. In 2018, VCs invested a record $9.3B into US-based AI startups. This is over eight times the $1.1B invested in US-based AI startups five years earlier, in 2013 [1].

If you lead an organization, start with Point 2 and keep Point 1 in mind. If you lead a business unit that uses AI and ML as one of your enabling technologies, then you need to manage Point 1 and leverage Point 2.

It may be helpful to recall that we have seen this tension between real advances in building analytic models over large data and hype driven by venture-backed startups twice before during the past 30 years. Although real advances were made in each period, there was also a hype cycle, with most of the efforts not delivering much lasting value.

  • Hype cycle 1: Data mining and knowledge discovery (1995-2001)
  • Hype cycle 2: Big data and data science (2010-2018)
  • Current hype cycle: AI and machine learning (2016-present)

Point 3. Even with no new advances, new applications of AI and ML will be developed for some time and will continue to transform business.

An increasing number of deep learning applications are being developed, due primarily to the following five factors.

Five factors that are driving new AI applications

  1. New sources of data, including location information and images from phones and data from the Internet of Things (IoT), operational technology (OT), online-to-offline (O2O), autonomous vehicles, etc.
  2. Easy access to powerful computing infrastructure due to cloud computing infrastructure containing GPUs and TPUs, as well as on-premise GPU clusters.
  3. The availability of large labeled datasets that are openly shared and readily available both for research and commercial applications.
  4. Powerful software frameworks that support machine learning in general and deep learning in particular.
  5. The unreasonable effectiveness of transfer learning and other algorithmic advances.

Unlike the prior periods of hype mentioned above, the current period has seen large investments in open source frameworks for machine learning and deep learning, including TensorFlow, PyTorch, and Keras.  With the ability to leverage cloud computing containing GPUs and the availability of large labeled datasets, it is much easier than in the past periods to create ML and DL models given the right data.  This is an important difference and one of the main reasons that the number of applications that are able to use ML and AI to provide meaningful performance improvements is significantly higher than in the 1995-2001 cycle and 2010-2018 cycle.

Because of this, we will probably see business and organizations continue to develop new deep learning applications for some time, even if there are no new algorithmic advances.

Point 4. Progress is very uneven.

The next point to keep in mind is that progress is quite uneven. It’s important to know which types of projects are likely to succeed and which ones are likely to fail.  In this section, we describe three tiers of AI and ML projects.  Tier 1 projects are likely to succeed if well executed.  Tier 3 projects are likely to fail.

The most progress has been made in Tier 1: image, text and speech (ITS) processing. This is primarily due to the five factors mentioned above, with the most important being the large amounts of labeled data that are available. Tier 2 applications require simple judgements. This tier includes spam detection, detecting fraudulent transactions, content recommendation and related problems. Tier 3 applications require complex judgements. Examples of applications in this tier include algorithmic hiring, recidivism prediction and related applications. A recent study has shown that some algorithmic hiring systems aren’t much better than random guessing [2].

Three Tiers of ML and DL Advances

  1. Image, text and speech (ITS) applications have seen significant improvements.
  2. Applications that require simple judgements have made good, but less dramatic improvements.
  3. Applications that require complex judgements and assessments have not made significant progress and significant progress shouldn’t be counted on in the near term.

Simple judgements are basically ML or other analytic models that produce scores, with associated actions and rules, within a well-understood business process. As the judgements become more complex and the framework for actions becomes more complex, bias becomes more important, and distinguishing causality from association becomes more important. It’s important to note that ML has been used successfully in the second category for some time, including in cycles 1 and 2. The dramatic advances from deep learning techniques have been largely focused on the first category, ITS.

Arvind Narayanan’s notes from his talk “How to Recognize AI Snake Oil” [3] provide another perspective worth understanding when predicting which AI applications are likely to succeed and which are likely to fail. Narayanan distinguishes between three tiers of AI applications that overlap with the categories above. The first category, which he calls perception, is more or less the same as the ITS category above. Narayanan’s second category is automating judgement, and his third category is predicting social outcomes. Recidivism prediction would be in his third category. His lecture notes [3] describe some of the successes in the first category and some of the snake oil, fraud and failures in the third category.

Point 5. Deriving value from AI and ML projects is hard and many projects will fail to deliver any significant business value.

It’s helpful to keep in mind what I call the staircase of failure [4, Chapter 11]: 

  1. Developing software is hard.
  2. Projects that require working with data are usually harder.
  3. Projects that require building and deploying analytic models are usually harder in turn.

If you think of this as a staircase, then to deliver value you must develop a software system that processes data, uses the data to build models, and uses the models to produce scores that drive actions that bring business value. In other words, you must climb the staircase to the top, which requires not only good technology, but also choosing the right problems, having good (usually labeled) datasets, and, most importantly, having a good analytic team [4, Chapter 12], a good project structure [4, Chapter 11], and a good way of using the outputs of the models to produce actions that bring business value [4, Chapter 13].

References

[1] CB Insights, The United States Of Artificial Intelligence Startups, November 26, 2019, retrieved from https://www.cbinsights.com/research/artificial-intelligence-startup-us-map/ on December 10, 2019. Also see CB Insights, What’s Next in AI? Artificial Intelligence Trends, 2019.

[2] Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy, Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices, arXiv preprint arXiv:1906.09208, 2019.

[3] Arvind Narayanan, How to Recognize AI Snake Oil, retrieved from https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf on December 10, 2019.

[4] Robert L. Grossman, The Strategy and Practice of Analytics, to appear.

