Analytic Strategy Partners


Improve your analytic operations and refine your analytic strategy



The Different Varieties of Advisors & the Difference it Makes

January 10, 2022 by Robert Grossman


“The fox knows many things, but the hedgehog knows one big thing.” Archilochus, Greek poet, c. 680 – c. 645 BC

Question: “By the way, why did you make this classification [between hedgehogs and foxes]?” Isaiah Berlin: I never meant it very seriously. I meant it as a kind of enjoyable intellectual game, but it was taken seriously. Every classification throws light on something, this one was very simple.

For my January 2022 post, I want to return to the theme of last year’s January post about why a board should include someone with a good knowledge of analytic strategy. In this post, we will discuss different types of advisors on analysis and analytic strategy. It’s good to keep in mind Isaiah Berlin’s response when asked about his distinction between two types of thinkers (hedgehogs and foxes): “every classification throws light on something, [but] this one [is] very simple” [1].

In the January 2021 post, I listed three reasons that a company should consider putting an expert on analytics on its board or one of its advisory boards:

  1. Analytic and AI strategy is too important for a company not to have someone with board level experience in this area.
  2. Board members with experience developing, deploying and operating complex analytic projects have critical experience using technology to innovate, not just operate.
  3. A board member with an analytic perspective helps with evaluating cybersecurity and digital transformation, both critical topics for many boards.

A year later, these three reasons are as important as ever. In this post, I’ll review five different types of advisors in general, and how this applies to advisors familiar with analytic strategy. It is important to note that an individual is often a blend of two or more of these types.

Type 1. The Problem Solver

The first type of advisor is the problem solver, who can listen carefully, think through a problem, challenge, or situation thoughtfully and carefully, and then crystallize the essential issues, trade-offs, and choices so that others most familiar with the facts can make a better decision. This type of advisor has experience, judgement, and often gravitas, but any particular domain knowledge is usually less important than their overall ability to pinpoint the essential issues and frame the trade-offs that must be thought through. For simplicity, I call this type of advisor a problem solver, but they usually simply crystallize and frame a problem so others can more easily work out a good solution. This type of advisor does not rely on knowledge of any particular technical domain and, for this reason, can contribute valuably to technical discussions over a long period of time, even as the underlying technology changes.

Type 2. The Connector

The second type of advisor is the connector who seems to know almost everyone of importance in a particular industry, has a track record of being helpful with introductions, and is happy to connect you as required. There are different types of connectors. Some will do an introduction, which will guarantee at least one (courtesy) meeting. Others will work to introduce two individuals that can be mutually valuable to each other and will continue to connect from time to time even if there is no immediate benefit.

Another type of connector is someone who is a trusted member of a group that has a shared common experience and whose members have gone on to different successes but stayed in touch. Examples of these types of groups include the PayPal Mafia and the Fairchildren [2], the descendants of the Traitorous Eight.

Type 3. The Technical Expert

The third type is the technical expert, who has a deep understanding of a technical field, understands the economics and business value of that field, and has broad experience with the common problems and pitfalls and how to avoid them. This type of expert can save a company an enormous amount of time and money. One challenge is that this type of expert may not always be the best at arguing his or her point of view, and others with less knowledge and experience may be more persuasive. Another is that as the technology and the business and competitive environments change, a particular advisor’s expertise may become less and less relevant.

A great example of an influential Type 3 advisor is John Tukey, one of the original data scientists [3], who came up with the idea of the fast Fourier transform (FFT) in a meeting of President Kennedy’s Scientific Advisory Committee in 1963 [4]. The discussion topic was the ability to verify a proposed United States/Soviet Union nuclear test ban. One idea was to analyze seismological signals obtained from off-shore seismometers that could be positioned to verify compliance with a test ban treaty. The seismometers would produce a lot of data, and the FFT was one way to speed up the processing of that data. For an interesting history of the origins of the FFT and its acceptance by the scientific community, see [4]. For a broader overview of the FFT and its relevance to seismology, see [5].
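For readers who have not worked with it, here is a minimal sketch (in Python, using NumPy) of the kind of computation the FFT makes cheap: recovering the dominant frequencies in a sampled signal. The synthetic “seismic” trace, its 5 Hz and 12 Hz components, and the 100 Hz sampling rate are all invented for illustration.

import numpy as np

# Synthetic "seismic" trace: two sinusoids plus noise, sampled at 100 Hz.
fs = 100.0                                     # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                   # 10 seconds of samples
signal = (np.sin(2 * np.pi * 5 * t)            # 5 Hz component
          + 0.5 * np.sin(2 * np.pi * 12 * t)   # 12 Hz component
          + 0.2 * np.random.randn(t.size))     # measurement noise

# The FFT reduces the cost of Fourier analysis from O(n^2) to O(n log n).
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Report the strongest frequency component (ignoring the DC term).
peak = np.argmax(np.abs(spectrum[1:])) + 1
print(f"Dominant frequency: {freqs[peak]:.1f} Hz")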

Type 4. The Management Expert

The fourth type of advisor is the management expert, who can bring management expertise to a problem or opportunity. This is especially valuable when an outside perspective is needed. Sometimes these advisors may be put on a board because they are particularly familiar with certain management issues, such as corporate strategy, marketing, acquisitions, information strategy, change management, government sales, etc.

An archetype of a Type 4 advisor is Peter Drucker, who coined the term “knowledge worker” in the 1950s and argued that knowledge workers were an important source of competitive advantage long before computers, information and digital transformation became commonplace [6, 7]. As another example, he argued that “purposeful abandonment” was a critical part of any business strategy. This is the argument that management focus requires a very purposeful pruning of business units and business activities that are no longer worth the resources spent pursuing them, thus freeing up resources for more strategic activities. Although these observations sound quite familiar now, they were much less familiar when he first argued for them.

Type 5. The Trusted, Independent & Discreet Advisor

The fifth type of advisor is not always present, and not strictly required, but when present is the trusted and discreet advisor: someone independent who often has a long relationship with those in the company or organization being advised.

An interesting example of the trusted and discreet advisor was Harry Hopkins, who was President Franklin Delano Roosevelt’s advisor and was influential in shaping the relationship between the United States and the United Kingdom in World War II.

Again, it is important to point out that an advisor is often a blend of two or more of these types.

Relevancy

Advisors seem to age at different rates. As long as they remain sharp and their expertise is still relevant to the company, the capabilities provided by the problem solver and the management expert remain relevant and useful. On the other hand, the value of the network of some connectors may become less relevant over time. Other connectors seem to have the almost magical ability to be personally acquainted with exactly the right individuals at the right time.

Finally, there are important differences among technical experts. Some have a deep and profound knowledge of certain technologies, such as data or analytics, and are comfortable using the current technical parlance and buzzwords. For example, a technical expert of a certain age will have moved from framing technical discussions in the language of data mining (1990s), to predictive analytics (2000s), to data science (2010s), to AI (2020s). Others stick with a particular framing of the problem, and although their advice is probably still relevant, it can seem dated.

References

[1] Jahanbegloo, Ramin. 2007. Conversations with Isaiah Berlin. Halban. Kindle Edition. Location 2742.

[2] Laws, David. 2016. Fairchild, Fairchildren, and the Family Tree of Silicon Valley, December 20, 2016, retrieved from computerhistory.org on January 2, 2022.

[3] Donoho, David. 2017. 50 years of data science. Journal of Computational and Graphical Statistics, 26(4), pp.745-766.

[4] Cooley, James W. 1987. How the FFT gained acceptance. In Proceedings of the ACM conference on History of scientific and numeric computation, pp. 97-100.

[5] Rockmore, Daniel N. 2000. The FFT: an algorithm the whole family can use. Computing in Science & Engineering, 2(1), pp.60-64.

[6] Drucker, Peter F. 1959. Landmarks of Tomorrow. Harper, New York.

[7] Drucker, Peter F. 1992. The new society of organizations. Harvard Business Review 70(5) pp. 95-104.


100 Millisecond Auctions, Your Privacy and Regulatory Risk

December 14, 2021 by Robert Grossman

The Regulatory Risks Posed by GDPR When Using Bidstream Data in Real Time Bidding (RTB)

You can download Mobilewalla and Ubermedia's data directories from the evidence we sent the DPC 13+ months ago here https://t.co/3ccOMbNyNC. Note: this is clearly the IAB standard. pic.twitter.com/aSHP3NzQgE

— Johnny Ryan (@johnnyryan) November 18, 2021
Figure 1. A tweet from Johnny Ryan from the Irish Council for Civil Liberties (ICCL) showing some of the data that is available in a TC consent string. The ICCL is involved in litigation under the GDPR with those using bidstream data for targeting.

In real time bidding (RTB), there is a 100 ms auction in which different advertisers bid against each other to place an ad on a webpage or other space offered by a publisher. This is why, when browsing, you sometimes notice the page jump a bit as you start to read it: the page content can shift after the ad loads.

The data exchanged in this auction is called bidstream data and includes location information, information about the application and system you are using, and standardized codes developed by the Interactive Advertising Bureau (IAB). The information about the application and system you are using is enough for online device fingerprinting. The specificity in the IAB codes can be a bit shocking the first time you look at them. Here are some examples of IAB codes from the IAB Taxonomy:

ID 281 Interest | Businesses and Finance | Bankruptcy
ID 351 Interest | Family and Relationships | Adoption and Fostering
ID 357 Interest | Family and Relationships | Special needs Kids
ID 396 Interest | Health and Medical Services | Mental Health Services
ID 432 Interest | Hobbies & Interests | Scrapbooking
ID 434 Interest | Hobbies & Interests | Beekeeping
ID 565 Interest | Pharmaceuticals, Conditions, and Symptoms | STD
ID 568 Interest | Pharmaceuticals, Conditions, and Symptoms | Substance Abuse
ID 572 Interest | Pharmaceuticals, Conditions, and Symptoms | Cancer
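To make the discussion concrete, the sketch below (Python) assembles a simplified bid request in the spirit of the OpenRTB protocol used in RTB auctions. The field names follow OpenRTB only loosely, and every value is invented for illustration; a real bid request carries considerably more detail, which is exactly why fingerprinting and profiling are possible.

import json

bid_request = {
    "id": "auction-0001",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],      # the ad slot being auctioned
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 12; Pixel 6) ...",  # user agent
        "ip": "203.0.113.42",                                  # address from a documentation range
        "os": "Android",
        "geo": {"lat": 41.88, "lon": -87.63, "country": "USA"},
    },
    "user": {
        # Hypothetical audience segments keyed to IAB taxonomy IDs like those listed above.
        "data": [{"segment": [{"id": "281"}, {"id": "565"}]}],
    },
}

print(json.dumps(bid_request, indent=2))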

There have been concerns for some time about the leakage of private information in bidstream data and about the inadequacy of the pop-up consents that are supposed to give users some control.

Providing Consent for Bidstream Tracking Data

For websites that follow the European General Data Protection Regulation (GDPR), users are presented with a pop-up banner that asks for consent for the collection of tracking information. In general, websites use this information both for internal purposes and for targeted advertising, such as supplying information for bidstream auctions.

In 2018, the IAB developed a “consent framework” called the Transparency and Consent Framework to standardize how advertisers and publishers collect and store the required consent information. The information is passed around in a format standardized by the IAB called the TC consent string.
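As a rough illustration of how the TC string is put together, the sketch below (Python) base64url-decodes the core segment of a TC string and reads its first six bits, which under the TCF v2 layout encode the string’s version. This is a simplification: real TC strings pack many more fields, including per-vendor consent bits.

import base64

def tc_string_version(tc_string):
    # The core segment precedes any '.' separators; it is base64url encoded
    # without padding, and (assuming the TCF v2 layout) its first 6 bits
    # hold the version number.
    core = tc_string.split(".")[0]
    padded = core + "=" * (-len(core) % 4)   # restore base64 padding
    raw = base64.urlsafe_b64decode(padded)
    return raw[0] >> 2                       # top 6 bits of the first byte

# Example with a made-up (truncated) TC string:
# print(tc_string_version("CPc8aAAPc8aAAAGABCENCY..."))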

Changes in the Regulatory Framework for Using Bidstream Data

The Irish Council for Civil Liberties (ICCL) has brought a case under the GDPR claiming that the IAB Transparency and Consent Framework, and its use of the TC string in real time bidding, does not provide adequate consent [1]. The actual 174-page court filing [2] provides a very detailed explanation of the bidstream data that is exposed and why it should be considered a data breach under the GDPR. For a good overview of the ICCL case and its implications, see [3].

Upcoming Regulatory Changes and Your Analytic Strategy

As a result of the ICCL court case, there may be changes in the regulations governing the use of bidstream data, and one of the challenges from an analytic strategy point of view is to have a plan for updating your own strategy in anticipation of future regulatory changes.

Figure 2. The process by which advertisers (the brand) buy advertising space from publishers. An advertiser can use a demand side platform to work with different networks of publishers, and a publisher can use a supply side platform to work with different networks of advertisers. Advertisers buying space for advertising and publishers selling space for advertising are matched by the ad exchange, which uses the bidstream data to determine the auction prices. The figure is from Wikipedia (CC-BY).

Problems with Bidstream Data

The exchange of bidstream data involves the advertiser, the publisher, the ad exchange, and, usually, a demand side and supply side platform. Advertisers buying space for advertising and publishers selling space for advertising are matched by the ad exchange. An advertiser can use a demand side platform to work with different networks of publishers and a publisher can use a supply side platform to work with different networks of advertisers. See Figure 2. With this many different entities involved in the auction, it is not surprising that there are often problems with bidstream data.

Inaccurate information. Given the amount of information exchanged in RTB and the monetary value involved, there is a significant level of inaccurate information being exchanged, and a significant level of activity in which bidstream data is used for purposes for which it was not intended and for which consent was not given. An example is surveilling bidstream data and processing it with device fingerprints to build profiles of users.

Misuse of bidstream data. This is an example of adversarial analytics in the original sense (not the more recent GAN sense), in which two parties build analytic models that work at cross purposes. Here, advertisers and publishers exchange bidstream information to run real time auctions, while adversaries monitor that information to build profiles of users that can be monetized (a toy illustration of device fingerprinting follows below).

Challenges with correcting it. Most websites list a long list of advertising partners with which they exchange bidstream data (often hundreds) and leave it to the user to update or correct the information with each partner. This is almost never practical.
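As an illustration of the profiling risk mentioned above, the short sketch below (Python) shows how little it takes to turn fields routinely present in bidstream data into a stable device fingerprint. The attributes and values are hypothetical, and real fingerprinting systems use many more signals.

import hashlib

def device_fingerprint(ua, os, language, screen):
    # Hash a handful of device attributes into a stable identifier. Anyone
    # observing these fields across many bid requests can link them over
    # time, even without cookies.
    raw = "|".join([ua, os, language, screen])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Hypothetical device attributes of the kind exposed in a bid request.
print(device_fingerprint(
    ua="Mozilla/5.0 (Linux; Android 12; Pixel 6) ...",
    os="Android 12",
    language="en-US",
    screen="1080x2400",
))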

The Emerging Era of Post Third Party Cookies

We are moving to an era in which advertising will not be able to use third party cookies, but will instead rely on other types of information for matching advertisers and publishers. From an analytic strategy point of view, it is critical to make sure that your analytic frameworks still provide the value you require when third party cookies are not supported. More generally, platforms will have to be updated so as not to rely on third party cookies.

Regulatory changes almost always take years, giving your organization time to put new analytic frameworks in place. The challenge, though, is usually not the technical one of transitioning to a new regulatory framework, but rather figuring out how to extract the necessary business value within the new regulatory framework to provide the foundation for a successful analytic business.

References

[1] Irish Council for Civil Liberties, ICCL lawsuit takes aim at Google, Facebook, Amazon, Twitter and the entire online advertising industry, 15 June 2021, retrieved from ICCL.ie on December 6, 2021.

[2] ICCL complaint filed against IAB Technology Lab, Inc. and others in Hamburg District Court, May 15, 2021 (machine translated from the original German).

[3] Daniel Cooper, How a civil rights group is holding Europe’s online ad industry to account, Engadget, December 3, 2021. Retrieved from Engadget on Dec 14, 2021.


Analytic Strategies for Not-for-Profit Organizations: Chasing Down Root Causes

October 11, 2021 by Robert Grossman

Lessons from the Rockefeller Sanitary Commission in 1902

Figure 1. The incidence of some diseases is concentrated in certain geographic regions. This post is about the incidence of hookworm disease in the Southern states around 1902. The figure above shows that the incidence of hypertensive heart disease (ICD-9 402) for the period 2003-2010 was higher in certain Southern US states. Source: [1].

Peter Drucker’s General Advice About Strategy for a NFP

Peter Drucker wrote in Managing the Nonprofit Organization:

There’s an old saying that good intentions don’t move mountains; bulldozers do. In nonprofit management, the mission and the plan – if that is all there is – are good intentions. Strategies are the bulldozers.

They convert what you want to do into accomplishment. They are particularly important in nonprofit organizations. Strategies lead you to work for results. They convert what you want to do into accomplishment. They also tell you what you need to have by the way of resources and people to get the results.

Peter Drucker, Managing the Nonprofit Organization [2].

Lazy Workers in Southern States – The View in 1902

The quote above from Peter Drucker is inspirational, but not particularly helpful for those broadly familiar with strategy and analytics. In this post, we look at a data driven strategy developed by John D. Rockefeller and those around him to improve living conditions and worker productivity in southern US states at the turn of the 20th Century. We follow [3] and [4].

The story begins with the US Public Health scientist Charles Stiles who identified diseases caused by hookworms (Necator americanus) as prevalent in Southern states, especially on farms and plantations in sandy areas. Hookworms are parasites, and those infected with them were anemic and had difficulty working [3]. To outsiders, workers infected with hookworms appeared to be lazy.

Stiles lectured on his investigations at a 1902 Sanitary Conference meeting in Washington and his findings were reported in newspapers, such as the New York Sun, which used the headline “Germ of Laziness Found [3].”

By the mid-1870s, European physicians had linked the parasitic worms to human symptoms of severe anemia, pallor, and weakness [3]. Hookworms could enter through the soles of your feet, infect your body, be passed in your stools, return to the ground, and then infect others.

Rockefeller Sanitary Commission

In 1909, Stiles attracted the attention of John D. Rockefeller, who was by then a philanthropist, and Rockefeller provided $1 million (equivalent to about $27 million in 2021) to establish the Rockefeller Sanitary Commission for the Eradication of Hookworm Disease. Rockefeller hired Stiles and project administrators [3].

The conditions in the Southern US were conducive to the spread of hookworm disease. As summarized in [3]: “Primitive living standards increased Black and White farm families’ vulnerability to disease. The risk of exposure to hookworm infection was great as well; John A. Ferrell noted that in the Southern rural sections, ‘open privies and, far too often, no privies at all, are used, [so that] millions upon millions of [hookworm] eggs are scattered over the earth, and develop into minute, infecting worms ready to attack.’ … The climate favored wide regional geographic distribution of hookworm as the land, marked by warmth, moisture, and aerated sandy or loamy soils, allowed hookworm larvae to burrow for protection from the sun, perhaps for years. Hookworm also was endemic in the mining towns of North Carolina, Tennessee, Kentucky, and West Virginia, where the parasite found protection in mines from extremes of temperature and dryness” [3].

The Rockefeller Sanitary Commission (RSC) developed and executed a three-pronged strategy over the years 1909-1914:

  1. estimate hookworm prevalence in the American South;
  2. provide treatment;
  3. and eradicate the disease.

The RSC surveyed populations in 11 Southern states and found that about 40% of the population surveyed were infected with hookworms. Hookworms are still a problem today, with 500 million to 750 million individuals around the world estimated to be infected.
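As a modern analogue of the RSC’s survey arithmetic, the short sketch below (Python) computes a prevalence estimate and a simple normal-approximation confidence interval from survey counts. The counts are hypothetical, chosen only to be roughly in line with the 40% figure above.

import math

def prevalence_ci(infected, surveyed, z=1.96):
    # Point estimate and an approximate 95% confidence interval for prevalence.
    p = infected / surveyed
    se = math.sqrt(p * (1 - p) / surveyed)
    return p, (p - z * se, p + z * se)

# Hypothetical survey counts.
p, (lo, hi) = prevalence_ci(infected=4000, surveyed=10000)
print(f"Estimated prevalence: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")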

Reducing the prevalence of hookworm required understanding how it was spread:

Human transmission occurred as newly hatched hookworm larvae from eggs in contaminated soils entered hosts via direct, physical contact through a foot or hand. After penetrating the skin, they traveled to the heart and lungs and then to the small intestine, where they attached to capillaries to feed, remaining from 2 to 13 years. Females released thousands of eggs per day (9000–25000) into the intestine, which, after expulsion from the human body into inadequately designed privies or moist dirt, hatched in the soil and waited to enter human hosts, thus completing the cycle [3].

A campaign to educate people about the disease and how it was spread and to install latrines and toilets dramatically reduced the prevalence of the disease [3].

Three Lessons for Data Driven Not for Profits

The RSC was remarkably effective and there are many lessons that can be learned from it. Here we focus on just three.

Lesson 1. Use data to identify the problem and inform a solution. The RSC first surveyed the Southern states to understand the problem and how best to deploy its resources. A large part of the effort was to collect appropriate data in order to develop an appropriate solution. Note that the RSC did not rely on third party data from others, but funded efforts to collect the data that was needed.

Lesson 2. Make sure you understand the root cause of the problem so that appropriate actions can be taken. Here the action was not just treating the disease; even more important was understanding that the root cause was poor sanitary conditions that enabled the parasite to spread, and encouraging the use of latrines and toilets to stop that spread.

Lesson 3. Look at the problem holistically and deploy your funds to attack the problem as a whole. The RSC split their $1M of funding between i) research into the problem, ii) actions to treat patients and improve living conditions, and iii) education to change people’s behavior.

References

[1] Patterson, Maria T., and Robert L. Grossman. “Detecting spatial patterns of disease in large collections of electronic medical records using neighbor-based bootstrapping.” Big data 5, no. 3 (2017): 213-224.

[2] Drucker, Peter. Managing the non-profit organization. Routledge, 2012. Chapter 2.

[3] Elman, Cheryl, Robert A. McGuire, and Barbara Wittman. “Extending public health: the Rockefeller Sanitary Commission and hookworm in the American south.” American journal of public health 104, no. 1 (2014): 47-58.

[4] Bleakley, Hoyt. “Disease and development: evidence from hookworm eradication in the American South.” The quarterly journal of economics 122, no. 1 (2007): 73-117.


Crossing the Data Chasm for Cancer Research

June 15, 2021 by Robert Grossman

The Cancer Moonshot Program was announced in the State of the Union address on January 12, 2016. Its goal was to accomplish 10 years of research in 5 years, and one of its strategies was to use data sharing to help accomplish this.

The Moonshot was funded through the 21st Century Cures Act, which was passed by the U.S. Senate on December 7, 2016 and signed by President Obama on December 13, 2016. The act provided the NIH an additional $1.8 billion over seven years in supplemental funding for Cancer Moonshot projects and initiatives. The project was led by President Biden, who was then the Vice President of the US.

Data and data sharing were an important part of the Cancer Moonshot strategy. As we approach the fifth anniversary of the passage of the 21st Century Cures Act, it is a good time to look back at components of the underlying strategy from an analytic strategy point of view.

With five years of effort behind us, it should be easier to see what worked well, what worked less well, and how we might fine tune the current and planned activities.

As described on the National Cancer Institute’s Cancer Moonshot website [1], “The Cancer Moonshot has three ambitious goals: to accelerate scientific discovery in cancer, foster greater collaboration, and improve the sharing of data.”

As described in the White House’s announcement [2]: “Here’s the ultimate goal: To make a decade’s worth of advances in cancer prevention, diagnosis, and treatment, in five years.” The goal was to get done in five years what would normally take ten years. A critical element of the strategy was to share cancer related data in an effort to accelerate research.

In analytics and AI, the biggest challenge that you face is usually not building a model, but instead collecting or acquiring the data that you need to build the model. I call this crossing the data chasm, and one of the levers the Cancer Moonshot used was to require funded projects to share data. You can learn more about crossing the data chasm and its role in an analytic strategy in my Primer on Analytic Strategy.

Today as we plan the next five years, it is important to look at how we can leverage data sharing to continue the goal of accomplishing in five years what would normally take ten years.

The good news is that more and more data is being shared when it is funded with federal dollars. As an example, the NCI has developed a data sharing policy for all Cancer Moonshot funded projects [3]. The not so good news is that often data sharing is not required when research is funded by private foundations and private philanthropy, and data is rarely shared when research is funded by industry. Given the size and complexity of the cancer industry, the amount of money at stake, the competitiveness of scientists and cancer centers, the challenges with deidentifying research data, and the legal risk of sharing human subject data that is collected by healthcare providers, most cancer data is not shared, and there is much less progress as a result.

An exception is pediatric cancer. Fortunately, pediatric cancer is relatively rare compared to adult cancers. With fewer cases, there is usually not enough pediatric cancer data at any one research center to provide the number of cases required for research projects, unless data is shared [4].

Some success stories

One of the success stories of the Cancer Moonshot is the BloodPAC Consortium, whose mission is to accelerate the development, validation and accessibility of liquid biopsy assays to improve the outcomes of patients with cancer. The BloodPAC Consortium is now an independent 501(c)(3) organization with over 50 members organized into a number of working groups, and it operates the BloodPAC Data Commons to support data sharing among its members [5].

Other cancer data sharing success stories include the NCI Genomic Data Commons, AACR’s Project GENIE, and ASCO’s CancerLinQ. Each of these three projects provides large scale data sharing that has accelerated cancer research and resulted in many significant publications.

Why is data sharing so hard?

The first question to ask is: “Why is data sharing so hard?” There are a number of reasons, but probably the most important are the following:

  1. It’s difficult and time consuming for researchers to prepare and submit data for sharing. Often researchers have moved on to new experiments, new analyses, and new papers, and sharing data from their last project remains on their “B-list” of items to do.
  2. Investigators must balance the potential public good and benefit to patients when data is shared against the loss of momentum in their own careers when others publish results from data that they could have published themselves, given more time before sharing.
  3. Data is often not collected with consents that make it easy to share.
  4. Data sharing is often not required except for federally funded research, and even for federally funded research it is often not enforced [6].
  5. There is a risk when data is shared that the shared data contains sensitive information, or that third party data sources can be used to re-identify some subjects in large shared datasets [7].
  6. Often, the computing infrastructure required for data sharing is not funded, so data sharing becomes an “unfunded mandate.”

What can change?

Perhaps the most important question we can ask is: “What can change?”

  1. Cancer research organizations can form data sharing coalitions to share data about specific cancers of interest in order to accelerate research. For example, several NCI Comprehensive Cancer Centers could self-organize and share data to achieve the critical mass to study cancers that individually would be harder to study with just their own patients. Although there are already several projects like this, in practice just a small fraction of the potential data is being shared in this way.
  2. Research projects can be collaborative with patients, and data sharing technologies, such as Blue Button, can be used so that patients themselves can directly share their data. This is sometimes called patient-partnered research, and for new research projects it is by far the preferred approach. For this to work, healthcare providers must make it easier for patients to share their data using Blue Button and other data sharing technologies.
  3. We can improve the data ecosystems that link together local cancer information collected by different projects and that support federated learning (see the sketch after this list). This way, cancer data that cannot easily be shared with others can stay within the security and compliance boundaries of the organization that provides healthcare services, yet still contribute effectively to research by the broader community. This might be a good project for the proposed ARPA-Health (ARPA-H).
  4. We can reduce the liability and fines associated with the inadvertent disclosure of health information, and create insurance pools to pay the fines, in order to protect research organizations that use best practices to protect data but nonetheless have health information exposed.
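To illustrate the federated learning approach in item 3 above, here is a minimal sketch (Python) of federated averaging, in which each site trains a model on its own data and only model parameters, never patient records, leave the site’s security boundary. The sites, parameter values, and weighting scheme are invented for illustration and ignore the practical issues (heterogeneous data, secure aggregation, governance) a real system must handle.

import numpy as np

def federated_average(site_params, site_sizes):
    # Combine per-site model parameters, weighting each site by the number
    # of local records it trained on.
    weights = np.asarray(site_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, site_params))

# Hypothetical parameters from three cancer centers after one round of local training.
site_params = [np.array([0.10, 1.95]), np.array([0.30, 2.10]), np.array([0.20, 2.00])]
site_sizes = [1200, 800, 400]

print("Global model parameters:", federated_average(site_params, site_sizes))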

I’ll be returning to these four topics from time to time in future posts.

Disclaimers

I’m actively involved with both the BloodPAC Consortium and the NCI Genomic Data Commons.

References

[1] National Cancer Institute, Cancer Moonshot, retrieved from: https://www.cancer.gov/research/key-initiatives/moonshot-cancer-initiative on June 1, 2021.

[2] The White House, President Barack Obama, Join the Vice President’s Cancer Moonshot, retrieved from https://obamawhitehouse.archives.gov/cancermoonshot on June 1, 2021.

[3] NCI Cancer Moonshot Public Access and Data Sharing Policy, retrieved from https://www.cancer.gov/research/key-initiatives/moonshot-cancer-initiative/funding/public-access-policy on June 1, 2021.

[4] Samuel L. Volchenboum, Suzanne M. Cox, Allison Heath, Adam Resnick, Susan L. Cohn, and Robert Grossman, Data Commons to Support Pediatric Cancer Research, American Society of Clinical Oncology Educational Book 2017:37, 746-752

[5] Robert L. Grossman, Jonathan R. Dry, Sean E. Hanlon, Donald J. Johann, Anand Kolatkar, Jerry SH Lee, Christopher Meyer, Lea Salvatore, Walt Wells, and Lauren Leiman. “BloodPAC Data Commons for liquid biopsy data.” JCO Clinical Cancer Informatics 5 (2021): 479-486.

[6] Frisby, Tammy M., and Jorge L. Contreras. “The National Cancer Institute Cancer Moonshot Public Access and Data Sharing Policy—Initial assessment and implications.” Data & Policy 2 (2020).

[7] Luc Rocher, Julien M. Hendrickx, and Yves-Alexandre De Montjoye. “Estimating the success of re-identifications in incomplete datasets using generative models.” Nature communications 10, no. 1 (2019): 1-9.

