Events over the past two years have highlighted the many complexities and unknown dependencies in global supply chains. As different countries shut down, re-opened, and shut down again, the disruptions cascaded through the manufacturing of complex semiconductors, consumer electronics, and consumer packaged goods.
Many companies are now much more aware of the complex dependencies and multiple steps required to produce their end products and successfully deliver them to consumers. This complexity didn’t develop overnight and certainly not by design, but is rather the result of constantly changing geo-politics, varying labor costs globally, and moves to just-in-time manufacturing to limit the inventory held at any single step of the manufacturing process.
The analytical world is advancing toward similar levels of complexity as predictive models are built to power more diverse experiences across interconnected technology platforms and applications. Many organizations are building supply chains of analytical models, in which the outputs of multiple models feed one another and are combined across complex steps to produce a single recommendation, action, or prioritization of actions.
These models span many operational aspects of a business, including finance, human resources, supply chain, consumer experience, advertisement placement, campaign planning, and vendor and competitive analysis, to name a few. Many organizations take a diverse path to acquiring these models, building some in-house for core competitive capabilities and purchasing commercial models for functions of the business that are not core.
With limited data science resources in many organizations, this split between in-house and acquired models allows those resources to be applied to the most impactful areas of the business while leveraging industry learnings to drive innovation across all aspects of the organization.
This mix of model sources can accelerate time to value, but it also introduces a variety of risks: contract terms, the provenance of training sets, hard-to-locate bias in recommendations, and vendor-relationship risk from acquisitions or product strategy shifts.
Like physical supply chains, analytical model supply chains should be actively managed to minimize risk and provide alternative strategies for business continuity.
The Enterprise Architecture or Data Architecture team is responsible for keeping an inventory of all analytical models, their input and output data products, and the flow of data elements between models. This inventory becomes the organization-wide dependency map for recommendations and decisions, showing which models have influenced which outcomes. This landscape is critical for understanding the implications of vendor changes and new models, as well as for evaluating unexpected outputs.
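As a rough illustration, here is a minimal sketch of such a dependency map built as a directed graph. The model names, ownership labels, and the use of networkx are my own assumptions for the sake of the example, not a prescribed tooling choice.

```python
# A minimal sketch of a model dependency inventory as a directed graph:
# an edge A -> B means model B consumes an output of model A.
# Model names, owners, and data product names are illustrative assumptions.
import networkx as nx

inventory = nx.DiGraph()
inventory.add_node("churn_score", owner="in-house")
inventory.add_node("inventory_forecast", owner="vendor_y")
inventory.add_node("discount_optimizer", owner="vendor_x")
inventory.add_node("campaign_prioritizer", owner="in-house")

# Edges record which data product flows between the models.
inventory.add_edge("churn_score", "discount_optimizer", data_product="churn_scores_daily")
inventory.add_edge("discount_optimizer", "campaign_prioritizer", data_product="offer_recommendations")
inventory.add_edge("inventory_forecast", "campaign_prioritizer", data_product="sku_forecast")

def impact_of_change(model: str) -> set:
    """Every downstream model whose outputs could shift if `model` changes."""
    return nx.descendants(inventory, model)

def influences_on(model: str) -> set:
    """Every upstream model that has influenced `model`'s outputs."""
    return nx.ancestors(inventory, model)

# Which recommendations are exposed if vendor_y changes its forecasting model?
print(impact_of_change("inventory_forecast"))  # {'campaign_prioritizer'}
# Which models have influenced the campaign prioritization a customer sees?
print(influences_on("campaign_prioritizer"))   # both vendor and in-house models
```

Even a simple graph like this answers the two questions that matter most when a vendor changes a model or an output looks wrong: what is downstream of this model, and what is upstream of this decision?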
Organizations leveraging analytical models for decisioning and recommendations should establish a centralized authority to define testing standards and review model outputs. This ensures consistent measurement of outcomes across all model implementations and a clear path for escalating model behavior to executive leadership, which can determine whether the risk is acceptable given organizational principles and legal requirements. This team should include representation from legal, product, and data science practitioners.
There is growing awareness of the bias introduced by analytical models trained on data sets that reflect systemic inequality. This bias can manifest as poor recommendations or the elimination of options for certain populations, and it can reinforce inequality if not tested for and eliminated. All models, whether built in-house or provided by vendors, should be evaluated and tested to minimize bias, and any remaining bias should be accounted for in downstream business processes.
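As one concrete example, below is a minimal sketch of a simple fairness check (demographic parity difference) run against a hypothetical model output. The column names, segments, and the 0.1 escalation threshold are illustrative assumptions, not a recommended standard.

```python
# A minimal sketch of one simple fairness check: the gap between the highest
# and lowest positive-outcome rates across groups. The DataFrame columns and
# the 0.1 threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical model output: 1 = offer extended, 0 = offer withheld.
scores = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "C", "C"],
    "offer":   [1,   1,   0,   1,   0,   1,   0],
})

gap = demographic_parity_difference(scores, "segment", "offer")
if gap > 0.1:  # escalation threshold agreed with the central review authority
    print(f"Parity gap {gap:.2f} exceeds threshold; flag for review.")
```

A single metric like this is nowhere near a complete bias evaluation, but having some standard check wired into the testing pipeline gives the central review team something consistent to escalate against.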
For vendor-provided models, agreements should require disclosure of the training data sets so they can be evaluated and so model outputs can be reproduced as part of integration testing in your own environment.
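To make that concrete, here is a minimal sketch of an output-reproduction check, assuming the vendor supplies a callable scoring interface and a set of reference cases with certified outputs. The `vendor_predict` stand-in, the feature names, and the tolerance are all illustrative assumptions.

```python
# A minimal sketch of an output-reproduction check against vendor-certified
# reference cases; `vendor_predict`, the features, and TOLERANCE are stand-ins.
import math

TOLERANCE = 1e-6  # acceptable numeric drift; worth agreeing on contractually

def vendor_predict(features: dict) -> float:
    """Stand-in for the vendor's scoring call (replace with the real SDK or endpoint)."""
    return 0.3 * features["recency"] + 0.7 * features["spend"]

# Reference cases the vendor certifies alongside the disclosed training data.
reference_cases = [
    {"case_id": "r1", "features": {"recency": 0.2, "spend": 0.9}, "expected": 0.69},
    {"case_id": "r2", "features": {"recency": 0.5, "spend": 0.1}, "expected": 0.22},
]

def verify_reference_outputs(cases: list) -> list:
    """Re-score each reference case and collect any outputs that fail to reproduce."""
    mismatches = []
    for case in cases:
        actual = vendor_predict(case["features"])
        if not math.isclose(actual, case["expected"], abs_tol=TOLERANCE):
            mismatches.append((case["case_id"], case["expected"], actual))
    return mismatches

print(verify_reference_outputs(reference_cases))  # [] when outputs reproduce
```

Running this on every vendor release, in your environment rather than theirs, turns a contractual disclosure clause into an ongoing, automated check.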
Many companies producing analytical models for purchase and integration are smaller firms in the early stages of the corporate lifecycle. This creates risk if they are acquired or fail to establish a market large enough for survival. Vendor agreements should provide protections, such as access to the model source, escrow with trusted third parties, or a right of first refusal, to defend against a vendor becoming unable to support its technology long-term.
Analytical models aren’t a one-time deployment. They require regular re-deployment and performance tuning for maximum impact and continuous improvement, which many organizations accomplish through their ML Ops framework and tooling. As more third-party models are brought into the environment, they should be deployed within the existing ML Ops framework so that monitoring, response, and deployment processes are uniform across all models.
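As an illustration of what that uniformity might look like, here is a minimal sketch of a vendor model wrapped behind the same interface an in-house ML Ops stack could expect. The `ModelAdapter` protocol, the telemetry fields, and the vendor client call are assumptions for the example, not any particular vendor's API.

```python
# A minimal sketch of wrapping a vendor model behind a common interface so
# shared monitoring and deployment tooling treats it like any in-house model.
# The protocol, telemetry fields, and `client.score(...)` call are assumptions.
import logging
import time
from typing import Protocol

class ModelAdapter(Protocol):
    """Common contract every deployed model (in-house or vendor) must satisfy."""
    name: str
    version: str
    def predict(self, features: dict) -> float: ...

class VendorModelAdapter:
    """Adapts a vendor scoring client so shared monitoring sees a uniform model."""

    def __init__(self, name: str, version: str, client):
        self.name = name
        self.version = version
        self._client = client          # e.g. the vendor's SDK or REST client
        self._log = logging.getLogger(f"mlops.{name}")

    def predict(self, features: dict) -> float:
        start = time.perf_counter()
        score = self._client.score(features)  # vendor-specific call goes here
        latency_ms = (time.perf_counter() - start) * 1000
        # Emit the same telemetry in-house models emit, so dashboards,
        # drift checks, and alerting treat vendor models identically.
        self._log.info("prediction", extra={
            "model": self.name, "version": self.version,
            "latency_ms": round(latency_ms, 2), "score": score,
        })
        return score
```

The adapter is just one way to get there; the point is that vendor models emit the same telemetry and follow the same deployment path as every other model in the inventory.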
The deployment of analytical models will only accelerate as more domains find the value of data science and realize its impact on business operations, customer acquisition, and personalization. These models will create ever increasing dependencies that affect downstream recommendations and behaviors.
Careful planning and ongoing management of the analytical supply chain will ensure that vendor changes, bias, or unexpected model interactions don’t negatively impact the business or customer perceptions.
I hope you found this post useful. Let me know if you have any questions in the comments, and don’t forget to sign up for the next post.