The Effectiveness of AI and ML on Supply Chains Amidst a Global Pandemic

Written by sharmi1206 | Published 2021/02/27
Tech Story Tags: data-science | blockchain | machine-learning | artificial-intelligence | supply-chain | data | hackernoon-top-story | risk-management


Introduction and Motivation

The onset of Covid-19 has not only impacted the world’s most prominent factories in China but also massively disrupted the entire global supply chain. The pandemic has severely tested data scientists and analytics teams to apply AI / ML / data science suitably to the crisis response.
This entails not only early diagnosis (drug discovery and development), but also predicting which old and new drugs and treatments might alleviate the disease. Furthermore, not only pharmaceutical companies but also other industries see a huge benefit in their supply chains in terms of planning, forecasting, cost savings, and increased automation.
Companies are becoming extra vigilant in this crisis and shifting their attention to ensure supply chain and value networks are functioning properly.
The role of AI / ML and data science has been tremendous in minimizing Covid uncertainties and helping the supply chain industry to:
  • Focus on workforce / labor planning
  • Remain cautious / alert on tier-1 supplier risk
  • Manage and utilize the extended supply stock
  • Understand and activate alternate sources of supply
  • Update inventory policy and planning parameters
  • Enhance inbound materials visibility
  • Prepare for plant closures
  • Focus on production scheduling agility
  • Evaluate alternative outbound logistics options and secure capacity
  • Conduct global scenario planning

Covid Impact on Supply Chain

The below figure depicts data analysis as modeled by Blue Yonder, representing how COVID-19 is affecting a customer’s supply at impacted sites.
The objective of this solution is to take feeds from the Centers for Disease Control and Prevention (CDC) in real time and map manufacturing and logistics sites to model responses. This yields accurate predicted arrival times of supplies, giving customers time to proactively identify issues deep in the supply chain. In addition, customers can leverage ML-based recommendations to find alternate sources of supply and use integrated execution capabilities to turn those recommendations into alternate supply shipments.
Overall, the goal of introducing AI / ML into the supply chain process is to ease procurement processes, delivery logistics, traceability, or storage, thereby smoothing the accessibility, availability, and quality of medicines.
Pharmaceutical companies see a remarkable leap in terms of:
  • Demand Forecasting – To accurately predict shifts in demand and consumption, decreasing stockouts and wastage (see the forecasting sketch after this list).
  • Reporting Substandard medicines – To report substandard medicines in a timely manner.
  • Ensuring Continuous Supply – To ensure a continuous supply and provide emergency humanitarian relief to those who are affected and in serious condition.
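As a simple illustration of the demand forecasting point above, a classical time series model can project the next few periods of demand. The sketch below uses statsmodels’ Holt-Winters exponential smoothing on a made-up monthly series; the data and parameters are illustrative and not taken from any of the libraries discussed later.

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly demand for a drug SKU (units); purely illustrative data
demand = pd.Series(
    [1509, 1855, 2665, 1841, 1231, 2598, 1988, 1988, 2927, 2707, 731, 2598,
     1602, 1923, 2741, 1905, 1298, 2644, 2031, 2044, 3012, 2788, 805, 2671],
    index=pd.date_range("2019-01-01", periods=24, freq="MS"))

# Additive trend + seasonality; 12-month season for monthly data
model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(3))  # demand forecast for the next three months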
A global disruptive event like Covid-19 has created an awareness among many companies that investing today in a resilient supply chain will leave them best positioned to weather the next event that obstructs the global flow of goods.
In this blog, we list some common open-source supply chain libraries and describe what kinds of problems they solve.

List of Open Source Supply Chain Libraries

TensorHouse is a collection of reference machine learning and optimization models for enterprise operations — marketing, pricing, supply chain, and so on — that can be used for academic research as well as industrial purposes.
Its world_of_supply modules implement a multi-agent reinforcement learning environment; after defining a global training policy for the trainer, the policy map can be trained with trainer = wsrt.train_ppo(n_iterations = 5), as shown below.
import world_of_supply_rllib_models as wsm
importlib.reload(wsm)
import world_of_supply_rllib as wsrl
importlib.reload(wsrl)
import world_of_supply_rllib_training as wsrt
importlib.reload(wsrt)

wsrt.print_model_summaries()

# Policy training
trainer = wsrt.train_ppo(n_iterations = 5)

---------------------------------------------------------------------------------------------
import time

from ray.rllib.agents import ppo  # provides PPOTrainer and DEFAULT_CONFIG
from ray.rllib.models.modelv2 import ModelV2
from ray.rllib.models.tf.recurrent_tf_modelv2 import RecurrentTFModelV2
from ray.rllib.utils.annotations import override
from ray.rllib.utils import try_import_tf

# Note: helpers such as policy_mapping_global, policies, filter_keys,
# create_policy_mapping_fn, and base_trainer_config are defined elsewhere
# in the TensorHouse notebook.
def train_ppo(n_iterations):
    
    policy_map = policy_mapping_global.copy()
    ext_conf = ppo.DEFAULT_CONFIG.copy()
    ext_conf.update({
            "num_workers": 16,
            "num_gpus": 1,
            "vf_share_layers": True,
            "vf_loss_coeff": 20.00,      
            "vf_clip_param": 200.0,
            "lr": 2e-4,
            "multiagent": {
                "policies": filter_keys(policies, set(policy_mapping_global.values())),
                "policy_mapping_fn": create_policy_mapping_fn(policy_map),
                "policies_to_train": ['ppo_producer', 'ppo_consumer']
            }
        })
    
    print(f"Environment: action space producer {env.action_space_producer}, action space consumer {env.action_space_consumer}, observation space {env.observation_space}")
    
    ppo_trainer = ppo.PPOTrainer(
        env = wsrl.WorldOfSupplyEnv,
        config = dict(ext_conf, **base_trainer_config))
    
    training_start_time = time.process_time()
    for i in range(n_iterations):
        print(f"\n== Iteration {i} ==")
        update_policy_map(policy_map, i, n_iterations)
        print(f"- policy map: {policy_map}")
        
        ppo_trainer.workers.foreach_worker(
            lambda ev: ev.foreach_env(
                lambda env: env.set_iteration(i, n_iterations)))
        
        t = time.process_time()
        result = ppo_trainer.train()
        print(f"Iteration {i} took [{(time.process_time() - t):.2f}] seconds")
        print_training_results(result)
        print(f"Training ETA: [{(time.process_time() - training_start_time)*(n_iterations/(i+1)-1)/60/60:.2f}] hours to go")
        
    return ppo_trainer
It yields the following output:

- date: 2021-01-03_15-13-43
- episode_len_mean: 50.0
- episodes_total: 2600
- episode_reward_max: 218469.5520000001
- episode_reward_mean: -110872.94516923078
- episode_reward_min: -360037.5999999991
- timesteps_total: 130000
- policy_reward_max: {'baseline_producer': 13610.063999999991, 'ppo_producer': 11440.663999999992, 'baseline_consumer': 13610.063999999991, 'ppo_consumer': 11440.663999999992}
- policy_reward_mean: {'baseline_producer': -4887.064453846157, 'ppo_producer': -8704.695287179491, 'baseline_consumer': -4887.064453846157, 'ppo_consumer': -8704.695287179491}
- policy_reward_min: {'baseline_producer': -18567.200000000004, 'ppo_producer': -26316.4, 'baseline_consumer': -18567.200000000004, 'ppo_consumer': -26316.4}
Training ETA: [0.00] hours to go
TensorHouse also provides optimization techniques for defining an (s, Q)-policy as a baseline: every time the inventory position drops below the reorder point s, an order of the economic order quantity Q is placed. Below is a code snippet demonstrating how to tune such a policy using Facebook’s Ax.
import numpy as np
from ax import optimize

def func(p):
    policy = SQPolicy(
        p['factory_s'], 
        p['factory_Q'],
        [ p['w1_s'], p['w2_s'], p['w3_s'], ],
        [ p['w1_Q'], p['w2_Q'], p['w3_Q'], ]
    )
    return np.mean(simulate(env, policy, num_episodes = 30))

best_parameters, best_values, experiment, model = optimize(
        parameters=[
          { "name": "factory_s",   "type": "range",  "bounds": [0.0, 30.0], },
          { "name": "factory_Q",   "type": "range",  "bounds": [0.0, 30.0], },
          { "name": "w1_s",        "type": "range",  "bounds": [0.0, 20.0], },
          { "name": "w1_Q",        "type": "range",  "bounds": [0.0, 20.0], },  
          { "name": "w2_s",        "type": "range",  "bounds": [0.0, 20.0], },
          { "name": "w2_Q",        "type": "range",  "bounds": [0.0, 20.0], },    
          { "name": "w3_s",        "type": "range",  "bounds": [0.0, 20.0], },
          { "name": "w3_Q",        "type": "range",  "bounds": [0.0, 20.0], },    
        ],
        evaluation_function=func,
        minimize=False,
        total_trials=200,
    )
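Note that SQPolicy and simulate above come from the TensorHouse notebooks and are not defined in this snippet. The decision rule itself is simple; a minimal illustrative sketch (not the library’s implementation) might look like this:

class SQPolicy:
    """Minimal (s, Q) rule: order a fixed quantity Q whenever the
    inventory position of a facility drops below its reorder point s."""
    def __init__(self, factory_s, factory_Q, warehouse_s, warehouse_Q):
        self.factory_s, self.factory_Q = factory_s, factory_Q
        self.warehouse_s, self.warehouse_Q = warehouse_s, warehouse_Q

    def order_quantity(self, facility, inventory_position):
        # facility 0 is the factory; 1..n are warehouses
        if facility == 0:
            s, q = self.factory_s, self.factory_Q
        else:
            s, q = self.warehouse_s[facility - 1], self.warehouse_Q[facility - 1]
        return q if inventory_position < s else 0.0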
in-toto is a framework to protect the integrity of the software supply chain: each step is validated as having been performed by authorized personnel, ensuring the product is not tampered with in transit.
This library, implemented in TensorFlow 1.15, uses a neural network consisting of an RNN or self-attentive encoder-decoder, with an attention module connecting the decoder to the encoder; the model is trained by policy gradient. This kind of neural combinatorial optimization framework can solve the traveling salesman problem (TSP), optimizing the tour length over a permutation of cities. Such a library finds great application in optimizing distances between warehouses, distribution centers, and retail store locations.
The following code demonstrates how to train and visualize a 2D TSP problem from scratch.
python main.py --max_length=20 --inference_mode=False --restore_model=False --save_to=20/model --log_dir=summary/20/repo

tensorboard --logdir=summary/20/repo

python main.py --max_length=20 --inference_mode=True --restore_model=True --restore_from=20/model
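The objective the TSP model minimizes is simply the total length of the tour through all cities; a small self-contained sketch of that computation (illustrative, not part of the library):

import numpy as np

def tour_length(coords, permutation):
    # Length of the closed tour that visits the cities in the given order
    ordered = coords[permutation]
    shifted = np.roll(ordered, -1, axis=0)  # next city, wrapping back to the start
    return np.linalg.norm(ordered - shifted, axis=1).sum()

cities = np.random.rand(20, 2)  # 20 random cities in the unit square
print(tour_length(cities, np.random.permutation(20)))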
This library builds a simulation-optimization framework that minimizes average inventory while maintaining a desired average β service level at each stocking location. The framework creates an inventory profile (along with associated inventory parameters such as on-hand inventory, inventory position, and service level) across time, based on a base stock policy with a reorder point (ROP): if the inventory position falls to or below the ROP, the facility places an order for the amount (base stock level – current inventory level).
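That ordering rule can be stated in a few lines; the sketch below is purely illustrative and not the library’s code:

def replenishment_order(inventory_position, rop, base_stock_level):
    # Reorder-point / base-stock rule: once the inventory position falls to or
    # below the ROP, order up to the base stock level
    if inventory_position <= rop:
        return base_stock_level - inventory_position
    return 0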
The code below demonstrates how to evaluate the objective function for the optimization, which minimizes on-hand inventory and heavily penalizes missing the β service level (demand-volume based). Initially, the set of supply chain nodes (configurable) needs to be initialized.
import numpy as np

# Split the initial guess to get base stock and ROP
base_stock_guess = initial_guess[:(numNodes - 1)]
ROP_guess = initial_guess[(numNodes - 1):]

# Insert the supply node's base stock
baseStock = np.insert(base_stock_guess, 0, 10000)

# Insert a zero ROP for the first source node
ROP = np.insert(ROP_guess, 0, 0)

# Initialize inventory level
initialInv = 0.9*baseStock

# Average service level and on-hand inventory over several simulation replications
replications = 20
totServiceLevel = np.zeros(numNodes)
totAvgOnHand = 0.0
for i in range(replications):
    nodes = simulate_network(i, numNodes, nodeNetwork, initialInv, ROP, baseStock,
                             demandAllNodes, defaultLeadTime, leadTimeDelay)

    totServiceLevel = np.array([totServiceLevel[j] +
                                nodes[j].serviceLevel for j in range(len(nodes))])

    totAvgOnHand += np.sum([nodes[j].avgOnHand for j in range(len(nodes))])

# Penalize any shortfall against the target service level
servLevelPenalty = np.maximum(0, serviceTarget - totServiceLevel/replications)  # element-wise max
objFunValue = totAvgOnHand/replications + 1.0e6*np.sum(servLevelPenalty)
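One plausible way to wire this objective into a derivative-free optimizer is shown below; the original repository may use a different search strategy, and the stubbed objective here only stands in for the simulation code above:

import numpy as np
from scipy.optimize import minimize

def objective(initial_guess):
    # Stand-in for the simulation-based objective computed above (objFunValue)
    return float(np.sum(initial_guess ** 2))

numNodes = 4
x0 = np.concatenate([np.full(numNodes - 1, 100.0),   # base stock guesses
                     np.full(numNodes - 1, 20.0)])   # ROP guesses
result = minimize(objective, x0, method="Nelder-Mead")  # derivative-free search
print(result.x)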
The below figure illustrates a typical supply value chain network that the given framework optimizes.
This library – A Deep Q-Network for the Beer Game (deep reinforcement learning for inventory optimization) – optimizes inventory costs so that both the individual and total supply chain costs are reduced as much as possible. During optimization, it tries to avoid out-of-stock situations and maintain supply chain equilibrium. The following figure illustrates a typical facility unit (distributor) with its incoming and outgoing streams/channels.
To train an SRDQN warehouse agent with an initial inventory of 10 units, playing against Sterman co-players, for 50,000 training episodes:

python main.py --gameConfig=8 --maxEpisodesTrain=50000 --ILInit2=10 --batchSize=128

# Internally the beergame is initialized as follows:

# initialize an instance of BeerGame
beerGame = clBeerGame(config)

# get the length of the demand
demand_len = np.shape(demandTr)[0]
# do initial tests
beerGame.doTestMid(demandTs[0:config.testRepeatMid])

# train the specified number of games
for i in range(0, config.maxEpisodesTrain):
    beerGame.playGame(demandTr[i % demand_len], "train")
    # get the test results
    if (np.mod(beerGame.curGame, config.testInterval) == 0) and (beerGame.curGame > 500):
        beerGame.doTestMid(demandTs[0:config.testRepeatMid])

# do the last test on the middle test data set
beerGame.doTestMid(demandTs[0:config.testRepeatMid])
The below video illustrates how we can leverage ML to facilitate sorting packages into different categories according to their final destination. It uses the TensorFlow Object Detection API to train a deep learning model based on the Faster R-CNN architecture.
With a camera placed above the conveyor belt, a snapshot of each parcel is sent to the computer, where it is run through a deep learning model (Faster R-CNN) to automatically determine the appropriate destination for the package.
Watch this YouTube video: https://youtu.be/uextIyx7sO4.
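To give a flavor of the inference step, the sketch below runs an exported TensorFlow Object Detection model on a parcel snapshot; the model path, threshold, and class handling are assumptions, not the video’s exact code:

import numpy as np
import tensorflow as tf

# Hypothetical path to a Faster R-CNN model exported with the Object Detection API
detect_fn = tf.saved_model.load("exported_model/saved_model")

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]

detections = detect_fn(input_tensor)
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)
# Keep confident detections and route the parcel by its predicted class
confident = classes[scores > 0.5]
print(confident)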
This library primarily provides a tool to design and optimize all kinds of industrial supply chains, from simple transportation networks up to complex multi-level manufacturing setups comprising components, parts, products, substitutes, suppliers, manufacturers, distributors, and customers.
This library solves some of the critical decision-optimization business problems of the network design phase, such as the Facility Location Problem (FLP). Many retailers need to decide where to open a new warehouse to optimize supply chain cost between plants or providers and customers or shops; likewise, a water distribution company may plan improvements to its distribution network by creating new tanks or pipes.
The below code snippet demonstrates how to optimize network design with a KPI (Key Performance Indicator) for each of the following costs:
  • variable plant cost
  • inbound transportation cost (from plant to distribution center)
  • outbound transportation cost (from distribution center to customer)
  • fixed distribution centers cost
  • variable distribution center cost
  • Number of newly opened distribution centers
For the code snippet below, we also applied different constraints, as listed here:
  • capacity constraints: one on the plants and products, and one on the plants, products, and distribution centers
  • demand satisfaction: what is shipped to a customer is exactly the quantity they expect
  • flow on distribution centers (structural constraint): what goes in from plants goes out to customers
  • cost variables definition constraints
# CREATE CPLEX MODEL

from docplex.mp.model import Model
mdl = Model(name='NetworkDesign')

# CREATE VARIABLES

openDC = mdl.binary_var_dict(distributionCenters, name='openDC')
shipDCToCustomer = mdl.continuous_var_cube(distributionCenters, products, customers, lb=0, name='shipDCToCustomer')
shipPlantToDC = mdl.continuous_var_cube(plants, products, distributionCenters, lb=0, name='shipPlantToDC')

shipDCCost = mdl.continuous_var_dict(distributionCenters, lb=0, name='shipDCCost')
storeDCCost = mdl.continuous_var_dict(distributionCenters, lb=0, name='storeDCCost')

mdl.print_information()

# CREATE KPIS

variablePlantCost = mdl.sum(df_productionData.varPlantCost[pl, pr] * shipPlantToDC[pl, pr, dc]
                            for pl in plants for pr in products for dc in distributionCenters)
mdl.add_kpi(variablePlantCost, 'variablePlantCost')

inboundTransportationCost = mdl.sum(df_inboundData.unitCost[pl, dc] * shipPlantToDC[pl, pr, dc]
                                    for pl in plants for pr in products for dc in distributionCenters)
mdl.add_kpi(inboundTransportationCost, 'inboundTransportationCost')

outboundTransportationCost = mdl.sum(shipDCCost[dc] for dc in distributionCenters)
mdl.add_kpi(outboundTransportationCost, 'outboundTransportationCost')

fixedDistributionCenterCost = mdl.sum(df_distributionCenters.fixedCost[d] * openDC[d] for d in distributionCenters)
mdl.add_kpi(fixedDistributionCenterCost, 'fixedDistributionCenterCost')

variableDistributionCenterCost = mdl.sum(storeDCCost[dc] for dc in distributionCenters)
mdl.add_kpi(variableDistributionCenterCost, 'variableDistributionCenterCost')

nbOpenDistributionCenters = mdl.sum(openDC[dc] for dc in distributionCenters)
mdl.add_kpi(nbOpenDistributionCenters, 'nbOpenDistributionCenters')

mdl.print_information()

# CREATE OBJECTIVE

mdl.minimize(variablePlantCost + inboundTransportationCost +
             outboundTransportationCost + variableDistributionCenterCost +
             fixedDistributionCenterCost)

mdl.print_information()
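The model can then be solved and its KPIs reported; a short sketch, assuming a local CPLEX runtime is available:

# Solve the MILP and print the objective together with the KPIs defined above
solution = mdl.solve(log_output=True)
mdl.report()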
Consolidating the outputs from the previous steps:
Model: NetworkDesign
 - number of variables: 980
   - binary=28, integer=0, continuous=952
 - number of constraints: 0
   - linear=0
 - parameters: defaults
 - problem type is: MILP

Total (root+branch&cut) =    0.35 sec. (125.19 ticks)
* model NetworkDesign solved with objective = 3195083.710
*  KPI: variablePlantCost              = 1024800.000
*  KPI: inboundTransportationCost      = 0.000
*  KPI: outboundTransportationCost     = 1499483.710
*  KPI: fixedDistributionCenterCost    = 500000.000
*  KPI: variableDistributionCenterCost = 170800.000
*  KPI: nbOpenDistributionCenters      = 1.000
supplychainpy is a Python library for supply chain analysis, modeling, and simulation, with facilities to create workflows for demand planners, buyers, supply chain analysts, and BI analysts. It provides visualization tools, allowing them to build reports and perform demand forecasts.
The following code snippet depicts supply chain orders with parameters such as safety stock, total orders, quantity on hand, economic order quantity, demand variability, reorder level, and average orders.
from supplychainpy.model_inventory import analyse
from supplychainpy.sample_data.config import ABS_FILE_PATH
from decimal import Decimal
import pandas as pd

raw_df = pd.read_csv(ABS_FILE_PATH['COMPLETE_CSV_SM'])
analyse_kv = dict(df=raw_df,
                  start=1,
                  interval_length=12,
                  interval_type='months',
                  z_value=Decimal(1.28),
                  reorder_cost=Decimal(400),
                  retail_price=Decimal(455),
                  file_type='csv',
                  currency='USD')

analysis_df = analyse(**analyse_kv)
Predicted output:
{'reorder_level': '4069', 'orders': {'demand': ('1509', '1855', '2665', '1841', '1231', '2598', '1988', '1988', '2927', '2707', '731', '2598')}, 'total_orders': '24638', 'economic_order_quantity': '44', 'sku': 'KR202-209', 'unit_cost': '1001', 'revenue': '123190000', 'quantity_on_hand': '1003', 'shortages': '5969', 'excess_stock': '0', 'average_orders': '2053.1667', 'standard_deviation': '644', 'reorder_quantity': '13', 'safety_stock': '1165', 'demand_variability': '0.314', 'ABC_XYZ_Classification': 'BY', 'economic_order_variable_cost': '15708.41', 'currency': 'USD'} ...
Further, for demand forecasting problems we can also apply different types of forecasting, such as simple exponential smoothing:

# simple_exponential_smoothing_forecast comes from supplychainpy's demand modeling module
from supplychainpy.model_demand import simple_exponential_smoothing_forecast

ses_df = simple_exponential_smoothing_forecast(
    demand=KR202_209_details[0].get('orders').get('demand'),
    length=12,
    smoothing_level_constant=0.5)
This JavaScript library (OriginTrail) is dedicated to making global supply chains work together by enabling a universal, collaborative, and trusted data exchange. It is built as an open protocol for cross-organizational data sharing in supply chains, supported by blockchain.
This library is equipped with Attestation Certificate Authority (ACA) and TPM Provisioning with Trusted Computing for Supply Chain Validation capabilities.
This repository develops a technical framework for improving traceability in supply chain systems using blockchain.
The Hyperledger Composer implemented in this repository can be used for the supply-chain industry. This library demonstrates how Hyperledger blockchain improves transparency and traceability of the industrial supply chain.
This library acts as an “umbrella” repository for blockchain supply-chain solutions, with support for archiving, retrieval, and validation of attachments. The built-in core services provide API support for interacting with the Ethereum smart contract.
This library deals with pharmaceutical supply chain logistics to help in medicine delivery. It provides accurate information across the entire supply chain pipeline through real-time updates and visibility of handovers. It also provides traceability of sources and faster collaboration between all parties.
This supply chain proof of concept is built in Hyperledger Fabric, with specific chaincode exposed as a REST API.
This library serves automobile supply chain management for transparency and auditability, with support for differential pricing, mediation between different parties, quality and compliance issues, inevitable disruptions, centralization, and fraud management.
This library provides a dynamic agent-based model (Acclimate) describing the propagation of disaster-induced production losses in the global economic network.
Severe limitations in existing supply chain frameworks make it hard for companies to maintain complete visibility of their supply network. This library provides automated supply chain mapping as a means of maintaining structural visibility of a company’s supply chain. Deep learning and natural language processing can be leveraged to a) automatically generate rudimentary supply chain maps, b) verify existing supply chain maps, or c) augment existing maps with additional supplier information.
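As a toy illustration of that NLP step (not the library’s actual pipeline), named entity recognition can pull candidate supplier organizations out of free text:

import spacy

nlp = spacy.load("en_core_web_sm")  # small English model; installed separately
text = "Acme Corp sources lithium cells from Contoso Batteries in Shenzhen."
doc = nlp(text)
# Organizations mentioned in the text are candidate nodes for a supply chain map
suppliers = [ent.text for ent in doc.ents if ent.label_ == "ORG"]
print(suppliers)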
This library has a framework to optimize production planning, using PuLP and 4 different solvers: CBC (default), Gurobi, CPLEX, and GLPK. The following code demonstrates how to define production variables and inventory constraints; a solver-selection sketch follows the constraint definitions.
Define Production variables:
production_variables = {
    index: pulp.LpVariable(name='X_' + str(row['period']),
                           lowBound=0,
                           cat=pulp.LpContinuous)
    for index, row in input_df_dict['input_data'].iterrows()}

OR

production_variables = pulp.LpVariable.dicts(
    name='X',
    indexs=input_df_dict['input_data'].index,
    lowBound=0,
    cat=pulp.LpContinuous)
Define Inventory Constraints:
for period, value in input_df_dict['input_data'].iloc[1:].iterrows():
    model.addConstraint(pulp.LpConstraint(
        e=inventory_variables[period - 1]
          + production_variables[period]
          - inventory_variables[period],
        sense=pulp.LpConstraintEQ,
        name='inv_balance' + str(period),
        rhs=value.demand))

OR

inv_balance_constraints = {
    period: model.addConstraint(pulp.LpConstraint(
        e=inventory_variables[period - 1]
          + production_variables[period]
          - inventory_variables[period],
        sense=pulp.LpConstraintEQ,
        name='inv_balance' + str(period),
        rhs=value.demand))
    for period, value in
    input_df_dict['input_data'].iloc[1:].iterrows()}
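To switch among the four solvers mentioned above, PuLP accepts a solver object in solve(); a brief sketch (model refers to the problem built above; CBC ships with PuLP, while the others need their own installation or license):

import pulp

solver = pulp.PULP_CBC_CMD(msg=True)   # default open-source CBC solver
# solver = pulp.GUROBI_CMD()           # or Gurobi
# solver = pulp.CPLEX_CMD()            # or CPLEX
# solver = pulp.GLPK_CMD()             # or GLPK
model.solve(solver)
print(pulp.LpStatus[model.status])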
For more details on how these solvers operate and their response times, please refer to A Simple Framework For Solving Optimization Problems in Python.
This repository contains a curated list of awesome supply chain blogs, podcasts, standards, projects, and examples.
  • Ethereum-SupplyChain – A supply chain smart contract built to demonstrate how supply chains can improve authenticity, efficiency, and privacy between seller and buyer.
  • ichain – A blockchain based on Tendermint, making deployment, connection to multiple networks, and running supply chain applications easier.
  • AuthentiFi – A blockchain-based Product Ownership Management System for anti-counterfeits in the post supply chain.

keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Keras (Python) and is capable of solving complex optimization problems of the supply chain. The following snippet (a multi-agent training setup using Ray RLlib, in the same reinforcement learning spirit) shows how such policies can be trained:
import gym
import ray
from ray import tune
from ray.rllib.examples.policy.random_policy import RandomPolicy  # example policy shipped with RLlib
from ray.rllib.examples.env.multi_agent import MultiAgentCartPole
from ray.tune.registry import register_env

ray.init()

# Simple environment with 4 independent cartpole entities
register_env("multi_cartpole", lambda _: MultiAgentCartPole(4))
single_env = gym.make("CartPole-v0")
obs_space = single_env.observation_space
act_space = single_env.action_space

tune.run(
    "PG",
    stop={"training_iteration": 20},  # number of training iterations
    config={
        "env": "multi_cartpole",
        "multiagent": {
            "policies": {
                "pg_policy": (None, obs_space, act_space, {}),
                "random": (RandomPolicy, obs_space, act_space, {}),
            },
            # Alternate agents between the trained PG policy and the random one
            "policy_mapping_fn": tune.function(
                lambda agent_id: ["pg_policy", "random"][agent_id % 2]),
        },
    },
)
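For completeness, keras-rl itself is typically used as in the sketch below, a standard DQN-on-CartPole setup along the lines of the keras-rl examples; the hyperparameters are illustrative:

import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy

env = gym.make("CartPole-v0")
nb_actions = env.action_space.n

# Small fully connected Q-network over the flattened observation
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=EpsGreedyQPolicy(), nb_steps_warmup=100,
               target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=10000, verbose=1)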

Conclusion

In this blog, we have seen the impact of Covid on the supply chain industry, leading to revenue loss and a lot of uncertainty. To combat the situation, we can use AI / ML frameworks for route optimization, price optimization, and inventory optimization, as demonstrated with the different code samples above. We also referenced many other open-source libraries that can be used for supply chain transactions on blockchain or for expediting supply chain analytics.
As the above figure illustrates, alongside AI / ML the most important technologies for the efficient functioning of the supply chain are IoT, cloud computing, 5G, 3D printing, and robotics.
However, technology alone can’t drive the digital supply network through events like COVID-19, a trade war, an act of war or terrorism, regulatory change, a labor dispute, a spike in demand for a particular product in a specific region, or a supplier bankruptcy.
What matters most at this moment of crisis is risk management, performance management, and timely proactive measures to ensure overall business continuity.


Written by sharmi1206 | https://www.linkedin.com/in/sharmistha-chatterjee-7a186310/
Published by HackerNoon on 2021/02/27