Increasing Operational Efficiency with Scalable Forecasting

August 31, 2021 | Ryan Schork

Forecasting is essential for planning and operations at any business, especially those where success is heavily indexed on operational efficiency. Retail businesses must ensure supply meets demand across volatile shifts in seasonal preferences and consumer demand. Manufacturers need the right amount of supplies and inventory to fulfill orders without locking up money in idle or unused resources. Other industries rely on forecasting for staffing, vendor commitments, and financial planning, among a host of other applications.

Similarly, DoorDash has many forecasting needs, covering everything from deliveries to marketing to financial performance. To ensure that our internal business partners get the information they need, our Data Science team developed what we call our Forecast Factory, a platform that allows all our teams to set up their own forecasts without the help of a dedicated team of data scientists. We’ll discuss the general characteristics of forecasts and the challenges of managing them at scale, then explain how we overcame those challenges by building our Forecast Factory.

What forecasting solutions cover 

Despite the different operational processes forecasts support, there are many commonalities among the implementations:

  • Benchmarking current course and speed
    • If a static operation is maintained, what would demand be? How long would our inventory last? How many customer support tickets will our agents face? 
  • Scenario planning
    • If we take a specific action, how would demand change? How would the timing of ordering new inventory reduce the likelihood of running out of inventory?
  • Granular decision-making
    • Can we staff additional support agents only for the vendors that will be overloaded? Can we pull forward restocking only for SKUs that are at risk? Can we focus resources on only those business lines most at risk for missing the plan?

As a logistics provider operating multiple business lines at international scale and supporting thousands of merchant partners, DoorDash depends especially heavily on forecasting. The health of our marketplace requires us to constantly manage supply and demand between Dashers (our term for delivery drivers), merchants, and consumers. Acquisition is tied to longer-term demand projections. Support agent staffing needs to be matched to the expected volume of support tickets to maintain quality outcomes for our merchants, Dashers, and consumers. The Merchant team needs to understand cuisine preferences to ensure the right selection is available for our consumers. Every aspect of our business is underpinned by a need for continuously updated, reliable forecasts to enable top-notch operational efficiency across thousands of diverse geographies.

Alleviating the challenges of managing thousands of forecasts

Supporting quality forecasts for even one of these applications could easily demand the attention of several data scientists; scaling to thousands of forecasts with that level of dedicated support would be untenable. Other challenges (especially with diffusely managed forecasts) include:

  • Incompatible and inconsistent formats housed in different mediums (Excel, database, business intelligence (BI) tools) making forecasts difficult to compare
    • Example: Volume forecasts are weekly totals housed in a database while the support team uses daily task ratios against volume housed in Excel.
  • Lack of a central location where upper management can compare forecasts against operational outcomes
    • Example: Daily financial reporting requires SQL for forecasts housed in databases, manual entry for Excel, and downloading from dashboards for those managed with BI tools, leading to heavy integration efforts.
  • Inefficient handoffs to dependent business partners 
    • Example: Demand planning locks its forecast Wednesday mornings but supply planning needs to make decisions for the week ahead on Monday.
  • Inability to easily incorporate relevant business knowledge 
    • Example: Hundreds of local market operators have flexibility over marketing promotions with a material aggregate effect, but the overhead of collecting hundreds of individual actions prevents incorporating them into a central forecast.

DoorDash faced many of these challenges. As a matrixed organization, data scientists and engineers are spread across verticals, such as consumer, merchant, logistics, Dasher, operational excellence, and marketing, all with separate needs. Compounding the problem is that the marketplace is active 24 hours a day, business strategy is heavily dependent on matching outcomes to operational targets, and the entire process has to be optimized at a sub-ZIP Code level.

Solving enterprise forecasting with a centralized platform

DoorDash created Forecast Factory, a centralized forecasting toolkit that can accept human-in-the-loop adjustments and business knowledge, to solve these operational pain points. Forecast Factory enables operational teams to onboard critical forecasts for managed execution, presentation, and analysis. 

This platform has the following benefits: 

  • Scalability: Teams now have an easy way to plug in their data, get a forecast, submit adjustments, do scenario planning, finalize a model, and lock and distribute the results.
  • Consistency: Ensure that forecasts have a consistent format and centralized location with defined accuracy metrics.
  • Timing: Scheduling, parallelization, and automatic resource allocation mean forecasts are ready in time for operational processes on different schedules to consume.
  • Accuracy: Access to a suite of best-in-class machine learning algorithms.
  • Access: Partners have a consistent interface and growing suite of visualizations to disseminate forecasts and targets.

Designing the Forecast Factory 

Our centralized platform is built around a modular forecasting toolkit that allows teams to customize pipelines to their specific needs with moderate technical knowledge.

Figure 1: The Forecast Factory toolkit combines best-in-class time series algorithms with dynamic processing of the series, alleviating the arduous manual work teams often do to prepare a series, such as removing outliers and accounting for external factors like holidays and promotions. The entire process can be wrapped in a configurable grid search for parameter selection along any time dimension and prediction cadence. Dynamic processing techniques can be selected by the grid search based on historical accuracy or explicitly locked.

The main toolkit components are:
  1. Historical Data: A SQL query pulls a series with a target column and a date column that are mapped to the system through a configuration file. The toolkit can produce reliable forecasts with as little as 28 days of data for a daily series. The interface was designed to make this as easy as possible for our end users and allow them to update the query with direct changes.
  2. Time Series Data Slicer: This component enables training the algorithm over different time units, horizons, frequencies, and optimization periods, allowing teams to match forecast characteristics to their objectives, even on the same series. It handles all these date transformations throughout the training and prediction process.
    • Unit: hourly, daily, weekly, monthly, etc.
    • Horizon: number of units ahead to predict (e.g. seven days, six weeks)
    • Frequency: refresh schedule (e.g. daily, weekly)
    • Optimization period: the time frame, in units, for which training optimizes accuracy
  3. Data Preprocessing: This step handles necessary adjustments to the input series, each of which can be selected by the algorithm based on accuracy. The toolkit has ready-made options to remove the effects of outliers, holidays, outages, promotions, and other events that deviate from the current course and speed of the series. These effects can instead be added directly as features in later steps, but this step provides an important interface for controlling them in time series models (e.g. exponential smoothing) that only operate on the series itself. Preprocessors (and prediction processors) can include models built on the residuals of the time series, providing a further bridge for using external features to estimate and control for difficult-to-specify effects like weather (a minimal sketch of this interface follows the component list).
  4. Time Series Algorithm: The toolkit is designed to be agnostic to the choice of modeling algorithm. Right now, most forecasts are built on time series models combined with processing to control for external effects. However, other models, such as gradient boosted machines and Prophet, can be used because feature sets can be passed to the model through the Time Series Data Slicer, making the right features available at the right time.
  5. Prediction Processing: This step mirrors preprocessing but operates on the future predictions themselves. Similar adjustments outside of current course and speed are made here for time series models; adjustments for known promotions, calculated holiday coefficients, and residual adjustments based on weather models are examples of steps currently implemented.

  6. Component Selector/Grid Search: This object allows the user to specify which components and parameters to backtest over a given time window. Different combinations of external parameters, such as preprocessors, lookback windows (limiting the input data range), and postprocessors, are specified here along with internal model-specific parameters such as additive/multiplicative seasonality for exponential smoothing algorithms. This lets the toolkit not only learn the best algorithm parameters but also make adjustments like shortening the input time series during periods of rapid change and dismissing holiday adjustments when the pattern has deviated from history. The ability to specify custom loss functions also means we can tailor selection to specific team objectives.
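To make these components concrete, here is a minimal sketch of what a team-facing pipeline configuration could look like. Everything below (the ForecastConfig class, the field names, and the grid values) is a hypothetical illustration of the concepts above, not the actual Forecast Factory API.

```python
# Hypothetical configuration sketch -- names and structure are illustrative,
# not the real Forecast Factory interface.
from dataclasses import dataclass, field

@dataclass
class ForecastConfig:
    # Historical Data: a SQL query pulling a date column and a target column.
    input_query: str = "SELECT delivery_date, num_deliveries FROM daily_deliveries"
    date_column: str = "delivery_date"
    target_column: str = "num_deliveries"

    # Time Series Data Slicer: unit, horizon, refresh frequency, and the
    # window that accuracy is optimized over.
    unit: str = "day"
    horizon: int = 28             # predict 28 days ahead
    frequency: str = "daily"      # refresh the forecast every day
    optimization_period: int = 7  # optimize accuracy over the next 7 units

    # Component Selector/Grid Search: candidate components and parameters to
    # backtest; the best combination is selected by historical accuracy.
    grid: dict = field(default_factory=lambda: {
        "preprocessors": [["remove_outliers"], ["remove_outliers", "remove_holidays"]],
        "lookback_days": [90, 180, 365],
        "model": ["exponential_smoothing", "prophet"],
        "seasonality": ["additive", "multiplicative"],
        "loss": ["mape"],
    })
```

In practice, a team onboarding a new forecast would mostly edit the query and the slicer fields, while the grid search explores the processing and model choices automatically.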
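The relationship between preprocessing and prediction processing can also be sketched in code: a processor estimates an external effect, removes it from the history before the model is fit, and re-applies it to future predictions. The HolidayEffectProcessor below is a hypothetical, simplified example of that interface, not DoorDash's implementation.

```python
# Illustrative pre/prediction processor pair -- hypothetical and simplified.
import pandas as pd

class HolidayEffectProcessor:
    """Removes an estimated holiday lift from the input series before model
    fitting and re-applies it to the model's future predictions."""

    def __init__(self, holidays: set):
        self.holidays = holidays
        self.lift = 1.0

    def fit(self, series: pd.Series) -> None:
        # Estimate the average multiplicative lift on holiday dates.
        is_holiday = series.index.isin(self.holidays)
        if is_holiday.any() and (~is_holiday).any():
            self.lift = series[is_holiday].mean() / series[~is_holiday].mean()

    def preprocess(self, series: pd.Series) -> pd.Series:
        # Divide out the lift so the model only sees "current course and speed".
        is_holiday = series.index.isin(self.holidays)
        return series.where(~is_holiday, series / self.lift)

    def process_predictions(self, predictions: pd.Series) -> pd.Series:
        # Add the lift back on future holiday dates.
        is_holiday = predictions.index.isin(self.holidays)
        return predictions.where(~is_holiday, predictions * self.lift)
```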

Forecast Factory infrastructure

To operationalize the toolkit, Forecast Factory is embedded within an architecture that allows user input, interactive exploration of candidate and locked forecasts, accuracy summaries, and the submission of on-the-fly adjustments:

Figure 2: The platform infrastructure manages Forecast Factory job runs, accepts user input, and stores and visualizes results.

The Forecast Factory’s infrastructure relies on Dagster for job orchestration and Databricks for execution. Standard visualizations allow users to explore various candidate forecasts, including the one marked as best by the toolkit, and decide which to lock as final for their use case. Accuracy visualizations then allow users to see how various forecasts performed over time and during which periods, so they can make more informed selections in the future.

The main pipeline components are:

  1. Base ETL: This is the query supplied by the end user to provide the input series for forecasting.
  2. Pre-Forecast Code: Any pre-forecast code is executed here, such as calculating features, training a residual predictor for later use in preprocessing or prediction processing, or checking for any reported outages in previous days.
  3. Base Forecasts: The champion model selected by the algorithm is run here, along with any other candidate forecasts (e.g. a version without holiday effects to see how the algorithm is accounting for their impact, or a version with a longer lookback window to compare long-term and short-term behavior).
  4. Select Adjustments: Users can select specified adjustments to the input series or predictions. A standard adjustment database allows users to create their own preprocess or prediction process adjustments directly via a Python API or by importing from a Google sheet (a hedged sketch of this appears after this list). This ensures users with business knowledge (e.g. a promotion next Wednesday) can supply that input without going through the Data Science team.
  5. Adjusted Base Forecasts: Some adjustments (removing a past promotion from the input series) may require the forecast algorithm to be rerun. This step accounts for that and produces the adjusted forecast.
  6. Collect Candidates: Candidates are stored in a database with a schema that attaches metadata (e.g. internal and external parameters) and controls which forecast is locked as final and exposed to the end user process. An exploration visualization allows users to see candidates side-by-side and compare against past actuals and a suite of metrics.
  7. Select Final Forecast: Once a forecast is selected and marked as final in the data model, metadata about the forecast is stored, any required hierarchies are built (e.g. normalizing market-level forecasts to the top-level forecast), and accuracy information is populated. Historical accuracy for the locked forecast (and its candidates) can be viewed via an Accuracy Dashboard.
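As a rough illustration of how these stages hang together, here is a minimal sketch of the pipeline wired as a Dagster job. The op names and bodies are hypothetical placeholders; only the @op/@job decorators are the real Dagster API, and the actual Forecast Factory jobs are more involved.

```python
# Minimal Dagster wiring sketch -- op names and bodies are hypothetical.
from dagster import job, op

@op
def base_etl():
    """Run the user-supplied SQL query and return the input series."""
    ...

@op
def pre_forecast_code(series):
    """Calculate features, train residual predictors, and check for outages."""
    ...

@op
def base_forecasts(prepared_series):
    """Produce the champion forecast plus any other candidates."""
    ...

@op
def apply_adjustments(candidates):
    """Apply user-submitted adjustments, rerunning the algorithm if required."""
    ...

@op
def collect_candidates(adjusted_candidates):
    """Store candidates with metadata and expose the locked forecast."""
    ...

@job
def forecast_factory_job():
    collect_candidates(apply_adjustments(base_forecasts(pre_forecast_code(base_etl()))))
```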
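The post describes a Python API for submitting adjustments but does not show it, so the snippet below is a hedged sketch of what registering a known promotion as a prediction-process adjustment might look like. The submit_adjustment function and its parameters are hypothetical.

```python
# Hypothetical adjustment submission -- the function name and parameters are
# illustrative, not the actual Forecast Factory Python API.
from datetime import date

def submit_adjustment(forecast_name: str, adjustment_date: date,
                      multiplier: float, stage: str, reason: str) -> None:
    """Record a multiplicative adjustment in the standard adjustment database.

    stage is either "preprocess" (applied to the input series) or
    "prediction" (applied to future predictions).
    """
    ...

# A local operator knows a promotion will lift demand roughly 10% next
# Wednesday, so the forecast for that day should be scaled up.
submit_adjustment(
    forecast_name="daily_deliveries_sf",
    adjustment_date=date(2021, 9, 8),
    multiplier=1.10,
    stage="prediction",
    reason="citywide promotion",
)
```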

Conclusion 

The toolkit and pipeline infrastructure above provide a modular structure that generalizes to a wide range of business partner use cases while shortening onboarding cycles. Partners can directly alter the query as the underlying data or process changes or as new geographies or levels are added. Business users can quickly and intuitively submit adjustments without having to involve a data scientist to apply the changes manually. And if data scientists from specific focus areas want to include their own algorithms, custom loss functions, or other components, they can extend the toolkit without rewriting the whole system.

Acknowledgements

Thank you to everyone who has contributed to the Forecast Factory project! DSML: Lauren Savage, Qiyun Pan, and Chad Akkoyun. Platform: Brian Seo, Swaroop Chitlur, and Hebo Yang. And thank you to all of our amazing partners.
