Integrating a Search Ranking Model into a Prediction Service

As companies use data to improve their user experiences and operations, it becomes increasingly important that the infrastructure supporting the creation and maintenance of machine learning models be scalable and enable high developer productivity.

Building Reliable Workflows: Cadence as a Fallback for Event-Driven Processing

Amid the hypergrowth of DoorDash’s business, we needed to reengineer our platform, extracting business lines from a Python-based monolith into a microservices-based architecture to meet our scalability and reliability needs.

Enabling Efficient Machine Learning Model Serving by Minimizing Network Overheads with gRPC

The challenge of building machine learning (ML)-powered applications is running inference on large volumes of data and returning a prediction over the network within milliseconds, which cannot be done without minimizing network overhead.