
DoorDash is a dynamic logistics marketplace that serves three groups of customers:

  1. Merchant partners who prepare food or other deliverables,
  2. Dashers who carry the deliverables to their destinations, 
  3. Consumers who savor a freshly prepared meal from a local restaurant or a bag of groceries from their local grocery store. 

For a real-time platform like DoorDash, just-in-time insights from data generated on the fly by marketplace participants are inherently useful for making better decisions for all of our customers. Our company has a hardworking group of engineers and data scientists tackling interesting prediction problems like estimating food preparation time or forecasting demand from customers and supply from our merchants and dashers.

We have already scored important wins on these problems using insights aggregated from historical data from a month, a week, or even a day ago. However, up-to-date knowledge from real-time events about the marketplace turns out to be quite useful for reacting to the ever-evolving nature of the communities that we serve.

For example, a famous burrito place on Market Street in San Francisco may usually take 10 minutes to prepare a burrito. However, when it sees a huge influx of orders on National Burrito Day, it may start taking longer to prepare each one. In order to accurately predict the preparation time in the near future, it is useful to have an idea of the “number of orders received by the store in the last X minutes”.

Like many resilient engineering systems, DoorDash has a collection of services that isolate their responsibilities and interact with each other to orchestrate the logistics marketplace. These systems range from order submission and transmission, to delivery assignment, to planning delivery routes, to batching orders.

We wanted to collect truly real-time stats from several of these services, so we needed a cross-cutting engineering system that could collect business-level events from core services, match events from potentially multiple sources, aggregate them in a stateful manner, and publish the time-windowed aggregations for ML models to use.

We made design choices that simplified the creation of a features pipeline. Some essential pillars of the real-time feature pipeline design are:

  1. Standardize business events as they happen on a timeline. It not only simplifies the definition of those events, but also keeps them self-contained, without any complicated relationships to other parts of the business.
  2. Use a distributed log to publish and subscribe to the business events. This choice helps deploy and scale producer services independent of each other. Also, the aggregator service can independently follow maintenance schedules without fear of losing the events that are being published.
  3. Use a distributed and stateful stream processing framework that can consume events published onto the distributed log and continuously run time-windowed aggregations on those events to produce aggregated features.
  4. Use a fast, distributed in-memory cache for storing the statefully aggregated features. The resilience of such a cache ensures that ML predictors can access the features independent of who published them and when.

In order to build the right solution, we needed to make technology choices that we feel confident can evolve with the needs of our engineering organization in the future.

We chose protobuf to define the schema of events that can be versioned, have their changes tracked, are both forward and backward compatible with changes, and have their corresponding Java / Python objects published in a central library for access by producer and consumer services.

An example of a light-weight event looks like the following:
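
The sample definition below is a sketch, not the exact production schema: only StoreOrderConfirmedData and its order_id field come from the discussion that follows; the envelope message, field numbers, and remaining field names are our assumptions.

```protobuf
syntax = "proto3";

// A business event on a store's timeline. The envelope carries the
// entity id and event time; the oneof payload carries only the
// details specific to each event type.
message StoreEvent {
  string store_id = 1;   // store s that generated the event
  int64 timestamp = 2;   // event time t, epoch milliseconds
  oneof data {
    StoreOrderConfirmedData store_order_confirmed = 3;
  }
}

// "Order o was confirmed by store s at timestamp t" -- the payload
// holds only the order id.
message StoreOrderConfirmedData {
  string order_id = 1;
}
```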

As is evident, one of the events that a store entity in DoorDash’s application domain can generate is “an order o was confirmed by a store s at timestamp t.” When we are interested in more business events related to the store’s timeline, we can add another object with only the detail specific to it. In this case the StoreOrderConfirmedData event only has an order_id associated with it.

We chose Apache Kafka as the common distributed log for transporting business events from their producers to the stateful streaming aggregator. Kafka with its `topics` semantics makes it easier to segregate business events based on the entities for which the events are generated. Also, partitioned topics are essential to keeping time-windowed aggregations for the same key local to a single compute node, thus reducing unnecessary shuffling of data in real-time.

Most importantly, we used Apache Flink for its stateful stream processing engine and simple time-windowed aggregation semantics, abstracted as the DataStream API. To operationalize Flink, we had a choice between the normal deployment environment of Docker and Kubernetes on AWS, like any other service at DoorDash, a Flink deployment over a map-reduce cluster sitting behind a resource manager, or a fully managed offering like Flink on AWS. To keep our deployment strategy consistent with the rest of the services at DoorDash, we chose to launch JobManager and TaskManager instances on Kubernetes to create a “job cluster”, without the need for a full-fledged resource manager like Apache YARN. With that lightweight cluster dedicated to aggregating real-time features, we roll out updates to the Flink application with a normal service deployment, rather than submitting a new job to a “session cluster”. The Data Infra team at DoorDash is building a far-reaching real-time infrastructure, which will allow real-time features to become a first-class citizen of that ecosystem. More on that to come.
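
To make the windowing semantics concrete, here is a minimal pure-Python sketch (not our production Kotlin/Flink code) of the keyed, tumbling-window count that Flink computes for a feature like “orders confirmed by a store in the last 20 minutes”; the function and variable names are illustrative:

```python
from collections import defaultdict

WINDOW_MS = 20 * 60 * 1000  # 20-minute tumbling windows

def windowed_order_counts(events):
    """Count events per (store, window), the way a keyed tumbling
    time window would. Each event is a (store_id, timestamp_ms) pair."""
    counts = defaultdict(int)
    for store_id, ts in events:
        window_start = ts - ts % WINDOW_MS  # align to window boundary
        counts[(store_id, window_start)] += 1
    return dict(counts)

events = [
    ("st_123", 1_000_000),  # window starting at 0
    ("st_123", 1_100_000),  # same window
    ("st_123", 1_300_000),  # next window (>= 1_200_000)
    ("st_456", 1_000_000),  # different key, window starting at 0
]
print(windowed_order_counts(events))
```

Partitioned Kafka topics keep all events for one key on one node, so each node can maintain these per-key counts locally.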

And finally, we used Redis as the distributed, in-memory store for hosting ML features. We follow a descriptive naming convention to unambiguously identify what a feature is and which entity it is about. Keeping universally consistent feature names allows several ML models to consume from a pool of features. For example, the count of orders in the last 20 minutes for a restaurant with store_id 123 is saf_st_p20mi_order_count_sum.st_123. It can be used by an ML model estimating food preparation time, or by another model forecasting store demand in the future. DoorDash’s ML applications are trained and served through a common predictions library that uses features from this Feature Store. For those interested, we soon plan to publish further blog posts on our prediction service, feature engineering, and other components of the ML infrastructure.
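
As an illustration, a tiny helper can assemble such a key from a feature name and the entity it describes; the function name and the split into feature-name and entity parts are our assumptions, with only the example key itself taken from above:

```python
def feature_key(feature_name: str, entity_prefix: str, entity_id: int) -> str:
    """Build a Feature Store key from a universally consistent feature
    name and the entity (e.g. a store) the feature describes."""
    return f"{feature_name}.{entity_prefix}_{entity_id}"

# Count of orders in the last 20 minutes for store 123:
print(feature_key("saf_st_p20mi_order_count_sum", "st", 123))
# -> saf_st_p20mi_order_count_sum.st_123
```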

We have settled on Kotlin as our primary language of choice as a company, for building expressive, powerful systems that scale. Writing aggregation logic against Java bindings of the DataStream API from Flink in Kotlin was seamless.


We started noticing the impact from real-time features on the performance of our products in the following areas:

1. Delivery ETA for the consumer:

When a consumer decides to place an order on DoorDash, we display an ETA on the checkout page. This is our best estimate of how long it will take the food to arrive and represents the promise we make to the consumer. Since the marketplaces we operate in are extremely dynamic, it can be difficult to get accurate ETAs — especially during lunch and dinner peaks when a large delivery order or a large party eating in at the restaurant can significantly back up the kitchen and delay subsequent orders.

For situations like this, the addition of real-time information has contributed significantly to our models’ ability to react swiftly and provide more accurate expectations to our customers. To create these ETAs, we use a gradient-boosting algorithm with dozens of features, and the real-time signals for dasher wait time at the restaurant and the number of orders placed are among the top 10 most important. As seen in the graph below, the addition of real-time signal allows our ETA predictions to align much more closely with actual delivery times.

2. Estimating order preparation time:

In order to match the right dasher with the right delivery, the assignments platform needs an estimate of when an order will be ready for pickup at a restaurant. If we underestimate and have the dasher show up at the restaurant early, we risk having the dasher wait longer; if we overestimate and have the dasher arrive at the restaurant late, we delay the delivery, with consequences like the meal getting cold. Thus, the estimate influences both the quality and the efficiency of fulfillment. We iterated on the underlying model for estimating order preparation time, using previously available historical features, and ended up with a model that performed better. However, the estimation error is higher on holidays, as seen below:

Part of the problem arises because holidays do not occur frequently in the training data, so the model and its underlying feature set are not responsive to the dynamics of holidays. We engineered real-time features that capture changes in marketplace conditions more dynamically and were able to mitigate the issue of high prediction inaccuracy.

We have only begun to scratch the surface with what is possible from using real-time insights from the marketplace to better inform our decisions, and better serve our customers. If you are passionate about building ML applications that impact the lives of millions of merchants, dashers, and customers in a positive way, do consider joining us.



Acknowledgements:

Param Reddy, Carlos Herrera, Patrick Rogers, Mike Demmitt, Li Peng, Raghav Ramesh, Alok Gupta, Jared Bauman, Sudhir Tonse, Nikihil Patil, Matan Amir, and Allen Wang

At DoorDash, our logistics team focuses on efficiently fulfilling high quality deliveries. Our dasher dispatch system is a part of our logistics platform that has substantial impact on both our efficiency and quality. Through optimal matching of dashers to deliveries, we are able to ensure dashers get more done in less time, consumers receive their orders quickly, and merchants have a reliable partner to help them grow their businesses. In this blog post, we will explore:

  • The background of the dispatch system
  • Our prior and current optimization approaches and how we wanted to re-frame them
  • Modifying a tier-zero service with zero downtime
  • Future work to build on top of our optimizer

Background

DoorDash powers an on-demand marketplace involving real-time order demand and dasher supply. Consumers ask for goods to be delivered from a merchant to their location. To fulfill this demand, we present dashers with delivery routes, where they move between picking up orders at merchants and delivering them to consumers. 


Our dispatch system seeks high dasher efficiency and fulfillment quality by considering driving distance, time waiting for an order to be ready, delivery times as seen by the consumer, and more. Given incomplete information about the world, the system generates many predictions, such as when we expect an order to be ready for pickup, to model the real-world scenarios. With this data, the dispatch system generates future states for every possible matching and decides the best action to take, given our objectives of efficiency and quality.

Optimization

With the current statuses of available dashers in the network (such as waiting to receive an order or busy delivering one) and information about all outstanding consumer orders, we need to generate possible routes for each order and choose the best route to assign in real time. Our prior dispatch system assigned one delivery to a route at a time, so we framed it as a bipartite matching problem, which can be solved using the Hungarian algorithm. There are two limitations to this approach: 1) though the Hungarian algorithm is polynomial, the runtime on large instances is excessive for our real-time dynamic system; 2) it doesn’t support more complicated routes with two or more deliveries.

Formulating the problem as a mixed-integer program (MIP) and solving it with a commercial solver can address both issues. First, we find that commercial solvers like Gurobi are up to 10 times faster at solving the matching problem than the Hungarian algorithm. This performance enhancement enables us to solve larger problems, based on which we can refine our models to drive business metrics. Furthermore, the solver provides flexibility in formulating the problem as a vehicle routing problem, which allows multiple deliveries in a route. 


As for the mathematical formulation of the problem, binary variables represent dasher-to-order matching decisions. The objective function is formulated to optimize both delivery speed and dasher efficiency, which are represented by the score coefficients in the model. Two sets of constraints define dasher availability and routing preferences. The optimizer runs on several instances, distributed based on regional boundaries, multiple times a minute.
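
A toy sketch can make the matching decisions and constraints concrete. The scores below are made up, and a brute-force search stands in for the commercial MIP solver; only the shape of the problem (binary dasher-to-order decisions, each order matched exactly once, each dasher used at most once) comes from the description above:

```python
from itertools import permutations

# score[d][o]: cost of assigning dasher d to order o (made-up numbers
# standing in for the delivery-speed / efficiency score coefficients)
score = [
    [7, 2, 9],
    [4, 8, 3],
    [6, 5, 1],
]

def best_matching(score):
    """Brute-force the assignment problem: each order matched exactly
    once, each dasher used at most once, total score minimized."""
    n = len(score)
    best_cost, best_assign = float("inf"), None
    for perm in permutations(range(n)):  # perm[d] = order for dasher d
        cost = sum(score[d][perm[d]] for d in range(n))
        if cost < best_cost:
            best_cost, best_assign = cost, perm
    return best_cost, best_assign

print(best_matching(score))  # -> (7, (1, 0, 2))
```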

Notation

Formulation
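
As a sketch of the kind of model described above (our reconstruction, not the exact production formulation; the symbols x, c, and the sets D, R, I are assumed notation):

```latex
% x_{dr} \in \{0,1\}: dasher d is assigned route r
% c_{dr}: score of assigning route r to dasher d
%         (encodes delivery speed and dasher efficiency)
\begin{align*}
\min_{x} \quad & \sum_{d \in D} \sum_{r \in R} c_{dr}\, x_{dr} \\
\text{s.t.} \quad & \sum_{r \in R} x_{dr} \le 1
  \quad \forall d \in D \quad \text{(each dasher takes at most one route)} \\
& \sum_{d \in D} \sum_{r \in R:\, i \in r} x_{dr} = 1
  \quad \forall i \in I \quad \text{(every delivery is covered exactly once)} \\
& x_{dr} \in \{0, 1\} \quad \forall d \in D,\ r \in R
\end{align*}
```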

Optimization solvers

There are multiple optimization solvers we could use to solve our problem. We experimented with the open source solver CBC and commercial solvers including Xpress, CPLEX, and Gurobi. We ultimately decided to license Gurobi’s cloud service offering for our production system. The decision was based on the speed and scalability of the solvers (benchmarks on our particular problems indicate that Gurobi is 34 times faster on average than CBC), the ease of extracting feasible solutions when optimality is hard to reach, the ability to tune for different formulations, relatively easy API integration with Python and Scala, the flexibility of the vendors’ prototype and deployment licensing terms, and professional support.

Implementation

Since dispatching a dasher for an order is an essential component of our delivery product, switching over to the new optimization solution had to be done with zero downtime and no impact on the fulfillment quality. Typically, we run experiments for new functionality and measure the impact on quality metrics, but since optimization is such a fundamental part of the algorithm, an experiment is both higher risk and not granular enough to understand the change at the lowest level of detail. Fortunately, an initiative to refactor our codebase had finished up around the same time. One of the changes was to move the optimization code into a separate component behind an extensible interface, so we tried a different approach.

Instead of running an experiment, we implemented the MIP logic as another optimizer behind the existing interface, along with code to convert the standard optimizer parameters into MIP solver inputs. We then built a third optimizer that combined the other two and compared the outputs of each. The combined optimizer would return the original optimizer’s results, throwing out those of the MIP optimizer, and then generate logs listing the differences between the two solutions and which was better. With this data we found that the two optimizers’ output matched more than 99% of the time, and the only mismatches were due to situations where there were multiple equally-good solutions. This gave us the confidence to adopt the new optimizer without further testing and resulted in a seamless transition.
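
The combined-optimizer pattern above can be sketched as follows; the names and the toy optimizers are hypothetical, and the real comparison also logged which solution was better:

```python
def make_combined_optimizer(current, candidate, log):
    """Return an optimizer that serves the current optimizer's solution
    while shadow-running the candidate and logging any differences."""
    def combined(problem):
        primary = current(problem)
        shadow = candidate(problem)
        if shadow != primary:
            log.append({"problem": problem, "current": primary, "mip": shadow})
        return primary  # the candidate's output is thrown away
    return combined

# Toy optimizers: both pick the index of the smallest cost.
current = lambda costs: min(range(len(costs)), key=costs.__getitem__)
candidate = lambda costs: costs.index(min(costs))  # same answer, new code path

mismatches = []
optimize = make_combined_optimizer(current, candidate, mismatches)
print(optimize([5, 1, 3]), len(mismatches))  # -> 1 0
```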


Results and future work

After deploying the MIP formulation, we can now solve more flexible and complex models faster. Combined with our experiment framework, we can iterate on our models at a much faster pace. We are exploring various improvements to the model and our dispatch system.

The first is that we can simply solve a bigger problem. Since the MIP optimizer is up to 10 times faster, we can solve the matching problem for a larger region in the same amount of time. This has the advantage of being simpler, as we have fewer regions to process, and it also produces better solutions. With a larger problem size, we increase the number of tradeoffs the optimizer considers and reduce edge effects caused by the boundaries between regions, leading to higher-quality decisions.

Another feature the new formulation has enabled is offering multiple deliveries at a time in more complicated routes, as mentioned before. This was a limitation of the Hungarian algorithm, but the flexibility of MIP allows us to consider those routes, and the formulation’s constraints ensure we still correctly match every delivery. Matching these more complex routes helps us give additional and more desirable deliveries to dashers, and also paves the way for building out new types of products.

Finally, we’re working on even more complex formulations that combine historical data and future predictions to consider the state of the world over time. There are often cases where it is better to wait to offer a delivery instead of doing so right away. With the MIP formulation, we can build these kinds of tradeoff considerations directly into the model and amplify the efficiency and quality of the system.

Success factors and lessons learned

It is vital that the model we build closely represents the business problem at hand, which requires an accurate understanding of the business logic and flow. The coefficients in the objective function also need to be accurately predicted in order for the model to truly provide optimized results for our business. We achieved this by coupling the optimizer closely with our machine learning models, which provide accurate predictions.

Another factor that helped us make a project like this successful is the close interaction between our data scientists and software engineers. Having a deeper understanding of the codebase that the model is built on top of was a huge benefit for the successful deployment of the method we chose.

Lastly, we needed alignment from our company’s management team, since a project that involves a new data science method requires a longer-term strategic vision to be successful. These projects generally need a longer-than-regular feature development cycle to come to fruition. Luckily, at DoorDash, our management team bought into investing in short-term as well as long-term bets.

Conclusion

We have seen the role optimization plays in ensuring every delivery at DoorDash is high quality, and fulfilled efficiently. We discussed the motivations for re-framing optimization from the matching problem to a MIP formulation, and how we surgically performed the swap. With our new optimization method in place, we highlighted several new improvement areas we unlocked as well as where we’re looking towards for the future.

If you are interested in real-time algorithms at the intersection of engineering, machine learning, and operations research, reach out! We are hiring for roles on the dispatch team. Visit our careers page to learn more: https://www.doordash.com/careers


And, a very special thanks to Richard Hwang for your contributions and guidance on this project.

DoorDash’s principles and processes for democratizing Machine Learning

Six months ago I joined DoorDash as their first Head of Data Science and Machine Learning. One of my first tasks was to help decide how we should organize machine learning (ML) teams in order for us to reap the maximum benefit from this wonderful technology. You can learn more about some of the current use cases of ML at DoorDash at our blog here.

Having spent some time at previous technology companies and spoken to many more, I was acutely aware of many of the challenges that come up.

Challenges

  1. ML is poorly defined: Is a linear regression in Excel ML? What about a toy random forest in a local Jupyter notebook? Where is the line between analytics and ML?
  2. ML needs Engineering and Science: ML at technology companies requires performant optimal decision-making.
  3. ML advances rapidly: Even over just the last five years, we have seen modeling approaches, platforms, and languages change almost every 18 months.
  4. ML is trendy: Many people view ML as magic, so everyone wants to work on it.

In #2 ‘performant’ implies we need low latency, reliability, and scale – typically in a Software Engineer’s wheelhouse, while ‘optimal’ implies we need mathematical and statistical excellence – typically in a Data Scientist’s toolkit. This is often the biggest elephant in the room: who should work on ML? Engineers or Data Scientists? Both? Neither? This debate often leads to friction in teams and employee unhappiness.

At DoorDash, our core values include ‘One Team One Fight’ and ‘Make Room At The Table’. We want people of all different backgrounds / titles with ML expertise to come in and feel able to do their best work. So we chose to do things differently, more inclusively. We drew up a charter for ML with the following vision and principles:

Vision

Build data-driven software for advanced measurement and optimization

Principles

  1. Democracy: everyone can build and run an ML model given sufficient tooling and guidance.
  2. Talent: we want to attract and grow the best business-impact focused ML practitioners.
  3. Speed: if a cost-effective third party ML solution already exists then we should use it.
  4. Sufficiency: if a function (typically Engineering) can implement a good-enough ML solution unaided then they should do so.
  5. Incrementality: if a function (typically Data Science) can add enough incremental value to an ML solution then they should do so.
  6. Accountability: each ML solution has a single technical lead acting as the technical decision-maker.

The idea behind the vision is that we only want to build ML where it is actually needed – not where it might be interesting. We look for business opportunities where simple analytics or rules only get you 10-40% of the impact. This ensures the return on an ML practitioner’s time is super high for the business.

The principles ensure that we can hire the best people and that we are as efficient with our talent as possible. Ownership and accountability are essential for motivating and empowering employees to do their best work. Note that these principles are pretty general and could probably be applied to most tools.

An important corollary of these principles is that we do not pigeonhole any function, i.e., we do not say what a Data Scientist can or cannot work on, or what an Engineer can or cannot work on. We believe in blurry lines and helping ML practitioners grow in whichever areas they want to, so it is fine for a Data Scientist to work on production code or an ML Engineer to build features.

What enables this flexibility while maintaining a high standard is principle #6, which states that we have a single person accountable for a project. That does not mean that this person must do the work, only that they must ensure it is done correctly – and they may choose to have it done by a Data Scientist or an Engineer or someone else.

There is no single unique structure or process that adheres to the vision and principles, rather, any structure chosen needs to be clearly articulated to ensure it is set up for success. At DoorDash, we landed on the following structures and processes to meet the principles:

Organization

  1. Reporting lines: ML Engineers report to Engineering managers and ML Data Scientists report to DS managers. ML Infrastructure reports into the central Data Platform team.
  2. Hiring: Job descriptions and hiring processes for ML Engineers and ML Data Scientists are reviewed and approved by ML Council.
  3. Technology: Strong investment in a centralized ML platform by Data Platform (workflow, provisioning, orchestration, feature stores, common data preparation, validation, quality checks, monitoring, etc.). Potential ML infrastructure technology (build/buy) decisions reviewed and approved by ML Council.
  4. Execution:
    1. Any person(s) at the company can identify a use case for ML and draft a proposal (business problem, estimated impact versus build / maintenance cost, solution, team composition, single technical lead).
    2. The proposal is reviewed, amended, and approved by the pod’s / vertical’s cross-functional leads (PM, EM, DS Manager, Analytics Manager, etc.). The leads should approve the business problem, prioritization, and impact / cost.
    3. The proposal is reviewed, amended, and approved by the ML Council.
    4. All steps of the review will be transparent: ML Council and ML practitioners will meet weekly at ‘ML Review’ to review items and debate next steps. Decisions will be made at this ML Review and notes will be taken and emailed to all interested folks.

A key feature at DoorDash is that we do not use reporting lines as a mechanism to enforce alignment and collaboration. Reporting lines do not scale well, especially as a company grows and attracts different flavors of Engineers and Data Scientists. Instead, we force collaboration and cross-functional decision-making through an ML Council:

ML Council

  1. Composition: the ML Council is composed of a group of experienced ML practitioners across the company, typically senior Engineering ML, Data Science ML, and Infrastructure ML folks. It is led by the ML Council Chair, who serves as the decision-maker for escalations; the chair rotates on a regular cadence, e.g., every 12 months.
  2. Role: the role of the ML Council is to:
    1. provide balance between project-specific variability vs company wide uniformity, so that we are efficient as a company
    2. review and give feedback on all new ML applications
    3. facilitate the cross-pollination of ideas and solutions
    4. create better visibility into common pieces (to feed into infra)
    5. encourage more proactive communication of data sources and solutions.
  3. Responsibility: Typically the ML Council should ensure that if production performance is the biggest blocker to success then the tech lead is an ML Engineer. Otherwise if statistical performance is the biggest blocker to success then the tech lead is a Data Scientist. The ML Council should check solutions have enough support and where possible are part of the long term ML platform investment.
  4. Autonomy: If the ML Council disagrees on the solution / team / lead, then the ML Council Chair tie-breaks and makes a decision.

The ML Council is the glue which holds all the different functions (Engineering, Data Science, Infra, etc) together and keeps all the different teams using ML (Search, Dispatch, Marketing, Forecasting, Fraud, etc) collaborating and learning from each other.

At DoorDash we have had this organization in place for about five months and things seem to be going well. We will no doubt hit stumbling blocks and have to adjust our processes or clarify certain pieces – but this is part of the excitement of working in a fast-moving dynamic technology startup like DoorDash.

Going forward we will be writing many more blog posts about our problems, failures, and successes with ML, and how we use advanced experimentation methodology to test and iterate. We are committed to sharing our insights and learnings so that the wider ML community can benefit – please check back at our blog regularly to read the latest posts.

If you are passionate about solving challenging problems in this space, we are hiring for our ML teams and you can apply here. If you are interested in working on other areas at DoorDash check out our careers page.

The consumer shopping experience is a key focus area at DoorDash. We want to give consumers an enjoyable shopping experience by providing the right recommendation to the right consumer at the right time and location. On our app, there are cuisine filters at the top of the explore page, and we have built a system that surfaces the most relevant cuisines based on consumers’ personal preferences and local popularity.

Unlike typical recommendation tasks in machine learning, at DoorDash, a unique challenge to our recommendation system is to account for where and when the recommendation is provided to a consumer. Different cuisines are available at different locations and different times of the day. When a consumer comes to a new city, we would like to present the popular local cuisines for the consumer to explore while also considering his/her personal preferences. To accommodate these unique requirements of our recommendation system, we developed a multi-level multi-armed bandit model to provide consumers the most relevant cuisine types. This has led to a significant conversion lift.

What is the multi-armed bandit algorithm?

The term “multi-armed bandit” comes from a hypothetical experiment where a person must choose between multiple actions (i.e. slot machines, aka “one-armed bandits”), each with an unknown payout. The goal is to determine the best or most profitable outcome through a series of choices. At the beginning of the experiment, when odds and payouts are unknown, the gambler must determine which arm to pull. This is the “multi-armed bandit problem.”

Why multi-armed bandit?

Multi-armed bandit provides a formal framework for balancing exploration and exploitation. In the hypothetical example, a gambler needs to balance exploring which arm has the best payout against exploiting the best-payout arm. For the cuisine filter, during exploration we surface more new types of cuisine for consumers to explore their interests; during exploitation, we recommend consumers their most preferred types of cuisine. The multi-armed bandit ensures that the most preferred types of cuisine are presented to our consumers, while they still have the opportunity to see different types of cuisine that they may potentially like. This helps us understand our consumers a little better every day.

What is the multi-level multi-armed bandit model?

Here, multi-level refers to multiple levels of geolocations. From the lowest level to the highest level, these geolocations are districts, submarkets, markets, regions, countries, and the world.  A consumer’s geolocation carries important information to help us understand what his/her cuisine preference is. At each level of geolocation, we model the ‘average’ cuisine preference. The ‘average’ preference represents the cuisine preference of consumers-like-me. If a consumer lives in a place where most consumers like Korean food, then this consumer is more likely to be interested in Korean food than an ‘average’ consumer is.  Similarly, if a newly launched district is in a submarket where certain types of cuisine are popular, then it is likely that the same types of cuisine will be popular in this new market.


The ‘average’ preference from the higher level of geolocation serves as the prior knowledge modeled by prior probabilities of each cuisine being liked by a consumer or an imaginary ‘average’ consumer at a geolocation level. For example, the prior knowledge of a consumer’s cuisine preference is the preference of the ‘average’ consumer at the district level, and the prior knowledge of the ‘average’ consumer at the district level is the ‘average’ preference at the submarket level. The posterior probability of a cuisine being preferred by a consumer or an ‘average’ consumer is computed using Bayes’ theorem, which unifies the prior probability and evidence (data) to provide a posterior probability.
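
With Beta priors over “probability this cuisine is liked”, the parent level’s posterior can serve as the child level’s prior. The sketch below is a simplification under that assumption; the function name, the pseudo-counts, and passing the parent posterior down unweighted are all our illustrative choices:

```python
def posterior(prior_alpha, prior_beta, likes, dislikes):
    """Beta-Binomial update via Bayes' theorem: combine prior
    pseudo-counts with observed evidence; returns the posterior Beta."""
    return prior_alpha + likes, prior_beta + dislikes

# Submarket-level posterior for a cuisine acts as the district prior...
sub_a, sub_b = posterior(1, 1, likes=80, dislikes=20)
# ...and the district posterior acts as the prior for a new consumer.
dist_a, dist_b = posterior(sub_a, sub_b, likes=5, dislikes=5)
mean = dist_a / (dist_a + dist_b)  # posterior mean preference
print(round(mean, 3))  # -> 0.768
```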

We use the Thompson sampling approach for the multi-armed bandit. In essence, different types of cuisine are ordered by their posterior probabilities of being liked by a consumer. These posterior probabilities are influenced by the cuisine popularities at all levels of geolocation, where popularity at the district level (the lowest level) influences the most and popularity at the global level (the highest level) influences the least.
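
The Thompson sampling step can be sketched as follows: draw one sample from each cuisine’s Beta posterior and rank the cuisines by the samples. The cuisine names and pseudo-counts are made up for illustration:

```python
import random

def rank_cuisines(posteriors, rng):
    """Thompson sampling: sample each cuisine's Beta posterior once and
    order cuisines by the sampled preference, highest first."""
    samples = {c: rng.betavariate(a, b) for c, (a, b) in posteriors.items()}
    return sorted(samples, key=samples.get, reverse=True)

posteriors = {             # (alpha, beta) pseudo-counts of likes/dislikes
    "korean": (900, 100),  # locally very popular
    "sushi": (400, 600),
    "thai": (50, 950),
}
rng = random.Random(42)    # seeded for reproducibility
print(rank_cuisines(posteriors, rng))
```

Because the draws are random, a less-explored cuisine occasionally ranks high, which is exactly how the bandit keeps exploring while mostly exploiting.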

Why multi-level?

We devised this multilevel model to address two challenges: 1) cold start: what to recommend to consumers who don’t have any purchase history at DoorDash, or in a newly launched market; and 2) how to present local favorites to consumers while also recognizing their personal preferences.

Cold start is a common challenge for recommendation systems. At DoorDash this challenge is twofold: new consumers and new districts. When we onboard a new consumer, we don’t yet have historical data to learn the consumer’s cuisine preference, so the cuisine filter will represent the prior knowledge of his/her cuisine preference. As we collect more data from this consumer, the cuisine filter will increasingly represent his/her personal preference rather than the prior knowledge. Similarly, for a newly launched district, the cuisine filter for any consumer in that district represents the prior knowledge derived from the cuisine preference of the submarket (one level above the district).

When consumers come to a new district, certain types of cuisine may be very popular in this district but not in the district where the consumer usually orders from. For example, when a sushi-lover comes to a town known for Korean food, she may still want to order sushi, or she may want to explore the famous local Korean BBQ. To present local favorites to consumers while also recognizing their personal preferences, we need to derive the prior knowledge from the new district. The cuisine filter, ranked by posterior probabilities, then represents the balance between local popularity and the consumer’s personal preference.

Algorithm

Results

Evaluation was done through A/B testing, comparing a control group (cuisine filter set at the district level by local operators) against two treatment groups: one using alphabetical ordering (different types of cuisine ordered alphabetically) and one using the personalized cuisine filter. The alphabetical ordering didn’t yield a significant conversion lift, whereas the personalized cuisine filter gave a statistically significant conversion lift and a double-digit relative increase in cuisine filter click-through rate.

Day-part extension

The aforementioned approach is a very fundamental multi-armed bandit approach to powering personalization, but it can be extended to incorporate various contextual information, e.g., time of day. For instance, a consumer will likely order different types of food for breakfast, lunch, and dinner. To make sure the current recommendation framework can adapt to temporal cuisine preferences, we can re-calculate the hyper-parameters (α, β) by aggregating consumers’ purchases by day-part. Thus, at various times of the day, different sets of hyper-parameters will be used in Thompson sampling to generate more personalized cuisine rankings.
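A hedged sketch of that re-aggregation step: the day-part buckets, the (1, 1) smoothing prior, and all names below are assumptions for this illustration, not our actual parameters. Each day-part gets its own (α, β) pair per cuisine, built by counting that day-part’s orders on top of the prior.

```java
import java.util.*;

// Illustrative only: recompute Beta hyper-parameters per day-part by
// aggregating a consumer's order history within each bucket.
enum DayPart { BREAKFAST, LUNCH, DINNER }

class DayPartPriors {
    // orders: list of (day-part, cuisine) pairs from the purchase log.
    // Returns day-part -> cuisine -> [alpha, beta], smoothed with a (1, 1) prior.
    static Map<DayPart, Map<String, double[]>> build(
            List<Map.Entry<DayPart, String>> orders, Set<String> cuisines) {
        Map<DayPart, Map<String, double[]>> params = new EnumMap<>(DayPart.class);
        for (DayPart dp : DayPart.values()) {
            Map<String, double[]> perCuisine = new HashMap<>();
            for (String c : cuisines) perCuisine.put(c, new double[] {1.0, 1.0});
            params.put(dp, perCuisine);
        }
        for (Map.Entry<DayPart, String> order : orders) {
            Map<String, double[]> perCuisine = params.get(order.getKey());
            for (String c : cuisines) {
                double[] ab = perCuisine.get(c);
                if (c.equals(order.getValue())) ab[0] += 1; // alpha: ordered
                else ab[1] += 1;                            // beta: not ordered
            }
        }
        return params;
    }
}
```

At serving time, the Thompson sampling step would simply look up the (α, β) set for the current day-part before drawing, so a consumer’s breakfast ranking can differ from their dinner ranking.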

Conclusion

As a customer-obsessed company, our mission is to provide the best shopping experience to our consumers, and machine learning plays a key role in accomplishing it. The multi-level multi-armed bandit model is an initial attempt to personalize the cuisine filter. Although it has yielded a significant conversion lift, there are definitely many areas to improve. We defined consumers-like-me as consumers from the same district, but better prior knowledge could be derived from more sophisticated consumer segmentation. Geolocation and time of day are also the only context we currently consider; in the future, we may employ contextual bandits to incorporate more information about the consumer and the consumer’s interactions with DoorDash.

If you are passionate about solving challenging problems in this space, we are hiring for the machine learning team. If you are interested in working on other areas at DoorDash, check out our careers page.


Most, if not all, of us want to work for a company that invests in its employees. DoorDash does this in several ways, one of which is the Women in Engineering Leadership Forum (WINE). Both of us were part of the first graduating cohort. The WINE Leadership Forum was a 6-month leadership development program, led by Karen Catlin, that ran from June to December 2019. The tools we gained through WINE have had a great impact on us both personally and professionally, and most importantly, they have allowed us to move our security team goals further.

One of the tangible outputs is an improved definition of our team goal: “Empower you to improve security while accelerating growth.”

While we are both from the security team, we hold different roles and are at different places in our personal lives. However, through the several conversations we had, we realized we have been impacted in very similar ways both at work and outside of it. Here are three topics that really resonated with us both.

Influencing without Authority:

[Esha] I live and breathe the learnings from this session, because one of the most effective ways to be more secure is to improve security awareness. This is easier said than done. There are several situations where we have to step in and gently steer discussions toward more robust solutions. The story of Ernest Shackleton in “Endurance” by Alfred Lansing resonated with me. While Shackleton was in a position of authority, he planted an idea and nurtured it so that the solution seemed obvious to his crew. Had he simply announced the idea and enforced it using his authority, he would have had less support and more trouble implementing it. It is important to take the time to understand what others want in order to propose mutually beneficial solutions, and encouraging candid, honest conversations is often the key to arriving at them. We find a way to act on the DoorDash value of “And, not either/or,” and we engineer a way to do both. While working on security assessments, there were continuous periods of partnership with engineering teams to triage and resolve findings. This, along with the learnings from the next topic, helped me negotiate fixes to improve security.

[Geeta] As a Security Project Manager, part of our daily lives is moving projects, assessments/audits, and programs forward, yet more often than not the team members do not report to us. As Project Managers we often think about what we want to achieve, but we need to think about the team members and teams we are trying to influence and figure out what they want. To truly succeed in meeting milestones and deadlines, we have to go in with an open mindset and genuinely listen to what motivates the people we are trying to influence, so we can get things done in the timeframe we want. Then and only then can we move the agenda and the business forward easily (well, relatively “easily” anyway!). For example, at DoorDash we conduct an annual PCI (Payment Card Industry) audit and perform security penetration tests on a regular cadence. With our dramatic growth, we need to continually ensure the security posture of our systems and services to protect and uphold the trust of our employees, dashers, merchants, and consumers. In our security team, we had a plan for which services and systems we wanted to target for pen testing in our quarterly objectives and throughout the year. However, that did not necessarily mean the product teams were marching to that same beat. We quickly realized that although product teams were motivated to make systems and services more secure, we were not reaching out to them in their planning cycle to get commitment from their managers. In parallel, we took the needed steps of communication and collaboration, via meetings and wiki documentation, to share the expectations, process, and timelines across the product team, the security team member(s), and the third-party security pen testing consultancy to ensure alignment.
By going in with that mindset and sharing with the product teams, we give ourselves a higher probability of successfully influencing, a better outcome overall, and much stronger relationships among all the internal and external stakeholders.

Being Strategic:

[Esha] As a Security Engineer, I am constantly heads-down tackling the next tactical problem. This session forced me to look at things from a different point of view and to understand how my decisions were impacting the business. While I didn’t give it much thought earlier, I now realize it is critical, and it seems like I cannot turn it off. I think about every project proposal and design from a strategic point of view. Just incorporating the word “strategy” into my daily vocabulary has involuntarily made me think about things more strategically.

[Geeta] As a Security Project Manager I often think “actions speak louder than words,” but through this program we also learned that our words do matter and can influence the people around us! By simply changing our vocabulary and incorporating the word “strategy” into our discussions and meetings, we change how people view us and how we come to view ourselves. It was really fun to see how uncomfortably the words “strategy” and “strategic” rolled off our tongues at the beginning of the leadership forum, and how much more comfortable we became toward the end. Like many of my women leadership team members, I initially felt a bit unnatural using the word “strategic.” However, having felt “action item driven” or been called “tactical” for many years, and consequently having taken on many tactical or housekeeping tasks, I decided to break out of my comfort zone. So that same afternoon I explained to a new verticals team how our process helps us be strategic and scale when dealing with requests from DoorDash’s current or potential merchants/partners (i.e., lengthy security questionnaires). After this meeting, they clearly understood how we could best partner together to meet the ever-growing merchant/partner needs around security.

Security Networking (not to be confused with Network security!):

[Esha] As a Security Engineer, I always knew networking was important, but I didn’t really put much effort into building and maintaining my network until I needed it. And that is often too late. Today, I actively reach out to my network to stay current on security happenings, tools, news, and more. I look to my network for solutions to problems they may have already faced, or for a different perspective on my own. Sometimes, I just check in with folks in my network to keep in touch.

[Geeta] For me as a Security Project Manager, networking is important, but I never really put much effort into it until I needed to. So, in trying to launch and build some internal “strategic” initiatives, I found a reason to reach out to my network of security comrades at other companies. It has helped me find a common purpose and a way to start mutually beneficial dialogues.

Also, we have sought out ways to make time for networking, as our male counterparts typically do, by attending security events and reaching out to the people we met there afterwards. Our internal DoorDash network has also been strengthened by this program, not just through the 14 other women we met with monthly, but also by remembering to network and collaborate internally through informal activities like happy hours or lunches and dinners in our kitchen.

Both of us have come to realize that this program has given us the tools and confidence to show up as the best versions of ourselves at DoorDash. We always had the ingredients in us; that’s why we were hired into DoorDash. However, taking ourselves up a notch is something we both gained through this program. The more surprising aspect is the impact that the WINE Leadership Forum has had outside of DoorDash. This program is bigger than the first incredible cohort of women we got to share it with; it is one part of a larger environment we are developing at DoorDash and in our personal lives outside of it. The soil and culture must be nourished and fertile to enable the growth of an ecosystem. From this fertile soil grows the WINE Leadership Forum as the main tree trunk, and out of that solid trunk grow many branches for programs like Women in Eng, the Better Ourselves speaker series, the Women Employee Resource Group, Women’s Leadership Day, Fem Buddies, Women Happy Hour, and Grace Hopper. This program has had a wider-reaching and far longer-lasting impact than just what we experienced in its 6-month span. Everything we discussed is transferable and is something we use daily in our personal lives as wives, mothers, partners, sisters, and friends. When you equip a woman with leadership skills, as the WINE Leadership Forum did for us, you transform her entire ecosystem.


Although we’re living in an era in which many companies advocate for diversity and inclusion, I am often still the only female director in the room in the tech industry. Even more often, I find myself to be the only first-generation immigrant who learned English as a second language.

This is interesting given almost 27% of the population in California is foreign-born immigrants and 45% of people speak a different language at home.

Was I luckier? Was I smarter? Probably not.

I have been pondering what made me different and how I leveraged my unique immigrant experience for the last 20 years in the U.S. I hope that others like me can benefit from my story.

Coming to America

Once upon a time in 1999, I flew from Korea to San Francisco with only a single piece of luggage in my hand. Many people would say this sounds like the American dream.

But living and studying in the U.S. wasn’t all unicorns and rainbows. Overcoming the language barrier was harder than I’d expected. The grammar was all backwards from Korean, and there were so many words, idioms, expressions, and nuances I had to learn from scratch. My brain, ears, and mouth always seemed uncoordinated. I always felt overly self-conscious about my word choices, and I worried whether people even understood me. Looking back, it probably took me 10+ years to feel comfortable ordering a pizza over the phone.

Cultural adjustment was even harder. When I was growing up back in Korea, I was trained to listen and read between the lines. Classes were usually one-way lectures, and I didn’t learn how to form and express an original opinion. Being opinionated or expressing emotions was considered rebellious, pompous or rude.

When I studied Design at the California College of the Arts, I behaved like the good student I was taught to be — obedient and hard-working. However, professors used to put me on the spot for being too quiet in the corner. I felt guilty for not being able to contribute in class, but it was even more terrifying not to know how to behave differently.

There were numerous times when I wanted to leave it all behind and go back to Korea, where I could comfortably blend in with people who had similar backgrounds, spoke the same language, and understood me for who I was. However, I didn’t like the idea of resigning and was determined to stand on my own feet in the U.S.

20 years later, I’m glad I didn’t give up, and I treasure those difficult yet unique experiences that helped me become the person I am today. There are many things I learned to appreciate because I’ve lived a foreigner’s life. They also influenced my leadership style in the following ways:

Communication style

My constant worry that people may not have understood me, and the realization that I could no longer just “read between the lines,” have turned me into an over-communicator and a straight shooter. I would rather be a leader who is an open book than a mysterious black box. I find that proactive communication and directness help get people on the same page faster and prevent second-guessing. It’s important for me to know that my team stays up to date so they can make the best-informed decisions and I don’t become the blocker of critical data.

Strong bias toward action

Because English is a second language, the reality is that I will never be as eloquent or inspiring with my words as native speakers. Realizing this helped me develop into a doer with a strong bias toward action. I don’t feel as engaged sitting in meetings full of endless philosophical discussions that don’t lead to any progress. I’m most satisfied when we get down to actually solving the problem and moving the needle. I also love working with and hiring other doers who can move mountains with me in the most productive and efficient way possible. Because of this trait, I feel I am most useful in situations where execution is key.

Valuing authenticity

Vividly remembering the devastating feeling of “I’m not understood for who I am” taught me the importance of creating an authentic environment where everyone is accepted. This includes individuals’ different backgrounds, characteristics, strengths and weaknesses. People can only excel and perform when they can be who they are. Pretentious, disingenuous culture is toxic, and it makes people focus on things other than the quality of work itself. I strive to build an open, honest, and transparent culture where everyone feels safe to make mistakes, freely talk about their experiences, and learn from each other.

Inclusion

Having been the vulnerable person silently sitting in the corner, it’s important for me to include everyone in the room. I love leading discussions where everyone (introverts and extroverts alike) feels their voice is heard and that they have contributed to something big and meaningful. This doesn’t mean including everyone in every meeting and asking everyone for their opinion. It means that meetings need to be more thoughtfully and intentionally “designed”: everything from room and location to time, duration, agenda, and attendees. Even small things like sending the agenda in advance and recapping the meeting outcomes afterwards help every meeting become more inclusive and participatory.

I always picture what each meeting will look like in my head before scheduling it. What are the potential conversation dynamics and outcomes? What should the tone of the discussion be? Whose voice will be loudest in the room and who might feel more isolated? Some may think this is overly calculated, but I believe this is the minimal amount of preparation you need to do as an empathetic leader.

Emotional Intelligence

In Korean, there’s a concept called “noon-chi.” It’s almost impossible to find the exact equivalent in English, but the best possible translation is probably “tact.” It’s the skill of reading between the lines and acting sensibly based on a tactful interpretation of the situation. This is the virtue I was taught in Korea. It’s the sixth sense that enables you to read people’s emotional states and relate to them. When noon-chi is combined with empathy, authenticity, and self-awareness, you can lead teams with high emotional intelligence.

Emotional intelligence also isn’t a mythical qualification for leaders and managers. It’s something we should be looking for in every hire because they are our future leadership bench. In interviews, I try to keenly observe the following things about each candidate: Do they interact with team members respectfully throughout the process? Are they self-aware? Do they want what we’re looking for? Can they put themselves in other people’s shoes? Even small things like turning their laptop screen toward others (even if it means they have to read their presentations upside down) go a long way.

Closing

There are a lot of first-generation immigrants in the U.S., yet it was hard to find an article about everyday leadership by immigrant leaders.

Hopefully this article will shed light on the subject from a different angle and inspire others who are going through similar experiences. I’d love to stay connected in the immigrant design community and learn from everyone’s stories. Ping me if you are interested in having a conversation!

===

Special thanks to Tae Kim, our amazing Content Strategy lead at DoorDash for lending me a hand on this article.

DoorDash has been on a hiring binge since the company was founded, often doubling or tripling in size each year. Over the last 2-3 years, this was particularly true for our Android teams as the platform has become more critical to the company. We’ve been aggressively growing our Android teams and will continue to do so.

One of the most commonly asked questions from candidates is what our “Tech Stack” on Android looks like. It’s a bit different for each candidate but almost always comes down to the following questions:

  • Are you using Kotlin or Java?
  • What 3rd party libraries do you use? Recently this usually boils down to interest in Retrofit, Dagger, and Rx.
  • What architectural patterns are used in the app?
  • Do the apps share code?
  • How do you evaluate what tools to use?

We currently have three Android apps available to the public via the Google Play store.

  • DoorDash – Food Delivery – Our consumer app for ordering food.
  • Dasher – The app used by dashers to facilitate delivery.
  • Order Manager – The app used by many of our Merchant restaurants to track orders they receive from DoorDash.

In this post we’ll be answering the above questions with regard to our Android Dasher app.

Before we begin though, we want to remind you that we are currently hiring across the board for our teams, so if you find any of this interesting and want to work in a dynamic, high-growth environment please apply to one of our open positions here.

Some background:

As of the writing of this blog, DoorDash has been in business for over 6 years and maintaining a growth rate of at least 2-3x/year. Like most startups, we had a scrappy beginning and spent most of our time focused on just getting apps running and shipped with minimum features needed to compete. We developed and shipped the apps as if the business depended on it because usually it did. Over time that created a lot of tech debt.

Starting ~18 months ago we found ourselves at a happy balance point where we had the scale to start re-architecting the app and also the business motivation to support the work. The information that follows is the result of that work.

Some guidelines we use for how we build things:

  • KISS (Keep It Super Simple) – New people joining the team can understand it quickly. You should be able to just pull and compile without any fuss.
  • Don’t reinvent the wheel – We shouldn’t waste time solving problems that already have solutions
  • Be trend-less and expect that we have to replace everything – It should be easy to incorporate new tools/libraries. The approach should not be bound or dependent on a particular technology, tool, or trend.

Do the apps share code?

Currently the apps do share quite a bit of common, platform level code.  We maintain a separate repository of common components that are shared between the apps to provide functionality that we want to be identical like feature flags, authentication/login, mapping controls, etc.  To help keep things focused we’ve intentionally excluded any further discussions about common code from this post and put the emphasis on just the app. We’ll have separate blog posts where we discuss our approach to common functionality and code.

Do you use Java or Kotlin?

Up until recently, we had a strong and growing preference for Kotlin, but we didn’t force its use and allowed Java if a team member felt strongly about it. Large portions of the app are still written in each language, so over time every developer on our team ends up fluent in both. As of the writing of this blog post, the split is about 67/33 according to GitHub.

Now that Google has made it generally clear that they intend to standardize more on Kotlin we’ve started enforcing its use. 100% of new code is written in Kotlin and we strongly encourage team members to refactor/rewrite older Java classes in Kotlin wherever possible.

What tools/libraries do you use?

We use a lot of open source tools in our Android apps. As noted above, we like the structure of the app to be trendless and aim to never bind the app to any given tool or library in a way that compromises our ability to change it.

We assume that we’re going to have to replace any given technology we’re using in the near future and build with that assumption in mind. Sometimes this means that it takes a little more time to make effective use of something new, but it tends to come with the benefit of allowing us to quickly try out, adopt, or swap out components with relative ease.

Developers on the team are strongly encouraged to propose and test new tools, approaches, and techniques. We don’t always end up using them but experimentation is valued and encouraged.

Here are most of the current high-profile tools we’re using that candidates usually ask about:

  • Github – For source control and code review
  • Dagger – For dependency injection across the app
  • Rx – For management of asynchronous interactions between our UI and Domain layers, and between our Domain and Data layers.
  • Room – For local database caching
  • Retrofit – For service and network interactions
  • Bitrise – For build generation and CI
  • Fabric/Firebase – For crash analysis

Architectural Patterns

At a high level, the app is architected using a layered, N-Tier pattern that adheres to basic CLEAN principles. Namely:

  • Clear separation of concerns
  • Immutability between layers
  • One-way dependency

In addition to the above, we enforce an asynchronous publish/subscribe pattern of communication between layers.

Here’s the 10,000-foot view of the app architecture that we teach to every new member of the team:


At a high level the app is built on 3 core layers:

  1. UI – Responsible for presentation of features and managing user interaction.
  2. Domain – Exposes core functionality/operations on data and responsible for all business decisions/calculations.
  3. Data – Source of truth in the app. Provides service interactions, caching of data, and notification of data changes.

Each layer is isolated from the others and designed to work on its own.
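As a rough, framework-free illustration of the one-way publish/subscribe flow between these layers (in the app itself this role is played by Rx and LiveData; the Channel class and every name below are invented for this sketch, and the real interactions are asynchronous rather than synchronous as here):

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal stand-in for an Rx/LiveData stream: subscribers register callbacks,
// publishers push values. Purely illustrative.
class Channel<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();
    void subscribe(Consumer<T> s) { subscribers.add(s); }
    void publish(T value) { for (Consumer<T> s : subscribers) s.accept(value); }
}

// Data layer: source of truth. It publishes changes and knows nothing about
// the Domain or UI layers (one-way dependency).
class OrderRepository {
    final Channel<Integer> orderCount = new Channel<>();
    void saveOrder(int newCount) { orderCount.publish(newCount); }
}

// Domain layer: subscribes to the Data layer, applies a business rule, and
// republishes an immutable result for the UI layer to consume.
class OrderManager {
    final Channel<String> status = new Channel<>();
    OrderManager(OrderRepository repo) {
        repo.orderCount.subscribe(n -> status.publish(n > 10 ? "BUSY" : "NORMAL"));
    }
}
```

The point of the sketch is the direction of the arrows: each layer only depends on the one below it and learns about changes by subscribing, never by being called directly from below.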

What about MVC, MVP, MVVM, and MVI?

This gets asked a lot by candidates. We actually use these patterns throughout the apps quite a bit, but we generally view them as UI-level patterns. The “M” (aka Model) noted in all of these patterns is by far the most complex portion of each of our apps and is represented in the diagram above by the Domain & Data layers.

This StackOverflow post provides more insight into how we view the “Model”.

In the Dasher app we use a combination of MVP and MVVM in our UI layer. A lot of legacy UI uses MVP (~30%), but at this point the majority of the UI (~70%) is built using MVVM.

Our UI layer and MVVM:

The ViewModel and LiveData architectural components from Google became publicly available right around the same time that we started overhauling the architecture of the app and the benefits offered by them were very difficult to argue against. As such, as soon as they became available, we started using them for all new UI work.

Our Dasher app has to provide UI and features to deal with the myriad issues that occur while delivering hot, perishable food (and there are a significant number of them). As such, the number and variety of screens and states in the app is high. Like any other complicated app, we had a lot of problems with lifecycle issues and communication between components. We decided to go with the MVVM approach for a few key reasons:

  • It solves most lifecycle-related pains
  • It aligns well with our asynchronous publish/subscribe patterns.
  • It’s supported by Google for Android and will see continued investment.

Here’s a more detailed diagram of our UI layer.


It’s a pretty simple approach. We consolidate as much UI logic as possible into ViewModels and use LiveData exclusively to communicate to views. So far it’s been working really well.

Some additional UI rules and guidelines we use:

  • Don’t force 1-to-1 relationships between ViewModels and Views. We want the ViewModels to represent functionality and be reusable where possible.
  • Keep Views (Activities/Fragments) as simple as possible with minimal logic.
  • Avoid using complex business types (e.g., Delivery, Dash, User) in LiveData. Instead, use either simple types or a ViewState pattern for updates.
  • Favor fewer activities that host multiple Fragments. Target 1 activity per high-level feature.
  • Make use of common base fragments and activities that enforce consistent behavior/views/controls.
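To illustrate the ViewState guideline above, here is a hedged, framework-free sketch. The Observable class stands in for LiveData, and DashViewState, its fields, and the ViewModel method are invented for this example rather than taken from our codebase:

```java
import java.util.*;
import java.util.function.Consumer;

// Immutable, render-ready state: the view gets simple display values, never
// the complex business types themselves.
final class DashViewState {
    final String title;
    final boolean showPauseButton;
    DashViewState(String title, boolean showPauseButton) {
        this.title = title;
        this.showPauseButton = showPauseButton;
    }
}

// LiveData stand-in: holds the latest value and replays it to new observers.
class Observable<T> {
    private T value;
    private final List<Consumer<T>> observers = new ArrayList<>();
    void observe(Consumer<T> o) {
        observers.add(o);
        if (value != null) o.accept(value);
    }
    void setValue(T v) { value = v; for (Consumer<T> o : observers) o.accept(v); }
    T getValue() { return value; }
}

// The ViewModel consolidates UI logic: it maps domain updates into a
// ViewState and exposes it through the observable only.
class DashViewModel {
    final Observable<DashViewState> state = new Observable<>();
    void onDashUpdated(boolean active, int deliveriesSoFar) {
        String title = active
                ? "Dashing (" + deliveriesSoFar + " deliveries)"
                : "Off duty";
        state.setValue(new DashViewState(title, active));
    }
}
```

Because the view only ever renders a DashViewState, it stays trivially simple, and the ViewModel can be reused by any view that needs the same functionality, per the non-1-to-1 rule above.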

The Domain layer:

The domain layer in the app is responsible for exposing business-level functionality and logic, and for managing interaction with the data layer. Most of its functionality is implemented as a set of singleton objects that provide a related set of data, operations, and calculations to the UI.

These classes don’t maintain data or state, but rather they interact with the data layer, and modify and make calculations based on information from it. In that way, we tend to think of them more as stateless abstractions for encapsulating our business rules. We call them “Managers” for historical reasons that we won’t get into here, and all interactions with them are asynchronous or fire-and-forget.

As an example, we have a “DashManager” that contains all the logic for interacting with Dashes (the scheduled periods during which dashers make deliveries).

As noted above, our domain layer looks like the following:

When we show this during an interview, people usually ask:

  • What’s Reflex?
  • What’s the Facade for?

What’s Reflex?

As our domain layer became more formal we needed a way to channel specific domain-level updates between the Manager classes without creating direct dependencies.

Example: The manager class that controls dashes would want to know when an auth token expires, and the manager for deliveries would need to know when a dasher’s dash starts, ends, or gets paused. The solution was what we ended up calling “Reflex”. It acts as a notification conduit and light state management mechanism for asynchronous domain updates.
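A rough sketch of the idea: the event names and the tiny API below are assumptions made for illustration, not our actual Reflex implementation (which is also asynchronous, unlike this synchronous toy).

```java
import java.util.*;

// Hypothetical domain events that managers may care about.
enum DomainEvent { AUTH_EXPIRED, DASH_STARTED, DASH_ENDED, DASH_PAUSED }

// A typed notification conduit: managers register interest in events and
// emit them, without ever referencing each other directly.
class Reflex {
    private final Map<DomainEvent, List<Runnable>> listeners =
            new EnumMap<>(DomainEvent.class);
    void on(DomainEvent event, Runnable action) {
        listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(action);
    }
    void emit(DomainEvent event) {
        for (Runnable r : listeners.getOrDefault(event, Collections.emptyList())) {
            r.run();
        }
    }
}
```

With this in place, the delivery manager can react to DASH_STARTED without importing the dash manager, which is exactly the decoupling the paragraph above describes.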

What’s the Facade for?

We’re just making use of a simple facade pattern here. We have a lot of scenarios where we need to be able to override/restructure information that comes from the domain layer and we use “Facade” classes to do so.

Example: When we want to manually simulate a UI experiment locally (there’s a backend system that controls it globally) we use the Facade pattern to allow us to enable/disable it on the local device during testing/development.
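As a hedged sketch of that pattern (ExperimentFacade and its method names are invented for this example; the real backend-controlled system is only described, not shown):

```java
import java.util.*;

// Facade over experiment flags: callers ask one object whether an experiment
// is on, and the facade decides whether the backend value or a local
// developer override answers the question.
class ExperimentFacade {
    private final Map<String, Boolean> backendFlags;   // from the domain layer
    private final Map<String, Boolean> localOverrides = new HashMap<>();

    ExperimentFacade(Map<String, Boolean> backendFlags) {
        this.backendFlags = backendFlags;
    }

    // Used during testing/development to simulate an experiment locally.
    void overrideLocally(String experiment, boolean enabled) {
        localOverrides.put(experiment, enabled);
    }

    boolean isEnabled(String experiment) {
        // Local override wins; otherwise fall back to the backend's value.
        if (localOverrides.containsKey(experiment)) {
            return localOverrides.get(experiment);
        }
        return backendFlags.getOrDefault(experiment, false);
    }
}
```

The calling code never changes between production and local testing; only the facade's internal decision does, which is what makes the override safe to ship.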

Data Layer

We view our data layer as the “source of truth” in the app. It’s responsible for all interactions with our services and for local storage of any information we want to cache. As a general rule, we cache and store locally all information we get from interactions between the app, user, and service. Depending on the type of information and how long we need to persist it, that “caching” may be on the local disk, in memory, in shared preferences, or in our local database. At a high level our data layer is pretty straightforward. The majority of it is built using the Repository pattern Google recommends, as follows:


A few notes about how we use our data layer:

  • We use Room for database storage. So far we’re happy with it.
  • Data that gets passed up is abstracted. The domain layer objects making the call should never have to differentiate between Room db objects, retrofit responses, etc.
  • The data layer is only concerned with how to get, store, and update information, not interpreting it.
  • All interactions with the repository are asynchronous by design.
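Putting those rules together, here is a minimal illustrative sketch of a cache-first repository. It is synchronous and in-memory purely for brevity: plain maps stand in for Room and Retrofit, and every name is an assumption, not our actual code.

```java
import java.util.*;

// Callers receive plain values and never learn whether a read was served
// from the cache or the "network": the data layer only gets, stores, and
// updates information; it never interprets it.
class DeliveryRepository {
    private final Map<String, String> cache = new HashMap<>(); // Room stand-in
    private final Map<String, String> remote;                  // Retrofit stand-in
    int remoteFetches = 0; // exposed for illustration only

    DeliveryRepository(Map<String, String> remote) {
        this.remote = remote;
    }

    // Cache-first read; fetch from "remote" and cache on a miss.
    // (All of this is asynchronous in the real app.)
    String getDeliveryStatus(String deliveryId) {
        String cached = cache.get(deliveryId);
        if (cached != null) return cached;
        remoteFetches++;
        String fetched = remote.getOrDefault(deliveryId, "UNKNOWN");
        cache.put(deliveryId, fetched);
        return fetched;
    }

    // Writes land in the cache so it remains the source of truth for readers.
    void updateStatus(String deliveryId, String status) {
        cache.put(deliveryId, status);
    }
}
```

Because the domain layer only sees abstracted values, swapping Room or Retrofit for something else would change this class's internals without touching any caller, which is the "be trend-less" guideline in practice.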

Looking Forward

One clear lesson from the last 18 months is that while our business is growing this fast, our architectural plans are going to change constantly as well. If you look at what we had planned initially vs. what we’ve actually built, you’d find quite a bit of difference.

It’s a company value to constantly observe the reality of what’s going on and make changes to accommodate it. That’s especially true with our architectural plans. If we find something isn’t working well, we change it. If new challenges arise, we accommodate them, and we experiment constantly to help figure out what’s going well or what isn’t.

This post only describes what things look like right now. That’s going to change dramatically over the next 3-12 months and will be shaped by every member of the team. If that, or anything else we’ve discussed in this post, sounds interesting to you, we’re hiring! Reach out and tell us about yourself.

Laura: Since joining DoorDash about a year ago, I have been involved in a number of initiatives related to empowering women in tech, including serving on the leadership committee for Women@ and on the board of our female new-employee buddy program, which supports new women engineers during onboarding by pairing them with another woman engineer in the company. When I learned I had been selected to attend GHC 2019 and represent DoorDash, I was thrilled by the opportunity to meet and learn from amazing and inspiring women in tech. I was also super excited to share my story and all the new and ongoing initiatives we have to support women at DoorDash.

It was my second time attending the annual Grace Hopper Celebration, and even though it had only been a couple of years since I last attended, the celebration has grown tremendously. With an estimated 30,000 attendees, the Orange County Convention Center in Orlando was flowing with tech talent from across academia and industry. I was thoroughly impressed by the number of tech talks, presentations, panels, and workshops available that catered to multiple career stages.

Nikita: The Grace Hopper Celebration has always been at the top of my professional bucket list, and the day I learned that I’d been selected to attend, I was exulting with joy. As a first-timer at the Grace Hopper Conference, it was a memorable and fruitful experience with a lot of learnings to share with all my friends at DoorDash!

Attending the Grace Hopper Conference as a junior engineer helped me gain insights into the magnificent world of technology. The experience gave me an opportunity to learn more about my leadership potential, and I came away equipped with the tools to lead even without a leadership title. Lastly, the experience gave me the chance to spend time with colleagues I do not get to partner with regularly, socializing at events, talking with aspiring software engineers, and meeting people of all genders who were eager to join the DoorDash family.

The Booth. DoorDash’s booth at the expo hall and career fair was the largest in our history, and even though it looked roomy, the space was overflowing with top tech talent for all three days.

It was such a delight to meet so many young and beautiful minds, full of energy and passion for technology. We had the unique opportunity to share more about DoorDash’s culture and values and our passion for diversity and inclusion at all levels of the industry. Our very own VP of Engineering, Ryan Sokol, spent a lot of his time at the booth talking to many candidates who were interested in interviewing with us. One team, one fight.

The Interviews. While at the conference, we also had the chance to interview some amazing prospective candidates for summer internships and full-time software engineering roles. As part of our core value of getting 1% better every day, we made sure our hiring bar was equitable by having shadow interviewers and debriefs at the end of each day. We were very humbled by the enthusiasm, talent, and ambition of every one of our candidates, and we are so happy to share that we were able to make two hires from this process.

ATC Fun. After the conference, the DoorDash troupe celebrated Laura’s birthday with a well-deserved team dinner accompanied by delicious cake!

Post-conference, we also made it a point each day to spend time together over dinner!

The sessions (Laura). While at GHC, I was very fortunate to attend a number of sessions on being a woman leader in tech, but the one that resonated the most was an interactive workshop by Jo Miller: “5 Ways to Lead When You’re Not in Charge”. It’s important to acknowledge that different sets of skills are needed depending on your career stage, especially when making the leap from individual contributor to lead. Jo focused on the following shifts in mindset to succeed as a leader:

  1. From being a tactician to being a strategist: focus on the big picture.
  2. From doing to delegating: don’t try to do it all. Learn to delegate and assign tasks to elevate the people around you as well.
  3. From optimizer to transformer: look for groundbreaking changes.
  4. From order taker to rule breaker: some rules were made for breaking; be smart about risk-taking.
  5. From me to we: seek people whose skills are the opposite of yours and learn from diversity.

The sessions (Nikita). A lot of the panels I attended were about being an owner. A few of my favorite phrases from those panels:

  • Think like a leader and lead like an owner
  • As humans, we are who we are and who we are not yet (yet to explore)
  • Leaders should build ensembles anytime they get an opportunity to do so.
  • Hear offers. Build with them.

While attending these sessions, I also came across a number of best practices suggested by various women leaders in tech that our teams could put to use, especially if you’re leading.

  • Lead by listening.
  • Slow down and look.
  • Begin staff meetings/stand-ups with shoutouts and applause.
  • Put the relationship first – I’m leading, we’re a team. Not about me, but us.

Lastly, I would love to talk about a particular session hosted by Nadia Rivero and Maureen Kelly: ‘Improvise! The Art of Leadership’. This one was my favorite because the two speakers made us believe in what they were talking about through engaging activities, rather than just stating facts. In one activity we alternated between leading and following: the leader would make gestures and the follower would mirror them. As we switched between being the leader and the follower whenever the speakers prompted us, at one point we realized it was difficult to tell who was leading and who was following, because by then both of us had aligned and adapted to working with each other!

The other activity was on “Hear offers. Build with them.”

We held two sets of conversations: in the first, every reply had to begin with “Yes, but…”, and in the second, every reply began with “Yes, and…”. The difference in the way the conversations proceeded was so obvious! Conversations with “Yes, but…” always sounded pessimistic. On the other hand, conversations with “Yes, and…” sounded very promising!

Grace Hopper Celebration 2019 was an amazing conference that far exceeded our expectations! We were so humbled to represent DoorDash at the biggest gathering of women technologists in the world. If you would like to learn more about our culture, values and available opportunities, please visit www.doordash.com/careers

Since joining DoorDash about a year and a half ago, I have been able to work on a number of teams as an iOS engineer such as Dasher, Drive, Geo-Intelligence, and Internationalization. I’ve built core flows for our delivery process, merchant specific features such as Catering Setup and Parking Stalls, and a number of required features for our launch in Australia. Across all of these teams and projects the product development cycle has been very similar.

Product development at DoorDash is fast paced. Mobile engineers get to work on different projects simultaneously and ship features each release. We work in sprints, which generally last two weeks, and within each of these sprints we scope future projects, work on current projects, and continue on with personal initiatives. I generally spend about 15% of my time collaborating with Design and Product on upcoming projects and estimating how long those projects will take to develop. What’s unique at DoorDash is that, from ideation to execution, I have the ability to shape products that our team builds, not just how they are technically implemented. About 70% of my time is spent actually developing these products. This consists of creating technical specifications, gathering feedback, developing, testing, and finally seeing the product through the release process to production. I spend the remaining 15% of time between building our talent pipeline by sourcing and interviewing, and personal initiatives such as learning, development, and growth. This year, I sharpened my SQL skills by taking classes offered by our Data Science team and helped plan our company-wide hackathons.

My excitement about work comes from owning a feature from start to finish while receiving feedback directly from our users. Our culture encourages engineers to spend time with Dashers, Merchants, and Consumers to really understand the challenges they face and then surface those challenges during our design reviews.

The impact doesn’t stop there! In addition to our iOS Dasher app, we also have an iOS Consumer app, both of which are written in Swift with a rapidly shrinking number of legacy Objective-C files. We also have three Android apps, the Consumer app, the Dasher app, and the Merchant Tablet app, all of which are written in Kotlin.

I spend the majority of my time on product work, but there are additional opportunities for mobile development if I decide I want to try something else. First, we have iOS and Android Platform teams. These teams help standardize our approach to software releases, stability, monitoring, architecture, and feature implementation for all of our mobile apps. Additionally, for those more interested in working on libraries and design, we have opportunities on our Design team. Design Technologists for iOS and Android develop the shared UI libraries and abstractions that all of our mobile developers use. They also build tools, processes, and prototypes that enable design and engineering to work efficiently and consistently to build high-quality products.

In addition to learning about how I spend my time as a mobile product engineer, I hope I’ve added perspective on Mobile Engineering opportunities at DoorDash. If you’re interested, our team is certainly excited to review your application. I recommend thinking about why you’re applying to DoorDash and what you’re really excited about. We evaluate candidates holistically, both on technical capability and on values such as teamwork, ownership, and execution. The team here comprises the most talented engineers I’ve had a chance to work with. Everyone is really passionate about the work we do here, and we look forward to meeting you!

Here at DoorDash, I work as a mobile engineer and I have been interviewing candidates for about a year now. I often get asked why I joined DoorDash, so I thought I would expand on that in a blog post.

I joined DoorDash on July 23rd, 2018, and I did so because we’re solving the logistics problems I saw my parents struggle with as small business owners. Growing up, I spent a lot of time at my parents’ auto parts business in Brooklyn, New York. In the early days it was just my parents and me, so I got a lot of exposure to the business. My mom managed all of the sales, often taking 100+ calls each day. We spent a lot of time predicting what and how much of each car part to order, manually checking the quality of each item as it came in and just before it went out, and working through issues when items were returned. We only supplied locally within New York City, and traffic in the area was highly unpredictable. Each day multiple routes were considered and memorized, and then my father was on his way. I’m still amazed at the number of touch points required to completely fulfill an order, from the customer placing it to receiving the goods.

Software was expensive back then, and all of this was done manually. I know firsthand how difficult it was to scale our small business, so I came to DoorDash to help hundreds of thousands of other small businesses solve similar problems. Our Marketplace app, where I worked on internationalization, provides a virtual storefront for small businesses to expand their visibility and scale sales. Our Dasher app, where I spend the majority of my time, provides delivery opportunities with recommended routes and detailed instructions for our Dashers about both pick-up and drop-off locations, because accuracy is incredibly important for efficiency. Together these products help offload some of the manual effort of scaling sales and, by using DoorDash Dashers, reduce the overhead of each business having its own drivers.

Opportunities at DoorDash are abundant and I really enjoy the level of impact of the features I work on. If you’re interested in applying, you can find open opportunities on our website: https://www.doordash.com/careers/.